Functional Lodash

Did you know that Lodash - the popular JavaScript utility library - has a functional flavor?

The most exciting part to me is that functions of Lodash FP do not mutate the data they receive.

As a Clojure developer, I am excited because I am addicted to data immutability.


This article is an excerpt from my upcoming book about Data Oriented Programming. The book will be published by Manning, once it is completed (hopefully in 2021).

More excerpts are available on my blog.



Lodash FP differs from standard Lodash on 4 main points:

  1. The functions receive the data to be manipulated as their last argument
  2. The functions do not mutate the data they manipulate
  3. The functions are auto-curried
  4. The functions receive the iteratee as their first argument


In Lodash FP, the functions do not mutate the data they manipulate.

For instance, the set() function differs from its standard Lodash counterpart on two points:

  1. It receives the object as its last argument
  2. It returns a new version of the object instead of modifying it

In the following code snippet, you see that a is not modified by fp.set():

var a = {foo: 1};
var b = fp.set("foo", 2, a);

However in standard Lodash, a is modified by _.set():

var a = {foo: 1};
var b = _.set(a, "foo", 2);

Be aware that data immutability is implemented via deep cloning. When the objects are big, the performance hit of deep cloning might be an issue. You might prefer to use a library that implements persistent data structures to avoid this performance hit. But then you’d have to pay the price of converting persistent data structures to native objects back and forth (like I illustrated in this article).
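As an illustration of the copy-instead-of-mutate semantics, here is a minimal plain-JavaScript sketch (my own, not Lodash's actual implementation) of an immutable, single-level set. Lodash FP clones more deeply than this; the sketch only shows the shape of the idea:

```javascript
// A minimal sketch (not Lodash's implementation) of an immutable
// single-key `set`: copy the object, then write the new value.
function immutableSet(key, value, obj) {
  const copy = { ...obj }; // shallow copy; Lodash FP clones more deeply
  copy[key] = value;
  return copy;
}

const a = { foo: 1 };
const b = immutableSet("foo", 2, a);

console.log(a.foo); // still 1: a was not mutated
console.log(b.foo); // 2
```

The caller keeps both versions: `a` is untouched and `b` carries the update.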

Auto currying

In Lodash FP, the functions that receive an iteratee (a piece of code that expresses the data manipulation) receive the iteratee as first argument and are curried.

Functions like map() receive a single argument (the iteratee) and return a function to be called on a collection:

fp.map(x => x + 1)([1, 2, 3])

Functions like reduce() receive two arguments (the iteratee and the accumulator) and return a function to be called on a collection:

fp.reduce((a,b) => a + b, 0)([1, 2, 3])
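The auto-currying behavior itself can be sketched in a few lines of plain JavaScript. The `curry` helper below is hypothetical (not Lodash's implementation); it just shows how a curried function accepts its arguments either all at once or one at a time:

```javascript
// A minimal sketch of auto-currying: a curried function can be called
// with all arguments at once, or partially, returning a new function
// until all declared arguments have been supplied.
function curry(fn) {
  return function curried(...args) {
    return args.length >= fn.length
      ? fn(...args)
      : (...more) => curried(...args, ...more);
  };
}

const map = curry((iteratee, coll) => coll.map(iteratee));

// Both call styles produce the same result:
map(x => x + 1)([1, 2, 3]); // => [2, 3, 4]
map(x => x + 1, [1, 2, 3]); // => [2, 3, 4]
```

Passing only the iteratee yields a reusable function, which is what makes the data-last, iteratee-first style compose so well.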


The installation instructions and the semantics of the functional flavor are described in the Lodash FP Guide.


clojure.core.async: An unfulfilled promise


  • It's my first ever technical blog post, and my first ever in English
  • I am not a native English speaker, so read this with a Hebrew accent in your mind, and do excuse my blunt style.
  • The subject is extremely outdated, but it has been sitting in my head for a (very) long time, and only now, as I start writing, am I spitting it out.

I'll start by stating the obvious:

Rich Hickey is one of the most influential software development philosophers of our time

If not on the world, then without a doubt on me - just ask anyone I have worked with or talked tech to since around 2009 how much I've been babbling about it. I'm a true follower.

Nothing has affected the way I write code more than the Clojure language. Each and every talk of Rich Hickey's (and I try to see all of them, numerous times - I advise you to do so as well) has changed the way I see the topic at hand, and software as a whole.

Why did I choose the title "Unfulfilled promise"? First of all: the obvious pun. But mainly, the core.async library (and Clojure as a whole) is an amazing idea that remains, in my world, unfulfilled: I only use it for toying around. Continue reading to understand (some of) why.

Ever since I first met Clojure (probably circa 2009), I have toyed around with it often (I even wrote part of my geometry thesis in Clojure, but more about that in a future post), and yet:

   I never choose to use Clojure, not even in my toy projects.

This basically makes me a professional software engineer and a Clojure hobbyist. The only way to read this post is to understand that I am truly not a professional Clojure developer. In some manner, this post may shed some light on why some hobbyists do not turn professional in the Clojure stack. I do not think it is important for a language to be considered industry-leading, and it is great for a developer to have educational stacks, as the Clojure stack is for me. I know that it is very common in the industry to use Clojure, but for me it remains educational only. This post details one aspect of why that is so.

That aspect is ease of debugging on the common machines it targets (I'll focus on the browser here).

I know for a fact that Clojure is very common in the industry. This blog post is not about whether Clojure is a good fit for others - I am confident it is. It is about a very specific aspect I find missing in the stack for me to choose it, if I had to choose: the ease of debugging when you encounter the bad path of things.

This blog post will try and explore the above via a specific talk Hickey gave a few years ago on a specific, ingenious library/framework, namely core.async.

I strongly suggest not to continue reading beyond this point without seeing the talk as specific references will start now.

Is core.async instead of, say, Observables, in fact easier in the bad path when a problem needs to be debugged, given the layers of transpiling and macro expansion one needs to dig through?

I'll start by stating a few amazing properties of core.async:

  • It allows state machines to be written in a very natural way, even when they stumble upon the need to blockingly wait for other state machines, which is infinitely amazing (see the talk, please do)
  • It allows apparently blocking code to be written in languages that do not allow threading, such as JavaScript
  • CSP is amazing, and core.async offers CSP in the browser, and CSP without tons of threads on the JVM.

The flip side is in the ingenious implementation. The exact property that makes the above possible is exactly what makes it so hard to comprehend: its magic.

All of the above is achieved with some LISPy magic called hygienic macros. Macros are basically tools that rewrite your code in the pre-processing phase, just before it is compiled, using transformations that you (or the framework developers) wrote. In other words, the macro mechanism is "pluggable" and not baked into the language. Macros allow a language to be extensible, which is exactly what Clojure is. However, there is a flip side, which mainly has to do with widening the gap between what runs on the machine (JVM / JavaScript / CLI / Python) and what you originally wrote. Therefore, all the core.async "magic", backed by transformations you would prefer not to be aware of, is exactly what stands between your original code and what you are now debugging. This added complexity, in my view, simply moves the complexity Rich speaks about in the talk into opaque layers which you now have to understand in order to debug already very complex code. I'll re-iterate it in two bullets:

  • Rich offers a more natural way of writing concurrency. Well: there is nothing natural about running LISP on procedural machines. This comes from someone who loves LISP dearly and wishes it were run on LISP-aware machines with proper debug and trace tools, etc. (Racket, for example).
  • In order to achieve the above magic, there are many "layers" of code rewriting (complex macro expansion), plus transpiling in the JS environment or compiling on the JVM. What you get is amazing but very hard to debug, since it is extremely far from what is eventually run. In my humble experience, source maps do not really solve this in the browser environment. To be honest, I never really tried it on the JVM, but I gather it is more complex, as it compiles to bytecode. I cannot emphasize enough how important the ease of debugging is for me. Moreover, what Rich offers in the talk is exactly that - a library easier to debug than "callback hell" and "place oriented programming".

One might argue: "Hey, any compiling does exactly the above. Moreover, any use of framework". For that I answer: 

Compilers: It is obvious, to me at least, that interpreted languages are easier to debug. Complex macros create a very hard challenge for debugging.

As for frameworks: generally the same answer. Since using a framework is an obvious must, I argue that non-macro-based frameworks are much, much easier to comprehend and debug, as they do not change the code under your feet, and sending traces (logs) is generally achieved much more naturally.

So where do we go from here if we do want apparently blocking code in the browser?

I'll use a famous idiom Rich Hickey also uses often:

There is no silver bullet

As I write this, I'm trying to bake (or Google) a way of writing seemingly blocking JS code that involves less transpiling and preprocessing.

It cannot, naturally, promise the ease of use of core.async, but it is easier to debug with the browser debugger. I will fully bake the idea and write a blog post shortly.
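For what it's worth, one shape such an approach could take is plain JavaScript generators: the yield points stay visible in the source you debug, with no macro expansion in between. A toy sketch (my own illustration, not necessarily the author's planned approach; all names here are made up):

```javascript
// Toy sketch: a generator "process" that looks blocking at each `yield`,
// driven by a tiny runner that feeds it values. This is not core.async,
// just the same shape of idea without code rewriting.
function run(gen, values) {
  const it = gen();
  let result = it.next();          // run until the first yield
  for (const v of values) {
    if (result.done) break;
    result = it.next(v);           // "unblock" the process with a value
  }
  return result.value;
}

function* adder() {
  const a = yield;                 // reads like a blocking take
  const b = yield;
  return a + b;
}

run(adder, [1, 2]); // => 3
```

The stepping through `adder` in a browser debugger happens in the code as written, which is the transparency property discussed above.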

With a little abuse of notation, the search can loosely be said to be inspired by Hickey's monumental talk Simple Made Easy.


Reloading Woes

Update: seems Stuart Sierra’s blog post has dropped off the internet. I’ve updated the link to refer to the Wayback Machine’s version instead.

Setting the Stage

When doing client work I put a lot of emphasis on tooling and workflow. By coaching people on their workflow, and by making sure the tooling is there to support it, a team can become many times more effective and productive.

An important part of that is having a good story for code reloading. Real world projects tend to have many dependencies and a large amount of code, making them slow to boot up, so we want to avoid having to restart the process.

Restarting the process is done when things get into a bad state. This can be application state (where did my database connection go?), or Clojure state (I’m redefining this middleware function, but the webserver isn’t picking it up.)

tools.namespace and component/integrant

We can avoid the cataclysm of a restart if we can find some other way to get to a clean slate. To refresh Clojure’s view of the world, there’s clojure.tools.namespace, which is able to unload and then reload namespaces.

To recreate the app state from scratch there’s the “reloaded workflow” as first described by Stuart Sierra, who created Component for this reason.

Combining these two allows you to tear down the app state, reload all your Clojure code, then recreate the app state, this time based on the freshly loaded code. Combining them like this is important because when you use tools.namespace to reload your code, it effectively creates completely new and separate versions of functions, records, multimethods, vars, even namespace objects, and any state that is left around from before the reload might still be referencing the old ones, making matters even worse than before.

So you need to combine the two, which (naively) would look something like this.

(ns user
  (:require [com.stuartsierra.component :as component]
            [clojure.tools.namespace.repl :as c.t.n.r]))

(def app-state nil)

(defn new-system []
  (component/system-map ,,,))

(defn go []
  (alter-var-root #'app-state (fn [_]
                                (-> (new-system)
                                    component/start))))

(defn reset
  "Stop the app, reload all changed namespaces, start the app again."
  []
  (component/stop app-state)

  ;; Clojure tools namespace takes a symbol, rather than a function, so that it
  ;; can resolve the function *after code reloading*. That way it's sure to get
  ;; the latest version.
  (c.t.n.r/refresh :after 'user/go))

This pattern is encoded in reloaded.repl; System also contains an implementation. I recommend using an existing implementation over rolling your own, as there are some details to get right. It’s also interesting to note that reloaded.repl uses suspendable, an extension to Component that adds a suspend/resume operation.

To dig deeper into this topic, check out the Lambda Island episodes on Component and System.

The author of reloaded.repl, James Reeves, eventually became dissatisfied with Component, and created Integrant as an alternative, with integrant-repl as the counterpart of reloaded.repl.

With reloaded.repl your own utility code now looks like this:

(ns user
  (:require reloaded.repl
            com.stuartsierra.component))

(defn new-system []
  (com.stuartsierra.component/system-map ,,,))

(reloaded.repl/set-init! new-system)

Now you start the system with (reloaded.repl/go). To stop the system, reload all changed namespaces, and start the system again, you do (reloaded.repl/reset).

With integrant-repl things look similar.

(ns user
  (:require integrant.repl
            clojure.edn))

(defn system-config []
  (-> "system_config.edn" slurp clojure.edn/read-string))

(integrant.repl/set-prep! system-config)

Now (integrant.repl/go) will bring your system up, and with (integrant.repl/reset) you can get back to a clean slate.


CIDER (Emacs’s Clojure integration) supports tools.namespace through the cider-refresh command. This only does code reloading, but you can tell it to run a function before and after the refresh to achieve the same effect.

;; emacs config

;; reloaded.repl
(setq cider-refresh-before-fn "reloaded.repl/suspend")
(setq cider-refresh-after-fn "reloaded.repl/resume")

;; integrant-repl
(setq cider-refresh-before-fn "integrant.repl/suspend")
(setq cider-refresh-after-fn "integrant.repl/resume")

You can also configure this on a per-project basis, by creating a file called .dir-locals.el in the project root, which looks like this.

((nil . ((cider-refresh-before-fn . "integrant.repl/suspend")
         (cider-refresh-after-fn . "integrant.repl/resume"))))

Now calling cider-refresh (Spacemacs: , s x) will again stop the system, reload all namespaces, and restart.

Plot twist

So far so good. This was a lengthy introduction, but I wanted to make sure we’re all on the same page. Now let’s look at some of the things that might spoil our lovely reloading fun.


Clojure has a nifty feature called AOT or “ahead of time compilation”. This causes Clojure to pre-compile namespaces to Java classes and save those out to disk. This can be a useful feature as part of a deployment pipeline, because it can speed up booting your app. It has some serious drawbacks though, as it messes with Clojure’s dynamic nature.

What tends to happen is that at some point people find out about AOT, they think it’s amazing, and enable it everywhere. A bit later errors start popping up during development that just make no sense.

AOT should only be used for deployed applications. Don’t use it during development, and don’t use it for libraries you publish.

Even when following this advice you can get into trouble. Say you have AOT compilation enabled as part of the process of building an uberjar for deployment.

(defproject my-project "0.1.0"
  :profiles {:uberjar
             {:aot true}})

You do a lein uberjar to test your build locally. This will create AOT compiled classes and put them on the classpath (under target/ to be precise). Next time you try to (reset) it will tell you it can’t find certain Clojure source files, even though they’re right there. I have lost hours of my life figuring this out, and have more than once found my own Stackoverflow question+answer when googling for this. Watch out for the ghost of AOT!

By the way, here’s a handy oneliner: git clean -xfd. It removes any files not tracked by git, including files in .gitignore. It’s the most thorough way of cleaning out a repository. Do watch out, this might delete files you still want to keep! With git clean -nxfd you can do a dry run to see what it plans to delete.

defonce won’t save you

Ideally all your state is inside your system, but maybe there’s something else that you want to carry over between reloads. No need to judge, life is compromise.

You might think, “I know, I’ll just defonce it!”, but defonce won’t save you. tools.namespace will completely remove namespaces with everything in them before loading them again, so that var created by defonce the first time is long gone by the time defonce gets called again, and so it happily defines it anew.

What you can do instead is add some namespace metadata telling tools.namespace not to unload the namespace.

(ns my-ns
  (:require [clojure.tools.namespace.repl :as c.t.n.r])
  (:import (java.util Date)))

(c.t.n.r/disable-unload!) ;; this adds :clojure.tools.namespace.repl/unload false
                          ;; to the namespace metadata

(defonce only-once (Date.))

This namespace will still get reloaded, so if you have functions in there then you’ll get the updated definitions, but it won’t be unloaded first, so defonce will work as expected. This does also mean that if you remove (or rename) a function, the old one is still around.

To completely opt-out of reloading, use (c.t.n.r/disable-reload!). This implies disable-unload!.

cljc + ClojureScript source maps

When ClojureScript generates source maps, it needs a way to point the browser at the original source files, so it copies cljs and cljc files into your web server document root (your “public/” directory). This seems innocent enough, but it can confuse tools.namespace.

It is quite a common practice to search for files that are requested over HTTP in the classpath, so the “public/” directory tends to be on the classpath as well. tools.namespace scans the classpath for Clojure files (clj or cljc) and finds those cljc files that ClojureScript copied there, but their namespace name doesn’t correspond with their location, and things break. There is a JIRA ticket for this in tools.namespace: TNS-45, with several proposed patches, but no consensus yet about the right way forward.

The easiest way around this is to limit tools.namespace to only scan your own source directories.

(c.t.n.r/set-refresh-dirs "src/clj")

CIDER does its own thing

cider-refresh roughly does the same thing as clojure.tools.namespace.repl/refresh, and it uses tools.namespace under the hood, but it reimplements refresh using lower-level tools.namespace primitives. This leads to subtle differences.

Currently cider-refresh does not honor set-refresh-dirs. It honors the unload/load metadata, but does not follow the convention that :clojure.tools.namespace.repl/load false implies :clojure.tools.namespace.repl/unload false.

I proposed a patch to address both of these issues. In general if cider-refresh seems to have issues, then try to use (reset) directly in the REPL.


Full Stack Web Developer - Clojure


Spatial Informatics Group
$80000 - $85

Spatial Informatics Group, LLC (SIG) was founded in 1998 and is a group of applied thinkers with expertise in environmental fields ranging from landscape ecology, wildlife ecology, transportation modeling, ecosystem services valuation, natural hazards risk assessment, and forestry to natural resource economics. Our group combines spatial analytics with ecological, social and economic sciences to understand the effects of management and policy choices on the short and long-term stability of ecosystems. We translate data into knowledge that can be used to inform decisions.

Job Description: SIG is searching for a Full Stack Web Developer for a variety of programs relating to our Natural Hazards, Environmental Mapping and Forest Carbon domains. The successful candidate will have the expertise and experience to build decision support tools for our clients across the Americas, Africa and Southeast Asia. SIG is a digitally native organization with personnel distributed across three continents. Therefore, the position is for remote work only, preferably based in US compatible timezones.

Application Deadline: Applications will be continuously reviewed until a suitable candidate is identified.

**Requirements:** The following list provides a summary of the position's requirements:

  • Experience in Clojure/ClojureScript, or experience in Java/JavaScript and/or a different Lisp dialect (e.g., Scheme, Common Lisp)
  • Experience with GeoServer and WMS/WCS/WFS
  • Experience with geospatial processing using PostGIS, GeoTools, and/or GDAL/OGR
  • Experience with web mapping using OpenLayers
  • Familiarity with functional reactive UI development using ReactJS and/or Reagent
  • Familiarity with functional programming techniques (e.g., pure functions, immutable data, closures, laziness, function composition, memoization)
  • Experience with SQL DB programming in PostgreSQL (bonus points for PostGIS experience)
  • Comfort working in a distributed team environment with frequent text, voice, and video communications with other developers
  • Fluency in version control procedures with Git and GitHub/GitLab, including creating/closing issues, creating/reviewing/merging pull requests, and good branch management
  • Experience with web mapping libraries (e.g., OpenLayers) and/or web mapping services (i.e., WMS, WCS, WFS) and servers (e.g., GeoServer) is a plus
  • Comfort working in a Linux environment from the command line and administering remote servers over SSH

Terms of employment: The position is recruited on a full-time employment basis. In addition, part-time consultancy work may also be considered, depending on the experience and availability of the candidate.

Applications: Applicants are invited to apply via the following Greenhouse job board link with a CV, cover letter (optional) and Github handle.


End-of-November 2020 data science study meetings: Datavis, ETL, NLP, Notespace

TL;DR – please register

In the last two weekends, we have started a habit of weekend study meetings, with two groups that meet on alternating weekends.

Every two weekends, we will have study sessions on machine learning and data science practices. This habit started with the mid-November ML study meetings. On the other weekends, we will have study sessions of the "Scicloj foundations", where we will be learning to become contributors to the emerging ecosystem of Clojure data science and the libraries and tools around it. This habit started with the meetings last weekend.

We encourage you to follow these two groups on the corresponding Zulip streams: #ml-study and #sci-fu.

In the coming last weekend of November, we will have a few more meetings of the first kind. They will vary in their content. In some of them, we will experiment with various tools and ways of learning together.


Data visualizations practice

  • Plan: We will explore a data problem with an emphasis on data visualizations.
  • Workflow: We will share a joint REPL session on a remote machine with our local editors. We will switch hands from time to time.
  • Facilitated by: L Jordan Miller and John Stevenson
  • Time: 13:00 27 Nov 2020 in UTC

NLP session

ETL practice

  • Plan: We will explore a data problem with an emphasis on ETL data processing.
  • Workflow: Everybody will edit the same code using VSCode's remote abilities in a mob coding fashion.
  • Facilitated by: J
  • Time: 18:00 28 Nov 2020 in UTC

Tweaking Notespace

  • Plan: We will learn about Notespace from the point of view of extending and tweaking it. We will also discuss some possible directions for our next steps in building literate programming solutions.
  • Workflow: A talk, a discussion, and some hacking together.
  • Facilitated by: Daniel Slutsky
  • Time: 12:00 29 Nov 2020 in UTC

Please register in advance as much as possible, and let us know if anything changes your plans. This will help a lot in the preparations.

See you there!


Q: Can I go to any of the meetings?

A: Yes. Each of the meetings this weekend is self-contained and different.

Q: What should I do to participate?


Q: How long are the meetings?


  • Each of the study meetings will be 2 hours long.

Q: Are the meetings beginner-friendly?


  • Not exactly. At the moment, we are studying tools and libraries which are still changing and breaking. Sometimes, they are not entirely documented. Looking into them may require an open mind and might not be easy.
  • However, we do assume that nobody in the meetings is an expert. We will seek clarity and will make an effort to help each other.
  • After the ecosystem matures a bit more, we will organize workshops that are much more beginner-friendly.

Q: What knowledge will be assumed?


  • We will assume basic knowledge of Clojure (say, at least chapters 1,3,4,5 of Daniel Higginbotham's "Clojure for the Brave and True").
  • In this weekend, we will not assume any specific background in data science concepts.

Q: What platform will we use for the meetings?


  • We will use Zoom for the video meetings. We will email you the Zoom link.
  • We will use the Clojurians Zulip chat for our notes and textual discussions. Here is some recommended background about our use of Zulip:

Q: Will the meetings be recorded and published?


  • Usually, our study meetings are recorded, but the recordings are not shared publicly; rather, they are shared in our internal chat streams for internal use.
  • This time, some of the meetings may be published for wider audiences. This will be mentioned in the beginning of the meetings.


Jenkins vs Travis vs Bamboo vs TeamCity: Clash Of The Titans

What’s the first thing that comes to mind when you hear the words Software Development and DevOps? There’s only one magic word (five to be more precise): Continuous Integration and Continuous Delivery.

It is impossible to carry out software development without counting on DevOps testing or CI/CD tools, making picking the right CI/CD tool super important. Now the question is, how do you choose the right tool when there is no shortage of options? Well, to make it a little easier for you, we have picked four of the best CI/CD tools, and we will be comparing these tools - Jenkins vs. Travis vs. Bamboo vs. TeamCity - in this article so that you can make an informed decision.

Why Are CI/CD Tools Important?

When multiple team members work on the same project, it can often get agonizing for developers to contribute to the end code. They need to communicate and coordinate all sorts of modifications manually. This synchronization goes far beyond the development team, as it’s not just the developers who are responsible for all product features.

For example, software product teams tend to strategize a sequence of feature introductions and choose individuals to be in charge of each feature. But the larger the team gets, the greater the chances of code failures. Without any fail-safe in place, this often leads to a vicious cycle of blame.

To mitigate this blame game and bring some relief to developers, CI (Continuous Integration) was introduced. In essence, continuous integration is the practice of regularly merging all working copies of code to a shared channel or repository numerous times a day. CI brings together all the code alterations in a single place, prepares their production, then prepares and tests the code release.

Each of these steps aids in improving the quality of the code, reducing all sorts of human error in code review, and ultimately saving precious time and effort. And thus arises the need to choose the best Continuous Integration/Continuous Delivery tools among a plethora of options available to us. As promised above, this blog will talk about the much-heated battle between Jenkins vs Travis vs Bamboo vs TeamCity.

Digging Deeper Into CI/CD

Before jumping into CI/CD tools and comparing Jenkins vs Travis vs Bamboo vs TeamCity, we first need to understand CI/CD better. CI/CD mainly denotes the software development & testing infrastructure that intends to bring out quality products with less effort and time. In a nutshell, Continuous Integration (CI)/Continuous Delivery (CD) typically goes hand in hand with an agile environment. This entire process allows software development teams to incorporate, authenticate, fix errors, and set up code to the production environment automatically.

Key Objectives Of CI/CD

  • Offer instant feedback about the quality of the code to software developers.
  • Automate tests and build a document trail.
  • Allow developers to spend relatively more time writing codes than in testing or integrating fresh code with the present codebase.

Major Distinctions: Continuous Integration, Continuous Deployment, & Continuous Delivery

Continuous Integration (CI) is the initial phase for Continuous Delivery & Continuous Deployment. CI is a discipline that mainly helps developers combine pieces of their code into one major code branch without any break in the collective code. CI runs test automation so that bugs would not slip into the code and guarantees that the end-product does not break every time fresh commits are incorporated into the mainline.

Both Continuous Deployment and Continuous Delivery are what comes about when the modifications have moved through each phase of development and are ready to be released. A CI/CD pipeline is the best practice to systematize the complete development process. Generally, it is broken into phases so software developers can get rapid response or feedback.

Continuous Delivery (CD) is the method for releasing the alterations. Continuous Integration guarantees that all changes in the code are combined regularly in an automated mode. On the flip side, CD is the practice that permits developers to release fresh alterations from the source code repository to end-users. With CD (Continuous Delivery), this procedure gets automated and organized - you simply need to click a button whenever you are ready to deploy.

You can essentially decide the rhythm that best meets your business requirements and release monthly, weekly, daily, etc. The go-to approach is to release early: if there is an issue with the code, small batches are simpler to fix than large modifications.

However, Continuous Deployment is a wholly automated and highly advanced version of Continuous Delivery. After going through the phases mentioned above of the production pipeline, alterations can be released to your users via automatic releases. This again eliminates the probability of human errors.

Automated releases let the software development team concentrate on their job and stop stressing about ‘Release Day.’ Moreover, they can get customer response or feedback quite rapidly and check the consequences or results of their work within minutes of completing it. To put it plainly, CI lays the base for Continuous Deployment and Continuous Delivery. These two are more alike — both of them can help release changes; however, as a contrast, Continuous Deployment releases occur automatically.

All releases, irrespective of which one of the CDs you choose, follow a specific release workflow. Typically, the release workflow is categorised into stages:

  • Define Plan — recording primary information, defining the release package, schedule, and release plans, and conveying tasks to team members.
  • Release Build — generating one or more release packages.
  • Testing Phase — testing the release build in mock environments.
  • Notes or Document — generating release documents.
  • Approve Deployment — approving deployment of the release to a live environment.
  • Deployment Phase — deploying the release to a live environment.
  • Review the Final Release — recording final details and finishing the release.

Commercial Benefits Of CI/CD

It is essential to understand the commercial aspects of CI/CD tools before comparing Jenkins vs Travis vs Bamboo vs TeamCity. This section will help you align your business requirements or project requirements with the perfect CI/CD tools.

  • Sustainability — Every company wants to stay on top of its product development game. To avoid burnout, teams can delegate recurring jobs to machines and reduce manual labor, which also controls costs, since manual effort costs more than tooling.
  • Quick Response — Rapid development lets an organization react to market trends sooner and outpace the competition. No one needs a feature that was needed six months ago. Remember, though, to always keep quality standards high; without quality, speed will not get you very far.
  • Satisfaction & Productivity — Unhappy developers are less engaged and less productive, and you do not want that for your team. Let machines deal with defects and bugs while your developers do what matters most, and keep your team happy, satisfied, and productive.

Having understood the CI/CD concept and benefits, let’s dive into an in-depth analysis of the four most popular CI/CD tools — Jenkins vs Travis vs Bamboo vs TeamCity.

Jenkins vs Travis vs Bamboo vs TeamCity: Face-Off

Gear up folks, it is time to dive into the ultimate battle between Jenkins vs Travis vs Bamboo vs TeamCity. Each tool offers its own set of unique features and advantages when it comes to DevOps testing. Let’s get right to it.

1. Jenkins

Jenkins is one of the most popular names in the CI market. It has grown into the largest open-source tool that helps engineering teams automate their deployments. Written in Java, this CI/CD tool is used to build and test projects and makes it simple for developers to integrate changes into a project.

Jenkins lets developers plug into several phases of the DevOps cycle — build, document, test, package, stage, deploy, static analysis, and much more. It is the most widely used tool in this space, with strong global support. Jenkins supports 1000+ plugins and runs on the major operating systems (Linux, macOS, Windows, Ubuntu, RedHat) with all common browsers (Chrome, Firefox, Internet Explorer).

To incorporate a specific tool with Jenkins, you simply install the appropriate plugin — for instance, Amazon EC2, Maven 2, Git, or HTML Publisher. Using it, hyper-growth companies and startups alike can speed up software development through automation.

Setting It Up

You only need to perform three steps if Java and Apache Tomcat are already installed.

  • Download the Jenkins war file from the official website.
  • Deploy the war file.
  • Install the necessary plugins.

What Does It Mainly Do?

  • With this tool, you can automate build, documentation, test, and deployment jobs. It can be installed using Docker or native system packages, or run standalone on any machine with a JRE (Java Runtime Environment) installed. It provides an excellent browser-hosted project management dashboard.
  • In practice, the tool lets team members push their code to a build and get instant feedback on whether it is ready for production. In most cases this requires some tailoring and tinkering of Jenkins to fit your team’s custom needs.
  • Being one of the most admired free CI/CD tools, it also gives you the option of tailoring it into a home-grown solution. Like other widely used open-source projects, Jenkins follows two release lines: LTS (Long Term Support) and Weekly. Where the tool mainly excels is its rich plugin ecosystem: it offers around 1,400 plugins, which allow integration with almost every service and tool available on the market.

Key Features

  • Supports all important languages.
  • Open-source, free code base written in Java.
  • Easy to maintain, with a built-in GUI tool for upgrades.
  • Runs on third-party cloud hosting or your own private server.
  • Compatible with any version control system.
  • A powerful pipeline syntax for creating scripts that automate several procedures, including testing.
  • Configuration through a Jenkinsfile, which can be customized in minute detail. It is probably one of the more complicated setups, though pipeline scripts make it a little simpler.
  • Most of its components are open and free to use. However, don’t underestimate the DevOps engineering cost and time needed to customize, build, and run the environment.

If you are seeking an open-source (and inexpensive) Continuous Integration solution and want the backing of a large community, this CI/CD tool is the right choice for you.
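As a sketch of the pipeline syntax mentioned above, a minimal declarative Jenkinsfile might look like this. The stage names and shell commands are illustrative assumptions, not taken from the article:

```groovy
pipeline {
    agent any                          // run on any available build agent
    stages {
        stage('Build') {
            steps { sh 'mvn -B package' }   // hypothetical Maven build
        }
        stage('Test') {
            steps { sh 'mvn test' }
        }
        stage('Deploy') {
            when { branch 'main' }          // only deploy from the main branch
            steps { sh './deploy.sh' }      // hypothetical deploy script
        }
    }
}
```

Checked into the repository root as `Jenkinsfile`, a script like this is picked up automatically by a Jenkins multibranch pipeline job.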

2. Travis CI

Travis CI is another outstanding and popular tool in the CI/CD arena. It was initially launched for open-source projects and later expanded to closed-source projects. The tool focuses mostly on Continuous Integration, improving the build process’s performance with test automation and an alert system. You can rapidly test your code — the tool supervises all modifications and lets you know whether a change passes or not. Out-of-the-box tools like Travis CI and CircleCI are a great way to take your first steps in Continuous Integration.

It mainly works with the GitHub SCM tool and functions as a CI tool only. Travis CI is mostly written in Ruby, is developed by the Travis CI community, and chiefly supports web platforms. It has excellent traits such as fast setup, pre-installed database services, live build views, automatic deployment of passing builds, pull-request support, deployment anywhere, clean virtual machines for each build, and support for nearly every platform, including Linux and Mac.

Unlike some other CI/CD tools, Travis CI supports the build matrix — a way to execute tests against different packages and language versions. You can customize it as you wish; for instance, failures in a few environments can trigger notifications without failing the whole build (useful for development versions of packages).
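As a sketch, a build matrix with an allowed failure might look like this in `.travis.yml`. The language and version numbers here are illustrative, not from the article:

```yaml
language: python
python:
  - "3.7"
  - "3.8"
  - "nightly"            # a development version of the runtime
matrix:
  allow_failures:
    - python: "nightly"  # notify on failure, but don't fail the whole build
script:
  - pytest
```

Travis expands this into one job per listed version, and the `allow_failures` entry marks the nightly job as non-blocking.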

Setting It Up

Below are the steps:

  1. To begin using Travis CI, sign up with GitHub.
  2. Authorize Travis CI.
  3. Then click your account picture in the top-right corner of your Travis dashboard, click Settings, then the green Activate button, and choose the repositories you would like to use with the tool.

What Does It Mainly Do?

  • Travis CI concentrates on letting users rapidly test their code as it is deployed. It supports small and large code changes and is designed to detect changes in building and testing. When a modification is detected, the tool immediately reports whether the change was successful.
  • Testing open-source apps is free; testing private apps is paid. There are two main build flows: the pull-request build flow and the branch build flow. The tool supports about 30 programming languages, including Go, Xcode, Java, Ruby, Python, Haskell, Clojure, PHP, Node, Perl, Scala, C#, and more.
  • Travis CI is configured by adding a file named .travis.yml, a YAML file in the root of the GitHub repository. It also supports the incorporation of external tools.
  • Developers can watch tests as they execute, run several tests in parallel, and integrate the tool with HipChat, email, Slack, etc. to get alerted about problems or failed builds.
  • It supports container builds and runs on Linux Ubuntu and OSX. It has a restricted list of third-party integrations, but since the focus is on Continuous Integration (CI) rather than Continuous Delivery (CD), that might not be a problem for your use case.

Key Features

  • Free for testing open-source apps.
  • Supports integration with external tools.
  • Several parts of the software are open on GitHub, with some private code written in Ruby.
  • Offers both private-server and cloud-hosted options.
  • Supports numerous languages such as Python, Scala, Ruby, and Perl, with builds on Linux, macOS, and, notably, Windows.
  • No permanently free plan for private projects (a trial limited to the first 100 builds and two concurrent jobs).
  • The supported Version Control Software (VCS) is GitHub.
  • Apps or libraries can be tested against numerous runtimes and data stores without installing them locally across several operating systems.
  • Lightweight, well-documented yml configuration; easy project setup thanks to built-in services and databases.
  • An out-of-the-box cloud solution that is comparatively simple to manage and maintain once set up.

If your code is open-source and you care most about the Continuous Integration of your builds, this CI/CD tool is worth it for you.

3. Bamboo

Developed by Atlassian in 2007, Bamboo is a notable automation server for CI that automatically builds, tests, and deploys source code and prepares the app for deployment. This licensed tool offers high-end visibility over the entire development, testing, and deployment process. Bamboo brings great flexibility, works with several tools, and has a simple-to-use GUI. It supports numerous languages and enables developers to apply CI/CD methodologies. It has robust integrations with the Atlassian products relevant to the Continuous Integration cycle, like Bitbucket and Jira.

With this tool, you can ensure superior quality and status, get in-depth visibility into release execution, and spend most of your time writing code rather than integrating other software. Bamboo also provides built-in deployment projects, automated merging, robust build-agent management, and built-in Git branch workflows.

Setting It Up

After installing Java and creating a dedicated user to run Bamboo, follow these steps:

  • Download Bamboo.
  • Generate an installation directory.
  • Create the home directory.
  • Start Bamboo and configure it.

What Does It Mainly Do?

  • Building, integrating, testing, and deployment are all part of its package, and the software-testing part is handled by Bamboo agents. The tool offers two kinds: remote agents execute on other servers and computers, while local agents execute as part of the Bamboo server process. Each agent is allocated builds that match its capabilities, which lets you assign different agents to different builds.
  • The key benefit Bamboo provides is the strong tie to the rest of Atlassian’s products, like Bitbucket and Jira. Using Bamboo, you can trace code changes and Jira issues all the way into the final deployment. This way, developers can synchronize their workflow, always stay on track, and know which version comes next and what has already shipped.

Key Features

  • Bitbucket and Jira Software server integration for high-end visibility into release execution, status, and quality.
  • Sends a continuous flow of builds to the testing environment and releases automated builds to users.
  • Automated merging between Git and Mercurial branches.
  • Strong build-agent management that lets you scale build capacity by connecting servers on your network through Amazon EC2. With the Agent Matrix feature, you can visualize the system requirements for each build and allocate builds to the right agents.
  • Straight from the branch name, you can automatically detect, build, test, and merge branches to deploy code continuously to staging or production servers.
  • Support is available for enterprise teams.
  • Comes with Atlassian’s strong support, together with smooth workflows for the company’s other products. If you want to incorporate Bitbucket and Jira into your Continuous Integration process in a seamless way, and are willing to pay for it, Bamboo is well worth a try.

Overall, Bamboo is a powerful tool as long as you are using it with Jira and Bitbucket and are willing to pay for your Continuous Integration (CI) solution.

4. TeamCity

TeamCity, made by JetBrains, is a popular commercial CI/CD server: a Java-based build management and automation server. TeamCity lets users fit the tool to their own needs and environment. Its server is the prime component, and the browser-hosted interface serves as the main way to administer users, agents, projects, and build configurations. It manages project status and reporting information suitable for a broad range of users and stakeholders, and offers drill-down detail, build progress, and history on configurations and projects.

TeamCity offers CI “out of the box” with support for numerous languages (Ruby, .NET, Java, and several others), and JetBrains backs the tool with solid support and documentation. Its slick GUI and easy-to-use features make it pleasant for those new to CI. The tool runs in a Java environment, typically on an Apache Tomcat server, and can be installed on both Linux and Windows servers. It provides strong support for open-stack and .NET projects, integrating into IDEs like Eclipse and Visual Studio.

Setting It Up

These are the steps to follow:

  • Download TeamCity from the official site.
  • Choose the installation directory and follow the configuration.

What Does It Mainly Do?

  • As one of the best software-testing tools, it aims to improve release cycles. With TeamCity, you can review test results on the fly, see code coverage, locate duplicates, and customize statistics on build duration, code quality, success rate, and other custom metrics.
  • It provides restricted features as freeware under certain terms and conditions. TeamCity supports servlet-based servers such as Apache Tomcat, and major platforms like .NET, Java, and Ruby.
  • Once the tool detects a change in a version control system, it adds a build to the queue. The server finds an idle compatible build agent and assigns the queued build to it, and the agent runs the build steps.
  • While this runs, the TeamCity server logs the various messages, test reports, and other changes being made. The changes are saved, synced, and uploaded in real time, so users can see what is happening with the build as they modify it. TeamCity can also run parallel builds concurrently on different platforms and environments.

Key Features

  • Private code-base CI/CD tool written in Java.
  • Provides both hosted and private-server cloud options.
  • VCS: supports projects on both GitHub and Bitbucket.
  • Deep emphasis on test quality and test metadata.
  • An out-of-the-box cloud solution that is comparatively trouble-free to maintain once deployed.
  • Lightweight, well-documented configuration settings.
  • Provides various means to reuse the settings of a parent project in a subproject.
  • Can take source code from two different VCSs (version control systems) for a single build.
  • Unrestricted build configurations for 60 days to try CI.
  • A rather useful free version that offers unlimited builds and 100 build configurations; after that, pricing starts at USD 299.
  • Comes with the option of gated commits, which can keep developers from breaking the source in a VCS. This is done by running the build remotely for local changes before they are committed.

This is an incredible CI/CD tool that has been gaining fame over the past few years, providing a decent alternative to the other CI tools on the market. If you want to watch your builds and tests as they happen, or want a powerful Continuous Integration (CI) solution with a useful free tier, there is no doubt that TeamCity is worth checking out.

Jenkins vs Travis vs Bamboo vs TeamCity: Key Differences

Commercial vs Open Source

Travis CI and Jenkins are open-source projects supported by developers across the globe. TeamCity and Bamboo are popular commercial CI/CD tools developed and managed by their parent companies. The major difference for the user is the much larger community around Jenkins compared with the other CI/CD tools.

What To Consider When Picking Out The Right CI/ CD Tool?

  1. Where is the tool hosted? Hosting matters. Cloud-based tools need the least configuration and can be tuned to the user’s requirements, while with self-hosted solutions the task of managing them rests with the in-house DevOps team. The former reduces setup complications; the latter provides far more flexibility.
  2. Usability: A well-designed, clever interface makes the whole build process much simpler. Look for a clear, easy Graphical User Interface and UX.
  3. Great integrations: A good Continuous Integration tool should integrate with the other tools commonly used in software development. This might include legal-compliance tools, static-analysis tools, project-management tools (Jira), and issue trackers (Bugzilla). It should also support build tools (Maven, Gradle) and VCSs (Perforce, Git).
  4. Reusable code library: Select a tool with a public library of plugins and reusable build steps, whether commercial or open-source.
  5. Container support: CI/CD tools that are configured for container orchestration tools (Kubernetes, Docker) or have a deploy plugin have an easier time connecting to an app’s target environment.

Wrapping Up

While selecting the right CI/CD tool, you will need to look at your internal resources, your budget, and the amount of time you wish to spend on setup and learning. Before you decide on your tool, there are some significant aspects you should consider:

  • Management and support provided by tools
  • User interface (UI) & integration support
  • Type of systems like large software systems and standalone systems

These are just a few relevant parameters to bear in mind before choosing between Jenkins vs Travis vs Bamboo vs TeamCity. To sum up, all of these CI/CD tools serve a purpose in their own way. Remember that your CI/CD tool is merely one of the tools you will need to win the software development race. Application monitoring and deployment are just as significant elements of agile development. Furthermore, cloud-based browser compatibility testing tools like LambdaTest can help accelerate the entire process for your teams. Consider all the aspects and then choose between Jenkins vs Travis vs Bamboo vs TeamCity.

Happy testing!

Author Rahul Jain

‘Jenkins vs Travis vs Bamboo vs TeamCity: Clash Of The Titans’ was originally published in Level Up Coding on Medium, where people are continuing the conversation by highlighting and responding to this story.


Thinking like a programmer Everyday Without Being a Programmer, Issue 1

Drawers versus Functions
Places to put things versus when-you-need-it dispensers

Programming in the age of C and Assembly

Back in the day, programming was all about making hurdling-devices to run their course down a narrow track. Like a rubber-band powered mini-car, programmers would set up their intricate devices on a contrived track to skip and hop the hurdles and achieve some side-effect. Being able to read code 6-8 months down the line was less important than cranking out that last 10-20% of efficiency on large and small devices. Nice for speed and efficiency and gloating, creating cryptic and enigmatic loops and register allocations might rest nicely on the ego and its historical assu...

Tags: Clojure, Functional Programming

Continue reading


Ep 089: Branching Out

Each week, we discuss a different topic about Clojure and functional programming.

If you have a question or topic you’d like us to discuss, tweet @clojuredesign, send an email to, or join the #clojuredesign-podcast channel on the Clojurians Slack.

This week, the topic is: “if, when, case, condp, and cond.” We wander through the myriad ways of making decisions and listing choices in Clojure.

Selected quotes:

  • “Clojure is all about the practical.”
  • “Never be afraid of the in-line if.”
  • “We need to conserve parentheses, for future generations of lispers.”
  • “Build up the language to the vocabulary of your domain, and you won’t have to think about the language any more, you’ll just be thinking about your problem.”
  • “Where are you in programming, without branching?”


Related Episodes:


Using Docker for Atomic Games Deployment

We’ve just run our sixth edition of the Atomic Games! The original version happened way back in 2015. For each edition, we’ve built a game board with core game logic. Game participants write AI players that compete against other teams’ solutions.

We’ve always written the game boards in Clojure, and that’s typically meant distributing a JAR file for students to launch the game. This year, we tried using Docker to distribute the game instead, and it went very well!

What’s Wrong with a JAR File?

Java used to boldly promise that you could “write once, run anywhere.” Of course, that rarely turns out to be true. We typically run into two problems every year:

  • Students often have incompatible Java versions installed on their laptops. Most have very little idea about what they have installed or how to switch to a different version. We often have teams wasting precious time trying to uninstall one Java version, download another, figure out what’s in their paths, etc.
  • Students often have port conflicts or firewalls that prevent our server from running. This is generally not a huge problem to solve, but by the time they’ve figured out the issue, done some googling, asked for help, etc., they’ve wasted precious time.

What We Did Instead

This year, we provided the game as a Docker container. The Dockerfile was quite simple to build:

FROM openjdk:13
# Copy the game server jar into the image
COPY ./ao-game-server.jar /usr/src/ao-game/
WORKDIR /usr/src/ao-game
# Always run the server jar; "-w" is the default argument (overridable at `docker run`)
ENTRYPOINT ["java", "-jar", "ao-game-server.jar"]
CMD ["-w"]

We asked students to make sure that they had Docker installed prior to arriving at the event. With that out of the way, getting everyone up and running went quite smoothly. We had some concerns that folks would have trouble installing or using Docker, but we gave everyone example commands to work from, and that was enough to get everybody off to the races.

One really nice outcome is that it’ll be trivial for students to remove the game from their machines. We didn’t have to modify paths or versions or anything of that sort, so students won’t have any work to do to return to a previous state.

Another nice side effect is that we had an opportunity to introduce students to Docker (and hopefully demystify it a bit). It’s a great tool for developers to have in their toolboxes!

What We’ll Do Next Time

We’ve been thinking about cloud hosting the game next time around so that folks don’t have to bother installing it at all. Of course, the Docker approach worked so well that I’m starting to reconsider whether a cloud-hosted approach would be worth the effort. Some nice scripts to wrap the needed commands might be all we need to make this solution really useable for folks who are new to Docker.

The post Using Docker for Atomic Games Deployment appeared first on Atomic Spin.


Full-Stack Software Engineer

Full-Stack Software Engineer

rocklog GmbH | Anywhere

rocklog provides PLANET-ROCKLOG, a cloud-based, blazing-fast B2B SaaS for logistics, supply chain management, and integration. Pricing is completely transparent, with no lock-in and no hidden costs, and it is used by small and mid-sized companies and NGOs.

We have an interesting stack:

Front-end: mithril.js, JS, TS, D3.js
Backend: Clojure/Lisp, Software Transactional Memory, CouchDB, Docker

We are looking for a software engineer (full-time, part-time, or intern) to help us shape the future of our software.

Requirements & Key Traits:
• Curiosity, well-backed opinions on computer science, software engineering, and the software development lifecycle, good abstraction skills, and a drive for quality
• Good understanding of functional programming
• Good communication skills, willingness to work in a remote setup
• Experience using React, Mithril, SolidJS, D3.js, or similar frameworks
• A good feel for UI/UX trends and patterns
• Knowledge of the types, limitations, and current trends of web frameworks
• Value UI performance as a primary target
• Entrepreneurial mindset: rapid prototyping, rapid delivery, go for customer value
• Friendly, open-minded person

• Be a first+core engineer
• Freedom to work where you want
• Shape the UI/UX architecture and culture
• Friendly, empathic and welcoming fellows


What Gödel Discovered

In 1931, a 25-year-old Kurt Gödel wrote a proof that turned mathematics upside down. The implication was so astounding, and his proof so elegant that it was...kind of funny. I wanted to share his discovery with you. Fair warning though, I’m not a mathematician; I’m a programmer. This means my understanding is intuitive and not exact. Hopefully, that will come to our advantage since I have no choice but to avoid formality 🙂. Let’s get to it.


For the last 300 years, mathematicians and scientists alike made startling discoveries, which led to one great pattern. The pattern was unification: ideas that were previously thought to be disparate and different consistently turned out to be one and the same!

Newton kicked this off for physicists when he discovered that what kept us rooted on the Earth was also what kept the Earth dancing around the sun. People thought that heat was a special type of energy, but it turned out that it could be explained with mechanics. People thought that electricity, magnetism, and light were different, but Maxwell discovered they could be explained by an electromagnetic field.

Darwin did the same for biologists. It turned out that our chins, the beautiful feathers of birds, deer antlers, different flowers, male and female sexes, the reason you like sugar so much, the reason whales swim differently...could all be explained by natural selection.

Mathematicians waged a similar battle for unification. They wanted to find the “core” principles of mathematics, from which they could derive all true theories. This would unite logic, arithmetic, and so on, all under one simple umbrella. To get a sense of what this is about, consider this question: How do we know that 3 is smaller than 5? Or that 1 comes before 2? Is this a “core” principle that we take on faith (the formal name for this is called an “axiom”) or can this be derived from some even more core principle? Are numbers fundamental concepts, or can they be derived from something even more fundamental?


Mathematicians made great progress in this battle for core principles. For example, a gentleman called Frege discovered that he could craft a theory of sets, which could represent just about everything. For numbers, for example, he could do something like this:

A demonstration of how to represent numbers with sets

Here, he represents 0 as the empty set, 1 as the set containing the set for 0, and 2 as the set containing the sets for 1 and 0. From this he could define a rule to get the “next” number: just wrap all previous numbers in a set. Pretty cool! Frege was able to take that and prove arithmetic facts like “1 + 1 = 2” and “the numbers are infinite.”
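This encoding is concrete enough to sketch in a few lines of Python (my own illustration, not from the essay), using frozensets so that sets can nest inside each other:

```python
# Frege-style numbers as nested sets: each number is the set of all
# smaller numbers. (Names here are mine, for illustration.)
zero = frozenset()

def nxt(n):
    """The 'next' number: wrap all previous numbers (and n itself) in a set."""
    return n | {n}

one = nxt(zero)   # the set containing 0
two = nxt(one)    # the set containing 0 and 1

# The size of the set recovers the number it represents,
assert len(two) == 2
# and "smaller than" falls out as set membership: 0 < 2 because 0 is in 2.
assert zero in two
```

So a question like “is 0 smaller than 2?” reduces to a question about set membership, with no separate notion of number needed.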

This looked formidable and cool, but Bertrand Russell came along and broke the theory in one fell swoop.

He used the rules that Frege laid out to make a valid but nonsensical statement. He proved something analogous to 1 + 1 = 3 [1]. This sounds innocuous; it was, after all, just one statement. Nevertheless, it was disastrous for a foundational theory of mathematics: if you can prove that 1 + 1 = 3, then you can’t really trust any true statement that results from this foundation.

This put mathematicians in a tail-spin. They even dubbed this period the “Foundational Crisis of Mathematics.”

Hilbert’s Program

In an effort to solve this problem, a mathematician named Hilbert laid down some requirements for what a fundamental theory of mathematics had to look like [2]. He said that this theory must be a new language, with a set of rules satisfying two primary constraints:

The theory would need to be able to prove any true mathematical statement. For example, imagine the statement 1 + 1 = 2. If this language can’t prove that statement, then it certainly can’t prove all of mathematics. Hilbert called this quality completeness. The language would need to be complete.

The second hard requirement, as we discussed earlier, was that it could not prove a false mathematical statement. If we could prove 1 + 1 = 3, then all was for naught. Hilbert called this consistency. The language would need to be consistent.

Russell and Whitehead

Bertrand Russell, the gentleman who broke Frege’s theory, worked together with Alfred North Whitehead to develop a theory of their own. They labored for years to craft an immense volume of work called Principia Mathematica [3].

They started by writing a new language (let’s call it PM) with a few simple rules. They took those rules, and proceeded to prove a bunch of things. Russell and Whitehead took almost nothing on faith. For example, let’s look at this almost-impossible-to-read proof over here (don’t worry, you don’t need to understand the syntax for this essay):

An example, very hard-to-read proof from Principia Mathematica

This proof showed that “1 + 1”, does indeed equal “2”. It took 2 volumes to get here.

Their language was dense and the work laborious, but they kept on proving a whole bunch of different truths in mathematics, and so far as anyone could tell at the time, there were no contradictions. It was imagined that at least in theory you could take this foundation and eventually expand it past mathematics: could you encode in pure logic how a dog behaves, or how humans think?

Gödel Comes Along

It certainly looked like Principia Mathematica could serve as the foundational theory for Mathematics. Until Gödel came along.

He proved that Principia Mathematica did indeed have true mathematical statements which could not be proven in the language. Principia Mathematica was incomplete.

This was startling, but his proof went even further. He showed that the entire enterprise behind Hilbert’s Program — to find a formal foundation for mathematics — could never work.

It’s hard to believe that a person could really prove that something can “never” happen — imagine if someone told you that we could never travel farther than our solar system — you’d look at them with suspicion.

Yet here Gödel was...a 25 year-old who proved beyond a doubt that this enterprise was impossible. He did this by showing that if a language could represent numbers, then unprovable statements would necessarily pop up.

Let’s think about that for a second: Numbers seem so quaint and easy to prove — just “1”, “2”, “3” on. People thought we could eventually write down how humans think — imagine how shocked they must have been to see that we couldn’t prove all truths about...numbers.

Let’s see how Gödel did it.


Now, Russell and Whitehead’s language was hard to read, and there’s no harm in changing some of their symbols around. Let’s map their language to something more amenable to programmers: Lisp!

You can imagine that Russell and Whitehead came up with a lisp-like language. Here’s how their syntax looked:

First, they had a few symbols for arithmetic.

next, the successor
(next 0)
(+ 0 (next 0))
(* 0 (next 0))
(= 0 (* 0 (next 0)))

Just from these symbols, they could represent all natural numbers. If they proved that the symbol 0 worked like 0, and the symbol next worked like a successor function, then (next 0) could represent 1, (next (next 0)) could represent 2, and so on.

Here’s how they could write 1 + 1 = 2:

(= (+ (next 0) (next 0)) 
   (next (next 0)))

Now, for the purpose of this essay, I’ll add one rule. If you ever see me using a natural number inside PM-Lisp other than 0 (i.e. “15”), you can imagine it’s shorthand for writing (next (next (next ...))) that many times. In this case, “15” means next applied to 0, 15 times:

<natural-number> means (next (next ...)) applied to 0 <natural-number> times

so 3 means (next (next (next 0)))

(Next (pun-intended)), they came up with some symbols to represent logic:

(not (= 0 1))
(or (= 0 1) (not (= 0 1)))
when ... then ...
(when 0 (or 0 1))
when 0, then either 0 or 1
there is ... such that ...
(there-is x (= 4 (* x 2)))

These symbols map closely to the logical statements we are used to in programming. The most unusual one is there-is. Let’s see one of those for an example:

(there-is x (= 4 (* x 2)))

This is making a statement, that there is some number x, such that (* x 2) equals 4. Well, that is indeed true: x = 2. That’s pretty cool — we’ve just made a general arithmetic statement.

Where did the x come from though? Oops, we need to account for that in our language:

a...z, A...Z

In order to represent general truths, Russell and Whitehead introduced variables. Here’s how they could derive and, for example:

(not (or (not A) (not B)))

When this statement is true, both A and B must be true!
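You can sanity-check this with a quick truth table. Here’s a small Python sketch (Python stands in for PM-Lisp here, purely as an illustration):

```python
# Check that (not (or (not A) (not B))) behaves exactly like "and",
# by brute-forcing all four cases of the truth table.
def pm_and(a, b):
    return not ((not a) or (not b))

for a in (True, False):
    for b in (True, False):
        assert pm_and(a, b) == (a and b)
print("pm_and matches `and` on all four cases")
```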

Very cool. One more trick for our essay. To make it a bit easier to read, sometimes I’ll introduce new symbols. They won’t actually be a part of the language, but they can make good shorthand for us in the essay.

(def <name> <formula>)
define <name> to represent <formula>
(def and (not (or (not A) (not B))))

same as (and <var-a> <var-b>...)

Now we can write (and 1 2) 🙂

PM-Lisp Axioms

All we saw above were symbols. They had no meaning yet.

Russell and Whitehead needed to prove that 0 works like zero, and that = works like equals. In order to breathe life into those symbols, they started off with some core principles — the axioms.

Here’s what they chose:

(when (or p p) p)
when either apples or apples, then apples
(when p (or p q))
when apples, then either apples or bananas
(when (or p q) (or q p))
when either apples or bananas, then either bananas or apples
(when (or p (or q r)) (or q (or p r)))
when either apples, bananas, or pears, then either bananas, apples, or pears
(when (when q r) (when (or p q) (or p r)))
when apples are a fruit, then bananas or apples implies bananas or fruit

That’s it. These are the only statements we need to take on faith. They took these rules and laboriously combined them in intricate ways to derive everything else.

For example, here’s how they derived =:

(def = (and (when A B) (when B A)))

If A implies B, and B implies A, they must be equal! Imagine this done for hundreds and hundreds of pages.
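Reading when as material implication ((when A B) behaves like (or (not A) B)), a truth table confirms the derivation. Another small Python sketch, again purely as an illustration:

```python
# (= A B) is derived as (and (when A B) (when B A)).
# Reading (when A B) as material implication, check it matches "==".
def pm_when(a, b):
    return (not a) or b

def pm_equals(a, b):
    return pm_when(a, b) and pm_when(b, a)

for a in (True, False):
    for b in (True, False):
        assert pm_equals(a, b) == (a == b)
print("pm_equals matches == on all four cases")
```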

Note something essential here: their rules are so precise that there is no room for human judgement; a computer could run them. This was a key component for a foundational theory of mathematics: if the rules were so simple that they could be run as an algorithm, then we could side-step errors in human judgement.

Gödel’s First Idea

Now, Gödel wanted to study Russell and Whitehead’s language. But, it’s hard to study symbols. How do you reason about relationships between them?

Well, there is one thing you can study very well...numbers! So he came up with an idea: what if he could express all of PM-Lisp with numbers?

This is what he did:


First, he took all the symbols and assigned a number to them:

[table: every symbol assigned a Gödel Number, e.g. when → 19, there-is → 21, a → 2, b → 4, 0 → 5]

Now, say he wanted to write when. He could just write 19. This is good but doesn’t cover much: how would he represent formulas?


He crafted a solution for formulas too. He made a rule:

Take any formula, like this one:

(there-is a (= (next 0) a))

and convert each symbol to the corresponding Gödel Number:

[table: each symbol of the formula converted to its Gödel Number]

Then take the list of ascending prime numbers, and set each one to the power of the Gödel Number:

[table: the ascending primes 2, 3, 5, ... each raised to the corresponding symbol’s Gödel Number]

Multiply them all together, and you get this one huuge number:

25777622821258399946386094792423028037950734506637287219050

There’s something very interesting about this number. Because prime factorizations are unique, this number could only have come from this exact sequence of symbols! This means that he could represent every formula of PM-Lisp with a unique Gödel Number!
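The encoding is easy to sketch in real code. Here’s a Python illustration; the essay only names a few symbol codes (when = 19, there-is = 21, a = 2, b = 4, 0 = 5), so the remaining codes below are made up:

```python
# A sketch of Gödel numbering. Codes marked "hypothetical" are invented
# for illustration; the rest are the ones named in the essay.
CODES = {
    "(": 1, ")": 3, "next": 7, "=": 9, "+": 11, "*": 13,  # hypothetical
    "when": 19, "there-is": 21, "a": 2, "b": 4, "0": 5,   # named in the essay
}

def primes():
    """Yield 2, 3, 5, 7, ... forever."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def godel_number(symbols):
    """Raise ascending primes to each symbol's code and multiply."""
    result = 1
    for p, sym in zip(primes(), symbols):
        result *= p ** CODES[sym]
    return result

# ( there-is a ( = ( next 0 ) a ) ) as a flat token list:
formula = ["(", "there-is", "a", "(", "=", "(", "next", "0", ")", "a", ")", ")"]
n = godel_number(formula)
```

Note that the exponent on the second prime, 3, is 21: the code for there-is.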


Formulas are great, but they’re not all of PM-Lisp. We’d also want to support proofs. In a proof, we would have a “sequence” of formulas:

(there-is a (= (next 0) a)) 
(there-is a (= a (next 0)))

He applied the same trick again, but this time over each individual formula:

(there-is a (= (next 0) a)) → 25777622821258399946386094792423028037950734506637287219050
(there-is a (= a (next 0))) → 76887114166817775146256448336954145299389470803180389491850

Now if we took

2^25777622821258399946386094792423028037950734506637287219050 * 3^76887114166817775146256448336954145299389470803180389491850

We’d have one ginormous number. Just the first term in this calculation has 7 octodecillion digits! (1 octodecillion has 58 digits itself) But we’d have something more. This ginormous number uniquely represents the proof we just wrote!

All of a sudden, Gödel could represent symbols, formulas, and even proofs, uniquely with Gödel Numbers!
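Encoding a proof is the same trick one level up. With the real 59-digit exponents the result is far too large to compute, so this Python sketch uses tiny stand-in formula numbers:

```python
# Encoding a proof: raise ascending primes to each formula's Gödel
# Number and multiply. The stand-in "formula numbers" here are tiny.
def primes():
    """Yield 2, 3, 5, 7, ... forever."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def encode_proof(formula_numbers):
    result = 1
    for p, f in zip(primes(), formula_numbers):
        result *= p ** f
    return result

print(encode_proof([3, 5]))  # 2^3 * 3^5 = 1944
```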

PM-Lisp on PM-Lisp

Now, we can use math to study relationships between numbers: for example “how are even numbers and prime numbers related?”, “are prime numbers infinite?” and so on. In the same way that we could use math to study prime numbers, Gödel realized that he could use math to study “all the numbers that represent PM-Lisp proofs”!

Now, what language could he use to study these relationships? Well, Russell and Whitehead made sure PM-Lisp itself was great for studying numbers...so why not use PM-Lisp to study “all the numbers that represent PM-Lisp proofs”?

And that’s exactly what Gödel did: he used PM-Lisp to study PM-Lisp!

It’s certainly not what Russell and Whitehead had intended, but it was nevertheless possible. Let’s take a look at some examples, to get a sense of what we mean.

Describing formulas

Say you had a formula like this:

(there-is a (= (next 0) a)) 

What if we wanted to prove the statement “The second symbol in this formula is ‘there-is’”?

Well, we have the Gödel Number for this formula: 25777622821258399946386094792423028037950734506637287219050

All we’d have to do, is to say in PM-Lisp:

“The largest power of 3 that is a factor of this Gödel Number is 3^21”.

If we said that, it would be equivalent to saying that the second symbol (the one at the prime number 3) is “there-is” (Gödel Number 21)! Very cool.

Well, that relationship is trivial to say in PM-Lisp. Let’s start by writing a formula to check if a number is a factor of another:

(there-is x (= (* x 5) 30))

This statement says that there is an x such that (* x 5) must equal 30. If x = 6, this works out, so the statement is true. Well, that maps to the idea that 5 is a factor of 30! So let’s make this a “factoring” shortcut:

(def factor? (there-is x (= (* x y) z)))

We can then use factor? for our statement:

(and
  (factor? 3^21 25777622821258399946386094792423028037950734506637287219050)
  (not (factor? 3^22 25777622821258399946386094792423028037950734506637287219050)))

This statement says that 3^21 is a factor of our number, and that 3^22 is not. If that is true, it means that 3^21 is the largest power of 3 in 25777622821258399946386094792423028037950734506637287219050. And if that is true, then PM-Lisp just said something about that formula: it said the second symbol must be there-is!
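“The largest power of 3 that divides n” is just the exponent of 3 in n’s prime factorization. A Python sketch, using a small stand-in Gödel Number rather than the 59-digit one:

```python
# Find the exponent of a prime in a number's factorization, i.e. the
# largest k such that base**k divides n.
def largest_power_factor(base, n):
    k = 0
    while n % base == 0:
        n //= base
        k += 1
    return k

# Stand-in Gödel Number whose second prime (3) carries exponent 21:
n = 2 ** 1 * 3 ** 21 * 5 ** 2
print(largest_power_factor(3, n))  # -> 21, the code for there-is
```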

Constructing formulas

We can go further. We can even construct PM-Lisp formulas in PM-Lisp! Imagine we had a bunch of helper statements for primes and exponents:

(def prime? ...) ; (prime? 5) ; true
(def largest-prime ...) ; (largest-prime 21) ; 7
(def next-prime ...) ; (next-prime 7) ; 11
(def expt ...) ; (expt 10 3) ; 1000

Since PM-Lisp is all about math, you can imagine Russell and Whitehead went deep into primes and gave us these handy statements. Now, we could write a formula that “appends” a ) symbol, for example:

(* n (expt (next-prime (largest-prime n)) 3))

Say n was the Gödel Number for (there-is a (= (next 0) a)).

Here’s what that statement says:

  • Find the largest prime in n’s factorization: 37
  • Get the next prime after that: 41
  • Multiply n by 41^3

Multiplying n by 41^3 would be equivalent to appending that extra )! Mind bending.
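The append-a-symbol trick is short in real code, too. A Python sketch (the code 3 for the ) symbol follows the essay’s example; the helper functions are illustrative):

```python
# Appending a symbol to a formula = multiplying by the next unused
# prime raised to the symbol's code.
def largest_prime_factor(n):
    p, last = 2, 1
    while n > 1:
        while n % p == 0:
            n //= p
            last = p
        p += 1
    return last

def next_prime(p):
    candidate = p + 1
    while any(candidate % d == 0 for d in range(2, int(candidate ** 0.5) + 1)):
        candidate += 1
    return candidate

def append_symbol(n, code):
    """Append one symbol to the formula encoded by n."""
    return n * next_prime(largest_prime_factor(n)) ** code

# A toy two-symbol formula; appending code 3 multiplies by 5^3:
n = 2 ** 1 * 3 ** 21
assert append_symbol(n, 3) == n * 5 ** 3
```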

(successor? a b)

Now, Gödel started wondering: what other kinds of statements could we construct? Could we make a statement like this:

(successor? a b)

This would say: “the formula with the Gödel Number a implies the formula with the Gödel Number b.”

It turns out...this is a valid, provable statement in PM-Lisp! The mathematical proof is a bit hard to follow, but the intuitive one we can grasp well.

Consider that in PM-Lisp, to go from one statement to the next statement, it must boil down to one of the axioms that Russell and Whitehead wrote out!

For example from the sentence p, we can apply the axiom (when p (or p q)), so one valid next statement can be (or p q). From there, we can use more axioms: (when (or p q) (or q p)) can help us transform this to (or q p). And so on.

We already saw that we can use PM-Lisp, to “change” around formulas (like how we added an extra bracket at the end). Could we write some more complicated statements, that can “produce” the next possible successors, from a statement and those axioms?

As one example, to go from p to (or p q), we’d just need a mathematical function that takes the Gödel number for p, and does the equivalent multiplications that prepend (or, and append q).

Turns out, this can be done with some serious math on prime numbers! Well, if that’s possible, then we could check whether the next statement in a sequence is valid:

(def successor?
  (one-of b (possible-successors a)))

This statement says “one of the possible successor Gödel Numbers from the formula with Gödel Number a equals the formula with the Gödel Number b.” If that is true, then indeed b must be a successor of a.

Nice! PM-Lisp can say that one formula implies another.

(proves a b)

If we can prove that a formula is a successor, can we say even more?

How about the statement (proves a b). This would say: “the sequence of formulas with the Gödel Number a proves the formula with the Gödel Number b."

Well, let’s think about it. Getting a “list” of Gödel Number formulas from a is pretty straight-forward: just extract the exponents on prime numbers. PM-Lisp can certainly do that.

Well, we already have a successor? function. We could just apply it to every statement, to make sure it’s a valid successor!

(and
  (every-pair successor? (extract-sequence a))
  (successor? (last-formula a) b))

There’s a lot of abstraction over there that I didn’t talk about — every-pair, extract-sequence, etc — but you can sense that each one is certainly a mathematical operation: from extracting exponents to checking that a Gödel Number is a proper successor.

The statement above would in effect say:

"Every formula in the sequence with the Gödel Number a is a proper successor, and the last one implies the formula with the Gödel Number b."

Gödel went through a lot of trouble to prove this in his paper. For us, I think the intuition will do. Using PM-Lisp, we can now say some deep truths about PM-Lisp, like “this proof implies this statement" — nuts!

(subst a b c)

There’s one final statement he proved. Imagine we had this formula

(there-is b (= b (next a)))

The Gödel number would be 26699108848097731568417316859014651425159900891216992323750

This says “There is a number b that is one greater than a.”

What if we wanted to replace the symbol a with 0?

Well, this would be tedious but straight-forward: we just need to replace all exponents that equal 2 in this number (remember that 2 is the Gödel Number for the symbol a) with 5 (the Gödel Number for 0).


Again, this seems pretty straight-forward mathematical computation, and we can sense that PM-Lisp could do it. It would involve a lot of math — extracting exponents, plopping multiplications — but all within reasonable logical realms.
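That computation can be sketched directly: decode the exponents, swap the matching ones, re-encode. A Python illustration using a toy number (the codes 2 for a and 5 for 0 follow the essay; everything else is made up):

```python
# subst as arithmetic: read off the exponents on ascending primes,
# replace every exponent equal to old_code with new_code, re-encode.
def primes():
    """Yield 2, 3, 5, 7, ... forever."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def exponents(n):
    """The exponent of each ascending prime in n's factorization."""
    out = []
    for p in primes():
        if n == 1:
            break
        k = 0
        while n % p == 0:
            n //= p
            k += 1
        out.append(k)
    return out

def subst_symbol(n, old_code, new_code):
    result = 1
    for p, e in zip(primes(), exponents(n)):
        result *= p ** (new_code if e == old_code else e)
    return result

# Replace every symbol a (code 2) with 0 (code 5) in a toy Gödel Number:
n = 2 ** 21 * 3 ** 2 * 5 ** 9 * 7 ** 2
assert subst_symbol(n, 2, 5) == 2 ** 21 * 3 ** 5 * 5 ** 9 * 7 ** 5
```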

Gödel proved that this function was also a provable statement in PM-Lisp. This substitution, for example, would produce the Gödel Number that represents this formula:

(there-is b (= b (next 0)))

Wild! a replaced with 0. PM-Lisp could now make substitutions on PM-Lisp formulas. I imagine that when Russell and Whitehead saw this, they started getting a little queasy.

Suspicious use of subst

If they weren’t already queasy, this certainly would make them so: what if we used subst to replace a with the Gödel Number of the formula itself?

In this case, the formula would now say:

(there-is b (= b (next 26699108848097731568417316859014651425159900891216992323750)))

It’s weird to use the Gödel Number of a formula itself inside the formula, but it is a number at the end of the day, so it’s all kosher and logical.

Very cool: PM-Lisp can now say if a certain proof is valid, and it can even replace variables inside formulas!


Gödel combined these formulas into a jaw-dropping symphony. Let’s follow along:

He starts with this:

(proves a b)

So far saying “the sequence with the Gödel Number a proves the formula with Gödel Number b”

Next, he brought in a there-is

(there-is a (proves a b))

So far saying “There is some sequence with the Gödel Number a that proves the formula with the Gödel Number b"

Now, he popped in a not:

(not (there-is a (proves a b)))

This would mean

“There is no sequence that proves the formula with the Gödel Number b"

Then he popped in subst:

(not (there-is a (proves a (subst b 4 b))))

Wow what. Okay, this is saying

“There is no sequence that proves the formula that results when we take the Gödel Number for b, and replace 4 (the Gödel Number for the symbol b) with the Gödel Number b itself!”

So far so good. But what is b right now? It can be anything. Let’s make it a specific thing:

What if we took the Gödel Number of

(not (there-is a (proves a (subst b 4 b))))

It would be an ungodly large number. Let’s call it G

Now, what if we replaced b with G?

(not (there-is a (proves a (subst G 4 G))))

Interesting...what is this saying?

Gödel’s Formula

Let’s look at it again:

(not (there-is a (proves a (subst G 4 G))))

This is saying: “There is no proof for the formula that is produced when we take ‘the formula with the Gödel Number G’” -- let’s remember that G is the Gödel Number for:

(not (there-is a (proves a (subst b 4 b))))

“And replace b with G”...which would result in the Gödel number for the formula:

(not (there-is a (proves a (subst G 4 G))))

Hold on there! That’s the formula we just started with.

Which means that

(not (there-is a (proves a (subst G 4 G))))

Is saying: “I am not provable in PM-Lisp”. 😮

What to Believe

Well, that’s an interesting statement, but is it true? Let’s consider it for a moment:

“This Formula is not Provable in PM-Lisp.”

If this was true:

It would mean that PM-Lisp was incomplete: Not all true mathematical statements can be proven in PM-Lisp. This very sentence would be an example of a statement that couldn’t be proven.

But, if this was false:

Then that would mean that PM-Lisp could prove “This Formula is not Provable in PM-Lisp”. But, if it could prove this statement, then the statement would be false: the formula would be provable, even though it says it is not. That would make our language inconsistent — it just proved a false statement, analogous to 1 + 1 = 3!

Hence Gödel came to a startling conclusion: If PM-Lisp was consistent, then it would have to be incomplete. If it was complete, it would have to be inconsistent.

Power of Numbers

That was a blow for Russell and Whitehead, but what about Hilbert? Could we just come up with some new language that could avoid it?

Well, as soon as a language can represent whole numbers, it will fall into the same trap: Gödel can just map the language to numbers, create a valid successor? function, and produce the equivalent “I am not provable in X”.

This flew in the face of many a mathematician’s dreams: even arithmetic had a quality to it that could not be reduced to axioms.

In programming, this translates to: there are some truths that you can never write down as an algorithm. This is the essence of what Gödel discovered.

He went on to prove some more surprising things. It turns out that he could write a similar, valid sentence that said “I cannot prove that I am consistent”. This meant that no formal system, could prove by itself, that it could only produce true statements.

Now, this doesn’t mean that all is for naught. For example, it may mean that we can’t write an algorithm that can think like a dog...but perhaps we don’t need to. Just as neurons aren’t aware of a dog’s love of toys, our algorithms wouldn’t have to be either: perhaps consciousness would emerge as an epiphenomenon in the same way. The idea of “think like a dog” just won’t be written down concretely.

We can’t prove within a system that it is consistent, but we could prove that using another system. That raises the question, of course: how could we prove that the other system was consistent? And so on!

I see Gödel’s idea like a guide: it shows us the limit of what we can do with prescriptive algorithms. And I find what he did so darn funny. Russell and Whitehead went through a lot of trouble to avoid self-reference in their work. In a way, Gödel got around that by building the first “meta-circular evaluator” — a language that interpreted itself — and came up with some surprising conclusions as a result.


I hope you had fun going through this :). If you want to go deeper on Gödel’s proof, there are a few books you may like. Hofstadter’s “I Am a Strange Loop” gives a very friendly introduction in Chapter 9. Nagel and Newman’s “Gödel’s Proof” explains the background, alongside a logical overview, very well. For those who want to go even deeper, I really enjoyed Peter Smith’s “An Introduction to Gödel’s Theorems”. He shows much more substantiated proofs for (proves a b) and (subst a b c) — I highly suggest giving that a read too!

Also, if you want to play with creating your own Gödel Numbers, here’s a quick script in Clojure.

Thanks to Irakli Popkhadze, Daniel Woelfel, Alex Reichert, Davit Magaltadze, Julien Odent, Anthony Kesich, Marty Kausas, Jan Rüttinger, Henil, for reviewing drafts of this essay


Okay, it’s not quite 1 + 1 = 3. His statement could not be proven true or false. See Russell’s Paradox


This is a great resource to learn more about Principia Mathematica


Let The Asset Pipeline Die

日本じんRubyプログラマー、どうもありがとうございます! Rubyがだいすきです。日本語がよくわかりません。だから、これは推測です。すみません! [Japanese Ruby programmers, thank you very much! I love Ruby. I don't understand Japanese well, so this is a guess. Sorry!]
TLDR: The best front-end compilers come from the JavaScript world. Use words accurately. Eschew hate. Forget MINASWAN, because Ruby is 和. Learn Japanese.

don't save sprockets

The good folks at Arkency recently got in touch and asked if I wanted to include my book Rails As She Is Spoke in their Rails Book Bundle. I was delighted to say yes, and big shoutout to Arkency for that. It was a fun coincidence: I had actually been reviewing this book, with the goal of rewriting it, for a few different reasons.

First, Gary Bernhardt's Destroy All Software, in its first incarnation, included a ton of Rails content which I felt kinda put my book to shame, and I wanted to upgrade the whole book as a consequence. Second, I had realized that my book had accidentally downplayed the role of Kathy Sierra in the design of Rails. When Rails was new, Sierra's Creating Passionate Users blog was a centerpiece in the ongoing discussions about how to build the Web, not just in general among the Web community, but specifically on David Heinemeier Hansson's blog. DHH frequently linked to Sierra's blog, and frequently discussed how both Basecamp and Rails were founded on and/or in line with Sierra's thinking. But I left that out of my book, and when I re-read my own book, I realized that I had not only done Kathy Sierra a disservice there, but also made the story less coherent in the process.

Third, when I wrote the book, Rails was all the way back on Rails 3. It's now on version 5. So I also had some updating to do. In the original book, I had kind of skipped over the asset pipeline, and today it's one of the major hassles in Rails development, so I felt that this was a minor deficiency that had turned more serious. So I set out to find a definitive answer on that.

Which is how I ended up watching @schneems's talk Saving Sprockets. In this talk, @schneems explains why he's maintaining Sprockets, and why it's a lot of work. TLDR: the code base is a mess, it overuses modules, and ultimately, Sprockets is a single God object with 105 methods.

Saving Sprockets is an important talk. A lot of people need to see it. Programmers all over the world need to remember that open source is free; that filing a ticket means asking somebody else to do work and solve your problems for free; that open source maintainers do not owe you that free work and are not bad people for having lives beyond their GitHub Issues. I was really enjoying the talk, until there came a point when @schneems referred to Node.js as "the technology that shall not be named," and I nearly threw my computer out the fucking window.

rails hate against javascript is deranged

Rails was first created in 2004. Browsers were horrible in 2004. You still had to deal with IE6. You had quirks mode. At the time, "optimizing for programmer happiness" and "doing front-end development" were opposites. So Rails has a long history of creating adjunct technologies which enable you to build interactive web sites without ever having to touch a line of JavaScript.

In 2006, Rails introduced RJS, which allowed you to write Ruby which would generate JavaScript. It was a bad idea, and Rails retired it only one year later. Since then, Rails has introduced Sprockets and Turbolinks, and adopted CoffeeScript. Personally, of all these technologies, the only one I really love is the one that wasn't invented in-house.

But in 2006, at least, wanting to avoid JavaScript as much as Rails does was reasonable. Today it's like meeting somebody who thinks Czechoslovakia, which doesn't exist any more, is part of the USSR, which doesn't exist either. Browsers got better, the language got better, and you can now run it from the command line if you want. It's just a different world.
And indeed Czechoslovakia, when it did exist, was an independent country. Thanks to Jon Roesner for context on this.

Not only that, the Rails culture has had a problem with Node.js hate since Node.js was invented. For me personally, this has been a severe source of caremad since about 2011. For instance, here I am arguing with Avdi Grimm about it in 2012:

Avdi's prediction did not materialize, of course, but we'll get to that.

Ruby caremad is very durable for me. Rails itself might not even be the best tech for my needs any more, but Rails gathered a community around principles like beautiful code and programmer happiness. I think those principles are very important.

The thing that bothers me so much about this, however, is not that I love Node or love JavaScript. The thing that bothers me so much is that every time a Rubyist hates on JavaScript, they shit on all of the things that were said to me when I was first curious but doubtful about Ruby — that it's a polyglot community, that Ruby represents a synthesis of many great ideas from many different programming languages.

A little before I got into Ruby, I started going to Python meetups, but I stopped because I got sick of hearing them hate on Perl. It was constant; no Python meetup was complete without somebody saying something about how terrible Perl was. Perl is by no means a perfect language, but I'm just not the kind of person who likes to join groups who use hating another group as their key organizing principle. I consider that very unhealthy. I don't know if Python ever got over that shit, but I am very sad to see how sick the Rails culture has become.

So I started trying to figure out what went wrong, and I think I know.

Let's start by getting our terms right.

When people talk about Ruby programmers as a group of people, they usually say "the Ruby community." I often prefer Gary Bernhardt's term — "the Ruby culture" — but it's really a diaspora. Ruby originated in Japan, and most of the Ruby core team is Japanese to this day. Japanese Rubyists and Western Rubyists form distinct local cultures which share a common language and a common origin. That's a diaspora.

(Likewise, you might be tempted to say "the JavaScript community," but everybody uses JavaScript. It's like saying "the human beings who breathe community." It's too big to be a community or even a culture. It's a world.)

When programmers in the West discovered Ruby and told each other about it, they formed a distinct local culture of Western Rubyists. (Being Western, of course, they just called themselves Rubyists.) Every culture needs a story to tell about itself, and Western Rubyists were no different. But the story they came up with was so weak that the culture's current sickness was made inevitable.

The story was "Matz is nice, and so we are nice."

They abbreviate it MINASWAN.

ruby is 和

Just for the record, MINASWAN is at least half true. Matz is nice. I met him, and he's nice. I went to dinner with him, several other Rubyists, and the late, terrific, and much-missed James Golick at MountainWest RubyConf.

We were thrilled to get our Matz selfies.

But there are several problems with MINASWAN. It's simplistic hero worship, and it's so vague it inevitably leads to hypocrisy. "Be nice" is really general advice, and some of the people who say it don't actually do it.

I think there is a much, much better way to understand Ruby.

I've been studying Japanese for a few months, and one of the other students in my class brought in pictures of a Buddhist temple that he visited in Japan. One picture involved a Shinto shrine, even though Shinto and Buddhism are two completely different religions. This Buddhist temple simply included a space for Shinto worshippers, like if a church in America included a small corner that had been constructed as a tiny mosque. Our teacher told us that this is a characteristic of Japanese culture; where in America, you have a church over here and a mosque over there, in Japan, it's very common to find people reconciling or combining disparate elements in a harmonious blend.

This is what Ruby does. Ruby took disparate elements from Perl, Python, Lisp, and Smalltalk, and combined them in a harmonious blend.

I wasn't there when MINASWAN was first said. But what I imagine is that there were two people disagreeing with each other, and Matz, who is Japanese, looked at their points of view and sought to combine disparate elements from these two points of view in a harmonious blend. And all the American Ruby programmers looked at that and thought, "oh, he's nice." Like that was all there was to it.

In reality, this is a fundamental thing about Japanese culture. And Ruby comes from Japan. This is not a coincidence. The Japanese language even has a word for it: 和, which is roughly pronounced "wa."

It should hopefully be very obvious how hating on Node.js, or any language, is fundamentally irreconcilable with 和, and fundamentally inappropriate for any event or group of people organized around Ruby, if you regard Ruby as a product of 和.

I see Ruby that way. In fact I think we should go further, and say that Ruby is 和.

I hate to say this, but I believe this is a situation where we Western Rubyists are simply being very bad guests. Ruby was created in a spirit of 和. It should be used in a spirit of 和. It's just a matter of courtesy.

MINASWAN implies that Ruby on Rails does not exist

Now let's think about one of the major, glaringly obvious contradictions in MINASWAN.

By far the most famous Rubyist in the world is DHH. I would not call DHH nice. I think he wouldn't describe himself as nice either. In The Rails Doctrine, he describes Rails as "a deeply narcissistic endeavor." In 2006, at the first Rails conference ever, Canada on Rails in Vancouver — about a month before the first RailsConf — DHH gave a talk where he quoted some critics of Rails who had described him as arrogant, and he agreed with them.

A few years later, when someone else delivered a Rails conference presentation which contained porn in its slides, DHH defended them, saying this:
I've found that the fewer masks I try to wear, the better...

You're bound to upset, offend, or annoy people when you're not adding heavy layers of social sugarcoating...

What I'm not going to do, though, is apologize for any of these preferences and opinions...If you can deal with that, I'm sure we're going to get along just fine.
In other words, "I'm just being honest." The thing that only assholes say.

So, according to DHH, DHH is arrogant, doesn't mince words, and regards his own life's work as a narcissistic project. That's what this guy says about himself.

So if MINASWAN is really a basic truth about the Ruby culture, then how does DHH fit in at all? MINASWAN would predict that DHH was not involved in Ruby, since "we are nice" is a statement which probably cannot be made about a group of people which includes DHH. But MINASWAN's prediction is false.

DHH isn't just involved in Ruby, and he isn't just a very important figure in Ruby. He's also a huge Ruby fanboy. Half of The Rails Doctrine is just all about how much DHH loves Ruby. He wrote a long essay about the ideas behind his massively successful web framework, and despite it being a narcissistic project by his own admission, half of this essay just reads like a love letter to a programming language. And it's probably not a coincidence that his Ruby reads better than most people's Ruby.

Rails also fundamentally embraces 和. Think back to the example of a Shinto shrine within a Buddhist temple. Another way to describe 和 could be as a spirit of inclusive compromise. DHH famously hates RSpec — and I've recently come to understand his point of view — but you can not only use RSpec with Rails; rails new has a built-in --skip-test option which allows you to use RSpec more easily. There's a --skip-whatever option for nearly everything DHH thinks you should use in your Rails app; consider also the "no one paradigm" section of The Rails Doctrine.

Ever since DHH came along and transformed Ruby, in the West, from an esoteric hobby into a thing you could actually do for a living, MINASWAN's been this weird, illogical thing about pretending DHH doesn't exist. Which is already a weird red flag. But if you say "how can we carry forward Ruby's spirit of 和?" then DHH poses one answer. In his communication style, he's Western to a fault, individualistic and utterly uninterested in getting along with anybody, but when it comes to naming variables and methods, he's obsessed with balance and harmony. In other words, he chooses 和 when he's writing code, and he chooses a Western individualism when he's arguing on the Internet.

(And, sadly, when he's architecting systems, which is the core problem with the asset pipeline.)

DHH's answer might not be the best answer. It's certainly not the only answer. But it's an answer. MINASWAN implies that DHH just doesn't know what Ruby is about, as a culture, and that's a ridiculous position. It's the (Western) Ruby old school sticking its fingers in its ears and yelling "la la la I can't hear that Danish guy."

If we say "Ruby is 和," then we can acknowledge the obvious reality that DHH understands Ruby pretty well. You can still disagree with what he's chosen as the balance of Western individualism and Japanese 和, but that's fine — speaking as a Westerner myself, I feel like that's his decision to make — and you now have a sane, coherent, rational model of the Ruby culture. The model for itself that this culture has chosen instead, MINASWAN, implies absurdities. You can't structure a conversation around something like that. MINASWAN was simple enough when Ruby was a tiny scene of hobby programmers only, but when DHH made it a serious professional thing, MINASWAN should have evolved into something more nuanced. Instead, some Rubyists forgot about it, others never even heard of it, and others took it as a joke because it ignored DHH.

Still others employed it as a sort of snooty, superior, tragic thing. That's the old school Rubyists who tried to use MINASWAN to inhabit the moral high ground. But MINASWAN fails as an attempt to occupy the moral high ground because it implies DHH has no "real" place in Ruby. He's literally the only reason most of you have jobs! So it's not only terribly entitled and ungrateful, it's also passive-aggressive, since nobody ever addresses this huge contradiction that DHH somehow is just a footnote. But if you're passive-aggressive, entitled, and ungrateful, then you're not nice.

MINASWAN is garbage. It'd be more accurate to say, "Ruby showcases the Japanese value of 和, but we are arrogant Americans, so we reduce this to a really basic American idea, harshly compressing it in the process to a state where it cannot possibly mean anything any more, instead of bothering to learn something about the outside world for once." But MINASWAN was already a long acronym, so I guess they had to draw the line at RSTJVO和BWAAASWRTTARBAIHCIITPTASWICPMAAMIOBTLSATOWFO.

Instead of advocating this acronym myself, I have a simpler suggestion: shitcan MINASWAN, and say instead "Ruby is 和." That's pronounced, "Ruby is wa." Now you may be asking yourself, 和t the fuck? It's not enough that newbies have to learn OOP and some FP concepts, now we have to tell people that if they want to understand Ruby, they have to learn Japanese?

Well, every Japanese programmer has to learn English to program in Ruby, which was invented in Japan, so I don't really think that's as demanding as it might sound. I think it's actually pretty reasonable to ask that programmers be good at languages. But I'm not asking you to learn a whole language. I'm literally saying you should just learn this one word. It's a really important word.

Ruby's artful blend of disparate paradigms is not a fluke; it's just the most recent step in a tradition which stretches back at least a thousand years. This blending already has a name, and if you want to reason about it — say, for example, you're writing a new language, which aims to achieve a similar balance, as Jeremy Ashkenas did when he created CoffeeScript — then it makes sense to refer to it by name.

Alternatively, I suppose you could just say "Giles felt like his ideas were too obvious, so he started blogging in Japanese." But that would miss the point, because プログラマーはトランスレーターです.
"A programmer is a translator." In other words, translation is inherent to programming.


Around the same time I watched the Saving Sprockets presentation, I saw this (shorter) video too:

Take a second to watch it. It's just fantastic.

I also saw this remark on GitHub:

Since illuminating this murk was a major goal for my Rails As She Is Spoke rewrite, I dug into this some more, and found a great blog post about migrating from Sprockets to Webpack. It said:
I couldn't have imagined tools like Webpack or Gulp existing a few years ago, but today, Javascript asset packagers are becoming increasingly advanced and sophisticated. It seems to me that Sprockets will have a real tough run for its money in the very near future.
I agree.

This is the answer to all the confusion around the asset pipeline: don't fucking use it. Sprockets is poorly designed abandonware which causes endless problems. This is an easy choice.

The JavaScript world has a terrible and well-deserved reputation for changing too fast, but if Rails moves away from Sprockets and embraces Webpack, it'll be the first major change in the design of the asset pipeline since 2009. I think you can change your asset pipeline in fundamental ways, nearly eight years later, without deserving any criticism for moving too fast.

Plus, according to @searls, Ruby is still the best way to test browsers on a Node.js project:

Maybe it would be easier to sell Node programmers on a polyglot approach here if Rails itself hadn't turned its back on polyglot programming, aka Ruby's spirit of 和. Here's one way you might sell Node programmers on using a Ruby project: "hey, we talk shit about you all the time, but our Selenium wrapper is better than your Selenium wrapper, so suck a dick, dumbshits." Here's another way: "hey, we use a Node front-end compiler, because we're an open-minded, polyglot community, and we have this really good browser testing API, so why not be polyglots too?"
@searls showing up twice like this doesn't imply an endorsement from him, it just shows that he's freaking everywhere.

Also, Webpack is amazing. If you've read Ilya Grigorik's book on front-end performance — and frankly, if you have any opinion at all about Sprockets, then you should have read this book already — Webpack is a dream.

To dig up the Sprockets logo, I had to go to a 2009 blog post, because today, the Sprockets web site is a GoDaddy domain landing page.

Webpack doesn't just compile CoffeeScript, bundle up JavaScript files, gzip everything, and set up fingerprinting, aka cache-busting, like the asset pipeline does. Webpack can also embed CSS files in JavaScript — along with SVG, JPG, GIF, PNG, and even MP3 files — which allows you to reduce the number of network requests that your front-end code is making. Both Sprockets and Webpack can turn images into data URIs, but Webpack can transpile many other languages to JavaScript, not just CoffeeScript, and using Webpack means your front-end code can use the Node.js require() functionality which JavaScript proper still doesn't really support, at least not in every browser. So you can replace the hacky require_tree directives in the comments of your application.js with actual require() semantics.
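That list of capabilities is easier to see in an actual config. Here's a minimal sketch of a Webpack config in the Webpack 1-era syntax contemporary with this post; the entry point and output paths are hypothetical, and it assumes the coffee-loader, url-loader, style-loader, and css-loader packages are installed from npm:

```javascript
// webpack.config.js — a minimal sketch, not a drop-in setup.
// Paths are hypothetical; adjust for your own app layout.
module.exports = {
  entry: "./app/frontend/application.js",
  output: {
    path: __dirname + "/public/assets",
    filename: "bundle.js"
  },
  module: {
    loaders: [
      // transpile CoffeeScript, just like Sprockets does...
      { test: /\.coffee$/, loader: "coffee-loader" },
      // ...but also embed small images as data URIs to cut network requests
      { test: /\.(png|jpg|gif|svg)$/, loader: "url-loader?limit=8192" },
      // ...and pull CSS into the JavaScript bundle
      { test: /\.css$/, loader: "style-loader!css-loader" }
    ]
  }
};
```

With something like this in place, your application code says `require("./cart")` and gets real module semantics, instead of relying on `//= require_tree` comment directives that Sprockets parses out of your source files.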

Both Webpack and Sprockets have modular APIs which allow third parties to write plugins, but there are important differences. First, Webpack works more reliably than Sprockets, and its API doesn't flummox third-party contributors (or indeed its core team) the way the Sprockets API does. Second, as a technology favored by JavaScripters, Webpack gets more relevant third-party contributions than Sprockets does, it gets them faster, and they come from people who understand the front end better.

Community is probably the most important part of this. As long as you're using a JavaScript front-end compiler instead of a Ruby one, it doesn't even matter that much which one you use instead. I prefer Webpack, but there are strong arguments for Rollup, Browserify, and even Bower as well.

Node is like Ruby used to be back in the day, i.e., a new new hotness every three months, so you should find the balance that works best for you, but all of the serious JavaScript options are better for this than anything Ruby has ever produced, or ever could produce.

Webpack 2 has tree-shaking, and the Webpack team is looking for ways to integrate Rollup-style scope-hoisting as well. Could this ever happen with Sprockets? Can Sprockets solve CSS's namespace problems? How many refactors until this legacy code acquires an API clean enough to support third-party sunburst graphs? And how soon is Sprockets going to get its real-time dashboard?

The answer is either never or not any time soon. It's going to take a heroic amount of work just for Sprockets 4 to become more than one object. But despite a culture packed to the gills with OO gurus, Rails does not have a good track record with refactoring — remember Rails 3? — and even in the best-case scenario, Sprockets 4 is not going to deliver tree-shaking or scope-hoisting ever. It's just never going to happen. You need great JavaScript parsing for these features, and Ruby's JavaScript parsing libraries are not great.

I know this first-hand. I had this crazy idea to build a Ruby project which could automatically refactor JavaScript. Basically, a front-end compiler, except I hadn't even thought of compressing JavaScript. I just wanted to build a system which could automatically turn ugly code into more readable code. And I did that, sort of, a tiny bit. I built an MVP, really just a proof of concept. It was not easy, in fact it was excruciatingly painful, and the Ruby libraries which were available for the task were not great, or even good. The experience was so awful that only two things can possibly explain it: sheer stubbornness, and the ludicrous idea, which I believed at the time, that JavaScript parsers, compilers, and insane automatic refactoring experiments should all be written in Ruby.

In reality, the best JavaScript parsers are written in C or JavaScript. (And although Ruby has arguably good integration with C, the best JavaScript parsers for C are typically hidden inside JavaScript engines like V8, and tied too closely to those engines to be usable.) In general, it's reasonable to assume that the people who care the most about JavaScript and understand JavaScript the best are going to write the best JavaScript compilers and parsers, and further, that they will do so in JavaScript.

The first Google search result for "best css parser" also just happens to be written in the language that front-end coders use the most. I haven't done any CSS parsing, so I can't vouch for it. I just wanted to shock you with an astonishing coincidence. Because the intuitive, rational thing to assume, which I'm sure you assumed, would be that the best CSS parser was written in Ruby, right? This is what any rational person would assume. Yet for some inscrutable reason, it appears the best CSS parser's either written in JavaScript or C++. I guess this inexplicable phenomenon will remain an unsolved mystery until the end of time.

Unless, of course, you happen to believe that Ruby is a beautiful language which is good at some things but bad at others, in which case you might conclude that it's maybe not reasonable to be writing compilers in Ruby for anything other than a teaching example.

ok, save sprockets a little

At this point in the blog post, you might be expecting a caffeine-induced, fist-shaking rant, foaming at the mouth with impotent rage, but that's not what comes next. First I did a bunch of passive research, i.e., reading blog posts and watching videos. Then I did some active research, i.e., building stuff. Then I did a little bit of work on a few Sprockets tickets. We're not talking about a ton of work here. I answered a few questions, investigated a few bugs, and created a few example apps (which @schneems had requested, in the Saving Sprockets presentation).

But I did these things. I made a contribution of effort to Sprockets. Because Ruby is 和.

Although the anti-Node.js bigotry in Saving Sprockets annoyed me, and although I disagree with the fundamental premise — that Sprockets is worth saving — the presentation was a great presentation. The work of open source volunteers has made my life better, which is the real point of Saving Sprockets, and I feel grateful for that, partly because @schneems convinced me in his presentation that I should feel grateful for that.

Remember, one of my theories re 和 is that it explains DHH way better than MINASWAN can. I think this theory is obviously correct and nobody but an utter dipshit could possibly contest it. And that's something DHH himself might say, in his Western, non-和 mode. And my idea here is that his Ruby code, and his many writings about how to write Ruby code, display a very classically Rubyist 和. And that this is the balance he strikes between the two cultural currents in the Ruby diaspora.

His balance isn't the only balance. I decided to strike a different balance. Although I think Sprockets is the wrong direction overall, I decided I would show a little gratitude and respect for all the free hard work that other people have done on Rails, and Webpack, and so many other technologies, because it's made it possible for me to do stuff. Even though I think that Sprockets's time is ending, that piece of technology's been something useful which I got for free since 2009. @schneems said in his presentation that we should help the projects that have helped us, and that all programmers should do that, and I agree. So I contributed a tiny bit to this project even though I ultimately believe it should be wound down and retired. I feel this is 和.

When I cloned Sprockets, I even named my directory ~/code/和, but this caused 44 spec fails, all with error messages like URI::InvalidURIError: URI must be ascii only, because Sprockets builds its testing URLs from your directory name. I didn't file a ticket for this; I felt it was an unusual and inconsequential edge case. I just created another directory with a more run-of-the-mill name.

But I kind of hope @schneems uses his powers of persuasion in the service of a more current project next time. Because how on earth can Rails say it exalts beautiful code and optimizes for programmer happiness when it's keeping really bad legacy code alive on the spare time of volunteers?

It kinda feels like watching a bunch of mean teenagers telling lies to a younger kid. "Fix this legacy code, dude! You'll be a hero!" You might as well tell him that the Spanish word for "friend" is "pendejo" while you're at it. ("Pendejo" literally means "pubic hair," and if you use it as a term of address, it's not taken as a compliment.)

Imagine an alternate universe in which, instead of seeing Ruby programmers as expendable components we can burn out in order to extend the lifespan of terrible Basecamp side projects from 2009, we saw the evolution of Node.js as a way to expand the Ruby community. We used to need gem wrappers in order to get package management for our "JavaScripts" — app/assets/javascripts still uses this increasingly bizarre plural — but now we have npm. We used to need a bad asset pipeline made of bad code, but now we have a ton of excellent options to choose from. Rails was ahead of the curve; all you need to do now is be grateful that the JavaScript world finally caught up with you.

The whole reason JavaScript has so many front-end compilers is because Jo Liss wanted to have the Rails asset pipeline, but for JavaScript. So she built it, and she called it Broccoli. Other people liked the idea, saw their own ways to do it, and so, many other projects followed after. Webpack is part of a very long lineage that originates with Broccoli, and thus with Rails, and therefore with Sprockets. If you can't give Node.js credit for expanding on a Rails idea, that's just a cultural sickness. To use Sprockets in 2016 is to deny the profound influence it had back in the day.

(Although, to be fair, a lot of these projects were inspired by the Google Closure Compiler, too.)

The crazy irony of it is that the JavaScript world needs omakase so badly. (That's what's so great about Elm.) So many people try to jump into modern front-end code and just give up because it's got none of the luxuries Rails has, which ease the learning curve and keep you focused on building apps instead of duct-taping infrastructure together. There are starter/boilerplate projects, of course, which aim to serve a purpose similar to rails new or scaffolding, but it's not the same. A couple gems integrate Webpack and Rails, which is a step in the right direction, but really, gem wrappers need to die, preferably in a fire so hot and intense they melt to their component atoms. Imagine if Rails presented a nice, clean wrapper around Webpack, or something like it, in a uniquely calm corner of, which people could trust and consider authoritative. If Rails did that, instead of clinging to its legacy code, it'd be a tremendous public service.

Somehow, the Rails culture looked at this landscape and instead saw an opportunity to get talented people to mire themselves in legacy code for free. In my opinion, that's dysfunctional, and it won't end well. And it's not the only dysfunction involved.

Hating on Node.js might just be evil sexist bullshit

I hesitated before posting this rant, because of a tweet from Wes Bos:

And especially a blog post by Aurynn Shaw:
when I started programming in 2001, it was du jour in the communities I participated in to be highly critical of other languages. Other languages sucked, the people using them were losers or stupid, if they would just use a real language, such as the one we used, everything would just be better...

This sort of culturally-encoded language was really prevalent around condemning PHP and Java. Developers in these languages were actively referred to as less competent than developers in the other, more blessed languages.

And at the time, as a new developer, I internalised this pretty heavily...

[but] I was asked to consider who and what I was criticising... starting with Wordpress-based design backgrounds and moving from more simple themes to more complex themes where PHP knowledge is required, to plugin development is a completely valid narrative, but a path that is predominately for women.
With apologies for the pedantry, I'm fairly certain Ms. Shaw is using French incorrectly here, and that the phrase she wants is de rigueur rather than du jour. Du jour can mean trendy, which would imply that programmer tribalism was a thing of the past, which I would disagree with, although I would also wish it were true. De rigueur means something required by etiquette, which I think is what Ms. Shaw intended to convey.

So first, yes, OK, the asset pipeline works. I think a company which wants performance, and an asset pipeline that is easy to reason about, will have less trouble if they --skip-sprockets and install Webpack instead, but if you're learning Rails at a code bootcamp, by all means, Sprockets will get the job done.

(Also, Webpack is easy to use with Rails, and I've literally written a book which will show you how to do it, but we'll get to that.)

Second, I'm pretty damn sure @schneems had no sexist intent. He didn't even seem to hate Node.js himself, he seemed like he was apologizing for acknowledging the existence of Node.js, because other people hated it. That's not necessarily on him, that's on the audience. And I've already discussed why this alarms me.

But consider what Ms. Shaw said about the WordPress -> PHP path being a path which brings a lot of women into programming. Is it possible that a lot of women come into programming via front-end work, which in Silicon Valley's parlance is often considered nothing more than "making the app pretty"? Who might be tasked with the work of making things pretty?

My guess is that JavaScript has more women in it than Ruby. And which programming language community had a big fight over gendered pronouns? Which community banned Douglas Crockford from an event for allegedly sexist remarks and allegedly making women uncomfortable? Which community's package manager is allegedly overrun with so-called SJWs?
This has a link to the awful KotakuInAction. I tried to use, but it died? If you're up on the latest move to make re this, please LMK.

Every time a Rubyist distances themselves from Node.js, they're distancing themselves from a community where feminists play a more prominent role than they do in Ruby. I'm sure this is unintentional in some cases, but I'm equally sure that it's intentional in others. I don't know how you would calculate the ratio of intentional vs unintentional, but I very much doubt that intent matters as much as the practical outcome anyway. If I'm right about JavaScript having more women than Ruby, then every time a Rubyist sneers at JavaScript, they're not just building a wall around Ruby which keeps JavaScript programmers out, but also a wall which keeps women out.

It's obvious that "Matz is nice and so we are sneering a lot" just plain does not make sense, but before I read Ms. Shaw's blog post, I assumed the cognitive dissonance was based on insecurity. I assumed it was about Rubyists being mad that they weren't the hot new flavor any more.

This happened a long time ago, and I thought it was the main reason Rubyists hate on Node so much:

But now I have to consider an interpretation which paints the Ruby culture as more discriminatory than I had imagined it to be.

And by the way, that too has roots in Japan, although, again, not necessarily in a malicious sense.

why ruby has no code of conduct

Ruby's had a very difficult time establishing a code of conduct, and Ruby core's Japanese contingent have been staunch in their opposition to it. There might be some sexism to that, and there might not. But there's no ambiguity on one crucial point: 和 emphasizes working out differences via compromise, while codes of conduct tend to set relatively stringent rules, and use ostracism and exile as modes of punishment.

This is a sensible strategy in the West. Exile is a mild punishment in an individualistic culture — more a safeguarding mechanism than a punishment at all, really — but a harsh one in a culture like Japan's. One of the more important aspects of 和 is that you're supposed to work for the good of the group, not just yourself. Imagine that's how you're raised, how everyone around you was raised. If you get exiled, what do you even do with your life? In a culture where children are taught that helping your group is the whole point of doing things, ostracism is not a mild punishment. It means no longer having any reason to do things, or to exist.

和 is so fundamental to Japanese culture that 和 was the Japanese word for Japan about a thousand years ago, and still functions as an adjective meaning "Japanese" in many older words. As Michael Carr wrote in The International Journal of Lexicography, "[t]he notion that Japanese culture is based upon wa 和 'harmony' has become an article of faith among Japanese and Japanologists."

Unsurprisingly, Japanese members of the ruby-core mailing list, Matz included, were especially opposed to the ostracism aspect of a CoC. If you understand 和, the Japanese Rubyists' response to a CoC not only makes more sense, but sounds almost aghast at American barbarism.

Matz said, for instance:
The CoC contains banning members from the community as a punishment. This does not mean anything but hurting individuals... Besides that one can regret the previous act and change the attitude.
Matz's implied solution — atonement — is virtually never seen in discussions of codes of conduct. It's not a coincidence that in Japanese, there are at least 11 different ways to say "I'm sorry". Let me tell you how that looks to me personally. I'm a first-generation American with roots in England — which, like Japan, is a formerly imperialist island nation, which has notoriously idiosyncratic etiquette, which is fond of compromise, and where the average resident apologizes constantly, frequently for things that are not even their fault. From my point of view, there's an evangelical, absolutist zeal to codes of conduct which I find distinctively American.

I support CoCs, in general, but I sometimes find it unsettling how certain CoC advocates are of their own righteousness. Likewise, perhaps again as a quasi-English person, the weirdest thing about CoCs to me is that virtually nobody ever even raises the possibility of apologizing, atoning, or learning anything. (Sorry to lean so hard on my tenuous claim to Britishness, I realize it's a bit disingenuous.)

Of course, most CoC discussions involve communities which are predominantly American, so this perspective might not even matter, overall. But the Ruby culture is a diaspora, and MINASWAN isn't worth shit if you're using it to decipher the drama around the failed attempt to adopt a Ruby Code of Conduct. "Matz is nice and so he is opposed to requiring that Rubyists be nice to each other" makes no goddamn sense at all, whereas "Ruby embodies the Japanese principle of harmonious balance, aka 和, and convincing Japanese Rubyists to abandon that spirit of inclusive compromise is exceedingly difficult" makes perfect sense. Let's say MINASWAN served its purpose, but it's time for a more nuanced point of view. Ruby is 和.

btw, buy my stuff

This blog post started out as a new chapter for the Rails As She Is Spoke sequel/revamp that I have planned, but along the way, I decided to write a new book entirely. It's a book about bringing modern front-end code to Rails applications. The TLDR will probably be "use Webpack, use Elm," but (as with many things) it's the journey, not the destination. The book starts out with a very simple, run-of-the-mill Rails app, with a front end using CoffeeScript and jQuery. I then carefully rebuild the front end for this app several times — in ES5, ES6, React, and ClojureScript (using Om) — to demonstrate modern front-end development and explore the tradeoffs that you have to consider when doing it. I also show you how to replace the asset pipeline with Webpack. These chapters are all written; I've also got a chapter on Elm which I'm still working on. (The code is written, but not the prose.) The book's working title is Modern Front-End Development with Ruby on Rails and Puppies, because there's going to be a puppy on every page.

btw, if you're like, "wait, Giles accused the entire Ruby community of sexism to sell a book," well, sure. I'm not above picking a fight now and then. Which reminds me: @schneems absolutely deserves his recognition as a Ruby Hero, but where's my recognition as a Ruby Villain? I know I'm not Lex Luthor, but I figure I'm at least that gorilla with a light bulb on his head. I don't even need an award ceremony, just throw me a t-shirt or something.

Anyway, my underrated villainy is actually kind of relevant to this discussion, because DHH isn't above picking fights either, and anybody who thinks otherwise is living in a dream world. Consider what Getting Real, a book DHH co-wrote, has to say on the subject:
Pick a fight

Sometimes the best way to know what your app should be is to know what it shouldn't be. Figure out your app's enemy and you'll shine a light on where you need to go....

One bonus you get from having an enemy is a very clear marketing message. People are stoked by conflict. And they also understand a product by comparing it to others. With a chosen enemy, you're feeding people a story they want to hear. Not only will they understand your product better and faster, they'll take sides. And that's a sure-fire way to get attention and ignite passion.
This could have been titled "The Troll's Guide To Marketing." Obviously, this is DHH in a Western mode. "Pick a fight" is not 和, but it's not MINASWAN either, is it? MINASWAN wants us to agree that Matz is nice, which is easy, but it also wants us to pretend DHH doesn't troll people all the time, which is ridiculous. The Rails Code of Conduct defines trolling as unacceptable behavior, which means Rails has to ban DHH from Rails, and Tenderlove as well! So the bad news is I guess I'm banned, but the good news is I'll be in excellent company.

I hate to provide MRAs with fuel for their arguments, but is there anything more ridiculous than the fact that the Rails Code of Conduct bans trolling? There are really only two elements to the Rails culture: trolling, and the color red.

One problem is that Internet generations are much shorter than regular generations, and "trolling" today means death threats, doxing, and harassment, while many Rubyists come from an Internet generation where "trolling" meant playful sarcasm like this:

But perhaps a much deeper problem is that sometimes people adopt a Code of Conduct in the spirit of lip service, and don't care too much about this kind of contradiction. Maybe they should.

Moving on, just because I'm saying it to sell a book doesn't mean it isn't true. In general, I like my books to contain true statements. I'd even go as far as to say that if you want to sell a book, telling people things for free is a good way to start, and those things you're saying for free should be true things.

So here are some true things: Rubyist contempt for Node.js might have plenty of non-sexist intent, but it also has a sexist effect — and it's also just impractical. You need front-end compilers. Sprockets is not a good front-end compiler. Webpack is. So are Rollup, Browserify, Google's Closure Compiler, and many other tools from the JavaScript world. Node.js in particular has much more overlap between systems programming and front-end work than the Ruby culture, by far, which is kind of what you should be looking for if you need somebody to write a front-end compiler, and it is so fucking weird that anybody would need to say that in the first place.

I saw my first Haskell talk at a Ruby conference and I saw my first Clojure talk at a Ruby conference too. Western Rubyists once had a magpie subculture, with lots of interest in other languages, and Ruby itself being a blend of many different ideas and influences. This is 和.

Imagine you were so foolish, as I once was, as to believe that the spirit of 和, which made Ruby what it is, was still alive and well over here in the West. When a Ruby project needed a front-end compiler, you would expect it to look at all the different ideas out there and pick a blend of the best options. And you'd expect at least some of those best options to come from the JavaScript world. Because the JS world is always going to produce better front-end compilers, and using one written in Ruby instead would already be a little eccentric even if that Ruby were beautiful and well-factored. But it is neither! Sprockets is one object with 105 methods!

What the fuck? No. Just no. Use the best tool for the job, and acknowledge the obvious fact that the best tool for the job might be written in another language. Get over this ridiculous Hatfields and McCoys bullshit. Otherwise, if we don't put a stop to this shit, the Rails culture is going to turn into an asshat pipeline.

We all know Rails is omakase.

And sure, omakase is all well and good, but here's a very related Japanese word: 旬, which I think is spelled しゅん. (My Japanese spelling is frequently wrong.) It's transliterated "shun" but it sort of rhymes with "moon." Its meaning is sort of like "freshest," except it also implies "most seasonal." Like you wouldn't serve a winter salad in July, no matter how fresh the ingredients were, because even if they were fresh, they couldn't be しゅん.

Sprockets isn't fresh, but it could be; @schneems is trying to make it fresh again, and even though I think it's a mistaken goal, he's having some success with it, and I respect that work. But today, Sprockets cannot be しゅん. That's just not possible any more. Whether we save it or not, its season has passed. When Sprockets was first written, I think only the Google Closure Compiler existed as a serious alternative. It couldn't handle CoffeeScript or CSS, and I don't think you could run it from the command line either (or if you could, you had to use Java, so it was painfully slow at best). Sprockets was a great innovation at the time, but that time is over. And if something is neither fresh nor しゅん, it doesn't belong on an omakase menu.

Still, I'm not actually proposing a change to the omakase stack. Basecamp appears to be developing NIH in its old age, but the omakase menu is up to the chef.

Let's just think about the real stack, the stack that every Rails app except Basecamp is using. The omakase concept might be arrogant, but it's not tyrannical. That --skip-sprockets option is right there, and the smart move is to use it.

And by the way, my new book shows you how to set up Webpack where Sprockets used to be.

Let's talk about this book

Here are a bunch of old tweets where people said nice things about my first book:

I've been doing this for a while, but recently, I decided to upgrade the design of my products. So I've created a new home for my books: is a new brand, and its goal is to signify a new standard. For example, in the past, I put no energy into design at all. Here's some screenshots from my old books:

And here are some design sketches for my new book:

In addition to a higher standard of design, this new book is longer than my other books, has more puppies than my other books, and ships with two git repos.

One repo is a quick overview of Flexbox, which simplifies a complex topic, while the other is the app I mentioned earlier, which teaches you all about the modern front end. It's a simple Rails app, with the same simple UI recreated in several different front-end technologies, including ES5, ES6, React with JSX, Webpack integration, ClojureScript and Om, and even a sprinkling of ES8 experimental features. (And with an Elm branch on the way quite soon.)

I also build a checklist of elements to consider when assessing the tradeoffs between each of these different front-end strategies, and run these different implementations through this checklist. The purpose is to make it easy for people to pick up new front-end technologies and evaluate them, because people who are new to front-end coding often have difficulty with that. This book aims to make that a lot easier.

Modern Front-End Development with Ruby on Rails and Puppies goes on sale soon, so sign up for my email list to learn more.

Update: early release version is on sale!

(But please ignore this sentence. I just want the foreigners to have to go to Google Translate. Sorry!)


Graphics for JVM

Let’s say I want to build high-quality desktop apps. I also want to do it on JVM. Don’t get your hopes up—we are not there yet. But I have a plan.

Why JVM?

It’s high level enough—performant, yet doesn’t make you overthink every memory allocation. It is cross-platform. It has great languages — Kotlin, Scala and, of course, Clojure. C# would do, too, but it doesn’t have Clojure.

Can’t you already build desktop apps on JVM?

You can. But traditionally, AWT, Swing, and JavaFX came with a lot of quality and performance drawbacks. They were so significant that only one company managed to build a decent-looking app in Swing. It is possible, but it requires tremendous effort.

Aren’t all Java UIs cursed?

No, not really. AWT, Swing, and JavaFX have their problems, but those problems are theirs alone. There's no fundamental reason why a high-quality UI can't be built on the JVM. It just hasn't been done yet.

Why hasn’t it been done yet?

Patience, I think. We are so used to things we can hack together in a week that nobody thinks in terms of years. And good UI requires years of work. It's a big commitment.

Why not Electron?

The first reason is performance. JS is a great language for building UIs, but it is much slower than the JVM. Wasm can be fast, but it implies C++ or Rust.

The second is the DOM. It is a horrible collection of hacks that make simple things hard and hard things impossible. I have thought many times "if only I were drawing this control/layout directly, I would've finished hours ago."

That means there's a very low ceiling, performance-wise and quality-wise, on what a web app can do. I believe we can, and should, do better.

Electron taught us two good things, though:

  • People crave native apps. Nobody wants to work from the browser.
  • People don’t care if apps don’t look native to the platform as long as they look good.

Is desktop still relevant?

I believe it is!

I recently watched an interview between an Android developer and an iOS developer. One of them asked:

“Does anyone still write desktop apps?”

To which the other answered:

“I have no idea… Maybe?”

Both of them were recording it on a desktop, in a desktop application, while having a call over another desktop application. Multiple other desktop apps were probably used for post-production. None of those were written by magic elves or left to us by a mighty ancient civilization. The desktop might be less trendy, but only because it’s harder to sell useless crap here.

And I've been on both sides. I once lived without a desktop for a few weeks. You get used to it, but it's certainly not ideal. Any sort of information gathering and processing is very painful: it's hard to select text, hard to search on a page, hard to have multiple tabs, hard to move data between apps. For example, you are adding an event to the calendar. You need to look up the event's address in your mail, which contains a link that opens a browser. By the time you've found what you need and returned to the calendar, it has been unloaded from memory and all context is lost. The ability to have multiple windows open at the same time is the desktop's superpower.

Phones are great for small, quick, single-purpose tasks. They have their place, but life is way more complex than a phone can handle. Many of us still need that bicycle for the mind.

Ok, what are you proposing?

The road to high-quality UI on JVM is a long one. We’ll need:

  • a graphics library,
  • a window/OS integration library,
  • a UI toolkit.

Today I am happy to announce the first part of this epic quest: the graphics library. It's called Skija, and it is just a collection of bindings to the very powerful, well-known Skia library: the one that powers Chrome, Android, Flutter, and Xamarin.

Like any other JVM library, it’s cross-platform. It runs on Windows, Linux and macOS. It’s as simple to use as adding a JAR file. Much simpler than massaging C++ compiler flags for days before you can even compile anything. Skija takes care of memory management for you. And the bindings are hand-crafted, so they always make sense and are a pleasure to use (as far as Skia API allows, at least).

What can you do with it? Draw things, mostly. Lines. Triangles. Rectangles. But also: curves, paths, shapes, letters, shadows, gradients.
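To give a feel for the API, here is a minimal sketch of drawing with Skija. The class names (Surface, Canvas, Paint, Path, Rect) are real Skija types, but the package name is the one from the time of this post and the artifact coordinates may have changed since, so treat this as an illustration rather than copy-paste material:

```java
import org.jetbrains.skija.Canvas;
import org.jetbrains.skija.Paint;
import org.jetbrains.skija.Path;
import org.jetbrains.skija.Rect;
import org.jetbrains.skija.Surface;

public class SkijaSketch {
    public static void main(String[] args) {
        // CPU-backed surface; GPU-backed surfaces are also available
        Surface surface = Surface.makeRasterN32Premul(400, 300);
        Canvas canvas = surface.getCanvas();
        canvas.clear(0xFFFFFFFF); // white background, colors are ARGB ints

        // Skija setters are fluent, so paints can be configured inline
        Paint paint = new Paint().setColor(0xFF264653);
        canvas.drawRect(Rect.makeXYWH(20, 20, 100, 60), paint);
        canvas.drawCircle(220, 80, 40, paint.setColor(0xFFE76F51));

        // Arbitrary shapes are built from paths
        Path triangle = new Path()
            .moveTo(60, 200).lineTo(140, 120).lineTo(180, 220).closePath();
        canvas.drawPath(triangle, paint.setColor(0xFF2A9D8F));
    }
}
```

From here you can snapshot the surface into an image, hand it to a window, or keep drawing: the canvas is just a target for draw calls.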

Drawn with Skija

Think of it as a Canvas API. But, like, really advanced. It understands color spaces, modern typography, text layout, GPU acceleration, stuff like that.

Drawn with Skija by Robert Felker

Oh, and it's fast. Really fast. If it's good enough for Chrome, it will probably be fast enough for your app too.

What can I do with it?

Many things! Custom UI widget libraries and whole toolkits, graphs, diagrams, visualizations, games. For example, we’ve played with implementing java.awt.Graphics2D and running Swing on top of it—seems to work fine.
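The Graphics2D experiment mentioned above hints at what such a backend has to support. This stdlib-only snippet (plain `java.awt`, not Skija) exercises the same kind of drawing calls against an in-memory image, no window required:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class Draw {
    public static void main(String[] args) {
        // Draw into an in-memory RGB image; works headless, no window needed
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, 100, 100);   // white background
        g.setColor(Color.RED);
        g.fillOval(25, 25, 50, 50);   // red circle in the center
        g.dispose();
        // The center pixel is now opaque red (ARGB)
        System.out.println(Integer.toHexString(img.getRGB(50, 50))); // prints ffff0000
    }
}
```

An alternative Graphics2D implementation backed by Skija would receive exactly these calls from Swing and route them to a Skija canvas instead.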

Why release a separate graphics library? How is it useful?

I am not a big fan of bundling everything together in one package. You can never guess all parts right—someone will always be unhappy about a particular decision.

Independent, interchangeable libraries are more flexible. The Unix way.

What’s with the rest of the puzzle?

Both things are in the works at JetBrains.

  • For window management/OS integration, there’s Skiko. It says that it is Skia for Kotlin, but it also implements window creation, events, packaging, the whole deal. It even integrates with AWT and Swing.

  • And for UI toolkit there’s Compose Desktop. It’s a fork of Android Compose, a declarative UI framework, that runs in a desktop environment.

But the beauty is that it doesn’t even have to be these two!

Don’t like AWT? Bring your own window library.

Kotlin is not your cup of tea? Use any other JVM language.

Compose performs poorly under your load? Pray for someone to write an alternative, or write your own (sorry, there's no good solution yet; it's still early days).

And, by all means, if you want to roll your own window library or widget toolkit—please, do! That’s what we hope to happen.

In conclusion

Skija is a part of the bigger picture. Java UI progress was blocked by the poor-performing Graphics2D. It’s not anymore. What will grow out of it? Time will tell.

Please give Skija a try, and let us know what you think. Or, maybe, start using it—we’d be happy if you did! Here’s the link:


Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.