We can fly with what we have

There was a time when I just didn’t trust Emacs. This lack of confidence was the result of mixing good and evil: listening to other reliable Emacsers’ opinions while being too lazy to see for myself what Emacs could do. The outcome of this endeavour was a frenetic copying and pasting of code snippets into my init.el, blindly dragging in more and more external packages.

Don’t get me wrong, external packages made Emacs what it is today for me. I can’t even imagine using it without the likes of Magit or CIDER. However, ignoring what is already there and reaching for MELPA every time I have an itch to scratch has made me overlook built-in niceties like project.el.

Flymake fell prey to this line of reasoning. Flycheck has always seemed like the way to go, so much so that I barely registered João Távora’s and other Emacs developers’ efforts to improve Flymake. Since my recent experiments with project.el have turned my eyes inwards again, I wanted to see if I could live without Flycheck.

There are three places where I need on-the-fly syntax checking: Emacs Lisp, Clojure, and prose.

Kind people took care of Emacs Lisp and Clojure for me: package-lint-flymake comes with package-lint and flymake-kondor is a valid alternative to flycheck-clj-kondo. But I wasn’t able to find an existing integration with proselint, so I decided to provide one.

My first tries with flymake-easy didn’t go very well. I asked for help on Emacs StackExchange, before realizing I could use flymake-quickdef like flymake-kondor does and answer my own question. The next step was making the solution available to everyone in the form of a package, and so I published the little flymake-proselint.

Flymake may not have the extensive support for checkers that its bigger brother has, and it doesn’t seem to have the same huge community behind it, but it’s still a great library to play with. There is a comparison between the two projects on the Flycheck website, so have a look there before making your choice. Note that the comparison doesn’t mention flymake-quickdef, which I find easier to use for extending Flymake.

As for me, my requirements for on-the-fly syntax checking are met by Flymake. At the end of the day it’s good to know that after all these years together Emacs can still surprise me.


Centralized schemas and microservices, match made in Hell?

So you’ve ended up in the microservice swamp, or somewhere else where you need to deal with a zoo full of fractious, opinionated, distributed systems. Now you’ve found out that there’s a set of common things many of your services need to have a shared understanding about, but don’t. You also prefer to retain even a bit of your dwindling sanity until the project is over. What to do?

This post is an attempt to distill into a digestible format the experiences I and my team have had during the last few years building a distributed system around centralized schema management (I’m going to just say CSM from here on.) I’m not entirely sure we were that sane to begin with, but at least the loss in cognitive coherency has been manageable. Your mileage may vary, caveat emptor, etc.

Centralized schema, in its most simplified form, means that you have a common authority who can tell every other part of your system (service, API, lambda function, database, etc.) what kinds of objects or data structures your system contains. It’s the responsibility of each service to determine what to do with that information — the assumption being, however, that whenever a service needs to communicate with other services (or the outside world) about a Thing it should use the authority-provided definition of that Thing.

Microservices kidnapped by CSM

Systems that need to work with a large set of various kinds of data objects across multiple services are the prime candidates to benefit from centralized schema management. Many systems don’t need it, and since CSM means giving up a portion of the flexibility that service-oriented architectures (whether micro, midi or macro) bring, it’s not a good fit for every system.

In our case we have a system that is supposed to manage all the assets of the Finnish national road network. Assets in this case meaning the roads themselves (surface material, how many lanes, …), and anything related to them such as traffic signs, surface markings, fences, off-ramps, service areas, various measurements, and so on. Altogether that adds up to roughly a hundred different types of assets. Each of them needs to be imported, stored, validated, exported to interested parties via some API, returned as search results, drawn on a map… you get the idea.

Why centralize schema if everything else is distributed?

The common wisdom around microservices is that everything needs to be decentralized. That’s how you’re supposed to reap their benefits. Unfortunately, that wisdom tends to slink away (looking slightly guilty as it goes, with those benefits in its pocket) whenever you need to have more than two of your services talk about something, because it’s a lot of work to keep all parties in agreement on what they’re talking about.

CSM is a tool and architectural pattern to manage that problem. It forces all parties interested in a given kind of data to use a standard definition of it — what properties the data has, what’s the set of allowed values for each property, which ones are mandatory, and so forth. This is normally not optimal from the viewpoint of any particular service: depending on how they’re built, services usually have a richer set of tools available than the lowest common denominator an external schema represents. For example, a TypeScript-based service would rather use its own type system for defining objects, and a relational schema defined in SQL in the fourth normal form is a thing of beauty compared to any JSON schema document.

For systems that have just a few kinds of data, or just a few different services that deal with it, the tradeoff is likely not worthwhile. But when you have a dozen services that all need to be able to process some or all of the set of 100 data types, implementing CSM is the only way to stave off the impending madness. Even if that results in all of your services having to submit to a jackbooted overlord who controls what they may or may not say in public. (Please do not try to extrapolate this into anything human-related.)

A portrait of a CSM in three acts


To be of any use, a CSM implementation, whether homebrew or off-the-shelf, needs to include at least the following:

  • Some way to distribute schemas in a format or formats that are understood by all parts of the system.
    • Your options format-wise include JSON Schema, Apache Avro, Protobuf schemas, and the comedy options of XML Schema or RELAX-NG. (Not going to link to those two.)
    • Ideally you define your schema just once, and distribute it in multiple formats.
    • If you have, or suspect you’ll ever have, any external APIs, then your selection narrows. There are not many formats that are widely accepted, and JSON Schema is probably the least problematic option.
  • For each of your services, a means of consuming (using) the schema. Having multiple formats available makes it easier to find a good consumer or validator.
    • It’s a nice bonus if your consumer can process the schema into something native for your under-the-hood implementation. This includes F# type providers, Clojure spec or Plumatic schema, or even just plain old Java code generators. (Let’s not mention JAXB.)
    • If you can do this at runtime, that’s even better. But you can often get by with compile-time consumption.
    • At a minimum, your consumer must be able to tell you if a data structure is valid or not in the context of a schema.
    • If it can also do coercion of almost valid data into completely valid data, like parsing an ISO8601 string into a DateTime object, that’s good.
  • Schema distribution to consumers can be done during compilation or packaging, but it’s more microservicey if you have a schema registry service. It can just be nginx serving static files, in a pinch.
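A minimal schema consumer along these lines might look like the following Clojure sketch, using metosin’s scjsv JSON Schema validator. The schema here is inlined for illustration; in a real system it would come from the registry service:

```clojure
(require '[scjsv.core :as scjsv])

;; A tiny JSON Schema for something like the Fence example.
;; In practice this map would be fetched from the schema registry.
(def fence-schema
  {:type "object"
   :required ["code"]
   :properties {:code   {:type "string"}
                :height {:type "integer"}}})

;; scjsv compiles the schema into a validation function.
(def validate-fence (scjsv/validator fence-schema))

;; The validator returns nil for valid data, or a seq of error maps.
(validate-fence {:code "F-1" :height 120})
```

This covers the minimum requirement above: the consumer can at least tell whether a data structure is valid in the context of a schema.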

We’re not in MVP land anymore

The above is fine for a workshop demo. However, it’s likely your system will need something a bit more advanced to survive in the wild. To upgrade your CSM from the “Home Edition” into “Professional”, you’ll probably need these:

  • Support for schema evolution. Schemas are like war plans, they never survive enemy contact. You need to prepare for this by having a way to handle changes to your schemas without rebuilding everything and breaking all your external APIs.
    • But what to do if you end up with an object of version 1, and you need to upgrade it into version 3? Or the reverse?
    • Transform is what you do. Your CSM system can help by providing transformation services, or supplying transformation instructions alongside your schemas. To do this efficiently and in a way compatible with as many languages and platforms as possible is difficult, though. More on this later on.
  • Support for schema composition. If you need CSM to begin with, then your data types probably have things in common between them. You’ll want to be able to compose your schemas from components. It’s boring and error-prone to copy-paste the same list of create-time, created-by, modify-time and modified-by properties to every schema.
    • I don’t need to mention this complicates evolution and transformation somewhat, do I?
    • You could also consider deriving schemas from other schemas, but that smells like OOP-style inheritance and we don’t do that.
  • Enumerations as first-class citizens. Real-world systems always have various lists of enumeration-type values that evolve over time. You need to be able to apply the same tools to them as the rest of your schemas: evolution, transformation, etc.
  • API management integration — if you can connect your CSM with your API management solution you can have it handle inbound/outbound data validation for you.

Nah, man. I’m pretty friggin’ far from MVP

In for a penny, in for a pound? Why not go for the whole hog? Since you’re already committed, why leave money on the table by not extracting some more value from your fancy bespoke CSM? To evolve your CSM into its final form you could…

  • Embed business logic into your schemas. If your CSM supports arbitrary metadata you can include things such as UX rendering instructions alongside your property definitions.
    • This opens up the rabbit hole of dynamically constructing forms and other UX layouts purely based on your schema. Don’t say I didn’t warn you.
    • Other possibilities include having things like embedded Elasticsearch mappings (for the special cases where you can’t generate them automatically from your schema, of course.)
  • Allow runtime end-user modifications to your schemas (probably just administrators, but still.) If the previous item was a rabbit hole, then this is a portal to the netherworld. All of your services would need to be capable of runtime schema reloading for this to work.

This will quickly veer into “if all you have is a hammer” territory so let’s stop here.

Are there existing options?


There are things like Confluent Schema Registry, which provides tight integration with Apache Kafka but also works with anything else that can consume JSON Schema, Avro or Protobuf. I don’t have any hands-on experience with it, but it looks like it provides most, but not all, of the functionality described above.

In any case, we rolled our own because we could not find anything fitting our requirements. Besides, this post is already too long to include a market survey.

How we did it (and why)

As I said above, in our project (Velho, for the Finnish Transport Infrastructure Agency) we ended up creating our own, custom centralized schema management solution. This was due to several reasons:

  • Our project has to produce a long-lived system, and our architectural solution to that was a “midi-service” (i.e. less granular than micro, but definitely not a monolith) approach. So we would have a moderately large number of independent services, each with its own responsibility for data storage and so on.
  • To allow for a ship-of-Theseus-type evolution path for the entire system, our internal architecture would explicitly and consistently support a polyglot implementation. This means every service chooses its own tech stack, and we disallow the use of any integration technologies specific to a single language or platform. To keep ourselves honest we dogfood, i.e. consume internally the same protocols and APIs we provide externally.
  • The number of data types our system needs to manage is around 100, of varying complexity, each with their own properties and quirks. They’re split into multiple collections and each collection is managed by a different service.
  • The above made the need for a CSM evident early on.
  • The existing options did not feel suitable. Granted, we didn’t do any really comprehensive market research. So sue me.

Our stack

AWS native. Containerized with AWS Fargate, FaaS with AWS Lambda. Multiple independent SQL databases (Aurora PostgreSQL). Elasticsearch. Redis. S3. AWS API Gateway. Infrastructure-as-Code via CloudFormation.

Languages: Clojure backend services. ClojureScript frontend with Reagent, Re-frame and Web Components. Lambdas in Clojure, Python and Javascript.

What we use CSM for

  • We validate incoming data (APIs, import jobs, etc) using CSM-provided schemas.
    • Inbound data using old schema versions is supported, and we automatically transform it into the latest version when needed.
  • We use Elasticsearch to provide advanced search capabilities, and generate ES indexes automatically based on our schemas, with custom mappings embedded in the schema metadata
  • When we read data from our own storage, we validate and (if needed) upgrade its schema version. This allows us to not bother with updating our data in storage whenever our schemas change — we have multiple kinds of storage backends, not all of which support easy in-place updates.
  • We provide schema definitions for our external partners in OpenAPI format
  • We provide a “data catalog” user interface for our end-users, which includes human-friendly descriptions of the various data types we manage
  • We construct various UX views into our data dynamically based on our schemas. This includes…
    • schematic views of the road network
    • a search builder allowing the end-user to graphically construct a complex query into our data
    • in the future we’ll provide asset view and update forms generated from our schema
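The upgrade-on-read idea from the list above can be sketched as a small function. The names here are hypothetical, not our actual implementation:

```clojure
(defn read-asset
  "Bring a stored asset up to the latest schema version before use.
  `transforms` maps a version number to a function that produces the
  next version of the asset (illustrative shape only)."
  [asset latest-version transforms]
  (loop [a asset]
    (let [v (:version a)]
      (cond
        (= v latest-version)     a
        (contains? transforms v) (recur ((transforms v) a))
        ;; A gap in the transform chain means no automatic upgrade path.
        :else (throw (ex-info "No transform path to latest version"
                              {:stuck-at v}))))))
```

Because upgrades happen on the read path, the stored copies never need to be rewritten when a schema changes.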

Get to the point already

Our schema registry service is written in Clojure like most of our project and deployed as a Docker container. The schemas themselves are written as EDN files, which are more-or-less equivalent to JSON or YAML files, but with added Clojureness (including the ability to embed code). Each of the schema definitions contains the complete description of a single asset type, including

  • All of its versions from the beginning of time until the current day. Versions are identified by monotonically increasing numbers (1,2,3,5…)
  • Transformations from one version into the next. We only support forward transformations (i.e. increasing versions). A gap in version numbering indicates that an automatic transformation would not be possible.
  • For each version, a specification of its properties: their types, cardinalities, whether optional or mandatory… Collections including lists and sets are supported, as are nested objects. We are limited mainly by what’s possible to express in JSON Schema. The properties are defined using a notation based on Metosin’s Data Spec.
  • Arbitrary metadata for the asset and all its properties. This includes human-readable names, Elasticsearch mappings and anything else we’ll come up with. We also use metadata for routing: each asset defines which of our services “owns” it and thus requests for it can be routed to the correct service.

Here’s a sanitized, redacted and translated example of the schemas for Fence, which is part of the Road Furniture namespace (and therefore owned by the furniture-registry service). There are two schema versions, a transformation from v1 to v2, and some metadata. The schemas refer to two generic components (the velho/import directive) which include properties defined elsewhere, and there’s a property whose type is an enum schema (velho/enum).

{:latest-version 2
 :versions {1 (velho/import
                   {:properties {:code string?
                                 (ds/opt :material) (velho/enum :furniture/material)
                                 :type (velho/enum :furniture/type)
                                 :size (ds/maybe pos-int?)}})

            2 (velho/import
                   {:properties {:code string?
                                 (ds/opt :material) (velho/enum :furniture/material)
                                 :type (velho/enum :furniture/type)
                                 :height (ds/maybe pos-int?)}})}

 :transforms {1 "$merge([$, {'properties': $merge([$sift($, function($v, $k) {$k != 'size'}), {'height': $.'size'}]),
                             'version': 2}])"}

 :metadata  {:oid-prefix ""
             :owner-service :furniture-registry
             :indexing true
             :name "Fence"
             :fields {:properties {:_metadata {:name "Properties" :indexing true}
                                   :code {:_metadata {:name "Code" :index true}}
                                   :material {:_metadata {:name "Material" :index true}}
                                   :type {:_metadata {:name "Fence type" :index true}}
                                   :height {:_metadata {:name "Fence height" :index true}}}}}}

We don’t serve these EDNs outside our schema registry service. EDN is a Clojure-specific format, therefore it’s an implementation detail, and we want to punish everyone equivalently. Our schemas are transformed into an OpenAPI 3 definition (which is an extension of JSON Schema) and served via a REST API.

OpenAPI does not directly support all of our features (e.g. transforms and metadata) so we use extensions for them. The resulting definition is still completely valid OpenAPI, and third parties would just ignore the more esoteric stuff.

Eat it up

Currently we consume our schemas only from Clojure or ClojureScript code. We do this by…

  1. Fetching the OpenAPI definition via our REST API
  2. Translating it back into the same EDN format as seen above
    • At this stage we also extract our custom stuff from the OAS extensions.
  3. Processing our custom extensions (import and enum)
  4. Feeding the processed and evaluated schemas to Data Spec, which…
  5. … ends up registering the schemas as Clojure specs.
  6. The resulting specs are then used in our code, both backend and frontend, in the usual Clojure/CLJS fashion to validate and coerce incoming data.
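Steps 4 and 5 of the pipeline above might look roughly like this. This is an illustrative sketch using spec-tools’ Data Spec, not our actual code, and the names are made up:

```clojure
(require '[clojure.spec.alpha :as s]
         '[spec-tools.data-spec :as ds])

;; Roughly what feeding the processed Fence v2 schema to Data Spec
;; produces: a reusable spec built from a plain data description.
(def fence-v2
  (ds/spec {:name ::fence-v2
            :spec {:code               string?
                   (ds/opt :material)  keyword?
                   :type               keyword?
                   :height             (ds/maybe pos-int?)}}))

;; The resulting spec is then used as usual to validate incoming data.
(s/valid? fence-v2 {:code "F-1" :type :furniture/mesh :height 120})
```

The same spec can also drive coercion via spec-tools, which is how almost-valid inbound data gets nudged into shape.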

It’s not a coincidence that our “native” data format is so close to Data Spec already. Hooray for dynamic languages and runtime eval.

Yes, we do runtime consumption of schemas and our services (both frontend and backend) can handle schemas that change on-the-fly.

About those transforms…

More than meets the eye, isn’t it?

As I alluded to somewhere above, transformations between schema versions are an issue, primarily because we can’t really define them in a language/platform-independent way. (Unless we go full XML Schema in which case XSLT would work. But we don’t want to. Never go full XML.) Fortunately we have a good-enough solution in JSONata which is an expression language for querying and transforming JSON-like data. It has implementations for Java, JavaScript, Python and .NET (at least), covering the common platforms nicely.

It must be said that JSONata is far from perfect. The various implementations differ in the set of features they support, and this is not really documented anywhere.

Woman angry at a cat who's not using XSLT

In the example above, the JSONata transform takes a version 1 object, adds a key properties.height which is set to the value of the properties.size key, and removes the now-unnecessary size. It additionally sets the version property to equal 2, as is good and correct for a version 2 object. The version property itself is imported from the general/basic-props component schema alongside many other properties, so it is not visible in the example.
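For illustration, the same v1-to-v2 transform can be written as a plain Clojure function (this is just a restatement of the JSONata expression, not how our system executes transforms):

```clojure
(defn v1->v2
  "Mirror of the JSONata transform in the example: copy :size to
  :height, drop :size, and bump :version to 2."
  [asset]
  (-> asset
      (update :properties
              (fn [props]
                (-> props
                    (assoc :height (:size props))
                    (dissoc :size))))
      (assoc :version 2)))

(v1->v2 {:version 1 :properties {:code "F-1" :size 120}})
;; => {:version 2, :properties {:code "F-1", :height 120}}
```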

An astute reader would at this point remember that JSON Schema and OpenAPI do not have support for these kinds of transformations. That’s entirely correct — we have custom consumer-side code to run them, and we deliver the transformations via OpenAPI extensions. Our consumers so far have been solely Clojure or ClojureScript-based, so we only have client code for browsers (JSONata/JS used from ClojureScript) and the JVM (JSONata/Java via interop from Clojure).

Can I play with your toys?

Hopefully yes, in the near future! We’d like very much to open-source our CSM implementation but there are a few bureaucratic hurdles to overcome yet (and it needs some cleanup).

Final words

To summarize:

  • Centralized schema management is necessary when you have a distributed system with many data types handled by many services.
  • There are a lot of decisions you need to make when implementing CSM related to formats, schema evolution, transformations etc.
  • Planning for the future is haram in the Agile methodology, but in this case you might want to make an exception, since CSM tends to infiltrate all parts of your system whether you want it to or not.
  • Properly done CSM makes microservice architecture slightly more manageable.
  • You can throw together a bespoke CSM if you need to and be pretty happy in the end.
  • I know this because we did it.
  • We ended up implementing quite a lot of advanced features, including schema evolution with embedded transformations, runtime schema loading, etc.

This post took a long time to write. I’ve attempted to be not entirely boring, and I hope you got something useful out of it.


The things described in this post are a result of a lot of teamwork. While I might have written the largest number of lines (easy enough when you end up throwing away the entire first implementation!) the good stuff wouldn’t have been possible without the rest of the Velho team. Thank you, Mikko, Kimmo and the rest — you know who you are and you’re awesome! The dumb parts are my own, except for that meme picture, which is Mikko’s.


React Native from Clojurescript

React Native

I have always been curious about mobile app development. In 2018 a friend and I tried to launch a startup, and the first thing I attempted was to develop a mobile app. I wanted to write my code only once for Android and iOS, not the same logic twice in Java and Swift.

At the time of my research the two contenders were Xamarin and React Native. The promise is the same: write the logic once, let the framework manage the native code.

After reading some pros and cons I decided to write C# with Xamarin, because I was scared of React, Javascript and the frontend world. It was an OK experience, but the framework was not mature and C# did not excite me. When I hit my first real problem, implementing authentication, I gave up.

Fast forward 2 years and React Native is mature and I am no longer afraid! Reagent made me fall in love with React, and Clojurescript allows me to skip Javascript.

React Native, with the support of Facebook, has developed rapidly (it was the most active GitHub repo in 2019). It can leverage the React ecosystem, it has good documentation, and its generic components are well designed.


Inspired by this blog post, my first attempt at React Native from Clojurescript was with Krell. Krell’s philosophy is to provide a very thin layer over React Native. I had some hiccups during the setup, and I found it still (too) barebones.

A few months later I saw another announcement on Slack: figwheel for React Native. I followed the Getting Started docs and quickly had my iOS simulator running alongside figwheel hot-reloading.

I had also been hearing very good things about Expo, which handles complicated things like camera, location and notifications for you. It is supported out of the box; here is my ios.cljs.edn:

^{:react-native :expo
  :launch-js ["yarn" "ios"]}
{:main app.core}

When I run cider-jack-in-cljs, CIDER asks me to run figwheel-main with the ios configuration. This opens a cljs REPL and runs yarn ios in the background, which is defined in package.json and runs "expo start --ios". With the iOS Simulator running I can then open the Expo app and select my iOS build.


My first steps consisted of learning what a React Native component is. This is the first example in the React Native docs:

import { Text, View } from 'react-native';

const YourApp = () => {
  return (
    <View style={{ flex: 1, justifyContent: "center", alignItems: "center" }}>
      <Text>Hello World!</Text>
    </View>
  );
};
Javascript makes it slightly verbose, but the concept is quite simple: our app is a View component with a Text component inside. Since React Native is really just React, we can use reagent to get hiccup-like syntax and smart UI reloading.

Looking on github for repos using the cljs + react-native combo, I realized that every developer uses js interop in a slightly different way to wrap react-native components. The reagent-react-native project helps eliminate this “common boilerplate” by providing ready-to-use components. This is my deps.edn:

{:deps {org.clojure/clojurescript     {:mvn/version "1.10.773"}
        io.vouch/reagent-react-native {:git/url "https://github.com/vouch-opensource/reagent-react-native.git"
                                       :sha     "54bf52788ab051920ed7641f386177374419e847"}
        reagent                       {:mvn/version "0.10.0"
                                       :exclusions  [cljsjs/react cljsjs/react-dom]}
        com.bhauman/figwheel-main     {:mvn/version "0.2.10-SNAPSHOT"}}}

And here is the minimal example above, with reagent syntax:

(ns core.app
  (:require [react]
            [reagent.react-native :as rrn]))

(defn hello []
  [rrn/view {:style {:flex 1 :align-items "center" :justify-content "center"}}
   [rrn/text "Hello World!"]])

It couldn’t get any simpler. The reagent code is an abstraction over this lower-level interop code:

;; assumes (:require [react] ["react-native" :as rn])
(def <> react/createElement)

(<> rn/View
    #js {:style #js {:flex           1
                     :alignItems     "center"
                     :justifyContent "center"}}
    (<> rn/Text (str "HELLO WORLD!!")))

Following the React Native docs was relatively easy. I only had trouble when wrapping the FlatList example:

const FlatListBasics = () => {
  return (
    <View style={styles.container}>
      <FlatList
        data={[
          {key: 'Devin'},
          {key: 'Dan'},
        ]}
        renderItem={({item}) => <Text style={styles.item}>{item.key}</Text>}
      />
    </View>
  );
};

This is how I solved it:

(defn flat-list []
  [rrn/flat-list
   {:data        [{:key "Devin"}
                  {:key "Dan"}]
    :render-item #(<> rn/Text
                      #js {:style #js {:color "black" :textAlign "center"}}
                      (.-key (.-item %)))}])

The render-item function is passed a single argument, an object; we can access the data through its .-item property.

Calling clojure

You soon realize that 99% of the mobile apps we use can be represented by React Native components, some simple data logic and styling. What makes cljs attractive for mobile app development is that you can write your logic in Clojure.

To go beyond the basic tutorial, I decided to develop a quick app to play sudoku. First I set up the View code to represent the Sudoku grid as a flat-list, as explained above. Then, to implement the Model code I resorted to Clojure, functional programming and lazy sequences.

Instead of having to spin up figwheel + Expo + Simulator, I could simply open a clj REPL. After writing the code for my sudoku grid in sudoku.clj (note the defmacro):

(defmacro sudoku-grid []
  (->> (repeatedly nine-rows)
       (filter valid-rows?)
       (filter valid-columns?)
       (filter valid-blocks?)
       (first)))
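The macro relies on helpers (nine-rows and the valid-* predicates) that aren’t shown. A purely illustrative sketch of what they could look like, with generate-and-filter shown only for clarity:

```clojure
(defn nine-rows
  "Nine random permutations of 1-9; each row is individually valid."
  []
  (vec (repeatedly 9 #(vec (shuffle (range 1 10))))))

(defn valid-group?
  "A row, column or block is valid when it contains each of 1-9 once."
  [group]
  (= (set group) (set (range 1 10))))

(defn valid-rows? [grid]
  (every? valid-group? grid))

(defn valid-columns? [grid]
  ;; Transpose the grid, then reuse the row check on the columns.
  (every? valid-group? (apply map vector grid)))

(defn valid-blocks? [grid]
  (every? valid-group?
          (for [br (range 3) bc (range 3)]
            (for [r (range 3) c (range 3)]
              (get-in grid [(+ (* 3 br) r) (+ (* 3 bc) c)])))))
```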

I could simply “require it” in sudoku.cljs:

(ns app.sudoku
  (:require-macros [app.sudoku]))

I could have just written the logic directly in sudoku.cljs, but this approach lets me leverage the whole clj ecosystem and permits faster experimentation. Here is a screenshot of the result; it was a lot of fun:


Load Shedding in Clojure

This is the second post in a two-part series. In the first part of the series, we discussed the different aspects of load shedding and why it matters to think of it as part of the design of web services. We recommend reading it first as it sets the context for this post. Here, we will implement our load shedding strategy in Clojure and corroborate it with experiments.

Every web service has a finite capacity of work it can do at a time. If this is exceeded, then the performance of the server degrades in terms of response times and availability. As discussed in the previous post, we will attempt to shed load before a service’s capacity is exceeded. We will do this by the following strategy:

  1. Limit the number of concurrent requests in the system
  2. Add an upper bound on the response time
  3. Recover from load by unblocking clients

Conceptual model

In our load shedding server, we want the server container (Jetty) threads to be the liaison between request queueing and request processing. That is, we want Jetty threads to hand off requests to application threads instead of processing them. Jetty threads should only monitor request processing in the system as a whole, part of which is ensuring that excess load is shed when there is too much to process.

The conceptual model for the load shedding server exhibiting the above properties is shown below. The salient aspects of the model are:

  1. A bounded queue of requests
  2. Measurement of time spent in the queue
  3. Measurement of time spent in processing a request

Jetty threads preempt responses for requests which have waited too long either due to too much queueing or due to slow request processing. This ensures that neither the processing time nor the queuing time is too large. Time is measured at the queueing level and the processing level.

A conceptual model for our load shedding server. The important aspects are: Jetty responds to clients when requests wait too long in the queue or take too long to be processed. Jetty hands off request processing to application handler threads and preempts responses for those requests which take too long.

Load monitoring middleware

In our load shedding server, we will use asynchronous handlers in Jetty to detect and react to load in the system. Our system starts with defining a Jetty server that uses the options :async-timeout-handler and :async-timeout as shown below. :async-timeout-handler is a function invoked by Jetty to respond to a request that has been in the system for too long (including waiting time and processing time). The amount of time a request is allowed to wait for a response is configured by the :async-timeout option. Our server uses Compojure routes, a timeout handler and a timeout value. Neither the Compojure routes nor the web application in general is asynchronous; the asynchronous processing is transparent to the application and visible only to the middleware. We will describe this in detail in the next sections.
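The server definition might look roughly like this sketch. The route, port and timeout values are illustrative, and it assumes ring’s Jetty adapter with its :async?, :async-timeout and :async-timeout-handler options, with the timeout handler receiving the standard async handler triple:

```clojure
(ns load-shedding.server
  (:require [compojure.core :refer [defroutes GET]]
            [ring.adapter.jetty :as jetty]))

(defroutes routes
  (GET "/work" [] {:status 200 :body "done"}))

(defn async-timeout-handler
  "Called by Jetty when a request exceeds :async-timeout."
  [request respond raise]
  (respond {:status 503 :body "Request timed out"}))

;; wrap-load-monitor is the load-monitoring middleware described below.
(def app (wrap-load-monitor routes))

(defn -main []
  (jetty/run-jetty app
                   {:port 3000
                    :async? true
                    :async-timeout 5000    ; total ms a request may spend in the system
                    :async-timeout-handler async-timeout-handler}))
```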


In the above snippet, the app is a Jetty server which uses the wrap-load-monitor middleware. The goal of this middleware is to monitor the current load in the system and drop requests when necessary.
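A sketch of the middleware could look like this. It keeps the application handler synchronous and does the hand-off on the Jetty thread; hand-off-request is left open here, since its definition is what sets the load-shedding behaviour:

```clojure
(defn wrap-load-monitor
  "Async Ring middleware sketch: delegate each request to the
  application thread pool and shed load with a 429 when the pool
  has no capacity left."
  [handler]
  (fn [request respond raise]
    (try
      (hand-off-request
       (fn []
         (try
           ;; The application handler runs on an application thread;
           ;; the Jetty thread has already returned by this point.
           (respond (handler request))
           (catch Throwable t (raise t)))))
      (catch java.util.concurrent.RejectedExecutionException _
        ;; The pool rejected the task: respond gracefully instead of
        ;; letting the request pile up.
        (respond {:status 429 :body "Service overloaded, please retry later"})))))
```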


The purpose of this middleware is to hand off requests to the application threads. Jetty threads only liaise between the application threads and waiting requests. The definition of the hand-off-request function determines how the system behaves under load.

If at any time hand-off-request does not have the resources to process the request, it is expected to throw a RejectedExecutionException, signalling to the load monitor that the request was rejected due to overload. The load monitor then responds to the client with a graceful error code, such as the 429 above.

With this, the server setup is ready, and we can now look at how to achieve the three goals we concluded were necessary for a load shedding implementation in the previous post.

Goal #1: Limited concurrent requests

We will use a thread pool with a bounded queue of tasks to implement the hand-off-request function used by the load-monitor middleware. To that end, we use the ThreadPoolExecutor class, which lets us specify the number of threads in the pool and takes a queue in which pending tasks wait.
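A sketch of such a pool follows; the pool and queue sizes here are illustrative, not tuned values from the post.

```clojure
(import '(java.util.concurrent ThreadPoolExecutor TimeUnit ArrayBlockingQueue))

;; A fixed-size pool: 8 worker threads, and at most 16 requests waiting in
;; the queue. Once the queue is full, further submissions throw
;; java.util.concurrent.RejectedExecutionException.
(def request-processor-pool
  (ThreadPoolExecutor. 8 8
                       60 TimeUnit/SECONDS
                       (ArrayBlockingQueue. 16)))
```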


We are using an ArrayBlockingQueue to limit the number of pending tasks for this thread pool. This ensures that there will be an upper bound on the number of tasks that can be waiting for a thread. An alternative to this is to use a SynchronousQueue, in which case there will be no queueing in the application thread pool and tasks which cannot be served immediately are rejected. By and large, the choice of queue sets the strategy for shedding load when the number of concurrent requests reaches its limit.


To sum up, Jetty hands off request processing to the request-processor-pool which is a thread pool with a bounded queue. At any time, if there are too many requests in the system, the request-processor-pool will reject the task which the load-monitor middleware will handle by gracefully responding to the client with an error status. Therefore, the number of concurrent requests in the system is always limited.

Goal #2: Bounded response time

The response time as seen by the client is a sum of the time a request waits for service and the time it takes to process the request. The goal is to ensure the total time a client waits for a response is bounded.

Let us now define the behaviour of hand-off-request to see how we can achieve a bounded response time. In the snippet below, hand-off-request converts the asynchronous handler that Jetty sees into a synchronous one, since the rest of the application code processes requests synchronously. This may not be necessary if the application is asynchronous to start with. The asynchronous nature of the Jetty handler used in this example is only visible to the load-monitor middleware. This approach to load shedding can be applied to any web server written in Clojure/Ring without affecting the rest of the code or business logic — the asynchronous nature of the handler is abstracted away from the application implementation.

In the snippet below, we define hand-off-request to be a function that runs work on the application thread pool request-processor-pool. It detects delays in queueing, based on which it decides whether to process the request or drop it. We are using a custom fork of Ring where we have added :request-ts-millis as a key to the Ring request map. This timestamp is derived from Jetty’s getTimeStamp method, which records the time a request came into the system (prior to queueing). We use it to check whether a request has waited too long in the request queue.
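The snippet itself is not reproduced here; a sketch under the stated assumptions might look like this. :request-ts-millis is the fork's addition to the request map, and the 1000 ms queue-wait budget is an illustrative value.

```clojure
(def max-queue-wait-millis 1000) ;; illustrative queueing budget

(defn hand-off-request [handler request respond raise]
  ;; .submit throws RejectedExecutionException when the pool's queue is
  ;; full; the load-monitor middleware turns that into a 429.
  (.submit request-processor-pool
           ^Runnable (fn []
                       (let [waited (- (System/currentTimeMillis)
                                       (:request-ts-millis request))]
                         (if (> waited max-queue-wait-millis)
                           ;; Waited too long in the queue: drop it, and let
                           ;; the :async-timeout-handler answer the client.
                           nil
                           (try
                             (respond (handler request))
                             (catch Throwable t (raise t))))))))
```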


With this mechanism, a request that has been waiting too long in the queue is not processed at all and is dropped. Since we are using Jetty’s asynchronous handlers with a timeout, whenever a request takes longer than the configured value, the client will get the response as determined by :async-timeout-handler defined when creating the Jetty server (as shown in the above sections). If the request has already waited too long in the queue, it is not even picked up for processing, which gives the server headroom to recover from load that has piled up.

The load monitor middleware ensures that the application seen by Jetty is an asynchronous one. This allows us to set a limit on the request processing time. If a request takes longer than this time, the client will be responded to by the :async-timeout-handler independent of the request processing itself, so that clients will never wait longer than a configured timeout. This is a change we made to our fork of Ring to support load shedding.

Goal #3: Recover from load by unblocking clients

Whether it is the processing of requests that has become slow, or a sudden surge of incoming requests that is causing traffic to jam, the Jetty container will respond to the client regardless, with the preconfigured timeout response as determined by :async-timeout-handler. Therefore, clients never wait too long for a response. More importantly, this ensures that clients will not hold on to server resources when the load increases. This frees up resources, which helps the server recover once the cause of the load has subsided.

The entire source code for our implementation is available here. We are using a custom fork of Ring with the above changes, which has been discussed in this pull request: https://github.com/ring-clojure/ring/pull/410. The pull request has been merged with some of the changes required for this implementation.

Experiment #1: Sudden bursty traffic

To demonstrate our load shedding server implementation, we created a setup with two servers, one with load shedding enabled and one without. Other than this, everything else — including the handler functions and the clients making requests — was identical for both servers. The source code used for these results is available on Github.

  1. The black vertical lines in the graphs that follow denote the duration of the load in the system — a sudden burst of requests which lasted for under a minute.
  2. The response time was set to be 3 sec, that is the handler took exactly 3 sec to respond to each request.

Perceived response time during bursty traffic

Let us look at the time taken to get a response, as seen by the client for both servers.

  1. For the non-load shedding server, the response time rose to as high as 3 min on average although the processing time was constantly 3 sec (plotted as “request handling time”).
  2. The effect lasted much longer than the cause itself. It took more than 3 min for the non-load shedding server to recover from a burst that lasted less than a minute.
  3. For the load shedding server, there was a dip in the response time during the duration of the load. This is because after the system was saturated, requests were not even picked up for processing, skewing the measurement of response time. The load shedding server never took longer than 3 sec to respond to the client, which is the time the request processing was set to take.
Response time climbs for the non-load shedding server even after the sudden burst of requests ended. While the time taken to process requests remains constant, the response time perceived by the client is ever-increasing. The load shedding server ensures that the client never sees a high response time.

Response codes during bursty traffic

Let us now compare the response codes seen by the clients during the bursty load.

  1. The non-load shedding server was unreachable from the client when the load became too high (denoted as “FAIL” when the server refused connections).
  2. The load shedding server quickly recovered after the burst and returned to normal behaviour. As per our implementation, the 429 and 503 statuses signify too many requests being made and requests taking too long to process, respectively. These statuses only appeared during the burst of requests, after which the server regained full functionality.
The non-load shedding server reports no error code until its queues are full at which point it stops accepting requests, which is a bad experience for its users. The load shedding server ensures that the load is communicated to the clients with graceful error codes and it recovers to full functionality after the surge ends.

Bursty traffic: Summary

This first experiment demonstrated the behaviour of the load shedding server and the non-load shedding server when there is a sudden burst of traffic. To summarise:

  1. A non-load shedding server can become very slow during a sudden burst of requests. The impact on the server response time can last much longer than the load itself.
  2. If the load is too high, the non-load shedding server can become entirely unresponsive.

Experiment #2: Degraded downstream dependency

In our second experiment, keeping the rate of requests constant, we increased the request handling time to simulate a degraded downstream dependency.

  1. There were two routes: one was set to take 3 sec and the other was constantly set to take less than 1 sec.
  2. After a while (at the first dotted black line) the slow route’s processing time was increased from 3 sec to 4 sec. The fast route was kept as is.
  3. The processing duration of the slow route was brought back down to 3 sec after 6 min (at the second dotted black line).
  4. Both routes on both servers were called the same number of times at the same rate.

Perceived response time during degraded downstream

As with the first experiment, we will first compare the response time as seen by the clients.

  1. For the non-load shedding server, requests took as long as 25 sec to be answered when they should have taken at most 4 sec. The “request handling time” never rose above 4 sec.
  2. The load shedding server never blocked a client longer than 6 sec, a pre-configured value.
  3. Both servers recovered quickly after the degraded dependency was resolved.
The response time for the non-load shedding server increases manifold when there is a degraded dependency (from 4 sec to 25 sec). The load shedding server ensures that the client never sees a response time higher than 6 sec, which is a preconfigured value. Note: The average value for “request handling time” is an average of both the slow and fast routes and is therefore less than 3 sec.

Response codes during degraded downstream

In this section we compare the response codes received by the clients for both types of servers during a degraded dependency.

  1. The non-load shedding server never reported an error status, but this does not reflect the user experience. The status is misleading because the client is most likely experiencing delays or timing out, while the server is unaware and reports no error.
  2. The load shedding server responded with a status of 503 when requests had waited too long and 429 when there were too many incoming requests. The load shedding server was able to detect that there were more requests in the system than it was configured to handle.
The non-load shedding server does not report any error code, while the response time keeps increasing silently. This is a slow poisoning in the system which can go unnoticed while the clients face delays. The load shedding server ensures that it communicates errors gracefully to the client and recovers after the degraded service is back to normal capacity.

Degraded downstream: Summary

In this experiment, we simulated a degraded dependency by increasing the processing time of a route and observing the impact on the two servers.

  1. Without load shedding, response times seen by clients can keep increasing when there is a degraded dependent service.
  2. The non-load shedding server does not report any error code and lets the load build up in the system. This may be a silent killer because the application is not able to notice this degradation but its clients notice significant delays.

Further improvements

We looked at load shedding from the perspective of protecting our system against unprecedented load. But other factors may be taken into consideration to make the load shedding more tuneable.

  • Cost analysis of shed load: The Google SRE blog rightly calls this kind of load shedding a one-size-fits-all Procrustean load shedding. In our approach, we rejected any request that arrived after a threshold was exceeded. There may be other considerations to make: some routes may be more impactful or non-retriable and should be shed last, or we might want per-route load shedding strategies.
  • Localisation of load: Some problems may be solved with a more granular level of control that constructs like circuit breakers can provide. Localised problems like slowness in one database which is used for a small fraction of requests can be better handled by employing circuit breakers which have a more intimate relationship with the application logic than load shedding.
  • Dynamic load shedding: We hard-coded the threshold on the number of requests that are open in the system. Deciding this number for production can be difficult, primarily because the performance characteristics of services change over time and vary between services. A more dynamic approach, relying on properties that can be tuned on the fly, might be better suited. For example, TCP congestion control changes its window size (the number of packets that are allowed to be in-flight) using the AIMD algorithm: the window size is increased linearly when packet delivery is successful and reduced exponentially when packets fail to be acknowledged (when there are too many in-flight packets).


In this post, we showed how to implement load shedding in a web service written in Clojure. Our implementation of load shedding used asynchronous Jetty handlers and a load monitoring middleware to detect a high load scenario and shed extra load so as to preserve the quality of the service as well as prevent a cascading effect of failure from one service to another.

This concludes this two-part series on load-shedding in web services. You can find the first part here which talked about load shedding in general. To summarise this series:

  1. Load shedding is a pre-mortem strategy that deliberately reduces service during extreme load, in the interest of preventing a wider incident or a complete collapse of the service.
  2. Capacity planning should be part of the design of high throughput web services.
  3. A simple but effective strategy can go a long way in preventing cascading failures.
  4. Queueing time, processing time and the number of concurrent requests need to be bounded during high load scenarios.
  5. Jetty’s asynchronous handlers and Ring’s middleware can be used to make any Clojure web service shed load without changing the application code.

The code used in this post is available on Github, and its dependent fork of Ring is available here.

Load Shedding in Clojure was originally published in helpshift-engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.


The Prisoners Part 2

Dawn of the second day.

According to the internet, the thing I intend to build is called a Roguelikelike, teetering on the very edge of being a Roguelike. So it goes; we'll see if I end up taking the title or not.

Last time, we laid out the basics of prisoners, their interactions and their strategies. This time, let's get some different scenarios and some player interaction going.


Payoff matrices involve deciding who gets what bonus or penalty as a result of an interaction. Given a pair of defect/cooperate choices, a payoff-matrix will return the scores to be delivered to each player in turn.

(defun payoff-matrix (cc-a cc-b cd-a cd-b dc-a dc-b dd-a dd-b)
  (let ((tbl {(cons :cooperate :cooperate) (list cc-a cc-b)
	      (cons :defect :cooperate) (list dc-a dc-b)
	      (cons :cooperate :defect) (list cd-a cd-b)
	      (cons :defect :defect) (list dd-a dd-b)}))
    (lambda (a b) (lookup tbl (cons a b)))))

Now we can define some basic scenarios. A dilemma is the name I'll pick for the situation where cooperating is better for the group, and both defecting is the worst thing for everyone, but a single defector will end up better off by defecting.

(defparameter dilemma
  (payoff-matrix
   3 3  1 5
   5 1  0 0))

A stag-hunt is a situation where a pair of players can pool their resources for a greater prize, and ignore each other for the lesser. If either player attempts to hunt the stag alone, they get nothing, while their defecting partner still gets a rabbit.

(defparameter stag-hunt
  (payoff-matrix
   3 3  0 1
   1 0  1 1))

A trade is one in which both parties benefit, but to which both parties must agree.

(defparameter trade
  (payoff-matrix
   3 3  0 0
   0 0  0 0))

A theft is one where a player takes from the other. But if both players cooperate, or both try to rob each other, they come to an impasse.

(defparameter theft
  (payoff-matrix
   0 0   -3 3
   3 -3   0 0))

A trap is a situation where cooperating leads to disaster, ignoring the situation leads to no gain, and defecting to make it clear to your partner that you don't intend to follow ends up benefiting both players.

(defparameter trap
  (payoff-matrix
   -3 -3  2 2
    2  2  0 0))

The last scenario I'll concern myself with is the mutual-prediction, where guessing what your partner/opponent will choose benefits you, and failing to do so does nothing.

(defparameter mutual-prediction
  (payoff-matrix
   3 3  0 0
   0 0  3 3))


In order to move through the world, our prisoners need a world to move through. Let us begin at the ending.

(defparameter ending
  {:description "You have come to the end of your long, perilous journey."})

There is nothing to do at the end other than display this fact.

(defun repl! (adventure)
  (format t "~%~%~a~%~%" (lookup adventure :description)))
THE-PRISONERS> (repl! ending)

You have come to the end of your long, perilous journey.


But what led us here was a choice. An adventure is more than a description; it's also the options, a prisoner, the scenario, and a way to continue the action. Continuing means making a choice, effectively playing the opposing/cooperating prisoner, and abiding by the results.

(defun mk-adventure ()
  (let ((prisoner (polo)))
    {:description
     "A stranger approaches. \"I see you have baubles. Would you like to trade, that we both may enrich ourselves?\""
     :cooperate "accept" :defect "refuse" :prisoner prisoner :scenario trade
     :continue (lambda (choice)
                 (let ((their-choice (play prisoner)))
                   (update! prisoner choice)
                   (funcall trade choice their-choice)
                   ending))}))

This sort of adventure also takes a bit more machinery to run from the repl. We need to present the description, but also get an appropriate choice from the user. Getting that choice is a bit more complicated than you might think at first.

(defun get-by-prefix (lst prefix)
  (let ((l (length prefix)))
    (loop for elem in lst
       when (and (>= (length elem) l)
		 (== (subseq elem 0 l) prefix))
       do (return elem))))

(defun get-repl-choice (adventure)
  (let* ((responses (mapcar #'string-downcase (list (lookup adventure :cooperate) (lookup adventure :defect))))
	 (r-map {(string-downcase (lookup adventure :cooperate)) :cooperate
		 (string-downcase (lookup adventure :defect)) :defect})
	 (by-pref nil)
	 (resp ""))
    (loop until (and (symbolp resp)
                     (setf by-pref
                           (get-by-prefix
                            responses
                            (string-downcase (symbol-name resp)))))
       do (format
	   t "~a/~a:"
	   (lookup adventure :cooperate)
	   (lookup adventure :defect))
       do (setf resp (read)))
    (lookup r-map by-pref)))

Well behaved players are easy to deal with, true...

THE-PRISONERS> (get-repl-choice (mk-adventure))

THE-PRISONERS> (get-repl-choice (mk-adventure))

THE-PRISONERS> (get-repl-choice (mk-adventure))


... but we want to be a bit more general than that.

THE-PRISONERS> (get-repl-choice (mk-adventure))
Accept/Refuse:fuck you
Accept/Refuse: (error 'error)
Accept/Refuse: (quit)


That's the only hard part though. Interacting with the game once we're sure we have valid input from our player is relatively simple.

(defun repl! (adventure)
  (format t "~%~%~a~%~%" (lookup adventure :description))
  (when (contains? adventure :continue)
    (let ((choice (get-repl-choice adventure)))
      (repl! (funcall (lookup adventure :continue) choice)))))
THE-PRISONERS> (repl! (mk-adventure))

A stranger approaches. "I see you have baubles. Would you like to trade, that we both may enrich ourselves?"


You have come to the end of your long, perilous journey.


This is obviously not the perilous journey being spoken of. At least, not all of it. The simplest way to extend it into one is to wrap scenarios around our existing adventure.

(defun mk-adventure ()
  (let ((def (defector)))
    {:description "A muscled street thug approaches, knife drawn."
     :cooperate "surrender" :defect "run" :prisoner def :scenario theft
     :continue (lambda (choice)
                 (let ((their-choice (play def)))
                   (update! def choice)
                   (funcall theft choice their-choice))
                 (let ((prisoner (polo)))
                   {:description "A stranger approaches. \"I see you have baubles. Would you like to trade, that we both may enrich ourselves?\""
                    :cooperate "accept" :defect "refuse" :prisoner prisoner :scenario trade
                    :continue (lambda (choice)
                                (let ((their-choice (play prisoner)))
                                  (update! prisoner choice)
                                  (funcall trade choice their-choice)
                                  ending))}))}))
THE-PRISONERS> (repl! (mk-adventure))

A muscled street thug approaches, knife drawn.


A stranger approaches. "I see you have baubles. Would you like to trade, that we both may enrich ourselves?"


You have come to the end of your long, perilous journey.


Of course, since we want it to be much longer and more perilous, we'll want that process automated to at least some degree.

(defun wrap-scenario (adventure scenario)
  (insert
   scenario :continue
   (lambda (choice)
     (let* ((them (lookup scenario :prisoner))
            (their-choice (play them)))
       (update! them choice)
       (funcall (lookup scenario :scenario) choice their-choice)
       adventure))))

(defun mk-adventure ()
  (wrap-scenario
   (wrap-scenario
    ending
    {:description "A stranger approaches. \"I see you have baubles. Would you like to trade, that we both may enrich ourselves?\""
     :cooperate "accept" :defect "refuse" :prisoner (polo) :scenario trade})
   {:description "A muscled street thug approaches, knife drawn. \"Yer money or yer life, fop!\""
    :cooperate "surrender" :defect "run" :prisoner (defector) :scenario theft}))

This isn't enough for the Roguelikelike title, and I don't think I'll get there today, but I do want the ability to make an arbitrarily long adventure. The dumbest way of doing this is to make a list of scenarios, and pick from them when the need arises.

(defun random-scenario ()
  (pick
   (list
    {:description "A stranger approaches. \"I see you have baubles. Would you like to trade, that we both may enrich ourselves?\""
     :cooperate "accept" :defect "refuse" :prisoner (polo) :scenario trade}
    {:description "A muscled street thug approaches, knife drawn. \"Yer money or yer life, fop!\""
     :cooperate "surrender" :defect "run" :prisoner (defector) :scenario theft})))

(defun mk-adventure (&key (scenarios 5))
  (let ((adventure ending))
    (loop repeat scenarios
       do (setf adventure (wrap-scenario adventure (random-scenario))))
    adventure))
An adventure of even 5 scenarios will end up being repetitive since we currently only have a grand total of two. But we can do something about that...

(defun random-scenario ()
  (pick
   (list
    {:description "A stranger approaches. \"I see you have baubles. Would you like to trade, that we both may enrich ourselves?\""
     :cooperate "accept" :defect "refuse" :prisoner (polo) :scenario trade}
    {:description "A muscled street thug approaches, knife drawn. \"Yer money or yer life, fop!\""
     :cooperate "surrender" :defect "run" :prisoner (defector) :scenario theft}
    {:description "As you walk through an expansive market square, a gambler motions you over. \"Fancy your chances at evens or odds?\""
     :cooperate "Evens!" :defect "Odds!" :prisoner (gambler) :scenario mutual-prediction}
    {:description "A hunter approaches you in a forest clearing. \"Hallo there, young one. Would you help me hunt a deer? I've had enough hares for now, but I promise we'll eat well if we work together!\""
     :cooperate "<Nocks bow>" :defect "Rather go my own way" :prisoner (dantes) :scenario stag-hunt}
    {:description "\"Hey follow me into this bear trap!\""
     :cooperate "Sure; I've grown tired of living" :defect "No. No, I'd rather not."
     :prisoner (robin) :scenario trap}
    {:description "You see a merchant ahead of you, paying little attention to his overfull coin purse. You could cut it and run."
     :cooperate "It's too tempting" :defect "No; I hold strong"
     :prisoner (dantes) :scenario theft}
    {:description "At the end of your travails with your co-conspirator, you get to the treasure first and can pocket some if you want."
     :cooperate "Take it" :defect "No, we split fairly"
     :prisoner (gambler :defect 5) :scenario dilemma})))

This gives me some ideas about how to go about generating scenarios a lot more programmatically, but I'll leave that for later, when I'm in the right frame of mind to do cosmetic improvements.

Wanna play a game?

THE-PRISONERS> (repl! (mk-adventure))

At the end of your travails with your co-conspirator, you get to the treasure first and can pocket some if you want.

Take it/Split fairly:split

You see a merchant ahead of you, paying little attention to his overfull coin purse. You could cut it and run.

It's too tempting/No:it's

"Hey follow me into this bear trap!"

Sure; I've grown tired of living/No. No, I'd rather not.:no

You see a merchant ahead of you, paying little attention to his overfull coin purse. You could cut it and run.

It's too tempting/No:it's

A stranger approaches. "I see you have baubles. Would you like to trade, that we both may enrich ourselves?"


You have come to the end of your long, perilous journey.


This is about as far as I'm going today, and I'm not entirely sure how far I'm going during my next session.

As always, I'll let you know.


Ep 084: All Sorts

Each week, we discuss a different topic about Clojure and functional programming.

If you have a question or topic you’d like us to discuss, tweet @clojuredesign, send an email to feedback@clojuredesign.club, or join the #clojuredesign-podcast channel on the Clojurians Slack.

This week, the topic is: “sort and sort-by.” We lay out a list of ways to sort your data, ordered by their relative power.

Selected quotes:

  • “Clojure core is a shared vocabulary that we all can use.”
  • “I can’t sort things in my mind, I’ll just have the computer do it for me.”
  • “There only needs to be one sort-by function, because of the flexibility of passing in the classifying function.”
  • “We don’t need any framework-y mumbo-jumbo.”
  • “We do interop, but only at the edges.”



Rewriting the Technical Interview

Set free, we dance
For a moment, we play
We thousand little leaf-ships
Rejoicing in the clear morning light

The mothercone unfurls her softening scales, and in the time of days I am emancipated. Adrift in the chill dawn light, my siblings flutter, slender forms catching breeze, a susurrus of spiraling, drifting, swirling in the cool air above the creekbed. It is the first and only time in our long lives that we will know such freedom: those the creek does not wash away, those who do not succumb to rot or hunger, will root and raise, and so commune, fixed, for the rest of their days. It is early spring, and the snow still piles in soft hillocks around my mother’s roots. It is time to wait.

I nestle in the hollow at the edge of a stone, where the warmth of the sun builds through the day, and this hearth keeps a narrow channel of earth from freezing. When it is time, my coat dissolves. I thrust my hypocotyl downwards, and gasp at the acid bite of the soilwater here—the taste of last fall’s needledrop filling my first root. Drinking fully, I inflate my cotyledons, which I have carried curled since my gestation in the mothercone. I take my first full breaths, and taste a mild sweetness in their greening.

It is not enough.

I have landed in a place too dark, and chose my time too early. The stone’s shadow occults the sun each morning, and what hours of light are left each day, snow muffles. I struggle, raising one inch, then two, my meristem straining skyward. A corona of needles—my first true leaves—erupts from my terminal bud, but this costs precious energy and I grow hungrier each day. I cleave the cotyledons away, and focus on height, height at any cost. I must rise for light, but there is nothing left. I consumed my seed-hoard in forming my first root.

I ache in grief, and bitter flavonoids leak into the soil. Days pass in quiet weakness. Then there comes a probing, a gentle prickling at my roots. Fine threads from somewhere in the soil bring bright pinpricks of contact. A hundred thousand fibers caress me. I am woven into the fabric of the earth.

With this weaving comes life: nitrogen, iron, yes, but most important for me are the sugars the soil-web provides. With these I survive the late snows and lack of light, and when summer comes, I swell with borrowed strength. Bud upon bud burst from my stem, and I grow to nearly four inches. For we Picea are rarely alone: the adults who tower above me give their nutrients to the web of fungi which fills the soil. When I call out, they answer, and the rhizome shares their bounty with me.

Fall’s dormancy comes with a gentle satisfaction: I am going to make it. As the days shorten, I set my buds and prepare for frost. I concentrate water in the interstices of my cells, forming pockets so pure no ice crystals can form. Thus hardened, I wait out the winter, and in spring, my twigs burst forth in earnest. I race eager, reckless for the sun. I reach five meters in my first decade, fourteen by twenty. My taproot delves, my trunk swells. The stone which once sheltered my seed rests at my base, half swallowed by cambium. I raise branch after branch, mining the creek-moist air, and rejoicing in the long-light of summers.

My siblings and I commune through roots and air. Their scents fill me with belonging. When I am healthy, I give to the soil that which is asked: phosphorous, manganese, sugars of my own. When I am weakened in my fourth decade by a beetle infestation, their strength comes to me again. I flood the air with volatile organics, and my siblings fill their needles with defensive acids. When the beetles come for them, they are ready.

I shed many boughs that year, but my roots are strong, and together, we survive. The light remains sweet. In my second century, I—


Startled, you lift your hand from the clean-cut slab of spruce which now serves as the lobby bench, and dab gingerly at the corners of your eyes. You are not, you sense, the first woman who has wept in this place.

“Hey…” The receptionist says, drawing out the syllable softly. She is kneeling with a box of tissues in one hand. Jenean, you remember. She offers you a key, and the close warmth of a smile. “Why don’t you take a minute to freshen up? Out the door, turn right, left, door at the end of the hall.”

You pause for a moment, wondering how to explain the taste of sap lingering on your tongue. “Don’t worry,” Jenean reassures. “I’ll cover for you.”

You don’t doubt for an instant that this woman, with her split braids neatly circling the crown of her head, could cover for anything from a late train to an ongoing jewel heist. You take the key from her outstretched hand, and murmuring the words quickly, trace a blessing on her palm. “Thank you.”

When you return, your steps are light, aura clear, makeup freshly set. These things matter, in a technical interview.

“We like to begin with a little coding exercise,” your interviewer explains. He is kind, heavy-set, and with streaks of white and black in his greying beard, which reminds you of a badger you met in the Canadian Rockies. His name, he tells you, is Martín, and he is a senior backend engineer. “Just something simple, before we move on to architecture discussions.”

Nothing is simple, you think, and smile wistfully. Sprinkle salt in the form of parentheses, and thus enclosed, open your laptop.

“You may have heard this one before,” Martín begins. “I’d like you to write a program which prints the numbers from one to a hundred, but with a few exceptions. For multiples of three, print ‘Fizz’ instead, and for multiples of five, print ‘Buzz’. For multiples of both, print ‘FizzBuzz’.”

You have, in fact, heard this one before. In your browser, search for “fizzbuzz solution”, and pick the first link that looks promising. Copy and paste. You are a real engineer.

for (i = 1 ; i < 101 ; i++) {
  if (i % 15 == 0) {
    println("FizzBuzz");
  } else if (i % 3 == 0) {
    println("Fizz");
  } else if (i % 5 == 0) {
    println("Buzz");
  } else {
    println(i);
  };
}

“Ah, er, yes.” Martín is trying to break some unfortunate news as gently as possible. “The point of these questions is… for you to write the program yourself, rather than using someone else’s code.”

You shift, surprised. “People haven’t seemed to like that so far.”

He seems on the verge of asking why, but decides against it. Instead, he requests that you run the program, so you can discuss how it works.

“That,” you apologize. “Might be slightly more difficult.”

Martín’s brow furrows. You make a mental note to check on that badger. Perhaps a brace of fieldmice, this time.

Stretch your hands overhead, fingers interlocked, and stretch from side to side. This is not particularly magical, but it feels nice. Then return to your editor, and anchor yourself to the void.

(defn fixed-point [f x]
  (let [x' (f x)]
    (if (= x x')
      x
      (recur f x'))))

Your taproot extends to the base of Yggdrasil itself, and you feel its strength connect with yours. Now, many things are possible. You begin to weave a spell of translation: first in sequences…

(defn rewrite-seq-1
  ([f term] (rewrite-seq-1 f [] term))
  ([f scanned term]
   (if (seq term)
     (if-let [term' (f term)]
       (into scanned term')
       (recur f (conj scanned (first term)) (next term)))
     scanned)))

… and then for any kind of form:

(defn rewrite-term-1 [f term]
  (cond (map-entry? term) term
        (vector? term)    (vec (rewrite-seq-1 f term))
        (seq? term)       (if (seq term)
                            (seq (rewrite-seq-1 f term))
                            term)
        :else             (or (f term) term)))

“Ah yes,” you sigh. “The four genders.”

Martín clears his throat: a sort of gentle chuffing. “You’re building a term rewriting system. To solve… FizzBuzz?”

“Yes.” You reply. “You did ask me to. Remember?”

Take the opportunity to remember another’s memory: the feeling of spreading branches, of forking, dividing, needles erupting from your fingers.

(require '[clojure.walk :refer [postwalk]])

(defn rewrite-walk-1 [f term]
  (postwalk (partial rewrite-term-1 f) term))

(defn rewrite-walk [term f]
  (fixed-point (partial rewrite-walk-1 f) term))

“Hang on,” Martín interrupts. “I understand why you’re rewriting sequences—it’s so you can transform numbers like ‘three’ and ‘six’ into ‘Fizz’, and so on. But you don’t need to do any sort of tree-walking recursion for that. It’s a flat sequence.”

“Trees,” you murmur. “Are often under-appreciated.”

Martín nods at this, and you move on. Without breaking eye contact, weave a language for translation.

(defn single-rule [[[guard term] body]]
  `(fn [~term]
     (when (~guard ~term)
       ~body)))

(defn seq-rule [[bindings body]]
  (let [[bindings [_ more]] (split-with (complement #{'&}) bindings)
        more-sym    (or more (gensym 'more))
        term        (gensym 'term)
        pairs       (partition 2 bindings)
        guards      (map first pairs)
        names       (map second pairs)
        guard-exprs (map-indexed (fn [i guard]
                                   `(~guard (nth ~term ~i)))
                                 guards)]
    `(fn [~term]
       (try
         (when (and (sequential? ~term)
                    (<= ~(count guards) (count ~term))
                    ~@guard-exprs)
           (let [[~@names ~'& ~more-sym] ~term]
             ~(if more
                body
                `(concat ~body ~more-sym))))))))

(defn rule [rule]
  (if (vector? (first rule))
    (seq-rule rule)
    (single-rule rule)))

Release your gaze. You have done Martín a kindness by hiding this from him. “Now, a small macro to rewrite a sequence.”

“Of integers.”

“Sure.” You know how to let strangers assume what makes them comfortable.

(defmacro rewrite [expr & rules]
  (let [rules   (partition 2 rules)
        matches (map rule rules)]
    `(let [rules# [~@matches]]
       (reduce rewrite-walk ~expr rules#))))

It would be a good idea, at this point, to reassure Martín that you are still on track.

user=> (rewrite ["Og" 1 "til javanissen!"]
         (number? x) (str (inc x))
         [string? x, string? y] [(str x " " y)])
["Og 2 til javanissen!"]

You sing a few bars to yourself. It is good praxis.

“So… you’ve got this term-rewriting system, which can rewrite individual terms, or any subsequence of things matching some predicates. And you’re planning to use that to solve FizzBuzz?”

“Precisely!” You grin brightly. He’s on board now, though he doesn’t know it.

“Okay. That’s a bit unorthodox, but… valid, I guess. Can you show me the transformation rules now?”

Summon a language from the void.

(defrecord FnCall [fun args])

(defn a [type]
  (fn [term]
    (instance? type term)))

Martín blinks. Something has gone wrong.

(defmacro c [& exprs]
  (rewrite `(do ~@exprs)
    [symbol? fun, seq? args] [(FnCall. fun args)]
    ((a FnCall) fc)          (cons (:fun fc) (:args fc))))

“People always complain,” you murmur. “That Lisps have too many parentheses. What they really mean is that their positions are too far to the left.”

user=> (c reduce(+, map(inc, [1, 2, 3])))
9

“And that there’s no infix or postfix notation. Well, that’s fixable.”

(require '[clojure.string :as str]) ; needed for str/ends-with? and str/replace

(def infix
  (into '{% mod == =}
        (map (juxt identity identity)
             '[< <= > >= + - / *])))

(def postfixes
  {"++" inc
   "--" dec})

(defn postfix-sym [x]
  (when (symbol? x)
    (when-let [p (first (filter (partial str/ends-with? (name x))
                                (keys postfixes)))]
      (list (postfixes p)
            (symbol (str/replace (name x) p ""))))))

(defmacro c [& exprs]
  (rewrite `(do ~@exprs)
    [symbol? fun, seq? args]  [(FnCall. fun args)]
    [any? a, infix f, any? b] [(FnCall. (infix f) [a b])]
    (postfix-sym x)           (postfix-sym x)
    ((a FnCall) fc)           (cons (:fun fc) (:args fc))))

“There. Much better.” You debate, for a moment, whether your chimera is pleasing or abominable, and settle on beloved, if quirky, pet.

user=> (c 1 + 2 == 3)
true
user=> (c let([x 3] x++ * 2))
8

Martín is agog. “You can’t seriously be thinking about doing this.”

“I know, I know,” you apologize. “They’re all left-associative this way. We could split them out into separate rules by binding precedence, but we are on the clock here and I can never remember the exact precedence rules anyway.” Truth be told, no one can. It’s called Ritchie’s Revenant. You don’t remember why, and assume that’s the Revenant’s fault as well.

“We might as well fix the assignment operator, while we’re here.”

(defmacro c [& exprs]
  (rewrite `(do ~@exprs)
    [symbol? fun, seq? args]  [(FnCall. fun args)]
    [any? a, infix f, any? b] [(FnCall. (infix f) [a b])]
    (postfix-sym x)           (postfix-sym x)
    [symbol? var, #{'=} _, any? rhs, & more] [`(let [~var ~rhs] ~@more)]
    ((a FnCall) fc)           (cons (:fun fc) (:args fc))))

user=> (c x = 3;
           x++ / 5; )
4/5

“That’s… that’s not how that’s supposed to work.” Martín has the look of a man whose daughter has tamed multiple eagles, and insists on serving them tea and tiny hors d'oeuvres using the family china, and who also (and not entirely coincidentally) harbors a justifiable fear of displeased eagles.

“You’re quite right. Shall we do conditionals?”

(defrecord Cond [branches])
(defrecord Elsif [test body])

(defn braces [m]
  (cons 'do (mapcat identity m)))

(defmacro c [& exprs]
  (rewrite `(do ~@exprs)
    [#{'else} _, #{'if} _, seq? test, map? body]
    [(Elsif. `(do ~@test) (braces body))]

    [#{'if} _, seq? test, map? t]
    [(Cond. [`(do ~@test) (braces t)])]

    [(a Cond) cond, (a Elsif) elsif]
    [(update cond :branches conj (:test elsif) (:body elsif))]

    [(a Cond) cond, #{'else} _, map? body]
    [(update cond :branches conj :else (braces body))]

    ...

    ((a Cond) c) `(cond ~@(:branches c))))

“In Lisp,” you offer. “We often write domain-specific languages to solve new problems.”

“C is not a DSL!”

“If you insist.” Keep going anyway.

user=> (c x = 3;
           if (x == 2) {
             println("two");
           } else if (x == 3) {
             println("yes!");
           } else {
             println("nope");
           } )
yes!

A single eyelash detaches from the corner of your eye, and drifts into the air, smoldering gently. Side effects come at a cost.

Martín stares intently at your REPL, as if there is something wrong with it, and not the world. “Those are… map literals,” he states, as if uncertain.

“They are, aren’t they?” You agree, delighted.

“They’re not… ordered maps… are they?”

You can barely keep from cackling. “They are, up to sixteen terms.”

While Martín sputters, you think about adding another else if clause, and realize that your spell requires a more transgressive magic. Double-check your keybindings, clap twice, and trace a protective ward upon the tabletop. What you are about to do is not exactly evil, but might piss something off.

(defn spaced-sym [x]
  (when (symbol? x)
    (let [parts (str/split (name x) #" ")]
      (when (< 1 (count parts))
        (map symbol parts)))))

(defmacro c [& exprs]
  (rewrite `(do ~@exprs)
    [spaced-sym s] (spaced-sym s)
    ...
    ['#{return ;} _] nil))

Martín is asking something pedestrian about the reader. “Line terminators are a social construct,” you offer, gently, because information is often uncomfortable. “As are spaces. It’s… actually in the spec.” You wonder, not for the first time, why Dr. Judith Butler took such an interest in chairing the ISO 10646 working group. She must have had her reasons.

“All that is left is the for loop itself.” A nontrivial construct, you realize, and prepare to weave another function. Initialization, iteration, termination, evaluation. Trace the sigils in the air and give them form.

(defn gen-for [exprs body]
  (let [[[var _ init] test change] (remove '#{(;)} (partition-by '#{;} exprs))
        body (mapcat identity body)]
    `(loop([~var ~init ret# nil]
        if (~@test) {
          recur(do(~@change), do(~@body))
        } ~'else {
          ~'return ret#
        }))))

(defmacro c [& exprs]
  (rewrite `(do ~@exprs)
    [#{'for} _, seq? expr, map? body] (gen-for expr body)
    [spaced-sym s] (spaced-sym s)
    [#{'else} _, #{'if} _, seq? test, map? body] [(Elsif. `(do ~@test) (braces body))]
    [#{'if} _, seq? test, map? t] [(Cond. [`(do ~@test) (braces t)])]
    [(a Cond) cond, (a Elsif) elsif] [(update cond :branches conj (:test elsif) (:body elsif))]
    [(a Cond) cond, #{'else} _, map? body] [(update cond :branches conj :else (braces body))]
    [symbol? fun, seq? args] [(FnCall. fun args)]
    [any? a, infix f, any? b] [(FnCall. (infix f) [a b])]
    (postfix-sym x) (postfix-sym x)
    [symbol? var, #{'=} _, any? rhs, & more] [`(let [~var ~rhs] ~@more)]
    ((a FnCall) fc) (cons (:fun fc) (:args fc))
    ((a Cond) c) `(cond ~@(:branches c))
    ['#{return ;} _] nil))

“Martín,” you whisper. Dust shimmers in columns above your parentheses. “We are ready now. Would you like to see?”

user=> (c for (i = 1 ; i < 101 ; i++) {
             if (i % 15 == 0) {
               println("FizzBuzz");
             } else if (i % 3 == 0) {
               println("Fizz");
             } else if (i % 5 == 0) {
               println("Buzz");
             } else {
               println(i);
             };
           } )
1
2
Fizz
4
Buzz
Fizz
...

As the numbers slide upwards along the screen, Martín closes his eyes and releases a long, tired breath. One hand rests on the waxed pine of the conference room’s table; the other supports his temple. “I’m recommending strong hire of course, but…” He leans in, and speaks more quietly. “Do you really think you’d be happy here?”

You are blessed with time and power, and need not root in poor soil. Thank him, raise your seed-wing, and let your feet lift gently as you leave.

With sincerest thanks to C. Scott Andreas, André Arko, David Ascher, Mike Bernstein, Lita Cho, Nicole Forsgren, Brad Greenlee, Coda Hale, Michael Handler, Marc Hedlund, Ben Linsay, Caitie McCaffrey, Dan McKinley, Greg Poirier, Marco Rogers, Kelly Shortridge, Tasha, and Leif Walsh.


Deep Learning in Clojure with Fewer Parentheses than Keras and Python

Deep Diamond() is a new Deep Learning library written in Clojure. Its goal is to be simple, superfast, and to support both CPU and GPU computing. Try it out!

But it's Clojure, you might say. Why not Python? Python is simple. Python is easy. Python is supported by Google and Facebook. And Clojure… Clojure is a… you know… a… Lisp! Lisp is diiifiiicuuuult. Lisp has soooo many pareentheses. No one wants to write their DL models like ((((((model.add (((Conv2d etc.))))))))))))))))))))))))). And Clojure is… you know… running on the JVM, and the JVM is heeeeavy and sloooooow. And no one is using it except Rich Hickey and his guitar.

Neither of these claims is true! I'll demonstrate this by direct comparison with the paragon of simplicity and elegance in Python deep learning - Keras. In this post, I'll take a convolutional neural network from the Keras examples.

Below is the relevant model code, first in Keras, and then in Deep Diamond. You can compare them aesthetically. Keras is a high bar to clear, but I think that Deep Diamond's code is even more straightforward.

But the main point is easy to quantify: parentheses and other punctuation! Simple counting shows that Deep Diamond uses fewer parentheses, fewer punctuation symbols overall, fewer constructs, and carries less incidental complexity.

I'll argue that its code has nicer layout, too, and a fine sprinkle of colorful symbols in well balanced places, but that's just my subjective preference. Don't look at that nice sparkling thing.

Keras CNN in Python

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 input_shape=(28, 28, 1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))


model.fit(x_train, y_train,
          batch_size=128,
          epochs=12,
          verbose=1,
          validation_data=(x_test, y_test))

Deep Diamond CNN in Clojure

(defonce net-bp
  (network (desc [128 1 28 28] :float :nchw)
           [(convo [32] [3 3] :relu)
            (convo [64] [3 3] :relu)
            (pooling [2 2] :max)
            (dense [128] :relu)
            (dense [10] :softmax)]))

(defonce net (init! (net-bp :adam)))

(train net train-images y-train :crossentropy 12 [])

Let's count

How about the number of dreaded parentheses, ( and )?

                   Python   Clojure
( and )                48        28
(, ), [, and ]         50        48
Grouped (())            8         2
)))                     2         1
,                      17         0
model.add               8         0

As we can see from the table, on every punctuation metric that I could think of, Deep Diamond and Clojure fare better than Keras & Python.

Keras uses almost twice as many parentheses as Deep Diamond. Clojure uses [] for vector literals, which Deep Diamond uses for tensor shapes. You will note that there are more than a few of these, and may argue that they are parentheses, too. Fine. Add them up, and Clojure still fares slightly better than Python!

A parenthesis here and there is not a problem, but there are horror tales of ((((((( and ))))))) in Lisps. Not in Clojure. Note that there is not a single (( in the Clojure example, and only two occurrences of )). In Python, there are 8.

Then we come to all the additional assorted punctuation in Python: commas, dots, etc. In Clojure there are none, while in Python there are dozens.

Python is also riddled with redundant stuff such as model.add().

Etc., etc. You get my point.
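If you want to check the numbers yourself, counting is mechanical. Here is a small sketch (the string below is only a stand-in for the full snippets above, and punct-counts is a name invented for this example):

```clojure
;; Tally the bracketing and separator characters in a code snippet.
;; The input string is only a stand-in for the full examples above.
(defn punct-counts [code]
  (frequencies (filter #{\( \) \[ \] \,} code)))

(punct-counts "model.add(Conv2D(32, kernel_size=(3, 3)))")
;; counts three \(, three \), and two \,
```

Feeding it the complete Keras and Deep Diamond snippets should reproduce the table above.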


But, speed! - you might say. Deep Diamond is faster, too (at least for this model), but that is a nice topic for another blog post :) Both tools are free, so you can try them yourself in the meantime.

The books

Should I mention that the book Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, DNNL, Java, and Clojure teaches the nuts and bolts of neural networks and deep learning by showing you how Deep Diamond is built, from scratch, in interactive sessions? Each line of code can be executed and the results inspected in a plain Clojure REPL. The best way to master something is to build it yourself!

It's simple. But fast and powerful!

Please subscribe, read the drafts, get the full book soon, and support my work on this free open source library.


Full Stack Developer


Pilloxa | Anywhere - 4h overlap with Stockholm (CET) during workday
Improving patients' adherence to treatment
€40000 - €80000

About Pilloxa

Pilloxa is on a mission to improve patients' adherence to their treatment. Non-adherence to one's treatment plan is all too common. It is estimated to be the root cause of every 10th hospitalization and 8 deaths daily, in Sweden alone. Adherence to treatment is hard, and at Pilloxa we have set our minds to making it easier.

Pilloxa is a medtech company based in Stockholm, Sweden working with the latest technologies in an effort to improve the patient journey and quality of life for patients. We work together with patients, healthcare and pharmaceutical companies in bringing together all actors that have an impact on patients' treatment.


About the role

You'll be an integral part of our small and flat team, working closely with the product and building a first-class user experience. The app is the centerpiece of Pilloxa's service, and this is where you'll likely spend most of your time hacking in ClojureScript. As we grow, you'll also likely be extending our still-small Clojure backend and dabbling with all parts of the stack.

Preferred experience

The more boxes you tick, the better.

  • Passion for making a positive impact in peoples' lives
  • 2+ years full-stack engineer
  • MSc in Computer Science or equivalent
  • Experience with Clojure
  • Experience with React Native
  • Experience with reagent/re-frame
  • Startup experience


Recruitment process

  • Call with CTO
  • Call with Co-founder
  • Technical assignment (max 8h)
  • Presentation of assignment
  • Call with CEO
  • Reference calls


Clojure Journey VI – Data

Data is at the core of every computer application: an information system basically receives information as input, executes operations, and produces some output. Data is the core of computation; we execute operations on data, and as programmers we need ways to represent data at the computer level.

To meet this need, engineers and language hackers created ways to represent data in software by classifying it into types, where each type defines a range of possible values and how they behave when dealing with the hardware.

All programming languages have a set of datatypes for dealing with basic data, and imagining a language without them is almost impossible. Clojure, of course, has its own set of datatypes.

This post may look kind of boring, but it's necessary groundwork before we start learning about functions in the next posts. I'll do my best to keep it short and not boring.

Hosted Types?

As a language hosted on the JVM, Clojure takes most of its datatypes from Java, which, as we know, has been heavily tested for more than 20 years and offers excellent performance and interoperability. As we will see, Clojure strings are java.lang.Strings, and so on.
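Because a Clojure string really is a java.lang.String, Java's instance methods are available on it directly through interop syntax, for example:

```clojure
;; Clojure strings are java.lang.String instances, so we can call
;; Java methods on them directly with (.method target args).
(.toUpperCase "hello")      ;; => "HELLO"
(.contains "Clojure" "jur") ;; => true
```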

To check the type of something in Clojure, just use the function type. Let's start exploring with numbers: Clojure has the integer numbers we already used in our journey when doing arithmetic:

=> (type 1)
java.lang.Long

As we can see, the type maps to a Java type (remember that Clojure is built on top of the JVM). Other common types behave the same way:

=> (type "Hello String")
java.lang.String
=> (type true)
java.lang.Boolean
=> (type 42.42)
java.lang.Double
Clojure primitives are Java types

As expected, you can use strings to represent text, booleans to represent true/false, and doubles to represent fractional numbers.


Most languages include a float type, but as you may know, single-precision floats are often not precise enough, so doubles are the default in Clojure.

=> (type (+ 41.0 1.0))
java.lang.Double

Floats are not banned at all; Clojure has them, but they are not the default. If you want one, you need an explicit conversion:

=> (type (float (+ 41.0 1.0)))
java.lang.Float

Just to keep talking about math: Clojure has a type that is uncommon in most languages, called ratio, which is useful for representing fractions:

=> (type 1/3)
clojure.lang.Ratio
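The nice property of ratios is that arithmetic on them stays exact: dividing integers that don't divide evenly yields a ratio instead of a rounded double, for example:

```clojure
;; Integer division that doesn't come out even produces an exact ratio,
;; and ratio arithmetic reduces results to lowest terms.
(/ 1 3)        ;; => 1/3
(+ 1/3 1/6)    ;; => 1/2
(type (/ 1 3)) ;; => clojure.lang.Ratio
```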

And as we have already seen, we represent whole numbers with an integer type:

=> (type 1)
java.lang.Long

We can do mathematical operations on numbers as expected; just remember to use prefix (Polish) notation, as we saw in the last post.

=> (- 52 10)
42

One important thing to note here: if one operand is fractional, the result is fractional, so you can mix integers and doubles in the same operation.

=> (- 52.0 10)
42.0


Strings

You can represent strings in Clojure by putting any text inside double quotes:

=> (type "I'm a text")
java.lang.String

With the function str you can convert almost anything into a String. Just try:

=> (type (str 1))
java.lang.String

Clojure has its own character type too; just put a \ before a single character:

=> (type \t)
java.lang.Character

Logical Values

Logical values are represented by plain true and false, no complexity here!

user=> (type true)
java.lang.Boolean
user=> (type false)
java.lang.Boolean

And you can do logical operations with them too:

=> (and false false)
false
=> (and true true)
true
=> (or false true)
true
=> (not true)
false

In Clojure there are only two falsey values: false and nil, which we will see next.
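Since false and nil are the only falsey values, everything else, including 0 and the empty string, counts as true in a conditional:

```clojure
;; Only false and nil are falsey in Clojure; 0 and "" are truthy.
(if 0 "truthy" "falsey")   ;; => "truthy"
(if "" "truthy" "falsey")  ;; => "truthy"
(if nil "truthy" "falsey") ;; => "falsey"
```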


Keywords

Keywords are a difficult topic to explain; folks coming from Ruby or another Lisp already know what I'm talking about. They look like strings, but they aren't strings: they are dedicated to being used as identifiers and simply evaluate to themselves. You will commonly see them as map keys, for example.

They start with : followed by a name:

=> (type :keyword)
clojure.lang.Keyword

In the next post we'll see more about keywords when dealing with maps.
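As a small taste of what's coming: a keyword can be used as a map key and even called as a function to look itself up in a map (the map below is just an illustration):

```clojure
;; Keywords are commonly used as map keys.
(def person {:name "Otavio" :language "Clojure"})

(get person :name) ;; => "Otavio"

;; A keyword is also a function that looks itself up in a map.
(:language person) ;; => "Clojure"
```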


Symbols

Symbols usually refer to something else, like functions or values. During evaluation they are resolved to the value they refer to. To understand this better, let's try an example.

We define a variable called hello, bound to the String “Hello Otavio”:

=> (def hello "Hello Otavio")
#'user/hello

Now if we just print hello, as expected we get the string bound to it:

=> (println hello)
Hello Otavio

And if we check the type of hello:

=> (type hello)
java.lang.String

It returns the type of what is bound to hello, not the type of hello itself. This happens because the evaluator first evaluates hello and looks up its binding before calling type. To check the true type of hello, we need to delay evaluation, as we saw in the last post:

=> (type 'hello)
clojure.lang.Symbol

And that is our symbol. We have already used many symbols in this post; other examples are the functions we've called:

=> (type 'println)
clojure.lang.Symbol
=> (type '+)
clojure.lang.Symbol
=> (type '-)
clojure.lang.Symbol

As we will see later, symbols can also have a namespace, separated from the name by a forward slash.
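For example, the functions namespace and name split a namespaced symbol into its two parts:

```clojure
;; A namespaced symbol has the form namespace/name; note the quote,
;; which keeps the symbol from being evaluated.
(namespace 'clojure.string/join) ;; => "clojure.string"
(name 'clojure.string/join)      ;; => "join"
```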

Fun fact from the Clojure docs: there are three special symbols that are read as different types – nil is the null value, and true and false are the boolean values.


Regex

Regexes have their own datatype in Clojure too. To write a regex, just put a hash before a string:

=> (type #"regex")
java.util.regex.Pattern

And, unsurprisingly, we get a Java datatype: Pattern.

I will not dive deep into how to use regexes, but the standard library has good functions for dealing with them; check this post if you want to learn more.
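For a quick taste, two of those standard-library helpers are re-find, which returns the first match anywhere in a string, and re-matches, which succeeds only if the whole string matches:

```clojure
;; re-find scans for the first match; re-matches requires a full match.
(re-find #"\d+" "order 42 shipped") ;; => "42"
(re-matches #"\d+" "12345")         ;; => "12345"
(re-matches #"\d+" "order 42")      ;; => nil
```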

And finally… the emptiness, the void, the villain!

Pay attention to the nil monster or it will destroy you!!

We’re talking about nil, the value that represents nothing: emptiness. If you have written code before, you probably know it already; in some languages it is called Null, nil, or None, and it has been called the billion-dollar mistake (watch this excellent talk anyway).

The rule is simple: nil has type nil, and to write nil you just type nil:

=> (type nil)
nil

Next steps

In the next post we will look at collections, understanding each one and its use cases. For now, to keep this post from growing too long, we'll finish here.

As a nice exercise, check the Clojure cheatsheet and you'll find handy functions for dealing with these datatypes.


Conclusion

Clojure has simple datatypes, plus an uncommon one called ratio. One of the interesting things is that Clojure uses Double by default, so we avoid using Floats.

Final thought

If you have any questions that I can help you with, please ask! Send an email (otaviopvaladares at gmail.com), PM me on Twitter, or comment on this post!


Core.match available for self hosted ClojureScript (Planck and Klipse)

core.match - a pattern matching library for Clojure[script] - is available for self-hosted ClojureScript. This means it can run in Planck and Klipse.

The code is available as a fork of core.match called abbinare.

The approach is similar to what Mike Fikes did for core.async with andare.

Both names come from Italian: “andare” means “go”, and “abbinare” means “to match” (in the sense of pairing things up).


In order to use core.match in Klipse, simply require it and Klipse will fetch abbinare code from its analysis cache repository:

(require '[cljs.core.match :refer-macros [match]])

In order to use core.match in Planck, add abbinare as a dependency with:

 [viebel/abbinare "1.10.597"]

Here is a quick demo - running in your browser - of a solution to the famous FizzBuzz interview question with core.match:

(with-out-str
  (doseq [n (range 1 11)]
    (println
     (match [(mod n 3) (mod n 5)]
       [0 0] "FizzBuzz"
       [0 _] "Fizz"
       [_ 0] "Buzz"
       :else n))))

Want more core.match cool stuff in your browser? Read this core.match interactive tutorial.


Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.