Build and Deploy Web Apps With Clojure and Fly.io

This post walks through a small web development project using Clojure, covering everything from building the app to packaging and deploying it. It’s a collection of insights and tips I’ve learned from building my Clojure side projects, presented in a more structured format.

As the title suggests, we’ll be deploying the app to Fly.io, a service that lets you deploy apps packaged as Docker images on lightweight virtual machines.[1] My experience with it has been good: it’s easy to use and quick to set up. One downside of Fly is that it doesn’t have a free tier, but if you don’t plan on leaving the app deployed, it barely costs anything.

This isn’t a tutorial on Clojure, so I’ll assume you already have some familiarity with the language as well as some of its libraries.[2]

Project Setup

In this post, we’ll be building a barebones bookmarks manager for the demo app. Users can log in using basic authentication, view all bookmarks, and create a new bookmark. It’ll be a traditional multi-page web app and the data will be stored in a SQLite database.

Here’s an overview of the project’s starting directory structure:

.
├── dev
│   └── user.clj
├── resources
│   └── config.edn
├── src
│   └── acme
│       └── main.clj
└── deps.edn

Here are the libraries we’re going to use. If you have some Clojure experience or have used Kit, you’re probably already familiar with everything listed below.[3]

;; deps.edn
{:paths ["src" "resources"]
 :deps {org.clojure/clojure               {:mvn/version "1.12.0"}
        aero/aero                         {:mvn/version "1.1.6"}
        integrant/integrant               {:mvn/version "0.11.0"}
        ring/ring-jetty-adapter           {:mvn/version "1.12.2"}
        metosin/reitit-ring               {:mvn/version "0.7.2"}
        com.github.seancorfield/next.jdbc {:mvn/version "1.3.939"}
        org.xerial/sqlite-jdbc            {:mvn/version "3.46.1.0"}
        hiccup/hiccup                     {:mvn/version "2.0.0-RC3"}}
 :aliases
 {:dev {:extra-paths ["dev"]
        :extra-deps  {nrepl/nrepl    {:mvn/version "1.3.0"}
                      integrant/repl {:mvn/version "0.3.3"}}
        :main-opts   ["-m" "nrepl.cmdline" "--interactive" "--color"]}}}

I use Aero and Integrant for my system configuration (more on this in the next section), Ring with the Jetty adapter for the web server, Reitit for routing, next.jdbc for database interaction, and Hiccup for rendering HTML. From what I’ve seen, this is a popular “library combination” for building web apps in Clojure.[4]

The user namespace in dev/user.clj contains helper functions from Integrant-repl to start, stop, and restart the Integrant system.

;; dev/user.clj
(ns user
  (:require
   [acme.main :as main]
   [clojure.tools.namespace.repl :as repl]
   [integrant.core :as ig]
   [integrant.repl :refer [set-prep! go halt reset reset-all]]))

(set-prep!
 (fn []
   (ig/expand (main/read-config)))) ;; we'll implement this soon

(repl/set-refresh-dirs "src" "resources")

(comment
  (go)
  (halt)
  (reset)
  (reset-all))

Systems and Configuration

If you’re new to Integrant or other dependency injection libraries like Component, I’d suggest reading “How to Structure a Clojure Web”. It’s a great explanation of the reasoning behind these libraries. Like most Clojure apps that use Aero and Integrant, my system configuration lives in an .edn file. I usually name mine resources/config.edn. Here’s what it looks like:

;; resources/config.edn
{:server
 {:port #long #or [#env PORT 8080]
  :host #or [#env HOST "0.0.0.0"]
  :auth {:username #or [#env AUTH_USER "john.doe@email.com"]
         :password #or [#env AUTH_PASSWORD "password"]}}

 :database
 {:dbtype "sqlite"
  :dbname #or [#env DB_DATABASE "database.db"]}}

In production, most of these values will be set using environment variables. During local development, the app will use the hard-coded default values. We don’t have any sensitive values in our config (e.g., API keys), so it’s fine to commit this file to version control. When there are such values, I put them in a separate file that isn’t tracked by version control and pull them into the main config using Aero’s #include reader tag.
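For example, a config that pulls in an untracked secrets file could look like this (the secrets.edn filename and :api-key key are illustrations only, not part of the demo app):

```clojure
;; resources/config.edn (sketch, assuming a gitignored resources/secrets.edn)
{:server
 {:port #long #or [#env PORT 8080]}

 ;; secrets.edn contains e.g. {:api-key "..."}
 :secrets #include "secrets.edn"}
```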

This config file is then “expanded” into the Integrant system map using the expand-key method:

;; src/acme/main.clj
(ns acme.main
  (:require
   [aero.core :as aero]
   [clojure.java.io :as io]
   [integrant.core :as ig]))

(defn read-config
  []
  {:system/config (aero/read-config (io/resource "config.edn"))})

(defmethod ig/expand-key :system/config
  [_ opts]
  (let [{:keys [server database]} opts]
    {:server/jetty (assoc server :handler (ig/ref :handler/ring))
     :handler/ring {:database (ig/ref :database/sql)
                    :auth     (:auth server)}
     :database/sql database}))

The system map is created in code instead of being in the configuration file. This makes refactoring your system simpler as you only need to change this method while leaving the config file (mostly) untouched.[5]

My current approach to Integrant + Aero config files is mostly inspired by the blog post “Rethinking Config with Aero & Integrant” and by Laravel’s configuration. The config file follows a structure similar to Laravel’s config files and contains the app configuration without describing the structure of the system. Previously, I had a key for each Integrant component, which left the config file littered with #ig/ref tags and made it more difficult to refactor.

Also, if you haven’t already, start a REPL and connect to it from your editor. Run clj -M:dev if your editor doesn’t automatically start a REPL. Next, we’ll implement the init-key and halt-key! methods for each of the components:

;; src/acme/main.clj
(ns acme.main
  (:require
   ;; ...
   [acme.handler :as handler]
   [acme.util :as util]
   [next.jdbc :as jdbc]
   [ring.adapter.jetty :as jetty]))
;; ...

(defmethod ig/init-key :server/jetty
  [_ opts]
  (let [{:keys [handler port]} opts
        jetty-opts (-> opts (dissoc :handler :auth) (assoc :join? false))
        server     (jetty/run-jetty handler jetty-opts)]
    (println "Server started on port" port)
    server))

(defmethod ig/halt-key! :server/jetty
  [_ server]
  (.stop server))

(defmethod ig/init-key :handler/ring
  [_ opts]
  (handler/handler opts))

(defmethod ig/init-key :database/sql
  [_ opts]
  (let [datasource (jdbc/get-datasource opts)]
    (util/setup-db datasource)
    datasource))

The setup-db function creates the required tables in the database if they don’t exist yet. This works fine for database migrations in small projects like this demo app, but for larger projects, consider using libraries such as Migratus (my preferred library) or Ragtime.

;; src/acme/util.clj
(ns acme.util
  (:require
   [next.jdbc :as jdbc]))

(defn setup-db
  [db]
  (jdbc/execute-one!
   db
   ["create table if not exists bookmarks (
       bookmark_id text primary key not null,
       url text not null,
       created_at datetime default (unixepoch()) not null
     )"]))

For the server handler, let’s start with a simple function that returns a “hi world” string.

;; src/acme/handler.clj
(ns acme.handler
  (:require
   [ring.util.response :as res]))

(defn handler
  [_opts]
  (fn [req]
    (res/response "hi world")))

Now all the components are implemented. We can check if the system is working properly by evaluating (reset) in the user namespace. This will reload your files and restart the system. You should see this message printed in your REPL:

:reloading (acme.util acme.handler acme.main)
Server started on port 8080
:resumed

If we send a request to http://localhost:8080/, we should get “hi world” as the response:

$ curl localhost:8080/
hi world

Nice! The system is working correctly. In the next section, we’ll implement routing and our business logic handlers.

Routing, Middleware, and Route Handlers

First, let’s set up a ring handler and router using Reitit. We only have one route, the index / route that’ll handle both GET and POST requests.

;; src/acme/handler.clj
(ns acme.handler
  (:require
   [reitit.ring :as ring]))

(def routes
  [["/" {:get  index-page
         :post index-action}]])

(defn handler
  [opts]
  (ring/ring-handler
   (ring/router routes)
   (ring/routes
    (ring/redirect-trailing-slash-handler)
    (ring/create-resource-handler {:path "/"})
    (ring/create-default-handler))))

We’re including some useful middleware:

  • redirect-trailing-slash-handler to resolve routes with trailing slashes,
  • create-resource-handler to serve static files, and
  • create-default-handler to handle common 40x responses.

Implementing the Middlewares

If you remember the :handler/ring component from earlier, you’ll notice that it has two dependencies: database and auth. Currently, they’re inaccessible to our route handlers. To fix this, we can inject these components into the Ring request map using a middleware function.

;; src/acme/handler.clj
;; ...

(defn components-middleware
  [components]
  (let [{:keys [database auth]} components]
    (fn [handler]
      (fn [req]
        (handler (assoc req
                        :db database
                        :auth auth))))))
;; ...

The components-middleware function takes in a map of components and creates a middleware function that “assocs” each component into the request map.[6] If you have more components such as a Redis cache or a mail service, you can add them here.
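Because the middleware is just nested closures, you can exercise it with plain maps at the REPL, no server needed. In this sketch, the :fake-ds value and the echoing handler are stand-ins for illustration:

```clojure
;; same function as in acme.handler, repeated here so the snippet is standalone
(defn components-middleware
  [components]
  (let [{:keys [database auth]} components]
    (fn [handler]
      (fn [req]
        (handler (assoc req
                        :db database
                        :auth auth))))))

;; wrap a handler that simply echoes back the injected keys
(def app
  ((components-middleware {:database :fake-ds :auth {:username "u"}})
   (fn [req] (select-keys req [:db :auth]))))

(app {:uri "/"})
;; => {:db :fake-ds, :auth {:username "u"}}
```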

We’ll also need a middleware to handle HTTP basic authentication.[7] This middleware will check whether the username and password from the request map match the values in the auth map injected by components-middleware. If they match, the request is authenticated and the user can view the site.

;; src/acme/handler.clj
(ns acme.handler
  (:require
   ;; ...
   [acme.util :as util]
   [ring.util.response :as res]))
;; ...

(defn wrap-basic-auth
  [handler]
  (fn [req]
    (let [{:keys [headers auth]} req
          {:keys [username password]} auth
          authorization (get headers "authorization")
          correct-creds (str "Basic " (util/base64-encode
                                       (format "%s:%s" username password)))]
      (if (and authorization (= correct-creds authorization))
        (handler req)
        (-> (res/response "Access Denied")
            (res/status 401)
            (res/header "WWW-Authenticate" "Basic realm=protected"))))))
;; ...

A nice feature of Clojure is that interop with the host language is easy. The base64-encode function is just a thin wrapper over Java’s Base64.Encoder:

;; src/acme/util.clj
(ns acme.util
   ;; ...
  (:import java.util.Base64))

(defn base64-encode
  [s]
  (.encodeToString (Base64/getEncoder) (.getBytes s)))
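You can sanity-check the expected header value at the REPL (the user:pass credentials below are placeholders, not the app’s config):

```clojure
;; same function as above, repeated so the snippet is standalone
(ns example.util
  (:import java.util.Base64))

(defn base64-encode
  [s]
  (.encodeToString (Base64/getEncoder) (.getBytes s)))

(str "Basic " (base64-encode "user:pass"))
;; => "Basic dXNlcjpwYXNz"
```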

Finally, we need to add them to the router. Since we’ll be handling form requests later, we’ll also bring in Ring’s wrap-params middleware.

;; src/acme/handler.clj
(ns acme.handler
  (:require
   ;; ...
   [ring.middleware.params :refer [wrap-params]]))
;; ...

(defn handler
  [opts]
  (ring/ring-handler
   ;; ...
   {:middleware [(components-middleware opts)
                 wrap-basic-auth
                 wrap-params]}))

Implementing the Route Handlers

We now have everything we need to implement the route handlers or the business logic of the app. First, we’ll implement the index-page function which renders a page that:

  1. Shows all of the user’s bookmarks in the database, and
  2. Shows a form that allows the user to insert new bookmarks into the database

;; src/acme/handler.clj
(ns acme.handler
  (:require
   ;; ...
   [next.jdbc :as jdbc]
   [next.jdbc.sql :as sql]))
;; ...

(defn template
  [bookmarks]
  [:html
   [:head
    [:meta {:charset "utf-8"}]
    [:meta {:name    "viewport"
            :content "width=device-width, initial-scale=1.0"}]]
   [:body
    [:h1 "bookmarks"]
    [:form {:method "POST"}
     [:div
      [:label {:for "url"} "url "]
      [:input#url {:name "url"
                   :type "url"
                   :required true
                   :placeholder "https://en.wikipedia.org/"}]]
     [:button "submit"]]
    [:p "your bookmarks:"]
    [:ul
     (if (empty? bookmarks)
       [:li "you don't have any bookmarks"]
       (map
        (fn [{:keys [url]}]
          [:li
           [:a {:href url} url]])
        bookmarks))]]])

(defn index-page
  [req]
  (try
    (let [bookmarks (sql/query (:db req)
                               ["select * from bookmarks"]
                               jdbc/unqualified-snake-kebab-opts)]
      (util/render (template bookmarks)))
    (catch Exception e
      (util/server-error e))))
;; ...

Database queries can sometimes throw exceptions, so it’s good to wrap them in a try-catch block. I’ll also introduce some helper functions:

;; src/acme/util.clj
(ns acme.util
  (:require
   ;; ...
   [hiccup2.core :as h]
   [ring.util.response :as res])
  (:import java.util.Base64))
;; ...

(defn prepend-doctype
  [s]
  (str "<!doctype html>" s))

(defn render
  [hiccup]
  (-> hiccup h/html str prepend-doctype res/response (res/content-type "text/html")))

(defn server-error
  [e]
  (println "Caught exception: " e)
  (-> (res/response "Internal server error")
      (res/status 500)))

render takes a hiccup form and turns it into a ring response, while server-error takes an exception, logs it, and returns a 500 response.

Next, we’ll implement the index-action function:

;; src/acme/handler.clj
;; ...

(defn index-action
  [req]
  (try
    (let [{:keys [db form-params]} req
          value (get form-params "url")]
      (sql/insert! db :bookmarks {:bookmark_id (random-uuid) :url value})
      (res/redirect "/" 303))
    (catch Exception e
      (util/server-error e))))
;; ...

This is an implementation of a typical post/redirect/get pattern. We get the value from the URL form field, insert a new row in the database with that value, and redirect back to the index page. Again, we’re using a try-catch block to handle possible exceptions from the database query.

That should be all of the code for the controllers. If you reload your REPL and go to http://localhost:8080, you should see something that looks like this after logging in:

Screenshot of the app

The last thing we need to do is to update the main function to start the system:

;; src/acme/main.clj
;; ...

(defn -main [& _]
  (-> (read-config) ig/expand ig/init))

Now, you should be able to run the app using clj -M -m acme.main. That’s all the code needed for the app. In the next section, we’ll package the app into a Docker image to deploy to Fly.

Packaging the App

While there are many ways to package a Clojure app, Fly.io specifically requires a Docker image. There are two approaches to doing this:

  1. Build an uberjar and run it using Java in the container, or
  2. Load the source code and run it using Clojure in the container

Both are valid approaches. I prefer the first since its only runtime dependency is the JVM. We’ll use the tools.build library to build the uberjar. Check out the official guide for more information on building Clojure programs. Since it’s a library, we can add it to our deps.edn file under an alias:

;; deps.edn
{;; ...
 :aliases
 {;; ...
  :build {:extra-deps {io.github.clojure/tools.build 
                       {:git/tag "v0.10.5" :git/sha "2a21b7a"}}
          :ns-default build}}}

Tools.build expects a build.clj file in the root of the project directory, so we’ll need to create that file. It contains the instructions to build artefacts, which in our case is a single uberjar. There are many great examples of build.clj files on the web, including in the official documentation. For now, you can copy and paste this file into your project.

;; build.clj
(ns build
  (:require
   [clojure.tools.build.api :as b]))

(def basis (delay (b/create-basis {:project "deps.edn"})))
(def src-dirs ["src" "resources"])
(def class-dir "target/classes")

(defn uber
  [_]
  (println "Cleaning build directory...")
  (b/delete {:path "target"})

  (println "Copying files...")
  (b/copy-dir {:src-dirs   src-dirs
               :target-dir class-dir})

  (println "Compiling Clojure...")
  (b/compile-clj {:basis      @basis
                  :ns-compile '[acme.main]
                  :class-dir  class-dir})

  (println "Building Uberjar...")
  (b/uber {:basis     @basis
           :class-dir class-dir
           :uber-file "target/standalone.jar"
           :main      'acme.main}))

To build the project, run clj -T:build uber. This will create the uberjar standalone.jar in the target directory. The uber in clj -T:build uber refers to the uber function from build.clj. Since the build system is a Clojure program, you can customise it however you like. If we try to run the uberjar now, we’ll get an error:

# build the uberjar
$ clj -T:build uber
Cleaning build directory...
Copying files...
Compiling Clojure...
Building Uberjar...

# run the uberjar
$ java -jar target/standalone.jar
Error: Could not find or load main class acme.main
Caused by: java.lang.ClassNotFoundException: acme.main

This error occurred because the main class required by Java was never generated. To fix this, we need to add the :gen-class directive to our main namespace. This instructs the Clojure compiler to generate a Java class (with a static main method) from the -main function.

;; src/acme/main.clj
(ns acme.main
  ;; ...
  (:gen-class))
;; ...

If you rebuild the project and run java -jar target/standalone.jar again, it should work perfectly. Now that we have a working build script, we can write the Dockerfile:

# Dockerfile
# install additional dependencies here in the base layer
# separate base from build layer so any additional deps installed are cached
FROM clojure:temurin-21-tools-deps-bookworm-slim AS base

FROM base AS build
WORKDIR /opt
COPY . .
RUN clj -T:build uber

FROM eclipse-temurin:21-alpine AS prod
COPY --from=build /opt/target/standalone.jar /
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "standalone.jar"]

It’s a multi-stage Dockerfile. We use the official Clojure Docker image as the layer to build the uberjar. Once it’s built, we copy it to a smaller Docker image that only contains the Java runtime.[8] By doing this, we get a smaller container image as well as a faster Docker build time because the layers are better cached.

That should be all for packaging the app. We can move on to the deployment now.

Deploying with Fly.io

First things first, you’ll need to install flyctl, Fly’s CLI tool for interacting with their platform. Create a Fly.io account if you haven’t already. Then run fly auth login to authenticate flyctl with your account.

Next, we’ll need to create a new Fly App:

$ fly app create
? Choose an app name (leave blank to generate one): 
automatically selected personal organization: Ryan Martin
New app created: blue-water-6489

Another way to do this is with the fly launch command, which automates a lot of the app configuration for you. However, there are some steps we need that fly launch doesn’t handle, so we’ll configure the app manually. I also have a fly.toml file ready that you can copy straight into your project.

# fly.toml
# replace these with your app and region name
# run `fly platform regions` to get a list of regions
app = 'blue-water-6489' 
primary_region = 'sin'

[env]
  DB_DATABASE = "/data/database.db"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = "stop"
  auto_start_machines = true
  min_machines_running = 0

[mounts]
  source = "data"
  destination = "/data"
  initial_size = 1

[[vm]]
  size = "shared-cpu-1x"
  memory = "512mb"
  cpus = 1
  cpu_kind = "shared"

These are mostly the default configuration values with some additions. Under the [env] section, we’re setting the SQLite database location to /data/database.db. The database.db file itself will be stored in a persistent Fly Volume mounted on the /data directory. This is specified under the [mounts] section. Fly Volumes are similar to regular Docker volumes but are designed for Fly’s micro VMs.

We’ll need to set the AUTH_USER and AUTH_PASSWORD environment variables too, but not through the fly.toml file as these are sensitive values. To securely set these credentials with Fly, we can set them as app secrets. They’re stored encrypted and will be automatically injected into the app at boot time.

$ fly secrets set AUTH_USER=hi@ryanmartin.me AUTH_PASSWORD=not-so-secure-password
Secrets are staged for the first deployment

With this, the configuration is done and we can deploy the app using fly deploy:

$ fly deploy
# ...
Checking DNS configuration for blue-water-6489.fly.dev

Visit your newly deployed app at https://blue-water-6489.fly.dev/

The first deployment will take longer since it’s building the Docker image for the first time. Subsequent deployments should be faster due to the cached image layers. You can click on the link to view the deployed app, or you can also run fly open which will do the same thing. Here’s the app in action:

The app in action

If you made additional changes to the app or fly.toml, you can redeploy the app using the same command, fly deploy. The app is configured to auto stop/start, which helps to cut costs when there’s not a lot of traffic to the site. If you want to take down the deployment, you’ll need to delete the app itself using fly app destroy <your app name>.

Adding a Production REPL

This is a divisive topic in the Clojure community, with varying opinions on whether or not it’s a good idea. Personally, I find having a REPL connected to the live app helpful, and I often use it for debugging and running queries on the live database.[9] Since we’re using SQLite, we don’t have a database server we can connect to directly, unlike Postgres or MySQL.

If you’re brave, you can even restart the app from the REPL without redeploying. It’s also easy to break things this way, which is why some prefer not to use a production REPL at all.

For this project, we’ll add a socket REPL. It’s very simple to set up (you only need to add a JVM option), and unlike nREPL, it doesn’t require additional dependencies. Let’s update the Dockerfile:

# Dockerfile
# ...
EXPOSE 7888
ENTRYPOINT ["java", "-Dclojure.server.repl={:port 7888 :accept clojure.core.server/repl}", "-jar", "standalone.jar"]

The socket REPL will be listening on port 7888. If we redeploy the app now, the REPL will be started but we won’t be able to connect to it. That’s because we haven’t exposed the service through Fly proxy. We can do this by adding the socket REPL as a service in the [services] section in fly.toml.

However, doing this will also expose the REPL port to the public. This means that anyone can connect to your REPL and possibly mess with your app. Instead, what we want to do is to configure the socket REPL as a private service.

By default, all Fly apps in your organisation live in the same private network. This private network, called 6PN, connects the apps in your organisation through Wireguard tunnels (a VPN) using IPv6. Fly private services aren’t exposed to the public internet but can be reached from this private network. We can then use Wireguard to connect to this private network to reach our socket REPL.

Fly VMs are also configured with the hostname fly-local-6pn, which maps to its 6PN address. This is analogous to localhost, which points to your loopback address 127.0.0.1. To expose a service to 6PN, all we have to do is bind or serve it to fly-local-6pn instead of the usual 0.0.0.0. We have to update the socket REPL options to:

# Dockerfile
# ...
ENTRYPOINT ["java", "-Dclojure.server.repl={:port 7888,:address \"fly-local-6pn\",:accept clojure.core.server/repl}", "-jar", "standalone.jar"]

After redeploying, we can use the fly proxy command to forward the port from the remote server to our local machine.[10]

$ fly proxy 7888:7888
Proxying local port 7888 to remote [blue-water-6489.internal]:7888

In another shell, run:

$ rlwrap nc localhost 7888
user=>

Now we have a REPL connected to the production app! rlwrap is used for readline functionality, e.g. up/down arrow keys, vi bindings. Of course you can also connect to it from your editor.

Deploy with GitHub Actions

If you’re using GitHub, we can also set up automatic deployments on pushes/PRs with GitHub Actions. All you need to do is create the workflow file:

# .github/workflows/fly.yaml
name: Fly Deploy
on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  deploy:
    name: Deploy app
    runs-on: ubuntu-latest
    concurrency: deploy-group
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl deploy --remote-only
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}

To get this to work, you’ll need to create a deploy token from your app’s dashboard. Then, in your GitHub repo, create a new repository secret called FLY_API_TOKEN with the value of your deploy token. Now, whenever you push to the main branch, this workflow will automatically run and deploy your app. You can also manually run the workflow from GitHub because of the workflow_dispatch option.

End

As always, all the code is available on GitHub. Originally, this post was just about deploying to Fly.io, but along the way I kept adding on more stuff until it essentially became my version of the user manager example app. Anyway, hope this article provided a good view into web development with Clojure. As a bonus, here are some additional resources on deploying Clojure apps:

  1. The way Fly.io works under the hood is pretty clever. Instead of running the container image with a runtime like Docker, the image is unpacked and “loaded” into a VM. See this video explanation for more details. ↩︎

  2. If you’re interested in learning Clojure, my recommendation is to follow the official getting started guide and join the Clojurians Slack. Also, read through this list of introductory resources. ↩︎

  3. Kit was a big influence on me when I first started learning web development in Clojure. I never used it directly, but I did use their library choices and project structure as a base for my own projects. ↩︎

  4. There’s no “Rails” for the Clojure ecosystem (yet?). The prevailing opinion is to build your own “framework” by composing different libraries together. Most of these libraries are stable and are already used in production by big companies, so don’t let this discourage you from doing web development in Clojure! ↩︎

  5. There might be some keys that you add or remove, but the structure of the config file stays the same. ↩︎

  6. “assoc” (associate) is Clojure slang for adding or updating a key-value pair in a map. ↩︎

  7. For more details on how basic authentication works, check out the specification. ↩︎

  8. Here’s a cool resource I found when researching Java Dockerfiles: WhichJDK. It provides a comprehensive comparison on the different JDKs available and recommendations on which one you should use. ↩︎

  9. Another (non-technically important) argument for live/production REPLs is just because it’s cool. Ever since I read the story about NASA’s programmers debugging a spacecraft through a live REPL, I’ve always wanted to try it at least once. ↩︎

  10. If you encounter errors related to Wireguard when running fly proxy, you can run fly doctor which will hopefully detect issues with your local setup and also suggest fixes for them. ↩︎

Permalink

Work and meta-work

Part of the work I did when I ran my own business was standardizing procedures. I got fascinated by checklists and instructions. Sometimes you only do something once in a while. You want to remember all of those little details, all the little problems solved, for next time. Because you will forget. Having them written down means you can pick it up whenever you need to. And, when you’re running a business, there’s enough to keep your mind busy. It’s nice to have a list of instructions to follow when you’re running out of decision-making power.

I started perfecting the process of writing processes. At one point, even the meta-work was included in the process itself. Think of the process like a recipe. The recipe says you need 4 cups of flour, 1 cup of oil, etc. Then it has steps like mix the flour and oil in the bottom of a heavy pan, mix some other ingredients in a medium bowl. It seems complete. But it’s not. At some point I started even writing the little steps like gather a spoon, a heavy pan, and a medium bowl. All of the implied steps got written down.

It might seem like those steps are obvious and don’t need to be explicit. To people who believe that, I say that they just haven’t experienced the calming effect of a good list of steps. The steps become the artifact that records all of your learning. You can rearrange the steps and otherwise optimize the whole process in writing.

It’s such a relief when you start to do that. It’s very relaxing to know that every little detail is handled in the instructions. For example, you might not know you need a medium bowl until you read step three. But at that point, you might have your pan on the stove and need to stir constantly. Having all of the steps, including the meta-steps, listed out and in an order that works is freeing. Your mind can attend to the present step.

At work I’ve been working through this same thing with project planning. We’ve got a feature we want to build. We discussed the details of it. Now it’s time to make a project in Linear to track it. There are two questions that I’m wrestling with: How much granularity do you want? And how much of the meta-work needs to be written down?

Too much granularity and things get a little too prescriptive. In work like programming, breaking the work down into smaller chunks and writing a description of those chunks is actually part of the work. And if your chunks are too small, you’re not taking into account the realistic uncertainty every programming project has about exactly how it should work. However, don’t break it down enough and you’ve got a single task: Implement the feature. Finding that middle path is part of the art of project planning.

But should planning the project be written down as the first step in the project? Sometimes I think yes. If the planning is not obvious (where you can just do it then and there), yes, you can write the first step is “Break the project into steps.” And if your planning process is more structured with multiple steps (like getting approval, etc.), write those down, too.

The default setup in Linear is to have multiple statuses for each task. Backlog→In-progress→In review→Done. But I find that code review where I work is hefty enough to merit its own step. So I give it one. Meta-work is work when it requires effort. For instance, I wouldn’t put a step in the project saying “change status of step 2 to Done.” That seems weird and a little too meta. But asking for a review, following up until it happens, addressing the comments, then following up again until it’s approved is real work. That gets its own step.

It reminds me very much of the Getting Things Done methodology where they recommend writing down the “next action” to take. It should be precise enough to actually accomplish. For instance, if you need to call a plumber, but you don’t know a plumber, the next action might be “ask your friends for plumber recommendations.” Better yet would be to list the friends.

In the same way, we want each step to be clear. If the project requires research before you know what steps to take, write “Research x.” You can add more steps as things become clear. The point is to capture the steps you do know and feel a sense of progress when you check them off.

It’s all work, meta or not. The real trick is that planning is a way of manipulating the future. You write down steps, you change them, you rearrange them, you erase them, all before you take the first step. Some steps are known and fixed. Merging to main comes last; it’s pegged at the end. Others are fluid and can be rearranged, done in parallel, or even eliminated if they’re not necessary. The question is whether the meta-task warrants a place in the future among all the other tasks. I say we should default to saying yes more often. It helps us take meta-tasks seriously. For example, some teams may treat testing as a meta-task. By making it a task and writing it down, we honor its place among the other tasks. And we honor ourselves when we complete it, giving us the satisfaction of a job done.

Permalink

Clojure Deref (June 16, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

Clojure/Conj 2025 will be held Nov 12-14 in Charlotte, NC. Early Bird Registration is now open. We look forward to seeing you in-person and online!

Libraries and Tools

New releases and tools this week:

  • clojure-desktop-toolkit 0.3.1 - Create native state-of-the-art desktop applications in Clojure using Eclipse’s SWT graphics toolkit.

  • basilisp 0.4.0 - A Clojure-compatible(-ish) Lisp dialect targeting Python 3.9+

  • ai-tools 1.0 - tools that help AI development assistants do Clojure

  • malli 0.19.1 - High-performance data-driven data specification library for Clojure/Script.

  • lazytest 1.7.0 - A standalone BDD test framework for Clojure

  • markdown 0.7.186 - A cross-platform clojure/script parser for Markdown

  • deps-new 0.9.0 - Create new projects for the Clojure CLI / deps.edn

  • xtdb 2.0.0 - An immutable SQL database for application development, time-travel reporting and data compliance. Developed by @juxt

  • next-jdbc 1.3.1048 - A modern low-level Clojure wrapper for JDBC-based access to databases.

  • babashka 1.12.202 - Native, fast starting Clojure interpreter for scripting

  • quickblog 0.4.7 - Light-weight static blog engine for Clojure and babashka

  • fluent-clj 0.1.0 - Project Fluent for Clojure/script

  • patcho - Patching micro lib for Clojure

  • qclojure - A functional quantum computer programming library for Clojure with simulation backend and visualizations.

  • clojure-plus 1.6.0 - A project to improve experience of using Clojure stdlib

  • fusebox 1.0.11 - An extremely lightweight fault tolerance library for Clojure(Script) and Babashka

  • fs 0.5.26 - File system utility library for Clojure

  • calva 2.0.519 - Clojure & ClojureScript Interactive Programming for VS Code

  • pedestal 0.8.0-beta-1 - The Pedestal Server-side Libraries

  • dataspex 2025.06.7 - See the shape of your data: point-and-click Clojure(Script) data browser

  • joyride 0.0.47 - Making VS Code Hackable like Emacs since 2022

  • pretty 3.4.0 - Library for helping print things prettily, in Clojure - ANSI fonts, formatted exceptions

  • coax 2.0.4 - Clojure.spec coercion library for clj(s)

  • build-uber-log4j2-handler 2.25.0 - A conflict handler for log4j2 plugins cache files for the tools.build uber task.

  • logging4j2 1.0.3 - A Clojure wrapper for log4j2

Permalink

Akima splines

Recently I was looking for spline interpolation for creating curves from a set of samples. I knew about cubic splines, which are piecewise cubic polynomials fitted so that they are continuous up to the second derivative. I almost went ahead and implemented cubic splines using a matrix solver, but then I found that the fastmath Clojure library already provides splines. The fastmath spline interpolation module is based on the interpolation module of the Java Smile library. I saved the interpolated samples to a text file and plotted them with Gnuplot.

(require '[fastmath.interpolation :as interpolation])
(use '[clojure.java.shell :only [sh]])

(def px [0 1 3 4 5 8 9])
(def py [0 0 1 7 3 4 6])
(spit "/tmp/points.dat" (apply str (map (fn [x y] (str x " " y "\n")) px py)))

(def cspline (interpolation/cubic-spline px py))
(def x (range 0 8.0 0.01))
(spit "/tmp/cspline.dat" (apply str (map (fn [x y] (str x " " y "\n")) x (map cspline x))))
(sh "gnuplot" "-c" "plot.gp" "/tmp/cspline.png" "/tmp/cspline.dat")
(sh "display" "/tmp/cspline.png")

I used the following Gnuplot script plot.gp for plotting:

set terminal pngcairo size 640,480
set output ARG1
set xlabel "x"
set ylabel "y"
plot ARG2 using 1:2 with lines title "spline", "/tmp/points.dat" using 1:2 with points title "points"

I used a lightweight configuration of the fastmath library without MKL and OpenBLAS. See following deps.edn:

{:deps {org.clojure/clojure {:mvn/version "1.12.1"}
        generateme/fastmath {:mvn/version "2.4.0" :exclusions [com.github.haifengl/smile-mkl org.bytedeco/openblas]}}}

The result is shown in the following figure. One can see that the spline is smooth and passes through all points; however, it shows a high degree of oscillation:

cubic spline

However, I found another spline algorithm among the fastmath wrappers: the Akima spline. An Akima spline needs at least 5 points. The algorithm first computes the slopes of the lines connecting the points. Then, for each point, it takes a weighted average of the previous and next slope values, where the weights are the absolute differences of the two previous and the two next slopes, i.e. the local curvature. The first two and last two points use special formulas: the first and last points use the nearest slope, and the second and second-to-last points use an average of the neighbouring slopes.
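The weighting rule just described can be sketched in a few lines. The following is an illustrative Python translation of the prose, not the actual fastmath/Smile implementation; the function names are mine, and the boundary formulas are omitted:

```python
def segment_slopes(px, py):
    """Slopes of the straight lines connecting consecutive points."""
    return [(py[i + 1] - py[i]) / (px[i + 1] - px[i]) for i in range(len(px) - 1)]

def akima_derivative(m, i):
    """Akima derivative at interior point i, given the segment slopes m.

    m[i - 1] and m[i] are the slopes left and right of the point; the
    weights are the absolute slope differences (the "curvature") on the
    far sides, so the flatter neighbourhood dominates."""
    w1 = abs(m[i + 1] - m[i])      # variation ahead of the point
    w2 = abs(m[i - 1] - m[i - 2])  # variation behind the point
    if w1 + w2 == 0:               # both sides flat: plain average
        return (m[i - 1] + m[i]) / 2
    return (w1 * m[i - 1] + w2 * m[i]) / (w1 + w2)
```

With the sample points used in this post, the derivative at x = 3 lands close to the shallow left-hand slope rather than the steep right-hand one, which is what damps the oscillation seen with the cubic spline.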

(require '[fastmath.interpolation :as interpolation])
(use '[clojure.java.shell :only [sh]])

(def px [0 1 3 4 5 8 9])
(def py [0 0 1 7 3 4 6])
(spit "/tmp/points.dat" (apply str (map (fn [x y] (str x " " y "\n")) px py)))

(def aspline (interpolation/akima-spline px py))
(def x (range 0 8.0 0.01))
(spit "/tmp/aspline.dat" (apply str (map (fn [x y] (str x " " y "\n")) x (map aspline x))))
(sh "gnuplot" "-c" "plot.gp" "/tmp/aspline.png" "/tmp/aspline.dat")
(sh "display" "/tmp/aspline.png")

Akima spline

So if you have a data set which causes cubic splines to oscillate, give Akima splines a try!

Enjoy!

Update: u/joinr showed how you can use Clojupyter to quickly test a lot of splines.

Permalink

56: XTDB: A Bitemporal database in Clojure

Jeremy Taylor and James Henderson talk about building XTDB, bitemporality, SQL compatibility, and Apache Arrow.

  • Launching XTDB v2

  • Grid Dynamics acquires JUXT

  • LSM Tree

  • The Generational Hypothesis

  • JUXT Cast - Viktor Leis

  • HTAP Processing

  • Are We There Yet - Rich Hickey

  • Jepsen Consistency Tree

  • Jepsen Datomic test

  • Jepsen Postgres test

  • Andy Pavlo - CMU Intro to Database Systems

  • sqllogictest

Permalink

Implementing dynamic scope for Fennel and Lua

I’m continuing my work on fennel-cljlib, my port of clojure.core and some other core libraries, focusing on porting missing functions and features to it. One such feature, which I sometimes miss in Lua and Fennel, is dynamic binding.

The Lua VM doesn’t provide dynamic scoping as a language feature, and Fennel itself doesn’t introduce any concepts like Clojure’s Var. However, we can still implement dynamic scoping that works similarly to Clojure and other Lisps using the debug library. Most of the ideas are based on information from the guides at leafo.net. There’s even a “Dynamic scoping in Lua” guide that implements a slightly different version of this feature, requiring variables to be referenced by name via a dynamic("var_name") call. While this approach is feasible, I wanted something more in line with how other Lisps work, so let’s explore taking it further. Luckily for us, Leafo already has all the necessary guides!

But first things first. I wanted to delay working on dynamic scoping as much as possible because it is a feature that’s hard to get right. I already have some experience with implementing dynamic scoping for one of my older libraries that implemented a condition system from Common Lisp in Fennel. This library, however, required special syntax to access all of the dynamically bound symbols and thus did not actually require anything fancy for it to work.

So what does dynamic binding/scoping mean in a language? If you know about lexical and dynamic scoping and wish to skip this tangent, feel free to do so.

In short, lexically scoped variables exist only where their lexical scope allows them to. For example, a variable defined in a block of code will only exist in that block because it is its lexical scope.

When working with languages that have higher-order functions, I often find myself in a situation where I want to refactor some code that uses anonymous functions by moving them out and giving them a name. Sometimes it’s possible; sometimes it’s not.

For example, imagine I wanted to move out this function from map:

(fn some-func [messages]
  (let [extra-data (other-func)]
    (map
     (fn [message]
       (do-stuff message extra-data))
     messages)))

If I were to do so, we would have a problem:

(fn process-message [message]
  ;; oops, extra-data is now an unknown variable
  (do-stuff message extra-data))

(fn some-func [messages]
  (let [extra-data (other-func)]
    (map process-message messages)))

So extra-data is bound lexically, and thus if we look at the lexical scope of the process-message function, we’ll see that it tries to use extra-data while it’s not defined there. If extra-data were a global variable, it wouldn’t be problematic, but it is a local variable with a lexical scope. We could move the let that binds extra-data to the result of calling other-func into process-message, but let’s say we don’t want to call other-func on each iteration of the map because it’s slow and would repeat the same work. So, what are our options here?

Well, we can make it a closure!

(fn make-message-processor [extra-data]
  (fn process-message [message]
    (do-stuff message extra-data)))

(fn some-func [messages]
  (let [extra-data (other-func)]
    (map (make-message-processor extra-data) messages)))

Now, we pass extra-data only once to the make-message-processor function, and it returns a function that has this variable stored in a closure. However, this is still a lexical scope because, as you can see, extra-data is present there.

In a language that uses dynamic scoping, this could be a whole different story. Let’s look at Clojure; although I wouldn’t recommend it, doing it this way is possible1:

(defn process-message [message]
  (do-stuff message extra-data))

(defn some-func [messages]
  (binding [extra-data (other-func)]
    (mapv process-message messages)))

Here, I assume that extra-data is a dynamic variable that obeys the rules of dynamic scoping. The binding call introduces a dynamic scope within which extra-data is set to the value of (other-func). It acts more like a scoped global variable, or at least you can think of it that way.

To introduce a dynamic scope, Clojure uses binding. Let’s look at it briefly:

(defmacro binding
  "binding => var-symbol init-expr

  Creates new bindings for the (already-existing) vars, with the
  supplied initial values, executes the exprs in an implicit do, then
  re-establishes the bindings that existed before.  The new bindings
  are made in parallel (unlike let); all init-exprs are evaluated
  before the vars are bound to their new values."
  {:added "1.0"}
  [bindings & body]
  (assert-args
    (vector? bindings) "a vector for its binding"
    (even? (count bindings)) "an even number of forms in binding vector")
  (let [var-ize (fn [var-vals]
                  (loop [ret [] vvs (seq var-vals)]
                    (if vvs
                      (recur  (conj (conj ret `(var ~(first vvs))) (second vvs))
                             (next (next vvs)))
                      (seq ret))))]
    `(let []
       (push-thread-bindings (hash-map ~@(var-ize bindings)))
       (try
         ~@body
         (finally
           (pop-thread-bindings))))))

It’s a simple idea: a try block without catch statements, only with a finally clause. Before we enter the try block, we set all mentioned variables to their values, and after we’re done with the body, we restore those values.

In Clojure, dynamic bindings work really well, but this is due to a combination of factors. First, the try support in the JVM is excellent, ensuring that finally will perform its intended function. Additionally, the JVM supports thread-local bindings, so even in a multithreaded context, binding still works. Finally, heh, Clojure has Vars, which makes it all possible.
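The same push/try/finally shape can be sketched outside Clojure as well. Here’s a hypothetical Python analogue, using a toy dictionary of “dynamic vars” rather than Clojure’s thread-local Var machinery (all names are mine):

```python
from contextlib import contextmanager

# Toy registry of pre-declared "dynamic vars" (hypothetical name and var).
DYNAMIC = {"extra_data": None}

@contextmanager
def binding(**new_vals):
    """Set the vars to new values, run the body, then restore the old
    values in finally -- the same shape as Clojure's `binding` macro."""
    saved = {k: DYNAMIC[k] for k in new_vals}  # KeyError if a var doesn't pre-exist
    DYNAMIC.update(new_vals)                   # "push" the new bindings
    try:
        yield
    finally:
        DYNAMIC.update(saved)                  # "pop": restore, even on error
```

Like the Clojure macro, the restore step runs even if the body throws, so the bindings can’t leak out of their scope.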

Now, let’s try to implement the same concept in Fennel!

Implementing dynamic scope in Fennel

Before I descend into madness, I would say that we could do the same thing as in Clojure: set some variables, run code in a protected call, and reset the variables afterward. While this approach would indeed work, I wanted to tidy up my understanding of function environments in Lua. It’s a neat concept that Lua and a few other languages have, but Lua is one of the few languages that actually allows users to manipulate function environments. So, let’s explore this idea.

First, we need a way to forcefully set a function’s environment. In Lua 5.1 this could be done via setfenv; however, it was removed in Lua 5.2. It can be reimplemented like this:

(local setfenv
  (or _G.setfenv
      (fn setfenv [f env i]
        (let [i (or i 1)]
          (case (debug.getupvalue f i)
            :_ENV (doto f (debug.upvaluejoin i (fn [] env) 1))
            nil f
            _ (setfenv f env (+ i 1)))))))

Now, we can set an environment for any function.

But what exactly is this function environment? I’ve realized that I never explained that, so here we go.

In Lua, the environment is a table2 that maps variable names to their values. Starting from Lua 5.2, the environment is represented by a variable called _ENV, which is what we’re testing for in setfenv above. By default, _ENV has the same value as _G, the table that contains all global variables. However, we can change a function’s environment by modifying the value of _ENV.

For instance, in Lua, we can set _ENV to a table, and all global definitions would end up in that table:

a = 0

local function f (t)
    _ENV = t
    a = 42
    b = 322
    return t
end

local t = {}

f(t)

print(a, b) -- 0, nil
print(t.a, t.b) -- 42, 322

This is a cool feature, and we can actually use environments for sandboxing code, but that’s a story for another time. Let’s return to dynamic scoping.

Looking at this, you might get the idea that if we change the function’s environment and set our dynamic variables in it specifically, once we leave the lexical scope of that _ENV, all changes revert to normal because they never happened in a global environment!

Unfortunately, it’s not that simple. Yes, we can change the function’s environment, but it will only affect that specific function. Moreover, this change is permanent, meaning that we’ll have to reset the function back to its original environment. So, it’s not as straightforward as just changing _ENV around the code we want to run. Of course, we could write getfenv, then wrap the entire thing in a pcall, and safely restore the environment once the work is done.

However, we can’t set the environment of just the function we’re calling. _ENV is stored as an upvalue, so we’d need to change it for all of the functions called by the function we wish to invoke with a custom environment. This makes undoing the changes trickier to implement.

Luckily for us, we can bypass the need to roll back the changes to the function’s environment completely! Instead, we can simply clone the function and set its environment as we wish! Here’s an implementation:

(fn clone-function-with-env [f env]
  "Recursively clones the function `f`, and any subsequent functions that it
might call via upvalues.  Sets `env` as environment for the cloned function."
  (let [dumped (string.dump f)
        cloned (load dumped)]
    (var (done? i) (values false 1))
    (while (not done?)
      (case (debug.getupvalue f i)
        (where (name val) (= :function (type val)))
        (let [subf (clone-function-with-env val env)]
          (debug.setupvalue cloned i subf))
        name
        (debug.upvaluejoin cloned i f i)
        nil (set done? true))
      (set i (+ i 1)))
    (setfenv cloned env)))

Finally, we write the function that will call a given function f in a context where the given bindings are dynamically bound:

(fn dynamic-call [bindings f ...]
  "Calls `f` with `bindings` as its root environment."
  (let [new-env (setmetatable bindings {:__index _ENV})
        f* (clone-function-with-env f new-env)]
    (f* ...)))

Along with a convenience macro for using it like a let, but with dynamic binding:

(macro binding [bindings ...]
  (assert-compile (sequence? bindings) "expected a sequence of bindings" bindings)
  (assert-compile
   (= 0 (% (length bindings) 2))
   "expected an even number of forms in binding sequence"
   bindings)
  `(dynamic-call
    ,(faccumulate [res {} i 1 (length bindings) 2]
       (doto res
         (tset (tostring (. bindings i)) (. bindings (+ i 1)))))
    (fn [] ,...)))

Now we can use dynamic binding in Fennel!

Usage example

To illustrate, let’s create some variables that we wish to treat dynamically:

(global foo 21)
(global bar 73)

(print foo bar) ;; 21	73

With globals in place, we can try our binding macro:

(binding [foo 42]
  (print foo bar))
;; 42 73

As can be seen, foo no longer refers to 21, but now it is 42. However, if we try to print foo outside of binding’s scope, we will again get 21:

(binding [foo 42]
  (print foo))
;; 42
(print foo)
;; 21

A keen reader would mention that this example is not so different from using a plain let:

(let [foo 42]
  (print foo))
;; 42
(print foo)
;; 21

And you’d be right! The difference shows up, however, when we put functions into the mix:

(fn f []
  (print "f:" foo bar))

Now, if we were to call it inside of a let, the bindings introduced by it won’t affect the function, because neither foo nor bar is lexically present there:

(let [foo 42
      bar 1337]
  (f))
;; still prints:
;; f: 21 73

This is where binding jumps in. Instead of following lexical binding rules, which are natural for most languages, we now introduce dynamic bindings for foo and bar:

(binding [foo 42
          bar 1337]
  (f))
;; prints:
;; f: 42 1337
(f)
;; prints:
;; f: 21 73

And, as can be seen above, inside binding’s scope, f sees foo as 42 and bar as 1337, while outside of it, the values are still 21 and 73. So, we never actually changed the values of foo and bar; they’re still 21 and 73, respectively. Instead, within the scope of binding we changed how f accesses these variables.

This also works with functions that call other functions:

(fn f []
  (print "f:" foo))

(fn g []
  (f)
  (print "g:" bar))

(fn h []
  ((fn [] (print "h:" foo bar))))

(binding [foo 42
          bar 322]
  (f)
  ;; prints:
  ;; f: 42
  (g)
  ;; prints:
  ;; f: 42
  ;; g: 322
  (h)
  ;; prints:
  ;; h: 42 322
  )

That’s pretty much it! This approach has a lot of flaws though.

First, it will only work on ordinary Lua functions, so no tricks with the __call metamethod, and no native (C) functions.

Second, it won’t work with coroutines either. You can’t use string.dump on something like coroutine.resume directly, so we won’t be able to do (dynamic-call {:foo 42} coroutine.resume some-coroutine). It won’t even work if we wrap coroutine.resume into an anonymous function like (dynamic-call {:foo 42} (fn [] (coroutine.resume coro))), because coro here, while being an upvalue, is not a function, so we can’t clone it.

Finally, it relies on the debug library, and recursive function dumping, which itself is already pretty crazy.

There are probably more things that can go wrong with this.

So why not just set the globals temporarily?

First of all, yes, we could just set the globals temporarily, and restore their values later:

(fn set-globals [globals]
  (collect [name new-val (pairs globals)]
    (let [old-val (. _G name)]
      (set (. _G name) new-val)
      (values name old-val))))

(fn close-handler [old-vals ok? ...]
  (each [name val (pairs old-vals)]
    (set (. _G name) val))
  (if ok?
      ...
      (error ... 0)))

(fn call-with-temp-globals [globals f ...]
  (-> globals
      set-globals
      (close-handler (pcall f ...))))

(call-with-temp-globals {:foo 123 :bar 456} g)
;; prints:
;; f: 123
;; g: 456
(g)
;; prints:
;; f: 21
;; g: 73

While this works, I don’t like the idea that we’re actually changing the values instead of shadowing them in the environment, though this is more akin to the original Clojure implementation. Since Lua is single-threaded, it should not be problematic; however, I think it could still mess things up if we were to introduce some kind of asynchronous scheduler, like in my async.fnl library. This approach also messes up stack traces when an error occurs as a result of calling f, because you can’t re-throw errors in Lua the way you can in other languages. There’s also the option of using the <close> marker introduced in Lua 5.4 to avoid pcall altogether:

local function set_globals(globals)
    local old_values = {}
    for name, new_val in pairs(globals) do
        old_values[name] = _G[name]
        _G[name] = new_val
    end
    return old_values
end

local function close_handler(old_values)
    for name, val in pairs(old_values) do
        _G[name] = val
    end
end

local function dynamic_call_close(globals, f, ...)
    local old_values <close> =
        setmetatable(set_globals(globals), {__close = close_handler})
    return f(...)
end

This would keep stack traces intact, and values would be restored right when we exit dynamic_call_close, but it will only work in Lua 5.4.

Thus, while doing it with pcall is more generic, I wanted to explore the environment approach first since Lua already provides a mechanism for working with function environments. But for now, I think I’ll leave dynamic scoping out of cljlib as I’m not really sold on any of the ways of doing it that I’ve come up with so far.


  1. I really don’t like this way of refactoring this code, but sometimes it is exactly what you need. Not this time, though. The approach with closures is what I’d use in this case in Clojure too. ↩︎

  2. Everything in Lua is a table, because of course it is. ↩︎

Permalink

The Loneliness of Architectural Completion

I wrote this as the final pieces of Ooloi's backend architecture were falling into place. What began as a meditation on infrastructure and isolation turned into something more personal about mastery, loss, and the strange kind of solitude that comes with finishing something no one else can see. This isn't documentation. It's a reflection.

/ Peter

There's a peculiar melancholy that settles over you when you near the completion of something genuinely complex, something that has consumed many months of concentrated thought and represents the synthesis of decades of accumulated understanding. I find myself in precisely this position with Ooloi's backend architecture, and the psychological reality proves a bit more complicated than I'd anticipated.

It's rather like the post-coital moment after particularly intense sex: that strange combination of satisfaction, exhaustion, and existential emptiness when the driving urgency suddenly lifts. You've achieved something profound, yet find yourself staring at the aftermath wondering what, precisely, comes next. I'm smoking a conceptual cigarette, as it were, contemplating the peculiar loneliness that follows architectural completion.

In a matter of days, I'll complete the final piece: the endpoint resolution system for slurs and ties that uses the framework I've spent months building. Once that's finished, the backend will be conceptually complete – 15,000+ tests passing, STM transactions handling 100,000+ operations per second, Vector Path Descriptors enabling elegant client-server communication, and a transducer-based piece-walker that coordinates musical time with mathematical precision. The piece-walker literally performs the musical score, traversing it in time just as I once performed Vierne at the organ.

To anyone versed in these technical domains, that represents serious work. To everyone else, it's incomprehensible gobbledygook happening 'under the hood' of something they might one day use to write music. And therein lies the first layer of loneliness: having solved genuinely difficult problems that almost nobody can fully appreciate.

The Weight of Invisible Architecture

Software architecture, when done properly, is invisible to its eventual users. They should never know about the STM transaction coordination that keeps their concurrent edits from colliding, or the VPD system that allows them to reference musical elements without direct object pointers, or the careful functional design that ensures their work remains consistent across complex operations.

This invisibility is precisely the point – and precisely the problem. I've spent months solving challenges that required rather more thought than I'd initially anticipated, creating abstractions that handle the full complexity of musical notation whilst remaining elegant enough to extend indefinitely. Yet once complete, this work vanishes into infrastructure. The better I've done my job, the less visible it becomes.

There's something profoundly isolating about completing work that embodies your best thinking but can never be fully shared. The musicians who will eventually use Ooloi might appreciate its responsiveness or reliability, but they'll never see the polymorphic dispatch system that makes complex musical operations feel effortless, or understand why the pure tree structure with ID references elegantly solves problems that have plagued notation software for decades.

Clojure for Closure

The choice of Clojure wasn't merely technical: it was also psychological. Having started programming in Lisp in 1976, having built Common Lisp compilers and interpreters, having spent $7.5 million of investor money and then having unresolved feelings about Igor Engraver's death for a quarter of a century, returning to a Lisp dialect feels like completing a circle that's been open far too long.

Clojure for closure, if you will.

But this completion reveals its own complexity. I'm 64, carrying more than five decades of programming experience and a parallel career as an internationally performed composer – an intersection that doesn't exactly suffer from overcrowding. The same mind that wrote what is apparently the most internationally performed Swedish opera now architects STM concurrency patterns. The same hands that have performed French romantic organ works now implement temporal traversal through transducers.

This convergence of domains should feel like triumph. Instead, it often feels like exile – not belonging entirely to the musical world I've moved beyond, nor quite fitting into the tech world that didn't shape me. I don't belong anywhere, really. The isolation isn't just professional; it's existential.

The Economics of Art and Pragmatism

I must confess something that still sits uneasily: I've essentially given up composing, despite international success, because conditions in Sweden for composers have deteriorated to the point where I had to prioritise my pension. There's an unwritten opera I'd like to complete – I have the text ready – but it will likely never come to fruition.

Whether this represents economic necessity or conscious rejection of a cultural environment I found increasingly superficial and performative, I honestly can't say. Perhaps both. The exact proportion remains unclear even to myself, and I've learned to be comfortable with that ambiguity. Life rarely offers the clean motivations we prefer in retrospect.

What I can say is this: the creative energy that might have gone into that final opera has found other expression. The same understanding of temporal flow, structural relationship, and expressive possibility that shaped my musical work now manifests in software architecture. It's sublimation in the deepest sense: not compromise, but transformation.

The Paradox of Completion

Here's what nobody tells you about completing something genuinely substantial: the moment of architectural completion isn't triumph, it's vertigo. All those months of wrestling with complex problems, of holding intricate systems in your head, of solving puzzles that demanded your full intellectual capacity – suddenly that pressure lifts, and you're left staring at what you've built with a strange mixture of satisfaction and emptiness.

The backend is nearly finished. The hard problems are solved. The foundation is solid. And now comes the work that should be 'easier': creating user interfaces, handling the cultural and aesthetic dimensions of human interaction, making decisions about visual design and workflow that seem trivial after months of STM transaction coordination but are actually far more treacherous.

Technical problems have logical solutions. Human interface problems have cultural solutions, psychological solutions, aesthetic solutions; domains where being right isn't enough, where the same mind that can architect transducer pipelines struggles with questions like 'should this button be blue or green?' not because the technical challenge is greater, but because the criteria for success shift from mathematical to cultural.

The Transition Challenge

Moving from backend completion to frontend implementation isn't just a technical transition. It's a psychological one. After months of building infrastructure that only I can see, I must now create experiences that others will judge. After solving problems where elegance and correctness align, I must now solve problems where user perception and technical reality often diverge.

The loneliness of architectural completion isn't just about having done complex work in isolation. It's something else entirely. The 'easy' work ahead may be harder in ways that have nothing to do with computational complexity. It's about moving from mathematical elegance to human messiness, from logical purity to cultural compromise.

Most acutely, it's about the strange position of being someone who carries irreplaceable knowledge – the synthesis of decades in both musical and computational domains – and wondering how to encode that understanding into forms that others can inherit and extend. Not just the technical patterns, but the aesthetic judgements, the performance intuitions, the hard-won understanding of how creative work actually happens.

What Comes Next

In a couple of weeks, when the final endpoint resolution system is working and the backend architecture is truly complete, I'll begin the gRPC implementation that bridges backend and frontend. Then comes the 'Hello World' window – Ooloi's first visible manifestation, however simple.

The psychological challenge isn't technical uncertainty. I've built user interfaces before, in a previous technological era. It's the weight of transition: from solving invisible problems to creating visible experiences, from mathematical elegance to cultural navigation, from the loneliness of architectural completion to the different loneliness of human interface design.

The work continues, but its nature changes completely. After months of building the engine, it's time to build the car. And to discover what new forms of isolation await when mathematical precision meets human perception.

For now, I sit with the strange melancholy of nearly completing something that matters enormously but whose full significance can be communicated to virtually no one. It's a peculiar form of creative isolation – not the romantic loneliness of the misunderstood artist, but the technical loneliness of someone who happens to carry knowledge that exists at intersections most people never visit.

Clojure for closure, indeed. But it turns out that closure reveals as much as it resolves.

Time for a smoke.

Permalink

Building the Future of Clojure: Welcoming Christoph Neumann as Nubank’s First Clojure Developer Advocate

At Nubank, technology is not just a tool — it’s how we rethink financial services, empower millions across Latin America, and challenge the status quo. 

Since our very first line of code in 2013, Clojure has been at the heart of this mission: a simple yet powerful language that has helped us scale with quality, build reliable systems, and cultivate a unique engineering culture.

In 2023, Clojure celebrated its 15th anniversary — a milestone that reflects not only its longevity but also its growing influence in companies like Nubank. 

Today we are pleased to announce that Christoph Neumann has joined Nubank as the first Clojure Developer Advocate! Christoph will be focusing on ways to support the existing Clojure community and grow the community through outreach and development.

Christoph’s background is in programming languages and software engineering. He’s worked in manufacturing, web and mobile application development, and live TV and sports production. As his career progressed, he moved from academia into industry, and finally to entrepreneurship.

We got together with Christoph to ask a few questions as he takes on this important role.

Welcome Christoph!

What was your introduction to Clojure and the community?

I heard about Clojure shortly after Rich Hickey announced it publicly. I had recently started working at HP, and a coworker, Keith Irwin, introduced me to Clojure and gave me my first Clojure demo. At the time, I thought Clojure was a fun toy for Lisp fanatics. I didn’t see the value at all!

Prior to HP, I was a PhD student at Oregon State, and I was fascinated by research in programming languages. At the time, I thought languages with big feature sets and type systems were the way to revolutionize programming. Lisp was “old” and “done” in my mind.

It took me a few years to take Clojure seriously! Keith, now a friend, helped me see all of the complexity hiding behind the big language features and how Clojure was much simpler. After I saw Rich’s talk, “Simple Made Easy”, I decided to redouble my efforts to take Clojure seriously, and Keith helped me get through many hurdles.

In those early days, Keith was my Clojure community. His kind persistence helped me get my footing in Clojure, then I was able to turn to the broader community online. Prior to Keith’s help, I was just confused by the resources I found online. Clojure was so different than the other languages I had used professionally!

What are the attributes of Clojure that you have found most useful in your own work and how has that kept you engaged over time?

I could talk about this all day! Instead, I’ll keep it brief.

  1. Clojure is safe
  2. Data is first class
  3. Clojure is live and interactive

Clojure is safe. By default, Clojure doesn’t allow code to change data in place (aka “immutability”). That may sound like a small thing, but it has huge implications. It eliminates whole categories of bugs. It helps you reason about code behavior as the codebase grows and the runtime scales.

Data is first class. Clojure separates information (“data”) from computation (“functions”). Data is represented in a generic way using Clojure’s built-in data structures like lists, maps, and sets. That allows Clojure to have a huge library of built-in functions to work with that generic data. Furthermore, Clojure’s built-in data structures use a human-friendly notation, so it is easy to define data, inspect it, and save it without any specialized code.
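Both properties fit in a few lines at the REPL (a minimal illustration of my own, not from the interview):

```clojure
;; Immutability: "updating" a map returns a new value, the original is untouched.
(def account {:owner "Ada" :balance 100})
(def updated (update account :balance + 50))

account  ;; => {:owner "Ada", :balance 100}  -- unchanged
updated  ;; => {:owner "Ada", :balance 150}

;; Data first: generic built-in functions work on any such data.
(filter #(> (:balance %) 120) [account updated])
;; => ({:owner "Ada", :balance 150})
```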

Clojure is live and interactive. With Clojure, you don’t compile and restart your application during development. Instead, you launch the Clojure runtime, connect it to your editor, and send source code to the runtime to evaluate on the fly (aka “connected REPL”). The whole application is in memory along with all the state. You can inspect any part of it, add to it, and even redefine things on the fly, without restarting.

For me, developing in Clojure has been an entirely different experience than other languages. Development feels so fast and visual. I can figure out an initial solution quickly and evolve it into something easy to understand and solid to maintain.

How do you think we can introduce new people to Clojure?

Clojure is quite different from the most popular languages, so that makes it a challenge to introduce people to Clojure. As I mentioned, I needed someone to show me the way and help me “get it”. So many things are different. Some differences are obvious, like the syntax, but some differences are non-obvious like the live-coding workflow.

Clojure is a “purely functional” programming language, so the differences run deeper than tooling, syntax, and workflow. This second level of learning involves a mind shift, but it opens up architectures and solutions that improve long-term maintenance, performance, and reusability.

I think the best way to introduce people to Clojure is to demonstrate the whole package in the small and work up from there: syntax, generic data, functional concepts, tooling, structural editing, and the connected REPL workflow.

What’s your approach to balancing education, outreach, and feedback collection in advocacy work?

All three are essential and interconnected. Without outreach, no one hears, but once they hear, they must learn. To learn they need resources, but those resources can’t be improved without feedback. It’s a continual cycle of creating, sharing, and improving in service of a core mission.

My core mission is to ensure developers have a phenomenal experience with Clojure right from the start and continuing on as they learn and grow.

As a community grows, network effects come into play, so advocacy work involves more organizing to encourage those indirect effects. The work involves forming systems, structures, and partnerships that help the community create and spread their own work in effective, sustainable ways.

But even when the community is thriving, it’s important to keep up the work of the core mission, because no programming language can thrive if the newest members of the community become frustrated and leave.

What are you most looking forward to in this new role?

Oh, that’s an easy choice. By far, I’m the most excited about meeting developers that are brand new to Clojure! If you’re curious, just getting started, or find yourself skeptical (like I did), I’d love to hear from you.

Of course, I love the broader Clojure community too! I’d love to hear about your experience with Clojure and the community. Come find me online.

Where can people find you online?

The best place to reach me is @neumann in the Clojurians Slack (http://clojurians.net/). It’s a friendly place for Clojure beginners to experienced developers alike!

If you want more information about me and my mission, visit christophneumann.dev.

Building the Future Together

At Nubank, we believe that strong communities and open knowledge sharing are key to driving meaningful technological progress. Welcoming Christoph as our first Clojure Developer Advocate marks an exciting new chapter in our commitment to supporting the Clojure ecosystem and empowering developers around the world.

We’re thrilled to continue contributing to the growth and evolution of Clojure — not just as a language, but as a vibrant community of thinkers, builders, and innovators.

If you’re curious about Clojure or eager to collaborate, don’t hesitate to reach out to Christoph or any of us at Nubank. Together, let’s keep building the future of Clojure.

The post Building the Future of Clojure: Welcoming Christoph Neumann as Nubank’s First Clojure Developer Advocate appeared first on Building Nubank.

Permalink

The Musical Journey to Understanding Transducers: Building Ooloi's Piece-Walker

How solving a real music notation problem revealed the perfect transducer use case

The Problem That Started It All

I found myself confronting what appeared to be a deceptively simple requirement for Ooloi: 'Resolve slur endpoints across the musical structure'.

Rather straightforward, one might assume: simply traverse the musical structure and locate where slurs terminate. But then, as so often is the case, the requirements revealed added complexity:
  • Slur endpoints must be discovered in temporal order (measure 1 before measure 2, naturally)
  • Yet slurs can span multiple voices on the same staff
  • Or cross between neighbouring staves within an instrument
  • And precisely the same temporal traversal logic would be required for MIDI playback
  • Not to mention visual layout calculations and formatting
  • And harmonic analysis
  • And collaborative editing conflict resolution

It became apparent that I needed a general-purpose piece traversal utility: something handling temporal coordination whilst remaining flexible enough for multiple applications. Rather than construct something bespoke (and likely regrettable), I researched the available approaches within Clojure's ecosystem.

That's when I recognised this as precisely what transducers were designed for.

The Architecture Recognition

Allow me to demonstrate the pattern I anticipated avoiding. Without a general traversal utility, each application would require its own approach:
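The post's original code didn't survive aggregation, so here is a hypothetical reconstruction (the names and data shapes are mine, not Ooloi's actual API). A "piece" is modelled as a seq of item maps, and each function copies the same temporal traversal:

```clojure
;; Hypothetical sketch: a piece as a seq of item maps.
(def piece
  [{:measure 2 :kind :note :pitch 64}
   {:measure 1 :kind :note :pitch 60}
   {:measure 1 :kind :slur-end :id 1}
   {:measure 3 :kind :slur-end :id 2}])

(defn temporal-items [piece]
  (sort-by :measure piece))

;; Three consumers, each duplicating the traversal:
(defn slur-endpoints [piece]
  (->> (temporal-items piece) (filter #(= :slur-end (:kind %))) vec))

(defn midi-pitches [piece]
  (->> (temporal-items piece) (filter #(= :note (:kind %))) (map :pitch) vec))

(defn layout-measures [piece]
  (->> (temporal-items piece) (map :measure) distinct vec))
```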
Three functions, identical traversal logic, different transformations. Exactly the architectural smell I wanted to avoid from the outset.

This was precisely Rich Hickey's transducer insight made manifest: "What if the transformation was separate from the collection?"

The Transducer Revelation

What if I could write the temporal traversal once, then apply different transformations to it?
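Along the lines of this hypothetical sketch (again, the names are illustrative, not Ooloi's real API): the traversal is written once, and the transformation is passed in as a transducer.

```clojure
(defn temporal-items [piece]
  (sort-by :measure piece))

(defn walk-piece
  "Run transducer xf over the piece's items in temporal order."
  [xf piece]
  (into [] xf (temporal-items piece)))

(def piece
  [{:measure 2 :kind :note :pitch 64}
   {:measure 1 :kind :slur-end :id 1}
   {:measure 1 :kind :note :pitch 60}])

;; Same traversal, different transformations:
(walk-piece (filter #(= :slur-end (:kind %))) piece)  ; slur resolution
(walk-piece (comp (filter #(= :note (:kind %)))
                  (map :pitch))
            piece)                                    ; MIDI preparation
```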
Objective achieved: one traversal algorithm, many applications.

But its architectural reach turned out to be even more profound.

The Architectural Insight

The design decision hinged upon recognising that I was conflating two distinct concerns: the mechanism of traversal and the logic of transformation. This wasn't merely about avoiding the tedium of duplicated code (though that would have been reason enough) but rather about establishing clean architectural boundaries that would serve the system's long-term evolution.

Consider the conceptual shift this separation enabled:

Rather than thinking in terms of specific operations upon musical structures:
  • 'I need to find slur endpoints in this piece'
  • 'I need to generate MIDI from this piece'
  • 'I need to calculate layout from this piece'

The transducer approach encouraged thinking in terms of composed processes:
  • 'I need to traverse this piece temporally, then apply endpoint filtering'
  • 'I need to traverse this piece temporally, then apply MIDI transformation'
  • 'I need to traverse this piece temporally, then apply layout transformation'

The traversal thus became reusable infrastructure, whilst the transformation became pluggable logic. This distinction would prove invaluable as the system's requirements expanded.

The Broader Applications

What I hadn't anticipated was how broadly applicable the resulting abstraction would prove. After implementing the piece-walker for attachment resolution, I discovered it elegantly supported patterns I hadn't originally considered, each demonstrating the composability that emerges naturally from separating traversal concerns:
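The examples were lost in aggregation; the composability looks roughly like this hypothetical sketch (fragment names are mine):

```clojure
;; Small, independently testable transducer fragments:
(def notes-only   (filter #(= :note (:kind %))))
(def in-measure-1 (filter #(= 1 (:measure %))))
(def pitches      (map :pitch))

;; Mix and match -- every composed pipeline runs over the same
;; temporally-sorted traversal:
(into [] (comp in-measure-1 notes-only pitches)
      (sort-by :measure
               [{:measure 1 :kind :note :pitch 60}
                {:measure 2 :kind :note :pitch 64}
                {:measure 1 :kind :rest}]))
;; => [60]
```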
Each is built from simple, testable pieces. And they all inherit the same temporal coordination guarantee. This composability emerged naturally from the transducer design: a pleasant architectural bonus.

The Performance Characteristics

As one would expect from a well-designed transducer, memory usage remained constant regardless of piece size: a particularly crucial consideration when dealing with the sort of orchestral scores that might contain hundreds of thousands of musical elements.

Consider the alternative approach, which would create intermediate collections at each processing step:
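A hypothetical equivalent (the data shapes are illustrative, not Ooloi's): each step of the `->>` chain realizes its own intermediate lazy sequence.

```clojure
(def piece
  [{:measure 2 :kind :note :pitch 64}
   {:measure 1 :kind :note :pitch 60}])

(defn midi-note-on [pitch] {:event :note-on :pitch pitch})

(->> (sort-by :measure piece)       ; traversal
     (filter #(= :note (:kind %)))  ; intermediate seq #1
     (map :pitch)                   ; intermediate seq #2
     (mapv midi-note-on))           ; final collection
```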
The transducer version processes one item at a time:
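Sketched hypothetically (illustrative data, not Ooloi's API): the composed transducer pushes each item through the whole pipeline before touching the next, so no intermediate sequences are built.

```clojure
(def piece
  [{:measure 2 :kind :note :pitch 64}
   {:measure 1 :kind :note :pitch 60}])

(defn midi-note-on [pitch] {:event :note-on :pitch pitch})

(into []
      (comp (filter #(= :note (:kind %)))  ; one pass,
            (map :pitch)                   ; no intermediate
            (map midi-note-on))            ; collections
      (sort-by :measure piece))
```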
Same result, constant memory usage. This exemplifies what Rich meant by 'performance without compromising composability'.

Demystifying Transducers

Transducers suffer from an unfortunate reputation for complexity, often relegated to 'advanced topics' when they needn't be. This is particularly galling given that they're fundamentally straightforward when you encounter the right use case, which the musical domain provides in abundance.

Think of transducers as 'transformation pipelines' that work with any data source, much as one might design AWS data processing workflows that operate regardless of whether the data arrives from S3 buckets, database queries, or API streams:
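For instance, one transducer can be fed from several kinds of sources:

```clojure
;; One pipeline definition...
(def xf (comp (filter even?) (map #(* % %))))

;; ...three different data sources:
(into [] xf (range 10))        ;; => [0 4 16 36 64]  (a collection)
(transduce xf + 0 (range 10))  ;; => 120             (a reduction)
(sequence xf (range 10))       ;; => (0 4 16 36 64)  (a lazy sequence)
;; core.async channels accept the same xf too: (chan 10 xf)
```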
The pipeline stays the same. The data source changes.

In Ooloi:
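The original snippet is missing here; hypothetically (Ooloi's real names differ), the shape is one walker with pluggable logic:

```clojure
;; Hypothetical sketch -- illustrative names, not Ooloi's actual API.
(defn walk-piece [xf piece]
  (into [] xf (sort-by :measure piece)))

(def piece [{:measure 1 :kind :slur-end :id 7}
            {:measure 1 :kind :note :pitch 60}])

(walk-piece (filter #(= :slur-end (:kind %))) piece)                  ; slurs
(walk-piece (comp (filter #(= :note (:kind %))) (map :pitch)) piece)  ; MIDI
(walk-piece (map :measure) piece)                                     ; layout
```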

Why This Matters Beyond Music

The piece-walker solved a universal software problem: How does one avoid duplicating traversal logic whilst maintaining performance and composability?

This pattern applies everywhere:
  • Web scraping: Same page traversal, different data extraction
  • Log analysis: Same file reading, different filtering and aggregation
  • Database processing: Same query execution, different transformations
  • Image processing: Same pixel iteration, different filters

Transducers provide the infrastructure for "traverse once, transform many ways."

The Bigger Picture

Building the piece-walker demonstrated that transducers aren't an abstract functional programming concept. They're a practical design pattern for a specific architectural problem: separating the concerns of traversal from transformation.

The musical domain made this separation particularly clear because the temporal coordination requirements are so explicit. When you need the same traversal logic applied with different transformations, transducers provide the elegant answer.

This separation makes code:
  1. More testable (test traversal and transformations independently)
  2. More reusable (same traversal, different applications)
  3. More maintainable (one place to optimise traversal performance)
  4. More composable (mix and match transformations)

What's Next?

The piece-walker is documented thoroughly in our Architecture Decision Record for those wanting technical details. But the real value lies not in the musical specifics but in observing how transducers address genuine architectural challenges with apparent effortlessness.

The next time you find yourself contemplating similar data processing logic across multiple contexts, you might ask: 'What if the transformation was separate from the collection?'

You may well recognise your own perfectly suitable transducer use case.

References and Further Reading

Rich Hickey's Essential Talks
  • "Transducers" - Strange Loop 2014 - The definitive introduction to transducers by their creator. This talk explains the core concepts, motivation, and design philosophy behind transducers.
  • "Inside Transducers" - Clojure/conj 2014 - A deeper technical dive into the implementation details of transducers, focusing on the internals and integration with core.async.

Official Documentation
  • Clojure Reference: Transducers - The official Clojure documentation provides comprehensive coverage of transducer usage, with examples and best practices.
  • ClojureDocs: transduce - Community-driven documentation with practical examples of the transduce function.

Educational Resources
Advanced Topics

Permalink

When You Get to Be Smart Writing a Macro

Day-to-day programming isn’t always exciting. Most of the code we write is pretty straightforward: open a file, apply a function, commit a transaction, send JSON. Finding a problem that can be solved not the hard way, but the smart way, is quite rare. I’m really happy I found this one.

I’ve been using hashp for debugging for a long time. Think of it as a better println. Instead of writing

(println "x" x)

you write

#p x

It returns the original value, is shorter to write, and doesn’t add an extra level of parentheses. All good. It even prints the original form, so you know which value came from where.

Under the hood, it’s basically:

(defn hashp [form]
  `(let [res# ~form]
     (println '~form res#)
     res#))

Nothing mind-blowing. It behaves like a macro but is substituted through a reader tag, so it’s a defn instead of a defmacro.

Okay. Now for the fun stuff. What happens if I add it to a thread-first macro? Nothing good:

user=> (-> 1 inc inc #p (* 10) inc inc)
Syntax error macroexpanding clojure.core/let at (REPL:1:1).
(inc (inc 1)) - failed: vector? at: [:bindings] spec: :clojure.core.specs.alpha/bindings

Makes sense. Reader tags are expanded first, so (* 10) was replaced with (let [...] ...), and then -> tried to thread (inc (inc 1)) into the let’s bindings position. Wouldn’t fly.

We can invent a macro that would work, though:

(defn p->-impl [first-arg form fn & args]
  (let [res (apply fn first-arg args)]
    (println "#p->" form "=>" res)
    res))

(defn p-> [form]
  (list* 'p->-impl (list 'quote form) form))

(set! *data-readers* (assoc *data-readers* 'p-> #'p->))

Then it will expand to

user=> '(-> 1 inc inc #p-> (* 10) inc inc)

(-> 1
  inc
  inc
  (p->-impl '(* 10) * 10)
  inc
  inc)

and, ultimately, work:

user=> (-> 1 inc inc #p-> (* 10) inc inc)
#p-> (* 10) => 30
32

Problem? It’s a different macro. We’ll need another one for ->>, too, so three in total. Can we make just one instead?

Turns out you can!

The trick is to use a probe: we produce an anonymous function of two arguments, then call it in place with a single argument (::undef) and see where the other argument goes.

Inside, we check where ::undef landed: if it’s in the first position, we’re inside ->>; otherwise, ->:

((fn [x y]
   (cond
     (= ::undef x) <thread-last>
     (= ::undef y) <thread-first>))
 ::undef)

Let’s see how it behaves:

(macroexpand-1
  '(-> "input"
     ((fn [x y]
        (cond
          (= ::undef x) <thread-last>
          (= ::undef y) <thread-first>))
      ::undef)))

((fn [x y]
   (cond
     (= ::undef x) <thread-last>
     (= ::undef y) <thread-first>))
   "input" ::undef)

(macroexpand-1
  '(->> "input"
     ((fn [x y]
        (cond
          (= ::undef x) <thread-last>
          (= ::undef y) <thread-first>))
      ::undef)))

((fn [x y]
   (cond
     (= ::undef x) <thread-last>
     (= ::undef y) <thread-first>))
   ::undef "input")

If we’re not inside any threading macro, then no substitution happens and our function is simply called with a single ::undef argument. We handle this by providing an additional arity:

((fn
   ([_]
    <normal>)
   ([x y]
    (cond
      (= ::undef x) <thread-last>
      (= ::undef y) <thread-first>)))
   ::undef)
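Putting the pieces together, a simplified sketch of what the full reader function might look like (my reconstruction, not the real Clojure+ implementation, which handles more cases such as bare symbols):

```clojure
;; Simplified sketch. Assumes form is a function call like (* 10).
(defn hashp [form]
  (let [[f & args] form]
    `((fn
        ([_#]                              ; not inside -> or ->>
         (let [res# ~form]
           (println "#p" '~form "=>" res#)
           res#))
        ([x# y#]
         (let [res# (cond
                      (= ::undef x#) (~f ~@args y#)    ; threaded last
                      (= ::undef y#) (~f x# ~@args))]  ; threaded first
           (println "#p" '~form "=>" res#)
           res#)))
      ::undef)))
```

Wired up the same way as before: (set! *data-readers* (assoc *data-readers* 'p #'hashp)).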

And boom:

user=> #p (- 10)
#p (- 10)
-10

user=> (-> 1 inc inc #p (- 10) inc inc)
#p (- 10)
-7

user=> (->> 1 inc inc #p (- 10) inc inc)
#p (- 10)
7

#p was already very good. Now it’s unstoppable.

You can get it as part of Clojure+.

Permalink

Clojure Deref (June 6, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

Libraries and Tools

New releases and tools this week:

Permalink

A look into Nubank’s tech hub in Berlin

Today, Nu is one of the largest digital financial services platforms in the world, with over 118 million customers across Brazil, Mexico, and Colombia. From day one, our goal has been to challenge the status quo of the financial industry—using proprietary technology and data to build innovative, easy-to-use products that actually solve real problems.

Our mission is simple: to fight financial complexity and empower people. That’s why we create end-to-end solutions that support customers throughout their entire financial journey—promoting access, transparency, and real progress through responsible credit.

With an efficient and scalable business model, we’re able to combine low operational costs with sustainable growth. This approach has earned international recognition in rankings like Time’s “100 Most Influential Companies,” Fast Company’s “Most Innovative Companies,” and Forbes’ “World’s Best Banks.”

So why have a tech hub in Berlin?

Our mission is to give people back control of their financial lives. We make the extraordinary happen by consistently challenging the status quo and driving innovation.

Some pillars support us in achieving this goal – and technology is certainly one of them.

Many of our teams need to analyze large datasets to make the best decisions for the customer – be it, for example, developing a product, making our service smarter and more efficient, or better understanding how people interact with our services. And this is where our technology hub in Berlin, Germany, opened in late 2017, comes in.

Situated in the Mitte neighborhood, a synthesis of Berlin’s past, present, and future, the office is home to a growing number of cross-functional teams. They are dedicated to maintaining and evolving a world-class self-service data platform, enabling nearly every team at Nubank to leverage data for automated decision making, financial control and reporting, strategic analysis, and more.

Our Berlin office includes meeting rooms equipped with Zoom and Google Meet features, phone booths for focused individual work, and a game room designed for relaxation and team-building interactions.

The data infrastructure teams

Our data infrastructure teams are responsible for building services that are scalable and resilient for processing, managing, and monitoring large-scale distributed data processing systems. That includes designing well-thought-out, well-documented APIs, backed by scalable data ingestion, processing, and serving systems.

We make heavy use of Nubank’s standard tech stack, including building Clojure microservices. But given our focus on data processing, we also continually explore and leverage big data technologies for batch and streaming processing, building machine learning models, etc.

Our data infrastructure teams do not build data pipelines, models, or analyses for specific business areas. Instead, we provide a managed self-service platform that empowers business analysts, data scientists, analytics engineers, and many other roles to leverage data with autonomy, supporting a high-performance culture aligned with Nubank’s values.

In addition to our data infrastructure teams, a growing number of teams from Nubank’s wider Engineering Horizontal organization have a presence in the Berlin tech hub. They are focused on areas such as core infrastructure automation, developer experience, databases, mobile development platform, distributed systems R&D, among others.

Why open a tech hub in Berlin?

There are several reasons for having a team in Germany.

The first one is that Berlin is a fantastic city not only for the quality of life, but also for being a center of activities related to technology. The city is a hub for events, companies and data processing research – and it is also a central point that gives easy access to other cities in Europe.

Another reason is that the city has a friendly infrastructure for foreigners and many inhabitants speak English – which greatly facilitates communication.

Finally, and related to the previous points, investing in a European tech hub allows us to foster one of our main values, that is to build strong and diverse teams. At Nubank, diversity and collaboration help us achieve extraordinary outcomes, as we’re committed to being a good reference for career growth.

Nubank’s work at our Berliner tech hub

The atmosphere in the office is very similar to that of our São Paulo headquarters. We don’t have a formal dress code (wearing shorts and flip flops is absolutely normal) and we encourage anyone who wants to bring their pets to work.

As Nubank is a company that actively seeks diversity at the workplace and gender equality, Berlin is a perfect blend of a great pool of talent with diverse backgrounds.

Nowadays, we have over 40 Nubankers from twelve nationalities (and growing), coming from three different continents. We arrived at the office from Brazil, France, Italy, India, Poland, United States, Peru, Chile, Portugal, Ukraine, Croatia and, clearly, Germany. 

Nubank operates as a global company. In addition to our offices in the markets where we currently operate—Brazil, Mexico, and Colombia—and our tech hub in Berlin, we also have technology centers in Uruguay and the United States, home to Cognitect (creators of Clojure and Datomic), which became part of Nubank in 2020.

In Berlin, our way of working is built around small, autonomous teams. That means more agility, independent decision-making, and less day-to-day bureaucracy. But autonomy doesn’t mean isolation—we remain deeply connected to the rest of Nubank, sharing the same values, the same mission, and a strong commitment to creating real impact in people’s lives.

We’re looking for diverse technical talent to help build this vision with us—from software engineers to data infrastructure specialists, from product leaders to systems engineers. Here, every person plays a key role in building financial solutions that are simpler, more accessible, and truly innovative for millions of customers.

The post A look into Nubank’s tech hub in Berlin appeared first on Building Nubank.

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.