Evolving Nubank’s Architecture Toward Thoughtful Platformization

Nubank Engineering Meetup #12 offered valuable insights into the company’s technical evolution — how we’ve reshaped our platforms, systems, and career paths over the years. The conversation was led by Bel Gonçalves, now a Principal Engineer at Nubank, who shared the lessons learned over nearly a decade building technology at the company.

More than an inspiring personal story, the meetup gave attendees a deep dive into the topics that shape our daily engineering work — from architectural decisions to the development of large-scale technical career structures.

From a single product to a platform-based architecture

Nubank’s original architecture was simple, like many early-stage startups: built for a single product (credit card) in a single country (Brazil). But as we expanded by adding financial services, entering new markets, and navigating diverse regulations, this structure had to evolve.

The answer was platformization. The challenge was to build systems flexible enough to support different products, across multiple countries, with unique requirements, without rewriting everything from scratch each time. This meant clearly separating product-specific logic such as localized business rules from reusable components such as authorization engines and card and account management.

By making services more parameterizable, we accelerated development, reduced redundancy, and maintained the resilience needed to operate at scale. The journey included tough decisions, such as extracting critical code from legacy services in production, rewriting foundational components, and avoiding overengineering by balancing generality with simplicity.

Platforms that power scalability

One of the clearest examples of this shift was the creation of the card platform, which decoupled debit and credit functionality from specific products or geographies. What used to be handled by a single service (the former CreditCardAccounts which, as the name suggests, was specific to credit cards) was restructured into a set of more flexible services capable of adapting to different realities, such as Brazil’s combo card and Mexico’s separate debit and credit cards.

Another critical milestone was the evolution of our Authorizer system, responsible for real-time transaction approvals. As one of the most sensitive parts of our operation, its migration from physical data centers to the cloud required low latency and high availability, especially to maintain communication with partners like Mastercard. This project required not only technical excellence but also meticulous planning to avoid any disruption to customers.

Standardization as the backbone of consistent engineering

To support engineering scalability, Nubank adopted a consistent approach rooted in standardization. All teams work with Clojure, build microservices, and favor asynchronous communication. This shared foundation encourages reuse, lowers cognitive load, and enables more predictable architectural evolution.

Our use of both Clojure and the Datomic database, both rooted in functional programming, also reflects a focus on safety and predictability. Immutability, for example, is not just a design choice—it’s a necessity to prevent harmful outcomes from incorrect system states.

This level of consistency helps teams replicate proven patterns and best practices across different contexts, accelerating product and market expansion.

The technical career path at Nubank

As our architecture has matured, so too has our technical career framework. The path includes clear milestones: engineer, senior, lead, staff, senior staff, principal, and finally, distinguished engineer. Each level brings increasing responsibility, not just in code, but in system-wide and strategic influence.

Unlike traditional models that nudge engineers into management roles, Nubank supports the growth of deep technical careers. Engineers can specialize in a given technology or take on broader roles, becoming cross-team technical leaders, especially in products or platforms with many stakeholders.

We also encourage movement between tracks. Experience in people leadership, for instance, can add perspective and empathy to those returning to hands-on technical work, strengthening business understanding and collaboration skills along the way.

Engineering at the intersection of tech, product, and business

In our cross-functional environment, engineering goes far beyond code. Engineers are involved in product decisions, help shape go-to-market strategies, and openly discuss trade-offs with stakeholders from other disciplines. Collaboration with data, design, and business teams is part of our daily rhythm, improving both product quality and creative thinking.

This collaborative model means engineers need not only technical depth, but also strong communication, active listening, and negotiation skills.

Culture, trust, and inclusion as core pillars

The architectural structure of Nubank is not just built on services and platforms — it’s built on people. Teams are the core unit of our company, and collaboration is the most essential skill. Behind every critical system, there’s a trusted network where different voices, backgrounds, and ways of thinking come together.

Building strong and diverse teams is part of our culture — and that’s why it’s a strategic priority for us. In initiatives like the creation of the NuCel team, we actively seek to build teams made up of people with different abilities, experiences, and perspectives across functions like engineering, product, design, and more.

Environments like this lead to more complete, empathetic, and relevant solutions for the people who use our products.

Balance and ownership in a high-complexity environment

With over 100 million customers, a growing product portfolio, and operations in multiple countries, pressure and complexity are part of our daily challenges. To manage this, our engineering team relies on mature processes, transparent communication, and a culture of autonomy with accountability.

Planning cycles balance short- and long-term goals. Product timelines are co-developed with engineering, with technical feasibility, resource constraints, and risk trade-offs always in the equation. It is common to adjust scope or renegotiate deadlines, always with a focus on delivering value sustainably.

A culture anchored in learning

If one principle guides everything we do at Nubank Engineering, it is continuous learning. Whether it is tackling a massive refactor, launching a new platform, or navigating the next career step, the mindset is always to stay curious and stay adaptable.

It is not just about mastering a tech stack or leading high-impact projects. It is about being where innovation happens, even when that means stepping out of your comfort zone.

The post Evolving Nubank’s Architecture Toward Thoughtful Platformization appeared first on Building Nubank.

Permalink

Convergence of Random Events

Life is full of random events.

We learn that multiple coin flips are “independent events” – no matter whether the past flip was heads or tails, the next flip is 50/50. (So why do they show the last few results at the roulette table? Hint: Don’t play roulette.) We learn that about half of babies are male and half female, so chances are 50/50 that your new little sibling will be a boy or a girl.

I found the answer to “Of my 8 children, what are the chances that 4 are girls and 4 are boys?” counterintuitive. The central limit theorem is crucial to intuition around this question.
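
Here’s a quick simulation sketch of that question (the printed value is one illustrative run); the exact answer is C(8,4)/2^8 = 70/256 ≈ 0.273, just over one in four:

(defn four-girls? []
  ;; flip 8 fair "coins" and ask whether exactly 4 came up :girl
  (= 4 (count (filter #{:girl} (repeatedly 8 #(rand-nth [:girl :boy]))))))

(/ (count (filter true? (repeatedly 100000 four-girls?))) 100000.0)
0.27412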

When I initially encountered the Monty Hall problem, the correct answer wasn’t obvious or intuitive, but the mathematical explanation is surprisingly understandable. We’ll try here to make the central limit theorem more understandable as well.

Start with a single random event – a value drawn from [0.0, 1.0):

(rand)
0.9444741798633549

One way to combine random events is to take the average:

(defn avg [nums]
  (/ (reduce + nums) (count nums)))
(avg [0.0 1.0])
0.5

Let’s try taking the average of several events together:

(avg [(rand) (rand)])
0.2260346609043261
(avg [(rand) (rand) (rand)])
0.5777898105446688

This is getting repetitive. We can make the computer repeat for us:

(avg (repeatedly 3 rand))
0.718099653009579

The more events that you average, the closer the result comes to 0.5:

(avg (repeatedly 30 rand))
0.42449078088020803
(avg (repeatedly 300 rand))
0.5030610184636088

Let’s try taking several events together:

(defn event []
  (rand))
(event)
0.3890018948970616
(defn combined-event [number-of-events]
  (avg (repeatedly number-of-events event)))
(combined-event 1)
0.30888003329596103
(combined-event 2)
0.5193090196027024
(combined-event 5)
0.41497234661446525

Let’s look at a series of these combined events:

(repeatedly 5 #(combined-event 2))
(0.40846261646800947
 0.6034609724398203
 0.5100203767753714
 0.5715178565795758
 0.7643895475048696)
(repeatedly 5 #(combined-event 5))
(0.5504639686792496
 0.29688596633947595
 0.6381304902703808
 0.46521032488771963
 0.3726026061621697)
(repeatedly 5 #(combined-event 10))
(0.31571650141595553
 0.4779976291697417
 0.5323091524540302
 0.40372033455175577
 0.48422141387833334)

As we combine a larger number of events, the values cluster more closely to the middle of the original distribution.

And regardless of the shape of the original event distribution, the result of combining more and more events will approach the normal distribution – the unique distribution toward which these combinations always converge.

This is true for both continuous variables (like (rand)) and discrete variables (like dice: (rand-nth [1 2 3 4 5 6])), and it’s true even for oddly shaped distributions. When you combine enough of them, they take on the character of the bell-shaped curve.
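
Here’s a small sketch of the dice case (outputs are illustrative runs), reusing the avg function from above: averages of a single die roam across 1–6, while averages of 20 dice cluster near 3.5:

(defn die [] (rand-nth [1 2 3 4 5 6]))

(defn avg-dice [n]
  (double (avg (repeatedly n die))))

(avg-dice 1)
5.0
(avg-dice 20)
3.45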

Learn More at 3Blue1Brown - But what is the Central Limit Theorem?

Permalink

Build and Deploy Web Apps With Clojure and Fly.io

This post walks through a small web development project using Clojure, covering everything from building the app to packaging and deploying it. It’s a collection of insights and tips I’ve learned from building my Clojure side projects but presented in a more structured format.

As the title suggests, we’ll be deploying the app to Fly.io. It’s a service that allows you to deploy apps packaged as Docker images on lightweight virtual machines.[1] My experience with it has been good: it’s easy to use and quick to set up. One downside of Fly is that it doesn’t have a free tier, but if you don’t plan on leaving the app deployed, it barely costs anything.

This isn’t a tutorial on Clojure, so I’ll assume you already have some familiarity with the language as well as some of its libraries.[2]

Project Setup

In this post, we’ll be building a barebones bookmarks manager for the demo app. Users can log in using basic authentication, view all bookmarks, and create a new bookmark. It’ll be a traditional multi-page web app and the data will be stored in a SQLite database.

Here’s an overview of the project’s starting directory structure:

.
├── dev
│   └── user.clj
├── resources
│   └── config.edn
├── src
│   └── acme
│       └── main.clj
└── deps.edn

And the libraries we’re going to use. If you have some Clojure experience or have used Kit, you’re probably already familiar with all the libraries listed below.[3]

deps.edn
{:paths ["src" "resources"]
 :deps {org.clojure/clojure               {:mvn/version "1.12.0"}
        aero/aero                         {:mvn/version "1.1.6"}
        integrant/integrant               {:mvn/version "0.11.0"}
        ring/ring-jetty-adapter           {:mvn/version "1.12.2"}
        metosin/reitit-ring               {:mvn/version "0.7.2"}
        com.github.seancorfield/next.jdbc {:mvn/version "1.3.939"}
        org.xerial/sqlite-jdbc            {:mvn/version "3.46.1.0"}
        hiccup/hiccup                     {:mvn/version "2.0.0-RC3"}}
 :aliases
 {:dev {:extra-paths ["dev"]
        :extra-deps  {nrepl/nrepl    {:mvn/version "1.3.0"}
                      integrant/repl {:mvn/version "0.3.3"}}
        :main-opts   ["-m" "nrepl.cmdline" "--interactive" "--color"]}}}

I use Aero and Integrant for my system configuration (more on this in the next section), Ring with the Jetty adapter for the web server, Reitit for routing, next.jdbc for database interaction, and Hiccup for rendering HTML. From what I’ve seen, this is a popular “library combination” for building web apps in Clojure.[4]

The user namespace in dev/user.clj contains helper functions from Integrant-repl to start, stop, and restart the Integrant system.

dev/user.clj
(ns user
  (:require
   [acme.main :as main]
   [clojure.tools.namespace.repl :as repl]
   [integrant.core :as ig]
   [integrant.repl :refer [set-prep! go halt reset reset-all]]))

(set-prep!
 (fn []
   (ig/expand (main/read-config)))) ;; we'll implement this soon

(repl/set-refresh-dirs "src" "resources")

(comment
  (go)
  (halt)
  (reset)
  (reset-all))

Systems and Configuration

If you’re new to Integrant or other dependency injection libraries like Component, I’d suggest reading “How to Structure a Clojure Web”. It’s a great explanation of the reasoning behind these libraries. Like most Clojure apps that use Aero and Integrant, my system configuration lives in a .edn file. I usually name mine resources/config.edn. Here’s what it looks like:

resources/config.edn
{:server
 {:port #long #or [#env PORT 8080]
  :host #or [#env HOST "0.0.0.0"]
  :auth {:username #or [#env AUTH_USER "john.doe@email.com"]
         :password #or [#env AUTH_PASSWORD "password"]}}

 :database
 {:dbtype "sqlite"
  :dbname #or [#env DB_DATABASE "database.db"]}}

In production, most of these values will be set using environment variables. During local development, the app will use the hard-coded default values. We don’t have any sensitive values in our config (e.g., API keys), so it’s fine to commit this file to version control. If there are such values, I usually put them in another file that’s not tracked by version control and include them in the config file using Aero’s #include reader tag.
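
For illustration, such an include might look like this (a sketch; secrets.edn is a hypothetical untracked file, resolved by Aero relative to the including file):

resources/config.edn (excerpt)
{:server
 {:auth #include "secrets.edn"}}

resources/secrets.edn (not tracked by version control)
{:username "john.doe@email.com"
 :password "password"}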

This config file is then “expanded” into the Integrant system map using the expand-key method:

src/acme/main.clj
(ns acme.main
  (:require
   [aero.core :as aero]
   [clojure.java.io :as io]
   [integrant.core :as ig]))

(defn read-config
  []
  {:system/config (aero/read-config (io/resource "config.edn"))})

(defmethod ig/expand-key :system/config
  [_ opts]
  (let [{:keys [server database]} opts]
    {:server/jetty (assoc server :handler (ig/ref :handler/ring))
     :handler/ring {:database (ig/ref :database/sql)
                    :auth     (:auth server)}
     :database/sql database}))

The system map is created in code instead of being in the configuration file. This makes refactoring your system simpler as you only need to change this method while leaving the config file (mostly) untouched.[5]

My current approach to Integrant + Aero config files is mostly inspired by the blog post “Rethinking Config with Aero & Integrant” and Laravel’s configuration. The config file follows a similar structure to Laravel’s config files and contains the app configurations without describing the structure of the system. Previously I had a key for each Integrant component, which led to the config file being littered with #ig/ref and more difficult to refactor.
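
For contrast, the older style I’m describing looked roughly like this (a sketch rather than the actual earlier file), with one config key per Integrant component, wired together with #ig/ref:

{:server/jetty {:port    8080
                :handler #ig/ref :handler/ring}
 :handler/ring {:database #ig/ref :database/sql}
 :database/sql {:dbtype "sqlite" :dbname "database.db"}}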

Also, if you haven’t already, start a REPL and connect to it from your editor. Run clj -M:dev if your editor doesn’t automatically start a REPL. Next, we’ll implement the init-key and halt-key! methods for each of the components:

src/acme/main.clj
(ns acme.main
  (:require
   ;; ...
   [acme.handler :as handler]
   [acme.util :as util]
   [next.jdbc :as jdbc]
   [ring.adapter.jetty :as jetty]))
;; ...

(defmethod ig/init-key :server/jetty
  [_ opts]
  (let [{:keys [handler port]} opts
        jetty-opts (-> opts (dissoc :handler :auth) (assoc :join? false))
        server     (jetty/run-jetty handler jetty-opts)]
    (println "Server started on port " port)
    server))

(defmethod ig/halt-key! :server/jetty
  [_ server]
  (.stop server))

(defmethod ig/init-key :handler/ring
  [_ opts]
  (handler/handler opts))

(defmethod ig/init-key :database/sql
  [_ opts]
  (let [datasource (jdbc/get-datasource opts)]
    (util/setup-db datasource)
    datasource))

The setup-db function creates the required tables in the database if they don’t exist yet. This works fine for database migrations in small projects like this demo app, but for larger projects, consider using libraries such as Migratus (my preferred library) or Ragtime.

src/acme/util.clj
(ns acme.util 
  (:require
   [next.jdbc :as jdbc]))

(defn setup-db
  [db]
  (jdbc/execute-one!
   db
   ["create table if not exists bookmarks (
       bookmark_id text primary key not null,
       url text not null,
       created_at datetime default (unixepoch()) not null
     )"]))

For the server handler, let’s start with a simple function that returns a “hi world” string.

src/acme/handler.clj
(ns acme.handler
  (:require
   [ring.util.response :as res]))

(defn handler
  [_opts]
  (fn [req]
    (res/response "hi world")))

Now all the components are implemented. We can check if the system is working properly by evaluating (reset) in the user namespace. This will reload your files and restart the system. You should see this message printed in your REPL:

:reloading (acme.util acme.handler acme.main)
Server started on port  8080
:resumed

If we send a request to http://localhost:8080/, we should get “hi world” as the response:

$ curl localhost:8080/
# hi world

Nice! The system is working correctly. In the next section, we’ll implement routing and our business logic handlers.

Routing, Middleware, and Route Handlers

First, let’s set up a ring handler and router using Reitit. We only have one route, the index / route that’ll handle both GET and POST requests.

src/acme/handler.clj
(ns acme.handler
  (:require
   [reitit.ring :as ring]))

(def routes
  [["/" {:get  index-page
         :post index-action}]])

(defn handler
  [opts]
  (ring/ring-handler
   (ring/router routes)
   (ring/routes
    (ring/redirect-trailing-slash-handler)
    (ring/create-resource-handler {:path "/"})
    (ring/create-default-handler))))

We’re including some useful middleware:

  • redirect-trailing-slash-handler to resolve routes with trailing slashes,
  • create-resource-handler to serve static files, and
  • create-default-handler to handle common 40x responses.

Implementing the Middlewares

If you remember the :handler/ring from earlier, you’ll notice that it has two dependencies, database and auth. Currently, they’re inaccessible to our route handlers. To fix this, we can inject these components into the Ring request map using a middleware function.

src/acme/handler.clj
;; ...

(defn components-middleware
  [components]
  (let [{:keys [database auth]} components]
    (fn [handler]
      (fn [req]
        (handler (assoc req
                        :db database
                        :auth auth))))))
;; ...

The components-middleware function takes in a map of components and creates a middleware function that “assocs” each component into the request map.[6] If you have more components such as a Redis cache or a mail service, you can add them here.

We’ll also need a middleware to handle HTTP basic authentication.[7] This middleware will check whether the username and password from the request map match the values in the auth map injected by components-middleware. If they match, the request is authenticated and the user can view the site.

src/acme/handler.clj
(ns acme.handler
  (:require
   ;; ...
   [acme.util :as util]
   [ring.util.response :as res]))
;; ...

(defn wrap-basic-auth
  [handler]
  (fn [req]
    (let [{:keys [headers auth]} req
          {:keys [username password]} auth
          authorization (get headers "authorization")
          correct-creds (str "Basic " (util/base64-encode
                                       (format "%s:%s" username password)))]
      (if (and authorization (= correct-creds authorization))
        (handler req)
        (-> (res/response "Access Denied")
            (res/status 401)
            (res/header "WWW-Authenticate" "Basic realm=protected"))))))
;; ...

A nice feature of Clojure is that interop with the host language is easy. The base64-encode function is just a thin wrapper over Java’s Base64.Encoder:

src/acme/util.clj
(ns acme.util
   ;; ...
  (:import java.util.Base64))

(defn base64-encode
  [s]
  (.encodeToString (Base64/getEncoder) (.getBytes s)))

Finally, we need to add them to the router. Since we’ll be handling form requests later, we’ll also bring in Ring’s wrap-params middleware.

src/acme/handler.clj
(ns acme.handler
  (:require
   ;; ...
   [ring.middleware.params :refer [wrap-params]]))
;; ...

(defn handler
  [opts]
  (ring/ring-handler
   ;; ...
   {:middleware [(components-middleware opts)
                 wrap-basic-auth
                 wrap-params]}))

Implementing the Route Handlers

We now have everything we need to implement the route handlers or the business logic of the app. First, we’ll implement the index-page function which renders a page that:

  1. Shows all of the user’s bookmarks in the database, and
  2. Shows a form that allows the user to insert new bookmarks into the database
src/acme/handler.clj
(ns acme.handler
  (:require
   ;; ...
   [next.jdbc :as jdbc]
   [next.jdbc.sql :as sql]))
;; ...

(defn template
  [bookmarks]
  [:html
   [:head
    [:meta {:charset "utf-8"
            :name    "viewport"
            :content "width=device-width, initial-scale=1.0"}]]
   [:body
    [:h1 "bookmarks"]
    [:form {:method "POST"}
     [:div
      [:label {:for "url"} "url "]
      [:input#url {:name "url"
                   :type "url"
                   :required true
                   :placeholder "https://en.wikipedia.org/"}]]
     [:button "submit"]]
    [:p "your bookmarks:"]
    [:ul
     (if (empty? bookmarks)
       [:li "you don't have any bookmarks"]
       (map
        (fn [{:keys [url]}]
          [:li
           [:a {:href url} url]])
        bookmarks))]]])

(defn index-page
  [req]
  (try
    (let [bookmarks (sql/query (:db req)
                               ["select * from bookmarks"]
                               jdbc/unqualified-snake-kebab-opts)]
      (util/render (template bookmarks)))
    (catch Exception e
      (util/server-error e))))
;; ...

Database queries can sometimes throw exceptions, so it’s good to wrap them in a try-catch block. I’ll also introduce some helper functions:

src/acme/util.clj
(ns acme.util
  (:require
   ;; ...
   [hiccup2.core :as h]
   [ring.util.response :as res])
  (:import java.util.Base64))
;; ...

(defn prepend-doctype
  [s]
  (str "<!doctype html>" s))

(defn render
  [hiccup]
  (-> hiccup h/html str prepend-doctype res/response (res/content-type "text/html")))

(defn server-error
  [e]
  (println "Caught exception: " e)
  (-> (res/response "Internal server error")
      (res/status 500)))

render takes a hiccup form and turns it into a ring response, while server-error takes an exception, logs it, and returns a 500 response.

Next, we’ll implement the index-action function:

src/acme/handler.clj
;; ...

(defn index-action
  [req]
  (try
    (let [{:keys [db form-params]} req
          value (get form-params "url")]
      (sql/insert! db :bookmarks {:bookmark_id (random-uuid) :url value})
      (res/redirect "/" 303))
    (catch Exception e
      (util/server-error e))))
;; ...

This is an implementation of a typical post/redirect/get pattern. We get the value from the URL form field, insert a new row in the database with that value, and redirect back to the index page. Again, we’re using a try-catch block to handle possible exceptions from the database query.

That should be all of the code for the controllers. If you reload your REPL and go to http://localhost:8080, you should see something that looks like this after logging in:

Screenshot of the app

The last thing we need to do is to update the main function to start the system:

src/acme/main.clj
;; ...

(defn -main [& _]
  (-> (read-config) ig/expand ig/init))

Now, you should be able to run the app using clj -M -m acme.main. That’s all the code needed for the app. In the next section, we’ll package the app into a Docker image to deploy to Fly.

Packaging the App

While there are many ways to package a Clojure app, Fly.io specifically requires a Docker image. There are two approaches to doing this:

  1. Build an uberjar and run it using Java in the container, or
  2. Load the source code and run it using Clojure in the container

Both are valid approaches. I prefer the first since its only dependency is the JVM. We’ll use the tools.build library to build the uberjar. Check out the official guide for more information on building Clojure programs. Since it’s a library, to use it we can add it to our deps.edn file with an alias:

deps.edn
{;; ...
 :aliases
 {;; ...
  :build {:extra-deps {io.github.clojure/tools.build 
                       {:git/tag "v0.10.5" :git/sha "2a21b7a"}}
          :ns-default build}}}

Tools.build expects a build.clj file in the root of the project directory, so we’ll need to create that file. This file contains the instructions to build artefacts, which in our case is a single uberjar. There are many great examples of build.clj files on the web, including from the official documentation. For now, you can copy+paste this file into your project.

build.clj
(ns build
  (:require
   [clojure.tools.build.api :as b]))

(def basis (delay (b/create-basis {:project "deps.edn"})))
(def src-dirs ["src" "resources"])
(def class-dir "target/classes")

(defn uber
  [_]
  (println "Cleaning build directory...")
  (b/delete {:path "target"})

  (println "Copying files...")
  (b/copy-dir {:src-dirs   src-dirs
               :target-dir class-dir})

  (println "Compiling Clojure...")
  (b/compile-clj {:basis      @basis
                  :ns-compile '[acme.main]
                  :class-dir  class-dir})

  (println "Building Uberjar...")
  (b/uber {:basis     @basis
           :class-dir class-dir
           :uber-file "target/standalone.jar"
           :main      'acme.main}))

To build the project, run clj -T:build uber. This will create the uberjar standalone.jar in the target directory. The uber in clj -T:build uber refers to the uber function from build.clj. Since the build system is a Clojure program, you can customise it however you like. If we try to run the uberjar now, we’ll get an error:

# build the uberjar
$ clj -T:build uber
# Cleaning build directory...
# Copying files...
# Compiling Clojure...
# Building Uberjar...

# run the uberjar
$ java -jar target/standalone.jar
# Error: Could not find or load main class acme.main
# Caused by: java.lang.ClassNotFoundException: acme.main

This error occurs because the main class required by Java hasn’t been built. To fix this, we need to add the :gen-class directive to our main namespace. This instructs Clojure to compile a Java class for the namespace, with a static main method backed by the -main function.

src/acme/main.clj
(ns acme.main
  ;; ...
  (:gen-class))
;; ...

If you rebuild the project and run java -jar target/standalone.jar again, it should work perfectly. Now that we have a working build script, we can write the Dockerfile:

Dockerfile
# install additional dependencies here in the base layer
# separate base from build layer so any additional deps installed are cached
FROM clojure:temurin-21-tools-deps-bookworm-slim AS base

FROM base AS build
WORKDIR /opt
COPY . .
RUN clj -T:build uber

FROM eclipse-temurin:21-alpine AS prod
COPY --from=build /opt/target/standalone.jar /
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "standalone.jar"]

It’s a multi-stage Dockerfile. We use the official Clojure Docker image as the layer to build the uberjar. Once it’s built, we copy it to a smaller Docker image that only contains the Java runtime.[8] By doing this, we get a smaller container image as well as a faster Docker build time because the layers are better cached.

That should be all for packaging the app. We can move on to the deployment now.

Deploying with Fly.io

First things first, you’ll need to install flyctl, Fly’s CLI tool for interacting with their platform. Create a Fly.io account if you haven’t already. Then run fly auth login to authenticate flyctl with your account.

Next, we’ll need to create a new Fly App:

$ fly app create
# ? Choose an app name (leave blank to generate one): 
# automatically selected personal organization: Ryan Martin
# New app created: blue-water-6489

Another way to do this is with the fly launch command, which automates a lot of the app configuration for you. However, some of our steps aren’t covered by fly launch, so we’ll be configuring the app manually. I also already have a fly.toml file ready that you can copy straight into your project.

fly.toml
# replace these with your app and region name
# run `fly platform regions` to get a list of regions
app = 'blue-water-6489' 
primary_region = 'sin'

[env]
  DB_DATABASE = "/data/database.db"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = "stop"
  auto_start_machines = true
  min_machines_running = 0

[mounts]
  source = "data"
  destination = "/data"
  initial_size = 1

[[vm]]
  size = "shared-cpu-1x"
  memory = "512mb"
  cpus = 1
  cpu_kind = "shared"

These are mostly the default configuration values with some additions. Under the [env] section, we’re setting the SQLite database location to /data/database.db. The database.db file itself will be stored in a persistent Fly Volume mounted on the /data directory. This is specified under the [mounts] section. Fly Volumes are similar to regular Docker volumes but are designed for Fly’s micro VMs.

We’ll need to set the AUTH_USER and AUTH_PASSWORD environment variables too, but not through the fly.toml file as these are sensitive values. To securely set these credentials with Fly, we can set them as app secrets. They’re stored encrypted and will be automatically injected into the app at boot time.

$ fly secrets set AUTH_USER=hi@ryanmartin.me AUTH_PASSWORD=not-so-secure-password
# Secrets are staged for the first deployment

With this, the configuration is done and we can deploy the app using fly deploy:

$ fly deploy
# ...
# Checking DNS configuration for blue-water-6489.fly.dev
# Visit your newly deployed app at https://blue-water-6489.fly.dev/

The first deployment will take longer since it’s building the Docker image for the first time. Subsequent deployments should be faster due to the cached image layers. You can click on the link to view the deployed app, or you can also run fly open which will do the same thing. Here’s the app in action:

The app in action

If you made additional changes to the app or fly.toml, you can redeploy the app using the same command, fly deploy. The app is configured to auto stop/start, which helps to cut costs when there’s not a lot of traffic to the site. If you want to take down the deployment, you’ll need to delete the app itself using fly app destroy <your app name>.

Adding a Production REPL

This is an interesting topic in the Clojure community, with varying opinions on whether or not it’s a good idea. Personally, I find having a REPL connected to the live app helpful, and I often use it for debugging and running queries on the live database.[9] Since we’re using SQLite, we don’t have a database server we can connect to directly, unlike Postgres or MySQL.

If you’re brave, you can even restart the app directly from the REPL without redeploying. It’s also easy to get things wrong this way, which is why some prefer not to use it.

For this project, we’re gonna add a socket REPL. It’s very simple to add (you just need to add a JVM option) and it doesn’t require additional dependencies like nREPL. Let’s update the Dockerfile:

Dockerfile
# ...
EXPOSE 7888
ENTRYPOINT ["java", "-Dclojure.server.repl={:port 7888 :accept clojure.core.server/repl}", "-jar", "standalone.jar"]

The socket REPL will be listening on port 7888. If we redeploy the app now, the REPL will be started but we won’t be able to connect to it. That’s because we haven’t exposed the service through Fly proxy. We can do this by adding the socket REPL as a service in the [services] section in fly.toml.

However, doing this will also expose the REPL port to the public. This means that anyone can connect to your REPL and possibly mess with your app. Instead, what we want to do is to configure the socket REPL as a private service.

By default, all Fly apps in your organisation live in the same private network. This private network, called 6PN, connects the apps in your organisation through Wireguard tunnels (a VPN) using IPv6. Fly private services aren’t exposed to the public internet but can be reached from this private network. We can then use Wireguard to connect to this private network to reach our socket REPL.

Fly VMs are also configured with the hostname fly-local-6pn, which maps to its 6PN address. This is analogous to localhost, which points to your loopback address 127.0.0.1. To expose a service to 6PN, all we have to do is bind or serve it to fly-local-6pn instead of the usual 0.0.0.0. We have to update the socket REPL options to:

Dockerfile
# ...
ENTRYPOINT ["java", "-Dclojure.server.repl={:port 7888,:address \"fly-local-6pn\",:accept clojure.core.server/repl}", "-jar", "standalone.jar"]

After redeploying, we can use the fly proxy command to forward the port from the remote server to our local machine.[10]

$ fly proxy 7888:7888
# Proxying local port 7888 to remote [blue-water-6489.internal]:7888

In another shell, run:

$ rlwrap nc localhost 7888
# user=>

Now we have a REPL connected to the production app! rlwrap is used for readline functionality, e.g. up/down arrow keys, vi bindings. Of course you can also connect to it from your editor.
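
For example, a quick look at the live data might go like this (a sketch: we open a throwaway datasource on the /data/database.db path from fly.toml, and the count shown is illustrative):

user=> (require '[next.jdbc :as jdbc])
nil
user=> (def db (jdbc/get-datasource {:dbtype "sqlite" :dbname "/data/database.db"}))
#'user/db
user=> (jdbc/execute-one! db ["select count(*) as n from bookmarks"])
{:n 3}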

Deploy with GitHub Actions

If you’re using GitHub, we can also set up automatic deployments on pushes/PRs with GitHub Actions. All you need is to create the workflow file:

.github/workflows/fly.yaml
name: Fly Deploy
on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  deploy:
    name: Deploy app
    runs-on: ubuntu-latest
    concurrency: deploy-group
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl deploy --remote-only
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}

To get this to work, you’ll need to create a deploy token from your app’s dashboard. Then, in your GitHub repo, create a new repository secret called FLY_API_TOKEN with the value of your deploy token. Now, whenever you push to the main branch, this workflow will automatically run and deploy your app. You can also manually run the workflow from GitHub because of the workflow_dispatch option.

End

As always, all the code is available on GitHub. Originally, this post was just about deploying to Fly.io, but along the way I kept adding on more stuff until it essentially became my version of the user manager example app. Anyway, hope this article provided a good view into web development with Clojure. As a bonus, here are some additional resources on deploying Clojure apps:


  1. The way Fly.io works under the hood is pretty clever. Instead of running the container image with a runtime like Docker, the image is unpacked and “loaded” into a VM. See this video explanation for more details. ↩︎

  2. If you’re interested in learning Clojure, my recommendation is to follow the official getting started guide and join the Clojurians Slack. Also, read through this list of introductory resources. ↩︎

  3. Kit was a big influence on me when I first started learning web development in Clojure. I never used it directly, but I did use their library choices and project structure as a base for my own projects. ↩︎

  4. There’s no “Rails” for the Clojure ecosystem (yet?). The prevailing opinion is to build your own “framework” by composing different libraries together. Most of these libraries are stable and are already used in production by big companies, so don’t let this discourage you from doing web development in Clojure! ↩︎

  5. There might be some keys that you add or remove, but the structure of the config file stays the same. ↩︎

  6. “assoc” (associate) is a Clojure slang that means to add or update a key-value pair in a map. ↩︎

  7. For more details on how basic authentication works, check out the specification. ↩︎

  8. Here’s a cool resource I found when researching Java Dockerfiles: WhichJDK. It provides a comprehensive comparison on the different JDKs available and recommendations on which one you should use. ↩︎

  9. Another (non-technically important) argument for live/production REPLs is just because it’s cool. Ever since I read the story about NASA’s programmers debugging a spacecraft through a live REPL, I’ve always wanted to try it at least once. ↩︎

  10. If you encounter errors related to Wireguard when running fly proxy, you can run fly doctor which will hopefully detect issues with your local setup and also suggest fixes for them. ↩︎

Permalink

AI, Lisp, and Programming

On the last Apropos, we welcomed Christoph Neumann to talk about his new role as the Clojure Developer Evangelist at Nubank. It’s very exciting that the role is in such great hands.

Our next guest is Peter Strömberg. Peter is known as PEZ online. He is the creator of Calva, the Clojure plugin for VS Code. He’s going to demo some REPL-Driven Development with Calva.


AI, Lisp, and Programming

I have a Master of Science in Computer Science, with a specialty in Artificial Intelligence. With the way AI salaries are these days, you’d think I pull seven figures. Alas, my degree is from 2008. At that time, people wondered why I wanted to go into a field that had been in a sorry state since the 1980s. Wouldn’t I enjoy something more lucrative like security or databases?

But I liked the project of AI. AI, to me, was an exploration of our own intelligence—how we thought, solved problems, and perceived the world—to better understand how we can get a machine to do it. It was a kind of reverse-engineering of the human mind. By building our own minds, we could understand ourselves.

It’s clear that Lisp and AI have been linked since very early in the two fields’ existences. John McCarthy, the inventor of Lisp, also coined and defined the term artificial intelligence. Lisp was traditionally closely associated with the study of AI. Lisp has been a generator of programming language ideas that might seem normal now, but were considered weird or costly at the time. Here’s a partial list:

  1. Garbage collection

  2. Structured conditionals

  3. First-class and higher-order functions

  4. Lexical scoping

While it’s clear that Lisp has been influential on programming languages (even after being ridiculed for the same features languages borrow), what is not so clear is how much AI has been an influence on programming practice. Until recently, it was kind of a joke that AI had been around since the 1950s but hadn’t produced any real results. The other side of the joke is that once something works, it’s just considered programming and not artificial intelligence.

Artificial intelligence has produced so many riches in the programming world. Here is a partial list:

  1. Compilers (which used to be called “automatic programming”)

  2. Tree search

  3. Hash tables

  4. Constraint satisfaction

  5. Rules engines

  6. Priority queues

Let me put it directly: Seeking to understand human thought has been fruitful for software engineering. My interest in Lisp is interlinked with my interest in AI. AI has always been synonymous with “powerful programming techniques.” And I have always seen myself as part of that thread, however small my contribution might be.

The link between AI, Lisp, and programming was so strong 30 years ago that Peter Norvig started the preface of Paradigms of Artificial Intelligence Programming with these words:

This book is concerned with three related topics: the field of artificial intelligence, or AI; the skill of computer programming; and the programming language Common Lisp.

source

In 2008, when I graduated, Google could barely categorize images. Identifying a cat in a photo was still considered a hard problem. Many attempts had been made, but none could get close to human-level. In 2014, I heard about a breakthrough called “deep learning”. It was using the scale of the internet and the vast parallelism of GPUs to make huge neural networks trained on millions of images to break accuracy records. It was working. And it was completely uninteresting to me.

Okay, not really completely uninteresting. It tickled my interest in building new things. I could see how being able to identify cats (or other objects) reliably could be useful. But I saw in this nothing of the project for understanding ourselves. Instead, it was much like what happened at Intel.

Nobody really likes the Intel architecture. It’s not that great. But once Intel got a slight lead in market share, it could ride Moore’s Law. Instead of looking for a better architecture, invest your time in scaling the transistor down and scaling the number of transistors up. Even the worst architecture will get faster. And Intel could cement their lead by investing in better manufacturing processes. Their dominance wound up lasting decades. But computer architecture has languished relative to the growth of the demand for computing.

The same effect is at play in neural networks: instead of investing in understanding how thought works, just throw more processing and more training at bigger networks. With enough money to fund the project, your existing architectures, scaled up, will do better.

These are oversimplifications. There were undoubtedly many minor and some major architectural breakthroughs that helped Intel keep pace with Moore’s Law. Likewise, there have been similar architectural breakthroughs in neural networks, including convolutions and transformers. But the neural network strategy is dominated by scale—more training data, more neurons, more FLOPS.

My whole point is that my research into the history of the field of AI has somewhat inoculated me against the current hype. I don’t think AI will “replace all humans”. And I don’t think AGI (artificial general intelligence) is defined well enough to be a real goal. So where does that leave us? How is AI going to transform programming? Where will all of this end up?

Artificial intelligence has always been a significant part of the leading edge of programming. And its promise has always been far ahead of its ability. In the next few issues, I want to explore what this current hype wave of AI means for us. I don’t like where I see AI going. But I also want to apply some optimism to it, because I think a lot of the consequences are inevitable. The world is being changed, and we will have to live in that new world.

Permalink

What The Heck Just Happened?

Also titled “Lesson learned in CLJS Frontends” …

I have been doing frontend development for well over two decades now, and over that time I have used a lot of different frontend solutions. For over a decade I have been into ClojureScript and I don’t think I’m ever moving on from that. The language is just beautiful and the fact that I can use the same language on the server is just perfect. I’ll only talk about CLJS here though, since the frontend just runs JS, which we compile to.

I like optimizing things, so I’ll frequently try new things when I encounter new ideas. One of the most impactful was when react came onto the scene. Given its current state, I obviously wasn’t the only one; I’d say the CLJS community in general adopted it quite widely. The promise of having the UI represented as a function of your state was a beautiful idea. (render state) is just too nice not to like. However, we very quickly learned that this doesn’t scale beyond toy examples. The VDOM representation this render call returns needs to be “diffed”, as in compared to the previous version, whenever any change is made. This becomes computationally expensive very quickly.

In this post I kinda want to document what I (and others) have learned over the years, and where my own journey currently stands.

What The Heck Just Happened?

This becomes the essential question the rendering library of choice has to answer. It only sees two snapshots and is supposed to find the needle in a possibly large pile of data, so it has to ask and answer this a lot.

Say you have an input DOM element and its value should be directly written into some other div. The most straightforward way is to do this directly in JS:

const input = document.getElementById("input");
const target = document.getElementById("target");

input.addEventListener("input", function(e) {
   target.textContent = input.value;
});

Plus an <input type="text" id="input"> and a <div id="target"></div> somewhere. I’ll call this the baseline: the most direct way to get the result we want. There is not a single “What the Heck Just Happened?” asked; the code is doing exactly what it needs with as little work as possible.

Of course, frontend development is never this simple, and this style of imperative code often leads to very entangled, hard-to-maintain messes. Thus enter react, or any library of that kind, which instead has you write a declarative representation of the current desired UI state. I’ll use CLJS and hiccup from here on. The actual rendering library used is almost irrelevant to the rest of this post; the problems are inherent in the approach.

So, for CLJS this might look something like:

(defn render [state]
  [:<>
   [:input {:type "text" :on-input handle-input}]
   [:div (:text state)]])

I’ll very conveniently ignore where state comes from or how it’s handled. Just assume it is a CLJS map, that handle-input is a function that adds the current input value under that :text key, and that whatever mechanism is used just calls render again with the updated map.

So, what the rendering library of choice now has to decipher is two different snapshots in time.

Before:

[:<>
 [:input {:type "text" :on-input handle-input}]
 [:div "foo"]]

After:

[:<>
 [:input {:type "text" :on-input handle-input}]
 [:div "foo!"]]

It has to traverse these two snapshots and find the difference. So, for this dumb example it has to check 3 vectors, 2 keywords, a map with 2 key-value pairs, and our actually changed String. We basically went from 1 operation in the pure JS example to I don’t even know how many. The = operation in CLJS is well-defined and pretty fast, but actually often more than one operation. For simplicity’s sake let’s say 1 though. So, in total we are now at 12 times asking “What The Heck Just Happened?”.

Point being that this very quickly gets out of hand. I’m going to assume there are going to be hundreds or even thousands of DOM elements. An easy way to count how many DOM elements your favorite “webapp” has is running document.body.querySelectorAll("*").length in the devtools console. I did this for my shadow-cljs GitHub repo and got 2214 in incognito. For reasons I can’t even see, it goes up to 2647 when logged in. The Amazon Prime Video homepage is 5517 for me. You see how quickly this goes up. Of course not every app is going to have that many elements, but it also isn’t uncommon.

The cost isn’t only the “diffing”. You also have to create that hiccup in the first place. Please note that it is pretty much irrelevant which representation is used; React elements have to do the same work. However, for libraries that convert from hiccup to react at runtime (e.g. reagent), you also have to pay the conversion cost. Hence why there are many libraries that try to do much of this at compile time via macros.

Death by a thousand cuts. This all adds up to become significant.

Enter The Trade-Offs

Staying within the pure (render state) model sure would be nice, but if you ask me it just doesn’t scale. Diffing thousands of elements to “find” one change is just bonkers. Modern hardware is insanely fast, but I’d rather not waste all that power.

I’m by no means saying that everyone should always render everything on the client. Sometimes it is just best to have the server spit out some HTML and be done with it. Can’t be faster than no diff at all, right? Well, we want some dynamic content, so we have to get there somehow. I covered strategies for dealing with server-side content in my previous series (1, 2, 3).

This is all about the frontend and pretty much SPA territory, where this kind of scaling begins to matter. We can employ various techniques to speed the client up significantly.

Memoization

One of the simplest and also most impactful things is making sure things stay identical? (or === in JS terms). That is the beauty of the CLJS data structures: if they are identical?, we know they didn’t change. JS objects, not so much, but I won’t go into that here.

So, modifying the example above we can just extract the input element, since it never changes.

(def input
  [:input {:type "text" :on-input handle-input}])
   
(defn render [state]
  [:<>
   input
   [:div (:text state)]])

Our rendering library of choice now finds an identical? hiccup vector, whereas before it had to do a full = check. This removes 6 “WTHJH?” questions. Quite a significant drop. Again, probably not the exact number, but I hope you get the idea.

This technique is called memoization: you have a function that returns an identical result when called with the same arguments. The goal is to reduce the number of things we have to compare, often just a few values instead of the expanded “virtual tree”, i.e. avoiding running as much code as possible. I skipped the function part here to make the example easier to follow; same principle though.
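
As a minimal sketch of that function form (bookmark-item is a made-up name; memoize is just clojure.core/memoize), repeated calls with equal arguments return the cached hiccup, so the identical? check short-circuits the diff:

(def bookmark-item
  (memoize
   (fn [url]
     [:li [:a {:href url} url]])))

;; same argument -> same cached vector
(identical? (bookmark-item "https://clojure.org")
            (bookmark-item "https://clojure.org"))
;; => true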

Components

Components are basically the next level of memoization. Let’s say we turn our input into a “component”, using a simplistic CLJS defn for now.

(defn input []
  [:input {:type "text" :on-input handle-input}])
   
(defn render [state]
  [:<>
   [input]
   [:div (:text state)]])

So, our rendering library of choice will now encounter a hiccup vector with a function as its first element. On the first render run it’ll just call that and get the expanded virtual tree. On subsequent runs it can check whether that fn is identical? as well, and since it doesn’t take any extra arguments, it can just skip calling it completely. Again bypassing “a lot of work”.

There is a lot more to components of course, but I first want to highlight one problem they do not solve. render received the state argument. Let’s assume that for some reason input needs it as well.

(defn render [state]
  [:<>
   [input state]
   [:div (:text state)]])

Here the rendering lib will basically call (input state). But it has no clue what part of state is actually used by input; it only knows that state changed, so it has to call input even though it might still expand to the exact same tree. Even the input “component” needs additional tools and care to avoid just generating the hiccup from scratch. Hence, in react terms, to get optimal performance you are supposed to useMemo a lot. Which isn’t exactly “fun” and is very far away from our (render state) ideals.

We could change it so that we only pass the needed data to input, but that requires knowledge of the implementation, and that isn’t always straightforward.
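As a sketch of what passing only the needed data might look like (the :placeholder key is made up for illustration):

;; input now receives just the one value it uses, so the rendering
;; lib only has to compare a single string, not the whole state map
(defn input [placeholder]
  [:input {:type "text" :placeholder placeholder :on-input handle-input}])

(defn render [state]
  [:<>
   [input (:placeholder state)]
   [:div (:text state)]])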

Trying To Find Balance

In the end the real goal is finding a good balance between developer productivity/convenience and runtime performance. A solution that nobody can work on is no good, but neither is something that is unusably slow for the end user. Do not underestimate how slow things can get, especially if you measure against devices that aren’t the very latest high-tech hardware. Try the 4x CPU slowdown in the Chrome devtools and the results may shock you.

My Journey

I have been working on my own rendering lib for a rather long time now. I have never documented it or even talked about it much. Quite honestly, I wrote it for myself and I consider it an active journey. It works and is good enough for my current needs, but it is far from finished.

One conclusion I have come to is that the “push” model of just passing the whole state into the root doesn’t scale. Instead, it is better to have each component pull the data it needs from some well-defined datasource, and then have that datasource notify the component when the data it requested has changed. That allows the component to kick off an update from its place in the tree, instead of from the root.
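A very rough sketch of that pull model (deliberately simplified; this is not shadow-grove’s actual machinery):

(defonce db (atom {:text ""}))

;; key -> set of update callbacks for components that pulled that key
(defonce subscribers (atom {}))

(defn pull! [k update-fn]
  ;; a component asks for a key and registers to be notified about it
  (swap! subscribers update k (fnil conj #{}) update-fn)
  (get @db k))

(defn put! [k v]
  (swap! db assoc k v)
  ;; notify only the components that pulled k — the update starts at
  ;; their place in the tree, not at the root
  (doseq [update-fn (get @subscribers k)]
    (update-fn v)))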

Another superpower CLJS has is macros, so leveraging those more can give substantial gains. I have the fragment macro (<<), which can statically analyze the hiccup structure and not only bypass the hiccup creation completely, but also generate the code to directly apply the update.

Changing our render to

(defn render [state]
  (<< [:input {:type "text" :on-input handle-input}]
      [:div (:text state)]))

This still looks very hiccup-ish, even though it doesn’t create hiccup at all. During macro expansion it is trivial to tell that only (:text state) can possibly change here. This gets very close to the initial 1-operation JS example. Heck, if you help the macro a little more it becomes that literal 1 operation.

(defn render [{:keys [^string text] :as state}]
  (<< [:input {:type "text" :on-input handle-input}]
      [:div text]))

But that level of optimization is really unnecessary. It sure was fun to build the implementation though. For the people that care, the trick here is that the update impl can just set the .textContent property if it can determine that all children are “strings”. So, it even works for stuff like [:div "Hello, " text "!"].
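Conceptually, the generated update code boils down to something like this (a sketch, not the actual macro output):

(defn update-div! [^js el text]
  ;; all children are known to be strings, so a single property
  ;; assignment replaces creating and diffing any virtual tree
  (set! (.-textContent el) (str "Hello, " text "!")))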

Also, an optimization that the React Compiler only wishes it could achieve. Not that I have any hope that it would ever understand CLJS code to begin with.

Conclusion

The point is that there are so many optimizations to find that the challenge isn’t finding them. The challenge is building a coherent “framework” that can scale from tiny to very large, all while writing “beautiful” code.

This post is already getting too long, so I intend to write more detailed posts about my experiences later. I don’t even want this to be about my specific library. Most of the stuff I built I didn’t come up with in the first place; these are just “lessons learned” by the greater community over the years. All of those techniques have been done before, I just adapted them to CLJS. A lot of them you could translate 1:1 and use with react or whatever else.

Some of them however you cannot. The fragment macro could be adapted for react, but it could never reach the level of performance it has in shadow-grove because react can only do the “virtual DOM diffing” and is not extensible in that area. shadow-grove doesn’t have a “virtual DOM”, as in there is no 1:1 mapping of virtual and actual DOM elements. A fragment may handle many elements, like it did in the render example above.

There is this sentiment that, due to LLMs having seen so much react code, there is no value in building something that is not react. I think that is utter nonsense. There is however the argument of “ecosystem value”, where react clearly wins and is very hard to compete with. You kinda have to be a maniac like me and accept that you often have to write your own implementation for things, instead of just using readily available 3rd-party libraries.

Again, all I wish to highlight with this post, and all my posts really, is that we should talk about Trade-Offs more. A lot of stuff on the Internet just talks about why things are good, and never mentions what they will “cost” you.

I’m very aware that my Trade-Offs would not be correct for most people, but everyone should still be aware of the ones they are making by committing to Solution X. Talking about them might get us closer to a coherent story for CLJS, rather than the fragmented react-heavy landscape it currently is.

Building on top of something built by a part-time hobby enthusiast isn’t a good business strategy, so I really do not want this to be about shadow-grove, but rather about some of the ideas behind it and how we could maybe apply them more broadly.

I wrote a follow-up post to back all this up with some numbers and some insights into how I arrived at my conclusions.

Permalink

Transforming Datasets to Stack Charts

With observed data, presumably from two runs of some experiment…

(def ds0
  (ds/->dataset "https://gist.githubusercontent.com/harold/18ba174c6c34e7d1c5d8d0954b48327c/raw"
                {:file-type :csv}))
(def ds1
  (ds/->dataset "https://gist.githubusercontent.com/harold/008bbcd477bf51b47548d680107a6195/raw"
                {:file-type :csv}))

Well, what have we got?

ds0

https://gist.githubusercontent.com/harold/18ba174c6c34e7d1c5d8d0954b48327c/raw [500 1]:

y
-0.09138541
0.73573478
0.66637442
1.42894310
1.17985915
2.10245096
2.35628501
1.65951387
2.66932952
1.96287689
6.30911743
6.65394635
5.88407917
6.59312352
6.32078823
5.78220740
6.11383638
6.62701870
6.29688536
5.87255145
6.34171349

A few hundred numbers… Hm…

ds1

https://gist.githubusercontent.com/harold/008bbcd477bf51b47548d680107a6195/raw [500 1]:

y
1.23590349
0.97176804
1.44779983
2.09836076
2.39260885
2.33861635
2.55252144
2.75108032
3.42274612
3.13478376
22.45761328
22.35632666
21.93285307
22.24006990
22.51064120
22.38858256
22.53949283
22.57957379
22.31971585
22.69953383
22.23848485

This neglects the hundreds of thousands of years invested in evolving a visual system…

(-> ds0
    (plotly/base {:=title "Run 0"})
    (plotly/layer-point {:=y "y"}))
(-> ds1
    (plotly/base {:=title "Run 1"})
    (plotly/layer-point {:=y "y"}))

Better; however, our aim is to compare them… Which is higher?

(-> (ds/concat (assoc ds0 :v "Run 0")
               (assoc ds1 :v "Run 1"))
    (plotly/base {:=title "Comparison Between Runs"})
    (plotly/layer-point {:=y "y"
                         :=color :v}))

Now it’s up to the viewer to decide whether they like higher numbers or not.


There are a couple of interesting ideas in that last bit of code:

  1. assoc'ing a constant onto a ds creates a constant column (see the quick check below)
  2. :=color takes care of grouping the results and the downstream display
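Idea 1 is easy to verify at the REPL (a quick sketch, assuming the same ds alias exposes head, as both tablecloth and tech.ml.dataset do):

(ds/head (assoc ds0 :v "Run 0") 3)
;; => 3 rows with columns y and :v, where every :v value is "Run 0"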

Neat.

Permalink

Notes from starting a mobile application from 0 in 2025.

I remember vaguely what was required to build a mobile application 10 years ago. First, you had to make one application per platform. Targeting Android and iOS meant multiplying your development, setup, integration pipeline, etc., by 2 from the start. Second, you had to learn a completely new SDK on each platform. Android used Java, a familiar programming language for me, but there was still a big learning curve, with an SDK that kept moving due to the crazy release schedule of new phones.

So I was a bit apprehensive when I started discussing building a mobile application. How time-consuming would it be now?

Below is a not super well-organized list of the lessons I learned.

My target

Get a feeling for the amount of effort required to build and maintain a production-grade application by building a simple prototype that can use the device’s sound and send it to a backend for further processing.

Pick a tech stack

More by instinct than scientific analysis, I initially explored Flutter. However, I ultimately decided on React Native because I’ve been using React for a while. The idea of leveraging my existing React knowledge and the potential to share code between a mobile and frontend application sounded appealing. This felt like the perfect way to cut corners and focus on learning the full development and maintenance cycle of a mobile application.

Lesson learned:

  1. An AI assistant does not save you from reading the documentation: the Expo documentation is extremely well done, and tinkering with AI to reach a starting application was a waste of time and tokens.
  2. You don’t need the Android emulator anymore. Thanks to the Expo platform, you can try things out directly on your mobile with a smooth experience using Expo Go.
  3. React Native helps you immensely to get started if you are already familiar with React and the web in general. No more SWT-like or other GUI abstraction to learn; you can come with your web knowledge and get a basic application without much trouble.
  4. Should I use ClojureScript or embrace TypeScript?

To make it work, I had to create 2 patches for NPM libraries: one for making expo-router work and one for disabling the fast reload on the web. Thanks to Roman Liutikov for that, because I would have had no clue how to proceed without his example: https://github.com/roman01la/react-native-cljs-expo-router/tree/master

Routing:

The downside of using Expo is that it bundles everything for you (hello simple VS easy 👋🏻, 14 years ago already …), and the latest Expo is bundled with expo-router. The problem with expo-router and ClojureScript is that it uses unusual file-system names like “(tabs)” and other characters that are unfriendly to Clojure namespaces, and to Unix filesystems in general. To make it work, you need to fake the Expo file system and make it use the JavaScript generated by shadow-cljs. I don’t know why they chose that convention, but it forces a first patch and a custom shadow-cljs build to overcome the problem. I’m confident the custom shadow build will keep working over time, but the NPM patch will most likely be a constant source of maintenance given Expo’s release frequency.

Fast Refresh

It was already a problem in the Peter Strömberg (aka PEZ) template. Expo is bundled with React Fast Refresh, but shadow-cljs is already doing the hot reload. The big catch is that the fast refresh can be disabled on mobile but cannot be disabled on the web.

Bummer. I had to patch expo-router/build/fast-refresh.js to let shadow-cljs be the only one reloading. The patch consists of setting the extension of the fast refresh to false, see here. To an external eye, that part of the code looks hacky and brittle: it’s a monkey patch over React Fast Refresh with TODOs, links to another codebase, and a big try/catch. I don’t know what is going on there, or why it can be opted out of on mobile but not on the web, but I’m fairly certain that this code will evolve further, and updating my patch will be a constant effort.

So, ClojureScript or TypeScript?

It’s great that it can work with CLJS, but considering the current state of AI assistants, maybe picking TypeScript is the path with less friction? I enjoyed playing around with UIX and re-frame, though, and if the application gets complex, I’m more confident in the CLJS stack’s ability to scale. What I’m lacking is the experience of scaling a mobile application. Can it become that complex on a mobile screen?

5. Using a monorepo with Expo

This one again reminded me so much of the Simple VS Easy talk. A must-see.

In order to build your mobile application in the cloud, Expo archives and sends your code to their platform. Makes sense, right? The Expo platform is like a CI flow, running all the processes required to make a final artifact, and eventually reporting build problems.

The hitch is that Expo reuses the .gitignore to decide which files need to be sent to the build server. That’s a bit unexpected… why not have it explicitly as part of the configuration? But okay, why not.

Where it becomes accidental complexity is when I found out that the Expo application looks at the root folder of my monorepo to decide what it should send… That was really unexpected… For those still reading: I start the build from the folder containing the application, and the script packages my entire repository 😕 The fix was to add an .easignore that mirrors the .gitignore and marks the monorepo folders to be ignored. So now I have two ignore files to maintain.

Note 1: Expo has documentation for monorepos, but between the complexity warning and the yarn requirement, I did not want to go down that road.

Note 2: I ran into the issue because, somehow, a folder in the project was owned by root. That should not have been the case, but it still brings me back to the reality of using a fully bundled platform: you have to embrace their choices and hope that they go in the same direction as your product. In my case, I think the approach with less friction is to have one repository per application and explore a setup with git submodules to share code between them.

6. Expo component

Unfortunately, I started the project with expo-av, which was deprecated in favor of expo-audio. Unfortunate timing for me. Otherwise, the Expo documentation is really nice: it includes examples, is pleasant to read, and offers clear information on what works on each platform (i.e. the initial crux of going native). I was able to record a sound, play it, and send it to my backend easily.

If I have to nitpick, an improvement would be to let users contribute examples themselves. The PHP and Clojure docs offer that, and I often find good examples of what I had in mind when I consult their documentation.

7. The limit of LLM-driven development

LLMs are frozen at the time of their making. With a fast-moving platform that bundles so many complex components, they give you deprecated answers while being assertive about their accuracy. That was a real time sink during the bootstrap phase of a project like this.

After I was done with the prototype, I got a suggestion to retrieve the Expo codebase locally and use it as context for the LLM, so prompts are worked out against the matching codebase. I have not tried it, but it’s an exciting idea! For example, using the fast-refresh.ts file as input and asking the agent to prevent the refresh would have led to a better patch than mine, and maybe to a more solid solution leveraging Metro. It is also a good reminder that AI-assisted coding is an extremely fast-moving area, and I have to think outside the box to use the right tool at the right time.

8. Vibe coding

Once I got the whole initial setup working, using Cursor and MCPs to build the application went smoothly. Grabbing the Figma design, creating a React equivalent, and introducing the hooks or re-frame subscription/events was quick and satisfying.

However, I don’t know how to create a vibe-coding experience in the sense of giving Cursor a high-level feature to build and letting it chew on it until it gets it right.

Expo Go tends to crash entirely if the code is wrong, and I have no MCP tool to get access to what my phone is seeing. I’m thinking of experimenting with using Puppeteer MCP and the web version to get a full loop, but I wonder how it will work in the long run as the web version has different features than the Android or iOS version. In my demo case, I want to use the microphone, and unfortunately, the concept is not fully abstracted by the Expo library. I’m guessing that other native aspects of a phone will not be the same on the web, making a full automatic loop not possible yet. I’m sure a coding agent company will figure that out soon with an in-the-cloud experience to loop on the appropriate environment or multiple environments in parallel.

Conclusion

It was a fun experience building a mobile application. The technology has matured a lot, and with the React bridge, the cost of building and maintaining a mobile application can be kept under control.

Like in other software areas, AI assistants have the potential to significantly increase the speed of mobile application development. It’s not hard to envision all the automation and trivial bug fixing that could happen without human intervention.

Permalink

EDN-infused plain html forms

Merry solstice. After about a year, I'm roughly 80% done with the Yakread rewrite. Now all that's left is the remaining 80%. My last post is still a good explanation of the new Biff things I've been hacking on as part of that. Over the past couple weeks I've also been thinking about how to do forms.

So far Biff hasn't provided anything special for forms: if you need an email address, you do [:input {:name "email"} ...], you get the value as a string from (-> request :params :email), you parse it if needed (not needed in this case), then you stick it in a map like {:user/email email, ...} or whatever. Works fine for small forms; no need to over-complicate things.

But what if you have a form with 50 fields? It would be nice if we could get EDN from the frontend, e.g. {:user/email "abc@example.com", :user/age 666} instead of {:email "abc@example.com", :age "666"}. Same as you get if you're doing a cljs frontend instead of htmx. htmx users deserve nice things too!

I've started rendering my form fields like [:input {:name (pr-str :user/email)} ...] (turns out :name will accept just about anything) and then using a wrap-parse-form middleware to parse the requests. That function attempts to parse each key in the form params with edn/read-string (fast-edn, actually), skipping keys that fail. For each parsed key, we then check your Biff app's Malli schema to see if that key is defined and what its type is. We use the type to figure out how to parse the form value. There are default parse functions for a few common types (:int is Long/parseLong, :uuid is parse-uuid, etc). For other types, you can define a custom form parser in your schema, for example:

(def schema
  {::cents [:int {:biff.form/parser
                  #(-> %
                       (Float/parseFloat)
                       (* 100)
                       (Math/round))}]
   :ad [:map {:closed true}
        [:ad/budget ::cents]
        ...

Now if I have a form field like [:input {:name (pr-str :ad/budget)} ...] and the user types in 12.34, on the backend I'll get {:ad/budget 1234, ...} automagically.
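Here's a stripped-down sketch of that middleware idea (not Biff's actual implementation: it assumes Ring's string-keyed :form-params and a flat keyword→type map standing in for the real Malli schema lookup):

(require '[clojure.edn :as edn])

(def default-parsers
  {:int  #(Long/parseLong %)
   :uuid parse-uuid})

(defn wrap-parse-form [handler key->type]
  (fn [{:keys [form-params] :as request}]
    (let [parsed (into {}
                       (keep (fn [[k v]]
                               (let [k* (try (edn/read-string k)
                                             (catch Exception _ nil))]
                                 ;; skip params whose name isn't readable EDN
                                 (when (keyword? k*)
                                   [k* ((default-parsers (key->type k*) identity) v)]))))
                       form-params)]
      (handler (update request :params merge parsed)))))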

The form data isn't quite self-describing like EDN is: it relies on a schema defined somewhere outside the form. I started out doing stuff like [:input {:name (pr-str {:field :user/favorite-number, :type :int})} ...] (seriously, you really can put anything in :name), but since I'm writing this middleware for Biff apps specifically, I didn't feel like that approach was adding much value. And I'm all about value.

What about forms with multiple entities? If your :name value is a vector like (pr-str [:user :user/email]), then wrap-parse-form will do an (assoc-in params [:user :user/email] ...). I don't at the moment have any special support for arrays of things, but you can do :name (pr-str [:users 3 :user/email]) and then you'll get {:users {3 {:user/email ...}}} in the request.
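That vector-name trick behaves like a plain assoc-in:

(assoc-in {} [:users 3 :user/email] "abc@example.com")
;; => {:users {3 {:user/email "abc@example.com"}}}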


Other Biff news

Remaining things in the Yakread TODO list include finishing the ad system, adding premium plans, precomputing some recommendation models so that page loads are faster, and setting up email digests of your subscriptions. How long could that take? Surely not long! Oh, and then I just need to migrate all the users over from the currently-in-production Yakread as well as another similar app that stopped being profitable last year... but yes, certainly not long.

Once that's humming along and my monthly side project operational costs are back in the double digits, it'll be time for a much needed Biff release. I'll extract some of the stuff from Yakread and package it up real nice, and then go through some maintenance tasks that have been... festering, shall we say. And then it's time for...

xᴛᴅʙ ᴠᴇʀsɪᴏɴ 2: at last. Everyone's favorite 4-letter immutable database is out of beta. Which means it's really time to get Biff on it. I figure Yakread, once the rewrite is done, will make a nice open-source example of porting a nontrivial app from XTDB v1 to v2. So expect a big Biff release with migration guide and all that. Hopefully by the end of the year 😬. Maybe I could even look into integrating XTDB with Rama.

Until we meet again, perhaps at the equinox. Or at the conj. I've got my ticket already.

Two free t-shirt ideas:

  • "(got? :lisp)" -- styled to look like those "got milk?" fridge magnets.
  • "Breaking changes are for the weak" -- not sure how to style it, but this t-shirt definitely needs to exist.

Permalink

Clojure Deref (June 20, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

The Clojure/conj 2025 Call for Presentations is open now until July 27! We are seeking proposals for both 30 minute sessions and 10 minute lightning talks.

Blogs, articles, and projects

Libraries and Tools

New releases and tools this week:

  • nippy 3.6.0 - Fast serialization library for Clojure

  • yoltq 0.2.82 - An opinionated Datomic queue for building (more) reliable systems. Supports retries, backoff, ordering and more.

  • sci 0.10.46 - Configurable Clojure/Script interpreter suitable for scripting and Clojure DSLs

  • scittle 0.7.23 - Execute Clojure(Script) directly from browser script tags via SCI

  • finddep - Find the root (top) of a given dependency in Clojure

  • trove - Modern logging facade for Clojure/Script

  • rv 0.0.10 - A Clojure library exploring the application of pure reasoning algorithms.

  • babashka 1.12.203 - Native, fast starting Clojure interpreter for scripting

  • squint 0.8.149 - Light-weight ClojureScript dialect

  • joyride 0.0.50 - Making VS Code Hackable like Emacs since 2022

Permalink

Lucas Vegi answers what the next frontier of software engineering is

Look, I'm going to answer that question. I prepared for it, but I'll answer it in a not-so-objective way, okay? Since we're talking about software engineering applied to functional programming, I can't avoid saying that this is one of the frontiers of software engineering. Naturally I won't say it's the main one, but it is one of them. We have an entire software engineering discipline to revisit within this paradigm, so there is a lot to do, a lot of novel work to be carried out.

However, I think that, just as Valim said recently and as other people have also shared here with you, the main frontier is understanding, in a more conclusive way, how AI can actually help us within the software engineering process. We already have a lot of ongoing research on AI applied to automating many of the subtasks of the software development process, but it is all still very exploratory. We don't have anything very conclusive yet; we have a lot of evolving to do in that direction.

Especially because, tying back to the first part of my answer, if we think about AI applied to functional software engineering, I speculate that the results may diverge quite a bit from most other research. Why? When we think about LLMs, they are trained on large volumes of code. Naturally, if we take something like GitHub, or something like ChatGPT, ChatGPT has received far more code in Python, Java, and JavaScript than code in Elixir or Clojure. So a result obtained from research that applies to object-oriented codebases may not hold up as well on code written in functional languages.

So that ends up bringing the two things together. I think there is room for that kind of investigation too: AI applied to software engineering in specific contexts, because I'm convinced those results will certainly diverge quite a bit, and we need to understand that in order to apply it. So the frontiers, from my point of view, are these.

Full episode

Permalink

Work and meta-work

Part of the work I did when I ran my own business was standardizing procedures. I got fascinated by checklists and instructions. Sometimes you only do something once in a while. You want to remember all of those little details, all the little problems solved, for next time. Because you will forget. Having them written down means you can pick it up whenever you need to. And, when you’re running a business, there’s enough to keep your mind busy. It’s nice to have a list of instructions to follow when you’re running out of decision-making power.

I started perfecting the process of writing processes. At one point, even the meta-work was included in the process itself. Think of the process like a recipe. The recipe says you need 4 cups of flour, 1 cup of oil, etc. Then it has steps like mix the flour and oil in the bottom of a heavy pan, mix some other ingredients in a medium bowl. It seems complete. But it’s not. At some point I started even writing the little steps like gather a spoon, a heavy pan, and a medium bowl. All of the implied steps got written down.

It might seem like those steps are obvious and don’t need to be explicit. To people who believe that, I say that they just haven’t experienced the calming effect of a good list of steps. The steps become the artifact that records all of your learning. You can rearrange the steps and otherwise optimize the whole process in writing.

It’s such a relief when you start to do that. It’s very relaxing to know that every little detail is handled in the instructions. For example, you might not know you need a medium bowl until you read step three. But at that point, you might have your pan on the stove and need to stir constantly. Having all of the steps, including the meta-steps, listed out and in an order that works is freeing. Your mind can attend to the present step.

At work I’ve been working through this same thing with project planning. We’ve got a feature we want to build. We discussed the details of it. Now it’s time to make a project in Linear to track it. There are two questions that I’m wrestling with: How much granularity do you want? And how much of the meta-work needs to be written down?

Too much granularity and things get a little too prescriptive. In work like programming, breaking the work down into smaller chunks and writing a description of those chunks is actually part of the work. And if your chunks are too small, you’re not taking into account the realistic uncertainty every programming project has about exactly how it should work. However, don’t break it down enough and you’ve got a single task: Implement the feature. Finding that middle path is part of the art of project planning.

But should planning the project be written down as the first step in the project? Sometimes I think yes. If the planning is not obvious (that is, you can’t just do it then and there), you can make the first step “Break the project into steps.” And if your planning process is more structured, with multiple steps (like getting approval, etc.), write those down, too.

The default setup in Linear is to have multiple statuses for each task: Backlog → In progress → In review → Done. But I find that code review where I work is hefty enough to merit its own step. So I give it one. Meta-work is work when it requires effort. For instance, I wouldn’t put a step in the project saying “change status of step 2 to Done.” That seems weird and a little too meta. But asking for a review, following up until it happens, addressing the comments, then following up again until it’s approved is real work. That gets its own step.

It reminds me very much of the Getting Things Done methodology where they recommend writing down the “next action” to take. It should be precise enough to actually accomplish. For instance, if you need to call a plumber, but you don’t know a plumber, the next action might be “ask your friends for plumber recommendations.” Better yet would be to list the friends.

In the same way, we want each step to be clear. If the project requires research before you know what steps to take, write “Research x.” You can add more steps as things become clear. The point is to capture the steps you do know and feel a sense of progress when you check them off.

It’s all work, meta or not. The real trick is that planning is a way of manipulating the future. You write down steps, you change them, you rearrange them, you erase them, all before you take the first step. Some steps are known and fixed: merging to main comes last, pegged at the end. Others are fluid and can be rearranged, done in parallel, or even eliminated if they’re not necessary. The question is whether the meta-task warrants a place in the future among all the other tasks. I say we should default to saying yes more often. It helps us take the meta-tasks seriously. For example, some teams may consider testing a meta-task. By making it a task and writing it down, we honor its place among the other tasks. And we honor ourselves when we complete it, giving us the satisfaction of a job done.

Permalink

Clojure Deref (June 16, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

Clojure/Conj 2025 will be held Nov 12-14 in Charlotte, NC. Early Bird Registration is now open. We look forward to seeing you in-person and online!

Blogs, articles, and projects

Libraries and Tools

New releases and tools this week:

  • clojure-desktop-toolkit 0.3.1 - Create native state-of-the-art desktop applications in Clojure using Eclipse’s SWT graphics toolkit.

  • basilisp 0.4.0 - A Clojure-compatible(-ish) Lisp dialect targeting Python 3.9+

  • ai-tools 1.0 - tools that help AI development assistants do Clojure

  • malli 0.19.1 - High-performance data-driven data specification library for Clojure/Script.

  • lazytest 1.7.0 - A standalone BDD test framework for Clojure

  • markdown 0.7.186 - A cross-platform clojure/script parser for Markdown

  • deps-new 0.9.0 - Create new projects for the Clojure CLI / deps.edn

  • xtdb 2.0.0 - An immutable SQL database for application development, time-travel reporting and data compliance. Developed by @juxt

  • next-jdbc 1.3.1048 - A modern low-level Clojure wrapper for JDBC-based access to databases.

  • babashka 1.12.202 - Native, fast starting Clojure interpreter for scripting

  • quickblog 0.4.7 - Light-weight static blog engine for Clojure and babashka

  • fluent-clj 0.1.0 - Project Fluent for Clojure/script

  • patcho - Patching micro lib for Clojure

  • qclojure - A functional quantum computer programming library for Clojure with simulation backend and visualizations.

  • clojure-plus 1.6.0 - A project to improve experience of using Clojure stdlib

  • fusebox 1.0.11 - An extremely lightweight fault tolerance library for Clojure(Script) and Babashka

  • fs 0.5.26 - File system utility library for Clojure

  • calva 2.0.519 - Clojure & ClojureScript Interactive Programming for VS Code

  • pedestal 0.8.0-beta-1 - The Pedestal Server-side Libraries

  • dataspex 2025.06.7 - See the shape of your data: point-and-click Clojure(Script) data browser

  • joyride 0.0.47 - Making VS Code Hackable like Emacs since 2022

  • pretty 3.4.0 - Library for helping print things prettily, in Clojure - ANSI fonts, formatted exceptions

  • coax 2.0.4 - Clojure.spec coercion library for clj(s)

  • build-uber-log4j2-handler 2.25.0 - A conflict handler for log4j2 plugins cache files for the tools.build uber task.

  • logging4j2 1.0.3 - A Clojure wrapper for log4j2

Permalink
