📖 Series: Time-Travel Debugging in State Management (Part 1 of 3)
From debugging tool to competitive UX advantage
Imagine: you're testing a checkout form. The user fills in all fields, clicks "Pay"... and gets an error.
You start debugging. But instead of reproducing the scenario again and again, you simply rewind the state back — to the moment before the error. Like in a video game where you respawn from the last checkpoint.
This is Time-Travel Debugging — the ability to move between application states over time.
💡 Key Insight: In modern applications, time-travel has evolved from exclusively a developer tool to a standalone user-facing feature that becomes a competitive product advantage.
💡 Note: The techniques and patterns in this series work for both scenarios — debugging AND user-facing undo/redo.
| Domain | Examples | History Depth | Value |
|---|---|---|---|
| 📝 Text Editors | Google Docs, Notion | 500-1000 steps | Version history, undo/redo |
| 📋 Forms & Builders | Typeform, Tilda | 50-100 steps | Real-time change reversal |
| 🎨 Graphic Editors | Figma, Canva | 50-100 steps | Design experimentation |
| 💻 Code Editors | VS Code, CodeSandbox | 500+ steps | Local change history |
| 🏗️ Low-code Platforms | Webflow, Bubble | 100-200 steps | Visual version control |
| 🎬 Video Editors | Premiere Pro, CapCut | 10-20 steps | Edit operation rollback |
In this article, we'll explore architectural patterns that work across all these domains — from simple forms to complex multimedia systems.
This article uses the following terms:
| Term | Description | Library Equivalents |
|---|---|---|
| State Unit | Minimal indivisible part of state | Universal concept |
| Atom | State unit in atom-based libraries | Jotai: atom, Recoil: atom, Nexus State: atom |
| Slice | Logically isolated part of state | Redux Toolkit: createSlice, Zustand: state key |
| Observable | Reactive object with auto-tracking | MobX: observable, Valtio: proxy, Solid.js: signal |
| Store | Container for state units (global state) | Zustand: store, Redux: store |
| Snapshot | State copy at a point in time | Universal term |
| Delta | Difference between two snapshots | Universal term |
💡 Note: "State unit" is used as a universal abstraction. Depending on your library, this might be called:
- Atom (Jotai, Recoil, Nexus State)
- Slice / state key (Redux, Zustand)
- Observable property (MobX, Valtio)
- Signal (Solid.js, Preact)
mindmap
  root((State Unit))
    Atom
      Jotai
      Recoil
      Nexus State
    Slice
      Redux Toolkit
      Zustand
    Observable
      MobX
      Valtio
    Signal
      Solid.js
      Preact
// Nexus State / Jotai / Recoil
const countAtom = atom(0);

// Zustand (state unit equivalent)
const useStore = create((set) => ({
  count: 0, // ← this is a "state unit"
}));

// Redux Toolkit (state unit equivalent)
const counterSlice = createSlice({
  name: 'counter',
  initialState: { value: 0 },
  // ^^^^^^^^^^^ this is a "state unit"
  reducers: {},
});

// MobX (state unit equivalent)
const store = makeAutoObservable({
  count: 0, // ← this is a "state unit"
});
Why "state unit"?
Time-Travel Debugging is a debugging method where the system preserves state history and lets developers navigate through it. A minimal API looks like this:
interface TimeTravelAPI {
  // Navigation
  undo(): boolean;
  redo(): boolean;
  jumpTo(index: number): boolean;

  // Availability checks
  canUndo(): boolean;
  canRedo(): boolean;

  // History
  getHistory(): Snapshot[];
  getCurrentSnapshot(): Snapshot | undefined;

  // Management
  capture(action?: string): Snapshot;
  clearHistory(): void;
}
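A minimal sketch of what sits behind such an API: full snapshots in an array, with a cursor marking the current position. `SimpleTimeTravel` and its simplified snapshot shape are illustrative, not taken from any particular library.

```typescript
// A minimal illustration of the API above: full snapshots in an array,
// with a cursor marking the current position.
type MiniSnapshot = { id: number; state: Record<string, unknown> };

class SimpleTimeTravel {
  private history: MiniSnapshot[] = [];
  private cursor = -1; // index of the current snapshot
  private nextId = 0;

  capture(state: Record<string, unknown>): MiniSnapshot {
    // Capturing after an undo discards the "future" (redo) branch.
    this.history = this.history.slice(0, this.cursor + 1);
    const snapshot: MiniSnapshot = {
      id: this.nextId++,
      state: JSON.parse(JSON.stringify(state)), // naive deep clone
    };
    this.history.push(snapshot);
    this.cursor = this.history.length - 1;
    return snapshot;
  }

  canUndo(): boolean { return this.cursor > 0; }
  canRedo(): boolean { return this.cursor < this.history.length - 1; }

  undo(): boolean {
    if (!this.canUndo()) return false;
    this.cursor--;
    return true;
  }

  redo(): boolean {
    if (!this.canRedo()) return false;
    this.cursor++;
    return true;
  }

  getCurrentSnapshot(): MiniSnapshot | undefined {
    return this.history[this.cursor];
  }
}
```

Note the slice in `capture`: recording a new change after an undo drops the redo branch, which is the behavior most undo/redo UIs expect.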
Time-travel debugging isn't new. The first significant implementations appeared in the mid-2000s:
| Year | System | Description |
|---|---|---|
| 2004 | Smalltalk Squeak | One of the first environments with state "rollback" |
| 2010 | OmniGraffle | Undo/redo for graphic operations |
| 2015 | Redux DevTools | Popularized time-travel for web apps |
| 2016 | Elm Time Travel | Built-in support via immutable architecture |
| 2019 | Akita (Angular) | Built-in time-travel for Angular |
| 2020+ | Modern Libraries | Jotai, Zustand, MobX with plugins |
| 2021 | Elf (ngneat) | Reactive state management on RxJS with DevTools |
timeline
  title Time-Travel Debugging Evolution
  2010-2015 : Simple Undo/Redo
            : Full snapshots
            : Limited depth
  2015-2020 : DevTools Integration
            : Redux DevTools
            : Action tracking
  2020+ : Optimized Systems
        : Delta compression
        : User-facing features
Generation 1 (2010-2015): Simple undo/redo stacks
Generation 2 (2015-2020): DevTools integration
Generation 3 (2020+): Optimized systems
The classic approach: each state change is encapsulated in a command object that knows how to apply and reverse itself:
interface Command<T> {
  execute(): T;
  undo(): void;
  redo(): void;
}

// For atom-based libraries (Jotai, Recoil, Nexus State)
class SetAtomCommand<T> implements Command<T> {
  constructor(
    private atom: Atom<T>,
    private newValue: T,
    private oldValue?: T
  ) {}

  execute(): T {
    this.oldValue = this.atom.get();
    this.atom.set(this.newValue);
    return this.newValue;
  }

  undo(): void {
    this.atom.set(this.oldValue!);
  }

  redo(): void {
    this.execute();
  }
}
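Wiring a command into an undo stack then looks like this. `MiniAtom` is a stand-in for whichever atom implementation your library provides; all names here are illustrative.

```typescript
// Minimal command-pattern sketch: executed commands go on an undo stack,
// and undoing pops the most recent command and reverses it.
interface Command<T> {
  execute(): T;
  undo(): void;
  redo(): void;
}

class MiniAtom<T> {
  constructor(private value: T) {}
  get(): T { return this.value; }
  set(next: T): void { this.value = next; }
}

class SetCommand<T> implements Command<T> {
  private oldValue!: T;
  constructor(private atom: MiniAtom<T>, private newValue: T) {}

  execute(): T {
    this.oldValue = this.atom.get(); // remember for undo
    this.atom.set(this.newValue);
    return this.newValue;
  }
  undo(): void { this.atom.set(this.oldValue); }
  redo(): void { this.execute(); }
}

const count = new MiniAtom(0);
const undoStack: Command<number>[] = [];

const cmd = new SetCommand(count, 5);
cmd.execute();
undoStack.push(cmd);
// count.get() === 5

undoStack.pop()!.undo();
// count.get() === 0
```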
// For Redux / Zustand (equivalent)
class SetStateCommand<T extends Record<string, any>> implements Command<any> {
  private oldValue: any;

  constructor(
    private store: Store<T>,
    private slice: keyof T,
    private newValue: any
  ) {}

  execute(): any {
    this.oldValue = this.store.getState()[this.slice];
    this.store.setState({ [this.slice]: this.newValue });
    return this.newValue;
  }

  undo(): void {
    this.store.setState({ [this.slice]: this.oldValue });
  }

  redo(): void {
    this.execute();
  }
}
Pros:
- Every change is an explicit, self-describing operation
- Commands compose naturally into macros and batched operations

Cons:
- Verbose: a command class per operation type
- Awkward with async updates, where old and new values may race
Preserving full state copies at key moments:
interface Snapshot {
  id: string;
  timestamp: number;
  action?: string;
  state: Record<string, AtomState>;
  metadata: {
    label?: string;
    source?: 'auto' | 'manual';
  };
}

class SnapshotManager {
  private history: Snapshot[] = [];

  constructor(private store: Store) {}

  capture(action?: string): Snapshot {
    const snapshot: Snapshot = {
      id: generateId(),
      timestamp: Date.now(),
      action,
      state: deepClone(this.store.getState()),
      metadata: { label: action },
    };
    this.history.push(snapshot);
    return snapshot;
  }
}
Pros:
- Simple to implement and reason about
- Restoration is trivial: replace the current state with the stored copy

Cons:
- Memory grows with state size times history depth
- Deep-cloning a large state on every change is expensive
Storing only changes between states:
interface DeltaSnapshot {
  id: string;
  type: 'delta';
  baseSnapshotId: string;
  changes: {
    [atomId: string]: {
      oldValue: any;
      newValue: any;
    };
  };
  timestamp: number;
}

class DeltaCalculator {
  computeDelta(before: Snapshot, after: Snapshot): DeltaSnapshot {
    const changes: Record<string, any> = {};

    for (const [key, value] of Object.entries(after.state)) {
      const oldValue = before.state[key]?.value;
      if (!deepEqual(oldValue, value)) {
        changes[key] = { oldValue, newValue: value };
      }
    }

    return {
      id: generateId(),
      type: 'delta',
      baseSnapshotId: before.id,
      changes,
      timestamp: Date.now(),
    };
  }
}
Pros:
- Memory-efficient: only changed values are stored
- Well suited to frequent, small changes

Cons:
- Restoration requires replaying a chain of deltas
- A missing or corrupted base snapshot invalidates the whole chain
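Because each delta stores both old and new values, it is reversible: applying it forward redoes the change, applying it backward undoes it. A minimal sketch of the apply step (`applyDelta` is illustrative, not from any library):

```typescript
// Apply a delta's changes to a state object. Storing both oldValue and
// newValue makes the delta reversible: 'forward' redoes, 'backward' undoes.
type DeltaChanges = Record<string, { oldValue: unknown; newValue: unknown }>;

function applyDelta(
  state: Record<string, unknown>,
  changes: DeltaChanges,
  direction: 'forward' | 'backward' = 'forward'
): Record<string, unknown> {
  const next = { ...state }; // shallow copy; untouched keys are shared
  for (const [key, change] of Object.entries(changes)) {
    next[key] = direction === 'forward' ? change.newValue : change.oldValue;
  }
  return next;
}
```

Undo walks a chain of deltas backward from the current state; redo walks it forward again.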
Modern approach combining snapshots and deltas:
flowchart LR
  A[State Change] --> B{Full Snapshot<br/>Interval?}
  B -->|Yes| C[Create Full Snapshot]
  B -->|No| D[Compute Delta]
  C --> E[History Array]
  D --> E
  E --> F{Restore Request}
  F -->|Full| G[Direct Return]
  F -->|Delta| H[Apply Delta Chain]
  H --> I[Reconstructed State]
class HybridHistoryManager {
  private fullSnapshots: Snapshot[] = [];
  private deltaChain: Map<string, DeltaSnapshot> = new Map();

  // Every N changes, create a full snapshot
  private fullSnapshotInterval = 10;
  private changesSinceFull = 0;

  add(state: State): void {
    if (this.changesSinceFull >= this.fullSnapshotInterval) {
      // Create full snapshot
      const full = this.createFullSnapshot(state);
      this.fullSnapshots.push(full);
      this.changesSinceFull = 0;
    } else {
      // Create delta
      const base = this.getLastFullSnapshot();
      const delta = this.computeDelta(base, state);
      this.deltaChain.set(delta.id, delta);
      this.changesSinceFull++;
    }
  }

  restore(index: number): State {
    const full = this.getNearestFullSnapshot(index);
    const deltas = this.getDeltasBetween(full.index, index);

    // Apply deltas to full snapshot
    return deltas.reduce(
      (state, delta) => this.applyDelta(state, delta),
      full.state
    );
  }

  // Helper methods (createFullSnapshot, computeDelta, applyDelta, ...)
  // are omitted for brevity.
}
When to use:
| Pattern | Use when... | Avoid when... |
|---|---|---|
| Command | Complex operations, macros | Simple changes, async |
| Snapshot | Small states, need simplicity | Large states, frequent changes |
| Delta | Frequent small changes | Rare large changes |
| Hybrid | Universal case | Very simple apps |
// Universal example for any library
function createFullSnapshot(store: Store): Snapshot {
  return {
    id: uuid(),
    state: JSON.parse(JSON.stringify(store.getState())),
    timestamp: Date.now(),
  };
}

// For Redux / Zustand
const snapshot = {
  state: {
    counter: { value: 5 }, // Redux slice
    user: { name: 'John' } // Redux slice
  },
  timestamp: Date.now()
};

// For Jotai / Nexus State
const snapshot = {
  state: {
    'count-atom-1': { value: 5, type: 'atom' },
    'user-atom-2': { value: { name: 'John' }, type: 'atom' }
  },
  timestamp: Date.now()
};
Characteristics:
- High memory usage: every snapshot is a full copy
- Fast restoration: the stored state is returned directly
- Low implementation complexity; best for small states
// Universal example
function computeDelta(before: State, after: State): Delta {
  const changes: Record<string, Change> = {};

  for (const key of Object.keys(after)) {
    if (!deepEqual(before[key], after[key])) {
      changes[key] = {
        from: before[key],
        to: after[key],
      };
    }
  }

  return { changes, timestamp: Date.now() };
}

// Example: Redux slice
const delta = {
  changes: {
    'counter.value': { from: 5, to: 6 },
    'user.lastUpdated': { from: 1000, to: 2000 }
  }
};

// Example: Jotai atoms
const delta = {
  changes: {
    'count-atom-1': { from: 5, to: 6 }
  }
};
Characteristics:
- Low memory usage: only changed keys are stored
- Medium restoration cost: deltas must be replayed from a base snapshot
- Medium implementation complexity; best for frequent small changes
Using immutable structures with shared references:
// Example with Immutable.js
import { Map } from 'immutable';

const state1 = Map({ count: 1, user: { name: 'John' } });
const state2 = state1.set('count', 2);
// state1 and state2 share the user object
// Only count changed

// For React + Immer (more popular approach)
import { produce } from 'immer';

const state1 = { count: 1, user: { name: 'John' } };
const state2 = produce(state1, draft => {
  draft.count = 2;
  // user remains the same reference
});
Characteristics:
- Medium memory usage: unchanged subtrees are shared by reference
- Fast restoration: a history entry is just a reference to an old version
- Higher implementation complexity; requires immutable update discipline
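The same sharing effect is available without any library by copying only the level that changed. A sketch with plain objects (`v1`/`v2` are illustrative names):

```typescript
// Plain-object structural sharing: spread-copy the top level only,
// so history entries share references to unchanged subtrees.
type AppState = { count: number; user: { name: string } };

const v1: AppState = { count: 1, user: { name: 'John' } };
const v2: AppState = { ...v1, count: 2 }; // `user` is reused by reference

// Storing both versions costs one small top-level object, not a deep copy.
const history: AppState[] = [v1, v2];

// v2.user === v1.user → true (shared); v1 is untouched (count is still 1)
```

This only works if nothing mutates the shared subtrees in place, which is exactly the discipline Immer and Immutable.js enforce for you.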
| Aspect | Immer (Proxy) | Immutable.js |
|---|---|---|
| Memory | O(n + m) best case | O(log n) for Persistent Data Structures |
| Restoration | O(1) with references | O(log n) for access |
| Requirements | Proxy API (ES2015+) | Specialized library |
| Compatibility | High (transparent objects) | Medium (special types) |
Note: Characteristics may differ by implementation. For ClojureScript, Mori, and other persistent data structure libraries, complexity will vary.
| Strategy | Memory | Restoration | Complexity | Use Case |
|---|---|---|---|---|
| Full Snapshots | High | Fast | Low | Small states |
| Deltas | Low | Medium | Medium | Frequent small changes |
| Structural Sharing | Medium | Fast | High | Immutable states |
| Hybrid | Medium | Medium | High | Universal |
In Part 2 ("Performance & Advanced Topics"), we'll build on these patterns and dig into their performance characteristics.
Which pattern would you choose for your project?
Think about your current project:
- How often does state change?
- What's the state size (small/medium/large)?
- Do you need deep history (100+ steps)?
Share your choice in the comments!
To be continued... → Part 2: Performance & Advanced Topics
This is Part 1 of 3 in the Time-Travel Debugging article series.
Tags: #javascript #typescript #state-management #debugging #architecture #react #redux #performance
BigConfig began as a simple Babashka script designed to DRY up a complex Terraform project for a data platform. Since those humble beginnings, it has evolved through several iterations into a robust template and workflow engine. But as the tool matured, I realized that technical power wasn’t enough; the way it was framed was the true barrier to adoption.
BigConfig is powerful as a library, but I’ve faced a hard truth: very few developers will learn a language like Clojure just to use a library. However, history shows that developers will learn a new language if it solves a fundamental deployment problem.
People learned Ruby to master Homebrew; they learn Nix for reproducible builds. Meanwhile, tools like Helm force users to juggle the awkward marriage of YAML and Go templates—a “solution” many endure only because no better alternative exists. To get developers to cross the language barrier, you have to offer more than a tool; you have to offer a total solution.
I noticed a significant shift in engagement depending on how I framed the project. When I describe BigConfig as a library, it feels abstract—like “more work” added to a developer’s plate. When I introduce it as a package manager, the interest is immediate.
In the mind of a developer, a library is a component you have to manage. A package manager is the system that manages things for you. By shifting the perspective, BigConfig goes from being a “Clojure utility” to an “Infrastructure Orchestrator.”
Like Nix and Guix, BigConfig embraces a full programming language. However, it avoids the “two-language architecture” common in those ecosystems—where you often have a compiled language for the CLI and a separate interpreted one for the user.
BigConfig is Clojure all the way down (in the spirit of Emacs). This allows it to support three distinct environments seamlessly.
Beyond the language, BigConfig introduces robust client-side coordination, featuring an Atlantis-style locking mechanism that uses GitHub tags to prevent developer collisions in shared environments.
The level of abstraction is where BigConfig truly shines. When you adopt the system, you aren’t locked into a rigid schema; you can adapt the entire engine to your specific needs. Complex tasks—like deploying the same architecture across different hyperscalers—are reduced from massive refactors to simply updating a property. It moves the conversation from how to deploy to what to deploy.
The next phase is focused on expanding the ecosystem and making package discovery seamless.
The evolution of BigConfig is a testament to the idea that the right abstraction is just as important as the right code. By reframing the tool from a utility you have to manage into a system that manages for you, we bridge the gap between complex cloud resources and developer productivity.
As we expand our hyperscaler support and formalize our package registry, the goal remains the same: to move infrastructure management away from the “how” and toward the “what.” Whether you are deploying a niche remote environment or a massive data stack like a Data Lake, BigConfig provides the language and the logic to make your infrastructure as versionable and reproducible as your software.
Would you like to have a follow-up on this topic? What are your thoughts? I’d love to hear your experiences.
I want to start off by saying that I don't expect anyone to read this. However, neither OpenAI nor Anthropic's models get this right on the first try, so I figure that writing about it might get picked up in future training runs.
A pattern that I sometimes want in Clojure is to implement the same function for multiple types (defined by either defrecord or deftype). A simple way to do this is to do the work in an external function, and then have every implementation call it.
For instance, say I want to get a list of feet from several animal types. I can create a protocol for this, with the function get-feet:
(defprotocol Footed
  (get-feet [animal] "Get a sequence of feet"))
Then, I may have a few different groups of animals, each sharing a number of feet. I can create a function for each of these groups:
(defn get-2-feet [] [:left :right])

(defn get-4-feet [] [:front-left :front-right
                     :back-left :back-right])

(defn get-6-feet [] [:front-left :front-right
                     :middle-left :middle-right
                     :back-left :back-right])
Then the different record types will call the function they need:
(defrecord Ape [name]
  Footed
  (get-feet [_] (get-2-feet)))

(defrecord Bird [name]
  Footed
  (get-feet [_] (get-2-feet)))

(defrecord Cat [name]
  Footed
  (get-feet [_] (get-4-feet)))

(defrecord Ant [name]
  Footed
  (get-feet [_] (get-6-feet)))
…and so on.
This works, but it is very unsatisfying. It also gets noisy if the protocol has more than one function.
Instead, it would be nice if we could implement the protocol once, and then inherit this in any type that needs that implementation. Clojure doesn't support inheritance like this, but it has something close.
A Protocol in Clojure is a set of functions that an object has agreed to support. The language and compiler have special dispatch support around protocols, making their functions fast and easy to call. Many people know the specifics of protocols, but that knowledge often comes from exploration rather than documentation. I won't go into an exhaustive discussion of protocols here, but I will mention a couple of important aspects.
Whenever a protocol is created in Clojure, two things are created: the protocol itself, and a plain-old Java Interface. (ClojureScript also has protocols, but they don't create interfaces). The protocol is just a normal data structure, which we can see at a repl:
user=> (defprotocol Footed
(get-feet [animal] "Get a sequence of feet"))
Footed
user=> Footed
{:on user.Footed,
:on-interface user.Footed,
:sigs {:get-feet {:tag nil, :name get-feet, :arglists ([animal]), :doc "Get a sequence of feet"}},
:var #'user/Footed,
:method-map {:get-feet :get-feet},
:method-builders {#'user/get-feet #object[user$eval143$fn__144 0x67001148 "user$eval143$fn__144@67001148"]}}
This describes the protocol, and each of the associated functions. This is also the structure that gets modified by some of the various protocol extension macros. You may see how the :method-map refers to functions by their name, rewritten as keywords.
Of interest here is the reference to the interface user.Footed. I'm using a repl with the default user namespace. Because we are already in this namespace, that Footed interface name is being shadowed by the protocol object. But it is still there, and we can still do things with it.
Protocols are often "extended" onto new datatypes. This is a very flexible operation, and allows new behavior to be associated with any datatype, including those not declared in Clojure (for instance, new behavior could be added to a java.util.String). This applies to Interfaces as well as Classes, which is something we can use here.
First of all, we want a new protocol/interface for each type of behavior that we want:
(defprotocol Feet2)
(defprotocol Feet4)
(defprotocol Feet6)
These protocols don't need functions, as they just serve to "mark" the objects that want to implement the desired behavior.
Next, we can extend the protocol with our functions onto the types described by each of these Interfaces:
(extend-protocol Footed
  user.Feet2
  (get-feet [_] [:left :right])
  user.Feet4
  (get-feet [_] [:front-left :front-right :back-left :back-right])
  user.Feet6
  (get-feet [_] [:front-left :front-right :middle-left :middle-right :back-left :back-right]))
Going back to the Footed protocol, we can see that it now knows about these implementations.
user=> Footed
{:on user.Footed,
:on-interface user.Footed,
:sigs {:get-feet {:tag nil, :name get-feet, :arglists ([animal]), :doc "Get a sequence of feet"}},
:var #'user/Footed,
:method-map {:get-feet :get-feet},
:method-builders {#'user/get-feet #object[user$eval143$fn__144 0x67001148 "user$eval143$fn__144@67001148"]},
:impls {
user.Feet2 {:get-feet #object[user$eval195$fn__196 0x24fabd0f "user$eval195$fn__196@24fabd0f"]},
user.Feet4 {:get-feet #object[user$eval199$fn__200 0x250b236d "user$eval199$fn__200@250b236d"]},
user.Feet6 {:get-feet #object[user$eval203$fn__204 0x61f3fbb8 "user$eval203$fn__204@61f3fbb8"]}}}
Note how the :impls value now maps each of the extended interfaces to the attached functions.
You might have noticed that I had to use the fully-qualified name for these interfaces due to the protocol name shadowing them. When a protocol is not in the same namespace, then it can be required, and referenced by its namespace, while the Interface can be imported from that namespace. For instance, a project that I've been working on recently has require/imports of:
(ns my.project
  (:require [quoll.rdf :as rdf])
  (:import [quoll.rdf IRI]))
In this example I am able to reference the protocol via rdf/IRI while the interface is just IRI.
Now that the Footed protocol has been extended to each of these interfaces, the protocols associated with those interfaces can be attached to any type that wants that behavior.
Going back to our animals, we can do the same thing again, but this time without the stub functions that redirect to the common functionality:
(defrecord Ape [name] Feet2)
(defrecord Bird [name] Feet2)
(defrecord Cat [name] Feet4)
(defrecord Ant [name] Feet6)
Instances of these types will now pick up the implementations extended to these marker protocols:
(def magilla (Ape. "Magilla"))
(def big-bird (Bird. "Big"))
(def garfield (Cat. "Garfield"))
(def atom-ant (Ant. "Atom"))
user=> (get-feet magilla)
[:left :right]
user=> (get-feet big-bird)
[:left :right]
user=> (get-feet garfield)
[:front-left :front-right :back-left :back-right]
user=> (get-feet atom-ant)
[:front-left :front-right :middle-left :middle-right :back-left :back-right]
After explaining so much of the mechanism, the code has been scattered widely across this post. Putting the declarations together, we have:
(defprotocol Footed (get-feet [_]))
(defprotocol Feet2)
(defprotocol Feet4)
(defprotocol Feet6)
(extend-protocol Footed
  user.Feet2
  (get-feet [_] [:left :right])
  user.Feet4
  (get-feet [_] [:front-left :front-right :back-left :back-right])
  user.Feet6
  (get-feet [_] [:front-left :front-right :middle-left :middle-right :back-left :back-right]))
(defrecord Ape [name] Feet2)
(defrecord Bird [name] Feet2)
(defrecord Cat [name] Feet4)
(defrecord Ant [name] Feet6)
Functional programming in Clojure is not generally served by having multiple types like this, but it does happen. While this is a trivial example, with only a single function on the protocol, the need for this pattern becomes apparent when protocols come with multiple functions.
I've called it inheritance, but that is only an analogy. It's not actually inheritance that we are applying here, but it does behave in a similar way.
core.async 1.9.847-alpha3 is now available. This release reverts the core.async virtual thread implementation added in alpha2, and provides a new implementation (ASYNC-272).
Threads must block while waiting on I/O operations to complete. "Parking" allows the platform to unmount and free the underlying thread resource while waiting. This allows users to write "normal" straight line code (without callbacks) while consuming fewer platform resources.
io-thread execution context
io-thread was added in a previous core.async release and is a new execution context for running both blocking channel operations and blocking I/O operations (which are not supported in go). Parking operations are not allowed in io-thread (same as the thread context).
io-thread uses the :io executor pool, which will now use virtual threads, when available. If used in Java without virtual threads (< 21), io-thread continues to run in a cached thread pool with platform threads.
With this change, all blocking operations in io-thread park without consuming a platform thread on Java 21+.
go blocks
Clojure core.async go blocks use an analyzer to rewrite code with inversion of control specifically for channel parking operations (the parking ops like <! and >!). Other blocking operations (<!!/>!! channel ops or arbitrary I/O ops) are not allowed. Additionally, go blocks are automatically collected if the channels they depend on are collected (and parking can never progress).
The Java 21 virtual threads feature implements I/O parking in the Java platform itself - that capability is a superset of what go blocks provide by supporting all blocking I/O operations. Like regular threads, (and unlike go blocks) virtual threads must terminate ordinarily and will keep referenced resources alive until they do.
Due to this difference in semantics, go blocks are unchanged and continue to use the go analyzer and run on platform threads. If you wish to get the benefits and constraints of virtual threads, convert go to io-thread and parking ops to blocking ops.
Note: existing IOC compiled go blocks from older core.async versions are unaffected.
The clojure.core.async.executor-factory system property now needs only to provide Executor instances, not ExecutorService instances. This is a reduction in requirements, so it is backwards-compatible.
Additionally, the io-thread virtual thread Executor no longer holds references to virtual threads as it did in 1.9.829-alpha2.
A complete step-by-step guide to creating a project called cat (or a workspace, in Polylith terms) with a filesystem component, a main base, and a cli project using the Polylith architecture.
cat/ ← workspace root
├── components/
│ └── filesystem/ ← reads a file and prints its content
├── bases/
│ └── main/ ← entry point (-main function)
└── projects/
└── cli/ ← deployable artifact (uberjar)
Data flow: java -jar cli.jar myfile.txt → main/-main → filesystem/read-file → stdout
Install the following before starting:
- Java: verify with java -version
- Clojure CLI: verify with clojure --version
- Git: verify with git --version; also configure user.name and user.email:
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
The poly tool
macOS:
brew install polyfy/polylith/poly
For other OS/platforms please refer to the official Installation doc.
Verify:
poly version
Run this outside any existing git repository:
poly create workspace name:cat top-ns:com.acme :commit
Move into the workspace:
cd cat
Your directory structure will look like:
cat/
├── .git/
├── .gitignore
├── bases/
├── components/
├── deps.edn
├── development/
│ └── src/
├── projects/
├── readme.md
└── workspace.edn
workspace.edn
Open workspace.edn and set :auto-add to true so that files generated by poly create commands are automatically staged in git:
{:top-namespace "com.acme"
 :interface-ns "interface"
 :default-profile-name "default"
 :dialects ["clj"]
 :compact-views #{}
 :vcs {:name "git"
       :auto-add true} ;; <-- change this to true
 :tag-patterns {:stable "^stable-.*"
                :release "^v[0-9].*"}
 :template-data {:clojure-ver "1.12.0"}
 :projects {"development" {:alias "dev"}}}
The filesystem Component
poly create component name:filesystem
This creates:
components/filesystem/
├── deps.edn
├── src/
│ └── com/acme/filesystem/
│ └── interface.clj
└── test/
└── com/acme/filesystem/
└── interface_test.clj
The interface namespace is the only file other bricks are allowed to call. Edit components/filesystem/src/com/acme/filesystem/interface.clj:
(ns com.acme.filesystem.interface
  (:require [com.acme.filesystem.core :as core]))

(defn read-file
  "Reads the file at `filename` and prints its content to stdout."
  [filename]
  (core/read-file filename))
Create the file components/filesystem/src/com/acme/filesystem/core.clj:
(ns com.acme.filesystem.core
  (:require [clojure.java.io :as io]))

(defn read-file
  "Reads the file at `filename` and prints its content to stdout."
  [filename]
  (let [file (io/file filename)]
    (if (.exists file)
      (println (slurp file))
      (println (str "Error: file not found — " filename)))))
deps.edn
Open the root ./deps.edn and add the filesystem component:
{:aliases {:dev {:extra-paths ["development/src"]
                 :extra-deps {com.acme/filesystem {:local/root "components/filesystem"}
                              org.clojure/clojure {:mvn/version "1.12.0"}}}
           :test {:extra-paths ["components/filesystem/test"]}
           :poly {:main-opts ["-m" "polylith.clj.core.poly-cli.core"]
                  :extra-deps {polyfy/clj-poly {:mvn/version "0.3.32"}}}}}
The main Base
poly create base name:main
This creates:
bases/main/
├── deps.edn
├── src/
│ └── com/acme/main/
│ └── core.clj
└── test/
└── com/acme/main/
└── core_test.clj
A base differs from a component in that it has no interface — it is the entry point to the outside world. Edit bases/main/src/com/acme/main/core.clj:
(ns com.acme.main.core
  (:require [com.acme.filesystem.interface :as filesystem])
  (:gen-class))

(defn -main
  "Entry point. Accepts a filename as the first argument and prints its content."
  [& args]
  (if-let [filename (first args)]
    (filesystem/read-file filename)
    (println "Usage: cat <filename>"))
  (System/exit 0))
Key points:
- (:gen-class) tells the Clojure compiler to generate a Java class with a main method.
- The base calls com.acme.filesystem.interface/read-file — never the core namespace directly.
- System/exit 0 ensures the JVM terminates cleanly after running.

deps.edn
Add the main base alongside filesystem:
{:aliases {:dev {:extra-paths ["development/src"]
                 :extra-deps {com.acme/filesystem {:local/root "components/filesystem"}
                              com.acme/main {:local/root "bases/main"}
                              org.clojure/clojure {:mvn/version "1.12.0"}}}
           :test {:extra-paths ["components/filesystem/test"
                                "bases/main/test"]}
           :poly {:main-opts ["-m" "polylith.clj.core.poly-cli.core"]
                  :extra-deps {polyfy/clj-poly {:mvn/version "0.3.32"}}}}}
The cli Project
poly create project name:cli
This creates:
projects/cli/
└── deps.edn
workspace.edn
Open workspace.edn and add a cli alias to :projects:
:projects {"development" {:alias "dev"}
"cli" {:alias "cli"}}
Edit projects/cli/deps.edn to include the filesystem component, the main base, the uberjar entry point, and the build alias:
{:deps {com.acme/filesystem {:local/root "components/filesystem"}
        com.acme/main {:local/root "bases/main"}
        org.clojure/clojure {:mvn/version "1.12.0"}}
 :aliases {:test {:extra-paths []
                  :extra-deps {}}
           :uberjar {:main com.acme.main.core}}}
The poly tool does not include a build command — it leaves artifact creation to your choice of tooling. We will use Clojure tools.build.
Add the :build alias to the root deps.edn
Your final root ./deps.edn should look like this:
{:aliases {:dev {:extra-paths ["development/src"]
:extra-deps {com.acme/filesystem {:local/root "components/filesystem"}
com.acme/main {:local/root "bases/main"}
org.clojure/clojure {:mvn/version "1.12.0"}}}
:test {:extra-paths ["components/filesystem/test"
"bases/main/test"]}
:poly {:main-opts ["-m" "polylith.clj.core.poly-cli.core"]
:extra-deps {polyfy/clj-poly {:mvn/version "0.3.32"}}}
:build {:deps {io.github.clojure/tools.build {:mvn/version "0.9.6"}}
:ns-default build}}}
build.clj at the workspace root
Create the file build.clj under the workspace root:
(ns build
  (:require [clojure.tools.build.api :as b]
            [clojure.java.io :as io]))

(defn uberjar
  "Build an uberjar for a given project.
  Usage: clojure -T:build uberjar :project cli"
  [{:keys [project]}]
  (assert project "You must supply a :project name, e.g. :project cli")
  (let [project (name project)
        project-dir (str "projects/" project)
        class-dir (str project-dir "/target/classes")
        ;; Create the basis from the project's deps.edn.
        ;; tools.build resolves :local/root entries and collects all
        ;; transitive :paths (i.e. each brick's "src" and "resources").
        basis (b/create-basis {:project (str project-dir "/deps.edn")})
        ;; Collect every source directory declared across all bricks.
        ;; basis :classpath-roots contains the resolved paths.
        src-dirs (filterv #(.isDirectory (java.io.File. %))
                          (:classpath-roots basis))
        main-ns (get-in basis [:aliases :uberjar :main])
        _ (assert main-ns
                  (str "Add ':uberjar {:main <ns>}' alias to "
                       project-dir "/deps.edn"))
        jar-file (str project-dir "/target/" project ".jar")]
    (println (str "Cleaning " class-dir "..."))
    (b/delete {:path class-dir})
    (io/make-parents jar-file)
    (println (str "Compiling " main-ns "..."))
    (b/compile-clj {:basis basis
                    :src-dirs src-dirs
                    :class-dir class-dir})
    (println (str "Building uberjar " jar-file "..."))
    (b/uber {:class-dir class-dir
             :uber-file jar-file
             :basis basis
             :main main-ns})
    (println "Uberjar is built.")))
Run the poly info command to see the current state of your workspace:
poly info
You should see both bricks (filesystem and main) listed, along with the cli project. Then validate the workspace integrity:
poly check
This should print OK. If there are errors, the command will describe what to fix.
From the workspace root:
clojure -T:build uberjar :project cli
Expected output:
Compiling com.acme.main.core...
Building uberjar projects/cli/target/cli.jar...
Uberjar is built.
Create a test file and run the app:
echo "Hello from Polylith!" > /tmp/hello.txt
java -jar projects/cli/target/cli.jar /tmp/hello.txt
Expected output:
Hello from Polylith!
Test the missing-file error path:
java -jar projects/cli/target/cli.jar /tmp/nonexistent.txt
Expected output:
Error: file not found — /tmp/nonexistent.txt
Test the no-argument path:
java -jar projects/cli/target/cli.jar
Expected output:
Usage: cat <filename>
cat/
├── bases/
│ └── main/
│ ├── deps.edn
│ └── src/com/acme/main/
│ └── core.clj ← -main, calls filesystem/read-file
├── components/
│ └── filesystem/
│ ├── deps.edn
│ └── src/com/acme/filesystem/
│ ├── interface.clj ← public API (read-file)
│ └── core.clj ← implementation
├── projects/
│ └── cli/
│ ├── deps.edn ← wires filesystem + main, :uberjar alias
│ └── target/
│ └── cli.jar ← generated artifact
├── build.clj ← tools.build script
├── deps.edn ← dev + test + poly + build aliases
└── workspace.edn ← top-ns, project aliases, vcs config
Key names in this workspace: the root directory cat/, the bricks filesystem and main, the project cli, and each component's public interface ns, such as com.acme.filesystem.interface.

poly commands:

poly info   # overview of bricks and projects
poly check  # validate workspace integrity
poly test   # run all tests affected by recent changes
poly deps   # show dependency graph
poly libs   # show library usage
poly shell  # interactive shell with autocomplete

Where to go next:
- poly create component name:parser to add a component for argument parsing
- git tag stable-main after a clean poly test
- run poly check and poly test in your pipeline; tag as stable on success
Building a workflow engine for infrastructure operations is not trivial. Most people start with a simple mental model: a desired state and a sequence of functions that produce side effects. In Clojure, this looks like a simple thread-first macro:
(-> {} fn1 fn2 ...)

Your state {} is threaded through fn1 and fn2. However, real-world operations are rarely linear. They require complex branching, error handling, and conditional jumps (e.g., “if success, continue; otherwise, jump to cleanup”).
To handle non-linear flows, we associate functions with qualified keywords (steps). Together with the next step, they form the “wiring”. You can override sequential execution by providing a next-fn to handle custom branching.
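To make the wiring idea concrete, here is a minimal, self-contained sketch (the names run-workflow and the toy steps are hypothetical, not BigConfig's API; in BigConfig the second argument to wire-fn carries the step functions):

```clojure
(defn run-workflow
  "Minimal driver in the spirit of the engine described here:
   wire-fn returns [f next-step] for a step; next-fn may override
   the default next step to implement branching."
  [{:keys [first-step wire-fn next-fn]} opts]
  (loop [step first-step opts opts]
    (let [[f next-step] (wire-fn step nil)
          new-opts (f opts)
          [next-step next-opts] (next-fn step next-step new-opts)]
      (if next-step
        (recur next-step next-opts)
        next-opts))))

;; A toy two-step workflow: increment :n, then double it.
(def toy
  {:first-step ::inc
   :wire-fn (fn [step _]
              (case step
                ::inc    [#(update % :n inc) ::double]
                ::double [#(update % :n * 2) ::end]
                ::end    [identity nil]))
   :next-fn (fn [step next-step opts]
              (if (= step ::end) [nil opts] [next-step opts]))})

(run-workflow toy {:n 1})
;; => {:n 4}
```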
The core execution loop looks like this:
(loop [step first-step opts opts]
  (let [[f next-step] (wire-fn step step-fns)
        new-opts (f opts)
        [next-step next-opts] (next-fn step next-step new-opts)]
    (if next-step
      (recur next-step next-opts)
      next-opts)))

Here is how we use this engine to create a client-side lock for Terraform using Git tags. The opts map represents our “World State”, shared across all functions.
We invoke it like this: (lock [] {}). The first argument is a list of middleware-style step functions, and the second is the starting state.
(->workflow
 {:first-step ::generate-lock-id
  :wire-fn (fn [step _]
             (case step
               ::generate-lock-id [generate-lock-id ::delete-tag]
               ::delete-tag [delete-tag ::create-tag]
               ::create-tag [create-tag ::push-tag]
               ::push-tag [push-tag ::get-remote-tag]
               ::get-remote-tag [(comp get-remote-tag delete-tag) ::read-tag]
               ::read-tag [read-tag ::check-tag]
               ::check-tag [check-tag ::end]
               ::end [identity]))
  :next-fn (fn [step next-step opts]
             (case step
               ::end [nil opts]
               ::push-tag (choice {:on-success ::end
                                   :on-failure next-step
                                   :opts opts})
               ::delete-tag [next-step opts]
               (choice {:on-success next-step
                        :on-failure ::end
                        :opts opts})))})

In many CI/CD systems, debugging is a nightmare of “print” statements and re-running 10-minute pipelines. Because Clojure data structures are immutable and persistent, we can use a debug macro provided by BigConfig and a “spy” function to inspect the state at every step.
(comment
  (debug tap-values
         (create [(fn [f step opts]
                    (tap> [step opts]) ;; "Spy" on every state change
                    (f step opts))]
                 {::bc/env :repl
                  ::tools/tofu-opts (workflow/parse-args "render")
                  ::tools/ansible-opts (workflow/parse-args "render")})))

Using tap>, you get the result “frozen in time”. You can render templates and inspect them without ever executing a side effect.
Operations often require calling the same sub-workflow multiple times. If every workflow uses the same top-level keys, they clash. We solve this with Nested Options.
By using the workflow’s namespace as a key, we isolate state. However, sometimes a child needs data from a sibling (e.g., Ansible needs an IP address generated by Terraform). We use an opts-fn to map these values explicitly at runtime.
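A minimal sketch of the idea, with hypothetical keys (the namespaces my.app.terraform and my.app.ansible are illustrative, not BigConfig's): each sub-workflow's state lives under its own namespaced key, and an opts-fn bridges siblings at runtime.

```clojure
;; Parent state: each child workflow owns a namespaced key,
;; so their top-level entries can never clash.
(def parent-opts
  {:my.app.terraform/output {:ip "10.0.0.7"}
   :my.app.ansible/config   {}})

;; An opts-fn explicitly maps a sibling's value into a child's slice:
;; here, the Terraform-generated IP becomes the Ansible host.
(defn ansible-opts-fn [opts]
  (assoc-in opts [:my.app.ansible/config :host]
            (get-in opts [:my.app.terraform/output :ip])))

(get-in (ansible-opts-fn parent-opts) [:my.app.ansible/config :host])
;; => "10.0.0.7"
```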
The specialized ->workflow* constructor uses this next-fn to manage this state isolation:
(fn [step next-step {:keys [::bc/exit] :as opts}]
  (if (steps-set step)
    (do (swap! opts* merge (select-keys opts [::bc/exit ::bc/err]))
        (swap! opts* assoc step opts))
    (reset! opts* opts))
  (cond
    (= step ::end) [nil @opts*]
    (> exit 0) [::end @opts*] ;; Error handling jump
    :else [next-step
           (let [[new-opts opts-fn]
                 (get step->opts-and-opts-fn next-step [@opts* identity])]
             (opts-fn new-opts))]))

This logic ensures that if a step is a sub-workflow, its internal state is captured within the parent’s state under its own key. The opts-fn allows us to bridge the gap: for instance, pulling a Terraform-generated IP address into the Ansible configuration dynamically.
In operations, you must render configuration files before invoking tools. If you compose multiple workflows, you run into the “Maven Diamond Problem”: two different parent workflows sharing the same sub-workflow. To prevent them from overwriting each other’s files, we use dynamic, hashed prefixes for working directories:
.dist/default-f704ed4d/io/github/amiorin/alice/tools/ansible
The hash f704ed4d is dynamic. If a workflow is moved or re-composed, the hash changes, ensuring total isolation during template rendering.
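The directory scheme can be sketched as follows (the hash function here is illustrative; BigConfig's actual derivation is not shown in this article). The point is only that the prefix is a deterministic function of the workflow's composition path, so re-composing a workflow yields a different directory.

```clojure
(defn work-dir
  "Derive an isolated working directory for a sub-workflow from its
   composition path. Two parents sharing the same sub-workflow get
   different paths, avoiding the 'Maven Diamond Problem' for files."
  [composition-path tool-ns]
  (let [h (format "%08x" (bit-and (hash composition-path) 0xffffffff))]
    (str ".dist/default-" h "/" tool-ns)))

(work-dir [:prod ::ansible] "io/github/amiorin/alice/tools/ansible")
;; returns a path like ".dist/default-<8 hex chars>/io/github/amiorin/alice/tools/ansible"
```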
Tools like AWS Step Functions, Temporal, or Restate are powerful workflow engines, but for many operational tasks they are not a good fit. BigConfig has an edge because it is local and synchronous where it counts. It turns infrastructure into a local control loop orchestrating multiple tools.
In the industry, “Easy” (using the same language as the backend, like Go) often wins over “Simple”. But Go lacks a REPL, immutable data structures, and the ability to implement a debug macro that allows for instantaneous feedback.
Infrastructure eventually becomes a mess of “duct tape and prayers” when the underlying tools aren’t built for complexity. If you choose Simple over Easy, Clojure is the best language for operations—even if you’re learning Clojure for the first time.
Would you like to have a follow-up on this topic? What are your thoughts? I’d love to hear your experiences.
Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).
Do you use clojure for Data Science? Please take the survey. Your responses will help shape the future of the Noj toolkit and the Data Science ecosystem in Clojure.
The results of the 2025 State of Clojure Survey are now available. Thank you to everyone who participated!
Also, a big thanks to the many folks in the community who helped make the survey possible by providing feedback, suggesting questions, and recruiting others to participate.
Check out the video discussion of the results. It includes many topics, such as: where Clojure is being used around the world, what was surprising, the experience level of the community, who Clojure attracts, how Clojure fits in with other languages, and just how much developers love Clojure.
On February 10, the Clojure team hosted our first Clojure Dev Call!
Watch the recording to hear what the team has been working on and what’s on the horizon. Stick around until the end to hear the community Q&A.
Clojurists Together has opened the Q2 2026 funding round for open-source Clojure projects. Applications will be accepted through March 19th.
Read the announcement for more details.
Clojure Jam 2026: Postponed. Read why.
Clojure real-world-data 50: Mar 13
Babashka Conf: May 8. Amsterdam, NL. See the schedule.
Dutch Clojure Days 2026: May 9th. Amsterdam, NL. See the schedule.
Clojure Core Team Dev Call, Feb 2026 - ClojureTV
2025 Clojure Survey: Insights, Surprises, and What Really Matters - ClojureTV
Lexical Complexity in Software Engineering (by Samantha Cohen) - London Clojurians
Scicloj AI Meetup 13: Agent-o-rama - Sci Cloj
Apropos with Michiel Borkent - Borkdude! Feb 17, 2026 - apropos clojure
Broader Implications of AI - panel discussion - Macroexpand 2025-10-25 - Sci Cloj
Coding in Arabic with Clojure - Clojure Diary
Clojure Notebooks - Clojure Diary
Ridley — 3D Modeling with Turtle Graphics and Code - Vincenzo Piombo
BigConfig: Escape the YAML trap - Alberto Miorin
Transactional Event Sourcing with Clojure and Sqlite - Max Weber
Wrapper’s in Clojure Ring - Clojure Diary
Test Driven Development with Clojure and Midje - Jan Wedekind
Call for Proposals. Feb. 2026 Survey - Kathy Davis
(nth (concat) 6) - Ana Carolina, Arthur Fücher
Your CI/CD Pipeline Deserves Better Than YAML: Introducing MonkeyCI - Wout Neirynck
On Dyslexia, Programming and Lisp. — Relections on Software Engineering - Ivan Willig
New ClojureStream - Changelog - ClojureStream - ClojureStream
Creating long-term value with Clojure - Solita - Matti Uusitalo
ClojureScript Guide: Why Modern Devs Need It Now ( 2026 Edition) - Jiri Knesl
Tetris-playing AI the Polylith way - Part 3 - Joakim Tengstrand
Babashka 1.12.215: Revenge of the TUIs - Michiel Borkent
LLMe - Michael Fogus
Connecting Clojure-MCP to Alternative LLM APIs – Clojure Civitas - Matthias Buehlmaier, Annie Liu
Pull Playground - Interactive Pattern Learning - Loic Blanchard
Managing Web App Modes with Fun-Map in Clojure - Loic Blanchard
Comparison of hiccup libraries - Max Rothman
Simple Made Inevitable: The Economics of Language Choice in the LLM Era - Felix Barbalet
Reconstructing Biscuit in Clojure - Şeref Ayar
One year of LLM usage with Clojure — Relections on Software Engineering - Ivan Willig
Introducing Gloat and Glojure - GloatHub - Ingy dot Net
Introducing BigConfig Package - Alberto Miorin
2 Introduction to Supervised Machine Learning with metamorph.ml – metamorph.ml topics - Carsten Behring
Managing Complexity with Mycelium - Dmitri Sotnikov
The YAML Trap: Escaping Greenspun’s Tenth Rule with BigConfig - Alberto Miorin
metamorph tutorial - Carsten Behring
Stratum: SQL that branches - Christian Weilbach
Clojure + NumPy Interop: The 2026 Guide to Hybrid Machine Learning Pipelines - Jiri Knesl
Why Gaiwan Loves the Predictive Power of Universal Conventions - Gaiwan
Composable Plotting in Clojure – Clojure Civitas - Daniel Slutsky
Codex in the REPL - Vlad Protsenko
What’s Next for clojure-mode? - Bozhidar Batsov
Browse your live Clojure objects in a web UI - Dustin Getz
OSS updates January and February 2026 - Michiel Borkent
jank is off to a great start in 2026 - Jeaye Wilkerson
Run a REPL in a MonkeyCI job - Wout Neirynck
Postponing Clojure Jam 2026 - Daniel Slutsky
Just What IS Clojure, Anyway? - Dimension AI Technologies
Universal Infrastructure: Solving the Portability Gap with BigConfig - Alberto Miorin
Composability: Orchestrating Infrastructure with Babashka and BigConfig Package - Alberto Miorin
Debut release
tools.deps.edn - Reader for deps.edn files
cream - Fast starting Clojure runtime built with GraalVM native-image + Crema
gloat - Glojure AOT Tool
patcho - Patching micro lib for Clojure
coll-tracker - Track which keys and indices of deep data structures are accessed.
inst - Clojure time library that always returns a #inst.
r11y - CLI tool for extracting URLs as Markdown
leinpad - launchpad for leiningen
bb-depsolve - Generic monorepo dependency sync, upgrade & reporting for babashka/Clojure
sqlatom - Clojure library that stores atoms in a SQLite database
ruuter - A zero-dependency, runtime-agnostic router.
briefkasten - A mail client that can sync and index with Datahike and Scriptum (Lucene).
zsh-clj-shell - Clojure (Babashka) shell integration for Zsh
icehouse - Icehouse tabletop game
neanderthal-blas-like - BLAS-like Extensions for Neanderthal, Fast Clojure Matrix Library
avatar-maker - GitHub - avidrucker/avatar-maker
icd11-export - Turtle export of ICD-11
mycelium - Mycelium uses Maestro state machines and Malli contracts to define "The Law of the Graph," providing a high-integrity environment where humans architect and AI agents implement.
hyper - Reactive server-rendered web framework for Clojure
awesome-clojure-llm - Concise, curated resources for working with the Clojure Programming and LLM base coding agents
stratum - Versioned, fast and scalable columnar database.
any - Objects for smart comparison in tests.
sankyuu-template-clj - A clojure project utilizing lwjgl + assimp + opengl + imgui to render glTF models and MMD models.
epupp - A web browser extension that lets you tamper with web pages, live and/or with userscripts.
clj-yfinance - Fetch prices, historical OHLCV, dividends, splits, earnings dates, fundamentals, analyst estimates and options from Yahoo Finance. Pure Clojure + built-in Java 11 HttpClient, no API key, no Python.
ecbjure - Access ECB financial data from Clojure — FX conversion, EURIBOR, €STR, HICP, and the full SDMX catalogue
brepl-opencode-plugin - brepl integration for OpenCode - automatic Clojure syntax validation, auto-fix brackets, and REPL evaluation.
lalinea - linear algebra with dtype-next tensors
superficie - Surface syntax for Clojure to help exposition/onboarding.
kaven - A Clojure API for interacting with Maven repositories
igor - Constraint Programming for Clojure
Updates
tools.deps 0.29.1598 - Deps as data and classpath generation
clojure_cli 1.12.4.1618 - Clojure CLI
core.cache 1.2.263 - A caching library for Clojure implementing various cache strategies
core.memoize 1.2.281 - A manipulable, pluggable, memoization framework for Clojure
pathling 0.2.1 - Utilities for scanning and updating data structures
scoped 0.1.16 - ScopedValue in Clojure, with fallback to ThreadLocal
dompa 1.2.3 - A zero-dependency, runtime-agnostic HTML parser and builder.
persistent-sorted-set 0.4.119 - Fast B-tree based persistent sorted set for Clojure/Script
pocket 0.2.4 - filesystem-based caching of expensive computations
contajners 1.0.8 - An idiomatic, data-driven, REPL friendly clojure client for OCI container engines
hive-mcp 0.13.0 - MCP server for hive-framework development. A memory and agentic coordination solution.
basic-tools-mcp 0.2.1 - Standalone babashka MCP server wrapping clojure-mcp-light — delimiter repair, nREPL eval, cljfmt formatting as IAddon tools
bb-mcp 0.4.0 - Lightweight MCP server in Babashka (~50MB vs ~500MB JVM)
clj-kondo-mcp 0.1.1 - Standalone MCP server for clj-kondo static analysis (Babashka + JVM)
lsp-mcp 0.2.1 - Clojure LSP analysis MCP server — standalone babashka or JVM addon for hive-mcp
qclojure-braket 0.3.0 - AWS Braket backend for QClojure
statecharts 1.3.0 - A Statechart library for CLJ(S)
fulcro 3.9.3 - A library for development of single-page full-stack web applications in clj/cljs
tableplot 1-beta16 - Easy layered graphics with Hanami & Tablecloth
cljd-video-player 1.3 - A reusable ClojureDart video player package with optional background audio service
fulcro-spec 3.2.8 - A library that wraps clojure.test for a better BDD testing experience.
drawbridge 0.3.0 - An HTTP/HTTPS nREPL transport, implemented as a Ring handler.
yggdrasil 0.2.20 - Git-like, causal space-time lattice abstraction over systems supporting this memory model.
hirundo 1.0.0-alpha211 - Helidon 4.x - RING clojure adapter
kit 2026-02-18 - Lightweight, modular framework for scalable web development in Clojure
clojure-lsp 2026.02.20-16.08.58 - Clojure & ClojureScript Language Server (LSP) implementation
neanderthal 0.61.0 - Fast Clojure Matrix Library
diamond-onnxrt 0.24.0 - Fast Clojure Machine Learning Model Integration
splint 1.23.1 - A Clojure linter focused on style and code shape.
metamorph.ml 1.3.0 - Machine learning functions based on metamorph and machine learning pipelines
aws-simple-sign 2.3.1 - A Clojure library for pre-signing S3 URLs and signing HTTP requests for AWS.
clojurecuda 0.27.0 - Clojure library for CUDA development
nrepl 1.6.0 - A Clojure network REPL that provides a server and client, along with some common APIs of use to IDEs and other tools that may need to evaluate Clojure code in remote environments.
inf-clojure 3.4.0 - Basic interaction with a Clojure subprocess from Emacs
calva 2.0.563 - Clojure & ClojureScript Interactive Programming for VS Code
clay 2.0.12 - A REPL-friendly Clojure tool for notebooks and datavis
clj-media 3.0-alpha.3 - Read, write, and transform audio and video with Clojure.
pp 2026-03-01.107 - Peppy pretty-printer for Clojure data.
rewrite-clj 1.2.52 - Rewrite Clojure code and edn
portfolio 2026.03.1 - Component-driven development for Clojure
transit-java 1.1.401-alpha - transit-format implementation for Java
transit-clj 1.1.354-alpha - transit-format implementation for Clojure
babashka 1.12.216 - Native, fast starting Clojure interpreter for scripting
babashka-sql-pods 0.1.5 - Babashka pods for SQL databases
clojure-mode 5.22.0 - Emacs support for the Clojure(Script) programming language
datalevin 0.10.7 - A simple, fast and versatile Datalog database
ridley 1.8.0 - A turtle graphics-based 3D modeling tool for 3D printing. Write Clojure scripts, see real-time 3D preview, export STL. WebXR support for VR/AR visualization.
deps-new 0.11.1 - Create new projects for the Clojure CLI / deps.edn
malli 0.20.1 - High-performance data-driven data specification library for Clojure/Script.
instaparse-bb 0.0.7 - Use instaparse from babashka
clojure.jdbc 0.9.2 - JDBC library for Clojure
get-port 0.2.0 - Find available TCP ports for your Clojure apps and tests.
plumcp 0.2.0-beta2 - Clojure/ClojureScript library for making MCP server and client
kmono 4.11.1 - The missing workspace tool for clojure tools.deps projects
proletarian 1.0.115 - A durable job queuing and worker system for Clojure backed by PostgreSQL or MySQL.
monkeyci 0.24.2 - Next-generation CI/CD tool that uses the full power of Clojure!
hulunote 1.1.0 - An open-source outliner note-taking application with bidirectional linking.
beichte 0.2.6 - Static purity and effect analysis for Clojure.
reitit 0.10.1 - A fast data-driven routing library for Clojure/Script
thneed 1.1.8 - An eclectic set of Clojure utilities that I’ve found useful enough to keep around.
eca 0.112.0 - Editor Code Assistant (ECA) - AI pair programming capabilities agnostic of editor
Look at this line of code:
processCustomerOrder(customer, orderItems)
Any developer with six months of experience knows roughly what that does. The name is explicit, the structure is familiar, the intent is readable. Now look at this:
(reduce + (map f xs))
The reaction most developers have is immediate and unfavourable. Parentheses everywhere. No obvious structure. It looks less like a programming language and more like a typographer's accident. The old joke writes itself: LISP stands for Lost In Stupid Parentheses.
That joke is, technically, a backronym. John McCarthy named it LISP as a contraction of LISt Processing when he created it in 1958. The sardonic expansion came later, coined by programmers who had opinions about the aesthetic choices involved. Those opinions have not mellowed with time.
And yet Clojure – a modern descendant of Lisp – ranked as one of the highest-paying languages in the Stack Overflow Developer Survey for several consecutive years around 2019. Developers walked away from stable Java and C# positions to build production systems in it. A Brazilian fintech used it to serve tens of millions of customers. Something requires explaining.
Clojure only makes sense against the background of Lisp, and Lisp only makes sense as what it actually was: not merely a programming language, but a direct implementation of mathematical ideas about computation.
McCarthy's 1958 creation introduced concepts that took the rest of the industry decades to absorb. Garbage collection, conditional expressions, functional programming, symbolic computation – all present in Lisp before most working developers today were born. Many programmers encounter Lisp's descendants daily without being aware of it.
The defining feature is the S-expression:
(+ 1 2)
Everything is written as a list. This is not merely a syntactic preference. Because code and data share the same underlying structure, a Lisp program can manipulate other programs directly. This property – homoiconicity – is the technical foundation of Lisp macros: code that generates and transforms other code at compile time, with a flexibility that few conventional infix languages match. It is the reason serious Lisp practitioners regard the syntax not as a historical curiosity but as a genuine technical advantage.
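A short illustration of what homoiconicity buys in practice (the `unless` macro is a standard textbook example, not part of clojure.core):

```clojure
;; Code is data: this program is an ordinary list that can be
;; inspected, transformed, and evaluated like any other data.
(def form '(+ 1 2))
(first form)                 ;; => + (a symbol)
(eval form)                  ;; => 3
(eval (cons '* (rest form))) ;; => 2, after swapping the operator

;; Macros exploit this: `unless` rewrites code before compilation.
(defmacro unless [test then]
  `(if ~test nil ~then))

(unless false :ok)           ;; => :ok
```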
Lisp also, however, developed a reputation for producing work that individual experts could write brilliantly and teams could not maintain at all. The tension between expressive power and collective readability never fully resolved. Clojure inherits this tradition knowingly, and is aware of the cost.
Rich Hickey created Clojure in 2007. His central design decision was not to build a new runtime from scratch but to attach Lisp to an existing ecosystem.
| Layer | Technology |
|---|---|
| Runtime | JVM |
| Libraries | Java ecosystem |
| Language model | Lisp |
This host strategy gave Clojure immediate access to decades of mature Java libraries without needing to rebuild any of them. A Clojure developer can call Java code directly. The same logic drove two later variants: ClojureScript, which compiles to JavaScript and found real traction in teams already working with React, and ClojureCLR, which runs on .NET. Rather than fight the unwinnable battle of building its own ecosystem from scratch, Clojure attached itself to three of the largest ones that already existed.
Clojure does not attempt to displace existing ecosystems. It operates inside them.
Central to how Clojure development actually works is the REPL – Read–Eval–Print Loop. Rather than the standard write–compile–run–crash cycle, developers send code fragments to a running system and modify it live. Functions are redefined while the application continues executing. For experienced practitioners this is a material productivity difference: the feedback loop is short, and the distance between an idea and a tested result is small. Experienced Clojure developers report unusually low defect rates, a claim that is plausible given the constraints immutability places on the ways a programme can fail.
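A minimal sketch of what this looks like in a REPL session (the function name is hypothetical):

```clojure
;; Re-evaluating a defn swaps the implementation inside the running
;; process; existing callers pick up the new version immediately.
(defn greet [name] (str "Hello, " name))
(greet "Ada")  ;; => "Hello, Ada"

;; Re-evaluate with a change -- no restart, no compile-run cycle:
(defn greet [name] (str "Hello, " name "!"))
(greet "Ada")  ;; => "Hello, Ada!"
```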
Hickey's 2011 Strange Loop talk Simple Made Easy is the philosophical engine behind every design choice in Clojure. It draws a distinction that most language design ignores.
| Term | Meaning |
|---|---|
| Easy | Familiar; close to what you already know |
| Simple | Not intertwined; concerns kept separate |
Most languages pursue easy. They aim to resemble natural language, minimise cognitive friction at the point of learning, and reduce the effort required to write the first working programme. A corollary: the languages humans find most readable tend to be the hardest to write parsers and compilers for.
Clojure instead pursues simple. Its goal is to minimise tangled interdependencies in the resulting system, even at the cost of an unfamiliar surface. Writing parsers for Lisps is comparatively straightforward, at the cost of human readability.
Hickey's specific target is what he calls place-oriented programming: the treatment of variables as named locations in memory whose values change over time – mutability, in more formal terms. His argument is that conflating a value with a location generates incidental complexity at scale, particularly in concurrent systems. When you cannot be certain what a variable contains at a given moment, reasoning about a programme becomes difficult in proportion to the programme's size.
The design of Clojure follows directly from this diagnosis. Immutable data, functional composition, minimal syntax, and data structures in place of object hierarchies are all consequences of the same underlying position. The language may not feel easy. The resulting systems are intended to be genuinely simpler to reason about.
Clojure's core model is data-oriented. Rather than building class hierarchies, programmes pass simple structures through functions:
(assoc {:name "Alice" :age 30} :city "London")
This creates a new map. The original is untouched. That is the default behaviour across all of Clojure's data structures – values do not change; new versions are produced instead.
This is made practical by persistent data structures, which use structural sharing. When a new version of a data structure is produced, it shares most of its internal memory with the previous version rather than copying it entirely. The comparison that makes this intuitive for most developers: Git does not delete your previous commits when you push a new one. It stores only the difference, referencing unchanged content from before. Clojure applies the same principle to in-memory data.
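Structural sharing is directly observable from the REPL: the "new" map is a distinct value, but unchanged sub-structures are literally the same objects in memory.

```clojure
(def alice {:name "Alice" :age 30})
(def alice-in-london (assoc alice :city "London"))

alice            ;; => {:name "Alice" :age 30}  -- original untouched
alice-in-london  ;; => {:name "Alice" :age 30 :city "London"}

;; identical? checks reference equality: the nested :prefs map is
;; shared between the old and new versions, not copied.
(def profile {:prefs {:theme :dark}})
(identical? (:prefs profile)
            (:prefs (assoc profile :name "Bob")))  ;; => true
```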
The consequence for concurrency results directly from this. Race conditions require mutable shared state. If data cannot be mutated, the precondition for the most common class of concurrency bug does not exist. This was Clojure's most compelling practical argument during the multicore boom of the 2010s, when writing correct concurrent code had become a routine industrial concern rather than a specialist one. Clojure let developers eliminate that entire class of problem.
Between roughly 2012 and 2020, functional programming moved from academic discussion to genuine industry interest. The drivers were concrete: multicore processors created pressure to write concurrent code correctly; distributed data systems required reasoning about transformation pipelines rather than mutable state; and the sheer complexity of large-scale software made the promise of mathematical rigour appealing.
Clojure was among the most visible representatives of this movement, alongside Haskell, Scala, and F#. Conference talks filled. Engineering blogs ran long series on immutability and monads. For a period it seemed plausible that functional languages might displace the mainstream ones.
What actually happened was different. Mainstream languages absorbed the useful ideas and continued. And the majority of working programmers, it turned out, rarely needed to reason about threading and concurrency at all.
Java gained streams and lambdas in Java 8. JavaScript acquired map, filter, and reduce as first-class patterns, and React popularised unidirectional data flow. C# extended its functional capabilities across successive versions. Rust built immutability and ownership into its type system from the outset. The industry did not convert to functional programming – it extracted what it needed and kept the syntax it already knew.
A developer who can obtain most of functional programming's benefits inside a language they already know will rarely conclude that switching entirely is justified.
The deeper reason functional languages lost the mainstream argument is not technical. It is sociological. Python won because it is, in the most precise sense, the Visual Basic of the current era. That comparison is not an insult – Visual Basic dominated the 1990s because it made programming accessible to people who had no intention of becoming professional developers, and that accessibility produced an enormous, self-reinforcing community. Python did exactly the same thing for data scientists, academics, hobbyists, and beginners, and for precisely the same reason: it is easy to learn, forgiving of error, and immediately rewarding to write. Network effects took care of the rest. Libraries multiplied. Courses proliferated. Employers specified it. The ecosystem became self-sustaining.
Clojure is the antithesis of this process. It is a language for connoisseurs – genuinely, not dismissively. Its internal consistency is elegant, its theoretical foundations are sound, and developers who master it frequently describe it with something approaching aesthetic appreciation. Mathematical beauty, however, has never been a reliable route to mass adoption. Narrow appeal does not generate network effects. And Clojure, by design, operates as something of a lone wolf: it rides atop the JVM rather than integrating natively with the broader currents of modern computing – the web-first tooling, the AI infrastructure, the vast collaborative ecosystems built around Python and JavaScript. At a moment when the decisive advantages in software development come from connectivity, interoperability, and the accumulated weight of shared tooling, a language that demands a clean break from everything a developer already knows is swimming directly against the tide.
Compare this with Kotlin or TypeScript, both of which succeeded in part because they offered a graduated path. A developer new to Kotlin can write essentially Java-style code and improve incrementally. A developer new to TypeScript can begin with plain JavaScript and add types as confidence grows. Both languages have, in effect, a beginner mode. Clojure has no such thing. You either think in Lisp or you do not write Clojure at all.
Despite remaining a specialist language, Clojure has real industrial presence.
The most prominent example is Nubank, a Brazilian fintech that reached a valuation of approximately $45 billion at its NYSE listing in December 2021. Nubank runs significant portions of its backend in Clojure, and in 2020 acquired Cognitect – the company that stewards the language. That acquisition was considerably more than a gesture; it was a statement of long-term commitment from an organisation operating at scale.
ClojureScript found parallel influence in the JavaScript ecosystem. The Reagent and re-frame frameworks attracted serious production use, demonstrating that the Clojure model could be applied to front-end development at scale and not merely to backend data pipelines.
The pattern that emerges from successful Clojure deployments is consistent: small, experienced teams working on data-intensive systems where correctness and concurrency matter more than onboarding speed. That is a narrow niche. It was also, not coincidentally, a well-paid one – for a time.
Clojure did not become a mainstream language. By any measure of adoption – survey rankings, job advertisements, GitHub repositories – it remains firmly in specialist territory. Even F#, a functional rival with the full weight of Microsoft's backing, has not broken through.
But the argument Clojure made in 2007 has largely been vindicated. Immutability is now a design principle in Rust, Swift, and Kotlin. Functional composition is standard across modern JavaScript and C#. Data-oriented design has become an explicit architectural pattern in game development and systems programming. The industry did not adopt Clojure, but it has been grateful for Hickey's ideas and has quietly absorbed them.
What did not transfer was the syntax – and behind the syntax lay an economic problem that no philosophical vindication could resolve.
A CTO evaluating a language does not ask only whether it is technically sound. The questions are: how large is the available talent pool? How long does onboarding take? What happens when a key developer leaves? Clojure's answers to all three were uncomfortable.
There is a further cost that rarely appears in language comparisons. A developer with ten years of experience in Java, C#, or Python carries genuine accumulated capital: hard-won familiarity with idioms, libraries, failure modes, and tooling. Switching to a Lisp-derived language does not extend that knowledge – it resets it. Clojure keeps the JVM underneath but discards almost everything a developer has learned about how to structure solutions idiomatically. The ten-year veteran spends their first six months feeling like a junior again. Recursion replaces loops. Immutable pipelines replace stateful objects. The mental models that took years to build are, at best, partially transferable. That cost is real and largely invisible in adoption discussions, and it falls on precisely the experienced developers an organisation most wants to retain. Knowledge compounds most effectively when it is built upon incrementally. Clojure does not permit that. It demands a clean break, and most organisations and most developers are not willing to pay that price.
The high wages Clojure commanded were not, from a management perspective, a straightforward mark of quality. They were also a warning of risk. They reflected something less flattering than productivity: the classic dynamic of the expert who becomes indispensable by writing systems that only they can maintain. At its worst this approaches a form of institutional capture – a codebase so entangled with one person's idiom that replacing them becomes prohibitively expensive, something uncomfortably close to ransomware in its commercial effect.
That position has been further undermined by the rise of agentic coding tools. The practical value of writing in a mainstream language has quietly increased, because AI coding assistants are trained on the accumulated body of code that exists – and that body is overwhelmingly Python, JavaScript, Java, and C#. The effect is concrete: ask a capable model to produce a complex data transformation in Python and it draws on an enormous foundation of high-quality examples. Ask it to do the same in idiomatic Clojure and the results are less reliable, the suggestions thinner, the tooling shallower. A language's effective learnability in 2026 is no longer a matter only of human cognition; it is also a function of training density. Niche languages are niche in the training data too, and that gap compounds. The expert moat – already questionable on organisational grounds – is being drained from two directions at once.
Clojure's ideas spread quietly through the languages that absorbed them and left the parentheses behind. Its practitioners, once among the best-paid developers in the industry, now find that the scarcity premium they commanded rested partly on barriers that no longer hold.
The language was right about the future of programming. It simply will not be present when that future arrives.
So, just what is Clojure, anyway? It is a language that was correct about the most important questions in software design, arrived a decade before the industry was ready to hear the answers, and expressed those answers in a notation the industry was never willing to learn. That is not a small thing. It is also not enough.
This article is part of an ongoing series examining what programming languages actually are and why they matter.
| Language | Argument |
|---|---|
| C | The irreplaceable foundation |
| Python | The approachable language |
| Rust | Safe systems programming |
| Clojure | Powerful ideas, niche language |
Coming next: Zig, Odin, and Nim – three languages that think C's job could be done better, and have very different ideas about how.
In this post I'll give updates about open source I worked on during January and February 2026.
To see previous OSS updates, go here.
I'd like to thank all the sponsors and contributors that make this work possible. Without you, the projects below would not be as mature, or wouldn't exist or be maintained at all! So a sincere thank you to everyone who contributes to the sustainability of these projects.

Current top tier sponsors:
Open the details section for more info about sponsoring.
If you want to ensure that the projects I work on are sustainably maintained, you can sponsor this work in the following ways. Thank you!
Babashka Conf 2026 is happening on May 8th in the OBA Oosterdok library in Amsterdam! David Nolen, primary maintainer of ClojureScript, will be our keynote speaker! We're excited to have Nubank, Exoscale, Bob and Itonomi as sponsors. Wendy Randolph will be our event host / MC / speaker liaison :-). The CfP is now closed. More information here. Get your ticket via Meetup.com (there is a waiting list, but more places may become available). The day after babashka conf, Dutch Clojure Days 2026 will be happening, so you can enjoy a whole weekend of Clojure in Amsterdam. Hope to see many of you there!
I spent a lot of time making SCI's deftype, case, and macroexpand-1 match JVM Clojure more closely. As a result, libraries like riddley, cloverage, specter, editscript, and compliment now work in babashka.
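For a concrete sense of what "matching JVM Clojure" means here (my illustration, not from the post): code walkers like riddley recognise the special forms that macros bottom out in, so SCI producing the same expansion shapes is what makes those libraries work.

```clojure
;; On JVM Clojure, `case` is a macro whose expansion bottoms out in
;; the `case*` special form (wrapped in a let binding the tested
;; expression). Tools like riddley and cloverage walk code by
;; recognising such special forms; SCI now produces the same shape.
(macroexpand '(case x 1 :one :other))
;; the result is a let* form that contains a case* form
```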
After seeing charm.clj, a terminal UI library, I decided to incorporate JLine3 into babashka so people can build terminal UIs. Since I had JLine anyway, I also gave babashka's console REPL a major upgrade with multi-line editing, tab completion, ghost text, and persistent history. A next goal is to run rebel-readline + nREPL from source in babashka, but that's still work in progress (e.g. the compliment PR is still pending).
I've been working on async/await support for ClojureScript (CLJS-3470), inspired by how squint handles it. I also implemented it in SCI (scittle, nbb etc. use SCI as a library), though the approach there is different since SCI is an interpreter.
Last but not least, I started cream, an experimental native binary that runs full JVM Clojure with fast startup using GraalVM's Crema. Unlike babashka, it supports runtime bytecode generation (definterface, deftype, gen-class). It currently depends on a fork of Clojure and GraalVM EA, so it's not production-ready yet.
Here are updates about the projects/libraries I've worked on in the last two months in detail.
NEW: cream: Clojure + GraalVM Crema native binary
- eval, require, and library loading
- definterface, deftype, gen-class, and other constructs that generate JVM bytecode at runtime
- running .java source files directly, as a fast alternative to JBang

babashka: native, fast starting Clojure interpreter for scripting.
- REPL (bb repl) improvements: multi-line editing, tab completion, ghost text, eldoc, doc-at-point (C-x C-d), persistent history
- deftype with map interfaces (e.g. IPersistentMap, ILookup, Associative). Libraries like core.cache and linked now work in babashka.
- babashka.terminal namespace that exposes tty?
- deftype supports Object + hashCode
- reify with java.time.temporal.TemporalQuery
- reify with methods returning int/short/byte/float

SCI: Configurable Clojure/Script interpreter suitable for scripting
- deftype now macroexpands to deftype*, matching JVM Clojure, enabling code walkers like riddley
- case now macroexpands to JVM-compatible case* format, enabling tools like riddley and cloverage
- async/await in ClojureScript. See docs.
- defrecord now expands to deftype* (like Clojure), with factory fns emitted directly in the macro expansion
- macroexpand-1 now accepts an optional env map as first argument
- proxy-super, proxy-call-with-super, update-proxy and proxy-mappings
- this-as in ClojureScript

clj-kondo: static analyzer and linter for Clojure code that sparks joy.
@jramosg, @tomdl89 and @hugod have been on fire with contributions this period. Six new linters!
- :duplicate-refer, which warns on duplicate entries in :refer of :require (@jramosg)
- :aliased-referred-var, which warns when a var is both referred and accessed via an alias in the same namespace (@jramosg)
- :is-message-not-string, which warns when clojure.test/is receives a non-string message argument (@jramosg)
- :redundant-format, to warn when format strings contain no format specifiers (@jramosg)
- :redundant-primitive-coercion, to warn when primitive coercion functions are applied to expressions already of that type (@hugod)
- array, class, inst and type checking support for related functions (@jramosg)
- clojure.test functions and macros (@jramosg)
- :condition-always-true linter to check first argument of clojure.test/is (@jramosg)
- :redundant-declare, which warns when declare is used after a var is already defined (@jramosg)
- pmap and future-related functions (@jramosg)

squint: CLJS syntax to JS compiler

@tonsky and @willcohen contributed several improvements this period.
- squint.math, also available as clojure.math namespace
- compare-and-swap!, swap-vals! and reset-vals! (@tonsky)
- dotimes with _ binding (@tonsky)
- shuffle not working on lazy sequences (@tonsky)
- :require-macros with :refer now accumulate instead of overwriting (@willcohen)
- -0.0)
- prn js/undefined as nil
- yield* IIFEs

scittle: Execute Clojure(Script) directly from browser script tags via SCI
- async/await. See docs.
- js/import not using eval
- this-as
- #<Promise value> when a promise is evaluated

nbb: Scripting in Clojure on Node.js using SCI
- (js/Promise.resolve 1) ;;=> #<Promise 1>

fs - File system utility library for Clojure
clerk: Moldable Live Programming for Clojure
neil: A CLI to add common aliases and features to deps.edn-based projects.
- neil test now exits with non-zero exit code when tests fail

cherry: Experimental ClojureScript to ES6 module compiler
- :require-macros clauses with :refer now properly accumulate instead of overwriting each other

Contributions to third party projects:
- cli/cli to cli/parse-opts, bumped riddley

These are (some of the) other projects I'm involved with but little to no activity happened in the past month.

Translations: Russian
Syntax highlighting is a tool. It can help you read code faster. Find things quicker. Orient yourself in a large file.
Like any tool, it can be used correctly or incorrectly. Let’s see how to use syntax highlighting to help you work.
Most color themes have a unique bright color for literally everything: one for variables, another for language keywords, constants, punctuation, functions, classes, calls, comments, etc.
Sometimes it gets so bad one can’t see the base text color: everything is highlighted. What’s the base text color here?

The problem with that is, if everything is highlighted, nothing stands out. Your eye adapts and considers it a new norm: everything is bright and shiny, and instead of getting separated, it all blends together.
Here’s a quick test. Try to find the function definition here:

and here:

See what I mean?
So yeah, unfortunately, you can’t just highlight everything. You have to make decisions: what is more important, what is less. What should stand out, what shouldn’t.
Highlighting everything is like assigning “top priority” to every task in Linear. It only works if most of the tasks have lesser priorities.
If everything is highlighted, nothing is highlighted.
There are two main use-cases you want your color theme to address:
Use-case 1 is a direct index lookup: color → type of thing.
Use-case 2 is a reverse lookup: type of thing → color.
Truth is, most people don’t do these lookups at all. They might think they do, but in reality, they don’t.
Let me illustrate. Before:

After:

Can you see it? I misspelled return for retunr and its color switched from red to purple.
I can’t.
Here’s another test. Close your eyes (not yet! Finish this sentence first) and try to remember what color your color theme uses for class names?
Can you?
If the answer for both questions is “no”, then your color theme is not functional. It might give you comfort (as in—I feel safe. If it’s highlighted, it’s probably code) but you can’t use it as a tool. It doesn’t help you.
What’s the solution? Have an absolute minimum of colors. So few that they all fit in your head at once. For example, my color theme, Alabaster, only uses four:
That’s it! And I was able to type it all from memory, too. This minimalism allows me to actually do lookups: if I’m looking for a string, I know it will be green. If I’m looking at something yellow, I know it’s a comment.
Limit the number of different colors to what you can remember.
If you swap green and purple in my editor, it’ll be a catastrophe. If somebody swapped colors in yours, would you even notice?
What should you highlight? Something there isn’t a lot of. Remember—we want highlights to stand out. That’s why I don’t highlight variables or function calls—they are everywhere, your code is probably 75% variable names and function calls.
I do highlight constants (numbers, strings). These are usually used more sparingly and often are reference points—a lot of logic paths start from constants.
Top-level definitions are another good idea. They give you an idea of a structure quickly.
Punctuation: it helps to separate names from syntax a little bit, and you care about names first, especially when quickly scanning code.
Please, please don’t highlight language keywords. class, function, if, else, stuff like this. You rarely look for them: “where’s that if” is a valid question, but you will be looking not at the if keyword but at the condition after it. The condition is the important, distinguishing part. The keyword is not.
Highlight names and constants. Grey out punctuation. Don’t highlight language keywords.
The tradition of using grey for comments comes from the times when people were paid by line. If you have something like

of course you would want to grey it out! This is bullshit text that doesn’t add anything and was written to be ignored.
But for good comments, the situation is opposite. Good comments ADD to the code. They explain something that couldn’t be expressed directly. They are important.

So here’s another controversial idea:
Comments should be highlighted, not hidden away.
Use bold colors, draw attention to them. Don’t shy away. If somebody took the time to tell you something, then you want to read it.
Another secret nobody is talking about is that there are two types of comments: explanations and disabled code.
Most languages don’t distinguish between those, so there’s not much you can do syntax-wise. Sometimes there’s a convention (e.g. -- vs /* */ in SQL), then use it!
Here’s a real example from Clojure codebase that makes perfect use of two types of comments:
Disabled code is gray, explanation is bright yellow

Per statistics, 70% of developers prefer dark themes. Being in the other 30%, that question always puzzled me. Why?
And I think I have an answer. Here’s a typical dark theme:

and here’s a light one:

On the latter one, colors are way less vibrant. Here, I picked them out for you:
Notice how many colors there are. No one can remember that many.

This is because dark colors are in general less distinguishable and more muddy. Look at the Hue scale as we move brightness down:

Basically, in the dark part of the spectrum, you just get fewer colors to play with. There’s no “dark yellow” or good-looking “dark teal”.
Nothing can be done here. There are no magic colors hiding somewhere that have both good contrast on a white background and look good at the same time. By choosing a light theme, you are dooming yourself to a very limited, bad-looking, barely distinguishable set of dark colors.
So it makes sense. Dark themes do look better. Or rather: light ones can’t look good. Science ¯\_(ツ)_/¯
But!
But.
There is one trick you can do, that I don’t see a lot of. Use background colors! Compare:

The first one has nice colors, but the contrast is too low: letters become hard to read.
The second one has good contrast, but you can barely see colors.
The last one has both: high contrast and clean, vibrant colors. Lighter colors are readable even on a white background since they fill a lot more area. Text is the same brightness as in the second example, yet it gives the impression of clearer color. It’s all upside, really.
UI designers have known about this trick for a while, but I rarely see it applied in code editors:

If your editor supports choosing background color, give it a try. It might open light themes for you.
Italics and bold? Don’t use them. This goes into the same category as too many colors. It’s just another way to highlight something, and you don’t need too many, because you can’t highlight everything.
In theory, you might try to replace colors with typography. Would that work? I don’t know. I haven’t seen any examples.
Using italics and bold instead of colors

Some themes pay too much attention to being scientifically uniform. Like, all colors have the same exact lightness, and hues are distributed evenly on a circle.
This could be nice (to know if you have OCD), but in practice, it doesn’t work as well as it sounds:
OkLab l=0.7473 c=0.1253 h=0, 45, 90, 135, 180, 225, 270, 315

The idea of highlighting is to make things stand out. If you make all colors the same lightness and chroma, they will look very similar to each other, and it’ll be hard to tell them apart.
Our eyes are way more sensitive to differences in lightness than in color, and we should use it, not try to negate it.
Let’s apply these principles step by step and see where it leads us. We start with the theme from the start of this post:

First, let’s remove highlighting from language keywords and re-introduce base text color:

Next, we remove color from variable usage:

and from function/method invocation:

The thinking is that your code is mostly references to variables and method invocation. If we highlight those, we’ll have to highlight more than 75% of your code.
Notice that we’ve kept variable declarations. These are not as ubiquitous and help you quickly answer a common question: where does this thing come from?
Next, let’s tone down punctuation:

I prefer to dim it a little bit because it helps names stand out more. Names alone can give you the general idea of what’s going on, and the exact configuration of brackets is rarely equally important.
But you might roll with base color punctuation, too:

Okay, getting close. Let’s highlight comments:

We don’t use red here because you usually need it for squiggly lines and errors.
This is still one color too many, so I unify numbers and strings to both use green:

Finally, let’s rotate colors a bit. We want to respect nesting logic, so function declarations should be brighter (yellow) than variable declarations (blue).

Compare with what we started:

In my opinion, we got a much more workable color theme: it’s easier on the eyes and helps you find stuff faster.
I’ve been applying these principles for about 8 years now.
I call this theme Alabaster and I’ve built it a couple of times for the editors I used:
It’s also been ported to many other editors and terminals; the most complete list is probably here. If your editor is not on the list, try searching for it by name—it might be built-in already! I always wondered where these color themes come from, and now I’ve become the author of one (and I still don’t know).
Feel free to use Alabaster as is or build your own theme using the principles outlined in the article—either is fine by me.
As for the principles themselves, they worked out fantastically for me. I’ve never wanted to go back, and just one look at any “traditional” color theme gives me a scare now.
I suspect that the only reason we don’t see more restrained color themes is that people never really thought about it. Well, this is your wake-up call. I hope this will inspire people to use color more deliberately and to change the default way we build and use color themes.
I have a weird relationship with statistics: on one hand, I try not to look at it too often. Maybe once or twice a year. It’s because analytics is not actionable: what difference does it make if a thousand people saw my article or ten thousand?
I mean, sure, you might try to guess people’s tastes and only write about what’s popular, but that will destroy your soul pretty quickly.
On the other hand, I feel nervous when something is not accounted for, recorded, or saved for future reference. I might not need it now, but what if ten years later I change my mind?
Seeing your readers also helps to know you are not writing into the void. So I really don’t need much, something very basic: the number of readers per day/per article, maybe, would be enough.
Final piece of the puzzle: I self-host my web projects, and I use an old-fashioned web server instead of delegating that task to Nginx.
Static sites are popular and for a good reason: they are fast, lightweight, and fulfil their function. I, on the other hand, might have an unfinished gestalt or two: I want to feel the full power of the computer when serving my web pages, to be able to do fun stuff that is beyond static pages. I need that freedom that comes with a full programming language at your disposal. I want to program my own web server (in Clojure, sorry everybody else).
All this led me on a quest for a statistics solution that would uniquely fit my needs. Google Analytics was out: bloated, not privacy-friendly, terrible UX, Google is evil, etc.
What is going on?

Some other JS solution might’ve been possible, but still questionable: SaaS? Paid? Will they be around in 10 years? Self-host? Are their cookies GDPR-compliant? How to count RSS feeds?
Nginx has access logs, so I tried server-side statistics that feed off those (namely, Goatcounter). Easy to set up, but then I needed to create domains for them, manage accounts, monitor the process, and it wasn’t even performant enough on my server/request volume!
So I ended up building my own. You are welcome to join, if your constraints are similar to mine. This is how it looks:

It’s pretty basic, but does a few things that were important to me.
Extremely easy to set up. And I mean it as a feature.
Just add our middleware to your Ring stack and get everything automatically: collecting and reporting.
(def app
(-> routes
...
(ring.middleware.params/wrap-params)
(ring.middleware.cookies/wrap-cookies)
...
(clj-simple-stats.core/wrap-stats))) ;; <-- just add this
It’s zero setup in the best sense: nothing to configure, nothing to monitor, minimal dependency. It starts to work immediately and doesn’t ask anything from you, ever.
See, you already have your web server, why not reuse all the setup you did for it anyway?
We distinguish between request types. In my case, I am only interested in live people, so I count them separately from RSS feed requests, favicon requests, redirects, wrong URLs, and bots. Bots are particularly active these days. Gotta get that AI training data from somewhere.
RSS feeds are live people in a sense, so extra work was done to count them properly. The same reader requesting feed.xml 100 times in a day will only count as one request.
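The post doesn't show the mechanics of that deduplication; as a sketch of the idea (the function and the data shape are my assumptions, not the clj-simple-stats API), it is enough to key each feed hit by reader identity and day and count distinct keys:

```clojure
;; Hypothetical sketch: collapse repeated feed fetches from the same
;; reader into one hit per day. :ip and :user-agent stand in for
;; whatever identifies a reader in practice.
(defn daily-feed-readers [requests]
  (->> requests
       (map (fn [{:keys [ip user-agent date]}]
              [date ip user-agent]))
       distinct
       count))

(daily-feed-readers
  [{:ip "1.2.3.4" :user-agent "NetNewsWire" :date "2026-02-01"}
   {:ip "1.2.3.4" :user-agent "NetNewsWire" :date "2026-02-01"}
   {:ip "5.6.7.8" :user-agent "Feedly/1.0" :date "2026-02-01"}])
;; => 2
```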
Hosted RSS readers often report user count in User-Agent, like this:
Feedly/1.0 (+http://www.feedly.com/fetcher.html; 457 subscribers; like FeedFetcher-Google)
Mozilla/5.0 (compatible; BazQux/2.4; +https://bazqux.com/fetcher; 6 subscribers)
Feedbin feed-id:1373711 - 142 subscribers
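Since those subscriber counts are plain text inside the User-Agent, a regex is enough to recover them; a minimal Clojure sketch (my illustration — subscriber-count is not part of clj-simple-stats):

```clojure
;; Hypothetical helper: pull the "N subscribers" figure out of a
;; hosted RSS reader's User-Agent string; returns nil if absent.
(defn subscriber-count [user-agent]
  (some-> (re-find #"(\d+) subscribers" user-agent)
          second
          parse-long))

(subscriber-count
  "Feedly/1.0 (+http://www.feedly.com/fetcher.html; 457 subscribers; like FeedFetcher-Google)")
;; => 457
```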
My personal respect and thank you to everybody on this list. I see you.

Visualization is important, and so is choosing the correct graph type. This is wrong:

A continuous line suggests interpolation. It reads as if between 1 visit at 5am and 11 visits at 6am there were points with 2, 3, 5, and 9 visits in between. Maybe even 5.5 visits! That is not the case.
This is how a semantically correct version of that graph should look:

Some attention was also paid to having reasonable labels on axes. You won’t see something like 117, 234, 10875. We always choose round numbers appropriate to the scale: 100, 200, 500, 1K etc.
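The post doesn't spell out how those labels are chosen; one common approach — a sketch of the idea only, not the library's actual code — is to snap the axis maximum up to 1, 2, or 5 times a power of ten:

```clojure
;; Hypothetical sketch: round an axis maximum up to a "nice" value
;; (1, 2, or 5 times a power of ten), so labels read 100, 200, 500,
;; 1K instead of raw values like 117 or 10875.
(defn nice-ceiling [x]
  (let [mag  (Math/pow 10 (Math/floor (Math/log10 x)))
        norm (/ x mag)]                  ; x scaled into [1, 10)
    (* mag (cond (<= norm 1) 1
                 (<= norm 2) 2
                 (<= norm 5) 5
                 :else       10))))

(map nice-ceiling [117 234 10875])
;; => (200.0 500.0 20000.0)
```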
Goes without saying that all graphs have the same vertical scale and synchronized horizontal scroll.
We don’t offer much (as I don’t need much), but you can narrow reports down by page, query, referrer, user agent, and any date slice.
It would be nice to have some insights into “What was this spike caused by?”
Some basic breakdown by country would be nice. I do have IP addresses (for what they are worth), but I need a way to package GeoIP into some reasonable size (under 1 Mb, preferably; some loss of resolution is okay).
Finally, one thing I am really interested in is “Who wrote about me?” I do have referrers, only question is how to separate signal from noise.
Performance. DuckDB is a sport: it compresses data and runs column queries, so storing extra columns per row doesn’t affect query performance. Still, each dashboard hit is a query across the entire database, which at this moment (~3 years of data) sits around 600 MiB. I definitely need to look into building some pre-calculated aggregates.
One day.
Head to github.com/tonsky/clj-simple-stats and follow the instructions:

Let me know what you think! Is it usable to you? What could be improved?