Clojure 1.12.5
Clojure 1.12.5 is now available! Find download and usage information on the Downloads page.
I keep seeing people share vibe-coded apps built on TypeScript/React + Supabase — seemingly the default recommendation from Lovable or Cursor. As a Clojure programmer, I can't stay quiet about this. In an era where AI agents are deeply embedded in the development workflow, that choice carries structural hidden costs that almost nobody is talking about.
LongCodeBench research shows that Claude 3.5 Sonnet's accuracy on bug-fixing tasks drops from 29% to 3% as context grows from 32K to 256K tokens. Chroma tested 18 frontier models and found the same pattern across all of them.
Coding agents accelerate this degradation: every tool call, every file read, every error message accumulates in the context. A 30-step agent session can consume more than ten times the context of a single conversation turn.
Countless efforts are already underway to manage context from the harness-design side — but the tech stack itself has an enormous impact on context efficiency that rarely gets discussed.
An AI agent completing a task doesn't need to read the entire codebase — only the files relevant to that task. Call this set the task-relevant subgraph. The size of the subgraph is determined by the architectural design of the framework, not by the model.
The problem with TypeScript + React + Supabase is that a single feature naturally spans multiple layers — component, hook, state, API client, type definition — each living in a different file. The subgraph starts large and only grows as shared dependencies accumulate.
AI tends to recommend the stack it was trained on the most, but "easy to generate" is not the same as "efficient for long-term AI-assisted development." These are two different things.
My current go-to is Clojure Stack Lite, and several of its design choices structurally shrink the task-relevant subgraph.
HTMX eliminates implicit client state. React state is scattered across multiple interdependent files; to verify behavior, an agent has to simulate browser interactions. HTMX is driven by server responses, so an agent can verify with a plain curl — the response is an HTML fragment, right or wrong, no ambiguity.
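To make the curl-verifiability concrete, here is a minimal sketch of an HTMX-style endpoint as a Ring handler. This is my own illustration, not code from the post; the route, the hiccup markup, and the hiccup2 dependency are all assumptions. Because the behavior is entirely the returned fragment, curl shows exactly what the browser would swap in:

```clojure
(require '[hiccup2.core :as h])

;; POST /cart/add: returns the HTML fragment HTMX swaps into the page.
;; Assumes ring.middleware.params has populated :params with string keys.
(defn add-to-cart [request]
  (let [sku (get-in request [:params "sku"])]
    {:status  200
     :headers {"Content-Type" "text/html"}
     :body    (str (h/html [:li.cart-item "Added " sku]))}))
```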
HoneySQL eliminates implicit lazy loading. When an ORM produces an N+1 problem, the debug subgraph includes model definitions, association configs, and migration files, because the issue is buried in implicit behavior. HoneySQL expresses queries as SQL-as-data — no lazy loading, no association magic. N+1 can't happen silently, because the syntax simply doesn't allow it to sneak in. The debug subgraph shrinks from five files to one.
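As a hedged sketch of what SQL-as-data looks like (the table and column names here are invented), the join is spelled out in the query map itself, so there is no association layer that could silently fan out into N+1 queries:

```clojure
(require '[honey.sql :as sql])

;; The query is a plain Clojure map; honey.sql/format turns it into
;; parameterized SQL. Every join is visible right here, in one place.
(sql/format {:select [:o.id :c.name]
             :from   [[:orders :o]]
             :join   [[:customers :c] [:= :o.customer_id :c.id]]
             :where  [:= :o.status "open"]})
;; => roughly ["SELECT o.id, c.name FROM orders AS o
;;              INNER JOIN customers AS c ON o.customer_id = c.id
;;              WHERE o.status = ?" "open"]
```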
Blocking IO eliminates implicit error paths. The fundamental problem with async isn't the syntax — it's that error paths are implicit. Every async call site is a potential break point where an exception can detach from the main flow. To locate a root cause, an agent must trace the entire call chain, and context width grows linearly with chain length. Clojure's blocking IO has no async boundaries; exceptions follow a single path — propagate upward, handled uniformly in middleware. When debugging, an agent only needs two places: the middleware log and the call site the log points to. Context scope stays fixed regardless of system size.
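A minimal sketch of that single error path (again my illustration, not the post's code): one Ring middleware catches whatever propagates up from blocking code, so the log line plus the stack trace it prints are the only two places an agent needs to look.

```clojure
;; Wraps any Ring handler; all exceptions from blocking IO funnel through here.
(defn wrap-errors [handler]
  (fn [request]
    (try
      (handler request)
      (catch Exception e
        ;; one log entry, one stack trace pointing at the call site
        (println "request failed:" (.getMessage e))
        {:status 500 :body "Internal server error"}))))
```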
All three points share a common structure: the less implicit behavior, the smaller the context an agent needs to bring in.
The point here isn't a framework or language comparison — it's an observation about design philosophy. Explicit over implicit is a virtue for human developers; for AI agents, it's a structural guarantee that they won't go dumb prematurely.
Design principles the Clojure community has championed for years happen to be a competitive advantage in the AI agent era. I've chosen to frame this in terms of context efficiency, hoping it helps more people appreciate what the Clojure community figured out a long time ago.
One of the first programs I ever ran on a computer was a text adventure game, also known as Interactive Fiction. I think the first one I played was Adventureland by Scott Adams, which was based on the first-ever text adventure, Adventure, by Crowther and Woods. Adventureland was the first text adventure available for personal computers.
Not long after that I discovered Zork I, the first game by Infocom. I loved the Infocom games; I played most of them, spending many hours solving them.
May 2026
Open psql. Connect. Run a query. Switch branches. Run it again — same connection, same wire protocol, different version of the database.
$ psql postgresql://localhost:5432/inventory
inventory=> SELECT count(*) FROM widget;
count
-------
4218
inventory=> SET datahike.branch = 'pricing-experiment';
SET
inventory=> SELECT count(*) FROM widget;
count
-------
4221
inventory=> RESET datahike.branch;
SET
That’s not a feature toggle on a Postgres replica. It’s the same database — addressed through standard pgwire — viewed through two different commits. The implementation is pg-datahike, a beta we’re shipping today.
pg-datahike embeds a PostgreSQL-compatible adapter inside a Datahike process: wire protocol, SQL translator, virtual pg_* and information_schema catalogs, constraint enforcement, schema hints. Clients that speak Postgres talk to Datahike without a Postgres install — pgjdbc, Hibernate, SQLAlchemy, Odoo 19, and Metabase bootstrap unmodified against it. The migration path is round-trippable: pg_dump output replays into pg-datahike via psql, and the standalone jar dumps Datahike databases back out as portable PG SQL. Detailed test results at the end of this post.
The operator runs one jar. Everything else is psql.
$ java -jar pg-datahike-VERSION-standalone.jar
pg-datahike VERSION ready on 127.0.0.1:5432
backend: file (~/.local/share/pg-datahike)
history: off
CREATE DATABASE: enabled
databases: ["datahike"]
Connect with: psql -h 127.0.0.1 -p 5432 -U datahike datahike
Press Ctrl+C to stop.
JDK 17+ is the only prerequisite; the jar is on GitHub releases. --memory for an ephemeral run; --help covers the rest.
The rest is psql — provision a fresh database, populate it, pin a session to a historical commit, drop it.
$ psql postgresql://localhost:5432/datahike
datahike=> CREATE DATABASE inventory;
CREATE DATABASE
datahike=> \c inventory
You are now connected to database "inventory".
inventory=> CREATE TABLE widget (sku TEXT PRIMARY KEY, weight INT);
CREATE TABLE
inventory=> INSERT INTO widget VALUES ('A', 10), ('B', 20);
INSERT 0 2
inventory=> SELECT datahike.commit_id();
commit_id
---------------------------------------
b4f2e1c0-2feb-5b61-be14-5590b9e01e48 ← copy this
inventory=> INSERT INTO widget VALUES ('C', 30);
INSERT 0 1
inventory=> SELECT count(*) FROM widget;
count
-------
3
inventory=> SET datahike.commit_id = 'b4f2e1c0-2feb-5b61-be14-5590b9e01e48';
SET
inventory=> SELECT count(*) FROM widget; -- the database before the third insert
count
-------
2
inventory=> RESET datahike.commit_id;
SET
inventory=> \c datahike
datahike=> DROP DATABASE inventory;
DROP DATABASE
SET datahike.commit_id pins the session to a historical commit; everything else is plain Postgres. Sixty seconds, one jar, no Postgres install, no Clojure.
What happens when you SET datahike.branch = 'feature'?
Datahike stores its database as a tree of immutable nodes in konserve, a key-value abstraction over filesystems, S3, JDBC, IndexedDB, and others. Every transaction writes new nodes for changed paths and shares unchanged subtrees with the previous version — the trick behind Clojure’s persistent vectors and Git’s object store. A commit is a small map listing the root pointers for each index; a branch is a named pointer at a commit.
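The same sharing behavior is easy to see with ordinary Clojure maps. A toy illustration (the shape and keys are invented here, not Datahike's actual node layout):

```clojure
;; "Committing" a change produces a new value that shares every untouched
;; subtree with the previous version.
(def v1 {:index {:eavt {:root :a} :aevt {:root :b}}})
(def v2 (assoc-in v1 [:index :eavt :root] :a2))

(identical? (get-in v1 [:index :aevt]) (get-in v2 [:index :aevt]))
;; => true; the unchanged subtree is literally the same object in memory
```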
So on SET datahike.branch = 'feature', the handler updates a session variable, and the next query loads that branch’s commit pointer from konserve, walks the tree, returns rows. No coordination with a transactor; storage is the source of truth. SET datahike.commit_id = '<uuid>' works the same way one level deeper — the session points at a specific commit instead of a branch head.
Two consequences worth flagging:
A single start-server call serves many Datahike connections. Clients route on the JDBC URL’s database name:
(pg/start-server {"prod" prod-conn
"staging" staging-conn
"reports" reports-conn}
{:port 5432})
Same shape on the standalone jar with repeatable --db flags: java -jar pg-datahike.jar --db prod --db staging --db reports.
jdbc:postgresql://localhost:5432/prod → prod-conn
jdbc:postgresql://localhost:5432/staging → staging-conn
jdbc:postgresql://localhost:5432/nonsuch → 3D000 invalid_catalog_name
SELECT current_database() returns the connected name; pg_database enumerates the registry. Useful for multi-tenant deployments, or when ops wants one pgwire endpoint serving many independent stores.
Existing Datahike schemas don’t always look the way you’d want them to over SQL. :datahike.pg/* meta-attributes customize the SQL view without touching the underlying schema:
(pg/set-hint! conn :person/full_name {:column "name"}) ; rename the column
(pg/set-hint! conn :person/ssn {:hidden true}) ; exclude from SQL
(pg/set-hint! conn :person/company {:references :company/id}) ; FK target
After set-hint!, SELECT name FROM person works, ssn is invisible to SELECT * and information_schema.columns, and JOIN company c ON p.company = c.id resolves on Datahike’s native ref semantics.
Datahike’s temporal primitives are exposed as session variables. The client doesn’t need to know what as-of means — it just sets a variable:
SET datahike.as_of = '2024-01-15T00:00:00Z'; -- d/as-of
SET datahike.since = '2024-01-01T00:00:00Z'; -- d/since
SET datahike.history = 'true'; -- d/history
RESET datahike.as_of;
Every subsequent query in the session sees the chosen view. A reporting tool that doesn’t know about Datahike can produce point-in-time reports by setting one variable.
Branching is cheap in Datahike: every transaction produces a new immutable commit, so a branch is just a named pointer at a commit UUID. Creation is O(1) — one konserve write, no data copy, no WAL replay. pgwire exposes the read side and the admin operations through standard PG mechanisms:
-- Introspect
SELECT datahike.branches();
SELECT datahike.current_branch();
SELECT datahike.commit_id();
-- Admin (konserve-level writes — they don't go through the tx writer)
SELECT datahike.create_branch('preview', 'db'); -- 'db' is Datahike's default branch name
SELECT datahike.create_branch('from-cid', '69ea6ee1-…');
SELECT datahike.delete_branch('preview');
-- Session view: three cuts on the same immutable log.
-- They compose — a feature branch's state as of yesterday is two SETs.
SET datahike.branch = 'feature';
SET datahike.commit_id = '69ea6ee1-2feb-5b61-be14-5590b9e01e48';
SET datahike.as_of = '2024-01-15T00:00:00Z';
Or pin a branch at connect time via the JDBC URL:
jdbc:postgresql://localhost:5432/prod:feature → prod-conn, pinned to :feature
jdbc:postgresql://localhost:5432/prod → prod-conn, default branch
SET datahike.commit_id = '<uuid>' is Datahike-unique: no other PG-compatible database lets a session pin to an exact commit identifier.
We’ll cover the structural-sharing model that makes branching this cheap in a follow-up post — including how it works across all the Datahike bindings, not just pgwire.
Set a :database-template on the server and pgwire clients self-provision and tear down databases over plain SQL. The template is a partial Datahike config; each CREATE DATABASE produces a fresh store with a generated UUID:
(pg/start-server {"datahike" boot-conn}
{:port 5432
:database-template {:store {:backend :memory}
:schema-flexibility :write
:keep-history? true}})
WITH clauses override the template per-database, and the SQL surface accepts both standard PG forms:
CREATE DATABASE myapp; -- inherits the template
CREATE DATABASE histdb WITH KEEP_HISTORY = true; -- override per database
CREATE DATABASE memdb WITH (BACKEND = 'memory', -- Yugabyte-style paren form
INDEX = 'persistent-set');
DROP DATABASE myapp;
DROP DATABASE IF EXISTS old_one;
Accepted WITH keys map case-insensitively to Datahike config:
| WITH option | Datahike config | Notes |
|---|---|---|
| BACKEND | [:store :backend] | 'memory', 'file' built-in; 'jdbc', 's3', 'redis', 'lmdb', 'rocksdb', 'dynamodb' via external konserve libraries |
| STORE_ID | [:store :id] | Defaults to a fresh UUID per CREATE |
| PATH | [:store :path] | File backend; {{name}} interpolation supported |
| HOST / PORT / USER / PASSWORD / DBNAME | [:store :*] | jdbc / redis backends |
| SCHEMA_FLEXIBILITY | :schema-flexibility | 'read' or 'write' |
| KEEP_HISTORY | :keep-history? | |
| INDEX | :index | 'persistent-set' → :datahike.index/persistent-set |
| OWNER / TEMPLATE / ENCODING / LOCALE / TABLESPACE / … | — | Postgres-only; silently accepted with a NOTICE so pg_dump round-trips work |
The standalone jar enables this by default (use --no-create-database to disable). Embedded servers opt in via :database-template (or explicit :on-create-database / :on-delete-database hooks). Without one, CREATE / DROP DATABASE return SQLSTATE 0A000 feature_not_supported; mismatched preconditions return the standard PG SQLSTATEs.
Wire compatibility extends to pg_dump SQL on both sides. Three workflows.
pg_dump output replays straight into pg-datahike via psql or any JDBC client. Schema-side coverage: CREATE TABLE with FK constraints, CREATE SEQUENCE, DEFAULT nextval(…), CREATE TYPE … AS ENUM, CREATE DOMAIN, partitioned tables. Data-side: INSERT (single + multi-VALUES) and COPY … FROM stdin (text and CSV).
Run with the :pg-dump compat preset to silently accept constructs pg-datahike doesn’t model — triggers, functions, materialized views, ALTER OWNER:
java -jar pg-datahike.jar --compat pg-dump
psql -h localhost -p 5432 -U datahike -d datahike -f my_pg_dump.sql
Validated end-to-end against Chinook (15.6k rows, 11 tables, FKs, NUMERIC, TIMESTAMP) — full byte-identical bidirectional roundtrip — and Pagila (50k rows, 22 tables, ENUM, DOMAIN, partitioning, triggers, functions) — schema parses end-to-end, data loads.
The standalone jar’s dump subcommand walks a Datahike database and emits pg_dump-shaped SQL. The output replays into either pg-datahike or real PostgreSQL via psql:
java -jar pg-datahike.jar dump --data-dir DIR --db NAME --out out.sql
java -jar pg-datahike.jar dump --config datahike-config.edn --copy
Flags cover INSERT-vs-COPY output, schema-only / data-only, and table exclusion. --config accepts a full Datahike config EDN, so any konserve backend works; store-id is auto-discovered.
A native Datahike database — created with d/transact, never touched by SQL — also dumps as clean PG SQL. The inverse mapping is well-defined:
- :db.unique/identity → PRIMARY KEY NOT NULL
- :db.unique/value → UNIQUE
- :db.cardinality/many T → T[] with PG array literals
- :db.type/ref → bigint (the entity id; opt in to FK constraints with set-hint! :references)

So whether you start from a real PostgreSQL dump or from native Datahike, both sides translate cleanly through the same shape. The resulting schema is correct and queryable as both SQL relations and Datalog datoms. It isn’t always what you’d hand-design for entity-shaped Datalog queries — many apps stay with the relational shape, others evolve incrementally as they reach for Datalog’s strengths (pull patterns, rules, multi-source joins).
This is a 0.1 beta and we want to be specific about the gaps:
- Stored procedures, triggers, and similar constructs are accepted only under the :pg-dump compat preset (loaded but not executed); strict mode rejects them
- No LISTEN / NOTIFY
- No COPY … TO STDOUT (COPY … FROM stdin is supported in text and CSV formats)
- FK ON DELETE is enforced for NO ACTION / RESTRICT / CASCADE; SET NULL / SET DEFAULT and any ON UPDATE action are rejected at DDL
- Only the public schema — CREATE SCHEMA is silently accepted but a no-op
- Writes don’t yet follow a pinned branch while SET datahike.branch is active. Reads respect the pinned branch; writes don’t yet. Use datahike.versioning/branch! and merge! from Clojure for branch-targeted writes, or open a second connection on /<db>:<branch>
- Constraints (NOT NULL, CHECK, UNIQUE, FK RESTRICT) are enforced by the pgwire handler; direct (d/transact) writes from Clojure bypass them because Datahike’s schema doesn’t yet carry the constraint vocabulary. A future release will lift enforcement into the tx layer so both paths are gated.
- Postgres’s bulk-load paths (COPY, pg_restore -j) are an order of magnitude faster, partly via deferred index construction; an analogous bulk-load fast path is a future item. Large migrations are overnight-cutover territory today.

The conformance posture is: pass for the workloads we’ve measured against, fail fast and loud everywhere else. We’d rather reject a stored procedure than execute it incorrectly.
If you’ve used Neon or Xata, the goal will look familiar — branchable Postgres. The mechanism is different. Their branches are control-plane operations: call the API, get a new compute instance over copy-on-write storage. pg-datahike’s branches are session-level — SET datahike.branch = 'feature' inside an open psql connection switches what you’re reading. No provisioning, no compute. An agent or a query planner can switch branches mid-session.
Commit pinning — SET datahike.commit_id = '<uuid>' — is the part where we don’t know of a peer. Neon’s time-travel is bounded by a 6h–1d restore window; pg-datahike pins to any historical commit, indefinitely. We have not seen another PG-compatible database expose this directly through the wire protocol.
Dolt is the closest in spirit — git-like semantics, commit pinning, time-travel — but Dolt is MySQL with a custom storage engine. pg-datahike rides on the standard Postgres wire protocol; every PG client works without modification.
The honest tradeoff: we are a compatibility layer over Datahike’s storage, not a fork of Postgres. Some features tied to the Postgres codebase — PL/pgSQL, the extension ecosystem, procedural languages — aren’t on our roadmap today. If you need those, use Postgres. If your bottleneck is versioning, branching, or reproducibility, this gets you there without leaving the wire protocol your tools already speak.
Datahike has been a Datalog database with a Clojure API and growing language bindings; pg-datahike isn’t a separate database, just another front end on the same store. There’s a sibling: Stratum, a SIMD-accelerated columnar engine that speaks the same wire protocol over an analytical column store with the same fork-as-pointer semantics. Both fit into a shared branching model — see Yggdrasil: Branching Protocols for how a Datahike database, a Stratum dataset, and a vector index can fork together at a single snapshot.
The rest of this post is for callers who do speak Clojure — the same data accessible as relations and as datoms, in-process queries that skip the wire, embedded mode without TCP, and configuration knobs that aren’t exposed over SQL.
The pgwire layer is a view onto Datahike’s datom store, not a separate representation. Tables you create over SQL show up as normal Datahike schemas, queryable from Clojure with (d/q …). Existing Datahike schemas show up as SQL tables with no setup.
;; Plain Datahike schema, transacted from Clojure
(d/transact conn
[{:db/ident :person/id :db/valueType :db.type/long
:db/cardinality :db.cardinality/one :db/unique :db.unique/identity}
{:db/ident :person/name :db/valueType :db.type/string
:db/cardinality :db.cardinality/one}])
(d/transact conn [{:person/id 1 :person/name "Alice"}])
-- Same database, over psql:
SELECT * FROM person;
-- id | name
-- ----+-------
-- 1 | Alice
The reverse holds too — CREATE TABLE over pgwire transacts a normal Datahike schema, and the next (d/q …) from Clojure sees the rows you just inserted. There is no shadow representation, no separate metadata. One datom store, two query languages.
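For instance, a sketch assuming the person schema transacted above and the same open conn: the rows inserted over pgwire come straight back from a Datalog query.

```clojure
;; d/q is Datahike's standard query entry point; @conn derefs the
;; connection to the current database value.
(d/q '[:find ?id ?name
       :where
       [?e :person/id ?id]
       [?e :person/name ?name]]
     @conn)
;; => #{[1 "Alice"]}
```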
Two ways to skip the standalone jar — start a server from your own JVM application, or bypass the wire layer entirely.
;; deps.edn
{:deps {org.replikativ/datahike {:mvn/version "LATEST"}
org.replikativ/pg-datahike {:mvn/version "LATEST"}}}
(require '[datahike.api :as d]
'[datahike.pg :as pg])
(let [boot {:store {:backend :memory :id (random-uuid)}
:schema-flexibility :write}]
(d/create-database boot)
(pg/start-server {"datahike" (d/connect boot)}
{:port 5432
:database-template {:store {:backend :memory}
:schema-flexibility :write
:keep-history? true}}))
;; => :running on :5432
Same pgwire surface, in-process. The integration patterns earlier in this post are the embedded-library API; the standalone jar wraps the same calls behind CLI flags.
Tests and in-process applications don’t need the wire layer at all:
(def h (pg/make-query-handler conn))
(.execute h "CREATE TABLE person (id INT PRIMARY KEY, name TEXT)")
(.execute h "INSERT INTO person VALUES (1, 'Alice')")
(.execute h "SELECT * FROM person")
Same SQL surface, no socket. Useful for property-based testing of SQL workloads, or for embedding the SQL interface inside a Clojure or ClojureScript application without exposing a port.
By default the handler rejects unsupported DDL — GRANT, REVOKE, CREATE POLICY, ROW LEVEL SECURITY, CREATE EXTENSION, COPY — with SQLSTATE 0A000 feature_not_supported. Most ORMs emit some of these unconditionally. Two ways to relax:
;; silently accept every auth/RLS/extension no-op (Hibernate, Odoo)
(pg/make-query-handler conn {:compat :permissive})
;; accept specific kinds only
(pg/make-query-handler conn {:silently-accept #{:grant :policy}})
The named presets in datahike.pg.server/compat-presets cover the common ORM patterns.
Both interfaces see the same datoms, the same indexes, the same history. The choice is about how the query reaches the engine.
Reach for SQL when callers don’t share a runtime with the database — services over the wire, analysts in Metabase, tools that only speak the wire protocol — or when you want existing tooling: ORMs, migration runners, BI dashboards.
Reach for Datalog when the query runs in the same process as the database. Datahike’s Datalog API is a Clojure function: pass values in, get values out, no parsing, no serialization, no socket. Even pg-datahike’s embedded mode (the make-query-handler path shown above) still goes through the SQL parser and the translator; Datalog skips both. You can invoke arbitrary Clojure functions inside predicates, return live data structures without copying, and join across multiple databases on different storage backends in a single query.
The two paths compose. DDL via Flyway over SQL, then reads in Datalog from your Clojure backend. Or: Datahike schema in Clojure, ORM-driven CRUD over SQL. Both stay coherent because they’re views of the same datom store.
We test pg-datahike against the same suites the Postgres ecosystem uses on itself. If a suite passes here, the apps that depend on it generally work here.
| Layer | Test suite | Result | What this proves |
|---|---|---|---|
| JDBC driver | pgjdbc 42.7.5 — ResultSetTest | 80 / 80 | Cursors, type decoding, and metadata behave the way every JVM Postgres client expects. |
| Java ORM | Hibernate 6 — DatahikeHibernateTest | 13 / 13 | JPA stacks — Spring, Quarkus, Jakarta — talk to pg-datahike the same way they talk to Postgres. |
| Python ORM | SQLAlchemy 2.0 dialect | 16 / 16 across 7 phases | The Python data ecosystem — Django, Flask, FastAPI, Airflow, dbt — connects via the standard dialect path. |
| SQL semantics | sqllogictest | 779 assertions, 61 files | Cases derived from PostgreSQL's regression suite, expressed in the sqllogictest format SQLite, CockroachDB, and DuckDB use for their own correctness work. |
| Real application | Odoo 19 — --init=base --test-tags=:TestORM | 11 / 11 cases, ~38k queries, zero translator errors | A 200-table ERP with one of the most demanding open-source ORM layers boots and passes its own test suite. |
| BI tool | Metabase native SQL | 20-probe MBQL sweep | Schema introspection, prepared statements, and result handling work for the paths real BI tools depend on. |
| Migration roundtrip | Chinook + Pagila pg_dump fixtures | Chinook: byte-equal roundtrip. Pagila: schema parses, data loads. | A real Postgres database can be exported, replayed in pg-datahike, and dumped back — schema and data preserved through the round-trip. |
| Internal | Unit suite | 544 tests, 1603 assertions | Standard regression coverage. |
Per-commit suites run on CircleCI. Odoo, Metabase, and psql / libpq (\d, \dt, \df family) are run on a manual harness before each release. A dedicated compatibility page with linked test artifacts and a published gaps registry is in flight.
Download the jar from GitHub releases, java -jar pg-datahike-VERSION-standalone.jar, point psql at it. To embed in a JVM app, the coordinate is org.replikativ/pg-datahike on Clojars. Repo, docs, and issues at github.com/replikativ/pg-datahike; feedback to contact@datahike.io.
A follow-up post will cover the structural-sharing model that makes branching O(1), what merge! does, and the same workflow across every Datahike binding (Clojure, Java, JavaScript, Python, the C library, the CLI, and SQL). Subscribe to the RSS feed.
We’re happy to announce a new release of ClojureScript. If you’re an existing user of ClojureScript please read over the following release notes carefully.
Now that ClojureScript targets ECMAScript 2016, we can carefully choose new areas of enhanced interop. Starting with this release, hinting a function as ^:async will make the ClojureScript compiler emit a JavaScript async function:
(refer-global :only '[Promise])
(defn ^:async foo [n]
(let [x (await (Promise/resolve 10))
y (let [y (await (Promise/resolve 20))]
(inc y))
;; not async
f (fn [] 20)]
(+ n x y (f))))
This also works for tests:
(deftest ^:async defn-test
(try
(let [v (await (foo 10))]
(is (= 61 v)))
(let [v (await (apply foo [10]))]
(is (= 61 v)))
(catch :default _ (is false))))
In the last Clojure survey support for async functions dominated the list of desired ClojureScript enhancements for JavaScript interop. This enhancement eliminates the need to take on additional dependencies for the common cases of interacting with modern Browser APIs and popular libraries.
For a complete list of fixes, changes, and enhancements to ClojureScript, see here.
It often begins the same way. The system performs well, traffic increases, data volumes grow, and new features accumulate. Then, gradually, performance degrades. Deployments slow down, bugs become harder to trace, and engineers spend more time debugging than building. What once felt scalable begins to feel fragile.
This is the underlying challenge of data-intensive systems: as data grows, complexity tends to grow with it.
Most teams respond predictably—by adding more tools, more layers, and more abstractions. But this often compounds the problem rather than solving it.
What if the solution to scale isn’t added complexity, but reduced complexity? This is the core philosophy behind Clojure, created by Rich Hickey.
This guide explores how to build scalable data architectures using simple, data-centric approaches—without compromising performance, reliability, or developer productivity.
In Java, data is wrapped inside objects.
This works at the beginning of an application's life. But over time, complexity accumulates and becomes harder to manage. Why?
Because objects hide data:
This reflects a core limitation of Object-Oriented Programming: teams gradually shift from working with data to contending with the systems that encapsulate it.
In mutable systems:
Now imagine this happening across:
Teams get:
Complexity outpaces data growth, and that’s where things get messy.
As systems grow, teams often introduce:
Each “solution” adds more complexity.
Clojure takes a totally different approach. Forget all those complicated wrappers and abstractions. It just treats data as data — plain and straightforward. No stacked layers, no unnecessary indirection.
In Clojure, data is represented as plain maps, vectors, and sets. No classes. No hidden behavior. Just data that is easy to inspect, easy to serialize (JSON, EDN), and easy to transform across services, pipelines, and systems without rewriting everything.
Clojure uses persistent data structures. There are no full dataset copies — it reuses what’s the same and stores only what’s new. Teams end up with millions of records but almost no additional memory overhead.
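A small sketch of what that reuse looks like in practice (illustrative, not a benchmark):

```clojure
;; One million entries, then a "copy" with a single element replaced.
(def records (vec (range 1000000)))
(def updated (assoc records 0 :changed))

;; `updated` is a new immutable vector, but it shares almost the entire
;; internal tree with `records`; only the path to index 0 is new.
(take 3 updated) ;; => (:changed 1 2)
(take 3 records) ;; => (0 1 2), the original is untouched
```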
Immutability is the core idea. Once the team creates data, it stays exactly as it is — no messing around, no changes. That’s where the simplicity comes from. Instead of mutating in place, you create new values.
This eliminates:
And enables safe concurrency.
The bigger a system gets, the trickier it is to keep data in line. Everyone is worried about data going off track—so how does a team maintain strict control? That’s where Malli steps in.
It’s a lightweight schema library that validates data and ensures teams aren’t sending anything unusual. Simple as that.
Example:
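A minimal sketch of the idea (the schema and values are invented here): the schema is itself plain data, and validation is an ordinary function call.

```clojure
(require '[malli.core :as m])

(def User
  [:map
   [:name string?]
   [:age [:int {:min 0}]]])

(m/validate User {:name "Alice" :age 30}) ;; => true
(m/validate User {:name "Alice" :age -1}) ;; => false
```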
Whenever the app breaks down and produces unclear errors, Malli tells teams straight-up what’s wrong, so they can fix errors fast:
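Continuing the same sketch, malli.error/humanize turns a failed validation into plain, readable data:

```clojure
(require '[malli.error :as me])

(-> (m/explain User {:name "Alice" :age -1})
    (me/humanize))
;; => {:age ["should be at least 0"]}
```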
Concurrency is where most systems break.
Locks. Deadlocks. Race conditions. Clojure avoids all of this.
This is a direct benefit of immutable values and persistent data structures.
core.async makes handling streams simple.
Example:
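Here is a hedged sketch of the shape core.async gives you (the channel name and events are invented): producers and consumers communicate over a channel, with no locks in sight.

```clojure
(require '[clojure.core.async :refer [chan go >! <! <!!]])

(def events (chan 10)) ; buffered channel of incoming events

;; A lightweight go block pushes events onto the channel...
(go (doseq [e [:created :updated :deleted]]
      (>! events e)))

;; ...and a consumer takes them off, one at a time.
(<!! (go (<! events))) ;; => :created
```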
Teams move beyond the traditional cycle of writing, building, deploying, and waiting. With a REPL, they can execute code immediately and receive instant feedback.
Need to understand how changes behave with real data? Simply load production data, experiment with live transformations, and debug in real time—while the system continues to run.
The overhead of long build cycles is eliminated. Teams can shape and refine their systems in real time, without delays or uncertainty.
🎬 Clojure exemplifies this approach—teams aren’t just writing code; they are interactively evolving their systems. You can see this in action in the video here:
As systems begin to handle larger volumes of data, complexity can escalate quickly. Some systems continue to perform reliably, while others struggle under the load. The difference is rarely accidental—it is largely determined by the underlying architecture.
Clojure takes a different path. It keeps things simple from the start—and that’s what makes it scale.
In many systems, logic comes first. Data is secondary. In Clojure, it’s the opposite. Data comes first.
And that changes how you build systems. Instead of designing classes, teams work with data flows.
Why this helps:
Each part of the system does one simple thing:
That’s it. No hidden state. No surprises. This works because of:
What teams get:
As systems grow, boundaries tend to blur. One service starts doing too much. Data shapes drift.
Clojure pushes you to keep things clear.
When teams do this, boundaries stay clear even as the system grows.
Clojure keeps backend logic simple. Teams use small functions to shape data, so everyone’s right next to the real information. It clears out the interference.
Fewer lines of code mean teams reduce risks and clarify their goals.
Instead of calling each other up, services just broadcast events to the world, allowing the appropriate recipients to receive them.
So, when something happens, teams create an event. The rest of the system listens and responds as needed. It’s a cleaner way to connect everything without binding them too firmly. Everything runs independently.
As Martin Fowler explains, event sourcing lets teams rebuild system state by replaying events. That makes systems easier to scale and debug.
What this gives teams:
Think of the system as a pipeline.
Each step is simple:
Why this works:
Save the full history, not just the current version. The current state is then simply the result of applying all these events in order. For example:
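A hedged sketch of that idea (the event shapes are invented): current state is a pure fold over the event history, so any past state can be rebuilt by replaying a prefix of the log.

```clojure
(defn apply-event [state event]
  (case (:type event)
    :deposited (update state :balance + (:amount event))
    :withdrawn (update state :balance - (:amount event))
    state))

(reduce apply-event
        {:balance 0}
        [{:type :deposited :amount 100}
         {:type :withdrawn :amount 30}])
;; => {:balance 70}
```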
As Martin Fowler points out, this lets you rebuild state at any point in time, audit every change, and debug by replaying history.
In Clojure, most work is done through small functions.
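For example (a sketch with invented data), a typical unit of work is a short chain of pure functions over plain maps:

```clojure
(->> [{:id 1 :price 10 :qty 3}
      {:id 2 :price 5  :qty 0}]
     (filter (comp pos? :qty))                       ; drop empty lines
     (map #(assoc % :total (* (:price %) (:qty %)))) ; derive a field
     (map :total)
     (reduce +))
;; => 30
```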
Simple. Predictable. Testable.
Why it matters:
What this indicates:
No mutation. No hidden steps.
It’s not only about clean code. Simple systems save money, let teams move faster, and prevent failures. They’re easier to handle and easier to expand.
Complex systems grow in layers.
More services, more duplication, more overhead. Simple systems stay lean.
What this means in practice:
Teams are not paying extra just to manage complexity.
When a system is hard to read, new developers slow down. They depend on others to understand how things work. Simple systems remove that friction.
The impact:
And when someone leaves, the system doesn’t become a mystery.
When things get complicated, failures follow. Hidden states, complex dependencies, and surprise side effects make bugs a pain to find. But simple systems just work better—they’re easier to predict.
What does this mean?
Simple data systems aren’t just a buzzword—they’re what keeps high-volume, real-time systems running smoothly, even as things shift and scale up.
Let’s begin with systems that receive events continuously, leaving little opportunity for delays or unforeseen errors.
Payment processing or trading platforms are pretty unforgiving. They process thousands of transactions each second, and teams can’t have mistakes or inconsistent data.
Simple, data-focused architecture really shines here: each transaction is just data, tracked as an event that teams can review or replay. When things go wrong, it’s a lot easier to pinpoint exactly what failed and roll everything back to a safe place.
What does this fix?
These systems face heavy loads with real-time updates—millions of people are refreshing nonstop. No matter how wild the traffic gets, the data everywhere needs to stay perfectly in sync.
That’s where tools like Apache Kafka help out. Score changes and other updates stream through constantly, and every part of the platform reacts right away, without missing a beat.
Why keep it simple?
AI systems are addicted to clean, reliable data. That’s usually where messy architectures fail.
Before models do anything, teams need to turn raw data into usable features. Data arrives continuously from everywhere, and each transformation has to stay consistent.
With a basic pipeline, teams always know what’s happening at each step. It’s way easier to test things, catch mistakes, and keep everything running as expected.
What teams get:
Data preprocessing is just as important. This is all the data cleaning, normalization, and blank-filling before training or inference.
If teams build things in a clear, functional way—no hidden side effects—each step is independent, and they can replay everything if needed.
That makes a difference:
🔹 Rich Hickey explains more about building data-heavy systems in his talks — explore below:
The ideas explored in this post are deeply rooted in the work of Rich Hickey. His talks have shaped how the Clojure community thinks about simplicity and data.
The following talks by Rich Hickey form the intellectual foundation of this post and are recommended for further viewing.
Link: https://www.youtube.com/watch?v=SxdOUGdseq4
Summary: Hickey’s most influential talk. Reframes how developers think about complexity and simplicity — the philosophical backbone of the blog’s entire approach.
Link: https://www.youtube.com/watch?v=-6BsiVyC1kM
Summary: A deep dive into why values win over variables. Hickey demonstrates that immutability eliminates entire categories of bugs around shared state, making it the natural foundation for concurrent, data-heavy systems.
Link: https://www.youtube.com/watch?v=ScEPu1cs4l0
Summary: Explores how software should explicitly model time. Argues that values should be immutable by default and that mutable state is a source of accidental complexity. This talk is the intellectual foundation for Clojure’s design around immutable data structures.
Link: https://www.youtube.com/watch?v=028LZLUB24s
Summary: Focuses specifically on Clojure’s two defining traits: data orientation and simplicity. Covers how these characteristics lead to faster time to market, smaller codebases, and better quality — exactly what the blog promises for data-heavy systems.
Link: https://www.youtube.com/watch?v=Cym4TZwTCNU
Summary: Hickey argues that traditional OOP and relational databases entangle value, identity, and state in ways that make reasoning about data evolution difficult. Directly relevant to the blog’s argument about avoiding hidden state in data-heavy systems.
Link: https://www.youtube.com/watch?v=ROor6_NGIWU
Summary: Examines how the architecture of distributed systems (multiple communicating programs) compares to single-program architecture. Explores tradeoffs in data formats and what characteristics well-designed system components should have.
Because it keeps data simple. You work with maps and vectors. No hidden state. No complex object layers. So it’s easier to track data, change it, and debug issues—even when the system grows.
It avoids a lot of moving parts.
You write less code. And it’s easier to see what’s going on.
Often, yes. Functional Programming removes side effects. That makes systems more predictable. When things are predictable, parallel execution is simple and stable.
It checks your data. You define what valid data looks like. Malli checks the data and makes it reliable across services.
If data doesn’t change, there are no race conditions or state-sharing bugs.
If you want to update something, just make a new version instead of changing the old one. That means different parts of the system run in parallel without conflict.
Scaling up gets a lot easier and less risky.
Absolutely. Clojure comes with tools like core.async, so you can process streams of data in real time. It lets you build systems that react to events as they arrive.
That’s why it’s such a good fit for streaming or event-driven applications.
Most teams believe: “Complex systems require complex solutions.”
Clojure proves the opposite.
By embracing:
You get:
And most importantly: Teams build systems that don’t collapse under their own weight.
Connect with Flexiana’s experts to get a clear view of your system.
We’ll dig into your architecture, pinpoint what’s slowing you down, and work with you to map out a plan that simplifies your setup and makes it ready to grow.
Cookies and sessions are both ways to persist data across HTTP requests, but they work differently:
Cookies keep data in the browser and send it with every request; client-side JavaScript can read a cookie unless the HttpOnly flag is set. Sessions keep the data on the server, with only a session ID stored client-side.

Browser                              Server
   |                                    |
   |--- Request (with session ID) ---->|
   |                                    |--- Looks up session ID in store
   |                                    |--- Retrieves session data
   |<-- Response ----------------------|
A common pattern is: the server creates a session, stores the session ID in a cookie, and on each request the browser sends that cookie so the server can find the right session data.
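In Clojure, for example, here is a hedged sketch of the pattern with Ring's session middleware (the handler and session keys are invented, and ring-core is assumed as a dependency): the middleware puts only the session ID in a cookie, while the data stays in a server-side store.

```clojure
(require '[ring.middleware.session :refer [wrap-session]])

(defn handler [{:keys [session]}]
  (let [visits (inc (:visits session 0))]
    {:status  200
     :body    (str "You have visited " visits " times")
     :session (assoc session :visits visits)})) ; stored server-side

(def app (wrap-session handler)) ; reads/sets the session-ID cookie
```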
| Feature | Cookie | Session |
|---|---|---|
| Storage location | Client (browser) | Server |
| Size limit | ~4KB | No strict limit |
| Lifetime | Configurable | Usually browser session |
| Security | Less secure | More secure |
| Performance | Faster (no server lookup) | Slightly slower |
| Use case | Preferences, tracking | Auth, sensitive data |
Rule of thumb: Use cookies for small, non-sensitive data you’re okay storing client-side. Use sessions for sensitive data or anything that shouldn’t be exposed to the client.
This library was my take on defining the type signature of a Clojure function, inspired by Carp. More info in the blog post.
This is a collection of various Babashka scripts that I use in my back-up process. The back-ups and back-up locations are all custom and offline so I had to have a custom solution.
More info and the manual in the project repository.
To illustrate the usage with an example, I use repository-state script to ‘monitor’ the files and folders in a root folder that I declared as a ‘repository’. The script output functions like a report of what is present in the repository and the state of the files at the time of the report.
When I want to create a backup of the repository, I create a snapshot using directory-snapshot. The script zips the whole directory and appends the current date to the archive name. Since the repo contains the output of repository-state, it is included in the archive, so the archive now carries info about the repository.
The directory-sync script compares two directories and, if needed, synchronizes them. I use it when I want to perform back-ups (one or more directory snapshots, for example). The comparison produces an EDN output which I manually inspect and modify according to what I am doing at the moment. The EDN is used later for synchronization. I repeat the process for all my back-ups and redundant back-up locations.
The hex-dump was just fun to write, but it can be used to compare binary files.
Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).
After a ten-year hiatus, EuroClojure is coming to Prague!
May 19-21, 2027
2027.euroclojure.org
The theme will be the design, long-term development, and operation of reliable systems. Enjoy three days of workshops, talks, and conversation relevant for industry veterans, ranging from senior developers, tech leads, VPs of Engineering to CTOs.
The conference will showcase engineering approaches within the Clojure ecosystem, but with a strong emphasis on broader applicability. It will explore the real-world portability of these concepts to other tech stacks, mutual inspiration across different communities, and adapting to AI-assisted development.
Join the mailing list for early bird tickets and announcements. Share your ideas and content suggestions when you sign up.
We’re looking for 40-minute talks that go beyond the basics: hard-won lessons, production stories, trade-offs, deep dives into language features, libraries, or tools, and ideas that change how people build things. Tracks include: Language, Experience Report, Library, Tools, AI, Ideas, and Fun.
Join us for the largest gathering of Clojure developers in the world! Meet new people and reconnect with old friends. Enjoy two full days of talks, a day of workshops, social events, and more.
September 30 – October 2, 2026
Charlotte Convention Center, Charlotte, NC
Early bird and group tickets are on sale now.
In case you missed it, the Clojure Documentary is live!
Follow it up with the Clojure Documentary Q&A.
Don’t miss the Documentary show notes.
Babashka Conf: May 8. Amsterdam, NL. See the schedule.
Dutch Clojure Days 2026: May 9. Amsterdam, NL. See the schedule.
Spring Lisp Game Jam 2026: May 14-24. Online.
Programming as and for Inference (by Christian Weilbach): May 29
Pastedown: Paste rich text as markdown in VS Code with a Joyride Script - CalvaTV
Learn Ring - 11. hiccup & hiccup 2 - Clojure Diary
Learn Ring - 12. Navbar - Clojure Diary
Swish: Using Claude Code to Create a Lisp in Swift - ns & loading - Rod Schmidt
Clojurists Together Q2 2026 Funding Announcement - Kathy Davis
Clojure is the future of AI coding, but you won’t use it - Timur Latypoff
Hello, Jank – Clojure Civitas - Daniel Slutsky
Hello, Babashka – Clojure Civitas - Daniel Slutsky
Datavis in Babashka: analysing our calendar feed – Clojure Civitas - Daniel Slutsky
A game loop in a core.async goroutine - Eric Shull
Conference Tooling for Heart of Clojure 2024 - Arne Brasseur
OSS updates March and April 2026 - Michiel Borkent
Gloat Funded for Q2 2026! - GloatHub - Ingy dot Net
Uniting C and Clojure in Bbssh - Crispin Wellington
Debut release
janqua - Jank notebooks using Quarto
babqua - Babashka notebooks using Quarto
plotje - simple and easy plotting
go-joker - A personal twist on the original Clojure interpreter and linter, slightly mad, Go-ing places
re-frame-pair - A Skill for pair-programming with Claude Code on a re-frame app.
re-frame-pair-improver - A skill which analyses the use of another skill (re-frame-pair sessions) and proposes improvements
alike - A simple matching library
Updates
xtdb 2.2.0-beta1 - An immutable SQL database for application development, time-travel reporting and data compliance. Developed by @juxt
re-frame 1.4.7 - A ClojureScript framework for building user interfaces, leveraging React
calva-backseat-driver 0.0.31 - VS Code AI Agent Interactive Programming. Tools for Copilot and other assistants. Can also be used as an MCP server.
lexical-chocolate 0.0.4 - Provides utilities for building lexical contours.
clojure-cli-config 2026-05-01 - User aliases and Clojure CLI configuration for deps.edn based projects
joyride 0.0.74 - Making VS Code Hackable like Emacs since 2022
baredom 2.7.0 - BareDOM: Lightweight CLJS UI components built on web standards (Custom Elements, Shadow DOM, ES modules). No framework, just the DOM
phel-lang 0.35.0 - A functional, Lisp-inspired language that compiles to PHP. Inspired by Clojure, Phel brings macros, persistent data structures, and expressive functional idioms to the PHP ecosystem.
metamorph.ml 1.5.1 - Machine learning functions based on metamorph and machine learning pipelines
epupp 0.0.19 - A web browser extension that lets you tamper with web pages, live and/or with userscripts.
kairos 0.2.62 - Crontab parser for Clojure with human-readable cron explanations
clojure-lsp 2026.05.05-12.58.26 - Clojure & ClojureScript Language Server (LSP) implementation
re-frame-query 0.10.0 - Declarative data fetching and caching for re-frame inspired by tanstack query and redux toolkit query
calva 2.0.583 - Clojure & ClojureScript Interactive Programming for VS Code
In this post I'll give updates about open source I worked on during March and April 2026.
To see previous OSS updates, go here.
I'd like to thank all the sponsors and contributors that make this work possible. Without you, the below projects would not be as mature or wouldn't exist or be maintained at all! So a sincere thank you to everyone who contributes to the sustainability of these projects.
Current top tier sponsors:
Open the details section for more info about sponsoring.
If you want to ensure that the projects I work on are sustainably maintained, you can sponsor this work in the following ways. Thank you!
Babashka Conf 2026 is happening on May 8th in the OBA Oosterdok library in Amsterdam! David Nolen, primary maintainer of ClojureScript, will be our keynote speaker. We're excited to have Nubank, Exoscale, Bob, Flexiana and Itonomi as sponsors. Nubank and Exoscale are hiring. Wendy Randolph will be our event host. For the schedule and other info, see babashka.org/conf. Join the babashka-conf Slack channel on Clojurians Slack for last minute communication. The day after babashka conf, Dutch Clojure Days 2026 will be happening, so you can enjoy a whole weekend of Clojure in Amsterdam. Hope to see many of you there!
In the last two months I spent significant time organizing babashka conf, but made progress in several projects as well.
My upstream work to enable async/await in ClojureScript was merged at the beginning of March. The implementation mirrors squint. Thanks David for reviewing and merging. Also, deftest now supports an ^:async annotation so you can use async/await and don't need to mess around with the cljs.test/async macro anymore:
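A short sketch of the shape, mirroring the ^:async deftest example from the ClojureScript release notes earlier in this digest:

```clojure
(deftest ^:async round-trip-test
  (let [v (await (js/Promise.resolve 42))]
    (is (= 42 v))))
```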
I'll be presenting this work at the Dutch Clojure Days.
Rebel-readline is now bb compatible. The work involved mainly exposing more JLine stuff and making sure rebel-readline didn't hit any internal JLine APIs. One step to drive this to completion was to make a dependency, compliment, bb compatible. Thanks both to Bruce and Alexander for the cooperation.
Squint now supports cljs.test and multimethods! clojure-mode was ported to use the new cljs.test.
On the cream front, I put in effort to make the binary smaller and have been keeping up with the new GraalVM EA releases. I've been posting bug reports to the crema maintainer. Currently there's still an unfixed bug around core.async that I have trouble reproducing in pure Java. I also added lots of library tests to CI so I can ensure stability in the long run. For now it remains experimental, but the direction is promising.
A performance PR to weavejester/dependency speeds up depend, depends? and topo-sort significantly, so clerk notebooks render faster.
The cljfmt library, also by @weavejester, now fully runs from source in babashka. The Java diff library that wasn't bb-compatible was replaced with text-diff, but only for the babashka path. The JVM build of cljfmt still uses the original Java diff library, with a possible switch later once text-diff has matured.
Several SCI fixes were made to improve Clojure compatibility between babashka and Clojure. E.g. records can now support extending to IFn which was a blocker for some Clojure libs that tried to run in bb so far.
Clj-kondo 2026.04.15 got a few new linters thanks to @jramosg for stewarding most of these. It also has better out of the box potemkin support, and @alexander-yakushev contributed a wave of performance improvements.
Updates per project below. Bullets are highlights; see each project's CHANGELOG.md for the full list.
babashka: native, fast starting Clojure interpreter for scripting.
- Expose more JLine interfaces: Completer, Highlighter, ParsedLine, Writer, Reader
- clojure.repl/special-doc and clojure.repl/set-break-handler!
- clojure.main/repl-read
- org.jline.reader.Buffer added to class allowlist
- clojure.java.javadoc namespace with javadoc available in REPL #1933
- (doc var), (doc set!) and other special forms #1932
- (source inc) and (source babashka.fs/exists?) for built-in vars #1935
- BABASHKA_REPL_HISTORY env var for configurable REPL history location #1930
- deftype and defrecord inside non-top-level forms (e.g. let, testing) #1936
- java.util.HexFormat interop support
- :as-alias
- -version as an alias for --version
- clojure.lang.EdnReader$ReaderException
- Fix --prepare flag skipping next token
- clojure.data.xml.tree/{flatten-elements,event-tree}, clojure.data.xml.event record constructors, and clojure.data.xml.jvm.parse/string-source
- java.net.Proxy and java.net.Proxy$Type Java classes (@jeeger)
- java.lang.reflect.Constructor, java.lang.reflect.Executable, java.util.stream.Collectors, java.util.Comparator (for reify), and more
- Bump nextjournal.markdown to 0.7.255, edamame to 1.5.39, data.xml to 0.2.0-alpha11, jsoup to 1.22.2, rewrite-clj to 1.2.54, tools.cli to 1.4.256, transit-clj to 1.1.357, fs to 0.5.32

SCI: Configurable Clojure/Script interpreter suitable for scripting
- recur with 20+ args in loop (#1035)
- recur arity: throw when it doesn't match (#1034)
- IFn on defrecord, deftype and reify (#808, #1036)
- IPrintWithWriter as protocol (#1032)
- doc macro
- ns-map

clj-kondo: static analyzer and linter for Clojure code that sparks joy.
- New linter :not-nil? which suggests (some? x) instead of (not (nil? x)), and similar patterns with when-not and if-not (default level: :off)
- New linter :protocol-method-arity-mismatch which warns when a protocol method is implemented with an arity that doesn't match any arity declared in the protocol (@jramosg)
- New linter :missing-protocol-method-arity (off by default) which warns when a protocol method is implemented but not all declared arities are covered
- New linter :redundant-declare which warns when declare is used after a var is already defined (@jramosg)
- import-fn, import-macro, and import-def
- import-vars :refer and :rename syntax
- pmap and future-related functions (future, future-call, future-done?, future-cancel, future-cancelled?) (@jramosg)
- throw with string in CLJS no longer warns about type mismatch (@jramosg)
- /bin/clj-kondo (@harryzcy)
- Handle StackOverflowError occurring during analysis

cream: Clojure + GraalVM Crema native binary
- clojure.core.async-test
- Library tests added to CI: httpkit, nextjournal/markdown, clj-yaml, core.async ioc-macros

squint: CLJS syntax to JS compiler
- Multimethods: defmulti, defmethod, get-method, methods, remove-method, remove-all-methods, prefer-method, prefers, plus hierarchy ops isa?, derive, underive, make-hierarchy, parents, ancestors, descendants (#806)
- cljs.test/report is now a multimethod, extensible via defmethod. test-var now fires :begin-test-var / :end-test-var events.
- await in async functions, in anticipation of CLJS next. The legacy js-await and js/await forms continue to work as aliases for now.
- cljs.test / clojure.test support: deftest, is, testing, are, use-fixtures, async, run-tests
- with-meta now preserves callability when applied to a function.
- Macros in .cljc files via :require (no need for :require-macros); resolve qualified symbols from macro expansions
- squint.compiler/compile* and squint.compiler/transpile* which accept either a string or a sequence of pre-parsed forms, skipping the forms -> string -> forms roundtrip for SSR use cases
- Fix: #html / #jsx were erased when an attrs map was present without a :class key

cherry: Experimental ClojureScript to ES6 module compiler
- await as a special form, in anticipation of CLJS next
- :require-macros clauses with :refer now properly accumulate instead of overwriting each other
- cherry.test with clojure.test-compatible testing API: deftest, is, testing, are, use-fixtures, async, run-tests. Macros are compiler built-ins (shared with squint), so no :require-macros plumbing is needed in user code.

nbb: Scripting in Clojure on Node.js using SCI
- IFn on defrecord and reify
- Fix related to p/then results

fs: file system utility library for Clojure
- touch fn (@lread & @borkdude)
- Coercions and Returns / Argument Naming Conventions sections added to README (@lread)
- :nofollow-links option (@lread)
- Fix split-ext and extension on dotfiles with parent dirs (e.g. foo/.gitignore)
- gzip & gunzip now honor dest dir when specified (@lread)
- umask on created files and directories (@lread)

clerk: Moldable Live Programming for Clojure
- weavejester/dependency (#808)
- v0.12.51 (#793), enables async/await in viewer functions
- present+reset! (#809)
- Fix build-graph crash on non-Clojure source files (#810)

edamame: configurable EDN and Clojure parser with location metadata and more
Nextjournal Markdown: A cross-platform Clojure/Script parser for Markdown
- :disable-footnotes true to disable parsing footnotes #67

quickdoc: Quick and minimal API doc generation for Clojure
grasp: Grep Clojure code using clojure.spec regexes
- grasp.impl

babashka.nrepl: The nREPL server from babashka as a library
- send: prevent interleaved bencode frames from concurrent writes
- info and lookup op refinements: lookup carries nested info map whereas info is a flatmap

pod-babashka-instaparse: instaparse from babashka
- add-line-and-column-info-to-metadata
- Add --features=clj_easy.graal_build_time.InitClojureClasses to native-image

instaparse-bb: Use instaparse from babashka
- add-line-and-column-info-to-metadata and get-failure
- parser options (e.g. :output-format :enlive)
- java.net.URL for grammars

babashka-sql-pods: babashka pods for SQL databases
- Bump next.jdbc, cheshire (Jackson 2.12 -> 2.20), PostgreSQL, MSSQL, HSQLDB, MySQL Connector/J drivers

http-client: HTTP client built on java.net.http
- Replace httpstat.us examples with httpbin.org in tests

neil: A CLI to add common aliases and features to deps.edn-based projects
deps.clj: a faithful port of the clojure CLI bash script to Clojure
Contributions to third party projects:
- weavejester/dependency: performance work on depend, depends?, and topo-sort
- :bb reader conditionals to replace the AutoFlattenSeq deftype with plain vectors plus metadata markers, swap the Segment deftype for a reify-based CharSequence, and add a CI test runner. Open, awaiting review.

These are (some of the) other projects I'm involved with but little to no activity happened in the past two months.
This has been bugging me for years: you often run a JVM by a shell script wrapper, then want to jstack it to figure out what it’s doing, but can’t figure out what PID to ask for. Running jps gives remarkably unhelpful output, especially for tools like Leiningen. I wrote a hacky little Ruby script to dig into the process tree of everything matching a given pattern, find any JVMs those processes spawned, and hit the highest numbered one (presumably the last one started) with jstack. This is definitely wrong (PID rollover!) but it’s been surprisingly useful for debugging.
$ jstack+ lein test
jstack 1044647 (java -classpath /home/aphyr/hegel/test ...)
2026-05-04 10:23:07
Full thread dump OpenJDK 64-Bit Server VM (21.0.10+7-Ubuntu-124.04 mixed mode, emulated-client, sharing):
Threads class SMR info:
_java_thread_list=0x0000767f88002030, length=12, elements={
0x000076803801b140, 0x0000768038277460, 0x00007680382789e0, 0x000076803827a2a0,
0x000076803827b8f0, 0x000076803827cea0, 0x000076803827e6b0, 0x000076803828abe0,
0x0000768038296220, 0x000076803938d470, 0x0000768039392cd0, 0x0000767f88000fe0
}
"main" #1 [1044652] prio=5 os_prio=0 cpu=2403.65ms elapsed=2386.66s tid=0x000076803801b140 nid=1044652 waiting on condition [0x000076803e7fa000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@21.0.10/Native Method)
...
Here’s the script:
#!/usr/bin/env ruby
# Runs `jstack` on the highest-numbered java process invoked by the command
# matching the given pattern. This is really helpful when you want to find out
# why `lein test` is stuck, because lein is a shell script, which launches
# java, which launches *a new* java process.
require 'set' # for Enumerable#to_set below; built in on Ruby >= 3.2, needed on older Rubies

# Parses a list of newline-separated pids as integers
def parse(pid_string)
pid_string.split(/\n/).map { |p| p.to_i }
end
# Recursively expands pid into [pid, child1, child2, ...]
def expand(pid)
children = parse `pgrep -P #{pid}`
children.reduce([pid]) do |all, child|
all.concat expand(child)
end
end
roots = parse %x[pgrep -f '#{ARGV.join(' ')}']
expanded = roots.flat_map { |p| expand p }
# Just JVMs please
jvms = parse(`jps`).to_set
fav = expanded.filter do |pid|
jvms.include? pid
end.max
unless fav
puts "No JVMs with that command line in their ancestry"
exit 1
end
# Show the command line so we can be sure of what process it was
cmd = `cat /proc/#{fav}/cmdline | tr '\\000' ' '`
puts "jstack #{fav} (#{cmd})"
puts
exec "jstack #{fav}"