Implementing functions in Kanipaan - The Beginnings
Notes
A Clojure port of XinJingHao’s PPO implementation using libpython-clj2, PyTorch, and Quil
Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).
The Clojure Documentary is live!
Afterward, enjoy the Clojure Documentary Q&A with Rich Hickey and other key people in Clojure’s history!
Don’t miss the Documentary show notes with links to:
The foundational research papers
Influential books
Rich’s talks
Historical archives
Dialects and runtimes
Community resources
Getting started videos
A glossary
and more!
The world is going through changes: in programming, technology, work, and in specific countries and regions, each with its own form of trouble, hope, or confusion.
People in Clojure communities, like elsewhere, are finding their way through it, sometimes with questions and sometimes with a sense of being alone in it.
The Clojure Community Check-In is a space to share how we’re doing.
Watch a short video from the organizers.
Sessions:
More details at: https://clojureverse.org/t/clojure-community-check-in/
September 30 – October 2, 2026
Charlotte Convention Center, Charlotte, NC
Join us for the largest gathering of Clojure developers in the world! Meet new people and reconnect with old friends. Enjoy two full days of talks, a day of workshops, social events, and more.
Early bird and group tickets are now on sale.
Is your company interested in sponsoring? Email us at clojure_conj@nubank.com.br to discuss opportunities.
Clojure real-world-data 57: Apr 24
Clojure Community Check-In: Apr 25-26. Register here.
Babashka Conf: May 8. Amsterdam, NL. See the schedule.
Dutch Clojure Days 2026: May 9. Amsterdam, NL. See the schedule.
Spring Lisp Game Jam 2026: May 14-24. Online.
How one programmer’s pet project changed how we think about software - CultRepo
Clojure Documentary Q&A - ClojureTV
Why one of the world’s largest digital banks chose Clojure and Datomic - CultRepo
Join us at the Clojure Community Check-In - Sci Cloj
Clojure Corner: Interview with Alex Miller - Flexiana
Avoiding Cyclic Dependency by Passing Functions as Arguments - Clojure Diary
Learn Ring - 10. More complex pages - Clojure Diary
Swish: Using Claude Code to Create a Lisp in Swift - Vars - Rod Schmidt
Runtime async in ClojureCLR - David Miller
Emmy and the EurOffice Spreadsheet – Clojure Civitas - Markus Agwin Kloimwieder
Building a Clojure interpreter from scratch | cljam - Reginaldo Junior
Biff 2.0 sneak peek - Jacob O’Bryant
Exploring core.async.flow as an Agent Executor - Şeref Ayar
One Tree, Many Forests: Representational Polymorphism in a Parallel Split/Join Tree Algebra - Dan Lentz
Debut release
webgen - Parameter driven web app generator
cljam - Clojure interpreter with a tokenizer, reader, macro expander, evaluator, incremental compiler, vite plugin, nREPL server compatible with calva on vscode, embedded browser REPL, CLI compatible with node and bun as host
bisql - Keep SQL executable, call it as Clojure functions 🚲️
miniforge-standards - Shared engineering standards for all miniforge.ai repositories
cljs-mjml - Write MJML email templates with Hiccup syntax in ClojureScript (or Node Babashka)
Updates
clojure 1.12.5-rc1 - The Clojure programming language
clj-kondo 2026.04.15 - Static analyzer and linter for Clojure code that sparks joy
baredom 2.2.0 - BareDOM: Lightweight CLJS UI components built on web standards (Custom Elements, Shadow DOM, ES modules). No framework, just the DOM
clj-format 0.1.2 - A Clojure DSL for cl-format inspired by Hiccup. No dependencies. Drop-in compatibility. The power of FORMAT made easy.
spel 0.9.5 - Idiomatic Clojure wrapper for Playwright. Browser automation, API testing, Allure reporting, and native CLI - for Chromium, Firefox, and WebKit
ordered-collections 0.2.1 - Fast, modern, ropes and ordered collections that do more than sort.
dexter 0.1-alpha-6 - Dexter - Graphical Dependency Explorer
glojure 0.6.5-rc17 - Clojure interpreter hosted on Go, with extensible interop support.
dataspex 2026.04.1 - See the shape of your data: point-and-click Clojure(Script) data browser
meme-clj 5.0.0 - meme-clj — M-Expressions with Macro Expansion
charm.clj 0.2.71 - A Clojure TUI (Terminal User Interface) library inspired by Bubble Tea
clj-xref 0.1.1 - LLM-friendly cross-reference database for Clojure code. Query who-calls, calls-who, who-implements, ns-deps to feed precise dependency neighborhoods to AI assistants instead of entire source trees. Built on clj-kondo.
babashka 1.12.218 - Native, fast starting Clojure interpreter for scripting
phel-lang 0.34.1 - A functional, Lisp-inspired language that compiles to PHP. Inspired by Clojure, Phel brings macros, persistent data structures, and expressive functional idioms to the PHP ecosystem.
nippy 3.7.0-beta1 - Fast serialization library for Clojure
statecharts 1.4.0-RC11 - A Statechart library for CLJ(S)
clojure-clr clojure-1.12.3-alpha7 - A port of Clojure to the CLR, part of the Clojure project
Over the years, I’ve experimented with almost every flavor of development environment. I’ve gone from manually provisioning tools on a Mac—hoping I’d remember every brew install six months later—to exploring Docker, Nix, and remote environments.
My journey has touched it all: asdf, Brew, Docker, Nix, and Devbox. I’ve jumped between terminal emulators and multiplexers like Tmux, Kitty, WezTerm, and Zellij.
My latest setup is built for speed, reproducibility, and a “keyboard-first” philosophy. It lives entirely in the terminal across two environments: my local iMac and an OCI Ampere VPS.
If you haven’t switched to a self-hosted GitHub runner yet, do it for your own sanity. You can thank me later.
By running your CI on your own hardware (like an OCI Ampere instance), you eliminate the overhead of public runners and gain full control over the environment. When paired with Devenv, your CI environment becomes an exact mirror of your local machine.
Configure your workflow’s .yml file to use the self-hosted label. By moving to a self-hosted ARM64 runner, my feedback loop became incredibly tight. My tests now finish in 24 seconds, and the entire image creation process takes just 1 minute and 22 seconds.
Here is what the streamlined job looks like:
jobs:
  test:
    runs-on:
      - self-hosted
      - Linux
      - ARM64
    steps:
      - uses: actions/checkout@v5
        with:
          fetch-depth: 0
      - name: Run tests
        id: run_tests
        run: devenv shell clojure -X:test

Because I’m using devenv shell, I don’t have to worry about whether the CI runner has Clojure, the right JDK, or specific libraries installed. If it works in my local terminal, it works in the CI. Period.
The way we use GitHub Actions today is often redundant. We spend an enormous amount of time writing complex YAML configurations to install dependencies, manage versions, and configure caching—essentially re-architecting our entire development environment for every single commit.
Provisioning and caching are solved problems. If you are using tools like Devenv or Nix, you’ve already defined exactly what your project needs to run. By moving to a self-hosted runner, you stop fighting the CI and start using it as a natural extension of your workstation. You gain:
If it runs in devenv shell, it will run in CI. No exceptions.

It’s time to stop treating CI like a special snowflake and start treating it like the high-performance terminal it should be. Stop provisioning twice, stop waiting for public runners, and start shipping faster.
Would you like to have a follow-up on this topic? What are your thoughts? I’d love to hear your experiences.
Written by: Nubank Editorial
What if, instead of relying on manual feature engineering, we could learn directly from raw financial behavior at scale?
That was the thesis we set out to defend on episode 122 of the Data Hackers podcast, Brazil’s largest Data and AI community. Founded in 2018, the community brings together thousands of data professionals and thought leaders to discuss the cutting edge of technology.
In conversation with hosts Monique Femme and Paulo Vasconcellos, Arissa Yoshida and Rafael Celente, Senior Research Engineers at Nubank, walked through the breakthroughs behind the paper “Your Spending Needs Attention: Modeling Financial Habits with Transformers” and how this research is already making its way into production.
The starting point is straightforward: financial institutions sit on massive volumes of data — transactions, in-app events, customer interactions — yet extracting real value from this data remains a hard problem. Its sequential, unstructured nature has historically pushed teams toward tabular models built on hand-crafted features.
The paper charts a different course: leveraging Transformer-based architectures and self-supervised learning to build representations directly from raw data. This work gave rise to nuFormer, a model that blends structured and textual transaction attributes and supports fine-tuning for tasks like credit scoring, fraud detection, and product recommendation — delivering measurable gains at scale.
From traditional machine learning to foundation models
To appreciate why this matters, consider where the industry started. For years, traditional ML models — particularly tree-based methods paired with heavy feature engineering — dominated financial applications. These models remain effective, but they hit a ceiling when the problem involves large volumes of unstructured data and the need to capture complex temporal patterns.
At Nubank, where we have an extraordinarily rich dataset — especially long sequences of financial transactions — this limitation becomes hard to ignore. As Arissa Yoshida puts it, these traditional approaches lean heavily on a manual, specialized step of variable construction.
“With traditional models, you rely heavily on handcrafted features — essentially building an entire engineering pipeline to extract value from your data. That requires people with deep domain expertise who can manually work through the data.”
Arissa Yoshida, Senior Machine Learning Engineer at Nubank
This dependency makes the process less scalable and more expensive, particularly as data volume and complexity grow. Rafael Celente reinforces this point by explaining that the challenge goes beyond modeling itself — it’s about generalization: “we have a massive dataset, and our hypothesis was that we could get a model to generalize customer behavior from that data.”
This limitation, combined with the need for models that learn directly from data, opens the door to foundation models in finance.
The key paradigm shift lies in how we look at the data. Rather than treating transactions as isolated records, the idea is to interpret them as sequences with structure, context, and meaning — much like natural language.
Transformers operate on tokens and learn relationships between them. By converting transactions into tokenized sequences, we can capture behavioral patterns at a much deeper level. The model doesn’t care whether it’s processing words, pixels, or financial events — what matters is the relationships between these elements.
This flexibility is precisely what makes it possible to apply an architecture originally designed for natural language to an entirely different domain like finance.
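As a rough illustration of this idea (a hypothetical sketch with invented field names and vocabulary, not Nubank's actual pipeline), tokenizing a transaction history might look like:

```python
# Hypothetical sketch: turn raw transactions into a token sequence a
# Transformer could consume. Field names, buckets, and special tokens are
# invented for illustration.
def tokenize_transaction(tx):
    # Discretize the amount into coarse buckets so it fits a small vocabulary.
    bucket = "amt_high" if tx["amount"] >= 100 else "amt_low"
    return [tx["merchant_category"], bucket, tx["channel"]]

def tokenize_history(transactions):
    tokens = ["[CLS]"]
    for tx in transactions:
        tokens.extend(tokenize_transaction(tx))
        tokens.append("[SEP]")  # separator between events, like sentences
    return tokens

history = [
    {"merchant_category": "grocery", "amount": 42.0, "channel": "card"},
    {"merchant_category": "travel", "amount": 900.0, "channel": "app"},
]
tokens = tokenize_history(history)
# → ['[CLS]', 'grocery', 'amt_low', 'card', '[SEP]',
#    'travel', 'amt_high', 'app', '[SEP]']
```

Once events are sequences of tokens like this, the attention mechanism can learn relationships between them exactly as it does between words in a sentence.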
This is the context in which nuFormer was born — a foundation model developed by Nubank’s AI Core team to learn representations from financial data at scale. The goal isn’t to solve a single problem, but to build a reusable foundation for different applications across the bank. From these representations, we can improve use cases like fraud detection, product recommendation, and risk modeling.
The key differentiator is generalization. Instead of training a model from scratch for every problem, nuFormer learns a representation of financial behavior that can be reused across multiple contexts, giving different applications a shared starting point.
As Arissa Yoshida explains in the episode, the vision behind this new kind of model is “to generalize and extract insight from raw, often unstructured data, and scale that across many different problems.”
Although the initial work started with transactions, the model quickly evolved to incorporate different data types. Today, the vision is multimodal — capable of integrating not just structured financial data, but also behavioral signals, in-app interactions, and other information sources.
This broadens the model’s potential significantly: it moves beyond isolated events to represent a more complete picture of customer behavior, unlocking more sophisticated applications.
This evolution also connects to other AI Core initiatives, such as AI agents that leverage these representations to operate in real-world scenarios at scale. The team shared these examples in the posts “Building AI agents in practice with Clojure” and “Building AI agents for 131 million customers”, here on Building Nubank.
One of the most insightful parts of the conversation made clear that the biggest challenge isn’t the model itself — it’s the engineering required to make it work. Training a model of this scale demands robust infrastructure: well-structured pipelines, GPU management, and distributed training. But the real pain point shows up when you try to take it to production.
Transformer-based models tend to have higher latency, which can be a sensitive factor in financial applications. Still, with the right infrastructure and specialized teams, it’s possible to achieve performance levels comparable to traditional models. This reality highlights that the challenge extends beyond ML — it’s a systems problem that requires cross-functional collaboration. As Rafael Celente sums it up: “it’s not just a machine learning problem — it’s a systems problem.”
This complexity extends to the role of data and model evaluation. While training at scale is already a reality, ensuring models are learning correctly remains one of the biggest challenges. That involves building consistent data pipelines, continuous monitoring, and defining metrics aligned with business impact.
On top of that, the financial sector adds another layer of rigor: governance. Models must pass multiple rounds of validation before going to production, ensuring compliance with regulations and internal standards. In this landscape, building foundation models requires the joint effort of data engineering, infrastructure, evaluation, product, and business teams — ensuring solutions not only work, but deliver sustainable real-world impact.
Deploying these models into existing systems has driven significant gains in key metrics within just a few months — surpassing improvements that had been accumulated over years with traditional approaches.
These gains aren’t limited to a single use case. The model is already being applied across multiple fronts, including credit, lending, income prediction, and cross-sell, demonstrating that the approach can be reused across a variety of contexts within the bank.
This reusability doesn’t just accelerate the development of new solutions — it creates a multiplier effect, allowing different products to benefit from the same learning foundation.
Looking ahead, our ambition isn’t just to keep up with the state of the art — it’s to contribute to it. That means exploring new architectures, expanding multimodal capabilities, and continuing to share what we learn with the community. As discussed in the episode, the goal is to challenge the status quo and set new standards for AI in finance.
Our appearance on Data Hackers Podcast #122 underscores a central pillar of our strategy: foundation models are already being applied in practice to solve real problems in finance, with direct impact on how we build products, make decisions, and scale intelligence.
By applying Transformers to model financial habits at scale, Nubank is building an AI platform that learns directly from data and evolves continuously. nuFormer in production, with applications in credit and beyond, shows how this approach can expand horizontally and generate consistent value.
If you want to work on problems like these — dealing with data at scale, developing foundation models, and impacting over 131 million customers — we’re hiring on the AI Core team.
The post How Nubank Uses Transformers to Model Financial Habits at Scale appeared first on Building Nubank.
I have for the past year or two been working on some large Biff changes, such as those discussed in Structuring large Clojure codebases with Biff and Biff support for XTDB v2 is in pre-release. Now that coding agents have gone mainstream (and in particular, now that I personally have started using them heavily), I've had a few more ideas for changes I'd like to make to Biff. And also thanks to coding agents, I've actually been able to make consistent progress instead of my Biff development time being bottlenecked by how late I can stay awake on weekend nights after my kids are sleeping. So we, fingers crossed, are getting close to some major Biff updates, and I figure I may as well slap a 2.0 label on it.
Here's what I've got in the works.
This is the biggest change. Biff will retain first-class support for XTDB, but it'll also have first-class support for SQLite, and I'll update the starter project to use SQLite by default. There will still be a (non-default) starter project that uses XTDB.
Biff has used XTDB since its (Biff's) initial release in 2020, back when the database was still called Crux. About a year ago I started working on migrating Biff from XTDB v1 to XTDB v2, which brings a whole new architecture, including column-oriented indexes that make analytical queries faster. Besides writing some Biff-specific helper code for XTDB v2, I migrated Yakread (a 10k-LOC article recommender system) to v2 and did a bunch of benchmarking for Yakread's queries. (A big thank you to the XTDB team who responded to lots of my questions during this time and also made a bunch of query optimizations!)
Long story short: despite the optimizations, I had trouble getting Yakread's page load times to be as quick as I wanted. For the particular queries Yakread runs—which are mostly row-oriented—I've generally found v2's performance to be slower than v1. There is also a larger per-query latency overhead, perhaps another design tradeoff of the new architecture (you can still run v2 as an embedded node within your application process, but it’s designed primarily to be run on a separate machine like more traditional networked databases).
I also will admit that before this benchmarking exercise I had not actually used SQLite much, and I was unaware of how ridiculously fast it is. And one of the main downsides of SQLite when compared to XTDB—that SQLite is a mutable database—is mitigated by Litestream, which streams changes to object storage and lets you restore from (and even run ad-hoc queries on) historical snapshots saved with 30-second granularity.
I could see myself switching back to XTDB at some point in the future. It's still the early days for v2 and the XTDB team is doing lots of work, including on query performance. And SQLite's speed comes with tradeoffs:
Scaling beyond one machine is an unsolved problem. LiteFS can let you put SQLite nodes in a cluster where writes get forwarded to a single leader and changes are streamed to the other nodes. However, to use it with Litestream, you have to disable automatic leader failover. So you basically have to choose between HA (high availability) and PITR (point-in-time recovery).
SQLite only supports a few basic datatypes: ints, floats, strings, and blobs (byte arrays). A large part of my work in integrating SQLite into Biff has been to set up automatic data type coercion so you can use richer types (UUID, boolean, instant, enum, map/set/vector) in your schema without having to do manual coercion when reading and writing.
Litestream's snapshots-at-30-second-granularity is fine for recovering from bad transactions like a DELETE FROM without the WHERE, but it's less helpful than XTDB/Datomic for the debugging-weird-production-issues use case: you can't include a transaction ID or similar in your production logs and then re-run queries with 100% confidence that the results you're seeing are what the application saw when it e.g. threw an unexpected exception.
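The automatic data type coercion described above can be illustrated outside Clojure too. Here is a minimal Python sqlite3 sketch (an analogue for illustration, not Biff's implementation) that round-trips UUIDs and booleans through SQLite's basic types:

```python
import sqlite3
import uuid

# Store UUIDs as hex TEXT; read them back as uuid.UUID objects.
sqlite3.register_adapter(uuid.UUID, lambda u: u.hex)
sqlite3.register_converter("uuid", lambda b: uuid.UUID(hex=b.decode()))

# Store booleans as 0/1 INTEGER; read them back as Python bools.
sqlite3.register_adapter(bool, int)
sqlite3.register_converter("boolean", lambda b: b == b"1")

# PARSE_DECLTYPES makes sqlite3 apply converters based on declared column types.
con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con.execute("CREATE TABLE stuff (id uuid PRIMARY KEY, done boolean)")

row_id = uuid.uuid4()
con.execute("INSERT INTO stuff VALUES (?, ?)", (row_id, True))
got_id, got_done = con.execute("SELECT id, done FROM stuff").fetchone()
assert got_id == row_id and got_done is True
```

The point is the same as in Biff's SQLite integration: the application code reads and writes rich types while the database only ever sees ints, strings, and blobs.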
I was chatting with Jeremy from the XTDB team last week, and he mentioned they've been working on having XTDB ingest changes directly from Postgres. It sounds like it shouldn't be much work to make that work with SQLite too, which means that you could stick an XTDB node alongside your SQLite-powered Biff app and then get more granular historical queries. Maybe XTDB could be a replacement for Litestream?
That could get even more interesting if eventually we can do the inverse as well, where data from our immutable XTDB log could be sent both to a bitemporal index for historical queries and also to SQLite "indexes"/databases for the application servers to use. That would solve the HA problem too.
Anyway. However it happens, I'm looking forward to the glorious future when we finally have an inside-out database that's fast for all query shapes, highly available, models time correctly, and can even do advanced things like let you put a UUID in it. In the meantime, I think SQLite is a reasonable default given Biff's focus on solo developers, and I would absolutely consider XTDB today for situations in which modeling time correctly is a top concern.
Biff consists of a starter project, a bunch of helper code exposed through a single com.biffweb namespace, tooling for CLI tasks and deployment, and a big pile of documentation. The com.biffweb namespace is on its way out: I'll be publishing Biff helper code as individual libraries like com.biffweb.sqlite (and com.biffweb.xtdb), com.biffweb.authentication, com.biffweb.middleware, com.biffweb.config, etc.
Part of the motivation for this change is that Biff is more mature than it was five years ago and it's become more clear what the different cohesive parts of Biff should actually be. I started out with a single kitchen-sink library because splitting it up felt premature; I didn't think it would realistically make sense to use one of them outside a standard Biff project that would already be depending on all the Biff libraries anyway.
But over the past few months, I've been developing a couple new side projects from scratch without even using Biff. As I've done this, I've started extracting various things into standalone libraries, and this time I do see them as useful libraries in their own right. For example, the new biff.authentication library will be an easy way to add email-based authentication to any Clojure web app that uses Reitit—it even comes with a default sign-in page.
The other factor behind this change is agent-driven development. The difficulty of mixing-and-matching different libraries is dramatically easier now to the point where I wondered briefly if Biff was even needed anymore. Developing those new side projects via agent has disabused me of that notion: agents still need a lot of structure (e.g. in the form of these Biff libraries) to guide them. Even for starting new projects, why have everyone generate a different starter project via some prompt when you could have a single person generate the starter project, make sure it actually works, and then publish that?
That's still a meaningful change though: the effort required to create and maintain new project templates has decreased significantly. So I think it makes more sense for Biff to be split up into multiple libraries that can themselves be mixed-and-matched. I will myself provide Biff starter projects for SQLite and XTDB, respectively. If anyone else wants to make a Biff starter project variant with different library choices, they'll similarly be able to do that without much effort.
For vanity reasons, I'll need to continue having a single "main" Biff repo of some sort (did I mention Biff hit 1,000 github stars recently?). Maybe I'll have that repo be the default starter project.
Two of these Biff libraries that happen to contain some new stuff—instead of being a splitting-out of code that was already in Biff—are biff.graph, which lets you structure your domain model as a queryable graph, inspired by Pathom; and biff.fx, which helps you remove effectful code from your application logic via state machines.
Both libraries help you write purer code (and thus code that's easier to understand and test). biff.graph is a higher-level abstraction that helps with code that reads data. biff.fx is a lower-level thing that I mostly use when writing data. However, they're also useful together: e.g. my GET request handlers are typically biff.fx machines that run a biff.graph query and pass the results to the (now pure) rendering code:
(def some-route
  ["/some-page/:id"
   {:get
    (fx/machine ::some-page
      :start
      (fn [{:keys [path-params] :as request}]
        {:stuff [:biff.fx/graph
                 {:stuff/id (parse-uuid (:id path-params))}
                 [:stuff/foo :stuff/bar]]
         :biff.fx/next :render-stuff})
      :render-stuff
      (fn [{:keys [stuff] :as request}]
        {:status 200
         :headers {"content-type" "text/html"}
         :body (render-html
                [:div "foo: " (:stuff/foo stuff)
                 ", bar: " (:stuff/bar stuff)])}))}])
biff.fx provides a defroute macro to make this kind of thing more concise, so the code I actually write looks more like this:
(fx/defroute some-page "/some-page/:id"
  [:biff.fx/graph
   {:params/stuff [:stuff/foo :stuff/bar]}]
  :get
  (fn [request stuff]
    [:div
     "foo: " (:stuff/foo stuff)
     ", bar: " (:stuff/bar stuff)]))
I'll save a fuller explanation for later; hopefully that gives you the flavor of what these libs do.
I've been using Pathom heavily over the past few years, both for work and pleasure. I've started referring to the code structure it enables as “data-oriented dependency injection.” It helps you structure your application in small easy-to-understand chunks that declare exactly what data they need as input and what data they provide as output. The main downside in my experience is that it can be difficult to understand exactly what Pathom is doing and debug when things go wrong.
For “serious” projects, that's a price worth paying. For the kinds of solo projects that Biff is aimed at, I've felt apprehensive about foisting another layer of abstraction on people for code structure benefits that they may or may not notice.
However, my own experience is that even for small apps, the benefit is real. So biff.graph is an attempt to provide the same graph computational model / “data-oriented dependency injection” with as small of an implementation as possible: biff.graph is about 400 lines of code currently, whereas Pathom is closer to 10k.
The main tradeoff I've made in service of that goal is to omit the query planning step that Pathom uses. biff.graph traverses directly over your input query, looking up which resolver(s) to call for each attribute as it goes. For each resolver, biff.graph runs what is more-or-less a separate query to get that resolver's inputs. This hopefully makes biff.graph easier to trace and understand what it's doing, but it also means biff.graph isn't able to optimize the query plan the way Pathom does. (biff.graph does support batch resolvers and caching at least).
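To make the planner-free traversal concrete, here is a hypothetical sketch in Python (invented API and names, not biff.graph's actual code): each resolver declares its inputs and outputs, and resolution walks the query directly, recursively resolving each resolver's inputs as a sub-query, with a shared cache:

```python
# Hypothetical sketch of planner-free graph resolution. No ahead-of-time
# query plan: we look up which resolver provides an attribute as we go.
def make_resolver(inputs, outputs, fn):
    return {"inputs": inputs, "outputs": outputs, "fn": fn}

def resolve(resolvers, entity, attr, cache):
    if attr in cache:
        return cache[attr]
    # Find a resolver that can produce this attribute.
    r = next(r for r in resolvers if attr in r["outputs"])
    # Recursively resolve that resolver's own inputs first (a sub-query).
    inputs = {a: entity[a] if a in entity else resolve(resolvers, entity, a, cache)
              for a in r["inputs"]}
    cache.update(r["fn"](inputs))
    return cache[attr]

def query(resolvers, entity, attrs):
    cache = {}
    return {a: resolve(resolvers, entity, a, cache) for a in attrs}

resolvers = [
    make_resolver(["user/id"], ["user/name"],
                  lambda i: {"user/name": f"user-{i['user/id']}"}),
    make_resolver(["user/name"], ["user/greeting"],
                  lambda i: {"user/greeting": "hello, " + i["user/name"]}),
]

result = query(resolvers, {"user/id": 7}, ["user/greeting"])
# → {'user/greeting': 'hello, user-7'}
```

This mirrors the tradeoff described above: the traversal is easy to trace step by step, but nothing reorders or batches the resolver calls the way a Pathom-style planner could.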
biff.fx is more of an original creation. Instead of a single function, you have a map of functions, one for each state. Effects happen in the transitions. You define global “fx handlers” that do things like HTTP requests, database queries/transactions, etc, represented by keywords (e.g. :biff.fx/graph in the example). I’ve changed up the format for describing effects a few times; I think I've finally landed on something that feels ergonomic ([:do-something arg1 arg2] as a replacement for (do-something! ctx arg1 arg2)).
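The effects-as-data idea can be sketched generically (hypothetical Python with invented handler names, not biff.fx's implementation): state functions return a plain data description of the effect, and a central interpreter dispatches it to a registered handler:

```python
# Hypothetical sketch of effects-as-data: instead of calling
# do_something(ctx, a, b) directly, code returns ["do-something", a, b]
# and a central interpreter looks up the handler by name.
def run_effect(handlers, effect):
    key, *args = effect
    return handlers[key](*args)

log = []
handlers = {
    "log": lambda msg: log.append(msg),  # an effectful handler
    "add": lambda a, b: a + b,           # a pure handler, for illustration
}

assert run_effect(handlers, ["add", 1, 2]) == 3
run_effect(handlers, ["log", "hi"])
assert log == ["hi"]
```

The payoff is the same one the post describes: the code that *decides* which effects to run stays pure and trivially testable, while the effectful machinery lives in one central registry.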
Biff entered this world as a replacement for Firebase, which I had enjoyed using but which left me with the desire for a regular long-lived Clojure backend. Firebase lets your frontend submit arbitrary transactions, which are then checked against some centralized authorization rules you define (e.g. “documents in the stuff table can only be edited if the current user's ID is the same as stuff.user_id”). I implemented a similar thing where you would submit transactions in a format similar to Firebase's, then I would translate them to XTDB's transaction format and pass a diff of the database changes to your authorization functions.
I ended up abandoning the SPA approach altogether for server-side rendering (with htmx), and that made authorization rules unnecessary since transactions were originating from the backend: I no longer needed to validate completely arbitrary transactions.
Once again, coding agents have changed the game. When working on mature codebases, of course we all read our generated code carefully before submitting a pull request. But when I've got a new app idea, I want to mostly just vibe code it until I get to the MVP. I'd like to be able to do a light review just to make sure the structure of the code is reasonable. With authorization rules, you can carefully review those central rules in the same way you'd carefully review the database schema, and then you can have confidence that the feature code isn't missing an authorization check. (Of course you still have to make sure the agent didn't bypass the authorization rules...)
This is only for writing data. For reading data, I typically have a few Pathom/biff.graph resolvers that e.g. read an entity ID from the incoming request's path parameters and ensure the user has access to that entity (like the :params/stuff resolver alluded to in the example above). Other related entities are queried as joins against that root entity, so if the authorization check fails, the rest of the query will fail too. So once again you have a way to put authorization logic in a central place that can be reused by your feature-specific code.
As mentioned above, Biff uses htmx. I like server-side rendering and I think it's a particularly good fit for Biff's solo developer focus. htmx however has a critical flaw: it's too popular. It has 47k github stars—that's half of what Tailwind has.
Datastar fixes this problem by being a much younger project—a niche of a niche. There is a much smaller chance that your colleagues will have heard of it. Datastar also has some smaller but still tangible benefits:
Of the changes I've mentioned, this one is the most experimental. I actually haven't even made an official decision if I really will switch Biff from htmx to Datastar; at this point I'm just making a prediction that I probably will.
More broadly I would like to explore how far I can push the server-side rendering model before I feel it breaking down. e.g. what approach would I use with it to handle forms with 50+ fields and lots of conditional logic, complex validation logic etc? How about charts? (What I'm getting at: would I regret asking an LLM to migrate our large codebase at work over to htmx/Datastar?).
I’d like to give an honorable mention to inline snapshot testing, which I’ve been excited about for a year and a half but now find unnecessary—counterproductive, even—with coding agents. I had started working on some updates to my test code so you could do inline snapshot tests in plain .clj files instead of in .edn files (turns out that tooling support is best when you put your code in files meant for code). But with coding agents, I’ve found that I don’t want tests that auto-update when the actual result changes: it’s too easy for agents to ignore new results that are obviously incorrect. And of course I don’t care if my coding agent finds updating unit tests to be tedious. So the test-related stuff that Biff does will be limited to making your application code more pure so you (your agent) can write dumb (is (= (f x) y)) tests. I might add some structure/patterns for integration tests, though.
Another change driven by coding agents, not a change to the code but a change to my philosophy: I'm more interested in smaller projects. As mentioned, my time for working on personal projects has been extremely limited until a few months ago. I've only ever had a single Biff project at a time that I have attempted to work on regularly; new projects started after the old one failed. So the primary use case I designed Biff for was “serious side projects,” applications that may be solo projects now but will definitely be bringing in a 6-figure income and fulfilling all your entrepreneurial desires at... some point. That one project is the only thing I've ever had a chance of having time for.
Now I can code up an MVP for something over a weekend without ever sitting down at my desk. I built an app that helps me find good Star Wars: Unlimited decks to play. I'm building a blogging platform next. After that maybe I'll build a music recommender system. Or a state legislation tracker/summarizer.
I'm having a blast. Maybe that will affect design decisions I make down the road? I certainly am interested in the use case of doing agent-driven development from a mobile device, so maybe expect something in that area.
I have been building an experimental graph-based AI agent runtime in Clojure called Ayatori. It is a side project where I try to stay hands-on with new ideas, in my favorite language. For Ayatori, I had in mind agents as graphs: nodes as functions, edges defining routing, and an executor tying it together.
This article is about exploring whether core.async.flow could replace the executor.
In Ayatori, an agent is a data structure. It declares nodes, edges, capabilities, and dependencies.
{:nodes {:preprocess (fn [input] {:result {...}})
         :llm        {:type :llm :client ... :prompt "..."}
         :search     (fn [input] {:result {...}})}
 :edges {:preprocess :llm
         :llm        {:search :search :done :ayatori/done}
         :search     :llm}
 :caps  {:chat {:entry :preprocess}}
 :deps  [:external-cap]}

Nodes are either plain functions or typed maps for built-in behaviors like LLM invocation or fan-out. Edges are either a keyword (unconditional routing) or a map from a route key to a target node (conditional routing). Caps declare which entry points are externally callable. Deps declare which external capabilities the agent needs at runtime, resolved by the system at start time.
The executor’s job is to take this definition and run it.
What I was building by hand was a straightforward go-loop:
(defn execute [graph input]
  (async/go
    (loop [node (:entry graph)
           data input
           step 0]
      (when (>= step max-steps)
        (throw (ex-info "Max steps exceeded" {})))
      (let [result (<! (invoke-node graph node data))]
        (when (instance? Throwable result)
          (throw result))
        (if-let [next (resolve-route graph node result)]
          (recur (:next-node next) (:input next) (inc step))
          (extract-result result))))))

This worked well enough for an experiment. But I kept accumulating concerns around it: What if I need to pause mid-execution? What if a node produces faster than the next can consume? What if I want to stop one node without tearing down the whole graph? Each concern meant more code in the executor.
For readers unfamiliar with core.async.flow, the overview and guide are good starting points.
Thinking about those concerns (node lifecycle, channel wiring, backpressure, routing), the problem started to feel familiar. These are the kinds of concerns a runtime handles, not application code. Then I remembered core.async.flow. I had not found the time to look at it carefully when it was first announced. Working on this executor turned out to be a good reason to go back.
core.async.flow describes itself as a library for building “concurrent, event driven data processing flows out of communication-free functions, while centralizing control, reporting, execution and error handling.” The model is a directed graph of processes communicating via channels. You define step functions that transform data. The flow manages channel creation, process lifecycle, backpressure, and error handling.
Those were the same concerns accumulating in the executor. At first glance, the agent DSL and flow’s topology model seem to map well onto each other. An agent is a directed graph of computation steps. A flow is a directed graph of processes communicating through channels. The unit of work in both is a function that takes input and produces output. The connections are explicit and declared separately from the logic. That structural similarity is what made me want to try this.
Agent → Flow graph
Node → Process
Edge → Connection [[from :out] [to :in]]
Conditional edge → Multi-output process, route key selects port
Cap entry point → flow/inject target
Dep (cross-agent call) → IO process, blocking resolver
Fan-out node → Multi-output process with step state correlation
Graph result → Collector sink
The topology that I was managing imperatively in a go-loop would become a data structure passed to create-flow. Each node requires a step function and explicit port declarations, but the wiring and lifecycle are handled by the framework.
A simplified view of what that topology definition looks like:
{:procs {:preprocess {:proc (process #'preprocess-step)}
         :llm        {:proc (process #'llm-step) :args llm-config}
         :search     {:proc (process #'search-step)}
         :collector  {:proc (process (collector-step registry))}}
 :conns [[[:preprocess :out] [:llm :in]]
         [[:llm :search]     [:search :in]]
         [[:search :out]     [:llm :in]]
         [[:llm :done]       [:collector :in]]]}

The nodes, edges, and routing from the Ayatori definition translate directly into procs and conns.
flow’s model is fire-and-forget. A caller injects a message and the graph processes it downstream. There is no built-in way to wait for a result.
Ayatori needs callers to wait for a result. This mismatch was not a surprise. Flow is a poor fit for RPC-style request paths, as Alex Miller noted on Ask Clojure. What I wanted to find out was what bridging that gap would actually cost in practice.
What I did was build request/reply semantics on top of flow using a correlation ID.
When a caller invokes a cap, a UUID is generated and stored in a registry alongside a promise channel. The message carries that ID through the entire flow. The terminal node is a dedicated collector process that delivers results directly to the caller’s channel. I initially considered using ::flow/report for result delivery, but the guide describes it as a channel for unified logging across the flow. Using it for results would be working against its purpose, so I went with the collector process instead.
;; inject side
(let [corr-id   (str (random-uuid))
      result-ch (async/promise-chan)]
  (swap! registry assoc corr-id {:ch result-ch})
  (flow/inject flow [entry-key :in] [{:data input :corr-id corr-id}])
  result-ch)

;; collector step
(defn- collector-step [registry]
  (flow/map->step
    {:describe (fn [] {:ins {:in ""}})
     :transform
     (fn [state _ msg]
       (when-let [{:keys [ch]} (get @registry (:corr-id msg))]
         (async/put! ch (:data msg))
         (swap! registry dissoc (:corr-id msg)))
       [state {}])}))

;; topology - terminal outputs route to collector
{:procs {...
         :collector {:proc (process (collector-step registry))}}
 :conns [...
         [[:llm :done] [:collector :in]]]}

Error handling follows the same pattern. Processes throw exceptions with the corr-id in ex-data. A consumer on the flow’s error channel extracts the corr-id and delivers the exception back to the correct caller. Without this, errors would go to the error channel but never reach the caller waiting on the promise.
(catch Throwable e
  (throw (ex-info "Node failed" {:corr-id corr-id} e)))

;; error handler
(let [corr-id (some-> err ::flow/ex ex-data :corr-id)]
  (when corr-id
    (deliver-result! registry corr-id (::flow/ex err))))

The corr-id is a plain string. In multi-node work, the same pattern can apply across process boundaries without structural change.
This also introduces external mutable state. The registry is an atom that lives outside the flow. Flow’s own model assumes communication-free functions. The collector process breaks that assumption: it reads from and writes to the registry. This feels like forcing the model rather than working with it.
Fan-out requires coordinating results from multiple branches before emitting a final output. A fan-out node broadcasts to N processes and waits for all of them to respond.
flow does not yet provide a built-in way to wait for results from multiple branches. Rich Hickey has noted a planned sync->map process that will handle exactly this. For now, step state serves as a workaround: the fan-out process accumulates branch results across invocations and emits only when all have arrived.
This comes with tradeoffs. If a branch never responds, the pending entry stays in state indefinitely. There is no timeout in this implementation. For an experiment on a single node this is acceptable, but it is something to address before going further.
Each broadcast generates a fan-out ID. The step state holds a pending map keyed by that ID. When all expected results arrive, the step emits the aggregate with the original corr-id.
;; step state init
([_params] {:pending {}})

;; on input - broadcast to all branches
([state :in msg]
 (let [fan-id  (random-uuid)
       outputs (into {} (map (fn [b] [b [{:data (:data msg) :fan-out-id fan-id}]])
                             branches))]
   [(assoc-in state [:pending fan-id] {:expected (set branches)
                                       :results  {}
                                       :corr-id  (:corr-id msg)})
    outputs]))

;; on branch result - collect and maybe emit
([state result-port msg]
 (let [fan-id  (:fan-out-id msg)
       branch  (parse-branch result-port)
       updated (update-in state [:pending fan-id :results] assoc branch (:data msg))]
   (if (all-branches-done? updated fan-id)
     [(dissoc-in updated [:pending fan-id])
      {:out [{:data    (get-in updated [:pending fan-id :results])
              :corr-id (get-in updated [:pending fan-id :corr-id])}]}]
     [updated {}])))

The LLM node handles streaming responses. Tokens arrive on a channel as the provider sends them. The node needs to read from that channel continuously until the stream is done, then emit the final result.
The approach is a self-loop: the process routes output back to one of its own input ports. On each invocation, it reads one event from the stream channel and either loops back to read another or emits the final result.
;; on request arriving at :in
([state :in msg]
 (let [stream-ch (llm/start-stream config (:data msg) state)]
   [(assoc state :stream-ch stream-ch :corr-id (:corr-id msg))
    {::self-out [{:type :next}]}]))

;; on self-loop trigger at ::self-in
([state ::self-in _]
 (let [event (async/<!! (:stream-ch state))]
   (case (:type event)
     :delta [(update state :accumulated str (:delta event))
             {::token-out [{:delta (:delta event) :corr-id (:corr-id state)}]
              ::self-out  [{:type :next}]}]
     :done  [(dissoc state :stream-ch :corr-id :accumulated)
             {::done-out [{:data (:message event) :corr-id (:corr-id state)}]}])))
The process routes ::self-out back to ::self-in in the flow topology. This keeps the process self-contained: it does not need a reference to the flow itself, and the loop is visible in the topology definition rather than hidden inside imperative code.
Ayatori targets Java 21+. Processes declared with :workload :io run on virtual threads. That is why a blocking read inside the self-loop is not a practical concern. If flow were used on an older JVM, this implementation would need a different approach.
The flow guide says a step need not output at all: it can receive input, update its state, and emit nothing until it is ready. I think this self-loop pattern fits that intent. That said, the blocking read is working around flow’s async model rather than with it. It holds for now, but it is worth noting.
In this branch, lifecycle management that I would otherwise write by hand comes from the framework. Pause, resume, stop, and ping work across the entire graph or per process without additional code. Backpressure is automatic. The topology is still inspectable as data before execution begins.
(aya/pause-agent! sys :assistant)
(aya/ping-agent sys :assistant)
(aya/resume-agent! sys :assistant)

These things could certainly be added to the hand-written executor. But each one takes time, and none of them is what this experiment is actually about. Each one was a solved problem I was solving again. The execution model is now more explicit: the topology is a data structure, the connections are declared, and the lifecycle is managed in one place. And with core.async.flow-monitor, visualization comes for free as well:
(aya/describe-topology agent)
;; => {:procs {...} :conns [...] :entry-key :preprocess :deps #{...}}

Where flow worked well in this experiment: topology as data, lifecycle management, backpressure, observability. These came for free and replaced code I would have written by hand.
Where it required bridging: request/reply semantics, fan-out coordination, streaming. Each of those is outside what flow is designed for. The correlation pattern introduces external mutable state. The fan-out workaround has no timeout. The self-loop uses a blocking read inside the step function, bypassing flow’s assumption that step functions do not interact with channels directly.
Rich Hickey’s “Effective Programs” talk distinguishes between situated programs (long-running systems with state, context, and lifecycle) and transient programs (short-lived computations that start, do work, and finish). Ayatori’s runtime is situated. But its external API is transient: invoke a cap, get a result back. Flow handles the situated side well. That tension was always going to require bridging. Whether the bridge I built here is worth maintaining is what I am still working out.
Each workaround gave me the feeling of using flow outside its intended purpose. That might still be acceptable. I have not decided. What I did learn is what those costs actually look like in practice, which is what I set out to find.
The branch is not merged. The code is here: https://github.com/serefayar/ayatori/tree/flow
If you have worked with flow in a similar context, or think I am looking at this the wrong way, I would be glad to hear it.
Hi friends,
I’ll be attending Babashka Conf on May 8 and Dutch Clojure Days on May 9. If you’re attending either (or just visiting Amsterdam), drop me a line!
When I have an idea for a project, it tends to go in one of these two directions:
I just do it. Maybe I make a few minor revisions, but often it turns out exactly how I’d imagined and I’m happy.
I think, “I should look for prior art”. There’s a lot of prior art, dealing with a much broader scope than I’d originally imagined. I start to wonder if I should incorporate that scope. Or perhaps try to build my thing on top of the existing sorta-nearby-solutions. Or maybe I should just use the popular thing. Although I could do a better job than that thing, if I put a bunch of time into it. But actually, I don’t want to maintain a big popular project, nor do I want to put that much time into this project. Uh oh, now I’ve spent a bunch of time, having neither addressed the original issue nor experienced the joy of creating something.
I prefer the first outcome, and I think the pivotal factor is how well I’ve internalized my own success criteria.
For example, last weekend I hosted my friend Marcin and we decided it’d be fun to do some woodworking, so we threw together this shelf and 3d-printed hangers for my kitchen:

Absolute banger of a project:
The main success criterion was to jam on woodworking with a friend, and that helped me not overthink the object-level success criteria: just make a shelf for my exact kitchen!
In contrast, this past Friday I noticed difftastic did a poor job, so I decided to shop around for structural/semantic diff tools and related workflows (a topic I’ve never studied, that I’m increasingly interested in as I’m reviewing more and more LLM-generated code).
I spent 4 hours over the weekend researching existing tools (see my notes below), going through dark periods of both “semantic tree diffing is a PhD-level complex problem” and “why do all of these have MCP servers? I don’t want an MCP server”, before I came to my senses and remembered my original success criteria: I just want a nicer diffing workflow for myself in Emacs, I should just build it myself — should take about 4 hours.
I’m cautiously optimistic that, having had this realization and committing myself to a minimal scope, I’ll be able to knock out a prototype before running out of motivation.
However, other long-running interests of mine:
seem to be deep in the well of outcome #2.
That is, I’ve spent hundreds of hours on background research and little prototypes, but haven’t yet synthesized anything that addresses the original motivating issue.
It’s not quite that I regret that time — I do love learning by reading — but I have a nagging sense of unease that my inner critic (fear of failure?) is silencing my generative tendencies, keeping me from the much more enjoyable (and productive!) learning by doing.
I think in these cases the success criteria has been much fuzzier: Am I trying to replace my own usage of Rust/Clojure? Only for some subset of problems? Or is it that I actually just need a playground to learn about language design/implementation, and it’s fine if I don’t end up using it?
Ditto for CAD: Am I trying to replace my commercial CAD tool in favor of my own? Only for some subset of simple or particularly parametric parts? Do I care if it’s useful for others? Does my tool need to be legibly different from existing open-source tools?
It’s worth considering these questions, sure. But at the end of the day, I’d much rather have done a lot than have only considered a lot.
So I’m trying to embrace my inner clueless 20-year-old and just do things — even if some turn out to be “obviously bad” in hindsight, I’ll still be coming out ahead on net =D
Of course, there’s only so much time to “just do things”, and there’s a balance to be had. I’m not sure how many times I’ll re-learn YAGNI (“you ain’t gonna need it”) in my career, but I was reminded of it again after writing a bunch of code with an LLM agent, then eventually coming to my senses and throwing it all out.
I wanted a Finda-style filesystem-wide fuzzy path search for Emacs. Since I’ve built (by hand, typing the code myself!) this exact functionality before (walk filesystem to collect paths, index them by trigram, do fast fuzzy queries via bitmap intersections), I figured it’d only take a few hours to supervise an LLM to write all the code.
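The parenthetical pipeline (collect paths, index by trigram, intersect candidate sets) can be sketched in a few lines of Clojure. This is a hypothetical illustration of the idea, not Finda's actual code; plain sets stand in for the bitmaps:

```clojure
(require '[clojure.set :as set])

;; Hypothetical sketch of trigram indexing, not Finda's actual code.
;; Map each trigram to the set of path indices containing it.
(defn trigrams [s]
  (set (map #(apply str %) (partition 3 1 s))))

(defn build-index [paths]
  (reduce (fn [idx [i path]]
            (reduce (fn [m t] (update m t (fnil conj #{}) i))
                    idx
                    (trigrams path)))
          {}
          (map-indexed vector paths)))

;; A query's candidate paths are those containing all of its trigrams
;; (Finda does this with fast bitmap intersections; sets stand in here).
(defn candidates [index query]
  (->> (trigrams query)
       (map #(get index % #{}))
       (reduce set/intersection)))

(def index (build-index ["/home/a/notes.txt" "/home/b/todo.org"]))
(candidates index "notes")  ;; => #{0}
```

Candidates still need a scoring pass afterward, since trigram intersection only narrows the search; that's where a matcher like Nucleo comes in.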
I started with a “plan mode” chat, and the LLM suggested a library, Nucleo, which turned up since I wrote Finda (10 years ago, eek!).
I read through it, found it quite well-designed and documented, and decided to use it so I’d get its smart case and Unicode normalization functionality.
(E.g., query foo matches Foo and foo, whereas query Foo won’t match foo; similarly for cafe and café.)
Finding a great library wasn’t the problem; the problem was that Nucleo also supported some extra functionality: anchors (^foo only matches at the beginning of a line).
This got me thinking about what that might mean in a corpus that consists entirely of file paths.
Anchoring to the beginning of a line isn’t useful (everything starts with /), so I decided to try and interpret the anchors with respect to the path segments.
E.g., ^foo would match /root/foobar/ but not /root/barfoo/.
But to do this efficiently, the index needs to keep track of segment boundaries so that the query can be checked against each segment quickly.
But then we also need to handle a slash occurring in an anchored query (e.g., ^foo/bar) since that wouldn’t get matched when only looking at segments individually (root, foo, bar, and baz of a matching path /root/foo/bar/baz/).
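For the single-segment case, the interpretation I had in mind looks roughly like this (a hypothetical sketch, not the code I ended up with; it ignores the ^foo/bar complication):

```clojure
(require '[clojure.string :as str])

;; Hypothetical sketch: interpret ^foo as "some path segment starts with foo".
(defn segment-anchored-match? [query path]
  (let [q        (subs query 1)  ; drop the leading ^
        segments (remove str/blank? (str/split path #"/"))]
    (boolean (some #(str/starts-with? % q) segments))))

(segment-anchored-match? "^foo" "/root/foobar/")  ;; => true
(segment-anchored-match? "^foo" "/root/barfoo/")  ;; => false
```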
Working through this took several hours: first throwing around design ideas with an LLM, having it write code to wrap Nucleo’s types, then realizing its code was bloated and didn’t spark joy, so finally writing my own (smaller) wrapper.
Then, after a break, I realized:
Segment anchoring can be had by just adding a / to the start or end of a query (this works for everything except anchoring to the end of a filename).

So I tossed all of the anchoring code.
I’m pretty sure I still came out ahead compared to if I’d tried to write everything myself sans LLM or discussion with others, but I’m not certain.
Perhaps there’s some kind of conservation law here: Any increases in programming speed will be offset by a corresponding increase in unnecessary features, rabbit holes, and diversions.
Speaking of unnecessary diversions, let me tell you everything I’ve learned about structural diffing recently — if you have thoughts/feelings/references in this space, I’d love to hear about ‘em!
When we’re talking about code, a “diff” usually means a summary of the line-by-line changes between two versions of a file.
This might be rendered as a “unified” view, where changed lines are prefixed with + or - to indicate whether they’re additions or deletions.
For example:

We’ve removed coffee and added apple.
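In plain text, a unified diff with that change (over hypothetical file contents) looks like:

```
 milk
-coffee
 tea
+apple
```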
The same diff might also be rendered in a side-by-side view, which can be easier to read when there are more complex changes:

The problem with these line-by-line diffs is that they’re not aware of higher-level structure like functions, types, etc. — if some braces match up somehow between versions, they might not be shown at all, even if the braces “belong” to different functions.
There’s a wonderful tool, difftastic, which tries to address this by calculating diffs using treesitter-provided concrete syntax trees. It’s a huge improvement over line-based diffs, but unfortunately it doesn’t always do a great job matching entities between versions.
Here’s the diff that motivated this entire foray:

Note that it doesn’t match up struct PendingClick: it shows it as deleted on the left and added on the right.
I haven’t dug into why difftastic fails to match here, but I do feel like it’s wrong — even if the overall diff would be longer, I’d still rather see PendingClickRequest and PendingClick matched up between both sides.
Here’s a summary of tools / references in the space:
The most “baked” and thoughtful semantic diff tool I found is, perhaps unsurprisingly, semanticdiff.com, a small German company with a free VSCode plugin and web app that shows diffs for github PRs. Unfortunately they don’t have any code libraries I can use as a foundation for the workflow I want.
Context-sensitive keywords in particular were a constant source of annoyance. The grammar looks correct, but it will fail to parse because of the way the lexer works. You don’t want your tool to abort just because someone named their parameter “async”.
mergiraf: treesitter-based merge-driver written in rust
weave: also a treesitter-based merge-driver written in Rust
sem diff --verbose HEAD~4; it showed lines as having changed that…didn’t change at all.
diffast: tree edit-distance of ASTs based on an algorithm from a 2008 academic paper.
autochrome: Clojure-specific diffs based on dynamic programming
Tristan Hume has a great article on Designing a Tree Diff Algorithm Using Dynamic Programming and A*
My primary use case is reviewing LLM output turn-by-turn — I’m very much in-the-loop, and I’m not letting my agent (or dozens of them, lol) run wild generating 10k+ lines of code at a time.
Rather, I give an agent a scoped task, then come back in a few minutes and want to see an overview of what it did and then either revise/tweak it manually in Emacs or throw the whole thing out and try again (or just write it myself).
The workflow I want, then, is basically Magit’s workflow for reviewing and staging changes, but on an entity level rather than file/line level.
In light of the "minimal scope, just get your project done” lesson I’ve just re-learned for the nth time, my plan is to:
Once that seems reasonable (i.e., it does a better job than difftastic did on that specific commit), I’ll:
Mayyybe if I’m happy with it I’ll end up releasing something. But I’m not trying to collect Github stars or HN karma, so I might just happily use it in the privacy of my own home without trying to “commercialize it”.
After all, sometimes I just want a shelf.
I’m in the market for a few square meters of Tyvek or other translucent, non-woven material suitable for building a light diffuser — let me know if you have any favorite vendors that can ship to the EU.
How They Made This - Coinbase Commercial Breakdown. Crypto is a negative-sum parasite on productive economic activity, but has the silver lining of funneling a lot of capital to weird creative folks.
The Easiest Way To Design Furniture…. Laura Kampf on getting off the computer and designing physical spaces with tape, lil’ wood sticks, and cardboard.
Hotel California - Reimagined on the Traditional Chinese Guzheng
C is not a low-level language: Your computer is not a fast PDP-11.
Loon is a Lisp. Thrilled to discover I’m not the only one who wants to mash together Clojure and Rust. The current implementation seems to have been manically vibe-coded and I quickly ran into some terrible bugs, but on the other hand it exists so I’m not going to be a hater.
Made a print in place box so I can easily hand out printed bees🐝. “Im quite content with the result, the bees fit snugly and the box opens and closes nicely”
“There isn’t a lot of reliable information out there about how to buy a gas mask, especially for the specific purpose of living under state repression. But hopefully after reading this guide you’ll feel equipped to make an educated decision.”
“This zoomable map shows every page of every issue of BYTE starting from the front cover of the first issue (top left) to the last page of the final edition (bottom right). The search bar runs RE2 regex over the full text of all 100k pages.” A lovely reminder that user-interfaces can be extremely fast and information dense.
Notes
Topics: the Clojure documentary release, taking inspiration from the Datomic Ions approach to logging and alerting, and configuring AWS Lambda schedules with Terraform.
Most working engineers have spent ninety percent of their concurrent-programming life in one model: shared memory protected by locks. Threads that all see the same variables. Mutexes around the critical sections. Hope and care. It's the model every OS textbook teaches, every mainstream language supports, and every senior engineer has a horror story about.
It's also not the only option. Or even the best one, for many of the problems it gets used for. Three other models — CSP, actors, and software transactional memory — have been around for decades, are mature enough for production, and each solves a class of problems that lock-based designs handle poorly.
This is a map of all four, from a working backend engineer who uses each of them for different jobs, and a take on when each is the right answer.
tl;dr — Concurrency has four viable pillars: shared memory + locks (threads, mutexes), CSP (channels, Go), actors (mailboxes, Erlang), and STM (transactional memory, Clojure). None is universally better. Each solves a different problem and has a different failure mode. Senior designs often mix three of them in one system. Mutex-for-everything works until it doesn't — usually at exactly the scale you promised you'd never reach.
The default. Threads, mutexes, atomics, condition variables. Every mainstream language has them.
How it works: multiple threads of execution share the same address space. They read and write the same data. Mutexes make sure only one thread touches a critical section at a time. Atomics do the same for single-word operations without a full lock.
Where it shines:
atomic.AddInt64, sync.Map, LRU caches. The right tool.

Failure modes:

volatile in Java is not what you thought.

Use mutexes for small, localized shared state. Once the shared state has three or more collaborators, or a nontrivial invariant across fields, reach for one of the other models.
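For Clojure readers: the language's closest analogue to a single-word atomic is an atom, where contended updates retry via compare-and-swap instead of blocking on a lock. A minimal sketch:

```clojure
;; Four threads each bump a shared counter 250 times; swap! uses
;; compare-and-swap under the hood, so no explicit lock is needed.
(def hits (atom 0))

(let [workers (doall (repeatedly 4 #(future (dotimes [_ 250] (swap! hits inc)))))]
  (run! deref workers))  ; wait for every thread to finish

@hits  ;; => 1000
```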
Tony Hoare's 1978 paper, popularized by Occam and now Go. The model Rob Pike and Ken Thompson picked for Go's concurrency.
How it works: processes don't share memory; they send messages on named channels. Senders and receivers rendezvous on the channel. Ownership of data moves with the message. "Do not communicate by sharing memory; share memory by communicating."
Where it shines:
select with <-ctx.Done() is a clean primitive.

Failure modes:
Use CSP for coordination-heavy designs. When the structure of "who's alive, who sends to whom, when do things stop" is the architecture, channels make that visible in the code.
Go is the obvious exemplar, but CSP-style is also available in Rust (crossbeam-channel, tokio::sync::mpsc), Kotlin (coroutines with channels), Python (asyncio.Queue), and C# (System.Threading.Channels).
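Clojure's own CSP implementation is core.async; a minimal two-stage pipeline over a channel looks like this (squares is a name I've introduced for the example):

```clojure
(require '[clojure.core.async :as async :refer [chan go >! <!!]])

;; Producer go-block sends values on a channel; the consumer collects them.
;; Rendezvous on the unbuffered channel provides the backpressure.
(def squares
  (let [c (chan)]
    (go (doseq [n [1 2 3]]
          (>! c (* n n)))
        (async/close! c))
    (<!! (async/into [] c))))

squares  ;; => [1 4 9]
```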
Carl Hewitt's 1973 paper. Made practical by Erlang (1986) and later Akka (Scala/Java). The model behind WhatsApp, a decade of telecom, and most fault-tolerant messaging infrastructure.
How it works: an actor is a named entity with private state and a mailbox. Other actors send messages to its address. Messages are processed one at a time from the mailbox. No shared memory. Parent actors supervise children; when a child crashes, the parent decides to restart, escalate, or ignore. Crashes are normal.
Where it shines:
Failure modes:
Erlang and Elixir are the canonical runtimes. Akka brings actors to the JVM. Pony is a rare actor-first typed language. In Go, you can simulate actors with a goroutine + channel-as-mailbox pattern, but you lose Erlang's supervision and "let it crash" semantics unless you build them yourself.
Use actors when you have long-lived stateful entities with fault requirements. Telecom, messaging, multiplayer game servers, IoT device shadows, any system where "this particular entity has its own state machine, and we really care when it crashes" is the shape.
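Clojure doesn't ship actors, but its agents give a taste of the mailbox discipline (serialized, asynchronous updates to private state), without Erlang-style addressing or supervision:

```clojure
;; An agent holds private state; sends queue in its mailbox and are
;; applied one at a time on a thread pool. No locks, no shared mutation.
(def counter (agent 0))

(send counter inc)
(send counter + 10)
(await counter)  ; block until the mailbox drains

@counter  ;; => 11
```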
Imagine database transactions, but for in-memory data. That's STM.
How it works: critical sections are wrapped in transactions. The runtime tracks reads and writes optimistically. On commit, if any data touched was modified by another transaction, the current one rolls back and retries. No explicit locks. Composability — two transactions can be combined into a larger one without redesigning the locking order.
Where it shines:
Failure modes:
STM), Scala (scala-stm), Rust (experimental stm crates). Not a mainstream feature of Go/Java/C#.

Clojure is the canonical "STM as a first-class citizen" language — its refs and transactions are idiomatic. Haskell's STM monad is arguably the cleanest realization. In other ecosystems, STM exists as libraries but hasn't displaced mutexes.
Use STM when the concurrent state is small-to-medium, the access pattern is read-heavy with occasional writes, and you want the composability. For the rare problems that fit, STM is strictly simpler to reason about than locks. For problems that don't fit (I/O-heavy, write-contention-heavy), STM is worse.
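Since Clojure is the canonical example, here is the textbook demonstration: a transfer across two refs that commits atomically or retries, with no lock ordering to think about:

```clojure
;; Two accounts as refs; dosync makes the transfer transactional.
;; If another transaction touches these refs first, this one retries.
(def checking (ref 100))
(def savings  (ref 50))

(defn transfer! [from to amount]
  (dosync
    (alter from - amount)
    (alter to + amount)))

(transfer! checking savings 30)
[@checking @savings]  ;; => [70 80]
```

The composability claim is visible here: two transfer! calls wrapped in an outer dosync form one larger transaction, with no redesign of the locking order.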
The surprise for engineers who've only used one model: mature systems mix three of them in one codebase.
A typical backend service I'd build today:
And I wouldn't use STM in that stack. Not because it's bad, but because the language runtime doesn't make it first-class. If I were writing Clojure, STM would be a natural fit for the in-memory state machines that would otherwise be locked maps.
The old "pick one concurrency model" debate was always a false choice. The real decision is per-problem: what shape is the concurrent work, what's the state-sharing pattern, and what failure semantics do I want.
Quick map:
errgroup.WithContext.

Most people who get bitten by concurrency bugs got bitten because they used the wrong model, not because they used it wrong. A mutex-heavy design for a workload that's really a pipeline is fragile. A channels-for-everything design when there's a shared counter underneath ends up with awkward rendezvous. An actors-everywhere design when the business is CRUD requests reads like over-engineering.
The four pillars aren't competing theories of concurrency. They're four tools, each good at specific jobs. Senior engineers know all four and reach for the right one. Junior engineers reach for the only one they know and force-fit it.
If your career so far has been mostly mutexes, spend a weekend reading the other three. Write a toy pipeline in Go channels. Read Erlang's supervision documentation. Play with Clojure refs. The investment pays back every time you sit in a design review and someone proposes locking their way out of a structural problem.
Starting with .NET 11, the .NET runtime offers built-in support for async/await. The latest release of ClojureCLR (1.12.3-alpha6) provides experimental support for this feature. In this post, we’ll take a look at what async/await is, how it works on the CLR, and how ClojureCLR supports it.
Async is a feature that allows methods to suspend their computation while waiting for a subcomputation to complete. Often, the subcomputation is something that requires waiting for an event, such as the completion of an I/O operation. Suspending the computation means yielding control of the thread of execution, which frees the current thread to be used for other purposes during the wait, thus improving multi-threading efficiency.
To take advantage of async in C#, one marks the method’s signature with the async keyword. Within the method’s body, the await keyword can be used to mark the specific suspension points. A simple example:
public static class AsyncExample
{
    public static async Task<List<string>> GetData(List<string> filenames)
    {
        List<string> contents = new List<string>(filenames.Count);
        foreach (string filename in filenames)
        {
            contents.Add(await GetDataFromFile(filename));
        }
        return contents;
    }

    private static async Task<string> GetDataFromFile(string filename)
    {
        return await System.IO.File.ReadAllTextAsync(filename);
    }
}
(No need for the class or the methods to be static, but that seems correct for this example.)
Several things to note:
An async method must have a return type derived from one of these:
- Task
- ValueTask
- Task<TResult>
- ValueTask<TResult>

We marked GetDataFromFile as async. When we use await on it in the caller, the caller must be marked as async. (One can use methods from the Task library to avoid using await, but await is the typical mechanism.)

Prior to .NET 11, async and await in C# were implemented using code transformations performed by the compiler. A method marked async was rewritten as a finite-state machine. Suspension points required mechanisms to save and restore state around the await call. This consumed heap memory. And the resulting code caused things like stack traces in exceptions to be loaded with compiler-generated method names that were not very helpful to the programmer.
As of .NET 11, these heroic compiler efforts are no longer required. The CLR core runtime detects methods marked as async (the compiler needs to pass along that information in the method metadata). The compiler translates await calls into certain special method calls that the runtime recognizes. The runtime now has the capability of dealing with saving and restoring state on its own. Simpler, and also more efficient, according to benchmarks.
Starting with version 1.12.3-alpha6, ClojureCLR provides experimental support for runtime async. A new library clojure.clr.async.task.alpha provides functions to help with this. Internally, the ClojureCLR compiler has been modified to pass along the critical metadata on async functions and to rewrite await calls to the special methods used by the runtime.
(This library is marked “alpha” because we are still experimenting with the API. Also, runtime async in .NET 11 is still in preview. In the final release, the library will be in namespace clojure.clr.async.task.)
It is helpful to require the namespace. From the sample file for this post, we start with:
(ns test.async-test
(:require [clojure.clr.async.task.alpha :as t])
(:import [System.Threading.Tasks Task]
[System.IO Path File]
[System.Threading CancellationToken]))
We have imported a few other classes for the example code. Task is useful; we will also find Task<Object> useful, so we define an alias for it.
(alias-type ObjTask |System.Threading.Tasks.Task`1[Object]|)
We’re going to do some file I/O in our examples, so it is helpful to have a few files to work with:
(def ^String in-file (Path/GetTempFileName))
(def ^String out-file (Path/GetTempFileName))
(File/WriteAllText in-file "Some random content.")
If all you want to do is call an async method without awaiting it (yielding the thread), you can use t/result, which will run a task if it is not already started/completed and wait to return its result. More typically we use it to extract the result of a task that has already completed, but it will start the task and wait if necessary.
(t/result (File/ReadAllTextAsync in-file CancellationToken/None)) ;; => "Some random content."
If you want to use await to mark a suspension point where control is yielded, you need to be working in an async context. One way to provide that context is to defn a function with the ^:async tag. This marks all of its overloads (IFn.invoke methods) as async for the runtime. It also type-hints the return type of the function as System.Threading.Tasks.Task<Object>. This applies to all arities of the function.
(defn ^:async shout-it-out [infile outfile]
(let [content (t/await (File/ReadAllTextAsync infile CancellationToken/None))
capitalized (.ToUpper content)]
(t/await (File/WriteAllTextAsync ^String outfile capitalized CancellationToken/None))
"I'm done yelling."))
We have suspension points at the read and write calls, which are asynchronous I/O operations. The function will yield control at those points, allowing the thread to be used for other purposes while waiting for the I/O operations to complete. When the operations complete, the function will resume execution at the point of suspension.
Note that calling shout-it-out will return a Task<Object>.
(def t1 (shout-it-out in-file out-file))
(t/task? t1) ; => true
To get the result of the task, we can use t/result. In this case, again, the t/result will start and wait on the task if it is not already completed.
(t/result t1) ; => "I'm done yelling."
If you want to use await in a function but don’t want the function itself to return a task, you can use t/async to provide an async context. t/async wraps its body in an ^:async (fn [] ...body...). This form returns a task, so you will need to perform a task operation on it to get work done.
Typically, you will call t/result if you want to get the value from the awaited call.
(defn just-read [infile]
(t/result (t/async (t/await (File/ReadAllTextAsync infile CancellationToken/None)))))
(just-read in-file) ;; => "Some random content."
The call to t/async returns a task, so we need to call t/result to get the value from the awaited call. If we had not done that, we would have gotten a Task<Object> back instead of the string content. Generally it is preferable to define an ^:async function if you want to use await, but t/async can be useful when you want an anonymous async function, for example to construct tasks for use in wait-all or wait-any calls. (See below.)
There are several utility functions to create basic tasks.
(t/->task 42) ;; creates a Task<Object> that returns 42 when run
(t/completed-task) ;; creates a Task that is already completed
(t/delay-task 3000) ;; creates a Task that delays for 3000 milliseconds
You can run any zero-arg Clojure function as a task:
(t/result (t/run (fn [] (+ 1 2 3))))
(defn now [] DateTime/Now)
(t/result (t/run now))
Again, calling t/result on the result of t/run will start the task if it is not already started and wait for it to complete, returning the result. This does not take full advantage of suspension.
You can do wait-for-one and wait-for-all operations on a group of tasks. You can either just run the tasks or ask for their value(s).
| Function | Description |
|---|---|
| (t/wait-all tasks) | wait for all the tasks to complete; return nil |
| (t/wait-any tasks) | start all tasks; return the first one to complete |
| (t/wait-all-results tasks) | wait for all the tasks to complete; return a lazy sequence of their results |
| (t/wait-any-result tasks) | return the result of the first task to complete |
An example:
;; A little dummy function to delay and then return a value.
(defn ^:async delayed-value [msecs val]
(t/await (t/delay-task msecs))
val)
;; Just a little test to make sure things are taking time.
(time (t/result (delayed-value 4000 7))) ;; should take more than 4 seconds
(t/wait-all-results [(delayed-value 2000 2000)
(delayed-value 4000 4000)
(delayed-value 6000 6000)]) ;; => (2000 4000 6000)
(t/wait-any-result [(delayed-value 2000 2000)
(delayed-value 4000 4000)
(delayed-value 6000 6000)]) ;; => 2000 (most likely)
(t/wait-all-results [(delayed-value 2000 2000)
(delayed-value 4000 4000)
(delayed-value 6000 6000)]
500) ;; => nil (times out)
(t/wait-any-result [(delayed-value 2000 2000)
(delayed-value 4000 4000)
(delayed-value 6000 6000)]
500) ;; => nil (times out)
Short answer: yeah.
I did some simple tests to look at things like thread affinity and flooding the thread pool. The simplest thing to do is to replace a call like (t/await ...) with (.Wait ...). The latter does not yield its thread; the difference in performance is notable.
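As a sketch of that comparison (illustrative only; the function names are invented, and this assumes the same t/ namespace and .NET classes used in the examples above, on the .NET 11 preview runtime):

```clojure
;; Both functions read a file via the async API; only the second yields
;; its thread while waiting. Names are hypothetical.

;; Blocking: .Wait parks the current thread until the task completes.
(defn blocking-read [^String f]
  (let [task (File/ReadAllTextAsync f CancellationToken/None)]
    (.Wait task)
    (.Result task)))

;; Yielding: t/await suspends the function and frees the thread
;; until the I/O finishes.
(defn ^:async yielding-read [^String f]
  (t/await (File/ReadAllTextAsync f CancellationToken/None)))
```

Under load, the blocking version ties up one pool thread per in-flight read, which is what the thread-pool-flooding tests were probing.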
Coding Clojure with scittle-kitchen in the FOSS EurOffice suite
So the next post will hopefully be about the compiler itself.
Unless I get distracted again.
Sike!
While I did some work on the compiler, I’m not feeling ready to talk about it yet. I want the post about it to be, well, more in-depth, so for now, let’s go back to immutable data structures. I got some comments on their performance, which can be summarized as “bad”.
And I agree! The performance is not great, especially when compared to plain Lua tables. However, my benchmarking process was a bit flawed, and I’ve improved it since.
This post won’t be long, but I wanted to write it anyway, to illustrate what I’m dealing with. Again, this is not the final version; I’m still working on possible optimizations as I write this, but some things have improved already, though not by much.
There were some low-hanging fruits that I could tackle:

- fragment function with pure arithmetic. I already have the same fragment function in my bitops module that uses native rshift+band. The modulo operation is expensive.
- index function.
- faccumulate, fcollect, etc. with plain for loops. These loops are quite hot, and Fennel compiles them to have an additional assignment at each iteration. Should be free, but actually had a small impact.
- [nil nil nil nil] tells the Lua compiler to pre-allocate four slots when creating the table, meaning it won’t resize once we insert elements. Tables in Lua resize by a factor of 2 and copy everything on resize, which is not great for performance.

With these changes in place (and some more I omitted), here are some of the benchmarks:
These images show per-commit changes to the performance. This really helped me track what was worth keeping and what wasn’t.
Here’s the LuaJIT version of the benchmark:
Here’s another graph:
This is me testing how hash map branching factor affects operations. As can be seen, switching from 16 to 32 makes inserts slower on PUC Lua 5.5 but makes indexing faster. On LuaJIT both operations are faster.
Looking at the charts, in the 7989c32 commit with that change (the second orange bar), insertions are slower than in the previous commit, and indexing is faster.
Unfortunately, LuaJIT charts are noisy and I can’t confirm the speedup there.
Something to do with instabilities in the LuaJIT tracer, I guess.
Note: I know the results for some operations are worse than before all of the optimizations took place. This is still a work in progress. It just shows how important relative measuring is.
The results are a mixed bag. Most operations that improved did so by only 1 or 2 milliseconds overall, meaning the per-operation cost is still quite high, compared to Lua tables. Yes, some became faster, so I’m keeping these changes, but I can also see that for some operations, they actually got slower than they were before any optimization work. The reason for that may be that I fixed a few bugs along the way. Anyway, I tried to prioritize transient collection variants, as they benefit most from optimizations given how they’re used, and LuaJIT performance where possible.
Some bars are not meaningful for certain operations. E.g., I had changes that had nothing to do with Persistent Vector, but affected it anyway, so this benchmark is still a bit flawed. At least those differences are within the deviation range.
Anyhow, just wanted to show that optimization is hard, and not all changes yield the results you expect. In the case of this library, I don’t think there are more low-hanging fruits to optimize, or at least I don’t see any. For example, there are a lot of table allocations that are hard to get rid of in the current implementation. A deeper analysis is needed, probably by people with more Lua knowledge.
I’m still satisfied with some improvements I got, but we’ll see if more are on the way.
After a lot more benchmarks, flamegraphs, and throwing stuff at the wall, I think I can’t do much more. I decided to update this post instead of writing another one (and really should have waited before publishing the original) because at that point, I thought there was nothing more to do. Now I think there really isn’t. So the results are in:
Figure 1: PUC Lua 5.5 (green - before, orange - after)
Figure 2: LuaJIT (green - before, orange - after)
As can be seen, the changes are more dramatic on Lua than on LuaJIT. Mostly because it’s hard to optimize for LuaJIT without sacrificing performance on Lua, so I re-prioritized Lua, since LuaJIT is already much faster.
The final benchmark tables are below.
| Operation | Lua table | Persistent | ratio | per op | ratio (old) | per op (old) |
|---|---|---|---|---|---|---|
| build 50000 | 0.18 ms | 18.88 ms | 102x | 0.378 us | 119x | 0.440 us |
| lookup random | 0.48 ms | 11.44 ms | 24x | 0.229 us | 28x | 0.266 us |
| assoc random | 0.31 ms | 42.72 ms | 139x | 0.854 us | 222x | 1.4 us |
| pop all | 0.14 ms | 14.56 ms | 106x | 0.291 us | 150x | 0.415 us |
| iterate | 0.60 ms | 2.74 ms | 5x | 0.055 us | 17x | 0.196 us |
| Operation | Lua table | Transient | ratio | per op | ratio (old) | per op (old) |
|---|---|---|---|---|---|---|
| build 50000 | 0.19 ms | 6.55 ms | 35x | 0.131 us | 41x | 0.154 us |
| assoc random | 0.31 ms | 17.64 ms | 57x | 0.353 us | 67x | 0.408 us |
| pop all | 0.14 ms | 5.74 ms | 42x | 0.115 us | 51x | 0.142 us |
| Operation | Lua table | Persistent | ratio | per op | ratio (old) | per op (old) |
|---|---|---|---|---|---|---|
| insert 50000 | 2.01 ms | 81.44 ms | 41x | 1.6 us | 82x | 3.2 us |
| lookup random | 0.92 ms | 34.03 ms | 37x | 0.681 us | 81x | 1.7 us |
| contains random | 1.82 ms | 33.08 ms | 18x | 0.662 us | 49x | 1.7 us |
| dissoc all | 0.90 ms | 76.97 ms | 86x | 1.5 us | 187x | 3.3 us |
| dissoc 10% | 0.18 ms | 9.81 ms | 56x | 2.0 us | 112x | 3.7 us |
| iterate | 1.91 ms | 7.22 ms | 4x | 0.144 us | 4x | 0.146 us |
| lookup structural keys | 0.14 ms | 2.86 ms | 21x | 0.572 us | 146x | 4.1 us |
| insert collision-heavy | 0.13 ms | 6.50 ms | 51x | 1.3 us | 134x | 3.5 us |
| mixed 80/20 | 1.92 ms | 53.67 ms | 28x | 1.1 us | 62x | 2.3 us |
| Operation | Lua table | Transient | ratio | per op | ratio (old) | per op (old) |
|---|---|---|---|---|---|---|
| insert 50000 | 2.02 ms | 38.91 ms | 19x | 0.778 us | 44x | 1.7 us |
| lookup random | 1.14 ms | 35.56 ms | 31x | 0.711 us | 86x | 1.7 us |
| dissoc all | 0.89 ms | 44.52 ms | 50x | 0.890 us | 112x | 2.0 us |
| mixed 80/20 | 4.39 ms | 44.87 ms | 10x | 0.897 us | 22x | 1.9 us |
| Operation | Lua table | Persistent | ratio | per op | ratio (old) | per op (old) |
|---|---|---|---|---|---|---|
| build 50000 | 0.11 ms | 8.12 ms | 71x | 0.162 us | 84x | 0.177 us |
| lookup random | 0.06 ms | 0.67 ms | 11x | 0.013 us | 11x | 0.013 us |
| assoc random | 0.04 ms | 22.91 ms | 559x | 0.458 us | 731x | 0.585 us |
| pop all | 0.02 ms | 10.77 ms | 718x | 0.215 us | 657x | 0.210 us |
| iterate | 0.02 ms | 0.23 ms | 12x | 0.005 us | 26x | 0.011 us |
| Operation | Lua table | Transient | ratio | per op | ratio (old) | per op (old) |
|---|---|---|---|---|---|---|
| build 50000 | 0.05 ms | 0.79 ms | 16x | 0.016 us | 12x | 0.012 us |
| assoc random | 0.04 ms | 2.22 ms | 55x | 0.044 us | 28x | 0.022 us |
| pop all | 0.02 ms | 0.35 ms | 23x | 0.007 us | 50x | 0.016 us |
| Operation | Lua table | Persistent | ratio | per op | ratio (old) | per op (old) |
|---|---|---|---|---|---|---|
| insert 50000 | 0.79 ms | 32.32 ms | 41x | 0.646 us | 50x | 0.989 us |
| lookup random | 0.34 ms | 5.78 ms | 17x | 0.116 us | 43x | 0.248 us |
| contains random | 0.30 ms | 5.91 ms | 20x | 0.118 us | 40x | 0.257 us |
| dissoc all | 0.23 ms | 27.02 ms | 119x | 0.540 us | 191x | 0.894 us |
| dissoc 10% | 0.05 ms | 5.89 ms | 118x | 1.2 us | 121x | 1.4 us |
| iterate | 0.07 ms | 1.07 ms | 16x | 0.021 us | 30x | 0.040 us |
| lookup structural keys | 0.02 ms | 2.41 ms | 121x | 0.483 us | 119x | 0.498 us |
| insert collision-heavy | 0.28 ms | 5.20 ms | 19x | 1.0 us | 36x | 1.2 us |
| mixed 80/20 | 0.50 ms | 46.59 ms | 94x | 0.932 us | 40x | 0.513 us |
| Operation | Lua table | Transient | ratio | per op | ratio (old) | per op (old) |
|---|---|---|---|---|---|---|
| insert 50000 | 0.91 ms | 30.92 ms | 34x | 0.618 us | 11x | 0.242 us |
| lookup random | 0.32 ms | 34.90 ms | 110x | 0.698 us | 44x | 0.246 us |
| dissoc all | 0.23 ms | 36.35 ms | 162x | 0.727 us | 57x | 0.271 us |
| mixed 80/20 | 1.29 ms | 40.71 ms | 32x | 0.814 us | 11x | 0.309 us |
Yes, the benchmark shows that some operations slowed down and some sped up. Ideally, I’d like to have no slowdowns, but that’s that. I wouldn’t trust my LuaJIT benchmark too much, though: the graphs show that, for some weird reason, unrelated changes affect performance in modules those changes didn’t touch at all. These changes are now merged into the main branch.
The main bottleneck now is table allocation and array copying, things I have already spent a lot of time optimizing, with mixed results. So I don’t think I can do better than that, at least without major algorithmic restructuring.
With that, the next post should be about the compiler.
Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).
The Clojure Documentary will be released on the CultRepo YouTube channel this Thursday, April 16.
8pm CEST, 6pm UTC, 3pm BRT, 2pm EDT, 11am PDT
Thu, Apr 16: for the worldwide release
Agical in Stockholm, Sweden
Factor House in Northcote, Australia
Fri, Apr 17: before the Q&A
Clojure Camp Discord at 1:30pm ET, 10:30am PT
Clojure BR Discord at 2:30pm BRT
Join us for a special Clojure Documentary Q&A Webinar with Rich Hickey and other key people in Clojure’s history!
Friday, April 17, 3–4pm US ET / 9-10pm CEST
Duration: 1 hour
Language: English with simultaneous translation into Spanish and Portuguese
Recording: session will be recorded and uploaded to the Clojure TV YouTube channel
See the early years of Clojure via the Clojure IRC Log
September 30 – October 2, 2026
Charlotte Convention Center, Charlotte, NC
Join us for the largest gathering of Clojure developers in the world! Meet new people and reconnect with old friends. Enjoy two full days of talks, a day of workshops, social events, and more.
Early bird and group tickets are now on sale.
Is your company interested in sponsoring? Email us at clojure_conj@nubank.com.br to discuss opportunities.
Clojure real-world-data 56: Apr 17
Clojure Community Check-In: Apr 25. Register here.
Babashka Conf: May 8. Amsterdam, NL. See the schedule.
Dutch Clojure Days 2026: May 9. Amsterdam, NL. See the schedule.
Spring Lisp Game Jam 2026: May 14-24. Online.
Swish: Using Claude Code to Create a Lisp with Swift - Rod Schmidt
Learn Ring - 9. Refactoring Pages - Clojure Diary
Try Clojure under 30 secs - aka From Calva to REPL - CalvaTV
Apropos with Colin Fleming - April 14, 2026 - apropos clojure
A Regular expression to find functions - Clojure Diary
Shadow-cljs 3.4.x Updates - Thomas Heller
(nth (concat) 8) - Ana Carolina, Arthur Fücher
Swish - Clojure-like Lisp for Swift Video Series - Rod Schmidt
Mapping Column Names with Malli Schemas - Timothy Davis
BigConfig: The "React" for Agentic DevOps - Alberto Miorin
Leiningen — Complete Tutorial & Best Practices - Ivan Gavlik
Orchestration is not the hard part - Chris Lester
Anomaly Detection Belongs in Your Database - Christian Weilbach
DevOps Without the Code: Infrastructure as Markdown - Alberto Miorin
Eve sheets - a toy multi-user spreadsheet in < 250 LOC - Kyle Passarelli
Typed multiple dispatch as a Clojure library — how we built Julia-style polymorphism on the JVM - Christian Weilbach
Clojure on Fennel part two: immutable.fnl optimizations - Andrey Listopadov
Debut release
tree-sitter-clojure - a wasm version of tree-sitter-clojure
csp-clj - Communicating Sequential Processes for Clojure on JDK 24+ Virtual Threads
clj-oa3-vtn - OpenADR 3.1.0 VTN server in Clojure
price-server-user-guide - User guide for the Grid Coordination price server — California electricity prices via OpenADR 3.1.0
raster - Fast, functional numerical computing for Clojure/JVM.
clj-xref - LLM-friendly cross-reference database for Clojure code. Query who-calls, calls-who, who-implements, ns-deps to feed precise dependency neighborhoods to AI assistants instead of entire source trees. Built on clj-kondo.
layoutz-clj - Simple, beautiful CLI output
miniforge - miniforge is an autonomous software development system designed to behave like a factory, not a chatbot
Updates
data.xml 0.2.0-alpha11 - GitHub - clojure/data.xml
logging4j2 1.0.7 - A Clojure wrapper for log4j2
epupp 0.0.16 - A web browser extension that lets you tamper with web pages, live and/or with userscripts.
nvim-astro 2026-04-08 - Neovim 0.11 config for Clojure development, based on AstroNvim v5
pomegranate 1.3.26 - A sane Clojure API for Maven Artifact Resolver + dynamic runtime modification of the classpath
ordered-collections 0.2.0 - Fast, modern, ropes and ordered collections that do more than sort.
aleph 0.9.7 - Asynchronous streaming communication for Clojure - web server, web client, and raw TCP/UDP
hermes 1.4.1585 - A library and microservice implementing the health and care terminology SNOMED CT with support for cross-maps, inference, fast full-text search, autocompletion, compositional grammar and the expression constraint language.
datomic-type-extensions 2026.04.10 - A Clojure library that wraps Datomic API functions to add type extensions.
cli 1.29.127 - Opinionated command line argument handling, with excellent support for subcommands
beeld 1.1.5 - Get the metadata associated with an image. Also contains image utilities: filesize, scale, etc.
tableplot 1-beta17 - Easy layered graphics with Hanami & Tablecloth
clay 2.0.15 - A REPL-friendly Clojure tool for notebooks and datavis
calva 2.0.573 - Clojure & ClojureScript Interactive Programming for VS Code
eca 0.126.0 - Editor Code Assistant (ECA) - AI pair programming capabilities agnostic of editor
baredom 2.1.1 - BareDOM: Lightweight CLJS UI components built on web standards (Custom Elements, Shadow DOM, ES modules). No framework, just the DOM
nrepl 1.7.0 - A Clojure network REPL that provides a server and client, along with some common APIs of use to IDEs and other tools that may need to evaluate Clojure code in remote environments.
plumcp 0.2.0-rc2 - Clojure/ClojureScript library for making MCP server and client
pretty 3.7.0 - Library for helping print things prettily, in Clojure - ANSI fonts, formatted exceptions
shadow-cljs 3.4.2 - ClojureScript compilation made easy