OSS updates November and December 2025

In this post I'll give updates about open source I worked on during November and December 2025.

To see previous OSS updates, go here.

Sponsors

I'd like to thank all the sponsors and contributors that make this work possible. Without you, the projects below would not be as mature, or might not exist or be maintained at all! So a sincere thank you to everyone who contributes to the sustainability of these projects.

gratitude

Current top tier sponsors:

Open the details section for more info about sponsoring.

Sponsor info

If you want to ensure that the projects I work on are sustainably maintained, you can sponsor this work in the following ways. Thank you!

Updates

Clojure Conj 2025

Last November I had the honor and pleasure of attending Clojure Conj 2025. I met a host of wonderful and interesting long-time and new Clojurians, many of whom I've known online for a long time and now met for the first time. It was especially exciting to finally meet Rich Hickey and talk to him during a meeting about Clojure dialects and Clojure tooling. The talk that I gave there, "Making tools developers actually use", will come online soon.

presentation at Dutch Clojure meetup

Babashka conf and Dutch Clojure Days 2026

In 2026 I'm organizing Babashka Conf 2026. It will be an afternoon event (13:00-17:00) hosted in the Forum hall of the beautiful public library of Amsterdam. More information here. Get your ticket via Meetup.com (currently there's a waiting list, but more places will become available once speakers are confirmed). The CfP will open mid-January. The day after Babashka Conf, Dutch Clojure Days 2026 will be happening. It's not too late to get your talk proposal in. More info here.

Projects

Here are updates about the projects/libraries I've worked on in the last two months in detail.

  • babashka: native, fast starting Clojure interpreter for scripting.

    • Bump process to 0.6.25
    • Bump deps.clj
    • Fix #1901: add java.security.DigestOutputStream
    • Redefining namespace with ns should override metadata
    • Bump nextjournal.markdown to 0.7.222
    • Bump edamame to 1.5.37
    • Fix #1899: with-meta followed by dissoc on records no longer works
    • Bump fs to 0.5.30
    • Bump nextjournal.markdown to 0.7.213
    • Fix #1882: support for reifying java.time.temporal.TemporalField (@EvenMoreIrrelevance)
    • Bump Selmer to 1.12.65
    • SCI: sci.impl.Reflector was rewritten into Clojure
    • dissoc on record with non-record field should return map instead of record
    • Bump edamame to 1.5.35
    • Bump core.rrb-vector to 0.2.0
    • Migrate detection of the executable name for the self-executing uberjar from ProcessHandle to native image ProcessInfo to avoid sandbox errors
    • Bump cli to 0.8.67
    • Bump fs to 0.5.29
    • Bump nextjournal.markdown to 0.7.201
  • SCI: Configurable Clojure/Script interpreter suitable for scripting

    • Add support for :refer-global and :require-global
    • Add println-str
    • Fix #997: Var is mistaken for local when used under the same name in a let body
    • Fix #1001: JS interop with reserved js keyword fails (regression of #987)
    • sci.impl.Reflector was rewritten into Clojure
    • Fix babashka/babashka#1886: Return a map when dissociating a record basis field.
    • Fix #1011: reset ns metadata when evaluating ns form multiple times
    • Fix for https://github.com/babashka/babashka/issues/1899
    • Fix #1010: add js-in in CLJS
    • Add array-seq
  • clj-kondo: static analyzer and linter for Clojure code that sparks joy.

    • #2600: NEW linter: unused-excluded-var to warn on unused vars in :refer-clojure :exclude (@jramosg)
    • #2459: NEW linter: :destructured-or-always-evaluates to warn on s-expressions in :or defaults in map destructuring (@jramosg)
    • Add type checking support for sorted-map-by, sorted-set, and sorted-set-by functions (@jramosg)
    • Add new type array and type checking support for the next functions: to-array, alength, aget, aset and aclone (@jramosg)
    • Fix #2695: false positive :unquote-not-syntax-quoted in leiningen's defproject
    • Leiningen's defproject behavior can now be configured using leiningen.core.project/defproject
    • Fix #2699: fix false positive unresolved string var with extend-type on CLJS
    • Rename :refer-clojure-exclude-unresolved-var linter to unresolved-excluded-var for consistency
    • v2025.12.23
    • #2654: NEW linter: redundant-let-binding, defaults to :off (@tomdl89)
    • #2653: NEW linter: :unquote-not-syntax-quoted to warn on ~ and ~@ usage outside syntax-quote (`) (@jramosg)
    • #2613: NEW linter: :refer-clojure-exclude-unresolved-var to warn on non-existing vars in :refer-clojure :exclude (@jramosg)
    • #2668: Lint & syntax errors in let bindings and lint for trailing & (@tomdl89)
    • #2590: duplicate-key-in-assoc changed to duplicate-key-args, and now lints dissoc, assoc! and dissoc! too (@tomdl89)
    • #2651: resume linting after paren mismatches
    • clojure-lsp#2651: Fix inner class name for java-class-definitions.
    • clojure-lsp#2651: Include inner class java-class-definition analysis.
    • Bump babashka/fs
    • #2532: Disable :duplicate-require in require + :reload / :reload-all
    • #2432: Don't warn for :redundant-fn-wrapper in case of inlined function
    • #2599: detect invalid arity for invoking collection as higher order function
    • #2661: Fix false positive :unexpected-recur when recur is used inside clojure.core.match/match (@jramosg)
    • #2617: Add types for repeatedly (@jramosg)
    • Add :ratio type support for numerator and denominator functions (@jramosg)
    • #2676: Report unresolved namespace for namespaced maps with unknown aliases (@jramosg)
    • #2683: data argument of ex-info may be nil since clojure 1.12
    • Bump built-in ClojureScript analysis info
    • Fix #2687: support new :refer-global and :require-global ns options in CLJS
    • Fix #2554: support inline configs in .cljc files
  • edamame: configurable EDN and Clojure parser with location metadata and more

    • Minor: leave out :edamame/read-cond-splicing when not splicing
    • Allow :read-cond function to override :edamame/read-cond-splicing value
    • The result from :read-cond with a function should be spliced. This behavior differs from :read-cond + :preserve which always returns a reader conditional object which cannot be spliced.
    • Support function for :features option to just select the first feature that occurs
  • squint: CLJS syntax to JS compiler

    • Allow macro namespaces to load "node:fs", etc. to read config files for conditional compilation
    • Don't emit IIFE for top-level let so you can write let over defn to capture values.
    • Fix js-yield and js-yield* in expression position
    • Implement some? as macro
    • Fix #758: volatile!, vswap!, vreset!
    • pr-str, prn etc now print EDN (with the idea that you can paste it back into your program)
    • new #js/Map reader that reads a JavaScript Map from a Clojure map (maps are printed like this with pr-str too)
    • Support passing keyword to mapv
    • #759: doseq can't be used in expression context
    • Fix #753: optimize output of dotimes
    • alength as macro
  • reagami: A minimal zero-deps Reagent-like for Squint and CLJS

    • Performance enhancements
    • treat innerHTML as a property rather than an attribute
    • Drop support for camelCased properties / (css) attributes
    • Fix :default-value in input range
    • Support data param in :on-render
    • Support default values for uncontrolled components
    • Fix child count mismatch
    • Fix re-rendering/patching of subroots
    • Add :on-render hook for mounting/updating/unmounting third-party JS components
  • NEW: parmezan: fixes unbalanced or unexpected parens or other delimiters in Clojure files

  • CLI: Turn Clojure functions into CLIs!

    • #126: - value accidentally parsed as option, e.g. --file -
    • #124: Specifying exec fn that starts with hyphen is treated as option
    • Drop Clojure 1.9 support. Minimum Clojure version is now 1.10.3.
  • clerk: Moldable Live Programming for Clojure

    • always analyze doc (but not deps) when no-cache is set (#786)
    • add option to disable inline formulas in markdown (#780)
  • scittle: Execute Clojure(Script) directly from browser script tags via SCI

  • Nextjournal Markdown

    • Add config option to avoid TeX formulas
    • API improvements for passing options
  • cherry: Experimental ClojureScript to ES6 module compiler

    • Fix cherry compile CLI command not receiving file arguments
    • Bump shadow-cljs to 3.3.4
    • Fix #163: Add assert to macros (@willcohen)
    • Fix #165: Fix ClojureScript protocol dispatch functions (@willcohen)
    • Fix #167: Protocol dispatch functions inside IIFEs; bump squint accordingly
    • Fix #169: fix extend-type on Object
    • Fix #171: Add satisfies? macro (@willcohen)
  • deps.clj: A faithful port of the clojure CLI bash script to Clojure

    • Released several versions catching up with the clojure CLI
  • quickdoc: Quick and minimal API doc generation for Clojure

    • Fix extra newline in codeblock
  • quickblog: light-weight static blog engine for Clojure and babashka

    • Add support for a blog contained within another website; see Serving an alternate content root in README. (@jmglov)
    • Upgrade babashka/http-server to 0.1.14
    • Fix :blog-image-alt option being ignored when using CLI (bb quickblog render)
  • nbb: Scripting in Clojure on Node.js using SCI

    • #395: fix vim-fireplace infinite loop on nREPL session close.
    • Add ILookup and Cons
    • Add abs
    • nREPL: support "completions" op
  • neil: A CLI to add common aliases and features to deps.edn-based projects.

    • neil.el - a hook that runs after finding a package (@agzam)
    • neil.el - adds a function for injecting a found package into current CIDER session (@agzam)
    • #245: neil.el - neil-executable-path now can be set to clj -M:neil
    • #251: Upgrade library deps-new to 0.10.3
    • #255: update maven search URL
  • fs - File system utility library for Clojure

    • #154 reflect in directory check and docs that move never follows symbolic links (@lread)
    • #181 delete-tree now deletes broken symbolic link root (@lread)
    • #193 create-dirs now recognizes sym-linked dirs on JDK 11 (@lread)
    • #184: new check in copy-tree for copying to self too rigid
    • #165: zip now excludes zip-file from zip-file (@lread)
    • #167: add root fn which exposes Path getRoot (@lread)
    • #166: copy-tree now fails fast on attempt to copy parent to child (@lread)
    • #152: an empty-string path "" is now (typically) understood to be the current working directory (as per underlying JDK file APIs) (@lread)
    • #155: fs/with-temp-dir clj-kondo linting refinements (@lread)
    • #162: unixify no longer expands into absolute path on Windows (potentially BREAKING)
    • Add return type hint to read-all-bytes
  • process: Clojure library for shelling out / spawning sub-processes

    • #181: support :discard or ProcessBuilder$Redirect as :out and :err options

Contributions to third party projects:

  • ClojureScript
    • CLJS-3466: support qualified method in return position
    • CLJS-3468: :refer-global should not make unrenamed object available

Other projects

These are (some of the) other projects I'm involved with, on which little to no activity happened in the past two months.

Click for more details

  • [pod-babashka-go-sqlite3](https://github.com/babashka/pod-babashka-go-sqlite3): A babashka pod for interacting with sqlite3
  • [unused-deps](https://github.com/borkdude/unused-deps): Find unused deps in a clojure project
  • [pod-babashka-fswatcher](https://github.com/babashka/pod-babashka-fswatcher): babashka filewatcher pod
  • [sci.nrepl](https://github.com/babashka/sci.nrepl): nREPL server for SCI projects that run in the browser
  • [babashka.nrepl-client](https://github.com/babashka/nrepl-client)
  • [http-server](https://github.com/babashka/http-server): serve static assets
  • [nbb](https://github.com/babashka/nbb): Scripting in Clojure on Node.js using SCI
  • [sci.configs](https://github.com/babashka/sci.configs): A collection of ready to be used SCI configs.
  • [http-client](https://github.com/babashka/http-client): babashka's http-client
  • [html](https://github.com/borkdude/html): Html generation library inspired by squint's html tag
  • [instaparse-bb](https://github.com/babashka/instaparse-bb): Use instaparse from babashka
  • [sql pods](https://github.com/babashka/babashka-sql-pods): babashka pods for SQL databases
  • [rewrite-edn](https://github.com/borkdude/rewrite-edn): Utility lib on top of rewrite-clj
  • [rewrite-clj](https://github.com/clj-commons/rewrite-clj): Rewrite Clojure code and edn
  • [tools-deps-native](https://github.com/babashka/tools-deps-native) and [tools.bbuild](https://github.com/babashka/tools.bbuild): use tools.deps directly from babashka
  • [bbin](https://github.com/babashka/bbin): Install any Babashka script or project with one command
  • [qualify-methods](https://github.com/borkdude/qualify-methods): Initial release of experimental tool to rewrite instance calls to use fully qualified methods (Clojure 1.12 only)
  • [tools](https://github.com/borkdude/tools): a set of [bbin](https://github.com/babashka/bbin/) installable scripts
  • [babashka.json](https://github.com/babashka/json): babashka JSON library/adapter
  • [speculative](https://github.com/borkdude/speculative)
  • [squint-macros](https://github.com/squint-cljs/squint-macros): a couple of macros that stand in for [applied-science/js-interop](https://github.com/applied-science/js-interop) and [promesa](https://github.com/funcool/promesa) to make CLJS projects compatible with squint and/or cherry.
  • [grasp](https://github.com/borkdude/grasp): Grep Clojure code using clojure.spec regexes
  • [lein-clj-kondo](https://github.com/clj-kondo/lein-clj-kondo): a leiningen plugin for clj-kondo
  • [http-kit](https://github.com/http-kit/http-kit): Simple, high-performance event-driven HTTP client+server for Clojure.
  • [babashka.nrepl](https://github.com/babashka/babashka.nrepl): The nREPL server from babashka as a library, so it can be used from other SCI-based CLIs
  • [jet](https://github.com/borkdude/jet): CLI to transform between JSON, EDN, YAML and Transit using Clojure
  • [lein2deps](https://github.com/borkdude/lein2deps): leiningen to deps.edn converter
  • [cljs-showcase](https://github.com/borkdude/cljs-showcase): Showcase CLJS libs using SCI
  • [babashka.book](https://github.com/babashka/book): Babashka manual
  • [pod-babashka-buddy](https://github.com/babashka/pod-babashka-buddy): A pod around buddy core (Cryptographic Api for Clojure).
  • [gh-release-artifact](https://github.com/borkdude/gh-release-artifact): Upload artifacts to Github releases idempotently
  • [carve](https://github.com/borkdude/carve): Remove unused Clojure vars
  • [4ever-clojure](https://github.com/oxalorg/4ever-clojure): Pure CLJS version of 4clojure, meant to run forever!
  • [pod-babashka-lanterna](https://github.com/babashka/pod-babashka-lanterna): Interact with clojure-lanterna from babashka
  • [joyride](https://github.com/BetterThanTomorrow/joyride): VSCode CLJS scripting and REPL (via [SCI](https://github.com/babashka/sci))
  • [clj2el](https://borkdude.github.io/clj2el/): transpile Clojure to elisp
  • [deflet](https://github.com/borkdude/deflet): make let-expressions REPL-friendly!
  • [deps.add-lib](https://github.com/borkdude/deps.add-lib): Clojure 1.12's add-lib feature for leiningen and/or other environments without a specific version of the clojure CLI

Permalink

rswan 1.1.0, and other Clojure updates

Notes

  • rswan 1.1.0-PRE https://codeberg.org/mindaslab/rswan
  • About setting repo
    • :repositories [["clojars" {:url "https://clojars.org/org.clojars.mindaslab.rswan"
                                 :sign-releases false}]]
      
    • caused due to a Java + Clojure upgrade, which resulted in different nREPL versions
  • Try rswan - demo
  • Why is clj not recognized as Clojure in Logseq?
  • job
    • Weird
    • $0
    • Some equity if things go right
    • not sure
  • Anyone wants any Clojure help?
    • Want to learn more
    • Need not be paid
    • If paid, will be really happy

Permalink

Building Heretic: From ClojureStorm to Mutant Schemata

Heretic

This is Part 2 of a series on mutation testing in Clojure. Part 1 introduced the concept and why Clojure needed a purpose-built tool.

The previous post made a claim: mutation testing can be fast if you know which tests to run. This post shows how Heretic makes that happen.

We'll walk through the three core phases: collecting expression-level coverage with ClojureStorm, transforming source code with rewrite-clj, and the optimization techniques that keep mutation counts manageable.

Phase 1: Coverage Collection

Traditional coverage tools track lines. Heretic tracks expressions.

The difference matters. Consider:

(defn process-order [order]
  (if (> (:quantity order) 10)
    (* (:price order) 0.9)    ;; <- Line 3: bulk discount
    (:price order)))

Line-level coverage would show line 3 as "covered" if any test enters the bulk discount branch. But expression-level coverage distinguishes between tests that evaluate *, (:price order), and 0.9. When we later mutate 0.9 to 1.1, we can run only the tests that actually touched that specific literal - not every test that happened to call process-order.
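As a rough illustration (the form-id and coordinates here are hypothetical, not actual ClojureStorm output), a test that exercises the bulk-discount branch might record a distinct coordinate for each subexpression it evaluates:

```clojure
;; Hypothetical coverage entry for one test of process-order.
;; Each coordinate addresses one subexpression of the (if ...) body.
(def bulk-discount-coverage
  {12345 #{"3"        ;; the whole (if ...) form
           "3,1"      ;; (> (:quantity order) 10)
           "3,2"      ;; (* (:price order) 0.9)
           "3,2,1"    ;; (:price order)
           "3,2,2"}}) ;; the literal 0.9
```

A test that only hits the else branch would record "3,3" instead of the "3,2,…" coordinates, which is exactly the distinction line-level coverage loses.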

ClojureStorm's Instrumented Compiler

ClojureStorm is a fork of the Clojure compiler that instruments every expression during compilation. Created by Juan Monetta for the FlowStorm debugger, it provides exactly the hooks Heretic needs. (Thanks to Juan for building such a solid foundation - Heretic would not exist without ClojureStorm.)

The integration is surprisingly minimal:

(ns heretic.tracer
  (:import [clojure.storm Emitter Tracer]))

(def ^:private current-coverage
  "Atom of {form-id #{coords}} for the currently running test."
  (atom {}))

(defn record-hit! [form-id coord]
  (swap! current-coverage
         update form-id
         (fnil conj #{})
         coord))

(defn init! []
  ;; Configure what gets instrumented
  (Emitter/setInstrumentationEnable true)
  (Emitter/setFnReturnInstrumentationEnable true)
  (Emitter/setExprInstrumentationEnable true)

  ;; Set up callbacks
  (Tracer/setTraceFnsCallbacks
   {:trace-expr-fn (fn [_ _ coord form-id]
                     (record-hit! form-id coord))
    :trace-fn-return-fn (fn [_ _ coord form-id]
                          (record-hit! form-id coord))}))

When any instrumented expression evaluates, ClojureStorm calls our callback with two pieces of information:

  • form-id: A unique identifier for the top-level form (e.g., an entire defn)
  • coord: A path into the form's AST, like "3,2,1" meaning "third child, second child, first child"

Together, [form-id coord] pinpoints exactly which subexpression executed. This is the key that unlocks targeted test selection.

The Coordinate System

To connect a mutation in the source code to the coverage data, we need a way to uniquely address any subexpression. Think of it as a postal address for code - we need to say "the a inside the + call inside the function body" in a format that both the coverage tracer and mutation engine can agree on.

ClojureStorm addresses this with a path-based coordinate system. Consider this function as a tree:

(defn foo [a b] (+ a b))
   │
   ├─[0] defn
   ├─[1] foo
   ├─[2] [a b]
   └─[3] (+ a b)
            │
            ├─[3,0] +
            ├─[3,1] a
            └─[3,2] b

Each number represents which child to pick at each level. The coordinate "3,2" means "go to child 3 (the function body), then child 2 (the second argument to +)". That gives us the b symbol.

This works cleanly for ordered structures like lists and vectors, where children have stable positions. But maps are unordered - {:name "Alice" :age 30} and {:age 30 :name "Alice"} are the same value, so numeric indices would be unstable.

ClojureStorm solves this by hashing the printed representation of map keys. Instead of "0" for the first entry, a key like :name gets addressed as "K-1925180523":

{:name "Alice" :age 30}
   │
   ├─[K-1925180523] :name
   ├─[V-1925180523] "Alice"
   ├─[K-1524292809] :age
   └─[V-1524292809] 30

The hash ensures stable addressing regardless of iteration order.

With this addressing scheme, we can say "test X touched coordinate 3,1 in form 12345" and later ask "which tests touched the expression we're about to mutate?"
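To make the path-following concrete, here is a minimal sketch over plain sexprs (not ClojureStorm's actual implementation, and ignoring the hash-based map coordinates):

```clojure
(require '[clojure.string :as str])

;; Follow an integer coordinate path like "3,2" down a form by
;; indexing into the ordered children at each level.
(defn follow-coord [form coord]
  (reduce (fn [node idx] (nth (seq node) idx))
          form
          (map parse-long (str/split coord #","))))

(follow-coord '(defn foo [a b] (+ a b)) "3,2")
;; => b
```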

The Form-Location Bridge

Here's a problem we discovered during implementation: how do we connect the mutation engine to the coverage data?

The mutation engine uses rewrite-clj to parse and transform source files. It finds a mutation site at, say, line 42 of src/my/app.clj. But the coverage data is indexed by ClojureStorm's form-id - an opaque identifier assigned during compilation. We need to translate "file + line" into "form-id".

Fortunately, ClojureStorm's FormRegistry stores the source file and starting line for each compiled form. We build a lookup index:

(defn build-form-location-index [forms source-paths]
  (into {}
        (for [[form-id {:keys [form/file form/line]}] forms
              :when (and file line)
              :let [abs-path (resolve-path source-paths file)]
              :when abs-path]
          [[abs-path line] form-id])))

When the mutation engine finds a site at line 42, it searches for the form whose start line is the largest value less than or equal to 42 - that is, the innermost containing form. This gives us the ClojureStorm form-id, which we use to look up which tests touched that form.

This bridging layer is what allows Heretic to connect source transformations to runtime coverage, enabling targeted test execution.
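The innermost-form lookup can be sketched like this (a simplified version; the function name and exact shape are assumptions, not Heretic's actual code):

```clojure
;; Given the {[abs-path start-line] form-id} index built above, find the
;; form whose start line is the largest value <= the mutation's line.
(defn form-id-for-line [form-location-index path line]
  (let [candidates (filter (fn [[p l]] (and (= p path) (<= l line)))
                           (keys form-location-index))]
    (when (seq candidates)
      (get form-location-index (apply max-key second candidates)))))

;; A form starting at line 40 contains a mutation site at line 42:
(form-id-for-line {["src/my/app.clj" 1]  100
                   ["src/my/app.clj" 40] 200}
                  "src/my/app.clj" 42)
;; => 200
```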

Collection Workflow

Coverage collection runs each test individually and captures what it touches:

(defn run-test-with-coverage [test-var]
  (tracer/reset-current-coverage!)
  (try
    (test-var)
    (catch Throwable t
      (println "Test threw exception:" (.getMessage t))))
  {(symbol test-var) (tracer/get-current-coverage)})

The result is a map from test symbol to coverage data:

{my.app-test/test-addition
  {12345 #{"3" "3,1" "3,2"}    ;; form-id -> coords touched
   12346 #{"1" "2,1"}}
 my.app-test/test-subtraction
  {12345 #{"3" "4"}
   12347 #{"1"}}}

This gets persisted to .heretic/coverage/ with one file per test namespace, enabling incremental updates. Change a test file? Only that namespace gets recollected.

At this point we have a complete map: for every test, we know exactly which [form-id coord] pairs it touched. Now we need to generate mutations and look up which tests are relevant for each one.
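That lookup goes in the other direction: from a [form-id coord] pair to the tests that touched it. A sketch of inverting the per-test map into such an index (the function name and index shape are assumptions):

```clojure
(defn build-coord-to-tests
  "Invert {test {form-id #{coords}}} into {[form-id coord] #{tests}}."
  [per-test-coverage]
  (reduce
   (fn [idx [test-sym form->coords]]
     (reduce-kv
      (fn [idx form-id coords]
        (reduce (fn [idx coord]
                  (update idx [form-id coord] (fnil conj #{}) test-sym))
                idx
                coords))
      idx
      form->coords))
   {}
   per-test-coverage))

(get (build-coord-to-tests
      '{my.app-test/test-addition    {12345 #{"3" "3,1"}}
        my.app-test/test-subtraction {12345 #{"3"}}})
     [12345 "3"])
;; => #{my.app-test/test-addition my.app-test/test-subtraction}
```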

Phase 2: The Mutation Engine

With coverage data in hand, we need to actually mutate the code. This means:

  1. Parsing Clojure source into a navigable structure
  2. Finding locations where operators apply
  3. Transforming the source
  4. Hot-swapping the modified code into the running JVM

Parsing with rewrite-clj

rewrite-clj gives us a zipper over Clojure source that preserves whitespace and comments - essential for producing readable diffs:

(defn parse-file [path]
  (z/of-file path {:track-position? true}))

(defn find-mutation-sites [zloc]
  (->> (walk-form zloc)
       (remove in-quoted-form?)  ;; Skip '(...) and `(...)
       (mapcat (fn [z]
                 (let [applicable (ops/applicable-operators z)]
                   (map #(make-mutation-site z %) applicable))))))

The walk-form function traverses the zipper depth-first. At each node, we check which operators match. An operator is a data map with a matcher predicate:

(def swap-plus-minus
  {:id :swap-plus-minus
   :original '+
   :replacement '-
   :description "Replace + with -"
   :matcher (fn [zloc]
              (and (= :token (z/tag zloc))
                   (symbol? (z/sexpr zloc))
                   (= '+ (z/sexpr zloc))))})

Each mutation site captures the file, line, column, operator, and - critically - the coordinate path within the form. This coordinate is what connects a mutation to the coverage data from Phase 1.

Coordinate Mapping

The tricky part is converting between rewrite-clj's zipper positions and ClojureStorm's coordinate strings. We need bidirectional conversion for the round-trip:

(defn coord->zloc [zloc coord]
  (let [parts (parse-coord coord)]  ;; "3,2,1" -> [3 2 1]
    (reduce
     (fn [z part]
       (when z
         (if (string? part)      ;; Hash-based for maps/sets
           (find-by-hash z part)
           (nth-child z part)))) ;; Integer index for lists/vectors
     zloc
     parts)))

(defn zloc->coord [zloc]
  (loop [z zloc
         coord []]
    (cond
      (root-form? z) (vec coord)
      (z/up z)
      (let [part (if (is-unordered-collection? z)
                   (compute-hash-coord z)
                   (child-index z))]
        (recur (z/up z) (cons part coord)))
      :else (vec coord))))

The validation requirement is that these must be inverses:

(= coord (zloc->coord (coord->zloc zloc coord)))

With correct coordinate mapping, we can take a mutation at a known location and ask "which tests touched this exact spot?" That query is what makes targeted test execution possible.

Applying Mutations

Once we find a mutation site and can navigate to it, the actual transformation is straightforward:

(defn apply-mutation! [mutation]
  (let [{:keys [file form-id coord operator]} mutation
        operator-def (get ops/operators-by-id operator)
        original-content (slurp file)
        zloc (z/of-string original-content {:track-position? true})
        form-zloc (find-form-by-id zloc form-id)
        target-zloc (coord/coord->zloc form-zloc coord)
        replacement-str (ops/apply-operator operator-def target-zloc)
        modified-zloc (z/replace target-zloc
                                 (n/token-node (symbol replacement-str)))
        modified-content (z/root-string modified-zloc)]
    (spit file modified-content)
    (assoc mutation :backup original-content)))

Hot-Swapping with clj-reload

After modifying the source file, we need the JVM to see the change. clj-reload handles this correctly:

(ns heretic.reloader
  (:require [clj-reload.core :as reload]))

(defn init! [source-paths]
  (reload/init {:dirs source-paths}))

(defn reload-after-mutation! []
  (reload/reload {:throw false}))

Why clj-reload specifically? It solves problems that require :reload doesn't:

  1. Proper unloading: Calls remove-ns before reloading, preventing protocol/multimethod accumulation
  2. Dependency ordering: Topologically sorts namespaces, unloading dependents first
  3. Transitive closure: Automatically reloads namespaces that depend on the changed one

The mutation workflow becomes:

(with-mutation [m mutation]
  (reloader/reload-after-mutation!)
  (run-relevant-tests m))
;; Mutation automatically reverted in finally block

At this point we have the full pipeline: parse source, find mutation sites, apply a mutation, hot-reload, run targeted tests, restore. But running this once per mutation is still slow for large codebases. Phase 3 addresses that.

80+ Clojure-Specific Operators

The operator library is where Heretic's Clojure focus shows. Beyond the standard arithmetic and comparison swaps, we have:

Threading operators - catch ->/->> confusion:

(-> data (get :users) first)   ;; Original
(->> data (get :users) first)  ;; Mutant: wrong arg position

Nil-handling operators - expose nil punning mistakes:

(when (seq users) ...)   ;; Original: handles empty list
(when users ...)         ;; Mutant: breaks on empty list (truthy)

Lazy/eager operators - catch chunking and realization bugs:

(map process items)    ;; Original: lazy
(mapv process items)   ;; Mutant: eager, different memory profile

Destructuring operators - expose JSON interop issues:

{:keys [user-id]}   ;; Original: kebab-case
{:keys [userId]}    ;; Mutant: camelCase from JSON

The full set includes first/last, rest/next, filter/remove, conj/disj, some->/->, and qualified keyword mutations. These are the mistakes Clojure developers actually make.

With 80+ operators applied to a real codebase, mutation counts grow quickly. The next phase makes this tractable.

Phase 3: Optimization Techniques

With 80+ operators and a real codebase, mutation counts get large fast. A 1000-line project might generate 5000 mutations. Running the full test suite 5000 times is not practical.

Heretic uses several techniques to make this manageable.

Targeted Test Execution

This is the big one, enabled by Phase 1. Instead of running all tests for every mutation, we query the coverage index:

(defn tests-for-mutation [coverage-map mutation]
  (let [form-id (resolve-form-id (:form-location-index coverage-map) mutation)
        coord (:coord mutation)]
    (get-in coverage-map [:coord-to-tests [form-id coord]] #{})))

A mutation at (+ a b) might only be covered by 2 tests out of 200. We run those 2 tests in milliseconds instead of the full suite in seconds.

This is where the Phase 1 coverage investment pays off. But we can go further by reducing the number of mutations we generate in the first place.

Equivalent Mutation Detection

Some mutations produce semantically identical code. Detecting these upfront avoids wasted test runs:

;; (* x 0) -> (/ x 0) is NOT equivalent (divide by zero)
;; (* x 1) -> (/ x 1) IS equivalent (both return x)

(def equivalent-patterns
  [{:operator :swap-mult-div
    :context (fn [zloc]
               (some #(= 1 %) (rest (z/child-sexprs (z/up zloc)))))
    :reason "Multiplying or dividing by one has no effect"}

   {:operator :swap-lt-lte
    :context (fn [zloc]
               (let [[_ left right] (z/child-sexprs (z/up zloc))]
                 (and (= 0 right)
                      (non-negative-fn? (first left)))))
    :reason "(< (count x) 0) is always false"}])

The patterns cover boundary comparisons ((>= (count x) 0) is always true), function contracts ((nil? (str x)) is always false), and lazy/eager equivalences ((vec (map f xs)) equals (vec (mapv f xs))).

Filtering equivalent mutations prevents false "survived" reports. But we can also skip mutations that would be redundant to test.

Subsumption Analysis

Subsumption identifies when killing one mutation implies another would also be killed. If swapping < to <= is caught by a test, then swapping < to > would likely be caught too.

Based on the RORG (Relational Operator Replacement with Guard) research, we define subsumption relationships:

(def relational-operator-subsumption
  {'<  [:swap-lt-lte :swap-lt-neq :replace-comparison-false]
   '>  [:swap-gt-gte :swap-gt-neq :replace-comparison-false]
   '<= [:swap-lte-lt :swap-lte-eq :replace-comparison-true]
   ;; ...
   })

For each comparison operator, we only need to test the minimal set. The research shows this achieves roughly the same fault detection with 40% fewer mutations.

The subsumption graph also enables intelligent mutation selection:

(defn minimal-operator-set [operators]
  (set/difference
   operators
   ;; Remove any operator dominated by another in the set
   (reduce
    (fn [dominated op]
      (into dominated
            (set/intersection (dominated-operators op) operators)))
    #{}
    operators)))

These techniques reduce mutation count. The final optimization reduces the cost of each mutation.

Mutant Schemata: Compile Once, Select at Runtime

The most sophisticated optimization is mutant schemata. Instead of applying one mutation, reloading, testing, reverting, reloading for each mutation, we embed multiple mutations into a single compilation:

;; Original
(defn calculate [x] (+ x 1))

;; Schematized (with 3 mutations)
(defn calculate [x]
  (case heretic.schemata/*active-mutant*
    :mut-42-5-plus-minus (- x 1)
    :mut-42-5-1-to-0     (+ x 0)
    :mut-42-5-1-to-2     (+ x 2)
    (+ x 1)))  ;; original (default)

We reload once, then switch between mutations by binding a dynamic var:

(def ^:dynamic *active-mutant* nil)

(defmacro with-mutant [mutation-id & body]
  `(binding [*active-mutant* ~mutation-id]
     ~@body))

The workflow becomes:

(defn run-mutation-batch [file mutations test-fn]
  (let [schemata-info (schematize-file! file mutations)]
    (try
      (reload!)  ;; Once!
      (doseq [[id mutation] (:mutation-map schemata-info)]
        (with-mutant id
          (test-fn id mutation)))
      (finally
        (restore-file! schemata-info)
        (reload!)))))  ;; Once!

For a file with 50 mutations, this means 2 reloads instead of 100. The overhead of case dispatch at runtime is negligible compared to compilation cost.
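The dispatch pattern itself is language-agnostic. Here is a minimal Python sketch of the same "compile once, select at runtime" idea, for illustration only (the names and structure are invented, not Heretic's):

```python
# All mutants of a function are baked into one definition; a module-level
# switch selects which variant is active at runtime.
ACTIVE_MUTANT = None

def calculate(x):
    if ACTIVE_MUTANT == "plus-to-minus":
        return x - 1          # mutant 1
    if ACTIVE_MUTANT == "one-to-zero":
        return x + 0          # mutant 2
    return x + 1              # original (default)

def with_mutant(mutant_id, thunk):
    # Analogous to the with-mutant macro: bind, run, restore.
    global ACTIVE_MUTANT
    previous = ACTIVE_MUTANT
    ACTIVE_MUTANT = mutant_id
    try:
        return thunk()
    finally:
        ACTIVE_MUTANT = previous

# Switching mutants requires no recompilation:
results = {m: with_mutant(m, lambda: calculate(10))
           for m in [None, "plus-to-minus", "one-to-zero"]}
# results == {None: 11, "plus-to-minus": 9, "one-to-zero": 10}
```

The only per-mutant cost is the branch dispatch, which is exactly why the reload count drops from one per mutant to one per file.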

Operator Presets

Finally, we offer presets that trade thoroughness for speed:

(def presets
  {:fast #{:swap-plus-minus :swap-minus-plus
           :swap-lt-gt :swap-gt-lt
           :swap-and-or :swap-or-and
           :swap-nil-some :swap-some-nil}

   :minimal minimal-preset-operators  ;; Subsumption-aware

   :standard #{;; :fast plus...
               :swap-first-last :swap-rest-next
               :swap-thread-first-last}

   :comprehensive (set (map :id all-operators))})

The :fast preset uses ~15 operators that research shows catch roughly 99% of bugs. The :minimal preset uses subsumption analysis to eliminate redundant mutations. Both run much faster than :comprehensive while maintaining detection power.

Putting It Together

A mutation testing run with Heretic looks like:

  1. Collect coverage (once, cached): Run tests under ClojureStorm instrumentation, build expression-level coverage map
  2. Generate mutations: Parse source files, find all applicable operator sites
  3. Filter: Remove equivalent mutations, apply subsumption to reduce set
  4. Group by file: Prepare for schemata optimization
  5. For each file:
    • Build schematized source with all mutations
    • Reload once
    • For each mutation: bind *active-mutant*, run targeted tests
    • Restore and reload
  6. Report: Mutation score, surviving mutations, test effectiveness

The result is mutation testing that runs in seconds for typical projects instead of hours.


This covers the core implementation. A future post will explore Phase 4: AI-powered semantic mutations and hybrid equivalent detection - using LLMs to generate the subtle, domain-aware mutations that traditional operators miss.

Previously: Part 1 - Heretic: Mutation Testing in Clojure

Permalink

Clojure Deref (Dec 30, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

Last chance for the annual Clojure surveys!

Time is running out to take the Clojure surveys! Please help spread the word, and take a moment to fill them out if you haven’t already.

Fill out the 2025 State of Clojure Survey if you use any version or dialect of Clojure in any capacity.

Fill out the 2025 State of ClojureScript Survey if you use ClojureScript or dialects like Squint, Cherry, nbb, and such.

Thank you for your help!

Upcoming Events

Libraries and Tools

Debut release

  • crabjure - A fast static analyzer for Clojure and ClojureScript, written in Rust.

  • browser-jack-in - A web browser extension that lets you inject a Scittle REPL server into any browser page.

  • clamav-clj - An idiomatic, modern Clojure wrapper for ClamAV.

  • heretic - Mutation testing for Clojure - fast, practical, and integrated

Updates

  • Many Clojure contrib libs were updated to move the Clojure dependency to 1.11.4, which is past the CVE fixed in 1.11.2.

  • partial-cps 0.1.50 - A lean and efficient continuation passing style transform, includes async-await support.

  • csvx 68fd22c - A zero dependencies tool that enables you to control how to tokenize, transform and handle files with char(s) separated values in Clojure and ClojureScript.

  • recife 0.22.0 - A Clojure model checker (using the TLA+/TLC engine)

  • polylith 0.3.32 - A tool used to develop Polylith based architectures in Clojure.

  • nrepl 1.5.2 - A Clojure network REPL that provides a server and client, along with some common APIs of use to IDEs and other tools that may need to evaluate Clojure code in remote environments.

  • manifold 0.5.0 - A compatibility layer for event-driven abstractions

Permalink

Tetris-playing AI the Polylith way - Part 1

Tetris AI

In this blog series, I will show how to work with the Polylith architecture and how organizing code into components helps create a good structure for high-level functional style programming.

You might feel that organizing into components is unnecessary, and yes, for a tiny codebase like this I would agree. It's still easy to reason about the code and keep everything in mind, but as the codebase grows, so does the value of this structure, in terms of better overview, clearer system boundaries, and increased flexibility in how these building blocks can be combined into various systems.

We will get familiar with this by implementing a self-playing Tetris program in Clojure and Python while reflecting on the differences between the two languages.

The goal

The task for this first post is to place a T piece on a Tetris board (represented by a two-dimensional array):

[[0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,T,0,0,0]
 [0,0,0,0,0,T,T,T,0,0]]

We will put the code in the piece and board components in a Polylith workspace (output from the info command):

Poly info output

This will not be a complete guide to Polylith, Clojure, or Python, but I will explain the most important parts and refer to relevant documentation when needed.

The resulting source code from this first blog post in the series can be found here:

Workspace

We begin by installing the poly command line tool for Clojure, which we will use when working with the Polylith codebase:

brew install polyfy/polylith/poly

The next step is to create a Polylith workspace:

poly create workspace name:tetris-polylith top-ns:tetrisanalyzer

We now have a standard Polylith workspace for Clojure in place:

▾ tetris-polylith
  ▸ bases
  ▸ components
  ▸ development
  ▸ projects
  deps.edn
  workspace.edn

Python

We will use uv as package manager for Python (see setup for other alternatives). First we install uv:

curl -LsSf https://astral.sh/uv/install.sh | sh

Then we create the tetris-polylith-uv workspace directory, by executing:

uv init tetris-polylith-uv
cd tetris-polylith-uv
uv add polylith-cli --dev
uv sync

which creates:

README.md
main.py
pyproject.toml
uv.lock

Finally we create the standard Polylith workspace structure:

uv run poly create workspace --name tetrisanalyzer --theme loose

which adds:

▾ tetris-polylith-uv
  ▸ bases
  ▸ components
  ▸ development
  ▸ projects
  workspace.toml

The workspace requires some additional manual steps, documented here.

The piece component

Now we are ready to create our first component for the Clojure codebase:

poly create component name:piece

This adds the piece component to the workspace structure:

  ▾ components
    ▾ piece
      ▾ src
        ▾ tetrisanalyzer
          ▾ piece
            interface.clj
            core.clj
      ▾ test
        ▾ tetrisanalyzer
          ▾ piece
            interface-test.clj

If you have used Polylith with Clojure before, you know that you also need to manually add piece to deps.edn, which is described here.

Python

Let's do the same for Python:

uv run poly create component --name piece

This adds the piece component to the structure:

  ▾ components
    ▾ tetrisanalyzer
      ▾ piece
        __init__.py
        core.py
  ▾ test
    ▾ components
      ▾ tetrisanalyzer
        ▾ piece
          __init__.py
          test_core.py

Piece shapes

In Tetris, there are 7 different pieces that can be rotated, summing up to 19 shapes:

Pieces

Here we will store them in a multi-dimensional array where each possible piece shape is made up of four [x,y] cells, with [0,0] representing the upper left corner.

For example, the Z piece in its initial position (rotation 0) consists of the cells [0,0] [1,0] [1,1] [2,1]:

Z piece

This is how it looks in Clojure (commas are treated as whitespace in Clojure and are often omitted):

(ns tetrisanalyzer.piece.shape)

(def pieces [nil

             ;; I (1)
             [[[0 0] [1 0] [2 0] [3 0]]
              [[0 0] [0 1] [0 2] [0 3]]]

             ;; Z (2)
             [[[0 0] [1 0] [1 1] [2 1]]
              [[1 0] [0 1] [1 1] [0 2]]]

             ;; S (3)
             [[[1 0] [2 0] [0 1] [1 1]]
              [[0 0] [0 1] [1 1] [1 2]]]

             ;; J (4)
             [[[0 0] [1 0] [2 0] [2 1]]
              [[0 0] [1 0] [0 1] [0 2]]
              [[0 0] [0 1] [1 1] [2 1]]
              [[1 0] [1 1] [0 2] [1 2]]]

             ;; L (5)
             [[[0 0] [1 0] [2 0] [0 1]]
              [[0 0] [0 1] [0 2] [1 2]]
              [[2 0] [0 1] [1 1] [2 1]]
              [[0 0] [1 0] [1 1] [1 2]]]

             ;; T (6)
             [[[0 0] [1 0] [2 0] [1 1]]
              [[0 0] [0 1] [1 1] [0 2]]
              [[1 0] [0 1] [1 1] [2 1]]
              [[1 0] [0 1] [1 1] [1 2]]]

             ;; O (7)
             [[[0 0] [1 0] [0 1] [1 1]]]])

Python

Here is how it looks in Python:

pieces = [None,

          # I (1)
          [[[0, 0], [1, 0], [2, 0], [3, 0]],
           [[0, 0], [0, 1], [0, 2], [0, 3]]],

          # Z (2)
          [[[0, 0], [1, 0], [1, 1], [2, 1]],
           [[1, 0], [0, 1], [1, 1], [0, 2]]],

          # S (3)
          [[[1, 0], [2, 0], [0, 1], [1, 1]],
           [[0, 0], [0, 1], [1, 1], [1, 2]]],

          # J (4)
          [[[0, 0], [1, 0], [2, 0], [2, 1]],
           [[0, 0], [1, 0], [0, 1], [0, 2]],
           [[0, 0], [0, 1], [1, 1], [2, 1]],
           [[1, 0], [1, 1], [0, 2], [1, 2]]],

          # L (5)
          [[[0, 0], [1, 0], [2, 0], [0, 1]],
           [[0, 0], [0, 1], [0, 2], [1, 2]],
           [[2, 0], [0, 1], [1, 1], [2, 1]],
           [[0, 0], [1, 0], [1, 1], [1, 2]]],

          # T (6)
          [[[0, 0], [1, 0], [2, 0], [1, 1]],
           [[0, 0], [0, 1], [1, 1], [0, 2]],
           [[1, 0], [0, 1], [1, 1], [2, 1]],
           [[1, 0], [0, 1], [1, 1], [1, 2]]],

          # O (7)
          [[[0, 0], [1, 0], [0, 1], [1, 1]]]]

In Clojure we had to specify the namespace at the top of the file, but in Python, the namespace is implicitly given based on the directory hierarchy.

Here we put the above code in shape.py, and it will therefore automatically belong to the tetrisanalyzer.piece.shape module:

▾ tetris-polylith-uv
  ▾ components
    ▾ tetrisanalyzer
      ▾ piece
        __init__.py
        shape.py

Interface

In Polylith, only what's in the component's interface is exposed to the rest of the codebase.

In Python, we can optionally control what gets exposed in wildcard imports (from module import *) by defining the __all__ variable in the __init__.py module. However, even without __all__, all public names (those not starting with _) are still accessible through explicit imports.

This is how the piece interface in __init__.py looks:

from tetrisanalyzer.piece.core import I, Z, S, J, L, T, O, piece

__all__ = ["I", "Z", "S", "J", "L", "T", "O", "piece"]

We could have put all the code directly in __init__.py, but it's a common pattern in Python to keep this module clean by delegating to implementation modules like core.py:

from tetrisanalyzer.piece import shape

I = 1
Z = 2
S = 3
J = 4
L = 5
T = 6
O = 7


def piece(p, rotation):
    return shape.pieces[p][rotation]

The piece component now has these files:

▾ tetris-polylith-uv
  ▾ components
    ▾ tetrisanalyzer
      ▾ piece
        __init__.py
        core.py
        shape.py

Clojure

In Clojure, the interface is often just a single namespace with the name interface:

  ▾ components
    ▾ piece
      ▾ src
        ▾ tetrisanalyzer
          ▾ piece
            interface.clj

Implemented like this:

(ns tetrisanalyzer.piece.interface
  (:require [tetrisanalyzer.piece.shape :as shape]))

(def I 1)
(def Z 2)
(def S 3)
(def J 4)
(def L 5)
(def T 6)
(def O 7)

(defn piece [p rotation]
  (get-in shape/pieces [p rotation]))

A language comparison

Let's look at the differences between the two languages:

;; Clojure
(defn piece [p rotation]
  (get-in shape/pieces [p rotation]))
# Python
def piece(p, rotation):
    return shape.pieces[p][rotation]

An obvious difference here is that Clojure is a Lisp dialect, while Python uses a more traditional syntax. This means that when you want something to happen in Clojure, you put the operator first in a list:

  • (defn piece ...)
    
    is a macro that expands to (def piece (fn ...)) which defines the function piece
  • (get-in shape/pieces [p rotation])
    
    is a call to the function clojure.core/get-in, where:
    • The first argument shape/pieces refers to the pieces vector in the shape namespace
    • The second argument creates the vector [p rotation], whose two elements are:
      • p is a value between 1 and 7, representing one of the pieces: I, Z, S, J, L, T, and O
      • rotation is a value between 0 and 3, representing the number of 90-degree rotations

Another significant difference is that data is immutable in Clojure, while in Python it's mutable (like the pieces data structure).

However, a similarity is that both languages are dynamically typed, but use concrete types at runtime:

;; Clojure
(class \Z) ;; Returns java.lang.Character
(class 2)  ;; Returns java.lang.Long
(class Z)  ;; Returns java.lang.Long (since Z is bound to 2)
# Python
type('Z')  # Returns <class 'str'> (characters are strings in Python)
type(2)    # Returns <class 'int'>
type(Z)    # Returns <class 'int'> (since Z is bound to 2)

The languages also share another feature: type information can be added optionally. In Clojure, this is done using type hints for Java interop and performance optimization. In Python, type hints (introduced in Python 3.5) can be added using the typing module, though they are not enforced at runtime and are primarily used for static type checking with tools like mypy.
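As a sketch of the Python side, the piece function could be annotated like this (a hypothetical variant; the code in this post stays untyped):

```python
# Type hints document the shape of the data but are ignored at runtime;
# a checker like mypy can verify callers statically. Index 0 is unused
# padding so that piece ids 1..7 index the list directly (only the I
# piece is shown here).
pieces: list[list[list[list[int]]]] = [
    [],
    [[[0, 0], [1, 0], [2, 0], [3, 0]],   # I, rotation 0
     [[0, 0], [0, 1], [0, 2], [0, 3]]],  # I, rotation 1
]

def piece(p: int, rotation: int) -> list[list[int]]:
    return pieces[p][rotation]
```

Calling piece(1, 0) returns the I piece's rotation-0 cells, exactly as in the untyped version.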

The board component

Now let's continue by creating a board component:

poly create component name:board

This adds the board component to the workspace:

▾ tetris-polylith
  ▸ bases
  ▾ components
    ▸ board
    ▸ piece
  ▸ development
  ▸ projects

And this is how we create a board component in Python:

uv run poly create component --name board

This adds the board component to the workspace:

  ▾ components
    ▾ tetrisanalyzer
      ▸ board
      ▸ piece
  ▾ test
    ▾ components
      ▾ tetrisanalyzer
        ▸ board
        ▸ piece

The Clojure code that places a piece on the board is implemented like this:

(ns tetrisanalyzer.board.core)

(defn empty-board [width height]
  (vec (repeat height (vec (repeat width 0)))))

(defn set-cell [board p x y [cx cy]]
  (assoc-in board [(+ y cy) (+ x cx)] p))

(defn set-piece [board p x y piece]
  (reduce (fn [board cell]
            (set-cell board p x y cell))
          board
          piece))

In Python (which by convention uses two blank lines between top-level functions):

def empty_board(width, height):
    return [[0] * width for _ in range(height)]


def set_cell(board, p, x, y, cell):
    cx, cy = cell
    board[y + cy][x + cx] = p


def set_piece(board, p, x, y, piece):
    for cell in piece:
        set_cell(board, p, x, y, cell)
    return board

Let's go through these functions.

empty-board

(defn empty-board [width height]
  (vec (repeat height (vec (repeat width 0)))))

To explain this function, we can break it down into smaller statements:

(defn empty-board [width height]  ;; [4 2]
  (let [row-list (repeat width 0) ;; (0 0 0 0)
        row (vec row-list)        ;; [0 0 0 0]
        rows (repeat height row)  ;; ([0 0 0 0] [0 0 0 0])
        board (vec rows)]         ;; [[0 0 0 0] [0 0 0 0]]
    board))

We convert the lists to vectors using the vec function, so that we can (later) access them by index. Note that it is the last value in the function (board) that is returned.

empty_board

def empty_board(width, height):
    return [[0] * width for _ in range(height)]

This can be rewritten as:

def empty_board(width, height): # width = 4, height = 2
    row = [0] * width           # row = [0, 0, 0, 0]
    rows = range(height)        # rows = lazy sequence with the length of 2
    board = [row for _ in rows] # board = [[0, 0, 0, 0], [0, 0, 0, 0]]
    return board

The [row for _ in rows] statement is a list comprehension, a way to create data structures in Python by looping.

We loop through range(height), which yields the values 0 and 1, but we're not interested in these values, so we use the _ placeholder.
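One caveat about the step-by-step rewrite: binding row once and reusing it makes every board row the same list object, while the original one-liner builds a fresh row on each iteration. The difference only shows up once rows are mutated:

```python
def empty_board_shared(width, height):
    row = [0] * width
    return [row for _ in range(height)]          # every row IS the same list

def empty_board_fresh(width, height):
    return [[0] * width for _ in range(height)]  # a new row each iteration

shared = empty_board_shared(4, 2)
shared[0][0] = 9
# shared == [[9, 0, 0, 0], [9, 0, 0, 0]]  (the write shows up in both rows)

fresh = empty_board_fresh(4, 2)
fresh[0][0] = 9
# fresh == [[9, 0, 0, 0], [0, 0, 0, 0]]
```

Since set_cell later mutates rows in place, the one-liner form is the safe one.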

set-cell

(defn set-cell [board p x y [cx cy]]
  (assoc-in board [(+ y cy) (+ x cx)] p))

Let's break it down into an alternative implementation and call it with:

board = [[0 0 0 0] [0 0 0 0]]
p = 6, x = 2, y = 0, cell = [0 1]
(defn set-cell [board p x y cell]
  (let [[cx cy] cell             ;; Destructures [0 1] into cx = 0, cy = 1
        xx (+ x cx)              ;; xx = 2 + 0 = 2
        yy (+ y cy)]             ;; yy = 0 + 1 = 1
    (assoc-in board [yy xx] p))) ;; [[0 0 0 0] [0 0 6 0]]

In the original version, the destructuring of [cx cy] happens directly in the function's parameter list. In this example, assoc-in works like board[y][x] = p in Python, with the difference that it doesn't mutate the board but instead returns a new immutable board.

set_cell

def set_cell(board, p, x, y, cell):
    cx, cy = cell
    board[y + cy][x + cx] = p  # [[0,0,0,0] [0,0,6,0]]

As mentioned earlier, this code mutates the two-dimensional list in place. It doesn't return anything, unlike the Clojure version, which returns a new board with one cell changed.
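For comparison, a Python set_cell could mimic the Clojure semantics by returning a new board and leaving the input untouched. This is a hypothetical variant, not the code used in this post:

```python
def set_cell_pure(board, p, x, y, cell):
    # Copy only the affected row; the remaining rows are shared with the
    # input board, much like structural sharing in Clojure's vectors.
    cx, cy = cell
    new_row = list(board[y + cy])
    new_row[x + cx] = p
    return board[:y + cy] + [new_row] + board[y + cy + 1:]

board = [[0, 0, 0, 0], [0, 0, 0, 0]]
new_board = set_cell_pure(board, 6, 2, 0, [0, 1])
# new_board == [[0, 0, 0, 0], [0, 0, 6, 0]], while board is unchanged
```
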

set-piece

(defn set-piece [board p x y piece]
  (reduce (fn [board cell]
            (set-cell board p x y cell))
          board   ;; An empty board as initial value
          piece)) ;; cells: [[1 0] [0 1] [1 1] [2 1]]

If you are new to reduce, think of it as a loop that processes each element in a collection, accumulating a result as it goes. The first call to set-cell uses the initial board and the first cell [1 0] from piece; the next call uses the board returned by set-cell and the second cell [0 1]; and so on, until all cells in piece have been applied and the final board is returned.

set_piece

def set_piece(board, p, x, y, piece):
    for cell in piece:
        set_cell(board, p, x, y, cell)
    return board

The Python version is pretty straightforward, with a for loop that mutates the board. We choose to return the board to make the function more flexible, allowing it to be used in expressions and enabling method chaining, a common Python pattern, even though the board is already mutated in place.
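Python also has the Clojure-style fold in functools.reduce, so the reduce version translates almost word for word. A sketch (using a non-mutating set_cell helper so the fold mirrors the Clojure semantics, unlike the in-place version above):

```python
from functools import reduce

def set_cell(board, p, x, y, cell):
    # Non-mutating helper: copy the affected row, share the rest.
    cx, cy = cell
    row = list(board[y + cy])
    row[x + cx] = p
    return board[:y + cy] + [row] + board[y + cy + 1:]

def set_piece(board, p, x, y, piece):
    # reduce threads the accumulated board through set_cell for each cell,
    # just like the Clojure reduce.
    return reduce(lambda b, cell: set_cell(b, p, x, y, cell), piece, board)

board = [[0, 0, 0, 0], [0, 0, 0, 0]]
z_cells = [[1, 0], [0, 1], [1, 1], [2, 1]]  # Z piece, rotation 0
result = set_piece(board, 2, 0, 0, z_cells)
# result == [[0, 2, 0, 0], [2, 2, 2, 0]], and board is unchanged
```
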

Test

The test looks like this in Clojure:

(ns tetrisanalyzer.board.core-test
  (:require [clojure.test :refer :all]
            [tetrisanalyzer.piece.interface :as piece]
            [tetrisanalyzer.board.core :as board]))

(def empty-board [[0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]])

(deftest empty-board-test
  (is (= empty-board
         (board/empty-board 10 15))))

(deftest set-piece-test
  (let [T piece/T
        rotate-two-times 2
        piece-t (piece/piece T rotate-two-times)
        x 5
        y 13]
    (is (= [[0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 T 0 0 0]
            [0 0 0 0 0 T T T 0 0]]
           (board/set-piece empty-board T x y piece-t)))))

Let's execute the tests to check that everything works as expected:

poly test :dev
Poly test output

The tests passed!

Python

Now, let's add a Python test for the board:

from tetrisanalyzer import board, piece

empty_board = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
]


def test_empty_board():
    assert empty_board == board.empty_board(10, 15)


def test_set_piece():
    T = piece.T
    rotate_two_times = 2
    piece_t = piece.piece(T, rotate_two_times)
    x = 5
    y = 13
    expected = [
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, T, 0, 0, 0],
        [0, 0, 0, 0, 0, T, T, T, 0, 0],
    ]

    assert expected == board.set_piece(empty_board, T, x, y, piece_t)

Let's install pytest:

uv add pytest --dev

And run the tests:

uv run pytest
Pytest output

With that, we have finished the first post in this blog series!

If you're eager to see a self-playing Tetris program, I happen to have made a couple in other languages that you can watch here.

Tetris Analyzer Scala
Tetris Analyzer C++
Tetris Analyzer Tool

Happy Coding!

Permalink

Heretic: Mutation Testing in Clojure

Heretic

Your tests pass. Your coverage is high. You deploy.

Three days later, a bug surfaces in a function your tests definitely executed. The coverage report confirms it: that line is green. Your test ran the code. So how did a bug slip through?

Because coverage measures execution, not verification.

(defn apply-discount [price user]
  (if (:premium user)
    (* price 0.8)
    price))

(deftest apply-discount-test
  (is (number? (apply-discount 100 {:premium true})))
  (is (number? (apply-discount 100 {:premium false}))))

Coverage: 100%. Every branch executed. Tests: green.

But swap 0.8 for 1.2? Tests pass. Change * to /? Tests pass. Flip (:premium user) to (not (:premium user))? Tests pass.

The tests prove some number comes back. They say nothing about whether it's the right number.

The Question Nobody's Asking

Mutation testing asks a harder question: if I introduced a bug, would any test notice?

The technique is simple. Take your code, introduce a small change (a "mutant"), and run your tests. If a test fails, the mutant is "killed" - your tests caught the bug. If all tests pass, the mutant "survived" - you've found a gap in your verification.
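The kill/survive loop fits in a few lines of Python. This is a toy illustration of the idea, not Heretic's API:

```python
# Each mutant is a small variant of the function under test. A mutant is
# "killed" if some test fails against it, and "survives" if all tests pass.
def apply_discount(price, premium):
    return price * 0.8 if premium else price

mutants = {
    "inverted-discount": lambda price, premium: price * 1.2 if premium else price,
    "flipped-branch":    lambda price, premium: price if premium else price * 0.8,
}

def weak_test(f):
    # Mirrors a coverage-only test: just checks that *a* number comes back.
    return isinstance(f(100, True), (int, float)) and isinstance(f(100, False), (int, float))

def strong_test(f):
    # Actually verifies the values.
    return f(100, True) == 80 and f(100, False) == 100

def survivors(test):
    return sorted(name for name, m in mutants.items() if test(m))

# survivors(weak_test)   == ["flipped-branch", "inverted-discount"]  (gaps!)
# survivors(strong_test) == []                                       (all killed)
```

The weak test lets every mutant survive; the strong test kills them all, which is exactly the signal mutation testing reports.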

This isn&apost new. PIT does it for Java. Stryker does it for JavaScript. cargo-mutants does it for Rust.

Clojure hasn&apost had a practical option.

The only dedicated tool, jstepien/mutant, was archived this year as "wildly experimental." You can run PIT on Clojure bytecode, but bytecode mutations bear no relationship to the mistakes Clojure developers actually make. You'll get mutations like "swap IADD for ISUB" when what you want is "swap -> for ->>" or "change :user-id to :userId."

Why Clojure Makes This Hard

Mutation testing has a performance problem everywhere. Run 500 mutations, execute your full test suite for each one, and you&aposre measuring build times in hours. Most developers try it once, watch the clock, and never run it again.

But Clojure adds unique challenges:

Homoiconicity cuts both ways. Code-as-data makes programmatic transformation elegant, but distinguishing "meaningful mutation" from "syntactic noise" gets subtle when everything is just nested lists.

Macros muddy the waters. A mutation to macro input might not change the expanded code. A mutation inside a macro definition might break in ways that have nothing to do with your production logic.

The bugs we make are language-specific. Threading macro confusion, nil punning traps, destructuring gotchas from JSON interop, keyword naming collisions - these aren't + becoming -. They're mistakes that come from thinking in Clojure.

What If It Could Be Fast?

The insight that makes Heretic practical: most mutations only need 2-3 tests.

When you mutate a single expression, you don't need your entire test suite. You need only the tests that exercise that expression. Usually that's a handful of tests, not hundreds.

The challenge is knowing which ones. Not just which functions they call, but which subexpressions they touch. The + inside (if condition (+ a b) (* a b)) might be covered by different tests than the *.

Heretic builds this map using ClojureStorm, the instrumented compiler behind FlowStorm. Run your tests once under instrumentation. From then on, each mutation runs only the tests that actually touch that code.

Instead of running 200 tests per mutation, we run 2. Instead of hours, seconds.

What If It Understood Clojure?

Generic operators miss the bugs we actually make:

;; The mutation you want: threading macro confusion
(-> data (get :users) first)     ; Original
(->> data (get :users) first)    ; Mutant: wrong arg position, wrong result

;; The mutation you want: nil punning trap
(when (seq users) (map :name users))   ; Original (handles empty)
(when users (map :name users))         ; Mutant (breaks on empty list)

;; The mutation you want: destructuring gotcha
{:keys [user-id name]}           ; Original (kebab-case)
{:keys [userId name]}            ; Mutant (camelCase from JSON)

Heretic has 65+ mutation operators designed for Clojure idioms. Swap first for last. Change rest to next. Replace -> with some->. Mutate qualified keywords. The mutations you see will be the bugs you recognize.

What If It Could Think?

Here's a finding that should worry anyone relying on traditional mutation testing: research shows that nearly half of real-world faults have no strongly coupled traditional mutant. The bugs that escape to production aren't the ones that flip operators. They're the ones that invert business logic.

;; Traditional mutation: swap * for /
(* price 0.8)  -->  (/ price 0.8)     ; Absurd. Nobody writes this bug.

;; Semantic mutation: invert the discount
(* price 0.8)  -->  (* price 1.2)     ; Premium users pay MORE. Plausible bug.

A function called apply-discount should never increase the price. That's the invariant tests should verify. An AI can read function names, docstrings, and context to generate the mutations that test whether your tests understand the code's purpose.

This hybrid approach - fast deterministic mutations for the common cases, intelligent semantic mutations for the subtle ones - is where Heretic is heading. Meta's ACH system proved the pattern works at industrial scale.

Why "Heretic"?

Clojure discourages mutation. Values are immutable. State changes through controlled transitions. The design philosophy is that uncontrolled mutation leads to bugs.

So there's something a bit ironic about a tool that deliberately introduces mutations to find those bugs. We mutate your code to prove your tests would catch such a change if it happened accidentally - to verify that the discipline holds.


This is the first in a series on building Heretic. Upcoming posts will cover how ClojureStorm enables expression-level coverage mapping, how we use rewrite-clj and clj-reload for hot-swapping mutants, and the optimization techniques that make this practical for real codebases.

If your coverage is high but bugs still slip through, you&aposre measuring the wrong thing.

Permalink

One csv parser to rule them all

One would think that parsing CSV files is pretty straightforward, until you get bitten by the many kinds of CSV files that exist in the wild. Many years ago, I wrote a small CSV reader with the following requirements in mind:

  • Should not depend on anything other than Clojure itself
  • Should allow me to control how I tokenize and transform lines
  • Should give me complete control over the delimiting character or characters, file encoding, the number of lines to read, and error handling

The result is csvx. I updated it to work across Clojure and ClojureScript, in both Node.js and browser environments. The entire code is less than 200 lines, including comments and blank lines. If you find yourself in need of a CSV reader with the above requirements, you are welcome to steal the code. Enjoy!

Permalink

Mixing Swift and Clojure in Your iOS App - Scittle

In my previous article, I showed how to embed an S7 Scheme interpreter in an iOS app. This time, I will show you how to embed a ClojureScript interpreter, or at least a dialect of it.

Clojure itself runs on the JVM, and there’s not really a way to embed the JVM in an iOS app. Maybe once swift-java gets rolling.

There is GraalVM, which lets you compile Java code to native code, but it doesn’t support compiling for iOS. Babashka, a native Clojure dialect interpreter, uses GraalVM.

Permalink

Building elegant interfaces with ClojureScript, React, and UIx

During Clojure South, João Lanjoni, Software Engineer at Nubank, addressed a central challenge of modern web development: how to combine the ergonomics of ClojureScript with the maturity of React to build scalable, high-performance interfaces. 

According to João, the solution is UIx, a tool that represents the new generation of bridges that further aligns the Clojure universe with the React ecosystem. In his session, he detailed the context, the limitations of previous approaches, and the value of UIx as a new, efficient entry point for React developers into ClojureScript.

From 2013 to today: React and ClojureScript in perspective

Since its launch in 2013, React has redefined the structure of frontend applications by introducing concepts like consistent reactivity. The ClojureScript community quickly responded with idiomatic interfaces like Reagent, which became the de facto standard due to its solidity, providing a minimalistic interface between ClojureScript and React that uses a Hiccup-like syntax to define components. With the arrival of functional components and hooks, starting around 2019, new wrappers emerged that provided a direct way to use functional components (instead of the old class-based components).

However, as React continuously evolved towards modern patterns, including concurrent rendering, functional components, and new ways to manage component state, Reagent remained tied to class-based components, mainly for backward compatibility. This mismatch resulted in several limitations: performance problems in large codebases (due to Hiccup parsing at runtime), friction with functional components (users may have to declare every functional component usage explicitly, even though such components are the React standard), and hindered interoperability with modern React libraries such as Material UI, Mantine, and Ant Design, widening the gap between the two ecosystems.

What UIx changes in your code

UIx emerges to resolve this divergence. Acting as a thin interface between ClojureScript and modern React, its focus is technical and pragmatic: it offers a minimal abstraction layer, more predictable performance, and the direct use of functional components and hooks. Furthermore, it ensures native interoperability with the React ecosystem, allowing the lifecycle and state management to be handled directly by React itself. 

“If React already handles state and lifecycle management well, why not let it do that?”

João Lanjoni, Software Engineer at Nubank

Instead of creating a complete framework or adding unnecessary abstractions, UIx is a lightweight bridge, leveraging what modern React does best, resulting in a ClojureScript codebase with idiomatic syntax but identical behavior to modern React.

UIx component structure

In practical terms, UIx centralizes component construction around two elements: defui for declaring React components and $ for rendering elements in an explicit and lightweight way. Component bodies process props identically to React. Hooks such as useState are exposed using idiomatic ClojureScript conventions, like use-state, with UIx handling the translation to native React APIs. This ergonomics combines the best of ClojureScript syntax with the React architecture, which, according to João, eliminates the need to train React developers in the internal details of layers like Reagent or Re-frame, keeping the mental model aligned with the React mainstream.
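As a rough sketch of these ideas (the component and prop names here are illustrative, not from the talk), a small UIx component might look like this:

```clojure
(ns example.counter
  (:require [uix.core :refer [defui $ use-state]]))

;; defui declares a React component; $ creates elements explicitly.
(defui counter [{:keys [title]}]
  ;; use-state is UIx's idiomatic wrapper over React's useState hook;
  ;; the setter accepts a value or an updater function, just like React.
  (let [[n set-n!] (use-state 0)]
    ($ :div
       ($ :h3 title)
       ($ :button {:on-click #(set-n! inc)}
          (str "Clicked " n " times")))))
```

Note that the body is plain ClojureScript, but the component itself is a regular functional React component, so React owns its state and lifecycle.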

Performance in figures

A highlight of the presentation was a chart, created by Roman Liutikov, the UIx maintainer, comparing call stack depth when rendering a simple component in pure React, UIx, and Reagent. React exhibits the shortest path; UIx, adding only a thin layer, follows closely. In contrast, Reagent, because Hiccup is interpreted at runtime, shows a significantly deeper call stack. While the difference is minimal in small applications, the impact on predictability and performance grows notable in products with hundreds or thousands of components.

Who is already using UIx in production

João presented three real-world examples, all highlighted on the project’s official page:

  • Metosin, one of the largest Clojure consultancies in Europe;
  • Pitch, an AI presentation platform with amazing slide decks;
  • Cognician, an e-learning platform for personal development.

The Pitch case is particularly impressive.

The team migrated 2,500 components from Reagent to UIx, maintained compatibility with Re-frame, and saw improvements in predictability and performance.

Metosin, meanwhile, employs Juho Teperi, one of the main contributors to Reagent. He built an example full-stack project using Clojure and ClojureScript and chose UIx for the web interface, using Material UI as the component library without any special wrapper.

When someone who helped build the previous tool begins to advocate for the new approach, it says a lot about where the technology stands, all the more so with the launch of a new Reagent version that introduces functional components by default and a thinner hooks wrapper (also inspired by UIx).

Reducing the developer learning curve

UIx’s value extends to the hiring and development of engineers, which opens a path for more professionals to enter the ClojureScript ecosystem without the requirement of mastering the intricacies of Reagent, Re-frame, or the atom-based state model from day one. It represents a pragmatic approach to lowering barriers without sacrificing the benefits of a functional and declarative language.

“The greatest value of UIx is allowing React developers to write ClojureScript with a minimal learning curve.”

João Lanjoni, Software Engineer at Nubank

When UIx is the best choice

UIx is especially recommended for modern, complex front-end applications and for teams already familiar with React. It is ideal for codebases that rely heavily on hooks and for projects requiring interoperability with the latest React libraries, with strong long-term growth potential. The library is intentionally simple: rather than reinventing global state management or adding unnecessary layers, it stays compatible with mature React state libraries like Zustand and Jotai, and you can even manage global state with a custom hook that subscribes to a Clojure atom (similar in spirit to those libraries).
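The atom-backed pattern mentioned above could be sketched roughly like this (the hook and atom names are hypothetical, not part of the UIx API):

```clojure
(ns example.state
  (:require [uix.core :as uix]))

;; A plain Clojure atom holding global state (hypothetical example).
(defonce app-state (atom {:count 0}))

;; Custom hook: subscribes a component to the atom and re-renders
;; whenever its value changes.
(defn use-app-state []
  (let [[state set-state!] (uix/use-state @app-state)]
    (uix/use-effect
      (fn []
        (let [k (gensym "watch")]
          (add-watch app-state k (fn [_ _ _ new-val] (set-state! new-val)))
          ;; returning a function registers it as the effect's cleanup
          #(remove-watch app-state k)))
      [])
    state))
```

The point of the pattern is that React still drives rendering; the atom is just an external store the hook subscribes to.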

In essence, UIx does not seek to replace React but rather to act as a thin, modern, and pragmatic bridge. Its goal is to allow teams to build scalable front-ends with the power of React, while preserving the expressiveness and elegance of the Clojure philosophy and syntax. For complex and modern projects in ClojureScript, UIx may be the missing link.

The post Building elegant interfaces with ClojureScript, React, and UIx appeared first on Building Nubank.

Permalink

2025 Highlights

Some notes on the year.

Movies/TV

Lots of TV shows this year. These are some of the ones that stood out.

Great

  • Andor
  • Adolescence
  • The Rehearsal - Season 2
  • The Pitt (probably my favourite of the year)
  • The Chair Company
  • Squid Game Season 3 (might be controversial to have this here, but I enjoyed it)
  • (movie) Jia Zhangke’s “Caught by the Tides”. Deeply moving meditation on time, love, displacement and process.
  • Long Story Short
  • (movie) No Other Choice
  • The Studio
  • I also went to a screening of Kwaidan (1964) this year and it was incredible.

Honourable Mentions

  • The Eternaut
  • Pachinko - Season 2
  • Severance - Season 2
  • Foundation - Season 3
  • Dept. Q
  • Alice in Borderland - Season 3
  • Slow Horses

Disappointments

  • The Last of Us - Season 2
  • (movie) One Battle After Another - had its good points definitely, but I always have very high expectations for PTA and the last two let me down.
  • Alien: Earth - I did really enjoy this, but it had a lot of problems too (as an ‘Alien’ installment)

Books

Not too much reading this year, but my favourite was definitely “Every Living Thing” (Jason Roberts).

I also enjoyed:

  • Solaris
  • Pachinko
  • Drive Your Plough Over the Bones of the Dead
  • Delta V

Travel

Some for work, some for pleasure:

  • Japan (I visited many places in this wonderful country! Highlights - Kyoto, Naoshima Island)
  • Seattle
  • Baku

Programming

Continuing to learn more about Clojure. I program purely as a hobby.

I participated in the first Scinoj Lite conference, which had some great talks. My project looked at ways of evaluating LLMs (from a very basic, almost ‘naive’, perspective).

Write-up of my LLM evaluation project

Played around with the new Clojure ‘flow’ library.

Clojure Flow Blog Post
Clojure Flow project

I started a webscraping project that is trying to map Irish-language content on the .ie domain.

Irish language webscraping project

I also enjoyed this year’s advent of code.

Advent of Code (clojure)

Permalink

Clojure Deref (Dec 23, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

The annual Clojure surveys are live

Help shape the future of Clojure!

Whether you use Clojure, ClojureScript, Babashka, or any other Clojure dialect, please fill out the 2025 State of Clojure Survey and spread the word on social media.

This survey gives us the best snapshot of the Clojure community, so help us get as many participants as possible.

If you use ClojureScript or dialects like Squint, Cherry, nbb, and such, please fill out the 2025 State of ClojureScript Survey and share it with others.

Thank you for your help!

Upcoming Events

Libraries and Tools

Debut release

  • cljs-uix-electron - Uix + Electron starter

  • cljs-uix-wails - Wails + ClojureScript starter

  • Oak - Oak is a Free and Open Source Identity Provider that you can host yourself

  • immersa - Open Source Web-based 3D Presentation Tool

  • bb-timemachine - Run code back in Git-time.

  • solid-cljs - ClojureScript bindings to Solid

  • clojars-download-stats - An always up-to-date, complete SQL export of artifacts’ daily downloads since November 2012

  • malt - Malli-Typed interfaces for Clojure

  • distributed-scope - Run one lexical scope across distributed peers.

Updates

  • repath-studio 0.4.11 - A local web-based vector graphics editor that combines procedural tooling with traditional design workflows.

  • dtype-next 10.000-beta-11 - A Clojure library designed to aid in the implementation of high performance algorithms and systems.

  • repl-mcp d00f661 - Model Context Protocol Clojure support including REPL integration with development tools.

  • virtuoso 0.1.2 - A number of trivial wrappers on top of virtual threads

  • bbin 0.2.5 - Install any Babashka script or project with one command

  • stripe-clojure 2.1.0 - Clojure SDK for the Stripe API.

  • muutos 2025-12-18 - Muutos is a zero-dependency Clojure library for reacting to changes in a PostgreSQL database.

  • cherry 0.5.34 - Experimental ClojureScript to ES6 module compiler

  • nbb 1.3.205 - Scripting in Clojure on Node.js using SCI

  • replicant 2025.12.1 - A data-driven rendering library for Clojure(Script) that renders hiccup to DOM or to strings.

  • joyride 0.0.72 - Making VS Code Hackable like Emacs since 2022

  • process 0.6.25 - Clojure library for shelling out / spawning sub-processes

  • fireworks 0.19.0 - Fireworks is a themeable tapping library for Clojure, ClojureScript, and Babashka.

  • bling 0.9.2 - Rich text console printing for Clojure, ClojureScript, and Babashka.

  • clj-kondo 2025.12.23 - Static analyzer and linter for Clojure code that sparks joy

  • calva 2.0.543 - Clojure & ClojureScript Interactive Programming for VS Code

  • sci 0.11.50 - Configurable Clojure/Script interpreter suitable for scripting and Clojure DSLs

  • scittle 0.7.30 - Execute Clojure(Script) directly from browser script tags via SCI

  • partial-cps 0.1.42 - A lean and efficient continuation passing style transform, includes async-await support.

Permalink

12 years of Component: A decade of interactive development

A little over a month ago, Nubank’s office in Vila Leopoldina, São Paulo became the meeting point for the Clojure community across South America and beyond. The second edition of Clojure South brought together more than 200 developers, researchers, and language enthusiasts for two days of knowledge sharing, connection, and celebration of functional programming.

The event reinforced Brazil’s role as one of the most vibrant hubs for the Clojure community, and Nubank’s role as an active focal point for technology communities.

It was in this atmosphere of shared enthusiasm that Alessandra Sierra, Principal Software Engineer at Nubank, opened the conference with her talk “12 Years of Component”, reflecting not only on the history of one of Clojure’s most influential libraries, but on her personal experience helping shape how thousands of developers work today.

Where the journey began

Alessandra’s story dates back to 2007, when she attended a meetup in New York City where Rich Hickey publicly introduced Clojure for the first time. She walked out of that session impressed: Clojure brought the power of an interactive Read-Eval-Print Loop (REPL) to the JVM ecosystem, enabling software developers to inspect and modify running programs and receive immediate feedback.

Sierra became one of the earliest adopters of Clojure for professional work and made significant early contributions to its standard library. Within a few years, this led to her joining Relevance, later renamed Cognitect, which was acquired by Nubank in 2020.

The power of the REPL and the frustration of interruptions

As a consultant at Cognitect, working with teams adopting Clojure for real-world applications, Sierra noticed a recurring pattern: the REPL gave developers enormous advantages in feedback and velocity, but many application structures made that experience harder than it needed to be.

In the early 2010s, web development in Clojure often centered around Ring: simple and elegant, but with trade-offs. Running a Jetty server from the REPL could block the main thread, and reloading code often left stale definitions in memory. Restarting the REPL became routine, and frustrating.

Sierra wanted developers to keep the flow of interactive development even in systems with state and complexity. The core question was simple but fundamental: How can a system keep running while the developer continues evolving it?

The birth of Component

Instead of accepting the friction caused by restarting the REPL every time code changed, Sierra spent the following years designing a new discipline — a workflow that would eventually be known as the Reloaded Workflow, named after her widely read blog post “My Clojure Workflow, Reloaded.”

The goal was straightforward but transformative: developers should be able to evolve a running system safely, consistently, and without losing context.

This approach combined the tools.namespace library with Component, along with design practices that encouraged explicit dependencies, minimized global state, and separated pure logic from stateful boundaries. The result was a development experience centered on the REPL, where applications could be started, stopped, refreshed, and inspected in real time, without breaking flow.

Component offered a lightweight way to model systems as independent parts with clear lifecycles, without sacrificing functional design. Its small API surface and long-term stability were intentional: changes were introduced slowly and deliberately, helping the library remain simple, approachable, and durable.
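A minimal sketch of that pattern, assuming the library's documented API (the `start-server!`, `stop-server!`, and `new-database` helpers are placeholders, not part of Component):

```clojure
(ns example.system
  (:require [com.stuartsierra.component :as component]))

(defrecord WebServer [port db server]
  component/Lifecycle
  (start [this]
    ;; start-server! stands in for e.g. starting Jetty with the db handle
    (assoc this :server (start-server! port db)))
  (stop [this]
    (when server (stop-server! server))
    (assoc this :server nil)))

;; Dependencies are explicit: the web server declares that it needs :db,
;; and Component starts/stops the parts in dependency order.
(defn new-system []
  (component/system-map
    :db  (new-database "jdbc:postgresql://localhost/app")
    :web (component/using (map->WebServer {:port 8080}) [:db])))

;; At the REPL the whole system can be started, stopped, and rebuilt:
;; (def sys (component/start (new-system)))
;; (component/stop sys)
```

Because each part knows how to start and stop itself, the running system can be torn down and rebuilt from the REPL without restarting the JVM, which is the heart of the Reloaded Workflow.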

A lasting impact

Twelve years after its introduction, Component remains one of the most influential libraries in the Clojure ecosystem. It has been referenced more than 13,000 times in Nubank’s production source code, continues to shape extensions and forks, and has even inspired ports to other languages — a level of longevity rarely achieved by tooling with such a minimal footprint.

Sierra closed her talk by acknowledging how fortunate she was to be at the right place, at the right time, with the right job, as Clojure was emerging. That kind of timing can’t be engineered.

But her advice to people beginning with Clojure or open-source was universal:

“Work on whatever you find interesting. Find a pattern that’s useful, build tools that help others benefit from it. But mostly, just have fun, because that’s the only thing you can choose.”

Alessandra Sierra, Principal Software Engineer at Nubank

The post 12 years of Component: A decade of interactive development appeared first on Building Nubank.

Permalink

Machine Learning in Clojure with libpython‑clj: Unlocking Causal Insights Using Microsoft’s EconML [Series 3]

“Beyond A/B testing: causal inference meets functional programming.”

In the first two parts of this series, we kept our core in Clojure while still using the best of Python.

╰┈➤ Series 1: In the first part, we explored how libpython-clj lets you use Python’s machine learning libraries right from the JVM, without needing to jump between languages.

╰┈➤ Series 2: In the second part, we explored Bayesian Networks. They are great for building models that actually make sense to humans, not just machines. That is a big deal in fields like healthcare or finance, where you cannot rely on black-box answers. Clarity is important. 

In Series 3, we focus on causal inference using EconML from Microsoft Research.

╰┈➤ Predictive models answer “what might happen?”

╰┈➤ Causal inference asks “why did it happen?” and “what if we change the action?” 

That difference is essential when you need decisions you can trust, not just guesswork.

A simple A/B test might say a campaign increased conversions by 5%. EconML goes a layer deeper. It shows that students saw a 20% increase, while retirees saw no change. So you do not get just an average; you get heterogeneous treatment effects across segments. That is what you need to act with confidence.

If you work in Clojure, the process does not really change. You write the same clean and functional code. When you need Python’s causal tools, just use libpython-clj. You run your models, whether they use observational or experimental data. Then send the results right back into your JVM apps, without leaving Clojure.

Where this approach really shines:

╰┈➤ Dynamic pricing: Set prices by segment, not with a one-size-fits-all approach.

╰┈➤ Marketing: Focus on people who actually benefit, skip the rest.

╰┈➤ Healthcare: See how treatments work for different patient groups.

╰┈➤ Policy: Compare what works and break it down by demographic.

No need to make big promises. You get smarter decisions with models that separate signals from noise and show what’s making a difference for different people.

An infographic that shows “Prediction: Discount → +5% sales” vs “Causality: Students +20%, retirees no effect.”

🔗 Internal links:

[Series 1: ML in Clojure with libpython‑clj]

[Series 2: Bayesian Networks with libpython‑clj]

🌐 External link: Microsoft Research EconML – This page provides the official library overview with docs, papers, and examples.

The Problem with Traditional ML

Most traditional ML models are built on the idea that:

╰┈➤ One pattern fits the whole population

╰┈➤ One prediction applies to everyone

╰┈➤ One model captures the “average” relationship

This works only when people behave similarly, which they don’t. This is the homogeneity assumption.

A/B testing compares two groups and reports one average result.

But that average hides:

╰┈➤ Who loved it

╰┈➤ Who didn’t care

╰┈➤ Who reacted negatively

This variation is called heterogeneous treatment effects, the same issue ML struggles with.

“Traditional ML and A/B testing both look at averages. But averages hide differences. People don’t respond the same way, so relying on one number leads to decisions that don’t match what your audience actually needs.”

What should we monitor?

╰┈➤ Segment differences: Don’t stop at the average. Break results down by audience groups to see who benefits and who doesn’t.

╰┈➤ Adverse effects: A “winning” variant can still hurt certain groups. Look for where performance drops. 

╰┈➤ Context matters: Timing, demographics, past behavior, and geography all shape how people respond.

The Contrast Explained:

Metric | Overall Impact | Key Takeaway
Aggregate Lift (+5% overall) | Shows mild success, but hides differences between groups. | The average view is misleading: do not assume one strategy fits all.
Segmented Lift: Students (+20%) | Robust response. | Action: invest more in promotions for this group.
Segmented Lift: Bargain Hunters (+12%) | Strong positive effect. | Action: keep a moderate investment and try new offers.
Segmented Lift: Retirees (0%) | No impact. | Action: stop spending here and move the budget elsewhere.

Traditional ML predicts outcomes from inputs. Useful, but it does not tell you why things happen. It will not tell you what changes will occur if you take a different action. That is the gap causal inference fills.

Causal methods aim to separate correlation from cause. They work with real‑world logs and customer behavior (observational data). They also work with controlled trials, such as A/B tests (experimental data). You can use them even when a clean experiment is not feasible.

Here is where it helps:

╰┈➤ Dynamic pricing: Adjust prices based on how different segments actually respond, not just who looks similar.

╰┈➤ Churn reduction: Treatment effect estimates show which actions reduce cancellations, and for whom.

╰┈➤ Policy evaluation: Compare new vs old programs and see effects across demographics (heterogeneous effects).

And that is why it matters: with causal inference, you move from “what’s likely” to “what works,” using observational data or experiments when you have them. 

Introducing EconML

EconML is a Microsoft Research library. It estimates causal effects using machine learning. Most ML predicts outcomes. EconML asks why an outcome happened, and what would change if you took a different action.

The core method is called Double Machine Learning (DML). It trains two models, not one:

╰┈➤ Propensity score model: Estimates the probability of receiving a treatment (e.g., whether a customer is likely to receive a discount).

╰┈➤ Outcome model: Predicts the result of that treatment (like whether the discount leads to a purchase).

By combining these, EconML helps separate correlation from causation. That makes the insights more dependable than simple averages.

Diagram of the DML workflow — inputs → propensity score → outcome model → causal effect estimate.
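From Clojure, the fit-then-estimate workflow could be sketched as follows. This is a sketch, assuming libpython-clj2 and EconML are installed; `X` (features), `T` (treatment), and `Y` (outcome) stand for data you have already loaded:

```clojure
(ns example.causal
  (:require [libpython-clj2.require :refer [require-python]]
            [libpython-clj2.python :as py]))

;; Import EconML's Double Machine Learning estimator from Python.
(require-python '[econml.dml :refer [LinearDML]])

;; X = customer features, T = treatment (e.g. received a discount),
;; Y = outcome (e.g. purchase amount) -- assumed to be loaded already.
(def est (LinearDML))

;; Fitting trains both nuisance models (propensity and outcome) internally.
(py/py. est fit Y T :X X)

;; Per-row treatment effects: the estimated uplift τ(X) for each customer.
(def uplift (py/py. est effect X))
```

The returned `uplift` values are exactly the segment-level effects the article discusses, one estimate per customer rather than a single average.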

  • Marketing campaign effectiveness: Identify which groups benefit from a promotion. Students may respond well, but retirees show nothing. Spend your money where it works, skip the rest.
  • Dynamic pricing: Set prices based on how different customers respond. Younger customers typically seek deals, while loyal customers are less price-sensitive. Do not just price for the average: match prices to the people.
  • Medical treatment outcomes: Figure out how the impact of a treatment changes with age, gender, or medical background. These details help doctors tailor care to each patient.
  • SaaS churn reduction: Identify which actions actually keep people from canceling, and who benefits from them. Focus on what makes a difference, and drop what does not.
  • Policy impact in economics: Compare new and old programs for different groups. Focus on policies that have a meaningful positive impact. Microsoft Research shares real examples and case studies using the EconML toolkit.

Diagram of segment-level effects for each use case: marketing campaign effectiveness, dynamic pricing, medical treatment outcomes, SaaS churn reduction, and policy impact in economics.

🌐 External link: Microsoft Research EconML – Case Studies – This page shows how EconML is used in real cases. It’s the official source from Microsoft Research.

Why EconML + Clojure via libpython‑clj

You do not have to leave the JVM just to use Python’s machine learning tools. With libpython-clj, you can use Python libraries like EconML right from your Clojure code. Reuse your old machine learning scripts, call familiar Python functions, and stay in your Clojure environment. It is all connected. There is no need to jump between languages or platforms.

╰┈➤ Faster experimentation: Test new ideas quickly without jumping between different tech stacks.

╰┈➤ Expressive functional code: you get the simplicity of Clojure and the power of Python’s machine learning tools.

╰┈➤ JVM ecosystem integration: your results move straight into enterprise systems without any awkward workaround code.

╰┈➤ Lower barriers: You can just build on what you have; no need to start over from scratch.

Let’s say you want to figure out what happens when you send out newsletters.

Dataset structure:

╰┈➤ X = what you know about your customers (the features)

╰┈➤ T = whether they got a newsletter (the treatment)

╰┈➤ Y = whether they bought something, or how much revenue you made (the outcome)

EconML calculates τ(X), which indicates how much additional revenue you gain from sending a newsletter, broken down by customer type. Instead of just giving you one big average, you actually see how different groups react. For example,

╰┈➤ Dormant customers suddenly spend 30% more.

╰┈➤ VIP buyers spend 2% less when you send them a newsletter. (Negative effect)

╰┈➤ Bargain hunters go up by 12%, but only in certain situations. (Conditional effect)

So, now you know exactly who likes your emails, and who does not.

Table Comparing A/B Test vs EconML Uplift

Segment | A/B Test Result (Average) | EconML Uplift (Segment-level) | Key Takeaway
Aggregate | +5% overall lift | n/a | Average hides subgroup variation
Students | Not visible on average | +20% | Strong positive effect → invest more
Bargain Hunters | Not visible on average | +12% | Moderate effect → keep testing offers
Retirees | Not visible on average | 0% | No effect → stop spending here

Applying EconML in Practice

After EconML gives you the uplift scores for each customer, you have what you need to make wise choices. Here is how it usually goes:

1️⃣ Rank everyone on your mailing list by their predicted lift.

2️⃣ Choose the top half of those with an uplift above zero, and send your newsletters to them.

3️⃣ Skip the bottom half to save yourself time and avoid any negative impact.

4️⃣ Customize the newsletter for each group. Give each segment the version that best fits them, so you are not just blasting the same thing to everyone.
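The ranking and selection steps above could be sketched in plain Clojure. This assumes each customer map carries a hypothetical `:predicted-uplift` score produced by EconML:

```clojure
;; Keep only customers with positive predicted uplift, rank them by
;; uplift, and return the top half; everyone else is skipped.
(defn newsletter-targets [customers]
  (let [positive (->> customers
                      (filter #(pos? (:predicted-uplift %)))
                      (sort-by :predicted-uplift >))]
    (take (quot (count positive) 2) positive)))
```

Segment-specific newsletter content (step 4) would then be chosen per returned customer, e.g. by grouping the targets on a segment key before sending.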

Flowchart of the decision layer.

╰┈➤ Control A/B test: +$50k revenue

╰┈➤ EconML targeting: +$65k revenue

The uplift comes from sending fewer emails that do not matter: less spam, better delivery, and more revenue.

Bar chart comparing outcomes.

Technical Deep Dive: EconML Under the Hood

Residualization is EconML’s method for improving the reliability of causal estimates.

╰┈➤ It works by predicting what would’ve happened if the treatment had never occurred—the counterfactual outcome. 

╰┈➤ To avoid overfitting, EconML splits the data into training and validation sets. 

╰┈➤ There is also cross-fitting: models train on one subset of the data and are evaluated on a different subset. 

All of this helps cut through the noise and get to the real causal signals.  

EconML also supports causal forests, which are decision trees designed to capture heterogeneous treatment effects.

╰┈➤ They split the data into subgroups and estimate effects for each branch.

╰┈➤ This helps discover new customer segments that respond differently to interventions.

Example: “Customers under 35 who browse frequently and have not bought in 60 days increase spend by 22% when shown Instagram ads.”

Tree diagram of a causal forest. Branches by age, browsing behavior, and purchase history, with treatment effects at each leaf.

Flexiana’s Role

Flexiana has been building Clojure solutions for 9 years. Our team brings deep expertise in functional programming, machine learning integration, and global software delivery. Our projects span healthcare, fintech, SaaS, and enterprise systems. Our goals:

╰┈➤ To empower teams with causal inference tools.

╰┈➤ To make advanced ML accessible to Clojure developers.

Flexiana’s focus is clear: Help organizations use causal ML without leaving their Clojure stack.

🔗 Internal link: Flexiana’s About page 

🌐 External link: Flexiana GitHub

FAQs (People Also Ask)

What is EconML?

EconML is a Python library that helps you figure out the cause-and-effect of your actions. It uses observational or experimental data and applies machine learning to econometric models. The goal is to determine why an intervention (or “treatment”) led to a specific outcome. It is about moving past simple prediction to understand individualized treatment effects (ITE).

How is EconML different from A/B testing?

They have different goals. A/B testing tells you the average effect: it answers whether a change works for everyone overall. EconML focuses on heterogeneous effects: it tells you who is most (or least) impacted by that change.

Plus, EconML can use data you already have (observational data) to target people better, saving you the time and cost of running a separate experiment for every targeting idea.

Can EconML handle messy observational data?

Yes, that’s what it is built for. Observational data is messy. EconML uses techniques such as Double Machine Learning (DML) to manage the many variables that can skew your results. This helps it address common issues such as selection bias, yielding honest, reliable causal estimates from non-experimental data.

Why combine Clojure and Python?

It is about using the best tool for each job. Python has the best ML libraries (scikit-learn, EconML, TensorFlow) for building the models. Clojure, running on the Java Virtual Machine (JVM), provides a robust, concurrent, and highly stable production environment for running models at scale. You get Python’s excellent science ecosystem with the JVM’s rock-solid backend.

How does a causal forest differ from a random forest?

Think of a causal forest as a special kind of random forest. In regular random forests, the tree splits are based on predicting an outcome. In a causal forest (such as CausalForestDML), tree splits are based on maximizing the difference in the treatment effect between groups. This enables the algorithm to quickly identify and highlight the specific customer traits (features) that drive the uplift variation.

Conclusion: The Future of ML in Clojure

EconML changes how we use machine learning. Predictive models tell us what might happen. EconML helps explain why it happens and what changes if we act differently. That is useful when you need decisions based on cause and effect rather than averages.

With Clojure and libpython‑clj, you get a clean, functional way to build models while reusing Python’s ML libraries. It is simple to keep your JVM stack while still leveraging proven tools.

╰┈➤ Expressive code: Your code stays straightforward and easy to follow.

╰┈➤ Python interop: You can use existing ML libraries without leaving the JVM.

╰┈➤ Enterprise fit: You can send those causal insights straight into production systems, with no extra steps.

Together, Clojure and EconML make machine learning more than just predictions. You can test faster, ship better, and actually trust what your models tell you.

Explore EconML with Flexiana. Let’s build causal ML solutions together.

🔗 Internal link: Contact Flexiana page 

🌐 External link: libpython‑clj GitHub repo

The post Machine Learning in Clojure with libpython‑clj: Unlocking Causal Insights Using Microsoft’s EconML [Series 3] appeared first on Flexiana.

Permalink

Machine Learning in Clojure with libpython-clj: Bridging Functional Elegance and Python’s ML Power [Series 1]

Python is the default choice for machine learning. But many teams using functional languages wonder if they have to switch. At Flexiana, we prioritize Clojure, but we also use Python.

With libpython-clj, Clojure can tap into Python’s machine learning libraries without leaving the JVM. You get the expressiveness and REPL workflow you love, plus solid speed.

In this series, let’s walk you through training a model in Python and integrating it right into your Clojure codebase. No hype, just straightforward steps to get machine learning running in Clojure.

Why This Matters Now

  • Python’s role: Python is the default for machine learning. It has TensorFlow, PyTorch, and scikit‑learn. If you are building models, you are probably using Python. That is fine. It is common and effective.
  • The issue: It creates a problem for teams that prefer other languages.
  • Flexiana’s stance: We are a Clojure‑first company at Flexiana. We work with functional programming, the REPL, and the JVM every day. We use Python when it makes sense. But our core is Clojure. So we asked a simple question: Do we need to leave Clojure to use modern ML tools?
  • Typical workflow pain: Many companies feel stuck in Python‑only workflows. Data scientists train models in Python. Developers then wrap those models in services to fit the main stack. It works, but it adds friction. It creates hand‑offs and silos. And it makes functional teams feel as if they are working on the edges.
  • A better path with libpython‑clj: With libpython‑clj, you do not have to pick one ecosystem and drop the other. You can keep Clojure’s clarity and still call Python’s ML libraries. Train a model in Python. Load it in Clojure. Use it in your codebase. No extra wrappers. No awkward bridges. Just a clean, direct path.
  • Why now: ML is now part of everyday software. Finance, healthcare, retail—many production systems use it. Most of those systems already run on the JVM. If you build in Clojure, you shouldn’t have to step out of your stack to add ML.
  • Why Clojure helps: Think of it like this- Python gives you the tools. Clojure gives you the environment. The REPL enables you to move faster. The functional style keeps code easy to reason about. The JVM fits into enterprise systems without fuss. Together, you get solid ML and clean integration.

That is what this series is about. ML in Clojure is not only possible; it is practical. You can keep your language and still use Python’s ecosystem. And you can fit ML into your stack without compromise.

How Python and Clojure compare on ML adoption:

  • Community Size: Python has an extensive, global community and is dominant in ML and data science. Clojure’s community is small but growing; groups like Scicloj are active.
  • Library Ecosystem: Python has a mature ML stack (TensorFlow, PyTorch, scikit‑learn, Keras). Clojure has few native ML libraries and relies on Python interop via libpython‑clj.
  • Industry Adoption: Python is common across finance, healthcare, retail, and research. Clojure adoption is limited and often found in specialized or experimental work.
  • Learning Curve: Python is the easier start, with lots of tutorials and courses. Clojure is steeper; Lisp syntax and functional style take time.
  • Integration with the JVM: Python is indirect, running outside the JVM and often requiring wrappers or services. Clojure is a native JVM language and fits cleanly into enterprise stacks.
  • Performance in ML Tasks: Python is strong for training, with good GPU/TPU support. Clojure suits orchestration and integration; training is usually done in Python.
  • Current Trend (2025): Python is still the top ML language in most surveys. On the Clojure side, interest is growing in bridging Python ML via libpython‑clj.


Is ML in Clojure Possible?

Python dominates machine learning. The libraries are mature. The community is enormous. The tools fit Python well. But Clojure is not shut out. With its functional style and JVM roots, Clojure can also work with ML. The key is interop.

Chris Nuernberger built libpython‑clj to make this simple. Instead of wrapping models in services or switching stacks, Clojure can communicate directly with Python.

libpython‑clj is a bridge between Clojure and Python. It embeds the CPython runtime inside the JVM. You can call Python functions and use ML libraries from Clojure. Import TensorFlow, PyTorch, or scikit‑learn without leaving your REPL.

╰┈➤ JVM + CPython bridge: libpython‑clj runs CPython inside the JVM process.

╰┈➤ Direct calls: You call Python functions like Clojure functions.

╰┈➤ Shared workflow: Train a model in Python. Load and use it in your Clojure codebase.
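The bullets above can be sketched in a few lines at the REPL. This is a hedged sketch using libpython-clj’s v2 API (the libpython-clj2.python namespace); it assumes a local Python installation with numpy available, which libpython-clj discovers on initialization.

```clojure
;; Minimal libpython-clj interop sketch (assumes a local Python with numpy).
(require '[libpython-clj2.python :as py])

;; Start the embedded CPython runtime inside the JVM process:
(py/initialize!)

;; Import a Python module and hold it as an ordinary Clojure value:
(def np (py/import-module "numpy"))

;; Call a Python function like a Clojure function; arguments and results
;; are converted between the two runtimes:
(py/->jvm (py/call-attr np "sum" [1 2 3]))
```

From here, importing TensorFlow, PyTorch, or scikit‑learn works the same way: `import-module`, then call attributes on the returned module object.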

This cuts friction: no extra wrappers, no microservices, no awkward hand‑offs. Just a clean, direct path that keeps Clojure as your primary language while using Python’s ML ecosystem.

Machine learning in Clojure with libpython‑clj is simple. You pull Python’s ML tools into your Clojure workflow. You stay in the REPL. You don’t need extra services.

╰┈➤ Train the model in Python: Use TensorFlow or PyTorch. Save the model when you’re done.

╰┈➤ Load the model in Clojure: Import Python modules with libpython‑clj. Call Python functions from Clojure.

╰┈➤ Run predictions in Clojure: Pass your data. Get results back without leaving the JVM.
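As a concrete illustration of these three steps, here is a hedged sketch that assumes a scikit‑learn model was trained in Python and saved to "model.pkl" with joblib (the file name, library choice, and input shape are all assumptions for illustration):

```clojure
(require '[libpython-clj2.python :as py])
(py/initialize!)

;; Step: load the model that was trained and saved in Python.
(def joblib (py/import-module "joblib"))
(def model  (py/call-attr joblib "load" "model.pkl"))

;; Step: wrap inference in a plain Clojure function.
(defn predict [rows]
  (py/->jvm (py/call-attr model "predict" rows)))

;; Step: pass Clojure data, get predictions back without leaving the JVM.
(predict [[5.1 3.5 1.4 0.2]])
```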

Note: the snippets in this post are simple placeholders. Your code will depend on the model and library you use.

╰┈➤ Step 1: Train the model in Python (TensorFlow/PyTorch).

╰┈➤ Step 2: Save the model file.

╰┈➤ Step 3: Import with libpython‑clj inside Clojure.

╰┈➤ Step 4: Run inference in your Clojure codebase.

Demonstrating ML in Action

Machine learning in Clojure with libpython‑clj follows a simple process. 

╰┈➤ Step 1: Train in Python: Use TensorFlow or PyTorch to build and train your model, then save it as a .pt (PyTorch) or SavedModel (TensorFlow).

╰┈➤ Step 2: Load in Clojure: You can import Python modules with libpython‑clj and load the saved model into the JVM.

╰┈➤ Step 3: Integrate: You can wrap inference in small Clojure functions and connect predictions to your data flow and test quickly in the REPL.

What you gain:

╰┈➤ Seamless API calls: You can call Python ML functions directly from Clojure without needing wrappers or microservices.

╰┈➤ REPL‑driven dev: It lets you test predictions, inspect tensors, and adjust data instantly.

What to watch for:

╰┈➤ Boxed math: Passing boxed numbers or generic sequences can slow performance.

╰┈➤ Interop overhead: Frequent cross-language calls can add latency.

How to keep it fast:

╰┈➤ Utility functions: Write utility functions to convert data between Clojure types and Python‑friendly arrays or tensors.

╰┈➤ Optimized interop: You can batch calls, reduce crossings, cache modules, and reuse model objects.
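These two tips can be sketched as small helpers. A hedged example, assuming numpy is available and `model` is an already‑loaded Python model object with a `predict` method:

```clojure
(require '[libpython-clj2.python :as py])
(py/initialize!)

;; Cache the module once instead of importing on every call:
(def np (py/import-module "numpy"))

;; Convert a Clojure collection to a numpy array up front, so the model
;; never sees boxed numbers or generic sequences:
(defn ->ndarray [rows]
  (py/call-attr np "asarray" rows))

;; Batch: one cross-language call for many rows, not one call per row:
(defn predict-batch [model rows]
  (py/->jvm (py/call-attr model "predict" (->ndarray rows))))
```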

Code comparison

Python (train and save):
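No snippet survived extraction here, so below is a hedged placeholder standing in for a real PyTorch or TensorFlow training script: it fits a trivial least‑squares line (y = w·x) in plain Python and pickles the result. The file name "model.pkl" is an assumption.

```python
# Placeholder "training": fit y = w * x by closed-form least squares,
# then save the result, the way a real script would save a .pt file
# or a SavedModel.
import pickle

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

# Closed-form slope for a no-intercept linear fit:
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

with open("model.pkl", "wb") as f:
    pickle.dump({"w": w}, f)

print("trained slope:", w)  # trained slope: 2.0
```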

Clojure (load and infer):
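A hedged placeholder for the Clojure side, assuming a model was pickled to "model.pkl" as a Python dict like {"w": slope} (file name and model shape are assumptions for illustration):

```clojure
(require '[libpython-clj2.python :as py])
(py/initialize!)

(def builtins (py/import-module "builtins"))
(def pickle   (py/import-module "pickle"))

;; Load the pickled model; py/->jvm converts the Python dict into a
;; Clojure map with string keys:
(def model
  (let [f (py/call-attr builtins "open" "model.pkl" "rb")]
    (try
      (py/->jvm (py/call-attr pickle "load" f))
      (finally (py/call-attr f "close")))))

;; "Inference" for this toy model is just w * x:
(defn predict [x]
  (* (get model "w") x))

(predict 10.0)
```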

Note: Real PyTorch loading usually restores a model class and its state_dict. The snippet above is a placeholder to show the flow.

And that is the idea: Python handles training. Clojure handles clean integration and deployment. Together, you get solid ML with a straightforward path to production.

Why Care About Clojure in the ML World?

╰┈➤ Clear transformations: Clojure’s functional style allows you to express data flows in fewer lines.

╰┈➤ Less boilerplate: You write what is essential. Transformations stay front and center.

╰┈➤ Easier pipelines: Short and direct code makes ML steps easier to read and maintain.

╰┈➤ Mature runtime: The JVM offers you good optimization, threading, and memory management.

╰┈➤ Enterprise-friendly: You can plug ML into your already existing systems without any significant rewrites.

╰┈➤ Stable under load: You get predictable performance for production workflows. 

╰┈➤ Clean structure: Clojure has a clean, simple structure and syntax. The focus stays on the data.

╰┈➤ REPL-first workflow: With Clojure, you can test, inspect, and iterate quickly.

╰┈➤ Low ceremony: Clojure has fewer moving parts. You can easily see what each step does.

🔗 Clojure official site: https://clojure.org

╰┈➤ Reduced dependency on Python-only teams: You are not stuck relying on Python teams for everything anymore. With Clojure, your own developers can handle integration and deployment, while the Python folks focus on what they do best: training the models. That reduces risk, and scaling goes much more smoothly.

╰┈➤ Faster prototyping with REPL: Clojure’s REPL changes the game for prototyping. You get instant feedback: test predictions, inspect tensors, and tweak your data right there in the loop. You see what works (and what does not) before you commit to building anything significant.

╰┈➤ Integration with enterprise JVM ecosystems: You can plug models into your company’s JVM systems directly. Less friction. Smoother deployment. Everything lines up with the tools your team already knows and trusts.

Comparing the two team models:

╰┈➤ Python-only: Higher dependency on specialized teams. Slower integration. More overhead.

╰┈➤ Hybrid: Shared ownership. Faster iteration. Better fit for enterprise workflows.

FAQs (People Also Ask Integration)

  • Question: Can you use Python ML libraries in Clojure?
  • Answer: Yes. You can bridge the CPython runtime with libpython‑clj and call Python code from Clojure.
  • Question: Is Clojure faster than Python for ML?
  • Answer: It depends. You can train models in Python for speed, and use Clojure for orchestration, integration, and deployment.
  • Question: Why not just use Python?
  • Answer: You can use Clojure for JVM interop, a REPL‑driven workflow, and clear functional code that fits enterprise systems.
  • Question: What companies use Clojure for ML?
  • Answer: Public case studies are still scarce. Flexiana is one example of a Clojure‑first company applying ML, and interest is growing in domains like fintech and healthcare where JVM stacks are common.
  • Question: How do you integrate a trained model into Clojure?
  • Answer: You can load the model with libpython‑clj, wrap conversions in utility functions, and call inference from your Clojure codebase.

Future‑proof your enterprise with Flexiana’s Clojure‑first ML solutions.

This concludes Series 1, where we introduced how to combine Clojure and Python for machine learning using libpython-clj.

In Series 2, we go a step further by exploring Bayesian Networks and how they enable smarter, more interpretable AI models.
👉 Continue with Series 2: https://flexiana.com/news/2025/12/machine-learning-in-clojure-with-libpython-clj-using-bayesian-networks-for-smarter-interpretable-ai-series-2

The post Machine Learning in Clojure with libpython-clj: Bridging Functional Elegance and Python’s ML Power [Series 1] appeared first on Flexiana.


Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.