Statistics made simple

I have a weird relationship with statistics: on one hand, I try not to look at it too often. Maybe once or twice a year. It’s because analytics is not actionable: what difference does it make if a thousand people saw my article or ten thousand?

I mean, sure, you might try to guess people’s tastes and only write about what’s popular, but that will destroy your soul pretty quickly.

On the other hand, I feel nervous when something is not accounted for, recorded, or saved for future reference. I might not need it now, but what if ten years later I change my mind?

Seeing your readers also helps to know you are not writing into the void. So I really don’t need much, something very basic: the number of readers per day/per article, maybe, would be enough.

Final piece of the puzzle: I self-host my web projects, and I use an old-fashioned web server instead of delegating that task to Nginx.

Static sites are popular and for a good reason: they are fast, lightweight, and fulfil their function. I, on the other hand, might have an unfinished gestalt or two: I want to feel the full power of the computer when serving my web pages, to be able to do fun stuff that is beyond static pages. I need that freedom that comes with a full programming language at your disposal. I want to program my own web server (in Clojure, sorry everybody else).

Existing options

All this led me on a quest for a statistics solution that would uniquely fit my needs. Google Analytics was out: bloated, not privacy-friendly, terrible UX, Google is evil, etc.

What is going on?

Some other JS solution might’ve been possible, but still questionable: SaaS? Paid? Will they be around in 10 years? Self-host? Are their cookies GDPR-compliant? How to count RSS feeds?

Nginx has access logs, so I tried server-side statistics that feed off those (namely, Goatcounter). Easy to set up, but then I needed to create domains for them, manage accounts, and monitor the process, and it wasn’t even performant enough at my request volume!

My solution

So I ended up building my own. You are welcome to join, if your constraints are similar to mine. This is how it looks:

It’s pretty basic, but does a few things that were important to me.

Setup

Extremely easy to set up. And I mean it as a feature.

Just add our middleware to your Ring stack and get everything automatically: collecting and reporting.

(def app
  (-> routes
    ...
    (ring.middleware.params/wrap-params)
    (ring.middleware.cookies/wrap-cookies)
    ...
    (clj-simple-stats.core/wrap-stats))) ;; <-- just add this

It’s zero setup in the best sense: nothing to configure, nothing to monitor, minimal dependency. It starts to work immediately and doesn’t ask anything from you, ever.

See, you already have your web server, why not reuse all the setup you did for it anyway?

Request types

We distinguish between request types. In my case, I am only interested in live people, so I count them separately from RSS feed requests, favicon requests, redirects, wrong URLs, and bots. Bots are particularly active these days. Gotta get that AI training data from somewhere.
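A sketch of what that classification could look like (the predicates and names here are illustrative, and cruder than the real rules):

(defn classify
  "Bucket a request/response pair by type. Illustrative only."
  [req resp]
  (let [ua  (get-in req [:headers "user-agent"] "")
        uri (:uri req)]
    (cond
      (re-find #"(?i)bot|crawler|spider" ua) :bot
      (re-find #"(?i)feed|\.rss|atom" uri)   :feed
      (= "/favicon.ico" uri)                 :favicon
      (<= 300 (:status resp) 399)            :redirect
      (= 404 (:status resp))                 :wrong-url
      :else                                  :live-person)))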

RSS feeds are live people in a sense, so extra work went into counting them properly. The same reader requesting feed.xml 100 times in a day will only count as one request.

Hosted RSS readers often report user count in User-Agent, like this:

Feedly/1.0 (+http://www.feedly.com/fetcher.html; 457 subscribers; like FeedFetcher-Google)

Mozilla/5.0 (compatible; BazQux/2.4; +https://bazqux.com/fetcher; 6 subscribers)

Feedbin feed-id:1373711 - 142 subscribers

My personal respect and thank you to everybody on this list. I see you.
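Extracting those counts is a one-regex job; a minimal sketch (the pattern is mine, not necessarily what the library uses):

(defn subscriber-count
  "Pull the `N subscribers` figure out of a feed reader's User-Agent."
  [ua]
  (some-> (re-find #"(\d+) subscribers" ua) second parse-long))

(subscriber-count "Feedbin feed-id:1373711 - 142 subscribers")
;; => 142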

Graphs

Visualization is important, and so is choosing the correct graph type. This is wrong:

A continuous line suggests interpolation. It reads as if, between 1 visit at 5am and 11 visits at 6am, there were points with 2, 3, 5, and 9 visits in between. Maybe even 5.5 visits! That is not the case.

This is how a semantically correct version of that graph should look:

Some attention was also paid to having reasonable labels on axes. You won’t see something like 117, 234, 10875. We always choose round numbers appropriate to the scale: 100, 200, 500, 1K etc.
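The usual trick is to snap the raw tick step to 1, 2, or 5 times a power of ten; a quick sketch of the idea (my reconstruction, not the actual implementation):

(defn nice-step
  "Round a raw axis step up to 1, 2, or 5 times a power of ten."
  [raw]
  (let [mag  (Math/pow 10 (Math/floor (Math/log10 raw)))
        frac (/ raw mag)]
    (* mag (cond (<= frac 1) 1
                 (<= frac 2) 2
                 (<= frac 5) 5
                 :else       10))))

(nice-step 117)  ;; => 200.0
(nice-step 4321) ;; => 5000.0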

It goes without saying that all graphs share the same vertical scale and synchronized horizontal scroll.

Insights

We don’t offer much (as I don’t need much), but you can narrow reports down by page, query, referrer, user agent, and any date slice.

Not implemented (yet)

It would be nice to have some insights into “What was this spike caused by?”

Some basic breakdown by country would be nice. I do have IP addresses (for what they are worth), but I need a way to package GeoIP into some reasonable size (under 1 MB, preferably; some loss of resolution is okay).

Finally, one thing I am really interested in is “Who wrote about me?” I do have referrers; the only question is how to separate signal from noise.

Performance. DuckDB is a good sport: it compresses data and runs columnar queries, so storing extra columns per row doesn’t affect query performance. Still, each dashboard hit is a query across the entire database, which at this moment (~3 years of data) sits around 600 MiB. I definitely need to look into building some pre-calculated aggregates.
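For what it’s worth, one plausible shape for such an aggregate is a daily rollup table that dashboard queries hit instead of the raw rows (a sketch; the hits table, its columns, and the ds datasource are all hypothetical):

(require '[next.jdbc :as jdbc])

;; Materialize daily per-path counts so dashboard queries scan the
;; small rollup instead of every raw hit.
(jdbc/execute! ds
  ["CREATE OR REPLACE TABLE hits_daily AS
    SELECT date_trunc('day', ts) AS day, path, count(*) AS hits
    FROM hits
    GROUP BY day, path"])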

One day.

How to get

Head to github.com/tonsky/clj-simple-stats and follow the instructions there.

Let me know what you think! Is it usable to you? What could be improved?

P.S. You can try the live example at tonsky.me/stats. The data was imported from Nginx access logs, which I turned on and off on a few occasions, so it’s a bit spotty. Still, it should give you a general idea.

Permalink

I am sorry, but everyone is getting syntax highlighting wrong

Translations: Russian

Syntax highlighting is a tool. It can help you read code faster. Find things quicker. Orient yourself in a large file.

Like any tool, it can be used correctly or incorrectly. Let’s see how to use syntax highlighting to help you work.

Christmas Lights Diarrhea

Most color themes have a unique bright color for literally everything: one for variables, another for language keywords, constants, punctuation, functions, classes, calls, comments, etc.

Sometimes it gets so bad one can’t see the base text color: everything is highlighted. What’s the base text color here?

The problem with that is, if everything is highlighted, nothing stands out. Your eye adapts and considers it a new norm: everything is bright and shiny, and instead of getting separated, it all blends together.

Here’s a quick test. Try to find the function definition here:

and here:

See what I mean?

So yeah, unfortunately, you can’t just highlight everything. You have to make decisions: what is more important, what is less. What should stand out, what shouldn’t.

Highlighting everything is like assigning “top priority” to every task in Linear. It only works if most of the tasks have lesser priorities.

If everything is highlighted, nothing is highlighted.

Enough colors to remember

There are two main use-cases you want your color theme to address:

  1. Look at something and tell what it is by its color (you can tell by reading text, yes, but why do you need syntax highlighting then?)
  2. Search for something. You want to know what to look for (which color).

1 is a direct index lookup: color → type of thing.

2 is a reverse lookup: type of thing → color.

Truth is, most people don’t do these lookups at all. They might think they do, but in reality, they don’t.

Let me illustrate. Before:

After:

Can you see it? I misspelled return as retunr, and its color switched from red to purple.

I can’t.

Here’s another test. Close your eyes (not yet! Finish this sentence first) and try to remember: what color does your color theme use for class names?

Can you?

If the answer to both questions is “no”, then your color theme is not functional. It might give you comfort (as in—I feel safe. If it’s highlighted, it’s probably code), but you can’t use it as a tool. It doesn’t help you.

What’s the solution? Have an absolute minimum of colors. So little that they all fit in your head at once. For example, my color theme, Alabaster, only uses four:

  • Green for strings
  • Purple for constants
  • Yellow for comments
  • Light blue for top-level definitions

That’s it! And I was able to type it all from memory, too. This minimalism allows me to actually do lookups: if I’m looking for a string, I know it will be green. If I’m looking at something yellow, I know it’s a comment.

Limit the number of different colors to what you can remember.

If you swap green and purple in my editor, it’ll be a catastrophe. If somebody swapped colors in yours, would you even notice?

What should you highlight?

Something there isn’t a lot of. Remember—we want highlights to stand out. That’s why I don’t highlight variables or function calls—they are everywhere, your code is probably 75% variable names and function calls.

I do highlight constants (numbers, strings). These are usually used more sparingly and often are reference points—a lot of logic paths start from constants.

Top-level definitions are another good idea. They give you an idea of the structure quickly.

Punctuation: dimming it helps to separate names from syntax a little bit, and you care about names first, especially when quickly scanning code.

Please, please don’t highlight language keywords: class, function, if, else, stuff like this. You rarely look for them: “where’s that if” is a valid question, but you will be looking not at the if keyword itself, but at the condition after it. The condition is the important, distinguishing part. The keyword is not.

Highlight names and constants. Grey out punctuation. Don’t highlight language keywords.

Comments are important

The tradition of using grey for comments comes from the times when people were paid by line. If you have something like

of course you would want to grey it out! This is bullshit text that doesn’t add anything and was written to be ignored.

But for good comments, the situation is opposite. Good comments ADD to the code. They explain something that couldn’t be expressed directly. They are important.

So here’s another controversial idea:

Comments should be highlighted, not hidden away.

Use bold colors, draw attention to them. Don’t shy away. If somebody took the time to tell you something, then you want to read it.

Two types of comments

Another secret nobody is talking about is that there are two types of comments:

  1. Explanations
  2. Disabled code

Most languages don’t distinguish between those, so there’s not much you can do syntax-wise. But sometimes there’s a convention (e.g. -- vs /* */ in SQL); if so, use it!

Here’s a real example from the Clojure codebase that makes perfect use of the two types of comments:

Disabled code is gray, explanation is bright yellow
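Clojure itself makes this easy: regular ;; comments carry explanations, while (comment ...) blocks and the #_ reader macro mark disabled code, so a theme can style the two differently. A made-up snippet in the same spirit:

;; Explanation: memoized because repeated calls dominate the profile.
(def fib
  (memoize
    (fn [n]
      (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))))

(comment
  (fib 30)) ;; disabled code, kept around for REPL experiments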

Light or dark?

Per statistics, 70% of developers prefer dark themes. Being in the other 30% myself, I was always puzzled by that. Why?

And I think I have an answer. Here’s a typical dark theme:

and here’s a light one:

On the latter one, colors are way less vibrant. Here, I picked them out for you:

Notice how many colors there are. No one can remember that many.

This is because dark colors are in general less distinguishable and more muddy. Look at the hue scale as we move brightness down:

Basically, in the dark part of the spectrum, you just get fewer colors to play with. There’s no “dark yellow” or good-looking “dark teal”.

Nothing can be done here. There are no magic colors hiding somewhere that have both good contrast on a white background and look good at the same time. By choosing a light theme, you are dooming yourself to a very limited, bad-looking, barely distinguishable set of dark colors.

So it makes sense. Dark themes do look better. Or rather: light ones can’t look good. Science ¯\_(ツ)_/¯

But!

But.

There is one trick that I don’t see used a lot: background colors! Compare:

The first one has nice colors, but the contrast is too low: letters become hard to read.

The second one has good contrast, but you can barely see colors.

The last one has both: high contrast and clean, vibrant colors. Lighter colors are readable even on a white background since they fill a lot more area. Text is the same brightness as in the second example, yet it gives the impression of clearer color. It’s all upside, really.

UI designers have known about this trick for a while, but I rarely see it applied in code editors:

If your editor supports choosing background color, give it a try. It might open light themes for you.

Bold and italics

Don’t use them. They go into the same category as too many colors: just another way to highlight something, and you don’t need more ways, because you can’t highlight everything.

In theory, you might try to replace colors with typography. Would that work? I don’t know. I haven’t seen any examples.

Using italics and bold instead of colors

Myth of number-based perfection

Some themes pay too much attention to being scientifically uniform. Like, all colors have the exact same lightness, and hues are distributed evenly around the circle.

This could be nice (to know if you have OCD), but in practice, it doesn’t work as well as it sounds:

OkLab l=0.7473 c=0.1253 h=0, 45, 90, 135, 180, 225, 270, 315

The idea of highlighting is to make things stand out. If you make all colors the same lightness and chroma, they will look very similar to each other, and it’ll be hard to tell them apart.

Our eyes are way more sensitive to differences in lightness than in color, and we should use it, not try to negate it.

Let’s design a color theme together

Let’s apply these principles step by step and see where it leads us. We start with the theme from the start of this post:

First, let’s remove highlighting from language keywords and re-introduce base text color:

Next, we remove color from variable usage:

and from function/method invocation:

The thinking is that your code is mostly references to variables and method invocation. If we highlight those, we’ll have to highlight more than 75% of your code.

Notice that we’ve kept variable declarations. These are not as ubiquitous and help you quickly answer a common question: where does this thing come from?

Next, let’s tone down punctuation:

I prefer to dim it a little bit because it helps names stand out more. Names alone can give you the general idea of what’s going on, and the exact configuration of brackets is rarely equally important.

But you might roll with base color punctuation, too:

Okay, getting close. Let’s highlight comments:

We don’t use red here because you usually need it for squiggly lines and errors.

This is still one color too many, so I unify numbers and strings to both use green:

Finally, let’s rotate colors a bit. We want to respect nesting logic, so function declarations should be brighter (yellow) than variable declarations (blue).

Compare with what we started:

In my opinion, we got a much more workable color theme: it’s easier on the eyes and helps you find stuff faster.

Shameless plug time

I’ve been applying these principles for about 8 years now.

I call this theme Alabaster and I’ve built it a couple of times for the editors I used:

It’s also been ported to many other editors and terminals; the most complete list is probably here. If your editor is not on the list, try searching for it by name—it might be built-in already! I always wondered where these color themes come from, and now I’ve become the author of one (and I still don’t know).

Feel free to use Alabaster as is or build your own theme using the principles outlined in the article—either is fine by me.

As for the principles themselves, they worked out fantastically for me. I’ve never wanted to go back, and just one look at any “traditional” color theme gives me a scare now.

I suspect that the only reason we don’t see more restrained color themes is that people never really thought about it. Well, this is your wake-up call. I hope this will inspire people to use color more deliberately and to change the default way we build and use color themes.

Permalink

Tools I loved in 2025

Hi friends!

While answering 40 questions to ask yourself every year, I realized I’d adopted a bunch of new tools over 2025. All of them have improved my life, so I want to share them with you so they might improve yours too =D

A common theme is that all of my favorite tools promote a sort of coziness: They help you tailor the spaces where you spend your time to your exact needs and preferences.

Removing little annoyances and adding personal touches — whether to your text editor or your kitchen — not only improves your day-to-day mood, but cultivates a larger sense of security, ownership, and agency, all of which I find essential to doing great work. (And making great snacks.)

Physical

Workshop space

Last summer we moved to a ground-level apartment, giving me (for the first time in my life) my very own little workshop space:

We’re just renting the apartment, so building a larger shed isn’t an option, but it turns out that one can get quite a lot done with floor area barely larger than a sheet of plywood.

I designed a Paulk-style workbench top, cut it out on my friend’s CNC, and mounted it all on screwed-together 2x4 legs:

So far I’ve mostly worked with cheap plywood from Hornbach, using the following tools:

  • Makita track saw for both rough and precision cuts. There’s a handy depth stop that makes it easy to do a shallow scoring cut before making a full cut, which reduces tearing out the thin veneer.
  • Fein wet/dry vac to collect dust/chips. It includes an electrical outlet you can plug tools into, so the vacuum automatically turns on when the tool draws power, which is great.
  • My DIY powered air respirator built around a 3M Versaflo helmet has been working great — it’s so much more comfortable than fiddling with separate respirators and eye + ear protection. Since it takes only 15 seconds to put on or take off, I’m pretty much always wearing it when I’m doing anything in the shop.
  • Record Power Sabre 250 desktop bandsaw for quick cross cuts. The accuracy isn’t great, but I haven’t tried replacing the blade or tuning it much yet.
  • Bosch 12V palm edge router: extremely fun to use for rounding over and chamfering edges:
  • Bosch 12V drill/driver is lightweight and compact, and comes with a handy right-angle attachment that I’ve actually used:
  • Bosch PBD 40 - a drill press with digital speed control and zeroable digital depth gauge with 0.1mm precision. At $250, the 0.5mm play in the chuck is forgivable.

The workbench has MFT-style 20mm “dog holes” on a 96mm grid, which allows for all sorts of useful accessories.

For example: I purchased Hooked on Wood’s bench dog rail hinges, which makes it easy to flip the track up/down to slide wood underneath to make cross cuts:

The $10 dog hole clamp on the right holds a stop block, which allows me to make repeated cuts.

Since the dog holes were cut with a CNC and the rail is straight, square cuts can be made by:

  1. making a known straight cut with the track saw on its track (whether on the workbench or outside on sawhorses)
  2. pushing this reference edge up against the two bench dogs on the top
  3. cutting along the track

See Bent’s Woodworking for more detail on this process.

While I wish I had space for a full size sliding table saw and/or CNC, this workbench and track saw seem like a decent backyard shed solution.

LED lighting

Last winter I decided to fight the gloomy Dutch weather by buying a bunch of high-CRI LEDs to flood my desk with artificial daylight:

See the build log for more details.

This lighting has worked out swimmingly — it helps me wake up in the morning and makes the whole space feel nice even on the grayest rainy day.

After sundown, my computer switches to “dark mode” colors and I switch the room to cozier warm-white LEDs.

I had about 8 meters of LED strip leftover, which I used with LED diffuser profiles to illuminate my workshop.

Euroboxes

When we moved into our new (completely unfurnished) apartment over the summer, I was adamant I’d build all of the furniture we needed. However, sightly storage solutions have taken longer than anticipated, so to eliminate the immediate clutter I purchased a bunch of 600x400x75mm euroboxes:

At $4/each (used), they’re an absolute bargain.

Since they’re plastic, they slide great on waxed wood rails and make perfect lil’ utility drawers. The constraint of needing to use fixed size drawers makes it easier for me to design the rest of a furniture piece around them.

For example, just take a brad nailer to the limitless amounts of discarded particle board available on the streets of Amsterdam, and boom, one-day-build 3d-printer cart in the meter closet:

Or, throw some drawers in the hidden side of these living room plywood coffee table / bench / scooters:

Now we have a tidy place to hide the TV remote and wireless keyboard/mouse, coffee table books, store extra blankets, etc.

Ikea Skadis coffee/smoothie station

Our kitchen only has 1.6 m² (17 ft²) of counter space, so we mounted two Ikea Skadis pegboards to the side of our fridge to make a coffee / smoothie station:

The clear Ikea containers are great for nuts and coffee since you can grab them with one hand and drop ‘em in the blender or grinder.

I designed and 3d-printed custom mounts for my Clever Dripper, coffee grinder (1Zpresso k-series), little water spritzer, and bottles of Friedhats beans.

Since we can’t screw into the fridge, the panels are hanging from some 3d-printed hooks command stripped on the top of the fridge cabinet. Clear nano-tape keeps the Skadis boards from swinging.

The cheap Ninja cup blender is quite loud, so we leave a pair of Peltor X4 ear defenders hanging next to it.

Ikea Maximera drawers

As soon as we made the smoothie station we decided to replace the cabinet shelves underneath it with drawers. (The only drawers that came with the kitchen were installed underneath the range, creating constant conflict between the cook and anyone needing silverware.)

Decent soft-close undermount drawer slides from Blum/Hettich cost like $30 each, and for the same price Ikea sells slides with a drawer box attached.

Since we’re renting and can’t make permanent changes to the kitchen, I built a carcass for the drawers within the existing cabinet:

The particle board sides carry the weight of the drawers to the base of the existing cabinet, and they’re fixed to the walls with nano-tape rather than screws.

Nicki 3d-printed cute pulls and we threw those onto temporary fronts made of leftover MDF. As you can see from the hardware poking through, these 8mm fronts are a bit too thin, so I plan to replace them with thicker MDF, probably sculpted with a goofy pattern in the style of Arno Hoogland.

Having drawers is awesome:

We bought a bunch of extra cups for the Ninja blender and keep them pre-filled with creatine and protein powder. The bottom drawer holds the dozen varieties of peanut butter remaining from the Gwern-inspired tasting I held in November. (The UK’s Pip & Nut peanut butters were the crowd favorites, by the way.)

Digital

Emacs and LLMs

I’ve used Emacs for something like 15 years, but after the first year or so I deliberately avoided trying to customize it, as that felt like too much of a distraction from the more important yaks I was shaving through my 20’s and early 30’s.

However, in early 2025 I decided to dive in, for two reasons:

  1. I ran across Prot’s video demonstrating a cohesive set of well-designed search and completion packages, which suggested to me that there were interesting ideas being developed in the “modern” Emacs community
  2. I discovered gptel, which makes it extremely easy to query large language models within Emacs — just highlight text, invoke gptel, and the response is inserted right there.

What’s special about Emacs compared to other text editors is that it’s extremely easy to customize. Rather than a “text editor”, Emacs is better thought of as an operating system and live programming environment which just so happens to have a lot of functions related to editing text.

My day-to-day coding, writing, and thinking environment within Emacs has improved tremendously in 2025, as every time I’ve had any sort of customization or workflow-improvement idea, I’ve been able to try it out in just a minute or two by having an LLM generate the necessary Emacs Lisp code.

My mentality changed from “yeah, I’m sure it’s possible but I don’t have time or interest to figure out how to do it with this archaic and quirky programming language” to “let me spend two minutes trying”.

Turns out there are a lot of little improvements that can be done by an LLM in a few minutes. Here are some examples:

  • Literally for this article, I asked the LLM to write a Ruby method for my static site generator to render a table of contents (which you can see above!)

  • Lots of places don’t support markdown natively, so I had an LLM write me an Emacs function to render selected markdown text to HTML on the pasteboard, which lets me write in Emacs and then just ⌘V in Gmail to get a nicely formatted message.

  • When I write markdown and want to include an image, it’s annoying to have to copy/paste in the path to the image, so I had an LLM write me an autocomplete against all of the image files in the same directory as the markdown file. I’ve been using this for pretty much every article/newsletter I write now, since they usually have images.

  • I keep a daily activity log as a simple EDN file with entries like:

    {:start #inst "2025-12-28T12:00+01:00" :tags #{"lunch" "nicki"} :duration 2} 
    {:start #inst "2025-12-28T09:00+01:00" :tags #{"paneltron" "computering"} :duration 6
     :description "rough out add-part workflow."}
    {:start #inst "2025-12-28T08:20+01:00" :tags #{"wakeup"}}
    

    (Everything’s rounded to the half-hour.)

    I started this when I was billing by the hour (a decade ago), and have kept it up because it’s handy to have a lightweight, low-friction record of what I’ve been up to. I used to do occasional analysis manually via a REPL, but couldn’t sleep one night so I spent 30 minutes having an LLM throw together a visual summary which I can invoke for whatever week is under my cursor. It looks like:

    Week: 2025-12-22 to 2025-12-28
    
    Locations:
      Amsterdam - Friday 2025-12-19T13:30+01:00
    
    computering       16.5h ##################
    box-carts         14.5h ###############
    woodworking       13.0h ##############
    llms               8.0h ########
    dinner             7.0h #######
    

    and includes my most recent :location (an attribute I started tagging entries with to help keep track of visa limitations while traveling).

    I’m extremely chuffed about having quick access to weekly summaries and suspect that tying that to my existing habit of recording entries will be a good intra-week feedback loop about whether I’m spending time in alignment with my priorities.
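    The rollup behind that summary is simple; here’s a rough sketch of the tag-hours part (illustrative, assuming entries shaped like the ones above):

    (defn tag-hours
      "Sum :duration per tag across a week of entries."
      [entries]
      (->> entries
           (mapcat (fn [{:keys [tags duration]}]
                     (for [t tags] [t (or duration 0)])))
           (reduce (fn [acc [t h]] (update acc t (fnil + 0) h)) {})
           (sort-by val >)))

    (tag-hours [{:tags #{"lunch" "nicki"} :duration 2}
                {:tags #{"computering"} :duration 6}])
    ;; => (["computering" 6] ["lunch" 2] ["nicki" 2])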

  • Whenever I write an article or long message, before sending it I run it by an LLM with the following prompt:

    my friend needs feedback on this article — are there any typos, confusing sentences, or other things that could be improved? Be blunt and I’ll convey the feedback to my friend in a friendly way.

    This one doesn’t even involve any code, it’s just a habit that’s easy because it’s easy to call an LLM from within Emacs. LLMs will note repeated words, incorrect homonyms, and awkward sentences that simple spell-checkers miss. Here’s an example from this very article:

    1. Double parenthesis: [Paulk-style]((https://www.youtube.com/watch?v=KnNi6Tpp-ac)) — remove one set
    2. “Ikea Maximara drawers” in the heading, but the product is actually “Maximera”
    3. http://localhost:9292/newsletter/2025_06_03_prototyping_a_language/ — you’ve left a localhost URL in the FlowStorm section

Emacs has a pretty smooth gradient from “adjust a keybinding”, to “quick helper function”, to a full-on workflow. Here’s an example from the latter end of that spectrum.

I asked Claude Code to make some minor CSS modifications for a project, then got nerd-sniped trying to understand why it used a million tokens to explore my 4000-word codebase and edit a dozen lines:

Usage by model:
        claude-haiku:  8.6k input, 5.4k output, 434.2k cache read, 33.4k cache write ($0.1207)
       claude-sonnet:  1.0k input, 262 output, 0 cache read, 0 cache write ($0.0069)
     claude-opus-4-5:  214 input, 8.3k output, 842.5k cache read, 47.8k cache write ($0.93)

After a bit of digging, it seemed likely this is a combination of factors:

  • Claude Code’s system and tool prompts
  • Repeatedly invoking tools to grep around the directory and read files 200 lines at a time
  • Absolute nonsense — Mario Zechner has some great analysis on this (fun fact: “Claude Code uses Haiku to generate those little whimsical ‘please wait’ messages. For every. token. you. input.”).

For comparison, I invoked Opus 4.5 manually with all of my source code and asked what I needed to change, and it nailed the answer using only 5000 tokens (4500 input, 500 output).

So I leaned into this and wrote my own lightweight, single-shot workflow in Emacs:

  • I write something like:

    @rewrite
    
    /path/to/file1
    /path/to/file2
    
    Please do X, Y, Z, thanks!
    

    and when I send it, some elisp code:

  • adds the specified files to the context window

  • sets the system prompt to be “please reply only with a lil’ string replacement patch format that looks like …”

  • sends the LLM response to a Babashka script which applies this patch, sandboxed to just the specified files.
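The patch-applying step is mostly string surgery; here’s a stripped-down sketch of the idea (the patch format and names are invented for illustration; the real script’s sandboxing is stricter):

(require '[clojure.string :as str])

(defn apply-patches!
  "Apply [{:file f :search s :replace r} ...] edits, restricted to
  an allowed set of files. Illustrative only."
  [allowed patches]
  (doseq [{:keys [file search replace]} patches]
    (assert (contains? allowed file) (str "not allowed: " file))
    (let [src (slurp file)]
      (assert (str/includes? src search) (str "search text not found in " file))
      (spit file (str/replace-first src search replace)))))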

I’ve used it a handful of times so far and it works exactly the way I’d imagined — it’s much faster than waiting for an “agent” to make a dozen tool calls and lets me take advantage of LLMs for tedious work while remaining in the loop.

(Admittedly, this one took a few hours rather than a few minutes, but it was well worth it in terms of getting some hands-on experience building a structured LLM-based workflow.)

A little Photos.app exporter

When you copy images out of Photos.app, it only puts a 1024 pixel wide “preview image” on the pasteboard. This loses too much detail.

The built-in “export” workflow is tedious to use and doesn’t actually compress well, so you have to do another pass through Squoosh or ImageOptim anyway. Of course, you’ll want to resize before you do that, maybe in Preview?

I noticed how annoyed I was trying to add photos to my articles, so I vibe-coded a little Photos.app exporter that behaves exactly like how I want.

UV (Python dependencies)

I haven’t done much Python in my career, but it’s a popular language that I’d occasionally need to tap into to use some specific library (e.g., OpenCV, NumPy, JAX) or run the code behind some interesting research paper.

The package management has always been (from my perspective of a casual outsider) an absolute mess. Yes, with lots of good reasons related to packaging native code across different operating systems and architectures, etc. etc. etc.

Whatever, if pip install failed to build my eggs or wheels or whatever, I’d usually just give up.

As for reproducibility and pinning to exact versions…¯\_(ツ)_/¯.

“Maybe if I do nothing the problem will fix itself” isn’t a great problem solving strategy, but it definitely worked out for me for understanding Python dependency managers: I came across UV in late 2024 and it…just works? It’s also fast!

I can now finally create a Python project, specify some dependencies, and automatically get a lockfile ensuring it’ll work just fine on my other computer, 6 months from now!

This problem has been a thorn in the side of not just software developers, but also scientific researchers for decades. The folks who’ve finally resolved it deserve, like, the Nobel Peace prize.

Mise-en-place (all dependencies)

This past year I’ve also been loving mise-en-place. From their homepage:

mise is a tool that manages installations of programming language runtimes and other tools for local development. For example, it can be used to manage multiple versions of Node.js, Python, Ruby, Go, etc. on the same machine.

Once activated, mise can automatically switch between different versions of tools based on the directory you’re in. This means that if you have a project that requires Node.js 18 and another that requires Node.js 22, mise will automatically switch between them as you move between the two projects.

I’ve used language-specific versions of this idea, and it’s been refreshing to throw all of those out in favor of a single uniform, fast solution.

I have a user-wide “default” config, containing stuff like:

  • languages (a JVM, clojure, ruby)
  • language servers (clojure-lsp, rust analyzer, etc.)
  • git-absorb: “git commit --fixup, but automatic”
  • Difftastic AST-aware diffing
  • Numbat, an awesome unit-aware scientific calculator

and on specific projects I specify the same and additional tools, so that collaborators can have a “single install” that guarantees we’re all on the same versions of everything.

Sure, something like Nix (hermetically sealed, content addressed, etc., etc.) is better in theory, but the UX and conceptual model is a mess.

Mise feels like a strong, relatively simple local optimum: You specify what you want through a simple hierarchy of configs, mise generates lockfiles, downloads the requested dependencies, and puts them on the $PATH based on where you are.

I’ve been using it for a year and haven’t had to learn anything beyond what I did in the first 15 minutes. It works.

Atuin (shell history)

Atuin records all of your shell commands, (optionally) syncs them across multiple computers, and makes it easy to recall commands with a fuzzy finder. It’s all self-hostable and is distributed as a single binary.

I find the shell history extremely useful, both for remembering infrequently used commands, as well as for simply avoiding typing out tediously long ones with many flags. Having a proper fuzzy search that shows dozens of results (rather than just one at a time) makes it straightforward to use.

Before this I wouldn’t have thought twice about my shell command history, and now it’s something I deliberately back up because it’s so useful.

YouTube Premium

At some point in 2025 the ad-blocker I was using stopped working on YouTube and I started seeing either pauses or (uggghhh) commercials (in Dutch, which Google must think I understand, despite 15 years of English GMail, not to mention my Accept-Language header).

Given that YouTube is a Wonder of the World and $20/month leaves me with a consumer surplus in the neighborhood of $10^4–$10^5, I decided to subscribe to YouTube Premium.

Honestly I just wanted the ads to stop, but it’s even better — I can conveniently download videos to my phone, which means I can watch interesting videos while riding on the stationary exercise bike in the park near my house.

It also comes with YouTube Music, which immediately played me a late 00’s indie rock playlist that brought me back to college.

100% worth it.

FlowStorm (Clojure debugger)

I don’t normally use debuggers, especially since in Clojure it’s usually straightforward to pretty-print the entire application state.

However, this usual approach failed me when I was building an interpreter for some CAD programming language experiments — each AST node holds a reference to its environment (a huge map), and simply printing a tree yields megabytes of text.

FlowStorm takes advantage of the fact that idiomatic Clojure uses immutable data structures — the debugger simply holds a reference to every value generated during a computation, so that you can analyze them later.
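The core idea, reduced to a toy sketch (my illustration, not FlowStorm’s actual mechanism, which instruments at the compiler level):

(def recorded (atom []))

(defn traced
  "Wrap f so every return value is also kept for later inspection."
  [f]
  (fn [& args]
    (let [v (apply f args)]
      (swap! recorded conj v)
      v)))

((traced inc) 41) ;; => 42, and @recorded now holds [42]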

There are facilities to search all of the recorded values. So if you see a string “foo” on your rendered webpage or whatever, you can easily answer questions like “where is the first callstack where the string ‘foo’ shows up?”.

All of the recorded data is available programmatically too. I used this infrastructure to make a live visualization of a 2D graphics program where as you move your cursor around the program source code, you see the closest graphical entity rendered automatically.

(“A debounced callback triggered by cursor movement which executes Clojure code and highlights text according to the return value” is another example of an Emacs customization I never would have attempted prior to LLMs.)

Whispertron (transcription)

I vibe-coded my own lil’ voice transcription app back in October 2024, but I’m including it in this list because using it has become second-nature to me in 2025.

Before I had reliable transcription at my fingertips, I never felt hindered by my typing speed (~125 words per minute). However, now that I have it, I find that I’m expressing much more when I’m dictating compared to typing.

It reminds me of the difference between responding to an email on an iPhone versus using a computer with a large monitor and full keyboard. I find myself providing much more context and otherwise elaborating my thoughts in more detail. It’s just easier to speak out loud than type out the same information.

This yields much better results when prompting LLMs: typing, I’ll say “do X”; speaking, I’ll say “do X, maybe try A, B, C, remember about Y and Z constraints”.

It also yields better relationships: When emailing and texting friends, I’ll dictate (and then clean up / format) much more detailed, longer responses than what I’d type.

Misc. stuff

Permalink

Joyful Python with the REPL

REPL Driven Development is a workflow that makes coding both joyful and interactive. The feedback loop from the REPL is a great thing to have at your fingertips.

"If you can improve just one thing in your software development, make it getting faster feedback."
Dave Farley

Just like Test Driven Development (TDD), it will help you write testable code. I have also noticed a nice side effect from this workflow: REPL Driven Development encourages a functional programming style.

REPL Driven Development is an everyday thing among Clojure developers and doable in Python, but far less known here. I'm working on making it an everyday thing in Python development too.

But what is REPL Driven Development?

What is it?

You evaluate variables, code blocks, functions - or an entire module - and get instant feedback, just by hitting a key combination in your favorite code editor. There's no reason to leave the IDE for a less featured shell to accomplish all of that. You already have autocomplete, syntax highlighting and the color theme set up in your editor. Why not use that, instead of a shell?

Evaluate code and get feedback, without leaving the code editor.

Ideally, the result of an evaluation pops up right next to the cursor, so you don't have to do any context switches or lose focus. It can also be printed out in a separate frame right next to the code. This means that testing the code you currently write is at your fingertips.

Easy setup

With some help from IPython, it is possible to write, modify & evaluate Python code in a REPL Driven way. I would recommend installing IPython globally, to make it accessible from anywhere on your machine.

pip install ipython

Configure IPython to make it ready for REPL Driven Development:

c.InteractiveShellApp.exec_lines = ["%autoreload 2"]
c.InteractiveShellApp.extensions = ["autoreload"]
c.TerminalInteractiveShell.confirm_exit = False

You will probably find the configuration file here: ~/.ipython/profile_default/ipython_config.py

You are almost all set.

Emacs setup

Emacs is my favorite editor. I'm using a couple of Python specific packages to make life as a Python developer in general better, such as elpy. The auto-virtualenv package will also help make REPL Driven Development easier. It will find local virtual environments automatically, and you can start coding without any python-path quirks.

Most importantly, set IPython as the default shell in Emacs. Have a look at my Emacs setup for the details.

VS Code setup

I am not a VS Code user. But I wanted to learn how well supported REPL Driven Development is in VS Code, so I added these extensions:

You would probably want to add keyboard shortcuts to get the true interactive feel of it. Here, I'm just trying things out by selecting code, right clicking and running it in an interactive window. It seems to work pretty well! I haven't yet figured out whether the interactive window picks up the global IPython config, or whether it reloads a modified submodule automatically.

Evaluating code in the editor with fast feedback loops.
It would be great to have keyboard commands here, though.

Current limitations

In Clojure, you connect to & modify an actually running program by re-evaluating the source code. That is a wonderful thing for the developer experience in general. I haven't been able to do that with Python, and I believe Python would need something equivalent to nREPL to get that kind of magic power.

Better than TDD

I practice REPL Driven Development in my daily Python work. For me, it has become a way to quickly verify that the code I'm currently writing works as expected. I usually think of this REPL driven thing as Test Driven Development Deluxe. Besides just evaluating the code, I often write short-lived code snippets to test out some functionality. By doing that, I can write code and test it interactively. Sometimes, these code snippets are converted into permanent unit tests.

For a live demo, have a look at my five minute lightning talk from PyCon Sweden about REPL Driven Development in Python.

Never too late to learn

I remember it took me almost a year of learning & developing Clojure before I actually "got it". Before that, I sometimes copied some code and pasted it into a REPL and then ran it. But that didn't give me a nice developer experience at all. Copy-pasting code is cumbersome and will often fail because of missing variables, functions or imports. Don't do that.

I remember the feeling when I figured out the REPL Driven Development workflow: I finally understood how software development should be done. It took me about 20 years to get there. It is never too late to learn new things. 😁



Top photo by ckturistando on Unsplash

Permalink

OSS updates November and December 2025

In this post I'll give updates about open source I worked on during November and December 2025.

To see previous OSS updates, go here.

Sponsors

I'd like to thank all the sponsors and contributors that make this work possible. Without you, the below projects would not be as mature, or wouldn't exist or be maintained at all! So a sincere thank you to everyone who contributes to the sustainability of these projects.

gratitude

Current top tier sponsors:

Open the details section for more info about sponsoring.

Sponsor info

If you want to ensure that the projects I work on are sustainably maintained, you can sponsor this work in the following ways. Thank you!

Updates

Clojure Conj 2025

Last November I had the honor and pleasure to visit the Clojure Conj 2025. I met a host of wonderful and interesting long-time and new Clojurians, many of whom I've known online for a long time and have now met for the first time. It was especially exciting to finally meet Rich Hickey and talk to him during a meeting about Clojure dialects and Clojure tooling. The talk that I gave there, "Making tools developers actually use", will come online soon.

presentation at Dutch Clojure meetup

Babashka conf and Dutch Clojure Days 2026

In 2026 I'm organizing Babashka Conf 2026. It will be an afternoon event (13:00-17:00) hosted in the Forum hall of the beautiful public library of Amsterdam. More information here. Get your ticket via Meetup.com (currently there's a waiting list, but more places will come available once speakers are confirmed). The CfP will open mid-January. The day after Babashka Conf, Dutch Clojure Days 2026 will be happening. It's not too late to get your talk proposal in. More info here.

Clojurists Together: long term funding

I'm happy to announce that I'm among the 5 developers that were granted Long term support for 2026. Thanks to all who voted! Read the announcement here.

Projects

Here are detailed updates about the projects/libraries I've worked on in the last two months.

  • babashka: native, fast starting Clojure interpreter for scripting.

    • Bump process to 0.6.25
    • Bump deps.clj
    • Fix #1901: add java.security.DigestOutputStream
    • Redefining namespace with ns should override metadata
    • Bump nextjournal.markdown to 0.7.222
    • Bump edamame to 1.5.37
    • Fix #1899: with-meta followed by dissoc on records no longer works
    • Bump fs to 0.5.30
    • Bump nextjournal.markdown to 0.7.213
    • Fix #1882: support for reifying java.time.temporal.TemporalField (@EvenMoreIrrelevance)
    • Bump Selmer to 1.12.65
    • SCI: sci.impl.Reflector was rewritten into Clojure
    • dissoc on record with non-record field should return map instead of record
    • Bump edamame to 1.5.35
    • Bump core.rrb-vector to 0.2.0
    • Migrate detection of the executable name for self-executing uberjars from ProcessHandle to native-image ProcessInfo to avoid sandbox errors
    • Bump cli to 0.8.67
    • Bump fs to 0.5.29
    • Bump nextjournal.markdown to 0.7.201
  • SCI: Configurable Clojure/Script interpreter suitable for scripting

    • Add support for :refer-global and :require-global
    • Add println-str
    • Fix #997: Var is mistaken for local when used under the same name in a let body
    • Fix #1001: JS interop with reserved js keyword fails (regression of #987)
    • sci.impl.Reflector was rewritten into Clojure
    • Fix babashka/babashka#1886: Return a map when dissociating a record basis field.
    • Fix #1011: reset ns metadata when evaluating ns form multiple times
    • Fix for https://github.com/babashka/babashka/issues/1899
    • Fix #1010: add js-in in CLJS
    • Add array-seq
  • clj-kondo: static analyzer and linter for Clojure code that sparks joy.

    • #2600: NEW linter: unused-excluded-var to warn on unused vars in :refer-clojure :exclude (@jramosg)
    • #2459: NEW linter: :destructured-or-always-evaluates to warn on s-expressions in :or defaults in map destructuring (@jramosg)
    • Add type checking support for sorted-map-by, sorted-set, and sorted-set-by functions (@jramosg)
    • Add new type array and type checking support for the next functions: to-array, alength, aget, aset and aclone (@jramosg)
    • Fix #2695: false positive :unquote-not-syntax-quoted in leiningen's defproject
    • Leiningen's defproject behavior can now be configured using leiningen.core.project/defproject
    • Fix #2699: fix false positive unresolved string var with extend-type on CLJS
    • Rename :refer-clojure-exclude-unresolved-var linter to unresolved-excluded-var for consistency
    • v2025.12.23
    • #2654: NEW linter: redundant-let-binding, defaults to :off (@tomdl89)
    • #2653: NEW linter: :unquote-not-syntax-quoted to warn on ~ and ~@ usage outside syntax-quote (`) (@jramosg)
    • #2613: NEW linter: :refer-clojure-exclude-unresolved-var to warn on non-existing vars in :refer-clojure :exclude (@jramosg)
    • #2668: Lint & syntax errors in let bindings and lint for trailing & (@tomdl89)
    • #2590: duplicate-key-in-assoc changed to duplicate-key-args, and now lints dissoc, assoc! and dissoc! too (@tomdl89)
    • #2651: resume linting after paren mismatches
    • clojure-lsp#2651: Fix inner class name for java-class-definitions.
    • clojure-lsp#2651: Include inner class java-class-definition analysis.
    • Bump babashka/fs
    • #2532: Disable :duplicate-require in require + :reload / :reload-all
    • #2432: Don't warn for :redundant-fn-wrapper in case of inlined function
    • #2599: detect invalid arity for invoking collection as higher order function
    • #2661: Fix false positive :unexpected-recur when recur is used inside clojure.core.match/match (@jramosg)
    • #2617: Add types for repeatedly (@jramosg)
    • Add :ratio type support for numerator and denominator functions (@jramosg)
    • #2676: Report unresolved namespace for namespaced maps with unknown aliases (@jramosg)
    • #2683: data argument of ex-info may be nil since clojure 1.12
    • Bump built-in ClojureScript analysis info
    • Fix #2687: support new :refer-global and :require-global ns options in CLJS
    • Fix #2554: support inline configs in .cljc files
  • edamame: configurable EDN and Clojure parser with location metadata and more

    • Minor: leave out :edamame/read-cond-splicing when not splicing
    • Allow :read-cond function to override :edamame/read-cond-splicing value
    • The result from :read-cond with a function should be spliced. This behavior differs from :read-cond + :preserve which always returns a reader conditional object which cannot be spliced.
    • Support function for :features option to just select the first feature that occurs
  • squint: CLJS syntax to JS compiler

    • Allow macro namespaces to load "node:fs", etc. to read config files for conditional compilation
    • Don't emit IIFE for top-level let so you can write let over defn to capture values.
    • Fix js-yield and js-yield* in expression position
    • Implement some? as macro
    • Fix #758: volatile!, vswap!, vreset!
    • pr-str, prn etc now print EDN (with the idea that you can paste it back into your program)
    • new #js/Map reader that reads a JavaScript Map from a Clojure map (maps are printed like this with pr-str too)
    • Support passing keyword to mapv
    • #759: doseq can't be used in expression context
    • Fix #753: optimize output of dotimes
    • alength as macro
  • reagami: A minimal zero-deps Reagent-like for Squint and CLJS

    • Performance enhancements
    • treat innerHTML as a property rather than an attribute
    • Drop support for camelCased properties / (css) attributes
    • Fix :default-value in input range
    • Support data param in :on-render
    • Support default values for uncontrolled components
    • Fix child count mismatch
    • Fix re-rendering/patching of subroots
    • Add :on-render hook for mounting/updating/unmounting third-party JS components
  • NEW: parmezan: fixes unbalanced or unexpected parens or other delimiters in Clojure files

  • CLI: Turn Clojure functions into CLIs!

    • #126: - value accidentally parsed as option, e.g. --file -
    • #124: Specifying exec fn that starts with hyphen is treated as option
    • Drop Clojure 1.9 support. Minimum Clojure version is now 1.10.3.
  • clerk: Moldable Live Programming for Clojure

    • always analyze doc (but not deps) when no-cache is set (#786)
    • add option to disable inline formulas in markdown (#780)
  • scittle: Execute Clojure(Script) directly from browser script tags via SCI

  • Nextjournal Markdown

    • Add config option to avoid TeX formulas
    • API improvements for passing options
  • cherry: Experimental ClojureScript to ES6 module compiler

    • Fix cherry compile CLI command not receiving file arguments
    • Bump shadow-cljs to 3.3.4
    • Fix #163: Add assert to macros (@willcohen)
    • Fix #165: Fix ClojureScript protocol dispatch functions (@willcohen)
    • Fix #167: Protocol dispatch functions inside IIFEs; bump squint accordingly
    • Fix #169: fix extend-type on Object
    • Fix #171: Add satisfies? macro (@willcohen)
  • deps.clj: A faithful port of the clojure CLI bash script to Clojure

    • Released several versions catching up with the clojure CLI
  • quickdoc: Quick and minimal API doc generation for Clojure

    • Fix extra newline in codeblock
  • quickblog: light-weight static blog engine for Clojure and babashka

    • Add support for a blog contained within another website; see Serving an alternate content root in README. (@jmglov)
    • Upgrade babashka/http-server to 0.1.14
    • Fix :blog-image-alt option being ignored when using CLI (bb quickblog render)
  • nbb: Scripting in Clojure on Node.js using SCI

    • #395: fix vim-fireplace infinite loop on nREPL session close.
    • Add ILookup and Cons
    • Add abs
    • nREPL: support "completions" op
  • neil: A CLI to add common aliases and features to deps.edn-based projects.

    • neil.el - a hook that runs after finding a package (@agzam)
    • neil.el - adds a function for injecting a found package into current CIDER session (@agzam)
    • #245: neil.el - neil-executable-path now can be set to clj -M:neil
    • #251: Upgrade library deps-new to 0.10.3
    • #255: update maven search URL
  • fs - File system utility library for Clojure

    • #154 reflect in directory check and docs that move never follows symbolic links (@lread)
    • #181 delete-tree now deletes broken symbolic link root (@lread)
    • #193 create-dirs now recognizes sym-linked dirs on JDK 11 (@lread)
    • #184: new check in copy-tree for copying to self too rigid
    • #165: zip now excludes zip-file from zip-file (@lread)
    • #167: add root fn which exposes Path getRoot (@lread)
    • #166: copy-tree now fails fast on attempt to copy parent to child (@lread)
    • #152: an empty-string path "" is now (typically) understood to be the current working directory (as per underlying JDK file APIs) (@lread)
    • #155: fs/with-temp-dir clj-kondo linting refinements (@lread)
    • #162: unixify no longer expands into absolute path on Windows (potentially BREAKING)
    • Add return type hint to read-all-bytes
  • process: Clojure library for shelling out / spawning sub-processes

    • #181: support :discard or ProcessBuilder$Redirect as :out and :err options

Contributions to third party projects:

  • ClojureScript
    • CLJS-3466: support qualified method in return position
    • CLJS-3468: :refer-global should not make unrenamed object available

Other projects

These are (some of the) other projects I'm involved with where little to no activity happened in the past months.

Click for more details

- [pod-babashka-go-sqlite3](https://github.com/babashka/pod-babashka-go-sqlite3): A babashka pod for interacting with sqlite3
- [unused-deps](https://github.com/borkdude/unused-deps): Find unused deps in a clojure project
- [pod-babashka-fswatcher](https://github.com/babashka/pod-babashka-fswatcher): babashka filewatcher pod
- [sci.nrepl](https://github.com/babashka/sci.nrepl): nREPL server for SCI projects that run in the browser
- [babashka.nrepl-client](https://github.com/babashka/nrepl-client)
- [http-server](https://github.com/babashka/http-server): serve static assets
- [nbb](https://github.com/babashka/nbb): Scripting in Clojure on Node.js using SCI
- [sci.configs](https://github.com/babashka/sci.configs): A collection of ready to be used SCI configs.
- [http-client](https://github.com/babashka/http-client): babashka's http-client
- [html](https://github.com/borkdude/html): Html generation library inspired by squint's html tag
- [instaparse-bb](https://github.com/babashka/instaparse-bb): Use instaparse from babashka
- [sql pods](https://github.com/babashka/babashka-sql-pods): babashka pods for SQL databases
- [rewrite-edn](https://github.com/borkdude/rewrite-edn): Utility lib on top of rewrite-clj
- [rewrite-clj](https://github.com/clj-commons/rewrite-clj): Rewrite Clojure code and edn
- [tools-deps-native](https://github.com/babashka/tools-deps-native) and [tools.bbuild](https://github.com/babashka/tools.bbuild): use tools.deps directly from babashka
- [bbin](https://github.com/babashka/bbin): Install any Babashka script or project with one command
- [qualify-methods](https://github.com/borkdude/qualify-methods): Initial release of experimental tool to rewrite instance calls to use fully qualified methods (Clojure 1.12 only)
- [tools](https://github.com/borkdude/tools): a set of [bbin](https://github.com/babashka/bbin/) installable scripts
- [babashka.json](https://github.com/babashka/json): babashka JSON library/adapter
- [speculative](https://github.com/borkdude/speculative)
- [squint-macros](https://github.com/squint-cljs/squint-macros): a couple of macros that stand-in for [applied-science/js-interop](https://github.com/applied-science/js-interop) and [promesa](https://github.com/funcool/promesa) to make CLJS projects compatible with squint and/or cherry.
- [grasp](https://github.com/borkdude/grasp): Grep Clojure code using clojure.spec regexes
- [lein-clj-kondo](https://github.com/clj-kondo/lein-clj-kondo): a leiningen plugin for clj-kondo
- [http-kit](https://github.com/http-kit/http-kit): Simple, high-performance event-driven HTTP client+server for Clojure.
- [babashka.nrepl](https://github.com/babashka/babashka.nrepl): The nREPL server from babashka as a library, so it can be used from other SCI-based CLIs
- [jet](https://github.com/borkdude/jet): CLI to transform between JSON, EDN, YAML and Transit using Clojure
- [lein2deps](https://github.com/borkdude/lein2deps): leiningen to deps.edn converter
- [cljs-showcase](https://github.com/borkdude/cljs-showcase): Showcase CLJS libs using SCI
- [babashka.book](https://github.com/babashka/book): Babashka manual
- [pod-babashka-buddy](https://github.com/babashka/pod-babashka-buddy): A pod around buddy core (Cryptographic Api for Clojure).
- [gh-release-artifact](https://github.com/borkdude/gh-release-artifact): Upload artifacts to Github releases idempotently
- [carve](https://github.com/borkdude/carve): Remove unused Clojure vars
- [4ever-clojure](https://github.com/oxalorg/4ever-clojure): Pure CLJS version of 4clojure, meant to run forever!
- [pod-babashka-lanterna](https://github.com/babashka/pod-babashka-lanterna): Interact with clojure-lanterna from babashka
- [joyride](https://github.com/BetterThanTomorrow/joyride): VSCode CLJS scripting and REPL (via [SCI](https://github.com/babashka/sci))
- [clj2el](https://borkdude.github.io/clj2el/): transpile Clojure to elisp
- [deflet](https://github.com/borkdude/deflet): make let-expressions REPL-friendly!
- [deps.add-lib](https://github.com/borkdude/deps.add-lib): Clojure 1.12's add-lib feature for leiningen and/or other environments without a specific version of the clojure CLI

Permalink

rswan 1.1.0, and other Clojure updates

Notes

  • rswan 1.1.0-PRE https://codeberg.org/mindaslab/rswan
  • About setting the repo
    • :repositories [["clojars" {:url "https://clojars.org/org.clojars.mindaslab.rswan"
                                   :sign-releases false}]]
      
    • caused due to a Java + Clojure upgrade, which resulted in different nREPL versions
  • Try rswan - demo
  • Why is clj not recognized as Clojure in Logseq?
  • job
    • Weird
    • $0
    • Some equity if things go right
    • not sure
  • Anyone wants any Clojure help?
    • Want to learn more
    • Need not be paid
    • If paid, will be really happy

Permalink

Building Heretic: From ClojureStorm to Mutant Schemata

Heretic

This is Part 2 of a series on mutation testing in Clojure. Part 1 introduced the concept and why Clojure needed a purpose-built tool.

The previous post made a claim: mutation testing can be fast if you know which tests to run. This post shows how Heretic makes that happen.

We'll walk through the three core phases: collecting expression-level coverage with ClojureStorm, transforming source code with rewrite-clj, and the optimization techniques that keep mutation counts manageable.

Phase 1: Coverage Collection

Traditional coverage tools track lines. Heretic tracks expressions.

The difference matters. Consider:

(defn process-order [order]
  (if (> (:quantity order) 10)
    (* (:price order) 0.9)    ;; <- Line 3: bulk discount
    (:price order)))

Line-level coverage would show line 3 as "covered" if any test enters the bulk discount branch. But expression-level coverage distinguishes between tests that evaluate *, (:price order), and 0.9. When we later mutate 0.9 to 1.1, we can run only the tests that actually touched that specific literal - not every test that happened to call process-order.

ClojureStorm's Instrumented Compiler

ClojureStorm is a fork of the Clojure compiler that instruments every expression during compilation. Created by Juan Monetta for the FlowStorm debugger, it provides exactly the hooks Heretic needs. (Thanks to Juan for building such a solid foundation - Heretic would not exist without ClojureStorm.)

The integration is surprisingly minimal:

(ns heretic.tracer
  (:import [clojure.storm Emitter Tracer]))

(def ^:private current-coverage
  "Atom of {form-id #{coords}} for the currently running test."
  (atom {}))

(defn record-hit! [form-id coord]
  (swap! current-coverage
         update form-id
         (fnil conj #{})
         coord))

(defn init! []
  ;; Configure what gets instrumented
  (Emitter/setInstrumentationEnable true)
  (Emitter/setFnReturnInstrumentationEnable true)
  (Emitter/setExprInstrumentationEnable true)

  ;; Set up callbacks
  (Tracer/setTraceFnsCallbacks
   {:trace-expr-fn (fn [_ _ coord form-id]
                     (record-hit! form-id coord))
    :trace-fn-return-fn (fn [_ _ coord form-id]
                          (record-hit! form-id coord))}))

When any instrumented expression evaluates, ClojureStorm calls our callback with two pieces of information:

  • form-id: A unique identifier for the top-level form (e.g., an entire defn)
  • coord: A path into the form's AST, like "3,2,1" meaning "third child, second child, first child"

Together, [form-id coord] pinpoints exactly which subexpression executed. This is the key that unlocks targeted test selection.

The Coordinate System

To connect a mutation in the source code to the coverage data, we need a way to uniquely address any subexpression. Think of it as a postal address for code - we need to say "the a inside the + call inside the function body" in a format that both the coverage tracer and mutation engine can agree on.

ClojureStorm addresses this with a path-based coordinate system. Consider this function as a tree:

(defn foo [a b] (+ a b))
   │
   ├─[0] defn
   ├─[1] foo
   ├─[2] [a b]
   └─[3] (+ a b)
            │
            ├─[3,0] +
            ├─[3,1] a
            └─[3,2] b

Each number represents which child to pick at each level. The coordinate "3,2" means "go to child 3 (the function body), then child 2 (the second argument to +)". That gives us the b symbol.
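
Coordinate strings like "3,2" are easy to split back into path segments. Here is a minimal sketch of such a parser (a hypothetical implementation of the parse-coord helper that appears later in this post), keeping hash-based map/set segments as strings and positions as integers:

(require '[clojure.string :as str])

(defn parse-coord [coord]
  (mapv (fn [part]
          (if (re-matches #"\d+" part)
            (Long/parseLong part) ;; positional index for lists/vectors
            part))                ;; hash-based segment like "K-1925180523"
        (str/split coord #",")))

(parse-coord "3,2") ;; => [3 2]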

This works cleanly for ordered structures like lists and vectors, where children have stable positions. But maps are unordered - {:name "Alice" :age 30} and {:age 30 :name "Alice"} are the same value, so numeric indices would be unstable.

ClojureStorm solves this by hashing the printed representation of map keys. Instead of "0" for the first entry, a key like :name gets addressed as "K-1925180523":

{:name "Alice" :age 30}
   │
   ├─[K-1925180523] :name
   ├─[V-1925180523] "Alice"
   ├─[K-1524292809] :age
   └─[V-1524292809] 30

The hash ensures stable addressing regardless of iteration order.
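
The idea in miniature (the exact hash function is a ClojureStorm internal; using Clojure's hash of the printed key here is purely an assumption for illustration):

(defn map-key-coord [k]
  ;; address a map entry by a hash of its printed key, not its position;
  ;; the actual value depends on the hash function ClojureStorm uses
  (str "K-" (hash (pr-str k))))

(map-key-coord :name) ;; => "K-..." (same result for any iteration order)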

With this addressing scheme, we can say "test X touched coordinate 3,1 in form 12345" and later ask "which tests touched the expression we're about to mutate?"

The Form-Location Bridge

Here's a problem we discovered during implementation: how do we connect the mutation engine to the coverage data?

The mutation engine uses rewrite-clj to parse and transform source files. It finds a mutation site at, say, line 42 of src/my/app.clj. But the coverage data is indexed by ClojureStorm's form-id - an opaque identifier assigned during compilation. We need to translate "file + line" into "form-id".

Fortunately, ClojureStorm's FormRegistry stores the source file and starting line for each compiled form. We build a lookup index:

(defn build-form-location-index [forms source-paths]
  (into {}
        (for [[form-id {:keys [form/file form/line]}] forms
              :when (and file line)
              :let [abs-path (resolve-path source-paths file)]
              :when abs-path]
          [[abs-path line] form-id])))

When the mutation engine finds a site at line 42, it searches for the form whose start line is the largest value less than or equal to 42 - that is, the innermost containing form. This gives us the ClojureStorm form-id, which we use to look up which tests touched that form.
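
A sketch of that lookup, with illustrative names (not necessarily Heretic's actual code):

(defn form-id-for-line
  "Find the form-id of the innermost form containing `line` in `abs-path`:
  the entry with the largest start line that is <= line."
  [form-location-index abs-path line]
  (->> form-location-index
       (keep (fn [[[path start] form-id]]
               (when (and (= path abs-path) (<= start line))
                 [start form-id])))
       (sort-by first)
       last
       second))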

This bridging layer is what allows Heretic to connect source transformations to runtime coverage, enabling targeted test execution.

Collection Workflow

Coverage collection runs each test individually and captures what it touches:

(defn run-test-with-coverage [test-var]
  (tracer/reset-current-coverage!)
  (try
    (test-var)
    (catch Throwable t
      (println "Test threw exception:" (.getMessage t))))
  {(symbol test-var) (tracer/get-current-coverage)})

The result is a map from test symbol to coverage data:

{my.app-test/test-addition
  {12345 #{"3" "3,1" "3,2"}    ;; form-id -> coords touched
   12346 #{"1" "2,1"}}
 my.app-test/test-subtraction
  {12345 #{"3" "4"}
   12347 #{"1"}}}

This gets persisted to .heretic/coverage/ with one file per test namespace, enabling incremental updates. Change a test file? Only that namespace gets recollected.
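
The persistence itself can be as simple as one EDN file per test namespace (a sketch; the exact file layout and format are assumptions):

(require '[clojure.java.io :as io])

(defn save-namespace-coverage! [test-ns coverage]
  (let [f (io/file ".heretic" "coverage" (str test-ns ".edn"))]
    (io/make-parents f)
    (spit f (pr-str coverage))))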

At this point we have a complete map: for every test, we know exactly which [form-id coord] pairs it touched. Now we need to generate mutations and look up which tests are relevant for each one.

Phase 2: The Mutation Engine

With coverage data in hand, we need to actually mutate the code. This means:

  1. Parsing Clojure source into a navigable structure
  2. Finding locations where operators apply
  3. Transforming the source
  4. Hot-swapping the modified code into the running JVM

Parsing with rewrite-clj

rewrite-clj gives us a zipper over Clojure source that preserves whitespace and comments - essential for producing readable diffs:

(defn parse-file [path]
  (z/of-file path {:track-position? true}))

(defn find-mutation-sites [zloc]
  (->> (walk-form zloc)
       (remove in-quoted-form?)  ;; Skip '(...) and `(...)
       (mapcat (fn [z]
                 (let [applicable (ops/applicable-operators z)]
                   (map #(make-mutation-site z %) applicable))))))

The walk-form function traverses the zipper depth-first. At each node, we check which operators match. An operator is a data map with a matcher predicate:

(def swap-plus-minus
  {:id :swap-plus-minus
   :original '+
   :replacement '-
   :description "Replace + with -"
   :matcher (fn [zloc]
              (and (= :token (z/tag zloc))
                   (symbol? (z/sexpr zloc))
                   (= '+ (z/sexpr zloc))))})

Each mutation site captures the file, line, column, operator, and - critically - the coordinate path within the form. This coordinate is what connects a mutation to the coverage data from Phase 1.

Coordinate Mapping

The tricky part is converting between rewrite-clj's zipper positions and ClojureStorm's coordinate strings. We need bidirectional conversion for the round-trip:

(defn coord->zloc [zloc coord]
  (let [parts (parse-coord coord)]  ;; "3,2,1" -> [3 2 1]
    (reduce
     (fn [z part]
       (when z
         (if (string? part)      ;; Hash-based for maps/sets
           (find-by-hash z part)
           (nth-child z part)))) ;; Integer index for lists/vectors
     zloc
     parts)))

(defn zloc->coord [zloc]
  (loop [z zloc
         coord []]
    (cond
      (root-form? z) (vec coord)
      (z/up z)
      (let [part (if (is-unordered-collection? z)
                   (compute-hash-coord z)
                   (child-index z))]
        (recur (z/up z) (cons part coord)))
      :else (vec coord))))

The validation requirement is that these must be inverses:

(= coord (zloc->coord (coord->zloc zloc coord)))
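
In test form this might look as follows, assuming the definitions above; sample-sites is a hypothetical fixture holding a form's zipper and a coordinate collected from it:

(require '[clojure.test :refer [deftest is]])

(deftest coord-round-trip-test
  (doseq [{:keys [form-zloc coord]} sample-sites]
    (is (= coord (zloc->coord (coord->zloc form-zloc coord))))))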

With correct coordinate mapping, we can take a mutation at a known location and ask "which tests touched this exact spot?" That query is what makes targeted test execution possible.

Applying Mutations

Once we find a mutation site and can navigate to it, the actual transformation is straightforward:

(defn apply-mutation! [mutation]
  (let [{:keys [file form-id coord operator]} mutation
        operator-def (get ops/operators-by-id operator)
        original-content (slurp file)
        zloc (z/of-string original-content {:track-position? true})
        form-zloc (find-form-by-id zloc form-id)
        target-zloc (coord/coord->zloc form-zloc coord)
        replacement-str (ops/apply-operator operator-def target-zloc)
        modified-zloc (z/replace target-zloc
                                 (n/token-node (symbol replacement-str)))
        modified-content (z/root-string modified-zloc)]
    (spit file modified-content)
    (assoc mutation :backup original-content)))

Hot-Swapping with clj-reload

After modifying the source file, we need the JVM to see the change. clj-reload handles this correctly:

(ns heretic.reloader
  (:require [clj-reload.core :as reload]))

(defn init! [source-paths]
  (reload/init {:dirs source-paths}))

(defn reload-after-mutation! []
  (reload/reload {:throw false}))

Why clj-reload specifically? It solves problems that plain (require ... :reload) doesn't:

  1. Proper unloading: Calls remove-ns before reloading, preventing protocol/multimethod accumulation
  2. Dependency ordering: Topologically sorts namespaces, unloading dependents first
  3. Transitive closure: Automatically reloads namespaces that depend on the changed one

The mutation workflow becomes:

(with-mutation [m mutation]
  (reloader/reload-after-mutation!)
  (run-relevant-tests m))
;; Mutation automatically reverted in finally block
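
Given that apply-mutation! returns the mutation with a :backup of the original source, a plausible shape for with-mutation is (a sketch; the real macro may differ):

(defmacro with-mutation [[sym mutation] & body]
  `(let [~sym (apply-mutation! ~mutation)]
     (try
       ~@body
       (finally
         ;; restore the original source captured by apply-mutation!
         (spit (:file ~sym) (:backup ~sym))))))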

At this point we have the full pipeline: parse source, find mutation sites, apply a mutation, hot-reload, run targeted tests, restore. But running this once per mutation is still slow for large codebases. Phase 3 addresses that.

80+ Clojure-Specific Operators

The operator library is where Heretic's Clojure focus shows. Beyond the standard arithmetic and comparison swaps, we have:

Threading operators - catch ->/->> confusion:

(-> data (get :users) first)   ;; Original
(->> data (get :users) first)  ;; Mutant: wrong arg position

Nil-handling operators - expose nil punning mistakes:

(when (seq users) ...)   ;; Original: handles empty list
(when users ...)         ;; Mutant: breaks on empty list (truthy)

Lazy/eager operators - catch chunking and realization bugs:

(map process items)    ;; Original: lazy
(mapv process items)   ;; Mutant: eager, different memory profile

Destructuring operators - expose JSON interop issues:

{:keys [user-id]}   ;; Original: kebab-case
{:keys [userId]}    ;; Mutant: camelCase from JSON

The full set includes first/last, rest/next, filter/remove, conj/disj, some->/->, and qualified keyword mutations. These are the mistakes Clojure developers actually make.

With 80+ operators applied to a real codebase, mutation counts grow quickly. The next phase makes this tractable.

Phase 3: Optimization Techniques

With 80+ operators and a real codebase, mutation counts get large fast. A 1000-line project might generate 5000 mutations. Running the full test suite 5000 times is not practical.

Heretic uses several techniques to make this manageable.

Targeted Test Execution

This is the big one, enabled by Phase 1. Instead of running all tests for every mutation, we query the coverage index:

(defn tests-for-mutation [coverage-map mutation]
  (let [form-id (resolve-form-id (:form-location-index coverage-map) mutation)
        coord (:coord mutation)]
    (get-in coverage-map [:coord-to-tests [form-id coord]] #{})))

A mutation at (+ a b) might only be covered by 2 tests out of 200. We run those 2 tests in milliseconds instead of the full suite in seconds.
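
Running the selected tests is then a small loop (a sketch; test-passes? is a hypothetical helper that runs a single test var and reports success):

(defn mutant-killed? [coverage-map mutation]
  ;; a mutant is killed as soon as any covering test fails
  (boolean
   (some (fn [test-sym]
           (not (test-passes? (requiring-resolve test-sym))))
         (tests-for-mutation coverage-map mutation))))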

This is where the Phase 1 coverage investment pays off. But we can go further by reducing the number of mutations we generate in the first place.

Equivalent Mutation Detection

Some mutations produce semantically identical code. Detecting these upfront avoids wasted test runs:

;; (* x 0) -> (/ x 0) is NOT equivalent (divide by zero)
;; (* x 1) -> (/ x 1) IS equivalent (both return x)

(def equivalent-patterns
  [{:operator :swap-mult-div
    :context (fn [zloc]
               (some #(= 1 %) (rest (z/child-sexprs (z/up zloc)))))
    :reason "Multiplying or dividing by one has no effect"}

   {:operator :swap-lt-lte
    :context (fn [zloc]
               (let [[_ left right] (z/child-sexprs (z/up zloc))]
                 (and (= 0 right)
                      (non-negative-fn? (first left)))))
    :reason "(< (count x) 0) is always false"}])

The patterns cover boundary comparisons ((>= (count x) 0) is always true), function contracts ((nil? (str x)) is always false), and lazy/eager equivalences ((vec (map f xs)) equals (vec (mapv f xs))).

Filtering equivalent mutations prevents false "survived" reports. But we can also skip mutations that would be redundant to test.

Subsumption Analysis

Subsumption identifies when killing one mutation implies another would also be killed. If swapping < to <= is caught by a test, then swapping < to > would likely be caught too.

Based on the RORG (Relational Operator Replacement with Guard) research, we define subsumption relationships:

(def relational-operator-subsumption
  {'<  [:swap-lt-lte :swap-lt-neq :replace-comparison-false]
   '>  [:swap-gt-gte :swap-gt-neq :replace-comparison-false]
   '<= [:swap-lte-lt :swap-lte-eq :replace-comparison-true]
   ;; ...
   })

For each comparison operator, we only need to test the minimal set. The research shows this achieves roughly the same fault detection with 40% fewer mutations.

The subsumption graph also enables intelligent mutation selection:

(defn minimal-operator-set [operators]
  (set/difference
   operators
   ;; Remove any operator dominated by another in the set
   (reduce
    (fn [dominated op]
      (into dominated
            (set/intersection (dominated-operators op) operators)))
    #{}
    operators)))

These techniques reduce mutation count. The final optimization reduces the cost of each mutation.

Mutant Schemata: Compile Once, Select at Runtime

The most sophisticated optimization is mutant schemata. Instead of applying one mutation, reloading, testing, reverting, reloading for each mutation, we embed multiple mutations into a single compilation:

;; Original
(defn calculate [x] (+ x 1))

;; Schematized (with 3 mutations)
(defn calculate [x]
  (case heretic.schemata/*active-mutant*
    :mut-42-5-plus-minus (- x 1)
    :mut-42-5-1-to-0     (+ x 0)
    :mut-42-5-1-to-2     (+ x 2)
    (+ x 1)))  ;; original (default)

We reload once, then switch between mutations by binding a dynamic var:

(def ^:dynamic *active-mutant* nil)

(defmacro with-mutant [mutation-id & body]
  `(binding [*active-mutant* ~mutation-id]
     ~@body))

The workflow becomes:

(defn run-mutation-batch [file mutations test-fn]
  (let [schemata-info (schematize-file! file mutations)]
    (try
      (reload!)  ;; Once!
      (doseq [[id mutation] (:mutation-map schemata-info)]
        (with-mutant id
          (test-fn id mutation)))
      (finally
        (restore-file! schemata-info)
        (reload!)))))  ;; Once!

For a file with 50 mutations, this means 2 reloads instead of 100. The overhead of case dispatch at runtime is negligible compared to compilation cost.

Operator Presets

Finally, we offer presets that trade thoroughness for speed:

(def presets
  {:fast #{:swap-plus-minus :swap-minus-plus
           :swap-lt-gt :swap-gt-lt
           :swap-and-or :swap-or-and
           :swap-nil-some :swap-some-nil}

   :minimal minimal-preset-operators  ;; Subsumption-aware

   :standard #{;; :fast plus...
               :swap-first-last :swap-rest-next
               :swap-thread-first-last}

   :comprehensive (set (map :id all-operators))})

The :fast preset uses ~15 operators that research shows catch roughly 99% of bugs. The :minimal preset uses subsumption analysis to eliminate redundant mutations. Both run much faster than :comprehensive while maintaining detection power.

Putting It Together

A mutation testing run with Heretic looks like:

  1. Collect coverage (once, cached): Run tests under ClojureStorm instrumentation, build expression-level coverage map
  2. Generate mutations: Parse source files, find all applicable operator sites
  3. Filter: Remove equivalent mutations, apply subsumption to reduce set
  4. Group by file: Prepare for schemata optimization
  5. For each file:
    • Build schematized source with all mutations
    • Reload once
    • For each mutation: bind *active-mutant*, run targeted tests
    • Restore and reload
  6. Report: Mutation score, surviving mutations, test effectiveness

The result is mutation testing that runs in seconds for typical projects instead of hours.


This covers the core implementation. A future post will explore Phase 4: AI-powered semantic mutations and hybrid equivalent detection - using LLMs to generate the subtle, domain-aware mutations that traditional operators miss.

Previously: Part 1 - Heretic: Mutation Testing in Clojure

Permalink

Clojure Deref (Dec 30, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

Last chance for the annual Clojure surveys!

Time is running out to take the Clojure surveys! Please help spread the word, and take a moment to fill them out if you haven’t already.

Fill out the 2025 State of Clojure Survey if you use any version or dialect of Clojure in any capacity.

Fill out the 2025 State of ClojureScript Survey if you use ClojureScript or dialects like Squint, Cherry, nbb, and such.

Thank you for your help!

Upcoming Events

Libraries and Tools

Debut release

  • crabjure - A fast static analyzer for Clojure and ClojureScript, written in Rust.

  • browser-jack-in - A web browser extension that lets you inject a Scittle REPL server into any browser page.

  • clamav-clj - An idiomatic, modern Clojure wrapper for ClamAV.

  • heretic - Mutation testing for Clojure - fast, practical, and integrated

Updates

  • Many Clojure contrib libs were updated to move the Clojure dependency to 1.11.4, which is past the CVE fixed in 1.11.2.

  • partial-cps 0.1.50 - A lean and efficient continuation-passing-style transform; includes async-await support.

  • csvx 68fd22c - A zero-dependency tool that enables you to control how to tokenize, transform, and handle files with char(s)-separated values in Clojure and ClojureScript.

  • recife 0.22.0 - A Clojure model checker (using the TLA+/TLC engine)

  • polylith 0.3.32 - A tool used to develop Polylith based architectures in Clojure.

  • nrepl 1.5.2 - A Clojure network REPL that provides a server and client, along with some common APIs of use to IDEs and other tools that may need to evaluate Clojure code in remote environments.

  • manifold 0.5.0 - A compatibility layer for event-driven abstractions

Permalink

Tetris-playing AI the Polylith way - Part 1

Tetris AI

In this blog series, I will show how to work with the Polylith architecture and how organizing code into components helps create a good structure for high-level functional style programming.

You might feel that organizing into components is unnecessary, and yes, for a tiny codebase like this I would agree. It's still easy to reason about the code and keep everything in mind, but as the codebase grows, so does the value of this structure, in terms of better overview, clearer system boundaries, and increased flexibility in how these building blocks can be combined into various systems.

We will get familiar with this by implementing a self-playing Tetris program in Clojure and Python while reflecting on the differences between the two languages.

The goal

The task for this first post is to place a T piece on a Tetris board (represented by a two-dimensional array):

[[0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,T,0,0,0]
 [0,0,0,0,0,T,T,T,0,0]]

We will put the code in the piece and board components in a Polylith workspace (output from the info command):

Poly info output

This will not be a complete guide to Polylith, Clojure, or Python, but I will explain the most important parts and refer to relevant documentation when needed.

The resulting source code from this first blog post in the series can be found here:

Workspace

We begin by installing the poly command line tool for Clojure, which we will use when working with the Polylith codebase:

brew install polyfy/polylith/poly

The next step is to create a Polylith workspace:

poly create workspace name:tetris-polylith top-ns:tetrisanalyzer

We now have a standard Polylith workspace for Clojure in place:

▾ tetris-polylith
  ▸ bases
  ▸ components
  ▸ development
  ▸ projects
  deps.edn
  workspace.edn

Python

We will use uv as the package manager for Python (see setup for other alternatives). First we install uv:

curl -LsSf https://astral.sh/uv/install.sh | sh

Then we create the tetris-polylith-uv workspace directory by executing:

uv init tetris-polylith-uv
cd tetris-polylith-uv
uv add polylith-cli --dev
uv sync

which creates:

README.md
main.py
pyproject.toml
uv.lock

Finally we create the standard Polylith workspace structure:

uv run poly create workspace --name tetrisanalyzer --theme loose

which adds:

▾ tetris-polylith-uv
  ▸ bases
  ▸ components
  ▸ development
  ▸ projects
  workspace.toml

The workspace requires some additional manual steps, documented here.

The piece component

Now we are ready to create our first component for the Clojure codebase:

poly create component name:piece

This adds the piece component to the workspace structure:

  ▾ components
    ▾ piece
      ▾ src
        ▾ tetrisanalyzer
          ▾ piece
            interface.clj
            core.clj
      ▾ test
        ▾ tetrisanalyzer
          ▾ piece
            interface-test.clj

If you have used Polylith with Clojure before, you know that you also need to manually add piece to deps.edn, which is described here.

Python

Let's do the same for Python:

uv run poly create component --name piece

This adds the piece component to the structure:

  ▾ components
    ▾ tetrisanalyzer
      ▾ piece
        __init__.py
        core.py
  ▾ test
    ▾ components
      ▾ tetrisanalyzer
        ▾ piece
          __init__.py
          test_core.py

Piece shapes

In Tetris, there are 7 different pieces that can be rotated, summing up to 19 shapes:

Pieces

Here we will store them in a multi-dimensional array where each possible piece shape is made up of four [x,y] cells, with [0,0] representing the upper left corner.

For example, the Z piece in its initial position (rotation 0) consists of the cells [0,0] [1,0] [1,1] [2,1]:

Z piece

This is how it looks in Clojure (commas are treated as whitespace in Clojure and are often omitted):

(ns tetrisanalyzer.piece.piece)

(def pieces [nil

             ;; I (1)
             [[[0 0] [1 0] [2 0] [3 0]]
              [[0 0] [0 1] [0 2] [0 3]]]

             ;; Z (2)
             [[[0 0] [1 0] [1 1] [2 1]]
              [[1 0] [0 1] [1 1] [0 2]]]

             ;; S (3)
             [[[1 0] [2 0] [0 1] [1 1]]
              [[0 0] [0 1] [1 1] [1 2]]]

             ;; J (4)
             [[[0 0] [1 0] [2 0] [2 1]]
              [[0 0] [1 0] [0 1] [0 2]]
              [[0 0] [0 1] [1 1] [2 1]]
              [[1 0] [1 1] [0 2] [1 2]]]

             ;; L (5)
             [[[0 0] [1 0] [2 0] [0 1]]
              [[0 0] [0 1] [0 2] [1 2]]
              [[2 0] [0 1] [1 1] [2 1]]
              [[0 0] [1 0] [1 1] [1 2]]]

             ;; T (6)
             [[[0 0] [1 0] [2 0] [1 1]]
              [[0 0] [0 1] [1 1] [0 2]]
              [[1 0] [0 1] [1 1] [2 1]]
              [[1 0] [0 1] [1 1] [1 2]]]

             ;; O (7)
             [[[0 0] [1 0] [0 1] [1 1]]]])

Python

Here is how it looks in Python:

pieces = [None,

          # I (1)
          [[[0, 0], [1, 0], [2, 0], [3, 0]],
           [[0, 0], [0, 1], [0, 2], [0, 3]]],

          # Z (2)
          [[[0, 0], [1, 0], [1, 1], [2, 1]],
           [[1, 0], [0, 1], [1, 1], [0, 2]]],

          # S (3)
          [[[1, 0], [2, 0], [0, 1], [1, 1]],
           [[0, 0], [0, 1], [1, 1], [1, 2]]],

          # J (4)
          [[[0, 0], [1, 0], [2, 0], [2, 1]],
           [[0, 0], [1, 0], [0, 1], [0, 2]],
           [[0, 0], [0, 1], [1, 1], [2, 1]],
           [[1, 0], [1, 1], [0, 2], [1, 2]]],

          # L (5)
          [[[0, 0], [1, 0], [2, 0], [0, 1]],
           [[0, 0], [0, 1], [0, 2], [1, 2]],
           [[2, 0], [0, 1], [1, 1], [2, 1]],
           [[0, 0], [1, 0], [1, 1], [1, 2]]],

          # T (6)
          [[[0, 0], [1, 0], [2, 0], [1, 1]],
           [[0, 0], [0, 1], [1, 1], [0, 2]],
           [[1, 0], [0, 1], [1, 1], [2, 1]],
           [[1, 0], [0, 1], [1, 1], [1, 2]]],

          # O (7)
          [[[0, 0], [1, 0], [0, 1], [1, 1]]]]

In Clojure we had to specify the namespace at the top of the file, but in Python, the namespace is implicitly given based on the directory hierarchy.

Here we put the above code in shape.py, and it will therefore automatically belong to the tetrisanalyzer.piece.shape module:

▾ tetris-polylith-uv
  ▾ components
    ▾ tetrisanalyzer
      ▾ piece
        __init__.py
        shape.py

Interface

In Polylith, only what's in the component's interface is exposed to the rest of the codebase.

In Python, we can optionally control what gets exposed in wildcard imports (from module import *) by defining the __all__ variable in the __init__.py module. However, even without __all__, all public names (those not starting with _) are still accessible through explicit imports.

This is how the piece interface in __init__.py looks:

from tetrisanalyzer.piece.core import I, Z, S, J, L, T, O, piece

__all__ = ["I", "Z", "S", "J", "L", "T", "O", "piece"]

We could have put all the code directly in __init__.py, but it's a common pattern in Python to keep this module clean by delegating to implementation modules like core.py:

from tetrisanalyzer.piece import shape

I = 1
Z = 2
S = 3
J = 4
L = 5
T = 6
O = 7


def piece(p, rotation):
    return shape.pieces[p][rotation]

The piece component now has these files:

▾ tetris-polylith-uv
  ▾ components
    ▾ tetrisanalyzer
      ▾ piece
        __init__.py
        core.py
        shape.py

Clojure

In Clojure, the interface is often just a single namespace with the name interface:

  ▾ components
    ▾ piece
      ▾ src
        ▾ tetrisanalyzer
          ▾ piece
            interface.clj

Implemented like this:

(ns tetrisanalyzer.piece.interface
  (:require [tetrisanalyzer.piece.shape :as shape]))

(def I 1)
(def Z 2)
(def S 3)
(def J 4)
(def L 5)
(def T 6)
(def O 7)

(defn piece [p rotation]
  (get-in shape/pieces [p rotation]))
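
For example, asking for the T piece rotated twice returns the third of its four shapes:

(require '[tetrisanalyzer.piece.interface :as piece])

(piece/piece piece/T 2) ;; => [[1 0] [0 1] [1 1] [2 1]]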

A language comparison

Let's see how the two languages differ:

;; Clojure
(defn piece [p rotation]
  (get-in shape/pieces [p rotation]))

# Python
def piece(p, rotation):
    return shape.pieces[p][rotation]

An obvious difference here is that Clojure is a Lisp dialect, while Python uses a more traditional syntax. This means that if you want anything to happen in Clojure, you put it first in a list:

  • (defn piece ...)
    
    is a macro that expands to (def piece (fn ...)) which defines the function piece
  • (get-in shape/pieces [p rotation])
    
    is a call to the function clojure.core/get-in, where:
    • The first argument shape/pieces refers to the pieces vector in the shape namespace
    • The second argument creates the vector [p rotation] with two arguments:
      • p is a value between 1 and 7, representing one of the pieces: I, Z, S, J, L, T, and O
      • rotation is a value between 0 and 3, representing the number of 90-degree rotations

Another significant difference is that data is immutable in Clojure, while in Python it's mutable (like the pieces data structure).

However, a similarity is that both languages are dynamically typed but use concrete types in the compiled code:

;; Clojure
(class \Z) ;; Returns java.lang.Character
(class 2)  ;; Returns java.lang.Long
(class Z)  ;; Returns java.lang.Long (since Z is bound to 2)

# Python
type('Z')  # Returns <class 'str'> (characters are strings in Python)
type(2)    # Returns <class 'int'>
type(Z)    # Returns <class 'int'> (since Z is bound to 2)

The languages also share another feature: type information can be added optionally. In Clojure, this is done using type hints for Java interop and performance optimization. In Python, type hints (introduced in Python 3.5) can be added using the typing module, though they are not enforced at runtime and are primarily used for static type checking with tools like mypy.
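
For example, a Clojure type hint on a function parameter changes no semantics, but lets the compiler avoid reflection on the interop call:

(defn shout [^String s]
  (.toUpperCase s))

(shout "hello") ;; => "HELLO"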

The board component

Now let's continue by creating a board component:

poly create component name:board

Which adds the board component to the workspace:

▾ tetris-polylith
  ▸ bases
  ▾ components
    ▸ board
    ▸ piece
  ▸ development
  ▸ projects

And this is how we create a board component in Python:

uv run poly create component --name board

This adds the board component to the workspace:

  ▾ components
    ▾ tetrisanalyzer
      ▸ board
      ▸ piece
  ▾ test
    ▾ components
      ▾ tetrisanalyzer
        ▸ board
        ▸ piece

The Clojure code that places a piece on the board is implemented like this:

(ns tetrisanalyzer.board.core)

(defn empty-board [width height]
  (vec (repeat height (vec (repeat width 0)))))

(defn set-cell [board p x y [cx cy]]
  (assoc-in board [(+ y cy) (+ x cx)] p))

(defn set-piece [board p x y piece]
  (reduce (fn [board cell]
            (set-cell board p x y cell))
          board
          piece))

In Python (which uses two blank lines between functions by default):

def empty_board(width, height):
    return [[0] * width for _ in range(height)]


def set_cell(board, p, x, y, cell):
    cx, cy = cell
    board[y + cy][x + cx] = p


def set_piece(board, p, x, y, piece):
    for cell in piece:
        set_cell(board, p, x, y, cell)
    return board

Let's go through these functions.

empty-board

(defn empty-board [width height]
  (vec (repeat height (vec (repeat width 0)))))

To explain this function, we can break it down into smaller statements:

(defn empty-board [width height]  ;; [4 2]
  (let [row-list (repeat width 0) ;; (0 0 0 0)
        row (vec row-list)        ;; [0 0 0 0]
        rows (repeat height row)  ;; ([0 0 0 0] [0 0 0 0])
        board (vec rows)]         ;; [[0 0 0 0] [0 0 0 0]]
    board))

We convert the lists to vectors using the vec function, so that we can (later) access them by index. Note that it is the last value in the function (board) that is returned.

empty_board

def empty_board(width, height):
    return [[0] * width for _ in range(height)]

This can be rewritten as:

def empty_board(width, height): # width = 4, height = 2
    row = [0] * width           # row = [0, 0, 0, 0]
    rows = range(height)        # rows = lazy sequence with the length of 2
    board = [row for _ in rows] # board = [[0, 0, 0, 0], [0, 0, 0, 0]]
    return board

The [row for _ in rows] statement is a list comprehension, a way to create data structures in Python by looping.

We loop twice through range(height), which yields the values 0 and 1, but we're not interested in these values, so we use the _ placeholder.

set-cell

(defn set-cell [board p x y [cx cy]]
  (assoc-in board [(+ y cy) (+ x cx)] p))

Let's break it down into an alternative implementation and call it with:

board = [[0 0 0 0] [0 0 0 0]]
p = 6, x = 2, y = 0, cell = [0 1]
(defn set-cell [board p x y cell]
  (let [[cx cy] cell             ;; Destructures [0 1] into cx = 0, cy = 1
        xx (+ x cx)              ;; xx = 2 + 0 = 2
        yy (+ y cy)]             ;; yy = 0 + 1 = 1
    (assoc-in board [yy xx] p))) ;; [[0 0 0 0] [0 0 6 0]]

In the original version, destructuring of [cx cy] happens directly in the function's parameter list. The assoc-in function works like board[y][x] in Python in this example, with the difference that it doesn't mutate, but instead returns a new immutable board.

set_cell

def set_cell(board, p, x, y, cell):
    cx, cy = cell
    board[y + cy][x + cx] = p  # [[0,0,0,0] [0,0,6,0]]

As mentioned earlier, this code mutates the two-dimensional list in place. It doesn't return anything, which differs from the Clojure version that returns a new board with one cell changed.

set-piece

(defn set-piece [board p x y piece]
  (reduce (fn [board cell]
            (set-cell board p x y cell))
          board   ;; An empty board as initial value
          piece)) ;; cells: [[1 0] [0 1] [1 1] [2 1]]

If you are new to reduce, think of it as a function that processes each element in a collection, accumulating a result as it goes. The initial call to set-cell uses an empty board and the first cell [1 0] from piece; reduce then uses the board returned from set-cell and the second cell [0 1] from piece to call set-cell again, and continues like that until all cells in piece have been applied, finally returning the new board.
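
Unrolled by hand with the values from the test later in this post (p = 6 for T, x = 5, y = 13, and the rotation-2 cells), the reduction is equivalent to:

(-> (empty-board 10 15)
    (set-cell 6 5 13 [1 0])
    (set-cell 6 5 13 [0 1])
    (set-cell 6 5 13 [1 1])
    (set-cell 6 5 13 [2 1]))
;; same result as (set-piece (empty-board 10 15) 6 5 13 piece-t)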

set_piece

def set_piece(board, p, x, y, piece):
    for cell in piece:
        set_cell(board, p, x, y, cell)
    return board

The Python version is pretty straightforward, with a for loop that mutates the board. We choose to return the board to make the function more flexible, allowing it to be used in expressions and enabling method chaining, which is a common Python pattern, even though the board is already mutated in place.

Test

The test looks like this in Clojure:

(ns tetrisanalyzer.board.core-test
  (:require [clojure.test :refer :all]
            [tetrisanalyzer.piece.interface :as piece]
            [tetrisanalyzer.board.core :as board]))

(def empty-board [[0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]])

(deftest empty-board-test
  (is (= empty-board
         (board/empty-board 10 15))))

(deftest set-piece-test
  (let [T piece/T
        rotate-two-times 2
        piece-t (piece/piece T rotate-two-times)
        x 5
        y 13]
    (is (= [[0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 T 0 0 0]
            [0 0 0 0 0 T T T 0 0]]
           (board/set-piece empty-board T x y piece-t)))))

Let's execute the tests to check that everything works as expected:

poly test :dev
Poly test output

The tests passed!

Python

Now, let's add a Python test for the board:

from tetrisanalyzer import board, piece

empty_board = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
]


def test_empty_board():
    assert empty_board == board.empty_board(10, 15)


def test_set_piece():
    T = piece.T
    rotate_two_times = 2
    piece_t = piece.piece(T, rotate_two_times)
    x = 5
    y = 13
    expected = [
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, T, 0, 0, 0],
        [0, 0, 0, 0, 0, T, T, T, 0, 0],
    ]

    assert expected == board.set_piece(empty_board, T, x, y, piece_t)

Let's install pytest:

uv add pytest --dev

And run the tests:

uv run pytest
Pytest output

With that, we have finished the first post in this blog series!

If you're eager to see a self-playing Tetris program, I happen to have made a couple in other languages that you can watch here.

Tetris Analyzer Scala
Tetris Analyzer C++
Tetris Analyzer Tool

Happy Coding!

Permalink

Heretic: Mutation Testing in Clojure

Heretic

Your tests pass. Your coverage is high. You deploy.

Three days later, a bug surfaces in a function your tests definitely executed. The coverage report confirms it: that line is green. Your test ran the code. So how did a bug slip through?

Because coverage measures execution, not verification.

(defn apply-discount [price user]
  (if (:premium user)
    (* price 0.8)
    price))

(deftest apply-discount-test
  (is (number? (apply-discount 100 {:premium true})))
  (is (number? (apply-discount 100 {:premium false}))))

Coverage: 100%. Every branch executed. Tests: green.

But swap 0.8 for 1.2? Tests pass. Change * to /? Tests pass. Flip (:premium user) to (not (:premium user))? Tests pass.

The tests prove some number comes back. They say nothing about whether it's the right number.
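
A test that pins the actual values would kill all three of those mutants:

(deftest apply-discount-value-test
  (is (= 80.0 (apply-discount 100 {:premium true})))
  (is (= 100 (apply-discount 100 {:premium false}))))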

The Question Nobody's Asking

Mutation testing asks a harder question: if I introduced a bug, would any test notice?

The technique is simple. Take your code, introduce a small change (a "mutant"), and run your tests. If a test fails, the mutant is "killed" - your tests caught the bug. If all tests pass, the mutant "survived" - you've found a gap in your verification.

This isn't new. PIT does it for Java. Stryker does it for JavaScript. cargo-mutants does it for Rust.

Clojure hasn&apost had a practical option.

The only dedicated tool, jstepien/mutant, was archived this year as "wildly experimental." You can run PIT on Clojure bytecode, but bytecode mutations bear no relationship to mistakes Clojure developers actually make. You'll get mutations like "swap IADD for ISUB" when what you want is "swap -> for ->>" or "change :user-id to :userId."

Why Clojure Makes This Hard

Mutation testing has a performance problem everywhere. Run 500 mutations, execute your full test suite for each one, and you're measuring build times in hours. Most developers try it once, watch the clock, and never run it again.

But Clojure adds unique challenges:

Homoiconicity cuts both ways. Code-as-data makes programmatic transformation elegant, but distinguishing "meaningful mutation" from "syntactic noise" gets subtle when everything is just nested lists.

Macros muddy the waters. A mutation to macro input might not change the expanded code. A mutation inside a macro definition might break in ways that have nothing to do with your production logic.

The bugs we make are language-specific. Threading macro confusion, nil punning traps, destructuring gotchas from JSON interop, keyword naming collisions - these aren't + becoming -. They're mistakes that come from thinking in Clojure.

What If It Could Be Fast?

The insight that makes Heretic practical: most mutations only need 2-3 tests.

When you mutate a single expression, you don't need your entire test suite. You need only the tests that exercise that expression. Usually that's a handful of tests, not hundreds.

The challenge is knowing which ones. Not just which functions they call, but which subexpressions they touch. The + inside (if condition (+ a b) (* a b)) might be covered by different tests than the *.

Heretic builds this map using ClojureStorm, the instrumented compiler behind FlowStorm. Run your tests once under instrumentation. From then on, each mutation runs only the tests that actually touch that code.

Instead of running 200 tests per mutation, we run 2. Instead of hours, seconds.

What If It Understood Clojure?

Generic operators miss the bugs we actually make:

;; The mutation you want: threading macro confusion
(-> data (get :users) first)     ; Original
(->> data (get :users) first)    ; Mutant: wrong arg position, wrong result

;; The mutation you want: nil punning trap
(when (seq users) (map :name users))   ; Original (handles empty)
(when users (map :name users))         ; Mutant (breaks on empty list)

;; The mutation you want: destructuring gotcha
{:keys [user-id name]}           ; Original (kebab-case)
{:keys [userId name]}            ; Mutant (camelCase from JSON)

Heretic has 65+ mutation operators designed for Clojure idioms. Swap first for last. Change rest to next. Replace -> with some->. Mutate qualified keywords. The mutations you see will be the bugs you recognize.

What If It Could Think?

Here's a finding that should worry anyone relying on traditional mutation testing: research shows that nearly half of real-world faults have no strongly coupled traditional mutant. The bugs that escape to production aren't the ones that flip operators. They're the ones that invert business logic.

;; Traditional mutation: swap * for /
(* price 0.8)  -->  (/ price 0.8)     ; Absurd. Nobody writes this bug.

;; Semantic mutation: invert the discount
(* price 0.8)  -->  (* price 1.2)     ; Premium users pay MORE. Plausible bug.

A function called apply-discount should never increase the price. That's the invariant tests should verify. An AI can read function names, docstrings, and context to generate the mutations that test whether your tests understand the code's purpose.

This hybrid approach - fast deterministic mutations for the common cases, intelligent semantic mutations for the subtle ones - is where Heretic is heading. Meta's ACH system proved the pattern works at industrial scale.

Why "Heretic"?

Clojure discourages mutation. Values are immutable. State changes through controlled transitions. The design philosophy is that uncontrolled mutation leads to bugs.

So there's something a bit ironic about a tool that deliberately introduces mutations to find those bugs. We mutate your code to prove your tests would catch it if it happened accidentally - to verify that the discipline holds.


This is the first in a series on building Heretic. Upcoming posts will cover how ClojureStorm enables expression-level coverage mapping, how we use rewrite-clj and clj-reload for hot-swapping mutants, and the optimization techniques that make this practical for real codebases.

If your coverage is high but bugs still slip through, you're measuring the wrong thing.

Permalink

One csv parser to rule them all

One would think that parsing CSV files is pretty straightforward, until you get bitten by the many kinds of CSV files that exist in the wild. Many years ago, I wrote a small CSV reader with the following requirements in mind:

  • Should not depend on any other code other than Clojure
  • Should allow me to control how I tokenize and transform lines
  • Should allow me to have complete control over the delimiting character or characters, file encoding, the number of lines to read, and error handling

The result is csvx. I updated it to work across Clojure and ClojureScript, in both Node.js and browser environments. The entire code is less than 200 lines, including comments and blank lines. If you find yourself in need of a CSV reader with the above requirements, you are welcome to steal the code. Enjoy!

Permalink

Mixing Swift and Clojure in Your iOS App - Scittle

In my previous article, I showed how to embed an S7 Scheme interpreter in an iOS app. This time, I will show you how to embed a ClojureScript interpreter, or at least a dialect of it.

Clojure itself runs on the JVM, and there's not really a way to embed the JVM in an iOS app. Maybe once swift-java gets rolling.

There is GraalVM, which lets you compile Java code to native code, but it doesn't support compiling for iOS. Babashka, a native Clojure dialect interpreter, uses GraalVM.

Permalink

Building elegant interfaces with ClojureScript, React, and UIx

During Clojure South, João Lanjoni, Software Engineer at Nubank, addressed a central challenge of modern web development: how to combine the ergonomics of ClojureScript with the maturity of React to build scalable, high-performance interfaces. 

According to João, the solution is UIx, a tool that represents the new generation of bridges that further aligns the Clojure universe with the React ecosystem. In his session, he detailed the context, the limitations of previous approaches, and the value of UIx as a new, efficient entry point for React developers into ClojureScript.

From 2013 to today: React and ClojureScript in perspective

Since its launch in 2013, React has redefined the structure of frontend applications by introducing concepts like consistent reactivity. The ClojureScript community quickly responded with idiomatic interfaces like Reagent, which became the de facto standard due to its solidity, providing a minimalistic interface between ClojureScript and React and using a Hiccup-like syntax to define components. With the arrival of functional components and hooks, starting around 2019, new interfaces emerged to provide a direct way of using functional components (instead of the old class-based components).

However, as React continuously evolved towards modern patterns, including concurrent rendering, functional components, and new ways to manage component state, Reagent remained tied to class-based components, mainly for backward compatibility. This mismatch resulted in several limitations: performance problems in large codebases (due to Hiccup parsing at runtime), friction with functional components (users may have to declare every functional component usage even though these became the React standard), and hindered interoperability with modern React libraries such as Material UI, Mantine, and Ant Design, widening the gap between the two ecosystems.

What UIx changes in your code

UIx emerges to resolve this divergence. Acting as a thin interface between ClojureScript and modern React, its focus is technical and pragmatic: it offers a minimal abstraction layer, more predictable performance, and the direct use of functional components and hooks. Furthermore, it ensures native interoperability with the React ecosystem, allowing the lifecycle and state management to be handled directly by React itself. 

“If React already handles state and lifecycle management well, why not let it do that?”

João Lanjoni, Software Engineer at Nubank

Instead of creating a complete framework or adding unnecessary abstractions, UIx is a lightweight bridge, leveraging what modern React does best, resulting in a ClojureScript codebase with idiomatic syntax but identical behavior to modern React.

UIx component structure

In practical terms, UIx centralizes component construction around two elements: defui for declaring React components and $ for rendering elements in an explicit and lightweight way. Component bodies process props identically to React. Hooks such as useState are exposed using idiomatic ClojureScript conventions, like use-state, with UIx handling the translation to native React APIs. This ergonomics combines the best of ClojureScript syntax with the React architecture, which, according to João, eliminates the need to train React developers in the internal details of layers like Reagent or Re-frame, keeping the mental model aligned with the React mainstream.
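
In code, that looks roughly like this (a minimal sketch using the uix.core API; the component and names are illustrative):

(ns example.counter
  (:require [uix.core :refer [defui $ use-state]]))

(defui counter []
  (let [[n set-n!] (use-state 0)]
    ($ :button {:on-click #(set-n! inc)}
       (str "Clicked " n " times"))))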

Performance in figures

A highlight of the presentation was a chart, created by Roman Liutikov, the UIx maintainer, comparing the call stack depth when rendering a simple component in pure React, UIx, and Reagent. React exhibits the shortest path; UIx, adding only a thin layer, follows closely. In contrast, Reagent, because Hiccup is interpreted at runtime, shows a significantly deeper call stack. While the difference is minimal in small applications, the impact on predictability and performance becomes notable in products with hundreds or thousands of components.

Who is already using UIx in production

João presented three real-world examples, all highlighted on the project’s official page:

  • Metosin, one of the largest Clojure consultancies in Europe;
  • Pitch, an AI presentation platform with amazing slide decks;
  • Cognician, an e-learning platform for personal development.

The Pitch case is particularly impressive.

The team migrated 2,500 components from Reagent to UIx, maintained compatibility with Re-frame, and saw improvements in predictability and performance.

Metosin, meanwhile, employs Juho Teperi, one of the main contributors to Reagent. He also made an example project for a full-stack app using Clojure and ClojureScript, choosing UIx to build the web interface and using Material UI as the component library without any special wrapper.

When someone who helped build the previous tool begins to advocate for the new approach, it says a lot about where the technology stands, even more so with the launch of a new version of Reagent introducing functional components by default and a thinner hooks wrapper (also inspired by UIx).

Reducing the developer learning curve

UIx’s value extends to the hiring and development of engineers, which opens a path for more professionals to enter the ClojureScript ecosystem without the requirement of mastering the intricacies of Reagent, Re-frame, or the atom-based state model from day one. It represents a pragmatic approach to lowering barriers without sacrificing the benefits of a functional and declarative language.

“The greatest value of UIx is allowing React developers to write ClojureScript with a minimal learning curve.”

João Lanjoni, Software Engineer at Nubank

When UIx is the best choice

UIx is especially recommended for modern and complex front-end applications and teams already familiar with React. It is ideal for codebases that rely heavily on hooks and for projects requiring interoperability with the latest React libraries, with a view toward strong long-term growth potential. The library, intentionally simple, does not attempt to reinvent global state management, maintaining compatibility with mature React libraries like Zustand and Jotai, instead of adding unnecessary layers, or even using a custom hook that subscribes to a Clojure atom to manage a global state (similar to those cited libraries).

In essence, UIx does not seek to replace React but rather to act as a thin, modern, and pragmatic bridge. Its goal is to allow teams to build scalable front-ends with the power of React, while preserving the expressiveness and elegance of the Clojure philosophy and syntax. For complex and modern projects in ClojureScript, UIx may be the missing link.

The post Building elegant interfaces with ClojureScript, React, and UIx appeared first on Building Nubank.

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.