Statistics made simple

I have a weird relationship with statistics: on one hand, I try not to look at it too often. Maybe once or twice a year. It’s because analytics is not actionable: what difference does it make if a thousand people saw my article or ten thousand?

I mean, sure, you might try to guess people’s tastes and only write about what’s popular, but that will destroy your soul pretty quickly.

On the other hand, I feel nervous when something is not accounted for, recorded, or saved for future reference. I might not need it now, but what if ten years later I change my mind?

Seeing your readers also helps to know you are not writing into the void. So I really don’t need much, something very basic: the number of readers per day/per article, maybe, would be enough.

Final piece of the puzzle: I self-host my web projects, and I serve them with my own web server instead of delegating that task to Nginx.

Static sites are popular and for a good reason: they are fast, lightweight, and fulfil their function. I, on the other hand, might have an unfinished gestalt or two: I want to feel the full power of the computer when serving my web pages, to be able to do fun stuff that is beyond static pages. I need the freedom that comes with having a full programming language at my disposal. I want to program my own web server (in Clojure, sorry everybody else).

Existing options

All this led me on a quest for a statistics solution that would uniquely fit my needs. Google Analytics was out: bloated, not privacy-friendly, terrible UX, Google is evil, etc.

What is going on?

Some other JS solution might’ve been possible, but still questionable: SaaS? Paid? Will they be around in 10 years? Self-host? Are their cookies GDPR-compliant? How to count RSS feeds?

Nginx has access logs, so I tried server-side statistics that feed off those (namely, GoatCounter). Easy to set up, but then I needed to create domains for it, manage accounts, monitor the process, and it wasn’t even performant enough for my server/request volume!

My solution

So I ended up building my own. You are welcome to join, if your constraints are similar to mine. This is how it looks:

It’s pretty basic, but does a few things that were important to me.

Setup

Extremely easy to set up. And I mean it as a feature.

Just add our middleware to your Ring stack and get everything automatically: collecting and reporting.

(def app
  (-> routes
    ...
    (ring.middleware.params/wrap-params)
    (ring.middleware.cookies/wrap-cookies)
    ...
    (clj-simple-stats.core/wrap-stats))) ;; <-- just add this

It’s zero setup in the best sense: nothing to configure, nothing to monitor, minimal dependency. It starts to work immediately and doesn’t ask anything from you, ever.

See, you already have your web server, why not reuse all the setup you did for it anyway?

Request types

We distinguish between request types. In my case, I am only interested in live people, so I count them separately from RSS feed requests, favicon requests, redirects, wrong URLs, and bots. Bots are particularly active these days. Gotta get that AI training data from somewhere.

RSS feeds are live people in a sense, so extra work was done to count them properly. The same reader requesting feed.xml 100 times in a day will only count as one request.

Hosted RSS readers often report user count in User-Agent, like this:

Feedly/1.0 (+http://www.feedly.com/fetcher.html; 457 subscribers; like FeedFetcher-Google)

Mozilla/5.0 (compatible; BazQux/2.4; +https://bazqux.com/fetcher; 6 subscribers)

Feedbin feed-id:1373711 - 142 subscribers

My personal respect and thank you to everybody on this list. I see you.
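
Extracting that count is a small exercise wherever you do it. Here is a minimal sketch in Java, purely for illustration - the project itself is Clojure, and this is not its actual code:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Subscribers {
  // Matches the "457 subscribers" part of a feed reader's User-Agent.
  private static final Pattern SUBSCRIBERS = Pattern.compile("(\\d+)\\s+subscribers");

  static int subscribers(String userAgent) {
    Matcher m = SUBSCRIBERS.matcher(userAgent);
    return m.find() ? Integer.parseInt(m.group(1)) : 1; // no count reported: count the requester itself
  }

  public static void main(String[] args) {
    String ua = "Feedly/1.0 (+http://www.feedly.com/fetcher.html; 457 subscribers; like FeedFetcher-Google)";
    System.out.println(subscribers(ua)); // 457
  }
}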

Graphs

Visualization is important, and so is choosing the correct graph type. This is wrong:

A continuous line suggests interpolation. It reads as if, between 1 visit at 5 am and 11 visits at 6 am, there were points with 2, 3, 5, and 9 visits in between. Maybe even 5.5 visits! That is not the case.

This is how a semantically correct version of that graph should look:

Some attention was also paid to having reasonable labels on axes. You won’t see something like 117, 234, 10875. We always choose round numbers appropriate to the scale: 100, 200, 500, 1K etc.
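
If you are curious how such labels are usually computed, the classic trick is to snap the raw tick step to 1, 2, or 5 times a power of ten. A minimal Java sketch of that idea (again, illustration only, not the project's actual code):

public class NiceTicks {
  // Rounds a raw tick step to a "nice" value: 1, 2 or 5 times a power of ten.
  static double niceStep(double maxValue, int desiredTicks) {
    double raw = maxValue / desiredTicks;                          // e.g. 10875 / 5 = 2175
    double magnitude = Math.pow(10, Math.floor(Math.log10(raw)));
    double normalized = raw / magnitude;                           // 1.0 .. 9.99
    double snapped = normalized < 1.5 ? 1 : normalized < 3 ? 2 : normalized < 7 ? 5 : 10;
    return snapped * magnitude;
  }

  public static void main(String[] args) {
    System.out.println(niceStep(10_875, 5)); // 2000.0 -> labels 2K, 4K, 6K, 8K, 10K
  }
}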

It goes without saying that all graphs have the same vertical scale and synchronized horizontal scroll.

Insights

We don’t offer much (as I don’t need much), but you can narrow reports down by page, query, referrer, user agent, and any date slice.

Not implemented (yet)

It would be nice to have some insights into “What was this spike caused by?”

Some basic breakdown by country would be nice. I do have IP addresses (for what they are worth), but I need a way to package GeoIP into some reasonable size (under 1 MB, preferably; some loss of resolution is okay).

Finally, one thing I am really interested in is “Who wrote about me?” I do have referrers; the only question is how to separate signal from noise.

Performance. DuckDB is a good sport: it compresses data and runs columnar queries, so storing extra columns per row doesn’t affect query performance. Still, each dashboard hit is a query across the entire database, which at this moment (~3 years of data) sits around 600 MiB. I definitely need to look into building some pre-calculated aggregates.
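
A pre-calculated daily rollup could be as simple as something like this (a hedged sketch in Java over DuckDB's JDBC driver; the requests/ts/path names are made up for illustration and are not the actual schema):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DailyRollup {
  public static void main(String[] args) throws Exception {
    // Requires the DuckDB JDBC driver (org.duckdb:duckdb_jdbc) on the classpath.
    try (Connection conn = DriverManager.getConnection("jdbc:duckdb:stats.duckdb");
         Statement st = conn.createStatement()) {
      // Collapse raw requests into one row per day and page.
      st.execute("""
          CREATE OR REPLACE TABLE daily_hits AS
          SELECT date_trunc('day', ts) AS day, path, count(*) AS hits
          FROM requests
          GROUP BY day, path
          """);
    }
  }
}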

One day.

How to get

Head to github.com/tonsky/clj-simple-stats and follow the instructions.

Let me know what you think! Is it usable to you? What could be improved?

P.S. You can try the live example at tonsky.me/stats. The data was imported from Nginx access logs, which I turned on and off on a few occasions, so it’s a bit spotty. Still, it should give you a general idea.


Wiring Clojure Web Apps with Aero, Pedestal, and Integrant

I've settled on a pattern for structuring Clojure web applications that I keep coming back to: combining Aero for configuration, Integrant for component lifecycle, and Pedestal for HTTP. The result is a fully declarative system where all wiring is explicit in configuration rather than scattered through code.


Everything you might have missed in Java in 2025 - JVM Weekly vol. 158

And the truth is - like a year ago - I thought it would be a bit shorter... but quite a lot happened.

It’s been a while since the last JVM Weekly issue, so I had a lot of time… and may have overdone it a bit: the original draft had 43 pages (and that’s before all the memes 😥). Not to worry, it was cut down, but you should still expect a monster of an edition.


Yes, I feel fully recharged after the holiday, thank you for asking.

Sooo… without further ado - let’s start!

1. Language and Platform

We’ll start our “small” overview with JVM languages and all the changes happening within the JVM itself. I’ve selected only the most important new developments… although even those are plentiful. The order is random (more along the lines of “whatever came to mind first,” so there’s a certain chaotic prioritization to it). Enjoy!

Java turns 30!

Let’s start with the birthday wishes: Java this year turned 30 and has entered its fourth decade with style, under Oracle’s banner. Already back in March, JavaOne 2025 adopted an anniversary theme, adding retrospective keynotes and a special “Java at 30” track. The Inside Java newsletter also covered the behind-the-scenes details and celebration plans in its May issue, encouraging the community to join in. On top of that, May 22nd saw a six-hour livestream featuring James Gosling, Brian Goetz, Mark Reinhold, and Georges Saab, hosted by Ana-Maria Mihalceanu, Billy Korando, and Nicolai Parlog.

Partners and vendors joined the toast: JetBrains launched the #Java30 campaign with a What’s your inner Duke? quiz and limited-edition T-shirts up for grabs.


JetBrains also published a “Java Good Reads” list, released a plugin that changes the IDE splash screen, and even dropped a “Java 30” TikTok duet track. And before anyone tries to be cool asking, “What even is TikTok?”, just go play that song.

But that’s not all! Here are a few more highlights:

Geecon also had one on their badges :)

The upshot of all these distributed but coordinated initiatives is more than just a nostalgic tribute: joint events, open-access content, and new IDE plugins are strengthening the network of relationships that has kept the Java ecosystem vibrant for 30 years - and is preparing it for another decade of innovation.

And hey, in just ten years we’ll be celebrating Java’s 40th. Because, let’s face it, Java is forever.


Virtual Threads finally without Pinning

For nearly two years since their debut in JDK 21 (yes, JDK 21 was published 2 years ago), the results of Project Loom had one major “but” that effectively cooled enthusiasm in large codebases: pinning virtual threads on synchronized monitors. In theory, virtual threads were meant to be lightweight and massively scalable at the same time, but in practice every synchronized block could pin them to a platform thread - destroying scalability exactly where Java has the most legacy code, in the places where failures tend to surface at the least expected moment. Yes, the documentation warned about it, conference talks advised migrating to ReentrantLock, but in reality teams looked at millions of lines of existing Java and postponed Loom “until later.” I personally had that discussion more than once.

JDK 24 closes this chapter. Thanks to changes in the JVM monitor implementation (JEP 491), virtual threads are no longer automatically pinned when entering a synchronized block. As long as they do not perform operations that genuinely require binding to an OS thread, they can unmount and remount inside synchronized blocks just like anywhere else.

The importance of this change is hard to overstate, because - contrary to what might seem to a generation raised on java.util.concurrent, those summer children - synchronized is not an exotic detail but a core Java idiom that has been present from the very beginning in standard libraries, frameworks, application servers, and business code that has been running in production for decades… and will likely remain there for decades to come (Lindy effect, anyone?). Previously, Loom required teams to undertake conscious, often costly refactoring to avoid pinning. Now virtual threads become compatible with idiomatic Java of the past, so migration stops being an architectural project and becomes a purely runtime decision.

It is the moment when the narrative around Loom can finally change. Until JDK 24, it was irresponsible to approach the topic in any way other than “virtual threads are great, but…”, and as we all know, the “but” negates everything that comes before it.
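
To make it concrete, here is a minimal, illustrative sketch of the kind of code that used to be the problem and on JDK 24+ simply isn’t anymore:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PinningDemo {
  private static final Object LOCK = new Object();
  private static long counter = 0;

  public static void main(String[] args) {
    // 100k virtual threads contending on a classic synchronized block.
    // Before JEP 491 the blocked threads could pin their carrier threads;
    // on JDK 24+ they just unmount while waiting, like anywhere else.
    try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
      for (int i = 0; i < 100_000; i++) {
        pool.submit(() -> {
          synchronized (LOCK) {
            counter++;
          }
        });
      }
    } // close() waits for all submitted tasks to finish

    synchronized (LOCK) {
      System.out.println("counter = " + counter);
    }
  }
}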


Stream Gatherers finally open up the Stream API

From the very beginning, the Stream API contained a certain contradiction. On the one hand, it offered an elegant, functional way to describe data processing - it is probably the most widely used “new” Java feature. However, it was surprisingly closed to extension, which clashes strongly with what one would expect from this kind of functionality… especially people coming from a more functional background were rolling their eyes. For years, JVM developers kept coming back to the same problem: as long as map, filter, and reduce are enough, everything is fine. But as soon as you need windowing, stateful scanning, rolling aggregates, or more complex transformations, all that elegance disappears into the depths of collect() and custom, hard-to-read Collectors.

JEP 485 and Stream Gatherers are a direct response to this long-standing tension. Instead of stuffing all the logic into a terminal collector, you can now define your own intermediate operations that behave like native elements of the pipeline. windowFixed, scan, fold, and other stream-processing patterns stop being hacks and, crucially, they preserve lazy evaluation and compose naturally with the rest of the stream. In practice, this means less imperative code, fewer external libraries, and much better readability in places where streams previously lost out to classic loops (which are a surprisingly flexible pattern, in the end).
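
A taste of what that looks like in practice - fixed windows and a running sum as ordinary intermediate operations (a minimal sketch on JDK 24+):

import java.util.List;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class GatherersDemo {
  public static void main(String[] args) {
    // Fixed-size windows, previously only possible via collect() gymnastics.
    List<List<Integer>> windows = Stream.of(1, 2, 3, 4, 5, 6, 7)
        .gather(Gatherers.windowFixed(3))
        .toList();                               // [[1, 2, 3], [4, 5, 6], [7]]

    // A running sum: stateful, lazy, and still an intermediate operation.
    List<Integer> runningSum = Stream.of(1, 2, 3, 4)
        .gather(Gatherers.scan(() -> 0, Integer::sum))
        .toList();                               // [1, 3, 6, 10]

    System.out.println(windows);
    System.out.println(runningSum);
  }
}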

Also, it’s good to see Java is increasingly choosing the path of “opening up existing abstractions” rather than creating entirely new APIs from scratch.

Class-File API and the end of the “ASM Dependency Era”

If Stream Gatherers change the day-to-day style of writing code - and I suspect most of you will use them at some point, even indirectly through a third-party library - then Class-File API (JEP 484), while remaining largely invisible, changes the very foundations on which the JVM ecosystem has stood for years. For two decades, bytecode manipulation was in practice synonymous with a single library: ASM. Every framework that generated or modified classes - Spring, Hibernate, Mockito, Byte Buddy, testing tools, profilers - had to rely on an external project that was constantly racing ahead of Java’s evolution. Even Java itself used it internally.

This was a quiet but very real structural problem. Every new JDK release meant a race: does ASM already support the new classfile format? Have the frameworks caught up? Will a preview feature break builds? The ecosystem lived in a state of permanent synchronization with a library that was formally “external” but, in practice, absolutely critical.

JEP 484 puts an end to this arrangement. Manipulating .class files becomes part of the JDK, with an official API and compatibility guarantees for future versions of the format. Responsibility for the evolution of the classfile returns to where it always should have been: the platform itself. Frameworks no longer have to guess how to read a new version of bytecode and can instead rely on a contract provided directly by Java.

Even though most of you will never touch classfiles directly, this is a change with enormous significance for the long-term maintainability of the platform. Fewer dependencies, fewer risky upgrades, less “waiting for someone to release a new version of ASM.” But it’s also something more: the internalization of critical infrastructure as certain parts of the ecosystem are too fundamental to live outside the standard library.

Class-File API won’t suddenly make the average developer start writing bytecode-level code (thank God). But for framework authors, tool builders, and platform engineers, it’s one of the most important JEPs in years - one that reduces friction across the entire ecosystem.
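
For those who want to see what “no ASM” looks like in code, a hedged minimal sketch on JDK 24+: parse a compiled class with the JDK’s own java.lang.classfile API and list its methods.

import java.lang.classfile.ClassFile;
import java.lang.classfile.ClassModel;
import java.lang.classfile.MethodModel;
import java.nio.file.Files;
import java.nio.file.Path;

public class ListMethods {
  public static void main(String[] args) throws Exception {
    // Read and parse a .class file with the built-in Class-File API (JEP 484) - no ASM needed.
    byte[] bytes = Files.readAllBytes(Path.of(args[0]));  // e.g. "Foo.class"
    ClassModel cm = ClassFile.of().parse(bytes);
    for (MethodModel m : cm.methods()) {
      System.out.println(cm.thisClass().asInternalName() + "." + m.methodName().stringValue());
    }
  }
}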

Java enters the Post-Quantum Era before it becomes necessary

Post-quantum cryptography is a topic that’s easy to dismiss with a shrug: “quantum computers aren’t breaking RSA yet, so we still have time.” JEP 496 and JEP 497 introduce the first native implementations of the ML-KEM and ML-DSA algorithms into Java - used for key exchange and digital signatures, respectively - aligned with the newly finalized NIST FIPS 203 and 204 standards.

Behind this lies the very pragmatic “harvest now, decrypt later” concept. Data encrypted today with classical asymmetric algorithms can be intercepted and decrypted in the future, once quantum hardware matures. For sectors such as public administration, banking, defense, healthcare, or data archives, the implication is straightforward: if information needs to remain confidential for 20–30 years, migration has to start now - not on the day the first “breakthrough” quantum computer appears.
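
From the developer’s side this looks refreshingly boring: the standard KEM API from JDK 21, just with a new algorithm name. A hedged sketch, assuming the “ML-KEM” algorithm name from JEP 496:

import javax.crypto.KEM;
import javax.crypto.SecretKey;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class MlKemDemo {
  public static void main(String[] args) throws Exception {
    KeyPair kp = KeyPairGenerator.getInstance("ML-KEM").generateKeyPair();

    // Sender: derive a shared secret plus a ciphertext to send to the receiver.
    KEM kem = KEM.getInstance("ML-KEM");
    KEM.Encapsulated enc = kem.newEncapsulator(kp.getPublic()).encapsulate();
    SecretKey senderSecret = enc.key();

    // Receiver: recover the same secret from the ciphertext.
    SecretKey receiverSecret = kem.newDecapsulator(kp.getPrivate())
                                  .decapsulate(enc.encapsulation());

    System.out.println(Arrays.equals(senderSecret.getEncoded(), receiverSecret.getEncoded())); // true
  }
}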

Compact Object Headers Graduate from Experimental Status

This is one of those JVM changes that doesn’t alter a single line of code, yet can completely change the economics of an application. JEP 519 moves Compact Object Headers from the experimental phase into full production, closing out years of work on slimming down one of the most fundamental structures in Java: the object header.


For decades, the object header in HotSpot was relatively “fat” - 96 or even 128 bits, depending on configuration and architecture. It carried everything: synchronization data, GC information, the class pointer, the hash code. JEP 519 reduces this overhead to 64 bits without sacrificing functionality. The result? In applications with a large number of small objects - which in practice means almost every modern JVM service - heap usage drops by an average of ~22%. And that’s no small thing. In the world of microservices, containers, and cloud-native deployments, memory is one of the primary operational costs. A smaller heap also means smaller pods, smaller VMs, faster cold starts, shorter GC pauses, and very real reductions in infrastructure bills.

Compact Headers have been tested, measured, and benchmarked extensively on real workloads. Promoting them to a stable feature means that OpenJDK considers them safe not only from a performance perspective, but also expects no surprises in synchronization, hash codes, or interactions with the garbage collector.

Pinky swear.

Scoped Values — the Quiet Successor to ThreadLocal

Scoped Values are one of those Project Loom features that don’t make much noise (at least not as much as Stream Gatherers), yet they significantly change how safe, concurrent code is written in Java.

xkcd: Lisp Cycles
Inspired by our God-and-Saviour, Lisp.

The problem Scoped Values address is old and well known: ThreadLocal. For years, it was the only reasonable way to propagate context - request IDs, security context, transactions, locale - without threading it through every method parameter. But ThreadLocal has two fundamental flaws. First, it lives as long as the thread does, which in a world of thread pools led to leaks, subtle bugs, and “bleeding” context. Second, it doesn’t fit virtual threads, where a thread stops being a long-lived carrier of context and becomes a lightweight, ephemeral abstraction.

Scoped Values invert this model. Instead of binding data to a thread, they bind it to an execution scope. A value is visible only within a clearly defined block, automatically propagates down the call stack, and—crucially—disappears exactly when the scope ends. No cleanup, no leak risk, no magical hooks. Another advantage of Scoped Values is that they are easier for the JVM to optimize. Their immutable nature and bounded lifetime mean the runtime knows precisely when a value exists and when it is no longer needed.

In short: if ThreadLocal was a hack that allowed Java to survive the era of application servers, Scoped Values are the mechanism that lets it enter the Loom era without repeating the same mistakes.
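
In code, the whole model fits on a napkin - a minimal sketch against the API finalized in JDK 25:

public class ScopedValuesDemo {
  // The binding, not the value: it only carries data inside an explicit scope.
  private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

  public static void main(String[] args) {
    ScopedValue.where(REQUEST_ID, "req-42").run(ScopedValuesDemo::handle);
    System.out.println(REQUEST_ID.isBound()); // false - the value vanished with the scope, no cleanup
  }

  static void handle() {
    // Visible anywhere down the call stack, without being passed as a parameter.
    System.out.println("handling " + REQUEST_ID.get());
  }
}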

But remember - there is nothing wrong with a good hack. Even Gods-and-Saviours do that from time to time.

xkcd: Lisp

JFR enters everyday observability

Java Flight Recorder has long been one of the most underrated components of the JVM. Powerful and low-overhead, yet often treated like a “black box” that you only look into after something goes wrong. JDK 25 clearly changes this role. The expansion of JFR in 2025 shows that Oracle increasingly sees it as a continuous source of data about application behavior, not just a post-mortem tool.

A key step here is JEP 509, which introduces CPU-time profiling on Linux. Until now, most profiles were based on wall-clock sampling, which in shared environments—containers, cgroups, noisy neighbors—often led to misleading conclusions. CPU-time profiling answers a much more practical question: where does the processor actually spend its time? This is especially important in the cloud, where every millisecond of CPU has a real cost at scale, and the difference between “waiting” and “computing” is crucial for optimization.

The second element, JEP 520, moves JFR toward analyses that previously required external tools or costly instrumentation. Method Timing & Tracing with bytecode instrumentation makes it possible to measure method execution time with much greater precision, without manually attaching profilers or agents (the JVM ones—for AI agents, we’ll get to that later). For the first time, JFR starts looking inside application code, not just observing it from the runtime level.

All of this is tied together by cooperative sampling - a mechanism that improves measurement accuracy while maintaining very low overhead. Instead of aggressively interrupting threads, the JVM cooperates with the executing code, resulting in more stable and representative data. In practice, this means JFR can run continuously in production without fear of a noticeable performance impact. Java Flight Recorder therefore stops being a Plan B for when the world is already on fire, and starts becoming the default way of understanding a JVM application in motion.

From backstage: this is the moment Substack informed me that I’m approaching the email limit. What can I say - are you f*cking kiddin’ me? I’m just getting up to speed.

Project Valhalla - After a decade, Java finally touches the metal

Project Valhalla is probably the most patiently developed project in the history of OpenJDK. When Brian Goetz announced it back in 2014, the Java world looked very different: there was no Loom, GraalVM was still an academic experiment, and “cloud-native” hadn’t yet become an overused buzzword. For many years, Valhalla existed more as an idea than a roadmap—names changed (inline types, value types, primitive classes), and assumptions were alternately tightened and relaxed.

In 2025, that changed. JEP 401: Value Classes and Objects reached Candidate status, and Oracle released the first early-access builds. This is the first moment when Valhalla stopped being just a Brian Goetz slide deck and became a concrete proposal for changes to the language and the VM.

What’s most striking is how little changes on the surface. One new keyword - value - placed before class or record… and that’s basically it. But beneath this modest syntax lies a fundamental shift in Java’s object model. Value classes give up object identity. Two objects with the same field values are indistinguishable - == ceases to be a reference comparison and becomes a value comparison. It’s a radical but consistent decision: if something has no identity, the JVM no longer has to pretend that every instance “lives its own life” on the heap.

Brian Goetz has summed up Valhalla in a single sentence for years: “code like a class, work like an int.” The core promise of Valhalla is flattening—the ability to place value object data inline in memory, without an extra layer of indirection. An ArrayList<Point> where Point is a value class is no longer an array of pointers to objects scattered across the heap, but a compact array of coordinates. Fewer dereferences, fewer cache misses, better data locality. This is exactly the kind of optimization that’s obvious in C or C++, but was historically out of reach in Java.
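
On the syntax level it really is that small. A heavily hedged sketch of what JEP 401 looks like in the Valhalla early-access builds (not in any shipped JDK, requires the EA build with preview features enabled, and details may still change):

// One keyword: the record gives up identity, the JVM gains the right to flatten it.
value record Point(double x, double y) {}

class ValueDemo {
  public static void main(String[] args) {
    Point a = new Point(1.0, 2.0);
    Point b = new Point(1.0, 2.0);
    System.out.println(a == b); // true: no identity, == compares the field values

    // An array of Point can be laid out as a flat block of doubles -
    // no per-element heap object, no extra dereference.
    Point[] path = { a, b, new Point(3.0, 4.0) };
  }
}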

Why does this matter so much right now? Goetz often points out that the JVM memory model dates back to the early 1990s. In that world, the cost of memory access and the cost of arithmetic were comparable. In 2025, the difference is on the order of 200–1000×. Java, with its pointer-centric object model, pays a real price for this—especially in numerical code, game engines, stream processing, ML, and HPC. Valhalla is an attempt to regain control over memory layout without abandoning the safety and abstractions that made Java popular in the first place.

Importantly, Valhalla’s design in 2025 is far more pragmatic than in its early iterations. Earlier proposals assumed almost “sterile” isolation of value types. JEP 401 - still not yet targeted to a specific JDK release - allows abstract value classes and softens the boundaries between the reference world and the value world. This signals that the project aims not for academic purity, but for real-world adoptability, including in existing codebases.

At the same time, after years of “not yet,” we finally have a concrete JEP, concrete early-access builds, and a very clear trajectory. Java is beginning to repay its debt to the hardware it actually runs on - and has a chance to change the deepest foundation of all: the relationship between objects and memory.

Congratulations, Valhalla team! However, I have one thing on my mind…

Project Panama and Project Babylon - Java steps beyond its own walled garden

If Loom changed the way Java thinks about concurrency, and Valhalla about memory, then Panama and Babylon go one step further and evolve how the JVM cooperates with the world beyond itself. For decades, Java has been a safe, portable, and… hermetic platform. Integration with native code existed, but JNI was a tool from the “use only as a last resort” category. GPUs, SIMD, specialized hardware? That usually meant C++ wrappers with a thin layer of Java on top. Panama and Babylon are an attempt to break away from this model - without abandoning the JVM.

Project Panama, whose culmination was the introduction of the stable Foreign Function & Memory API, tackles the most down-to-earth and painful problem: how to talk to native code safely and efficiently.
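
How down-to-earth? Calling the C library’s strlen is now a dozen lines of plain Java, no JNI glue in sight - a minimal sketch with the FFM API, stable since JDK 22:

import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class StrlenDemo {
  public static void main(String[] args) throws Throwable {
    Linker linker = Linker.nativeLinker();
    MethodHandle strlen = linker.downcallHandle(
        linker.defaultLookup().find("strlen").orElseThrow(),
        FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

    try (Arena arena = Arena.ofConfined()) {
      // Off-heap, NUL-terminated copy of the string, freed when the arena closes.
      MemorySegment cString = arena.allocateFrom("Panama");
      System.out.println((long) strlen.invokeExact(cString)); // 6
    }
  }
}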

Because that’s the source of Python’s popularity nowadays.

But that’s only half the story. Panama solves how to get outside the JVM, but not why Java kept losing in areas such as HPC, GPUs, or intensive numerical computation. And that’s where Project Babylon comes in.

Babylon is one of the most ambitious projects in OpenJDK - and at the same time one of the least understood. Its goal is not to “speed up Java,” but to make Java a language capable of mapping itself onto modern hardware. In practice, this means enabling Java code - when properly described - to be compiled to different backends: CPUs with SIMD, GPUs, and AI accelerators. Not through magical runtimes, but through an explicit, analyzable computation model.

The key here is its connection to Valhalla and Panama. Valhalla’s value classes make it possible to describe data structures, while Panama provides safe access to memory and native ABIs. Babylon ties this all together by delivering a model in which the JVM can understand the semantics of computations, not just execute them on a CPU. Projects like TornadoVM (which we’ll get to shortly) already show today that Java can be a viable language for GPU programming - Babylon is meant to ensure this stops being a curiosity and becomes a feature available to the “common programmer.”

And that’s important - as Java is no longer just a language for backends and microservices. It is starting to become an infrastructural language that can handle everything: from classic services, through AI inference, to heterogeneous computing. And importantly - without copying the C++ or CUDA model. While preserving type safety, garbage collection, and the tooling that have been its advantages for years.

For now, Babylon is still a research project, but by the end of 2025 it remains one of the missing pieces of a puzzle that has been taking shape in OpenJDK for over a decade. If you want to learn more - this is the best material you can find nowadays.

Project Leyden: AOT as an Answer to GraalVM Native Image

For years, the discussion around JVM application startup time has been dominated by a single solution: GraalVM Native Image. Radically effective and impressive in benchmarks, but paid for with real costs - long compilation times, a “closed world,” and constant battles with reflection and libraries that assume JVM’s dynamic nature. Project Leyden was created as a conscious alternative to that philosophy. Not as yet another AOT mode, but as an attempt to achieve similar benefits without giving up what has defined Java for decades.

The key assumption behind Leyden is that most server applications are not chaotic. On the contrary - they are surprisingly predictable, boring even. The same classes are loaded on every startup. The same code paths warm up at the beginning. The same proxies, the same reflection metadata, the same execution profiles. JVM has long been able to see these patterns, but until now it discarded this knowledge on every restart - Leyden changes exactly that.

Leyden’s philosophy is speculative runtime optimization. The JVM observes the application during a so-called training run and assumes that future runs will be similar. Execution profiles are not random; they are a stable artifact of the application: classes that were needed will be needed again, and methods that were hot are worth preparing in advance. Instead of starting “from scratch” on every launch, the JVM begins to learn its own behavior.

In 2025, Leyden began to materialize in the mainline JDK. The foundation consists of three JEPs that together define a new JVM startup model. JEP 483, introduced in JDK 24, enables Ahead-of-Time Class Loading & Linking - the most expensive part of JVM startup can be done in advance and stored in a cache. JEP 514 simplifies the launch ergonomics, reducing the entire workflow to a single, sensible command instead of a set of experimental flags. JEP 515 allows method execution profiles to be preserved across runs, opening the door to much more aggressive optimizations without sacrificing safety.

In August 2025, Leyden Early Access 2 appeared, showing where the project is actually heading. For the first time, we saw the outline of AOT code compilation - methods can be compiled to native code during the training run and stored in a cache, so the work doesn’t have to be repeated on the next startup. On top of that comes AOT for dynamic proxies and reflection data - two areas that historically caused the biggest problems for both the classic JVM and Native Image. Crucially, all of this still works within HotSpot, without closing the world and without breaking the contract of dynamic behavior.

Leyden is not trying to beat GraalVM Native Image on its own turf. Instead, it redefines the reference point. If Native Image is an extreme - minimal startup at the cost of flexibility - then Leyden pushes the classic JVM as far toward “fast startup” as possible without abandoning dynamism. For the vast majority of enterprise applications, this is exactly the compromise that was missing. Project Leyden is, in practice, OpenJDK’s answer to the question: can the JVM be fast at startup without ceasing to be a JVM? In 2025, for the first time, the answer began to sound like: yes.

And since we’ve already invoked GraalVM…

GraalVM in 2025 - From “Java’s Savior” to a specialized runtime (with AI Inside)

Well… 2025 was one of the most turbulent years in GraalVM’s history. Not technologically - because on that front GraalVM made enormous progress - but strategically. Oracle very clearly shifted its priorities, effectively closing a certain era in thinking about GraalVM as “the future of Java.”

The most important signal came early: GraalVM for JDK 24 is the last release licensed as part of Java SE. The experimental Graal JIT, which in Oracle JDK 23 and 24 could replace C2 as an alternative compiler, has been withdrawn. Native Image is no longer part of the Java SE offering, and Oracle is explicitly steering customers toward Project Leyden as the default path for JVM startup and runtime optimization. For many observers, this was a “wake-up moment”: GraalVM is not disappearing, but it is no longer the universal answer to all of Java’s problems that it seemed to be over the past decade.

During Christmas, I watched Star Wars: A New Hope with my daughter. Can’t wait to expose her to the best blockbuster ever - Indiana Jones and the Last Crusade.

At the same time - and here lies the most interesting paradox - GraalVM has never been technologically stronger than it was in 2025. Instead of focusing on being a “better HotSpot,” the project fully embraced its unique identity: extremely aggressive AOT, a compiler as a research platform, and a proving ground for techniques that the classic JVM cannot adopt as quickly.

The best example of this shift is GraalNN - a neural-network-based static profiler. It is one of the most underrated yet most radical ideas in the compiler world in recent years, and it fits perfectly into the zeitgeist of 2025. Instead of relying solely on runtime profiles or simple heuristics, GraalVM uses a trained neural network that predicts branch probabilities based on the structure of the control flow graph - the compiler starts guessing how the code will behave before it ever runs. This is an evolution of the earlier GraalSP based on XGBoost, which has been enabled by default since version 23.0 - but GraalNN goes further. On top of that comes SkipFlow, an experimental optimization that tracks primitive values across the entire program and evaluates conditions already at the static analysis stage. In the world of Native Image, where every kilobyte and every method matters, this is a very concrete advantage.

In parallel, a new and interesting thread emerged: GraalVM and AI at the application level. Quarkus began exploring streamable HTTP and MCP in the context of LLMs, based on its fork of the project - Mandrel - while Oracle is positioning GraalPy as a runtime for building AI agents and workflows (e.g., LangGraph) with very low overhead and good integration with JVM-based backends.


At the same time, support for macOS x64 was dropped without sentimentality - GraalVM 25.0.1 is the last release for Intel; the future is exclusively Apple Silicon.

And this brings us to the key conclusion the community reached in 2025. GraalVM is no longer seen as a competitor to HotSpot and Leyden (rather, Leyden stopped being “GraalVM for the masses,” gradually pushing its older sibling out of its natural habitat). Instead, it has found its place as a specialized runtime and compiler: ideal where extreme trade-offs matter - serverless, CLI tools, edge computing, AI inference, native integration, minimal binaries. As Leyden takes over the mainstream JVM, GraalVM occupies the extreme end of the spectrum.

This was a difficult but healthy narrative correction. For years, GraalVM was the answer to criticisms from the Rust or Go communities, proudly pointing out that Java also had its own native format - though one that never broke into production to the extent the surrounding community had hoped. Now that it no longer has to “save Java,” it can focus on what it does best: experimenting faster than the mainline, bringing ML into compilation, and pushing the boundaries of AOT. And Java - paradoxically - comes out better for it, because instead of one tool for everything, it gains a coherent spectrum of solutions.

I think it’s fair to say that in 2025 it became clear: GraalVM is not the future of the entire JVM. But without GraalVM, that future would look significantly poorer.

Jakarta EE - The Year that breathed life into a “Dead” Platform

If anyone had written Jakarta EE off a year ago, 2025 must have felt like a cold shower. This was the year the platform stopped merely catching up and actually started building its future.

Jakarta EE 11 - after 34 months since the previous release - finally saw the light of day. First came the Core Profile (December 2024), then the Web Profile (March 2025), and the full platform on June 26. The delay, however, made sense: the Working Group spent that time thoroughly modernizing the TCK - migrating from Ant/TestHarness to Maven/JUnit 5/Arquillian. Boring? Yes. But it was precisely this “plumbing” that had blocked progress for years and scared off new vendors.

Version 11 itself brings two stars: Jakarta Data 1.0 - an answer to Spring Data without Spring, with repositories defined via interfaces and implementations generated automatically—and Jakarta Concurrency 3.1 with full support for Virtual Threads. On top of that come Java Records in Persistence and Validation, the removal of the Security Manager, and the deprecation of Managed Beans. The platform is finally speaking the language of modern Java.
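
What does “an answer to Spring Data without Spring” look like? Roughly this - a hedged sketch against Jakarta Data 1.0, where Book is an assumed JPA entity and the provider (Hibernate, EclipseLink, or another) generates the implementation:

import jakarta.data.repository.CrudRepository;
import jakarta.data.repository.Repository;
import java.util.List;

// The interface is the whole "implementation" - the Jakarta Data provider does the rest.
@Repository
public interface Books extends CrudRepository<Book, Long> {
  List<Book> findByAuthor(String author); // derived query, Spring Data style
}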

And that’s not all. In March, Jakarta NoSQL 1.0 was approved - a specification standardizing access to document, graph, and key-value databases with an API modeled after JPA. It didn’t make it into EE 11, but it paves the way for future profiles.

And in the fall? Jakarta Agentic AI - a project under the Eclipse Foundation umbrella (initiated by Payara and Reza Rahman) that aims to define a vendor-neutral API for AI agents: lifecycle, integration with CDI and REST, and standard guardrails. The first release is planned for Q1 2026. At the same time, MicroProfile AI is taking its own, more pragmatic path - trying to integrate LangChain4j into the ecosystem without waiting for full specifications.

Greetings to the rest of the technical community from a more peaceful world.


However, for those who crave emotions and enjoy the behind-the-scenes drama - over the summer, a governance conflict played out on the mailing lists. For years there have been discussions about merging MicroProfile into Jakarta EE (a logical step, since MicroProfile already depends on the Core Profile), but such a merge requires a so-called super-majority. David Blevins (Tomitribe) publicly announced that a coalition of several vendors has enough votes to block it - not because they oppose integration per se, but pending resolution of a conflict of interest. The crux of the dispute: within the Jakarta EE Working Group structure sits EE4J - a parent project that includes GlassFish. Blevins argues that the WG’s budget, marketing, and community programs effectively promote one implementation at the expense of competitors (Payara, Open Liberty, WildFly, TomEE). His proposal: split EE4J into a separate structure, and let the Jakarta EE WG focus solely on specifications. OmniFish (the GlassFish steward) agrees to the split but denies any intentional favoritism, calling it historical baggage that is gradually being removed. Current status: in November, a proposal passed to retain the org.eclipse.microprofile.* namespace in the event of a merge - a signal that the community is preparing for integration without repeating the pain of the javax.* → jakarta.* rename - but the EE4J issue remains unresolved.

Additionally: the plan for Jakarta EE 12 (scheduled for July 2026) has already been approved - mandatory JDK 21, support for JDK 25, new specifications Jakarta Config and Jakarta Query, deprecation of the Application Client, and further consolidation of the data layer.

Not a bad year for a “dead” platform.

Scala Says Goodbye to JDK 8

For a decade, JDK 8 was the anchor of the JVM ecosystem - but that era is coming to an end. Spring Boot 3 requires JDK 17. Jakarta EE 10 has a minimum of JDK 11. Hibernate 6, Quarkus 3.7, Micronaut 4 - all have gone down the same path. And it is precisely in this context that Scala made its most symbolic move in 2025. In March, the Scala team officially announced its decision: JDK 17 will become the new minimum, starting with Scala 3.8 (currently in RC4, so the release is likely imminent) and the next LTS (most likely Scala 3.9). Not JDK 11 - straight to 17. The discussion about whether to also drop JDK 11 revealed no convincing reason to keep it—11 is now almost as outdated as 8.

The direct reason? The implementation of lazy val in Scala - one of the language’s most distinctive constructs - relies on low-level operations from sun.misc.Unsafe, which are necessary to ensure correct behavior in multithreaded environments (initialization must happen exactly once, even when multiple threads access the value simultaneously). The problem is that Oracle has been planning to remove these methods for years - JEP 471 officially deprecates them, and future JDKs (25+) may remove them entirely. Scala therefore had to rewrite the lazy val implementation using a newer API (VarHandle), which is only available starting with JDK 9. Maintaining two code paths - the old one for JDK 8 and the new one for newer versions - was deemed too costly.
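
For the curious, here is the flavour of the problem in plain Java - a hedged sketch of VarHandle-based once-only publication, the JDK-9+ mechanism that replaces sun.misc.Unsafe in this role (simplified: unlike a real lazy val, the supplier may run more than once under a race, but only one result is ever published):

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import java.util.function.Supplier;

class Lazy<T> {
  private static final VarHandle VALUE;
  static {
    try {
      VALUE = MethodHandles.lookup().findVarHandle(Lazy.class, "value", Object.class);
    } catch (ReflectiveOperationException e) {
      throw new ExceptionInInitializerError(e);
    }
  }

  private volatile Object value;        // null = not initialized yet
  private final Supplier<T> init;

  Lazy(Supplier<T> init) { this.init = init; }

  @SuppressWarnings("unchecked")
  T get() {
    Object v = value;
    if (v == null) {
      Object computed = init.get();
      // Atomic publication: the first CAS wins, everyone else sees the winner's value.
      Object witness = VALUE.compareAndExchange(this, null, computed);
      v = (witness == null) ? computed : witness;
    }
    return (T) v;
  }
}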

There is also a beautiful irony in all of this: for years, Scala was ahead of Java (pattern matching, sealed types, ADTs). Java eventually caught up with Scala at the language level - and now Scala can finally take advantage of the bytecode that Java created… inspired by Scala. The loop is complete.

We are one great family, in the end

Kotlin and the Moment When K2 Stops Being “New”

After Kotlin’s very dynamic development in 2024, with the highlight being version 2.0 in May, 2025 became a year of normalization. Kotlin 2.2 (June) and 2.3 (December) no longer try to convince anyone that K2 is the future - they assume instead that “the future is now,” and focus on what ultimately determines real adoption: performance, ergonomics, and ecosystem completeness.

The most telling signal is therefore tooling-related rather than language-level. K2 mode became the default in IntelliJ IDEA (from 2025.1). This means that code analysis, completion, highlighting, and refactorings in the IDE are powered by exactly the same engine that compiles the code. JetBrains decided that K2 is stable and fast enough to become the foundation of the entire workflow.

Kotlin 2.2 and 2.3 (which we haven’t covered yet in JVM Weekly, but their time will come) represent a maturation phase - rather than a single “killer feature,” we get a series of small but noticeable improvements. Guard conditions in when, context parameters that organize dependencies, non-local break/continue, multi-dollar string interpolation - these are changes that don’t make a splash (you’ve probably already forgotten what was in the previous sentence), but save time every day. API stabilization (HexFormat, Base64, UUID), compatibility with Gradle 9.0, new mechanisms for source registration - Kotlin stops living “alongside” the JVM toolchain and starts being co-designed with it.

At the same time, Kotlin Multiplatform continues to mature - a technology that for years was both the biggest promise and the biggest question mark. Compose Multiplatform for iOS reaches production-ready status in 2025, and not in the sense of “it works in a demo,” but as a fully-fledged UI stack: native scrolling, text selection, drag-and-drop, variable fonts, natural gestures. Add to that Compose Hot Reload, which dramatically shortens the feedback loop. This is the moment when UI stops being the “weak link” of KMP.


In the same context, Swift Export appears - still in preview, but highly symbolic. Kotlin no longer integrates with Swift via Objective-C, and instead starts exporting APIs directly into the Swift world.

The tooling backend is also being cleaned up. kapt uses K2 by default, old flags are being deprecated, and annotation processing finally stops being a “parallel world” alongside the new compiler. Alongside this, Amper continues to evolve—still experimental, but increasingly positioned as an alternative where Gradle is too heavy and tight IDE integration matters more than maximum configurability.

There is, of course, Kotlin LSP - but it is such an interesting phenomenon in its own right that we’ll come back to it later.

Clojure Is… AI-Skeptical

In 2025, Clojure did something that, in the age of “AI everywhere,” feels almost countercultural: instead of trying to win the fireworks race, it focused on steadily closing the gaps in the language’s and tooling’s foundations. The most “official” rhythm of the year came from successive releases in the 1.12.x line. On paper, these were tiny releases, but they’re quintessentially Clojure: semantic fixes, compiler improvements, visibility and safety issues in LazySeq and STM - the kinds of changes that improve the predictability of systems meant to last for years.

And here’s the most interesting part: this “quiet” year for the language itself collided with a very loud statement from Rich Hickey about AI - a statement that, in many ways, explains why Clojure in 2025 looks the way it does. In early January 2026, Hickey published an essay titled Thanks AI! (sneaking into this overview at the last possible moment), in which he directly addresses the side effects of generative AI: from flooding the world with “slop,” through degrading search and education, to cutting off the entry path for juniors (the disappearance of entry-level jobs as a “road to experience”). Classic Hickey: evaluating technology through the lens of whether it reduces complexity and improves the quality of social systems, not just technical ones.


Instead of building “the Clojure AI agent framework of the year,” the community pumped energy into tooling and ergonomics. Clojurists Together’s 2025 reports explicitly show where funding and effort went: to people working on clojure-lsp and IDE integration, on clj-kondo, babashka, and SCI, on CIDER - in other words, on things that shorten the feedback loop and improve the quality of everyday work, regardless of whether an LLM is involved in the background.

And this is, paradoxically, one of the most important things about Clojure in 2025: in a world where many ecosystems respond to AI with a marketing reflex, Clojure responds… like Clojure. Quality, semantics, and tools first. The rest can wait.

Groovy 5.0 - A New (Small) Impulse for a Classic JVM Language

Groovy has a long history behind it - from a dynamic, “magical” JVM language that in the mid-2000s was one of the most interesting alternatives to Java, inspired by Ruby (the ultimate “cool kid” language of its time), through a scripting platform used in Gradle, Jenkins, and Grails, to today’s ecosystem where its mainstream adoption has been declining in favor of Kotlin/Java and more strongly typed tools.

Released in the summer of 2025, Groovy 5.0 is the first major revision since Groovy 4 and an attempt to bring more modern language features and tooling quality into a project that for a long time was seen more as a “support tool” than a fully fledged programming language. The release includes a wide range of improvements: broad compatibility with JDK 11–25, hundreds of new or enhanced extension methods for Java’s standard APIs, a reworked REPL console, better interoperability with modern JDK versions, and a number of conveniences for working with collections and iteration. All of this is meant to ensure that Groovy doesn’t fall behind in the era of newer JVM platforms.

One of the most anticipated features of 5.0 is GINQ (Groovy INtegrated Query) - a SQL-style query language built into Groovy that allows collection and data-structure operations to be expressed in a more declarative way. This functionality already existed in earlier 4.x versions, but in 5.0 it is strengthened and treated as a first-class part of the language. GINQ offers capabilities similar to LINQ in .NET, with from, where, orderby, and select clauses, and support for working with different kinds of data sources.

The community’s reception of Groovy 5.0 has been mixed, which is hardly surprising given the language’s position in 2025. Developers still invested in Groovy appreciate the modern improvements - such as better performance for collection operations, fixes to the REPL and interoperability, and the fact that the project continues to evolve beyond 4.x. As some commentators put it, this is “the best version of Groovy so far,” with hundreds of fixes and extensions. On the other hand, discussions and forum posts also point out that the language continues to struggle with limited adoption and tooling support outside its traditional niches, such as CI/CD scripts, Gradle DSLs, or automation tasks.

All in all, Groovy 5.0 is an interesting release: it delivers new language features, stronger JDK compatibility, and improved ergonomics, while at the same time fighting for attention in a JVM landscape dominated by Kotlin, Java, and statically typed tools. Its role in projects like Gradle (where it is gradually being displaced by Kotlin), Grails (which itself is no longer a popularity powerhouse), or Jenkins shows that the language still has its place - but its development in 2025 is more evolutionary than revolutionary.

2. What’s Happening in the Community

OpenRewrite: From a Tool to an Ecosystem

Not long ago, OpenRewrite existed in the collective consciousness as “that tool for migrating Spring Boot.” Useful and effective, but still fairly narrowly pigeonholed. In 2025, that phase definitively ended. OpenRewrite stopped being a product and started functioning like infrastructure - quiet, embedded, often invisible, but absolutely critical to how code in the JVM ecosystem is modernized today.

A symbolic moment was OpenRewrite being built directly into IntelliJ IDEA starting with version 2024.1. From that point on, it was no longer a plugin for enthusiasts, but a first-contact tool. YAML recipes with autocompletion, inline execution of complex transformations, gutter-based migration runs, and debugging scans and Refaster rules turned automated refactoring from a “special operation” into a normal part of everyday IDE work. When JetBrains treats a tool as part of the editor itself, it usually signals that the ecosystem has matured to a new standard.

In parallel, Microsoft’s App Modernization initiatives built around GitHub Copilot do not “write migrations” themselves. Copilot plans steps, explains changes, and closes error loops - but the deterministic code transformation is performed by OpenRewrite itself. AI analyzes the codebase and proposes a migration sequence, OpenRewrite applies semantic transformations, and Copilot iteratively fixes compilation and test failures. Java 8 → 25, Spring Boot 2 → 4, javax → jakarta, JUnit 4 → 6 - without a semantic refactoring layer, this entire process would degrade into little more than prompt engineering and hope. Even the most advanced LLM-based tools still require a deterministic refactoring engine underneath to constrain stochastic behavior.


The same pattern appears elsewhere. Azul integrates OpenRewrite via Moderne rather than building its own refactoring system, combining automated code transformations with runtime insights from Azul Intelligence Cloud.

What’s most interesting is how many tools now rely on OpenRewrite without exposing that dependency prominently. Moderne is the obvious example, but the same pattern can be observed in security auditing tools, compliance platforms, mass repository migration systems, and internal “code remediation” platforms built by large organizations. OpenRewrite is increasingly playing the same role for refactoring that LLVM plays for compilers - a shared substrate on which others can build workflows, UIs, and integrations without reimplementing parsers, visitors, or semantic analysis from scratch.

At the same time, the technical scope of OpenRewrite continues to expand beyond Java. It now supports JavaScript, TypeScript, Android, and increasingly complete Scala coverage, with rapid parser updates tracking Java 25, Spring Boot 3.5, and Spring Cloud 2025. The OpenRewrite recipe catalog has surpassed 3,000 entries, evolving from a simple list of migrations into a living knowledge base that captures how large, long-lived systems actually evolve in practice.

Have fun with JavaScript, OpenRewrite.


Private Equity Enters the Game: Azul Acquires Payara

On December 10, 2025, Azul announced the acquisition of Payara—the company behind the enterprise fork of GlassFish and one of the last truly serious players in the Jakarta EE world. On paper, this looks like a classic product acquisition. In practice, it’s a much deeper move that closes a long sequence of events.

Just a few weeks earlier, Thoma Bravo entered Azul. And when private equity of that scale shows up in a technology company, it’s usually not about maintaining the status quo - it’s about building a platform capable of aggressive growth.

For years, Azul has been a very clearly profiled company: an excellent JDK, extremely strong expertise in performance, stability, and security - but still just one fragment of the overall stack. For large Java customers (you know - the true enterprise+ scale), the story rarely ends at the JVM. It ends with the question: “what about the application servers themselves?”

And this is exactly where Payara comes into the picture.

For years, Payara has existed somewhat outside the main hype cycle, consistently serving organizations that could not - or did not want to - abandon Jakarta EE. Banks, public administration, industry - places where applications live for decades, not quarters.

For Azul, this was the missing piece of the puzzle. The acquisition of Payara makes it possible to offer customers a complete, coherent story: from the JDK, through the runtime, all the way to the application platform. One responsibility, one vendor, one operating model. The broader market context is even more interesting. The application server market - declared dead many times over, even in this article itself - is today worth tens of billions of dollars (yes, I was surprised too).

Why? Because microservices and the cloud didn’t erase monoliths - they merely added new layers alongside them. Organizations are migrating, but they do so in stages. And they need platforms that can handle both the “old” and the “new” at the same time.

IBM Merges the Red Hat Java Team… and Buys Kafka

Azul isn’t the only company trying to give customers a more focused, end-to-end offering. 2025 brought two IBM moves that, taken separately, look like routine corporate reshuffles—but together form something much more interesting: an attempt to build a coherent enterprise stack that actually lives in real time, instead of pretending to analyze data “almost instantly.”

Early in the year - fairly quietly (unless you read JVM Weekly, in which case not quietly at all) - Red Hat’s Java and middleware teams were merged into IBM’s Data Security, IAM & Runtimes organization. Sounds like just another reorg? Maybe. But the context matters.

Since IBM acquired Red Hat in 2019 for $34 billion, the two companies have effectively been running parallel JVM universes: Red Hat with JBoss EAP, WildFly, and Quarkus, and IBM with WebSphere, Open Liberty, and OpenJ9.

Two teams, two roadmaps, two philosophies tackling the same problem. Now they’re meant to be one. The projects remain open source (WildFly and Quarkus were moved under the Commonhaus Foundation), but middleware strategy is now owned by a single organization - not two competing ones.

December, however, brought the real fireworks: IBM buys Confluent for $11 billion in cash. Eleven. Not “considering,” not “exploring” - done, signed, with a premium and full commitment. And suddenly, everything clicks.


Kafka may be the backbone of real-time data, but Confluent has never been profitable - billions in losses over the years - which made many analysts skeptical. IBM sees it differently: 45% of the Fortune 500 already use Confluent products. For a startup (okay, not that small anymore), that’s success. For Big Blue, it’s a list of customers who don’t yet realize they need event-driven architecture bundled with 24/7 support and long-term enterprise contracts.

And this is where Java comes back into the picture. Red Hat gave IBM the platform (OpenShift). HashiCorp (deal finalized this year as well) delivered infrastructure as code. Confluent brings real time.

The newly unified Java/middleware team is supposed to glue all of this into a single, predictable stack for enterprises that want to build event-reactive systems.

It’s an interesting bet: IBM is wagering that the future of enterprise isn’t better dashboards, but architectures where data stops being “what happened” and becomes “what is happening.” And to do that properly, you need runtimes (the Java stack), a platform (OpenShift), orchestration (Terraform), and a circulatory system (Kafka).

Will it work? Time will tell. But one thing is certain: IBM didn’t buy Confluent to improve the next quarterly report. The effects should become visible pretty soon.

Because, always remember, there is a crucial difference:

Canonical Releases Its Own JDK

And while we’re on the topic of JDK teams tied to Linux distributions…

July also brought something unexpected: Canonical announced its own Canonical builds of OpenJDK - binary OpenJDK builds optimized specifically for Ubuntu, including ultra-slim Chiseled OpenJRE containers aimed at cloud and CI/CD environments. This is another step in a growing trend where the JDK is treated as an artifact tightly coupled to the platform, tooling, and ecosystem around it.

Canonical emphasizes that these builds are significantly smaller - up to 50% smaller than popular distributions like Temurin - without sacrificing throughput or performance.

For JVM developers, this means that the choice of runtime environment is becoming increasingly dependent on deployment context: Ubuntu Pro on servers, IBM Semeru or Red Hat OpenJDK/Quarkus in enterprise setups, Microsoft Build of OpenJDK for Azure, and so on.

As a result, choosing a JDK is no longer simply “just OpenJDK.” It’s becoming more complex, with each provider adding its own twist. Canonical focuses on container size reduction and long-term support; IBM and Red Hat emphasize long-term stability and enterprise application compatibility; Microsoft offers builds tailored specifically for Azure.

What’s interesting is how different organizations now view Java - as a set of composable artifacts fitted to their own ecosystems, rather than as “one universal distribution everyone just downloads and uses.” And that’s the broader trend worth paying attention to.

Hibernate ORM changes its license: The end of the LGPL Era

Hibernate’s license change from LGPL to the Apache License 2.0 also has an interesting broader context. For years, LGPL was “safe enough” for Hibernate. It allowed use in commercial projects, didn’t force companies to open their own code, and in practice rarely caused real problems for engineering teams. The issue is that open source reality doesn’t stop at engineers. In many organizations, the mere presence of LGPL in the dependency tree was enough to trigger a red flag—regardless of how liberal its interpretation actually was. For legal departments, intent mattered less than the license header.

This tension grew as the entire ecosystem evolved. Hibernate long ago stopped being a “standalone project” and became a foundation of the modern Java stack: Jakarta EE, Quarkus, and newer initiatives centered around the Commonhaus Foundation. These ecosystems share one common trait: a strong preference for the Apache License as the default standard. LGPL increasingly became an exception - something that had to be explained, justified, and pushed through yet another committee.


In that sense, relicensing was a rational move. The question wasn’t whether LGPL is “bad,” but whether it was still compatible with the direction Java as a platform has taken. Apache License provides simple answers to hard questions: you can use it, distribute it, build products on top of it - without nuances, exceptions, or interpretations - and nobody asks uncomfortable follow-up questions. For projects targeting enterprise adoption - and Hibernate very much is one - that simplicity has real value.

So why go all the way? Because the alternative was slow marginalization. Staying on LGPL would increasingly turn Hibernate into “that problematic component” that blocks adoption, complicates audits, and requires extra explanations.

The license change came with real costs and real losses. It required reaching out to hundreds of contributors across 25 years of project history. It was a test of how “alive” an open-source project really is - one that many people treat as a given part of the landscape. Where consent couldn’t be obtained, code had to be removed or split out. Envers remained under LGPL; other modules simply ceased to exist. Trade-offs were inevitable.

Deno vs Oracle: Who Owns “JavaScript”?

JavaScript is the air of the internet today. A language you don’t really choose—it’s just there. In the browser, on the server, in tools, frameworks, builds, and deploys. And precisely because of that, it can be surprising that, formally, the name “JavaScript” is still a trademark owned by one specific company: Oracle. For years, this was one of those legal oddities everyone knew about, but no one had the time or energy to do anything about. Until the end of 2024.

In November, Ryan Dahl - the creator of Node.js and Deno - filed a formal petition with the U.S. Patent and Trademark Office to cancel the “JavaScript” trademark. The starting point is simple: Oracle inherited the mark from Sun Microsystems in 2009, but since then it has not participated in the development of the language, does not take part in TC39, does not sell products based on JavaScript, and in practice has no real connection to it beyond a historical entry in a registry.

The dispute drags on through 2025, with Oracle consistently refusing to voluntarily relinquish the trademark, rejecting the allegations, and attempting to neutralize the heaviest charge - fraud in the renewal of the mark in 2019, when a screenshot of the Node.js website was presented as proof of use. In legal terms, that turns out to be insufficient to prove intentional deception. The rest of the case, however, remains alive, and in September the parties enter the discovery phase - the most expensive and most brutal part of the proceedings.

In the background, something far more interesting is happening than the lawyers’ battle itself. A community that for three decades has treated JavaScript as a common good suddenly realizes that its name does not formally belong to it. An open letter published on javascript.tm gathers tens of thousands of signatures, including that of Brendan Eich - the creator of the language - as well as people who actually shape its evolution today. Deno launches a fundraiser to cover legal costs, because discovery is not paid for with ideals, but with very real money.

I’m writing about this because, in truth, this is not really a dispute about JavaScript itself, but about whether the trademark system should allow the “hoarding” of names that have become technological standards, or whether it should be able to release them once they stop being anyone’s product. Regardless of how the case ends - and we’ll likely wait until 2027 for a verdict - one thing has already happened: someone finally asked out loud whether JavaScript is still someone’s brand, or simply a language that belongs to everyone.

And sometimes, just asking that question is enough to shake the foundations.

Wish you luck, Deno. You will need it.

JetBrains Reveals Plans for a New Programming Language

In July 2025, Kirill Skrygan, CEO of JetBrains, said in an interview with InfoWorld that JetBrains is indeed working on a new programming language. And no - it’s not Kotlin 3.0, nor another iteration of a familiar paradigm. It’s an attempt to jump an entire level of abstraction.

Skrygan described it in a way that almost sounds old-fashioned: through the history of language evolution. First assembly, then C and C++, then Java and C#. Each step moved the programmer further away from the machine and closer to a mental model. The problem is that we’ve been stuck at this level for years. Today’s developers still translate their intent into classes, interfaces, annotations, and frameworks—even though the real design of a system lives elsewhere: in people’s heads, in architecture discussions, in design docs, diagrams, and domain descriptions.

That gap is exactly what JetBrains wants to address. The new language is not meant to be a “better Kotlin,” but an attempt to formalize what has so far been informal: the system’s ontology, the relationships between entities, architectural assumptions, and decisions that today get lost in comments, Confluence pages, or Slack threads - and later have to be manually reconstructed in code. In other words, everything agents need to operate: context.

The most controversial part of this vision appears when Skrygan talks about natural language. The foundation of the new language is supposed to be… English. But not in the sense of “write a sentence and magic happens.” Rather, as a controlled, semantic representation of intent - something between a design document and a formal system description language. AI agents, deeply integrated with JetBrains tooling, would take this description and generate implementations for different platforms: backend, frontend, mobile - everything consistent, everything derived from a single source of truth.

This distinction matters, because Skrygan explicitly distances himself from the popular slogan that “English will be the new programming language.” His view is surprisingly sober: you can’t build industrial systems in plain English. And that tension is key to understanding the whole initiative. JetBrains isn’t trying to replace programming with prompts. It’s trying to build a formal language of intent - one that looks like English but behaves like a programming language. A kind of spec-driven engineering baked directly into the core of the language.

Natural Language Instructions

The organizational context also matters. JetBrains is a private company. It doesn’t need to deliver quarterly narratives for the stock market, and it doesn’t have to chase hype. Skrygan says it openly: people are tired of grand AI declarations. And precisely because of that, JetBrains can afford a long-term experiment that may not produce a sellable product for years. It’s the same logic that once allowed them to invest in Kotlin long before anyone knew it would become mainstream.

Importantly, this new language isn’t emerging in a vacuum. JetBrains is already building infrastructure for a world where the IDE is no longer just a code editor. Junie as a coding agent (and yes, we’ll talk more about Junie later), local Mellum models, deep integrations with LLM providers - all of these are pieces of a puzzle in which code stops being the only artifact and becomes just one of many possible “renders of intent.”

We don’t know the timeline. It’s possible that for years we’ll see nothing but prototypes, internal DSLs, and experimental tools - or perhaps nothing at all. But the very decision to think about a new programming language in 2025 (let’s be honest: who even cares about programming languages in 2025?) already says something. JetBrains clearly believes that the next revolution won’t happen in yet another version of Java, Kotlin, or TypeScript, but in a place where we still lack sufficient precision - where code ends and thinking about the system begins.

And since we’re already talking about JetBrains…

The End of the Two-IntelliJ-IDEA Era

December 2025 brought a decision that would have seemed unthinkable just a few years ago: the end of the split between IntelliJ IDEA Community Edition and Ultimate. Starting with version 2025.3, there is a single IntelliJ IDEA. Some features are available for free, others require a subscription or a 30-day trial - but the product no longer pretends these are two separate worlds.

For a long time, the Community vs. Ultimate split made sense. Community was a clean IDE for Java and Kotlin; Ultimate was a platform for the enterprise world—Spring, databases, and everything considered “serious.” The problem is that the line between “basic” and “enterprise” has become increasingly blurred. Today, even a student project very quickly touches Spring Boot, SQL, database migrations, or simple web integrations. Keeping these features behind a paywall increasingly stopped being monetization and started becoming frustration - and an onboarding barrier that many simply wouldn’t cross.

At the same time, two products meant two development pipelines, two test suites, two backlogs, and constant decisions about whether a given feature “belongs” in Community or already in Ultimate. In practice, this led to duplicated effort and slowed overall development. Consolidating into a single IDE is, first and foremost, an engineering decision: one codebase, one distribution model, one evolution path. Very Kotlin Multiplatform in spirit.

A (very user-friendly) side effect is the shift in the boundary of free features. Basic support for Spring and Jakarta EE, the Spring Boot wizard, SQL tooling, database connection configuration, and schema browsing suddenly stop being “enterprise-only” features - not because JetBrains became philanthropic, but because without them IntelliJ simply stops being a realistic starting tool for new developers.

But the real reason for this change, in my view, is much more down to earth: pressure from Cursor, Windsurf, Antigravity, and the entire wave of VS Code forks with built-in AI features. When the competition gives you not only a free IDE, but also an intelligent assistant that writes code, refactors, and debugs, the discussion of “should I pay for Spring support?” starts to sound a bit absurd. JetBrains clearly realized that fighting for user base now matters more than optimizing revenue from basic licenses.

Even if free access to these tools is… a bit risky

Monetization doesn’t disappear - but it shifts dramatically. Instead of selling access to frameworks (especially for quasi-commercial users - companies will buy full IntelliJ anyway), JetBrains will sell AI-assisted development, advanced code analysis, team collaboration, and enterprise tooling. Upselling moves higher up the stack - where real value lies in productivity and scale, not in the mere fact that the IDE “understands Spring.”

This is the end of a certain era (a phrase heavily overused in 2025), but also an open admission of a new reality: if JetBrains wants a chance to compete with free, AI-powered editors, it first has to keep users inside its ecosystem. Limiting basic functionality was only accelerating the exodus to tools like Cursor. From 2025 onward, there is one IntelliJ. And this isn’t a gesture of goodwill, but a necessity in today’s world. That is what the industry expects right now.


WASM for the JVM Is Gaining Momentum

WebAssembly is one of those standards that has been circulating in industry narratives for years like Death in Venice - everyone knows it’s about to take off, it’s just that the “any moment now” keeps getting postponed.

(Sorry for the slightly hermetic Polish literary joke - couldn’t resist.)

And yet, looking at what’s happening today in the JVM ecosystem, it’s hard to shake the feeling that WASM is finally starting to find its place - and in several interesting niches at once.

The most obvious signal is GraalVM, which is increasingly treating WebAssembly not as an exotic target, but as one of the first-class ways of executing code. WASM in Graal’s incarnation is not trying to replace the JVM or AOT in the style of Native Image — instead, it offers yet another secure sandbox that Java can interact with on its own terms.

In parallel, a completely different approach is emerging, represented by CheerpJ. This line of thinking doesn’t ask “how do we port Java cleanly to WASM?” but rather “how do we make existing JVM applications simply run in the browser?” — without rewriting, without porting, without months-long migrations. CheerpJ demonstrates that WebAssembly can act as a compatibility layer - something akin to a modern JRE for the web world. It’s a highly pragmatic vision, especially for old, heavy desktop applications that suddenly get a second life in the browser.

On top of that comes Kotlin, which for a long time has treated the web as a first-class platform rather than merely a “frontend to a backend.” In JetBrains’ latest plans, WASM is increasingly visible as a real alternative to JavaScript in places where predictability, performance, and consistency of execution models matter. This is not a mass movement yet, but the direction is clear: Kotlin/Wasm is not meant to compete with JavaScript in writing widgets, but in building serious web applications that don’t want to inherit the full baggage of the browser ecosystem.

The most interesting pieces, however, are projects that show WASM from the inside. One of my favorite spikes of the year is the story of porting the Chicory WebAssembly runtime to Android. It’s a great example of the fact that WebAssembly is not a magical format that solves all problems. Quite the opposite — it forces very deliberate thinking about memory, ABI, execution models, and host integration. And that’s precisely why it fits the JVM so well: a platform that has lived for decades at the intersection of high-level abstraction and hardcore runtime engineering.

Taken together, this paints an interesting picture. WASM in the JVM world is not a “new JVM” trying to replace bytecode or HotSpot. Instead, it’s another universal execution format that can be plugged in wherever isolation, portability, or execution in previously inaccessible environments is required. Browser, edge, sandbox, mobile - in all these places, WASM can serve as a natural bridge.

And perhaps that’s exactly why this time WebAssembly really does have a chance to take off. Not as a hype-driven replacement for everything, but as a missing puzzle piece. JVM, being the JVM, doesn’t rush into WASM with startup-style enthusiasm - it approaches it methodically, calmly, and with full awareness of its limitations. And Java’s history suggests that this is usually the best predictor of long-term success.

However, we need to remember that the world looks a bit different nowadays.

TornadoVM Is Having the Time of Its Life

For years, TornadoVM was one of those projects everyone “had on their radar,” but few treated as anything more than an academic curiosity - fitting for a project coming out of the University of Manchester. Java on GPUs, FPGAs, and accelerators? It sounded like a future that was always one conference too far away. The year 2025 very clearly changed that picture. TornadoVM stopped being a promise and started to look like a technology that has genuinely found its place in the JVM ecosystem, addressing concrete needs of the platform.

And all that engineering “just” to serve AI models from the JVM

One of the most symbolic signals of this shift was a personnel move. Juan Fumero, one of the leaders of the TornadoVM project and the public face of the BeeHive Lab at the University of Manchester, joined the Java Platform Group at Oracle. This is the moment when an experimental research project meets, directly, the team responsible for the future of OpenJDK. It’s hard to imagine a clearer sign that JVM acceleration is no longer a side topic.

The strongest signal of TornadoVM’s “entry into the real world,” however, turned out to be GPULlama3.java. Running LLaMA on GPUs without leaving the Java ecosystem, without writing CUDA, and without manual memory management is something that would have sounded like clickbait just two years ago. The 0.2.0 release of the project proved that TornadoVM is not only capable of handling simple kernels, but can also support real AI workloads.

What’s crucial here is that TornadoVM never tried to compete with GraalVM, CUDA, or OpenCL on the level of “who’s faster.” Its ambition was different: to let JVM developers think in terms of tasks and data, rather than threads, blocks, and registers. In this context, TornadoVM 2.0 (which I will cover more deeply in January) looks like the transition point from a research prototype to a platform. A more stable programming model, better integration with the JVM, real support for GPUs, CPUs, and accelerators - and, above all, less magic and more predictable contracts. This is exactly the moment when a project stops living only in academic papers and starts living in production repositories.
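To give a feel for that programming model, here is a rough sketch of a TornadoVM-style kernel (reconstructed from memory rather than copied from the docs; newer TornadoVM versions also use dedicated off-heap array types, so treat the signatures as illustrative):

import uk.ac.manchester.tornado.api.annotations.Parallel;

class VectorKernels {
  // Plain Java code; the @Parallel annotation marks the loop as a
  // data-parallel iteration space that TornadoVM can map onto a GPU,
  // FPGA, or multi-core CPU without the developer writing CUDA or OpenCL.
  static void vectorAdd(float[] a, float[] b, float[] c) {
    for (@Parallel int i = 0; i < c.length; i++) {
      c[i] = a[i] + b[i];
    }
  }
}

The syntax is almost boring on purpose: you describe what to compute over which data, and the runtime decides where it runs.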

The most interesting question is no longer “does TornadoVM make sense,” but rather: when will we see its DNA in OpenJDK? Perhaps not as a ready-made framework, but as APIs, abstractions, and a direction for JVM evolution prepared for a world where the CPU is just one of many places where code runs.

Seen more broadly, TornadoVM fits perfectly into a moment in which Java is once again redefining its boundaries. Not by changing syntax, but by changing where and how code is executed. And perhaps for the first time in a long while, hardware acceleration in Java no longer looks like an experiment.

TornadoVM has truly come into its own.

BTW: We had a guest post in December about TornadoVM architecture - I consider it essential reading for 2026 😁

Project Stargate Becomes Oracle’s New Darling

At first glance, Project Stargate looks like something completely outside our bubble. OpenAI, SoftBank, gigantic data centers, hundreds of billions of dollars, and a narrative straight out of AI geopolitics. It’s hard to imagine anything further removed from the JVM, GC, and JDK releases. And yet - if you look more closely - Oracle shows up in this story in a way that makes it entirely reasonable to talk about it right here.

In January 2025, OpenAI and SoftBank announced Stargate: a long-term project to build hyperscale computing infrastructure for AI, ultimately valued at as much as several hundred billion dollars. The goal is not a single data center, but a global network of facilities designed from the ground up for training and inference of large models. It’s an attempt to create an “intelligence factory” - infrastructure positioned as a strategic asset… all based on GPUs from Nvidia.


And this is where Oracle enters the picture, as a real operator and cloud provider. Oracle Cloud Infrastructure is meant to be one of the pillars of Stargate - both technologically and operationally. This is a very interesting turn, because for years Oracle was perceived more as a databases-and-enterprise-legacy company than as a player shaping the AI compute race. Stargate shows that this perception is becoming increasingly outdated.

From a JVM perspective, this is still very much a “side topic.” Stargate is not about Java, bytecode, or HotSpot, but about GPUs, networks, power, cooling, and decades-long contracts. At the same time, it involves the same company that owns the JavaScript trademark, develops OpenJDK, pushes Valhalla, Leyden, and Babylon, and makes enormous amounts of money on enterprise Java. Oracle is playing on several boards at once - and Stargate illustrates the scale of that game.

They also want to buy Warner Bros through Paramount.

Project Stargate is a story about just how much the landscape around the JVM is changing. About how companies that spent decades building stable, boring IT foundations are now also players in the most capital-intensive and futuristic technology race of our time.

And even if the JVM stands a bit off to the side here, Oracle - somewhat atypically for itself - is standing very close to the center of the stage. It will be interesting to see how much of this we’ll start to notice over the coming year.

3. Major Releases

Spring Framework 7 and Spring Boot 4: A New Generation

November 2025 closed a very long chapter for Spring. Spring Framework 7 and Spring Boot 4 are the first releases that assume the market has already moved on. You can see this in every decision: Java 17 as the minimum baseline, optimization for JDK 25, and Jakarta EE 11 as the new normal.

There is also a very clear shift toward standardization instead of custom, homegrown solutions. The move to JSpecify as the default source of null-safety signals that Spring is no longer building its own micro-standards and is instead playing in the same league as OpenJDK, JetBrains, and the rest of the tooling ecosystem. This matters enormously for adoption in polyglot teams, especially where Kotlin and Java coexist in a single codebase.
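To make that concrete, here is a minimal sketch of what JSpecify-style null-safety looks like in application code (the class and method names are made up):

import org.jspecify.annotations.NullMarked;
import org.jspecify.annotations.Nullable;

// @NullMarked flips the default: within this scope everything is non-null
// unless explicitly opted out with @Nullable.
@NullMarked
class CustomerLookup {

  // "No result" is a legitimate outcome, so the return type says so.
  @Nullable
  Customer findByEmail(String email) {
    return null; // actual lookup elided
  }
}

record Customer(String email) {}

The point of standardizing is that the same annotations are understood by IDEs, static analyzers, and the Kotlin compiler, instead of every framework shipping its own variant.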

Spring Boot 4.0, in turn, marks a moment of deliberate “slimming down” of the ecosystem. The removal of Undertow, RestTemplate (on the horizon), XML, JUnit 4, and Jackson 2.x clearly shows that anything blocking evolution will not be maintained indefinitely. At the same time, the modularization of spring-boot-autoconfigure demonstrates that Spring is taking startup time, security surface, and debuggability very seriously - precisely the areas that now determine platform choices in large organizations.

Against the broader market backdrop, Spring 7 / Boot 4 positions itself with remarkable clarity. This is not a framework for the “freshest experiments” - that role is increasingly taken over by Quarkus or Micronaut.


Spring remains the default enterprise JVM platform, but in a far more disciplined, modern, and coherent form than a few years ago, consolidating what has become the standard: Jakarta, modern JDKs, cloud-native operations, and deep tooling integration.

Spring Modulith 2.0 as the consequence of a single, coherent vision

Spring Modulith 2.0 is particularly interesting not only because of what it introduces, but because of who stands behind it. This is still the same line of thinking that Oliver Drotbohm has been developing for years - the author of Spring Data, the initiator of jMolecules, and one of the most consistent voices in the Spring world when it comes to connecting domain architecture with framework practice. Modulith is therefore not a new idea coming “from the side,” but the culmination of concepts that have been maturing in the ecosystem for over a decade.

For years, Drotbohm has argued that the biggest problem of enterprise systems is not a lack of frameworks, but a lack of enforceable boundaries. Domain-Driven Design offered excellent concepts, but weak tools for enforcing them. Spring provided immense flexibility, but made it just as easy to abuse. jMolecules was the first attempt to dress architecture in code - to give names, stereotypes, and semantics to what previously existed only as diagrams. Spring Modulith is the natural next step: moving that semantics into the very heart of the Spring runtime.

Spring Modulith 2.0 stops pretending that modularity is a matter of developer goodwill. Integration with jMolecules 2.0 changes the rules of the game — DDD stereotypes such as @Aggregate, @Repository, or @ValueObject, which used to be comments for humans, now become contracts enforced by Spring. The framework understands your domain: it knows that an aggregate should not directly access another aggregate, that a repository operates on aggregate roots. Violations of module boundaries no longer surface in code reviews, but as red builds. Module boundaries stop being lines drawn in the sand on a wiki and become walls guarded by the framework - move a class between modules and you immediately see what you broke, instead of discovering it in production.
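In practice, that guard rail usually boils down to a single test. A minimal sketch, with MyApplication standing in for your Spring Boot entry point:

import org.junit.jupiter.api.Test;
import org.springframework.modulith.core.ApplicationModules;

class ModularityTests {

  // Builds the module model from the application's packages and fails the
  // build if any class reaches across a module boundary it shouldn't.
  @Test
  void verifyModuleStructure() {
    ApplicationModules.of(MyApplication.class).verify();
  }
}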

Spring Modulith 2.0 arrives at the perfect moment. The market is clearly moving away from unreflective microservices enthusiasm, but at the same time has no desire to return to unstructured monoliths. The so-called “modular monolith” thus becomes a conscious architectural choice. Through Modulith, Spring shows that it is possible to build systems that are coherent, modular, and operationally simple - without breaking everything into a mesh of services.

Spring AI - Spring positions itself as a system layer for LLM-Based Applications

By 2025, it was already clear that integration with LLMs is not the problem of a single library, but of the entire application architecture — from configuration and observability to security and model lifecycle. Spring AI emerged precisely in this gap. It very quickly stopped being a “wrapper around the OpenAI API” and began to play the role that “classic” Spring has always assumed during major technological shifts: normalizing chaos. And the release of Spring AI 1.0 in May 2025 - after more than a year of intensive development - officially closed the experimentation phase.

What matters most is what Spring AI does not try to do. It does not compete with LangChain or purely agent-oriented frameworks. Instead, it assumes that an LLM is just another external system — like a database, a broker, or a file system — and as such should be handled consistently with the rest of a Spring application. Concepts such as ChatClient, Advisors, memory, and tool calling were designed to fit existing configuration, transactional, and testing models, rather than bypass them.
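A minimal sketch of that philosophy in practice, assuming a Spring Boot application with a configured model provider (the endpoint and names here are made up):

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ChatController {

  private final ChatClient chatClient;

  // The ChatClient.Builder is auto-configured by Spring Boot, like any other bean.
  ChatController(ChatClient.Builder builder) {
    this.chatClient = builder.build();
  }

  @GetMapping("/chat")
  String chat(@RequestParam String question) {
    // The model call is just another outbound call behind a fluent API.
    return chatClient.prompt()
        .user(question)
        .call()
        .content();
  }
}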

In this way, against the current landscape, Spring AI positions itself as an enterprise-grade glue layer. For teams that already use Spring Boot, Spring Security, Actuator, and Micrometer, entering the world of LLMs does not require rebuilding their mental model of the application. This is a massive adoption advantage — especially in organizations where experiments must very quickly transition into maintenance, audit, and scaling mode… particularly if those organizations already use Spring. And since many do, having such a “default” from a familiar provider genuinely simplifies things.

Langchain4j 1.0: A Java-Native Framework for LLMs

Spring AI was not the only JVM project to reach a 1.0 milestone in 2025. A few months earlier, in February, Langchain4j crossed the same threshold — and while both frameworks address LLM integration, their philosophies differ significantly.

Langchain4j was created in 2023 as a direct response to the success of Python’s LangChain - a framework that at the time defined how LLM applications were built (and, despite competition, largely still does). The problem was that for JVM teams, LangChain was inaccessible without rewriting entire services in Python or maintaining hybrid stacks. Dmitriy Dumanskiy, the project’s creator, set out to fill this gap: the Java and Kotlin worlds lacked AI libraries that were design-wise consistent with their idioms. From the start, Langchain4j combined ideas not only from LangChain, but also from LlamaIndex and Haystack, adapting them to JVM realities - with integrations for Spring Boot, Quarkus, Micronaut, and Helidon.

It’s worth emphasizing, however: despite the name and inspiration, Langchain4j is not a port or a wrapper. The project evolved independently, with its own abstraction model and an API designed from the ground up for the JVM. While Python’s LangChain evolved toward LangGraph and LCEL, Langchain4j followed its own path - one closer to Java idioms than to Python’s functional chaining style.

Where Spring AI deliberately positions itself as a “glue layer” for existing Spring applications, Langchain4j has broader ambitions - it aims to be a self-contained framework for building agents, independent of any particular web stack. This is a fundamental difference in approach: Spring AI assumes that the LLM is another external system to be integrated into an application, whereas Langchain4j treats the agent as the central architectural element. Hence the equal-footing integrations with four different web frameworks.

Proof of this ambition is langchain4j-agentic - a subproject developed alongside the core library, providing ready-made abstractions for building multi-agent systems with orchestration, planning, and task delegation. It is the JVM’s answer to patterns represented in the Python ecosystem by CrewAI or AutoGen, and to the whole agentic AI economy.


The library offers a unified API for multiple LLM providers and vector databases, supports context memory, RAG patterns, and tool calling — all expressed in idioms familiar to JVM developers. Integrations with Vertex AI, OpenAI, Ollama, and other providers allow companies to experiment with and compare models without rewriting integration logic.
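The flagship idiom is the AiServices pattern: you describe the agent as a plain Java interface and let the framework implement it. A minimal sketch (the model name and API-key handling are placeholders):

import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

// The contract is an ordinary Java interface...
interface Assistant {
  String chat(String userMessage);
}

class AssistantDemo {
  public static void main(String[] args) {
    // ...and Langchain4j binds it to a concrete provider at runtime.
    var model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .modelName("gpt-4o-mini")
        .build();

    Assistant assistant = AiServices.create(Assistant.class, model);
    System.out.println(assistant.chat("In one sentence: what does Langchain4j do?"));
  }
}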

Adoption is still not as broad as in Python, but in 2025 it is clearly growing - especially among Kotlin/Spring teams and enterprise applications that previously had to implement their own integration layers. According to Microsoft, hundreds of their customers are already using Langchain4j in production environments. The project has gained serious corporate backing: Red Hat actively co-develops Quarkus integrations (and Dmitriy himself now works at Red Hat), while Microsoft not only conducted a comprehensive security audit of the framework but also contributed nearly 200 pull requests — including integration of the official OpenAI Java SDK.

The community around the project is now incomparably larger than a year ago: the repository has surpassed 10,000 stars on GitHub, has over 500 contributors, and a rich ecosystem of extensions — from integrations with Azure Cosmos DB and Azure AI Search, through MCP support, to dedicated extensions for Quarkus and CDI. This is decisively no longer a “single author in a basement” project.

Kotlin LSP and why it really matters

In 2025, JetBrains officially released Kotlin LSP - an implementation of the Language Server Protocol developed internally by the Kotlin team, not as a community project. This marks a surprising shift in the positioning of a language that, throughout its history, was inseparably tied to a single IDE.

Where did that bond come from? Kotlin was created inside JetBrains as an “IDE-first” language. Code analysis, refactorings, and completions were all deeply integrated into IntelliJ’s architecture. Extracting that engine would have required rewriting the foundations. As long as IntelliJ dominated the JVM world, there was little pressure to do so. And then VS Code arrived… and not just VS Code.


By 2025, web-based editors, cloud tools, and - crucially - agentic IDEs like Cursor, Windsurf, and dozens of other AI-powered VS Code forks have become the norm. These tools are not built around IntelliJ plugins (although solutions are beginning to appear there as well). They are built around LSP. A language without LSP is a second-class language for them - the AI assistant can, at best, guess from raw text instead of operating on code structure, symbols, and types.

This is the crux of the issue: modern AI tools do not “read” code like humans do. They need structural information - dependency graphs, type definitions, semantic context. LSP has become the natural interface between a language and AI agents. Kotlin without LSP is Kotlin invisible to an entire generation of tools that will define how we write code in the coming years.

Kotlin LSP fits into a broader strategy of opening Kotlin up as a language platform: the same move as the K2 compiler front-end - one engine, many consumers. VS Code, Neovim, Zed, Cursor, and its successors all suddenly become real homes for Kotlin.

What comes next? If JetBrains takes LSP seriously - and all signs point to yes - then within the next 2–3 years we will see Kotlin as a full-fledged citizen in the ecosystem of agentic IDEs. This also means a potential explosion of third-party tools: linters, analyzers, code generators that do not exist today because building them required deep IntelliJ integration. LSP dramatically lowers that barrier. Paradoxically, by opening Kotlin to the world beyond IntelliJ, JetBrains may strengthen the position of the language itself more than any further IDE improvement ever could.

JetBrains Junie - an AI Coding Agent in the IDE World

JetBrains Junie, released in 2025, fits into a somewhat opposite trend — an AI coding agent deeply integrated with IntelliJ IDEA and the broader JetBrains ecosystem. It is the company’s direct response to Cursor, Windsurf, and the wave of agentic IDEs that, within just two years, evolved from a curiosity into a real threat to traditional editors.

Context is crucial here. Cursor and similar tools built their advantage on simplicity: take VS Code, add an AI agent, let it edit files and run commands. The problem is that these tools treat code as text. They parse it, tokenize it, and guess. This works great for simple tasks and greenfield projects. But in a large monorepo with thousands of classes, a complex dependency graph, and years of history? The guessing starts to break down.

JetBrains chose a different path. Instead of building an agent on top of text, they built it on top of semantics. Junie uses the same code analysis engine as IntelliJ itself — parsers, ASTs, type graphs, and symbol resolution. It understands not just the code, but also the build pipeline, dependency configuration, change history, and tests. When it proposes a refactoring, it knows which classes are related. When it generates code, it understands project conventions. This is an advantage Cursor simply doesn’t have — and one that’s extremely hard to recreate from scratch.

In practice, Junie goes far beyond “method completion.” It fixes compilation errors on the fly, completes test suites, proposes library migrations, and performs project-wide refactorings. For the JVM ecosystem, this is particularly important — Java and Kotlin have rich semantics, where architectural decisions manifest in relationships between classes and modules, not in individual lines of code.

Even if she is quite slow at doing that

Junie also fits into JetBrains’ broader strategy. Kotlin LSP opens the language to external editors, Mellum provides local models, and the agent itself integrates with QA pipelines and test coverage analysis. JetBrains is clearly not treating AI as an add-on, but as the foundation for a new tooling layer.

The question for 2026 is whether deep semantic integration will prove to be a sufficient advantage over Cursor’s rapid iteration speed and the simplicity of VS Code. Cursor has momentum, a strong community, and a low barrier to entry. JetBrains has decades of experience in understanding code and a large IntelliJ user base that doesn’t want to switch editors. If Junie proves that a semantically aware agent makes fewer mistakes in large projects — and early signals (including my own tests) suggest that it does — it may win where it matters most: in enterprise environments, legacy codebases, and places where “almost works” is not good enough.

Kafka 4.0 and the definitive end of the ZooKeeper era

To close, a slightly unusual entry - Kafka touches the JVM only indirectly, as it’s not a library but an infrastructure component. But that’s precisely why it matters: Kafka is a flagship project of the JVM ecosystem, and its health reflects on the reputation of the entire platform.

And that reputation has been tested. Redpanda - rewritten from scratch in C++ - spent recent years pushing a simple, effective narrative: Kafka without the JVM, without ZooKeeper, without GC pauses, faster and easier to operate. WarpStream went even further - Kafka-compatible, but written in Go and built around a completely different cost model. This was real pressure on a project that for years carried the burden of ZooKeeper and a reputation for being “powerful, but hard.”

Kafka 4.0 is the answer. ZooKeeper is gone for good - KRaft becomes the only mode. No separate cluster, no separate quorum, no failures unrelated to brokers. Fewer components, faster rebalancing, more predictable recovery. Kafka starts to look like a modern platform rather than a system from an era when a “coordinator” was a separate entity requiring separate care.

And the mentioned acquisition of Confluent by IBM is a signal that Kafka is entering a phase of enterprise-grade stability measured in decades.

What’s next? Redpanda and WarpStream won’t disappear - they have their niches and momentum, even if they can be assimilated, as happened with the Borg. But the main argument against Kafka has just evaporated. Over the next 2–3 years, we’ll likely see adoption in mid-scale environments where ZooKeeper was an insurmountable barrier. We’ll also see whether Kafka proves what the JVM has proven for years: that maturity and ecosystem are still powerful selling points.

Other Notable Releases

Hibernate 7.0: Jakarta Persistence 3.2, Jakarta Data 1.0, the new QuerySpecification API, a redesigned Restrictions API, a MongoDB Extension in public preview (HQL queries against MongoDB documents), and a move to the Apache License 2.0.

Quarkus + Mandrel 25: Full integration with Hibernate 7, a shared Apache License, Native Image optimizations, and support for Project Leyden AOT.

Testcontainers 2.0: API redesign, improved support for parallel execution, and tighter integration with cloud providers.

async-profiler 4.0 / 4.1: The de facto standard JVM profiling tool gains deeper integration with JFR and support for the latest JDKs — a quiet but important update for anyone serious about performance tuning.

Vert.x 5.0: The reactive framework from the Eclipse Foundation catches up with the Loom era — optimizations for Virtual Threads make it possible to combine the proven event-loop model with the simplicity of blocking code, without sacrificing performance.

Grails 7.0: The Groovy-based web framework jumps to a Spring Boot 4 baseline, continuing its strategy of closely following the Spring ecosystem — which means Java 17+ as a minimum and full compatibility with Jakarta EE 10.

JobRunr 8.0: One of the year’s curiosities - carbon-aware scheduling. Jobs can be scheduled for times when the power grid’s CO₂ emissions are lower.

Gradle 9.0: Configuration Cache enabled by default, deprecated APIs cleaned up, new contracts for code generators, and a hard dependency on Kotlin 2.3 signal the end of the era of “flexible but unpredictable” builds in favor of deterministic, cacheable build infrastructure.

JUnit 6.0: New lifecycle APIs, removal of legacy components, and improved parameterized testing.

Micronaut 4.8–4.10: Micronaut Langchain4j and Micronaut MCP for building Model Context Protocol servers — joining the JVM framework race for AI support alongside Spring AI and Helidon AI, while continuing strong AOT development.

Helidon 4.2: Oracle’s framework surprises not only with Native Image improvements, but above all with Helidon AI (no surprise there) — integration with LangChain4j. 2025 also brought CRaC support.

4. So, What Are We Entering 2026 With?

The Year No One Talked About Platforms - Yet Everything Stood on Them

2025 was a year in which the JVM ecosystem made some of the most important moves in its thirty-year history. Valhalla delivered its first concrete JEP. Leyden began to materialize in mainline JDK builds. Virtual Threads shed their last major “but.” Spring Framework went through its biggest transformation in years. GraalVM found its true identity.

At the same time, there’s a certain irony in the fact that 2025 will go down in history as the year of AI, agents, LLMs, and the generative revolution — while in everyday tech conversations, Java and the JVM were, at best, background noise. The world debated whether Cursor would replace programmers, whether GPT-5 would change everything, whether agentic IDEs were the future or a dead end… including within the JVM world. And in a way, that’s the perfect barometer of platform maturity: nobody talks about the foundations as long as they work. And in 2025, the JVM worked so well that it became invisible.

This isn’t a polite, courtesy compliment. Sometimes, in the race for novelty, we forget what critical infrastructure in real companies actually looks like.

  • Kafka processing trillions of events per day? JVM.

  • Spring powering half of the enterprise backend in the Fortune 500? JVM.

  • Banking applications, reservation systems, e-commerce platforms that handled Black Friday and Cyber Monday without blinking? JVM.

  • ML models served on GPUs via TornadoVM, LLM integrations through Spring AI and Langchain4j, agentic workflows in Quarkus? JVM.

Java doesn’t need to win the narrative war. It only needs to win the production war. Java’s strength in 2025 doesn’t lie in any single feature, framework, or hype cycle. It lies in something far harder to copy: ecosystem density. This is a platform that has an answer to almost every question - not always the most fashionable one, not always the most elegant, but one that works, is tested, maintained, and backward-compatible.

  • Want to build microservices? Spring Boot, Quarkus, Micronaut, Helidon - pick your philosophy

  • Need reactivity? Vert.x, Project Reactor, RxJava.

  • Prefer a modular monolith? Spring Modulith just reached version 2.0, enforcing module boundaries at the framework level.

  • Building an AI application? Spring AI, Langchain4j, Helidon AI — each with integrations to dozens of LLM providers.

  • Need native binaries? GraalVM Native Image.

  • Want to keep JVM dynamism but start fast? Project Leyden already delivers AOT class loading in JDK 24.

  • Need to talk to GPUs? TornadoVM.

  • To native code? Panama.

  • To WebAssembly? Chicory, GraalVM, CheerpJ.

This breadth of choice is both a curse and a blessing. A curse, because to outsiders it looks like chaos - “Java has three frameworks for everything and none of them is obvious.” A blessing, because for those inside the ecosystem, it means you almost never have to leave the platform. And in a world where context-switching and cross-stack integration costs are among the biggest hidden expenses in software projects, that cohesion has very real business value.

The power of a platform is that you don’t need to know everything in advance. The platform evolves, and your code keeps moving forward.

Where Java is at Risk - and why it’s worth talking about openly

It would be dishonest to talk about the JVM’s strength without explicitly naming where the platform has real problems. And in 2025, those problems became more visible than ever.

Problem one: narrative. Java is losing the mindshare war among young developers. For someone starting out in 2025, Python is “the AI language,” JavaScript is “the web language,” Go is “the cloud language,” and Rust is “the language of the future.” Java is “what they teach at university” or “what corporations use.” It’s not a fair assessment, but perception shapes reality. If young developers don’t enter the ecosystem, the ecosystem will face a talent problem a decade from now.

Problem two: AI-first tooling. Cursor, Windsurf, and the whole wave of agentic IDEs are built around VS Code and the JavaScript/TypeScript ecosystem. Language models are trained primarily on Python and JavaScript - Java is present, but it doesn’t dominate. Coding benchmarks (HumanEval, SWE-bench) are written in Python. When people ask “which language does AI write best,” Java is not the answer. This may change - I’m particularly hopeful about JetBrains, Junie, and deep semantic integration - but today, Java is not the “default language of the AI era.”

However, JetBrains is fighting for relevance. The merger of Community and Ultimate into a single IntelliJ, Kotlin LSP opening the language to external editors, Junie as an answer to Cursor, and the announcement of a new programming language — these are not the moves of a company resting on its laurels. They’re the moves of a company that sees the threat and responds aggressively. If Junie proves that an agent with deep semantic understanding outperforms a text-based agent, JetBrains could reclaim the narrative in the enterprise segment - which very much does not want to give up its beloved IDE.


Problem three: serverless and edge. Despite progress with Leyden and GraalVM Native Image, Java is still not a natural choice for serverless functions with sub-100ms cold starts. AWS Lambda, Cloudflare Workers, Vercel Functions - these platforms favor JavaScript, Go, and Rust. Java can run there, but it’s not the first choice. Serverless is one of the fastest-growing segments of the market, and Java is not leading it.

Problem four: entry complexity. Maven or Gradle? Spring Boot or Quarkus? Oracle JDK, Temurin, Corretto, or Azul? IntelliJ or VS Code with extensions? For an experienced developer, these choices are trivial. For someone just starting out, they’re overwhelming. Python has pip install and Jupyter Notebook. JavaScript has npm init and Node. Java has… a debate about why Gradle is better than Maven that lasts longer than writing the first application.

These problems are real, and it’s worth talking about them openly. But it’s also worth pointing out what the ecosystem is doing to address them.


There’s a strong temptation in 2025 to say that “platforms no longer matter.” That only AI matters, that code writes itself, that developers will become “prompters,” and that programming languages will fade into the background. This narrative is appealing because it’s simple. The problem is that it’s false.

AI doesn’t operate in a vacuum. It runs on infrastructure. Models are served by servers. Requests flow through load balancers. Data is stored in databases. Events move through Kafka. Applications are deployed on Kubernetes. And underneath all of that - beneath layers of abstraction, frameworks, orchestrators, and agents - there is code. Code that someone must maintain. Code that must be secure. Code that must run for years, not months. We like to call this DeepTech now, but in reality, these are just the foundations that allow people to calmly vibe-code on top.

Java isn’t sexy. Java isn’t on the covers of tech magazines. Java doesn’t have its own TikTok full of viral tutorials. But Java is in every bank, every airline, every reservation system, every e-commerce platform larger than a hobby project. Java is in Netflix, Amazon, Google, LinkedIn. Java is in systems that cannot afford to fail.

In a world obsessed with novelty, there is something deeply valuable (and refreshing!) about a technology that can be boring in the best possible way. Boring because it works. Boring because it’s predictable. Boring because when production breaks at 3 a.m., you know where to look for logs, who to ask for help, and which tools to run.

Java in 2025 didn’t need to be exciting. It needed to be reliable. And it was.

Now it’s time for even more interesting 2026. Because if Valhalla makes it to preview in JDK 26, if Leyden truly changes the cold-start equation, if Babylon starts showing real results, if Junie proves the advantage of a semantically aware agent - then 2026 might be the year when even those who don’t care about platforms start talking about the JVM.

And that will be interesting.


Thank you for making it to the end of this review. I know it was long. I hope that coffee was well invested ☕ - and, as always, I invite you to spend another year with JVM Weekly.

Wishing you a great time fulfilling your New Year’s resolutions!

BTW: Why 2025 was special for me

2025 was a very interesting year for me personally. Not because of a single release or hype wave - but because I spent a big chunk of it writing this book. The full premiere is planned for 2026, but the journey already reshaped how I think about engineering, AI, and decision-making under uncertainty.

👉 Vibe Engineering: Best Practices, Mistakes and Tradeoffs — Manning MEAP

If you feel like challenging yourself with AI productivity in my (and my co-author Tomek’s) interpretation - I recommend at least checking it out.

No memes inside (Daddy Manning wouldn’t allow it 😅), but I genuinely hope you’ll still enjoy the ride.

PS3: Just like last year, an uplifting song from an animation for you. Let’s make it our new year tradition ❤️

Awhhh… how I love Hazbin Hotel.



I am sorry, but everyone is getting syntax highlighting wrong

Translations: Russian

Syntax highlighting is a tool. It can help you read code faster. Find things quicker. Orient yourself in a large file.

Like any tool, it can be used correctly or incorrectly. Let’s see how to use syntax highlighting to help you work.

Christmas Lights Diarrhea

Most color themes have a unique bright color for literally everything: one for variables, another for language keywords, constants, punctuation, functions, classes, calls, comments, etc.

Sometimes it gets so bad one can’t see the base text color: everything is highlighted. What’s the base text color here?

The problem with that is, if everything is highlighted, nothing stands out. Your eye adapts and considers it a new norm: everything is bright and shiny, and instead of getting separated, it all blends together.

Here’s a quick test. Try to find the function definition here:

and here:

See what I mean?

So yeah, unfortunately, you can’t just highlight everything. You have to make decisions: what is more important, what is less. What should stand out, what shouldn’t.

Highlighting everything is like assigning “top priority” to every task in Linear. It only works if most of the tasks have lesser priorities.

If everything is highlighted, nothing is highlighted.

Enough colors to remember

There are two main use-cases you want your color theme to address:

  1. Look at something and tell what it is by its color (you can tell by reading text, yes, but why do you need syntax highlighting then?)
  2. Search for something. You want to know what to look for (which color).

1 is a direct index lookup: color → type of thing.

2 is a reverse lookup: type of thing → color.

Truth is, most people don’t do these lookups at all. They might think they do, but in reality, they don’t.

Let me illustrate. Before:

After:

Can you see it? I misspelled return as retunr and its color switched from red to purple.

I can’t.

Here’s another test. Close your eyes (not yet! Finish this sentence first) and try to remember: what color does your color theme use for class names?

Can you?

If the answer for both questions is “no”, then your color theme is not functional. It might give you comfort (as in—I feel safe. If it’s highlighted, it’s probably code) but you can’t use it as a tool. It doesn’t help you.

What’s the solution? Have an absolute minimum of colors. So little that they all fit in your head at once. For example, my color theme, Alabaster, only uses four:

  • Green for strings
  • Purple for constants
  • Yellow for comments
  • Light blue for top-level definitions

That’s it! And I was able to type it all from memory, too. This minimalism allows me to actually do lookups: if I’m looking for a string, I know it will be green. If I’m looking at something yellow, I know it’s a comment.

Limit the number of different colors to what you can remember.

If you swap green and purple in my editor, it’ll be a catastrophe. If somebody swapped colors in yours, would you even notice?

What should you highlight?

Something there isn’t a lot of. Remember—we want highlights to stand out. That’s why I don’t highlight variables or function calls—they are everywhere, your code is probably 75% variable names and function calls.

I do highlight constants (numbers, strings). These are usually used more sparingly and often are reference points—a lot of logic paths start from constants.

Top-level definitions are another good idea. They give you an idea of a structure quickly.

Punctuation: grey it out a little. That helps to separate names from syntax, and you care about names first, especially when quickly scanning code.

Please, please don’t highlight language keywords. class, function, if, else: stuff like this. You rarely look for them: “where’s that if” is a valid question, but you will be looking not at the if keyword itself, but at the condition after it. The condition is the important, distinguishing part. The keyword is not.

Highlight names and constants. Grey out punctuation. Don’t highlight language keywords.

Comments are important

The tradition of using grey for comments comes from the times when people were paid by line. If you have something like
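(say, a line whose comment merely restates the code - a made-up Java example)

i++; // increment i by one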

of course you would want to grey it out! This is bullshit text that doesn’t add anything and was written to be ignored.

But for good comments, the situation is opposite. Good comments ADD to the code. They explain something that couldn’t be expressed directly. They are important.

So here’s another controversial idea:

Comments should be highlighted, not hidden away.

Use bold colors, draw attention to them. Don’t shy away. If somebody took the time to tell you something, then you want to read it.

Two types of comments

Another secret nobody is talking about is that there are two types of comments:

  1. Explanations
  2. Disabled code

Most languages don’t distinguish between those, so there’s not much you can do syntax-wise. Sometimes there’s a convention (e.g. -- vs /* */ in SQL), then use it!

Here’s a real example from Clojure codebase that makes perfect use of two types of comments:

Disabled code is gray, explanation is bright yellow

Light or dark?

Per statistics, 70% of developers prefer dark themes. Being in the other 30%, that question always puzzled me. Why?

And I think I have an answer. Here’s a typical dark theme:

and here’s a light one:

On the latter one, colors are way less vibrant. Here, I picked them out for you:

Notice how many colors there are. No one can remember that many.

This is because dark colors are in general less distinguishable and more muddy. Look at the hue scale as we move brightness down:

Basically, in the dark part of the spectrum, you just get fewer colors to play with. There’s no “dark yellow” or good-looking “dark teal”.

Nothing can be done here. There are no magic colors hiding somewhere that have both good contrast on a white background and look good at the same time. By choosing a light theme, you are dooming yourself to a very limited, bad-looking, barely distinguishable set of dark colors.

So it makes sense. Dark themes do look better. Or rather: light ones can’t look good. Science ¯\_(ツ)_/¯

But!

But.

There is one trick you can do that I don’t see a lot of. Use background colors! Compare:

The first one has nice colors, but the contrast is too low: letters become hard to read.

The second one has good contrast, but you can barely see colors.

The last one has both: high contrast and clean, vibrant colors. Lighter colors are readable even on a white background since they fill a lot more area. Text is the same brightness as in the second example, yet it gives the impression of clearer color. It’s all upside, really.

UI designers have known about this trick for a while, but I rarely see it applied in code editors:

If your editor supports choosing background color, give it a try. It might open light themes for you.

Bold and italics

Don’t use. This goes into the same category as too many colors. It’s just another way to highlight something, and you don’t need too many, because you can’t highlight everything.

In theory, you might try to replace colors with typography. Would that work? I don’t know. I haven’t seen any examples.

Using italics and bold instead of colors

Myth of number-based perfection

Some themes pay too much attention to being scientifically uniform. Like, all colors have the same exact lightness, and hues are distributed evenly on a circle.

This could be nice (if you have OCD), but in practice, it doesn’t work as well as it sounds:

OkLab l=0.7473 c=0.1253 h=0, 45, 90, 135, 180, 225, 270, 315

The idea of highlighting is to make things stand out. If you make all colors the same lightness and chroma, they will look very similar to each other, and it’ll be hard to tell them apart.

Our eyes are way more sensitive to differences in lightness than in color, and we should use it, not try to negate it.

Let’s design a color theme together

Let’s apply these principles step by step and see where it leads us. We start with the theme from the start of this post:

First, let’s remove highlighting from language keywords and re-introduce base text color:

Next, we remove color from variable usage:

and from function/method invocation:

The thinking is that your code is mostly references to variables and method invocation. If we highlight those, we’ll have to highlight more than 75% of your code.

Notice that we’ve kept variable declarations. These are not as ubiquitous and help you quickly answer a common question: where does this thing come from?

Next, let’s tone down punctuation:

I prefer to dim it a little bit because it helps names stand out more. Names alone can give you the general idea of what’s going on, and the exact configuration of brackets is rarely equally important.

But you might roll with base color punctuation, too:

Okay, getting close. Let’s highlight comments:

We don’t use red here because you usually need it for squiggly lines and errors.

This is still one color too many, so I unify numbers and strings to both use green:

Finally, let’s rotate colors a bit. We want to respect nesting logic, so function declarations should be brighter (yellow) than variable declarations (blue).

Compare with what we started:

In my opinion, we got a much more workable color theme: it’s easier on the eyes and helps you find stuff faster.

Shameless plug time

I’ve been applying these principles for about 8 years now.

I call this theme Alabaster and I’ve built it a couple of times for the editors I used:

It’s also been ported to many other editors and terminals; the most complete list is probably here. If your editor is not on the list, try searching for it by name—it might be built-in already! I always wondered where these color themes come from, and now I became an author of one (and I still don’t know).

Feel free to use Alabaster as is or build your own theme using the principles outlined in the article—either is fine by me.

As for the principles themselves, they worked out fantastically for me. I’ve never wanted to go back, and just one look at any “traditional” color theme gives me a scare now.

I suspect that the only reason we don’t see more restrained color themes is that people never really thought about it. Well, this is your wake-up call. I hope this will inspire people to use color more deliberately and to change the default way we build and use color themes.

Permalink

Blue Ghost in Slovenia

Very old news, but just for completeness: Blue Ghost was performed at Cankarjev dom, Ljubljana, Slovenia back in October.

Permalink

Joyful Python with the REPL

REPL Driven Development is a workflow that makes coding both joyful and interactive. The feedback loop from the REPL is a great thing to have at your fingertips.

"If you can improve just one thing in your software development, make it getting faster feedback."
Dave Farley

Just like Test Driven Development (TDD), it will help you write testable code. I have also noticed a nice side effect from this workflow: REPL Driven Development encourages a functional programming style.

REPL Driven Development is an everyday thing among Clojure developers and doable in Python, but far less known here. I'm working on making it an everyday thing in Python development too.

But what is REPL Driven Development?

What is it?

You evaluate variables, code blocks, functions - or an entire module - and get instant feedback, just by hitting a key combination in your favorite code editor. There's no reason to leave the IDE for a less-featured shell to accomplish all of that. You already have autocomplete, syntax highlighting and the color theme set up in your editor. Why not use that, instead of a shell?

Evaluate code and get feedback, without leaving the code editor.

Ideally, the result of an evaluation pops up right next to the cursor, so you don't have to do any context switches or lose focus. It can also be printed out in a separate frame right next to the code. This means that testing the code you currently write is at your fingertips.

Easy setup

With some help from IPython, it is possible to write, modify & evaluate Python code in a REPL Driven way. I would recommend installing IPython globally, to make it accessible from anywhere on your machine.

pip install ipython

Configure IPython to make it ready for REPL Driven Development:

c.InteractiveShellApp.exec_lines = ["%autoreload 2"]
c.InteractiveShellApp.extensions = ["autoreload"]
c.TerminalInteractiveShell.confirm_exit = False

You will probably find the configuration file here: ~/.ipython/profile_default/ipython_config.py

You are almost all set.

Emacs setup

Emacs is my favorite editor. I'm using a couple of Python-specific packages to make life as a Python developer in general better, such as elpy. The auto-virtualenv package will also help make REPL Driven Development easier. It will find local virtual environments automatically and you can start coding without any python-path quirks.

Most importantly, set IPython as the default shell in Emacs. Have a look at my Emacs setup for the details.

VS Code setup

I am not a VS Code user. But I wanted to learn how well supported REPL Driven Development is in VS Code, so I added these extensions:

You would probably want to add keyboard shortcuts to get the true interactive feel of it. Here, I'm just trying things out by selecting code, right-clicking, and running it in an interactive window. It seems to work pretty well! I haven't yet figured out whether the interactive window picks up the global IPython config, or whether it reloads a submodule automatically when it is updated.

Evaluating code in the editor with fast feedback loops.
It would be great to have keyboard commands here, though.

Current limitations

In Clojure, you connect to & modify an actually running program by re-evaluating the source code. That is a wonderful thing for the developer experience in general. I haven't been able to do that with Python, and I believe Python would need something equivalent to nREPL to get that kind of magic power.

Better than TDD

I practice REPL Driven Development in my daily Python work. For me, it has become a way to quickly verify if the code I currently write is working as expected. I usually think of this REPL driven thing as Test Driven Development Deluxe. Besides just evaluating the code, I often write short-lived code snippets to test out some functionality. By doing that, I can write code and test it interactively. Sometimes, these code snippets are converted to proper unit tests.

For a live demo, have a look at my five minute lightning talk from PyCon Sweden about REPL Driven Development in Python.

Never too late to learn

I remember it took me almost a year of learning & developing Clojure before I actually "got it". Before that, I sometimes copied some code and pasted it into a REPL and then ran it. But that didn't give me a nice developer experience at all. Copy-pasting code is cumbersome and will often fail because of missing variables, functions or imports. Don't do that.

I remember the feeling when I figured out the REPL Driven Development workflow: I finally understood how software development should be done. It took me about 20 years to get there. It is never too late to learn new things. 😁



Top photo by ckturistando on Unsplash

Permalink

Simple decorator in Clojure

Code

;; simple_function_decorator.clj

;; 1. Simple Function Decorator (Higher-Order Function)

;; Define a decorator function
(defn timer-decorator [f]
  (fn [& args]
    (println "Starting execution...")
    (let [start (System/currentTimeMillis)
          result (apply f args)
          end (System/currentTimeMillis)]
      (println (str "Execution time: " (- end start) "ms"))
      result)))

;; Define a function
(defn slow-add [x y]
  (Thread/sleep 1000)
  (+ x y))

;; Apply the decorator
(def timed-add (timer-decorator slow-add))

;; Use it
(timed-add 5 3)  ; prints timing info and returns 8

Permalink

Clojure Deref (Jan 7, 2026)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

Upcoming Events

Libraries and Tools

Debut release

  • quiescent - A Clojure library for composable async tasks with automatic parallelization, structured concurrency, and parent-child and chain cancellation

  • machine-latch - Low-level synchronization primitive with state machine semantics

  • scoped - ScopedValue in Clojure, with fallback to ThreadLocal

  • pathling - Utilities for scanning and updating data structures

  • clj-physics - Multi-domain physics modeling in Clojure: frames, environments, dynamics, EM, and reduced-order CFD helpers.

Updates

  • Many Clojure contrib libs were updated to move the Clojure dependency to 1.11.4, which is past the CVE fixed in 1.11.2.

  • rewrite-clj 1.2.51 - Rewrite Clojure code and edn

  • ClojureScriptStorm 1.12.134-3 - A fork of the official ClojureScript compiler with extra code to make it a dev compiler

  • browser-jack-in 0.0.6 - A web browser extension that lets you inject a Scittle REPL server into any browser page.

  • m1p 2026.01.1 - Map interpolation and DIY i18n/theming toolkit

  • re-frame 1.4.4 - A ClojureScript framework for building user interfaces, leveraging React

  • re-frame-10x 1.11.0 - A debugging dashboard for re-frame. X-ray vision as tooling.

  • hikari-cp 4.0.0 - A Clojure wrapper to HikariCP JDBC connection pool

  • powerpack 2026.01.1 - A batteries-included static web site toolkit for Clojure

  • clojure-mcp 0.2.2 - Clojure MCP

  • rewrite-edn 0.5.0 - Utility lib on top of rewrite-clj with common operations to update EDN while preserving whitespace and comments.

Permalink

Tools I loved in 2025

Hi friends!

While answering 40 questions to ask yourself every year, I realized I’d adopted a bunch of new tools over 2025. All of them have improved my life, so I want to share them with you so they might improve yours too =D

A common theme is that all of my favorite tools promote a sort of coziness: They help you tailor the spaces where you spend your time to your exact needs and preferences.

Removing little annoyances and adding personal touches — whether to your text editor or your kitchen — not only improves your day-to-day mood, but cultivates a larger sense of security, ownership, and agency, all of which I find essential to doing great work. (And making great snacks.)

Physical

Workshop space

Last summer we moved to a ground-level apartment, giving me (for the first time in my life) my very own little workshop space:

We’re just renting the apartment, so building a larger shed isn’t an option, but it turns out that one can get quite a lot done with floor area barely larger than a sheet of plywood.

I designed a Paulk-style workbench top, cut it out on my friend’s CNC, and mounted it all on screwed-together 2x4 legs:

So far I’ve mostly worked with cheap plywood from Hornbach, using the following tools:

  • Makita track saw for both rough and precision cuts. There’s a handy depth stop that makes it easy to do a shallow scoring cut before making a full cut, which reduces tearing out the thin veneer.
  • Fein wet/dry vac to collect dust/chips. It includes an electrical outlet you can plug tools into, so the vacuum automatically turns on when the tool is drawing power, which is great.
  • My DIY powered air respirator built around a 3M Versaflo helmet has been working great — it’s so much more comfortable than fiddling with separate respirators and eye + ear protection. Since it takes 15 seconds to take on/off, I’m pretty much always wearing it when I’m doing anything in the shop.
  • Record Power Sabre 250 desktop bandsaw for quick cross cuts. The accuracy isn’t great, but I haven’t tried replacing the blade or tuning it much yet.
  • Bosch 12V palm edge router is extremely fun to use for rounding over and chamfering edges:
  • Bosch 12V drill/driver is lightweight and compact, and comes with a handy right-angle attachment that I’ve actually used:
  • Bosch PBD 40 - a drill press with digital speed control and zeroable digital depth gauge with 0.1mm precision. At $250, the 0.5mm play in the chuck is forgivable.

The workbench has MFT-style 20mm “dog holes” on a 96mm grid, which allows for all sorts of useful accessories.

For example: I purchased Hooked on Wood’s bench dog rail hinges, which makes it easy to flip the track up/down to slide wood underneath to make cross cuts:

The $10 dog hole clamp on the right holds a stop block, which allows me to make repeated cuts.

Since the dog holes were cut with a CNC and the rail is straight, square cuts can be made by:

  1. making a known straight cut with the track saw on its track (whether on the workbench or outside on sawhorses)
  2. pushing this reference edge up against the two bench dogs on the top
  3. cutting along the track

See Bent’s Woodworking for more detail on this process.

While I wish I had space for a full-size sliding table saw and/or CNC, this workbench and track saw seem like a decent backyard shed solution.

LED lighting

Last winter I decided to fight the gloomy Dutch weather by buying a bunch of high-CRI LEDs to flood my desk with artificial daylight:

See the build log for more details.

This lighting has worked out swimmingly — it helps me wake up in the morning and makes the whole space feel nice even on the grayest rainy day.

After sundown, my computer switches to “dark mode” colors and I switch the room to cozier warm-white LEDs.

I had about 8 meters of LED strip leftover, which I used with LED diffuser profiles to illuminate my workshop.

Euroboxes

When we moved into our new (completely unfurnished) apartment over the summer, I was adamant I’d build all of the furniture we needed. However, sightly storage solutions have taken longer than anticipated, so to eliminate the immediate clutter I purchased a bunch of 600x400x75mm euroboxes:

At $4/each (used), they’re an absolute bargain.

Since they’re plastic, they slide great on waxed wood rails and make perfect lil’ utility drawers. The constraint of needing to use fixed size drawers makes it easier for me to design the rest of a furniture piece around them.

For example, just take a brad nailer to the limitless amounts of discarded particle board available on the streets of Amsterdam, and boom, one-day-build 3d-printer cart in the meter closet:

Or, throw some drawers in the hidden side of these living room plywood coffee table / bench / scooters:

Now we have a tidy place to hide the TV remote and wireless keyboard/mouse, coffee table books, store extra blankets, etc.

Ikea Skadis coffee/smoothie station

Our kitchen only has 1.6 m² (17 ft²) of counter space, so we mounted two Ikea Skadis pegboards to the side of our fridge to make a coffee / smoothie station:

The clear Ikea containers are great for nuts and coffee since you can grab them with one hand and drop ‘em in the blender or grinder.

I designed and 3d-printed custom mounts for my Clever Dripper, coffee grinder (1Zpresso k-series), little water spritzer, and bottles of Friedhats beans.

Since we can’t screw into the fridge, the panels are hanging from some 3d-printed hooks command stripped on the top of the fridge cabinet. Clear nano-tape keeps the Skadis boards from swinging.

The cheap Ninja cup blender is quite loud, so we leave a pair of Peltor X4 ear defenders hanging next to it.

Ikea Maximera drawers

As soon as we made the smoothie station we decided to replace the cabinet shelves underneath it with drawers. (The only drawers that came with the kitchen were installed underneath the range, creating constant conflict between the cook and anyone needing silverware.)

Decent soft-close undermount drawer slides from Blum/Hettich cost like $30 each, and for the same price Ikea sells slides with a drawer box attached.

Since we’re renting and can’t make permanent changes to the kitchen, I built a carcass for the drawers within the existing cabinet:

The particle board sides carry the weight of the drawers to the base of the existing cabinet, and they’re fixed to the walls with nano-tape rather than screws.

Nicki 3d-printed cute pulls and we threw those onto temporary fronts made of leftover MDF. As you can see from the hardware poking through, these 8mm fronts are a bit too thin, so I plan to replace them with thicker MDF, probably sculpted with a goofy pattern in the style of Arno Hoogland.

Having drawers is awesome:

We bought a bunch of extra cups for the Ninja blender and keep them pre-filled with creatine and protein powder. The bottom drawer holds the dozen varieties of peanut butter remaining from the Gwern-inspired tasting I held in November. (The UK’s Pip & Nut peanut butters were the crowd favorites, by the way.)

Digital

Emacs and LLMs

I’ve used Emacs for something like 15 years, but after the first year or so I deliberately avoided trying to customize it, as that felt like too much of a distraction from the more important yaks I was shaving through my 20’s and early 30’s.

However, in early 2025 I decided to dive in, for two reasons:

  1. I ran across Prot’s video demonstrating a cohesive set of well-designed search and completion packages, which suggested to me that there were interesting ideas being developed in the “modern” Emacs community
  2. I discovered gptel, which makes it extremely easy to query large language models within Emacs — just highlight text, invoke gptel, and the response is inserted right there.

What’s special about Emacs compared to other text editors is that it’s extremely easy to customize. Rather than a “text editor”, Emacs is better thought of as an operating system and live programming environment which just so happens to have a lot of functions related to editing text.

My day-to-day coding, writing, and thinking environment within Emacs has improved tremendously in 2025, as every time I’ve had any sort of customization or workflow-improvement idea, I’ve been able to try it out in just a minute or two by having an LLM generate the necessary Emacs Lisp code.

My mentality changed from “yeah, I’m sure it’s possible but I don’t have time or interest to figure out how to do it with this archaic and quirky programming language” to “let me spend two minutes trying”.

Turns out there are a lot of little improvements that an LLM can make in a few minutes. Here are some examples:

  • Literally for this article, I asked the LLM to write a Ruby method for my static site generator to render a table of contents (which you can see above!)

  • Lots of places don’t support markdown natively, so I had an LLM write me an Emacs function to render selected markdown text to HTML on the pasteboard, which lets me write in Emacs and then just ⌘V in Gmail to get a nicely formatted message.

  • When I write markdown and want to include an image, it’s annoying to have to copy/paste in the path to the image, so I had an LLM write me an autocomplete against all of the image files in the same directory as the markdown file. I’ve been using this for pretty much every article/newsletter I write now, since they usually have images.

  • I keep a daily activity log as a simple EDN file with entries like:

    {:start #inst "2025-12-28T12:00+01:00" :tags #{"lunch" "nicki"} :duration 2} 
    {:start #inst "2025-12-28T09:00+01:00" :tags #{"paneltron" "computering"} :duration 6
     :description "rough out add-part workflow."}
    {:start #inst "2025-12-28T08:20+01:00" :tags #{"wakeup"}}
    

    (Everything’s rounded to the half-hour.)

    I started this when I was billing by the hour (a decade ago), and have kept it up because it’s handy to have a lightweight, low-friction record of what I’ve been up to. I used to do occasional analysis manually via a REPL, but couldn’t sleep one night so I spent 30 minutes having an LLM throw together a visual summary which I can invoke for whatever week is under my cursor. It looks like:

    Week: 2025-12-22 to 2025-12-28
    
    Locations:
      Amsterdam - Friday 2025-12-19T13:30+01:00
    
    computering       16.5h ##################
    box-carts         14.5h ###############
    woodworking       13.0h ##############
    llms               8.0h ########
    dinner             7.0h #######
    

    and includes my most recent :location (an attribute I started tagging entries with to help keep track of visa limitations while traveling).

    I’m extremely chuffed about having quick access to weekly summaries and suspect that tying that to my existing habit of recording entries will be a good intra-week feedback loop about whether I’m spending time in alignment with my priorities. (A rough Clojure sketch of this kind of aggregation appears right after this list.)

  • Whenever I write an article or long message, before sending it I run it by an LLM with the following prompt:

    my friend needs feedback on this article — are there any typos, confusing sentences, or other things that could be improved? Be blunt and I’ll convey the feedback to my friend in a friendly way.

    This one doesn’t even involve any code, it’s just a habit that’s easy because it’s easy to call an LLM from within Emacs. LLMs will note repeated words, incorrect homonyms, and awkward sentences that simple spell-checkers miss. Here’s an example from this very article:

    1. Double parenthesis: [Paulk-style]((https://www.youtube.com/watch?v=KnNi6Tpp-ac)) — remove one set
    2. “Ikea Maximara drawers” in the heading, but the product is actually “Maximera”
    3. http://localhost:9292/newsletter/2025_06_03_prototyping_a_language/ — you’ve left a localhost URL in the FlowStorm section
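
Back to the activity log: the weekly report is essentially a fold over those EDN entries. Here is a minimal Clojure sketch of that kind of aggregation, assuming the log lives in a file like log.edn; the file name, helper names, and output format are illustrative, not the author's actual code:

(require '[clojure.edn :as edn])

;; Read the log: whitespace-separated EDN maps, as in the entries above.
(defn read-entries [path]
  (edn/read-string (str "[" (slurp path) "]")))

;; Sum hours per tag for entries whose :start falls in [from, to).
(defn weekly-summary [entries from to]
  (->> entries
       (filter #(and (:duration %)
                     (<= (compare from (:start %)) 0)
                     (neg? (compare (:start %) to))))
       (mapcat (fn [{:keys [tags duration]}]
                 (for [tag tags] [tag duration])))
       (reduce (fn [acc [tag hours]] (update acc tag (fnil + 0) hours)) {})
       (sort-by val >)))

;; Print an ASCII bar chart, roughly like the output shown above.
(defn print-summary! [summary]
  (doseq [[tag hours] summary]
    (println (format "%-16s %5.1fh %s"
                     tag (double hours) (apply str (repeat (long hours) "#"))))))

(comment
  (print-summary!
   (weekly-summary (read-entries "log.edn")
                   #inst "2025-12-22" #inst "2025-12-29")))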

Emacs has a pretty smooth gradient between “adjust a keybinding”, to “quick helper function”, to a full on workflow. Here’s an example from the latter end of that spectrum.

I asked Claude Code to make some minor CSS modifications for a project, then got nerd-sniped trying to understand why it used a million tokens to explore my 4000-word codebase and edit a dozen lines:

Usage by model:
        claude-haiku:  8.6k input, 5.4k output, 434.2k cache read, 33.4k cache write ($0.1207)
       claude-sonnet:  1.0k input, 262 output, 0 cache read, 0 cache write ($0.0069)
     claude-opus-4-5:  214 input, 8.3k output, 842.5k cache read, 47.8k cache write ($0.93)

After a bit of digging, it seemed likely this is a combination of factors:

  • Claude Code’s system and tool prompts
  • Repeatedly invoking tools to grep around the directory and read files 200 lines at a time
  • Absolute nonsense — Mario Zechner has some great analysis on this (fun fact: “Claude Code uses Haiku to generate those little whimsical ‘please wait’ messages. For every. token. you. input.”).

For comparison, I invoked Opus 4.5 manually with all of my source code and asked what I needed to change, and it nailed the answer using only 5000 tokens (4500 input, 500 output).

So I leaned into this and wrote my own lightweight, single-shot workflow in Emacs:

  • I write something like:

    @rewrite
    
    /path/to/file1
    /path/to/file2
    
    Please do X, Y, Z, thanks!
    

    and when I send it, some elisp code:

  • adds the specified files to the context window

  • sets the system prompt to be "please reply only with a lil’ string replacement patch format that looks like …”

  • sends the LLM response to a Babashka script which applies this patch, sandboxed to just the specified files.

I’ve used it a handful of times so far and it works exactly the way I’d imagined — it’s much faster than waiting for an “agent” to make a dozen tool calls and lets me take advantage of LLMs for tedious work while remaining in the loop.

(Admittedly, this one took a few hours rather than a few minutes, but it was well worth it in terms of getting some hands-on experience building a structured LLM-based workflow.)
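
The patch format itself isn't spelled out in the post, so the following Babashka sketch of the applying side is purely illustrative: it assumes, hypothetically, that the LLM replies with a vector of {:file :search :replace} maps, and it mirrors the sandboxing described above by refusing to touch files that weren't listed in the prompt:

#!/usr/bin/env bb
;; apply-patch.bb -- illustrative sketch only; the real patch format is not shown in the post.
(require '[clojure.edn :as edn]
         '[clojure.string :as str])

(defn apply-patch!
  "Apply {:file .. :search .. :replace ..} edits, but only to the allowed files."
  [allowed-files patches]
  (let [allowed (set allowed-files)]
    (doseq [{:keys [file search replace]} patches]
      (if-not (contains? allowed file)
        (println "refusing to touch" file "- it was not listed in the prompt")
        (let [content (slurp file)]
          (if (str/includes? content search)
            (spit file (str/replace-first content search replace))
            (println "search text not found in" file)))))))

;; Usage: bb apply-patch.bb file1 file2 < llm-response.edn
(apply-patch! *command-line-args* (edn/read-string (slurp *in*)))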

A little Photos.app exporter

When you copy images out of Photos.app, it only puts a 1024 pixel wide “preview image” on the pasteboard. This loses too much detail.

The built-in “export” workflow is tedious to use and doesn’t actually compress well, so you have to do another pass through Squoosh or ImageOptim anyway. Of course, you’ll want to resize before you do that, maybe in Preview?

I noticed how annoyed I was trying to add photos to my articles, so I vibe-coded a little Photos.app exporter that behaves exactly like how I want.

UV (Python dependencies)

I haven’t done much Python in my career, but it’s a popular language that I’d occasionally need to tap into to use some specific library (e.g., OpenCV, NumPy, JAX) or run the code behind some interesting research paper.

The package management has always been (from my perspective of a casual outsider) an absolute mess. Yes, with lots of good reasons related to packaging native code across different operating systems and architectures, etc. etc. etc.

Whatever, if pip install failed to build my eggs or wheels or whatever, I’d usually just give up.

As for reproducibility and pinning to exact versions…¯\_(ツ)_/¯.

“Maybe if I do nothing the problem will fix itself” isn’t a great problem solving strategy, but it definitely worked out for me for understanding Python dependency managers: I came across UV in late 2024 and it…just works? It’s also fast!

I can now finally create a Python project, specify some dependencies, and automatically get a lockfile ensuring it’ll work just fine on my other computer, 6 months from now!

This problem has been a thorn in the side of not just software developers, but also scientific researchers for decades. The folks who’ve finally resolved it deserve, like, the Nobel Peace prize.

Mise-en-place (all dependencies)

This past year I’ve also been loving mise-en-place. From their homepage:

mise is a tool that manages installations of programming language runtimes and other tools for local development. For example, it can be used to manage multiple versions of Node.js, Python, Ruby, Go, etc. on the same machine.

Once activated, mise can automatically switch between different versions of tools based on the directory you’re in. This means that if you have a project that requires Node.js 18 and another that requires Node.js 22, mise will automatically switch between them as you move between the two projects.

I’ve used language-specific versions of this idea, and it’s been refreshing to throw all of those out in favor of a single uniform, fast solution.

I have a user-wide “default” config, containing stuff like:

  • languages (a JVM, clojure, ruby)
  • language servers (clojure-lsp, rust analyzer, etc.)
  • git-absorb “git commit --fixup, but automatic”
  • Difftastic AST-aware diffing
  • Numbat, an awesome unit-aware scientific calculator

and on specific projects I specify the same and additional tools, so that collaborators can have a “single install” that guarantees we’re all on the same versions of everything.

Sure, something like Nix (hermetically sealed, content-addressed, etc., etc.) is better in theory, but the UX and conceptual model are a mess.

Mise feels like a strong, relatively simple local optimum: You specify what you want through a simple hierarchy of configs, mise generates lockfiles, downloads the requested dependencies, and puts them on the $PATH based on where you are.

I’ve been using it for a year and haven’t had to learn anything beyond what I did in the first 15 minutes. It works.

Atuin (shell history)

Atuin records all of your shell commands, (optionally) syncs them across multiple computers, and makes it easy to recall commands with a fuzzy finder. It’s all self-hostable and is distributed as a single binary.

I find the shell history extremely useful, both for remembering infrequently used commands, as well as for simply avoiding typing out tediously long ones with many flags. Having a proper fuzzy search that shows dozens of results (rather than just one at a time) makes it straightforward to use.

Before this I wouldn’t have thought twice about my shell command history, and now it’s something I deliberately back up because it’s so useful.

YouTube Premium

At some point in 2025 the ad-blocker I was using stopped working on YouTube and I started seeing either pauses or (uggghhh) commercials (in Dutch, which Google must think I understand, despite 15 years of English GMail, not to mention my Accept-Language header).

Given that YouTube is a Wonder of the World and $20/month leaves me with a consumer surplus in the neighborhood of $10⁴–$10⁵, I decided to subscribe to YouTube Premium.

Honestly I just wanted the ads to stop, but it’s even better — I can conveniently download videos to my phone, which means I can watch interesting videos while riding on the stationary exercise bike in the park near my house.

It also comes with YouTube Music, which immediately played me a late 00’s indie rock playlist that brought me back to college.

100% worth it.

FlowStorm (Clojure debugger)

I don’t normally use debuggers, especially since in Clojure it’s usually straightforward to pretty-print the entire application state.

However, this usual approach failed me when I was building an interpreter for some CAD programming language experiments — each AST node holds a reference to its environment (a huge map), and simply printing a tree yields megabytes of text.

FlowStorm takes advantage of the fact that idiomatic Clojure uses immutable data structures — the debugger simply holds a reference to every value generated during a computation, so that you can analyze them later.

There are facilities to search all of the recorded values. So if you see a string “foo” on your rendered webpage or whatever, you can easily answer questions like “where is the first callstack where the string ‘foo’ shows up?”.

All of the recorded data is available programmatically too. I used this infrastructure to make a live visualization of a 2D graphics program where as you move your cursor around the program source code, you see the closest graphical entity rendered automatically.

(“A debounced callback triggered by cursor movement which executes Clojure code and highlights text according to the return value” is another example of an Emacs customization I never would have attempted prior to LLMs.)

Whispertron (transcription)

I vibe-coded my own lil’ voice transcription app back in October 2024, but I’m including it in this list because using it has become second-nature to me in 2025.

Before I had reliable transcription at my fingertips, I never felt hindered by my typing speed (~125 words per minute). However, now that I have it, I find that I’m expressing much more when I’m dictating compared to typing.

It reminds me of the difference between responding to an email on an iPhone versus using a computer with a large monitor and full keyboard. I find myself providing much more context and otherwise elaborating my thoughts in more detail. It’s just easier to speak out loud than type out the same information.

This yields much better results when prompting LLMs: typing, I’ll say “do X”; speaking, I’ll say “do X, maybe try A, B, C, remember about Y and Z constraints”.

It also yields better relationships: When emailing and texting friends, I’ll dictate (and then clean up / format) much more detailed, longer responses than what I’d type.

Misc. stuff

Permalink

OSS updates November and December 2025

In this post I'll give updates about open source I worked on during November and December 2025.

To see previous OSS updates, go here.

Sponsors

I'd like to thank all the sponsors and contributors that make this work possible. Without you, the projects below would not be as mature, or would not exist or be maintained at all! So a sincere thank you to everyone who contributes to the sustainability of these projects.

gratitude

Current top tier sponsors:

Open the details section for more info about sponsoring.

Sponsor info

If you want to ensure that the projects I work on are sustainably maintained, you can sponsor this work in the following ways. Thank you!

Updates

Clojure Conj 2025

Last November I had the honor and pleasure of visiting Clojure Conj 2025. I met a host of wonderful and interesting long-time and new Clojurians, many of whom I've known online for a long time and now met for the first time. It was especially exciting to finally meet Rich Hickey and talk to him during a meeting about Clojure dialects and Clojure tooling. The talk I gave there, "Making tools developers actually use", will come online soon.

presentation at Dutch Clojure meetup

Babashka conf and Dutch Clojure Days 2026

In 2026 I'm organizing Babashka Conf 2026. It will be an afternoon event (13:00-17:00) hosted in the Forum hall of the beautiful public library of Amsterdam. More information here. Get your ticket via Meetup.com (currently there's a waiting list, but more places will become available once speakers are confirmed). The CfP will open mid-January. The day after Babashka Conf, Dutch Clojure Days 2026 will be happening. It's not too late to get your talk proposal in. More info here.

Clojurists Together: long term funding

I'm happy to announce that I'm among the 5 developers who were granted Long term support for 2026. Thanks to all who voted! Read the announcement here.

Projects

Here are updates about the projects/libraries I've worked on in the last two months in detail.

  • babashka: native, fast starting Clojure interpreter for scripting.

    • Bump process to 0.6.25
    • Bump deps.clj
    • Fix #1901: add java.security.DigestOutputStream
    • Redefining namespace with ns should override metadata
    • Bump nextjournal.markdown to 0.7.222
    • Bump edamame to 1.5.37
    • Fix #1899: with-meta followed by dissoc on records no longer works
    • Bump fs to 0.5.30
    • Bump nextjournal.markdown to 0.7.213
    • Fix #1882: support for reifying java.time.temporal.TemporalField (@EvenMoreIrrelevance)
    • Bump Selmer to 1.12.65
    • SCI: sci.impl.Reflector was rewritten into Clojure
    • dissoc on record with non-record field should return map instead of record
    • Bump edamame to 1.5.35
    • Bump core.rrb-vector to 0.2.0
    • Migrate detection of the executable name for self-executing uberjars from ProcessHandle to native-image ProcessInfo to avoid sandbox errors
    • Bump cli to 0.8.67
    • Bump fs to 0.5.29
    • Bump nextjournal.markdown to 0.7.201
  • SCI: Configurable Clojure/Script interpreter suitable for scripting

    • Add support for :refer-global and :require-global
    • Add println-str
    • Fix #997: Var is mistaken for local when used under the same name in a let body
    • Fix #1001: JS interop with reserved js keyword fails (regression of #987)
    • sci.impl.Reflector was rewritten into Clojure
    • Fix babashka/babashka#1886: Return a map when dissociating a record basis field.
    • Fix #1011: reset ns metadata when evaluating ns form multiple times
    • Fix for https://github.com/babashka/babashka/issues/1899
    • Fix #1010: add js-in in CLJS
    • Add array-seq
  • clj-kondo: static analyzer and linter for Clojure code that sparks joy.

    • #2600: NEW linter: unused-excluded-var to warn on unused vars in :refer-clojure :exclude (@jramosg)
    • #2459: NEW linter: :destructured-or-always-evaluates to warn on s-expressions in :or defaults in map destructuring (@jramosg)
    • Add type checking support for sorted-map-by, sorted-set, and sorted-set-by functions (@jramosg)
    • Add new type array and type checking support for the next functions: to-array, alength, aget, aset and aclone (@jramosg)
    • Fix #2695: false positive :unquote-not-syntax-quoted in leiningen's defproject
    • Leiningen's defproject behavior can now be configured using leiningen.core.project/defproject
    • Fix #2699: fix false positive unresolved string var with extend-type on CLJS
    • Rename :refer-clojure-exclude-unresolved-var linter to unresolved-excluded-var for consistency
    • v2025.12.23
    • #2654: NEW linter: redundant-let-binding, defaults to :off (@tomdl89)
    • #2653: NEW linter: :unquote-not-syntax-quoted to warn on ~ and ~@ usage outside syntax-quote (`) (@jramosg)
    • #2613: NEW linter: :refer-clojure-exclude-unresolved-var to warn on non-existing vars in :refer-clojure :exclude (@jramosg)
    • #2668: Lint & syntax errors in let bindings and lint for trailing & (@tomdl89)
    • #2590: duplicate-key-in-assoc changed to duplicate-key-args, and now lints dissoc, assoc! and dissoc! too (@tomdl89)
    • #2651: resume linting after paren mismatches
    • clojure-lsp#2651: Fix inner class name for java-class-definitions.
    • clojure-lsp#2651: Include inner class java-class-definition analysis.
    • Bump babashka/fs
    • #2532: Disable :duplicate-require in require + :reload / :reload-all
    • #2432: Don't warn for :redundant-fn-wrapper in case of inlined function
    • #2599: detect invalid arity for invoking collection as higher order function
    • #2661: Fix false positive :unexpected-recur when recur is used inside clojure.core.match/match (@jramosg)
    • #2617: Add types for repeatedly (@jramosg)
    • Add :ratio type support for numerator and denominator functions (@jramosg)
    • #2676: Report unresolved namespace for namespaced maps with unknown aliases (@jramosg)
    • #2683: data argument of ex-info may be nil since clojure 1.12
    • Bump built-in ClojureScript analysis info
    • Fix #2687: support new :refer-global and :require-global ns options in CLJS
    • Fix #2554: support inline configs in .cljc files
  • edamame: configurable EDN and Clojure parser with location metadata and more

    • Minor: leave out :edamame/read-cond-splicing when not splicing
    • Allow :read-cond function to override :edamame/read-cond-splicing value
    • The result from :read-cond with a function should be spliced. This behavior differs from :read-cond + :preserve which always returns a reader conditional object which cannot be spliced.
    • Support function for :features option to just select the first feature that occurs
  • squint: CLJS syntax to JS compiler

    • Allow macro namespaces to load "node:fs", etc. to read config files for conditional compilation
    • Don't emit IIFE for top-level let so you can write let over defn to capture values.
    • Fix js-yield and js-yield* in expression position
    • Implement some? as macro
    • Fix #758: volatile!, vswap!, vreset!
    • pr-str, prn etc now print EDN (with the idea that you can paste it back into your program)
    • new #js/Map reader that reads a JavaScript Map from a Clojure map (maps are printed like this with pr-str too)
    • Support passing keyword to mapv
    • #759: doseq can't be used in expression context
    • Fix #753: optimize output of dotimes
    • alength as macro
  • reagami: A minimal zero-deps Reagent-like for Squint and CLJS

    • Performance enhancements
    • treat innerHTML as a property rather than an attribute
    • Drop support for camelCased properties / (css) attributes
    • Fix :default-value in input range
    • Support data param in :on-render
    • Support default values for uncontrolled components
    • Fix child count mismatch
    • Fix re-rendering/patching of subroots
    • Add :on-render hook for mounting/updating/unmounting third part JS components
  • NEW: parmezan: fixes unbalanced or unexpected parens or other delimiters in Clojure files

  • CLI: Turn Clojure functions into CLIs!

    • #126: - value accidentally parsed as option, e.g. --file -
    • #124: Specifying exec fn that starts with hyphen is treated as option
    • Drop Clojure 1.9 support. Minimum Clojure version is now 1.10.3.
  • clerk: Moldable Live Programming for Clojure

    • always analyze doc (but not deps) when no-cache is set (#786)
    • add option to disable inline formulas in markdown (#780)
  • scittle: Execute Clojure(Script) directly from browser script tags via SCI

  • Nextjournal Markdown

    • Add config option to avoid TeX formulas
    • API improvements for passing options
  • cherry: Experimental ClojureScript to ES6 module compiler

    • Fix cherry compile CLI command not receiving file arguments
    • Bump shadow-cljs to 3.3.4
    • Fix #163: Add assert to macros (@willcohen)
    • Fix #165: Fix ClojureScript protocol dispatch functions (@willcohen)
    • Fix #167: Protocol dispatch functions inside IIFEs; bump squint accordingly
    • Fix #169: fix extend-type on Object
    • Fix #171: Add satisfies? macro (@willcohen)
  • deps.clj: A faithful port of the clojure CLI bash script to Clojure

    • Released several versions catching up with the clojure CLI
  • quickdoc: Quick and minimal API doc generation for Clojure

    • Fix extra newline in codeblock
  • quickblog: light-weight static blog engine for Clojure and babashka

    • Add support for a blog contained within another website; see Serving an alternate content root in README. (@jmglov)
    • Upgrade babashka/http-server to 0.1.14
    • Fix :blog-image-alt option being ignored when using CLI (bb quickblog render)
  • nbb: Scripting in Clojure on Node.js using SCI

    • #395: fix vim-fireplace infinite loop on nREPL session close.
    • Add ILookup and Cons
    • Add abs
    • nREPL: support "completions" op
  • neil: A CLI to add common aliases and features to deps.edn-based projects.

    • neil.el - a hook that runs after finding a package (@agzam)
    • neil.el - adds a function for injecting a found package into current CIDER session (@agzam)
    • #245: neil.el - neil-executable-path now can be set to clj -M:neil
    • #251: Upgrade library deps-new to 0.10.3
    • #255: update maven search URL
  • fs - File system utility library for Clojure

    • #154 reflect in directory check and docs that move never follows symbolic links (@lread)
    • #181 delete-tree now deletes broken symbolic link root (@lread)
    • #193 create-dirs now recognizes sym-linked dirs on JDK 11 (@lread)
    • #184: new check in copy-tree for copying to self too rigid
    • #165: zip now excludes zip-file from zip-file (@lread)
    • #167: add root fn which exposes Path getRoot (@lread)
    • #166: copy-tree now fails fast on attempt to copy parent to child (@lread)
    • #152: an empty-string path "" is now (typically) understood to be the current working directory (as per underlying JDK file APIs) (@lread)
    • #155: fs/with-temp-dir clj-kondo linting refinements (@lread)
    • #162: unixify no longer expands into absolute path on Windows (potentially BREAKING)
    • Add return type hint to read-all-bytes
  • process: Clojure library for shelling out / spawning sub-processes

    • #181: support :discard or ProcessBuilder$Redirect as :out and :err options

Contributions to third party projects:

  • ClojureScript
    • CLJS-3466: support qualified method in return position
    • CLJS-3468: :refer-global should not make unrenamed object available

Other projects

These are (some of the) other projects I'm involved with, but little to no activity happened on them in the past month.

  • pod-babashka-go-sqlite3 (https://github.com/babashka/pod-babashka-go-sqlite3): A babashka pod for interacting with sqlite3
  • unused-deps (https://github.com/borkdude/unused-deps): Find unused deps in a clojure project
  • pod-babashka-fswatcher (https://github.com/babashka/pod-babashka-fswatcher): babashka filewatcher pod
  • sci.nrepl (https://github.com/babashka/sci.nrepl): nREPL server for SCI projects that run in the browser
  • babashka.nrepl-client (https://github.com/babashka/nrepl-client)
  • http-server (https://github.com/babashka/http-server): serve static assets
  • nbb (https://github.com/babashka/nbb): Scripting in Clojure on Node.js using SCI
  • sci.configs (https://github.com/babashka/sci.configs): A collection of ready to be used SCI configs.
  • http-client (https://github.com/babashka/http-client): babashka's http-client
  • html (https://github.com/borkdude/html): Html generation library inspired by squint's html tag
  • instaparse-bb (https://github.com/babashka/instaparse-bb): Use instaparse from babashka
  • sql pods (https://github.com/babashka/babashka-sql-pods): babashka pods for SQL databases
  • rewrite-edn (https://github.com/borkdude/rewrite-edn): Utility lib on top of rewrite-clj
  • rewrite-clj (https://github.com/clj-commons/rewrite-clj): Rewrite Clojure code and edn
  • tools-deps-native (https://github.com/babashka/tools-deps-native) and tools.bbuild (https://github.com/babashka/tools.bbuild): use tools.deps directly from babashka
  • bbin (https://github.com/babashka/bbin): Install any Babashka script or project with one command
  • qualify-methods (https://github.com/borkdude/qualify-methods): Initial release of experimental tool to rewrite instance calls to use fully qualified methods (Clojure 1.12 only)
  • tools (https://github.com/borkdude/tools): a set of bbin installable scripts
  • babashka.json (https://github.com/babashka/json): babashka JSON library/adapter
  • speculative (https://github.com/borkdude/speculative)
  • squint-macros (https://github.com/squint-cljs/squint-macros): a couple of macros that stand in for applied-science/js-interop (https://github.com/applied-science/js-interop) and promesa (https://github.com/funcool/promesa) to make CLJS projects compatible with squint and/or cherry.
  • grasp (https://github.com/borkdude/grasp): Grep Clojure code using clojure.spec regexes
  • lein-clj-kondo (https://github.com/clj-kondo/lein-clj-kondo): a leiningen plugin for clj-kondo
  • http-kit (https://github.com/http-kit/http-kit): Simple, high-performance event-driven HTTP client+server for Clojure.
  • babashka.nrepl (https://github.com/babashka/babashka.nrepl): The nREPL server from babashka as a library, so it can be used from other SCI-based CLIs
  • jet (https://github.com/borkdude/jet): CLI to transform between JSON, EDN, YAML and Transit using Clojure
  • lein2deps (https://github.com/borkdude/lein2deps): leiningen to deps.edn converter
  • cljs-showcase (https://borkdude.github.io/cljs-showcase/): Showcase CLJS libs using SCI
  • babashka.book (https://github.com/babashka/book): Babashka manual
  • pod-babashka-buddy (https://github.com/babashka/pod-babashka-buddy): A pod around buddy core (Cryptographic Api for Clojure).
  • gh-release-artifact (https://github.com/borkdude/gh-release-artifact): Upload artifacts to Github releases idempotently
  • carve (https://github.com/borkdude/carve): Remove unused Clojure vars
  • 4ever-clojure (https://github.com/oxalorg/4ever-clojure): Pure CLJS version of 4clojure, meant to run forever!
  • pod-babashka-lanterna (https://github.com/babashka/pod-babashka-lanterna): Interact with clojure-lanterna from babashka
  • joyride (https://github.com/BetterThanTomorrow/joyride): VSCode CLJS scripting and REPL (via SCI, https://github.com/babashka/sci)
  • clj2el (https://borkdude.github.io/clj2el/): transpile Clojure to elisp
  • deflet (https://github.com/borkdude/deflet): make let-expressions REPL-friendly!
  • deps.add-lib (https://github.com/borkdude/deps.add-lib): Clojure 1.12's add-lib feature for leiningen and/or other environments without a specific version of the clojure CLI

Permalink

Building Heretic: From ClojureStorm to Mutant Schemata

Heretic

This is Part 2 of a series on mutation testing in Clojure. Part 1 introduced the concept and why Clojure needed a purpose-built tool.

The previous post made a claim: mutation testing can be fast if you know which tests to run. This post shows how Heretic makes that happen.

We'll walk through the three core phases: collecting expression-level coverage with ClojureStorm, transforming source code with rewrite-clj, and the optimization techniques that keep mutation counts manageable.

Phase 1: Coverage Collection

Traditional coverage tools track lines. Heretic tracks expressions.

The difference matters. Consider:

(defn process-order [order]
  (if (> (:quantity order) 10)
    (* (:price order) 0.9)    ;; <- Line 3: bulk discount
    (:price order)))

Line-level coverage would show line 3 as "covered" if any test enters the bulk discount branch. But expression-level coverage distinguishes between tests that evaluate *, (:price order), and 0.9. When we later mutate 0.9 to 1.1, we can run only the tests that actually touched that specific literal - not every test that happened to call process-order.

ClojureStorm's Instrumented Compiler

ClojureStorm is a fork of the Clojure compiler that instruments every expression during compilation. Created by Juan Monetta for the FlowStorm debugger, it provides exactly the hooks Heretic needs. (Thanks to Juan for building such a solid foundation - Heretic would not exist without ClojureStorm.)

The integration is surprisingly minimal:

(ns heretic.tracer
  (:import [clojure.storm Emitter Tracer]))

(def ^:private current-coverage
  "Atom of {form-id #{coords}} for the currently running test."
  (atom {}))

(defn record-hit! [form-id coord]
  (swap! current-coverage
         update form-id
         (fnil conj #{})
         coord))

(defn init! []
  ;; Configure what gets instrumented
  (Emitter/setInstrumentationEnable true)
  (Emitter/setFnReturnInstrumentationEnable true)
  (Emitter/setExprInstrumentationEnable true)

  ;; Set up callbacks
  (Tracer/setTraceFnsCallbacks
   {:trace-expr-fn (fn [_ _ coord form-id]
                     (record-hit! form-id coord))
    :trace-fn-return-fn (fn [_ _ coord form-id]
                          (record-hit! form-id coord))}))

When any instrumented expression evaluates, ClojureStorm calls our callback with two pieces of information:

  • form-id: A unique identifier for the top-level form (e.g., an entire defn)
  • coord: A path into the form's AST, like "3,2,1" meaning "third child, second child, first child"

Together, [form-id coord] pinpoints exactly which subexpression executed. This is the key that unlocks targeted test selection.

The Coordinate System

To connect a mutation in the source code to the coverage data, we need a way to uniquely address any subexpression. Think of it as a postal address for code - we need to say "the a inside the + call inside the function body" in a format that both the coverage tracer and mutation engine can agree on.

ClojureStorm addresses this with a path-based coordinate system. Consider this function as a tree:

(defn foo [a b] (+ a b))
   │
   ├─[0] defn
   ├─[1] foo
   ├─[2] [a b]
   └─[3] (+ a b)
            │
            ├─[3,0] +
            ├─[3,1] a
            └─[3,2] b

Each number represents which child to pick at each level. The coordinate "3,2" means "go to child 3 (the function body), then child 2 (the second argument to +)". That gives us the b symbol.
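
In plain Clojure terms, resolving such a coordinate is just repeated nth into the form (ignoring the unordered-collection case for a moment):

(def form '(defn foo [a b] (+ a b)))

;; coordinate "3,2": take child 3, then that child's child 2
(reduce nth form [3 2])
;; => b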

This works cleanly for ordered structures like lists and vectors, where children have stable positions. But maps are unordered - {:name "Alice" :age 30} and {:age 30 :name "Alice"} are the same value, so numeric indices would be unstable.

ClojureStorm solves this by hashing the printed representation of map keys. Instead of "0" for the first entry, a key like :name gets addressed as "K-1925180523":

{:name "Alice" :age 30}
   │
   ├─[K-1925180523] :name
   ├─[V-1925180523] "Alice"
   ├─[K-1524292809] :age
   └─[V-1524292809] 30

The hash ensures stable addressing regardless of iteration order.

With this addressing scheme, we can say "test X touched coordinate 3,1 in form 12345" and later ask "which tests touched the expression we're about to mutate?"

The Form-Location Bridge

Here's a problem we discovered during implementation: how do we connect the mutation engine to the coverage data?

The mutation engine uses rewrite-clj to parse and transform source files. It finds a mutation site at, say, line 42 of src/my/app.clj. But the coverage data is indexed by ClojureStorm's form-id - an opaque identifier assigned during compilation. We need to translate "file + line" into "form-id".

Fortunately, ClojureStorm's FormRegistry stores the source file and starting line for each compiled form. We build a lookup index:

(defn build-form-location-index [forms source-paths]
  (into {}
        (for [[form-id {:keys [form/file form/line]}] forms
              :when (and file line)
              :let [abs-path (resolve-path source-paths file)]
              :when abs-path]
          [[abs-path line] form-id])))

When the mutation engine finds a site at line 42, it searches for the form whose start line is the largest value less than or equal to 42 - that is, the innermost containing form. This gives us the ClojureStorm form-id, which we use to look up which tests touched that form.
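
A minimal sketch of that lookup against the index built above (the function name here is mine, not necessarily Heretic's):

(defn form-id-for-line
  "Innermost containing form: the largest start line <= line within the same file."
  [location-index abs-path line]
  (->> location-index
       (filter (fn [[[path start-line] _form-id]]
                 (and (= path abs-path) (<= start-line line))))
       (sort-by (fn [[[_path start-line] _form-id]] start-line) >)
       first
       second))

;; e.g. (form-id-for-line index "/repo/src/my/app.clj" 42)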

This bridging layer is what allows Heretic to connect source transformations to runtime coverage, enabling targeted test execution.

Collection Workflow

Coverage collection runs each test individually and captures what it touches:

(defn run-test-with-coverage [test-var]
  (tracer/reset-current-coverage!)
  (try
    (test-var)
    (catch Throwable t
      (println "Test threw exception:" (.getMessage t))))
  {(symbol test-var) (tracer/get-current-coverage)})

The result is a map from test symbol to coverage data:

{my.app-test/test-addition
  {12345 #{"3" "3,1" "3,2"}    ;; form-id -> coords touched
   12346 #{"1" "2,1"}}
 my.app-test/test-subtraction
  {12345 #{"3" "4"}
   12347 #{"1"}}}

This gets persisted to .heretic/coverage/ with one file per test namespace, enabling incremental updates. Change a test file? Only that namespace gets recollected.
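
The persistence step itself is simple; here is a rough sketch (the directory layout matches the post, the helper name is illustrative):

(require '[clojure.java.io :as io])

(defn persist-coverage!
  "Write one EDN file per test namespace under .heretic/coverage/."
  [coverage-by-test]
  (doseq [[test-ns entries] (group-by (comp namespace key) coverage-by-test)
          :let [f (io/file ".heretic/coverage" (str test-ns ".edn"))]]
    (io/make-parents f)
    (spit f (pr-str (into {} entries)))))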

At this point we have a complete map: for every test, we know exactly which [form-id coord] pairs it touched. Now we need to generate mutations and look up which tests are relevant for each one.
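
Given that map, selecting the relevant tests for a mutation is just a lookup; a minimal sketch (the function name is mine):

(defn tests-touching
  "Return the tests whose recorded coverage includes this [form-id coord] pair."
  [coverage-by-test form-id coord]
  (for [[test-sym coverage] coverage-by-test
        :when (contains? (get coverage form-id #{}) coord)]
    test-sym))

;; With the coverage map above:
;; (tests-touching coverage 12345 "3,1") => (my.app-test/test-addition)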

Phase 2: The Mutation Engine

With coverage data in hand, we need to actually mutate the code. This means:

  1. Parsing Clojure source into a navigable structure
  2. Finding locations where operators apply
  3. Transforming the source
  4. Hot-swapping the modified code into the running JVM

Parsing with rewrite-clj

rewrite-clj gives us a zipper over Clojure source that preserves whitespace and comments - essential for producing readable diffs:

(defn parse-file [path]
  (z/of-file path {:track-position? true}))

(defn find-mutation-sites [zloc]
  (->> (walk-form zloc)
       (remove in-quoted-form?)  ;; Skip '(...) and `(...)
       (mapcat (fn [z]
                 (let [applicable (ops/applicable-operators z)]
                   (map #(make-mutation-site z %) applicable))))))

The walk-form function traverses the zipper depth-first. At each node, we check which operators match. An operator is a data map with a matcher predicate:

(def swap-plus-minus
  {:id :swap-plus-minus
   :original &apos+
   :replacement &apos-
   :description "Replace + with -"
   :matcher (fn [zloc]
              (and (= :token (z/tag zloc))
                   (symbol? (z/sexpr zloc))
                    (= '+ (z/sexpr zloc))))})

Each mutation site captures the file, line, column, operator, and - critically - the coordinate path within the form. This coordinate is what connects a mutation to the coverage data from Phase 1.
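
As data, a mutation site looks roughly like this (the values are illustrative, not taken from a real run):

{:file     "src/my/app.clj"
 :line     42
 :column   7
 :operator :swap-plus-minus
 :form-id  12345
 :coord    "3,1"}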

Coordinate Mapping

The tricky part is converting between rewrite-clj's zipper positions and ClojureStorm's coordinate strings. We need bidirectional conversion for the round-trip:

(defn coord->zloc [zloc coord]
  (let [parts (parse-coord coord)]  ;; "3,2,1" -> [3 2 1]
    (reduce
     (fn [z part]
       (when z
         (if (string? part)      ;; Hash-based for maps/sets
           (find-by-hash z part)
           (nth-child z part)))) ;; Integer index for lists/vectors
     zloc
     parts)))

(defn zloc->coord [zloc]
  (loop [z zloc
         coord []]
    (cond
      (root-form? z) (vec coord)
      (z/up z)
      (let [part (if (is-unordered-collection? z)
                   (compute-hash-coord z)
                   (child-index z))]
        (recur (z/up z) (cons part coord)))
      :else (vec coord))))

The validation requirement is that these must be inverses:

(= coord (zloc->coord (coord->zloc zloc coord)))

With correct coordinate mapping, we can take a mutation at a known location and ask "which tests touched this exact spot?" That query is what makes targeted test execution possible.

Applying Mutations

Once we find a mutation site and can navigate to it, the actual transformation is straightforward:

(defn apply-mutation! [mutation]
  (let [{:keys [file form-id coord operator]} mutation
        operator-def (get ops/operators-by-id operator)
        original-content (slurp file)
        zloc (z/of-string original-content {:track-position? true})
        form-zloc (find-form-by-id zloc form-id)
        target-zloc (coord/coord->zloc form-zloc coord)
        replacement-str (ops/apply-operator operator-def target-zloc)
        modified-zloc (z/replace target-zloc
                                 (n/token-node (symbol replacement-str)))
        modified-content (z/root-string modified-zloc)]
    (spit file modified-content)
    (assoc mutation :backup original-content)))

Hot-Swapping with clj-reload

After modifying the source file, we need the JVM to see the change. clj-reload handles this correctly:

(ns heretic.reloader
  (:require [clj-reload.core :as reload]))

(defn init! [source-paths]
  (reload/init {:dirs source-paths}))

(defn reload-after-mutation! []
  (reload/reload {:throw false}))

Why clj-reload specifically? It solves problems that a plain (require ... :reload) doesn't:

  1. Proper unloading: Calls remove-ns before reloading, preventing protocol/multimethod accumulation
  2. Dependency ordering: Topologically sorts namespaces, unloading dependents first
  3. Transitive closure: Automatically reloads namespaces that depend on the changed one

The mutation workflow becomes:

(with-mutation [m mutation]
  (reloader/reload-after-mutation!)
  (run-relevant-tests m))
;; Mutation automatically reverted in finally block
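
with-mutation itself can be a small macro; a minimal sketch, assuming it wraps apply-mutation! from above and restores the saved :backup content on the way out (the real macro may differ):

(defmacro with-mutation [[binding mutation] & body]
  `(let [~binding (apply-mutation! ~mutation)]
     (try
       ~@body
       (finally
         ;; write the original source back, regardless of test outcome
         (spit (:file ~binding) (:backup ~binding))))))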

At this point we have the full pipeline: parse source, find mutation sites, apply a mutation, hot-reload, run targeted tests, restore. But running this once per mutation is still slow for large codebases. Phase 3 addresses that.

80+ Clojure-Specific Operators

The operator library is where Heretic's Clojure focus shows. Beyond the standard arithmetic and comparison swaps, we have:

Threading operators - catch ->/->> confusion:

(-> data (get :users) first)   ;; Original
(->> data (get :users) first)  ;; Mutant: wrong arg position

Nil-handling operators - expose nil punning mistakes:

(when (seq users) ...)   ;; Original: handles empty list
(when users ...)         ;; Mutant: breaks on empty list (truthy)

Lazy/eager operators - catch chunking and realization bugs:

(map process items)    ;; Original: lazy
(mapv process items)   ;; Mutant: eager, different memory profile

Destructuring operators - expose JSON interop issues:

{:keys [user-id]}   ;; Original: kebab-case
{:keys [userId]}    ;; Mutant: camelCase from JSON

The full set includes first/last, rest/next, filter/remove, conj/disj, some->/->, and qualified keyword mutations. These are the mistakes Clojure developers actually make.

With 80+ operators applied to a real codebase, mutation counts grow quickly. The next phase makes this tractable.

Phase 3: Optimization Techniques

With 80+ operators and a real codebase, mutation counts get large fast. A 1000-line project might generate 5000 mutations. Running the full test suite 5000 times is not practical.

Heretic uses several techniques to make this manageable.

Targeted Test Execution

This is the big one, enabled by Phase 1. Instead of running all tests for every mutation, we query the coverage index:

(defn tests-for-mutation [coverage-map mutation]
  (let [form-id (resolve-form-id (:form-location-index coverage-map) mutation)
        coord (:coord mutation)]
    (get-in coverage-map [:coord-to-tests [form-id coord]] #{})))

A mutation at (+ a b) might only be covered by 2 tests out of 200. We run those 2 tests in milliseconds instead of the full suite in seconds.

This is where the Phase 1 coverage investment pays off. But we can go further by reducing the number of mutations we generate in the first place.

Equivalent Mutation Detection

Some mutations produce semantically identical code. Detecting these upfront avoids wasted test runs:

;; (* x 0) -> (/ x 0) is NOT equivalent (divide by zero)
;; (* x 1) -> (/ x 1) IS equivalent (both return x)

(def equivalent-patterns
  [{:operator :swap-mult-div
    :context (fn [zloc]
               (some #(= 1 %) (rest (z/child-sexprs (z/up zloc)))))
    :reason "Multiplying or dividing by one has no effect"}

   {:operator :swap-lt-lte
    :context (fn [zloc]
               (let [[_ left right] (z/child-sexprs (z/up zloc))]
                 (and (= 0 right)
                      (non-negative-fn? (first left)))))
    :reason "(< (count x) 0) is always false"}])

The patterns cover boundary comparisons ((>= (count x) 0) is always true), function contracts ((nil? (str x)) is always false), and lazy/eager equivalences ((vec (map f xs)) equals (vec (mapv f xs))).
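
One way these patterns could be applied to drop equivalent mutations before they are ever run; the helper name and the site->zloc accessor are assumptions:

(defn equivalent-mutation? [equivalent-patterns mutation target-zloc]
  (boolean
   (some (fn [{:keys [operator context]}]
           (and (= operator (:operator mutation))
                (context target-zloc)))
         equivalent-patterns)))

;; e.g. (remove #(equivalent-mutation? equivalent-patterns % (site->zloc %)) mutations)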

Filtering equivalent mutations prevents false "survived" reports. But we can also skip mutations that would be redundant to test.

Subsumption Analysis

Subsumption identifies when killing one mutation implies another would also be killed. If swapping < to <= is caught by a test, then swapping < to > would likely be caught too.

Based on the RORG (Relational Operator Replacement with Guard) research, we define subsumption relationships:

(def relational-operator-subsumption
  {'<  [:swap-lt-lte :swap-lt-neq :replace-comparison-false]
   '>  [:swap-gt-gte :swap-gt-neq :replace-comparison-false]
   '<= [:swap-lte-lt :swap-lte-eq :replace-comparison-true]
   ;; ...
   })

For each comparison operator, we only need to test the minimal set. The research shows this achieves roughly the same fault detection with 40% fewer mutations.

The subsumption graph also enables intelligent mutation selection:

(defn minimal-operator-set [operators]
  (set/difference
   operators
   ;; Remove any operator dominated by another in the set
   (reduce
    (fn [dominated op]
      (into dominated
            (set/intersection (dominated-operators op) operators)))
    #{}
    operators)))

These techniques reduce mutation count. The final optimization reduces the cost of each mutation.

Mutant Schemata: Compile Once, Select at Runtime

The most sophisticated optimization is mutant schemata. Instead of applying one mutation, reloading, testing, reverting, reloading for each mutation, we embed multiple mutations into a single compilation:

;; Original
(defn calculate [x] (+ x 1))

;; Schematized (with 3 mutations)
(defn calculate [x]
  (case heretic.schemata/*active-mutant*
    :mut-42-5-plus-minus (- x 1)
    :mut-42-5-1-to-0     (+ x 0)
    :mut-42-5-1-to-2     (+ x 2)
    (+ x 1)))  ;; original (default)

We reload once, then switch between mutations by binding a dynamic var:

(def ^:dynamic *active-mutant* nil)

(defmacro with-mutant [mutation-id & body]
  `(binding [*active-mutant* ~mutation-id]
     ~@body))

The workflow becomes:

(defn run-mutation-batch [file mutations test-fn]
  (let [schemata-info (schematize-file! file mutations)]
    (try
      (reload!)  ;; Once!
      (doseq [[id mutation] (:mutation-map schemata-info)]
        (with-mutant id
          (test-fn id mutation)))
      (finally
        (restore-file! schemata-info)
        (reload!)))))  ;; Once!

For a file with 50 mutations, this means 2 reloads instead of 100. The overhead of case dispatch at runtime is negligible compared to compilation cost.

Operator Presets

Finally, we offer presets that trade thoroughness for speed:

(def presets
  {:fast #{:swap-plus-minus :swap-minus-plus
           :swap-lt-gt :swap-gt-lt
           :swap-and-or :swap-or-and
           :swap-nil-some :swap-some-nil}

   :minimal minimal-preset-operators  ;; Subsumption-aware

   :standard #{;; :fast plus...
               :swap-first-last :swap-rest-next
               :swap-thread-first-last}

   :comprehensive (set (map :id all-operators))})

The :fast preset uses ~15 operators that research shows catch roughly 99% of bugs. The :minimal preset uses subsumption analysis to eliminate redundant mutations. Both run much faster than :comprehensive while maintaining detection power.

Putting It Together

A mutation testing run with Heretic looks like:

  1. Collect coverage (once, cached): Run tests under ClojureStorm instrumentation, build expression-level coverage map
  2. Generate mutations: Parse source files, find all applicable operator sites
  3. Filter: Remove equivalent mutations, apply subsumption to reduce set
  4. Group by file: Prepare for schemata optimization
  5. For each file:
    • Build schematized source with all mutations
    • Reload once
    • For each mutation: bind *active-mutant*, run targeted tests
    • Restore and reload
  6. Report: Mutation score, surviving mutations, test effectiveness

The result is mutation testing that runs in seconds for typical projects instead of hours.


This covers the core implementation. A future post will explore Phase 4: AI-powered semantic mutations and hybrid equivalent detection - using LLMs to generate the subtle, domain-aware mutations that traditional operators miss.

Previously: Part 1 - Heretic: Mutation Testing in Clojure


Clojure Deref (Dec 30, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

Last chance for the annual Clojure surveys!

Time is running out to take the Clojure surveys! Please help spread the word, and take a moment to fill them out if you haven’t already.

Fill out the 2025 State of Clojure Survey if you use any version or dialect of Clojure in any capacity.

Fill out the 2025 State of ClojureScript Survey if you use ClojureScript or dialects like Squint, Cherry, nbb, and such.

Thank you for your help!

Upcoming Events

Libraries and Tools

Debut release

  • crabjure - A fast static analyzer for Clojure and ClojureScript, written in Rust.

  • browser-jack-in - A web browser extension that lets you inject a Scittle REPL server into any browser page.

  • clamav-clj - An idiomatic, modern Clojure wrapper for ClamAV.

  • heretic - Mutation testing for Clojure - fast, practical, and integrated

Updates

  • Many Clojure contrib libs were updated to move the Clojure dependency to 1.11.4, which is past the CVE fixed in 1.11.2.

  • partial-cps 0.1.50 - A lean and efficient continuation passing style transform, includes async-await support.

  • csvx 68fd22c - A zero-dependency tool that enables you to control how to tokenize, transform, and handle files with char(s)-separated values in Clojure and ClojureScript.

  • recife 0.22.0 - A Clojure model checker (using the TLA+/TLC engine)

  • polylith 0.3.32 - A tool used to develop Polylith based architectures in Clojure.

  • nrepl 1.5.2 - A Clojure network REPL that provides a server and client, along with some common APIs of use to IDEs and other tools that may need to evaluate Clojure code in remote environments.

  • manifold 0.5.0 - A compatibility layer for event-driven abstractions


Tetris-playing AI the Polylith way - Part 1

Tetris AI

In this blog series, I will show how to work with the Polylith architecture and how organizing code into components helps create a good structure for high-level functional style programming.

You might feel that organizing into components is unnecessary, and yes, for a tiny codebase like this I would agree. It's still easy to reason about the code and keep everything in mind, but as the codebase grows, so does the value of this structure, in terms of better overview, clearer system boundaries, and increased flexibility in how these building blocks can be combined into various systems.

We will get familiar with this by implementing a self-playing Tetris program in Clojure and Python while reflecting on the differences between the two languages.

The goal

The task for this first post is to place a T piece on a Tetris board (represented by a two-dimensional array):

[[0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,0,0,0,0]
 [0,0,0,0,0,0,T,0,0,0]
 [0,0,0,0,0,T,T,T,0,0]]

We will put the code in the piece and board components in a Polylith workspace (output from the info command):

Poly info output

This will not be a complete guide to Polylith, Clojure, or Python, but I will explain the most important parts and refer to relevant documentation when needed.

The resulting source code from this first blog post in the series can be found here:

Workspace

We begin by installing the poly command line tool for Clojure, which we will use when working with the Polylith codebase:

brew install polyfy/polylith/poly

The next step is to create a Polylith workspace:

poly create workspace name:tetris-polylith top-ns:tetrisanalyzer

We now have a standard Polylith workspace for Clojure in place:

▾ tetris-polylith
  ▸ bases
  ▸ components
  ▸ development
  ▸ projects
  deps.edn
  workspace.edn

Python

We will use uv as the package manager for Python (see setup for other alternatives). First we install uv:

curl -LsSf https://astral.sh/uv/install.sh | sh

Then we create the tetris-polylith-uv workspace directory by executing:

uv init tetris-polylith-uv
cd tetris-polylith-uv
uv add polylith-cli --dev
uv sync

which creates:

README.md
main.py
pyproject.toml
uv.lock

Finally we create the standard Polylith workspace structure:

uv run poly create workspace --name tetrisanalyzer --theme loose

which adds:

▾ tetris-polylith-uv
  ▸ bases
  ▸ components
  ▸ development
  ▸ projects
  workspace.toml

The workspace requires some additional manual steps, documented here.

The piece component

Now we are ready to create our first component for the Clojure codebase:

poly create component name:piece

This adds the piece component to the workspace structure:

  ▾ components
    ▾ piece
      ▾ src
        ▾ tetrisanalyzer
          ▾ piece
            interface.clj
            core.clj
      ▾ test
        ▾ tetrisanalyzer
          ▾ piece
            interface-test.clj

If you have used Polylith with Clojure before, you know that you also need to manually add piece to deps.edn, which is described here.
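
Roughly, that means adding an entry for the component under the :dev alias in the workspace deps.edn; a minimal sketch (see the linked docs for the exact form):

{:aliases
 {:dev {:extra-deps {poly/piece {:local/root "components/piece"}}}}}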

Python

Let's do the same for Python:

uv run poly create component --name piece

This adds the piece component to the structure:

  ▾ components
    ▾ tetrisanalyzer
      ▾ piece
        __init__.py
        core.py
  ▾ test
    ▾ components
      ▾ tetrisanalyzer
        ▾ piece
          __init__.py
          test_core.py

Piece shapes

In Tetris, there are 7 different pieces that can be rotated, summing up to 19 shapes:

Pieces

Here we will store them in a multi-dimensional array where each possible piece shape is made up of four [x,y] cells, with [0,0] representing the upper left corner.

For example, the Z piece in its initial position (rotation 0) consists of the cells [0,0] [1,0] [1,1] [2,1]:

Z piece

This is how it looks in Clojure (commas are treated as whitespace in Clojure and are often omitted):

(ns tetrisanalyzer.piece.shape)

(def pieces [nil

             ;; I (1)
             [[[0 0] [1 0] [2 0] [3 0]]
              [[0 0] [0 1] [0 2] [0 3]]]

             ;; Z (2)
             [[[0 0] [1 0] [1 1] [2 1]]
              [[1 0] [0 1] [1 1] [0 2]]]

             ;; S (3)
             [[[1 0] [2 0] [0 1] [1 1]]
              [[0 0] [0 1] [1 1] [1 2]]]

             ;; J (4)
             [[[0 0] [1 0] [2 0] [2 1]]
              [[0 0] [1 0] [0 1] [0 2]]
              [[0 0] [0 1] [1 1] [2 1]]
              [[1 0] [1 1] [0 2] [1 2]]]

             ;; L (5)
             [[[0 0] [1 0] [2 0] [0 1]]
              [[0 0] [0 1] [0 2] [1 2]]
              [[2 0] [0 1] [1 1] [2 1]]
              [[0 0] [1 0] [1 1] [1 2]]]

             ;; T (6)
             [[[0 0] [1 0] [2 0] [1 1]]
              [[0 0] [0 1] [1 1] [0 2]]
              [[1 0] [0 1] [1 1] [2 1]]
              [[1 0] [0 1] [1 1] [1 2]]]

             ;; O (7)
             [[[0 0] [1 0] [0 1] [1 1]]]])

Python

Here is how it looks in Python:

pieces = [None,

          # I (1)
          [[[0, 0], [1, 0], [2, 0], [3, 0]],
           [[0, 0], [0, 1], [0, 2], [0, 3]]],

          # Z (2)
          [[[0, 0], [1, 0], [1, 1], [2, 1]],
           [[1, 0], [0, 1], [1, 1], [0, 2]]],

          # S (3)
          [[[1, 0], [2, 0], [0, 1], [1, 1]],
           [[0, 0], [0, 1], [1, 1], [1, 2]]],

          # J (4)
          [[[0, 0], [1, 0], [2, 0], [2, 1]],
           [[0, 0], [1, 0], [0, 1], [0, 2]],
           [[0, 0], [0, 1], [1, 1], [2, 1]],
           [[1, 0], [1, 1], [0, 2], [1, 2]]],

          # L (5)
          [[[0, 0], [1, 0], [2, 0], [0, 1]],
           [[0, 0], [0, 1], [0, 2], [1, 2]],
           [[2, 0], [0, 1], [1, 1], [2, 1]],
           [[0, 0], [1, 0], [1, 1], [1, 2]]],

          # T (6)
          [[[0, 0], [1, 0], [2, 0], [1, 1]],
           [[0, 0], [0, 1], [1, 1], [0, 2]],
           [[1, 0], [0, 1], [1, 1], [2, 1]],
           [[1, 0], [0, 1], [1, 1], [1, 2]]],

          # O (7)
          [[[0, 0], [1, 0], [0, 1], [1, 1]]]]

In Clojure we had to specify the namespace at the top of the file, but in Python, the namespace is implicitly given based on the directory hierarchy.

Here we put the above code in shape.py, and it will therefore automatically belong to the tetrisanalyzer.piece.shape module:

▾ tetris-polylith-uv
  ▾ components
    ▾ tetrisanalyzer
      ▾ piece
        __init__.py
        shape.py

Interface

In Polylith, only what's in the component's interface is exposed to the rest of the codebase.

In Python, we can optionally control what gets exposed in wildcard imports (from module import *) by defining the __all__ variable in the __init__.py module. However, even without __all__, all public names (those not starting with _) are still accessible through explicit imports.

This is how the piece interface in __init__.py looks:

from tetrisanalyzer.piece.core import I, Z, S, J, L, T, O, piece

__all__ = ["I", "Z", "S", "J", "L", "T", "O", "piece"]

We could have put all the code directly in __init__.py, but it's a common pattern in Python to keep this module clean by delegating to implementation modules like core.py:

from tetrisanalyzer.piece import shape

I = 1
Z = 2
S = 3
J = 4
L = 5
T = 6
O = 7


def piece(p, rotation):
    return shape.pieces[p][rotation]

The piece component now has these files:

▾ tetris-polylith-uv
  ▾ components
    ▾ tetrisanalyzer
      ▾ piece
        __init__.py
        core.py
        shape.py

Clojure

In Clojure, the interface is often just a single namespace with the name interface:

  ▾ components
    ▾ piece
      ▾ src
        ▾ tetrisanalyzer
          ▾ piece
            interface.clj

Implemented like this:

(ns tetrisanalyzer.piece.interface
  (:require [tetrisanalyzer.piece.shape :as shape]))

(def I 1)
(def Z 2)
(def S 3)
(def J 4)
(def L 5)
(def T 6)
(def O 7)

(defn piece [p rotation]
  (get-in shape/pieces [p rotation]))
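
For example, asking for the T piece rotated twice (with the interface required as piece, as in the tests further down) returns the third T shape from the pieces vector:

(piece/piece piece/T 2)
;; => [[1 0] [0 1] [1 1] [2 1]]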

A language comparison

Let's see what differences there are between the two languages:

;; Clojure
(defn piece [p rotation]
  (get-in shape/pieces [p rotation]))

# Python
def piece(p, rotation):
    return shape.pieces[p][rotation]

An obvious difference here is that Clojure is a Lisp dialect, while Python uses a more traditional syntax. This means that if you want anything to happen in Clojure, you put it first in a list:

  • (defn piece ...)
    
    is a macro that expands to (def piece (fn ...)) which defines the function piece
  • (get-in shape/pieces [p rotation])
    
    is a call to the function clojure.core/get-in, where:
    • The first argument shape/pieces refers to the pieces vector in the shape namespace
    • The second argument creates the vector [p rotation] with two elements:
      • p is a value between 1 and 7, representing one of the pieces: I, Z, S, J, L, T, and O
      • rotation is a value between 0 and 3, representing the number of 90-degree rotations

Another significant difference is that data is immutable in Clojure, while in Python it's mutable (like the pieces data structure).

However, a similarity is that both languages are dynamically typed, but use concrete types in the compiled code:

;; Clojure
(class \Z) ;; Returns java.lang.Character
(class 2)  ;; Returns java.lang.Long
(class Z)  ;; Returns java.lang.Long (since Z is bound to 2)

# Python
type('Z')  # Returns <class 'str'> (characters are strings in Python)
type(2)    # Returns <class 'int'>
type(Z)    # Returns <class 'int'> (since Z is bound to 2)

The languages also share another feature: type information can be added optionally. In Clojure, this is done using type hints for Java interop and performance optimization. In Python, type hints (introduced in Python 3.5) can be added using the typing module, though they are not enforced at runtime and are primarily used for static type checking with tools like mypy.
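
A small illustration of an optional Clojure type hint; the function is just an example, not part of the Tetris code:

(defn name-length [^String s]
  ;; the ^String hint lets the compiler resolve .length without reflection
  (.length s))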

The board component

Now let's continue by creating a board component:

poly create component name:board

Which adds the board component to the workspace:

▾ tetris-polylith
  ▸ bases
  ▾ components
    ▸ board
    ▸ piece
  ▸ development
  ▸ projects

And this is how we create a board component in Python:

uv run poly create component --name board

This adds the board component to the workspace:

  ▾ components
    ▾ tetrisanalyzer
      ▸ board
      ▸ piece
  ▾ test
    ▾ components
      ▾ tetrisanalyzer
        ▸ board
        ▸ piece

The Clojure code that places a piece on the board is implemented like this:

(ns tetrisanalyzer.board.core)

(defn empty-board [width height]
  (vec (repeat height (vec (repeat width 0)))))

(defn set-cell [board p x y [cx cy]]
  (assoc-in board [(+ y cy) (+ x cx)] p))

(defn set-piece [board p x y piece]
  (reduce (fn [board cell]
            (set-cell board p x y cell))
          board
          piece))

In Python (which by convention uses two blank lines between top-level functions):

def empty_board(width, height):
    return [[0] * width for _ in range(height)]


def set_cell(board, p, x, y, cell):
    cx, cy = cell
    board[y + cy][x + cx] = p


def set_piece(board, p, x, y, piece):
    for cell in piece:
        set_cell(board, p, x, y, cell)
    return board

Let's go through these functions.

empty-board

(defn empty-board [width height]
  (vec (repeat height (vec (repeat width 0)))))

To explain this function, we can break it down into smaller statements:

(defn empty-board [width height]  ;; [4 2]
  (let [row-list (repeat width 0) ;; (0 0 0 0)
        row (vec row-list)        ;; [0 0 0 0]
        rows (repeat height row)  ;; ([0 0 0 0] [0 0 0 0])
        board (vec rows)]         ;; [[0 0 0 0] [0 0 0 0]]
    board))

We convert the lists to vectors using the vec function, so that we can (later) access them by index. Note that it is the last value in the function (board) that is returned.

empty_board

def empty_board(width, height):
    return [[0] * width for _ in range(height)]

This can be rewritten as:

def empty_board(width, height): # width = 4, height = 2
    row = [0] * width           # row = [0, 0, 0, 0]
    rows = range(height)        # rows = a range of length 2 (yields 0 and 1)
    board = [row for _ in rows] # board = [[0, 0, 0, 0], [0, 0, 0, 0]]
    return board

The [row for _ in rows] expression is a list comprehension, a way to create lists in Python by looping.

We loop over range(height), which yields the values 0 and 1, but we're not interested in these values, so we use the _ placeholder.

set-cell

(defn set-cell [board p x y [cx cy]]
  (assoc-in board [(+ y cy) (+ x cx)] p))

Let's break it down into an alternative implementation and call it with:

board = [[0 0 0 0] [0 0 0 0]]
p = 6, x = 2, y = 0, cell = [0 1]

(defn set-cell [board p x y cell]
  (let [[cx cy] cell             ;; Destructures [0 1] into cx = 0, cy = 1
        xx (+ x cx)              ;; xx = 2 + 0 = 2
        yy (+ y cy)]             ;; yy = 0 + 1 = 1
    (assoc-in board [yy xx] p))) ;; [[0 0 0 0] [0 0 6 0]]

In the original version, destructuring of [cx cy] happens directly in the function's parameter list. The assoc-in call works like the board[y + cy][x + cx] = p assignment in Python, with the difference that it doesn't mutate the board but instead returns a new immutable board.

set_cell

def set_cell(board, p, x, y, cell):
    cx, cy = cell
    board[y + cy][x + cx] = p  # board becomes [[0, 0, 0, 0], [0, 0, 6, 0]]

As mentioned earlier, this code mutates the two-dimensional list in place. It doesn't return anything, which differs from the Clojure version that returns a new board with one cell changed.

set-piece

(defn set-piece [board p x y piece]
  (reduce (fn [board cell]
            (set-cell board p x y cell))
          board   ;; An empty board as initial value
          piece)) ;; cells: [[1 0] [0 1] [1 1] [2 1]]

If you are new to reduce, think of it as a recursive function that processes each element in a collection, accumulating a result as it goes. The initial call to set-cell uses the empty board and the first cell [1 0] from piece; the board returned from set-cell and the second cell [0 1] are then used to call set-cell again, and so on until all cells in piece have been applied, at which point the final board is returned, as the unrolled version below shows.
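
Unrolled for the four T-piece cells above, the reduce is equivalent to threading the board through set-cell once per cell:

(-> board
    (set-cell p x y [1 0])
    (set-cell p x y [0 1])
    (set-cell p x y [1 1])
    (set-cell p x y [2 1]))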

set_piece

def set_piece(board, p, x, y, piece):
    for cell in piece:
        set_cell(board, p, x, y, cell)
    return board

The Python version is pretty straightforward, with a for loop that mutates the board. We choose to return the board to make the function more flexible, allowing it to be used in expressions and enabling method chaining, which is a common Python pattern, even though the board is already mutated in place.

Test

The test looks like this in Clojure:

(ns tetrisanalyzer.board.core-test
  (:require [clojure.test :refer :all]
            [tetrisanalyzer.piece.interface :as piece]
            [tetrisanalyzer.board.core :as board]))

(def empty-board [[0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]
                  [0 0 0 0 0 0 0 0 0 0]])

(deftest empty-board-test
  (is (= empty-board
         (board/empty-board 10 15))))

(deftest set-piece-test
  (let [T piece/T
        rotate-two-times 2
        piece-t (piece/piece T rotate-two-times)
        x 5
        y 13]
    (is (= [[0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 0 0 0 0]
            [0 0 0 0 0 0 T 0 0 0]
            [0 0 0 0 0 T T T 0 0]]
           (board/set-piece empty-board T x y piece-t)))))

Let's execute the tests to check that everything works as expected:

poly test :dev
Poly test output

The tests passed!

Python

Now, let's add a Python test for the board:

from tetrisanalyzer import board, piece

empty_board = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
]


def test_empty_board():
    assert empty_board == board.empty_board(10, 15)


def test_set_piece():
    T = piece.T
    rotate_two_times = 2
    piece_t = piece.piece(T, rotate_two_times)
    x = 5
    y = 13
    expected = [
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, T, 0, 0, 0],
        [0, 0, 0, 0, 0, T, T, T, 0, 0],
    ]

    assert expected == board.set_piece(empty_board, T, x, y, piece_t)

Let's install pytest as a dev dependency:

uv add pytest --dev

And run the tests:

uv run pytest
Pytest output

With that, we have finished the first post in this blog series!

If you're eager to see a self-playing Tetris program, I happen to have made a couple in other languages that you can watch here.

Tetris Analyzer Scala
Tetris Analyzer C++
Tetris Analyzer Tool

Happy Coding!

