1.12.42 Release

We’re happy to announce a new release of ClojureScript. If you’re an existing user of ClojureScript please read over the following release notes carefully.

This release features two significant dependency changes. First, Google Closure Compiler has been updated to v20250402. This change makes Java 21 a requirement for ClojureScript. The other significant change is that this release now depends on the Clojure fork of Google Closure Library. Please read on for more details about these changes.

For a complete list of fixes, changes, and enhancements to ClojureScript, see here.

Google Closure Compiler & Java 21

Last year we noted that updating Google Closure Compiler would mean losing Java 8 support. Google Closure now requires Java 21. From our perspective this change doesn’t seem strictly necessary, but Google is a large organization, and the change is likely due to internal requirements that are hard to influence from the outside. The general enthusiasm in the Clojure community for adopting more recent Java releases will hopefully soften the overall impact of this change.

So far, the burden of staying current with Google Closure has been manageable. If for some reason that calculus changes, we could adopt the strategy we have taken with Google Closure Library.

Clojure’s Fork of Google Closure Library

Google Closure Library’s long record of stability began to erode around 2019. Google was experimenting with its internal JavaScript strategy while becoming less concerned about the impact on outside consumers. Finally, Google stopped contributing to Google Closure Library last August.

We have forked Google Closure Library (GCL) and taken up maintenance. We backed out a few years of needless breaking changes and aligned the codebase with the latest Google Closure Compiler release.

One of the biggest benefits of GCL is that it makes ClojureScript a complete solution for a variety of JavaScript contexts, not limited to the browser. Taking on additional dependencies always comes with a cost. One of ClojureScript’s original value propositions was a rock solid set of readily available JavaScript tools as dependable as clojure.core.

We are working on restoring that original stability. With this release, you’ll find that quite a few old ClojureScript libraries work again today as well as they did 14 years ago.

ClojureScript is not, and never was, just for rich web applications. Even in the post-React world, a large portion of the web is (sensibly) still using jQuery. If you need robust DOM manipulation, internationalization, date/time handling, color value manipulation, mathematics, programmatic animation, browser history management, accessibility support, graphics, and much more, all without committing to a framework and without bloating your final JavaScript artifact, ClojureScript is a one-stop shop.
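As an illustrative ClojureScript sketch (hypothetical, not from the announcement), several of those capabilities are a require away:

```clojure
(ns example.gcl-demo
  (:require [goog.dom :as gdom]
            [goog.math :as gmath])
  (:import [goog.i18n DateTimeFormat]))

;; DOM manipulation without a framework
(def heading (gdom/createDom "h1" nil "Hello from Closure Library"))

;; Math helpers
(gmath/clamp 150 0 100) ;; => 100

;; Locale-aware date formatting
(.format (DateTimeFormat. "yyyy-MM-dd") (js/Date.))
```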

Give it a try!

Permalink

🥷 Clojure Pro Tip 5: Hiccup Raw

In case you don’t know it, we can embed raw strings, like inline JS code, in Hiccup. I just found out that raw prevents strings from getting escaped. I used to define a dedicated app.js for that, which needed an extra HTTP request. Looking back, I should have found this at the very beginning, as it’s mentioned right on the project’s GitHub homepage, but somehow I missed it.

Hiccup raw in action.
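As a minimal sketch (assuming Hiccup 2.x, where hiccup2.core/html escapes strings by default):

```clojure
(require '[hiccup2.core :as h])

;; Escaped by default: characters like > become entities such as &gt;
(str (h/html [:script "if (1 > 0) console.log(1);"]))

;; Wrapped in raw: the string is emitted verbatim, no escaping
(str (h/html [:script (h/raw "if (1 > 0) console.log(1);")]))
```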

Permalink

Wrote about Java integration, and private functions in Clojure Book

Some companies use Java and would like to explore Clojure. The new updates to my Clojure book are aimed at them. I have added a section named Java Files in Clojure Project, which teaches how existing Java code can work alongside Clojure projects. Another section, named Jar Files in Clojure Project, teaches how to use jar files in Clojure. Old Java libraries can be compiled into jars so their code can be reused.

I am not a huge fan of private functions. I think Python still does not have a concept of private functions. Clojure does have them; until now my book avoided them, but not anymore. The section named Private Functions teaches how to write private functions, and also how to test them.
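As a small sketch (my own example, not from the book) of how a private function is defined and still reached from tests via its var:

```clojure
(ns shapes.core)

;; defn- makes the function private to this namespace
(defn- area-impl
  "Private helper: area of a w-by-h rectangle."
  [w h]
  (* w h))

(defn describe [w h]
  (str "area: " (area-impl w h)))

;; From another namespace, (shapes.core/area-impl 2 3) would throw,
;; but the var itself is still reachable, which is handy in tests:
(#'shapes.core/area-impl 2 3) ;; => 6
```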

My book is available online, as well as on Amazon as an EPUB, a paperback (in color), and a hardcover (in color). I hope it’s of use to you.

Permalink

FlowStorm: Debugging and Understanding Clojure Code on a New Level

1. What is FlowStorm and Why Should You Care? 

Ever found yourself lost in a cascade of println statements, desperately trying to understand how data transforms across your Clojure functions? Or you’ve battled elusive bugs that only surface under specific, hard-to-reproduce conditions? If these scenarios sound familiar, you’re in for a treat.

In a previous article, https://flexiana.com/news/2025/04/why-clojure-developers-love-the-repl-so-much, I mentioned some tools for examining data and code. Since 2023, one of them has become essential in my projects: FlowStorm.

1.1. Brief Overview of FlowStorm’s Capabilities 

FlowStorm started as a tracing debugger but has evolved into something far more ambitious. Its description as “Much more than a debugger” is apt, as it offers a suite of visualization, interactive exploration, and deep code understanding capabilities that most other programming languages don’t provide. 

Think of it: the ability to effortlessly trace data flows, step through execution, and visualize complex states without peppering your code with temporary debug lines. This is the power FlowStorm brings to your Clojure(Script) development.

Here’s a quick look at its features: 

  • CLJ / CLJS / Babashka support: FlowStorm works across Clojure platforms, although some features, like automatic setup, work a bit differently on ClojureScript. 
  • Execution Recording: It records how your Clojure code runs. Usually, you don’t need to change your code or do a lot of setup, especially with ClojureStorm. 
  • Timeline Navigation: Lets you go through the recorded history of your code. You can step forward and backward or jump to specific spots. 
  • Data Visualization: Has tools to look at your application’s data as it was when the code ran. It has built-in viewers, and you can make your own. 
  • Code Analysis Tools: Includes things like call-stack trees, function call summaries, and search to help you see how your program ran. 
  • REPL Integration: FlowStorm and your REPL work together in both directions: you can send values and states from FlowStorm to your REPL, and also access FlowStorm’s recordings and internals from within your REPL. 
  • Extensibility: Supports plugins and custom tooling (detailed in the “Additional Features” section). 

FlowStorm can be integrated into your workflow in two ways. The recommended approach is “ClojureStorm,” which swaps your Clojure compiler at dev time for automatic instrumentation. Alternatively, “Vanilla FlowStorm” lets you add FlowStorm to your dev classpath and instrument by tagging and re-evaluating forms. In this tutorial, we’ll walk through the core features of FlowStorm with hands-on examples, from basic setup to advanced debugging techniques.

2. Your First Project with FlowStorm – An Interactive Tutorial 

Before you start: This tutorial assumes you have a Clojure setup, know basic Clojure, and how to use a REPL. 

In this tutorial, I will guide you step by step from the simplest to the more advanced features. If you ever get lost or would like to try a different tutorial, there is one offered by FlowStorm itself, accessible via “Help” -> “Tutorial”. 

2.1. Setting up the Environment 

  1. To create a project, run the following commands in your terminal:

mkdir fs-example
cd fs-example
cat << EOF > deps.edn
{:paths ["src/"] ; where your clj files are
 :deps {quil/quil {:mvn/version "4.3.1563"}}
 :aliases {:dev {:classpath-overrides {org.clojure/clojure nil}
                 :extra-deps {com.github.flow-storm/clojure {:mvn/version "1.12.0-9"}
                              com.github.flow-storm/flow-storm-dbg {:mvn/version "4.4.1"}}}}}
EOF
  2. Create a default user namespace: 

mkdir dev
cat << EOF > dev/user.clj
(ns user
  (:require [quil.core :as q]))
EOF

Note: You can create a global alias or lein profile so you don’t need to modify your project’s source files like we do here.
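As a sketch of that approach, a global alias in ~/.clojure/deps.edn (versions copied from the project deps.edn above; check for current releases) could look like:

```clojure
;; ~/.clojure/deps.edn: a global :dev alias usable from any project
{:aliases
 {:dev {:classpath-overrides {org.clojure/clojure nil}
        :extra-deps {com.github.flow-storm/clojure {:mvn/version "1.12.0-9"}
                     com.github.flow-storm/flow-storm-dbg {:mvn/version "4.4.1"}}}}}
```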

2.2. Launching and Initialization 

  1. Navigate to the fs-example project and launch a REPL with the :dev alias. 
  2. If you use Emacs + CIDER, this can be done by pressing
    C-u C-c M-j and adding :dev to the end of the prompt in the minibuffer.
  3. Then evaluate the dev/user.clj namespace and execute :dbg in the REPL. 
  4. You should now see the FlowStorm main window pop up. 

2.3. Basic Instrumentation 

With the project and REPL set up and FlowStorm started, we can try its basic features:
instrumenting code and recording it. 

Figure 1: The FlowStorm “Browser” tool showing loaded namespaces.
  1. On the left side of the main window, you’ll find several tools. Click on “Browser” (see Figure 1). 

The Browser tool lists all loaded namespaces in your application. If you correctly evaluated your user.clj file which requires quil, you should see several quil namespaces and your user namespace. You can select any namespace to see its functions, their argument lists, and potentially their docstrings. When looking at the list of namespaces, you’ll notice that library namespaces are also included. FlowStorm is an excellent tool for understanding how libraries work – you don’t have to limit yourself to just examining your code.
For this tutorial, we will instrument our user namespace.

  2. Right-click on the user namespace in the Browser list.
    From the menu, select “Add instr prefix for user.*”.
    This tells FlowStorm to instrument all functions in the user namespace.

Note: FlowStorm also lets you specify which namespaces to instrument with a JVM property.
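For reference, a sketch of that JVM-property approach (property names as given in the ClojureStorm documentation; double-check them for your version):

```shell
# Instrument everything under user.* from startup, without clicking through the Browser
clj -A:dev \
  -J-Dclojure.storm.instrumentEnable=true \
  -J-Dclojure.storm.instrumentOnlyPrefixes=user
```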

2.4. Record your first expression 

Figure 2: Recording button
  1. After instrumenting the user namespace, switch from the “Browser” tool back to the “Flows” tool on the left side of the FlowStorm window. 
  2. In the “Flows” tool interface, you will see a record button (see Figure 2). Click the “Record” button to start capturing instrumented function calls. 
    (If you don’t know what some FlowStorm button does, you can usually hover over it and get a tooltip.)
  3. With recording active, let’s feed FlowStorm its first bite of code. We’ll start with something simple.
    Return to your REPL and run a simple Clojure expression, for example: 
    (+ 1 2 3)
  4. When this runs, an entry should show up in the “Flows” window. This is the trace of the + function call. 

2.5. Navigating the Recording

Figure 3: Navigating a recording. (1) Navigation buttons, (2) Data Window.
  1. You will see the recording of (+ 1 2 3) in the “Flows” window. Above the code form, there are buttons for navigation. Click on the button with a simple right arrow (1 in Figure 3) to step into the details of the form’s execution. 
  2. Now, direct your attention to the ‘Data Window’ on the right. This is your interactive lens into the runtime values of your code (2 in Figure 3). You can see this data in different ways; use the dropdown menu in the “Data Window” (it says :int by default). 

2.6. Working with Multiple Traces in a Single Flow 

FlowStorm records traces one after another in the current “flow” (one recording session). 

  1. (OPTIONAL) Click the “Pause” button (which replaces the “Record” button when recording) to stop the active recording. 
  2. (OPTIONAL) Click the “Record” button again to continue recording in the same flow. 
  3. Return to your REPL and evaluate another expression, for instance: 
    (+ 2 3 4)
  4. You will see that the number of recorded steps in the current flow increases. You can navigate this new trace like the first one.

2.7. Organizing Work with Multiple Flows 

Figure 4: Choosing a different “Flow” (e.g., flow-1).
Figure 5: A new trace in flow-1

As your debugging sessions get more involved, you might be investigating different features or bugs simultaneously. Instead of one massive, confusing trace, FlowStorm lets you organize your work into multiple ‘Flows’.
If you want to have a new recording in a different “Flow”, you can pick another one (see Figure 4) and switch between them. They help organize different debug sessions. 

  1. Let’s select another flow, e.g., flow-1 
  2. Click Record (if it is not already on). 
  3. In your REPL, evaluate a simple expression like (reduce + (range 100)) 
  4. Observe this new trace in flow-1 (see Figure 5). You can switch flows anytime. 

2.8. Dealing with Exceptions

Figure 6: Exception navigation in the “Flow” window.

Let’s face it, exceptions happen, and FlowStorm has a good way to navigate them.

  1. Go back to your REPL and run: 
    (/ 10 0)
  2. An exception drop-down will appear in “Flow” (see Figure 6). You can easily go to where the exception happened.

2.9. Diving Into a More Complex Example with Quil 

Simple math is useful for a first look, but FlowStorm shines when dealing with more complex code. Let’s use a Quil example to explore further. 

  1. Open your user.clj file in your editor and add the following Clojure code. It defines a simple Quil sketch. 

(defn setup []
  (q/frame-rate 1)      ;; Set framerate to 1 FPS
  (q/background 200))   ;; Set the background colour

(defn make-ellipse [x y diam]
  (q/ellipse x y diam diam))

(defn draw []
  (q/stroke (q/random 255))             ;; Set the stroke colour to a random grey
  (q/stroke-weight (q/random 10))       ;; Set the stroke thickness randomly
  (q/fill (q/random 255))               ;; Set the fill colour to a random grey

  (let [diam (q/random 100)             ;; Set the diameter to a value between 0 and 100
        x    (q/random (q/width))       ;; Set the x coord randomly within the sketch
        y    (q/random (q/height))]     ;; Set the y coord randomly within the sketch
    (make-ellipse x y diam)             ;; Draw a circle at x y with the correct diameter
    ))

(q/defsketch fs-example            ;; Define a new sketch named fs-example
  :title "Oh so many grey circles" ;; Set the title
  :settings #(q/smooth 2)          ;; Turn on anti-aliasing
  :setup setup                     ;; Specify the setup fn
  :draw draw                       ;; Specify the draw fn
  :size [323 200])                 ;; Set the sketch size

Before re-evaluating the namespace and running the Quil sketch: 

  1. In FlowStorm, click the “Clean all flows” button (on the left of the recording button) to clear all existing recordings. 
  2. Make sure the recording is on by clicking the “Record” button. 
  3. Now reload the user namespace. 

When the user namespace is evaluated and the sketch starts, a window titled “Oh so many grey circles” should appear, drawing circles. FlowStorm will begin to capture traces related to the draw and setup functions, because they are in the instrumented user namespace. You might see about 34 steps recorded. 

2.10. Working with Threads 

If you step through the traces after the sketch starts, you might not see much about the animation. This is because the Quil animation (the draw loop) usually runs in its own thread. 
FlowStorm makes navigating between threads a breeze: 

Figure 7: Picking a thread from “Threads” dropdown.
  1. Click on the “Threads” dropdown in FlowStorm “Flow” window. 
  2. You should see a list of threads. Select the thread responsible for the animation (see Figure 7; it should be named “Animation Thread”). 
  3. Once the animation thread is selected, you will see the traces generated by the setup and draw function calls, accumulating as the animation runs (if recording is still active). You can stop it now. 

TIP: FlowStorm can show all threads in one window. To do that, start recording by clicking on “Start recording of the multi-thread timeline” next to the “Record” button and then select “Multi-thread timeline browser” under “More tools”. 

2.11. Defining Values 

Figure 8: The “def” button for defining variables.

Spotted an interesting piece of data in a trace? Why not pull it directly into your REPL?

  1. Navigate through the traces in the animation thread until you find a call to a Quil function q/stroke. In the “Data Window” on the right, you’ll see the value of q/stroke is a Clojure map (clojure.lang.PersistentArrayMap). 
  2. Suppose you want to inspect or manipulate this specific map in your REPL. FlowStorm lets you define any displayed value as a variable in your user namespace (or any other). With the value selected in the “Data Window”, click the “def” button (see Figure 8). 
  3. You’ll be prompted to give the new var a name. Enter a name (e.g., my-stroke-data). 

This value is now available in your user namespace in the REPL under the name you provided, allowing you to interact with it directly. 

2.12. Advanced Navigation: Quick Jump & Power Step 

Figure 9: Using the “Quick Jump” feature.
Figure 10: Using “Power Step” with the same-coord option.

As your application runs, traces can grow quite long. Manually stepping through hundreds or thousands of evaluations isn’t efficient. Say you are looking at a q/stroke call and want to see the q/stroke call for the next circle drawn, without manually stepping through everything. This is where “Power Step” is useful. The “Power Step” feature has several modes. Here, we want to jump to the next time a function is called at the same place in the code (like the q/stroke line in your draw function). 

  1. Make sure you are on a q/stroke call in the trace view. You can use the “Quick Jump” feature (see Figure 9) – type user/draw there and pick the function (with the number of calls next to it). It will take you to the function. Then move two steps further and you should be on q/stroke. 
  2. Pick the same-coord option from the “Power Step” dropdown menu and click the “Power Step forward” button (highlighted in Figure 10). 
  3. FlowStorm will jump to the next time the code at that place (the q/stroke call in draw) runs. You can see in the “Data Window” how values (like :current-stroke in the map) changed for the new circle. 

2.13. Custom Data Visualization 

Figure 11: Custom visualizer showing the stroke color.

FlowStorm’s default data view is useful, but sometimes a custom view gives better insight, especially for domain-specific data. Let’s make one for our Quil stroke.

Looking at the q/stroke data (a map with color info), it’s just numbers. It would be more intuitive if we could see the actual color represented visually. FlowStorm allows custom visualizers for this purpose. 

  1. Add the following code to your user.clj namespace. It registers a new visualizer for data that represents a Quil stroke. It uses JavaFX (which FlowStorm’s UI is built with) to draw a line with the stroke’s color. 

(require '[flow-storm.api :as fsa])
(require '[flow-storm.debugger.ui.data-windows.visualizers :as viz])
(require '[flow-storm.runtime.values :as fs-values])
(import '[javafx.scene.shape Line]
        '[javafx.scene.paint Color])

(viz/register-visualizer
 {:id :stroke ;; name of your visualizer
  :pred (fn [val] (and (map? val) (:quil/stroke val))) ;; predicate
  :on-create (fn [{:keys [quil/stroke]}]
               {:fx/node (let [gray (/ (first (:current-stroke stroke)) 255) ;; normalize gray to [0, 1]
                               color (Color/gray gray)
                               line (doto (Line.) ;; draw a line
                                      (.setStartX 0.0)
                                      (.setStartY 0.0)
                                      (.setEndX 30)
                                      (.setEndY 30)
                                      (.setStrokeWidth 5.0)
                                      (.setStroke color))]
                           line)})})

(fs-values/register-data-aspect-extractor ;; extract data from q/stroke
 {:id :stroke
  :pred (fn [val _] (and (map? val) (:current-stroke val)))
  :extractor (fn [stroke _] {:quil/stroke stroke})})

(viz/add-default-visualizer
 (fn [val-data] (:stroke (:flow-storm.runtime.values/kinds val-data)))
 :stroke)
  2. If the Quil sketch window (“Oh so many grey circles”) is still open, close it. 
  3. In FlowStorm, clear any existing recordings. 
  4. Make sure the recording is on. 
  5. Re-evaluate your user namespace (which should restart the Quil sketch). 

Now, when you go to a trace involving q/stroke (or more precisely, the data map that contains the stroke information) in the animation thread, you should see a short line whose color matches the stroke color used for a circle (see Figure 11). 

If you use “Power Step” (set to same-coord) to jump to the next q/stroke data, you should see the color of the line in the custom visualizer change accordingly. 

Custom visualizers provide significant power. For even more advanced customization or integration with external tools, FlowStorm also supports a plugin system. Exploring plugins is beyond the scope of this initial tutorial, but it’s a feature to be aware of for future needs. 

2.14. Use of the Call Tree Tool 

Figure 12: The “Call Tree” tool.

Stepping line-by-line shows what happens, but how does it all fit together? The “Call Tree” tool shows a tree of function calls from a recording. This helps understand the structure of how the code runs and how functions call each other. The tree displays arguments or return values to help differentiate between multiple calls to the same function. 

  1. Keep the animation thread selected and open the “Call Tree” tool (see Figure 12). 
  2. The tool will display a tree. Look for repeated calls to user/draw or other functions from your sketch, like user/setup
  3. Click on one of the user/draw nodes in the call tree, and you should see a call to (user/make-ellipse). Double-clicking any node takes you to it. 

2.15. Use of the Function List 

Figure 13: The “Function List” tool.

Need a quick summary of all functions that ran, or want to find every single call to user/draw? The ‘Function List’ tool gives you a flat, searchable index of all recorded function invocations.

  1. Open the “Function List” tool (see Figure 13). 
  2. You’ll see a list of functions. Find entries for functions from your code (e.g., user/draw) and library functions that were called (e.g., quil.core/ellipse, quil.core/random). 
  3. Pick a function from the list, for example, user/make-ellipse. A panel on the right should show all the individual calls of that function that were recorded. For each call, you can see its arguments and what it returned. 
  4. Like the “Call Tree” tool, double-clicking on a specific invocation will navigate you to it. 

2.16. Saving Positions: Bookmarks

Figure 14: The bookmark button.

Found a particularly interesting spot in a long trace that you know you’ll want to revisit?
FlowStorm’s bookmarks save your place.

To save a bookmark: 

  1. Go to the step in the trace you want to bookmark. 
  2. Click on the bookmark button in the FlowStorm UI (see Figure 14). 
  3. It will ask for a name for your bookmark. Choose a descriptive name. 

Once saved, you can use the bookmark list in FlowStorm (“View” -> “Bookmarks”) to quickly jump back to that exact position in the trace at any time, without needing to manually step through again. You can also make bookmarks in your Clojure code. This lets you mark important points in your program’s run.
Let’s add one to our Quil example. 

  1. First, make sure the flow-storm.api namespace is required in your user.clj file.
    Let’s give it the fsa alias. 
  2. Now change your setup function in user.clj to include a call to fsa/bookmark:

(defn setup []
  (q/frame-rate 1)
  (q/background 200)
  (fsa/bookmark "Quil setup function complete")) ; <--- Add this line
  3. Re-evaluate the namespace. The next time setup runs, a bookmark “Quil setup function complete” will be created automatically, and FlowStorm will jump to it. You can also find this new bookmark in FlowStorm’s bookmark list.

2.17. Sending Data to the Output Tool with tap> 

Figure 15: “Output” tool with maps.

We all know println debugging. While sometimes useful, it can clutter your console and mix with other output. Clojure’s tap> offers a more structured way to inspect values, and FlowStorm can be its dedicated display. 

  1. Modify the draw function in your user.clj file to tap> the circle’s properties just before it’s drawn. 

(defn draw []
  (q/stroke (q/random 255))
  (q/stroke-weight (q/random 10))
  (q/fill (q/random 255))
  (let [diam (q/random 100)
        x    (q/random (q/width))
        y    (q/random (q/height))]
    (tap> {:event :circle-drawn, :x x, :y y, :diameter diam}) ; <--- Add this line
    (make-ellipse x y diam)))
  2. Re-evaluate your user namespace and make sure FlowStorm is recording.
  3. In the FlowStorm UI, find and open the “Output” tool.
  4. As the Quil animation runs, you should see maps appearing in the “Output” tool. Each map is the data you tapped. This gives a log of data from your draw function (see Figure 15). 

2.18. Using the Printers Tool 

Figure 16: Adding the expression to the “Printers tool”.
Figure 17: Finding “Printers” and “Search” in the “More Tools” dropdown.

Sometimes, you want to keep an eye on a specific value or expression as it changes over many executions, without clicking through traces each time. The ‘Printers’ tool is like setting up a persistent watch window, tailored to exactly what you want to see.

  1. Make sure your Quil sketch is running and you have some traces recorded (e.g., in flow-0 and the “Animation thread” is selected). 
  2. In FlowStorm’s code stepping view, navigate to the traces of the draw function. Right-click on x in (q/ellipse ...) and select “Add to prints” (see Figure 16). 
  3. A dialog will show up where you can set up the printer: 
    • Message format: Enter a descriptive string that will help you identify the value, e.g., X of ellipse.
    • Expression: Used for transforming the value; let’s try int (which will change floats to integers).
  4. Confirm the dialog. Now go to “Printers”, which is hidden inside the “More tools” dropdown in the “Flows” window (see Figure 17). 
  5. A new “FlowStorm printers” window with the definition of our existing printer will appear (see Figure 18). When you now click the refresh button (top left), you will see every recorded value of x transformed to an int. You can always redefine your printers and repeat this. Double-clicking any printed value takes you to it.
Figure 18: The “FlowStorm printers” window showing recorded values.

2.19. Search in Traces 

Figure 19: The “search” tool with results for “frame-rate”.

When your traces become vast landscapes of data, finding that one specific value or call can feel like searching for a needle in a haystack. FlowStorm’s search functionality can help you.

  1. Select the “Search” tool (from “More tools”, see Figure 17). A new “FlowStorm search” window should pop up. 
  2. Search for all occurrences of frame-rate. Type frame-rate into the search bar and execute the search. You should see all occurrences of this key (see Figure 19). 
  3. Double-clicking any result takes you to it. 
  4. Feel free to experiment with the remaining search functions like “By predicate” etc. 

3. Conclusion 

This tutorial showed you FlowStorm’s main features: setting up, first recordings, and using tools like custom views and printers. You saw how FlowStorm can help you understand your Clojure code’s execution, making debugging easier.

(Here is a gist with a complete user.clj example.)

The best way to learn is to use FlowStorm in your projects. The more you use it, the more you’ll see how it helps with complex code. The next section gives a quick look at other features. For more details, the official FlowStorm documentation is very helpful. 

4. Some Additional FlowStorm Features Worth Mentioning 

This tutorial covered many things, but FlowStorm has more. Here’s a quick list of other features. Look at the official documentation for details: 

  • Metadata Navigation: Lets you see metadata of Clojure data structures in traces. 
  • datafy / nav Support: Works with Clojure’s datafy and nav to show custom data types better. 
  • EQL Filtering for Data Structures: Use EQL queries to find specific parts of big or nested data. 
  • Thread Breakpoints: Pause threads at specific functions; can be conditional on arguments (check platform support). 
  • Plugins: Add custom tools or use existing ones like Web Plugin, FlowBook Plugin, CLJS compiler plugin, or Async Flow Plugin.
    • Web Plugin – an experimental plugin for visualizing web application flows. 
    • FlowBook Plugin – allows you to store and replay your flows with an optional “flowbook”, which is a notebook that supports links to your recordings that can be used to explain your flows. 
    • Flow Storm CLJS compiler plugin – helps you visualize and move around your ClojureScript compilation recordings by representing them as an interactive graph. 
    • Flow Storm Async Flow Plugin – lets you visualize recorded core.async.flow activity as a graph. 
  • Specific Loop Navigation: Right-click values in loops to jump to other iterations or start/end of loops. 
  • Output Tool: Captures stdout and stderr.
  • Remote Debugging: Debug Clojure apps running on other machines, often with SSH. 
  • Dealing with Too Many Traces (Recording Limits): Set limits for recording functions or threads to control trace size and speed, which helps avoid OutOfMemoryError. 
  • Handling Mutable Values: Use flow-storm.runtime.values/SnapshotP protocol to tell FlowStorm how to save the state of mutable values. 
  • Instrumentation Limitations & Control: Know about limits (e.g., very big forms) and use controls to pick what to instrument. 
  • General Programmable Debugging/Analysis: Use FlowStorm’s APIs (like in flow-storm.runtime.indexes.api) to look at recorded traces with code from your REPL.
  • Multi-thread timeline: Records and shows operations from many threads in order. Good for finding concurrency bugs. 
  • Editor Integration: Works with editors like Emacs (CIDER), VSCode (Calva), and IntelliJ (Cursive) to jump from FlowStorm to your code. 
  • Styling and Theming: Change UI look with themes (light/dark) or custom CSS.

The post FlowStorm: Debugging and Understanding Clojure Code on a New Level appeared first on Flexiana.

Permalink

REPL-Driven Development and Learning Velocity

Our next Apropos will feature Nathan Marz on May 20. Be sure to subscribe!



The main advantage of Lisps (including Clojure) over other languages is the REPL (Read-Eval-Print Loop). Lisp used to have a bunch of advantages (if statements, garbage collection, built-in data structures, first-class closures, etc.), but these are common now. The last holdout is the REPL.

The term REPL has been diluted, so I should define it: A REPL is a way to interactively inspect and modify running, partially correct software. My typical workflow is to open my editor, start the REPL, and start the application server from within it. I can make requests from my browser (to the running server), recompile functions, run functions, add new libraries, and inspect variables.
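As a toy illustration (my example, not the author’s) of that workflow: in-memory state survives while individual functions are redefined.

```clojure
;; Long-lived state in the running process, e.g. open web sessions
(def sessions (atom {:alice {:cart [:book]}}))

(defn cart-size [user]
  (count (get-in @sessions [user :cart])))

(cart-size :alice) ;; => 1

;; Recompile the function with a keystroke; the atom is untouched
(defn cart-size [user]
  (count (get-in @sessions [user :cart] [])))

(cart-size :alice) ;; => still 1, against the same live state
```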

The REPL accelerates learning by increasing the speed and information richness of feedback. While programming, you learn about:

  • Your problem domain

  • The languages and libraries you’re using

  • The existing codebase and its behavior

The REPL improves the latency and bandwidth of feedback. Faster and richer feedback helps you learn. It lets you ask more questions, check your assumptions, and learn from and correct your mistakes. Fast, rich feedback is essential to achieving a flow state.

The obvious contrast to the REPL is the mainstream edit-compile-run (ECR) loop that most languages support. You edit your source code, run the compiler, and run the program. Let’s look at the main differences between REPL and ECR:

  • In ECR, your state starts from scratch. In REPL, your state is maintained throughout. All of the variables you set up are still there. Web sessions are still open.

  • In ECR, your compiler may reject your program, forcing you back into the Edit phase. Nothing is running, so you must fix it before continuing. In REPL, when the compiler rejects your change, the system is still running with the old code, so you can use runtime information.

  • In ECR, if you want to try something out, you have to write an entire program to compile and run. In REPL, trying something out means typing the expression and hitting a keystroke.

The end result is that the ECR loop is much slower than the REPL. One of the benefits of modern incremental testing practices (like TDD) is that it approximates the fast feedback you get from the REPL:

  • With testing, you do not maintain your state. Instead, you write code to initialize the state for each test.

  • With testing, you make small changes to the code before rerunning the tests so you are usually not far from a running program.

  • With testing, it is easy to add a new test and run just that.

The advantage of testing is that you have a regression suite, which you don’t get from the REPL. But feedback in testing is poorer. I haven’t heard of anyone writing a test just to see what the result of an expression is. And, by the way, doing incremental testing with the REPL is a breeze. It’s like the best of both worlds.

The REPL gives you fast, rich feedback in three main ways:

  • Maintaining state — Your running system is still running, with all in-memory stuff still loaded. This means that after editing a function and hitting a keystroke, you can inspect the result of your change with a delay that is below human perception.

  • Running small expressions — Within your running system, you can understand the return value from any expression you can write, including expressions calling your code or libraries you use. The cost of running these expressions is below a psychological threshold, so they feel almost free compared to having to scaffold a test or a public static void main(String[] argv).

  • Ad hoc inspection of the running system — This is a big one you gain skill with over time. You can do anything you can imagine, from running your partially completed function (just to make sure it does what you expect) to printing out the value of a global variable (that you saved values to from your last web request). The flexibility makes tools like debuggers feel rigid.

However, other languages have chipped away at the advantages of REPL-Driven Development. I already talked about incremental testing approaches (like TDD) and how they approximate the feedback of the REPL. But there are more technologies that provide good feedback in the mainstream ECR (Edit-Compile-Run) paradigm:

  • Static analysis — You can get feedback on problems without leaving the edit phase with tools like LSP and squiggles under your code.

  • Static types — If you’ve got good types and you know how to use them, the Edit-Compile cycle can also give you rich feedback. The question is whether your compiler is fast enough to keep up.

  • IDEs with run buttons — Many IDEs for compiled languages use their own, incremental compiler. The code is constantly being compiled as you edit. When you hit the Run button, you’re essentially cutting out the Compile phase (which can often be very lengthy). If you can set it up to run a small expression at a keystroke, you’re very close.

  • Autocomplete — Autocomplete speeds up the Edit phase. Autocomplete with the REPL is a cinch. You can inspect what variables are available in the environment dynamically. However, modern IDEs can use static analysis to aid autocomplete.

  • Incremental testing — Incremental testing (like TDD) speeds up the Edit and Run phases. Added here just for completeness.

But don’t tell anyone: we can use these in addition to the REPL. In fact, Clojure has an excellent LSP and a great testing story. The only thing we don’t have is a great story about static typing.

Many languages claim that they have a REPL. But what they really have is an interactive prompt where you can type in expressions and get the result. This is great! But it doesn’t capture the full possibility of the REPL.

People often ask what’s missing, especially from a very dynamic language like JavaScript. I’ve tried to set up a REPL-Driven Development workflow in JavaScript, but I encountered these roadblocks. There are probably more:

  • Redefining const: In Clojure, we pride ourselves on immutability. However, global variables are mutable so that we can redefine them during development. Unfortunately, JavaScript engines are strict about global variables defined with const. I found no way to redefine them once they were defined.

  • Reloading code: It’s not clear how to reload code after it has changed. Let’s say I have a module called catfarm.js and I modify one of the functions in it. My other module old-macdonald.js imports catfarm. How do I get old-macdonald to run the new code? The engines I tried did not re-read the imported modules, instead opting for a cached version. In addition, even if they did, the old-macdonald code needs to be recompiled. Clojure’s global definitions have a mechanism to allow them to be redefined, and any accesses to them after redefinition are immediately available. If I recompile a function a in Clojure, the next time I call b (which calls a), it calls the new version of a.

  • Calling functions from other modules: When you import a module, it’s basically a compiler directive where you say what you’re importing. But how do you call things that aren’t imported? Why would you do that? Because you want to try them out! This, combined with not being able to re-import them, makes it really hard to incrementally work on your code. In Clojure, require is a function that loads new modules. And in-ns is a function that lets you navigate through the modules like in a directory structure from your terminal. require() in JavaScript (the old way of doing modules) worked more like that.
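For contrast, here is a quick core-Clojure sketch of loading a module and calling into it at runtime, with no compile-time import declaration:

```clojure
;; Load a namespace at runtime and call into it, either through an
;; alias or with a fully qualified symbol.
(require '[clojure.string :as str])

(println (str/upper-case "catfarm"))          ; via the alias
(println (clojure.string/reverse "catfarm"))  ; fully qualified

;; requiring-resolve even loads the namespace and looks up the var
;; in one step -- handy for ad hoc exploration.
(def upper (requiring-resolve 'clojure.string/upper-case))
(println (upper "old-macdonald"))
```

Because require and requiring-resolve are ordinary functions, they can be typed at the REPL mid-session, which is exactly what the JavaScript import directive can’t do.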

Hot module reloading is an attempt to address these limitations. It also seems like a major project with a lot of pitfalls. And it still doesn’t maintain state. Maybe it will get good enough one day to be close to REPL-Driven Development. Or maybe Node should have a “REPL” mode where const variables can be redefined and modules can be reloaded and navigated.

The biggest problem with REPLs is that they require an enormous amount of skill to operate effectively. To know what you need to recompile, you must understand how Clojure loads code and how Vars work. You need to navigate namespaces. Most importantly, you need to develop the habit of using the REPL, which is not built-in. It’s common for someone in a beginner’s chat to ask “what happens when I call foo on a nil?” My first reaction is: “Why are you asking me? Instead of typing it here, type it in the REPL!” People need to be indoctrinated.
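That beginner question really is a one-liner at the REPL. A few nil experiments, each a single form:

```clojure
;; "What happens when I call foo on a nil?" -- just try it:
(first nil)        ;; => nil  (most seq functions treat nil as empty)
(count nil)        ;; => 0
(conj nil 1)       ;; => (1)  (conj treats nil as an empty list)
(try (inc nil)
     (catch NullPointerException _ :boom))  ;; => :boom
```

Each answer arrives in milliseconds, which is the whole point: the cost of asking is lower than the cost of asking someone else.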

Here are some ways to improve your use of the Clojure REPL:

  • Next time you’re writing a function, use a rich comment block ( (comment …) ) to call that function. After the tiniest of modifications, call the function to inspect the return value. This is useful when writing long chains of functions.

  • The next time you’re tempted to look up documentation for a function, do a little experiment to see what happens at the REPL. Some common questions to answer:

    • What type does a function return?

    • Can I call that with an empty value?

    • What does the zero-argument version do?

  • Learn the keystrokes in your editor to evaluate:

    • The whole file

    • The current top-level form (often a defn)

    • The expression just before the cursor

  • Learn the keystrokes to run the tests from your editor. Run your tests. Edit and compile functions (see previous bullet). Run the tests again. This combines the best of incremental testing and RDD.
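A rich comment block from the first bullet might look like this (the function and data here are illustrative):

```clojure
(defn total-price
  "Sum line-item prices, then apply a flat discount."
  [items discount]
  (- (reduce + (map :price items)) discount))

;; Rich comment block: never runs on load, but each form inside is
;; one keystroke away from evaluation in the REPL.
(comment
  (total-price [{:price 10} {:price 15}] 5)  ; => 20
  (total-price [] 0)                         ; => 0  (empty case)
  (map :price [{:price 10} {:price 15}])     ; inspect an intermediate step
  )
```

After each tiny edit to total-price, re-evaluate the defn, then re-run a form from the comment block to inspect the new result.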

I love the REPL. I miss it when I go to other languages. The REPL is also forgiving. It can make up for a lot of missing tooling (like autocomplete and debuggers). The REPL also requires a lot of skill. Squiggles and LSP can immediately give you hints for making your code better. But the REPL requires a deep understanding of the language and lots of practice. This is ultimately the biggest barrier to its adoption. People who haven’t learned how to use the REPL don’t even know what they are missing.

PS: You can learn the art of REPL-Driven Development in my Beginner Clojure signature course.

Permalink

On Interactive Development

by Laurence Chen

When I was a child, my sister caught chickenpox. Instead of isolating us, our parents let us continue playing together, and I ended up getting infected too. My father said, “It’s better to get chickenpox now—it won’t hurt you. Children aren’t just miniature adults.” This saying comes from pediatric medicine. While children may resemble adults on the outside, their internal physiology is fundamentally different. Some illnesses are mild for children but can hit adults much harder—chickenpox being a prime example.

I often think of this phrase, especially when reflecting on Clojure’s interactive development model: interactive development feels natural and smooth in small systems, but without deliberate effort, it often breaks down as the system grows. Just as children and adults must be treated differently, small and large systems have fundamentally different needs and designs when it comes to interactive capabilities.

Let me start by defining interactive development:

Interactive development is a development model that allows developers to modify, execute, and observe arbitrary small sections of code in a running system. This enables developers to obtain input/output behavior of that code and explore related structures, thereby supplementing or verifying the understanding gained from reading the source code.

Benefits of interactive development:

  1. It’s easy to enter a state of flow, since developers can quickly verify new code and receive meaningful feedback.
  2. It supports thought processes—developers can choose to understand complex logic either by reading source code or by observing input/output behavior in a black-box manner, corresponding to what Out of the Tar Pit classifies as informal reasoning about the code and testing, respectively.
  3. It partially replaces unit testing.
  4. It partially replaces integration testing.

Despite these obvious benefits, interactive development is one of the first things to erode as systems evolve.

Reason for erosion #1: Tight coupling between components and system

Without specific attention to interactive development, system initialization often tightly couples all components together. A common pattern is to load all config files, initialize databases and background schedulers, set up web routes, and assemble all components into a large object during startup. In such cases, any change to a component requires restarting the entire system for the changes to take effect.

This creates a problem: if I just want to tweak a setting—say, changing a port from 8080 to 8089—I’d have to restart the entire system. And system startup times can be slow enough to disrupt flow.

The solution is to enable local control over each component’s lifecycle, providing individual start/stop functions for each service. Popular Clojure lifecycle management libraries like mount, integrant, and makina support this.

The benefit of such design is that during interactive development, you can freely stop a component (e.g., just shut down :web), observe behavior after changes, and restart it individually.
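The shape of such per-component lifecycles can be sketched in plain Clojure. This is a toy stand-in for what mount/integrant provide, and every name in it is illustrative:

```clojure
;; A toy system registry: each component has its own start/stop,
;; so one component can be bounced without restarting the rest.
(def system (atom {}))

(defn start! [key start-fn]
  (swap! system assoc key (start-fn)))

(defn stop! [key stop-fn]
  (when-some [component (get @system key)]
    (stop-fn component)
    (swap! system dissoc key)))

;; Pretend :db and :web are real services:
(start! :db  (fn [] {:conn "db-conn"}))
(start! :web (fn [] {:server "jetty" :port 8080}))

;; Change the port: bounce only :web, while :db keeps running.
(stop!  :web (fn [c] (println "stopping" (:server c))))
(start! :web (fn [] {:server "jetty" :port 8089}))
```

The real libraries add dependency ordering and config handling on top, but the interactive payoff is the same: restarting :web never touches :db.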

Reason for erosion #2: Tight coupling between business logic and services

Some components, like web servers or schedulers, are expensive to restart—often taking several seconds. If business logic is tightly coupled to these services, even after re-evaluating code in the REPL, behavior won’t immediately reflect the change because the service is still running old logic.

This can be addressed by decoupling business logic from services using vars.

In many Clojure web applications, you’ll see this pattern:

(def web-handler ...)
(run-jetty #'web-handler {:port 8080})

In Clojure, #' is a var reference. It ensures the system accesses the variable indirectly during runtime. So even if web-handler is redefined later, Jetty will automatically use the new version. There’s no need to restart the server, because it’s bound to the var, not the original function.
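The effect of passing the var instead of the function value can be demonstrated with nothing but core Clojure (the handler names are illustrative):

```clojure
(defn web-handler [req] {:status 200 :body "v1"})

(def bound-to-var   #'web-handler)  ; what (run-jetty #'web-handler ...) holds
(def bound-to-value web-handler)    ; what (run-jetty web-handler ...) would hold

;; Redefine the handler, as re-evaluating the defn in the REPL would:
(defn web-handler [req] {:status 200 :body "v2"})

(:body (bound-to-var {}))    ; => "v2"  -- the var deref sees the new definition
(:body (bound-to-value {}))  ; => "v1"  -- the captured value is stale
```

A var is itself invokable: calling it dereferences to the current function on every invocation, which is exactly the indirection a long-running server needs.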

This technique applies to cron jobs too. For example:

(defn job-handler [n] (prn "job " n))
(schedule-task! {:handler #'job-handler 
                 :tab "*/5 * * * * * *"})

As long as schedule-task! binds to #'job-handler and not the function body, you can redefine and re-evaluate job-handler without re-registering the whole job. This significantly shortens feedback loops and improves interactive development efficiency.

Reason for erosion #3: SQL queries

When using SQL databases, interactive development usually requires:

  1. The ability to compose SQL queries programmatically.
  2. The ability to inspect the raw SQL query from the corresponding function.

Libraries like YeSQL make these hard to achieve, because:

  • SQL is stored as strings in external files, so it can’t be composed.
  • Functions are generated via macros, making it hard to use “go to definition” in interactive development and to trace SQL queries from function names.

A better approach is using a data-driven library like HoneySQL, which allows SQL queries to be expressed as Clojure maps:

(def q {:select [:id :name]
        :from [:users]
        :where [:= :status "active"]})

You can then inspect the query string by calling (hsql/format q).
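Because the query is just a map, composition is ordinary data manipulation. A sketch using only core functions (rendering to SQL would still go through hsql/format):

```clojure
(def base-q {:select [:id :name]
             :from   [:users]})

;; Compose: add a where clause, then pagination, with plain assoc/merge.
(def active-q (assoc base-q :where [:= :status "active"]))
(def page-q   (merge active-q {:limit 10 :offset 0}))

;; Something a string-based library can't offer: inspect and tweak any
;; part of the query as data before it is ever rendered to SQL.
(:where active-q)  ; => [:= :status "active"]
```

At the REPL, each intermediate map can be inspected and re-shaped before formatting, which is precisely the interactivity that SQL-in-external-files loses.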

Reason for erosion #4: Web handlers, routers, interceptors

If you use Ring to develop web apps, interactive development is very intuitive:

  • Requests are Clojure maps.
  • Responses are Clojure maps.
  • Handlers are just plain functions.

You can test handlers like this:

(my-handler
 {:uri "/new"
  :request-method :get})

However, things get more complex with webhook handlers. These often need to extract raw payloads from (:body request), and the body must be a java.io.InputStream. This complexity can hinder interactive development.

A good workaround is ring-mock:

(require '[ring.mock.request :as mock])

(def req
  (-> (mock/request :post "/webhook")
      (mock/header "Content-Type" "application/json")
      (mock/header "x-signature" "sig-1234")
      (mock/body json-body)))

(webhook-handler req)

Other interactive development blockers are similar to this ring-mock example: when we’re unfamiliar with a library, we may miss tools that enable interactive work. For instance, with reitit (a router), you can use match-by-path to check if a URI routes correctly. With sieppari (an interceptor library), you can use execute to observe how a set of interceptors behaves given a request.

Reason for erosion #5: New dependency hot-reload

By default, adding a new library to deps.edn does not make it available in the running REPL session.

I recommend Launchpad as a solution. It handles hot-reloading of dependencies automatically—no need to remember or invoke specific hot-reload functions.

Reason for erosion #6: Editor integration

If your editor has strong REPL integration—or even lets you modify it using Lisp—you can take interactive development to another level.

It wasn’t until I’d used Conjure for quite a while that I realized its full potential. For instance, I used to believe all testing had to be done via shell. But Conjure lets you run the test under the cursor with <localleader> tc.

Another hidden gem in Conjure is pretty-printing nested EDN values. If you use clojure.pprint/pprint, Conjure prefixes each line with ; out, marking it as output. If you don’t want this, you can use (tap> variable) to send it to the tap queue, then <localleader> vt to display the queue, which will auto pretty-print without ; out.
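The tap mechanism itself is plain Clojure, independent of Conjure. A minimal sketch of how a tap consumer (like Conjure’s queue view) receives values:

```clojure
;; tap> sends a value to every registered tap handler; Conjure's
;; <localleader>vt view is just one such consumer.
(def captured (promise))

(add-tap (fn [v] (deliver captured v)))

(tap> {:user "alice" :roles #{:admin}})

;; Tap delivery is asynchronous; wait up to 1s for the handler to fire.
(deref captured 1000 :timeout)  ; => {:user "alice" :roles #{:admin}}
```

Unlike println-debugging, tap> leaves the return value of your expression untouched and can be left in production code, since with no taps registered it is nearly free.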

There are two classes of editor-integrated commands that I believe still hold great potential:

  • Do something based on what’s under the cursor.
  • Navigate somewhere based on what’s under the cursor.

Conclusion

Interactive development isn’t just a productivity tool—it’s a way of working in active dialogue with the system. It turns programming into something playful: through cycles of trial, observation, and adjustment, we refine both our understanding and design. The quality of this dialogue depends heavily on system architecture, tooling, library choices, and editor integration.

But just as “children aren’t miniature adults,” interactive development may feel natural in small systems, yet as systems grow, maintaining it takes conscious effort. Otherwise, it quietly slips away amidst tighter coupling, increased complexity, and growing dependencies.

The investment is worth it. It deepens our system understanding, makes flow states more accessible, gives test-oriented functions new life, transforms the editor into a rich interface—and helps us design truly decoupled systems.


Permalink

Annually-Funded Developers' Update: Mar./April 2025

Hello Fellow Clojurists! This is the second report from the 5 developers receiving Annual Funding in 2025.

Dragan Duric: Apple M Engine Neanderthal
Eric Dallo: metrepl, lsp-intellij, repl-intellij, lsp, lsp4clj
Michiel Borkent: clj-kondo, squint, babashka, fs, SCI, and more…
Oleksandr Yakushev: CIDER, Compliment, JDK24
Peter Taoussanis: Telemere, Tufte, Truss

Dragan Duric

2025 Annual Funding Report 2. Published May 5, 2025.

My goal with this funding in 2025 is to support Apple silicon (M CPUs) in Neanderthal (and other Uncomplicate libraries where that makes sense and where it’s possible).

In March and April, I implemented the JNI bindings for several Apple Accelerate libraries (blas_new, lapack_new, Sparse, BNNS, BNNS Graph, vDSP, vForce and vImage) and implemented almost all functionality of Neanderthal’s Apple M engine based on Accelerate. It will soon be ready for a proper release (currently waiting on some bugfixes, polishing, and a javacpp-presets release to be available in Maven Central).

In more detail:

Here’s what I proposed when applying for the CT grant.
I proposed to implement an Apple M engine for Neanderthal. This involves:

  • buying an Apple M2/3 Mac (the cheapest M3 in Serbia is almost 3000 USD with VAT).
  • learning enough macOS tools (Xcode was terrible back in the day) to be able to do anything.
  • exploring JavaCPP support for ARM and macOS.
  • exploring relevant libraries (OpenBLAS may even work through JavaCPP).
  • exploring Apple Accelerate.
  • learning enough JavaCPP tooling to be able to see whether it is realistic that I build Accelerate wrapper (and if I can’t, at least to know how much I don’t know).
  • I’ve forgotten even the little C/C++ that I knew back in the day. This may also give me some headaches, as I’ll have to quickly pick up whatever is needed.
  • writing articles about relevant topics so Clojurians can pick up this functionality as it arrives.

Projects directly involved:
https://github.com/uncomplicate/neanderthal
https://github.com/uncomplicate/deep-diamond
https://github.com/uncomplicate/clojure-cpp

As I had implemented an OpenBLAS-based engine in January-February, in March-April I tackled the main objective: Apple Accelerate bindings. As Apple’s documentation is not stellar, and there are multiple tools and languages involved, it was slow and tedious work consisting of lots of experimentation and discovery. Even boring, I would say. But slowly and steadily I discovered how the relevant JavaCPP generators work, unearthed Accelerate C++ intricacies, fought with them one by one, and eventually managed to create proper Java bindings! Along the way I even contributed some fixes and updates to JavaCPP itself! YAY! This is available as a new project under the Uncomplicate umbrella at https://github.com/uncomplicate/apple-presets

Next, I returned to the pleasant part of the work - programming in Clojure - and almost completed the dedicated Neanderthal engine that utilizes Apple Accelerate for BLAS and LAPACK, as well as math functions and random number generators, on M Macs. This covers the core and linalg namespaces, which were already supported by the alternative OpenBLAS engine (implemented in Jan-Feb), AND the math and RNG engines. I didn’t manage to iron out all the bugs, so it isn’t ready for release yet, but it certainly will be in May-June. I also didn’t manage to tackle sparse matrices, but as I managed to create Accelerate bindings for all types, including Sparse, I expect this not to be a problem and to be completed during this funding round.

Looking at the funding proposal, I can say that I’m very satisfied that all the features that I promised to build are progressing even better than expected, so that will leave some time to try to do some of the features that I said I hope to be able to support, namely to explore Deep Diamond Tensor support on Apple M (via BNNS and/or BNNS Graph) and GPU support via Metal.

I even got some ideas for additional projects based on native Apple functionality related to machine learning and audio/video, but let’s not get too far ahead.

All in all, I feel optimistic about how this project progresses!


Eric Dallo

2025 Annual Funding Report 2. Published April 30, 2025.

In these last 2 months I was able to work on multiple projects and even focus on an exciting new project called metrepl, an nREPL middleware that helps extract metrics about your REPL. It’s really helpful when you have multiple coworkers working on multiple projects and you want to collect information about performance, time spent in REPL features, and more! Besides that, I worked hard on continuing to improve the IntelliJ experience via the 2 OSS plugins for LSP + REPL, and of course on improving clojure-lsp, now the base of all major editors.

metrepl

- 0.3.1

  • Improve export exception handler
  • Remove jvm started flaky metric
  • Fix event/op-completed metric to measure time correctly
  • Add event/test-executed event.
  • Add event/test-passed, event/test-errored and event/test-failed events.
  • Add session-time-ms to close op.
  • Add :project-path to metrics.
  • Add compatibility with older Clojure versions

clojure-lsp

2025.03.07-17.42.36 - 2025.04.23-18.16.46

  • General
    • Bump clj-kondo to 2025.02.20.
    • Add support for OpenTelemetry (OTLP) logging, enabled if configured. #1963
    • Bump rewrite-clj to 0bb572a03c0025c01f9c36bfca2815254683fbde. #1984
    • Bump clj-kondo to 2025.02.21-20250314.135629-7.
    • Add support for ignoring tests references for the clojure-lsp/unused-public-var linter. #1878
    • Add :test-locations-regex configuration to allow customizing test file detection for the unused-public-var linter’s :ignore-test-references? and test code lenses. #1878
    • Improve and standardize all logs for better troubleshooting and metrics collection.
    • Fix unused-public-var false positives when :ignore-test-references? true.
    • Bump clj-kondo to 2025.04.07.
  • Editor
    • Improve paredit slurp and barf corner cases. #1973 #1976
    • Add Semantic Tokens support for the Clojure Reader Dispatch macro #_ (ignore next form). #1965
    • Fix regression on previous version on snippets completion. #1978
    • Add forward, forward-select, backward and backward-select paredit actions.
    • Show add require code action for invalid syntax codes like my-alias/. #1957
    • Improve startup performance for huge projects avoiding publish empty diagnostics for every file of the project unnecessarily.
    • Improve timbre context log.
    • Fix suggestion for add require code action. #2017
    • Improve find definition so it works on declare forms too. #1986

clojure-lsp-intellij

3.1.1 - 3.4.0

  • Remove : lexer check since this is delegated to clojure-lsp/clj-kondo already.
  • Fix comment form complain about missing paren.
  • Improve server installation fixing concurrency bugs + using lsp4ij install API.
  • Bump clj4intellij to 0.7.1
  • Support Namespaces on search everywhere (Shift + shift). #64
  • Add support for forward, backward, forward-select, backward-select paredit actions. #72
  • Fix go to declaration or usages. #70

clojure-repl-intellij

Together with the help of @afucher, we greatly improved the IntelliJ REPL experience, fixing multiple issues and adding multiple features. The experience is now pretty close to the REPL experience in other editors!

2.3.0 - 2.5.2

  • Update repl window ns after switching ns.
  • Fix exception on settings page.
  • Fix special form evaluations. #135
  • Add support for JVM args on local REPL configuration. #124
  • Send to REPL eval results. #92
  • Fix repl input when evaluated the same input of last eval.
  • Fix history navigation via shortcut not working after 2.0.0.
  • Enhance REPL evaluations. #108
    • Isolate ns from REPL windows and file editors
    • Evaluate ns form from file automatically to avoid namespace-not-found errors.
  • Fix REPL window horizontal scrollbar not working.
  • Fix REPL window broken after making any change to its layout. #144
  • Disable “clear REPL” action when REPL is not connected. #126
  • Bump clj4intellij to 0.8.0
  • Configure project with IntelliJ integration tests (headless)

lsp4clj

1.12.0 - 1.13.0

  • Add textDocument/selectionRange LSP feature coercers.
  • Add inlay-hint LSP feature coercers.

clj4intellij

0.7.0 - 0.8.0

  • Create def-extension to create plugin.xml extension points easily and more idiomatic.
  • Fix clojure-lsp hook
  • Drop support of older IntelliJ versions (2021/2022). Now requires minimum IntelliJ 2023.3 (Build 233)
  • Bump JAVA min version to 17
  • Add support for tests.

Michiel Borkent

2025 Annual Funding Report 2. Published May 2, 2025.

In this post I’ll give updates about open source I worked on during March and April 2025.

To see previous OSS updates, go here.

Sponsors

I’d like to thank all the sponsors and contributors that make this work possible. Without you the below projects would not be as mature or wouldn’t exist or be maintained at all! So a sincere thank you to everyone who contributes to the sustainability of these projects.


If you want to ensure that the projects I work on are sustainably maintained, you can sponsor this work in the following ways. Thank you!

On to the projects that I’ve been working on!

Blog posts

I blogged about an important improvement in babashka regarding type hints here.

Interviews

Also I did an interview with Jiri from Clojure Corner by Flexiana, viewable here.

Updates

Here are updates about the projects/libraries I’ve worked on in the last two months.

  • babashka: native, fast starting Clojure interpreter for scripting.

    • Improve Java reflection based on provided type hints (read blog post here)
    • Add compatibility with the fusebox library
    • Fix virtual ThreadBuilder interop
    • Add java.util.concurrent.ThreadLocalRandom
    • Add java.util.concurrent.locks.ReentrantLock
    • Add classes:
      • java.time.chrono.ChronoLocalDate
      • java.time.temporal.TemporalUnit
      • java.time.chrono.ChronoLocalDateTime
      • java.time.chrono.ChronoZonedDateTime
      • java.time.chrono.Chronology
    • #1806: Add cheshire.factory namespace (@lread)
    • Bump GraalVM to 24
    • Bump SCI to 0.9.45
    • Bump edamame to 1.4.28
    • #1801: Add java.util.regex.PatternSyntaxException
    • Bump core.async to 1.8.735
    • Bump cheshire to 6.0.0
    • Bump babashka.cli to 0.8.65
  • clerk: Moldable Live Programming for Clojure

    • Replace tools.analyzer with a more light-weight analyzer which also adds support for Clojure 1.12
  • squint: CLJS syntax to JS compiler

    • #653: respect :context expr in compile-string
    • #657: respect :context expr in set! expression
    • #659: fix invalid code produced for REPL mode with respect to return
    • #651 Support :require + :rename + allow renamed value to be used in other :require clause
    • Fix #649: reset ns when compiling file and fix initial global object
    • Fix #647: emit explicit null when written in else branch of if
    • Fix #640: don’t emit anonymous function if it is a statement (@jonasseglare)
    • Fix #643: Support lexicographic compare of arrays (@jonasseglare)
    • Fix #602: support hiccup-style shorthand for id and class attributes in #jsx and #html
    • Fix #635: range fixes
    • Fix #636: add run!
    • defclass: elide constructor when not provided
    • Fix #603: don’t emit multiple returns
    • Drop constructor requirement for defclass
  • clj-kondo: static analyzer and linter for Clojure code that sparks joy.

    • #2522: support :config-in-ns on :missing-protocol-method
    • #2524: support :redundant-ignore on :missing-protocol-method
    • #1292: NEW linter: :missing-protocol-method. See docs
    • #2512: support vars ending with ., e.g. py. according to clojure analyzer
    • #2516: add new --repro flag to ignore home configuration
    • #2493: reduce image size of native image
    • #2496: Malformed deftype form results in NPE
    • #2499: Fix (alias) bug (@Noahtheduke)
    • #2492: Report unsupported escape characters in strings
    • #2502: add end locations to invalid symbol
    • #2511: fix multiple parse errors caused by incomplete forms
    • document var-usages location info edge cases (@sheluchin)
    • Upgrade to GraalVM 24
    • Bump datalog parser
    • Bump built-in cache
  • SCI: Configurable Clojure/Script interpreter suitable for scripting

    • Fix #957: sci.async/eval-string+ should return promise with :val nil for ns form rather than :val <Promise>
    • Fix #959: Java interop improvement: instance method invocation now leverages type hints
    • Fix #942: improve error location of invalid destructuring
    • Add volatile? to core vars
    • Fix #950: interop on local in CLJS
    • Bump edamame to 1.4.28
  • quickdoc: Quick and minimal API doc generation for Clojure

    • Fix #32: fix anchor links to take into account var names that differ only by case
    • Revert source link in var title and move back to <sub>
    • Specify clojure 1.11 as the minimal Clojure version in deps.edn
    • Fix macro information
    • Fix #39: fix link when var is named multiple times in docstring
    • Upgrade clj-kondo to 2025.04.07
    • Add explicit org.babashka/cli dependency
  • CLI: Turn Clojure functions into CLIs!

    • #119: format-table now formats multiline cells appropriately (@lread)
    • Remove pom.xml and project.clj for cljdoc
    • #116: Un-deprecate :collect option to support custom transformation of arguments to collections (@lread)
    • Support :collect in :spec
  • process: Clojure library for shelling out / spawning sub-processes

    • #163, #164: Program resolution strategy for exec and Windows now matches macOS/Linux/PowerShell (@lread)
    • Fix memory leak by executing shutdown hook when process finishes earlier than VM exit (@maxweber)
  • html: Html generation library inspired by squint’s html tag

    • Fix #3: allow dynamic attribute value: (html [:a {:a (+ 1 2 3)}])
    • Fix #9: shortcuts for id and classes
  • cherry: Experimental ClojureScript to ES6 module compiler

    • Add cljs.pprint/pprint
    • Add add-tap
    • Bump squint compiler common which brings in new #html id and class shortcuts + additional features and optimizations, such as an optimization for aset
  • nbb: Scripting in Clojure on Node.js using SCI

    • Add better Deno + jsr: dependency support, stay tuned.
  • instaparse-bb: Use instaparse from babashka

  • edamame: Configurable EDN/Clojure parser with location metadata

    • #117: throw on triple colon keyword
  • fs: File system utility library for Clojure

    • #141: fs/match doesn’t match when root dir contains glob or regex characters in path
    • #138: Fix fs/update-file to support paths (@rfhayashi)
  • sql pods: babashka pods for SQL databases

    • Upgrade to GraalVM 23, fixes encoding issue with Korean characters

Other projects

These are (some of the) other projects I’m involved with that saw little to no activity in the past month.

- [rewrite-edn](https://github.com/borkdude/rewrite-edn): Utility lib on top of rewrite-clj
- [deps.clj](https://github.com/borkdude/deps.clj): A faithful port of the clojure CLI bash script to Clojure
- [scittle](https://github.com/babashka/scittle): Execute Clojure(Script) directly from browser script tags via SCI
- [rewrite-clj](https://github.com/clj-commons/rewrite-clj): Rewrite Clojure code and edn
- [pod-babashka-go-sqlite3](https://github.com/babashka/pod-babashka-go-sqlite3): A babashka pod for interacting with sqlite3
- [tools-deps-native](https://github.com/babashka/tools-deps-native) and [tools.bbuild](https://github.com/babashka/tools.bbuild): use tools.deps directly from babashka
- [http-client](https://github.com/babashka/http-client): babashka's http-client
- [http-server](https://github.com/babashka/http-server): serve static assets
- [bbin](https://github.com/babashka/bbin): Install any Babashka script or project with one command
- [sci.configs](https://github.com/babashka/sci.configs): A collection of ready to be used SCI configs.
  - Added a configuration for `cljs.spec.alpha` and related namespaces
- [qualify-methods](https://github.com/borkdude/qualify-methods): Initial release of experimental tool to rewrite instance calls to use fully qualified methods (Clojure 1.12 only)
- [neil](https://github.com/babashka/neil): A CLI to add common aliases and features to deps.edn-based projects.
- [tools](https://github.com/borkdude/tools): a set of [bbin](https://github.com/babashka/bbin/) installable scripts
- [sci.nrepl](https://github.com/babashka/sci.nrepl): nREPL server for SCI projects that run in the browser
- [babashka.json](https://github.com/babashka/json): babashka JSON library/adapter
- [squint-macros](https://github.com/squint-cljs/squint-macros): a couple of macros that stand in for [applied-science/js-interop](https://github.com/applied-science/js-interop) and [promesa](https://github.com/funcool/promesa) to make CLJS projects compatible with squint and/or cherry
- [grasp](https://github.com/borkdude/grasp): Grep Clojure code using clojure.spec regexes
- [lein-clj-kondo](https://github.com/clj-kondo/lein-clj-kondo): a leiningen plugin for clj-kondo
- [http-kit](https://github.com/http-kit/http-kit): Simple, high-performance event-driven HTTP client+server for Clojure
- [babashka.nrepl](https://github.com/babashka/babashka.nrepl): The nREPL server from babashka as a library, so it can be used from other SCI-based CLIs
- [jet](https://github.com/borkdude/jet): CLI to transform between JSON, EDN, YAML and Transit using Clojure
- [pod-babashka-fswatcher](https://github.com/babashka/pod-babashka-fswatcher): babashka filewatcher pod
- [lein2deps](https://github.com/borkdude/lein2deps): leiningen to deps.edn converter
- [cljs-showcase](https://github.com/borkdude/cljs-showcase): Showcase CLJS libs using SCI
- [babashka.book](https://github.com/babashka/book): Babashka manual
- [pod-babashka-buddy](https://github.com/babashka/pod-babashka-buddy): A pod around buddy core (Cryptographic Api for Clojure)
- [gh-release-artifact](https://github.com/borkdude/gh-release-artifact): Upload artifacts to Github releases idempotently
- [carve](https://github.com/borkdude/carve): Remove unused Clojure vars
- [4ever-clojure](https://github.com/oxalorg/4ever-clojure): Pure CLJS version of 4clojure, meant to run forever!
- [pod-babashka-lanterna](https://github.com/babashka/pod-babashka-lanterna): Interact with clojure-lanterna from babashka
- [joyride](https://github.com/BetterThanTomorrow/joyride): VSCode CLJS scripting and REPL (via [SCI](https://github.com/babashka/sci))
- [clj2el](https://borkdude.github.io/clj2el/): transpile Clojure to elisp
- [deflet](https://github.com/borkdude/deflet): make let-expressions REPL-friendly!
- [deps.add-lib](https://github.com/borkdude/deps.add-lib): Clojure 1.12's add-lib feature for leiningen and/or other environments without a specific version of the clojure CLI


Oleksandr Yakushev

2025 Annual Funding Report 2. Published May 5, 2025.

Hello friends! Here’s an update on my March-April 2025 Clojurists Together work.

CIDER

  • We just published a huge CIDER 1.18 release that I spent two months working on. It is packed to the brim with features! See the full list of changes in the announcement.
  • 150 commits and 64 PRs across 6 repositories.
  • Auxiliary releases: cider-nrepl 0.52.1 -> 0.55.7, Orchard 0.30.1 -> 0.34.3.

Compliment

  • New release: 0.7.0.
  • New feature: priority-based candidate sorting.

Maintenance

  • Started testing all of the projects I maintain against JDK 24.
  • Added support for JDK 24 in Virgil (0.4.0).

Peter Taoussanis

2025 Annual Funding Report 2. Published April 30, 2025.

A big thanks to Clojurists Together, Nubank, and other sponsors of my open source work! I realise that it’s a tough time for a lot of folks and businesses lately, and that sponsorships aren’t always easy 🙏

- Peter Taoussanis

Hi folks! 👋👋
Hope everyone’s well, and those in Europe enjoying the first glimpses of actual ☀️ in a while :-)

Recent work

Telemere: structured logs and telemetry for Clj/s

Telemere v1 stable is now officially and finally available! 🍾🥳🎉

It was a lot of work to get here, but I’m happy with the results - and I’m very grateful for all the folks that have been patiently testing early releases and giving feedback 🙏

If you haven’t yet had an opportunity to check out Telemere, now’s a pretty good time.

It’s basically a modern rewrite of Timbre that handles both structured and unstructured logging for Clojure and ClojureScript applications. It’s small, fast, and very flexible.

I’ll of course continue to support Timbre, but Telemere offers a lot of advantages, and migration is often pretty straightforward.

There are a couple of video intros:

Telemere also has the most extensive docs I’ve written for a library, including both:

Tufte: performance monitoring for Clj/s

Tufte v3 RC1 is now also available.

Tufte’s been around for ages but recently underwent a major overhaul focused on improving usability, and interop with Telemere.

The two now share a common core for filtering and handling. This means that they get to share relevant concepts, terminology, capabilities, and config APIs.

The shared core also means wider testing, easier ongoing maintenance, and the opportunity for improvements to further cross-pollinate in future.

Performance has also been significantly improved, and the documentation greatly expanded. There’s too much new stuff to mention here, but as usual please see the release notes for details.

Other stuff

Several other releases worth mentioning:

I’ll note that Telemere, Tufte, and Truss are now intended to form a sort of suite of complementary observability tools for modern Clojure and ClojureScript systems:

  • Telemere for logging, tracing, and general telemetry
  • Tufte for performance monitoring
  • Truss for assertions and error handling

Together the 3x offer what I hope is quite a pleasant (and unique) observability story for Clojure/Script developers.

Upcoming work

Next couple months I expect to focus on:

  • Getting Tempel v1 stable out (data security framework for Clojure)
  • Significant work on Sente (realtime web comms for Clojure/Script)

After that, still need to decide. Might be additional stuff for Telemere, or gearing up for the first public release of Carmine v4 (Redis client + message queue for Clojure).

Cheers everyone! :-)

Permalink

LSP client in Clojure in 200 lines of code

A while ago I was prototyping integrating LLMs with LSP to enable a language model to answer questions about code while having access to code navigation tools provided by language servers. I wasn’t that successful with this prototype, but I found it cool that I could write a minimal LSP client in around 200 lines of code. Of course, it was very helpful that I had previously written a much more featureful LSP client for the Defold editor… So let me share with you a minimal LSP client, written in Clojure, in under 200 lines. Also, at the end of the post, I’ll share my thoughts on the LSP.

Who is the target audience of this blog post? I don’t even know… Clojure developers writing code editors? There are, like, 3 of us! Okay, let’s try to change the scope of this exercise a bit: let’s build a command line linter that uses a language server to do the work. Surely that wouldn’t be a problem…

The what

Some terminology and scope first. LSP stands for Language Server Protocol, a standard that defines how some text editor (a language client) should talk to some language-specific tool (a language server) that knows the semantics of a programming language and may provide contextual information like code navigation, refactoring, linting etc.

The main benefit of LSP is that the so-called M×N problem of IDEs and languages becomes M+N. Here is a good explanation. In short, as a language author, previously you had to write an integration for every code editor. Or, as an IDE author, you had to write a separate integration for every language. Now there is a common interface — LSP — and both language authors and IDE authors only need to support this interface.

In 200 LoC, we will implement the essential blocks of the LSP specification that support programmatic, read-only querying of language servers. We will implement:

  1. the base communication layer between the language client and server processes. It is similar to the HTTP protocol: client and server talk to each other using byte streams, with messages formatted as headers + JSON message bodies. The base layer establishes a way to exchange JSON blobs.
  2. JSON-RPC — a layer on top of the base layer that adds meaning to JSON blobs, turning them into either requests/responses, or notifications.
  3. A wrapper around the JSON-RPC connection that gives us a living, breathing language server we can talk to.

We will use Java 24 with virtual threads: we get to write simple blocking code that still performs and scales well. Now, here are a few things we will not implement:

  • A JSON parser. I mean, come on. We will just use a dependency. I picked jsonista because it’s fast and has a cool name.
  • Document syncing. When the user opens a file in a text editor and makes some changes to it without saving, the editor notifies running language servers about the new text of the open files. We are not building a text editor here, just a small PoC, so we’ll skip this.

Now, let’s go!

The how

If you just want to look at the code, here it is. Now I’ll walk you through it.

Base layer

First, we start with the base communication layer. The language server runs in another process, so the communication happens over an InputStream + OutputStream pair. We will run the language server as a subprocess and communicate via stdin/stdout, so a java Process will provide us the pair. Both client and server send and receive HTTP-like requests with JSON blobs. Each individual message looks like this:

Content-Length: 14\r\n
\r\n
{"json": true}

First, there are one or more headers, each terminated with \r\n; the Content-Length header is required. Then comes an empty line, followed by the JSON string. The headers are serialized using ASCII encoding (so 1 byte is always 1 char), while the JSON blob uses UTF-8.
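Getting Content-Length right is the one subtle bit: it counts the bytes of the UTF-8 encoded body, not its characters. A tiny sketch of framing an outgoing message (frame is a made-up helper for illustration, not part of the client we’re building):

```clojure
(import 'java.nio.charset.StandardCharsets)

;; Content-Length counts UTF-8 *bytes*, not characters: "é" is one char
;; but two bytes, so we must measure the encoded body.
(defn frame [^String body]
  (let [body-bytes (.getBytes body StandardCharsets/UTF_8)]
    (str "Content-Length: " (alength body-bytes) "\r\n\r\n" body)))

(frame "{\"json\": true}")
;; => "Content-Length: 14\r\n\r\n{\"json\": true}"
```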

We start with a function that reads a line of ASCII text from an InputStream:

(defn- read-ascii-line [^InputStream in]
  (let [sb (StringBuilder.)]
    (loop [carriage-return false]
      (let [ch (.read in)]
        (if (= -1 ch)
          (if (zero? (.length sb)) nil (.toString sb))
          (let [ch (char ch)]
            (.append sb ch)
            (cond
              (= ch \return) (recur true)
              (and carriage-return (= ch \newline)) (.substring sb 0 (- (.length sb) 2))
              :else (recur false))))))))

So, we read bytes one at a time into a string until we get to \r\n. If we reach the end of the stream, we return nil. We can’t use BufferedReader’s readLine here for a few reasons:

  • it buffers, meaning it might read more than we want.
  • it uses both \n and \r\n as line separators, while we only want \r\n.
  • it uses a single encoding, while the communication channel uses a mix of ASCII and UTF-8.

The next step is a single function that implements the whole base communication layer:

(defn- lsp-base [^InputStream in ^BlockingQueue server-in ^OutputStream out ^BlockingQueue server-out]
  (-> (Thread/ofVirtual)
      (.name "lsp-base-in")
      (.start
        #(loop []
           (when-some [headers (loop [acc {}]
                                 (when-let [line (read-ascii-line in)]
                                   (if (= "" line)
                                     acc
                                     (if-let [[_ field value] (re-matches #"^([^:]+):\s*(.+?)\s*$" line)]
                                       (recur (assoc acc (string/lower-case field) value))
                                       (throw (IllegalStateException. (str "Can't parse header: " line)))))))]
             (let [^String content-length (or (get headers "content-length")
                                              (throw (IllegalStateException. "Required header missing: Content-Length")))
                   len (Integer/valueOf content-length)
                   bytes (.readNBytes in len)]
               (if (= (alength bytes) len)
                 (do (.put server-in (json/read-value (String. bytes StandardCharsets/UTF_8) json/keyword-keys-object-mapper))
                     (recur))
                 (throw (IllegalStateException. "Couldn't read enough bytes"))))))))
  (-> (Thread/ofVirtual)
      (.name "lsp-base-out")
      (.start
        #(while true
           (let [^bytes message-bytes (json/write-value-as-bytes (.take server-out))]
             (doto out
               (.write (.getBytes (str "Content-Length: "
                                       (alength message-bytes)
                                       "\r\nContent-Type: application/vscode-jsonrpc; charset=utf-8\r\n\r\n")
                                  StandardCharsets/UTF_8))
               (.write message-bytes)
               (.flush)))))))

This function converts the client/server communication from an InputStream+OutputStream pair (bytes) to input and output BlockingQueues of JSON blobs. The "lsp-base-in" part reads headers from the InputStream, then reads a JSON object, and finally puts it onto the server-in queue. This way, whenever the language server sends us something, we’ll get it as JSON in a queue. The "lsp-base-out" part is the inverse: it reads JSON objects from server-out and writes them to the server. This way, when we want to send a message to the language server, we only need to put a JSON value onto the server-out queue.

JSON-RPC layer

The LSP client and server exchange JSON blobs in a special format called JSON-RPC. The main idea is to agree on the shape and meaning of the exchanged data so that exchanging JSON objects supports these use cases:

  1. send a request to perform a specific action and receive a response for this request (aka “remote procedure call”)
  2. send a notification that does not expect a response

These use cases are achieved by exchanging JSON objects with special combinations of fields:

  1. to send a request, use a JSON object with fields id (request identifier) and method (action identifier). Optionally, you can provide params, i.e. an “argument” to the “method call”.
  2. to send a notification, use a request, but without the id field
  3. to respond to a request, send a JSON object with the id of the received request, and either an error or a result field, depending on whether we hit an error or successfully produced a result. The error has to be an object with code and message fields.
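Concretely, as Clojure maps the three shapes could look like this (initialize and initialized are real LSP methods, used here just as examples; -32601 is the standard JSON-RPC "method not found" error code):

```clojure
;; 1. a request: has both :id and :method
(def request {:jsonrpc "2.0" :id 1 :method "initialize" :params {:processId 12345}})

;; 2. a notification: like a request, but without :id
(def notification {:jsonrpc "2.0" :method "initialized" :params {}})

;; 3. responses: the :id of the request plus either :result or :error
(def ok-response  {:jsonrpc "2.0" :id 1 :result {:capabilities {}}})
(def err-response {:jsonrpc "2.0" :id 1 :error {:code -32601 :message "Method not found"}})
```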

Now I’ll walk you through the implementation of JSON-RPC protocol, which happens to be a single function.

We start with this argument list:

(defn- lsp-jsonrpc [^BlockingQueue client-in ^BlockingQueue server-in ^BlockingQueue server-out handlers] 
  ...)

server-in and server-out are the base layer of the LSP communication. We will put JSON-RPC objects onto server-out to send messages to the language server, and read from server-in to receive JSON-RPC objects from it. So, what are client-in and handlers?

client-in is another queue that we will use to send requests and notifications to the language server. Our lsp-jsonrpc function will take objects from client-in, perform some pre-processing, and then post the resulting JSON-RPC objects to server-out. This will enable us to write a simple API for sending messages to the language server.

handlers is a map from JSON-RPC “method name” to a function. When the language server notifies us about something, or sends us a request, we will look up a function to handle the message in the handlers map. This enables us to respond to requests from language servers.
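For example, a handlers map could look like this (window/logMessage and workspace/workspaceFolders are real LSP methods that servers send to clients; the function bodies here are purely illustrative):

```clojure
(def handlers
  {;; a notification from the server: the return value is ignored
   "window/logMessage"
   (fn [{:keys [message]}] (println "server:" message))
   ;; a request from the server: the return value becomes the :result
   "workspace/workspaceFolders"
   (fn [_params] [])})

((get handlers "workspace/workspaceFolders") {})
;; => []
```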

The next bit of code in the function “merges” client-in and server-in into a single queue (in):

  (let [in (SynchronousQueue.)]
    (-> (Thread/ofVirtual)
        (.name "lsp-jsonrpc-client")
        (.start #(while true (.put in [:client (.take client-in)]))))
    (-> (Thread/ofVirtual)
        (.name "lsp-jsonrpc-server")
        (.start #(while true (.put in [:server (.take server-in)]))))
    ...)

Now, we can write a single sequential loop that takes messages from in and handles both messages from “us”, i.e. the client, and “them”, i.e. the remote language server. With virtual threads, this blocking code stays lightweight and performant. On a side note, I think the only reason for core.async to exist post JDK 24 is the observability tooling that flow provides. And, maybe, sliding buffers — AFAIK, there are no blocking alternatives to them in the JDK.

Okay, let’s move on. The next piece of code in the JSON-RPC implementation is the loop:

    (-> (Thread/ofVirtual)
        (.name "lsp-jsonrpc")
        (.start
          #(loop [next-id 0
                  requests {}]
             (let [[src message] (.take in)]
               (case src
                  ...)))))

We start another lightweight process that handles incoming messages from both language server and client. We need next-id and requests to support sending requests and then handling the incoming responses to these requests. We are taking from in, so src is either :client or :server, and message is a JSON-RPC message. Now, let’s start handling stuff! First we’ll handle the :client case, i.e. messages that we send to the server:

                 :client (let [out-message (cond-> {:jsonrpc "2.0"
                                                    :method (:method message)}
                                             (contains? message :params)
                                             (assoc :params (:params message)))]
                           (if-let [response-queue (:response message)]
                             (do
                               (.put server-out (assoc out-message :id next-id))
                               (recur (inc next-id) (assoc requests next-id response-queue)))
                             (do
                               (.put server-out out-message)
                               (recur next-id requests))))

Remember, we need to support both notifications (don’t expect a response) and requests (need a response). We will differentiate between them by using a :response key on the client messages. The value for that key is a BlockingQueue: once we receive a response from the language server, we will put the response value onto this queue. If we are sending a request, we increment the next-id counter and store the queue that awaits the response in the in-flight requests map. If we are sending a notification, we simply send a JSON-RPC object and continue.

That’s it for the client! Now we handle incoming messages from the server. There are 3 possible message types:

  1. responses to our requests: those have an id and either result or error.
  2. notifications: those have method, but not id
  3. requests: those have both method and id
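These three cases can be told apart purely by which fields are present. A small classifier, just to make the dispatch explicit (the cond in the real implementation does the same checks):

```clojure
(defn message-kind
  "Classify an incoming JSON-RPC message from the server by its fields."
  [m]
  (cond
    (and (contains? m :id)
         (or (contains? m :result) (contains? m :error)))
    :response

    (and (contains? m :method) (not (contains? m :id)))
    :notification

    (and (contains? m :method) (contains? m :id))
    :request

    :else :invalid))

(message-kind {:id 3 :result "ok"})      ;; => :response
(message-kind {:method "initialized"})   ;; => :notification
(message-kind {:id 4 :method "workspace/workspaceFolders"}) ;; => :request
```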

Here is the :server case:

                 :server (cond
                           ;; response?
                           (and (contains? message :id)
                                (or (contains? message :result)
                                    (contains? message :error)))
                           (let [id (:id message)
                                 ^BlockingQueue response-out (get requests id)]
                             (.put response-out message)
                             (recur next-id (dissoc requests id)))

                           ;; notification?
                           (and (contains? message :method)
                                (not (contains? message :id)))
                           (do
                             (when-let [handler (get handlers (:method message))]
                               (handler (:params message)))
                             (recur next-id requests))

                           ;; request?
                           (and (contains? message :method)
                                (contains? message :id))
                           (do
                             (.put
                               server-out
                               (try
                                 {:jsonrpc "2.0"
                                  :id (:id message)
                                  :result ((get handlers (:method message)) (:params message))}
                                 (catch Throwable e
                                   {:jsonrpc "2.0"
                                    :id (:id message)
                                    :error {:code -32603 :message (or (ex-message e) "Internal Error")}})))
                             (recur next-id requests))

                           :else
                           (do
                             (.put server-out {:jsonrpc "2.0" :id (:id message) :error {:code -32600 :message "Invalid Request"}})
                             (recur next-id requests))))))))))

When we receive a response to our request, we put it on the queue stored in the in-flight requests map, and remove the queue from the map. When we get a notification, we simply invoke the handler if it exists. Handling requests is a bit different, because we want to ensure the server will always receive a response. So we do a try/catch and always send back something. We do the request handling on the JSON-RPC process thread, so if it blocks for a long time, no other messages are processed. That’s actually a downside. So let’s just say I kept things simple for illustrative purposes, and spawning one more virtual thread to compute and send a response to the server is left as an exercise for the reader :D

Finally, there is an :else branch that responds to unexpected messages with an error. Which, I guess, is unnecessarily defensive given the lack of error handling and validations in other places.

The API

Now that all communication is implemented, it’s time to create an API. We will only need 3 functions:

  1. start! to start a language server.
  2. request! to send a request to the language server and get a result back
  3. notify! to send a notification to the language server and get nothing back

Let’s start with start!-ing a server:

(defn start!
  ([^Process process handlers]
   (start! (.getInputStream process) (.getOutputStream process) handlers))
  ([^InputStream in ^OutputStream out handlers]
   (let [client-in (ArrayBlockingQueue. 16)
         server-in (ArrayBlockingQueue. 16)
         server-out (ArrayBlockingQueue. 16)]
     (lsp-jsonrpc client-in server-in server-out handlers)
     (lsp-base in server-in out server-out)
     client-in)))

I made 2 arities for the start! function:

  1. A helper Process arity specifically for process stdio, since this is what 99% of LSP client/server implementations use. We are going to use it to start the server.
  2. A generic arity over an InputStream+OutputStream pair. This arity is the one that does the work. LSP allows various transports, e.g. pipes, network sockets, or stdio communication between processes. The generic arity supports it all; you only need to provide the input and output streams. In the setup, I allocate small buffers so that if some part of the communication consumes too slowly (or produces too fast), there is some buffering and then backpressure. I don’t know if these buffer sizes are any good, to be honest; I just made them up. Anyway, here we call lsp-jsonrpc and lsp-base to wire everything together, and finally return the client-in. Yes, the LSP client object is just a queue. Yes, it probably should be something else, like a custom type, in a proper implementation.

Next step is sending a notification. This is simpler than sending a request because we don’t get a response back:

(defn notify!
  ([^BlockingQueue lsp method]
   (.put lsp {:method method}))
  ([^BlockingQueue lsp method params]
   (.put lsp {:method method :params params})))

Finally, sending a request. If you remember, back when we were implementing the lsp-jsonrpc function, we agreed that LSP request maps would use a :response key holding a queue. Now is the time to do it:

(defn request!
  ([lsp method]
   (request! lsp method nil))
  ([^BlockingQueue lsp-client method params]
   (let [queue (SynchronousQueue.)]
     (.put lsp-client (cond-> {:method method :response queue} params (assoc :params params)))
     (let [m (.take queue)]
       (if-let [e (:error m)]
         (throw (ex-info (:message e) e))
         (:result m))))))

SynchronousQueue is a queue with a buffer of size 0. This means every blocking .take (which we do here) will wait until someone else (the lsp-jsonrpc function) puts a value onto the queue. So this is like a promise that we await. The implementation creates a request map, submits it to the LSP client, and then blocks until a response arrives from the language server. What’s extra nice here is that JSON-RPC errors are thrown as Java exceptions, and successful results are simply returned as values, as if this were some sort of synchronous “method call”. And it performs well, because virtual threads. Java 24 is really nice.
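The promise-like behaviour of SynchronousQueue is easy to see in isolation. Here a virtual thread plays the role of lsp-jsonrpc delivering a (made-up) response while the caller blocks on .take:

```clojure
(import '[java.util.concurrent SynchronousQueue])

(let [q (SynchronousQueue.)]
  ;; the "responder" blocks in .put until someone takes the value
  (-> (Thread/ofVirtual)
      (.name "fake-responder")
      (.start #(.put q {:result 42})))
  ;; ...and we block in .take until the responder delivers
  (:result (.take q)))
;; => 42
```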

Anyway, that’s it! We can now start language servers and do stuff with them! Yay, we implemented an LSP client, all in 150 (not even 200) lines of code!

Yay?

You might feel a bit let down now, because everything we did (the base and JSON-RPC layers), although required for LSP, doesn’t actually have anything to do with actual language servers. But it’s so nice and short and focused! Oh well. Now, I guess, the time has come to destroy all this beauty by actually trying to use a language server. After all, we still have a budget for 50 more LoC.

The ugly linter

Let’s discuss the language server lifecycle first; now we are entering real LSP integration territory. When a client starts a language server, the server is not immediately ready for use. We have to initialize it (a request), then notify it that it’s initialized (a notification), then use it (issue 0 or more requests or notifications), then shut it down (a request), and finally notify it so it can exit (one more notification). The initialization step exists to exchange capabilities: the client says what it can do, then the server says what it can do, and LSP demands that both client and server honor what they said to each other. For example, a proper language client (like a text editor, not the toy we are building here) might say “I will ask you about code completion, but please don’t notify me about your linting since I don’t support displaying squiggly lines yet”, and the server might say “I can provide both code completion and notifications about code issues as you type, but I won’t do the latter since you don’t support it”.

All capabilities are defined in the LSP specification, and almost all of them are optional to implement. This allows both LSP client and server developers to build support gradually over time. For example, in the Defold editor, the LSP support story started with only displaying diagnostics (this is the term the LSP specification uses for linting squigglies), and then was gradually expanded to code completion, hovers, and symbol renaming.

Let’s see what we have in stock for diagnostics. A diagnostic is a piece of data describing a code issue. It has a text range (something like “from line 20, char 5 to line 20, char 10”), a severity (warning, error, etc.), and a text message. The LSP specification defines these 2 methods that we could use to get diagnostics from the language server:
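As data, a single diagnostic looks roughly like this (field names are from the LSP specification; note that LSP positions are zero-based, so “line 20, char 5” travels on the wire as line 19, character 4):

```clojure
(def diagnostic
  {:range {:start {:line 19 :character 4}
           :end   {:line 19 :character 9}}
   :severity 2            ;; 1 = Error, 2 = Warning, 3 = Information, 4 = Hint
   :source "some-linter"  ;; optional: which tool produced this diagnostic
   :message "Redundant let expression."})
```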

  1. document diagnostics: a client may request a server to lint a particular file and return a result.
  2. workspace diagnostics: a client may request a server to lint the whole project and return a result.

So, with these 2 methods at hand, and with our nice LSP client implementation, we could sketch a linting function using roughly this algorithm:

  1. start a server
  2. initialize it, telling the server that we may ask it for workspace and document diagnostics
  3. if server supports workspace diagnostics, we use that; if server supports document diagnostics, we list all files in a project and ask it to lint them; otherwise, we report an error that the server can’t do what we want it to do.
  4. we shut down the server
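The algorithm above could be sketched with our API like this (start!, request!, and notify! are the functions from earlier; list-files and report are hypothetical helpers, and the capability checks are simplified; real code would consult the initialize response more carefully):

```clojure
;; Sketch only: list-files and report are hypothetical helpers, so this
;; won't run as-is. Method names are from the LSP 3.17 specification.
(defn lint-ideal [process root-uri]
  (let [server (start! process {})
        ;; 1 + 2. start and initialize, advertising pull-diagnostic support
        init   (request! server "initialize"
                         {:rootUri root-uri
                          :capabilities {:textDocument {:diagnostic {}}}})
        caps   (:capabilities init)]
    (notify! server "initialized" {})
    (try
      (cond
        ;; 3a. one request lints the whole project
        (get-in caps [:diagnosticProvider :workspaceDiagnostics])
        (report (request! server "workspace/diagnostic" {:previousResultIds []}))

        ;; 3b. lint every file individually
        (:diagnosticProvider caps)
        (doseq [uri (list-files root-uri)]
          (report (request! server "textDocument/diagnostic"
                            {:textDocument {:uri uri}})))

        :else
        (println "server supports neither diagnostic method"))
      (finally
        ;; 4. orderly shutdown: a request, then the exit notification
        (request! server "shutdown")
        (notify! server "exit")))))
```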

Should be easy. Really, it should be this easy! It should be easy!!! Why isn’t it this easy?!?!..

Okay.

Here comes the ugly part.

When preparing this post, I went through a lot of language servers to use as an example. I only needed one of them to implement either of the methods. But no. Not a single one of them did. All these language servers that boast that they provide diagnostics. They are not even lying. But! They don’t actually implement diagnostics on request. You see, there is a third way language servers can use to provide these pesky little squigglies. They can post them, out of the blue, whenever they want, as a notification. No way to ask them about it. And that’s what they do. All of them. And they do it, mostly, as a response to 2 specific notifications from the client: when the client notifies the server that it opened a document, and when the client notifies the server that the text of an open document has changed. This notification approach existed first, and every language server implementor just uses it because it’s easy and it works and everything else is unnecessary. It makes total sense for a text editor: most of the time, you are only interested in squigglies for the file you are editing, while you are editing it. But unfortunately it means that I can’t make a nice example of using our tiny language client to do something useful without building a full-blown text editor — all the other features only make sense in the code editing context where we have cursors and text selection and we can ask a language server about this thing on this line.

So. It’s going to be ugly. But this is not a problem of the LSP specification. It’s just that I got unlucky with the example that I wanted to use. Instead of this simple straightforward request/response thing I’m going to do something awful. I’ll start a language server. I will initialize it, saying only that I am open to receiving diagnostic notifications. I will ignore server capabilities completely because at this point why bother. And then I will open every file in a project, and then I’ll wait a bit to receive diagnostic notifications, and then I’ll shut this abomination down. I’m not going to explain all the code, because it’s so awful, but here it is in all its glory. Here, I’ll only show the good parts.

We start with a function signature:

(defn lint [& {:keys [cmd path ext]}] 
  ...)

The function takes an LSP shell cmd to run (either a string or a coll of strings), a directory path to lint, and a file extension to select the files to lint. Since the function accepts kv-args, and it’s on GitHub, and you are using an up-to-date clj tool (aren’t you?), you can actually try to run it. Maybe it will even work! For example, you can download clojure-lsp, and then run the following command in your project:

clj -Sdeps '{:deps {io.github.vlaaad/lsp-clj-client {:git/sha "57c618d7ecfc9f94fbef9157cfe4534a4816be45"}}}' \
    -X io.github.vlaaad.lsp/lint \
    :cmd '"/Users/vlaaad/Downloads/clojure-lsp.exe"' \
    :path '"."' \
    :ext '"clj"'

For the code that we discussed in this post, the output will look like this:

file:///Users/vlaaad/Projects/lsp-clj-client/src/io/github/vlaaad/lsp.clj at 168:22:  Redundant let expression.

Turns out there is a warning in the lint function implementation! But the warning is in a bad, messy part of the code, so there is no point in fixing it in the function. Nothing can fix this function… Anyway, we start a process and then make it a server:

(let [... ...
      ^Process process (apply process/start {:err :inherit} (if (string? cmd) [cmd] cmd))
      ... ...
      server (start! process {"textDocument/publishDiagnostics" (fn [diagnostics] ...)})]
  ...)
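For reference, here is a sketch of the params that land in that publishDiagnostics handler, as Clojure data (assuming the client keywordizes JSON keys; the uri and message values are made up, but the shape follows the LSP specification):

```clojure
;; Sketch of textDocument/publishDiagnostics params after JSON parsing.
;; Shape per the LSP spec; the uri and message values are illustrative.
;; Note that LSP positions are zero-based for both lines and characters.
{:uri "file:///path/to/project/src/example.clj"
 :diagnostics
 [{:range {:start {:line 10 :character 4}
           :end   {:line 10 :character 12}}
   :severity 2    ; 1 = Error, 2 = Warning, 3 = Information, 4 = Hint
   :source  "example-linter"
   :message "Redundant let expression."}]}
```

The severity and source fields are optional in the spec, so servers may omit them; an empty :diagnostics vector means previously published diagnostics for that uri should be cleared.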

We are only going to listen to the textDocument/publishDiagnostics notification that might be sent by the language server when we open files. At this point, the server is not initialized yet, so we do that next:

(request! server "initialize" {:processId (.pid (ProcessHandle/current))
                               :rootUri (uri path)
                               :capabilities {:textDocument {:publishDiagnostics {}}}})

We issue a blocking initialize call, and tell the server our process id (so it can exit if we die before stopping it), which directory is the project root, and what our capabilities are. You are expected to take the return value and check whether the server e.g. supports diagnostics, but I decided to skip that in this example.

Next step: we notify the server that it’s initialized:

(notify! server "initialized")

Not sure why it’s necessary, but the protocol demands it. Then we use the server and print the results (horrors omitted). Then we shut it down:

(request! server "shutdown")
(notify! server "exit")

And that’s it!

Discussion

Okay, let’s take a deep breath. I took a deep breath and spent some time reflecting on all this. I like LSP. It’s great for the ecosystem: IDEs get better support for more programming languages, and programming languages are easier to integrate into more IDEs. It’s not a great protocol for building command line linters: even though the protocol supports it, in reality it’s going to be hard to find a server that has the necessary capabilities. But it’s much better for building text editors, I promise :)

I built LSP support for the Defold editor. Now that I have also spent a bit of time reflecting on it, I’d like to share my opinions on the matter. First of all, integrating diagnostics into the text editor was actually pretty easy, since there was no requirement to explicitly request diagnostics; they just appear and get displayed. That wasn’t the complex part. Defold LSP support is much more complex than our toy implementation because a text editor needs to manage a whole zoo of language servers, each with its own lifecycle, initialization process and capabilities. When implementing LSP support in a text editor, I found that most of the complexity comes from having to manage this zoo, where each server has different runtime state (starting, running, stopped), and where each language server process might decide to die at any point. This complicates, for example, the following:

  • Tracking open files with unsaved changes. Not only does the text editor need to notify running language servers when the user opens a file, it should also notify freshly started (or restarted) servers about all currently open documents. There needs to be book-keeping of open (visible to the user) and unsaved files (not necessarily visible to the user).
  • Sending requests to multiple servers at the same time. This might not be immediately obvious, but LSP does not get in the way of running multiple language servers — for the same language, in the same project — simultaneously. VSCode does it. The Defold editor does it too. When the editor asks for code completions to present a completion popup, the LSP integration actually asks all capable running language servers for the target language, and then merges the results. The same applies to displaying diagnostics. Having multiple language servers per file is very useful. For example, you might run a language server dedicated to code analysis and, additionally, a spell-checking language server that highlights typos, and the editor will display diagnostics from both in the same file. So, implementing support for sending a request to multiple language servers at once, with different capabilities, where every server might die at any moment, but we still want to receive a response from all matching servers, within a time frame, wasn’t easy.

Compared to that, here is a critique of LSP that I’ve read about before, but don’t find convincing:

  1. Missing causality. The editor changes the code, then immediately asks for something like code actions from the server. It’s possible that the server won’t have a chance to update its internal state and will return results for an outdated text state. Or it will post diagnostics that no longer apply. But then it will post the correct ones a bit later. I think it doesn’t matter, since the problem is easily recoverable with e.g. an undo in a text editor, or with repeating a request, or it will recover itself automatically a bit later. There is no need for strong causality/consistency guarantees: interactions with language servers are mostly read-only, and there is no harm in the thing being a bit lax/late.
  2. Different endpoints encode data slightly differently. For example, unsaved changes to text files are communicated incrementally (as diffs), but text document outline (i.e. list of defined classes/functions/modules etc.) is always refreshed in full. I think inconsistencies here don’t matter: writing a pre/post processing is easy. Different state synchronization approaches are dictated by the context and there are trade-offs. Text state synchronization should be fast, therefore requiring support for incremental text synchronization for clients and servers is reasonable — we might be editing very large files, we shouldn’t constantly send them in full on every change. Outline refreshes, on the other hand, are requested as needed, and not on typing, so there is no need for incremental diffs there.
  3. Specification is big. It is, but it doesn’t matter: we can opt into parts of it using capabilities.
  4. Weird type definitions. A lot of the JSON shapes of requests and responses are specified using TypeScript types. Truth be told, I was perplexed by it initially, but I quickly got used to it. It communicates the data shape well enough.

LSP has its warts and inconsistencies, like every successful protocol that has grown over time. If it were designed from scratch now, it would be simpler, particularly around request and response data shapes. But that’s not as hard as e.g. managing the state of the servers, which is an unfortunate consequence of the fact that language servers are separate stateful processes. Perhaps LSP’s successor will not be a better protocol for inter-process communication, but a WASM “interface” that will allow writing language servers in-process, synchronously, in whatever language, as long as it compiles to WASM. And then, every code editor will run some WASM runtime. Meanwhile, LSP is infinitely better than building bespoke language integrations, so I’m happy to use it.


Clojure Deref (May 10, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS). Thanks to Anton Fonarev for link aggregation.

Libraries and Tools

New releases and tools this week:

  • core.async.flow-monitor 0.1.1 - A real-time monitoring and interaction tool for clojure.core.async.flow

  • proto-relay 0.1.0 - Utilities for creating functions that delegate to some underlying protocol

  • datomic-graph-viz 1.0.0 - Visualize a datomic database as a graph

  • stripe-clojure 0.3.0 - Clojure SDK for the Stripe API

  • calva-backseat-driver 0.0.10 - VS Code AI Agent Interactive Programming. Tools for Copilot and other assistants. Can also be used as an MCP server

  • mokujin 1.0.0.82 - Structured logging for Clojure. Thin layer on top of clojure.tools.logging with MDC support

  • dataspex 2025.05.7 - See the shape of your data: point-and-click Clojure(Script) data browser

  • nbb 1.3.201 - Scripting in Clojure on Node.js using SCI

  • clj-freqt - Frequent subtree mining with FREQT in Clojure

  • navi 0.1.4 - A tiny, data-driven library converting OpenAPI spec to Reitit routes

  • clojure-plus 1.5.0 - A collection of utilities that improve Clojure experience

  • calva 2.0.509 - Clojure & ClojureScript Interactive Programming for VS Code

  • msgpack-clj 1.1.0 - High performance Clojure bindings for msgpack-java

  • edamame 1.4.30 - Configurable EDN/Clojure parser with location metadata

  • conjtest 0.0.2 - A command-line utility heavily inspired by and partially based on Conftest

  • rv 0.0.9 - A Clojure library exploring the application of pure reasoning algorithms

  • clay 2-beta44 - A tiny Clojure tool for dynamic workflow of data visualization and literate programming

  • tableplot 1-beta13 - Easy layered graphics with Hanami & Tablecloth

  • noj 2-beta18 - A clojure framework for data science

  • desiderata 2.1.2 - Things wanted or needed but missing from clojure.core

  • amalgam 2.8.3 - Useful utilities and mixtures for com.stuartsierra/component

  • amalgam-dirigiste 0.3.0 - Self-adjusting thread pool component with metrics reporting

  • ripley 2025-05-08 - Server rendered UIs over WebSockets

  • squint 0.8.147 - Light-weight ClojureScript dialect

  • cherry 0.4.27 - Experimental ClojureScript to ES6 module compiler

  • pod-babashka-fswatcher 0.0.6 - Babashka filewatcher pod

  • clojure-stack-lite 0.1.3 - A quick way to start a full-stack Clojure app with server-side rendering


CLJS: Dealing with Zombies

Ok, this is for the CLJS enthusiasts trying to get their builds as small as possible. The Closure Compiler is quite good at eliminating dead code from your builds. However, it sometimes leaves some Zombie code that is essentially dead, but not quite. This stems from the fact that :advanced dead-code elimination isn’t quite aware of some CLJS code patterns and doesn’t identify them as dead. The lines get a bit blurry sometimes, so this is hard to get right to begin with.

A common example is any code using defmethod/defmulti. Any defmethod modifies the object created by defmulti, so to the Closure Compiler this looks like it is getting used. Identifying whether the defmulti is ever actually called is a bit tricky, so this all stays alive forever as possible Zombies lingering in your code. A common offender here is cljs.pprint, often added during development, forgotten, and just waiting to add a hefty chunk of dead weight to your builds.

Identify your Zombies via Build Reports

Unfortunately, actual dead code can be rather hard to identify from just the compiler side. So, sometimes a little help is needed to identify and remove it.

You’ll have to generate a build report and dig into it a little bit. Only you know what is actually needed.

Here is an example build report from a dummy build, using a perfectly valid namespace that just wants to colocate some tests with the code directly in the same file.

(ns demo.zombie
  (:require
    [cljs.test :refer (deftest is)]))

(defn init []
  ;; just some code to keep the CLJS collections alive
  (js/console.log [:hello {:who "World!"}]))

(deftest my-test
  (is (= :super :cool)))

The resulting report looks like this:

In this particular case the deftest macro already takes care of eliding the test from the build. What cannot be elided like this, however, is the (:require [cljs.test ...]) from the ns form. cljs.test alone isn’t that large, so it doesn’t hurt that much. However, it brings in cljs.pprint, which adds quite a hefty chunk.

Next I’ll cover some strategies to deal with this, but first, to get an actual comparison, here is the build we actually want.

We removed ~30kb gzip’d from our build, which in this instance is pretty significant. Of course in larger builds it won’t be that dramatic, but as I said this is for enthusiasts that care about the little things.

Strategy #1: Avoid it in the first place

Using my test example above, the commonly recommended strategy is to just have the tests in their own dedicated namespace. Moving the test to demo.zombie-tests also allows removing the cljs.test require from the demo.zombie namespace, thus avoiding the problem entirely.

Sometimes you still want to modify your runtime environment in a way that makes development easier. My recommendation here is to use the :preloads option in your build config. This lets you inject extra code during development only; it is not added to your release build.

Strategy #2: Stub it out

Moving tests to a separate file is of course subjective, and many people prefer to keep tests colocated with the actual source. Other times the code causing the issue is code added only for development purposes. cljs.pprint is a prime example. It just makes it easier to quickly pprint something if it is already required and ready to go.

So, instead of being forced to remove a required namespace, we can just replace that namespace with a stubbed out shell that never has the problematic code in the first place. re-frame-10x has been doing that for a long time, but I never quite documented how that all works.

The :ns-aliases build option in shadow-cljs lets you define replacements that should be used whenever a :require for a certain ns is encountered. For example :build-options {:ns-aliases {cljs.pprint cljs.pprint-stubs}} causes any (:require [cljs.pprint ...]) to act as if you had (:require [cljs.pprint-stubs ...]) in your code instead. It’ll never actually add cljs.pprint to your build. You can set this as a :release option in your build config, so that it still stays as normal during development.
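To sketch what such a stub might look like: a replacement namespace on the classpath (in a file named cljs/pprint_stubs.cljs) that defines no-op versions of whatever public vars your code actually calls. Only pprint is stubbed here, as an illustration:

```clojure
;; cljs/pprint_stubs.cljs — sketch of a release-only stand-in for cljs.pprint.
;; Stub every public var your code touches; only pprint is shown here.
(ns cljs.pprint-stubs)

(defn pprint
  "No-op replacement so :advanced compilation can drop the real cljs.pprint."
  ([obj] nil)
  ([obj writer] nil))
```

With the {cljs.pprint cljs.pprint-stubs} alias scoped to :release as described above, development builds keep the real cljs.pprint while release builds compile against this empty shell.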

As of shadow-cljs version 3.0.5 I added a new :release-stubs option to make this a bit less verbose. It expects a set of namespace symbols to stub out. For the build above I added :release-stubs #{cljs.test} to the build config, and shadow-cljs automatically sets up the proper ns aliases by following the convention of appending -stubs to each namespace name and using those as the replacements. In this release I also added two basic stubs to shadow-cljs itself, so that cljs.test and cljs.pprint are already covered and don’t need to be recreated in every project. The files can be found here.

Any library can follow this pattern, providing the stubs directly and letting users opt in to using them.

Strategy #3: Stubbing the old way

I’d consider this a deprecated and undesirable option, but I will cover it since many people are used to it from other tools. :ns-aliases lets you swap out a namespace by using a new name. Another option is to replace the namespace under its actual name.

The way this is done is by swapping the definitions on the classpath directly. So, you’d have a dev/my/app.cljs and a prod/my/app.cljs, both defining (ns my.app), giving you a namespace that is defined twice. One with all the development stuff, one a bit more “optimized”. The classpath is then used to only include one at a time.

shadow-cljs itself doesn’t support this and cannot switch the classpath between builds or dev/release modes. You can however do this directly with deps.edn by launching shadow-cljs from there.

{:paths ...
 :deps ...
 :aliases
 {:dev {:extra-paths ["dev"] ...}
  :prod {:extra-paths ["prod"] ...}
  :shadow {:extra-deps {thheller/shadow-cljs ...}}}

Then when making a release build activate the :prod alias and tell shadow-cljs to make a release build.

clj -M:shadow:prod -m shadow.cljs.devtools.cli release app

This of course works fine, but I never quite liked the classpath switching since it requires launching a new JVM to make release builds. I prefer to just have one setup that lets me seamlessly switch between watch/release without even restarting shadow-cljs.


Crafting your environment

This week we have JP Monetta on Apropos. He’s the creator of FlowStorm Debugger, which is a time traveling debugger for Clojure. Check it out! It is quite amazing.

Beginner Clojure, my video course, is better than ever. I recently completely rebuilt the Introduction to Clojure module. It’s the fastest way to get from zero to a deep, functional programming and data-driven programming experience. Go check it out to see why hundreds of people have enjoyed it. If you buy, you get lifetime access plus all updates. Planned future updates include VS Code with Calva, The Clojure Command-Line, and Shell Scripting with Babashka. The course already contains eight great modules, including JVM, REPL-Driven Development, and functional programming. Buy it today.


Crafting your environment

My Emacs setup is boring. So is my shell prompt. I’ve usually left things vanilla. Sometimes I look at other programmers and their awesome tooling setup and their colorful zsh prompts and feel superior. I tell myself that I am serious. I don’t waste time with silly things. Instead, I get to work. I’m very conservative.

But over time, I’ve heard of enough arguments for extreme configuration and seen enough examples of powerful IDE setups and missed enough tools that are now standard that I wonder if I’m wrong. Is it a waste of time? Or is it time invested? I’m starting to like the other side, though I’m still very bad at it.

Let’s start with the argument for why it’s not really worth your time to improve your tools.

Being slightly faster does not pay off

Sometimes I see a programmer obsessing over their setup. They’re spending hours reading forums and getting their Emacs config just the way they like it. They get into building their own keyboard. They figure out the best chair to sit in. It’s not so much that it doesn’t matter at all, it’s that it seems like a huge distraction. You want to be better at programming, so you spend hours 3D printing your keycaps? It seems to me like a big waste of time.

Let’s calculate it. If you spend 10 hours making a keyboard (which is very conservative), and it makes you 1% faster at typing, and we assume you spend 1 hour per day typing, you will need to type for 1,000 hours before you’ve made up the difference. That’s 2.7 years. That doesn’t seem worthwhile to me.
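The back-of-envelope math above can be spelled out (the 10 hours, 1% speedup, and 1 hour/day figures are the assumptions from the text, not measurements):

```python
# Payback period for spending 10 hours building a keyboard,
# assuming it makes you 1% faster and you type 1 hour per day.
setup_hours = 10
speedup = 0.01              # 1% faster typing
typing_hours_per_day = 1

# Each hour of typing saves `speedup` hours, so:
typing_hours_to_break_even = setup_hours / speedup   # ~1000 hours
days_to_break_even = typing_hours_to_break_even / typing_hours_per_day
years_to_break_even = days_to_break_even / 365       # ~2.7 years
```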

Typing speed is not the bottleneck

One of my friends put it this way: Typing speed is not the bottleneck in programming. They’re evoking a principle from the Theory of Constraints, which is very popular in manufacturing circles. The principle says that every system has a bottleneck, and improvements to anything besides the bottleneck will not improve the overall productivity of the system. The bottleneck is the limiting factor. According to my friend, typing speed is not the limiting factor in your effectiveness as a programmer.

All that time spent learning another vim command won’t really make your programming faster. You’re still limited by the bottleneck, which I agree is not typing speed.

Customization is not shareable

And just one more issue with ultra-customization. One time I was pairing with my coworker on his laptop. I was having trouble communicating something he should try, so he handed me the laptop. I put my hands on the keys. And every key I pressed surprised me. I was skilled at Emacs. And this was Emacs. But this was not the Emacs I knew.

He had customized the basic Emacs commands to different keys. And the official keys were remapped to other commands. They made sense to him, but I was useless on his keyboard. That really made me double down on leaving keys standard. With non-standard keys, sure, you may be faster, but nobody else can use your computer.

Do you agree with these? Are there other arguments?

I once agreed with these, but not any more. These three arguments are missing something important.

Being slightly faster is not the point

The reason the “efficiency argument” doesn’t hold water is that it’s not really about being faster. If all you’re looking at is productivity such as lines of code per second, you’re missing the main point. I talk about this a lot in The Surprising Benefits of Automation. We improve our tools and skills for agency and efficient use of cognitive resources, which we’ll talk about soon.

Typing is on the critical path

My friend evoked the concept of the bottleneck from manufacturing (note the efficiency mindset). Another concept from manufacturing and project management is the critical path. This is the sequence of dependent steps whose durations directly determine the timeline of the project: if a step on the critical path takes longer, the whole project takes longer. Typing might not be the bottleneck, but it is on the critical path, especially in iterative development practices like REPL-Driven Development (RDD).

I think what my friend meant when he said that typing is not the bottleneck is that programming is mostly about thinking. You might spend two hours thinking and ten minutes typing it in. Taking eleven minutes to type it in isn’t going to seriously change the timeline.

However, when I’m doing RDD, the speed I can type a piece of code, eval it, and understand its effect is part of the thinking process. With RDD, we extend our mental capacity with the REPL, just like a geometer extends their mental capacity with a compass and straightedge. The faster you can go through the loop, the better integrated your thinking and the REPL can be. Slowing down your typing can significantly affect that integration. And speeding it up with better configuration can help it, too.

And typing speed becomes more important the more iterative and incremental your process is. If you’re doing waterfall, your process is dominated by thinking, planning, design, and redesign. But if you’re working iteratively, you are typing a lot and learning from the result. The faster you type, the faster you learn.

Customization is not shareable

Well, this one I think is still huge. But as Juan Monetta opined on our recent episode of Apropos, that is much less important when we’re all programming on our own machines. We share a lot of development tooling: Linter configs, formatter configs, and tests. But do we need to share editor configurations? I’ve heard it both ways. I’ve heard people who love pairing and process argue that the team should be discovering as a group the best way to work as a team—meaning they should use the same editor configuration. But I’ve also worked at places that took an individualistic approach, so much so that the variance of productivity and understanding was huge. Some people knew how to run the code at the REPL, and others thought it was impossible. More on this later.

Now that we see that it’s not about speed but about cognitive resources, what are some arguments for configuration?

Energy and joy

Let’s say you open up a drawer in your kitchen. It’s filled with kitchen tools, but it’s a cluttered mess. It’s frustrating. You dig around, find the spoon you need, and close it. You get on with your cooking task. Or you could stop what you’re doing and organize the drawer. Toss out broken tools. Create sections and sort the tools that you still use. How do you feel afterwards? I bet you feel energized. The frustration has turned to joy.

The fact is, feeling control over your environment is important psychologically. That frustration is a signal that you want something to change. Ignoring that signal takes energy. Resigning yourself to live with frustration is a sure path to despair and burnout.

Energy is important. You can’t measure everything based on time efficiency. Time efficiency could be important, but it’s not everything. A programmer’s good energy improves their agency and their productivity. If a programmer feels like they can’t improve their environment, they’re more likely to zone out at work. You might as well use that zone-out time to make some improvements.

Cognitive resources are a bottleneck

My friend was correct that thinking is the bottleneck. Each of us is endowed with a certain amount of “cognitive resources” to work with. Research shows that this is a limited and shared resource—there’s only one pool of it. So if your roof is leaking and you’re worried about it, that will affect your programming at work. The bottleneck is the cognitive resource pool.

Your resources are also drained by tasks that take effort. Imagine if you had to search for the { key on the keyboard each time you needed to type it. Sure, it would slow you down. But worse than the slowdown is that you would use up precious cognitive resources. You would have fewer resources to think about the problem. If it took you long enough, you might even forget what you were trying to do. In short, it’s not slow typing that is the problem, but typing that takes effort.

How many of your subskills take effort because you haven’t mastered them yet? Maybe you can type { without thinking about it. But what about figuring out which braces to close? An easy solution is to use paredit—but that requires configuration and learning new keystrokes. Well worth it in my opinion. But what other packages could unlock similar improvements?

Experimentation and waste

I try to read a lot of books on programming. Learning from others is a great way to get better. Right now, I’m focused on books on software design. Unfortunately, most of the books on software design have bad advice. Does that mean it’s a waste of time to even look? Hour-for-hour, I don’t think it’s worth it. My software design skills could not have improved by that much. I have spent thousands of hours over my career reading programming advice. But it probably has not led me to ship everything thousands of hours faster (even in aggregate).

But how can we know? I certainly feel like the time spent reading was worth it, even considering that a lot of the advice was bad. When I look at my skill vs someone who doesn’t read, I can see the difference. When I look at my skill vs myself 10 years ago, having read fewer books, I can see the difference. But I can’t attribute any improvement to any particular piece of advice.

While you’re reading, you are absorbing other people’s experience. You’re organizing your own experiences. Even when you read something you don’t agree with, it could give you a new perspective on that idea. Reading alongside direct experience helps you learn.

Likewise, when I see someone building their fifth custom keyboard, it feels like a waste of time. Do you really think you’re going to get that much better this time? But they think it’s possible. And even if the keyboard doesn’t make them better, there’s something else going on: Engagement in the process. Building your own keyboard is a physical way to engage your brain. What keystrokes are important to my process? How do I want that key to feel when I press it? What fingers do I want to use? The learning transfers to configuration and typing.

You cannot program 100% of the time

Another misconception is that configuration time is eating away at programming time. If you’d give up that time spent configuring, you’d have more time to program. But that’s not how people work. We need breaks from the productive work. The productive work makes a mess, and we need to clean up. The productive work gives us ideas for doing our work better, and we can use those ideas to improve our work.

Permission to make small improvements

If you want to engender resentment and passivity in your programmers, tell them they can’t configure their tools. They’ll probably say they agree with you that it’s a waste of time. But you’ve also tapped into the efficiency-seeking part of the brain. That efficiency-seeking part of the brain isn’t looking to get more done with less. No, it’s about doing as little as they can get away with.

If you want to activate someone’s brain to look for ways to improve, encourage (or require!) them to make small changes daily. Some companies require each employee to make a change that could save them 2 seconds. Setting the bar low makes it achievable. But it also gets the mental balls rolling. With each small change, they develop a sense of ownership over their process and that engenders agency—which is probably what you want in an employee.

Improvements compound

Some tweak to your process might seem small. But over time, the improvements work with each other and compound non-linearly.

Working at the right cognitive level

Again, typing speed is not the bottleneck, but thought is. If you are moving a cursor around, backspacing, and typing, you’re thinking about the cursor and characters. Another person, however, could be issuing higher-level commands, like rename symbol, if they had the right configuration and practice with the keystrokes. There is less distance between what they want done and the command to do it. It’s not about speed. It’s about the cognitive processing it takes to translate the desired action into editor commands.

Improvements reveal underlying problems

As workers at Toyota eliminated inventory between steps, the assembly line would start to break. They would say: “We told you that we need that inventory. Otherwise the process doesn’t run smoothly.”

Taiichi Ohno didn’t agree: “We are lowering the level of the water in the river (the inventory) to reveal the stones (the problems). The stones were always there, we just couldn’t see them.”

Taiichi Ohno knew that inventory tends to pile up before the bottleneck. Since the bottleneck is the slowest step in the process, the fast steps before it produce more than the bottleneck can handle. But when he looked around, there was inventory everywhere. By eliminating the easy piles of inventory, he could see where the real problems were.

We can do that in our tools, too. Our tools are full of frustration. We can start anywhere, fixing problems as we see them. This will make the process smoother up to a point. But the better our configuration is, the easier it is to see the parts that still frustrate us. Focusing on the bottleneck is good in theory, but it’s hard to recognize the bottleneck when everything feels frustrating.

Fixing our frustrations leads to new projects

If we don’t fix our frustrations, we have a tendency to become blind to them. We get used to them and work around them. But these frustrations can be very valuable. Frustration with existing debugging approaches (like println) fueled the development of FlowStorm. And Tailwind CSS’s creator Adam Wathan credits his sensitivity to frustration for the development of Tailwind. If you’re feeling frustrated, chances are, other programmers do, too. Your solution could be valuable to them.

Discontinuous improvement

There’s a phrase “continuous improvement”, which sounds great. You’re making improvements all the time. But I also think there’s something like discontinuous improvement, too. You see, if our cognitive resources are the bottleneck, there’s a discontinuity at the edge of our cognitive capacity. Jobs just bigger than our capacity need some kind of process and externalization (like a todo list) to manage it. But jobs small enough to fit in our minds don’t. If you can cross that boundary with better skills and tooling, you’ve got a stepwise improvement.

Aesthetics are important

When I see people geeking out over their keyboards, it’s not just about function. People choose colors and lights and other design elements that please them. The same goes for a lovely shell prompt or a nice font in their editor. The aesthetics of your environment play a noticeable role in your productivity. If your office is a dump, you’ll probably make trash software. But if you surround yourself with beautiful things, you’ll be inspired to do beautiful work.

Now let’s talk about some principles that can help make the most of this learning.

Manage resources

The most important principle is that we are trying to manage limited cognitive resources. You can:

  • Make a tool more pleasant so that it leaves you more resources (e.g., give it a nice UX)

  • Use tools that operate with the right concepts (e.g., paredit)

  • Learn a skill better so that it takes fewer resources to perform (e.g., practicing paredit commands so they are muscle memory)

Choose your battles

Alan Kay talks about how the problem with a good engineer is that they are so dissatisfied with everything that they want to build it all themselves. It takes great discernment to choose what you build on top of. Likewise, improvements do have a cost. Are you really going to hand-craft your keyboard? Might you want to try one of the existing commercial ones first? It might get you 99% of the way there.

Configure away, but use standard keystrokes

Emacs has a set of keystrokes that come standard out of the box. They have been the same for decades. C-a goes to the beginning of the line. C-e goes to the end. C-x C-f opens a file. And, believe it or not, there are many packages for opening files. So while you can change the way files are opened, most packages keep the same keystroke. Even though the behavior is different, someone familiar with the keystrokes can still use it. I recommend that approach.
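In Emacs Lisp, that approach might look like this minimal sketch, assuming the use-package macro and the counsel package (one example of many file-opening packages):

```elisp
;; Swap in a different file-opening command while keeping the
;; standard C-x C-f keystroke, so muscle memory still works.
(use-package counsel
  :ensure t
  :bind (("C-x C-f" . counsel-find-file)))
```

The behavior behind the keystroke changes, but anyone who knows stock Emacs can still open a file without learning anything new.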

Share when you can

I’m not as extreme as my friend who wanted everyone at the company to program with the same tools. But I also think most companies don’t encourage enough sharing of tooling and process. Pairing and watching people work can help spread the great tools and skills that people have set up. But you can also just talk about it. Have a monthly meeting where someone shows off their editor.

AI can help you configure

I don’t use AI to generate production code. But I’m leaning on AI now to help me configure my tools. Even though I’ve used Emacs for years, I’m terrible at programming it. I have to admit that it’s one of the reasons I haven’t done more configuration. Likewise for packages. There are just so many out there. I don’t have the patience to try them all. But with AI, I can ask for package recommendations and for tips on configuring them the way I like. It’s really helping.

How Clojure Shapes Teams and Products

Four episodes into our journey exploring real-world Clojure stories, fascinating patterns have emerged from our conversations with leaders at Quuppa, CodeScene, Catermonkey, and Griffin. While each company’s domain is distinct, from indoor positioning technology to banking infrastructure, their experiences reveal compelling insights about how Clojure influences not just code but entire organizations.

Building Teams and Projects

The journey to adopting Clojure often begins with practical challenges. At Quuppa, they needed better ways to handle data serialization in their enterprise system. Catermonkey's Marten Sytema had already built a working product in Java but saw the potential for faster iteration with Clojure. 

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.