Clojure: SQLite C API with project Panama and Coffi
Project management is, increasingly, a technical pillar within modern organizations. It doesn't help to have a team that masters cutting-edge frameworks or an up-to-date stack if projects move forward without clarity, prioritization, and cadence. In many cases, it isn't the technology that blocks the team; it's the absence of structured management.
Companies like Nubank, for example, understood early on the importance of short cycles and autonomous teams. By adopting a variation of the Agile model, with multidisciplinary squads and a focus on iterative delivery, they managed to scale their products even while using a functional language like Clojure, considered uncommon in the market. It wasn't just the choice of language that drove their success, but how projects were organized to get the most out of the technical team.
Spotify, in turn, became a reference with its model of Squads, Tribes, and Guilds, inspired by Agile but adapted to its own reality. Each squad has the autonomy to choose the most effective working method for its goal, which in practice mixes Scrum, Kanban, and its own elements. The result? More autonomy, less dependency, and faster delivery.
For teams dealing with unpredictability or constant change, Kanban has been gaining ground, as in the case of Zalando, the European e-commerce giant. With highly dynamic operations, the company uses Kanban in support and infrastructure teams, prioritizing continuous flow and fast responses. In that scenario, limiting work in progress (WIP) and visualizing blockers in real time has proven far more effective than fixed sprints.
Another relevant example is Basecamp, creator of the Shape Up method, which uses six-week cycles with well-defined problems and deep focus. Unlike Scrum, there is no infinite backlog and no daily meetings. The team gets real autonomy to solve a problem and present a viable solution at the end of the cycle. It's an approach being tested by startups looking to reduce bureaucracy without losing delivery, such as Hepta in Brazil, which adopted Shape Up to balance technical depth with agility.
As for tools, giants like Atlassian (Jira) remain the market standard, especially in companies that demand flexibility and traceability across areas. Shopify, for example, uses Jira at large scale to orchestrate hundreds of teams. Linear, meanwhile, has been winning over companies like Vercel and Ramp with its focus on speed and a developer-friendly interface. Teams that adopt Linear tend to focus more on continuous flow and less on management overhead.
Within the Microsoft ecosystem, Azure Boards integrates with DevOps and has been adopted by companies with a strong .NET base as part of their CI/CD flow, as is the case with Stack Overflow, which uses Azure DevOps to manage the entire lifecycle of its products.
But methodology, by itself, isn't magic. The real differentiator is the ability to turn these approaches into culture. Measuring lead time, cycle time, throughput, and rework rate lets the team understand the real impact of each decision and create continuous improvement cycles. Well-managed projects aren't the ones that follow a playbook, but the ones that evolve based on data, context, and the team's maturity.
Large companies understand that cutting-edge technology needs to be paired with management that respects development time, promotes focus, and encourages excellence. Project management isn't there to control developers; it's there to unlock the potential of a truly technical team.
Day-to-day programming isn’t always exciting. Most of the code we write is pretty straightforward: open a file, apply a function, commit a transaction, send JSON. Finding a problem that can be solved not the hard way, but the smart way, is quite rare. I’m really happy I found this one.
I’ve been using hashp for debugging for a long time. Think of it as a better println. Instead of writing (println "x" x) you write #p x. It returns the original value, is shorter to write, and doesn’t add an extra level of parentheses. All good. It even prints the original form, so you know which value came from where.
Under the hood, it’s basically:
(defn hashp [form]
`(let [res# ~form]
(println '~form res#)
res#))
Nothing mind-blowing. It behaves like a macro but is substituted through a reader tag, hence defn instead of defmacro.
Okay. Now for the fun stuff. What happens if I add it to a thread-first macro? Nothing good:
user=> (-> 1 inc inc #p (* 10) inc inc)
Syntax error macroexpanding clojure.core/let at (REPL:1:1).
(inc (inc 1)) - failed: vector? at: [:bindings] spec: :clojure.core.specs.alpha/bindings
Makes sense. Reader tags are expanded first, so it replaced (* 10) with (let [...] ...), and the threading macro then tried to thread into that let. Wouldn’t fly.
We can invent a macro that would work, though:
(defn p->-impl [first-arg form fn & args]
(let [res (apply fn first-arg args)]
(println "#p->" form "=>" res)
res))
(defn p-> [form]
(list* 'p->-impl (list 'quote form) form))
(set! *data-readers* (assoc *data-readers* 'p-> #'p->))
Then it will expand to
user=> '(-> 1 inc inc #p-> (* 10) inc inc)
(-> 1
inc
inc
(p->-impl '(* 10) * 10)
inc
inc)
and, ultimately, work:
user=> (-> 1 inc inc #p-> (* 10) inc inc)
#p-> (* 10) => 30
32
Problem? It’s a different macro. We’ll need another one for ->>, too, so three in total. Can we make just one instead?
Turns out you can!
The trick is to use a probe. We produce an anonymous function with two arguments, then call it in place with one argument (::undef) and see where the other argument goes. Inside, we check where ::undef lands: first position means we’re inside ->>; otherwise, ->:
((fn [x y]
(cond
(= ::undef x) <thread-last>
(= ::undef y) <thread-first>))
::undef)
Let’s see how it behaves:
(macroexpand-1
'(-> "input"
((fn [x y]
(cond
(= ::undef x) <thread-last>
(= ::undef y) <thread-first>))
::undef)))
((fn [x y]
(cond
(= ::undef x) <thread-last>
(= ::undef y) <thread-first>))
"input" ::undef)
(macroexpand-1
'(->> "input"
((fn [x y]
(cond
(= ::undef x) <thread-last>
(= ::undef y) <thread-first>))
::undef)))
((fn [x y]
(cond
(= ::undef x) <thread-last>
(= ::undef y) <thread-first>))
::undef "input")
If we’re not inside any thread-first/thread-last macro, then no substitution will happen and our function will just be called with a single ::undef argument. We handle this by providing an additional arity:
((fn
([_]
<normal>)
([x y]
(cond
(= ::undef x) <thread-last>
(= ::undef y) <thread-first>)))
::undef)
And boom:
user=> #p (- 10)
#p (- 10)
-10
user=> (-> 1 inc inc #p (- 10) inc inc)
#p (- 10)
-7
user=> (->> 1 inc inc #p (- 10) inc inc)
#p (- 10)
7
#p was already very good. Now it’s unstoppable.
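For reference, here is a minimal sketch of how the pieces above could be wired into a single reader-tag function. The names (p-impl, p), the printing format, and the restriction to list forms are illustrative simplifications, not the actual implementation:
(defn p-impl [quoted-form res]
  (println "#p" quoted-form "=>" res)
  res)

(defn p [form]
  ;; for brevity, this sketch only handles list forms like (* 10)
  (let [orig (list 'quote form)]
    (list
      (list 'fn
            ;; plain call site: nothing threaded in, evaluate the form as written
            (list '[_] (list 'p-impl orig form))
            ;; probe arity: ::undef marks our own argument, the other one
            ;; is whatever -> or ->> threaded in
            (list '[x y]
                  (list 'cond
                        ;; x is ::undef, so we are inside ->>: put the threaded value last
                        (list '= ::undef 'x) (list 'p-impl orig (concat form (list 'y)))
                        ;; y is ::undef, so we are inside ->: put the threaded value first
                        (list '= ::undef 'y) (list 'p-impl orig (list* (first form) 'x (rest form))))))
      ::undef)))

(set! *data-readers* (assoc *data-readers* 'p #'p))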
You can get it as part of Clojure+.
Whereby I talk about an under-appreciated technique in Clojure where function arities serve as a pseudo-protocol...
The other night, I was talking about metaprogramming with other developers, and at one point someone asked how macros could be used for metaprogramming. They were probably thinking about C-style macros, which are powerful but are just text-processing tools.
Using macros you could have the following code:
double Compute()
{
// doing something
return result;
}
and transform it into:
double Compute()
{
if (_cache.GetValue(out var result))
return result;
// doing something
_cache.SetValue(result);
return result;
}
💡 Macros
Macros are a tool to manipulate code with code. They allow you to write generic code and have the compiler do the heavy lifting and monomorphize it.
📝 Note
What's meta programming?
According to Wikipedia, it's a computer programming technique in which computer programs have the ability to treat other programs as their data. It means that a program can be designed to read, generate, analyse, or transform other programs, and even modify itself, while running. In some cases, this allows programmers to minimize the number of lines of code to express a solution, in turn reducing development time.
Then someone asked how we could implement the following code in ArkScript (which doesn’t actually compile in Rust, by the way, because Rust macros are hygienic):
macro_rules! using_a {
($e:expr) => {
{
let a = 42;
$e
}
}
}
let four = using_a!(a / 10);
I quickly threw this together, and sure enough, ArkScript let it pass as its macros aren't hygienic:
($ using_a (e) {
(let a 42)
e })
(let four {(using_a (/ a 10))})
(print four) # 4.2
Homoiconicity is a concept that gets talked about quite frequently in the context of macros.
It is the ability to manipulate code in the language, using code from the same language. Code is data, and data is code. Lisp is the most common homoiconic language, but we can also cite all its dialects (Clojure, Scheme, Racket...), as well as Rebol, and, you guessed it, ArkScript.
Using parentheses to represent S-expressions in Lisp-inspired languages helps, as you can represent your AST in code, and the AST is also data.
In ArkScript macros, there is no capture of variables, you play directly with the AST:
($ foo (val)
(print (@ val 2)))
(foo (list 1 5)) # -> becomes (print 5)
As long as you are inside a macro, an AST node isn’t evaluated unless it is involved in an expression that can be evaluated at compile time (e.g. (+ 1 arg)).
Alas, this comes at a cost: expressions can be evaluated at compile time when you didn’t mean them to be. If the example above were passed something like (+ 1 2), it would print 2! To prevent this behavior, I figured I just needed a way to stop macro evaluation, and added $as-is to paste nodes in the AST as-is, stopping further macro evaluation on them.
This is particularly useful in the testing framework, which relies on many macros; we can write:
(test:suite name {
# ...
(test:expect (some_computation)) })
As the test:xxx macros use ($as-is arg) to escape each argument:
($ test:expect (_cond ..._desc) {
(if (!= true ($as-is _cond))
(testing:_report_error true ($as-is _cond) "true" ($repr _cond) _desc)
(testing:_report_success)) })
TL;DR: I reinvented (a less powerful) quote.
Originally from lexp.lt
We’re happy to announce a new release of ClojureScript. If you’re an existing user of ClojureScript please read over the following release notes carefully.
This release features two significant dependency changes. First, Google Closure
Compiler has been updated to v20250402
. This change makes Java 21 a
requirement for ClojureScript. The other significant change is that this release
now depends on the Clojure fork of Google Closure Library. Please read on for
more details about these changes.
For a complete list of fixes, changes, and enhancements to ClojureScript see here
Last year we noted that updating Google Closure Compiler would mean losing Java 8 support. Google Closure now requires Java 21. From our perspective this change doesn’t seem strictly necessary, but Google is a large organization and the change is likely due to internal requirements which are hard to influence from the outside. The general enthusiasm in the Clojure community around adopting more recent Java releases hopefully softens the overall impact of this change.
So far, the burden of staying current with Google Closure has been manageable. If for some reason that calculus changes, we could adopt the strategy we have taken with Google Closure Library.
The incredible stability of Google Closure Library started declining around 2019. Google was both trying many things with respect to their internal JavaScript strategy as well as becoming less concerned about the impact on outside consumers. Finally, Google stopped contributing to Google Closure Library last August.
We have forked Google Closure Library (GCL) and taken up maintenance. We backed out a few years of needless breaking changes and aligned the codebase with the latest Google Closure Compiler release.
One of the biggest benefits of GCL is that it makes ClojureScript a complete
solution for a variety of JavaScript contexts, not limited to the browser.
Taking on additional dependencies always comes with a cost. One of
ClojureScript’s original value propositions was a rock solid set of readily
available JavaScript tools as dependable as clojure.core
.
We are working on restoring that original stability. With this release, you’ll find that quite a few old ClojureScript libraries work again today as well as they did 14 years ago.
ClojureScript is not and never was only just for rich web applications. Even in the post React-world, a large portion of the web is (sensibly) still using jQuery. If you need robust DOM manipulation, internationalization, date/time handling, color value manipulation, mathematics, programmatic animation, browser history management, accessibility support, graphics, and much more, all without committing to a framework and without bloating your final JavaScript artifact - ClojureScript is a one stop shop.
Give it a try!
In case you don’t know it, we can use raw strings, such as embedded JS code, in Hiccup. I just found out we can use raw to prevent strings from getting escaped. I used to have to define a dedicated app.js for that, which required an extra HTTP request. Hiccup raw in action. Looking back, I should have found this at the very beginning, since it’s mentioned right on the project’s GitHub homepage, but somehow I missed it.
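For illustration, a minimal sketch of what this looks like with hiccup2.core (the inline script content is made up):
(require '[hiccup2.core :as h])

;; Without raw, the string would be HTML-escaped; with raw, the JS is
;; emitted as-is inside the <script> tag.
(str
  (h/html
    [:script
      (h/raw "console.log('loaded');")]))
;; => "<script>console.log('loaded');</script>"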
Some companies use Java and would like to explore Clojure. The new updates to my Clojure book are aimed at them. I have added a section named Java Files in Clojure Project, which teaches how existing Java code can work alongside Clojure projects. Another section, named Jar Files in Clojure Project, teaches how to use jar files in Clojure. Some old Java libraries can be compiled into a jar so their code can be reused.
I am not a huge fan of private functions. I believe Python still does not have a concept of private functions. Clojure does have private functions; until now my book avoided them, but not anymore. The section named Private Functions teaches how to write private functions, and it also shows how to test them.
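As a small illustration (not taken from the book; the namespace and function names are made up), a private function is defined with defn- and can still be reached in tests through its var:
(ns myapp.core
  (:require [clojure.string :as str]))

;; defn- makes the function private to this namespace
(defn- normalize [s]
  (str/lower-case (str/trim s)))

;; In a test namespace, the private function can still be called
;; through its var:
;; (is (= "hello" (#'myapp.core/normalize "  Hello ")))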
My book is available online, as well as on Amazon as an EPUB, paperback (in color), and hardcover (in color). I hope it’s of use to you.
Ever found yourself lost in a cascade of println statements, desperately trying to understand how data transforms across your Clojure functions? Or you’ve battled elusive bugs that only surface under specific, hard-to-reproduce conditions? If these scenarios sound familiar, you’re in for a treat.
In a previous article, https://flexiana.com/news/2025/04/why-clojure-developers-love-the-repl-so-much, I mentioned some tools for examining data and code. Since 2023, one of them has become essential in my projects: FlowStorm.
FlowStorm started as a tracing debugger but has evolved into something far more ambitious. Its description as “Much more than a debugger” is apt, as it offers a suite of visualization, interactive exploration, and deep code understanding capabilities that most other programming languages don’t provide.
Think of it: the ability to effortlessly trace data flows, step through execution, and visualize complex states without peppering your code with temporary debug lines. This is the power FlowStorm brings to your Clojure(Script) development.
Here’s a quick look at its features:
FlowStorm can be integrated into your workflow in two ways. The recommended approach is “ClojureStorm,” which swaps your Clojure compiler at dev time for automatic instrumentation. Alternatively, “Vanilla FlowStorm” lets you add FlowStorm to your dev classpath and instrument by tagging and re-evaluating forms. In this tutorial, we’ll walk through the core features of FlowStorm with hands-on examples, from basic setup to advanced debugging techniques.
Before you start: This tutorial assumes you have a Clojure setup, know basic Clojure, and how to use a REPL.
In this tutorial, I will guide you from the simplest features to the more advanced ones, step by step. If you ever get lost or would like to try a different tutorial, there is one offered by FlowStorm itself, accessible via “Help” -> “Tutorial”.
mkdir fs-example
cd fs-example
cat << EOF > deps.edn
{:paths ["src/"] ; where your clj files are
:deps {quil/quil {:mvn/version "4.3.1563"}}
:aliases {:dev {:classpath-overrides {org.clojure/clojure nil}
:extra-deps {com.github.flow-storm/clojure {:mvn/version "1.12.0-9"}
com.github.flow-storm/flow-storm-dbg {:mvn/version "4.4.1"}}}}}
EOF
mkdir dev
cat << EOF > dev/user.clj
(ns user
(:require [quil.core :as q]))
EOF
Note: You can create a global alias or lein profile so you don’t need to modify your project’s source files like we do here.
Open the fs-example project and launch a REPL with the :dev alias (in CIDER, C-u C-c M-j and adding :dev to the end of the prompt in the minibuffer). Then load the dev/user.clj namespace and execute :dbg in the REPL.
With the project and REPL set up and FlowStorm started, we can try its basic features: instrumenting code and recording it.
The Browser tool lists all loaded namespaces in your application. If you correctly evaluated your user.clj file, which requires quil, you should see several quil namespaces and your user namespace. You can select any namespace to see its functions, their argument lists, and potentially their docstrings. When looking at the list of namespaces, you’ll notice that library namespaces are also included. FlowStorm is an excellent tool for understanding how libraries work – you don’t have to limit yourself to just examining your code.
For this tutorial, we will instrument our user namespace: select the user namespace in the Browser list and instrument it.
Note: FlowStorm also lets you say which namespaces to instrument with a JVM property.
Now evaluate (+ 1 2 3) in the REPL. The evaluation is recorded in the current tab (:int by default). FlowStorm records traces one after another in the current “flow” (one recording session). Evaluate (+ 2 3 4) and it is recorded after the previous trace.
As your debugging sessions get more involved, you might be investigating different features or bugs simultaneously. Instead of one massive, confusing trace, FlowStorm lets you organize your work into multiple ‘Flows’.
If you want to have a new recording in a different “Flow”, you can pick another one (see Figure 4) and switch between them. They help organize different debug sessions.
Pick flow-1 and turn on Record (if it is not already on). Evaluate (reduce + (range 100)); the new recording appears under flow-1 (see Figure 5). You can switch flows anytime.
Let’s face it, exceptions happen, and FlowStorm has a good way to navigate them. Evaluate:
(/ 10 0)
Simple math is useful for a first look, but FlowStorm shines when dealing with more complex code. Let’s use a Quil example to explore further.
Open the user.clj file in your editor and add the following Clojure code. It defines a simple Quil sketch.
(defn setup []
(q/frame-rate 1) ;; Set framerate to 1 FPS
(q/background 200)) ;; Set the background colour
(defn make-ellipse [x y diam]
(q/ellipse x y diam diam))
(defn draw []
(q/stroke (q/random 255)) ;; Set the stroke colour to a random grey
(q/stroke-weight (q/random 10)) ;; Set the stroke thickness randomly
(q/fill (q/random 255)) ;; Set the fill colour to a random grey
(let [diam (q/random 100) ;; Set the diameter to a value between 0 and 100
x (q/random (q/width)) ;; Set the x coord randomly within the sketch
y (q/random (q/height))] ;; Set the y coord randomly within the sketch
(make-ellipse x y diam) ;; Draw a circle at x y with the correct diameter
))
(q/defsketch fs-example ;; Define a new sketch named fs-example
:title "Oh so many grey circles" ;; Set the title
:settings #(q/smooth 2) ;; Turn on anti-aliasing
:setup setup ;; Specify the setup fn
:draw draw ;; Specify the draw fn
:size [323 200]) ;; Set the sketch size
Before re-evaluating the namespace and running the Quil sketch, make sure FlowStorm is still recording, then re-evaluate the user namespace. When the user namespace is evaluated and the sketch starts, a window titled “Oh so many grey circles” should appear, drawing circles. FlowStorm will begin to capture traces related to the draw and setup functions, because they are in the instrumented user namespace. You might see about 34 steps recorded.
If you step through the traces after the sketch starts, you might not see much about the animation. This is because the Quil animation (the draw loop) usually runs in its own thread.
FlowStorm makes navigating between threads a breeze:
Switch to the animation thread’s tab and you will see the setup and draw function calls, accumulating as the animation runs (if recording is still active). You can stop it now.
TIP: FlowStorm can show all threads in one window. To do that, start recording by clicking on “Start recording of the multi-thread timeline” next to the “Record” button and then select “Multi-thread timeline browser” under “More tools”.
Spotted an interesting piece of data in a trace? Why not pull it directly into your REPL?
Step through the animation thread’s trace until you reach a call to q/stroke. In the “Data Window” on the right, you’ll see the value of q/stroke is a Clojure map (clojure.lang.PersistentArrayMap). Let’s make it available in the user namespace (or any other). With the value selected in the “Data Window”, click the “def” button (see Figure 8) and give it a name (e.g. my-stroke-data). This value is now available in your user namespace in the REPL under the name you provided, allowing you to interact with it directly.
As your application runs, traces can grow quite long. Manually stepping through hundreds or thousands of evaluations isn’t efficient. Say you are looking at a q/stroke call and want to see the q/stroke call for the next circle drawn, without manually stepping through everything. This is where “Power Step” is useful. The “Power Step” feature has several modes. Here, we want to jump to the next time a function is called at the same place in the code (like the q/stroke line in your draw function).
First, get to a q/stroke call in the trace view. You can use the “Quick Jump” feature (see Figure 9): type user/draw there and pick the function (with a number of calls next to it). It will take you to the function. Then move two steps more and you should be on q/stroke. Select the same-coord option from the “Power Step” dropdown menu and click the “Power Step forward” button (highlighted in Figure 10). FlowStorm jumps to the next time that same expression (the q/stroke call in draw) runs. You can see in the “Data Window” how values (like :current-stroke in the map) changed for the new circle.
FlowStorm’s default data view is useful, but sometimes a custom view gives better insight, especially for domain-specific data. Let’s make one for our Quil stroke.
Looking at the q/stroke data (a map with color info), it’s just numbers. It would be more intuitive if we could see the actual color represented visually. FlowStorm allows custom visualizers for this purpose.
Add the following code to your user.clj namespace. It registers a new visualizer for data that represents a Quil stroke. It uses JavaFX (which FlowStorm’s UI is built with) to draw a line with the stroke’s color.
(require '[flow-storm.api :as fsa])
(require '[flow-storm.debugger.ui.data-windows.visualizers :as viz])
(require '[flow-storm.runtime.values :as fs-values])
(import '[javafx.scene.shape Line]
'[javafx.scene.paint Color])
(viz/register-visualizer
{:id :stroke ;; name of your visualizer
:pred (fn [val] (and (map? val) (:quil/stroke val))) ;; predicate
:on-create (fn [{:keys [quil/stroke]}]
{:fx/node (let [gray (/ (first (:current-stroke stroke)) 255) ;; gray normalization
color (Color/gray gray)
line (doto (Line.) ;; draw line
(.setStartX 0.0)
(.setStartY 0.0)
(.setEndX 30)
(.setEndY 30)
(.setStrokeWidth 5.0)
(.setStroke color))]
line)})})
(fs-values/register-data-aspect-extractor ;; extract data from q/stroke
{:id :stroke
:pred (fn [val _] (and (map? val) (:current-stroke val)))
:extractor (fn [stroke _] {:quil/stroke stroke})})
(viz/add-default-visualizer (fn [val-data] (:stroke (:flow-storm.runtime.values/kinds val-data))) :stroke)
Re-evaluate the user namespace (which should restart the Quil sketch). Now, when you go to a trace involving q/stroke (or more precisely, the data map that contains the stroke information) in the animation thread, you should see a short line whose color matches the stroke color used for a circle (see Figure 11).
If you use “Power Step” (set to same-coord) to jump to the next q/stroke data, you should see the color of the line in the custom visualizer change accordingly.
Custom visualizers provide significant power. For even more advanced customization or integration with external tools, FlowStorm also supports a plugin system. Exploring plugins is beyond the scope of this initial tutorial, but it’s a feature to be aware of for future needs.
Stepping line-by-line shows what happens, but how does it all fit together? The “Call Tree” tool shows a tree of function calls from a recording. This helps understand the structure of how the code runs and how functions call each other. The tree displays arguments or return values to help differentiate between multiple calls to the same function.
Open the Call Tree tool and look for user/draw or other functions from your sketch, like user/setup. Expand one of the user/draw nodes in the call tree, and you should see a call to user/make-ellipse. Double-clicking any node takes you to it.
Need a quick summary of all functions that ran, or want to find every single call to user/draw? The ‘Function List’ tool gives you a flat, searchable index of all recorded function invocations.
In the Function List you’ll see your own functions (e.g., user/draw) and library functions that were called (e.g., quil.core/ellipse, quil.core/random). Click on user/make-ellipse. A panel on the right should show all the individual calls of that function that were recorded. For each call, you can see its arguments and what it returned.
Found a particularly interesting spot in a long trace that you know you’ll want to revisit?
FlowStorm’s bookmarks save your place.
To save a bookmark:
Once saved, you can use the bookmark list in FlowStorm (“View” -> “Bookmarks”) to quickly jump back to that exact position in the trace at any time, without needing to manually step through again. You can also make bookmarks in your Clojure code. This lets you mark important points in your program’s run.
Let’s add one to our Quil example.
Make sure the flow-storm.api namespace is required in your user.clj file under the fsa alias. Then modify the setup function in user.clj to include a call to fsa/bookmark:
(defn setup []
(q/frame-rate 1)
(q/background 200)
(fsa/bookmark "Quil setup function complete")) ; <--- Add this line
Re-evaluate the namespace; when setup runs, a bookmark “Quil setup function complete” will be automatically created, and FlowStorm will jump to it. You can also find this new bookmark in FlowStorm’s bookmark list.
We all know println debugging. While sometimes useful, it can clutter your console and mix with other output. Clojure’s tap> offers a more structured way to inspect values, and FlowStorm can be its dedicated display.
Modify the draw function in your user.clj file to tap> the circle’s properties just before it’s drawn.
(defn draw []
(q/stroke (q/random 255))
(q/stroke-weight (q/random 10))
(q/fill (q/random 255))
(let [diam (q/random 100)
x (q/random (q/width))
y (q/random (q/height))]
(tap> {:event :circle-drawn, :x x, :y y, :diameter diam}) ; <--- Add this line
(make-ellipse x y diam)))
Re-evaluate the user namespace and make sure FlowStorm is recording. You should see the tapped values appear in FlowStorm each time the sketch runs its draw function (see Figure 15).
Sometimes, you want to keep an eye on a specific value or expression as it changes over many executions, without clicking through traces each time. The ‘Printers’ tool is like setting up a persistent watch window, tailored to exactly what you want to see.
Make sure you are in the right flow (e.g. flow-0 and the “Animation thread” is selected), and navigate to the draw function. Right-click on x in (q/ellipse ...) and select “Add to prints” (see Figure 16). Give the printer a message such as “X of ellipse” and, optionally, a transform such as int (that will change floats to integers). You will then see every value of x, transformed to int, that you recorded. You can always redefine your printers and do this again. Double-clicking any printed value takes you to it.
When your traces become vast landscapes of data, finding that one specific value or call can feel like searching for a needle in a haystack. FlowStorm’s search functionality can help you.
Let’s search for frame-rate. Type frame-rate into the search bar and execute the search. You should see all the occurrences of this key (see Figure 19).
This tutorial showed you FlowStorm’s main features: setting up, first recordings, and using tools like custom views and printers. You saw how FlowStorm can help understand your Clojure code’s execution, making debugging easier.
(Here is a gist with a complete user.clj example.)
The best way to learn is to use FlowStorm in your projects. The more you use it, the more you’ll see how it helps with complex code. The next section gives a quick look at other features. For more details, the official FlowStorm documentation is very helpful.
This tutorial covered many things, but FlowStorm has more. Here’s a quick list of other features. Look at the official documentation for details:
datafy / nav support: works with Clojure’s datafy and nav to show custom data types better.
core.async.flow support: see the recorded activity of a flow as a graph.
Capturing of stdout and stderr.
A flow-storm.runtime.values/SnapshotP protocol to tell FlowStorm how to save the state of mutable values.
A programmatic API (flow-storm.runtime.indexes.api) to look at recorded traces with code from your REPL.
The post FlowStorm: Debugging and Understanding Clojure Code on a New Level appeared first on Flexiana.
Our next Apropos will feature Nathan Marz on May 20. Be sure to subscribe!
The main advantage of Lisps (including Clojure) over other languages is the REPL (Read-Eval-Print Loop). Lisp used to have a bunch of advantages (if statements, garbage collection, built-in data structures, first-class closures, etc.), but these are common now. The last holdout is the REPL.
The term REPL has diluted, so I should define it: A REPL is a way to interactively inspect and modify running, partially correct software. My typical workflow is to open my editor, start the REPL, and start the application server from within it. I can make requests from my browser (to the running server), recompile functions, run functions, add new libraries, and inspect variables.
The REPL accelerates learning by increasing the speed and information richness of feedback. While programming, you learn about:
Your problem domain
The languages and libraries you’re using
The existing codebase and its behavior
The REPL improves the latency and bandwidth of feedback. Faster and richer feedback helps you learn. It lets you ask more questions, check your assumptions, and learn from and correct your mistakes. Fast, rich feedback is essential to achieving a flow state.
The obvious contrast with REPLs is with the mainstream edit-compile-run (ECR) loop that most languages enable. You edit your source code, run the compiler, and run the code. Let’s look at the main differences between REPL and ECR:
In ECR, your state starts from scratch. In REPL, your state is maintained throughout. All of the variables you set up are still there. Web sessions are still open.
In ECR, your compiler may reject your program, forcing you back into the Edit phase. Nothing is running, so you must fix it before continuing. In REPL, when the compiler rejects your change, the system is still running with the old code, so you can use runtime information.
In ECR, if you want to try something out, you have to write an entire program to compile and run. In REPL, trying something out means typing the expression and hitting a keystroke.
The end result is that the ECR loop is much slower than the REPL. One of the benefits of modern incremental testing practices (like TDD) is that it approximates the fast feedback you get from the REPL:
With testing, you do not maintain your state. However, you write the code to initialize the state for each test.
With testing, you make small changes to the code before rerunning the tests so you are usually not far from a running program.
With testing, it is easy to add a new test and run just that.
The advantage of testing is that you have a regression suite, which you don’t get from the REPL. But feedback in testing is poorer. I haven’t heard of anyone writing a test just to see what the result of an expression is. And, by the way, doing incremental testing with the REPL is a breeze. It’s like the best of both worlds.
The REPL gives you fast, rich feedback in three main ways:
Maintaining state — Your running system is still running, with all in-memory stuff still loaded. This means that after editing a function and hitting a keystroke, you can inspect the result of your change with a delay that is below human perception.
Running small expressions — Within your running system, you can understand the return value from any expression you can write, including expressions calling your code or libraries you use. The cost of running these expressions is below a psychological threshold, so they feel almost free compared to having to scaffold a test or a public static void main(String[] argv).
Ad hoc inspection of the running system — This is a big one you gain skill with over time. You can do anything you can imagine, from running your partially completed function (just to make sure it does what you expect) to printing out the value of a global variable (that you saved values to from your last web request). The flexibility makes tools like debuggers feel rigid.
However, other languages have chipped away at the advantages of REPL-Driven Development. I already talked about incremental testing approaches (like TDD) and how they approximate the feedback of the REPL. But there are more technologies that provide good feedback in the mainstream ECR (Edit-Compile-Run) paradigm:
Static analysis — You can get feedback on problems without leaving the edit phase with tools like LSP and squiggles under your code.
Static types — If you’ve got good types and you know how to use them, the Edit-Compile cycle can also give you rich feedback. The question is whether your compiler is fast enough to keep up.
IDEs with run buttons — Many IDEs for compiled languages use their own, incremental compiler. The code is constantly being compiled as you edit. When you hit the Run button, you’re essentially cutting out the Compile phase (which can often be very lengthy). If you can set it up to run a small expression at a keystroke, you’re very close.
Autocomplete — Autocomplete speeds up the Edit phase. Autocomplete with the REPL is a cinch. You can inspect what variables are available in the environment dynamically. However, modern IDEs can use static analysis to aid autocomplete.
Incremental testing — Incremental testing (like TDD) speeds up the Edit and Run phases. Added here just for completeness.
But don’t tell anyone: we can use these in addition to the REPL. In fact, Clojure has an excellent LSP and a great testing story. The only thing we don’t have is a great story about static typing.
Many languages claim that they have a REPL. But what they really have is an interactive prompt where you can type in expressions and get the result. This is great! But it doesn’t capture the full possibility of the REPL.
People often ask what’s missing, especially from a very dynamic language like JavaScript. I’ve tried to set up a REPL-Driven Development workflow in JavaScript, but I encountered these roadblocks. There are probably more:
Redefining const: In Clojure, we pride ourselves on immutability. However, global variables are mutable so that we can redefine them during development. Unfortunately, JavaScript engines are strict about global variables defined with const. I found no way to redefine them once they were defined.
Reloading code: It’s not clear how to reload code after it has changed. Let’s say I have a module called catfarm.js and I modify one of the functions in it. My other module old-macdonald.js imports catfarm. How do I get old-macdonald to run the new code? The engines I tried did not re-read the imported modules, instead opting for a cached version. In addition, even if they did, the old-macdonald code needs to be recompiled. Clojure’s global definitions have a mechanism to allow them to be redefined, and any accesses to them after redefinition are immediately available. If I recompile a function a in Clojure, the next time I call b (which calls a), it calls the new version of a.
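A quick REPL sketch of that behavior (an illustrative session, not from the article):
(defn a [] 1)
(defn b [] (a))   ;; b calls a through its var
(b)               ;; => 1

(defn a [] 2)     ;; redefine a at the REPL
(b)               ;; => 2, b picks up the new a immediately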
Calling functions from other modules: When you import a module, it’s basically a compiler directive where you say what you’re importing. But how do you call things that aren’t imported? Why would you do that? Because you want to try them out! This, combined with not being able to re-import them, makes it really hard to incrementally work on your code. In Clojure, require is a function that loads new modules. And in-ns is a function that lets you navigate through the modules like in a directory structure from your terminal. require() in JavaScript (the old way of doing modules) worked more like that.
Hot module reloading is an attempt to address these limitations. It also seems like a major project with a lot of pitfalls. And it still doesn’t maintain state. Maybe it will get good enough one day to be close to REPL-Driven Development. Or maybe Node should have a “REPL” mode where const variables can be redefined and modules can be reloaded and navigated.
The biggest problem with REPLs is that they require an enormous amount of skill to operate effectively. To know what you need to recompile, you must understand how Clojure loads code and how Vars work. You need to navigate namespaces. Most importantly, you need to develop the habit of using the REPL, which is not built-in. It’s common for someone in a beginner’s chat to ask “what happens when I call foo on a nil?” My first reaction is: “Why are you asking me? Instead of typing it here, type it in the REPL!” People need to be indoctrinated.
Here are some ways to improve your use of the Clojure REPL:
Next time you’re writing a function, use a rich comment block ( (comment …) ) to call that function. After the tiniest of modifications, call the function to inspect the return value. This is useful when writing long chains of functions.
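For instance, a sketch of what that looks like (the function and the calls are made up for illustration):
(defn celsius->fahrenheit [c]
  (+ 32 (* c 9/5)))

;; Rich comment block: these forms are not executed when the file is
;; loaded, but can be evaluated one by one from the editor.
(comment
  (celsius->fahrenheit 0)    ;; => 32
  (celsius->fahrenheit 100)  ;; => 212
  )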
The next time you’re tempted to look up documentation for a function, do a little experiment to see what happens at the REPL. Some common questions to answer:
What type does a function return?
Can I call that with an empty value?
What does the zero-argument version do?
Learn the keystrokes in your editor to evaluate:
The whole file
The current top-level form (often a defn)
The expression just before the cursor
Learn the keystrokes to run the tests from your editor. Run your tests. Edit and compile functions (see previous bullet). Run the tests again. This combines the best of incremental testing and RDD.
I love the REPL. I miss it when I go to other languages. The REPL is also forgiving. It can make up for a lot of missing tooling (like autocomplete and debuggers). The REPL also requires a lot of skill. Squiggles and LSP can immediately give you hints for making your code better. But the REPL requires a deep understanding of the language and lots of practice. This is ultimately the biggest barrier to its adoption. People who haven’t learned how to use the REPL don’t even know what they are missing.
PS: You can learn the art of REPL-Driven Development in my Beginner Clojure signature course.
by Laurence Chen
When I was a child, my sister caught chickenpox. Instead of isolating us, our parents let us continue playing together, and I ended up getting infected too. My father said, “It’s better to get chickenpox now—it won’t hurt you. Children aren’t just miniature adults.” This saying comes from pediatric medicine. While children may resemble adults on the outside, their internal physiology is fundamentally different. Some illnesses are mild for children but can hit adults much harder—chickenpox being a prime example.
I often think of this phrase, especially when reflecting on Clojure’s interactive development model: interactive development feels natural and smooth in small systems, but without deliberate effort, it often breaks down as the system grows. Just as children and adults must be treated differently, small and large systems have fundamentally different needs and designs when it comes to interactive capabilities.
Let me start by defining interactive development:
Interactive development is a development model that allows developers to modify, execute, and observe arbitrary small sections of code in a running system. This enables developers to obtain input/output behavior of that code and explore related structures, thereby supplementing or verifying the understanding gained from reading the source code.
Despite these obvious benefits, interactive development is one of the first things to erode as systems evolve.
Without specific attention to interactive development, system initialization often tightly couples all components together. A common pattern is to load all config files, initialize databases, background schedulers, set up web routes, and assemble all components into a large object during startup. In such cases, any change to a component requires restarting the entire system for the changes to take effect.
This creates a problem: if I just want to tweak a setting—say, changing a port from 8080 to 8089—I’d have to restart the entire system. And system startup times can be slow enough to disrupt flow.
The solution is to enable local control over each component’s lifecycle, providing individual start/stop functions for each service. Popular Clojure lifecycle management libraries like mount, integrant, and makina support this.
The benefit of such a design is that during interactive development, you can freely stop a component (e.g., just shut down :web), observe behavior after changes, and restart it individually.
Some components, like web servers or schedulers, are expensive to restart—often taking several seconds. If business logic is tightly coupled to these services, even after re-evaluating code in the REPL, behavior won’t immediately reflect the change because the service is still running old logic.
This can be addressed by decoupling business logic from services using vars.
In many Clojure web applications, you’ll see this pattern:
(def web-handler ...)
(run-jetty #'web-handler {:port 8080})
In Clojure, #' is a var reference. It ensures the system accesses the variable indirectly during runtime. So even if web-handler is redefined later, Jetty will automatically use the new version. There’s no need to restart the server, because it’s bound to the var, not the original function.
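For instance (a hypothetical follow-up at the REPL, not from the article), the running server picks up a new handler without a restart:
;; later, at the REPL: redefine the handler and the running Jetty
;; instance uses it on the next request, because it holds the var
;; #'web-handler rather than the original function value
(def web-handler
  (fn [_request]
    {:status 200
     :headers {"Content-Type" "text/plain"}
     :body "handler v2"}))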
This technique applies to cron jobs too. For example:
(defn job-handler [n] (prn "job " n))
(schedule-task! {:handler #'job-handler
:tab "/5 * * * * * *"})
As long as schedule-task! binds to #'job-handler and not the function body, you can redefine and re-evaluate job-handler without re-registering the whole job. This significantly shortens feedback loops and improves interactive development efficiency.
When using SQL databases, interactive development usually requires:
Libraries like YeSQL make these hard to achieve, because:
A better approach is using a data-driven library like HoneySQL, which allows SQL queries to be expressed as Clojure maps:
(def q {:select [:id :name]
:from [:users]
:where [:= :status "active"]})
You can then inspect the generated query by calling (hsql/format q).
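For example, a quick REPL check (assuming HoneySQL 2.x, required as honey.sql under the hsql alias used above):
(require '[honey.sql :as hsql])

(hsql/format q)
;; => ["SELECT id, name FROM users WHERE status = ?" "active"]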
If you use Ring to develop web apps, interactive development is very intuitive:
You can test handlers like this:
(my-handler
{:uri "/new"
:request-method :get})
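For completeness, here is a minimal handler the call above could exercise (the handler body is made up for illustration):
(defn my-handler [request]
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    (str "You requested " (:uri request))})

(my-handler {:uri "/new" :request-method :get})
;; => {:status 200, :headers {"Content-Type" "text/plain"}, :body "You requested /new"}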
However, things get more complex with webhook handlers. These often need to extract raw payloads from (:body request), and the body must be a java.io.InputStream. This complexity can hinder interactive development.
A good workaround is ring-mock:
(require '[ring.mock.request :as mock])
(def req
(-> (mock/request :post "/webhook")
(mock/header "Content-Type" "application/json")
(mock/header "x-signature" "sig-1234")
(mock/body json-body)))
(webhook-handler req)
Other interactive development blockers are similar to this ring-mock example: when we’re unfamiliar with a library, we may miss tools that enable interactive work. For instance, with reitit (a router), you can use match-by-path to check if a URI routes correctly. With sieppari (an interceptor library), you can use execute to observe how a set of interceptors behaves given a request.
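A small sketch of the reitit case (the route table is made up for illustration):
(require '[reitit.core :as r])

(def router
  (r/router
    [["/new" ::new]
     ["/users/:id" ::user]]))

(r/match-by-path router "/users/42")
;; => a Match whose :path-params is {:id "42"}; nil if nothing matches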
By default, adding a new library to deps.edn does not make it available in the running REPL session.
I recommend Launchpad as a solution. It handles hot-reloading of dependencies automatically—no need to remember or invoke specific hot-reload functions.
If your editor has strong REPL integration—or even lets you modify it using Lisp—you can take interactive development to another level.
It wasn’t until I’d used Conjure for quite a while that I realized its full potential. For instance, I used to believe all testing had to be done via the shell. But Conjure lets you run the test under the cursor with <localleader> tc.
Another hidden gem in Conjure is pretty-printing nested EDN values. If you use clojure.pprint/pprint, Conjure prefixes each line with ; out, marking it as output. If you don’t want this, you can use (tap> variable) to send it to the tap queue, then <localleader> vt to display the queue, which will auto pretty-print without ; out.
There are two classes of editor-integrated commands that I believe still hold great potential:
Interactive development isn’t just a productivity tool—it’s a way of working in active dialogue with the system. It turns programming into something playful: through cycles of trial, observation, and adjustment, we refine both our understanding and design. The quality of this dialogue depends heavily on system architecture, tooling, library choices, and editor integration.
But just as “children aren’t miniature adults,” interactive development may feel natural in small systems, yet as systems grow, maintaining it takes conscious effort. Otherwise, it quietly slips away amidst tighter coupling, increased complexity, and growing dependencies.
The investment is worth it. It deepens our system understanding, makes flow states more accessible, gives test-oriented functions new life, transforms the editor into a rich interface—and helps us design truly decoupled systems.
Hello Fellow Clojurists! This is the second report from the 5 developers receiving Annual Funding in 2025.
Dragan Duric: Apple M Engine Neanderthal
Eric Dallo: metrepl, lsp-intellij, repl-intellij, lsp, lsp4clj
Michiel Borkent: clj-kondo, squint, babashka, fs, SCI, and more…
Oleksandr Yakushev: CIDER, Compliment, JDK24
Peter Taoussanis: Telemere, Tufte, Truss
2025 Annual Funding Report 2. Published May 5, 2025.
My goal with this funding in 2025 is to support Apple silicon (M cpus) in Neanderthal (and other Uncomplicate libraries where that makes sense and where it’s possible).
In March and April, I implemented the JNI bindings for the Apple Accelerate libraries (blas_new, lapack_new, Sparse, BNNS, BNNS Graph, vDSP, vForce and vImage) and implemented almost all functionality of Neanderthal’s Apple M engine based on Accelerate. It will soon be ready for a proper release (currently waiting on some bugfixes, polishing, and a javacpp-presets release becoming available in Maven Central).
In more detail:
Here’s what I’ve proposed when applying for the CT grant.
I proposed to Implement an Apple M engine for Neanderthal. This involves:
Projects directly involved:
https://github.com/uncomplicate/neanderthal
https://github.com/uncomplicate/deep-diamond
https://github.com/uncomplicate/clojure-cpp
As I had implemented an OpenBLAS-based engine in January-February, in March-April I tackled the main objective: Apple Accelerate bindings. As Apple’s documentation is not stellar, and there are multiple tools and languages involved, it was slow and tedious work consisting of lots of experimentation and discovery. Even boring, I would say. But slowly and steadily I discovered how the relevant JavaCPP generators work, I unearthed Accelerate C++ intricacies, fought with them one by one, and eventually managed to create the proper Java bindings! Along the way I even contributed some fixes and updates to JavaCPP itself! YAY! This is available as a new project under the Uncomplicate umbrella at https://github.com/uncomplicate/apple-presets
Next, I returned to the pleasant part of the work - programming in Clojure - and almost completed the dedicated Neanderthal engine that utilizes Apple Accelerate for BLAS and LAPACK, as well as math functions and random generators, on M Macs. This covers the core and linalg namespaces, which were already supported by the alternative OpenBLAS engine (implemented in Jan-Feb), AND the Math and RNG engines. I didn’t manage to iron out all the bugs so it could be ready for release, but this will certainly be ready in May-June. I also didn’t manage to tackle sparse matrices, but as I managed to create Accelerate bindings for all types, including Sparse, I expect this not to be a problem and to be completed during this funding round.
Looking at the funding proposal, I can say that I’m very satisfied that all the features that I promised to build are progressing even better than expected, so that will leave some time to try to do some of the features that I said I hope to be able to support, namely to explore Deep Diamond Tensor support on Apple M (via BNNS and/or BNNS Graph) and GPU support via Metal.
I even got some ideas for additional projects based on native Apple functionality related to machine learning and audio/video, but let’s not get too far ahead of ourselves.
All in all, I feel optimistic about how this project progresses!
2025 Annual Funding Report 2. Published April 30, 2025.
In these last 2 months I was able to work on multiple projects and even focus on a new exciting project called metrepl, an nREPL middleware that helps extract metrics about your REPL. It is really helpful when you have multiple coworkers working on multiple projects and you want to collect information about performance, time spent in REPL features, and more! Besides that, I kept working hard on improving the IntelliJ experience via the 2 OSS plugins for LSP + REPL, and of course on improving clojure-lsp, now the base of all major editors.
event/op-completed
metric to measure time correctlyevent/test-executed
event.event/test-passed
, event/test-errored
and event/test-failed
events.session-time-ms
to close
op.:project-path
to metrics.2025.02.20
.0bb572a03c0025c01f9c36bfca2815254683fbde
. #19842025.02.21-20250314.135629-7
.clojure-lsp/unused-public-var
linter. #1878:test-locations-regex
configuration to allow customizing test file detection for the unused-public-var
linter’s :ignore-test-references?
and test code lenses. #1878unused-public-var
false positives when :ignore-test-references? true
.2025.04.07
.#_
(ignore next form). #1965forward
, forward-select
, backward
and backward-select
paredit actions.my-alias/
. #1957declare
forms too. #1986:
lexer check since this is delegated to clojure-lsp/clj-kondo already.forward
, backward
, forward-select
, backward-select
paredit actions. #72Together with the help of @afucher, we improved so much the IntelliJ REPL experience, fixing multiple issues and adding multiple features, the experience now is pretty close to other REPL experiences in other editors!
textDocument/selectionRange
LSP feature coercers.def-extension
to create plugin.xml extension points easily and more idiomatic.2025 Annual Funding Report 2. Published May 2, 2025.
In this post I’ll give updates about open source I worked on during March and April 2025.
To see previous OSS updates, go here.
I’d like to thank all the sponsors and contributors that make this work possible. Without you the below projects would not be as mature or wouldn’t exist or be maintained at all! So a sincere thank you to everyone who contributes to the sustainability of these projects.
Current top tier sponsors:
Open the details section for more info about sponsoring.
If you want to ensure that the projects I work on are sustainably maintained, you can sponsor this work in the following ways. Thank you!
On to the projects that I’ve been working on!
I blogged about an important improvement in babashka regarding type hints here.
Also I did an interview with Jiri from Clojure Corner by Flexiana, viewable here.
Here are updates about the projects/libraries I’ve worked on in the last two months.
babashka: native, fast starting Clojure interpreter for scripting.
ThreadBuilder
interopjava.util.concurrent.ThreadLocalRandom
java.util.concurrent.locks.ReentrantLock
java.time.chrono.ChronoLocalDate
java.time.temporal.TemporalUnit
java.time.chrono.ChronoLocalDateTime
java.time.chrono.ChronoZonedDateTime
java.time.chrono.Chronology
cheshire.factory
namespace (@lread)24
0.9.45
1.4.28
java.util.regex.PatternSyntaxException
1.8.735
6.0.0
0.8.65
clerk: Moldable Live Programming for Clojure
squint: CLJS syntax to JS compiler
:context expr
in compile-string
:context expr
in set!
expressionreturn
:require
+ :rename
+ allow renamed value to be used in other :require clausenull
when written in else branch of if
#jsx
and #html
range
fixesrun!
defclass
: elide constructor when not provideddefclass
clj-kondo: static analyzer and linter for Clojure code that sparks joy.
:config-in-ns
on :missing-protocol-method
:redundant-ignore
on :missing-protocol-method
:missing-protocol-method
. See docs.
, e.g. py.
according to clojure analyzer--repro
flag to ignore home configurationdeftype
form results in NPE
(alias)
bug (@Noahtheduke)SCI: Configurable Clojure/Script interpreter suitable for scripting
sci.async/eval-string+
should return promise with :val nil
for ns form rather than :val <Promise>
volatile?
to core vars1.4.28
quickdoc: Quick and minimal API doc generation for Clojure
<sub>
deps.edn
2025.04.07
org.babashka/cli
dependencyCLI: Turn Clojure functions into CLIs!
process: Clojure library for shelling out / spawning sub-processes
html: Html generation library inspired by squint’s html tag
cherry: Experimental ClojureScript to ES6 module compiler
cljs.pprint/pprint
add-tap
#html
id and class shortcuts + additional features and optimizations, such as an optimization for aset
nbb: Scripting in Clojure on Node.js using SCI
jsr:
dependency support, stay tuned.instaparse-bb: Use instaparse from babashka
edamame: Configurable EDN/Clojure parser with location metadata
fs - File system utility library for Clojure
fs/match
doesn’t match when root dir contains glob or regex characters in pathfs/update-file
to support paths (@rfhayashi)sql pods: babashka pods for SQL databases
These are (some of the) other projects I’m involved with but little to no activity happened in the past month.
2025 Annual Funding Report 2. Published May 5, 2025.
Hello friends! Here’s an update on my March-April 2025 Clojurists Together work.
2025 Annual Funding Report 2. Published April 30, 2025.
A big thanks to Clojurists Together, Nubank, and other sponsors of my open source work! I realise that it’s a tough time for a lot of folks and businesses lately, and that sponsorships aren’t always easy 🙏
Hi folks! 👋👋
Hope everyone’s well, and those in Europe enjoying the first glimpses of actual ☀️ in a while :-)
Telemere v1 stable is now officially and finally available! 🍾🥳🎉
It was a lot of work to get here, but I’m happy with the results - and I’m very grateful for all the folks that have been patiently testing early releases and giving feedback 🙏
If you haven’t yet had an opportunity to check out Telemere, now’s a pretty good time.
It’s basically a modern rewrite of Timbre that handles both structured and unstructured logging for Clojure and ClojureScript applications. It’s small, fast, and very flexible.
I’ll of course continue to support Timbre, but Telemere offers a lot of advantages, and migration is often pretty straight-forward.
There’s a couple video intros:
Telemere also has the most extensive docs I’ve written for a library, including both:
Tufte v3 RC1 is now also available.
Tufte’s been around for ages but recently underwent a major overhaul focused on improving usability, and interop with Telemere.
The two now share a common core for filtering and handling. This means that they get to share relevant concepts, terminology, capabilities, and config APIs.
The shared core also means wider testing, easier ongoing maintenance, and the opportunity for improvements to further cross-pollinate in future.
Performance has also been significantly improved, and the documentation greatly expanded. There’s too much new stuff to mention here, but as usual please see the release notes for details.
Several other releases worth mentioning:
I’ll note that Telemere, Tufte, and Truss are now intended to form a sort of suite of complementary observability tools for modern Clojure and ClojureScript systems:
Together the 3x offer what I hope is quite a pleasant (and unique) observability story for Clojure/Script developers.
Next couple months I expect to focus on:
After that, still need to decide. Might be additional stuff for Telemere, or gearing up for the first public release of Carmine v4 (Redis client + message queue for Clojure).
Cheers everyone! :-)