Senior Software Developer (backend) at Crossref


  • Location: Remote and global, to partially overlap with working hours in European time zones.
  • Type: Full-Time, 40 hours a week, Mon-Fri.
  • Remuneration: 90k USD equivalent. We pay salaries in the currency of the country in which you’re based. We arrive at the local USD-equivalent salary by determining the average 5-year USD exchange rate, to stabilise currency fluctuations.
  • Benefits: Check out our Employee handbook for more details on paid time off, unlimited sick time, paid parental and medical leaves, and more.
  • Reports to: Program Technical Lead, Carlos del Ojo Elias
  • Timeline: Advertise in February-March and offer by April

About the role

We are looking for a Senior Software Developer to join our Contributing to the Research Nexus (CRN) program. In this backend-focused role, you will help maintain, extend, and modernize our existing services while also leading the design and implementation of new greenfield systems. The role centres on JVM technologies and cloud-native, distributed systems operating at scale.

Crossref collects a wide range of metadata for an ever-growing and increasingly diverse collection of scholarly outputs. We build and operate services that register, link, and distribute scholarly research metadata. The CRN program develops retrieval, matching, and enrichment services that integrate closely with systems across Crossref.

We are a small organisation with a big impact, and we’re seeking a mission-driven Senior Software Developer who can help maintain and evolve our services, design well-scoped solutions, and contribute to operational reliability through code reviews and documentation. This role will collaborate closely with colleagues across Technology and Programs & Services teams.

Key responsibilities

  • Understand Crossref’s mission and how we support it with our services
  • Work collaboratively in multi-functional project teams
  • Work closely with the Programs & Services Team to solve problems, maintain and improve our services and execute technology changes
  • Collaborate with external stakeholders when needed
  • Produce well-scoped and testable software designs and specifications
  • Implement and test solutions using Clojure, Kotlin, Java and other relevant technologies
  • Pursue continuous improvement across legacy and green-field codebases
  • Provide code reviews and guidance to other developers regarding development practices and help maintain and improve our development environment
  • Identify and report vulnerabilities and inefficiencies in our services
  • Document and share development plans and changes
  • Be an escalation point for technical support; investigate and respond to occasional but complex user issues

About you

You’re a software developer who enjoys understanding problems end-to-end and making thoughtful technical decisions. You’re comfortable working with ambiguity, you care deeply about users, and you take pride in building systems that last. You don’t need close supervision, but you value collaboration, challenge assumptions constructively, and know when to bring others into technical decisions.

We know no one will meet all the requirements, but we are looking for people who are willing to learn and who like to meet new challenges - please apply if this feels like you!

Essential skills and experience:

  • Minimum 5 years of hands-on experience in software development, engineering, or similar
  • Working knowledge of Clojure or another Lisp / functional language, or demonstrated ability and willingness to learn Clojure quickly.
  • Familiarity with JVM technologies (Kotlin and/or Java)
  • Comfortable working with Git, including code reviews and collaborative workflows
  • Experience contributing to or maintaining production systems, including reading and extending existing codebases
  • Experienced with continuous integration, testing and delivery frameworks, and cloud operations concepts and techniques
  • Familiar with Docker technologies
  • Strong communication skills and a collaborative approach to problem-solving
  • Strong written communication skills, particularly for design discussions and technical documentation
  • Comfortable being part of a geographically distributed team
  • Self-directed, a good manager of your own time, with the ability to focus

Nice-to-have:

  • Curious and tenacious at learning new things and getting to the bottom of problems
  • Strong understanding of functional programming concepts, including immutability, pure functions, higher-order functions, composition
  • Outstanding at interpersonal relations and relationship management
  • Ability to work autonomously while collaborating in a distributed team environment
  • A working understanding of XML and document-oriented systems such as Elasticsearch
  • Some experience with Python, JavaScript or similar scripting languages
  • Experience building tools for online scholarly communication or related fields such as library and information science
  • Comfortable working in open source projects, including public issue tracking, pull requests, and community discussion
  • Experience with JVM web frameworks (Spring, Quarkus, or similar)
  • Direct experience with Clojure in production, especially in open source projects
  • Experience with JVM internals, performance tuning, or memory management
  • Familiarity with the scholarly communications domain

About Crossref & the team

We’re a non-profit membership organisation that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.

We envision a rich and reusable open network of relationships connecting research organisations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society. We are working towards this vision of a ‘Research Nexus’ by demonstrating the value of richer and connected open metadata, incentivising people to meet best practices, while making it easier to do so. “We” means 23,000+ members from 160+ countries, 170+ million records, and nearly 2 billion monthly metadata queries from thousands of tools across the research ecosystem. We want to be a sustainable source of complete, open, and global scholarly metadata and relationships.

Take a look at our strategic agenda to see the planned work that aims to achieve the vision. The sustainability area aims to make transparent all the processes and procedures we follow to run the operation long-term, including our financials and our ongoing commitment to the Principles of Open Scholarly Infrastructure (POSI). The governance area describes our board and its role in community oversight.

It also takes a strong team – because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it. We are a distributed group of 50+ dedicated people who take our work seriously, but don’t take ourselves seriously - we like to play quizzes, measure coffee intake, and create 100s of custom slack emojis. We do this through fair policies and working practices, a balanced approach to resourcing, and accountability to each other.

We can offer the successful candidate a challenging and fun environment to work in. Together we are dedicated to our global mission and we are constantly adapting to ensure we get there. Take a look at our organisation chart, the latest Annual Meeting recordings, and our financial information.

Thinking of applying?

We especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications. You can be based anywhere in the world where we can employ staff, either directly or through an employer of record.

We will invite selected candidates to an initial call to discuss the role. Following that, shortlisted candidates will be invited to work on a short (1-2 hours) take-home assignment. This will be followed by a technical interview. The last step will be a panel interview, where you will receive questions in advance. All interviews will be held remotely on Zoom.

Click here to apply!

Applications close on March 10th, 2026.

Anticipated salary for this role is 90k USD-equivalent, paid in local currency. Crossref offers competitive compensation, benefits, flexible work arrangements, professional development opportunities, and a supportive work environment. Check out our Employee Handbook for more details on paid time off, unlimited sick time, paid parental and medical leaves, and more.

Equal opportunities commitment

Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.

Thanks for your interest in joining Crossref. We are excited to hear from you!


One year of LLM usage with Clojure

Introduction

This essay is a reflection on using LLM agents with the Clojure programming language. At Shortcut, we have spent the last year building Korey, an LLM agent focused on product management. During that time, we have tried different LLM agents on our rather large Clojure code base of roughly 250,000–300,000 lines. In doing so, we discovered a lot. I hope this essay can help people in the Clojure community, and users of other less-mainstream languages more generally, take more advantage of LLM tools in their work. I would break our approach down into eras:

  • Early adoption and struggles
  • The doldrums of Claude Code
  • The clojure-mcp revolution
  • Skills, Prompts, and OpenCode
  • System prompt refinements
  • PI Agent and liberation

Early Adoption and Struggles

It is commonly understood that LLMs are more effective with languages that dominate the training dataset. Research has shown that model performance varies significantly with the volume of training data available for each programming language, with Python, JavaScript, and TypeScript heavily represented in public code repositories. This was, and remains, a concern for me. At Shortcut we have a large Clojure code base that we have grown, nurtured, and maintained over the last 11 years. We don't have the time, money, or interest to rewrite it in Python or TypeScript simply because state-of-the-art models prefer those languages, and throwing out working code does not seem like a good business move.

With that in mind, we decided to try to teach Claude Code how to write Clojure well. We quickly realized that certain defaults in the way Claude Code is structured make it difficult to work with a large code base like ours. For example, Claude will run the entire test suite after each change it implements, and unfortunately, running the entire test suite locally takes several minutes at this point. As Clojure engineers, we prefer tight feedback loops at the REPL, so a several-minute test run per change was unacceptable to us.
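To make the contrast concrete, here is a minimal, self-contained sketch of the kind of tight REPL loop we prefer: defining a single test with clojure.test and running just the current namespace, rather than invoking the whole suite. The test itself is a toy, for illustration only.

```clojure
(require '[clojure.test :refer [deftest is run-tests]])

;; A toy test, defined and run directly from the REPL.
(deftest addition-holds
  (is (= 4 (+ 2 2))))

;; With no arguments, run-tests runs only the current namespace's tests
;; and returns a summary map such as
;; {:test 1, :pass 1, :fail 0, :error 0, :type :summary}.
(run-tests)
```

Re-evaluating a changed function and re-running one test like this takes milliseconds, which is the feedback loop we wanted the agent to use as well.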

We began to tweak our AGENT.md file, teaching Claude Code how Clojure and its data structures work. With that, we were able to narrow the static verification steps Claude Code ran after each step of its implementation process, and we noticed large improvements in performance and in our iterative flow.

The doldrums of Claude Code

At this point we felt using Claude Code was mostly functional, and we were capable of achieving a certain level of development flow with Claude Code. Claude used our large code base itself as a model of how it should write Clojure code, although we often had to prompt it to do so. I began experimenting with the best ways of prompting the LLM agent. One of the things I discovered is that specifying in detail what the LLM agent should do is critical; you can't leave any ambiguity.

However, we still struggled with certain aspects of how Claude approached software engineering. For example, we noticed that Claude often leapt ahead before looking. Claude would write a bunch of code — potentially several thousand lines over a few minutes — and then attempt to verify it. The problem is that because of hallucinations or misunderstandings, the new code would often contain errors. At that time, roughly six months ago, Claude was poor at debugging these errors.

Additionally, we observed a pattern: when Claude ran into an error, it often tried to solve the problem by adding more complexity, which, as software engineers, we know is rarely the right approach. Consequently, we would see Claude spend a lot of time and tokens debugging a problem it had created, and it couldn't resolve it because the code didn't actually belong in our code base. This leap-before-look behavior severely limited Claude's effectiveness in our code base.

We also noticed some more fundamental flaws with the way Claude—this was Sonnet 3.5 at the time—approached Clojure code. One thing we commonly noticed is that Sonnet defaults to an imperative programming approach, which does not work well in Clojure. For example, we discovered, embedded in a large code change, a doseq with an atom where a map or a reduce would have been preferable. These issues were problematic because Claude can generate a lot of code, which is very difficult to undo. Ultimately, we want the LLM to generate the correct code in the first place. So we faced a dilemma: how do we get better functional Clojure code from the LLM? The next step was to explore avenues for achieving that.
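To illustrate the pattern (with hypothetical function and data names), here is the imperative shape we kept seeing, next to the idiomatic version we wanted. Both compute the same totals-by-status map:

```clojure
;; Anti-pattern: imperative iteration, accumulating into an atom
;; inside doseq. This is how the LLM often wrote it.
(defn total-by-status-imperative [orders]
  (let [acc (atom {})]
    (doseq [{:keys [status amount]} orders]
      (swap! acc update status (fnil + 0) amount))
    @acc))

;; Idiomatic: a single reduce over immutable data, no mutable state.
(defn total-by-status [orders]
  (reduce (fn [acc {:keys [status amount]}]
            (update acc status (fnil + 0) amount))
          {}
          orders))
```

Both return the same result, but the reduce version makes the accumulation explicit and composable, which is what we wanted the LLM to produce in the first place.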

The clojure-mcp revolution

The first step I took was to explore the Clojure MCP tool. It exposes a set of editing and evaluation features that greatly reduce AI hallucinations, and with it we were able to ground the LLM much better in our code base. Clojure MCP's edit functionality was essential for keeping invalid parentheses, syntax errors, and other issues out of our code base.

Clojure MCP fundamentally altered my belief in the ability of LLMs to write effective Clojure code. Previously, I was struggling and frustrated; working with Clojure and LLMs was a constant source of hallucinated functions, invalid syntax, and a generally unpleasant experience, with a lot of rework and misdirection. Clojure MCP really changed that. Thanks a ton to Bruce Hauman for building Clojure MCP and Clojure MCP Light and releasing them as open-source tools.

Skills, Prompts and OpenCode

The next step I took toward better LLM output was to evaluate how Anthropic's skill system works. I also developed a tool I ended up calling Clojure Skills: a SQLite database with a command-line interface that lets the LLM agent and the human search through a set of Anthropic-style skills. These skills inform the LLM agent about patterns, idioms, specific libraries, and so on, in the hope that it would produce much better output and that I would spend far less time debugging. This was also the point when I started experimenting a little with tools like OpenCode.

What's interesting about OpenCode is that it lets you easily define your own system prompt. So at this point I defined a system prompt for building a skill that would dynamically load the library onto the Clojure REPL and build a skill for that library.

Here is an example of using OpenCode with a custom system prompt:

{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "clojure": {
      "description": "Expert Clojure developer with REPL-driven workflow",
      "model": "anthropic/claude-sonnet-4",
      "prompt": "{file:./SYSTEM.md}"
    }
  },
  "default_agent": "clojure"
}

It was during the process of building my own skill system and using those skills day-to-day that I noticed a fundamental issue: over long LLM sessions with large context windows, the knowledge and the skill didn't stay very sticky. The LLM initially uses the correct skills and patterns, but eventually it starts to forget, ignore, and just do its own thing.

After some research I discovered that this problem is documented in the literature. In their 2023 paper "Lost in the Middle: How Language Models Use Long Contexts", Liu et al. demonstrated that LLMs effectively have a U-shaped memory curve: the most recent conversation turns and the system prompt are weighted more heavily than the middle of the conversation. Consequently, when skills are loaded mid-conversation, the LLM often fails to follow them.

I began experimenting within the clojure-skills system with a concept I call "prompt packing." I saw on Reddit that other people were putting all the knowledge of their code base directly into the system prompt. That made immediate sense given what we know about context windows. So I tried both inlining and referencing skills in the system prompt. OpenCode let me carefully curate the skills in my system prompt.

With this approach I achieved a new level of effectiveness: I was able to one-shot more and more tasks with the LLM than I ever could with Claude Code alone.

System Prompt Refinements and Prompt Evals

At this point in our development of Korey, it was time to tweak the system prompt for Korey itself. I was lucky enough to be assigned this task at work, and I spent a couple of weeks learning how people evaluate system prompts and make them more effective for their task. I then drew the connection between my engineering system prompt and a prompt-evaluation system; that connection was key to iterating on and evaluating how we use LLMs and how our system prompt could become more effective. A helpful resource for this kind of work is hamel.dev. Hamel Husain is a great communicator and writer, and he was critical to my understanding of how to refine and define system prompts, especially for working with code. Thanks, Hamel.

At this point, I started working on what eventually became my Clojure system prompt project. This is a prompt we iterated on heavily internally at Shortcut, and I have released it as an open-source project. I hope other Clojure developers will find it a helpful starting point for devising their own system prompts. One thing you may notice is that I use the REPL extensively to establish ground truth (that is, to prevent hallucinations) before the LLM agent writes any code. This was key to transforming an incredibly frustrating experience into a much smoother and more graceful LLM interaction.

Another step I took was to deepen the LLM's ability to use the Clojure platform itself. Clojure is designed to be interacted with from the REPL, which turns out to be a huge advantage when working with an LLM. For example, instead of defining a new skill for each library, I taught the LLMs about clojure.repl, which allows the LLM to dynamically explore any Clojure library on the REPL. Individual skills and lessons became less important, and the platform itself served as a dynamic feedback loop. Here is an example of that:

<discovering-functions>
The REPL is your documentation browser. Explore before coding:

List all public functions in a namespace:
```shell
clj-nrepl-eval -p 7889 "(clojure.repl/dir clojure.string)"
# Output: blank? capitalize ends-with? escape includes? index-of join
#         lower-case replace reverse split split-lines starts-with? trim ...
```

Get detailed documentation for any function:
```shell
clj-nrepl-eval -p 7889 "(clojure.repl/doc map)"
# Output:
# -------------------------
# clojure.core/map
# ([f] [f coll] [f c1 c2] [f c1 c2 c3] [f c1 c2 c3 & colls])
#   Returns a lazy sequence consisting of the result of applying f to
#   the set of first items of each coll, followed by applying f to the
#   set of second items in each coll, until any one of the colls is
#   exhausted. Any remaining items in other colls are ignored...
```

Search for functions by name pattern:
```shell
clj-nrepl-eval -p 7889 "(clojure.repl/apropos \"split\")"
# Output: (clojure.string/split clojure.string/split-lines split-at split-with)
```

Search documentation text:
```shell
clj-nrepl-eval -p 7889 "(clojure.repl/find-doc \"regular expression\")"
# Shows all functions whose documentation mentions "regular expression"
```

Read function source code:
```shell
clj-nrepl-eval -p 7889 "(clojure.repl/source filter)"
# Shows the actual implementation - great for understanding edge cases
```
</discovering-functions>

Despite these breakthroughs, the development flow was still not what I wanted it to be. First, OpenCode seems to consume a large amount of memory. It is a real bummer when you're working with a context window that you have crafted over thirty or forty minutes, and the LLM agent crashes because of the harness. It's a very frustrating experience.

Additionally, I've noticed that OpenCode tries to minimize the output of tool calls. This follows the general industry pattern we see in Claude Code, where certain information is hidden from the engineer. It's unclear to me what the exact design goals are, but I believe they're trying to minimize the information the developer sees so as not to overwhelm them.

However, when you're refining your system prompts and thinking carefully about your LLM interactions, this hiding of information becomes a real hindrance. I want to know that my tool calls are correct, that my edits are clear, and that my system functions as effectively as possible.

At this point I read a blog post about a new agent, pi-agent, and I was hooked.

PI Agent and liberation

Coming from an LLM agent harness like Claude Code that attempts to hide what the LLM is doing to a simple LLM harness that shows you everything felt like a liberating experience to me. Not only that, but pi-agent encourages you to solve your own problems, just like Emacs does. If there's a tool or functionality that you need that pi-agent doesn't provide, you simply build your own plugin. I have written a couple of plugins and used several more from the community.

One plugin I wrote is something to track my energy usage while working with LLMs. Like many, I am deeply concerned about LLMs' effects on our planet. My plugin tracks how much carbon my interactions are generating and compares that to standard car use.

What I value most about pi-agent is its reliability over long contexts. It also clearly indicates whether a tool call succeeded or failed with a green or red output. It's very minimal, does not use sub-agents, and allows me to focus on what my LLM is doing. As I follow along, I can tighten my iteration loop even more.

Current stack and future directions

My current stack is Clojure MCP Light, pi-agent, and my own system prompt. I plan to continue iterating on the Clojure system prompt and develop tools that make working with Clojure from LLMs more effective. I've also begun to experiment more and more with open-weight models, including the excellent Kimi 2.5 model from Moonshot AI, which is effectively my day-to-day driver now along with Sonnet 4.5.

There are a bunch of future directions that I would love to explore if given the time and opportunity. The rise of very effective open-weight models like Kimi 2.5 and MinMax 2.5 opens the possibility to post-train these models on Clojure and on our code base. Theoretically, this could allow us to not even have to use custom system prompts, skills, or specialized tools. We could effectively train our own Kimi Clojure or MinMax Clojure model that would have all of our best practices baked in. Of course, this is theoretical; I've done a little research, and there seems to be strong evidence that this would work. However, we won't really know until we try it out.

I also think it would be worthwhile to begin curating all the tools that individual Clojure developers are building to work with Clojure, and to build an ecosystem and documentation around them.

Conclusion

I think there is a tendency among certain users of LLMs to limit the platforms and languages we use with them. Out of the box, there are good reasons to consider this: if I were working on a greenfield project, I might consider using something like Python. However, I believe software decisions should serve the needs of the people who work on and with the software, not the needs of the LLM that might play a role in creating it.

Ultimately, human beings are responsible for managing these software systems, and human knowledge still, in my view, trumps LLM statistical output. Taking that into consideration, the artifact produced by the LLM process is still a critical part of software development. I prefer to debug and deploy Clojure JVM artifacts, and I know many other organizations do as well.

Also, I think there are important reasons not to abandon all the lessons we've learned over the last twenty years of using Clojure. Clojure itself eliminates a whole set of common engineering problems like statefulness. I don't see the advantage of going back to a language where we have to use mutable state for iteration, for example, or a language where functional idioms are not the default.

My experiences with using an LLM for software engineering and developing an LLM have taught me that, rather than constraining the platforms we use and adopt, the LLM era could actually free us. The idea that LLMs are only good at something because it's in the training set ignores the tools, skills, and system prompts that we can apply post-training.

We can craft and refine LLMs for our specific platform. Not only that, there is something beautiful about LLMs: instead of narrowing how we work, they can allow people to work in very different ways.

I hope this blog post was insightful and informative, and I hope that my experience and our experience at Shortcut can help you craft and shape an LLM experience that solves your particular engineering problems. Thank you for reading.

References

Katzy, J., & Izadi, M. (2023). On the Impact of Language Selection for Training and Evaluating Programming Language Models. arXiv preprint arXiv:2308.13354. https://arxiv.org/abs/2308.13354

Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., & Liang, P. (2023). Lost in the Middle: How Language Models Use Long Contexts. arXiv preprint arXiv:2307.03172. https://arxiv.org/abs/2307.03172


Clojure Notebooks

Code

;; clojure_notebook.clj

(+ 1 1)

(println "Hello World!")

(* 2 (+ 1 1))

(* 6 7)

;; Write code to find if a number is prime

(defn prime? [n]
  (and (> n 1)
       (not (some #(zero? (mod n %)) (range 2 n)))))

(prime? 17)


(prime? 12)

Notes


Week Notes 2026.08

A look at the current state of the art for Clojure web development, alternatives to Datomic, and the static doc site is done.


Reconstructing Biscuit in Clojure

Authority in Agentic Systems

Over the past few months of experimenting with agentic systems, my thinking kept coming back to one question: how does authority actually move between components? That question led me to OCapN and structural authority, and then to interpreting OCapN in cloud-native architectures. Those articles are below.



Most systems answer this with identity. You authenticate, get a role, and a policy engine decides what you can do. This tends to work well when everything is centralized, but distributed systems put pressure on the model. Consider two cases in particular: an agent that needs to make an authorization decision offline, and an agent that needs to delegate a narrow slice of its authority to another agent across a service boundary. In both cases, the token has to carry enough information to be evaluated on its own. Identity alone tends not to be enough for this.

This is the tension I wanted to explore: what if authority were something you carry explicitly, rather than something a central engine derives for you?




Two Mental Models

Identity-first: You prove who you are. A policy engine looks up what you are allowed to do. Delegation means giving someone a role. Restricting authority means writing more policy rules.

Capability-first: You carry a token. The token contains the authority directly. Delegation means giving someone a narrower version of your token. In this model, the token is designed to enforce the constraints.

The difference tends to matter in distributed systems. With identity-first, you typically need the policy engine available at the point of evaluation. With capability-first, the token is designed to be self-contained; you can verify it without calling back to a central service.

Biscuit is one concrete implementation of this model. It is a token format in which the authorization logic (facts, rules, and checks) travels inside the token itself, expressed in a Datalog-style reasoning language.

What Biscuit Does

A Biscuit token contains three things: facts, rules, and checks.

Facts are statements about the world. “Alice has the role of agent.” “Bob is an internal agent who owns a web-search tool.”

Rules define what can be derived from facts. “If a user has the role of agent, and a target is a known internal agent, that user can read that target.”

Checks are conditions that must hold for the token to be valid. “This token must verify that Alice can use Bob’s web-search tool.”

When you verify a Biscuit token, each block is evaluated within its own scope. Facts in one block are not automatically visible to another. Checks in a block are evaluated against only the facts that block can see. If all checks pass and all signatures are valid, the token is valid.

Delegation works by appending a new block. You can add facts or checks, but existing blocks cannot be changed without invalidating the token, because each block is cryptographically signed. Because each block’s checks are evaluated in isolation, a later block cannot bypass a constraint set by an earlier one. In Biscuit's design, that guarantee is structural, not policy-based.
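One way to build intuition for the append-only structure is a simplified hash-chain sketch. To be clear, this is not Biscuit's actual mechanism (Biscuit chains per-block public-key signatures); plain hashing here merely illustrates why an earlier block cannot be altered without invalidating everything after it. All names are illustrative.

```clojure
(import 'java.security.MessageDigest)

(defn digest [s]
  ;; SHA-256 hex digest of a string.
  (let [md (MessageDigest/getInstance "SHA-256")]
    (format "%064x" (BigInteger. 1 (.digest md (.getBytes (str s) "UTF-8"))))))

(defn append-block [token block]
  ;; Chain each new block onto the digest of everything before it.
  (let [prev (or (:tail token) "root")]
    (-> token
        (update :blocks (fnil conj []) block)
        (assoc :tail (digest (str prev block))))))

(defn chain-valid? [token]
  ;; Recompute the chain; tampering with any block changes the tail.
  (= (:tail token)
     (reduce (fn [prev block] (digest (str prev block)))
             "root"
             (:blocks token))))
```

Appending is cheap and never touches earlier blocks, but editing any earlier block changes every digest downstream, so the token no longer verifies. Biscuit's signature chain gives the same structural guarantee, plus proof of who appended each block.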

There are tradeoffs worth considering, and some of them are discussed in the open questions below. The most immediate is complexity: Biscuit uses a Datalog-style reasoning model, and many developers are not familiar with it. The mental model is different from role-based access control or a tool like OPA. This is a real cost.



Rebuilding the Core: Kex

I wanted to understand Biscuit by building a minimal version of it. Not a full implementation. Not production-ready. Just the core ideas, small enough to inspect.

The result is kex, written in Clojure.

Why Clojure? Because facts, rules, and proofs map naturally to immutable maps and vectors. The whole system stays visible. You can evaluate a token in the REPL, inspect the derived facts, and follow the reasoning step by step.

Facts

A fact is a vector:

[:role "alice" :agent]

Facts live inside blocks:

{:facts [[:user "alice"]
         [:role "alice" :agent]]}

Nothing is evaluated yet. This is just structured data.

Rules

A rule describes how to derive new facts:

{:id   :agent-can-read-agents
 :head [:right ?user :read ?agt]
 :body [[:role ?user :agent]
        [:internal-agent ?agt]]}

If [:role "alice" :agent] and [:internal-agent "bob"] exist, this rule derives [:right "alice" :read "bob"]. Rules keep firing until nothing new appears.

Kex implements a minimal Datalog engine using plain Clojure data structures. This tends to keep the system easy to inspect, but recursive rules and negation are not supported. That is a deliberate trade.
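A naive Datalog engine of this kind can be sketched in a few functions over plain data. This is a self-contained illustration in the spirit of kex, not kex's actual API; all names here are mine:

```clojure
(defn lvar? [x]
  ;; Logic variables are symbols whose names start with ?
  (and (symbol? x) (= \? (first (name x)))))

(defn match-pattern [pattern fact bindings]
  ;; Unify one pattern with one fact under existing bindings, or nil.
  (when (= (count pattern) (count fact))
    (reduce (fn [b [p f]]
              (cond
                (lvar? p) (if-some [v (get b p)]
                            (if (= v f) b (reduced nil))
                            (assoc b p f))
                (= p f)   b
                :else     (reduced nil)))
            bindings
            (map vector pattern fact))))

(defn query [facts body]
  ;; All variable bindings that satisfy every pattern in the body.
  (reduce (fn [binding-seq pattern]
            (for [b binding-seq
                  f facts
                  :let [b' (match-pattern pattern f b)]
                  :when b']
              b'))
          [{}]
          body))

(defn apply-rule [facts {:keys [head body]}]
  ;; Instantiate the head once per satisfying binding.
  (set (for [b (query facts body)]
         (mapv #(if (lvar? %) (get b %) %) head))))

(defn derive-all [facts rules]
  ;; Fixpoint: keep firing rules until no new facts appear.
  (let [next-facts (into (set facts) (mapcat #(apply-rule facts %) rules))]
    (if (= next-facts (set facts))
      next-facts
      (recur next-facts rules))))
```

Given the facts and rule from the example above, `derive-all` produces a set containing `[:right "alice" :read "bob"]`, and every intermediate step is an ordinary Clojure value you can inspect in the REPL.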

Checks

A check is a query that must return at least one result:

{:id    :can-read-web-search
 :query '[[:right "alice" :read "web-search"]]}

If the query returns nothing, the token is invalid. In kex, all facts from all blocks are collected first, then all rules are applied to derive new facts, and finally checks are evaluated against the full combined fact set. In the example above, the check is satisfied because the :can-implies-right rule, added in the delegation block, derives [:right "alice" :read "web-search"] from the [:can "alice" :read "web-search"] fact. Biscuit, by contrast, evaluates each block within its own scope; blocks cannot see each other's private facts. Kex does not implement this isolation.

Issuing a Token

The issuer creates the first block. It defines who Alice is and what agents are allowed to do.

(def token
  (kex/issue
    {:facts  [[:user "alice"] 
              [:role "alice" :agent]]
     :rules  '[{:id   :agent-can-read-agents
                :head [:right ?user :read ?agt]
                :body [[:role ?user :agent]
                       [:internal-agent ?agt]]}]
     :checks []}
    {:private-key (:priv keypair)}))

This signs the block and returns a token. The block cannot be changed after this point.

Delegation

A second service appends a new block. It adds facts about what Alice can access, and a rule that derives read rights from those facts. Because kex collects all facts and rules from all blocks into a single pool before evaluation, this block's facts and rules will be combined with the first block's during derivation.

(def delegated-token
  (kex/attenuate
    token
    {:facts  [[:internal-agent "bob"]
              [:can "alice" :read "web-search"]]
     :rules  '[{:id   :can-implies-right
                :head [:right ?user :read ?res]
                :body [[:can ?user :read ?res]]}]
     :checks []}
    {:private-key (:priv keypair)}))

A new block is appended. The old block is untouched.

Adding a Check

A third party appends one more block. It adds nothing but a check. This token is only valid if Alice can access Bob's web-search tool.

(def auth-token
  (kex/attenuate
    delegated-token
    {:facts  []
     :rules  []
     :checks [{:id :can-read-web-search
               :query '[[:right "alice" :read "web-search"]]}]}
    {:private-key (:priv keypair)}))

Verification and Explanation

(kex/verify auth-token {:public-key (:pub keypair)})

(def decision (kex/evaluate auth-token :explain? true))
(:valid? decision)
(:explain decision)

The explain output shows which rules fired and which facts satisfied each check. You can turn this into a graph:

(kex/graph (:explain decision))

In kex, authorization tends to become something you can read, not just trust.



What Kex Does Not Do

Kex does not handle revocation, recursive rules, or the full Biscuit serialization format. It is not performance optimized. Do not use it in production.

It also does not fully enforce attenuation. A new block can add broader facts that expand authority if no check prevents it. In Biscuit, block isolation prevents this: a new block cannot see or override facts from another block’s private scope. In kex, that isolation is not implemented.


The full source is available here: https://github.com/serefayar/kex


Open Questions

Building kex made the capability model concrete, but it also made some hard problems more visible.

Revocation and offline verification are in tension. If a token is self-contained and does not need a central service, how do you invalidate it before it expires? Biscuit has partial answers here, but the problem does not go away. It shifts.
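One common partial answer, which Biscuit supports through per-block revocation identifiers, is to keep the signature check fully offline but consult a shared denylist at verification time. A minimal sketch (the names and the `revoked` set are hypothetical stand-ins for a real shared store):

```python
# Sketch: signature verification stays offline, but revocation reintroduces
# a (lighter-weight) online dependency on some shared denylist.
revoked = {"token-123"}  # stand-in for a database, cache, or pushed list

def verify_with_revocation(token_id: str, signature_valid: bool) -> bool:
    # A token is accepted only if its signature chain holds AND its
    # identifier has not been denylisted.
    return signature_valid and token_id not in revoked

assert verify_with_revocation("token-456", True)
assert not verify_with_revocation("token-123", True)
```

The denylist lookup is cheap compared to issuing and validating sessions centrally, but it is still a lookup: the revocation problem shifts rather than disappears.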

Token size grows with each delegation. In systems with deep delegation chains, this can become a practical concern.

Ecosystem fit is also an open question. Most existing infrastructure expects JWT or OAuth tokens. Biscuit does not slot in easily.

Explainability is useful in small systems. Whether it scales to the rule complexity of a real authorization policy is a different question.

And the bigger question: do capability-first models actually solve distributed authorization, or do they mostly reframe it? I do not have a confident answer. Kex is one small experiment in that direction.


State of Clojure 2025 Results

Recently, we completed the 2025 State of Clojure survey. You can find the full survey results in this report.

In the report and the highlights below, "Clojure" is often used to refer to the whole ecosystem of Clojure and its dialects. When relevant, specific dialects are mentioned by name, such as ClojureScript, Babashka, ClojureCLR, etc.

See the following sections for highlights and selected analysis:

Demographics

80 countries represented

80 different countries were represented by respondents to the State of Clojure Survey!

Responses by country

The Top 10 countries, by count:

1. United States
2. Brazil
3. Germany
4. United Kingdom
5. Finland
6. Sweden
7. France
8. Norway
9. Canada
10. Poland

In fact, the top 4 countries constituted 50.1% of the respondents, so by the numbers, the United States, Brazil, Germany, and the United Kingdom have the same number of Clojure users as the rest of the world.

What if we adjust for population? We can see where Clojure is most concentrated per capita.

1. Finland
2. Norway
3. Sweden
4. Denmark
5. Switzerland
6. Serbia
7. Ireland
8. Netherlands
9. Czech Republic
10. Uruguay

Northern Europe has an especially high concentration of Clojurists.

Responses by Per Capita

Responses for Europe per capita

Also, despite the population differences, Austria, Australia, United States, Brazil, and Canada all have a similar concentration of Clojurists.

82% of Clojure developers have 6 or more years of professional programming experience

Experienced developers continue to be well represented in the Clojure community.

Question: How many years have you been programming professionally?

Professional experience

Clojure attracts developers across a wide range of professional experience

Clojure isn’t just appealing to highly experienced professional developers; it also attracts developers with little to no professional experience. New Clojure developers come from a wide range of professional programming backgrounds.

Professional experience for those with ≤ 1 year of Clojure experience

Professional experience for new Clojurists

Most Clojure developers use Clojure as their primary language

About 2/3 of the respondents use Clojure as their primary language. When Clojure isn’t primary, popularity seems to influence language choice more than a specific language attribute (such as a functional style).

Question: What was your PRIMARY language in the last year?

Top primary languages for Clojurists
Other primary languages for Clojurists

Developer Satisfaction

10% of Clojure developers indicated that they only used Clojure. All others indicated at least one other language they used. This choice, like the primary language, appears to be influenced by popularity, although functional languages (e.g., Elixir, Lisp, Scheme/Racket) appear to be overrepresented relative to their general popularity.

Question: What programming languages have you used in the last year? (select all)

Top languages used with Clojure
Other languages used with Clojure

1 in 10 Clojure developers would quit programming without Clojure

The results below are for developers that selected Clojure and its dialects as their primary language.

Question: If you couldn’t use Clojure, what language would you use instead?

Top alternatives to Clojure
Other alternatives to Clojure

Unsurprisingly, the most popular languages are well represented in the top choices: Java, Python, TypeScript, Go, etc., but notice the functional languages are overrepresented versus their general popularity: Elixir, Common Lisp, Scheme/Racket, Haskell, and Erlang.

The design of the Elixir language was influenced by Clojure, so it makes sense that it would stand out as a Clojure alternative versus other functional languages.

Clojure developers are very likely to recommend Clojure to others.

70% of the respondents said they were very likely to recommend Clojure with only 8% saying they would not.

Question: How likely is it that you would recommend Clojure to a friend or colleague?

Net Promoter Score

Industries and Applications

Survey respondents have nearly as much fun with Clojure (52% for hobbies) as more serious uses (71% for work).

Question: How would you characterize your use of Clojure today? (select all)

Use of Clojure today

Fintech, Enterprise Software, and Healthcare are the top industries for Clojure at over 51% combined.

Clojure is used across a range of industries, but Financial Services, Enterprise Software and Healthcare stand out as the top ones. Fintech is 2.5x more popular for Clojure than Enterprise Software and over 4x more popular than Healthcare.

Question: What primary industry do you develop for?

Top industries
Other industries

Clojure is used at large companies and small companies alike.

16% of Clojurists are solo developers. 55% are in an organization of 100 people or fewer. 26% are in an organization larger than 1000 people; many are likely part of Nubank, the world’s largest digital-only financial services platform, which employs thousands of Clojure developers.

Question: What is your organization size?

Organization Size

New Users

Clojure continues to attract and retain developers.

15% of respondents have used Clojure for one year or less. That’s roughly equivalent to the 16% that have used Clojure 11-15 years. And 3% of the community, Clojure’s earliest adopters, have been using it for 16+ years.

Question: How long have you been using Clojure?

Years of Clojure experience

Using equally sized buckets, it becomes clear that about half the community has 5 or fewer years of Clojure experience and the other half has 6 or more.

Years of Clojure experience

Clojure experience bucketed

Functional programming, work, Lisp heritage, and Rich Hickey’s talks are the top reasons for investigating Clojure.

The survey asked developers with ≤ 1 year of Clojure experience to select all the factors that first prompted them to investigate Clojure.

Question: Why did you first start looking at Clojure? (select all)

Seeking a functional programming language

40.20%

Use at work

39.70%

Seeking a modern LISP

39.20%

Inspired by conference talk or video by Rich Hickey or others

32.16%

Seeking a more concise/expressive language on the JVM

14.57%

Seeking a better language for web / full stack programming

13.07%

Inspired by programming writings by prominent authors

12.56%

Enjoyed the community

9.55%

Seeking a language for safe concurrent programming

8.54%

Introduced by a friend or colleague

8.54%

Inspired by using a tool or framework written in Clojure

7.04%

Other (please specify)

6.53%

Business advantages like leverage, hiring, pay

3.52%

Interested in doing music / art programming

2.51%

Use in a university class

1.01%

Nearly half of new Clojure developers are unfamiliar with structured editing.

Structured editing allows a developer to efficiently edit Clojure code while keeping parentheses and other delimiters balanced. It is especially useful for Lisp-style syntax, where the distance between those delimiters ("on the outside") can span many lines of code.

As you can see below, only 19% of experienced Clojurists don’t use it ("manual") or are "not sure" about structured editing. For the inexperienced group, a full 48% don’t use it or are not sure.

Question: Which method do you use to edit Clojure code while balancing parentheses, brackets, etc? (Structured editing)

Respondents with 2 or more years of Clojure experience

Clojure development environment

Respondents with 1 year or less of Clojure experience

Clojure development environment for new Clojurists

Clojure Dialects and Tools

3 out of 5 respondents indicated they use Babashka, which edged out ClojureScript for the #2 spot for the second year in a row.

Question: Which dialects of Clojure have you used in the last year? (select all)

Top Clojure dialects
Other Clojure dialects

Emacs still holds the top spot overall, but new Clojurists are much more likely to use VS Code with Calva.

Across all respondents, Emacs is the most popular, although there is a near perfect 50-50 split between Emacs + Vim and all the others.

Question: What is your primary Clojure development environment?

Clojure development environment

For Clojure developers with one year or less of Clojure experience, Emacs and VS Code essentially trade places.

Respondents with 1 year or less of Clojure experience

Clojure development environment for new Clojurists

70% of Clojure developers have used AI tools for software development, and 12% are considering it.

The industry-wide surge of AI tooling can be seen in the Clojure community. Although a huge majority of Clojure developers have used AI tooling, an uninterested 18% are quite content without it.

Question: Have you used AI tools for software development?

AI coding tool usage

Final Comments

44% of respondents took the time to express appreciation.

After a very long survey, nearly half of the respondents took even more time to express appreciation for others in the Clojure community. You can read their many, many words of appreciation in the full results of the 2025 survey.

Question: Who do you appreciate in the Clojure community and why?

Appreciative responses

In the spirit of thanks, we would like to thank you again for using Clojure and participating in the survey!

Previous Years

We’re celebrating our 15th State of Clojure Survey! 🎉 🥳

What better way to celebrate than by looking back at the years gone by? You can find the full results for this and prior years at the links below:


Tetris-playing AI the Polylith way - Part 3

Tetris AI

The focus in this third part of the blog series is to implement an algorithm that computes all valid moves for a piece (Tetromino) in its starting position. We are refining our domain model and improving the readability of parts of the codebase, while continuing to implement the code in Clojure and Python using the component-based Polylith architecture.

Earlier parts:

  • Part 1 - Places a piece on a board. Shows the differences between Clojure and Python and creates the piece and board components.
  • Part 2 - Implements clearing of completed rows. Shows how to get fast feedback when working REPL-driven.

The resulting source code from this post:

Tetris Variants

Tetris has been made in several different variants, such as the handheld Game Boy, the Nintendo NES console, and this Atari arcade game, which I played an unhealthy amount of in my younger days at a pool hall that no longer exists!

Each variant behaves slightly differently when it comes to colours, starting positions, rotation behaviour, and so on.

In most Tetris variants, the pieces begin in these rotation states (lying flat) before they start falling:

Tetris pieces

Where on the board the pieces start also varies. For instance, on Nintendo NES and Atari Arcade they start in the fifth x-position, while on Game Boy they start in the fourth:

Start position

In these older versions of Tetris, the pieces rotate only counterclockwise, unlike in some newer games where you can rotate both clockwise and counterclockwise.

The following table compares how pieces rotate across the three mentioned variants:

Rotation table

On Atari, pieces are oriented toward the top-left corner (except the vertical I), while on the other two they mostly rotate around their centre.

In our code, we represent a piece as four [x y] cells:

[[0 1] [1 1] [2 1] [1 2]]

This representation is easy for the code to work with, but poorly communicates the shape of a piece to a human.

The main rule is that code should be written to be easy to understand for the people who read and change it (humans and AI agents).

Let us therefore define a piece like this instead:

(def T0 ['---
         'xxx
         '-x-])

Python:

T0 = [
    "---",
    "xxx",
    "-x-"]

Now we can define all seven pieces and their rotation states for Game Boy (Python code is almost identical):

(ns tetrisanalyzer.piece.settings.game-boy
  (:require [tetrisanalyzer.piece.shape :as shape]))


(def O0 ['----
         '-xx-
         '-xx-
         '----])

(def I0 ['----
         '----
         'xxxx
         '----])

(def I1 ['-x--
         '-x--
         '-x--
         '-x--])

(def Z0 ['---
         'xx-
         '-xx])

(def Z1 ['-x-
         'xx-
         'x--])

(def S0 ['---
         '-xx
         'xx-])

(def S1 ['x--
         'xx-
         '-x-])

(def J0 ['---
         'xxx
         '--x])

(def J1 ['-xx
         '-x-
         '-x-])

(def J2 ['x--
         'xxx
         '---])

(def J3 ['-x-
         '-x-
         'xx-])

(def L0 ['---
         'xxx
         'x--])

(def L1 ['-x-
         '-x-
         '-xx])

(def L2 ['--x
         'xxx
         '---])

(def L3 ['xx-
         '-x-
         '-x-])

(def T0 ['---
         'xxx
         '-x-])

(def T1 ['-x-
         '-xx
         '-x-])

(def T2 ['-x-
         'xxx
         '---])

(def T3 ['-x-
         'xx-
         '-x-])

(def pieces [[O0]
             [I0 I1]
             [Z0 Z1]
             [S0 S1]
             [J0 J1 J2 J3]
             [L0 L1 L2 L3]
             [T0 T1 T2 T3]])

(def shapes (shape/shapes pieces))

The shapes function at the end converts the pieces into the format the code uses:

[;; O
 [[[1 1] [2 1] [1 2] [2 2]]]
 ;; I
 [[[0 2] [1 2] [2 2] [3 2]]
  [[1 0] [1 1] [1 2] [1 3]]]
 ;; Z
 [[[0 1] [1 1] [1 2] [2 2]]
  [[1 0] [0 1] [1 1] [0 2]]]
 ;; S
 [[[1 1] [2 1] [0 2] [1 2]]
  [[0 0] [0 1] [1 1] [1 2]]]
 ;; J
 [[[0 1] [1 1] [2 1] [2 2]]
  [[1 0] [2 0] [1 1] [1 2]]
  [[0 0] [0 1] [1 1] [2 1]]
  [[1 0] [1 1] [0 2] [1 2]]]
 ;; L
 [[[0 1] [1 1] [2 1] [0 2]]
  [[1 0] [1 1] [1 2] [2 2]]
  [[2 0] [0 1] [1 1] [2 1]]
  [[0 0] [1 0] [1 1] [1 2]]]
 ;; T
 [[[0 1] [1 1] [2 1] [1 2]]
  [[1 0] [1 1] [2 1] [1 2]]
  [[1 0] [0 1] [1 1] [2 1]]
  [[1 0] [0 1] [1 1] [1 2]]]]

The test for the shape function looks like this:

(ns tetrisanalyzer.piece.shape-test
  (:require [clojure.test :refer :all]
            [tetrisanalyzer.piece.shape :as shape]))

(deftest converts-a-piece-shape-grid-to-a-vector-of-xy-cells
  (is (= [[2 0]
          [1 1]
          [2 1]
          [1 2]]
         (shape/shape ['--x-
                       '-xx-
                       '-x--
                       '----]))))

Python:

from tetrisanalyzer.piece.shape import shape


def test_converts_a_piece_shape_grid_to_a_list_of_xy_cells():
    assert [[2, 0],
            [1, 1],
            [2, 1],
            [1, 2]] == shape(["--x-",
                              "-xx-",
                              "-x--",
                              "----"])

Implementation in Clojure:

(ns tetrisanalyzer.piece.shape)

(defn cell [x character y]
  (when (= \x character)
    [x y]))

(defn row-cells [y row]
  (keep-indexed #(cell %1 %2 y)
                (str row)))

(defn shape [piece-grid]
  (vec (mapcat identity
               (map-indexed row-cells piece-grid))))

(defn shapes [piece-grids]
  (mapv #(mapv shape %)
        piece-grids))

If you are new to Clojure, here are some explanatory examples of a couple of the functions:

(map-indexed vector ["I" "love" "Tetris"])

;; ([0 "I"] [1 "love"] [2 "Tetris"])

The map-indexed function iterates over "I", "love", and "Tetris", and builds a new list where each element is created by calling vector with the index and the element, which is equivalent to:

(list (vector 0 "I")
      (vector 1 "love")
      (vector 2 "Tetris"))

;; ([0 "I"] [1 "love"] [2 "Tetris"])

The function keep-indexed works in the same way, but only keeps values that aren't nil, hence the use of when:

;; %1 = first argument (index)
;; %2 = second argument (value)
(keep-indexed #(when %2 [%1 %2]) 
              ["I" nil "Tetris"])

;; ([0 "I"] [2 "Tetris"])

Implementation in Python:

def shape(piece_grid):
    return [
        [x, y]
        for y, row in enumerate(piece_grid)
        for x, ch in enumerate(row)
        if ch == "x"]

def shapes(pieces_grids):
    return [
        [shape(piece_grid) for piece_grid in piece_grids]
        for piece_grids in pieces_grids]

Here we use list comprehension to convert the data into [x, y] cells. The enumerate function is equivalent to Clojure’s map-indexed, in that it adds an index (0, 1, 2, …) to each element.

Domain Modelling

The new code that calculates the valid moves for a piece in its starting position has to live somewhere. We need to be able to move and rotate a piece, and check whether the target position on the board is free.

In object-oriented programming we have several options. We could write piece.set(board), board.set(piece), or maybe move.set(piece, board), while making every effort not to expose the internal representation.

In functional programming, we have more freedom and don't try to hide how we represent our data. The fact that the board is stored as a two-dimensional vector is no secret, and it isn’t just board that can create updated copies of this two-dimensional vector.

Code usually belongs where we expect to find it. We have the function set-piece, which, according to this reasoning, should live in piece, so I moved it from board where I'd put it earlier. The new placements function also goes in piece, since it's about finding valid moves for a piece. Our domain model now looks like this:

Components

Inside each component we list what belongs to its interface (what's public), and the arrow shows that piece calls functions in board.

We split the implementation across the namespaces move, placement, and visit, which we put in the move package:

▾ tetris-polylith
  ▸ bases
  ▾ components
    ▸ board
    ▾ piece
      ▾ src
        ▸ settings
        ▾ move
          move.clj
          placement.clj
          visit.clj
        bitmask.clj
        interface.clj
        piece.clj
        shape.clj
      ▾ test
        ▾ move
          move_test.clj
          placement_test.clj
          visit_test.clj
        piece_test.clj
        shape_test.clj
  ▸ development
  ▸ projects

The move-test looks like this:

(ns tetrisanalyzer.piece.move.move-test
  (:require [clojure.test :refer :all]
            [tetrisanalyzer.piece.piece :as piece]
            [tetrisanalyzer.piece.move.move :as move]
            [tetrisanalyzer.piece.bitmask :as bitmask]
            [tetrisanalyzer.board.interface :as board]
            [tetrisanalyzer.piece.settings.atari-arcade :as atari-arcade]))

(def x 2)
(def y 1)
(def rotation 0)
(def S piece/S)
(def shapes atari-arcade/shapes)
(def bitmask (bitmask/rotation-bitmask shapes S))
(def piece (piece/piece S rotation shapes))

(def board (board/board ['xxxxxxxx
                         'xxx--xxx
                         'xx--xxxx
                         'xxxxxxxx]))

(deftest valid-move
  (is (= true
         (move/valid-move? board x y S rotation shapes))))

(deftest valid-left-move
  (is (= [2 1 0]
         (move/left board (inc x) y S rotation nil shapes))))

(deftest invalid-left-move
  (is (= nil
         (move/left board x y S rotation nil shapes))))

(deftest valid-right-move
  (is (= [2 1 0]
         (move/right board (dec x) y S rotation nil shapes))))

(deftest invalid-right-move
  (is (= nil
         (move/right board x (dec y) S rotation nil shapes))))

(deftest unoccupied-down-move
  (is (= [[2 1 0] nil]
         (move/down board x (dec y) S rotation nil shapes))))

(deftest down-move-hits-ground
  (is (= [nil [[2 1 0]]]
         (move/down board x y S rotation nil shapes))))

(deftest valid-rotation
  (is (= [2 1 0]
         (move/rotate board x y S (dec rotation) bitmask shapes))))

(deftest invalid-rotation-without-kick
  (is (= nil
         (move/rotate board (inc x) y S (inc rotation) bitmask shapes))))

(deftest valid-rotation-with-kick
  (is (= [2 1 0]
         (move/rotate-with-kick board (inc x) y S (inc rotation) bitmask shapes))))

(deftest invalid-move-outside-board
  (is (= false
         (move/valid-move? board 10 -10 S rotation shapes))))

The first test, valid-move, checks that the S piece:

['-xx
 'xx-]

Can be placed at position x=2, y=1, on the board:

['xxxxxxxx
 'xxx--xxx
 'xx--xxxx
 'xxxxxxxx]

Beyond that, we test various valid moves and rotations into the empty area, plus invalid moves outside the board.

In Tetris there's something called kick, or wall kick. When you rotate a piece and that position is occupied on the board, one step left is also tried (x-1). On Nintendo NES this is turned off, while it's enabled in the other two variants we support here. In newer Tetris games, other placements besides x-1 are sometimes tested as well.

The implementation looks like this:

(ns tetrisanalyzer.piece.move.move
  (:require [tetrisanalyzer.piece.piece :as piece]))

(defn cell [board x y [cx cy]]
  (or (get-in board [(+ y cy) (+ x cx)])
      piece/X))

(defn valid-move? [board x y p rotation shapes]
  (every? zero?
          (map #(cell board x y %)
               (piece/piece p rotation shapes))))

(defn left [board x y p rotation _ shapes]
  (when (valid-move? board (dec x) y p rotation shapes)
    [(dec x) y rotation]))

(defn right [board x y p rotation _ shapes]
  (when (valid-move? board (inc x) y p rotation shapes)
    [(inc x) y rotation]))

(defn down
  "Returns [down-move placement] where:
   - down-move: next move when moving down or nil if blocked
   - placement: final placement if blocked, or nil if can move down"
  [board x y p rotation _ shapes]
  (if (valid-move? board x (inc y) p rotation shapes)
    [[x (inc y) rotation] nil]
    [nil [[x y rotation]]]))

(defn rotate [board x y p rotation bitmask shapes]
  (let [new-rotation (bit-and (inc rotation) bitmask)]
    (when (valid-move? board x y p new-rotation shapes)
      [x y new-rotation])))

(defn rotate-with-kick [board x y p rotation bitmask shapes]
  (or (rotate board x y p rotation bitmask shapes)
      (rotate board (dec x) y p rotation bitmask shapes)))

(defn rotation-fn [rotation-kick?]
  (if rotation-kick?
    rotate-with-kick
    rotate))

The functions are fairly straightforward, so let us instead look at the code that helps us keep track of which moves have already been visited:

(ns tetrisanalyzer.piece.move.visit)

(defn visited? [visited-moves x y rotation]
  (if-let [visited-rotations (get-in visited-moves [y x])]
    (not (zero? (bit-and visited-rotations
                         (bit-shift-left 1 rotation))))
    true)) ;; Cells outside the board are treated as visited

(defn visit [visited-moves x y rotation]
  (assoc-in visited-moves [y x] (bit-or (get-in visited-moves [y x])
                                        (bit-shift-left 1 rotation))))

Calling the standard bit-shift-left function returns a set bit in one of the four lowest bits:

rotation  bit
0         0001
1         0010
2         0100
3         1000

These "flags" are used to mark that we've visited a given [x y rotation] move on the board. Note that we pass a "visited board" (visited-moves) into visit and get back a copy where the [x y] cell has a bit set for the given rotation. This “copying” is very fast and memory-efficient; see “structural sharing” under Data Structures.

The tests look like the following:

(ns tetrisanalyzer.piece.move.visit-test
  (:require [clojure.test :refer :all]
            [tetrisanalyzer.piece.move.visit :as visit]))

(def x 2)
(def y 1)
(def rotation 3)
(def unvisited [[0 0 0 0]
                [0 0 0 0]])

(deftest move-is-not-visited
  (is (= false
         (visit/visited? unvisited x y rotation))))

(deftest move-is-visited
  (let [visited (visit/visit unvisited x y rotation)]
    (is (= true
           (visit/visited? visited x y rotation)))))

Python:

from tetrisanalyzer.piece.move.visit import is_visited, visit

X = 2
Y = 1
ROTATION = 3
UNVISITED = [
    [0, 0, 0, 0],
    [0, 0, 0, 0]]


def test_move_is_not_visited():
    assert is_visited(UNVISITED, X, Y, ROTATION) is False


def test_move_is_visited():
    visited = [row[:] for row in UNVISITED]
    visit(visited, X, Y, ROTATION)
    assert is_visited(visited, X, Y, ROTATION) is True
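The post does not show the Python visit module itself; a minimal implementation consistent with the tests above (hypothetical; the real source is in the linked repository) might look like this. It mutates the grid in place, which is why the test copies the grid first:

```python
# Sketch of tetrisanalyzer/piece/move/visit.py, inferred from its tests.
# visited_moves is a grid of bitmasks: bit r of cell [y][x] is set when
# rotation r has been visited at that position.

def is_visited(visited_moves, x, y, rotation):
    if y < 0 or y >= len(visited_moves) or x < 0 or x >= len(visited_moves[y]):
        return True  # cells outside the board are treated as visited
    return (visited_moves[y][x] & (1 << rotation)) != 0

def visit(visited_moves, x, y, rotation):
    # Set the bit for this rotation, mutating the grid in place.
    visited_moves[y][x] |= 1 << rotation

grid = [[0, 0, 0, 0],
        [0, 0, 0, 0]]
assert is_visited(grid, 2, 1, 3) is False
visit(grid, 2, 1, 3)
assert is_visited(grid, 2, 1, 3) is True
```

Unlike the Clojure version, which returns an updated persistent copy, the Python sketch follows the usual Python idiom of mutating the list of lists directly.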

We have now laid the groundwork to implement the placements function that computes all valid moves for a piece in its starting position.

We start with the test:

(ns tetrisanalyzer.piece.move.placement-test
  (:require [clojure.test :refer :all]
            [tetrisanalyzer.piece.piece :as piece]
            [tetrisanalyzer.piece.move.placement :as placement]
            [tetrisanalyzer.piece.settings.atari-arcade :as atari-arcade]))

(def start-x 2)
(def sorter (juxt second first last))

(def board [[0 0 0 0 0 0]
            [0 0 1 1 0 0]
            [0 0 1 0 0 1]
            [0 0 1 1 1 1]])

(def shapes atari-arcade/shapes)

;; Start position of the J piece:
;; --JJJ-
;; --xxJ-
;; --x--x
;; --xxxx
(deftest placements--without-rotation-kick
  (is (= [[2 0 0]
          [3 0 0]]
         (sort-by sorter (placement/placements board piece/J start-x false shapes)))))

;; With rotation kick, checking if x-1 fits:
;; -JJ---
;; -Jxx--
;; -Jx--x
;; --xxxx
(deftest placements--with-rotation-kick
  (is (= [[1 0 1]
          [2 0 0]
          [3 0 0]
          [0 1 1]]
         (sort-by sorter (placement/placements board piece/J start-x true shapes)))))

This tests that we get back the valid [x y rotation] positions where a piece can be placed on the board from its starting position.

The implementation:

(ns tetrisanalyzer.piece.move.placement
  (:require [tetrisanalyzer.piece.move.move :as move]
            [tetrisanalyzer.piece.move.visit :as visit]
            [tetrisanalyzer.board.interface :as board]
            [tetrisanalyzer.piece.bitmask :as bitmask]))

(defn ->placements [board x y p rotation bitmask valid-moves visited-moves rotation-fn shapes]
  (loop [next-moves (list [x y rotation])
         placements []
         valid-moves valid-moves
         visited-moves visited-moves]
    (if-let [[x y rotation] (first next-moves)]
      (let [next-moves (rest next-moves)]
        (if (visit/visited? visited-moves x y rotation)
          (recur next-moves placements valid-moves visited-moves)
          (let [[down placement] (move/down board x y p rotation bitmask shapes)
                moves (keep #(% board x y p rotation bitmask shapes)
                            [move/left
                             move/right
                             rotation-fn
                             (constantly down)])]
            (recur (into next-moves moves)
                   (concat placements placement)
                   (conj valid-moves [x y rotation])
                   (visit/visit visited-moves x y rotation)))))
      placements)))

(defn placements [board p x kick? shapes]
  (let [y 0
        rotation 0
        bitmask (bitmask/rotation-bitmask shapes p)
        visited-moves (board/empty-board board)
        rotation-fn (move/rotation-fn kick?)]
    (if (move/valid-move? board x y p rotation shapes)
      (->placements board x y p rotation bitmask [] visited-moves rotation-fn shapes)
      [])))

Let us walk through the following section in ->placements:

(loop [next-moves (list [x y rotation])
       placements []
       valid-moves valid-moves
       visited-moves visited-moves]

These four lines initialise the data we're looping over: next-moves is the list of moves we need to process (it grows and shrinks as we go), and placements accumulates valid moves.

Since Clojure doesn't perform automatic tail-call optimisation, we use loop/recur instead, which reuses the same stack frame and avoids stack overflow on boards larger than 10×20.
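Python behaves the same way, so the concern translates directly. A minimal sketch of the difference (the function names here are invented for illustration):

```python
def count_down_recursive(n):
    # No tail-call optimisation: each call adds a stack frame,
    # so a large n raises RecursionError.
    return n if n == 0 else count_down_recursive(n - 1)

def count_down_loop(n):
    # Constant stack depth, like Clojure's loop/recur.
    while n > 0:
        n -= 1
    return n

count_down_loop(10 ** 6)  # → 0, no matter how large n gets
```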

(if-let [[x y rotation] (first next-moves)]

Retrieves the next move from next-moves and continues with the code immediately after, or returns placements (the last line in the function, representing all valid placements) if next-moves is empty.

(let [next-moves (rest next-moves)]

Drops the first element from next-moves, the one we just picked.

(if (visit/visited? visited-moves x y rotation)

If we've already visited this move, continue with:

(recur next-moves placements valid-moves visited-moves)

Which continues our search for valid moves (the line after (loop [...])) by moving on to the next move to evaluate.

Otherwise, if the move hasn't been visited, we do:

(let [[down placement] (move/down board x y p rotation bitmask shapes)
     ...]

This binds down to the next downward move (when the square below is free) and placement to the final resting position when we can't move down any further, which happens when we hit the bottom or when part of the "stack" is in the way.

For these lines:

(keep #(% board x y p rotation bitmask shapes)
      [move/left
       move/right
       rotation-fn
       (constantly down)])

Here, % is replaced in turn by each function in the vector, so the expression is equivalent to:

[(move/left board x y p rotation bitmask shapes)
 (move/right board x y p rotation bitmask shapes)
 (rotation-fn board x y p rotation bitmask shapes)
 down]

These function calls generate all possible moves (including rotations), returning [x y rotation] for positions that are free on the board, or nil if occupied. The keep function filters out nil values, leaving only valid moves in moves.
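For readers less familiar with Clojure, keep maps a function over a collection and drops nil results in one pass. A rough Python equivalent (the helper name is made up for illustration):

```python
def keep(f, xs):
    # Apply f to each element, keeping only non-None results,
    # like Clojure's (keep f xs).
    return [y for y in (f(x) for x in xs) if y is not None]

keep(lambda n: n * 2 if n % 2 == 0 else None, [1, 2, 3, 4])  # → [4, 8]
```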

Finally we execute:

(recur (into next-moves moves)
       (concat placements placement)
       (conj valid-moves [x y rotation])
       (visit/visit visited-moves x y rotation))

Which calls loop again with:

  • next-moves updated with any new moves
  • placements updated with any valid placement
  • valid-moves updated with the current move
  • visited-moves with the current move marked as visited

This keeps going until next-moves is empty, and then we return placements.

The function that kicks everything off and returns valid moves for a piece in its starting position:

(defn placements [board p x kick? shapes]
  (let [y 0
        rotation 0
        bitmask (bitmask/rotation-bitmask shapes p)
        visited-moves (board/empty-board board)
        rotation-fn (move/rotation-fn kick?)]
    (if (move/valid-move? board x y p rotation shapes)
      (->placements board x y p rotation bitmask [] visited-moves rotation-fn shapes)
      [])))

  • board: a two-dimensional vector representing the board, usually 10×20.
  • p: piece index (0, 1, 2, 3, 4, 5, or 6).
  • x: which column the 4x4 grid starts in (where the piece sits). First column is 0.
  • y: set to 0 (top row for the 4x4 grid).
  • rotation: set to 0 (starting rotation).
  • bitmask: used when iterating over rotations so that it wraps back to 0 after reaching the maximum number of rotations it can perform.
  • visited-moves: has the same structure as a board, a two-dimensional array, usually 10×20.
  • rotation-fn: returns the right rotation function depending on whether kick is enabled. Also tries position x-1 if kick? is true.
  • shapes: the shapes for all pieces and their rotation states, stored as [x y] cells.
  • (if (move/valid-move? board x y p rotation shapes): we need to check whether the initial position is free; if not, return an empty vector.
  • (->placements board x y p rotation bitmask [] visited-moves rotation-fn shapes) computes the valid moves.

Implementation in Python:

from collections import deque

from tetrisanalyzer import board as board_ifc
from tetrisanalyzer.piece import piece
from tetrisanalyzer.piece.bitmask import rotation_bitmask
from tetrisanalyzer.piece.move import move
from tetrisanalyzer.piece.move import visit

def _placements(board, x, y, p, rotation, bitmask, valid_moves, visited_moves, rotation_move_fn, shapes):
    next_moves = deque([[x, y, rotation]])
    valid_placements = []

    while next_moves:
        x, y, rotation = next_moves.popleft()

        if visit.is_visited(visited_moves, x, y, rotation):
            continue

        down_move, placement = move.down(board, x, y, p, rotation, bitmask, shapes)

        moves = [
            move.left(board, x, y, p, rotation, bitmask, shapes),
            move.right(board, x, y, p, rotation, bitmask, shapes),
            rotation_move_fn(board, x, y, p, rotation, bitmask, shapes),
            down_move]

        moves = [m for m in moves if m is not None]

        next_moves.extend(moves)

        if placement is not None:
            valid_placements.extend(placement)

        valid_moves.append([x, y, rotation])
        visit.visit(visited_moves, x, y, rotation)

    return valid_placements


def placements(board, p, start_x, kick, shapes):
    y = 0
    rotation = 0
    bitmask = rotation_bitmask(shapes, p)
    visited_moves = board_ifc.empty_board(board_ifc.width(board), board_ifc.height(board))
    rotation_move_fn = move.rotation_fn(kick)

    if not move.is_valid_move(board, start_x, y, p, rotation, shapes):
        return []

    return _placements(board, start_x, y, p, rotation, bitmask, [], visited_moves, rotation_move_fn, shapes)

The code follows the same algorithm as in Clojure. We use deque because popleft is O(1), whereas popping from the front of a list is O(n).
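The difference is easy to demonstrate: a deque removes from the front without shifting the remaining elements, while list.pop(0) moves every element one slot left.

```python
from collections import deque

d = deque([1, 2, 3])
first = d.popleft()   # O(1); list.pop(0) would be O(n)
d.extend([4, 5])      # append several items at the back
list(d)               # → [2, 3, 4, 5]
```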

Testing

Finally, we run our tests:

$> cd ~/source/tetrisanalyzer/langs/clojure/tetris-polylith
$> poly test :dev
Projects to run tests from: development

Running tests for the development project using test runner: Polylith built-in clojure.test runner...
Running tests from the development project, including 2 bricks: board, piece

Testing tetrisanalyzer.board.core-test

Ran 1 tests containing 1 assertions.
0 failures, 0 errors.

Test results: 1 passes, 0 failures, 0 errors.

Testing tetrisanalyzer.board.clear-rows-test

Ran 1 tests containing 1 assertions.
0 failures, 0 errors.

Test results: 1 passes, 0 failures, 0 errors.

Testing tetrisanalyzer.board.grid-test

Ran 2 tests containing 2 assertions.
0 failures, 0 errors.

Test results: 2 passes, 0 failures, 0 errors.

Testing tetrisanalyzer.piece.shape-test

Ran 1 tests containing 1 assertions.
0 failures, 0 errors.

Test results: 1 passes, 0 failures, 0 errors.

Testing tetrisanalyzer.piece.move.placement-test

Ran 2 tests containing 2 assertions.
0 failures, 0 errors.

Test results: 2 passes, 0 failures, 0 errors.

Testing tetrisanalyzer.piece.move.move-test

Ran 11 tests containing 11 assertions.
0 failures, 0 errors.

Test results: 11 passes, 0 failures, 0 errors.

Testing tetrisanalyzer.piece.move.visit-test

Ran 2 tests containing 2 assertions.
0 failures, 0 errors.

Test results: 2 passes, 0 failures, 0 errors.

Testing tetrisanalyzer.piece.piece-test

Ran 1 tests containing 1 assertions.
0 failures, 0 errors.

Test results: 1 passes, 0 failures, 0 errors.

Execution time: 0 seconds

Python:

$> cd ~/source/tetrisanalyzer/langs/python/tetris-polylith-uv
$> uv run pytest
======================================================================================================= test session starts ========================================================================================================
platform darwin -- Python 3.13.11, pytest-9.0.2, pluggy-1.6.0
rootdir: /Users/tengstrand/source/tetrisanalyzer/langs/python/tetris-polylith-uv
configfile: pyproject.toml
collected 21 items

test/components/tetrisanalyzer/board/test_clear_rows.py .                                                                                                                                                                    [  4%]
test/components/tetrisanalyzer/board/test_core.py ..                                                                                                                                                                         [ 14%]
test/components/tetrisanalyzer/board/test_grid.py ..                                                                                                                                                                         [ 23%]
test/components/tetrisanalyzer/piece/move/test_move.py ...........                                                                                                                                                           [ 76%]
test/components/tetrisanalyzer/piece/move/test_placement.py ..                                                                                                                                                               [ 85%]
test/components/tetrisanalyzer/piece/move/test_visit.py ..                                                                                                                                                                   [ 95%]
test/components/tetrisanalyzer/piece/test_shape.py .                                                                                                                                                                         [100%]

======================================================================================================== 21 passed in 0.02s ========================================================================================================

Nice, all tests passed!

Summary

In this third post, I took on the not entirely trivial task of computing all valid moves for a piece in its starting position.

I avoided implementing it as a recursive algorithm, since that would limit how large our boards can get.

We reminded ourselves that code should live where we expect to find it.

We also took the opportunity to make the code easier to work with, by specifying pieces in a more readable way, and with that change we could easily support three different Tetris variants.

Hope you had just as much fun as I did 😃

Happy Coding!

Permalink

Babashka 1.12.215: Revenge of the TUIs

Babashka is a fast-starting native Clojure scripting runtime. It uses SCI to interpret Clojure and compiles to a native binary via GraalVM, giving you Clojure's power with near-instant startup. It's commonly used for shell scripting, build tooling, and small CLI applications. If you don't yet have bb installed, you can install it with brew:

brew install borkdude/brew/babashka

or bash:

bash <(curl -s https://raw.githubusercontent.com/babashka/babashka/master/install)

This release is, in my opinion, a game changer. With JLine3 bundled, you can now build full terminal user interfaces in babashka. The bb repl has been completely overhauled with multi-line editing, completions, and eldoc. deftype now supports map interfaces, making bb more compatible with existing libraries like core.cache. SCI has had many small improvements, making riddley compatible too. Riddley is used in Cloverage, a code coverage library for Clojure, which now also works with babashka (Cloverage PR pending).

Babashka conf 2026

But first, let me mention an exciting upcoming event! Babashka conf is happening for the second time! The first edition was in Berlin in 2023; this time it's in Amsterdam. The Call for Proposals is open until the end of February, so there is still time to submit your talk or workshop. We are also looking for one last gold sponsor (500 euros) to cover all costs.

Highlights

JLine3 and TUI support

Babashka now bundles JLine3, a Java library for building interactive terminal applications. You get terminals, line readers with history and tab completion, styled output, keyboard bindings, and the ability to reify custom completers, parsers, and widgets — all from bb scripts.

JLine3 works on all platforms, including Windows PowerShell and cmd.exe.

Here's a simple interactive prompt that reads lines from the user until EOF (Ctrl+D):

(import '[org.jline.terminal TerminalBuilder]
        '[org.jline.reader LineReaderBuilder])

(let [terminal (-> (TerminalBuilder/builder) (.build))
      reader   (-> (LineReaderBuilder/builder)
                   (.terminal terminal)
                   (.build))]
  (try
    (loop []
      (when-let [line (.readLine reader "prompt> ")]
        (println "You typed:" line)
        (recur)))
    (catch org.jline.reader.EndOfFileException _
      (println "Goodbye!"))
    (finally
      (.close terminal))))

babashka.terminal namespace

A new babashka.terminal namespace exposes a tty? function to detect whether stdin, stdout, or stderr is connected to a terminal:

(require '[babashka.terminal :refer [tty?]])

(when (tty? :stdout)
  (println "Interactive terminal detected, enabling colors"))

This accepts :stdin, :stdout, or :stderr as its argument. It uses JLine3's terminal provider under the hood.

This is useful for scripts that want to behave differently when piped vs. run interactively, for example enabling colored output or progress bars only in a terminal.
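The same piped-vs-interactive check exists in most languages; in Python, for instance, it is isatty() on the stream (the use_colors helper below is invented for illustration):

```python
import sys

def use_colors(stream=sys.stdout):
    # True only when the stream is attached to an interactive terminal;
    # False when output is piped or redirected.
    return stream.isatty()
```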

charm.clj compatibility

charm.clj is a new Clojure library for building terminal user interfaces using the Elm architecture (Model-Update-View). It provides components like spinners, text inputs, lists, paginators, and progress bars, with support for ANSI/256/true color styling and keyboard/mouse input handling.
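The Elm architecture boils down to a pure update function (message + state → new state) and a pure view function (state → rendered output) driven by a message loop. A minimal language-agnostic sketch in Python (the names are invented and bear no relation to charm.clj's actual API):

```python
def update(state, msg):
    # Pure state transition: returns a new state, never mutates.
    if msg == "inc":
        return {**state, "count": state["count"] + 1}
    if msg == "dec":
        return {**state, "count": state["count"] - 1}
    return state

def view(state):
    # Pure rendering: state in, string out.
    return f"Counter: {state['count']}"

state = {"count": 0}
for msg in ["inc", "inc", "dec"]:
    state = update(state, msg)
view(state)  # → "Counter: 1"
```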

charm.clj is now compatible with babashka (or rather, babashka is now compatible with charm.clj), enabled by the combination of JLine3 support and other interpreter improvements in this release. This means you can build rich TUI applications that start instantly as native binaries.

Here&aposs a complete counter example you can save as a single file and run with bb:

#!/usr/bin/env bb

(babashka.deps/add-deps
 '{:deps {io.github.TimoKramer/charm.clj {:git/sha "cf7a6c2fcfcccc44fcf04996e264183aa49a70d6"}}})

(require '[charm.core :as charm])

(def title-style
  (charm/style :fg charm/magenta :bold true))

(def count-style
  (charm/style :fg charm/cyan
               :padding [0 1]
               :border charm/rounded-border))

(defn update-fn [state msg]
  (cond
    (or (charm/key-match? msg "q")
        (charm/key-match? msg "ctrl+c"))
    [state charm/quit-cmd]

    (or (charm/key-match? msg "k")
        (charm/key-match? msg :up))
    [(update state :count inc) nil]

    (or (charm/key-match? msg "j")
        (charm/key-match? msg :down))
    [(update state :count dec) nil]

    :else
    [state nil]))

(defn view [state]
  (str (charm/render title-style "Counter App") "\n\n"
       (charm/render count-style (str (:count state))) "\n\n"
       "j/k or arrows to change\n"
       "q to quit"))

(charm/run {:init {:count 0}
            :update update-fn
            :view view
            :alt-screen true})
charm.clj counter example running in babashka

More examples can be found here.

Deftype with map interfaces

Until now, deftype in babashka couldn't implement JVM interfaces like IPersistentMap, ILookup, or Associative. This meant libraries that define custom map-like types, a very common Clojure pattern, couldn't work in babashka.

Starting with this release, deftype supports map interfaces. Your deftype must declare IPersistentMap to signal that you want a full map type. Other map-related interfaces like ILookup, Associative, Counted, Seqable, and Iterable are accepted freely since the underlying class already implements them.
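The "declare a couple of interfaces and get map behaviour for free" idea has a close Python analogue: implementing the collections.abc.Mapping ABC, which derives the remaining dict-like protocol from three required methods (the UpperMap class here is a toy example, not related to babashka):

```python
from collections.abc import Mapping

class UpperMap(Mapping):
    """Toy read-only map with case-insensitive keys."""
    def __init__(self, data):
        self._data = {k.upper(): v for k, v in data.items()}
    def __getitem__(self, key):
        return self._data[key.upper()]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

# Mapping derives __contains__, keys, items, get, etc. from the three
# methods above, much like the underlying class supplies ILookup/Counted.
m = UpperMap({"a": 1})
m["A"]  # → 1
```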

This unlocks several libraries that were previously incompatible:

  • core.cache: all cache types (BasicCache, FIFOCache, LRUCache, TTLCache, LUCache) work unmodified
  • linked: insertion-ordered maps and sets

Riddley and Cloverage compatibility

Riddley is a Clojure library for code walking that many other libraries depend on. Previously, SCI's deftype and case did not macroexpand to the same special forms as JVM Clojure, which broke riddley's walker. Several changes now align SCI's behavior with Clojure: deftype macroexpands to deftype*, case to case*, and macroexpand-1 now accepts an optional env map as second argument (inspired by how the CLJS analyzer API works). Together these changes enable riddley and tools built on it, like cloverage and Specter, to work with bb.

Riddley has moved to clj-commons; thanks to Zach Tellman for transferring it. I'd like to thank Zach for all his contributions to the Clojure community over the years. Version 0.2.2 includes bb compatibility, which was one of the first PRs merged after the transfer. Cloverage compatibility has been submitted upstream; all 75 cloverage tests pass on both JVM and babashka.

Console REPL improvements

The bb repl experience has been significantly improved with JLine3 integration. You no longer need rlwrap to get a comfortable console REPL:

  • Multi-line editing: the REPL detects incomplete forms and continues reading on the next line with a #_=> continuation prompt
  • Tab completion: Clojure-aware completions powered by SCI, including keywords (:foo, ::foo, ::alias/foo)
Tab completions in bb repl
  • Ghost text: as you type, the common completion prefix appears as faint inline text after the cursor. Press TAB to accept.
Ghost text in bb repl
  • Eldoc: automatic argument help — when your cursor is inside a function call like (map |), the arglists are displayed below the prompt
  • Doc-at-point: press Ctrl+X Ctrl+D to show full documentation for the symbol at the cursor
  • Persistent history: command history saved across sessions in ~/.bb_repl_history
  • Ctrl+C handling: first press on an empty prompt warns, second press exits

Many of these features were inspired by rebel-readline, Leiningen's REPL, and Node.js's REPL.

SCI improvements

Under the hood, SCI (the interpreter powering babashka) received many improvements in this cycle:

  • Functional interface adaptation for instance targets: you can now write (let [^Predicate p even?] (.test p 42)) and SCI will adapt the Clojure function to the functional interface automatically.
  • Type tag inference: SCI now infers type tags from let binding values to binding names, reducing the need for explicit type hints in interop-heavy code.
  • Several bug fixes: read with nil/false as eof-value, letfn with duplicate function names, ns-map not reflecting shadowed vars, NPE in resolve, and .method on class objects routing incorrectly.

Other improvements

  • Support multiple catch clauses in combination with ^:sci/error
  • Fix satisfies? on protocols with proxy
  • Support reify with java.time.temporal.TemporalQuery
  • Fix reify with methods returning int/short/byte/float primitives
  • nREPL server now uses non-daemon threads so the process stays alive without @(promise)
  • Add clojure.test.junit as built-in source namespace
  • Add cp437 (IBM437) charset support in native binary via selective GraalVM charset Feature, avoiding the ~5MB binary size increase from AddAllCharsets. More charsets can be added on request.

For the full list of changes including new Java classes and library bumps, see the changelog.

Thanks

Thank you to all the contributors who helped make this release possible. Special thanks to everyone who reported issues, tested pre-release builds from babashka-dev-builds, and provided feedback.

Thanks to Clojurists Together and all babashka sponsors and contributors for their ongoing support. Your sponsorship makes it possible to keep developing babashka.

And thanks to all babashka users: you make this project what it is. Happy scripting!

Permalink

Pull Playground - Interactive Pattern Learning

Context

The lasagna-pull pattern DSL is central to how we build APIs at Flybot (see Building a Pure Data API with Lasagna Pull). But learning the syntax from documentation alone is slow. You need to type patterns, see results, and build intuition through experimentation.

I built the playground as a companion to flybot.sg. The goal was a zero-setup environment where someone could open a URL and start writing patterns immediately, without cloning a repo, starting a REPL, or connecting to a database.

Two modes, one UI

The playground supports two modes, toggled via URL path (/sandbox, /remote):

Mode      How it works                                            Backend needed?
Sandbox   SCI evaluates patterns in-browser against sample data   No
Remote    HTTP POST to a live server API (e.g. flybot.sg)         Yes

The UI is mode-agnostic. Views dispatch {:pull :pattern} and the effect system routes to the right executor (see dispatch-of). Switching modes changes the transport, not the interface.

Sandbox is the default and the one most people use. It ships with progressive examples that teach the DSL step by step: binding scalars, querying collections, using :when constraints, composing across collections, and performing mutations (create, update, delete). Each example loads a pre-filled pattern into the editor. For mutations, the data panel refreshes automatically so you can see the effect.

Remote connects to a live server and sends the same Transit-encoded patterns that flybot.sg uses for its own frontend. This is useful for testing patterns against real data or debugging API behavior. Remote mode also adds schema-aware autocomplete tooltips from the server's Malli schema.

Why SCI

Pull patterns support :when constraints with predicate functions:

{:posts {{:id 1} {:title (?t :when string?)}}}

On a server, string? resolves from clojure.core. In the browser, there is no Clojure runtime. SCI (Small Clojure Interpreter) fills this gap: it provides a sandboxed Clojure evaluator in ClojureScript.

The sandbox initializes SCI with a curated whitelist of safe functions (pos?, string?, count, =, etc.) covering what people actually use in :when constraints. No eval, no IO, no side effects.
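A whitelist-based resolver is simple to sketch. This hypothetical Python version (not SCI's actual API) shows the idea: only listed names resolve, and everything else is rejected outright:

```python
# Hypothetical whitelist, mimicking the sandbox's curated set of
# safe predicates usable in :when constraints.
SAFE = {
    "string?": lambda x: isinstance(x, str),
    "pos?":    lambda x: isinstance(x, (int, float)) and x > 0,
    "count":   len,
}

def resolve(name):
    # Resolve only whitelisted names; reject everything else.
    try:
        return SAFE[name]
    except KeyError:
        raise NameError(f"{name} is not whitelisted") from None

resolve("string?")("hello")  # → True
```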

The key insight is that the same remote/execute function runs in both modes. On the server, it uses Clojure's built-in resolve. In the sandbox, it uses SCI's resolve and eval. The pull engine does not know or care which one it is talking to.

Same engine, in-memory data

The sandbox needs something that behaves like a database. The collection library provides atom-source, an in-memory implementation backed by atoms that supports the same CRUD operations as a real DataSource. The sandbox store is a map of atom-source-backed collections, initialized with sample data (users, posts, config).

Two design choices worth noting:

Schema is pull-able data. Instead of a separate endpoint, the schema lives in the store alongside domain collections. Querying {:schema ?s} returns it through the same pull mechanism as {:users ?all}. The playground uses pull for everything, including introspecting its own API.

Reset is a pull mutation. Resetting data to defaults is expressed as {:seed {nil true}}, a standard create mutation. The seed entry resets all atom-sources to their initial state. No special reset endpoint, no reload.
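An in-memory collection that remembers its seed and can restore it is straightforward to model. A toy Python sketch (invented names, not the actual atom-source API):

```python
class AtomSource:
    """Toy in-memory collection that remembers its seed data."""
    def __init__(self, seed):
        self._seed = [dict(item) for item in seed]
        self.items = [dict(item) for item in seed]
    def create(self, item):
        self.items.append(item)
    def reset(self):
        # Restore the initial state, the effect the seed mutation triggers.
        self.items = [dict(item) for item in self._seed]

users = AtomSource([{"id": 1, "name": "alice"}])
users.create({"id": 2, "name": "bob"})
users.reset()
len(users.items)  # → 1
```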

Deployment

The sandbox runs entirely in the browser, so the playground is a pure SPA with no server dependency. This makes hosting straightforward: an S3 bucket behind CloudFront, deployed via GitHub Actions on git tag. Total cost is under $1/month.

The deploy pipeline reuses the same CI/CD setup as flybot.sg. A bb tag examples/pull-playground 0.1.0 triggers a shadow-cljs release build, S3 sync, and CloudFront cache invalidation.

The full source is in lasagna-pattern/examples/pull-playground.

Permalink

Managing Web App Modes with Fun-Map in Clojure

Context

Every non-trivial Clojure application has stateful components (database connections, HTTP servers, caches) that depend on each other and need to start and stop in order. Stuart Sierra framed the problem well in his Components: Just Enough Structure talk: stateful resources should not be scattered across namespaces as top-level defs. They need explicit lifecycle management and dependency ordering.

The Clojure community has produced several solutions, each with different tradeoffs:

Library     System definition                 Component definition                          Overriding for test/dev
Component   system-map + using declarations   defrecord implementing Lifecycle protocol     assoc a new component (must still be a record)
Integrant   EDN config map                    defmethod init-key / halt-key! multimethods   Merge config maps
Mount       Implicit (global defstate vars)   defstate with :start/:stop                    with-args, with-substitutions
Fun-map     Regular Clojure map               fnk + closeable (plain functions)             assoc/dissoc on the map

Component requires every component to be a defrecord. Integrant splits the system across EDN config and multimethods in separate namespaces. Mount ties state to global vars, making parallel testing awkward.

Fun-map, created by Robert Luo, takes a different approach: the system IS a regular Clojure map. Values can be plain data or fnk functions that declare dependencies on other keys. life-cycle-map adds startup/shutdown ordering. That is the entire model.

The base system

Here is the production system from flybot-site, a blog platform built on Datahike + http-kit. I have trimmed the logging calls and middleware details for clarity, but the structure is from the actual code:

(defn make-base-system [config]
  (let [{:keys [server db auth session init uploads log]} (cfg/prepare-cfg config)
        {:keys [port base-url]} server
        db-cfg    (build-datahike-cfg db)
        session-key (cfg/parse-session-secret (:secret session))]
    (life-cycle-map
     {;;--- Config (plain values, no fnk needed) ---
      ::port          port
      ::base-url      base-url
      ::db-cfg        db-cfg
      ::owner-emails  (:owner-emails auth)

      ;;--- Logger (mulog) ---
      ::logger
      (fnk []
           (let [stop-fn ...]
             (closeable {:log ...} #(stop-fn))))

      ;;--- Database ---
      ::db
      (fnk [::db-cfg ::logger]
           (let [conn (db/create-conn! db-cfg)]
             ...
             (closeable {:conn conn :cfg db-cfg}
                        #(db/release-conn! conn db-cfg))))

      ;;--- API (request -> {:data :schema}) ---
      ::api-fn
      (fnk [::db ::logger]
           (api/make-api {:conn (:conn db)}))

      ;;--- Session (secure cookies for prod) ---
      ::session-config
      (fnk [::logger]
           {:store (cookie-store {:key ...})
            :cookie-attrs {:same-site :lax :http-only true :secure true}})

      ;;--- Upload handler (S3 or local) ---
      ::upload-handler
      (fnk [::logger] ...)

      ;;--- Dev user slot (nil in prod, overridden in dev) ---
      ::dev-user nil

      ;;--- Ring application ---
      ::ring-app
      (fnk [::api-fn ::session-config ::dev-user ::base-url
            ::upload-handler ::logger ::db ::owner-emails]
           (-> (fn [_] ...)
               (remote/wrap-api api-fn {:path "/api"})
               (auth/wrap-google-auth {...})
               (wrap-dev-user dev-user)
               (wrap-session session-config)
               ...))

      ;;--- HTTP Server ---
      ::http-server
      (fnk [::ring-app ::port ::logger]
           (let [stop-fn (http-kit/run-server ring-app {:port port})]
             (closeable {:port port :api-endpoint ...}
                        #(stop-fn))))})))

The dependency graph is readable at a glance:

life-cycle-map
├── ::port, ::base-url, ::db-cfg, ::owner-emails   (plain config values)
├── ::logger                                         (no deps)
├── ::db                                             (depends on ::db-cfg, ::logger)
├── ::api-fn                                         (depends on ::db, ::logger)
├── ::session-config                                 (depends on ::logger)
├── ::upload-handler                                 (depends on ::logger)
├── ::dev-user                                       (nil in prod)
├── ::ring-app                                       (depends on most of the above)
└── ::http-server                                    (depends on ::ring-app, ::port, ::logger)

Three fun-map primitives do all the work:

  • fnk: a function that destructures its dependencies from the map. (fnk [::db ::logger] ...) declares that this component needs ::db and ::logger to start.
  • closeable: wraps a value with a teardown function. When halt! is called, closeables are torn down in reverse dependency order.
  • life-cycle-map: a map that tracks which components have been started. Access any key to trigger its transitive dependency chain. halt! stops everything.

(def sys (make-base-system prod-config))

;; Start: access any key to trigger its dependency chain
(::http-server sys)
;; => {:port 8080, :api-endpoint "http://localhost:8080/api"}

;; Stop: close all components in reverse dependency order
(halt! sys)
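To make the model concrete, here is a heavily simplified toy version of the idea in Python (invented names; fun-map's real implementation is far more sophisticated): factories look up their dependencies through the system, the start order is recorded, and teardown runs in reverse:

```python
class System:
    """Toy lifecycle map: lazy startup, reverse-order teardown."""
    def __init__(self, defs):
        self.defs = defs      # key -> factory(sys) -> (value, stop_fn), or a plain value
        self.started = {}     # key -> (value, stop_fn or None)
        self.order = []       # start order, for reverse teardown

    def get(self, key):
        if key in self.started:
            return self.started[key][0]
        spec = self.defs[key]
        # A factory receives the system so it can pull in its dependencies,
        # which starts them first (like fnk destructuring the map).
        value, stop = spec(self) if callable(spec) else (spec, None)
        self.started[key] = (value, stop)
        self.order.append(key)
        return value

    def halt(self):
        # Stop components in reverse start (dependency) order,
        # like halt! closing closeables.
        for key in reversed(self.order):
            _, stop = self.started[key]
            if stop:
                stop()

log = []
sys_map = System({
    "db": lambda s: ("conn", lambda: log.append("db closed")),
    "server": lambda s: ("server(" + s.get("db") + ")",
                         lambda: log.append("server closed")),
})
sys_map.get("server")   # starts db first, then server
sys_map.halt()          # closes server first, then db
```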

Three modes via assoc

The real payoff is building variant systems. Since the system is a map, different environments are assoc operations on the base. No conditionals inside components.

The nil slot pattern

Notice ::dev-user nil in the base system. The wrap-dev-user middleware checks this value:

(defn- wrap-dev-user [handler dev-user]
  (if dev-user
    (fn [request]
      (let [session (merge (:session request)
                           {:user-id      (:id dev-user)
                            :user-email   (:email dev-user)
                            :user-name    (:name dev-user)
                            :user-picture (:picture dev-user)
                            :roles        (or (:roles dev-user) #{:member :admin :owner})})]
        (handler (assoc request :session session))))
    handler))

In prod, dev-user is nil, so wrap-dev-user returns the handler unchanged. It is a no-op, not a conditional. The middleware does not know about modes. It only knows about its input.
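The nil-slot pattern translates to any middleware stack. A simplified Python sketch (only the user-id merge is shown; all names are invented):

```python
def wrap_dev_user(handler, dev_user):
    # nil slot: without a dev user, return the handler unchanged (a no-op).
    if not dev_user:
        return handler
    def wrapped(request):
        # Merge the dev user into the session before calling the handler.
        session = {**request.get("session", {}), "user-id": dev_user["id"]}
        return handler({**request, "session": session})
    return wrapped

def echo(request):
    return request

wrap_dev_user(echo, None) is echo  # → True: the prod path is the identity
```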

Dev system (skip OAuth)

For local development, we need insecure cookies (no HTTPS) and an auto-login user (skip the Google OAuth flow). Two assoc calls on the base:

(defn make-dev-system [config]
  (let [{:keys [session dev]} (cfg/prepare-cfg config)
        session-key (cfg/parse-session-secret (:secret session))
        {dev-user-cfg :user} dev]
    (-> (make-base-system config)
        (assoc ::session-config (make-dev-session-config session-key))
        (assoc ::dev-user (make-dev-user-component dev-user-cfg)))))

make-dev-session-config returns a fnk identical to the prod version but with :secure false. make-dev-user-component creates the user in Datahike and grants roles at startup:

(defn- make-dev-user-component [dev-user-cfg]
  (fnk [::db ::logger]
       (when dev-user-cfg
         (let [conn (:conn db)
               {:keys [id name email roles]} dev-user-cfg
               roles (or roles #{:member :admin :owner})]
           (db/upsert-user! conn #:user{:id id :email email :name name :picture ""})
           (doseq [role roles]
             (db/grant-role! conn id role))
           {:id id :email email :name name :picture nil :roles roles}))))

The rest of the system (database, API, server, middleware stack) is inherited unchanged.

Dev with OAuth (test the login flow)

Sometimes we need to test the Google OAuth flow locally but still want insecure cookies. One assoc:

(defn make-dev-oauth-system [config]
  (let [session-key (cfg/parse-session-secret (:secret (:session (cfg/prepare-cfg config))))]
    (-> (make-base-system config)
        (assoc ::session-config (make-dev-session-config session-key)))))

No ::dev-user override, so it stays nil, and wrap-dev-user remains a no-op. The OAuth middleware handles login normally.

What each mode changes

Component          Prod (base)     Dev                          Dev with OAuth
::session-config   :secure true    :secure false                :secure false
::dev-user         nil (no-op)     Auto-login user with roles   nil (no-op)
Everything else    Base            Inherited                    Inherited

Mode dispatch

A single entry point selects the constructor:

(defn make-system
  ([] (make-system {}))
  ([config]
   (let [{:keys [mode]} (cfg/prepare-cfg config)]
     (case mode
       :dev             (make-dev-system config)
       :dev-with-oauth2 (make-dev-oauth-system config)
       (make-base-system config)))))

Testing with a fresh system

Tests use the dev mode with an in-memory database. The fixture creates a system, starts it, and tears it down:

(def test-config
  {:mode   :dev
   :server {:port 18765 :base-url "http://localhost:18765"}
   :db     {:backend :mem :id "test-blog"}
   :auth   {:owner-emails #{"owner@test.com"}}
   :init   {:seed? false}
   :dev    {:user {:id "owner" :email "owner@test.com" :name "Test Owner"}}})

(defn with-system [f]
  (let [sys (system/make-system test-config)]
    (try
      (touch sys)
      (binding [*sys* sys]
        (f))
      (finally
        (halt! sys)))))

(use-fixtures :once with-system)

No special test infrastructure. The system is a value. touch starts everything, halt! stops it. The in-memory Datahike backend means each test run is isolated. Because there is no global state, you could run multiple systems in the same JVM for parallel testing.

The pattern across projects

This pattern scales beyond flybot-site. In our internal analytics platform built on Rama, the same structure appears with entirely different components:

(defn system
  [{:cfg/keys [external-http rama log analytics-cfg dashboards-cfg oauth2]}]
  (life-cycle-map
   {::logger              (fnk [] ...)
    ::rama-clu             (fnk [::logger] ...)
    ::cache                (fnk [::rama-clu] ...)
    ::oauth2-config        {:google (merge cfg/oauth2-default-config oauth2)}
    ::ring-handler         (fnk [::injectors ::saturn-handler ::executors ::system-monitor ::oauth2-config] ...)
    ::external-http-server (fnk [::logger ::ring-handler] ...)}))

Different components (Rama cluster manager, Prometheus metrics, Jetty instead of http-kit), same composition model. The system is a map. Components are entries. Lifecycle is closeable.

Why not the alternatives

vs. Component: No defrecords, no Lifecycle protocol. Components are functions, not types. You do not need to define a record just to hold a database connection.

vs. Integrant: No separation between config (EDN) and implementation (multimethods). The system definition IS the implementation. You see the dependency graph, the startup logic, and the teardown logic in one place.

vs. Mount: No global state. You can run multiple systems in the same JVM. Parallel test execution works naturally because each test gets its own system value.

The assoc composition also means you never need conditional logic inside components. The production ::session-config does not check if dev?. Instead, the dev system replaces it entirely. Each component does one thing.

Conclusion

  • A system is a map: readable, inspectable in the REPL, serializable as data
  • Three modes, zero conditionals: prod is the base, dev assocs two components, dev-with-oauth assocs one. No if dev? inside any component.
  • The nil slot pattern: ::dev-user nil in the base lets middleware be a no-op in prod without knowing about modes
  • No framework buy-in: components are fnk functions and closeable wrappers, not protocol implementations or multimethod dispatches
  • Partial startup: access one key and only its transitive dependencies start
  • Same pattern across projects: flybot-site (http-kit + Datahike) and hibou (Jetty + Rama) use identical composition

Permalink

Last Call for Q2 2026 Funding Survey

Greetings Clojurists!

We are about to close our Q2 2026 Funding Survey which helps inform our Q2 project awards. It is not a heavy lift - maybe 5 minutes of your time. But your input is invaluable! A link to the survey was sent to your email in the last few weeks - and just in case it made its way to spam, you can look for “We Need Your Input - Q2 2026 Funding”. The survey closes midnight PST on February 21, 2026.

Thanks as always for your support of Clojurists Together and for being a part of this awesome community.

Any questions, please email me at kdavis@clojuriststogether.org

Kathy Davis
Program Manager
Clojurists Together Foundation

Permalink

ClojureScript Guide: Why Modern Devs Need It Now (2026 Edition)

Why Does ClojureScript Matter So Much for Modern Web Developers in 2026?

Simple. Today, developers need speed, scalability, and a smooth developer experience (DX) in web development. ClojureScript meets all three requirements effectively.

Write ClojureScript, and it compiles into optimized JavaScript via the Google Closure Compiler. It helps deliver scalable user interfaces and makes it much easier to share code across the full stack.

Compared to plain JavaScript, ClojureScript offers several significant enhancements. 

  • Functional immutability keeps code clean and helps to dodge bugs.
  • Tools like shadow-cljs provide REPL-driven hot-reloading: make a change and see it instantly.
  • And core.async handles async workflows without callback overhead.

This is more than a tool; it’s a shift in approach. ClojureScript changes the way many developers think about creating applications. It is about making things faster, cleaner, and much more reliable.

So, What’s ClojureScript All About? Beginner Guide

ClojureScript takes Clojure- a Lisp dialect- and turns it into lean, optimized JavaScript using the Google Closure toolchain. It delivers all the perks of functional, declarative code in that classic Lisp syntax, but it runs fast right in the browser.

Lisp Syntax for Concise, Immutable Code

Why bother with Lisp syntax? As complexity grows, JavaScript becomes harder to reason about. ClojureScript keeps data immutable, and its UI libraries describe markup with Hiccup-style data structures. The code stays neat and simple to understand, so it is less likely to break. New team members can dive in and build features rather than track down bugs.

Google Closure Optimization

The Closure Compiler removes dead code and optimizes the remaining code. That means smaller files, faster load times, and lighter apps overall.

Functional Programming Foundation

At its core, ClojureScript leans hard into functional programming- immutability, pure functions, and simple, declarative UIs. When you use tools like Reagent or re-frame, handling app state and building complex interfaces becomes much smoother. 

Beginner‑Friendly Learning Curve

ClojureScript may feel challenging at first, especially for developers accustomed to JavaScript. But once they get comfortable with immutability and pure functions, building simple, declarative UIs becomes easier. Suddenly, building apps feels smoother, quicker, and honestly, kind of fun.

Why Modern Web Devs Love ClojureScript (Key Benefits) 

REPL Hot‑Reloading With shadow‑cljs For Instant Iteration  

There is nothing quite like seeing the code change in real time in a live environment. With ClojureScript’s REPL (Read‑Eval‑Print Loop), just type and watch the results show up right in the browser. shadow-cljs takes it up a notch- no more waiting for builds or hitting refresh over and over. Once code changes are made, they are available instantly. Fast feedback keeps developers in the flow and helps to get more done.

Functional Immutability Boosts React Performance via Reagent  

Now, let’s talk about performance. Reagent wraps around React but uses ClojureScript’s immutable data. Data never gets changed in place, so React only updates what’s actually new. That keeps things fast and predictable. This reduces bugs, keeps apps running smoothly, and lets developers focus on building features rather than debugging.

core.async For Cleaner, Callback‑Free async Flows  

Async code in JavaScript can get messy fast, making nested functions hard to read and maintain. core.async in ClojureScript simplifies that. Developers can use channels and lightweight threads, so the code appears simple while running asynchronously behind the scenes. Event handling, API calls, background tasks- suddenly, they are not such a headache.
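As a rough sketch of how that looks (the namespace and handler name here are hypothetical, and this assumes a recent core.async release where go can be required directly from cljs.core.async):

```clojure
(ns demo.async   ; hypothetical namespace
  (:require [cljs.core.async :refer [chan put! <! go]]))

(def clicks (chan))

;; Consumer: a go block reads clicks off the channel in
;; sequential-looking code, with no nested callbacks.
(go
  (loop [n 1]
    (<! clicks)                        ; parks until a value arrives
    (js/console.log "clicks so far:" n)
    (recur (inc n))))

;; Producer: an event handler just puts a value on the channel.
(defn on-click [_event]
  (put! clicks :click))
```

The event source and the processing logic stay decoupled: the handler only knows about the channel, not about what consumes it.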

A Modern Toolkit That Actually Makes Scalable Web Apps  

Put all these features together: live iteration with REPL and shadow‑cljs, predictable rendering with immutable data, and simplified async flows with core.async. Developers get a setup that helps move fast and avoid the typical issues that come with larger JavaScript apps. ClojureScript simplifies scaling, letting developers build more and fix less.

Top ClojureScript Tools and Libraries 2026

Looking ahead to 2026, these libraries are really powering ClojureScript- making it possible to build apps that are fast, reliable, and ready to scale.

Tooling & Build

  • Shadow-cljs: It works seamlessly with npm, gives a live REPL, and provides extremely fast hot-reload. Development just feels smoother and quicker.
  • Figwheel: It has remained a key tool for live reload and interactive dev for years. shadow-cljs is more common now, but plenty of projects still stick with Figwheel.

UI & Rendering

  • Reagent: A slim React wrapper that allows writing UI in Hiccup syntax. Because it is based on immutable data, UI updates are efficient, and issues are less likely to occur.
  • Rum: Another React wrapper that keeps things simple. If anyone is working on a smaller project or just wants the minimum layer on top of React, Rum is a good choice.
  • Hoplon: A library that helps to build reactive UIs with a spreadsheet-like model. Its “cells” update automatically, so making interactive apps feels more natural.

State Management & App Structure

  • Re-frame: It is a predictable state management library that organizes app state. Inspired by Redux, it makes data flow clear and helps to keep even big, complex apps manageable.
  • Keechma: It provides structure for the whole app. It handles lifecycle and routing and integrates well with Reagent and re-frame. If the project is large, Keechma keeps things organized.

Data & Persistence

  • Datascript: An immutable database right in the browser. It supports Datomic-style queries, making the local state both powerful and flexible.
  • Fulcro: A full-stack framework for sharing code between the client and the server. It is built around graph queries (EQL) and a data-driven design, making scaling straightforward.

Styling & Assets

  • Garden: With the Garden library, developers write their CSS as ClojureScript data. That makes styles easy to generate, reuse, and maintain. 

ClojureScript vs JavaScript/React: Quick Comparison 

Feature               ClojureScript                                            JavaScript/React
State Management      Atoms, immutable by default, re-frame                    Mutable by default, Redux/Context for control
Performance           Dead-code removal, small bundles, efficient re-renders   React's virtual DOM helps, but mutable state adds complexity
DX                    Live REPL experimentation                                Console debugging
Full-Stack            Share code with Clojure backend                          Language silos
Syntax & Style        Functional, Lisp-based, Hiccup for UI                    Mix of imperative + functional, JSX for UI
Tooling               shadow-cljs, Figwheel                                    Webpack, Vite, CRA
Async Handling        core.async channels                                      Promises, async/await, callbacks
Learning Curve        Steeper (Lisp syntax, functional mindset)                Easier entry, widely taught and documented
Community & Adoption  Smaller but focused (finance, data apps, e-commerce)     Huge global adoption, dominant in web dev
Styling               Garden (CSS as data)                                     CSS-in-JS, styled-components, plain CSS
Best Fit              Complex, scalable apps needing reliability               General apps, startups, fast onboarding

Real‑World ClojureScript Success Stories

Nubank  

Nubank stands out as one of Latin America’s largest fintechs, and it uses ClojureScript to power its user interfaces. Their teams rely on immutability and Reagent to build apps that serve millions of people. This setup keeps everything predictable and reliable- something absolutely needed in finance.

Reagent in Practice  

Apps built with Reagent just run smoother than typical React apps. Thanks to immutable data, updates are quick and reduce bugs caused by changing state. For many teams, Reagent is the go-to option for speed and stability without the headache.

Startups and Enterprises  

It is not just the giants using ClojureScript. Startups and big companies both use it to scale their apps while keeping things simple. With libraries like re-frame, state management gets easier, and tools like shadow-cljs let teams move fast. That combo lets products grow without losing control.

Flexiana’s Success Stories  

Flexiana, a global consultancy, has delivered ClojureScript projects across different industries. We have built video consultation platforms, worked with graph databases, and co-developed SaaS products. By pairing ClojureScript with Polylith architecture, we keep systems modular and stable- even as things expand.

Why These Stories Matter

  • Nubank demonstrates that ClojureScript can handle millions of users.  
  • Reagent gives apps a real performance boost over plain React.  
  • Startups, big companies, and consultancies like Flexiana use it for actual products, not just side projects.  
  • Immutable data and predictable state cut bugs and simplify maintenance. 

Real-world businesses use ClojureScript, demonstrating its practicality.

Future of ClojureScript: Why Now in 2026

ClojureScript is no longer a mystery. By 2026, it has become a stable choice for teams, one that supports scaling up while providing the right modern tools for the job.

Growing npm interop and GPU/web Tech Support  

Getting npm packages to play nice is now easy: developers can pull in exactly what they need and integrate it directly into the JavaScript workflow. Plus, with GPU support and WebAssembly, heavy tasks like graphics, simulations, and data-intensive apps run right in the browser. No more waiting around.

Rising Adoption of AI‑Driven Frontends  

AI is not just a buzzword these days; it is built into almost every interface. ClojureScript’s focus on immutability and functional programming keeps things predictable, even when the state gets complex. That kind of reliability is huge when building smart, scalable frontends.

Community Growth  

The community is stronger than ever. There are more libraries, tutorials, and frameworks available. New folks can learn without getting lost. Experienced teams can create complex structures. Polylith keeps large projects manageable and well-organized. 

So, Why Now  

In 2026, ClojureScript combines functional programming with modern web tech. It also connects naturally to AI design. It is now a proven choice for teams that need reliable performance and durable systems.

ClojureScript Tutorial: Build the First App

Let’s build a simple interactive app with ClojureScript, shadow‑cljs, and Reagent: a real, working counter that updates in real time with each click.

Step 1: Set up shadow‑cljs

First, install shadow-cljs. This tool integrates ClojureScript with npm packages, runs the REPL, and supports hot reloading. 

Make sure Node.js and npm are installed. Then add shadow‑cljs to the project folder and run:
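For example (a setup sketch, assuming a fresh npm project; the exact commands may differ for your environment):

```shell
# Install shadow-cljs as a local dev dependency
npm install --save-dev shadow-cljs
```

A global install (npm install -g shadow-cljs) also works if you prefer running shadow-cljs without npx.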

Set up the project: add a src folder for code and a shadow-cljs.edn file for builds.

Step 2: Create a Reagent Counter

Reagent is a ClojureScript wrapper for React that creates a UI with Hiccup syntax, not JSX.

Add Reagent to the dependencies in shadow-cljs.edn.
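A minimal shadow-cljs.edn might look like this (a sketch: the build id, paths, Reagent version, and init namespace are illustrative assumptions, not taken from the demo repository):

```clojure
;; shadow-cljs.edn -- hypothetical minimal config
{:source-paths ["src"]
 :dependencies [[reagent "1.2.0"]]           ; version is an assumption
 :builds {:app {:target     :browser
                :output-dir "public/js"
                :asset-path "/js"
                :modules    {:main {:init-fn demo.core/init}}}}}
```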

You can run the code locally by cloning the repository cljs_hot_reload_demo
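If you prefer not to clone, a counter along these lines shows the idea (the namespace and DOM element id are assumptions matching a typical setup):

```clojure
(ns demo.core   ; hypothetical namespace
  (:require [reagent.core :as r]
            [reagent.dom :as rdom]))

;; defonce so the counter state survives hot-reloads
(defonce click-count (r/atom 0))

(defn counter []
  [:div
   [:p "Count: " @click-count]
   [:button {:on-click #(swap! click-count inc)} "Increment"]])

(defn init []
  (rdom/render [counter] (js/document.getElementById "app")))
```

Dereferencing the Reagent atom inside the component is what makes it re-render automatically when the value changes.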

Step 3: Run with REPL Hot‑Reloading

Start shadow‑cljs with:
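Assuming a build named :app in shadow-cljs.edn (an illustrative name, not confirmed by the article), the watch command would be:

```shell
# Compile, serve, and hot-reload the :app build
npx shadow-cljs watch app
```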

Open the REPL and connect it to the running app.

shadow-cljs now reloads the code in the browser the instant a change is saved. There is no need to refresh to see the changes.

What Did We Achieve?

  • Made a live update by adding a header and changing its color, with no page refresh.
  • Set up a project that is ready to scale.
  • Established a workflow that combines the strength of the React ecosystem with the functional approach of ClojureScript.

ClojureScript FAQs for JS Devs

ClojureScript vs React?  

Reagent builds on React. It introduces functional immutability and a cleaner, shorter syntax, making component development simpler and reducing bugs.

Learning curve?  

If a developer already knows JavaScript and React, there is no need to relearn everything. If developers are fans of functional programming, they will feel at home.

How does state management work?  

State remains immutable and is managed with atoms. Tools like re‑frame make complex state easier to manage and debug than typical mutable JS.
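A tiny sketch of the idea (plain ClojureScript, no libraries; the name app-state is made up):

```clojure
;; The map inside the atom is immutable; the atom is the
;; single mutable reference that points at the current value.
(def app-state (atom {:count 0}))

(swap! app-state update :count inc)  ; swaps in a *new* map atomically
(:count @app-state)                  ; 1
```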

Can I use npm packages?  

Absolutely. With shadow-cljs, developers just pull in npm packages like they normally would. ClojureScript integrates seamlessly with the JS ecosystem.

What about tooling?  

shadow-cljs offers hot‑reloading, a live REPL, and smooth JavaScript interop- built for fast, flexible development.

Is performance an issue?  

Not at all. ClojureScript compiles down to efficient JavaScript. Paired with React or Reagent, the apps run just as quickly as plain JS ones- sometimes even faster.

How big is the community? 

The community is smaller than JavaScript’s but active, with numerous libraries, guides, and frameworks for developers.

Can I mix ClojureScript with existing JS code?  

Yep. Interop is easy. Developers can call JS functions from ClojureScript, and they can make their ClojureScript code available to JavaScript too.

Why choose ClojureScript over plain JS?  

Developers get immutable data and functional design. They also benefit from clear syntax. This reduces bugs and makes apps more reliable. It also keeps the code easier to manage.

Wrapping Up

By 2026, ClojureScript has gone mainstream: a proven way to build scalable web apps. It provides instant feedback via REPL hot-reloading, so there is no need to wait for changes to appear. Immutable data keeps the app’s state under control, and that means fewer bugs to chase.

core.async? It just makes async work less of a headache. Plus, the ecosystem is stacked: shadow-cljs, Reagent, re-frame, and Fulcro all help build interactive UIs and enable front-to-back code reuse.

Developers can see the impact. Nubank, one of the largest fintech companies in Latin America, relies on ClojureScript to serve millions. Flexiana works with clients worldwide, using ClojureScript across healthcare and SaaS. Startups and large companies rely on us to grow without added complexity.

ClojureScript gives teams what they need: speed, reliability, and easier-to-maintain code. It is not just a passing trend. It is here to stay, and it is a smart choice for anyone who wants their apps to last.

Facing challenges with big datasets? Flexiana’s ClojureScript experts ensure smooth scaling.

The post ClojureScript Guide: Why Modern Devs Need It Now (2026 Edition) appeared first on Flexiana.

Permalink

Python to Wisp: The Lisp That Stole Python's Indentation

Python to Wisp: The Lisp That Stole Python's Indentation

Every Lisp programmer has heard the complaint: "too many parentheses." Every Python programmer has heard the praise: "the indentation makes it readable." Wisp asks a dangerous question: what if we put them together?

Wisp is an indentation-sensitive syntax for Scheme — specifically GNU Guile Scheme — standardized as SRFI-119. It takes the structure of Lisp and expresses it through indentation, the way Python does. The outer parentheses vanish. What remains is code that looks strangely like Python but thinks like Lisp — with macros, tail-call optimization, exact arithmetic, and homoiconicity.

This article walks through every major topic in the official Python tutorial and shows how Wisp approaches the same concept. Two languages, both indented, separated by philosophy.

1. Whetting Your Appetite

Python sells itself on readability, rapid prototyping, and an enormous ecosystem. It's the default choice for scripts, web backends, data science, and teaching.

Wisp sells itself on a different proposition: the readability of Python's visual structure combined with the raw power of Scheme. Scheme is one of the most theoretically clean programming languages ever designed — tail-call optimization, first-class continuations, hygienic macros — but its parenthesized syntax scares newcomers away. Wisp removes that barrier.

Wisp runs on GNU Guile, a mature Scheme implementation that ships with many GNU/Linux distributions. It gives you access to Guile's full ecosystem: real OS threads (no GIL), arbitrary-precision arithmetic, a sophisticated module system, and the ability to reshape the language through macros.

The trade-off? Python has PyPI. Wisp has the Guile ecosystem and whatever you can call via FFI. This isn't a contest for library count — it's a contest for expressiveness per line.

2. The Interpreter / The REPL

Python:

>>> 2 + 2
4
>>> print("Hello, world!")
Hello, world!

Wisp:

> + 2 2
4
> display "Hello, world!"
Hello, world!

To use Wisp interactively, launch guile and switch languages with ,L wisp. Or run scripts directly: guile --language=wisp -s hello.w.

The syntax is prefix notation — the function comes first, then its arguments — but without the outer parentheses that traditional Scheme requires. The line display "Hello, world!" is equivalent to Scheme's (display "Hello, world!"). Indentation replaces the parens.

3. An Informal Introduction

Numbers

Python:

>>> 17 / 3       # 5.666...
>>> 17 // 3      # 5
>>> 17 % 3       # 2
>>> 5 ** 2       # 25

Wisp:

/ 17 3            ; 17/3 — an exact rational!
quotient 17 3     ; 5
remainder 17 3    ; 2
expt 5 2          ; 25

The first difference is striking. (/ 17 3) in Scheme returns 17/3 — an exact rational number, not a floating-point approximation. Scheme distinguishes exact and inexact numbers at the type level. If you want the float, you ask for it explicitly: (exact->inexact (/ 17 3)) gives 5.666....

Operators take multiple arguments naturally:

+ 1 2 3 4 5       ; 15
* 2 3 4            ; 24
< 1 2 3 4          ; #t (chained comparison)

And Scheme gives you arbitrary-precision integers for free. No int vs long distinction, no overflow:

expt 2 1000        ; a 302-digit number, exactly

Strings

Python:

word = "Python"
word[0]        # 'P'
word[0:2]      # 'Py'
len(word)      # 6
f"Hello, {word}!"

Wisp:

define word "Wisp"
string-ref word 0       ; #\W (a character)
substring word 0 2      ; "Wi"
string-length word       ; 4
format #f "Hello, ~a!" word  ; "Hello, Wisp!"

Scheme strings are mutable (unlike Python's), though idiomatic Scheme treats them as values. The format function uses ~a as a placeholder — #f means "return a string" and #t means "print directly":

format #t "Hello, ~a!\n" word   ; prints: Hello, Wisp!

Lists

Python:

squares = [1, 4, 9, 16, 25]
squares[0]              # 1
squares + [36, 49]      # [1, 4, 9, 16, 25, 36, 49]
squares.append(36)      # mutates!

Wisp:

define squares '(1 4 9 16 25)
list-ref squares 0              ; 1
append squares '(36 49)         ; (1 4 9 16 25 36 49)

Scheme lists are linked lists — immutable by convention, efficient at the head. append returns a new list without modifying the original. There's no .append mutation. If you want indexed access, Scheme has vectors:

define sq #(1 4 9 16 25)
vector-ref sq 0                 ; 1

The '(1 4 9 16 25) is a quoted list — the quote ' prevents Scheme from trying to call 1 as a function.

4. Control Flow

if / cond

Python:

if x < 0:
    print("Negative")
elif x == 0:
    print("Zero")
else:
    print("Positive")

Wisp:

cond
  : < x 0
    display "Negative"
  : = x 0
    display "Zero"
  else
    display "Positive"

In Wisp, cond handles multi-branch conditionals. Each branch starts with a : (which creates the nested parentheses Scheme expects). The else clause catches everything.

The simple if takes a condition, a then-branch, and an else-branch:

if : < x 0
  display "Negative"
  display "Non-negative"

And because if is an expression, not a statement, you can use it anywhere:

define label
  if : < x 0
    . "negative"
    . "non-negative"

The . (dot) prevents the string from being treated as a function call — it marks a bare value rather than an application.

for loops

Python:

for word in ["cat", "window", "defenestrate"]:
    print(word, len(word))

Wisp:

for-each
  lambda : word
    display word
    display " "
    display : string-length word
    newline
  '("cat" "window" "defenestrate")

Scheme doesn't have a built-in for loop — iteration is done through for-each (for side effects) or map (for transformation). The lambda defines an anonymous function applied to each element.

Or using a named let (Scheme's idiomatic loop):

let loop
  : words '("cat" "window" "defenestrate")
  when : not : null? words
    display : car words
    display " "
    display : string-length : car words
    newline
    loop : cdr words

This is recursion that looks like a loop. car gets the first element, cdr gets the rest, and loop calls itself with the remaining list. Guile guarantees tail-call optimization, so this uses constant stack space — unlike Python, which would hit a recursion limit.

range

Python:

list(range(0, 10, 2))  # [0, 2, 4, 6, 8]

Wisp:

use-modules : srfi srfi-1

iota 5 0 2             ; (0 2 4 6 8)

Guile's iota from SRFI-1 generates numeric sequences. The arguments are count, start, and step — slightly different from Python's start, stop, step.

while

Python:

a, b = 0, 1
while a < 10:
    print(a)
    a, b = b, a + b

Wisp:

let loop : : a 0
             b 1
  when : < a 10
    display a
    newline
    loop b {a + b}

There's no while keyword here — it's a named let that recurses. The {a + b} uses Wisp's curly-brace infix syntax (from SRFI-105), which lets you write math the familiar way. Without it, you'd write (+ a b).

Pattern Matching

Python 3.10+:

match command:
    case "quit":
        quit_game()
    case "help":
        show_help()
    case _:
        print("Unknown")

Wisp (using Guile's match):

use-modules : ice-9 match

match command
  : "quit"
    quit-game
  : "help"
    show-help
  : _
    display "Unknown"

Guile's match supports destructuring, guards, and nested patterns — similar to Python's structural pattern matching but available since long before Python 3.10.

Functions

Python:

def factorial(n):
    """Return the factorial of n."""
    if n == 0:
        return 1
    return n * factorial(n - 1)

Wisp:

define : factorial n
  "Return the factorial of n."
  if : = n 0
    . 1
    * n : factorial {n - 1}

Functions are defined with define. The : factorial n creates (factorial n) — a function named factorial taking one argument. The docstring goes right after the parameter list. The last expression is the return value.

The Wisp version also benefits from tail-call optimization if rewritten in tail-recursive style:

define : factorial n
  let loop : : i n
               acc 1
    if : = i 0
      . acc
      loop {i - 1} {acc * i}

This handles (factorial 100000) without breaking a sweat. Python would need sys.setrecursionlimit or an iterative rewrite.

Default and Keyword Arguments

Python:

def greet(name, greeting="Hello"):
    print(f"{greeting}, {name}!")

Wisp:

use-modules : ice-9 optargs

define* : greet name #:optional (greeting "Hello")
  format #t "~a, ~a!\n" greeting name

Guile uses define* (from ice-9 optargs) for optional and keyword arguments:

define* : connect host #:key (port 8080) (timeout 30)
  format #t "Connecting to ~a:~a (timeout ~a)\n" host port timeout

connect "example.com" #:port 443
; => Connecting to example.com:443 (timeout 30)

#:optional for positional defaults, #:key for keyword arguments. More explicit than Python's unified syntax, but equally capable.

Lambda Expressions

Python:

double = lambda x: x * 2
sorted(words, key=lambda w: len(w))

Wisp:

define double
  lambda : x
    * x 2

sort words
  lambda : a b
    < (string-length a) (string-length b)

In Wisp, lambda is a full block — no single-expression restriction like Python's lambda. The indented body can contain anything.

5. Data Structures

Lists (Linked Lists)

Scheme lists are cons-cell linked lists — the fundamental data structure of all Lisps:

define fruits '("apple" "banana" "cherry")
car fruits                    ; "apple" (first)
cdr fruits                    ; ("banana" "cherry") (rest)
cons "mango" fruits           ; ("mango" "apple" "banana" "cherry")
length fruits                 ; 3
list-ref fruits 1             ; "banana"

Operation      Python      Wisp
First element  lst[0]      car lst
Rest of list   lst[1:]     cdr lst
Prepend        [x] + lst   cons x lst
Append         lst + [x]   append lst (list x)
Length         len(lst)    length lst
Nth element    lst[n]      list-ref lst n

Vectors (Arrays)

For indexed access, Scheme has vectors:

define v #(10 20 30 40 50)
vector-ref v 2               ; 30
vector-length v               ; 5

Hash Tables (Dictionaries)

Python:

tel = {"jack": 4098, "sape": 4139}
tel["guido"] = 4127
del tel["sape"]

Wisp:

define tel : make-hash-table

hashq-set! tel 'jack 4098
hashq-set! tel 'sape 4139
hashq-set! tel 'guido 4127
hashq-remove! tel 'sape

hashq-ref tel 'jack          ; 4098

Hash tables in Guile use explicit functions rather than syntax. It's more verbose, but Scheme compensates with association lists for small mappings:

define tel
  ' : (jack . 4098)
      (sape . 4139)

assoc 'jack tel               ; (jack . 4098)

Association lists are simple, immutable, and idiomatic for small key-value collections.

Sets

Guile doesn't have a built-in set type, but SRFI-113 provides them, or you can use sorted lists with lset- operations from SRFI-1:

use-modules : srfi srfi-1

define a '(1 2 3 4)
define b '(3 4 5 6)

lset-difference equal? a b     ; (1 2)
lset-union equal? a b          ; (1 2 3 4 5 6)
lset-intersection equal? a b   ; (3 4)

List Comprehensions

Python:

[x**2 for x in range(10) if x % 2 == 0]

Wisp:

use-modules : srfi srfi-42

list-ec : : i (index 10)
  if : = 0 : remainder i 2
  expt i 2
; => (0 4 16 36 64)

SRFI-42 provides eager comprehensions. The : lines create the nested structure that Scheme expects.

Or idiomatically with map and filter:

map
  lambda : x
    expt x 2
  filter even? : iota 10

Looping Techniques

Python:

for i, v in enumerate(["tic", "tac", "toe"]):
    print(i, v)

Wisp:

let loop : : i 0
             words '("tic" "tac" "toe")
  when : not : null? words
    format #t "~a ~a\n" i : car words
    loop {i + 1} : cdr words

Or using SRFI-42:

do-ec : : i (index 3)
  format #t "~a ~a\n" i
    list-ref '("tic" "tac" "toe") i

6. Modules

Importing

Python:

import math
from os.path import join
import json as j

Wisp:

use-modules : ice-9 regex

use-modules
  : srfi srfi-1
    #:select : iota fold

use-modules
  : ice-9 popen
    #:prefix popen-

Guile's module system uses use-modules. The #:select option imports specific symbols, and #:prefix namespaces them — similar to Python's from X import Y and import X as Y.

Defining Modules

Python:

# mymodule.py
def greet(name):
    return f"Hello, {name}!"

Wisp:

define-module : myproject mymodule
  . #:export : greet

define : greet name
  format #f "Hello, ~a!" name

The #:export clause explicitly lists what the module makes public — like Python's __all__ but mandatory and standard.

The Script Entry Point

Python:

if __name__ == "__main__":
    main()

Wisp scripts use a shell header that invokes Guile with the right flags:

#!/usr/bin/env bash
exec guile -L . -x .w --language=wisp -e main -s "$0" "$@"
!#

define : main args
  display "Hello from the command line!\n"

The -e main flag tells Guile to call the main function with command-line arguments.

7. Input and Output

String Formatting

Python:

name = "World"
f"Hello, {name}!"
"{:.2f}".format(3.14159)

Wisp:

define name "World"
format #f "Hello, ~a!" name           ; "Hello, World!"
format #f "~,2f" 3.14159              ; "3.14"

Guile's format uses tilde-based directives: ~a for display, ~s for write (with quotes), ~d for integer, ~f for float, ~% for newline. It's closer to Common Lisp's format than to Python's f-strings.

File I/O

Python:

with open("data.txt", encoding="utf-8") as f:
    content = f.read()

with open("output.txt", "w") as f:
    f.write("Hello\n")

Wisp:

use-modules : ice-9 textual-ports

; Read entire file
define content
  call-with-input-file "data.txt" get-string-all

; Write to file
call-with-output-file "output.txt"
  lambda : port
    display "Hello\n" port

Or using with-input-from-file for a more Python-with-like pattern:

use-modules : ice-9 rdelim    ; provides read-line

with-input-from-file "data.txt"
  lambda ()
    let loop : : line (read-line)
      when : not : eof-object? line
        display line
        newline
        loop : read-line

Scheme's port system is lower-level than Python's file objects but equally capable. Every I/O function accepts an optional port argument — display "hello" port writes to a specific destination.
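Because ports are first-class, the same display call can just as well target an in-memory string; call-with-output-string hands your lambda a port and returns whatever was written to it:

define greeting
  call-with-output-string
    lambda : port
      display "Hello, " port
      display "ports!" port

greeting                       ; "Hello, ports!"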

JSON

JSON support comes from the guile-json library, packaged separately from Guile itself:

use-modules : json

define data
  json-string->scm "{\"name\": \"Ada\"}"
; => (("name" . "Ada"))

scm->json-string '((name . "Ada"))
; => "{\"name\":\"Ada\"}"
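Since the parsed result is an ordinary alist (guile-json's default representation), the usual accessors apply:

use-modules : json

define data
  json-string->scm "{\"name\": \"Ada\"}"

assoc-ref data "name"          ; "Ada"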

8. Errors and Exceptions

Try / Catch

Python:

try:
    x = int(input("Number: "))
except ValueError as e:
    print(f"Invalid: {e}")
finally:
    print("Done")

Wisp:

use-modules : ice-9 rdelim    ; read-line lives here

catch #t
  lambda ()
    ; try body
    define x
      string->number : read-line
    when : not x
      error "Invalid number"
  lambda : key . args
    ; catch handler
    format #t "Error: ~a ~a\n" key args

display "Done\n"

Guile uses catch and throw (or the newer with-exception-handler / raise-exception). The handler is a lambda that receives the error key and arguments.

The more modern style:

use-modules : ice-9 exceptions

with-exception-handler
  lambda : exn
    format #t "Error: ~a\n" : exception-message exn
  lambda ()
    raise-exception : make-exception-with-message "not a number"
  . #:unwind? #t

Raising Exceptions

Python:

raise ValueError("something went wrong")

Wisp:

error "something went wrong"

Or with structured keys:

throw 'value-error "something went wrong"

Custom Error Types

Python creates exception classes. Guile uses symbol keys:

throw 'insufficient-funds "Need more money"
  . balance amount

Or with Guile's newer structured exception types (from the ice-9 exceptions module):

use-modules : ice-9 exceptions

define-exception-type &insufficient-funds &error
  make-insufficient-funds
  insufficient-funds?
  : balance insufficient-funds-balance
  : amount insufficient-funds-amount

More ceremony than Python's class hierarchy, but also more structured — each field is explicitly named and accessible.

9. Classes and Objects

This is where the languages diverge most sharply.

Python's Object-Oriented Approach

class Dog:
    kind = "canine"

    def __init__(self, name):
        self.name = name

    def speak(self):
        return f"{self.name} says woof!"

rex = Dog("Rex")
rex.speak()

Wisp's Approach: GOOPS

Guile has GOOPS (GNU Object-Oriented Programming System), a powerful CLOS-style object system:

use-modules : oop goops

define-class <dog> ()
  kind
    . #:init-value "canine"
    . #:getter dog-kind
  name
    . #:init-keyword #:name
    . #:getter dog-name

define-method : speak (dog <dog>)
  format #f "~a says woof!" : dog-name dog

define rex : make <dog> #:name "Rex"
speak rex                ; "Rex says woof!"

GOOPS supports multiple inheritance, multiple dispatch (methods dispatch on all argument types, not just the first), metaclasses, and method combinations. It's more powerful than Python's object system, but also more verbose.
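Multiple dispatch in action: which method runs depends on the classes of every argument, not just the first. A sketch assuming a second class <cat> alongside <dog>:

define-class <cat> ()

define-method : meets (a <dog>) (b <dog>)
  "They sniff each other."

define-method : meets (a <dog>) (b <cat>)
  "Chaos ensues."

meets rex : make <cat>         ; "Chaos ensues."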

Inheritance

Python:

class Puppy(Dog):
    def speak(self):
        return f"{self.name} says yip!"

Wisp:

define-class <puppy> (<dog>)

define-method : speak (dog <puppy>)
  format #f "~a says yip!" : dog-name dog

The Alternative: Just Use Data

Many Scheme programmers avoid GOOPS entirely and use plain data structures with functions — similar to Clojure's philosophy:

define : make-dog name
  ` : (kind . "canine")
      (name . ,name)

define : dog-speak dog
  format #f "~a says woof!" : assoc-ref dog 'name

define rex : make-dog "Rex"
dog-speak rex

This is simpler and often sufficient.

10. The Standard Library

Python's Batteries vs. Guile's Foundations

Python ships with modules for everything. Guile ships with fewer but more fundamental tools:

Python Module    Guile Equivalent
os               POSIX bindings (built-in)
sys              (ice-9 command-line)
re               (ice-9 regex)
math             Built-in exact arithmetic + (ice-9 math)
json             (json)
threading        (ice-9 threads), real OS threads, no GIL!
unittest         SRFI-64
datetime         SRFI-19
argparse         (ice-9 getopt-long)
collections      SRFIs (1, 69, 113, etc.)
http             (web client), (web server)
sqlite3          Guile-DBI

Regex

use-modules : ice-9 regex

define m : string-match "[0-9]+" "hello 42 world"
match:substring m              ; "42"
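The match structure also carries positions, and regexp-substitute/global rewrites every match (both from ice-9 regex):

use-modules : ice-9 regex

define m : string-match "[0-9]+" "hello 42 world"
match:start m                  ; 6
match:end m                    ; 8

regexp-substitute/global #f "[0-9]+" "a1 b22"
  . 'pre "N" 'post             ; "aN bN"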

Threading — No GIL

This is Guile's secret weapon. Where Python has the GIL limiting true parallelism, Guile has real POSIX threads:

use-modules : ice-9 threads

call-with-new-thread
  lambda ()
    display "Hello from another thread!\n"

par-map
  lambda : x
    * x x
  iota 10
; => (0 1 4 9 16 25 36 49 64 81)

par-map is parallel map — it distributes work across OS threads. No multiprocessing workaround, no async/await complexity. Just threads.

Web Server in 10 Lines

use-modules : web server

run-server
  lambda : request request-body
    values
      ' : (content-type . (text/plain))
      . "Hello, World!"

Guile ships with a built-in HTTP server and client. No framework needed for basic use.

11. Advanced Standard Library

Output Formatting

use-modules : ice-9 pretty-print

pretty-print
  ' : (name . "Ada")
      (languages . ("Python" "Wisp" "Scheme"))

Multi-threading (Continued)

use-modules : ice-9 threads

define mutex : make-mutex
define counter 0

define : increment
  lock-mutex mutex
  set! counter {counter + 1}
  unlock-mutex mutex

; Spawn 100 threads
let loop : : i 100
  when : > i 0
    call-with-new-thread increment
    loop {i - 1}
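ice-9 threads also provides with-mutex, which acquires the lock, runs the body, and releases the lock even if the body throws. The manual locking above can be written as:

define : increment-safe
  with-mutex mutex
    set! counter {counter + 1}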

Exact Decimal Arithmetic

Python needs the decimal module. Scheme has exact rationals built-in:

+ 1/10 2/10               ; 3/10 (exact!)
= (+ 1/10 2/10) 3/10      ; #t
* 7/10 105/100             ; 147/200 (exact!)

No Decimal class, no imports. Exact fractions are a primitive type.

12. Virtual Environments and Packages

Python has venv and pip. Guile has a different model.

Installation:

# Guile is often pre-installed on GNU/Linux
sudo apt install guile-3.0

# Wisp
sudo apt install guile-wisp
# or from source

Dependencies:
Guile uses a load-path model rather than per-project virtual environments:

export GUILE_LOAD_PATH="/path/to/library:$GUILE_LOAD_PATH"
guile --language=wisp -s myapp.w

For project-specific dependencies, the community uses GNU Guix (a functional package manager) or manual load-path management. It's less ergonomic than pip install, but aligns with the GNU ecosystem's philosophy.

There's no PyPI equivalent. Libraries are distributed through GNU Guix, OS packages, or directly as source. This is Wisp's weakest point compared to Python.

13. Floating-Point Arithmetic

Python:

>>> 0.1 + 0.2 == 0.3
False

Wisp:

= {0.1 + 0.2} 0.3             ; #f (same problem with floats!)

Same IEEE 754, same surprises. But Scheme has an escape hatch Python doesn't — exact rationals are a first-class type, not a library:

= {1/10 + 2/10} 3/10          ; #t (exact!)
exact->inexact 1/3              ; 0.3333333333333333
inexact->exact 0.1              ; 3602879701896397/36028797018963968

That last line reveals the actual value stored for 0.1 — a ratio of two large integers. Scheme lets you move freely between exact and inexact worlds.

14. Interactive Editing

Python's REPL supports readline history and tab completion.

Guile's REPL offers similar features plus meta-commands:

,help              ; show all REPL commands
,describe display  ; show documentation
,time (fib 30)     ; benchmark an expression
,profile (fib 30)  ; profile an expression
,L wisp            ; switch to Wisp syntax
,L scheme          ; switch back to Scheme

You can switch between Wisp and Scheme syntax in the same session. This is useful for learning — write in Wisp, see what Scheme it corresponds to.

15. The Secret Weapon: Macros

Like all Lisps, Wisp supports macros — compile-time code transformations. But because Wisp code looks like indented pseudocode, Wisp macros are unusually readable for Lisp macros.

Example: A Python-style for-in loop

define-syntax-rule : for-in var lst body ...
  for-each
    lambda : var
      body ...
    lst

; Now use it:
for-in word '("cat" "window" "defenestrate")
  display word
  newline

You just added a for-in construct to the language. It's not a function that takes a callback — it's real syntax that the compiler expands before execution.

Example: Unless

Guile already ships unless as a built-in, but defining it yourself takes three lines:

define-syntax-rule : unless condition body ...
  when : not condition
    body ...

unless : = 1 2
  display "Math still works\n"

Example: Time-it

define-syntax-rule : time-it body ...
  let : : start (current-time)
    body ...
    format #t "Elapsed: ~a seconds\n"
      - (current-time) start

time-it
  let loop : : i 1000000
    when : > i 0
      loop {i - 1}

Why This Matters

In Python, if you need a new control structure, you write a function with callbacks or a context manager — a workaround. In Wisp, you write a macro that generates the exact code you want, with zero runtime overhead. The result is indistinguishable from a built-in language feature.

This is the deep reason Lisp syntax exists: when code and data have the same structure, programs that write programs become natural.

The Big Picture

Dimension     Python                              Wisp
Syntax        Indentation + keywords              Indentation + prefix notation
Paradigm      Multi-paradigm (imperative-first)   Multi-paradigm (functional-first)
Type System   Dynamic                             Dynamic
Mutability    Mutable by default                  Immutable by convention
Tail Calls    No optimization (recursion limit)   Guaranteed optimization
Arithmetic    Floats by default                   Exact rationals by default
Threading     GIL limits parallelism              Real OS threads
Macros        No                                  Hygienic macros
OOP           Classes + inheritance               GOOPS (multiple dispatch) or plain data
Ecosystem     Massive (PyPI)                      Small (Guile + GNU Guix)
Runtime       CPython                             GNU Guile (JIT-compiled Scheme)
Best For      Scripts, ML/AI, web, teaching       Systems, DSLs, concurrency, extensible programs

Closing Thoughts

Wisp is not trying to replace Python. It's trying to answer a question: can Lisp be approachable?

The answer is yes. Wisp code reads like indented pseudocode. You can show it to someone who has never programmed and they'll follow the structure. But underneath that gentle surface lies one of the most powerful programming models ever devised — one where functions are data, data is code, and the language reshapes itself to fit your problem.

The ecosystem gap is real. You won't find a Wisp equivalent of NumPy or Django. But for systems programming, scripting, language design, concurrent servers, or any domain where you want to think differently about code — Wisp offers something Python cannot: a Lisp you can read at a glance.

display "Welcome to Wisp.\n"
display "Where indentation meets imagination.\n"

Get started: install Guile, install Wisp, and type ,L wisp in the Guile REPL. The parentheses are gone. The power remains.

This article was written as a companion to the official Python tutorial. Every section maps to a chapter in that tutorial, showing the same concepts through Wisp's indentation-sensitive Lisp lens. For more on Wisp, visit draketo.de/software/wisp.
