Limit concurrent HTTP connections to avoid crippling overload

Even the fastest web servers become bottlenecks when handling CPU-intensive API work. Slow response times can cripple your service, especially when paired with an API gateway that times out after 29 seconds.

I solved this using a simple middleware that eliminated 504 - Gateway Timeout responses and significantly reduced unnecessary load on my service API.

I assumed that if a single request takes 5 seconds on average, at least five requests could complete before hitting Amazon API Gateway’s 29-second timeout:

Visualization of how I assumed concurrent HTTP connections would put load on the CPU.

In practice, the behavior was completely different (though in retrospect, it makes perfect sense). CPU resources are divided equally among all concurrent requests, causing all responses to slow down proportionally:

Visualization of how the concurrent HTTP connections actually put load on the CPU.

I found myself in a situation where even a mediocre load would make the system incapable of responding within the time limit. The API gateway would return 504 - Gateway Timeout on behalf of my service, while my unaware service would occupy CPU resources for responses that would never be used for anything, slowing everything even further.

A sure way to contribute to climate change and get a high cloud bill, while delivering zero value.

Oh wait…
the HTTP response code 504 - Gateway Timeout indicates a temporary problem, so a caller is very likely to retry the request. Now multiply your already high cloud bill by the retry count.

In other words: A disaster. ☠️

An entirely different architecture, maybe involving a queue or some async response mechanism, would probably have been a better solution. But sometimes, we need to work with what we’ve got.

Since my CPU load was fairly consistent across requests, I could predict how many concurrent connections could complete within the timeout limit.

With the following middleware, I limit concurrent active connections to ensure high CPU utilization while still responding within the timeout:

(defn wrap-limit-concurrent-connections
  "Middleware that limits the number of concurrent connections to `max-connections`,
   via the atom `current-connections-atom`.
   This means that the middleware can be applied in several different places
   while still sharing an atom if necessary."
  [handler current-connections-atom max-connections]
  (fn [request]
    (let [connection-no (swap! current-connections-atom inc)]
      (try
        (if (>= max-connections connection-no)
          (handler request)
          {:status 503 :body "Service Unavailable"})
        (finally
          (swap! current-connections-atom dec))))))

The middleware implementation is very naive and assumes that the service only exposes work with a similar load profile, so that the same middleware (and coordination atom) can be reused across the service.
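To illustrate the shared-atom idea, wiring two hypothetical CPU-heavy endpoints against one budget could look like the sketch below (the handler bodies and the limit of 4 are made-up stand-ins; the middleware itself is repeated from above so the example is self-contained):

```clojure
(defn wrap-limit-concurrent-connections
  "Middleware from the article, repeated here for a runnable example."
  [handler current-connections-atom max-connections]
  (fn [request]
    (let [connection-no (swap! current-connections-atom inc)]
      (try
        (if (>= max-connections connection-no)
          (handler request)
          {:status 503 :body "Service Unavailable"})
        (finally
          (swap! current-connections-atom dec))))))

;; One shared atom: both wrapped handlers count against the same limit,
;; so at most 4 expensive requests run concurrently across them.
(def cpu-connections (atom 0))

(def limited-report   ; hypothetical CPU-heavy endpoint
  (wrap-limit-concurrent-connections
    (fn [_] {:status 200 :body "report"}) cpu-connections 4))

(def limited-score    ; second endpoint sharing the same budget
  (wrap-limit-concurrent-connections
    (fn [_] {:status 200 :body "score"}) cpu-connections 4))
```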

Though the middleware does make the 504 - Gateway Timeout responses go away, they are replaced with slightly fewer 503 - Service Unavailable errors. The important part is that the maximum possible number of 200 - OK responses is allowed through, keeping the system partially responsive while it scales up (deploying more instances).

Visualization of how the concurrent HTTP connections actually put load on the CPU with the middleware applied.

I ran tests to find the right value for max-connections for the given workload and the hardware the service was running on.

Endpoints with low CPU intensity, such as health checks, should not be wrapped in the middleware. You don’t want a service instance terminated and restarted just because the health check can’t communicate "I’m still doing important stuff."

A more sophisticated rate-limiting middleware is possible using the same scaffolding as above. Maybe something that times requests and reduces concurrency as response time goes up, or something with different weights instead of just incrementing and decrementing by one. But if this starts getting hairy, you might be better off with an entirely different architecture.
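One way the "different weights" idea could look (my sketch, not code from the original post): each wrapped endpoint declares a cost, and the shared atom tracks total in-flight load instead of a connection count.

```clojure
(defn wrap-limit-weighted-load
  "Hypothetical variant of the middleware: each endpoint declares a `cost`,
   and a request is rejected once the summed cost of in-flight requests
   (tracked in the shared `load-atom`) would exceed `max-load`."
  [handler load-atom max-load cost]
  (fn [request]
    (let [load (swap! load-atom + cost)]
      (try
        (if (<= load max-load)
          (handler request)
          {:status 503 :body "Service Unavailable"})
        (finally
          (swap! load-atom - cost))))))
```

A cheap endpoint might declare cost 1 and an expensive report endpoint cost 5, sharing one atom and one max-load budget.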

Use with caution. 💚

Permalink

Datastar Observations

I've been very impressed, so far, with Datastar [https://data-star.dev], a tiny JavaScript library for front-end work. I've been switching a personal side project from Svelte to Datastar for its UI, and as amazing as Svelte is, Datastar has impressed me more.

Datastar's essential concept is for the client to shift virtually all logic and markup rendering back to the server; event handlers can succinctly call server endpoints, which return markup that is morphed into the running DOM. This makes the server side the system of record. Datastar has a nice DSL, based on data-* attributes, that lets you do nearly anything you need to do in the client, declaratively.

Alternatively, the server can start an SSE (server-sent events) stream and send down markup to morph into the DOM, or JavaScript to execute, over any period of time. For example, my project has a long-running process, and it was a snap to create a modal progress dialog and keep it updated as the server-side process looped through its inputs.

The mantra of Datastar is to trust the morph and the browser -- it's surprisingly fast to update even when sending a fair bit of content. It feels wasteful to update a whole page just to change a few small things (say, marking a button as disabled), but it works, it's fast, and it frees you from nearly all client-side reactive updates (and all the related edge cases and unforeseen consequences).

The server side is not bound to any particular language or framework (they have API implementations for Clojure, Java, Python, Ruby, and many others) ... and you could probably write your own API in an afternoon.

I especially like side-stepping the issue of needing more representations of data; the data lives server-side, all that is ever sent to the client is markup. There's no over-the-wire representation, and no parallel client-side data model. All that's ever exposed as endpoints are intentional ones that do work and deliver markup ... in other words, always use-case based, never schema based.

There's a minimal amount of reactive logic in the client, but the essence of moving logic to the server feels like home; Tapestry (way back in 2005) had some similar ideas, but was far more limited (due to many factors, including JavaScript and browser maturity at the time).

I value simplicity, and Datastar looks to fit my needs without doing so much that is magical or hidden. I consider that a big win!

Permalink

Java in 2026: still boring, still powerful, still printing money

Let’s be honest.

Java is not sexy.
Nobody wakes up excited thinking “wow, I hope today I can write some beautiful enterprise Java code.”
Java boring meme

And yet…

Banks, fintechs, payment processors, credit engines, risk platforms, and trading systems are still massively powered by Java in 2026.
And they’re not moving away anytime soon. AI writing up to 70% of the code? Not a problem.

I have worked as a software engineer for more than 17 years, and since I started, talk of Java becoming legacy has just been part of my day.

We are in 2026: American debt is skyrocketing, BTC is melting, and the dollar is losing value... while Java? Well, it looks like this guy is a tough dinosaur.

Quick Java timeline (why this dinosaur refuses to die)

  • 1995 → Java is born. “Write once, run anywhere.”
  • 2006 → Open-sourced. Enterprise adoption explodes.
  • 2014 → Java 8. Lambdas. Streams. Real modern Java begins.
  • 2018–2021 → 6-month release cycle. JVM performance goes crazy.
  • 2021 → Java 17 LTS becomes enterprise default.
  • 2023 → Java 21 LTS ships with virtual threads (Project Loom). Massive scalability shift.
  • 2026 → Java 25 era. Cloud-native, AI-assisted dev, still dominating finance production systems.

So yeah…
Not dead. Not even close.

Why finance still trusts Java more than anything else

Financial systems do not care about hype.

They care about predictability, latency stability, memory safety, tooling maturity, and, last but not least, a huge hiring pool.

JVM stability is unmatched if you run:

  • loan engines
  • payment authorization
  • anti-fraud scoring
  • card transaction routing

JVM gives:

  • battle-tested GC (even with its problems)
  • insane observability
  • deterministic performance tuning
  • backwards compatibility across decades

Concurrency changed the game (virtual threads)

Before Java 21:

Threads were expensive and async code was ugly. Now:

try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, 1_000_000).forEach(i ->
        executor.submit(() -> processTransaction(i))
    );
}

This is millions of concurrent operations with simple blocking code and no reactive nightmare.
For fintech backends, this is huge.

Spring ecosystem still dominates enterprise

Yes, people complain about Spring Boot, but look at reality:

  • 80%+ of fintech APIs run on Spring
  • security + observability + config = solved problems
  • onboarding engineers is easy
  • production support is predictable

Example minimal API:

@RestController
@RequestMapping("/loans")
public class LoanController {

    @GetMapping("/{id}")
    public Loan getLoan(@PathVariable String id) {
        return new Loan(id, BigDecimal.valueOf(1000));
    }
}

Boring?
Yes, but every engineer can pick it up easily, even when an AI code generator hallucinates.
Finance chooses boring.

Performance is no longer an excuse

The modern JVM gives you ZGC and Shenandoah for ultra-low-latency GC, JIT compilation with runtime profiling that optimizes hot paths, and GraalVM native images for faster startup on any cloud provider.

The real shift in 2026: AI-augmented Java developers

This is where things get interesting. Guess what? Java is a big winner here: if I can't check everything that is being created, how about relying on a very deterministic, well-shaped programming language?

A simple example of putting Claude Code/GitHub Copilot to work for you.

Example: generate a Spring service instantly

Prompt in editor:
create a service that calculates compound interest with validation and unit tests

Result in seconds:

@Service
public class InterestService {

    public BigDecimal compound(
        BigDecimal principal,
        BigDecimal rate,
        int periods
    ) {
        if (principal.signum() <= 0 || rate.signum() < 0 || periods < 0) {
            throw new IllegalArgumentException("Invalid input");
        }

        return principal.multiply(
            BigDecimal.ONE.add(rate).pow(periods)
        );
    }
}

And the generated test looks just as charming...

@Test
void compound_shouldGrow() {
    var service = new InterestService();
    var result = service.compound(
        BigDecimal.valueOf(1000),
        BigDecimal.valueOf(0.1),
        2
    );

    assertEquals(new BigDecimal("1210.00"), result.setScale(2));
}

Time saved with control.

How about using Claude for refactoring and architecture?

Some things that were a nightmare before are now just a Jira task, like:

  • migrating legacy Java 8 → Java 21
  • converting blocking code → virtual threads
  • generating integration tests
  • explaining weird enterprise codebases

Real workflow:

  1. Open the legacy class inline or in chat.
  2. Ask it to modernize for Java 21 + clean architecture.
  3. Tadah: a production-ready refactor.

This is insane leverage.

Let's talk about careers now. Right now a lot of software engineers are actually losing their jobs, which is one more reason why Java careers are still strong...

Everyone wants to learn:

  1. Rust
  2. Go
  3. New AI-agent-style ways of working
  4. Shiny new things

But banks still run on Java, and guess what?

  • So supply ↓
  • Demand ↑

Opportunity.

Let's finish here...

Java in 2026 is like:
a boring Swiss bank account that quietly keeps getting richer
Not hype.
Not trendy.
But extremely powerful where it matters.

And in finance… somehow, even this boring guy is leveraging AI to stay fun and relevant.

I'm not saying stick with Java only. Java is more than a programming language; it is a technology. We have, for example, Clojure, a Lisp way of coding that runs on the JVM, and how about Kotlin? My point is to be aware and awake. Java still rocks.

posted here

Permalink

Full Stack Engineer (mid- to senior-level) at OpenMarkets Health



OpenMarkets people are…

  • Committed to driving waste out of healthcare
  • Transparent and accountable to their colleagues on a weekly basis.
  • Committed to the success of their customers and their teammates.
  • Hungry to learn by making and sharing their mistakes, as well as reading and discussing ideas with their teammates.
  • Eager to do today what most people would put off until tomorrow.

Why you want to work with us…

  • Fast-paced start-up environment with a lot of opportunity to make a large impact.
  • Passionate, dedicated colleagues with a strong vision for changing how healthcare equipment purchasing is done.
  • Opportunity to develop software to help remove wasteful spending from equipment purchasing, leaving more dollars for patient care.
  • Other benefits include comprehensive health care benefits, 401K with 4% match, pre-tax transit benefits, generous PTO, flexible maternity/family leave options and the ability to work remotely.

Apply today if you are someone…

  • Who is proficient in Clojure and ClojureScript (bonus if you're also familiar with Ruby on Rails).
  • Who knows (or is willing to learn) re-frame and Reagent Forms.
  • Who practices test-driven development.
  • Who has written software for at least 4 years.
  • Who is empathetic towards their team, understands the tradeoffs in their implementations, and communicates their code effectively.
  • Who can speak and write in non-technical terms, and believes in the value of effective time management.

We want everyone. OpenMarkets is an equal opportunity employer. We believe that we can only make healthcare work for everyone if we get everyone to work on it. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Permalink

Python Only Has One Real Competitor


by: Ethan McCue

Clickbait subtitle: "and it's not even close"


Python is the undisputed monarch in exactly one domain: Data Science.

The story, as I understand it, goes something like this:

Python has very straightforward interop with native code. Because interop is straightforward, libraries like numpy and pandas got made. Around these an entire Data Science ecosystem bloomed.

This in turn gave rise to interactive notebooks with IPython (now "Jupyter"), plotting with Matplotlib, and machine learning with PyTorch and company.

There are other languages and platforms - like R, MATLAB, etc. - which compete for users with Python. If you have a field biologist out there wading through the muck to measure turtles, they will probably make their charts and papers with whatever they learned in school.

But the one glaring weakness of these competitors is that they are not general purpose languages. Python is. This means that Python is also widely used for other kinds of programs - such as HTTP servers.

So for the people who are training machine learning models to do things like classify spam it is easiest to then serve those models using the same language and libraries that were used to produce them. This can be done without much risk because you can assume Python will have things like, say, a Kafka library should you need it. You can't really say the same for MATLAB.

And in the rare circumstance you want to do something that is easiest in one of those alternative ecosystems, Python will have a binding. You can call R with rpy2, MATLAB with another library, Spark with PySpark, and so on.

For something to legitimately be a competitor to Python I think it needs to do two things.

  1. Be at least as good at everything as Python.
  2. Be better than Python in a way that matters.

The only language which clears this bar is called Clojure.

Clojure Language Logo


That's a bold claim, I know. Hear me out.

Clojure has a rich ecosystem of Data Science libraries, including feature-complete numpy and pandas equivalents in dtype-next and tech.ml.dataset, metamorph.ml for Machine Learning pipelines, Tableplot for plotting, and Clay for interactive notebooks.

For what isn't covered it can call Python directly via libpython-clj, R via ClojisR, and so on.
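For a small taste of what this looks like, here is a minimal sketch using tech.ml.dataset (assuming its tech.v3.dataset namespace is on the classpath; the column names and data are made up):

```clojure
(require '[tech.v3.dataset :as ds])

;; Build a column-oriented dataset from a plain Clojure map,
;; much like constructing a pandas DataFrame from a dict.
(def turtles
  (ds/->dataset {:species ["box" "box" "snapping"]
                 :mass-kg [0.5 0.7 6.2]}))

;; Datasets can be inspected interactively at the REPL.
(ds/row-count turtles)     ;; number of rows
(ds/column-names turtles)  ;; the column keys
```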

Clojure is also a general purpose language. Making HTTP servers or whatever else in Clojure is very practical.

What starts to give Clojure the edge is also the answer to the age-old question: "Why is Python so slow?"

Python is slow because it cannot be made fast. The dark side of Python's easy interop with native code is that many of the implementation details of CPython were made visible to, and relied upon by, authors of native bindings.

Because all these details were relied upon, the authors of the CPython runtime can't really change those details and not break the entire Data Science ecosystem. This heavily constrains the optimizations that the CPython runtime can do.

This means that people need to constantly avoid writing CPU intensive code in Python. It is orders of magnitude faster to use something which delegates to the native world than something written in pure Python. This affects the experience of things like numpy and pandas. There is often a "fast way" to do something and several "slow ways." The slow ways are always when too much actual Python code gets involved in the work.

Clojure does not have this problem. Clojure is a language that runs on the Java Virtual Machine. The JVM can optimize code like crazy on account of all the souls sacrificed to it. So you can write real logic in Clojure no issue.

There's a reason Python's list is implemented in C code but Java can have multiple competing implementations, all written in Java. Java code can count on some aggressive runtime optimizations when it matters.

This also means that if you use Clojure for something like an HTTP Server to serve a model, you can generally expect much better performance at scale than the equivalent in Python. You could even write that part in pure Java to make use of that trained pool of developers. Anecdotally, startups often switch from whatever language they started with to something that runs on the JVM once they get big enough to care about performance.

Clojure's library ecosystem includes many high quality libraries written in Java. Many of these are better performing than their Python analogues. Many also do things for which Python has no equivalent. Clojure then gets access to all Python libraries via libpython-clj.

Clojure's interop story is also quite strong at the language level. Calling a Python function is almost as little friction linguistically as calling a Clojure function. Calling native code with coffi is also pretty darn simple.

The language is also very small even compared to Python. Obviously the education system infrastructure is not in place, but in principle there is less to learn about the language itself before one can productively learn how to do Data Science.

An extremely important part of productive Data Science work is interacting with a dataset. This is why interactive notebooks are such a big part of this world. It's also a benefit of using dynamic languages like Python and Clojure. Being able to run quick experiments and poke at data is more important than static type information.

Clojure is part of a family of languages with a unique method of interactive development. This method is considered by its fans to be superior to the cell-based notebooks that Jupyter provides.

All in all, it's a competitive package. Whether it ever gets big enough to take a big bite of Python comes down to kismet, but I think it's the only thing that might stand a chance to.


If this got you interested in learning Clojure check out Clojure Camp for resources and noj for a cohesive introduction to the Data Science ecosystem.



Permalink

Clojure’s Persistent Data Structures: Immutability Without the Performance Hit

How structural sharing makes immutable collections fast enough to be the default choice in functional programming In most programming languages, immutability is a performance compromise. Make your data structures immutable, the thinking goes, and prepare to pay the cost in memory and speed. Every modification means a full copy. Every update means allocating new memory. …

Permalink

Clojure Deref (Feb 3, 2026)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

Clojure Dev Call

Join the Clojure core team for an update on what we’ve been working on and what’s on our horizon. We’ll save time for a Q&A, so bring your questions. Feb 10 @ 18:00 UTC. Register here.

Upcoming Events

Libraries and Tools

Debut release

  • clojure-elisp - A Clojure dialect that compiles to Emacs Lisp

  • yamlstar - A YAML framework for all programming languages

  • yggdrasil - Git-like, causal space-time lattice abstraction over systems supporting this memory model.

  • chrondb - Chronological key/value Database built on Git architecture with complete version history

  • pruner - A TreeSitter-powered formatter orchestrator

  • ouroboros - An AI vibe-coding game using babashka

  • ridley - A turtle graphics-based 3D modeling tool for 3D printing. Write Clojure scripts, see real-time 3D preview, export STL. WebXR support for VR/AR visualization.

  • restructure - Rewrite nested Clojure data with a declared shape.

  • hipflask - Offline-first, real-time collaboration for ClojureScript

  • plumcp - Clojure/ClojureScript library for making MCP server and client

Updates

  • clara-eql 0.2.1 - Generate Clara rules to collect data from EDN Query Language queries

  • monkeyci 0.23.1 - Next-generation CI/CD tool that uses the full power of Clojure!

  • unlazy 1.1.0 - Configuration for clj-kondo, discouraging lazy processing

  • next-jdbc 1.3.1093 - A modern low-level Clojure wrapper for JDBC-based access to databases.

  • cursive 2026.1-eap1 - Cursive: The IDE for beautiful Clojure code

  • hive-mcp 0.11.0 - MCP server for hive-framework development, highly integrated to emacs

  • clay 2.0.9 - A REPL-friendly Clojure tool for notebooks and datavis

  • kindly 4-beta23 - A small library for defining how different kinds of things should be rendered

  • calva 2.0.547 - Clojure & ClojureScript Interactive Programming for VS Code

  • html 0.2.3 - Html generation library inspired by squint’s html tag

  • bankster 2.1.1 - Money as data, done right.

  • aws-api 0.8.800 - AWS, data driven

  • datalevin 0.10.4 - A simple, fast and versatile Datalog database

Permalink

Clojure Developer at 3E nv

Clojure Developer at 3E nv


Our SaaS Solution

SynaptiQ is 3E’s SaaS product, an independent, evolutionary software suite for asset management of renewable energy portfolios. SynaptiQ collects near-real-time data from more than 20 million devices spread over tens of thousands of utility-scale and commercial solar and wind sites all over the globe.

We develop and operate advanced analytical services to enrich the monitoring data by:

  • satellite imaging data,
  • meteorological modelling,
  • advanced system modelling,
  • machine learning & artificial intelligence.

The platform combines domains related to big data, high-performance processing, IoT protocols, and AI, and is the product of the interactions between a multidisciplinary team of developers, scientists, renewable energy architects, electrical engineers, and enthusiastic salespeople who implement, operate, and commercialize SynaptiQ worldwide.

The added value realized by SynaptiQ is performance improvement and operational cost reduction for its Operations & Maintenance customers.

What you will be doing

We are looking for a Back-End Developer with experience in Clojure and a passion for creating efficient, scalable, and accessible back-end systems. At 3E, you will have the opportunity to work on meaningful projects that contribute to the advancement of renewable energy technologies and digitalization.

Responsibilities

  • Develop, test, and maintain robust, performant, and scalable back-end codebase in accordance with design and product requirements.
  • Identify and optimize performance bottlenecks in the application (and MySQL db) using best practices and techniques.
  • Use Polylith, Reitit, Malli to build APIs which are efficient and maintainable.
  • Design and develop efficient business-logic processors that act on data.
  • Conduct thorough code reviews and enforce good back-end practices to ensure quality and maintainability of code.
  • Collaborate with multidisciplinary teams of developers, scientists, renewable energy architects, electrical engineers, and sales enthusiasts to achieve project goals (in-person or via email/MS Teams).
  • Track project evolution (Jira).
  • Maintain a codebase that is easy to understand, modify, and extend, and adhere to coding standards and best practices.

Requirements

To fulfil this role we are looking for someone with:

  • Minimum of 3 years of experience in Clojure OR significant open-source contributions which can show a significant level of skill.
  • A proven track record of optimizing performance bottlenecks, enforcing good back-end practices.
  • Good understanding of performance optimization techniques in the context of Clojure & MySQL.
  • Product-based experience – supporting and modifying a product through several years – living with the decisions of the past and building on top of them.
  • Experience with refactoring a codebase as new features are written.

Bonus points for:

  • Good profiling skills (JVM profiler).
  • Other Lisp languages (e.g. Common Lisp with SBCL).
  • Knowledge of Docker

Benefits

Our offices are hidden in the centre of Brussels with a view on a pond, with ducks and a heron bringing a regular visit. In addition to a stimulating atmosphere in a highly motivated group of people, 3E offers a unique opportunity to further develop yourself in a company/team with an ambitious growth plan, delivering innovative services.

Furthermore:

  • Flexible & gliding working hours
  • Open to fully remote candidates
  • An international environment with colleagues of 25+ nationalities and projects in over 100 countries.
  • An open-minded company where everybody can bring their ideas to the table

Permalink

Building AI agents in practice with Clojure

During Clojure South, Marlon Silva, Senior Software Engineer at Nubank, shared his perspective on a recurring challenge for software engineers working with AI today: how to move beyond using AI assistants and start engineering reliable, task-oriented AI agents.

According to Marlon, the industry has made it increasingly easy to consume AI — APIs, copilots, and assistants are everywhere — but building AI-powered systems still requires engineers to make a series of low-level, often uncomfortable decisions. In his talk, he focused on demystifying those decisions, framing AI not as a black box but as infrastructure and integrations that need to be reasoned about explicitly.

Rather than presenting a new framework or abstraction layer, Marlon walked through the architectural choices he believes matter most when building agents in practice, and explained why Clojure offers a particularly strong foundation for exploring this space.

Infrastructure first: where your models live matters

Marlon started by arguing that any serious AI initiative begins with infrastructure — specifically, how teams access models. While this decision is often treated as an implementation detail, he emphasized that it directly impacts scalability, experimentation, security, and integration with existing systems.

From his perspective, engineers typically face two options:

  • Direct AI vendors: Marlon noted that these providers are an excellent entry point for individual developers. Signing up is straightforward, APIs are well-documented, and it is possible to start experimenting almost immediately. For learning and early exploration, this path minimizes friction.
  • Cloud providers: For organizations, however, Marlon argued that leveraging existing cloud relationships is usually the better long-term decision. Most companies already have accounts, billing, security controls, and observability in place. Cloud Providers like AWS and GCP make it possible to access models from multiple AI Labs without introducing new suppliers into the stack.

According to Marlon, when teams are operating inside an organization, the most pragmatic default is to use the models already available through their cloud providers. This removes procurement overhead and allows engineers to focus on building systems instead of managing vendors.

The fragmentation problem: too many APIs

Once access to models is established, Marlon pointed out a second, inevitable problem: API fragmentation. Each provider exposes different request formats, parameters, and SDKs, which quickly complicates development and makes experimentation costly.

In his talk, Marlon described this fragmentation as one of the first scaling pain points teams encounter when AI moves beyond a single script or proof of concept.

To address this, he introduced LiteLLM as a practical unification layer. LiteLLM is a proxy that standardizes access to multiple models and providers behind a single API, regardless of where the model is hosted.

Marlon highlighted three concrete benefits of this approach:

  • A unified interface, allowing teams to switch models without rewriting integration code.
  • Centralized observability, creating a single point for logging, debugging, and auditing model interactions.
  • Cost control, which he emphasized as critical. Token usage scales quickly in production, and LiteLLM enables organizations to track and limit usage per team, service, or key.

From Marlon’s perspective, this kind of proxy is not an optimization: it becomes foundational infrastructure as soon as AI is part of a real system.

Why smaller models often work better for agents

Marlon then challenged a common assumption in the AI space: that larger models are always better.

For task-oriented AI agents, he argued, this is rarely true. Citing recent research from Nvidia, Marlon explained that Small Language Models (SLMs) are often a better fit for agents designed to execute specific, well-defined actions.

According to him, the broad generalization capabilities of large LLMs, models capable of writing essays or long-form prose, are unnecessary for most agent workloads. Using them in these contexts leads to wasted capacity, higher costs, and increased complexity.

He outlined several practical advantages of SLMs:

  • Cost efficiency: models like Llama 4 Scout on AWS Bedrock cost orders of magnitude less per token than large proprietary models.
  • Lower energy consumption: making them a more responsible choice at scale and more eco-friendly.
  • Feasible fine-tuning: adapting a 7–10B parameter model to a specific domain is realistic, whereas doing the same with very large models often is not realistic for most companies.

For Marlon, choosing SLMs is not a compromise; it is an engineering decision aligned with the actual requirements of agent-based systems.

Why Clojure fits this problem space

From there, Marlon shifted focus to tooling. He explained that AI development is inherently experimental: prompts change, parameters are adjusted, models are swapped, and assumptions are constantly tested. In that context, developer feedback loops matter. This is where Clojure stands out.

Marlon described Clojure as an ergonomic language — not because of syntax alone, but because of its REPL-driven development model. The ability to evaluate functions incrementally, inspect results immediately, and iterate without restarting an application fundamentally changes how engineers explore problem spaces.

In his experience, this interactive workflow aligns closely with how AI systems are built and refined.

Beyond the REPL, Marlon highlighted two interoperability advantages:

  • Java interoperability: Because Clojure runs on the JVM, it has seamless access to the Java ecosystem. Cloud SDKs, HTTP clients, observability tools, and mature libraries are immediately available.
  • Python interoperability: With libraries such as libpython-clj, Clojure can import and execute Python code directly. While not as seamless as Java interop, this capability allows engineers to reuse Python-based AI tooling without abandoning Clojure’s interactive workflow.

For Marlon, this combination makes Clojure a strong orchestration layer for AI systems that need to integrate with multiple ecosystems.

From theory to practice: the live demonstration

To make these ideas concrete, Marlon walked through a live demonstration.

He started with simple Python scripts that sent text, images, and PDFs to Bedrock-hosted models via a local LiteLLM proxy. These examples established a baseline using familiar tooling.

Next, he reproduced the same workflows in Clojure by importing the Python LiteLLM package directly into the Clojure runtime. Python functions were called from Clojure code, with inputs and outputs handled interactively.

According to Marlon, the most important part of the demo was not that this approach works, but how it changes the development experience. Python-based workflows often require frequent context switching — editing files, running scripts, and restarting processes. With Clojure and the REPL, the entire feedback loop stays inside the editor.

For exploratory domains like AI, Marlon argued, this difference directly translates into faster iteration and deeper focus.

Conclusion

Marlon closed by emphasizing that AI agents remain a young and rapidly evolving field. Libraries, architectures, and best practices are still in flux, which makes flexibility a key requirement.

He also offered a note of caution: granting models excessive autonomy without clear boundaries and controls can lead to fragile systems. In his view, frameworks that favour obscure control flow and rely mostly on the LLM itself encourage a “ship and pray” approach to AI development. At the same time, Marlon pointed out that new agent architectures are actively emerging from research groups at organizations like Nvidia and DeepMind, signaling that significant changes are still ahead.

His conclusion was pragmatic: combining well-chosen infrastructure, models sized to the problem, and tools that favor exploration creates a solid foundation for building AI systems grounded in engineering discipline. The repository shared during the talk serves as a starting point for engineers interested in continuing that exploration.

References

Small Language Models are the Future of Agentic AI

AlphaEvolve: A coding agent for scientific and algorithmic discovery

GitHub – marlonjsilva/clj-agents

GitHub – clj-python/libpython-clj

The post Building AI agents in practice with Clojure appeared first on Building Nubank.

Permalink

I am sorry, but everyone is getting syntax highlighting wrong

Translations: Russian

Syntax highlighting is a tool. It can help you read code faster. Find things quicker. Orient yourself in a large file.

Like any tool, it can be used correctly or incorrectly. Let’s see how to use syntax highlighting to help you work.

Christmas Lights Diarrhea

Most color themes have a unique bright color for literally everything: one for variables, another for language keywords, constants, punctuation, functions, classes, calls, comments, etc.

Sometimes it gets so bad one can’t see the base text color: everything is highlighted. What’s the base text color here?

The problem with that is, if everything is highlighted, nothing stands out. Your eye adapts and considers it a new norm: everything is bright and shiny, and instead of getting separated, it all blends together.

Here’s a quick test. Try to find the function definition here:

and here:

See what I mean?

So yeah, unfortunately, you can’t just highlight everything. You have to make decisions: what is more important, what is less. What should stand out, what shouldn’t.

Highlighting everything is like assigning “top priority” to every task in Linear. It only works if most of the tasks have lesser priorities.

If everything is highlighted, nothing is highlighted.

Enough colors to remember

There are two main use-cases you want your color theme to address:

  1. Look at something and tell what it is by its color (you can tell by reading text, yes, but why do you need syntax highlighting then?)
  2. Search for something. You want to know what to look for (which color).

1 is a direct index lookup: color → type of thing.

2 is a reverse lookup: type of thing → color.

Truth is, most people don’t do these lookups at all. They might think they do, but in reality, they don’t.

Let me illustrate. Before:

After:

Can you see it? I misspelled return as retunr, and its color switched from red to purple.

I can’t.

Here’s another test. Close your eyes (not yet! Finish this sentence first) and try to remember: what color does your color theme use for class names?

Can you?

If the answer to both questions is “no”, then your color theme is not functional. It might give you comfort (as in—I feel safe. If it’s highlighted, it’s probably code) but you can’t use it as a tool. It doesn’t help you.

What’s the solution? Have an absolute minimum of colors. So little that they all fit in your head at once. For example, my color theme, Alabaster, only uses four:

  • Green for strings
  • Purple for constants
  • Yellow for comments
  • Light blue for top-level definitions

That’s it! And I was able to type it all from memory, too. This minimalism allows me to actually do lookups: if I’m looking for a string, I know it will be green. If I’m looking at something yellow, I know it’s a comment.

Limit the number of different colors to what you can remember.
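The two lookups described above only work when the mapping is small enough to memorize. Here is a quick sketch in Python using Alabaster's four categories; representing colors as plain strings is a simplification for illustration:

```python
# A functional theme is a tiny two-way mapping. With only four entries,
# both lookup directions fit in your head.
PALETTE = {
    "string": "green",
    "constant": "purple",
    "comment": "yellow",
    "top-level definition": "light blue",
}

# Reverse lookup: color -> type of thing ("what is this green thing?")
REVERSE = {color: kind for kind, color in PALETTE.items()}

assert PALETTE["comment"] == "yellow"   # type of thing -> color
assert REVERSE["green"] == "string"     # color -> type of thing
```

With a dozen colors, the `REVERSE` lookup is exactly the one your brain can no longer perform.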

If you swap green and purple in my editor, it’ll be a catastrophe. If somebody swapped colors in yours, would you even notice?

What should you highlight?

Something there isn’t a lot of. Remember—we want highlights to stand out. That’s why I don’t highlight variables or function calls—they are everywhere, your code is probably 75% variable names and function calls.

I do highlight constants (numbers, strings). These are usually used more sparingly and often are reference points—a lot of logic paths start from constants.

Top-level definitions are another good idea. They give you an idea of a structure quickly.

Punctuation: greying it out helps to separate names from syntax a little bit, and you care about names first, especially when quickly scanning code.

Please, please don’t highlight language keywords: class, function, if, else, stuff like this. You rarely look for them: “where’s that if” is a valid question, but you will be looking not at the if keyword itself, but at the condition after it. The condition is the important, distinguishing part. The keyword is not.

Highlight names and constants. Grey out punctuation. Don’t highlight language keywords.

Comments are important

The tradition of using grey for comments comes from the times when people were paid by line. If you have something like

of course you would want to grey it out! This is bullshit text that doesn’t add anything and was written to be ignored.

But for good comments, the situation is opposite. Good comments ADD to the code. They explain something that couldn’t be expressed directly. They are important.

So here’s another controversial idea:

Comments should be highlighted, not hidden away.

Use bold colors, draw attention to them. Don’t shy away. If somebody took the time to tell you something, then you want to read it.

Two types of comments

Another secret nobody is talking about is that there are two types of comments:

  1. Explanations
  2. Disabled code

Most languages don’t distinguish between those, so there’s not much you can do syntax-wise. But sometimes there’s a convention (e.g. -- vs /* */ in SQL); when there is, use it!

Here’s a real example from Clojure codebase that makes perfect use of two types of comments:

Disabled code is gray, explanation is bright yellow

Light or dark?

Per statistics, 70% of developers prefer dark themes. Being in the other 30%, that question always puzzled me. Why?

And I think I have an answer. Here’s a typical dark theme:

and here’s a light one:

On the latter one, colors are way less vibrant. Here, I picked them out for you:

Notice how many colors there are. No one can remember that many.

This is because dark colors are in general less distinguishable and more muddy. Look at the hue scale as we move brightness down:

Basically, in the dark part of the spectrum, you just get fewer colors to play with. There’s no “dark yellow” or good-looking “dark teal”.
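The “fewer colors in the dark” claim can be checked numerically. In the HSV model, every RGB component scales linearly with the value (brightness) channel, so two fully saturated hues drift closer together in RGB space as brightness drops. A rough sketch (RGB distance is a crude stand-in for perceptual difference, but the direction of the effect holds):

```python
import colorsys

def rgb_dist(a, b):
    """Euclidean distance between two RGB triples in [0, 1]^3."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def hue_separation(h1, h2, value):
    """How far apart two fully saturated hues land in RGB at a given brightness."""
    return rgb_dist(colorsys.hsv_to_rgb(h1, 1.0, value),
                    colorsys.hsv_to_rgb(h2, 1.0, value))

bright = hue_separation(60 / 360, 180 / 360, 1.0)  # yellow vs teal, full brightness
dark = hue_separation(60 / 360, 180 / 360, 0.3)    # same two hues, dimmed

# RGB components scale linearly with value, so at value=0.3 the same
# two hues sit roughly 3x closer together than at full brightness:
assert dark < bright
```

In other words: dimming the palette compresses it, and no clever hue choice undoes that.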

Nothing can be done here. There are no magic colors hiding somewhere that have both good contrast on a white background and look good at the same time. By choosing a light theme, you are dooming yourself to a very limited, bad-looking, barely distinguishable set of dark colors.

So it makes sense. Dark themes do look better. Or rather: light ones can’t look good. Science ¯\_(ツ)_/¯

But!

But.

There is one trick you can do that I don’t see a lot of: use background colors! Compare:

The first one has nice colors, but the contrast is too low: letters become hard to read.

The second one has good contrast, but you can barely see colors.

The last one has both: high contrast and clean, vibrant colors. Lighter colors are readable even on a white background since they fill a lot more area. Text is the same brightness as in the second example, yet it gives the impression of clearer color. It’s all upside, really.

UI designers have known about this trick for a while, but I rarely see it applied in code editors:

If your editor supports choosing background color, give it a try. It might open light themes for you.

Bold and italics

Don’t use them. This goes into the same category as too many colors: it’s just another way to highlight something, and you don’t need too many ways, because you can’t highlight everything.

In theory, you might try to replace colors with typography. Would that work? I don’t know. I haven’t seen any examples.

Using italics and bold instead of colors

Myth of number-based perfection

Some themes pay too much attention to being scientifically uniform. Like, all colors have the same exact lightness, and hues are distributed evenly on a circle.

This could be nice (especially if you have OCD), but in practice, it doesn’t work as well as it sounds:

OkLab l=0.7473 c=0.1253 h=0, 45, 90, 135, 180, 225, 270, 315

The idea of highlighting is to make things stand out. If you make all colors the same lightness and chroma, they will look very similar to each other, and it’ll be hard to tell them apart.

Our eyes are way more sensitive to differences in lightness than in color, and we should use it, not try to negate it.

Let’s design a color theme together

Let’s apply these principles step by step and see where it leads us. We start with the theme from the start of this post:

First, let’s remove highlighting from language keywords and re-introduce base text color:

Next, we remove color from variable usage:

and from function/method invocation:

The thinking is that your code is mostly references to variables and method invocation. If we highlight those, we’ll have to highlight more than 75% of your code.

Notice that we’ve kept variable declarations. These are not as ubiquitous and help you quickly answer a common question: where does this thing come from?

Next, let’s tone down punctuation:

I prefer to dim it a little bit because it helps names stand out more. Names alone can give you the general idea of what’s going on, and the exact configuration of brackets is rarely equally important.

But you might roll with base color punctuation, too:

Okay, getting close. Let’s highlight comments:

We don’t use red here because you usually need it for squiggly lines and errors.

This is still one color too many, so I unify numbers and strings to both use green:

Finally, let’s rotate colors a bit. We want to respect nesting logic, so function declarations should be brighter (yellow) than variable declarations (blue).

Compare with what we started:

In my opinion, we got a much more workable color theme: it’s easier on the eyes and helps you find stuff faster.

Shameless plug time

I’ve been applying these principles for about 8 years now.

I call this theme Alabaster and I’ve built it a couple of times for the editors I used:

It’s also been ported to many other editors and terminals; the most complete list is probably here. If your editor is not on the list, try searching for it by name—it might be built-in already! I always wondered where these color themes come from, and now I became an author of one (and I still don’t know).

Feel free to use Alabaster as is or build your own theme using the principles outlined in the article—either is fine by me.

As for the principles themselves, they worked out fantastically for me. I’ve never wanted to go back, and just one look at any “traditional” color theme gives me a scare now.

I suspect that the only reason we don’t see more restrained color themes is that people never really thought about it. Well, this is your wake-up call. I hope this will inspire people to use color more deliberately and to change the default way we build and use color themes.

Permalink

Statistics made simple

I have a weird relationship with statistics: on one hand, I try not to look at it too often. Maybe once or twice a year. It’s because analytics is not actionable: what difference does it make if a thousand people saw my article or ten thousand?

I mean, sure, you might try to guess people’s tastes and only write about what’s popular, but that will destroy your soul pretty quickly.

On the other hand, I feel nervous when something is not accounted for, recorded, or saved for future reference. I might not need it now, but what if ten years later I change my mind?

Seeing your readers also helps to know you are not writing into the void. So I really don’t need much, something very basic: the number of readers per day/per article, maybe, would be enough.

Final piece of the puzzle: I self-host my web projects, and I use an old-fashioned web server instead of delegating that task to Nginx.

Static sites are popular and for a good reason: they are fast, lightweight, and fulfil their function. I, on the other hand, might have an unfinished gestalt or two: I want to feel the full power of the computer when serving my web pages, to be able to do fun stuff that is beyond static pages. I need that freedom that comes with a full programming language at your disposal. I want to program my own web server (in Clojure, sorry everybody else).

Existing options

All this led me on a quest for a statistics solution that would uniquely fit my needs. Google Analytics was out: bloated, not privacy-friendly, terrible UX, Google is evil, etc.

What is going on?

Some other JS solution might’ve been possible, but still questionable: SaaS? Paid? Will they be around in 10 years? Self-host? Are their cookies GDPR-compliant? How to count RSS feeds?

Nginx has access logs, so I tried server-side statistics that feed off those (namely, Goatcounter). Easy to set up, but then I needed to create domains for them, manage accounts, monitor the process, and it wasn’t even performant enough on my server/request volume!

My solution

So I ended up building my own. You are welcome to join, if your constraints are similar to mine. This is how it looks:

It’s pretty basic, but does a few things that were important to me.

Setup

Extremely easy to set up. And I mean it as a feature.

Just add our middleware to your Ring stack and get everything automatically: collecting and reporting.

(def app
  (-> routes
    ...
    (ring.middleware.params/wrap-params)
    (ring.middleware.cookies/wrap-cookies)
    ...
    (clj-simple-stats.core/wrap-stats))) ;; <-- just add this

It’s zero setup in the best sense: nothing to configure, nothing to monitor, minimal dependency. It starts to work immediately and doesn’t ask anything from you, ever.

See, you already have your web server, why not reuse all the setup you did for it anyway?

Request types

We distinguish between request types. In my case, I am only interested in live people, so I count them separately from RSS feed requests, favicon requests, redirects, wrong URLs, and bots. Bots are particularly active these days. Gotta get that AI training data from somewhere.

RSS feeds are live people in a sense, so extra work was done to count them properly. The same reader requesting feed.xml 100 times in a day will only count as one request.
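One way to sketch that dedup rule: treat a feed request as unique per (day, IP, User-Agent) triple. This is an illustration of the idea in Python, not clj-simple-stats' actual implementation:

```python
from datetime import datetime

def unique_feed_readers(hits):
    """Count each (IP, User-Agent) pair at most once per calendar day.

    `hits` is an iterable of (timestamp, ip, user_agent) tuples.
    A sketch of the dedup idea, not the library's actual logic.
    """
    seen = set()
    for ts, ip, ua in hits:
        seen.add((ts.date(), ip, ua))
    return len(seen)

# The same reader hammering feed.xml 100 times in one day counts once:
hits = [(datetime(2026, 1, 1, hour % 24), "203.0.113.7", "NetNewsWire")
        for hour in range(100)]
one_day = unique_feed_readers(hits)

# A request on the next day counts again:
hits.append((datetime(2026, 1, 2, 9), "203.0.113.7", "NetNewsWire"))
two_days = unique_feed_readers(hits)
```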

Hosted RSS readers often report user count in User-Agent, like this:

Feedly/1.0 (+http://www.feedly.com/fetcher.html; 457 subscribers; like FeedFetcher-Google)

Mozilla/5.0 (compatible; BazQux/2.4; +https://bazqux.com/fetcher; 6 subscribers)

Feedbin feed-id:1373711 - 142 subscribers

My personal respect and thank you to everybody on this list. I see you.
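Those User-Agent strings can be mined for subscriber counts with a simple pattern. A hypothetical helper, not part of the library's actual API:

```python
import re

# Hosted readers advertise their audience as "<N> subscribers" somewhere
# in the User-Agent string; single-user clients don't.
SUBSCRIBERS_RE = re.compile(r"(\d+)\s+subscribers?", re.IGNORECASE)

def subscriber_count(user_agent):
    """Return the subscriber count a hosted RSS reader reports in its
    User-Agent, or 1 for a regular single-user client."""
    m = SUBSCRIBERS_RE.search(user_agent)
    return int(m.group(1)) if m else 1

assert subscriber_count(
    "Feedly/1.0 (+http://www.feedly.com/fetcher.html; 457 subscribers)") == 457
assert subscriber_count("Feedbin feed-id:1373711 - 142 subscribers") == 142
assert subscriber_count("NetNewsWire (RSS Reader)") == 1
```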

Graphs

Visualization is important, and so is choosing the correct graph type. This is wrong:

A continuous line suggests interpolation. It reads as if between 1 visit at 5 am and 11 visits at 6 am there were points with 2, 3, 5, 9 visits in between. Maybe even 5.5 visits! That is not the case.

This is how a semantically correct version of that graph should look:

Some attention was also paid to having reasonable labels on axes. You won’t see something like 117, 234, 10875. We always choose round numbers appropriate to the scale: 100, 200, 500, 1K etc.
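Picking “round numbers appropriate to the scale” is the classic 1-2-5 progression. Here is a sketch of how such a label ceiling might be computed; this is an illustration, not the library's actual code:

```python
import math

def nice_ceiling(x):
    """Round up to the nearest 'round' axis label: 1, 2, or 5 times a
    power of ten (..., 100, 200, 500, 1000, ...)."""
    if x <= 0:
        return 0
    exp = math.floor(math.log10(x))
    for mult in (1, 2, 5, 10):
        candidate = mult * 10 ** exp
        if candidate >= x:
            return candidate

assert nice_ceiling(117) == 200
assert nice_ceiling(234) == 500
assert nice_ceiling(10875) == 20000
```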

It goes without saying that all graphs share the same vertical scale and synchronized horizontal scroll.

Insights

We don’t offer much (as I don’t need much), but you can narrow reports down by page, query, referrer, user agent, and any date slice.

Not implemented (yet)

It would be nice to have some insights into “What was this spike caused by?”

Some basic breakdown by country would be nice. I do have IP addresses (for what they are worth), but I need a way to package GeoIP data into some reasonable size (under 1 MB, preferably; some loss of resolution is okay).

Finally, one thing I am really interested in is “Who wrote about me?” I do have referrers; the only question is how to separate signal from noise.

Performance. DuckDB is a good sport: it compresses data and runs columnar queries, so storing extra columns per row doesn’t affect query performance. Still, each dashboard hit is a query across the entire database, which at this moment (~3 years of data) sits around 600 MiB. I definitely need to look into building some pre-calculated aggregates.

One day.

How to get

Head to github.com/tonsky/clj-simple-stats and follow the instructions:

Let me know what you think! Is it usable to you? What could be improved?

Permalink

FFI with GraalVM Native Image: The Real Work of Maintaining a Library That Crosses Language Boundaries

Exposing a library via FFI seems simple on paper.

You compile your code to .so, export some functions with extern "C", write bindings in the target language. Done, interoperability achieved.

Except that's not how it works in practice. At least not when your library is a database written in Clojure, compiled with GraalVM native-image, that needs to manage Git and Lucene internally, and will be called from Rust in environments ranging from dev laptops to CI containers with 8 MB of stack.

Permalink

Episode 12: It allows a small team to achieve really great goals, with Marcin Maicki, Dentons

Episode 12 of "Clojure in product. Would you do it again?" is live — Marcin Maicki, Global Data Developer & Lead Developer at Dentons, joins Artem Barmin and Vadym Kostiuk to talk about running Clojure inside a large, decentralized enterprise.

Highlights:

- Marcin’s Clojure origin story: started with ClojureScript, moved into Clojure, and found the functional mindset a natural fit coming from React.

- How Clojure landed at Dentons: a conscious choice for a focused referral‑network venture that valued expressiveness and small teams.

- Practical stack and ops: Postgres, Elasticsearch, Reagent/Re‑frame + Material UI, Metabase; Marcin also works on PySpark/Databricks in his global data role.

- Maintenance and risk: why they’re migrating away from old, unmaintained libs; regular security scans and external testing make dependency health a real concern.

- Team, onboarding, and hiring: a small Clojure pod (Marcin + one dev, testers, DevOps); knowledge sharing, docs, and close pairing are the onboarding tools — hiring remains the main practical blocker.

- Enterprise realities: polycentric org structure, integration friction with firm standards (Power BI, Azure), and the tradeoffs that make Clojure a strong fit in some contexts but a harder sell in others.

Watch Episode 12 to hear the full conversation and the nuances of keeping a nine‑year Clojure codebase healthy in a corporate setting.

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.