Java in 2026: still boring, still powerful, still printing money

Let’s be honest.

Java is not sexy.
Nobody wakes up excited thinking “wow, I hope today I can write some beautiful enterprise Java code.”

And yet…

Banks, fintechs, payment processors, credit engines, risk platforms, and trading systems are still massively powered by Java in 2026.
And they’re not moving away anytime soon. AI writing up to 70% of new code? Not a problem.

I have worked as a software engineer for more than 17 years, and ever since I started, people announcing that Java was becoming legacy has just been part of my day.

We are in 2026: American debt is skyrocketing, BTC is melting, the dollar is losing value... and Java? Well, it looks like this guy is one tough dinosaur.

Quick Java timeline (why this dinosaur refuses to die)

  • 1995 → Java is born. “Write once, run anywhere.”
  • 2006 → Open-sourced. Enterprise adoption explodes.
  • 2014 → Java 8. Lambdas. Streams. Real modern Java begins.
  • 2018–2021 → 6-month release cycle. JVM performance goes crazy.
  • 2021 → Java 17 LTS becomes enterprise default.
  • 2023 → Java 21 LTS ships with virtual threads (Project Loom). Massive scalability shift.
  • 2026 → Java 25 era. Cloud-native, AI-assisted dev, still dominating finance production systems.

So yeah…
Not dead. Not even close.

Why finance still trusts Java more than anything else

Financial systems do not care about hype.

They care about predictability, latency stability, memory safety, tooling maturity and, last but not least, a huge hiring pool.
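A classic illustration of the predictability point, and of why the code examples later in this post all reach for BigDecimal: binary doubles cannot represent most decimal amounts exactly, while BigDecimal does exact decimal arithmetic. A minimal sketch (class name is mine):

```java
import java.math.BigDecimal;

public class MoneyMath {
    public static void main(String[] args) {
        // double accumulates binary rounding error on decimal amounts:
        System.out.println(0.1 + 0.2);  // prints: 0.30000000000000004

        // BigDecimal keeps decimal arithmetic exact, which is why
        // finance codebases default to it for money:
        BigDecimal sum = new BigDecimal("0.10").add(new BigDecimal("0.20"));
        System.out.println(sum);  // prints: 0.30
    }
}
```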

JVM stability is unmatched if you run:

  • loan engines
  • payment authorization
  • anti-fraud scoring
  • card transaction routing

JVM gives:

  • battle-tested GC (even with its problems)
  • insane observability
  • deterministic performance tuning
  • backwards compatibility across decades

Concurrency changed the game (virtual threads)

Before Java 21:

threads were expensive and async code was ugly. Now:

try (var executor = Executors.newVirtualThreadPerTaskExecutor()) { // java.util.concurrent.Executors
    IntStream.range(0, 1_000_000).forEach(i ->                     // java.util.stream.IntStream
        executor.submit(() -> processTransaction(i))               // assumes your own processTransaction(int)
    );
} // try-with-resources close() waits for all submitted tasks to finish

This is millions of concurrent operations with simple blocking code and no reactive nightmare.
For fintech backends, this is huge.

Spring ecosystem still dominates enterprise

Yes, people complain about Spring Boot, but look at reality:

  • 80%+ of fintech APIs run on Spring
  • security + observability + config = solved problems
  • onboarding engineers is easy
  • production support is predictable

Example minimal API:

// assumes a simple DTO, e.g.: public record Loan(String id, BigDecimal amount) {}
@RestController
@RequestMapping("/loans")
public class LoanController {

    @GetMapping("/{id}")
    public Loan getLoan(@PathVariable String id) {
        return new Loan(id, BigDecimal.valueOf(1000));
    }
}

Boring?
Yes, but every engineer can grasp it easily, even in the eventual case of an AI code-generation hallucination.
Finance chooses boring.

Performance is no longer an excuse

Modern JVMs give you ZGC and Shenandoah for ultra-low-latency garbage collection, a JIT with runtime profiling that optimizes hot code, and GraalVM native images for fast startup on any cloud provider.
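If you want to check which collector your own JVM is actually running (say, after enabling ZGC with -XX:+UseZGC), the standard management API exposes it. A small sketch (class name is mine):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcInfo {
    public static void main(String[] args) {
        // Lists the collectors the running JVM is using; with -XX:+UseZGC
        // you would see ZGC-related names here instead of the defaults.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```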

The real shift in 2026: AI-augmented Java developers

This is where things get interesting. Guess what? Java is a big winner here: if I can't review everything that is being generated, how about relying on a very deterministic, well-shaped programming language?

A simple example of putting Claude Code or GitHub Copilot to work for you.

Example: generate a Spring service instantly

Prompt in editor:
create a service that calculates compound interest with validation and unit tests

Result in seconds:

@Service
public class InterestService {

    public BigDecimal compound(
        BigDecimal principal,
        BigDecimal rate,
        int periods
    ) {
        if (principal.signum() <= 0 || rate.signum() < 0 || periods < 0) {
            throw new IllegalArgumentException("Invalid input");
        }

        return principal.multiply(
            BigDecimal.ONE.add(rate).pow(periods)
        );
    }
}

The generated tests work like a charm...

@Test
void compound_shouldGrow() {
    var service = new InterestService();
    var result = service.compound(
        BigDecimal.valueOf(1000),
        BigDecimal.valueOf(0.1),
        2
    );

    assertEquals(new BigDecimal("1210.00"), result.setScale(2));
}

Time saved with control.

How about using claude for refactoring and architecture?

Some things that used to be a nightmare are now just another task in Jira, like:

  • migrating legacy Java 8 → Java 21
  • converting blocking code → virtual threads
  • generating integration tests
  • explaining weird enterprise codebases
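The second item above is often surprisingly mechanical: blocking calls that were expensive on platform threads become cheap when the same code runs on a virtual thread, so the code itself barely changes. A minimal Java 21+ sketch (class name is mine):

```java
import java.time.Duration;

public class VirtualBlocking {
    public static void main(String[] args) throws InterruptedException {
        // The blocking sleep parks only the cheap virtual thread;
        // the underlying OS carrier thread is freed to run other work.
        Thread t = Thread.ofVirtual().start(() -> {
            try {
                Thread.sleep(Duration.ofMillis(10)); // ordinary blocking call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.join();
        System.out.println(t.isVirtual()); // prints: true
    }
}
```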

Real workflow:

  1. Inline or chat with the legacy class
  2. Ask to modernize for Java 21 + clean architecture
  3. Tadah: get a production-ready refactor

This is insane leverage.

Let's talk about careers now. Right now a lot of software engineers are actually losing their jobs, which is one more reason why Java careers are still strong...

Everyone wants to learn:

  1. Rust
  2. Go
  3. A new AI-agent-ish way of working
  4. shiny new things

But banks still run on Java, and guess what?

  • So supply ↓
  • Demand ↑

Opportunity.

Let's finish here...

Java in 2026 is like:
a boring Swiss bank account that quietly keeps getting richer
Not hype.
Not trendy.
But extremely powerful where it matters.

And in finance… somehow, even this boring guy is leveraging AI to get even better.

I'm not saying stick with Java. Java is more than a programming language; it is a platform. We have, for example, Clojure, a Lisp way of coding that runs on the JVM. How about Kotlin? My point is to be aware and awake. Java still rocks.


Permalink

Full Stack Engineer (mid- to senior-level) at OpenMarkets Health



OpenMarkets people are…

  • Committed to driving waste out of healthcare
  • Transparent and accountable to their colleagues on a weekly basis.
  • Committed to the success of their customers and their teammates.
  • Hungry to learn by making and sharing their mistakes, as well as reading and discussing ideas with their teammates.
  • Eager to do today what most people would put off until tomorrow.

Why you want to work with us…

  • Fast-paced start-up environment with a lot of opportunity to make a large impact.
  • Passionate, dedicated colleagues with a strong vision for changing how healthcare equipment purchasing is done.
  • Opportunity to develop software to help remove wasteful spending from equipment purchasing, leaving more dollars for patient care.
  • Other benefits include comprehensive health care benefits, 401K with 4% match, pre-tax transit benefits, generous PTO, flexible maternity/family leave options and the ability to work remotely.

Apply today if you are someone…

  • Who is proficient in Clojure and ClojureScript (bonus if you're also familiar with Ruby on Rails).
  • Who knows (or is willing to learn) re-frame and Reagent Forms.
  • Who practices test-driven development
  • Who has written software for at least 4 years.
  • Is empathetic towards their team, understands the tradeoffs in their implementations, and communicates their code effectively.
  • Can speak and write in non-technical terms, and believes in the value of effective time management.

We want everyone…

OpenMarkets is an equal opportunity employer. We believe that we can only make healthcare work for everyone if we get everyone to work on it. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Permalink

Python Only Has One Real Competitor


by: Ethan McCue

Clickbait subtitle: "and it's not even close"


Python is the undisputed monarch in exactly one domain: Data Science.

The story, as I understand it, goes something like this:

Python has very straightforward interop with native code. Because interop is straightforward, libraries like numpy and pandas got made. Around these an entire Data Science ecosystem bloomed.

This in turn gave rise to interactive notebooks with iPython (now "Jupyter"), plotting with Matplotlib, and machine learning with PyTorch and company.

There are other languages and platforms - like R, MATLAB, etc. - which compete for users with Python. If you have a field biologist out there wading through the muck to measure turtles, they will probably make their charts and papers with whatever they learned in school.

But the one glaring weakness of these competitors is that they are not general purpose languages. Python is. This means that Python is also widely used for other kinds of programs - such as HTTP servers.

So for the people who are training machine learning models to do things like classify spam it is easiest to then serve those models using the same language and libraries that were used to produce them. This can be done without much risk because you can assume Python will have things like, say, a Kafka library should you need it. You can't really say the same for MATLAB.

And in the rare circumstance you want to do something that is easiest in one of those alternative ecosystems, Python will have a binding. You can call R with rpy2, MATLAB with another library, Spark with PySpark, and so on.

For something to legitimately be a competitor to Python I think it needs to do two things.

  1. Be at least as good at everything as Python.
  2. Be better than Python in a way that matters.

The only language which clears this bar is called Clojure.



That's a bold claim, I know. Hear me out.

Clojure has a rich ecosystem of Data Science libraries, including feature-complete numpy and pandas equivalents in dtype-next and tech.ml.dataset, metamorph.ml for Machine Learning pipelines, Tableplot for plotting, and Clay for interactive notebooks.

For what isn't covered it can call Python directly via libpython-clj, R via ClojisR, and so on.

Clojure is also a general purpose language. Making HTTP servers or whatever else in Clojure is very practical.

What starts to give Clojure the edge is also the answer to the age-old question: "Why is Python so slow?"

Python is slow because it cannot be made fast. The dark side of Python's easy interop with native code is that many of the implementation details of CPython were made visible to, and relied upon by, authors of native bindings.

Because all these details were relied upon, the authors of the CPython runtime can't really change those details and not break the entire Data Science ecosystem. This heavily constrains the optimizations that the CPython runtime can do.

This means that people need to constantly avoid writing CPU intensive code in Python. It is orders of magnitude faster to use something which delegates to the native world than something written in pure Python. This affects the experience of things like numpy and pandas. There is often a "fast way" to do something and several "slow ways." The slow ways are always when too much actual Python code gets involved in the work.

Clojure does not have this problem. Clojure is a language that runs on the Java Virtual Machine. The JVM can optimize code like crazy on account of all the souls sacrificed to it. So you can write real logic in Clojure no issue.

There's a reason Python's list is implemented in C code but Java can have multiple competing implementations, all written in Java. Java code can count on some aggressive runtime optimizations when it matters.
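To make that point concrete: the two standard List implementations below are themselves written in plain Java, and the JIT compiles their hot paths to native code, which is why "real logic" on the JVM doesn't need to escape to C. A small sketch (class name is mine):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class PureJavaLists {
    public static void main(String[] args) {
        // Two competing List implementations, both written in Java itself:
        List<Integer> array = new ArrayList<>();   // backed by a resizable array
        List<Integer> linked = new LinkedList<>(); // backed by a doubly-linked list
        for (int i = 0; i < 1_000; i++) {
            array.add(i);
            linked.add(i);
        }
        // Same interface, same answer, different performance trade-offs:
        System.out.println(array.get(500) + " " + linked.get(500)); // prints: 500 500
    }
}
```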

This also means that if you use Clojure for something like an HTTP Server to serve a model, you can generally expect much better performance at scale than the equivalent in Python. You could even write that part in pure Java to make use of that trained pool of developers. Anecdotally, startups often switch from whatever language they started with to something that runs on the JVM once they get big enough to care about performance.

Clojure's library ecosystem includes many high quality libraries written in Java. Many of these are better performing than their Python analogues. Many also do things for which Python has no equivalent. Clojure then gets access to all Python libraries via libpython-clj.

Clojure's interop story is also quite strong at the language level. Calling a Python function is almost as little friction linguistically as calling a Clojure function. Calling native code with coffi is also pretty darn simple.

The language is also very small even compared to Python. Obviously the education system infrastructure is not in place, but in principle there is less to learn about the language itself before one can productively learn how to do Data Science.

An extremely important part of productive Data Science work is interacting with a dataset. This is why interactive notebooks are such a big part of this world. It's also a benefit of using dynamic languages like Python and Clojure. Being able to run quick experiments and poke at data is more important than static type information.

Clojure is part of a family of languages with a unique method of interactive development. This method is considered by its fans to be superior to the cell-based notebooks that Jupyter provides.

All in all, it's a competitive package. Whether it ever gets big enough to take a big bite of Python comes down to kismet, but I think it's the only thing that might stand a chance to.


If this got you interested in learning Clojure check out Clojure Camp for resources and noj for a cohesive introduction to the Data Science ecosystem.



Permalink

Clojure’s Persistent Data Structures: Immutability Without the Performance Hit

How structural sharing makes immutable collections fast enough to be the default choice in functional programming In most programming languages, immutability is a performance compromise. Make your data structures immutable, the thinking goes, and prepare to pay the cost in memory and speed. Every modification means a full copy. Every update means allocating new memory. …

Permalink

Clojure Deref (Feb 3, 2026)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

Clojure Dev Call

Join the Clojure core team for an update on what we’ve been working on and what’s on our horizon. We’ll save time for a Q&A, so bring your questions. Feb 10 @ 18:00 UTC. Register here.

Upcoming Events

Libraries and Tools

Debut release

  • clojure-elisp - A Clojure dialect that compiles to Emacs Lisp

  • yamlstar - A YAML framework for all programming languages

  • yggdrasil - Git-like, causal space-time lattice abstraction over systems supporting this memory model.

  • chrondb - Chronological key/value Database built on Git architecture with complete version history

  • pruner - A TreeSitter-powered formatter orchestrator

  • ouroboros - An AI vibe-coding game using babashka

  • ridley - A turtle graphics-based 3D modeling tool for 3D printing. Write Clojure scripts, see real-time 3D preview, export STL. WebXR support for VR/AR visualization.

  • restructure - Rewrite nested Clojure data with a declared shape.

  • hipflask - Offline-first, real-time collaboration for ClojureScript

  • plumcp - Clojure/ClojureScript library for making MCP server and client

Updates

  • clara-eql 0.2.1 - Generate Clara rules to collect data from EDN Query Language queries

  • monkeyci 0.23.1 - Next-generation CI/CD tool that uses the full power of Clojure!

  • unlazy 1.1.0 - Configuration for clj-kondo, discouraging lazy processing

  • next-jdbc 1.3.1093 - A modern low-level Clojure wrapper for JDBC-based access to databases.

  • cursive 2026.1-eap1 - Cursive: The IDE for beautiful Clojure code

  • hive-mcp 0.11.0 - MCP server for hive-framework development, highly integrated to emacs

  • clay 2.0.9 - A REPL-friendly Clojure tool for notebooks and datavis

  • kindly 4-beta23 - A small library for defining how different kinds of things should be rendered

  • calva 2.0.547 - Clojure & ClojureScript Interactive Programming for VS Code

  • html 0.2.3 - Html generation library inspired by squint’s html tag

  • bankster 2.1.1 - Money as data, done right.

  • aws-api 0.8.800 - AWS, data driven

  • datalevin 0.10.4 - A simple, fast and versatile Datalog database

Permalink

Clojure Developer at 3E nv



Our SaaS Solution

SynaptiQ is 3E’s SaaS product, an independent, evolutionary software suite for asset management of renewable energy portfolios. SynaptiQ collects near-real-time data from more than 20 million devices spread over tens of thousands of utility-scale and commercial solar and wind sites all over the globe.

We develop and operate advanced analytical services to enrich the monitoring data by:

  • satellite imaging data,
  • meteorological modelling,
  • advanced system modelling,
  • machine learning & artificial intelligence.

The platform combines domains related to big data, high-performance processing, IoT protocols and AI and is the product of the interactions between a multidisciplinary team of developers, scientists, renewable energy architects, electrical engineers, and enthusiast sales that implement, operate and commercialize SynaptiQ worldwide. 

The added value realized by SynaptiQ is performance improvement and operational cost reduction for its Operations & Maintenance customers.

What you will be doing

We are looking for a Back-End Developer with experience in Clojure and a passion for creating efficient, scalable, and accessible back-end systems. At 3E, you will have the opportunity to work on meaningful projects that contribute to the advancement of renewable energy technologies and digitalization.

Responsibilities

  • Develop, test, and maintain robust, performant, and scalable back-end codebase in accordance with design and product requirements.
  • Identify and optimize performance bottlenecks in the application (and MySQL db) using best practices and techniques.
  • Use Polylith, Reitit, Malli to build APIs which are efficient and maintainable.
  • Design and develop efficient business-logic processors that act on data.
  • Conduct thorough code reviews and enforce good back-end practices to ensure quality and maintainability of code.
  • Collaborate with multidisciplinary teams of developers, scientists, renewable energy architects, electrical engineers, and sales enthusiasts to achieve project goals (in-person or via email/MS Teams).
  • Track project evolution (Jira).
  • Maintain a codebase that is easy to understand, modify, and extend, and adhere to coding standards and best practices.

Requirements

To fulfil this role we are looking for someone with:

  • Minimum of 3 years of experience in Clojure OR significant open-source contributions which can show a significant level of skill.
  • A proven track record of optimizing performance bottlenecks, enforcing good back-end practices.
  • Good understanding of performance optimization techniques in the context of Clojure & MySQL.
  • Product-based experience – supporting and modifying a product through several years – living with the decisions of the past and building on top of them.
  • Experience with refactoring a codebase as new features are written.

Bonus points for:

  • Good profiling skills (JVM profiler).
  • Other Lisp languages (e.g. SBCL).
  • Knowledge of Docker

Benefits

Our offices are hidden in the centre of Brussels with a view on a pond, where ducks and a heron pay regular visits. In addition to a stimulating atmosphere in a highly motivated group of people, 3E offers a unique opportunity to further develop yourself in a company/team with an ambitious growth plan, delivering innovative services.

Furthermore:

  • Flexible & gliding working hours
  • Open to fully remote candidates
  • An international environment with colleagues of 25+ nationalities and projects in over 100 countries.
  • An open-minded company where everybody can bring their ideas to the table

Permalink

Building AI agents in practice with Clojure

During Clojure South, Marlon Silva, Senior Software Engineer at Nubank, shared his perspective on a recurring challenge for software engineers working with AI today: how to move beyond using AI assistants and start engineering reliable, task-oriented AI agents.

According to Marlon, the industry has made it increasingly easy to consume AI — APIs, copilots, and assistants are everywhere — but building AI-powered systems still requires engineers to make a series of low-level, often uncomfortable decisions. In his talk, he focused on demystifying those decisions, framing AI not as a black box, but as infrastructure, and integrations that need to be reasoned about explicitly.

Rather than presenting a new framework or abstraction layer, Marlon walked through the architectural choices he believes matter most when building agents in practice, and explained why Clojure offers a particularly strong foundation for exploring this space.

Infrastructure first: where your models live matters

Marlon started by arguing that any serious AI initiative begins with infrastructure — specifically, how teams access models. While this decision is often treated as an implementation detail, he emphasized that it directly impacts scalability, experimentation, security, and integration with existing systems.

From his perspective, engineers typically face two options:

  • Direct AI vendors: Marlon noted that these providers are an excellent entry point for individual developers. Signing up is straightforward, APIs are well-documented, and it is possible to start experimenting almost immediately. For learning and early exploration, this path minimizes friction.
  • Cloud providers: For organizations, however, Marlon argued that leveraging existing cloud relationships is usually the better long-term decision. Most companies already have accounts, billing, security controls, and observability in place. Cloud Providers like AWS and GCP make it possible to access models from multiple AI Labs without introducing new suppliers into the stack.

According to Marlon, when teams are operating inside an organization, the most pragmatic default is to use the models already available through their cloud providers. This removes procurement overhead and allows engineers to focus on building systems instead of managing vendors.

The fragmentation problem: too many APIs

Once access to models is established, Marlon pointed out a second, inevitable problem: API fragmentation. Each provider exposes different request formats, parameters, and SDKs, which quickly complicates development and makes experimentation costly.

In his talk, Marlon described this fragmentation as one of the first scaling pain points teams encounter when AI moves beyond a single script or proof of concept.

To address this, he introduced LiteLLM as a practical unification layer. LiteLLM is a proxy that standardizes access to multiple models and providers behind a single API, regardless of where the model is hosted.

Marlon highlighted three concrete benefits of this approach:

  • A unified interface, allowing teams to switch models without rewriting integration code.
  • Centralized observability, creating a single point for logging, debugging, and auditing model interactions.
  • Cost control, which he emphasized as critical. Token usage scales quickly in production, and LiteLLM enables organizations to track and limit usage per team, service, or key.

From Marlon’s perspective, this kind of proxy is not an optimization: it becomes foundational infrastructure as soon as AI is part of a real system.

Why smaller models often work better for agents

Marlon then challenged a common assumption in the AI space: that larger models are always better.

For task-oriented AI agents, he argued, this is rarely true. Citing recent research from Nvidia, Marlon explained that Small Language Models (SLMs) are often a better fit for agents designed to execute specific, well-defined actions.

According to him, the broad generalization capabilities of large LLMs, models capable of writing essays or long-form prose, are unnecessary for most agent workloads. Using them in these contexts leads to wasted capacity, higher costs, and increased complexity.

He outlined several practical advantages of SLMs:

  • Cost efficiency: models like Llama 4 Scout on AWS Bedrock cost orders of magnitude less per token than large proprietary models.
  • Lower energy consumption: making them a more responsible choice at scale and more eco-friendly.
  • Feasible fine-tuning: adapting a 7–10B parameter model to a specific domain is realistic, whereas doing the same with very large models is often not realistic for most companies.

For Marlon, choosing SLMs is not a compromise; it is an engineering decision aligned with the actual requirements of agent-based systems.

Why Clojure fits this problem space

From there, Marlon shifted focus to tooling. He explained that AI development is inherently experimental: prompts change, parameters are adjusted, models are swapped, and assumptions are constantly tested. In that context, developer feedback loops matter. This is where Clojure stands out.

Marlon described Clojure as an ergonomic language — not because of syntax alone, but because of its REPL-driven development model. The ability to evaluate functions incrementally, inspect results immediately, and iterate without restarting an application fundamentally changes how engineers explore problem spaces.

In his experience, this interactive workflow aligns closely with how AI systems are built and refined. Beyond the REPL, Marlon highlighted two interoperability advantages:

  • Java interoperability: Because Clojure runs on the JVM, it has seamless access to the Java ecosystem. Cloud SDKs, HTTP clients, observability tools, and mature libraries are immediately available.
  • Python interoperability: With libraries such as libpython-clj, Clojure can import and execute Python code directly. While not as seamless as Java interop, this capability allows engineers to reuse Python-based AI tooling without abandoning Clojure’s interactive workflow.

For Marlon, this combination makes Clojure a strong orchestration layer for AI systems that need to integrate with multiple ecosystems.

From theory to practice: the live demonstration

To make these ideas concrete, Marlon walked through a live demonstration.

He started with simple Python scripts that sent text, images, and PDFs to Bedrock-hosted models via a local LiteLLM proxy. These examples established a baseline using familiar tooling.

Next, he reproduced the same workflows in Clojure by importing the Python LiteLLM package directly into the Clojure runtime. Python functions were called from Clojure code, with inputs and outputs handled interactively.

According to Marlon, the most important part of the demo was not that this approach works, but how it changes the development experience. Python-based workflows often require frequent context switching — editing files, running scripts, and restarting processes. With Clojure and the REPL, the entire feedback loop stays inside the editor.

For exploratory domains like AI, Marlon argued, this difference directly translates into faster iteration and deeper focus.

Conclusion

Marlon closed by emphasizing that AI agents remain a young and rapidly evolving field. Libraries, architectures, and best practices are still in flux, which makes flexibility a key requirement.

He also offered a note of caution: granting models excessive autonomy without clear boundaries and controls can lead to fragile systems. In his view, frameworks that favour obscure control flow and rely mostly on the LLM itself encourage a “ship and pray” approach to AI development. At the same time, Marlon pointed out that new agent architectures are actively emerging from research groups at organizations like Nvidia and DeepMind, signaling that significant changes are still ahead.

His conclusion was pragmatic: combining well-chosen infrastructure, models sized to the problem, and tools that favor exploration creates a solid foundation for building AI systems grounded in engineering discipline. The repository shared during the talk serves as a starting point for engineers interested in continuing that exploration.

References

Small Language Models are the Future of Agentic AI

AlphaEvolve: A coding agent for scientific and algorithmic discovery

GitHub – marlonjsilva/clj-agents

GitHub – clj-python/libpython-clj

The post Building AI agents in practice with Clojure appeared first on Building Nubank.

Permalink

I am sorry, but everyone is getting syntax highlighting wrong

Translations: Russian

Syntax highlighting is a tool. It can help you read code faster. Find things quicker. Orient yourself in a large file.

Like any tool, it can be used correctly or incorrectly. Let’s see how to use syntax highlighting to help you work.

Christmas Lights Diarrhea

Most color themes have a unique bright color for literally everything: one for variables, another for language keywords, constants, punctuation, functions, classes, calls, comments, etc.

Sometimes it gets so bad one can’t see the base text color: everything is highlighted. What’s the base text color here?

The problem with that is, if everything is highlighted, nothing stands out. Your eye adapts and considers it a new norm: everything is bright and shiny, and instead of getting separated, it all blends together.

Here’s a quick test. Try to find the function definition here:

and here:

See what I mean?

So yeah, unfortunately, you can’t just highlight everything. You have to make decisions: what is more important, what is less. What should stand out, what shouldn’t.

Highlighting everything is like assigning “top priority” to every task in Linear. It only works if most of the tasks have lesser priorities.

If everything is highlighted, nothing is highlighted.

Enough colors to remember

There are two main use-cases you want your color theme to address:

  1. Look at something and tell what it is by its color (you can tell by reading text, yes, but why do you need syntax highlighting then?)
  2. Search for something. You want to know what to look for (which color).

1 is a direct index lookup: color → type of thing.

2 is a reverse lookup: type of thing → color.

Truth is, most people don’t do these lookups at all. They might think they do, but in reality, they don’t.

Let me illustrate. Before:

After:

Can you see it? I misspelled return as retunr and its color switched from red to purple.

I can’t.

Here’s another test. Close your eyes (not yet! Finish this sentence first) and try to remember what color your color theme uses for class names?

Can you?

If the answer for both questions is “no”, then your color theme is not functional. It might give you comfort (as in—I feel safe. If it’s highlighted, it’s probably code) but you can’t use it as a tool. It doesn’t help you.

What’s the solution? Have an absolute minimum of colors. So little that they all fit in your head at once. For example, my color theme, Alabaster, only uses four:

  • Green for strings
  • Purple for constants
  • Yellow for comments
  • Light blue for top-level definitions

That’s it! And I was able to type it all from memory, too. This minimalism allows me to actually do lookups: if I’m looking for a string, I know it will be green. If I’m looking at something yellow, I know it’s a comment.

Limit the number of different colors to what you can remember.

If you swap green and purple in my editor, it’ll be a catastrophe. If somebody swapped colors in yours, would you even notice?

What should you highlight?

Something there isn’t a lot of. Remember—we want highlights to stand out. That’s why I don’t highlight variables or function calls—they are everywhere, your code is probably 75% variable names and function calls.

I do highlight constants (numbers, strings). These are usually used more sparingly and often are reference points—a lot of logic paths start from constants.

Top-level definitions are another good idea. They give you an idea of a structure quickly.

Punctuation: dimming it helps to separate names from syntax a little, and you care about names first, especially when quickly scanning code.

Please, please don’t highlight language keywords: class, function, if, else, stuff like this. You rarely look for them: “where’s that if” is a valid question, but you will be looking not at the if keyword itself, but at the condition after it. The condition is the important, distinguishing part. The keyword is not.

Highlight names and constants. Grey out punctuation. Don’t highlight language keywords.

Comments are important

The tradition of using grey for comments comes from the times when people were paid by line. If you have something like

of course you would want to grey it out! This is bullshit text that doesn’t add anything and was written to be ignored.

But for good comments, the situation is opposite. Good comments ADD to the code. They explain something that couldn’t be expressed directly. They are important.

So here’s another controversial idea:

Comments should be highlighted, not hidden away.

Use bold colors, draw attention to them. Don’t shy away. If somebody took the time to tell you something, then you want to read it.

Two types of comments

Another secret nobody is talking about is that there are two types of comments:

  1. Explanations
  2. Disabled code

Most languages don’t distinguish between those, so there’s not much you can do syntax-wise. But sometimes there’s a convention (e.g. -- vs /* */ in SQL); if so, use it!

Here’s a real example from the Clojure codebase that makes perfect use of the two types of comments:

Disabled code is gray, explanation is bright yellow
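In Clojure itself, the two kinds are syntactically distinguishable, which is what makes per-kind coloring possible. A minimal sketch (the function and its comments are made up for illustration):

```clojure
;; Explanation comment: says WHY, adds information the code can't.
;; We add 2 because the downstream service expects 1-based, inclusive ends.
(defn shift [x]
  (+ x 2))

;; Disabled code: the #_ reader macro tells the reader to skip the next form.
#_(defn shift [x]
    (inc x))

(comment
  ;; (comment ...) blocks are another common idiom for parked/disabled code
  (shift 40))
```

An editor that colors `;;` prose brightly and `#_`/`(comment ...)` forms in gray gets this distinction for free.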

Light or dark?

Per statistics, 70% of developers prefer dark themes. Being in the other 30%, I’ve always been puzzled by that question. Why?

And I think I have an answer. Here’s a typical dark theme:

and here’s a light one:

On the latter one, colors are way less vibrant. Here, I picked them out for you:

Notice how many colors there are. No one can remember that many.

This is because dark colors are in general less distinguishable and more muddy. Look at the hue scale as we move brightness down:

Basically, in the dark part of the spectrum, you just get fewer colors to play with. There’s no “dark yellow” or good-looking “dark teal”.

Nothing can be done here. There are no magic colors hiding somewhere that have both good contrast on a white background and look good at the same time. By choosing a light theme, you are dooming yourself to a very limited, bad-looking, barely distinguishable set of dark colors.

So it makes sense. Dark themes do look better. Or rather: light ones can’t look good. Science ¯\_(ツ)_/¯

But!

But.

There is one trick you can do, that I don’t see a lot of. Use background colors! Compare:

The first one has nice colors, but the contrast is too low: letters become hard to read.

The second one has good contrast, but you can barely see colors.

The last one has both: high contrast and clean, vibrant colors. Lighter colors are readable even on a white background since they fill a lot more area. Text is the same brightness as in the second example, yet it gives the impression of clearer color. It’s all upside, really.

UI designers have known about this trick for a while, but I rarely see it applied in code editors:

If your editor supports choosing background color, give it a try. It might open light themes for you.

Bold and italics

Don’t use. This goes into the same category as too many colors. It’s just another way to highlight something, and you don’t need too many, because you can’t highlight everything.

In theory, you might try to replace colors with typography. Would that work? I don’t know. I haven’t seen any examples.

Using italics and bold instead of colors

Myth of number-based perfection

Some themes pay too much attention to being scientifically uniform: all colors have exactly the same lightness, and hues are distributed evenly around the circle.

This could be comforting (if you have OCD), but in practice it doesn’t work as well as it sounds:

OkLab l=0.7473 c=0.1253 h=0, 45, 90, 135, 180, 225, 270, 315

The idea of highlighting is to make things stand out. If you make all colors the same lightness and chroma, they will look very similar to each other, and it’ll be hard to tell them apart.

Our eyes are way more sensitive to differences in lightness than in color, and we should use it, not try to negate it.

Let’s design a color theme together

Let’s apply these principles step by step and see where it leads us. We start with the theme from the start of this post:

First, let’s remove highlighting from language keywords and re-introduce base text color:

Next, we remove color from variable usage:

and from function/method invocation:

The thinking is that your code is mostly references to variables and method invocations. If we highlighted those, we’d have to highlight more than 75% of your code.

Notice that we’ve kept variable declarations. These are not as ubiquitous and help you quickly answer a common question: where does this thing come from?

Next, let’s tone down punctuation:

I prefer to dim it a little bit because it helps names stand out more. Names alone can give you the general idea of what’s going on, and the exact configuration of brackets is rarely equally important.

But you might roll with base color punctuation, too:

Okay, getting close. Let’s highlight comments:

We don’t use red here because you usually need it for squiggly lines and errors.

This is still one color too many, so I unify numbers and strings to both use green:

Finally, let’s rotate colors a bit. We want to respect nesting logic, so function declarations should be brighter (yellow) than variable declarations (blue).

Compare with what we started:

In my opinion, we got a much more workable color theme: it’s easier on the eyes and helps you find stuff faster.

Shameless plug time

I’ve been applying these principles for about 8 years now.

I call this theme Alabaster and I’ve built it a couple of times for the editors I used:

It’s also been ported to many other editors and terminals; the most complete list is probably here. If your editor is not on the list, try searching for it by name; it might be built-in already! I always wondered where these color themes come from, and now I’ve become the author of one (and I still don’t know).

Feel free to use Alabaster as is or build your own theme using the principles outlined in the article—either is fine by me.

As for the principles themselves, they worked out fantastically for me. I’ve never wanted to go back, and just one look at any “traditional” color theme gives me a scare now.

I suspect that the only reason we don’t see more restrained color themes is that people never really thought about it. Well, this is your wake-up call. I hope this will inspire people to use color more deliberately and to change the default way we build and use color themes.

Permalink

Statistics made simple

I have a weird relationship with statistics: on one hand, I try not to look at it too often. Maybe once or twice a year. It’s because analytics is not actionable: what difference does it make if a thousand people saw my article or ten thousand?

I mean, sure, you might try to guess people’s tastes and only write about what’s popular, but that will destroy your soul pretty quickly.

On the other hand, I feel nervous when something is not accounted for, recorded, or saved for future reference. I might not need it now, but what if ten years later I change my mind?

Seeing your readers also helps to know you are not writing into the void. So I really don’t need much, something very basic: the number of readers per day/per article, maybe, would be enough.

Final piece of the puzzle: I self-host my web projects, and I use an old-fashioned web server instead of delegating that task to Nginx.

Static sites are popular and for a good reason: they are fast, lightweight, and fulfil their function. I, on the other hand, might have an unfinished gestalt or two: I want to feel the full power of the computer when serving my web pages, to be able to do fun stuff that is beyond static pages. I need that freedom that comes with a full programming language at your disposal. I want to program my own web server (in Clojure, sorry everybody else).

Existing options

All this led me on a quest for a statistics solution that would uniquely fit my needs. Google Analytics was out: bloated, not privacy-friendly, terrible UX, Google is evil, etc.

What is going on?

Some other JS solution might’ve been possible, but still questionable: SaaS? Paid? Will they be around in 10 years? Self-host? Are their cookies GDPR-compliant? How to count RSS feeds?

Nginx has access logs, so I tried server-side statistics that feed off those (namely, Goatcounter). Easy to set up, but then I needed to create domains for them, manage accounts, and monitor the process, and it wasn’t even performant enough for my server and request volume!

My solution

So I ended up building my own. You are welcome to join, if your constraints are similar to mine. This is how it looks:

It’s pretty basic, but does a few things that were important to me.

Setup

Extremely easy to set up. And I mean it as a feature.

Just add our middleware to your Ring stack and get everything automatically: collecting and reporting.

(def app
  (-> routes
    ...
    (ring.middleware.params/wrap-params)
    (ring.middleware.cookies/wrap-cookies)
    ...
    (clj-simple-stats.core/wrap-stats))) ;; <-- just add this

It’s zero setup in the best sense: nothing to configure, nothing to monitor, minimal dependency. It starts to work immediately and doesn’t ask anything from you, ever.

See, you already have your web server, why not reuse all the setup you did for it anyway?

Request types

We distinguish between request types. In my case, I am only interested in live people, so I count them separately from RSS feed requests, favicon requests, redirects, wrong URLs, and bots. Bots are particularly active these days. Gotta get that AI training data from somewhere.

RSS feeds are live people in a sense, so extra work went into counting them properly: the same reader requesting feed.xml 100 times in a day only counts as one request.

Hosted RSS readers often report user count in User-Agent, like this:

Feedly/1.0 (+http://www.feedly.com/fetcher.html; 457 subscribers; like FeedFetcher-Google)

Mozilla/5.0 (compatible; BazQux/2.4; +https://bazqux.com/fetcher; 6 subscribers)

Feedbin feed-id:1373711 - 142 subscribers

My personal respect and thank you to everybody on this list. I see you.
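The subscriber counts in those User-Agent strings can be pulled out with a simple regex. A sketch under my own assumptions (the function name is mine, and real-world reader strings vary more than these three examples):

```clojure
(defn subscriber-count
  "Extract the subscriber count a hosted RSS reader reports in its
  User-Agent string, or nil if it doesn't report one."
  [user-agent]
  (some-> (re-find #"(\d+) subscribers" user-agent)
          second       ; the first capture group, e.g. "457"
          parse-long))
```

A per-feed sum of these counts, plus the distinct direct fetchers, gives a rough but honest reader estimate.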

Graphs

Visualization is important, and so is choosing the correct graph type. This is wrong:

A continuous line suggests interpolation. It reads as if, between 1 visit at 5 am and 11 visits at 6 am, there were moments with 2, 3, 5, and 9 visits in between. Maybe even 5.5 visits! That is not the case.

This is how a semantically correct version of that graph should look:

Some attention was also paid to having reasonable labels on axes. You won’t see something like 117, 234, 10875. We always choose round numbers appropriate to the scale: 100, 200, 500, 1K etc.

It goes without saying that all graphs share the same vertical scale and synchronized horizontal scroll.
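Picking “round numbers appropriate to the scale” is the classic nice-number trick: round up to 1, 2, or 5 times a power of ten. A sketch of how that choice can be made (my own helper, not the actual clj-simple-stats code):

```clojure
(defn nice-ceiling
  "Round n up to the nearest 'nice' number: 1, 2, or 5 times a power of 10.
  E.g. 117 -> 200, 234 -> 500, 10875 -> 20000."
  [n]
  (let [mag  (Math/pow 10 (Math/floor (Math/log10 n)))  ; power of 10 below n
        frac (/ n mag)]                                  ; n scaled into [1, 10)
    (long (* mag (cond (<= frac 1) 1
                       (<= frac 2) 2
                       (<= frac 5) 5
                       :else       10)))))
```

Applying this to the maximum value of a graph yields the top axis label; dividing it by 4 or 5 gives equally round intermediate ticks.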

Insights

We don’t offer much (as I don’t need much), but you can narrow reports down by page, query, referrer, user agent, and any date slice.

Not implemented (yet)

It would be nice to have some insights into “What was this spike caused by?”

Some basic breakdown by country would be nice. I do have IP addresses (for what they are worth), but I need a way to package GeoIP into some reasonable size (under 1 Mb, preferably; some loss of resolution is okay).

Finally, one thing I am really interested in is “Who wrote about me?” I do have referrers, only question is how to separate signal from noise.

Performance. DuckDB is a good sport: it compresses data and runs columnar queries, so storing extra columns per row doesn’t affect query performance. Still, each dashboard hit is a query across the entire database, which at this moment (~3 years of data) sits around 600 MiB. I definitely need to look into building some pre-calculated aggregates.

One day.

How to get

Head to github.com/tonsky/clj-simple-stats and follow the instructions:

Let me know what you think! Is it usable to you? What could be improved?

Permalink

FFI with GraalVM Native Image: The Real Work of Maintaining a Library That Crosses Language Boundaries

Exposing a library via FFI seems simple on paper.

You compile your code to .so, export some functions with extern "C", write bindings in the target language. Done, interoperability achieved.

Except that's not how it works in practice. At least not when your lib is a database written in Clojure, compiled with GraalVM native-image, that needs to manage Git and Lucene internally, and will be called from Rust in environments ranging from dev laptops to CI containers with 8MB of stack.

Permalink

Episode 12: It allows a small team to achieve really great goals, with Marcin Maicki, Dentons

Episode 12 of "Clojure in product. Would you do it again?" is live — Marcin Maicki, Global Data Developer & Lead Developer at Dentons, joins Artem Barmin and Vadym Kostiuk to talk about running Clojure inside a large, decentralized enterprise.

Highlights:

- Marcin’s Clojure origin story: started with ClojureScript, moved into Clojure, and found the functional mindset a natural fit coming from React.

- How Clojure landed at Dentons: a conscious choice for a focused referral‑network venture that valued expressiveness and small teams.

- Practical stack and ops: Postgres, Elasticsearch, Reagent/Re‑frame + Material UI, Metabase; Marcin also works on PySpark/Databricks in his global data role.

- Maintenance and risk: why they’re migrating away from old, unmaintained libs; regular security scans and external testing make dependency health a real concern.

- Team, onboarding, and hiring: a small Clojure pod (Marcin + one dev, testers, DevOps); knowledge sharing, docs, and close pairing are the onboarding tools — hiring remains the main practical blocker.

- Enterprise realities: polycentric org structure, integration friction with firm standards (Power BI, Azure), and the tradeoffs that make Clojure a strong fit in some contexts but a harder sell in others.

Watch Episode 12 to hear the full conversation and the nuances of keeping a nine‑year Clojure codebase healthy in a corporate setting.

Permalink

SQLite in Production? Not So Fast for Complex Queries

Datalevin speedup over SQLite on JOB benchmark

There is a growing movement to use SQLite for everything. Kent C. Dodds argues for defaulting to SQLite in web development due to its zero-latency reads and minimal operational burden. Wesley Aptekar-Cassels makes a strong case that SQLite works for web apps with large user bases, provided they don't need tens of thousands of writes per second. Discussions on Hacker News and elsewhere cite companies like Apple, Adobe, and Dropbox using SQLite in production. Even the official SQLite documentation encourages its use for most websites with fewer than 100K hits per day.

These points are fair. The overarching theme is a pushback against automatically choosing complex, client-server databases like PostgreSQL when SQLite is often more than sufficient, simpler to manage, and faster for the majority of use cases. I agree with that framing. The debate has settled into a well-understood set of tradeoffs:

For "SQLite for everything"                     | Known limitations
Zero-latency reads as an embedded library       | Write concurrency limited to a single writer
No separate server to set up or maintain        | Not designed for distributed or clustered systems
Reliable, self-contained, battle-tested (most deployed DB in the world) | No built-in user management; relies on filesystem permissions
Fast enough for most human-driven web workloads | Schema migration can be more complex in large projects

These are the terms of the current discussion. But there is an important, often overlooked dimension missing from this framing.

SQLite struggles with complex queries. More specifically, SQLite is not well-suited to handle the kind of multi-join queries that arise naturally in any serious production system. This goes beyond the usual talking points about deployment concerns (write concurrency, distribution, and so on). It points to a system-level limitation: the query optimizer itself. That limitation matters even for read-heavy, single-node deployments, which is exactly the use case where SQLite is supposed to shine.

I have benchmark evidence showing this clearly. This post focuses on join-heavy analytical queries, not on the many workloads where SQLite is already the right choice. But first, let me explain why this matters more than people think.

Multi-join queries are not exotic

A common reaction to discussing multi-join queries is: "I don't write queries with 10 joins." This usually means one of three things: the schema is denormalized, the logic has been moved into application code, or the product is simple. None of these mean the problem goes away.

In any system with many entity types, rich relationships, history or versioning, permissions, and compositional business rules, multi-join queries inevitably appear. They emerge whenever data is normalized and questions are compositional. Here are concrete examples from real production systems.

Enterprise SaaS (CRM / ERP / HR). A query like "show me all open enterprise deals" in a Salesforce-like system touches accounts, contacts, products, pricebooks, territories, users, permissions, and activity logs. Real queries in these systems routinely involve 10-20 joins. Every dimension of the business (customers, ownership, products, pricing, regions, access control, activity statistics) is often normalized into its own table.

Healthcare (EHR). "Patients with condition X, treated by doctors in department Y, prescribed drug Z in the last 6 months, and whose insurance covers that drug" spans patients, visits, diagnoses, providers, departments, prescriptions, drugs, insurance plans, coverage rules, and claims. Exceeding 15 joins is common.

E-commerce and Marketplaces. "Orders in the last 30 days that include products from vendor V, shipped late, refunded, with customers in region R" touches orders, order items, products, vendors, shipments, delivery events, refunds, customers, addresses, regions, and payment methods. Again, 10+ joins.

Authorization and Permission systems. "Which documents can user U see?" requires traversing users, groups, roles, role assignments, resource policies, ACLs, inheritance rules, and organizational hierarchies. This alone can be 12+ joins, sometimes recursive.

Analytics and BI. Star schemas look simple on paper, but real dashboard queries add slowly changing dimensions, hierarchy tables, permission joins, and attribution models. A "simple" dashboard query often hits 6-10 dimension tables plus access control.

Knowledge graphs and semantic systems. "Papers authored by people affiliated with institutions collaborating with company X on topic Y" requires joining papers, authors, affiliations, institutions, collaborations, and topics. Very common in search and recommendation systems.

Event sourcing and temporal queries. Reconstructing the state of an account at a point in time with approval chains requires joining entity tables, event tables, approval tables, history tables, and version joins. Temporal dimensions multiply join counts quickly.

AI / ML feature pipelines. Feature stores generate massive joins. Assembling a feature vector often requires joining user profiles, sessions, events, devices, locations, and historical aggregates. This is why feature stores are expensive.

The pattern is consistent across domains:

Domain            | Typical join count
SaaS CRM / ERP    | 8-20
Healthcare        | 10-25
Authorization     | 6-15
BI dashboards     | 6-12
Knowledge graphs  | 10-30
Feature pipelines | 8-20

Complex joins are not accidental. They emerge from normalized data, explicit relationships, compositional business rules, layered authorization, and historical records. Again, if you don't see many joins in your system, it usually means the schema is denormalized, the logic is in the application layer, or the product hasn't reached sufficient complexity yet. This does not mean the system is better. It often means complexity has been pushed into the application layer, which can add engineering cost without adding real value.

The evidence: JOB benchmark

The Join Order Benchmark (JOB) is a standard benchmark designed specifically to stress database query optimizers on complex multi-join queries [1]. Based on the Internet Movie Database (IMDb), a real-world, highly normalized dataset with over 36 million rows in its largest table, it contains 113 analytical queries with 3 to 16 joins each, averaging 8 joins per query. Unlike synthetic benchmarks like TPC, JOB uses real data with realistic data distributions, making it a much harder test of query optimization.

I ran this benchmark comparing three databases: SQLite (via JDBC), PostgreSQL 18, and Datalevin (an open-source database I build). All were tested in default configurations with no tuning, on a MacBook Pro M3 Pro with 36GB RAM. This is not a tuning shootout, but a look at out-of-the-box optimizer behavior. Details of the benchmark methodology can be found here.

Overall wall clock time

Database   | Total time (113 queries)
Datalevin  | 93 seconds
PostgreSQL | 171 seconds
SQLite     | 295 seconds (excluding 9 timeouts)

SQLite needed a 60-second timeout per query, and 9 queries failed to complete within that limit. The actual total time for SQLite would be substantially higher if these were included. For example, query 10c, when allowed to run to completion, took 446.5 seconds.

Execution time statistics (milliseconds)

Database   | Mean  | Median | Min | Max
Datalevin  | 773   | 232    | 0.2 | 8,345
PostgreSQL | 1,507 | 227    | 3.5 | 36,075
SQLite     | 2,837 | 644    | 8.1 | 37,808

The median tells the story: SQLite's median is nearly 3x worse than the other two.

Per-query speedup: Datalevin vs. SQLite

The chart at the top of this post shows the speedup ratio (SQLite time / Datalevin time) for each of the queries on a logarithmic scale (excluding 9 timeouts). Points above the 1x line (10^0) mean Datalevin is faster; points below mean SQLite is faster. The horizontal lines mark 1x, 10x, and 100x speedups.

Several patterns stand out:

  • The vast majority of points are above the 1x line, often by 10x or more.
  • For the hardest queries, Datalevin achieves 100x+ speedups. These are precisely the complex multi-join queries where SQLite's optimizer breaks down.
  • SQLite is rarely faster, and when it is, the margin is small.
  • The 9 timed-out queries (not shown) would push the ratio even higher.

Where SQLite breaks down

Timeouts. Queries 8c, 8d, 10c, 15c, 15d, 23a, 23b, 23c, and 28c all timed out at the 60-second limit during the benchmark runs. These represent queries with higher join counts where SQLite's optimizer failed to find an efficient plan.

Extreme slowdowns. Even among queries that completed, SQLite was often dramatically slower. Query 9d took 37.8 seconds on SQLite versus 1.6 seconds on Datalevin (24x). Query 19d took 20.8 seconds versus 5.7 seconds. Query families 9, 10, 12, 18, 19, 22, and 30 all show SQLite performing significantly worse, often by 10-50x.

Why SQLite falls behind

SQLite's query optimizer has fundamental limitations for complex joins:

  1. Limited join order search. SQLite uses exhaustive search for join ordering only up to a limited number of tables. Beyond that threshold, it falls back to heuristics that produce poor plans for complex queries.

  2. Weak statistics model. SQLite's cardinality estimation is simpler than PostgreSQL's, which itself has well-documented weaknesses [1]. With fewer statistics to guide optimization, SQLite makes worse choices about which tables to join first and which access methods to use.

  3. No cost-based plan selection for complex cases. For queries with many tables, SQLite's planner cannot explore enough of the plan space to find good join orderings. The result is plans that process orders of magnitude more intermediate rows than necessary.

These limitations are architectural; they are not bugs likely to be fixed in a near-term release. They reflect design tradeoffs inherent in SQLite's goal of being a lightweight, embedded database.

What this means for "SQLite in production"

SQLite is excellent for what it was designed to be: an embedded database for applications with simple query patterns. It excels as a local data store, a file format, and a cache. For read-heavy workloads with straightforward queries touching a few tables, it works extremely well.

But the production systems described above (CRM, EHR, e-commerce, authorization, analytics) are precisely where SQLite’s query optimizer becomes a bottleneck. These are not hypothetical workloads, but the day-to-day reality of systems that serve businesses and users.

The "SQLite in production" advocates often benchmark simple cases: key-value lookups, single-table scans, basic CRUD operations. On those workloads, SQLite does extremely well. But production systems grow. Schemas become more normalized as data integrity requirements increase. Questions become more compositional as business logic matures. And at that point, the query optimizer becomes the bottleneck, not the network round trip to a database server.

Before choosing SQLite for a production system, ask: will our queries stay simple forever? If the answer is no, and it usually is, the savings in deployment simplicity may not be worth the cost in query performance as the system grows.

An alternative approach

In a previous post, I described how Datalevin, a triplestore using Datalog, handles these complex queries effectively. Its query optimizer uses counting and sampling on its triple indices to produce accurate cardinality estimates, resulting in better execution plans. Unlike row stores, where cardinality estimation is notoriously difficult due to bundled storage, a triplestore can count and sample individual data atoms directly.

This approach yields plans that are not only better than SQLite's, but consistently better than PostgreSQL's across the full range of JOB queries. Despite Datalevin being written in Clojure on the JVM rather than optimized C code, it still managed to halve the total query time in the JOB benchmark. The quality of the optimizer's decisions matters more than the raw execution speed of the engine.

For systems that need both deployment simplicity (Datalevin works as an embedded database too) and the ability to handle complex queries as they inevitably arise, a triplestore with a cost-based optimizer offers a practical alternative to either SQLite or a full client-server RDBMS. It is not a silver bullet, but it can deliver SQLite-like operational simplicity without giving up complex-query performance.
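For flavor, here is roughly what one of the multi-join questions from earlier looks like as a Datalog query of the kind Datalevin accepts. The attribute names are hypothetical, and this is just the query as data; in Datalevin you would pass it to d/q along with a database:

```clojure
;; "Refunded orders from customers in region R" -- each :where clause is
;; one triple pattern, and the optimizer (not you) picks the join order.
(def refunded-orders-in-region
  '[:find ?order
    :in $ ?region
    :where
    [?order    :order/status    :refunded]
    [?order    :order/customer  ?customer]
    [?customer :customer/region ?region]])
```

Because every clause touches a single attribute index, the engine can count and sample each pattern independently, which is where the accurate cardinality estimates come from.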

If you have different results or have tuned SQLite to handle these queries well, I would love to compare notes. The goal here is not to dunk on SQLite, but to surface a missing dimension in a discussion that often defaults to deployment tradeoffs alone.

References

[1] Leis, V., et al. "How good are query optimizers, really?" VLDB Endowment. 2015.

Permalink

Gaiwan: Hire Us!

Hire Us!

We are actively pursuing new work with clients old and new. We're interested first and foremost in Clojure/ClojureScript gigs, but happy to chat about any potential leads even if you yourself are not hiring.

Things we can do for you:

  • Long- or short-term team augmentation
  • Programming new features
  • Modernizing old codebases
  • Clojure / Clojurescript coaching
  • Make your team more productive by improving dev tooling
  • Consulting on how to manage YOUR open source and community
  • Feature work on OUR open source (all lambdaisland/Gaiwan libraries)
  • Architecture review
  • Cleaning up vibe-coded code bases that have become unmaintainable
Mitesh Shah profile photo with wild hair and a blue suit.

Our colleague Mitesh Shah is open to new work. He's a talented engineer, practical and entrepreneurial, with a lot of experience at startups. He knows how to ship, he's a great communicator, and above all he's a great person to work with. He's truly full stack, from UI/UX to the database and back.

Bettina Shzu-Juraschek, who covers legal, tax, HR, and operations at Gaiwan, can create financial overviews and close money leaks for small businesses.

2026 Conferences Preview


This year we're looking forward to attending the Babashka conference on May 8, 2026, and Dutch Clojure Days on May 9, 2026, in Amsterdam. It's always great to get together with the Clojure community in real life! We're booking a sailboat in Weesp with accommodations for up to 16 people; if you want to book a berth, either in your own room or in a shared room, sign up here.

Bettina will be at FOSS Backstage and FOSS Backstage Design in Berlin from March 16-18, 2026, to push forward our open source projects. Reach out if you want to connect there.

#tea-break

At Gaiwan we share interesting reads in our #tea-break channel. Here's a selection:

Why there’s no European Google? "Can we just stop equating success with short-term economic growth? What if we used usefulness and longevity? What if we gave more value to the fundamental technological infrastructure instead of the shiny new marketing gimmick used to empty naive wallets?"

The Office according to "The Office." by Ribbonfarm. An oldie but goodie. Are you a sociopath, a loser, or clueless?

The Bitter Lesson by Rich Sutton. In the long run, general-purpose methods that leverage massive computation—specifically search and learning—consistently outperform specialized systems built on human domain expertise.

Yayyay events shares the financial statements from their events. "Organizing Lambda World this year came with a price tag of €52,000,...with a €4,000 deficit." If we had included the costs of our salaries in the budget for the Heart of Clojure conference we organized in 2024, we would have also been 5 figures in the red. 🙁

What are you reading these days?



Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.