Building elegant interfaces with ClojureScript, React, and UIx

During Clojure South, João Lanjoni, Software Engineer at Nubank, addressed a central challenge of modern web development: how to combine the ergonomics of ClojureScript with the maturity of React to build scalable, high-performance interfaces. 

According to João, the solution is UIx, a tool that represents the new generation of bridges that further aligns the Clojure universe with the React ecosystem. In his session, he detailed the context, the limitations of previous approaches, and the value of UIx as a new, efficient entry point for React developers into ClojureScript.

From 2013 to today: React and ClojureScript in perspective

Since its launch in 2013, React has redefined the structure of frontend applications by introducing concepts like declarative, component-based rendering. The ClojureScript community quickly responded with idiomatic interfaces like Reagent, which became the de facto standard thanks to its solidity: a minimalistic interface between ClojureScript and React that uses a Hiccup-like syntax to define components. With the arrival of functional components and hooks, starting around 2019, new interfaces emerged to provide a direct way of using functional components instead of the older class-based ones.

However, as React evolved towards modern patterns, including concurrent rendering, functional components, and new ways to manage component state, Reagent remained tied to class-based components, mainly for backward compatibility. This mismatch brought several limitations: performance problems in large codebases (because Hiccup is parsed at runtime), friction with functional components (users may have to annotate every functional component usage even though they are now the React standard), and hindered interoperability with modern React libraries such as Material UI, Mantine, and Ant Design, widening the gap between the two ecosystems.

What UIx changes in your code

UIx emerges to resolve this divergence. Acting as a thin interface between ClojureScript and modern React, its focus is technical and pragmatic: it offers a minimal abstraction layer, more predictable performance, and the direct use of functional components and hooks. Furthermore, it ensures native interoperability with the React ecosystem, allowing the lifecycle and state management to be handled directly by React itself. 

“If React already handles state and lifecycle management well, why not let it do that?”

João Lanjoni, Software Engineer at Nubank

Instead of creating a complete framework or adding unnecessary abstractions, UIx is a lightweight bridge, leveraging what modern React does best, resulting in a ClojureScript codebase with idiomatic syntax but identical behavior to modern React.

UIx component structure

In practical terms, UIx centralizes component construction around two elements: defui for declaring React components and $ for rendering elements in an explicit and lightweight way. Component bodies process props identically to React. Hooks such as useState are exposed using idiomatic ClojureScript conventions, like use-state, with UIx handling the translation to native React APIs. This ergonomics combines the best of ClojureScript syntax with the React architecture, which, according to João, eliminates the need to train React developers in the internal details of layers like Reagent or Re-frame, keeping the mental model aligned with the React mainstream.
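
The shape of a UIx component can be sketched roughly like this (a hypothetical counter; `defui`, `$`, and `use-state` come from `uix.core`):

```clojure
(ns app.counter
  (:require [uix.core :as uix :refer [defui $]]))

;; defui declares a React functional component; props arrive as a map.
(defui counter [{:keys [initial]}]
  ;; use-state is UIx's idiomatic name for React's useState hook.
  (let [[n set-n!] (uix/use-state initial)]
    ($ :div
       ($ :p "Clicked " n " times")
       ;; As in React, the setter accepts a value or an updater function.
       ($ :button {:on-click #(set-n! inc)} "Click me"))))

;; Rendered elsewhere with: ($ counter {:initial 0})
```

Note how little ceremony there is: no wrappers around hooks, no special interop forms, just a map of props and React's own state model underneath.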

Performance in figures

A highlight of the presentation was a chart, created by Roman Liutikov, the UIx maintainer, comparing the call stack depth when rendering a simple component in pure React, UIx, and Reagent. React exhibits the shortest path; UIx, by adding only a thin layer, follows closely. In contrast, Reagent, because Hiccup is interpreted at runtime, shows a significantly deeper call stack. While the difference is negligible in small applications, the impact on predictability and performance becomes noticeable, and grows, in products with hundreds or thousands of components.

Who is already using UIx in production

João presented three real-world examples, all highlighted on the project’s official page:

  • Metosin, one of the largest Clojure consultancies in Europe;
  • Pitch, an AI presentation platform with amazing slide decks;
  • Cognician, an e-learning platform for personal development.

The Pitch case is particularly impressive.

The team migrated 2,500 components from Reagent to UIx, maintained compatibility with Re-frame, and saw improvements in predictability and performance.

Metosin, meanwhile, employs Juho Teperi, one of the main contributors to Reagent. Teperi built an example project for a full-stack app using Clojure and ClojureScript and chose UIx for the web interface, also using Material UI as the component library without any special wrapper.

When someone who helped build the previous tool begins to advocate for the new approach, it says a lot about the current moment of the technology, all the more so now that a new version of Reagent has been released, introducing default functional components and a thinner hooks wrapper (also inspired by UIx).

Reducing the developer learning curve

UIx’s value extends to the hiring and development of engineers, which opens a path for more professionals to enter the ClojureScript ecosystem without the requirement of mastering the intricacies of Reagent, Re-frame, or the atom-based state model from day one. It represents a pragmatic approach to lowering barriers without sacrificing the benefits of a functional and declarative language.

“The greatest value of UIx is allowing React developers to write ClojureScript with a minimal learning curve.”

João Lanjoni, Software Engineer at Nubank

When UIx is the best choice

UIx is especially recommended for modern, complex front-end applications and for teams already familiar with React. It is ideal for codebases that rely heavily on hooks and for projects requiring interoperability with the latest React libraries, with a view toward strong long-term growth. The library is intentionally simple and does not attempt to reinvent global state management: instead of adding unnecessary layers, it stays compatible with mature React libraries like Zustand and Jotai, or lets you use a custom hook that subscribes to a Clojure atom for global state (similar in spirit to those libraries).

In essence, UIx does not seek to replace React but rather to act as a thin, modern, and pragmatic bridge. Its goal is to allow teams to build scalable front-ends with the power of React, while preserving the expressiveness and elegance of the Clojure philosophy and syntax. For complex and modern projects in ClojureScript, UIx may be the missing link.

The post Building elegant interfaces with ClojureScript, React, and UIx appeared first on Building Nubank.


2025 Highlights

Some notes on the year.

Movies/TV

Lots of TV shows this year. These are some of the ones that stood out.

Great

  • Andor
  • Adolescence
  • The Rehearsal - Season 2
  • The Pitt (probably my favourite of the year)
  • The Chair Company
  • Squid Game Season 3 (might be controversial to have this here, but I enjoyed it)
  • (movie) Jia Zhangke’s “Caught by the Tides”. Deeply moving meditation on time, love, displacement and process.
  • Long Story Short
  • (movie) No Other Choice
  • The Studio
  • I also went to a screening of Kwaidan (1964) this year and it was incredible.

Honourable Mentions

  • The Eternaut
  • Pachinko - Season 2
  • Severance - Season 2
  • Foundation - Season 3
  • Dept Q.
  • Alice in Borderland - Season 3
  • Slow Horses

Disappointments

  • The Last of Us - Season 2
  • (movie) One Battle After Another - had its good points definitely, but I always have very high expectations for PTA and the last two let me down.
  • Alien: Earth - I did really enjoy this, but a lot of problems with it too (as an ‘Alien’ installment)

Books

Not too much reading this year, but my favourite was definitely “Every Living Thing” (Jason Roberts).

I also enjoyed:

  • Solaris
  • Pachinko
  • Drive Your Plough Over the Bones of the Dead
  • Delta V

Travel

Some for work, some for pleasure:

  • Japan (I visited many places in this wonderful country! Highlights - Kyoto, Naoshima Island)
  • Seattle
  • Baku

Programming

Continuing to learn more about Clojure. I program purely as a hobby.

I participated in the first Scinoj Lite conference, which had some great talks. My project looked at ways of evaluating LLMs (from a very basic, almost ’naive’, perspective).

Write-up of my LLM evaluation project

Played around with the new Clojure ‘flow’ library.

Clojure Flow Blog Post

Clojure Flow project

I started a webscraping project that is trying to map Irish-language content on the .ie domain.

Irish language webscraping project

I also enjoyed this year’s advent of code.

Advent of Code (Clojure)


Clojure Deref (Dec 23, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

The annual Clojure surveys are live

Help shape the future of Clojure!

Whether you use Clojure, ClojureScript, Babashka, or any other Clojure dialect, please fill out the 2025 State of Clojure Survey and spread the word on social media.

This survey gives us the best snapshot of the Clojure community, so help us get as many participants as possible.

If you use ClojureScript or dialects like Squint, Cherry, nbb, and such, please fill out the 2025 State of ClojureScript Survey and share it with others.

Thank you for your help!

Upcoming Events

Libraries and Tools

Debut release

  • cljs-uix-electron - Uix + Electron starter

  • cljs-uix-wails - Wails + ClojureScript starter

  • Oak - Oak is a Free and Open Source Identity Provider that you can host yourself

  • immersa - Open Source Web-based 3D Presentation Tool

  • bb-timemachine - Run code back in Git-time.

  • solid-cljs - ClojureScript bindings to Solid

  • clojars-download-stats - An always up-to-date, complete SQL export of artifacts daily downloads since November 2012

  • malt - Malli-Typed interfaces for Clojure

  • distributed-scope - Run one lexical scope across distributed peers.

Updates

  • repath-studio 0.4.11 - A local web-based vector graphics editor that combines procedural tooling with traditional design workflows.

  • dtype-next 10.000-beta-11 - A Clojure library designed to aid in the implementation of high performance algorithms and systems.

  • repl-mcp d00f661 - Model Context Protocol Clojure support including REPL integration with development tools.

  • virtuoso 0.1.2 - A number of trivial wrappers on top of virtual threads

  • bbin 0.2.5 - Install any Babashka script or project with one command

  • stripe-clojure 2.1.0 - Clojure SDK for the Stripe API.

  • muutos 2025-12-18 - Muutos is a zero-dependency Clojure library for reacting to changes in a PostgreSQL database.

  • cherry 0.5.34 - Experimental ClojureScript to ES6 module compiler

  • nbb 1.3.205 - Scripting in Clojure on Node.js using SCI

  • replicant 2025.12.1 - A data-driven rendering library for Clojure(Script) that renders hiccup to DOM or to strings.

  • joyride 0.0.72 - Making VS Code Hackable like Emacs since 2022

  • process 0.6.25 - Clojure library for shelling out / spawning sub-processes

  • fireworks 0.19.0 - Fireworks is a themeable tapping library for Clojure, ClojureScript, and Babashka.

  • bling 0.9.2 - Rich text console printing for Clojure, ClojureScript, and Babashka.

  • clj-kondo 2025.12.23 - Static analyzer and linter for Clojure code that sparks joy

  • calva 2.0.543 - Clojure & ClojureScript Interactive Programming for VS Code

  • sci 0.11.50 - Configurable Clojure/Script interpreter suitable for scripting and Clojure DSLs

  • scittle 0.7.30 - Execute Clojure(Script) directly from browser script tags via SCI

  • partial-cps 0.1.42 - A lean and efficient continuation passing style transform, includes async-await support.


12 years of Component: A decade of interactive development

A little over a month ago, Nubank’s office in Vila Leopoldina, São Paulo became the meeting point for the Clojure community across South America and beyond. The second edition of Clojure South brought together more than 200 developers, researchers, and language enthusiasts for two days of knowledge sharing, connection, and celebration of functional programming.

The event reinforced Brazil’s role as one of the most vibrant hubs for the Clojure community, and Nubank’s role as an active focal point for technology communities.

It was in this atmosphere of shared enthusiasm that Alessandra Sierra, Principal Software Engineer at Nubank, opened the conference with her talk “12 Years of Component”, reflecting not only on the history of one of Clojure’s most influential libraries, but on her personal experience helping shape how thousands of developers work today.

Where the journey began

Alessandra’s story dates back to 2007, when she attended a meetup in New York City where Rich Hickey publicly introduced Clojure for the first time. She walked out of that session impressed as Clojure brought the power of an interactive Read-Eval-Print Loop (REPL) to the JVM ecosystem, enabling software developers to inspect and modify running programs and receive immediate feedback.

Sierra became one of the earliest adopters of Clojure for professional work and made significant early contributions to its standard library. Within a few years, this led to her joining Relevance, later renamed Cognitect, which was acquired by Nubank in 2020.

The power of the REPL and the frustration of interruptions

As a consultant at Cognitect, working with teams adopting Clojure for real-world applications, Sierra noticed a recurring pattern: the REPL gave developers enormous advantages in feedback and velocity, but many application structures made that experience harder than it needed to be.

In the early 2010s, web development in Clojure often centered around Ring: simple and elegant, but with trade-offs. Running a Jetty server from the REPL could block the main thread, and reloading code often left stale definitions in memory. Restarting the REPL became routine, and frustrating.

Sierra wanted developers to keep the flow of interactive development even in systems with state and complexity. The core question was simple but fundamental: How can a system keep running while the developer continues evolving it?

The birth of Component

Instead of accepting the friction caused by restarting the REPL every time code changed, Sierra spent the following years designing a new discipline — a workflow that would eventually be known as the Reloaded Workflow, named after her widely read blog post “My Clojure Workflow, Reloaded.”

The goal was straightforward but transformative: developers should be able to evolve a running system safely, consistently, and without losing context.

This approach combined the tools.namespace library with Component, along with design practices that encouraged explicit dependencies, minimized global state, and separated pure logic from stateful boundaries. The result was a development experience centered on the REPL, where applications could be started, stopped, refreshed, and inspected in real time, without breaking flow.

Component offered a lightweight way to model systems as independent parts with clear lifecycles, without sacrificing functional design. Its small API surface and long-term stability were intentional: changes were introduced slowly and deliberately, helping the library remain simple, approachable, and durable.
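
In code, that discipline looks roughly like this sketch (a hypothetical web-server component; `Lifecycle`, `system-map`, and `using` are Component's actual API, while `start-jetty!`, `stop-jetty!`, and `new-database` stand in for your own code):

```clojure
(ns app.system
  (:require [com.stuartsierra.component :as component]))

(defrecord WebServer [port database server]
  component/Lifecycle
  (start [this]
    ;; Keep a handle to the running server so stop can shut it down.
    (assoc this :server (start-jetty! port database)))
  (stop [this]
    (when server (stop-jetty! server))
    (assoc this :server nil)))

;; Dependencies are explicit: the web server is started after, and
;; handed, the database it depends on.
(def system
  (component/system-map
   :database   (new-database config)
   :web-server (component/using (map->WebServer {:port 8080})
                                [:database])))

;; At the REPL: start, inspect, stop, reload code, start again.
;; (def running (component/start system))
;; (component/stop running)
```

The whole system becomes a value you can start and stop from the REPL, which is exactly what keeps the interactive flow intact.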

A lasting impact

Twelve years after its introduction, Component remains one of the most influential libraries in the Clojure ecosystem. It has been referenced more than 13,000 times in Nubank’s production source code, continues to shape extensions and forks, and has even inspired ports to other languages — a level of longevity rarely achieved by tooling with such a minimal footprint.

Sierra closed her talk by acknowledging how fortunate she was to be at the right place, at the right time, with the right job, as Clojure was emerging. That kind of timing can’t be engineered.

But her advice to people beginning with Clojure or open-source was universal:

“Work on whatever you find interesting. Find a pattern that’s useful, build tools that help others benefit from it. But mostly, just have fun, because that’s the only thing you can choose.”

Alessandra Sierra, Principal Software Engineer at Nubank

The post 12 years of Component: A decade of interactive development appeared first on Building Nubank.


Machine Learning in Clojure with libpython‑clj: Unlocking Causal Insights Using Microsoft’s EconML [Series 3]

“Beyond A/B testing: causal inference meets functional programming.”

In the first two parts of this series, we kept our core in Clojure while still using the best of Python.

╰┈➤ Series 1: In the first part, we explore how libpython-clj lets you use Python’s machine learning libraries right from the JVM, without needing to jump between languages.

╰┈➤ Series 2: In the second part, we explored Bayesian Networks. They are great for building models that actually make sense to humans, not just machines. That is a big deal in fields like healthcare or finance, where you cannot rely on black-box answers. Clarity is important. 

In Series 3, we focus on causal inference using EconML from Microsoft Research.

╰┈➤ Predictive models answer “what might happen?”

╰┈➤ Causal inference asks “why did it happen?” and “what if we change the action?” 

That difference is essential when you need decisions you can trust, not just guesswork.

A simple A/B test might say a campaign increased conversions by 5%. EconML goes a layer deeper. It shows that students saw a 20% increase, while retirees saw no change. So you do not get just an average. You get heterogeneous treatment effects across segments. That is what you need to act with confidence.

If you work in Clojure, the process does not really change. You write the same clean and functional code. When you need Python’s causal tools, just use libpython-clj. You run your models, whether they use observational or experimental data. Then send the results right back into your JVM apps, without leaving Clojure.

Where this approach really shines:

╰┈➤ Dynamic pricing: Set prices by segment, not with a one-size-fits-all approach.

╰┈➤ Marketing: Focus on people who actually benefit, skip the rest.

╰┈➤ Healthcare: See how treatments work for different patient groups.

╰┈➤ Policy: Compare what works and break it down by demographic.

No need to make big promises. You get smarter decisions with models that separate signals from noise and show what’s making a difference for different people.

An infographic that shows “Prediction: Discount → +5% sales” vs “Causality: Students +20%, retirees no effect.”

🔗 Internal links:

[Series 1: ML in Clojure with libpython‑clj]

[Series 2: Bayesian Networks with libpython‑clj]

🌐 External link: Microsoft Research EconML – This page provides the official library overview with docs, papers, and examples.

The Problem with Traditional ML

Most traditional ML models are built on the idea that:

╰┈➤ One pattern fits the whole population

╰┈➤ One prediction applies to everyone

╰┈➤ One model captures the “average” relationship

This works only when people behave similarly, which they don’t. This is the homogeneity assumption.

A/B testing compares two groups and reports one average result.

But that average hides:

╰┈➤ Who loved it

╰┈➤ Who didn’t care

╰┈➤ Who reacted negatively

This variation is called heterogeneous treatment effects, the same issue ML struggles with.

“Traditional ML and A/B testing both look at averages. But averages hide differences. People don’t respond the same way, so relying on one number leads to decisions that don’t match what your audience actually needs.”
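
As a toy illustration of that point (plain Clojure, with made-up numbers), the same data can yield one flattering average while hiding very different per-segment lifts:

```clojure
;; Each row: a user's segment, whether they saw the campaign (treated?),
;; and whether they converted. All numbers invented for illustration.
(def users
  [{:segment :student :treated? true  :converted? true}
   {:segment :student :treated? true  :converted? true}
   {:segment :student :treated? false :converted? true}
   {:segment :student :treated? false :converted? false}
   {:segment :retiree :treated? true  :converted? true}
   {:segment :retiree :treated? true  :converted? false}
   {:segment :retiree :treated? false :converted? true}
   {:segment :retiree :treated? false :converted? false}])

(defn rate [rows]
  (double (/ (count (filter :converted? rows)) (max 1 (count rows)))))

;; Lift = conversion rate of treated minus conversion rate of control.
(defn lift [rows]
  (let [{treated true control false} (group-by :treated? rows)]
    (- (rate treated) (rate control))))

(lift users)                                  ;; aggregate lift: 0.25
(update-vals (group-by :segment users) lift)  ;; students 0.5, retirees 0.0
```

One average says "the campaign works"; the segment breakdown says it only works for half the audience.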

What should we monitor?

╰┈➤ Segment differences: Don’t stop at the average. Break results down by audience groups to see who benefits and who doesn’t.

╰┈➤ Adverse effects: A “winning” variant can still hurt certain groups. Look for where performance drops. 

╰┈➤ Context matters: Timing, demographics, past behavior, and geography all shape how people respond.

The Contrast Explained:

| Metric | Overall Impact | Key Takeaway |
| --- | --- | --- |
| Aggregate Lift (+5% overall) | Shows mild success, but hides differences between groups. | Average view is misleading: do not assume one strategy fits all. |
| Segmented Lift: Students (+20%) | Robust response. | Action: Invest more in promotions for this group. |
| Segmented Lift: Bargain Hunters (+12%) | Strong positive effect. | Action: Keep a moderate investment and try new offers. |
| Segmented Lift: Retirees (0%) | No impact. | Action: Stop spending here and move the budget elsewhere. |

Traditional ML predicts outcomes from inputs. Useful, but it does not tell you why things happen. It will not tell you what changes will occur if you take a different action. That is the gap causal inference fills.

Causal methods aim to separate correlation from cause. They work with real‑world logs and customer behavior (observational data). They also work with controlled trials, such as A/B tests (experimental data). You can use them even when a clean experiment is not feasible.

Here is where it helps:

╰┈➤ Dynamic pricing: Adjust prices based on how different segments actually respond, not just who looks similar.

╰┈➤ Churn reduction: Treatment effect estimates show which actions reduce cancellations, and for whom.

╰┈➤ Policy evaluation: Compare new vs old programs and see effects across demographics (heterogeneous effects).

And that is why it matters: with causal inference, you move from “what’s likely” to “what works,” using observational data or experiments when you have them. 

Introducing EconML

EconML is a Microsoft Research library. It estimates causal effects using machine learning. Most ML predicts outcomes. EconML asks why an outcome happened, and what would change if you took a different action.

The core method is called Double Machine Learning (DML). It trains two models, not one:

╰┈➤ Propensity score model: Estimates the probability of receiving a treatment (e.g., whether a customer is likely to receive a discount).

╰┈➤ Outcome model: Predicts the result of that treatment (like whether the discount leads to a purchase).

By combining these, EconML helps separate correlation from causation. That makes the insights more dependable than simple averages.

Diagram of the DML workflow — inputs → propensity score → outcome model → causal effect estimate.
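
From Clojure, wiring up those two models via libpython-clj might look like the sketch below (assuming `econml` and `scikit-learn` are installed in the local Python; `Y`, `T`, and `X` stand in for your outcome, treatment, and feature data):

```clojure
(ns app.causal
  (:require [libpython-clj2.require :refer [require-python]]
            [libpython-clj2.python :as py :refer [py.]]))

(require-python '[econml.dml :refer [LinearDML]]
                '[sklearn.ensemble :refer [RandomForestRegressor
                                           RandomForestClassifier]])

;; model_y predicts the outcome; model_t predicts treatment propensity.
(def estimator
  (LinearDML :model_y (RandomForestRegressor)
             :model_t (RandomForestClassifier)
             :discrete_treatment true))

;; Y, T, X: your outcome, treatment, and features (numpy arrays or seqs).
(py. estimator fit Y T :X X)   ;; fits both nuisance models
(py. estimator effect X)       ;; per-row treatment effect τ(X)
```

The keyword-argument style (`:model_y`, `:X`) is how libpython-clj marshals Python kwargs, so the call mirrors the equivalent Python code almost line for line.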

  • Marketing campaign effectiveness: Identify which groups benefit from a promotion. Students may respond well, but retirees show nothing. Spend your money where it works, skip the rest.
  • Dynamic pricing: Set prices based on how different customers respond. Younger customers typically seek deals, while loyal customers are less price-sensitive. Do not just price for the average; match prices to the people.
  • Medical treatment outcomes: Figure out how the impact of treatment changes with age, gender, or medical background. These details help doctors tailor care to each patient.
  • SaaS churn reduction: Identify which actions actually keep people from canceling, and who benefits from them. Focus on what makes a difference, and drop the stuff that does not.
  • Policy impact in economics: Compare new and old programs for different groups. Focus on policies that have a meaningful positive impact. Microsoft Research shares real examples and case studies using the EconML toolkit.

Diagram of segment‑level effects for each use case.

╰┈➤ Marketing campaign effectiveness.

╰┈➤ Dynamic pricing.

╰┈➤ Medical treatment outcomes.

╰┈➤ SaaS churn reduction.

╰┈➤ Policy impact in economics.

🌐 External link: Microsoft Research EconML – Case Studies – This page shows how EconML is used in real cases. It’s the official source from Microsoft Research.

Why EconML + Clojure via libpython‑clj

You do not have to leave the JVM just to use Python’s machine learning tools. With libpython-clj, you can use Python libraries like EconML right from your Clojure code. Reuse your old machine learning scripts, call familiar Python functions, and stay in your Clojure environment. It is all connected. There is no need to jump between languages or platforms.

╰┈➤ Faster experimentation: Test new ideas quickly without jumping between different tech stacks.

╰┈➤ Expressive functional code: you get the simplicity of Clojure and the power of Python’s machine learning tools.

╰┈➤ JVM ecosystem integration: your results move straight into enterprise systems without any awkward workaround code.

╰┈➤ Lower barriers: You can build on what you already have; no need to start over from scratch.

Let’s say you want to figure out what happens when you send out newsletters.

Dataset structure:

╰┈➤ X = the features: things you know about your customers

╰┈➤ T = the treatment: whether they got a newsletter

╰┈➤ Y = the outcome: whether they bought something, or how much revenue you made.

EconML calculates τ(X), which indicates how much additional revenue you gain from sending a newsletter, broken down by customer type. Instead of just giving you one big average, you actually see how different groups react. For example,

╰┈➤ Dormant customers suddenly spend 30% more.

╰┈➤ VIP buyers spend 2% less when you send them a newsletter. (Negative effect)

╰┈➤ Bargain hunters go up by 12%, but only in certain situations. (Conditional effect)

So, now you know exactly who likes your emails, and who does not.

Table Comparing A/B Test vs EconML Uplift

| Segment | A/B Test Result (Average) | EconML Uplift (Segment-level) | Key Takeaway |
| --- | --- | --- | --- |
| Aggregate | +5% overall lift | | Average hides subgroup variation |
| Students | Not visible on average | +20% | Strong positive effect → invest more |
| Bargain Hunters | Not visible on average | +12% | Moderate effect → keep testing offers |
| Retirees | Not visible on average | 0% | No effect → stop spending here |

Applying EconML in Practice

After EconML gives you the uplift scores for each customer, you have what you need to make wise choices. Here is how it usually goes:

1️⃣ Rank everyone on your mailing list by their predicted lift.

2️⃣ Choose the top half of those with an uplift above zero, and send your newsletters to them.

3️⃣ Skip the bottom half to save yourself time and avoid any negative impact.

4️⃣ Customize the newsletter for each group. Give each segment the version that best fits them, so you are not just blasting the same thing to everyone.
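
Those four steps reduce to a few lines of plain Clojure (the uplift scores here are hypothetical, standing in for a fitted model's output):

```clojure
;; Per-customer uplift scores, as a model like EconML would produce.
(def customers
  [{:id 1 :segment :student        :uplift 0.20}
   {:id 2 :segment :retiree        :uplift 0.00}
   {:id 3 :segment :bargain-hunter :uplift 0.12}
   {:id 4 :segment :vip            :uplift -0.02}])

(def to-email
  (->> customers
       (filter #(pos? (:uplift %)))        ;; skip zero/negative uplift
       (sort-by :uplift >)                 ;; rank by predicted lift
       (take (quot (count customers) 2)))) ;; keep only the top half

;; Group recipients by segment so each gets a tailored version.
(group-by :segment to-email)
```

The decision layer is deliberately boring: all the intelligence lives in the uplift scores, and the targeting itself is ordinary sequence processing.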

Flowchart of the decision layer.

╰┈➤ Control A/B test: +$50k revenue

╰┈➤ EconML targeting: +$65k revenue

The uplift comes from sending fewer emails that do not matter: less spam, better delivery, and more revenue.

Bar chart comparing outcomes.

Technical Deep Dive: EconML Under the Hood

Residualization is EconML’s method for improving the reliability of causal estimates.

╰┈➤ It works by predicting what would’ve happened if the treatment had never occurred—the counterfactual outcome. 

╰┈➤ To avoid overfitting, EconML splits the data into training and validation sets. 

╰┈➤ There is also cross-fitting: models train on one subset of the data and are evaluated on a different subset. 

All of this helps cut through the noise and get to the real causal signals.  
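
The heart of that residualization step can be written down directly. In the partially linear model Y = θ·T + g(X) + noise, DML regresses the Y-residuals on the T-residuals; here is a one-dimensional sketch in plain Clojure (illustrative only, not EconML's actual code):

```clojure
;; y-res: Y minus each row's predicted outcome   E[Y|X]
;; t-res: T minus each row's predicted treatment E[T|X]
;; theta: the causal effect, via least squares on the residuals.
(defn dml-theta [y-res t-res]
  (/ (reduce + (map * t-res y-res))
     (reduce + (map #(* % %) t-res))))

;; If the Y-residuals are exactly twice the T-residuals,
;; the estimated effect is 2.0:
(dml-theta [2.0 -4.0 1.0] [1.0 -2.0 0.5])
;; => 2.0
```

Everything the nuisance models can explain away is already subtracted out, so whatever relationship remains between the residuals is attributed to the treatment.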

EconML also supports causal forests, which are decision trees designed to capture heterogeneous treatment effects.

╰┈➤ They split the data into subgroups and estimate effects for each branch.

╰┈➤ This helps discover new customer segments that respond differently to interventions.

Example: “Customers under 35 who browse frequently and have not bought in 60 days increase spend by 22% when shown Instagram ads.”

Tree diagram of a causal forest. Branches by age, browsing behavior, and purchase history, with treatment effects at each leaf.

Flexiana’s Role

Flexiana has been building Clojure solutions for 9 years. Our team brings deep expertise in functional programming, machine learning integration, and global software delivery. Our projects span healthcare, fintech, SaaS, and enterprise systems. Our goals:

╰┈➤ To empower teams with causal inference tools.

╰┈➤ To make advanced ML accessible to Clojure developers.

Flexiana’s focus is clear: Help organizations use causal ML without leaving their Clojure stack.

🔗 Internal link: Flexiana’s About page 

🌐 External link: Flexiana GitHub

FAQs (People Also Ask)

What is EconML?

EconML is a Python library that helps you figure out the cause-and-effect of your actions. It uses observational or experimental data and applies machine learning to econometric models. The goal is to determine why an intervention (or “treatment”) led to a specific outcome. It is about moving past simple prediction to understand individualized treatment effects (ITE).

How is EconML different from A/B testing?

They have different goals. A/B testing tells you the average effect: it answers whether a change works for everyone overall. EconML focuses on the heterogeneous effects: it tells you who is most (or least) impacted by that change.

Plus, EconML can use data you already have (observational data) to target people better, saving you the time and cost of running a separate experiment for every targeting idea.

Does EconML work with messy observational data?

Yes, that’s what it is built for. Observational data is messy. EconML uses innovative techniques, such as Double Machine Learning (DML), to manage the many variables that can skew your results. This helps it address common issues such as selection bias, yielding honest, reliable causal estimates from non-experimental data.

Why combine Clojure with Python for ML?

It is about using the best tool for each job. Python has the best ML libraries (scikit-learn, EconML, TensorFlow) for building the models. Clojure, running on the Java Virtual Machine (JVM), provides a robust, concurrent, and highly stable production environment for running models at scale. You get Python’s excellent science ecosystem with the JVM’s rock-solid backend.

What is a causal forest?

Think of a causal forest as a special kind of random forest. In regular random forests, the tree splits are based on predicting an outcome. In a causal forest (such as CausalForestDML), tree splits are based on maximizing the difference in the treatment effect between groups. This enables the algorithm to quickly identify and highlight the specific customer traits (features) that drive the uplift variation.

Conclusion: The Future of ML in Clojure

EconML changes how we use machine learning. Predictive models tell us what might happen. EconML helps explain why it happens and what changes if we act differently. That is useful when you need decisions based on cause and effect rather than averages.

With Clojure and libpython‑clj, you get a clean, functional way to build models while reusing Python’s ML libraries. It is simple to keep your JVM stack while still leveraging proven tools.

╰┈➤ Expressive code: Your code stays straightforward and easy to follow.

╰┈➤ Python interop: You can use existing ML libraries without leaving the JVM.

╰┈➤ Enterprise fit: You can send those causal insights straight into production systems, with no extra steps.

Together, Clojure and EconML make machine learning more than just predictions. You can test faster, ship better, and actually trust what your models tell you.

Explore EconML with Flexiana. Let’s build causal ML solutions together.

🔗 Internal link: Contact Flexiana page 

🌐 External link: libpython‑clj GitHub repo

The post Machine Learning in Clojure with libpython‑clj: Unlocking Causal Insights Using Microsoft’s EconML [Series 3] appeared first on Flexiana.


Machine Learning in Clojure with libpython-clj: Bridging Functional Elegance and Python’s ML Power [Series 1]

Python is the default choice for machine learning. But many teams using functional languages wonder if they have to switch. At Flexiana, we prioritize Clojure, but we also use Python.

With libpython-clj, Clojure can tap into Python’s machine learning libraries without leaving the JVM. You get the expressiveness and REPL workflow you love, plus solid speed.

In this series, we’ll walk you through training a model in Python and integrating it right into your Clojure codebase. No hype, just straightforward steps to get machine learning running in Clojure.

Why This Matters Now

  • Python’s role: Python is the default for machine learning. It has TensorFlow, PyTorch, and scikit‑learn. If you are building models, you are probably using Python. That is fine. It is common and effective.
  • The issue: Python’s dominance creates a problem for teams that prefer other languages.
  • Flexiana’s stance: We are a Clojure‑first company at Flexiana. We work with functional programming, the REPL, and the JVM every day. We use Python when it makes sense. But our core is Clojure. So we asked a simple question: Do we need to leave Clojure to use modern ML tools?
  • Typical workflow pain: Many companies feel stuck in Python‑only workflows. Data scientists train models in Python. Developers then wrap those models in services to fit the main stack. It works, but it adds friction. It creates hand‑offs and silos. And it makes functional teams feel as if they are working on the edges.
  • A better path with libpython‑clj: With libpython‑clj, you do not have to pick one ecosystem and drop the other. You can keep Clojure’s clarity and still call Python’s ML libraries. Train a model in Python. Load it in Clojure. Use it in your codebase. No extra wrappers. No awkward bridges. Just a clean, direct path.
  • Why now: ML is now part of everyday software. Finance, healthcare, retail—many production systems use it. Most of those systems already run on the JVM. If you build in Clojure, you shouldn’t have to step out of your stack to add ML.
  • Why Clojure helps: Think of it like this: Python gives you the tools. Clojure gives you the environment. The REPL enables you to move faster. The functional style keeps code easy to reason about. The JVM fits into enterprise systems without fuss. Together, you get solid ML and clean integration.

That is what this series is about. ML in Clojure is not only possible; it is also practical. You can keep your language and still use Python’s ecosystem. And you can fit ML into your stack without compromise.

Python vs. Clojure for ML adoption:

  • Community size: Python has an extensive, global community, dominant in ML and data science. Clojure’s is small but growing; groups like Scicloj are active.
  • Library ecosystem: Python has a mature ML stack (TensorFlow, PyTorch, scikit‑learn, Keras). Clojure has few native ML libraries and relies on Python interop via libpython‑clj.
  • Industry adoption: Python is common across finance, healthcare, retail, and research. Clojure’s enterprise adoption is limited; it is often used in specialized or experimental work.
  • Learning curve: Python offers an easier start, with lots of tutorials and courses. Clojure has a steeper start; Lisp syntax and the functional style take time.
  • Integration with the JVM: Python’s is indirect; it runs outside the JVM and often requires wrappers or services. Clojure is a native JVM language and fits cleanly into enterprise stacks.
  • Performance in ML tasks: Python is strong for training, with good GPU/TPU support. Clojure is suitable for orchestration and integration; training is usually done in Python.
  • Current trend (2025): Python is still the top ML language in most surveys. In Clojure, there is growing interest in bridging Python ML with libpython‑clj.


Is ML in Clojure Possible?

Python dominates machine learning. The libraries are mature. The community is enormous. The tools fit Python well. But Clojure is not shut out. With its functional style and JVM roots, Clojure can also work with ML. The key is interop.

Chris Nuernberger built libpython‑clj to make this simple. Instead of wrapping models in services or switching stacks, Clojure can communicate directly with Python.

libpython‑clj is a bridge between Clojure and Python. It embeds the CPython runtime inside the JVM. You can call Python functions and use ML libraries from Clojure. Import TensorFlow, PyTorch, or scikit‑learn without leaving your REPL.

╰┈➤ JVM + CPython bridge: libpython‑clj runs CPython inside the JVM process.

╰┈➤ Direct calls: You call Python functions like Clojure functions.

╰┈➤ Shared workflow: Train a model in Python. Load and use it in your Clojure codebase.

This cuts friction: no extra wrappers. No microservices. No awkward hand‑offs. Just a clean, direct path that keeps Clojure as your primary language while using Python’s ML ecosystem.
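As a concrete sketch of that bridge (assuming Python and numpy are installed locally; the calls are from libpython-clj2’s public API):

```clojure
(require '[libpython-clj2.python :as py])

;; Start the embedded CPython runtime inside the JVM process.
(py/initialize!)

(require '[libpython-clj2.require :refer [require-python]])

;; Import a Python module as if it were a Clojure namespace.
(require-python '[numpy :as np])

;; Call Python functions like Clojure functions.
(np/mean (np/array [1 2 3 4]))
;; => 2.5
```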

Machine learning in Clojure with libpython‑clj is simple. You pull Python’s ML tools into your Clojure workflow. You stay in the REPL. You don’t need extra services.

╰┈➤ Train the model in Python: Use TensorFlow or PyTorch. Save the model when you’re done.

╰┈➤ Load the model in Clojure: Import Python modules with libpython‑clj. Call Python functions from Clojure.

╰┈➤ Run predictions in Clojure: Pass your data. Get results back without leaving the JVM.

Note: This is a simple placeholder. Your code will depend on the model and library you use.

╰┈➤ Step 1: Train the model in Python (TensorFlow/PyTorch).

╰┈➤ Step 2: Save the model file.

╰┈➤ Step 3: Import with libpython‑clj inside Clojure.

╰┈➤ Step 4: Run inference in your Clojure codebase.

Demonstrating ML in Action

Machine learning in Clojure with libpython‑clj follows a simple process. 

╰┈➤ Step 1: Train in Python: Use TensorFlow or PyTorch to build and train your model, then save it as a .pt (PyTorch) or SavedModel (TensorFlow).

╰┈➤ Step 2: Load in Clojure: You can import Python modules with libpython‑clj and load the saved model into the JVM.

╰┈➤ Step 3: Integrate: You can wrap inference in small Clojure functions and connect predictions to your data flow and test quickly in the REPL.

╰┈➤ Seamless API calls: You can call Python ML functions directly from Clojure without needing wrappers or microservices.

╰┈➤ REPL‑driven dev: It lets you test predictions, inspect tensors, and adjust data instantly.

╰┈➤ Boxed math: Passing boxed numbers or generic sequences can slow performance.

╰┈➤ Interop overhead: Frequent cross-language calls can add latency.

╰┈➤ Utility functions: Write utility functions if you want to convert data between Clojure types and Python‑friendly arrays or tensors.

╰┈➤ Optimized interop: You can batch calls, reduce crossings, cache modules, and reuse model objects.
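For instance (a sketch only; the helper names and the model’s predict method are assumptions, not a prescribed API), conversion utilities built on libpython-clj2 might batch inputs so each prediction crosses the language boundary once:

```clojure
(require '[libpython-clj2.python :as py])

(defn ->py-batch
  "Hypothetical helper: convert a seq of Clojure feature vectors
  into a single Python object, so we cross the boundary once."
  [rows]
  (py/->python (mapv vec rows)))

(defn predict-batch
  "Hypothetical helper: one Python call for the whole batch,
  then convert the results back to JVM data."
  [model rows]
  (py/->jvm (py/py. model predict (->py-batch rows))))
```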

Code comparison

Python (train and save):
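A minimal stand-in, using only the standard library (the linear “model” and the file name model.pkl are hypothetical placeholders for a real TensorFlow/PyTorch training run):

```python
# Placeholder "training": fit y ≈ w * x by least squares and save it.
# Stands in for a real TensorFlow/PyTorch model; "model.pkl" is a
# file name chosen for this sketch only.
import pickle

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

# Closed-form least-squares slope through the origin.
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

with open("model.pkl", "wb") as f:
    pickle.dump({"w": w}, f)
```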

Clojure (load and infer):
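A hedged counterpart sketch (it assumes a model pickled to model.pkl by a placeholder training script; the calls are from libpython-clj2’s public API, and real PyTorch loading would work differently):

```clojure
(require '[libpython-clj2.python :as py]
         '[libpython-clj2.require :refer [require-python]])

(require-python '[builtins :as pyb]
                '[pickle :as pickle])

;; Load the pickled placeholder model from Python, inside the JVM.
(def model
  (let [f (pyb/open "model.pkl" "rb")]
    (try
      (pickle/load f)
      (finally (py/py. f close)))))

;; Inference: pull the learned weight out of the Python dict and apply it.
(defn predict [x]
  (* (py/get-item model "w") x))
```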

Note: Real PyTorch loading usually restores a model class and its state_dict. The snippet above is a placeholder to show the flow.

And that is the idea: Python handles training. Clojure handles clean integration and deployment. Together, you get solid ML with a straightforward path to production.

Why Care About Clojure in the ML World?

╰┈➤ Clear transformations: Clojure’s functional style allows you to express data flows in fewer lines.

╰┈➤ Less boilerplate: You write what is essential. Transformations stay front and center.

╰┈➤ Easier pipelines: Short and direct code makes ML steps easier to read and maintain.

╰┈➤ Mature runtime: The JVM offers you good optimization, threading, and memory management.

╰┈➤ Enterprise-friendly: You can plug ML into your already existing systems without any significant rewrites.

╰┈➤ Stable under load: You get predictable performance for production workflows. 

╰┈➤ Clean structure: Clojure has a clean, simple structure and syntax. The focus stays on the data.

╰┈➤ REPL-first workflow: With Clojure, you can test, inspect, and iterate quickly.

╰┈➤ Low ceremony: Clojure has fewer moving parts. You can easily see what each step does.

🔗 Clojure official site: https://clojure.org

╰┈➤ Reduced dependency on Python-only teams: You are not stuck relying on Python teams for everything anymore. With Clojure, your own developers can jump in to handle integration and deployment, while the Python folks focus on what they do best: training the models. That way, you reduce risk, and scaling goes much more smoothly.

╰┈➤ Faster prototyping with REPL: Clojure’s REPL changes the game for prototyping. You get instant feedback: test predictions, inspect tensors, and tweak your data right there in the loop. You see what works (and what does not) before you commit to building anything significant.

╰┈➤ Integration with enterprise JVM ecosystems: You can plug models into your company’s JVM systems directly. Less friction. Smoother deployment. Everything lines up with the tools your team already knows and trusts.

╰┈➤ Python-only: Higher dependency on specialized teams. Slower integration. More overhead.

╰┈➤ Hybrid: Shared ownership. Faster iteration. Better fit for enterprise workflows.

FAQs

  • Question: Can you use Python ML libraries in Clojure?
  • Answer: Yes. You can bridge the CPython runtime with libpython‑clj and call Python code from Clojure.
  • Question: Is Clojure faster than Python for ML?
  • Answer: It depends. You can train models in Python for speed, and use Clojure for orchestration, integration, and deployment.
  • Question: Why not just use Python?
  • Answer: You can use Clojure for JVM interop, a REPL‑driven workflow, and clear functional code that fits enterprise systems.
  • Question: What companies use Clojure for ML?
  • Answer: Public, ML‑specific case studies are still scarce. Clojure‑first companies such as Flexiana apply this hybrid approach in domains like fintech and healthcare, typically pairing Python‑trained models with Clojure integration and deployment.
  • Question: How do you integrate a trained model into Clojure?
  • Answer: You can load the model with libpython‑clj, wrap conversions in utility functions, and call inference from your Clojure codebase.

Future‑proof your enterprise with Flexiana’s Clojure‑first ML solutions.

This concludes Series 1, where we introduced how to combine Clojure and Python for machine learning using libpython-clj.

In Series 2, we go a step further by exploring Bayesian Networks and how they enable smarter, more interpretable AI models.
👉 Continue with Series 2: https://flexiana.com/news/2025/12/machine-learning-in-clojure-with-libpython-clj-using-bayesian-networks-for-smarter-interpretable-ai-series-2

The post Machine Learning in Clojure with libpython-clj: Bridging Functional Elegance and Python’s ML Power [Series 1] appeared first on Flexiana.

Permalink

I built a “slow” news reader

TLDR: I released my news reader on lean-news.tech for users interested in intentional reading. I’m looking for more testers, and it’s currently free.

My love for curated news and the open web

Back at the beginning of my career, I was spending a lot of time in Netvibes (which coincidentally went full EOL this year). It was a fantastic tool to keep an eye on hundreds of websites.

I tried to switch to Feedly after Netvibes was bought out by Dassault Systèmes, but their UX did not click for me. I also found it infuriating that they completely suck up the content from the websites to make you read on Feedly. A lot of websites are monetized by advertisements and traffic; if you prevent users from visiting the source website, the writer potentially loses their monetization and any insights into their readers. You also lose the website design entirely.

Social networks like LinkedIn or Twitter could have been the evolution of RSS, but they did not embrace the open web. Instead, they are designed to keep your eyeballs glued to the screen as long as possible, with a fine balance between something interesting enough to make you stay, intertwined with ads and junk. It’s a shame, because they have all the data and resources to build a genius platform to help you learn and follow your favorite content producers. I wonder why LinkedIn is not following that path (they went in that direction 10 years ago); perhaps the lack of an economic model matching their size?

© Sarah Pflug

So, I decided to build my own solution.

I built a new news reader tailored to how I use RSS, and since RSS lost its trendiness, I built it to support websites lacking an RSS feed (I describe the technique here; it has evolved a bit since, but the central idea is the same: create a feed from idioms). It’s not as good as if the data were well exposed, but it’s good enough and continuously improving.

What makes Lean-News different?

I built it to support how I actually want to consume information:

  • Anti-binge philosophy: I have the bad habit of refreshing news websites multiple times a day. Feeds are retrieved at most once a day. That way, you ignore the urgent-seeming news, focus on the well-thought-out content, and prevent the refresh addiction.
  • RSS and more: It’s always frustrating when a website does not offer an RSS feed, or, more frustrating still, removes it (yeah, that happens quite often …). Lean-News can build a feed for such sites anyway. It’s not perfect, but it’s better than ignoring the site.
  • Solving the blank start with AI: a UX problem of RSS readers is the start, when you have 0 feeds. You can suggest generic content, but this is rarely what a user wants on an intentional news platform. Beyond the starting point, finding feeds that match a new interest means a lot of googling (which is fun, but it’s becoming harder with all the websites gaming SEO). To improve the experience, I built an LLM-powered feed finder. Describe your interests (e.g., “Clojure ecosystem” or “Vector databases”), and it finds the best-matching websites for you. Since LLMs are trained on the web, they are okay at this, though I have ideas on how to improve it.
  • Respecting content makers: The goal is to help you find the content, and then you can go to the source website to read it.
The current LLM-powered feed finder

How do I use lean-news today?

  • Following Clojure related news: what’s going on in the ecosystem, what are the interesting projects to follow? Not everyone is on Clojurians. In general, I think that Clojurians is a mixed blessing: it’s convenient for people already familiar with Clojure, but a barrier for developers outside the Clojure ecosystem, as you need to access it and its content is not well indexed by search engines.
  • Following the database space: as a software engineer, this is a foundation of building software that still evolves at a fast pace. For example, the (last year’s?) frenzy around vector databases, the “everything into Postgres” trend, and cloud providers implementing infinite scale on top of relational databases.
  • Everything AI (foundational model, agent, AI wrapper, Voice, etc): An extremely dynamic space to follow, cluttered with a lot of marketing and non-sense content. Following these topics on LinkedIn or Google News gives me mildly interesting articles, but with lean-news I can follow great authors and companies that are really innovating on the topics. No network effect, or maximizing engagement with the right keywords.
  • Tracking product changes: for products I like or use frequently, I keep an eye on their competitors to see what they offer and know where they are going. The alternative is newsletters, but they never stuck for me.
  • Tracking verticals: for larger topics like Economics and Energy, I have curated websites with interesting takes on them. In general, long-format articles, as opposed to the snippets or superficial bits you get on mass-information websites.
  • Just blogs: There is a lot of LLM slop out there, but still, a lot of good writers are writing quality content; you just have to find them, and use an RSS reader to not miss a bit of what they publish.

Still a lot to build

The UI is rudimentary and not as polished as some well-known RSS readers (Feedly, Inoreader, Newsblur, and so on). Eventually, it will get there.

Some bugs here and there, and features I wish were there, like the automatic maintenance of feeds (URLs change) or getting more out of the exploration features with automatic news feed suggestions.

To be transparent, I intend to monetize the site in the future. Getting enough money for the hosting and the AI token would already be a great milestone.

Still reading? Give lean-news a try and let me know if you find it useful or not :)

Permalink

December Q3 2025 Project Updates

As 2025 winds down, we have several Q3 project updates and a few more coming in January 2026 as several are on staggered schedules. A brief summary of each project is included to provide overall context. Thanks to everyone for your incredible work on these projects!

Ambrose Bonnaire-Seargent: Malli
Looking back on this project, it started with a proposal for an external analysis pass that could be used to optimize Malli validators. Now at the completion of the funding period, we’re solving the same problem, but instead of an additional tool, we’re applying the analysis directly within Malli’s validation algorithm.

Thomas Clark: Fastmath
However much Lewis Carroll may have creatively objected, complex numbers are now an essential part of modern life: from quantum computing upwards, whether we are aware of it or not. Clojure’s support for these numbers, however, remains sporadic, while its biggest competitor, the well-known comedy snake turned scripting language (https://www.geeksforgeeks.org/python/history-of-python/), treats complex numbers as first-class citizens.

With this funding, I would like to address the issue somewhat, particularly with regard to the implementation of complex matrices, but concerning a consistent complex API more generally.

Jeremiah Coyle: Fireworks

  • Publish Fireworks editor plugins/extensions/integrations for Emacs, VS Code, and IntelliJ. These are fairly simple extensions that involve some basic form rewriting for wrapping/unwrapping forms.
  • Add support for automatic detection of the 3 levels of color support (16-color, 256-color, or Truecolor), using an approach similar to Chalk. #42
  • Documentation of interactive workflow.
  • Enhanced documentation for theme creation.
  • Call-site options for quick formatting changes. For hifi printing, support call-site option to disable all truncation and ellipsis #14

Dragan Djuric: Uncomplicate Clojure ML
My goal with this funding in Q3 2025 is to develop a new Uncomplicate library, ClojureML.

  • Design a Clojure developer-friendly API for AI/DL/ML models (in the first iteration based on ONNX Runtime, but later refined to be even more general).
  • Implement its first backend engine (based on ONNX Runtime).
  • Support relevant operations as Clojure functions.
  • Build an extension infrastructure for various future backend implementations.
  • Provide a clean low-level integration with Uncomplicate, Neanderthal, Deep Diamond, and Clojure abstractions.
  • Make assorted improvements to Uncomplicate, Neanderthal, and Deep Diamond to support these additions.
  • Develop examples to help people get started.
  • Fix related bugs.
  • Write TESTS (of course!).

Jeaye Wilkerson: Jank
This quarter, I’ll be building packages for Ubuntu, Arch, Homebrew, and Nix. I’ll be minimizing jank’s dependencies, automating builds, filling in test suites for module loading, AOT building, and the Clojure runtime. I’ll be working to get the final Clang and LLVM changes I have upstreamed into LLVM 22, adding a health check to jank to diagnose installation issues, and filling in some C++ interop functionality I couldn’t get to last quarter. Altogether, this quarter is going to be a hodgepodge of all of the various tasks needed to get jank shipped.

AND NOW FOR THE REPORTS!

Ambrose Bonnaire-Seargent: Malli

Q3 2025 $9K, Report No. 3, Published December 12, 2025

Looking back on this project, it started with a proposal for an external analysis pass that could be used to optimize Malli validators. Now at the completion of the funding period, we’re solving the same problem, but instead of an additional tool, we’re applying the analysis directly within Malli’s validation algorithm. This is an excellent improvement, but the stakes are now much higher as we’re changing some of the oldest, most foundational code in Malli.

The big news is that recursive validators now compile to recursive functions! This is a major optimization, integrated directly into the heart of Malli.

The road to this point started in 2021, when I prototyped and ultimately—a year later—contributed a subtle enhancement to malli.generator to map recursive schemas onto recursive generators. The main insight: recursive schemas can be detected by finding cycles of Malli refs.

Over the last few months, I knew I needed to really nail why this works before exploiting it to optimize Malli’s validation.

This culminated in a simple but clarifying documentation improvement in which we learn something new about Malli itself.

I’ll include it in full here:


Mutable registries are a dev-time abstraction

For performance reasons, Malli heavily caches registry lookups once a schema has been created via m/schema.

Don’t rely on registry mutations to be recognized consistently unless all schemas are reparsed. Here’s a simple example:

(def registry*
  (atom {:int (m/-int-schema)
         :string (m/-string-schema)
         ::node :int}))

(def eagerly-cached
  (m/schema ::node {:registry (mr/mutable-registry registry*)}))

(swap! registry* assoc ::node :string)

(-> eagerly-cached m/deref m/form)
;; => :int

Even atomic transactions mutating multiple schemas simultaneously in a mutable registry are not reliable, as a parsed schema may have cached one eagerly and another lazily, leading to inconsistent results. See malli.core-test/caching-of-mutable-registries-test for a demonstration of this phenomenon.

In practice, this is analogous to Clojure’s treatment of vars. If a var is mutated, the most reliable general strategy to recognize the update is to refresh all namespaces that use that var. Similarly, if a registry is mutated, the best strategy for recognizing the update in all schemas is to recreate all schemas.


Back to ref cycles: this doc is relevant to recursive schemas because the cycle detection algorithm also breaks in the presence of unprincipled registry mutation. If we assume registries are immutable for the duration of cycle detection, then the algorithm seems air-tight to me, and so we went forward with using ref cycles to compile recursive schemas to recursive validators.

Recursive validators are superior to the previous approach because:

  1. they can be fully compiled ahead of time
  2. they take constant space relative to their inputs, where before it was linear in the maximum depth of the validated input
  3. they only need to compile one layer of recursion, where before it was equal to the maximum depth of the validated input

While this is a great improvement, there is even more we can do, with even higher impact. Not everyone uses recursive schemas, but it’s much more common to use refs more than once such as [:tuple ::expr ::expr]. Malli’s handling of validators for such schemas could be greatly improved. Here’s how I explained it to Malli’s maintainers:


Plumatic Schema solves this by caching validators for all schemas during compilation. In addition to handling recursive schemas, it also prevents a nasty exponential blowup of compilation size for even non-recursive schemas, that Malli also suffers from.

Malli’s validator compilation is exponential. This registry demonstrates how:

(def registry {::creates-1-validator [:tuple]
               ::creates-2-validators [:tuple ::creates-1-validator ::creates-1-validator ::creates-1-validator ::creates-1-validator]
               ::creates-16-validators [:tuple ::creates-2-validators ::creates-2-validators ::creates-2-validators ::creates-2-validators]
               ::creates-64-validators [:tuple ::creates-16-validators ::creates-16-validators ::creates-16-validators ::creates-16-validators]
               ::creates-256-validators [:tuple ::creates-64-validators ::creates-64-validators ::creates-64-validators ::creates-64-validators]
               ::creates-1024-validators [:tuple ::creates-256-validators ::creates-256-validators ::creates-256-validators ::creates-256-validators]
               ::creates-4096-validators [:tuple ::creates-1024-validators ::creates-1024-validators ::creates-1024-validators ::creates-1024-validators]
               ::creates-16384-validators [:tuple ::creates-4096-validators ::creates-4096-validators ::creates-4096-validators ::creates-4096-validators]
               ::creates-65536-validators [:tuple ::creates-16384-validators ::creates-16384-validators ::creates-16384-validators ::creates-16384-validators]
               ::creates-262144-validators [:tuple ::creates-65536-validators ::creates-65536-validators ::creates-65536-validators ::creates-65536-validators]
               ::creates-1048576-validators [:tuple ::creates-262144-validators ::creates-262144-validators ::creates-262144-validators ::creates-262144-validators]
               ::creates-4194304-validators [:tuple ::creates-1048576-validators ::creates-1048576-validators ::creates-1048576-validators ::creates-1048576-validators]})

With this registry, each level of depth N compiles (m/validator ::creates-1-validator) 4^N times.

e.g., (m/validator ::creates-4194304-validators) compiles (m/validator ::creates-1-validator) 4,194,304 (4^11) times.

Plumatic Schema would only compile it once. It’s not so trivial to achieve with dynamically scoped refs, but it’s the same idea as detecting ref cycles, which we can now do reliably. Here’s a reproduction of the issue https://github.com/frenchy64/malli/pull/36/files which I have been pondering since discussing https://github.com/metosin/malli/pull/1180


I may propose work on this for a future Clojurists Together project, please stay tuned.

Thank you Clojurists Together and the Clojure community for funding this project, it was highly enjoyable and I learnt a lot.
I hope you find the results useful.


Thomas Clark: Fastmath

Q3 2025 $2K, Report No. 1, Published December 8, 2025

Table of Contents

  1. Overview
  2. Surveying the scene
  3. Revised goals and towards future-proofed user-experience
  4. Outlook

Overview

Due to various circumstances - life, the universe and everything etc. - this project started significantly later than it should have and I apologise for that. It did however gain momentum quickly. Below, I consider the progress, as measured both chronologically and according to milestone. In a nutshell though, I have considered different possible avenues of implementation, played with various API strategies and made a couple of draft implementations.

Surveying the scene

The first task of the project was to more concretely situate the goals with respect to what already exists: that is, with respect to `fastmath` itself, the wider Clojure ecosystem, and the available Java libraries.

It was clear from the outset that `fastmath` hadn’t yet implemented complex matrices, but how real matrices and complex numbers themselves were implemented was still a question. Thanks to a valuable discussion with generateme though, I had my first look at the depths and came to understand the mixture of Apache Commons and hard-coded Clojure that made up the library as it was.

It was concluded that `core.matrix`, rightly or wrongly, was currently dead to the community and not the right avenue for this project. The next big Clojure allies were Neanderthal and dtype-next. Both of these are very powerful and mature, but Neanderthal was considered too low-level for fastmath, and dtype-next, although potentially promising, would realistically require a lot of original code, with most operations, decompositions, etc. needing to be implemented manually. The maths section of the `qclojure` library was also considered with interest, but it wasn’t implemented with efficiency as a goal. Instead, it will be used as something of a model consumer for this complex matrix project.

The conclusion, therefore, was, at least initially, to wrap a Java library. But which one? And how? The natural choice would have been to extend the coverage of Apache Commons, but, as a working physicist, performance is a strong goal for me, and it seemed that there were faster options available. Surveying the existing documentation online, `jblas` looked powerful but has complicated dependencies, and so the best all-round options (as demonstrated by the Java Matrix Benchmark, and filtered by available ’complex’ API) seemed to be EJML and ojAlgo. I experimented with both of these libraries somewhat, and my feeling is that ojAlgo has the greater performance, but that its API is much less intuitive. It took me significant time to work out how to do basic constructions and operations - and I also found a bug that required reflection (in the Java sense!) to overcome. Using EJML, however, was very straightforward, and I could quickly draft a solution. Going forward, therefore, in the knowledge that this project has a finite deadline, I decided to continue with EJML, but with a caveat: I wanted to keep the implementation details abstracted, so that we can swap alternative backends in and out more easily in the future.

Revised goals and towards future-proofed user-experience

Realising that I might have to change how complex ’numbers’ are implemented opened up a considerable can of worms, as did the temptation of unifying the complex implementation with the real implementation. In fact, (over)thinking and implementing a separation of user experience from the Java backend took up much of the remaining time so far. And the decision of how much to change of the existing `fastmath` API is an ongoing question, that will have to be finalized with generateme in the second half.

Fundamentally, the tension in this project is to make a library that is easy to use for maths/physics-minded people, is flexible enough to be extended into the future, and yet is still fast enough to live up to the library’s name. The current solution therefore is a sophisticated hierarchy of protocols: to represent mathematical identities, and for fast implementation; as well as a new, versioned, multimethod-based API. This second layer features overloaded mathematical operators such that the full linear algebra stack can be used intuitively, with operator ordering and type coercion happening ’under the hood’. If this proves too much of a performance cost though, then you can simply revert to the explicit protocol methods.

As an example, rather than `fastmath.protocols.matrix` being one of only two protocols, we now have a full mathematical structure available, with implementations like `fastmath.algebra.object.number.complex.ejml` and `fastmath.algebra.object.matrix.rectangular.complex.ejml`. This provides more obvious places for future specializations or alternative backends.

With mathematical properties, we can use Clojure protocols as they were intended, for flexibility and without reinventing the wheel when it comes to how to partition functions. For example, a complex number can now be easily implemented according to its definition: as ’a field on a normed space that has complex and polar coordinates’. What is a field, you might ask? Well, it’s an algebraic structure that forms a ring and a multiplicative group. As an example, that last sentence can now be represented programmatically (below). And by structuring the protocols this way, we get polymorphism across different types and domains for free.

(ns fastmath.protocol.algebra.structure.field
  (:require [fastmath.protocol.algebra.structure.ring :as ring]
            [fastmath.protocol.algebra.structure.multiplicative.group :as mgroup]))

(defprotocol  Field)

(def add ring/add)
(def multiply ring/multiply)
(def negate ring/negate)
(def one ring/one)
(def zero ring/zero)

(def inverse mgroup/inverse)

(defn ? [x]
  (and
   (ring/?  x)
   (mgroup/?  x)
   (satisfies? Field x)))

Outlook

Having considered the architecture, API creation, and first implementation tests, the project, as a whole, is on a firm footing for development. With the expansion of API goals, however, the proposal metrics will have to be reprioritised. In this light, the next steps will be to widen the matrix implementations beyond basic operations and to integrate them with other `fastmath` functions, like decomposition and Fourier functions. After this, I will focus on integration with the `clay` system, so that some sort of report and documentation can be published on time. On the current trajectory, it’s not clear that the performance comparisons, for example, will be completed during the official funding period, but, if not, they will surely follow promptly.


Jeremiah Coyle: Fireworks

Q3 2025 $2K, Report No. 2, Published November 30, 2025

I’m happy to report that 5 of the primary goals and 6 of the secondary goals were achieved in Q3. Many thanks to Clojurists Together for supporting this work!

Primary goals

  • Add support for automatic detection of the 3 levels of color support (16-color, 256-color, or Truecolor), using an approach similar to Chalk.
    #42
    Completed

  • Support call-site option to disable all truncation and ellipsis
    #14
    Completed

  • Documentation of interactive workflow:

  • VS Code Integration
    Completed

  • Cursive / IntelliJ Integration
    Completed

  • Emacs Integration
    In progress. Work on this will commence once a sufficient amount of data from the use of the Joyride and Cursive implementations has been gathered. This will inform any unforeseen ergonomic and/or implementation details.

Secondary goals

  • Allow for call-site changes to the label color.
    #53
    Completed
(? {:label-color :red} (+ 1 1))

  • Flag for eliding truncation and ellipsis at callsite
    #77
    Completed
(? :+ my-coll)
;; as shorthand for:
(? {:truncation? false} my-coll)

(? {:bold? true} (+ 1 1))

(? {:format-label-as-code? true}
   (mapv (fn [i] (str "id-" i))
         (range 20)))

  • Add function to set options globally for project, at runtime, with a config! macro.
    #81
    Completed
(fireworks.core/config!
 {:format-label-as-code? true
  :template              [:file-info :form-or-label :result]
  :label-length-limit    100})

  • Properly display contents and badges of native js data structures when they are within a native cljs data structure.
    #46
    Completed
(? [#js {:a 1 :b 2}
    (new js/Set #js["foo" "bar"])
    (into-array [1 2 3])
    (new js/Map #js[#js[3 1] #js[4 2]])
    (new js/Int8Array #js[1 2 3])])



The latest release of Fireworks is v0.16.1, which features the enhancements listed above.


Dragan Djuric: Uncomplicate Clojure ML

Q3 2025 $9K, Report No. 3, Published December 1, 2025

Final progress after the third month:

In the first two months, I had already released more than 10 versions to Clojars as org.uncomplicate/diamond-onnxrt, with progressively better coverage of the onnxruntime C API, established a nicer Clojure API for onnxruntime internals, built higher-level Deep Diamond and Neanderthal model integrations, and made inroads into GPU support with CUDA.

In the third month, I tried several typical open models, and developed and refined the features required by these real-world examples.

I refined the low-level and high-level model support incrementally, to support as much automatic wiring of the models with Clojure as possible, and I am quite pleased with the result. One of the goals of all Uncomplicate libraries is to offer Clojure programmers access to all abstraction levels: from the lowest, fine-grained layer just above the C API, to the highest “magic” that configures itself, and we supported this here to the fullest. Everyone can use the internal core namespace to program every single detail themselves in Clojure - if they wish - while they can also leverage mid-level abstract functions that build standalone models, or let Deep Diamond do everything for them. There’s something even handier: even the most abstract level supports detailed configuration of the models, with sane defaults. This is what I spent most of the third month of the project on: overall usability and polishing!

The final result: we can now load a model, and almost everything related to loading that model and setting it up with Deep Diamond’s Clojure tensors is automatic. If we want to configure most of what can be configured, it’s an options map away, with nice Clojure keywords, and we don’t have to worry about how that configuration produces its results. At the same time, if users have more specific requirements, or they don’t like how I implemented it, or they simply want to learn, they still have a ladder down the rabbit hole: they can use any of the lower layers as they please, as independent nuts and bolts for their own creations.

I released a few more versions to Clojars (the most recent one is 0.20.0).

An example of what we can do now: we can clone a Hugging Face repository of Google’s open model Gemma 3 in ONNX format, load it without much hassle as a DD blueprint, instantiate it as a standalone layer function, get access to the input and output parameters as nice DD Clojure tensors, easily load them with data, and run the model as a straightforward Clojure function evaluation! Now, a casual onlooker might think: “Great. Now I just load Gemma 3, and I have a home-sized mini Gemini!” Well, no. This model accepts tensors full of numbers, and it can infer tensors full of numbers that indicate the next token(s). But an LLM is more than that. It accepts strings (or images, etc.), converts these strings into appropriate numbers (tensors), runs the inference in a loop, bookkeeps various context tensors, makes sense of the output numbers, and so on. That’s a whole new level of application of these moving parts, and it requires another level of applicative automation. Some models do not need that applicative automation, but LLMs are more complex beasts.

So, looking back at what I proposed to implement, I’m quite happy that I achieved everything planned, and even ended up with a simpler API than envisioned at the start.
Now we can take a bite at LLMs in the next step, which is a focused effort of its own.


Jeaye Wilkerson: Jank

Q3 2025 $9K, Report No. 3, Published November 30, 2025

Howdy folks! Thank you so much for the sponsorship this quarter. This month was Conj 2025 month, so leading up to the Conj I was very focused on preparing for my talk and two different Conj working groups. Outside of that, though, I have been tackling overall jank stability. The number one concern there was GC stability, since our use of BDWGC (the Boehm GC) was leading to either leaks or intermittent crashes. In total, this has involved three weeks of spelunking in both the GC code and jank’s code.

The crashes were nearly always caused by the GC prematurely collecting an object because it could no longer find a usage of it. In some cases, this happened because that object was only accessible from the system heap (i.e. normal new or malloc), rather than the GC heap. These cases could be fixed by correct usage of the GC allocator for C++ collections, which we sometimes missed. In another case, jank was using a thread-local collection to store Clojure bindings. However, on Linux, thread-local memory is stored separately from normal memory, so the GC is unable to find references to the objects within that collection. Finally, I have found that there are some aspects of our LLVM IR generation which lead the GC to prematurely collect. I have solved some of these issues, but more investigation and development remains. Curiously, though, I found that using jank’s C++ code generation, rather than LLVM IR, skirts these issues.

To prioritize stability for the upcoming alpha release in December, I decided to focus on C++ generation as the default. This has required some catching up, since we’ve been running both for a while but we’ve been focusing on LLVM IR. Now, at the end of the month, all tests are passing with C++ code generation and there are no known GC issues on macOS and Linux. I’ll continue developing the LLVM IR generation, to resolve these issues, but I’ll take the time needed to do it right. Until then, C++ generation as a default provides many benefits and no behavioral change.

The biggest remaining task for the alpha release is now documentation, which I will be tackling in the next few weeks. Then it’s alpha time!

Permalink

No, Clojure: your REPL is not new – or best

Spend any time around Clojure and a familiar cluster of claims soon appears.

You will be told that:

  • the Read–Eval–Print Loop (“REPL”) is what makes Clojure fundamentally different
  • REPL-driven development supersedes the Edit–Compile–Run model typical of C-family, .NET and Java
  • Clojure enables a uniquely live, exploratory way of building systems
  • other languages “don’t really have a REPL”

These claims sound radical, historically grounded and quietly superior, but they do more rhetorical work than factual work.

This article challenges Clojure folklore in three areas:

  • the REPL long predates Clojure
  • Clojure neither invented nor uniquely exemplifies it
  • several alternative REPL models outperform Clojure’s on important engineering criteria

To be clear, Clojure is a powerful language with dedicated supporters. What follows is not an attack on Clojure, but a correction of its mythology.

The REPL long predates Clojure

Lisp had already established the model by 1958

The Read–Eval–Print Loop originates in Lisp systems developed at MIT in the late 1950s. From the outset, Lisp environments supported:

  • interactive evaluation
  • incremental definition of functions
  • inspection and modification of live runtime state
  • persistent sessions

Clojure deliberately situates itself within this lineage and explicitly inherits from it.

Once that inheritance is acknowledged, the idea that the REPL distinguishes Clojure begins to weaken. Any feature drawn wholesale from a sixty-year-old tradition can hardly serve as a defining innovation.

ML: a typed REPL from the early 1970s

Alongside Lisp, the original ML language – developed in the early 1970s as part of the Edinburgh LCF project – was explicitly designed for interactive use.

ML’s REPL supported incremental definition, immediate evaluation and full static type inference at the prompt. This was not an afterthought. ML was a metalanguage, intended to be explored live while retaining formal guarantees.

This matters because it shows that REPL-driven development under strong static typing is not a modern compromise or a reaction against Lisp. It is a parallel tradition, older than Clojure by decades.

Smalltalk pushed live systems further in the 1970s

Smalltalk systems went beyond REPL interaction and embraced image-based development, where the entire system existed as a continuously mutable artefact. Programs were edited while running; the notion of a clean restart receded into the background.

This approach predates Clojure by decades and represents a more radical commitment to liveness than Clojure’s own model.

However one judges Smalltalk today, its existence alone undermines the idea that live, interactive programming is a modern breakthrough.

Home computers normalised persistent REPLs

The most consequential historical counterexample is neither Lisp nor Smalltalk. Rather, it is home computing.

From the late 1970s through the mid-1980s, the overwhelming majority of home computers booted directly into a persistent BASIC environment. Immediate mode was the primary interface. Variables survived RUN. Programs routinely relied on pre-initialised state in order to function within severe memory constraints.

This behaviour was universal and far from exceptional.

Home-computer BASIC: persistent REPL as the default interface

Machine Year BASIC variant Persistent variables Immediate mode
TRS-80 Model I 1977 Microsoft BASIC Yes Yes
Commodore PET 1977 Microsoft BASIC Yes Yes
Apple II 1977 Applesoft BASIC Yes Yes
Atari 400/800 1979 Atari BASIC Yes Yes
ZX-80 1980 Sinclair BASIC Yes Yes
VIC-20 1981 Commodore BASIC 2.0 Yes Yes
ZX-81 1981 Sinclair BASIC Yes Yes
BBC Model A 1981 BBC BASIC Yes Yes
BBC Model B 1981 BBC BASIC Yes Yes
ZX Spectrum 1982 Sinclair BASIC Yes Yes
Commodore 64 1982 Commodore BASIC 2.0 Yes Yes
Dragon 32/64 1982–83 Microsoft BASIC Yes Yes
Oric-1 1983 Oric Extended BASIC Yes Yes
Amstrad CPC 1984 Locomotive BASIC Yes Yes

Tens of millions of machines shipped with precisely this interaction model.

Rather than being an esoteric Lisp technique rediscovered decades later, the “tight feedback loop” was simply how personal computing worked for an entire generation. Many senior industry figures learned their craft in that world.

Erlang demonstrated live code in production

To argue that Clojure uniquely enables live mutation of running systems means overlooking Erlang, the telecoms language created by Ericsson.

From the late 1980s onward, Erlang supported hot code swapping in safety-critical telephony infrastructure. Far from exploratory hacking, this was production engineering under strict uptime requirements.

Live systems were already operational long before Clojure appeared.

What Clojure actually contributes

Clojure’s achievement lies elsewhere.

It brings together:

  • the Lisp REPL tradition
  • the JVM ecosystem
  • persistent data structures
  • modern concurrency primitives

This synthesis is real and valuable, and the result of thoughtful engineering.

Interactive programming and the REPL itself, however, sit firmly in the inherited category.

The real trade-off: semantic mutability

The distinctive characteristic of Clojure’s REPL is not persistence per se, but the level at which persistence operates.

Early BASIC systems preserved data. Numbers, arrays and flags survived across runs. Control flow remained linear, and program meaning remained fixed.

Clojure extends persistence to meaning. Functions may be redefined live. Dispatch rules can be altered. Existing call sites can acquire new behaviour without any obvious signpost.

This shift increases expressive power, but the engineering bill arrives as reduced reconstructability and predictability.

Clojure: silent semantic drift

;; Session start: define calculate
user=> (defn calculate [x] (* x 2))
#'user/calculate

user=> (calculate 5)
10

;; Later, perhaps in another file or by another developer:
user=> (defn calculate [x] (+ x 100))   ; silently replaces original
#'user/calculate

user=> (calculate 5)
105
;; No warning. No error. Source file unchanged.

For a single developer, this is simply a sharp tool. In a team, it becomes a coordination hazard: runtime truth can drift away from the text everyone believes they are running.

On Rich Hickey’s position

In talks such as Clojure Made Simple and various discussions of REPL-driven development, Rich Hickey has argued that interactive development should be central, while edit–compile–run should be viewed as inefficient.

The intuition behind this argument is understandable. Iteration speed matters.

The framing, however, tends to obscure two facts:

  • interactive development long predates Clojure
  • many systems combine interactivity with compilation discipline

Growing systems live accelerates exploration, but it also erodes the ability to reconstruct behaviour from source alone. That trade-off deserves to be stated explicitly.

Hickey did not claim the REPL was new. The Clojure community, however, often treats it as a kind of revealed truth.

A structural comparison

A clearer perspective emerges from comparing REPL models across history.

The comparison covers six systems (the Lisp REPL of 1958–60, image-based; Dartmouth BASIC, 1964; home-computer BASIC, 1977–85; the ML/OCaml REPL, 1973 onwards; the F# REPL, 2005 onwards; and the Clojure REPL, 2007 onwards) across the following properties:

  • Interactive evaluation
  • Immediate mode
  • Persistent session
  • Variables persist across runs
  • First-class functions
  • Live function redefinition*
  • Behavioural semantics mutable live
  • Static type checking
  • Type system constrains REPL
  • Silent semantic mutation possible
  • Reconstructable from source alone
  • Designed for long-lived systems

  * BBC BASIC excepted; most home BASICs lacked named functions and relied on line-numbered subroutines.

A counterexample: the ML lineage (OCaml and F#)

F# is not an outlier. It belongs to a lineage – ML and OCaml – that has treated interactive, typed REPL-driven development as normal practice for over forty years.

F#’s REPL chooses constraint over maximal dynamism, but the mechanism deserves to be described accurately.

F# allows shadowing with a new type. What it blocks is type-inconsistent usage at the point you attempt to apply the new binding.

// F#: type system catches incoherent usage (FSI)
> let calculate x = x * 2;;
val calculate: x: int -> int

> calculate 5;;
val it: int = 10

> let calculate x = x + "hello";;   // shadows with new signature
val calculate: x: string -> string

> calculate 5;;
  ─────────^
error FS0001: This expression was expected to have type 'string'
             but here has type 'int'

Rather than allowing incoherence to propagate silently, the type system forces the inconsistency to surface at the exact moment of misuse. Exploration remains possible, but semantic drift is limited by static constraints.

For teams that value refactoring, long-term maintenance and source-level truth, this discipline often proves advantageous.

“But Clojure’s REPL is integrated”

A predictable rebuttal points to tooling: CIDER, nREPL and editor integration.

Better tools improve the experience. They do not alter the underlying model, which remains rooted in a design first implemented in the 1950s.

Why the mythology persists

The persistence of the REPL myth is easy to explain:

  • Lisp culture has always emphasised interactivity
  • computing history before the web is poorly remembered
  • expressive power is often confused with novelty
  • difficulty is frequently mistaken for depth

None of this indicates bad faith, but it does reward clearer framing.

Conclusion

The REPL remains a valuable tool. Its lineage stretches back more than six decades. Clojure’s power rests heavily on it, but that power arrives through inheritance rather than invention.

Other languages explore the same design space differently, sometimes with stronger engineering constraints.

It is also worth noting a structural echo. REPL-driven workflows treat programming as a dialogue rather than a batch process, and today’s agent-driven coding tools adopt a similar stance. They extend the conversational model from interaction with a running program to interaction with an entire codebase. It would be a stretch to claim a direct causal link, but the parallel is clear: both approaches reject the assumption that code must be “finished” before it can be tested against reality.

Clojure’s REPL offers power and has made a great deal of noise. It has reinvigorated an old paradigm and delighted its community — but power is not novelty, and enthusiasm should not rewrite history.

Permalink

Announcing Oak 1.0

Today the Gaiwan team is proud to announce the 1.0 release of Oak, an independent, open source, self-hosted Identity Provider. Oak supports OAuth 2.0 and OpenID Connect, making it ideal for providing access to multiple self-hosted or SaaS services with a single identity, a single login. Oak supports 2FA through standard authenticator app codes.

[Image: the Oak login screen]

Oak is "headless" in the sense that it doesn't have a Management UI. Setting up user accounts or creating OAuth Applications is instead done through the command line. This makes Oak eminently scriptable.

Work on Oak started in August 2025, so it's still a fairly young project. Nevertheless, to keep ourselves accountable we set a deadline to have a first public release before Christmas. The past few days were spent on documentation and an installer script, as the final capstone. We really want to make it as easy as possible to get started with Oak. We'd love to hear from you if we succeeded!

Why create a new IAM/IdP product when there are already a number of excellent Open Source options available? We have a few reasons. After working on IAM and OAuth related solutions for a number of different clients, we felt we had built up the necessary expertise, and we wanted to channel that into something we could present to the world. It was also a challenge to ourselves to really put out a Product that was ours. We invest a lot of time, effort, and love into other people's products. We wanted something that was really ours, that we could point at and be proud of.

At this point we mainly hope to differentiate ourselves through ease and simplicity. If you are self-hosting services which support single sign-on through OpenID Connect, then Oak should be the easiest way to provide a single login to your users. We'd love to hear from people if we managed to clear that bar!

We also believe there's room for an independent, relatively low-tech IAM/IdP, that's not controlled by a major enterprise, with European roots, based on high-fidelity implementations of published standards. We are already using Oak ourselves to provide a "Gaiwan identity" that the team uses to log into our Forgejo instance, and plan to use it for many other self-hosted services going forward.

Oak is built in Clojure and runs on the JVM. For most people this doesn't matter, it's just another containerized application like any other, but Clojure teams might want to pay attention. In addition to publishing an OCI-compatible container (for use with Podman or Docker), we publish Oak as a library to Clojars, meaning you can embed it into your application, so you get login pages, password reset emails, 2FA, and more. This use case isn't documented or well developed yet, but it's an interesting secondary purpose for Oak that we're excited about.

In parallel with Oak we're also developing an OAuth testbed, which can validate the spec compliance of any OAuth 2.0/OIDC server, including Oak itself.

We have a lot of ideas for the future of Oak, but much will depend on the feedback and interest we get from this first release, so do reach out and tell us what you think.

You can find Oak on the Gaiwan Forgejo instance. Follow the documentation links from there to learn how to set it up and configure it. Or, if you're feeling adventurous, simply run the installer script and follow the prompts.

bash <(curl -sL https://git.gaiwan.co/gaiwan/Oak/raw/branch/main/install.sh)
[Image: screenshot of the installer]

You can get in touch on the Fediverse or via email. You can also comment on this post on the r/selfhosted subreddit.

Permalink

2025 Highlights

Some notes on the year.

Movies/TV

Lots of TV shows this year. These are some of the ones that stood out.

Great

  • Andor
  • Adolescence
  • The Rehearsal - Season 2
  • The Pitt (probably my favourite of the year)
  • The Chair Company
  • Squid Game Season 3 (might be controversial to have this here, but I enjoyed it)
  • (movie) Jia Zhangke’s “Caught by the Tides”. Deeply moving meditation on time, love, displacement and process.
  • Long Story Short
  • I also went to a screening of Kwaidan (1964) this year and it was incredible.

Honourable Mentions

  • The Eternaut
  • Pachinko - Season 2
  • Severance - Season 2
  • Foundation - Season 3
  • Dept Q.
  • Alice in Borderland - Season 3
  • Slow Horses

Disappointments

  • The Last of Us - Season 2
  • (movie) One Battle After Another - had its good points definitely, but I always have very high expectations for PTA and the last two let me down.
  • Alien: Earth - I did really enjoy this, but a lot of problems with it too (as an ‘Alien’ installment)

Books

Not too much reading this year, but my favourite was definitely “Every Living Thing” (Jason Roberts).

I also enjoyed:

  • Solaris
  • Pachinko
  • Drive Your Plough Over the Bones of the Dead
  • Delta V

Travel

Some for work, some for pleasure:

  • Japan (I visited many places in this wonderful country! Highlights - Kyoto, Naoshima Island)
  • Seattle
  • Baku

Programming

Continuing to learn more about Clojure. I program purely as a hobby.

I participated in the first Scinoj Lite conference, which had some great talks. My project looked at ways of evaluating LLMs (from a very basic, almost ’naive’, perspective).

Write-up of my LLM evaluation project

Played around with the new Clojure ‘flow’ library.

Clojure Flow Blog Post
Clojure Flow project

I started a webscraping project that is trying to map Irish-language content on the .ie domain.

Irish language webscraping project

I also enjoyed this year’s advent of code.

Advent of Code (Clojure)

Permalink

I Completed 45 Lambda Function Exercises (And I'm Still a Beginner!)

My Functional Programming Learning Journey

I recently completed a comprehensive workbook with 45 exercises to learn about lambda functions and functional programming in Python. I want to share my experience with fellow beginners who are on the same learning path.

What I Worked Through

This wasn't just about lambda functions - it was a deep dive into functional programming concepts in pure Python. The workbook covered:

📚 The 7 Main Topics:

  1. Lambda Functions - Anonymous functions with map, filter, and sorted
  2. Closures and Freezing Variables - The tricky late-binding trap and how to fix it
  3. Conditional/Ternary Expressions - Writing concise if-else logic in one line
  4. List Comprehensions - Single, nested, and filtered comprehensions
  5. Higher-Order Functions (HOF) - Functions as first-class citizens
  6. Mixed Advanced Exercises - Combining multiple concepts
  7. Scenario-Based Problems - Real-world application challenges
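As a taste of topic 2, here is a minimal sketch of the late-binding trap and its usual fix. This is my own illustration, not an exercise taken from the workbook:

```python
# Late binding: each lambda closes over the *variable* i, not its value,
# so by the time they run, all of them see the loop's final value.
fns = [lambda: i for i in range(3)]
print([f() for f in fns])  # [2, 2, 2]

# Fix: freeze the current value of i with a default argument.
fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])  # [0, 1, 2]
```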

🎯 Difficulty Progression:

The exercises were organized into difficulty levels:

  • Simple (S) - 16 exercises to build foundation
  • Medium (M) - 15 exercises for intermediate practice
  • Hard (H) - 14 challenging problems
  • Paragraphic (P) - 10 scenario-based real-world applications

Total: 45 exercises with complete solutions and explanations!

New Concepts I Discovered

While working through these exercises, I discovered so much more than I expected! I won't list everything because part of the joy is discovering things yourself through curiosity.

What I will say is this: my curiosity led me to learn far beyond just the exercises. For example, I stumbled upon the walrus operator (:=) which was introduced in Python 3.8. It wasn't essential for the exercises, but because I was curious and kept asking "what else?" and "why does this work?", I ended up exploring it too!
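For the curious, a small sketch of the walrus operator in action (my own example, not one from the workbook):

```python
# The walrus operator := binds a name as part of an expression (Python 3.8+).
data = [1, 4, 9, 16]
if (n := len(data)) > 3:
    print(f"{n} elements")  # 4 elements

# Handy in comprehensions: compute a value once, then test and reuse it.
halved = [h for x in data if (h := x // 2) > 1]
print(halved)  # [2, 4, 8]
```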

If you approach these exercises with curiosity - constantly asking questions, experimenting with the code, and wondering "what if I try this?" - you'll discover so many concepts, operators, and patterns. The more curious you are, the more you'll learn beyond what's explicitly taught.

Each exercise taught me something new, but my curiosity taught me even more!

The Reality Check ⚠️

Here's the honest truth: After completing all 45 exercises, I'm still a beginner.

And you know what? That's completely okay!

Learning programming isn't about racing to become an expert overnight. These exercises gave me some understanding of functional programming and lambda syntax. I got something from working through them, but I'm not claiming it's anything big.

I learned about lambda functions and closures. I got exposure to some Python patterns. But I'm still a beginner learning the basics.

Why These Exercises Helped

What I gained from this workbook depended heavily on my curiosity, interest, and how I approached learning. The more engaged I was, the more I learned.

The workbook helped because:

  1. Progressive difficulty - Started simple and gradually increased complexity
  2. Complete solutions - Every exercise had detailed explanations
  3. Practical examples - Real-world scenarios, not just toy problems
  4. Concept combination - Later exercises mixed multiple concepts
  5. Tricky parts highlighted - Solutions pointed out common pitfalls
  6. Hands-on practice - 45 opportunities to write actual code

But honestly, how much it helped depended entirely on how curious I was and how much I engaged with the material beyond just completing exercises.

For Other Beginners

If you're just starting with Python and want to level up your functional programming skills, I've uploaded the complete workbook I used to this repository.

Repository Link: hejhdiss/lambda-pdf

This repository contains:

  • lambda.pdf - Full workbook with all 45 exercises
  • Complete solutions with detailed explanations
  • Progressive difficulty from Simple to Hard
  • Scenario-based real-world problems
  • Coverage of advanced Python concepts

What to Expect

Structure:

  • 45 total exercises organized by topic
  • 3 difficulty levels (Simple, Medium, Hard) plus scenarios
  • Solutions included with explanations of tricky parts
  • New operators and patterns you probably haven't seen

Time Investment:

Note: This is the estimated time for a beginner starting from scratch, doing the exercises while also learning the underlying concepts.

  • Each simple exercise: 5-15 minutes
  • Medium exercises: 15-30 minutes
  • Hard exercises: 30-60 minutes
  • Scenario problems: 30-90 minutes
  • Total estimated time: 20-30 hours of focused practice

For me personally, I completed it in approximately 6-10 hours total (I'm not great at tracking time accurately, so this is a rough estimate). This wasn't continuous work - I spread it across 3-4 days, working during free time or when I felt motivated and focused, since I have many other commitments.

This total time includes preliminary preparation before starting the exercises. I spent time learning lambda theory using ChatGPT to understand the basics and other necessary concepts. I also learned additional concepts while working through the exercises themselves.

Important: Before jumping into this PDF, I recommend learning some basic lambda theory first. If you only know regular functions (def functions), you'll need to understand lambda syntax and how to write lambda functions, as this workbook assumes that foundational knowledge.
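If you only know def functions, this quick comparison (my own sketch, not from the workbook) shows the foundational idea:

```python
# A lambda is just an expression form of a one-line function.
def double_def(x):
    return x * 2

double_lam = lambda x: x * 2  # equivalent to double_def

print(double_def(5), double_lam(5))  # 10 10

# Lambdas shine as throwaway arguments to higher-order functions.
print(sorted(["bb", "a", "ccc"], key=lambda s: len(s)))  # ['a', 'bb', 'ccc']
```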

Your time will vary based on your pace and curiosity.

What You'll Learn:

  • Learn about lambda functions
  • Learn functional programming patterns
  • Learn list and dictionary comprehensions
  • Learn about closures
  • Learn function composition
  • Discover Python operators and techniques

How much you learn depends on your curiosity, interest, and how you approach the exercises.

My Advice After Completing It

  1. Start with Simple exercises - Don't jump to Hard ones
  2. Try before looking - Attempt each problem before checking solutions
  3. Type the code yourself - Don't just read the solutions
  4. Understand the "why" - Not just the "how"
  5. Experiment freely - Modify examples and see what breaks
  6. Take breaks - Some concepts need time to sink in
  7. Revisit difficult ones - Come back to exercises you struggled with
  8. Stay humble - Remember you're learning, not competing
  9. Keep a notebook - Write down patterns and tricks you discover
  10. Move sequentially - Exercises build on previous concepts

The Learning Curve

Here's how my progress felt:

  • Exercises 1-10 (Simple): "This is making sense!"
  • Exercises 11-20 (Medium starts): "Wait, this is getting complex..."
  • Exercises 21-30 (Hard begins): "I need to slow down and think..."
  • Exercises 31-40 (Mixed/Advanced): "Okay, this is challenging but doable!"
  • Exercises 41-45 (Scenarios): "I can actually solve real problems now!"

The difficulty ramp is real, but it's manageable if you take your time.

The Bottom Line

Completing these 45 exercises doesn't make you an expert. It doesn't even make you intermediate. You'll still be a beginner, and that's the reality of learning programming.

What you gain from these exercises depends heavily on your curiosity, interest, and how you work through them. The more you engage and explore, the more you'll learn.

Conclusion

If you're a beginner looking to understand lambda functions and functional programming in Python, I highly recommend checking out this workbook. Work through the 45 exercises at your own pace. Don't rush. Don't compare yourself to others.

This workbook will give you solid foundations in functional programming. And remember - after completing them, you'll still be a beginner, but you'll get so much from learning and completing these exercises. The knowledge and patterns you gain are valuable and will serve you well in your Python journey.

🤖 About This PDF & Getting Help

This PDF was generated using Gemini AI. If you get doubts or don't understand something while working through the exercises:

  • Ask Gemini or ChatGPT with specific examples from the exercises
  • Be curious! Ask "why does this work?" or "what happens if I change this?"
  • Ask in different ways - if you don't get the answer you need, rephrase your question
  • Request examples - ask for more examples to understand the concept deeper
  • Don't stop at one answer - if something isn't clear, keep asking until it clicks

Important: I used the free versions of these AI services - I don't have paid versions. So you can learn using free resources too!

The more curious you are and the more questions you ask, the deeper your understanding will become. AI assistants like Gemini and ChatGPT are there to help you learn - use them! If you don't get what you want the first time, ask the same thing in different ways until the concept becomes clear.

Keep learning, keep coding, and embrace being a beginner. We all start somewhere, and every expert was once where you are now.

Repository: https://github.com/hejhdiss/lambda-pdf

PDF Link (Google Drive): https://drive.google.com/file/d/1Xo4S7Bk_7anM8jcvy5kEsaArhJ6zRhpg/view?usp=sharing

Content: 45 exercises | 7 topics | 3 difficulty levels | Complete solutions

Topics: Lambda Functions, Closures, Ternary Expressions, Comprehensions, HOF, Mixed Exercises, Scenarios

Status: Beginner-friendly | Pure Python | Detailed explanations included

Happy Learning! 🐍

P.S. - Don't skip the "tricky parts" in the solutions. Those explanations will help reduce confusion!

Note: This article was written with the help of Claude AI.

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.