LISP Prolog and Evolution

I just saw David Nolen give a talk at a LispNYC Meetup called:


LISP is Too Powerful

It was a provocative and humorous talk. David showed off all the powerful features of LISP and argued that the reason LISP is not more widely used is that it is too powerful. Everybody laughed, but it made me think: LISP was decades ahead of other languages, so why did it never become a mainstream language?

David Nolen is a contributor to Clojure and ClojureScript.
He is the creator of core.logic, a port of miniKanren. core.logic is a Prolog-like system for doing logic programming.

When I went to university, my two favorite languages were LISP and Prolog. There was a big debate about whether LISP or Prolog would win dominance. LISP and Prolog were miles ahead of everything else back then. To my surprise, they were both surpassed by imperative and object-oriented languages such as Visual Basic, C, C++ and Java.

What happened? What went wrong for LISP?

Prolog

Prolog is a declarative or logic language created in 1972.

It works a little like SQL: you give it some facts and ask a question, and, without specifying how, Prolog will find the results for you. It can express a lot of things that you cannot express in SQL.

A relational database that can run SQL is a complicated program, but Prolog is very simple and works using two simple principles:

  • Unification
  • Backtracking
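These two principles are exactly what Nolen's core.logic builds on, so they are easy to see from Clojure itself. A minimal sketch (assuming core.logic is on the classpath):

```clojure
(require '[clojure.core.logic :refer [run* membero ==]])

;; Unification: ask for every q that unifies with the vector [1 2 3].
(run* [q] (== q [1 2 3]))
;; => ([1 2 3])

;; Backtracking: membero succeeds once for every way q can be a
;; member of the list, and run* collects all the answers.
(run* [q] (membero q [1 2 3]))
;; => (1 2 3)
```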

The Japanese Fifth Generation Computer Systems project was built on Prolog. That was a big deal, and it scared many people in the West in the 1980s.

LISP

LISP was created by John McCarthy in 1958, only one year after Fortran, the first high-level programming language. It introduced many brilliant ideas:

  • Garbage collection
  • Functional programming
  • Homoiconicity: code is just a form of data
  • REPL
  • Minimal syntax, you program in abstract syntax trees
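Homoiconicity in particular deserves a concrete illustration. In Clojure (any LISP works the same way):

```clojure
;; Homoiconicity: a program is just a data structure.
;; Build the form (+ 1 2) as an ordinary list, then evaluate it.
(def expr (list '+ 1 2))
(eval expr)                   ;; => 3

;; Because code is data, we can transform it like data:
(eval (cons '* (rest expr)))  ;; => 2, the form is now (* 1 2)
```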

It took other languages decades to catch up, partly by borrowing ideas from LISP.

Causes for LISP Losing Ground

I discussed this with friends. Their views varied, but here are some of the explanations that came up:

  • Better marketing budget for other languages
  • Start of the AI winter
  • DARPA stopped funding LISP projects in the 1990s
  • LISP was too big and too complicated and Scheme was too small
  • Too many factions in the LISP world
  • LISP programmers are too elitist
  • LISP on early computers was too slow
  • An evolutionary accident
  • Lowest common denominator wins

LISP vs. Haskell

I felt it was a horrible loss that the great ideas of LISP and Prolog had been abandoned. Recently I realized:

Haskell programs use many of the same functional programming techniques as LISP programs. If you ignore the parentheses, they are similar.

On top of the program, Haskell has a very powerful type system. It is based on unification of types and backtracking, so Haskell's type system is basically Prolog.

You can argue that Haskell is the illegitimate child of LISP and Prolog.

Similarity between Haskell and LISP

Haskell and LISP both have minimal syntax compared to C++, C# and Java.
LISP is more minimal: you work directly in the AST.
In Haskell you write small snippets of simple code that Haskell will combine.

A few Haskell and LISP differences

  • LISP is homoiconic, Haskell is not
  • LISP has a very advanced object system, CLOS
  • Haskell uses monadic computations

Evolution and the Selfish Gene

In the book The Selfish Gene, evolutionary biologist Richard Dawkins argues that genes are much more fundamental than humans. Humans have a short lifespan, while genes live on for tens of thousands of years. Humans are vessels for powerful genes to propagate themselves and combine with other powerful genes.

If you apply his ideas to computer science, languages, like humans, have a relatively short lifespan; ideas, on the other hand, live on and combine freely. LISP introduced more great ideas than any other language.

Open source software has sped up evolution in computer languages. Now languages can inherit from other languages at a much faster rate. A new language comes along and people start porting libraries.

John McCarthy's legacy is not LISP but: Garbage collection, functional programming, homoiconicity, REPL and programming in AST.

The Sudden Rise of Clojure

A few years back I had finally written LISP off as dead. Then out of nowhere Rich Hickey single-handedly wrote Clojure.

Features of Clojure

  • Runs on the JVM
  • Runs on JavaScript engines (via ClojureScript)
  • Used in industry
  • Strong, thriving community
  • Immutable data structures
  • Lock-free concurrency
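The last two features are easy to demonstrate from a REPL. A small sketch:

```clojure
;; Immutable data structures: "updating" returns a new value;
;; the original is untouched and can be shared freely.
(def v [1 2 3])
(def w (conj v 4))
[v w]  ;; => [[1 2 3] [1 2 3 4]]

;; Lock-free concurrency: an atom is updated via compare-and-swap,
;; so 100 concurrent increments need no locks and lose no updates.
(def counter (atom 0))
(run! deref (mapv (fn [_] (future (swap! counter inc))) (range 100)))
@counter  ;; => 100
```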

Clojure proves that it does not take a Google, Microsoft or Oracle to create a language. It just takes a good programmer with a good idea.

Typed LISP

I have done a lot of work in both strongly typed and dynamic languages.

Dynamic languages give you speed of development and are better suited for loosely structured data.
After working with Scala and Haskell, I realized that you can have a less obtrusive type system, which gives stability for large applications.

There is no reason why you cannot combine strong or optional types with LISP; in fact, there are already LISP dialects out there that do this. Let me briefly mention a few typed LISPs that I find interesting:

Typed Racket and Typed Clojure do not have type systems as powerful as Haskell's. Neither language has the momentum of Haskell, but Clojure showed us how fast a language can grow.

LISP can learn a lesson from all the languages that borrowed ideas from LISP.
It is nature's way.


Using Generative AI tooling with Clojure

Clojure is easy to read for humans and AIs

Code written in Clojure is expressive and concise, but still easy to reason about. In his “History of Clojure” paper, Rich Hickey, the original author of Clojure, states his motivations for building a new programming language:

“Most developers are primarily engaged in making systems that acquire, extract, transform, maintain, analyze, transmit and render information—facts about the world … As programs grew large, they required increasingly Herculean efforts to change while maintaining all of the presumptions around state and relationships, never mind dealing with race conditions as concurrency was increasingly in play. And we faced encroaching, and eventually crippling, coupling, and huge codebases, due directly to specificity (best-practice encapsulation, abstraction, and parameterization notwithstanding). C++ builds of over an hour were common.”

As a result, Clojure programs are very much focused on dealing with data, and they do so safely in concurrent programs. We get immutable data structures by default, easy-to-use literal representations of the most common collection types (lists, vectors, sets and maps) and a very regular syntax. A typical Clojure program has far less ceremony and boilerplate, not to mention fewer weird quirks to deal with, than many popular programming languages such as Java, C#, TypeScript or Python.

This means that large language models have less to deal with when reading or writing Clojure code. Martin Alderson’s article offers some evidence that Clojure is token-efficient compared to most other popular programming languages.

When we author code with generative AI tools, the developer reviewing it has less code to read, in a format that is easy to reason about.

Clojure MCP boosts Agentic development workflows

The REPL-driven workflow speeds up the feedback cycle in normal development. The Read-Eval-Print Loop is a concept found in many programming languages in the LISP family, such as Common Lisp, Scheme and Clojure. It lets the developer tap into and evaluate code in a running instance of the application they are developing. With good editor integration, this allows smooth and frictionless testing of the code under development in an interactive workflow.

With the addition of MCP (Model Context Protocol), agents have gained access to a lot of tooling. In May 2025 Bruce Hauman announced his Clojure MCP, which provides agents access to the REPL. Now AI agents such as Claude Code, Copilot CLI and others can reach inside the application as it is being developed, try code changes live, look at the internal state of the application and benefit from all of the interactivity that human developers have when working with the REPL.

It also provides efficient structural editing capabilities to the agents, making them less error-prone when editing Clojure source code. Because Clojure code is written as Clojure data structures, programmatic edits to the source code are a breeze.

We can even hot load dependencies to a running application without losing the application state! This is a SKILL.md file I have added to my project to guide agents in harnessing this power:

---
name: adding-clojure-dependencies
description: Adds clojure dependencies to the project. Use this when asked to add a dependency to the project
---

To add dependencies to deps.edn do the following:

1. Find the dependency in maven central repo or clojars
2. Identify the latest release version (no RCs unless specified)
3. Add dependency to either main list (for clojure dependencies),
   :test alias (for test dependencies) or :dev alias (for development dependencies). Use the REPL and
   `rewrite-edn` to edit the file
4. Reload dependencies in the REPL using `(clojure.repl.deps/sync-deps)`
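Step 3 leans on rewrite-edn for comment- and whitespace-preserving edits. Here is a sketch of what that can look like from the REPL (my illustration, not part of the skill file; it assumes the borkdude/rewrite-edn library, and hiccup/hiccup is just an example coordinate):

```clojure
(require '[borkdude.rewrite-edn :as r])

;; Add a dependency to deps.edn text while preserving the existing
;; formatting and comments, returning the updated text.
(defn add-dep [deps-edn-str lib version]
  (str (r/assoc-in (r/parse-string deps-edn-str)
                   [:deps lib]
                   {:mvn/version version})))

;; From the REPL one would then do something like:
;; (spit "deps.edn" (add-dep (slurp "deps.edn") 'hiccup/hiccup "2.0.0"))
```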

In my experience, with the Clojure MCP, coding agents have a far easier time troubleshooting and debugging than when they can only analyze source code, logs and stacktraces.

As developers, we can also connect to the same REPL as the coding agent, making it easy to step in and aid the agent when it gets stuck. In my workflows, I might look at the code the agent produced, test it in the REPL as well, make changes as required and instruct the agent to read what I did. This adds another collaborative dimension to the standard prompting techniques normally associated with generative AI development.

Getting AI to speak Clojure

Generating and analyzing code with AI tooling is just one way to apply AI in software development. As developers, we should understand the potential for embedding AI functionality at the application level too. LLMs seem to be good at understanding my intentions, even if they don’t necessarily produce the right results. One possibility is to take input provided by a human and enrich it with data so that further processing becomes easier. For the sake of experiment, let’s look at a traditional flow of making a support request.

The user goes to the portal, and their first task is to identify the correct topic under which the support request belongs. They usually have to classify the severity of the issue as well. Then they describe the problem they have and add their contact information. With this flow, there is a large chance that the user misclassifies their support request, causing delays in getting the work in front of the right person, making for a poor user experience and causing frustration for the people handling the support requests. From the user’s point of view, they do not care which department picks up the request, so this system pushes the support organization’s concerns onto the end user.

What if we could avoid all that and have the request routed automatically to the right backlog? Enter OpenAI’s Responses API, which can handle text, image and various file inputs to generate text or JSON outputs. The json_schema response format is particularly interesting because it lets us express the desired result format in a way that we can then use to process the response programmatically down the line.

In the Clojure world, we often use Malli to define our data models. We can use malli.json-schema to transform our malli schemas into JSON schema that the endpoint understands, and then use malli.transform to translate the response from JSON back to Clojure data.

A similar idea has been shown previously by Tommi Reiman, author of Malli.

Note that the choice of model can have a big effect on your output!

(require '[malli.json-schema :as mjs])
(require '[malli.core :as m])
(require '[malli.transform :as mt])
(require '[cheshire.core :as json])
(require '[org.httpkit.client :as http])

(defn structured-output
  [api-endpoint-url api-key malli-schema input]
  (let [;; Convert Malli schema to JSON Schema
        json-schema (mjs/transform malli-schema)

        ;; Build request body
        body {:model "gpt-4o" ;; consult your service provider for available models
              :input input
              :text {:format {:type "json_schema"
                              :name "response"
                              :strict true
                              :schema json-schema}}}

        ;; Make HTTP request
        response @(http/post
                   (str api-endpoint-url "/v1/responses")
                   {:headers {"Authorization" (str "Bearer " api-key)
                              "Content-Type" "application/json"}
                    :body (json/generate-string body)})

        ;; Parse response
        parsed-response (json/parse-string (:body response) true)

        ;; Extract structured data from response
        content (-> parsed-response :output first :content first :text)
        parsed-data (json/parse-string content true)

        ;; Decode using Malli transformer
        result (m/decode malli-schema parsed-data (mt/json-transformer))]

    ;; Validate and return
    (when-not (m/validate malli-schema result)
      (throw (ex-info "Response does not match schema"
                      {:schema malli-schema
                       :result result})))
    result))

(structured-output
   (get-base-url)
   (get-api-key)
   [:map
    [:department [:enum :licences :hardware :accounts :maintenance :other]]
    [:severity [:enum :low :medium :critical]]
    [:request :string]
    [:contact-information [:map
                           [:email {:optional true} :string]
                           [:phone {:optional true} :string]
                           [:raw :string]]]]
   "Hi, my laptop diesn't let me login anymore. Can't work. What do? t. Hank PS call me 555-1343 because I can't access email")
; => {:department :hardware,
;     :severity :critical,
;     :request "Laptop doesn't let me login anymore. Can't work.",
;     :contact-information
;     {:raw "Hank, phone: 555-1343, cannot access email",
;      :phone "555-1343"}}

Notice how the model was able to process human input that contained typos and informal, broken language. One usually does not have to use any of the more powerful and expensive models for this level of work. The older json_object response format offers some of the same capabilities and supports even lighter models. See OpenAI’s documentation for reference.

I think methods like this make it easy to embed LLM-enabled functionality in Clojure applications, giving them capabilities that are normally very hard to implement using traditional methods.

DISCLAIMER

If you were to implement the above example in your application as-is, you would be sending personal data to OpenAI. As developers, we should always consider the larger implications. You can deploy models locally in your region and keep data residency and processing within the EU, for example by using Solita’s FunctionAI on UpCloud or other service providers.

Further reading

If you’re new to agentic development, Prompt engineering 101 is a great starter for how to get past the first hurdles.


Verification of Algorithms with Z3 – Part 1

An algorithm is supposed to compute solutions to given problems efficiently. In practice, the argument that the algorithm also computes the correct solutions usually consists of a set of test cases, a short justification in natural language, or a quick prayer. But even a large set of test cases (even one generated with the help of QuickCheck, say) cannot establish the general correctness of a piece of software. To be really sure, we have to look at the structure of the code, make statements about it, and argue formally that these statements hold in general. In this three-part series of articles we get to know one tool for doing exactly that: we use the SMT solver Z3 as an automated theorem prover. These articles are meant to give a first glimpse into the contents of our new iSAQB Advanced Level training on formal methods. Besides Z3, the training also covers other verification tools such as VeriFast, Forge and Liquid Haskell.


Since we want to make formal statements about code, we might be tempted to use propositional logic. In propositional logic you can build logical expressions out of variables, “true” and “false”, and a few connectives (x or (not y and z)). That is a nice vehicle of theoretical computer science, but unfortunately, for our purposes, you can express very little of interest with it. The next level of such formal systems is predicate logic, which adds predicates and quantifiers to the tools of propositional logic. The expressive power of predicate logic is usually sufficient to describe correctness conditions of software.

The problem of checking formulas of propositional logic for satisfiability is called “SAT” (for satisfiability); it is very computationally expensive in the worst case, but at least it is doable. For predicate logic, in contrast, there is no general algorithm that can reliably check formulas for satisfiability. For every conceivable checking algorithm there will always be formulas on which it gets stuck in an endless loop.

So propositional logic is too weak in expressive power, but can be decided automatically. Predicate logic is expressive enough, but can no longer be decided automatically. Is there perhaps something in between? Yes, there is, and it is called SAT Modulo Theories (SMT). Very loosely: SAT, but with a few more tools. With SMT-LIB-2 there is a standard for the language of SMT. Z3 is a so-called SMT solver, i.e. a checking algorithm for SMT formulas.

SMT-LIB-2 is actually intended as a uniform exchange format between SMT libraries on one side and SMT solvers on the other. For this purpose, the syntax of SMT-LIB-2 is based on S-expressions. The standard even states quite explicitly:

The choice of the S-expression syntax and the design of the concrete syntax was mostly driven by the goal of simplifying parsing, as opposed to facilitating human readability.

This reasoning is part of that force which always wants simple parsers and thereby creates easy readability: as Scheme, Racket or Clojure programmers, this syntax suits us just fine. You can actually program rather well directly in SMT-LIB-2, and that is what we will demonstrate in this blog article. As a running example, we want to rasterize straight lines in order to draw them on the screen.

Drawing a Line

A line has a start point and an end point and behaves very straight in between. In the abstract world of mathematics there is not much more to know about it, but if we want to draw such lines on a computer screen, we have the problem that the screen usually consists of a grid of pixels; somehow we have to squeeze the points of the ideal line into this rigid pixel corset. To make this rasterization problem a bit simpler in what follows, we stipulate that the start point always lies at coordinate (0, 0) and that the end point (dx, dy) lies up and to the right of it, i.e. that dx > 0 and dy >= 0. Such a line largely corresponds to a linear function f with f(x) = dy/dx * x. We can easily discretize such functions f in the x direction (the first step towards rasterization) by calling them only for integer x. In general, however, the results (the y values) will then be rational numbers rather than integers, because of the fraction dy/dx. So to discretize in the y direction as well, we have to round appropriately.
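Before this recipe (discretize in x, multiply by the slope, round to the nearest integer) is written down in SMT-LIB-2, it can be sketched in a few lines of Clojure for reference (my addition, not part of the article's SMT-LIB development):

```clojure
;; Rasterize the line from (0,0) towards (dx,dy): for each integer x,
;; take the exact y = dy/dx * x and round it to the nearest integer
;; by adding 0.5 and rounding down.
(defn rasterize [dx dy steps]
  (map (fn [x]
         (long (Math/floor (+ (* (/ dy dx) x) 0.5))))
       (range steps)))

(rasterize 7 5 4)  ;; => (0 1 1 2)
```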

The Simple Algorithm as Specification

We can model such lines directly in SMT-LIB-2 by introducing a new so-called “sort” (as programmers we would say: a type):

(declare-datatypes ()
                   ((Line (mk-line (line-dx Int)
                                   (line-dy Int)))))

From now on, Line is an identifier for the new sort, mk-line is the constructor with two arguments, and line-dx and line-dy are the two getters.

Not every integer value for dx is admissible. The other point (besides (0, 0)) has to actually be a different point. We describe this restriction in a function that checks that dx is non-zero. Beyond that, we want to restrict ourselves to “flat” lines (dx >= dy) that go up and to the right (dx >= 0 and dy >= 0); why we restrict ourselves, and why this is not a real restriction, will become clear later. Altogether, the validation function looks like this:

(define-fun line-valid ((l Line))
  Bool
  (and
   (> (line-dx l) 0)
   (>= (line-dx l)
       (line-dy l))
   (>= (line-dy l)
       0)))

For such a valid line we can compute the (rational) slope with a division. This division is admissible because dx is strictly greater than zero.

(define-fun line-slope ((l Line))
  Real

  (/ (line-dy l)
     (line-dx l)))

With the help of line-slope, a simple multiplication gives us the exact (rational) y value for a given x value. Rasterization in the x direction is automatically ensured, since x can only take integer values:

(define-fun line-exact-y ((l Line) (x Int))
  Real
  (* (line-slope l) x))

If we now want to know which pixel corresponds to this y value, all that is left to do is round. The built-in function to_int turns a Real into an Int, but it always rounds down. To round, say, 0.6 up to 1, we simply add 0.5 before the to_int call.

(define-fun round-to-nearest ((x Real))
  Int

  (to_int (+ x 0.5)))

(define-fun line-rounded-y ((l Line) (x Int))
  Int
  (round-to-nearest (line-exact-y l x)))

To stay in the world of pure functions, we do not draw onto the screen directly via side effects; instead we first compute a list of all y values from left to right. After all, this article is about algorithms, not about effects. Computing such a list is the job of the usual recursive algorithm, which can also be written down in SMT-LIB-2:

(define-fun-rec draw-simple-acc ((l Line) (x Int) (todo Int))
  (List Int)
  (ite (<= todo 0)
       ;; done
       nil
       ;; recurse
       (insert (line-rounded-y l x)
               (draw-simple-acc l (+ x 1) (- todo 1)))))

(define-fun draw-simple ((l Line) (todo Int))
  (List Int)
  (draw-simple-acc l 0 todo))

draw-simple takes as parameters a line (essentially a slope) and a target on the x axis (todo). We then call the function draw-simple-acc with an additional initial accumulator of 0. This accumulator holds the x coordinate currently under consideration, for which we want to compute the matching y coordinate along the line. nil is the empty list, and insert builds a new list out of an old list and a list element (this thing is often called cons, or conj in Clojure).

We can try out the algorithm by putting a (check-sat) call at the end of the file. After that, we can print results with the help of (simplify ...). This code:

(check-sat)
(simplify (draw-simple (mk-line 7 5) 4))

yields this result:

sat
(insert 0 (insert 1 (insert 1 (insert 2 nil))))

We can see that with these “flat” lines, the rasterized line sometimes takes a step upward and sometimes stays at the same height for a few steps.


The function draw-simple-acc does a lot of things at once: check the termination condition, call the business logic, put one part of the business data into the list, and pass the other part of the business data on to the next recursive call. In the following, we will optimize the business logic of this algorithm step by step, while the bookkeeping part of the algorithm stays the same. So we factor draw-simple-acc into two parts right away. One part is already finished; we call this part the frame algorithm. The other part changes with every optimization; we call this part the business logic. The business logic always consists of four elements:

  1. There is a type (sort) for the state, which acts as the accumulator in the frame algorithm.
  2. There is a step function that transforms an old accumulator into the next accumulator.
  3. There is an extractor function that pulls the y value for the current iteration out of the state object.
  4. To get the computation started, we need a function that builds an initial state object.

In code it looks like this: the state contains a line and the x that was previously the accumulator.

(declare-datatypes ()
                   ((State-1 (mk-state-1 (state-1-line Line)
                                         (state-1-x Int)))))

As with the line object, only some state objects are valid. Accordingly, we again define a validation function:

(define-fun state-1-valid ((st State-1))
  Bool
  (and
   (line-valid (state-1-line st))
   (>= (state-1-x st) 0)))

We also define the extractor function:

(define-fun state-1-exact-y ((st State-1))
  Real

  (line-exact-y (state-1-line st)
                (state-1-x st)))

(define-fun state-1-y ((st State-1))
  Int
  (round-to-nearest
   (state-1-exact-y st)))

The step function simply increments x. The line object is not changed:

(define-fun step-1 ((st State-1))
  State-1
  (mk-state-1
   (state-1-line st)
   (+ 1 (state-1-x st))))

The function that builds the initial state object is already given by the constructor mk-state-1. We can now plug these parts into the frame algorithm. That looks like this:

(define-fun-rec draw-1-acc ((st State-1) (todo Int))
  (List Int)
  (ite (<= todo 0)
       ;; done
       nil
       ;; recurse
       (insert (state-1-y st)
               (draw-1-acc (step-1 st) (- todo 1)))))

(define-fun draw-1 ((l Line) (todo Int))
  (List Int)
  (draw-1-acc (mk-state-1 l 0) todo))

As functional programmers, we would of course love to write the frame algorithm as a higher-order function. Unfortunately, that is not possible in SMT-LIB-2.

Addition Instead of Multiplication

This algorithm is easy to understand, but it is very wasteful. With the function state-1-y it performs a multiplication in every step. That is expensive, or at least more expensive than it needs to be. We know, after all, that every iteration moves just one x step to the right. To optimize this algorithm, we can save most of the work of this repeated multiplication by simply adding the slope to the previously computed y value in each step. Of course this requires that we remembered the y value, so we put it into a new state object, State-2. We define the extractor along with it:

(declare-datatypes ()
                   ((State-2 (mk-state-2 (state-2-line Line)
                                         (state-2-x Int)
                                         (state-2-exact-y Real)))))

(define-fun state-2-y ((st State-2))
  Int
  (round-to-nearest
   (state-2-exact-y st)))

The corresponding validation function now also has to check whether the y value in the state actually matches the given line and x value.

(define-fun state-2-valid ((st State-2))
  Bool
  (and
   (line-valid (state-2-line st))
   (>= (state-2-x st) 0)
   (= (state-2-exact-y st)
      (line-exact-y (state-2-line st)
                    (state-2-x st)))))

The new step function now performs just one addition and no full multiplication anymore:

(define-fun step-2 ((st State-2))
  State-2

  (mk-state-2
   (state-2-line st)
   (+ 1 (state-2-x st))
   (+ (state-2-exact-y st)
      (line-slope
       (state-2-line st)))))

Now all that is missing is the function that computes the initial state for a given line:

(define-fun init-state-2 ((l Line))
  State-2
  (mk-state-2 l 0 0.0))

We can again plug these functions into the frame algorithm. We call the result draw-2 (and do not spell it out again here).

We have now programmed all of this up, but is it correct? To answer that question, we first have to get clear on what “correct” even means here. I would suggest that the algorithm works correctly if, technical details aside, it produces the same results as the simpler algorithm above. Formalized, we would like to check a statement such as this:

(assert
 (forall ((l Line)
          (todo Int))
         (= (draw-1 l todo)
            (draw-2 l todo))))

You can write this down in SMT-LIB-2, and Z3 will even set off running, but it never finishes. With a program like this we are in a logic fragment that is no longer decidable for Z3. However, we can write down a specification that is functionally equivalent and still decidable. The first part of it is simple: we want to say that the initial state that falls out of init-state-2 is a valid state and, moreover, that it yields the expected y when we apply the extractor to it. Naturally, we again want to check this statement for all lines. Still, we do not need to use forall. With forall we are asking for a guarantee that our formula is universally valid. Instead of checking universal validity, we can also assert the opposite of our formula and then ask whether this negated formula is satisfiable:

(declare-const l Line)
(assert (line-valid l))

(assert
 (not (state-2-valid (init-state-2 l))))

(check-sat)

We can have Z3 check this SMT-LIB-2 program. The result: unsat. That sounds unsatisfying, but it is exactly the result that tells us our verification succeeded. The statement “our formula does not hold” is not satisfiable for a single line object l; in other words, our formula holds for all l.

The more important correctness criterion concerns step-2. We want to say that step-2 behaves like step-1, apart from some technical details. Formalized:

(declare-const st State-1)
(assert (state-1-valid st))

(assert
 (not
  (= (step-2
      (into-state-2 st))
     (into-state-2
      (step-1 st)))))

(check-sat)

This says that it does not matter whether, for a given State-1, we first go over into the State-2 world with into-state-2 and then call step-2, or whether we first call step-1 and then step into the State-2 world. This correctness criterion is a typical “commuting diagram”: whichever path you follow, you always end up at the same result. Z3 confirms that this holds with another unsat.

The correspondence between the worlds of State-1 and State-2, which we used as our correctness criterion, is still fairly obvious. For this alone we might not have needed Z3 at all. In the next article, however, we will improve our algorithm further. The kind of correctness always stays the same: we say that our new algorithms should behave like the obviously correct first algorithm.


Clojure Deref (Feb 10, 2026)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

Upcoming Events

Libraries and Tools

Debut release

  • scriptum - Lucene with git-like semantics and Clojure integration.

  • dataset-io - Enable tech.ml.dataset + tablecloth reading of Arrow, Parquet and Excel files with a single dependency

  • libpython-clj-uv - Deep integration of libpython-clj and the uv python venv manager

  • pocket - filesystem-based caching of expensive computations

  • progressive - A simple, local first, workout tracker.

  • hulunote - An open-source outliner note-taking application with bidirectional linking.

  • limabean - A new implementation of Beancount using Rust and Clojure and the Lima parser

  • mcp2000xl - A clojure/ring adapter for the official modelcontextprotocol Java SDK

  • cljd-video-player - A reusable ClojureDart video player package with optional background audio service

  • startribes - Star Tribes is a space combat game, written in Clojure

Updates

  • transit-java 1.1.389 - transit-format implementation for Java

  • transit-clj 1.1.347 - transit-format implementation for Clojure

  • sci 0.11.51 - Configurable Clojure/Script interpreter suitable for scripting and Clojure DSLs

  • clj-simple-stats 1.2.0 - Simple statistics for Clojure/Ring webapps

  • clojurecuda 0.26.0 - Clojure library for CUDA development

  • ring-data-json 0.5.3 - Ring middleware for handling JSON, using clojure.data.json

  • clj-uuid 0.2.5 - RFC9562 Unique Identifiers (v1,v3,v4,v5,v6,v7,v8,squuid) for Clojure

  • clojure-cli-config 2026-02-05 - User aliases and Clojure CLI configuration for deps.edn based projects

  • project-templates 2026-02-05 - Clojure CLI Production level templates for seancorfield/deps-new

  • fs 0.5.31 - File system utility library for Clojure

  • Selmer 1.13.0 - A fast, Django inspired template system in Clojure.

  • calva 2.0.549 - Clojure & ClojureScript Interactive Programming for VS Code

  • conjtest 0.3.1 - Run tests against common configuration file formats using Clojure!

  • nbb 1.4.206 - Scripting in Clojure on Node.js using SCI

  • cider 1.21.0 - The Clojure Interactive Development Environment that Rocks for Emacs

  • bankster 2.2.4 - Money as data, done right.

  • stripe-clojure 2.2.0 - Clojure SDK for the Stripe API.

  • clj-artnet 0.2.0 - A fully spec-compliant, idiomatic Clojure implementation of Art-Net 4, providing correct DMX512 over IP with deterministic behavior, predictable timing, and clean, composable APIs for real-time lighting control.

  • ripley 2026-02-07 - Server rendered UIs over WebSockets

  • Ship Clojure 1.0.0 - A production-ready Clojure web stack built on pure functions and data-oriented programming

  • datalevin 0.10.5 - A simple, fast and versatile Datalog database

  • hasch 0.4.98 - Cross-platform (JVM and JS atm.) edn data structure hashing for Clojure.

  • hive-mcp 0.12.4 - MCP server for hive-framework development. A memory and agentic coordination solution.

  • hirundo 1.0.0-alpha199 - Helidon 4.x - RING clojure adapter

  • ok-http 1.0.0-alpha20 - OkHttp clojure wrapper

  • scittle 0.8.31 - Execute Clojure(Script) directly from browser script tags via SCI

  • quiescent 0.2.4 - A Clojure library for composable async tasks with automatic parallelization, structured concurrency, and parent-child and chain cancellation

  • tableplot 1-beta15 - Easy layered graphics with Hanami & Tablecloth

  • clay 2.0.10 - A REPL-friendly Clojure tool for notebooks and datavis

Permalink

On Dyslexia, Programming and Lisp.

Dyslexia is a difference in the way the brain processes language. It is often defined as the gap between intelligence (vaguely defined via the fake IQ test) and a person's ability to read. That is, a dyslexic with normal intelligence will underperform compared to their peers in learning how to read. There are many different theories on what the cause is, but most people believe that there are fundamental differences between the brains of people who have dyslexia and those who do not. In Maryanne Wolf's book, Dyslexia, Fluency, and the Brain, several essays point to a physical difference in the density of specialized brain cells that the brain uses to process letter shapes. It has become clear recently from fMRIs that dyslexics actually develop unique neural pathways for reading that are located in the right side of the brain, compared to the normal left-side-dominated reading pathways. However, a root cause has not been identified, and the jury is still out. Dyslexia varies considerably, with up to 15% of the population experiencing some form of it.

The experience of dyslexia is very interesting. I am dyslexic, and I was diagnosed in the 3rd grade. My dyslexia was more "intense" than some, having both verbal and visual effects. I remember trying to learn to read and struggling with all the letter rotation on the page. P-Q, D-B, I-L: those letters all looked the same to me.

The best way to describe it is that my brain was attempting to process these letters for their meaning and would automatically rotate the letter. When you think about it, it actually makes sense. When you pick up an object with your hand that you don't understand, what do most people do? They turn it to look at all sides of the object. That's what my brain does with letters.

Because English letters are not "fixed" (there is no inherent difference between the bottom and top of a letter), my brain could not fix the position of the letter on the page. The rotation was intense for me, as I was experiencing rotation in "3D." For me, this made the letter move more on the page and also made the letters even more confusing.

It made reading very difficult until my brain was able to fix their position. I did this by adopting what I called the "Sky line" approach. Each word makes a unique shape, like the skyline of buildings in a city. By memorizing the shape of each word in the English language, I could fix the position of the word on the page and overcome this rotation problem. Interestingly enough, once I memorized the shape of most of the words in English, I found this method to be very fast. It may be because I can see the shape of the word quickly, not relying on letter shape.

Frankly, rotation was a nightmare for me, literally. As a child, I remember having dreams/nightmares where I was falling and spinning. I would fall forever, spinning and spinning. I could not orient myself. Once, in this nightmare, I realized my solution. There was no fixed position. My desire to fix the position of the letter, or myself, was a fool's errand. Rotation itself was the only constant; the rotation, in a way, was a fixed position. It's a little difficult to describe, and this "realization" did come to me as a dream, so take it for what you will. I guess I am still spinning.

Anyway, moving on from that, in college I developed an interest in software engineering. I took a class in C and then quickly learned Python. C and its family of languages are interesting. They are frankly a nightmare of obscure scribbles that had little meaning to me. I think a classic example of this is the ternary conditional operator. This is an operator that I still struggle with. I rapidly found that Python's whitespace-based syntax was much easier on my spinning brain. Most of the obscure squiggles are removed, and I was able to focus on the semantic meaning of the program. I found that my Sky Line approach for visual memorization works very well for programming languages. Each function or class defines its own shape. Remembering each shape in a large codebase is easier than remembering how to spell my own name.

After college I learned about Lisp, specifically the Clojure programming language. After learning about Clojure, I dove headlong into Scheme and its family of S-expression languages. At last, I had found a language that was seemingly designed for my spinning brain.

Lisp-based languages use S-expressions. These S-expressions form a constant and regular pattern. All expressions are enclosed in parentheses (). The first symbol is always the function or the macro. The rest of the expressions are arguments to that function or macro. Lisp programming is layers and layers of S-expressions, each with their own shape, making them easy to memorize and locate. I no longer have to worry about the rotation of the ternary conditional operator.
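
As a small illustration of that regularity (my sketch, not the author's), compare C's ternary operator with the same decision written as an S-expression, where the operator always comes first:

```clojure
;; C's ternary:  max = (a > b) ? a : b;
;; The same decision as an S-expression: one regular shape,
;; operator first, arguments after.
(def a 3)
(def b 7)

(def max-val (if (> a b) a b))

(println max-val) ; prints 7
```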

Scheme fascinates me even further. Many of the coding conventions in Scheme encourage people to use full and descriptive function names. I believe this gave my mind a deeper ability to organize the skyline shapes. Maybe you can think of this like the difference between 32-bit and 64-bit pointers in garbage collectors. The longer the function name, the more functions I can store in my mind's working memory.

These experiences leave me wondering what other characteristics we could add to our programming languages that would help people like myself. In general, people with "normal," left-dominated reading pathways have constructed our languages. In doing so, they have built languages that work effectively for them. These languages are tangles of obscure squiggles, a spinning brain nightmare. This tangle allows for denser packing of meaning into a "smaller" space, easy for normies, or so they think. But I can't help but notice the popularity of languages like Ruby and Python. Both of these languages take approaches that reduce visual complexity.

I wonder if by reducing the visual complexity, we can instead increase the amount of semantic complexity we can hold in our minds. Dyslexia is common, and many people experience some form of the issues I describe above. Perhaps, we are all suffering from the tyranny of obscure squiggles.

Permalink

future - Clojure

Code

;; future.clj

(future
  (println "This runs in background")
  (Thread/sleep 1000)
  (println "Done after 1 second"))

(let [f (future
          (Thread/sleep 2000)
          "Result")]
  (println "Immediate return")  ; Prints immediately
  (println @f)) ; Blocks here for 2 seconds if needed

;;;;;;;;;;;;;

(def my-future (future (+ 1 2 3)))

;; Block until result is ready
(println @my-future)  ; Prints: 6

;; Or with a timeout: returns :timeout (the supplied default) if not ready within 1000ms
(deref my-future 1000 :timeout)

;;;;;;;;;;;;;

(do
  (def my-future-3 (future (Thread/sleep 2000) (+ 1 2 3)))

  ;; Times out: returns :timeout, since the 2000ms sleep exceeds the 100ms limit
  (deref my-future-3 100 :timeout))

;;;;;;;;;;;;;

(do
  (def my-future-4 (future (Thread/sleep 20) (+ 1 2 3)))

  ;; Completes in time: returns 6, since the 20ms sleep is within the 100ms limit
  (deref my-future-4 100 :timeout))

Permalink

Limit concurrent HTTP connections to avoid crippling overload

Even the fastest web servers become bottlenecks when handling CPU-intensive API work. Slow response times can cripple your service, especially when paired with an API gateway that times out after 29 seconds.

I solved this using a simple middleware that eliminated 504 - Gateway Timeout responses and significantly reduced unnecessary load on my service API.

I assumed that if a single request takes 5 seconds on average, at least five requests could complete before hitting Amazon API Gateway’s 29-second timeout:

Visualization of how I assumed concurrent HTTP connections would put load on the CPU.

In practice, the behavior was completely different (though in retrospect, it makes perfect sense). CPU resources are divided equally among all concurrent requests, causing all responses to slow down proportionally:

Visualization of how the concurrent HTTP connections actually put load on the CPU.

I found myself in a situation where even a mediocre load would make the system incapable of responding within the time limit. The API gateway would return 504 - Gateway Timeout on behalf of my service, while my unaware service would occupy CPU resources for responses that would never be used for anything, slowing everything even further.

A sure way to contribute to climate change and get a high cloud bill, while delivering zero value.

Oh wait…
the HTTP response code 504 - Gateway Timeout indicates a temporary problem, so a caller is very likely to retry the request. Now multiply your already high cloud bill by the retry count.

In other words: A disaster. ☠️

An entirely different architecture, maybe involving a queue or some async response mechanism, would probably have been a better solution. But sometimes, we need to work with what we’ve got.

Since my CPU load was fairly consistent across requests, I could predict how many concurrent connections could complete within the timeout limit.

With the following middleware, I limit concurrent active connections to ensure high CPU utilization while still responding within the timeout:

(defn wrap-limit-concurrent-connections
  "Middleware that limits the number of concurrent connections to `max-connections`,
   via the atom `current-connections-atom`.
   This means that the middleware can be applied in several different places
   while still sharing an atom if necessary."
  [handler current-connections-atom max-connections]
  (fn [request]
    (let [connection-no (swap! current-connections-atom inc)]
      (try
        (if (>= max-connections connection-no)
          (handler request)
          {:status 503 :body "Service Unavailable"})
        (finally
          (swap! current-connections-atom dec))))))

The middleware implementation is very naive and assumes that the service only exposes work with a similar load profile, so that the same middleware (and coordination atom) can be reused across the service.
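
As a sketch of how it might be wired up (the handler below is a hypothetical stand-in; only wrap-limit-concurrent-connections comes from the snippet above):

```clojure
;; Hypothetical CPU-heavy Ring handler, standing in for real work.
(defn cpu-heavy-handler [_request]
  {:status 200 :body "OK"})

;; One shared atom, so several wrapped routes draw from the same budget.
(def current-connections (atom 0))

(def app
  (wrap-limit-concurrent-connections cpu-heavy-handler current-connections 5))

;; Under the limit, a request passes through, and the counter
;; returns to 0 once the response has been produced.
(app {})             ; => {:status 200 :body "OK"}
@current-connections ; => 0
```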

Though the middleware does make the 504 - Gateway Timeout responses go away, they are replaced with slightly fewer 503 - Service Unavailable errors. The important part is that the maximum possible number of 200 - OK responses is allowed through, keeping the system partially responsive while it scales up (deploying more instances).

Visualization of how the concurrent HTTP connections actually put load on the CPU with the middleware applied.

I ran tests to find the right value for max-connections that matches the given work and hardware the service was running on.

Endpoints with low CPU intensity, such as health checks, should not be wrapped in the middleware. You don't want a service instance terminated and restarted just because the health check can't get through to say: I'm still doing important stuff.

A more sophisticated rate-limiting middleware is possible using the same scaffolding as above. Maybe something that times requests and reduces concurrency as response time goes up, or something with different weights instead of just incrementing and decrementing by one. But if this starts getting hairy, you might be better off with an entirely different architecture.

Use with caution. 💚

Permalink

Datastar Observations

I've been very impressed, so far, with Datastar (https://data-star.dev), a tiny JavaScript library for front-end work. I've been switching a personal side project from using Svelte for its UI to Datastar, and as amazing as Svelte is, Datastar has impressed me more.

Datastar's essential concept is for the client to shift virtually all logic and all markup rendering back to the server; event handlers can succinctly call server endpoints, which return markup, and the markup is morphed into the running DOM. This makes the server side the system of record. Datastar has a nice DSL, based on data-* attributes, allowing you to do nearly anything you need to do in the client, declaratively.

Alternately, the server can start an SSE (server sent event) stream and send down markup to morph into the DOM, or JavaScript to execute, over any period of time. For example, my project has a long running process and it was a snap to create a modal progress dialog and keep it updated as the server-side process looped through its inputs.

The mantra of Datastar is to trust the morph and the browser -- it's surprisingly fast to update even when sending a fair bit of content. It feels wasteful to update a whole page just to change a few small things (say, mark a button as disabled), but it works, and it's fast, and it frees you from nearly all client-side reactive updates (and all the related edge cases and unforeseen consequences).

The server side is not bound to any particular language or framework (they have API implementations for Clojure, Java, Python, Ruby, and many others) ... and you could probably write your own API in an afternoon.

I especially like side-stepping the issue of needing more representations of data; the data lives server-side, all that is ever sent to the client is markup. There's no over-the-wire representation, and no parallel client-side data model. All that's ever exposed as endpoints are intentional ones that do work and deliver markup ... in other words, always use-case based, never schema based.

There's a minimal amount of reactive logic in the client, but the essence of moving the logic to the server feels like home; Tapestry (way back in 2005) had some similar ideas, but was far more limited (due to many factors, including JavaScript and browser maturity at that time).

I value simplicity, and Datastar looks to fit my needs without doing so much that is magical or hidden. I consider that a big win!

Permalink

Java in 2026: still boring, still powerful, still printing money

Let’s be honest.

Java is not sexy.
Nobody wakes up excited thinking “wow, I hope today I can write some beautiful enterprise Java code.”
Java boring meme

And yet…

Banks, fintechs, payment processors, credit engines, risk platforms, and trading systems are still massively powered by Java in 2026.
And they’re not moving away anytime soon. With AI taking up to 70% of written code? Not a problem.

I have worked as a software engineer for more than 17 years, and ever since I started, people talking about Java becoming legacy has just been part of my day.

It's 2026: American debt is skyrocketing, BTC is melting, and the dollar is losing value... And Java? Well, it looks like this guy is a tough dinosaur.

Quick Java timeline (why this dinosaur refuses to die)

  • 1995 → Java is born. “Write once, run anywhere.”
  • 2006 → Open-sourced. Enterprise adoption explodes.
  • 2014 → Java 8. Lambdas. Streams. Real modern Java begins.
  • 2018–2021 → 6-month release cycle. JVM performance goes crazy.
  • 2021 → Java 17 LTS becomes enterprise default.
  • 2023 → Java 21 LTS ships with virtual threads (Project Loom). Massive scalability shift.
  • 2026 → Java 25 era. Cloud-native, AI-assisted dev, still dominating finance production systems.

So yeah…
Not dead. Not even close.

Why finance still trusts Java more than anything else

Financial systems do not care about hype.

They care about predictability, latency stability, memory safety, tooling maturity and, last but not least, a huge hiring pool.

JVM stability is unmatched if you run:

  • loan engines
  • payment authorization
  • anti-fraud scoring
  • card transaction routing

JVM gives:

  • battle-tested GC (even with its problems)
  • insane observability
  • deterministic performance tuning
  • backwards compatibility across decades

Concurrency changed the game (virtual threads)

Before Java 21:

threads were expensive and async code was ugly. Now:

try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, 1_000_000).forEach(i ->
        executor.submit(() -> processTransaction(i))
    );
}

This is millions of concurrent operations with simple blocking code and no reactive nightmare.
For fintech backends, this is huge.

Spring ecosystem still dominates enterprise

Yes, people complain about Spring Boot, but look at reality:

  • 80%+ of fintech APIs run on Spring
  • security + observability + config = solved problems
  • onboarding engineers is easy
  • production support is predictable

Example minimal API:

@RestController
@RequestMapping("/loans")
public class LoanController {

    @GetMapping("/{id}")
    public Loan getLoan(@PathVariable String id) {
        return new Loan(id, BigDecimal.valueOf(1000));
    }
}

Boring?
Yes, but every engineer can read it easily, even in the eventual case of an AI code-generation hallucination.
Finance chooses boring.

Performance is no longer an excuse

The modern JVM gives you ZGC and Shenandoah for ultra-low-latency garbage collection, JIT compilation with profiling that optimizes the runtime, and GraalVM native images for faster startup on any cloud provider.

The real shift in 2026: AI-augmented Java developers

This is where things get interesting. Guess what? Java is a big winner here: if I can't check everything that is being generated, how about relying on a very deterministic, well-shaped programming language?

A simple example of putting Claude Code/GitHub Copilot to work for you

Example: generate a Spring service instantly

Prompt in editor:
create a service that calculates compound interest with validation and unit tests

Result in seconds:

@Service
public class InterestService {

    public BigDecimal compound(
        BigDecimal principal,
        BigDecimal rate,
        int periods
    ) {
        if (principal.signum() <= 0 || rate.signum() < 0 || periods < 0) {
            throw new IllegalArgumentException("Invalid input");
        }

        return principal.multiply(
            BigDecimal.ONE.add(rate).pow(periods)
        );
    }
}

The tests look charming...

@Test
void compound_shouldGrow() {
    var service = new InterestService();
    var result = service.compound(
        BigDecimal.valueOf(1000),
        BigDecimal.valueOf(0.1),
        2
    );

    assertEquals(new BigDecimal("1210.00"), result.setScale(2));
}

Time saved with control.

How about using Claude for refactoring and architecture?

Some things that were a nightmare before are now just a task in Jira, like:

  • migrating legacy Java 8 → Java 21
  • converting blocking code → virtual threads
  • generating integration tests
  • explaining weird enterprise codebases

Real workflow:

  • inline edit or chat with the legacy class
  • ask it to modernize for Java 21 + clean architecture
  • tadah: get a production-ready refactor

This is insane leverage.

Let's talk about careers now. At this moment a lot of software engineers are actually losing their jobs, which is one more reason why Java careers are still strong...

Everyone wants to learn:

  1. Rust
  2. Go
  3. A new AI-agent-ish way of working
  4. shiny new things

But banks still run on Java, and guess what?

  • So supply ↓
  • Demand ↑

Opportunity.

Let's finish here...

Java in 2026 is like:
a boring Swiss bank account that quietly keeps getting richer
Not hype.
Not trendy.
But extremely powerful where it matters.

And in finance… somehow, even this boring guy is leveraging AI to have some fun.

I'm not saying stick with Java. Java is more than a programming language; it is a technology. We have, for example, Clojure, a Lisp way of coding that runs on the JVM. How about Kotlin? My point is to be aware and awake. Java still rocks.

posted here

Permalink

Full Stack Engineer (mid- to senior-level) at OpenMarkets Health

OpenMarkets people are…

  • Committed to driving waste out of healthcare
  • Transparent and accountable to their colleagues on a weekly basis.
  • Committed to the success of their customers and their teammates.
  • Hungry to learn by making and sharing their mistakes, as well as reading and discussing ideas with their teammates.
  • Eager to do today what most people would put off until tomorrow.

Why you want to work with us…

  • Fast-paced start-up environment with a lot of opportunity to make a large impact.
  • Passionate, dedicated colleagues with a strong vision for changing how healthcare equipment purchasing is done.
  • Opportunity to develop software to help remove wasteful spending from equipment purchasing, leaving more dollars for patient care.
  • Other benefits include comprehensive health care benefits, 401K with 4% match, pre-tax transit benefits, generous PTO, flexible maternity/family leave options and the ability to work remotely.

Apply today if you are someone…

  • Who is proficient in Clojure and ClojureScript (bonus if you're also familiar with Ruby on Rails).
  • Who knows (or is willing to learn) re-frame and Reagent Forms.
  • Who practices test-driven development
  • Who has written software for at least 4 years.
  • Who is empathetic towards their team, understands the tradeoffs in their implementations, and communicates their code effectively.

  • Who can speak and write in non-technical terms, and believes in the value of effective time management.

We want everyone. OpenMarkets is an equal opportunity employer. We believe that we can only make healthcare work for everyone if we get everyone to work on it. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Permalink

Python Only Has One Real Competitor

by: Ethan McCue

Clickbait subtitle: "and it's not even close"


Python is the undisputed monarch in exactly one domain: Data Science.

The story, as I understand it, goes something like this:

Python has very straightforward interop with native code. Because interop is straightforward, libraries like numpy and pandas got made. Around these, an entire Data Science ecosystem bloomed.

This in turn gave rise to interactive notebooks with iPython (now "Jupyter"), plotting with Matplotlib, and machine learning with PyTorch and company.

There are other languages and platforms - like R, MATLAB, etc. - which compete for users with Python. If you have a field biologist out there wading through the muck to measure turtles, they will probably make their charts and papers with whatever they learned in school.

But the one glaring weakness of these competitors is that they are not general purpose languages. Python is. This means that Python is also widely used for other kinds of programs - such as HTTP servers.

So for the people who are training machine learning models to do things like classify spam, it is easiest to then serve those models using the same language and libraries that were used to produce them. This can be done without much risk because you can assume Python will have things like, say, a Kafka library should you need it. You can't really say the same for MATLAB.

And in the rare circumstance you want to do something that is easiest in one of those alternative ecosystems, Python will have a binding. You can call R with rpy2, MATLAB with another library, Spark with PySpark, and so on.

For something to legitimately be a competitor to Python I think it needs to do two things.

  1. Be at least as good at everything as Python.
  2. Be better than Python in a way that matters.

The only language which clears this bar is called Clojure.

Clojure Language Logo


That's a bold claim, I know. Hear me out.

Clojure has a rich ecosystem of Data Science libraries: feature-complete numpy and pandas equivalents in dtype-next and tech.ml.dataset, metamorph.ml for Machine Learning pipelines, Tableplot for plotting, and Clay for interactive notebooks.

For what isn't covered it can call Python directly via libpython-clj, R via ClojisR, and so on.

Clojure is also a general purpose language. Making HTTP servers or whatever else in Clojure is very practical.

What starts to give Clojure the edge is also the answer to the age-old question: "Why is Python so slow?"

Python is slow because it cannot be made fast. The dark side of Python's easy interop with native code is that many of the implementation details of CPython were made visible to, and relied upon by, authors of native bindings.

Because all these details were relied upon, the authors of the CPython runtime can't really change those details and not break the entire Data Science ecosystem. This heavily constrains the optimizations that the CPython runtime can do.

This means that people need to constantly avoid writing CPU intensive code in Python. It is orders of magnitude faster to use something which delegates to the native world than something written in pure Python. This affects the experience of things like numpy and pandas. There is often a "fast way" to do something and several "slow ways." The slow ways are always when too much actual Python code gets involved in the work.
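
The effect is easy to demonstrate even with the standard library alone (a toy benchmark of mine, not from the article; timings will vary by machine). Summing a million integers with an interpreted Python loop versus the C-implemented sum builtin:

```python
import timeit

n = 1_000_000

def slow_sum():
    """The 'slow way': every iteration executes interpreted bytecode."""
    total = 0
    for i in range(n):
        total += i
    return total

def fast_sum():
    """The 'fast way': the loop runs inside CPython's C builtin."""
    return sum(range(n))

# Both compute the same value...
assert slow_sum() == fast_sum() == n * (n - 1) // 2

# ...but the delegating version is typically several times faster.
print("loop   :", timeit.timeit(slow_sum, number=3))
print("builtin:", timeit.timeit(fast_sum, number=3))
```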

Clojure does not have this problem. Clojure is a language that runs on the Java Virtual Machine. The JVM can optimize code like crazy on account of all the souls sacrificed to it. So you can write real logic in Clojure with no issue.

There's a reason Python's list is implemented in C code but Java can have multiple competing implementations, all written in Java. Java code can count on some aggressive runtime optimizations when it matters.

This also means that if you use Clojure for something like an HTTP Server to serve a model, you can generally expect much better performance at scale than the equivalent in Python. You could even write that part in pure Java to make use of that trained pool of developers. Anecdotally, startups often switch from whatever language they started with to something that runs on the JVM once they get big enough to care about performance.

Clojure's library ecosystem includes many high quality libraries written in Java. Many of these are better performing than their Python analogues. Many also do things for which Python has no equivalent. Clojure then gets access to all Python libraries via libpython-clj.

Clojure's interop story is also quite strong at the language level. Calling a Python function is almost as little friction linguistically as calling a Clojure function. Calling native code with coffi is also pretty darn simple.
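
A rough sketch of what that looks like with libpython-clj2 (a hedged example of mine: it assumes a local Python with numpy installed, and uses the library's initialize!, import-module, and py. call forms):

```clojure
(require '[libpython-clj2.python :as py])

;; Locate and embed the local Python runtime.
(py/initialize!)

(def np (py/import-module "numpy"))

;; Calling a Python function reads almost like calling a Clojure one.
(py/py. np sum [1 2 3]) ; sums the vector via numpy
```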

The language is also very small even compared to Python. Obviously the education system infrastructure is not in place, but in principle there is less to learn about the language itself before one can productively learn how to do Data Science.

An extremely important part of productive Data Science work is interacting with a dataset. This is why interactive notebooks are such a big part of this world. It's also a benefit of using dynamic languages like Python and Clojure. Being able to run quick experiments and poke at data is more important than static type information.

Clojure is part of a family of languages with a unique method of interactive development. This method is considered by its fans to be superior to the cell-based notebooks that Jupyter provides.

All in all, it's a competitive package. Whether it ever gets big enough to take a big bite of Python comes down to kismet, but I think it's the only thing that might stand a chance to.


If this got you interested in learning Clojure check out Clojure Camp for resources and noj for a cohesive introduction to the Data Science ecosystem.


Permalink

Clojure’s Persistent Data Structures: Immutability Without the Performance Hit

How structural sharing makes immutable collections fast enough to be the default choice in functional programming In most programming languages, immutability is a performance compromise. Make your data structures immutable, the thinking goes, and prepare to pay the cost in memory and speed. Every modification means a full copy. Every update means allocating new memory. …

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.