Eight Queens in core.logic

Welcome to my blog.

Here I present my core.logic solution to the classic Eight Queens puzzle.

(ns eight-queens
  (:refer-clojure :exclude [==])
  (:require
   [clojure.string :as str]
   [clojure.core.logic :refer :all :as l]
   [clojure.core.logic.fd :as fd]))

;; Classic AI problem:
;;
;; Find a chess board configuration where (n=8) queens are on the board,
;; and no pair attacks each other.
;;

;;
;; representation:
;;
;; a permutation of <0...7>,
;; 1 number per row, giving the queen's column,
;; so that [0,1,2,3,4,5,6,7] places all queens on a diagonal (everybody attacks everybody).
;;
;; This representation already constrains the configuration:
;; - all queens are on different rows.
;; - all queens are on different columns.
;;
;; This is fine, because those configurations are not in the set of solutions anyway.
;;

(defn queens-logic
  [n]
  ;; make (n=8) logic variables, one per row
  (let [colvars (map (fn [i] (l/lvar (str "col-" i))) (range n))]
    (l/run* [q]
      ;; 1. assign the domain 0..(n-1)
      (everyg (fn [lv] (fd/in lv (fd/interval (dec n)))) colvars)
      ;; 2. 'columns must be different' constraint // permutation
      (fd/distinct colvars)
      ;; 3. diagonal constraint
      ;; for each pair of queens, say that they don't attack diagonally
      (and* (for [i (range n)
                  j (range (inc i) n)
                  :let [row-diff (- j i)
                        ci (nth colvars i)
                        cj (nth colvars j)]]
              (fresh []
                ;; handle south-east and north-east cases
                (fd/eq (!= cj (+ ci row-diff)))
                ;; '-' relation didn't work somehow
                (fd/eq (!= ci (+ cj row-diff))))))
      (l/== q colvars))))

(take 1 (queens-logic 8))
;; => ((1 3 5 7 2 0 6 4))

In relational programming, the code constructs are logic variables and goals. We write a program that sets up the constraints on the variables, then hand it to the logic engine with run.

After deciding on the clever representation, a permutation of column positions, we can program the constraints we need:

  1. Each queen is a number between 0 and n - 1; for each row, the number says which column the queen is on (or vice versa).
  2. Each queen is a different number from the others - it is on a different column. (1+2 are the 'permutation constraint')
  3. The queens don't attack each other diagonally.
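
Outside the logic engine, the same three constraints can be checked with a plain predicate. A minimal sketch (the name valid-config? is mine, not part of the solution above):

```clojure
;; Plain-Clojure check of the same constraints: the columns are a
;; permutation (all distinct), and no pair of queens shares a
;; diagonal (|row difference| = |column difference|).
(defn valid-config? [cols]
  (let [n (count cols)]
    (and (= n (count (distinct cols)))
         (every? (fn [[i j]]
                   (not= (- j i) ; row difference, positive since j > i
                         (Math/abs (- (nth cols i) (nth cols j)))))
                 (for [i (range n) j (range (inc i) n)] [i j])))))

(valid-config? [1 3 5 7 2 0 6 4]) ;; => true  (the solution found above)
(valid-config? [0 1 2 3 4 5 6 7]) ;; => false (everyone on the main diagonal)
```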

Verifying the correctness:

(comment

  ;; reference is defined below
  (def reference-outcome (find-all-solutions 8))

  (def outcome (queens-logic 8))

  [(= (into #{} reference-outcome) (into #{} outcome))
   (every? zero? (map quality reference-outcome))
   (every? zero? (map quality outcome)) (count outcome)]

  ;; =>
  [true true true 92])


AI-generated backtracking and hill-climbing solutions:

;; ============================
;; Helpers and non-relational solutions

(defn quality
  "Count the number of queens attacking each other in the given board configuration.
   board-config is a vector of column positions, one per row.
   Returns the number of pairs of queens that attack each other."
  [board-config]
  (let [n (count board-config)]
    (loop [row1 0
           conflicts 0]
      (if (>= row1 n)
        conflicts
        (let [col1 (nth board-config row1)
              new-conflicts (loop [row2 (inc row1)
                                   acc 0]
                              (if (>= row2 n)
                                acc
                                (let [col2 (nth board-config row2)
                                      ;; Check diagonal attacks
                                      diag-attack? (= (Math/abs (- row1 row2))
                                                      (Math/abs (- col1 col2)))]
                                  (recur (inc row2)
                                         (if diag-attack? (inc acc) acc)))))]
          (recur (inc row1) (+ conflicts new-conflicts)))))))

(defn valid-solution?
  "Returns true if the board configuration has no conflicts."
  [board-config]
  (zero? (quality board-config)))

(defn print-board
  "Prints a visual representation of the board."
  [board-config]
  (let [n (count board-config)]
    (doseq [row (range n)]
      (let [col (nth board-config row)]
        (println (apply str (for [c (range n)]
                              (if (= c col) "Q " ". "))))))
    (println)))

(defn solve-backtrack
  "Solves the N-Queens problem using backtracking.
   Returns the first valid solution found, or nil if none exists."
  [n]
  (letfn [(safe? [board row col]
            (let [board-vec (vec board)]
              ;; Check if placing a queen at [row col] is safe
              (not (some (fn [r]
                           (let [c (nth board-vec r)]
                             (or (= c col)
                                 (= (Math/abs (- r row))
                                    (Math/abs (- c col))))))
                         (range row)))))

          (place-queens [board row]
            (if (= row n)
              board ;; Solution found
              (some (fn [col]
                      (when (safe? board row col)
                        (place-queens (conj board col) (inc row))))
                    (range n))))]

    (place-queens [] 0)))

(defn find-all-solutions
  "Finds all solutions to the N-Queens problem.
   Returns a sequence of all valid board configurations."
  [n]
  (letfn [(safe? [board row col]
            (let [board-vec (vec board)]
              (not (some (fn [r]
                           (let [c (nth board-vec r)]
                             (or (= c col)
                                 (= (Math/abs (- r row))
                                    (Math/abs (- c col))))))
                         (range row)))))

          (place-queens [board row]
            (if (= row n)
              [board] ;; Return solution in a vector
              (mapcat (fn [col]
                        (when (safe? board row col)
                          (place-queens (conj board col) (inc row))))
                      (range n))))]

    (place-queens [] 0)))

(defn random-config
  "Generates a random board configuration of size n."
  [n]
  (vec (shuffle (range n))))


(defn solve-hill-climbing
  "Solves the N-Queens problem using hill climbing with random restarts.
   max-restarts: maximum number of random restarts to attempt.
   max-steps: maximum steps per climb attempt."
  [n & {:keys [max-restarts max-steps]
        :or {max-restarts 100 max-steps 1000}}]
  (letfn [(swap-positions [config i j]
            (assoc config i (nth config j) j (nth config i)))

          (get-neighbors [config]
            (for [i (range n)
                  j (range (inc i) n)]
              (swap-positions config i j)))

          (climb [config steps]
            (if (or (zero? steps) (valid-solution? config))
              config
              (let [current-quality (quality config)
                    neighbors (get-neighbors config)
                    better-neighbors (filter #(< (quality %) current-quality) neighbors)]
                (if (empty? better-neighbors)
                  config ;; Local minimum reached
                  (recur (first (sort-by quality better-neighbors))
                         (dec steps))))))]

    (loop [restarts 0]
      (if (>= restarts max-restarts)
        nil ;; Failed to find solution
        (let [start-config (random-config n)
              result (climb start-config max-steps)]
          (if (valid-solution? result)
            result
            (recur (inc restarts))))))))


(comment
  ;; Example usage:

  ;; Test the quality function
  (quality [0 1 2 3 4 5 6 7]) ;; All on diagonal - many conflicts
  ;; => 28

  (quality [0 4 7 5 2 6 1 3]) ;; A valid solution
  ;; => 0

  ;; Solve for 8 queens using backtracking
  (def solution (solve-backtrack 8))
  solution
  ;; => [0 4 7 5 2 6 1 3]

  (print-board solution)
  ;; Q . . . . . . .
  ;; . . . . Q . . .
  ;; . . . . . . . Q
  ;; . . . . . Q . .
  ;; . . Q . . . . .
  ;; . . . . . . Q .
  ;; . Q . . . . . .
  ;; . . . Q . . . .

  ;; Find all solutions
  (def all-sols (find-all-solutions 8))
  (count all-sols)
  ;; => 92 (there are 92 distinct solutions for 8 queens)

  ;; Solve using hill climbing
  (def hc-solution (solve-hill-climbing 8))
  (print-board hc-solution)

  ;; Test quality on various board sizes
  (quality [0 1]) ;; => 0 (2 queens, no conflict)
  (quality [0 2 1]) ;; => 0 (3 queens, valid)
  (quality [1 3 0 2]) ;; => 0 (4 queens, valid)
  )

I could print all solutions. Why not do it with HTML, so it renders on this blog?

AI generated:

(require '[hiccup2.core :as html])

(defn board-to-hiccup
  "Converts a board configuration to hiccup format with checkerboard pattern."
  [board-config]
  (let [n (count board-config)]
    [:div {:style {:display "inline-block"
                   :border "2px solid #333"}}
     (for [row (range n)]
       (let [col (nth board-config row)]
         [:div {:style {:display "flex"}}
          (for [c (range n)]
            (let [is-dark? (odd? (+ row c))
                  has-queen? (= c col)]
              [:div {:style {:width "60px"
                             :height "60px"
                             :background-color (if is-dark? "#769656" "#eeeed2")
                             :display "flex"
                             :align-items "center"
                             :justify-content "center"
                             :font-size "40px"
                             :font-weight "bold"
                             :color "#000"}}
               (when has-queen? "♕")]))]))]))

(spit
 "board.html"
 (html/html
     [:div
      {:style
       {:display "flex"
        :padding "8px"
        :gap "8px"
        :flex-wrap "wrap"}}
      (doall (map board-to-hiccup (queens-logic 8)))]))

All solutions printed, because why not.


The “Jankiest” way of writing Ruby gems

Recently I wrote about Jank, the programming language that brings Clojure to the native world using LLVM. I also mentioned that Jank is not yet ready for production usage – and that is just true; it still has many bugs that prevent us from using it for more complex problems, and maybe even some simple ones.

That doesn’t mean we can’t try it, and see how bright the future might be.

In my last post I ended up showing an example of Ruby code. Later, after many problems trying to make something more complex, I finally was able to hack up a solution that bypasses some of the bugs that Jank still has.

And let me tell you, even in the pre-alpha stage that the language is, I can already see some very interesting things – the most important one being the “interactive development” of the native extension – or if you want to use the Ruby terms, monkey patching native methods.

Let’s start with some very simple code: a Ruby method that’s supposed to be written in a native language. I’m going to start with C++ because it’s the main way of doing that. For now, don’t worry about the details – it’s just a class creation with a method that prints “Hello, world!” on the screen:

#include <iostream>
#include <ruby.h>

static VALUE hello(VALUE self) {
  std::cout << "Hello, world!\n";
  return Qnil;
}

void define_methods() {
  VALUE a_class = rb_define_class("Jank", rb_cObject);
  rb_define_method(a_class, "hello", RUBY_METHOD_FUNC(hello), 0);
}

extern "C" void Init_jank_impl() {
  define_methods();
}

We should be able to port this directly to Jank by just translating from C++ to a Clojure-like syntax – should being the keyword here, because we can’t. There are a bunch of different bugs that prevent us from doing that right now:

  1. We can’t define the hello function, because we have no way to add type signatures to Jank functions. To bypass that, we have to define the method in C, using cpp/raw, which brings us to
  2. Using cpp/raw to define C++ functions doesn’t work – the generated code will somehow duplicate the method definition, so things won’t compile. We can solve that with the same technique C headers use – #ifndef and #define guards to avoid duplication.
  3. Unfortunately, Jank doesn’t actually understand the callback method signature. It expects something that matches the method signature exactly, but for Ruby callbacks (of native functions) the C API sometimes uses “type aliases” or “C preprocessor macros”. We can also solve that, but we need to “cast” the C function with the “concrete” signature to the one with the “abstract” one so that Jank will be happier.
  4. Finally, Jank doesn’t understand C macros/preprocessors. So in some cases (for example, converting a Ruby string to a C one) we’ll also need to generate an “intermediate function” to solve the issue.

First experiments

With all that out of the way, we can actually do some experiments:

(ns ruby-ext)

(cpp/raw "
#ifndef JANK_HELLO_FN
#define JANK_HELLO_FN
#include <iostream>
#include <jank/c_api.h>
#include <ruby.h>

static VALUE jank_hello(VALUE self) {
  std::cout << \"Hello, world\\n\";
  return Qnil;
}

static auto convert_ruby_fun(VALUE (*fun)(VALUE)) {
  return RUBY_METHOD_FUNC(fun);
}

#endif
")

(defn init-ext []
  (let [class (cpp/rb_define_class "Jank" cpp/rb_cObject)]
    (cpp/rb_define_method class "hello" (cpp/convert_ruby_fun cpp/jank_hello) 0)))

And this works. But obviously, that’s not what we want. We want a Jank function to be called as the Ruby implementation. To do that, we can use the C API to call back into Jank, so let’s just add a function and change our jank_hello code to call this new function:

(defn hello []
  (println "Hello, from JANK!"))

(cpp/raw "
...
static VALUE jank_hello(VALUE self) {
  rb_ext_ractor_safe(true);
  auto const the_function(jank_var_intern_c(\"ruby-ext\", \"hello\"));
  jank_call0(jank_deref(the_function));
  return Qnil;
}
...
")

Compiling the Jank code

This actually works, and we get a Hello, from JANK! in the console! But to make that work, we need to compile the Jank code and then link it together with our C++ “glue code” before we can use it from Ruby. And to compile with Jank, we actually need to include the Ruby headers and the Ruby library directory, otherwise it won’t work – but knowing which flags to use can be a challenge. Luckily, Ruby can help with that:

ruby -e "require 'rbconfig'; puts '-I' + RbConfig::CONFIG['rubyhdrdir'] + ' -I' + RbConfig::CONFIG['rubyarchhdrdir'] + ' -L' + RbConfig::CONFIG['libdir'] + ' -l' + RbConfig::CONFIG['RUBY_SO_NAME']"

That will print the flags you need. Then you save your file as ruby_ext.jank and compile it with jank <flags-from-last-cmd> --module-path . compile-module ruby_ext. Unfortunately, the output (the ruby_ext.o file) goes to a different directory depending on lots of different things (at least on my machine) – so my build script first deletes the target directory, then uses a wildcard in the extconf.rb file (the file used to prepare a Ruby native extension) so that we pick up whatever lands under the target directory. Then, finally, we can build a final Ruby gem:

# extconf.rb
require 'mkmf'

dir_config('jank', ['.'])
$CPPFLAGS += " -I/usr/local/lib/jank/0.1/include"
RbConfig::MAKEFILE_CONFIG['CC'] = 'clang++'
RbConfig::MAKEFILE_CONFIG['CXX'] = 'clang++'
RbConfig::MAKEFILE_CONFIG['LDSHARED'] = 'clang++ -shared'

with_ldflags("
  -L/usr/local/lib/jank/0.1/lib/
  target/*/ruby_ext.o
  -lclang-cpp
  -lLLVM
  -lz
  -lzip
  -lcrypto
  -l jank-standalone
".gsub("\n", " ")) do
  create_makefile('jank_impl')
end

// jank_impl.cpp
#include <ruby.h>
#include <jank/c_api.h>

using jank_object_ref = void*;
extern "C" jank_object_ref jank_load_clojure_core_native();
extern "C" jank_object_ref jank_load_clojure_core();
extern "C" jank_object_ref jank_var_intern_c(char const *, char const *);
extern "C" jank_object_ref jank_deref(jank_object_ref);
extern "C" void jank_load_ruby_ext();

extern "C" void Init_jank_impl() {
  auto const fn{ [](int const argc, char const **argv) {
    jank_load_clojure_core_native();
    jank_load_clojure_core();

    jank_load_ruby_ext();
    auto const the_function(jank_var_intern_c("ruby-ext", "init-ext"));
    jank_call0(jank_deref(the_function));

    return 0;
  } };

  jank_init(0, NULL, true, fn);
}

And compile with rm target -Rf && jank <flags> --module-path . compile-module ruby_ext && ruby extconf.rb && make. This surprisingly works! I say “surprisingly” because, remember, this way of using the language is not officially supported!

Refactoring the glue code away

Now, this whole pile of “glue code” is a problem – it’s quite tedious to remember to write C code, then convert that C code to be usable from Ruby. But LISPs have macros, and Jank is no exception – so let’s define a defrubymethod that accepts parameter+type pairs and will generate all the boilerplate for us:

(cpp/raw "
#ifndef JANK_CONVERSIONS
#define JANK_CONVERSIONS
static jank_object_ref convert_from_value(VALUE value) {
  return jank_box(\"unsigned long *\", (void*) &value);
}
#endif
")

(defmacro defrubymethod [name params & body]
  (let [rb-fun-name (replace-substring (str name) "-" "_")
        cpp-code (str "#ifndef " rb-fun-name "_dedup\n#define " rb-fun-name "_dedup\n"
                      "static VALUE " rb-fun-name "_cpp("
                      (->> params (map (fn [[param type]] (str type " " param)))
                           (str/join ", "))
                      ") {\n"
                      "  auto const the_function(jank_var_intern_c(\"" *ns* "\", \"" name "\"));\n"
                      "  jank_call" (count params) "(jank_deref(the_function), "
                      (->> params (map (fn [[param]] (str "convert_from_value(" param ")"))) (str/join ", "))
                      ");\n"
                      "  return Qnil;\n}\n#endif")]
    `(do
       (cpp/raw ~cpp-code)
       (defn ~name ~(mapv first params) ~@body))))

We need to implement replace-substring too, because str/replace is still not available in Jank. Anyway, now we can create Ruby methods like this:
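
For reference, such a replace-substring can be written with nothing but subs and String interop. A minimal sketch in plain Clojure (the actual Jank implementation in the repository may differ):

```clojure
;; Naive substring replacement: find the first occurrence of `from`,
;; splice in `to`, and recurse on the rest of the string.
(defn replace-substring [s from to]
  (let [idx (.indexOf ^String s ^String from)]
    (if (neg? idx)
      s
      (str (subs s 0 idx)
           to
           (replace-substring (subs s (+ idx (count from))) from to)))))

(replace-substring "hello-world-fn" "-" "_") ;; => "hello_world_fn"
```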

(defrubymethod hello-world [[self VALUE]]
  (println "HELLO, WORLD!"))

(defn init-ext []
  (let [class (cpp/rb_define_class "Jank" cpp/rb_cObject)]
    (cpp/rb_define_method class "hello_world" (cpp/convert_ruby_fun cpp/hello_world_cpp) 0)))

Way easier, and no boilerplate. But now, it’s “superpower” time – because Jank is dynamic, we can actually reimplement hello-world while the code is running and Ruby will use the new implementation – it doesn’t matter that the implementation is native!

A REPL (kinda – more like a REP)

Of course, we also have a problem with this approach: Jank doesn’t actually have a REPL API (yet), so the easiest way to solve this is to add another method to our Ruby class that will evaluate something in Jank. This might sound confusing, and it kinda is – we’re actually registering, in Jank, a method that will be called by Ruby; this method will call C++ code, which will delegate back to Jank to evaluate some Jank code. The trick here is to be able to pass Ruby parameters to Jank too, and to make this a reality, what I decided to do is:

  1. Ruby will pass a string containing Jank code; that string can refer some pre-defined variables p0, p1, etc – each is a “parameter” that we can pass
  2. To be able to “bind” these variables, we’ll create a Jank function called __eval-code that will contain these parameters. We then will call this __eval-code and we’ll “box” the Ruby parameters (“boxing” is a way for Jank to be able to receive any arbitrary C++ value)
  3. To be able to inspect these values, we’ll also create a rb-inspect function that will call inspect on the “unboxed” Ruby value

(defn define-eval-code [code num-of-args]
  (let [params (->> num-of-args
                    range
                    ;; symbols, so the vector prints as [p0 p1 ...]
                    ;; and not as ["p0" "p1" ...]
                    (mapv #(symbol (str "p" %))))
        code (str "(defn __eval-code [" params "] " code "\n)")]
    (eval (read-string code))))

(cpp/raw "
...
static const char * inspect(VALUE obj) {
  ID method = rb_intern(\"inspect\");
  VALUE ret = rb_funcall(obj, method, 0);
  return StringValueCStr(ret);
}

static VALUE eval_jank(int argc, VALUE *argv, VALUE self) {
  try {
    const char *code = StringValueCStr(argv[0]);
    auto const the_function(jank_var_intern_c(\"ruby-ext\", \"define-eval-code\"));
    jank_call2(jank_deref(the_function), jank_string_create(code), jank_integer_create(argc));

    auto const the_function2(jank_var_intern_c(\"ruby-ext\", \"__eval-code\"));
    std::vector<jank_object_ref> arguments;
    for(int i = 0; i < argc; i++) {
      arguments.push_back(jank_box(\"unsigned long *\", (void*) &argv[i]));
    }
    arguments.resize(14); // pad with nulls so the indexing below stays in bounds

    jank_call1(jank_deref(the_function2),
      jank_vector_create(argc,
        arguments[0],
        arguments[1],
        arguments[2],
        arguments[3],
        arguments[4],
        arguments[5],
        arguments[6],
        arguments[7],
        arguments[8],
        arguments[9],
        arguments[10],
        arguments[11],
        arguments[12],
        arguments[13]
    ));
  } catch(jtl::ref<jank::error::base> e) {
    std::cerr << \"ERROR! \" << *e << \"\\n\";
  }
  return Qnil;
}

static auto convert_ruby_fun_var1(VALUE (*fun)(int, VALUE*, VALUE)) {
  return RUBY_METHOD_FUNC(fun);
}
...")

(defn rb-inspect [boxed]
  (let [unboxed (cpp/* (cpp/unbox cpp/VALUE* boxed))]
    (cpp/inspect unboxed)))

(defn init-ext []
  (let [class (cpp/rb_define_class "Jank" cpp/rb_cObject)]
    (cpp/rb_define_method class "hello_world" (cpp/convert_ruby_fun cpp/hello_world_cpp) 0)
    (cpp/rb_define_method class "eval_jank" (cpp/convert_ruby_fun_var1 cpp/eval_jank) -1)))

This… was a lot of code. But here’s what it does: define-eval-code will basically build (defn __eval-code [[p0 p1 p2 p3]] ... ) – for example, if we pass 3 parameters (self will always be p0). Then we have the implementation, in C++, of eval_jank – this will just get the first parameter, convert it to a const char*, and pass it to the define-eval-code we created previously; this will be evaluated, and we’ll create (or patch!) the __eval-code function. Now, the problem is how to pass parameters – I hacked the solution a bit by creating a C++ vector, “boxing” everything, and then arbitrarily creating a Jank vector with 14 elements; and that’s it.

Notice also that jank_box passes the type as unsigned long * instead of VALUE *. This is, again, because Jank doesn’t support type aliases for now – so VALUE * gets resolved in Jank to unsigned long *, but doesn’t get resolved in C++. Also, be aware of the space between long and * – Jank needs this too, otherwise you won’t be able to unbox the variable.

Now that we have everything in order, we can finally test some superpowers!

Ruby console

Ruby has a REPL, and that’s what we’ll use. After compiling the whole code, run irb and then you can test the code:

irb(main):001> jank = Jank.new
=> #<Jank:0x00007324fec7da08>
irb(main):002> jank.hello_world
HELLO, WORLD!
=> nil
irb(main):003> jank.eval_jank '(prn (rb-inspect p1))', [1, 2, 3]
"[1, 2, 3]"
=> nil
irb(main):004> jank.eval_jank '(defn hello-world [_] (println "Another Implementation!") )'
=> nil
irb(main):005> jank.hello_world
Another Implementation!
=> nil

Yes – in line 004, we reimplemented hello-world, and now Ruby happily uses the new version! Supposedly, this could be used to implement the whole code interactively – but without proper nREPL support, we can’t – some stuff simply doesn’t work (yet, probably), like cpp/VALUE – it depends on the Ruby headers being present, the compilation flags, etc. Maybe in the future we’ll have better support for shared libraries, who knows?

And here is the very interesting part – this is not limited to Ruby. Any language that you can extend via some C API can be used via Jank, which is basically almost any language – Python, Node.JS, and maybe even WASM in the future. The future is very bright, and with some interesting and clever macros, we might have an amazing new choice to make extensions to languages!

And… another thing

Most of the time, when writing extensions, one of the worst things is the need to convert between the native types and the language’s own. In Ruby, everything is a VALUE – in Jank, everything is a jank_object_ref. But again, Jank has macros – can we use them to transparently convert these types? And in such a way that’s fast, relies on no reflection, etc.? Turns out we can – we can change defrubymethod to receive a parameter before our body that will be the “return type”. We’ll then implement some Jank->Ruby conversions, and vice versa, and transparently convert, box and unbox stuff, etc. So as a “teaser”, here’s one of the first versions of this conversion:

(defn ruby->jank [type var-name]
  (case type
    Keyword (str "to_jank_keyword(" var-name ")")
    String (str "to_jank_str(" var-name ")")
    Integer (str "jank_integer_create(rb_num2long_inline(" var-name "))")
    Double (str "jank_real_create(NUM2DBL(" var-name "))")
    Boolean (str "((" var-name " == Qtrue) ? jank_const_true() : jank_const_false())")
    VALUE (str "jank_box(\"unsigned long *\", (void*) &" var-name ")")))

(def jank->ruby
  '{String "  auto str_obj = reinterpret_cast<jank::runtime::obj::persistent_string*>(ret);
    return rb_str_new2(str_obj->data.c_str());"
    Integer "  auto int_obj = reinterpret_cast<jank::runtime::obj::integer*>(ret);
    return LONG2NUM(int_obj->data);"
    Double "  auto real_obj = reinterpret_cast<jank::runtime::obj::real*>(ret);
    return DBL2NUM(real_obj->data);"
    Boolean "  auto bool_obj = reinterpret_cast<jank::runtime::obj::boolean*>(ret);
    return bool_obj->data ? Qtrue : Qfalse;"
    Keyword "  auto kw_obj = reinterpret_cast<jank::runtime::obj::keyword*>(ret);
    auto kw_str = kw_obj->to_string();
    auto kw_name = kw_str.substr(1);
    return ID2SYM(rb_intern(kw_name.c_str()));"
    Nil "  return Qnil;"})

You might be asking: why does it return a string all the time? That’s because we’ll generate a C++ string in our macro – this string, which originally was just a way to help define a C++ function with the right signature, pass control to Jank, and return Qnil to Ruby, will now also convert parameters before sending them to Jank, which means we’ll have the correct type information on Jank’s side.

With all this, we can actually create some Jank code without any explicit conversion, that behaves like Jank, and will seamlessly work in Ruby too:

(defrubymethod sum-two-and-convert-to-str [[self VALUE] [p1 Integer] [p2 Integer]] String
  (str (+ p1 p2)))

(defn init-ext []
  (let [class (cpp/rb_define_class "Jank" cpp/rb_cObject)]
    ...
    (cpp/rb_define_method class "sum_and_to_s" (cpp/convert_ruby_fun2 cpp/sum_two_and_convert_to_str_cpp) 2)
    ...))

And then you can run with this:

Jank.new.sum_and_to_s(12, 15)
# => "27"

As soon as Jank stabilizes, this might actually be the best way to create language extensions! If you want to check out the work so far, see this repository with all the code in this post (and probably more in the future). Just install Jank, run ./build.sh, and cross your fingers 🙂


Exception handling differences between Clojure map & pmap

With this post, I am in deeper waters than usual. What might sound like a recommendation in the following could be a potential disaster in disguise. Be warned.

Personally, I prefer not to know the implementation details of the function I’m calling. Yet that was the situation I suddenly found myself in, when a function I call replaced map with pmap.

Here is how I approached the weirdness with exceptions tangled with pmap.

On the surface, map and pmap appear interchangeable, since they both return a lazy sequence. But the data contract breaks due to how exceptions are handled.

The following example showcases the behavior that caught me by surprise, because I had expected it to return {:error-code 42}:

(try
    (->> (range 1)
         (pmap (fn [_] (throw (ex-info "Oh noes" {:error-code 42}))))
         doall)
    (catch Exception e
      (ex-data e)))
; => nil

It did not. But using a normal map does:

(try
    (->> (range 1)
         (map (fn [_] (throw (ex-info "Oh noes" {:error-code 42}))))
         doall)
    (catch Exception e
      (ex-data e)))
; => {:error-code 42}

doall is necessary to ensure the exception is triggered while inside the try-catch block, instead of just returning an (unrealized) lazy sequence, which will cause havoc later.

As far as I know, pmap uses futures somewhere behind the scenes, which might be the reason why exceptions caused during mapping are wrapped in a java.util.concurrent.ExecutionException.
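
This is easy to confirm by catching the wrapper directly and unwrapping it with ex-cause:

```clojure
;; Catch the ExecutionException that pmap rethrows and pull the
;; original ex-info back out via its cause.
(try
  (doall (pmap (fn [_] (throw (ex-info "Oh noes" {:error-code 42})))
               (range 1)))
  (catch java.util.concurrent.ExecutionException e
    (ex-data (ex-cause e))))
;; => {:error-code 42}
```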

Since I am in control of the function replacing map with pmap, I decided to put the unwrapping where pmap is called, to hide the implementation detail from the caller:

(try
  (->> coll
       (pmap #(occasionally-throw-exception %))
       doall) ; realize lazy seq to trigger exceptions
  (catch Exception e
    ; Unwrap exception potentially wrapped by `pmap`
    (throw (if (instance? java.util.concurrent.ExecutionException e)
             (ex-cause e)
             e))))

The conditional unwrapping allows for a slightly more complex implementation in the try block that can throw exceptions outside pmap as well.

The above implementation assumes that an ExecutionException always has a cause, which might not be the case - I don’t know.
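
A defensive variant can fall back to the wrapper itself when no cause is present. A sketch (unwrap-execution-exception is my name for it, not an existing function):

```clojure
(defn unwrap-execution-exception
  "Return the cause of an ExecutionException, the wrapper itself when it
   has no cause, or the exception unchanged when it is not a wrapper."
  [^Throwable e]
  (if (instance? java.util.concurrent.ExecutionException e)
    (or (ex-cause e) e)
    e))

(ex-data
 (unwrap-execution-exception
  (java.util.concurrent.ExecutionException.
   (ex-info "Oh noes" {:error-code 42}))))
;; => {:error-code 42}
```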

Use with caution.


How was my experience at Lambda Days 2025

A Brazilian Portuguese version of this article is available here.

What is Lambda Days?

Lambda Days is an international conference dedicated to functional programming languages held every year in Kraków (Poland). The event brings together researchers, developers, and enthusiasts from communities such as Erlang, Elixir, Scala, F#, Clojure, Haskell, Elm, and many others, creating a space for exchange between academia and industry, and lasts for 2 days.

In addition to technical talks, Lambda Days also covers topics such as data science, distributed systems, artificial intelligence, and good development practices. The atmosphere is very vibrant, bringing together participants from different countries in one of the most beautiful and historic cities in Europe.

What motivated me to go this year?

I had 10 days of vacation to take and no particular destination in mind. All the places I thought about visiting seemed to have great potential for a nice trip, but indecision took over. Until I remembered the feeling I had when I saw that Evan Czaplicki was going to give a talk at this conference!

For those who don’t know him, Evan is the creator of the Elm programming language and probably my favorite speaker! I am a great admirer of his technical abilities, but I am also equally impressed by the philosophical ideas he often includes in his speeches.

I had always wanted to attend a talk by Evan in person, and in recent years he hasn’t made many public appearances, being more focused on developing his newest creation: Acadia. That’s why, when I saw that he would be there, I got very excited.

Another key factor in my decision was that, although Poland was not exactly among the top places I wanted to visit in the world, it seemed to be a very interesting and beautiful country, with very distinct cultural, gastronomic, and tourist options. And it is also not one of the most expensive European countries to visit.

What is the city like?

Lambda Days takes place every year in the same city, Kraków. I liked the choice for several reasons. The first, as I already mentioned, is the cost. Although Poland has been a member of the European Union since 2004, it has never adopted the Euro, and its official currency is the złoty (PLN). This can make some things slightly inconvenient, but I found everything quite affordable, especially when compared to richer European countries. The price of the hotel, food, public transportation, and day-to-day expenses were not much higher than what I would find on a trip within Brazil.

The flight, departing from Brazil, is also not excessively expensive, especially considering it’s an 11-hour trip to Frankfurt and then another hour to Kraków (I prefer not to include how much I paid because the price would become outdated too quickly).

Kraków is a beautiful city! And since the event takes place in the summer, the sun rises very early and sets very late (photo taken at 9:45 PM).

Photo of the city at night, with the sky still bright.

I felt very safe the whole time. Since the city is very flat, it’s perfect for walking or cycling. And, of course, there are countless trams that run throughout the entire city! Through an app on your phone (or a machine at the stops) you can buy your ticket. I opted to use the app and buy a time-based ticket. This way, I could move around the city without worrying. You just validate it once (by typing in the tram car number, which is printed inside near the doors) and that’s it! They say there is strict ticket inspection (in that case, you just show the validation in the app, otherwise you’ll be fined!), but in practice, I didn’t see any checks. I also found the price very reasonable (especially compared to what I paid in Norway… I almost fell backwards when I saw the public transport prices in Oslo!).

Tram

Apparently, it’s quite easy to get from the airport to the city using regular trains. But I chose to use Uber. I was super tired from the trip and felt safer choosing this option.

Another highlight is that Kraków has some very unique tourist attractions. When I travel, I usually prefer walking around the city instead of doing very touristy tours. But this time I decided to visit Auschwitz I and Auschwitz II (Birkenau). It’s hard to describe the feeling of being in a place where literally thousands of people (mostly Jews) died. I still haven’t fully processed it. It’s not the kind of place everyone will want to visit, but if you’re interested and emotionally prepared, I’d recommend going.

Photo from inside Auschwitz.

I also visited the famous Wieliczka Salt Mine. It’s a beautiful place, but it probably would have been more interesting to visit other parts of the city, such as a forest park a bit farther away.

Inside the mine.

Another very famous place (and one I regret not visiting) is Schindler’s Factory, which shows the efforts Schindler made to save 1,200 Jews from the Holocaust (those who watched Schindler’s List may know part of this story).

I chose to stay only in Kraków, but it seems quite easy to travel to other Polish cities by train. If I could go back in time, I would’ve planned the trip better and spent a few days in Warsaw, the country’s capital.

The only thing I really disliked about the city was the invasion of “vapes” (electronic cigarettes). Although it’s relatively common to see people using these devices here in Brazil (even though they’re prohibited), in Kraków it seemed like every young adult on the street was carrying one. A real epidemic! And they were sold in many places across the city. It’s a sad sight, considering the well-known health issues associated with these products.

Level of the talks

There were 3 tracks, with talks happening in parallel. Some were very easy to follow, such as the discussion panel Learning How to Learn, where four women (including a Brazilian!) shared their journeys learning, teaching, and growing in their Functional Programming careers.

Four women talking.

Other talks were more philosophical, such as Evan Czaplicki’s keynote, titled Rethinking our Adoption Strategy. In it, he talked about what he calls a Platform Language and what differentiates it from a Productivity Language, and what we — people not directly related to the business world — could do to make Platform Languages more appealing than Productivity Languages.

Evan's talk.

There were also more abstract talks, such as Moa Johansson’s keynote: AI for Mathematical Discovery: Symbolic, Neural and Neuro-Symbolic Methods. I must confess, the level of this (and some others) was far above what I could follow. I still managed to learn some things, but a lot of it I couldn’t fully understand. Another example was the keynote by Martin Odersky, creator of the Scala programming language, titled Making Capabilities Safe and Convenient. I understood the basics, but after a certain point, I got lost and couldn’t follow the proposals anymore.

Since there were 3 rooms/tracks happening concurrently, you can choose whichever talks seem most interesting and useful to you (except keynotes — those happen in the main room with no parallel sessions). A mistake I made was not researching the speakers beforehand. I could’ve made better choices and enjoyed more of the content.

Good food!

The event included breakfast, two coffee breaks (morning and afternoon), and lunch. That was great because I didn't have to worry about meals, and it also gave everyone more opportunities to socialize, chat, and make (or strengthen) friendships. I found everything very organized and tasty.

Party / Happy hour

At the end of the first day, there was a gathering after the event, at another venue. To enter, you just needed to show your event badge.

I don’t have many details about how it went, because unfortunately I didn’t attend. I was very tired after a long day of talks and went back to my hotel. But I regret not taking the opportunity! I was especially regretful when I heard, from some Brazilian attendees I met, that José Valim (creator of Elixir) was at the party! And they were able to talk to him! He didn’t give a talk at the event and I didn’t see him at any point, so I was surprised to learn he was there (and I missed the chance to meet him…).

First trip with so much technology at my disposal

I’ve had the privilege of taking several international trips over the past decades, from Argentina, Chile, and Uruguay, to Norway, Sweden, and the Netherlands, to Costa Rica, the United States, Cuba… but this trip to Poland was different.

When I visited Cuba in 2009, making a phone call to Brazil was very expensive and restricted (I made a few calls from the hotel’s landline). Zero internet or cell phone access. On my trip to the Netherlands in 2012, I only had internet when I found Wi-Fi (using my iPhone 3GS!). Most of the time I navigated the city using a physical map — no GPS.

On this trip to Poland, for the first time I had several technologies available that made my stay MUCH easier!

Unlimited internet

Before traveling, I bought an eSIM (a virtual SIM card — still not very popular in Brazil) through the Holafly app. The process was super simple and in a short time I had a second SIM installed on an iPhone 14. As soon as the plane stopped for my layover in Germany, at Frankfurt International Airport, my chip started working and I already had unlimited internet! WhatsApp working normally and everything else I’m used to using in Brazil. For the first time, I arrived in another country already with internet active on my phone!

Uber

Uber also operates in Poland and I felt safer and more comfortable using the service to get to my hotel. Poland’s train network is very good and I’m sure I could’ve arrived using trains alone, since the hotel was near a station — something I should’ve checked before getting there, but I didn’t! Poor planning there. In any case, the Uber dropped me off right at the hotel door. It was the first time I didn’t have to worry about anything to arrive safely at my first stop on an international trip.

I also used Uber to return to the airport. The rest of the time, I used only public transportation. But knowing that I had that option at any time made me feel more at ease to explore the city freely.

Google Maps / GPS

Google Maps works extremely well in Kraków and helped me a lot getting around the city using trams. It told me exactly when the tram would arrive at the stop, showed me how many stops were left, and alerted me at the correct moment to get off. It had never been so easy to use public transport in an unfamiliar city!

Of course, even so, I got lost a few times haha. In the rush, I would hop on a tram going in the wrong direction. But it was easy to notice by looking at the map, and since I bought time-based tickets, I just got off and took the correct tram.

Chat GPT

Another novelty was traveling for the first time with the possibility of using more recent Generative AI technologies like Chat GPT. I asked questions throughout the trip to understand how things worked in the city, and it helped me find stores and some nice places to visit.

Something I tried a few times was taking a screenshot of Google Maps with my current location and sending it to Chat GPT, asking for tips on things to visit nearby or stores where I could buy certain items I was looking for. The result was very cool!

If it weren’t for Chat GPT, I wouldn’t have visited, for example, the Kościuszko Mound. I was right next to it but it wasn’t on my list of places to visit. I followed the process I mentioned above, and it recommended going there — and I really liked it!

Opportunities for incredible conversations

As I said in the beginning, my biggest motivation for going to this event was to attend Evan Czaplicki’s talk and, of course, I had expectations that he would talk a bit about his new project, which very few people have had access to. When I saw the title of the talk, I already imagined this wouldn’t be the focus — and indeed it wasn’t. But what about behind the scenes?

During the two days of the event, he was talking with attendees in the hall where the coffee break took place. On the first day I tried to approach him to ask some questions, but my shyness won and I went back to the hotel feeling defeated! There, I told myself: you came all the way here because of this! If he’s there tomorrow, you will talk to him!!

And to my surprise, the next day he was there again, talking to people. Once again I felt embarrassed, but gathered the courage and approached. I didn’t know how to start, so I just greeted him and the person he was talking with using a nod and waited my turn while, for some reason, they were discussing Japanese culture.

At some point, he turned to me and gestured to start a conversation. I introduced myself, thanked him for the talk, praised his work as a programmer but also as a speaker and philosopher, and told him I was very inspired by his talks. We exchanged a few words and I soon asked about his “secret” backend project. I asked some general question about what it was about, and he paused for a few seconds. I got worried, honestly. Would he give a dry response? Was I being too invasive? Who was I to ask about a project he clearly doesn’t want to share widely yet??

But after a few seconds, he said: do you want to see it? I have my development notebook here — if you have a few minutes, I can give you a demo.

I accepted immediately! He asked me to wait a moment so he could call two more people who were interested in seeing the demo. Soon the four of us gathered around a high table in the hall, where he opened his notebook and began presenting the project. After some time, the other two had to leave, and I was also leaving with them when he said: I have one more demo, don’t you want to see it? And once again, I enthusiastically said yes!

At that moment, he presented the demo only to me. I asked a few questions and after 40 minutes of conversation, I thanked him and said goodbye. Then he asked for my opinion about the product, whether I would use it. And he also asked for my opinion about the demo — whether it was good, and if the proposal of the project was clear.

There I was, in Poland, giving my personal feedback to one of my idols! What an indescribable experience!! Those 40 minutes alone made the whole trip to Poland worthwhile!

Was it worth it?

As you can imagine — yes, it was totally worth it!

Overall, it wasn’t a cheap trip. Far from it! There were the flight, accommodation, the event ticket… all paid with my own money, without any company support. Because of that, it’s hard to recommend a trip like this without some hesitation.

I loved it. I had fun, I learned, I discovered a new country, I made new friends… And maybe you will meet incredible people and get the job of your dreams? Maybe. But probably not. If that’s your only goal, I recommend searching online instead — in communities, social networks, writing articles or software…

So if you’re thinking about doing something similar, do it for pleasure. For fun. Only then is it truly worth it!

And you?? What was the last conference you attended? How was it? What motivated you to go? Tell me in the comments!


November 2025 Short-Term Q3 Project Updates

This is the second project update for three of our Q3 2025 Funded Projects. (Reports for the others are on a different schedule). A brief summary of each project is included to provide overall context. Thanks everyone for your awesome work!

Jank: Jeaye Wilkerson
This quarter, I’ll be building packages for Ubuntu, Arch, Homebrew, and Nix. I’ll be minimizing jank’s dependencies, automating builds, filling in test suites for module loading, AOT building, and the Clojure runtime. I’ll be working to get the final Clang and LLVM changes I have upstreamed into LLVM 22, adding a health check to jank to diagnose installation issues, and filling in some C++ interop functionality I couldn’t get to last quarter. Altogether, this quarter is going to be a hodgepodge of all of the various tasks needed to get jank shipped.

Malli: Ambrose Bonnaire-Sergeant
Malli’s core algorithms for transforming schemas into things like validators and generators are quite sensitive to the input schema. Even seemingly equivalent schemas can have different performance characteristics and different generators.
I would like to create a schema analyzer that can simplify complex schemas, such that two different but equivalent schemas could have the same representation. Using this analyzer, we can build more efficient validators, more reliable generators, more helpful error messages, more succinct translations to other specification tools, and beyond.

Uncomplicate Clojure ML: Dragan Duric
My goal with this funding in Q3 2025 is to develop a new Uncomplicate library, ClojureML.

  • A Clojure developer-friendly API for AI/DL/ML models (in the first iteration based on ONNX Runtime, but later refined to be even more general).
  • Its first backend engine (based on ONNX Runtime).
  • Relevant operations exposed as Clojure functions.
  • An extension infrastructure for various future backend implementations.
  • A clean low-level integration with Uncomplicate, Neanderthal, Deep Diamond, and Clojure abstractions.
  • Assorted improvements to Uncomplicate, Neanderthal, and Deep Diamond to support these additions.
  • Examples to help people get started.
  • Related bugfixes.
  • TESTS (of course!).

AND NOW FOR THE REPORTS!

Jank: Jeaye Wilkerson

Q3 2025 $9K, Report No. 2, Published October 31, 2025

Thank you so much for the sponsorship this quarter! This past month of jank development has been focused on the stability of all existing features in preparation for the alpha release in December. Of note, I’ve done the following:

  • Stabilized binary builds for macOS, Ubuntu, Arch, and Nix
  • Improved loop IR to support native values without boxing
  • Merged all pending LLVM changes upstream, so jank can build with LLVM 22 (releasing in January 2026)
  • The lein-jank plugin and template have been improved so that starting a new jank project is now just a couple of commands
  • libzip has been removed as a dependency, which enables Ubuntu 25.04 support
  • AOT building on all supported platforms has been stabilized
  • Added a daily scheduled CI binary package check for macOS, Ubuntu, and Arch
  • Added runtime type validation for cpp/unbox usages
  • Added C preprocessor value support, so cpp/FOO can refer to any #define
  • Improved IR gen and static type checking for if so that both branches must provide the same type

Normally I have an associated blog post, but as I’ll be leaving for Clojure Conj 2025 next week, I’m taking the time to polish my talk.


Malli: Ambrose Bonnaire-Sergeant

Q3 2025 $0K, Report No. 2, Published November 11, 2025

This month I fixed a bug in the work I merged last month: Robust :and parser, add :andn

The bugfix got me thinking about recursive refs again (following on from my work on recursive generators in malli.generator from a few years ago), which is a major part of the performance work I proposed for this Clojurists Together project (compiling efficient validators for recursive schemas).

I realized that we can use the cycle detection logic from malli.generator to detect cycles in validators! A major performance difference between Plumatic Schema/Spec and Malli is in recursive validators: Malli never “ties the knot” and will lazily compile and cache every layer of recursion. This means Malli uses linear space for recursive validators, while Schema/Spec use constant space, and Malli can never fully compile a recursive validator and must pay a time cost for compiling schemas at “runtime” (e.g., when your webserver is accepting requests).

I submitted a Malli “discussion” PR Sketch: tying the knot for ref validator proposing an approach to fixing this performance issue for recursive validators. See the PR if you’re interested in the exact details, as it contains a full explanation of the problem and (mostly) the solution, as well as a reference implementation that passes all the tests.

A neat part of this approach is that it is a completely transparent optimization from a user’s perspective, with no extra configuration or changes needed. Similar optimizations may be possible for other operations like m/explain and m/parse, but that’s future work.

For now, the maintainers showed interest in accepting this optimization and my next step will be to further test and validate the approach, and propose a final PR.


Uncomplicate Clojure ML: Dragan Duric

Q3 2025 $9K, Report No. 2, Published October 31, 2025

My goal with this funding in Q3 2025 is to develop a new Uncomplicate library, ClojureML (see above).

Progress during the second month:

In the first month, I have already implemented the first version, which I released to Clojars as org.uncomplicate/diamond-onnxrt version 0.5.0.

So, what has been done since then, in the second month?

We are at version 0.17.0 now!

As I had already established solid foundations in the first month, in the second month I focused on expanding the coverage of ONNX Runtime’s C API in Clojure, writing tests to get a feel for how it works, hiding the difficult parts under a nicer Clojure API, and fixing bugs. It wasn’t always smooth sailing, but with every storm I gained a better understanding of how things work in ONNX Runtime. I also covered most of the API available in ONNX Runtime version 1.22.2, at least the part relevant to Deep Diamond right now (which excludes sparse tensors and string tensors). The majority is covered and well tested.

The second area was using the newly added core API in higher-level integration with Deep Diamond. The result is that the public onnx function can now be integrated with the rest of DD Tensor machinery! And the public API that the user has to see is only one function, while everything related to ONNX is automatically wired under the hood! I am quite pleased with what I achieved on that front. Of course, I also implemented custom configurations that the user can provide as a Clojure map, and the underlying machinery will call the right onnx runtime function to set everything up.

The third big area is CUDA support. Since the whole setup up to this point was quite elegant (I’m sorry for having to praise my own work :), especially with lots of supporting functionality in Deep Diamond and Neanderthal, the Clojure code for this in diamond-onnx was not too demanding. However, the upstream library, namely ONNX Runtime’s backend for CUDA, and the build provided by javacpp-presets, was quite difficult to tame. I had to run many multi-hour C++ compilations, wade through C++ compilation errors, dig to unearth causes, find ways to fix them, and help upstream, and this took lots and lots of time. But that’s what had to be done. That’s why I’ll have to wait to upgrade to the newest ONNX Runtime 1.23.2, but I managed to tame version 1.22.2.

During all this, I did numerous refinements to the codebase, so future library improvements will be easier. This is not a one-shot, I hope that this library will be a staple in the Clojure AI/ML community for the years to come, and that I’ll be able to further expand it and improve it as the requirements expand.

I probably forgot to mention some of the things I worked on, too, but I hope I’ve covered the most important ones.


I am sorry, but everyone is getting syntax highlighting wrong

Translations: Russian

Syntax highlighting is a tool. It can help you read code faster. Find things quicker. Orient yourself in a large file.

Like any tool, it can be used correctly or incorrectly. Let’s see how to use syntax highlighting to help you work.

Christmas Lights Diarrhea

Most color themes have a unique bright color for literally everything: one for variables, another for language keywords, constants, punctuation, functions, classes, calls, comments, etc.

Sometimes it gets so bad one can’t see the base text color: everything is highlighted. What’s the base text color here?

The problem with that is, if everything is highlighted, nothing stands out. Your eye adapts and considers it a new norm: everything is bright and shiny, and instead of getting separated, it all blends together.

Here’s a quick test. Try to find the function definition here:

and here:

See what I mean?

So yeah, unfortunately, you can’t just highlight everything. You have to make decisions: what is more important, what is less. What should stand out, what shouldn’t.

Highlighting everything is like assigning “top priority” to every task in Linear. It only works if most of the tasks have lesser priorities.

If everything is highlighted, nothing is highlighted.

Enough colors to remember

There are two main use-cases you want your color theme to address:

  1. Look at something and tell what it is by its color (you can tell by reading text, yes, but why do you need syntax highlighting then?)
  2. Search for something. You want to know what to look for (which color).

1 is a direct index lookup: color → type of thing.

2 is a reverse lookup: type of thing → color.

Truth is, most people don’t do these lookups at all. They might think they do, but in reality, they don’t.

Let me illustrate. Before:

After:

Can you see it? I misspelled return as retunr and its color switched from red to purple.

I can’t.

Here’s another test. Close your eyes (not yet! Finish this sentence first) and try to remember what color your color theme uses for class names?

Can you?

If the answer for both questions is “no”, then your color theme is not functional. It might give you comfort (as in—I feel safe. If it’s highlighted, it’s probably code) but you can’t use it as a tool. It doesn’t help you.

What’s the solution? Have an absolute minimum of colors. So little that they all fit in your head at once. For example, my color theme, Alabaster, only uses four:

  • Green for strings
  • Purple for constants
  • Yellow for comments
  • Light blue for top-level definitions

That’s it! And I was able to type it all from memory, too. This minimalism allows me to actually do lookups: if I’m looking for a string, I know it will be green. If I’m looking at something yellow, I know it’s a comment.

Limit the number of different colors to what you can remember.

If you swap green and purple in my editor, it’ll be a catastrophe. If somebody swapped colors in yours, would you even notice?

What should you highlight?

Something there isn’t a lot of. Remember—we want highlights to stand out. That’s why I don’t highlight variables or function calls—they are everywhere, your code is probably 75% variable names and function calls.

I do highlight constants (numbers, strings). These are usually used more sparingly and often are reference points—a lot of logic paths start from constants.

Top-level definitions are another good idea. They give you an idea of a structure quickly.

Punctuation: it helps to separate names from syntax a little bit, and you care about names first, especially when quickly scanning code.

Please, please don’t highlight language keywords: class, function, if, else, stuff like this. You rarely look for them: “where’s that if” is a valid question, but you will be looking not at the if keyword itself, but at the condition after it. The condition is the important, distinguishing part. The keyword is not.

Highlight names and constants. Grey out punctuation. Don’t highlight language keywords.

Comments are important

The tradition of using grey for comments comes from the times when people were paid by line. If you have something like

of course you would want to grey it out! This is bullshit text that doesn’t add anything and was written to be ignored.

But for good comments, the situation is opposite. Good comments ADD to the code. They explain something that couldn’t be expressed directly. They are important.

So here’s another controversial idea:

Comments should be highlighted, not hidden away.

Use bold colors, draw attention to them. Don’t shy away. If somebody took the time to tell you something, then you want to read it.

Two types of comments

Another secret nobody is talking about is that there are two types of comments:

  1. Explanations
  2. Disabled code

Most languages don’t distinguish between those, so there’s not much you can do syntax-wise. Sometimes there’s a convention (e.g. -- vs /* */ in SQL), then use it!

Here’s a real example from Clojure codebase that makes perfect use of two types of comments:

Disabled code is gray, explanation is bright yellow

Light or dark?

Per statistics, 70% of developers prefer dark themes. Being in the other 30%, that question always puzzled me. Why?

And I think I have an answer. Here’s a typical dark theme:

and here’s a light one:

On the latter one, colors are way less vibrant. Here, I picked them out for you:

Notice how many colors there are. No one can remember that many.

This is because dark colors are in general less distinguishable and more muddy. Look at Hue scale as we move brightness down:

Basically, in the dark part of the spectrum, you just get fewer colors to play with. There’s no “dark yellow” or good-looking “dark teal”.

Nothing can be done here. There are no magic colors hiding somewhere that have both good contrast on a white background and look good at the same time. By choosing a light theme, you are dooming yourself to a very limited, bad-looking, barely distinguishable set of dark colors.

So it makes sense. Dark themes do look better. Or rather: light ones can’t look good. Science ¯\_(ツ)_/¯

But!

But.

There is one trick you can do, that I don’t see a lot of. Use background colors! Compare:

The first one has nice colors, but the contrast is too low: letters become hard to read.

The second one has good contrast, but you can barely see colors.

The last one has both: high contrast and clean, vibrant colors. Lighter colors are readable even on a white background since they fill a lot more area. Text is the same brightness as in the second example, yet it gives the impression of clearer color. It’s all upside, really.

UI designers have known about this trick for a while, but I rarely see it applied in code editors:

If your editor supports choosing background color, give it a try. It might open light themes for you.

Bold and italics

Don’t use. This goes into the same category as too many colors. It’s just another way to highlight something, and you don’t need too many, because you can’t highlight everything.

In theory, you might try to replace colors with typography. Would that work? I don’t know. I haven’t seen any examples.

Using italics and bold instead of colors

Myth of number-based perfection

Some themes pay too much attention to be scientifically uniform. Like, all colors have the same exact lightness, and hues are distributed evenly on a circle.

This could be nice (to know if you have OCD), but in practice, it doesn’t work as well as it sounds:

OkLab l=0.7473 c=0.1253 h=0, 45, 90, 135, 180, 225, 270, 315

The idea of highlighting is to make things stand out. If you make all colors the same lightness and chroma, they will look very similar to each other, and it’ll be hard to tell them apart.

Our eyes are way more sensitive to differences in lightness than in color, and we should use it, not try to negate it.

Let’s design a color theme together

Let’s apply these principles step by step and see where it leads us. We start with the theme from the start of this post:

First, let’s remove highlighting from language keywords and re-introduce base text color:

Next, we remove color from variable usage:

and from function/method invocation:

The thinking is that your code is mostly references to variables and method invocation. If we highlight those, we’ll have to highlight more than 75% of your code.

Notice that we’ve kept variable declarations. These are not as ubiquitous and help you quickly answer a common question: where does this thing come from?

Next, let’s tone down punctuation:

I prefer to dim it a little bit because it helps names stand out more. Names alone can give you the general idea of what’s going on, and the exact configuration of brackets is rarely equally important.

But you might roll with base color punctuation, too:

Okay, getting close. Let’s highlight comments:

We don’t use red here because you usually need it for squiggly lines and errors.

This is still one color too many, so I unify numbers and strings to both use green:

Finally, let’s rotate colors a bit. We want to respect nesting logic, so function declarations should be brighter (yellow) than variable declarations (blue).

Compare with what we started:

In my opinion, we got a much more workable color theme: it’s easier on the eyes and helps you find stuff faster.

Shameless plug time

I’ve been applying these principles for about 8 years now.

I call this theme Alabaster and I’ve built it a couple of times for the editors I used:

It’s also been ported to many other editors and terminals; the most complete list is probably here. If your editor is not on the list, try searching for it by name—it might be built-in already! I always wondered where these color themes come from, and now I’ve become the author of one (and I still don’t know).

Feel free to use Alabaster as is or build your own theme using the principles outlined in the article—either is fine by me.

As for the principles themselves, they worked out fantastically for me. I’ve never wanted to go back, and just one look at any “traditional” color theme gives me a scare now.

I suspect that the only reason we don’t see more restrained color themes is that people never really thought about it. Well, this is your wake-up call. I hope this will inspire people to use color more deliberately and to change the default way we build and use color themes.

Permalink

Explaining Financial Models by Deleting Transactions

Author: Denis Reis

How can you trust a model to support financial decisions if you can’t understand its decisions? For us, this wasn’t just an academic question. It was a core requirement for security, debugging, and responsible deployment. Here at Nubank, we are leveraging transformer-based architectures, which have enabled foundation models that automatically discover general features and learn representations directly from raw transaction data. By processing sequences of transactions (often by converting them into tokens, much like a natural language model would), these systems can efficiently summarize complex financial behavior.

In this post, we explore the explainability of such models, that is, the ability to examine how a single transaction, and its individual properties, influences the final prediction produced by a model. In this new scenario, where we have a transformer that processes a sequence of transactions, standard tools come with some limitations and requirements that make them difficult to maintain in a rapidly evolving environment like ours. However, we adopt a simpler approach, Leave One Transaction Out (LOTO), which can provide a good assessment of the impact of transactions (and their properties) on the final model prediction, while being easily compatible with virtually any model architecture.

Why Explainability is Crucial for Transaction Foundation Models

Understanding how the input influences the output of these models is paramount for responsible deployment, continuous improvement, and, critically, for monitoring and preventing exploitation.

  • Monitoring Model Behavior and Drift: Explainability allows us to track changes in the relative importance of different transaction types over time. Such changes can signal a shift in how customers interact with their finances, which may require updating our models and decision rules.
  • Debugging and Gaining Insights: It’s essential to understand how the different parts of the input contribute to a model’s output, both for individual predictions (local explainability) and across the entire dataset (global explainability). This insight is invaluable for debugging model anomalies, deepening our understanding of the financial behaviors, and guiding “feature selection” by highlighting which transaction types are most relevant.
  • Exploitability Monitoring and Prevention: This is a core concern. We define exploitability as a model vulnerability that malicious users could leverage to manipulate behavioral assessments, potentially leading to unwarranted benefits or other undesirable outcomes. In the case of these models, a vulnerability could be a transaction that is easily produced by a malicious actor and, when present, makes the model produce an output that leads to an unreasonably more favorable outcome for the actor. By monitoring changes in customer behavior, we can detect active exploits, and by examining how relevant transactions are to the model’s prediction before deploying the model, we can detect potential vulnerabilities.

How to Explain?

When addressing model explainability, a standard approach in both literature and industry is SHAP [1], a powerful framework for local explainability that has several beneficial properties for our use case.

For each data point, SHAP assigns a value, which we’ll henceforth refer to as importance, to each component of the input that represents how much that component is pushing the model’s prediction away from a baseline prediction (usually, the average prediction of a dataset that is provided as “background data”).  For example, if the SHAP value of an attribute is large for a particular data point, that feature is pushing the model’s prediction of that data point upwards. We can move from these local explainability assessments to a global one by aggregating the SHAP values, such as the average absolute value, across several data points.
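To make that aggregation concrete, here is a minimal generic sketch (an illustration of the standard mean-absolute-SHAP recipe, not Nubank's actual code) of turning per-example SHAP values into one global importance score per feature:

```python
def global_importance(shap_values):
    """Aggregate local SHAP values into global feature importances.

    shap_values: a list of rows, one per data point; each row holds one
    SHAP value per feature. The global importance of a feature is the
    mean absolute SHAP value across all data points.
    """
    n_rows = len(shap_values)
    n_cols = len(shap_values[0])
    return [sum(abs(row[j]) for row in shap_values) / n_rows
            for j in range(n_cols)]

# Two data points, two features: both features average to importance 2.0.
print(global_importance([[1, -2], [-3, 2]]))  # → [2.0, 2.0]
```

Taking the absolute value before averaging prevents positive and negative contributions from cancelling out, at the cost of discarding directionality, a trade-off the post returns to below.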

In our context, our input is a sequence of transactions, which transformers represent as a sequence of large embeddings. In this scenario, when assessing the relevance of transactions as a whole on the model’s prediction, the SHAP values of individual embedding dimensions aren’t meaningful on their own. Fortunately, in such cases, SHAP can group these smaller parts into transaction inputs and assign a single SHAP value to each transaction as a whole.

Another very positive aspect of SHAP is that it is model architecture-agnostic: we can compute SHAP values regardless of how the model operates internally. This property is especially relevant for us, since we are exploring several different architectures, including hybrid architectures that combine both neural networks and tree-based models.

Although SHAP satisfies our explainability requirements, unfortunately, it is computationally prohibitive for deep neural network use cases like ours. This has led to the exploration of more efficient, gradient-based approximations, such as Integrated Gradients [2] or  Layer-Wise Relevance Propagation (LRP) [3]. However, while more efficient, we lose some of the properties of SHAP that are valuable to us, which makes adopting these other approaches challenging.

The first challenge is that, unlike SHAP, these methods cannot group components together to obtain a single importance for the entire transaction; we would need to aggregate these individual values in some way. We investigated different aggregation schemes for gradient-based attributions, and the ones we considered to be the best tended to use absolute values. The problem, in this case, is that they end up disregarding directionality (i.e., whether a transaction pushed the prediction up or down).

Moreover, in our particular case, we couldn’t easily separate the importance that a transaction has due to its attributes (value, weekday, etc) from its position in the sequence: the model seemed to assign high importance to the first and last transactions in a sequence, regardless of their other characteristics. We hypothesize this is likely due to how the model “understands”  the sequence, that is, the structure of the input data. This bias is visualized in the figure below, where importance scores are heavily concentrated at both ends of the transaction sequence. Disclaimer: this figure, as well as the other figures displayed in this post, is based on synthesized data that illustrates the behavior observed on our real data.

A last challenge with these gradient-based methods is that they are not architecture-agnostic, imposing specific requirements on the types of models they can work with. First, the models need to be differentiable and provide access to their internal gradients, which was incompatible with some architectures we were examining. Second, some particular methods impose even more restrictive constraints regarding architectures or require intricate, domain-specific configurations. For instance, Layer-Wise Relevance Propagation (LRP) [3] demands specific propagation rules for each type of layer in the model. Even a more general method like Integrated Gradients (IG) introduces a critical implementation challenge: the non-trivial task of selecting a domain-specific baseline input vector that represents a “neutral” input. Keeping these restrictive, model-specific explainability tools in sync with our rapid pace of development was becoming too costly.

Lastly, another architecture-agnostic alternative we considered was LIME (Local Interpretable Model-agnostic Explanations) [4]. However, LIME introduces its own complexities: it requires generating a new dataset of perturbations and fitting a surrogate model for every individual explanation. This adds computational overhead and a layer of approximation, and requires defining a non-trivial perturbation strategy for our transactional data. These challenges led us to a simpler, more direct solution, Leave One Transaction Out (LOTO), which still fully satisfies our needs. It retains architecture-agnosticism, provides meaningful, directional values that tie the impact of transactions to predictions, and offers explainability at both local and global levels.

Leave One Transaction Out (LOTO)

A consistent characteristic across all the model architectures we examined is their ability to handle arbitrary sequences of transactions of varying length. This ability provides a very simple way to measure a transaction’s true contextual marginal impact: we just remove it and see what happens. We call this Leave One Transaction Out (LOTO).

The method is straightforward: remove each transaction from a customer’s sequence one at a time and measure the difference between the new prediction and the original one. This directly reveals the impact of that specific transaction in that specific fixed context. It asks the simple question, “What was the direct contribution of this particular transaction’s presence to the prediction, considering the presence of all other transactions (the context)?” We can refer to this contribution as a LOTO value.
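The loop above can be sketched in a few lines, assuming a hypothetical `model` object whose `predict` accepts a variable-length transaction sequence (the actual Nubank models and APIs are not shown in the post):

```python
def loto_values(model, transactions):
    """Leave One Transaction Out: for each transaction, the change in the
    model's prediction when that single transaction is removed from the
    sequence, with everything else (the context) held fixed."""
    baseline = model.predict(transactions)
    values = []
    for i in range(len(transactions)):
        variant = transactions[:i] + transactions[i + 1:]  # drop transaction i
        values.append(model.predict(variant) - baseline)
    return values
```

A negative LOTO value means the transaction was pushing the prediction up (removing it lowered the output), and vice versa, so directionality is preserved.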

It’s important to clarify what “removing” means in this context. This is not like traditional feature ablation, which would be analogous to altering the feature space (e.g., modeling P(y | a=i, b=j) instead of the original P(y | a=i, b=j, c=k)).

Since our models are designed to process variable-length sets or sequences of transactions, “removing” one is simply a change in the input data for a single prediction, while the model’s architecture remains fixed. Conceptually, it’s like the difference between assessing P(y | … transaction_c=PRESENT) and P(y | … transaction_c=ABSENT). We are just feeding the model a sequence with one less element to see how its output changes.

The main drawback is that LOTO only tests transactions one by one, so it misses interaction effects. For example, a payment to a utility company might be unimportant on its own, but combined with a large, unexpected cash withdrawal, the two together might have a big impact; LOTO cannot capture that combined effect. We accepted this trade-off for the vast gains in simplicity and computational efficiency, reserving more complex interaction analyses for future work.

We apply LOTO in two distinct modes: a global analysis to understand general trends by randomly choosing and removing one transaction from each customer in a large set, and a local analysis for deep dive debugging by removing every transaction, one at a time, from a single customer’s sequence. In either case, each removed transaction creates a new variant of the original input that the model must process. In the former case, since we remove only one transaction from each data point, the total number of variants is the same as the number of data points in the dataset. With hundreds of thousands to millions of these data points, we can build robust aggregations or even train simple meta-models (like a decision tree) on the LOTO results to automatically discover rules like, “Transactions of type X over amount Y consistently have a negative impact”. 
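The global mode described above might be sketched as follows (again with a hypothetical `model.predict`; the sampling details are illustrative assumptions, not the production pipeline):

```python
import random

def global_loto_sample(model, customers, seed=0):
    """Global LOTO analysis: for each customer's transaction sequence,
    drop one randomly chosen transaction and record the pair
    (removed transaction, prediction delta) -- one variant per customer."""
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    results = []
    for transactions in customers:
        if not transactions:
            continue  # nothing to remove for customers with no transactions
        i = rng.randrange(len(transactions))
        baseline = model.predict(transactions)
        variant = transactions[:i] + transactions[i + 1:]
        results.append((transactions[i], model.predict(variant) - baseline))
    return results
```

Each customer contributes exactly one variant, so the number of extra model calls scales with the dataset size rather than with sequence length, which is what makes the global mode cheap enough to run over millions of data points.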

For example, as illustrated in the figure below, we could identify that the type of transaction has a clear bias on the impact of a certain binary target. Once again, the figure is based on synthesized data for illustrative purposes.

For a deep, local analysis of a single customer, we remove each of their transactions one by one, generating hundreds of variants for each customer. This is substantially more computationally expensive than gradient-based techniques, which generally require running the model only once. For this reason, we reserve local analyses for specific deep dives. However, in this regard, LOTO provides a clearer picture of a transaction’s importance, free from the positional bias that can mislead gradient-based methods. By directly measuring the impact of removing a transaction, LOTO helps distinguish whether a transaction is important because of its content or simply because of its location in the sequence. As the figure below illustrates, LOTO reveals a much more even distribution of high-impact transactions, helping us accurately diagnose the model’s behavior. 

Note that, in the figure above, it is still visible that more recent transactions are frequently more impactful. Another way of visualizing this in the global analysis is by simply checking the correlation between the age of a transaction and its impact on the prediction, as shown below.
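Such a check can be sketched as a plain Pearson coefficient between each removed transaction's age and its LOTO value (an illustrative helper, not code from the post):

```python
def age_impact_correlation(ages, impacts):
    """Pearson correlation between transaction age and LOTO impact.
    A strong negative/positive value would indicate that recency alone
    explains much of a transaction's measured importance."""
    n = len(ages)
    mean_a = sum(ages) / n
    mean_i = sum(impacts) / n
    cov = sum((a - mean_a) * (b - mean_i) for a, b in zip(ages, impacts))
    var_a = sum((a - mean_a) ** 2 for a in ages)
    var_i = sum((b - mean_i) ** 2 for b in impacts)
    return cov / (var_a * var_i) ** 0.5
```

Perfectly linear relationships yield ±1; in practice one would expect a modest correlation, matching the observation that recent transactions tend to be somewhat more impactful.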

Conclusion: Simplicity as a Feature

In our journey to build a reliable and interpretable financial AI system, we found that the simplest explainability method was also the most effective. By directly measuring the impact of removing each transaction, LOTO provides clear, actionable, and unbiased insights that are robust to our evolving model architecture.

References

[1] Lundberg, S. M., & Lee, S. I. (2017). A Unified Approach to Interpreting Model Predictions. Advances in Neural Information Processing Systems, 30. arXiv. https://arxiv.org/abs/1705.07874 

[2] Sundararajan, M., Taly, A., & Yan, Q. (2017). Axiomatic Attribution for Deep Networks. Proceedings of the 34th International Conference on Machine Learning, 70, 3319-3328. arXiv. https://arxiv.org/abs/1703.01365 

[3] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K. R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7), e0130140. https://doi.org/10.1371/journal.pone.0130140

[4] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144. https://arxiv.org/abs/1602.04938

The post Explaining Financial Models by Deleting Transactions appeared first on Building Nubank.

Permalink

Call for Applications: 2026 Annual Funding

Greetings folks!

Thanks to our members' generous support, this is the 5th year we will be awarding annual funding to 5 developers, each paid in twelve $1,500 monthly stipends (for a total of $90,000 USD). In the past 4 years, we have seen that giving developers flexible, long-term funding gives them the space to do high-impact work. This might be continuing maintenance on existing projects, new feature development, or perhaps a brand-new project.

We’ve been excited by what they came up with in the last few years and are looking forward to seeing more great work in 2026! Thanks all and good luck!

PROCESS

Apply: Anyone interested in receiving annual funding submits the application outlining what they intend to work on and how that work will benefit the Clojure community. The deadline for applications is Nov. 25th, 2025 midnight Pacific Time. Please note that past grantees must re-apply each year.

Board Review: The Clojurists Together board will review the applications and select finalists to present to the members by Dec. 3rd.

Members Vote: The final ballot will go out to members using a Ranked Vote election to determine the final recipients. As always, your input and participation is important and helps make Clojurists Together effective by ensuring members’ voices inform the work undertaken. Deadline Dec. 12, 2025 midnight Pacific time.

Awards will be announced no later than Dec. 18, 2025.

Project Updates: Recipients are required to submit a report bi-monthly to the membership.

A special call-out to Latacora, Roam Research, Whimsical, Nubank, Cisco, JUXT, Metosin, Solita, Adgoji, Grammarly, Nextjournal, ClojureStream, Shortcut, Flexiana, Toyokumo, doctronic, 180° Seguros, bevuta IT GmbH, Jepsen, Xcoo, Sharetribe, Basil, Cognician, Biotz SL, Matrix Operations, Strategic Blue, Eric Normand, Nubizzi, Oiiku, and Singlewire Software. They have all contributed significant amounts to Clojurists Together, which lets us award approximately $90,000 in long-term funding to Clojure developers.

Permalink

Clojure in your browser

There is a recent article on Clojure Civitas on using Scittle for browser-native slides. Scittle is a Clojure interpreter that runs in the browser. It even defines a script tag that lets you embed Clojure code in your HTML. Here is an example evaluating the content of an HTML textarea:

HTML code

<script src="https://cdn.jsdelivr.net/npm/scittle@0.6.22/dist/scittle.js"></script>
<script type="application/x-scittle">
(defn run []
  (let [code (.-value (js/document.getElementById "code"))
        output-elem (js/document.getElementById "output")]
    (try
      (let [result (js/scittle.core.eval_string code)]
        (set! (.-textContent output-elem) (str result)))
      (catch :default e
        (set! (.-textContent output-elem)
              (str "Error: " (.-message e)))))))

(set! (.-run js/window) run)
</script>
<textarea id="code" rows="20" style="width:100%;">
(defn primes [i p]
  (if (some #(zero? (mod i %)) p)
    (recur (inc i) p)
    (cons i (lazy-seq (primes (inc i) (conj p i))))))

(take 100 (primes 2 []))
</textarea>
<br />
<button id="run-button" onclick="run()">Run</button>
<pre id="output"></pre>

Scittle in your browser



Permalink

Buddy at Nubank: Supporting New Hires on Their Journey

Buddy connects new team members to Nubank, offering guidance, hands-on experiences, and opportunities to contribute with impact from day one.

At Nubank, we believe that the start of a journey can shape someone’s entire experience in the company. That’s why we created the Buddy program, connecting new hires — affectionately called Nuvinhos — to our culture, providing shared learning, support, and discovery from their very first day.

A Nuvinho is anyone who has recently joined Nubank, ready to explore our culture, processes, and challenges, and make a real impact from the start. They arrive eager to learn, find their own path, and make a difference from the beginning.

A Beginning with Purpose

When a Nuvinho joins Nubank, we want them to feel welcomed, supported, and part of something bigger. The Buddy program was created for this purpose: to provide an onboarding experience that reflects how we build our products and our work environment. Here, everyone is encouraged to make decisions, learn continuously, and collaborate to create solutions that generate real impact.

More than a formal process, the Buddy is a living network of support, allowing Nuvinhos to share experiences, try new ideas, and grow alongside the people who already embody Nubank’s culture.

High-Impact Work with Purpose

Working at Nubank is challenging and high-impact. We constantly question the status quo and create solutions that change people’s lives.

This intensity comes from the responsibility to innovate, make meaningful decisions, and deliver tangible results. For Nuvinhos, it means diving into an accelerated learning journey where every action and project contributes to something greater. It’s an environment that values initiative, collaboration, and real impact.

The Buddy program supports this process, helping each Nuvinho navigate, explore, and develop confidently in an environment driven by purpose and meaningful results.

Who the Buddy Is

A Buddy is the person who welcomes the Nuvinho and helps them find their place at Nubank. They know the company routines and day-to-day processes well and are ready to share experiences collaboratively. More than guiding, a Buddy sparks curiosity, encourages questions, and helps the new team member find their own way to contribute meaningfully from day one.

This relationship reflects what it’s like to work at Nubank: collaborative, purpose-driven, and focused on results. With a Buddy, Nuvinhos quickly realize that being part of Nu is more than holding a role — it’s participating in a story that grows every day, built collectively.

The Nuvinho Experience

From day one, the Buddy introduces paths, resources, and processes to help the Nuvinho feel confident and secure. They explain team rituals, strategic objectives, and ongoing projects, helping the new hire understand not just what needs to be done, but why it matters.

At the same time, the Buddy helps the Nuvinho explore essential tools and systems, offering hands-on learning and encouraging autonomy. They connect the new hire to colleagues, leaders, and stakeholders, building a network of relationships that strengthens belonging and eases social integration.

The Buddy also provides organizational context, showing how every decision, process, and project ties into Nubank’s larger purpose. When Nuvinhos take on their first projects, the Buddy observes, gives feedback, and celebrates achievements, ensuring continuous learning and confident, meaningful contributions.

Beyond Onboarding

The Buddy program goes beyond any traditional onboarding. It’s an expression of our culture of shared learning, collaborative building, and real impact from day one. Through this relationship, Nuvinhos discover what it’s like to work at Nubank, and Buddies also grow, learning from every new team member.

Being a Buddy means creating space for new team members to thrive, and being a Nuvinho is an opportunity to make a mark from the start. Together, they shape Nubank’s story: intense, inspiring, and purpose-driven.

The post Buddy at Nubank: Supporting New Hires on Their Journey appeared first on Building Nubank.

Permalink

Between Code and Pedagogy: What I Learned Teaching AI to Write Functional Programming Tutorials

This semester, I decided to try something different in the Introduction to Functional Programming course.

Instead of keeping the entire focus on Haskell, I used the language only in the first third of the course, as a conceptual foundation.

In the following stages, students worked with two functional stacks widely used on the web: Clojure/ClojureScript and Elixir/Phoenix LiveView.

The goal was twofold: to explore the practical applicability of modern functional programming and to investigate the role of Artificial Intelligence in producing teaching materials and software architectures.

Two approaches, one problem

Both tutorials solve the same challenge: building a complete, persistent Todo List application, but with distinct philosophies.

Version: Clojure/ClojureScript
Stack: Reagent 2.0 (React 18), Ring, Reitit, next.jdbc
Approach: explicit reactivity on the frontend with a modular REST API

Version: Elixir/Phoenix LiveView
Stack: LiveView, Ecto, Tailwind
Approach: reactivity integrated into the backend, with no intermediate API

Both tutorials can be accessed here:

The role of Artificial Intelligence

The tutorials were produced in collaboration with several AI models (ChatGPT, Gemini, and Perplexity), starting from detailed prompts.

The AIs managed to generate working code and coherent explanations, but without pedagogical structure.

What was missing was didactic intent: the why behind each decision, the sequencing of the steps, and reflection on common mistakes.

The AIs delivered roughly 80% of the technical work.

The remaining 20%, the most important part, depended on human engineering: testing, fixing, modularizing, and turning the material into a learning narrative.

It took about six hours of curation, review, and debugging before the content reached a consistent, instructive standard.

"Producing code with AI is simple. Turning it into knowledge requires experience, method, and purpose."

What this experience revealed

The process reinforced an essential lesson: AI is a powerful tool for accelerating development and inspiring solutions, but human mediation remains irreplaceable.

It is the teacher, the researcher, and the engineer who give meaning, build context, and turn code into learning.

These tutorials are more than technical guides.

They are an experiment in how to teach functional programming in the 21st century, integrating technology, pedagogy, and critical reflection on the role of artificial intelligence in the learning process.

📚 Tutorial references

Published by Sergio Costa

#Clojure #Elixir #ProgramaçãoFuncional #Educação #InteligênciaArtificial #Des

Permalink

Writing MCP servers in Clojure with Ring and Malli

Introduction #

Large language models, agents, and Model Context Protocol (MCP) are impossible to escape in today’s tech climate. At Latacora, we take a thoughtful and pragmatic approach towards new technologies like these. Are they going to solve all the world’s problems? No. Is it important that we understand them and be able to build software that integrates into emerging ecosystems? Yes!

Internally we’ve built an MCP server to query our Datomic databases using natural language, but now we’re open sourcing the underlying Clojure library so others can easily build robust MCP servers for the emerging LLM agent ecosystem too.

Permalink

Not One, Not Two, Not Even Three, but Four Ways to Run an ONNX AI Model on GPU with CUDA

Two weeks ago, I announced a new Clojure ML library, Diamond ONNX RT, which integrates ONNX Runtime into Deep Diamond. In that post, we explored the classic Hello World example of neural networks, MNIST handwritten digit recognition, step by step. We ran that example on the CPU, from main memory. The next logical step is to execute this stuff on the GPU.

You'll see that with a little help from ClojureCUDA and Deep Diamond's built-in CUDA machinery, this is both easy and simple, requiring almost no effort from a curious Clojure programmer. But don't just trust me; let's fire up your REPL, and we can continue together.

Here's how you can evaluate this directly in your REPL (you can use the Hello World that is provided in the ./examples sub-folder of Diamond ONNX RT as a springboard).

Require Diamond's namespaces

First things first, we refer functions that we're going to use.

(require '[uncomplicate.commons.core :refer [with-release]]
         '[uncomplicate.neanderthal.core :refer [transfer! iamax native]]
         '[uncomplicate.diamond
           [tensor :refer [tensor with-diamond]]
           [dnn :refer [network]]
           [onnxrt :refer [onnx]]]
         '[uncomplicate.diamond.internal.dnnl.factory :refer [dnnl-factory]]
         '[uncomplicate.diamond.internal.cudnn.factory :refer [cudnn-factory]]
         '[hello-world.native :refer [input-desc input-tz mnist-onnx]])

None of the following ways to run CUDA models takes precedence; use the one that best suits your needs.

Way one

One of the ways to run ONNX models on your GPU is to simply use Deep Diamond's cuDNN factory as the backend for your tensors. The machinery then recognizes what you need and proceeds to do everything on the GPU, using the right stream for tensors, Deep Diamond operations, and ONNX Runtime operations. This looks exactly the same as any other Deep Diamond example from this blog or the DLFP book.

(with-diamond cudnn-factory []
  (with-release [cuda-input-tz (tensor input-desc)
                 mnist (network cuda-input-tz [mnist-onnx])
                 classify! (mnist cuda-input-tz)]
    (transfer! input-tz cuda-input-tz)
    (iamax (native (classify!)))))
7

… it says.

Way two

As an ONNX model usually defines the whole network, you don't need to use Deep Diamond's network as a wrapper. The onnx function can create a Deep Diamond blueprint, and Deep Diamond blueprints can be used as standalone layer creators, just like in the following code snippet.

(with-diamond cudnn-factory []
  (with-release [cuda-input-tz (tensor input-desc)
                 mnist-bp (onnx cuda-input-tz "../../data/mnist-12.onnx" nil)
                 infer-number! (mnist-bp cuda-input-tz)]
    (transfer! input-tz cuda-input-tz)
    (iamax (native (infer-number!)))))
7

… again.

Way three

We can even mix CUDA and CPU. Let's say your input and output tensors are in main memory and you'd like to process them on the CPU, but you want to take advantage of the GPU for the model processing itself. Nothing could be easier with Deep Diamond. Just specify an :ep (execution provider) in the onnx function configuration, and tell it that you'd like to use only CUDA. Now your network is executed on the GPU, while your input and output tensors stay in main memory, where they can be easily accessed.

(with-release [mnist (network input-tz [(onnx "../../data/mnist-12.onnx" {:ep [:cuda]})])
               infer-number! (mnist input-tz)]
        (iamax (infer-number!)))
7

… and again the same answer.

Way four

Still need more options? No problem, onnx can create a standalone blueprint, and that blueprint recognizes the :ep configuration too.

(with-release [mnist-bp (onnx input-tz "../../data/mnist-12.onnx" {:ep [:cuda]})
               infer-number! (mnist-bp input-tz)]
        (iamax (infer-number!)))
7

No surprises here.

Is there anything easier?

If you've seen code in any programming language that does this in a simpler and easier way, please let me know, so we can try to make Clojure even better in the age of AI!

The books

Should I mention that the book Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, DNNL, Java, and Clojure teaches the nuts and bolts of neural networks and deep learning by showing you how Deep Diamond is built, from scratch, in interactive sessions? Each line of code can be executed and the results inspected in a plain Clojure REPL. The best way to master something is to build it yourself!

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.