Dead simple core.async job system in Clojure

In my off time (and my on time, for that matter) I've been working on this quirky thing I've called Scinamalink (sounds like 'skinamarink'). Scinamalink lets customers send magic login links to their users with a single REST API call. If you don't know what a magic login link is, it's basically a password reset email on steroids: quite literally a link that just authenticates the user's session, skipping the usual password reset flow.

My motivations for working on Scinamalink range from having something to show off my Clojure skills on Twitch to seeing whether this is worthwhile for people, since services like Auth0 and Clerk already support magic login links. I'm 'unbundling', as the cool kids say.

An early problem I was concerned with was curtailing spam. I figured the best approach for this would be verification of the customer's domain. Makes sense. Time for an asynchronous job and some DNS queries.

Avoiding a RabbitMQ hole

The first thing someone suggested was introducing some kind of message broker, like RabbitMQ, and going from there. I said hell no. I'm trying to avoid complexity. Yet I understand that building an async worker from scratch doesn't seem like the simplest approach either.

My line of thinking is this: I think infrastructure is part of your software architecture. Every component in an architecture adds exponentially more complexity, whether it's a software component or another process on the same network. By using something like RabbitMQ and writing the jobs for it, I'm essentially adding two or three more components to the architecture: the RabbitMQ process, its deployment configuration, and the code to manage jobs from my main application server.

Such complexity may be worth it for some developers building a solution, but, as someone who wants to get to market faster, cutting the ops work looks like the better approach. The obvious trade-off being my single-process worker system may be less reliable than RabbitMQ.

Simple Yet Reliable Enough

I have a single process with multiple threads, thanks to core.async. It's not lost on me that if the process fails, in-flight jobs will be lost, so my approach to reliability starts at the data model and database. Let's take a look at the PostgreSQL data definition for a job:

CREATE TYPE worker_job_state AS ENUM ('created', 'starting', 'working', 'finished', 'crashed');

CREATE TABLE IF NOT EXISTS worker_jobs (
       id serial primary key,
       current_state worker_job_state not null default 'created',
       timeout timestamp default now() + interval '24 hours',
       attempt_count integer not null default 0,
       priority integer not null default 1,
       context jsonb not null,
       created_at timestamp not null default now(),
       updated_at timestamp not null default now()
);

I originally intended for timeouts and priorities to be a thing, but they're 'reserved for future use' (waste of time). But since each job can differ in implementation, it's necessary to store some of the context in a free-form JSON blob column. For example, when verifying domain ownership, it might be a good idea to store the customer and domain associated with the verification job.
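For the domain verification job shown later, the context column ends up holding a small JSON object along these lines (the IDs here are made up for illustration):

```json
{"job-type": "domain_verification", "customer-id": 42, "domain-id": 7}
```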

However, we can see each job shares the same states describing the job's lifecycle: created, starting, working, finished, and crashed. I think each state is self-explanatory here, and these states imply each job function is a finite-state machine (FSM).
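One way to picture the lifecycle is as a map from each state to its legal successors. This is my own illustration; the project encodes transitions implicitly in the job functions themselves:

```clojure
(def transitions
  {:created  #{:starting :crashed}
   :starting #{:working :crashed}
   :working  #{:working :finished :crashed} ; DNS retries loop on :working
   :finished #{}                            ; terminal
   :crashed  #{}})                          ; terminal
```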

All work, no play

So, I need to implement a finite-state machine in Clojure. If you've read any of my previous works, you know what's coming next: functions returning functions. In short, we can implement a finite-state machine by having a function represent each state. When a state needs to transition to another state, it returns that state's function; otherwise it returns itself. We can use recursion to propel the state machine forward. It might be easier to start with the loop:

;; in scinamalink.core.worker namespace
;; (`warning-dropping-buffer` is defined elsewhere: a dropping buffer
;;  that logs the given warning whenever it drops a job)
(def buffer-limit (or (:job-buffer-limit env) 2048))
(defonce queue (a/chan (warning-dropping-buffer
                        buffer-limit
                        "job queue full, job dropped")))

(defn worker
  "Spins off a go-loop based worker and runs the job function
  pulled from channel `queue` in a core.async/thread. If that
  job returns a fn, puts it back on the `queue` for a worker
  to process. Otherwise, worker discards result and repeats."
  [queue]
  (a/go-loop [job (a/<! queue)]
    (try
      (when-let [next-state (a/<! (a/thread (job)))]
        (when (fn? next-state)
          (a/>! queue next-state)))
      (catch Exception e
        (log/warnf "Possible job failure in worker: %s" (.getMessage e))))
    (recur (a/<! queue))))

from core.worker namespace

This is a bit more complex than my past examples, but it's the same idea. We pull a job function off a core.async channel, queue, and execute it. The job function is executed in a core.async thread because jobs must perform blocking network operations. The result, the next-state function, comes off the channel returned by thread. Instead of passing next-state to the next recursive call, it goes back onto the queue, provided it's a function, so it doesn't starve other job functions waiting in the queue.

Jobs-as-functions also makes for easy synchronous testing: I just wrote a regular (non-go) loop to test each job's FSM. Of course, the jobs themselves require a bit of forethought.
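Since each state function returns either the next state fn or a terminal value, such a test driver needs no core.async at all. A minimal sketch (my illustration, not the project's actual test code):

```clojure
(defn run-job-sync
  "Drives a job FSM to completion on the calling thread:
  keeps invoking the returned function until the result
  is no longer a function."
  [job-fn]
  (loop [state job-fn]
    (let [result (state)]
      (if (fn? result)
        (recur result)
        result))))
```

This is exactly what clojure.core/trampoline already does, so (trampoline (->job customer-id domain-id)) would be equivalent.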

They Took Our Jobs

So what does a job function look like? They're pretty simple:

;; In scinamalink.jobs.domain-verification namespace
;; (`registry`, `ds-opts`, and the `crashed` state fn are defined
;;  elsewhere in the namespace and omitted from this excerpt)
(defn finished
  [job]
  (try
    (swap! registry #(dissoc % (:id job)))
    (db-jobs/job-finished ds-opts (:id job))
    (log/debugf "%s job %s finished" (get-in job [:context :job-type]) (:id job))
    (catch Exception e
      (do (crashed job)
          (log/errorf "Unexpected exception in domain verification finished function: %s"
                      (.getMessage e))))))

(defn job-work
  [job]
  (try
    (let [{:keys [context]} job
          {:keys [customer-id domain-id]} context
          domain (db-domains/get-customer-domain-by-id ds-opts customer-id domain-id)
          {:keys [verification-code verified domain-name last-checked-at]} domain]
      (log/infof "%s job %s checking domain %s"
                 (get-in job [:context :job-type])
                 (:id job)
                 domain-name)
      (when-not domain
        (throw
         (ex-info (str "Customer domain record missing for job " (:id job)) context)))
      (if-not verified
        (do (db-domains/set-last-checked-at ds-opts customer-id domain-id (Instant/now))
            (if (= verification-code (dns/get-domain-text-record domain-name))
              (do (log/debugf "setting domain to verified for job %s" (:id job))
                  (db-domains/set-verified ds-opts customer-id domain-id true))
              (log/debugf "no verification code found for job %s" (:id job)))
            #(job-work job))
        #(finished job)))
    (catch Exception e
      (do (crashed job)
          (log/errorf "error while completing job work: %s" (.getMessage e))
          (db-jobs/job-failed ds-opts (:id job) (.getMessage e))))))

(defn do-job
  [job]
  (try
    (swap! registry #(assoc % (:id job) :working))
    (let [job (db-jobs/job-working ds-opts (:id job))]
      #(job-work job))
    (catch Exception e
      (do (crashed job)
          (log/errorf "job failed: %s %s" job (.getMessage e))
          (db-jobs/job-failed ds-opts (:id job) (.getMessage e))))))

(defn start-job
  [job]
  (try
    (swap! registry #(assoc % (:id job) :started))
    (let [job (db-jobs/job-started ds-opts (:id job))]
      (log/debugf "Starting %s job %s" (get-in job [:context :job-type]) (:id job))
      #(do-job job))
    (catch Exception e
      (do (crashed job)
          (log/errorf "job failed in start-job function, job: %s message: %s" job (.getMessage e))
          (db-jobs/job-failed ds-opts (:id job) (.getMessage e))))))

(defn ->job
  [customer-id domain-id]
  (try
    (let [job-context {:job-type "domain_verification"
                       :customer-id customer-id
                       :domain-id domain-id}
          job (db-jobs/create-db-job ds-opts (db-jobs/next-week (Instant/now)) 0 1 job-context)]
      (log/info "Domain verification job created")
      #(start-job job))
    (catch Exception e
      (log/warnf "domain verification job failed: %s %s %s" customer-id domain-id (.getMessage e)))))

from jobs.domain-verification namespace

The domain verification job gets created with ->job, which creates the job row in the database and returns the first state function to place on the queue, with something like (dispatch-work queue (->job customer-id domain-id)).
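dispatch-work itself isn't shown in the post; given how the worker consumes the channel, it can be as thin as a single put!. This is a hypothetical sketch, not the actual implementation:

```clojure
(defn dispatch-work
  "Puts a job state fn onto the worker `queue`.
  Ignores anything that isn't a function."
  [queue job-fn]
  (if (fn? job-fn)
    (a/put! queue job-fn)
    false))
```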

Since the workers themselves are so thin, each job is responsible for everything related to its function. Each state needs to clean up after itself if something goes wrong, and each updates the database with the job's serialized context regardless of failure.

I'm not bound by the rigidity of the job data model, though. You'll notice that job-work does most of the work for this task, yet do-job sets the state to :working in the database. I did this because I didn't want to write the state :working to the database every time the job attempts the DNS query. The worker doesn't care, as long as it gets a function. Although, when the process starts and loads the jobs from the database, it will start at do-job again.

Starting, Restarting, and Unstarting Processes

At some point, jobs will need to be loaded from the database into the worker system, whether after failures or restarts. This is a pretty simple process: read from the database, dispatch to the appropriate job constructor, and put the resulting job functions on the queue for the workers.

(defmulti ->job-fn
  "Multimethod to dispatch on job creation function"
  (fn [job]
    (let [{:keys [current-state context]} job]
      (vector (csk/->kebab-case-keyword (:job-type context))
              (keyword current-state)))))

;; Currently in core.worker, but should be in core.loader:
(defn- start-existing-helper!
  "Recursively puts each `job` fn on the port `queue`,
  presumably to be processed by a worker (see above).
  Stops when `jobs` is exhausted."
  [queue jobs]
  (when-let [job (first jobs)]
    (try
      (a/put! queue job (fn [all-good]
                          (if all-good
                            (start-existing-helper! queue (rest jobs))
                            (log/errorf "didn't put job onto queue, exploding: %s" job))))
      (catch Exception e
        (log/errorf "didn't put job onto queue, exploding: %s %s" job (.getMessage e))))))

(defn start-existing-jobs!
  "Starts existing jobs from the DB"
  []
  (try
    (let [jobs (db-jobs/get-all-pending-jobs ds-opts buffer-limit)]
      (->> jobs
           (mapv ->job-fn)
           (start-existing-helper! queue)))
    (catch Exception e
      (log/errorf "Unexpected error while starting existing jobs: %s" (.getMessage e)))))

In the actual job definition, we can extend ->job-fn with dispatch values that map to our database record's context column.

(defmethod worker/->job-fn [:domain-verification :created]
  [job]
  #(start-job job))

(defmethod worker/->job-fn [:domain-verification :starting]
  [job]
  #(do-job job))

(defmethod worker/->job-fn [:domain-verification :working]
  [job]
  #(job-work job))

from jobs.domain-verification namespace

This start-existing-jobs! function gets called when the process starts, but we need a way to periodically load jobs while the process is running. Ideally, our job loader would be aware of each running job so that the same jobs aren't loaded over and over.

(defonce registry (atom {}))

;; 10 min = 600000 ms
(defn loader
  "Loads jobs from the Database into the job `w/queue`,
  skipping the currently running ones."
  [queue ms]
  (a/go-loop [to-chan (a/timeout ms)]
    (try
      (when-not (a/<! to-chan)
        (a/<!
         (a/thread
           (try
             (log/debugf "Begin loading jobs from database")
             (log/debugf "There are currently %s jobs in the registry" (count (keys @registry)))
             (let [jobs (dbw/get-all-pending-jobs ds-opts w/buffer-limit)
                   running-jobs @registry
                   stored-jobs (->> jobs
                                    (filterv #(not (contains? running-jobs (:id %))))
                                    (mapv w/->job-fn))]
               (doseq [job stored-jobs]
                 (w/dispatch-work queue job)))
             (catch Exception e
               (log/warnf "Loader exception while loading jobs from DB: %s"
                          (.getMessage e)))))
      (catch Exception e
        (log/warnf "Possible job failure in loader: %s" (.getMessage e))))
    (recur (a/timeout ms))))

(defn start-job-loaders!
  ([queue]
   (start-job-loaders! queue 1))
  ([queue n]
   (doseq [_ (range n)]
     (loader queue 600000))))

core.loader

Lucky for us, an atom held in a plain Var is visible from all of core.async's threads, meaning I can have a Var pointing to an atom where information about all the running jobs is stored (jobs take on the responsibility of registering themselves). As we can see, a loader functions very similarly to a worker: it runs the start-existing-jobs! functionality, filtering out whatever is in the registry atom on that iteration. After each iteration, the loaders wait on a core.async/timeout for the specified amount of time.

Big Picture

Finally, the big picture of the sub-system emerges.

[Diagram: Dead simple core.async job system]

Follow me on the website formerly known as Twitter dot com and around the web @janetacarr , or not. ¯\_(ツ)_/¯

Permalink

This Week In Python

Fri, May 17, 2024

This Week in Python is a concise reading list about what happened in the past week in the Python universe.

Python Articles

Projects

  • basilisp – A Clojure-compatible(-ish) Lisp dialect targeting Python 3.8+
  • still-manim-editor – real-time coding editor for creating diagrams
  • llgo – A Go compiler based on LLVM in order to better integrate Go with the C ecosystem including Python
  • hstream – write Streamlit, eject to Django and HTMX
  • horus – An OSINT / digital forensics tool built in Python

Permalink

Crustimoney - a Clojure PEG parser

This post introduces the Crustimoney library - a Clojure idiomatic “packrat” PEG parser. The initial version of Crustimoney from 11 years ago was incomplete. The recent rewrite was mostly a mental exercise, to get it feature complete and a bit beyond that. I like to think it turned out well, and perhaps you will like it too. And if not, there’s always Instaparse 😊.

So, the features are as follows:

  • Create parsers from combinators
  • .. or from string-based definitions
  • .. or from data-driven definitions
  • Packrat caching, optimizing cpu usage
  • Novel concept of cuts, optimizing memory usage - and error messages
  • Focus on capture groups, resulting in shallow parse trees
  • Minimal parse tree data, fetch only what’s needed on post-processing
  • Virtual stack, preventing overflows
  • Infinite loop detection (runtime)
  • Missing rule references detection (compile time)
  • Streaming support (experimental)

There’s a lot to discover, but to keep it short and simple, let’s implement a little calculator.

The grammar

We start with the grammar itself. We can choose between combinators, Clojure data, or text as our format. This time we choose text, as it is probably the most familiar. We will bind the following to grammar:

root       <- sum $
sum        <- (:operation product sum-op > sum) / product
product    <- (:operation value product-op > product) / value
value      <- number / '(' > sum? ')'
sum-op     <- (:operand [+-])
product-op <- (:operand [*/])
number=    <- [0-9]+

Some remarks are in order. Firstly, the (..) groups can (and do, in our grammar) start with a capture name. Crustimoney expressly does not capture anything by default. So if something needs to be in the parse result, it needs to be captured.

Secondly, the number rule is postfixed with an equals = sign. This is a shortcut for the following:

number <- (:number [0-9]+)

Lastly, the grammar contains a > three times. These are soft cuts, a novel concept in Crustimoney, and PEG parsers in general. In short, it means that the parser won’t backtrack after passing the soft cut, for the duration of the chain (i.e. a consecutive list of elements) it is in.

Take the sum rule as an example. If a sum-op has been parsed, a sum must follow, or fail otherwise. It won’t backtrack to try the second choice, the product. Having soft cuts improves error messages to be more local.

For example, parsing "1+" with the soft cut will properly fail with a message about a missing number or missing parenthesis. Without the soft cut, it would fail with a rather generic message about an unexpected ‘+’ sign.

This is also the time to talk about hard cuts, which would be denoted by a >>. A hard cut won’t have the parser backtrack beyond it at all, ever. It has the same advantage as soft cuts; better error messages. But it also allows the packrat cache to ditch everything before the hard cut.

Normally, a major downside of packrat PEG parsers is that they are memory hungry. But if your grammar has repeating structures, you can use hard cuts. A well placed hard cut can alleviate the memory requirements to be constant!

The parsing

Now that we have our grammar, we can create a parser from it and use it.

(require '[crustimoney.core :refer [parse]]
         '[crustimoney.string-grammar :refer [create-parser]])

(def parser (create-parser grammar))

(parse parser "1+(2-42)*8")

=> [nil {:start 0, :end 10}
    [:operation {:start 0, :end 10}
     [:number {:start 0, :end 1}]
     [:operand {:start 1, :end 2}]
     [:operation {:start 2, :end 10}
      [:operation {:start 3, :end 7}
       [:number {:start 3, :end 4}]
       [:operand {:start 4, :end 5}]
       [:number {:start 5, :end 7}]]
      [:operand {:start 8, :end 9}]
      [:number {:start 9, :end 10}]]]]

A successful parse result is in “hiccup”-style. Only captured nodes are in the result. Only the root node can be “nameless” nil.

Notice that the result nodes only have the :start (inclusive) and :end (exclusive) indices of matches. This is to keep the result tree and string processing to a minimum, also during the parsing. It is up to the user - not the parser - to decide which texts are important.

If you do want all the texts, for debugging purposes or quick parses, there’s the quick/parse function (or quick/transform if you already have a result tree). It takes a grammar directly and transforms the result to contain the matched texts:

(require '[crustimoney.quick :as quick])

(quick/parse grammar "1+(2-42)*8")

-> [nil "1+(2-42)*8"
    [:operation "1+(2-42)*8"
     [:number "1"]
     [:operand "+"]
     [:operation "(2-42)*8"
      [:operation "2-42"
       [:number "2"]
       [:operand "-"]
       [:number "42"]]
      [:operand "*"]
      [:number "8"]]]]

Let’s have a quick look at errors too, which is a set of error records:

(parse parser "1+")

=> #{ {:key :expected-literal, :at 2, :detail {:literal "("}}
      {:key :expected-match, :at 2, :detail {:regex #"[0-9]+"}} }

The transforming

Crustimoney has a results namespace to deal with the parse results. It contains functions like success->text and errors->line-column. And while it is easy to deal with hiccup-style data yourself, the namespace also comes with a “batteries included” transform function. It performs a postwalk, transforming the nodes by applying a function to it.

Let’s see it in action, and finish our calculator:

(require '[crustimoney.results :refer [transform coerce collect]])

(def transformations
  {:number    (coerce parse-long)
   :operand   (coerce {"+" + "-" - "*" * "/" /})
   :operation (collect [[v1 op v2]] (op v1 v2))
   nil        (collect first)})

(defn calculate [text]
  (-> (parse parser text)
      (transform text transformations)))

Each transformation function takes the success node and the full text. The coerce and collect macros help with creating such functions. The coerce takes a function, which is applied on the node’s matched text. The collect takes a function, which is applied on the children of the node. Both can also take a binding vector, like the :operation transformation above.

And, does it work?

(calculate "1+(2-42)*8")

=> -319

Yup! And there’s more to discover about Crustimoney - like built-in parsers, or lexically-scoped nested recursive grammars, or EDN support, or streaming, or why it’s called that way - but all that can be found on github.

As always, have fun!

Permalink

Optimizing & resizing images for Jekyll posts using Babashka

I’ve been tumbling down a rabbit hole for a little while now.

Feeling a desire to “write more” on my blog motivated me to enhance the overall reading experience. However, customizing a Jekyll blog is not always easy due to the inherent limitations of the Liquid templating language and the modest number of plugins available for GitHub Pages. Nevertheless, an improved reading experience led me to want a “featured image”. But images tend to affect webpage load speed… and speed matters. The featured image needed to be responsive and optimized, although doing it by hand made me shiver. Any manual repetitive process is boring and prone to errors. On top of that, it would take the focus away from writing. Suddenly I found myself scouring the internet for information about responsive images and semi-automating the image optimization process with a Babashka script interfacing with TinyPNG's API. 🐇🕳️

I could probably have gotten away with using a single optimized image and avoided the complexities that follow optimizing for multiple resolutions. But where is the fun in that - this was an opportunity to learn.

The following is divided into two sections describing:

  1. The essential information about “responsive images” and how the necessary Liquid template was created.
  2. A few of my thoughts behind the Babashka script for optimizing images using TinyPNG, and a link to the source code of the script.

Liquid template for responsive images

First I needed to know a bit more about images, and via MDN's article about Responsive images, I found an excellent series of blog posts named “Responsive Images 101” by Jason Grigsby.

It took a while for me to wrap my head around it, but it boils down to this: srcset describes which resolutions are available, and sizes helps the browser choose which resolution is the best option.

Both srcset (widths) and sizes are closely coupled with the layout in which the image is used. Thus blindly copy-pasting the following code will likely result in undesirable outcomes since it is tailored to my blog. But the approach should be fairly easy to replicate if you know a little about Liquid templating.

_includes/featured_image.html:

{% assign widths = "464,720,930,1440" | split: "," %}
{% assign ext = include.src | split: "." | last | prepend: "." %}
<img
  src="{{ include.src }}"
  srcset="
  {% for width in widths %}
    {{ include.src | replace: ext, '_' | append: width | append: ext }} {{ width }}w,
  {% endfor %}"
  sizes="(max-width:767px) calc(100vw - 2rem),
         (max-width:1024px) calc(100vw - 24rem),
         (max-width:1280px) calc(100vw - 28rem),
         720px"
  alt="{% if include.alt %}{{ include.alt }}{% else %}featured image{% endif %}"
  class="featured-image-post"
/>

To identify which widths I required for my layout, I carefully inspected how my layout changed when resizing the width of the browser. I concluded that I needed two resolutions (464 and 720) plus two additional resolutions (930 and 1440) for high pixel density monitors (4K). This is because the sidebar moves to the top on narrow screens, allowing me to “reuse” the higher image resolutions.

There are a few things worth noticing in the above Liquid snippet. It is impossible to define arrays directly in Liquid, so to work around this limitation all widths are comma-combined in a string and then split (into an array).

It is also not possible on GitHub Pages to use a regex-replace plugin (presumably for security reasons) but by leveraging the filters split, last, append & prepend, I was able to achieve a similar result. This also makes it possible to use different image types like png and webp for different featured images.
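As a concrete walk-through (the filename here is hypothetical): for src = /assets/img/foo.png and width 464, the filter chain in the snippet above evaluates like this:

```liquid
{% comment %} ext = "." prepended to the last "."-split segment = ".png" {% endcomment %}
{{ "/assets/img/foo.png" | replace: ".png", "_" | append: "464" | append: ".png" }}
{% comment %} renders "/assets/img/foo_464.png" {% endcomment %}
```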

When specifying the value of sizes for the image tag, I tried to reuse the values (of max-width) and units (rem) that I found in my (slightly adapted) Jekyll theme.

The above featured_image.html is only guaranteed to work in the context of a post, but I decided to leave it as an include to reduce the amount of code in post.html:

_layouts/post.html:

  {% if page.image %}
    {% include featured_image.html
       src=page.image
       alt=page.image_alt %}
  {% endif %}

All that is left for me to do when writing, is optionally providing the following Front Matter:

image: /assets/img/awesome_image.png
image_alt: "AI generation prompt: Something awesome."

But wait! How do all the different (now necessary) resolutions of the featured image come to be?
Enter Babashka.

Babashka script for optimizing and resizing images

TinyPNG's API is awesome. Simple, easy to use, but still powerful. I had no experience with optimizing images before this, so I might just be easily impressed.

I have shared my current generic implementation of a Babashka script for optimizing and resizing images using the TinyPNG API in a GitHub Gist. It took a few iterations to carve out the last hard coupled things.

The REPL experience in Clojure and Babashka are both a delight to work with. They each played a significant role in why the code turned out so nicely. Being able to easily iterate over small parts of the code in isolation is simply awesome.

For instance, it is possible to generate the HTTP request bodies for all the combinations of resizing, without actually making any requests to TinyPNG's API (no side effects):

> (request-bodies {:width "1440,930"
                   :resize-method "scale"
                   :type "image/webp"})
({:convert {:type "image/webp"}}
 {:resize {:method "scale", :width 1440}, :convert {:type "image/webp"}}
 {:resize {:method "scale", :width 930}, :convert {:type "image/webp"}})

When I cannot avoid side effects, I make an effort to place them in spots that keep the code easy to test. This is why the function calling the TinyPNG API for resizing (get-image-output) isn't also saving the image to disk. I want to be able to tinker with how the images are saved (save-output-response) without making requests to the TinyPNG API every time. By mocking a TinyPNG API HTTP response, this is straightforward:

(def response
  {:request {:body "{\"resize\": {\"method\": \"scale\", \"width\": 320}}"}
   :headers {"content-type" "image/png"
             "image-width" "320"
             "image-height" "270"}
   :body (io/input-stream (.getBytes "some bytes"))})

(save-output-response
  "images/panda-happy" ; NOTICE: without extension - deduced from response
  response)

With the script having finally been molded to my needs, I can now generate all the optimized variations of a featured image for my blog by running something along the lines of:

$ ./optimize_img.clj assets/img/orig/a_man_sitting_at_a_desk_with_a_blank_page.png assets/img/ \
  -k tinypng_api.txt -m scale -w 1440,930,720,464
Optimizing assets/img/orig/a_man_sitting_at_a_desk_with_a_blank_page.png
Saving assets/img/a_man_sitting_at_a_desk_with_a_blank_page.png
Saving assets/img/a_man_sitting_at_a_desk_with_a_blank_page_1440.png
Saving assets/img/a_man_sitting_at_a_desk_with_a_blank_page_930.png
Saving assets/img/a_man_sitting_at_a_desk_with_a_blank_page_720.png
Saving assets/img/a_man_sitting_at_a_desk_with_a_blank_page_464.png

In the example above, the TinyPNG API key is stored in a file named tinypng_api.txt in the “current directory” (the same directory as the Babashka script is run from). I am using a file to avoid having my API key in the terminal history. However, the code allows for providing the key directly as an argument:
-k aWprbG1ub3BxcnN0dXZ3eHl6MDEyMzQ1.
When no file with that name exists, the script assumes the argument is the key itself.
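That file-or-literal behavior boils down to a few lines of Babashka-flavored Clojure. This is a hypothetical sketch of the idea; the names are mine, not the script's:

```clojure
(require '[clojure.java.io :as io]
         '[clojure.string :as str])

(defn resolve-api-key
  "If `k` names an existing file, reads the key from it;
  otherwise treats `k` as the API key itself."
  [k]
  (let [f (io/file k)]
    (if (.isFile f)
      (str/trim (slurp f))
      k)))
```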

I guess I am out of excuses - now it is back to writing 😅

Permalink

Clojure Deref (May 24, 2024)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS). Thanks to Anton Fonarev for link aggregation.

Podcasts and videos

Blogs, articles, and projects

Libraries and Tools

New releases and tools this week:

  • clojure 1.12.0-alpha12 - The Clojure programming language

  • next-jdbc 1.3.939 - A modern low-level Clojure wrapper for JDBC-based access to databases

  • overtone 0.14.3199 - Collaborative Programmable Music

  • ragtacts 0.3.1 - RAG(Retrieval-Augmented Generation) for Evolving Data

  • devcontainer-templates - Devcontainer templates for Clojure

  • hermes 1.4.1400 - Hermes provides a set of terminology tools built around SNOMED CT

  • rad 2024-05-22.8-alpha - A small, zero-dependency Redis client for Clojure

  • clj-kondo 2024.05.24 - Static analyzer and linter for Clojure code that sparks joy

  • dots - Dots is a tool that generates ClojureScript wrappers for JavaScript libraries using their TypeScript definitions

  • fs 0.5.21 - File system utility library for Clojure

  • cli-tools 0.12 - CLIs and subcommands for Clojure or Babashka

  • pedestal 0.7.0-beta-2 - The Pedestal Server-side Libraries

  • clay 2-beta9 - A tiny Clojure tool for dynamic workflow of data visualization and literate programming

  • guardrails 1.2.9 - Efficient, hassle-free function call validation with a concise inline syntax for clojure.spec and Malli

  • yamlscript 0.1.59 - Programming in YAML

  • nrepl 1.1.2 - A Clojure network REPL

  • overarch 0.18.0 - A data driven description of software architecture based on UML and the C4 model

  • pipehat 1.0.2 - Read and write vertical bar encoded HL7 v2.x messages

  • fulcro-rad 1.6.1 - Fulcro Rapid Application Development

  • fulcro 3.7.5 - A library for development of single-page full-stack web applications in clj/cljs

Permalink

Bulma CSS adoption guide: Overview, examples, and alternatives

Bulma CSS is a popular, open source CSS framework with a sleek, modern design and user-friendly components. It simplifies frontend development by providing a set of pre-designed, responsive, and customizable UI elements that can be easily integrated into any web project.

Bulma Css Adoption Guide: Overview, Examples, And Alternatives

With its growing popularity and active community support, Bulma CSS has become a relevant player in the frontend landscape today. It offers developers a powerful tool to streamline workflows and create beautiful, functional web applications.


What is Bulma?

Bulma CSS was created in 2016 by Jeremy Thomas. With over 48,000 stars on GitHub and over 250,000 estimated weekly downloads, it has gained immense popularity among developers.

The framework was developed to tackle the typical problems that developers encounter when building frontend applications. For example, building responsive designs from scratch can be tedious and time-consuming. With Bulma, developers can easily create designs that work seamlessly across different devices.

Another problem Bulma addresses is the need for a modern and clean design. The framework provides an extensive library of components that websites commonly use, such as buttons, forms, cards, menus, and more. These components are designed to be simple, readable, and customizable.

Since its release, Bulma has undergone significant changes and improvements. It is a CSS-only framework, which makes it lightweight and fast to load. The framework is based on Flexbox, a CSS layout module that enables developers to create flexible and responsive layouts.

It is also built with web standards and best practices, making it a reliable and robust choice for building frontend applications. Its modular architecture allows components to be added easily without relying on complex dependencies or extra code.


Why use Bulma?

Bulma is a powerful and versatile CSS framework that can help you easily create beautiful, responsive, and modern websites. Here are some reasons why you might want to use Bulma for your next project:

  • Performance: Bulma is a lightweight CSS framework that has a minimal impact on your website’s performance. It is built with CSS only and does not require JavaScript, which means that it can load faster and improve your site’s overall performance
  • Easy to learn and use: Bulma is designed to be easy to use and has an excellent developer experience (DX). Its simple and intuitive syntax makes it easy to understand. With Bulma, you can quickly build responsive and mobile-first websites without needing extra libraries or tools
  • Bundle size: Bulma’s small and optimized bundle size makes it an excellent choice for projects concerned with performance and speed. It is designed to be modular, which means that you can include only the parts of the framework that you need
  • Community and ecosystem: Bulma has a strong and supportive community that constantly contributes to its development and improvement. It also has a wide range of plugins, extensions, and integrations that can help you extend its functionality and capabilities even further
  • Learning curve: Bulma has a shallow learning curve, which means that you can quickly get started with it even if you are a beginner. It has comprehensive documentation and a large community that can help you with any questions or issues that you might encounter
  • Documentation: Bulma has excellent documentation. It provides detailed examples and explanations for each component and feature, which is a great resource for people using it for the first time (and beyond that too)
  • Integrations: Bulma can be easily integrated with other frameworks and libraries, making it a versatile choice for any project. It works seamlessly with popular frontend frameworks like React, Vue, and Angular, and can be used with popular CSS preprocessors like Sass and Less


When not to use Bulma

While Bulma is an excellent choice for many projects, there are some cases when you might want to avoid using it. Here are some cons to consider:

  • Bulma might not be the best choice if you are working on a project requiring a lot of JavaScript functionality since it is primarily a CSS framework
  • Bulma might not be the best choice for you if you need a highly customized design that requires a lot of custom CSS
  • If you are working on a project that requires a lot of legacy browser support, Bulma might not be the best choice since it relies on modern CSS features

However, if these downsides don’t apply to your project and you’re interested in taking advantage of this framework’s many benefits, let’s next take a look at how to get started with Bulma.


Getting started with Bulma

Getting started with Bulma is a simple process. You have two options: use the pre-compiled .css file or install the .sass file and customize it according to your requirements.

If you want to go with the first option, you can utilize the CDN link in your HTML or CSS file, or download the zip file from GitHub:

<!-- HTML file -->
<link
  rel="stylesheet"
  href="https://cdn.jsdelivr.net/npm/bulma@1.0.0/css/bulma.min.css"
>

/* CSS file */
@import "https://cdn.jsdelivr.net/npm/bulma@1.0.0/css/bulma.min.css";

Otherwise, you can install the .sass files via npm to use them:

npm install bulma
# or
yarn add bulma

Bulma also provides a right-to-left (RTL) version for developers who want to create websites and applications in languages read from right to left, such as Arabic and Hebrew. You can access it using the CDN link in your HTML or CSS file:

<!-- HTML file -->
<link
  rel="stylesheet"
  href="https://cdn.jsdelivr.net/npm/bulma@1.0.0/css/bulma-rtl.min.css"
>

/* CSS file */
@import "https://cdn.jsdelivr.net/npm/bulma@1.0.0/css/bulma-rtl.min.css";

The .css file is a single CSS file, while the .scss files break the CSS down into multiple files, allowing you to import only the ones you need. Here is a basic example of how to use it:

<!-- HTML file -->
<section class="section">
  <div class="container">
    <h1 class="title">
      Hello World
    </h1>
    <p class="subtitle">
      My first website with <strong>Bulma</strong>!
    </p>
  </div>
</section>

/* CSS file */
@import "https://cdn.jsdelivr.net/npm/bulma@1.0.0/css/bulma.min.css";

You can see the CodePen preview below as well:

See the Pen How to use Bulma CSS by Timonwa Akintokun (@timonwa)
on CodePen.


Key Bulma features to know

Below are some key features of Bulma that you should know.

Responsive design

Bulma prioritizes mobile devices and has been designed to be fully responsive. This makes it ideal for building mobile-friendly websites and web applications. It provides four breakpoints, which you can use to define five different screen sizes:

  • Mobile: up to 768px
  • Tablet: from 769px
  • Desktop: from 1024px
  • Widescreen: from 1216px
  • Full HD: from 1408px

Readable class names and modifiers

Bulma follows a consistent naming convention for class names, such as button, box, content, input, and so on. This makes it easier for developers to identify and use these elements in their code.

It’s also easy to create variants of Bulma elements using modifiers. For example, suppose you wanted to make a primary or outlined button. In that case, you could add the is-primary or is-outlined modifier. If you’re going to make an input rounded, you can add the is-rounded modifier:

<button class="button is-primary is-outlined">Primary Button</button>
<input class="input is-rounded" type="text" placeholder="Rounded Input">

The code above would yield the following:

Demo Usage Of Bulma Modifiers To Create A Primary Button Variant And Rounded Input Variant

Grid system

Bulma provides a powerful grid system that makes it easy to create complex layouts. The grid system consists of 12 columns that can be combined in different ways to create different layouts. Each column will also have equal width by default. Here’s an example usage of Bulma’s grid system:

<div class="columns">
  <div class="column">Column 1</div>
  <div class="column">Column 2</div>
  <div class="column">Column 3</div>
</div>

And here’s how it would look:

Demo Of Bulma Css Grid System With Example Of Three Columns In One Row

CSS components

Bulma comes with a wide range of CSS components that you can use to style your website or web application. These components include buttons, forms, menus, modals, tabs, skeleton loaders, and more:

<head>
  <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bulma@1.0.0/css/bulma.min.css">
</head>
<body>
  <section class="section">
    <div class="container is-max-desktop">
      <h1 class="title">Contact Form</h1>
      <form>
        <div class="field">
          <label class="label">Name</label>
          <div class="control">
            <input class="input" type="text" placeholder="Enter your name">
          </div>
        </div>
        <div class="field">
          <label class="label">Email</label>
          <div class="control">
            <input class="input" type="email" placeholder="Enter your email">
          </div>
        </div>
        <div class="field">
          <label class="label">Message</label>
          <div class="control">
            <textarea class="textarea" placeholder="Enter your message"></textarea>
          </div>
        </div>
        <div class="field">
          <div class="control">
            <button class="button is-primary">Submit</button>
          </div>
        </div>
      </form>
    </div>
  </section>
</body>

You can view the CodePen preview here:

See the Pen Creating a form with Bulma CSS by Timonwa Akintokun (@timonwa)
on CodePen.

Customizable

Bulma is highly customizable, so you can easily modify its styles to match your brand or design requirements. All of its components are styled using Sass variables and CSS variables, which you can easily override.

Theming

Bulma provides two default themes: light and dark. However, if you’re looking for something more personalized, you have the option to create your own custom themes by adding CSS variables to the CSS or Sass files.

Dark mode support

Bulma offers support for dark mode, a popular theme choice for many websites. This feature allows users to switch to a darker color scheme that is easier on the eyes, particularly when viewing websites in low-light environments. Additionally, users can customize their website’s theme by selecting from various color options or manually setting their preferences.


Use cases for Bulma

Here are some practical cases where web developers can use Bulma CSS:

  • Ecommerce websites: Bulma CSS can be used to build ecommerce websites that are mobile-friendly and responsive. It offers a variety of components, such as buttons, forms, cards, and skeleton loaders that can be used to create a seamless user experience
  • Corporate websites: Bulma CSS can be used to create professional-looking corporate websites. It provides various CSS classes, typography, and layout options to create an attractive yet functional website
  • Landing pages: Bulma CSS is an excellent choice for creating landing pages optimized for conversions. It provides a variety of components, such as modals, forms, and buttons
  • Web applications: Bulma CSS can be used to build responsive and mobile-friendly web applications. It provides a range of components, such as tabs, accordions, and pagination that can be used to create a user-friendly interface

Comparing Bulma with Tailwind, Bootstrap, and Chakra UI

There are many CSS frameworks other than Bulma with their own strengths and ideal use cases. Let’s dive into various points of comparison to help you better understand and assess the similarities and differences between Bulma and alternatives like Tailwind, Bootstrap, and Chakra UI.

Learning curve

  • Bulma CSS: Relatively simple learning curve due to intuitive class names and straightforward documentation
  • Tailwind CSS: Steeper learning curve due to extensive use of utility classes, requiring time to master and understand practical usage
  • Bootstrap: Moderate learning curve with comprehensive documentation and resources available
  • Chakra UI: Moderate learning curve focusing on accessibility and theming, supported by clear documentation

Documentation and resources

  • Bulma CSS: Bulma offers comprehensive documentation with examples, tutorials, and guides to help developers get started quickly
  • Tailwind CSS: Tailwind CSS provides extensive documentation, including video tutorials, official plugins, UI kits, and vibrant community contributing resources and guides
  • Bootstrap: Bootstrap features extensive documentation with numerous resources, tutorials, themes, and support forums to assist developers at all levels
  • Chakra UI: Chakra UI is well-documented, focusing on accessibility and theming support, providing resources and guides for developers to create accessible and visually appealing interfaces

Performance

  • Bulma CSS: Bulma is lightweight and efficient, but it may not be as optimized for utility classes as Tailwind CSS, which could result in slightly larger CSS file sizes
  • Tailwind CSS: Tailwind CSS is highly optimized for utility classes, allowing for precise styling without needing custom CSS. However, this can sometimes result in larger CSS file sizes
  • Bootstrap: Bootstrap offers a moderate file size and is optimized for performance, with built-in utilities for common design patterns. It strikes a balance between utility and performance
  • Chakra UI: Chakra UI is optimized for performance, focusing on accessibility and a comprehensive component library

Customization

  • Bulma CSS: It offers customization by editing its CSS/Sass files and theming support
  • Tailwind CSS: Highly customizable with extensive utility classes, enabling rapid prototyping and extensive customization
  • Bootstrap: Provides customization through variables and mixins, allowing modification of colors, fonts, and other design aspects
  • Chakra UI: Highly customizable with theming support and a prop-based API, enabling easy customization of components

Features

  • Bulma CSS: Known for being modular and responsive, it provides a flexible foundation for creating modern web pages
  • Tailwind CSS: This stands out because it offers many “utility classes” that allow web developers to quickly create and customize web pages without needing to write extra CSS
  • Bootstrap: Famous for its pre-designed components and inbuilt utilities, allowing developers to create consistent and feature-rich designs
  • Chakra UI: Designed to be accessible and easy to use. It offers a library of components that are consistent with design systems

Accessibility

  • Bulma CSS: Basic accessibility features may require additional customization for specific standards
  • Tailwind CSS: Includes accessibility features in utility classes, but developers must ensure proper HTML semantics
  • Bootstrap: Prioritizes accessibility with built-in features and guidelines, including ARIA roles and keyboard navigation
  • Chakra UI: Strong emphasis on accessibility with built-in features and guidelines, ensuring accessible designs

Community

  • Bulma CSS: Bulma has an active community with a decent-sized user base that provides contributions, support, and resources
  • Tailwind CSS: Tailwind CSS boasts a large and rapidly growing community with extensive resources, plugins, and support available
  • Bootstrap: Bootstrap has a massive community with extensive documentation, support forums, resources, and themes, making it highly accessible to developers
  • Chakra UI: Chakra UI has a growing community focusing on accessibility and design systems, offering support and resources for developers

Summary table

| Feature | Bulma CSS | Tailwind CSS | Bootstrap | Chakra UI |
| --- | --- | --- | --- | --- |
| Learning curve | Relatively simple learning curve with intuitive class names | Steeper learning curve due to extensive utility classes | Moderate learning curve with extensive documentation | Moderate learning curve with focus on accessibility and theming |
| Documentation and resources | Comprehensive documentation with examples and tutorials | Extensive documentation, including video tutorials, official plugins, and UI kits | Extensive documentation with numerous resources, tutorials, and themes | Well-documented with focus on accessibility and theming support |
| Performance | Lightweight and efficient, but not as optimized for utility as Tailwind | Highly optimized for utility classes, but can result in larger CSS file size | Moderate file size, optimized for performance with built-in utilities | Optimized for performance with built-in theming and component library |
| Customization | Highly customizable with variables and theming support | Highly customizable with utility classes | Customizable through variables and Sass mixins | Highly customizable with theming support and prop-based API |
| Features | Modular components and responsive design | Extensive utility classes for rapid prototyping | Comprehensive set of components and built-in utilities | Accessible component library with focus on design systems |
| Accessibility | Accessibility features may require additional customization | Accessibility features built into utility classes | Accessibility features built into components | Strong focus on accessibility with inbuilt features and guidelines |
| Community | Active community with a decent-sized user base and contributions | Large and rapidly growing community with extensive resources and plugins | Massive community with extensive documentation, support, and resources | Growing community with focus on accessibility and design systems |

Overall, each CSS framework offers its own strengths and caters to different preferences and project requirements. Developers should evaluate their specific needs and priorities when choosing a framework for their projects.


Conclusion

Bulma CSS is a lightweight and modern CSS framework that simplifies frontend development by providing customizable, responsive, and pre-designed UI elements. It’s easy to use and has a shallow learning curve, making it an excellent choice for beginners.

Bulma’s strengths lie in its performance, ease of use, and community support. However, it might not be the best choice for highly customized designs or projects that require a lot of JavaScript functionality.

Overall, Bulma is a powerful and versatile framework that can help you create beautiful, responsive, and modern websites.

The post Bulma CSS adoption guide: Overview, examples, and alternatives appeared first on LogRocket Blog.

Permalink

Ep 117: Pure Understanding

Each week, we discuss a different topic about Clojure and functional programming.

If you have a question or topic you'd like us to discuss, tweet @clojuredesign, send an email to feedback@clojuredesign.club, or join the #clojuredesign-podcast channel on the Clojurians Slack.

This week, the topic is: "pure data models". We find a clear and pure heart in our application, unclouded by side effects.

Our discussion includes:

  • What is the heart of a Clojure application?
  • Pure data models!
  • What is a pure data model?
  • Why do we use pure data models?
  • How do they compare to object-oriented data models?
  • Where do you put pure data models? How do you organize your code?
  • How pure data models avoid object-oriented dependency hell.
  • How do pure data models help you understand the codebase quickly?
  • Why does a codebase become easier to reason about by using pure models?
  • How do pure models fit into the overall application?
  • How do pure models relate to state and I/O?
  • Examples of pure models

Selected quotes

It's functional programming, so we're talking about pure data models! That is our core, core, core business logic.

A pure data model is pure data and its pure functions. No side effects!

We already have a whole set of Clojure core functions to operate on data, so why would we have functions that are associated with just this pure data? Because you want to name the operations, the predicates, and all the other things to do with this data, so that you, as a human, understand.

Those functions are high-level vocabulary that you can use to think about your core model. They are business-level functions. They are super-important, serious functions.

We don't like side effects, so we define an immutable data structure and functions that operate on that data. They cannot update that data. They can't change things in place. They always have to return a new version of it.

At a basic level, you have functions that take the data. They give you a new data tree or find something and return it.

We like having the app.model namespace. You can just go into the app/model folder and see all of the core models for the whole application. Any part of the application can have access to the model.

The functions are the interface. All you can do is call functions with pure data and get pure data back. You can't mess anything up except your own copy.

It's just a big pool of files that are each a cohesive data model. They're a resource to the whole application, so anything that needs to work with that data model will require it and have all the functions to work with it.

With pure models, there's no surprise!

In OO, the larger these object trees get, the more risk there is. Any new piece of code, in the entire codebase, has access to the giant tree of objects and can mess it up for everything else.

Pure models lower your cognitive load. The lower the load is, the more your brain can focus on the actual problem.

You can read the code and take it at face value because the function is 100% deterministic given its inputs. If it's a pure function, you don't have to wonder what else is happening.

The model directory is an inventory of the most important things in the entire application. Here are all the things that matter. As much code as possible should be in pure models.

Look at the unit tests for each pure model to understand how the application reasons and represents things. It's the very essence of the application.

A lot of times in functional communities, we say "keep I/O at the edges." Imagine one of these components is like a bowl. At the first edge, there's I/O. In the middle is the pure model goodness. On the other side is I/O again.

None of the I/O is hidden. That's the best part. Because I/O isn't hidden behind a function, it's easier to understand. Cognitive load is lower. You can read the code and understand it when you get back into it and you're fixing a bug.

The shallower your I/O call stacks are, the easier they are to understand.

Where there are side effects, you want very, very shallow call stacks, and where there are no side effects, and you can unit test very thoroughly, you don't have to worry about the call stack as much.
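The quotes above can be sketched as a tiny model namespace. This is a hypothetical illustration following the app.model convention mentioned in the episode, not code from the show; the cart domain and all names are assumed:

```clojure
;; Hypothetical sketch of a "pure data model": plain data plus pure
;; functions. Nothing here performs I/O or mutates state.
(ns app.model.cart)

(defn add-item
  "Returns a NEW cart with item added; the input cart is untouched."
  [cart item]
  (update cart :items conj item))

(defn total
  "Business-level vocabulary: derives a value from the data."
  [cart]
  (reduce + 0 (map :price (:items cart))))

;; Usage: every "change" is a new value.
(def cart  {:items []})
(def cart' (add-item cart {:sku "widget" :price 10}))
;; (total cart')  => 10
;; (total cart)   => 0, the original is unchanged
```

Because the functions are deterministic, the whole namespace can be unit tested without any setup or mocking.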

Permalink

Clojure 1.12.0-alpha12

Clojure 1.12.0-alpha12 is now available! Find download and usage information on the Downloads page.

Functional interfaces

Java programs define "functions" with Java functional interfaces (marked with the @FunctionalInterface annotation), which have a single method.

Clojure developers can now invoke Java methods taking functional interfaces by passing functions with matching arity. The Clojure compiler implicitly converts Clojure functions to the required functional interface by constructing a lambda adapter. You can explicitly coerce a Clojure function to a functional interface by hinting the binding name in a let binding, e.g. to avoid repeated adapter construction in a loop.
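As a rough sketch of the new behavior (assuming a 1.12 alpha with CLJ-2799; `computeIfAbsent` is just a convenient `java.util.Map` method that takes a `java.util.function.Function`):

```clojure
;; Sketch, assuming Clojure 1.12.0-alpha12+: a Clojure fn can be passed
;; directly where a Java method expects a functional interface.
(import '(java.util HashMap)
        '(java.util.function Function))

(def m (HashMap.))

;; The compiler implicitly builds a lambda adapter for the Clojure fn.
(.computeIfAbsent m "answer" (fn [_k] 42))

;; Hinting a let binding coerces explicitly, so the adapter is
;; constructed once rather than on every iteration of a loop.
(let [^Function f (fn [k] (count k))]
  (doseq [k ["a" "bb" "ccc"]]
    (.computeIfAbsent m k f)))
```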

See: CLJ-2799

Other changes

Added:

  • CLJ-2717 - nthrest now returns rest output on n=0 or past the end of the seq

  • CLJ-2852 - Updated all deps, test deps, and plugin versions to latest

Reverted:

  • CLJ-2803 - #inst printer - no longer uses a ThreadLocal formatter

Permalink

Proper way to compare numbers in Clojure

Code

;; number_comparison_right_way.clj

(= 7 7)              ; => true
(= 7 7.0)            ; => false (long and double are different categories)

(type 7)             ; => java.lang.Long
(type 7.0)           ; => java.lang.Double

(= 7 7N)             ; => true (both are integer types)
(type 7N)            ; => clojure.lang.BigInt

(= 7.0 7N)           ; => false
(type 7M)            ; => java.math.BigDecimal
(= 7.0 7M)           ; => false
(= 7 7M)             ; => false (integer vs. decimal category)

(= (double 7) 7.0)   ; => true

(== 7 7.0)           ; => true (== compares numeric value across types)
(== 7 7.0M)          ; => true
(== 7.0 7M)          ; => true

(== Math/PI (/ 22 7))  ; => false (22/7 only approximates pi)
(== 7N 7M 7 7.0)       ; => true
(= 7N 7M 7 7.0)        ; => false

Notes

Permalink

Lisp: simplifying my problems through code

I struggled to fix bugs in my custom bilingual alignment tree. Unlike standard trees from libraries like React or Tcl/Tk, this tree has unique data manipulation requirements. Most of the bugs stemmed from these data handling complexities, not from the visualization itself. This is just one example of the challenges I faced. I encountered similar issues repeatedly, for instance, when building directed acyclic graphs (DAGs) for word breakers.

I've applied various programming techniques I've learned, but unfortunately, none of them seem to solve my problem. These techniques included tracking my bathroom habits (yes, you read that right!), coding in Ruby and Python with OOP, using static checking in Java, C++, and Rust, and applying a functional programming approach like Haskell's function coloring.
Surprisingly, the answer came when I started working with Lisp.

Non-destructive data transformation

I initially coded in LOGO and Racket (DrScheme), both members of the Lisp family, but those experiences were limited to toy problems. As a result, I didn't grasp the true benefit of Lisp until I revisited it by re-implementing part of my word breaker in Racket. Racket, like other Lisp dialects, relies heavily on singly-linked lists, the default data structure in Lisp. Unlike the array-based lists common in other languages, singly-linked lists manipulated with the CONS function avoid destroying existing data: constructing a new list based on an existing one doesn't require copying the original, which makes it ideal for frequent operations on large datasets. CONS itself runs in constant time, further enhancing efficiency.

With Lisp, I can focus on using CONS for data manipulation without worrying about less important things like class design and function coloring. And since Lisp avoids destroying data, I can use the interactive environment conveniently without reloading data from files. While I acknowledge some past frustrations, using Lisp has finally allowed me to achieve what I've been striving for for years. Adopting this functional programming approach can also be beneficial in other languages, although limitations exist; these are discussed in the section on building collective team value through programming language choices.
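A minimal sketch of what non-destructive CONS means in practice (shown here in Clojure syntax; an assumed illustration, not code from the word breaker):

```clojure
;; Sketch: consing onto a list shares the original tail rather than
;; copying it, and the original list is never modified.
(def base '(2 3 4))
(def extended (cons 1 base))   ; => (1 2 3 4)

;; base is untouched, and extended's tail IS base, structurally shared:
;; (rest extended) returns the very same list object.
(identical? base (rest extended)) ; => true
```

This sharing is why building many variants of a large list stays cheap: each `cons` is O(1) regardless of the list's length.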

Sustainability

I've been coding for a long time, starting with Python 1.6. As Python evolved through versions 2.X and now 3.X, I've seen the benefits of new features alongside the challenges of breaking changes. While Python's rapid development is great for startups, it can make long-term maintenance difficult for projects with strict stability requirements, such as those for governments or foundations.

Similar challenges exist in other languages. While Java boasts strong backward compatibility, the vast developer landscape can lead to codebases with varying levels of modernity (pre-generics, pre-Java 8, etc.), necessitating additional maintenance efforts. Rust, while powerful, shares Python's characteristic of rapid evolution.

The Stability of Common Lisp

Like other languages, Lisp has evolved over time. However, Common Lisp, the standardized dialect established in 1994 (with Lisp's roots dating back to 1959), offers a key advantage: stability. Code written in Common Lisp is less susceptible to breaking changes due to language updates. Additionally, powerful macros allow for the creation of new syntactic constructs at the library level, without modifying the core language specification that impacts everyone.

Furthermore, my observations of Common Lisp projects on GitHub and elsewhere reveal a remarkable consistency in coding style, even when developers aren't actively collaborating. This suggests a strong community culture that prioritizes code clarity and maintainability.

Building collective team value through programming language choices

Since most of the problems I encounter seem universal, I believe adopting a Lisp-style coding approach could benefit the entire team. However, a complete language switch might not be realistic for many developers and organizations. As a compromise, I attempted to implement Lisp-style coding in Java, but this proved unsuccessful. While elements like parentheses, CAR, CDR, and macros are easy to identify and often the target of complaints, they're not the main obstacles.

However, an experienced programmer once mentioned the difficulty of building complex programs in LOGO despite its easy start. This suggests that the core challenge might lie in the Lisp programming paradigm itself. Introducing Lisp concepts into Java often leads to endless debates rather than progress, even with extensive explanations. However, for team members who grasp the Lisp paradigm, overcoming the hurdle of parentheses becomes a less significant issue.

Searching for how to accomplish something in Java won't yield results written in Lisp style, which can lead to frustration and confusion for developers wondering if their approach is incorrect. In contrast, searching for the same task in Lisp will provide solutions that leverage proper Lisp idioms. This continuous cycle of coding and searching in Lisp builds the common value of the team.

Clojure

While Common Lisp offers a rich ecosystem for web development (front-end, back-end), and mobile apps, some developers may find its toolset less extensive compared to popular languages like JavaScript, Go, or Java. Clojure, a dialect of Lisp, addresses this by seamlessly integrating with Java/Scala/Kotlin libraries. ClojureScript provides a similar bridge to the JavaScript world, and ClojureDart is under development for Dart environments. Beyond its powerful concurrency features and data structures, Clojure and its sub-dialects make Lisp a viable option for building modern applications. A prime example is Nubank, a major online bank with over 100 million customers, which demonstrates the power of Clojure for large-scale applications.
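As a small illustration of that Java integration (an assumed example, not from the post): Clojure can call a standard Java library directly, with no wrapper layer.

```clojure
;; Sketch: java.time used straight from Clojure via interop.
(import 'java.time.LocalDate)

(def start (LocalDate/of 2024 1 1))
(str (.plusDays start 30)) ; => "2024-01-31"
```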

Permalink

Clojure Corner with Mauricio Szabo

This time we have Clojure enthusiast Mauricio Szabo in our Clojure Corner. He is currently working on a successor to Atom.

Ela Nazari: Well hello everyone, I’d like to thank Mauricio Szabo for accepting being guest of our Clojure Corner. Allow me to introduce Jiri Knesl the founder of Flexiana and Gustavo Valente our cool Clojurists who will run Corner this time.

Mauricio Szabo:  Hi folks, I’m Maurício, thank you for inviting me!

Gustavo Valente: Fala Mauricio. I am Gustavo.

I know that you are writing an editor, but before that, can you tell us a little about you. Like, what is your background, and what was your journey before meeting Clojure?

Mauricio Szabo: Hola, Gustavo! Yes, I am writing an Editor (kinda – more on that later).

Ok, so my background is interesting – I say that my life starts to become more “random” the more I tell, but let’s start on that – I was born in Brazil, São Paulo, and for the last three years I’m living in Uruguay, Montevideo.

So, since my first computer (a 486-SX) I wanted to make software programs (which, at the time, I didn’t even know what it was – the term “programming” wasn’t even common then, I just asked my mother how I could generate an .exe file). A friend of mine was studying QBasic, and I basically taught myself how to write simple stuff, then more complex things, up to the point where I had tried lots of languages – VisualBasic, Delphi, C/C++, Java – and I ended up in… Ruby. Which was amazing (at the time) and somehow felt closer to my thought process (fewer abstractions, more “right to the point”, etc.).

At the time, I heard a lot of people talking about LISP (including my teacher, who eventually became a good friend) and I wanted to try some LISP implementation. It was a disaster. I could not understand why people liked the language so much, and I spent a couple of years trying some LISP flavor, giving up, trying again, etc. Until I found LightTable, which seemed even closer to what I wanted to achieve (a good way to explore software programs without having to “play the computer” in my head). So I decided to try it with Ruby, but I quickly migrated to Clojure because the experience was better.

LightTable was complicated and had few plug-ins (I tried to write some, for example, to integrate Parinfer on it) but ended up trying to replicate the LightTable experience in my editor of choice – Atom – with the plug-in called ProtoREPL. I found the experience so amazing that I ended up contributing to the project, then doing some extensions, and finally I migrated my stack to Clojure.

Jiri Knesl: Hi Mauricio, how are you?

Why do you think people migrate from Ruby to Clojure?

Mauricio Szabo: Wow, interesting question!

I migrated because I wanted to explore interactive programming – I used to say that I migrated from Ruby to LightTable, and Clojure just appeared because LT worked better with Clojure.

As for friends I know, it’s usually because of Rails. It’s hard to use Ruby without it – not impossible, but hard. So if you need something different from “a web API rendering JSON / a server-side-rendered web app”, Rails shows its limitations. Because of the JVM, and because there’s no “one true framework” in Clojure, I believe it’s easier to avoid these problems.

So, it’s easy to bootstrap a web app with Ruby and Rails, and harder with Clojure, but it’s harder to modify a Ruby app when it starts to grow (and even harder to extract fragments into different services).

Jiri Knesl: You’re absolutely right with the fact it is often difficult to extract parts of Rails app into non-Rails code with monkey patching, Active Support, etc. When I interview candidates to Flexiana who are moving from Ruby to Clojure, it’s a common theme. You change something in one part of your app and it breaks elsewhere because of hidden relationships and conventions.

At the same time, some things can be done extremely productively in Rails, or even Sinatra. Devise gem, Turbo, Active Admin, and others can make development very fast. In Xiana, we are trying to close this gap. Have you seen our framework already?

Mauricio Szabo: There are projects that would never exist if it wasn’t for Rails, so there’s a balance here. As an example, I would probably not start a Clojure project if I knew all I had to do was a simple web page.

And unfortunately, no, I haven’t seen the Xiana framework. To be fair, I haven’t seen any framework in Clojure – most of the projects I worked with (including my personal ones) are a little unconventional so I don’t think a framework would help (in my case).

Probably the closest I got into frameworks in Clojure was to try to write my own as an exercise (literally, just an exercise) and the combination of re-frame and reagent in the ClojureScript side.

Gustavo Valente: It looks like “fate” brought you to Clojure – I mean, if LightTable had been written in Smalltalk, you would still have gotten your interactivity without Clojure.

But my question now is, what hooked you with Clojure? What do you like and dislike about it?

We know that Clojure is a niche programming language; what do you think is missing for Clojure to gain more traction? I mean, why do you think Clojure did not storm the JVM world, given that it is simpler and easier to code with?

Mauricio Szabo: Well, kinda. I did try Smalltalk previously, but the language didn’t “click” with me. Maybe it’s because it felt “too different”, or because everything had to be done in this different environment, with different tools, or maybe it’s because everything needs to be encapsulated all the time, who knows? I did try Smalltalk again recently (on the Advent of Code 2023) and again, it didn’t click with me (although I could solve two or three problems with it).

What hooked me in Clojure is how pragmatic the language is. I’m also very pragmatic (maybe even more than what’s considered “normal” or “sane” to be honest) so the whole “evaluate something, get the intermediate results on it, then iterate over it” was such an amazing feeling!

To the point it even broke my old TDD workflow. I had to re-train myself to write better tests, because I was starting to neglect my testing (I think I found a sweet spot now, but I’m studying a way to have an even sweeter spot with my plug-in/editor).

Clojure will probably never storm the JVM world because people are prejudiced against LISP languages. For me, it’s actually easier to write right now but I remember that in the beginning, it wasn’t easy (Parinfer helped – a lot – more than Paredit, to be honest).

Another problem I see with Clojure (and especially with ClojureScript) is what I like to call the “isolated genius”. As an example, functions in Clojure are not interchangeable with Java anonymous functions/closures (added in recent JVMs); interop support works well sometimes, but not always (it’s hard to make a Spring app with Clojure, for example, because of annotations and other stuff).

The problem is actually way worse in ClojureScript. There’s no easy way to integrate with the whole JavaScript world, so writing Solid.JS, Svelte, etc. is not easy or needs some non-trivial wrapping; no support for JS classes, template strings, etc. also complicates adoption.

I think it’s hard to sell Clojure as a language for people that are not willing to learn (and there’s a lot of people like that) – that’s probably why Kotlin exists and it’s popular (because it is closer to Java than Scala, for example). I don’t honestly see a way around this, but I know that accepting that these other practices exist in the JVM / JS world (annotations, the whole bundler/transpiler/babel stuff in JS) and somehow add them to Clojure(Script) via macros or some helpers might help adoption.

(I imagine a world where I can pick up a Spring Boot, VertX app in Java and somehow add Clojure code to it, and start to interactive develop the migration from one language to another without needing to wrap stuff, have multiple compilation steps, etc…).

Jiri Knesl: “To the point it even broke my old TDD workflow. I had to re-train myself to write better tests, because I was starting to neglect my testing (I think I found a sweet spot now, but I’m studying a way to have an even sweeter spot with my plug-in/editor).”

Would you mind elaborating on this?

Mauricio Szabo: Sure – about my TDD workflow: in Ruby, I used to write a test so that I could quickly run some code fragment. In Clojure, I can just… run the code. I don’t actually need a test because my data, my parameters, everything is still in the REPL, and I’m connected to my app, so my “seed data” is already there. I ended up writing a lot of code without writing the tests first. Which is kinda fine if you remember to write the test later; unfortunately, my experience is that if I write the test later, I might miss some branch, some special treatment, etc. that I captured in the REPL. Fast-forward to two hours later, when I have to add a new case for that specific function, see a very complicated piece of code, refactor it thinking “what was I doing?”, and then things break in the app.

Then, I remember ah, this very complicated code is because there’s an edge-case indeed.

Another thing is that we usually don’t do the second step of testing: writing a test and seeing it fail. It’s probably the most misunderstood concept in automated testing. My experience is that seeing the test fail means we’re indeed testing the right thing – how many times have I written a test, run it, and had it pass when it shouldn’t… then I figured out I was not testing the right thing.

A classic example is authorization – suppose you want to return a 404 error when the user doesn’t have access. If you write a test and it immediately passes, you’re probably using the wrong URL, for example.

(and that is even worse if the code is already written and you’re just covering with tests – you can assume that the test is passing because the code is working, and even if the code is indeed working, you might be testing the wrong thing).

Jiri Knesl: “The problem is actually way worse in ClojureScript. There’s no easy support to integrate with the whole JavaScript world, so writing Solid.JS, Svelte, etc. is not easy or needs some non-trivial wrapping; no support for JS classes, template strings, etc. also complicates adoption.”

Yes, but isn’t it the price we pay for Clojure-like experience (mainly persistent data structures)?

Of course, you could just build a lisp that compiles to JS, but Clojure is not just a lisp. It’s a runtime too and this runtime is built on different assumptions than JavaScript is.

Mauricio Szabo: As for this one, more or less. We could have proxy or defclass or gen-class for JavaScript too – Shadow, for example, has support for JS classes and template strings.

It should also be possible to put the ClojureScript compiler “in the middle” of a JS tool – like, compile the ClojureScript code, generate something intermediate (maybe JSX, maybe even Svelte), and then feed that into another tool, generating the final JS in the process, with the code to hot-reload the runtime.

By the way, this is kinda possible with Shadow-CLJS hooks, but maybe it doesn’t generalize well enough…

src/babble/core.clj · trying-to-use-hooks · Maurício Szabo / babble · GitLab

Jiri Knesl: Better interoperability might help with writing JS code in Lisp. But how could you retain the value of STM while interacting with mutable data structures that are the default in JS ecosystem?

Mauricio Szabo: I don’t think ClojureScript needs to embrace the native data structures – the interop with the mutable stuff in JS is honestly good in my experience. It’s more about offering some way to integrate with the compilers and bundlers, supporting full JS syntax (and maybe having a way to emit JSX), and importing arbitrary JavaScript.

As of today, we can’t emit class or function*, we don’t have async or await, we can’t correctly use things like StyledComponents – and that’s just talking about unsupported syntax… we also have unsupported JavaScript that can’t be compiled into a single bundle because the Closure Compiler doesn’t support it.

That being said, I’m putting a lot of hope in Cherry – as soon as it has a REPL (and maybe hot reload), I really want to drive it with some ideas.

Jiri Knesl: Today, jQuery 4 came out. One of my colleagues was joking that a combo of cherry+jquery+hiccup is an inevitable framework someone will build. Also, HTMX is getting quite a lot of hype.

And when you look at ClojureScript, it’s mainly a land of React wrappers.

What does the future of frameworks using Clojure on the frontend look like to you?

Mauricio Szabo: I don’t have many experiences with Clojure on frontend, honestly, so I don’t know. And yes, I agree that it’s a land of React wrappers – I actually wanted to test Membrane and Arborist, but I needed a way to interop with some JS libraries and React was the best bet.

But even now, on Chlorine, I’m even rethinking React (Reagent in this case) – I figured out I “defaulted to React” without a real need, and things are harder than they needed to be.

By the way, if I might add, even with all these downsides (from my point of view, at least), ClojureScript is still my “go-to” choice for writing something in JavaScript. The ability to use the REPL and have hot-reload capabilities inside a JavaScript environment (no matter how hostile – I got it working while making a NeoVim plug-in, for example!) is insane, especially when we consider that in JavaScript we usually have a lot of internal state, mutability, transformations happening in the UI, etc.

Jiri Knesl: Have you tried using Cherry in any projects already? How’s the experience different from Cljs?

Mauricio Szabo: So, I tried it a little bit; it’s close to CLJS with some exceptions (some bugs I had that are fixed by now). It’s interesting because I could use some stuff that needs a lot of hacks in Shadow, but my workflow is hugely dependent on the REPL (and Cherry doesn’t have a REPL yet – Squint does, but I don’t know how powerful it is), nor does it have a hot-reload workflow, so I ended up staying with ClojureScript only for now.

Babble is not yet battle-tested, but I want to try it in a project too – maybe we can exchange experiences later?

Gustavo Valente: One thing I want to ask about is AI. How are you seeing all the AI hype, specifically regarding Clojure and ClojureScript? What do you think Clojure/ClojureScript could gain from it?

Mauricio Szabo: Well, who hasn’t, right? For now, I feel it’s hype indeed, but it worries me, to be honest – not that it will replace my job, but that it will normalize bad code.

At the same time… I feel bad code is already normalized, so maybe I’m just nitpicking.

For Clojure and ClojureScript, specifically, I don’t know. Not in the direction that AI is going.

For multiple languages, I feel that these LLMs could be used as a “support level 1” or even a “documentation layer” – I don’t know how possible this is, or even if it is possible, but the idea could be to train the AI with some organizational repository, or a couple of projects, and be able to ask AI things like “I want to authenticate against OAuth, which function do I use?”

Another thing I wanted was to not drive AI via comments, but via tests – write the test, and ask it to generate the implementation. I feel it’s less prone to errors this way, honestly. The opposite (what people are doing right now – writing the tests when you already have the code) is, in my opinion, one of the worst possible usages of LLMs – we already have bad tests in the wild: tests that check internal state, that test the wrong thing, that are needlessly complicated or hard to read. Automating the writing of these bad tests is definitely not the right path – especially because it’ll normalize another bad practice that I see happening a lot: when someone wants to change a behavior, they change the code, see all the tests that are failing, delete/comment them, and write new ones.

In this sense, tests are basically meaningless. Instead of looking at failures to see if they are valid, everything is just thrown away. Now imagine if we had a way to automate this process.

Finally, the direction I wanted to see AI going for Clojure[Script] (and LISPs in general) is to write code in a meaningful way. Like, write an implementation that works using, as building blocks, the actual functions that we have in the language. Things like: if I ask for a tool to concatenate strings, the AI knows which functions operate on strings, then tries to compose a function that actually does what I want – and then it starts a shrink process, to generate the smallest possible solution that satisfies what I asked for.

I remember when I studied a little bit about genetic algorithms, there was something related to that – generating “LISP forms” that operated on the parameters and then trying to get the best solution. It worked, but for simple things like a math problem it took a long time and usually didn’t get it right. I don’t know if that’s possible today, with all the advancements and such.

Gustavo Valente: And editors… Tell us about your journey into building an editor, and WHY?? Yeah, I am a big fan of Emacs, and I was wondering: why not extend it instead of building another one? Tell us a bit about Pulsar – the main features it has that other editors don’t, etc.

Mauricio Szabo: Well, it’s a loong story. But it starts with NeoVIM and my work on making plug-ins for it.

Then, after seeing the insanity that it is to write them, I tried Sublime. But again, the whole plug-in world in Sublime is insane – it’s in Python, and if Python doesn’t have proper built-in module support, imagine what that’s like inside an editor.

So I tried Atom. And honestly, I loved it. It was slow, had a horrible typing lag, but the plug-in system was simply amazing so I made my own collection of plug-ins, to the point I decided to use it for Clojure too – first by helping the author of the old Clojure plug-in proto-repl, then adding my own config on top of that, and finally where I am today – with Chlorine.

And then, Microsoft happened. 

When they bought GitHub, we all feared for the future of Atom, but fortunately, GitHub’s future CEO said that they would keep Atom.

Which, of course, was a lie. Atom stopped gaining features, and only the most trivial bugs were fixed. Still, it was a good editor for me, but it started to show signs of being abandoned. So I decided to scrap what was still useful from Atom and integrate it with some newer tools, and created a new project called Saturn – initially, the idea was to support both the Atom API and the VSCode one (a bold decision, I know).

But then, in December 2022, Atom was officially dead. The idea of keeping an Atom API was now in doubt, because no more plug-ins would be added to it, and Chlorine also had a VSCode version. My idea then was to implement only a VSCode API, but add some “extension points” so that I could have an improved “hackable” experience.

Just for fun, I tried to ping the atom-community discord, and some people showed interest in keeping a fork of Atom. One was already working on reverse-engineering the backend so that package installation would still work.

That gave me some hope, so we ran some polls for a new name, tried some logos, and worked on some rebranding. But because we had huge disagreements about the direction atom-community was heading (basically, bundling some packages we didn’t agree with, disagreement with the name that won the poll, and also because they wanted to migrate everything to TypeScript instead of keeping only JavaScript), we decided to make a new fork, and Pulsar was officially born.

So it’s not like I made something from scratch; mostly, I helped save Atom from total extinction.

As for why not use Emacs – I don’t really like, or understand, Emacs. I always seem to get stuck in some intermediate keybinding, mode, or whatever that I don’t know how to exit, so in a way it feels like I’m always fighting the editor. There are some good distributions like Doom Emacs or Spacemacs, but they are heavily opinionated. Also, JavaScript – I feel the async model of JavaScript works really well for plug-ins in an editor. The fact that creating UI elements is literally just JavaScript + CSS + HTML is a huge win, and it also means I can re-use anything that already exists for the web without any “conversion layer”.

As for the things Pulsar has that other editors lack – well, being insanely customizable and hackable is one. The UI can be changed a lot (because it’s all HTML + CSS), and the plug-in API is insanely good – you can basically change the UI in any way you want (at the risk of breaking the editor, sure), and you have extension points that allow you to put HTML elements in almost any position.

A simple example is inline and block decorations – at any point in the editor, one can add an inline or block decoration and include HTML in it. Any HTML – generated via some string, or via DOM APIs.

Like this one, that generates a better review of tests that fail.

VSCode, for example, is a walled garden. You do have some APIs that you can extend, but you can only do that on the terms they support. There are some inline/block decorations, but they only accept text or markdown, or only appear on hover, etc. They are finely crafted to customize the functionality VSCode already has, not to add a whole different thing.

And finally, packages. The way one package can communicate with another in Pulsar is simply mind-blowing to me: we have what is called a “Service”.

A Service exposes an API, and it’s composed of providers and consumers. A simple example is autocomplete – bundled with Pulsar is a package called autocomplete-plus that consumes the autocomplete service. Then you have multiple providers of that service: there’s a built-in one that checks what’s in your editor and suggests words for it, scored by how close they are; there’s Chlorine, my own plug-in, that provides suggestions using runtime information that comes from the REPL; and there’s LSP, which provides suggestions based on the LSP protocol.

If one doesn’t like the built-in autocomplete, that person can disable the built-in package and install a different package that provides the same service. And everything magically works!

So even though autocomplete is built-in, it’s customizable – and in a way that doesn’t need any change from the providers of suggestions. That is actually huge, and I have yet to see another editor with the same level of customization without needing to understand the internals of the codebase.

Jiri Knesl: BTW, what are you working on right now?

Mauricio Szabo: I’m heavily focused on getting a full refactor of Chlorine right now.

The original idea of Chlorine was to offer a great plug-in for the Atom editor written in ClojureScript, and support for Socket-REPL. The second idea was, frankly, a mistake – socket-repl never got any traction, not even with the prepl layer, while nREPL evolved a lot. Eventually, most of my Chlorine work was just to fix Socket REPL stuff, and even then, the experience was sub-optimal.

So now I’m fully committed to nREPL and the Shadow Remote API, and I have some very interesting stuff supported – for example, with ClojureScript, Chlorine now supports selecting the JS environment it’ll run the code in. It also displays compilation errors and warnings, and automatically resolves promises.

What I really want to add is some “tracing support” – like FlowStorm, for example – where I can evaluate some code (or just start the trace and interact with the app for a while) and see what happened: which functions were called, parameters, returns, etc.

And also adapt some data-science library for custom visualizations. I’m interested in Kindly, especially the concept they have of “advisors” – one thing that I really want is to somehow customize the way I can render EDN in the editor.

One possible case is rendering a map with lots of keys – maybe more than 20, more than 50 – and “prioritizing” some keys over others (for example, if I’m developing some code that handles personal information, I’d probably want to focus on :id, :name, maybe :role, or fields like that).

Jiri Knesl: Fantastic, thank you.

I was always curious, why have you picked Atom to work on?

Mauricio Szabo: Mostly, because it was “already there”. I was already a user of Atom, and the community was starting to get organized to keep the editor going, so that gave me the push to keep working on it.

But also, because of the capabilities of the editor to be friendly and to be almost infinitely customizable.

One example I always mention is autocomplete. While it is a core element of the editor, nothing in the core of the editor implements autocomplete – it’s a plug-in. A core plug-in, sure, but still a plug-in.

Now, Atom (and Pulsar, by extension) has this concept called “Services”. Basically, any plug-in – be it a “core” plug-in or not – can define its own API. The API is simple: a plug-in can declare that it’s either a “consumer” or a “provider” of the API – think “client” and “server”, more or less.

So, autocomplete in Atom/Pulsar is a “consumer” – it defines an API so that other plug-ins can contribute “suggestions”. The API is literally a JS object with one to four methods (one is mandatory – the one that actually offers suggestions – and the others are optional).
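As a rough illustration, a provider of that service might look something like the object below. This is a minimal sketch assuming the autocomplete-plus provider shape (a scope `selector` plus the mandatory `getSuggestions` method); the word list and suggestion fields are invented for the example:

```javascript
// Hypothetical provider object, following the shape of the
// autocomplete-plus service API: a scope selector plus the one
// mandatory method, getSuggestions. The word list and suggestion
// fields below are invented for illustration.
const provider = {
  // Which editors/grammars this provider applies to.
  selector: '.source.clojure',

  // Called by the consumer (autocomplete-plus) with the current typing
  // context; returns an array of suggestion objects.
  getSuggestions({ prefix }) {
    const words = ['use', 'user', 'require', 'refer'];
    return words
      .filter((word) => word.startsWith(prefix))
      .map((word) => ({ text: word, type: 'keyword' }));
  },
};

// Roughly how a consumer would call it:
const texts = provider
  .getSuggestions({ prefix: 'us' })
  .map((suggestion) => suggestion.text);
console.log(texts); // [ 'use', 'user' ]
```

The point of the design is that the consumer only depends on this object shape, so any package that produces such objects can plug in without knowing anything about the others.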

And here is the thing that excites me the most: rethink some old ideas.

VSCode, as I said, is a “walled garden” – it is completely opinionated about how things should work.

I don’t like this approach because I feel it limits how things can work. The LSP protocol, for example, is really tied to the idea of “static analysis” of the code, to “IntelliSense” and other stuff. Calva does an amazing job re-thinking how some of these “pre-defined extensions” can be used to show code, but I feel we can go further. I feel we can try to walk “the road not walked” – like, for example, Smalltalk.

To give a concrete example/idea with autocomplete: we currently think of autocomplete as a way to write something as text, get some suggestions, select one, and have it filled in for us. Usually we filter the suggestions using parts of the word, fuzzy-finding, etc. – like, if I type us, it’ll suggest use.

Now, imagine we could do something different. Suppose we could deprecate some function/variable and attach a doc saying “this is deprecated, use require with refer instead”. Then autocomplete could suggest require in place of use – and add a snippet for us with that info.

Or, jumping on the AI hype, we could trigger the autocomplete and, when we choose a suggestion, open a context menu with some additional suggestions on how we would probably use that specific code with the variables we have right now.

On other editors, this needs a lot of work, and if the editor’s author doesn’t agree with you, you’re out of luck.

With Atom/Pulsar, you don’t need anyone’s permission – you can literally disable the core autocomplete, and write your own, implementing the same API, and inform (in the package.json) that it’s a consumer of the autocomplete service.

And the magical thing is that it’ll simply work – every plug-in that contributes suggestions will still work, be it LSP, some REPL-connected plug-in like Chlorine, etc.

Also, despite what people say, Atom was slow only in the beginning – the latest version that was published (1.60) was not slower than VSCode, for example – and it did run on Electron 9, which means Node 10: very old versions of both Chromium and JavaScript.

As for me, I’m now using an experimental branch with Electron 27 (a version from 12 days ago), and it does speed things up quite a bit (considering all the upgrades to Node, Chromium, etc.). The simple fact that Pulsar even runs on this latest Electron is a miracle by itself, considering all the breaking changes I had to fight through to make it work.

Also, on a tangent, I considered for a long time to try and speed up my own editor Saturn when the end of Atom was announced.

Initially, the idea was to support both the Atom and VSCode APIs, but after the announcement I thought about supporting only the VSCode API, while somehow adding some “extension points” to it.

For example – CodeLens. In VSCode, you can have some text between lines in the editor, and if you click on it, some command is called.

The way it works is: you register a “Code Lens Provider” (nothing remotely related to the idea of providers in Pulsar), and you return a Command. The command has a “title” property that will be displayed. That’s it.
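A rough sketch of that shape, in plain JavaScript rather than a real extension (in an actual extension this object would be passed to vscode.languages.registerCodeLensProvider; here the vscode module is omitted, and the range and command id are simplified/hypothetical):

```javascript
// Hypothetical sketch of the CodeLens provider shape described above.
// A real extension registers this via vscode.languages.registerCodeLensProvider;
// here it is plain JavaScript, with ranges simplified to line numbers,
// so only the shape is illustrated.
const codeLensProvider = {
  provideCodeLenses(document) {
    // Each lens carries a range and a Command; the command's `title`
    // is the only thing that gets rendered between the lines.
    return [
      {
        range: { startLine: 0, endLine: 0 },
        command: {
          title: '1 reference',            // the text displayed in the editor
          command: 'demo.showReferences',  // hypothetical command id
          arguments: [document.uri],
        },
      },
    ];
  },
};

const lenses = codeLensProvider.provideCodeLenses({ uri: 'file:///demo.clj' });
console.log(lenses[0].command.title); // prints "1 reference"
```

Contrast this with Pulsar’s block decorations described below: the lens can only surface a command title as text, while a decoration can carry arbitrary HTML.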

In Pulsar, we have “block decorations”. They are insanely more powerful – you can have arbitrary HTML, so you can implement this “Code Lens”, or you can do other stuff: show evaluation results, show a tree, plot something, render a chessboard… anything, really.

If you add a link or a button, you can register that a click on it evaluates arbitrary Node commands – including, for example, calling the Atom API, replacing text, etc. Again, incredibly powerful.

To make this work in my Saturn project, the idea I had was to implement CodeLens but allow people to return arbitrary HTML elements instead of a “command” – that would give me basically the same behavior as Atom, while still being compatible with VSCode.

To be fair, this idea is not 100% dead – I’m trying to implement a VSCode compatibility layer in Pulsar, and I do want to try and implement this “extended API”. But the honest truth is simply that the VSCode API is too big, it’s a moving target, and there is some undocumented/internal stuff that I found out about while trying to make some VSCode plug-ins work…

Jiri Knesl: Let’s move on. If you weren’t working in Clojure, what language/s would you like to learn?

Mauricio Szabo: I want to learn so many languages

I really want to try something with gToolkit (Smalltalk). It sounds really interesting and it seems quite like what I wanted to achieve with Chlorine.

I am a little interested in Rust too, but I’ve been delaying my studies because I don’t really identify that much with static languages; still, I kinda want to experiment with WASM, and Rust seems like the “safe bet” for now.

(Hopefully Jank can also be used in the future).

Jiri Knesl: Speaking of Smalltalk, do you consider Smalltalk’s working with images instead of files an advantage, or the opposite?

Mauricio Szabo: Yeah, it’s complicated. On one side, I would love to not depend on text files, because I do feel they are limiting (usually, we are interested in fragments of the files – for example: what does this specific function do, which functions does it depend on, what are the test cases that exercise this specific function I’m working on now).

I feel we could gain a lot if, instead of having a huge file open with what we’re working on plus other unrelated stuff above and below, we could have the specific function/method/class we’re working on with everything it depends on around it – maybe in a star-like pattern, something like that.

At the same time, everything breaks down if we don’t use files. Smalltalk’s images need to be edited with the tools that are already inside the image – so no “emacs”, no “vim”, no “Pulsar” – not even “vim mode emulation” for example. 

Git integration (at least the one I tried) is horribly complicated, and it risks doing things we don’t want (it’s semi-automated, and I don’t think automating commits, for example, is a good fit for git); at the same time, it seems weird to use git as a version-control mechanism, because we’re not working with files.

Finally – all these ideas I had (functions arranged around in a star-like pattern, editing individual fragments, etc.) kinda exist in Smalltalk, but they are far from straightforward. In my experience it’s not much better than using splits in a text editor, so while ST does use images, I don’t think it’s a better experience.

So, images could be a good thing, if implemented correctly, but the benefits need to vastly outweigh the paradigm shift, and I honestly don’t think they do in Smalltalk (for code editing).

Now, for running code, that’s a different story. 

If we could persist exceptions, for example, with all the bindings we had around the exception and the stacktrace (like, the real stacktrace – a live, breathing object in the image, not a text representation), or have good tracing information (querying the VM to see live objects, interacting with the VM by doing that, etc. – imagine, for example, that I have two functions that do the same thing, one really fast but memory-hungry, the other slower but cheap; I could query the VM to see if there are too many “live” objects, or how many threads are running right now, and decide which function to call), that could be amazing and open up a new kind of execution model.

Jiri Knesl: From my point of view, image-based apps are awesome when you are one person and keep all the changes in your head. In the early stage of hacking on an app, it might work.

For a team, having a text-based project, code review, and keeping track of changes is much easier.

BTW, have you heard the narrative about the genius Lisp developer who wants to build everything himself? I think this might be true for Smalltalk.

Mauricio Szabo: Yeah, I don’t know if that’s a failure with the idea of images or if it’s a limitation of tools that we have.

For example, we don’t have a standard that defines what an image is – where we do have standards for defining text. 

Even in the same language, images are not compatible between implementations – sometimes even between versions! 

I actually have a question: what do you folks think about the future of ClojureScript? Asking because I’m seeing people complaining (rightfully so) about some friction between CLJS and JS, and the lack of tooling (or worse experience with CLJS tools), and the recent boom of Typescript.

Jiri Knesl: I see people really speaking about just two Clojure in JS implementations in Flexiana. The classic Cljs with re-frame and Reagent, which is something we still teach everyone in Flexiana.

And then there’s a lot of interest in ClojureDart with Flutter as an alternative. I suppose for mobile apps we would use it instead of ReactNative. Flutter can be used for WebApps too (which we haven’t tried yet) so I count it as a second option.

Also, to be honest, we have successfully used HTMX a couple of times and moved all logic to backend. For many use cases, that’s great. SPAs are overused. That’s what I see a lot.

When you look at our framework, it’s all backend. We have used re-frame and Reagent, React.js (with or without next.js) and HTMX. That’s what we focus on the most as of now.

While you are waiting for the next Clojure Corner you can read our past Corner with Yehonathan Sharvit.

The post Clojure Corner with Mauricio Szabo appeared first on Flexiana.


Clojure Goodness: Extending is Macro With Custom Assertions

The is macro in the clojure.test namespace can be used to write assertions about the code we want to test. Usually we provide a predicate function as the argument to the is macro. The predicate function calls our code under test and returns a boolean value. If the value is true the assertion passes; if it is false the assertion fails. But we can also provide a custom assertion to the is macro. The clojure.test namespace already ships with some custom assertions, like thrown? and instance?. These assertions are implemented by defining a method for the assert-expr multimethod that is used by the is macro. The assert-expr multimethod is defined in the clojure.test namespace. In our own code base we can define new methods for the assert-expr multimethod and provide our own custom assertions. This can be useful to make tests more readable, using a language that is close to the domain or the naming we use in our code.

The implementation of the custom assertion should call the function do-report with a map containing the keys :type, :message, :expected and :actual. The :type key can have the value :fail or :pass. Based on the code we write in our assertion we set the value accordingly. Usually the :message key will have the value of the message that is defined with the is macro in our tests. The keys :expected and :actual should contain references to what the assertion expected and the actual result. This can be a technical reference, but we can also make it human readable.
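Calling do-report directly shows the shape of that map (the string values here are just illustrative):

```clojure
(ns mrhaki.sketch
  (:require [clojure.test :refer [do-report]]))

;; Minimal sketch of the map a custom assertion passes to do-report.
;; :type is :pass or :fail; :expected and :actual are human readable here.
(do-report {:type     :fail
            :message  "Is it?"
            :expected "R2D2 to be a jedi."
            :actual   "R2D2 is NOT a jedi."})
```

Inside a test run the reported :fail is counted and printed with the :expected and :actual values.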

In the following example we implement a new custom assertion jedi? that checks if a given name is a Jedi name. The example is based on an example that can be found in the AssertJ documentation.

(ns mrhaki.test
  (:require [clojure.test :refer [deftest is are assert-expr do-report]]))

;; Assert that a given name is a Jedi.
;; (defmethod does not support a docstring, so we document it with a comment.)
(defmethod assert-expr 'jedi?
  [msg form]
  `(let [;; We get the name that is the second element in the form.
         ;; The first element is the symbol `'jedi?`.
         name# ~(nth form 1)
         ;; We check if the name is part of a given set of Jedi names.
         result# (#{"Yoda" "Luke" "Obiwan"} name#)
         ;; We create an expected value that is used in the assertion message.
         expected# (str name# " to be a jedi.")]
     (if result#
       (do-report {:type     :pass
                   :message  ~msg,
                   :expected expected#
                   :actual   (str name# " is actually a jedi.")})
       (do-report {:type     :fail
                   :message  ~msg,
                   :expected expected#
                   :actual   (str name# " is NOT a jedi.")}))
     result#))

;; We can use our custom assertion in our tests.
(deftest jedi
  (is (jedi? "Yoda")))

;; The custom assertion can also be used with
;; the are macro as it will expand into multiple
;; is macro calls.
(deftest multiple-jedi
  (are [name] (jedi? name)
    "Yoda" "Luke" "Obiwan"))

;; The following test will fail, so we can
;; see failure message with the :expected and :actual values.
(deftest fail-jedi
  (is (jedi? "R2D2") "Is it?"))

If we run our failing test we see in the output that the assertion message is using our definition of the expected and actual values:

...
 expected: "R2D2 to be a jedi."
   actual: "R2D2 is NOT a jedi."
...

Written with Clojure 1.11.3.


Clojure Goodness: Combine Multiple Test Cases With are #Testing #Clojure

The clojure.test namespace has the are macro that allows us to combine multiple test cases for the code we want to test, without having to write multiple assertions. We provide multiple sets of values for the function we want to test, together with the expected values. The macro then expands this into multiple expressions with the is macro, where the real assertions happen. Besides the data, we must also provide the predicate in which we assert our code under test. A downside of the are macro is that, in case of assertion failures, the line numbers in the error message can be off.
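A minimal sketch of how that looks (the function under test, +, and the values are just illustrative):

```clojure
(ns mrhaki.are-sample
  (:require [clojure.test :refer [deftest are]]))

;; Each row of values is substituted into the predicate expression,
;; so this expands into three separate is assertions.
(deftest addition-cases
  (are [x y expected] (= expected (+ x y))
    1  2 3
    2  3 5
    10 -4 6))
```

The argument vector names the placeholders, the predicate uses them, and the remaining values are consumed row by row.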


Humble Chronicles: The Inescapable Objects

In HumbleUI, there is a full-fledged OOP system that powers lower-level component instances. Sacrilegious, I know, in Clojure we are not supposed to talk about it. But...

Look. Components (we call them Nodes in Humble UI because they serve the same purpose as DOM nodes) have state. Plain and simple. No way around it. So we need something stateful to store them.

They also have behaviors. Again, pretty unavoidable. State and behavior work together.

Still not a case for OOP yet: could’ve been maps and functions. One can just

(defn node []
  {:state   (volatile! state)
   :measure (fn [...] ...)
   :draw    (fn [...] ...)})

But there’s more to consider.

Code reuse

Many nodes share the same pattern: e.g. a wrapper is a node that “wraps” another node. padding is a wrapper:

[ui/padding {:padding 10}
 [ui/button "Click me"]]

So is center:

[ui/center
 [ui/button "Click me"]]

So is rect (it draws a rectangle behind its child):

[ui/rect {:paint ...}
 [ui/button "Click me"]]

The first two are different in how they position their child but identical in drawing and event handling. The third one has a different paint function, but the layout and event handling are the same.

I want to write AWrapperNode once and let the rest of the nodes reuse that.

Now — you might think — still not a case for OOP. Just extract a bunch of functions and then pick and choose!

;; shared library code
(defn wrapper-measure [...] ...)

(defn wrapper-draw [...] ...)

;; a node
(defn padding [...]
  {:measure (fn [...]
              <custom measure fn>)
   :draw    wrapper-draw}) ;; reused

This has an added benefit of free choice: you can mix and match implementations from different parents, e.g. measure from wrapper and draw from container.

Partial code replacement

Some functions call other functions! What a surprise.

One direction is easy. E.g. Rect node can first draw itself and then call a parent. We solve this by wrapping one function into another:

(defn rect [opts child]
  {:draw (fn [...]
           (canvas/draw-rect ...)
           ;; reuse by wrapping
           (wrapper-draw ...))})

But now I want to do it the other way: the parent defines wrapping behavior and the child only replaces one part of it.

E.g., for Wrapper nodes we always want to save and restore the canvas state around the drawing, but the drawing itself can be redefined by children:

(defn wrapper-draw [callback]
  (fn [...]
    (let [layer (canvas/save canvas)]
      (callback ...)
      (canvas/restore canvas layer))))

(defn rect [opts child]
  {:draw (wrapper-draw ;; reuse by inverse wrapping
           (fn [...]
             (canvas/draw-rect ...)
             ((:draw child) child ...)))})

I am not sure about you, but to me, it starts to feel a little too high-ordery.

Another option would be to pass “this” around and make shared functions lookup implementations in it:

(defn wrapper-draw [this ...]
  (let [layer (canvas/save canvas)]
    ((:draw-impl this) ...) ;; lookup in a child
    (canvas/restore canvas layer)))

(defn rect [opts child]
  {:draw      wrapper-draw   ;; reused
   :draw-impl (fn [this ...] ;; except for this part
                (canvas/draw-rect ...)
                ((:draw child) child ...))})

Starts to feel like OOP, doesn’t it?

Future-proofing

Final problem: I want Humble UI users to write their own nodes. This is not the default interface, mind you, but if somebody wants/needs to go low-level, why not? I want them to have all the tools that I have.

The problem is, what if in the future I add another method? E.g. when it all started, I only had:

  • -measure
  • -draw
  • -event

Eventually, I added -context, -iterate, and -*-impl versions of these. Nobody guarantees I won’t need another one in the future.

Now, with the map approach, the problem is that existing nodes won't get new methods. A node written as:

{:draw    ...
 :measure ...
 :event   ...}

will not suddenly have a context method when I add one.

That’s what OOP solves! If I control the root implementation and add more stuff to it, everybody will get it no matter when they write their nodes.

How does it look

We still have normal protocols:

(defprotocol IComponent
  (-context              [_ ctx])
  (-measure      ^IPoint [_ ctx ^IPoint cs])
  (-measure-impl ^IPoint [_ ctx ^IPoint cs])
  (-draw                 [_ ctx ^IRect rect canvas])
  (-draw-impl            [_ ctx ^IRect rect canvas])
  (-event                [_ ctx event])
  (-event-impl           [_ ctx event])
  (-iterate              [_ ctx cb])
  (-child-elements       [_ ctx new-el])
  (-reconcile            [_ ctx new-el])
  (-reconcile-impl       [_ ctx new-el])
  (-should-reconcile?    [_ ctx new-el])
  (-unmount              [_])
  (-unmount-impl         [_]))

Then we have base (abstract) classes:

(core/defparent ANode
  [^:mut element
   ^:mut mounted?
   ^:mut rect
   ^:mut key
   ^:mut dirty?]
  
  protocols/IComponent
  (-context [_ ctx]
    ctx)

  (-measure [this ctx cs]
    (binding [ui/*node* this
              ui/*ctx*  ctx]
      (ui/maybe-render this ctx)
      (protocols/-measure-impl this ctx cs)))

  ...)

Note that parents can also have fields! Admit it: We all came to Clojure to write better Java.

Then we have intermediate abstract classes that, on one hand, reuse parent behavior, but also redefine it where needed. E.g.

(core/defparent AWrapperNode [^:mut child] :extends ANode
  protocols/IComponent
  (-measure-impl [this ctx cs]
    (when-some [ctx' (protocols/-context this ctx)]
      (measure (:child this) ctx' cs)))

  (-draw-impl [this ctx rect canvas]
    (when-some [ctx' (protocols/-context this ctx)]
      (draw-child (:child this) ctx' rect canvas)))
  
  (-event-impl [this ctx event]
    (event-child (:child this) ctx event))
  
  ...)

Finally, leaves are almost normal deftypes but they pull basic implementations from their parents.

(core/deftype+ Padding [] :extends AWrapperNode
  protocols/IComponent
  (-measure-impl [_ ctx cs] ...)
  (-draw-impl [_ ctx rect canvas] ...))

Underneath, there’s almost no magic. Parent implementations are just copied into children, fields are concatenated to child’s fields, etc.

Again, this is not the interface that the end-user will use. End-user will write components like this:

(ui/defcomp button [opts child]
  [clickable opts
   [clip-rrect {:radii [4]}
    [rect {:paint button-bg}
     [padding {:padding 10}
      [center
       [label child]]]]]])

But underneath all these rect/padding/center/label will eventually be instantiated into nodes. Heck, even your button will become FnNode. But you are not required to know this.

Also, a reminder: all these solutions, just like Humble UI itself, are a work in progress at the moment. No promises it’ll stay that way.

Conclusion

I’ve heard a rumor that OOP was originally invented for UIs specifically. Mutable objects with mostly shared but sometimes different behaviors were a perfect match for the object paradigm.

Well, now I know: even today, no matter how you start, eventually you will arrive at the same conclusion.

I hope you find this interesting. If you have a better idea — let me know.


Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishamapayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.