Small and Friendly Errors with Cider

Growing by removing is growing stronger.

Less obtrusive Cider errors for a quicker repl-driven experience and practice. How I came to like error overlays.


Figure 1: A rose plant growing, the sun and speed of light. The idea of a computer.

The Quick-Eval

Repl-driven development is a philosophy and practice that focuses on using a REPL (Read-Eval-Print Loop) to interactively evaluate code snippets and see their results immediately.

The key is that the whole language is always there and that you can make small experiments, to the point where it meshes with your thinking.1

A big part of the practice of Repl-driven development is something I am calling the Quick-Eval flow for this blog post.

  1. Modify source code sexp
  2. eval
  3. Repeat

I want to optimize this practice. A single 1-2-3 cycle can easily take less than 5 seconds, and there is plenty of room to push it further:

  1. By honing lispy2 skills and juggling those sexps.
  2. By smoothing out the edges of cider-eval.

This aspect of Repl-driven development, the Quick-Eval, involves using the mind, the fingers, and the editor. It's like playing an instrument or learning a motor skill like juggling - it requires dedication and repetition to master the movements. But the beauty of this practice lies in the ability to customize the tool to fit your thought process, unlocking limitless potential for effectiveness. Knowledge of the editor becomes a key to a world of endless detail, where you can continuously optimize speed, expressivity of intent, and precision. The goal then becomes programming at the speed of thought, making structural editing a superpower of the Clojure programming language. And with the power to shape Emacs into what you desire, the possibilities are truly endless.

Speed cubers deconstruct the minutiae of their finger tricks. This means being explicit and thoughtful about which movements to make during the performance.

This blog post is about an aspect of point 2: smoothing out cider-eval by eliminating error popups.

Let it flow

The first time I saw a Clojure stack trace, I thought my computer was broken.3

(Ariel Alexa)

Alex Miller mentioned4 recently that Clojure error messages are designed to not include the stack trace by default.

A lot of these tools will automatically print the stack trace and the Clojure REPL never automatically prints stack traces [...] It prints you like a 2 line error message. And you can ask for the stack trace if you want additional information. We intentionally don't show you that because usually most of that information is not useful to you.

Nice, sounds like less heavy error messages are something to try out!

Cider already has a minimalistic error feedback available:

(setq-default cider-show-error-buffer nil)

With this setting, we get a little overlay right next to our cursor when an evaluation results in an error.


Figure 2: Emacs cider eval error overlay displaying a runtime error message

We achieve a more smoothed-out Quick-Eval flow.

The consistent presentation, regardless of the presence of errors, creates a smoother and more streamlined workflow. This helps keep the focus on the code and eliminates interruptions in the thought process.

My mind can stay in one place, focused on the code, without a wall of "something went wrong" disrupting my flow.

Sometimes you want to look at a stack trace

Ok *cider-error*, we can always stay friends.

I do not advocate throwing away stack traces, at all. With REPL-driven development and Emacs we have the power to move fast, and to inspect the stack trace whenever we decide the info would be useful.

*e exists for inspecting stack traces. Also, *cider-error* is still at our side; the stack trace is still faithfully rendered there.
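At the plain Clojure REPL (independent of Cider), a minimal sketch of what that looks like; *e holds the most recent exception and clojure.repl/pst prints its stack trace on demand:

(/ 1 0)
;; ArithmeticException: Divide by zero -- a short message, no stack trace

(clojure.repl/pst *e)
;; prints the full stack trace of the last exception, only when asked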

A quick and precise pop lets us go to the error buffer when we decide to do so:

(defun mm/pop-cider-error ()
  (interactive)
  (if-let
      ((cider-error
        (get-buffer "*cider-error*")))
      (pop-to-buffer cider-error)
    (message
     "no cider error buffer")))

(define-key cider-mode-map (kbd "C-c E") #'mm/pop-cider-error)

There is also a cider buffer selection dispatch command:

(define-key cider-mode-map (kbd "C-c X") #'cider-selector)

Press x in cider-selector to bring up the error buffer.

Let's not jump to conclusions

With less auto-jumping, we move slowly and deliberately and call upon our tools when we decide to do so. In practice, this can mean we jump around at insane speed, but we do so in a controlled way.

(setq-default cider-auto-jump-to-error nil)

Currently, if you interactively eval5 and you have a compile error, Cider jumps to the top of the file. This is because we do not track a file position during the eval; Clojure only sees a string, not the file.

So the most important command, cider-eval, the one I want to optimize for Quick-Eval, had a rough edge, and now I have smoothed it out.

Errors should be friendly

Who invented that errors are red and angry?

(set-face-attribute
 'cider-error-overlay-face
 nil
 :background "DarkGreen"
 :foreground "white")

Now I have these green and peaceful boxes that don't make my heart rate go up.

Errors are the normal, correct behavior of my program, certainly in the context of developing. Sometimes I even eval just to check that there is an error. This is what repl-driven development gives us. And it should not feel like doing something wrong.

The extreme: Just say Did not work

In Debugging with the Scientific Method, Stuart Halloway mentioned what I would call Red light, Green light error feedback. The system only indicates whether something worked, not what specifically went wrong.

A trick in Emacs, if you want text to be invisible, is to set the foreground to the same color as the background:

(set-face-attribute
 'cider-error-overlay-face
 nil
 :background "DarkGreen"
 :foreground "DarkGreen")


Figure 3: Emacs cider eval error overlay displayed as a full green block.

I was still seeing the error in the echo area. So I did a quick (setf debug-on-message "nothing") and found cider--display-interactive-eval-result calls message.

The visibility of the message is controlled by cider-use-overlays.

(setq-default cider-use-overlays t)

Lol, now errors are just green blocks and I do not see the error message anywhere. Whether that is better than getting the error message, I cannot say yet.

Footnotes:

2

Or the paredit-like editor capability of your choice

5

What I call interactive eval is cider-eval-last-sexp etc. As opposed to cider-load-file.

Date: 2023-02-05 Sun 10:50

Emacs 30.0.50 (Org mode 9.6.1)

Permalink

OSS updates of January 2023

In this post I'll give updates about open source I worked on during January 2023.

Sponsors

But first off, I'd like to thank all the sponsors and contributors that make this work possible! Top sponsors:

If you want to ensure that the projects I work on are sustainably maintained, you can sponsor this work in the following ways. Thank you!

If you're used to sponsoring through some other means which isn't listed above, please get in touch.

Attention

If you are using Github Sponsors and are making payments via Paypal, please update to a credit card, since Github Sponsors won't support Paypal from February 23rd 2023. Read their statement here. If you are not able to pay via a credit card, you can still sponsor me via one of the ways mentioned above.

Projects

Babashka

Native, fast starting Clojure interpreter for scripting.

New releases in the past month: 1.0.170 - 1.1.173. Highlights:

  • Support for data_readers.clj(c)
  • Include http-client as built-in library
  • Compatibility with clojure.tools.namespace.repl/refresh
  • Compatibility with clojure.java.classpath (and other libraries which rely on java.class.path and RT/baseLoader)
  • Compatibility with eftest test runner (see demo)
  • Compatibility with cljfmt
  • Support for *loaded-libs* and (loaded-libs)
  • Support add-watch on vars (which adds compatibility with potemkin.namespaces)
  • BREAKING: make printing of script results explicit with --prn

Babashka compatibility in external libs

I contributed changes to the following libraries to make them compatible with babashka:

  • cljfmt - A tool for formatting Clojure code
  • carve - Remove unused Clojure vars
  • debux - A trace-based debugging library for Clojure and ClojureScript

Check the changelog for all the changes!

Http-client

The new babashka http-client project mostly replaces babashka.curl.

This month the default client was improved to accept gzip and deflate as encodings by default, reflecting what babashka.curl did.

Also babashka.http-client is now available as a built-in namespace in babashka v1.1.171 and higher.

Clj-kondo

Static analyzer and linter for Clojure code that sparks joy

Three new releases with many fixes and improvements in the last month. Check the changelog for details.

Some highlights:

  • #1742: new linter :aliased-namespace-var-usage: warn on var usage from namespaces that were used with :as-alias. See demo.
  • #1926: Add keyword analysis for EDN files. This means you can find references for keywords throughout your project with clojure-lsp, now including in EDN files.
  • #1902: provide :symbols analysis for navigation to symbols in quoted forms or EDN files. See demo.

The symbol analysis is used from clojure-lsp for which I provided a patch here.

A new project around clj-kondo is clj-kondo-bb which enables you to use clj-kondo from babashka scripts.

Also lein-clj-kondo got an update.

Instaparse-bb

This is a new project and gives you access to a subset of instaparse via a pod.

Instaparse was requested a few times as a built-in library in babashka, and instaparse-bb is a good first step, without making a decision on that yet. See the relevant discussion here.

Carve

Remove unused Clojure vars

In the 0.3.5 version, Carve got the following updates:

  • Upgrade clj-kondo version
  • Make babashka compatible by using the clj-kondo-bb library
  • Discontinue the carve binary in favor of invocation with babashka. Instead you can now install carve with bbin:
      bbin install io.github.borkdude/carve
      
  • Implement babashka.cli integration
  • Implement --help

Jet

CLI to transform between JSON, EDN, YAML and Transit using Clojure

Version 0.4.23:

  • #123: Add base64/encode and base64/decode
  • Add jet/paths and jet/when-pred
  • Deprecate interactive mode
  • Deprecate --query in favor of --thread-last, --thread-first or --func

Fs

File system utility library for Clojure.

Fs has gotten a few new functions:

  • unixify, to turn a Windows path into a path with Unix-style path separators. Note that that style is supported by the JVM, and this offers a more reliable way to e.g. match filenames via regexes.
  • several xdg-*-home helper functions, contributed by @eval

See changelog for more details.

Neil

A CLI to add common aliases and features to deps.edn-based projects.

This month there were several small fixes, one of them being to always pick stable versions when adding or upgrading libraries. See full changelog for details.

Quickblog

Light-weight static blog engine for Clojure and babashka.

The blog you're currently reading is made with quickblog.

Version 0.2.3 was released with contributions from several people, mostly enabling you to tweak your own blog even more, while having good defaults.

Instances of quickblog can be seen here:

If you are also using quickblog, please let me know!

Sci.configs

A collection of ready to be used SCI configs for e.g. Reagent, Promesa, Re-frame and other projects that are used in nbb, joyride, scittle, etc. See recent commits for what's been improved.

Edamame

Edamame got a new function: parse-next+string which returns the original string along with the parsed s-expression.

lein2deps

Lein to deps.edn converter

This tool can convert a project.clj file to a deps.edn file. It even supports Java compilation and evaluation of code within project.clj. There is now a lein plugin which enables you to sync your project.clj with your deps.edn every time you start lein. Several other minor enhancements were made. See changelog.

4ever-clojure

I added the ability to build and deploy 4ever-clojure using Github Actions. Every time a commit is merged, the site is automatically updated.

Brief mentions

The following projects also got updates, mostly in the form of maintenance and performance improvements. This post would get too long if I had to go into detail about them, so I'll briefly mention them in random order:

  • jna-native-image-sci: Compile a program that uses JNA to native-image and allow dynamic evaluation using SCI!
  • deps.clj: A faithful port of the clojure CLI bash script to Clojure
  • joyride: VSCode CLJS scripting and REPL (via SCI)
  • squint: CLJS syntax to JS compiler
  • tools-deps-native: Run tools.deps as a native binary
  • tools.bbuild: Library of functions for building Clojure projects
  • scittle: Execute Clojure(Script) directly from browser script tags via SCI
  • pod-babashka-buddy: A pod around buddy core (Cryptographic Api for Clojure).
  • nbb: Scripting in Clojure on Node.js using SCI
  • CLI: Turn Clojure functions into CLIs!
  • process: Clojure library for shelling out / spawning sub-processes
  • SCI: Configurable Clojure/Script interpreter suitable for scripting and Clojure DSLs
  • sci.configs: A collection of ready to be used SCI configs

Permalink

Babashka news of January 2023

If you want to help me keep track of babashka-related news, please contribute to news.md or use the #babashka hashtag on Twitter or Mastodon.

Releases

New releases in the past month: 1.0.170 - 1.1.173. Release highlights:

  • Support for data_readers.clj(c)
  • Include http-client as built-in library
  • Compatibility with clojure.tools.namespace.repl/refresh
  • Compatibility with clojure.java.classpath (and other libraries which rely on java.class.path and RT/baseLoader)
  • Compatibility with eftest test runner (see demo)
  • Compatibility with cljfmt
  • Support for *loaded-libs* and (loaded-libs)
  • Support add-watch on vars (which adds compatibility with potemkin.namespaces)
  • BREAKING: make printing of script results explicit with --prn

Events

Articles

One new book this month:

and several blog posts:

Projects

Projects that were new, had updates or were made compatible with babashka:

  • asdf-babashka: babashka plugin for the asdf version manager
  • babashka-htmx-todoapp: Quick example of a todo list SPA using Babashka and HTMX
  • bblgum: An extremely tiny and simple wrapper around charmbracelet/gum
  • bb-dialog: A simple wrapper library for working with dialog from Babashka
  • carve: Remove unused Clojure vars
  • chr: native commands history report for the default terminal users
  • clj-kondo-bb: Invoke clj-kondo from babashka scripts!
  • cljfmt: A tool for formatting Clojure code
  • drepl: Node JS dependency-repl. The node repl you already know with easy library installations
  • instaparse-bb: Wrapper library around pod-babashka-instaparse
  • lein2deps: Lein project.clj to deps.edn converter
  • neil: A CLI to add common aliases and features to deps.edn-based projects
  • obsidian-babashka: Run Obsidian Clojure(Script) codeblocks in Babashka
  • pod-babashka-buddy: A pod around buddy core (Cryptographic Api for Clojure)
  • quickblog: Light-weight static blog engine for Clojure and babashka
  • solenoid: A small clojure tool for making little control UIs while using the REPL!
  • tools.bbuild: babashka version of tools.build
  • weather: command line util for grabbing current weather for a city using OpenWeather API

Permalink

Reductionism

You can be a boss at reducers if you know this one weird trick!

At least at one point I had hoped that was true. It turns out that getting reducers right requires thinking it through every single time you are confronted with a new one. But I think we can come up with enough guidance so that after a few examples, we won’t really need to look at the reducers in collections to come; you’ll be able to understand them and verify them yourself.

Background

If you look for ‘clojure reduce’ in your search engine of choice, you might run across Reducers. Reducers are a very useful suite of functions that should definitely be in the arsenal of any Clojure programmer, but we have a simpler aim: clojure.core/reduce.

There are two forms of reduce, with two and three parameters. The two-parameter form takes a reducing function (of two arguments) that is applied first to the first two items of the collection, then to the result of that invocation and the third item, that result and the fourth item, etc., until the end of the collection is reached and the accumulated result is returned.

(reduce + [1 2 3 4 5]) ;=> 15

Effectively it computes ((((1+2)+3)+4)+5).

The three-parameter version supplies a starting value that is passed to the reducing function along with the first item, and so on.

(reduce + 10 [1 2 3 4 5]) ;=> 25

Effectively it computes (((((10+1)+2)+3)+4)+5).

Here is the contract for reduce:

f should be a function of 2 arguments. If val is not supplied, returns the result of applying f to the first 2 items in coll, then applying f to that result and the 3rd item, etc. If coll contains no items, f must accept no arguments as well, and reduce returns the result of calling f with no arguments. If coll has only 1 item, it is returned and f is not called. If val is supplied, returns the result of applying f to val and the first item in coll, then applying f to that result and the 2nd item, etc. If coll contains no items, returns val and f is not called.
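A few REPL checks of those edge cases, using + as the reducing function ((+) with no arguments returns 0, which is why the empty case works):

(reduce + [])       ;=> 0,  (+) called with no arguments
(reduce + [42])     ;=> 42, the single item; + is never called
(reduce + 10 [])    ;=> 10, val returned; + is never called
(reduce + 10 [1 2]) ;=> 13, i.e. (+ (+ 10 1) 2)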

The code for reduce in core.clj is indirect:

(defn reduce
  ([f coll]
     (if (instance? clojure.lang.IReduce coll)
       (.reduce ^clojure.lang.IReduce coll f)
       (clojure.core.protocols/coll-reduce coll f)))
  ([f val coll]
     (if (instance? clojure.lang.IReduceInit coll)
       (.reduce ^clojure.lang.IReduceInit coll f val)
       (clojure.core.protocols/coll-reduce coll f val))))

A collection implements the IReduce and/or IReduceInit interfaces to provide a specialized, presumably more efficient, reduction algorithm. Otherwise, the magic of protocols is used to extend reduction to types that do not have these interfaces defined. That code is in clojure.core.protocols. The protocols aspect is not our focus here; we are interested in how to implement the interfaces in our collections.

The interfaces

And here they are:

[<AllowNullLiteral>]
type IReduceInit =
    abstract reduce: IFn * obj -> obj

[<AllowNullLiteral>]
type IReduce =
    inherit IReduceInit
    abstract reduce: IFn -> obj

IReduceInit.reduce takes a function and a start argument. IReduce.reduce just takes the reducing function. The missing argument is the collection itself, which is this.

Feeling reduced, but not diminished

Before we get to code, we need to have a little chat about Reduced. It figures significantly in the code we are about to write.

It is hard to find information about Reduced. I checked five books on Clojure and found nary a mention. The most prominent mention of Reduced is in the reference on Transducers.

Reduced is used to stop reductions early. From the Transducers article:

Clojure has a mechanism for specifying early termination of a reduce:

A process that uses transducers must check for and stop when the step function returns a reduced value (more on that in Creating Transducible Processes). Additionally, a transducer step function that uses a nested reduce must check for and convey reduced values when they are encountered.

A reduced value is literally an object of type Reduced. It just wraps a value, making it available through the deref method of the IDeref interface:

type IDeref =
    abstract deref: unit -> obj

[<Sealed>]
type Reduced(value) =
    interface IDeref with
        member _.deref() = value

You can read the Transducers article for reasons for using this. For one thing, it is the only way to run a reduction over an infinite collection – you have to send a signal that you’ve had enough.

One essential rule when writing a reduce method (for IReduce and IReduceInit): after each invocation of the reduction function, check the result to see if it is an instance of Reduced; if so, stop immediately and return the deref value.

Note: if you are actually writing transducers, you might need to be passing back the Reduced object itself. This is not our concern. Our rule is only for IReduceInit.reduce and IReduce.reduce.
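From the Clojure side, the reduced function is what produces such a wrapped value. A small sketch of early termination, reducing over the infinite (range):

;; Stop once the accumulator reaches 100; without reduced this would never return.
(reduce (fn [acc x]
          (if (>= acc 100)
            (reduced acc)
            (+ acc x)))
        (range))
;;=> 105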

Some code

I’m going to start with some of the original C# code (essentially identical to the Java code) to see the problems we have with making a translation to F#. One of the simplest comes from PersistentList:

public object reduce(IFn f)
    {
        object ret = first();
        for (ISeq s = next(); s != null; s = s.next()) { 
            ret = f.invoke(ret, s.first());
            if (RT.isReduced(ret))
                return ((IDeref)ret).deref();
        }
        return ret;
    }

public object reduce(IFn f, object start)
{
    object ret = f.invoke(start, first());
    for (ISeq s = next(); s != null; s = s.next()) {
        if (RT.isReduced(ret))
            return ((IDeref)ret).deref(); 
        ret = f.invoke(ret, s.first());
    }
    if (RT.isReduced(ret))
        return ((IDeref)ret).deref();
    return ret;
}

The contract for reduce makes these demands:

  1. Without a start value:

    a. if the collection has no items, return the result of calling f with no arguments

    b. if the collection has only one item, return that item (f is not called)

    c. the first application of f should be to the first and second items in the collection

    d. if a call to f results in a Reduced instance, dereference it and return that value.

  2. With a start value:

    a. if the collection has no items, return the start value (f is not called)

    b. the first call to f should be on the start value and the first item

    c. if a call to f results in a Reduced instance, dereference it and return that value.

It might appear that requirements (1a) and (2a) are violated in the PersistentList code. And then you remember that PersistentList always has at least one member so that no check for emptiness is required. You should verify that the other conditions are met.

You can’t take that C# code and just copy it into F#. It relies on early returns out of loops, which we don’t have in F#. And we’d probably prefer to avoid mutating bindings. The technique often used is to translate to a recursive function that does the looping, which is essentially the same in our examples as using a recur loop in Clojure.

For example, take our first reduce above:

        object ret = first();
        for (ISeq s = next(); s != null; s = s.next()) { 
            ret = f.invoke(ret, s.first());
            if (RT.isReduced(ret))
                return ((IDeref)ret).deref();
        }
        return ret;

Two things change from iteration to iteration: the values of ret and s; just look for the assignments to those variables. Those become our parameters. Regular exit is when s = null – we negate the condition of loop continuation to get the condition for method termination. Early exit is done by checking for Reduced. Thus our loop can be encoded by

let rec step (ret:obj) (s:ISeq) =
    if isNull s then
        ret
    else
        match f.invoke(ret,s.first()) with
        | :? Reduced as red -> (red:>IDeref).deref()
        | nextRet -> step nextRet (s.next())

The iteration is started by calling step on arguments that set up the correct initial values for ret and s:

step (this:>ISeq).first() ((this:>ISeq).next())

The other reduce is similar:

interface IReduceInit with
    member this.reduce(f, start) =
        let rec step (ret:obj) (s:ISeq) =
            if isNull s then
                ret
            else
                match ret with
                | :? Reduced as red -> (red:>IDeref).deref()
                | _ -> step (f.invoke(ret,s.first())) (s.next())

        let ret = step (f.invoke(start,(this:>ISeq).first())) ((this:>ISeq).next())

        match ret with
        | :? Reduced as red -> (red:>IDeref).deref()
        | _ -> ret

If you look closely, there is a distinct difference between the two, both in the original and in the translation. For the first one, in the loop, we call f and check its value. For the second one, we check the value from the previous iteration, then call f to generate a value to pass to the next iteration. If one writes the start-value version in C# this way:

public object reduce(IFn f, object start)
{
    object ret = f.invoke(start, first());
    if (RT.isReduced(ret)) 
        return ((IDeref)ret).deref();

    for (ISeq s = next(); s != null; s = s.next()) {
        ret = f.invoke(ret, s.first());
        if (RT.isReduced(ret))
            return ((IDeref)ret).deref(); 
    }
    return ret;
}

the loop body is now the same here as in the first one. Translating this into F#, the two versions now have identical step functions. You can move that into a method, leading to this code:

member this.recurser(f:IFn, acc:obj, s:ISeq) =
    if isNull s then
        acc
    else
        match f.invoke(acc,s.first()) with
        | :? Reduced as red -> (red:>IDeref).deref()
        | nextAcc -> this.recurser(f, nextAcc, s.next())

interface IReduce with
    member this.reduce(f) =
        let asSeq = this:>ISeq
        this.recurser(f, asSeq.first(), asSeq.next())

interface IReduceInit with
    member this.reduce(f,start) =
        let asSeq = this:>ISeq
        match f.invoke(start,asSeq.first()) with
        | :? Reduced as red -> (red:>IDeref).deref()
        | acc -> this.recurser(f, acc, asSeq.next())

Because the start-value version does a call of f(start,first()) before we get into the loop, we must make sure to check it for Reduced before looping.

If you check carefully against our requirements, you will find that they are all met. Do not neglect to do this exercise for every reduce you write. Trust me.

Cycling

Let’s do one more. There is a cycle function in Clojure that “[r]eturns a lazy (infinite!) sequence of repetitions of the items in coll.” It just calls a factory method on the Cycle class.

(cycle [1 2 3]) ;=> (1 2 3 1 2 3 1 2 3 ...)

A simple implementation of Cycle would hold the original sequence on the side so we could start over at the beginning if we have run through all the elements. It then just needs to know the ‘current’ sequence. Calling first() on the Cycle would just call first() on the ‘current’ sequence. Calling next() on the Cycle, we’d call next() on the underlying sequence and make that result the ‘current’ sequence in a new Cycle object.
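A minimal Clojure sketch of that simpler idea (my own illustration, not the actual implementation):

;; Keep the original coll on the side; when the current seq runs out,
;; start over from coll. Laziness keeps the infinite result usable.
(defn simple-cycle [coll]
  (letfn [(step [s]
            (lazy-seq
             (if-let [s (seq s)]
               (cons (first s) (step (rest s)))
               (step coll))))]
    (when (seq coll)
      (step coll))))

(take 7 (simple-cycle [1 2 3])) ;=> (1 2 3 1 2 3 1)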

The actual implementation of Cycle works a little harder in order to be more efficient, by being lazy about calling next on the underlying sequence. One does not really need to know the next() of the underlying sequence until first() or next() is called on the cycle object. At that point you can compute next(). We will need a mutable field in our Cycle to save the ‘current’ sequence when we finally get around to computing it. This will not be visible from the outside, so Cycle is immutable to outward appearance.

It’s probably easier just to look at the code.

type Cycle private (meta:IPersistentMap, all:ISeq, prev:ISeq, c:ISeq, n:ISeq) = 
    inherit ASeq(meta)
    
    [<VolatileField>]
    let mutable current : ISeq = c   // lazily realized

    [<VolatileField>]
    let mutable next : ISeq = n  // cached
    
    private new(all,prev,current) = Cycle(null,all,prev,current,null)

    static member create(vals:ISeq) : ISeq =
        if isNull vals then
            PersistentList.Empty
        else
            Cycle(vals,null,vals)

    member this.Current() =
        if isNull current then
            let c = prev.next()
            current <- if isNull c then all else c

        current

    interface ISeq with
        override this.first() = this.Current().first()
        override this.next() =
            if isNull next then
                next <- Cycle(all,this.Current(),null)

            next

A couple of small details. If Cycle.create(s) is called with an empty sequence, we return an empty list, not a Cycle. If we have a Cycle object in our hand, we are guaranteed that its base sequence is not empty. Note that both first() and next() access the ‘current’ sequence through a call to Current; that method takes care of noticing whether the underlying field current is occupied – null indicates we haven’t done the work of calling next on the underlying sequence yet. When you access Current, it will do that computation and save the result. This code also handles cycling back to the beginning if we have reached the end. It’s pretty slick. (Note: the cleverness is in the Java code. I didn’t come up with it. The authorship note in that file credits Alex Miller. Little tricks to promote laziness pop up all over the place.)

On to reduce. We will need to advance through the underlying sequence to access successive items. We do not need to use Cycle.next() to do this – that would create a bunch of unnecessary Cycle items. We just need to compute on the underlying sequence directly, performing the action that is done in Cycle.Current(). The following method does this.

member this.advance(s: ISeq) =
    match s.next () with
    | null -> all         // we've hit the end, cycle back to the beginning
    | x -> x

Consider the no-start-value version of reduce. We always have items, so there is no need to check. The sequence is infinite, so there is no end condition from the sequence. The only way out is to get a Reduced back from f. I wrote down the sequence of steps and looked for a loop point.

    acc <- first
    s <- advance from current (because we have just eaten the first element)
    Loop:
    newAcc = f.invoke(acc, s.first())
    check newAcc for Reduced -> leave
    loop with newAcc, advance(s)

How does the start-value version compare?

    acc <- start-value
    s <- current
    Loop:
    newAcc = f.invoke(acc, s.first())
    check newAcc for Reduced -> leave
    loop with newAcc, advance(s)

The loop is the same, other than how we get started. Verify that conditions (1c), (1d), (2b), and (2c) are met. (The others don’t matter.) And this goes straight to code.

member this.reducer(f: IFn, startVal: obj, startSeq: ISeq) =
    let rec step (acc: obj) (s: ISeq) =
        match f.invoke (acc, s.first ()) with
        | :? Reduced as red -> (red :> IDeref).deref ()
        | nextAcc -> step nextAcc (this.advance s)

    step startVal startSeq

interface IReduce with
    member this.reduce(f) =
        let s = this.Current()
        this.reducer (f, s.first (), this.advance (s))

interface IReduceInit with
    member this.reduce(f, v) = this.reducer (f, v, this.Current())

Side note

The only way to test Cycle.reduce is to have an IFn that at some point returns a Reduced object. The magic of F#’s object expressions comes in handy here. We can create an object directly that implements IFn. However, don’t try to do this with an object expression based on IFn directly – you’d have to have an entry for each of the almost-20 invoke methods. Instead, you can base your object expression on AFn, an abstract class that has default implementations (raising NotImplementedException) for all of them. Here is an extract of my test code (using Expecto for writing the tests):

let adderStopsShort n =
    { new AFn() with
        member this.ToString() = ""
      interface IFn with
        member this.invoke(x, y) =
            if Numbers.gte(y, n:>obj) then Reduced(x) :> obj else Numbers.add(x, y) }

let iter = Cycle.create(LongRange.create(100)) :?> IReduce
Expect.equal (iter.reduce (adderStopsShort 10)) 45L "Add them up for a while"
Expect.equal (iter.reduce ((adderStopsShort 10),100L)) 145L "Add them up for a while, with a kicker"

Our cycle is based on a 100-element LongRange. adderStopsShort called on a stop value yields an IFn with this behavior: when the second argument reaches the stop value, it returns the current value of the accumulator wrapped in a Reduced; otherwise, it is just +.

(The override of ToString in the object expression is necessary. It seems you can’t just override an interface only.)

And with that, let’s quit.

Behind the scenes

What I’ve not talked about is all the machinery behind the CollReduce protocol. That all lies out in the Clojure source code and is not our present concern. Mostly. I did have to dig into it to solve one problem. There is a reduce method in ArrayChunk. That actually is the reduce method for the IChunk interface. (See the previous post, Laziness and Chunking.) The reduce method in ArrayChunk does stop early when it gets a Reduced object back from the reducer function, but it returns the Reduced object, not the wrapped value. I struggled with this for a while until finally getting set on the correct track by Alex Miller over in the #clr-dev channel in the Clojurians slack. First is to note that this reduce is for IChunk. Then you have to figure out where it gets called from. And that’s where the protocol comes in. reduce will go through the CollReduce protocol, which in this case will end up going through the InternalReduce protocol, wherein we find a handler for IChunkedSeq:

  clojure.lang.IChunkedSeq
  (internal-reduce
   [s f val]
   (if-let [s (seq s)]
     (if (chunked-seq? s)
       (let [ret (.reduce (chunk-first s) f val)]
         (if (reduced? ret)
           @ret
           (recur (chunk-next s)
                  f
                  ret)))
       (interface-or-naive-reduce s f val))
     val))

It is this handler that calls Chunk.reduce. It notes the returned Reduced value, stops the iteration, and does the deref. If ArrayChunk did the deref, this handler wouldn’t know to stop.

My head hurts.

End note

If you want to get a sense of the history of reduce, reducers, and transducers, check out the Clojure change log. These things take time to develop. Changes sometimes work through the code slowly. clojure.lang.Reduced was introduced in 2012 and incorporated into some of the reduce methods at that time. (Here is the commit.) But other edits came later. For example, it was two years later that IReduceInit was split off from IReduce (this commit) and checking for a Reduced value was added to PersistentList.reduce() (this commit).

If you’ve made it this far, you’re likely someone who would check these things out.

Permalink

Dependency injection and loggers in Clojure

Logging functions have to be impure to be useful. If they don't change the state of the world around them by writing something somewhere, why would you use them? This makes any function that uses a logging function directly impure too. If that is something you want to avoid, you could inject a logging service and use that instead of the logging function. Let's do that and see what challenges we come across.

The protocol Logger below consists of a single method info. The constructor function create-logger returns a concrete implementation of Logger, which delegates to clojure.tools.logging/info.

(ns logging
  (:require [clojure.tools.logging :as log]))

(defprotocol Logger
  (info [this message]))

(defn create-logger []
  (reify Logger
    (info [_ message] (log/info message))))

The function add-and-log below takes a logger as its first argument and uses it to log the result of some computation. Pay close attention to the namespace.

(ns domain
  (:require [logging :refer [create-logger info]]))

(defn add-and-log [logger & args]
  (info logger (apply + args)))

(add-and-log (create-logger) 1 2 3 4)
(add-and-log (create-logger) 1 2 3 4 5)

The result of evaluating the last two expressions is as follows:

13:47:30.130 [nREPL-session-fab93eaa-9ae3-40d4-a4f1-a0605747ba5c] INFO logging - 10
13:49:22.927 [nREPL-session-fab93eaa-9ae3-40d4-a4f1-a0605747ba5c] INFO logging - 15

These two log entries contain the log level ("INFO"), the namespace from which the logging function was called ("logging"), and the log messages ("10" and "15").

Usually, it's convenient to be able to trace an entry in the logs to its origin in the code. In this example, however, we're logging messages in the namespace domain, but the log entries contain the namespace logging. This is unfortunate, but it makes perfect sense. It may look like we're logging messages in the namespace domain, because that's where we call the info method of the logger, but the actual logging happens in the namespace logging, where log/info is called.

Macros to the rescue

After some head scratching and browsing through code bases and documentation, I learned that this is one of those occasions where macros come in handy. As you may know, macros can be used to transform code at compile time. The end result of this transformation is evaluated at runtime.

For example, the macro twice below takes a function and a value, and applies the function twice: once to the value and then to the result of the first application.

(defmacro twice [f x]
  `(~f (~f ~x)))

Without going into details too much, you could view the expression `(~f (~f ~x)) as a template, where ~ is used as an escape symbol.

At compile time, the expression (twice inc 0) expands to the following:

(inc (inc 0))

At runtime, this evaluates to 2.
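For comparison, a minimal sketch of the same behavior written as an ordinary function (the name twice-fn is mine):

;; Same result as the macro, but the composition happens at runtime.
(defn twice-fn [f x]
  (f (f x)))

(twice-fn inc 0) ;=> 2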

For beginners, it can be difficult to determine whether a function or a macro should be used to solve a certain problem. In fact, the macro twice could have been a function. Most people would say that if something can be implemented as a function, then it should be implemented as a function, not a macro. The problem with our logger, however, is a perfect fit for macros.

Here's a new version of the Logger protocol and the corresponding constructor function:

(ns logging
  (:require [clojure.tools.logging :as log]
            [clojure.tools.logging.impl :as impl]))

(defprotocol Logger
  (-log [this ns level message throwable]))

(defn create-logger []
  (reify Logger
    (-log [_ ns level message throwable]
      (let [logger (impl/get-logger log/*logger-factory* ns)]
        (log/log* logger level throwable message)))))

This version of the protocol consists of a single method named -log, where the minus-sign indicates that the method is not meant to be called directly. (It can be called directly, but it's not meant to be.) What's most noteworthy about this method is that it takes an argument ns. The constructor function creates a logger by passing the value of ns to the logger factory of clojure.tools.logging, and that logger is then used to do the actual logging via log/log*.

This change itself doesn't bring us any closer to solving our problem, however. We still need to figure out how to pass the namespace in which we're logging something to the method -log without doing so explicitly. Part of the answer lies in *ns*, an object representing the current namespace. Using a function in the logging namespace to pass along *ns* wouldn't work however, because we would be passing along that namespace again. The second part of the answer lies in using a macro.

(defmacro log [logger level message throwable]
  `(-log ~logger ~*ns* ~level ~message ~throwable))

As mentioned above, macros will be expanded at compile time and the resulting expression will be evaluated at runtime. Because the expansion happens where the macro is applied, the value of *ns* is the namespace in which the macro is applied, not the namespace in which the macro is defined.

To provide an API that is a little more pleasant to use, the macro above is combined with the following ones (and similar ones for other log levels).

(defmacro info [logger message]
  `(log ~logger :info ~message nil))

(defmacro error [logger message throwable]
  `(log ~logger :error ~message throwable))

Now that we've defined this collection of macros, we can evaluate the following expression.

(ns domain
  (:require [logging :refer [create-logger info]]))

(info (create-logger) "a message to log")

At compile time, the expression on the last line expands to the following:

(logging/-log (create-logger) #namespace[domain] :info "a message to log" nil)

At runtime, the message "a message to log" is logged at log level "INFO", with a reference to the namespace "domain", which is exactly what we set out to achieve.

Let's put these new macros to use:

(ns domain
  (:require [logging :refer [create-logger info]]))

(defn add-and-log [logger & args]
  (info logger (apply + args)))

(add-and-log (create-logger) 1 2 3 4)
(add-and-log (create-logger) 1 2 3 4 5)

The result of evaluating the last two expressions is now as follows:

13:58:17.378 [nREPL-session-fab93eaa-9ae3-40d4-a4f1-a0605747ba5c] INFO  domain - 10
13:58:18.589 [nREPL-session-fab93eaa-9ae3-40d4-a4f1-a0605747ba5c] INFO  domain - 15

Only one word changed, but this can make a world of difference when looking through logs to track down bugs.

Permalink

Building abstractions using higher-order functions

A higher-order function is a function that takes other functions as arguments, or returns a function as its result. Higher-order functions are an exceptionally powerful software design tool because they can easily create new abstractions and are composable. In this post I will present a case study - a set of functions that defines an interesting problem domain. By reading and understanding this code, hopefully anyone can appreciate the power and beauty of higher-order functions and how they enable constructing powerful abstractions from basic building blocks.

One of my all-time favorite programming books is Peter Norvig's PAIP. In section 6.4 - A Set of Searching Tools, it presents some code for defining different variants of tree searching that I've always found very elegant.

Here's a quick reimplementation of the main idea in Clojure (see this repository for the full, runnable code); I'm using Clojure since it's a modern Lisp that I enjoy learning and using from time to time.

First, some prerequisites. As is often the case in dynamically-typed Lisp, entities can be described in a very abstract way. The code presented here searches trees, but there is no tree data structure per se; it's defined using functions. Specifically, there's a notion of a "state" (tree node) and a way to get from a given state to its children states (successors); a function maps between the two.

In our case let's have integers as states; then, an infinite binary tree can be defined using the following successor function:

(defn binary-tree
  "A successors function representing a binary tree."
  [x]
  (list (* 2 x) (+ 1 (* 2 x))))

Given a state (a number), it returns its children as a list. Simplistically, in this tree, node N has the children 2N and 2N+1.

Here are the first few layers of such a tree:

Binary tree with 15 nodes 1-15

In one sense, the tree is infinite because binary-tree will happily return the successors for any node we ask:

paip.core=> (binary-tree 9999)
(19998 19999)

But in another sense, there is no tree. This is a beautiful implication of using functions instead of concrete data - they easily enable lazy evaluation. We cannot materialize an infinite tree inside a necessarily finite computer, but we can operate on it all the same because of this abstraction. As far as the search algorithm is concerned, there exists an abstract state space and we tell it how to navigate and interpret it.

Now we're ready to look at the generic search function:

(defn tree-search
  "Finds a state that satisfies goal?-fn; Starts with states, and searches
  according to successors and combiner. If successful, returns the state;
  otherwise returns nil."
  [states goal?-fn successors combiner]
  (cond (empty? states) nil
        (goal?-fn (first states)) (first states)
        :else (tree-search (combiner (successors (first states))
                                     (rest states))
                           goal?-fn
                           successors
                           combiner)))

Let's dig in. The function accepts the following parameters:

  • states: a list of starting states for the search. When invoked by the user, this list will typically have a single element; when tree-search calls itself, this list is the states that it plans to explore next.
  • goal?-fn: a goal detection function. The search doesn't know anything about states and what the goal of the search is, so this is parameterized by a function. goal?-fn is expected to return true for a goal state (the state we were searching for) and false for all other states.
  • successors: the search function also doesn't know anything about what kind of tree it's searching through; what are the children of a given state? Is it searching a binary tree? A N-nary tree? Something more exotic? All of this is parameterized via the successors function provided by the user.
  • combiner: finally, the search strategy can be parameterized as well. There are many different kinds of searches possible - BFS, DFS and others. combiner takes a list of successors for the current state the search is looking at, as well as a list of all the other states the search still plans to look at. It combines these into a single list somehow, and thus guides the order in which the search happens.

Even before we see how this function is used, it's already apparent that this is quite a powerful abstraction. tree-search defines the essence of what it means to "search a tree", while being oblivious to what the tree contains, how it's structured and even what order it should be searched in; all of this is supplied by functions passed in as parameters.

Let's see an example, doing a BFS search on our infinite binary tree. First, we define a breadth-first-search function:

(defn breadth-first-search
  "Search old states first until goal is reached."
  [start goal?-fn successors]
  (tree-search (list start) goal?-fn successors prepend))

This function takes a start state (a single state, not a list), goal?-fn and successors, but it sets the combiner parameter to the prepend function, which is defined as follows:

(defn prepend
  [x y]
  (concat y x))

It defines the search strategy (BFS = first look at the rest of the states and only then at successors of the current state), but still leaves the tree structure and the notion of what a goal is to parameters. Let's see it in action:

paip.core=> (breadth-first-search 1 #(= % 9) binary-tree)
9

Here we pass the anonymous function literal #(= % 9) as the goal?-fn parameter. This function simply checks whether the state passed to it is the number 9. We also pass binary-tree as the successors, since we're going to be searching in our infinite binary tree. BFS works layer by layer, so it has no issue with that and finds the state quickly.

We can turn on verbosity (refer to the full code to see how it works) to see what states parameter tree-search gets called with, observing the progression of the search:

paip.core=> (with-verbose (breadth-first-search 1 #(= % 9) binary-tree))
;; Search: (1)
;; Search: (2 3)
;; Search: (3 4 5)
;; Search: (4 5 6 7)
;; Search: (5 6 7 8 9)
;; Search: (6 7 8 9 10 11)
;; Search: (7 8 9 10 11 12 13)
;; Search: (8 9 10 11 12 13 14 15)
;; Search: (9 10 11 12 13 14 15 16 17)
9

This is the prepend combiner in action; for example, after (3 4 5), the combiner prepends (4 5) to the successors of 3 (the list (6 7)), getting (4 5 6 7) as the set of states to search through. Overall, observing the first element in the states list through the printed lines, it's clear this is classical BFS where the tree is visited in "layers".
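That step, replayed directly with the prepend combiner defined above:

;; Successors of 3 are (6 7); the remaining states are (4 5).
;; prepend puts the old states first, which is what makes this BFS.
(prepend '(6 7) '(4 5)) ;=> (4 5 6 7)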

Implementing DFS using tree-search is similarly easy:

(defn depth-first-search
  "Search new states first until goal is reached."
  [start goal?-fn successors]
  (tree-search (list start) goal?-fn successors concat))

The only difference from BFS is the combiner parameter - here we use concat since we want to examine the successors of the first state before we examine the other states on the list. If we run depth-first-search on our infinite binary tree we'll get a stack overflow (unless we're looking for a state that's on the left-most path), so let's create a safer tree first. This function can serve as a successors to define a "finite" binary tree, with the given maximal state value:

(defn finite-binary-tree
  "Returns a successor function that generates a binary tree with n nodes."
  [n]
  (fn [x]
    (filter #(<= % n) (binary-tree x))))

Note the clever use of higher-order functions here. finite-binary-tree is not a successors function itself - rather it's a generator of such functions; given a value, it creates a new function that acts as successors but limits the states' value to n.

For example, (finite-binary-tree 15) will create a successors function that represents exactly the binary tree on the diagram above; if we ask it about successors of states on the fourth layer, it will say there are none:

paip.core=> (def f15 (finite-binary-tree 15))
#'paip.core/f15
paip.core=> (f15 4)
(8 9)
paip.core=> (f15 8)
()
paip.core=> (f15 7)
(14 15)
paip.core=> (f15 15)
()

As another test, let's try to look for a state that's not in our finite tree. Our infinite tree theoretically has all the states:

paip.core=> (breadth-first-search 1 #(= % 33) binary-tree)
33

But not the finite tree:

paip.core=> (breadth-first-search 1 #(= % 33) (finite-binary-tree 15))
nil

With our finite tree, we are ready to use depth-first-search:

paip.core=> (with-verbose (depth-first-search 1 #(= % 9) (finite-binary-tree 15)))
;; Search: (1)
;; Search: (2 3)
;; Search: (4 5 3)
;; Search: (8 9 5 3)
;; Search: (9 5 3)
9

Note the search order; when (2 3) is explored, 2's successors (4 5) then come before 3 in the next call; this is the definition of DFS.

We can implement more advanced search strategies using this infrastructure. For example, suppose we have a heuristic that tells us which states to prioritize in order to get to the goal faster (akin to A* search on graphs). We can define a best-first-search that sorts the states according to our heuristic and tries the most promising states first ("best" as in "best looking among the current candidates", not as in "globally best").

First, let's define a couple of helper higher-order functions:

(defn diff
  "Given n, returns a function that computes the distance of its argument from n."
  [n]
  (fn [x] (Math/abs (- x n))))

(defn sorter
  "Returns a combiner function that sorts according to cost-fn."
  [cost-fn]
  (fn [new old]
    (sort-by cost-fn (concat new old))))

diff is a function generator like finite-binary-tree; it takes a target number n and returns a function that computes its parameter x's distance from n.

sorter returns a function that serves as the combiner for our search, based on a cost function. This is done by concatenating the two lists (successors of first state and the rest of the states) first, and then sorting them by the cost function. sorter is a powerful example of modeling with higher-order functions.
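A quick REPL sketch of sorter with the same (diff 9) cost function; it reproduces the second line of the search trace shown further below:

;; Successors of 1 are (2 3), no remaining states; sorted by distance from 9.
((sorter (diff 9)) '(2 3) '()) ;=> (3 2)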

With these building blocks in place, we can define best-first-search:

(defn best-first-search
  "Search lowest cost states first until goal is reached."
  [start goal?-fn successors cost-fn]
  (tree-search (list start) goal?-fn successors (sorter cost-fn)))

Once again, this is just like the earlier BFS and DFS - only the strategy (combiner) changes. Let's use it to find 9 again:

paip.core=> (with-verbose (best-first-search 1 #(= % 9) (finite-binary-tree 15) (diff 9)))
;; Search: (1)
;; Search: (3 2)
;; Search: (7 6 2)
;; Search: (6 14 15 2)
;; Search: (12 13 14 15 2)
;; Search: (13 14 15 2)
;; Search: (14 15 2)
;; Search: (15 2)
;; Search: (2)
;; Search: (5 4)
;; Search: (10 11 4)
;; Search: (11 4)
;; Search: (4)
;; Search: (9 8)
9

While it finds the state eventually, we discover that our heuristic is not a great match for this problem, as it sends the search astray. The goal of this post is to demonstrate the power of higher-order functions in building modular code, not to discover an optimal heuristic for searching in binary trees, though :-)

One last search variant before we're ready to wrap up. As we've seen with the infinite tree, sometimes the search space is too large and we have to compromise on which states to look at and which to ignore. This technique works particularly well if the target is not some single value that we must find, but rather we want to get a "good enough" result in a sea of bad options. We can use a technique called beam search; think of a beam of light a flashlight produces in a very dark room; we can see what the beam points at, but not much else.

Beam search is somewhat similar to our best-first-search, but after combining and sorting the list of states to explore, it only keeps the first N, where N is given by the beam-width parameter:

(defn beam-search
  "Search highest scoring states first until goal is reached, but never consider
  more than beam-width states at a time."
  [start goal?-fn successors cost-fn beam-width]
  (tree-search (list start) goal?-fn successors
               (fn [old new]
                 (let [sorted ((sorter cost-fn) old new)]
                   (take beam-width sorted)))))

Once again, higher-order functions at play: as its combiner, beam-search creates an anonymous function that sorts the list based on cost-fn, and then keeps only the first beam-width states on that list.

Exercise: Try to run it - what beam width do you need to set in order to successfully find 9 using our cost heuristic? How can this be improved?

Conclusion

This post attempts a code-walkthrough approach to demonstrating the power of higher-order functions. I always found this particular example from PAIP very elegant; a particularly powerful insight is the distilled difference between DFS and BFS. While most programmers intuitively understand the difference and could write down the pseudo-code for both search strategies, modeling the problem with higher-order functions lets us really get to the essence of the difference - concat vs. prepend as the combiner step.

Permalink

Laziness and chunking

Laziness is a central concept in the handling of sequences in Clojure. Chunking comes along as an efficiency measure. Surprisingly, at the level of implementation we are looking at, very little needs to be done; laziness is defined mostly in the Clojure code that builds clojure.core. We’ll take a look at what is needed at the bottom to support laziness and chunking.

Introduction

Laziness and chunking permeate the sequence machinery in Clojure. There are numerous resources explaining the general concept. (Searching around for those resources, one will discover these topics are a source of confusion for beginners.) For our purposes, Laziness in Clojure will suffice.

There are several useful exercises to prepare for what follows:

  • Search core.clj in the Clojure source code for ‘lazy’ and ‘chunk’.
  • Use the Cheatsheet. The section ‘Creating a Lazy Seq’ seems promising. Click on any function there to get the doc; the doc page has a link to the source. Note that no function with ‘chunk’ in its name is listed; they are not commonly used.

In the Clojure source

Look for lazy-seq in the Clojure source code. The macro lazy-seq turns its argument into (fn* [] body) and passes that to the constructor for LazySeq. lazy-seq occurs in the definitions of concat, map, filter, take, take-while, drop, drop-while, … . You get the idea. (I’d like to point out lazy-cat; notice this is a statement, not a question.)

One of the simplest uses of lazy-seq is in repeatedly:

(defn repeatedly
  "Takes a function of no args, presumably with side effects, and
  returns an infinite (or length n if supplied) lazy sequence of calls
  to it" 
  {:added "1.0"
   :static true}
  ([f] (lazy-seq (cons (f) (repeatedly f))))
  ([n f] (take n (repeatedly f))))

I can guarantee you do not want to try to realize an infinite sequence. Without the lazy-seq, this would result immediately in an infinite recursion.
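A small illustration of that laziness (output will vary, since rand-int is random):

;; take forces only three calls to rand-int; the rest of the infinite
;; sequence is never realized.
(take 3 (repeatedly #(rand-int 10)))
;;=> (4 0 7), for example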

You will note in the Clojure code that lazy-seq and chunking appear frequently together. Here is a piece of the code for the function map:

 ([f coll]
   (lazy-seq
    (when-let [s (seq coll)]
      (if (chunked-seq? s)
        (let [c (chunk-first s)
              size (int (count c))
              b (chunk-buffer size)]
          (dotimes [i size]
              (chunk-append b (f (.nth c i))))
          (chunk-cons (chunk b) (map f (chunk-rest s))))
        (cons (f (first s)) (map f (rest s)))))))

A lot of chunkiness in there. chunked-seq?, chunk-first and the rest are defined right after lazy-seq:

(defn ^:static ^clojure.lang.ChunkBuffer chunk-buffer ^clojure.lang.ChunkBuffer [capacity]
  (clojure.lang.ChunkBuffer. capacity))

(defn ^:static chunk-append [^clojure.lang.ChunkBuffer b x]
  (.add b x))

(defn ^:static ^clojure.lang.IChunk chunk [^clojure.lang.ChunkBuffer b]
  (.chunk b))

(defn ^:static ^clojure.lang.IChunk chunk-first ^clojure.lang.IChunk [^clojure.lang.IChunkedSeq s]
  (.chunkedFirst s))

(defn ^:static ^clojure.lang.ISeq chunk-rest ^clojure.lang.ISeq [^clojure.lang.IChunkedSeq s]
  (.chunkedMore s))

(defn ^:static ^clojure.lang.ISeq chunk-next ^clojure.lang.ISeq [^clojure.lang.IChunkedSeq s]
  (.chunkedNext s))

(defn ^:static chunk-cons [chunk rest]
  (if (clojure.lang.Numbers/isZero (clojure.lang.RT/count chunk))
    rest
    (clojure.lang.ChunkedCons. chunk rest)))
  
(defn ^:static chunked-seq? [s]
  (instance? clojure.lang.IChunkedSeq s))

We’ll discuss the underlying interfaces and classes below.
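
These helpers are ordinary (if rarely used) functions, so you can assemble a chunked seq by hand at the REPL, mirroring what map does internally:

(let [b (chunk-buffer 3)]       ; room for 3 elements
  (chunk-append b 1)
  (chunk-append b 2)
  (chunk-append b 3)
  (chunk-cons (chunk b) nil))   ; nil means nothing follows the chunk
;; => (1 2 3), an instance of clojure.lang.ChunkedCons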

We can glean a few clues about chunking by looking at the map code. There are two cases depending on whether the sequence we are going to map over is chunky or smooth (had to be said). The smooth case is what you think map should do: create a sequence with f applied to each element. Defined recursively as:

(cons (f (first s)) (map f (rest s)))

Note that laziness is crucial here. f will be applied to (first s) when this node is realized, but the recursive call to map results in a lazy sequence again, so f will not be applied until the next value is required.

For the chunking piece, we see a parallel:

(chunk-cons (chunk b) (map f (chunk-rest s)))

That b is first filled with the result of calling f on every item in the first chunk of the chunked sequence:

(dotimes [i size]
  (chunk-append b (f (.nth c i))))

Thus b plays the role of (f (first s)). This is the essence of chunking. Rather than applying f one element at a time, do a number of them all at once. f may get called more times than it would on a non-chunked basis, but presumably this is a price you are willing to pay for avoiding the overhead of creating sequence elements for all the items in the chunk.
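
You can observe the trade-off at the REPL. A vector hands map a chunked seq whose chunks are its 32-wide array nodes, so asking for a single element still runs f over the whole first chunk:

(first (map #(do (println %) %) (vec (range 40))))
;; prints 0 through 31, then returns 0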

In the basement

Let’s dig in. LazySeq is quite easy, ignoring a few distractions. (LazySeq does not derive from ASeq, so it has to supply all the goodies it would otherwise inherit. I’ll leave off the implementation code for System.Collections.IList and System.Collections.ICollection. Boring, really.)

Easy does not equate to obvious.

[<Sealed; AllowNullLiteral>]
type LazySeq private (m1, fn1, s1) =
    inherit Obj(m1)
    let mutable fn: IFn = fn1
    let mutable s: ISeq = s1
    let mutable sv: obj = null

    private new(m1: IPersistentMap, s1: ISeq) = LazySeq(m1, null, s1)

    new(fn: IFn) = LazySeq(null, fn, null)

The only public constructor takes an IFn. Once you get around to needing a value from this sequence, fn1.invoke() will be called to generate … something. At that time, fn1 will be set to null – we are done with it. Doing so flags that this LazySeq has been realized (the Clojure function realized? called on it will return true).

interface IPending with
    member _.isRealized() = isNull fn
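
This is exactly what realized? reports from the Clojure side (shown here on JVM Clojure; per the code above, the CLR implementation answers the same way):

(def xs (lazy-seq (do (println "forced!") (list 1 2 3))))
(realized? xs)  ;; => false, fn has not been invoked yet
(first xs)      ;; prints "forced!" and returns 1
(realized? xs)  ;; => true, fn has been invoked and nulled out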

The value that fn1.invoke() returns is cached temporarily in sv. Note this is an Object, not necessarily an ISeq. We are only part of the way there. This invocation and field mutation are done in member sval:

member _.sval() : obj =
    if not (isNull fn) then
        sv <- fn.invoke ()
        fn <- null

    match sv with
    | null -> upcast s
    | _ -> sv

The if expression does the invocation if it hasn’t been done already. The match returns either sv or s. You have to see the rest of the code (below) to piece this together, but in essence if sv is not null, then we have not gone all the way to get a sequence. If sv is null, then s holds the sequence. (Which could be null if the sequence is empty.)

Where does sval get called? From seq():

    interface Seqable with

        [<MethodImpl(MethodImplOptions.Synchronized)>]
        override this.seq() =

            this.sval () |> ignore

            if not (isNull sv) then

                let rec getNext (x: obj) =
                    match x with
                    | :? LazySeq as ls -> getNext (ls.sval ())
                    | _ -> x

                let ls = sv
                sv <- null
                s <- RT0.seq (getNext ls)

            s

Why does this important action (calling sval) occur here, and what does it imply? If any of the Clojure sequence functions need something from us, either to process an element or even just to check if we are empty, they will call seq on us. And within LazySeq itself, all the ISeq methods call seq():

    interface ISeq with
        member this.first() =
            (this :> ISeq).seq () |> ignore
            if isNull s then null else s.first ()

        member this.next() =
            (this :> ISeq).seq () |> ignore
            if isNull s then null else s.next ()

        member this.more() =
            (this :> ISeq).seq () |> ignore

            if isNull s then upcast PersistentList.Empty else s.more ()

These are straightforward. But what is seq() doing? It calls sval for the potential side-effect of calling fn1 to realize the sequence. At that point, if sv is null, we have our sequence in s. However, if sv is not null, we need to do a little more work. We grab sv’s value, set sv to null to indicate we will have computed the final sequence, then call our little internal function getNext, a recursive loop to work through a potential chain of LazySeqs until we get a ‘real’ sequence, or at least something we can call RT.seq() on. (Remember RT.seq()?) Now we are realized (fn1 has been invoked), and we have tracked through to a sequence. We are good to go.

You might ask if that little loop is necessary. First, nested LazySeqs are quite common. (Trust me.) By separating sval from seq, we can avoid unnecessary calls to seq on the intervening LazySeqs. Definitely worth it.

And that’s pretty much it for LazySeq. There are some cute consequences of some parts of the encoding. For example, if you want to add metadata via IObj.withMeta():

    interface IObj with
        override this.withMeta(meta: IPersistentMap) =
            if obj.ReferenceEquals((this :> IMeta).meta (), meta) then
                this :> IObj
            else
                LazySeq(meta, (this :> ISeq).seq ()) :> IObj

You can’t do that without realizing the LazySeq; see that call to seq(). This explains the one private constructor that takes a PersistentMap and an ISeq. The LazySeq it constructs has fn1 set to null (we’re realized), sv set to null (we’ve tracked through to our ‘real’ sequence), and s set to the ‘real’ sequence.

Chunking

All that work and we haven’t gotten to chunking yet. The basics are straightforward. A collection indicates support for chunking by implementing the IChunkedSeq interface.

[<AllowNullLiteral>]
type IChunkedSeq =
    inherit ISeq
    inherit Sequential
    abstract chunkedFirst: unit -> IChunk
    abstract chunkedNext: unit -> ISeq
    abstract chunkedMore: unit -> ISeq

which looks a lot like ISeq. Think of a chunked sequence as, well, a sequence of chunks, where a chunk is one of these:

[<AllowNullLiteral>]
type IChunk =
    inherit Indexed
    abstract dropFirst: unit -> IChunk
    abstract reduce: f: IFn * start: obj -> obj

By inheriting Indexed, it picks up count() and two flavors of nth, giving us direct access to the count() number of elements in the buffer. We usually build a chunk by first creating a ChunkBuffer:

[<Sealed>]
type ChunkBuffer(capacity:int) =

    let mutable buffer : obj array = Array.zeroCreate capacity
    let mutable cnt : int = 0

    interface Counted with
        member _.count() = cnt

    member _.add(o:obj) = 
    buffer[cnt] <- o
        cnt <- cnt+1

    member _.chunk() : IChunk =
        let ret = ArrayChunk(buffer,0,cnt)
        buffer <- null
        ret

which allocates an array and allows adding elements to it. And then you call chunk() on it to create an ArrayChunk that implements IChunk.

[<Sealed>]
type ArrayChunk(arr:obj array,offset:int ,iend:int) =
    
    new(arr,offset) = ArrayChunk(arr,offset,arr.Length)


    interface Counted with
        member _.count() = iend-offset


    interface Indexed with
        member _.nth(i) = arr[offset+i]
        member this.nth(i,nf) =
            if 0 <= i && i < (this:>Counted).count() then  
                (this:>Indexed).nth(i)
            else
                nf

    interface IChunk with
        member _.dropFirst() =
            if offset = iend then
                raise <| InvalidOperationException("dropFirst of empty chunk")
            else
                ArrayChunk(arr,offset+1,iend) 

        member _.reduce(f,start) =
            let ret = f.invoke(start,arr[offset])
            let rec step (ret:obj) idx =
                match ret with  
                | :? Reduced -> ret
                | _ when idx >= iend -> ret
                | _ -> step (f.invoke(ret,arr[idx])) (idx+1)
            step ret (offset+1)

Note that an ArrayChunk has count() and nth(*) for getting its elements. dropFirst() gives a new ArrayChunk on the same array with a new starting point in the array. Reduction we will talk about in a later post.
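
Since chunk-first is an ordinary function, you can poke at a chunk directly via interop at the REPL (JVM Clojure shown):

(let [c (chunk-first (seq (vec (range 5))))]
  [(count c) (.nth c 0) (count (.dropFirst c))])
;; => [5 0 4]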

The last piece of the puzzle is ChunkedCons:

type ChunkedCons(meta:IPersistentMap, chunk:IChunk, more:ISeq) =
    inherit ASeq(meta)

    new(chunk,more) = ChunkedCons(null,chunk,more)

    interface IObj with 
        override this.withMeta(m) =
            if obj.ReferenceEquals(m,meta) then
                this
            else
                ChunkedCons(m,chunk,more)

    interface ISeq with
        override _.first() = chunk.nth(0)
        override this.next() =
            if chunk.count() > 1 then
                ChunkedCons(chunk.dropFirst(),more)
            else 
                (this:>IChunkedSeq).chunkedNext()
        override this.more() =
            if chunk.count() > 1 then
                ChunkedCons(chunk.dropFirst(),more)
            elif isNull more then
                PersistentList.Empty
            else
                more

    interface IChunkedSeq with
        member _.chunkedFirst() = chunk
        member this.chunkedNext() = (this:>IChunkedSeq).chunkedMore().seq()
        member _.chunkedMore() =
            if isNull more then
                PersistentList.Empty
            else 
                more

It gets most of its goodness from ASeq and otherwise looks somewhat like Cons except that its first ‘element’ is actually a chunk. first grabs the nth(0) element of that chunk, while next() does a dropFirst to move on, unless we’ve reached the end of the leading chunk, in which case we move to what follows.

Our chunky collections

Only three collections down in the basement (other than ChunkedCons) implement IChunkedSeq: Range, LongRange, and PersistentVector.

I have LongRange and Range completed, but this part of the code is too messy to be very edifying. Chunking is actually used in an essential manner in these classes, however. Here is one snippet to give you a flavor:

  let arr: obj array = Array.zeroCreate Range.CHUNK_SIZE
  let lastV, n = fillArray startV arr 0
  chunk <- ArrayChunk(arr, 0, n)

fillArray fills values into the array up to size Range.CHUNK_SIZE, and returns the next starting value and how many elements were put into the array. (That might be less than Range.CHUNK_SIZE if we are at the end of the range.) And then we create a chunk.

We’ll cover the PersistentVector implementation of this when we get to that class. That’s a lot more fun, actually, because a PersistentVector essentially is implemented directly in a chunky manner, so that mapping to IChunkedSeq is very natural.

Enough.

Permalink

Clojure Deref (Feb 3, 2023)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem. (@ClojureDeref RSS)

Highlights

Today we finalized the Clojure/conj 2023 program and notified speakers. We had 100 talks submitted and accepted 23, but we easily could have accepted 70 or 80 of those talks - there were so many interesting experience reports, libraries, and ideas included that we wish we could have included more of them. Please keep submitting these talks to other Clojure and non-Clojure confs in the future! We will start to release the program and more info in the next week or two.

Libraries and Tools

New releases and tools this week:

  • Clojure port written in Haxe targeting multiple platforms -

  • dart-sass-clj - An embedded dart-sass compiler and watch task for Clojure

  • shadow-bare-bones - A mini-project to quickly get started with ClojureScript for hacking on a browser app

  • clojure-deps-edn - User level aliases and Clojure CLI configuration for deps.edn based projects

  • contajners 1.0.0 - An idiomatic, data-driven, REPL friendly clojure client for OCI container engines

  • mina - Helidon Nima ring adapter for clojure

  • fun-map 0.5.114 - a map blurs the line between identity, state and function

  • cnn-chrome-extension - Read CNN news in Chrome Extension

  • backgammon - Backgammon for 2 players on 1 device

  • clojure-test 2.0.165 - A clojure.test-compatible version of the classic Expectations testing library

  • matcher-combinators 3.8.0 - Library for creating matcher combinator to compare nested data structures

  • pathom3 2023.01.31-alpha - A library for navigating data

  • runway - Coding on the fly, from take-off to landing, with a tool.deps reloadable build library

  • lein2deps 0.1.0 - Lein project.clj to deps.edn converter

  • di 2.0.0 - DI is a dependency injection framework

  • bosquet - LLMOps tools to build, chain, test, evaluate and deploy prompts for GPT and other

  • fulcro 3.6.0 - A library for development of single-page full-stack web applications in clj/cljs

  • rewrite-clj 1.1.46 - Rewrite Clojure code and edn

  • bbin 0.1.9 - Install any Babashka script or project with one command

  • honeysql 2.4.972 - Turn Clojure data structures into SQL

  • waterfall 0.1.34 - Apache Kafka clients in idiomatic Clojure

  • tools.build 0.9.3 - Clojure builds as Clojure programs

  • calva 2.0.329 - Clojure & ClojureScript Interactive Programming for VS Code

  • squint 0.0.10 - ClojureScript syntax to JavaScript compiler

  • fulcro-rad-semantic-ui 1.3.4 - Semantic UI Rendering Plugin for RAD

  • fulcro-rad 1.4.5 - Fulcro Rapid Application Development

  • http-client 0.0.3 - HTTP client for Clojure and babashka built on java.net.http

  • neil 0.1.52 - A CLI to add common aliases and features to deps.edn-based projects

  • Cursive 1.12.8-eap1 - The Clojure(Script) IDE that understands your code

  • uix 0.8.1 - Idiomatic ClojureScript interface to modern React.js

Permalink

SpotlightBoard.com: Calendar for standup and improv

My latest programming project is spotlightboard.com. This is a website to aggregate all the standup and improv events happening in Amsterdam. Users can add their own events and find events based on the calendar and filtering by tags. There's also a comment section for every event and a forum for discussions.

For this project I tried something different than my usual Clojure programming tools. Spotlightboard is made with Elixir and the Phoenix web framework. Because Elixir uses immutable data structures and functional programming, it is very easy for Clojure programmers to get started with. It is only a pity that its syntax is not sexp based.

Read more

Permalink

Elm 2022, a year in review

2022 has been another exciting year for Elm, with many interesting packages, blog posts, videos, podcasts, demos, tutorials, applications, and so on.

Let's have a look at it in retrospect.

This is a list of relevant materials. I am sure there is stuff that I missed. Send me a DM in case you think there is something that I should add or remove.

At the bottom, you will also find some of the companies that hired Elm developers in 2022 and a partial list of companies that use Elm.

If you want to keep up with Elm's related news:

You can also check the previous Elm 2021, a year in review.

Here we go 🚀

January 2022

Elm Radio episode #47 - What's Working for Elm

Simple code is different from simplistic code: Elm vs JavaScript

  • January 10th - Announcement Advent of Code 2021 by Ryan Haskell-Glatz (A handful of videos teaching Elm through Advent of Code. With this blog post he recaps each of the 7 days he recorded.)

Advent of Code 2021

Introduction to Elm (with Lindsay Wardell) | Some Antics

  • January 12th - Video Functional Programming for Pragmatists - A presentation by Richard Feldman at GOTO Copenhagen 2021 (Do you care more about how well code works than how conceptually elegant it feels? Are you more interested in how effectively you can build and maintain software than how buzzword-compliant it is? Then this is the talk for you! People like functional programming for different reasons. Some like it for the conceptual elegance, or the mathematical properties. Richard? He likes to build things. He likes it when the software he builds works well and is easy to maintain. For the past decade he's been using functional programming both professionally and as a hobbyist, and has found it has helped him ship higher quality software in less time than in the decade he spent writing object-oriented code before.)

Functional Programming for Pragmatists

  • January 15th - Project Elm Search by Henrique Buss (For those using Mac's, now there is an Elm Search extension for Raycast. Helping you to more quickly find packages, functions, and more!)

Elm Search

Lambda Calculus: an Elm CLI

Elm Radio episode #48 - If It Compiles It Works

Elm Part 1 - Setup Elm and Write Your First Program

Rethinking Maybes for Elm beginners

  • January 23rd - Announcement McMaster Start Coding has taught over 22,222 kids! by McMaster Start Coding (Attention #Elm coders! Thanks to your participation, McMaster Start Coding has taught over 22,222 kids! To celebrate we are hosting a contest, and winners get free entry into a summer camp! We will select 2 winners every week, for 5 weeks straight!)

McMaster Start Coding has taught over 22,222 kids!

Wordle in Elm in 1h13m17s (PB, timelapse)

  • January 29th - Post Twitter thread about components by Duncan Malashock (In @elmlang, your UI elements don't need to be "components" the way they might be in React. So what should they be? Here are a number of common patterns for different requirements.)

Twitter thread about components

Elm vs HyperScript - A Wordle implementation

Elm Radio episode #49 - Optimizing Performance with Robin Hansen

February 2022

Learning Elm while launching a project, good idea?

Differences between TypeScript and Elm

The Ideal Programming Language

Elm Radio episode #50 - Large Elm Codebases with Ju Liu

Software Unscripted: Interactive Style Guides

  • February 16th - Announcement IntelliJ Elm Plugin future by clojj (Clojj announced the renovation of the IntellyJ Elm Plugin and already had some success with elm-review integration, Lamdera project support)

  • February 17th - Post Utilizing Elm in a Web Worker by Lindsay Wardell

Utilizing Elm in a Web Worker

  • February 17th - Post GraphQL and Elm by Ryan Haskell-Glatz (Making inputs that don't bust your face.)

GraphQL and Elm

  • February 21st - Video Writing a MMORPG game in Elm on both client and server - A presentation by Martin Janiczek at NDC Oslo 2021 (For the better part of last year I've been writing a multiplayer browser game, with both frontend and backend written in the Elm language. I'll talk about my setup, the good, bad and ugly of this approach, anecdotes from development, what surprised me and what the future holds. Who said Elm's for frontend only‽)

Writing a MMORPG game in Elm on both client and server

Meteor with Elm starter kit

  • February 21st - Game Katakana Wordle by Flavio Corpa (Learn a new KATAKANA word EVERY day!)

Katakana Wordle

elm interreactor - Clickable compiler messages for the lazy

Minidenticons ported to Elm

React to Elm: Migrating React to Elm in Three Ways

Tail recursion, but modulo cons

Elm Radio episode #51 - Primitive Obsession

March 2022

elm-map

Elm Radio episode #52 - Category Theory in Elm with Joël Quenneville

  • March 15th - Announcement New elm-review package releases by Jeroen Engels

  • March 15th - Announcement New Elm-pair release adds support for Visual Studio Code by Jasper Woudenberg

  • March 16th - Tutorial Elm - The Complete Guide (a web development video tutorial) by Carlos Saltos (I've just created a new class for increasing more Elm adoption, please help to share it with your friends that want to learn new better ways to create nice web sites using Elm)

  • March 17th - Video Hobby scale: making web apps with minimal fuss - A presentation by Martin Stewart (Creating web apps requires setting up a lot of infrastructure. Configuring a database, managing hosting, writing deploy scripts, and handling communication between the client and server are only some of the many things that need to be done. Unfortunately for many projects, this level of control isn't needed and is instead a burden. It doesn't need to be this way though! In this presentation I'll give an overview of how you can use Elm programming language and the Lamdera framework to create web apps with little effort as well as show some of the apps I've created using it.)

Hobby scale: making web apps with minimal fuss

Elm at Talenteca

Trying your luck with Elm

My Little Functor

Familiarity or Guarantees? Functional Programming for the front-end

Elm Radio episode #53 - Dead Code

Code Azimutt feature with Elm: collapse columns

  • March 31st - Project Platformer physics system by Andrea Peltrin (I’ve done a proof-of-concept platformer physics system using pixel-perfect values in #Elm. This was hella fun!)

Platformer physics system

April 2022

  • April 2nd - Video Friday Hacks #221: Why bet the company on Elm for both front and backend? - A presentation by Choon Keat (Although Elm usually runs on the browser, this talk explains why it’s actually a great choice for building the backend too – and why it can be a perfect choice for a startup. We will walk through how it even works on the backend, and how wonderful life can be in such an environment!)

Friday Hacks #221: Why bet the company on Elm for both front and backend?

Easy dependency integration in Kotlin/JS using the "Elm ports" technique

Code Azimutt feature with Elm: table & column notes

Extending Railway Oriented Programming in Elm to Make Complex User Flows Simple - Grahame #FnConf 22

Elm Radio episode #54 - Developer Productivity

  • April 14th - Video Introduction to Elm-Lang - A presentation by Shalk Venter and Gary (A presentation organized by Front-end Development South Africa)

Introduction to Elm-Lang

Property based testing: primer and examples

Software Unscripted: Static Analysis with Jeroen Engels

Web Programming In Elm - Getting Started

Elm Radio episode #55 - Use the Platform

May 2022

Offline Elm CSV to JSON GUI application in one video (elm-ui)

Applications as Libraries: Building elm-book and elm-admin

Utilizing Native Dialog in Elm

Elm Radio episode #56 - elm-book with Georges Boris

  • May 11th - Video A janitor for Elm - A presentation by Rupert Smith at Elm Online Meetup

A janitor for Elm

Game programming and creative coding with Elm

Elm Programming Quick Start - For Beginners (Functional Programming)

Introduction to Elm programming language for React developers

Elm + Heroicons = Love

Software Unscripted: Software Design with Dillon Kearns

Celebrating 10 years of Elm

The store pattern in Elm

Elm Radio episode #57 - State of Elm 2022

  • May 30th - Project Gren 0.1.0 by Robin H. Hansen (Today I'm announcing the first release of Gren. An Elm-like language that intends to support both frontend and backend development.)

June 2022

  • June 1st - Announcement Release of Simple Iot v0.2.0 by Cliff Brake (Simple Iot is a platform that enables you to add remote sensor data, telemetry, configuration, and device management to your project or product.)

Release of Simple Iot v0.2.0

  • June 6th - Announcement A Monkey Interpreter by Dwayne Crooks (An Elm interpreter for Monkey, a programming language designed by Thorsten Ball.)

  • June 6th - Elm Radio episode #58 - Elm Store Pattern "Martin Janiczek joins us to discuss a pattern for declaratively managing loading state for API data across page changes."

Elm Radio episode #58 - Elm Store Pattern

Hobby scale: making web apps with minimal fuss

Functional and Object-Oriented Programming with Lindsay Wardell

Speak & Spell reproduction in Elm

Elm Radio episode #59 - Wrap Early, Unwrap Late

Graph Bang

Understanding UI Components in Elm

My first Functional Programming app

  • June 25th - Announcement A milestone for Elm Catalog by Alex Korban (Elm Catalog crossed the threshold of 1300 packages (not counting pre-release, internal etc)! Elm Catalog now lists 1307 Elm 0.19.x packages & 110 Elm tools across all categories.)

  • June 26th - Video Effect Systems for Mortals - A presentation by Eduardo Morango at Elm Meetup Brazil

Effect Systems for Mortals

  • June 26th - Video Declarative Server State - A presentation by Dillon Kearns at Elm Meetup Brazil (Elm Meetup Brazil welcomed Dillon Kearns as he demoed some of the upcoming features on elm-pages v3.)

Declarative Server State

A Chip-8 emulator

July 2022

The Essence of Functional Programming

Elm Radio episode #60 - Building Trustworthy Tools

Elm Radio episode #61 - Exploring a New Form API Design

  • July 20th - Video Build Elm Apps (with Lindsay Wardell) | Some Antics - A presentation by Ben Myers (Previously on Some Antics, we dove into the syntax for Elm, a functional programming language that compiles down to JavaScript, with friend of the show Lindsay Wardell — but we weren't able to get to application development in time. Join us as Lindsay returns to the stream for a sequel on building Elm apps!)

Build Elm Apps (with Lindsay Wardell) | Some Antics

  • July 26th - Project Travelm-Agency 3.0.0 by Andreas Molitor (A new major version of Travelm-Agency 52, an internationalization code generator for Elm.)

  • July 26th - Game Type Signature by Andy

Type Signature

Nethys Search, a search engine for Archives of Nethys

  • July 28th - Video Static analysis tools love pure FP - A presentation by Jeroen Engels at Lambda Days 2022 (Functional programming languages have many benefits that are often explained from the developer's point of view, such as how easy it is to maintain a codebase. But we rarely look at it from the point of view of tools. Static analysis tools try to infer meaning and intent in order to find bugs and code smells, but they can be very hard to write depending on the features of the analyzed language. We will look at how explicitness, the lack of side-effects and dynamic constructs in pure FP languages empower tools to trivially achieve surprising results that would be nearly impossible with other paradigms.)

Static analysis tools love pure FP

  • July 28th - Video Towards Smart E-Learning Mentor Dispatch - A presentation by Christopher Schankula at Lambda Days 2022 (The McMaster Start Coding program has taught over 26,000 K-12 students programming using Elm over the last five years. Collectively, they have compiled nearly 4 million programs in our online learning platform. The COVID-19 pandemic has necessitated the switch to a fully virtual setup, which continues as schools have strict visitor limits. Virtual learning also necessitates upgrades to the online code compilation and mentoring software we use. In particular, we need to determine when a student is stuck so as to be able to make better use of mentor resources and proactively help students who are struggling. This presentation details data mining efforts to predict metrics such as the length of time that a student is likely to struggle if they are receiving an error in their program, in order to dispatch mentors and help the students who need the most attention.)

Towards Smart E-Learning Mentor Dispatch

  • July 28th - Video Functional Parsing for Novel Markup Languages - A presentation by James Carlson at Lambda Days 2022 (With functional languages like Elm that target the browser, one can parse and render both classical and novel markup languages in real time, providing authors a pleasant, zero-config tool for writing and distributing mathematical text. The talk will outline how one designs and builds a fault-tolerant parser that provides high-quality, real-time error messages in-place in the rendered text. As case studies we consider two markup languages: MiniLaTeX, a subset of LaTeX, and L1, an experimental markup with a syntax inspired by Lisp.)

Functional Parsing for Novel Markup Languages

  • July 28th - Video An Enigma Machine in Elm - A presentation by Ju Liu at Lambda Days 2022 (The Enigma machine was an encryption device that was used by German forces during WW2 to send secret messages. In this talk, we will explain exactly how the encryption process works and go through an implementation of it in Elm. We will demonstrate how to encrypt and decrypt a message. Then we will go over the weaknesses that made it exploitable by Alan Turing and the other fine folks in Bletchley Park. By the end of the talk, you'll be able to point out all the inaccuracies in "The Imitation Game".)

An Enigma Machine in Elm

August 2022

Elm Radio episode #62 - elm-test v2 with Martin Janiczek

  • August 3rd - Project Airsequel by Adrian Sieber (First public beta release)

  • August 3rd - Post Day 1 of Elm by Eke Enyinnaya Diala

  • August 7th - Post GSoC 2022 @ Kodi: Mid-Term Evaluation by Mohd. Shaheer

  • August 10th - Project Chart by Trade Simplr (Interactive Trading platform to trade and analyze in the financial markets, built with Elm.)

Chart

  • August 10th - Announcement Elm in Parcel (Better support for Elm has been released in Parcel, including multiple entry points.)

  • August 12th - Video Introduction to Elm Part 1 - A presentation by Programming from A to Z

Introduction to Elm Part 1

Introduction to Elm - Building a Basic Web Application

Reweave

Elm Radio episode #63 - The Root Cause of False Positives

Showing a Playing Card From a Single Suite With Elm

Introduction to Elm

  • August 24th - Game Sudoku & experiments by David Klemenc (The games are a result of inspiration from Sebastian Lague videos.)

Sudoku & experiments

  • August 29th - Announcement Alpha release of ElmLand by Ryan Haskell-Glatz (Ryan Haskell-Glatz announced the release of the latest alpha release of @ElmLand_ with a brand new @vite_js powered website, a guide designed for JS folks new to @elmlang, a dark mode AND a cool glowing thingie)

Alpha release of ElmLand

Elm Radio episode #64 - Projects We Wish We Had Time For

September 2022

Getting started with elm-watch

Elm Radio episode #65 - elm-watch with Simon Lydell

  • September 13th - Game Space Sim! by Wolfgang Schuster (Getting closer to a point with https://github.com/wolfadex/space-sim where it's actually playable, though still basically alpha. Right now it's basically just a simulation you watch.)

Space Sim!

Software Unscripted: Lamdera with Mario Rogic

scripta.io

CodeGen with Types, for Humans, by Humans

Diagrammar: Simply Make Interactive Diagrams

JUXT Cast S4E4 - Strange Loop Edition: A chat with Jared M. Smith

Elm Radio episode #66 - elm-codegen with Matthew Griffith

  • September 29th - Post Where JavaScript is headed in 2022 by Matthew Tyson (Which JavaScript frameworks, features, and tools do developers favor, and which are on the way out? Let’s look at the latest State of JavaScript survey results.)

October 2022

[elm] Building a Simple Calculator

Elm Radio episode #67 - Elm at a Billion Dollar Company with Aaron White

  • October 11th - Video Functional Programming in Vite with Elm - A presentation by Lindsay Wardell at ViteConf 2022 (Elm is a delightful language for building reliable web applications. In this talk, we'll explore what Elm is, how it compares to Javascript, and how we can incorporate it into a Vite-based application.)

Functional Programming in Vite with Elm

Yet another way to manage shared state in Elm SPAs

Elm 3D Pool Game Collaboration

  • October 24th - Elm Radio episode #68 - Elm and ADD "We discuss how Elm is a powerful tool for people with ADD, and how lessons learned from ADD can benefit people who don't have ADD."

Elm Radio episode #68 - Elm and ADD

Fullstack happiness using the regal stack

Stringy - The string transformer

What’s new in elm-watch 1.1.0

November 2022

Elm Radio episode #69 - Types vs. Tests

Designing for Programmer Delight

  • November 13th - Project Mammudeck by Bill St. Clair (I've been having a blast writing Elm again, after a two-year hiatus. I wrote billstclair/elm-dynamodb and am using it to make a DynamoDB.AppState module, so I can have global persistent state for #mammudeck)

  • November 14th - Project Announcing VendrInc/Elm-GQL! by Matthew Griffith (Matt Griffith is announcing vendrinc/elm-gql! A tool for generating Elm modules from GraphQL queries and mutations.)

  • November 15th - Project Help test the new npm elm package! by Simon Lydell (Me, @supermario and @evancz (with the help of some more folks) are working on a new version of the elm npm package.)

  • November 15th - Post Data Modeling Resources in Elm by Joël Quenneville (Some of the best Elm data modeling resources around the web.)

  • November 15th - Game Liikennematto released into early access! by Matias Klemola

Liikennematto released into early access!

  • November 16th - Game Drops by Gábor Kerekes (Drops is an implementation of Puyo Puyo by Dividat.)

Drops

Virtual DOM: What problem does it solve?

Elm, If it compiles it works!

Elm language for learning functional programming

Elm Radio episode #70 - elm-gql with Matthew Griffith

  • November 22nd - Post Contributing to devenv by Brian J. Cardiff

  • November 28th - Post Getting Tailwind to Work with Elm Book by Jesse Warden

  • November 30th - Announcement Elm-Widgets by Georges Boris (Elm-Widgets, a collection of stateless widgets designed to make your experience building elm applications easier and even more delightful, was announced at the Elm Remote Meetup)

Elm-Widgets

Thought experiment: Hiding implementation types in Elm

December 2022

  • December 1st - Video Day 1: Learn Elm with Advent of Code - A presentation by Ryan Haskell-Glatz (Advent of Code 2022, Day 1! This series is designed for anyone new to functional programming. We use a language called Elm to complete these puzzles together. All the solutions and links are available below!)

Day 1: Learn Elm with Advent of Code

  • December 1st - Project Fractale by Adrien Dulac (Fractale is part of the “productivity” or “asynchronous communication” tools family. As a developer, I am satisfied with the organization and collaboration tools built around Git, such as Github/Gitlab/Gitea. But for human beings who are not developers or strangers to these tools, they remain too complex and ill adapted...)

Fractale

  • December 2nd - Video Day 2: Learn Elm with Advent of Code - A presentation by Ryan Haskell-Glatz (Advent of Code 2022, Day 2! This series is designed for anyone new to functional programming. We use a language called Elm to complete these puzzles together. All the solutions and links are available below!)

Day 2: Learn Elm with Advent of Code

Day 4: Learn Elm with Advent of Code

  • December 4th - Video Day 3: Learn Elm with Advent of Code - A presentation by Ryan Haskell-Glatz (Advent of Code 2022, Day 3! This series is designed for anyone new to functional programming. We use a language called Elm to complete these puzzles together. All the solutions and links are available below!)

Day 3: Learn Elm with Advent of Code

  • December 5th - Video Day 5: Learn Elm with Advent of Code - A presentation by Ryan Haskell-Glatz (Advent of Code 2022, Day 5! This series is designed for anyone new to functional programming. We use a language called Elm to complete these puzzles together. All the solutions and links are available below!)

Day 5: Learn Elm with Advent of Code

Elm Radio episode #71 - Deliberate Practice

  • December 6th - Video Day 6: Learn Elm with Advent of Code - A presentation by Ryan Haskell-Glatz (Advent of Code 2022, Day 6! This series is designed for anyone new to functional programming. We use a language called Elm to complete these puzzles together. All the solutions and links are available below!)

Day 6: Learn Elm with Advent of Code

  • December 6th - Post Gaining insight into your codebase with elm-review by Jeroen Engels

  • December 7th - Video How to do Property based Testing Shrinkers Right - A presentation by Martin Janiczek at Haskell eXchange (Property-based testing (PBT) is a valuable tool in the functional programming world: it generates test inputs for you (finding tricky edge cases you wouldn't be able to find manually) and lets you specify and verify laws and invariants about your code with ease. Virtually all PBT tools nowadays shrink the failing inputs to a minimal (and thus much more helpful) counterexample before presenting it to you. Implementing shrinkers can be tricky, though. I'll walk through the common shrinking algorithms (seen in QuickCheck, Hedgehog, ScalaCheck, etc.), their inherent problems, and most importantly: how to implement shrinking in a way that doesn't suffer from them!)

How to do Property based Testing Shrinkers Right

Look Ma no graphics card! Software-based 3D rendering in Elm

  • December 8th - Video Day 8: Learn Elm with Advent of Code - A presentation by Ryan Haskell-Glatz (Advent of Code 2022, Day 8! This series is designed for anyone new to functional programming. We use a language called Elm to complete these puzzles together. All the solutions and links are available below!)

Day 8: Learn Elm with Advent of Code

  • December 8th - Project astro-integration-elm by Angus Findlay (I've created an Elm integration for Astrodotbuild, which lets you render Elm islands server-side!)

  • December 9th - Video Day 9: Learn Elm with Advent of Code - A presentation by Ryan Haskell-Glatz (Advent of Code 2022, Day 9! This series is designed for anyone new to functional programming. We use a language called Elm to complete these puzzles together. All the solutions and links are available below!)

Day 9: Learn Elm with Advent of Code

  • December 9th - Post Presenting Styleguide Colors by Tessa Kelly

  • December 11th - Video Day 10: Learn Elm with Advent of Code - A presentation by Ryan Haskell-Glatz (Advent of Code 2022, Day 10! This series is designed for anyone new to functional programming. We use a language called Elm to complete these puzzles together. All the solutions and links are available below!)

Day 10: Learn Elm with Advent of Code

Elm Radio episode #72 - 2022 Holiday Special

Hot reloading the Simple IoT UI

  • December 30th - Project Competition Tracking by Vladimir Kirienko (A web app for tracking gliding competitions using Haskell and Elm, with websockets!)

Competition Tracking

Some of the companies that hired Elm developers in 2022

For more job-related news, subscribe to the Elm Weekly newsletter or to the #jobs channel in the Elm Slack.

Partial list of companies that use Elm

AbletonAcimaACKOActiveStateAdrimaAJR InternationalAlmaAstrosatAvaAvettaAzaraBarmeniaBasiqBeautiful DestinationsBEC SystemsBekkBellroyBendyworksBernoulli FinanceBlue Fog TrainingBravoTranBrilliantBudapest SchoolBuildrCachixCalculoJuridicoCareRevCARFAXCariboucarwowCBANCCircuitHubCN Group CZCoinTrackingConcourse CIConsensysCornell TechCorvusCrowdstrikeCulture AmpDay OneDeepgramdiesdas.digitalDividatDriebitDripEmirateseSparkEXRFeaturespaceField 33FissionFlintFolqFordForsikringFoxhound SystemsFuturiceFörsäkringsGirotGenerativeGenesysGeoraGizraGWIHAMBSHatchHearkenhello RSEHubTranIBMIdeinIlluminateImprobableInnovation through understandingInsurelloiwantmynamejambitJobviteKOVnetKulkulLogisticallyLukoMetronome Growth SystemsMicrosoftMidwayUSAMimoMind GymMindGymNext DLPNLXNomalabNomiNoRedInkNovabenchNZ HeraldPermutivePhrasePINATAPinMeToPivotal TrackerPowerReviewsPractlePrimaRakutenRoompactSAVRScovilleScriveScrivitoSerenyticsSmallbrooksSnapviewSoPostSplinkSpotttStaxStowgaStructionSiteStudyplus For SchoolSymbalooTalendTallink & Silja LineTest DoublethoughtbotTravel PerkTruQuTWaveTylerUncoverUnisonVeevaVendrVerityVnatorVyW&W Interaction SolutionsWatermarkWebbhusetWejoininZaloraZEIT.IOZettle

This list is extracted from several sources, such as elm-companies, Stackshare.io, blog posts, videos, talks, etc.

This is all. See you in 2024!

❤️

Permalink

PO Sync Pocket Operator Sync App

My first app of the year is out, hooray! \o/ It's a simple app to sync pocket operator devices. It outputs a sync signal from your phone which you can plug into your pocket operator's left input to drive it using a 2.5mm male-to-male stereo audio cable. It works well with the p0k3t0 Sync Splitter.

You can get it for Android and iPhone:

PO Sync connected to a phone

This was a fun app to build. I made it because somebody left this review on one of my other apps on Google Play:

Using this for the PO sync feature. I like that most; everything else is okay... I think a great idea would be to make an app with just the PO sync feature and a tempo slider or wheel, plus an on/off

So I knew there was at least one person who wanted this app. It was simple to implement and I got to use my favourite programming language, ClojureScript. I love it when people need software that I know I can put together quickly. You can get the source code here:

https://github.com/chr15m/PocketSync

2023 is going to be the year of pocket operator apps for Dopeloop and me. I hope to make at least 4 new music apps. I'll post back here when I release them (and also to newsletter + Dopeloop subscribers).

Permalink

Clojure Deref (Jan 30, 2023)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem. (@ClojureDeref RSS)

Blogs and articles

Libraries and tools

New releases and tools this week:

  • jet 0.4.23 - CLI to transform between JSON, EDN, YAML and Transit using Clojure

  • calva 2.0.327 - Clojure & ClojureScript Interactive Programming for VS Code

  • mafs.cljs 0.1.0 - Reagent interface to the Mafs interactive 2d math visualization library

  • lein-gitlab-cart 1.0.0 - A plugin that enables Leiningen projects to access and deploy to GitLab package registries

  • clojure-lsp 2023.01.26-11.08.16 - Clojure & ClojureScript Language Server (LSP) implementation

  • carve 0.3.5 - Remove unused Clojure vars

  • recife 0.9.0 - A Clojure model checker

  • Datomic 1.0.6610 - On-Prem

  • mathbox.cljs 0.1.0 - Clojurescript extensions and utilities for Mathbox

  • build-uber-log4j2-handler 2.19.0 - A conflict handler for log4j2 plugins cache files for the tools.build uber task

  • clj-kondo-bb - Invoke clj-kondo from babashka scripts!

  • edamame 1.1.17 - Configurable EDN/Clojure parser with location metadata

  • pathom3 2023.01.24-alpha - A library for navigating data

  • hermes 1.0.914 - A library and microservice implementing the health and care terminology SNOMED CT

  • jsonista.jcs 1.0.9 - RFC 8785 JSON Canonicalization Scheme (JCS) for metosin/jsonista

  • geom 1.0.0 - 2D/3D geometry toolkit for Clojure/Clojurescript

  • ring-clr - ClojureCLR HTTP server abstraction

  • bblgum - An extremely tiny and simple wrapper around charmbracelet/gum

  • rich-comment-tests 1.0.0 - RCT turns rich comment forms into tests

  • cli 0.6.45 - Turn Clojure functions into CLIs!

  • process 0.4.16 - Clojure library for shelling out / spawning sub-processes

  • nbb 1.2.161 - Scripting in Clojure on Node.js using SCI

  • at-at 1.5.1 - Ahead-of-time function scheduler

  • antq 2.2.983 - Point out your outdated dependencies

  • babashka 1.1.171 - Native, fast starting Clojure interpreter for scripting

  • fulcro 3.6.0-RC5 - A library for development of single-page full-stack web applications in clj/cljs

  • fulcro-rad 1.4.4 - Fulcro Rapid Application Development

  • biff 0.6.0 - A Clojure web framework for solo developers

  • emmy - The Emmy Computer Algebra System

  • quickblog 0.2.3 - light-weight static blog engine for Clojure and babashka

  • neil 0.1.51 - a CLI to add common aliases and features to deps.edn-based projects

  • fs 0.2.15 - file system utility library for Clojure

  • asami 2.3.3 - a flexible graph database for both JVM and JS platforms

  • shadow-bare-bones - A mini-project to quickly get started with ClojureScript for hacking on a browser app

  • xnfun - RPC over MQTT (and maybe NOT JUST MQTT)

Permalink

Dependency injection and protocols in Clojure

Consider the following function, which

  • takes a map of dependencies and a ring request,
  • updates a gift using data from the request, and
  • returns a ring response:

(defn update-gift [{:keys [datasource]} request]
  (let [{:keys [external-list-id external-gift-id]} (:path-params request)
        {:keys [name ok price description]} (:params request)]
    (when ok
      (domain/update-gift! datasource external-gift-id name price description))
    (response/redirect (str "/list/" external-list-id "/edit") :see-other)))

The function domain/update-gift! persists the changes to the database. It has a side effect, which makes it an impure function. Because update-gift uses domain/update-gift!, it's impure too.

You could argue that this fact alone is a reason to refactor this code. Generally speaking, pure functions are easier to test and easier to reason about, which are both good reasons to prefer pure functions over impure ones.

For simple apps, however, you could also argue that there's not much to reason about anyway, and refactoring may not be worth the effort. What's more, using with-redefs to replace the impure function domain/update-gift! would make testing quite straightforward.
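
For illustration, such a test could look roughly like this. This is a minimal sketch: the clojure.test requires, the domain alias, and the request shape are assumptions, not code from this post.

(deftest update-gift-persists-changes
  (let [calls (atom [])]
    (with-redefs [domain/update-gift! (fn [& args] (swap! calls conj (vec args)))]
      (update-gift {:datasource ::ds}
                   {:path-params {:external-list-id "l1" :external-gift-id "g1"}
                    :params      {:ok true :name "Book" :price 10 :description "A novel"}})
      (is (= 1 (count @calls))))))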

Because this blog post is about dependency injection, we better find another reason to refactor update-gift and apply some more dependency injection. Luckily, we can pretend that we want to replace the function domain/update-gift! with a function that uses a completely different method to persist gifts. That's not something you would do with with-redefs.

Let's look at the (spoiler alert) naive approach where we introduce a parameter to inject the function domain/update-gift! directly as a function.

(defn update-gift [{:keys [datasource update-gift!]} request]
  (let [{:keys [external-list-id external-gift-id]} (:path-params request)
        {:keys [name ok price description]} (:params request)]
    (when ok
      (update-gift! datasource external-gift-id name price description))
    (response/redirect (str "/list/" external-list-id "/edit") :see-other)))

As I mentioned above, the first argument to the function update-gift is a map of dependencies. In the example above, the key update-gift! of that map should map to a function for persisting updated gifts.

The downside of this approach is that there's no static analysis that your IDE can apply to provide you with useful information about this function. In fact, it can't even tell you that the key update-gift! maps to a function at all. You yourself have to remember that update-gift! is a function that takes a datasource, an external gift ID, a name, a price, and a description, in that order. If you forget, you have to navigate to the place where you call update-gift and check again what you inject under the key update-gift!.

You could argue that this is what you get when you use a dynamically typed language instead of a statically typed one, and you would be right. However, there are good reasons to prefer dynamically typed languages over statically typed ones, and there are ways around this particular problem.

Protocols to the rescue

We can use protocols to help static analysis tools a little. A protocol is a named set of named methods and their signatures. They're similar to Java's interfaces.

The following snippet shows the definition of a simple protocol named GiftService. This protocol defines a single method update-gift!, which takes a concrete implementation of the protocol as first argument together with a number of additional arguments.

(defprotocol GiftService
  (update-gift!
    [this datasource external-id name price description]
    "Update the gift with ID `external-id` with the given name, price, and description"))

There are a number of ways to create concrete implementations of protocols. The following snippet shows one way, which uses reify.

(defn create-gift-service []
  (reify GiftService
    (update-gift!
     [_ datasource external-id name price description]
     (db/update-gift! datasource {:id external-id
                                  :name name
                                  :price price
                                  :description description}))))

The snippet shows the definition of a constructor function create-gift-service, which creates a concrete implementation of the protocol GiftService by providing an implementation of the method update-gift!. This implementation ignores the gift service itself (hence the underscore) and passes its arguments to another function db/update-gift!.

In practice, most services would have more than one method, and these methods would do more than directly call a single function. The service could perform some validation, for example, or combine a number of more low-level functions that interact with a database.

Here's the same update-gift function again. This time, a gift-service is injected as a dependency.

(defn update-gift [{:keys [datasource gift-service]} request]
  (let [{:keys [external-list-id external-gift-id]} (:path-params request)
        {:keys [name ok price description]} (:params request)]
    (when ok
      (domain/update-gift! gift-service datasource external-gift-id name price description))
    (response/redirect (str "/list/" external-list-id "/edit") :see-other)))

This function is pure, like the previous version, which makes it easier to reason about and test. Because we're injecting a service and applying a method from a protocol to it, there's more information to work with for static analysis tools. The image below shows how such a tool can show the argument list and documentation of the protocol method domain/update-gift!.

Static analysis
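
The protocol also keeps testing easy: you can stub GiftService with reify and assert on the recorded calls, no database required. A minimal sketch, again with the clojure.test requires, namespace aliases, and request shape as assumptions:

(deftest update-gift-calls-the-service
  (let [calls (atom [])
        stub  (reify domain/GiftService
                (update-gift! [_ _datasource external-id name price description]
                  (swap! calls conj [external-id name price description])))]
    (update-gift {:datasource ::ds :gift-service stub}
                 {:path-params {:external-list-id "l1" :external-gift-id "g1"}
                  :params      {:ok true :name "Book" :price 10 :description "A novel"}})
    (is (= [["g1" "Book" 10 "A novel"]] @calls))))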

Whether or not this final version is better than the first version depends a lot on the size of the app it is part of, the plans for this app, the team working on the app, etc. The point of this post is not to convince you that you should apply dependency injection where you can or that you should always use protocols when you do apply it. The point of this post is to show you that you can have your cake and eat it when it comes to dynamically typed languages and static analysis.

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.