Resolve Symbols and Calculate Types with Sharplasu

Parsing is typically where we begin and invest much of our enthusiasm. However, completing the parser is just the beginning. Sadly, I must inform you that additional steps are necessary. One step that enhances the value of the Abstract Syntax Trees (ASTs) obtained from parsing is semantic enrichment. In this article, we’ll explore what semantic enrichment is, what it enables us to do, and we are going to share some code.

As always, the code for this article is available on GitHub. You can find it at https://github.com/Strumenta/an-entity-language-sharplasu. In this article we focus on semantic enrichment, so if you want details about the parsing and the creation of the AST you will have to look in the repository.

It All Starts With Parsing, but the Assembly Line Continues

When we parse, we recognize the structure of the code, by analyzing its syntax.

But what does it mean to parse? It means to:

  1. Check that the code is syntactically correct
  2. Build the Abstract Syntax Tree (AST): we make nodes representing the structures we recognize. For example, we could process the code dog.fly() and return a node representing a method call. 

All is good, except for the fact that dogs cannot fly.

The code is syntactically correct and semantically incorrect. 

When we process code we start by verifying that it is syntactically correct. 

If it is, we move to the semantic analysis. When the code is semantically correct, we can enrich the AST with semantic information.

In essence, we are interested in two things:

  1. Connecting references to the corresponding declarations. We call this symbol resolution
  2. Calculating the types of elements that can be typed. We call this type calculation

The Link Between Symbol Resolution and Type Calculation

I am an engineer by trade, and I really like the step-by-step approach to problem-solving: you solve one part of the problem and only then move to the next one. So you may wonder why I am conflating two apparently different problems, like symbol resolution and type calculation. As for most things I do, it is because there is no better alternative.

The two mechanisms are interconnected: one depends on the other. For example, let’s say that I have this code:

class TimeMachine {
   Time move(TimeIncrement)
   Point move(Direction)
}

class Point {
   Point add(Direction)
}

class Time {
   Time add(TimeIncrement)
}

myTimeMachine.move(foo).add(bar)

Let’s say my goal is to figure out the type of the overall expression myTimeMachine.move(foo).add(bar).

To answer this question I need to figure out which add method we are referring to. If it is the one declared in Point, then the overall result will be of type Point. If instead we are referring to the add method declared in Time, then the overall result will be of type Time.

Ok, but how do I know which add method I am calling? Well, it depends on the type of myTimeMachine.move(foo):

  • If that expression has type Point and bar has type Direction, then we are calling the add method in Point
  • If instead that expression has type Time, and the expression bar has type TimeIncrement, then we are calling the add method in Time.

This means I need to figure out the type of myTimeMachine.move(foo). To do so, I first need to figure out whether I am calling the first or the second overloaded variant of TimeMachine.move. And that depends on the type of foo.

So, you see, I cannot extricate the two problems: they affect each other's results, and therefore, in principle, we must treat them together. In practice, for very simple languages we can get away with treating them separately. Typically, you need to treat the problems in a combined way when there are composite types or cascading function/method calls. 
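To make the circular dependency concrete, here is a minimal, self-contained sketch (all names are hypothetical, not the article's AST classes) in which the overload we pick, and hence the resulting type, depends on the type of the argument:

```csharp
using System;

// Hypothetical mini-model of the TimeMachine example: overload
// selection (symbol resolution) driven by types (type calculation)
public class TypeRef
{
    public string Name;
    public TypeRef(string name) { Name = name; }
}

public static class Resolver
{
    // Which `move` overload is called depends on the type of the argument...
    public static TypeRef ResolveMove(TypeRef argType) =>
        argType.Name == "TimeIncrement" ? new TypeRef("Time") : new TypeRef("Point");

    // ...and which `add` is called depends on the receiver type we just computed
    public static TypeRef ResolveAdd(TypeRef receiverType) =>
        receiverType.Name == "Time" ? new TypeRef("Time") : new TypeRef("Point");

    public static void Main()
    {
        var foo = new TypeRef("TimeIncrement");
        // typeOf(myTimeMachine.move(foo).add(bar)) requires resolving `move` first
        Console.WriteLine(ResolveAdd(ResolveMove(foo)).Name); // prints "Time"
    }
}
```

In a real resolver these two functions would recurse into each other for nested calls, which is exactly why the two analyses cannot be fully separated.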

If you want to read about symbol resolution for a language like Java, you can look at How to Build a Symbol Solver for Java, in Clojure or Resolve method calls in Java code using the JavaSymbolSolver.

A Prerequisite for Any Non-Trivial Operation

Semantic Enrichment is a prerequisite for most programmatic handling of code.

Perhaps you can implement a linter or a code formatter without semantic enrichment, but for the most typical language engineering applications you need it:

  • Interpretation: To execute code, we need to connect function invocations to their definitions
  • Migrations: To migrate code in any nontrivial way, we want to take into account the type of the different elements. For example, if we were translating the operation a + b, depending on the target language and the type of the operands, we may want to translate it as a + b or perhaps a.concat(b) or even a + b.toDouble().
  • Static analysis and refactoring: Automated code modifications, such as renaming variables or moving functions, depend on knowing which references are linked to which declarations.
  • Editors: Autocompletion or error-checking depends on semantic enrichment. But so do go-to-definition and find-usages. In essence, the difference between a basic editor and an advanced one is in their support for semantic enrichment for the language of interest.

The StarLasu Approach and Semantic Enrichment

When it comes to parsing, at Strumenta we apply the principles of the Chisel Method. They are quite established at this point, after years of refinement. For semantic enrichment, things are not as crystallized: we evolve the approach at each new project, finding new solutions to new challenges. That said, we are finding patterns that work and incorporating them in our core libraries, and in Sharplasu in particular. 

At this stage, Sharplasu has a module called SymbolResolution which offers a reasonably good approach to symbol resolution. Type calculation is instead still implemented ad hoc for each project. So we call the type calculation logic from symbol resolution (and vice versa). It is just that we have standard APIs for symbol resolution and not for type calculation.

Let’s See an Example of Semantic Enrichment

In our example we will work with a simple language that permits us to define entities. These entities have typed fields, called features, which can be initialized with expressions.

This is an example:

module example

import standard

type address

class base {
	name string
	description string
}

class aged : base {
	age integer
}

class person : aged {
	speed integer = 2
}

class athlete : person {
	speed integer = person.speed * 2
}

class car : aged {
	kilometers integer = age * 10000	
}

This language, while simple, contains some of the elements that we can find in most languages:

  • We can import modules
  • We can define new types
  • We have built-in types such as string and integer
  • We can reference features without specifying the context (therefore using the current object as context) or specifying it

Symbol Resolution

Let’s take a look at a portion of our SymbolResolver.

Scope ModuleLevelTypes(Node ctx)
{
    var scope = new Scope();
    var module = ctx.FindAncestorOfType<Module>();
    if (module != null)
    {
        // let's define types
        module.Types.ForEach(type => scope.Define(type));
        foreach (var import in module.Imports)
        {
            SymbolResolver.ResolveSymbols(import);
            if (import.Module.Referred?.Entities != null)
            {
                foreach (var entity in import.Module.Referred.Entities)
                {
                    scope.Define(entity);
                }
            }
            if (import.Module.Referred?.Types != null)
            {
                foreach (var type in import.Module.Referred.Types)
                {
                    scope.Define(type);
                }
            }
        }
    }

    return scope;
}

[..]

public ExampleSemantics(IModuleFinder moduleFinder)
{
    ModuleFinder = moduleFinder;

    SymbolResolver = new DeclarativeLocalSymbolResolver(Issues);
    [..]
    SymbolResolver.ScopeFor(typeof(FeatureDecl).GetProperty("Type"), (FeatureDecl feature) =>
    {
        var scope = ModuleLevelTypes(feature);                
        return scope;
    });
    [..]
    SymbolResolver.ScopeFor(typeof(Import).GetProperty("Module"), (Import import) =>
    {
        var scope = new Scope();
        if(moduleFinder.FindModule(import.Module.Name) != null)
        {
           scope.Define(moduleFinder.FindModule(import.Module.Name));
        }                    
        return scope;
    });

    TypeCalculator = new EntityTypeCalculator(SymbolResolver);                
}

Here we want to see the basics of symbol resolution and how importing symbols from other elements works. First, notice that symbol resolution depends on moduleFinder. The moduleFinder is the component that keeps track of the code available to a project: the files of the project and any library available to it. You can see in the repository that, for this project, it is just a Dictionary that maps each name to the corresponding Module object. A Module object is the root of an AST; given the previous example file, there will be a Module with name "example" representing it. The important part is that the moduleFinder is the object that tells you if and where modules outside the current one are located.
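As a rough sketch, such a finder could be as simple as the following (the names Module and FindModule follow the article; everything else here is hypothetical, including the stand-in Module class):

```csharp
using System.Collections.Generic;

// Stand-in for the real Module class (the root of an AST), just for this sketch
public class Module
{
    public string Name;
    public Module(string name) { Name = name; }
}

// Hypothetical dictionary-backed module finder, along the lines of the
// SimpleModuleFinder used in the repository
public class DictionaryModuleFinder
{
    private readonly Dictionary<string, Module> modules = new Dictionary<string, Module>();

    // Register a parsed Module under its name
    public void Register(Module module) => modules[module.Name] = module;

    // Return null when no module with that name is available
    public Module FindModule(string name) =>
        modules.TryGetValue(name, out var module) ? module : null;
}
```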

Let's see how we solve imports, such as:

import standard

To solve a symbol you need a scope. You can think of a scope as a container of available definitions. In Sharplasu, a Scope can have a parent Scope, so you can properly nest them. For example, there could be a global scope to solve imports and a class scope to solve features.

So, we create a scope, and then we ask the moduleFinder if there is a module with that name; if so, we define that module. Defining a symbol means telling our SymbolResolver object that there is a definition of the argument in the current scope. The SymbolResolver object is based on a Sharplasu class, so you can use that class in all your projects. Later, when we ask our symbol resolver to resolve the symbols, it will look in the scope of each reference and check whether there is a valid definition with that name.
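The idea of nested scopes can be sketched like this (a simplified illustration keyed on plain strings, not the actual Sharplasu Scope API, which works with named nodes):

```csharp
using System.Collections.Generic;

// Minimal sketch of the nested-scope idea
public class SimpleScope
{
    private readonly Dictionary<string, object> symbols = new Dictionary<string, object>();
    public SimpleScope Parent { get; set; }

    // Record a definition in this scope
    public void Define(string name, object symbol) => symbols[name] = symbol;

    // Look in the current scope first, then walk up the parent chain
    public object Resolve(string name) =>
        symbols.TryGetValue(name, out var s) ? s : Parent?.Resolve(name);
}
```

For example, a class scope whose Parent is the module scope will resolve features locally and fall back to module-level definitions, such as imported types.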

How Importing Modules Affects Types

You will notice that defining modules is necessary to solve the types of features.

class base {
	name string
	description string
}

Here, string after name and description is a type, and name string is the definition of a feature.

To solve the references to the types of features, we call the ModuleLevelTypes method. This method will:

  • look for definitions of types in the current module
  • loop through the imported modules, making sure that all references in the imported modules are solved
  • define the types found in each imported module

Solving imports is therefore crucial to solving types, particularly since in this example, as in many real languages, many types come from the standard library.

class athlete : person {
	speed integer = person.speed * 2
}
class car : aged {
	kilometers integer = age * 10000	
}

Solving ReferenceExpression

In our language a ReferenceExpression, like person.speed or age can only have:

  • an optional parent/context element that is a class (like person)
  • a target element that references a feature (like speed or age)

No nesting or multiple levels are allowed.

So, solving a reference to the context element and solving one to the target element are similar.

SymbolResolver.ScopeFor(typeof(ReferenceExpression).GetProperty("Context"), (ReferenceExpression reference) =>
{
    var scope = new Scope();
    var classParent = reference.FindAncestorOfType<ClassDecl>();
    if (classParent != null)
        scope = ClassHierarchyEntities(reference.FindAncestorOfType<ClassDecl>());
    else
        Issues.Add(Issue.Semantic("The class containing this expression has no superclasses. The Context cannot be solved.", reference.Position));
    return scope;
});
SymbolResolver.ScopeFor(typeof(ReferenceExpression).GetProperty("Target"), (ReferenceExpression reference) =>
{
    var scope = new Scope();
    if (reference.Context == null)
    {
        var classParent = reference.FindAncestorOfType<ClassDecl>();
        if (classParent != null)
            scope.Parent = ClassLevelTypes(classParent);
    }
    else if (reference.Context.Resolved)                
    {
        reference.Context.Referred.Features.ForEach(it => scope.Define(it));
    }
    return scope;
});

One difference is that if the current expression contains a Context element, but the class containing the expression has no superclass, we have an issue, because the reference cannot be solved. The other is that to solve Target, we must first solve Context, to make sure we only consider the features of the Context element.

Otherwise, we look for the proper elements in the parent class. Let’s just look at how to solve the hierarchy of classes.

Scope ClassHierarchyEntities(ClassDecl ctx)
{
    var scope = new Scope();
    var superclass = ctx.Superclass;            
    if (superclass != null && superclass.Resolved)
    {
        // let's define the superclass
        scope.Define(superclass.Referred);                
        scope.Parent = ClassHierarchyEntities(superclass.Referred);
    }

    return scope;
}

If the reference to the superclass of the current class has been solved, we define the superclass. Then we walk up the hierarchy of classes to define them all.

class base {
	name string
	description string
}

class aged : base {
	age integer
}

class person : aged {
	speed integer = 2
}

class athlete : person {
	speed integer = person.speed * 2
}

Basically, considering the previous example, to solve the reference to person in person.speed, we define person, because it is the superclass of athlete (the class containing the expression), then aged and base.

Symbol Resolution Patterns

We can then see a few rules:

  • The way in which we resolve imports is by delegating to the moduleFinder object. This is because we need some logic to find other files and parse them on demand, possibly managing loops (what if a file imports itself, directly or indirectly?)
  • When looking for a superclass we use a global scope, shared by all classes in the module
  • For solving references to features we do not define any element at that level: we just look for features declared in the classes containing them, by defining a parent scope
  • The case of ModuleLevelTypes is interesting because there we can see a combination of elements:
    • We get all the types declared in the module
    • We get all the types declared in the imported modules
    • We could also get all the built-in entities, but we choose to force the user to import a standard module to get them instead

These few rules cover many of the most common patterns we see in languages we work with, either Domain Specific Languages we design or legacy languages for which we build parsers.

Type Calculation

Let’s take a look at our simple type calculator. Notice that we managed to separate type calculation from symbol resolution: our type calculation needs symbol resolution, but not vice versa. We are going to see this simpler case first, then what happens in the general case.

public override IType CalculateType(Node node)
{
    switch (node)
    {
        case OperatorExpression opExpr:
            var leftType = GetType(opExpr.Left);
            var rightType = GetType(opExpr.Right);
            if (leftType == null || rightType == null)
                return null;
            switch (opExpr.Operator)
            {
                case Operator.Addition:
                    if (leftType == EntityStandardLibrary.StringType && rightType == EntityStandardLibrary.StringType)
                        return EntityStandardLibrary.StringType;
                    else if (leftType == EntityStandardLibrary.StringType && rightType == EntityStandardLibrary.IntegerType)
                        return EntityStandardLibrary.StringType;
                    else if (leftType == EntityStandardLibrary.IntegerType && rightType == EntityStandardLibrary.IntegerType)
                        return EntityStandardLibrary.IntegerType;
                    else
                        throw new NotImplementedException($"Unsupported operand types for addition: {leftType}, {rightType}");
               [..]
            }
        case ReferenceExpression refExpr:
            if(refExpr.Context == null)
                return GetTypeOfReference<ReferenceExpression, FeatureDecl>(refExpr, typeof(ReferenceExpression).GetProperty("Target"));
            else
            {
                SymbolResolver.ResolveNode(refExpr);
                return GetTypeOfReference<ReferenceExpression, FeatureDecl>(refExpr, typeof(ReferenceExpression).GetProperty("Target"));
            }
       [..]
        case StringLiteralExpression _:
            return EntityStandardLibrary.StringType;
        case BooleanLiteralExpression _:
            return EntityStandardLibrary.BooleanType;
        case IntegerLiteralExpression _:
            return EntityStandardLibrary.IntegerType;                
        default:
            throw new NotImplementedException($"Type calculation not implemented for node type {node.GetType()}");
    }
}

It shows common patterns for type calculation:

  • At the bottom you can see that we solve types for literals: we assign a standard type to each kind of literal
  • We solve types for binary operations by finding the types of the two individual elements (left and right) of the expression and then defining rules for their combination. For example, an addition of a string and an integer is considered a concatenation, so the resulting type is a string. This will vary very much by language and by how you choose to handle type conversion between compatible types
  • To solve the type of reference expressions, we need to solve the reference and then get its type

Calculating the Type of References

To solve the type of references, let's take a look at the GetTypeOfReference method. Its type arguments are the class that can hold the reference and the type of the referred declaration; its arguments are the node holding the reference and the PropertyInfo of the property holding the reference. Notice that in this case we could avoid the first type argument, since ReferenceExpression is the only kind of expression containing a reference. However, this code shows that it is easy to generalize the method.

private IType GetTypeOfReference<T, S>(T refHolder, PropertyInfo refAccessor)
    where T : Node
    where S : Node, Named            
{
    var refValue = refAccessor.GetValue(refHolder) as ReferenceByName<S>;
    if (refValue == null)
        return null;
    // if the reference has not been solved yet, trigger symbol resolution
    if (!refValue.Resolved)
        SymbolResolver?.ResolveProperty(refAccessor, refHolder);
    // if the reference is (now) solved, return the type of the referred node
    if (refValue.Resolved)
        return GetType(refValue.Referred as Node);

    return null;
}

The method uses a bit of reflection. In essence, it checks whether the provided property corresponds to a reference that has been solved. If it has not, it triggers symbol resolution, provided we have a SymbolResolver. Then we get the type of the referred element. GetType is simply a way to access a Dictionary matching nodes with types.

These are the common patterns used to calculate a type. One that we are missing is a special type for void or unit, representing the absence of a type.

Again, it may sound quite boring, but these are the kind of patterns we routinely see. Of course, things can get more exciting if we throw generics and type inference in the mix, but for this time, let’s keep things simple.

Our types are all classes that inherit from an interface IType. This interface is not part of Sharplasu; we created it for this example, but it is so simple that you can look it up on your own in the repository.
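For reference, such an interface might look roughly like this (a guess for illustration; check the repository for the real definition):

```csharp
// Hypothetical sketch of the IType interface used in this example
public interface IType
{
    string Name { get; }
}

// Built-in scalar types such as string and integer
public class ScalarType : IType
{
    public string Name { get; }
    public ScalarType(string name) { Name = name; }
}
```

A class declaration can then implement IType as well, so that classes can be used as the types of features.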

When Type Calculation and Symbol Resolution Intertwine

We avoided making type calculation and symbol resolution dependent on each other for a few reasons: our references had only two levels, and we knew that the first one was always a superclass of the current class. Imagine we change that.

class address {
	note base
	city string
	street string
	number integer	
}

class person : aged {
	location address
	speed integer = 2
}

class athlete : person {			
	deliveryNote string = location.note.description
	luckyNumber integer = location.number + 3
}

Now features can have a class as their type, besides scalar types. Our references now have a Context property, which is an Expression. So, the ReferenceExpression node for location.note.description will have this structure: at the first level there is a ReferenceExpression with Target description and with Context another ReferenceExpression with Target note, and so on.

This means that now we cannot determine statically what the actual type of a Context object will be: it could be a scalar type or a class. So, to solve a reference in Target, we need to dynamically determine the type of Context, which depends on what the reference in Context resolves to. How do we accomplish this? For starters, we need to make a small change in the ModuleLevelTypes method to make sure that we define all entities at the module level.

module.Entities.ForEach(type => scope.Define(type));

Apart from that all we need to change is how we solve symbols for the Target property.

SymbolResolver.ScopeFor(typeof(ReferenceExpression).GetProperty("Target"), (ReferenceExpression reference) =>
{
    var scope = new Scope();

    var classParent = reference.FindAncestorOfType<ClassDecl>();
    if (classParent != null)
        scope.Parent = ClassLevelTypes(classParent);                

    if (reference.Context != null)
    {
        SymbolResolver.ResolveNode(reference.Context);

        var type = TypeCalculator.GetType(reference.Context) as ClassDecl;

        if (type != null)
        {
            type.Features.ForEach(it => scope.Define(it));
        }
    }

    return scope;
});

The pattern is simple:

  • We ensure we have resolved the Context node, so we know which feature Context resolves to
  • This allows us to get the type, i.e. the class of that feature
  • We can now define the features of that class

And voilà, we can now solve the current reference.

Using Semantic Enrichment

It is easy to glue together symbol resolution and type calculation.

public List<Issue> SemanticEnrichment(Node node)
{
    SymbolResolver.ResolveSymbols(node);
    node.WalkDescendants<Expression>().ToList().ForEach(expression => {
        TypeCalculator.SetTypeIfNeeded(expression);
    });
    return Issues;
}

So, we can take any module and trigger symbol resolution. We will then get an AST whose references have been resolved. The types will also be stored in a cache in the TypeCalculator.

public abstract class TypeCalculator
{
    public virtual IType GetType(Node node)
    {
        return SetTypeIfNeeded(node);
    }

    public IType StrictlyGetType(Node node)
    {
        var type = SetTypeIfNeeded(node);
        if (type == null)
            throw new InvalidOperationException($"Cannot get type for node {node}");
        return type;
    }

    public abstract IType CalculateType(Node node);

    public virtual IType SetTypeIfNeeded(Node node)
    {
        if (node.GetTypeSemantics() == null)
        {
            var calculatedType = CalculateType(node);
            node.SetTypeSemantics(calculatedType);
        }
        return node.GetTypeSemantics();
    }
}

This means that after invoking the semantic enrichment, we can take each of our nodes of class Expression and get its type using mynode.GetTypeSemantics(). Easy, right?

A Simple Test

What is life without tests? Here at Strumenta we do not want to imagine such a sorry existence, so our repository has a few tests. Let’s see a simple one:

        [TestMethod]
        public void TestTypeCalculation()
        {
            EntitySharplasuParser parser = new EntitySharplasuParser();
            string code = @"module example

import standard

class base {
	name string
	description string
}

class aged : base {
	age integer
}

class person : aged {
	location address
	speed integer = 2
}

class address {
	note base
	city string
	street string
	number integer	
}

class athlete : person {			
	deliveryNote string = location.note.description
	luckyNumber integer = location.number + 3
}

class car : aged {
	kilometers integer = age * 10000	
}";
            var result = parser.Parse(code);

            SimpleModuleFinder moduleFinder = new SimpleModuleFinder();
            ExampleSemantics semantics = new ExampleSemantics(moduleFinder);
            List<Issue> issues = semantics.SemanticEnrichment(result.Root);
            Assert.AreEqual(0, issues.Count);
            result.Root.AssertAllExpressionsHaveTypes();
            Assert.AreEqual("string",
                result.Root.Entities[4].Features[0].Value.GetTypeSemantics().Name
            );
        }

We test that all expressions have a type, then we check one specific expression. The expression location.note.description should have type string, since location is of class address, which has the note feature of class base, which in turn has a feature description of type string.

Conclusions

While parsing organizes code into a syntactic structure, Semantic Enrichment uncovers its meaning by resolving symbols and determining types. Without this critical step, advanced operations such as code generation, interpretation, and refactoring would be impossible. Semantic Enrichment is not trivial to implement. 

With this article, we wanted to share some of the principles behind it. And through the support built into Sharplasu, we want to provide a way to simplify the implementation of advanced Language Engineering solutions. For us, it has been working pretty well, and we hope it will work similarly well for you. 

Have fun with your Language Engineering project!

The post Resolve Symbols and Calculate Types with Sharplasu appeared first on Strumenta.


A Simpler Way to Deal with Java Sources in CIDER

For ages dealing with Java sources in CIDER has been quite painful.1 Admittedly, many of the problems were related to an early design decision I made to look for the Java sources only in the classpath, as I assumed that would be the easiest way to implement this. Boy, was I wrong! Countless iterations and refinements to the original solution later, working with Java sources is still not easy. enrich-classpath made things better, but it required changes to the way CIDER (nREPL) was being started and slowed down the first CIDER run in each project quite a bit, as it fetches all missing sources at startup. It’s also a bit trickier to use it with cider-connect, as you need to start nREPL together with enrich-classpath. Fortunately, my good friend and legendary Clojure hacker Oleksandr Yakushev recently proposed a different way of doing things, and today I’m happy to announce that this new approach is a reality!

There’s an exciting new feature waiting for you in the latest CIDER MELPA build. After updating, try turning on the new variable cider-download-java-sources (M-x customize-variable cider-download-java-sources and then toggle to enable it). Now CIDER will download the Java sources of third-party libraries for Java classes when:

  • you request documentation for a class or a method (C-c C-d C-d)
  • you jump to some definition (M-.) within a Java class

Note that eldoc won’t trigger the auto-download of Java sources, as we felt this might be harmful to the user experience.

This feature works without enrich-classpath.2 The auto-downloading works for both tools.deps and Leiningen-based projects. In both cases, it starts a subprocess of either the clojure or lein binary (this is the same approach that Clojure 1.12’s add-lib utilizes).

And that’s it! The new approach is so seamless that it feels a bit like magic.

This approach should work well for most cases, but it’s not perfect. You might have problems downloading the sources of dependencies that are not public (i.e. they live in a private repo), and the credentials are non-global but under some specific alias/profile that you start REPL with. If this happens to you, please report it; but we suspect such cases would be rare. The download usually takes up to a few seconds, and then the downloaded artifact will be reused by all projects. If a download failed (most often, because the library didn’t publish the -sources.jar artifact to Maven), CIDER will not attempt to download it again until the REPL restarts. Try it out in any project by jumping to clojure.lang.RT/toArray or bringing up the docs for clojure.lang.PersistentArrayMap.

Our plan right now is to have this new feature be disabled by default in CIDER 1.17 (the next stable release), so we can gather some user feedback before enabling it by default in CIDER 1.18. We’d really appreciate your help in testing and polishing the new functionality, and we’re looking forward to hearing whether it’s working well for you!

We also hope that other Clojure editors that use cider-nrepl internally (think Calva, iced-vim, etc) will enable the new functionality soon as well.

That’s all I have for you today! Keep hacking!

P.S. The State of CIDER 2024 survey is still open and it’d be great if you took a moment to fill it in!

  1. You need to have them around to be able to navigate to (definitions in) them and for improved Java completion. More details here

  2. If you liked using enrich-classpath you can still continue using it going forward. 


Where to store your (image) files in Leiningen project, and how to fetch them?

Notes

Create new app using:

$ lein new app image_in_resources

Place clojure_diary_logo.png in the resources/images/ folder.

project.clj content:

(defproject image_in_resources "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "EPL-2.0 OR GPL-2.0-or-later WITH Classpath-exception-2.0"
            :url "https://www.eclipse.org/legal/epl-2.0/"}
  :dependencies [[org.clojure/clojure "1.11.1"]]
  :main ^:skip-aot image-in-resources.core
  :target-path "target/%s"
  :profiles {:uberjar {:aot :all
                       :jvm-opts ["-Dclojure.compiler.direct-linking=true"]}})

src/image_in_resources/core.clj content:

(ns image-in-resources.core
  (:gen-class)
  (:require [clojure.java.io :as io])
  (:import [javax.imageio ImageIO]))

(defn load-image [image-name]
  (let [image-url (io/resource (str "images/" image-name))]
    (if image-url
      (ImageIO/read image-url)
      (throw (Exception. (str "Image not found: " image-name))))))

(defn save-image [image output-path]
  (ImageIO/write image "png" (io/file output-path)))

(defn -main []
  (let [image-name "clojure_diary_logo.png"
        output-path (str "./" image-name)] ; Save to current directory
    (try
      (let [img (load-image image-name)]
        (save-image img output-path)
        (println (str "Image saved successfully to: " output-path)))
      (catch Exception e
        (println (str "Error: " (.getMessage e)))))))

Run the project using:

$ lein run

Generate jar file using lein uberjar, and run it using:

$ java -jar target/uberjar/image_in_resources-0.1.0-SNAPSHOT-standalone.jar

The complete source code can be found here https://gitlab.com/clojure-diary/code/image-in-resources.


Clojure Deref (Jan 17, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS). Thanks to Anton Fonarev for link aggregation.

Libraries and Tools

New releases and tools this week:


40+ Must-See GitHub Repositories You Can't Afford to Miss!


1. YAML-Powered URL and Shell Command Shortener

🔗 Website: https://gittech.site/github/item/42729388...
📂 GitHub Repository: https://github.com/NishantJoshi00/yamlink
📅 Published On: Thu, 16 Jan 2025 19:06:11 GMT

2. An open source Deno monorepo template

🔗 Website: https://gittech.site/github/item/42729206...
📂 GitHub Repository: https://github.com/runreal/deno-monorepo-template
📅 Published On: Thu, 16 Jan 2025 18:46:59 GMT

3. underattack.txt Internet Standard

🔗 Website: https://gittech.site/github/item/42728311...
📂 GitHub Repository: https://github.com/robss2020/underattack.txt
📅 Published On: Thu, 16 Jan 2025 17:46:12 GMT

4. React Server Components rendered in Service Workers

🔗 Website: https://gittech.site/github/item/42727945...
📂 GitHub Repository: https://github.com/enjikaka/react-service-worker-components
📅 Published On: Thu, 16 Jan 2025 17:26:37 GMT

5. DBOS TypeScript – Lightweight Durable Execution Built on Postgres

🔗 Website: https://gittech.site/github/item/42727970...
📂 GitHub Repository: https://github.com/dbos-inc/dbos-transact-ts
📅 Published On: Thu, 16 Jan 2025 17:26:37 GMT

6. Building AI Agents with Ruby

🔗 Website: https://gittech.site/github/item/42727994...
📂 GitHub Repository: https://github.com/alchaplinsky/regent
📅 Published On: Thu, 16 Jan 2025 17:26:37 GMT

7. Adding Sub-Issues in GitHub

🔗 Website: https://gittech.site/github/item/42728155...
📂 GitHub Repository: https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/adding-sub-issues
📅 Published On: Thu, 16 Jan 2025 17:26:36 GMT

8. Clojure core.async.flow

🔗 Website: https://gittech.site/github/item/42727701...
📂 GitHub Repository: https://github.com/clojure/core.async/blob/master/doc/flow.md
📅 Published On: Thu, 16 Jan 2025 17:07:27 GMT

9. Fruitstand – A Library for Regression Testing LLMs

🔗 Website: https://gittech.site/github/item/42727834...
📂 GitHub Repository: https://github.com/deckard-designs/fruitstand
📅 Published On: Thu, 16 Jan 2025 17:07:26 GMT

10. GitHub Introduces Sub-Issues

🔗 Website: https://gittech.site/github/item/42727488...
📂 GitHub Repository: https://github.com/manticoresoftware/manticoresearch/issues/2945
📅 Published On: Thu, 16 Jan 2025 16:47:48 GMT

11. Interesting and, hopefully, fun to use APIs

🔗 Website: https://gittech.site/github/item/42727212...
📂 GitHub Repository: https://github.com/Vets-Who-Code/api-list
📅 Published On: Thu, 16 Jan 2025 16:31:37 GMT

12. A port of Doom (1993) that runs inside a PDF file

🔗 Website: https://gittech.site/github/item/42727263...
📂 GitHub Repository: https://github.com/ading2210/doompdf
📅 Published On: Thu, 16 Jan 2025 16:31:36 GMT

13. hnrss: Custom, realtime RSS feeds for Hacker News

🔗 Website: https://gittech.site/github/item/42727342...
📂 GitHub Repository: https://github.com/hnrss/hnrss
📅 Published On: Thu, 16 Jan 2025 16:31:36 GMT

14. OpenAI ChatGPT Crawler Vulnerability: Unauthenticated Reflective DDoS

🔗 Website: https://gittech.site/github/item/42727463...
📂 GitHub Repository: https://github.com/bf/security-advisories/blob/main/2025-01-ChatGPT-Crawler-Reflective-DDOS-Vulnerability.md
📅 Published On: Thu, 16 Jan 2025 16:31:35 GMT

15. List of AI Agents

🔗 Website: https://gittech.site/github/item/42725367...
📂 GitHub Repository: https://github.com/francedot/acu
📅 Published On: Thu, 16 Jan 2025 14:27:37 GMT

16. Fmtx: More intuitive printing values for Golang

🔗 Website: https://gittech.site/github/item/42724698...
📂 GitHub Repository: https://github.com/mengdu/fmtx
📅 Published On: Thu, 16 Jan 2025 13:30:12 GMT

17. Intuitive Static Site CMS for SEO-optimized and privacy-focused websites

🔗 Website: https://gittech.site/github/item/42724750...
📂 GitHub Repository: https://github.com/GetPublii/Publii
📅 Published On: Thu, 16 Jan 2025 13:30:11 GMT

18. I made an open source directory of where to showoff your projects

🔗 Website: https://gittech.site/github/item/42724757...
📂 GitHub Repository: https://github.com/KingMenes/awesome-launch
📅 Published On: Thu, 16 Jan 2025 13:30:11 GMT

19. QA via natural language AI tests

🔗 Website: https://gittech.site/github/item/42724794...
📂 GitHub Repository: https://github.com/anti-work/shortest
📅 Published On: Thu, 16 Jan 2025 13:30:11 GMT

20. Serenade, a tool for coding with voice: first community release after fork

🔗 Website: https://gittech.site/github/item/42724370...
📂 GitHub Repository: https://github.com/sombrafam/serenade/releases/tag/2.0.2-community-1.0.beta
📅 Published On: Thu, 16 Jan 2025 12:50:12 GMT

21. Stack Overflow 2024 Survey Analysis

🔗 Website: https://gittech.site/github/item/42724042...
📂 GitHub Repository: https://github.com/ousstrk/Stack-Overflow-2024-Survey-Analysis
📅 Published On: Thu, 16 Jan 2025 11:46:18 GMT

22. Script to check the warranty status for HP, Lenovo

🔗 Website: https://gittech.site/github/item/42724079...
📂 GitHub Repository: https://github.com/be-lenka/warrantiak
📅 Published On: Thu, 16 Jan 2025 11:46:17 GMT

23. Better Than Nothing – a Pi Pico-based hardware wallet

🔗 Website: https://gittech.site/github/item/42723938...
📂 GitHub Repository: https://github.com/dfsforg/btnhw
📅 Published On: Thu, 16 Jan 2025 11:31:43 GMT

24. Local Ollama Chat – Chrome Extension

🔗 Website: https://gittech.site/github/item/42723978...
📂 GitHub Repository: https://github.com/lsgrep/chrome-extension-ollama-chat
📅 Published On: Thu, 16 Jan 2025 11:31:43 GMT

25. Docling: Get your documents ready for gen AI

🔗 Website: https://gittech.site/github/item/42723798...
📂 GitHub Repository: https://github.com/DS4SD/docling
📅 Published On: Thu, 16 Jan 2025 11:07:07 GMT

26. Llama2.c Running in a PDF

🔗 Website: https://gittech.site/github/item/42723823...
📂 GitHub Repository: https://github.com/trholding/llama2.c
📅 Published On: Thu, 16 Jan 2025 11:07:07 GMT

27. uPhotos, an open source GPU-accelerated photo viewer and organizer

🔗 Website: https://gittech.site/github/item/42723533...
📂 GitHub Repository: https://github.com/i255/Photos
📅 Published On: Thu, 16 Jan 2025 10:30:44 GMT

28. Simple Crawling Server

🔗 Website: https://gittech.site/github/item/42723120...
📂 GitHub Repository: https://github.com/rumca-js/crawler-buddy
📅 Published On: Thu, 16 Jan 2025 09:25:59 GMT

29. AI Creating AI: Open-Sourcing 'Ask' – Your Agentic Command-Line Assistant

🔗 Website: https://gittech.site/github/item/42723199...
📂 GitHub Repository: https://github.com/sfarrell5123/ask
📅 Published On: Thu, 16 Jan 2025 09:25:59 GMT

30. The mighty, self-hostable Git server for the command line

🔗 Website: https://gittech.site/github/item/42722888...
📂 GitHub Repository: https://github.com/charmbracelet/soft-serve
📅 Published On: Thu, 16 Jan 2025 08:46:21 GMT

31. Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget

🔗 Website: https://gittech.site/github/item/42722154...
📂 GitHub Repository: https://github.com/SonyResearch/micro_diffusion
📅 Published On: Thu, 16 Jan 2025 07:08:12 GMT

32. TikTok-dl: Download your important content off of TikTok

🔗 Website: https://gittech.site/github/item/42721996...
📂 GitHub Repository: https://github.com/KenAdamson/tiktok-dl
📅 Published On: Thu, 16 Jan 2025 06:33:51 GMT

33. L2E llama2.c running in a PDF in a Shroedinger PNG [pdf]

🔗 Website: https://gittech.site/github/item/42721805...
📂 GitHub Repository: https://github.com/trholding/llama2.c/blob/master/assets/l2e_sky_fun.png
📅 Published On: Thu, 16 Jan 2025 06:11:51 GMT

34. GitHub rebase commits are down

🔗 Website: https://gittech.site/github/item/42721561...
📂 GitHub Repository: https://github.com/orgs/community/discussions/149282
📅 Published On: Thu, 16 Jan 2025 05:28:00 GMT

35. Python App for Batch Downloading TikTok Videos

🔗 Website: https://gittech.site/github/item/42720492...
📂 GitHub Repository: https://github.com/joeycato/tiktok-favesave
📅 Published On: Thu, 16 Jan 2025 03:10:04 GMT

36. Bunster: compile bash scripts to self contained executables

🔗 Website: https://gittech.site/github/item/42720548...
📂 GitHub Repository: https://github.com/yassinebenaid/bunster
📅 Published On: Thu, 16 Jan 2025 03:10:04 GMT

37. Visual Studio Code Extension to Set Executable Bits

🔗 Website: https://gittech.site/github/item/42720354...
📂 GitHub Repository: https://github.com/dlech/vscode-chmod
📅 Published On: Thu, 16 Jan 2025 02:51:02 GMT

38. MiniMax-01, Advanced Text and Vision-Language Models

🔗 Website: https://gittech.site/github/item/42719450...
📂 GitHub Repository: https://github.com/MiniMax-AI/MiniMax-01
📅 Published On: Thu, 16 Jan 2025 02:02:09 GMT

39. DoomPDF – Doom source port that runs inside a PDF file

🔗 Website: https://gittech.site/github/item/42719506...
📂 GitHub Repository: https://github.com/ading2210/doompdf
📅 Published On: Thu, 16 Jan 2025 02:02:09 GMT

40. 4M Tokens Context Model

🔗 Website: https://gittech.site/github/item/42720072...
📂 GitHub Repository: https://github.com/MiniMax-AI
📅 Published On: Thu, 16 Jan 2025 02:02:07 GMT

41. PLChef – Explore the Spotify catalogue and make playlists quickly

🔗 Website: https://gittech.site/github/item/42718924...
📂 GitHub Repository: https://github.com/selira/plchef
📅 Published On: Thu, 16 Jan 2025 00:39:26 GMT

42. OAuth 2.0 clients for popular providers

🔗 Website: https://gittech.site/github/item/42718994...
📂 GitHub Repository: https://github.com/pilcrowonpaper/arctic
📅 Published On: Thu, 16 Jan 2025 00:39:25 GMT

43. Serverless fine tuning using axolotl

🔗 Website: https://gittech.site/github/item/42719172...
📂 GitHub Repository: https://github.com/runpod-workers/llm-fine-tuning
📅 Published On: Thu, 16 Jan 2025 00:39:24 GMT

44. Mikaey's Flash Stress Test

🔗 Website: https://gittech.site/github/item/42719209...
📂 GitHub Repository: https://github.com/mikaey/mfst
📅 Published On: Thu, 16 Jan 2025 00:39:24 GMT

Earn $100 Fast: AI + Notion Templates

Now Free!

Get the guide here

Do you want to make extra money quickly? This guide shows you how to create and sell Notion templates step by step. Perfect for beginners or anyone looking for an easy way to start earning online.

Why Download This Guide?

  • Start Making Money Fast: Follow a simple process to create templates people want and will buy.
  • Save Time with AI: Learn to use tools like ChatGPT to design and improve templates.
  • Join a Growing Market: More people are using Notion every day, and they need templates to save time and stay organized.

Includes Helpful Tools:

  • ChatGPT Prompts PDF: Ready-made prompts to spark ideas and create templates faster.
  • Checklist PDF: Stay on track as you work.

What’s Inside?

  • Clear Steps to Follow: Learn everything from idea to sale.
  • How to Find Popular Ideas: Research trends and needs.
  • Using AI to Create: Tips for improving templates with AI tools.
  • Making Templates User-Friendly: Simple tips for better design.
  • Selling Your Templates: Advice on sharing and selling on platforms like Gumroad or Etsy.
  • Fixing Common Problems: Solutions for issues like low sales or tricky designs.

Who Is This For?

  • Anyone who wants to make extra money online.
  • People who love using Notion and want to share their ideas.
  • Creators looking for a simple way to start selling digital products.

Get your free copy now and start making money today!

25 Must-Check Clojure Resources for Developers: Tutorials, Tools, and Tips

Clojure practice challenges – train on code kata

Practice Clojure coding with code challenges designed to engage your programming skills. Solve coding problems and pick up new techniques from your fellow peers.
Here's the link: https://www.codewars.com/kata/search/clojure...

GitHub - taoensso/carmine: Redis client + message queue for Clojure

Redis client + message queue for Clojure. Contribute to taoensso/carmine development by creating an account on GitHub.
Here's the link: http://0x3d.site/resource/clojure/ptaoussanis...

GitHub - antoniogarrote/clj-ml: A machine learning library for Clojure built on top of Weka and friends

A machine learning library for Clojure built on top of Weka and friends - antoniogarrote/clj-ml
Here's the link: http://0x3d.site/resource/clojure/antoniogarr...

GitHub - razum2um/aprint: Awesome print: like clojure.pprint, but awesome

Awesome print: like clojure.pprint, but awesome. Contribute to razum2um/aprint development by creating an account on GitHub.
Here's the link: http://0x3d.site/resource/clojure/razum2um/ap...

GitHub - clojure/core.async: Facilities for async programming and communication in Clojure

Facilities for async programming and communication in Clojure - clojure/core.async
Here's the link: https://github.com/clojure/core.async/...

ClojureVids

ClojureVids is dedicated to delivering high-quality Clojure video training for all skill levels.
Here's the link: https://www.youtube.com/channel/UCrwwOZ4h2FQh...

Build software better, together

GitHub is where people build software. More than 100 million people use GitHub to discover, fork, and contribute to over 420 million projects.
Here's the link: https://github.com/linpengcheng/ClojureBoxNpp...

GitHub - theleoborges/bouncer: A validation DSL for Clojure & Clojurescript applications

A validation DSL for Clojure & Clojurescript applications - theleoborges/bouncer
Here's the link: http://0x3d.site/resource/clojure/leonardobor...

GitHub - alekseysotnikov/buran: Bidirectional, data-driven RSS/Atom feed consumer, producer and feeds aggregator

Bidirectional, data-driven RSS/Atom feed consumer, producer and feeds aggregator - alekseysotnikov/buran
Here's the link: http://0x3d.site/resource/clojure/alekseysotn...

GitHub - ertugrulcetin/kezban: Utility library for Clojure and ClojureScript

Utility library for Clojure and ClojureScript. Contribute to ertugrulcetin/kezban development by creating an account on GitHub.
Here's the link: http://0x3d.site/resource/clojure/ertugrulcet...

GitHub - bigmlcom/clj-bigml: Clojure bindings for the BigML.io API

Clojure bindings for the BigML.io API. Contribute to bigmlcom/clj-bigml development by creating an account on GitHub.
Here's the link: http://0x3d.site/resource/clojure/bigmlcom/cl...

GitHub - fhd/clostache: {{ mustache }} for Clojure

{{ mustache }} for Clojure. Contribute to fhd/clostache development by creating an account on GitHub.
Here's the link: http://0x3d.site/resource/clojure/fhd/clostac...

GitHub - sorenmacbeth/flambo: A Clojure DSL for Apache Spark

A Clojure DSL for Apache Spark. Contribute to sorenmacbeth/flambo development by creating an account on GitHub.
Here's the link: http://0x3d.site/resource/clojure/yieldbot/fl...

GitHub - turbopape/milestones: The Automagic Project Planner

The Automagic Project Planner. Contribute to turbopape/milestones development by creating an account on GitHub.
Here's the link: http://0x3d.site/resource/clojure/turbopape/m...

GitHub - luciolucio/holi: A library for calendar operations that are aware of weekends and holidays

A library for calendar operations that are aware of weekends and holidays - luciolucio/holi
Here's the link: http://0x3d.site/resource/clojure/luciolucio/...

GitHub - fulcrologic/guardrails: Efficient, hassle-free function call validation with a concise inline syntax for clojure.spec and Malli

Efficient, hassle-free function call validation with a concise inline syntax for clojure.spec and Malli - fulcrologic/guardrails
Here's the link: http://0x3d.site/resource/clojure/fulcrologic...

GitHub - seancorfield/honeysql: Turn Clojure data structures into SQL

Turn Clojure data structures into SQL. Contribute to seancorfield/honeysql development by creating an account on GitHub.
Here's the link: http://0x3d.site/resource/clojure/jkk/honeysq...

GitHub - rinuboney/clatern: Machine Learning in Clojure

Machine Learning in Clojure. Contribute to rinuboney/clatern development by creating an account on GitHub.
Here's the link: http://0x3d.site/resource/clojure/rinuboney/c...

GitHub - clj-commons/pretty: Library for helping print things prettily, in Clojure - ANSI fonts, formatted exceptions

Library for helping print things prettily, in Clojure - ANSI fonts, formatted exceptions - clj-commons/pretty
Here's the link: http://0x3d.site/resource/clojure/AvisoNovate...

GitHub - onyx-platform/onyx: Distributed, masterless, high performance, fault tolerant data processing

Distributed, masterless, high performance, fault tolerant data processing - onyx-platform/onyx
Here's the link: http://0x3d.site/resource/clojure/onyx-platfo...

Build software better, together

GitHub is where people build software. More than 100 million people use GitHub to discover, fork, and contribute to over 420 million projects.
Here's the link: https://github.com/apa512/clj-rethinkdb:...

GitHub - juxt/joplin: Flexible datastore migration and seeding for Clojure projects

Flexible datastore migration and seeding for Clojure projects - juxt/joplin
Here's the link: http://0x3d.site/resource/clojure/juxt/joplin...

GitHub - avli/clojureVSCode: Clojure support for Visual Studio Code

Clojure support for Visual Studio Code. Contribute to avli/clojureVSCode development by creating an account on GitHub.
Here's the link: http://0x3d.site/resource/clojure/avli/clojur...

GitHub - aria42/infer: inference and machine learning in clojure

inference and machine learning in clojure. Contribute to aria42/infer development by creating an account on GitHub.
Here's the link: http://0x3d.site/resource/clojure/aria42/infe...

GitHub - juxt/bolt: An integrated security system for applications built on component

An integrated security system for applications built on component - juxt/bolt
Here's the link: http://0x3d.site/resource/clojure/juxt/bolt//...

Ornament

by Laurence Chen

Well done on the Compass app. I really like how fast it is (while retaining data consistently which is hugely undervalued these days). I’m not just being polite, I really like it.

by Malcolm Sparks

Before the Heart of Clojure event, one of the projects we spent a considerable amount of time preparing was Compass, and it’s open-source. This means that if you or your organization is planning a conference, you can use it.

When you decide to use Compass, besides importing the speaker and session data into the database, you’ll probably want to make some modifications to the frontend. That’s when you’ll quickly encounter an unfamiliar library: Ornament.

At first glance, the Ornament GitHub README is a bit lengthy, and you might be concerned about how difficult it is to learn. However, learning Ornament and using it for frontend development in the future is an excellent investment. Ornament requires a bit more time to learn because it’s a deep module. But because of this, it offers some competitive advantages that make it worth considering. Here are three key values:

  • Developer Ergonomics
  • Composability
  • High Performance

Developer Ergonomics

When you first see an introduction to Ornament, the easiest concept to grasp is that it’s a library that lets you write CSS directly within Clojure/ClojureScript.

When dealing with CSS, a common approach is to prepare several files dedicated to defining CSS and place the project’s custom CSS classes there. This means we typically have to switch back and forth between different cljs and css files during editing.

This back-and-forth problem was alleviated significantly with the rise of CSS utility classes like Tailwind and Tachyons. Developers could finally reference utility classes within a single cljs file. However, this method isn’t without limitations because sometimes we still want to do some customization or define custom CSS classes.

Ornament combines the flexibility of customization with the convenience of utility classes, allowing developers to handle everything within the cljs file. This way, we retain the flexibility of custom CSS while enjoying the streamlined experience of developing entirely within the cljs file.

Composability

There are two common uses for Ornament. The first is defining simple hiccup components, and the second is defining multi-layered hiccup components.

Consider a simple hiccup component:

(require '[lambdaisland.ornament :as o])

(o/defstyled freebies-link :a
  {:font-size "1rem"
   :color "#cff9cf"
   :text-decoration "underline"})

The defined component can be used with hiccup syntax:

;; Hiccup
[freebies-link {:href "/episodes/interceptors-concepts"} "Freebies"]

Which renders as: <a class="lambdaisland_episodes__freebies_link">Freebies</a>

Now, for a multi-layered hiccup component example:

(o/defstyled page-grid :div
  :relative :h-full :md:flex
  [:>.content :flex-1 :p-2 :md:p-10 :bg-gray-100]
  ([sidebar content]
   [:<>
    sidebar
    [:div.content
     content]]))

If you look closely, [:>.content :flex-1 :p-2 :md:p-10 :bg-gray-100] is no longer specifying the styles for :div itself, but rather for the child components under :div.
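To make that concrete, here is a hedged usage sketch of the two-arity component above (the sidebar markup is invented for illustration): the two hiccup arguments bind to sidebar and content in the render function, and the :div.content wrapper picks up the child styles.

```clojure
;; Hiccup usage of the page-grid component defined above; the nav
;; element is a hypothetical sidebar, the paragraph the main content.
[page-grid
 [:nav [:a {:href "/"} "Home"]]
 [:p "Main page content"]]
```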

High Performance

Ornament was designed with the concept of css-in-js in mind, allowing users to define all CSS within components while writing Clojure or ClojureScript. This significantly reduces the amount of global CSS.

When using Clojure and Ornament, all CSS classes are generated during the build phase, which is intuitive. However, when using ClojureScript, should the CSS also be generated during the ClojureScript build phase, or should it still be generated by JavaScript? Ornament chooses to generate all required CSS during the ClojureScript build phase. This approach simplifies the build process, removes a lot of unnecessary styling definitions, and results in a smaller bundle size.

It originated from dissatisfaction and struggles

Ornament’s design and inspiration originated from frontend engineer Felipe Barros’ dissatisfaction and struggles with ClojureScript development, which led to extensive discussions. If you find Compass useful and fast, take some time to explore Ornament; the time you invest will pay off.

Do you have any software development challenges or frustrations? Why not talk to us?

Potential GenAI Impact On DORA Metrics: Five Dimensions Of Value For Developers—Especially Creating Option Value!

Over the past decade, the DORA metrics shaped how much of the industry measures developer productivity and software delivery performance. My collaboration on this work with Dr. Nicole Forsgren and Jez Humble from 2013 to 2019 remains one of the things I’m most professionally proud of in my career. (And we chronicled our learnings in the fantastic book Accelerate.)

Recently, I’ve been working with Steve Yegge (yes, that Steve Yegge, famous for his work at Amazon and Google, and for his depiction of the famous Jeff Bezos “thou shalt communicate only by APIs” memo) and an amazing group of researchers and practitioners who are seeking to quantify the value of GenAI for developers. I’m excited that this could potentially build upon the amazing DORA work that continues at Google (hello, Nathen Harvey and Derek DeBellis!).

The latest 2024 DORA report had one intriguing anomaly: while GenAI improved the capabilities that typically drive software delivery performance (climate for learning, fast flow, and fast feedback), stability and throughput still went down! Now that’s a genuinely surprising anomaly!

Explaining anomalies is often the source of significant scientific breakthroughs, which leads to a better understanding of reality. For instance, the precession of Mercury’s orbit revealed the true nature of gravity (1915), and bent starlight during an eclipse validated Einstein’s theories (1919).

Over the last year, I’ve extensively used GenAI and coding assistants to build things, including a “writer’s workbench.” Not only has this been incredibly fun, but I’m finding that coding assistants are helping me:

  • build the things I want faster
  • be more ambitious about things I can build
  • be able to build them alone (as opposed to requiring other people or a team)
  • and have so much more fun doing it
  • take more swings at bat and explore more options

I believe that the DORA GenAI anomaly is telling us something: we need to expand our field of view beyond “code committed to running in production” to more fully include the product exploration, design, and development process.

Specifically, I believe we need to capture an entirely new dimension of value creation, especially around creating option value. I’ll present some case studies that show that option value is one of the largest multipliers of value creation, which has some surprising links to architecture and modularity, and techniques such as A/B testing.

In this post, I want to briefly discuss each of the dimensions of value described above, give a primer on option value, and explain why it’s so important:

  • The link between option value and modularity
  • Case studies of modularity creating 25x more option value in the IBM System/360 project, and something similar happening at Amazon e-commerce in the early 2000s, when they transformed their monolith into microservices
  • Some thought experiments to demonstrate option value (and linking it to A/B testing)

1. Build Things Faster

Writing code faster is probably the most talked-about benefit of coding assistants (e.g., GitHub Copilot, Cursor, and the one I use most, Sourcegraph Cody). However, I think this metric is probably one of the most superficial benefits.

Make no mistake, with coding assistants, I’m able to build things in hours that would have otherwise taken me days.

In a two-hour pair programming session with the incredible Steve Yegge, I was able to create a video excerpt generator. It took timestamps of my favorite portions of a podcast or YouTube video, and generated video excerpts, with captions overlaid onto them.

(Here’s the first Twitter thread I created using this tool, from an amazing talk from Dr. Erik Meijer, on how LLMs may enable us to no longer write code by hand anymore. Here’s an article with the “highlights reel” from that pair programming session.)

This pair-programming session was a mind-expanding experience of using “chat-oriented programming.” My big lesson was: type less; lean on LLMs more.

(And, of course, afterward, I’ve spent hours and hours improving that code and adding functionality. But without a coding assistant, it would have taken me days… This level of effort was high enough that building it was something for “maybe next month.” More on this later.)

I can confidently say that coding assistants make coding faster and easier 80-90% of the time.

(Of course, 10% of the time, coding assistants and LLMs make things maddeningly slower and more frustrating. Like last Friday, when every LLM was telling me that you could put any DOM element into the Slate.js editor DOM. After reading the documentation, I know now that is absolutely false.

…or when I spent hours going around in circles trying to get ffmpeg to put captions and a static image in the center of a video file… These things happen, and woe be to those who always blindly follow what the LLM tells you to do. Madness awaits.)

That notwithstanding, here’s my new reality: if I want to do coding, and my coding assistant or LLM isn’t available (like on a transoceanic flight), I’ll choose not to code… because it’s just too difficult without it.

In other words, who wants to write code by hand like some savage from 2010?

By the way, prototyping an app using Claude Artifacts is still the most amazing thing ever. (E.g., “Build me a React app with three columns: editor on left, 3 buttons in the middle, and editor on the right.” Four minutes later, you have a working prototype app. Who could possibly go back to the old way of doing this?)

(Another “let’s be honest” qualification: development is more than just “writing code.” A developer probably only spends 25% of their time writing code, spending twice that reading code. And in many organizations, development is only 15% of the total wall-clock time to get features to users.)

2. Tackle More Ambitious and Impactful Projects

Recall how my first working version of my video excerpt tool took 2 hours, which would have taken me days before. Because of the time required, I had deferred even trying. It was always a “maybe next month.”

There could have been many reasons: maybe the perceived benefit wasn’t high enough to warrant the work, or maybe the difficulty made the “juice not worth the squeeze,” or perhaps another opportunity offered a higher, more immediate payoff.

Last month, I learned some amazing terms from labor economics to frame the importance of this from Dr. Joe Davis, Chief Economist of Vanguard ($9.5T assets under management):

  • Substitutes: Two types of labor (or labor and technology) are substitutes if one can replace the other without significantly reducing output. For example, skilled workers and automated machinery might be substitutes. When the cost of one falls, the demand for the other may drop, possibly leading to job displacement.
  • Complements: Two types of labor (or labor and technology) are complements if they are more productive together. For example, human workers and specialized software often complement each other, enhancing productivity when used in tandem. Higher demand for one usually boosts demand for the other.

(You can watch the entirety of my conversation with Dr. Joe Davis here. Just register with an email address.)

Returning to the example of my video excerpt creator, because coding assistants are such an effective complement to my own abilities, I was able to do the work I otherwise would not have done.

In other words, coding assistants are complements (especially for senior developers), not merely substitutes. They enable the creation of significant value that wouldn’t have been possible otherwise, as many projects would have been too difficult, costly, or time-consuming to undertake without them.

I’ve experienced this many times over the last few weeks as I’ve built out my writer’s workbench tool, of which the video excerpt creator is a part. For instance:

  • swapping out the standard HTML textarea editor for the Slate.js React editor component so I could introduce more editor controls. I’ve used React components a handful of times, but my lack of experience/knowledge with React, with JavaScript and ClojureScript interop, and, heck, with JavaScript in general would have guaranteed eventual defeat.
  • (Three to four years ago, I put a React tree control in an app I wrote, which is still only half-working—though it’s working well enough for me to use, but I gave up trying to fix it years ago.)
  • Heck, I’m finding that front-end app development is now sufficiently within reach for me. Even as a novice, having learned Tailwind and with the help of coding assistants, I’m building tools for myself that I use every day.

These are all tasks that would have been out of reach for me, and yet are things I’m tackling every week. Which brings us to the next important dimension…

3. Ability To Build Things Yourself

One of the most fantastic things that happened after Steve Yegge wrote his “Death of the Junior Developer” post was that Dr. Matt Beane reached out to me. He introduced me to his research on the “novices optional” phenomenon, which he has studied for over 15 years.

Dr. Beane described how, for centuries, medical surgery has required multiple people because at least three hands are needed to perform the work. This made novice surgeons an essential part of the team and also provided them with a way to learn.

However, with the advent of surgical robots, expert surgeons can now complete their surgical procedures alone. To be clear, they still need the anesthesiologist and the other support staff, but they no longer need a novice surgeon to help them.

The result is that most novice surgeons are deprived of the time and experience they need to become proficient. So how do they eventually become senior surgeons?

(Economic reasons reinforce this outcome, too: novices not only take significantly more time to complete surgeries (time is money), but they also make more mistakes (not great for anyone).)

What does this have to do with software development? A lot! Coding assistants allow experienced developers to avoid the high costs of coordination entirely, enabling them to build things themselves, just as surgical robots did for surgeons.

In his article, Yegge wrote about how his then-head of AI showed him a multi-class prediction model that he had trained and deployed in a single day with the help of a coding assistant. What is remarkable is that last year, creating this tool would have been a six-week university senior-level intern project.

This story and Dr. Beane’s work suggest that, across almost every domain, the cost of coordination is so large that when given the opportunity for an expert to do something themselves, they will.

(It just occurred to me that this fact is deeply ingrained in the DevOps community: “self-service” is so important because we know that task queues are very dangerous. Too often, assigned tickets are not completed in the time and at the level of quality expected. That’s even if a task requires opening one ticket, let alone opening twenty different tickets with twenty different departments. It is almost always better to enable the person to get what they need on-demand.)

Indeed, this is one of my key learnings working with Dr. Steven Spear over the last four years: “Leaders massively underestimate the difficulty of synchronizing disparate functional specialties toward a common purpose” (as we state in the preface of Wiring the Winning Organization.)

(One last somewhat tangential thought: Yegge’s head of AI story illustrates coding assistants being a complement for seniors but a potential substitute for a junior intern.)

4. Have More Fun Doing The Work

I’ve talked about how coding assistants have enabled me to do work faster, do more ambitious and more impactful things, and increase the likelihood of being able to do something entirely on my own (which obviates entirely the cost of coordination).

But there’s another thing: when using coding assistants, the work is more fun.

For this, I’m going to cite my friend and co-conspirator from the State of DevOps Research (or DORA: DevOps Research and Assessment), Dr. Nicole Forsgren, on the fantastic work she did with her colleagues on the SPACE framework metrics, as published in her ACM Queue article.

To me, these metrics speak volumes about how much coding assistants changed how I feel about coding. I’ve picked a couple that seem relevant, and I know there are others that are as good, or even better:

Satisfaction and Well-Being

  • I felt fulfilled while completing the programming task.
  • I found myself frustrated while completing the programming task.
  • The code I wrote was of high quality.
  • I enjoyed completing this task.

Effectiveness and Flow

  • I was focused on the task during the programming session.
  • I was a productive programmer while completing the task.
  • I made fast progress despite working with an unfamiliar system.
  • I maintained a state of flow during the programming task.
  • I completed the repetitive programming activities fast during the task.

Communication and Collaboration

  • I spent considerable time searching for information or examples during the task.

So, if I were to score myself on the criteria above, it is clear: when I’m working with a coding assistant, the work is more fun, and I’m happier with the quality of what I’ve built. I’m happier!

(I’ve already mentioned that I don’t want to do coding without a coding assistant!)

And for non-solo projects, when developers are happier, there’s ample evidence that they do better work, and you have better retention, and it’s easier to recruit.

5. One More Thing: Explore More Options

During the six years I was involved with the DORA State of DevOps research, we focused primarily on the “code committed” to “running successfully in production” portion of the technology value stream. This is indicated in the right-hand portion of the table below. This is the boundary of where Dev and Ops had to work together to create value for the customer.

This brought in the processes of code integration, testing, and deployment, as well as a bunch of other necessary activities that we explored over the years (e.g., environment creation, test data management, etc.).

Comparison of Ideation, Research, Design and Development versus Product Delivery phases in DevOps process: Product Design vs. Delivery in DORA

All my excitement with coding assistants makes me eager to explore how these tools could help with the processes of ideation and discovery, research, design, development, and testing. (And while I’m thinking about it, infosec, architecture, and all that, too.) After all, this is where the real value-creating activities take place.

One of the things that excites me is how coding assistants make ideation and research easier and faster. This vastly increases the number of options we can evaluate, which in turn vastly increases the design space we can explore.

I’ve written about the astonishing results of “coding with ChatGPT voice mode while walking the dog,” as recommended by Simon Willison.

My opening question was:

“Hello! I’m trying to build a ‘writer’s workbench’ in ClojureScript. I want to integrate a text editor that allows you to edit text, but where I can select text and send it to an LLM for tasks like clarifying, extracting facts, or even suggesting rewrites. For example, in non-fiction, it might help rewrite for ‘showing, not telling.’

“Rather than the usual triple-click to select a paragraph and then clicking a button, I’d like to explore options that go beyond the standard text area. Specifically, I’d love to be able to associate buttons directly with sentences or phrases, so instead of dragging to select text, I could just click on a sentence or outline it visually to interact. What are tools or libraries in the React ecosystem that support this kind of interaction?”

Over the next 60 minutes, I asked it increasingly detailed questions, learning about the options and which factors were important to me. I also asked it to write code using the three libraries it suggested (even though I couldn’t read it):

  • What options are available for integrating buttons or interactive elements within a text editor in the React ecosystem?
  • Can you provide simple examples for setting up interactive features in Slate.js, Draft.js, and ProseMirror?
  • What does the JSON structure of a simple Slate.js editor look like?
  • Could a simpler JSON structure work for a text editor, where formatting is only applied at the paragraph level?
  • Let’s think about the outlier use case: describe how I could move an item up or down, or promote it to a parent level, in a nested vector structure in Clojure?

That was amazing! By the end of the 45-minute walk, I was confident I had a promising option to explore, and I was ready to try writing a trivial Slate.js application. That conversation averted a lot of risk by talking through the various scenarios and eliminating less promising possibilities.

So, what is this value? Option theory gives us a fantastic language for describing it! (Thank you, ChatGPT.)

The design process is very similar to the concept of “real options” in finance. Each option gives you the flexibility to explore a path without committing upfront. Coding assistants function similarly to these options, giving you previews of multiple paths so you can make smarter decisions before investing significant time or resources.

  • Make It Cheaper and Faster to Explore Options: Normally, testing multiple solutions requires a lot of time. Coding assistants lower the “price” of exploring each path, so you can evaluate multiple approaches without the usual trade-offs. This is like having the freedom to explore new markets or projects cheaply before committing capital. (Think of the quote: “Let a thousand flowers bloom.”)
  • Able to Change Your Mind (and Avoid “One-Way Doors”): With coding assistants, you’re not locked into one approach early on. You can gather insights on a range of possibilities—Slate.js vs. Draft.js vs. ProseMirror. In options theory, this flexibility reduces the risk of sunk costs if a particular approach proves bad.
  • Explore More of the Possible Design Space: Coding assistants also expand what you’re able to explore. You can quickly learn new libraries, tools, or frameworks that might have been daunting on your own. This is like gaining access to new, potentially profitable opportunities because exploration costs have dropped.
  • Gain Knowledge and Reduce Risk: Exploring multiple options through your assistant is like accumulating information before committing to a decision. In finance, uncertainty drives risk, and more information reduces that risk. Similarly, by “trying” various approaches through the assistant, you reduce the likelihood of costly rework and increase the odds of a high-quality result.
  • Increase Portfolio Diversification: Exploring different options through coding assistants is like managing a portfolio of real options. Each question you ask helps you gather insights across multiple potential paths. This portfolio approach reduces overall risk by diversifying your design possibilities, improving the quality of the final solution.

In short, coding assistants provide an “option portfolio” for design and development choices. They allow you to sample multiple approaches at low cost, build flexibility into your process, and ultimately make more informed, higher-quality decisions in a way that minimizes risks and maximizes potential value.

Intermission: Before Diving More into Measuring Option Value

These five dimensions of value seem pretty orthogonal to me. And what is most exciting is that I believe that “writing code” faster is the least important measure.

So which will be the most important measure? Creating option value.

An option is defined as the right, but not the obligation, to act.

And here’s where life genuinely gets a bit strange. I’m going to tell you about three people: Dr. Carliss Baldwin, Dr. Steve Spear, and Steve Yegge.

To tell you about Dr. Baldwin’s work, I’m going to quote from Wiring the Winning Organization, which I co-authored with Dr. Steve Spear… (Who, believe it or not, had Dr. Carliss Baldwin as an advisor when he was working on his doctoral dissertation at the Harvard Business School. You think that’s a small world? Wait until I tell you about how Steve Yegge fits in!)

In their book Design Rules, Drs. Carliss Baldwin and Kim Clark show how system modularity creates option value. They build on the work of Drs. Robert Merton, Fischer Black, and Myron Scholes, who showed how to quantify the monetary value created by options on financial instruments. Merton, Black, and Scholes showed how one can decouple (temporally) decisions tomorrow from conditions today, giving latitude of action to decision-makers that they otherwise wouldn’t have. Baldwin and Clark showed how one can decouple actions (spatially) in one location from those in another, providing independence of action that otherwise wouldn’t have existed.

To illustrate this point, consider a system made up of ten gears that are all coupled together and therefore composed of only one module. To perform an experiment in this system, you must spin all ten gears at the same time because no gear is independent of the others. This means that for someone responsible for a gear to make a change, they must coordinate with the owners of the other nine gears, even if what is being changed doesn’t affect the other gears (e.g., changing only its material composition).

On the other hand, if each gear were its own module, each of the ten gears could be changed independently (that’s the spatial dimension), potentially more frequently (because of independence of action), and decisions could be delayed until after the result of the experiment is known (the temporal dimension). The result is that the more modules there are in a system, the larger the space that can be explored, often by orders of magnitude.
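The gear example can be put in rough numbers. Here is a minimal sketch of my own (an illustration, not a formula from Design Rules), assuming each module has a fixed number of candidate designs:

```python
# Illustration (my own, not from Design Rules): with m independent modules and
# k candidate designs per module, each module's best variant can be chosen
# independently, so covering the design space takes only m * k experiments.
# A fully coupled system must be tested as a whole, one configuration at a
# time, so covering the same k**m configurations takes k**m experiments.

def experiments_needed(modules: int, variants: int, coupled: bool) -> int:
    """Experiments required to cover the full design space."""
    if coupled:
        return variants ** modules   # every whole-system combination tried
    return modules * variants        # each module explored on its own

# Ten gears, three candidate designs each:
coupled = experiments_needed(10, 3, coupled=True)    # 59,049 whole-system trials
modular = experiments_needed(10, 3, coupled=False)   # 30 per-module trials
print(coupled, modular)
```

The same design space that costs 59,049 whole-system experiments in the coupled case is reachable with 30 per-module experiments in the modular one, which is the "orders of magnitude" increase described above.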

Source: Kim, Gene; Spear, Steven J.. Wiring the Winning Organization: Liberating Our Collective Greatness through Slowification, Simplification, and Amplification (p. 140). IT Revolution Press. Kindle Edition.

Furthermore, Baldwin and Clark describe how modularity also creates immense option value. In fact, their claim is that ONLY option value can explain the approximately 25x increase in value creation that they observed studying the IBM development of the System/360 system in the 1960s.

By the way, we use the following language in Wiring the Winning Organization:

  • Layer 1: the actual work to be performed (e.g., the patient, the code, the binary running in production)
  • Layer 2: the technologies used to perform the work (e.g., the MRI machine, the IDE, the Kubernetes platform)
  • Layer 3: the social circuitry, the organizational wiring, the software architecture — the difference maker

Case Study: Modularization in Computer Hardware and Software (1960s)

[Amazon’s engineers creating APIs in the 2000s] were not the first to modularize a large technical system (Layer 1) to reduce the cognitive overload of people in Layer 3 (social circuitry).

IBM adopted such an approach for similar reasons some fifty years earlier. In 1960, IBM was the leading mainframe computer company, with a revenue of $3.3 billion, but its market position was at risk. Competitors were entering the market, and IBM needed to figure out how to deliver faster computers (and the software that ran on them) to market more quickly.

Their time-to-market problem was due, at least in part, to coordination costs. Design teams had to be highly integrated, which meant they had limited independence of action (Layer 3), because the systems they designed were tightly coupled (Layer 1). A CPU change might require a memory change and maybe even software changes.

This coupling compounded to make any change difficult, requiring communication, coordination, or approvals across thousands of engineers. And worse, because software design was so tightly coupled to the underlying hardware design, software was incompatible from one hardware system to the next, requiring customers to rewrite their software every time they changed computer systems. And if customers had to rewrite their software anyway, it became easier for them to consider another vendor.

In response, IBM developed the System/360 family of computers. They varied by processing power, depending on customer needs, but they all ran the same software, solving for the compatibility and upgrade problem. This project was the first to decouple software from the hardware it ran on, giving software and hardware engineers the independence of action they lacked.

But the hardware designs were still highly coupled. This created the same struggle with coordination costs. This impacted development and delivery speed, as well as compatibility issues that affected customer migration from one system to the next. To address this, IBM made the revolutionary decision in 1961 to modularize hardware components (such as CPUs, memory, tape and disk drives, terminals, and keyboards), making them “plug compatible” and interchangeable, available to be used across the entire System/360 family of computers.

By partitioning the system into modular components that connected through stable interfaces, IBM made it possible for groups to work, experiment, and make improvements in parallel, without the constant communication, coordination, and joint approvals previously required.

In Design Rules, Dr. Carliss Baldwin and Dr. Kim Clark wrote, “For the first time in history, a computer system did not have to be created by a close-knit team of designers.” Like at Amazon fifty years later, changing the technical system’s architecture (Layer 1) created opportunities for designers to work independently (Layer 3) and for customers to have a range of options they previously lacked.

The System/360 program would be the largest hardware and software effort ever undertaken up to that point, with a cost estimated over $5 billion, two times higher than IBM’s annual revenue at the time, and involving thousands of engineers. When the System/360 computers were introduced four years later in 1964, they launched five compatible computers, with 150 interchangeable peripherals and software products.

It was an enormous commercial success, giving IBM market dominance that lasted thirty years. Revenue grew from $3.3 billion in 1960 to $7.5 billion in 1970 and $26.2 billion in 1980, with descendants of the System/360 increasing IBM’s cash flow by twenty times during that same period.

Source: Kim, Gene; Spear, Steven J.. Wiring the Winning Organization: Liberating Our Collective Greatness through Slowification, Simplification, and Amplification (pp. 181-182). IT Revolution Press. Kindle Edition.

According to Baldwin, modularity enables independence of action and creates massive option value. We know how important independence of action is. After all, we found that software architecture was one of the top predictors of performance! Check out these architectural attributes from the 2017 DORA report:

Architecture is one of the top predictors of DORA metrics performance

Oh, and by the way, modularity is exactly what enabled Amazon to regain independence of action, which had caused their ability to deploy code to grind to a halt (which Steve Yegge is well-known for helping chronicle).

Amazon went from having one module in 1998 to tens of modules in 2004. By 2011, they had hundreds of modules, each able to work independently of each other. The impact on teams’ ability to deploy to production is breathtaking.

  • 1998: Hundreds of deployments per year (est.)
  • 2002: Twenty deployments per year (est.)
  • 2011: 5.4 million deployments per year (15,000 deployments per day)
  • 2015: 49 million deployments per year (136,000 deployments per day)

But, modularity also creates option value. Let’s quote Dr. Baldwin from her 2015 Technology and Innovation Management Distinguished Scholar Award acceptance speech (23m mark).

When you achieve a modular structure… each individual component becomes flexible. You can experiment with different options. This architecture tolerates uncertainty, which immediately makes finance people think of options theory. The modular architecture is rich with options that can be analyzed using financial tools.

Unlike a rigid system where you take it or leave it, modularity allows you to mix and match the best outcomes from many experiments. This was my biggest insight… The structure’s option-rich nature has profound implications for value creation.

To demonstrate this: if you look at the number of modules on one axis and the number of experiments per module on the other, you see exponential value creation. System/360 had about 25 modules with 25 experiments per module. This resulted in the system’s value increasing by 25 times. Such a dramatic increase in value can justify investing in many architects and experimenters.

When I was younger, I told my Technology and Operations Management colleagues who dismissed finance: “You don’t understand—finance doesn’t serve you, finance drives you.” This kind of value proposition is unstoppable from a financial perspective. The economy will reorganize itself—old organizations will disappear, new ones will emerge, funded by aggressive venture capitalists. They’ll get the job done regardless of who stands in their way.

Instead of the 0.25x improvement I saw in traditional finance models, here was a 25x factor. I told Kim [Clark], “This is what we have to explore—this is unstoppable.” That was in 1993. While I didn’t predict all the subsequent developments, I was certain we would see radical rearrangements of economic relationships because of this value creation potential.

Source: Dr. Carliss Baldwin, 2015 Technology and Innovation Management Distinguished Scholar Award acceptance speech (23m mark)

What does she mean by saying that the “industry was going to be blown apart?” Because customers want options, i.e., the ability to mix and match components. So what happened?

What happened next was unexpected: by 1969, the first plug-compatible peripheral companies emerged, and by 1980, hundreds of firms were making System/360-compatible components.

This shift forced major competitors like Burroughs, GE, and what’s now Unisys to retreat to protected niches. IBM hadn’t planned for this – they wanted to sell all System/360 components themselves. Their response was twofold: they sent a task force to improve engineer satisfaction (which only recommended new curtains) and aggressively sued compatible manufacturers. Though IBM won the legal battles a decade later, it was too late. Silicon Valley and a whole ecosystem of computer firms had already formed.

The impact was dramatic. Our graphs show IBM [market capitalization] as a “blue mountain range” that took a major hit when the market realized they wouldn’t monopolize System/360 components. The industry transformed from a “Chandlerian” vertically-integrated structure into fragmented, specialized components. As Andy Grove of Intel described in “Only the Paranoid Survive,” it was a transition from vertical silos to horizontal layers.

In 1979, IBM dominated about half the industry, alongside other vertical players like Xerox, Unisys, and Digital Equipment. By 2005, the landscape had completely changed: Microsoft, Intel, and Cisco led the industry, each specializing in complementary components rather than competing directly. IBM had fallen to fourth place.

Source: Dr. Carliss Baldwin, 2015 Technology and Innovation Management Distinguished Scholar Award acceptance speech (25m mark)

PS: In Wiring the Winning Organization, we heavily cite Dr. Carliss Baldwin’s amazing book, “Design Rules, Vol. 1 (1999).” I’m delighted beyond words that her long-awaited “Design Rules, Vol. 2 (2024)” was just released!

A Simple Thought Exercise

Let me give you a simple example that hopefully shows how powerful creating option value is. I’m going to give you two scenarios to choose from.

  • A. One roulette wheel, where you have to pick a number before you know where the ball will land.
  • B. One hundred roulette wheels, where you can defer your decision until you see where all 100 balls land, and then place your 100 bets.

The answer, of course, is B.

Two Roulette Wheel Scenarios To Explain Option Theory

Let’s compare the returns in the two scenarios:

  • Scenario A: ($0.973 – $1)/$1 = -2.7% return (a losing bet!)
  • Scenario B: ($3500 – $100)/$100 = 3400% return

Scenario B has approximately a 3,500x better return than Scenario A (ignore for now that we’re comparing against a negative return). But we’re actually dramatically understating the power of perfect information in Scenario B because:

  • You only need to risk $1 per winning number, not all numbers.
  • With perfect knowledge, you know exactly which bets will win.

In fact, as Claude pointed out to me, our actual return approaches infinity because:

  • Investment approaches zero (we only exercise our option and bet on winners)
  • Payout remains constant (35:1)
  • Return = (Payout – Investment)/Investment
  • As Investment → minimum bet size, Return → infinity

The true advantage of perfect information isn’t just the 3,500x better return shown in the simple calculation, it’s that you can structure your bets to achieve theoretically infinite returns by minimizing investment while maintaining the same payout.

This is why option value is so powerful—having the right but not obligation to act with perfect information allows you to eliminate downside risk while preserving upside potential.

We’re Already Familiar With Option Value—Think of A/B Testing!

If creating option value by deferring decisions until we have more information sounds familiar, congratulations! The widely used practice of A/B feature flagging (i.e., feature toggles) is an example of this.

Consider an A/B test in which we deploy two “calls to action” into production to determine which one results in higher user signup rates. We run both A and B variants in parallel, and when we discover which one performs better, we switch to the winning one.

A/B testing creates option value in a way that perfectly matches financial option theory. When we deploy two features into production, we gain the right—but not the obligation—to run either feature. This creates value through flexibility, just like financial options do.

The mechanics directly parallel financial options. The cost of deploying both features is like paying an option premium—a small upfront cost that creates future flexibility. Once deployed, we can switch between features with minimal cost, observe their performance, and ultimately choose the better performer while abandoning the underperforming variant. This ability to maximize the upside while limiting the downside is the essence of option value.

The information generated has concrete value because it enables better decisions. While not providing perfect information, like in the roulette wheel example, A/B testing allows us to defer deciding between A and B until we know which performs better—and then we choose! We pay a small upfront cost (deploying both features) to create the option to choose the better outcome once we have more performance information.
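As a concrete illustration, here is a minimal feature-flag sketch of my own (all names are hypothetical, not from any particular library). Deploying both variants is the option premium; picking the winner later is exercising the option:

```python
# Hypothetical minimal A/B feature flag (illustrative names, not a real
# library). Both variants ship; the choice between them is deferred until
# performance data arrives -- the right, but not the obligation, to act.

class ABFlag:
    def __init__(self, variants):
        self.variants = variants                     # e.g. ["A", "B"]
        self.visits = {v: 0 for v in variants}
        self.signups = {v: 0 for v in variants}
        self.winner = None                           # option not yet exercised

    def assign(self, user_id: int) -> str:
        if self.winner:                              # exercised: all traffic to winner
            return self.winner
        variant = self.variants[user_id % len(self.variants)]
        self.visits[variant] += 1
        return variant

    def record_signup(self, variant: str):
        self.signups[variant] += 1

    def exercise(self) -> str:
        """Choose the variant with the best signup rate; abandon the rest."""
        self.winner = max(self.variants,
                          key=lambda v: self.signups[v] / max(self.visits[v], 1))
        return self.winner

flag = ABFlag(["A", "B"])
for user in range(100):
    flag.assign(user)                                # 50/50 split by user id
flag.record_signup("B")
flag.record_signup("B")
flag.record_signup("A")
print(flag.exercise())                               # prints B
```

The small cost of running both variants buys the ability to wait for data and then route everyone to the better performer, which is exactly the option structure described above.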

Conclusion

I’m looking forward to working with some amazing, like-minded thinkers to explore this further. Stay tuned! 2025 is going to be an amazing year of learning!

And if you have ideas for how to measure GenAI impact on the DORA metrics, email me at genek at irevolution.com.

The post Potential GenAI Impact On DORA Metrics: Five Dimensions Of Value For Developers—Especially Creating Option Value! appeared first on IT Revolution.


Nov. and Dec. 2024 Project Updates

Happy New Year all! This is the last set of updates from our developers who received 2024 annual funding, and the final report from Daniel Slutsky, who was funded in Q3 2024. Thanks to everyone for their amazing work throughout the year!

Long-Term Project Updates

Bozhidar Batsov: CIDER
Michiel Borkent: squint, babashka, neil, cherry, clj-kondo, and more
Toby Crawley: clojars-web
Thomas Heller: shadow-cljs, shadow-grove
Kira Howe: Scicloj libraries, tcutils, Clojure Data Cookbook, and more
Nikita Prokopov: Humble UI, Datascript, AlleKinos, Clj-reload, and more
Tommi Reiman: Reitit 7.0, Malli, jsonista, and more
Peter Taoussanis: Carmine, Nippy, Telemere, and more

Q3 2024 Project Update

Daniel Slutsky: SciCloj

Bozhidar Batsov

Report 6. Published January 1, 2025

Happy New Year, everyone!
The last couple of months were a bit slower than usual for CIDER and friends, but we still managed to make some good progress. Below is a list of highlights:

  • CIDER 1.16.1 was released in early December with a bunch of bug-fixes
  • nREPL 1.3.1 is also out with a couple of small fixes.
  • Work has been done in Orchard to simplify the parsing of Java source files
    • We’ve dropped support for Java source parsing on Java 8 to make the codebase more maintainable
    • Now you no longer need to have the JDK sources on the classpath (Orchard will look for them in several common places)
    • This (and some other improvements) will be integrated into CIDER for its next release (1.17), which I’m hoping to launch in the next month or so.
  • Piggieback 0.6 is out with improved compatibility for nREPL 1.3.
    • That’s the first release in over 3 years, btw!
  • We’ve launched a State of CIDER 2024 survey to gather input from users about the way they are currently using CIDER. The data from this survey will help us shape the roadmap for 2025. The survey will remain open until the end of January.

That’s all from me for now. Thanks again for supporting CIDER’s development in 2024!


Michiel Borkent

Report 6. Published December 30, 2024

In this post I’ll give updates about the open source work I did during November and December 2024. To see previous OSS updates, go here.

Sponsors

I’d like to thank all the sponsors and contributors that make this work possible. Without you, the projects below would not be as mature, or wouldn’t exist or be maintained at all.

Current top tier sponsors:

If you want to ensure that the projects I work on are sustainably maintained, you can sponsor this work in the following ways. Thank you!

If you’re used to sponsoring through some other means which aren’t listed above, please get in touch. On to the projects that I’ve been working on!

Updates

Clojurists Together announced that I’m among the 5 developers who were voted to receive the Long Term Funding in 2025. You can see the announcement here. Thanks so much!
Here are updates about the projects/libraries I’ve worked on in the last two months.

  • babashka: native, fast starting Clojure interpreter for scripting.

    • #1771: *e* in REPL should contain exception thrown by user, not a wrapped one
    • #1777 Add java.nio.file.attribute.UserDefinedFileAttributeView
    • #1776 Add java.nio.file.attribute.PosixFileAttributes
    • #1761 Support calling clojure.lang.RT/iter
    • #1760 For compatibility with Fireworks v0.10.3, added the following to the :instance-checks entry in babashka.impl.classes/classes (@paintparty)
      • clojure.lang.PersistentArrayMap$TransientArrayMap
      • clojure.lang.PersistentHashMap$TransientHashMap
      • clojure.lang.PersistentVector$TransientVector
      • java.lang.NoSuchFieldException
      • java.util.AbstractMap
      • java.util.AbstractSet
      • java.util.AbstractList
    • #1760 For compatibility with Fireworks v0.10.3, added volatile? entry to babashka.impl.clojure.core/core-extras (@paintparty)
    • Bump babashka.cli to 0.8.61
    • Bump clj-yaml to 1.0.29
    • #1768: Add taoensso.timbre color-str function
    • Add classes:
      • javax.crypto.KeyAgreement
      • java.security.KeyPairGenerator
      • java.security.KeyPair
      • java.security.spec.ECGenParameterSpec
      • java.security.spec.PKCS8EncodedKeySpec
      • java.security.spec.X509EncodedKeySpec
      • java.security.Signature
    • Add java.util.concurrent.CompletionStage
    • Bump core.async to 1.7.701
    • Bump org.babashka/cli to 0.8.162
    • Include jsoup for HTML parsing. This makes bb compatible with the hickory library (and possibly other libraries?).
    • #1752: include java.lang.SecurityException for java.net.http.HttpClient support (@grzm)
    • #1748: add clojure.core/ensure
    • Upgrade taoensso/timbre to v6.6.0
    • Upgrade babashka.http-client to v0.4.22
    • Add :git/sha from build to bb describe output (@lispyclouds)
    • Fix NPE with determining if executing from self-contained executable
  • squint: CLJS syntax to JS compiler

    • Fix #255: fn literal with rest args
    • #596: fix unary division to produce reciprocal
    • #592: fix clj->js to not process custom classes
    • #585: fix clj->js to realize lazy seqs into arrays
    • #586: support extending protocol to nil
    • #581: support docstring in defprotocol
    • #582: add extend-protocol
    • Add delay
    • Fix #575: map? should not return true for array
    • Fix #577: support $default + :refer
    • Fix #572: prevent vite page reload
  • CLI: Turn Clojure functions into CLIs!

    • Fix #109: allow options to start with a number
  • scittle: Execute Clojure(Script) directly from browser script tags via SCI

    • #99: make js/import work
    • #55: create gh-pages dir before using.
    • #89: allow evaluate_script_tags to specify individual scripts.
    • #87: prod build on fresh checkout fails
  • clj-kondo: static analyzer and linter for Clojure code that sparks joy.

    • Unreleased
    • #2272: Lint for nil return from if-like forms
    • Add printf to vars linted by analyze-format. (@tomdl89)
    • #2272: Report var usage in if-let etc condition as always truthy
    • #2272: Report var usage in if-not condition as always truthy
    • #2433: false positive redundant ignore with hook
    • Document :cljc config option. (@NoahTheDuke)
    • #2439: uneval may apply to nnext form if reader conditional doesn’t yield a form (@NoahTheDuke)
    • #2431: only apply redundant-nested-call linter for nested exprs
    • Relax :redundant-nested-call for comp, concat, every-pred and some-fn since it may affect performance
    • #2446: false positive :redundant-ignore
    • #2448: redundant nested call in hook gen’ed code
    • #2424: fix combination of :config-in-ns and :discouraged-namespace
    • 2024.11.14
    • #2212: NEW linter: :redundant-nested-call (@tomdl89), set to level :info by default
    • Bump :redundant-ignore, :redundant-str-call linters to level :info
    • #1784: detect :redundant-do in catch
    • #2410: add --report-level flag
    • #2416: detect empty require and :require forms (@NoahTheDuke)
    • #1786: Support gen-interface (by suppressing unresolved symbols)
    • #2407: support ignore hint on called symbol
    • #2420: Detect uneven number of clauses in cond-> and cond->> (@tomdl89)
    • #2415: false positive type checking issue with str/replace and ^String annotation
  • nbb: Scripting in Clojure on Node.js using SCI

    • 1.3.196 (2024-11-25)
    • Add locking macro for compatibility with CLJS
    • 1.3.195 (2024-11-07)
    • #343: support :reload for reloading CLJS namespaces and JS code
  • rewrite-clj: Rewrite Clojure code and edn

    • Fix parsing of b// symbol
  • pod-babashka-go-sqlite3: A babashka pod for interacting with sqlite3

    • #19: Restore mac intel support (this time for real)
  • tools-deps-native and tools.bbuild: use tools.deps directly from babashka

    • Fix #30: pod won’t run on newer versions of macOS
  • http-client: babashka’s http-client

    • #73: Allow implicit ports when specifying the URL as a map (@lvh)
    • #71: Link back to sources in release artifact (@lread)

Other projects

There are many other projects I’m involved with but that had little to no activity in the past month. Check out the Other Projects section (more details) of my blog here to see a full list.


Toby Crawley

Report 6. Published December 31, 2024

CHANGELOG | clojars-web commits

November 2024

CHANGELOG | clojars-web commits | infrastructure commits

  • Worked with a new contributor (Osei Poku) to implement a sitemap
  • Constrained heap usages for cron tasks to prevent competition with the webapp
    (this was causing the server instance to become unavailable on occasion)

Thomas Heller

Report 6. Published January 6, 2025

Time was mostly spent on maintenance work and some bugfixes, as well as helping people out via the typical channels (e.g. Clojurians Slack).

Current shadow-cljs version: 2.28.20 (Changelog)


Kira Howe

Report 6. Published December 31, 2024.

This is a summary of the open source work I spent my time on throughout November and December 2024. This was the last period of my ongoing funding from Clojurists Together. It’s been such a magical year in many ways and I’m so grateful to have had the opportunity to spend so much time focusing on open source this year. It was a fantastic experience and I hope to be able to take another professional hiatus at some point in the future to do it again.

I’ve really enjoyed writing these summaries, too, and I find knowing they’re coming helps motivate me to stay more organized and keep better track of things, so I’ll probably continue to publish them even though I’ll be spending less time on side projects as my focus shifts to other priorities this year.

Sponsors

I always start these posts with a sincere thank you to my sponsors, whose generous ongoing support makes this work possible. I can’t say how much I appreciate all of the support the community has given to my work, and I’d like to give special thanks to Clojurists Together and Nubank for providing incredibly generous grants that allowed me to reduce my client work significantly and afford to spend more time on projects for the Clojure ecosystem for nearly a year.

If you find my work valuable, please share it with others and consider supporting it financially. There are details about how to do that on my GitHub sponsors page. On to the updates!

BOBKonf 2025

One exciting update is that a workshop I proposed got accepted to BOBKonf, which will be in Berlin next March. It’ll be similar to the types of talks and workshops I’ve been doing over the last couple of years at e.g. the Conj and London Clojurians, of course updated to show off the latest and greatest developments in the Clojure-for-data ecosystem. I spent some time in December beginning work on the content and now I’m in full conference-driven-development mode, figuring out what’s realistic to finish in time to demo at the event and what we should consider stable “enough” for now and just include. This preliminary work also sparked a couple of minor conversations, one about quarto theming of Clay notebooks and another about parsing dates from Excel workbooks.

Anyway there are still a couple of months to work on it, which on one hand feels like a long time but on the other hand is also no time at all. Before I know it I’ll be landing in Berlin ready to share the wonders of Clojure with a new eager audience.

Clojure Data Cookbook

This has been a very long-running, very ongoing project of mine. The high level goal was always (and still is) to create resources that would allow people to figure out how to be productive with Clojure’s data stack. In reality what this particular project morphed into was a process for discovering the gaps in the ecosystem and guiding development of new tools, uncovering missing features to implement or new libraries to write every time I’d start work on a new chapter.

We’ve come a long way over the past couple of years and there’s still work to do, but the ecosystem is reasonably mature now. The Noj book has taken on covering a lot of the topics I wanted to document, thanks to Daniel Slutsky’s incredible efforts at coordinating the community to produce this amazing content. The list of draft articles demonstrates many of the areas where work is still very much ongoing in the development of the various libraries. Tutorials are mostly left unfinished not because the authors haven’t gotten around to finishing them, but because the question of what exactly to write about is yet to be answered.

On the Clojure Data Cookbook itself, the current work in progress is available here and includes only sections that document stable and established functionality. The goal of making Clojure’s data stack accessible and easy to work with is still at the top of my priority list but conversations are underway about what the best way to do that is in the context of the current ecosystem.

ggclj

Another project I’ve been poking away at the last couple of months is my implementation of the grammar of graphics in Clojure. Most of my effort here is spent learning more and more about the core concepts of data visualization, how graphics can be represented using a grammar, and then how that grammar could be implemented in an existing programming language, along with exploring prior implementations in other languages. I have a very rudimentary build process working for transforming an arbitrary dataset into a standardized, graphable dataset, but nothing yet on the actual graphic rendering. It’s very interesting and satisfying, but I’m not sure how useful. Still, in the spirit of heeding Rich’s advice from the last Conj about doing projects for fun, I haven’t let it go completely. It’s still something I’d love to get working someday.

Reflecting on a year of open source

As I mentioned above I really enjoyed having the time this year to work on so many interesting projects for the Clojure community. It’s so rewarding to see how far we’ve come. Even though it feels like there is still so much to do, I think it’s important to reflect on the progress we have made and think about how the problems we encountered along the way shaped the path we took.

When I first started working on the Clojure Data Cookbook, there wasn’t even a way to publish a book made out of Clojure files. Clerk was brand new and Clay barely existed. Now we can render a pile of Clojure files as a Quarto book! And the need for better documentation has spurred tons of amazing development in this space. The literate programming story in Clojure is better than in any other language.

We’ve also made huge strides in connecting the various libraries of the ecosystem together. At the beginning of the year there were many amazing but disconnected libraries. I’ve been really inspired by the ideas behind the tidyverse and have been trying to communicate the idea of sharing common idioms and data structures. An ecosystem is starting to emerge in Noj that offers a coherent, standardized, shared paradigm for using all of the amazing tools of the Clojure data ecosystem together. The default stack has been chosen, and serious efforts are now underway toward making these libraries feature complete and interoperable. And I plan to continue working on tutorials, guides, and workshops as much as I can to help promote it all.

I’m grateful for all the changes in my life that have taken my time away from working on side projects as much as I used to, like marriage and a great new job, but in many ways I miss doing more of this work and I sincerely hope I find myself in a position to veer off of this “standard” life track in the future to take a period to focus on this kind of stuff full time again. Even better would be figuring out a way to make it sustainable so that I could continue to do it full time. If you have any idea how to make that work, let me know :)

It turns out I am not the kind of market-oriented, entrepreneurially-minded person who can turn coding skills into a business that generates steady income for my family. I like contracting and the slow-and-steady community building type of work that constitutes a career in open source, but unfortunately continuing down this road is just not in the cards for me this year. Though I’ll never be able to completely resist working on it whenever I can :) Thanks so much for reading this far, and hope to see you around the Clojureverse!


Nikita Prokopov

Report 6. Published December 30, 2024

Hi, this is Niki Tonsky, and for the past two months I have been working on a fast EDN parser.

Fast EDN, a faster EDN parser for Clojure.

This project started with a question: why does EDN parse so much slower than JSON? Is there a good reason for that? Turned out, no! That’s how Fast EDN was born.

  • ~6× speed improvement over clojure.edn.
  • Outperforms transit-clj.
  • More consistent error reporting.
  • Much cleaner code.

Clojure Sublimed, Clojure support for Sublime Text 4:

  • New Select Topmost Form command
  • New Align Cursors command
  • Fixed evaluation of () (closes 131)
  • Add | to the allowed symbol chars (132)
  • Clarified some symbol/keyword edge cases in syntax
  • Color scheme adjustments

Sublime Executor, executable runner for Sublime Text:

  • Put a 0.2 sec limit on find_executables. Large projects shouldn’t be a problem anymore

Misc:

Both of these libraries were crucial in developing Fast EDN, so I contributed back.

And the most important news of all. tonsky.me, my personal website:

  • Now has a winter theme!

Overall, what a year! Thanks to Clojurists Together and my patrons for making it possible to work almost the entire year on open source full time. Best year of my life (so far!)

Happy New Year everyone!


Tommi Reiman

Report 6. Published January 1, 2025

Reflected on my year of working with open source. Having authored tens of Clojure open source libraries makes you look at things from another angle. Instead of just adding more (fun) features, I dedicated time to going through all the old issues and open PRs from 2024 for the active and stable libraries where there is no clear lead developer, or where that lead is me.

Closed a lot of support tickets, answered questions, and reviewed and merged PRs. Thanks to all contributors, and sorry for not doing this earlier!

Some releases:

Malli 0.17.0

  • Don’t output :definitions nil in swagger. #1134
  • BREAKING: :gen/fmap property requires its schema to create a generator.
    • previous behavior defaulted to a nil-returning generator, even if the schema doesn’t accept nil
    • use :gen/return nil property to restore this behavior
  • Support decoding map keys into keywords for :map schemas in json-transformer #1135
  • :not humanizer #1138
  • FIX: :seqable generates nil when :min is greater than 0 #1121
  • FIX: malli.registry/{mode,type} not respected in Babashka #1124
  • FIX: :float accepts doubles but never generates them #1132
  • FIX: :float missing humanizer #1122
  • Updated dependencies:
fipp/fipp '0.6.26' to '0.6.27'

Ring-http-response 0.9.5

  • Fix pre-expr syntax #34
  • updated dependencies:
[ring/ring-core "1.13.0"] is available but we use "1.12.2"

Going Forward to 2025

Yes, it’s already January 1st! I did not apply for Clojurists Together funding as I didn’t have the time I wished for to work on open source. I’m not going anywhere, but I need to figure out a way to continue the work in a sustainable way. Cheers.

Something Else

Vegan Pizza from Metosin pikkujoulu.

pizza


Peter Taoussanis

Report 6. Published December 31, 2024

A big thanks to Clojurists Together, Nubank, and other sponsors of my open source work! I realise that it’s a tough time for a lot of folks and businesses lately, and that sponsorships aren’t always easy 🙏

- Peter Taoussanis

Hey folks, it’s the end of the year again! 🍾🎉

Will keep today’s update short.

This was another busy year for open source work, with almost 50 public releases (🫣). The biggest can be easily seen on my 2024 roadmap.

The headline focus was Telemere - a modern rewrite of Timbre that offers an improved API to cover traditional logging, structured logging, tracing, basic performance measurement, and more.

That’s been a lot of ongoing work - getting things polished for v1 final. I’m pretty happy with the present state: v1 RC2 was recently released, and I expect that to become v1 final in January 🎉

The agenda for next year (2025) currently includes:

Looking forward to it :-)

As usual, my up-to-date roadmap will be available here once it’s ready.

Thank you everyone, see you next year! 👋🫶


Daniel Slutsky: SciCloj

Report 3 (Q3 2024). Published December 19, 2024.

The Clojurists Together organization has decided to sponsor Scicloj community building for Q3 2024, as a project by Daniel Slutsky. This is the second time the project has been selected this year. Here is Daniel’s last update for this period.

This overview of November’s work is extended by a few recent updates from the first half of December.

Comments and ideas would help. :pray:

November 2024

Scicloj is a Clojure group developing a stack of tools and libraries for data science. Alongside the technical challenges, community building has been an essential part of its efforts since the beginning of 2019. Our current main community-oriented goal is making the existing data science stack easy to use through the maturing of the Noj library, mentioned below. In particular, we are working on example-based documentation, easy setup, and recommended workflows for common tasks.

All these, and the tools to support them, grow organically, driven by real-world use cases.

I serve as a community organizer at Scicloj, and this project was accepted for Clojurists Together funding in 2024 Q1 & Q3. I also receive regular funding through Github Sponsors, mostly from Nubank.

As some parts of November’s work have only matured during the beginning of December, I am writing this report a couple of weeks late. It overviews my involvement during November, the first half of December, and then comments about the proposed directions for the near future.

I had 31 meetings during November and 25 meetings during the first half of December. Most of them were one-on-one meetings for open-source mentoring or similar contexts.

All the projects mentioned below are done in collaboration with others. I will mention at least a few of the people involved.

November 2024 and early December highlights

Design discussions

This has been a thoughtful period in the Clojurians Zulip chat. We faced a few questions regarding how we organize functionality across libraries and went through a careful thinking process to reach a common ground of understanding. A lot of the details shared below are the fruit of those discussions.

Noj

The Noj library is the entry point to data science with Clojure, collecting a stack of relevant libraries. During this period, it was officially announced as Beta stage. We kept extending it and mostly worked on documentation. Carsten Behring kept improving the documentation workflow, partially automating it, and creating extensive documentation for a few major Noj components. I also took part in documentation and in helping other community members contribute a few drafts.

Tableplot

Tableplot, the plotting library based on the layered grammar of graphics idea, was one of my main foci of development. I implemented quite a few additional types of plots and control parameters and put a lot of work into the documentation reference.

Tablemath

Tablemath is one result of our recent design discussions. It is a new library for math and statistics that composes well with tech.ml.dataset and Tablecloth datasets and uses the functionality of Fastmath. Tablemath is highly inspired by R and its packages and is intended to compose well with Tableplot layered plotting. During this period, I released an initial version after exploring a few directions.

Clay

Clay, the REPL-friendly notebook and data visualization tool, received a few updates from other community members. On my side, I explored the different ways it consumes JS and CSS dependencies. We want to make this part more modular for various use cases. At the moment, I am struggling with a few technical difficulties.

Kindly-render

Kindly-render is a tool-agnostic implementation of the Kindly data visualization standard. Timothy Pratley kept developing it, and it looks very promising. My involvement was not intensive, except for a few joint coding sessions and reviews of the code.

Scicloj open-source mentoring

In this period, we continued the collaboration between quite a few mentees and the current four mentors, including myself. To focus on progress with currently ongoing projects, we slowed the program’s growth and could only accept a few new mentees.

Meeting new community members

In the last few weeks, I met a few new community members: Clojurians who wish to get involved in data science, and people from scientific or data backgrounds who wish to get involved in Clojure. There is good momentum in adopting the Scicloj toolkit for various needs. With a few of these new friends, I started meeting more or less regularly to explore their concrete use cases.

Clojure in Academia

Following an initiative of Thomas Clark, we started an active exploration of pathways to make Clojure more present in academia. I reached out to about a dozen people who are active in academia and known to be involved in Clojure and scheduled a couple of meetings to discuss possible directions. There has been a warm response from most of the relevant people, who expressed their willingness to help explore this direction.

Three main directions were proposed: (1) Working on academic papers to discuss technical aspects of Clojure’s scientific stack. (2) Collaborating with researchers on specific use cases of Clojure. (3) Demonstrating the potential of Clojure in academic teaching.

The teaching perspective, which was proposed by Blaine Mooers, will receive the highest priority in the short term.

As a first step, we are considering organizing an online conference to make one or more of the above directions more visible and encourage further interest.

Tutorial meetings

During our work on documenting Noj, we experimented with various workflows of writing tutorials together. Recently, we converged on a format that is working. We meet quite often and write tutorial drafts together. The same draft will typically be handled by different people in different meetings. Each time, we review everything so that the session is self-contained. This way, more people learn about topics they care about, and the content we write gets to be reviewed by more people and more perspectives.

Linear Algebra meetings

The linear algebra meetings keep happening every week. They typically follow the tutorial format mentioned above.

Website

This has been a usual period in terms of website maintenance.

real-world-data group

The real-world-data group is a space for Clojure data practitioners to share their experiences. It keeps going, meeting every two weeks. In the three meetings we had in November and the one we had in December so far, we mainly discussed the topics mentioned above in this report, as well as a few work experiences of group members.

Near term goals

Noj

In the near future, we should bring the Noj documentation to a state that is good enough to be clear and welcoming to new users.

Tablemath

Tablemath will probably be the main experimental project on my agenda. The main goals are to combine the underlying libraries (Fastmath, dtype-next, tech.ml.dataset, Tablecloth) to benefit from the advantages of each of them in terms of ergonomics as well as performance, and to provide a user-friendly API inspired by R.

Clojure in academia

This project is still in its very early stages. We should explore various directions and carefully pick those which might be promising.

Tooling

I will join Timothy Pratley on the goal of helping different tools support the Kindly standard. Cursive, Calva, Quarto, and Clojupyter are a few of the relevant candidates.


New release of Clojure lib for AWS presigned URLs & requests

The Clojure library aws-simple-sign was in pre-release for more than six months. With a modest download count of just above 2,000 and no reported issues, I finally found the time to promote the 2.0.0-alpha1 release to a stable version.

For those unfamiliar with aws-simple-sign, it generates presigned URLs for S3 objects and signs HTTP requests for AWS.

The library is intended for those who for whatever reason want to avoid the com.amazonaws/aws-java-sdk-s3 Java dependency, or simply cannot use it (e.g., in Babashka).

A summary of changes in version 2.0.0 (since the previous stable release 1.6):

  • No dependencies (no forced ones anyway).
  • Improved documentation.
  • Credentials resolution outsourced. You can do it manually, but you will most likely want to use a “client” from either awyeah-api or Cognitect aws-api.
  • “Container credential provider” supported (indirectly). Use one of the external “clients” when used in an environment that rotates credentials like: Amazon ECS or Amazon EKS.
  • Minor API breakage 🫣 for a more sane function signature with these new “clients”. But honestly: It is a trivial change.
  • Changed the license to MIT, and actually included the license file now.

If you are using 2.0.0-alpha1, you can just update without breakage.

Enjoy 🚀


Clojure Deref (Jan 9, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS). Thanks to Anton Fonarev for link aggregation.

Libraries and Tools

New releases and tools this week:

  • vybe 0.7.469 - A Clojure framework for game dev (alpha)

  • signaali 0.1.0 - A small, portable & flexible implementation of signals

  • clojobuf 0.2.0 - clojure(script) library that dynamically interprets protobuf files

  • clojurescript-tiny-slides - Minimal presentation slides for ClojureScript

  • fast-edn 1.1.2 - Drop-in replacement for clojure.edn that is 6 times faster

  • clojure-multiproject-example - A grug-brained stab at layout and tooling to conveniently develop many Clojure projects in a single source repo

  • clj-async-profiler 1.6.0 - Embedded high-precision Clojure profiler

  • TrueGrit 2.3.35 - A data-driven, functionally-oriented, idiomatic Clojure library for circuit breakers, bulkheads, retries, rate limiters, timeouts, etc.

  • overarch 0.35.0 - Overarch provides an ontology and a data driven model of software systems and organizations based on e.g. UML and the C4 model

  • tableplot 1-beta7 - Easy layered graphics with Hanami & Tablecloth

  • fulcro 3.8.1 - A library for development of single-page full-stack web applications in clj/cljs

  • fs 0.5.24 - File system utility library for Clojure

  • languages-visualizations - A Languages visualization experiment


Why choose Clojure to build a product?

When you start a new product, the go-to choices are generally Rails, Django, TypeScript, or the latest JavaScript frameworks. These tools get the spotlight because they are easy to start with and offer little friction from idea to first implementation. The latest Rails, for example, comes bundled with a ton of features that you need for any kind of web product: asset management, built-in login, easy deployment, database, cache support, …

Here I will explain why Clojure is also a good fit for web product development, despite not shipping as a bundled framework.

Easy to reason about

Starting a Clojure product is not as easy as with the solutions listed above. You need to handpick your initial set of libraries, and it can get overwhelming if you are unfamiliar with the basic libraries that cover all aspects of a web product.

Once you pass that, however, you can enjoy the compactness and simplicity of the language. There is no friction from the web to your database of choice.

Clojure is a dynamic language that treats data as plain maps. As scary as that sounds if you are coming from a strongly typed language, it accelerates early-stage development because you don’t spend time defining types or refactoring them as ideas evolve. Naming is a hard problem, as is finding good abstractions.

Instead of strong types everywhere, Clojure offers à la carte typing that you can use where it matters to you. Malli is a great tool for that. In general, you want to type your IO: everything that reaches your HTTP endpoints. Your database, if relational, will do some more type-checking for you. Beyond that, you are free to add type-checking wherever you want. From there you get the advantages of a strongly typed language without its burden.
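As a sketch of this à la carte approach, here is what validating only at the IO boundary might look like with Malli (the schema and handler names are illustrative, assuming Malli is on the classpath):

```clojure
(require '[malli.core :as m]
         '[malli.error :as me])

;; Type only the IO boundary: the incoming HTTP payload.
(def SignupRequest
  [:map
   [:email string?]
   [:age [:int {:min 18}]]])

(defn handle-signup [payload]
  (if (m/validate SignupRequest payload)
    ;; Inside the app, plain untyped maps flow freely.
    {:status 200 :body "ok"}
    {:status 400
     :body (me/humanize (m/explain SignupRequest payload))}))

(handle-signup {:email "jane@example.com" :age 30})
;; => {:status 200 :body "ok"}
```

Everything past the boundary works on ordinary maps, so the rest of the codebase stays type-annotation-free.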

Low maintenance

The Clojure core team’s emphasis on stability and well-thought-out APIs lowers the maintenance cost of your codebase over time.

Evolution of Clojure core libraries over time.

Coming from Scala or the JavaScript ecosystem, can you imagine upgrading the dependencies of a 5-year-old project without a single issue? With Clojure, it just works: breaking API changes are rare, and there is a useful tool to do that without hassle.

This extreme care not to introduce breaking changes influences the Clojure ecosystem, with library maintainers paying extra attention to not breaking existing code (a recent talk on the topic). It reaches the point where new users unfamiliar with the ecosystem think that libraries are unmaintained because they have had no issues and no updates for a while. When you keep each library’s focus small and have hyper-stable core libraries, that is what you get.

Fast prototyping with the REPL

Clojure’s dynamic nature and its REPL (Read-Eval-Print Loop) make it ideal for fast iteration.

In general, that’s how I iterate:

  • Write down my problem and context
  • Try out different solutions using the REPL and hardcoded data structures
  • Once I figure out a solution, I revise my functions and turn the hardcoded data structures into unit tests.

This simple loop gives me immediate feedback and allows me to try various solutions before solidifying my solution with unit tests or type checking.
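
The end state of that loop might look like this (the `cart-total` function and its sample data are hypothetical): the hardcoded value explored at the REPL becomes the fixture of a unit test.

```clojure
(ns example.cart-test
  (:require [clojure.test :refer [deftest is]]))

;; Data originally typed directly into the REPL while exploring.
(def sample-cart [{:price 10 :qty 2} {:price 5 :qty 1}])

;; The solution that emerged from the experiments.
(defn cart-total [cart]
  (reduce + (map #(* (:price %) (:qty %)) cart)))

;; The REPL experiments, solidified as a test.
(deftest cart-total-test
  (is (= 25 (cart-total sample-cart)))
  (is (= 0 (cart-total []))))
```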

Clojure is also a perfect fit for implementing Architecture Decision Records: a simple way to document your software choices for you and your team.

The combination of a functional language and dynamic typing lets you implement all the classic design patterns in a few lines of code [1, 2], making the path from idea to prototype frictionless and free of boilerplate.

Simple configuration and deployment

Aero gives you a simple configuration format. It supports multiple profiles (dev, prod, …), environment variables, and strong type checking.

lein uberjar is all you need to deploy. You get an artifact ready to start with java -jar ….
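
For illustration, a minimal aero config file using its `#profile`, `#env`, and `#long` reader tags (the keys shown are hypothetical):

```edn
;; config.edn -- read with (aero.core/read-config "config.edn" {:profile :dev})
{:port   #profile {:dev  8080
                   :prod #long #env PORT}
 :db-url #env DATABASE_URL}
```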

Frontend support

Nowadays a lot of web projects are single-page applications (SPA) built on top of React.

With ClojureScript and shadow-cljs, you can set one up quickly. Add the well-established re-frame on top and you have everything you need to build a state-of-the-art React application, including state management.
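
In reagent (the layer re-frame builds on), a component is just a function returning hiccup data: plain vectors and maps, no JSX compiler. A dependency-free sketch (the `counter-view` component and the `:on-click` placeholder are hypothetical; a real re-frame app would dispatch an event there):

```clojure
;; A reagent-style component: a pure function returning hiccup data.
(defn counter-view [count]
  [:div
   [:p "Count: " count]
   [:button {:on-click :increment} "+"]])

;; Because the "markup" is plain data, you can inspect and test it
;; like any other value.
(counter-view 3)
;; => [:div [:p "Count: " 3] [:button {:on-click :increment} "+"]]
```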

Bonus point: you can share code between the frontend and the backend with no ceremony using cljc.
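
A sketch of what sharing looks like: a namespace in a `.cljc` file is compiled by both Clojure (JVM) and ClojureScript (browser), so the same rule runs on both sides. The `valid-username?` function is hypothetical:

```clojure
;; src/shared/validation.cljc -- loaded by both the backend and the
;; frontend build, so validation rules can never drift apart.
(ns shared.validation
  (:require [clojure.string :as str]))

(defn valid-username? [s]
  (and (string? s)
       (not (str/blank? s))
       (<= 3 (count s) 20)))
```

Platform-specific bits, when needed, go behind reader conditionals (`#?(:clj ... :cljs ...)`) in the same file.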

Java, Javascript, Python and Bash support

Clojure being hosted on the Java Virtual Machine, you have access to decades of Java libraries through native interop.
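
The interop is direct: no bindings or wrapper libraries, just Java classes called from Clojure syntax. A small sketch using only the JDK:

```clojure
;; Static method call on a JDK class.
(def now (java.time.LocalDate/now))

;; Instance method call (.method object args), using a static field.
(def formatted
  (.format now java.time.format.DateTimeFormatter/ISO_DATE))

;; Any JVM utility is one call away.
(def digest (java.security.MessageDigest/getInstance "SHA-256"))
```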

Want to use Javascript libraries? Shadow-cljs offers NPM support.

Want to use Python libraries, including all the AI ones? That is also possible with https://github.com/clj-python/libpython-clj.

Want to script with bash or Makefile? Babashka.
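
A babashka script is ordinary Clojure behind a shebang, with near-instant startup. A hypothetical example (`count-lines.bb` and `count-nonblank` are made up for illustration):

```clojure
#!/usr/bin/env bb
;; count-lines.bb -- a shell-script replacement in plain Clojure.
(require '[clojure.string :as str])

(defn count-nonblank
  "Number of non-blank lines in a string."
  [text]
  (->> (str/split-lines text)
       (remove str/blank?)
       count))

;; Usage from a shell: ./count-lines.bb < some-file
;; (println (count-nonblank (slurp *in*)))
```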

Hiring

Hiring software engineers is hard. Evaluating their programming skills, how well they will fit with your team, and whether they share your philosophy of making software is still a huge ongoing problem for most tech companies. With a language like Javascript or Java, there are so many ways to use it that looking for a “Javascript/Java developer” is far too vague and needs to be refined with frameworks, how you use them, and what you consider good software.

Clojure comes with a unique value proposition here, due to the opinionated choices made by Rich Hickey and the Lisp heritage: functional, dynamically typed, built-in immutability, an emphasis on simplicity… A few talks to get inspired: Simple Made Easy, Hammock Driven Development, and many more. As a side effect, when you are looking to expand your team, new hires already arrive on board with the Clojure philosophy, saving you hours of cultural-fit screening and onboarding.

With the same language on the backend and the frontend, you can also find people able to work on both sides, lowering friction and making it possible to move fast with a small fullstack team.

Finding Clojure developers was not always simple, depending on your location. But the democratization of remote work has removed this barrier: it is no longer a problem to find good candidates if you expand your search across your timezone.

(Photo by Clark Tibbs on Unsplash)

Conclusion

Clojure offers a unique balance of simplicity, flexibility, and stability, making it an excellent choice for building web products. While the initial setup may take more effort, the long-term benefits of maintainability and fast iteration are worth it.

By choosing Clojure, you are in great company. See for example NuBank with its valuation of 45 billion dollars and its awesome product.

Some big open-source codebases to get inspiration from: penpot (a Figma competitor) and metabase (a well-established, fast analytics tool with a friendly UI, used by 60k companies).

The Freshcode podcast to hear stories of Clojure companies: https://www.freshcodeit.com/podcast.
