Adopting FreeDesktop.org XDG standard for development tools

Managing your personal configuration for development tools and applications is much more effective when adopting the XDG basedir standard, which defines separate locations to store user configurations, data files and caches. Without the XDG standard, these files and directories are often mixed together and stored in the user's $HOME directory, making it more challenging to back up or version control. Development tools such as NeoVim, Emacs, Clojure CLI and Clojure LSP support the XDG specification, although some tools like Leiningen require a little help. There are simple approaches to work around the limitations of tools that don't conform.
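
As a quick illustration (a sketch, not from the article), a tool written in Clojure could resolve its configuration directory per the XDG basedir spec, honouring $XDG_CONFIG_HOME and falling back to ~/.config when it is unset:

;; minimal sketch: resolve a tool's config directory per the XDG basedir spec
(defn xdg-config-dir [tool-name]
  (let [base (or (System/getenv "XDG_CONFIG_HOME")
                 (str (System/getProperty "user.home") "/.config"))]
    (str base "/" tool-name)))

(xdg-config-dir "clojure")
;; => "/home/you/.config/clojure" (with the default fallback)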

Permalink

How I switched to Flutter and lost 10 kilos

Dear Reader, dear Friend

Today's blog post is dedicated to Flutter and how it finally enables me to build what I've envisioned for years: the app, Lotti. A few years back, I started this journaling/self-improvement/quantified self project in Clojure and ClojureScript. While it is theoretically possible to build for both mobile and desktop with the language (which I love), it was just too complicated for me, at least with a limited number of hours in the week. At some point along the journey, I saw Flutter, but I didn't warm up to it initially. Then, last summer, when trying to rewrite the meins desktop app and the then-defunct meins mobile app in TypeScript, I saw that Flutter was now aiming to build apps for any screen, mobile and desktop alike, and that warranted a second look.

I found myself in the luxurious position of being able to afford to take a sabbatical for a few months. I did a brief evaluation to check if what I wanted to build could be built in Flutter and concluded relatively quickly that this was indeed the case, and proceeded to jump in the deep end, learn Flutter and Dart, and then started to rewrite my app(s), this time mobile-first. All of the above worked out faster than I was initially hoping for. In fact, I was dreading it, misguidedly believing that it would take much longer than it actually did. Now I have entirely switched over to the new app, with the exception of importing the old data, but that won’t run away [1].

Today, I'm happy to report on some very tangible results from this rewrite. For a long time, I had been convinced that if I could only divert my attention sufficiently towards my eating habits, how much I move and the amount of exercise I do, etc., then I would be able to monitor my activities more closely and get back into shape [2]. All I needed, in my mind back then, was to define an intervention, mostly as a loose set of rules or aims around food and activity, but without being too strict (e.g. I'm still eating Ben & Jerry's, just less often, the same goes for animal products of all kinds more broadly speaking), and then have a dashboard with a few data types, and a way to get myself to look at that dashboard. The problem was that, with my old tech stack, it was overwhelming and seemed verging on impossible to achieve. A few weeks before Christmas 2021, I was able to start collecting health and exercise-related data in the Flutter-based rewrite. From then on, I was forcing myself to look at and pay attention to my behavior and how that changed the outcome.

Long story short, by simply paying attention, moving more, and eating more healthily and less (but without tracking food, calories, etc. at all), I was able to lose about 10 kilos (around 22 pounds) and, more importantly, maintain my weight. Let me show you how I managed this:

Fig 1: frequent weight measurement, ideally daily

Fig 2: Screenshot of Lotti, weight vs steps vs push-ups (one of six daily bodyweight exercises, takes less than ten minutes per day)

Now I'm in no way saying that following the steps outlined above will give you similar results, or any results at all for that matter. What works for you is for you to find out. Once you have an idea for an experiment or intervention, Lotti can help you build a dashboard from a bunch of different data sources:

  • automatically collected data from Apple Health (or the Android equivalent) [3], such as steps or hours of sleep with just a smartphone;
  • exercise durations from Apple Health, either with a smartwatch or by using a running app, for example;
  • weight and blood pressure from Apple Health;
  • manually user-defined quantitative data, such as exercise repetitions, water or beer consumed, and anything relevant for the individual intervention.

Rather, its contribution is helping you monitor any ongoing intervention you can come up with, and then helping you tweak and refine your approach. If you're interested in losing weight, the methods will probably not be wildly different from mine in principle. After all, we all have to find a way of balancing energy in and energy out in a healthy way that makes us feel good in our own bodies. However, how exactly and at what pace we get there is individual and should be adaptive based on what works.

On the one hand, I'm mourning the phase where I can work on a piece of software like this one all for myself and nobody else. It's a very luxurious position to be in, after all. But on the other hand, I feel that the time has come for more than one inquisitive mind (and associated body) to benefit from these findings.

What's next:

  • Lotti is open-source software, and I'm looking for people to form a community around the app. It doesn't matter if you're a designer, an engineer, a behavioral psychologist, or simply someone who believes that behavior can change and wants to devise and try an experiment. In all cases, this community is for you.
  • If you want to try this application and beta test it, please email me. Your onboarding experience will then be fed into in-app tutorials etc.
  • As soon as possible thereafter, the app needs to become available for everyone on the respective app stores. There are plenty more features in the pipeline; however, the capturing and monitoring of data in the dashboards, while not yet perfected, is already good to go.

That's all I have for today. Have a good week!

Matthias

[1]: Since switching over earlier this year, I've collected around 14K data points in Lotti, the new app. In meins, the former app, I still have a collection of 162K data points that I intend to migrate. But the old desktop app is still working fine, so there is no rush.

[2]: This was already a concern before Covid-19, and the pandemic only exacerbated the problems.

[3]: Lotti is cross-platform and runs on iOS, Android, macOS, Linux, and Windows. I'm using an iPhone and a MacBook personally, and I'm looking for help ensuring everything works on the other platforms. The builds generally work on all of them. Are you looking to contribute to an open-source project? This might be it. Please have a look at the issues and help out if you can.

Permalink

Back to Lisp Part 1 - Working Inside the Language

I discovered Structure and Interpretation of Computer Programs in my late teens and quickly moved on to Common Lisp, working my way up from a z80 macro assembler to various web frameworks, and fun projects like a CAM system. After 12 years where I didn't work with Lisp at all, I recently decided to go back, and I am delighted by what I found. This is a series of articles that articulate my thoughts about coming back to an old love, and document the very practical things I found along the way.

When I mention Lisp in this article, it will refer to either Scheme or Common Lisp, the two languages I have actually used. You can probably replace them with Emacs Lisp or Clojure or any other SEXP-based language and follow along just as well. I hate it when people write LISP in uppercase as if we were still using something from the '60s, so I'm going with Lisp to convey a modern touch.

In this first article, I will talk about what I missed most: working inside a language.

REPL programming is the foundation

While many languages offer a REPL (read-eval-print loop, i.e. a prompt you can use to execute statements), few adopt it as the central way to interact with your software system.

Notebook-style interfaces are pretty common these days (Python notebooks, RStudio and Wolfram Mathematica come to mind), but they are more about "sessions" and "documents", about exploring and dissecting data, than about large-scale programming. Other systems often come with a debug console you can use to inspect and modify the system at runtime (think browser JavaScript console). And finally, scratch files that can quickly be executed are common in IDEs.

In Lisp projects, you write functions and modules and packages in files, as is usual in programming projects, but you always have the compiler running along, compiling what you write and giving you feedback on what you typed. In traditional IDEs, the IDE's understanding of the program is divorced from the execution environment (either by being implemented in the IDE itself, or by running in a separate language server process). With Lisp, the compiler functions as an LSP to help you interact with your code (go to definition, inspect, etc.), as a quick prompt to run experiments, as a debugger to trace/debug/instrument and interact with the actual running system, and as a shell to manage your packages, deployments, builds and runtime systems.
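
To make this concrete, here is a tiny illustrative example (in Clojure, one of the SEXP languages this transfers to) of what working inside the language feels like: you redefine a function in the running system from the REPL, and the change is live immediately.

;; evaluate in the running system, no restart needed
(defn greet [name] (str "Hello, " name))
(greet "world")   ;; => "Hello, world"

;; later: redefine it and the live system picks it up right away
(defn greet [name] (str "Hi there, " name "!"))
(greet "world")   ;; => "Hi there, world!"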

With Lisp, everything feels intimately (and robustly) connected.

Programming as language creation is explicit

Programming is about getting computers to do things for us. But computers only really care about instructions that their CPU can execute. Since our brains can't comprehend streams of assembly language, we created programming languages, coherent, readable ways of assembling words and concepts so that we can collaborate amongst humans on the one side, and have computers execute our ideas on the other.

Programming languages (as in python, javascript, etc...), libraries, frameworks, naming conventions, design patterns all contribute to create a "programming dialect" shared by people working on a project. This language allows us to express solutions to the problems we are trying to solve in a way that is both executable by a computer, and understandable by our colleagues. More targeted project languages are often called "Domain Specific Languages".

These dialects are shaped by:

  • frameworks and libraries used (i.e., we use react and redux)
  • design patterns used (we use higher order components and context providers for a global store)
  • code and naming conventions (we call our handlers onX, and our store actions are of the form verbObjectObject. we use immer for imperative-like store reducers)

The programming language used can be more or less flexible in terms of syntax: domain-specific languages often have to carry syntactical baggage around to express certain concepts. For example, you might implement state machines by using classes for transitions, enums for states, and certain naming conventions for event dispatching. I think JavaScript is so successful because it is a language that makes it easy to be creative and elegant with syntax.

Finding a satisfying API, syntax and naming conventions for concepts can be tremendously difficult. Impedance mismatch with the underlying programming language can also mean that bugs are easier to make than they really should be. When transpiling or using advanced metaprogramming techniques, the runtime errors are often hard to map back to the original code. Dialects still feel like dialects: modified, lived-in, bastardized versions of the underlying programming language.

Lisp languages don't really have much in the way of syntax, as you usually write the program in terms of nested linked lists representing the abstract syntax tree. This gives you a much simpler tool to not only create a programming dialect, but actually modify the underlying grammar to allow for a much more concise expression of useful concepts.

This is a double-edged sword, as it is easy to create incoherent project languages with inscrutable grammatical extensions. A project usually needs at most one or two grammatical extensions to support its project language, and these are usually trivial (for example, an easy way to define state machine enums). In traditional languages, a clever closure pattern or some code generation will get you there just as well.

The beauty of Lisp, however, shows during the ideation phase. It is very easy to try out different syntax ideas, move seamlessly between the meta and the practical level, run experiments in the REPL, and massage syntax. This makes it possible to quickly home in on what the fundamental concepts for the project are and express them succinctly.
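
To make that loop concrete, here is a tiny sketch in Clojure (names like defstates are invented for illustration): write the syntax you wish you had, back it with a small macro, and inspect the expansion in the REPL.

;; 1. write the program as you wish you could write it:
;;    (defstates door
;;      :closed -> :open
;;      :open   -> :closed)

;; 2. back it with a small macro:
(defmacro defstates [state-name & transitions]
  (let [pairs (partition 3 transitions)]
    `(def ~state-name
       ~(into {} (map (fn [[from _ to]] [from to]) pairs)))))

;; 3. massage the syntax by inspecting expansions in the REPL:
(macroexpand-1 '(defstates door :closed -> :open, :open -> :closed))
;; => (def door {:closed :open, :open :closed})

(defstates door :closed -> :open, :open -> :closed)
(door :closed) ;; => :open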

Javascript is an interesting language because it is very malleable. It allows us to use many clever tricks to make dialects that look almost like languages. Over the years, many things and patterns have been tried in Javascript: functional chains with lodash and underscore, functional reactive programming with redux and react hooks, effect programming with react hooks, and many more I have never even heard about. Of course, Javascript is also a widely used transpilation target, bringing interesting innovations to the core language while still maintaining close similarity. Source maps, for example, are an essential tool in making Javascript dialects useful in practice.

I missed experimenting with concepts at the language level

My past experience with Lisp has deeply engrained this way of "thinking in languages". Even when programming PHP, Javascript or C++, I will encounter patterns and ideas that I have explored in Lisp. While I can't transform the language I am working with, I can build a dialect that has sound grammatical foundations, because I built it as a "real" language in Lisp. I work with what I have (syntax tricks, naming conventions, API design) to make it as elegant and computationally airtight as possible.

Over time, I forgot how easy it was to use Lisp to experiment with different approaches. Designing a concurrent task language in C++ takes many lines of code and a lot of careful thought. While you can sketch things out pretty quickly using macros and code generation, or by being well acquainted with C++ templates, you still wrestle with a lot of syntax and operational complexity.

In a Lisp language, you can experiment by writing a program as you wish you could write it, then implementing it in 3 macros and then running it, printing out ASTs in the REPL for debugging. Building a grammar for concurrent data streams is an afternoon project.

Even though my main languages currently (PHP, Javascript and Golang) are not Lisps at all, I reconnected with this idea of creating languages, and will keep on experimenting with concepts that I can then port over once refined.

Permalink

Notes on Virtual Threads and Clojure

Have you heard the news? The Virtual Threads implementation landed in JDK 19 as a preview feature! Are you excited? No? You should be! It's an amazing addition to the Java platform.

Note: this article discusses a preview version of software. Take it as inspiration, not something that is set in stone!

Intro to Project Loom and Virtual Threads

Virtual Threads are the most significant feature of the so-called Project Loom.

Project Loom was launched in 2017 by Ron Pressler and his team at Oracle. The main goal of the project was to extend the capabilities of Java Virtual Machine to address the complexity of writing highly concurrent and scalable software.

There is more to Project Loom than just Virtual Threads. The project wiki specifically mentions delimited continuations and tail-call elimination. But it's fair to say that Virtual Threads are the most significant addition to the Java platform from the perspective of users and their productivity.

I don't want to dive deeper into the delimited continuations and tail-call elimination features, to stay focused on the most practical matters, but it's fair to point out that delimited continuations seem to be quite important for the introduction of Virtual Threads to the Java platform.

So what are they, and why are they so groundbreaking that it was worth writing this post about them?

Traditionally, JVM threads were built around OS threads. This fact also determines their major properties:

  1. A single JVM thread was mapped to a single OS thread
  2. Blocking (waiting) on a thread caused the thread to be effectively wasted for other tasks
  3. Managing threads on the JVM was costly. Each thread easily uses an additional megabyte or more of memory, so spawning many of them is not wise.

These limitations are mitigated by introducing Virtual Threads. They no longer map one-to-one to OS threads. A single OS thread can host many thousands of Virtual Threads (or more) without worries about blocking or excessive memory demands. This required changes to the implementation of the JVM and the standard library to allow effective scheduling of Virtual Threads.

Virtual Threads also improve on the situation where the limitations of OS threads were addressed by using more or less sophisticated thread pools. Experienced developers know that thread pools (of OS threads) also have significant downsides if not constrained properly.

A Virtual Thread is represented by the class java.lang.VirtualThread, which extends java.lang.Thread. This follows the Liskov substitution principle and allows us to easily introduce them into our existing codebases.

Clojure and Threads

It's clearly stated that Clojure is designed to work well together with the Java thread system. Clojure function instances even implement java.util.concurrent.Callable etc., so they naturally work with the Executor framework.

The most primitive way to do something is to launch it in a new thread like this:

(.start (Thread. #(println "Hello world!")))

Unsurprisingly there is also an API call for launching a Virtual Thread with a preview JDK (or Loom).

(Thread/startVirtualThread #(println "Hello world!"))

Nice! However, this is barely useful. We want concurrent processes to compose and coordinate. Clojure concurrency offers two essential mechanisms:

  • Agents
  • Futures

Let's revisit those in detail and see how we can spice it up with Loom's Virtual Threads.

Agents

Agents manage independent state. Their state can be changed only by submitting actions. Actions are ordinary functions that take the state as a parameter and return a new state. Actions are dispatched using send, send-off, or send-via, and these return immediately without waiting for completion. The action occurs asynchronously on thread-pool threads. Only one action per agent happens at a time.

Agents are nice because they come with the following properties:

  • their state is always available for a reader without blocking after dereferencing with (deref an-agent) or @an-agent shortcut
  • they can be coordinated using (await an-agent)
  • any dispatches made during the action are held until after the state of the agent has changed
  • agents coordinate with transactions - any dispatches made during a transaction are held until it commits

;; construct new agent
(def a-counter (agent 0))

;; send it a function
(send a-counter inc)

;; wait for the delivery
(await a-counter)

;; reveal the state
@a-counter

Spicing up Agents

The agent dispatch functions send and send-off use default executor implementations for submitted tasks.

These executors live by default inside clojure.lang.Agent.

  • Dispatching function send uses clojure.lang.Agent/pooledExecutor
  • Function send-off uses clojure.lang.Agent/soloExecutor

Both executors work by default with heavyweight OS threads. Even though they are good defaults, we can sneak in some goodies. Loom comes with a new executor service which you can easily create using a static method on the Executors class. This new executor is represented by the ThreadPerTaskExecutor class. We can replace the default pooledExecutor with this new one.

(ns example
  (:import (java.util.concurrent Executors)))

;; Let's first define a factory that helps with spawning new Virtual Threads
(defn thread-factory [name]
  (-> (Thread/ofVirtual)
      (.name name 0)
      (.factory)))

;; Let's swap the default executor with the new one
(set-agent-send-executor!
  (Executors/newThreadPerTaskExecutor
    (thread-factory "clojure-agent-send-pool-")))

;; This code is going to be executed using Virtual Threads under the hood
(def a-counter (agent 0))
(send a-counter inc)
(await a-counter)
@a-counter

The same applies to the executor for send-off dispatching function.

(set-agent-send-off-executor!
  (Executors/newThreadPerTaskExecutor
    (thread-factory "clojure-agent-send-off-pool-")))

If you want to retain more control, just use send-via, where the executor can be specified as a parameter:

;; Define an executor which just produces a new virtual thread for every task
(def unbounded-executor (Executors/newThreadPerTaskExecutor (thread-factory "unbounded-pool-")))

(send-via unbounded-executor a-counter dec)
(await a-counter)
@a-counter

This is all you need to transparently work with Agents under the new concurrency model. Clojure seems to be well prepared for the future! Futures...

Futures

A future represents a value that is going to be available at an indeterminate time in the future. It can be captured and passed around as you want. In Java, futures are represented by objects implementing the Future<V> interface from the java.util.concurrent package. A brief evolution of the implementations of this interface in Java's standard library:

  • Java 1.5 introduced FutureTask<V>
  • Java 1.7 introduced ForkJoinTask<V>
  • As of Java 1.8 there is CompletableFuture<V>

Clojure contains a bunch of functions in its core library to work with futures. This is the most basic example that can demonstrate how to utilize futures in Clojure programs:

@(future (println "Before")
         (java.lang.Thread/sleep 2000)
         (println "After 2000 ms")
         2000)

As we can see, Clojure futures are nice: just dereference them similarly to agents or atoms with (deref a-future) or the @a-future shortcut. Dereferencing causes execution to block until the future value is resolved and thus available. Unfortunately, that means that a whole OS thread is blocked.

So what can we do to make it cheaper? Of course, Loom has our back covered with much cheaper Virtual Threads. The future function uses the future-call function under the hood, which references clojure.lang.Agent/soloExecutor. This means that replacing this executor, as we did for send-off above, is all we need to do.
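
A quick sketch to illustrate (assuming the set-agent-send-off-executor! swap above has already been evaluated): the thread a future runs on should now be a Virtual Thread carrying the name prefix from our factory.

;; assuming set-agent-send-off-executor! was called with the virtual-thread
;; factory above, futures now run on Virtual Threads as well
@(future (.getName (Thread/currentThread)))
;; => something like "clojure-agent-send-off-pool-0"

@(future (.isVirtual (Thread/currentThread)))
;; => true (Thread.isVirtual is part of the JDK 19 preview API)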

There is also the Promesa library, which contains constructs for dealing with futures that go way beyond the simplistic use of futures in the Clojure core library. Some functions from the Promesa library have arities that take an executor as a parameter and use that executor to schedule the computation. Passing the ThreadPerTaskExecutor executor mitigates the trouble mentioned under Promesa's execution model.

Introducing Structured Concurrency

Structured concurrency is a concurrency programming model described in the following line:

When a flow of execution splits into multiple concurrent flows, they rejoin in the same code block

That means we have to be able to bind thread lifetimes to a scope. Such scopes should naturally form parent-child relationships, and there have to be programming constructs around the hierarchy.

Let's examine this simplistic example:

(defn run-concurrently []
  (let [executor (Executors/newThreadPerTaskExecutor (thread-factory "perfectly-scoped-pool-"))]
    (try 
      (.submit executor ^Callable #(identity 2000))
      (.submit executor ^Callable #(prn "Starting a long running operation"))
      (.submit executor ^Callable #(Thread/sleep 1000))
      (.submit executor ^Callable #(prn "Done."))
      4
      (finally (.close executor)))))

(run-concurrently)

Here the scope is a function with a defined executor against which tasks are submitted. None of the Virtual Threads outlives the scope of the function, because the ThreadPerTaskExecutor.close method joins the threads and cleans up after them. The caller does not need to know anything about the level of concurrency of such a method. This also composes recursively (parent-child relationships), as other functions following the same structure can be called inside the body. It's deterministic and transparent.

Avoids

These are less relevant to Clojure developers, as most of us do not work at such a low level of operation, but I'd like to mention them anyway.

  1. Avoid ThreadLocal and InheritableThreadLocal. They are supported, but they defeat the cost advantages that come with Virtual Threads
  2. Avoid synchronized methods. Use java.util.concurrent.locks.ReentrantLock instead
  3. Avoid thread pools to control access to expensive resources. Use java.util.concurrent.Semaphore instead (see the sketch after this list)
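
A minimal sketch of point 3 (the names and the permit count are made up for illustration): bound access to an expensive resource with a Semaphore, while each task still gets its own cheap Virtual Thread.

(ns example.limits
  (:import (java.util.concurrent Semaphore)))

;; hypothetical: allow at most 10 concurrent calls to an expensive resource
(def db-permits (Semaphore. 10))

(defn with-db-permit [f]
  (.acquire db-permits)
  (try
    (f)
    (finally (.release db-permits))))

;; usage sketch: the Semaphore bounds only the expensive part,
;; so Virtual Threads can stay cheap and plentiful
(with-db-permit #(println "pretend this talks to the database"))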

Clojure itself contains very few instances of ThreadLocal:

  • Agent.java
  • LockingTransaction.java
  • Var.java
  • Instant.clj

Are they a problem? Probably not. My personal recommendation is to use a structured concurrency approach similar to run-concurrently above, so that Virtual Threads do not live long and unused resources are garbage collected as soon as possible.

At some point the JDK may also receive scoped values, which could substitute for expensive ThreadLocals, but that's a song of the distant future.

Conclusion

  • Virtual Threads are an important and extremely useful addition to the Java platform
  • Clojure concurrency mechanisms can be set up to use Virtual Threads effectively today! No modifications to the Clojure codebase appear to be necessary
  • Structured concurrency will become a more important mechanism for dealing with concurrent processes once Virtual Threads are released
  • Not everything is set in stone. Some mechanisms may be revisited or adjusted

I hope this article triggered intellectual curiosity and provided some interesting information.


Permalink

1.11.51 Release

We’re happy to announce a new release of ClojureScript. If you’re an existing user of ClojureScript please read over the following release notes carefully.

Clojure 1.11 parity

This release includes support for :as-alias. It adds update-vals and update-keys. It introduces the cljs.math namespace, providing portability for code using clojure.math. iteration, NaN?, parse-long, parse-double, parse-boolean, and parse-uuid have also been added.
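
For illustration (not part of the release notes), the newly ported functions behave just like their Clojure 1.11 counterparts:

(update-vals {:a 1 :b 2} inc)          ;; => {:a 2, :b 3}
(update-keys {"a" 1 "b" 2} keyword)    ;; => {:a 1, :b 2}
(parse-long "42")                      ;; => 42
(parse-double "1.5")                   ;; => 1.5
(parse-boolean "true")                 ;; => true
(NaN? ##NaN)                           ;; => true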

This release also ports CLJ-2608, which adds support for a trailing conj-able element in map destructuring of seqs.

Vendorization of tools.reader, data.json, and transit-clj

ClojureScript is one of the largest libraries in the Clojure ecosystem. Having to compile some 20,000+ lines of Clojure code every time is a significant hit to REPL start times and other typical tasks. Thus ClojureScript is ahead-of-time (AOT) compiled.

However, due to some subtle aspects of AOT, this can lead to unresolvable dependency conflicts. Users have hit this issue with nearly all of the declared dependencies: transit-clj, data.json, and tools.reader.

After conferring with the Clojure team, we decided to vendorize all these dependencies. This way we can AOT everything and be confident that we won't create a conflict that can't easily be fixed via normal dependency management. ClojureScript no longer depends on transit-clj, only transit-java. The dependency on data.json has been removed. ClojureScript's dependency on tools.reader is only needed for a less common use case: bootstrapping the compiler to JavaScript.

Some care was taken to ensure backwards compatibility, and we are particularly interested in any issues that people may encounter.

Other Changes

The minimum Clojure version for ClojureScript is now 1.10. Google Closure Compiler has been updated to the May release.

For a complete list of updates in ClojureScript 1.11.51 see here

Contributors

Thanks to all of the community members who contributed to ClojureScript 1.11.51:

  • Tom Connors

  • Roland Thiolliere

  • David Frese

  • Paula Gearon

  • Matthew Huebert

  • Hyun-woo Nam

  • Timothy Pratley

  • Henry Widd

Permalink

Clojure Deref (May 13, 2022)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem. (@ClojureDeref RSS)

Highlights

The Stack Overflow 2022 Developer Survey is open now. If you have a few minutes, completing it is really important for maintaining Clojure's visibility. In the past, Clojure has been noted in this survey for its comparatively high pay and experience levels compared to other languages. If you use one of the Clojure Datalog databases, make sure to write that in in the Databases section too!

Libraries and Tools

New releases and tools this week:

  • Calva 2.0.273 - Clojure & ClojureScript Interactive Programming for VS Code

  • joyride 0.0.9 - Joyride VS Code with Clojure

  • neanderthal 0.44.0 - Fast Clojure Matrix Library

  • clojurecuda 0.15.0 - Clojure library for CUDA development

  • fulcro 3.5.20 - A library for development of single-page full-stack web applications in clj/cljs

  • fulcro-rad 1.1.11 - Fulcro Rapid Application Development

  • fulcro-rad-datomic 1.2.0-RC1 - Datomic database support plugin for Fulcro RAD

  • user-manager.cognito 0.1.0 - A library for interacting with AWS Cognito User Pools API which optionally provides Integrant initialization keys for Duct Framework

  • tools.namespace 1.3.0 - Tools for managing namespaces in Clojure

  • test-runner 0.5.1 - A small library for discovering and running tests in projects

  • tools.build 0.8.2 - Clojure builds as Clojure programs

  • hermes 0.12.640 - Hermes - terminology tools to support SNOMED CT, cross-maps, inference, fast full-text search, autocompletion, compositional grammar and the expression constraint language

  • compile-time - Run Clojure function or forms in compile time

  • spacemacs.d - My configuration for spacemacs

  • semantic-ui-wrapper 2.0.2 - Fulcro 3 wrappers of React Semantic UI Controls

  • datalevin 0.6.9 - A simple, fast and versatile Datalog database

Permalink

Ideas for Clojure Network Eval API

Jack Rusher is asking for nREPL feedback from Clojure tooling authors:

After implementing an nREPL server for a new Clojure runtime, I’m begging the other tools maintainers to work with me to define an actual standard that works.🥹

Since I am also a tool maintainer AND my tool works with nREPL, I thought I'd share my ideas here.

What is REPL?

Rich Hickey famously said that nREPL is not a REPL:

REPL stands for something - Read, Eval, Print, Loop. It does not stand for - Eval RPC Server/Window. It is not something I “feel” about. But I do think we should be precise about what we call REPLs.
[…]
‘read’ is a protocol. It works on character streams, one of the most widely available transports. It uses delimiters (spaces, [], (), {} etc) to separate forms/messages. Readers implement this protocol and return data upon encountering complete forms.

I am happy to accept these definitions and have no desire to force REPL to mean what it’s not. But then, in my opinion, Clojure does not need a REPL, it needs an Eval RPC protocol. To turn Rich’s methods against him, current REPL complects command-line interactions with code evaluation.

It’s trivial to build human-friendly CLI on top of machine-friendly RPC, but much harder to build machine-friendly RPC on top of a human-oriented command line.

Sure, ls -lah output looks nice in my terminal, but imagine parsing this for machine consumption?

So I insist that Clojure needs a “Remote Eval API” — message-oriented machine-friendly network API that’s easy to build on.

One feature that is crucial for me, which nREPL gets right and all of the Clojure REPLs get wrong, is the handling of incomplete forms. If I send (+ 1 2, I don't want to see my REPL stuck in an intermediate state. I want to see an error.

JSON serialization

nREPL uses bencode by default, which was a brilliant decision. Bencode is super-simple and trivial to implement, meaning nREPL clients could exist in any environment: Python for Sublime, Java for IDEA, Elisp for Emacs, JS for VS Code, VimScript for Vim.

These days I would go with JSON, though. I know nREPL has a JSON backend, but I mean by default.

Using EDN would be a terrible mistake, though. EDN is much less popular, and REPL clients exist in all sorts of environments, rarely in Clojure.

Automatic session management

nREPL has a concept of sessions. I think this exists mainly for capturing dynamic bindings like *warn-on-reflection* or *print-namespace-maps*.

What troubles me is that sessions require manual management: you clone them manually and have to close them manually. If you forget to close them, they will exist indefinitely, leaking resources.

I would replace this with automatic management, a session per connection. Connection dropped? Kill the session.

Session-scoped middlewares

Another thing sessions could be useful for is middlewares. Currently, they are installed globally, but that seems like an oversight? If I need middlewares X, Y, and Z for my editor to work, why should anyone else see them too?

In a true network API, what I do should not affect what a user in a parallel session is doing.

This all applies, of course, only if we allow middlewares to exist.

No middlewares

If I understand Jack right, his main concern is that middlewares are not portable. If you wrote an nREPL client that installs middlewares for Clojure, it won’t work with ClojureScript.

I faced the same problem, and that’s the main reason why Clojure Sublimed doesn’t talk to CLJS yet.

Sure, middlewares are a nice escape hatch. But maybe the goal should be to get the basic protocol good enough so that nobody has to write middlewares?

No “upgrade”

In my head, Network EVAL API should work like any other API: you connect to the server you want, send predefined commands and get results back. Want to eval Clojure? Connect to Clojure server. ClojureScript? Connect to CLJS.

Instead, the route Clojure REPLs choose is “REPL upgrade”. You connect to Clojure server always, then eval some magic forms and commands start to behave differently, e.g. being compiled to CLJS and sent to the browser.

I think even nREPL does that, because Piggieback is a middleware installed globally that shares a server with Clojure instead of starting its own.

This makes the API non-uniform, complects the CLJS (or any other) REPL with the Clojure REPL, and just feels weird to use. Evaluating Clojure code should not assume a JVM Clojure environment!

Parallel eval

For some reason, nREPL allows a single pending eval per session. This seems like an arbitrary limitation. I think evaluating one form should not prevent you from evaluating another, given that threads are easy to create on the JVM.

CLJS might be in a different situation, but that shouldn’t mean Clojure users should suffer.

Single connection

Not sure if this is worth mentioning since nREPL got it right, but some REPLs suggest you open a second connection to control the session in the first one.

This is a terrible idea. Multiple connections (even two) are an order of magnitude harder to manage than a single one.

Don’t block the line, allow parallel eval and you won’t need a second connection.

Stacktraces

I don’t think nREPL sends stack trace if an exception happened? That is a crucial information, seems strange to omit it.

P.S. Stacktraces as data would be nice.

P.P.S. Stacktraces that don’t show nREPL internal code would be double nice.

P.P.P.S. Unmunged Clojure names would be triple nice.

P.P.P.P.S. Correct error position would be quadruple nice, but this is on Clojure more than on REPL.

Execution time

Stupidly simple, but quite an obvious feature: how long did that form take to eval?

Can’t measure that on the client due to network overhead.

Extract info from Clojure file

Not 100% sure this belongs in the Network API server, but then on the other hand repeating this in every client feels excessive too?

Imagine you go to a file and eval the last form here:

(ns my-ns
  (:require [clojure.string :as str]))

(defn reverse [s]
  (str/reverse s))

(reverse "abc")

Which namespace should it be evaled in? my-ns, the namespace of the file, right?

But the network server doesn’t know that! All it sees is something like:

{"op":   "eval",
 "code": "(reverse \"abc\")"}

This won’t work because reverse is not defined. You can specify namespace though:

{"op":   "eval",
 "code": "(reverse \"abc\")",
 "ns":   "my-ns"}

This improves the experience a bit, but how do you get this my-ns string? Well, to do that, you have to parse the Clojure file, find the closest ns form and take the first symbol. Which, in turn, could be preceded by an arbitrarily complex metadata form, like this:

(ns my-ns
  (:require [clojure.string :as str]))

(defn reverse [s]
  (str/reverse s))

(ns ^{:doc "Hello"} another-ns
  (:require [my-ns :refer [reverse]]))

(reverse "abc")

Not exactly the simplest task, is it? Well, it’s only hard if you are in Python, or JS, or VimScript.

But if you are in Clojure, it’s trivial! Clojure already ships with Clojure parser, so figuring out namespace is a matter of a few calls into stdlib.
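
As a rough sketch of that idea (simplified: it just takes the last ns form in the file and ignores where the evaluated form actually sits, and a real implementation would likely use tools.reader rather than plain read):

;; read top-level forms from the file contents and pick out the ns symbol
(defn file-ns [file-content]
  (with-open [rdr (java.io.PushbackReader. (java.io.StringReader. file-content))]
    (->> (repeatedly #(read {:eof ::eof} rdr))
         (take-while #(not= ::eof %))
         (filter #(and (seq? %) (= 'ns (first %))))
         (map second)
         last)))

(file-ns (slurp "src/my_ns.clj")) ;; => my-ns (hypothetical file)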

Another common problem arises when you want to eval "the outermost form". Like, you stand somewhere inside a function and ask the network REPL to eval that function. Finding the start and end of the form to send is, again, a hard task unless you have access to Clojure.

Finally, requesting info on symbol (lookup) requires you to identify where that symbol starts/ends. And yes, would be way easier in Clojure.

I’m not sure what the solution here is. Send a file to the network server and ask for parse information back when needed? Send the entire file on each eval?

I guess the reason nREPL has no solution here is because all possible solutions here are cumbersome? Bad? But it doesn’t mean the problem doesn’t exist.

Auto-require

On the other hand, if I try to eval (+ 1 2) in my-ns file before loading my-ns first, I’ll get an error: namespace is not loaded.

Clients could work around that, sure, but it still feels like an unnecessary dance. If ns could be ignored (e.g. the evaluated code does not depend on it at all), eval it as is; otherwise, load the namespace first? Or always load the namespace when the client provides it? Or have an option?

Capturing *out*

It has become the norm to display part of the stdout output in a REPL panel.

But if you don't have a REPL panel, where does that output go?

In that model, it goes to the same place where any other output goes: to the console that started the server.

In other words, I would like an option to disable output capturing and just let Clojure do what it would do by default.

What to keep

Good parts of nREPL that I definitely would want to keep:

  • interrupt
  • print/buffer-size
  • lookup

I don’t have an opinion on code analysis/suggestions/etc since I don’t use those (well, I do use suggestions from Sublime but those don’t require Clojure server).

In conclusion

During my development of Clojure Sublimed, I was deciding between using nREPL or writing my own server. I decided on nREPL due to it being a standard but ended up modifying a significant part of it and losing CLJS support in the process.

Because of that, I would love to see a Clojure Network Eval API that could be used to build editor integrations with minimal modifications and that is interoperable with all existing Clojure servers: JVM, CLJS, Babashka, nbb, etc.

This is my starting point/idea dump. Join the discussion at nREPL forum.

Permalink

Babashka survey Q1 2022 results!

In Q1 of 2022 I ran the babashka survey again. I've done this once before (in November 2020) to see how people are using babashka and what they think could be improved. This year about 200 people responded. That is double the number from the previous survey!

Here follows every question of the survey with a summary of the reactions.

How are you using babashka?

  • Shell scripting / bash replacement: 90%
  • Makefile replacing using babashka tasks: 43%
  • Small or internal web apps: 18%
  • Continuous integration: 29%

The first two options were selected in the majority of answers. Unsurprisingly, most people are using babashka as a shell-scripting replacement for bash, exactly how it was intended. Babashka tasks are catching on: almost half of babashka users are using them to replace their Makefiles.

Are you using babashka for personal projects, at work, or both?

  • Personal projects: 73%
  • At work: 74%

About 50% of people used babashka both for personal projects and work. Only personal and only work was about 25% each. Some people use babashka to sneak in some Clojure at their non-Clojure jobs.

Where do you think babashka should be improved?

This question was free format. Some common themes:

  • Stacktraces, better error messages
  • better nREPL / CIDER-middleware support
  • documentation:
    • overview of all available vars + docstrings in babashka
    • cookbook
    • videos
  • pods:
    • documentation
    • http proxy support
    • make it easier to create pods
  • REPL integration for bb tasks
  • Producing standalone binaries
  • Tasks: sharing/importing tasks from other files/remote hosts
  • More compatibility with Clojure: deftype

What features / libraries / namespaces in babashka do you use the most?

This question was also free format. Probably the top 5 most named libraries:

What features or namespaces in babashka are redundant and could be left out?

The majority answer here was: none. People seem to be happy with the current selection of libraries.

What features or libraries would you like to see in babashka in the future, if any?

It's hard to see common patterns here. A random pick:

  • clojure.spec.alpha
  • html parsing
  • time library
  • native interop
  • ssh support
  • malli
  • specter

Note that spec is available in babashka via this library. Specter also works from source now since :mvn/version "1.1.4". Dealing with time is done via java.time and cljc.java-time is one of the java.time based libraries that work in babashka.

Are you using babashka.* libraries with Clojure on the JVM?

In order of highest frequency:

It's great that people are using fs and process on the JVM. I didn't expect so many people to be using babashka.curl on the JVM too!

Is the binary size of babashka important to you?

84% of the people didn't care, 12% cared and 4% had a nuanced answer.

Note that a bigger binary size in babashka is correlated with a longer compilation time and more strain on CI resources. That is the primary reason I'm trying to keep it down.

What operating system are you using babashka on?

  • linux: 76%
  • macOS: 58%
  • Windows: 17%

Which babashka pods are you using, if any?

The top 7:

Note that for AWS we now also have a source-compatible library: awyeah-api.

When would you use babashka instead of JVM Clojure?

A pick of the free-formatted answers:

  • Everything I can use bb for.
  • Where deployment for bb is easier.
  • Short lived, small scripts.
  • Command line apps.
  • When performance isn't a strict requirement, but resource usage is
  • For small websites.

Any other feedback on babashka you would like to give?

Lots of thank you's and compliments... *blush*

Are you a user of other babashka-related projects? Please share!

Favorite pick:

  • My personal website, saltosti.com, is a scittle app running via a <script> tag.

Other answers:

Conclusion

The majority of answers to this survey were not too surprising, yet a good confirmation that babashka is doing what it's supposed to be doing. In individual answers there were a few fun discoveries, like learning which big companies are using babashka.

You can read the results from the previous survey here. In that survey I asked users what they thought was currently missing in babashka. You can find those answers here. I think almost all points have been addressed in the meantime. The improvements requested in this survey feel mostly like finishing touches: documentation, error messages, nREPL improvements. So it's probably safe to say that babashka now covers the most important Clojure scripting use cases.

Hardly anyone mentioned performance as something that should be improved, although that has been an area of focus since the beginning of this year and something I will keep looking into on an ongoing basis. Most if not all of that work is happening in SCI, which also powers nbb and other JS targets.

If you are interested in the raw (anonymized) data for this survey, send me a message.

Thanks

Thanks for taking the survey and being part of the babashka community! Also, thanks to everyone making babashka possible. Sponsors, contributors and users.

Permalink

Building a global marketplace for recyclable materials

ScrapAd marketplace screenshot

Building a marketplace platform is fun because you need to bring together the best of both the consumer and enterprise worlds: friendly, attractive, easy-to-use tools as well as solid tooling and resources for the enterprise back office. It is even more stimulating when it contributes to the sustainability of the planet, as is the case with ScrapAd, a global marketplace for recyclable materials we have recently helped build.

Marketplace platforms consist of different layers and every layer is important. In this post I would like to dig in and explain some of these layers. We have built a complete and fully integrated marketplace and we want to share this success story with you.

Infrastructure

The underlying tech has been, as it always is in our projects, Hydrogen.

Hydrogen provides a great boost for productivity when starting new projects. It automates the deployment to a cloud infrastructure (AWS) with a CI/CD pipeline, it sets up a system’s boilerplate and prepares the development environment for the developers. This automation allows developers to get to the business logic as fast as possible and removes all the complexity of setting up all the development environment details.

Automation is not the only cool thing about Hydrogen; there is also a collection of modules and libraries to satisfy different projects’ requirements. These include authentication, payment gateways, digital signature, object storage, scheduling, and many more.

Application

As I mentioned above, marketplaces require building functionality for both the final end-user and a complete back-office for the enterprise.

The functional scope of ScrapAd is quite vast so I will only mention what I consider the most relevant ones.

End-user functionality:

  • Two-factor authentication for better security.
  • Ability to upload/edit the products they want to buy/sell.
  • Real-time negotiation tool. A whatsapp-like chat for buyers and sellers negotiation.
  • Automatic negotiation-based contract creation and signature.
  • Web and email notifications.
  • Payment management: money-in from the organization's banks to the payment system, transparent money transactions, and availability of your money for money-out.
  • Favorite ads functionality to group and track ads.

Back-office functionality:

  • Automatic KYC process to ensure only verified customers can trade in the marketplace.
  • Automatic translation of items from the original language.
  • Automatic and personalizable SEO settings per ad and language.
  • Enterprise-style back office to manage all the main entities of the marketplace.
  • Automatic and manual logistics for rates and quotes.
  • A notification machinery that covers the whole registration and negotiation flows for the administrator to be informed and in total control of the business process.
  • Total integration of the payment system with the selected payment platform where all transactions are monitored.

As you can see, there are many features and processes behind the simple UI users interact with.

ScrapAd marketplace back-office dashboard screenshot

SEO

Search Engine Optimization plays an important role in Marketplaces. We want final users to come from search engines and that is why we need to pay special attention to SEO related aspects.

On the one hand, we have served the public pages of the application with SSR (Server-Side Rendering) using an isomorphic rendering approach. This way we achieve a modern, reactive, SEO-friendly web application.

On the other hand we created SEO-related editable tags for the back-office team and also improved the generation of URL slugs to optimize for SEO as well as sitemaps, and more.

In short: fully featured SEO capabilities integrated into the solution.

Payments — Lemonway

We also developed a module to integrate with the Lemonway payment system.

At Magnet, we had experience with two payment systems before we started this project: Stripe and Redsys. We had mostly used credit card payments before, but the requirements for the ScrapAd marketplace were more complex:

  • They need to work with large amounts of money.
  • Bank transfers as well as credit card payments.
  • Fractionated payments from buyers.
  • Wallet for every customer.
  • The need for an intermediary instead of making payments directly between the buyer and the seller. That is, the money had to be stored on the platform until the conditions for sending it to the seller were met.
  • Possibility of charging commissions and other expenses together with the above. That is, the need for total control over money.
  • Total and programmatic integration of the functionality in our web application.

The integration with Lemonway was smooth, implementing a total of 16 API calls and 4 webhooks. We have now entered a partnership with Lemonway to be able to collaborate in new projects in the future.

Summary

Marketplaces are complex systems that manage critical business processes for different stakeholders. They require subsuming different tech ingredients and functionality into a single place and must guarantee a smooth user experience, great performance and business scalability.

It’s been professionally and technically both exciting and challenging to be a critical part of a new ambitious venture that will play an important role in the global supply chain and circular economy of recyclable materials.


Building a global marketplace for recyclable materials was originally published on magnet.coop on Medium.

Permalink

Packaging Clojure projects into jars and uberjars with tools.build

Jars and uberjars

The most common way to prepare your Clojure project for distribution is to pack it into a *.jar file. The JAR format came from the Java world and stands for "Java ARchive". It is a zip archive with a *.jar extension that contains Java class files, resources, and metadata. To distribute a Clojure library you can create a jar with the library's source files or with compiled code (Java *.

Permalink

Middle DevOps Engineer

We are looking for a Middle DevOps Engineer to join product development for our client from the USA. The product is an AWS-hosted, multi-module payment and analytical platform for healthcare services, written in a Clojure/Golang/Python language stack. The product encompasses a few applications for customer journeys (web, mobile), a data science/data analytics platform, multiple integrations with federal and governmental resources, and a complex micro-service architecture. The product's domain is Healthcare/Fintech, hence all the compliance requirements and the emphasis on security and high performance.

Requirements:

  • 3+ years of experience as a DevOps Engineer
  • Strong AWS knowledge and experience
  • Experience in using CI/CD automation tools (Jenkins)
  • Experience with IAC tools like Terraform
  • Experience in operating a container orchestration cluster (Kubernetes, Docker)
  • Proficiency in Networking/VPC/Security groups
  • Familiarity with GitHub
  • Experience with ChatOps
  • ElasticSearch knowledge
  • Scalable architecture skills
  • Ability to work effectively within a team and with minimal supervision

Responsibilities:

  • Deploy and configure microservice applications on AWS
  • Build and improve tools for developers to deploy and troubleshoot applications and infrastructure
  • Develop and support configuration management code
  • Develop and support CI/ CD processes
  • Support and administration of infrastructures on AWS
  • Assist in planning and reviewing application architecture and design to promote efficient deployment process
  • Server monitoring

We offer friendly working conditions with competitive compensation and benefits including:

  • Comfortable working environment
  • Friendly team and management
  • Competitive salary
  • Free English classes
  • Regular performance-based compensation review
  • Flexible working hours
  • 100% paid vacation, 4 weeks per year
  • 100% paid sick-leaves
  • Corporate and team building events

Apply Now 


Permalink

ClojureScript Developer at Newsroom AI


GBP 50,000

Newsroom AI (www.nws.ai/about-newsroom-ai) is on a mission to help publishers re-establish their identity as innovators, leading the next decade of change in which journalism is enabled and not threatened by digital media. Our work addresses three main challenges in digital content publishing:

  1. User experience and personalisation
  2. Content diversity and syndication
  3. Commercial opportunity

The role

We're looking for well-rounded engineers, to help scale our platform and launch new features.

You will be highly involved in both product roadmap creation and product delivery, as well as technical and architectural decisions.

There are opportunities to work on all areas of our stack, and develop novel solutions to difficult problems in UI/UX, data processing and machine learning.

Tech we use

Languages: ClojureScript, Python, JavaScript

Databases: PostgreSQL, Redshift

Infrastructure: AWS

What we offer

Work from anywhere

Interesting technical challenges

Opportunity for travel and career development

Permalink

Functional Geekery Episode 139 – Laura M. Castro

In this episode I talk with Laura M. Castro. We talk about her introduction to Erlang, her final project and Ph.D. around Erlang, research and teaching using Erlang and Elixir, the Erlang Ecosystem Foundation, Code BEAM Lite, Erlang workshops, and more.

Our Guest, Laura M. Castro.

@lauramcastro on Twitter
lauramcastro on Github
https://lauramcastro.github.io/

Announcements

ElixirConf EU is taking place the 9th and 10th of June, with training running the 6th-8th. For more information and to get your tickets visit https://www.elixirconf.eu/.

:clojureD is taking place June 11th in Berlin, Germany. Visit https://clojured.de/ for more information and to submit your proposal.

Code BEAM Lite A Coruña is taking place in A Coruña, Spain on the 11th of June. Visit https://www.codebeamcorunha.es to register, or to find out more.

Lambda Days 2022 has been moved to the 28th and 29th of July in Krakow, Poland. Visit lambdadays.org to keep up to date.

Some of you have asked how you can support Functional Geekery; in that vein, Functional Geekery now has a Patreon page.

If that is one of the ways you would like to show your support, you can find out more at https://www.patreon.com/fngeekery.

Topics [@2:51]

About Laura
Universidade da Coruña
Erlang during University
OCaml
Java
C
Prolog
OCaml being completely different, even in second year of University
Contact with computers as typewriters
Basic
Studying Computer Engineering as good profession career track
Course on Functional Programming in 4th year
First Exposure to Erlang
“I was a Lego Kid”
“It will do the things I tell it to do”
End of Degree Project
Writing a Risk Management system in Erlang
Modeling policies as processes
Pattern Matching
Doing Research in the Computer Engineering world
Ph.D. on what Functional Programming helped put on the table
Dialyzer
Seeing what it would be like to work in academia and the research world
Delphi
“What did functional programming bring to the table?”
State in Processes
Pattern Matching
Recursions
“[…] they seem straight forward 20 years later”
Matthew Flatt – A Racket Perspective on Research, Education, and Production
Keeping research close to industry
Teaching Erlang in her Software Architecture course
“They’ve never seen really distributed architectures”
Automatic Validation and Testing
“You specify what you want to test”
Proper
Designing for Scalability with Erlang and OTP
WhatsApp
Suffering from the Secrecy of Using Erlang
Erlang Ecosystem Foundation
Overview of the Erlang Ecosystem Foundation
Education Working Group
OTP Behaviors
Ecto
University of Kent Erlang Master Classes; Class 1; Class 2; Class 3
exercism
Erlang Camp
Erlang and OTP in Action
Code BEAM Lite A Coruña
Code BEAM Twitter Account
Code BEAM A Coruña Twitter Account
Sponsorships for Code BEAM Lite
Erlang Workshops
Brujo Benavides
Erlang Workshop with Laura and Brujo
Hank
Rebar 3
Property Based Testing Training Workshop coming soon
Telegram

As always, a giant Thank You goes to David Belcher for the logo design.

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishamapayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.