Clojure on a plane, Onyx in a browser



Libraries & Books.

  • Tengen, a macro for making Reagent components a little easier, from Peter Taoussanis. Most of the time form-1 and form-2 Reagent components are all I need, but this is worth a look if you need to work with the React lifecycle methods more deeply.
  • Tempura, an internationalisation library also from Peter Taoussanis.
  • onyx-local-rt has been released - a way to run Onyx jobs locally, and even in the browser (!)
  • ClojureScript 1.9.293 is out, it's great to see Antonio Monteiro's contributions.
  • clj-xchart, a charting library for the JVM has been released, with excellent docs.
  • graphql-clj: A Clojure library that provides a GraphQL implementation
  • carry: A ClojureScript web app framework
  • Sextant: Offline geocoding in Clojure
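The form-1/form-2 distinction mentioned in the Tengen entry can be sketched in a few lines. This is a minimal, hypothetical sketch (`greeting` and `counter` are made-up names), and a plain atom stands in for reagent.core/atom so it runs outside a Reagent project:

```clojure
;; Form-1: a plain function of its arguments that returns hiccup data.
(defn greeting [name]
  [:p "Hello, " name])

;; Form-2: a function that sets up local state once, then returns the
;; render function, which closes over that state. In real Reagent code
;; `clicks` would be a reagent.core/atom so changes trigger re-renders.
(defn counter []
  (let [clicks (atom 0)]
    (fn []
      [:button {:on-click #(swap! clicks inc)}
       "Clicked " @clicks " times"])))
```

The third style, form-3 components, is where you work with the React lifecycle methods directly - which is the territory Tengen aims to make easier.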

People are worried about Types.


Waiting for 1.9.


  • An excellent guide on AWS - you're bound to pick up a trick or two, even if you've been using it for a while.
Copyright © 2016 Daniel Compton, All rights reserved.



Clojure in Greece: Baresquare


Baresquare is a digital analytics company. They provide the full data analytics pipeline, collecting and converting raw metrics into human-digestible knowledge. They have large clients including the Co-op Bank and Dixons Carphone, and they have had a strategic partnership with Sony Electronics for a few years now.

Baresquare has been operating for seven to eight years, and their diverse workflows have included R scripts and manual human processes. Two years ago they decided to fully automate their data-processing pipeline using Clojure.

I caught up with Georgios Grigoriadis (CEO) and Stathis Sideris (Architect) to find out more.


Jon: Could you explain a bit more about what a digital analytics company does?

Georgios: We process collected raw data into useful information; in essence we provide the data maturation pipeline Data -> Information -> Knowledge. We can take a million rows of metrics and process them into a single tweetable sentence that reveals what really happened in a day or a week of user behaviour.

Jon: What's involved in the processing?

Georgios: We've standardised the process of retrieving the data from sources such as Google Analytics, doing some massaging, and then performing a time series analysis looking for trends or exceptional cases. We carry out statistical analysis to probe the 'why' behind the incidents and trends, and once the full picture has been established we provide a human-digestible report.
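The Data -> Information step described here can be sketched in a few lines of Clojure. This is a toy illustration, not Baresquare's code; all names and the deviation-from-mean test are hypothetical:

```clojure
(require '[clojure.string :as string])

;; Aggregate raw metric rows into daily totals.
(defn daily-totals [rows]
  (reduce (fn [acc {:keys [date visits]}]
            (update acc date (fnil + 0) visits))
          {}
          rows))

;; Flag days whose totals deviate from the mean by at least a threshold.
(defn anomalies [totals threshold]
  (let [mean (/ (reduce + (vals totals)) (count totals))]
    (for [[date total] totals
          :when (>= (Math/abs (double (- total mean))) threshold)]
      {:date date :total total})))

;; Reduce the flagged days to a single human-digestible sentence.
(defn summarize [anoms]
  (if (empty? anoms)
    "Nothing unusual happened this week."
    (str "Unusual traffic on: " (string/join ", " (map :date anoms)))))
```

With rows for Monday to Wednesday of 100, 100 and 400 visits and a threshold of 150, only Wednesday is flagged, and the summary reads "Unusual traffic on: Wed".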

The Clojure Story

Jon: How did Baresquare get started with Clojure?

Georgios: We originally had an external vendor in, and we discussed outsourcing the whole thing - the pipeline as well as a new front end. They proposed to do everything in Rails, and this was our intention for a couple of months.

I then consulted Stathis and he advised writing the pipeline in Clojure. He was firm in his view that the data-processing requirements fit the specific characteristics of Clojure: it's a functional Lisp, which makes it a good fit for data manipulation.

So we split the system in two - we outsourced the front end in Ruby and we used Clojure for the backend.

Stathis: This was an exploratory project in a sense. We didn't know what the configuration of the pipeline would look like when we started; we wanted to make a generic processing pipeline that could be heavily configured to fit the specific requirements of our customers operating in different countries.

Clojure is good for iterating quickly. We needed to move fast, to make prototypes and to modify the pipeline accordingly. Flexibility is paramount.

Georgios: I didn't have the intention of doing Clojure, it was an adventure that came up!

Stathis: The front end was well defined and more outsourceable. The data-processing pipeline needed to be kept closer to the company - we wanted space to explore how to move from a human-centric workflow to an automated one.

Georgios: We chose initially to use R for the data algorithms. Baresquare already had R expertise in-house and talking between Clojure and R was not a problem.

Jon: Is Baresquare doing more Clojure now?

Georgios: We had a proof of concept last year that we managed to deliver within 2-3 weeks of coding. The developer involved - Kosta - who was originally a Java developer, said that it would have taken him around four months in Java.

The proof of concept was successful - we put it straight into production and we haven't had any issues since. This is not a usual story that I hear!

Jon: That's great!

Georgios: Yes. It's worth mentioning that we are not a start-up who have secured funding to create our software. Everything we do is bootstrapped, we are pivoting from being a services company to being a software house. There is a limited amount of time/money to invest, and so we have hard deadlines that are make or break. It's critical to make the best use of our resources.

Clojure Onboarding

Jon: How has the Clojure onboarding process been for other developers?

Georgios: Our first in-house developers (Kostas and Panos) were new to Clojure and have been mentored by Stathis.

Stathis: Both came from a heavy Java background and adapted easily. Since then I've interacted with four developers, all of them enthusiastic about Clojure and using it at work. One started using advanced libraries such as core.match; another was very diligent about automated tests; and so on. I provided some mentoring, but it was far from full time - about 1-2 sessions a week.

Georgios: We now have 2 senior developers and 2 junior developers coding Clojure.


Jon: How has it been hiring for Clojure devs?

Georgios: It was difficult for us to hire the junior devs, although one approached us directly after searching for 'Clojure in Greece'. He said, "I love the language, can I work for you?"

Otherwise I've found that the developers who are aware of Clojure or Scala are easier to convince that Clojure is worth the time.


Jon: What is the state of Clojure in Greece?

Stathis: There is an Athens Clojure Meetup, and there are another two companies we know of in Athens using Clojure. It's very early in the Clojure journey here.

Georgios: I don't feel nervous. There are lots of large companies using Clojure outside of Greece, so adoption is happening and picking up. I have trust in it. The resourcing makes me a little nervous, but then you get people reaching out because of Clojure, which is a good thing.

We are used to doing things our own way, and being a bit different.

Jon: Are you happy you chose Clojure?

Georgios: I feel we have made a really good investment in newer technologies, and this helps us be agile and adapt. I look at competitors that are going slower and I guess it's partly because of their legacy of old tech.

Also - I don't feel trapped. Clojure is built on the JVM, so it's well connected to other libraries and tools we might want to use; I like this very much. Clojure helps us attract good people, and we are thinking of using ClojureScript for a UI we need to build.

Stathis: There are some great platforms and tools we can potentially use in the Clojuresphere such as Onyx. The future is good.


Baresquare are hiring - check out their careers page and the company's LinkedIn page.


This Infographic Explains Why JS Dev Seems So Complicated

There's a website called Hacker Noon which seems to specialize in complaining about JavaScript. One Hacker Noon piece resonated with a lot of people recently. It's all about how huge and overcomplicated the JS ecosystem is, and how hard it is to just sit down and get stuff done.

First of all, if what you want is a beautiful, omakase system for building modern front-end applications, just learn Elm. It's omakase as fuck. It's lovely to work with. And it's the type of project where you can do everything in one language, without ever needing to deal with other stuff.

But there's another option, too: just take a moment to understand why the JavaScript ecosystem is complicated.

It's not just because JavaScript is used by everybody who does anything on the web. It's not just because code on demand becomes a more powerful strategy each day, as the computers which run web browsers become smaller and more powerful.

It's also because JavaScript employs one of the oldest strategies of the Web: paving the cowpaths. Whenever people are already doing something, and it's working well, JavaScript seeks to lift up their practices into the language itself, rather than dictating something new.

Consequently, modern JavaScript is full of things which it got from open source projects.

Puppy picture from Daniel Stockman on Flickr.

bind was pioneered by Prototype, and showed up in Underscore and jQuery (as proxy) before landing in ES5. The Array methods forEach, map, and filter have a very similar backstory. CoffeeScript and Node are major contributors to ES6. When Angular and Ember first blew up, Object.observe briefly became a thing. People thought it would be a big deal, but this turned out to be premature paving of an unused cowpath, and Object.observe was withdrawn because people didn't use it. This is the output of a process where the people who shape JavaScript look at how open source projects use JavaScript, and the ways in which they seek to fix it, and use that as guidance for decisions about what JavaScript should be.

Modern front-end development is a lot of fun. In the 1990s and the 2000s, it was a nightmare. It was horrible. And paving the cowpaths is one of the reasons it got better. Because in the 1990s and 2000s, JavaScript implementers tried to just make up features and tell people what they wanted. It didn't work.

The chaos and churn of the JavaScript landscape is how we got a better JavaScript. It would never be practical to get such a big user base to agree on a hypothetical feature set. There are just too many people. It would be beyond impractical to expect any giganto-committee to be able to foresee the needs of such a colossal group. So JavaScript is a place where lots of projects get invented, and the language learns from its users.

That's fucking great. A huge landscape full of competing projects means that JavaScript will keep getting better.

If you'll forgive a little blatant self-promotion, this infographic is from my upcoming book, Modern Front-End Development with Ruby on Rails and Puppies. In this book, I build a simple UI in vanilla Rails, using CoffeeScript and jQuery. Then I throw out jQuery and turn the front-end code into ES5, because — as you can tell from the infographic — pretty much everything that made jQuery worthwhile has been integrated into JavaScript itself now. Anyway, then I turn it into ES6. Then I run it through Webpack and integrate Webpack with Rails. I implement the UI again in React with JSX, and then again with ClojureScript and Om. (I've also built it yet again in Elm, but I haven't written that chapter yet.) I even show how to pull in an experimental feature from ES8.

And yes, there's puppies. There's a puppy on every page. So if all this unfamiliar JavaScript scares you, all you have to do is look at the puppies and you'll feel better.

It's a very hand-hold-y book. It makes it easy for you to dive into all the complexity of modern front-end development. So there's no excuse for complaining! But the reason all this works is because this book teaches you things. In order to do front-end development, you have to learn stuff. There's no way around that. You can't expect one of the most lucrative fields on the planet to be a field which does not require you to obtain new information. That's just ridiculous.

Update: Version 0.2 of the book is now live! Also, when I made this infographic, I left out the most obvious and well-known example of JS adopting user ideas, namely JSON. My bad!


Using Gitlab CI with a Clojure project

Gitlab not only gives you free private repositories, but also lets you test them using free runners. These run automatically, on push, for any branch or tag.

I keep a few private repositories with them, for personal projects and small experiments. I decided to give Gitlab CI a shot for a PostgreSQL-backed Clojure project.

There’s a basic example in the Gitlab CI repository. It gets and installs lein, which isn’t necessary. Instead, we’ll use the clojure:lein Docker image.

Let’s start with an empty .gitlab-ci.yml file at your repository’s root.

image: clojure:lein-2.7.0

services:
- postgres:latest

You’ll see I’ve also included Postgres as a service since it’s what I’m using for the database.

We’ll then need to add a section for environment variables:

variables:
  DATABASE_URL: "postgresql://postgres/dbname?user=uname&password=pwd"

Gitlab also allows you to define secret variables on a per-project basis, but there’s no need for that here.

Finally, we’ll add our before_script section, which just updates apt-get and loads the dependencies.

before_script:
- apt-get update -y
- lein deps

Update: Notice that I’m updating the package lists using apt-get. This is necessary because I’ll also need to install the Postgres client. If you don’t need to install anything using apt, you might save some build time by removing that line.

Before proceeding to test, we’ll install the Postgres client, initialize the database (adding some plugins), and run the migrations.

test:
  script:
  - apt-get install postgresql-client -y
  - psql -h postgres -U postgres < db-setup.sql
  - lein with-profile test run migrate
  - lein test
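Assembled from the fragments above, the whole file looks roughly like this (the `test` job name is my choice; adjust to taste):

```yaml
image: clojure:lein-2.7.0

services:
- postgres:latest

variables:
  DATABASE_URL: "postgresql://postgres/dbname?user=uname&password=pwd"

before_script:
- apt-get update -y
- lein deps

test:
  script:
  - apt-get install postgresql-client -y
  - psql -h postgres -U postgres < db-setup.sql
  - lein with-profile test run migrate
  - lein test
```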

And voilà! Tests will run on push. You can find a history of their status in the Pipelines section of your project.


If for any reason you don’t see a Pipelines option, make sure that Builds are enabled in the project’s settings.

You’ll also get a build history


Here you’ll find the complete file as a snippet.



20 Things (that I want to learn)

For the past 2 years or so, I have been writing predominantly JavaScript. While I have mixed feelings on JavaScript as a language, the ecosystem is both awesome and utterly terrifying. There are so many options for every aspect of development and tooling that the choices can be paralyzing. The glut of options for, well, everything has caused me to focus more on deciding which technologies I really want to learn and for which ones I can be content in my ignorance.

In the spirit of being intentional about what to learn, I have compiled a list of 20 technologies that interest me. My plan is to learn at least enough about each one to carry on an intelligent conversation about it, and eventually do a deep dive on the ones I enjoy the most. I plan to share my experience as I go, so I’ll be posting to the blog as I work through each item.

The 20 things I would like to learn are - in no particular order:

  • Containers without docker
  • ARM assembly
  • Building a columnar database
  • Blockchain
  • Rust Programming Language
  • Linux filesystems
  • How an OS Works
  • strace
  • Raft consensus protocol
  • Device drivers
  • Type inference
  • Go language
  • Neural networks
  • TCP
  • Distributed filesystems and storage
  • Ethereum Smart Contracts
  • Falcor
  • BSD Jails
  • GPU Programming
  • Algorithmic trading

If there are any topics you’d like me to cover first or in more depth, just let me know in the comments. I hope this learning journey is interesting/helpful!


Clojure is not afraid of the GPU (slides for my EuroClojure2016 talk)

Hello, fellow Clojurians, and welcome to my newly born blog. Is there a better way to open a new blog than with a post about my upcoming talk at EuroClojure 2016? I decided to put the slides online in advance, so attendees can relax and enjoy the talk instead of scrambling to type up notes and memos.

The slides are available here - just open them in a web browser (I tested with Chromium). I don't know whether you'll agree with me that Clojure is not afraid of the GPU, but I hope you'll enjoy my talk and the conference!

Please note that there are two directions to browse: down/up and left/right. Down/up arrows move you through slides, while left/right arrows move you through sections. Normally, you'd press down-down-down until there are no more slides, and then right to cross to the next section. ESC gives you a bird's-eye view of the structure.

Enjoy, and see you in wonderful Bratislava, at the awesome EuroClojure conference.


KLIPSE talk at ClojureX 2016

ClojureX 2016

I’m so excited to join Clojure Exchange 2016 as a speaker, where I will share my insights on Code Interactivity Everywhere with KLIPSE.


KLIPSE - The background

My original intention with KLIPSE was to provide myself a convenient way to see the JavaScript code produced by the ClojureScript compiler. This intention started to emerge in my mind after reading David Nolen’s blog post ClojureScript can compile itself on July 29, 2015. It became more concrete after watching Maria Geller’s talk The ClojureScript Compiler - A Look Behind the Curtains in November 2015.

Everybody around me was skeptical about the value of such a project. They were telling me:

What’s the problem? You simply need to create a ClojureScript project, edit a file, let the compiler do its job, and open the JS file in your IDE.

But I was not convinced by their skepticism. Afterwards, I realized that I was seeking three important things:

  • Interactivity: I wanted to be able to edit code and see the transpiled JavaScript immediately - as I type.
  • Reach: I wanted to be able to see transpiled JavaScript everywhere - on the bus, while walking, while running. I didn’t want to see it only when my dev environment was set up.
  • Simplicity: I wanted a simple solution - something that works in a simple way.

Since the beginning of 2016, I have been spending my nights building KLIPSE around those three guiding principles.


After playing a bit with eval-str and compile-str from the cljs.js namespace, I ended up building this web REPL that evaluates Clojure expressions and displays the transpiled JavaScript:

I was so excited by the simplicity of this web REPL that I decided to create a blog in order to explore and share deep features of the language. The uniqueness of the blog would be that all the code snippets would be interactive.

During this period, I wrote about static vs dynamic dispatch, deftype and defrecord, truth in ClojureScript, defprotocol’s secret, IFn magic, how not to write macros, syntax quote and more…

I learned a lot of deep stuff about Clojure and ClojureScript through the necessity of providing working code snippets to my readers. It required me to make sure I fully understood what I was blogging about.

But still something was missing: it was not so convenient to embed isolated iframes inside a blog post.

This is how I came to the second facet of KLIPSE: the plugin.

KLIPSE - the plugin

The KLIPSE plugin is a JavaScript tag embeddable in any HTML page: the only thing you need to do is put your code snippets into a div (or any other HTML tag). And BOOM - your code snippets become interactive: your readers can edit the code and see the evaluation result as they type.

You get interactive code snippets like this:

(map inc [1 2 3])

Variables are shared among the snippets:

(def a-number 42)
(map inc [1 a-number 3])

And you can also transpile clojurescript into javascript:

(let [[a b] #js [1 2]]
  (+ a b))

What else?

There are a lot of additional features in KLIPSE:

  • Using external libraries
  • Interactive Code Snippets from github gist
  • Other languages: Ruby, JavaScript, Python, Scheme
  • Slides with klipse
  • Interactive Documentation with codox and klipse

I invite you to join me in London at Clojure Exchange on December 1, 2016, where I will show more fun stuff that you can do with KLIPSE.


Zalando Tech x Strange Loop 2016

Strange Loop has taken place every year since 2009 in St. Louis, Missouri (USA) and is highly regarded among developers, covering a wide range of topics: programming languages, distributed systems, web development, functional programming, and the socio-political implications of technology.

I had the chance to attend this year's edition of Strange Loop and would like to share the highlights. The first day was dedicated to workshops as well as two conferences in parallel: elm-conf and Papers We Love.

A workshop highlight for me was "Deploying and scaling applications with Docker" by Bret Fisher. Starting with running a sample app on a single node with Compose, we proceeded to scale it to a cluster of Docker nodes using Swarm.

The two following conference days were packed full of high-quality talks, with a total of five concurrent tracks that were a little hard to manage. Since the videos were readily available the day after, I watched some of the talks on Strange Loop's YouTube channel, which you can access here.

Presentation highlights

"Systems programming as a swiss army knife" by Julia Evans was one of my favourite talks. It focused on strategies for debugging any kind of system using Linux tools such as strace, tcpdump + wireshark and perf. She showed how knowledge about kernels and systems programming can help you become a better programmer, and she conveyed her enthusiasm in a way that made the talk particularly appealing.

"Humanities x Technology" by Ashley Nelson-Hornstein was the ending keynote for the first day of the conference. The key takeaway is that technology for technology’s sake just does not matter - rather, technology is for people, and as such it should sit at the intersection with the liberal arts. The examples used to convey this message were very evocative.

One of these examples was Tay, a chatbot that learned from Twitter users - within hours of coming online, it became extremely anti-human, showing how the Internet had turned an AI chatbot into a hate machine. Tay was based on XiaoIce, a Chinese chatbot that was more successful because it focused on empathy with its users, not precise chat communications. XiaoIce’s technology was informed by humanity, not pure science like Tay’s. Another key takeaway is that we, as developers, are not the user, and that we possess blind spots we need to be mindful of when trying to understand people. You can read Ashley’s notes on the presentation here.

"Building a Distributed Task Scheduler With Akka, Kafka, and Cassandra" by David van Geest showed how his team built a task scheduler using Akka, Kafka, and Cassandra, leveraging the strengths of these technologies. Some of the challenges they faced include dynamically adjusting for increased task load with zero downtime, ensuring task ordering across many servers, and making sure that the tasks still run if a datacenter goes down.

"Unlimited Register Machines, Gödelization and Universality" by Tom Hall presented a formalization of Universal Register Machines written in Clojure, used here to perform simple arithmetic operations (like sum and product), while encoding instructions as numbers themselves and executing the list of instructions that comprises a program. For those unaware, Gödelization means encoding the elements of a formal system - such as a machine's instructions - as numbers.
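As a toy illustration of the encoding idea (my own sketch, not code from the talk): a pair of numbers can be Gödelized into a single number using prime exponents, and recovered by factoring:

```clojure
;; Encode the pair (a, b) as 2^a * 3^b; unique factorisation
;; guarantees the pair can be recovered from the product.
(defn encode-pair [a b]
  (* (long (Math/pow 2 a)) (long (Math/pow 3 b))))

;; Count how many times p divides n, returning [count remainder].
(defn divide-out [n p]
  (loop [n n, k 0]
    (if (zero? (mod n p))
      (recur (quot n p) (inc k))
      [k n])))

;; Recover (a, b) by dividing out 2s, then 3s.
(defn decode-pair [n]
  (let [[a n'] (divide-out n 2)
        [b _]  (divide-out n' 3)]
    [a b]))
```

So (decode-pair (encode-pair 4 7)) gives [4 7] back; the same trick extends to whole instruction lists by assigning one prime per position.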

The whole presentation was carried by Tom's enthusiasm for his achievement, even if the end result was simply performing an addition. His passion for the topic is highly contagious, and the talk is worth watching for that alone. You can also read his notes here.

"Kittens - datatype-generic functional programming with Scala" by Kailuo Wang presented Kittens, a library built on top of shapeless and cats, meant as a proof of concept combining generic and functional programming. Several examples illustrated its features and use cases.

"Reproducibility" by Gary Bernhardt focused on the importance of reproducibility: the guarantee that, given the same inputs, a tool yields the same output. This is important because it aids in building clear mental models of the tool's behaviour, and his central thesis is that these mental models lead to tools we love whose designs are highly non-obvious. One example of this is git, which will produce the same hash for a file committed on two different machines. While the video of his presentation isn’t available, you can still access his notes.

The following talks received high praise and are on my "to watch" list:
- "Diocletian, Constantine, Bedouin Sayings, and Network Defense" by Adam Wick
- "Fold, paper, scissors - an exploration of origami's fold and cut problem" by Amy Wibowo
- "Languages for 3D Industrial Knitting" by Lea Albaugh

For me, the best part of Strange Loop is the inspiration that it brings me and the insightful conversations that I had with some of the other people attending, so I will remember this edition of the conference for a long time.

If you’d like to get in touch about my experiences at Strange Loop, find me on Twitter at @smourapina.


Calculating The Darcey Coefficient – Part 3 #strictlycomedancing #machinelearning #clojure #weka

The Story So Far…

This started off as a quick look at Linear Regression in spreadsheets, then using the findings in Clojure code - that's all in Part 1. Muggins here decided that wasn't good enough and rigged up a Neural Network to keep the AI/ML kids happy - that's all in Part 2.

Darcey, Len, Craig and Bruno haven't contacted me with a cease and desist, so I'll carry on where I left off… making this model better. In fact, they seem rather supportive of the whole thing.


Weka Has Options.

When you create a classifier in Weka there are options available to tweak and refine the model. The Multilayer Perceptron put together in the previous post ran with the defaults. As Weka can automatically build the neural network, I don't have to worry about how many hidden layers to define; that's handled for me.

I do however want to alter the number of iterations the model runs (epochs) and I want to have a little more control over the learning rate.

The clj-ml library handles the options as a map.

darceyneuralnetwork.core> (def opts {:learning-rate 0.4 :epochs 10000})
darceyneuralnetwork.core> (classifier/make-classifier-options :neural-network :multilayer-perceptron opts)

The code on Github is modified to take those options into account.

(defn train-neural-net [training-data-filename class-index opts]
  (let [instances (load-instance-data training-data-filename)
        neuralnet (classifier/make-classifier :neural-network :multilayer-perceptron opts)]
    (data/dataset-set-class instances class-index)
    (classifier/classifier-train neuralnet instances)))

(defn build-classifier [training-data-filename output-filename]
  (let [opts (classifier/make-classifier-options :neural-network :multilayer-perceptron
                                                 {:learning-rate 0.4
                                                  :epochs 10000})
        nnet (train-neural-net training-data-filename 3 opts)]
    (utils/serialize-to-file nnet output-filename)))


There’s not much further I can take this as it stands. The data is actually robust enough that Linear Regression would give the kind of answers we were looking for. Another argument says you could use a basic decision tree to read Craig’s score and classify Darcey’s score.

If the data were all over the place in terms of scoring, then something along the lines of an artificial neural network would be worth doing. And using Weka from Clojure makes the whole thing a lot easier. It’s also straightforward in Java, which I covered in my book Machine Learning: Hands-On for Developers and Technical Professionals.


Rest assured this is not the last you’ll see of machine learning in this blog, there’s more to come.





Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.