A first encounter with improvisation

I grew up playing piano, which is a very typical pastime for second-generation Chinese kids growing up in a Western nation. My teacher, a very good family friend, was an archetypal first-generation Chinese professional straight out of the Beijing College of the Arts. I learnt piano the way my teacher had learnt piano - by replicating exactly how the CD player played those songs. It was stupidly hard. Firstly, the CD sounds amazing, especially when the artist is Glenn Gould, and I never got anywhere close to the level of the CD. Of course, that left a lot of room for me to 'improve', and every time I went for a lesson, I was never told: "Wow, you sound exactly like the CD". It was always "You're a little too soft in this section" or "You didn't slow down enough for that section" - or, god forbid, I played a wrong note once in a while.

Even though I failed miserably as a CD replicator, I did manage to obtain a diploma when I was 15. I then never played again for another 15 years... purely because the experience ripped whatever love I had of music out of me, dragged it bloodied and screaming through the streets and then tossed it into a dungeon ten floors beneath the ground.

So I stopped playing... that is... until last month, when I started doing some jazz improvisation classes. Jazz was hard, especially for someone who had been told what to play, what fingering to use, how loudly to play, where to breathe, where to slow down, where to contrast the left and right hand, etc. It was not hard in the way that playing exactly as the CD had was hard; it was hard in the way that I had only a melody, the chord structure and some basic principles of what typically sounds good. Everything else was up to the guy doing the improvising. Me.

The downside to this was that no one told me what was good or bad anymore. The upside was that I was free to explore and discover for myself what was good and what was bad. Of course, there is always subjectivity in music. However, within that subjectivity, there are universal principles: what sounds good, how to create tension and how to resolve it, how to modulate to different keys, and how to come back to the original theme.

In learning how to improvise, I was learning the tools and ideas that great composers (both classical and jazz) also used in creating their own masterpieces. It also made me think seriously about how classical works should be taught - not as a single work of art, frozen in time and space, but as a series of themes and variations that can be disassembled, analysed and reconstituted based upon a core set of melodic and harmonic principles.

So I'm really looking forward to doing more jazz, and I aspire to learn more traditional folk music. To quote Josh, my current teacher: "I thought I had known everything already... but once I had gone through the rabbit hole, I realised that there is no way for me to learn everything about jazz in my lifetime."

If you are wondering what this is doing on a programming blog:

  • replace classical music (or rather, the method by which it was taught) with design patterns/object orientation or, dare I say it, static typing.
  • replace jazz improvisation with Lisp

That's it. It's very much the same feeling that I had when encountering Clojure for the first time.


Using Transit with Immutant 2

Out of the box, Immutant 2 has support for several data serialization strategies for use with messaging and caching, namely: EDN, Fressian, JSON, and none (which falls back to Java serialization). But what if you want to use another strategy? Luckily, this isn't a closed set - Immutant allows us to add new strategies. We took advantage of that and have created a separate project that brings Transit support to Immutant - immutant-transit.

What is Transit?

From the Transit format page:

Transit is a format and set of libraries for conveying values between applications written in different programming languages.

It's similar in purpose to EDN, but leverages the speed of the optimized JSON readers that most platforms provide.

What does immutant-transit offer over using Transit directly?

immutant-transit provides an Immutant codec for Transit that allows for transparent encoding and decoding of Transit data when using Immutant's messaging and caching functionality. Without it, you would need to set up the encode/decode logic yourself.
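For a sense of what that manual setup would involve, here's a minimal sketch of a Transit round-trip using the transit-clj API directly (the `encode`/`decode` function names are ours, not part of either library):

```clojure
(ns your.app.manual-transit
  (:require [cognitect.transit :as transit])
  (:import [java.io ByteArrayInputStream ByteArrayOutputStream]))

(defn encode
  "Write a Clojure value to a Transit-encoded byte array."
  ^bytes [value]
  (let [out (ByteArrayOutputStream. 4096)]
    (transit/write (transit/writer out :json) value)
    (.toByteArray out)))

(defn decode
  "Read a Clojure value back out of a Transit-encoded byte array."
  [^bytes payload]
  (transit/read (transit/reader (ByteArrayInputStream. payload) :json)))

;; (decode (encode {:a :message})) ;=> {:a :message}
```

immutant-transit wires this kind of logic into Immutant's codec machinery so you never touch the streams yourself.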


Note: immutant-transit won't work with Immutant 2.0.0-alpha1 - you'll need to use an incremental build (#298 or newer).

First, we need to add org.immutant/immutant-transit to our application's dependencies:

        :dependencies [[org.clojure/clojure "1.6.0"]
                       [org.immutant/immutant "2.x.incremental.298"]
                       [org.immutant/immutant-transit "0.2.2"]]

If you don't have com.cognitect/transit-clj in your dependencies, immutant-transit will transitively bring in version 0.8.259. We've tested against 0.8.255 and 0.8.259, so if you're running another version and are seeing issues, let us know.
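If you do need to pin a particular transit-clj, declaring it as a top-level dependency takes precedence over the transitive one - something like this (the pinned version shown is just an example):

```clojure
:dependencies [[org.clojure/clojure "1.6.0"]
               [org.immutant/immutant "2.x.incremental.298"]
               [org.immutant/immutant-transit "0.2.2"]
               ;; explicit top-level entry wins over the transitive dependency
               [com.cognitect/transit-clj "0.8.259"]]
```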

Now, we need to register the Transit codec with Immutant:

        (ns your.app
          (:require [immutant.codecs.transit :as it]))

This will register a vanilla JSON Transit codec that encodes to a byte[] under the name :transit with the content-type application/transit+json (Immutant uses the content-type to identify the encoding for messages sent via HornetQ).

To use the codec, provide it as the :encoding option wherever an encoding is used:

        (immutant.messaging/publish some-queue {:a :message} :encoding :transit)
        (def transit-cache (immutant.caching/with-codec some-cache :transit))
        (immutant.caching/compare-and-swap! transit-cache a-key a-function)

If you need to change the underlying format that Transit uses, or need to provide custom read/write handlers, you can pass them as options to register-transit-codec:

        (it/register-transit-codec
          :type :json-verbose
          :read-handlers my-read-handlers
          :write-handlers my-write-handlers)

The content-type will automatically be generated based on the :type, and will be of the form application/transit+<:type>.

You can also override the name and content-type:

        (it/register-transit-codec
          :name :transit-with-my-handlers
          :content-type "application/transit+json+my-stuff"
          :read-handlers my-read-handlers
          :write-handlers my-write-handlers)

For more examples, see the example project.

Why is this a separate project from Immutant?

Transit's format and implementation are young, and are still in flux. We're currently developing this as a separate project so we can make releases independent of Immutant proper that track changes to Transit. Once Transit matures a bit, we'll likely roll this into Immutant itself.

If you are interested in adding a codec of your own, take a look at the immutant-transit source and at the immutant.codecs namespace to see how it's done.

Get In Touch

If you have any questions, issues, or other feedback about immutant-transit, you can always find us on #immutant on freenode or our mailing lists.


Immutant 2 (The Deuce) Alpha Released

We're as excited as this little girl to announce our first alpha release of The Deuce, Immutant 2.0.0-alpha1.

This represents a significant milestone for us, as we've completely removed the app server from Immutant. It's just jars now. We've put a lot of thought into the API and performed enough integration testing to feel confident putting an alpha out at this time.

Big, special thanks to all our early adopters who provided invaluable feedback on our incremental releases these past few months.

What is Immutant?

Immutant is an integrated suite of Clojure libraries backed by Undertow for web, HornetQ for messaging, Infinispan for caching, and Quartz for scheduling. Applications built with Immutant can optionally be deployed to a WildFly cluster for enhanced features. Its fundamental goal is to reduce the inherent incidental complexity in real world applications.
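As a sketch of what "just jars" feels like in practice (the namespace, handler, and port here are our own choices, and the option syntax may differ between alphas):

```clojure
(ns my.app
  (:require [immutant.web :as web]))

(defn handler [request]
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    "Hello from The Deuce"})

;; Starts embedded Undertow from a plain REPL or -main; no app server involved.
(web/run handler :port 8080)
```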

A few highlights of The Deuce compared to the previous 1.x series:

  • It uses the Undertow web server -- it's much faster, with WebSocket support
  • The source is licensed under the Apache Software License rather than LGPL
  • It's completely functional "embedded" in your app, i.e. no app server required
  • It may be deployed to latest WildFly for extra clustering features

How to try it

If you're already familiar with Immutant 1.x, you should take a look at our migration guide. It's our attempt at keeping track of what we changed in the Clojure namespaces.

The tutorials are another good source of information, along with the apidoc.

For a working example, check out our Feature Demo application!

Get It

There is no longer any "installation" step as there was in 1.x. Simply add the relevant dependency to your project as shown on Clojars.

What's next?

For the first release, we focused on the API and on usage outside of a container. For the next alpha, we plan on focusing on making in-container behavior more robust. Take a look at our current issues if you want to follow along.

Get In Touch

If you have any questions, issues, or other feedback about Immutant, you can always find us on #immutant on freenode or our mailing lists.


Onyx: Distributed Data Processing for Clojure

I’m pleased to announce the release of Onyx. Onyx is a new cloud scale, fault tolerant, distributed data processing framework for Clojure. It’s been in development since November of 2013, and I’m elated to open source it to you!


Fancy logo, huh?

Onyx is a batch and stream processing hybrid, and it offers transparent code reuse between both modes. This makes Onyx applicable in situations where you’d want to do data extraction, information ingestion, map/reduce operations, or event stream processing. It’s also killer as a low-ceremony tool for replicating data across multiple storage mediums.


Use the existing Onyx plugins to leverage your storage. Adapt to your needs by writing your own plugin.

The Big Deal about Onyx is rooted in its treatment of computations as more than code. There’s a fundamental difference between the specification of a distributed computation and its mechanism of execution. The former describes the structure and flow of what we’re trying to accomplish, whereas the latter is concerned with the concrete building blocks that we use to construct our programs. I contend that these two concerns are deeply complected in even the most mature frameworks.

Consider map/reduce for a moment. Map/reduce is a long-held, proven technique for processing data at enormous scales. But even map/reduce falls prey to this subtle complexity. By using programming language constructs to chain map and reduce operations together, we've baked the flow of our program into its mechanism - namely map/reduce.

I care a lot about my ability to grow the structure of a distributed computation independently of its mechanism, and you should too. As our field matures, we're frequently seeing customer requirements for distributed systems that involve user control of the workflow running against large data sets. You'll experience the frustration the moment that a programmer typing into an editor and compiling a JAR file is no longer the primary controller of the structure of the workflow. We're starting to tug hard on a fundamental complexity in the way contemporary computation frameworks are designed, and it's time for a serious change to alleviate the pain.

Onyx cuts a hard line down the concerns of computation specification and mechanism. It aggressively uses data structures to enable specification construction at a distance - in a web form, on another box in another data center, by another program, etc.
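To make "specification as data" concrete, here is an illustrative sketch (the task names and map shapes are ours for illustration, not necessarily Onyx's exact API) of a computation described entirely with data structures - something that could be built in a web form or shipped across the wire:

```clojure
;; The structure of the computation: which tasks exist and how data flows.
(def workflow
  [[:read-segments   :capitalize-name]
   [:capitalize-name :write-segments]])

;; The catalog names each task and refers to functions by plain keywords,
;; so the specification itself stays pure data.
(def catalog
  [{:onyx/name :read-segments   :onyx/type :input}
   {:onyx/name :capitalize-name :onyx/type :function
    :onyx/fn   :my.app/capitalize-name}
   {:onyx/name :write-segments  :onyx/type :output}])
```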

You can have your first Onyx program up and running in just a few minutes using the Starter repository. It doesn’t require any external dependencies, other than Leiningen and Java. There’s also a walk through to guide you through each piece of the program.

Deploy to the cloud, you say? There’s virtually transparent code reuse between the development environment and the constructs necessary to run in a fully fault-tolerant setting. A production Onyx environment depends on ZooKeeper and HornetQ - two very mature technologies. You’re closer than you think to your first deployment of Onyx into your data center or cloud provider.

So, if you’re ready to give it a shot, I’d recommend browsing the Examples repository to get your feet wet, and referencing the full documentation as you go. If you run into any trouble, I’m more than happy to help on the mailing list.

I hope Onyx helps you to build genuinely simpler systems and avoid much of the pain I experienced trying to build highly dynamic systems with modern tooling. If you’re interested in taking Onyx to production, get in touch. I will personally help you.

 My name is Michael Drogalis. I’m an independent software engineering consultant. Want to work together? Send a tweet to @MichaelDrogalis or an email to mjd3089-at-rit-dot-edu.


Langohr 3.0.0-rc3 is released


Langohr is a small Clojure RabbitMQ client.

3.0 will be a major release that introduces moderate internal changes in the library and some breaking public API refinements.

Changes Between Langohr 3.0.0-rc2 and 3.0.0-rc3

Clojure 1.7 Support

Clojure 1.7-specific compilation issues and warnings were eliminated.

clj-http Upgrade

clj-http dependency has been updated to 1.0.0.

ClojureWerkz Support Upgrade

clojurewerkz/support dependency has been updated to 1.1.0.

Changes Between Langohr 2.11.x and 3.0.0-rc2

Options as Maps

Functions that take options now require a proper Clojure map instead of pseudo keyword arguments:

;; in Langohr 2.x

(lq/declare ch q :durable true)
(lhcons/subscribe ch q (fn [_ _ _])
                        :consumer-tag ctag :handle-cancel-ok (fn [_]))
(lb/publish ch "" q "a message" :mandatory true)

;; in Langohr 3.x

(lq/declare ch q {:durable true})
(lhcons/subscribe ch q (fn [_ _ _])
                        {:consumer-tag ctag :handle-cancel-ok (fn [_])})
(lb/publish ch "" q "a message" {:mandatory true})

JDK 8 Compatibility

Langohr test suite now passes on JDK 8 (previously there was 1 failure in recovery test).

GH issue: #54.

Connection Recovery Performed by Java Client

Langohr no longer implements automatic connection recovery of its own. The feature is still there, and there should be no behaviour changes, but the functionality has been pushed "upstream" into the RabbitMQ Java client, so Langohr now relies on it to do all the work.

There is one public API change: com.novemberain.langohr.Recoverable is gone; langohr.core/on-recovery now uses com.rabbitmq.client.Recoverable in its signature instead.

GH issue: #58.
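A hedged sketch of what a recovery hook might look like after this change (the handler body is our own):

```clojure
(require '[langohr.core :as rmq])

(let [conn (rmq/connect)] ;; connections satisfy com.rabbitmq.client.Recoverable
  (rmq/on-recovery conn
                   (fn [recovered]
                     ;; re-establish any application-level state here
                     (println "connection recovered:" recovered))))
```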

RabbitMQ Java Client Upgrade

RabbitMQ Java client dependency has been updated to 3.3.5.

Custom Exception Handlers

langohr.core/exception-handler is a function that customizes the default exception handler the RabbitMQ Java client uses:

(require '[langohr.core :as rmq])

(let [eh (rmq/exception-handler
           :handle-consumer-exception-fn
           (fn [ch ex consumer consumer-tag method-name]
             (comment "handle the exception")))]
  ;; pass the handler in when connecting
  (rmq/connect {:exception-handler eh}))

Valid keys are:

  • :handle-connection-exception-fn
  • :handle-return-listener-exception-fn
  • :handle-flow-listener-exception-fn
  • :handle-confirm-listener-exception-fn
  • :handle-blocked-listener-exception-fn
  • :handle-consumer-exception-fn
  • :handle-connection-recovery-exception-fn
  • :handle-channel-recovery-exception-fn
  • :handle-topology-recovery-exception-fn

GH issue: #47.

langohr.core/version is Removed

langohr.core/version was removed.

Change Log

Langohr change log is available on GitHub.

Langohr is a ClojureWerkz Project

Langohr is part of the group of libraries known as ClojureWerkz, together with

  • Elastisch, a minimalistic well documented Clojure client for ElasticSearch
  • Cassaforte, a Clojure Cassandra client built around CQL 3.0
  • Monger, a Clojure MongoDB client for a more civilized age
  • Neocons, a client for the Neo4J REST API
  • Quartzite, a powerful scheduling library

and several others. If you like Langohr, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

About The Author

Michael on behalf of the ClojureWerkz Team


Pinholes 2 - Ignorance, misrepresentation and prejudice edition

Several comments regarding the pinholes post have forced me, against the deepest elements of my nature, to engage in thought. Since that might never happen again, I thought it meet to record the event.

I'm going to say "I" a lot, because this is mostly my opinions.

Bidirectional programming

As pointed out by Christian Schuhegger in a comment on the original post, lenses were originally introduced to computer science in the context of bidirectional programming, rather than as a tool for dealing with deeply nested immutable structures. He points to a good list of papers on the subject. I was, it seems, excessively influenced by the use case from the Scalaz tutorial (if not by the exact details).

The original metaphor was, I suppose, that light rays traced out the same path through a lens, irrespective of direction. My take on the metaphor - that a lens is so called because it focuses on small or distant things - is, it seems to me, compelling, but it is not the original intent. Within the context of the original definition (well not the original, original definition, which would be anything in the shape of a lentil), it seems like the things I create with mk-ph-set and mk-ph-get are acceptably the ADT equivalent of lenses, but the mk-ph-mod artifacts are really state transformers. Irrespective of name, however, a tool for dealing with deeply and weirdly nested immutable data structures is demonstrably important to have, and I'm not penitent or creative enough to come up with a completely different metaphor.

Fresnel and protocols and aesthetics

Someone (whose name I'll publish if he asks me to) on twitter mentioned fresnel, which is also a lens library. I did know about this before blogging, but I didn't really want to argue about the differences, because it's such a nice and polished piece of work.

Now that the subject has come up, I'll cop to being aesthetically opposed to creating a protocol for anything that could potentially show up as (or in) the second argument to assoc (or assoc-in), especially since, if the first argument is a hash-map, Lens becomes an incomplete proxy for Object.

Clojure and Clojurescript protocols pay homage to the object-oriented nature of their virtual machines, but do so in moderation, providing abstractions over fundamentally different implementations of fundamental objects that are used in essentially the same way. Hence IPersistentMap being implemented by both (hash-map) and (sorted-map), or core.async having different impl namespaces for Clojure and Clojurescript. In Java code reviews, the detection of cond-like logic immediately results in a prescription for a new interface. Not so in Clojure, which recognizes the limitations of virtual function dispatch and so provides rich semantics for computed dispatch. In Java, the interface is the interface: instructions for using a new library generally involve telling you to implement one.1 Clojure, I think, avoided the word "interface" in conscious rejection of this paradigm.
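To illustrate computed dispatch (a generic example, not from Pinhole): a multimethod dispatches on an arbitrary function of its arguments, where a Java interface can only dispatch on the receiver's type:

```clojure
;; Dispatch on a value computed from the argument, not on its class.
(defmulti area :shape)

(defmethod area :circle [{:keys [r]}] (* Math/PI r r))
(defmethod area :rect   [{:keys [w h]}] (* w h))

;; (area {:shape :rect :w 3 :h 4}) ;=> 12
```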

Pinhole relies on implementations of clojure.lang.Associative to do the right thing when assoc and get are ultimately called, but beyond that differentiates only between keys that are vectors and keys that are not vectors; in the rare case where you want a vector as a hash key, you have to provide an [in,out] function pair to do the dirty work. That feels right to me, but opinions may of course differ.

  1. Though, perhaps, the wind that brought lambdas to Java 8 may carry off a few of those idioms eventually. 


Clojure Gazette 1.94


Strange Loop, Code Quality, Typed Clojure

Clojure Gazette

Issue 1.94 September 21, 2014


Hi clojurists,

Strange Loop is over. I didn't go, but followed along on Twitter. And now the videos are pouring into the Strange Loop YouTube account. Go watch them! I've shared and commented on a few here. They're coming in faster than I can watch them, so expect more next week.

I've mentioned before that I will be attending Clojure/conj in November. The talks have been announced. The conference will be a critical mass of Clojure-infused minds, coming together to fission off new ideas.

People have been asking me how they can prepare. The answer, of course, is to watch or read everything that's ever happened related to Clojure. But that might be a little too much to ask. I want to prepare and I want to help others prepare for what will no doubt prove to be a heady experience. I've gone through all of the talks listed on the speakers page. I'm compiling some Pre-conj Prep work, which includes watching previous talks or reading a blog post or two. I'll release one per day. Go sign up for it if that interests you.

Rock on!
Eric Normand <eric@lispcast.com> @ericnormand

PS Learn more about the Clojure Gazette and subscribe. Learn about advertising in the Gazette.

Sponsor: Factual

Factual is a location platform that enables development of personalized, contextually relevant experiences in a mobile world. Factual is hiring outstanding people in Los Angeles, San Francisco, and Shanghai to help solve the following problem: How to organize ALL of the world's data. 
You could join a team of excellent developers at a growing company with a casual, start-up vibe. Thanks, Factual, for supporting the Gazette. Now go apply

Clojure Code Quality Tools

Automated code quality linting should be relatively easy in Clojure since it's homoiconic. And now with tools.analyzer, it should be easy to parse entire namespaces into an easy to analyze structure. This post lists the author's setup for running several static analysis tools. DISCUSS

Thoughts On Five Years of Emerging Languages

Alex Payne looks back on five years of the Emerging Languages conference just before Strange Loop. Some incisive thoughts about where languages are headed. DISCUSS

Pre-conj Prep

If you are going to Clojure/conj this year and you want to make the most of your time there, sign up for this list. You'll learn the background on the topics that will be presented, so that you can learn the most possible from each talk. It's a free email list and I will send out homework daily.

A core.async Debugging Toolkit

A very cool library that wraps core.async to provide a host of debugging capabilities, including a custom scheduler that can reproduce a given interweaving based on a seed. DISCUSS

Typed Clojure in Practice

Ambrose presents Typed Clojure in the clearest way I've ever seen. He also talks a bit about the future. DISCUSS


Transducers

Rich Hickey explains Transducers. They're really not that complicated after all, but they are handy. DISCUSS

Inside the Wolfram Language

Stephen Wolfram does a live demo of the Wolfram Language, then talks about the design decisions behind it. I've never used Mathematica in depth, but I'm thinking about it now. DISCUSS

React: RESTful UI Rendering

There's no doubt that React is making waves, and Pete Hunt is the one doing the splashing. This talk discusses the relationship between REST and React. DISCUSS

The Mess We're In

Joe Armstrong has been harping on this for a while, but it's a very important idea and an interesting perspective. His physics training gives him tools to talk about the limits of computation and to model the complexities of computing that we all experience. DISCUSS
Copyright © 2014 LispCast, All rights reserved.



Talk: Concurrency Options on the JVM

or, "Everything You Never Wanted to Know About java.util.concurrent.ExecutorService But Needed To"

from Strange Loop, 18 Sep 2014

Here's the prezi:

The video will be posted eventually.


Om interop with 3rd party JS libs

A couple days ago, I cheekily tweeted a piece of code for embedding the fantastic Ace Javascript editor in an Om app. I say cheekily, because I wrote the code in the RefHeap editor and didn't actually test it!

Anyone who was brave enough to test it out using this code as a starting point would have run into several issues. This post attempts to make up for those, by sharing how it works and providing usable code!

tl;dr: Here's the full source:


The basic idea

So, as you likely already know, React (for which Om is a ClojureScript wrapper) does things a little differently; it manages two virtual DOMs - one representing the live DOM and one representing the new version of the view you want rendered. It diffs between those two to determine the minimum set of changes to make to the live DOM whenever your app has to re-render.

What this means for interop with non-React code is that we have to opt React out of managing the live DOM for this code, but only once React has created it for us. Once it's created, we represent to React that nothing about the view changes from state change to state change.

This causes the diff to yield no changes for this particular part of the virtual DOM, which of course means no mutations will occur to the live DOM nodes.
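One way to express that opt-out with om-tools' defcomponent is to render a constant node and answer "no" from should-update (a sketch of the general pattern; the Ace example below achieves the same effect by never changing its rendered output):

```clojure
(defcomponent non-react-host [data owner]
  (render [_]
    (html [:div#host]))                ;; React creates this node once...
  (should-update [_ next-props next-state]
    false))                            ;; ...then we promise it never changes
```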

Getting Ace on the page

We're going to go through a working example with Ace. We'll look at:

  1. How to instantiate an Ace instance
  2. How to populate the text editor from the Om global app state
  3. How to track the changes occurring in Ace as they happen
  4. And how to persist those changes back to the global Om state

We'll use several life cycle protocol functions to interop with Ace. Here's a full reference of all the protocols in Om.

This code comes with a couple bonuses:

  • We'll see how to use core.async to have two Om components coordinate.
  • We'll use the simple but seriously handy defcomponent from om-tools, which DRYs up all the reify IProtocol code you see in vanilla Om applications.
  • Also, we'll use Ŝablono to render HTML rather than the om.dom namespace provided by Om. I personally find it a lot easier to read and write.

Let's jump in!

1. Instantiate an Ace instance

(def *ace* (atom nil))                              ;; 1

(defcomponent editor-area [data owner]
  (render [_]
    (html [:div#ace {:style {:height "400px"}}]))   ;; 2
  (did-mount [_]                                    ;; 3
    (let [ace-instance (.edit js/ace                ;; 4
                              (.getDOMNode owner))] ;; 5
      (reset! *ace* ace-instance))))                ;; 6
  1. First, we set up an atom to store the reference to Ace so that we can work with it later on. Fair warning: this does mean you can only use one instance of this component at a time - yay global mutable state!
  2. We render a single div using Ŝablono's html macro.
  3. We use the did-mount (from the IDidMount protocol), as this is called once, right after the component has been made live on the DOM.
  4. We invoke Ace's edit decorator function, passing it the DOM node that we get by...
  5. Using React's getDOMNode function, passing in owner, which is the backing React component provided by Om.
  6. We store the Ace reference in the atom.

2. Populate the text editor from the Om global app state

We'll do this in two places; once on starting Ace up, and with a separate life cycle protocol function. First, a helper function:

(defn set-value! [value]
  (let [ace-instance (deref *ace*)                            ;; 1
        cursor       (.getCursorPositionScreen ace-instance)] ;; 2
    (.setValue ace-instance value cursor)))                   ;; 3
  1. We get the reference from the atom.
  2. We grab the current text cursor position of the editor...
  3. And pass it back in when setting the new value, so that the cursor doesn't jump around, if at all possible - sometimes it will if the text changes dramatically.

Ok, so now we can set the editor value from the global state, using a key of :value for this particular state map:

(defcomponent editor-area [data owner]
  (did-mount [_]
    (let [ace-instance (.edit js/ace
                              (.getDOMNode owner))]
      (reset! *ace* ace-instance)
      (set-value! (:value data))))                  ;; 1
  (will-update [_ next-data next-state]
    (set-value! (:value next-data))))               ;; 2
  1. Set it on start up from the initial Om cursor.
  2. Use will-update (from, you guessed it, the IWillUpdate protocol) to set the data from the incoming state transition cursor.

Great! We have an editor on the page!

3. Track the changes in Ace

Now we'll use Ace's on change callback to catch changes as they happen.

(defn change-handler [])                            ;; 1

(defcomponent editor-area [data owner]
  (did-mount [_]
    (let [ace-instance (.edit js/ace
                              (.getDOMNode owner))]
      (reset! *ace* ace-instance)
      (.. ace-instance
          (on "change" change-handler))             ;; 2
      (set-value! (:value data)))))
  1. Create a change handler function.
  2. Here we're using Clojure's nifty .. interop convenience - here's the reference on Grimoire for that.
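If the .. macro is new to you, it simply chains interop calls; these two forms are equivalent (a generic string example, nothing Ace-specific):

```clojure
;; (.. x (f a) (g b)) threads x through successive method calls:
(.. "hello" (toUpperCase) (substring 0 4))
;; is shorthand for
(.substring (.toUpperCase "hello") 0 4)
;; both evaluate to "HELL"
```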

Ok, but what do we put into that change handler?


At this point, we could simply write the changes right back into the global app state, but there's a problem with this approach.

Doing so will cause that will-update function to run, which will unnecessarily update Ace to the value it already has. Remember, React isn't managing this DOM node - there's no fancy diffing to save extraneous work!

Instead, we'll use Om 0.7.1's new experimental set-state-nr! function to keep track of the changes without triggering a re-render, and provide a Save button for the user to click when they want their changes committed.

That way, we have the editor value available immediately, but only commit it when the user wants it. Why might we want it immediately? Well, we might decide to provide a real-time preview or validation capability!

We also need another component to compose the Save button and the editor we've just built, as we can't add any more UI to this component thanks to the way we're opting out of React rendering.

Because of this, we'll need to coordinate between the container and editor components when the user clicks Save, so that the editor can transfer the editor value from local to global state.

That's where core.async comes in.

3. Track the changes in Ace - round two

Right. Let's get the value from Ace into local state:

(defn change-handler [owner]
  (om/set-state-nr! owner :edited-value             ;; 1
                    (.getValue (deref *ace*))))     ;; 2

(defcomponent editor-area [data owner]
  (did-mount [_]
    (let [ace-instance (.edit js/ace
                              (.getDOMNode owner))]
      (reset! *ace* ace-instance)
      (.. ace-instance
          (on "change" #(change-handler owner)))    ;; 3
      (set-value! (:value data)))))
  1. Our handler writes an :edited-value to the component's local state via owner...
  2. Using the value from the Ace instance.
  3. We make sure to update the event listener to pass in owner.

Now we have the whole interop round-trip working - the text value going into Ace and back out again.

Let's put that container together:

(defcomponent editor [data owner]
  (init-state [_] {:editor-chan (chan)})                     ;; 1
  (render-state [_ {:keys [editor-chan]}]                    ;; 2
    (html
     [:div
      [:button {:onClick #(put! editor-chan :save!)} "Save"] ;; 3
      (->editor-area data                                    ;; 4
                     {:init-state                            ;; 5
                      {:editor-chan editor-chan}})])))       ;; 6

There's quite a bit going on here:

  1. We use the init-state function (yep, from the IInitState protocol) to create a new async channel. It's important to do this inside the right life cycle function, as we only want it to be created once.
  2. We use the render-state function (you're right, from the IRenderState protocol), which is simply the IRender protocol with a convenient way to get at the local state baked in; as a function argument. Thanks to that, we destructure the channel out.
  3. We have our mighty Save button, which simply writes the keyword :save! to the channel every time it is clicked.
  4. We instantiate our editor-area component, using om-tools' shorthand ->component syntax, which boils down to "om/build component".
  5. We pass an initial state for our editor using om/build's third argument. This will become available as local state inside editor-area.
  6. And that state is simply a map with the channel we created at 1.

Ok. Now we have the appropriate signalling in place to know when to transfer the value from local to global state.

Now, inside editor-area, we just need to respond to that signal:

(defcomponent editor-area [data owner]
  (will-mount [_]                                          ;; 1
    (let [editor-chan (om/get-state owner :editor-chan)]   ;; 2
      (go                                                  ;; 3
        (while true                                        ;; 4
          (when (= :save! (<! editor-chan))                ;; 5
            (when-let [edited-value
                       (om/get-state owner :edited-value)] ;; 6
              (om/update! data :value edited-value))))))))  ;; 7
  1. Set everything up in the will-mount function (from the IWillMount protocol, of course). We could do this in did-mount, too, but now you're aware that will-mount exists :-)
  2. Grab the channel from local state.
  3. Start a core.async go block, which allows us to write synchronous-looking but-actually-asynchronous code.
  4. Loop endlessly, so that we can catch each successive channel value.
  5. Using <!, block until there's a value on the channel, and if that value is :save!...
  6. Try to get the edited value from local state...
  7. And if it's there, use om/update! to place it in the global state map.

Phew! Now we have everything wired up.


Here's the full source, again:


We have covered quite a lot of ground in just ±50 lines of code:

  • We got a mutable non-React Javascript library to live in harmony with an Om/React app, which showed us Javascript interop syntax and several Om life cycle protocols in action.
  • We used core.async to coordinate between two components, thus avoiding callbacks between them.
  • We saw how om-tools and Ŝablono look when used with Om, which is mostly a feel-good thing, I believe it helps a lot in the long run.

Special thanks

To Luke VanderHart for the conversation that led me to this insight.


Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.