Toothpick: A theory of bytecode machines

In working on my Batbridge simulators and other adventures in computing hardware, one of the problems I have repeatedly encountered is the need to generate bit encodings of symbolic instructions. When working with "well known" architectures with existing tooling support, one can usually find an "off the shelf" assembler in the GCC collection or elsewhere. However, when you're working against a custom instruction set, you're on your own for tooling.

So let's invent yet another assembler!

An assembler is simply a program that describes how to construct a binary instruction encoding from a set of symbolic, human-writable mnemonics representing data-manipulation directives to computing hardware. add, sub[tract], mul[tiply] and jump are good examples of such mnemonics. Each operation which a given "machine" (hardware/software interpreter) will accept is typically specified as a sequence of bits and bit-encoded fields (sometimes abbreviated in octal or hexadecimal notation). Batbridge, for instance, specifies that the add instruction is the bit string 110000tttttaaaaabbbbbiiiiiiiiiii. The prefix 110000 (0x30) indicates the addition operation; the bits named t encode a register number t∈[0..30] to which the result of the addition will be stored; the a bits, subject to the same constraints as t, name the register from which the "left" operand will be drawn; the b bits name the "right" operand register in the same way; and i is an arbitrary signed 10-bit integer (the sign being the 11th bit).
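To make the field layout concrete, here is a worked encoding of add with hypothetical operands t=1, a=2, b=3 and i=5:

;; add r1 <- r2 + r3 + 5, as a Batbridge bit string
;; 110000 00001 00010 00011 00000000101
;;  icode   t=1   a=2   b=3     i=5
(Long/parseLong "11000000001000100001100000000101" 2) ;=> 3223459845 (0xC0221805)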

Sitting down with a specification, you or I could construct a set of functions from appropriately constrained values to the bit-encoded instructions laid out in an instruction set specification. In building such an assembler, however, you will discover that you are simply transcribing into the programming language of your choice a translation table laid out in full by the instruction set documentation. Once we realize that the assembler we are constructing by hand is a printer for the bytecode machine's reader, we can think of bytecodes as a language in the same sense that any other set of strings and productions constitutes a language.

A Theory of Bytecode Machines

It happens that we have good formal theories about languages and their structures. If we stop and squint, we can see that the individual opcodes are analogous to terminals or verbs. If we allow ourselves to model opcodes as words in a language, then a program (a production of words) is a sentence for which we can establish a grammar. For instance, an ELF-formatted program could be considered a sequence of blocks of instructions (verbs) defined in terms of words (opcodes) and formatted with an appropriate header/footer (the ELF file structure). As these are all linguistic constructs, a program capable of generating these values must be a direct production of the specification which lays out these productions, in the same way that a lex/yacc lexer/parser pair is a mechanical production of the lexing and parsing rules which define a computer language.

This leads to the following idea: just as we now rarely write parsers and lexers by hand, preferring to derive them automatically from specifications of the languages we wish them to process, and since (as suggested above) a bytecode is simply a language on bit strings rather than ASCII or Unicode characters, it follows that we can use the same class of tools to generate assemblers and disassemblers that we use to generate parsers and printers. We just need to construct formal languages describing the instructions our machines accept.

An Assembler Generator

Just as we have theories about which sets of languages are "interesting" and which parser techniques may be applied to efficiently recognize and process them, so it is with bytecodes. A bytecode is a language on bit strings, and there exist languages which require a Turing machine to recognize, in addition to uncomputable languages. Consequently, rather than expecting to automatically assemble arbitrary programs to arbitrary bytecodes, we must accept that one could invent an "interesting" (but, I will argue in a minute, silly) bytecode for which no assembler or recognizer can be automatically generated.

So what features define "interesting" bytecodes? In hardware terms, as I discuss elsewhere, memories and lookup tables are just about the most expensive things you can build in terms of chip area and time. As a result, the overwhelming majority of computer architectures are not what I shall call "micro-programmable": users cannot give meaning to new sequences of bits, and instead interact with the computer by chaining together a few bit-verbs or opcodes which are built into the chip. Micro-programmable computers do exist, FPGAs for instance, but they tend to be special-purpose prototyping machines and are not in wide usage compared to fixed instruction set machines.

Another characteristic of "interesting" bytecodes is in the name: bytes, or words. Many years have gone by since bit-addressed machines were last built with commercial intent. Instead, commercial machines and off-the-shelf memories address "blocks" of bits or "lines": strings a power of two in length, which may be divided into substrings of smaller power-of-two lengths as the user sees fit, at no additional hardware complexity cost. One selects bits either by taking predefined subdivisions of lines known as "words", or by using bit masking and shifting to select strings smaller than a single word. Strings larger than a single word must be dealt with by using multiple operations on words.
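As a concrete illustration, selecting a predefined subdivision of a word is just a shift and a mask; a minimal Clojure sketch:

;; select the nth byte of a 32-bit word by shifting it down
;; and masking off everything but the low 8 bits
(defn select-byte [word n]
  (bit-and 0xFF (bit-shift-right word (* 8 n))))

(select-byte 0xDEADBEEF 1) ;=> 190 (0xBE)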

This theory of a fixed word size, fixed instruction set machine characterizes the overwhelming majority of instruction sets and machines designed and built by Intel, ARM, Symbolics, Thinking Machines, and others over the past decades due mainly to the ease of implementation and efficiency of such designs.

So what does this theory of a "useful" bytecode machine buy us? It implies that we can describe a "useful" machine with a word size (a bit-string length), an endianness, and an enumeration of the various opcodes together with how to generate each of them as a bit string. That is a complete description of an "interesting" instruction set as characterized above.

Assemblers are simply programs that translate instruction-set mnemonics, written by a programmer or more often than not a compiler, into bytecodes which a machine can execute. The details of computing label addresses and the like are for the most part independent of the specific architecture, in so much as they are determined by the properties of the target bytecode described above. This suggests that there must exist a function

(assemble p ← Platform c ← Codes)

Which makes sense if you consider a single target assembler to be the partial application of such an assembler to a platform.
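In Clojure terms that partial application is straightforward; a minimal sketch with hypothetical names:

;; fixing the platform argument yields a single-target assembler
(def assemble-batbridge (partial assemble batbridge))

;; assemble a code sequence for Batbridge
(assemble-batbridge codes)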

Talk is cheap, show me the code

First things first, we need a way to represent an "interesting" instruction set.

(deftype Architecture
    "A structure representing an \"interesting\" bytecode architecture
    from which a parser or a generator can be computed."
  [word-width         ;; ← Num, how many bits in a word
   pointer-resolution ;; ← Num, how many bits forwards does *(x+1) go
   opcodes            ;; ← Map [Mnemonic → Opcode], opcode formatting rules
   ])

So what do individual opcodes look like? We're going to need some bit-vector formatting engine to do real bytecode generation with.

I will define an opcode to be a bit string of fixed length, composed of a sequence of concatenated parameters, or fields, and constants, each of which may be either signed or unsigned. (The fixed length is important: there are instruction sets with variable-length "opcodes", but as the different-length forms have different operational semantics, I shall define them to be different operations and declare the instruction set architects silly people.) Fields are either named or anonymous, and the set of named field names in a single opcode must have no repetitions. We want to be able to write something like the following:

;; IFLT 0x20  100000 _____ aaaaa bbbbb iiiiiiiiiii
;; execute the next instruction IFF (< a b)
(opcode :iflt
  (const-field            :icode 6  0x20)
  (enforced-const-field   :_     5  0)
  (parameter-field        :a     5  register?)
  (parameter-field        :b     5  register?)
  (signed-parameter-field :i     11 literal?))

So in this example instruction from Batbridge, we define an instruction as an ordered bit string in which the bits [0..5] are named :icode and fixed at the constant value 0x20. We then have an anonymous field 5 bits long occupying the bits [6..10], which we force to the value 0 since the instruction set defines them to be ignored. Next come a pair of unsigned 5-bit register fields, each of which must satisfy a guard predicate ensuring that the value in question fits within the value domain the instruction set allows for registers. Finally we have a signed field 11 bits long, again guarded with a value-domain predicate. This notation encodes the bit format string, as well as the value domain for each element of the format string, in a form that can be easily inspected by either an assembler or a disassembler program.

So what does that expression build out to as a data structure?

{:width 32,
 :params (:_ :a :b :i),
 :fields ({:offset 26,
           :width 6,
           :name :icode,
           :type :const,
           :value 32}
          {:offset 21,
           :width 5,
           :name :_,
           :type :enforced-const,
           :value 0}
          {:offset 16,
           :width 5,
           :name :a,
           :type :unsigned-field,
           :pred #<batbridge$register_QMARK_ toothpick.isa.batbridge$register_QMARK_@4bd4095>}
          {:offset 11,
           :width 5,
           :name :b,
           :type :unsigned-field,
           :pred #<batbridge$register_QMARK_ toothpick.isa.batbridge$register_QMARK_@4bd4095>}
          {:offset 0,
           :width 11,
           :name :i,
           :type :signed-field,
           :pred #<batbridge$literal_QMARK_ toothpick.isa.batbridge$literal_QMARK_@38334886>})}

Given this data structure and a sequence (opcode . params), we can trivially compute a map (zipmap (:params (get isa opcode)) params), and then fold over the fields of the instruction, computing each field's value and shifting and ORing it into place in the bit vector, to construct the bit encoding of the instruction.
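A minimal sketch of that fold, assuming instruction maps shaped like the one above (guard predicates elided; encode-field and encode-instruction are hypothetical names):

(defn encode-field [val-map acc {:keys [offset width name type value]}]
  (let [v    (case type
               (:const :enforced-const) value
               (get val-map name))        ;; parameter fields look up their value
        mask (dec (bit-shift-left 1 width))]
    ;; truncate to `width` bits (two's complement handles signed values),
    ;; shift into position, and OR into the accumulated word
    (bit-or acc (bit-shift-left (bit-and v mask) offset))))

(defn encode-instruction [icode params]
  (let [val-map (zipmap (:params icode) params)]
    (reduce (partial encode-field val-map) 0 (:fields icode))))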

As implemented, this is a little fragile because it assumes that the computer running the generated assembler has an integer size larger than or equal to the target's: attempting to construct an int (serving here as a bit vector) that's too large will yield... interesting issues. Future fixes are to use a real sequence of bits, or to rewrite this logic in terms of multiplication by two and addition.

Deriving a parser for this structure is also (fairly) straightforward: one can simply generate an alternation matcher over bit-string matchers for the individual instructions. "Interesting" ISAs tend to put the opcode specification in the "low" bits of each instruction word, so we could generate a jump table by masking on the least significant bits, but that's not implemented yet.

Also, this representation falls into jneen's type tagging pitfall, but this is old code and an implementation detail thereof, so I'll forgive myself and fix it later.

We can then define an "architecture" by constructing a composite map as above: a word size, a pointer-resolution specification, and a map from keywords naming instructions to instruction maps like the one shown.

So we can describe an instruction stream as a sequence of instructions, and we can assemble individual instructions by interpreting their representation above. If we wanted to encode a sample program

[[:label :top]
 [:op [:add [:param [:register 0]]
            [:param [:const 0]]
            [:param [:const 1]]]]
 [:label 1]
 [:op [:add [:param [:register 1]]
            [:param [:const 0]]
            [:param [:const 2]]]]
 [:label 2]
 [:op [:add [:param [:register 0]]
            [:param [:register 0]]
            [:param [:register 1]]]]
 [:label 3]]

We can simply fold a label-location-computing operation over this sequence, since all of our opcodes are defined to be fixed length even when they are abstractly variable length; then, in a second pass, we assemble each instruction against the label-location context and the instruction set, giving us a sequence of bytecodes. Done! ∎
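A sketch of those two passes under the representation above, assuming one word per instruction; encode-instruction is the earlier sketch, and resolve-params (which substitutes label addresses into operand lists) is left hypothetical:

(defn compute-labels
  "First pass: walk the stream, recording each label's word address."
  [prog]
  (first
   (reduce (fn [[labels addr] [tag arg]]
             (case tag
               :label [(assoc labels arg addr) addr]
               :op    [labels (inc addr)]))
           [{} 0] prog)))

(defn assemble-stream
  "Second pass: encode every :op against the label context and the ISA."
  [isa prog]
  (let [labels (compute-labels prog)]
    (for [[tag op] prog
          :when (= tag :op)]
      (encode-instruction (get (:opcodes isa) (first op))
                          (resolve-params labels (rest op))))))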

This is essentially my Toothpick project, which seeks to provide exactly this meta-assembler facility, and which has already proven its worth to me in automating the generation of assemblers for the various toy architectures I've worked with at school. It's not done yet, and as noted above it needs some spit and polish, but I think it represents a remarkably simple and powerful idea.

The real power of this approach is that, extended, it could enable the automatic generation of LLVM backends! If an instruction specification were to include its operational semantics written in terms of LLVM instructions, one could imagine a trivially derived LLVM backend which sequentially scans emitted LLVM assembly, matching out the platform translation of the instruction "at point", and then assembling the resulting instruction stream as above. Since LLVM already provides a massively portable C compiler infrastructure, this means that given only a formal platform specification one could construct a (mediocre) C compiler.

To take this idea even further, one could also imagine generating a platform simulator, for either symbolic or bit-encoded instructions, from the same semantics-augmented ISA specification.

Generating muxing logic in Verilog or VHDL should also be trivial from this structure.

Generate all the things! Correctness by specification & construction! Toy architectures for everyone!

^d


4Clojure for Android is now in Google Play!

Just to remind you that I (and Clojure-Android) are alive and well, and for a little personal amusement, I hacked together a full-blown 4Clojure client for Android. You can download it from here: 4Clojure for Android. It supports both online and offline modes, so you can solve your 4Clojure problems even when there is no network available — the app will upload them later when connected. Mind that this is not an official client, although the 4clojure.com guys gave me permission to use their data and resources.

Also, here is a link to the repository: https://github.com/alexander-yakushev/foreclojure-android. Everything is open source, of course. It took me 5 days from scratch to finish, including solving infrastructure problems along the way, and I'm fairly pleased with both the development experience and the on-device performance. Mobile hardware has indeed made a great leap (comparing the OnePlus One to my old Galaxy S… meh).

The load time is still significant. That is partly because I couldn't use Skummet — obviously, because I need on-device Clojure compilation, and Skummet wrecks that. Still, it could be worse.

The whole thing is ~1100 lines of Clojure code, together with the UI definitions (done almost entirely in Clojure, not in XML). There are also around 200 SLOC of Java code, probably half of which does nothing more than animate Gus' eye color change on the splash screen. Talk about brevity and compactness.

OK, I'm getting too tired to type anything else. Please install, evaluate (pun intended), and share your feedback.


Connected Graph with Clojure

I’ve fallen in love with Clojure, and as part of my learning I’ve set myself the goal of solving all the problems on the excellent 4clojure site. I was a painter and decorator by profession up until the age of 30, so I have no background in computer science or in-depth mathematics, but part of my fascination with 4clojure is that some of the harder problems introduce concepts that are completely foreign to me, like graph theory. I get a kick out of digging into the theory behind the problem before trying to code my solution.

Problem 91 states:

Given a graph, determine whether the graph is connected. A connected graph is such that a path exists between any two given nodes.

-Your function must return true if the graph is connected and false otherwise.

-You will be given a set of tuples representing the edges of a graph. Each member of a tuple being a vertex/node in the graph.

-Each edge is undirected (can be traversed either direction).

You are then presented with a list of inputs and outputs that require a common function to balance both sides of the expression, e.g.

(= true (__ #{[:a :b] [:b :c] [:c :d] [:x :y]
              [:d :a] [:b :e] [:x :a]}))
The job of the user is to provide a function that will replace the __ characters to complete the expression.

The elements of the above set represent the edges of a graph. A graph edge can be thought of as a line or path that connects two graph nodes, or vertices as they are known in graph speak. Each element of the set is a vector containing two vertices that have a path between them. We can visualise the above graph in the image below:

The problem states that the function must return true if a path exists between any two nodes. You can see from the above diagram that each node is accessible from every other node.

Another of the expressions in the problem provides a set of edges that are not connected:

(= false (__ #{[:a :b] [:b :c] [:c :d]
               [:x :y] [:d :a] [:b :e]}))

You can visualise the unconnected graph below:

Another way to view the above graph is to say that it is made up of two components. We are dealing with undirected graphs, which means that the edges can be travelled in either direction or, more specifically, that the edges do not point in any direction. A component of an undirected graph is a subgraph in which any two vertices are connected. In the above diagram, there are two components: the one on the right with the x and y vertices, and the other component made up of the remaining vertices. The connected example can be thought of as one component.

Show Me the Code

If we are to use this concept of components as is outlined above, then we could say that if we can get the number of components in a graph and if that number is equal to one then the graph is connected. If we have more than one component then quite obviously the graph is disconnected.

After consulting the book of knowledge, a.k.a. Wikipedia, I found this entry that gave me an outline of an algorithm I could use to tell whether I had one connected component or not:

A search that begins at some particular vertex v will find the entire connected component containing v (and no more) before returning. To find all the connected components of a graph, loop through its vertices, starting a new breadth first search whenever the loop reaches a vertex that has not already been included in a previously found connected component.

Armed with the above algorithm, I was able to construct the code to solve the problem:

connected.clj
(= true (
    (fn [g]
      (letfn [(connected? [g]
                (loop [q (conj [] (ffirst g)) visited #{}]
                  (if (empty? q)
                    (let [rem (filter #(not (contains? visited %)) (flatten (for [e g] e)))]
                      (empty? rem))
                    (let [v1 (peek q)
                          edges (filter (partial some #{v1}) g)
                          vertices (filter (partial not= v1) (flatten edges))
                          unvisited (filter #(not (contains? visited %)) vertices)]
                      (recur (into (rest q) unvisited) (into (conj visited v1) unvisited))))))]
        (connected? g))
    #{[:a :b] [:b :c] [:c :d]
      [:x :y] [:d :a] [:b :e] [:x :a]}))


  • On line 3 is a function connected? that does the actual work.
  • Line 4 starts a loop/recur sequence with some initial bindings. I select the first vertex of the graph (the first element of the first vector in the set) and add it to a vector that we will use as a queue of vertices to process. I also initialise a set that will keep track of which vertices of the graph have been visited. In the above algorithm, this first vertex is the particular vertex where the search begins.
  • Line 5 checks whether we have any vertices left to process.
  • If we do, line 8 takes the vertex at the head of the queue and binds it to the v1 symbol.
  • Line 9 selects any edges that contain the v1 vertex we are searching on.
  • Line 10 selects the vertices from those edges that are not equal to v1.
  • Line 11 filters out any vertices that we have already processed, before calling recur on line 12.
  • The recur on line 12 will add the unvisited vertices to the processing queue and also record what we have visited.
  • The code will continue until there are no vertices left to process, at which point the (if (empty? q)) test on line 5 will be true.
  • If the graph is connected, i.e. it contains only one component, then the visited set that we have been building up will contain all the vertices of the graph: the flattened elements of the vectors that make up the set.

I’m still new to Clojure and there are definitely better ways of doing this. One problem is that this function is in no way lazy.

I found this a hugely enjoyable problem to solve.


Unit and browser-testing Om & ClojureScript applications

What are we trying to do?

The ClojureScript and Om world is moving very quickly; new versions are being cut weekly, and there are people using them all over the place.

However, when starting out on our Om journey here, we've found that good advice on getting unit and integration testing started is hard to find.

We decided that what we wanted was to have our standard Chestnut-based CLJS application, with unit tests running as fast as possible from the command line (ie headless), and with autotest-mode so that a save in an affected file would automatically trigger a re-run of the tests.

We also wanted a suite of in-browser integration tests using something like WebDriver, to verify the app's "user-facing" behaviour.

But - could we make it work?

Research time.

We started by pulling together everything we could in a Google doc, and then whittling and filtering down from there.

The documentation for Om has a scant couple of paragraphs on testing; the main recommendation is to look at the following repo for inspiration:

https://github.com/sgrove/omchaya/tree/master/test-cljs/omchaya

We came across a couple of alternative unit test libraries: Chas Emerick's clojurescript.test, and the CLJS core test library - recently rewritten, but lacking in documentation by comparison with cemerick's setup.

In terms of integration testing, we wanted to use familiar tools - so something wrapping WebDriver was a must. The question was how and when to actually run the tests?

After much deliberation, we settled on using clojurescript.test for unit testing, driving a headless PhantomJS, and on integration tests running clj-webdriver (through Chrome) using a variant of our Jetty integration testing strategy.

Setting up unit testing

There's good documentation for clojurescript.test, but there is a 'gotcha'.

PhantomJS doesn't have full support for everything that's in later WebKit versions: https://github.com/swannodette/om/wiki/Testing (note to self: investigate SlimerJS...).

It's therefore necessary to provide a polyfill that gives us that functionality through JavaScript. Fortunately the Facebook React team have documented the polyfill that they use for running their own tests, so I simply lifted a copy of that and checked it into our source in a phantom/ directory.

Our basic unit test config ends up looking like this:

    :cljsbuild {
        :test-commands {"test" ["phantomjs" :runner "phantom/bind-polyfill.js" "target/cljs/testable.js"]}
        :builds {
            :app { :source-paths ["src/cljs" "env/dev/cljs"] }

            :test {
                  :source-paths ["src/cljs" "test/cljs"]
                  :compiler {:output-to "target/cljs/testable.js"
                             :optimizations :whitespace
                             :preamble ["react/react.min.js"]
                             :pretty-print true}}}}

Effectively this stanza says "in parallel, compile two versions of the CLJS":

  • one called 'app', which uses the default parameters from elsewhere in the project.clj, and
  • one called 'test', which adds our test files and outputs to a separate compiled target ("testable.js"); the unit tests are then run against this testable version.

You can see the :test-commands element at line 2 - note the polyfill being included on PhantomJS's load path.

Automatically running the unit tests

We found a great blog post which really helped.

Really, all we have to do is add a :notify-command key to the :test build; each time lein cljsbuild builds, it'll run whatever is in :notify-command.

So now we have:

    :cljsbuild {
        :test-commands {"test" ["phantomjs" :runner "phantom/bind-polyfill.js" "target/cljs/testable.js"]}
        :builds {
            :app { :source-paths ["src/cljs" "env/dev/cljs"] }

            :test {
                  :source-paths ["src/cljs" "test/cljs"]
                  :notify-command ["phantomjs" :cljs.test/runner "phantom/bind-polyfill.js" "target/cljs/testable.js"]
                  :compiler {:output-to "target/cljs/testable.js"
                             :optimizations :whitespace
                             :preamble ["react/react.min.js"]
                             :pretty-print true}}}}

Testing Om components

So now we have the basics: we're testing arbitrary stuff on pages using clojurescript.test.

Now how about actually testing some Om components?

Fortunately most of the thinking has already been done by other people.

It simply comes down to whether an Om component can be tested in isolation by instantiating it into an isolated root-level DOM.

A few utility functions can help us here:

common.cljs

(ns test.common
  (:require-macros [cemerick.cljs.test :refer (is deftest with-test run-tests testing)])
  (:require [cemerick.cljs.test :as t]
            [dommy.core :refer-macros [sel sel1]]
            [om.core :as om :include-macros true]))

(defn insert-container! [container]
  (dommy.core/append! (sel1 js/document :body) container))


(defn new-container! []
  (let [id (str "container-" (gensym))
        n (.createElement js/document "DIV")]
    (set! (.-id n) id)
    (insert-container! n)
    (sel1 (str "#" id))))

And then an actual test looks like this (code example from the first version of our Bright North Bootstrap3 om-components library, which I'll be open-sourcing soon):

core.cljs

(ns test.core
  (:require-macros [cemerick.cljs.test :refer (is deftest with-test run-tests testing test-var)]
                   [dommy.core :refer [sel sel1]])
  (:require [cemerick.cljs.test]
            [om-components.test.common :refer [new-container!]]
            [om-components.core :refer [alert-box]]
            [dommy.core :refer [attr text]]
            [om.core :as om :include-macros true]
            [om.dom :as dom :include-macros true]))


(deftest alertbox-has-correct-message-text
  (let [c (new-container!)]
    (om/root alert-box {:message-text "message1"} {:target c})
    (is (= "message1" (text (sel1 c :div.alert))))))

More to follow on this one in a separate post, I think.

Integration testing with WebDriver

We've got a lot of experience of WebDriver across the team here, so clj-webdriver is a natural fit.

We're also fortunate to have spent some time on integration testing patterns for services (see Conan's blog post) so we had some prior art to call on.

The trick is to ensure that you run the CLJS app from within the integration test, by booting Jetty in a fixture (and shutting it down afterwards).

So far so easy, but you soon run into problems in a Chestnut (Figwheel, Weasel) project, as the default Jetty code for serving in dev mode pulls in dependencies on most of the universe.

Time for a simpler solution.

I added a new folder, e2e, in the project root, and inserted a stripped-down Jetty server & handler class, as follows:

(ns e2e-server
  (:require [clojure.java.io :as io]
            [compojure.core :refer [GET defroutes]]
            [compojure.route :refer [resources]]
            [compojure.handler :refer [api]]
            [ring.adapter.jetty :refer [run-jetty]]))


(defroutes routes
  (resources "/")
  (resources "/react" {:root "react"})
  (GET "/*" req (io/resource "index.html")))

(def http-handler
  (api routes))

(defn run [& [port]]
  (defonce ^:private server
    (let [port (or port 10555)]
      (print "Starting E2E web server on port" port ".\n")
      (run-jetty http-handler {:port port
                               :join? false})))
  server)

(defn -main [& [port]]
  (run port))

We want to make sure that this runs around our tests, so the test starts to look a bit like this:

(ns e2e-test
  (:use clojure.test)
  (:require [clj-webdriver.taxi :as t]
            [e2e-server]
            [environ.core :refer [env]]))


(defn selenium-fixture
  [& browsers]
  (fn [test]
    (doseq [browser browsers]
      (println (str "\n[ Testing " browser " ]"))
      (t/set-driver! {:browser browser})
      (test)
      (t/quit))))

;; Keep reference to server so we can stop it later
(def svr (atom nil))

;; Run Jetty before f, shut it down after; use as a :once fixture
;; to avoid unnecessary startups & shutdowns
(defn svr-fixture
  []
  (fn [f]
    (swap! svr (fn [_] (e2e-server/run)))
    (f)
    (.stop @svr)))


;; Switch on environment here so we can execute the tests manually
;; from a REPL, or run without all the dev dependencies from the "lein test" command
(def is-dev? (env :is-dev))
(if is-dev?
  (use-fixtures :once (selenium-fixture :chrome))
  (use-fixtures :once (svr-fixture) (selenium-fixture :chrome)))


;; Actually do some WebDriver stuff
(defn go-to-homepage
  []
  (t/to "http://localhost:10555"))

And finally we need to work all this into our project.clj. We do this by adding our new e2e source tree to the :source-paths only when we're in E2E testing mode.

And we make sure we're running the tests against a production-level version of the compiled CLJS.

:e2e {
    :source-paths ["e2e/clj" "test/clj"]
    :aot :all
    :cljsbuild {:builds {:app
                            {:source-paths ["env/prod/cljs" "src/cljs"]
                             :compiler {:optimizations :advanced
                                        :pretty-print false}}}}}

The last trick is to string it all together using aliases, so that we don't have to remember the correct combinations of profiles.

  :aliases {"e2e" ["do" 
                   "clean" 
                   ["with-profiles" "-dev,+e2e" ["cljsbuild" "once"]] 
                   ["with-profiles" "-dev,+e2e" "test"]]

            "autotest" ["do" 
                        "clean" 
                        ["cljsbuild" "auto" "test"]]

            "test" ["do" 
                    "clean" 
                    ["cljsbuild" "test"] 
                    "clean" 
                    ["with-profiles" "-dev,+e2e" ["cljsbuild" "once"]]
                    ["with-profiles" "-dev,+e2e" "test"]]}

Things to try in the future

Summary

This provides a guide to setting up a modern CLJS & Om stack, using Chestnut, with unit and integration testing.

Have a look at our example repository for a full, running version.

It's not easy to set this up first time (or it's not when you're a relative CLJS novice), but hopefully this post will make it easier to remember how!

Thanks for reading

Rory Gibson
Chief Technology Officer


January 2015 London Clojure Dojo at ThoughtWorks (Soho)

When:
Tuesday, January 27, 2015 from 7:00 PM to 10:00 PM (GMT)

Where:
ThoughtWorks London Office
1st Floor
76-78 Wardour Street
W1F 0UR London
United Kingdom

Hosted By:
London Clojurians

Register for this event now at :
http://www.eventbrite.com/e/january-2015-london-clojure-dojo-at-thoughtworks-soho-tickets-15416752923?aff=rss

Event Details:

London Clojure Dojo at ThoughtWorks (Soho)

The goal of the session is to help people learn to start working with Clojure through practical exercises, but we hope that more experienced developers will also come along to help form a bit of a London Clojure community. The dojo is a great place for new and experienced clojure coders to learn more. If you want to know how to run your own dojo or get an idea of what dojos are like you can read more here.

We hope to break up into groups for the dojo. So if you have a laptop with a working clojure environment please bring it along.

We’ll be discussing the meetup on the london-clojurians mailing list

Clojure is a JVM language that has syntactic similarities to Lisp, full integration with Java and its libraries, and a focus on providing a solution to the issue of single-machine concurrency.

Its small core makes it surprisingly easy for Java developers to pick up, and it provides a powerful set of concurrency strategies and data structures designed to make immutable data easy to work with. If you went to Rich Hickey’s LJC talk about creating Clojure you’ll already know this; if not, it’s well worth watching the Rich Hickey “Clojure for Java Programmers” video or the Stuart Halloway “Radical Simplicity” video.



Ultra: A Leiningen Plugin for a Superior Development Environment

...man, that's a long title. I should do something about that.

This post is a general announcement that Ultra is ready for feedback, usage, and even (gasp!) pull requests. Happy hunting.
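If you just want to try it, the plugin goes in your Leiningen user profile; a minimal sketch (the version number here is hypothetical - check the repo for the current one):

;; ~/.lein/profiles.clj
{:user {:plugins [[venantius/ultra "0.1.0"]]}}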

Intended Workflow

I happen to use Vim, and even though I use fireplace I often keep other terminal windows open in which I'll use Leiningen to run a REPL client, tests, or other tasks. Ultra is designed to complement that workflow, and focuses on features such as colorization via ANSI escape codes, pretty-printed output, and better test feedback.

Consequently, Ultra may be of limited utility if you do everything in an Emacs or Sublime REPL, where there are no ANSI codes, etc.

All I can say is: I did my best.

Background

About two months ago I decided I'd been writing Clojure long enough that the time had come to optimize my development environment. In particular, I had a couple of major frustrations with the REPL, test output, and stacktraces, all of which I assumed had already been solved by existing Leiningen plugins.

On the face of it, I was correct - other experienced developers had felt the same frustrations, and had produced plugins to solve those problems. Unfortunately, what I found was a wealth of good ideas spread out across many different repositories, not all of which were compatible with each other.

Ultra is an attempt to unify that landscape, provide a really exceptional set of defaults, and make some difficult things easier. It violates the Clojure principle of "small libraries doing single things really well", but: a foolish consistency is the hobgoblin of little minds.

Features

The REPL

My first frustration - and the one that started me down this doomed path - was the fact that the REPL doesn't pretty-print values by default. This turns out to be relatively easy to configure, but then I started thinking: well, what if I want colorized values as well?

Behold:

Delicious! I can't claim much credit for this, incidentally - Ultra basically nabs Whidbey's functionality and wraps it, with the addition of a colorscheme loader for .edn files.

Go star and fork that repo, yo.

Tests

Okay, here's where things start to get interesting, which is to say that this is where I actually started to pull my own weight.

After reading Jake McCrary's article comparing Clojure's various test libraries, I found myself inspired by expectations, and decided to crib heavily from its playbook while avoiding the commitment to its syntax.

As a result, Ultra radically modifies the core functionality of clojure.test/report - a change that affects all deftest forms, and even some other testing libraries' test forms.

So what do things look like now? Well, let's see - below is a sample test, the sort of thing that might easily come up:

Fairly straightforward, except that whoever wrote that test has a really weird taste in field names. What about a test where we're comparing objects of the wrong type?

Nice. Hopefully we can already tell the difference between a string and a long when one is printed with quotation marks and the other isn't, but hey, every little bit helps, right?

Let's look at something slightly meatier.

Ah, now that's actually helpful. Especially when looking at larger collections, visual comparison between the actual and expected forms can be a frustrating endeavor. And we're only getting started with Ultra's diffs - let's take a look at maps next:

That's-a-spicy-meatball, amirite?! From here, it just gets better. Check out these vector diffs - they come with helpful hints!

Since vectors and lists satisfy the same core interfaces (at least as far as Ultra cares), what works for one works for the other:

Most of the diffs demonstrated thus far rely on clojure.data/diff, but for strings there's a lot more mileage to be gained by using Google's diff-match-patch (with kudos to difform and lein-difftest for the original Clojure implementations):

No more straining your eyes in an attempt to find that one character that was missing! Woohoo!

Stacktraces

Clojure stacktraces take a while to get used to. When I was first getting familiar with the language, it wasn't at all apparent which parts of the stacktrace were important; these days I just find myself doing a lot of scrolling before I'm able to zero in on the parts of a stacktrace that are actually relevant.

I expect my feelings on the subject of stacktraces - and stacktrace-related tooling - to continue to evolve, but in the near term I'm very happy with the increased readability I've gotten out of using Aviso's pretty stacktraces:

test stacktrace demo

Bonus points for the fact that they use the same colorscheme as everything else in Ultra.

...whatever, I can totally give myself bonus points. Get out of here!

Java objects

The last major feature of Ultra relates to Java interop, and my frustration with not being able to tell at a REPL how to interact with an object. Here Ultra injects functions from hara into clojure.core, making them available at the REPL:

There's quite a bit more functionality here, but it's probably faster for you to take a look at the Hara docs and the Ultra wiki page on the subject.

Conclusion

I tend to have a "live and let live" attitude to life, and Ultra is no exception to that rule - it might be useful for you, or it might not. Personally, I've found it to be an incredibly powerful tool for my development flow, and my hope is that it'll prove to be similarly valuable for any Clojure developer with a similar workflow.

That being said, if you have thoughts or opinions on how I could make Ultra even better, I'd love to hear them! Make issues, open pull requests, or just tweet at me.

I won't bite.

Maybe.

Discuss this post on Hacker News or on Twitter (@venantius)

Thanks to Keith Ballinger (@keithba), Bill Cauchois (@wcauchois) and Harry Wolff (@hswolff) for reading drafts of this post, and to Allen Rohner (@arohner) and David Lowe for providing early feedback on Ultra.


Live queries in Comprehend

Someone recently asked me if it's possible to have Comprehend fire events when the database is updated and new matches are encountered. This is indeed possible with not too much effort. The trick is to combine forward matching and mutable indexed sets.

To start, suppose we are maintaining a set of pairs and we are interested in all triples [a b c] such that [a b] and [b c] are in our database. We define a function that prints such matches iff they are new since the function was last called:

(require '[comprehend :as c])
(require '[comprehend.mutable :as cm])

(defn report-new-matches
  "Prints new paths of length 3 since report-new-matches was last
  called on m-s. Prints zero matches on the first invocation."
  [m-s]
  (let [!results (volatile! nil)]
    (dosync
     (vreset! !results (c/comprehend :mark :report-new-matches
                                     @m-s
                                     [a b]
                                     [b c]
                                     [a b c]))
     (cm/mark m-s :report-new-matches))
    (doseq [x @!results]
      (println "Matched" x))))

This function runs a transaction that comprises two steps:
  1. Run a query on the mutable indexed set
  2. Update the mutable indexed set with a marker
The transaction ensures that the set is not mutated between step 1 and step 2. After the transaction completes, report-new-matches prints the results.

We now hook up report-new-matches as the 'io' argument of the mutable indexed set constructor:
(declare m-s)

(defn live-reporter
  ([] nil)
  ([s diff] (report-new-matches m-s)))

(def m-s (cm/mutable-indexed-set live-reporter))

(report-new-matches m-s) ; init

(I admit the circular dependency between the definitions of live-reporter and m-s is currently a little ugly. I might tweak the signature of 'io' functions to remedy this problem.)

That's it! Let's see what happens when we add elements:
(reduce cm/conj m-s [[1 2] [2 3] [1 3] [3 4]])
; Matched [1 3 4]
; Matched [1 2 3]
; Matched [2 3 4]

(cm/conj m-s [0 1])
; Matched [0 1 3]
; Matched [0 1 2]
At some point the comprehend.mutable API will be amended with a macro to make this even easier. In the meantime, as you can see, it's straightforward to roll out your own solution.


Clojure Tip - "contains?"

There are a number of questions and gotchas that come up over and over in Clojure forums. I thought it might be useful to write some of them up. I initially planned to write them up in a single post. But then the first one was long enough to be a whole entry.

Often one of the first gotchas many Clojure programmers encounter is the contains? function. It sounds like it checks whether a data structure contains a value:

=> (contains? [3 4 5] 0)
true   ;; wat
=> (contains? [3 4 5] 3)
false  ;; wat

The gotcha here is that contains? checks whether an associative data structure contains a key. Vectors are associative from index to element so the first question above is asking: does the vector have an index 0 (yes, it does). The second asks whether it has an index 3 (nope, just 0,1,2).

contains? on a map checks whether the map has a key:

=> (contains? {:a 1} :a)
true
=> (contains? {:a 1} :GRUE)
false  ;; whew

contains? on a set confusingly looks like it is a value check at first:

=> (contains? #{:a :b :c} :a)
true

However, sets are hashed, just like the keys in a map. So, it really works exactly the same as a map.

contains? is for fast (constant or log time) lookups, not linear time lookups. If you need fast lookups in your algorithm, then you should use a data structure that supports it (like sets and maps).

In general, when creating generic operations on data structures, you can choose to be loose, letting them work on any data structure regardless of performance, or strict, allowing only data structures that can meet the performance expectations to participate in an operation. Clojure strongly prefers the latter approach, and functions in the core library often discuss performance expectations.

How do you actually do a linear search in a sequential collection? The most common recommendation is to use some which takes a predicate and a sequential collection. Then use a set, invoked as a function, as the predicate:

=> (some #{0} [3 4 5])
nil
=> (some #{3} [3 4 5])
3

The predicate (in this case, a set acting as a function) is invoked on each element (3, 4, 5). If no element matches, nil (a falsey value) is returned. If one does match, checking stops and the matched value is returned (a truthy value).

If you find the falsey/truthy values of nil (falsey) and 3 (truthy) results to be different than what you want, the boolean function can be wrapped around it to return false/true instead.
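For example, wrapping the earlier lookups:

(boolean (some #{3} [3 4 5])) ;=> true
(boolean (some #{0} [3 4 5])) ;=> false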

I am the first to admit that using some is non-obvious because it leverages several Clojure features in a surprising way for a newcomer (collections as functions, literal sets, and truthy/falsey values).

But! Note that some breaks if you happen to want to know whether a collection contains a falsey value (false or nil). In those cases, you will get back a falsey value regardless.
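To see why, note that some returns nil both when nothing matches and when the matched value is itself falsey:

(some #{nil} [nil 1])     ;=> nil, even though nil is in the collection
(some #{false} [false 2]) ;=> nil, even though false is in the collection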

If you’re interested in a sequential search with early exit that works on falsey values, you’ll want something like this:

(defn contains-val? 
  [coll val]
  (when (seq coll)
    (or (= val (first coll))
        (recur (next coll) val))))

We could also envision this in terms of reduce (since we can more efficiently apply reduce on some input collections):

(defn contains-val?
  [coll val]
  (reduce #(if (= val %2) (reduced true) %1) false coll))

I ran a quick benchmark on finding the last value in (vec (range 1000)) and found that the first version took 27 µs and the second reduce-based one took 16 µs. For comparison, a set lookup took 81 ns. So use the right data structure!
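For the curious, a sketch of such a benchmark using the criterium library and the contains-val? defined above (exact numbers will vary by machine):

(require '[criterium.core :refer [quick-bench]])

(let [v (vec (range 1000))
      s (set v)]
  (quick-bench (contains-val? v 999)) ;; linear scan to the last element
  (quick-bench (contains? s 999)))    ;; constant-time hash lookup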

Going Java

Another tricky approach is to leverage the fact that Clojure collections implement java.util.Collection, which includes a .contains() method that really does check for value containment (and also works with nil):

=> (.contains [3 4 5] 5)
true

The last word

Rich on contains?


Partition with Game of Thrones Pugs

Clojure’s partition and partition-all functions are very useful. However, I have been bitten a few times using partition when I really wanted partition-all. So, to help myself and all of you remember the difference, I have made some diagrams with pugs from Game of Thrones.

In code, partition takes a collection and returns a lazy sequence of lists, each containing n items.

To demonstrate this with pugs, we will partition 5 pugs into groups of two.

https://c4.staticflickr.com/8/7301/15757758964_570d260a31_c.jpg

This partition will give you two groups of two pugs.

https://c4.staticflickr.com/8/7350/16380203235_91c8c4e9ee_c.jpg

Notice (and here is the important part) that the last pug is missing. The Joffrey pug is not included, because partition will not include items that do not make up a complete partition. In this case, because there is no group of 2 pugs for the Joffrey pug to be in, it gets dropped.
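The same thing in code, with hypothetical pug names:

(partition 2 [:roxy :blue :bono :daenerys :joffrey])
;=> ((:roxy :blue) (:bono :daenerys))   ; :joffrey is dropped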

This is the thing that has bitten me in the past.

A common use for partitioning is to control the number of things that you process at one time, such as sending only 500 items to be processed in a batch job at once. If you have a few thousand items to be processed, partitioning them is a good way of chunking the work. However, if you have an arbitrary number of items, you most certainly want to process them all and not drop any. This is where you should use partition-all instead.

Partition-all chunks the items as well, but also includes any leftovers. Demonstrating again with pugs.

https://c4.staticflickr.com/8/7427/15757758884_20bfd014eb_c.jpg

This partition-all will give you three groups of pugs.

https://c4.staticflickr.com/8/7323/16193980179_229343f7f3_c.jpg

This time pug Joffrey is not left out!
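And in code:

(partition-all 2 [:roxy :blue :bono :daenerys :joffrey])
;=> ((:roxy :blue) (:bono :daenerys) (:joffrey))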

Remember, think carefully before using partition. Don’t leave a pug out.

By the way, I can’t wait till the next season of Game of Thrones. Until then ..

(Embedded video: 'The Pugs of Westeros', starring three pugs as Game of Thrones characters, released by blinkbox: http://blinkbox.me/SVbSdH)


Android State Saving

Application State on Android

We’ve written about managing user data maintained within the memory of the application (a.k.a. “app state”) in the past. In this post, we talk about some of the challenges we faced when managing state in our recently released Android app. For performance and tooling reasons, we decided to write Prismatic for Android as a native Java app. This meant we had to face the challenges of managing app state in a new and unfamiliar context: the Android Application Framework. Our solution manages app state in a clean and modular way that coexists in harmony with the Android activity lifecycle.

Keeping a cached version of backend state is particularly important when developing mobile apps because the network connection has a high latency and is often flaky. But, keeping local state also adds the complexity of ensuring it is consistent across the app. We hope to convey some of the lessons we learned about the activity lifecycle and the tradeoffs of different approaches to managing app state within the Android Application Framework.

Example

To help explain the notion of app state, let’s see an example. Prismatic allows users to follow topics to receive recommended content based on the topics they follow. For example, Brad may follow the topic “Cats” in order to see recommended content about “Cats”. The ground truth of whether or not Brad is following “Cats” is maintained on the backend, so the backend is in exactly one of two states:

1. Brad follows Cats

or

2. Brad does not follow Cats

The client application is the interface through which users access and update these values on the backend servers. For Brad, the client application allows him to update this topic-user follow state on the server by following or unfollowing “Cats”. The client app also renders this state in various places where the topic surfaces:

(Screenshots: Search, Header of Cats Topic, Topics Lists.)

The main challenge of managing app state on the client is ensuring that these different representations are consistent and up-to-date with the backend. Put another way, something has gone wrong if our user, Brad, simultaneously sees two conflicting representations: one that says that he is following “Cats” and one that says he is not, which leads to confusion and a poor user experience. We also want the representation of this state on the client to be dynamic: if Brad decides to unfollow “Cats,” we want the many places where this state is rendered to be updated. Hence, we need a way to represent this topic-user follow state in the application in a way that allows it to render consistently and be dynamically updated.

Solution 1: Caching App State Across Activities

In mobile apps, where the network connection is often flaky and high-latency, it is important to cache network responses. The first approach we will consider is keeping a client-side cache on each screen and passing the cache between screens.

Before we get specific about the implementation, let’s review some basics of the Android Application Framework. Each “screen” in the framework is represented in code as an instance of the Activity class. For example, the images above show several examples of activities in the Prismatic app: an activity containing the user’s list of currently followed topics, an activity for the “Cats” topic feed, and the search activity. Each one of these activities needs to know about the topic-user follow state.

To implement our caching solution, we can send one network request upfront for the set of followed topics from the onCreate method of the Main Activity (the first activity that the Android OS launches in the app). Throughout the course of the app’s lifetime, each activity may start up subsequent activities as the user transitions between different screens, and can pass app state to subsequent activities using Intents: each activity reads the followState map out of the Intent that launched it in its onCreate method, and writes the map into the Intent whenever it starts the next activity.

We would repeat this pattern in each activity that needs to reference the topic-user follow state. By saving the state in an instance variable, each activity can just look it up in the followState map, avoiding a network request for information already present on the client.

Unfortunately, this solution suffers from a couple of problems, the first of which is cluttered code. In order to implement this solution, we must always remember to pass along the topic-user follow state whenever we start a new activity. Otherwise, the downstream activity will not have the state it needs, and will have to fetch it from the server (adding logic to check whether we have the state locally or have to fetch it over the network adds even more code clutter and complexity). We might try to optimize the code to avoid passing the state to activities that do not directly reference it; however, we run the risk of breaking the chain for downstream activities that do reference the state. User-topic follow state is just one of the pieces of user state; as the number of pieces of cached state grows, so does the code bloat.

The second problem with this solution is that as users navigate between activities, the state is serialized and deserialized, effectively creating duplicate representations of the same state, which may fall out of sync. For example, consider a situation where the user has just completed a search on the SearchActivity and then navigates to the “Cats” TopicFeedActivity. On the topic feed, the user follows the “Cats” topic and then presses “back” to return to the SearchActivity. Now, because the search results were rendered from an older copy of the state, the screen will incorrectly show that the user is not following “Cats.” With this solution, we would have to ensure the new information is communicated back to the copy of the app state in older activities and have them re-render with the updated state.

Solution 2: Global App State

The distributed app state in Solution 1 introduced code bloat from needing to communicate the app state between activities and possible cache inconsistencies from having multiple copies of the same state that could easily fall out of sync. Both of these problems can be addressed by using a single representation of the app state that is referenced from all the activities that need it.

This approach is commonly implemented with the Singleton pattern, which is embraced by many Android developers. In our app, we have a singleton called the TopicManager that maintains the follow state for each topic and provides methods for querying it and updating it throughout the app.

We use the excellent annotations provided by the Dagger library to inject the singleton instance of the TopicManager into all the places that need it. For example, the SearchActivity simply asks Dagger to inject the TopicManager, and then, to render each topic in the list of search results, it queries the TopicManager for whether that topic is currently followed.

With this solution, each activity independently requests only the pieces of cached app state that it needs. By no longer requiring each activity to pass the state downstream to the next activity, we reduce the code coupling that forms an easily broken chain.

Responding to State Updates

Whenever the state changes, we still need to update all the places that render it. Within our app, we broadcast notifications of state transitions via messages along an event bus (we’re using the fantastic Otto library as our elegant and lightweight event bus). Whenever the user-topic follow state is updated, the TopicManager fires a TopicUpdateEvent on the bus.

All interested parties subscribe to these events, and when a TopicUpdateEvent arrives along the bus, the places in the app that render the state can update themselves. As a general rule, the update code is shared with the initial rendering code. For the SearchActivity, the complete, unabridged update code is a single call to notifyDataSetChanged(), which triggers the adapter backing the list of search results to re-run its getView() method and intelligently re-render the visible UI components that have changed.

Note that the activities on the backstack that also represent the state are unsubscribed from the event bus, and thus do not receive the notifications that the state has updated. Fortunately, it is easy to keep these backgrounded activities up-to-date because they will rerun the render code when they resume. Thanks to caching the app state, this re-rendering is fast.

Saving Instance State within the Android Activity Lifecycle

Keeping the cached state within a separate singleton class works harmoniously with the Android activity lifecycle.

Android Activity Lifecycle

In the Android Application Framework, each activity has its own set of lifecycle methods that are run when the user transitions between activities or triggers a configuration change such as rotating the device between portrait and landscape mode. For example, when the user rotates the device, the current activity is paused, stopped, and destroyed, a new instance of the Activity class is constructed, and then the onCreate, onStart, and onResume methods are called on the new instance. It is an unfortunate fact of the Android activity lifecycle that the values assigned to instance variables in the old activity object do not automatically carry over into the new activity. Instead, all instance variables must be re-initialized in the new activity object. In our experience, improper initialization of instance variables is the source of most null pointer exceptions.

The singleton holding the client-side cache is a separate object that exists independent of any activity. Because it lives outside the Android activity lifecycle, it does not get destroyed when a particular activity undergoes a configuration change. The Dagger library automatically ensures that the ApplicationContext maintains a reference to each global singleton to ensure that it is not garbage collected. Activities that reference these singletons typically inject them into instance variables in the onCreate() method, and so do not need to devote special attention to making sure they are saved in the old activity object and restored in the new one. By sidestepping the need for explicitly saving and restoring instance state, we reduce the amount of error-prone bookkeeping needed in more distributed forms of client-side caching (e.g. the activity-level caching in Solution 1).

Global App State Under Pressure

We released a beta version of the Prismatic app that made heavy use of our global singleton abstraction. After a day in the wild, we started seeing many null pointer related crashes from many different parts of the app, which we couldn’t reproduce. Eventually, we came across a post on Stack Overflow saying that Android can kill a backgrounded app to free memory. By default, instance variables are not persisted when Android kills the app. Unintuitively, when the user switches back to a killed app, the Android OS restarts the app directly in the last activity with focus, skipping over the MainActivity which initializes the instance variables in the global singletons. Any instance variables in the global singletons that are set by the MainActivity will be null. By using ADB to kill the backgrounded Prismatic app, we reproduced the problem, confirming the root cause of the crashes.

Intelligently Persisting Global App State

We considered many solutions to address the null pointer problems that result when Android kills the app in the backgrounded state. For example, we considered lazily initializing the instance variables with a request to the backend. Unfortunately, this approach would require refactoring most of the codebase. An alternative solution that we considered was writing all updates to the app state to persistent storage, such as a file or database. We discarded this idea because it would introduce too much complexity for keeping the in-memory representation in sync with the persisted version and in dealing with stale versions of the persisted state.

Fortunately, the Android framework provides a mechanism for saving and restoring instance state for activities. When the Android framework is going to stop an activity, for example when there is a configuration change, it first calls the onSaveInstanceState method and passes it a Bundle (essentially a map from keys to arbitrary objects) in which the activity can save any instance variables that it wants to be available when the activity is recreated. This Bundle with all the saved state is passed as an argument to the onCreate method of the new activity, which can lookup the values of the instance variables persisted when onSaveInstanceState was called. This save instance state mechanism for activities provides a nice hook for ensuring that all instance variables are initialized when the activity is recreated. We understood this mechanism to be the preferred way for saving instance state on Android, and we just needed to figure out a way to use it for saving the state of our global singletons.

The way we saved global singletons in the activity’s instance state bundle is with a StateSaver class that has a handle to each of the global singletons that maintain instance state.

The StateSaver provides its own saveInstanceState method that takes a Bundle into which instance state can be persisted. We also modified each of the global singletons to have their own saveInstanceState and restoreInstanceState methods, which the StateSaver calls to give them a hook to save and restore their state. This StateSaver is a single class that must be injected into every activity, and provides a hook for the activity to save the state of the global singletons whenever it is minimized. We are effectively piggy-backing on the activity-level saveInstanceState, using it to also save the app state held by all of the global singletons. We avoided code bloat by subclassing Activity as StateSavingActivity, overriding the onSaveInstanceState and onCreate methods to save and restore the StateSaver; we then replaced each activity in our app with a StateSavingActivity. While it does involve refactoring activities to StateSavingActivitys, we felt the StateSaver solution was better than the alternatives.

Summary

There are many options for managing state on the client. Each approach comes with its own tradeoffs. For our Android mobile app, where the network connection can be flaky, we found that maintaining several global cache objects (aka “managers”) keeps things modular and DRY. By leveraging some great open source libraries (Dagger and Otto), the manager solution is easy to implement and adopt across those activities in the app that need to access state. Our approach plays well with the Android Application Framework, and doesn’t create new serialized data formats that need to be cleaned up when upgrading the app. We’re just getting started developing in the Android space and are curious to hear your war stories and best practices.


Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.