Briefly, Dynamo Streams + core.async

Introduction To Streams

Dynamo Streams is an AWS service which allows Dynamo item writes (inserts, modifications, deletions) to be accessed as per-table streams of data. It’s currently in preview-only mode; however, there’s a version of DynamoDB Local which implements the API.

The interface is basically a subset of Amazon’s Kinesis stream-consumption API, and there’s an adaptor library which allows applications to consume Dynamo streams as if they were originating from Kinesis. In addition to direct/pull consumption, Dynamo streams can be associated with AWS Lambda functions.

From Clojure

Support for streams is included in the recently-minted 0.3.0 version of Hildebrand, a Clojure DynamoDB client (covered in giddy detail in a previous post). Consider all of the details provisional, given the preview nature of the API.

The operations below assume a table exists with streams enabled, with both new and old images included (i.e. before-and-after snapshots, for updates). Creating one would be accomplished like so, with Hildebrand:

(hildebrand/create-table!
  {:secret-key ... :access-key ...
   :endpoint "http://localhost:8000"}
  {:table :stream-me
   :attrs {:name :string}
   :keys [:name]
   ...
   :stream-specification
   {:stream-enabled true
    :stream-view-type :new-and-old-images}})

Note we’re pointing the client at a local Dynamo instance. Now, let’s listen to any updates:

(ns ...
  (:require [clojure.core.async :refer [<! go]]
            [hildebrand.streams :refer
             [latest-stream-id! describe-stream!]]
            [hildebrand.streams.page :refer [get-records!]]))

(defn read! []
  (go
    (let [stream-id (<! (latest-stream-id! creds :stream-me))
          shard-id  (-> (describe-stream! creds stream-id)
                        <! :shards last :shard-id)
          stream    (get-records! creds stream-id shard-id
                      :latest {:limit 100})]
      (loop []
        (when-let [rec (<! stream)]
          (println rec)
          (recur))))))

We retrieve the latest stream ID for our table, and then the identifier of the last shard for that stream. The streams documentation isn’t forthcoming on the details of how and when streams are partitioned into shards - we’re only interested in the most recent items, so this logic will do for a demo.

get-records! is the only non-obvious function above - it continually fetches updates (limit at a time) from the shard using an internal iterator. Updates are placed on the output channel (with a buffer of limit) as vectors tagged with either :insert, :modify or :remove.

:latest is the iterator type - per Dynamo, the other options are :trim-horizon, :at-sequence-number and :from-sequence-number. For the latter two, a sequence number can be provided to get-records! under the :sequence-number key in its final argument.
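
For example, resuming from a known position rather than from the tip of the shard might look like the sketch below; it assumes the option names described above, and the sequence number shown is just a placeholder:

(comment
  ;; Hypothetical example: resume consumption from a specific sequence
  ;; number.  "00000000000000000001" is a placeholder - real sequence
  ;; numbers come from previously-seen records.
  (get-records! creds stream-id shard-id
    :at-sequence-number {:limit 100
                         :sequence-number "00000000000000000001"}))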

Let’s write some items to our table:

(defn write! []
  (async/go-loop [i 0]
    (<! (put-item! creds :stream-me {:name "test" :count i}))
    (<! (update-item!
         creds :stream-me {:name "test"} {:count [:inc 1]}))
    (<! (delete-item! creds :stream-me {:name "test"}))
    (recur (inc i))))

Running these two functions concurrently, we’d see this output:

[:insert {:name "test" :count 0}]
[:modify {:name "test" :count 0} {:name "test" :count 1}]
[:delete {:name "test" :count 1}]
...

The sequence numbers of the updates are available in the metadata of each of the vectors.
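
For example, keeping track of the position we've reached might look something like the following sketch, which assumes the metadata key is named :sequence-number:

(go
  (when-let [rec (<! stream)]
    ;; Assumption: the sequence number is stored under :sequence-number
    ;; in each record's metadata.
    (println "last sequence number:" (:sequence-number (meta rec)))))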


How to Set Up a Clojure Environment with Ansible

This article is brought to you with ❤ by Semaphore.

Introduction

In this tutorial, we will walk through the process of setting up a local environment for Clojure development using the Ansible configuration management tool. Ansible and Clojure are a perfect fit, as both place an emphasis on simplicity and building with small, focused components. While we will not cover the Clojure development process specifically, we will learn how to set up an environment that can be used across a whole host of Clojure projects.

Additionally, we will see how using a tool like Ansible will help us implement the Twelve-Factor App methodology. The Twelve-Factor App is a collection of best practices compiled by the designers of the Heroku platform.

By the end of this tutorial, you will have a virtualized environment for running a Clojure web application, as well as all supporting software, including MongoDB and RabbitMQ. You will be able to quickly get up and running on multiple development machines. You can even use the same Ansible configuration to provision a production server.

Prerequisites

For the purposes of this tutorial, we will be using Vagrant to spin up several VMs for our development environment. If you have not used Vagrant before, please make sure to install the following packages for your OS before continuing:

  • VirtualBox
  • Vagrant

Additionally, you will want to have Leiningen installed locally.

Finally, this tutorial uses an existing chat application as an example, so go ahead and clone the repo with this command:

git clone https://github.com/kendru/clojure-chat.git

Unless otherwise specified, the shell commands in the rest of this tutorial should be run from the directory into which you have cloned the repository.

What Makes Ansible Different

With mature configuration management products such as Puppet and Chef available, why would you choose Ansible? It is something of a newcomer to the field, but it has gained a lot of traction since its initial release in 2012. Three things set Ansible apart: it is agentless, it uses data-driven configuration, and it excels at task automation.

Agentless

Unlike Puppet and Chef, Ansible does not require any client software to be installed on the machines that it manages. It only requires Python and SSH, which are included out of the box on every Linux server.

The agentless model has a couple of advantages. First, it is dead simple. The only machine that needs anything installed is the one that you will run Ansible from. Additionally, not having any client software installed on the machines you manage means that there are fewer components in your infrastructure that you need to worry about failing.

The simplicity of Ansible's agentless model is a good fit in the Clojure community.

Data-Driven Configuration

Unlike Puppet and Chef, which specify configuration using a programming language, Ansible keeps all configuration in YAML files. At first, this may sound like a drawback, but as we will see later, keeping configuration as data makes for a much cleaner, less error-prone codebase.

Once again, Clojure programmers are likely to see the value of using data as the interface to an application. Data is easy to understand, can be manipulated programmatically, and working with it does not require learning a new programming language.

When you adopt Chef, you need to know Ruby. With Puppet, you need to learn the Puppet configuration language. With Ansible, you just need to know how YAML works (if you don't already, you can learn the syntax in about 5 minutes).

Task-Based Automation

In addition to system configuration, Ansible excels at automating repetitive tasks that may need to be run across a number of machines, such as deployments. An Ansible configuration file (called a playbook) is read and executed from top to bottom, as a shell script would be. This allows us to describe a procedure, and then run it on any number of host machines.

For example, you may use something like the following playbook to deploy a standalone Java application that relies on the OS's process manager, such as Upstart or systemd.

---
# deploy.yml
# Deploy a new version of "myapp"
#
# Usage: ansible-playbook deploy.yml --extra-vars "version=1.2.345"

- hosts: appservers
  sudo: yes
  tasks:
    - name: Download new version from S3
      s3: bucket=acme-releases object=/myapp/{{ version }}.jar dest=/opt/bin/myapp/{{ version }}.jar mode=get

    - name: Move symlink to point to new version
      file: src=/opt/bin/myapp/{{ version }}.jar dest=/opt/bin/myapp/deployed.jar state=link force=yes
      notify: Restart myapp

  handlers:
    - name: Restart myapp
      action: service name=myapp state=restarted

This example playbook downloads a package from Amazon S3, creates a symlink, and restarts the system service that runs your application. From this simple playbook, you could deploy your application to dozens — or even hundreds — of machines.

Installing Ansible

If you have not already installed Vagrant and Leiningen, please do so now. The following steps require that both are present on your local machine. We also assume that you already have Python installed. If you are running any flavor of Linux or OSX, you should have Python.

Installing Ansible is a straightforward process. Check out the Ansible docs to see if there is a pre-packaged download available for your OS. Otherwise, you can install with Python's package manager, pip.

sudo easy_install pip
sudo pip install ansible

Now, let's verify that the install was successful:

$ ansible --version
ansible 1.9.1
  configured module search path = None

Provisioning Vagrant with Ansible

Now that all dependencies are installed, it's time to get our Clojure environment set up.

The master branch for this tutorial's git repository contains a completed version of all configuration. If you would like to follow along and build out the playbooks yourself, you can check out the not-provisioned tag:

git checkout -b follow-along tags/not-provisioned

At this point, we want to instruct Vagrant to provision our virtual environment with Ansible. One of the key concepts in Ansible is that of an inventory, which contains named groups of host names or IP addresses so that Ansible can configure these hosts by group name. Thankfully, Vagrant will automatically generate an inventory for us. We just need to specify how to group the VMs by adding the following to our Vagrantfile:

config.vm.provision "ansible" do |ansible|
  ansible.groups = {
    "application" => ["app"],
    "database" => ["infr"],
    "broker" => ["infr"],
    "common:children" => ["application", "database", "broker"]
  }
end

This creates four groups of servers. The application, database, and broker groups each contain a single virtual machine, and common is a parent group made up of the other three. Notice that both the database and broker groups contain the same server (infr). This will cause all configuration for both groups to be applied to the same VM.

While we could start up our Vagrant environment now, Ansible would have nothing to do. Let's fix that by writing some plays to provision our environment.

Writing Ansible Plays

Before we dig into the plays that we need for our application dependencies, let's write a simple task to place a message of the day (motd) on each of the servers, which will be displayed when a user logs in. We will be using a role-based layout for our Ansible configuration, so let's create a common role and add our config. Your directory structure should look something like the following:

Vagrantfile
...
config/
└── roles
    └── common
        ├── tasks
        └── templates

Next, we'll add a main.yml file to the tasks directory that will define the motd task.

---
# config/roles/common/tasks/main.yml
# Tasks common to all hosts

- name: Install motd
  template: src=motd.j2 dest=/etc/motd owner=root group=root mode=0644

Briefly, this file defines a single task that uses the template module built into Ansible to take a file from this role's templates directory and copy it onto some remote machine, replacing the template variables with data from Ansible.

Along with this task, we'll create the motd.j2 template.

# config/roles/common/templates/motd.j2
Welcome to {{ ansible_hostname }}
This message was installed with Ansible

When Ansible copies this file to each host, it will replace {{ ansible_hostname }} with the DNS host name of the machine that it is installed on. There are quite a few variables that are available to all templates, and you can additionally define your own on a global, per-host, or per-group basis. The official documentation has very complete coverage of the use of variables.

Finally, we need to create the playbook that will apply the task that we just wrote to each of our servers.

---
# config/provision.yml
# Provision development environment

- name: Apply common configuration
  hosts: all
  sudo: yes
  roles:
    - common

In order for Vagrant to use this playbook, we need to add the following line to our Vagrant file in the same block as the Ansible group configuration that we created earlier:

ansible.playbook = "config/provision.yml"

We can now provision our machines. If you have not yet run vagrant up, running that command will download and initialize VirtualBox VMs and provision them with Ansible (this will take a while on the first run). After we run vagrant up initially, we can re-provision the machines with:

$ vagrant provision
# ...
==> app: Running provisioner: ansible...

PLAY [Apply common configuration] *********************************************

GATHERING FACTS ***************************************************************
ok: [app]

TASK: [common | Install motd] ******************************
changed: [app]

PLAY RECAP ********************************************************************
app                        : ok=2    changed=1    unreachable=0    failed=0

If all was successful, you should see output similar to the above for each of the VMs in our environment.

Ansible Play for Application Server

In our infrastructure, the application server will be dedicated to running only the Clojure application itself. The only dependencies for this server are Java and Leiningen, the Clojure build tool. On a production machine, we would probably not install Leiningen, but it will be helpful for us to build and test our application on the VM.

Let's go ahead and create two separate roles called "java" and "lein".

mkdir -p config/roles/{java,lein}/tasks
cat <<EOF | tee config/roles/java/tasks/main.yml config/roles/lein/tasks/main.yml
---
# TODO

EOF

Next, let's add these roles to a play at the end of our playbook.

# config/provision.yml
- name: Set up app server
  hosts:
    - application
  sudo: yes
  roles:
    - java
    - lein

For the purpose of our application, we would like to install the Oracle Java 8 JDK, which is not available from the standard Ubuntu repositories, so we will add a repository from WebUpd8 and use debconf to automatically accept the Oracle Java license, which is normally an interactive process. Thankfully, there are already Ansible modules for adding apt repositories as well as changing debconf settings. See why they say that Ansible has "batteries included"?

---
# config/roles/java/tasks/main.yml
# Install Oracle Java 8

- name: Add WebUpd8 apt repo
  apt_repository: repo='ppa:webupd8team/java'

- name: Accept Java license
  debconf: name=oracle-java8-installer question='shared/accepted-oracle-license-v1-1' value=true vtype=select

- name: Update apt cache
  apt: update_cache=yes

- name: Install Java 8
  apt: name=oracle-java8-installer state=latest

- name: Set Java environment variables
  apt: name=oracle-java8-set-default state=latest

Next, we'll add the task to install Leiningen. Instead of having Ansible download Leiningen directly from the internet, we will download a copy and make it part of our configuration so that we can easily version it:

mkdir config/roles/lein/files
wget https://raw.githubusercontent.com/technomancy/leiningen/stable/bin/lein -O config/roles/lein/files/lein

With that done, the actual task for installing Leiningen becomes a one-liner.

---
# config/roles/lein/tasks/main.yml
# Install leiningen

- name: Copy lein script
  copy: src=lein dest=/usr/bin/lein owner=root group=root mode=755

Let's make sure that everything is working:

$ vagrant provision
# ... lots of output
PLAY RECAP ********************************************************************
app                        : ok=9    changed=6    unreachable=0    failed=0

Ansible Plays for Infrastructure Server

Next up, we'll add the play that will set up our infr server with MongoDB and RabbitMQ. We'll create roles for each of these applications, and we'll create plays to apply the mongodb role to servers in the database group, and the rabbitmq role to the servers in the broker group. If you recall, we only have the infr VM in each of those groups, so both roles will be applied to that same server.

We'll set up the role skeletons similar to the way we did with the java and lein roles.

mkdir -p config/roles/{mongodb,rabbitmq}/{tasks,handlers}
cat <<EOF | tee config/roles/mongodb/tasks/main.yml config/roles/rabbitmq/tasks/main.yml
---
# TODO

EOF

This time, we'll add two separate plays to our playbook.

# config/provision.yml
- name: Set up database server
  hosts:
    - database
  sudo: yes
  roles:
    - mongodb

- name: Set up messaging broker server
  hosts:
    - broker
  sudo: yes
  roles:
    - rabbitmq

Next, we'll fill in the main tasks to install MongoDB and RabbitMQ.

---
# config/roles/mongodb/tasks/main.yml
# Install and configure MongoDB

- name: Fetch apt signing key
  apt_key: keyserver=keyserver.ubuntu.com id=7F0CEB10 state=present

- name: Add 10gen Mongo repo
  apt_repository: >
    repo='deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse'
    state=present

- name: Update apt cache
  apt: update_cache=yes

- name: Install MongoDB
  apt: name=mongodb-org state=latest

- name: Start mongod
  service: name=mongod state=started

# Allow connections to mongo from other hosts
- name: Bind mongo to IP
  lineinfile: >
    dest=/etc/mongod.conf
    regexp="^bind_ip ="
    line="bind_ip = {{ mongo_bind_ip }}"
  notify:
    - restart mongod

- name: Install pip (for adding mongo user)
  apt: name=python-pip state=latest

- name: Install pymongo (for adding mongo user)
  pip: name=pymongo state=latest

- name: Add mongo user
  mongodb_user: >
    database={{ mongo_database }}
    name={{ mongo_user }}
    password={{ mongo_password }}
    state=present

This role is a little more complicated than what we have seen so far, but it's still not too bad. We again use built-in Ansible modules to fetch 10Gen's signing key and add their MongoDB repository. We use the service module to start the mongod system service, then we use the lineinfile module to edit a single line in the default MongoDB config file.

You may have noticed that we used several variables in the arguments to a couple of tasks. There are a couple of places that variables can live, but for this tutorial, we will declare these variables globally in config/group_vars/all.yml. The variables in this file will be applied to every group of servers.

mkdir config/group_vars
cat <<EOF > config/group_vars/all.yml
---
# config/group_vars/all.yml
# Variables common to all groups

mongo_bind_ip: 0.0.0.0
mongo_database: clojure-chat
mongo_user: clojure-chat
mongo_password: s3cr3t
EOF

Additionally, we added a notify line that will notify a handler when the task has been run. Handlers are generally used to start and stop system services. Let's go ahead and create the handler now.

cat <<EOF > config/roles/mongodb/handlers/main.yml
---
# config/roles/mongodb/handlers/main.yml
# MongoDB service handlers

- name: restart mongod
  service: name=mongod state=restarted
EOF

Since there are no surprises in the configuration for RabbitMQ, we will not go into it here. However, the full config is available in the repo for reference.

Ansible Plays for the Application

In order to follow the 12-factor app pattern, we would like to have all of our configuration stored in environment variables. Our app is already set up to read from HTTP_PORT, MONGODB_URI and RABBITMQ_URI. We'll just write a simple task to add those variables to the vagrant user's login shell.

---
# config/roles/clojure_chat/tasks/main.yml
# Add application environment variables

- name: Add env variables
  template: src=app_env.j2 dest=/home/vagrant/.app_env owner=vagrant group=vagrant

- name: Include env variables in vagrant's login shell
  shell: echo ". /home/vagrant/.app_env" >> /home/vagrant/.bash_profile

And we'll go ahead and create the template that defines these variables:

# config/roles/clojure_chat/templates/app_env.j2
# {{ ansible_managed }}

export HTTP_PORT="{{ backend_port }}"

# We are including the auth mechanism because Ansible's mongodb_user
# module creates users with the SCRAM-SHA-1 method by default
export MONGODB_URI="mongodb://{{ mongo_user }}:{{ mongo_password }}@{{ database_ip }}/{{ mongo_database }}?authMechanism=SCRAM-SHA-1"

export RABBITMQ_URI="amqp://{{ rmq_user }}:{{ rmq_password }}@{{ broker_ip }}:5672{% if rmq_vhost == '/' %}/%2f{% else %}{{ rmq_vhost }}{% endif %}"
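
On the Clojure side, picking these variables up is just a matter of reading the environment at startup. Below is a minimal sketch of the idea; the namespace and helper names are made up for illustration, and the real clojure-chat code may organize its configuration differently.

(ns clojure-chat.config-example)

;; Sketch: read Twelve-Factor style configuration from the environment.
;; Namespace, names and defaults here are illustrative, not the app's
;; real ones.
(defn env
  ([k] (env k nil))
  ([k default] (or (System/getenv k) default)))

(def http-port    (Integer/parseInt (env "HTTP_PORT" "3000")))
(def mongodb-uri  (env "MONGODB_URI"))
(def rabbitmq-uri (env "RABBITMQ_URI"))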

Verify the Configuration

We just wrote a lot of configuration, but now we have a fully reproducible development environment for a Clojure application that utilizes a message queue and a NoSQL database.

Let's make sure that everything works by provisioning again and firing up the REPL.

vagrant provision
vagrant ssh app
cd /vagrant
lein repl

The REPL will load our clojure-chat.main namespace, where we can start up the server.

;; REPL
clojure-chat.main=> (-main)

Now a web server should be running inside our VM on port 3000. We can check it out by visiting http://10.0.15.12:3000/ in a browser.

At this point you have successfully set up a complete, multi-machine development environment for Clojure using Ansible!

This article is brought to you with ❤ by Semaphore.


Seven specialty Emacs settings with big payoffs

Let's skip the bread and butter Emacs modes and focus on settings that are hard to find, but which once activated provide a big payoff.

For reference, my complete Emacs settings are defined in init.el and lisp/packages.el.


1. Monochrome rainbows are the best way to reveal unbalanced delimiters

I rely on paredit and formatting to keep my parentheses honest, and for the most part that works out great. Occasionally I need to go outside the box. Emacs defaults are terrible for finding unbalanced forms when things go wrong. This setting makes it obvious that there is an error when I have lost track of my grouping delimiters: the unmatched delimiter is highlighted in red.

The trick is to not use different colored delimiters! The reason I need the rainbow delimiters package is only to highlight unbalanced delimiters, which it does quickly and accurately. For those cases where I really want to differentiate a group, placing the cursor on the delimiter causes Emacs to highlight the other delimiter.

To activate this setting:


Install rainbow-delimiters, then
M-x customize-group rainbow-delimiters
Rainbow Delimiters Max Face Count: 1

(add-hook 'prog-mode-hook 'rainbow-delimiters-mode)
(require 'rainbow-delimiters)
(set-face-attribute 'rainbow-delimiters-unmatched-face nil
                    :foreground 'unspecified
                    :inherit 'error)

Depending on your theme you may wish to modify the delimiter colors to something relatively passive like grey.


2. Save on focus out

I love writing ClojureScript because it feels alive. When I change some code I instantly see the results reflected in the browser thanks to figwheel or boot-reload. It is even better than a REPL because you don't need to eval. This setting causes my file to be saved when I switch from Emacs to my browser. This is perfect because I don't need to mentally keep track of saving or compiling... I just code, switch to my browser and see the code reloaded. If something goes wrong figwheel notifies me with a popup or boot-reload with a noise, and I can go back to the code or check my build output. This also works great with ring reload middleware.

To activate this setting:


(defun save-all ()
  (interactive)
  (save-some-buffers t))

(add-hook 'focus-out-hook 'save-all)

You will never forget to save/eval again, and the saves happen at the right time in your workflow.


3. Chords

Emacs keystrokes are pretty gnarly. Especially for a Vim guy like me (Emacs evil-mode is the best Vim). Chords are so much more comfortable! For example, I press j and x together instead of M-x. Just be careful when choosing chords that they are combinations you will never type normally. I mainly use chords for switching buffers, navigating windows, opening files, and transposing expressions.

To activate this setting:


Install key-chord

(key-chord-define-global "jx" 'smex)

(You can see more chords in my settings file)


4. Forward and backward transpose

I move code around a lot! This setting allows me to swap values quickly and move forms around. For example if I accidentally put function call arguments in the wrong order, I can swap them around. Or if I define my functions out of order, I can swap the entire form up or down. I just put my cursor at the start of the form I want to move, and press 'tk' to move it forward/down, or 'tj' to move it backward/up. Joel came up with this trick, and has lots of other Emacs foo up his sleeve (check out his matilde repo).

To activate this setting:


(defun noprompt/forward-transpose-sexps ()
  (interactive)
  (paredit-forward)
  (transpose-sexps 1)
  (paredit-backward))

(defun noprompt/backward-transpose-sexps ()
  (interactive)
  (transpose-sexps 1)
  (paredit-backward)
  (paredit-backward))

(key-chord-define-global "tk" 'noprompt/forward-transpose-sexps)
(key-chord-define-global "tj" 'noprompt/backward-transpose-sexps)


5. Choose a comfortable font

Perhaps this is obvious, but it took me a while to realize what a big impact a comfortable font has when you spend a lot of time in Emacs. The default font is rather small in my opinion, and there are better options. I find Source Code Pro a good choice.

To activate this setting:


(set-frame-font "Source Code Pro-16" nil t)


6. Send expressions to the REPL buffer

For Clojure coding, cider is great. But weirdly it doesn't have a way to send code to the REPL buffer... instead you just evaluate things and see the output. I really like seeing the code in the REPL buffer, and for pair programming it is essential for showing your pair exactly what you are doing (they can't see your zany Emacs keystrokes!). This setting binds C-; to send the current form to the REPL buffer and evaluate it.

To activate this setting:


(defun cider-eval-expression-at-point-in-repl ()
  (interactive)
  (let ((form (cider-defun-at-point)))
    ;; Strip excess whitespace
    (while (string-match "\\`\s+\\|\n+\\'" form)
      (setq form (replace-match "" t t form)))
    (set-buffer (cider-get-repl-buffer))
    (goto-char (point-max))
    (insert form)
    (cider-repl-return)))

(require 'cider-mode)
(define-key cider-mode-map
  (kbd "C-;") 'cider-eval-expression-at-point-in-repl)


7. Prevent annoying "Active processes exist" query when you quit Emacs.

When I exit Emacs, it is often in anger. I never want to keep a process alive. I explicitly want my processes and Emacs to stop. So this prompt is infuriating. Especially when there are multiple processes. Emacs, just stop already!

To activate this setting:


(require 'cl)
(defadvice save-buffers-kill-emacs (around no-query-kill-emacs activate)
  (flet ((process-list ())) ad-do-it))


Final comments

These settings improved my workflow dramatically. Drop me an email or comment if you have any trouble with them. I put this list together after being inspired by Jake's blog. He has lots of great Emacs/workflow tips that you might like if Emacs is your editor of choice.


EuroClojure 2015

A group of six people from Solita went to EuroClojure 2015 at the end of June. All of our offices (Tampere, Helsinki and Oulu) were represented, so that we could spread the word throughout Solita more efficiently after we came back. It was also a superb opportunity to get to know the guys from the other offices better. It was a very busy two days at the conference, with many interesting talks.

The guys from Solita who went to EuroClojure 2015

Here is a brief look at the presentations given at the conference.

Day 1

Clojure, a Sweetspot for Analytics

The first day started with registration at 8 am. After some coffee and refreshments, the show started with a presentation about using Clojure for analytics work. Alex Petrov from ClojureWerkz gave the talk, and he mentioned that although other languages (mainly R and Python) have pretty much dominated the analytics world, Clojure has been growing steadily, and for good reason.

Alex mentioned that this was his first time giving a conference talk, and unfortunately it showed. He spent way too much time going over irrelevant subjects, and in the end he had to skip over half of his slides since he ran out of time. Because of that, the talk felt cut short, and it was pretty hard to grasp the bigger picture or the message Alex was trying to convey. However, he did mention two different tools for analytics in Clojure, Anglican and Statistiker. Anglican is actually a new language for analytics work built on top of Clojure. It's a macro-heavy implementation and provides a straightforward, declarative approach to analytics with Clojure. Anglican did seem somewhat interesting and worth keeping in mind in case you end up doing analytics work with Clojure. The second library, Statistiker, is a ClojureWerkz project which is heavily in development. Apparently Alex was involved in the development of Statistiker, but its maturity is a big question mark for now. It might be worth a look later, though.

What lies beneath - A deep dive into Clojure's Data Structures

Next, Mohit Thatte from India gave a talk about Clojure's data structures and how they work under the hood. Mohit was a good speaker and managed to keep his presentation both interesting and funny at the same time. The first half of the presentation felt a bit too much like a university lecture on data structures: how tries or red-black trees work, and so forth.

Challenge accepted!

After the first half, though, the presentation got more interesting, when Mohit dug into the more modern data structure implementations which are also in Clojure itself. There's a surprising number of little tricks and optimizations going on underneath the hood to give us the fast persistent data structures we have in Clojure. Tip: binary operations are king :) A nice fact was also that Clojure's persistent HAMT implementation was the first of its kind anywhere, so Rich Hickey wrote himself into data structure history with his work, and that implementation has since been copied to several other languages. Overall this presentation was quite nice and entertaining, although it didn't really give us anything tangible to help in our daily work.

Building a real world, end user, open source application in Clojure

The guys from Bevuta IT came to talk about a project they had been working on with full-stack Clojure. Moritz Heidkamp and Moritz Ulrich (the Moritz brothers?) first showed us a few slides about their application PEPA, which is a pretty basic document management system. The slides contained a few bullet points and a few screenshots, and from there it was a deep dive into the Clojure code written for their system.

PEPA seemed to follow a pretty typical, good architecture for a full-stack Clojure project. They used Stuart Sierra's Component, Om and so forth. The whole talk could be described as a 'code review'. Although it was pretty interesting, the most you'll get out of it is to just go and read PEPA's source code on GitHub. PEPA looked like a well-made piece of software, and it could be used as a loose reference point for your own full-stack Clojure projects.

Mesomatic: The cluster is a library

After the break, Pierre-Yves Ritschard talked about Mesomatic, which is a Clojure library for working with Apache Mesos. Pierre-Yves spent a lot of time in the beginning going through how an application's scaling demands grow as it gets more popular, trying to build a foundation for his talk's main point. Unfortunately, in my opinion he spent way too much time on that part of the talk, and I was just about to doze off when he finally got to the point, which was Mesomatic and Mesos.

For those of you who don't know Mesos, it lets you run your applications on a cluster in such a way that you don't decide where they run. You just define the resources your application needs and hand it to Mesos, which decides where to put your application and runs it. An important part of Mesos is also the message passing mechanism between these applications. Mesos provides a very dynamic way to scale your system across cloud environments where servers can come and go, and only the applications that are needed at that specific time are run.

Mesomatic was nothing more than a Clojure library to control Mesos. Mesos is most definitely a very interesting abstraction over operating systems and could provide a very nice way to handle your cluster if you have a big enough system to gain the benefits of Mesos.

Where's my data?

David Chelimsky from Cognitect gave a presentation about Datomic. In case you didn't already know, Datomic is an immutable, temporal database, which is pretty much the coolest database out there. I'm pretty sure most of the audience already knew what Datomic is, though, so this presentation wasn't really needed that much. David was an excellent speaker, and he did give us a first look at the new stuff coming to Datomic (the pull API is really nice, and it composes with the query API seamlessly), so I didn't feel like my time was wasted. However, I would have liked to see something which isn't already known by the whole Clojure community instead of this presentation.

Datomic data traversal

Generative Testing: Properties, State and Beyond

Generative testing is a testing method pioneered by John Hughes. Jan Stepien gave a presentation about generative testing in Clojure with test.check. The main idea of generative testing is that you don't write tests with fixed input values; instead, you write generators which produce a potentially infinite number of variations of the input parameters. You define properties for your tests and let test.check execute each test hundreds or thousands of times. This allows one to find hidden bugs and makes it possible to write reliable tests for e.g. concurrent scenarios.

John Hughes thinks that you shouldn't write any tests other than generative tests. They are definitely a useful testing method, especially when testing pure functions - although one can test impure functions as well, they just need more scaffolding to get running (but that's true of unit and integration tests too).

Overall this presentation didn't really give me anything new, but it did remind me that generative tests exist. I have actually already taken test.check into use in our project for testing some parser logic.
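
To give a flavor of what this looks like, here is a minimal test.check property (a toy example, not the parser tests themselves): we state a property over generated input and let test.check hammer it with random cases.

(ns example.generative-test
  (:require [clojure.test.check :as tc]
            [clojure.test.check.generators :as gen]
            [clojure.test.check.properties :as prop]))

;; Property: reversing a vector twice gives back the original vector.
(def reverse-twice-is-identity
  (prop/for-all [v (gen/vector gen/int)]
    (= v (reverse (reverse v)))))

;; Run the property against 100 randomly generated vectors.
(tc/quick-check 100 reverse-twice-is-identity)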

Beyond Code - Repository mining with Clojure

The next presentation was the best of the first day at EuroClojure. It was a surprise, since I didn't really expect anything based on the presentation's name and abstract.

Adam Tornhill talked about how, when trying to improve the quality of large systems, the most important thing to study isn't actually the code itself but its version history. This was a really interesting proposition, and Adam managed to convince at least me that he is most likely correct in this matter.

During the presentation Adam went through several cases where just by analyzing version history log data one could find:

  • Worst code in the system which contained most of the bugs
  • Code with dependency problems
  • Code with logical problems
  • Copy-paste code
  • and many other things

This was done mostly by analyzing the temporal coupling of different commits and changes to different files. For example, if two files are repeatedly changed at the same time, there is almost always some kind of coupling between them. It can be logical coupling, which could mean they actually should be one file. Or it can be because they contain copy-pasted code, so when you change the first file you also need to make the same changes to the second file. And so on.
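
To make the idea concrete, here is a toy Clojure sketch of temporal coupling analysis - this is not code-maat, just an illustration of the principle. Given the files touched by each commit, it counts how often each pair of files changes together:

;; Toy illustration of temporal coupling - not code-maat itself.
;; `commits` is a sequence of collections of file names, one per commit.
(defn file-pairs [files]
  (let [fs (vec (sort (distinct files)))]
    (for [i (range (count fs))
          j (range (inc i) (count fs))]
      [(fs i) (fs j)])))

(defn coupling-counts [commits]
  (->> commits
       (mapcat file-pairs)
       frequencies
       (sort-by val >)))

(coupling-counts [["core.clj" "api.clj"]
                  ["core.clj" "api.clj" "util.clj"]
                  ["util.clj"]])
;; => ([["api.clj" "core.clj"] 2] ...)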

Temporal coupling in version history commits

Adam has written a tool in Clojure called code-maat, which is a command-line tool for doing temporal analysis of version control data. The tool is language-agnostic and can read log data from Git, Mercurial, SVN and Perforce. He has also written a whole book about this concept, called Your Code as a Crime Scene.

This talk was inspirational and will most likely lead to at least something at Solita. I definitely recommend spending the time to watch the video when it becomes available.

Clojure on Android

Alexander Yakushev is the guy responsible for Clojure on Android, which he started a few years back as a Google Summer of Code project. Today one can develop full-blown Android applications with Clojure, and Alexander gave us the very first live coding session of the conference. In it, Alexander built a very simple Android application with full REPL support and everything, directly from Emacs.

Today Clojure on Android consists of several projects which give you all kinds of tooling, libraries and more. One of the most interesting parts from a web developer's perspective is Skummet, which is a Clojure optimizer. What Skummet does is remove dynamic var bindings from the compiled Clojure bytecode, which improves bootstrapping time significantly (over four times faster!). Removing the dynamic bindings also opens the door for other optimizations, such as tree shaking via ProGuard, since the functions are just static methods after the var removal. This both improves performance and lowers the footprint of application binaries. The best part of all this is that Skummet is not limited to Clojure on Android; it can be applied to any Clojure application. You'll lose dynamic var functionality and eval, but most applications don't need those in production anyway.

I was hugely impressed by the work done on Clojure on Android, and it seemed to be a surprisingly mature platform to build applications on.

μKanren: Running the Little Things Backwards

The first day was concluded by the functional programming world's very own superstar, Bodil Stokke. Bodil had apparently noticed that there were quite a few Finns at the conference, since she started her presentation by saying "Hyvää iltaa", which is "Good evening" in Finnish. :)

Bodil gave a very typical presentation for her: a first few slides with horrid colors and Comic Sans, and then a deep dive into live coding with Emacs. Bodil implemented a micro-sized logic language in Clojure live on stage during her presentation. I have to say I had a very hard time keeping up with her and felt incredibly stupid. To my relief, she concluded her presentation by saying that she had just gone through several decades of computer science research in 45 minutes. That made me feel a bit less stupid :)

However, it is truly impressive what you can do with just a tad over 100 lines of Clojure code. μKanren isn't something Bodil invented, but it is something she grasps intimately. Bodil is an inspirational figure, Nyan-Cat-decorated Emacs frenzy and all, and every programmer should see her give a presentation at least once in their lifetime.

Conference party

After the first day of presentations there was a conference party in a local nightclub, which lasted until midnight. There isn't very much to say about the party, except that there was free beer (for a limited time) paid for by Cognitect and an awesome chance to network with other Clojurians. I had a nice time, although after the free drinks ceased to flow the prices went through the roof: 7€ for a warm Budweiser and 12€ for a gin and tonic. Maybe this was a scheme to prevent the conference visitors from getting too drunk so they'd manage to make it to the second day in time, since the first presentation was already at 8:50 AM. This scheme failed miserably, though, since the last message in the EuroClojure Slack channel had been written at 4:30 AM.

Second day

Ctries in Clojure, or Concurrent Transients

The second day was launched by a presentation from a very intelligent man called Michał Marczyk. He talked about a data structure called Ctries, which he had implemented for Clojure as a library. I'd love to tell you all the nasty details of how this data structure works, but I have to admit I just couldn't follow the presentation. While Michał is obviously very intelligent, he unfortunately lacks basic presentation skills. He took many concepts for granted without explaining them, and jumped back and forth between his slides, which didn't really contain any detailed information about how the data structure worked. Ctries is a very nice data structure and it's great that we have it in Clojure, but you'll understand how it works better by watching a presentation by Martin Odersky about it.

There was a godlike tweet written while this presentation was going on. One can only understand it if you have sat at a conference at 8:50 AM after spending quite a lot of time and alcohol the previous night. Super heavy-duty stuff for the first presentation of the day.

Grudging Monkeys and Microservices

The second presentation was a lot easier on the mind and perfect for waking people up after the party night. Carlo Sciolla gave a presentation about how one should (or could) build the infrastructure for a true microservice-architecture-based application. He used a super simple example application which consisted of services called Monkeys. Each Monkey behaved slightly differently, and each effectively implemented a single microservice. On top of these was a simulator which talked to the microservices and provided a web-based UI for the application. The application didn't do anything truly interesting, but it was enough to demonstrate a microservice application in action.

The real beef of the presentation was the tooling and how it was all tied together. Carlo bombarded us with a whole slew of tools:

  • Consul.io
  • Go Continuous Delivery
  • Terraform
  • Hystrix
  • ELK stack
    • Elastic Search
    • Logstash
    • Kibana
  • New Relic
  • Mesos
  • Docker
  • and several others

Carlo emphasized that you need to pay dearly to get microservices truly running, but if you do, the benefits are great. I personally loved his approach of not concentrating on the application itself but on how best to implement a very robust microservices infrastructure. This talk gave me good tips on several new tools I didn't know of and made me think about microservice architecture overall. Good talk.

Efficient & Concise Access to Remote Data - Reinventing Haxl

Alexey Kachayev gave this presentation (whose name had changed slightly from the schedule on the EuroClojure web page), which was a true surprise. Alexey gave one of the best talks of the whole conference, and his library might be a really big thing in the Clojure world in the next few years.

Muse is a library which is heavily influenced by Haxl and Stitch, developed by Facebook and Twitter for Haskell and Scala respectively. What it does is provide a declarative way to describe data sources and the concurrent communication between those sources. Muse makes the requests and is smart enough not to perform the same request twice, so the developer doesn't have to worry about the performance problems such duplication would cause. It also automatically parallelizes the requests where possible and then does the logical merging of results from these sources as defined by the developer.

I had heard of Haxl before but hadn't really dug into what it truly is. The concept is truly interesting, especially in the microservice world, since it significantly reduces the overhead of communication logic between services. Muse is a very impressive and beautifully written piece of code, and it does its magic with only 254 lines of Clojure.

Real Estate Clojure

The next talk was a project case story about building onthemarket.com, which is a real estate marketplace in the UK. Jon Pither gave an entertaining talk about how they approached the problem and what problems they faced during development. There isn't that much to say about this talk except that Clojure saved the day, and in the end they delivered a performant and beautiful (I agree) website which does its job well. As a nice side note, Jon praised compojure-api by Metosin like nothing else. I guess @ikitommi smiled pretty widely during this talk :)

ClojureBridge - Building a more diverse Clojure community

Ali King came to tell the Clojure community about ClojureBridge - a movement which tries to get more people from minority groups into Clojure programming. By minority groups Ali meant women, disabled people and so forth. While I understand how diversifying the programmer pool in the Clojure world is a good thing, I didn't really enjoy this talk. While Ali did speak clearly, she gave her talk by reading it from notes. This made the pace of the talk unnatural and mechanical, and thus hard to follow.

The subject of this presentation was a bit sensitive, and things almost blew up when it came time for questions. There were several plain stupid questions and one which could be considered rude to the speaker. What these questions proved to me, though, is that ClojureBridge is indeed needed, since obviously there are still people from the Stone Age even in the Clojure community. After the presentation at least one new ClojureBridge event was created, so mission accomplished, Ali!

Performance and Lies

After the lunch break it was time for Tom Crayford to step on stage. During lunch we agreed that, based on the topic, the next presentation could only be either super bad or super interesting. Fortunately, what Tom gave us was the latter.

Not only did Tom manage to make his presentation entertaining and fun, it was also packed with interesting information about how the JVM and Clojure work. There are over 800 different flags which you can give to the JVM, but Tom brought up a few of them which are truly interesting for performance tuning:

  • -Xmn and -Xmx, which are the basic heap tuning arguments. Tom said you can gain a 15% performance boost just by tweaking these according to your application.
  • AggressiveOpts. 8.4% performance boost.
  • UseCompressedOops, which will probably win the "best name for a flag" competition and can also give you an 8.1% performance boost if you turn it off.

As an interesting piece of additional information, Leiningen runs by default with -XX:TieredStopAtLevel=1, which tells the JVM to optimize for startup time, not runtime performance. As a result, you should turn this off if you want to do any kind of benchmarking via Leiningen. I did not know this.

One can also gain quite significant performance improvements, especially with Clojure, by tweaking MaxInlineLevel (default 9) to something different. What this flag does is limit how many levels of function calls the JIT can recurse into when inlining. Since Clojure is a functional language and makes a lot of function calls, this can make a huge difference for certain applications.
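
One way to experiment with these flags in a Leiningen project is to override :jvm-opts in project.clj. The sketch below is illustrative only: the project name is made up and the values are the ones mentioned above, not recommendations. The ^:replace metadata drops Leiningen's defaults, including the -XX:TieredStopAtLevel=1 startup optimization.

;; project.clj (sketch): override Leiningen's default JVM options
;; when benchmarking.  Flag values are illustrative - measure for
;; your own application.
(defproject myapp "0.1.0-SNAPSHOT"
  :jvm-opts ^:replace ["-Xmn512m" "-Xmx2g"       ; basic heap tuning
                       "-XX:+AggressiveOpts"
                       "-XX:MaxInlineLevel=15"]) ; default is 9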

Tom also took a jab at profiler builders and said that all of the profilers on the JVM lie. They can't do anything else, since they effectively work on JVM safepoints, which allow debuggers, profilers etc. to tap into the normal execution of your application. During a safepoint your application doesn't proceed at all, and the tooling can do its magic. The reason profilers lie is that these safepoints do not happen at even intervals during execution, and thus profilers can miss important information between safepoints. That doesn't mean profilers aren't useful tools; it just means you shouldn't rely on them too much.

There was a ton of other stuff in Tom's presentation too, and I suggest you check the video out when it comes online.

Realtime Collaboration with Clojure

Leonardo Borges from Atlassian came to talk about a product he's been working on. It's a collaborative wiki platform similar to Google Docs. The main problem with applications like these is the possibility of conflicts between concurrent edits and how to handle them.

Leonardo gave a pretty thorough analysis of the problem and its possible solutions. In the end he said that the actor model is a perfect match for this problem, but in Clojure there is a snag: we do have several actor model implementations for the JVM (Akka, Pulsar), but none of these work on top of ClojureScript. And Leonardo's team wanted the actor model to be preserved throughout the application chain to gain the best benefits without resorting to ugly hacks.

Because of this they had written their own mechanism on top of core.async which is somewhat similar to (but not quite the same as) the actor model. At this point I was starting to get really interested, but then Leonardo told us that it is still a work in progress and not ready. Apparently they are planning to open-source the implementation when it is finished, which is interesting.

This presentation left me with mixed feelings. On the one hand, the content was interesting and the proposition of this new actor(ish) library equally so. But I would have loved to see some demos of the whole collaboration platform in action, or example code showing how to use their library. It seems like EuroClojure came too soon; this presentation would have been a lot better half a year from now.

Om Next

The very final presentation and unofficial keynote was given by ClojureScript God Almighty, David Nolen. The title of the presentation was a bit misleading, since only half of it was about Om Next and the rest was about ClojureScript's future.

Om Next is coming, and it'll totally change what Om is. It is backwards incompatible and effectively a completely new UI library on top of React. The biggest change with Om Next is going to be the removal of cursors. As Nolen said: "cursors gave us 16 months of time to figure out how to do this properly". The answer is the pull API, which is identical to Datomic's new pull API. It allows the developer to describe, in a specific query language, what data the component is interested in, and Om Next takes care of providing that data.

The second huge change is that Om Next is moving away from REST APIs to a different way of doing things. With Om Next you won't force your server-side APIs to grow into huge monsters which are very hard to maintain; instead, it encourages you to create query APIs on the server side and use those from the client. Without going into too much detail, you can think of this a bit like sending Datalog-style queries from your client directly to the server. This keeps the server-side API lean and allows the client code to make the business decisions about data.

REST APIs as seen by David Nolen :)

I personally think this is the way to go, and I've actually implemented a similar system in the past. It was super nice to use, although definitely nothing as elegant as the one in Om Next. One has to think about security concerns and so forth a bit differently in an approach like this, but it's all quite doable.

The second part of Nolen's presentation was about ClojureScript and its future. First of all, ClojureScript is not intended to be only for the web, but for all platforms where JavaScript can run. Due to this vision there has been quite a lot of work on tools to make it all happen.

Work on a ClojureScript-in-ClojureScript compiler has started, and a simple version of it is already working. It still needs macro support and whatnot, but it's looking nice already. No word on when it will be ready, though.

There is a Google Summer of Code project going on which concentrates on a conversion tool for JavaScript module systems other than Google Closure. If this is successful, it'll be a huge thing for the ClojureScript ecosystem, since the number of usable libraries would grow dramatically. Nolen mentioned that work has been done on converting CommonJS, AMD and ECMAScript 2015 modules.

The last part was a demo of Ambly, which is development tooling for iOS development with ClojureScript. Nolen changed an OpenGL ES application running on the iOS simulator on the fly using a ClojureScript REPL. Which was pretty awesome; full REPL support for iOS development? Who wouldn't want that?

Conclusion

Overall, the quality of the presentations at EuroClojure 2015 was above the average of the conferences I've visited. There were better and worse presentations, as always, but there definitely were thought-provoking presentations which will affect our Clojure development at Solita in some way in the long run. Let's try to take over the world with Clojure!

It's time to take over the world with Clojure!

Jarno Väyrynen also wrote about our trip on Solita's Finnish blog; you can read about it here.


Functional Geekery Episode 22 – LambdaConf 2015, Part 1

This is part one of a number of mini interviews I did at LambdaConf 2015. While I was there, I set up my laptop and microphone off to the side for a bit and recorded episodes with anybody who was interested in a mini interview.

Sponsors

This episode is sponsored by PurelyFunctional.tv. For high quality videos on Clojure, from an intro to Clojure to an in depth look at core.async, Eric Normand has you covered. Videos are downloadable allowing them to be viewed offline and at your leisure, and include exercises to help ensure your learning through interaction. Listeners get a 25% discount off everything with coupon code GEEK. Visit http://purelyfunctional.tv/geekery, and make sure to thank them for being a sponsor.

Topics

Brian McKenna

@puffnfresh
Overview of PureScript
Differences between PureScript and Haskell
Idris
Dependent Types and Equality
Idempotent Functions and Involutions
Barrier to Entry of Idris
Best way to get into Idris is via Haskell
brianmckenna.org
brian@brianmckenna.org

John De Goes

John’s programming background
SlamData
Organizing a conference
Inspired by the passion of people that came to LambdaConf 2014
The cross-language friendships formed
Amazed by the number of languages and experience levels
PureScript Conf
Why PureScript
Possibility of niche language workshops pre LambdaConf 2016

Xan

Women Who Code
Paige Bailey’s workshop on Clojure
Highlights of the conference so far
Programming and Math talk
Perspective on the functional programming community at LambdaConf
Interest in Clojure and Scala

Matt Farmer

Elemica
Scala and Clojure
Why pursuing new and cutting edge technologies is worth while
Paul Phillips on a virtual file system.
Farmdawg Nation
@farmdawgnation
“Be here next year”

Brooke Zelenka

Vancouver Functional Programmers meetup
Idris and Dependent Types
Algebraic Data Types
Chris Allen on teaching programming
Early release of Haskell Programming
@expede

Zeeshan Lakhani

The Meaning of LFE
Tile from Fogus
Learnings from learning Lisp Flavored Erlang
Duncan McGreggor
Robert Virding
Good interop with QuickCheck already
lfetool for creating projects
Feel the power of both the LISP and Erlang worlds
Zeeshan’s Conflict Resolution Data Types lightning talk
Merging based on causality
Still new research on CRDTs going on
Version Vectors and Eventual Consistency
Sean Cribbs’ A Brief History of Time in Riak presentation
Papers We Love
Keep people talking about research from both academia and industry
Shouldn’t lose track of research that is going on or happened
@zeeshanlakhani
Check out footnotes from the slides

A giant Thank You to David Belcher for the logo design.


Software Engineer - Functional Programming at NAVIS (Full-time)

NAVIS is looking for a creative, motivated Software Engineer to join us as a critical member of our Engineering Team in beautiful Bend, Oregon.

NAVIS is seeking highly-competent "development athletes" to join our team, rather than position players. This means you know how to develop rich code, regardless of the tools you use - you are very capable and easily learn new technologies. We are re-architecting many of our products from the UI through the database, and we are exploring (and have already implemented) new technologies to add to our existing technology ecosystem, including the use of functional programming with Clojure / ClojureScript. The primary focus of this position is coding new, creative, operational software that enhances our products.

It’s an exciting time to be at NAVIS!

This developer will design, develop, optimize and test client/server and web applications in an Agile / SCRUM setting primarily based on Open Source technologies. As part of the Engineering Team, you will collaborate closely with the Chief Technology Officer (CTO), Chief Innovation Officer (CIO), Technical Team Leads and other Product Team members, to design exceptional software / product solutions to meet customer needs.

If NEW software development is appealing, this position could be a great fit! The Engineering Team has multiple projects that often start at the concept stage, developing new products and/or product features from scratch. Our products are constantly evolving with our client needs - we are not simply maintaining existing products.

This is a full SDLC development position. The candidate must be willing to take on all aspects of the development process, and be ready to support applications in production environments. He/she may be asked to test code sets, document software, and implement products and related solutions. Strong knowledge of design patterns and software development best practices must be shown at all times. A high degree of creativity and motivation is required of the successful candidate, who is expected to operate with limited direct supervision. This developer needs to have a willingness to do whatever it takes to make the products and the company successful.

WHY WOULD YOU WANT TO WORK FOR NAVIS?

  • Check out our Company page on StackOverflow: http://careers.stackoverflow.com/company/http-www-navisresorts-com
  • NAVIS was a finalist in the 2015 Oregon Tech Awards Technology Company of the Year - http://www.navisblog.com/navis-finalist-in-the-15-oregon-tech-awards-technology-company-of-the-year…;
  • NAVIS is growing fast due to high market demand for our products. Our products are successful because they truly help our clients succeed and grow. We demonstrate our value to our clients every single day.
  • We are building new technologies and enhancing existing ones - we are NOT standing still.
  • We are re-architecting our products from the UI through the database. This means we need strong, creative, intelligent developers with great ideas that love to build awesome software products.
  • Our technology is blossoming and we will be utilizing some of the latest and greatest technologies on the market today.
  • The Oregonian has listed NAVIS as a "Top Place to Work in Oregon" in 2012, 2013, and 2014.
  • We live the core values listed below; they are not window dressing
  • Strong technical leadership in a team-based atmosphere
  • Your work matters; your impact will be visible
  • Local, established Oregon-founded and based company
  • Healthy financial foundation
  • We give generously to local charities and offer opportunities to participate to directly help those in need
  • Competitive base salaries with a 10% annual bonus target and a full benefits package, including matching 401(k)
  • WORK / LIFE BALANCE - what a concept!
  • You get to LIVE and WORK in Central Oregon - what could be better than that?

At NAVIS, we live our Core Values:

  • Golden Rule: Treat others as you would want to be treated
  • Integrity: A person of your word, highly trusted
  • Innovation: Open and involved in creating or executing on "new"
  • Passion: Love the TEAM, the clients and the work we do
  • Attitude: Consistently display a positive, can-do attitude

LOCATION OF THIS POSITION:

To foster a highly-collaborative team environment, this position will be located in beautiful Bend, Oregon.

ABOUT BEND, OREGON:

Bend, the county seat of Deschutes County, covers 32 square miles. To the east is high desert vegetation and to the west, Bend is surrounded by U.S. Forest Service land. The city is noted for its scenic setting, mild climate, year-round recreation opportunities and growing economy. At an elevation of 3,625 feet, Bend enjoys the predominately dry climate of the Great Basin. Sunny days, cool nights and low humidity characterize the weather.

Love the outdoors?

The following activities are all right out your doorstep (quite literally, in many cases):

  • Mountains, mountains, mountains: the Three Sisters (Faith, Hope and Charity), Broken Top, and Mount Bachelor are the five most prominent peaks nearby, and offer activities galore
  • Climbing: world class rock climbing is just 25 minutes north of Bend at Smith Rock
  • On the River: the Deschutes River flows right through the middle of town - people use it for kayaking, paddle boarding, floating the river, fishing, or simply walking its banks
  • Skiing: Mount Bachelor is just 25 minutes southwest of Bend, and there are other ski resorts nearby offering Alpine and Nordic skiing at all skill levels – snowshoeing, too!
  • Mountain Biking: there are many miles of existing trails for all skill levels, and new trails currently under construction
  • Fishing & Boating: countless lakes, streams, rivers, ponds and reservoirs at your disposal
  • Hiking & Camping: dozens of designated campsites nearby with access to trails
  • Golfing: 20+ courses sit in the surrounding area
  • Whitewater Rafting: there are multiple guiding companies in the surrounding area
  • Hunting: bow or rifle hunting for deer, elk, bear, water fowl, etc.

In addition to Bend's outdoor recreational ethos, there are now 13 (yes – 13, and counting) craft breweries in town! World-class craft beer is one of the cultural pillars of Bend - after a fun day at the slopes or on the river, a well-deserved beverage is a staple among us Bendites.

Check out the Official Bend Oregon Video created by VisitBend.com for more information on the area: https://www.youtube.com/watch?v=u6zSKJqalug

Don't take our word for it - take a look at what others are saying:

The Top 10 Best Places to Live Now - Men's Journal - http://www.mensjournal.com/expert-advice/the-10-best-places-to-live-now-20150312/bend-oregon

Can Bend Become the Next Boulder? This Venture Capitalist Thinks So, and Here's Why - Geekwire - http://www.geekwire.com/2014/qa-venture-capitalist-dino-vendetti/

Bend, Ore., A Brewer's Town - New York Times - http://www.nytimes.com/2012/04/22/travel/in-bend-ore-craft-brewing-is-booming.html?_r=0

Bend, Ore., The City You'll Love to Hate - Washington Post - http://www.washingtonpost.com/lifestyle/travel/…

REQUIRED SKILLS AND EXPERIENCE:

  • Experience with greenfield functional programming, functional software development, or functional architecture is highly desired
  • BS in Computer Science or equivalent experience, bonus for an MS in Computer Science degree
  • 5 or more years of web software design & development experience on highly scalable web properties supporting multiple browsers
  • Strong knowledge of and experience developing on multiple server-side frameworks, including an Open Source stack
  • Strong experience with Design Patterns
  • Database application design and development experience, preferably in a multi-tenant database environment.
  • Solid understanding of software development life cycles
  • Strong verbal, written and interpersonal communication skills
  • Demonstrated expertise in Object-Oriented Analysis and Design, with a focus on high-quality, timely, supportable, and maintainable code delivery
  • Broad experience with iterative development with quick release cycles, such as Agile/SCRUM/TDD methodologies and associated tools
  • Full stack development experience is ideal

In addition to the above requirements, you should also have experience with the following:

  • Development experience in Clojure / ClojureScript is highly desired
  • Any front-end development experience is a bonus - AngularJS, Backbone.js, Node.js, ReactJS
  • Experience with MySQL and NoSQL (MongoDB, Cassandra) would be highly beneficial
  • Linux and related open stack technologies
  • Service-Oriented Architecture (SOA)
  • Complex MySQL query design and troubleshooting
  • Experience with Cloud Computing and Amazon Web Services (AWS)
  • Experience developing applications for international customers
  • Development experience with CRM and/or VOIP services is desirable

ABOUT NAVIS:

Based in Bend, Oregon, NAVIS is the leading provider of sales and marketing solutions to hotels, resorts and vacation rental management companies in North America. Building on a rich 25+ year heritage that grew from humble beginnings, NAVIS is strategically focused on the critical value of providing accurate, timely data for our clients. Our clients view NAVIS as the best source of solutions, and employees view NAVIS as THE best place to work.

Get information on how to apply for this position.

Permalink

Clojure 1.7 is now available

We are pleased to announce the release of Clojure 1.7. The two headline features for 1.7 are transducers and reader conditionals. Also see the complete list of all changes since Clojure 1.6 for more details.

Transducers

Transducers are composable algorithmic transformations. They are independent from the context of their input and output sources and specify only the essence of the transformation in terms of an individual element. Because transducers are decoupled from input or output sources, they can be used in many different processes - collections, streams, channels, observables, etc. Transducers compose directly, without awareness of input or creation of intermediate aggregates.

Many existing sequence functions now have a new arity (one fewer argument than before). This arity will return a transducer that represents the same logic but is independent of lazy sequence processing. Functions included are: map, mapcat, filter, remove, take, take-while, drop, drop-while, take-nth, replace, partition-by, partition-all, keep, keep-indexed, map-indexed, distinct, and interpose. Additionally some new transducer functions have been added: cat, dedupe, and random-sample.
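
For example, calling map or filter with only a function returns a transducer, and transducers compose directly with comp (a minimal sketch; the name xf is arbitrary):

;; increment each element, then keep only the even results
(def xf (comp (map inc) (filter even?)))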

Transducers can be used in several new or existing contexts:

  • into - to collect the results of applying a transducer
  • sequence - to incrementally compute the result of a transducer
  • transduce - to immediately compute the result of a transducer
  • eduction - to delay computation and recompute each time
  • core.async - to apply a transducer while values traverse a channel
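
For instance, the same transducer can drive several of these contexts unchanged (a minimal sketch; core.async is only needed for the channel example and must be on the classpath):

(def xf (comp (map inc) (filter even?)))

(into [] xf (range 10))       ;=> [2 4 6 8 10]
(sequence xf (range 10))      ;=> (2 4 6 8 10)
(transduce xf + 0 (range 10)) ;=> 30
(eduction xf (range 10))      ; delayed; recomputed on each reduction

;; with core.async, values are transformed as they traverse the channel
(require '[clojure.core.async :as async])
(def c (async/chan 10 xf))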

Portable Clojure and Reader Conditionals

It is now common to see a library or application targeting multiple Clojure platforms with a single codebase. Clojure 1.7 introduces a new extension (.cljc) for files that can be loaded by Clojure and ClojureScript (and other Clojure platforms). 

There will often be some parts of the code that vary between platforms. The primary mechanism for dealing with platform-specific code is to isolate that code into a minimal set of namespaces and then provide platform-specific versions (.clj/.class or .cljs) of those namespaces.

To support cases where it is not feasible to isolate the varying parts of the code, or where the code is mostly portable with only small platform-specific parts, 1.7 provides Reader Conditionals.

Reader conditionals are a new reader form that is only allowed in portable cljc files. A reader conditional expression is similar to a cond in that it specifies alternating platform identifiers and expressions. Each platform is checked in turn until a match is found and the expression is read. All expressions not selected are read but skipped. A final :default fallthrough can be provided. If no expressions are matched, the reader conditional will read nothing. The reader conditional splicing form takes a sequential expression and splices the result into the surrounding code.
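
For example, a portable .cljc file might use the standard and splicing forms like this (a minimal sketch; str->int and build-list are just illustrative names):

(defn str->int [s]
  #?(:clj  (Long/parseLong s)
     :cljs (js/parseInt s)))

;; #?@ splices the selected sequential expression into the surrounding form
(defn build-list []
  (list #?@(:clj  [1 2]
            :cljs [3 4])))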

Contributors

Thanks to all of those who contributed patches to Clojure 1.7:

  • Timothy Baldridge
  • Bozhidar Batsov
  • Brandon Bloom
  • Michael Blume
  • Ambrose Bonnaire-Sergeant
  • Aaron Cohen
  • Pepijn de Vos
  • Andy Fingerhut
  • Gary Fredricks
  • Daniel Solano Gómez
  • Stuart Halloway
  • Immo Heikkinen
  • Andrei Kleschinsky
  • Howard Lewis Ship
  • Alex Miller
  • Steve Miner
  • Nicola Mometto
  • Tomasz Nurkiewicz
  • Ghadi Shayban
  • Paul Stadig
  • Zach Tellman
  • Luke VanderHart
  • Jozef Wagner
  • Devin Walters
  • Jason Wolfe
  • Steven Yi

Also, continued thanks to the total list of contributors from all releases.

Permalink

June 2015 London Clojure Dojo at ThoughtWorks (Soho)

When:
Tuesday, June 30, 2015 from 7:00 PM to 10:00 PM (BST)

Where:
ThoughtWorks London Office
1st Floor
76-78 Wardour Street
W1F 0UR London
United Kingdom

Hosted By:
London Clojurians

Register for this event now at :
http://www.eventbrite.com/e/june-2015-london-clojure-dojo-at-thoughtworks-soho-tickets-17494151478?aff=rss

Event Details:

London Clojure Dojo at ThoughtWorks (Soho)

The goal of the session is to help people learn to start working with Clojure through practical exercises, but we hope that more experienced developers will also come along to help form a bit of a London Clojure community. The dojo is a great place for new and experienced clojure coders to learn more. If you want to know how to run your own dojo or get an idea of what dojos are like you can read more here.

We hope to break up into groups for the dojo. So if you have a laptop with a working clojure environment please bring it along.

We’ll be discussing the meetup on the london-clojurians mailing list

Clojure is a JVM language that has syntactic similarities to Lisp, full integration with Java and its libraries, and focuses on providing a solution to the issue of single-machine concurrency.

Its small core makes it surprisingly easy for Java developers to pick up and it provides a powerful set of concurrency strategies and data structures designed to make immutable data easy to work with. If you went to Rich Hickey’s LJC talk about creating Clojure you’ll already know this, if not it’s well worth watching the Rich Hickey “Clojure for Java Programmers” video or Stuart Halloway “Radical Simplicity” video.


Permalink

Setting up BeagleBone Black for development (Clojure in particular)

I just got another BeagleBone Black (BBB) to tinker with as I finish up my pin-ctrl library and wanted to record the steps so it's easier for me and others in the future.

In particular, I show here how to set up a non root user with access to the board's pins. This is useful no matter what programming language you care to use on the BBB. (Deving as root is bad, m-kay?)

Thank you Debian

My first BBB came with Angstrom. While it's a fine distribution, I rather missed access to aptitude for package management, as well as other various Debian-isms. But I was also too lazy to bother setting up Debian and figured it was worth getting a feel for Angstrom. To my pleasant surprise though, it appears the newer rev C BBB is now shipping with Debian! This made setup much easier this time around. However, if you have an older board, these instructions may not work in their entirety without first installing Debian (the steps for setting up the non-root user are likely the same, though).

Prereqs - connecting to the board

In my opinion the easiest way to get things going is to just connect the board to a router via ethernet, since this will both let you ssh into it and give you network access for installing packages, which you'll want to do anyway. Once connected, you can ssh into the BeagleBone with ssh root@beaglebone (note: on some routers/installations, you may have to use root@beaglebone.local). There shouldn't be any password prompt.

Alternatively, you can follow these directions on your non-BeagleBone computer so that you can ssh into the BBB over the USB cable. This is handy for when you don't have ethernet around, so it's worth doing in its own right. But again, you'll need network access anyway.

Setting up a non root user

Unfortunately, all of these boards (BBB, Pi, etc.) assume you will be running as root, which is terrible Linux practice. You really only ever want to be running as root when you're doing system maintenance and such. Additionally, as far as Clojure related things go, Leiningen does not particularly like running as root. You can override this with LEIN_ROOT=1 lein ..., but doing so feels all the more like you're going against the grain of Unix best practice.

The real kicker here is that by default only the root user has access to the pins. To give a non root user access to these pins, we can do the following:

  • add a group named pinusers
  • create a new user belonging to that group
  • set up some udev rules so that on boot our pin files will automatically have the group permissions they need

Here's how we do this:

# First create new user group
groupadd pinusers

# Next create the user and their home directory
mkdir /home/<username>
useradd -G pinusers,admin -d /home/<username> <username>
chown -R <username> /home/<username>
# Enter password for new user when prompted
passwd <username>

# Download the permissions script and udev rules
wget https://gist.githubusercontent.com/metasoarous/a7308779837f9dcba662/raw/b6717752469da4184c4aaa8bd88407290df15e19/80-pinusers-permissions.rules -O /etc/udev/rules.d/80-pinusers-permissions.rules  
wget https://gist.githubusercontent.com/metasoarous/a7308779837f9dcba662/raw/8b7dfc236514caeca0274c6144afe195cd074a0c/set-pinusers-permissions.sh -O /usr/local/bin/set-pinusers-permissions.sh  
chmod +x /usr/local/bin/set-pinusers-permissions.sh

# Execute the permissions script to kick things off, while we're at it
set-pinusers-permissions.sh  

Note that the udev rules and permissions script come from this gist, if you care to inspect them before downloading.

You should now be able to su <username>.

Opinionated dev setup

If you have your own opinionated 'nix setup, by all means, have at it however you like. I myself have a pretty opinionated setup which emphasizes vim + tmux + zsh. If you're interested in giving it a whirl, read on.

(I'm also working on a more general post about the environment I like to set up, which I'll link here when it's done.)

Apt packages

Thankfully, the Debian distribution already comes with a lot of the things I want in my dev environment. In particular:

tmux / vim / git / ruby

Some other things I want/need for various reasons:

zsh / java / rake / exuberant-ctags

To install these, simply run (EDIT: added exuberant-ctags here so vim would stop complaining about its absence):

sudo apt-get install zsh openjdk-7-jdk rake exuberant-ctags  

To change to zsh as the standard shell for your user, simply chsh, and enter /usr/bin/zsh when prompted. To fully apply these changes, log out and back in again.

Dotfiles

I have a dotfiles repo I use to keep things in sync between computers (an approach I strongly recommend). To get that going, simply:

# First download the dotfiles, and cd into them
git clone https://github.com/metasoarous/dotfiles.git ~/.dotfiles  
cd ~/.dotfiles

# Backup all existing dofiles, then install new ones
rake backup  
rake install  

Vim setup

Now that I've switched to Vundle for vim plugin management, all of my vim plugins are managed directly through my .vimrc, making it a lot easier to set things up. The dotfiles script installs Vundle for us, so the only thing left to do is fire up vim and execute the command :PluginInstall. This will install all the plugins specified in the vimrc, which should then be available on a restart of vim.

Tmux settings

Unfortunately, the tmux package available on this version of Debian is a bit old, so there are some configuration settings that won't work. This can be amended with the following replacement in ~/.tmux.conf:

#bind-key - split-window -v -c "#{pane_current_path}"
#bind-key _ split-window -v -c "#{pane_current_path}"
#bind-key | split-window -h -c "#{pane_current_path}"
bind-key - split-window -v  
bind-key _ split-window -v  
bind-key | split-window -h  

Clojure specific stuff

This is actually relatively straightforward. The main thing is deciding which Java to install. I've heard you get more mileage out of your little board if you go with the Oracle JDK, but you have to add a PPA for that, so to keep things simple we'll stick with OpenJDK 7. (EDIT: It turns out there is a nasty bug that intermittently crashes OpenJDK 7; see the link for other options; I'll be posting more soon.)

# First install the JDK (already did this above...)
sudo apt-get install openjdk-7-jdk

# Download Leiningen, and make it executable
# (create ~/bin first if it doesn't already exist)
mkdir -p ~/bin
wget https://raw.githubusercontent.com/technomancy/leiningen/stable/bin/lein -O ~/bin/lein
chmod +x ~/bin/lein

# Initialize lein, and download dependencies
lein  

Now you can try firing up a repl with lein repl. Note that it will likely take about three and a half minutes :-/ However, this is a lot better than the older Raspberry Pis, one of which I timed at around 20 minutes to boot a repl!

Rocking the Clojures

With the long boot times, it's highly recommended you use the reloaded and/or component pattern in your applications to avoid wasted time waiting for Clojure to boot up as you change code. Further, once you're finished developing your application and are ready to let it "just work", it's recommended you create an uberjar that you can run with java directly. This will reduce overhead on the board.
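
As a rough sketch of that workflow (assuming org.clojure/tools.namespace is available as a dev dependency and a dev source path is configured; the user namespace below is just illustrative):

;; dev/user.clj
(ns user
  (:require [clojure.tools.namespace.repl :refer [refresh]]))

(defn reset []
  ;; stop any running application state here if needed,
  ;; then reload all changed namespaces
  (refresh))

From the REPL, calling (reset) then picks up code changes without paying the multi-minute JVM boot again.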

Once I've finished putting together a preliminary implementation of pin-ctrl for the BBB, I'll update this post with a link to getting that set up on the board. There will likely be some other Clojure development optimizations and customizations I'll post about in time as well.

RFC

As always please let me know in the comments if there's anything that could use clarification, or if you have any questions. I hope you found this useful.

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.