JavaScript's Persistent Popularity: No one seems to particularly enjoy the language, but everyone is using it

Stack Overflow released the results of their annual developer survey last week, and for the fifth consecutive year JavaScript checked in as the most popular language. This year, 62.5% of developers reported using JavaScript, significantly more than the next non-query language, Java (39.7%).

JavaScript is not only popular but increasingly so, up from 54% in 2015. Its five-year trend, along with those of other prominent languages, can be seen in this graph:

There are a few straightforward reasons for JavaScript's popularity. The language is simple enough to pick up quickly, flexible enough to be used on the client and server side, and comes built into your web browser. As Jeff Atwood, Stack Overflow's cofounder, wrote about a decade ago in the blog that would become his website: "any application that can be written in JavaScript, will eventually be written in JavaScript."

We shouldn't rush to crown JavaScript the greatest language to grace our hard drives though. The survey's respondents are overwhelmingly web developers, making their preference for JavaScript somewhat of a foregone conclusion. Plus, being the most prevalent language doesn't map directly to being the most enjoyable or profitable.

In fact, 40% of JavaScript users didn't express a desire to keep working in the language, compared to just 27% of Rust users and 33% of those who use Smalltalk (a niche language that you most likely won't find in a bootcamp curriculum). And while Clojure developers report average salaries of nearly $80,000, JavaScript devs are much closer to the middle of the pack at about $55,000.

None of this comes as much of a surprise. Economics 101 tells us that JavaScript's ubiquity makes it a skill unlikely to demand top-level wages, and the language's warts have long been comedic fodder in the dev community:

Of course, the very code that I used to embed that tweet relies on JavaScript. So while we may poke fun at the language, the majority of us will probably still be using it by this time next year, when Stack Overflow releases its 2018 results.

Permalink

PurelyFunctional.tv Newsletter 218: Lisp and Flow

Issue 218 – March 27, 2017

Hi Clojurers,

I’ve been doing a lot of research lately into the experience of flow and expertise. I really like Lisp. I think Lisp complements the way I think very well. And one of the aspects of it that engages me so much is that experience of flow that you get when you’re deep in a programming session. But what is it about Lisp that helps me get into flow? That’s what I’ve been researching.

Please enjoy this flowing issue.

Rock on!
Eric Normand <eric@purelyfunctional.tv>

PS Want to get this in your email? Subscribe!


The REPL

I messed up the link to this awesome Clojure newsletter (by Daniel Compton) last week, so here it is. I double-checked that it works 🙂

BTW, thanks to everyone who did let me know about the mistake. I always appreciate it.


Peak: Secrets from the New Science of Expertise

Peak is Anders Ericsson’s popularizing book about the science of expertise. One of the most important aspects of becoming an expert is fast feedback. In other words, we need to quickly see the results of our actions. Fast feedback has been part of the Lisp experience from very early on: it comes with the REPL. And it’s vitally important to learning skills.
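
As a tiny illustration of that feedback loop, every expression typed at a Clojure REPL is evaluated the moment you enter it, so you see the result of each change immediately:

user=> (defn square [x] (* x x))
#'user/square
user=> (square 7)
49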


Flow: The Psychology of Optimal Experience

Flow is the classic book about the experience of flow by Mihaly Csikszentmihalyi. Sadly, I’ve never read it, but it’s the definitive book in the field and I believe there is a law that you 1) have to refer to this book and 2) apologize for pronouncing the name wrong. It’s now on my reading list.


Deep Work: Rules for Focused Success in a Distracted World

Deep Work is a great book by Cal Newport. It explores the necessity for mental space and focus in our ever more connected world. Its practical advice has had a deep influence on me and my work.


Loopy

Nicky Case has created a new tool for building your own interactive simulations. It’s simple but can create powerful systems.


KLIPSE

KLIPSE is a plugin for embedding live, interactive code editors in web pages. It works for a variety of languages (including Clojure). Tools like this can be used to create interactive documents using programming languages.


Paren Soup

Another great embeddable editor, this one is just for Clojure. It comes from Zach Oakes, the creator of Nightcode. It has parinfer and an instarepl.


ClojureScript meeting in Portland

I’m going to be speaking at the PDX Clojure meetup this Wednesday, the day before Clojure/West. Thanks to Julio Barros for putting this together. I’m also talking at Clojure/West about generative testing.


The Thank You Economy

I really value mental space and undistracted work. I love focus and flow. I love going deep. And achieving those things requires, at least for me, turning off communication media like email, Twitter, and Slack for long periods of time. This also seems to match a thread in the zeitgeist that distrusts social media and the distraction it can be.

For the sake of completeness, I wanted to hear another perspective. I’ve been reading Gary Vaynerchuk’s book The Thank You Economy. As you may know, Gary Vaynerchuk is very connected through social media. He writes and talks a lot about it. I’ve never been one to think that it’s destructive or inscrutable. It’s just chit-chat and peacocking, which is nothing new.

What I’ve learned from Vaynerchuk’s work is that we are already in a world, whether we like it or not, that is totally connected and distracted. And we can reach people on an individual level all over the world, as opposed to on a mass public level, like we could with TV and newspapers. The ability to talk to people one-on-one is invaluable to us as individuals. It lets us understand each other better, but it comes down to how we use the tools. I’m now experimenting with how to put social media to good use in my life.

Permalink

The REPL

Graphql, Darkleaf, conflict resolution

Notes.

Welcome to another week of The REPL. Please feel free to send me any links for interesting Clojure stuff, as I don't always catch everything.

-main

Libraries & Books.

  • Walmart Labs has open-sourced Lacinia, a GraphQL library. They also have a good writeup on their motivations for creating it.
  • Codependence is an extension to Integrant.
  • Fixpoint provides clojure.test fixtures and datasources.
  • Neanderthal 0.9.0 looks like it is going to be blazingly fast; the release is just around the corner.
  • Darkleaf is a bidirectional Ring router. Great name.

Tools.

  • IntelliJ 2017.1 is out

Recent Developments.

  • CLJ-2133: Clarify documentation for the satisfies? function

Misc.

  • My favourite schadenfreude is still CA schadenfreude.
Copyright © 2017 Daniel Compton, All rights reserved.



Permalink

Comparing Reagent to React.js and Vue.js for dynamic tabular data

I recently ran across a comparison of React.js to Vue.js for rendering dynamic tabular data, and I got curious to see how Reagent would stack up against them.

The benchmark simulates a view of football games represented by a table. Each row in the table represents the state of a particular game. The game states are updated once a second, triggering UI repaints.

I structured the application similarly to the React.js version in the original benchmark. The application has a football.data namespace to handle the business logic, and a football.core namespace to render the view.

Implementing the Business Logic

Let's start by implementing the business logic in the football.data namespace. First, we'll need to provide a container to hold the state of the games. To do that we'll create a Reagent atom called games:

(ns football.data
  (:require [reagent.core :as reagent]))

(defonce games (reagent/atom nil))

Next, we'll add a function to generate the fake players:

(defn generate-fake-player []
  {:name               (-> js/faker .-name (.findName))
   :effort-level       (rand-int 10)
   :invited-next-week? (> (rand) 0.5)})

You can see that we're using JavaScript interop to leverage the Faker.js library for generating the player names. One nice aspect of working with ClojureScript is that JavaScript interop tends to be seamless, as seen in the code above.
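
To make the interop mapping explicit: the threading form above is just property access plus a method call. Here's a small illustrative sketch of the expansion.

;; (-> js/faker .-name (.findName)) expands to the nested call below,
;; which is the JavaScript expression faker.name.findName()
(.findName (.-name js/faker))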

Now that we have a way to generate the players, let's add a function to generate fake games:

(defn generate-fake-game []
  {:id                 (-> js/faker .-random (.uuid))
   :clock              0
   :score              {:home 0 :away 0}
   :teams              {:home (-> js/faker .-address (.city))
                        :away (-> js/faker .-address (.city))}
   :outrageous-tackles 0
   :cards              {:yellow 0 :red 0}
   :players            (mapv generate-fake-player (range 4))})

With the functions to generate the players and the games in place, we'll now add a function to generate a set of initial game states:

(defn generate-games [game-count]
  (reset! games (mapv generate-fake-game (range game-count))))

The next step is to write the functions to update the games and players to simulate the progression of the games. This code translates pretty much directly from the JavaScript version:

(defn maybe-update [game prob path f]
  (if (< (rand-int 100) prob)
    (update-in game path f)
    game))

(defn update-rand-player [game idx]
  (-> game
      (assoc-in [:players idx :effort-level] (rand-int 10))
      (assoc-in [:players idx :invited-next-week?] (> (rand) 0.5))))

(defn update-game [game]
  (-> game
      (update :clock inc)
      (maybe-update 5 [:score :home] inc)
      (maybe-update 5 [:score :away] inc)
      (maybe-update 8 [:cards :yellow] inc)
      (maybe-update 2 [:cards :red] inc)
      (maybe-update 10 [:outrageous-tackles] inc)
      (update-rand-player (rand-int 4))))

The last thing we need to do is to add the functions to update the game states at a specified interval. The original code uses Rx.js to accomplish this, but it's just as easy to do using the setTimeout function with Reagent:

(defn update-game-at-interval [interval idx]
  (swap! games update idx update-game)
  ;; arguments after the delay are passed along to the callback, so this
  ;; schedules (update-game-at-interval interval idx) to run again
  (js/setTimeout update-game-at-interval interval interval idx))

(def event-interval 1000)

(defn update-games [game-count]
  (dotimes [i game-count]
    (swap! games update i update-game)
    (js/setTimeout #(update-game-at-interval event-interval i)
                   (* i event-interval))))

The update-games function updates the state of each game, then sets up a timeout for the recurring updates using the update-game-at-interval function. Staggering the initial timeouts by (* i event-interval) spreads the games' updates across the interval instead of firing them all at once.

Implementing the View

We're now ready to write the view portion of the application. We'll start by referencing the football.data namespace in the football.core namespace:

(ns football.core
  (:require
    [football.data :as data]
    [reagent.core :as reagent]))

Next, we'll write the components to display the players and the games:

(defn player-component [{:keys [name invited-next-week? effort-level]}]
  [:td
   [:div.player
    [:p.player__name
     [:span name]
     [:span.u-small (if invited-next-week? "Doing well" "Not coming again")]]
    [:div {:class-name (str "player__effort "
                            (if (< effort-level 5)
                              "player__effort--low"
                              "player__effort--high"))}]]])

(defn game-component [game]
  [:tr
   [:td.u-center (:clock game)]
   [:td.u-center (-> game :score :home) "-" (-> game :score :away)]
   [:td.cell--teams (-> game :teams :home) "-" (-> game :teams :away)]
   [:td.u-center (:outrageous-tackles game)]
   [:td
    [:div.cards
     [:div.cards__card.cards__card--yellow (-> game :cards :yellow)]
     [:div.cards__card.cards__card--red (-> game :cards :red)]]]
   (for [player (:players game)]
     ^{:key player}
     [player-component player])])

(defn games-component []
  [:tbody
   (for [game @data/games]
     ^{:key game}
     [game-component game])])

(defn games-table-component []
  [:table
   [:thead
    [:tr
     [:th {:width "50px"} "Clock"]
     [:th {:width "50px"} "Score"]
     [:th {:width "200px"} "Teams"]
     [:th "Outrageous Tackles"]
     [:th {:width "100px"} "Cards"]
     [:th {:width "100px"} "Players"]
     [:th {:width "100px"} ""]
     [:th {:width "100px"} ""]
     [:th {:width "100px"} ""]
     [:th {:width "100px"} ""]]]
   [games-component]])

You can see that HTML elements in Reagent components are represented using Clojure vectors and maps. Since s-expressions cleanly map to HTML, there's no need for an additional DSL. You'll also notice that components can be nested within one another the same way as plain HTML elements.
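
For instance, here is roughly how a few of the vectors above correspond to HTML (an illustrative sketch, not literal Reagent output):

[:td.u-center 42]          ;; => <td class="u-center">42</td>
[:tr [:td "a"] [:td "b"]]  ;; => <tr><td>a</td><td>b</td></tr>
[:div {:class "cards"}]    ;; => <div class="cards"></div>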

One thing to note is that the games-component dereferences the data/games atom using the @ notation. Dereferencing simply means that we'd like to view the current state of a mutable variable.

Reagent atoms are reactive, and listeners are created when the atoms are dereferenced. Whenever the state of the atom changes, any components that are observing the atom will be notified of the change.
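
Here is a minimal sketch of that mechanism, using a hypothetical click counter separate from the football app:

(def clicks (reagent/atom 0))

(defn click-counter []
  ;; dereferencing clicks registers this component as a listener, so
  ;; every (swap! clicks inc) automatically re-renders the button
  [:button {:on-click #(swap! clicks inc)}
   "Clicked " @clicks " times"])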

In our case, changes in the state of the games atom will trigger the games-component function to be evaluated. The function will pass the current state of the games down to its child components, and this will trigger any necessary repaints in the UI.

Finally, we have a bit of code to create the root component represented by the home-page function, and initialize the application:

(defn home-page []
  [games-table-component])

(defn mount-root []
  (reagent/render [home-page] (.getElementById js/document "app")))

(def game-count 50)

(defn init! []
  (data/generate-games game-count)
  (data/update-games game-count)
  (mount-root))

We now have a naive implementation of the benchmark using Reagent. The entire project is available on GitHub. Next, let's take a look at how it performs.

Profiling with Chrome

When we profile the app in Chrome, we'll see the following results:

Reagent Results

Here are the results for React.js and Vue.js running in the same environment for comparison:

React.js Results

Vue.js Results

As you can see, the naive Reagent version spends about double the time scripting compared to React.js, and about four times as long rendering.

The reason is that we're dereferencing the games atom at the top level. This forces the top-level component to be reevaluated whenever the state of any game changes.

Reagent provides a mechanism for dealing with this problem in the form of cursors. A cursor allows subscribing to changes at a specified path within the atom. A component that dereferences a cursor will only be updated when the data the cursor points to changes. This allows us to granularly control what components will be repainted when a particular piece of data changes in the games atom. Let's update the view logic as follows:

(defn player-component [player]
  [:td
   [:div.player
    [:p.player__name
     [:span (:name @player)]
     [:span.u-small
      (if (:invited-next-week? @player)
        "Doing well" "Not coming again")]]
    [:div {:class-name (str "player__effort "
                            (if (< (:effort-level @player) 5)
                              "player__effort--low"
                              "player__effort--high"))}]]])

(defn game-component [game]
  [:tr
   [:td.u-center (:clock @game)]
   [:td.u-center (-> @game :score :home) "-" (-> @game :score :away)]
   [:td.cell--teams (-> @game :teams :home) "-" (-> @game :teams :away)]
   [:td.u-center (:outrageous-tackles @game)]
   [:td
    [:div.cards
     [:div.cards__card.cards__card--yellow (-> @game :cards :yellow)]
     [:div.cards__card.cards__card--red (-> @game :cards :red)]]]
   (for [idx (range (count (:players @game)))]
     ^{:key idx}
     [player-component (reagent/cursor game [:players idx])])])

(def game-count 50)

(defn games-component []
  [:tbody
   (for [idx (range game-count)]
     ^{:key idx}
     [game-component (reagent/cursor data/games [idx])])])

(defn games-table-component []
  [:table
   [:thead
    [:tr
     [:th {:width "50px"} "Clock"]
     [:th {:width "50px"} "Score"]
     [:th {:width "200px"} "Teams"]
     [:th "Outrageous Tackles"]
     [:th {:width "100px"} "Cards"]
     [:th {:width "100px"} "Players"]
     [:th {:width "100px"} ""]
     [:th {:width "100px"} ""]
     [:th {:width "100px"} ""]
     [:th {:width "100px"} ""]]]
   [games-component]])

(defn home-page []
  [games-table-component])

The above version creates a cursor for each game in the games-component. The game-component in turn creates a cursor for each player. This way only the components that actually need updating end up being re-rendered as the state of the games changes. Let's profile the application again to see how much impact this has on its performance:

Reagent Results

The performance of the Reagent code using cursors now looks similar to that of the Vue.js implementation. You can see the entire source for the updated version here.

Conclusion

In this post we saw that ClojureScript with Reagent provides a compelling alternative to JavaScript offerings such as React.js and Vue.js.

Reagent allows writing succinct solutions that perform as well as those implemented using native JavaScript libraries. It also provides us with tools to intuitively reason about what parts of the view are going to be updated.

We also get many benefits simply by switching from JavaScript to ClojureScript.

For example, we already saw that we didn't need any additional syntax, such as JSX, to represent HTML elements. Since HTML templates are represented using regular data structures, they follow the same rules as any other code. This allows us to transform them just like we would any other data in our project.
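
For instance, since a template is just a vector, we can build markup with ordinary sequence functions. Here is a hypothetical helper, not part of the benchmark code:

(defn bullet-list [items]
  [:ul (for [item items]
         ^{:key item} [:li item])])

(bullet-list ["home" "away"])
;; => [:ul ([:li "home"] [:li "away"])]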

In general, I find ClojureScript to be much more consistent and less noisy than equivalent JavaScript code. Consider the implementation of the updateGame function in the original JavaScript version:

function updateGame(game) {
    game = game.update("clock", (sec) => sec + 1);

    game = maybeUpdate(5, game, () => game.updateIn(["score", "home"], (s) => s + 1));
    game = maybeUpdate(5, game, () => game.updateIn(["score", "away"], (s) => s + 1));
    
    game = maybeUpdate(8, game, () => game.updateIn(["cards", "yellow"], (s) => s + 1));
    game = maybeUpdate(2, game, () => game.updateIn(["cards", "red"], (s) => s + 1));

    game = maybeUpdate(10, game, () => game.update("outrageousTackles", (t) => t + 1));

    const randomPlayerIndex = randomNum(0, 4);
    const effortLevel = randomNum();
    const invitedNextWeek = faker.random.boolean();

    game = game.updateIn(["players", randomPlayerIndex], (player) => {
        return player.set("effortLevel", effortLevel).set("invitedNextWeek", invitedNextWeek);
    });

    return game;
}

Compare it with the equivalent ClojureScript code:

(defn update-rand-player [game idx]
  (-> game
      (assoc-in [:players idx :effort-level] (rand-int 10))
      (assoc-in [:players idx :invited-next-week?] (> (rand) 0.5))))

(defn update-game [game]
  (-> game
      (update :clock inc)
      (maybe-update 5 [:score :home] inc)
      (maybe-update 5 [:score :away] inc)
      (maybe-update 8 [:cards :yellow] inc)
      (maybe-update 2 [:cards :red] inc)
      (maybe-update 10 [:outrageous-tackles] inc)
      (update-rand-player (rand-int 4))))

The ClojureScript version has a lot less syntactic noise, and I find this has a direct impact on my ability to reason about the code. The more quirks there are, the more likely I am to misread the intent. Noisy syntax results in situations where code looks like it's doing one thing, while it's actually doing something subtly different.

Another advantage is that ClojureScript is backed by immutable data structures by default. My experience is that immutability is crucial for writing large maintainable projects, as it allows safely reasoning about parts of the code in isolation.
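
To make that concrete, updating a value never disturbs anyone holding the old one, as this standalone sketch shows:

(def before {:score {:home 0 :away 0}})
(def after  (update-in before [:score :home] inc))

before ;; => {:score {:home 0 :away 0}}, unchanged
after  ;; => {:score {:home 1 :away 0}}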

Since immutability is pervasive as opposed to opt-in, tooling can be designed with it in mind. For example, the Figwheel plugin relies on this property to provide live hot reloading in the browser.

Finally, the ClojureScript compiler can perform many optimizations, such as dead code elimination, that are difficult to do with JavaScript. I highly recommend David Nolen's talk Now What?, which goes into more detail on this.

Overall, I'm pleased to see that ClojureScript and Reagent perform so well when stacked up against native JavaScript libraries. It's hard to overstate the fact that a ClojureScript library built on top of React.js can outperform React.js itself.

Permalink

crepl and c3e at Dutch Clojure Days 2017

Today was the 2017 edition of Dutch Clojure Days (DCD17), the annual international gathering of Clojure enthusiasts and practitioners in the Netherlands. At DCD17 I presented a lightning talk about crepl.

crepl is a collaborative editor written in Clojure and ClojureScript. It's like Google Docs where you can also run the ClojureScript you write in the browser. And by using atom sync you can keep state in sync across every browser running the code, which is useful when working on code with UIs. These previous blog posts have more information about crepl and its features:

Read more

Permalink

Manipulating the DOM with Clojure using Klipse

The Klipse plugin is a client-side code evaluator.

This means that inside a web page you are not limited to manipulating data: you can also manipulate the DOM.

In this article we will show 4 approaches for manipulating the DOM with Clojure using Klipse:

  • reagent
  • the Klipse container
  • the html editor type
  • a custom DOM element

1. Reagent

(require '[reagent.core :as r])
[:div
  "Hello "
  [:strong "World!"]]

For a full explanation about using reagent inside Klipse, have a look at Interactive reagent snippets.

And if you want very cool material about reagent, read this reagent deep dive series and How to use a charting library in Reagent.

2. The Klipse container

Each Klipse snippet is associated with a container - a DOM element that is just below the Klipse snippet and accessible with js/klipse-container and js/klipse-container-id:

(set!
 (.-innerHTML js/klipse-container)
 "<div style='color: blue;'> Hello <b>Container</b>!</div>")
""
(set!
  (.-innerHTML (js/document.getElementById js/klipse-container-id))
  "<div style='color: red;'> Hello <b>Container Id</b>!</div>")
""

The reason we have the "" at the end of each snippet is so that the result box shows an empty string instead of the return value of the set! expression.

3. Html editor type

You can also have a Klipse snippet with data-editor-type="html": the result of evaluating the snippet becomes the innerHTML of the result box.


"Hello <strong>HTML editor</strong>"

4. A custom DOM element

Another thing you can do is add a DOM element to your page (a div, a canvas, or anything you want) and manipulate it with your Klipse snippet.

On this page we have inserted a <div id="my-custom-container"> just above the Klipse snippet.

(set!
  (.-innerHTML (js/document.getElementById "my-custom-container"))
  "<div style='color: green;'> Hello <b>Custom Container</b>!</div>")

There are a couple of blog posts with lots of creative stuff using this approach.

Permalink

Ryzen is for Programmers

When I heard about AMD’s Ryzen series CPU launch I immediately wondered how much performance I could get out of one. This article relates my experience building a PC to replace my much-loved but aging 2013 Macbook Pro with a Ryzen powerhouse. First I’ll share the reasons why I wanted to build my own PC. Then I’ll run through the build process. At the end are my programmer-oriented task timings comparing my new machine with my laptop.






Why build a PC?

Tim, do you *really* need a faster computer? Aren’t you just editing text files?

Ha ha ha yes. Ultimately I am just editing text files, and could easily do that on my favorite PC of all time, my university days 386. But those little text files drive a system, and that system is a hungry bear. My work involves back-end development, web interface development, browser testing, and video conferencing. So let’s take a look at what a day in my development life looks like, and why compute power matters to me.

Back-end service development:

Our development environment consists of 12 docker containers in a VirtualBox host. The containers provide a database, the core logic service, an event pipeline, data service, and Nginx routing. All of them except for the database and Nginx proxy are Clojure services that use between 500 MB and 8 GB of memory in production. In development they squeeze into 8 GB of memory. On my Macbook Pro I cannot fit two customer specific environments into the 16 GB of memory available. An attempt to run them side by side results in so much thrashing that the only way to regain control is to hard reboot. It’s a fairly heavy development environment, I’ll be the first to agree, but not unusually so. The biggest advantage of running a full stack is that it is pretty close to how the software gets deployed to production, so I can do end to end testing.

Web interface development:

Our User Interface (UI) is a Reagent/ClojureScript app. As you might have guessed already from previous posts, I really like Reagent/ClojureScript. Using the same language, tools, and mental models on the front end and back end makes it easy to work on features that traverse the whole system. Our UI has grown to be about 7k lines of code. Still a small/medium sized code base by most measures, but large enough that performance while developing is becoming a drag. ClojureScript has slow initial compile times. As your project gets large you end up with many dependencies, and this leads to hundreds of files being loaded in development mode. And if you want to test the advanced mode compilation output, you need to be patient for a full build on every change (so don’t make too many mistakes with that!). Also the page itself can end up doing more work than I’d like. In order to be able to tune it, I need to be able to run and profile the slow code, then do it all again.

Browser testing:

I run virtual machines for IE8, 9, 10, 11, and Edge. On my Macbook Pro they crawl.

Video conferencing:

Our development team consists entirely of promiscuous pair programming practitioners. That means we all spend all day with video chat on and sharing our screens. Tmate is the least resource intensive way for us to share our terminals, and often preferred for this reason. Typing responsiveness is excellent, so bouncing between driver and observer is easy. Sometimes Tmate isn’t enough. Sometimes we need to share a browser screen. I like using Cursive (a Clojure IDE), which doesn’t work over Tmate, so I share that too. We use screen sharing regularly. Google Hangouts was way too resource intensive for doing work while conferencing. Instead we use zoom.us, which is a native video chat application. Zoom is much less resource intensive, but still uses at least 15% of my Macbook Pro when pairing. When I screen share that goes up to 30% and starts to affect the responsiveness of my system. Sometimes we stop sharing and mute video while a process is starting up. Pair programming is fantastic, but it can be hampered when my system is under heavy load.

Tim, why didn’t you buy a 2017 Macbook Pro?

Macs are nice. They work out of the box. They look fantastic. They are reliable. They are best in class. They have great software. Homebrew works pretty well for development dependencies. However the latest Macbook Pro is not a big leap forward in processing power. I do think the latest Macbook is great… But according to the pre-launch benchmarks, the Ryzen CPU was ahead on horsepower. When I do travel, my trusty 2013 Macbook Pro is perfectly fine. So I am willing to go with a desktop solution if it provides a performance improvement.

But why build when you can buy an out of the box PC or iMac?

I want the fastest hardware at a reasonable price. Pre-built high end systems are expensive, and are often optimized for use cases that aren’t quite what I have in mind. I can do better. Building a PC is easy. Modular components make up a computer. You just order them and put them together inside a case. All you need is a screwdriver. Components only fit in their assigned spot, and there are plenty of instructions accompanying the components and on the web to guide you through the process. The end result is a low cost, high spec custom built desktop. Also, I must admit to harboring a little hubris; to me building my PC is akin to building my own light-saber.

The programmer’s guide to a DIY PC build:





  1. Set a budget
    • Performance goal: I’m aiming for double the power of my 2013 model Macbook Pro
    • Cost of a boxed solution: A fully loaded 2017 Macbook Pro retails for $3500. A premium desktop PC retails between $2000-$3000
    • Value judgement: I can build a *better* than premium desktop for $2000. Everything will be customized exactly how I want it
  2. Allocate the budget between components
    • CPU, Memory, Storage, Motherboard/Power Supply Unit/Case, Peripherals.
    • An equal distribution between these categories is a good starting point. About $400 in each of the categories listed above. Most premium PCs are gaming systems, and are dominated by a hefty graphics card. For programming I am focused on CPU/Memory/Storage.
  3. Select a draft build. Use PCPartPicker to spec out the system. This tool will check the compatibility of parts, highlight any missing parts, and compare prices from multiple vendors.
    • It helps to have a CPU in mind to narrow down the options.
    • Sorting parts by price and choosing something appealing in the price range allocated in the component budget is a good starting point for the draft.
    • Visiting a premium PC builder site such as iBuyPower can provide some ideas. Keep in mind that they have a different pricing model and cater to gaming, so no need to follow them too closely.
  4. Research benchmarks and reviews for each component from your draft. Find candidate components to swap around. Lock in on 1 to 3 products per component that are excellent value propositions in the desired budget range.
  5. Prepare to purchase.
    • Are all the selected components compatible? Read the product specifications carefully.
    • Verify your expected power consumption.
    • Does your case fit your components?
    • Do you have a rough idea of how everything will fit together?
    • Are any of the components over spec or under spec?
  6. Buy. Don’t pay full price!
    • PCPartPicker compares prices from several vendors, but is not comprehensive or always up to date. If you search around you will definitely find even better deals. I recommend doing a Google search by part number. Then a Slickdeals search. Then visit Newegg, Best Buy, and Amazon directly and search for the product.
    • 30% off list price is a good deal on components and peripherals. 10% off is not great, you might want to keep searching… or look at a different product in the same performance band that has a better deal. Compare the bottom line cost, because some seller “discounts” aren’t.
    • Buy parts individually as you find a good deal. These days free shipping is never a problem, so there is rarely any advantage to consolidating an order. Don’t try to fill your cart up with everything you need because:
      • It is complicated to figure out an optimal set of purchases across vendors that contains all your parts.
      • It is simpler to focus on making one good purchase at a time.
      • Missing a good deal can delay the build while looking for an alternative.
      • There are enough vendors and products that you can be confident of filling your full system order, and you already have a baseline from PCPartPicker on the worst case scenario.
    • Consider joining Ebates. It is very easy to sign up for. You receive a 1-5% cashback on purchases sent as a check to you in the mail. If you join as a referral from the green button here you will also get a $10 bonus after your first purchase (and I’ll get a $5 bonus!)
      All the major component suppliers have Ebates cash back, so you will definitely end up with an extra $30-60 in your pocket. And yes, you will still earn points or cashback from your credit card separately.
  7. Assembly.
    • Start with the case; it will have a build manual that walks through the important steps, such as attaching your CPU and RAM to the motherboard before putting it in the case. Each component you unbox will have its own manual. Read the manual. Most manuals are short, pictorial, and cryptic. However they contain essential information that will make the build go smoothly.
    • Take your time. Components fit together easily and neatly by following the diagrams.
    • There are heaps and heaps of build guides on the web, with in-depth analysis of topics like thermal paste application, fan configuration, and every other consideration you might have. I chose the “pea” method for thermal paste, and front mounted my radiator based on advice from enthusiasts on YouTube.





My component purchases:


Case $80 Fractal Design Define R5

I love this case. Roomy! Removing all the drive bays was easy and freed up even more space. Who uses 5.25” bays? There is a spot for SSDs behind the motherboard. Cables are nicely hidden away behind the motherboard backing plate and tied neatly with velcro straps. Thumbscrews appear pretty much everywhere, which makes building much less fiddly than regular screws. Two good fans are included. Top mounted power and USB ports are easy to access over a monitor. The build instructions were clear and well presented. The top holes for CPU power are too small to pass the CPU power cable head through while the rubber grommet is in place, so I needed to kink the grommet temporarily to pass it through. Replacing the top removable panels takes a bit of care; one end is a lock while the other is a latch. The case has removable dust filters. I did not connect the front panel fan controller (what benefit is there to manually changing fan speed?). Instead I connected my fans to motherboard pins. I had no use for the second included fan because of the large cooling unit I installed.

Power supply $80 EVGA SuperNOVA 550 G3, 80+ GOLD, 550W

I loved unboxing and installing this PSU. It looks fantastic, is small, fully modular and easy to install. Cabling was easy, neat and fast.

CPU $500 Ryzen 1800X

The Ryzen proposition is that you get equal performance to the best consumer grade Intel CPU at half the price. I bought it at launch which was a bit of a risk, but that price is very attractive for the best CPU performance available. This value proposition is the catalyst for the build. So yeah, gonna go for the big one here. Now that the launch is over, it seems that the 1700 and 1700X are definitely better value. I am happy with my 1800X, it might not have been as cost efficient, but it is the fastest! And really that’s my whole goal here, to have a dramatically faster system than my Macbook Pro.

Cooling $100 Cooler Master MasterLiquid Pro 280

Urrrgh! I have angst about this component. I bought this cooler believing that it would support AM4 (the Ryzen socket) with a bracket. But the bracket is not available and will not be until April at the earliest. So how exactly am I running my system? The cooler is jerry-rigged on with wire and a cable tie. The good news is that it’s running nice and cool between 30-40 Celsius. But really, is this a good idea? No. The sad thing is that all I need is a little plate of metal with the right dimensions to allow me to screw the cooler head securely to the motherboard. Searching for a cooler (air or water) was a real letdown. It was very confusing as to which products have brackets now as opposed to promising them in the future. There was only a very limited selection of AM4 CPU cooling options at launch. When I was able to find a product (water or air), it was out of stock or overpriced. The MasterLiquid Pro 280 specifically has some issues. The pipes are quite rigid, which makes it difficult to position. The pipes are also only just long enough. These two factors cause torque on the CPU attachment. This made it very challenging to jerry-rig because the CPU attachment had a tendency to twist and slide away from the correct position. It also produces more noise than I would like, even with the fans configured to the “silent” setting in BIOS. On the other hand my CPU never goes above 40 Celsius under medium load with the radiator fans running at 20%. Temperatures of up to 80 Celsius are considered normal for operation. This cooler is working very effectively! There is something cool about water, and the installation looks awesome in the case. I’m happy with the component now that it is installed. If you don’t like the water cooling options, then go with a heat-sink and fan air cooler instead. Temperature only makes a difference for overclocking, and air coolers are very good. Air is cheaper, easier to install, quieter and plenty cool enough. Whichever you choose, check the AM4 compatibility carefully. I hope fanless cooling systems become available.

Motherboard $100 Asus PRIME B350-PLUS ATX AM4

B350 is not the premium chipset (X370), but the only difference is the lack of SLI. SLI is only necessary if you want to run dual ultra graphics cards. I do not. You will need to pick up a dedicated video card (the motherboard has an HDMI port, but there are no AM4 CPUs with integrated graphics available yet). The Asus PRIME was the only motherboard I could find available for purchase aside from the more expensive Asus X370 gaming offering. Supposedly there should be a selection of motherboard options, but to actually buy one the choices at launch were limited. Fortunately the Asus PRIME is a *really* nice board at an inexpensive price point. By default the BIOS fan settings are at full power. I changed the chassis fans from DC to PWM mode on the silent profile, and the CPU fan to silent. But it isn’t silent. There is still audible fan noise. The board looks great, is well built, and has excellent specifications. The only minor annoyance I had was that it is actually slightly smaller than the full ATX form factor, which means that it has a few fewer riser connections to the case, so the right side of the board remains a tad flexible.

Memory $185 G.Skill TridentZ Series 32GB (2 x 16GB) DDR4-3200

Choosing memory sounds easy. You need a good amount of any brand-name RAM with reasonable specs. Unfortunately the RAM market is extremely variable for such a standard component. For example the exact same RAM I bought for $185 currently retails at $325. You have to shop around between many different manufacturers and ratings to find a good price. After I got my system set up and Ubuntu installed, I went into the BIOS, used DOCP to detect my RAM’s rated profile, and rebooted at 3200… and nothing. It is a sickening feeling when your wiz-bang computer will not even boot to BIOS anymore. It made me queasy resetting the BIOS. I popped out the CMOS battery and shorted a special pair of pins with a screwdriver as per the motherboard manual. After replacing the battery, the machine booted, almost! It did about 5 false starts before it finally loaded back into BIOS. Just because I have 3200 memory doesn’t mean I can use it at that speed. No big deal, I only got it because it was cheap from a reliable brand. The default/stable DDR4 speed for my motherboard is 2333, so I do not need anything rated higher. There is a spreadsheet that is useful for comparing various speed and latency ratings on RAM.

Storage $480 Samsung 960 Evo 1TB M.2-2280 Solid State Drive

This was an expensive component. The performance specs are far out in front of “normal” SSDs, and leave physical HDDs for dead. Given it outperforms “normal” SSDs by such a large margin, I was willing to pay the price. As a programmer I host databases, process large files, and load virtual machines from disk regularly. I anticipate fast storage having a significant impact on overall system performance for my usage. 1TB is possibly more than I need. My Macbook Pro has 512GB and I’ve only had to clean out files once. It is nice to have double the capacity, but the read/write performance is what I’m primarily after here. An attractive alternative is to get a smaller 256GB M.2 drive, and a large “normal” SSD. It would be much cheaper. I went with the 1TB seeing as there is only one M.2 drive slot, and I’d rather not have to think too much about where files belong. Installing the M.2 form factor was super easy; it’s held in by just one screw. Gosh they are small!




Total cost

All up these components cost $1525 excluding peripherals. Considering that these are all premium components with higher specs than you can even select for a pre-built system, that’s a screaming deal! It is also worth noting that by choosing a “normal” SSD and going with the slightly slower Ryzen 1700 with its included Wraith Spire cooler instead of water cooling, you can still build a pretty fantastic system for under $1000.

Performance comparison

I now have a desktop PC that is 2X better in every dimension than my Macbook Pro, on paper. But does it help me with my day to day activities? I fired up Slack and Chrome on both systems (as I always have these open) and got my stopwatch ready. All times quoted herein are wall-clock times formatted as minutes:seconds.
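
The ratio lines in each comparison below sum each machine's runs in seconds and divide the Macbook Pro total by the Ryzen total. As a quick Clojure sketch of that arithmetic (illustrative only, not part of any benchmark):

(require '[clojure.string :as str])

(defn mm-ss->secs [t]
  (let [[m s] (map #(Integer/parseInt %) (str/split t #":"))]
    (+ (* 60 m) s)))

;; total MBP seconds divided by total Ryzen seconds
(defn speedup [mbp-runs ryzen-runs]
  (double (/ (reduce + (map mm-ss->secs mbp-runs))
             (reduce + (map mm-ss->secs ryzen-runs)))))

(speedup ["2:45" "2:38"] ["1:30" "1:28"])
;; => 1.81... (the 323/178 == 1.8 in the docker-compose comparison)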

Paper match up:

Macbook Pro (2013 model):
2.3 GHz Intel Core i7 (4 cores),
16 GB 1600 MHz DDR3,
512G SSD.

Ryzen PC (2017 custom build):
3.6GHz Ryzen 1800X (8 cores),
32 GB 2333 MHz DDR4,
1TB NVMe SSD.




Docker-compose up:

This is the command I use to fire up my development environment, when I switch customers, and when I make a system configuration change. It starts the 12 docker containers in an already running host.

Ryzen: 1:30 1:28 with zoom share 1:41 1:41
MBP: 2:45 2:38 with zoom share 2:59 2:54
323/178 == 1.8 with zoom share 353/202 == 1.7

Just short of 2X.

How about docker-compose up two environments simultaneously?

Ryzen: with zoom share 2:30 2:29
MBP: Cannot do

I can fire up two customer specific environments simultaneously in 30 seconds less than it takes to fire up one customer specific environment on my Macbook Pro! I think this result is due to the extra cores in my Ryzen. The Macbook Pro has 4 cores, while the Ryzen has 8 cores. For this particular scenario I’m getting more than 2X performance, but it is not a direct comparison because the Macbook Pro cannot fit both environments in memory at the same time. This test highlights that the extra capacity provides a capability that was missing before (running two customer specific environments simultaneously).

Make up:

This command sets up an environment from scratch. It builds containers. It compiles ClojureScript, migrates database schemas, and does a docker up.

Ryzen: 4:30 4:38 with zoom sharing: 4:59 5:14
MBP: 6:24 6:12 with zoom sharing: 7:12 7:18
756/548 == 1.38 with zoom sharing: 870/613 == 1.42

Pretty handy, but not the full 2X I was hoping for. Maybe `make up` is bottle-necking in other departments. I did a little digging and vagrant up takes about the same time on both systems, accounting for 30 seconds. So there is an example of something that was not improved at all. Perhaps I’m bottle-necking on network for some tasks?

During these tests my Macbook Pro goes bonkers. Full fans, hot to touch. The Ryzen? Fans at the same low hum, CPU sitting cool at 40 celsius. But the most noticeable difference is that when I did subsequent runs I also did some multi-tasking. The Ryzen remains fully responsive and completely usable. You wouldn’t even know that it had anything else to do. The Macbook Pro has some serious lag going on.

I had expectations that my PC would be more responsive. The Ryzen greatly exceeded those expectations. The whole experience of using my Ryzen PC is a big leap from using my Macbook Pro. On the Ryzen everything is noticeably snappier. Having the extra capacity available is great.

Compiling ui.js:

I was hoping that ClojureScript compile times would be dramatically better with fast disk and more CPU. To my surprise, the improvement wasn't as large as I anticipated. Perhaps the compiles are dominated by single-threaded CPU performance?

Ryzen: 0:18 0:19 0:18 with zoom sharing 0:20 0:20
MBP: 0:24 0:24 0:24 with zoom sharing: 0:27 0:27
72/55 == 1.3 with zoom sharing 54/40 == 1.4

Profiling a web page:

Ryzen: Scripting 1109ms



MBP: Scripting 1681ms



1681/1109 == 1.5

To be able to tune and improve web pages, it helps to be able to profile them quickly. The Ryzen gets a 1.5X advantage. This page is a pretty standard public facing high traffic site. I wasn't expecting the extra compute power to have such a dramatic impact on normal browsing times. But so much of the web is JavaScript driven these days that it actually makes a big difference. So for profiling, bench-marking, and general web browsing there is a significant advantage here.


Loading an IDE (Cursive/IntelliJ):

I use VIM to edit files here and there because it opens instantly, and jump into Cursive for longer coding sessions.

Ryzen: 0:09 0:09 0:09 with zoom sharing: 0:12 0:11
MBP: 0:11 0:12 0:12 with zoom sharing: 0:14 0:15
35/27 == 1.3 with zoom sharing: 29/23 == 1.3

Starting figwheel:

For interactive web development I first need to fire up `lein figwheel`.

Ryzen with zoom sharing: 0:12 0:11 0:11
MBP with zoom sharing: 0:16 0:15 0:15
46/34 == 1.4

Less waiting, more coding.

Processing a large text file:

I took a 9GB file containing 300k events and used wc (word count) and ag (silver searcher, like grep):

wc enriched_300k.tsv
Ryzen: 0:54 0:54 0:54
MBP: 0:24 0:24 0:24
72/162 == 0.4

wc -l enriched_300k.tsv
Ryzen: 0:01 0:01 0:01
MBP: 0:09 0:09 0:09
27/3 == 9

ag foo enriched_300k.tsv
Ryzen: 0:09 0:09 0:09
MBP: Cannot do (ERR: expected to read 2445307116 bytes but read 4294967295)

grep foo enriched_300k.tsv
Ryzen: 0:01 0:01 0:01
MBP: 1:45 1:46 1:44
315/3 == 105

These results are all over the map. I don't understand why a full wc is faster on the Macbook Pro. Or why grep is ridiculously faster on the Ryzen. Perhaps the implementations differ between OSX and Ubuntu?

Starting an IE9 Virtual Machine (from saved state):

Ryzen: 2s 2s 2s
MBP: 10s 9s 9s
28/6 == 4.7 ?!??!!

Using an IE9 Virtual Machine is very smooth on the Ryzen, it feels like a native browser. I can run the developer tools and profile the page comfortably. On the MBP, the virtual browser is painfully sluggish. I don’t see why the Ryzen does so much better here. Probably there is some VM specific factor at play.

Switching to Linux





I installed Ubuntu. It was really easy to install. I made a bootable USB jump drive on my Macbook Pro, put it in the PC, bam, installed! Amazingly fast. Everything worked out of the box. No driver problems. Great!

I like the Ubuntu interface. Window management is better than the Mac's out of the box. Unity has hotkeys and drag gestures for docking windows left or right, maximizing, and snapping to corners. Positioning windows is a breeze. PrintScreen takes a screenshot of the desktop, and Alt+PrintScreen captures the current application. It brings up a preview so you can edit/rename the image.

The jarring difference from OSX is the slightly different use of common keyboard shortcuts. Copy and paste in the terminal require either shift+ins or ctrl+shift+c/v. I find this slows me down because I have to think about whether I’m in a terminal or not, and use different keys. Ctrl-a and Ctrl-e don’t go to the start or end of the line in Ubuntu apps, so I am retraining myself to use the home/end keys.

For programmers Linux is an upgrade over OSX or Windows. Why? Apt-get. Software dependency management with apt-get is fast, easy, and just works. Homebrew on OSX attempts to emulate apt-get, but is more wild west and falls short.

Final thoughts:

Should you build a Ryzen based PC? Yes! Absolutely! There is a discontinuity in the CPU price to performance ratio introduced by the launch of this new CPU range. Programmers and enthusiasts stand to benefit with more raw power for their dollar.

Building a PC is fun. It is just like Christmas, with all sorts of interesting packages arriving at your front door. I got a sense of accomplishment from putting my PC together. The final result looks and performs great. It did take a significant investment of time, effort, and attention to detail. As a computing enthusiast, that time was well spent.

The Ryzen build specs on paper promised to be 2X better than my previous hardware. In most programming oriented tasks it delivered an improvement factor of around 1.5X. Responsiveness while multitasking was vastly improved. The cost of $1525 excluding peripherals was well below a premium off the shelf equivalent.

Can Linux cut it? Yes! Ubuntu just keeps getting better. Linux on the desktop is really, really good these days.

Is Intel doomed? Doubtful. As of March 19, the Intel i7-6900K retails for over $1000, twice the price of the roughly equivalent AMD Ryzen 1800X. It sure looks to me like AMD has jumped way ahead of Intel in CPUs for programmers. But such a direct comparison covers only a small segment of the CPU market. There are other price points where Intel still beats AMD: ultra servers, low end servers, and laptops.

Should you feel bad if you still prefer Mac? No! OSX software and Mac hardware are pretty great. It is impressive that a 4 year old laptop can still be in the same league as a modern desktop.

Thank you for reading my blog, and have a great day!

Permalink

New Datomic Training Videos and Getting Started Documentation

We are excited to announce the release of a new set of Day of Datomic training videos!
Filmed at Clojure/Conj in Austin, TX in December of 2016, this series covers everything from the architecture and data model of Datomic to operation and scaling considerations.

The new training sessions provide a great foundation for developing a Datomic-based system. For those of you who have watched the original Day of Datomic videos, the series released today uses the new Datomic Client library for the examples and workshops, so if you haven't yet explored Datomic Clients, now is the perfect opportunity to do so!

If you ever want to refer back to the original Peer-based training videos, don't worry - they're all still available as well.

In addition to an updated Day of Datomic, we've released a fully re-organized and re-written Getting Started section in the Datomic Documentation. We have gathered and incorporated feedback from new and existing users and hope that the new Getting Started is a much more comprehensive and accessible introduction to Datomic.

We look forward to your thoughts and feedback. If you have any comments on the new training videos, the new getting started section, or any additional thoughts, please let us know!

Permalink

Senior ClojureScript Developer at Sallie Mae (Full-time)

Sallie Mae is a consumer bank focused on creating new products and best-in-class experiences through technology, building on our history of helping students and their families save, plan, and pay for college. Our CEO's mantra is simple: Best in Class. As a developer, what three words would you rather hear? Just a small sample of the tech we use: Clojure/ClojureScript, Reagent/React, AWS Lambda, Microsoft Azure, Swift, .NET Core, MongoDB, Apache Kafka.

What tech are you going to help us leverage to deliver this Best in Class experience? We are open minded, customer driven.

Overview

Do you have experience with modern web stacks? Are you familiar with or keen on learning functional programming? Have you architected modular front-end solutions? We're always looking for and exploring new tools and technologies to find the best fit for the problem at hand, and now we are seeking ambitious software professionals to engineer our next generation products, platforms, and frameworks.

About you

You understand the importance of selecting the right technology tool for the task. You are a natural problem solver. You ask why, you explore, and you thrive on challenges. You know what continuous integration means, and believe automation is the path to happiness. You feel empowered to make a difference.

We're seeking someone who is super passionate about their craft and is hyper-focused on delivering extraordinary solutions. Is this you?

Here are the Basics:

  • Design, build, and test web applications
  • Lead the craftsmanship, availability, resilience, and scalability of your solutions
  • Manage the risks associated with information and IT assets through standards and security policies, but don't let that limit your creativity
  • Work with the architecture team on development of new subsystems for our evolving digital platform
  • Bring a passion to stay on top of tech trends, experiment with and learn new technologies, and participate and mentor in the technology community
  • Encourage innovation, implementation of cutting-edge technologies, inclusion, outside-of-the-box thinking, teamwork, self-organization, and diversity

Education:

  • BS in Computer Science or equivalent experience

Qualifications:

  • 5+ years in web-based application development
  • 2+ years experience working within product development teams
  • Experience in CSS
  • Functional programming experience (particularly Clojure/ClojureScript)
  • Good understanding of RESTful APIs, web services, and WebSockets, and experience working with backend developers to design and consume them
  • Demonstrated ability to quickly learn new tools, frameworks, and languages
  • Comfortable with quickly addressing changing or new product needs to help seize business opportunities
  • Ability to clearly communicate design decisions and trade-offs
  • Passion for writing clean, maintainable code with tests and automation
  • Regularly perform code reviews
  • Enjoy refactoring
  • Natural inclination to collaborate
  • Desire to make a difference

Extra Credit:

  • Experience in full stack development
  • Practical experience with one or more front-end frameworks: Angular, Ember, Backbone, React
  • Exposure/experience with both SQL and No-SQL data stores
  • Exposure/experience with Git or a similar version control system
  • Exposure/experience with AWS, Azure, or similar cloud service providers
  • Contributions to the open source community
  • Experience with an iterative development methodology (e.g. Scrum, Kanban, SAFe) and techniques for sustaining rapid release cycles (e.g. TDD, BDD)

Get information on how to apply for this position.

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.