This was prompted by Ian Chow (who just released a CLJD app onto the stores) mentioning, en passant, that he struggled to port this tutorial.
We couldn’t let this slip, so here we go! This is going to be code-heavy, and not very clojurey as we were not familiar with the Flame framework.
If you are impatient, skip to the 🕹️ emojis.
🧑‍💻 Our Consulting Services 🧑‍💻
If you are looking into building a desktop, mobile or even web app, get in touch with us: we will help you at multiple levels, from technical support and expertise to building it for you and with you.
🗞️ News and Links: App Announcement, Re-Dash Inspector and HYTRADBOI 2025 🗞️
"Have You Tried Rubbing A DB On It?" (HYTRADBOI) was a really great conf (go watch the videos!) and it's happening anew in February 2025. There will be a Programming Languages track too!
It's fresh, cheap, online and async. You can't afford to miss it!
On a more formal note, Datalog 2.0 2024 will be held next week (and on Christophe's birthday!). We hope videos and papers surface soon!
🕹️ Flame's breakout in ClojureDart 🕹️
Open the original tutorial in a separate tab: since the focus here is on translating Dart to ClojureDart, we won't repeat the tutorial explanations.
At each step, new or modified code is pointed to by 👇 or 👈.
This is going to be code-heavy, so feel free to jump to the end for our conclusions!
Project setup
Let's create the project directory and add Flutter to the path.
First we introduce the PlayArea. I use a deftype for now; maybe a reify would suffice, I don't know: I'm writing this as I go. In the same vein, the mixin is parametrized by a yet-undefined class (BrickBreaker), so I omit it for now. Maybe we won't even need it.
(ns brickbreaker.main
  (:require ["package:flame/game.dart" :as game]
            ["package:flutter/material.dart" :as m]
            ["dart:async" :as async] ; 👈
            ["package:flame/components.dart" :as components] ; 👈
            [cljd.flutter :as f]))

(def game-width 820.0)   ; 👈
(def game-height 1600.0) ; 👈

;; 👇
(deftype PlayArea []
  :extends (components/RectangleComponent
             .paint (doto (m/Paint) (.-color! (m/Color 0xfff2e8cf))))
  (onLoad [this]
    (.onLoad ^super this)
    (.-size! this (game/Vector2 game-width game-height)))
  ^:mixin components/HasGameReference) ; there's a ref but I don't know yet where we are going

(defn main []
  (f/run
    (game/GameWidget .game (game/FlameGame))))
Next the missing BrickBreaker class
(it's a direct link to the relevant section, but then some JS kicks in and decides that you can't link to the middle of a page 🙄)
(ns brickbreaker.main
  (:require ["package:flame/game.dart" :as game]
            ["package:flutter/material.dart" :as m]
            ["dart:async" :as async]
            ["package:flame/components.dart" :as components]
            [cljd.flutter :as f]))

(def game-width 820.0)
(def game-height 1600.0)

(deftype PlayArea []
  :extends (components/RectangleComponent
             .paint (doto (m/Paint) (.-color! (m/Color 0xfff2e8cf))))
  (onLoad [this]
    (.onLoad ^super this)
    (.-size! this (game/Vector2 game-width game-height)))
  ^:mixin components/HasGameReference) ; there's a ref but I don't know yet where we are going

;; 👇
(deftype BrickBreaker []
  :extends (game/FlameGame
             .camera (components/CameraComponent.withFixedResolution
                       .width game-width
                       .height game-height))
  (onLoad [this]
    (.onLoad ^super this)
    (-> this .-camera .-viewfinder (.-anchor! components/Anchor.topLeft))
    (-> this .-world (.add (PlayArea))))
  BrickBreaker
  ;; 👇 I have a bad feeling about these getters
  (^:getter width [this] (-> this .-size .-x))
  (^:getter height [this] (-> this .-size .-y)))

(defn main []
  (f/run
    (game/GameWidget .game (BrickBreaker) #_👈)))
Behold the beige rectangle!
Step #5: Display the ball
Are things getting serious yet?
(ns brickbreaker.main
  (:require ["package:flame/game.dart" :as game]
            ["package:flutter/material.dart" :as m]
            ["dart:async" :as async]
            ["package:flame/components.dart" :as components]
            [cljd.flutter :as f]))

(def game-width 820.0)
(def game-height 1600.0)
(def ball-radius (* game-width 0.02))

(deftype PlayArea []
  :extends (components/RectangleComponent
             .paint (doto (m/Paint) (.-color! (m/Color 0xfff2e8cf))))
  (onLoad [this]
    (.onLoad ^super this)
    (.-size! this (game/Vector2 game-width game-height)))
  ^:mixin components/HasGameReference) ; there's a ref but I don't know yet where we are going

;; 👇
(deftype Ball [^game/Vector2 velocity]
  :extends (components/CircleComponent
             .anchor components/Anchor.center
             .paint (doto (m/Paint)
                      (.-color! (m/Color 0xff1e6091))
                      (.-style! m/PaintingStyle.fill)))
  (update [this dt]
    (.update ^super this dt)
    ;; 👇 .+ and .* because we call the native + and * operators which are not restricted to numbers
    (.-position! this (.+ (.-position this) (.* velocity dt)))
    nil))

;; 👇 creating a constructor function to pass position and radius
;; maybe we should allow the ^super type hint on fields in deftype
(defn ball [velocity position radius]
  (doto (Ball velocity)
    (.-position! position)
    (.-radius! radius)))

(deftype BrickBreaker []
  :extends (game/FlameGame
             .camera (components/CameraComponent.withFixedResolution
                       .width game-width
                       .height game-height))
  (onLoad [this]
    (.onLoad ^super this)
    (-> this .-camera .-viewfinder (.-anchor! components/Anchor.topLeft))
    (doto (.-world this)
      (.add (PlayArea))
      ;; 👇
      (.add (ball (game/Vector2 (* (- (rand) 0.5) (.-width this))
                                (* 0.2 (.-height this)))
                  ;; 👇 calling the native / operator but ./ is not a valid symbol so we have to desugar into this form
                  (. (.-size this) / 2)
                  ball-radius)))
    (.-debugMode! this true)) ; 👈
  BrickBreaker
  (^:getter width [this] (-> this .-size .-x))
  (^:getter height [this] (-> this .-size .-y)))

(defn main []
  (f/run
    (game/GameWidget .game (BrickBreaker))))
Step #6: Bounce Around
Still following the original tutorial, we now add collision with the walls.
(ns brickbreaker.main
  (:require ["package:flame/game.dart" :as game]
            ["package:flutter/material.dart" :as m]
            ["dart:async" :as async]
            ["package:flame/collisions.dart" :as collisions] ; 👈
            ["package:flame/components.dart" :as components]
            [cljd.flutter :as f]))

(def game-width 820.0)
(def game-height 1600.0)
(def ball-radius (* game-width 0.02))

(deftype PlayArea []
  :extends (components/RectangleComponent
             .paint (doto (m/Paint) (.-color! (m/Color 0xfff2e8cf)))
             .children [(collisions/RectangleHitbox)]) ; 👈
  (onLoad [this]
    (.onLoad ^super this)
    (.-size! this (game/Vector2 game-width game-height)))
  ^:mixin components/HasGameReference) ; there's a ref but I don't know yet where we are going

(deftype Ball [^game/Vector2 velocity]
  :extends (components/CircleComponent
             .anchor components/Anchor.center
             .paint (doto (m/Paint)
                      (.-color! (m/Color 0xff1e6091))
                      (.-style! m/PaintingStyle.fill))
             .children [(collisions/CircleHitbox)]) ; 👈
  (update [this dt]
    (.update ^super this dt)
    (.-position! this (.+ (.-position this) (.* velocity dt)))
    nil)
  ^:mixin components/HasGameReference ; 👈 same as before I omit the type parameter for now
  ^:mixin collisions/CollisionCallbacks ; 👈
  ;; 👇
  (onCollisionStart [this intersection-points other]
    (.onCollisionStart ^super this intersection-points other)
    (cond
      (not (instance? PlayArea other))
      (println "collision with" other)
      ;; deliberately not trying to simplify/refactor this
      (<= (-> intersection-points .-first .-x) 0)
      (.-x! velocity (- (.-x velocity)))
      (<= (-> this .-game .-width) (-> intersection-points .-first .-x))
      (.-x! velocity (- (.-x velocity)))
      (<= (-> this .-game .-height) (-> intersection-points .-first .-y))
      (.removeFromParent this))
    nil))

(defn ball [velocity position radius]
  (doto (Ball velocity)
    (.-position! position)
    (.-radius! radius)))

(deftype BrickBreaker []
  :extends (game/FlameGame
             .camera (components/CameraComponent.withFixedResolution
                       .width game-width
                       .height game-height))
  (onLoad [this]
    (.onLoad ^super this)
    (-> this .-camera .-viewfinder (.-anchor! components/Anchor.topLeft))
    (doto (.-world this)
      (.add (PlayArea))
      (.add (ball (game/Vector2 (* (- (rand) 0.5) (.-width this))
                                (* 0.2 (.-height this)))
                  (. (.-size this) / 2)
                  ball-radius)))
    (.-debugMode! this true))
  ^:mixin game/HasCollisionDetection ; 👈
  BrickBreaker
  (^:getter width [this] (-> this .-size .-x))
  (^:getter height [this] (-> this .-size .-y)))

(defn main []
  (f/run
    (game/GameWidget .game (BrickBreaker))))
The next steps of the tutorial are more Flutter-related, so I'm stopping there.
Tying up loose ends
The compiler complains about 5 dynamic warnings:
DYNAMIC WARNING: can't resolve member width on target type FlameGame of library package:flame/src/game/flame_game.dart (no source location)
DYNAMIC WARNING: can't resolve member height on target type FlameGame of library package:flame/src/game/flame_game.dart (no source location)
DYNAMIC WARNING: can't resolve member width on target type FlameGame of library package:flame/src/game/flame_game.dart at line: 71, column: 9, file: brickbreaker/main.cljd
DYNAMIC WARNING: can't resolve member width on target type FlameGame of library package:flame/src/game/flame_game.dart at line: 115, column: 11, file: brickbreaker/main.cljd
DYNAMIC WARNING: can't resolve member width on target type FlameGame of library package:flame/src/game/flame_game.dart at line: 122, column: 7, file: brickbreaker/main.cljd
Sure enough, they are tied to the HasGameReference we left unparametrized. Trying to change it to #/(HasGameReference BrickBreaker), I hit a circularity issue between types. It can be fixed by declaring BrickBreaker twice: a first time without any implementation, and a second time (the existing one) at the end.
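A sketch of that workaround (assuming ClojureDart accepts an empty forward declaration like this):

(deftype BrickBreaker []) ; first declaration: no implementation, just introduces the type

(deftype PlayArea []
  :extends (components/RectangleComponent
             .paint (doto (m/Paint) (.-color! (m/Color 0xfff2e8cf))))
  ^:mixin #/(components/HasGameReference BrickBreaker)) ; 👈 now parametrized

;; ... the full BrickBreaker definition (the existing one) stays at the end of the file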
Following this tutorial brought us and the compiler out of our comfort zone. We fixed three subtle interop bugs along the way (did you know that a getter/setter pair may disagree on the returned/expected type? Did you know that the superclass of a class is not the class specified after extends when mixins are applied?).
At about 200 lines of code it's not bad.
The gameplay is meh, but that's OK; it's a tutorial, I guess.
Keyboard input as implemented depends on keyboard repeats, so you don't get smooth continuous movement of the bat. I didn't find a built-in way to get notified each frame about already-pressed keys, so you would have to roll your own.
No continuous movement means no fine control over how the bat velocity combines with the ball velocity on impact.
The code above is a straight port of the Dart version, written without knowing anything about Flame. Things could be made more Clojure-friendly by introducing more generic components. However, there are dependencies on types, for example in the way query uses a cache indexed by types. So more generic components would call for other filtering methods.
There also is mutation and nesting all over the place.
Making a good helper/wrapper would be an interesting challenge.
A good first step would be to reimplement this breakout game with a more data-oriented and less type-centric approach and see where the impedance mismatches occur.
I got a little bit nerd-sniped by the following video and decided to implement the Game of Life in clojure.core.logic, because any logic program can be evaluated forwards and backwards.
Without further ado here is my implementation:
(ns pepijndevos.lifeclj
  (:refer-clojure :exclude [==])
  (:use clojure.core.logic)
  (:gen-class))

;; A helper to get the neighbouring cells.
;; Clips to zero.
(defn get-neighbours [rows x y]
  (for [dx (range -1 2)
        dy (range -1 2)
        :when (not (= dx dy 0))]
    (get-in rows [(+ x dx) (+ y dy)] 0)))

;; Produces binary vectors of a certain number of bits.
;; This is used to generate all neighbour combinations.
(defn bitrange [n]
  (sort-by #(apply + %)
           (for [i (range (bit-shift-left 1 n))]
             (vec (map #(bit-and 1 (bit-shift-right i %)) (range n))))))

;; Encode the game of life rules as a 256 element conde.
;; Depending on the number of ones in a vector,
;; the corresponding rule is generated
;; that equates the pattern to the neighbours
;; and the appropriate next state.
;;
;; This can be asked simply what the next state is for
;; given neighbours and current state.
;; OR you could drive it backwards any way you like.
(defn lifegoals [neigh self next]
  (or* (for [adj (bitrange 8)
             :let [n (apply + adj)]]
         (cond
           (or (< n 2) (> n 3)) (all (== next 0) (== neigh adj))
           (= n 3)              (all (== next 1) (== neigh adj))
           :else                (all (== next self) (== neigh adj))))))

;; Relate two grids to each other according to the above rules.
;; Applies lifegoals to every cell and its neighbours.
;; In the forwards direction executes one life step,
;; in the backwards direction generates grids
;; that would produce the next step.
(defn stepo [size vars next]
  (let [rows (->> vars (partition size) (map vec) (into []))
        neig (for [x (range size)
                   y (range size)]
               (get-neighbours rows x y))]
    (everyg #(apply lifegoals %) (map vector neig vars next))))

;; Make a grid of unbound variables.
(defn grid [size]
  (repeatedly (* size size) lvar))

;; Simply execute a life step on the state.
(defn fwdlife [size state]
  (let [vars (grid size)
        next (grid size)]
    (run 1 [q]
      (== q next)
      (== vars state)
      (stepo size vars next))))

;; Produce three backward steps on state.
(defn revlife [size state]
  (let [start (grid size)
        s1 (grid size)
        s2 (grid size)
        end (grid size)]
    (run 1 [q]
      (== q [start s1 s2 end])
      (== end state)
      (stepo size s2 end)
      (stepo size s1 s2)
      (stepo size start s1))))

;; Nicely print the board.
(defn printlife [size grids]
  (doseq [g grids]
    (doseq [row (->> g (partition size) (map vec) (into []))]
      (doseq [t row]
        (print t ""))
      (print "\n"))
    (print "\n")))

;; Test with a glider.
(defn -main [& args]
  (->> [0 0 0 0 0 0
        0 0 0 0 0 0
        0 0 0 1 1 0
        0 0 1 1 0 0
        0 0 0 0 1 0
        0 0 0 0 0 0]
       (revlife 6)
       first
       (printlife 6)))
Every seasoned developer has been there: whether it’s an urgent requirement change from your business leader or a faulty assumption revealing itself after a production deployment, your data needs to change, and fast.
Maybe a newly-passed tariff law means recalculation of the tax on every product in your retail catalog (and you sell everything). Maybe a user complains that her blog post is timestamped to the year 56634, and you realize you’ve been writing milliseconds, not seconds, as your epoch time for who knows how long. Or maybe Pluto has just been reclassified and your favorite_planet column urgently needs rectification across millions of astrological enthusiast rows.
Today we’re releasing Rama’s new “instant PState migration” feature. For those unfamiliar with Rama, PStates are like databases: they’re durable indexes that are replicated and potentially sharded, and they are structured as arbitrary combinations of maps, sets and lists.
Instant PState migrations are a major leap forward compared to schema migration functionality available in databases: use your own programming language to implement arbitrary schema transformations, deploy them worry-free with a single CLI command, and then watch as the data in your PStates, no matter how large, is instantly migrated in its entirety.
If you want to go straight to the nitty gritty, you can jump to the public documentation or the example in the rama-demo-gallery. Otherwise, let’s take a look at the status quo before diving into a demonstration.
Status quo
SQL
SQL needs no introduction – it’s a tried-and-true tool with built-in support for schema evolution.
SQL (Structured Query Language) is composed of sub-languages, two of which are the Data Definition Language (DDL) and the Data Manipulation Language (DML).
Via DML, you can manipulate the data in your table:
UPDATE golfers SET
  is_experienced = total_rounds_played >= 10,
  skill_level = CASE
    WHEN total_rounds_played < 10 THEN 'Beginner'
    WHEN handicap_index <= 5.0 THEN 'Advanced'
    WHEN handicap_index <= 20.0 THEN 'Intermediate'
    ELSE 'Beginner'
  END;
In this example, an internet amateur golfer database is making some changes:
Change full_name to a TEXT field (perhaps uber-long names have become fashionable)
Precompute a golfer’s experience indicator and skill level (say, to shave off some milliseconds at render time)
To actually update the production database, they’ll need to wrap the changes in a transaction so that a failure can’t leave the table with unpopulated new columns:
BEGIN;
ALTER TABLE golfers ...;
UPDATE golfers ...;
COMMIT;
Taken together, this demonstrates some powerful functionality:
New attributes can be derived from existing ones
In some cases, a column’s type can be altered “for free”, without reading a single row from disk, as would happen if the only modification was to change full_name’s type from VARCHAR(50) to TEXT.
SQL is sufficiently expressive to describe changes to multiple columns in a single operation, and smart enough to apply them in a single full-table scan. Doing so should offer significant speed-up compared to doing multiple, separate full-table scans.
However, there are some areas that could use improvement:
Changes must be specified using nothing but SQL. This will likely mean re-implementation of code and duplication of business logic that’s already been expressed in the application programming language. For example, the 10-round experience threshold and skill level tiers above would be duplicated in both SQL and whichever programming language the application uses.
Deployment of the migration will take hand-holding and coordination. If the table is massive, then scanning it may take hours or days, during which the old schema must still be assumed by application code. If there’s an unexpected fault (say, power outage), the transaction may fail and require manual re-attempt.
Some migrations may require locking entire tables for the duration of the migration, inducing downtime as reads and writes are blocked. While there may be third-party tools available that minimize downtime, these generally work by providing a phased rollout of the new schema, which may still involve an extended period of backfilling during which the old schema must be used, as is the case with the pgroll plugin for PostgreSQL.
Under the hood, the SQL database must always retain all state necessary to perform a rollback while in the middle of a commit; in practice, this could mean holding on to duplicate data for every single migrated row until the commit goes through.
If the database is sharded across multiple nodes, then deployment becomes immensely trickier, requiring careful thought and attention to ensuring its coordinated success on all shards.
NoSQL
The category of “NoSQL” databases is vast and varied, but we’ll try and summarize the landscape with respect to schema and data migrations.
In general, NoSQL databases eschew the relative power of SQL in order to gain horizontal scalability. Any schema migration capabilities had by SQL are likewise mostly thrown out with the bathwater.
Some NoSQL databases retain a distinctly SQL-ish interface, as exemplified by the column-oriented Apache Cassandra’s ALTER TABLE command. This command enables immediate addition or logical deletion of a column, but little else (its support for making even very limited changes to a column’s type was removed). A search for “Cassandra schema migration” yields primarily links to third-party tools.
Indeed, the general theme across NoSQL databases is a total lack of built-in support for anything resembling schema migrations. This might seem sensible for the category of document databases, which are often referred to as schemaless or as having dynamic schemas. These databases are lax about the shape of the data stored. Each record is a collection of key-value attributes; the attributes are an open set, and the only one required is the all-important one used as the key for lookup and partitioning. For example, the CLI command to define the golfers table in DynamoDB might look like:
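Here is a plausible sketch of such a command (illustrative, assuming golfer_id as the string partition key):

aws dynamodb create-table \
    --table-name golfers \
    --attribute-definitions AttributeName=golfer_id,AttributeType=S \
    --key-schema AttributeName=golfer_id,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST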
Notice that Dynamo isn’t told what other attributes the golfers will have; it’s got no idea that it will ultimately be storing fields like full_name and total_rounds_played.
But what happens when changes must be made to the data’s shape and contents? The answer from document databases is: you’re on your own, kid. One option is to roll your own migration system by writing code that scans an entire dataset and rewrites everything, but this is tedious, non-transactional, and error-prone. The other options boil down to variants of migrate-on-read, wherein the tier of the codebase which reads from the database is updated to tolerate different versions of the data at read time. This might mean deserializing records as instances of either GolferV1, GolferV2, etc. When a record is updated, it’s written to the database using the new schema. Optionally, additional code may be written to perform a more eager write-on-first-read wherein the record is immediately written back to the database the first time it happens to be read following deployment of a new schema.
The migrate-on-read approach comes with lots of baggage. It requires tedious, imperative code to be written and deployed to the database access tier. Since many NoSQL databases provide little in the way of locking, this code may need to explicitly handle race conditions inherent to reading and re-writing a record that might have been updated in the interim. Worse, this code can never be removed unless you are certain that every single record has been re-written to the database, which can only be determined by carefully scanning the entire dataset. This might mean incurring a significant performance penalty on every read, forever.
Many NoSQL databases have ecosystems of third-party tools around them, some of which build out support for schema-migration capabilities. Mongock is one such tool, a Java library that supports code-first migrations for MongoDB and DynamoDB. While such tools will inevitably appear as godsends to developers in tight spots, they’ll never offer the ease-of-use and efficiency achievable via first-party support.
NewSQL
We should note that there is a class of “NewSQL” databases which attempt to bring NoSQL’s horizontal scalability to SQL. Schema migrations with these databases are mostly the same as SQL’s, except that they may provide assistance with coordinating changes across multiple partitions. For example, CockroachDB’s online schema changes actually enable background migration of partitioned tables, followed by a coordinated “switch-over” to the new schema on all nodes. While this is a commendable effort, it still suffers from the same limitations and expressivity issues that hamstring standard SQL schema migrations, and it’s far from instantaneous. We feel that an entirely new paradigm is necessary.
Schema evolution in Rama
Rama was built from the ground up to enable rapid iteration on software backends.
With this in mind, let’s take a quick look at Rama’s existing support for schema evolution. Then, we’ll take a detailed dive into today’s newly-released feature, instant PState migrations.
Existing support
Rama has had built-in support for schema evolution since day one.
Unlike systems built with SQL or document databases, systems built with Rama use an event sourcing architecture which separates raw facts, i.e. depot entries, from the indexes (or “views”) built from them, i.e. PStates.
This design wipes out an entire class of problems in traditional databases: by recording data in terms of irrevocable facts rather than overwriting fields in a database record, no fact once learned is ever lost to time.
With Rama, when your requirements change, you can materialize new PStates using the entirety of your depot data. For example, continuing with the above golf scenario, suppose a change must be made as to how a golfer’s handicap is computed. Thankfully, the event sourcing architecture means that the raw facts required are available: a depot record for each golf round completed by a golfer, e.g. GolfRound(golferId, finishedAt, score).
Even if the handicap calculation requires examining every golf round ever played by a golfer, Rama happily enables its calculation via use of the “start from beginning” option on a depot subscription. Here’s how it’s done with Rama’s Java API:
golfRounds.source("*rounds-depot", StreamSourceOptions.startFromBeginning()).out("*round")
          .each((Round round) -> round.golferId, "*round").out("*golfer-id")
          .localSelect("$$handicaps", Path.key("*golfer-id")).out("*handicap")
          // updatedHandicap performs the actual arithmetic to calculate the new handicap
          .each(GolfModule::updatedHandicap, "*handicap", "*round").out("*new-handicap")
          .localTransform("$$handicaps", Path.termVal("*new-handicap"));
And here’s the equivalent code expressed in the Clojure API:
(let [golf-rounds (stream-topology topologies "golf-rounds")]
  (declare-pstate golf-rounds
    $$handicaps (map-schema Long   ; golfer-id
                            Double ; handicap
                            ))
  (<<sources golf-rounds
    (source> *rounds-depot {:start-from :beginning}
             :> {:as *round :keys [*golfer-id]})
    (local-select> [(keypath *golfer-id)] $$handicaps
                   :> *handicap)
    ;; updated-handicap performs the arithmetic to calculate the new handicap
    (updated-handicap *handicap *round :> *new-handicap)
    (local-transform> [(keypath *golfer-id) (termval *new-handicap)]
                      $$handicaps)))
Having the ability to easily compute new indexes based on the entirety of the raw data is immensely powerful, but there are some scenarios where it might be infeasible or impossible to compute the desired view in this manner:
If you’ve enabled depot trimming to cut down on storage costs, then you won’t have access to each and every historical depot record.
If your existing PStates have data that was non-deterministically generated, you might find that you need to describe your change in terms of existing views rather than in terms of your depot records.
Scanning millions of depot records might be egregiously inefficient – for example, if your depot records describe many repeated updates to a given entity, and you already have a PState view on the “current” state of the entity, then it might mean lots of wasted effort to examine all of the obviated depot entries corresponding to that entity.
In these scenarios, Rama’s new instant PState migration feature is here to help.
New: instant PState migrations
Just as Rama reifies decades of the industry’s collective learnings into a cohesive set of abstractions, our new instant PState migration feature draws from SQL’s expressivity and NoSQL’s scalability.
In Rama, PState migrations are:
Expressive – just as Rama PStates support infinite, arbitrary combinations of elemental data structures, so do migrations support arbitrary transformations expressed in the programming language you’re already using.
Instant – after a quick deployment, all PState reads will immediately return migrated data, regardless of the volume of data.
Durable and fault-tolerant – in the background, Rama takes care of durably persisting your changes in a consistent, fault-tolerant manner.
Rama achieves this via a simple, easy-to-reason-about design. On every PState read until the PState is durably migrated, Rama automatically applies the user-supplied migration function before returning the data to the client. In the background, Rama works on durably migrating the PState; it does so unobtrusively on the task thread as part of the same streaming and microbatches your application is already doing.
Let’s take a detailed look at each facet of migration.
Expressive
PState migrations are specified as code, and the heart of each migration is a function written in your programming language of choice. Specifying your migration as an arbitrary function is tremendously powerful. Rather than being confined to a limited, predefined set of operations, as is often the case with SQL migrations, with PState migrations you have the Turing-complete power of your language, your entire codebase and all its dependencies available to you.
When you declare a PState, you provide a schema describing the shape of the data it contains. At certain locations within the schema, you may now specify a migration.
Continuing with the golf example, the golfers PState schema expressed via the Java API might look like this:
When it comes time to add a golfer’s experience indicator and skill level, you can specify a migration using code you already have. Here it is with the Java API:
The new API addition demonstrated here is the migrated function. It takes three or four arguments:
the new PState schema
a migration ID string
a function from old-data to new data
optionally, some options describing the migration
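As a rough sketch in the Clojure API (the topology var, schema field names, and migration ID below are illustrative assumptions, not the post's actual listing):

(declare-pstate topology
  $$golfers
  (map-schema Long ; golfer-id
              (migrated
                (fixed-keys-schema {:name                String
                                    :total-rounds-played Long
                                    :handicap-index      Double
                                    :is-experienced      Boolean
                                    :skill-level         String})
                "add-experience-and-skill-level" ; migration ID
                enrich-golfer)))                 ; old data -> new data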
The migration function used here is enrich-golfer, a function from golfer to golfer which calculates the :is-experienced and :skill-level keys unless they’re already set.
It’s important to note that the migration function must be idempotent. Rama will invoke the migration function on every read of a migrated location until the PState is completely durably migrated in the background, whether or not a particular entry has already been migrated. This means that the migration function may run against both yet-to-be-migrated and already-migrated inputs. This design choice gives total control to the user: rather than adding definite storage and computational overhead to the implementation, e.g. state for every single PState entry indicating whether it has been migrated, the user’s migration function may switch on state which is already present, e.g. the migrated entity’s type.
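For illustration, an idempotent enrich-golfer might look like the following sketch (the entry shape is an assumption; the thresholds mirror the earlier SQL example):

(defn enrich-golfer
  "Adds :is-experienced and :skill-level unless they're already set, so
  re-running it on already-migrated entries is a no-op."
  [{:keys [total-rounds-played handicap-index] :as golfer}]
  (cond-> golfer
    (not (contains? golfer :is-experienced))
    (assoc :is-experienced (>= total-rounds-played 10))

    (not (contains? golfer :skill-level))
    (assoc :skill-level (cond
                          (< total-rounds-played 10) "Beginner"
                          (<= handicap-index 5.0)    "Advanced"
                          (<= handicap-index 20.0)   "Intermediate"
                          :else                      "Beginner"))))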
The migration ID is used to determine whether successive migrations to the same PState are the same or different. It is only relevant when you perform a module update while a PState is undergoing migration. In such cases, Rama will look at the migration IDs in the PState’s schema and restart the migration from scratch if any of them has changed; otherwise, it continues where it left off. For example, consider the following cases:
You’ve deployed a module update with a migration on your massive $$golfers PState which will take several days to complete. However, in the midst of migration an unrelated hot-fix must be made to some other topology. Another module update may safely be made with the $$golfers migration left untouched, and the background migration will resume where it left off.
Or, suppose you’ve deployed a migration on the $$golfers PState, but while it’s running you realize there’s a bug in your migration function that’s somehow made it through your staging environment testing. In this case you don’t have to wait for background migration to complete – you can fix your migration function, alter the migration’s ID, and do another module update immediately. Background migration will immediately be restarted from scratch.
There are also some options available for making certain kinds of structural changes to your schema; see the docs for more details.
Instant
With the migrated schema in place, committed to version control and built into a jar, all that’s left to do is deploy it with a single command.
This is the same command used for any ordinary module update, and this will do the same thing as any other module update: spin up new workers running the new code and gracefully hand over writes and reads before shutting down the old workers. It will take no longer than if there were no migrations specified in the new code.
Once the module update concludes, every read of the migrated location will return migrated data, whether made via a distributed query, a select on a foreign PState, or a topology read. Rama automatically applies the migration function at read time. This means that your topology code and client queries can immediately expect to see the migrated data, without ever having to worry about handling the old schema or content.
Durable and Fault Tolerant
After deploying a migration, Rama begins iterating over your migrated PStates and re-writing migrated data back to disk. Like all PState reads and writes, this happens on the task thread, so there are no races. Rama does migration work as part of the streaming event batches and microbatches that are already occurring, so the additional overhead of background migration is minimal.
The rate of migration is tuned primarily via four dynamic options, two apiece for streaming and microbatching.
With these options, you may tune the target number of paths for Rama to migrate each second, and limit the amount of migration work done in each batch. In our testing with the default dynamic option values, background migration work added about 15% and 7% task group load for streaming and microbatch topologies respectively, with one million paths per partition migrated in about 3 hours 15 minutes and 2 hours 45 minutes respectively (but this will depend on your hardware, append rate, and other configuration). If your Rama cluster has 128 partitions, this comes out to about 40M and 46M paths migrated per hour respectively.
Remember, Rama applications can be scaled up or down with a single CLI command, so if you need a little extra CPU to perform a migration or want to increase its rate, it’s trivial to do.
Migrations are done in a fault tolerant manner; they will progress and eventually complete even in the face of leader switches, worker death, and network disconnection issues, with no intervention from a cluster operator required.
Migration status details are visible in the UI, at the top-level modules page down through to the individual PState pages. If the monitoring module is deployed, detailed migration progress metrics are also available.
These three screenshots taken from the cluster UI of one of our test clusters show how migration status is surfaced at the module, module instance, and PState levels:
On an individual PState’s page, the PState’s schema, migration status, and collection of tasks undergoing migration are displayed:
If the monitoring module is deployed, then migration progress metrics are also available per-PState:
Once your migration completes, you are free to remove the migration from your source code and forget it ever happened.
Conclusion
Schema evolution is an inevitable part of application development. Existing databases have varied levels of support for it: none at the low end, but even at the high end, SQL databases leave much to be desired in terms of expressivity, operational ease, and fault tolerance.
Rama was built with schema evolution in mind: with event-sourcing at its core, you’ll never “forget” anything once known, and you’ll always have the ability to derive new PState views from existing depot data.
With Rama’s new instant PState migration feature, the story gets even better: you now have the power to update your PStates’ schemas and data in-place, via the powerful programming language you’re already using, instantly and without any operational pain.
As always, we’re excited to see what kinds of novel applications are unlocked by this new leap forward in development ease.
Clojure macros have two modes: avoid them at all costs/do very basic stuff, or go absolutely crazy.
Here’s the problem: I’m working on Humble UI’s component library, and I wanted to document it. While at it, I figured it could serve as an integration test as well—since I showcase every possible option, why not test it at the same time?
This is what I came up with: I write component code, and in the application, I show a table with the running code on the left and the source on the right:
It was important that code that I show is exactly the same code that I run (otherwise it wouldn’t be a very good test). Like a quine: hey program! Show us your source code!
This macro accepts a code form (its AST) and emits back a pair: the AST itself (basically a no-op) and the string we serialize that AST to.
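A minimal sketch of such a macro (the name is hypothetical; Humble UI's actual helper differs): the macro body runs at expansion time, so it can pretty-print the parsed form and pair it with the form itself, which evaluates as usual.

(require '[clojure.pprint :as pprint])

;; Returns [value-of-form source-string]; the string is produced at
;; macro-expansion time by pretty-printing the parsed form.
(defmacro code+source [form]
  [form (with-out-str (pprint/pprint form))])

;; (code+source (+ 1 2)) ;=> [3 "(+ 1 2)\n"]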
This is what I consider to be a “normal” macro usage. Nothing fancy, just another day at the office.
Unfortunately, this approach reformats code: while in the macro, all we have is an already parsed AST (data structures only, no whitespaces) and we have to pretty-print it from scratch, adding indents and newlines.
I tried a couple of existing formatters (clojure.pprint, zprint, cljfmt) but wasn’t happy with any of them. The problem is tricky—sometimes a vector is just a vector, but sometimes it’s a UI component and shows the structure of the UI.
And then I realized that I was thinking inside the box all the time. We already have the perfect formatting—it’s in the source file!
So what if... No, no, it’s too brittle. We shouldn’t even think about it... But what if...
What if our macro read the source file?
Like, actually went to the file system, opened a file, and read its content? We already have the file name conveniently stored in *file*, and luckily Clojure keeps sources around.
In any other language, this would’ve been a project. You’d need a parser, a build step... Here it’s just ten lines of code, in the vanilla language, no tooling or setup required.
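Here's a rough sketch of the trick, under a loudly-stated assumption: that :end-line metadata is present on &form (vanilla Clojure's reader is only guaranteed to attach :line, so this may need tools.reader or equivalent):

(require '[clojure.java.io :as io]
         '[clojure.string :as str])

;; Pairs the form's value with its verbatim source text, sliced straight
;; out of the file named by *file* (resolved on the classpath).
(defmacro code+source [form]
  (let [{:keys [line end-line]} (meta &form)
        lines (-> (io/resource *file*) slurp str/split-lines vec)]
    [form (str/join "\n" (subvec lines (dec line) end-line))]))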
Sometimes, a crazy thing is exactly what you need.
Hope everyone had a wonderful time at Heart of Clojure last week!
After the pandemic shutdowns, it's been so hard to find communities meeting in person, so I'm proud that we made Heart of Clojure happen for 250+ Clojure and functional programming enthusiasts. In this "we" are our many volunteers who helped before, during, and after the event – Heart of Clojure would have been impossible without your hearts and hands. In addition, without our generous sponsors, we wouldn't have been able to pay for so many "nice-to-have" extras like pulu's music performance or breakfasts on both conference days.
So, if you're looking for a job, some of our very cool sponsors are hiring:
Griffin
Griffin puts the bank in Banking as a Service. We combine the secure, regulated infrastructure of a bank with the speed and power of modern software into one powerful full-stack BaaS platform. Our goal is to empower companies to embed banking into their products quickly and safely. With purpose-built APIs and an all-access developer sandbox, we’re helping our customers build and launch payment platforms, financial wellness apps, client account products, business loans, and more. We're currently growing our teams, so if you'd like to change the face of finance, check out our careers page.
Otto
For the best online shopping experience, our tech colleagues at OTTO face new challenges every day. With us, you have the opportunity to work in an inspiring tech environment, think outside the box and actively shape the latest developments in e-commerce. For us, new technologies and innovations are the key to meeting the changing needs of our customers in the best possible way. That's why a third of our colleagues are already tech experts - and the trend is rising. Because as a top player in fast-moving e-commerce, we want to continue to grow - together with you! Find out more
Freshcode
Are you passionate about technology and functional programming? Join our team of experienced Clojure developers at Freshcode! We specialize in building cutting-edge web applications that have a global impact. Be a part of our journey to transform businesses with innovative tech solutions: https://www.freshcodeit.jobs/vacancies/clojure-developer-2
Clojurists Together funds the developers who build the open source Clojure software you use every day. Please talk to your boss about your company joining Clojurists Together to help make the Clojure ecosystem (and your company's technical foundations) even stronger 💪
NuBank
If you'd like to keep the Clojure love going, the Clojure/conj returns to Alexandria, VA from Oct 23-25 and tickets are still available! Join us for an amazing slate of talks about tools, libraries, ideas, and experience reports about Clojure.
Also, don't forget to complete the 2024 State of Clojure survey, our annual checkpoint on the community, providing valuable insights and tracking over the years.
Looking forward...
To be clear, we make no promises about any future Heart of Clojure events 😊 but we would like to hear and learn from your feedback in case we do put on another Heart of Clojure. Look out for another email soon with our conference attendee survey.
Welcome to the Holy Dev newsletter, which brings you gems I found on the web, updates from my blog, and a few scattered thoughts.
You can get the next one into your mailbox if you subscribe.

What is happening

I have been to Heart of Clojure in Belgium, which was a wonderful experience. And I am not saying that only thanks to the home-made waffles. And the pralines. A cozy city, good food, very nice people from all around (primarily) Europe. Surprisingly many expats like me :-). Some talks and workshops I really enjoyed as well. Though the best part was meeting and chatting with people new and old whom I rarely, or never before, get to meet in real life, such as a gang from the Czech Republic, Lovro, Ben, Sigmund, Sami, Colin, and too many others to name. I’ve also had a very nice lunch with my tai-chi chuan teacher. If/when the talks come online, I can recommend James Reeves’ Living With Legacy Code and XTDB’s Richer SQL — Steering SQL’s Future Towards Clojure’s Philosophy. Sami’s Sailing with Scicloj: A Bayesian Adventure demonstrates well the maturity and power of the SciCloj ecosystem for scientific computing. I hear that Staring into the PLFZABYSS - From the IBM AS/400 to Clojure & Datomic was very insightful.

All my remaining time has been consumed by writing and improving the documentation of Wolframite, our Clojure - Wolfram bridge. Both Thomas and I have done a great job and it is turning out to be one of the best documented Clojure libraries. Which is necessary, since we are introducing a wholly new thing to two very different audiences - Wolfram to Clojurians and Clojure to scientists. You can see for yourself our work in progress. After I finish my current review, I will get back to writing a demo of using Wolfram(ite) to analyze bike trips for Clojurians - writing it is a great way to finally learn Wolfram 😅 and to discover more ways to improve the library. I hope to wrap the documentation up in a few weeks, and then we can focus on preparing some talks and workshops we want to have ready when we announce v1. We are also looking for beta testers, either from the Clojure or scientific communities - if you know anyone, let me know!

I haven’t had any time for Fulcro or Rama in the last months, but I am looking forward to coming back to both of them…
The year 2012 was a significant year for Metosin; for more than one reason. In addition to starting the company operations in January 2012, we also started an important new project for our client using Clojure. At the time, this was quite a controversial decision. The general consensus at the time was that proper software projects were always done using Java and Spring framework.
The project turned out to be a major success for our customer, but in 2012 this was in no way inevitable. One critical risk in our decision to use Clojure was the availability of skilled Clojure programmers.
In May 2012 I and my colleague and co-conspirator Tommi Reiman attended the very first EuroClojure in London (the original site is long gone, but thanks to the Wayback machine, we can find it here). We were thrilled to hear the presentations and meet many great Clojure luminaries like Zach Tellman, Stuart Halloway, Christophe Grand, Malcolm Sparks, Rich Hickey, and many more. The community around Clojure was vibrant, energetic, and amazingly welcoming for total newbies like us.
On our way home, we were convinced we needed to organize a small get-together focused on Clojure in Finland. We hoped to bootstrap the Clojure community in Finland with the same energy and friendliness we witnessed in London. Maybe this would help give Clojure more visibility and reduce the risk of not having enough skilled Clojure programmers.
Clojure conference we must have, this much was clear.
We needed speakers, preferably from abroad, to give our conference a glorious international feeling. We told our CEO, Mikko Heikkilä, to figure this out: "There's no hurry; you have two weeks". Mikko contacted Bruce Durling, who had just given a presentation in EuroClojure.
Unfortunately, Bruce could not make it, but he promised to help us by using his contacts in the London Clojure community.
This worked well, and very soon Sam Aaron contacted Mikko and told him that he had never been to Finland and would like to give a presentation about interactive programming with Clojure for generating music. Soon we also had confirmations from the local software industry; Lasse Rasinen from ZenRobotics and Markku Rontu from NitorCreations agreed to give presentations.
So this is happening. We need a web page.
The web page for ClojuTRE was quickly hacked together by yours truly. In hindsight, it's hard to say if the limiting factor for the aesthetic aspects of the page was the lack of time, lack of skill, or a general absence of taste. Most likely a combination of all three.
Our principal motivation was to bring together people who share our enthusiasm for Clojure to talk, share ideas, and, since this is Finland, have a few beers and enjoy the sauna.
We understood that most of the interested people don't use Clojure in their work and would most likely be unable to get financial support from their employers. So, a free conference it is.
Fortunately, we got NitorCreations to sponsor the conference and share the bill. The bill was not very impressive; the venue for 25 persons, a sauna, a few beers and pizzas, travel and hotel for Sam; we're not talking big bucks here.
The Friday 12th of October 2012 was at hand much sooner than we realized. After hectic last-minute errands, the first ClojuTRE was in progress.
It was informative; it was inspirational, and, most importantly, it was fun. After the presentations, we had pizza, beer, and a sauna. Sam enjoyed the sauna a lot.
Unlike Sam, Christophe Grand had a different opinion of the whole sauna experience. Next year, in connection with ClojuTRE 2013, we organized a public Clojure training with Christophe, Sam, and Meikel Brandmeyer as teachers. After the event we had, again, pizza, a few beers and sauna. Christophe reported to his friends back in France about our "attempt to steam-cook him alive". Guess sauna is not for everyone.
With a new project, ClojuTRE, and the training, the year 2012 was quite busy for Metosin. When the year was ending, we started to make plans for 2013. One thing was clear; "We need to do this ClojuTRE thing again."
Fast-forward seven years, and we get to ClojuTRE 2019. What started in 2012 as a low-key, ad hoc get-together for 25 local Clojure programmers had grown to a two-day international conference with ~400 attendees each day. In 2019 we had more speakers than we had attendees in 2012!
Although the size of ClojuTRE was different, we wanted to maintain the homegrown, down-to-earth, warm personal feeling of the first ClojuTRE. That meant we couldn't outsource the arrangements to some other company, which in turn meant that we (and I mean mostly Mikko here) spent a lot of time making ClojuTRE the event we wanted.
Even with the generous support from our sponsors and a 75€ ticket price, the financial impact of ClojuTRE was no longer a footnote in the accounting of a small software consultancy. The expenses were over 70k€, even without counting all the hours our people put into this.
Don't get me wrong; we have benefitted from ClojuTRE greatly. Although we have not emphasized our role as the organizer and instead promoted the community as a whole, ClojuTRE has given us a lot of visibility and a good reputation far more significant than any marketing effort could have ever accomplished.
Still, from a purely financial point of view, ClojuTRE had become really expensive for us. However, we felt that ClojuTRE was more important than just us or money; we were doing something we were really proud of.
It seems that people liked the concept and the execution: Here's some feedback we have had over the years.
The conference was very well organised. One of the best organisations I've seen, as a matter of fact
Fantastic conference guys! Truly top drawer stuff. Top name speakers, good venue, free food and drink and all for 0€ - amazing. I just can't compliment you enough! :)
ClojuTRE is the best conference I have attended. Love to all the people organizing it <3
Many people have liked the short talks:
The event was very well organised. What a splendid idea to keep presentations short – the pace of it worked great for preventing sleep throughout the day!
I think you guys pretty much nailed the format and execution for the perfect conference. ClojuTre has definitely become my favourite conf.
Top notch organization of the conference, the venue and food was pretty amazing.
The quality of presentations and the variance of topics has also been well received:
Atmosphere was great as it has been every year since 2012. Good mix of talks with different kind of topics and seriosity
Exceptional speakers! Absolutely fabulous set of topics in talks. It covered very wide area and opened my eyes
I loved Clojutre, and it is by far the most excellent technical conference I have ever been to, from the atmosphere and spirit, to the quality and topic range of talks
Friendly, open, and welcoming atmosphere has been our top priority right from the start.
A wonderful conference with a very friendly atmosphere
I love that you care about diversity and inclusion! And thank you for the great event, can't wait until next year!
I guess something from our local culture has given the event a bit of flavor:
Thank you arranging Clojutre. It is a fantastic conference that I will be recommending to all Clojurists I meet. Don't give into any pressure to change the concept and your special Finish approach
It was amazing, and Finland rocks – thank you for the work you put in!
Right after the 2019 event, the ClojuTRE 2020 planning kicked in full swing.
Then Covid-19 happened, and all plans were trashed. Suddenly nobody was able to make any commitments to travel anywhere in any time soon. So ClojuTRE 2020 was postponed, then again in 2021, and yet still in 2022.
Now we're in the final stages of 2022 and starting to look at 2023. Although the shadow of the pandemic seems to have lifted, the world's safety and financial situation is far from certain. The outrageous war with senseless destruction and murder of civilians in Ukraine has put the whole continent on uneven footing. With a future that can be described as uncertain at best, what should a small company with a big dream do?
Organizing ClojuTRE requires a lot of work, money, and commitment. We want to maintain the idea and the soul of ClojuTRE, and at this time we're not ready to make the commitment that ClojuTRE deserves.
So, with a heavy heart, I must pronounce yet another postponement for ClojuTRE.
While our beloved conference is in a deep sleep at R'lyeh, we're not going to be idle. We hope to serve the local community best by helping to organize a set of small Clojure meetups throughout 2023. In fact, we already started the Oulu Clojure meetup just last week. That is part of our super secret plan to open an office in Oulu, but more about that later.
Another thing we're committed to is our continued efforts on open-source projects, ours and others too. Check out the latest blog posts by Tommi and Juho. Tommi will probably write more about that soon, so stay tuned.
If you figure it out, let me know on the Clojurians Slack; I go by the handle @jarppe. The first person to do so wins a ticket to the next ClojuTRE, whenever that happens!
A month ago I gave my first meetup talk ever. Those who are thinking of
doing this, or have been there, know it's not necessarily easy to make the
decision to go talk to a bunch of seasoned programmers about any topic. The
bar definitely feels high even when it shouldn't be.
At the end of October, my colleague invited me to talk about this curious side
project of mine in a Clojure meetup that would be held in Oulu, in the northern
half of Finland. Little wonder as I'd been blabbering about my side project
during Friday coffee sessions, demoing it to colleagues, and thus had gathered
interest in the topic. I slept on the decision and ended up accepting the
invitation.
The meetup was set to be held on December 1st and was hosted by the kind
Enhancell folks within their cozy office premises in
the heart of Oulu city.
The countdown began with four weeks on the clock. I decided on my title right
after receiving the invitation. For me, it was important to have an explicit
title so that the contents of that talk would be clear beforehand and potential
guests could make their RSVP decisions more easily. The title of my talk was:
Creating an experimental GraphQL formatter using Clojure, Instaparse, and GraalVM.
I knew I had to put some buzzwords in there, so I put all of them there. Some
old, some new - all good, as they say!
At the end of the week, I started preparing my presentation at a high level
using a markdown Gist and bullet points. This allowed me to create the
narrative without spending too much time on the graphics, typography, demo
code, and other embellishments at this point. Crafting the bullet points took
me one Friday evening.
At this point, I still had an ample amount of time to get things right. Yes,
even my editor code style schemes.
But I still had to get my bullet points right as I knew my draft was still very
rough. I spent the entire next week fixing all my typos, bending the narrative
to my will, and thinking about the lessons in all of this. This retrospective
process involved going through the code and commits from the beginning and
reflecting on my feelings about the problem/solution space, and also digging up
my notes on different experiments and reflecting on those.
Another week began, and I already knew I had to start fleshing out my slides.
The sand was sifting through the hourglass and I didn't have any memes on my
powerpoint yet.
And to add to my bikeshed woes, no way people would be reading markdown Gist
with enhanced grandpa font sizing (if only I had enough time to learn emacs org
mode)! Where were those company slide templates again? Which template was the
least stale out of the three?
Pretty soon I would have to start throwing that clay into something far more
beautiful. So that's what I started doing.
At the end of the week, I had something that I could call "graphical" (in lieu
of a better word). I also wrote some demo
code
to showcase parts from the Abstract Syntax Tree processing pipeline that I had
built for my formatter, and then continued working on my slides, spit-polishing
the graphical parts, making sure I was using the right font sizes consistently
everywhere, etc. You know, the important stuff.
On the final week, I solely focused on presentation notes, refining both the
content of the slides and rehearsing what my notes would sound like when I
actually presented it.
My goal was to aim for a maximum of 20 to 30 minutes, which I thought would
be a good meetup talk length. I basically adjusted this to the time we had in
total and the number of talks we would have, plus taking into consideration
the fact that people probably wanted to socialise as well.
Being on stage can make you forget ALL kinds of lines, improvised or not. It
was very important to write those notes down. For me, this was extra important
as I have a habit of forgetting a lot of words and things when under stress.
Initially, I’d set myself the goal not to improvise too much during rehearsal.
I only did it during times when I really couldn’t come up with anything that I
could write down in my notes. If something good came up during practise, I
would write that down. Repeat ad nauseam.
My final rehearsal in the hotel room before the meetup was strictly on-script
only with very minor changes to my slides and presentation notes. Also,
before the presentation, I added some very quick and small amendments to the
notes. Nothing too fundamental.
I already knew well in advance that I couldn't just read from the script. I
would have to up the ad-libbing during the actual talk to make it more
interesting for the crowd.
In fact, when I watched Jarppe Länsiö, a Metosin colleague of mine, present
the Architect’s Case where he explained the architectural rationale for
selecting Clojure I became more confident of my decision. His talk was a tour
de force and contained little if any notes (or any that I saw at least). He
made it appear as if he had improvised the whole talk. It was fun and
engaging, not least because of its blitzy tempo but also because some new war
stories were shared.
After Jarppe's talk, it was my turn to talk. While I’d clocked my presentation
during rehearsals to stop somewhere during the 20-to-30-minute mark, my actual
talk took me 43 minutes! And with the after-discussion I went well into
overtime, clocking in at 50 minutes.
The presentation itself went without major problems, and there were questions
both during and after the talk. There were also a couple of moments where I
struggled a bit trying to explain the concept of Instaparse in a
beginner-friendly way. Something I had not planned for in my notes. I also
didn't expect that there would be people who hadn't really done much with
Clojure, but I probably should have!
Besides these two presentations, we also heard from Martín Varela about his
historical path towards Clojure from the academic world and his current work
where he’d, among other things, produced a whitepaper pinpointing anomaly
events in logs from stability testing using N-grams or Deep-Learning as part of
his normal day-to-day programming work. This was insightful and also gave me
some ideas on how to leverage something like this, only simpler, for structured
logging data as well (something which the whitepaper did not focus on
primarily).
All in all, the Clojure Meetup Oulu was really nice with friendly people from
diverse backgrounds. The guests had really good questions throughout the
presentations and they seemed to have enjoyed their time there! The hosts were
also super nice to us even when we went slightly overtime.
And finally, let’s not forget the second most important bit here. The food. We
had vegan and gluten-free pizza for all plus inclusive drinking options.
After the meetup, we switched locations to discuss the meetup topics as well
as other programming-related topics such as emacs and technical metal. All in
all, what a blast!
If you’ve never held a talk at a meetup before, I warmly recommend doing so as
it’s a great experience. It’s not necessary to set a high bar for yourself or
strive for perfection nor do you need to write code for as long as I did.
All talks will be different and interesting in their own way, both in terms of
content but also presentation style. The most important part is that you feel
you want to talk about it.
For me, this was a unique and inclusive first-time experience. I definitely
encourage everyone to try it out once you've reached a certain level of
comfort with blabbering about something in front of a small crowd. :-)
And also, if you’d like to join our company where we focus on all things
Clojure (be it meetups, naked-performance
programming, or Clojure
consulting), let’s start talking: rekry@metosin.fi
After coming back to Taiwan, a LinkedIn post written by a Clojurian who attended Heart of Clojure soon caught my eye.
Perhaps it's time for some small, new, and incredibly neat and performant Clojure-based consultancy to be born, hmm?
It sounds like the author is planning to bring Clojure to his neighborhood, company, or country. No matter which one it is, I hope he will be successful.
Seeing this post, I couldn't help but think of the opening talk, which suggested that attending a conference can have indeterminate but consequential effects.
Indeterminate but Consequential
Years ago, after I successfully delivered a Clojure/Datomic solution to a Taiwanese enterprise, I reached a point where I needed to rethink my career. Clearly, I had two possible options:
Improve my English and try to get a Clojure job in Europe or the U.S.
Improve my business skills and try to create a Clojure job for myself in Taiwan.
I couldn't decide, so I attended a Clojure conference in London. That was the first time I had been to a Clojure conference, and most developers I spoke to on that trip had no experience using Datomic in production.
An epiphany struck: Perhaps I am a developer with better business skills than English skills. Great, let me build on my strength and choose option 2.
Several years later, while option 2 has not yet fully succeeded, I am now programming in Clojure as part of my day job. In hindsight, it was the experience of attending that conference that determined my career path.
Clojure Migrates to a New Land
At Heart of Clojure, there was one talk that answered a question I had been searching for a long time: the talk about replacing IBM AS/400 with Clojure/Datomic at a German car manufacturer.
From my experience, this kind of job seemed impossible. The challenge wasn't technical but related to people's decisions. In enterprises, strict rules are often set around databases. However, the speakers had successfully achieved this. I had to know why.
The story was rugged and wild. The breakthrough came when the speakers discovered that the Italy branch wasn’t using AS/400 directly because the AS/400 had only a German-language interface. As a result, they were allowed to deploy the Clojure/Datomic solution. Without this talk, I would never have imagined that the key to gaining permission for deployment could be something like that.
Activities Are Like Black Holes
At the Heart of Clojure conference, I used the Compass App to remind me when the next talk I wanted to attend was, and I joined many activities. Or, to put it another way, I was drawn into many activity black holes.
We talked about many things:
Where are you from?
Culture
History
Career
Industry
The experience of being the only Clojurian among many non-Clojure colleagues.
I couldn’t explain why I fell into so many conversations, as I never considered myself an extrovert. From time to time, I found it difficult to leave one conversation to go to the next talk.
It felt like we were brothers and sisters of the same religion. Or perhaps, not just spiritually, we even shared some biological similarities.
Bring HoC to Your Home
If after Heart of Clojure, you make a significant resolution—something great but difficult to achieve—don’t worry. You are not alone.
I originally only thought about bringing Clojure home, but now, I’m considering bringing Heart of Clojure home as well.