The programmers who live in Flatland

In the book Flatland: A Romance of Many Dimensions, a two-dimensional world called “Flatland” is inhabited by polygonal creatures like triangles, squares, and circles. The protagonist, a square, is visited by a sphere from the third dimension. He struggles to comprehend the existence of another dimension even as the sphere demonstrates impossible things. It’s a great book that has stuck with me since I first read it almost 30 years ago.

I’ve realized that “Flatland” is a perfect metaphor for the state of mind of a large number of programmers. Consider this: in 2001 Paul Graham, one of the most influential voices in tech, wrote the essay Beating the Averages. He argues forcefully about Lisp being fundamentally more powerful than other languages and credits Lisp as the key reason why his startup Viaweb outlasted their competitors. He identifies macros as the particularly distinguishing capability of Lisp. He writes:

A big chunk of our code was doing things that are very hard to do in other languages. The resulting software did things our competitors’ software couldn’t do. Maybe there was some kind of connection. I encourage you to follow that thread.

I did follow that thread, and that essay is a key reason why Clojure has been my primary programming language for the past 15 years. What Paul Graham described about the power of macros was absolutely true. Yet evidently very few shared my curiosity and Lisp/Clojure are used by a tiny percentage of programmers worldwide. How can this be?

Many point to “ecosystems” as the barrier, an argument that’s valid for Common Lisp but not for Clojure, which interops easily with one of the largest ecosystems in existence. So many misperceptions dominate, especially the reflexive reaction that the parentheses are “weird”. Most importantly, you almost never see these perceived costs weighed against Clojure’s huge benefits. Macros are the focus of this post, but Clojure’s approach to state and identity is also transformative. The scale of the advantages of Clojure dwarfs the scale of adoption.

In that essay Paul Graham introduced the “blub paradox” as an explanation for this disconnect. It’s a great metaphor I’ve referenced many times over the years. This post is my take on explaining this disconnect from another angle that complements the blub paradox.

I recognize there’s a fifty-year history of Lisp programmers trying to communicate its power with limited success. So I don’t think I’m going to change any minds. Yet I feel compelled to write this since the metaphor of “Flatland” just fits too well.

Dimensions of programming

Programming revolves around abstractions, high-level ways of thinking about code far removed from the base primitives of bits, machine instructions, and memory hierarchies. But not all abstractions are created equal. Most abstractions programmers use are automations, a package that bundles a set of operations behind a name. A function is the canonical example: it takes inputs and produces an output. You don’t need to know how the function works and can think just in terms of the function’s spec and performance characteristics.

Then there are the rare abstractions which extend the algebra of programming itself: the basic concepts available and the kinds of relationships that can exist between them. These are the abstractions that create new dimensions.

Lisp/Clojure macros derive from the uniformity of the language, which lets you compose the language back on itself. Logic can be run at compile time no differently than at runtime, using all the same functions and techniques. The syntax tree of the language can be manipulated and transformed at will, enabling control over the semantics of code itself. The ability to manipulate compile time so effortlessly is a new dimension of programming. This new dimension enables you to write fundamentally better code that you’ll never be able to achieve in a lower dimension.
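To make that concrete, here is the classic toy example (mine, not from Graham’s essay): a macro that rearranges code at compile time using ordinary Clojure.

(defmacro unless [test then else]
  ;; Runs at compile time: receives code as data and returns new code.
  (list 'if test else then))

;; (unless done? (keep-going) (stop)) compiles to (if done? (stop) (keep-going)).
(unless false :ran :skipped) ;; => :ran

Trivial, yes, but the same mechanism scales up to entire sub-languages, because the transformation is written with the full language rather than a separate template system.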

If you’re already a Lisp programmer, you already understand the power of macros and how to avoid the potential pitfalls. My description is boring because you’ve done it a thousand times. But if you’re not a Lisp programmer, what I described probably sounds crazy and ill-advised!

In Flatland, the square cannot comprehend the third dimension because he can only think in 2D. Likewise, you cannot comprehend a new programming dimension because you don’t know how to think in that dimension. You do not have the representational machinery to understand what the new dimension is even offering. A programmer in 2D may conclude the 3D concept is objectively wrong. This isn’t because they’ve understood it, but because they’re trying to flatten it into their existing coordinate system.

Learning new dimensions

You can’t persuade someone in 2D with 3D arguments. This is exactly like how in Flatland the sphere is unable to get the square to comprehend what “up” and “down” mean.

However, this is where the metaphor breaks down. Though your brain will never be able to comprehend 4D space, your brain can adapt to new dimensions of programming. People who adopt Lisp/Clojure typically describe the experience similarly. First it’s uncomfortable, then there’s a series of moments of clarity, and then there’s a feeling they can never go back.

All it takes is curiosity and an understanding that the greatest programming ideas sometimes can’t be appreciated at first. That latter point is the key insight that gets lost in the conversation. We all have that cognitive bias. Recognizing that bias is enough to break it, and it’s one of the best ways to grow as a programmer. Macros are not the only great programming idea with this dimension-shifting quality.

Conclusion

In the end, living in Flatland is a choice. The choice appears the moment you notice that instinctive recoil from an unfamiliar idea, when you feel that tension between “this doesn’t make sense” and “maybe I don’t yet have the concepts to make sense of it.” What you do in that moment defines whether you stay in Flatland or step out of it.

Permalink

Aimless

I’ve been doing Clojure long enough that I remember when Clojure was just a JAR provided in a ZIP file. As much as I benefit from all the work around modern Clojure tooling, it was less ceremonious in the old days to play around without big plans in mind.

During the Clojure Tooling Working Group meetup (coordinated by the awesome Christoph Neumann) at the Conj this year, Rich Hickey said that Clojure should be useful without needing to create a formal project. That struck a chord and reminded me of the simpler times.

It also got me thinking how even experienced Clojurists still find ClojureScript difficult or unnatural to set up. That’s really unfortunate, as folks like António Monteiro, Mike Fikes, myself, and others put in an incredible amount of work over the years to make using ClojureScript more like using Clojure. Let me show you what I mean. Edit your ~/.clojure/deps.edn to include the following:

{:aliases
 {:#cljs {:extra-deps
          {org.clojure/clojurescript {:mvn/version "1.12.116"}}
          :main-opts ["-m" "cljs.main" "-r"]}}}

That’s really all you need. No extra compiler flags, no build configuration EDN or files. Pick whatever global alias name makes sense for you. The idea is to not surprise yourself later. Go ahead and launch a ClojureScript REPL, it doesn’t matter where. Don’t worry, since we didn’t specify an output directory, ClojureScript will write to a tmp directory.

clj -M:#cljs

A browser window will open. Type a few things at the REPL. Then make a file called whatever you want but ending in .cljs. user.cljs works if you’re not feeling inspired. Again, we’re just aimlessly exploring - how does the new proxy functionality in ClojureScript work? Add the following to your file:

(require '[cljs.proxy :refer [builder]])

(def proxy (builder))
(def proxied-map (proxy {:first "Foo" :last "Bar"}))

Note we didn’t define a namespace. We don’t need to! Let’s load the file in the REPL with (load-file "user.cljs") or whatever name you came up with. Right-click on the browser REPL page to open the browser inspector (in Safari you’ll need to enable Developer tools). Navigate to the Console. Try typing the following into the JavaScript Console:

cljs.user.proxied_map["first"]
cljs.user.proxied_map["last"]
Object.keys(cljs.user.proxied_map)

That’s it! No project, no namespaces - just experiments and explorations. You never know what might happen. I actually wrote the code that became cljs.proxy without concrete goals in mind in exactly this way.

Permalink

1.12.116 Release

We’re happy to announce a new release of ClojureScript. If you’re an existing user of ClojureScript please read over the following release notes carefully.

This is a major feature release with a significant number of enhancements. Before diving in, note that Google Closure Compiler has been updated to v20250820.

For a complete list of fixes, changes, and enhancements to ClojureScript see here

ECMAScript 2016 Language Specification

ClojureScript, outside of a few small exceptions, has generated ECMAScript 3rd edition (1999) compatible code. We avoided any newer constructs because historically they offered undesirable outcomes: increased code size due to polyfilling and decreased performance due to yet unoptimized paths in JavaScript virtual machines.

Nine years have passed since ECMAScript 2016 was released, and the major JavaScript virtual machines now offer great performance across the specification. While language constructs like let surprisingly fail to deliver much value for ClojureScript, features like Proxy and Reflect solve real problems at low cost.

ClojureScript will use ES2016 moving forward where it delivers value and performance benefits.

cljs.proxy

ClojureScript’s interop story, while strong, was never as complete as Clojure on the JVM due to the lack of common interfaces, e.g. java.util.Map. ClojureScript values had to be marshalled to JavaScript values via clj->js, generating a significant amount of computational waste.

Enter cljs.proxy. This new experimental namespace uses ES2016 Proxy to lazily bridge ClojureScript maps and vectors to JavaScript. JavaScript code can now see ClojureScript maps as objects and vectors as array-likes. cljs.proxy was carefully written to add very little overhead for object access patterns over a direct -lookup call.

(require '[cljs.proxy :refer [builder]]
         '[goog.object :as gobj])

(def proxy (builder))
(def proxied-map (proxy {:foo 1 :bar 2}))

(gobj/get proxied-map "foo") ;; => 1

This feature needs community tire kicking, but we believe this approach offers sizeable benefits over existing practice.

Clojure Method Values

ClojureScript now supports Clojure 1.12 method value syntax as well as static field syntax. PersistentVector/EMPTY works, but also String/.toUpperCase and Object/new. Thanks to ES2016 Reflect we do not need manual :param-tags for disambiguation, and it covers the many cases where type information will simply not be available to the ClojureScript compiler.

(refer-global :only '[String])
(map String/.toUpperCase ["foo" "bar" "baz"]) ;; => ("FOO" "BAR" "BAZ")

:refer-global and :require-global

:refer-global lets a namespace declare what definitions from the global environment should be available in the current namespace without js prefixing. It can be combined with :rename.

(refer-global :only '[Date] :rename '{Date my-date})
(my-date/new)

:require-global lets you use JavaScript libraries that you included as script tags on the page without any further build configuration. JavaScript build tooling brings a considerable amount of additional complexity (and, increasingly, risk), and there is a growing population of developers moving to technologies that eliminate it. Hypermedia frameworks in particular have returned to more innocent times where at most you needed exactly one JavaScript dependency to be productive.

ClojureScript now supports hypermedia-centric development approaches where you might have only one dependency, using ClojureScript and the Google Closure Library primarily to build Web Components while sidestepping the JavaScript dependency churn and tooling burden.

(require-global '[Idiomorph :as idio])
(idio/morph ...)

:lite-mode and :elide-to-string

Not all programs we might want to write require ambition. There are light scripting use cases, say for a blog, that are not currently well served by ClojureScript.

How to break the 20K compressed wall? After some time in the hammock, we decided to travel back to 2011 and resurface the original data structures that Rich Hickey and co. included in the standard library. While not as efficient, they are decoupled and smaller. Setting the new :lite-mode compiler flag to true makes the ClojureScript compiler emit calls to the older constructors and tree-shaking can eliminate the heavier persistent implementations.

Printing is another blocker for very compact artifacts. Many simpler programs will never recursively print EDN. The :elide-to-string compiler flag removes the toString implementations leading to improved tree-shaking.

Combining these two experimental flags cuts the initial artifact size by two thirds. It’s important to understand these flags cannot be used to make large ClojureScript programs smaller - once you have enough dependencies or rely on enough features, the savings are a wash.

But for people who know that they want to build something very compact, yet not give up on useful bits of cljs.core and Google Closure Library, these two new flags provide more control.
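As a rough sketch, the flags would sit alongside your other compiler options. The options map below is hypothetical; exact placement depends on how you invoke the compiler:

{:main hello.core
 :output-to "main.js"
 :optimizations :advanced
 :lite-mode true
 :elide-to-string true}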

The following program is 6K Brotli compressed with :lite-mode and :elide-to-string.

(->> (map inc (range 10))
  (filter even?)
  (partition 2)
  (drop 1)
  (mapcat identity)
  into-array)

We’re excited to hear feedback about all these new features!

Contributors

Thanks to all of the community members who contributed to ClojureScript 1.12.116

  • Michel Borkent

  • Paula Gearon

  • Roman Liutikov

Permalink

What it was like to speak at Clojure South 2025

What is Clojure South?

According to the official site:

"Organized by Nubank, Clojure South is part of the Clojure community's official calendar, connecting developers, enthusiasts, and companies to share real-world experiences, discuss trends, and strengthen the language's global network."

Motivation

In a conversation with a Brazilian woman I met at Lambda Days 2025 (I wrote about attending that event in this article), I asked whether she would be attending the talks (she works for Nubank) and found out she would be an instructor for the Clojure workshop! During that conversation she suggested I submit a talk proposal. I mentioned an idea I had but admitted I felt insecure about sending it. She encouraged me to try, and I decided to take the chance!

Although I have a few talks and podcast appearances on my résumé, I felt that in this case it might be more interesting to submit a proposal in the lightning talk format (a talk of just 10 minutes, with no time for questions).

And some time later, to my surprise, the proposal was accepted!

Topic

I had considered submitting proposals on two possible topics. One about teaching Clojure and functional programming to beginners (on account of my course Clojure: Introdução à Programação Funcional). The other about a project I built at the company where I work, which uses Clojure on the backend and the Elm programming language on the front end.

I went with the second option. It was a project I had been wanting to talk about publicly, and this seemed like a good opportunity!

Another factor is that the project is related to a very well-known app in Brazil, the Carteira Digital de Trânsito, an app developed by SERPRO and available for Android and iOS. I figured this could increase the chances of the talk being accepted.

The full description I submitted was:

While developing the Carteira Digital de Trânsito, I saw how challenging it was to support an app with millions of users integrated with multiple systems. I dreamed of creating a simple, friendly internal interface for the lookups that would make support easier. That is how Apoio CDT was born, a web system that automated processes and enabled fast, secure queries. With a Clojure backend and an Elm front end, we built an MVP in three weeks. The result was a simple, efficient web architecture that has been easy to maintain over the years.

As far as I know, this was the company's first project to use Clojure and also its first to use Elm. And this was the first time I had the opportunity to talk about the system with people outside the company where I work!

Other talks and workshops

The event lasted two days, with the two workshops taking place on the first day:

Clojure workshop

An introductory Clojure course, in English, taught by Christoph Neumann, Clojure Developer Advocate at Nubank.

Although I already have experience with Clojure (and am in fact the instructor of the course Clojure: Introdução à Programação Funcional), it was great to reinforce the language's core concepts, and I also learned some fundamentals I didn't yet know!

Datomic workshop

The afternoon of the first day was dedicated to the workshop on the Datomic database, taught in Portuguese by Carolina Silva (the same person I mentioned at the beginning of this article) and Hanna Figueiredo, both Software Engineers at Nubank.

I had already watched several talks about Datomic and always found it a very intriguing and interesting database. I believe it would make a lot of sense to use it in several of the projects I work on now or have worked on in the past.

The workshop covered a number of Datomic's theoretical concepts but was also very hands-on.

You can check out the hands-on part of the workshop in this GitHub repository!

I confess it was a lot of information in a short time. I will definitely need to go back to the repository and practice quite a bit more to internalize the main concepts of this database. But it was well worth it!

If before I thought this database would be worth studying, now I'm certain!

Conversations and feedback

As soon as I arrived at the event, I had the opportunity to talk with some people I had only known online, such as Arthur Fücher. Besides being a Senior Software Engineer at Nubank, Arthur is a very active figure in Brazil's Clojure community!

And even though I'm quite a shy person, I managed to meet many great people from Brazil and other countries.

In particular, I was very warmly welcomed by Christoph Neumann. I was standing in a corner at the happy hour after the second day of the event, eating and thinking about heading home, when he came over and struck up a conversation. We chatted a bit, and I started asking some questions about Datomic. During the conversation, I mentioned that it might be an interesting database for the company where I work. When I said where I worked and what the company does, he was very interested! He soon offered to introduce me to Joe Lane, Principal Engineer Building Datomic at Nubank!

I talked with Joe and, once again, was extremely well received! He showed a lot of enthusiasm when I mentioned where I worked and the kinds of projects SERPRO develops, and he made himself available to answer questions about Datomic, even commenting that SERPRO could be a great case for Datomic.

Throughout the event I also had the opportunity to meet some people who had taken my Clojure course! Among them was Bruno Guimarães, currently a Senior Software Engineer at Nubank and a host of the conference.

Meeting people in person who took my online course is always a great experience! The numbers, the statistics, the comments gain a face, a voice, a life. It's always very gratifying!

On the second day, after my presentation, many people came up to talk and wanted to know more about how I managed to introduce Clojure into a Brazilian federal government system. There were some more specific questions, such as my motivation for using Elm on the front end (rather than Clojure in its ClojureScript incarnation), among several other interesting conversations.

The feedback was very positive, and all the conversations were friendly, good-humored, and respectful.

Food (and more conversations)

The event had a welcome coffee at the start of the day and coffee breaks throughout. The quality was very good and everything was very formal (maybe even a little too formal? heh).

For lunch, there were several restaurant options (not included in the ticket price) within the same building complex where Nubank Sparks is located. That made things much easier, since I didn't need to leave the venue, especially on the first day, the workshop day, when I had my laptop with me.

You can have lunch wherever you like, but I enjoyed the options available nearby, and it was another opportunity to talk with more people from the event. Again, even though I'm quite shy and sat somewhere more out of the way, several people soon came to sit with me and strike up a conversation. That's when I met a Ukrainian woman who showed us the Diia app, where she can carry digital versions of many of her documents, such as her driver's license and passport, and access more than 130 Ukrainian government services (you can learn more about this app on this Wikipedia page)!

Was it worth it?

Definitely, yes!

The event took place on two weekdays (a Monday and a Tuesday). That may make it harder for some people to attend; a colleague of mine couldn't participate because his employer wouldn't give him those days off, for example. But for those who can attend, it's far less tiring than having an event like this on the weekend, after a long week of work.

I had bought tickets during the pre-sale, before my talk was approved, but the event organizers later refunded me; as a speaker, I received tickets for both days. They also offered help with transportation and lodging, which I declined, since the event was close to my home.

The experience of speaking at an event of this magnitude, even if only for 10 minutes, was incredible! I feel deep gratitude toward the people who helped organize the conference and who believed in my potential!

Permalink

Post-Conj update

I got back from the Clojure Conj last Saturday (almost a week ago as I write this), and I’m still trying to get my life back together. Well, I should say that I always find conferences a bit disruptive to my routines. But that’s kind of the point! Conferences are a space out of time, where the quotidian responsibilities like childcare, work, and dishes are paused while you intensely spend 18 hours per day immersing yourself in a topic you’re interested in. While the cost is high, the benefits are higher. And that’s why I keep attending conferences.

It was great to go to the Conj this year. While it’s kind of a blur, I got what I was looking for, which was basically a boost in the perceived social value of Clojure. Let me put that less obtusely: Most of my friends are not programmers. They don’t know what Clojure is. I can’t really geek out with them about Clojure. Basically, my interest is private. Over time, it becomes harder and harder to maintain interest in something few people I encounter face-to-face care about. Am I crazy liking this thing? Am I out of touch with reality? Does it matter at all?

The Conj gives me that boost. Hundreds of smart people talking about Clojure, validating my interest in it, all in one small space. There are so many people that there are sub-groups. AI vs. anti-AI. Protocols vs. no protocols. Even REPL vs. no REPL (yes! they exist!). And it’s glorious.

The other benefit is that after three days together blabbing on about Clojure, it starts to get personal. You get to know people beyond their code and their jobs. It turns out that even Clojure programmers have lives.

I want to touch on a couple of things.

My workshop went well. The ideal is that I blow people’s minds with skills they couldn’t have learned any other way that they can immediately apply at their work. I fell somewhat short of that. The second best is missing that mark but learning how to do better next time. That’s where it was. Luckily, I can translate that learning directly into my book. Thanks to all the participants who played along.

People don’t get to see the magic of Rich Hickey outside of the Conj. Seriously, he’s amazing. As far as I can tell, he watches every talk (unless there are two tracks, obviously). He will ask questions during the Q&A. And he’ll approach the speaker after the talk and share opinions, critiques, and encouragement. Despite not being as active online as he once was, he is committed to the community when it comes to in-person gatherings. He’s super engaged with the design of new features and is still thinking through tough problems, even after his retirement.

Speaking of community, it was a big focus this year. Christoph and Jordan (and other volunteers) hosted a community-building event (where we all got to share our love within small groups guided by Clojure luminaries). Just their presence and energy made it clear that this conference is important.

And I’m super excited about the Clojure documentary. Rich Hickey has already been interviewed out at his cabin. Many Clojure folks have also been filmed, including yours truly. I hope my footage doesn’t wind up on the cutting room floor. We’re very lucky that the same company that made the Python, Rails, and Node documentaries is making one for Clojure. We’re much smaller. But they’ve told me they think the influence of Clojure is greater than what’s apparent in the number of programmers. I have to agree, and it’s nice that the filmmakers can see that.

Alright, folks! Be good to each other.

Permalink

Spring Framework 7 and Spring Boot 4: The tastiest bites - JVM Weekly vol. 153

I know I usually don’t write about new releases outside of “Rest of the Story”, but hey - this is a major Spring release, so…

This week, Spring (according to their calendar) released a new generation of practically all its flagship projects: Spring Framework 7, Spring Boot 4, Spring Data 2025.1, and Spring AI 1.1 for the Boot 3.5 line (while actively developing 2.x for Spring Boot 4).

Let’s be honest - major version bumps like these don’t just happen “by the way.” They usually mark the end of an era rather than just another patch. New Spring fits into a much broader shift across the entire ecosystem. Java has racked up new LTS releases, Jakarta EE has finally closed the book on the javax to jakarta migration, GraalVM stopped being just a “conference talk curiosity,” and AI has moved from hackathons straight into budgets, roadmaps, and KPIs.

But here’s the thing: Spring isn’t just trying to keep up - it clearly wants to co-define this movement.

That’s why, instead of throwing in yet another “compatibility flag” or one more section in application.yml, the team decided to go for a coordinated, one-time “Big Bang.” In this release, the foundations have been aligned with new standards, a massive chunk of technical debt has been paid off, and the platform has been positioned for the AI-driven years ahead. From the outside, it looks like a version migration. From the inside? It looks like a redesign of Spring’s entire mental model.

The foundation: goodbye javax

At the fundamental level, the changes are very concrete. The world of javax officially disappears from the main path. Annotations like @Resource, @PostConstruct, or @Inject - present in almost every project for years - need to be migrated to the new path.

import javax.annotation.PostConstruct;
import javax.persistence.Entity;

@Entity
public class LegacyUser {
    @PostConstruct
    private void init() { ... }
}

They are now moved to a consistent, Jakarta-native world compliant with Jakarta EE 11.

import jakarta.annotation.PostConstruct;
import jakarta.persistence.Entity;

@Entity
public class ModernUser {
    @PostConstruct
    private void init() { ... }
}

But there is another “hidden landmine” in the plumbing: Jackson 3. Just like the Jakarta migration, the new Jackson version changes package names. Spring Framework 7 is doing some heavy lifting to support a mix of Jackson 2 and 3 for now, but the signal is clear: the old JSON backend is on its way out, and we are looking at another ecosystem-wide shift.

At the same time, the entire stack gets a lift: newer Servlet, JPA, Bean Validation, plus new generations of Tomcat and Jetty. Some well-known servers, like Undertow, simply didn’t make the cut for the new standard and have dropped out of the Spring ecosystem. It hurts (I remember using Undertow regularly back in my Clojure days), but it closes a chapter. This leap had to happen - the only question was whether it would be a single, planned surgical cut, or a “death by a thousand cuts” via endless, frustrating compatibility fractures. Spring chose option one.

The Billion Dollar Mistake and other improvements for sanity’s sake

The second axis of change concerns the daily developer experience. It starts with something we all know a little too well: nulls. Spring Framework 7 and the new generation of libraries around it (Spring Data, Spring Security, Spring Vault) adopt JSpecify.

To understand why this is huge, you have to remember the absolute mess we’ve lived in for the last 15 years. We had javax.annotation.Nullable, org.jetbrains.annotations.Nullable, edu.umd.cs.findbugs.annotations.Nullable, and android.support.annotation.Nullable. It was the Wild West, where tools (IDEs, Sonar) often ignored each other’s annotations.

JSpecify is the peace treaty. It is a standard agreed upon by the giants (Google, JetBrains, Oracle, and others) to finally speak one common language about nullability.

In Spring Framework 7, this manifests as a massive “Inversion of Control” for nulls. Thanks to the @NullMarked annotation applied at the package level, Spring stops being “everything can be null” by default. Instead, it sends a clear signal: non-null is the normal case. You no longer have to clutter your code with @NonNull on every parameter. You only explicitly mark the exceptions with @Nullable.

// package-info.java
@org.jspecify.annotations.NullMarked
package com.example.billing;

// BillingService.java (same package)
public class BillingService {

    // The compiler knows 'invoice' cannot be null.
    public Receipt processPayment(Invoice invoice) {
        return new Receipt(invoice.getId());
    }

    // Exceptions to the non-null default are marked explicitly.
    public @Nullable Transaction findHistory(@Nullable String transactionId) {
        return transactionId != null ? repo.find(transactionId) : null;
    }
}

We can expect an end to the “hacky workarounds” we’ve been doing for a decade.

First, API Versioning is finally a first-class citizen. No more writing custom interceptors or dragging in external libraries just to version your REST endpoints. Whether you prefer path-based, header-based, or query-param versioning, it is now supported natively by standard annotations.

@RestController
@RequestMapping("/orders")
public class OrderController {

    // Native API Versioning support (Header, Path, or Param)
    // No more custom interceptors needed
    @GetMapping
    @ApiVersion(value = "1.0", strategy = VersionStrategy.HEADER)
    public List<Order> getOrdersLegacy() {
        return repository.findAll();
    }

    // Resilience patterns moved directly into Core
    // No extra 'spring-retry' dependency required
    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 500))
    @GetMapping
    @ApiVersion(value = "2.0", strategy = VersionStrategy.HEADER)
    public List<Order> getOrdersResilient() {
        return service.fetchOrdersWithRateLimit();
    }
}

Second, resilience patterns (retries, circuit breakers, rate limiters) have been promoted directly into spring-core and the main framework. Features like @Retryable and rate limiting now ship in the core, so you no longer need to pull in extra dependencies or rely solely on external tools like Resilience4j for basic robustness. In distributed systems, failures are the default state (I hope everybody has internalized that knowledge by now), and the framework finally treats them that way “out of the box.”

Less magic, more build-time actions

The third axis is Spring’s relationship with the compiler and the build process. For years, Spring was the master of “runtime magic”: classpath scanning, dynamic configuration, reflection, proxies. This gave us incredible flexibility, but it came with consequences. On one hand, we always dealt with technology indistinguishable from magic (cheers to Arthur C. Clarke), but on the other, in the era of containers, cold starts, and native images, it started to hurt.

The new Spring - following the current fashion in the ecosystem - consistently shifts the workload to build time. Instead of doing “everything” at startup, it tries to know and generate as much as possible beforehand.

You can see this best in Spring Boot 4. The existing spring-boot-autoconfigure, which had bloated over the years, has been broken down into a neat set of smaller modules. Suddenly, your IDE stops flooding you with suggestions for classes and configuration properties that aren’t even on your classpath. And this is exactly where the Spring AOT world meets what’s happening inside the Virtual Machine itself within Project Leyden.

Leyden in OpenJDK has the exact same goal, just a level lower: improve startup time, time-to-peak performance, and application footprint by shifting as much work as possible from execution time to an earlier stage. JDK 24 brought the first installment of this philosophy with JEP 483: Ahead-of-Time Class Loading & Linking, which can load and link classes during a “training” run and save the result in a special cache.
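For reference, the JEP 483 workflow is a three-step affair (app.jar, the cache file names, and the main class below are placeholders):

# 1. Training run: record which classes get loaded and linked
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar com.example.App

# 2. Assembly: build the AOT cache from the recorded configuration
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar

# 3. Production run: start the application with the cache
java -XX:AOTCache=app.aot -cp app.jar com.example.App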

So when you combine this with modular Boot 4 and Spring AOT, the puzzle starts to come together. By slicing up autoconfiguration and moving “magic” parts to code generated at build time, Spring practically reduces the scope that the JVM needs to warm up at start. Leyden, with its AOT cache, no longer operates on a chaotic clump of classes, but on a predictable, slimmed-down graph that Spring has already cleaned and materialized.

No wonder early experiments show synergy: even before Leyden, Spring AOT optimizations alone gave about a 15% startup boost on the classic JVM, and “pre-main” improvements from Leyden can boost this effect even further. In practice, this means that Java - the same Java we got used to calling “heavy” - is starting to behave more and more like an ecosystem with built-in, hybrid AOT: part of the work is done by the framework, part by the JVM. The container rises faster, eats less RAM, and depends less on dynamic generation happening at startup. Native builds have less matter to analyze, and AOT gets small, well-described blocks to work with instead of one mega-jar.

Interestingly, we can add Spring Data 2025.1 to this, with repositories prepared ahead of time. Methods like findByEmailAndStatus stop being mysterious spells interpreted at application startup and become regular, generated code, compiled along with the rest of the system. Startup is faster, fewer things happen “in the background,” and behavior in native images stops being a lottery.
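For context, such a derived query method is nothing more than an interface declaration (the entity and status types here are illustrative):

public interface UserRepository extends CrudRepository<User, Long> {
    // Derived from the method name; with AOT repositories the query
    // is generated at build time rather than parsed at startup.
    List<User> findByEmailAndStatus(String email, Status status);
}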

A Tale of Two AI Streams

However, a no less interesting part of this puzzle concerns AI. Spring AI is clearly diverging into two development lines.

The 1.1 branch closes the Spring Boot 3.5 era: it’s a “broadening” release, not a table-flipper. In practice, this means you can already build sensible agentic workflows in the 3.5 ecosystem: you pick up the Spring AI starter, configure a provider, define a few tools, and the rest – from the MCP skeleton, through JSON mapping, to integration with ChatClient – is delivered by the framework. Spring AI 1.1 is “AI for today’s projects” – maximally compatible with existing Boot, focused on stability, use-case coverage, and integration with what you already have in your monoliths and microservices.

Simultaneously, the team is currently working on the 2.x line, which deliberately breaks this compromise and is designed in tandem with Spring Boot 4. Here, the theme is redesign, not just expansion: full compatibility with Boot 4, a new API shape, separation of reactive and imperative ChatClient, preparation for JSpecify and null-safe contracts (which Spring officially announces as a goal for 2.0), and better embedding of MCP and AOT in the very heart of the architecture. For the developer, the difference comes down to direction: 1.1 is the fast track to add AI to an existing 3.5 architecture, but 2.x is a conscious decision that you are building a system where LLMs, MCP agents, and classic Spring services create one coherent, first-class stack.

@Service
public class BookingAgent {

    private final ChatClient chatClient;

    public BookingAgent(ChatClient.Builder builder) {
        this.chatClient = builder
            .defaultSystem("You are a booking assistant.")
            .defaultTools("bookingTools") // References a bean, not just a function
            .build();
    }

    // Defined once, used by Agents and LLMs "automagically"
    @Tool(description = "Check room availability for a given date range")
    public boolean checkAvailability(@NonNull LocalDate from, @NonNull LocalDate to) {
        return bookingService.hasSlots(from, to);
    }
}

And I suspect the difference in capabilities will only widen - if only to motivate people to migrate to new versions 😉.


To sum it up - it’s very clear that Spring has drawn two parallel realities.

Spring 6 + Boot 3 is now the settled track – stable, known, ideal for systems entering the maintenance phase that just need to work, not chase every novelty.

On the other side, we have the new baseline: Spring Framework 7, Spring Boot 4, fresh Spring Data, and Spring AI 2.x. This is where Jakarta EE 11, JSpecify, AOT, vector search, MCP agents, and lightweight AI runtimes originating from Spring Boot 4.0 await.

Honestly? The scope of changes is large enough that I expect a lot of serious projects on Spring 6 will simply stay there – at least for a few years. It will be a very comfortable haven: support is there, everything is familiar, business is happy. But that’s exactly why the question is no longer “is it worth upgrading.” The real question is: in which world do you want your system to live in five years - in the safe, tamed 6/3, or on the new, wilder, but also much more promising 7/4 terrain?

I have my suspicions where the most ambitious teams will end up - or those who want to migrate gradually, without a major Big Bang when support ends... but we’ll see.


PS: There was no edition last week as I was traveling, with the opportunity to catch up on some bucket-list TV shows on the plane. And as a pre-Disney Star Wars fan... OMFG.

PS2: Next week I’ll be speaking at #KotlinDevDay in Amsterdam! See you there 😊

Permalink

I am sorry, but everyone is getting syntax highlighting wrong

Translations: Russian

Syntax highlighting is a tool. It can help you read code faster. Find things quicker. Orient yourself in a large file.

Like any tool, it can be used correctly or incorrectly. Let’s see how to use syntax highlighting to help you work.

Christmas Lights Diarrhea

Most color themes have a unique bright color for literally everything: one for variables, another for language keywords, constants, punctuation, functions, classes, calls, comments, etc.

Sometimes it gets so bad one can’t see the base text color: everything is highlighted. What’s the base text color here?

The problem with that is, if everything is highlighted, nothing stands out. Your eye adapts and considers it a new norm: everything is bright and shiny, and instead of getting separated, it all blends together.

Here’s a quick test. Try to find the function definition here:

and here:

See what I mean?

So yeah, unfortunately, you can’t just highlight everything. You have to make decisions: what is more important, what is less. What should stand out, what shouldn’t.

Highlighting everything is like assigning “top priority” to every task in Linear. It only works if most of the tasks have lesser priorities.

If everything is highlighted, nothing is highlighted.

Enough colors to remember

There are two main use-cases you want your color theme to address:

  1. Look at something and tell what it is by its color (you can tell by reading text, yes, but why do you need syntax highlighting then?)
  2. Search for something. You want to know what to look for (which color).

1 is a direct index lookup: color → type of thing.

2 is a reverse lookup: type of thing → color.

Truth is, most people don’t do these lookups at all. They might think they do, but in reality, they don’t.

Let me illustrate. Before:

After:

Can you see it? I misspelled return as retunr, and its color switched from red to purple.

I can’t.

Here’s another test. Close your eyes (not yet! Finish this sentence first) and try to remember what color your color theme uses for class names?

Can you?

If the answer for both questions is “no”, then your color theme is not functional. It might give you comfort (as in—I feel safe. If it’s highlighted, it’s probably code) but you can’t use it as a tool. It doesn’t help you.

What’s the solution? Have an absolute minimum of colors. So little that they all fit in your head at once. For example, my color theme, Alabaster, only uses four:

  • Green for strings
  • Purple for constants
  • Yellow for comments
  • Light blue for top-level definitions

That’s it! And I was able to type it all from memory, too. This minimalism allows me to actually do lookups: if I’m looking for a string, I know it will be green. If I’m looking at something yellow, I know it’s a comment.

Limit the number of different colors to what you can remember.

If you swap green and purple in my editor, it’ll be a catastrophe. If somebody swapped colors in yours, would you even notice?

What should you highlight?

Something there isn’t a lot of. Remember—we want highlights to stand out. That’s why I don’t highlight variables or function calls—they are everywhere, your code is probably 75% variable names and function calls.

I do highlight constants (numbers, strings). These are usually used more sparingly and often are reference points—a lot of logic paths start from constants.

Top-level definitions are another good idea. They give you an idea of a structure quickly.

Punctuation is worth dimming: it helps separate names from syntax a little, and you care about names first, especially when quickly scanning code.

Please, please don’t highlight language keywords. class, function, if, else, stuff like this. You rarely look for them: “where’s that if” is a valid question, but you will be looking not at the if keyword, but at the condition after it. The condition is the important, distinguishing part. The keyword is not.

Highlight names and constants. Grey out punctuation. Don’t highlight language keywords.

Comments are important

The tradition of using grey for comments comes from the times when people were paid by line. If you have something like this (a made-up stand-in for the original screenshot):
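;; increment the counter by one
(swap! counter inc)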

of course you would want to grey it out! This is bullshit text that doesn’t add anything and was written to be ignored.

But for good comments, the situation is opposite. Good comments ADD to the code. They explain something that couldn’t be expressed directly. They are important.

So here’s another controversial idea:

Comments should be highlighted, not hidden away.

Use bold colors, draw attention to them. Don’t shy away. If somebody took the time to tell you something, then you want to read it.

Two types of comments

Another secret nobody is talking about is that there are two types of comments:

  1. Explanations
  2. Disabled code

Most languages don’t distinguish between those, so there’s not much you can do syntax-wise. Sometimes there’s a convention (e.g. -- vs /* */ in SQL), then use it!

Here’s a real example from Clojure codebase that makes perfect use of two types of comments:

Disabled code is gray, explanation is bright yellow
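The screenshot doesn’t reproduce here, but the shape of it is roughly this (my own sketch, not the actual Clojure source):

(defn safe-div [a b]
  ;; Explanation: returns nil instead of throwing on division by zero.
  (when-not (zero? b)
    (/ a b)))

;; Disabled code, kept around for reference:
;; (defn safe-div [a b]
;;   (try (/ a b) (catch ArithmeticException _ nil)))

An editor that can tell the two apart, by convention or by content, can dim the dead code and brighten the prose.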

Light or dark?

Per statistics, 70% of developers prefer dark themes. Being in the other 30%, I’ve always been puzzled by that. Why?

And I think I have an answer. Here’s a typical dark theme:

and here’s a light one:

On the latter one, colors are way less vibrant. Here, I picked them out for you:

Notice how many colors there are. No one can remember that many.

This is because dark colors are in general less distinguishable and more muddy. Look at the hue scale as we move brightness down:

Basically, in the dark part of the spectrum, you just get fewer colors to play with. There’s no “dark yellow” or good-looking “dark teal”.

Nothing can be done here. There are no magic colors hiding somewhere that have both good contrast on a white background and look good at the same time. By choosing a light theme, you are dooming yourself to a very limited, bad-looking, barely distinguishable set of dark colors.

So it makes sense. Dark themes do look better. Or rather: light ones can’t look good. Science ¯\_(ツ)_/¯

But!

But.

There is one trick you can use that I don’t see a lot of: background colors! Compare:

The first one has nice colors, but the contrast is too low: letters become hard to read.

The second one has good contrast, but you can barely see colors.

The last one has both: high contrast and clean, vibrant colors. Lighter colors are readable even on a white background since they fill a lot more area. Text is the same brightness as in the second example, yet it gives the impression of clearer color. It’s all upside, really.

UI designers have known about this trick for a while, but I rarely see it applied in code editors:

If your editor supports choosing background color, give it a try. It might open light themes for you.

Bold and italics

Don’t use them. This goes into the same category as too many colors. It’s just another way to highlight something, and you don’t need too many ways, because you can’t highlight everything.

In theory, you might try to replace colors with typography. Would that work? I don’t know. I haven’t seen any examples.

Using italics and bold instead of colors

Myth of number-based perfection

Some themes pay too much attention to being scientifically uniform: all colors have the same exact lightness, and hues are distributed evenly on a circle.

This could be nice to know if you have OCD, but in practice, it doesn’t work as well as it sounds:

OkLab l=0.7473 c=0.1253 h=0, 45, 90, 135, 180, 225, 270, 315

The idea of highlighting is to make things stand out. If you make all colors the same lightness and chroma, they will look very similar to each other, and it’ll be hard to tell them apart.

Our eyes are way more sensitive to differences in lightness than in color, and we should use it, not try to negate it.

Let’s design a color theme together

Let’s apply these principles step by step and see where it leads us. We start with the theme from the start of this post:

First, let’s remove highlighting from language keywords and re-introduce base text color:

Next, we remove color from variable usage:

and from function/method invocation:

The thinking is that your code is mostly references to variables and method invocation. If we highlight those, we’ll have to highlight more than 75% of your code.

Notice that we’ve kept variable declarations. These are not as ubiquitous and help you quickly answer a common question: where does this thing come from?

Next, let’s tone down punctuation:

I prefer to dim it a little bit because it helps names stand out more. Names alone can give you the general idea of what’s going on, and the exact configuration of brackets is rarely equally important.

But you might roll with base color punctuation, too:

Okay, getting close. Let’s highlight comments:

We don’t use red here because you usually need it for squiggly lines and errors.

This is still one color too many, so I unify numbers and strings to both use green:

Finally, let’s rotate colors a bit. We want to respect nesting logic, so function declarations should be brighter (yellow) than variable declarations (blue).

Compare with what we started:

In my opinion, we got a much more workable color theme: it’s easier on the eyes and helps you find stuff faster.

Shameless plug time

I’ve been applying these principles for about 8 years now.

I call this theme Alabaster and I’ve built it a couple of times for the editors I used:

It’s also been ported to many other editors and terminals; the most complete list is probably here. If your editor is not on the list, try searching for it by name—it might be built-in already! I always wondered where these color themes come from, and now I’ve become the author of one (and I still don’t know).

Feel free to use Alabaster as is or build your own theme using the principles outlined in the article—either is fine by me.

As for the principles themselves, they worked out fantastically for me. I’ve never wanted to go back, and just one look at any “traditional” color theme gives me a scare now.

I suspect that the only reason we don’t see more restrained color themes is that people never really thought about it. Well, this is your wake-up call. I hope this will inspire people to use color more deliberately and to change the default way we build and use color themes.

Permalink

10 Developer Habits That Separate Good Programmers From Great Ones

There's a moment in every developer's career when they realize that writing code that works isn't enough. It happens differently for everyone. Maybe you're staring at a pull request you submitted six months ago, horrified by the decisions your past self made. Maybe you're debugging a production issue at 2 AM, surrounded by energy drink cans, wondering how something so simple could have gone so catastrophically wrong. Or maybe you're pair programming with someone who makes everything look effortless—solving in minutes what would have taken you hours—and you're left wondering what separates you from them.

I've been writing code professionally for over a decade, and I can tell you with certainty: the difference between good programmers and great ones has very little to do with knowing more algorithms or memorizing syntax. It's not about graduating from a prestigious university or working at a FAANG company. The real separation happens in the invisible places—in the daily habits, the tiny decisions made a thousand times over, the discipline to do the unglamorous work that nobody sees.

This isn't about natural talent. I've watched brilliant developers flame out because they relied solely on raw intelligence. I've also watched average programmers transform into exceptional ones through deliberate practice and habit formation. The great ones aren't born—they're built, one habit at a time.

What follows isn't a collection of productivity hacks or keyboard shortcuts. These are the deep, fundamental habits that compound over time, the practices that will still matter whether you're writing Python microservices today or quantum computing algorithms fifteen years from now. Some of these habits will challenge you. Some will feel counterintuitive. All of them will require effort to develop.

But if you commit to them, you won't just become a better programmer. You'll become the kind of developer others want on their team, the one who gets pulled into the hardest problems, the one who shapes how engineering gets done.

Let's begin.

Habit 1: They Read Code Far More Than They Write It

When I mentor junior developers, I often ask them: "How much time do you spend reading other people's code compared to writing your own?" The answer is almost always the same: not much. Maybe they glance at documentation or skim through a library's source when something breaks, but intentional, deep code reading? Rarely.

This is the first habit that separates the good from the great: great developers are voracious code readers.

Think about it this way. If you wanted to become a great novelist, you wouldn't just write all day. You'd read—extensively, critically, analytically. You'd study how Hemingway constructs sentences, how Ursula K. Le Guin builds worlds, how Toni Morrison uses language to evoke emotion. Programming is no different. The craft of software engineering is learned as much through observation as through practice.

But here's what makes this habit so powerful: reading code teaches you things that writing code alone never will. When you write, you're trapped in your own mental models, your own patterns, your own biases. You'll naturally reach for the solutions you already know. Reading other people's code exposes you to different ways of thinking, different approaches to problems, different levels of abstraction.

I remember the first time I read through the source code of Redux, the popular state management library. I was intermediate-level at the time, comfortable with JavaScript but not what I'd call advanced. What struck me wasn't just how the code worked—it was how simple it was. The core Redux implementation is just a few hundred lines. The creators had taken a complex problem (managing application state) and distilled it down to its essence. Reading that code changed how I thought about software design. I realized that complexity isn't a badge of honor; simplicity is.

Great developers make code reading a regular practice. They don't wait for a reason to dive into a codebase. They do it because they're curious, because they want to learn, because they know that buried in those files are lessons that took someone years to learn.

Here's how to develop this habit practically:

Set aside dedicated reading time. Just like you might schedule time for coding side projects, schedule time for reading code. Start with 30 minutes twice a week. Pick a library or framework you use regularly and read through its source. Don't skim—actually read, line by line. When you encounter something you don't understand, resist the urge to skip over it. Pause. Research. Figure it out.

Read with purpose. Don't just read passively. Ask questions as you go: Why did they structure it this way? What problem were they solving with this abstraction? What would I have done differently? Are there patterns I can adopt? What makes this code easy or hard to understand?

Read code from different domains and languages. If you're a web developer, read embedded systems code. If you work in Python, read Rust. The patterns and principles often transcend the specific technology. I've applied lessons from reading Erlang's OTP framework to architecting Node.js microservices, even though the languages are wildly different. The underlying principles of fault tolerance and supervision trees were universally applicable.

Join the reading club movement. Some development teams have started "code reading clubs" where developers meet regularly to read through and discuss interesting codebases together. If your team doesn't have one, start it. Pick a well-regarded open-source project and work through it together. The discussions that emerge from these sessions are gold—you'll hear how different people interpret the same code, what they notice, what they value.

Study the masters. There are certain programmers whose code is worth studying specifically. John Carmack's game engine code. Rich Hickey's Clojure. Linus Torvalds' Git. DHH's Rails. These aren't perfect (nothing is), but they represent thousands of hours of refinement and deep thinking. Reading their work is like studying under a master craftsperson.

The transformation this habit creates is subtle but profound. You'll start to develop intuition about code quality. You'll recognize patterns more quickly. You'll build a mental library of solutions that you can draw from. When you encounter a new problem, instead of Googling immediately, you'll remember: "Oh, this is similar to how React handles reconciliation" or "This is the strategy pattern I saw in that Python library."

I've interviewed hundreds of developers, and I can usually tell within the first few technical questions whether someone is a serious code reader. They reference implementations they've studied. They compare approaches across different libraries. They have opinions informed by actual examination of alternatives, not just Stack Overflow answers.

Reading code won't make you a great developer by itself. But it's the foundation. Everything else builds on this. Because you can't write great code if you haven't seen what great code looks like.

Habit 2: They Invest Deeply in Understanding the 'Why' Behind Every Decision

Good programmers implement features. Great programmers understand the business context, user needs, and systemic implications of what they're building.

This might sound obvious, but it's one of the most commonly neglected habits, especially among developers who pride themselves on their technical skills. I've worked with brilliant engineers who could implement any algorithm, optimize any query, architect any system—but who treated requirements like gospel, never questioning whether what they were asked to build was actually the right solution.

Here's a story that illustrates this perfectly. A few years ago, I was working on a fintech platform, and we received a feature request to add "pending transaction" functionality. The product manager wanted users to see transactions that were authorized but not yet settled. Straightforward enough.

A good developer would have taken that requirement and implemented it. Created a new status field in the database, added some UI components, written the business logic. Done. Ship it.

But one of our senior engineers did something different. She scheduled a meeting with the PM and asked: "Why do users need to see pending transactions? What problem are they trying to solve?"

It turned out users were complaining that their account balances seemed wrong—they'd make a purchase, but their balance wouldn't reflect it immediately. They weren't actually asking to see pending transactions; they were confused about their available balance. The real solution wasn't to show pending transactions at all—it was to display two balances: current balance and available balance, accounting for pending authorizations.

This might seem like a small distinction, but it completely changed the implementation. Instead of building a whole new UI section for pending transactions (which would have added cognitive load), we refined the existing balance display. The solution was simpler, better aligned with user needs, and took half the time to implement.
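
To make the distinction concrete, here's a minimal sketch of the balance logic we ended up with, in Python with illustrative names (our actual schema was different):

def balance_summary(current_balance_cents, pending_authorization_cents):
    """Both figures users actually need: what has settled, and what
    they can still spend once pending holds are subtracted."""
    pending_total = sum(pending_authorization_cents)
    return {
        "current_balance_cents": current_balance_cents,
        "available_balance_cents": current_balance_cents - pending_total,
    }

# A $100.00 balance with $25.00 and $12.00 pending:
balance_summary(10_000, [2_500, 1_200])
# {'current_balance_cents': 10000, 'available_balance_cents': 6300}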

This is what investing in the "why" looks like in practice.

Great developers treat every feature request, every bug report, every technical decision as an opportunity to understand the deeper context. They don't just ask "What needs to be built?" They ask:

  • What problem is this solving? Not the technical problem—the human problem. Who is affected? What pain are they experiencing?
  • What are the constraints? Is this urgent because of a regulatory deadline? Because of competitive pressure? Because a major client threatened to leave? Understanding urgency helps you make better tradeoff decisions.
  • What are the second-order effects? How will this change user behavior? How will it affect the system's complexity? What maintenance burden are we taking on?
  • Is this the right solution? Sometimes the best code is no code. Could we solve this problem through better UX? Through configuration instead of programming? Through fixing the root cause instead of treating symptoms?

I once spent three hours in a technical design review for a caching layer intended to solve our performance problems. The engineer who proposed it had done excellent work—detailed benchmarks, solid architecture, clear migration plan. But then someone asked: "Why are we having these performance problems in the first place?"

We dug deeper. Turned out a poorly optimized query was the root cause, making millions of unnecessary database calls. We'd been about to build a caching system to work around a problem that could be fixed with a two-line SQL optimization. Understanding the "why" saved us from weeks of unnecessary work.
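
The review didn't preserve the exact query, but the classic shape of "millions of unnecessary database calls" is an N+1 loop. A sketch in Python with sqlite3, assuming hypothetical orders and customers tables:

import sqlite3

def order_customers_slow(conn, order_ids):
    """The N+1 shape: one query for the orders, then one more per order."""
    placeholders = ",".join("?" * len(order_ids))
    rows = conn.execute(
        f"SELECT id, customer_id FROM orders WHERE id IN ({placeholders})",
        order_ids,
    ).fetchall()
    result = []
    for order_id, customer_id in rows:
        # One extra round trip per order: N orders cost N+1 queries.
        name = conn.execute(
            "SELECT name FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()[0]
        result.append((order_id, name))
    return result

def order_customers_fast(conn, order_ids):
    """Same data in one round trip: let the database do the join."""
    placeholders = ",".join("?" * len(order_ids))
    return conn.execute(
        f"SELECT o.id, c.name FROM orders o"
        f" JOIN customers c ON c.id = o.customer_id"
        f" WHERE o.id IN ({placeholders})",
        order_ids,
    ).fetchall()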

This habit requires courage, especially when you're early in your career. It feels risky to question requirements, to push back on product managers or senior engineers, to suggest that maybe the planned approach isn't optimal. But here's what I've learned: people respect developers who think critically about what they're building. They want collaborators who catch problems early, who contribute to product thinking, who treat software development as problem-solving rather than ticket-closing.

How to develop this habit:

Make "Why?" your default question. Before starting any significant piece of work, ensure you can articulate why it matters. If you can't, you don't understand the problem well enough yet. Schedule time with whoever requested the work—product managers, other engineers, customer support—and ask questions until the context is clear.

Study the domain you're working in. If you're building healthcare software, learn about healthcare. Read about HIPAA. Understand how hospitals operate. Talk to doctors if you can. The more you understand the domain, the better you'll be at evaluating whether technical solutions actually solve real problems. I've seen developers who treated the domain as background noise, and their code showed it—technically proficient but misaligned with how the business actually worked.

Participate in user research. Watch user testing sessions. Read support tickets. Join customer calls. There's no substitute for seeing real people struggle with your software. It fundamentally changes how you think about what you're building. After watching just one user testing session, you'll never write a cryptic error message again.

Practice systems thinking. Every change you make ripples through the system. That innocent feature addition might increase database load, complicate the deployment process, or create a new edge case that breaks existing functionality. Great developers mentally model these ripples before writing code. They think in systems, not in isolated features.

Document the why, not just the what. When you write code comments, don't explain what the code does (that should be obvious from reading it). Explain why it exists. Why this approach instead of alternatives? What constraint or requirement drove this decision? Future you—and future maintainers—will be grateful.

I'll be honest: this habit can be exhausting. It's mentally easier to just implement what you're told. But here's the thing—great developers aren't great because they chose the easy path. They're great because they took responsibility for outcomes, not just outputs. They understood that their job wasn't to write code; it was to solve problems. And you can't solve problems you don't understand.

The developers who cultivate this habit become trusted advisors. They get invited to planning meetings. They influence product direction. They become force multipliers for their teams because they catch misalignments early, before they turn into wasted sprints and disappointed users.

Understanding the "why" transforms you from a code writer into an engineer. And that transformation is everything.

Habit 3: They Treat Debugging as a Science, Not a Guessing Game

It's 11 PM. Your production system is down. Customers are angry. Your manager is asking for updates every ten minutes. The pressure is overwhelming, and your first instinct is to start changing things—restart the server, roll back the last deploy, tweak some configuration values—anything to make the problem go away.

This is where good developers and great developers diverge most dramatically.

Good developers guess. They rely on intuition, past experience, and hope. They make changes without fully understanding the problem, treating debugging like a game of whack-a-mole. Sometimes they get lucky and stumble on a solution. Often they don't, and hours vanish into frustration.

Great developers treat debugging as a rigorous scientific process. They form hypotheses, gather data, run experiments, and systematically eliminate possibilities until they isolate the root cause. They're patient when patience feels impossible. They're methodical when chaos reigns.

Let me tell you about the worst production bug I ever encountered. Our e-commerce platform started randomly dropping orders—not all orders, just some of them. Maybe 2-3% of transactions would complete on the payment side but never create an order record in our database. Revenue was bleeding. Every hour the bug remained unfixed cost the company thousands of dollars.

The pressure to "just fix it" was immense. The easy move would have been to start deploying patches based on gut feelings. Instead, our lead engineer did something counterintuitive: she made everyone step back and follow a structured debugging process.

First, reproduce the problem. Seems obvious, but many developers skip this step, especially under pressure. She set up a staging environment and hammered it with test transactions until we could reliably reproduce the order drops. This single step was crucial—it meant we could test theories without experimenting on production.

Second, gather data. What do these dropped orders have in common? We pulled logs, traced requests through every system component, analyzed timing, examined user agents, scrutinized payment gateway responses. We weren't looking for the answer yet—we were building a complete picture of the problem.

Third, form hypotheses. Based on the data, we generated a list of possible causes, ranked by likelihood: database connection timeout, race condition in order creation logic, payment gateway webhook failure, API rate limiting, network partition, corrupted state in Redis cache.

Fourth, test systematically. We tested each hypothesis one at a time, starting with the most likely. For each test, we clearly defined what result would prove or disprove the theory. No guessing. No "let's try this and see what happens." Every experiment was deliberate.

It took four hours of methodical investigation, but we found it: a race condition where concurrent payment webhooks could create a state where the payment was marked successful, but the order creation transaction was rolled back. The bug only manifested under high load with specific timing conditions—hence the intermittent nature.
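
Our actual fix was specific to that system, but the general shape—make the payment update and the order creation atomic, and let a uniqueness constraint absorb duplicate webhooks—looks roughly like this Python/sqlite3 sketch (table and column names are hypothetical):

import sqlite3

def handle_payment_webhook(conn, payment_id, order_payload):
    """Mark the payment and create the order in one transaction."""
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute(
                "UPDATE payments SET status = 'succeeded' WHERE id = ?",
                (payment_id,),
            )
            # A UNIQUE(payment_id) constraint on orders makes a concurrent
            # duplicate webhook fail here, rolling back the whole transaction.
            conn.execute(
                "INSERT INTO orders (payment_id, payload) VALUES (?, ?)",
                (payment_id, order_payload),
            )
    except sqlite3.IntegrityError:
        pass  # another webhook already created this order; nothing to do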

Here's the key insight: we could have easily spent twenty hours flailing around, making random changes, creating new bugs while trying to fix old ones. Instead, systematic debugging found the root cause in a fifth of the time. More importantly, we fixed it correctly, with confidence that it was actually resolved.

This habit—treating debugging as a disciplined practice rather than chaotic troubleshooting—is perhaps the most underestimated skill in software engineering.

How great developers debug:

They resist the urge to jump to solutions. When you see an error, your brain immediately wants to fix it. Fight this instinct. Spend time understanding the problem first. I have a personal rule: spend at least twice as much time understanding a bug as you expect to spend fixing it. This ratio has saved me countless hours of chasing symptoms instead of causes.

They use the scientific method explicitly. Write down your hypothesis. Write down what evidence would confirm or refute it. Run the experiment. Document the results. Move to the next hypothesis if needed. I literally keep a debugging journal where I log this process for complex bugs. It keeps me honest and prevents me from testing the same theory multiple times because I forgot I already tried it.

They make problems smaller. Great debuggers are masters of binary search in debugging. If a bug exists somewhere in 1,000 lines of code, they'll comment out 500 lines and see if the bug persists. Then 250 lines. Then 125. They systematically isolate the problem space until the bug has nowhere to hide.
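
This is the same idea behind git bisect. The search itself is only a few lines; here's a sketch with a caller-supplied test_passes predicate (both names are illustrative):

def first_failing(versions, test_passes):
    """Binary search for the first failing version, assuming
    versions[0] passes and versions[-1] fails."""
    lo, hi = 0, len(versions) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if test_passes(versions[mid]):
            lo = mid  # still healthy here; the bug came later
        else:
            hi = mid  # already broken here; the bug came earlier
    return versions[hi]

# About ten checks instead of a thousand:
first_failing(list(range(1000)), lambda v: v < 637)  # -> 637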

They understand their tools deeply. Debuggers, profilers, log analyzers, network inspectors, database query analyzers—great developers invest time in mastering these tools. They can set conditional breakpoints, analyze memory dumps, trace system calls, interpret flame graphs. These tools multiply their effectiveness. I've seen senior developers debug issues in minutes that stumped others for days, simply because they knew how to use a profiler effectively.

They build debugging into their code. Great developers write code that's easy to debug. They add meaningful log statements at key decision points. They build observability into their systems from the start—metrics, traces, structured logs. They know that most of a bug's lifetime is spent figuring out what's actually happening; making that easier is time well invested.

They reproduce, then fix, then verify. Never fix a bug you can't reproduce—you're just guessing. Once you can reproduce it, fix it. Then verify the fix actually works under the conditions where the bug originally occurred. Too many developers skip this verification step and end up shipping fixes that don't actually fix anything.

They dig for root causes. When you find a bug, ask "Why did this happen?" five times. Each answer leads you deeper. "The server crashed." Why? "Out of memory." Why? "Memory leak." Why? "Objects not being garbage collected." Why? "Event listeners not removed." Why? "No cleanup in component unmount." Now you've found the root cause, not just the symptom.

I've worked with developers who seemed to have an almost supernatural ability to find bugs. Early in my career, I thought they were just smarter or more experienced. Now I know the truth: they had simply internalized a systematic approach. They trusted the process, not their intuition.

This habit has a profound psychological benefit too. Debugging stops being stressful and starts being intellectually engaging. Instead of feeling helpless when bugs occur, you feel confident—you have a process, a methodology, a way forward. The bug might be complex, but you know how to approach complexity.

There's a reason the best developers don't panic during incidents. They've trained themselves to treat every bug as a puzzle with a solution, not a crisis. They know that systematic investigation always wins in the end. That confidence is built through this habit.

And here's something beautiful: when you approach debugging scientifically, you don't just fix bugs faster—you learn more from each one. Every bug becomes a lesson about the system, about edge cases, about your own mental models. Debuggers who just guess and check learn nothing. Scientific debuggers accumulate deep system knowledge with every issue they resolve.

The next time you encounter a bug, resist the temptation to immediately start changing code. Take a breath. Open a notebook. Write down what you know. Form a hypothesis. Test it. Let the scientific method be your guide.

You'll be amazed how much more effective you become.

Habit 4: They Write for Humans First, Machines Second

Here's an uncomfortable truth: most of your career as a developer won't be spent writing new code. It'll be spent reading, understanding, and modifying existing code—code written by other people, or by past versions of yourself who might as well be other people.

Yet when I review code from good developers, I consistently see the same mistake: they optimize for cleverness or brevity instead of clarity. They write code that impresses other developers with its sophistication, but which requires intense concentration to understand. They treat the compiler or interpreter as their primary audience.

Great developers flip this priority. They write code for humans first, machines second.

This might sound like a platitude, but it represents a fundamental shift in mindset that affects every line of code you write. Let me show you what I mean.

Here's a code snippet I found in a production codebase:

def p(x): return sum(1 for i in range(2, int(x**0.5)+1) if x%i==0)==0 and x>1

Can you tell what this function does? If you're experienced with algorithms, you might recognize it as a prime number checker. It works perfectly. The machine executes it just fine. But for a human reading this code? It's a puzzle that needs solving.

Now here's how a great developer would write the same function:

def is_prime(number):
    """
    Returns True if the number is prime, False otherwise.

    A prime number is only divisible by 1 and itself.
    We only need to check divisibility up to the square root of the number
    because if n = a*b, one of those factors must be <= sqrt(n).
    """
    if number <= 1:
        return False

    if number == 2:
        return True

    # Check if number is divisible by any integer from 2 to sqrt(number)
    for potential_divisor in range(2, int(number ** 0.5) + 1):
        if number % potential_divisor == 0:
            return False

    return True

The second version is longer. It's more verbose. The machine doesn't care—both run in O(√n) time. But the human difference is night and day. The second version is self-documenting. A junior developer can understand it. You can understand it six months from now when you've forgotten you wrote it. The intent is crystal clear.

This habit—writing for human comprehension—manifests in many ways:

Naming that reveals intent. Variable names like temp, data, obj, result tell you nothing. Great developers choose names that encode meaning: unprocessed_orders, customer_email_address, successfully_authenticated_user. Yes, these names are longer. That's fine. The extra few characters are worth it. You type code once but read it dozens of times.

I remember reviewing code where someone had named a variable x2. I had to trace through 50 lines of logic to figure out it represented "XML to JSON converter". They'd saved themselves typing 18 characters and cost every future reader minutes of cognitive load. That's a terrible trade.

Functions and methods that do one thing. When a function is trying to do multiple things, it becomes hard to name, hard to test, and hard to understand. Great developers extract functionality into well-named functions even when it feels like "overkill." They understand that a sequence of well-named function calls often communicates intent better than the raw implementation.

Strategic comments. Here's a nuance many developers miss: great developers don't comment what the code does—they comment why it does it. If your code needs comments to explain what it does, the code itself isn't clear enough. But comments explaining why certain decisions were made? Those are gold.

"Why" comments might explain:

  • "We're using algorithm X instead of the obvious approach Y because Y has O(n²) complexity with our data patterns"
  • "This weird timeout value came from extensive testing with the external API—smaller values cause intermittent failures"
  • "We're intentionally not handling edge case X because it's impossible given the database constraints enforced by migration Y"

These comments preserve context that would otherwise be lost. They prevent future developers from "optimizing" your carefully chosen approach or removing code they think is unnecessary.

Code structure that mirrors mental models. Great developers organize code the way humans naturally think about the domain. If you're building an e-commerce system, your code structure should reflect concepts like orders, customers, payments, and inventory—not generic abstractions like managers, handlers, and processors.

I once worked on a codebase that had a DataManager, DataHandler, DataProcessor, and DataController. None of these names conveyed what they actually did. When we refactored to OrderValidator, PaymentProcessor, and InventoryTracker, suddenly the codebase became navigable. New team members could find things. The code structure matched their mental model of the business.

Consistent patterns. Humans are pattern-matching machines. When your codebase follows consistent patterns, developers can transfer knowledge from one part to another. When every module does things differently, every context switch requires re-learning. Great developers value consistency even when they might personally prefer a different approach.

Appropriate abstraction levels. This is subtle but crucial. Great developers are careful about mixing abstraction levels in the same function. If you're writing high-level business logic, you shouldn't suddenly drop down to low-level string manipulation details. Extract that into a well-named helper function. Keep each layer of code at a consistent conceptual level.

Here's an example of mixed abstraction levels:

function processOrder(order) {
  // High-level business logic
  validateOrder(order);

  // Suddenly low-level string manipulation
  order.email = order.email.trim().toLowerCase().replace(/\s+/g, '');

  // Back to high-level
  chargeCustomer(order);
  sendConfirmation(order);
}

Better:

function processOrder(order) {
  validateOrder(order);
  const normalizedOrder = normalizeOrderData(order);
  chargeCustomer(normalizedOrder);
  sendConfirmation(normalizedOrder);
}

Now the function reads like a sequence of business steps, not a mix of business logic and implementation details.

This habit requires discipline because writing for machines is often easier than writing for humans. The machine is forgiving—it doesn't care if your variable name is x or customer_lifetime_value_in_cents. But humans care deeply.

I've seen talented developers handicap themselves by neglecting this habit. They write impressively compact code, demonstrating their mastery of language features. But then they spend hours in code reviews explaining what their code does because nobody else can figure it out. They've optimized for the wrong thing.

There's a famous line from Martin Fowler's Refactoring: "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." The wisdom in this statement becomes more apparent with every year of experience.

When you cultivate the habit of writing for humans first, something remarkable happens: your code becomes maintainable. Teams move faster because understanding is easy. Onboarding new developers takes days instead of weeks. Bugs decrease because the code's intent is clear. Technical debt accumulates more slowly because future modifications don't require archaeological expeditions through cryptic logic.

I can always identify great developers in code reviews by one characteristic: I rarely have to ask "What does this code do?" The code itself tells me. I might ask about trade-offs, about performance implications, about alternative approaches—but I never struggle with basic comprehension.

The old joke says to write code as if the person maintaining it is a violence-prone psychopath who knows where you live. The reality is less dramatic but just as motivating: the person maintaining your code will be you in six months, and you'll thank yourself for the clarity.

Habit 5: They Embrace Constraints as Creative Catalysts

When I was a junior developer, I viewed constraints as problems to be overcome or worked around. Limited time? Frustrating. Legacy system compatibility? Annoying. Memory restrictions? Limiting. I saw my job as defeating these constraints to implement the "proper" solution.

Great developers think about constraints completely differently. They embrace them. They lean into them. They recognize that constraints don't limit creativity—they focus it, channel it, and often produce better solutions than unlimited resources would allow.

This is one of the most counterintuitive habits that separates good from great, and it takes years to internalize.

Let me share a story that crystallized this for me. I was working at a startup building a mobile app for emerging markets. Our target users were on low-end Android devices with spotty 2G connections and limited data plans. Our initial instinct was to treat these constraints as handicaps—we'd build a "lite" version of our real product, stripped down and compromised.

Then our tech lead said something that changed my perspective: "These aren't limitations. These are our design parameters. They're telling us what excellence looks like in this context."

We completely shifted our approach. Instead of asking "How do we cram our features into this constrained environment?", we asked "What's the best possible experience we can create given these parameters?"

We designed offline-first from the ground up. We compressed images aggressively and used SVGs where possible. We implemented delta updates so the app could update itself over flaky connections. We cached intelligently and prefetched predictively. We made every byte count.

The result? An app that felt snappy and responsive even on terrible connections. An experience that was actually better than many apps designed for high-end markets, because we'd been forced to think deeply about performance and efficiency. Our Western competitors who designed for high-bandwidth, powerful devices couldn't compete in that market. Their apps were bloated, slow, and data-hungry.

The constraints didn't handicap us. They made us better.

This principle extends far beyond technical constraints. Consider time constraints. Good developers see tight deadlines as stress. Great developers see them as clarity. When you have unlimited time, you can explore every possible solution, refactor endlessly, polish indefinitely. Sounds great, right? But unlimited time often produces worse results because nothing forces you to prioritize, to identify what really matters, to make hard trade-off decisions.

I've watched projects with loose deadlines drift aimlessly for months, adding feature after feature, refactoring the refactorings, never quite shipping. Then I've seen teams given two weeks to ship an MVP who produced focused, well-scoped products that actually solved user problems. The time constraint forced clarity about what was essential.

Or consider team constraints. Maybe you're the only backend developer on a small team. Good developers see this as overwhelming—too much responsibility, too much to maintain. Great developers see it as an opportunity to shape the entire backend architecture, to make consistent decisions, to build deep expertise. The constraint of being alone forces you to write extremely maintainable code because you'll be the one maintaining it.

Or legacy system constraints. You're integrating with a 15-year-old SOAP API with terrible documentation. Good developers complain about it. Great developers recognize it as an opportunity to build a clean abstraction layer that isolates the rest of the codebase from that complexity. The constraint of the legacy system forces you to think carefully about boundaries and interfaces.

Here's how to cultivate this habit:

Reframe the language. Stop saying "We can't do X because of constraint Y." Start saying "Given constraint Y, what's the best solution we can design?" The linguistic shift creates a mental shift. You move from problem-focused to solution-focused thinking.

Study historical examples. Twitter's original 140-character limit wasn't a bug—it was a constraint that defined the platform's character. Game developers creating for the Super Nintendo worked with 128 kilobytes of main RAM and produced masterpieces. They didn't have unlimited resources, but the constraints forced incredible creativity and efficiency. The Apollo Guidance Computer had less computing power than a modern calculator, but it got humans to the moon. Study how constraints drove innovation in these cases.

Impose artificial constraints. This sounds crazy, but it works. If you're building a web app, challenge yourself: what if it had to work without JavaScript? What if the bundle size had to be under 50KB? What if it had to run on a $30 Android phone? These artificial constraints force you to question assumptions and explore different approaches. You might not ship with these constraints, but the exercise makes you a better developer.

Embrace the "worse is better" philosophy. Sometimes a simpler solution that doesn't handle every edge case is better than a complex solution that handles everything. Constraints force you to make this trade-off explicitly. The UNIX philosophy—small programs that do one thing well—emerged from extreme memory and storage constraints. Those constraints produced better design principles than unlimited resources would have.

Look for the constraint's gift. Every constraint is trying to tell you something. Memory constraints tell you to think about efficiency. Time constraints tell you to focus on impact. Legacy constraints tell you to design clean interfaces. Budget constraints tell you to use proven technologies instead of chasing novelty. What is the constraint teaching you?

I've seen developers waste enormous energy fighting constraints instead of working with them. They'll spend weeks architecting a way to bypass a database query limitation instead of restructuring their data model to work within it. They'll add layers of complexity to work around a framework's design instead of embracing the framework's philosophy.

Great developers pick their battles. Sometimes constraints truly are wrong and should be challenged. But more often, constraints represent real trade-offs in a complex system, and working within them produces better results than fighting them.

This habit also builds character. Embracing constraints requires humility—accepting that you can't have everything, that trade-offs are real, that perfection isn't achievable. It requires creativity—finding elegant solutions within boundaries. It requires focus—distinguishing between what's essential and what's merely nice to have.

The modern development world often feels like it's about having more: more tools, more frameworks, more libraries, more features, more scalability. But some of the most impactful software ever created was built with severe constraints. Redis started as a solution to a specific problem with strict performance requirements. Unix was designed for machines with tiny memory footprints. The web itself was designed to work over unreliable networks with minimal assumptions about client capabilities.

When you embrace constraints, you stop fighting reality and start working with it. You become a pragmatic problem-solver instead of an idealistic perfectionist. You ship solutions instead of endlessly pursuing optimal ones.

And here's the beautiful paradox: by accepting limitations, you often transcend them. The discipline and creativity that constraints force upon you produce solutions that work better, not worse. The app optimized for 2G connections also screams on 5G. The code designed for maintainability by a solo developer remains maintainable as the team grows. The feature set focused by time constraints turns out to be exactly what users needed.

Constraints aren't your enemy. They're your teacher, your focus, your catalyst for creative solutions. Learn to love them.

Habit 6: They Cultivate Deep Focus in an Age of Distraction

The modern developer's environment is a carefully engineered distraction machine. Slack pings, email notifications, endless meetings, "quick questions," and the siren song of social media and news feeds—all conspiring to fragment your attention into a thousand tiny pieces.

Good developers work in these conditions. They context-switch constantly, juggling multiple threads, believing that responsiveness is a virtue. They wear their busyness as a badge of honor.

Great developers build fortresses of focus. They understand that their most valuable asset isn't their knowledge of frameworks or algorithms—it's their ability to concentrate deeply on complex problems for extended periods. They treat uninterrupted time as a non-negotiable resource, more precious than any cloud computing credit.

This isn't just a preference; it's a necessity grounded in the nature of our work. Programming isn't a mechanical task of typing lines of code. It's an act of construction and problem-solving that happens largely in your mind. You build intricate mental models of systems, data flows, and logic. These models are fragile. A single interruption can shatter hours of mental assembly, forcing you to rebuild from scratch.

I learned this the hard way early in my career. I prided myself on being "always on." I had eight different communication apps open, responded to messages within seconds, and hopped between coding, code reviews, and support tickets all day. I was exhausted by 3 PM, yet my output was mediocre. I was putting in the time but not producing my best work.

Everything changed when I paired with a senior engineer named David for a week. David worked in mysterious two-hour blocks. During these blocks, he'd turn off all notifications, close every application except his IDE and terminal, and put on headphones. At first, I thought he was being antisocial. But then I saw his output. In one two-hour focus block, he'd often complete what would take me an entire distracted day. The quality was superior—fewer bugs, cleaner designs, more thoughtful edge-case handling. He wasn't just faster; he was operating at a different level of quality.

David taught me that focus is a skill to be developed, not a trait you're born with. And it's perhaps the highest-leverage skill you can cultivate.

Here's how great developers protect and cultivate deep focus:

They schedule focus time religiously. They don't leave it to chance. They block out multi-hour chunks in their calendar and treat these appointments with themselves as seriously as meetings with the CEO. During this time, they are effectively offline. Some companies even formalize this with policies like "no-meeting Wednesdays" or "focus mornings," but great developers implement these guardrails for themselves regardless of company policy.

They master their tools, but don't fetishize them. Great developers use tools like "Do Not Disturb" modes, website blockers, and full-screen IDEs not as productivity hacks, but as deliberate barriers against interruption. The goal isn't to find the perfect app; it's to create an environment where deep work can occur. They understand that the tool is secondary to the intent.

They practice single-tasking. Multitasking is a myth, especially in programming. What we call multitasking is actually rapid context-switching, and each switch carries a cognitive cost. Great developers train themselves to work on one thing until it reaches a natural stopping point. They might keep a "distraction list" nearby—a notepad to jot down random thoughts or to-dos that pop up—so they can acknowledge the thought without derailing their current task.

They defend their focus courageously. This is the hardest part. It requires saying "no" to well-meaning colleagues, setting boundaries with managers, and resisting the cultural pressure to be constantly available. Great developers learn to communicate these boundaries clearly and politely: "I'm in the middle of a deep work session right now, but I can help you at 3 PM." Most reasonable people will respect this if it's communicated consistently.

They recognize the cost of context switching. Every interruption doesn't just cost the time of the interruption itself; it costs the time to re-immerse yourself in the original problem. A 30-second Slack question can easily derail 15 minutes of productive flow. Great developers make this cost visible to their teams, helping create a culture that respects deep work.

They structure their day around energy levels. Focus is a finite resource. Great developers know when they are at their cognitive best—for many, it's the morning—and guard that time fiercely for their most demanding work. Meetings, administrative tasks, and code reviews are relegated to lower-energy periods. They don't squander their peak mental hours on low-value, shallow work.

They embrace boredom. This sounds strange, but it's critical. In moments of frustration or mental block, the immediate impulse is to reach for your phone—to seek a dopamine hit from Twitter or email. Great developers resist this. They stay with the problem, staring out the window if necessary, allowing their subconscious to work on the problem. Some of the most elegant solutions emerge not in frantic typing, but in quiet contemplation.

The benefits of this habit extend far beyond increased productivity. Deep focus is where mastery lives. It's in these uninterrupted stretches that you encounter the truly hard problems, the ones that force you to grow. You develop the patience to debug systematically, the clarity to see elegant architectures, and the persistence to push through complexity that would overwhelm a distracted mind.

Furthermore, focus begets more focus. Like a muscle, your ability to concentrate strengthens with practice. What starts as a struggle to focus for 30 minutes can, over time, become a reliable two-hour deep work session.

In a world that values shallow responsiveness, choosing deep focus feels countercultural. But it's precisely this choice that separates competent developers from exceptional ones. The developers who can enter a state of flow regularly are the ones who ship complex features, solve the hardest bugs, and produce work that feels almost magical in its quality.

Your most valuable code will be written in focus. Protect that state with your life.

Habit 7: They Practice Strategic Laziness

If "laziness" sounds like a vice rather than a virtue, you're thinking about it wrong. Good developers are often hardworking—they'll pour hours into manual testing, repetitive configuration, and brute-force solutions. They equate effort with value.

Great developers practice strategic laziness. They will happily spend an hour automating a task that takes five minutes to do manually, not because it saves time immediately, but because they hate repetition so much they're willing to invest upfront to eliminate it forever. They are constantly looking for the lever, the shortcut, the abstraction that maximizes output for minimum ongoing effort.

This principle comes from Larry Wall, the creator of Perl, who named laziness one of the three great virtues of a programmer (the others being impatience and hubris). Strategic laziness isn't about avoiding work; it's about being profoundly efficient by automating the boring stuff so you can focus your energy on the hard, interesting problems.

I saw a perfect example of this with a DevOps engineer I worked with. Our deployment process involved a 15-step checklist that took about 30 minutes and required intense concentration. A mistake at any step could take down production. Most of us treated it as a necessary, if tedious, part of the job.

She, however, found it intolerable. Over two days, she built a set of scripts that automated the entire process. The initial investment was significant—probably 16 hours of work. But after that, deployments took 2 minutes and were error-free. In a month, she had recouped the time investment for the entire team. In a year, she had saved hundreds of hours and eliminated countless potential outages. That's strategic laziness.
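
Her scripts were specific to our stack, but the core move—turning a checklist into code that halts at the first failure—can start out as simply as this Python sketch (the step commands are placeholders):

import subprocess

# Placeholder steps: the point is the shape, not these exact commands.
DEPLOY_STEPS = [
    ["git", "pull", "--ff-only"],
    ["make", "test"],
    ["make", "build"],
    ["make", "release"],
]

def deploy():
    for step in DEPLOY_STEPS:
        print("running:", " ".join(step))
        # check=True aborts at the first failing step—something a tired
        # human working through a checklist can't be trusted to do.
        subprocess.run(step, check=True)

if __name__ == "__main__":
    deploy()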

This habit manifests in several key ways:

They automate relentlessly. If they have to do something more than twice, they write a script. Environment setup, database migrations, build processes, testing routines—all are prime candidates for automation. They don't just think about the time saved; they think about the cognitive load eliminated and the errors prevented.

They build tools and abstractions. Great developers don't just solve the immediate problem; they solve the class of problems. When they notice a repetitive pattern in their code, they don't copy-paste with minor modifications—they extract a function, create a library, or build a framework. They'd rather spend time designing a clean API than writing the same boilerplate for the tenth time.

They are masters of delegation—to the computer. They constantly ask: "What part of this can the computer handle?" Linting, formatting, dependency updates, performance monitoring—tasks that good developers do manually are delegated to automated systems by great developers. This frees their mental RAM for tasks that genuinely require human intelligence.

They optimize for long-term simplicity, not short-term speed. The strategically lazy developer knows that the easiest code to write is often the hardest to maintain. So they invest a little more time upfront to create a simple, clear design that will be easy to modify later. They're lazy about future work, so they do the hard thinking now.

They leverage existing solutions. The strategically lazy developer doesn't build a custom authentication system when Auth0 exists. They don't write a custom logging framework when structured logging libraries are available. They have a healthy bias for using battle-tested solutions rather than reinventing the wheel. Their goal is to solve the business problem, not to write every line of code themselves.

How to cultivate strategic laziness:

Develop an allergy to repetition. Pay attention to tasks you find yourself doing repeatedly. Does it feel tedious? That's your signal to automate. Start small—a shell script to set up your project, a macro to generate boilerplate code. The satisfaction of eliminating a recurring annoyance is addictive and will fuel further automation.

Ask the lazy question. Before starting any task, ask: "Is there an easier way to do this?" "Will I have to do this again?" "Can I get the computer to do the boring parts?" This simple metacognition separates the habitually lazy from the strategically lazy.

Invest in your toolchain. Time spent learning your IDE's shortcuts, mastering your shell, or configuring your linters isn't wasted—it's compound interest. A few hours learning Vim motions or VS Code multi-cursor editing can save you days of typing over a year.

Build, then leverage. When you build an automation or abstraction, think about how to make it reusable. A script that's only useful for one project is good; a tool that can be used across multiple projects is great. Write documentation for your tools—future you will thank you.

The beauty of strategic laziness is that it benefits everyone, not just you. The deployment script you write helps the whole team. The well-designed abstraction makes the codebase easier for everyone to work with. The automated test suite prevents bugs for all future developers.

This habit transforms you from a code monkey into a force multiplier. You stop being just a producer of code and become a builder of systems that produce value with less effort. You become the developer who, instead of just working hard, makes the entire team's work easier and more effective.

And in the end, that's the kind of laziness worth cultivating.

Habit 8: They Maintain a Feedback Loop with the Production Environment

Good developers write code, run tests, and push to production. They trust that if the tests pass and the build is green, their job is done. They view production as a distant, somewhat scary place that operations teams worry about.

Great developers have an intimate, ongoing relationship with production. They don't just ship code and forget it; they watch it walk out the door and follow it into the world. They treat the production environment not as a final destination, but as the ultimate source of truth about their code's behavior, performance, and value.

This habit is the difference between theoretical correctness and practical reality. Your code can pass every test, satisfy every requirement, and look beautiful in review, but none of that matters if it fails in production. Great developers understand that production is where their assumptions meet reality, and reality always wins.

I learned this lesson from a catastrophic performance regression early in my career. We had built a new feature with a complex database query. It was elegant, used all the right JOINs, and passed all our unit and integration tests. Our test database had a few hundred rows of synthetic data, and the query was instant.

We shipped it on a Friday afternoon. By Saturday morning, the database was on fire. In production, with millions of rows of real-world data, that "elegant" query was doing full table scans. It timed out, locked tables, and brought the entire application to its knees. We spent our weekend in panic mode, rolling back and writing a fix.

A great developer on our team, Maria, took this personally. Not because she wrote the bad query (she hadn't), but because she saw it as a systemic failure. "We can't just test if our code works," she said. "We have to test if it works under real conditions."

From that day on, she became the guardian of our production feedback loops.

Here's what maintaining a tight production feedback loop looks like in practice:

They instrument everything. Great developers don't just log errors; they measure everything that matters. Response times, throughput, error rates, business metrics, cache hit ratios, database query performance. They bake observability—metrics, logs, and traces—into their code from the very beginning. They know that you can't fix what you can't see.
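
Much of this is platform work, but the habit starts small. A minimal sketch of the idea in Python—a decorator that emits one structured log line per call with its duration and outcome (field names are arbitrary):

import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("metrics")

def timed(fn):
    """Emit a structured log line for every call: name, duration, outcome."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        outcome = "ok"
        try:
            return fn(*args, **kwargs)
        except Exception:
            outcome = "error"
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info(json.dumps({
                "fn": fn.__name__,
                "duration_ms": round(elapsed_ms, 2),
                "outcome": outcome,
            }))
    return wrapper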

They watch deployments like hawks. When their code ships, they don't just move on to the next ticket. They watch the deployment metrics. They monitor error rates. They check performance dashboards. They might even watch real-user sessions for a few minutes to see how the feature is actually being used. This immediate feedback allows them to catch regressions that slip past tests.

They practice "you build it, you run it." This Amazon-originated philosophy means developers are responsible for their code in production. They are on call for their features. They get paged when things break. This might sound punishing, but it's the most powerful feedback loop imaginable. Nothing motivates you to write robust, fault-tolerant code like knowing your phone will ring at 3 AM if you don't.

They use feature flags religiously. Great developers don't deploy big bang releases. They wrap new features in flags and roll them out gradually—to internal users first, then to 1% of customers, then to 10%, and so on. This allows them to get real-world feedback with minimal blast radius. If something goes wrong, they can turn the feature off with a single click instead of a full rollback.
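
The rollout mechanics are tiny once you see them. A common approach, sketched in Python: hash the user ID into a stable bucket from 0 to 99, so each user stays on the same side of the flag as the percentage ramps up (names are illustrative):

import hashlib

def flag_enabled(flag_name: str, user_id: str, rollout_percent: int) -> bool:
    """Stable per-user bucketing: raising rollout_percent from 1 to 10
    to 100 only ever adds users, never flips anyone back and forth."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent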

They analyze production data to make decisions. Should we optimize this query? A good developer might guess. A great developer will look at production metrics to see how often it's called, what its average and p95 latencies are, and what impact it's having on user experience. They let data from production guide their prioritization.

They embrace and learn from incidents. When production breaks, great developers don't play the blame game. They lead and participate in blameless post-mortems. They dig deep to find the root cause, not just the symptom. More importantly, they focus on systemic fixes that prevent the entire class of problem from recurring, rather than just patching the immediate issue.

How to develop this habit:

Make your application observable from day one. Before you write business logic, set up structured logging, metrics collection, and distributed tracing. It's much harder to add this later. Start simple—even just logging key business events and performance boundaries is a huge step forward.

Create a personal dashboard. Build a dashboard that shows the health of the features you own. Make it the first thing you look at in the morning and the last thing you check before a deployment. This habit builds a sense of ownership and connection to your code's real-world behavior.

Volunteer for on-call rotation. If your team has one, join it. If it doesn't, propose it. The experience of being woken up by a pager for code you wrote is transformative. It will change how you think about error handling, logging, and system design forever.

Practice "production debugging." The next time there's a production issue, even if it's not in your code, ask if you can shadow the person debugging it. Watch how they use logs, metrics, and traces to pinpoint the problem. This is a skill that can only be learned by doing.

Ship small, ship often. The more frequently you deploy, the smaller each change is, and the easier it is to correlate changes in the system with changes in its behavior. Frequent deployments reduce the fear of production and turn it into a familiar, manageable place.

Maintaining this feedback loop does more than just prevent bugs—it closes the circle of learning. You write code based on assumptions, and production tells you which of those assumptions were wrong. Maybe users are using your feature in a way you never anticipated. Maybe that "edge case" is actually quite common. Maybe the performance characteristic you assumed is completely different under real load.

This continuous learning is what turns a good coder into a great engineer. You stop designing systems in a vacuum and start building them with a deep, intuitive understanding of how they will actually behave in the wild.

Production is the most demanding and honest code reviewer you will ever have. Listen to it.

Habit 9: They Prioritize Learning Deliberately, Not Accidentally

The technology landscape is a raging river of change. New frameworks, languages, tools, and paradigms emerge, gain fervent adoption, and often fade into obscurity, all within a few years. A good developer swims frantically in this river, trying to keep their head above water. They learn reactively—picking up a new JavaScript framework because their job requires it, skimming a blog post that pops up on Hacker News, watching a tutorial when they're stuck on a specific problem. Their learning is ad-hoc, driven by immediate necessity and the loudest voices in the ecosystem.

A great developer doesn't just swim in the river; they build a boat and chart a course. They understand that in a field where specific technologies have a half-life of mere years, the only sustainable advantage is the ability to learn deeply and efficiently. They don't learn reactively; they learn deliberately. Their learning is a systematic, ongoing investment, guided by a clear understanding of first principles and their long-term goals, not by the whims of tech trends.

This is arguably the most important habit of all, because it's the meta-habit that enables all the others. It's the engine of growth.

I witnessed the power of this habit in two developers who joined my team at the same time, both with similar backgrounds and talent. Let's call them Alex and Ben.

Alex was a classic reactive learner. He was bright and capable. When the team decided to adopt a new state management library, he dove in. He learned just enough to get his tasks done. He Googled specific error messages, copied patterns from existing code, and became functionally proficient. His knowledge was a mile wide and an inch deep—a collection of solutions to specific problems without a unifying mental model.

Ben took a different approach. When faced with the same new library, he didn't just read the "Getting Started" guide. He spent a weekend building a throwaway project with it. Then, he read the official documentation cover-to-cover. He watched a talk by the creator to understand the philosophy behind the library—what problems it was truly designed to solve, and what trade-offs it made. He didn't just learn how to use it; he learned why it was built that way, and when it was the right or wrong tool for the job.

Within six months, the difference was staggering. Alex could complete tasks using the library, but he often wrote code that fought against its core principles, leading to subtle bugs and performance issues. When he encountered a novel problem, he was often stuck.

Ben, on the other hand, had become the team's go-to expert. He could anticipate problems before they occurred. He designed elegant solutions that leveraged the library's strengths. He could explain its concepts to juniors in a way that made them stick. He wasn't just a user of the tool; he was a master of it.

Alex had learned accidentally. Ben had learned deliberately.

Here’s how great developers structure their deliberate learning:

They Learn Fundamentals, Not Just Flavors. The great developer knows that while JavaScript frameworks come and go (Remember jQuery? AngularJS? Backbone.js?), the underlying fundamentals of the web—the DOM, the event loop, HTTP, browser rendering—endure. They invest their time in understanding computer science fundamentals: data structures, algorithms, networking, operating systems, and design patterns. These are the timeless principles that allow them to evaluate and learn any new "flavor" of technology quickly and deeply. Learning React is easy when you already understand the principles of declarative UI, the virtual DOM concept, and one-way data flow. You're not memorizing an API; you're understanding a manifestation of deeper ideas.

They Maintain a "Learning Backlog." Just as they have a backlog of features to build, they maintain a personal backlog of concepts to learn, technologies to explore, and books to read. This isn't a vague "I should learn Go someday." It's a concrete list: "Read 'Designing Data-Intensive Applications,'" "Build a simple Rust CLI tool to understand memory safety," "Complete the 'Networking for Developers' course on Coursera." This transforms learning from an abstract intention into a manageable, actionable project.

They Allocate "Learning Time" and Protect It Ferociously. They don't leave learning to the scraps of time left over after an exhausting day of meetings and coding. They schedule it. Many great developers I know block out one afternoon per week, or a few hours every morning before work, for deliberate learning. This time is sacred. It's not for checking emails or putting out fires. It's for deep, uninterrupted study and practice.

They Learn by Doing, Not Just Consuming. Passive consumption—reading, watching videos—is only the first step. Great developers internalize knowledge by applying it. They don't just read about a new database; they install it, import a dataset, and run queries. They don't just watch a tutorial on a new architecture; they build a toy project that implements it. This practice builds strong, durable neural pathways that theory alone cannot. They understand that true mastery lives in the fingertips as much as in the brain.

They Go to the Source. When a new tool emerges, the reactive learner reads a "10-minute introduction" blog post. The deliberate learner goes straight to the primary source: the official documentation, the original research paper (if one exists), or a talk by the creator. They understand that secondary sources are often simplified, opinionated, or outdated. The truth, in all its nuanced complexity, is usually found at the source. Reading the React documentation or Dan Abramov's blog posts is a different league of learning than reading a list of "React tips and tricks" on a random blog.

They Teach What They Learn. The deliberate learner knows that the ultimate test of understanding is the ability to explain a concept to someone else. They write blog posts, give brown bag lunches to their team, contribute to documentation, or simply explain what they've learned to a colleague. The act of organizing their thoughts for teaching forces them to confront gaps in their own understanding and solidify the knowledge. It's the final step in the learning cycle.

They Curate Their Inputs Wisely. The digital world is a firehose of low-quality, repetitive, and often incorrect information. Great developers are ruthless curators of their information diet. They don't try to read everything. They identify a handful of trusted, high-signal sources—specific blogs, journals, podcasts, or people—and ignore the rest. They favor depth over breadth, quality over quantity.

How to cultivate this habit:

Conduct a quarterly "skills audit." Every three months, take an honest inventory of your skills. What's getting stronger? What's becoming obsolete? What emerging trend do you need to understand? Based on this audit, update your learning backlog. This transforms your career development from a passive process into an active one you control.

Follow the "20% rule." Dedicate a fixed percentage of your time—even if it's just 5% to start—to learning things that aren't immediately relevant to your current tasks. This is how you avoid technological obsolescence. It's how you serendipitously discover better ways of working and new opportunities.

Build a "personal syllabus." If you wanted to become an expert in distributed systems, what would you need to learn? In what order? A deliberate learner creates a syllabus for themselves, just like a university course. They might start with a textbook, then move to seminal papers, then build a project. This structured approach is infinitely more effective than random exploration.

Find a learning cohort. Learning alone is hard. Find one or two colleagues who share your growth mindset. Start a book club, a study group, or a "tech deep dive" session. The social commitment will keep you accountable, and the discussions will deepen your understanding.

The payoff for this habit is immeasurable. It's the difference between a developer whose value peaks five years into their career and one who becomes more valuable with each passing year. It's the difference between being at the mercy of the job market and being the one that companies fight over.

Deliberate learning is the ultimate career capital. In a world of constant change, the ability to learn how to learn, and to do it with purpose and strategy, isn't just a nice-to-have. It's the single greatest predictor of long-term success. It is the quiet, persistent engine that transforms a good programmer into a great one, and a great one into a true master of the craft.

Habit 10: They Build and Nurture Their Engineering Judgment

You can master every technical skill. You can write pristine code, debug with scientific precision, and architect systems of elegant simplicity. You can have an encyclopedic knowledge of algorithms and an intimate relationship with production. But without the final, most elusive habit, you will never cross the chasm from being a great technician to being a truly great engineer.

That final habit is the cultivation of engineering judgment.

Engineering judgment is the silent, invisible partner to every technical decision you make. It’s the internal compass that guides you when the map—the requirements, the documentation, the best practices—runs out. It’s the accumulated wisdom that tells you when to apply a rule, and, more importantly, when to break it. It’s what separates a technically correct solution from a genuinely wise one.

A good developer, when faced with a problem, asks: "What is the technically optimal solution?" They will find the most efficient algorithm, the most scalable architecture, the most pristine code structure. They are in pursuit of technical perfection.

A great developer asks a more complex set of questions: "What is the right solution for this team, for this business context, for this moment in time?" They weigh technical ideals against a messy reality of deadlines, team skills, business goals, and long-term maintenance. They understand that the best technical solution can be the worst engineering decision.

I learned this not from a success, but from a failure that still haunts me. Early in my career, I was tasked with building a new reporting feature. The existing system was a tangled mess of SQL queries embedded in PHP. It was slow, unmaintainable, and a nightmare to modify.

I saw my chance to shine. I designed a beautiful, event-sourced architecture with a CQRS pattern. It was technically brilliant. It would be infinitely scalable, provide perfect audit trails, and allow for complex historical queries. It was the kind of system you read about in software architecture books. I was immensely proud of it.

It was also a catastrophic failure.

The project took three times longer than estimated. The complexity was so high that only I could understand the codebase. When I eventually left the company, the team struggled for months to maintain it, eventually rewriting the entire feature in a much simpler, cruder way. My "technically optimal" solution was an engineering disaster. It was the wrong solution for the team's skill level, the wrong solution for the business's need for speed, and the wrong solution for the long-term health of the codebase.

I had technical skill, but I had failed the test of engineering judgment.

Engineering judgment is the synthesis of all the other habits into a form of professional wisdom. Here’s how it manifests:

They Understand the Spectrum of "Good Enough." Great developers know that not every piece of the system needs to be a masterpiece. The prototype for a one-off marketing campaign does not need the same level of robustness as the core authentication service. The internal admin tool can tolerate more technical debt than the customer-facing API. They make conscious, deliberate trade-offs. They ask: "What is the minimum level of quality required for this to successfully solve the problem without creating unacceptable future risk?" This isn't laziness; it's strategic allocation of effort.

They See Around Corners. A developer with strong judgment can anticipate the second- and third-order consequences of a decision. They don't just see the immediate feature implementation; they see how it will constrain future changes, what new categories of bugs it might introduce, and how it will affect the system's conceptual integrity. When they choose a library, they don't just evaluate its features; they evaluate its maintenance status, its upgrade path, its community health, and its architectural philosophy. They are playing a long game that others don't even see.

They Balance Idealism with Pragmatism. They hold strong opinions about code quality, but they hold them loosely. They can passionately argue for a clean architecture in a planning meeting, but if the business context demands a quicker, dirtier solution, they can pivot and implement the pragmatic choice without resentment. They document the trade-offs made and the technical debt incurred, creating a ticket to address it later, and then they move on. They understand that software exists to serve a business, not the other way around.

They Make Decisions Under Uncertainty. Requirements are ambiguous. Timelines are tight. Information is incomplete. This is the reality of software development. A good developer freezes, demanding more certainty, more specifications, more time. A great developer uses their judgment to make the best possible decision with the information available. They identify the core risks, make reasonable assumptions, and chart a course. They know that delaying a decision is often more costly than making a slightly wrong one.

They Distinguish Between Symptoms and Diseases. A junior developer treats the symptom: "The page is loading slowly, let's add a cache." A good developer finds the disease: "The page is loading slowly because of an N+1 query problem, let's fix the query." A great developer with sound judgment asks if the disease itself is a symptom: "Why are we making so many queries on this page? Is our data model wrong? Is this feature trying to do too much? Should we be pre-computing this data entirely?" They operate at a higher level of abstraction, solving classes of problems instead of individual instances.

How to Cultivate Engineering Judgment (Because It Can't Be Taught, Only Grown)

Judgment isn't a skill you can learn from a book. It's a form of tacit knowledge, built slowly through experience, reflection, and a specific kind of practice.

Seek Diverse Experiences. Judgment is pattern-matching on a grand scale. The more patterns you have seen, the better your judgment will be. Work at a startup where speed is everything. Work at an enterprise where stability is paramount. Work on front-end, back-end, and infrastructure. Each context teaches you a different set of values and trade-offs. The developer who has only ever worked in one environment has a dangerously narrow basis for judgment.

Conduct Retrospectives on Your Own Decisions. This is the single most powerful practice. Don't just move on after a project finishes or a decision is made. Schedule a solo retrospective. Take out a notebook and ask yourself:

· "What were the key technical decisions I made?"
· "What was my reasoning at the time?"
· "How did those decisions play out? Better or worse than expected?"
· "What did I miss? What would I do differently with the benefit of hindsight?"
This ritual of self-reflection is how you convert experience into wisdom.

Find a Yoda. Identify a senior engineer whose judgment you respect—someone who seems to have a preternatural ability to make the right call. Study them. When they make a decision that seems counterintuitive, ask them to explain their reasoning. Not just the technical reason, but the contextual, human, and business reasons. The nuances they share are the building blocks of judgment.

Practice Articulating the "Why." When you make a recommendation, force yourself to explain not just what you think should be done, but why. Lay out the trade-offs you considered. Explain the alternatives you rejected and why. The act of articulating your reasoning forces you to examine its validity and exposes flaws in your logic. It also invites others into your thought process, allowing them to challenge and refine your judgment.

Embrace the "Reversibility" Heuristic. When faced with a difficult decision, ask: "How reversible is this?" Adopting a new programming language is largely irreversible for a codebase. Adding a complex microservice architecture is hard to undo. Choosing a cloud provider creates lock-in. These are high-judgment decisions. On the other hand, refactoring a module, changing an API endpoint, or trying a new library are often easily reversible. Great developers apply more rigor and demand more certainty for irreversible decisions, and they move more quickly on reversible ones.

Develop a Sense of Proportion. This is perhaps the most subtle aspect of judgment. It’s knowing that spending two days optimizing a function that runs once a day is a waste, but spending two days optimizing a function called ten thousand times per second is critical. It’s knowing that a 10% performance degradation in the checkout flow is an emergency, while a 10% degradation in the "about us" page is not. This sense of proportion allows them to focus their energy where it truly matters.

The Compounding Effect of the Ten Habits

Individually, each of these habits will make you a better developer. But their true power is not additive; it's multiplicative. They compound.

Reading code widely (Habit 1) builds the mental library that informs your engineering judgment (Habit 10). Understanding the "why" (Habit 2) allows you to make the pragmatic trade-offs required by strategic laziness (Habit 5) and sound judgment (Habit 10). Cultivating deep focus (Habit 6) is what enables the deliberate learning (Habit 9) that prevents you from making naive decisions. Treating debugging as a science (Habit 3) and maintaining a feedback loop with production (Habit 8) provide the raw data that your judgment synthesizes into wisdom.

This is not a checklist to be completed. It is a system to be grown, an identity to be adopted. You will not master these in a week, or a year. This is the work of a career.

Start with one. Pick the habit that resonates most with you right now, the one that feels both necessary and just out of reach. Practice it deliberately for a month. Then add another.

The path from a good programmer to a great one is not a straight line. It's a spiral. You will circle back to these habits again and again throughout your career, each time understanding them more deeply, each time integrating them more fully into your practice.

The destination is not a job title or a salary. The destination is becoming the kind of developer who doesn't just write code, but who solves problems. The kind of developer who doesn't just build features, but who builds systems that are robust, maintainable, and a genuine pleasure to work with. The kind of developer who leaves every codebase, every team, and every organization better than they found it.

That is the work. That is the craft. And it begins with the decision, right now, to not just be good, but to begin the deliberate, lifelong practice of becoming great.

Toronto, meet Nu: Building the purple future from Canada

When Nubank was born in 2013, the mission was simple but ambitious: to fight complexity and empower people through simple, transparent, and accessible financial products.

A lot has changed since then. Today, Nubank is one of the largest digital financial services platforms in the world, with more than 127 million customers across Brazil, Mexico, and Colombia.

We are the third largest financial institution in Brazil, with the lowest complaint rate among the country’s top 15 banks. And we continue to grow with the purpose of building technology that gives people control over their financial lives.

We’ve become a global company listed on the NYSE (NU), recognized by Time, Fast Company, and Forbes as one of the most innovative and fastest-growing companies in the world.

But even after a decade, one thing hasn’t changed: we believe the best way to build the future of finance is with technology made by people, for people.

Why Toronto

Starting this month, Nubank is seeking talent in Toronto to accelerate its global journey of technology and innovation.

Our goal is not to open a physical operation in Canada, but to connect local professionals with our global engineering ecosystem, a network that already spans Brazil, Mexico, Colombia, the United States, Germany, Uruguay, and Argentina.

Choosing Toronto to represent this new moment was an easy decision. After all, the city is one of the world’s leading tech hubs, with a vibrant community of engineers and startups. It’s a mature, diverse, and multicultural market that values autonomy, collaboration, and purpose.

And these values, of course, are deeply aligned with Nubank’s culture.

“Nubank’s journey started with a world-class team building a digital-first stack that has enabled us to serve over 127 million customers with unmatched scale and resilience. We are now looking for the next generation of world-class engineers – individuals passionate about fighting complexity and building a truly customer-centric future of finance. Tapping into Toronto’s incredible talent pool will help us leverage our technical advantages, from our robust cloud infrastructure to advances in AI, and redefine financial services for hundreds of millions of people across the globe.”

Eric Young, CTO, Nubank. 

To enable this global collaboration model, Nubank operates in a hybrid structure tailored for distributed teams. Daily work happens remotely, while squads periodically come together for about one week of in-person alignment and co-creation. For professionals based in Canada, these sessions typically occur in one of our hubs (Brazil, Mexico, Colombia, or the United States) and are planned well in advance, with full travel support to ensure equitable access to these collaboration moments.

This expansion is not only geographic—it’s cultural, technical, and strategic. It reflects our ambition to go far beyond the three countries where we operate today and build a truly global platform.

For engineers in Canada, it’s an invitation to help solve real, high-impact challenges: scaling systems that support tens of millions of users, designing products that seamlessly adapt across markets and languages, and integrating AI deeply into our operations to power a native AI banking experience.

“Our vision is to become an AI-first company, embedding foundation models across our products to unlock simpler, smarter, and more intuitive financial experiences—generating meaningful value for customers and accelerating our global journey.”

David Velez, Nubank CEO

Engineering at the heart of everything

At Nubank, technology is at the core of everything we do. Our ecosystem grows sustainably, combining efficiency, scalability, and impact.

Our stack is cloud-native, distributed, and immutable, designed for high performance, security, and resilience. It can be understood through four main pillars:

Cloud infrastructure

As a 100% digital product, Nubank relies on a strong partnership with the AWS ecosystem, which allows us to deliver products efficiently and at scale. This includes components such as storage, routing, session management, and Lambda functions.

Security is an essential part of this structure. With multiple processes and continuous alignment to financial regulations, we ensure full compliance in every operation.

Backend

We use Clojure in our backend layer. Clojure, a functional language strategically chosen by Nubank, offers key advantages: a strong concurrency story, immutable data structures, pure functions, and expressiveness, making the code easier to understand, test, and maintain without sacrificing power or conciseness.
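
As a tiny illustration of those properties (generic Clojure, not Nubank code): values are immutable, and functions over them stay pure.

;; a minimal sketch, not Nubank code: an immutable map and a pure update
(def account {:id "acct-1" :balance 100M})

(defn credit
  "Returns a *new* account map; the original value is never mutated."
  [acct amount]
  (update acct :balance + amount))

(credit account 25M) ;; => {:id "acct-1", :balance 125M}
account              ;; => {:id "acct-1", :balance 100M} -- unchanged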

Today, we operate with more than 3,000 microservices, all designed to support the scale and speed our customers expect.

Mobile

We adopt an approach called Backend-Driven Content (BDC), which allows us to design screens directly on the server side, without needing to code them in Flutter or native Android/iOS languages.

This drastically accelerates deployment time and reduces dependencies on app store reviews or release cycles.
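
To make the idea concrete, here is a purely hypothetical sketch of what a backend-driven screen description could look like; Nubank's actual BDC schema is not public, so every key and component type below is invented for illustration.

;; hypothetical sketch only -- not Nubank's real BDC format.
;; The server returns data describing the screen; the app interprets and renders it.
(def transfer-screen
  {:screen/id :transfer-confirmation
   :screen/components
   [{:type :header :text "Confirm transfer"}
    {:type :amount :value 250.00M :currency "BRL"}
    {:type :button :text "Confirm"
     :action {:type :http-post :to "/transfers"}}]})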

Database

We use Datomic as our transactional database, a technology built around immutability, an essential differentiator in the financial sector, where maintaining permanent and reliable records of every transaction is critical.
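
A minimal sketch of what that immutability buys, using the public datomic.api; the connection URI and the :account/* attributes are hypothetical.

;; minimal sketch; the URI and attribute names are hypothetical
(require '[datomic.api :as d])

(def conn (d/connect "datomic:dev://localhost:4334/accounts"))

;; a database value is an immutable snapshot, so you can query
;; the state of the world as of any past point in time
(d/q '[:find ?balance .
       :in $ ?id
       :where
       [?a :account/id ?id]
       [?a :account/balance ?balance]]
     (d/as-of (d/db conn) #inst "2024-01-01")
     "acct-1")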

In addition, our microservices-based and backend-driven architecture runs on more than 85 Kubernetes clusters, with 355,000 active pods and 1 petabyte of logs ingested daily.

A new chapter in our global journey

Our talent expansion in Toronto reinforces our commitment to attracting professionals who share our mission and help expand our global presence.

It’s also a bet on the talent and diversity that drive the tech ecosystem in Canada.

And it’s a reminder of what brought us here: the belief that technology can transform realities, build bridges, and simplify what once seemed impossible.

Technology companies have a set of competitive advantages that allow them to gain ground in markets traditionally dominated by large institutions, and Nubank is living proof of that.

We’re building the purple future. Now, also from Toronto.

Eight Queens in core.logic

Welcome to my blog.

Here I report my core.logic solution to the classic Eight Queens puzzle.

(ns eight-queens
  (:refer-clojure :exclude [==])
  (:require
   [clojure.string :as str]
   [clojure.core.logic :refer :all :as l]
   [clojure.core.logic.fd :as fd]))

;; Classic AI problem:
;;
;; Find a chessboard configuration where (n=8) queens are on the board
;; and no pair of queens attacks another.
;;

;;
;; representation:
;;
;; a permutation of <0...7>,
;; 1 number per row, giving that queen's column,
;; so that [0,1,2,3,4,5,6,7] places all queens on the main diagonal (everybody attacks everybody).
;;
;; this representation constrains the configuration:
;; - all queens are on different rows (one number per row).
;; - all queens are on different columns (once the numbers are distinct).
;;
;; This is fine, because the configurations excluded this way are never solutions anyway.
;;

(defn queens-logic
  [n]
  ;; make n (= 8) logic variables, one per row
  (let [colvars (map (fn [i] (l/lvar (str "col-" i))) (range n))]
    (l/run* [q]
      ;; 1. assign each variable the domain 0..(n-1)
      (everyg (fn [lv] (fd/in lv (fd/interval (dec n)))) colvars)
      ;; 2. 'columns must all be different' constraint // permutation
      (fd/distinct colvars)
      ;; 3. diagonal constraint
      ;; for each pair of queens, say they are not attacking diagonally
      (and* (for [i (range n)
                  j (range (inc i) n)
                  :let [row-diff (- j i)
                        ci (nth colvars i)
                        cj (nth colvars j)]]
              (fresh []
                ;; handle south-east and north-east cases
                (fd/eq (!= cj (+ ci row-diff)))
                ;; '-' relation didn't work somehow
                (fd/eq (!= ci (+ cj row-diff))))))
      (l/== q colvars))))

(take 1 (queens-logic 8))
;; => ((1 3 5 7 2 0 6 4))

In relational programming, the building blocks are logic variables and goals. We write a program that sets up the constraints on the variables, then hand it to the logic engine with run*.
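
For readers new to core.logic, here is a minimal, self-contained sketch of that model, separate from the queens code: a logic variable, two alternative goals, and run* collecting every answer.

;; q unifies with 1 or with 2; run* enumerates all solutions
(l/run* [q]
  (l/conde
    [(l/== q 1)]
    [(l/== q 2)]))
;; => (1 2)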

After deciding on the clever representation, a permutation of column positions, we can program the constraints we need:

  1. Each queen is a number between 0 and n−1 = 7; for each row, the number says which column that queen is on.
  2. Each queen is a different number from the others, i.e. it is on a different column (1 + 2 form the 'permutation constraint').
  3. The queens don't attack each other diagonally.
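
As a sanity check of constraint 3 in isolation, here is a minimal sketch with two queens one row apart (row-diff = 1) and columns restricted to {0, 1}: the diagonal goals alone eliminate the attacking pairs, while same-column pairs survive until fd/distinct (constraint 2) removes them.

(l/run* [a b]
  (fd/in a (fd/interval 1))
  (fd/in b (fd/interval 1))
  (fd/eq (!= b (+ a 1)))  ; no south-east attack
  (fd/eq (!= a (+ b 1)))) ; no north-east attack
;; => ([0 0] [1 1])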

Verifying the correctness:

(comment

  ;; reference is defined below
  (def reference-outcome (find-all-solutions 8))

  (def outcome (queens-logic 8))

  [(= (into #{} reference-outcome) (into #{} outcome))
   (every? zero? (map quality reference-outcome))
   (every? zero? (map quality outcome)) (count outcome)]

  ;; =>
  [true true true 92])


AI-generated backtracking and hill-climbing solutions:

;; ============================
;; Helpers and non-relational solutions

(defn quality
  "Count the number of queens attacking each other in the given board configuration.
   board-config is a vector of column positions, one per row.
   Returns the number of pairs of queens that attack each other."
  [board-config]
  (let [n (count board-config)]
    (loop [row1 0
           conflicts 0]
      (if (>= row1 n)
        conflicts
        (let [col1 (nth board-config row1)
              new-conflicts (loop [row2 (inc row1)
                                   acc 0]
                              (if (>= row2 n)
                                acc
                                (let [col2 (nth board-config row2)
                                      ;; Check diagonal attacks
                                      diag-attack? (= (Math/abs (- row1 row2))
                                                      (Math/abs (- col1 col2)))]
                                  (recur (inc row2)
                                         (if diag-attack? (inc acc) acc)))))]
          (recur (inc row1) (+ conflicts new-conflicts)))))))

(defn valid-solution?
  "Returns true if the board configuration has no conflicts."
  [board-config]
  (zero? (quality board-config)))

(defn print-board
  "Prints a visual representation of the board."
  [board-config]
  (let [n (count board-config)]
    (doseq [row (range n)]
      (let [col (nth board-config row)]
        (println (apply str (for [c (range n)]
                              (if (= c col) "Q " ". ")))))))
  (println))

(defn solve-backtrack
  "Solves the N-Queens problem using backtracking.
   Returns the first valid solution found, or nil if none exists."
  [n]
  (letfn [(safe? [board row col]
            (let [board-vec (vec board)]
              ;; Check if placing a queen at [row col] is safe
              (not (some (fn [r]
                           (let [c (nth board-vec r)]
                             (or (= c col)
                                 (= (Math/abs (- r row))
                                    (Math/abs (- c col))))))
                         (range row)))))

          (place-queens [board row]
            (if (= row n)
              board ;; Solution found
              (some (fn [col]
                      (when (safe? board row col)
                        (place-queens (conj board col) (inc row))))
                    (range n))))]

    (place-queens [] 0)))

(defn find-all-solutions
  "Finds all solutions to the N-Queens problem.
   Returns a sequence of all valid board configurations."
  [n]
  (letfn [(safe? [board row col]
            (let [board-vec (vec board)]
              (not (some (fn [r]
                           (let [c (nth board-vec r)]
                             (or (= c col)
                                 (= (Math/abs (- r row))
                                    (Math/abs (- c col))))))
                         (range row)))))

          (place-queens [board row]
            (if (= row n)
              [board] ;; Return solution in a vector
              (mapcat (fn [col]
                        (when (safe? board row col)
                          (place-queens (conj board col) (inc row))))
                      (range n))))]

    (place-queens [] 0)))

(defn random-config
  "Generates a random board configuration of size n."
  [n]
  (vec (shuffle (range n))))


(defn solve-hill-climbing
  "Solves the N-Queens problem using hill climbing with random restarts.
   max-restarts: maximum number of random restarts to attempt.
   max-steps: maximum steps per climb attempt."
  [n & {:keys [max-restarts max-steps]
        :or {max-restarts 100 max-steps 1000}}]
  (letfn [(swap-positions [config i j]
            (assoc config i (nth config j) j (nth config i)))

          (get-neighbors [config]
            (for [i (range n)
                  j (range (inc i) n)]
              (swap-positions config i j)))

          (climb [config steps]
            (if (or (zero? steps) (valid-solution? config))
              config
              (let [current-quality (quality config)
                    neighbors (get-neighbors config)
                    better-neighbors (filter #(< (quality %) current-quality) neighbors)]
                (if (empty? better-neighbors)
                  config ;; Local minimum reached
                  (recur (first (sort-by quality better-neighbors))
                         (dec steps))))))]

    (loop [restarts 0]
      (if (>= restarts max-restarts)
        nil ;; Failed to find solution
        (let [start-config (random-config n)
              result (climb start-config max-steps)]
          (if (valid-solution? result)
            result
            (recur (inc restarts))))))))


(comment
  ;; Example usage:

  ;; Test the quality function
  (quality [0 1 2 3 4 5 6 7]) ;; All on diagonal - many conflicts
  ;; => 28

  (quality [0 4 7 5 2 6 1 3]) ;; A valid solution
  ;; => 0

  ;; Solve for 8 queens using backtracking
  (def solution (solve-backtrack 8))
  solution
  ;; => [0 4 7 5 2 6 1 3]

  (print-board solution)
  ;; Q . . . . . . .
  ;; . . . . Q . . .
  ;; . . . . . . . Q
  ;; . . . . . Q . .
  ;; . . Q . . . . .
  ;; . . . . . . Q .
  ;; . Q . . . . . .
  ;; . . . Q . . . .

  ;; Find all solutions
  (def all-sols (find-all-solutions 8))
  (count all-sols)
  ;; => 92 (there are 92 distinct solutions for 8 queens)

  ;; Solve using hill climbing
  (def hc-solution (solve-hill-climbing 8))
  (print-board hc-solution)

  ;; Test quality on various board sizes
  (quality [0 1]) ;; => 1 (2 queens on a shared diagonal: 1 conflict)
  (quality [0 2 1]) ;; => 1 (3 queens, 1 diagonal conflict; 3-queens has no solution)
  (quality [1 3 0 2]) ;; => 0 (4 queens, valid)
  )

I could print all solutions. Why not do it with HTML, so it renders on this blog?

AI generated:

(require '[hiccup2.core :as html])

(defn board-to-hiccup
  "Converts a board configuration to hiccup format with checkerboard pattern."
  [board-config]
  (let [n (count board-config)]
    [:div {:style {:display "inline-block"
                   :border "2px solid #333"}}
     (for [row (range n)]
       (let [col (nth board-config row)]
         [:div {:style {:display "flex"}}
          (for [c (range n)]
            (let [is-dark? (odd? (+ row c))
                  has-queen? (= c col)]
              [:div {:style {:width "60px"
                             :height "60px"
                             :background-color (if is-dark? "#769656" "#eeeed2")
                             :display "flex"
                             :align-items "center"
                             :justify-content "center"
                             :font-size "40px"
                             :font-weight "bold"
                             :color "#000"}}
               (when has-queen? "♕")]))]))]))

(spit
 "board.html"
 (html/html
     [:div
      {:style
       {:display "flex"
        :padding "8px"
        :gap "8px"
        :flex-wrap "wrap"}}
      (doall (map board-to-hiccup (queens-logic 8)))]))

All solutions printed, because why not.

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.