Buddy at Nubank: Supporting New Hires on Their Journey

Buddy connects new team members to Nubank, offering guidance, hands-on experiences, and opportunities to contribute with impact from day one.

At Nubank, we believe that the start of a journey can shape someone’s entire experience in the company. That’s why we created the Buddy program, connecting new hires — affectionately called Nuvinhos — to our culture, providing shared learning, support, and discovery from their very first day.

A Nuvinho is anyone who has recently joined Nubank: someone ready to explore our culture, processes, and challenges, eager to learn, find their own path, and make a difference from the beginning.

A Beginning with Purpose

When a Nuvinho joins Nubank, we want them to feel welcomed, supported, and part of something bigger. The Buddy program was created for this purpose: to provide an onboarding experience that reflects how we build our products and our work environment. Here, everyone is encouraged to make decisions, learn continuously, and collaborate to create solutions that generate real impact.

More than a formal process, the Buddy is a living network of support, allowing Nuvinhos to share experiences, try new ideas, and grow alongside the people who already embody Nubank’s culture.

High-Impact Work with Purpose

Working at Nubank is challenging and high-impact. We constantly question the status quo and create solutions that change people’s lives.

This intensity comes from the responsibility to innovate, make meaningful decisions, and deliver tangible results. For Nuvinhos, it means diving into an accelerated learning journey where every action and project contributes to something greater. It’s an environment that values initiative, collaboration, and real impact.

The Buddy program supports this process, helping each Nuvinho navigate, explore, and develop confidently in an environment driven by purpose and meaningful results.

Who the Buddy Is

A Buddy is the person who welcomes the Nuvinho and helps them find their place at Nubank. They know the company routines and day-to-day processes well and are ready to share experiences collaboratively. More than guiding, a Buddy sparks curiosity, encourages questions, and helps the new team member find their own way to contribute meaningfully from day one.

This relationship reflects what it’s like to work at Nubank: collaborative, purpose-driven, and focused on results. With a Buddy, Nuvinhos quickly realize that being part of Nu is more than holding a role — it’s participating in a story that grows every day, built collectively.

The Nuvinho Experience

From day one, the Buddy introduces paths, resources, and processes to help the Nuvinho feel confident and secure. They explain team rituals, strategic objectives, and ongoing projects, helping the new hire understand not just what needs to be done, but why it matters.

At the same time, the Buddy helps the Nuvinho explore essential tools and systems, offering hands-on learning and encouraging autonomy. They connect the new hire to colleagues, leaders, and stakeholders, building a network of relationships that strengthens belonging and eases social integration.

The Buddy also provides organizational context, showing how every decision, process, and project ties into Nubank’s larger purpose. When Nuvinhos take on their first projects, the Buddy observes, gives feedback, and celebrates achievements, ensuring continuous learning and confident, meaningful contributions.

Beyond Onboarding

The Buddy program goes beyond any traditional onboarding. It’s an expression of our culture of shared learning, collaborative building, and real impact from day one. Through this relationship, Nuvinhos discover what it’s like to work at Nubank, and Buddies also grow, learning from every new team member.

Being a Buddy means creating space for new team members to thrive, and being a Nuvinho is an opportunity to make a mark from the start. Together, they shape Nubank’s story: intense, inspiring, and purpose-driven.

The post Buddy at Nubank: Supporting New Hires on Their Journey appeared first on Building Nubank.

Permalink

Between Code and Pedagogy: What I Learned Teaching AI to Write Functional Programming Tutorials

This semester, I decided to try something different in the Introduction to Functional Programming course.

Instead of keeping the focus entirely on Haskell, I used the language only in the first third of the course, as a conceptual foundation.

In the following stages, students worked with two functional stacks widely used on the web: Clojure/ClojureScript and Elixir/Phoenix LiveView.

The goal was twofold: to explore the practical applicability of modern functional programming and to investigate the role of Artificial Intelligence in producing teaching materials and software architectures.

Two approaches, one problem

Both tutorials solve the same challenge: building a complete, persistent Todo List application, but with distinct philosophies.

Version | Stack | Approach
Clojure/ClojureScript | Reagent 2.0 (React 18), Ring, Reitit, next.jdbc | Explicit reactivity on the frontend and a modular REST API
Elixir/Phoenix LiveView | LiveView, Ecto, Tailwind | Reactivity integrated into the backend, with no intermediate API
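To make the contrast concrete, here is a minimal Reagent sketch (my own illustration, not code from the tutorials) of what "explicit reactivity on the frontend" means: state lives in a ratom, and any component that dereferences it is re-rendered when it changes.

```clojure
(ns todo.sketch
  (:require [reagent.core :as r]))

;; Application state is held explicitly in a ratom.
(defonce todos (r/atom []))

(defn add-todo! [title]
  (swap! todos conj {:title title :done? false}))

;; This component derefs the ratom, so Reagent re-renders it
;; whenever the state changes.
(defn todo-list []
  [:ul
   (for [[i todo] (map-indexed vector @todos)]
     ^{:key i} [:li (:title todo)])])
```

In the LiveView version, by contrast, the equivalent state would live on the server (in the socket's assigns), with no client-side store and no intermediate REST API.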

Both tutorials can be accessed here:

The role of Artificial Intelligence

The tutorials were produced in collaboration with different AI models (ChatGPT, Gemini, and Perplexity), starting from detailed prompts.

The AIs managed to generate working code and coherent explanations, but without pedagogical structure.

What was missing was didactic intent: the why behind each decision, the sequencing between steps, and reflection on common mistakes.

The AIs delivered roughly 80% of the technical work.

The remaining 20%, the most important part, depended on human engineering: testing, fixing, modularizing, and turning the material into a learning narrative.

It took about six hours of curation, review, and debugging before the content reached a consistent, instructive standard.

"Producing code with AI is simple. Turning it into knowledge takes experience, method, and purpose."

What this experience revealed

The process reinforced an essential lesson: AI is a powerful tool for accelerating development and inspiring solutions, but human mediation remains irreplaceable.

It is the teacher, the researcher, and the engineer who give meaning, build context, and turn code into learning.

These tutorials represent more than technical guides.

They are an experiment in how to teach functional programming in the 21st century, integrating technology, pedagogy, and critical reflection on the role of artificial intelligence in the learning process.

📚 Tutorial references

Published by Sergio Costa

#Clojure #Elixir #FunctionalProgramming #Education #ArtificialIntelligence #Des

Permalink

My Experience at Lambda Days 2025

What is Lambda Days?

Lambda Days is a two-day international conference dedicated to functional languages, held every year in Kraków, Poland. The event brings together researchers, developers, and enthusiasts from communities such as Erlang, Elixir, Scala, F#, Clojure, Haskell, Elm, and many others, creating a space for exchange between academia and industry.

Beyond the technical talks, Lambda Days also covers topics such as data science, distributed systems, artificial intelligence, and development best practices. The atmosphere is quite vibrant, bringing together participants from different countries in one of Europe's most beautiful and historic cities.

What motivated me to go this year?

I had 10 days of vacation to take and no particular destination in mind. Every place I considered seemed to have plenty of potential for a nice trip, but indecision took over. Then I remembered the feeling I had when I saw that Evan Czaplicki would be giving a talk at this conference!

For those who don't know him, Evan is the creator of the Elm language and probably my favorite speaker! I'm a great admirer of his technical abilities, but I'm equally impressed by, and attentive to, the more philosophical ideas he tends to include in his talks.

I had always wanted to attend one of Evan's talks in person, and in recent years he hasn't made many public appearances, being more focused on developing his latest creation: Acadia. So when I saw that he would be there, I got really excited.

Another important factor in the decision was that, although Poland wasn't exactly among my priority places to visit, it seemed like a very interesting, beautiful country with very distinctive cultural, culinary, and tourist options. It's also not among the most expensive European countries to visit.

What is the city like?

Lambda Days takes place in the same city every year: Kraków. I liked the choice for several reasons. The first, as I already mentioned, is economic. Although Poland has been a member of the European Union since 2004, it never adopted the Euro; its official currency is the złoty (PLN). That can make a few things harder, but I found the price of everything quite inviting, especially compared with other, richer European countries. Hotel, food, public transport, and general day-to-day expenses are not much above what I would find on a trip within Brazil itself.

Airfare from Brazil is not exorbitant either, especially considering that it's an 11-hour flight to Frankfurt and then another hour to Kraków (I'd rather not list what I paid, since the figures would go stale very quickly).

Kraków is a beautiful city! And since the event takes place in summer, the sun rises very early and sets late in the evening (photo taken at 9:45 pm).

Photo of the city at night, with the sky still bright.

I felt quite safe the whole time. The city is very flat, which makes it perfect for walking or cycling. And then, of course, there are the countless trams that run all over the city! You can buy your ticket through a phone app (or at machines at the stops themselves). I ended up using an app and buying time-based tickets, so I could ride all over the city without worrying. Just validate once (by typing in the car number, printed inside near the doors) and you're done! They say there is a lot of ticket inspection (in which case you just show that you validated your ticket in the app; otherwise you get fined!), but in practice I saw no enforcement at all. I also found the price quite acceptable (especially compared with what I had to pay when I visited Norway... I nearly fell over when I saw public transport prices in Oslo!).

Tram (streetcar)

Apparently it's quite easy to get from the airport into the city by train (actual trains, not trams). But I opted for Uber. I was exhausted from the trip and felt safer choosing that means of transport.

Another highlight is that Kraków has some fairly unique tourist options. When I travel, I usually prefer wandering around the city and avoiding overly touristy outings. But this time I decided to visit Auschwitz I and Auschwitz II (Birkenau). It's hard to describe the feeling of standing in a place where literally thousands of people (mostly Jews) died. I still can't believe it. It's not the kind of place everyone will want to visit, but if you have the interest and the disposition, I would recommend going.

Photo from inside Auschwitz.

I also went to the famous Wieliczka Salt Mine. It's a beautiful place, but it probably would have been more interesting to visit other parts of the city, such as a forest park farther away.

Photo from inside the salt mine.

Another very famous spot (one I regret not visiting) is Schindler's Factory, which shows the effort Schindler made to save 1,200 Jews from the Holocaust (anyone who has watched the film Schindler's List will know some of this story).

I chose to stay in Kraków only, but it seems quite easy to reach other Polish cities by train. If I could go back in time, I would organize my trip a bit better and spend a few days in Warsaw, the country's capital.

The only thing I really disliked about the city was the invasion of vapes (e-cigarettes). Although it's fairly common to see people with these devices in Brazil (even though they are prohibited), in Kraków it seemed like every young adult on the street was carrying a vape. A real epidemic! And they were sold in many places around the city. It's a somewhat sad scene, since the problems associated with consuming these products are already well known.

Quality of the talks

There were three tracks, with talks happening in parallel. Some were quite easy to follow, like the Learning How to Learn discussion panel, where four women (including a Brazilian!) talked about their journeys of learning, teaching, and growing in their Functional Programming careers.

Photo of four women talking.

Other talks were more philosophical, like the keynote given by Evan Czaplicki, titled Rethinking our Adoption Strategy. In it, he talked about what he calls a Platform Language, what differentiates it from a Productivity Language, and what we, people outside the business world, could do to make Platform Languages more attractive than Productivity Languages.

Photo of Evan speaking.

There were also talks at more abstract levels, such as Moa Johansson's keynote, AI for Mathematical Discovery: Symbolic, Neural and Neuro-Symbolic Methods. I confess that the level of this one (and of a few other talks) was well above what I could follow. It was always possible to take away some new knowledge, but there was a lot I couldn't fully understand. Another example was the keynote by Martin Odersky, creator of the Scala programming language, titled Making Capabilities Safe and Convenient. I understood the fundamentals, but at a certain point in the talk I got lost and could no longer follow the proposals.

Since there were three rooms/tracks with talks running concurrently, you can choose the ones you find most interesting and will get the most out of (except for the keynotes, which happen in the biggest room with no other talks at the same time). One mistake I made was not researching the speakers better beforehand. I could have made more interesting choices and gotten more out of the content.

Good food!

Breakfast, two coffee breaks (morning and afternoon), and lunch were included in the event. That's really nice, because I didn't have to worry about food, and it was also extra time that people used to socialize, chat, and make (or strengthen) friendships. I found everything well organized and tasty.



Party / Happy hour

At the end of the first day there was a get-together after the event, at another venue. To get in, you just had to show your event badge.

I don't have many details about how it went, because unfortunately I didn't attend. I was quite tired after a long day of talks and ended up going back to my hotel. But I regret not taking that opportunity! I was especially sorry when I learned, from some Brazilians I met at the event, that José Valim (creator of the Elixir language) was at the party! And that they got to talk to him! He didn't give a talk at the event and I never saw him, so I was surprised to learn he was there (and that I missed the chance to meet him...).

My first trip with so much technology at my disposal

I've had the privilege of taking several international trips over the past decades, from Argentina, Chile, and Uruguay to Norway, Sweden, and the Netherlands, passing through Costa Rica, the United States, Cuba... but this trip to Poland was different.

When I visited Cuba in 2009, making a phone call to Brazil was very expensive and restricted (I did it a few times, from a landline at the hotel). Zero access to the internet or a cell phone. On my trip to the Netherlands in 2012, I had internet only when I could find Wi-Fi (using my iPhone 3GS!). Most of the time I got around by looking at a physical map of the city; no GPS.

On this trip to Poland, for the first time, I had at my disposal several technologies that made the stay MUCH easier!

Unlimited internet

Before traveling I had already bought an eSIM (a virtual SIM card, something still not very popular in Brazil) through the Holafly app. The process was super simple, and in no time I had a second SIM installed on an iPhone 14. As soon as the plane stopped for my layover at Frankfurt International Airport in Germany, my SIM started working and I already had unlimited internet access! WhatsApp working normally, and everything else I was used to using in Brazil. For the first time, I arrived in another country with internet already available on my phone!

Uber

Uber also operates in Poland, and I felt more comfortable and safe using their service to get to my hotel. Poland's rail network is very good and I could certainly have gotten there using only trains, since the hotel was near a station (something I should have checked before arriving, but didn't! My planning fell short on this point too). In any case, the Uber dropped me at the hotel door. It was the first time I didn't have to worry about anything to reach my first destination on an international trip.

I also ended up using Uber to get back to the airport. Other than that, I used only public transport. But knowing I had that option at any moment made me feel more at ease exploring the city at will.

Google Maps / GPS

Google Maps works very well in Kraków and helped me enormously in getting around the city by tram. It told me exactly when the tram would arrive at the stop, showed me how many stops were left, and alerted me at the exact moment to get off. It has never been so easy to use public transport in an unfamiliar city!

Of course, I still got lost a few times, haha. In a hurry I would board a tram going the wrong way, and things like that. But it was easy to notice the mistake by looking at the map and, since the tickets I bought were time-based, I just had to get off and catch the right tram.

ChatGPT

Another novelty was traveling for the first time with access to recent Generative AI technologies, such as ChatGPT. I asked it questions throughout the trip to clear up doubts about how things worked in the city; it helped me find shops and also some beautiful spots to visit.

One thing I experimented with a few times was taking a screenshot of Google Maps at my current location and sending it to ChatGPT, asking for tips on places to visit in that area, or shops where I could buy certain things I was looking for. The results were pretty good!

If it weren't for that, I wouldn't have visited Kościuszko Mound, for example. I was right next to it, but it wasn't on my list of places to see. I did the process I described above, it recommended going there, and I found it really nice!

Incredible conversation opportunities

As I said at the beginning, my biggest motivation for attending this event was to watch Evan Czaplicki's talk and, of course, I hoped he would say a bit about his new project, which few people have had access to. When I saw the talk's title, I already guessed that this wouldn't be the focus, and it really wasn't. But what about behind the scenes?

During the two days of the event, he was talking with attendees in the hall where the coffee break was held. On the first day I worked up the nerve to approach him to ask a few questions, but my shyness won once again and I went back to the hotel feeling defeated! There, I told myself: you came all this way for this! If he's there tomorrow, you WILL talk to him!!

And to my joy, the next day he was there again, chatting with people. Once again I was very shy, but I gathered my courage and approached. I didn't know how to start, so I just greeted him and the person he was talking to with a nod and waited for my turn to speak while, for some reason, they discussed Japanese culture.

At some point he turned to me and made a gesture inviting me into the conversation. I introduced myself, thanked him for the talk, praised his work as a programmer but also as a speaker and philosopher, and said that his talks inspired me a lot. We exchanged a few words, and soon after I asked about his "secret" new backend project. I asked some rather generic question about what it was, and he paused, thoughtful, for a few seconds. I got worried, I confess. Would I get a curt answer? Was I being too intrusive? Who was I to ask about a project of his that I knew he wasn't yet willing to share with many people??



But after a few seconds, he replied: do you want to see it? I have my development laptop here; if you have a few minutes free, I can do a demo for you.

I accepted on the spot! He asked me to wait a moment while he called two other people who were interested in seeing the demo. We soon gathered around a high table, right there in the hall, where he opened his laptop and started presenting the project. After a while the other two had to leave, and I was leaving with them when he said: I have one more demo, don't you want to see it? And once again, I enthusiastically accepted!

At that point he presented the demo just for me. I asked some questions and, after 40 minutes of conversation, thanked him and said goodbye. Then he asked my opinion about the product, whether I would use it. And right after, he also asked my opinion about the demo: was it good, did it get the project's idea across?

There I was in Poland, giving my personal opinion to one of my idols! What an indescribable experience!! Those 40 minutes of conversation alone made the trip to Poland worth it!

Was it worth it?

As you can probably guess: yes, it was absolutely worth it!

Adding everything up, it wasn't a cheap trip. Far from it! There was the airfare, the accommodation, the event ticket... All paid with my own money, with no help from the company I work for. That's why it's hard to recommend a trip like this.

I loved it. I had fun, I learned, I got to know a different country, I made new friends... And maybe you'll meet amazing people and land the job of your dreams? Maybe. But more likely not. If that is your only goal, I recommend looking online instead: in the communities, on social media, by writing articles or software...

So if you're thinking of doing something similar, do it for pleasure. For fun. Then it's truly worth it!

What about you?? What was the last conference you attended? How was it? What motivated you to go? Tell me in the comments!

Permalink

Not One, Not Two, Not Even Three, but Four Ways to Run an ONNX AI Model on GPU with CUDA

Two weeks ago, I announced a new Clojure ML library, Diamond ONNX RT, which integrates ONNX Runtime into Deep Diamond. In that post, we explored the classic Hello World example of neural networks, MNIST handwritten digit recognition, step by step. We ran that example on the CPU, from main memory. The next logical step is to execute this stuff on the GPU.

You'll see that with a little help from ClojureCUDA and Deep Diamond's built-in CUDA machinery, this is both easy and simple, requiring almost no effort from a curious Clojure programmer. But don't just trust me; fire up your REPL, and we can continue together.

Here's how you can evaluate this directly in your REPL (you can use the Hello World that is provided in the ./examples sub-folder of Diamond ONNX RT as a springboard).

Require Diamond's namespaces

First things first, we refer the functions that we're going to use.

(require '[uncomplicate.commons.core :refer [with-release]]
         '[uncomplicate.neanderthal.core :refer [transfer! iamax native]]
         '[uncomplicate.diamond
           [tensor :refer [tensor with-diamond]]
           [dnn :refer [network]]
           [onnxrt :refer [onnx]]]
         '[uncomplicate.diamond.internal.dnnl.factory :refer [dnnl-factory]]
         '[uncomplicate.diamond.internal.cudnn.factory :refer [cudnn-factory]]
         '[hello-world.native :refer [input-desc input-tz mnist-onnx]])

None of the following ways to run CUDA models is preferred over the others; use the one that best suits your needs.

Way one

One of the ways to run ONNX models on your GPU is to simply use Deep Diamond's cuDNN factory as the backend for your tensors. The machinery then recognizes what you need and proceeds to do everything on the GPU, using the right stream for tensors, Deep Diamond operations, and ONNX Runtime operations. This looks exactly the same as any other Deep Diamond example from this blog or the DLFP book.

(with-diamond cudnn-factory []
  (with-release [cuda-input-tz (tensor input-desc)
                 mnist (network cuda-input-tz [mnist-onnx])
                 classify! (mnist cuda-input-tz)]
    (transfer! input-tz cuda-input-tz)
    (iamax (native (classify!)))))
7

…it says.

Way two

As an ONNX model usually defines the whole network, you don't need to use Deep Diamond's network as a wrapper. The onnx function can create a Deep Diamond blueprint, and Deep Diamond blueprints can be used as standalone layer creators. Just like in the following code snippet.

(with-diamond cudnn-factory []
  (with-release [cuda-input-tz (tensor input-desc)
                 mnist-bp (onnx cuda-input-tz "../../data/mnist-12.onnx" nil)
                 infer-number! (mnist-bp cuda-input-tz)]
    (transfer! input-tz cuda-input-tz)
    (iamax (native (infer-number!)))))
7

… again.

Way three

We can even mix CUDA and the CPU. Let's say your input and output tensors are in the main memory and you'd like to process them on the CPU, but you want to take advantage of the GPU for the model processing itself. Nothing could be easier if you use Deep Diamond. Just specify an :ep (execution provider) in the onnx function configuration and tell it that you'd like to use only CUDA. Now your network is executed on the GPU, while your input and output tensors stay in the main memory, where they can be easily accessed.

(with-release [mnist (network input-tz [(onnx "../../data/mnist-12.onnx" {:ep [:cuda]})])
               infer-number! (mnist input-tz)]
        (iamax (infer-number!)))
7

… and again the same answer.

Way four

Still need more options? No problem, onnx can create a standalone blueprint, and that blueprint recognizes the :ep configuration too.

(with-release [mnist-bp (onnx input-tz "../../data/mnist-12.onnx" {:ep [:cuda]})
               infer-number! (mnist-bp input-tz)]
        (iamax (infer-number!)))
7

No surprises here.

Is there anything easier?

If you've seen code in any programming language that does this in a simpler and easier way, please let me know, so we can try to make Clojure even better in the age of AI!

The books

Should I mention that the book Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, DNNL, Java, and Clojure teaches the nuts and bolts of neural networks and deep learning by showing you how Deep Diamond is built, from scratch, in interactive sessions? Each line of code can be executed and the results inspected in a plain Clojure REPL. The best way to master something is to build it yourself!

Permalink

Biff support for XTDB v2 is in pre-release

I've been working on/preparing for migrating Biff to XTDB v2 since that became generally available in June. After investigating the deployment options and performance characteristics, I've added some XTDB v2 helper functions to the Biff library (under a new com.biffweb.experimental namespace) and I've made a version of the starter project that uses XTDB v2.

You can create a new XTDB v2 Biff project by running clj -M -e '(load-string (slurp "https://biffweb.com/new.clj"))' -M xtdb2. See this gist for a diff between the old/main starter project and this new one.

To give you a quick overview of what Biff provides:

  • There are use-xtdb2 and use-xtdb2-listener components, roughly the same as we have already for XTDB v1.
  • The ctx map will have a :biff/conn key in it (a Hikari connection pool object) which you can pass to xtdb.api/q to do queries.
  • There is no longer a custom Biff transaction format. There is still a lightweight wrapper function, com.biffweb.experimental/submit-tx, which will apply Malli validation to any :put-docs / :patch-docs operations.
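As a rough sketch of how these pieces might fit together in a handler (the call shapes below are my assumptions based on the description above and on XTDB v2's documented API, not code from the starter project):

```clojure
(ns myapp.users
  (:require [xtdb.api :as xt]
            [com.biffweb.experimental :as biff]))

;; The ctx map carries :biff/conn (a Hikari connection pool), which can be
;; passed directly to xtdb.api/q. XTDB v2 accepts SQL (or XTQL) queries.
(defn all-users [{:keys [biff/conn]}]
  (xt/q conn "SELECT * FROM users"))

;; submit-tx is the lightweight wrapper that Malli-validates
;; :put-docs / :patch-docs operations before submitting them.
(defn create-user! [ctx email]
  (biff/submit-tx ctx
    [[:put-docs :users {:xt/id (random-uuid)
                        :email email}]]))
```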

There's still plenty of work to do before XTDB v2 support in Biff is officially released and becomes the default:

  • Next up, I'm migrating Yakread to XTDB v2. This will help me find any remaining issues that need to be addressed and make sure that Biff is indeed ready for XTDB v2.
  • After that I need to update a bunch of documentation, including the tutorial project.

Since those next two steps will take a while, I wanted to do this "pre-release" for anyone who would like to get a head start on trying out Biff with XTDB v2. If you do so, let me know whatever questions/comments you have. Just note that the new functions in Biff's API are still experimental and might have breaking changes before I do the official release.

And for anyone who would rather not deal with migrating an existing app, Biff will still support XTDB v1. It's totally fine to stay on that.

Finally: I'll be at Clojure/Conj next week, at least if my flight doesn't get canceled. Come say hi.

Permalink

Calling Jank from C

For those who don't know, Jank is a Clojure implementation, but instead of targeting Java, it targets LLVM (Low-Level Virtual Machine). That means Jank compiles to native code and interops directly with C++.

Jank already has ways to call C++, but I wanted to do the opposite: to call Jank code from C. The reason might not be obvious, so here is one: writing libraries.

Not all things need to be stand-alone executables. One example is libraries for Node, Ruby, or Python. These languages are amazing in the levels of abstraction they support, and it's easy to interact directly with code and their runtime (in Node, using DevTools; in Ruby, using something like pry or Lazuli, my own plug-in). They are also quite slow, and in some cases we might need to call some native API that these languages don't support. So what now? The canonical way is to write an extension in C or C++; now we have to manually manipulate memory and deal with safety issues (and before people say something about it "not being that hard", it is: most CVEs happen because of manual memory manipulation in C, where every cast, every printf, every strcpy can cause arbitrary code execution and/or privilege escalation). These extensions are also not interactive, so if you're trying to hack together a solution quickly, you need to write the code, compile it, make a shared library, use the library from the Ruby/Node/Python code, see if it does what you want, and repeat.

It’s tedious. Maybe with Jank we can speed up this process?

First, a disclaimer: Jank currently doesn't seem to officially support what I want to do. Its creator seems to want to support this use case later, but right now it's just a happy coincidence that I can do what I do. So let's start with some base code:

(ns jank-test)

(defn some-code []
  (println "HELLO?"))

Save that to jank_test.jank and compile it with Jank – but instead of making an executable, instruct it to make a library with jank --module-path . compile-module jank-test.

This will generate some build files – in my case, in directory target/x86_64-unknown-linux-gnu-6edc6e02e1bf8d875f77f87b5820996901c1894b142485e01a7785f173afb8df/jank_test.o. You might notice that this is not a shared library – as I said earlier, Jank doesn’t really support what I want to do right now, but it will in the future. For now, we can either create a shared library from this .o file, or create a final binary by linking it together with our code. Let’s take the second option, because it’s easier: create a C++ file containing:

// (1)
extern "C" {
  void jank_load_jank_test();
}

#include <jank/c_api.h>

// (2)
using jank_object_ref = void*;
using jank_bool = char;
using jank_usize = unsigned long long;

extern "C" jank_object_ref jank_load_clojure_core_native();
extern "C" jank_object_ref jank_load_clojure_core();
extern "C" jank_object_ref jank_var_intern_c(char const *, char const *);
extern "C" jank_object_ref jank_deref(jank_object_ref);

int main(int argc, const char** argv)
{
  // (4)
  auto const fn{ [](int const argc, char const **argv) {
    // (5)
    jank_load_clojure_core_native();
    jank_load_clojure_core();

    // (6)
    jank_load_jank_test();
    auto const the_function(jank_var_intern_c("jank-test", "some-code"));
    jank_call0(jank_deref(the_function));

    return 0;
  } };

  // (3)
  jank_init(argc, argv, true, fn);
  return 0;
}

Lots of things here, so let’s go one by one. In (1), we declared an “external” reference. When Jank compiles code, it generates these jank_load_<namespace> functions, which do what the name suggests: load the namespace. Unfortunately, this won’t actually load Clojure core’s namespace, nor will it load any dependencies (I told you this isn’t officially supported yet! You have been warned!). The “external” reference will be resolved at link time; right now the symbol resides only in the .o intermediate file. In (2) we define some type aliases we’ll need later in more “extern” declarations. These are used, again, to reference the Jank library that will be linked with the code.

Now, in (3) (which is the second-to-last line of actual code) we initialize the Jank runtime. This bootstraps the “Clojure”-ish classes defined in Jank, and we need to pass it the fn argument defined previously in (4). Without this step, you will get a segfault when trying to run Jank code, so it is absolutely necessary.

In (5) we load “native core” and “Clojure core”, meaning we start the core libraries that are built into Jank (native); these are then used to implement the clojure.core namespace written in Clojure code. In (6), we also load our own namespace – the one we defined in our .jank file. And finally, after loading this namespace, we’re ready to call our function: we first create a “Jank var” using jank_var_intern_c, which essentially resolves to #'jank-test/some-code, and then we deref it to get back the function. We use jank_call0 to call a function with arity 0; naturally, there are also jank_call1 and jank_call2 for functions of arity 1 or 2, for example.

Finally, to compile your final binary:

clang++ \
  -L/usr/local/lib/jank/0.1/lib/ \
  -I/usr/local/lib/jank/0.1/include \
  test.cpp \
  target/x86_64-unknown-linux-gnu-6edc6e02e1bf8d875f77f87b5820996901c1894b142485e01a7785f173afb8df/jank_test.o \
  -lclang-cpp \
  -lLLVM \
  -lz \
  -lzip \
  -lcrypto \
  -l jank-standalone \
  -o program

So, test.cpp is the code we just created, and target/x86..../jank_test.o is the intermediate file that Jank compiled. The -L and -I flags point to the directories where Jank was installed, -o program instructs the compiler to output to program, and the rest are just libraries we need to link to generate the final binary. And that is it – running the binary will print HELLO? on the screen!

But of course, Jank can do that by itself, so…

But Why?

Suppose we’re working in some language – for example Ruby – and we want to optimize some code, or integrate with some native library. The canonical way to do that is to use C or C++ (sometimes Rust) to make the library. Now, how would we create a class – let’s say Jank – in C++, for it to be usable in Ruby? It’s quite simple, in fact:

#include <ruby.h>

extern "C" void Init_jank_impl() {
  rb_define_class("Jank2", rb_cObject);
}

That’s literally just it. Now, suppose we want to hand this over to Jank, so we define the class in Jank instead – how could we do that? Well, it’s also very simple: we use the same technique as in this post, but instead of defining a main, we keep the extern... code and move everything that was in main into this Init_jank_impl function. Then, on the Jank side, we add:

(ns jank-impl)

(defn init-extension []
  (cpp/rb_define_class "Jank" cpp/rb_cObject))

That’s it. Can we create Ruby methods and do more stuff with this? Hopefully! But not right now: while trying this approach, I found some bugs in Jank, so until these bugs get fixed (which, judging by how fast the language is evolving, I suspect will be soon) we can’t.

But this might even open up some very interesting possibilities, which I expect to expand on in a future post!

Permalink

Clojure Deref (Nov 6, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS).

The State of ClojureScript 2025 Survey is live!

If you ever wondered what’s happening in cljs world, this is your chance to contribute and learn back from the community. Take a few minutes to fill out the survey and share it in your circles.

Upcoming Events

Blogs, articles, and news

Libraries and Tools

Debut release

  • moon - RPG Maker & Engine

  • reagami - A minimal zero-deps Reagent-like for Squint and CLJS

  • aero-1p - Bridge between Aero and 1Password

  • litellm-clj - A universal translator for LLM models

  • muutos - Muutos is a zero-dependency Clojure library for reacting to changes in a PostgreSQL database.

  • webserial-starter - WebSerial API starter with Clojurescript and Replicant

  • clj-threats - Clojure implementation of Threagile

  • qclojure-ml - Quantum Machine Learning based on QClojure

  • DSCloj - Structured LLM prompts in Clojure

  • clojure-mcp-light - Experimental Clojure tooling for Claude Code - automatic delimiter fixing via hooks and parinfer

Updates

  • tools.build 0.10.11 - Clojure builds as Clojure programs

  • statecharts 1.2.24 - A Statechart library for CLJ(S)

  • fulcro-inspect 4.1.0 - A tool for inspecting and debugging Fulcro applications during development.

  • fulcro-devtools-remote 0.2.8 - An adapter for writing development tooling that runs as a Chrome extension or an electron app.

  • test-filter 1.0.6 - A tool for reducing CI times by finding minimal test set based on code analysis.

  • nexus 2025.10.1 - Data-driven action dispatch for Clojure(Script): Build systems that are easier to test, observe, and extend

  • powerpack 2025.10.22 - A batteries-included static web site toolkit for Clojure

  • clj-kondo 2025.10.23 - Static analyzer and linter for Clojure code that sparks joy

  • component 1.2.0 - Managed lifecycle of stateful objects in Clojure

  • spacemacs-config 2025-10-25 - rich Clojure & LSP config for Spacemacs

  • dompa 1.1.0 - A zero-dependency, runtime-agnostic HTML parser and builder.

  • markdown 0.7.196 - A cross-platform clojure/script parser for Markdown

  • pedestal 0.8.1 - The Pedestal Server-side Libraries

  • cli 1.27.121 - Opinionated command line argument handling, with excellent support for subcommands

  • edamame 1.5.33 - Configurable EDN/Clojure parser with location metadata

  • thneed 1.1.4 - An eclectic set of Clojure utilities that I’ve found useful enough to keep around.

  • calva-backseat-driver 0.0.24 - VS Code AI Agent Interactive Programming. Tools for Copilot and other assistants. Can also be used as an MCP server.

  • qclojure 0.23.0 - A functional quantum computer programming library for Clojure with backend protocols, simulation backends and visualizations.

  • malli 0.20.0-alpha3 - High-performance data-driven data specification library for Clojure/Script.

  • clay 2.0.2 - A REPL-friendly Clojure tool for notebooks and datavis

  • kindly 4-beta21 - A small library for defining how different kinds of things should be rendered

  • durable-queue 0.2.0 - a disk-backed queue for clojure

  • joyride 0.0.71 - Making VS Code Hackable like Emacs since 2022

  • reagent 2.0.1 - A minimalistic ClojureScript interface to React.js

  • sente 1.21.0 - Realtime web comms library for Clojure/Script

  • http-kit 2.9.0-beta3 - Simple, high-performance event-driven HTTP client+server for Clojure

  • cider 1.20.0 - The Clojure Interactive Development Environment that Rocks for Emacs

  • tempel 1.0.0 - Data security framework for Clojure

  • eca 0.77.1 - Editor Code Assistant (ECA) - AI pair programming capabilities agnostic of editor

  • carmine 3.5.0 - Redis client + message queue for Clojure

  • clojure-mcp 0.1.12 - Clojure MCP

  • manifold 0.4.4 - A compatibility layer for event-driven abstractions

  • squint 0.9.178 - Light-weight ClojureScript dialect

  • calva 2.0.540 - Clojure & ClojureScript Interactive Programming for VS Code

  • cursive 2025.2.1-eap4 - Cursive: The IDE for beautiful Clojure code

Permalink

Merco Talento Brazil 2025: Nubank ranks among the top 5 most attractive companies to work for

In 2025, Nubank appears for the first time among the five most attractive companies to work for in Brazil. The list is organized by Merco Talento, one of the leading corporate reputation and employer brand rankings in Latin America.

Climbing ten positions from 2024 to 2025 reinforces that investing in autonomy, trust, and a clear purpose isn’t just a cultural choice — it’s what drives real impact inside and outside the company.

As our CEO and Founder, David Vélez, puts it:

“Culture attracts people — and people build the products that attract customers. In the end, customers are consumers of culture. […] Culture is like the company’s spirit — it permeates everything we do.”

At Nubank, culture is more than a statement — it’s a living force that shapes how we hire, build, and grow. And this recognition reflects exactly that.

What is Merco Talento and why it matters

Merco Talento, the Corporate Reputation Business Monitor, evaluates how attractive companies are as places to work based on the perception of different audiences such as university students, professionals, human resources specialists, unions, headhunters and society in general.

In addition to spontaneous recognition, the ranking combines information about corporate reputation, well-being practices, professional development, organizational culture and social purpose.

In other words, it is not just an employer branding award. It reflects trust, credibility and the ability to inspire people.

Inside the methodology

The methodology used to build the Merco Talento ranking includes:

  • More than 9,000 interviews with different groups such as professionals, students, business leaders, human resources specialists and the general public.
  • Evaluation of 26 variables grouped into three main pillars: strong internal reputation, employer brand and quality of life at work.
  • External audit conducted by KPMG, which ensures independence and methodological integrity.

This combination of multiple perspectives is what makes the ranking solid and respected in countries like Brazil, Mexico and Colombia, where Nubank has been recognized before.

A culture built every day

Being among the top 5 most desired companies to work for in Brazil is the result of everyday decisions.

We got here because we work with autonomy and responsibility in teams that genuinely trust people and the decisions they make. We create a safe environment to learn, make mistakes and grow, without rigid hierarchies that limit initiative or creativity.

Diversity and inclusion are true foundations of the way we build products, develop leaders and form teams. And everything connects to a clear purpose: to fight complexity and empower people to have more control over their financial lives.

The post Merco Talento Brazil 2025: Nubank ranks among the top 5 most attractive companies to work for appeared first on Building Nubank.

Permalink

Introducing Agent-o-rama: build, trace, evaluate, and monitor stateful LLM agents in Java or Clojure

We’ve just open-sourced Agent-o-rama, a library for building scalable and stateful LLM agents on the JVM. Agent-o-rama provides two first-class APIs, one for Java and one for Clojure, with feature parity between them.

AI tooling today is overwhelmingly centered on Python, and while the JVM ecosystem has seen growing support through libraries like LangChain4j, it lacks the kind of integrated tooling that lets developers evaluate, observe, and deploy LLM-based systems rigorously and at scale. Available tools are fragmented or complex to set up, and nothing handles the entire workflow from development to production with proper observability.

Agent-o-rama fills that gap. It brings the same ideas popularized by LangGraph and LangSmith – structured agent graphs, tracing, datasets, experiments, evaluation – but makes them native to Java and Clojure. LLMs are powerful but inherently unpredictable, so building applications with LLMs that are helpful and performant with minimal hallucination requires being rigorous about testing and monitoring.

Agents are defined as simple graphs of Java or Clojure functions that execute in parallel. Agent-o-rama automatically captures detailed traces and includes a web UI for offline experimentation, online evaluation, and time-series telemetry (e.g. model latency, token usage, database latency). It also supports streaming, with a simple client API to stream model calls or other outputs from nodes in real time. Agent-o-rama extends the ideas from LangGraph and LangSmith with far greater scalability, full parallel execution, and built-in high-performance data storage and deployment.

Agent-o-rama is deployed onto your own infrastructure on a Rama cluster. Rama is free to use for clusters up to two nodes and can scale to thousands with a commercial license. Every part of Agent-o-rama is built-in and requires no other dependency besides Rama. Agent-o-rama also integrates seamlessly with any other tool, such as databases, vector stores, external APIs, or anything else. Unlike hosted observability tools, all data and traces stay within your infrastructure.

Example agent

Let’s take a look at an example agent! This is a research agent from the examples/ directory in the project. In that directory you’ll find equivalent Java and Clojure versions.

You’ll need Java 21 installed and API keys for OpenAI and Tavily (Tavily’s free tier is sufficient). Put the API keys in environment variables like so:

export OPENAI_API_KEY=your_openai_key_here
export TAVILY_API_KEY=your_tavily_key_here

To run the agent, clone Agent-o-rama and follow these instructions (for Java or Clojure, whichever you prefer):

# Java instructions
cd examples/java
./run-example com.rpl.agent.research.ResearchAgentExample

# Clojure instructions
cd examples/clj
lein repl
(require '[com.rpl.agent.research-agent :as research-agent])
(research-agent/run-agent)

This runs Rama’s “in-process cluster” (IPC) and launches the research agent on it. You’ll get a prompt at the terminal to enter a research topic. The agent will generate a set of analyst personas to analyze the topic, and you’ll be prompted again whether you want to give feedback on the generated analysts. Once you tell the agent you have no more feedback, it will spend a few minutes generating the report, including using information it finds through web searches and through Wikipedia, and then the final report will be printed.

As the report is being generated or when it’s finished, you can open the Agent-o-rama UI at http://localhost:1974 .

Here’s an example back and forth:

Enter a topic: What's the influence and legacy of Billy Wilder?

Do you have any feedback on this set of analysts? Answer 'yes' or 'no'.

{"role" "Film Historian", "affiliation" "University of California, Los Angeles", "name" "Dr. Lucy Reynolds", "description" "Specializes in post-war American cinema and the contributions of filmmakers like Wilder. Focuses on Wilder's stylistic innovations and narrative techniques, exploring how they shaped modern filmmaking."}
{"role" "Cultural Critic", "affiliation" "Film Critic Magazine", "name" "Michael Chen", "description" "Analyzes the social and cultural impacts of Wilder's films, particularly in relation to gender and race issues. Concerned with how Wilder's work reflects and influences societal norms."}
{"role" "Cinema Studies Scholar", "affiliation" "New York University", "name" "Professor John Hartman", "description" "Investigates the legacy of classic Hollywood directors, with an emphasis on Wilder. His work focuses on the interplay between commercial success and artistic integrity in Wilder's films."}
{"role" "Screenwriter and Director", "affiliation" "Independent Filmmaker", "name" "Emma Thompson", "description" "Explores the thematic elements in Wilder's storytelling, particularly humor and satire. Engages with Wilder's ability to blend genres and how this influences contemporary narrative structures."}
>> no

# The Enduring Influence of Billy Wilder

## Introduction

Billy Wilder's legacy in Hollywood cinema is marked by his unparalleled ability to blend commercial success with artistic integrity. This report delves into Wilder's impact, highlighting his innovative storytelling techniques and social critiques through iconic films like "Sunset Boulevard," "The Apartment," and "Double Indemnity." We explore how his personal experiences shaped his keen observational skills and narrative style, as well as how his work laid the groundwork for contemporary storytelling and the exploration of gender dynamics. Ultimately, Wilder’s films illustrate the enduring relevance of balancing humor, critique, and emotional depth in cinema.

---

Billy Wilder stands as a towering figure in cinema, adept at fusing commercial viability with artistic integrity. His films often strike a delicate balance between engaging mainstream audiences and provoking critical reflection on serious themes, as exemplified in "Sunset Boulevard" (1950). This film vividly critiques the dark side of fame and highlights Wilder's unique ability to craft narratives that resonate deeply with viewers while navigating complex moral landscapes. His background as an Austrian émigré and early career as a screenwriter supplied him with the observational prowess necessary to convey the multifaceted nature of human experiences, allowing his work to transcend mere entertainment and engage with profound social commentary [1].

Wilder's amalgamation of humor and satire serves as a compelling vehicle for addressing serious social issues, influencing contemporary screenwriters to adopt similar techniques. Films like "Some Like It Hot" and "The Apartment" showcase his signature style, where humor enriches the narrative while prompting reflection on societal norms and human behavior. This approach remains pervasive in the works of modern filmmakers, illustrating Wilder's constructed legacy in storytelling that encourages the interplay of comedic elements and deeper thematic explorations. Notable contemporary films such as "The Big Sick" and "Parasite" echo these traditions, suggesting that humor can coexist with critical commentary and profound moral questions [2].

Central to Wilder's storytelling innovations is his ability to meld humor with dark themes, employing non-linear narratives and flashbacks in movies like "Double Indemnity" and "The Apartment." These techniques reveal complex character motivations and provide a framework for rich, layered narratives. Wilder’s knack for sharp dialogue and intricate comedic timing enhances this social commentary, resonating with audiences across generations. The blend of genres within his films also paved the way for a more diverse cinematic landscape, allowing modern filmmakers to challenge conventions and push creative boundaries [3].

Particularly significant is Wilder's exploration of gender dynamics in "The Apartment," where the protagonist Fran Kubelik's experiences reflect the challenges faced by women within a patriarchal corporate structure. The film critiques the objectification of women through key scenes and deft cinematography, simultaneously highlighting moral ambiguity and emotional depth. This examination of gender roles emphasizes the importance of authentic relationships in a transactional world, underlining the resonance of Wilder's critiques within contemporary discussions surrounding gender and power [4].

In conclusion, Billy Wilder's influence is multifaceted, shaping both the narrative and thematic dimensions of modern cinema. His legacy emerges from an enduring ability to captivate audiences while addressing the intricacies of human behavior, societal constructs, and moral dilemmas. Through a unique blend of artistry and commercial appeal, Wilder set a standard for storytelling that continues to inspire filmmakers and storytellers today.


---

## Conclusion

Billy Wilder's cinematic legacy is a testament to his exceptional ability to balance artistry and commercial appeal. His films, including "Sunset Boulevard," "The Apartment," and "Double Indemnity," not only entertained audiences but also provoked critical thought on profound societal themes and human dynamics. Through innovative storytelling techniques and a distinctive blend of humor and critique, Wilder paved the way for contemporary writers and filmmakers. His enduring influence can be seen in the way modern narratives confront gender dynamics and moral complexities, demonstrating that engaging storytelling can exist alongside rich thematic exploration. Ultimately, Wilder's impact remains a vital reference point in the evolution of cinema.

## Sources
[1] Interview with Professor John Hartman on the legacy of Billy Wilder.  
[2] https://glcoverage.com/2025/01/23/billy-wilder-screenwriting-tips/  
[3] Culture Vulture | Counter Culture  
[4] Breaking Down the Storytelling in Billy Wilder's 'The Apartment' https://nofilmschool.com/apartment-storytelling-breakdown  
[5] Analysis of ‘The Apartment’ – Infinite Ocean - Mawr Gorshin  https://mawrgorshin.com/2022/08/20/analysis-of-the-apartment/  
[6] On its 60th anniversary, Billy Wilder’s The Apartment looks like an indictment of toxic masculinity - AV Club  https://www.avclub.com/on-its-60th-anniversary-billy-wilder-s-the-apartment-l-1844004988  

If you click on the research agent in the UI, you’ll see this:

The invoke there is what we just ran. Clicking on it brings up the trace for the invoke:

This is displaying the parallel execution of the agent, with orange nodes being aggregations of data computed on multiple branches. On the right is aggregated statistics of everything that happened during the agent’s execution. You can see how many tokens it used, and if it did any database reads/writes you’d see stats about those too. If the agent invokes other agents, you can see a breakdown of stats by agent as well.

Clicking on the “write-report” node brings up a detailed trace of what happened when that node executed:

This node did one LLM call, and you can see the arguments to that LLM, what was returned, and stats on the call in the “Operations” section. The code for this node is just this:

Java:
.node("write-report", "finish-report", (AgentNode agentNode, String sections, String topic) -> {
  ChatModel openai = agentNode.getAgentObject("openai");
  String instructions = String.format(REPORT_WRITER_INSTRUCTIONS, topic, sections);
  List<ChatMessage> chatMessages = Arrays.asList(
    new SystemMessage(instructions),
    new UserMessage("Write a report based upon these memos."));
  String report = openai.chat(chatMessages).aiMessage().text();
  agentNode.emit("finish-report", "report", report);
})
Clojure:
(aor/node
 "write-report"
 "finish-report"
 (fn [agent-node sections topic]
   (let [openai (aor/get-agent-object agent-node "openai")
         instr  (report-writer-instructions topic sections)
         text   (chat-and-get-text
                 openai
                 [(SystemMessage. instr)
                  (UserMessage. "Write a report based upon these memos.")])]
     (aor/emit! agent-node "finish-report" "report" text)
   )))

This code says that the node’s name is “write-report”, the node emits to the node “finish-report”, and the node’s implementation is the given function. The agentNode / agent-node argument is how you interact with the graph to return a result, emit to other nodes, or get agent objects like models, database connections, or anything else. When you emit to other nodes, you simply say what node you want to emit to and what arguments to pass to that node. Agent nodes run on virtual threads, so they can be efficiently written in a blocking style like this.

That’s most of what’s involved in programming agents with Agent-o-rama! There’s a bit more to learn with aggregation and how to declare agent objects, and this is all documented on the programming agents guide. The rest of using Agent-o-rama is creating and managing datasets, running experiments, setting up online evaluation and other actions on production runs, and analyzing agent telemetry.

Also, you can see from this code and the trace that model calls are automatically traced – this node didn’t have to record any tracing info explicitly. Though you can include your own info in traces with a simple API (see this Javadoc and this Clojuredoc).

Let’s take a look at running this on a real cluster! Let’s quickly set up a cluster locally by following these instructions:

  1. Download the latest Rama release from here.
  2. Unpack the release somewhere.
  3. Run: ./rama devZookeeper &
  4. Run: ./rama conductor &
  5. Run: ./rama supervisor &
  6. Visit: http://localhost:8888 . When the page loads, the cluster is ready.
  7. Download the latest Agent-o-rama release from here.
  8. Unpack it somewhere.
  9. Run: ./aor --rama /path/to/rama-root &

Next, to deploy you need to build a jar first. Here’s how to build either the Java or Clojure version from the Agent-o-rama project:

# For Java version  
cd examples/java
mvn clean package -Dmaven.test.skip=true

# For Clojure version
cd examples/clj
lein uberjar

The Java version will build target/java-examples-with-dependencies.jar , and the Clojure version will build target/agent-o-rama-examples-1.0.0-SNAPSHOT-standalone.jar .

Next, to deploy the module just run this command:

# Deploy the module (Java uberjar)
./rama deploy \
  --action launch \
  --jar /path/to/java-examples-with-dependencies.jar \
  --module com.rpl.agent.research.ResearchAgentModule \
  --tasks 4 \
  --threads 2 \
  --workers 1

# Deploy the module (Clojure uberjar)
./rama deploy \
  --action launch \
  --jar /path/to/agent-o-rama-examples-1.0.0-SNAPSHOT-standalone.jar \
  --module com.rpl.agent.research-agent/ResearchAgentModule \
  --tasks 4 \
  --threads 2 \
  --workers 1

Now it’s up and running! You can view the agent in the UI at http://localhost:1974 and play with it. From the agent screen you can invoke the agent with the arguments ["", {"topic": "your topic here"}] . On the trace, you’ll be able to see any human input prompts the agent makes and respond to them there.

Rama handles all of storage, deployment, and scaling. There are no other dependencies needed to run this. Setting up a production cluster is only slightly more work, and there are one-click deploys for AWS and for Azure.

Resources

Check out these resources to learn more or get involved:

Conclusion

Agent-o-rama lets developers gain the benefits of Rama without needing to learn it. Rama’s distributed programming model is powerful but has a learning curve: it introduces a rich dataflow API and uses compound data structures for indexing instead of fixed data models. Agent-o-rama abstracts away those concepts into a familiar API so developers can take advantage of Rama’s strengths for the specific domain of building LLM agents.

For those who want to learn how to program Rama directly, Agent-o-rama also serves as a great example of Rama in practice. The backend is about 15K lines of code and the front-end about 11K, yet together they form a complete, end-to-end distributed system with a diverse feature set. Along with our Twitter-scale Mastodon implementation, it shows the breadth of what can be built with Rama.

We’d love to hear what you build with Agent-o-rama. Join the rama-user mailing list or the #rama channel on the Clojurians Slack to ask questions, share feedback, or discuss ideas with others using Agent-o-rama.

If you’d like to talk directly with us about Agent-o-rama, whether to exchange ideas, get technical guidance, or explore working together on building an LLM agent, you can book a call with us.

Permalink

Gaiwan: October Recap

MCP-SDK Released


New blog post! mcp-sdk: an Introduction to creating an MCP service with Clojure.
Last month we released mcp-sdk, a pure Clojure SDK for working with MCPs. If you'd like to create your own MCP service, check out our blog post to help you get started.

What's in a name?

Most of our open source projects carry the Lambda Island name. You'll find them under lambdaisland on GitHub, and under [com.]lambdaisland on Clojars. Lambda Island is the name I chose in 2017, when I decided to get into screencasting, making premium video tutorials about Clojure. These videos are still online; nowadays you can watch them all for free.

The first library I released under the same name was lambdaisland/uri, a small utility I extracted from the code base that runs the lambdaisland website. Many more libraries and tools would follow. Kaocha (2018), Ornament (2021), Launchpad (2021), Plenish (2022), CLI (2024), just to name a few highlights.

Since 2019, the maintenance and stewardship of what are by now literally dozens of projects have fallen to Gaiwan colleagues and me. This is a collection of open source that Gaiwan proudly shares for the betterment of the Clojure community, but the old name has stuck. I've never been quite sure what to do with that. People would tell me I should rename Gaiwan to Lambda Island to benefit from the name recognition, or go the other way and migrate all these projects over to the Gaiwan team and organisation. I will agree this confusion of names has not done us any favors.

For me there's always been a clear distinction, though. Lambda Island is not an official entity, but if it were, it would be a non-profit. It's our connection to the community, hence why Lambda Island has an opencollective page, and why we run the ClojureVerse forum. There's no commercial motive here; rather, it's in our DNA to give back, to share, and to strengthen the community and ecosystem we benefit from. I guess it's my own anti-corporate tendencies that have always wanted to keep that separate from the business, even though Gaiwan is about as indie as it gets: a handful of people running a bootstrapped business.

Lately, however, we have at last started releasing newer stuff directly under the Gaiwan name, notably our in-progress IAM implementation called Oak. This is a project that distills years of consulting experience, and so it felt right to put our own name on it. A mark of the maker. Oak is also a starting point for us to explore commercial possibilities in the identity space. If that sounds like something you'd like to chat to us about, get in touch!

Reset password screenshot from Oak, our IAM implementation

Coming Up

Arne will do an online talk about The Gaiwan Stack on November 11, 18:30 London / 19:30 CET. Gaiwan has built a lot of Clojure applications over the years, and we've developed an opinionated stack and tooling. It's overdue that we share more of these learnings.

What We Are Reading

  • Europe's plan to ditch US tech giants is built on open source - and it's gaining steam: Digital Sovereignty is a hot topic in Europe, and it's something we've been having a lot of conversations about inside the Gaiwan team as well. We've started the process of migrating from GitHub to our own Forgejo instance. It's a space we are actively exploring to see if we can help European tech companies break their dependency on US clouds.
  • The Majority AI View: Some of you may have read the post from our founder back in September where he explains his view on AI and some of the cognitive dissonance it causes (link). While we do keep an eye on these technologies and try to evaluate their worth, like the people in this article we are concerned and sceptical as well.
  • Your data model is your destiny: "when code is cheap, competition is fierce, and vertical depth matters, your data model is the foundation of your moat. The companies that win won’t be those with the most or even the best features. AI will democratize those. The winners will be built on a data model that captures something true about their market, which in turn creates compounding advantages competitors can’t replicate."

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.