Merco Talento Brazil 2025: Nubank ranks among the top 5 most attractive companies to work for

In 2025, Nubank appears for the first time among the five most attractive companies to work for in Brazil. The list is organized by Merco Talento, one of the leading corporate reputation and employer brand rankings in Latin America.

Climbing ten positions from 2024 to 2025 reinforces that investing in autonomy, trust, and a clear purpose isn’t just a cultural choice — it’s what drives real impact inside and outside the company.

As our CEO and Founder, David Vélez, puts it:

“Culture attracts people — and people build the products that attract customers. In the end, customers are consumers of culture. […] Culture is like the company’s spirit — it permeates everything we do.”

At Nubank, culture is more than a statement — it’s a living force that shapes how we hire, build, and grow. And this recognition reflects exactly that.

What is Merco Talento and why it matters

Merco Talento, the Corporate Reputation Business Monitor, evaluates how attractive companies are as places to work based on the perception of different audiences such as university students, professionals, human resources specialists, unions, headhunters and society in general.

In addition to spontaneous recognition, the ranking combines information about corporate reputation, well-being practices, professional development, organizational culture, and social purpose.

In other words, it is not just an employer branding award. It reflects trust, credibility and the ability to inspire people.

Inside the methodology

The methodology used to build the Merco Talento ranking includes:

  • More than 9,000 interviews with different groups such as professionals, students, business leaders, human resources specialists and the general public.
  • Evaluation of 26 variables grouped into three main pillars: strong internal reputation, employer brand and quality of life at work.
  • External audit conducted by KPMG, which ensures independence and methodological integrity.

This combination of multiple perspectives is what makes the ranking solid and respected in countries like Brazil, Mexico and Colombia, where Nubank has been recognized before.

A culture built every day

Being among the top 5 most desired companies to work for in Brazil is the result of everyday decisions.

We got here because we work with autonomy and responsibility in teams that genuinely trust people and the decisions they make. We create a safe environment to learn, make mistakes and grow, without rigid hierarchies that limit initiative or creativity.

Diversity and inclusion are true foundations of the way we build products, develop leaders and form teams. And everything connects to a clear purpose: to fight complexity and empower people to have more control over their financial lives.

The post Merco Talento Brazil 2025: Nubank ranks among the top 5 most attractive companies to work for appeared first on Building Nubank.

Permalink

Daily Artificial Intelligence Digest - Nov 04, 2025

AI Infrastructure & Strategic Partnerships

Major cloud providers are solidifying their positions as the backbone for AI development, with an AWS and OpenAI partnership cementing a substantial cloud computing deal, further detailed in coverage of the OpenAI and Amazon cloud deal, AWS support for OpenAI workloads, and CNBC's report on the deal. Microsoft continues its global expansion of AI cloud capacity through a multi-billion-dollar Lambda AI infrastructure deal and a Microsoft Australia AI cloud deal, alongside a strategic Microsoft UAE AI investment aimed at advancing US AI diplomacy. The sheer demand for AI compute power highlights significant challenges, as Microsoft's AI GPU electricity needs keep growing, while Elon Musk controversially suggests leveraging Tesla cars as AI computers for a massive distributed computing network.

AI Model Advances & Enterprise Applications

Advancements in AI models continue, with Anthropic's Claude research making progress and Apple's reported Siri-Google Gemini deal exploring integration to enhance personal AI capabilities. Tools for building and monitoring stateful LLM agents are also emerging, exemplified by Agent-o-rama for LLM agents in Java and Clojure. Beyond core development, AI is rapidly finding diverse applications, from Coke's AI holiday ad to Elon Musk's Grokipedia, which is undergoing academic assessment, while content creators like PewDiePie are delving into self-hosting AI and building custom models.

AI Governance, Ethics & Business Dynamics

The expanding reach of AI is fueling discussions around data rights and ethical considerations, with publishers pushing back against OpenAI's use of their work for training, and LinkedIn's AI training privacy plans raising concerns and prompting calls for opt-out mechanisms. Business leaders are addressing financial transparency amidst speculation, with Sam Altman commenting on a potential OpenAI IPO and on OpenAI's revenue. Concerns about the potential for AI-driven cyber threats are also rising, alongside the ongoing debate about appropriate design and user interaction with AI hardware, as one notable test proposes assessing whether an AI hardware device's user experience provokes a desire to physically interact with it aggressively.

Permalink

Introducing Agent-o-rama: build, trace, evaluate, and monitor stateful LLM agents in Java or Clojure

We’ve just open-sourced Agent-o-rama, a library for building scalable and stateful LLM agents on the JVM. Agent-o-rama provides two first-class APIs, one for Java and one for Clojure, with feature parity between them.

AI tooling today is overwhelmingly centered on Python, and while the JVM ecosystem has seen growing support through libraries like LangChain4j, it lacks the kind of integrated tooling that lets developers evaluate, observe, and deploy LLM-based systems rigorously and at scale. Available tools are fragmented or complex to set up, and nothing handles the entire workflow from development to production with proper observability.

Agent-o-rama fills that gap. It brings the same ideas popularized by LangGraph and LangSmith – structured agent graphs, tracing, datasets, experiments, evaluation – but makes them native to Java and Clojure. LLMs are powerful but inherently unpredictable, so building applications with LLMs that are helpful and performant with minimal hallucination requires being rigorous about testing and monitoring.

Agents are defined as simple graphs of Java or Clojure functions that execute in parallel. Agent-o-rama automatically captures detailed traces and includes a web UI for offline experimentation, online evaluation, and time-series telemetry (e.g. model latency, token usage, database latency). It also supports streaming, with a simple client API to stream model calls or other outputs from nodes in real time. Agent-o-rama extends the ideas from LangGraph and LangSmith with far greater scalability, full parallel execution, and built-in high-performance data storage and deployment.

Agent-o-rama is deployed onto your own infrastructure on a Rama cluster. Rama is free to use for clusters up to two nodes and can scale to thousands with a commercial license. Every part of Agent-o-rama is built-in and requires no other dependency besides Rama. Agent-o-rama also integrates seamlessly with any other tool, such as databases, vector stores, external APIs, or anything else. Unlike hosted observability tools, all data and traces stay within your infrastructure.

Example agent

Let’s take a look at an example agent! This is a research agent from the examples/ directory in the project. In that directory you’ll find equivalent Java and Clojure versions.

You’ll need Java 21 installed and API keys for OpenAI and Tavily (Tavily’s free tier is sufficient). Put the API keys in environment variables like so:

export OPENAI_API_KEY=your_openai_key_here
export TAVILY_API_KEY=your_tavily_key_here

To run the agent, clone Agent-o-rama and follow these instructions (for Java or Clojure, whichever you prefer):

# Java instructions
cd examples/java
./run-example com.rpl.agent.research.ResearchAgentExample

# Clojure instructions
cd examples/clj
lein repl
(require '[com.rpl.agent.research-agent :as research-agent])
(research-agent/run-agent)

This runs Rama’s “in-process cluster” (IPC) and launches the research agent on it. You’ll get a prompt at the terminal to enter a research topic. The agent will generate a set of analyst personas to analyze the topic, and you’ll be prompted again whether you want to give feedback on the generated analysts. Once you tell the agent you have no more feedback, it will spend a few minutes generating the report, including using information it finds through web searches and through Wikipedia, and then the final report will be printed.

As the report is being generated or when it’s finished, you can open the Agent-o-rama UI at http://localhost:1974.

Here’s an example back and forth:

Enter a topic: What's the influence and legacy of Billy Wilder?

Do you have any feedback on this set of analysts? Answer 'yes' or 'no'.

{"role" "Film Historian", "affiliation" "University of California, Los Angeles", "name" "Dr. Lucy Reynolds", "description" "Specializes in post-war American cinema and the contributions of filmmakers like Wilder. Focuses on Wilder's stylistic innovations and narrative techniques, exploring how they shaped modern filmmaking."}
{"role" "Cultural Critic", "affiliation" "Film Critic Magazine", "name" "Michael Chen", "description" "Analyzes the social and cultural impacts of Wilder's films, particularly in relation to gender and race issues. Concerned with how Wilder's work reflects and influences societal norms."}
{"role" "Cinema Studies Scholar", "affiliation" "New York University", "name" "Professor John Hartman", "description" "Investigates the legacy of classic Hollywood directors, with an emphasis on Wilder. His work focuses on the interplay between commercial success and artistic integrity in Wilder's films."}
{"role" "Screenwriter and Director", "affiliation" "Independent Filmmaker", "name" "Emma Thompson", "description" "Explores the thematic elements in Wilder's storytelling, particularly humor and satire. Engages with Wilder's ability to blend genres and how this influences contemporary narrative structures."}
>> no

# The Enduring Influence of Billy Wilder

## Introduction

Billy Wilder's legacy in Hollywood cinema is marked by his unparalleled ability to blend commercial success with artistic integrity. This report delves into Wilder's impact, highlighting his innovative storytelling techniques and social critiques through iconic films like "Sunset Boulevard," "The Apartment," and "Double Indemnity." We explore how his personal experiences shaped his keen observational skills and narrative style, as well as how his work laid the groundwork for contemporary storytelling and the exploration of gender dynamics. Ultimately, Wilder’s films illustrate the enduring relevance of balancing humor, critique, and emotional depth in cinema.

---

Billy Wilder stands as a towering figure in cinema, adept at fusing commercial viability with artistic integrity. His films often strike a delicate balance between engaging mainstream audiences and provoking critical reflection on serious themes, as exemplified in "Sunset Boulevard" (1950). This film vividly critiques the dark side of fame and highlights Wilder's unique ability to craft narratives that resonate deeply with viewers while navigating complex moral landscapes. His background as an Austrian émigré and early career as a screenwriter supplied him with the observational prowess necessary to convey the multifaceted nature of human experiences, allowing his work to transcend mere entertainment and engage with profound social commentary [1].

Wilder's amalgamation of humor and satire serves as a compelling vehicle for addressing serious social issues, influencing contemporary screenwriters to adopt similar techniques. Films like "Some Like It Hot" and "The Apartment" showcase his signature style, where humor enriches the narrative while prompting reflection on societal norms and human behavior. This approach remains pervasive in the works of modern filmmakers, illustrating Wilder's constructed legacy in storytelling that encourages the interplay of comedic elements and deeper thematic explorations. Notable contemporary films such as "The Big Sick" and "Parasite" echo these traditions, suggesting that humor can coexist with critical commentary and profound moral questions [2].

Central to Wilder's storytelling innovations is his ability to meld humor with dark themes, employing non-linear narratives and flashbacks in movies like "Double Indemnity" and "The Apartment." These techniques reveal complex character motivations and provide a framework for rich, layered narratives. Wilder’s knack for sharp dialogue and intricate comedic timing enhances this social commentary, resonating with audiences across generations. The blend of genres within his films also paved the way for a more diverse cinematic landscape, allowing modern filmmakers to challenge conventions and push creative boundaries [3].

Particularly significant is Wilder's exploration of gender dynamics in "The Apartment," where the protagonist Fran Kubelik's experiences reflect the challenges faced by women within a patriarchal corporate structure. The film critiques the objectification of women through key scenes and deft cinematography, simultaneously highlighting moral ambiguity and emotional depth. This examination of gender roles emphasizes the importance of authentic relationships in a transactional world, underlining the resonance of Wilder's critiques within contemporary discussions surrounding gender and power [4].

In conclusion, Billy Wilder's influence is multifaceted, shaping both the narrative and thematic dimensions of modern cinema. His legacy emerges from an enduring ability to captivate audiences while addressing the intricacies of human behavior, societal constructs, and moral dilemmas. Through a unique blend of artistry and commercial appeal, Wilder set a standard for storytelling that continues to inspire filmmakers and storytellers today.


---

## Conclusion

Billy Wilder's cinematic legacy is a testament to his exceptional ability to balance artistry and commercial appeal. His films, including "Sunset Boulevard," "The Apartment," and "Double Indemnity," not only entertained audiences but also provoked critical thought on profound societal themes and human dynamics. Through innovative storytelling techniques and a distinctive blend of humor and critique, Wilder paved the way for contemporary writers and filmmakers. His enduring influence can be seen in the way modern narratives confront gender dynamics and moral complexities, demonstrating that engaging storytelling can exist alongside rich thematic exploration. Ultimately, Wilder's impact remains a vital reference point in the evolution of cinema.

## Sources
[1] Interview with Professor John Hartman on the legacy of Billy Wilder.  
[2] https://glcoverage.com/2025/01/23/billy-wilder-screenwriting-tips/  
[3] Culture Vulture | Counter Culture  
[4] Breaking Down the Storytelling in Billy Wilder's 'The Apartment' https://nofilmschool.com/apartment-storytelling-breakdown  
[5] Analysis of ‘The Apartment’ – Infinite Ocean - Mawr Gorshin  https://mawrgorshin.com/2022/08/20/analysis-of-the-apartment/  
[6] On its 60th anniversary, Billy Wilder’s The Apartment looks like an indictment of toxic masculinity - AV Club  https://www.avclub.com/on-its-60th-anniversary-billy-wilder-s-the-apartment-l-1844004988  

If you click on the research agent in the UI, you’ll see this:

The invoke there is what we just ran. Clicking on it brings up the trace for the invoke:

This is displaying the parallel execution of the agent, with orange nodes being aggregations of data computed on multiple branches. On the right are aggregated statistics of everything that happened during the agent’s execution. You can see how many tokens it used, and if it did any database reads/writes you’d see stats about those too. If the agent invokes other agents, you can see a breakdown of stats by agent as well.

Clicking on the “write-report” node brings up a detailed trace of what happened when that node executed:

This node did one LLM call, and you can see the arguments to that LLM, what was returned, and stats on the call in the “Operations” section. The code for this node is just this:

Java:

.node("write-report", "finish-report", (AgentNode agentNode, String sections, String topic) -> {
  ChatModel openai = agentNode.getAgentObject("openai");
  String instructions = String.format(REPORT_WRITER_INSTRUCTIONS, topic, sections);
  List<ChatMessage> chatMessages = Arrays.asList(
    new SystemMessage(instructions),
    new UserMessage("Write a report based upon these memos."));
  String report = openai.chat(chatMessages).aiMessage().text();
  agentNode.emit("finish-report", "report", report);
})

Clojure:

(aor/node
 "write-report"
 "finish-report"
 (fn [agent-node sections topic]
   (let [openai (aor/get-agent-object agent-node "openai")
         instr  (report-writer-instructions topic sections)
         text   (chat-and-get-text
                 openai
                 [(SystemMessage. instr)
                  (UserMessage. "Write a report based upon these memos.")])]
     (aor/emit! agent-node "finish-report" "report" text)
   )))

This code says that the node’s name is “write-report”, the node emits to the node “finish-report”, and the node’s implementation is the given function. The agentNode / agent-node argument is how you interact with the graph to return a result, emit to other nodes, or get agent objects like models, database connections, or anything else. When you emit to other nodes, you simply say what node you want to emit to and what arguments to pass to that node. Agent nodes run on virtual threads, so they can be efficiently written in a blocking style like this.
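
To see the shape of a node in isolation, here is a minimal hypothetical Clojure node using only the functions shown above; the node and argument names are invented for illustration:

(aor/node
 "clean-topic"
 "make-analysts"
 (fn [agent-node topic]
   ;; pure transformation of the input, then emit to the (hypothetical)
   ;; "make-analysts" node, binding its `topic` argument
   (aor/emit! agent-node "make-analysts" "topic" (clojure.string/trim topic))))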

That’s most of what’s involved in programming agents with Agent-o-rama! There’s a bit more to learn with aggregation and how to declare agent objects, and this is all documented on the programming agents guide. The rest of using Agent-o-rama is creating and managing datasets, running experiments, setting up online evaluation and other actions on production runs, and analyzing agent telemetry.

Also, you can see from this code and the trace that model calls are automatically traced; this node didn’t have to record any tracing info explicitly. You can also include your own info in traces with a simple API (see this Javadoc and this Clojuredoc).

Let’s take a look at running this on a real cluster! Let’s quickly set up a cluster locally by following these instructions:

  1. Download the latest Rama release from here.
  2. Unpack the release somewhere.
  3. Run: ./rama devZookeeper &
  4. Run: ./rama conductor &
  5. Run: ./rama supervisor &
  6. Visit: http://localhost:8888. When the page loads, the cluster is ready.
  7. Download the latest Agent-o-rama release from here.
  8. Unpack it somewhere.
  9. Run: ./aor --rama /path/to/rama-root &

Next, to deploy you need to build a jar first. Here’s how to build either the Java or Clojure version from the Agent-o-rama project:

# For Java version  
cd examples/java
mvn clean package -Dmaven.test.skip=true

# For Clojure version
cd examples/clj
lein uberjar

The Java version will build target/java-examples-with-dependencies.jar, and the Clojure version will build target/agent-o-rama-examples-1.0.0-SNAPSHOT-standalone.jar.

Next, to deploy the module just run this command:

# Deploy the module (Java uberjar)
./rama deploy \
  --action launch \
  --jar /path/to/java-examples-with-dependencies.jar \
  --module com.rpl.agent.research.ResearchAgentModule \
  --tasks 4 \
  --threads 2 \
  --workers 1

# Deploy the module (Clojure uberjar)
./rama deploy \
  --action launch \
  --jar /path/to/agent-o-rama-examples-1.0.0-SNAPSHOT-standalone.jar \
  --module com.rpl.agent.research-agent/ResearchAgentModule \
  --tasks 4 \
  --threads 2 \
  --workers 1

Now it’s up and running! You can view the agent in the UI at http://localhost:1974 and play with it. From the agent screen you can invoke the agent with the arguments ["", {"topic": "your topic here"}]. On the trace, you’ll be able to see any human input prompts the agent makes and respond to them there.

Rama handles all of storage, deployment, and scaling. There are no other dependencies needed to run this. Setting up a production cluster is only slightly more work, and there are one-click deploys for AWS and for Azure.

Resources

Check out these resources to learn more or get involved:

Conclusion

Agent-o-rama lets developers gain the benefits of Rama without needing to learn it. Rama’s distributed programming model is powerful but has a learning curve: it introduces a rich dataflow API and uses compound data structures for indexing instead of fixed data models. Agent-o-rama abstracts away those concepts into a familiar API so developers can take advantage of Rama’s strengths for the specific domain of building LLM agents.

For those who want to learn how to program Rama directly, Agent-o-rama also serves as a great example of Rama in practice. The backend is about 15K lines of code and the front-end about 11K, yet together they form a complete, end-to-end distributed system with a diverse feature set. Along with our Twitter-scale Mastodon implementation, it shows the breadth of what can be built with Rama.

We’d love to hear what you build with Agent-o-rama. Join the rama-user mailing list or the #rama channel on the Clojurians Slack to ask questions, share feedback, or discuss ideas with others using Agent-o-rama.

If you’d like to talk directly with us about Agent-o-rama, whether to exchange ideas, get technical guidance, or explore working together on building an LLM agent, you can book a call with us.

Permalink

Functional programming, demystified: What we learned at Nu Engineering Meetup

At Nu Engineering Meetup #15, functional programming shed its “niche” label and became a tangible practice. Alberto Souza, Software Engineer at Nubank and creator of the Dev + Eficiente ecosystem, opened the session by connecting programming paradigms to everyday design decisions.

Rafael Ferreira, Senior Python Developer, educator at Rocketseat, and founder of the Programador Lhama initiative, shared a perspective on functional architecture that begins in theory and translates into testable, predictable code. Marcelo Arbore, Director of Engineering at Oracle Brazil with over fifteen years of experience in hybrid and distributed cloud environments, presented an experiment combining Clojure, Datomic, and Oracle 23AI for vector search and multimodel data applications.

This post brings together the key ideas and addresses a recurring question among those who look at Nubank and wonder if they’d fit in without knowing functional languages.

First, a good question

This question often comes up in interviews, events, and blog comments: “Do I need to know Clojure to work at Nubank?” The answer is as straightforward as the question itself: no. What we look for is curiosity, solid foundations, and the willingness to learn.

The language is a tool that serves the engineering principles we value. This discussion has appeared in community conversations, such as in Alex Miller’s interview on the Hammock Podcast, where he spoke about learning journeys and framed language choices as means, not ends. You can explore that conversation here.

What is functional programming, and why does it matter

Functional programming is an approach that favors immutable structures, pure functions, and predictable composition. It gives engineers greater clarity about what changes and where those changes occur.

In practical terms, it means that data transformations return new values instead of updating variables in place, that side effects are concentrated, and that the path of data can be read as a pipeline.
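
For example, here is a small illustrative sketch (ours, not from the meetup) in which the path of the data reads as a pipeline and every step returns a new value:

(->> [{:name "Ana" :score 82}
      {:name "Bruno" :score 45}
      {:name "Carla" :score 97}]
     (filter #(>= (:score %) 60)) ; keep passing scores
     (map :name)                  ; project just the names
     sort)                        ; a new, sorted sequence; the input is untouched
;; => ("Ana" "Carla")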

An accessible introduction to this way of thinking can be found in Functional Programming with Clojure, which shows how these principles translate into design decisions that make testing, parallelism, and maintenance easier.

How this translates into Clojure at Nubank

Clojure is our main language in many systems. The choice aligns with our emphasis on immutability and our use of history-oriented databases such as Datomic.

This story has already been told in Clojure’s Journey at Nubank, where we detailed the technical and cultural reasons for adopting the ecosystem. The decision doesn’t create an exclusive club—it builds an environment that encourages focus on business rules, clarity of effects, and responsible experimentation.

For those who want to dive deeper behind the scenes, the special series Clojure Turns 15 captures discussions about the language’s evolution and its impact on our daily work.

Joining without knowing Clojure is possible

Many people who joined Nubank had never written Clojure before. During his talk, Alberto Souza shared how he learned from scratch and how functional principles began to influence his code in other languages.

The main message is liberating: paradigms are not dogmas—they are lenses. You can apply immutability, cohesion, and pure functions in Java, Python, or JavaScript, just as you can bring concepts of cohesion and domain-driven design into Clojure.

On the Hammock Podcast, we explore this technical and personal journey in the episode Journeys in Code: Clojure, Datomic, and Personal Growth. The conclusion is that learning the language comes naturally when you have the right context, peers nearby, and interesting problems to solve.

From the whiteboard to production code

The three presentations at the meetup show how theory translates into practice.

Alberto highlighted how immutability simplifies debugging and reduces bugs by concentrating mutations in single points of the flow. Rafael presented functional architecture patterns in Python, such as service handle and tagless final, which isolate effects and maintain system predictability.
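
As a rough Clojure sketch of that effect-isolation idea (our illustration, not code from the talks): the decision logic stays pure, while all interaction with shared state is concentrated in a thin shell around it.

;; Pure core: returns a new order, never mutates its argument.
(defn apply-discount [order rate]
  (update order :total * (- 1 rate)))

;; Effectful shell: the only place that touches shared state.
(defn checkout! [store order]
  (let [order' (apply-discount order 1/10)]
    (swap! store assoc (:id order') order')
    order'))

(def store (atom {}))
(checkout! store {:id 1 :total 100})
;; => {:id 1, :total 90}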

Marcelo, on the other hand, demonstrated an experiment combining Clojure, Datomic, and Oracle 23AI to build a vector search service with embeddings—proof that the functional paradigm can coexist with modern data and AI technologies.

These approaches reflect an engineering philosophy driven by simplicity and clarity. Every decision—from pure functions to declarative pipelines—is designed to keep systems understandable in the long run.

Learning as part of the work

When we say that you don’t need to know Clojure to participate in our hiring process, we’re also talking about how we support learning from the very first day. The technical onboarding is structured to combine product context, experienced peers, and a safe environment to ask questions, explore, and make mistakes.

During the first few weeks, new joiners are exposed to concepts from the functional ecosystem and learn hands-on through pair programming, code reviews, and mentorship. The learning curve exists—and it’s shared.

What changes in practice

Working in a functionally oriented environment is about transforming the way you think.

  • Predictability: States are controlled, and concurrency bugs are drastically reduced.
  • Testability: Pure functions make automation and unit testing easier (see the sketch after this list).
  • Readability: Declarative pipelines make data flows clearer.
  • Evolution: Deliberate cohesion reduces the impact of changes in the long term.
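
To illustrate the testability point with a minimal hypothetical sketch: a pure function can be exercised with nothing but inputs and expected outputs, no mocks or setup required.

(require '[clojure.test :refer [deftest is run-tests]])

;; Pure: values in, values out.
(defn total-with-fee [amount rate]
  (+ amount (* amount rate)))

(deftest total-with-fee-test
  (is (= 110 (total-with-fee 100 1/10))) ; exact ratio arithmetic
  (is (= 100 (total-with-fee 100 0))))

(run-tests)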

These principles cut across languages and paradigms, explaining why functional engineering is a mindset that shapes how we build products at Nubank.

Functional programming is an invitation to think more clearly about data, effects, and the evolution of systems. At Nubank, we’ve built an environment that encourages continuous learning, collaboration, and safe experimentation.

If you identify with this way of building, this is a great place to begin your journey.

The post Functional programming, demystified: What we learned at Nu Engineering Meetup appeared first on Building Nubank.

Permalink

Gaiwan: October Recap

MCP-SDK Released


New blog post! mcp-sdk: an Introduction to creating an MCP service with Clojure.
Last month we released mcp-sdk, a pure Clojure SDK for working with MCPs. If you'd like to create your own MCP service, check out our blog post to help you get started.

What's in a name?

Most of our open source projects carry the Lambda Island name. You find them under lambdaisland on GitHub, and under [com.]lambdaisland on Clojars. Lambda Island is the name I chose in 2017, when I decided to get into screencasting, making premium video tutorials about Clojure. These videos are still online; nowadays you can watch them all for free.

The first library I released under the same name was lambdaisland/uri, a small utility I extracted from the code base that runs the lambdaisland website. Many more libraries and tools would follow. Kaocha (2018), Ornament (2021), Launchpad (2021), Plenish (2022), CLI (2024), just to name a few highlights.

Since 2019 the maintenance and stewardship of what are by now literally dozens of projects has fallen to Gaiwan colleagues and me. This is a collection of open source that Gaiwan proudly shares for the betterment of the Clojure community, but the old name has stuck. I've never been quite sure what to do with that. People would tell me I should rename Gaiwan to Lambda Island to benefit from the name recognition, or go the other way and migrate all these projects over to the Gaiwan team and organisation. I will agree this confusion of names has not done us any favors.

For me there's always been a clear distinction, though. Lambda Island is not an official entity, but if it were, it would be a non-profit. It's our connection to the community, hence why Lambda Island has an Open Collective page and why we run the ClojureVerse forum. There's no commercial motive here; rather, it's in our DNA to give back, to share, and to strengthen the community and ecosystem we benefit from. I guess it's my own anti-corporate tendencies that have always wanted to keep that separate from the business, even though Gaiwan is about as indie as it gets: a handful of people running a bootstrapped business.

Lately, however, we have at last started releasing newer stuff directly under the Gaiwan name, notably our in-progress IAM implementation called Oak. This is a project that distills years of consulting experience, and so it felt right to put our own name on it. A mark of the maker. Oak is also a starting point for us to explore commercial possibilities in the identity space. If that sounds like something you'd like to chat to us about, get in touch!

Reset password screenshot from Oak, our IAM implementation

Coming Up

Arne will give an online talk about The Gaiwan Stack on November 11, 18:30 London / 19:30 CET. Gaiwan has built a lot of Clojure applications over the years, and we've developed an opinionated stack and tooling. It's overdue that we share more of these learnings.

What We Are Reading

  • Europe's plan to ditch US tech giants is built on open source - and it's gaining steam: Digital sovereignty is a hot topic in Europe, and it's something we've been having a lot of conversations about inside the Gaiwan team as well. We've started the process of migrating from GitHub to our own Forgejo instance. It's a space we are actively exploring to see if we can help European tech companies break their dependency on US clouds.
  • The Majority AI View: Some of you may have read the post from our founder back in September where he explains his view on AI and some of the cognitive dissonance it causes (link). While we do keep an eye on these technologies and try to evaluate their worth, like the people in this article we are concerned and sceptical as well.
  • Your data model is your destiny: "when code is cheap, competition is fierce, and vertical depth matters, your data model is the foundation of your moat. The companies that win won’t be those with the most or even the best features. AI will democratize those. The winners will be built on a data model that captures something true about their market, which in turn creates compounding advantages competitors can’t replicate."

Permalink

OSS updates September and October 2025

In this post I'll give updates about open source I worked on during September and October 2025.

To see previous OSS updates, go here.

Sponsors

I'd like to thank all the sponsors and contributors that make this work possible. Without you, the below projects would not be as mature, or wouldn't exist or be maintained at all! So a sincere thank you to everyone who contributes to the sustainability of these projects.


Current top tier sponsors:

Open the details section for more info about sponsoring.

Sponsor info

If you want to ensure that the projects I work on are sustainably maintained, you can sponsor this work in the following ways. Thank you!

Updates

The summer heat has faded, and autumn is upon us. One big focus for me is preparing my talk for Clojure Conj 2025, titled "Making tools developers actually use". I did a test run of the talk at the Dutch Clojure Meetup. It went a bit too long at 45 minutes, so I have to shrink it by almost half for the Conj. The more I work on the talk, the more ideas come up, so it's challenging!

presentation at Dutch Clojure meetup

Of course I spent a ton of time on OSS the past two months as well. Some special mentions:

  • I'm pretty excited by Eucalypt, a remake of Reagent for Squint without React by Chris McCormick. It lets you build UIs with the Reagent API in less than 10kb of gzipped JS. The code was initially generated by an LLM, but now work is going into making the code base thoroughly tested and simplified where possible.
  • After studying Eucalypt's code I figured that making an even more minimal Reagent-like by hand would be fun. This is where I came up with Reagami. The API looks like a hybrid between Reagent and Replicant. You can build apps with Reagami starting around 5kb gzipped.
  • Edamame got Clojure CLR support thanks to Ambrose Bonnaire-Sergeant.
  • SCI Clojure CLR support is underway. The sci.impl.Reflector code, based on clojure.lang.Reflector was ported to Clojure with the purpose that it would then be easier to translate to Clojure CLR.
  • Cljdoc chose squint for its small bundle sizes and easy migration off of TypeScript towards CLJS
  • Via work on Squint, I found a way to optimize str in ClojureScript (worst case 4x, best case 200x)

Here are updates about the projects/libraries I've worked on in the last two months in detail.

  • babashka: native, fast starting Clojure interpreter for scripting.

    • Bump to clojure 1.12.3
    • #1870: add .addMethod to clojure.lang.MultiFn
    • #1869: add clojure.lang.ITransientCollection for instance? checks
    • #1865: support reify + equals + hashCode on Object
    • Add java.nio.charset.CharsetDecoder, java.nio.charset.CodingErrorAction, java.nio.charset.CharacterCodingException in support of the sfv library
    • Fix nrepl-server completions and lookup op to be compatible with rebel-readline
    • Add clojure.lang.Ref for instance? checks
    • Bump SCI: align unresolved symbol error message with Clojure
    • Use GraalVM 25
    • Bump deps.clj to 1.12.3.1557
    • Change unknown or REPL file path to NO_SOURCE_PATH instead of <expr> since this can cause issues on Windows when checking for absolute file paths
    • #1001: fix encoding issues on Windows in Powershell. Also see this GraalVM issue
    • Fixes around java.security and allowing setting deprecated Cipher suites at runtime. See this commit.
    • Support Windows Git Bash in bash install script
  • SCI: Configurable Clojure/Script interpreter suitable for scripting

    • ClojureCLR support in progress (with Ambrose Bonnaire Sergeant)
  • edamame: configurable EDN and Clojure parser with location metadata and more

    • 1.5.33 (2025-10-28)
    • Add ClojureCLR support (@frenchy64)
  • clj-kondo: static analyzer and linter for Clojure code that sparks joy.

    • Unreleased
    • #2651: resume linting after paren mismatches
    • 2025.10.23
    • #2590: NEW linter: duplicate-key-in-assoc, defaults to :warning
    • #2639: NEW :equals-nil linter to detect (= nil x) or (= x nil) patterns and suggest (nil? x) instead (@conao3)
    • #2633: support new defparkingop macro in core.async alpha
    • #2635: Add :interface flag to :flags set in :java-class-definitions analysis output to distinguish Java interfaces from classes (@hugoduncan)
    • #2636: set global SCI context so hooks can use requiring-resolve etc.
    • #2641: fix linting of def body, no results due to laziness bug
    • #1743: change :not-empty? to only warn on objects that are already seqs
    • Performance optimization for :ns-groups (thanks @severeoverfl0w)
    • Flip :self-requiring-namespace level from :off to :warning
    • 2025.09.22
    • Remove dbg from data_readers.clj since this breaks when using together with CIDER
    • 2025.09.19
    • #1894: support destruct syntax
    • #2624: lint argument types passed to get and get-in (especially to catch swapped arguments to get in threading macros) (@borkdude, @Uthar)
    • #2564: detect calling set with wrong number of arguments
    • #2603: warn on :inline-def with nested deftest
  • squint: CLJS syntax to JS compiler

    • Support passing keyword to mapv
    • Inline identical? calls
    • Clean up emission of paren wrapping
    • Add nat-int?, neg-int?, pos-int? (@eNotchy)
    • Add rand
    • Fix rendering of null and undefined in #html
    • #747: #html escape fix
    • Optimize nested assoc calls, e.g. produced with ->
    • Avoid object spread when object isn't shared (auto-transient)
    • Optimize =, and, and not= even more
    • not= on undefined and false should return true
    • Optimize code produced for assoc, assoc! and get when object argument can be inferred or is type hinted with ^object
    • Optimize str using a macro that compiles into template strings + ?? '' for null/undefined
    • Fix #732: take-last should return nil or empty seq for negative numbers
    • #725: keys and vals should work on js/Map
    • Make map-indexed and keep-indexed lazy
    • Compile time optimization for = when using it on numbers, strings or keyword literals
    • Switch = to a deep-equals implementation that works on primitives, objects, Arrays, Maps and Sets
    • Fix #710: add parse-double
    • Fix #714: assoc-in on nil or undefined
    • Fix #714: dissoc on nil or undefined
    • Basic :import-maps support in squint.edn (just literal replacements, prefixes not supported yet)
  • reagami: A minimal zero-deps Reagent-like for Squint and CLJS

    • First releases
  • clerk: Moldable Live Programming for Clojure

    • Support evaluation of quoted regex
    • Support macros defined in notebooks
    • Bump cherry
  • cljs-str

    • More efficient drop-in replacement for CLJS str. This work was already upstreamed into CLJS, so coming near you in the next CLJS release.
  • unused-deps: Find unused deps in a clojure project

    • Support finding unused git deps
  • scittle: Execute Clojure(Script) directly from browser script tags via SCI

    • Fix SCI regression where interop on keywords like (.catch ...) was accidentally munched
  • Nextjournal Markdown

    • Add :start attribute to ordered lists not starting with 1 (@spicyfalafel)
  • cherry: Experimental ClojureScript to ES6 module compiler

    • Bump squint compiler common component and standard library
    • Bump other deps
    • Optimize =, str, not=
    • Support :macros option + :refer so you can use unqualified macros using compiler state (see macro-state-test)
  • deps.clj: A faithful port of the clojure CLI bash script to Clojure

    • Released several versions catching up with the clojure CLI
  • pod-babashka-go-sqlite3: A babashka pod for interacting with sqlite3

    • Add close-connection
    • Fix #38: add get-connection to cache connection
    • Fix potential memory leak
    • Better handling of parent process death by handling EOF of stdin
    • #25: use musl to compile linux binaries to avoid dependency on glibc
  • quickdoc: Quick and minimal API doc generation for Clojure

    • Fix extra newline in codeblock

Contributions to third party projects:

Other projects

These are (some of the) other projects I'm involved with, but little to no activity happened in the past month.

Click for more details

  • [CLI](https://github.com/babashka/cli): Turn Clojure functions into CLIs!
  • [pod-babashka-fswatcher](https://github.com/babashka/pod-babashka-fswatcher): babashka filewatcher pod
  • [sci.nrepl](https://github.com/babashka/sci.nrepl): nREPL server for SCI projects that run in the browser
  • [babashka.nrepl-client](https://github.com/babashka/nrepl-client)
  • [fs](https://github.com/babashka/fs): File system utility library for Clojure
  • [http-server](https://github.com/babashka/http-server): serve static assets
  • [nbb](https://github.com/babashka/nbb): Scripting in Clojure on Node.js using SCI
  • [sci.configs](https://github.com/babashka/sci.configs): A collection of ready to be used SCI configs.
  • [http-client](https://github.com/babashka/http-client): babashka's http-client
  • [quickblog](https://github.com/borkdude/quickblog): light-weight static blog engine for Clojure and babashka
  • [process](https://github.com/babashka/process): Clojure library for shelling out / spawning sub-processes
  • [html](https://github.com/borkdude/html): Html generation library inspired by squint's html tag
  • [instaparse-bb](https://github.com/babashka/instaparse-bb): Use instaparse from babashka
  • [sql pods](https://github.com/babashka/babashka-sql-pods): babashka pods for SQL databases
  • [rewrite-edn](https://github.com/borkdude/rewrite-edn): Utility lib on top of rewrite-clj
  • [rewrite-clj](https://github.com/clj-commons/rewrite-clj): Rewrite Clojure code and edn
  • [tools-deps-native](https://github.com/babashka/tools-deps-native) and [tools.bbuild](https://github.com/babashka/tools.bbuild): use tools.deps directly from babashka
  • [bbin](https://github.com/babashka/bbin): Install any Babashka script or project with one command
  • [qualify-methods](https://github.com/borkdude/qualify-methods): Initial release of experimental tool to rewrite instance calls to use fully qualified methods (Clojure 1.12 only)
  • [neil](https://github.com/babashka/neil): A CLI to add common aliases and features to deps.edn-based projects.
  • [tools](https://github.com/borkdude/tools): a set of [bbin](https://github.com/babashka/bbin/) installable scripts
  • [babashka.json](https://github.com/babashka/json): babashka JSON library/adapter
  • [speculative](https://github.com/borkdude/speculative)
  • [squint-macros](https://github.com/squint-cljs/squint-macros): a couple of macros that stand in for [applied-science/js-interop](https://github.com/applied-science/js-interop) and [promesa](https://github.com/funcool/promesa) to make CLJS projects compatible with squint and/or cherry.
  • [grasp](https://github.com/borkdude/grasp): Grep Clojure code using clojure.spec regexes
  • [lein-clj-kondo](https://github.com/clj-kondo/lein-clj-kondo): a leiningen plugin for clj-kondo
  • [http-kit](https://github.com/http-kit/http-kit): Simple, high-performance event-driven HTTP client+server for Clojure.
  • [babashka.nrepl](https://github.com/babashka/babashka.nrepl): The nREPL server from babashka as a library, so it can be used from other SCI-based CLIs
  • [jet](https://github.com/borkdude/jet): CLI to transform between JSON, EDN, YAML and Transit using Clojure
  • [lein2deps](https://github.com/borkdude/lein2deps): leiningen to deps.edn converter
  • [cljs-showcase](https://github.com/borkdude/cljs-showcase): Showcase CLJS libs using SCI
  • [babashka.book](https://github.com/babashka/book): Babashka manual
  • [pod-babashka-buddy](https://github.com/babashka/pod-babashka-buddy): A pod around buddy core (Cryptographic API for Clojure).
  • [gh-release-artifact](https://github.com/borkdude/gh-release-artifact): Upload artifacts to GitHub releases idempotently
  • [carve](https://github.com/borkdude/carve): Remove unused Clojure vars
  • [4ever-clojure](https://github.com/oxalorg/4ever-clojure): Pure CLJS version of 4clojure, meant to run forever!
  • [pod-babashka-lanterna](https://github.com/babashka/pod-babashka-lanterna): Interact with clojure-lanterna from babashka
  • [joyride](https://github.com/BetterThanTomorrow/joyride): VSCode CLJS scripting and REPL (via [SCI](https://github.com/babashka/sci))
  • [clj2el](https://borkdude.github.io/clj2el/): transpile Clojure to elisp
  • [deflet](https://github.com/borkdude/deflet): make let-expressions REPL-friendly!
  • [deps.add-lib](https://github.com/borkdude/deps.add-lib): Clojure 1.12's add-lib feature for leiningen and/or other environments without a specific version of the clojure CLI

Permalink

Get Ready for Clojure, GPU, and AI in 2026 with CUDA 13.0

A little anniversary

Did you know that CUDA has been available in Clojure for the last 9 years through ClojureCUDA, and GPU programming through OpenCL for more than 10? I almost forgot about these anniversaries.

Ten years ago most people liked it a lot, starred it on GitHub, patted me on the back, but then concluded that they didn't have an Nvidia card available on their laptops, or, if they had GPUs, that they wouldn't have time to learn to think in massively parallel algorithms, or, if they had the time and will, that there were no GPUs in the servers, so what would they do with their applications even if they created them in Clojure, and so on, and so off :)

But ClojureCUDA and ClojureCL continued living on for these 10 years. I used them in creating Neanderthal, Deep Diamond, and Diamond ML, and they proved themselves as simple and reliable tools. I still had trouble convincing Clojure programmers that they can write GPU programs that run as fast as if they'd written them in C++, but interactively in the Clojure REPL, without C++ hell.

But I'm not easy to shake off! If it's necessary, I'll continue for 10 more years, for I'm convinced there'd be a moment when Clojure programmers are going to say "hmmm, this is something that we can use and be good at!".

CUDA 13 is here!

I've recently released ClojureCUDA 0.25.0, with support for the latest CUDA 13.0.2!

Why not celebrate that by opening the REPL and coding your first Hello World application on the GPU? I promise, it won't be the usual GPU carpet of text; this is ClojureCUDA, and it follows the Clojure philosophy by being simple and interactive!

There's not much sense in wielding a GPU to print out "Hello World". Note that it is also not very useful to work with scalar numbers and call a GPU function to add or multiply two numbers. No. Unless you have many, many numbers to crunch, stay with your trusty CPU. For our purposes, many, many numbers will be two vectors of dimension 3 (hey, it's hello world; imagine it's 3 billion). Also, even when we have many, many numbers, the sheer cost of getting them to the GPU memory would destroy any gains in computation speed, so we must also ensure that we want to perform many complicated operations. Well, we will only do the simple operation of adding these two vectors, and we will pretend that this operation is extra demanding (we're hello-worlders today, we can cheat a bit).

CUDA Hello World

First things first, we require the functions that we'll use.

(require '[uncomplicate.commons.core :refer [release]]
         '[uncomplicate.clojure-cpp :refer [float-pointer pointer-seq]]
         '[uncomplicate.clojurecuda.core
           :refer [compile! context device function grid-1d init launch! mem-alloc-driver
                   mem-alloc-pinned mem-alloc-runtime memcpy-host! module parameters program
                   synchronize! push-context!]])

If it seems to you that this list already looks too large, I agree with you. But, don't be afraid; in Uncomplicate libraries there are so many higher-level helpers that you'll rarely need to touch these. I only use them here because I want to show you that even when we program at the base CUDA level, Clojure can do it interactively and each line can be evaluated by itself, and each intermediate result can be inspected and understood.

I'll use def for poor man variables, and pop-context! so this is evaluated step-by-step. The real code would be much simpler; resources can be managed by with-release and with-context!

First we initialize CUDA and create the CUDA context.

(init)
(def ctx (context (device)))

Unfortunately, CUDA insists on managing contexts in different threads by itself, so we have to let CUDA know that we want to use the context that we've just created. ClojureCUDA has some macros that can help with making this simpler, such as with-context, and in-context, but we'll do this hello world as real ninjas, with basic tools!

(push-context! ctx)

Many language integrations try to let you write everything in that language, both the host code that manages GPU computations and the GPU kernels themselves. So far, they don't fare too well compared to C++ CUDA, even when it comes to the simplicity of such kernel code. ClojureCUDA doesn't do that, for a reason. We are practical people. We understand that we won't be able to compile kernels ourselves and be competitive with Nvidia. So, we write kernels in the C++ that Nvidia uses, since they are typically not that complicated compared to the host code that manages them. We can load these kernels as strings, and I find it most convenient not to sprinkle these strings throughout my .clj source files, but to load them from .cu files, which contain C++ code that text editors recognize.

We load the kernel source.

(def kernel-source (slurp "test/cuda/examples/jnvrtc-vector-add.cu"))

Next, we compile it.

(def prog (compile! (program kernel-source)))

We create the module…

(def m (module prog))

… and load the add function that was defined in the kernel code. CUDA kernels are short functions that run on the GPU.

(def add (function m "add"))

This way we get the best of both worlds: we edit short C++ kernels, and then load them in our Clojure REPL and manage the kernels they define. If we change anything, there's no need for recompilation of everything; just the short .cu file that changed, and this is also interactive, as these tools are available as Clojure functions.

Next, this kernel needs data! We allocate some memory on the GPU. There are a few different memory types that CUDA offers, and even a few CUDA APIs: runtime and driver. Typically, the runtime API is what CUDA uses in C++ code that mixes host and GPU device code, while the driver API is more geared towards tools such as ClojureCUDA. But some CUDA libraries will expect inputs to be from the runtime API, and ClojureCUDA supports both. We can even mix them!

(def gpu-a (mem-alloc-runtime (* Float/BYTES 3)))
(def gpu-b (mem-alloc-driver (* Float/BYTES 3)))

The data needs to be transferred to the GPU memory. There are many functions in CUDA for doing that, one for every combination of different argument types. ClojureCUDA simplifies this a lot with protocols, and usually memcpy-host! will find the right way to transfer the data. We also need a place to keep the results, unless we want to overwrite one of the two input arrays.

(memcpy-host! (float-pointer [1 2 3]) gpu-a)
(memcpy-host! (float-pointer [2 3 4]) gpu-b)
(def gpu-result (mem-alloc-pinned (* Float/BYTES 3)))

That's a lot of work to set everything up! Let's compute it at last!

(launch! add (grid-1d 3) (parameters 3 gpu-a gpu-b gpu-result))

The kernels are launched asynchronously. That means that the launch! function returns as soon as it puts the kernel in the computation queue of the GPU, typically before the kernel has actually been executed; there might be 1000 kernels in the queue before this one. We can explicitly wait until the kernel has completed its work by calling synchronize!.

(synchronize!)

Since we now know that the new data is in the gpu-result array, we will have to move it back to the host if we want to see it.

(pointer-seq (memcpy-host! gpu-result (float-pointer 3)))
=> (3.0 5.0 7.0)

Yeah, I know, a dozen lines of code for a simple addition of two vectors. But hear me out: the management code for more demanding kernels is not much more complicated. So, it's a dozen lines for this simple case, but it will be a dozen lines for some real crunching. Or 23 lines, but not 2000 lines.

Plus, there are so many functions in the libraries for computing vectors, matrices, and tensors, that you'll write your own kernels only occasionally, and you'll still get great speed, once you learn the basics.
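
For example, Neanderthal, which builds on ClojureCUDA, can express the same vector addition as a single function call. A sketch assuming Neanderthal is on the classpath, shown with the native CPU backend for brevity:

(require '[uncomplicate.neanderthal.core :refer [xpy]]
         '[uncomplicate.neanderthal.native :refer [fv]])

;; xpy returns a new vector holding x + y
(xpy (fv 1 2 3) (fv 2 3 4))
;; => a float vector containing 3.0 5.0 7.0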

So, invest some time in 2026, learn the basics of GPU computing, and enjoy the coming AI age! In Clojure!

Oh, I didn't show you the C++ kernel code. It's C++, it must be scary! No, it's not. CUDA kernels are written in a subset of C++, without the scary parts!

extern "C"
__global__ void add(int n, float *a, float *b, float *sum) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        sum[i] = a[i] + b[i];
    }
};

… and, hey, we shouldn't forget to clean up the memory! If we hadn't used def, we could have just relied on with-release to do this for us, but we did everything manually, and now we have to clean up manually.

(release gpu-result)
(release gpu-b)
(release gpu-a)
(release add)
(release m)
(release prog)
(release ctx)

Permalink

Which Reagent API Calls are Most Used?

I've been working on Eucalypt, a frontend library for building small web apps with Squint-ClojureScript. It features a Reagent-compatible-ish API. I'm not implementing the full Reagent API, just a useful subset, and so I was interested in which parts of Reagent are most used.

no r/track? -- HN User

To figure this out I used GitHub as a data source, performing searches for common patterns and counting up totals.
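
As a rough sketch of the approach (the actual scripts are linked at the end of this post), you can query GitHub's code-search API and read off total_count; this sketch assumes babashka and a GITHUB_TOKEN environment variable:

(require '[babashka.http-client :as http]
         '[cheshire.core :as json])

;; Count code-search hits for a query string.
(defn count-matches [query]
  (-> (http/get "https://api.github.com/search/code"
                {:query-params {"q" query}
                 :headers {"Authorization" (str "Bearer " (System/getenv "GITHUB_TOKEN"))}})
      :body
      (json/parse-string true)
      :total_count))

(count-matches "\"reagent.core\" language:Clojure")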

Raw results

--- Reagent API Usage ---
require reagent.core      7656
atom                      6269
render                    3581
require reagent.dom       1664
with-let                  1114
:dangerouslySetInnerHTML  1036
cursor                    670
track                     170
wrap                      94
track!                    65
reaction                  60
unsafe-html               16

reagent-usage-chart.svg

These numbers come with caveats. Some API calls are inherently going to occur more frequently than others in a given codebase, names may be aliased in a way I didn't consider, and these are public code-bases only. I think they give interesting ballpark numbers anyway.

You can find the scripts I used on GitHub: https://github.com/chr15m/report-reagent-api-usage

My hunch that r/atom and r/render are the critical pieces of this API seems to be correct. Other parts of the API are called less often. I think for the Eucalypt API I will focus on cursor and above, and I won't implement the less-used functions. Keeping the API concise adds to the goal of the library being "small" in several ways:

  • Small artifact sizes produced.
  • Small code size of the library itself.
  • Small API.
  • Small set of concerns.

At the moment I am porting the Preact test suite over to Eucalypt. If I can get all of those tests passing I'll probably cut a version one-point-oh.

Permalink

mcp-sdk: an Introduction to creating an MCP service with Clojure


Arne recently launched mcp-sdk, a flexible, pure Clojure SDK for building Model Context Protocol (MCP) servers.

MCP is a set of standards that allows large language models (LLMs), like Anthropic's Claude and Microsoft Copilot, to communicate with external systems like data, APIs, custom-built tools, or specific prompts. Companies like Google, Notion, and Figma use MCP services to let users integrate their apps directly into their own AI workflows. You can visit the official MCP documentation for a more detailed explanation.

If you're new to building MCPs in general, I'm going to walk through the example MCP weather service that we have in our project. I'll point out some notable code, then show how to run the service on your own machine and connect to it from Claude Desktop.

Building our own Weather Tool

We have an example weather service tool that lets us use an LLM like Claude to ask for alerts and the weather forecast for any state or location in the US. Since we need to call the National Weather Service API for our data, we'll be configuring an MCP tool. I'll point out some notable code snippets.

Configuring the weather alerts tool (code)

(state/add-tool
 {:name "weather_alert"
  :title "Weather Alert Tool"
  :description "Given a two letter state code, finds the weather alerts for that state."
  :schema {"type" "object"
           "properties" {"state" {"type" "string"
                                  "description" "Two letter state code"}}
           "required" ["state"]}
  :tool-fn (fn [_req {:keys [state]}]
             (let [alerts (get-alerts state)]
               {:content [{:type :text
                           :text (str/join "\n" (vec alerts))}]
                :isError false}))})

The NWS alert API takes a two-letter state code, which we're passing in as a parameter called state. In the example above, we defined an input schema (called inputSchema in the official docs) with a data type and description.

tool-fn is the function that gets called when our MCP tool is invoked. We can pull the state from the input params that we defined in schema and use it to make a simple Clojure call to get-alerts, which handles the HTTP request and data parsing. We need to make sure this function returns data in the correct schema: here, we're simply returning plain text as TextContent, as defined here.
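
For context, a get-alerts implementation might look roughly like this sketch against the public api.weather.gov endpoint. The real helper lives in the example project, so take this as illustrative, not the project's exact code:

(require '[cheshire.core :as json])

(defn get-alerts
  "Fetch active alert headlines for a two-letter state code."
  [state]
  (let [url (str "https://api.weather.gov/alerts/active?area=" state)
        conn (.openConnection (java.net.URL. url))]
    ;; The NWS API asks clients to identify themselves via User-Agent.
    (.setRequestProperty conn "User-Agent" "clojure-mcp-weather-example")
    (->> (-> (.getInputStream conn) slurp (json/parse-string true))
         :features
         (map #(get-in % [:properties :headline])))))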

To run our service, we need to call co.gaiwan.mcp/run-http!

(mcp/run-http! {:port 3999})

Claude Desktop

You can simply connect to your new running MCP server using Claude connectors; however, if you're using a free Claude plan, you'll have to use a tool like mcp-remote in order to connect to the MCP tool we created. You can install it with npm install -g mcp-remote.

Open Claude Desktop, open your settings, and then click on the "Developer" option. You can access your claude_desktop_config.json file by clicking "Edit Config."

Add this snippet to your claude_desktop_config.json

{
  "mcpServers": {
    "clojure-weather-tool": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:3999/mcp",
        "--allow-http"
      ]
    }
  }
}

Save your file, restart Claude, and you should now see your service when you click on the "Search and Tools" button. Test it out yourself!

(Screenshot: testing out our weather alert tool in Claude.)

Try asking Claude: "What is the current forecast for New York City?"

I hope this was a useful introduction to creating your own MCP server in Clojure! Have any more questions, or need some developers to help build out your MCP tools? Feel free to contact us!

Permalink

A functional programming course in 6 books

I will be hosting a workshop at the Clojure/conj on Wednesday, November 12 in Charlotte, North Carolina. My workshop is about domain modeling in Clojure. You can get 25% off the workshop using code DOMAINJ25OFFCONJ. And you can get 25% off tickets to the talks at the conference using code CLJ18BIRTHDAY. I’d love to see you there!


Here’s a series of books that give an introduction to the majesty and variety of functional programming.

Pure and higher-order functions

Grokking Simplicity, my own book, provides the necessary groundwork for functional programming. It begins way before most books begin, by encouraging the distinction between pure and impure functions. Once that's mastered, higher-order functions are explored. It has gotten high praise. I don't feel weird hyping my own book; I wrote it because no other book was taking up the task. However, be careful with chapter 3. Every time I open it, I dream of rewriting it for a second edition.
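
For readers who haven't met these terms, here's a tiny illustration (mine, not the book's): a pure function depends only on its arguments, an impure one touches the outside world, and a higher-order function takes or returns functions.

;; Pure: same input always gives the same output, no side effects.
(defn total [prices]
  (reduce + prices))

;; Impure: performs I/O on top of the computation.
(defn log-total! [prices]
  (println "total:" (total prices)))

;; Higher-order: map takes the anonymous tax function as an argument.
(map #(* 1.2 %) [10 20 30]) ; => (12.0 24.0 36.0)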

Separating data and behavior

Data-Oriented Programming by Yehonathan Sharvit introduces the idea of programming with data, not objects. Since most people will be familiar with the standard object-oriented methodology of combining data with behavior, this book presents a very different perspective: maybe data can be treated separately from behavior.

It is a decidedly Clojure-centered view of functional programming, but even a die-hard Haskeller will find some similarity to how they program, albeit using types instead of run-time schema checks. I think the idea of writing functions over data is such a foreign idea that it’s important to show how the other side lives. However, if there’s one thing from data-oriented programming that I wish was treated more in the book, it’s that much of the time, we don’t need to model our data quite so much. In other words, we’re not always building entities and defining their schemas. Sometimes, the best part of data-oriented programming is just leaving the data as it is and making a program that is data in, data out.
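
In Clojure terms, the idea looks something like this (an illustrative sketch, not from the book): entities are plain maps, and behavior is ordinary functions over that data.

;; Data is just a map, with no class and no methods attached.
(def book {:title "Data-Oriented Programming" :copies 3})

;; Behavior is a plain function over that data.
(defn checkout [b]
  (update b :copies dec))

(checkout book) ; => {:title "Data-Oriented Programming", :copies 2}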

Functional domain modeling

So, I am writing a book on this right now, but it's not done, and what's been published online will certainly change. It's meant to be a hands-on guide to the intermediate skills of the programmer. Once you know how to write functions and for loops, where do you turn to write good functions and for loops? If you're interested, go read the 200 pages that are already online for free. But I'm rewriting all of it to be more focused on learnability and objectivity. The premise is that software design does not need so much hand waving.

While you wait, you can read Domain Modeling Made Functional by Scott Wlaschin. It's good, if a little hand-wavy, like most software design books. I daresay it's the best book I've read explaining Domain-Driven Design.

Functional algorithms

Thinking Functionally with Haskell is one I enjoy. If you want something rigorous from someone who takes functional programming seriously, this is a great book. It reads a little like a textbook. But at the same time, it doesn’t have the 3-page code listings some industry books have. The algorithms are chosen to gradually show you the range of techniques you’ll need to solve any problem using functional, as opposed to imperative, style.

Types and category theory

I’ve read several books on these two areas, but I can’t say I’ve found a book that really captures the magic enough for me to recommend it. However, I haven’t read every book. Functional Programming in Scala, by Paul Chiusano and Runar Bjarnason, is extremely popular. It’s so popular, it’s referred to by its color (the red book). Still, I doubt it’s giving me what I’m looking for. I want a book that expresses the beauty and elegance of types and Haskell-style category theory beyond “preventing errors” and reinventing imperative programming.

Linguistic abstraction

This topic isn't taught enough today. We tend to talk about simple indirections like runtime dispatch to solve problems. Runtime dispatch, however, is just a fancy conditional. Sometimes your indirection needs a totally new semantics. My two favorite books for these are Paradigms of Artificial Intelligence Programming (PAIP) by Peter Norvig and Software Design for Flexibility by Chris Hanson and Gerald Jay Sussman.

PAIP is a classic from the time when AI meant “good programming”. The examples are in Common Lisp and it shows how to build a logic programming system (similar to Prolog), rules engines, and equation simplifiers in a data-driven style.
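
To make "data-driven style" concrete, here's a toy equation simplifier in that spirit (my Clojure sketch, not the book's Common Lisp): the rewrite rules are plain data, and a small interpreter applies them.

;; Rules are data: [pattern replacement], with ?x as a wildcard.
(def rules
  '[[(+ ?x 0) ?x]
    [(* ?x 1) ?x]
    [(* ?x 0) 0]])

(defn match
  "Return a {?x ...} binding map if expr matches pat, else nil."
  [pat expr]
  (cond
    (= pat '?x) {'?x expr}
    (and (seq? pat) (seq? expr) (= (count pat) (count expr)))
    (reduce (fn [binds [p e]]
              (if-let [b (match p e)]
                (merge binds b)
                (reduced nil)))
            {} (map vector pat expr))
    (= pat expr) {}
    :else nil))

(defn simplify [expr]
  (let [expr (if (seq? expr) (map simplify expr) expr)]
    (or (some (fn [[pat rep]]
                (when-let [binds (match pat expr)]
                  (get binds rep rep)))
              rules)
        expr)))

(simplify '(+ (* x 1) 0)) ; => x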

Software Design for Flexibility is written using Scheme and shows other techniques for developing powerful abstractions for conquering complex problems. The premise is that your semantics need to have more degrees of freedom than the problem you're solving. Both books show techniques we rarely reach for, even when conditionals and recursion don't cut it. If I had to pick one, it would be PAIP.

Conclusion

If someone read these books and tried to apply them, they’d be well on their way to mastering functional programming. I know there are a lot of books I’m not familiar with, so please don’t take this as an exhaustive list. It’s just a list of good books that fill niches in the FP journey. I’d love to hear your takes on required reading. Just hit reply and send me a message. Please share what you like about your recommendations along with the titles.

Permalink

Melpastats badges

Cersei Lannister: I shall wear this as a badge of honor.
Robert Baratheon: Wear it in silence or I’ll honor you again.
– George R. R. Martin

TL;DR

Have a project called fizzbuzz that you deploy on MELPA? Add the following snippet to your README.md in order to add a download count badge:

[![downloads](https://melpastats-2761cf.gitlab.io/badges/fizzbuzz.svg)](http://melpa.org/#/fizzbuzz)

or if org-mode is your poison:

[[https://melpa.org/#/fizzbuzz][https://melpastats-2761cf.gitlab.io/badges/fizzbuzz-badge.svg]]

It looks like this (taken from my own project plantuml-mode):

(Badge: the live download-count badge for plantuml-mode.)

The quest

I was recently busy with some cleanup and improvements of my open source projects, which resulted in deflate, a pure elisp implementation of the Deflate compression algorithm. A few days after the package was published on MELPA, I was curious to check whether it was being downloaded (mostly as a sidecar to plantuml-mode, which is currently the only thing in the world that depends on it).

Fortunately enough, MELPA publishes your download counts and – oh boy! The tears in my eyes when I did see some downloads action there! I needed to boast about it and showcase the download count prominently in the project README.org. I needed badges.

Eat your own parenthesis

I’m sure there are a million ways to make a badge / shield happen. Heck, maybe someone else already did it and I haven’t found it (spoiler: I haven’t searched 😱). Anyway, I was craving some custom coding, and so it was decided: I’ll make a new project called melpastats. What it offers:

  • 🖼️ pregenerated .svg badges with the download count
  • 📦 assets are hosted on Gitlab pages
  • ⏲️ refresh the download counts every 6 hours
  • 💄 different background colors depending on the total downloads count
  • 📈 full history of the download count updates (maybe, in the far future)

I went for a simple babashka script, which uses svg-clj to render the actual image from Hiccup syntax. For the uninitiated, here’s what it looks like:

[:svg {:xmlns "http://www.w3.org/2000/svg"
       :width full-width
       :height height
       :role "img"
       :aria-label (str "MELPA: " package-name)}
 [:title (str "MELPA: " package-name)]
 ;; ...
 ]

You can see the full hiccup stuff in the sources of course. The overall process is intentionally dumb as a rock, namely:

  1. gitlab CI: download and parse the packages list and download counts JSON files
  2. babashka: for each package, render the actual SVG file using its download count (sketched below)
  3. ensure the folder with the SVGs is published in Gitlab pages
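
Step 2 might look roughly like this. It's a hedged sketch: the file names and the render-badge stand-in are illustrative, not the project's actual code.

(require '[cheshire.core :as json]
         '[clojure.java.io :as io])

;; Stand-in for the svg-clj rendering shown above.
(defn render-badge [package-name downloads]
  (str "<svg xmlns=\"http://www.w3.org/2000/svg\">"
       "<title>MELPA: " package-name " (" downloads ")</title>"
       "</svg>"))

;; Assumes download_counts.json maps package names to counts,
;; and that the public output directory already exists.
(let [counts (json/parse-string (slurp "download_counts.json"))]
  (doseq [[package downloads] counts]
    (spit (io/file "public" (str package ".svg"))
          (render-badge package downloads))))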

It took just a few hours of non-continuous work to make that happen. It’s been live for a month or two now, and I’m confident enough in it that it’s time for this post to happen. Hope it’s of use to someone out there.

Bye bye 🇬🇧 / tot ziens 🇳🇱 / a si biri (alas, no emoji for Sardinian :sadface:)

Permalink

Clojure Runs ONNX AI Models Now - Join the AI fun!

Hello, Clojurians! I haven't written here in a long time. Was I tired? Is anybody reading blogs anymore? Who knows. But that was not the main reason.

I've been working on several Clojure projects sponsored by the Clojurists Together Foundation. I did a ton of things, but after all this programming I was kinda tired, and kept dragging my feet when it came to telling people about the work done! That's not very smart, but you know how it goes… :) But then, if we don't tell people about the awesome software that we have, nobody is going to use it, so I finally had to stop kicking this down the road, sit down, and write the first post. It's been long overdue, so expect more posts soon!

ONNX Runtime in one line of Clojure

The most recent thing I'm currently working on started its life as Clojure ML (again, superthanks to Clojurists Together for sponsoring this). I proposed to create a human-friendly Clojure API for AI/DL/ML models, and to back it with a first implementation, in this case based on ONNX Runtime. Of course, it should all be integrated into existing Clojure libraries, and follow the Clojure way of doing stuff as much as possible!

The idea is to take an existing, pre-trained ML model previously exported to the ONNX format from whatever technology the authors chose (which in today's world is typically Python and PyTorch), and put it into production in Clojure on the JVM. It should be seamless and in-process, without any clunky interoperability, copying, translation, etc. Of course, our Clojure numerical libraries fully support GPU computing, so it goes without saying that we want that, too! Just to be clear, we do not use nor need any Python or Python interop for this; we use ONNX Runtime's underlying C library.

Nice idea, but what parts of this well-intentioned story can we evaluate in our REPLs right now? At least some promising demo? Are we on the right track? To access that AI goodness, we surely have to do a sophisticated dance? Are the steps hard to learn? Do we need to watch out for slippery floors? Is it accessible to mere mortals?

Here's the gist:

(onnx "data/mnist-12.onnx")

"Wait, what?", you'll say. One function? One tini, tiny, function, with one laughingly trivial argument? Is that an API? What does such trivial API do? "Now you confused me!", you'll scratch your head. It's just a stick.

I hope I've also intrigued you, so please keep reading to see it in action (this post is actually generated from a live REPL session, so the example is fully executable, not just interesting bits on the table, and a ton of complex boilerplate under a Persian rug).

Hello World, the MNIST image recognition model

For this recipe, you'll need the following ingredients: Deep Diamond tensors (one cup), Deep Diamond network (one slice), one Neanderthal transfer! function for moving data around for demo purposes, and that's it! Oh, yes, don't forget the new onnx function. We load the native namespace, and the right Deep Diamond engine is set up for our system (yes, even on Mac OS, thanks to Clojurists Together!).

(require '[uncomplicate.neanderthal.core :refer [transfer! iamax]]
         '[uncomplicate.diamond
           [tensor :refer [tensor desc]]
           [dnn :refer [network]]
           [onnxrt :refer [onnx]]]
         '[uncomplicate.diamond.native])

The ONNX model

We evaluate the onnx function, and it loads the model.

(def mnist-onnx (onnx "../../data/mnist-12.onnx"))
#'user/mnist-onnx

Sure, that's easy, but how is that useful? Well, the result is a function. This function has just been set up with ONNX internals, so now it can create Deep Diamond network layers and fit in with the rest of the Tensor-y stuff that DD already provides.

The ONNX Runtime model revolves around environment, session, input and output tensors, type info, and a lot of other stuff and brittle ceremony. Sure, sometimes you need to reach these internals, and diamond-onnxrt provides a clojurized internals API even for that. However, it can sing the main song and set all the right arguments at the right places for you. The onnx function even supports an options map, where you can say what you'd like, and it will take care to configure ONNX to do the right thing, but that is a story for another article.

The rest is the usual Deep Diamond stuff, which is simple as beans!

The MNIST dataset specifies images of hand-written digits, in just one grayscale channel, each \(28\times28\) pixels: a challenging task for the US Postal Service and the technology of 1989, but hello-world level stuff for today's accelerated libraries (still, keep in mind that if you tried to code even this easy example without such libraries, you'd be surprised how slow that can be!).

We create a tensor descriptor for such input (this step can be left out, but I'm being pedantic to accommodate beginners):

(def input-desc (desc [1 1 28 28] :float :nchw))
#'user/input-desc

Next, we create a reusable abstract network blueprint, which can then create concrete networks tailored for training, or optimized for inference, that is, classifying MNIST images. Normally, we would have to train these networks, or load the parameters from somewhere, but in this case the network consists only of the onnx model, which has already been trained and already knows all the right weights, so no training is needed (nor available with ONNX Runtime yet; its main job is inference in production).

(def mnist (network input-desc [mnist-onnx]))
#'user/mnist

Note that all these things so far look and behave just as ordinary Clojure objects. You can use them even outside this specific structure. Full flexibility that I hope will spark your creativity.

We'll also need a place for the actual image that we'd like to classify. This particular network that I downloaded from ONNX Runtime examples specifies exactly one image at input, to classify one at a time. Typically, if we have many images, it's better to compute them in batches, but it's just a hello-world, after all, we won't be too demanding.

(def input-tz (tensor input-desc))
#'user/input-tz

A blueprint (mnist in this case) is a function that can create networks optimized for inference with concrete tensors, adequate internal tensors, and parameters. The following line is the moment when the network is actually created from the abstract descriptors contained in its blueprint, to the actual engines, operation primitives, and tensors in memory.

(def classify! (mnist input-tz))
#'user/classify!

True to the Clojure philosophy, mnist is a function which, given the specification for the desired input, (mnist input-tz), produces classify!, which is a function, too, but for actual inference! It might sound cumbersome when it's written out, but the code shows its elegance. No need for complex APIs. Each thing does exactly one thing, and does it in the simplest way, by just evaluating with one or two parameters!

Now we've got a function that classifies images

This is how you would typically use it:

Step one: classify! is now a typical Clojure function! Evaluate it:

(classify!)
{:shape [1 10], :data-type :float, :layout [10 1]} (-0.04485602676868439 0.007791661191731691 0.06810081750154495 0.02999374084174633 -0.1264096349477768 0.14021874964237213 -0.055284902453422546 -0.04938381537795067 0.08432205021381378 -0.05454041436314583)

The result is a ten-element tensor; each element is a score for the category at its index. So we should just find which element contains the highest value, and that's our category, which in the MNIST example is, very conveniently, a digit 0 to 9 equal to that index.
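
(Just for intuition, and as my addition rather than part of the original session, here is that "find the index of the largest element" step in plain Clojure, with illustrative scores:)

;; Index of the maximum value in a plain Clojure sequence.
(let [scores [-0.04 0.008 0.068 0.03 -0.13 0.14 -0.055 -0.049 0.084 -0.055]]
  (first (apply max-key second (map-indexed vector scores))))
;; => 5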

However, you can see that the current values are just random small numbers. This is because we never loaded any image data to the input tensor! It just classified random noise as not very likely to be an image of any digit.

We need step zero: place the image data in the network's input somehow. This could be done in many different ways (for example, by memory-mapping the image data on disk), but we'll keep it simple and just transfer it naively from an inline Clojure sequence (this is a hello-world :).

The following sequence is copied from the actual MNIST data (just the first image); the transfer! call below scales it from the 0-255 range to 0-1.

(transfer! (map #(float (/ % 255)) [0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 84.0 185.0 159.0 151.0 60.0 36.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 222.0 254.0 254.0 254.0 254.0 241.0 198.0 198.0 198.0 198.0 198.0 198.0 198.0 198.0 170.0 52.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 67.0 114.0 72.0 114.0 163.0 227.0 254.0 225.0 254.0 254.0 254.0 250.0 229.0 254.0 254.0 140.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 17.0 66.0 14.0 67.0 67.0 67.0 59.0 21.0 236.0 254.0 106.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 83.0 253.0 209.0 18.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 22.0 233.0 255.0 83.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 129.0 254.0 238.0 44.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 59.0 249.0 254.0 62.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 133.0 254.0 187.0 5.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 9.0 205.0 248.0 58.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 126.0 254.0 182.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 75.0 251.0 240.0 57.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 19.0 221.0 254.0 166.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.0 203.0 254.0 219.0 35.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 38.0 254.0 254.0 77.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 31.0 224.0 254.0 115.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 133.0 254.0 254.0 52.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 61.0 242.0 254.0 254.0 52.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 121.0 254.0 254.0 219.0 40.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 121.0 254.0 207.0 18.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0])
           input-tz)
{:shape [1 1 28 28], :data-type :float, :layout [784 784 28 1]} (0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0)
(classify!)
{:shape [1 10], :data-type :float, :layout [10 1]} (-1.2567189931869507 0.6275832653045654 8.642718315124512 9.428943634033203 -13.740066528320312 -6.045698642730713 -23.486745834350586 28.3399658203125 -6.7914958000183105 3.941998243331909)

Now we see some better looking results, but are we the ones who need to look at a bunch of numbers and compare them?

No, the machine should do that. Luckily, Neanderthal has just the right function for this!

(iamax (classify!))
7

And this is the kind of answer that we can show our clients! What's on this image? Easy, it's 7!

Can you tell me the main point of this, in one paragraph?

Yes. Clojure programmers typically write functions. Functions are things that take something at the input, compute stuff internally, and return an output, which is hopefully useful downstream. The function transforms the input into the output according to the logic that we programmers wrote in code, following some algorithm that we designed for the purpose. Now, sometimes the problem is so convoluted that we don't have the slightest idea how to write that transformation in code, but what we (or someone else) do have is lots of data, and in many such cases we can train a general machinery (neural networks, for example) to find a good enough transformation. Sometimes someone else has already done the hard part by training the network, exporting it to a standard format (ONNX), and giving it to you! Now, you can load it in Clojure and use it as a Clojure function. You don't even need to know how it works internally, but it does the thing that you need: it transforms the input tensors that you have into just the right output tensors. What you do with these outputs is up to you :)

Who is this for?

Do you need to be an AI researcher to find this useful? Absolutely not! This can appeal to any Clojure engineer.

AI researchers try to find novel AI models, or to push their model by 0.1% on an artificial benchmark. Recently, they don't necessarily even do that; some of them found a way to chase funding at crazy valuations, and catch it. Some of them don't necessarily write code but work with mathematical models, trying to figure out a way to do some abstract thing. Or, if they are PhD students, they spend endless nights fiddling with Python and PyTorch trying to solve this or that task assigned by their laboratory, or they just try to catch a bit of sleep while a GPU cluster crunches some tiny step in an endless training cycle.

There's nothing wrong with that, but if you're a Clojure programmer, you probably don't have the time, opportunity, experience, or even interest to work on that stuff. But even if you don't want to (or can't) understand AI internals, you can still be very creative with the applications. There are now many, many published ML models that work; many of them are even exported to ONNX, and quite usable. You don't need to invent a new OpenAI competitor; there are many more mundane problems that can be solved by taking an already existing model and applying it in a niche context, in a domain that you know well. You don't even need to understand exactly what the model does or how; you can treat it as a black-box function that transforms inputs into outputs, and that function just needs a bit more care to work than a regular Clojure four-liner that you'd normally write and be proud of.

Although (sadly) Clojure has not found its way into the big-guns AI arena, Clojure is a very capable language, and Clojure programmers are very knowledgeable people when it comes to integrating stuff into real-world applications! So, here it is: now you don't have to make compromises; you can go to Hugging Face, or some other AI-related community, find ONNX models that other people have already prepared, and join the AI fun, directly from Clojure.

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.