Clojure Is Awesome!!! [PART 6]

(ns observer
  (:require [clojure.spec.alpha :as s]))

(s/def ::topic keyword?)
(s/def ::message any?)
(s/def ::callback fn?)

(defprotocol Observable
  "Protocol defining Observable behaviors"
  (subscribe [this topic callback] "Subscribes a callback function to a specific topic")
  (unsubscribe [this topic callback] "Removes a callback subscription from a topic")
  (notify [this topic message] "Notifies all subscribers of a topic with a message"))

(defrecord EventBus [subscribers]
  Observable
  (subscribe [this topic callback]
    {:pre [(s/valid? ::topic topic)
           (s/valid? ::callback callback)]}
    (update-in this [:subscribers topic] (fnil conj #{}) callback))

  (unsubscribe [this topic callback]
    {:pre [(s/valid? ::topic topic)
           (s/valid? ::callback callback)]}
    (update-in this [:subscribers topic] disj callback))

  (notify [this topic message]
    {:pre [(s/valid? ::topic topic)
           (s/valid? ::message message)]}
    (doseq [callback (get-in this [:subscribers topic])]
      (callback message))
    this))

(defn create-event-bus
  "Creates a new event bus instance"
  []
  (->EventBus {}))

(defn create-stateful-subscriber
  "Creates a subscriber that maintains state between notifications"
  [initial-state update-fn]
  (let [state (atom initial-state)]
    (fn [message]
      (swap! state update-fn message))))

(defn create-logging-subscriber
  "Creates a subscriber that logs messages with timestamps"
  [topic-name]
  (fn [message]
    (println (format "[%s][%s] Received: %s"
                     (java.time.LocalDateTime/now)
                     topic-name
                     message))))

(comment
  (def event-bus (create-event-bus))

  (def order-logger (create-logging-subscriber "Orders"))

  (def bus-with-logger 
    (subscribe event-bus :orders order-logger))

  (def order-counter
    (create-stateful-subscriber 0 (fn [state _] (inc state))))

  (def bus-with-counter
    (subscribe bus-with-logger :orders order-counter))

  (notify bus-with-counter :orders {:id 1 :total 100.0})
  (notify bus-with-counter :orders {:id 2 :total 200.0})

  (def final-bus
    (unsubscribe bus-with-counter :orders order-logger))
)
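
Since subscribe and unsubscribe return a new EventBus value rather than mutating the existing one, a practical application would typically keep the current bus in an atom. A small usage sketch (not part of the original code):

(comment
  (def bus (atom (create-event-bus)))

  ;; Subscribing swaps in a new immutable bus value.
  (swap! bus subscribe :orders (create-logging-subscriber "Orders"))

  ;; Publishing just reads the current value.
  (notify @bus :orders {:id 3 :total 50.0}))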

(comment
  (require '[clojure.test :refer [deftest testing is]])

  (deftest observer-pattern-test
    (testing "Basic subscription and notification"
      (let [received (atom nil)
            callback #(reset! received %)
            bus (-> (create-event-bus)
                   (subscribe :test callback))]
        (notify bus :test "hello")
        (is (= @received "hello"))))

    (testing "Multiple subscribers"
      (let [results (atom [])
            callback-1 #(swap! results conj [:cb1 %])
            callback-2 #(swap! results conj [:cb2 %])
            bus (-> (create-event-bus)
                   (subscribe :test callback-1)
                   (subscribe :test callback-2))]
        (notify bus :test "hello")
        (is (= @results [[:cb1 "hello"] [:cb2 "hello"]]))))

    (testing "Unsubscribe"
      (let [received (atom nil)
            callback #(reset! received %)
            bus (-> (create-event-bus)
                   (subscribe :test callback)
                   (unsubscribe :test callback))]
        (notify bus :test "hello")
        (is (nil? @received))))

    (testing "Stateful subscriber"
      (let [counter (create-stateful-subscriber 0 (fn [state _] (inc state)))
            bus (subscribe (create-event-bus) :test counter)]
        (notify bus :test "event1")
        (notify bus :test "event2")
        (is (= @(#'observer/state counter) 2))))))

Permalink

50+ Remote Job Websites You Need to Know About đŸŒđŸ’Œ

I've compiled an extensive list of job boards specifically focused on remote positions. Whether you're a developer, designer, or working in other tech roles, these platforms will help you find your next remote gig.

Why Remote Work? đŸ€”

Remote work has become more than just a trend—it's the future of work. It offers:

  • 🏠 Better work-life balance
  • 🌎 Freedom to work from anywhere
  • 💰 Access to global opportunities
  • ⏰ Flexible schedules
  • đŸ’Ș Increased productivity

Job boards

  1. Real Work From Anywhere - A curated platform dedicated to 100% remote positions. They verify all job listings to ensure they are truly location-independent, making it easier to find genuine remote opportunities without geographic restrictions.

  2. 4 Day Week - Specializes in tech jobs with companies offering compressed work weeks. Perfect for developers seeking better work-life balance with companies that prioritize employee wellbeing.

  3. Authentic Jobs - A long-standing job board focused on design, development, and creative tech roles. Known for high-quality listings from established companies and startups.

  4. Built In - A comprehensive tech career platform featuring jobs from innovative companies. Offers detailed company profiles, salary information, and culture insights along with job listings.

  5. ClojureJobboard.com - The go-to platform for Clojure developers. Features specialized roles for functional programming enthusiasts with remote options.

  6. Crypto Jobs - Exclusively focused on blockchain and cryptocurrency positions. Perfect for developers interested in Web3 technologies and decentralized systems.

  7. Crypto Jobs List - The largest Web3 job board with positions in blockchain, DeFi, and NFT projects. Features both technical and non-technical roles in the crypto space.

  8. Cryptocurrency Jobs - High-quality crypto job board with positions from established blockchain companies. Includes detailed salary ranges and comprehensive job descriptions.

  9. CyberJobHunt.in - Specialized in cybersecurity positions across all experience levels. Features roles in security engineering, penetration testing, and security analysis.

  10. Daily Remote - A modern job board with an extensive collection of remote positions. Offers powerful filtering tools and daily updates of new opportunities.

  11. Diversify Tech - Focuses on inclusive tech opportunities. Companies posting here actively support diversity and inclusion initiatives, making it ideal for underrepresented groups in tech.

  12. Dribbble Jobs - Premier platform for design professionals. Features UI/UX, graphic design, and creative tech roles from top companies.

  13. Drupal Jobs - The official job board for Drupal developers. Perfect for finding roles in agencies and organizations using Drupal.

  14. freelancermap - Popular in German-speaking regions but open globally. Focuses on IT consulting and freelance projects with good rates.

  15. Golangprojects - Specialized in Go programming language positions. Ideal for Golang developers seeking remote opportunities.

  16. Guru - A platform offering a wide range of freelance and contract jobs across various categories, including tech, design, and more.

  17. HackerX - A platform that connects developers with top companies. Offers a unique approach to job searching with a focus on coding skills.

  18. Hasjob – Location filter -> "Anywhere/Remote". A job board featuring a variety of tech and non-tech roles with remote options.

  19. HigherEdJobs - Specializes in higher education jobs, including remote opportunities in academia and administration.

  20. HN hiring – Filter REMOTE. A job board featuring tech and startup jobs, including remote positions.

  21. JOBBOX.io – Filter -> Remote only. A platform offering a curated selection of tech jobs, including remote opportunities.

  22. JobsCollider - Tens of thousands of remote jobs from over 10,000 companies and startups worldwide.

  23. Jobspresso - High-quality remote positions that are open and legitimate.

  24. JustRemote - A remote job board featuring a wide range of positions, including tech, marketing, and customer support.

  25. Larajobs – The artisan employment connection. Specializes in Laravel and PHP jobs, including remote opportunities.

  26. No Fluff Jobs – Filter -> "remote". A job board featuring tech jobs without the fluff, including remote positions.

  27. NODESK - A remote job board featuring a curated selection of tech and startup jobs.

  28. Power to Fly - A platform focused on women in tech, offering remote job opportunities and resources for career development.

  29. Remote AI Jobs - Remote AI jobs in Machine Learning, Engineering, Data Science, Research, etc.

  30. Remote Backend Jobs - Find exclusively remote backend jobs aggregated from the top 22 job boards in the world.

  31. Remote Frontend Jobs - Find exclusively remote frontend jobs aggregated from the top 22 job boards in the world.

  32. PyJobs.com - Jobs for Python developers, including remote opportunities.

  33. Remote Game Jobs - Find remote work and talent in the game industry.

  34. remote-es/remotes - Repository listing companies which offer full-time remote jobs with Spanish contracts.

  35. thatmlopsguy/remote-pt - Repository listing companies which offer full-time remote jobs with Portuguese contracts.

  36. remote-jobs - A list of semi to fully remote-friendly companies in tech.

  37. Remotees - A remote job board featuring a wide range of positions, including tech, marketing, and customer support.

  38. Remote.co Jobs - A platform offering a curated selection of remote jobs, including tech, marketing, and more.

  39. RemoteJobs.lat - Remote jobs for LATAM people.

  40. Remotive Jobs - A remote job board featuring a wide range of positions, including tech, marketing, and customer support.

  41. Remote People - A platform offering a curated selection of remote jobs, including tech, marketing, and more.

  42. Remote Works - Remote jobs in software development.

  43. Ruby On Remote - All ruby remote jobs in one place.

  44. Skip the Drive - A job search platform featuring remote and flexible job opportunities.

  45. Slasify - Remote tech, art/design and marketing opportunities from Asia, global payroll service included.

  46. Stream Native Jobs - Scroll down to Join Us.

  47. SwissDev Jobs - Filter -> "Remote / Work from home".

  48. UI & UX Designer Jobs - Remote jobs for UI, UX & UXR Designers.

  49. Upwork - Find remote jobs in any category.

  50. Virtual Vocations - A job search platform featuring remote and flexible job opportunities.

  51. Vue.js Jobs - Find Vue.js jobs all around the world. Click on the "Remote" tab.

  52. Web3Jobs - Remote Web3 Jobs.

  53. Wellfound - Startup Jobs. Search by going to Job Type, and selecting "Remote OK".

  54. We Work Remotely - A remote job board featuring a wide range of positions, including tech, marketing, and customer support.

  55. Workana - Freelance job board in Spanish and Portuguese.

  56. Working Nomads - A remote job board featuring a wide range of positions, including tech, marketing, and customer support.

  57. zuhausejobs.com - Remote Jobs in German-speaking countries (Germany/Austria/Switzerland).

  58. Dataaxy - Job board and reverse job board specialized in Data and AI in North America.

  59. Freel - Freelancers job board in Canada.

  60. DevOpsJobs - DevOps, SRE, Cloud and Platform engineering jobs.

Pro Tips for Remote Job Hunting 🎯

  1. Tailor Your Profile: Each platform has its unique features. Make sure your profile is complete and optimized for each site.

  2. Set Up Alerts: Most of these platforms offer job alerts. Use them to be among the first applicants.

  3. Niche Platforms: Consider using niche job boards specific to your skills (e.g., PyJobs for Python developers).

  4. Global Opportunities: Don't limit yourself geographically unless necessary. Many companies hire globally!

  5. Check Regularly: Remote positions often fill quickly. Make job searching a regular part of your routine.

Ready to Start? 🚀

This list includes platforms for various specialties:

  • General remote work
  • Tech-specific roles
  • Freelance opportunities
  • Regional job boards
  • Cryptocurrency and Web3
  • AI and Machine Learning
  • Design and UX

Pick the platforms that best match your skills and career goals. Happy job hunting! 🌟

Remember to follow me for more articles about remote work, tech careers, and professional development!

Permalink

Stop Making JavaScript Do Everything - Vade Studio's Secret to Fast Web Pages

Have you ever stared at your terminal, watching yet another framework installation crawl by and wondered how you got here? You know, that moment when you realize you're installing 27,482 dependencies just to render "Hello World" on a webpage?

"This framework will solve all your problems!" they said.

"It&aposs the future of web development!" they promised.

"Just npm install happiness-and-world-peace..." they insisted.

And then reality hits. Your beautiful, simple idea starts drowning in a sea of build configurations, render strategies and optimization techniques.

Your Lighthouse scores look like a bad grade school report card and somehow your "modern" web app takes longer to load than the entire Space Jam website did in 1996.

That's exactly where I found myself while building Vade Studio.

We have a simple yet ambitious goal - empower anyone with an idea to build modern web applications.

You know, the kind that's blazing fast, SEO friendly and plays nice with AI answer engines like Perplexity. Nothing too crazy, right?

Wrong. Dead wrong.

In an era where content discovery increasingly happens through AI-powered search and answer engines, having a slow, poorly optimized website isn't just inconvenient - it's practically invisible.

And let me tell you, watching our performance scores hover in the 30s felt like being stuck in digital purgatory.

This is the story of how we went from those embarrassing numbers to consistent 90+ performance scores. It's a tale of multiple iterations, unexpected discoveries and learning that sometimes the "modern" way isn't always the right way.

The Foundation: Understanding Vade Studio's Architecture

So how do you even begin tackling this problem? After several sleepless nights and way too much coffee, we landed on a deceptively simple idea: what if we stripped away all the framework magic and represented UI exactly as a pure data structure?

Enter our UI tree - probably the most boring-looking piece of code that ended up solving our biggest headaches:

export interface ITreeNode {
  componentName?: string       // What component to render
  sourceName?: string         // Where it came from
  hidden?: boolean            // Should we show it?
  isSourceNode?: boolean      // Is it a source component?
  id?: string                // Unique identifier
  props?: Record<string | number | symbol, any>  // Component properties
  children?: ITreeNode[]     // Child elements
}

Look at it. It's just... JSON.

No fancy decorators, no complex inheritance hierarchies, no "innovative" design patterns.

Just a simple tree structure that represents your entire UI.
Want a button? It's a node.
A layout container? Another node.
That complex data visualization? You guessed it - just another node in the tree.

But it gets interesting. By representing our entire UI as a plain JSON tree, we accidentally stumbled upon something powerful. This simple structure meant we could:

  • Serialize and deserialize the entire UI state without any framework magic
  • Transform and optimize the tree structure before rendering
  ‱ Generate static HTML that's practically weightless
  • Keep the runtime JavaScript minimal and focused

Remember those framework promises about "just write components and we'll handle the rest"? Well, turns out sometimes handling less is actually... more.
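
To make that concrete, here is a rough sketch in Clojure of how such a plain tree can be walked and turned into static, hiccup-style HTML data on the server. The registry and helper names below are invented for illustration; this is not Vade Studio's actual code:

(def components
  ;; Hypothetical registry: component name -> render function returning hiccup data.
  {"Stack"  (fn [_props children] (into [:div {:class "stack"}] children))
   "Button" (fn [props children] (into [:button {:id (:id props)}] children))})

(defn tree->hiccup
  "Recursively turns a UI-tree node (a plain map) into hiccup data."
  [{:keys [componentName props children hidden]}]
  (when (and componentName (not hidden))
    (let [render (get components componentName)]
      (render props (keep tree->hiccup children)))))

(tree->hiccup
 {:componentName "Stack"
  :children [{:componentName "Button"
              :props {:id "buy"}
              :children [{:componentName "Label" :hidden true}]}]})
;; => [:div {:class "stack"} [:button {:id "buy"}]]

Because the whole thing is a pure function of the tree, the HTML can be produced ahead of time and served as-is, which is what makes the static output so cheap.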

This brings us to the real question: how do you turn this glorified JSON into blazing-fast web pages? That's where things get really interesting...

Client-Side Rendering

Remember how I mentioned our performance scores were in the 30s? Let me tell you how we got there with what seemed like a "perfectly reasonable" first implementation.

Our initial approach was straight out of the modern web development playbook. We'd fetch the UI tree from the server and let the client handle everything. The code looked clean and simple:

function renderUITree(node: ITreeNode) {
  if (!node.componentName) return null
  
  const Component = COMPONENTS[node.componentName]
  const children = node.children?.map(child => renderUITree(child))
  
  return <Component {...node.props}>{children}</Component>
}

Look at that beautiful recursive function! Three whole lines of actual logic! What could possibly go wrong?

Everything. Everything went wrong.

Here's what actually happened when someone visited a page:

  1. Browser requests page
  2. Server sends minimal HTML with a JavaScript bundle
  3. JavaScript loads and starts executing
  4. Browser fetches the UI tree
  5. JavaScript recursively walks through the tree, creating components
  6. Finally, actual content appears on screen

By the time your content showed up, you could've made a cup of coffee, checked your email and contemplated the meaning of life. Our Lighthouse scores were crying for help - hovering below 30, which in web performance terms is like showing up to a Formula 1 race in a horse-drawn carriage.

The worst part? This wasn't even a complex application. We were just rendering some static content! You know something's wrong when your static page takes longer to load than a full-blown web application.

But hey, at least we learned an important lesson:

Just because an approach is common in modern web development doesn't mean it's the right one. Sometimes you need to step back and question whether you're really solving the right problem.

Want to know how bad it really was? Here's what our pagespeed analysis looked like...

[Image: Performance with only client-side rendering]

The Server-Side Rendering + Hydration

After our client-side rendering fiasco, we did what any self-respecting team would do: we moved the rendering to the server.

But not just any server - we went full Clojure on the JVM. You know, because if you're going to fix something, you might as well flex some functional programming muscles while doing it.

The plan was beautiful in its simplicity:

(defn render-tree [node]
  (when-let [component-name (:componentName node)]
    (let [component (get-component component-name)
          children (map render-tree (:children node))]
      (component (merge (:props node)
                        {:children children})))))

On the server side, our Clojure code would pre-render the entire UI tree into HTML.

Then, on the client side, we'd "hydrate" the page - basically telling React
"Hey, this HTML is already here, just wire up the interactivity, okay?"

The results? Our performance scores jumped from the embarrassing 30s to the mediocre 60s.

Progress! But also... not really good enough. It's like going from an F to a D+: better, but nobody's putting that grade on their refrigerator.

Here's what was happening under the hood:

  1. Server pre-renders the HTML (blazing fast, thanks Clojure!)
  2. Browser receives fully formed HTML (great!)
  3. JavaScript loads (uh oh...)
  4. Hydration begins (wait for it...)
  5. React walks through the entire tree, matching components (sigh...)
  6. Finally, interactivity is restored (was it worth it?)

We'd solved the initial content display problem, but introduced a new one: the hydration tax. Our pages were showing content faster, but they were still shipping and executing more JavaScript than a crypto mining script.

The real kicker?

Most of our UI was static.

We were paying the full price of hydration for pages that barely had any interactivity. It's like buying a Ferrari to drive to your mailbox - technically it works, but there might be a more reasonable solution.

Little did we know, our journey through web performance optimization was about to take an interesting turn...

[Image: Performance with SSR and bundled JavaScript]

The Unexpected Discovery

Like any good developer drowning in performance problems, I dove deep into the classic "maybe if I just optimize the bundle size" rabbit hole. You know the drill:

;; Before: Kitchen sink build config
{:builds
  {:app {:target :browser
         :output-dir "public/js"
         :asset-path "/js"
         :modules {:main {:entries [app.core]}}
         :dev {...everything-but-the-kitchen-sink...}}}}

;; After: Aggressively optimized
{:builds 
  {:app {:target :browser
         :output-dir "public/js"
         :asset-path "/js"
         :modules {:main {:entries [app.core]}
                  :widgets {:entries [app.widgets]}
                  :charts {:entries [app.charts]}}
         :compiler-options {:optimizations :advanced}}}}

We went full Marie Kondo on our dependencies:

  ‱ Goodbye Malli, we don't need runtime prop validation
  ‱ Farewell Garden, Tailwind's got us covered
  ‱ Au revoir, every npm package we weren't 100% sure about

Hours of optimization, countless shadow-cljs configuration tweaks and enough CPU cycles burned to heat a small house in winter.

The result? We moved from the 60s to the... wait for it... 70s.

But then something magical happened. In the midst of all this JavaScript juggling, I made a mistake. A glorious, beautiful mistake. During one deploy, I somehow managed to not send any JavaScript to the client at all.

And the Lighthouse score? 91. NINETY-ONE!

I was ecstatic! "Look at these scores!" I shouted to my team, proudly showing off the blazing fast page loads. Until someone tried to click a button. And another button. And another button...

Nothing. Worked.

Back to square one?

Not quite. This accident revealed something crucial - our pages were perfectly capable of being lightning-fast. We just needed to figure out how to keep that speed while maintaining interactivity.

That's when it hit me: What if we only sent the JavaScript that actually enables interactivity? What if our server-rendered system could handle this natively, without any additional complexity?

It was time to stop fighting with bundle optimizations and start thinking about a completely different approach...

The Perfect Solution

So there I was, deep in the trenches of JavaScript optimization hell, looking for answers. First stop: htmx. Have you ever looked at its implementation? It's like someone took jQuery, sprinkled in some modern web features and called it a day. Not exactly what I was looking for.

Then came the "maybe I'm just not trying hard enough" phase. I started doing things that would make any sensible developer question my sanity:

// Desperate attempt #1: The "setTimeout hack"
setTimeout(() => loadAllTheJavaScript(), 1000) // Sorry, user, your buttons will work... eventually

// Desperate attempt #2: Let's throw it in a web worker!
import { partytownSnippet } from '@builder.io/partytown/integration'
// It did not work with hydration

After banging my head against the wall for what felt like eternity, I did what we all should do more often:

I stepped away. Spent time with my kids. Got some sleep. Did the school run.

And then, over my morning coffee, it hit me. Alpine.js. That elegant little library I'd used years ago and somehow forgotten about. You know that moment when you remember an old friend and wonder "how did I forget about you?"

The solution was suddenly crystal clear: Instead of fighting with hydration and complex client-side frameworks, what if we just compiled our event handling into Alpine.js directives?

From this:

{:componentName "Button"
 :props {:onClick {:type "function"
                  :body "handleClick(event)"}}}

To this:

<button x-on:click="handleClick($event)">
  Click me!
</button>

No hydration. No complex bundling. No setTimeout hacks. Just clean, progressive enhancement that works.
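
A rough sketch of what that compile step could look like on the Clojure side (purely illustrative, with invented names; not Vade Studio's actual implementation):

(require '[clojure.string :as str])

(defn compile-event-props
  "Turns function-valued props from the UI tree into Alpine.js attributes, e.g.
   {:onClick {:type \"function\" :body \"handleClick(event)\"}}
   becomes {\"x-on:click\" \"handleClick($event)\"}."
  [props]
  (into {}
        (keep (fn [[k v]]
                (when (and (map? v) (= "function" (:type v)))
                  (let [event (-> (name k) (subs 2) str/lower-case)] ; :onClick -> "click"
                    [(str "x-on:" event)
                     (str/replace (:body v) "event" "$event")]))))
        props))

(compile-event-props {:onClick {:type "function" :body "handleClick(event)"}})
;; => {"x-on:click" "handleClick($event)"}

The server can then emit these attributes straight into the rendered HTML, and Alpine.js wires them up at load time with no hydration pass.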

When we deployed this solution, our performance score hit 92.

NINETY-TWO!

And this time, everything actually worked!

Sometimes the best solutions aren't found in the latest frameworks or clever hacks. Sometimes they're found in that quiet moment over coffee, when you remember there's beauty in simplicity.

The Path Forward: Less JavaScript, More Coffee

So what did we learn from this rollercoaster of web performance optimization?

First, the obvious: modern web development is broken.

We've somehow convinced ourselves that throwing more JavaScript at performance problems will somehow make them go away.

It's like trying to make your car go faster by adding more weight - technically you're doing something, but it's probably not helping.

But the real lessons went deeper:

  1. Sometimes the best solutions come from constraints, not capabilities. Every time we removed something - client-side rendering, hydration, heavy runtime libraries - our application got better. It turns out the fastest JavaScript is often the JavaScript you don't ship.
  2. The path to high performance isn't always about adding the newest, shiniest tools. Sometimes it's about rediscovering simpler solutions that have been there all along. Alpine.js wasn't new or revolutionary - it was just right for the job.
  3. Most importantly, we learned that stepping away from the problem can be as important as diving into it. That "aha!" moment didn't come during a late-night coding session or while reading the latest framework documentation. It came over a cup of coffee, after a good night's sleep and a school run.

Today, Vade Studio renders static pages with performance scores in the 90s, proving that you don't have to choose between modern web applications and blazing fast performance. You can have both.

We're just getting started on this journey of making web development simpler, faster and more accessible. As we continue building the future of app development with AI and no-code, we're discovering new ways to push the boundaries of what's possible.

Want to stay updated on our progress? Subscribe to our newsletter below. We'll share our latest discoveries, technical deep-dives and insights as we continue building the future of web development. No framework fatigue, just practical solutions for real problems.

After all, the web should be fast. Building for it should be fun. And maybe, just maybe, we can help make both of those things true.

Permalink

From a Developer to The CEO of Metosin

In this episode, we sit down with Valtteri Harmainen, the CEO of Metosin, to explore his unconventional journey into the world of technology and leadership. You might remember Valtteri from his talk at ClojuTRE, where he shared how he "created his ow...

Permalink

A vision for Runnable Specifications

If you want to watch me talk for two hours on the topic of my next book, Runnable Specifications, please check out my presentation to the Houston Functional Programming User Group. Two hours were recorded, but the conversation went on for four; it was such fun!

Speaking of my new book, I’ve published a major revision to the Introduction. It’s free to read online, along with the drafted parts of the book.

If you want to read my previous book, Grokking Simplicity, you can get it from Amazon or from Manning, or read it on O’Reilly Online.


A vision for Runnable Specifications

Imagine you are writing software for your construction equipment rental business. Imagine, further, that you have a library that completely, precisely, and simply captures the behavior of your business domain. That is, you can easily write expressions that represent every scenario possible in your business. Yes, there are millions of such scenarios, but you can tell by the way the operations fit together that nothing could happen that you couldn’t describe.

You want to add a new feature to your point of service (POS) station. While the UI interactions are intricate, the business domain library makes anything you need to do easier. You are free to experiment with the UI because the library gets out of your way.

And then, one day, you notice that many people rent a Bobcat with their backhoe. You decide to add rental bundles to your business domain library. While it is a change to the business, you are able to keep the bundle domain rather decoupled from the rest of the business concepts. Rental bundles build on some ideas and compose others, while adding a couple of new ideas of their own.

This is the vision for what I mean by runnable specifications. It’s an imperfect term. It doesn’t refer to the domain model per se. The specification is the program you write using the library. Essentially, you should be able to write a specification for how your business operates, then run the specification. Your library is the specification language. And because it’s written in your programming language, it is runnable.

Making it runnable has a number of advantages. First, you can easily prototype it. I’m so accustomed to programming with a REPL at my side that I can’t imagine wanting to work any other way.

Second, you might be able to ship it to production. A specification might not have all of the properties you need. For instance, it might not be performant enough to run under production load. But it might be! That would be the ideal case.

Third, even if it can’t ship to production, it could be used as a test oracle. Your specification defines how the business operates, so you could query it for what it should do in certain test scenarios. Compare that to the productionized code and you have a nice way to guarantee they both describe the same business.

Now, I said the domain model you write—that expressive library—is not the specification. It’s the specification language. However, that specification language was itself specified in code; its implementation is a specification itself.

The biggest problem with the vision of runnable specifications is that it’s a dream. In what world do we have the luxury of writing such perfect libraries? Certainly not in this one. But that’s the nice thing about this vision: even getting halfway there would be lovely.
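
To make the vision a little more concrete, here’s a toy sketch of my own (invented names, not an excerpt from the book) of what a small fragment of such a rental-domain library might look like in Clojure:

;; A minimal sketch of a "runnable specification" style domain library.
;; All names here (rental, bundle, total-price) are hypothetical.
(defn rental
  "A single rental line; a total function over any equipment keyword, rate and days."
  [equipment daily-rate days]
  {:items [{:equipment equipment :daily-rate daily-rate :days days}]})

(defn bundle
  "Combines two rentals into one. Closure property: the result is itself a rental,
   so bundles compose with everything else that accepts rentals."
  [r1 r2]
  {:items (into (:items r1) (:items r2))})

(defn total-price [r]
  (reduce + 0 (map (fn [{:keys [daily-rate days]}] (* daily-rate days))
                   (:items r))))

(total-price (bundle (rental :backhoe 300 2)
                     (rental :bobcat 150 2)))
;; => 900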

Runnable Specifications (the book I’m writing) collects the skills I’ve learned over the years to help make those kinds of libraries. The libraries have a number of nice properties:

  • They’re built on total functions.

  • The functions compose well.

  • The functions tend to use the closure property.

  • And, of course, the functions tend to correspond to operations used in the domain.

I hope the book makes a contribution to the literature on software design. It already has the bulk of the material I’ll write for it (though revisions are often required). Please let me know what you think about the book. Just reply to this email.

Permalink

Resolve Symbols and Calculate Types with Sharplasu

Parsing is typically where we begin and invest much of our enthusiasm. However, completing the parser is just the beginning. Sadly, I must inform you that additional steps are necessary. One step that enhances the value of the Abstract Syntax Trees (ASTs) obtained from parsing is semantic enrichment. In this article, we’ll explore what semantic enrichment is, what it enables us to do, and we are going to share some code.

As always, the code for this article is available on GitHub. You can find it at https://github.com/Strumenta/an-entity-language-sharplasu. In this article we just talk about symbol resolution, so if you want details about the parsing and creation of the AST you will have to look in the repository.

It All Starts With Parsing, but the Assembly Line Continues

When we parse, we recognize the structure of the code, by analyzing its syntax.

But what does it mean to parse? It means:

  1. To check that the code is syntactically correct
  2. To build the Abstract Syntax Tree (AST): we make nodes representing the structures we recognize. For example, we could process the code dog.fly() and return a node representing a method call.

All is good, except for the fact that dogs cannot fly.

The code is syntactically correct and semantically incorrect. 

When we process code we start by verifying that it is syntactically correct. 

If it is, we move to the semantic analysis. When the code is semantically correct, we can enrich the AST with semantic information.

In essence, we are interested in two things:

  1. Connecting references to the corresponding declarations. We call this symbol resolution
  2. Calculating the types of elements that can be typed. We call this type calculation

The Link Between Symbol Resolution and Type Calculation

I am an engineer by trade, and I really like the step-by-step approach to problem-solving: you solve one part of the problem and only then move to the next one. So you may wonder why I am conflating two apparently different problems, like symbol resolution and type calculation. As for most things I do, it is because there is no better alternative.

The two mechanisms are interconnected: one depends on the other. For example, let’s say that I have this code:

class TimeMachine {

   Time move(TimeIncrement) 

   Point move(Direction)

}

class Point {

   Point add(Direction)

}

class Time {

   Time add(TimeIncrement)

}

myTimeMachine.move(foo).add(bar)

Let’s say my goal is to figure out the type of the overall expression myTimeMachine.move(foo).add(bar).

To answer this question I need to figure out which add method we are referring to. If it is the one declared in Point, then the overall result will be of type Point. If instead we are referring to the add method declared in Time, then the overall result will be of type Time. 

Ok, but how do I know which add method I am calling? Well, it depends on the type of myTimeMachine.move(foo):

  • If that expression has type Point and bar has type Direction, then we are calling the add method in Point. 
  • If instead that expression has type Time, and the expression bar has type TimeIncrement, then we are calling the add method in Time.

This means I need to figure out the type of myTimeMachine.move(foo). To do so, I first need to figure out whether I am calling the first or the second overloaded variant of TimeMachine.move. And that depends on the type of foo.

So, you see, I cannot extricate the two problems: they affect each other’s results, and therefore, in principle, we treat them together. In practice, for very simple languages we can get away with treating them separately. Typically, you need to treat the problems in a combined way if there are composite types or cascading function/method calls.

If you want to read about symbol resolution for a language like Java, you can look at How to Build a Symbol Solver for Java, in Clojure or Resolve method calls in Java code using the JavaSymbolSolver.

A Prerequisite for Any Non-Trivial Operation

Semantic Enrichment is a prerequisite for most programmatic handling of code.

Perhaps you may implement a linter or a code formatter without semantic enrichment, but for the most typical language engineering applications you need semantic enrichment:

  ‱ Interpretation: To execute code, we need to connect function invocations to their definitions
  • Migrations: To migrate code in any nontrivial way, we want to take into account the type of the different elements. For example, if we were translating the operation a + b, depending on the target language and the type of the operands, we may want to translate it as a + b or perhaps a.concat(b) or even a + b.toDouble().
  • Static analysis and refactoring: Automated code modifications, such as renaming variables or moving functions, depend on knowing which references are linked to which declarations.
  • Editors: Autocompletion or error-checking depends on semantic enrichment. But also go-to-definition or find-usages. In essence the difference between a basic editor and an advanced one is in their support for semantic enrichment for the language of interest.

The StarLasu Approach and Semantic Enrichment

When it comes to parsing, at Strumenta we apply the principles of the Chisel Method. They are quite established at this point, after years of refining. For Semantic Enrichment, things are not as crystallized, as we evolve the approach with each new project, finding new solutions to new challenges. That said, we are finding patterns that work and incorporating them into our core libraries, and into Sharplasu in particular.

At this stage, Sharplasu has a module called SymbolResolution which has a reasonably good approach to symbol resolution. Type calculation is instead still implemented ad hoc for each project at this time. So we call the type calculation logic from symbol resolution (and vice versa). It is just that we have standard APIs for symbol resolution and not for type calculation.

Let’s See an Example of Semantic Enrichment

In our example we will work with a simple language that permits us to define entities. These entities have typed fields, called features, which can be initialized with expressions.

This is an example:

module example

import standard

type address

class base {
	name string
	description string
}

class aged : base {
	age integer
}

class person : aged {
	speed integer = 2
}

class athlete : person {
	speed integer = person.speed * 2
}

class car : aged {
	kilometers integer = age * 10000	
}

This language, while simple, contains some of the elements that we can find in most languages:

  • We can import modules
  • We can define new types
  • We have built-in types such as string and integer
  ‱ We can reference features either without specifying the context (therefore using the current object as the context) or by specifying it

Symbol Resolution

Let’s take a look at a portion of our SymbolResolver.

Scope ModuleLevelTypes(Node ctx)
{
    var scope = new Scope();
    var module = ctx.FindAncestorOfType<Module>();
    if (module != null)
    {
        // let's define types
        module.Types.ForEach(type => scope.Define(type));
        foreach (var import in module.Imports)
        {
            SymbolResolver.ResolveSymbols(import);
            if (import.Module.Referred?.Entities != null)
            {
                foreach (var entity in import.Module.Referred.Entities)
                {
                    scope.Define(entity);
                }
            }
            if (import.Module.Referred?.Types != null)
            {
                foreach (var type in import.Module.Referred.Types)
                {
                    scope.Define(type);
                }
            }
        }
    }

    return scope;
}

[..]

public ExampleSemantics(IModuleFinder moduleFinder)
{
    ModuleFinder = moduleFinder;

    SymbolResolver = new DeclarativeLocalSymbolResolver(Issues);
    [..]
    SymbolResolver.ScopeFor(typeof(FeatureDecl).GetProperty("Type"), (FeatureDecl feature) =>
    {
        var scope = ModuleLevelTypes(feature);                
        return scope;
    });
    [..]
    SymbolResolver.ScopeFor(typeof(Import).GetProperty("Module"), (Import import) =>
    {
        var scope = new Scope();
        if(moduleFinder.FindModule(import.Module.Name) != null)
        {
           scope.Define(moduleFinder.FindModule(import.Module.Name));
        }                    
        return scope;
    });

    TypeCalculator = new EntityTypeCalculator(SymbolResolver);                
}

Here we want to see the basics of symbol resolution and how importing symbols from other elements works. We first see that symbol resolution depends on moduleFinder. The moduleFinder is the thing containing the list of available code for a project: the files of the project and any library available to the project. You can see in the repository that, for this project, it is just a Dictionary that keeps track of each name and the corresponding Module object. A Module object is the root of the AST. Given the previous example file, there will be a Module named "example" representing it. The important part is that this is the object that tells you if and where modules outside the current one are located.

You can see how we solve imports such as:

import standard

To solve a symbol you need a scope. You can think of a scope as a container of available definitions. In Sharplasu, a Scope can have a parent Scope, so you can properly nest them. For example, there could be a global scope to solve imports and a class scope to solve features.

So, we create a scope, and then we ask the moduleFinder if there is a module with that name and, if so, we define that module. Defining a symbol means telling our SymbolResolver object that there is a definition of the argument in the current scope. The SymbolResolver object is based on a Sharplasu class, so you can use that class for all your projects. Later, when we ask our symbol resolver to resolve the symbols, it will look in the scope of each reference and check if there is a valid definition for a reference with that name.
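
Conceptually (shown here as a quick Clojure-flavored sketch for brevity, rather than Sharplasu’s actual C# API), a scope is just a lookup table with an optional parent, and resolving a name walks up the chain:

;; Conceptual sketch only; Sharplasu's Scope/Define and the resolver live in C#.
(defn define-sym [scope sym-name decl]
  (assoc-in scope [:symbols sym-name] decl))

(defn resolve-sym [scope sym-name]
  (when scope
    (or (get-in scope [:symbols sym-name])
        (recur (:parent scope) sym-name))))

(def module-scope (define-sym {:parent nil} "address" :address-type))
(def class-scope  (define-sym {:parent module-scope} "age" :age-feature))

(resolve-sym class-scope "address")
;; => :address-type (not found locally, found in the parent scope)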

How Importing Modules Affects Types

You will notice that defining modules is necessary to solve the types of features.

class base {
	name string
	description string
}

So, string (appearing after name and description) is a type, and name string is the definition of a feature.

To solve the references to the types of features you call the ModuleLevelTypes method. This method will:

  ‱ look for definitions of types in the current module
  ‱ loop through the imported modules, making sure that all references in them are solved
  ‱ then define the types found in each imported module

Solving imports is therefore crucial to solving types, particularly since in this example, as in many real languages, the types are often the ones from the standard library.

class athlete : person {
	speed integer = person.speed * 2
}
class car : aged {
	kilometers integer = age * 10000	
}

Solving ReferenceExpression

In our language a ReferenceExpression, like person.speed or age, can only have:

  • an optional parent/context element that is a class (like person)
  • a target element that references a feature (like speed or age)

No nesting or multiple levels are allowed.

So, solving a reference to either the context or the target element works similarly.

SymbolResolver.ScopeFor(typeof(ReferenceExpression).GetProperty("Context"), (ReferenceExpression reference) =>
{
    var scope = new Scope();
    var classParent = reference.FindAncestorOfType<ClassDecl>();
    if (classParent != null)
        scope = ClassHierarchyEntities(reference.FindAncestorOfType<ClassDecl>());
    else
        Issues.Add(Issue.Semantic("The class containing this expression has no superclasses. The Context cannot be solved.", reference.Position));
    return scope;
});
SymbolResolver.ScopeFor(typeof(ReferenceExpression).GetProperty("Target"), (ReferenceExpression reference) =>
{
    var scope = new Scope();
    if (reference.Context == null)
    {
        var classParent = reference.FindAncestorOfType<ClassDecl>();
        if (classParent != null)
            scope.Parent = ClassLevelTypes(classParent);
    }
    else if (reference.Context.Resolved)                
    {
        reference.Context.Referred.Features.ForEach(it => scope.Define(it));
    }
    return scope;
});

One difference is just that if the current expression contains a Context element, but the class containing the expression has no superclass, we have an issue, because the reference cannot be solved. The other one is that for solving Target, we must first solve Context, to make sure we only consider the features in the Context element.

Otherwise, we look for the proper elements in the parent class. Let’s just look at how to solve the hierarchy of classes.

Scope ClassHierarchyEntities(ClassDecl ctx)
{
    var scope = new Scope();
    var superclass = ctx.Superclass;            
    if (superclass != null && superclass.Resolved)
    {
        // let's define the superclass
        scope.Define(superclass.Referred);                
        scope.Parent = ClassHierarchyEntities(superclass.Referred);
    }

    return scope;
}

If the reference to the superclass of the current class has been solved, we define the current superclass. Then we rise up through the hierarchy of classes to define them all.

class base {
	name string
	description string
}

class aged : base {
	age integer
}

class person : aged {
	speed integer = 2
}

class athlete : person {
	speed integer = person.speed * 2
}

Basically, considering the previous example, to solve the reference to person in person.speed we define person, because it is the superclass of athlete (the class containing the expression), then aged and base.

Symbol Resolution Patterns

We can then see a few rules:

  ‱ The way in which we resolve imports is by delegating to the moduleFinder object. This is the case because we need to use some logic to find other files and parse them on demand, possibly managing loops (what if a file imports itself, directly or indirectly?)
  ‱ When looking for a superclass we use a global scope, for all classes in the module
  ‱ For solving references to features we do not define any element declared at that level; we just look for features declared in the containing classes. To do this, we define a parent scope
  • The case of ModuleLevelTypes is interesting because there we can see a combination of elements:
    • We get all the types declared in the module
    • We get all the types declared in the imported modules
    • We could also get all the built-in entities, but we choose to force the user to import a standard module to get them instead

These few rules cover many of the most common patterns we see in languages we work with, either Domain Specific Languages we design or legacy languages for which we build parsers.

Type Calculation

Let’s take a look at our simple Type Calculator. Notice that we managed to separate our type calculation from symbol resolution. Our type calculation needs symbol resolution, but not vice versa. We are going to see this simpler case first, then what happens in the standard case.

public override IType CalculateType(Node node)
{
    switch (node)
    {
        case OperatorExpression opExpr:
            var leftType = GetType(opExpr.Left);
            var rightType = GetType(opExpr.Right);
            if (leftType == null || rightType == null)
                return null;
            switch (opExpr.Operator)
            {
                case Operator.Addition:
                    if (leftType == EntityStandardLibrary.StringType && rightType == EntityStandardLibrary.StringType)
                        return EntityStandardLibrary.StringType;
                    else if (leftType == EntityStandardLibrary.StringType && rightType == EntityStandardLibrary.IntegerType)
                        return EntityStandardLibrary.StringType;
                    else if (leftType == EntityStandardLibrary.IntegerType && rightType == EntityStandardLibrary.IntegerType)
                        return EntityStandardLibrary.IntegerType;
                    else
                        throw new NotImplementedException($"Unsupported operand types for addition: {leftType}, {rightType}");
               [..]
            }
        case ReferenceExpression refExpr:
            if(refExpr.Context == null)
                return GetTypeOfReference<ReferenceExpression, FeatureDecl>(refExpr, typeof(ReferenceExpression).GetProperty("Target"));
            else
            {
                SymbolResolver.ResolveNode(refExpr);
                return GetTypeOfReference<ReferenceExpression, FeatureDecl>(refExpr, typeof(ReferenceExpression).GetProperty("Target"));
            }
       [..]
        case StringLiteralExpression _:
            return EntityStandardLibrary.StringType;
        case BooleanLiteralExpression _:
            return EntityStandardLibrary.BooleanType;
        case IntegerLiteralExpression _:
            return EntityStandardLibrary.IntegerType;                
        default:
            throw new NotImplementedException($"Type calculation not implemented for node type {node.GetType()}");
    }
}

It shows common patterns for type calculation:

  ‱ At the bottom you can see that we solve types for literals: we assign a standard type to each kind of literal
  ‱ We solve types for binary operations by finding the types of the two individual elements (left and right) of the expression and then defining rules for the combination. For example, the addition of a string and an integer is considered a concatenation, so the resulting type is a string. This will vary very much by language and by how you choose to handle type conversion between compatible types
  • To solve the type of reference expressions, we need to solve the reference and then get its type

Calculating the Type of References

To solve the type of references, let’s take a look at the GetTypeOfReference method. Its type arguments are the class that holds the reference and the type of the referred declaration; its arguments are the node holding the reference and the PropertyInfo of the property that holds the reference. Notice that in this case we could avoid using the first type argument, since ReferenceExpression is the only kind of expression containing a reference. However, this code shows that it is easy to generalize this method.

private IType GetTypeOfReference<T, S>(T refHolder, PropertyInfo refAccessor)
    where T : Node
    where S : Node, Named            
{
    ReferenceByName<S> refValue = refAccessor.GetValue(refHolder) as ReferenceByName<S>;
    if (refValue != null && !refValue.Resolved)
    {
        SymbolResolver?.ResolveProperty(refAccessor, refHolder);                             
    }
    else if (refValue != null && refValue.Resolved != false)
        return GetType(refValue.Referred as Node);
    
    return null;
}

The method uses a bit of reflection. In essence, it checks whether the provided property corresponds to a reference that has already been solved. If it has not, it triggers symbol resolution, supposing we have a SymbolResolver. Then we get the type of the referred node. GetType is simply a way to access a Dictionary matching nodes with types.

These are the common patterns to calculate a type. One that we are missing is a special type for void or unit, representing the absence of a type.

Again, it may sound quite boring, but these are the kind of patterns we routinely see. Of course, things can get more exciting if we throw generics and type inference in the mix, but for this time, let’s keep things simple.

Our types are all classes that inherit from an IType interface. This interface is not part of Sharplasu; we created it for this example, but it is so simple that you can look it up on your own in the repository.

When Type Calculation and Symbol Resolution Intertwine

We avoided making type calculation and symbol resolution depend on each other for a few reasons: our references had only two levels, and we knew that the first one always referred to a superclass of the current class. Imagine we change that.

class address {
	note base
	city string
	street string
	number integer	
}

class person : aged {
	location address
	speed integer = 2
}

class athlete : person {			
	deliveryNote string = location.note.description
	luckyNumber integer = location.number + 3
}

Now, the features can have a class as a type, in addition to scalar types. Our references now have a Context property, which is an Expression. So, the ReferenceExpression node for location.note.description will have a nested structure: at the first level we have a ReferenceExpression with Target description and, as Context, another ReferenceExpression with Target note, and so on.

This means that now we cannot determine statically what will be the actual type of a Context object: it could be a scalar type or a class. So, to solve a reference in Target we need to dynamically define the type of Context, which will depend on what the reference in Context resolves to. So, how do we accomplish this? For starters, we need to make a small change in the ModuleLevelTypes method and make sure that we define all entities at the module level.

module.Entities.ForEach(type => scope.Define(type));

Apart from that all we need to change is how we solve symbols for the Target property.

SymbolResolver.ScopeFor(typeof(ReferenceExpression).GetProperty("Target"), (ReferenceExpression reference) =>
{
    var scope = new Scope();

    var classParent = reference.FindAncestorOfType<ClassDecl>();
    if (classParent != null)
        scope.Parent = ClassLevelTypes(classParent);                

    if (reference.Context != null)
    {
        SymbolResolver.ResolveNode(reference.Context);

        var type = TypeCalculator.GetType(reference.Context) as ClassDecl;

        if (type != null)
        {
            type.Features.ForEach(it => scope.Define(it));
        }
    }

    return scope;
});

The pattern is simple:

  • We ensure we have resolved the Context node, so we know which feature Context resolves to
  ‱ This allows us to get the type, i.e. the class the feature has
  • We can now define the features of the class

And voilĂ , we can now solve the current reference.

Using Semantic Enrichment

It is easy to glue together symbol resolution and type calculation.

public List<Issue> SemanticEnrichment(Node node)
{
    SymbolResolver.ResolveSymbols(node);
    node.WalkDescendants<Expression>().ToList().ForEach(expression => {
        TypeCalculator.SetTypeIfNeeded(expression);
    });
    return Issues;
}

So, we can take any module and trigger symbol resolution. We will then get an AST containing references that have been resolved. Also, the types will be stored in a cache in TypeCalculator.

public abstract class TypeCalculator
{
    public virtual IType GetType(Node node)
    {
        return SetTypeIfNeeded(node);
    }

    public IType StrictlyGetType(Node node)
    {
        var type = SetTypeIfNeeded(node);
        if (type == null)
            throw new InvalidOperationException($"Cannot get type for node {node}");
        return type;
    }

    public abstract IType CalculateType(Node node);

    public virtual IType SetTypeIfNeeded(Node node)
    {
        if (node.GetTypeSemantics() == null)
        {
            var calculatedType = CalculateType(node);
            node.SetTypeSemantics(calculatedType);
        }
        return node.GetTypeSemantics();
    }
}

This means that after invoking the semantic enrichment, we can look at each of our nodes of class Expression and get their type using mynode.GetTypeSemantics(). Easy, right?

A Simple Test

What is life without tests? Here at Strumenta we do not want to imagine such a sorry existence, so our repository has a few tests. Let’s see a simple one:

        [TestMethod]
        public void TestTypeCalculation()
        {
            EntitySharplasuParser parser = new EntitySharplasuParser();
            string code = @"module example

import standard

class base {
	name string
	description string
}

class aged : base {
	age integer
}

class person : aged {
	location address
	speed integer = 2
}

class address {
	note base
	city string
	street string
	number integer	
}

class athlete : person {			
	deliveryNote string = location.note.description
	luckyNumber integer = location.number + 3
}

class car : aged {
	kilometers integer = age * 10000	
}";
            var result = parser.Parse(code);

            SimpleModuleFinder moduleFinder = new SimpleModuleFinder();
            ExampleSemantics semantics = new ExampleSemantics(moduleFinder);
            List<Issue> issues = semantics.SemanticEnrichment(result.Root);
            Assert.AreEqual(0, issues.Count);
            result.Root.AssertAllExpressionsHaveTypes();
            Assert.AreEqual("string",
                result.Root.Entities[4].Features[0].Value.GetTypeSemantics().Name
            );
        }

We test that all expressions have a type, then we check one specific expression. The expression location.note.description should have type string, since location is of class address, which has the note feature of class base, which in turn has a feature description of type string.

Conclusions

While parsing organizes code into a syntactic structure, Semantic Enrichment uncovers its meaning by resolving symbols and determining types. Without this critical step, advanced operations such as code generation, interpretation, and refactoring would be impossible. Semantic Enrichment is not trivial to implement. 

With this article, we wanted to share some of the principles behind it. And through the support built into Sharplasu, we want to provide a way to simplify the implementation of advanced Language Engineering solutions. For us, it has been working pretty well, and we hope it will work similarly well for you.

Have fun with your Language Engineering project!

The post Resolve Symbols and Calculate Types with Sharplasu appeared first on Strumenta.

Permalink

A Simpler Way to Deal with Java Sources in CIDER

For ages, dealing with Java sources in CIDER has been quite painful.1 Admittedly, many of the problems were related to an early design decision I made to look for the Java sources only on the classpath, as I assumed that would be the easiest way to implement this. Boy, was I wrong! Countless iterations and refinements to the original solution later, working with Java sources is still not easy. enrich-classpath made things better, but it required changes to the way CIDER (nREPL) was being started and slowed down the first CIDER run in each project quite a bit, as it fetches all missing sources at startup. It’s also a bit trickier to use it with cider-connect, as you need to start nREPL together with enrich-classpath. Fortunately, my good friend and legendary Clojure hacker Oleksandr Yakushev recently proposed a different way of doing things, and today I’m happy to announce that this new approach is a reality!

There’s an exciting new feature waiting for you in the latest CIDER MELPA build. After updating, try turning on the new variable cider-download-java-sources (M-x customize-variable cider-download-java-sources, then toggle it to enable). Now CIDER will download the Java sources for third-party library classes when:

  • you request documentation for a class or a method (C-c C-d C-d)
  • you jump to some definition (M-.) within a Java class

Note that eldoc won’t trigger the auto-download of Java sources, as we felt this might be harmful to the user experience.

This feature works without enrich-classpath.2 The auto-downloading works for both tools.deps and Leiningen-based projects; in both cases it starts a subprocess using either the clojure or lein binary (the same approach that Clojure 1.12’s add-lib utilizes).

And that’s it! The new approach is so seamless that it feels a bit like magic.

This approach should work well in most cases, but it’s not perfect. You might have problems downloading the sources of dependencies that are not public (i.e. they live in a private repo) when the credentials are not global but live under a specific alias/profile that you start the REPL with. If this happens to you, please report it; we suspect such cases will be rare. The download usually takes up to a few seconds, and the downloaded artifact is then reused by all projects. If a download fails (most often because the library didn’t publish a -sources.jar artifact to Maven), CIDER will not attempt to download it again until the REPL restarts. Try it out in any project by jumping to clojure.lang.RT/toArray or bringing up the docs for clojure.lang.PersistentArrayMap.

Our plan right now is to keep this new feature disabled by default in CIDER 1.17 (the next stable release), so we can gather some user feedback before enabling it by default in CIDER 1.18. We’d really appreciate your help in testing and polishing the new functionality, and we’re looking forward to hearing whether it’s working well for you!

We also hope that other Clojure editors that use cider-nrepl internally (think Calva, iced-vim, etc) will enable the new functionality soon as well.

That’s all I have for you today! Keep hacking!

P.S. The State of CIDER 2024 survey is still open and it’d be great if you took a moment to fill it in!

  1. You need to have them around to be able to navigate to (definitions in) them and to get improved Java completion. More details here. ↩

  2. If you liked using enrich-classpath you can still continue using it going forward. ↩

Permalink

Where to store your (image) files in a Leiningen project, and how to fetch them?

Notes

Create new app using:

$ lein new app image_in_resources

Place clojure_diary_logo.png in the resources/images/ folder.

project.clj content:

(defproject image_in_resources "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "EPL-2.0 OR GPL-2.0-or-later WITH Classpath-exception-2.0"
            :url "https://www.eclipse.org/legal/epl-2.0/"}
  :dependencies [[org.clojure/clojure "1.11.1"]]
  :main ^:skip-aot image-in-resources.core
  :target-path "target/%s"
  :profiles {:uberjar {:aot :all
                       :jvm-opts ["-Dclojure.compiler.direct-linking=true"]}})

src/image_in_resources/core.clj content:

(ns image-in-resources.core
  (:gen-class)
  (:require [clojure.java.io :as io])
  (:import [javax.imageio ImageIO]))

(defn load-image [image-name]
  (let [image-url (io/resource (str "images/" image-name))]
    (if image-url
      (ImageIO/read image-url)
      (throw (Exception. (str "Image not found: " image-name))))))

(defn save-image [image output-path]
  (ImageIO/write image "png" (io/file output-path)))

(defn -main []
  (let [image-name "clojure_diary_logo.png"
        output-path (str "./" image-name)] ; Save to current directory
    (try
      (let [img (load-image image-name)]
        (save-image img output-path)
        (println (str "Image saved successfully to: " output-path)))
      (catch Exception e
        (println (str "Error: " (.getMessage e)))))))

Run the project using:

$ lein run

Generate the jar file using lein uberjar, and run it using:

$ java -jar target/uberjar/image_in_resources-0.1.0-SNAPSHOT-standalone.jar
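
The same code works from the uberjar because io/resource looks files up on the classpath (resources/ in development, the jar contents after packaging) rather than on disk. A quick REPL check, purely illustrative, makes that visible:

(require '[clojure.java.io :as io])

;; returns a java.net.URL when the file is on the classpath
(io/resource "images/clojure_diary_logo.png")

;; returns nil when it is not, which is why load-image above
;; throws a descriptive exception in that case
(io/resource "images/no_such_file.png")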

The complete source code can be found here: https://gitlab.com/clojure-diary/code/image-in-resources.

Permalink

Clojure Deref (Jan 17, 2025)

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS). Thanks to Anton Fonarev for link aggregation.

Blogs, articles, and projects

Libraries and Tools

New releases and tools this week:

Permalink

Ornament

by Laurence Chen

Well done on the Compass app. I really like how fast it is (while retaining data consistency, which is hugely undervalued these days). I’m not just being polite, I really like it.

by Malcolm Sparks

Before the Heart of Clojure event, one of the projects we spent a considerable amount of time preparing was Compass, and it’s open-source. This means that if you or your organization is planning a conference, you can use it.

When you decide to use Compass, besides importing the speaker and session data into the database, you’ll probably want to make some modifications to the frontend. That’s when you’ll quickly encounter an unfamiliar library: Ornament.

At first glance, the Ornament GitHub README is a bit lengthy, and you might worry about how difficult it is to learn. However, learning Ornament and using it for frontend development is an excellent investment. Ornament requires a bit more time to learn because it’s a deep module, but precisely because of this it offers some competitive advantages that make it worth considering. Here are three key values:

  • Developer Ergonomics
  • Composability
  • High Performance

Developer Ergonomics

When you first see an introduction to Ornament, the easiest concept to grasp is that it’s a library that lets you write CSS directly within Clojure/ClojureScript.

When dealing with CSS, a common approach is to prepare several files dedicated to defining CSS and place the project’s custom CSS classes there. This means we typically have to switch back and forth between different cljs and css files during editing.

This back-and-forth problem was alleviated significantly with the rise of CSS utility-class frameworks like Tailwind and Tachyons. Developers could finally reference utility classes from within a single cljs file. However, this method isn’t without limitations, because sometimes we still want to customize styles or define our own CSS classes.

Ornament combines the flexibility of customization with the convenience of utility classes, allowing developers to handle everything within the cljs file: we retain full control over custom CSS while enjoying the streamlined experience of developing in a single file.
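
As a small illustration of that mix (the cta-button component and the specific utility tokens below are made up for this example, not taken from Compass), a single defstyled form can combine Tailwind-style tokens with plain CSS declarations:

(require '[lambdaisland.ornament :as o])

(o/defstyled cta-button :button
  ;; utility-class tokens, expanded at compile time
  :rounded :px-4 :py-2
  ;; ordinary CSS for anything the utilities don't cover
  {:background-color "#0a6e5a"
   :color "white"})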

Composability

There are two common uses for Ornament. The first is defining simple hiccup components, and the second is defining multi-layered hiccup components.

Consider a simple hiccup component:

(require '[lambdaisland.ornament :as o])

(o/defstyled freebies-link :a
  {:font-size "1rem"
   :color "#cff9cf"
   :text-decoration "underline"})

The defined component can be used with hiccup syntax:

;; Hiccup
[freebies-link {:href "/episodes/interceptors-concepts"} "Freebies"]

Which renders as: <a class="lambdaisland_episodes__freebies_link">Freebies</a>

Now, for a multi-layered hiccup component example:

(o/defstyled page-grid :div
  :relative :h-full :md:flex
  [:>.content :flex-1 :p-2 :md:p-10 :bg-gray-100]
  ([sidebar content]
   [:<>
    sidebar
    [:div.content
     content]]))

If you look closely, [:>.content :flex-1 :p-2 :md:p-10 :bg-gray-100] is no longer specifying the styles of the :div itself, but rather those of the .content child elements directly under it.
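
For illustration, here is how the component above might be used (the sidebar and content hiccup are placeholders). The two arguments land in the render function’s [sidebar content] slots: the first is emitted as-is, while the second ends up inside the :div.content child that the rule above styles.

[page-grid
 [:nav "sidebar goes here"]
 [:p "main content goes here"]]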

High Performance

Ornament was designed with the concept of css-in-js in mind, allowing users to define all CSS within components while writing Clojure or ClojureScript. This significantly reduces the amount of global CSS.

When using Clojure and Ornament, all CSS classes are generated during the build phase, which is intuitive. However, when using ClojureScript, should the CSS also be generated during the ClojureScript build phase, or should it still be generated by JavaScript? Ornament chooses to generate all required CSS during the ClojureScript build phase. This approach simplifies the build process, removes a lot of unnecessary styling definitions, and results in a smaller bundle size.
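
To make that concrete, here is a minimal sketch of one way to write the accumulated CSS to a file as a build step. It assumes Ornament’s defined-styles helper, which returns the CSS collected from every defstyled component loaded so far, and a hypothetical my-app.components namespace holding those components; in a ClojureScript project the same call would typically be wired into a build hook.

(ns build-styles
  (:require [lambdaisland.ornament :as o]
            ;; hypothetical namespace; requiring it registers its defstyled styles
            [my-app.components]))

(defn -main [& _]
  ;; dump every registered rule into a single stylesheet,
  ;; e.g. run as a build step with: clj -M -m build-styles
  (spit "resources/public/ornament.css" (o/defined-styles))
  (println "Wrote resources/public/ornament.css"))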

It originated from dissatisfaction and struggles

Ornament’s design and inspiration originated from frontend engineer Felipe Barros’ dissatisfaction and struggles with ClojureScript development, which led to extensive discussions. If you find Compass useful and fast, take some time to explore Ornament; the time you invest will pay off.

Do you have any software development challenges or frustrations? Why not talk to us?


Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.