Hacker News
Polylith is a functional software architecture at the system scale (gitbook.io)
116 points by laerus on March 19, 2022 | 82 comments



It is probably/definitely me, but I watched the long video and did not learn anything, then read the GitHub example for a few minutes and got it. I have a hard time understanding why ‘young people’ (or maybe all people outside myself, as it sometimes feels) enjoy videos; text is so much faster for everything except hands-on skills like learning how to render a wall. Then again, I never went to lectures at uni, preferring to read books, papers and syllabi; I found the latter much faster.

Anyway, congrats on the project; the idea seems like common sense, which is nice, but I can't see many people jumping through the hoops to adopt it; people are too opinionated (including me).


Thanks for the feedback, and I'm glad to hear that you understood the concept from the text documentation.

We decided to include both videos and text in the documentation because we know that people are different, and have different preferences for learning.

You're right that people are opinionated, and that it's difficult to convince them to try new ideas. However, I've been pleasantly surprised at how open the Clojure community has been to our concept and at the momentum it's starting to build.

It'll be interesting to see if other language communities have the same mindset.


It's great that you're doing this, really: more ideas on how to actually get to LEGO-like development (promised in software for a long, long time). But yeah, even more than with languages, I would think you need a massive following to get traction. Happy the community likes it!


> More ideas on how to actually get to LEGO (promised in software for a long, long time)

I think the Lego analogy needs to be completely abandoned for this reason. Every effort at software modularization has been analogized to Lego at some point, and yet they all fall short. No architecture claims to be the equivalent of twine, glue and toothpicks, even if that's what it ends up being. So any N+1 language/framework/architecture promising it's going to be Lego (for real this time), even if it does actually deliver, will not be taken seriously when it makes the claim.


Well, good ABIs/APIs and, let me really clearly separate this from the word ‘good’, standard OS GUI frameworks like Windows WPF and macOS Cocoa are as close to Lego as I have seen so far for frontends. Web frontends are where it all goes terribly wrong; nothing fits and everything breaks when you put X into Y, at least when X and Y are from different authors. It is a shit show. But for backends and internal code it can be done with good APIs. Problem is… not many people can write those, so you end up with something that says Lego on the box but is in fact Lego after your ‘always was a bit weird’ cousin played with it with his flamethrower.


Polylith targets the backend, so I will concentrate my answer on that. I agree that good APIs are important. Polylith helps you share code because it's built around "movable"/decoupled bricks that can be reused across services (e.g. different kinds of APIs). Reuse is a hard problem to solve, and you need LEGO-like building blocks for that.


As a Clojure dude, looking forward to reading. Would appreciate the docs in a single pdf for offline reading. Looks like there are tools that consume a gitbook but would be great if you could just publish a version yourselves!


Not sure if the link will work, but here's the result of clicking "Export PDF" from the right sidebar, which is only visible on wider screens.

https://storage.googleapis.com/download/storage/v1/b/gitbook...


You can download a PDF version of the documentation directly from GitBook. The option is only visible on desktop: when you land on the website, the topmost item in the right-hand menu is "Export as PDF".


I couldn't agree more.

I produce a bit of music, and one thing I can never remember how to do without looking it up is side-chain compression. I don't use it very often, but when I do, I always have to Google what the controls/routing are to be able to do it.

In the beginning it was forum posts, maybe with a couple of images to help illustrate where to click, etc. Now it's 10-minute YouTube videos, replete with needless introductions about the person creating them and hat-in-hand begging for likes or subscriptions. You have to go digging to find what you're looking for.

I've been experimenting in this space a bit, though (weirdly enough, with video game tutorials). I'd be interested in how others find this format for learning or understanding concepts. Here's an example:

https://apex-fundamentalists.vodon.gg/videos/0e074984-f9f0-4...


>"It is probably/definitely me but I watched the long video and did not learn anything"

This is because the video says how nice it would be to have a banana, without the slightest description of what the banana is or why we need it. Absolutely useless video. And yes, I very much prefer reading a document to understand concepts.

As for the project itself: I did a very brief reading and fail to grasp any particular value. Basically, what I understood is that if you organize your code in some particular way, the tooling will do some particular things for you.

Maybe if they'd presented a clear example, I would be willing to dig in deeper. For now I've "invested" a couple of minutes watching that "nothing" video, and I do not think I'll invest any more time. Maybe it is my loss and their idea is amazing, but they definitely do not make it easy to understand what's in it for me.


Before you completely give up on understanding the concept, I would like to suggest you try the 10-minute "Polylith in a Nutshell" video: https://www.youtube.com/watch?v=Xz8slbpGvnk

If you watch it at 1.5x speed, then it'll take less than 7 minutes of your time.

If you're still not getting the "why" of Polylith after this video, then I'd be very grateful if you could give us some quick feedback on what you're missing. That will help us figure out how to explain the concepts better in the future.


That was the video I watched and I just don't get what is different about polylith.

The video spends too long telling me things I already know - e.g. what are functions / objects / layers / etc.

Tell me something I don't know - such as what makes Polylith unique. The video says this is "Components". Well what's so different about those? They sound like microservices or perhaps high-level objects. Or perhaps interfaces.

Show an example!


The idea with that part of the video was to couch the new concept (components) within the framework of existing concepts (functions, objects, layers, etc.), to help people connect the new concept to the right place in their knowledge graph. Though your feedback makes me think it didn't work as well as I'd hoped.

Components do have attributes in common with microservices and with stateless objects (e.g. a public interface and encapsulated implementation).

Where components differ from microservices is that a component's interface is simply a collection of functions, rather than network-facing endpoints. This means that multiple components can be deployed into a single artefact, keeping deployment complexity and cost down.

Where components differ from objects is that a component is a higher-level abstraction, closer in scope to a microservice.

However, we think that Polylith's biggest differentiator is the separation it gives between development and production. Let's say you have a Polylith project with 100 components, which you deploy in production across 10 services. You can work with all 100 components in a single development environment, and test them as though they're a monolith, even though they're not deployed that way in production. It's a lot like building systems with LEGO, and we think it's just as fun!
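To make the LEGO picture concrete, a hypothetical (and much smaller) workspace could be laid out like this; all the brick and project names here are invented for illustration:

```
workspace/
  components/
    user/        # interface + implementation
    invoice/
    email/
  bases/
    rest-api/    # exposes component functions over HTTP
  projects/
    user-service/      # bundles user + rest-api
    billing-service/   # bundles invoice + email + rest-api
```

During development you work against all the components at once; the projects only decide how they're grouped into deployable artefacts.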


>"Where components differ from microservices is that a component's interface is simply a collection of functions"

Is this a "collection of functions" or a "collection of function declarations"? If you want to communicate concepts, this is not the best example. And in either case, many languages already have solutions.


Technically, a Polylith interface is a collection of "pass-through" functions, each of which delegates its function call to an "implementation" function within the component.

I agree with you that many languages have syntax to support building interfaces. However, I've yet to come across one that offers all the benefits of Polylith's approach with components.


>"Before you completely give up on understanding the concept, I would like to suggest you try the 10-minute "Polylith in a Nutshell" video"

My lines about the video were exactly about this particular video.


I was confused because you were replying to a comment which mentioned the "long video", so I assumed you were both talking about the forty-minute one on the homepage of the documentation. My mistake.

Did my response to `cjg` in this thread help to clarify anything for you?


Not really. I suspect that you might have done something that has value in the Clojure ecosystem (keep in mind that I do not program in Lisp).

In my ecosystem I simply do not have the problems you suggest we have, and the functionality you relegate to that "component" idea is not something new or unavailable.


Which ecosystem do you work with? How does it solve the problem of sharing code across service boundaries?


You've answered this question yourself - "Technically, a Polylith interface is a collection of "pass-through" functions, each of which delegate their function call to an "implementation" function within the component."

I work mainly in C++, Delphi/Lazarus and JavaScript; many others as well, but not very often. C++ and Delphi handle the domain just fine. JavaScript is a bit less convenient, but still easy.


I'm not sure we're understanding each other. When I asked "How does it solve the problem of sharing code across service boundaries?", I meant how does your language or architecture enable you to share code across separately deployed artefacts, for example across two microservices.

There are three approaches that I'm aware of for solving this problem: 1) copy/paste the shared code between the two repositories; 2) freeze the code into a library that both services depend on; 3) keep both services in one repository, extract the shared code into a module or component that both services depend on, and deploy the services as separate artefacts.

1) is bad for obvious reasons, 2) adds unnecessary friction to the development process, and 3) is how Polylith solves it.

I was wondering if you've come across another way to achieve 3), or perhaps a fourth approach?


I am a vendor. I develop software products from scratch. Sometimes I "hire" myself, as I have a couple of products that make me money just by owning and maintaining them.

Pattern goes like this.

1) A client hires me to develop a product. After the initial phases I determine which parts (code) of other products I can salvage for reuse. At this point I either copy those parts, or get the latest version if those parts are 3rd-party libraries, which at this point is basically the same. So between different projects I always use approach (1). You are free to laugh at me. I have been developing for 40 years already, and this approach saves me from countless headaches related to "purism". Unless there is a really, really big reason, I do not want to change something in a piece of shared code and then test countless permutations in unrelated projects. Thanks, but no thanks. Also, once my product reaches maturity, I usually transfer it to the client.

2) Stage 2 - working on a single product. Even though I avoid "microservices" as deployables like the plague, the product might still consist of a few physical executables/services. In this case the "components" code is shared as code, and the code uses interfaces (or a simulation thereof, in the case of JavaScript for example) when I feel I need to abstract some "component" so that I can replace it. So this is your (2), and it poses zero friction for me, unlike what you claim.

The types of products I develop range from firmware, to game-like native applications with device control and accelerated graphics and multimedia, to enterprise backends, etc. I have way bigger things to deal with than nitpicking over whether the concept of a component via an interface/library/etc. poses a mental/maintenance challenge for me (hint: it does not).


Thank you for explaining how you work, and I would never laugh at a developer for copy/pasting code. We've all done it!

I'm still intrigued to understand exactly how you share code across services. Let's say that you've written a piece of code for logging, which you want to use in both service A and service B. Do you package it up in a library? If so, doesn't that mean you have to place that library in a repository, so both services can access it? And doesn't that mean that if you want to make changes to the logging code, you now have to publish a new version of the library and remember to update both services to depend on the new version?

That's the friction I'm talking about.

With Polylith, the logging code would live in a component that's directly accessible to all the other components in the system. That's because Polylith lets us work with all our components as if they're a monolith (even if we chose to deploy them as multiple services). This means that when we update the logging component, there's zero friction to update any impacted components in the services.

If the change only affects the logging component's implementation (and not its interface) then no other components need to be updated, and we can just redeploy the system. If it's a breaking change to the interface, then we can immediately fix the impacted components within our monolithic development environment. If the change is a refactor of the logging component's interface, then the other components will be automatically updated by our refactoring tool!

Hopefully that explains how Polylith solves this challenge so elegantly.


>"Do you package it up in a library?"

I package it mostly as code. For example, in C++ those would be the header file "xxx.h" as the interface and "xxx.cpp" as the implementation. If I only change the implementation, there is no need for me to touch anything else. The build system will figure out which services (executables) need to be rebuilt and relinked. It will then build, deploy, run tests, etc. while I sit and pick my nose.


Agreed wholeheartedly. It’s something that has been bugging me for a while. I can consume information maybe 3-5x faster (or more?) via text vs video but every trend I can see points to video being the preferred medium for Gen Z.

Is it because they’re “video native” and know how to navigate it better? Surely they’re not sitting through 2-minute introductions. Do they inherently understand how to return to the important parts of a video for future reference? My brain just does not operate like theirs, it seems.


I wonder the same; often I look up programming things and the first links are videos. So I try to watch one, and immediately I think: how can anyone do work like this? I needed a flag for ls and was too lazy to read the man page; the result was already put on one line at the top by Google. Of course, I needed only 5 chars. Then I clicked the video that answered the question, for fun; I will try to find it, but I think it was 15 minutes of drivel… I just don’t get it. I mean, sure, I understand why the video makers do it, but who consumes this crap? And why? And it is getting worse. I like watching live streams of programming, but when you actually want to learn something and therefore need to enter code yourself, there is no more inefficient medium than video, imho.


It could be a habit, from their brains being wired for video by social media.

But following your idea of finding a reason, which assumes it's actually more beneficial for people in that class to learn from video than from text, you need to consider two alternatives.

The first is that they are better at video; the second is that they are worse at long-form text.


Oh yea, I totally assume that they know what they’re doing. It’s just clear that I’m not wired that way. I didn’t grow up with videos accessible to the point of ubiquity like Z has.


I guess my point is that it's possibly/probably an objectively worse way of ingesting information. It's always going to be faster to skim text to find what you need than to watch a video at high speed.


But is it? That seems “obvious” to us “old” millennials+. But is an entire generation all independently wrong, and doing the slow thing? If so, why? Is there evidence supporting this that we can look at? All genuine questions, I don’t know the answers. But there’s something interesting happening there, that I do know.


Video is nice for multitasking. I can't read and do anything else at the same time. I can listen to a video, glance at it, and do other things too.


There’s absolutely no way you’re internalizing what’s in that video while it’s playing in the background.


That's my issue too; it is fine for tv shows and movies to register if they are worth watching 'for real', but also there, if I am working on something hard, I often notice that 4 episodes have passed and I have 0 clue what happened.


Sure there is. I code while watching videos all the time.


I'm a member of the team that created Polylith, so feel free to ask questions or give feedback.

Polylith has been gaining momentum within the Clojure community. As an example, you can follow along with Sean Corfield's journey migrating the World Singles codebase to Polylith: https://corfield.org/blog/2021/02/23/deps-edn-monorepo/

There's an active Slack channel, with people and teams using Polylith on both commercial and side-projects: https://clojurians.slack.com/archives/C013B7MQHJQ

Outside of the Clojure community, there's a project to port Polylith to Python, using Poetry: https://pypi.org/project/poetry-polylith-plugin/

We'd love to see it get ported to other languages too!


Haven't had time to dive into the code yet, but the write-up is very thorough!

A couple points though:

- I wish you had made it clear from the start that the architecture applies to individual applications/services.

I wondered if "components" were made to represent only libraries of functions, or if they could be their own server running alongside other services. You need to wait until the very end to get the idea that a polylith distributed/microservices architecture means "many polyliths next to each other".

Which means that Polylith might help organize the _code_ of a microservices monorepo, but will be of no help with the _execution_ problems of it (e.g. where should the data live, how many services do you have to traverse, how do you deal with desynchronization and caching, etc.). In my experience those are the thorny problems, not so much how your libs are laid out - but maybe that's because we already use similar ideas coming from "clean architecture", "hexagonal", etc.

- Am I right to understand that code in "components" should be concerned mostly with the business domain and free of "big" frameworks (Spring/Rails/etc.), and that only "bases" should be concerned with those?

- What is the story when you need to fake a component that represents a distributed service? The "component" folder will have an API, and an implementation that targets the real service; but should it also provide a "fake" implementation? Or would that be a different component sharing the same API?

- Are there sides of the architecture that depend on using a dynamic language like Clojure? Have you actually tried it with other languages?

Thanks !


Hi and thanks for showing interest in Polylith.

- The main idea is to allow bricks (as we call the "LEGO bricks" - components and bases) to be put together into projects, from which we generate artifacts (libraries, tools and different kinds of services). Maybe we should have a sentence like that at the top of the first page of the high-level doc!

- The help the Polylith architecture provides with execution is that it makes it much easier to reorganise the code in production (by splitting, merging, or creating new services by reusing existing components) without affecting your development experience (which is always one single project). This is where the LEGO idea comes in. But you are right, it doesn't help you at all with the implementation of the code.

- A component can handle business logic, e.g. some part of our domain, others might manage integration with external systems, and others will be responsible for infrastructure features such as logging or persistence.

- The way you introduce a "fake" component (a component that can replace an existing component) is to create a new component that implements the same interface. So yes, it should be a new component that implements the same "API".

- We haven't tried it with languages other than Clojure, but the idea is applicable to other languages because it's not about language syntax. David Vujic is working on tooling support for Python: https://davidvujic.blogspot.com/2022/02/a-fresh-take-on-mono...


I tried looking into Polylith a few months ago, when I first discovered it. I recall the feeling of experiencing information overload. Though, I am not a Clojure expert either, so that probably compounded my issue.

I believe it would be beneficial to supply more example projects. I found one, https://github.com/furkan3ayraktar/clojure-polylith-realworl..., however, it uses SQLite. Maybe an example which uses Postgres, and Redis for caching would be more real-world? Also, maybe a few deployment examples? Heroku, AWS, GCP, Kubernetes?

One question I have: how are ENV-variable-driven configurations handled? For example, if I need a `DATABASE_URL`, etc. I recently ran into an issue, https://discord.com/channels/313110246643990528/313110246643..., in my own Clojure web service attempt where I could not use `def` to define the individual variables, since they are evaluated at uberjar build time. I eventually converted it to a `defn`, but then it gets evaluated every time it's used.


Really enjoyed your team's breakdown of different architectures on https://polylith.gitbook.io/polylith/conclusion/current-arci...


Thanks! It was a bit nerve-racking to give traffic light ratings to existing architectures that people use and enjoy, but I hope we were fair in our assessment.


I really like the ideas behind Polylith. I'm building mainly data transformations in Apache Spark and I'm searching for some nice abstractions that could help to build cleaner and more maintainable code, have you tried it out in a pure data processing context as well?


What company is behind Polylith?


There is no company behind Polylith, just me (Joakim Tengstrand), Furkan Bayraktar and James Trunk.


Ah, shame. I mean - good for you and amazing work! I was hoping to apply to work for such an innovative and brave company :D


Thanks!


Hi there! I mean to be constructive with my post and not harsh, so please take it as intended :) TLDR; show and tell is key. You can't just tell me about your project, you have to show me to make it real in my head.

I invested about 40 minutes into the video in TFA and another 10 minutes in the "nutshell" video you linked. I haven't heard of Polylith before today. I really enjoyed your presentation and I thought it was of a very high quality as far as the amount of practice you put in and the visuals you created to accompany your words.

However, I have to be honest and say that the quality of the presentation is really the only thing that kept me watching. I figured that if you spent so much time on the presentation, there must be something here worth hearing, although I did want to abandon the effort and move on at several points because I just had no idea what you were trying to convey.

Or let me put it this way: I did know what you were talking about, but I continually felt like I was missing something because what you were telling me was obvious to me.

In the 40 minute video some notes I had in my head were that:

- First of all, you need to cut out the guy who immediately says he forgets your name. That's not a good way to start off the video.

- But more importantly, you only start talking about Polylith at minute 8 of the talk. Really, I had to double-check at a couple of points to make sure I was watching the right video, because you spent most of the first part of the presentation talking about a different project called Code City, and I started to think this was a presentation on that. What you do at minute 8 you should have done immediately.

- When you do finally introduce Polylith, you don't tell me anything specific or interesting. The words you use are quite generic, and would be used by almost any project to describe itself. All projects want to be simple, flexible, and productive.

- At 12 minutes in, things started to click for me when you actually started getting more specific. I don't know what your YouTube statistics look like, but I'm willing to bet that by this point you've lost almost all of your viewers.

- At 27 minutes in, I got really excited because the MC came back to say that we were going to move past concepts and theory and get a demo of the actual product. But what followed was not a demo at all, just further descriptions of the system.

- Finally, I'm struck that you introduce your project Scrintal, which is described as a video transcription service, but I'm not actually using Scrintal as I watch your video. Instead I'm using YouTube. This is the perfect opportunity for you to demo your actual software. I literally had to look at the video captions to understand what the MC was saying, and the captions, I believe, are wrong. Why aren't you using your transcription software to transcribe the video that is selling me on your transcription software?

Moving on to your 10 minute nutshell video, it's more of the same. After spending almost an hour exploring the content you've posted about your product, you've actually never demonstrated your product to me once!

One thing about the nutshell video, and your videos in general, is that I think you should abandon the Lego metaphor. First, it doesn't have the intended effect when you display the toothpick White House and the Lego White House. When I see these two, you want me to think about how difficult the toothpick version was to make, and how we shouldn't construct things that way in software, but all I can think about is how detailed it is compared to the Lego model. The toothpick model isn't a mess, it's a masterpiece. I admire the person who made it, but you're telling me that I shouldn't want to build software with that model.

Secondly, the Lego analogy is overdone. As I said in another post, every architecture has made this claim, even when they turned out to be toothpicks. So when you make it, it's going to be met with skepticism, and since you don't actually demo any of your software in these videos when you make the claim, you're going to be dismissed. Especially when your videos rest so heavily on the metaphor. My opinion is: if you take every instance of describing Polylith as Lego and replace it with a demo of your project showing, in concrete terms, how it's better on a real-world example (not your startup), you will be in a 1000% better place.


Thank you for your comprehensive and detailed feedback!

I'll leave Joakim to respond to your comments about his presentation, and I'll respond directly to your comments about the nutshell video.

> you've actually never demonstrated your product to me once

That's partially true, although I'd argue that by showing the components and bases from an example system, we are showing "the product". Perhaps you're thinking about showing code or filesystem structure from a complete production Polylith workspace? We do that in other videos, but didn't think it would be a good fit in the 10-minute "why?" introduction.

> The toothpick model isn't a mess, it's a masterpiece. I admire the person who made it, but you're telling me that I shouldn't want to build software with that model.

I agree that the toothpick model is a masterpiece! However, the reason we shouldn't want to build software in the same way comes down to one word: change. The real White House doesn't change shape very often, but all the software I've ever been involved in building does. That's why building software with LEGO pieces is so much better than building it with toothpicks and glue. Which is exactly why it's such a strong metaphor.

> Secondly, the Lego analogy is overdone. As I said in another post, every architecture has made this claim, even if they are actually toothpicks.

You might be right about this, but most of the other recent architectures we'd come across seemed to steer away from talking about LEGO (onion, hexagon, DDD, microservices, etc.), so we were hoping it wasn't overused.


Hi and thanks for your feedback!

- The Code City project is a cool way of visualising code and the idea was to show how we normally organise code today, and what problems it creates (development experience + sharing). I agree I could have been more clear about that.

- This is the longer video where I (and Furkan) try to explain what Polylith is, what problems it solves, how it works, and why you should use it. The sad fact is that it's super hard to explain Polylith (we have given about 15 different presentations), and I think you need to try it out yourself to get an idea of what it's like to work with and which problems it solves.

I will leave to Furkan to answer your questions about Scrintal.


Honestly, I don't know enough Clojure to understand the proposal, and all the explanations are centered around it. My current understanding is that what you propose is a prescribed way of doing things, mixing several pre-existing concepts like monorepos, convention over configuration, and some generic dev principles that are universally desirable, like components that "ensure encapsulation and composability but at the same time they are simple, easy to reason about, and flexible". Nothing wrong with that, all good principles, but I fail to understand how you enforce them. Because usually every monolith or microservice out there starts with that in mind and ends with spaghetti, because dev happens. At least that's my experience with Elixir/Phoenix umbrella apps, which, as per my current understanding, are the closest thing I've seen to Polylith.

However, I would be very curious about your traffic-lights evaluation. Generally, I would consider a "large" app something like 200+ microservices (yes, I'm biased towards that). In your FAQ, you describe an app that needs 4 services. Sorry, that doesn't qualify as a large distributed system.

First, I really doubt you can find a monorepo that is one language only (sorry, I didn't understand how Polylith would work if I have a codebase that's Scala, Elixir, Rust and Python, for example). Second, I don't understand how you handle builds. Third, I don't understand how you handle deployments and other operational concerns.


Your understanding of Polylith is essentially correct. It is a combination of existing ideas, such as monorepos, convention, components, interfaces, encapsulation, and static code analysis. However, I'd argue that what emerged in Polylith from those ingredients was more than the sum of its parts - due to how those particular concepts resonate with each other.

For example, because of the forced conventions, it's trivial for the Polylith Tool to perform static code analysis on a Polylith codebase. This means that the tool can identify the subset of tests to run, based on the components that have changed since the last run. Which leads to a fast-feedback loop, and encourages both good testing practices and fine-grained component modularity.
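The test-selection idea can be sketched in a few lines of Python; the dependency data below is invented for illustration (the real tool derives it from static analysis of the workspace):

```python
# Given a map of which components each component depends on, find every
# component whose tests must re-run when a set of components changes.

def affected(deps, changed):
    """Return the changed components plus everything that
    (transitively) depends on them."""
    result = set(changed)
    grew = True
    while grew:
        grew = False
        for comp, uses in deps.items():
            # If comp uses anything already in the affected set, it joins too.
            if comp not in result and uses & result:
                result.add(comp)
                grew = True
    return result


# Hypothetical workspace: invoice and email both use logger; user uses nothing.
deps = {
    "logger": set(),
    "invoice": {"logger"},
    "email": {"logger"},
    "user": set(),
}

print(sorted(affected(deps, {"logger"})))  # ['email', 'invoice', 'logger']
```

So a change to "logger" triggers tests for it and its two consumers, while "user" is untouched, which is the fast-feedback behaviour described above.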

Polylith gives you a system-level architectural building block; the component, which encourages a modular design and separation of concerns. However, you're right that it's still possible to create spaghetti code with Polylith. All it would take is poorly designed components, with bad names, multiple reasons to change, and exposing their state everywhere. However, I'd argue that when you give someone a well designed tool (like Polylith), then they're more likely to craft a well designed product with it.

To understand how builds and deployments work with Polylith, I'd recommend reading the "Workflow" section in the Polylith Tool's documentation: https://polylith.gitbook.io/poly/workflow/shell (especially "Build", "Git", "Continuous Integration", and "Testing").


It feels like what you're going to get out of using it is basically a set of conventions, best practices, rules and tooling support that make doing the right thing (for some specific opinions of right, but I think I share enough of the opinions to leave it at that caveat) the natural and obvious thing to do.

I'm sort of reminded of adopting a formatting and linting stack across a codebase - so long as it mostly makes mostly good choices, the shared conventions are usually a net win overall just because it (a) makes code more accessible across the team (b) gives you a solid default way to resolve a lot of choices.

The whole "can -reliably- figure out which tests need to be re-run" part specifically sounds like it would be a very nice thing to have.

I suspect your biggest challenge in terms of adoption will be the requirement for development group wide buy in to get the full benefits, but that's a problem that's kind of inherent to the goals you're trying to achieve here and so I shall simply wish you good luck with that part.


That's right, it's a highly opinionated approach to building software, which comes with a host of benefits.

Including my favourite; a complete untangling of your development and production environments. With Polylith you always develop your system as a monolith (because that's the most effective way to build software), but you're able to deploy it as multiple services (because that's sometimes the most effective way to run software). It turns out that separating deployment complexity from development complexity is a game-changer, and something that I haven't come across from other architectures.

It's true that you don't reap all the benefits of Polylith until your entire codebase uses the same structure, which feels a bit like "all or nothing". However, many of the benefits are unlocked "as you go", so even converting one or two existing microservices to Polylith will feel like a nicer codebase to work with.

> so I shall simply wish you good luck with that part

Thanks!


My understanding of Polylith so far, in the context of "untangling development and production", is that you might put together components and bases differently in dev vs prod. If that's correct, untangling development and production seems to run counter to the idea of trying to match development and production as closely as possible (e.g. with containers to mimic environments better) to make testing more accurate/realistic. Based on my very limited experience, I don't really understand why you would want to drop dev/prod parity entirely.

Having said that, I'm sure it's possible to run the prod version(s) locally and test them, but then don't you lose the benefits of separating deployment complexity from development complexity since you're now running the prod version locally anyways?


When you start a Polylith codebase from scratch, you can start implementing your business logic and your components before you have even decided how to execute the code in production. The components can talk directly to each other, and you only need the development project to begin with.

Then you may decide that you want to expose your functionality as e.g. a REST API, so you now need to create a base and a project where you put that base and your components. Now you have two projects, one for development and one for your REST service, which both look the same. Some time later you decide to split your single service into one more service. You still have all the tests set up in development, but the way you run the code in production has changed. How you execute your code in production is seen as an implementation detail in Polylith.

While developing the system, it's really convenient to work with all your code from the single development project, especially if your language has support for a REPL. If you need a production-like environment, then you can have that too, but with that said, you will most probably spend most of your time in the single development environment, because that's where you are most productive.
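To make the component/base split concrete, here is a minimal sketch in Clojure. All names are hypothetical and the HTTP server plumbing is elided; the point is that the base only translates the external API into calls to component interfaces:

```clojure
;; components/article/src/com/example/article/interface.clj (sketch)
(ns com.example.article.interface)

(defn get-by-user
  "In a real component this would delegate to an implementation namespace."
  [user-id]
  [])

;; bases/rest-api/src/com/example/rest_api/core.clj (sketch)
;; The base only wires the public API to component interfaces.
(ns com.example.rest-api.core
  (:require [com.example.article.interface :as article]))

(defn get-articles
  "Handler for a hypothetical GET /articles route; all business logic
   lives in the article component."
  [user-id]
  {:status 200
   :body   (article/get-by-user user-id)})
```

In the development project, the same interface functions are called directly from the REPL; only when a service project is created does a base like this enter the picture.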


I see, that makes sense. Basically, there's nothing stopping you from organising your code to make it easier to follow/step through the business logic (dev), making your changes and then double checking against a slightly different organization of your components and bases (prod), right? You may end up having more projects than deployed services, but that's a non-issue.

Hypothetically, say I'm big corp A with my thousands of developers and hundreds of microservices that are just Polylith projects. Further, let's say I have N services that depend on component B. Now, if one team needs to make a breaking change to component B (say you need to change the interface), how would you suggest handling it in the Polylith architecture? Would you version each component so that services can pin the component version? Or would you create a new component? Something else?

Intuitively, versioning sounds like a mess of thousands of repos. On the other hand, creating a new component would set a precedent that might be used to justify an explosion of components, which could make your workspace a mess of almost identical components. While refactoring sounds like the way forward here, if you've dug yourself a hole with bad design choices then Polylith seems like it would give you more rope to hang yourself with. Otherwise you have to coordinate with all the teams needed to figure out how to modify the N services depending on component B. With typical microservices, my understanding is that this wouldn't happen so long as the service's API remained constant.


Yes, one way of making two projects slightly different is to have two different versions of the same interface (all components expose an interface, and they only know about interfaces) and then use different components (that implement the same interface) in the two projects. The development project can also mimic production to some extent, by using profiles (see https://polylith.gitbook.io/poly/workflow/profile).
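As a rough sketch of what "two components implementing the same interface" looks like in Clojure (names are hypothetical): both components define the same interface namespace, and each project puts exactly one of them on its classpath.

```clojure
;; components/user/src/com/example/user/interface.clj
;; The real component: the interface namespace only delegates to core.
(ns com.example.user.interface
  (:require [com.example.user.core :as core]))

(defn fetch-user [id]
  (core/fetch-user id))

;; components/user-stub/src/com/example/user/interface.clj
;; A stub component exposing the SAME interface namespace, which a
;; development or test project can include instead of the real one.
(ns com.example.user.interface)

(defn fetch-user [id]
  {:id id :name "Stub User"})
```

Callers require only com.example.user.interface, so swapping the component is invisible to them.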

You are right that you need to handle breaking changes in some way. Refactoring all the code that uses the changed component is probably the best way to go in most cases, because it keeps the code as simple as possible. Second best (or best in some situations) is to introduce a new function in the existing component if it's not a huge change (if it is, then a new component can make sense). This can be done in different ways, e.g. by adding one more signature to the function or by putting the new version in a sub-namespace, e.g. 'v2', inside the interface namespace.
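A sketch of the 'v2 sub-namespace' option mentioned above (names and the changed signature are hypothetical): the new version lives under the interface namespace, so existing callers are untouched.

```clojure
;; components/user/src/com/example/user/interface/v2.clj
;; Breaking change published as a sub-namespace of the interface;
;; callers of com.example.user.interface keep working unchanged.
(ns com.example.user.interface.v2
  (:require [com.example.user.core :as core]))

(defn fetch-user
  "v2 additionally takes an options map (a hypothetical breaking change)."
  [id opts]
  (core/fetch-user id opts))
```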

You will face similar coordination problems with microservices too. I worked on a project where we had around 100 microservices. To share code between services we created libraries that were shared across services. Sometimes we found bugs because some services used an old version of a library. Then we went through all the services to make sure they all used the latest version of all libraries, and that could take two weeks for one person (full time)! You had to go and ask people about breaking changes that were made several weeks ago, or try to figure it out yourself.

The alternative is to not share any code and just copy/paste everything (or implement the same shared functionality from scratch every time), but that is probably even worse, because you don't get rid of the coordination needs, and if you find a bug in one service, you have to go through all the code in all 100 services manually to see if any of the other 99 services contain the same bug, or hope for the best if you don't.


Thanks for the explanations and the perspective! I'll definitely need to play around with Polylith :)


Large monorepos tend to already support such analysis via tools like Bazel.


I have a feeling that the gitbook could use an editor who would throw away half the content and reword the rest to be concise and concrete.

For example, "Polylith in a Nutshell" mentions (pure) functions, but they never come up again.


Thanks for the feedback, we appreciate it.

It's tricky to pick the right level of detail when explaining an idea like Polylith. If we could precisely identify the level of understanding of the reader, then we could tailor the content to perfectly match their experience level and vocabulary. Unfortunately, we can't do that, so we have to pick a middle ground, where we probably over explain some concepts, and under explain others.

However, we'd be grateful if you could give us some specific examples of sections or pages that you didn't find useful or enlightening.


Re: choosing the right level of detail, I think it's preferable to start with people who already have all the necessary background knowledge (but don't necessarily know Clojure since the architecture is intended to be language-agnostic). Once that works, the documentation can be augmented to be easier to understand to people who don't have the background knowledge yet.


It seems there are several independent ideas somewhat mixed together.

The idea number 0 is component interfaces. They seem equivalent to plain old interfaces in all the other languages. (Except some reflection may be necessary in statically typed languages to process the lists of functions in a generic way?)

The idea number 1 is bases. They are domain-agnostic adapters that convert component interfaces (aka language-native interfaces) into language-agnostic interfaces. (Except they seemingly do include domain logic because they combine several components before exposing them?)

The idea number 2 is project layout, I think? It appears to be more of a supporting convention at best, not sure.


We have a similar question in the FAQ, so I just copied the Q/A here (https://polylith.gitbook.io/polylith/conclusion/faq)!

Question: What parts of Polylith are important and what are just “ceremony”?

Answer: The short answer is that all parts are needed:

interface: Enables functionality to be replaced in projects/artifacts.

component: The way we package reusable functionality.

base: Enables a public API to be replaced in projects/artifacts.

library: Enables global reuse of functionality.

project: Enables us to pick and choose what functionality to include in the final artifact.

development: Enables us to work with all our code from one place.

workspace: Keeps the whole codebase in sync. The standardized naming and directory structure is an example of convention over configuration which enables incremental testing/builds and tooling to be built around Polylith.

I also want to add one thing: Polylith combines the LEGO-like blocks of code (components/bases) outside of the language itself. In Clojure we use tools.deps, but in other languages we would use other tools, like Maven in Java, to combine the different source directories into projects (which are built into artifacts in the end, like libraries, tools and services).
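For reference, a Clojure Polylith workspace follows this directory convention (the brick names here are hypothetical), which is what makes the tooling and incremental builds possible:

```
workspace/
├── bases/
│   └── rest-api/          ; src, test, resources
├── components/
│   ├── article/           ; src, test, resources
│   └── user/
├── development/
│   └── src/               ; scratch code for the development project
├── projects/
│   └── rest-api/
│       └── deps.edn       ; picks the bricks this artifact needs
├── deps.edn               ; the development project
└── workspace.edn
```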


To save some time for those working on frontends, from the FAQ: "There are several reasons why we don’t support Polylith for the frontend at the moment. One of them is that frontend development has other needs than backend development." https://polylith.gitbook.io/polylith/conclusion/faq


In a sense, some of it could be adapted, but by relaxing the idea of "components are pure functions independent of a base" to "components are pure functions producing data usable by a given base using a given UI framework".

For example, in a React/Redux app, you would have "pure" components representing the business logic of your app; either a component or a base that actually stuffs them into a Redux store; multiple "React components" that return JSX; and at least one "React base" (or React/Redux base) that ties everything together.

Different projects would be "your real application", "your dev server connected to local stub services", "your storybook execution", etc...


Cool. A way for me to "market" good software design to others without them realizing it's just normal modular design with reasonable best practices. Sadly most people need a fancy name and slide deck for this.


"Normal" good software design takes you part of the way, but it doesn't give you LEGO-like building blocks, and it will still be hard to share code between services. Polylith gives you a single development experience that you would not otherwise get. Take a look here to see what I mean: https://polylith.gitbook.io/polylith/architecture/bring-it-a...


Is there tooling or something special around this? I'm not trying to be a curmudgeon, but I'm having a hard time seeing what is novel here other than applying a name to component based modular software design.


Have you read the "Advantages of Polylith" page? https://polylith.gitbook.io/polylith/conclusion/advantages-o... It tries to summarise the benefits from both a development and deployment perspective.


That page doesn't clarify much to me. How exactly do you deploy pieces of the app as different services, while keeping it a monolith in development? Especially this thing not being a framework. Standard go/node modules give you "lego-like" pieces too. The real world example only has one "base" that puts everything together under a single API, so doesn't help understanding how this would be deployed as multiple services.


You can get an idea by looking at the Production systems page (https://polylith.gitbook.io/polylith/conclusion/production-s...) where each column in the first diagram is a project (deployable artifact / service). All components are specified in a single file, like the configuration file for the poly tool itself: (https://github.com/polyfy/polylith/blob/master/projects/poly...).

I try to explain it in the "Bring it all together" section also: https://polylith.gitbook.io/polylith/architecture/bring-it-a...


Those examples don't really tell much more. I'm guessing there are a lot of assumptions about how Clojure projects are structured.

How exactly are these deployed? Does it use k8s, Docker? Does it integrate with some particular CI setup? How can this be language agnostic? How can a single codebase be separated into multiple running http services without some kind of overarching routing framework? How do services communicate?


In Clojure you put your functions in namespaces (sometimes named modules or packages in other languages). In an object-oriented language you put your classes in namespaces too, but the way you would implement Polylith in an OO language is to expose static methods in the component interfaces, which would have to live in a class with the same name as the interface.

Polylith doesn't help you with the deployment of the project; that code you have to write yourself. However, it's possible to calculate whether a project has been affected by the latest changes (the 'poly' tool that we have built for Clojure supports that, but it can be implemented for other languages too). What you get is support for incremental builds and tests: only build the projects that are affected by the change, and only run the affected tests.

Polylith is an idea about how to decouple and organise code, in the same way that FP and OO are ideas that can be materialised in different computer languages.

The whole codebase lives in a single repository, but all components (and bases) have their own src, test and resources directories. We then pick and choose from these sources to create projects from where we build artefacts.

All projects live in the same repository, in a workspace. If you structured the code the "normal" way, each project would live in its own repository with a single src directory (+ test and resources), and you would have to put all the code into that single src directory. What Polylith does is let each project specify (in a configuration file) which set of src directories it needs, instead of putting everything into a single src directory. This is supported out of the box in Clojure, but in e.g. Java, if you use Maven, you need a Maven plugin for that.
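In the Clojure tooling, that configuration file is the project's deps.edn, which pulls in bricks via :local/root paths. A sketch, assuming the standard workspace layout (the brick names are hypothetical):

```clojure
;; projects/rest-api/deps.edn
;; This service is assembled from two components and one base;
;; another project can pick a different subset of the same bricks.
{:deps {poly/article  {:local/root "../../components/article"}
        poly/user     {:local/root "../../components/user"}
        poly/rest-api {:local/root "../../bases/rest-api"}}}
```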

The deployed services communicate with each other through their public API. No difference from what you would otherwise do.


> It tries to summarise the benefits from both a development and deployment perspective.

Call me a pessimist, but I'm not convinced. It seems a bit hand-wavey to me.


What do you do in cases of high coupling between components (parent/child) or between a base and a component? Do you just let the coupling happen, or does the coupled pair itself turn into an individual component/base? How do you decide whether the resulting highly coupled component + base is a component or a base?

Say in your Real world app you have a SQL for getting articles by user in store.clj. How do you decide if that SQL lives in article component or user component?


I would say it's up to you to decide what you think works best, as long as the bases only contain the code needed to expose the public API and you divide the rest of the code into components. The good thing with Polylith is that it's easy to refactor the code by splitting up existing components. When you add new functionality, you either add to an existing component or create a new one. Polylith allows you to make the components tiny with almost no performance penalty, and that matches our experience: components tend to be really small, or to be divided into many small namespaces.

To answer your second question, I would make the same considerations as in any other architecture. If the SQL only took a user ID and didn't contain business logic from the user domain, then I would just put it in the article component, but if the SQL had to contain logic from the user domain (more than e.g. just translating user name to user ID) then I would extract that logic into the user component. Exactly how you combine this into a query is an implementation detail that can be solved in several different ways (and it often gets quite ugly!). This is how I think about it in general, but there are always situations when it's right to break "rules"!


> If you prefer to start from the beginning, and take things at your own pace, then you're already in exactly the right place - just keep reading.

If you don't mind a drive-by suggestion as I keep reading, this should be the first bullet in the list. It's a tiny speed bump in the funnel, but I could feel my attention dwindle until I got to "oh, there's more to read if I click the large, friendly button".


That's useful feedback, thank you. I'll suggest that change to Joakim.


> Polylith addresses these challenges by introducing simple, composable, LEGO-like bricks, which can easily be shared across teams and services.

…Like functions and values?


Exactly! Components are an attempt at scaling-up functions to a system-level building block, whilst maintaining as many of the benefits as possible.


Cool, it's written primarily in Clojure!



