tengstrand's comments | Hacker News

Hi! This blog post, which I wrote recently, may clear up some questions: https://medium.com/@joakimtengstrand/understanding-polylith-...


Polylith targets the backend, so I will concentrate my answer on that. I agree that good APIs are important. Polylith helps you share code because it's built around "movable"/decoupled bricks that can be reused across services (e.g. different kinds of APIs). Reuse is a hard problem to solve, and you need LEGO-like building blocks for that.


When you start a Polylith codebase from scratch, you can start implementing your business logic and your components before you have even decided how to execute the code in production. The components can talk directly to each other, and you only need the development project to begin with. Then you may decide that you want to expose your functionality as e.g. a REST API, so you now need to create a base and a project where you put that base and your components. Now you have two projects, one for development and one for your REST service, which both look the same. Some time later you decide to split your single service into two services. You still have all the tests set up in development, but the way you run the code in production has changed. How you execute your code in production is seen as an implementation detail in Polylith. While developing the system, it's really convenient to work with all your code from the single development project, especially if your language has support for a REPL. If you need a production-like environment, then you can have that too, but with that said, you will most probably spend most of your time in the single development environment, because that's where you are most productive.
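To make that concrete, here is a minimal sketch of what it can look like in Clojure (the top namespace se.example and the brick names user and rest-api are made up for illustration). The base exposes the public API and calls component interfaces as plain in-process functions:

  ;; components/user/src/se/example/user/interface.clj
  (ns se.example.user.interface
    (:require [se.example.user.core :as core]))

  (defn greeting [name]
    (core/greeting name))

  ;; components/user/src/se/example/user/core.clj
  (ns se.example.user.core)

  (defn greeting [name]
    (str "Hello " name "!"))

  ;; bases/rest-api/src/se/example/rest_api/core.clj
  ;; the base exposes the REST endpoint and only talks to interfaces
  (ns se.example.rest-api.core
    (:require [se.example.user.interface :as user]))

  (defn handler [request]
    {:status 200
     :body   (user/greeting (get-in request [:params :name]))})

Because the base and the components are wired together with ordinary function calls, moving a component into another project later doesn't change any of this code.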


I see, that makes sense. Basically, there's nothing stopping you from organising your code to make it easier to follow/step through the business logic (dev), making your changes and then double checking against a slightly different organization of your components and bases (prod), right? You may end up having more projects than deployed services, but that's a non-issue.

Hypothetically, say I'm big corp A with my thousands of developers and hundreds of microservices that are just polylith projects. Further, let's say I have N services that depend on component B. Now, if one team needs to make a breaking change to component B (say you need to change the interface), how would you suggest handling it based on the polylith architecture? Would you version each component so that services can pin the component version? Or would you create a new component? Something else? Intuitively, versioning sounds like a mess of thousands of repos. On the other hand, creating a new component would set a precedent that might be used to justify an explosion of components, which could make your workspace a mess of almost identical components. While refactoring sounds like the way forward here, if you've dug yourself a hole with bad design choices then polylith seems like it would give you more rope to hang yourself with. Otherwise you have to coordinate with all the teams involved to figure out how to modify the N services depending on component B. With typical microservices, my understanding is that this wouldn't happen so long as the service's API remained constant.


Yes, one way of making two projects slightly different is to have two different versions of the same interface (all components expose an interface, and they only know about interfaces) and then use different components in the two projects (that implement the same interface). The development project can also mimic production to some extent by using profiles (see https://polylith.gitbook.io/poly/workflow/profile).
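As an illustration (again with made-up names), two components can ship the same interface namespace, and each project simply includes the one it wants:

  ;; components/user/src/se/example/user/interface.clj
  (ns se.example.user.interface
    (:require [se.example.user.core :as core]))

  (defn fetch [id]
    (core/fetch id))

  ;; components/user-remote/src/se/example/user/interface.clj
  ;; same interface namespace, backed by a different implementation
  (ns se.example.user.interface
    (:require [se.example.user.remote :as remote]))

  (defn fetch [id]
    (remote/fetch id))

A project that includes user gets the local implementation; a project (or development profile) that includes user-remote gets the other one, without any calling code changing.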

You are right that you need to handle breaking changes in some way. Refactoring all the code that uses the changed component is probably the best way to go in most cases, because it keeps the code as simple as possible. Second best (or best in some situations) is to introduce a new function in the existing component (if it's not a huge change; if it is, a new component can make sense). This can be done in different ways, e.g. by adding one more signature to the function or by putting the new version in a sub namespace, e.g. 'v2' in the interface namespace.
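A rough sketch of those two options in Clojure (names made up):

  ;; option 1: add one more arity so existing callers keep working
  (ns se.example.user.interface
    (:require [se.example.user.core :as core]))

  (defn fetch
    ([id]      (core/fetch id))        ;; existing callers
    ([id opts] (core/fetch id opts)))  ;; callers that need the new behaviour

  ;; option 2: put the new version in a sub namespace of the interface
  (ns se.example.user.interface.v2
    (:require [se.example.user.core :as core]))

  (defn fetch [id opts]
    (core/fetch id opts))

Either way, the N services can migrate one at a time inside the same workspace, and the old signature can be removed once nothing calls it anymore.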

You will face similar coordination problems with microservices too. I worked on a project where we had around 100 microservices. To share code between services, we created libraries that were shared across services. Sometimes we found bugs because some services used an old version of a library. Then we went through all the services to make sure they all used the latest version of all libraries, and that could take two weeks for one person (full time)! You had to go and ask people about breaking changes that were made several weeks earlier, or try to figure it out yourself.

The alternative is to not share any code and just copy/paste everything (or implement the same shared functionality from scratch every time), but that is probably even worse, because you still won't get rid of the need for coordination, and if you find a bug in one service, you have to manually go through the code in all 100 services to see if any of the other 99 contain the same bug, or just hope for the best if you don't.


Thanks for the explanations and the perspective! I'll definitely need to play around with Polylith :)


Hi and thanks for your feedback!

- The Code City project is a cool way of visualising code, and the idea was to show how we normally organise code today and what problems that creates (development experience + sharing). I agree I could have been clearer about that.

- This is the longer video where I (and Furkan) try to explain what Polylith is, what problems it solves, how it works, and why you should use it. The sad fact is that it's super hard to explain Polylith (we have given about 15 different presentations), and I think you need to try it out yourself to get an idea of what it's like to work with and which problems it solves.

I will leave it to Furkan to answer your questions about Scrintal.


We have a similar question in the FAQ, so I just copied the Q/A here (https://polylith.gitbook.io/polylith/conclusion/faq)!

Question: What parts of Polylith are important and what are just “ceremony”?

Answer: The short answer is that all parts are needed:

interface: Enables functionality to be replaced in projects/artifacts.

component: The way we package reusable functionality.

base: Enables a public API to be replaced in projects/artifacts.

library: Enables global reuse of functionality.

project: Enables us to pick and choose what functionality to include in the final artifact.

development: Enables us to work with all our code from one place.

workspace: Keeps the whole codebase in sync. The standardized naming and directory structure is an example of convention over configuration which enables incremental testing/builds and tooling to be built around Polylith.
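Concretely, the standardized directory structure looks something like this (the brick and project names are just examples):

  workspace/
    components/
      user/        (src, test, resources)
      article/     (src, test, resources)
    bases/
      rest-api/    (src, test, resources)
    projects/
      rest-api/    (configuration picking the bricks this artifact needs)
    development/   (the single project we work from)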

I also want to add one thing, and that is that Polylith combines the LEGO-like blocks of code (components/bases) outside of the language itself. In Clojure we use tools.deps, but in other languages we would use other tools, like Maven in Java, to combine the different source directories into projects (which are built into artifacts in the end, like libraries, tools and services).
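For the Clojure/tools.deps case, a project's deps.edn can pull in its bricks roughly like this (a minimal sketch using the :local/root style; the brick and project names are made up):

  ;; projects/rest-api/deps.edn
  {:deps {poly/user     {:local/root "../../components/user"}
          poly/article  {:local/root "../../components/article"}
          poly/rest-api {:local/root "../../bases/rest-api"}
          org.clojure/clojure {:mvn/version "1.11.1"}}}

A second project can list a different set of bricks against the same source tree, which is what makes splitting or merging services cheap.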


I would say it's up to you to decide what you think works best, as long as the bases only contain the code needed to expose the public API and you divide the rest of the code into components. The good thing with Polylith is that it's easy to refactor the code by splitting up existing components. When you add new functionality, you either add to an existing component or create a new one. Polylith allows you to make the components tiny with almost no performance penalty, and that matches our experience: components tend to be really small, or to be divided into many small namespaces.

To answer your second question, I would make the same considerations as in any other architecture. If the SQL only took a user ID and didn't contain business logic from the user domain, then I would just put it in the article component, but if the SQL had to contain logic from the user domain (more than e.g. just translating user name to user ID) then I would extract that logic into the user component. Exactly how you combine this into a query is an implementation detail that can be solved in several different ways (and it often gets quite ugly!). This is how I think about it in general, but there are always situations when it's right to break "rules"!
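As a rough sketch of the simple case (the component and function names are invented here, and I'm assuming next.jdbc purely for illustration), the article component would own the query and only ask the user component's interface for the user ID:

  (ns se.example.article.core
    (:require [se.example.user.interface :as user]
              [next.jdbc :as jdbc]))

  (defn articles-by-author [db user-name]
    ;; user-domain logic (name -> id) stays behind the user interface
    (let [author-id (user/id-by-name db user-name)]
      (jdbc/execute! db
        ["select * from article where author_id = ?" author-id])))

If the query itself needed more user-domain knowledge than that, the extraction would go the other way: the user component would expose a function (or query fragment) that the article component composes into its SQL.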


"Normal" good software design takes you a bit, but it doesn't give you LEGO-like building blocks, and it will still be hard to share code between services. Polylith gives you a single development experience that you would not otherwise get. Take a look here to see what I mean: https://polylith.gitbook.io/polylith/architecture/bring-it-a...


Is there tooling or something special around this? I'm not trying to be a curmudgeon, but I'm having a hard time seeing what is novel here other than applying a name to component based modular software design.


Have you read the "Advantages of Polylith" page? https://polylith.gitbook.io/polylith/conclusion/advantages-o... It tries to summarise the benefits from both a development and deployment perspective.


That page doesn't clarify much for me. How exactly do you deploy pieces of the app as different services while keeping it a monolith in development? Especially with this not being a framework. Standard go/node modules give you "lego-like" pieces too. The real-world example only has one "base" that puts everything together under a single API, so it doesn't help me understand how this would be deployed as multiple services.


You can get an idea by looking at the Production systems page (https://polylith.gitbook.io/polylith/conclusion/production-s...) where each column in the first diagram is a project (deployable artifact / service). All components are specified in a single file, like the configuration file for the poly tool itself: (https://github.com/polyfy/polylith/blob/master/projects/poly...).

I try to explain it in the "Bring it all together" section also: https://polylith.gitbook.io/polylith/architecture/bring-it-a...


Those examples don't really tell much more. I'm guessing there are a lot of assumptions about how Clojure projects are structured.

How exactly are these deployed? Does it use k8s, Docker? Does it integrate with some particular CI setup? How can this be language agnostic? How can a single codebase be separated into multiple running http services without some kind of overarching routing framework? How do services communicate?


In Clojure you put your functions in namespaces (sometimes called modules or packages in other languages). In an object-oriented language you put your classes in namespaces too, but the way you would implement Polylith in an OO language is to expose static methods in the component interfaces, which would have to live in a class with the same name as the interface.

Polylith doesn't help you with the deployment of the project (that code you have to write yourself), but it's possible to calculate whether a project has been affected by the latest changes or not (the 'poly' tool that we have built for Clojure supports that, but it can be implemented for other languages too, and what you get is support for incremental builds and tests = only build the projects that are affected by the change and only run the affected tests).
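With the Clojure tooling, that looks roughly like this (a usage sketch, not a full build pipeline):

  poly info   # shows which bricks and projects are affected since the last stable point
  poly test   # runs only the tests affected by those changes

Your CI can then build and deploy only the projects that are reported as affected.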

Polylith is an idea about how to decouple and organise code, in the same way that FP and OO are ideas that can be materialised in different computer languages.

The whole codebase lives in a single repository, but all components (and bases) have their own src, test and resources directories. We then pick and choose from these sources to create projects from where we build artefacts.

All projects live in the same repository, in a workspace. If you structure the code the "normal" way, each project may live in its own repository with a single src directory (+ test and resources), but then you would have to put all the code into that single src directory. What Polylith does is let each project specify (in a configuration file) which set of src directories it needs, instead of putting everything into a single src directory. This is supported out of the box in Clojure, but in e.g. Java, if you use Maven, you need a Maven plugin for that.

The deployed services communicate with each other through their public API. No difference from what you would otherwise do.


> It tries to summarise the benefits from both a development and deployment perspective.

Call me a pessimist, but I'm not convinced. It seems a bit hand-wavey to me.


Hi and thanks for showing interest in Polylith.

- The main idea is to allow bricks (as we call the "LEGO bricks", i.e. components and bases) to be put together into projects from where we generate artifacts (libraries, tools and different kinds of services). Maybe we should have a sentence like that at the top of the first page of the high-level docs!

- The help the Polylith architecture gives when it comes to executing the code is that it makes it much easier to reorganise the code in production (by splitting, merging, or creating new services from existing components) without affecting your single development experience (which is always just one project). This is where the LEGO idea comes in. But you are right, it doesn't help you at all with the implementation of the code.

- A component can handle business logic, e.g. some part of our domain, others might manage integration with external systems, and others will be responsible for infrastructure features such as logging or persistence.

- The way you introduce a "fake" component (a component that can replace an existing component) is to create a new component that implements the same interface. So yes, it should be a new component that implements the same "API".

- We haven't tried it with other languages than Clojure, but the idea is applicable to other languages because it's not about the language syntax. David Vujic is working on tooling support for Python: https://davidvujic.blogspot.com/2022/02/a-fresh-take-on-mono...


There is no company behind Polylith, just me (Joakim Tengstrand), Furkan Bayraktar and James Trunk.


Ah, shame. I mean - good for you and amazing work! I was hoping to apply to work for such an innovative and brave company :D


Thanks!


If you liked the old version, I don't think the new one will disappoint you! Please reach out to us (https://clojurians.slack.com/archives/C013B7MQHJQ) and say what you think.

