Hmm. I want to distance this library pretty far from Camel!
Wrt. "papering over": Not really. I make it "feel like" you're coding straight down: sequential, linear, as if you were coding synchronously.
But if you look at the examples, e.g. at the very start of the Walkthrough: https://mats3.io/docs/message-oriented-rpc/, you'll understand that you are actually coding completely message-driven: Each stage is a completely separate little "server", picking up messages from one queue, and most often putting a new message onto another queue.
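To make the "each stage is a completely separate little server" point concrete, here's a plain-JDK sketch (hypothetical names, not the actual Mats3 API): two stages, each its own consumer thread, taking from one queue and putting the result onto the next.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class StagesAsQueues {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> qMain = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> qMid  = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> qOut  = new ArrayBlockingQueue<>(10);

        // Stage 1: its own little "server", consuming from qMain
        Thread stage1 = new Thread(() -> {
            try {
                String msg = qMain.take();
                qMid.put(msg + " -> handled by stage1");
            } catch (InterruptedException ignored) { }
        });
        // Stage 2: a separate consumer, reading from qMid
        Thread stage2 = new Thread(() -> {
            try {
                String msg = qMid.take();
                qOut.put(msg + " -> handled by stage2");
            } catch (InterruptedException ignored) { }
        });
        stage1.start();
        stage2.start();

        qMain.put("request");            // initiate the flow
        System.out.println(qOut.take()); // prints: request -> handled by stage1 -> handled by stage2
    }
}
```

Code-wise it reads "straight down", but each stage only ever sees its input queue and its output queue.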
It is true that the error handling is very different. Don't code errors! You cannot throw an exception back to the caller. You can, however, use "error return"-style DTOs, but otherwise, if you have an actual error, it'll "pop out" of the Mats Fabric and end up on a DLQ. This is nice! It is not just a WARN or ERROR log-line in some log that no-one will see until way later, if ever: It immediately demands your attention.
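The broker-side mechanism behind this can be sketched in plain Java (a hypothetical illustration of redelivery-then-DLQ, not ActiveMQ or Mats3 code): a message whose processing keeps throwing is redelivered a few times, then moved to a Dead Letter Queue instead of disappearing as a log line.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DlqSketch {
    static final int MAX_REDELIVERIES = 3;

    public static void main(String[] args) {
        Deque<String> queue = new ArrayDeque<>();
        Deque<String> dlq   = new ArrayDeque<>();
        queue.add("poison-message");

        int attempts = 0;
        while (!queue.isEmpty()) {
            String msg = queue.poll();
            try {
                process(msg);           // always throws for this message
            } catch (RuntimeException e) {
                attempts++;
                if (attempts < MAX_REDELIVERIES) {
                    queue.add(msg);     // broker redelivers
                } else {
                    dlq.add(msg);       // gives up: message lands on the DLQ
                }
            }
        }
        System.out.println("DLQ: " + dlq); // prints: DLQ: [poison-message]
    }

    static void process(String msg) {
        throw new RuntimeException("bug while handling " + msg);
    }
}
```

The key property: the failed message itself, with all its content, is sitting on the DLQ waiting for a developer, not reduced to a line in a log file.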
Ops team?! We, the developers, are monitoring the DLQs: These are our mistakes, and we must fix them. Operations' only role is to keep the VMs that ActiveMQ runs on alive, and the database it uses responsive.
If you used it in a synchronous fashion for an end-user, you are correct: He'll get a timeout - but the message will blink red on the DLQ, giving the devs exact info as to what has happened. As opposed to a 500 or something similarly bad, which, in the best case, was probably logged somewhere, and hopefully someone is tallying up the error codes every few weeks.
I fail to see how messaging is worse in this case.
> Hmm. I want to distance this library pretty far from Camel!
I'm curious what makes you distinguish it heavily from Camel? From everything you say it sounds to me like you are building Camel - or at least the routing part of it :-)
While these ideas were brewing, I ventured into many libraries; Camel was one of them. It did not solve any of my problems.
Mats is messaging with a call stack. One messaging endpoint can invoke another messaging endpoint, and get the reply supplied to its next stage, with the state from the previous stage magically present: https://mats3.io/docs/message-oriented-rpc/#isc-using-mats3 and other pages.
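The "call stack travels with the message" idea can be illustrated with a small, self-contained simulation (hypothetical plain-Java sketch, not the Mats3 API): the envelope carries a stack of frames, each holding a return address and the caller's saved state, so a reply is routed to the caller's next stage with its state intact.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class CallStackInMessages {
    // One stack frame: where to deliver the reply, plus the caller's saved state.
    record Frame(String returnTo, String state) { }
    // The envelope travels with the entire call stack.
    record Envelope(String to, String body, String state, Deque<Frame> stack) { }

    static final Map<String, BiConsumer<Envelope, Deque<Envelope>>> endpoints = new HashMap<>();

    public static void main(String[] args) {
        Deque<Envelope> wire = new ArrayDeque<>();

        // "Leaf" endpoint: computes a reply, pops the top frame to route it back.
        endpoints.put("Leaf.compute", (env, out) -> {
            Frame caller = env.stack().pop();
            out.add(new Envelope(caller.returnTo(), env.body().toUpperCase(),
                    caller.state(), env.stack()));
        });
        // The caller's next stage: its state is "magically present" via the frame.
        endpoints.put("Main.stage1", (env, out) ->
            System.out.println("state=" + env.state() + ", reply=" + env.body()));

        // Main "invokes" Leaf.compute, pushing a frame with its own state.
        Deque<Frame> stack = new ArrayDeque<>();
        stack.push(new Frame("Main.stage1", "state-of-main"));
        wire.add(new Envelope("Leaf.compute", "hello", null, stack));

        // "The fabric": deliver each envelope to its target endpoint.
        while (!wire.isEmpty()) {
            Envelope env = wire.poll();
            endpoints.get(env.to()).accept(env, wire);
        }
        // prints: state=state-of-main, reply=HELLO
    }
}
```

Since the stack and state live entirely in the message, any node with the endpoint deployed can process the next step - nothing about the flow is pinned to the node that started it.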
It's interesting how we can have such different takes on it. Still everything you say maps naturally onto Camel for me.
Your example using Camel would look something like:
from('rest:/api/myprocess') // using a REST API endpoint as an example starting point
    .inOut()
    .to('activemq:main')
    .to('activemq:mid')
    .to('activemq:leaf')
    .end()
Then you would define the handlers:
from('activemq:main')
    .transform { e ->
        return new State(...)
    }

from('activemq:mid')
    .transform { e ->
        e.body.number1 =
    }
All the processing stages happen asynchronously, and results are passed back and forth by Camel. If things error out halfway through, you get a printout from Camel of the whole workflow, including each step, exactly where it failed, and what the content of the state was at that point (and it'll throw the lost message onto a DLQ if you want to re-do the process).
Mats does not explicitly define a set "route" through a bunch of stages. I view Camel more like a Workflow system, which is pointed out elsewhere in the discussion threads here - the workflows are defined external to the actual processing steps.
With Mats, you define Endpoints, which can be generic in the same way a REST endpoint can be generic: "AccountService.getAccountListForCustomer" - which can be used (by invocation) by several other endpoints.
Also, IIRC, the process you show with Camel is now effectively thread-bound - or at least JVM-bound. If the node running that process goes down, it takes all unfinished, mid-flow processes with it. The steps are not transactional.
With Mats, the "life" of the process lives in the message. You can literally bring down your entire system from GCP, and rebuild it on Azure, and then when you start it up again, the mid-flow processes will just continue and finish as if nothing happened - as long as you brought along the state from the message broker (in addition to your databases). Viewed "from within the process", the only thing visible is that there was a bit more latency between one step and the next than usual. AFAIR, you cannot get anything like this with Camel. The idea is of course not the ability to move clouds, but that the result is exceptionally robust and stable.
I wrote quite a long answer to something similar in a Reddit thread a month ago: https://www.reddit.com/r/programming/comments/1059jpv/messag...