Not least because, despite HN having now swung hard against microservices, they often exist for a reason, and it's difficult for one service to emit hypermedia for another.
The inspiration was humans surfing freely through interlinked web pages, but API clients just don't naturally do that.
> The inspiration was humans surfing freely through interlinked web pages, but API clients just don't naturally do that.
There's nothing unnatural about it, it's just one extra step to use a name that the service must guarantee is stable rather than directly accessing a hard-coded URL which can be unstable. You're used to:
var client = new HttpClient();
var content = new StringContent("{ }"); // pass some data
var result = await client.PostAsync("http://somehost.com/api/hardcoded-service-resource", content);
The resource URL is hard-coded with a specific structure and meaning, which means the service can never change it without breaking clients. A simple REST equivalent would be:
var client = new HttpClient();
var content = new StringContent("{ }"); // pass some data
var entry = (await client.GetStringAsync("http://somehost.com/api/entry")).ParseHypermedia(); // illustrative helper; see the sketch below
var result = await client.PostAsync(entry.StableNameForEndpointIWantToCall, content);
There's nothing all that unnatural about this. The key is that the entry point returns a description of the API using hypermedia, which maps stable names to unstable URLs. This allows the service to change its internal structure as long as the entry point returns a hypermedia result that has a stable structure.
You can think of HATEOAS as being to API URLs what DNS is to IP addresses. Calling it "unnatural" is like saying that hard-coding IP addresses in your clients is more natural than using DNS names. That's just crazy talk!
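To make that concrete, here's a minimal sketch of what a helper like ParseHypermedia could do, assuming a HAL-style entry point payload (the link name and URLs below are made up):

using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

static class Hypermedia
{
    static readonly HttpClient client = new HttpClient();

    // Resolve a stable name to whatever URL the service currently advertises.
    public static async Task<string> ResolveAsync(string entryPointUrl, string stableName)
    {
        // Assumed entry point payload (illustrative only):
        // { "_links": { "submit-order": { "href": "http://somehost.com/api/v7/orders" } } }
        // The names are the stable contract; the hrefs are free to change.
        var json = await client.GetStringAsync(entryPointUrl);
        using var doc = JsonDocument.Parse(json);
        return doc.RootElement
                  .GetProperty("_links")
                  .GetProperty(stableName)
                  .GetProperty("href")
                  .GetString();
    }
}

// e.g. var orderUrl = await Hypermedia.ResolveAsync("http://somehost.com/api/entry", "submit-order");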
> The key is that the entry point returns a description of the API using hypermedia, which maps stable names to unstable URLs. This allows the service to change its internal structure as long as the entry point returns a hypermedia result that has a stable structure.
But it's much simpler to handle this with basic versioning. If you want to rearrange /foo so it now lives at /bar/foo, you can just put the entire old API under /v1 and then have /v1/foo internally redirect to /v2/bar/foo.
You don't need to maintain a giant hypermedia reference for all your endpoints and have the client invoke it and parse it on every single call to make things "dynamic". They aren't dynamic, it's just a layer of indirection, but the coupling between the client and StableNameForEndpointIWantToCall is still just as tight and brittle as the one between the client and /v1/foo. Same maintenance work for the server, but the second requires less work for the client.
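For illustration, the redirect side of this is only a few lines of routing. A rough ASP.NET Core sketch with hypothetical paths (shown as an HTTP redirect, though an internal rewrite works the same way):

var app = WebApplication.CreateBuilder(args).Build();

// New home of the resource.
app.MapGet("/v2/bar/foo", () => Results.Ok(new { message = "foo lives here now" }));

// The old URL keeps working by forwarding to the new shape.
app.MapGet("/v1/foo", () => Results.Redirect("/v2/bar/foo", permanent: true));

app.Run();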
> But it's much simpler to handle this with basic versioning. If you want to rearrange /foo so it now lives at /bar/foo, you can just put the entire old API under /v1 and then have /v1/foo internally redirect to /v2/bar/foo.
I'd say it's equally simple, not simpler. It's also less flexible. What if you don't want an internal redirect, but need to redirect to other locations? For instance, perhaps one of the locations behind the entry point is getting DoS'd. With the hypermedia entry point, you could immediately respond by returning URLs for a different geolocated service, or round-robin among a bunch of locations, or any number of other policies, without having to change other infrastructure.
The point is that with the choice of a single architectural style, you get all kinds of flexibility at the application level that you'd otherwise have to cobble together using some mishmash of versioning, DNS, and other services.
It's simpler and more flexible on the whole, in a TCO sense, which you won't see if all you do is look at isolated examples of small services operating under optimal conditions. Then of course less flexible options will look simpler. When has that ever not been the case?
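To sketch the DoS example above (hostnames made up): the entry point just hands out a different href per request (round-robin, geolocated, or whatever policy you need), and clients that resolve by stable name follow along without any change:

var app = WebApplication.CreateBuilder(args).Build();

// Candidate locations for the orders resource (made-up hosts).
string[] orderHosts =
{
    "https://eu.orders.example.com",
    "https://us.orders.example.com",
};

app.MapGet("/api/entry", () =>
{
    // Pick a host per request: round-robin, geo-aware, "skip the one being DoS'd", etc.
    var host = orderHosts[Random.Shared.Next(orderHosts.Length)];
    return Results.Ok(new { _links = new { orders = new { href = $"{host}/orders" } } });
});

app.Run();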
> You don't need to maintain a giant hypermedia reference for all your endpoints and have the client invoke it and parse it on every single call to make things "dynamic".
That's not how it works. Firstly, I don't know what a "giant" hypermedia reference is. An API with a thousand URLs, which never happens, would still be parsed in tens of milliseconds at worst on today's CPUs.
Secondly, like any resource, the entry point result has a certain lifetime that the service must respect, so you cache it and only refresh it when it expires.
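A sketch of that caching, assuming the service advertises the lifetime via Cache-Control: max-age; the map is only re-fetched after it expires, so it's nowhere near a per-call cost:

using System;
using System.Net.Http;
using System.Threading.Tasks;

static class EntryPointCache
{
    static readonly HttpClient client = new HttpClient();
    static string cached;
    static DateTimeOffset expiresAt = DateTimeOffset.MinValue;

    public static async Task<string> GetAsync(string entryPointUrl)
    {
        if (cached != null && DateTimeOffset.UtcNow < expiresAt)
            return cached;                                 // still fresh: no round trip at all

        var response = await client.GetAsync(entryPointUrl);
        cached = await response.Content.ReadAsStringAsync();
        var maxAge = response.Headers.CacheControl?.MaxAge // lifetime advertised by the service
                     ?? TimeSpan.FromMinutes(5);           // fallback if none is advertised
        expiresAt = DateTimeOffset.UtcNow + maxAge;
        return cached;
    }
}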
But DNS allows for redirection wrt IPs. How does HATEOAS do the same?
First, you already have DNS, and can abstract wrt existing URL structure. Second, you still need stable references to services in order to identify them, and they in turn need to have stable structure - hence the only thing you can change is the top-level stuff? Why is that important?
> But DNS allows for redirection wrt IPs. How does HATEOAS do the same?
I gave a code sample above. The service's entry point exports a map of stable names to unstable URLs, just like DNS exports a map of stable names to unstable IPs.
What I described is the most basic form of HATEOAS, but it also allows hierarchical structuring, i.e. you can discover more URLs via any path through a hierarchy of hypermedia returned by URLs from the entry point.
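Reusing the ResolveAsync sketch from above (the link names here are made up), hierarchical discovery is just resolution applied hop by hop:

// Hop 1: the entry point says where "orders" lives right now.
var ordersUrl = await Hypermedia.ResolveAsync("http://somehost.com/api/entry", "orders");

// Hop 2: the orders document says where "latest-order" lives right now.
var latestOrderUrl = await Hypermedia.ResolveAsync(ordersUrl, "latest-order");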
> First, you already have DNS, and can abstract wrt existing URL structure
DNS abstracts IPs, not URLs. You could come up with an encoding where you map URLs to subdomains, but now you're lifting application logic into infrastructure policy, where your devs are now messing with nameservers instead of just sticking to their HTTP application, you're tied to DNS TTLs instead of application-specific caching policies, and you inherit DNS's insecurities (no TLS).
For instance, with HATEOAS I can make a resource permanent, or have it live only a few seconds, and these are application-level policies I can define without leaving my coding environment. DNS just can't give you this control, so trying to shoehorn the flexibility that HATEOAS gives you into DNS is putting on a straightjacket.
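For example (an ASP.NET Core sketch with made-up endpoints), the lifetime is just a header the application sets per resource, not a zone-wide DNS TTL:

var app = WebApplication.CreateBuilder(args).Build();

// Effectively permanent: cache for a year, marked immutable.
app.MapGet("/api/reports/2023-annual", (HttpContext ctx) =>
{
    ctx.Response.Headers["Cache-Control"] = "public, max-age=31536000, immutable";
    return Results.Ok(new { report = "..." });
});

// Lives only a few seconds.
app.MapGet("/api/live-price", (HttpContext ctx) =>
{
    ctx.Response.Headers["Cache-Control"] = "public, max-age=5";
    return Results.Ok(new { price = 42.0 });
});

app.Run();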
i.e. why is ref <myFooService> better than www.myDomain.com/service/myFooService ?
Unlike IPs, URLs can be just as abstract as names, no?
> it also allows hierarchical structuring
This is what I meant by "the only thing you can change is the top-level stuff" - is this important enough to require the extra level of abstraction? For whose benefit is the restructuring, if the stable names all share the same namespace anyway?
> you can discover more URLs
Is this a human manually hitting these API endpoints, or a programmatic client? Why is it useful that a client can be given different entry paths?
> where your devs are now messing with nameservers
Current service discovery solutions don't require this. The name server just points to a single proxy.
> i.e. why is ref <myFooService> better than www.myDomain.com/service/myFooService ?
Because the latter is less flexible. I've expanded on this here [1], but for the short version: you can change the reference of <myFooService> without regard for location, or based on any number of other application-specific policies, without having to host everything behind a single DNS entry.
> you could immediately respond by returning URLs for a different geolocated service, or round-robin among a bunch of locations
Is this not possible with a proxy and a (non-perm) redirect?
> without having to host everything behind a single DNS entry
But your entry point will still need to sit behind a single DNS entry!
It sounds more like you want extra semantics around DNS - e.g. have some entry point provide the latest mirrors, localised servers, etc.; but this would require extra logic in the client app, which could be a new protocol at a lower level.
I suppose that makes sense if you are working at the application level, but as an industry standard I'd sooner have a lot of this stuff lifted to the non-application layer.
That said, I still don't get the HATEOAS aspect; is the reason you'd have to follow references so that there is no single "stateful" collector of the state of (other) backend services?
Probably not clear, but I was talking about a situation where an API client (e.g. a web application server) needs to manipulate customers (managed in one microservice) and also their orders (managed in another).
Not the two microservices talking to each other.
So the API client will be making separate API calls to each of the microservices.
My premise is that it is too hard for API responses from the customer microservice (for example) to include hypermedia links to resources within the orders microservice.
Got you. I just want to point out the absurdity of it. Java microservices commonly use JavaScript Object Notation for their data transfer. How is that the right tool for the job? With all the translations that are needed, why don't they use RMI/RPC? And why even spend all that development time on something that you can solve in an afternoon with a JDBC connection anyway?
Because RPC is a paradigm; it's not about data transfer (that's the serialization protocol's job).
RPC is one architectural style. It's pretty undefined other than "call/response". Marshalling of arguments is just a serialization issue.
RMI/RPC doesn't work well across firewalls or org boundaries. It doesn't support separating internal and external host identification, so when you try to RMI to a host in another DNS domain that has a local name, there's all sorts of nonsense to get the Java runtime to understand it can have multiple DNS names and be associated with multiple network interfaces.
JSON/protobuf/cap'nproto/ONC-XDR/XML are about serializing data (whether it's a resource or a marshalled argument/result for RPC).
So basically: it doesn't work well if you want to reach every IP on the whole internet. Yeah, I suppose that's correct, but we are talking about internal service-to-service communication, so why would you have those requirements in the first place?
It's only call/response because that's all you ever need if you can call any arbitrary function by its name. That's infinitely more powerful than any other protocol that restricts you to a limited grammar.
>Probably not clear, but I was talking about a situation where an API client (e.g. a web application server) needs to manipulate customers (managed in one microservice) and also their orders (managed in another).
I would use a microservice gateway with the client only talking to the gateway. It seems weird to me that the client can call any microservice.
I've been working on this for nearly 5 years, and despite not being very popular yet, the technology seems pretty sticky for those that have adopted it.
But there are gaps in the standards, and we need way more tooling for more ecosystems. When it works, though, it works great.
I always understood "application" in HATEOAS to refer to the browser, or whatever type of application sitting on some machine somewhere that had been written to consume hypertext, but it sounds like you're considering a service to be the application?
My point was, how is such an API served up when it spans resources that are managed by two separate services / microservices - one for customers, one for orders?
Of course this is a simple example; we are smart tech heads, so obviously we can wedge a third service in front of the other two, which exists solely to provide aggregating behavior like this.
But why? Is hypermedia so useful that it's worth going to this trouble? (No, IMHO.)
Don’t you just return links that refer to the other service (i.e. absolute URLs)? That is kind of the point of hyperlinks: the service hosting a resource can change, and you just change the links you serve rather than changing lots of hard-coded clients.
(the example I used is so simple it didn't really highlight the difficulty)
My point is that one microservice cannot easily generate links for another.
How does the customer microservice (that wants to be hypermediaful) generate a deep link into the order microservice, e.g. perhaps to obtain the latest order for a customer?
It can't, unless it is uncomfortably tightly coupled to the other microservice.
The most pragmatic solution is not even to try. Better to pass on the hand-wavy HATEOAS stuff, and just tell the developers to work from the API documentation.
> How does the customer microservice (that wants to be hypermediaful) generate a deep link into the order microservice, e.g. perhaps to obtain the latest order for a customer?
In short, you're asking how to implement service discovery.
Also, in REST there is no such thing as a "deep link". There are only resources, and links to said resources. With HATEOAS, responses represent application state by providing links to related resources, and that's pretty much all there is to it.
I completely agree it is in part a service discovery problem - my original point is that HATEOAS is not a workable service discovery mechanism in a microservices environment.
Instead, use some service discovery technology. Not hypermedia.
It's too hard for one service to generate links (yes, let's call them complex links instead of deep links for enhanced correctness) into another.
If it can do that then they were never really independent microservices in the first place, they are so tightly coupled.
> I completely agree it is in part a service discovery problem - my original point is that HATEOAS is not a workable service discovery mechanism in a microservices environment.
What leads you to believe that? You want a related resource, and you get it by checking its location. It's service discovery moved to the resource level. What's hard about it?
> Instead, use some service discovery technology. Not hypermedia.
I don't understand what your point is. Where do you see any relevant difference? HATEOAS is already service discovery at the resource level.
> It's too hard for one service to generate links (yes, let's call them complex links instead of deep links for enhanced correctness) into another.
Not really. Tedious? Yes. Too hard? Absolutely not. Not only are there modules and packages that either do that for you or do most of the work, but it's also no different from just putting together a response to a request.
> If it can do that then they were never really independent microservices in the first place, they are so tightly coupled.
You seem very confused about this subject as you're not only mixing up unrelated concepts but also imagining problems where there are none.
From the start, REST is just an architectural style, and HATEOAS is just an element of said style. HATEOAS basically boils down to a technique that allows clients not to have hardcoded references to URLs pointing to resources. Instead, when you get a response to a request, the response also tells you where you can find related resources. That's it. It doesn't matter if said links never change at all during the life of a service. What matters is that if you're developing a service that's consumed by third parties that support HATEOAS, you do not have any problem whatsoever peeling out responsibilities to other services or even redeploying them anywhere else, because your clients will simply know where to look without requiring any code change at all.
> My point is that one microservice cannot easily generate links for another.
Right, if these are all REST services, then one service should not be generating links for another; it should be obtaining those links from the service itself.
> How does the customer microservice (that wants to be hypermediaful) generate a deep link into the order microservice, e.g. perhaps to obtain the latest order for a customer?
It asks the service for a link via a call to a link it obtained from the service's stable entry point. Pretty standard stuff when you're reasoning about encapsulated systems.
Why do you think it’s hard for one microservice to generate links to another microservice? The alternative is that all the clients are tightly coupled to both of them.
The alternative may not be palatable to you, but IMO if two microservices have an encyclopedic knowledge of each other's URL structures, then they aren't really separate microservices at all.
Well indeed, but the hyperlinks here are just surfacing the coupling that already exists between those services. Making explicit an already implicit coupling.
Two other points:
- you can use HATEOAS to let one microservice discover the links it needs from another, exactly as a client would do.
- by not using hyperlinks you have not solved the problem, you’ve just moved the requirement to know the URL structure from another microservice (that you control) to dozens or hundreds of clients, that you likely don’t control. This makes your services much more brittle, not less.
Respectfully, I believe you have constructed a straw man.
In this contrived example, assume there is no need for the customers microservice to even know about the existence of orders to do its job.
So forcing it to understand all the various URL formats and query params that the orders microservice supports - just so it can populate hypermedia elements in its own APIs - which in turn is being done only to support discoverable clients, or perhaps to appease the REST gods - seems an ugly architectural choice to me.
Perhaps I would feel differently if I had ever seen a project where HATEOAS had really moved the needle instead of being more of a catalyst for ideological battles.
Surely the Orders service would support a single entrypoint - "index for customer X", or something - which would return a list of links into the resources that it owns? You're not forcing the Customers service to know anything more than that single per-customer entrypoint.
Exactly the same pattern applies for any other domain. All the Customers service needs to know is that there is a link to another domain; the structure is entirely down to the service for the other domain to manage.
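As a sketch of what that could look like (ASP.NET Core, made-up URLs), the Customers service only ever links to the first URL; the Orders service describes everything else itself:

var app = WebApplication.CreateBuilder(args).Build();

// The one stable per-customer entry point the Orders service exposes.
app.MapGet("/customers/{customerId}/orders", (string customerId) =>
    Results.Ok(new
    {
        _links = new
        {
            self = new { href = $"/customers/{customerId}/orders" },
            latestOrder = new { href = $"/customers/{customerId}/orders/latest" }
        }
    }));

app.Run();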
Why does the knowledge have to be encyclopaedic in the first place?
If a customer microservice has to link to another, the only information it needs is the link format, for example: `https://another-microservice/customers/{customer_id}/orders`. This URL pattern can live in an environment variable or a configuration file.
However, I'd consider needing a customer-related microservice to link to the "last order" in another microservice a bit excessive. Why is this requirement in place? Why can't the client go to the first URL (`/customers/{customer_id}/orders`) and figure out the last order from another link there? The link to the last order can be provided by the second microservice, without needing to involve the customers microservice.
If this is all an optimisation to save the client a roundtrip, then yeah, there are gonna be downsides. But that's the nature of optimisations: there are downsides. It would be the same in any other architecture.
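Going back to the link-format point, a minimal sketch (the variable name is made up): the customers service reads one templated URL from configuration and fills in the ID, which is about all the knowledge it needs:

using System;

// Templated link to the other service, taken from configuration.
var template = Environment.GetEnvironmentVariable("ORDERS_LINK_TEMPLATE")
               ?? "https://another-microservice/customers/{customer_id}/orders";

string OrdersLinkFor(string customerId) =>
    template.Replace("{customer_id}", Uri.EscapeDataString(customerId));

Console.WriteLine(OrdersLinkFor("42")); // https://another-microservice/customers/42/orders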
The parent commenter's point is that you need the URL structure to be somewhere. If it's not in the microservices, it's in the frontends. There's no magic silver bullet here; something's gotta give...
> Is hypermedia so useful that it's worth going to this trouble?
No. It's a pointless indirection. These APIs are universally awful to use. I'd also rather you provide a library that abstracts your poor taste in HTTP API design. If I have to navigate your HATEOAS swamp, you better make up for it by being really important (PayPal).
Not familiar with Django sadly, but it sounds like it would act as something of a proxy that sits in front of your microservices.
My point is that if you are starting with just, e.g., 2 microservices, one for customers, one for orders, then hypermedia on its own is not useful enough to justify adding a new proxy/layer just so that an API produced by one of those microservices can include hypermedia for the other.
>Not least because, despite HN having now swung hard against microservices, they often exist for a reason, and it's difficult for one service to emit hypermedia for another.
Why would microservices even need HTTP to talk with each other?