> The server is now free to change the format of new URLs at any time without affecting clients (of course, the server must continue to honor all previously-issued URLs).
If you have to honor all previously-issued URLs then you aren't changing your format -- you are supporting two formats from now on, the old one and the new one.
You can of course tell your users that you will deprecate the old format, but unless you are as powerful as Google your users may prevent you from enforcing a deadline for deprecation.
If the URLs in your API responses are FQDNs rather than relative paths, all of this gets significantly harder to deal with.
Even if you figured all of that out, links are not idiomatic if your users consume your API via RPC or GraphQL.
If I had to choose I'd use Stripe and Protobuf for inspiration.
Stripe uses a date as its version and maintains state for each user, so it knows whether to serve an old version or the latest and greatest. They offer a UI and tools for manually migrating to a new version, with clear expectations and changelogs provided. They can build a rollout and sunset plan around that.
Protobuf is append-only until you change the actual package. This gives you the opportunity to make additive changes without making a breaking change, and you don't need to maintain separate releases as you make incremental updates or bug fixes. If the API changes significantly, it's no longer the same API; it's a new one.
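A minimal sketch combining the two ideas, assuming a hypothetical per-account store of pinned version dates (Stripe's real mechanism, the Stripe-Version header plus per-account defaults, is more involved) and an additive, append-only field change:

    from datetime import date

    # Hypothetical: each API account has a pinned version date (Stripe
    # tracks this server-side and lets you override it per request).
    PINNED_VERSION = {"acct_123": date(2018, 5, 1)}

    def render_invoice(invoice, account_id, requested_version=None):
        version = requested_version or PINNED_VERSION.get(account_id, date.today())
        if version < date(2019, 1, 1):
            # Old shape: a bare foreign key.
            return {"id": invoice["id"], "customer_id": invoice["customer"]}
        # New shape: an additive change; the old field stays alongside it.
        return {
            "id": invoice["id"],
            "customer_id": invoice["customer"],
            "customer_url": f"/customers/{invoice['customer']}",
        }

    print(render_invoice({"id": "in_1", "customer": "cust_9"}, "acct_123"))

With state like this, a rollout or sunset plan becomes a query over pinned versions rather than guesswork about who still calls the old URLs.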
This argument is true for any data that was included in the response. If the client chooses to cache it, it needs to deal with it being stale, which in this case could result in a 404. Obviously it would be a nice-to-have if the server had a redirect from the old URL to the new.
> Even if you figured all of that out, links are not idiomatic if your users consume your API via RPC or GraphQL.
Agreed, although you could model a link in the object-based structure representation to match the protocol, something like the hypothetical sketch below.
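For example, a link modeled as a typed value rather than a bare string, shown here as Python data; all field names are made up:

    # A link as a structured value, so an RPC/GraphQL-style schema can
    # give it a type. Field names invented for illustration.
    owner_link = {
        "rel": "owner",           # relationship name
        "href": "/people/12345",  # where the related resource lives
        "id": "12345",            # bare key, for clients that need one
    }

    pet = {"name": "Rex", "links": [owner_link]}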
>(of course, the server must continue to honor all previously-issued URLs).
Why is this true for non-user-facing URLs? Seems like an argument could be made that you only need to support the URLs for as long as the response was cacheable.
And the problem isn't really limited to links in the response; the same goes for computed links based on keys too.
In fact, links in responses are probably easier to update, because waiting for client software to upgrade can take a long time - but waiting for old versions to fall out of caches or at least dwindle to a small enough number is likely a much shorter wait.
Furthermore, when your clients are composing links based on keys, there's no good way for the server to tell a client that the URI has changed: the composition happens in the client, and the key (presumably) hasn't changed, so the thing the server would want to update is out of conventional reach.
But with URIs there's at least a conventional solution: redirects. That's not going to be an ideal solution, but as a way of limiting transition pain it might have a place.
All in all, I think links are likely less painful to use than keys when it comes to URI routing changes. (But really, try to avoid changing URIs in the first place, because whatever you do, it's likely going to be painful somewhere.)
For sure, but for most people using APIs to integrate with systems, caching is an unwanted middle way between always-fresh and persistently stored, where local storage is the "cache", except it isn't invalidated using HTTP logic.
Cache timeout limits generally apply to the data retrieved from a URL, not the URL itself. If you know that your clients will never "bookmark" or in any way store URLs, then you can change them. The same rule applies to all identifiers, not just URLs.
This really is an old quote, and some people certainly do cling to it quite dogmatically. But my own experience suggests that slavishly following this principle makes life more difficult for both producers and consumers, and that there's little substitute for excellent API documentation and clear, timely communication of changes.
I don't think Fielding meant to say that you cannot bookmark a URL, which is what your interpretation would imply. He was only commenting on how you find the URL in the first place to bookmark it.
No half decent programmer will randomly break backwards compatibility anyway so I'm not sure what your point is.
Consider your audience. Is it APIs like GitHub? Well guess what, you're stuck with maintaining backwards compat anyway, may as well have the relationships readable.
And if it's not and you control both ends then what does it matter because you can break links all you want.
It was touched on but less emphasized that relationships expressed as links expose further context than you'd otherwise get.
It's also nice to be able to reason about shared domains. E.g. you might have the concept of a stock item and an order item. They both represent the same thing in the real world and share a lot of information, but the value of `self` will tell you what kind of data you can expect to be available and what domain you're in. Reading logs would be oh so much clearer.
From the conceptual modeling point of view it is important to understand that:
(1) There can be several levels of entity identifiers; that is, the same entity exists at several levels, where it has different identifiers. Example: a computer has a DNS name and an IP address (and a MAC address); a person may have several identifiers. These (initially independent) spaces can be structured differently:
(1.1) Layered structure like DNS and IP
(1.2) Independent, e.g., a person has two passports from different countries
(2) Links (representing relationships) are attributes. Like all attributes they are functions which map input entity identifiers into output (address) space identifiers.
Taking this into account, the problem is that URLs frequently play two roles; that is, one address convention is used at two layers simultaneously, which can lead to numerous problems and controversies:
o URLs are used at the access (protocol) level, where the goal is to identify computers and access paths within some service (and not entities). For example, this layer has no idea what this means: http://service.com/passports/1234 - it is used for HTTP access only.
o URLs are used to identify real entities, and here we re-use the previous naming convention for higher-level purposes, identifying people via http://service.com/passports/1234 although we do not care about protocols or computer names.
In order to avoid serious problems we need to follow these steps:
* Recognize and document that there are two independent address spaces
* Define a mapping between these two spaces. Initially, it can be identity relationship, that is, every entity name (URL) is mapped to the same URL used for access.
* If necessary (and frequently highly desirable) use relative names by assuming that the higher level segments of the name can be restored from the context or configured separately.
These rules are especially important for extensibility and for complex evolving systems where the standards and conventions change in time.
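A tiny sketch of that separation, with the identity mapping as the starting point (all names hypothetical):

    # Two address spaces: entity names on the left, access URLs on the
    # right. Initially the mapping is the identity relationship.
    ENTITY_TO_ACCESS = {
        "http://service.com/passports/1234": "http://service.com/passports/1234",
    }

    def access_url(entity_name: str) -> str:
        # Later the access layer can move without renaming entities;
        # only this table changes.
        return ENTITY_TO_ACCESS.get(entity_name, entity_name)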
You don't have to honor it forever. If you want to be good about planning for changes, you annotate each link with the latest date it is guaranteed not to have expired, and the latest date by which an updated authoritative URL will be available (either as content in the resource representation or as an HTTP redirect) if it expires on the earliest possible date. Consumers are then free to determine whether they need to update links or whether the data they came from is stale.

Suppose your resources at any given link are immutable except for metadata about currency and links to updated versions, so that you can always recover the same version of an entity you fetched at the original URL, as long as the URL hasn't expired. Then clients don't even need to check each referenced URL for updates; they can just check the base URLs of the entities they have already read, and in the process update all referenced URLs. You kind of need that anyway to give clients an approximately consistent view of data when they can't read it all simultaneously. And if your API doesn't need that level of consistency, then it certainly doesn't need link stability either: you've already decided you are dealing with fundamentally ephemeral data, so if people don't follow links relatively quickly, they aren't getting meaningful data even if the links are permanently stable.
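A sketch of what such an annotated link might look like; the field names are invented for illustration:

    # Hypothetical link annotation, per the scheme described above.
    link = {
        "href": "/people/12345",
        # latest date the link is guaranteed not to have expired:
        "valid_through": "2020-12-31",
        # if it expires at the earliest possible date, an updated
        # authoritative URL (or an HTTP redirect) is promised by:
        "successor_available_by": "2021-03-31",
    }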
> you annotate each link with the latest date it is guaranteed not to have expired
If I were paying someone for access to such an API, I would do everything in my power to change providers. Tracking link expiration would be a terrible developer experience. Such a thing may be a sound design for a system, but it's not a good design.
Providers like Twilio and Stripe have been largely successful because they've aimed for making developer experience as non-sadistic as possible.
Anyone who has maintained a database such as a customer database knows that it needs to be constantly tested and updated to track changes in e.g., telephone, email, billing, or delivery addresses.
What's different about this proposal? It would seem to acknowledge and support that type of activity directly, rather than through ad-hoc and kludge type mechanisms that typically include paying somewhat shady 3rd-party services.
Maybe the version could be in the hostname? Then site-relative URLs go to the same version you were using, which is presumably what you want. But I agree that it doesn't seem all that useful.
It seems like any of these schemes make assumptions that some things are invariant when switching to a new version, so you don't have to start over.
For example, if user IDs are just integers then you have a fairly conservative assumption that clients can store the user IDs and they can be reused with a new version. That is, users won't be renumbered. But everything else about the API can change between versions.
A site-relative URL also fixes part of the URL template between versions, so it can be safely stored. But the host can change.
I'm skeptical that URLs are all that helpful here. Why not do a POST using something like protobufs and avoid the issue entirely? An API like this is site-specific anyway and requires custom client code (other than the parsing layer, which can be generated from a schema). It seems weird to pretend otherwise, as if some universal client existed. At best, you could use a browser for debugging, but that's about it.
This - and if your clients are getting their URLs from a downloaded Swagger spec or, even worse, from something they hardcoded, then the upgrade path is much worse with IDs.
With HATEOAS, the best case is that clients just bump the entry-point URL (or even have it bumped for them) and then follow the links they are given. It's not possible to upgrade clients that way if they are constructing URLs themselves.
Google seems almost pathologically allergic to maintaining backward compatibility in any context. Just doesn’t seem to be in their DNA. So it’s no surprise that advice originating from there doesn’t put much thought into how to maintain backward compatibility.
I wonder if this is an artifact of their monorepo developer culture, where they allow only one version of a library in the whole repo, and changing APIs is something that can be applied across the entire codebase?
Saying that URLs are not idiomatic with RPC or GraphQL is really just saying that most people aren't used to this way of thinking. Specifically, what they are not used to is treating URLs as identifiers. Most programmers and API designers think of URLs as being a way of encoding remote procedure calls, with identifiers being parameters to the call that must be embedded inside the URL. The point of the blog was to try to explain why a minority of people like to use a different model where the URLs _are_ the identifiers.
You don't necessarily need to support two formats; you need to support redirecting the previously-issued URLs. This _can_ be accomplished by supporting the old format, but it doesn't _need_ to be.
Instead, you could just use a database of redirects which you append to when you make format changes. For some applications, the overhead of maintaining a redirect service is far lower than trying to maintain support for multiple formats. (And this redirect service may even be something you need/want for other reasons.)
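A minimal sketch of such a redirect service, assuming Flask; the redirect table and URL formats are hypothetical:

    from flask import Flask, abort, redirect

    app = Flask(__name__)

    # Appended to whenever the URL format changes; in practice this
    # would live in a database rather than a dict.
    REDIRECTS = {
        "/people/12345": "/people/v2/12345",
    }

    @app.route("/<path:old_path>")
    def forward(old_path):
        new_path = REDIRECTS.get("/" + old_path)
        if new_path is None:
            abort(404)
        return redirect(new_path, code=301)  # permanent redirect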
The point here is that you can do this more easily. Consider a really dumb example:
/people/12345 becoming /people/v2/12345
Sure, you still have to support the old version (maybe only temporarily), but migrating schemas gets marginally easier (you can do it via an entirely different service, for example).
If a client gets a 410 when following a stored link, they can always find the new URL for the resource by following links from the parent resource (this is recursive, all the way back to the root URL).
You still have to keep deprecated URLs responsive in order to issue HTTP 301 redirects. And until 100% of legacy clients update their calling code (could be never), you can't remove support without breaking the system for legacy clients.
Yeah, but it doesn't mean you have to serve them from the same codebase. Partition the old URLs off to a thin service that only does the URL translation and issues the 301, and you're done.
I'm thinking of moving some docs now, as it happens, and it looks like all I'll need to do to serve 301s for them is put a few lines in an .htaccess file.
My deeper point is that all this has been hashed out and settled by now. If you're still having this problem in 2019 you're either playing around or "Doing It Wrong".
Another thing to keep in mind is that HTTP 301 redirects are typically only followed for GET requests. POST, PUT & DELETE requests will still be broken.
I never had any uncertainty regarding where I can use a given entity id in a well designed API. Fix the naming and organization of your API if that's a problem for your users.
Conversely, I often need to log or store API-provided entity ids on my side, and having to parse it out of a URL or store irrelevant URL bytes in my own database would be really annoying.
You're not going to avoid the need to compile entity URLs on the client side either, unless you only make requests to entities returned by the API, which would be a weird constraint to design client code around.
The browser already does it with the anchor tag and the user decides to navigate or not, the difference in an API is that it's a machine driving. What's the security issue you're talking about?
I wouldn't really call this particular scenario a security issue, but allowing such things is a bad habit that will eventually bite. It's like not HTML-encoding values like usernames that you "know" are safe (but you know are not HTML-encoded).
Yes, this is why developers should use URI-building libraries instead of direct string manipulation to modify URIs.
If I visit an HTML page with a link to “.evil.com/people/123” and click on it, the user agent won’t append “.evil.com” to the hostname. You’d instead get something like “https://api.hotstartup.com/.evil.com/people/123” which would be safe (if not broken).
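Python's standard library makes the difference concrete (hostnames hypothetical):

    from urllib.parse import urljoin

    base = "https://api.hotstartup.com/pets/1"

    # A dot-relative path stays on the same host; at worst the path is broken:
    print(urljoin(base, ".evil.com/people/123"))
    # https://api.hotstartup.com/pets/.evil.com/people/123

    # A protocol-relative reference, however, silently changes the host,
    # which is why naive string concatenation of untrusted values is risky:
    print(urljoin(base, "//evil.com/people/123"))
    # https://evil.com/people/123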
If you save the relative URL in your database and then the API changes its URL schema, you will need to migrate everything you stored to the new schema.
If all you stored was the ID, all you would need to change is the logic in your API client which accepts the ID and constructs the URL for it.
A change that necessitates a URL schema change would be just as far-reaching had the system been designed using IDs.
For example, if the entity ID changes from being an integer, to being a GUID, you'd still have to write code to update your schema (presumably, from an int column to a GUID/string column).
You don't have to parse entity IDs out of URLs — the URL already is an entity ID, and from the client's perspective, it doesn't have any sub-parts that can be parsed out, except those defined in the URI spec (scheme, authority).
That only works if fetching the /people/123 URL is the only thing you will ever do. If you need the real 123 entity id in any other context, e.g. to display it to the user, or to make a user-accessible website link to that entity (e.g. "view this transaction in stripe dashboard"), you'll need parsing.
I also mentioned that I don't want to store redundant garbage in my logs and database, especially if I want to index the column containing that entity id.
I think this takes an overly simplistic view of APIs. Going by the primary example in the article, by representing a pet's owner as a link instead of an id, they're basically discounting the idea that there may be separate endpoints that take in an owner id. For example, if there was an endpoint that let you get invoices by customer, you would still need to understand the templates for that endpoint.
More fundamentally, I think it's trying to solve a smaller problem in the face of a much bigger one: you still need to know what the response of any given endpoint is going to be. Just because they've passed me a link doesn't mean I don't need documentation on what endpoint that link points to. I still need to know that the owner is a link to the people endpoint so I can properly parse that result. That in turn requires just as much documentation (IMO) to describe the relationships as it would to properly document your URI templates.
Obviously, the primary reason to use links over ids is to give the developers of the API more control over changing things like routes and Ids and whatnot, but I feel like it is a bit disingenuous to make it out to be a much better user experience or something, since it really isn't.
I think the HATEOAS way to do this would be that you never calculate an endpoint like that. The owner object should have the invoices URL for that owner available in the body of the owner object.
...It's ludicrously bloated and part of the reason why no one does "real rest."
> The owner object should have the invoices URL for that owner available in the body of the owner object.
Actually, it doesn't need to. The owner object would only have invoice IDs. The invoice collection resource would be discoverable through the root endpoint through the same process used to discover the pet and owner collection resources.
In fact, the owner doesn't need to have any invoice information at all. The invoice collection endpoint can simply be queried to find an invoice related to the pet and/or owner.
> ...It's ludicrously bloated and part of the reason why no one does "real rest."
Actually it isn't, and the main reason why "no one does real rest" is because web API clients are required to perform content discovery, which is more complex than simply using hard-coded URLs/URL paths.
Of course this complexity is only apparent and superficial: having to support a myriad of API versions starts out as the simple solution, but the technical debt piles on rather quickly until it is far more complex, error-prone, and unmaintainable than the approach required to do REST.
In REST you do not need URI templates. You discover the invoice collection endpoint through the root endpoint, and once you have a ID you can, say, query the invoice collection endpoint to get to the invoice.
> I think the HATEOAS way to do this would be that you never calculate an endpoint like that. The owner object should have the invoices URL for that owner available in the body of the owner object.
HATEOAS is neutral as to resource representation; you could have the invoices collection and all the individual resources within the (or one of the; resources don't need one and only one representation) owner resource representation.
If the collection or invoices can be accessed as individual resources (which, again, HATEOAS is neutral about), they should have their own URLs, but that doesn't prevent them from being incorporated in the parent resource representation.
Naive orthogonal CRUD on (the equivalent of) base tables of a normalized DB design is not a requirement of HATEOAS or REST, and it's often a bad API whether or not you are aiming for REST. It's the strawman everyone beats on about REST, but it's got nothing to do with REST.
> More fundamentally, I think it's trying to solve a smaller problem in the face of a much bigger one: you still need to know what the response of any given endpoint is going to be. Just because they've passed me a link doesn't mean I don't need documentation on what endpoint that link points to. I still need to know that the owner is a link to the people endpoint so I can properly parse that result.
Well, yes, you need to know the semantics of the relationship, but not for parsing, because the content-type plus, where needed, internal data within the resource representation more specifically identifying the subtype should tell you what you need to know about handling the representation.
If you’re interacting with the api then it’s extremely likely you’re not going to write code that will automatically follow reference links and attempt to parse them dynamically.
You will need to know what fields are in the endpoint and how they represent data (ISO 8601 for dates? Unix timestamps?).
You're substituting documentation with asking the API consumer to explore, discover, and assume.
In the model where identifiers are URLs, your example would most naturally look like this: /invoices?customer=/cust/123456. /cust/123456 is an opaque identifier for the client. Since /invoices?... itself is a URL and therefore a URI, it must be the identifier for something—it is the URL of a query result. You can also have a property of a customer that looks like 'invoices: <url>', in which case that URL is opaque. Regarding your second point, it is true that clients still need to understand the entities of the problem domain, but this is still simpler than having to learn the entities and also learn the URL templates that are used to access them, which is the alternative.
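One practical wrinkle: because the identifier contains slashes, it has to be percent-encoded when embedded in a query string. A small sketch with the standard library:

    from urllib.parse import urlencode

    customer_id = "/cust/123456"  # the URL-as-identifier, opaque to clients
    print("/invoices?" + urlencode({"customer": customer_id}))
    # /invoices?customer=%2Fcust%2F123456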
There shouldn't be separate endpoints that take an owner's ID. That's bad design. The owner endpoint should contain a list of invoices, i.e. links to the invoice endpoint.
Maybe the data model is used for two use cases: one where you primarily access owner entities, and one where you primarily access invoices and occasionally do something with their owner. Just because it doesn't fit some weird, hypothetical prototypical API doesn't make it inherently "bad design". APIs aren't ER models; sure, they're supposed to make sense, but they're also supposed to help their consumers perform a task.
And let's not kid ourselves: much of our world runs on APIs that would really deserve the "bad design" handwave; at the end of the day that's often an aesthetic question.
But the invoice endpoint is likely going to be something like "/customer/(id)/invoice/...". So what did we gain from getting it from the customer description first? (vs getting the customer id from the response and the link pattern from the docs)
I meant a whole tree of methods. Sure, you can get specific invoices from `/invoice/id`, but you probably still want `/customer/id/invoices` or similar for searching through them. You could use extra parameters for `/invoice/...`, but I think often it's nicer to namespace that. (use case dependent)
Do I misunderstand you, or do you suggest that the customer object should include the list of all invoices? That does not scale. Imagine something more common, like transactions, where a user can have made thousands or tens of thousands.
It can contain a link to the list of all invoices. So you could have /customer/<cid>/invoice, which lists all invoices for the customer, which are actually just links of the form /customer/<cid>/invoice/<iid>.
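Hypothetically, the representations might look like this (field names made up):

    # The customer carries a link to its invoice collection rather than
    # embedding thousands of invoices inline.
    customer = {
        "self": "/customer/42",
        "name": "Jane Doe",
        "invoices": "/customer/42/invoice",  # link to a pageable collection
    }

    # The collection itself can paginate:
    invoice_page = {
        "self": "/customer/42/invoice?page=1",
        "next": "/customer/42/invoice?page=2",
        "items": ["/customer/42/invoice/1001", "/customer/42/invoice/1002"],
    }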
There's literally not a single upside of this shown in the article.
> The server is now free to change the format of new URLs at any time without affecting clients (of course, the server must continue to honour all previously-issued URLs).
No more than it was previously.
> The URL passed out to the client by the server will have to include the primary key of the entity in a database plus some routing information, but because the client just echoes the URL back to the server and the client is never required to parse the URL, clients do not have to know the format of the URL.
Instead, you now have to require a new kind of knowledge: which keys must be present in the schema and what they mean. E.g. knowing that a "pets" key is present and leads to a relationship of a particular kind, with all the implicit logic added and documented. And what if you want to get a pet's owners with some additional parameter, like only getting ones which are exclusively yours? Would you need to edit that "pets" URL, adding "&exclusively_owned=true"?
I can think of one upside: it does make your keys more distinct. I can tell at a glance that /pets/12345 is a different category of entity from /people/98765. That could aid in debugging. It's a bit like a type system.
And of course, since it is a type system, it now means you've got yet another type system to deal with in your universe, one with unknown and inconsistent semantics and no natural support built in. Conceivably those semantics could be added and software built to support them, but rolling your own is going to yield a lot of effort with much less benefit.
Yeah, but it would essentially still be more of a "string id" rather than a URL; e.g. you can just make your ID look like "person-12345" and "pet-52435" without tying it to a URL. Showing a valid URL can be a "nice artefact" that removes the need to look up API docs for collection names.
I don’t mind using a link, but I’d prefer to have both the exact id and the link to avoid having to parse a link in an unreliable way to get the actual id.
It's interesting how GraphQL changes the base point of the article concerning documentation/ease of API use.
In our GraphQL resolvers we typically add two fields: thing_id, which resolves to the id string, and thing, which resolves to an object that you can drill into as desired.
I was starting to see a lot of GraphQL APIs only add "thing", which meant you had to do a lot of queries like the one below just to get the id.
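Something along these lines; the schema is hypothetical, with the queries shown as Python strings for illustration:

    # When the schema exposes only `thing` (an object) and not `thing_id`,
    # the client must query into the nested object just to read its id:
    QUERY_OBJECT_ONLY = """
    {
      order(id: "ord_1") {
        customer {   # no customer_id field on order...
          id         # ...so fetch the object just to get its id
        }
      }
    }
    """

    # With both fields resolved, the id is available directly:
    QUERY_WITH_ID = """
    {
      order(id: "ord_1") {
        customer_id
      }
    }
    """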
I guess most schema designs wouldn't duplicate each field just for that convenience, as it doubles the number of fields in documentation, autocompletes etc. as well.
When you are orchestrating an interaction between several services where they are all using a shared identifier somewhere in their API, you need to extract that identifier if it's embedded in a URI.
The identifier is not embedded in a URI, the identifier is a URI: instead of an integer id 1234 or a string id "1234", you can use a string id "http://example.com/id/1234". Typically, one service is responsible for creating the id but after that, it can be used to refer to the same entity in any number of services. EDIT: What makes it better than a free-format string id is that every person and tool knows how to de-reference it / look it up.
What determines which API owns a resource when the same resource is exposed in multiple APIs (e.g. you have a REST API and an AMQP feed)? In some cases it is obvious, but in other cases it is far from clear.
I feel using URLs as ids creates more problems than it solves, especially if you use absolute URLs. I have worked with cases where the same ID was used in 3 or 4 different APIs, none of them clearly the master.
The URI doesn't have to be fully owned by any single microservice. The JSON representation can more or less correspond to a HTML view and be served by the UI, or a separate API facade can combine the relevant pieces of information from various microservices.
You would still often need to parse them. Imagine that the URL is the primary key of a table and the API changes its URL structure. Then you would get duplicate rows, and the only way to prevent that would be to validate the format of the URLs, which is parsing. The index would also be bloated and slow.
If you follow this model consistently, the client never needs the concept of an "actual id". For the client, there is only one Id — the URL. The client can use it anywhere that in other models they might need to use an "actual id"
Why did the author of this blog post decide to pass web links in resources and completely ignore standard practices such as RFC 8288, which employs the Link HTTP header?
I feel that the author tried to reinvent HATEOAS but skipped a cursory bibliographical review and jumped right into reinventing the wheel, and one which has already been reinvented multiple times (HAL, JSON-LD, etc...)
The post itself, and this response, illuminate a reality which we don't talk about -- the set of standards around HTTP are a horrible mess. As a person who in a past job has tried very hard to implement a "standards compliant" HATEOAS API, the sheer mass of complexity and vagueness is just too much. It's easier to just write something that works and which resembles something people are used to than to wade through these horrible RFCs trying to follow standards without good reason. It's like a horrible joke -- I wouldn't be surprised if you got to the final RFC and it said "Just kidding! Congratulations on getting this far, but this was just a test of your tolerance for pedantry."
> the set of standards around HTTP are a horrible mess.
That's not the problem at all.
It would be the problem if the current state of the art was actually taken into account and discarded for some reason.
Instead, the state of the art (or even basic standard practices) is systematically ignored, and we end up seeing the same old wheel being supposedly invented again and again by people who fail to perform the flimsiest cursory bibliographical research on any given topic, and instead invest the bulk of their time announcing their poor reinvention of the wheel.
The web is based on linking. It's a very basic concept. Linking resources is not a new problem. What line of reasoning can possibly lead anyone to believe that this specific problem has never been researched by anyone before us, thus it's a sensible idea to simply dive head-first into coming up with a proposal that completely ignores any prior work?
Nope, the current state of affairs is an indictment of the "standards". It should be easy and obvious to implement something according to the standards, but it's not. I'm a relatively smart guy, and I've tried. You very quickly reach the point where you ask, "why the fuck am I doing this?" and just do what makes sense.
Let's take RFC8288 as an example since you've brought it up. Where in that RFC does it discuss _why the fuck_ I would want to put an API link in the headers instead of the body?
The fact is that this RFC isn't "the standard". The standard thing is to put API links in the body with other attributes, and it's the standard because that's what everyone does, and because that's what makes sense. This RFC is a hammer for HTTP pedants to hit people over the head with.
> The web is based on linking
And where do we expect those links to appear? In the header?
FWIW, RFC 8288 was preceded [1] by RFC 5988, and RFC 5988 [2] says in its section '1. Introduction':
"A means of indicating the relationships between resources on the Web, as well as indicating the type of those relationships, has been available for some time in HTML [W3C.REC-html401-19991224], and more recently in Atom [RFC4287]. These mechanisms, although conceptually similar, are separately specified. However, links between resources need not be format specific; it can be useful to have typed links that are independent of their serialisation, especially when a resource has representations in multiple formats."
"To this end, this document defines a framework for typed links that isn't specific to a particular serialisation or application. It does so by redefining the link relation registry established by Atom to have a broader domain, and adding to it the relations that are defined by HTML."
Such as in which situations? It's not obvious at all, so a reader (that is, an API designer) is on their own to try and divine whether there's any real reason to follow this "standard" instead of doing things in the more natural and standard way.
This is my point. Not that it's hard to read these RFCs... but that they're often so vague, and it's so often hard to know whether there's any benefit at all to following them in one's specific situation.
That doesn't stop self-appointed standards cops from smacking people down.
The answer is in the part that says "it can be useful to have typed links that are independent of their serialisation, especially when a resource has representations in multiple formats".
In HTTP, URLs locate a 'resource'. Then you and the server do content negotiation, implicit and/or explicit, to select a 'resource representation'. Think of these as different formats for the same conceptual thing identified by the URL. Some formats like HTML can support hypermedia that can have embedded links. Some, like 'text/plain' or 'image/gif', can't.
Link headers allow links from the current resource to other resources to be communicated even if the chosen representation can't communicate links in its body.
You as the client try to GET /my-receipts/20190512-1 from Fancy Receipt Scanning Service, and content-negotiate with an Accept header to "text/plain" or "image/gif" (e.g. to get a plain copy or a scan). There's no agreed-upon way of communicating links in plain text or GIF, so Fancy Receipt Scanning Service can't serve you a GIF scan of your receipt that links to a product page for every item you bought.
If you accepted "text/html", it could have served HTML that embedded these links within the response body, but you didn't accept "text/html".
It can choose to send links as headers, if it still wishes to communicate links.
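From the client side it might look like this, using Python's requests (service and URLs hypothetical):

    import requests

    # Ask the hypothetical receipt service for a GIF scan.
    resp = requests.get(
        "https://scans.example.com/my-receipts/20190512-1",
        headers={"Accept": "image/gif"},
    )

    # The GIF body has no way to carry hyperlinks, but the response still can:
    print(resp.headers.get("Link"))
    # e.g.  </products/987>; rel="related", </my-receipts/>; rel="collection"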
That's fair, but if I'm defining an API that serves, say, JSON, I can define a schema for it and tell my clients what things mean in the schema, including which things are links.
> Nope, the current state of affairs is an indictment of the "standards".
That's not true at all. Those "standards" are not taken into account at all. I mean, there are currently about half a dozen HATEOAS standards and specifications that repeatedly implement the same concept, and they have existed for quite a few years. Yet, how many of those standards and specifications were taken into account in this particular blog post? Zero. Instead of taking into account any prior work already done on the issue of web linking, the problem is for some reason presented as a novel idea that, for some reason, no one in the world would have ever thought about.
How is that a reasonable starting point?
> It should be easy and obvious to implement something according to the standards, but it's not.
You've got to be kidding.
Let's consider RFC8288. RFC8288 in essence specifies a single HTTP header and a format to represent link relations. You want to link your resource to any other resources? Well, just add the link relations. You want to check what resources are related to the resource you've just requested? Well, just check the Link headers.
Let's consider Hypertext Application Language (HAL). HAL in essence specifies a wrapper document type that extends the resource with a "_links" name:object pair. You want to link your resource to any other resources? Well, just add the link relations to the "_links" object. You want to check what resources are related to the resource you've just requested? Well, just check the object referred by the "_links" name.
Let's consider the Ion Hypermedia Type. Ion in essence specifies a wrapper object type that includes all link relations as JSON name:value pairs and contains the resource as the value of the "value" name:value pair. You want to link your resource to any other resource? Well, just add the link relation as a name:value pair. You want to check what resources are related to the resource you've requested? Well, just check the name:value pairs of the response object.
And the same applies to other HATEOAS standards and specifications such as JSON-LD, Hydra, Collection+JSON, Siren, etc etc etc...
Where's the rocket science?
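For comparison, here is the same owner link in two of those forms (values hypothetical):

    # RFC 8288: a Link header on the HTTP response.
    link_header = 'Link: </people/12345>; rel="owner"'

    # HAL: a "_links" object wrapping the resource body.
    hal_document = {
        "_links": {
            "self": {"href": "/pets/98765"},
            "owner": {"href": "/people/12345"},
        },
        "name": "Rex",
        "species": "dog",
    }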
> Where in that RFC does it discuss _why the fuck_ I would want to put an API link in the headers instead of the body?
As someone who is wholly unfamiliar with this, why should relationships be treated any differently than any other attribute of some object returned by a web API?
If data-attributes have to be parsed out of some json-body, but link-attributes have to be parsed out of the header, that's weird. It becomes even weirder if I decide to switch from using a key-in-the-body to a link-in-the-header.
That doesn't sound easy or obvious. As is, as someone dealing with a REST API, I rarely, if ever, have to think of headers. When I do, they're related to things like CORS, and transport layer semantics, not application level semantics. But now you're saying that, actually, some of my application semantics should be in the header, but most should be in the body.
How is that easy or obvious? Why is that better?
I'm dead serious, I'm not kidding, I'm not the user you just responded to.
> As someone who is wholly unfamiliar with this, why should relationships be treated any differently than any other attribute of some object returned by a web API?
Because links are resource metadata, not the resource itself, let alone the entities represented by a resource. The relationships between resources are independent of the resources themselves.
> If data-attributes have to be parsed out of some json-body, but link-attributes have to be parsed out of the header, that's weird.
It really isn't. In fact, it's the other way around. Link relations depend on the request, not the resource, just like the particular version of the resource (see ETag header) or the date and time at which the origin server believes the resource was last modified (see Last-Modified header).
Considering the petstore example, it would be weird if the pet included a version/hash or the last time it was modified. A pet is a pet. It has a name, a species, a breed, and an owner. The pet resource is the pet information that you received as a response to the request you made at a specific moment in time.
> Link relations depend on the request, not the resource, just like the particular version of the resource (see ETag header) or the date and time at which the origin server believes the resource was last modified (see Last-Modified header).
Not really. Since it's mother's day, the fact that I'm related to my mother is an attribute of me, not an attribute of the request you make for information about me. I am related to my mom.
In a conventional database, link relations are defined on the tables themselves, but versioning information and last-modified information is normally defined in metadata tables. So I don't see any prior art for this. It just doesn't match how most people normally represent objects.
> the pet resource is the pet information that you received as a response to the request you made at a specific moment in time.
And at version X, the pet has a specific owner, defined as a relation on the pet itself. The request provides versioning information, but the owner is keyed on the pet, just like it would be in the server's database.
> Not really. Since it's mother's day, the fact that I'm related to my mother is an attribute of me, not an attribute of the request you make for information about me. I am related to my mom.
You are related to your mother, but that relationship is represented through link relations. You are represented by your resource, and your mother is represented through her resource. Where and how those resources are represented or found is an entirely different concern that has absolutely no relation with the relationship between you and your mother.
> In a conventional database, link relations are defined on the tables themselves
Actually, they are not. I mean, there are entity tables (the resources) and then there are the relationship tables (the... relations). Although some techniques involve using non-normalized databases, that does not mean that entities and entity metadata, such as relations, are or should be conflated.
Now, imagine that the database tables weren't fixed properties of the system, which is a basic design issue in resource-oriented architectures. Would it make any sense to hard-code references to other tables if they could change at any point in time, especially if you're operating a system that could cache table rows?
> And at version X, the pet has a specific owner, defined as a relation on the pet itself.
Yes, and that's also the wrong way to go about specifying the relationship between resources. The correct way would be to express the relationship between resources as a link relation. Neither the pet's identity nor the owner's depends on the other.
> Actually, they are not. I mean, there are entity tables (the resources) and then there are the relationship tables (the... relations)
Foreign keys are put in entity tables in normalized databases.
> Neither the pet's identity nor the owner's depends on the other
So then the only thing that belongs on the pet itself is an arbitrary id? Because the identity of the pet doesn't depend on its age or when it was born, and a timestamp is just a timestamp, so putting "time of birth" could just be a link relationship to another entity.
Same for breed. Location too, that's not a tool for identifying the pet. Heck, even the name is often defined by the pet-owner relationship but not the pet itself.
So we're left with a pet object that has only a uuid, and a set of link relations in the headers.
Why is that better? You've said my way is "wrong", but you haven't actually explained what your way does to improve the situation. I just demonstrated that any arbitrary piece of info can be represented as a link relationship, so the whole thing is arbitrary anyway.
So the simple question, the only question is, as a designer or user of an API, how does sticking some relationships in headers make my life easier? How do I decide which relationships those should be?
You clearly have opinions on how this should be done, but neither you nor the RFC explains what I gain from your way of doing things. So again, I'm not asking what I should do, I'm asking why?
> As is, as someone dealing with a REST API, I rarely, if ever, have to think of headers.
Really?
Headers contain information about the content and the caller which are used in business logic. Authentication and authorization information is often passed in headers, for example.
The state of the art for web APIs, for better or for worse, is one where you send and receive 'untyped' JSON (served as application/json), endpoint URLs have version numbers in them, and POST GET PUT DELETE mostly map to CRUD. Some extra flourish is sprinkled on top, not to improve functionality, but to chase the mood of the times or to make your code-generator (like Swagger/OpenAPI) work.
The article appears to start with a very similar assumption, and proposes use of URLs within resource representations instead of bare foreign-key IDs. I don't think the article can be accused of systematic ignorance of 'standard practices' or failing to perform "cursory bibliographical research"; stuff like HAL and JSON-LD are far from standard practice.
This series may benefit from a quick comparison of API design schools, comparing ways to express a similar domain model in various styles, but it also seems to be trying not to get bogged down, and to dispense some prescriptive advice (instead of, say, overwhelm and despair).
> The state of the art for web APIs, for better or for worse, is one where you send and receive 'untyped' JSON
JSON is just a document format used to encode resources. Links between resources don't change with the document format used to encode the resources. Conflating resource encoding with resources, let alone resource linking, misses the whole point of a resource-oriented architecture.
> endpoint URLs have version numbers in them
That's an API design choice, and one which has no relation with how resources are linked. It makes absolutely no difference where a resource can be found, as long as it's reachable. That's the whole point of REST.
> and POST GET PUT DELETE mostly map to CRUD.
That's entirely irrelevant to how resources are linked.
> Some extra flourish is sprinkled on top, not to improve functionality, but to chase the mood of the times or to make your code-generator (like Swagger/OpenAPI) work.
Again, entirely irrelevant. Resource and resource representation are entirely different concepts. In fact, some web api frameworks actually pick resource encodings depending solely on the content type negotiation process, while using the exact same resource regardless of any encoding.
>> the set of standards around HTTP are a horrible mess.
> That's not the problem at all.
Sorry, what is the real problem then, in your opinion?
I have never heard of that link-in-headers RFC until now. I have never heard of CURIEs, but I remember seeing the XHTML example they gave. No one called them CURIEs.
There is a huge communication gap, it seems, between what the standard defines and what people do in practice.
Why did the author of this blog post decide to pass web links in resources and completely ignore standard practices such as RFC 8288, which employs the Link HTTP header?
https://tools.ietf.org/html/rfc8288
Who uses this "standard practice"? I haven't seen it. It seems rather awkward, even user-hostile, to put most of a resource's attributes in the body but put ones which happen to be links in the header.
Resource links aren't a part of the resource. They are only a means to express how the resource you've just requested is related to other resources. Thus it's a function of the HTTP request and not the resource itself.
I mean, think about it. The resource can (and will) be cached. Does it make any sense to also cache ephemeral links with it?
Then why is the single most important and widespread form of resource links, HTML URLs, always included in the response body, i.e. in the HTML document itself?
HTML is a markup language whose primary goal is to provide human users with a readable and navigable document through a web browser.
There's a reason why HTML is primarily used to provide human-readable documents, while web APIs are implemented based on other document formats that are better suited to provide machine-readable resource encodings.
Let me explain why you have reached the wrong conclusion. The key is to think about why the markup is needed. Is it to format documents, to make them 'readable'? No, it's to tag documents with semantic meaning that computers can use to enable richer content and behaviours. That's why, for example, tags like `<b>` and `<i>` are nowadays widely accepted to be design mistakes in HTML, and we use CSS instead for formatting. That's why we encourage semantic markup like `<section>`, `<nav>`, and even `<div>`.
Hyperlinks are one of the basic semantic markup tags. They allow machines to read them and insert jump points. The key here is that they are machine readable to enable behaviour that wouldn't be possible otherwise.
Right.. so... that doesn't jibe with how the real world thinks about these things. In the real world we want the pet to have an "owner" attribute, which is part of the resource.
You should just know that this makes zero sense to most of us. Either that's because it makes no sense, or you can't explain it well, or we're all too stupid to get it. Either way, good luck with that "standard".
> You should just know that this makes zero sense to most of us.
Unless you've held an election, you only speak on behalf of yourself and yourself alone. If you're having a hard time understanding basic concepts, then naturally it's unlikely that you'll have an epiphany in a discussion where you've decided to take a hostile stance towards details you're not understanding.
Meanwhile, you can ponder how the specifications you claim you (and those you claim to be representing) don't understand are actually used extensively; web linking is such a basic concept that it even features in intro tutorials on REST APIs. Perhaps that's a good sign that basic concepts such as web linking aren't as complex or hard to understand as you've tried to assert.
They provide what you've described as an alternative to what most consider the normal way of representing resource links. They don't make any of the claims you are making as to why one is more valid than the others.
So once more: sure we can do things your way, but why is it better? Every other source says link headers are an option. You claim they're superior. Why are they better?
Why does representing links in headers and not bodies make the API easier to understand, navigate, or use? How does it clarify the interactions? What do I gain from it?
So far, all you've said is "relationships are metadata" or something, which is not how most people think of relationships; and even if it were true, it doesn't explain why my response can't contain metadata, as long as it's marked as such.
I've built actual HATEOAS services for a few years now, and I work/chat with several people who do the same, including a few authors on this subject, and I don't think that the idea that "a relationship generally should not be part of the representation itself, because it's meta-data" is a very universal one. I'd say far from it.
Aside from that, I'm curious: how do you manage allowing API clients to create new relationships or change them?
> Aside from that, I'm curious: how do you manage allowing API clients to create new relationships or change them?
What's hard to understand? If you add a resource then you update your link relations. If you actually tried to implement a REST API instead of RPC-over-HTTP then the client navigates link relations just as before. That's pretty much the whole basic premise of REST.
The thing I'm mostly curious about is situations where a client manages relationships. For example, you already have an 'article' resource and now want to add a new 'category' link to that article. Clients often need to be able to create new relationships, or perhaps remove them. I'm curious if you ran into this and if so, how you do this with just Link headers.
It's ok if the author tried to come up with solutions for the problems he feels he has, but it is a complete waste of time -- his own and that of those who spend any time reading this sort of blog post -- if the post fails to take into account standard practices that have been employed for years.
As another example of how the author failed to take into account basic standard practices, the author asserts that there is no standard HTTP header for specifying API versions, but this baseless assertion ignores the fact that media-type versioning does exist and has been used for ages. The author even realizes that a solution based on the Accept header would fit his requirements, yet he prefers to invent an Accept-Version header. So why doesn't he just follow what has been standard for quite a few years and simply state the document version in the media type, which is already supported by any content-based routing scheme and caches quite nicely?
Your suggestion to use the content type for versioning makes sense if you believe it makes sense to invent a new media type for every entity in your domain model (customer, invoice, ...). I don't think this is good practice. I always stick to the standard media types (application/json, text/html, ...).
A thing may be identified by a URI (/person/123) for which there are zero or more URL routes (/person/123, /v1/person/123). Each additional route complicates caching; redirects are cheap for the server but slower for clients.
JSONLD does define a standard way to indicate that a value is a link: @id (which can be specified in a/an @context)
https://www.w3.org/TR/json-ld11/
One additional downside to storing URIs instead of bare references is that it's more complicated to validate against a URI template than against a simple regex like \d+ or [abcdef\=\d+]+
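For example (a sketch; the URI pattern is hypothetical):

    import re

    # Validating a bare key is one trivial regex:
    assert re.fullmatch(r"\d+", "12345")

    # Validating a URI-shaped key means committing to a route template,
    # which is exactly the coupling the links were supposed to remove:
    assert re.fullmatch(r"/people/\d+", "/people/12345")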
And from a more fundamental standpoint: I get that disk space is cheap these days, but you just doubled, if not worse, the storage space required for a key, for a vague reason.
It may never make any difference on a small dataset, where storage was anyway unaware of differences between integer and text. But it would be hiding in the dark, and maybe in a few years a new recruit will have the outstanding idea of converting the text link to a bigint to save some space...
The idea of using URIs instead of keys is not a new one (as has been mentioned by other commenters). Every few years the idea gets a resurgence of people who say that REST APIs should be HATEOAS and that we are doing it wrong.
It seems obvious that the cost-value for this is simply not there; if it were good enough, you'd see developers requesting it and many more vendors implementing it. So far I haven't seen any recent changes that might skew the cost-value towards the URI's favor, only the opposite (cue GraphQL).
Using URIs has few benefits, but it does have the following problems:
As a user of the API:
* You need to keep an arbitrary-length key in your database if you save references. It can cause some issues with certain setups (less so these days, though).
* If you keep the entire URI as the identity, then you can't use multiple endpoints. For instance, lots of companies have an endpoint for production and one for reports; using a URI from one endpoint in another is quite awkward.
* Working with queries is troublesome, especially with GET requests. Consider searching for all transactions of a specific account, where the account's identifier is `https://api.google.com/v1/account/123`.
* Upgrading to a new version of the API (one with a different URL, like v1/v2) now not only requires you to change your code to work with the new version, but also to migrate all previous ids you kept in your database, which is a much different and more error-prone task than simply changing code.
One simple strategy is not to change your approach to storage of IDs, which can continue to be "simple" keys. If you use that strategy, the only change from a "conventional" application is that the server is doing all the URL<->simpleId conversions instead of pushing that responsibility onto the client. Another good strategy is to store IDs in the form that the URI spec calls "path-absolute" URIs — basically you lop off the scheme and authority. This strategy works well, but may require a bit more care, and may cause problems if you have to integrate with other tables that have a different approach to keys.
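A sketch of that second strategy with the standard library:

    from urllib.parse import urlsplit

    def to_path_absolute(url: str) -> str:
        """Lop off the scheme and authority, keeping the path (and query)."""
        parts = urlsplit(url)
        return parts.path + ("?" + parts.query if parts.query else "")

    print(to_path_absolute("https://api.example.com/people/12345"))
    # /people/12345  (stable across host moves, still opaque to clients)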
The knowledge in this article has been lost to time. In "relational databases" you always name the foreign key after the relationship between tables.
From "A Practical Guide to Relational Database Design" from the year 2000.
"Each relationship line should carry a pair of descriptive elements, which define the nature of the association between entities. A name is a single word or descriptive phrase; it should always contain a verb such as: owns, owned by, holds, administered by, etc. Examples from our simple model are: A PART is sold on an ORDER LINE. An ORDER LINE is placed for a PART."
But this has been lost, because in practice it is hard to know what the element is, as other comments point out.
Probably the best is both worlds combined: PersonId_Owner, PersonId_Veterinary, or something similar.
It seems that such a discussion should have been solved decades ago. And here we are. :)
I feel like the author has never used GraphQL. We're also just finally graduating past REST to something more meaningful. This advice feels 15 years late and now totally wrong. An API shouldn't be tied to a protocol like HTTP; it should be able to move on to other things.
Ahh I was correct:
> I have never used GraphQL, so I can't endorse it, but you may want to evaluate it as an alternative to designing and implementing your own API query capability.
You really shouldn't write this giant article without having tried that.
The article's recommendations don't achieve HATEOAS, because even though the foreign-key IDs are replaced with URLs, they're not actually links because they don't specify an explicit relationship.
Instead, the relationship between the response document and the URL's target is implicit, probably guessed from the naming of the key or maybe noted in the response document's definition.
The point of HATEOAS is that a client who understands the meaning of certain link relations (aka "rels"), such as ones in the IANA registry [1], can interact with these referred-to resources using the Standard Interface (of GET, POST, PUT, etc).
Only in the section where the article talks about ways to express links in JSON do link relations appear.
In the style of JSON I like to use, and which GitHub and Google Drive use, the relationship name is given by the JSON name. So if you see 'owner: /person/12345', then 'owner' is the relationship name. There are other JSON styles for expressing the relationship name; the blog post mentions some of them. You might quibble about whether these names are the names of the relationship, or just names of one end of the relationship.
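For instance, the relationship rides on the JSON key in that style (a sketch; the fields are illustrative, not GitHub's or Google Drive's actual shapes):

```python
# The JSON name carries the relationship: "owner" relates this dog to a person.
dog = {
    "self": "/dog/98765",
    "name": "Rex",
    "owner": "/person/12345",
}
```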
The original idea was that you had URLs that told you where to find a thing, and URNs that identified a particular thing regardless of where it appeared. These used a unified scheme (called URIs) because when you're telling somebody else about a thing, you want to be able to refer to it by location ("the red building in the next block") or by identity ("the main post office"). Presumably, there would be services that could take a name and tell you where to find it; not a bad assumption, as every library in the world has such a system for its own collection.
While this looks good on paper, in practice URLs were relatively stable in the early days of the internet, and so they turned into de facto names before much effort had been put into making URNs work. Now we're struggling with exactly the issues the original designers foresaw but were never able to address with a working implementation.
Why can't the ID also be a link? Serious question. I don't see any real downside to it.
If the ID is also a link, it is guaranteed to be globally unique (like a Relay Node ID in GraphQL).
If you want to get the same resource via GraphQL, just use the same URL-ID. In fact, the author mentioned the possibility of base64-ing the link to prevent clients from relying on its structure, which is also a common pattern with IDs in GraphQL.
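A minimal sketch of that base64 trick (the link and encoding choice are illustrative):

```python
import base64

# Base64-ing the link yields an opaque, globally unique ID,
# mirroring Relay's global-ID pattern in GraphQL.
link = "/account/123"
opaque_id = base64.urlsafe_b64encode(link.encode()).decode()
assert base64.urlsafe_b64decode(opaque_id).decode() == link
```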
It's not that they can't; it's just that links are not designed to be IDs.
The ID is the entity's identity (informally, the smallest set of values needed to uniquely identify it); the link is how to find the entity.
What happens if you want to deprecate your API?
There are other ways to share the link of a resource, like using headers.
Then again, from a strictly practical point of view, in 99.9% of cases it won't change anything, but:
1. Eventually there will be cases where it does matter, and then it will be a huge mess.
2. Why close a door on yourself, when you could just read a header and obtain exactly the same information (see the sketch below)?
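For instance, an RFC 8288 Link header can carry the same information out-of-band (a sketch; the endpoint and body fields are illustrative):

```python
# The body keeps a plain ID; the Link header tells interested
# clients where the resource lives (RFC 8288 syntax).
headers = {
    "Content-Type": "application/json",
    "Link": '</v1/account/123>; rel="self"',
}
body = {"id": "123", "balance": 4200}
```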
What is the client's source of knowledge about which fields contain links to entities?
For a human it's obvious that a dog has an owner, so the field "owner" should be used; but for code, you need to write it down, "document it".
So if you're going to "document" every field containing a link to an external resource, you'll end up with even more code than if you had just "documented" the API endpoints.
Also, you pretty often need the IDs of multiple entities to send a POST/PUT request, just to create a relation:
POST /adoption, owner_id=5, dog_id=7.
How should that look with links? Will it be an issue for the server to parse them? And that's just a simple case with a 1-to-1 relation; sometimes you need to add sets of objects to another entity.
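For what it's worth, the link-based equivalent and the parsing it forces on the server might look like this (a sketch; the routes and field names are assumptions):

```python
from urllib.parse import urlsplit

# Link-based equivalent of POST /adoption, owner_id=5, dog_id=7.
payload = {
    "owner": "/person/5",
    "dog": "/dog/7",
}

def id_from_link(link: str) -> str:
    # The server minted these links, so it knows their structure
    # and can take the trailing path segment as the key.
    return urlsplit(link).path.rstrip("/").rsplit("/", 1)[-1]

assert id_from_link(payload["owner"]) == "5"
assert id_from_link(payload["dog"]) == "7"
```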
It's really bad advice, and after reading this I'm not sure I should trust other articles from that source.
Because it's more difficult than parsing "5" and "7", especially when you need to parse more than just numeric IDs. "More difficult" always means "more bugs" and "more vulnerabilities".
There is not a single good reason for this complication. The ONLY motivation the author had was less knowledge about API endpoints on the client. But it simultaneously means more knowledge about the fields where links are stored, so it saves nothing.
The document also forgets that APIs are not read-only. So let's say you have users and usergroups: you can request a usergroup with its list of users, and you can add users to usergroups.
If you use links for reads, you should also use them for writes; otherwise it's quite inconsistent. So now you need to add a lot of parsing everywhere to extract the IDs out of the URLs, just for the sake of being more dogmatic.
I have implemented many APIs in this style, both read and write. Depending on the design of the storage layer, the server may or may not have to parse IDs out of URLs. The client never has to; for the client, the URL is the only ID.
If in your example your usergroup and user are managed by the same service, then usually you already have a feature in your framework that parses `/user/123` into its individual components and finds the relevant entity.
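With Flask, say (just one such framework; any router with path converters would do the same):

```python
from flask import Flask

app = Flask(__name__)

# The router parses the link for you: GET /user/123 -> user_id == 123.
@app.route("/user/<int:user_id>")
def get_user(user_id: int):
    # Look up the entity by its key; the fields here are illustrative.
    return {"self": f"/user/{user_id}", "id": user_id}
```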
I've literally spent 2 years working on a project that did exactly what this article is recommending. There were some places which needed the relative URL as an identifier, and other places which needed the "database id" as the identifier. We constantly had to extract the ID from the URL, or convert the ID into a URL, and keep a mental map of which format each input was using, and which format was needed for each output. It was a mess. I would personally not recommend this at all.
If some parts of your API used database keys as identifiers and other parts used URLs, then I can see that could be confusing. All one or all the other would probably be better.
Anytime you're interacting with the database, you'll need to use database keys. Anytime you're interacting at the API interface level, you'll need URLs. Anything in the middle is then going to be a gray area, especially when you have multiple people working on this implementation together.
Basically, this is advocating for dynamic typing rather than static typing, across an API boundary. You'll save code constructing API requests, but you'll need to create a lot of application logic to handle an owner link and a pet link separately, since they have different semantics.
What's the point? None of this is useful to me, all of this is extra complexity. Why would I want to expose every addressable entity through URLs and HTTP? That's not what IDs are for.
I'm aware that this fits into the whole REST idea. I still don't care.
I am surprised the article didn't mention RDF. In every data facet of RDF, the data is uniquely identified by a URI. In the case of RDF, the URI is merely a unique identifier that can resolve to an HTTP resource, but doesn't have to.
The fact that JSON is just a format standard and doesn't have specified components (like links, etc.), so that we have to build those on top, has cost us a lot in APIs. By the way, according to RFC 8288 Web Linking (and before that, 5988), a link consists of 3 parts plus 1 optional part:
"In this specification, a link is a typed connection between two
resources and is comprised of:
o a link context,
o a link relation type (Section 2.1),
o a link target, and
o optionally, target attributes (Section 2.2).
A link can be viewed as a statement of the form "link context has a
link relation type resource at link target, which has target
attributes".
For example, "https://www.example.com/" has a "canonical" resource at
"https://example.com", which has a "type" of "text/html".
"
That's why you need a standardized link component, globally accepted and understood, that takes into account all parts of the linking, instead of having various ways, depending on the API or JSON-based media type, to communicate that something is a link.
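Such a component might carry all four parts explicitly, instead of implying the relation through a key name (a sketch, not any existing standard's exact shape):

```python
# The RFC 8288 example above, with all four parts modeled explicitly.
link = {
    "context": "https://www.example.com/",  # link context
    "rel": "canonical",                     # link relation type
    "href": "https://example.com",          # link target
    "attributes": {"type": "text/html"},    # target attributes (optional)
}
```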
I used links, but I'm going to rewrite this code to simply pass IDs. The reason is simple: I need additional configuration for my server to know its own hostname, and I don't want to do that. Maybe my server even has a few different hostnames for different clients? So I must parse the client's request and extract the hostname? But it's served via a reverse proxy, so I must do some complex configuration to pass this information along. So many issues. But the client knows perfectly well which server it's talking to, so it can just append the ID to the server base. Yes, the client must know about the URL structure, but the idea that the client can somehow learn that structure by itself is nonsense; I'll have to code it anyway.
Maybe it makes sense when you're writing an API and some other person is writing a client, and she's so shy that she doesn't even want to ask you. Sure, she can inspect a response and figure out that something looks like a link she can query further. I was never in that situation; I was always building all the software myself, so for me this does not make sense.
I'm not sure about the verdict on URL versioning. I've used header versioning extensively, and while flexible, it also carries some big downsides, mainly that it's confusing for new developers and makes it really hard to casually explore the API in a browser (bad DX). I'm also not sure you want to encourage mixing v1 and v2 API representations; I have certainly seen cases where it makes progressive upgrades easier, but it can also bring inconsistencies, so having a default new-integrator path of "start at v2/login and use whatever links you get" is appealing.
I do like the idea from Stripe of having Accept header versioning, but pinning every new client to default to the newest GA version. Gets around most of the DX concerns I raised, but it's a bit more machinery to wire up.
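Server-side, that pinning can be a small amount of logic (a sketch; the header name, version format, and defaults are assumptions, not Stripe's actual mechanism):

```python
# An explicit header wins; otherwise fall back to the version the
# client was pinned to at signup, then to the newest GA version.
NEWEST_GA = "2024-06-01"

def resolve_version(headers: dict, pinned_default: str | None) -> str:
    return headers.get("Api-Version") or pinned_default or NEWEST_GA

assert resolve_version({}, "2022-11-15") == "2022-11-15"
assert resolve_version({"Api-Version": "2023-08-16"}, None) == "2023-08-16"
```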
I wasn't fully convinced by this article. Language-specific API wrapper clients can abstract all these complexities. Having links for IDs felt very unnatural, but I guess that's because I have never come across an API that uses links the way the article suggested.
Conceptual and data modeling aspects of this problem are discussed in [1]. It compares links with joins (and foreign keys) by proposing a solution (concept-oriented model) which does not use joins at all but rather relies on links only.
Essentially, a foreign key is viewed as a relational workaround for representing links, with some significant drawbacks, and the question is why not use links directly, without the relational wrapping.
That seems like a highly relevant aspect to explore. For example, GraphQL does not specify joins and nesting is semantically a reference (directed link in the graph) instead. Any update on the cliffhanger?
> Yet, classical references miss some properties which are of crucial importance for data modeling. How links can be revisited in order to overcome these drawbacks will be our focus for future research.
Sometimes I wish the whole RESTful/REST idea were a lot more opinionated. Sure, you can have opinionated frameworks, but nothing is stopping you from using a PATCH like I would a DELETE... (not the best example, but you get the gist).
I think the issue brought up in this blog post pales in comparison to the two biggest problems faced when working with REST APIs: querying for nested data, and the limitations of CRUD interfaces to model complex behavior.
I've been building REST APIs for 13 years and have never had either of these problems. I might be reading too much into this, but it sounds like you enjoy GraphQL or RPC-like APIs and have trouble mapping their respective concepts 1:1 to REST. You shouldn't.
These architectures aren't equivalent alternatives. Pick the right tools for the job, and use the right tool appropriately.
No, I work mostly with REST APIs, and I find that API consumers don't really care about having nice relative links or references to self. If anything, the popularity of GraphQL indicates that they want more flexibility than is usually available through REST, and posts like this are the ones really addressing non-problems.
> If anything the popularity of GraphQL indicates that they want more flexibility than is usually available through REST
I absolutely agree. GraphQL clearly addresses a pain point, or a gap. I just don't think it should be seen as a replacement. Pick the right tool for the job.
It required some discipline. I had to catch myself several times. The reason I avoided those terms is that I wanted readers to focus on one simple idea and not get distracted by all the baggage those terms bring with them. You can see by the comments that I wasn't totally successful.
Why on earth would you blow out your response size for the sake of purity? Calling GET /pets is going to return a lot of instances of pet with very similar URLs.
I have implemented JSON:API a few times, and so far it is the best approach. It solves all the problems I had with APIs in a very nice way. The `include` strategy is very simple and effective for dealing with relationships.
What really convinced me about HATEOAS and links was the first time I used a HAL browser and started clicking around to discover an API using only its entry point and navigating from there.
From that point on I try to use it as much as possible. A typical API response for my projects looks like this:
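(A representative HAL-style sketch; the fields are illustrative:)

```python
# HAL-style: a plain "id" plus a "_links" object for navigation.
response = {
    "id": "12345",
    "name": "Rex",
    "_links": {
        "self": {"href": "/dogs/12345"},
        "owner": {"href": "/persons/98765"},
    },
}
```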
It still has the ID field in there for cases where the client needs to store the ID itself, but it should not be used to template URIs for related resources; the links are there for that.
I've always been a bit undecided on this. I can see some obvious upsides, but on the other hand (apart from the downsides mentioned in the article and elsewhere) you've added 500 bytes to this entity for functionality that's only useful to the developer. Is the ability to click around in a HAL browser instead of a Swagger document worth the verbosity? Is there an argument that we might produce clients that can meaningfully deal with new links being presented without a developer being involved? I'd be surprised if that were realistic.
Yes, 500 bytes extra, but it's not only for the developer discovering the API in a browser. If the client developer is exploring the API and then manually implementing an API client, something is not right.
HAL clients for this use case do exist. You point them at an entry point and they can discover and generate the necessary code/methods for interacting with the API.
I think I am missing the core concept here. This still uses IDs, only now you have to grep them out of a URL construct instead of just getting them directly?
I don't get the intent at all here, but I have a suspicion whatever problem this tries to solve is better solved by UUIDs or by doing nothing out of the ordinary.
The URL doesn't contain the ID, it _is_ the ID. If you stop treating the URL as an opaque string and start parsing things out, you are definitely not getting any benefits.
One benefit of using URLs as IDs is that the identifier is no longer just an ID; it also describes where you can get the resource's representation.
No one thinks twice about using links for images. You wouldn't make an API that specified images as "image id 2345, which the client can find at /images/{id}"
Why on earth would one trade off a short, static unique identifier for a potentially long, dynamic "link" that essentially binds all data to some crappy API that will be outdated in a few years? Is it really _that_ hard to use keys?
In the microservices world this makes perfect sense. In legacy land it's slightly tricky, as the dependent application (where our foreign key points) may or may not live in the services world. But I get the idea.