"The native app teams discovered that using GraphQL came with the additional overhead of building queries by concatenating a bunch of strings and then uploading those queries over slow connections. These queries could sometimes grow into the tens of thousands of lines of GraphQL. Also, every mobile device running the same app was sending largely the same queries.
The teams realized that if the GraphQL queries instead were statically known — that is, they were not altered by runtime conditions — then they could be constructed once during development time and saved on the Facebook servers, and replaced in the mobile app with a tiny identifier. With this approach, the app sends the identifier along with some GraphQL variables, and the Facebook server knows which query to run. No more overhead, massively reduced network traffic, and much faster mobile apps."
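To make the persisted-query idea concrete, here's a toy sketch (all names are illustrative, not Facebook's actual implementation): at build time each statically-known query is stored server-side under an identifier, and at runtime the client sends only that identifier plus variables.

```javascript
// Toy sketch of persisted ("static") queries.

// At build time, each statically-known query is stored server-side.
const persistedQueries = new Map();

function persist(queryText) {
  // A real system would use a content hash; a counter keeps the sketch simple.
  const id = `q${persistedQueries.size + 1}`;
  persistedQueries.set(id, queryText);
  return id; // the app ships with only this identifier
}

// Build step: the full query text never ships to clients.
const profileQueryId = persist(
  'query Profile($id: ID!) { user(id: $id) { name avatarUrl } }'
);

// At runtime the client sends just the id plus variables...
const request = { queryId: profileQueryId, variables: { id: '4' } };

// ...and the server looks the query text back up before executing it.
function resolveRequest({ queryId, variables }) {
  const queryText = persistedQueries.get(queryId);
  if (!queryText) throw new Error(`Unknown query id: ${queryId}`);
  return { queryText, variables };
}

const resolved = resolveRequest(request);
```

The tiny request payload is the whole point: a multi-kilobyte query string becomes a short id on the wire.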
Firstly, what query could be 10k lines? Secondly, if you're defining the query just by an id, how is that any different from a fixed REST endpoint or stored procedure?
A 10k line query is pretty easy for me to imagine. In GraphQL, every property you want on every object is a line, since you query by property, not by object. In an interface with a whole bunch of widgets, I can imagine it really adding up.
It _is_ pretty much like a stored procedure. But I don't think that's inherently a bad thing at all. The procedures are effectively version controlled right within your application code, and I'm guessing they're persisted at deploy-time.
But I thought the headline feature of GraphQL was that the client could choose the shape of the response, hence all those properties. Even then, ten thousand properties seems a huge amount, even for a rich single page app.
If you end up putting that on the server anyway, you're back to fixed responses, no better than a REST API.
I guess you mean something like a RESTish "super-endpoint" that aggregates a whole bunch of data? Because operationally, it would be really different from making a thousand idiomatic REST queries over a hypermedia API.
If so, there's still a huge difference. In the REST super-endpoint world, you have to modify your API service to suit the desires of individual clients. In GraphQL, the client controls the shape of the query. The detail that the client is sending that query at deploy-time, instead of request-time, doesn't change that.
This also recognizes that queries tend to be parametric, but not fully dynamic. That's kind of built-in to Relay, since the fragments are statically attached to React components.
Exciting to see! I've been waiting on this since we decided to use Relay for our application about 6 months ago. Relay is amazing but quite an investment (especially mutations).
I'm a bit worried, however, that Relay Modern has focused a bit too much on the internal needs of a massive application like Facebook at the expense of fleshing out some of the rough spots of working with Relay.
Simpler, more explicit mutations is a wonderful improvement, as is more granular control over the cache, but there's no mention of subscriptions or client-side state control (using Redux on top of Relay is... doable, but not as elegant as one might hope for).
That all said, this is an impressive release and congratulations to the team. We're committed to Relay and hope this release grows the community.
Follow-up (didn't see the updated docs). Looks like there is some support for client-side state through "Client Schema Extensions" — excited to play with this.
I do also hope that this iteration brings with it better docs — that's the one area where I've looked over at Apollo longingly. On many occasions I've discovered unknown patterns in Stack Overflow answers that aren't documented anywhere (credit where it's due: the answers are often from members of the Relay team).
And one final tangent: the day when Facebook gives up on flow and adopts Typescript will be a glorious, glorious day.
> the day when Facebook gives up on flow and adopts Typescript will be a glorious, glorious day.
I foresee FB skipping TypeScript and going straight to ReasonML[0], as 25%[1] of the Messenger code base has supposedly already been converted to Reason.
And I think that would be an even more glorious development than FB going with TypeScript. :)
[0]: An easier-to-approach syntax (for programmers coming from mainstream languages like JS/C++/Java/C#) on top of the OCaml language, plus some new tooling. The tools integrate especially well with BuckleScript (by JaneStreet), which gives the OCaml compiler a JavaScript compile target.
BuckleScript is by Bloomberg, specifically https://github.com/bobzhang, not Jane Street. The developer of js_of_ocaml, the other OCaml-to-JS compiler, does work at Jane Street though.
Worth stopping by the discord [0], it's grown quite a bit and so has the (editor/build/etc.) tooling. There's no Apollo/Relay story yet (that's the biggest missing piece for me on the client side), but everything else is pretty decent!
It was a bit tough to get tooling set up and it doesn't work on Windows, but on Mac (and Linux I assume) it's amazing. Better Intellisense support in VSCode than Typescript!
It is very new though, and I haven't gotten a complete grip on JS FFI with Reason.
Hey a quick question — which Flow extension are you using? There's the official one called "Flow Language Support" and another one called vscode-flow-ide.
UPD: sorry, I just realized you were talking about Reason, never mind.
Because I use TypeScript! Joking aside though (although, let's be honest, there is some straight-up selfishness here: I use TS, and if Facebook did too, my life would be easier), I think adding types to JavaScript is a huge win no matter how it's accomplished, and it just so happens that TS seems to be gaining more community support than Flow. Flow's support comes almost exclusively from the React community because of Facebook. If Facebook switched to TS, the whole community could coalesce around one standard. That would have huge benefits.
> I do also hope that this iteration brings with it better docs
The docs are very minimal at this point, with some definite gaps (we had a bit of a scramble to get them written in time for F8), but the point is well taken: Relay docs need to get better. Luckily, the design of Relay Modern is simpler and easier to understand and explain; there is less (no?) magic involved, so we should be able to get the docs to a much better place now.
Now that mutations are so much simpler, it would be great if there were some discussion of what you get from connections. Previously this was only well explained in blog posts by Huey Petersen.
GraphQL Subscriptions are supported: applications must inject a network layer that provides support for connecting to the server and receiving subscription updates. Local state is supported via client-only extensions to the schema, combined with an imperative update API. More docs coming soon :-)
I read architecture.md in the runtime package and it looks perfect for binding websocket updates into components. I was really hoping for the following example:
* create a stand-alone relay store
* query and subscribe to changes, print to console
* imperatively push updated records into store
* see updated data in console
I assume this is possible, but maybe it's not? Or you need a graphql server or container component?
Very excited as well for the release of Relay Modern! I think overall a huge boon for the Relay developer experience.
I'm with you here on the static queries. We've used Relay a couple of times as well for various smaller applications here and there, but honestly felt the "magic" was too strong. I'm looking forward to building my future apps and trying out Relay Modern.
In addition, I want to be able to encourage more applications that build on our GraphQL platform called Scaphold.io (https://scaphold.io) to try out Relay Modern as well since our API is built to the Relay spec. With that in mind, I'm hoping that the spec will be less restrictive in the future, so there's a smaller learning curve for folks trying it out.
As for the concern with focusing on the internal needs of Facebook, I think the experience that they have working with one of the largest distributed systems in the world helps (rather than limits) their vision on what Relay can be. I trust that this is the case, though it would be really great to see some of the tooling for native support (i.e. iOS, Android, etc). I'm sure they've got that somewhere waiting to be released as I can imagine they have plenty of mobile teams at Facebook who aren't using React Native yet.
Congrats! Relay is what really sold our iOS-only native team on moving to React Native. Very excited to migrate to the modern version at some point in the future.
Are there any great libraries for implementing the backend for GraphQL? I think the benefits of using GraphQL on the frontend are pretty obvious; however, last time I checked I had trouble finding good documentation or implementations of how to serve GraphQL requests from the backend's point of view.
Great question! There are great libraries like Create-GraphQL (https://github.com/lucasbento/create-graphql) that can help you scaffold an app on the server side pretty quickly, and it's fairly un-opinionated. That one in particular works with Mongo, but I believe they're rolling out support for other data sources as well soon.
In addition, if you're looking for a high-fidelity way of building apps without having to worry about the server side, Scaphold.io (https://scaphold.io) is a GraphQL backend-as-a-service that can help. I work full-time there, and we help you get from zero to GraphQL in a matter of minutes.
And with this you have two options:
1) If you want to use the service, by all means :)
2) We're built to the open standard / Relay spec, meaning that if you want to create an app to learn about how the API is structured, that can help as well. Here's more of a primer on how our API is built and works anywhere (https://docs.scaphold.io/coredata/schema/).
If you're willing to commit to Postgres, this package makes turning on a GraphQL API as simple as writing a couple lines of middleware: https://github.com/postgraphql/postgraphql
It seems many people loved GraphQL, a powerful and elegant concept, but then had a rude awakening with Relay as an overly complex, unwieldy, buzz kill.
Node, edge, and viewer are terrible mistakes w.r.t. naming and usability. I'm sure it's very intuitive for FB devs to think in these terms, but the words are specific to a problem domain and just don't translate as well to the general case as other choices might have.
That was one of the reasons I ran as fast as possible to Apollo after trying to understand Relay for a bit (that and more or less no tutorials for Relay). The other one was that Relay seems to be one of these "my way or the highway" frameworks, while I prefer libraries which follow my needs and don't force me to do all things the way they want.
2. Are people on the Relay OSS team willing to answer StackOverflow-type questions within a certain time frame?
- I would want to be able to programmatically run relay-compiler, so that every time I update my client-side code, webpack detects the change and then runs relay-compiler.
- And I'd like help figuring out some errors related to mutations I'm encountering. I wouldn't want to post these kinds of things as GitHub issues because they might technically not be bugs.
1. Reactiflux Q&A is a great idea, perhaps someone who organizes that can help set that up.
2. We're a pretty small team and our primary focus is building great software for Facebook and the larger community, so unfortunately our ability to focus on answering questions is relatively limited. Many people on the team occasionally hop into StackOverflow to answer questions, but we definitely can't make any guarantees about answering them all or answering them in a certain amount of time.
You can run relay-compiler as part of your webpack config as a pre-compile step. Also, if you want to run relay-compiler while you're iterating on your code, pass --watch and it will rerun whenever a file is saved.
We've moved from runtime generation of queries to 100% of queries generated at compile time, to move a large chunk of computation out of runtime. Some APIs got more complicated to use than I'd like. For example, pagination currently requires developers to write an explicit query to fetch more entries.
Now with performance in good shape, I want to focus on improving the developer experience again in a way that doesn't sacrifice runtime speed. For example for pagination, we should in most cases be able to synthesize a query at compile time instead of asking for an explicit query.
Relay Modern (as with previous versions) supports an injectable network layer. This allows each application to customize how to communicate with its GraphQL server using whatever transport protocol is most appropriate (for example you might use HTTP for queries/mutations and WebSockets for subscriptions).
Seriously, your own homegrown garbage collection inside the JS runtime? I looked at React the first time it came out, and aside from the insanity of using XML mixed with JavaScript (or some kind of pseudo-JS), it was waaaay too complex. I don't know, but it seems crazy to me to write applications like that; it makes XAML look decently simple.
I still firmly believe riot.js (http://riotjs.com/) is the best "react-like" tool even though it's almost entirely unknown (sadly) and its website/PR/general presentation is a bit janky.
It's essentially a very very tiny, minimally opinionated structural layer that lets you build html components (called "tags") using almost entirely vanilla JS. It inverts the JSX paradigm: where JSX is "html in the middle of your code", in riot the markup is primary and the code is a supplement to it (expressions in the markup via templates/mustaches, additional tag-specific script added outside of the markup if desired, tag-specific scoped css) so there's no JSX insanity.
It's like a much leaner react/vue, and frankly I love it. It's entirely minimal and you can bring in any library you want to use along with it (e.g. jquery for ajax, redux if you want...). It has virtually no cognitive load (just looking at a sample "tag" file for 2 minutes gives you ~80% of what you need to know), you just pick it up and work with it and just occasionally peek at the docs if needed.
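For anyone who hasn't seen one, a Riot "tag" file looks roughly like this (a hedged sketch from memory of Riot's documented syntax; the tag and field names are made up):

```html
<!-- Markup comes first; expressions live in single braces, and a plain-JS
     script below supplements the markup. -->
<todo-list>
  <ul>
    <li each={ item in items }>{ item.title }</li>
  </ul>
  <button onclick={ add }>Add</button>

  <script>
    this.items = opts.items || []
    this.add = () => {
      this.items.push({ title: 'new item' })
      this.update()   // re-render this tag
    }
  </script>
</todo-list>
```

Note how this inverts JSX: the markup is the skeleton, and the script hangs off it rather than the other way around.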
I'm a huge fan of the "minimally opinionated" approach. The fewer idiosyncrasies and custom abstractions in tools, the more productive you are (I'm looking at you, angular!).
Several points of agreement here, and now I'll have to check out Riot (which I remember seeing around here a while ago).
But I think that
> html in the middle of your code
is selling JSX a bit short. Tooling and coupling aside (and I agree those are strikes against it), JSX creates a pair of mutually-recursive, formally-defined languages, which you can switch between freely and frictionlessly. I've never seen this before, not even in lisp, where quoting/unquoting is bad enough at a single level.
This—the ability to compose languages—is extremely powerful in general. Even if you don't like JSX per se, I'd bet there is some set of languages that you'd like to be able to use this way. I'd look at the "controversy" around JSX (and Babel and the React dependency) in this light, as a learning experience.
For example, JSX is really just a variation on E4X, which has been around for much longer. A number of "external" factors have put JSX over the edge (of mass adoption), and these mostly have to do with React's popularity. By the time I capitulated and decided to try React, I found that I already had tooling support for JSX, even though I was pegged to Babel 5.8 for reasons.
I think this is the principle behind OMeta: we should be able to iterate on language features as freely as we do on application features. We ship applications, why shouldn't we ship the languages, too? (Of course OMeta came out of VPRI, and Alan Kay's view is that we could just ship the whole dang system!) Anyway, from that perspective, JSX is no more "insanity" than some of the things we do to work around the rigidity of languages-as-silos.
Thanks for the insightful response. I think you make a compelling case, and it's something I'll be thinking about (I've gone back and forth regarding what to think about JSX).
In the specific use case of React and Riot, I feel like it's more natural to have markup be the main structure and code to complement it, rather than the other way around, but I don't feel that strongly about it and I find React's approach is still not too bad (imho of course).
I tend to get a bit of an "overly academic" or "cargo cultish" vibe from designs I consider overwrought, and I had a little of that sentiment about React back when I worked with it, but really it was not a very strong feeling and I'm convinced React is pretty great.
The one that really gives me that vibe is Angular. No offense to anyone who likes/uses Angular, but I view it as a criminal case of abstraction run amok.
I also found Riot very practical, mostly because I got up and running very quickly. I had a well-structured Riot app done faster than grasping all that ReactJS boilerplate.
And this was possible even though the Riot documentation was sub-optimal in some places, to say the least - the holes in the documentation were compensated for by the fact that most things are simple and straightforward.
I don't know how well this concept scales to huge applications, but I can confirm that writing a non-trivial SPA in Riot was a charm.
Nevertheless, I do miss some things in Riot for overall architecture and information flow. But I learned that ReactJS doesn't have these either. As with React, Riot should be combined with Flux or a lightweight alternative such as Redux.
>Nevertheless, I do miss some things in Riot for overall architecture and information flow. But I learned that ReactJS doesn't have these either. As with React, Riot should be combined with Flux or a lightweight alternative such as Redux.
As long as you remember that you're doing Model-View-Whatever, you can structure a riot (or react/vue for that matter) application fairly lucidly and without too much friction. You want the application state (~Model) to reside in a separate container, and the riot/react/whatever custom tags (~Views) should be fairly lean and strive only to display and trigger operations on the state (i.e. either call functions exposed by the state object, or trigger/listen to state events), while storing as little state as needed locally. If you're lazy like me, you can make the app state container a global - otherwise just pass a pointer to it to every tag (Flux).
Redux helps with the minimization of local state by emphasizing pure functions, and thus encourages the required minimal state to be passed in as function parameters when needed (and in the localest scope needed). That's really the genius of it, imho - functional purity keeps the state tidy.
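To illustrate that purity point, here's a minimal Redux-style reducer (a sketch with made-up action names, not from any real app): all state arrives as a parameter and a fresh state object is returned, so nothing holds hidden local copies.

```javascript
// Minimal Redux-style reducer: state in as a parameter, new state out.
function todosReducer(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO':
      // Return a fresh array instead of mutating the old one.
      return [...state, { text: action.text, done: false }];
    case 'TOGGLE_TODO':
      return state.map((todo, i) =>
        i === action.index ? { ...todo, done: !todo.done } : todo
      );
    default:
      return state;
  }
}

const s0 = [];
const s1 = todosReducer(s0, { type: 'ADD_TODO', text: 'write docs' });
const s2 = todosReducer(s1, { type: 'TOGGLE_TODO', index: 0 });
```

Because every state is a new value, earlier states stay intact, which is what makes time-travel debugging and cheap change detection possible.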
Riot's (optional) solution is an "observable" functionality which can be attached to any object (e.g. the state container itself or a child thereof), and tags and other objects can then listen to events on those objects and update themselves accordingly (pretty classic pattern - but being minimal and "vanilla" is the point of Riot anyway), and so I find myself using this and not going with Redux, for the sake of productivity, even though it's a little bit sloppier than Redux.
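The observable pattern described above is simple enough to sketch from scratch in a few lines (this is a toy stand-in for what `riot.observable()` attaches, not Riot's actual implementation):

```javascript
// Minimal observer pattern: attach on/trigger to any object.
function observable(obj) {
  const listeners = {};
  obj.on = (event, fn) => {
    (listeners[event] = listeners[event] || []).push(fn);
    return obj;
  };
  obj.trigger = (event, ...args) => {
    (listeners[event] || []).forEach((fn) => fn(...args));
    return obj;
  };
  return obj;
}

// The app-state container broadcasts changes; tags listen and re-render.
const appState = observable({ todos: [] });
const rendered = [];
appState.on('todos:changed', (todos) => rendered.push(todos.length));

appState.todos.push({ text: 'ship it' });
appState.trigger('todos:changed', appState.todos);
```

It's the classic publish/subscribe pattern: sloppier than Redux's pure functions, but very little ceremony.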
I found and loved Riot, too. Even used it to build a medium-sized application. At a time when React was in flux (pun intended, I guess) it was the first implementation of the custom component model that made me go "ah ha!" Mostly because how fast I could be useful with it.
The challenge I think for Riot is just that the community never really developed around it. As my friend always says, when you pick a technology, you are picking a community, and the latter may matter more in the end.
I find Vue to be the best blend of the two. Great docs, great community, and as powerful as React. (Don't let the declarative templates fool you: everything compiles down to render functions, and you can even write JSX or hyperscript if you want to.)
Vue doesn't quite have the React ecosystem but in some situations fewer choices may be better, as examples, boilerplates, etc, seem to have more in common rather than less.
I feel you're being disingenuous here. They're just talking about automatic cache eviction, which wasn't possible/trivial with the previous version of Relay.
I think the terminology is fair, given that I believe it uses a reference-counting mechanism to determine whether cached data is currently being used by a mounted component. But it's not quite as crazy as "JavaScript's GC sucks, we're inventing our own".
You could say the same about shadow dom rendering: "Your own homegrown DOM renderer?". But in practice, it has a good reason to exist (performance vs. real DOM) and nearly zero usability issues (it is completely invisible to the developer, "it just works").
If they hadn't considered "garbage collection" (cache eviction, really) then it would have been seriously difficult to manage from outside Relay (and would partly defeat the point of using it).
These are good questions: why does Relay Modern have garbage collection? Is that just a fancy name for cache eviction?
Let's put aside naming for a moment. Relay stores GraphQL data in normalized form, as a map of global identifiers to records. Each record has an identifier, type, and map of fields to values. Relationships between objects are expressed as fields that "link" to other records. These links are expressed as data structures - an object such as `{__ref: <id>}` - as opposed to direct references to the objects.
Using object references would mean that Relay could in theory let the JS runtime do garbage collection: except that the runtime would only see a cyclic graph of objects for which (typically) at least one root object had a persistent reference (the record corresponding to the root of the graph). In other words, it would do its job and retain all records in memory since they would all be (in)directly referenced from the root object, which would have to be referenced by Relay in order to access the data.
Relay, however, has more knowledge than the JS runtime does about how this data can be accessed: it can analyze the currently active queries against the object graph to determine which records are required to fulfill those queries. This is what the garbage collection feature does: remove records that may not be referenced by any active query.
Note that this has some aspects in common with standard garbage collection in programming language runtimes. There is a mapping of identifiers (memory addresses) to values. Each value may contain links (pointers) to other records (blocks of memory). Because the graph has cyclic references, standard cache eviction strategies - LIFO, LRU, etc - don't necessarily apply as they might evict data that is still referenced.
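A toy sketch of that query-aware collection (the `{__ref: <id>}` record shape follows the description above; the traversal logic is illustrative, not Relay's actual implementation):

```javascript
// Normalized store: a map of ids to records, with {__ref} links between them.
const store = {
  'client:root': { id: 'client:root', viewer: { __ref: 'user:1' } },
  'user:1': { id: 'user:1', name: 'Ada', bestFriend: { __ref: 'user:2' } },
  'user:2': { id: 'user:2', name: 'Grace' },
  'user:3': { id: 'user:3', name: 'Orphaned' }, // no active query reaches this
};

function collect(store, rootIds) {
  const reachable = new Set();
  const visit = (id) => {
    if (!id || reachable.has(id) || !store[id]) return;
    reachable.add(id);
    // Follow {__ref: ...} links to other records (cycles are fine:
    // already-visited ids are skipped above).
    for (const value of Object.values(store[id])) {
      if (value && typeof value === 'object' && value.__ref) visit(value.__ref);
    }
  };
  rootIds.forEach(visit); // "mark" from the roots of active queries
  for (const id of Object.keys(store)) {
    if (!reachable.has(id)) delete store[id]; // "sweep" unreferenced records
  }
}

collect(store, ['client:root']);
```

This is why "garbage collection" fits better than "cache eviction": it's a mark-and-sweep over a cyclic graph, keyed off which queries are still active, rather than an LRU-style policy.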
I hope this helps shed some light on this feature. Questions and suggestions (PRs) welcome!
FB employs one of the world's best C++ developers (Andrei Alexandrescu) to write custom string implementations and maintain/extend their homegrown PHP VM. Their development practices would be completely unsustainable for any company that has anything resembling economic accountability.
True; I just disagree with the general tendency to treat everything FB or Google does as the holy grail of best practices (or even good at all) for the average company.
The learning curve for Apollo is a bit shallower for two reasons: (1) they have GraphQL libraries that allow you to implement a Relay-like client on top of an existing GraphQL end point (Relay requires some custom fields); and (2) better documentation.
We went with Relay, however, because it is, in my experience, more robust and more polished. The way that Relay handles fragment composition and defers component rendering, along with data masking, is much nicer and more... holistically considered (again, in my opinion). Using Apollo can feel a bit like patchwork sometimes.
Plus, ultimately, Relay is backed by Facebook and used in Facebook applications, in production, so it's not going anywhere.
We're going to write some more content in the coming days or weeks about the differences, but here are some of my main thoughts based on following along during Relay Modern development:
1. Relay uses a build process to generate code for queries. That allows some better performance optimizations and static typing out of the box. However, it prevents you from doing anything which requires arbitrary knowledge of the queries at runtime. It also means that if, for some reason, you can't use the build tooling, you can't use Relay. That's actually the original reason we started working on Apollo instead of using Relay ourselves. Apollo works with regular GraphQL ASTs at runtime, so you can use and write tools to work with those queries in any way you like. While it's not something all apps need, we've found some situations where this is desirable, especially for developers building companion libraries.
2. Relay doesn't have as many facilities for updating the store and working with mutation results. Apollo Client has a unique way to use GraphQL fragments and queries to read and write to/from the store, the most recent of which is described here: https://dev-blog.apollodata.com/apollo-clients-new-imperativ...
3. The Apollo Store is a plain JavaScript object, which means it can be easily serialized, persisted, hydrated, etc. So for example doing server-side rendering where you also hydrate the state is super simple in Apollo. Part of this is because of Apollo's Redux heritage.
4. Developer tools - we think it's super important to understand exactly what is going on with your data, both inside your app and across the wire. That's why in addition to sticking to simple plain objects we worked on some developer tools for chrome: https://dev-blog.apollodata.com/apollo-client-developer-tool...
5. One thing we're really proud of is how different libraries in the Apollo ecosystem are owned and maintained by different organizations from the community. This might make the experience of using it a bit less polished, but means that you can easily contribute or start your own projects if you need some non-standard features.
However, it's also great to remark on the similarities, which I think show that the community and Facebook are converging on some common good ideas. In fact, a lot of the initial decisions on Apollo are based on talking to the GraphQL team at facebook about their experiences:
1. Fully static queries - both Apollo and Relay encourage you to write your queries in the GraphQL language, and avoid manipulating them in unpredictable ways. This is actually one of the ways Apollo diverged from the original Relay release and it's great that it's coming together. Read more here: https://dev-blog.apollodata.com/5-benefits-of-static-graphql...
2. Colocation of data with the view - both Apollo and Relay enable you to do this. This pattern was one of the best achievements of the original versions of Relay, and we think putting the queries and fragments right next to the UI is a great pattern.
Most importantly, though, it's super encouraging that the GraphQL community is gaining another great tool. The best part about GraphQL is the diversity of approaches to servers, clients, and tooling, and that they can all work together through the specification. Really excited to see this release, and I hope we can all learn from each other and make GraphQL a real pleasure to work with.
This is a pretty good distillation of some differences, but I just wanted to reiterate that I'm more excited about the similarities. Having options for what tools to use along with GraphQL is a great thing, and even better is when the best ideas make their way into many of them.
From what I know of Apollo (which is a lot less than Relay, having worked on the RelayModern compiler), this comparison is pretty good.
(1) is true, for the most part, from the developer point of view. But when you're using the compatibility mode of RelayModern (i.e. sending out a legacy query that contains modern fragments), Relay does runtime query building from the Modern fragments.
(2) Relay allows you to define updaters for mutations, which lets you write client-defined data transformations. This may not be as complex as what's happening in the Apollo client, but I don't have the experience to say. See http://facebook.github.io/relay/docs/mutations.html#updating...
Edit: I'm Matt, and I work on a sibling team to Relay at Facebook, and helped build RelayModern's compiler.
Agree that this is overall accurate, especially wrt to Relay Classic. For Relay Modern, however:
> 2. Relay doesn't have as many facilities for updating the store and working with mutation results.
Apollo and Relay Modern are about equal in this regard. Relay supports arbitrary writes to the store (either via an imperative API, via a fragment + payload, or a mix of the two), plus similar APIs for updating the store after a mutation or subscription update. This includes the ability to update client-only state.
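To sketch what such an imperative updater looks like, here's a toy stand-in for a store proxy (the get/getValue/setValue shape mirrors the style of Relay Modern's updater API, but the classes, ids, and field names here are all made up for illustration):

```javascript
// Toy store proxy: records are looked up by id and mutated via setters.
function makeStore(records) {
  return {
    get(id) {
      const data = records[id];
      if (!data) return null;
      return {
        getValue: (field) => data[field],
        setValue: (value, field) => { data[field] = value; },
      };
    },
  };
}

const records = { 'post:1': { likeCount: 10, viewerLikes: false } };
const store = makeStore(records);

// An updater runs when a mutation payload arrives and can apply
// arbitrary client-defined writes, e.g. registering a "like":
function likeUpdater(store) {
  const post = store.get('post:1');
  post.setValue(post.getValue('likeCount') + 1, 'likeCount');
  post.setValue(true, 'viewerLikes');
}

likeUpdater(store);
```

The point is that the client owns the write logic: the same updater shape can be reused for optimistic updates, mutation responses, and subscription payloads.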
> 3. The Apollo Store is a plain JavaScript object,
The Relay Modern store is also a plain object. There aren't currently any convenience functions for serializing/deserializing it, but this is something we're open to adding.
Overall I'd say the main difference is that Apollo has focused very much on easier onboarding and covering a wide variety of use cases (many view-layer integrations, developer tooling, etc.), whereas Relay is more focused on performance and scalability (hence features such as ahead-of-time optimization, garbage collection, etc.).
Either are appropriate depending on your specific needs. I'm excited to see so much iteration in this space!
> "Relay Modern is designed from the start to support garbage collection — that is, cache eviction — in which GraphQL data that is no longer used by any views can be removed from the cache"
Could we go ahead and implement proper caching the way HTTP does: an expiration date per field/model, then eviction based on expiration dates? With an optional max cache size using LIFO, least-recently-used, or the current model.
That way we don't need to refetch data if the API is set up to cache. It's sort of frustrating that staleness is just de facto ignored by UIs right now.
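For concreteness, a sketch of the per-record-expiration idea (purely illustrative; it deliberately ignores the hard parts like field-level TTLs and records still referenced by mounted views):

```javascript
// TTL cache sketch: each record carries an expiry timestamp and stale
// records are evicted on read. The clock is injectable so the sketch is
// testable without real waiting.
function makeTtlCache(now = Date.now) {
  const entries = new Map();
  return {
    set(key, value, ttlMs) {
      entries.set(key, { value, expiresAt: now() + ttlMs });
    },
    get(key) {
      const entry = entries.get(key);
      if (!entry) return undefined;
      if (now() >= entry.expiresAt) {
        entries.delete(key); // stale: evict instead of serving
        return undefined;
      }
      return entry.value;
    },
  };
}

let t = 0;
const cache = makeTtlCache(() => t);
cache.set('user:1', { name: 'Ada' }, 100);
const fresh = cache.get('user:1'); // t = 0: still within the TTL
t = 150;
const stale = cache.get('user:1'); // past expiry: evicted
```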
Good questions! It isn't quite as straightforward as you might expect - the interconnected nature of graph-like data means that strategies that work for HTTP don't necessarily apply.
For example, storing per-field expiration times could incur additional memory overhead (you might end up with an object per field instead of per record). Storing per-record expiration times is tricky since the same record can have different fields fetched at different times. And a simple max cache size + LIFO/LRU/etc eviction strategy means that the cache might evict a record that is still referenced by a view.
This type of TTL/expiration is something that we're continuing to explore.
But wouldn't "garbage collection" solve the problem many React web apps have of consuming too much RAM? (e.g. when using Redux, you never expire some keys, so over the application's lifecycle you never stop accumulating data in your stores, which makes RAM consumption go up)
In general yes, garbage collection in Relay Modern is meant to help constrain growth of memory usage during the course of a session. This is where the declarative nature of GraphQL is helpful; unlike Redux which is accessed via arbitrary selector functions, Relay knows (via queries, fragments, etc) which parts of the cache may still be referenced and can evict records that aren't.
My biggest gripe with the original Relay was that it didn't work with any GraphQL schema, but only those that provided a bunch of features like pagination and retrieving any object by id. This no-batteries-included, high initial bar for using Relay really turned me off of the product. I see that Relay Modern claims to be 'simpler', but I don't see anything about relaxing the constraints on my GraphQL schema.
You should be able to use Relay Modern with any valid GraphQL Schema. If you can't, it's a bug that you should create an issue for! To use your own schema, you just need to specify what it is during the relay-compile step: http://facebook.github.io/relay/docs/relay-compiler.html#set...
Yes, mostly: if you don't meet the first requirement (having some sort of root field that allows you to refetch an object), you'll have trouble using, for example, Relay Modern's RefetchContainer. If you don't meet the second requirement (a description of how to page through connections), you'll need to define how to paginate through connections as part of your component's logic (if that matters to you). And the third requirement: you no longer need to use Relay's imperative mutation API. For a specific mutation, you can simply describe all of the fields (i.e. via a fragment) that you want refetched every time you send that mutation (fat queries are gone in Relay Modern).
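For context, the first requirement amounts to something like a `node` root field. A minimal graphql-js-style resolver sketch (names and data here are illustrative, not a prescribed schema):

```javascript
// Toy "database" keyed by globally unique ids.
const db = {
  'user:1': {__typename: 'User', id: 'user:1', name: 'Alice'},
  'post:9': {__typename: 'Post', id: 'post:9', title: 'Hello'},
};

const rootResolvers = {
  // With a root field like this, a RefetchContainer can re-request any
  // single record by its global id instead of re-running the full query.
  node: (_parent, {id}) => db[id] || null,
};
```

Without such a field, Relay has no generic way to refresh one object in isolation, which is why the RefetchContainer depends on it.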
I'd be interested to read an analysis of how this compares to the backends-for-frontends pattern.
Also, it seems like Relay Modern reintroduces API versioning, but automates it behind a compiler step. Is that a fair characterization? Does the server have to implement some kind of tracking and pruning for unused fragments, or is it expected that the fragments will accumulate at a non-threatening rate and never need pruning?
There's just as much versioning for Relay Modern (from the server's point of view) as Relay Classic: so long as you have a client that sends a query string with old fields, your server needs to maintain support for returning those fields. So long as the field is nullable, you always can choose to "turn it off" server-side by always returning null.
In general, over time, as applications get more complex, the main queries you use will have more fragments. But we usually convert an entire query into a single ID (representing the entire query text, fragments and all). With each "version" of the query, fragments could be added, removed, or completely modified. Usually, your server shouldn't care what the specific fragment version is, but should be able to translate the fields the fragment asks for into a function that builds a response of that shape.
But one of the advantages of GraphQL in general (Relay included) is that you don't really need to version your API: so long as a field continues to exist in your schema, and continues to return the same type of object (even if that object adds new fields or interfaces), your old query/fragment will always receive the same shape. That may mean, over time, you "turn off" more and more fields by having them return as nulls to every client that asks for them, but you shouldn't think of it in terms of versions.
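The "turn off a nullable field" move looks roughly like this in resolver terms (an illustrative sketch, not a specific schema):

```javascript
// Resolvers for a hypothetical User type.
const userResolvers = {
  name: user => user.name,
  // Formerly returned a computed value; now retired. Because the field
  // is nullable in the schema, returning null is a valid response for
  // every client version that ever queried it -- no version bump needed.
  legacyScore: () => null,
};
```

Old queries asking for `legacyScore` still get a response of the expected shape; the field is just always null.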
Oh? Seems like you still want to support old client versions, which means retaining the fragments that they reference. The question then is what implications that has on a server -- do old fragments need to be eventually garbage collected?
This technique of persisting the queries (and fragments) to the server at build time predates Relay - we've been using it on our iOS and Android apps since 2013.
At build time these clients submit their query strings to the server and get a small identifier in return which they can use at runtime to reference the whole query. This definitely means that old queries need to be kept around as long as the clients that use them are still active. Since iOS and Android apps seem to last forever, we're still getting traffic today from just about every version of our native apps we've ever shipped, even from 2012 and 2013. Because of this we decided not to bother with garbage collecting persisted queries; in terms of all the other data Facebook retains, persisted GraphQL queries are a grain of sand in a desert. However, with a bit more work, you could easily keep a hit counter per persisted query and go remove any persisted queries which had no recent hits.
That is fair, I hadn't thought of that. I would argue the difference is perhaps the ease of tracking the different versions and their usage and the ease of collecting versions. Plus the client can fall back to submitting the full query.
I've been somewhat out of the Clojurescript loop recently, but this looks conceptually a lot like the om.next model ("colocated queries").
Makes me curious how many people are using om.next in anger. It's always seemed like a good idea (and now with some extra endorsement for the core principle), but judging by GitHub activity (which I realise isn't a perfect measure), the project seems to have rather lost momentum.
I'm super excited about this release! Great work Jan, Lee, Joe and everybody else who was working on this! :)
At Graphcool (https://www.graph.cool/) we've been using Relay since the very beginning. It has enabled us to build frontend products at an incredible speed while staying confident about the data layer. For instance, our entire console is written using Relay at its core. (It's open-source btw: https://github.com/graphcool/console)
PS: We're also the authors of Learn Relay (https://www.learnrelay.org/) which we'll update to Modern Relay soon!
I work at Facebook and had the chance to use Relay Modern for an upcoming product. Was really happy with the performance, and the colocation of data and view is just amazing. Love it :) BTW I realized where Relay's logo came from by chance while fiddling around with PowerPoint's new Morph transitions... https://gfycat.com/EnviousBothFinch
Facebook has said that they never remove a property from their databases, which seemed nuts to me, but with Relay and GraphQL it makes a lot of sense. Why have the need for versioning when the client can request whatever it needs?
To clarify a bit: we've removed fields from and changed our database schema repeatedly over the years. In fact we've migrated between entire database technologies multiple times over.
What we haven't done is remove fields from our GraphQL API when those fields are still in use by shipped iOS, Android, or web apps.
GraphQL gives us a layer of abstraction to create consistency from the point of view of client apps while allowing iteration of backend services.
Oh hey, thanks for the reply. I think I misunderstood before, but what you are saying is what I meant to say: GraphQL is fantastic in that it gives a layer of abstraction away from versioned APIs with different endpoints.
You're not off base at all! It's very similar to that. I think it's important to extract what was good and bad about this old website pattern.
The good is that within a single PHP file you could see both the logic for requesting data (SQL) AND the logic for rendering that data. This colocation was part of what made the early web take off, it was a great developer experience.
The bad is that these interspersed SQL statements were immediately invoked and blocking, which led to utterly awful performance.
One of the core ideas of Relay is that we wanted to bring back that developer experience of colocation while not only retaining good network performance, but actually creating opportunities for network optimization. When you see GraphQL in your Relay code, that is not a blocking, immediately invoked network request. It's a description of a part of the data needed. Relay aggregates these GraphQL fragments together to submit in a few network requests in a non-blocking way to achieve the network performance we expect from modern mobile applications.
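A toy sketch of that aggregation step (purely illustrative; Relay's real compiler works on parsed ASTs, not strings): each component declares the fragment it needs, and a build step merges them into a single query instead of firing one request per component.

```javascript
// Hypothetical colocated fragments, one per component.
const avatarFragment = 'fragment Avatar_user on User { photoUrl }';
const headerFragment = 'fragment Header_user on User { name }';

// Combine fragment spreads into one query, sending a single request
// for data that would otherwise be N blocking fetches.
function buildQuery(fragments) {
  const spreads = fragments
    .map(f => '...' + f.split(' ')[1]) // pull the fragment name
    .join(' ');
  return `query Page { viewer { ${spreads} } } ${fragments.join(' ')}`;
}
```

The contrast with the old PHP pattern is that declaring the fragment costs nothing at render time; the network work happens once, up front, for the whole tree.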
One of the things that I've noticed about the javascript community is that they're re-inventing stuff from previous technologies - sometimes badly - but almost always with a willful ignorance of what has gone before.
While this doesn't look like a bad idea, what they've done is re-invent stored procedures. Back in the original object-oriented wars this turned out to be a mixed blessing. You could get quite sharp performance, but reasoning about the application logic became harder. It might be nice if this kind of issue was at least acknowledged in the article.
I don't think this is what they have done. They've invented a client/server data model, schema definition and associated query language. Definitely wheel-reinventing, but imho a useful valid wheel which stored procedures were not. If it were like stored procedures the relay code would be running on the server. It runs in the client.
"The native app teams discovered that using GraphQL came with the additional overhead of building queries by concatenating a bunch of strings and then uploading those queries over slow connections. These queries could sometimes grow into the tens of thousands of lines of GraphQL. Also, every mobile device running the same app was sending largely the same queries.
The teams realized that if the GraphQL queries instead were statically known — that is, they were not altered by runtime conditions — then they could be constructed once during development time and saved on the Facebook servers, and replaced in the mobile app with a tiny identifier. With this approach, the app sends the identifier along with some GraphQL variables, and the Facebook server knows which query to run. No more overhead, massively reduced network traffic, and much faster mobile apps."
Firstly, what query could be 10k lines? Secondly, if you're defining the query just by an id, how is that then any different to a fixed REST endpoint or stored procedure?