I have mixed feelings about Apollo and GraphQL in general.
To go from mature backend frameworks with autogenerated REST APIs to manually writing a lot of boilerplate code just to get a _basic_ GraphQL API running is frustrating. Part of the blame goes to the marketing of GraphQL. For example, the tagline on graphql.org is "Describe your data | Ask for what you want | Get predictable results". That's all well and good, but it leaves out the large majority of the work you have to do. This describes the schema, the query, and the result; but what about all the resolver code you have to write? That's the painful part of the whole process, especially when you get to the point of writing field-level resolvers and integrating child objects (DataLoader, etc.).
And then on the frontend, Apollo is preaching for us to write client-side queries AND client-side resolvers to fetch data that's _already in the cache_. All of this just to read, for example, a single primitive value? This is just too much.
Hi from the Apollo team! This is good feedback. I agree that the GraphQL community (including us) could do a better job at educating developers about best practices for building GraphQL APIs. There's a lot that goes into standing up a GraphQL server that isn't explained on the official docs, so hopefully we can work with the GraphQL Foundation to improve that in the future.
I don't think backend services with mature REST APIs are the problem; rather, it's fetching and aggregating their data on the frontend that's cumbersome. With the complexity of modern apps, developers have to write a considerable amount of data fetching code to build out new features. They have to fetch data from multiple REST APIs, filter down that data, aggregate it, cache it, and account for loading and error states. Writing all of this state management code by hand slows developers down and leads to bugs.
Apollo aims to solve this problem by unifying around one way to query all of your app's local and remote data. This reduces state management code considerably since Apollo takes care of fetching, tracking loading and error states, and caching your data. For simple primitive values that aren't shared among multiple components, I totally agree that Apollo is overkill and would recommend seeing how far you can get with React state instead. For local data that's shared among many components, like device API results or global boolean flags that you would put in a Redux store, Apollo shines because it allows you to specify all of your data requirements declaratively in one query.
While you do have to integrate a new data graph layer into your stack to take advantage of all the state management benefits, you don't have to migrate your REST APIs. Apollo Server has a data source plugin [1] that simplifies hooking a GraphQL server up to existing REST APIs, including a cache that eliminates the need for DataLoader in most cases.
I hope this helped to clear up any misconceptions. Happy to answer any other questions you might have!
You can still use Apollo as a client-side solution for the above. The client-side resolvers Apollo talks about are for client-side data: they're there if you optionally want to use the same patterns on the frontend to manage your local state. But it's totally optional. You can use Apollo + Redux, Apollo + setState, Apollo + MobX, Apollo + useReducer, or whatever.
As others have pointed out, there is a rich community of tools that can auto-generate GraphQL APIs, including resolvers. I'll point out one project that I've worked on, neo4j-graphql[0], for generating a GraphQL API and resolvers on top of the Neo4j graph database.
I'm 100% with you on this. One of my teams just spent 3 weeks ripping out all of that tedious (and error-prone!) resolver code that resulted from trying to use apollo-link-state for local state management and switching it all to redux instead.
It is insane how many silent failure modes the Apollo cache has. Performance and our ability to estimate work (predictability) have gone through the roof since then. Doing state management this way just overcomplicates things - Apollo is a lovely GQL client otherwise.
I may do a write-up on this one day, as a cautionary tale. It boggles the mind that this is being pushed so hard as an ultimate solution to state management - nice in theory but awful in practice.
I'd love to read something about your experience. We are currently using Apollo for one of our newer projects. We've written Redux apps in the past and understand the tradeoffs and limitations that come with that approach. We really love the component-level declarative data-fetching of Apollo client, but were pretty hesitant around the apollo-link-state stuff.
How are you integrating GQL in your Redux code? Do you still use Apollo Query/Mutation components? Do you use Redux in lieu of the Apollo cache? How do you deal with the slight mismatch of denormalized GQL data with a normalized Redux store?
I understand the problem that the Apollo cache is solving, and I feel like I write a lot of low-value code by hand when I use Redux instead--but the visibility and transparency of Redux still feels worth it?
I think leaning too heavily into apollo-link-state is going to draw the same boilerplate complaints as Redux. The amount of client code generation needed, plus all the schema details bleeding into your code (__typename)... it doesn't feel like we're at the "solution" yet.
Just a few of my rambling thoughts. Kudos to the Apollo team for continuing to push the community forward. These are hard problems and we won't solve them without people trying to innovate.
When dealing with greenfield projects, of course you can rely on clear schemas. Less so in practice when working on an older live app that has passed through multiple maintainers and maybe even been migrated between different DBs.
When you use something like Apollo + a graphQL backend it makes client-side code considerably easier to deal with. Think of how much front-end code is simply fetching data, storing it locally, and then creating components to render that data. With Apollo, you use declarative graphQL strings to state what data you need to render a view, and then it handles all the fetching/retry/cache/subscription logic for you. You're able to iterate quicker because you have less CRUD code to deal with.
Other benefits are that you don't need to write multiple endpoints to serve condensed responses, since the client declares exactly what data it needs in the request. So a mobile client that only needs a fraction of the data of a web client can get back exactly what it needs, and there's no need for the backend to create separate REST endpoints or params for managing that. It's part of the spec.
It prevents over / under fetching. Eg, on certain screen sizes you might show more or fewer fields. With GraphQL you can decide the fields you want in the client and explicitly request them. You can do this with REST by passing in a lot of extra query parameters, but it’s pretty awful. So most people end up either over fetching (getting more data from the server than they need and discarding some of it) or underfetching (get a list of IDs, then make a follow up request per ID). GraphQL solves that really nicely - the client requests what it wants, and the server answers it in a single response.
GraphQL is also typed, and has some really great developer tooling as a result. It’s usually really easy to learn about a data source by just exploring in graphiql with ctrl+space autocomplete. Eg, check out github’s API: https://developer.github.com/v4/explorer/
Biggest reason, IMHO: the frontend developer experience of using APIs that speak GraphQL is amazingly better. Fewer APIs to call, better client-side magic for managing app state, better client-side codegen, and an autocomplete-style experience for API calls in your IDE.
I do not use GraphQL, so I don't have a strong opinion here, but the answer is that there is no standard REST/JSON API. There is, however, a GraphQL specification that can be used to introspect the features of an API. This may not be enough to convince you, but I think the introspection of REST/JSON APIs is usually not as good as that of GraphQL or even SOAP.
1) Clear types and structs that are shared between client and server means that both can generate models based on a common set of data specs
2) Clients can have some autonomy in what data they choose to fetch. This is easier said than done but models may be set up to allow joins on datasets for custom model trees and reduced payload size by limiting the fields returned.
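A quick sketch of point 1. The `Product` type here is invented, and in practice a codegen tool would emit the client/server models from the schema rather than the hand-written check below:

```javascript
// The SDL type is the shared contract between client and server.
const typeDefs = `
  type Product {
    id: ID!
    name: String!
    priceCents: Int!
  }
`;

// What a codegen tool would derive from that type, written by hand here:
// a validator either side can use to check payloads against the contract.
function isProduct(value) {
  return (
    typeof value === 'object' && value !== null &&
    typeof value.id === 'string' &&
    typeof value.name === 'string' &&
    Number.isInteger(value.priceCents)
  );
}
```

Because both sides generate from the same schema, a field rename breaks loudly at build time instead of silently at runtime.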