Shameless plug:
I'm one of the authors of Phero [1]. Its goal is similar to tRPC's: full-stack type safety between your server and client(s).
One difference is syntax: Phero leverages the TS compiler API to generate parsers for your backend functions. It parses the input and output of your API functions automatically, based on the types you define for them. It generates a declaration file for your API and an RPC-style client SDK for your frontend. Another difference is that it aims to be more batteries-included.
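To give a rough idea of what that looks like (simplified sketch; exact imports and setup may differ a bit from the docs):

    // Simplified sketch -- see the docs for the exact setup.
    import { createService } from "@phero/server"

    interface Article {
      id: string
      title: string
    }

    // A plain TS function: Phero reads these type annotations with the
    // TS compiler API and generates runtime parsers for input and output.
    async function getArticle(id: string): Promise<Article> {
      return { id, title: "Hello" }
    }

    export const articleService = createService({ getArticle })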
> Stop assuming data is of the type you’re expecting. You know it is, period.
> Know when you break compatibility with the frontend, before even running it: TypeScript has your back.
Do you? Your front end and back end aren't deployed as a unit, regardless of whether they share the same source for their interface contract. At least not always, and not in the single-page-application use cases you're targeting.
How do you handle versioning and protocol compatibility? Across all these frameworks, it feels like a footgun your users will discover on their own.
We use Phero extensively in mobile app projects, so we know the pain of maintaining API versions and backward-compatibility hell.
Phero generates a declaration file with all the functions you expose, and bundles the dependent domain models (interfaces, type aliases, enums, etc.) with it. Currently there's only one version. Our plan is to let users pin versions of this declaration file (we call it the manifest file).
Then we'll build CLI commands to compare them and tell you whether they're actually compatible. This way you know if it's safe to deploy a new version of your API without breaking existing clients. These are all future plans of course, but within the scope of the project.
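To illustrate the kind of change such a check would have to flag (hypothetical example, the names are mine, not Phero's):

    // Hypothetical manifest-diff example.
    // v1 of a model in the manifest:
    interface ArticleV1 {
      id: string
      title: string
    }

    // v2: `title` removed -- breaking, because old clients still read it.
    interface ArticleV2 {
      id: string
      headline: string
    }

    // A compatible alternative: widen with an optional field instead.
    interface ArticleV2Safe {
      id: string
      title: string
      headline?: string
    }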
It seems like you could use Microsoft’s API Extractor to validate schema changes between HEAD’s phero-schema.d.ts and $(git merge-base HEAD main). I haven’t run Phero yet but it would be cool to leverage existing investment here.
One big difference in philosophy here is that tRPC is not designed to be used for cases where you have third party consumers. It's built for situations where you control both ends. (That said, you can use trpc-openapi for the endpoints that are public)
On versioning: it's 2023 & in most cases, you can solve versioning in other ways than maintaining old API versions indefinitely. For RN there's OTA, for web you just need to reload the page or "downgrade" your SPA-links to a classic link to get a new bundle (did an example here https://twitter.com/alexdotjs/status/1627739926782480409)
Also, we'll release tooling to help keep track of changes in cases where you can't update the clients as easily.
GraphQL is amazing but it isn't a silver bullet either, it has its own complexity that you have to accept as well.
> On versioning: it's 2023 & in most cases, you can solve versioning in other ways than maintaining old API versions indefinitely. For RN there's OTA, for web you just need to reload the page or "downgrade" your SPA-links to a classic link to get a new bundle (did an example here https://twitter.com/alexdotjs/status/1627739926782480409)
I would suggest you think more deeply about this problem.
In all the examples you listed there is a population of clients on the new version and a population on the old version, even in the happy case. Distributed systems are not instant, and changes take time to propagate. Publishing a new version of your website / RN bundle to a CDN does not mean all edge locations are serving it. Long-lived single page applications (Gmail, for example) are not typically refreshed often. For small applications this may not be an issue -- but at a scale of millions of users, the population that now receives a 500 due to a bad request is significant, even if clients hit both versions for only seconds or minutes while the backend supports only the latest one.
> Also, we'll release tooling to help keep track of changes in cases where you can't update the clients as easily.
The ease of updating the clients doesn't solve this issue, and keeping it out of your framework because it's messy or introduces constraints won't make it go away.
Thanks for the input! We have thought about it a lot.
The biggest challenge with tRPC right now is that the API is transient, so it's not obvious when you might be breaking clients in flight, and you often can't guarantee perfect sync of deployments, as you're rightly pointing out.
Once we have some more tooling around it, you'll be able to get the benefits of a traditional API where you consciously choose when to break the spec, but with the productivity benefits of not having to manually define it. I think that will scale pretty well.
> One big difference in philosophy here is that tRPC is not designed to be used for cases where you have third party consumers. It's built for situations where you control both ends. (That said, you can use trpc-openapi for the endpoints that are public)
I'm a happy tRPC user, and this is my use case. Our web application has no client other than our web frontend. I can't see a situation when it would (and I would bet this is true for most web applications), so I am very happy with how tRPC has worked out.
I did recently create a more limited data API, and for that I used express-zod-api [0] which I like very much.
As far as I understand, Phero needs a build step, whereas tRPC does not.
This means you won't be getting the same real-time experience/feedback you get with tRPC, because you will still be waiting for Phero's compiler to complete the build
I was expecting something like that to emerge, and initially wanted to go in that direction with Garph (see my comment below), but we ultimately decided that codegen doesn't give you the same amazing feeling, so we ended up building a pure TypeScript library.
Yes, Phero has a build step. Phero comes with a CLI that watches code changes and builds your TS alongside a client SDK.
In our experience the latency between changing something in your API and seeing compile errors arise in the client is just a few seconds. And this only matters when your API contract changes, which of course isn't all the time. Not a biggie in our eyes. To run at all, you'd need to compile your TS anyway. :)
In our opinion this is well worth the "magic" Phero adds: it works with plain TS. No `t.string`-like APIs to build your APIs/models. Matter of taste, I guess :)
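To illustrate the difference (the builder syntax below is a generic zod-like sketch, not any specific library):

    // Builder-style schema definition (generic zod-like sketch):
    //   const Article = t.object({ id: t.string(), title: t.string() })

    // Plain TS, which Phero reads directly:
    interface Article {
      id: string
      title: string
    }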
Not to take away from your explanation, but this is actually the key point that a lot of developers have been fighting with over the last couple of years.
We've "had" end-to-end type safety with codegen tools (OpenAPI/Swagger, GraphQL, etc.) for a while.
However, the big issue with a lot of those tools is that you end up having to manage this compile step.
The real magic of tRPC is exactly this point about there being no compiler: the types are derived from the very same TypeScript files, which is what gives it that immediacy, the feeling of it being instant, with no compile step to deal with.
I've always wondered whether we'd get the same benefits and niceties from a really streamlined compiling approach - perhaps something like Phero. But it's just a little harder of a sell compared to the built-in type safety you get from tRPC.
There is a difference with approaches like openapi/swagger/graphql though.
With Phero you define your models with plain TS. Our code-gen will "just" copy the exact same models to your client SDK. The DX is night and day.
We believe domain models are the most important part of your codebase. We don't like to define them in an intermediate language like GraphQL/Swagger/you-name-it.
Because we support plain TS as "input", if you will, you can use all the features and greatness TS comes with: generics, conditional types, mapped types, even template literal types!
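For instance, a model like this is just plain TS, with no schema DSL involved (illustrative sketch):

    // Illustrative sketch: plain-TS features schema DSLs usually can't express.
    type Locale = "en" | "nl"

    // Template literal type: "title_en" | "title_nl" | "body_en" | "body_nl"
    type LocalizedKey = `${"title" | "body"}_${Locale}`

    // Mapped type over those keys, plus an id
    type LocalizedArticle = { [K in LocalizedKey]: string } & { id: string }

    // A generic wrapper you can reuse across endpoints
    interface Paged<T> {
      items: T[]
      nextCursor?: string
    }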
I like this approach of using the Typescript compiler API to produce the runtime type validator from the TS type, instead of the other way around! I started working on a toolkit to compile Typescript types to arbitrary downstream languages like Protobuf and Python/MyPy, but the last 20% of polish eludes me. My work is here: https://www.npmjs.com/package/@jitl/ts-simple-type
I'm excited about the direction all these frameworks are going. It's the BFF pattern on steroids. No more hand-rolling API layers and typed contracts.
NextJS and RemixJS have interesting approaches to gluing clients and server together. Microsoft Blazor Server is an extreme version of where things are headed. Just call C# functions and the UI is surgically updated as needed over SignalR. I haven't found any other stacks take it that far yet.
Note that tRPC also has an official NextJS starter kit that also bundles Prisma and Playwright, but not Tailwind or an auth plugin. I've used it, it worked well.
Thanks for sharing this. I haven't tried tRPC yet, but I do use Prisma. I am slightly curious/puzzled why I would need both in the same project. Doesn't Prisma get you most of the way there? Or, is this for creating a CRUD REST API that talks to the DB via Prisma, and to the Client via REST/tRPC? Seems heavy to me.
They serve different purposes. Prisma provides type safety between server and database; tRPC provides type safety between client and server. You're most likely going to have some logic between those two layers that manipulates the data, making those interfaces similar but not quite the same.
tRPC on the back end essentially shares the typed return value of an API route with its React Query wrapper on the client. It's useful with Prisma because you can leverage the type inference coming from the Prisma query, meaning your types will always be up to date with your DB.
E.g. a tRPC query that returns prisma.posts.findMany() will share the Post type with your client as the return type of the API call when you call the tRPC route, without you having to write any type definitions.
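A minimal sketch of that, assuming tRPC v10 and a Prisma schema with a `posts` model (names are illustrative):

    // Sketch, assuming tRPC v10 and a Prisma schema with a `posts` model.
    import { initTRPC } from "@trpc/server"
    import { PrismaClient } from "@prisma/client"

    const t = initTRPC.create()
    const prisma = new PrismaClient()

    export const appRouter = t.router({
      // The return type is inferred from the Prisma query itself, so the
      // client sees the Prisma-generated model type, no hand-written types.
      postList: t.procedure.query(() => prisma.posts.findMany()),
    })

    // The client only ever imports this *type*.
    export type AppRouter = typeof appRouter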
Very cool. And if you're interested in this sort of thing, I would recommend taking a look at remix.run as well. They go a bit farther and pull these concepts into a framework where it's straightforward to describe the client/server relationship in this style
I used tRPC with Cloudflare Workers and a Vue/Pinia front end and really loved the experience. tRPC was a bit fussy to set up, but once it was, it was a real pleasure to work with. Being able to test the calls with unit tests was also fantastic.
Could you elaborate more on how you approached testing the applications that use tRPC? I was looking for examples of big projects that use tRPC, but they don't have tests [0]. I'm wondering what you mean by testing calls with unit tests. Have you been testing individual endpoints, e.g. `trpc.posts.getAll`, or have you been testing components that use tRPC endpoints? I would appreciate some examples.
If you want tRPC-like developer experience but prefer GraphQL, I can recommend Garph
You get all the benefits of tRPC and all the benefits of GraphQL in one package. Garph can also be used to replace graphql-codegen in your stack with statically inferred types
I use GraphQL for disparate clients talking to the server, it's agnostic to the backend and frontend languages. If Garph is already in TS, why would I use it instead of tRPC? I guess if I have a non-TS frontend such as a Flutter app?
Makes sense, so if I already have a tRPC application, what's the transition like to Garph? Or do I need to have it already in Garph when first starting out (ie hard to transition)?
Depends on your application size. The hardest part to transition will probably be the client
We will provide a migration path in our documentation and build some utilities to automate the work required. You can join our Discord to get an update when we have something to share (linked in the repo)
Hmmm... interesting. Was this intended for public APIs? I think the whole point of tRPC is to reduce the overhead down to zero, since this is only for internal APIs.
tRPC is a great choice if you want to call server functions from the client without overhead in a type-safe way. However, with React Server Components, I doubt you will need tRPC for this use-case anymore
Our concept was: take the amazing developer experience and end-to-end type-safety of tRPC but give you an actual, spec-compatible GraphQL API, that you can expose and others can use and that also works with existing GraphQL tools
There are a couple of differences: Garph is easier to use than Pothos, Garph fully supports circular references (no workarounds!) and Garph solves type-safety for clients without the need for code-gen. We also offer a compatible client for React/Next.js based on GQty (formerly GQless) for the best experience
Pothos is great for backend-only applications and if you need advanced features such as Relay right now. Garph is great if you want the best possible developer experience and type-safety on both ends of your stack
That looks like a very cool idea. Basically one code base, all browser client code, except for the (apparently directly linked) telefunc files that are caught by some build step and turned into something that runs on the server.
What does this get me that gRPC + Protobuf to generate a Typescript client wouldn't?
Edit: Ah. I see, you can use Typescript to define the IDL vs. learning Protobuf, Smithy, ...
Not sure I would take the trade-off. I like having the option of using something other than Typescript for my back-end. The dev experience isn't compelling enough for me. Contract changes are not an area I iterate on so often that a small delay in re-generating the client from an IDL causes me to grab for this dependency.
I'm currently working on a project with Typescript and gRPC. It's mostly great, but I'm somewhat disappointed with the generated code.
The frontend and backend get their code generated differently. The backend is all interfaces (which, through some NestJS magic, get translated into protobufs) and the frontend is all objects. So on the frontend, you're expected to write:
    const message = new MyRequest().setName('Bob').setFoo('bar');
On the backend, you'd just write:
    const message = { name: 'Bob', foo: 'bar' };
It's not a big deal, but it's weird. The backend ends up being more/overly strict about all values being set when you might just be fine with the default zero/empty values.
Also, using a `oneof` field is correctly typed on the frontend as
Does this get compiled down to `contact_info: string | string` on the frontend? How do you tell which variant was used without encoding the discriminant?
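Without a tag I don't see how you could; I'd expect the generated types to need something like a discriminated union to be usable (illustrative, not actual generated output):

    // Illustrative: a discriminated union makes the active variant checkable.
    type ContactInfo =
      | { kind: "email"; email: string }
      | { kind: "phone"; phone: string }

    function render(c: ContactInfo): string {
      switch (c.kind) {
        case "email": return `mailto:${c.email}`
        case "phone": return `tel:${c.phone}`
      }
    }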
I tried gRPC with TypeScript once, and we needed transformers to turn gRPC messages into something usable - there are no optional fields in proto3, for example. So we ended up with an extra layer on both reading and writing, which was hard to test thoroughly.
My learnings were that gRPC is for compressing data, not type safety. The type safety is kind of just a necessity to get the compression to work, but it is not very expressive.
The big item is having no build step. With gRPC, no matter how fast your codegen is, it still needs to execute. With tRPC, the return types from the routers are all automatically derived from whatever the mutations or queries return.
It depends how you're running the application, but during _development_ in your IDE no build is required between the time you define your controller's return value and when you switch into your React code.
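Concretely, the client only imports a type, so there's nothing to (re)generate (sketch, assuming a v10-style server that exports its router type as `AppRouter`):

    // Sketch, assuming the server exports its router type as `AppRouter`.
    import { createTRPCProxyClient, httpBatchLink } from "@trpc/client"
    // Type-only import: erased at runtime, so no codegen step has to run.
    import type { AppRouter } from "../server/router"

    const client = createTRPCProxyClient<AppRouter>({
      links: [httpBatchLink({ url: "http://localhost:3000/trpc" })],
    })

    // `posts` picks up the server's return type the moment you save the file.
    const posts = await client.postList.query()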
Yes, this is my understanding. If the browser client and the server are both written in TypeScript, tRPC has a very attractive story for bridging the two. No protobufs, though; so whatever performance wins gRPC brings are absent from the tRPC story.
If you'd like something like this for Rust, there is rspc [0], where you can write your Rust backend code and React bindings are generated automatically.
[1] https://github.com/phero-hq/phero
Comparison: https://phero.dev/docs/comparisons/tRPC