exhaze's comments | Hacker News

Push notifications, and the mental real estate of being “an app”, are the primary business reasons (based both on Statsig experiments I’ve seen across my career and on some intuition about behavioral psychology regarding the mental real estate bit).


Tangential, but thought I'd share since validation and API calls go hand in hand: I'm personally a fan of using `ts-rest` for the entire stack since it's the leanest of the compile-time + runtime zod/JSON-schema validation libraries out there. It lets you plug in whatever HTTP client you want (personally, I use Bun, or Fastify in a Node env). The added overhead is totally worth it (for me, anyway) for shifting basically all type-safety correctness to compile time.
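For a flavor of what that looks like, here's a minimal sketch of a ts-rest contract plus client (endpoint and field names are made up for the example):

```ts
import { initContract, initClient } from '@ts-rest/core'
import { z } from 'zod'

const c = initContract()

// Illustrative contract; routes and fields are invented for this sketch.
export const contract = c.router({
  getPost: {
    method: 'GET',
    path: '/posts/:id',
    responses: {
      200: z.object({ id: z.string(), title: z.string() }),
      404: z.object({ message: z.string() }),
    },
  },
})

// Typed client derived from the same contract.
const client = initClient(contract, {
  baseUrl: 'http://localhost:3000',
  baseHeaders: {},
})

const res = await client.getPost({ params: { id: '123' } })
if (res.status === 200) {
  console.log(res.body.title) // typed from the 200 schema
}
```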

Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.


Just last week I was about to integrate `ts-rest` into a project for the same reasons you mentioned above... before I realized they don't have express v5 support yet: https://github.com/ts-rest/ts-rest/issues/715

I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` set up and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated TypeScript types. I would also watch out for tree-shaking and accidental client-side zod imports if you decide to go down this route.

I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.
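To give a flavor of the type-level plumbing involved, here's a toy sketch of the idea (not my actual implementation; all names are made up):

```ts
import { z } from 'zod'

// A contract object plus a mapped type that derives a fully typed client from it.
const contract = {
  getUser: {
    method: 'GET',
    path: '/users/:id',
    response: z.object({ id: z.string(), name: z.string() }),
  },
} as const

type Contract = typeof contract

type Client = {
  [K in keyof Contract]: (
    pathParams: Record<string, string>
  ) => Promise<z.infer<Contract[K]['response']>>
}

function createClient(baseUrl: string): Client {
  return new Proxy({} as Client, {
    get: (_target, key) => async (pathParams: Record<string, string>) => {
      const route = contract[key as keyof Contract]
      const url = baseUrl + route.path.replace(/:(\w+)/g, (_m, p) => pathParams[p])
      const res = await fetch(url, { method: route.method })
      // Runtime validation keeps the compile-time types honest.
      return route.response.parse(await res.json())
    },
  })
}

// const api = createClient('http://localhost:3000')
// const user = await api.getUser({ id: '123' }) // typed as { id: string; name: string }
```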


ts-rest doesn't see a lot of support these days. Its lack of adoption of modern tanstack query integration patterns finally drove us to look for alternatives.

Luckily, oRPC had progressed enough to be viable now. I can't recommend it enough over ts-rest. It's essentially tRPC but with support for ts-rest-style contracts that enable standard OpenAPI REST endpoints.

- https://orpc.unnoq.com/

- https://github.com/unnoq/orpc
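From my read of the docs, a procedure looks roughly like this (sketch from memory, so treat the exact builder API as an assumption and double-check the docs):

```ts
import { os } from '@orpc/server'
import { z } from 'zod'

// Illustrative procedure; route and field names are made up,
// and the builder API may differ slightly from the current docs.
export const getPost = os
  .input(z.object({ id: z.string() }))
  .output(z.object({ id: z.string(), title: z.string() }))
  .handler(async ({ input }) => ({ id: input.id, title: 'Hello from oRPC' }))

// Procedures compose into a plain router object.
export const router = { post: { get: getPost } }
```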


First time hearing about oRPC, never heard of or used ts-rest and I'm a big fan of tRPC. Is the switch worth the time and energy?


If you're happy with tRPC and don't need proper REST functionality it might not be worth it.

However, if you want to lean in that direction, they recently added tRPC integrations that let you add oRPC alongside an existing tRPC setup, either as a helpful addition or to support a longer-term migration.

- https://orpc.unnoq.com/docs/openapi/integrations/trpc


Do you need an LLM for this? I've made my own in-house fork of a Java library without any LLM help. I needed apache.poi's excel handler to stream, which poi only supports in one direction. Someone had written a poi-compatible library that streamed in the other direction, but it had dependencies incompatible with mine. So I made my own fork with dependencies that worked for me. That got me out of mvn dependency hell.

Of course I'd rather not maintain my own fork of something that always should have been part of poi, but this was better than maintaining an impossible mix of dependencies.


For forking and changing a few things here and there, I could see how there might be less of a need for LLMs, especially if you know what you're doing. But in my case I didn't actually fork `ts-rest`, I built a much smaller custom abstraction from the ground up, and I don't consider myself to be a top-tier dev. In this case it felt like LLMs provided a lot more value, not necessarily because the problem was overly difficult but more so because of the time saved. Had LLMs not existed, I probably would have never considered doing this as the opportunity cost would have felt too high (i.e. DX work vs critical user-facing work). I estimate it would have taken me ~2 weeks or more to finish the task without LLMs, whereas with LLMs it only took a few days.

I do feel we're heading in a direction where building in-house will become more common than defaulting to 3rd party dependencies, strictly because the opportunity costs have decreased so much. I also wonder how code sharing and open source libraries will change in the future. I can see a world where, instead of uploading packages for others to plug into their projects, maintainers upload detailed guides on how to build and customize the library yourself. This approach feels very LLM-friendly to me. I think a great example of this is `lucia-auth`[0], where the maintainer deprecated their library in favour of creating a guide. Their decision didn't have anything to do with LLMs, but I would personally much rather use a guide like this alongside AI (and I have!) than rely on a 3rd party dependency whose future is uncertain.

[0] https://lucia-auth.com/


nvm I'm dumb lol, `ts-rest` does support express v5: https://github.com/ts-rest/ts-rest/pull/786. Don't listen to my misinformation above!!

I would say this oversight was a blessing in disguise though, I really do appreciate minimizing dependencies. If I could go back in time knowing what I know now, I still would've gone down the same path.


I've been impressed with Hono's zod Validator [1] and the type-safe "RPC" clients [2] you can get from it. Most of my usage of Hono has been in Deno projects, but it seems like it has good support on Node and Bun, too.

[1] https://hono.dev/docs/guides/validation#zod-validator-middle...

[2] https://hono.dev/docs/guides/rpc#client
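A minimal sketch of the pattern (route and field names are made up):

```ts
import { Hono } from 'hono'
import { hc } from 'hono/client'
import { z } from 'zod'
import { zValidator } from '@hono/zod-validator'

// Illustrative route: the validator middleware checks the JSON body at runtime.
const app = new Hono().post(
  '/todos',
  zValidator('json', z.object({ title: z.string() })),
  (c) => c.json({ ok: true, title: c.req.valid('json').title })
)

export type AppType = typeof app

// Fully typed client derived from the server's route definitions.
const client = hc<AppType>('http://localhost:8787')
const res = await client.todos.$post({ json: { title: 'write docs' } })
if (res.ok) {
  const data = await res.json() // typed from the handler's return shape
  console.log(data.title)
}
```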


Agreed. Hono has been great for my usage, and very portable.


Type safety for API calls is huge. I haven't used ts-rest but the compile-time validation approach sounds solid. Way better than runtime surprises. How's the experience in practice? Do you find the schema definition overhead worth it or does it feel heavy for simpler endpoints?


I always try to throw schema validation of some kind in API calls for any codebase I really need to be reliable.

For prototypes I'll sometimes reach for tRPC. I don't like the level of magic it adds for a production app, but it is really quick to prototype with and we all just use RPC calls anyway.

For production I'm most comfortable with zod, but there are quite a few good options. I'll have a fetchApi or similar wrapper call that takes in the schema + fetch() params and validates the response.
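Roughly this shape (a simplified sketch; `fetchApi` and the endpoint are just illustrative names):

```ts
import { z } from 'zod'

// Illustrative wrapper: run the fetch, then validate the body against a zod schema.
async function fetchApi<T>(
  schema: z.ZodType<T>,
  input: RequestInfo,
  init?: RequestInit
): Promise<T> {
  const res = await fetch(input, init)
  if (!res.ok) throw new Error(`HTTP ${res.status}`)
  // parse() throws a descriptive error if the payload drifts from the contract.
  return schema.parse(await res.json())
}

// Usage (made-up endpoint and shape)
const User = z.object({ id: z.string(), name: z.string() })
const user = await fetchApi(User, '/api/users/123')
```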


How do you supply the schema on the other side?

I found that keeping the frontend & backend in sync was a challenge, so I wrote a script that reads the schemas from the backend and generates an API file in the frontend.


There are a few ways, but I believe SSOT (single source of truth) is key, as others basically said. Some ways:

1. Shared TypeScript types

2. tRPC/ts-rest style: Automagic client w/ compile+runtime type safety

3. RTK (redux toolkit) query style: codegen'd frontend client

I personally prefer #3 for its explicitness - you can actually review the code it generates for a new/changed endpoint. It does come w/ the downside of more code, plus as the codebase gets larger you start to need a cache so you don't regenerate the entire API on every little change.

Overall, I find the explicit approach to be worth it, because, in my experience, it saves days/weeks of eng hours later on in large production codebases in terms of not chasing down server/client validation quirks.


What is a validation quirk that would happen when using server side Zod schemas that somehow doesn’t happen with a codegened client?


I'll almost always lean on separate packages for any shared logic like that (at least if I can use the same language on both ends).

For JS/TS, I'll have a shared models package that just defines the schemas and types for any requests and responses that both the backend and frontend are concerned with. I can also define migrations there if model migrations are needed for persistence or caching layers.

It takes a bit more effort, but I find it nicer to own the setup myself and know exactly how it works rather than trusting a tool to wire all that up for me, usually in some kind of build step or transpilation.


Write them both in TypeScript and have both the request and response shapes defined as schemas for each API endpoint.

The server validates request bodies and produces responses that match the type signature of the response schema.

The client code has an API where it takes the request body as its input shape. And the client can even validate the server responses to ensure they match the contract.

It's pretty beautiful in practice: you make one change to the API to, say, rename a field, and you immediately get all the points of use flagged as type errors.
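A minimal sketch of what I mean (the file layout and names are illustrative):

```ts
// shared/user-contract.ts - imported by both the backend and the frontend
import { z } from 'zod'

export const RenameUserRequest = z.object({ id: z.string(), newName: z.string() })
export const RenameUserResponse = z.object({ id: z.string(), name: z.string() })

export type RenameUserRequest = z.infer<typeof RenameUserRequest>
export type RenameUserResponse = z.infer<typeof RenameUserResponse>

// Server side (inside whatever route handler you use):
//   const body = RenameUserRequest.parse(await req.json())
//   ...produce a response object that satisfies RenameUserResponse...

// Client side: the request body is typed at compile time, and the response
// can be re-validated at runtime to make sure it still matches the contract:
//   const data = RenameUserResponse.parse(await res.json())
```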


This will break old clients. Having a deployment strategy that takes that into account is important.


Effect provides a pretty good engine for compile-time schema validation that can be composed with various fetching and processing pipelines, with sensible error handling for cases when external data fails to comply with the schema or when network request fails.
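A small sketch of the shape this takes (assuming a recent Effect release where Schema ships in the core package; the endpoint and fields are made up):

```ts
import { Effect, Schema } from "effect"

// Made-up shape for the example.
const User = Schema.Struct({
  id: Schema.Number,
  name: Schema.String,
})

const fetchUser = (id: number) =>
  Effect.tryPromise({
    try: () => fetch(`/api/users/${id}`).then((r) => r.json()),
    catch: (e) => new Error(`request failed: ${String(e)}`),
  }).pipe(
    // Fails with a descriptive ParseError if the payload doesn't match the schema.
    Effect.flatMap(Schema.decodeUnknown(User))
  )

// Effect.runPromise(fetchUser(1)).then(console.log, console.error)
```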


The schema definition is more efficient than writing input validation from scratch anyway, so it's completely win/win unless you want to throw caution to the wind and not do any validation.


Also want to shout out ts-rest. We have a typescript monorepo where the backend and frontend import the api contract from a shared package, making frontend integration both type-safe and dead simple.


I migrated from ts-rest to Effect/HttpApi. It's an incredible ecosystem, and Effect/Schema has overtaken my domain layer. Definitely a learning curve though.


For what it's worth, happy user of ts-rest here. Best solution I landed upon so far.


Back at Uber around 2015, we liked to remind everyone building at the company to never forget about the long tail [of the distribution].

When your daily N gets big enough, even the 0.1% edge cases happen several times a day. When this has real-world implications, even a single instance can matter a lot.


Cool project! I built a similar tool [0] last year, but:

1. Targeting fbt (Meta's internal i18n tool)

2. Used CST (<3 ast-grep) instead of AST - really useful here IMO esp. for any heuristic-based checks.

3. Fun fact: this was made entirely on my phone (~2.5h) while I was walking around Tokyo. Voice prompting + o1-pro. Why? My friend was working on porting fbt to TS and said he was planning to build this. I wanted to one-up him + convince him to start using LLMs =)

One thing you should be aware of is that, for Japanese at least, localization is far from just translating the text. There are lots and lots of Japan-specific cultural nuances you have to take into account for web users, often even down to having an entirely different design for your landing page, because those just convert better when certain things are done that typically aren't done for non-Japan websites.

Notta (multi-lingual meeting transcriptions + reports) is a great example if you compare their Japanese [1] and English [2] landing pages.

Note how drastically different the landing pages are. Furthermore, even linguistically, Japanese remains a challenge for proper context-dependent interpretation. Gemini 2.5 likely performs best for this thanks to Shane Gu [3], who's put tons of work into having it perform well for Japanese (as well as other "tough" languages).

[0] https://github.com/f8n-ai/fbtee-migrate

[1] https://www.notta.ai (Japanese version)

[2] https://www.notta.ai/en (English version)

[3] https://x.com/shaneguML


Thanks! =)

> localization is far from just translating the text

For sure, that's spot on.

What I'm excited about the most is that the linguistic/cultural aspects are close to being solved by LLMs, including Gemini 2.5, which got a huge performance boost vs the previous iteration. So the automated approaches make more sense now, and have a chance of becoming the default, reducing i18n maintenance down to zero - and as a dev I can't help but be excited about that.

P.S. fbt is great by the way, as is the team behind it. It's a shame it's archived on GitHub and isn't actively maintained anymore.


Food for thought, a snippet from a highly specialized project I created two months ago:

https://gist.github.com/eugene-yaroslavtsev/c9ce9ba66a7141c5...

I spent several hours searching online for existing solutions - couldn't find anything (even when exploring the idea of stitching together multiple different tools, each in a different programming language).

This took me ~3-4 hours end-to-end. I haven't seen any other OSS code that is able to handle converting unstructured JSON into normalized, structured JSON with a schema, while also using a statistical-sampling sliding-window method to handle all of these:

- speculative SIMD prediction of the end of the current JSON entry

- distinguishing whether two "similar" looking objects represent the same model or not

- normalizing entities based on how often they're referenced

- ~5-6 GB/s throughput on a MacBook M4 Max 24GB

- arbitrary horizontal scaling (though shared entity/normalization resource contention may eventually become an issue)

I didn't write this code. I didn't even come up with all of these ideas in this implementation. I initially just thought "2NF"/"BNF" probably good, right? Not for multi-TB files.

This was spec'd out by chatting with Sonnet for ~1.5 hours. It was the one that suggested statistical normalization. It suggested using several approaches for determining whether two objects are the same schema (that + normalization were where most of the complexity decided to live).
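To illustrate just the flavor of the "same schema" question (a toy version of the idea, not the gist's actual code):

```ts
// Toy illustration: treat an object's key set as a cheap schema fingerprint
// and use Jaccard similarity to decide "same model or not".
function fingerprint(obj: Record<string, unknown>): Set<string> {
  return new Set(Object.keys(obj))
}

function sameModel(a: Set<string>, b: Set<string>, threshold = 0.8): boolean {
  const intersection = [...a].filter((k) => b.has(k)).length
  const union = new Set([...a, ...b]).size
  return union === 0 || intersection / union >= threshold
}

// sameModel(fingerprint({ id: 1, name: "a" }), fingerprint({ id: 2, name: "b" })) // true
```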

I did this all on my phone. With my voice.

I hope more folks realize this is possible. I strongly encourage you and others to reconsider this assumption!


The snippet you shared is consistent with the kind of output I have also been seeing out of LLMs: it looks correct overall, but contains mistakes and code quality problems, both of which would need human intervention to fix.

For example, why is the root object's entityType being passed to the recursive mergeEntities call, instead of extracting the field type from the propSchema?

Several uses of `as` (as well as repeated `result[key] === null` tests) could be eliminated by assigning `result[key]` to a named variable.

Yes, it's amazing that LLMs have reached the level where they can produce almost-correct, almost-clean code. The question remains of whether making it correct and clean takes longer than writing it by hand.


Cite things from ID-based specs. You’re facing a skill issue. The reason most people don’t see it as such is because an LLM doesn’t just “fail to run” here. If this was code you wrote in a compiled language, would you post and say the language infuriates you because it won’t compile your syntax errors? As this kind of dev style becomes prevalent and output expectations adjust, work performance reviews won’t care that you’re mad. So my advice is:

1. Treat it like regular software dev where you define tasks with ID prefixes for everything, acceptance criteria, and exceptions. Ask the LLM to reference them in the code right before the implementation code.

2. “Debug” by asking the LLM to self-reflect on the decision-making process that caused the issue - this can give you useful heuristics to use later to further reduce the issues you mentioned.

“It” happening is a result of your lack of time investment into systematically addressing this.

_You_ should have learned this by now. Complain less, learn more.


Install MCP plugin and call a search engine of your choice.

If you’re unhappy about something, try to first think of a solution before expressing your discontent.


Wow, so condescending

I don't use the desktop app and I don't want to use the desktop app or jump through a bunch of hoops to support basic functionality without having my data sent to a sketchy company.


There's always the option of not using it.


It would be great to hear from actual folks working for one of his companies rather than folks that presume that they can speak on behalf of those people, and, even worse, go so far as to liken them to "people in abusive relationships".


Can you show me any longitudinal studies that show examples of a causal connection between incrementality of latency and churn? It’s easy to make such a claim and follow up with “go measure it”. That takes work. There are numerous other things a company may choose to measure instead that are stronger predictors of business impact.

There is probably some connection. Anchoring to 10ms is a bit extreme IMO because it’s indirectly implying that latency is incredibly important which isn’t universally true - each product’s metrics that are predictive of success are much more nuanced and may even have something akin to the set of LLM neurons called “polysemantic” - it may be a combination of several metrics expressed via some nontrivial function that are the best predictor.

For SaaS, if we did want to simplify things and pick just one - usage. That’s the strongest churn signal.

Takeaway: don’t just measure. Be deliberate about what you choose to measure. Measuring everything creates noise and can actually be detrimental.


Human factors has a long history of studying this. I'm 30 years out of school and wouldn't know where to find my notes (and thus references), but there are places where users will notice 5ms. There are other places where seconds are not noticed.

The web forced people to get used to very long latency, and so folks no longer comment on 10+ seconds, but the old studies prove they notice them and that shorter waits would drive better "feelings". Back in the old days (of 25 MHz CPUs!) we had numbers for how long your application could take to do various things before users would become dissatisfied. Most of the time, dissatisfied is not something they would blame on the latency even though the lab test proved that was the issue; instead it was a general 'feeling' they would be unable to explain.

There are many, many different factors that UI studies used to measure. Lag in the mouse was a big problem, and not just the pointer movement either: if the user clicks, you have only so long before it must be obvious that the application saw the click (my laptop fails at this when I click on a link), but you didn't have to bring up the response nearly as fast, so long as users could tell it was processing.



Here is a study on performance that I did for JavaScript in the browser: https://github.com/prettydiff/wisdom/blob/master/performance...

TLDR; full state restoration of an OS GUI in the browser in under 80ms from page request. I was eventually able to get that exact scenario down to 67ms. Not only is the state restoration complete, but it covers all interactions and states of the application in a far more durable and complete way than big JavaScript frameworks can provide.

Extreme performance showed me two things:

1. Have good test automation. With a combination of good test automation and types/interfaces on everything you can refactor absolutely massive applications in about 2 hours with almost no risk of breaking anything.

2. Tiny performance improvements mean massive performance gains overall. The difference in behavior is extreme. Imagine pressing a button and what you want is just there before your brain can process screen flicker. This results in a wildly different set of user behaviors than slow software that causes users to wait between interactions.

Then there are downstream consequences to massive performance improvements, the second-order consequences. If your software is extremely fast across the board then your test automation can be extremely fast across the board. Again, there is a wildly different set of expectations around quality when you can run end-to-end testing across 300 scenarios in under 8 seconds as compared to waiting 30 minutes to fully validate software quality. In the latter case nobody runs the tests until they are forced to as some sort of CI step, and even then people will debate if a given change is worth the effort. When testing takes less than 8 seconds everybody and their dog, including the completely non-technical people, runs the tests dozens of times a day.

I wrote my study of performance just a few months before being laid off from JavaScript land. Now, I will never go back for less than half a million in salary per year. I got tired of people repeating the same mistakes over and over. God forbid you know what the answer is to cure world hunger and bring in world peace, because any suggestion to make things better is ALWAYS met with hostility if it challenges a developer's comfort bubble. So, now I do something else where I can make just as much money without all the stupidity.


Curious - how would one frame the cat before/after box reveal within the conceptual framework of causal closure?

