We Need To Talk About Vercel (maxcountryman.com)
277 points by llambda on April 10, 2023 | hide | past | favorite | 154 comments


Vercel scares the hell out of me - their previous handling of that runaway 6k bill was atrocious. They basically blamed the user and only handled it after it gained traction on social media.

Beyond that, they do a lot of things with the web while having very little moat as a company. By that I mean they're involved in a lot of front-end libraries, articles, projects, etc., a lot of which they incorporate into their platform. That's why everyone praises them for their ease of use, but anybody should know there's a reckoning after the honeymoon period, when all of this has to be maintained. And that gets to be very expensive.

Which would be fine if they were Google with a really wide moat but they're not. They're a thin layer above the big three cloud providers. It's too easy for a dev to just pack up and move to AWS where they're not paying for the overhead once the project becomes serious. It also doesn't help that they're not seen as a serious solution because of things like their poor customer service.


Their moat may be thin from a technical perspective. That said, if you deploy Next.js on Vercel with all the nice things that come out of the box, it's hard to beat. Neither GCP, AWS, nor Azure makes it as seamless out of the box, or keeps it that way as things scale.

The big cloud providers' "solutions" around this all fall short, introducing mindless complexity to upsell you more services, often with half-baked integration.


I can use their NextJS Docker container anywhere though. Currently I do it on a Hetzner VPS for a few bucks a month and handle all the scale I need, no CDN or edge runtime required.


It’s worth using a CDN to reduce server load and increase your website’s speed. Cloudflare is free for most use cases and ridiculously easy to set up.


But they insist on putting your NS on Cloudflare as well, and AFAIR they don't accept domains deeper than the 2nd level.

Coming back to the load: a simple 3-vCPU Linux-based VPS on Hetzner can handle serving 4 Gbit of traffic. Offloading your server fleet is probably the last reason to reach for a CDN. Geo presence - agree there.


It's pretty easy to beat using GitHub Actions, a VPS host such as DigitalOcean, and a CDN such as Cloudflare or Bunny.net.

You almost never need a serverless runtime when you're starting out. If you're building a SaaS, then your need to scale will be proportional to your revenue and you can easily afford to vertically and horizontally scale within your VPS provider using a small fraction of your subscription revenue. By the time you need to go serverless, you can afford to pay someone else to do it.


> while having very little moat as a company

they have effectively acquihired React (Zuck's mistake). no moat?

(not to mention some genuinely great cloud DX and other platform features eg with their Edge Streaming)


> they have effectively acquihired React (Zuck's mistake). no moat?

I honestly don't even know what you're saying. Is it that Vercel hiring the React folks makes Vercel untouchable somehow?

I see a complete disconnect between the two things. If anything, the "acquihire" could well lead to those React folks doing a job search because Vercel doesn't have a high switching-away-from cost and keeps dropping the ball PR-wise.


Well, the React team implies that React Server Components were effectively a collaboration between the NextJS team and them, and that only NextJS currently supports RSC out of the major React frameworks.


There is some merit to it. Maybe not to that comment specifically, but to the Vercel<->React relationship in general.

Even React's new docs promote NextJs rather than something neutral.


The vast majority of React’s core team still works for Meta, and I think the sentiment that a small handful of “famous” hires amounts to acqui-hiring the team is frankly a little insulting to the rest of them.


That's not the sentiment. At least not for me. As I mentioned it's not really where the employees are but the direction and perspective we as "outsiders" get from the team.

E.g.:

- React Server Components are currently only available in NextJs.

- The official docs recommend NextJs and Remix.

- Create React App is deprecated; Vite is not promoted.

It's already neither vendor neutral nor Meta focused. More people ask about NextJs than React nowadays. Perhaps these "famous" hires you mention do more promoting and it's about the perception (that works both ways). Perhaps recommending those frameworks was the best technical choice, but what perception did it create in the community? People ask whether you can use React with NextJs :(


> they have effectively acquihired React (Zuck's mistake). no moat?

How is that a moat? They could completely own React and it still wouldn't prevent me deploying my Next.js project anywhere but Vercel.

> (not to mention some genuinely great cloud DX and other platform features eg with their Edge Streaming)

Vercel didn't invent CDNs.


Vercel is deceitful.

The Image/img fiasco really pulled back the covers on Vercel for me. I have migrated all my work off of the platform.

NextJS’ lint strategically dissuades you from using the img tag in favor of the NextJS Image component. If you make the mistake of heeding this advice and migrating to it, you can’t use static site generation—which means you are stuck using their hosting.

Here’s one of the most PR-shameful threads I’ve ever read in OSS:

https://github.com/vercel/next.js/discussions/19065


Lol, silently* replacing the original request with the company response instead of documenting it in the docs is certainly a move. Nice to have all the thumbs-up and heart reactions on the "sorry we don't do it", for sure. Love the "We'd love to hear your feedback" and then locking the discussion too. Not having the feature is one thing, but why not properly document that and the (by now existing) workarounds?

* yes, not 100% silent, there is a small gray indicator from Github that it has been edited, but one would not expect a full-on replace with that.


Ugh.. that is sneaky


The edit history / indicator doesn't show up on mobile, but I confirmed it shows up on Desktop; leerob hijacked the question and replaced it with PR.

Deceit is indeed the right word to describe Vercel's behavior.


You're talking about exporting the app, more precisely, which outputs a bunch of HTML files (vs. using SSG for only some pages). You can keep the Image tag around very easily: there is an option to treat it as a normal img tag, so no, it doesn't block exporting, and I've tested it to confirm. Use an environment variable to enable or disable exporting (see the sketch below). Edit to add reference: https://nextjs.org/docs/api-reference/next/image#unoptimized
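
For reference, a minimal next.config.js along those lines. images.unoptimized is the option described in the doc linked above; NEXT_EXPORT is just my own env var name:

    // next.config.js - gate the option on an env var so the same codebase can
    // run with optimized images on a server and as a plain static export.
    module.exports = {
      images: {
        // When set, next/image renders a plain <img> and `next export` stops erroring.
        unoptimized: process.env.NEXT_EXPORT === 'true',
      },
    };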


The last comment by @leerob, which comes 2 years after the issue was opened, saying "we hear you" and "up-voting the issue is the best way for us to track that", just shows how empty their statements are, devoid of any intention to do something about it. Very unprofessional behavior.


Can’t you adjust your config to serve the images using your own CDN with the Image component? By default maybe it’s Vercel-friendly, but I recall it being trivial to adjust to use any other path instead (rough sketch at the end of this comment).

Has this changed?

You could also modify the linting rules to exclude the Image rule.

Not saying vercel is awesome. I no longer use their products. I just didn’t find this particular matter to be problematic.
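
Something like this is what I remember, following the custom loader pattern from the Next.js docs. cdn.example.com is a placeholder, and the query parameters depend on whatever image service you point it at:

    import Image from 'next/image';

    // Rewrites the image URL to your own CDN/image service instead of the
    // default /_next/image endpoint.
    const cdnLoader = ({ src, width, quality }) =>
      `https://cdn.example.com${src}?w=${width}&q=${quality || 75}`;

    export default function Avatar() {
      return <Image loader={cdnLoader} src="/me.png" width={64} height={64} alt="me" />;
    }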


Yeah the "next/image" has always bothered me, I suspect it's a huge revenue stream for them

My advice is to just turn off the linting errors by adding this rule to ".eslintrc.json", like this:

    {
      "rules": {
        // Other rules
        "@next/next/no-img-element": "off"
      }
    }


I haven't used the Image component but I've hosted a Next.js project. leerob's comment said "To clarify again, you do not need to use Vercel for optimizing images. It works using next start or any loader of your choice.". This implies that it would still work in SSR using any hosting method, it doesn't mean you're forced to use Vercel's hosting services.

There should be a warning that it won't work with SSG and is only for SSR, but there are also many features on next.js that are SSR only.


You don't have to use their hosting. I use their NextJS Docker container on Hetzner and it works just fine.


Does the <Image> tag work with the docker container? GP's issue is with how vercel was pushing the <Image> tag a lot yet it doesn't work when in static site generation mode which is used by a lot of people.


It doesn't work in SSG, no, but it works in SSR just as if you hosted on Vercel itself. That's what I do, SSR instead of SSG, because for my app the distinction is unnecessary if I'm hosting it myself.


You can disable it to make it behave like a plain img tag: https://nextjs.org/docs/api-reference/next/image#unoptimized It works, it's just not optimized; that's not the same thing.


It's 2023, what kind of plebeian is using raw image tags? NextJS's OSS design is exactly what it should be; I want the image tag. I don't want Vercel boxing me into a choice:

  1. Serving images over Node.
  2. Using Vercel.
I also don't want to eject code that is sitting right there in the repo. And I don't want to maintain a fork of a homebrew reimplementation of it.


I didn’t know they have a Next.js Docker image. Why do we specifically need theirs? I roll my own based on node:lts-alpine via GitHub Actions and push it to my Docker registry on GitHub.


You don't, but it makes productionizing easy: I just have to pull their latest image for every build and it should work, since it's their official image.


[flagged]


I think you must have missed that the first post is a replacement of the actual feature request that someone submitted. I have no dog in this fight, but it's pretty slimy to respond to someone's request by overwriting it with your explanation. They effectively hijacked the requester's name, profile picture, and who knows how many upvotes to push their own message instead of letting them have their say and, you know, replying.


I was wondering why this message was so weird. Like the last post in a discussion came first…


Does the edit menu appear on mobile? I only see it in desktop mode.


It doesn’t.


Which one, the "go ahead and define your own custom loader" one? You mean re-implement the image export that is sitting right there in their Express server... from scratch? Hah.

NextJS won't let you export their optimized images in a static site export because they want _everything_ running through their Express server.

Why's that great for them? Because they get to charge you for image bandwidth and image optimization. This is called vendor lock-in.

Sources:

In vivo (v13.3.0):

  > npx next export
  <...>
  error - Image Optimization using the default loader is not compatible with export.
    Possible solutions:
      - Use `next start` to run a server, which includes the Image Optimization API.
      - Configure `images.unoptimized = true` in `next.config.js` to disable the Image Optimization API.
    Read more: https://nextjs.org/docs/messages/export-image-api
and https://vercel.com/changelog/changes-to-vercel-image-optimiz...


To be fair, you can optimize images yourself, or just not optimize them at all.


Or use a web framework that gives you optimised images that aren’t vendor-locked.

The point is that the library artificially limits capability because it benefits a particular vendor model, not the user.


> It reminds me of the npm hate

The comparison to npm is worrisome. npm inc. spent years trying to make a profit & grow beyond "privately owned package registry". It failed and was acqui-hired by Github back in 2020 for an amount far beneath its capitalization.


Slightly related: Netlify has/had an even bigger problem around caching, and not just caching.

I set `cache-control: public,max-age=2592000,immutable` on my SPAs' assets as they're hashed and should be immutable.

But Netlify somehow doesn't atomically swap in a new version: say my index.html referenced /assets/index.12345678.js before and is updated to reference /assets/index.87654321.js instead; there's a split second where Netlify could serve the new index.html referencing /assets/index.87654321.js while /assets/index.87654321.js returns 404! So users accessing the site in that split second may get a broken site. Worse still, the assets directory is covered by the _headers rule adding the immutable, 30-day cache-control header, and Netlify will even add it to the 404 response... The result is that the user's page is broken indefinitely until I push out a new build (possibly bricking another set of users) or they clear the cache, which isn't something the average joe should be expected to do.

I ended up having to write a generator program to manually expand /assets/* in _headers to one rule for each file after every build. And users still get a broken page from time to time, but at least they can refresh to fix it. It really sucks.
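
The generator itself is tiny; a rough sketch of the kind of thing I run as a post-build step (the dist/assets layout is my setup, and Netlify's _headers format is one path line followed by indented header lines):

    const fs = require('fs');
    const path = require('path');

    const dist = 'dist';
    const assetsDir = path.join(dist, 'assets');

    // One explicit rule per hashed file, so the immutable header can never be
    // attached to a 404 for a path that merely matches /assets/*.
    const rules = fs.readdirSync(assetsDir).map(
      (file) => `/assets/${file}\n  Cache-Control: public,max-age=2592000,immutable`
    );

    // In practice this gets merged with the rest of my _headers rules.
    fs.writeFileSync(path.join(dist, '_headers'), rules.join('\n') + '\n');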


This is a fundamental flaw with the whole "atomic deploy" model. The web is a distributed system and you can't just pretend that it isn't.

https://kevincox.ca/2021/08/24/atomic-deploys/

In this case it is possible that Netlify could have avoided the issue where the new HTML gets a 404 for the new asset, but that just makes the other problem worse, where the old HTML gets a 404 because its asset has already been swapped out.

At the end of the day hashed assets is a great idea but you need to keep multiple versions around.


I wonder if somehow the request got handled by two different edge workers that were desynchronized? I’ve seen it happen in busy areas (NYC, etc.) where a single client will hit many Workers in a session whereas when connecting from a rural area I’ve never observed that.

Regardless, I say the solution is fat index files. Is there any tangible benefit to the long held tradition of separating the structure from the functionality from the styling? Seems to me like that’s just asking for trouble.


I mostly use Vite nowadays, so my bundle is usually automatically split into a vendor.js and an index.js. The vendor bundle for dependencies is large (usually 50-200KB brotli'ed for my popular side projects) and rarely changes. The index.js containing only my code is usually smaller and changes on every build. With a fat index everything has to be downloaded on every change. Most people don't care these days but I try to make the experience nice even for people with really shitty connections.

In addition, fat index files are really bad for multi-page apps.
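
(On the vendor/index split above: if the defaults ever stop doing it for you, Rollup's manualChunks makes it explicit. A rough sketch, with the chunk naming being purely my own choice:)

    // vite.config.js - make the vendor/app split explicit via Rollup's manualChunks.
    import { defineConfig } from 'vite';

    export default defineConfig({
      build: {
        rollupOptions: {
          output: {
            manualChunks(id) {
              // Dependencies go into a long-lived vendor chunk that rarely changes;
              // app code stays in the per-build index chunk.
              if (id.includes('node_modules')) return 'vendor';
            },
          },
        },
      },
    });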


Splitting vendor and index makes good sense, I use esbuild which has [iffy](https://github.com/evanw/esbuild/issues/207) support for that. Still, the vendor code could be loaded independently while the application code (and styling!) is inlined into the initial index.html response.

> I try to make the experience nice even for people with really shitty connections

I see this sentiment a lot, though people seem to be able to use it to justify any design at all... the question IMO is what does "shitty" mean?

- In a low bandwidth case, serving only the absolute minimum data to do what the user has specifically requested makes good sense. A solution here is Server Components, which can send the client js event handlers on a per-interaction basis.

- In a high latency case, the total number of round trips should be minimized at all costs, so the Server Components approach is terrible, the client may need to wait for 2+ round trips to do their interaction (one to download the client js, one for that client js to perform the actual action). A solution here leans towards the fat index approach.

- In the case where connections drop often, all the data that the client might need should be transferred over as soon as possible, as there's a good chance they won't be able to access the server at the exact moment when the data is needed. The solution here is the fattest indices possible, with copious caching.

These are all three in conflict with each other to some extent, so the best approach is probably dependent on the specifics of your user-base.

In my experience, a "shitty connection" is one where the bandwidth is low, latency isn't typically a big deal, but the connection will drop frequently, potentially for hours or more. However, in such cases I'll at times have access to the occasional hotspot where the network is perfectly fine. Accordingly, I design my apps to transfer as much data as possible initially (ensuring that the main content can be seen and interacted with even if data for some other module is transferring in the background), and provide the option to store everything in stale-while-revalidate service worker caches so the full experience is available fully offline to the extent possible, even if you didn't fully explore the app while online. In this way, I can download the latest chunks when on the good connection and run fully offline from then on (obviously excepting actions that are legitimately impossible without a server).
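
The service-worker side of that is roughly the classic stale-while-revalidate handler. A hand-rolled sketch (libraries like Workbox ship this as a ready-made strategy; cache name is arbitrary):

    const CACHE = 'app-v1';

    self.addEventListener('fetch', (event) => {
      if (event.request.method !== 'GET') return;
      event.respondWith(
        caches.open(CACHE).then(async (cache) => {
          const cached = await cache.match(event.request);
          // Always kick off a revalidation in the background.
          const network = fetch(event.request)
            .then((response) => {
              if (response.ok) cache.put(event.request, response.clone());
              return response;
            })
            .catch(() => cached); // offline: fall back to whatever we have
          // Serve the stale copy immediately if present, otherwise wait for the network.
          return cached || network;
        })
      );
    });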


Is the JS file somehow being embedded into index.html on the server side? If not, how do you expect this to be atomic when the user’s browser is making two separate requests (with an arbitrary delay between them)?


The atomicity is for the site update.

If I'm deploying to my server, the structure would look like:

  /srv/example.com/prod -> /srv/example.com/versions/1
  /srv/example.com/versions/1/index.html
  /srv/example.com/versions/1/assets/index.12345678.js
  /srv/example.com/versions/2/index.html
  /srv/example.com/versions/2/assets/index.87654321.js
A new version is atomically swapped in by changing the prod link from versions/1 to versions/2. If you request index.html and get the updated version, there's no scenario where assets/index.87654321.js could 404. Serving an updated index.html but 404 for a later request for assets/index.87654321.js is not reasonable. Of course distributed systems are harder but it's their problem to solve.

Note that with a naive web server and the layout above, one could get an old index.html but no assets/index.12345678.js by the time the .js file is requested, but that's less problematic and could be covered by some lingering cache. Or I could simply include the last build's assets as there's no conflict potential.


> Or I could simply include the last build's assets as there's no conflict potential.

It looks like your build puts the hashes into the file names for each asset (instead of just naming resources purely as the output from the hashing function). If you're using a halfway decent hash function, you're ~never going to get a hash conflict even across all of your assets, let alone across all the versions of an asset for a single source file name.

You could just leave all the old assets in place (esp because many of them won't change from build to build) and prune hashed assets that you know haven't been referenced from an index.html in >1 month.


My sense of Vercel (mostly from working with NextJS) is that they are more interested in appearing to support an open source framework while making their product as difficult to interoperate with other technologies as possible in an attempt to lock users into their platform and hopefully pay for it.


Hey, I'm on the team at Vercel. What could we do better? Open to your feedback. Our platform integrates with 30+ frameworks (https://vercel.com/docs/frameworks), we directly fund the development of Next.js and Svelte, and we sponsor Nuxt, Astro, Solid, and more.


> Hey, I'm on the team at Vercel. What could we do better? Open to your feedback.

You recently broke Next.js with AsyncLocalStorage by putting it in globalThis, breaking any runtime that's not Vercel's. There was no specification, and other runtimes scrambled to reverse engineer your implementation: https://twitter.com/ascorbic/status/1616811724224471043 and https://twitter.com/lcasdev/status/1616826809328304129


I'd say one of the big annoyances I've encountered while building applications with the last few versions of NextJS is the increasingly tight integration with the built-in server. The world of Node servers is pretty well-established, with documented interfaces that the major server platforms all implement and support. Simple stuff like res and req in the context and what those objects contain. With each release NextJS breaks a bit more of those interfaces and replaces them with approaches that are very specific to NextJS and pretty unintuitive to anyone who is used to building servers in Node. I'm talking about stuff like handling response codes (the notFound parameter you need to return to get a 404 from getServerSideProps is a particularly egregious example) or redirects. You are presented with what looks like a standard interface, but it fundamentally doesn't work anything like that interface.
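
For anyone who hasn't hit it, this is the shape in question: instead of touching a status code on a res object, you return a Next.js-specific marker (fetchPost here is just a stand-in for your own data lookup on a dynamic route):

    export async function getServerSideProps(context) {
      const post = await fetchPost(context.params.id); // your own data call
      if (!post) {
        return { notFound: true }; // Next.js turns this into a 404 response
      }
      return { props: { post } };
    }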

Decades of working in this industry have taught me to value interoperability, modularity and standardization. I love the idea of a framework that makes SSR easy, and the idea of static site generation, and a router that uses the pages approach where each file is a route is handy in some cases (though very clumsy in others), but all of those are different things and aren't really something I need a single monolithic framework for. I may want one or some or none of those things based on what project I'm working on. If NextJS makes it harder to pick and choose what I want to use, or makes some features contingent on using other features I don't really care about, I'm going to start looking really hard for an option that gives me more choices.


Seems like you may appreciate https://vite-plugin-ssr.com/ (I'm its author).


I love vite-plugin-ssr. Thanks for making it.


Nice. I was going to use Next for my current project, but may give your solution a try instead!


Hi leerob, I have been using NextJS since v10 in production, and I believe it is an amazingly effective way to write complex web applications really fast.

Recently (I would say, after the latest investment round), as a developer I see lots of new features whose implementations seem rushed, incomplete, or done with no thought for the community as a whole.

- API Middlewares are not working as expected [1]

- The new pages layout (i.e. app/) is super weird, implemented in a completely different way from pages/, with 2 incompatible API sets: one for pages/ (which I presume will soon be declared legacy and unmaintained) and a shiny new but still experimental one for app/.

- The Images API, just as others pointed out, benefits a tiny subset of developers who have a bunch of static images locally. For most projects the current implementation is not useful at all.

- Your release versioning does not make sense for a mature, stable product. When you release a new major version (e.g. v13) you stop supporting the older v12.

- Could you give balazsorban44 a bigger team? Next-Auth needs more love to be a great product.

- [1] https://github.com/vercel/next.js/issues/38273


If you want to call yourself open, you can start by making the development more open. Right now it’s almost all behind closed doors until it hits a PR, and once in a while the Vercel gods are feeling generous enough to share some slightly early plans in a discussion somewhere.

Simply “here’s what we have in mind for next release”. An open framework should not be developed like you folks do it. Look at how Python or Django develop their releases for better examples.

You have RFCs, but most of them are internal and don’t reach the public space. Why?

AppDir is a good example. Yeah you have an extremely high level table of what’s planned vs in progress but there’s nowhere public where these discussions are being held. We’re not able to see nor contribute to the decisions. We’re not able to chime in when bad decisions are obviously being taken. Some companies view this as an asset because they want the power to silently take decisions that are good for them but bad for their users.


A more sane release lifecycle and release communication.

When a major version bump with a host of breaking changes drops, we'd prefer to be able to stay on the superseded version and still receive bug fixes for at least some time, with expectations set for what that timeline looks like.

The last releases of v11 (v11.1.1~v11.1.4) lack release information/changelogs, and CI still appears to be failing for Mac and Windows on v11.1.4 without this being acknowledged.

The final release of v12 came 1 month after v13.0.0. The last few releases of v12 similarly lack release info or a changelog. You could say "just look at the git history", but the Nextjs git log really doesn't lend itself well to that (unless you're already a dev on the Nextjs codebase or used to code-auditing, I guess).

For "foundational" (that's what you aim for it to be, right?) software like Next.js, users should be able to have the same expectations for versioning, releases, and documentation that Vercel relies on for Nodejs.

Younger devs follow what the leaders in the ecosystem do, which is how trends and norms rise and change. If Vercel changed its approach here it could contribute to setting a good example instead of showing that maintenance as an afterthought is nbd bro, why don't you update already!


Too many bugs. Too many features rushed out the door. Every major AND minor version of NextJs has new features and/or breaking changes, which is fine on its own, but existing stuff often breaks too.

We go through a cycle of: new feature -> patch fix -> minor fix -> improvement -> finally works -> replaced by something else OR it broke again.

For something like NextJs that has been running for years maybe more focus on polishing the existing features would be a good start.


The Vercel way is to say "those aren't breaking changes, they're just bugs".


I was and am a big fan of Vercel but I hated that billing situation a few days ago. You shouldn’t need your post to blow up on social media for support to acknowledge a billing error.


You should offer better customer support as repeated in many comments here.


And you should fix the stupid Next.js img BS referenced in other comments too


Don’t charge a metric fuckton for bandwidth.


It doesn't help that the main Vercel GitHub repo isn't really a core part of Vercel; it's just the CLI and ancillary scripts. None of their infrastructure is open source or self-hostable, which was a surprise after finding the repo and trying to dig through it.


No vendor lock-in here. Been running NextJS on GCP Cloud Run for years and each major release has become more Docker friendly. Just my experience.


They’ve helped other UI frameworks run on Vercel. They seem to be very good about that or do you mean something else?


They've "helped" frontend developers of other UI frameworks pay them $400/TB for bandwidth, not to mention various other extortionate overages and hidden charges.


Open alternative to Next.js: https://vite-plugin-ssr.com/ (I'm its author).

Open:

- Choose any UI framework you want (React/Vue/Solid/...)

- A lot more flexible than Next.js (e.g. i18n and base assets configuration are fundamentally more flexible)

- Keep architectural control (vite-plugin-ssr is more like a library and doesn't put itself in the middle of your stack)

- Use your favorite tools. And manually integrate them with vite-plugin-ssr (for full control without surprises).

- Deploy anywhere (and easily integrate with your existing server/deploy strategy)

- Open roadmap

- Ecosystem friendly

The upcoming "V1 Design" has been meticulously designed to be simple yet powerful. Once nested layouts, single route files, and typesafe links are implemented vite-plugin-ssr will be pretty much feature complete.

Note that, with vite-plugin-ssr, you implement your own renderer, which may or may not be something you want/need/like to do. Built-in renderers are coming and you’ll then get a zero-config DX like Next.js (minus extras like image processing as we believe they should be separate libraries).

Web dev isn't a zero-sum game - a vibrant and healthy ecosystem of competing tools can co-exist. (I'm close to being able to make a living from sponsors.)

The vision is to build a truly open and collaborative foundation for meta frameworks.

Let me know if you have any questions.


Next.js pseudo-alternatives are usually subpar because they misunderstand Next.js's approach and implement a very simplified vision of it (e.g. they can't even tell the difference between a client-side rendered SPA and an HTML export). This one doesn't seem to fall into that trap; from the documentation it seems a very well thought out library. Great work!


Even though you go on explaining what you mean by "open", seeing that the common definition is "open (source)", I'd go for a different qualifier.


It's available under MIT License hosted on GitHub. Am I missing something?

https://github.com/brillout/vite-plugin-ssr/blob/main/LICENS...


I think parent means that both Next.js and vite-plugin-ssr are open source, so "open(-source)" isn't a differentiator.

I agree, although we think we can keep the word "open" and make it clear that we're talking about being an "open framework" (and not only open source).


Yea, we've plans to improve communicating that. (Although we do plan to stick to the word "open".)


This type of behavior, where a CDN flat out ignores a header you’ve set in favor of their own values, without any indication, is incredibly frustrating.

I hit a similar issue when using Cloudflare and the Date header, where I was signing some parts of the response including the Date header. The problem was that if the request hit Cloudflare at just the right^W wrong time, the signature would be invalidated because their Date header value would be different than the original.

They didn’t see it as an issue, even though IIRC the HTTP spec states that a proxy server must not overwrite the Date header if it was set by a prior actor.

Took days of debugging to determine why some requests were producing invalid signatures.


(Only somewhat-related rant)

I'm very much starting to distrust these huge companies with infinite product/feature lists and generic marketing-lingo websites.

"Vercel is the platform for frontend developers, providing the speed and reliability innovators need to create at the moment of inspiration."

Seriously?

I want serverless providers that tell me the 4-5 products they offer (Compute, maybe a KV store, maybe a database, maybe some pubsub, maybe a queue?), give me the pricing, and leave me the Hell alone.

I don't want to feel locked into a system promising end-to-end whatever, ones that heavily push a certain framework, and most importantly ones that look like the homepage was designed by a team of sales people instead of a team of engineers.

It's the difference between the Cloudflare Workers website and the Vercel website: Vercel looks like the new-age big-brother con artist, while Workers looks like a utility.

Sorry, what were we talking about? A runaway bill?


The gist (which the support engineer referred to as the internal RFC) is: instead of stripping `cache-control` `s-maxage` / `stale-while-revalidate` values, we should support Targeted HTTP Cache Control[1] (i.e.: `cdn-cache-control` and `vercel-cache-control`). There's a rough sketch of what that looks like at the end of this comment.

Vercel strips them because (1) at the time this RFC didn't exist and (2) most of the time we found customers don't want to cache on the browser side or on proxying CDNs, which makes purging and reasoning about cache staleness very difficult.

Another example is the default `cache-control: public, max-age=0, must-revalidate`. Without that, browsers have very unintuitive caching behavior for dynamic pages.

Customers want to deploy and see their changes instantly. They want to tell their customers "go to our blog to see the news" and not have to second-guess or fear whether the latest content will be there.

I appreciate Max's feedback and we'll continue to improve the platform.

[1] https://httpwg.org/specs/rfc9213.html
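
(For the curious, a sketch of what that targeted approach looks like from a Node-style function handler. RFC 9213 defines the `CDN-Cache-Control` header; whether a given CDN honors it depends on the provider, and the values below are just an example.)

    export default function handler(req, res) {
      // Browsers: always revalidate, so a deploy or purge is visible immediately.
      res.setHeader('Cache-Control', 'public, max-age=0, must-revalidate');
      // CDNs implementing RFC 9213: cache for 60s, serve stale while revalidating.
      res.setHeader('CDN-Cache-Control', 'max-age=60, stale-while-revalidate=600');
      res.end(JSON.stringify({ now: Date.now() }));
    }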


Cloudflare (and likely other cdns) has a development mode for this exact use case


Unless I'm misunderstanding you: it's not about dev or prod. It's that you want Vercel to cache a dynamic page, but not your visitor. That allows you to be in control of the ship: if you purge the CDN, you don't risk a customer having a stale page.

I've seen a lot of customers get burned by sending `max-age` as a way of getting their CDN to cache, not realizing they're inadvertently caching on users' machines. Sometimes it's a seemingly harmless "5 minutes", but that can be damaging enough for rapidly changing pages (imagine breaking news on a homepage).


Look, regarding setting cache-control headers: it's a professional tool, and it's going to be possible to shoot yourself in the foot with it. The way to reduce that is to have a UI that asks people, "hey, are you sure you want to do this potentially dangerous thing? It may result in these unintended consequences", but yes, ultimately allow people to do it. Otherwise, you're not letting people use what they paid for.


Totally, I don't like surprising behaviors either. At the time we made that decision the `CDN-Cache-Control` proposal didn't exist, so it was a tricky spot.

There also really wasn't a UI opportunity in this case (although one thing we thought about was a setting to control it and turn off the Vercel override).


> There also really wasn't a UI opportunity in this case

Why, because the configuration is set in a text file and not in the UI?

You could send an automatic email to the account holder with the warning whenever someone adds a foot-gun cache-control setting, with the ability to turn off the email by setting a different configuration flag to true or by checking a flag in the UI.


If anyone needs a CDN, please use BunnyCDN. I have tried almost every other major CDN and it just blows everything else out of the water.

Regarding Vercel, they do have quite poor support so it doesn't feel rock solid and dependable. They are a great start though, but then ideally you should just switch to bare metal on Hetzner or something when you are earning serious money from your business.


Honestly nothing compares to having zero bandwidth expenses with Cloudflare, especially with Cloudflare R2. Also, having different pricing for every region is very annoying.


The free version of Cloudflare is essentially a "fair-use" offering, meaning that it is free until they decide it's no longer free for you, at their sole discretion [0]. Their terms of service prohibit distribution of any "disproportionate percentage" of non-HTML content [1]. That includes most of what you would want a CDN for: video, images, audio, binaries.

They have exercised this discretion repeatedly for significant bandwidth users, usually in the form of "You need to upgrade to the enterprise plan or we will terminate services for your site." One of my sites got the enterprise "offer" after serving single digit TB in a month. Running on a real CDN from the start would have been cheaper than inevitably getting extorted to a large <contact sales> price for <contract term> or terminated with little to no time to migrate things.

[0] Section 2.6 https://www.cloudflare.com/terms/ [1] Section 2.8 https://www.cloudflare.com/terms/


Free version also has some awful gaps in global coverage


Then use R2, which can be used for everything with no usage limits as you are going to pay for your usage anyway.


To use R2 you’d need to host images/assets separately on their own domain rather than just putting a CDN in front of your entire site and forgetting about it. Putting assets on their own domain isn’t uncommon, but for some stuff it can be a big inconvenience with build pipelines/software support.

I’m not sure on R2 specifics, but I remember when I wanted vanity names on Cloudflare (i.e. assets.example.com) it was not possible unless you gave Cloudflare control over your entire domain (dropping your current nameservers) or signed up for an enterprise plan for CNAME support.


I mean you are going to do that anyway to use bunny. And you are right, they need the domain to be on their nameservers.


But cloudflare can only be used safely with R2. If you use it as a CDN for your app, you have to be careful about section 2.8 of their TOS. If your usage is mostly caching a JSON API response, you are technically in violation.


Yup, that's why I said use R2. It's still cheaper than Bunny, since they charge per transaction instead of bandwidth usage, and it's a unified price for all regions.


Hetzner is cheaper unless you are serving sizable assets with R2. R2 costs per call, and that per-call cost is more than most web-bundle-sized assets would cost at Hetzner egress pricing.


Since when does Hetzner have a CDN? Unless you mean storage boxes, which aren't even close.

And they aren't even comparable to normal object storage, since data is protected by a single RAID cluster.


Didn't know hetzner had a fully managed global caching solution, nice.


Any links to it? Can't find 1 on the website.


They don't


Ah so the OP lied, great.


I'm only describing egress pricing for serving content, not CDN.

If you're in the same region, another DNS lookup is going to be more costly than a CDN will be worth in proximity.


Depends. Some CDNs are more than worth it. Hetzner has weak DDoS protection, for example. Some CDNs have caches inside the ISP's network, so traffic doesn't even leave it. There are also cases where the local ISP may not peer with you even if you are in the same region. OVH was famous for this in Asia. It's not enough to be in the same region.


Different perspectives from different requirements. In my case I have to serve API requests that can't be proxied. Most of the data is API, it's bigger than the statics.

Using a CDN for static assets for North America just doesn't seem worth it to me.


Cloudflare isn't 100% free unless your traffic is US-only. Non-business accounts get re-routed on major ISPs in quite a few countries. Quite an upfront cost.


Re routed?


Yes e.g. if a user is in India they may get sent to Singapore, Hong Kong or other locations to serve their traffic instead of the actual India PoP.


Yea, but better not to keep all your eggs in one basket, and there are also the centralization-of-the-web concerns re: CF.


Just using them as a CDN through R2, no proxying.


Bunny.net is amazing. Its pricing is excellent for any scale. Its combination of CDN, DNS, and storage services allows for unlimited static websites, each with a custom domain.


Better than Fastly?


Certainly cheaper.


Personally, I love how approachable Vercel is and for most small things I've used Vercel for, it has been an absolute delight.

My issue with using Vercel for "real workloads" is the pricing though. 100GB of Bandwidth for $40 is a blocker.

I love how easy the experience is to throw up an app and test it in the real world, the dashboard is great, build times are excellent, but I can't see myself paying that high of a premium.


Also $20 per user. A lot of teams don't use a lot of bandwidth but have a lot of developers.


Feel like this Tweet could use some social support too.

https://twitter.com/ms_nieder/status/1626995266619420675?s=4...


This is incredible. If I understand the thread, that >$22,000 surprise bill was not forgiven by Vercel and the most they're willing to do is offer a 25% discount.

Is Vercel a business or a scam masquerading as a tech company?

If a company needs to stoop to this level of billing shenanigans to make money, I have my doubts...


Vercel have controlled the narrative that managing your own infrastructure is too difficult, or that it takes too much time away from shipping products. They want you to put your trust in them, but you’ll have to pay for it. This would be OK if you could trust them.

Many stories are emerging where it’s clear that trusting Vercel is a risky strategy.


This right here is why I never, ever use cloud services for my side projects, and only use them on the client's or employer's dime. I always use Digital Ocean, Hetzner, or some other provider where I pay a fixed amount per month.

Which is a shame as it makes it harder to learn things like AWS and Google Cloud (or for that matter, Vercel) in my spare time, and perhaps they might even work out cheaper for low-traffic hobby projects, but ultimately the risk is too great.


I have spent a while struggling with Vercel and unexpected caching behaviour, specifically in regards to Nextjs 12 returning "Static props not found" errors in situations where that seems like it should be impossible according to the docs.

The lack of other voices on the internet with the same issues led me to believe that I was going insane. Now I'm starting to think going back to a good old VPS might not be such a bad idea.


What the hell. You have a page with 150 jobs and a form. You basically don't need anything more than an HTML editor.

We need to talk about how we do software architecture and technology choices today. In the time you spent playing around with CDN stuff you could easily have built a company listing, fixed your footer and header links, thought about pricing, built an apply form, fixed/built job notifications, ...


While I may myself not have a complex setup for something that looks so simple on the surface, I think people here need to stop assuming they know everyone’s purpose for building software.

Maybe the author intentionally set things up so they can have a low stakes system to learn about these technologies and improve their knowledge. This post is about Vercel and their poor practices. Let’s stay on topic and actually ask the author why such a complex setup instead of assuming you know and speaking off the cuff with criticism


> Let’s stay on topic and actually ask the author why such a complex setup instead of assuming you know and speaking off the cuff with criticism

I'd go a step further and say the question of whether the author may or may not be over-engineering their own app is beside the point.

I'm not saying asking out of curiosity would be a problem, but it shouldn't be construed as at all relevant to the very valid points OP is making regarding Vercel's service


That was my thinking. This seems over-engineered for a simple jobs board. I would be surprised if the traffic even warrants the CDN setup.


I don't dislike Vercel specifically, but I dislike opaque businesses that masquerade as open source champions. Also, I wish fewer devs were OK with "advanced frontend stuff" being such a leaky abstraction.


For someone looking at switching to Vercel for frontend deployment, this is a bit scary.

Granted, CloudFront isn't terribly hard to use. It's nice to have all resources in one place; however, it's probably worth sticking to the more mature products for things like content delivery.


If you'd like to see the full details, I posted an update here (https://news.ycombinator.com/item?id=35508529).


> Most of remotejobs.org is already hosted on a VPS and so in my case I'll move the web pieces there with a CDN like Cloudflare in front of it.

You probably don’t even need a CDN at all.


Remote Jobs seems to be a content driven site. Why wouldn't it want all that content on the edge?


Why would it?


Speed? Primary reason for CDNs.


I don't buy that, at least not in the general case. A CDN adds an additional indirection between the site you serve and the content that you reference on the CDN.


Having someone handle multiple points of presence is a nice courtesy to users. Not sweating about outages is worth a lot. Someone else handling botnets & DDoS is fantastic.

I very very much believe in some DIY & doing things ourselves but I also recognize a ton of value in using CDNs. Glad both are options. DIY is hard.


A CDN is largely a value prop for reducing latency on static web pages, so that someone coming from a link gets a page load ASAP. Once you have a dynamic or hybrid app, the value diminishes rather quickly, I assume. Potentially it could become negative if you have additional points of failure and cache inconsistencies.


> Potentially

Unless CloudFlare cables are somehow shinier than the rest of the internet’s, adding one hop almost certainly adds latency. More hops more time.

In some instances where the original server was closer than CF’s edge, I measured increased time even for cached content, effectively making CF slower for every request by that specific user.


> Unless CloudFlare cables are somehow shinier than the rest of the internet’s

Unrelated, but these mega actors sometimes have such shiny cables, because they can route on their internal network across the globe. Iirc Cloudflare does that for some/all traffic(?). But you’re right, all else equal more hops = worse, and I’d be unsurprised if Cloudflare overstates the benefits of using them.


If the site is both static and busy enough, then the majority of users never need to do a roundtrip to the original server.

If the site isn’t busy enough… well, I think more CDNs ought to support pre-caching. Supposedly Vercel does?


You can warm up the cache with cloudflare R2 or Bunnycdn edge storage.


> Unless CloudFlare cables are somehow shinier than the rest of the internet’s, adding one hop almost certainly adds latency. More hops more time.

In my experience, Cloudflare (and other CDNs) can often provide a better route than a regular ISP. Sometimes downloading speed is really slow, and switching to Cloudflare Warp (VPN) can at least double or triple the speed.


I assume that’s because a VPN doesn’t let the ISP do any traffic shaping, which isn’t the case for regular CDNs (unless the ISP itself “positively traffic shapes” for)

Warp also regularly lets me escape public Wi-Fi’s where in-browser uploads fail.


You're not factoring in the DDoS protection, that's why the answer you got is so clear. It _may_ still be value negative, but it probably depends on the tradeoffs one cares about.


[Lee from Vercel] Max reached out to me today (Sunday) after this experience, and I worked with him this evening to get to a resolution for his site on the Vercel free tier.

I'm really sorry we weren't able to get to a resolution faster. I've concluded it's not an issue with the Vercel Edge Network based on the reproduction he provided and pushed documentation and example updates (see below). I know Max spent a lot of time on this with back and forth, so I just want to say again how much I appreciate the feedback and helpfulness by providing a reproduction.

Here are the full details from the investigation:

- Beginning Context: Vercel supports Astro applications (including support for static pages, server-rendered pages, and caching the results of those pages). Further, it supports caching the responses of "API Endpoints" with Astro. It does this by using an adapter[1] that transforms the output of Astro into the Vercel Build Output API[2]. Basically, Astro apps should "just work" when you deploy.

- The article states that to update SWR `cache-control` headers you need to use the `headers` property of `vercel.json`[3]. This is for changing the headers of static assets, not Vercel Function responses (Serverless or Edge Functions). Instead, you would want to set the headers on the response itself. This code depends on the framework. For Astro, it's `Astro.response.headers.set()`[4]. This correctly sets the response SWR headers.

- Vercel's Edge Network does respect `stale-while-revalidate`, which you can validate here[5] on the example I created based on this investigation. This example uses `s-maxage=10, stale-while-revalidate`. Vercel's Edge strips `s-maxage` and `stale-while-revalidate` from the response. To understand whether it's a cache HIT/MISS/STALE, you need to look at `x-vercel-cache`. I appreciate Max's feedback here that the docs could be better; I've updated the Vercel docs now to make this more clear[6].

- I've started a conversation with the Astro team to see how we can better document and educate on this behavior. In the meantime, I updated the official Vercel + Astro example to demonstrate using SWR caching headers based on this feedback[7].

- The reproduction provided by Max[8] does not show the reported issue. I was not able to reproduce, which is the same result that our support team saw. It sounds like there were some opportunities for better communication here from our team and I apologize for that. I will chat with them. Free customer or not, I want to help folks have the best experience possible on Vercel. If Max (or anyone else) can reproduce this, I am happy to continue investigating.

[1]: https://docs.astro.build/en/guides/integrations-guide/vercel...

[2]: https://vercel.com/docs/build-output-api/v3

[3]: https://vercel.com/docs/concepts/projects/project-configurat...

[4]: https://docs.astro.build/en/reference/api-reference/#astrore...

[5]: https://astro.vercel.app/ssr-with-swr-caching

[6]: https://vercel.com/docs/concepts/edge-network/caching#server

[7]: https://github.com/vercel/vercel/pull/9778

[8]: https://github.com/maxcountryman/astro-trpc-example


Actually I reached out to you asking for a comment which I could add to the article, not for your technical support.

I was surprised and I have to admit a bit dismayed to watch you throw yourself into the fray on a Sunday. The technical issues, which in fact persist, are at this point an aside to the way Vercel has handled this issue.


You can reference this comment for the article if you'd like. I'm always more than happy to help out folks. Based on all of the information you've provided, everything is working as expected. As I mentioned above, I'm happy to continue investigating if there is a new repro showing unexpected behavior. Thanks Max


It’s pretty clear these issues were beyond what traditional support was capable of. If you had reached out to someone like Lee earlier, it’s likely your experience would have been much better.

Vercel’s definitely in a weird place, trying to be the home for innovation while also offering more traditional support. While the support experience you had was less than ideal, you’re also failing to recognize that you are a bit on the bleeding edge here.

Your reply here makes it really hard to take any of what you’ve done in good faith. Lee has been incredible to work with and I commend his efforts here.


What a disappointing take.

In fact Lee was looped in six weeks ago; there was an issue with Remix and CDN cache behavior which popped up on GitHub. I mentioned my issues in that thread thinking they were likely related. Lee responded and let me know he would talk to the internal team.[0] However, Lee did nothing that was ever visible to me. Moreover, I completely disagree with your assessment: it should not require Lee or social media posts to tackle issues like this, that’s completely unscalable and not a realistic way to run a business.

It’s pretty irresponsible of you to be suggesting folks reach out directly to Lee when frontline support fails. That sucks for Lee and it’s bad for the Vercel business.

To be blunt with you, your comment does not read well. It looks like you’re ignoring very real problems and doing everything you can to dismiss them as “bad faith” when in fact there’s a demonstrable problem which shouldn’t be excused as growing pains but instead addressed head on. This kind of fanboyism doesn’t help anyone and I hope Vercel takes the time to reflect on this feedback and make real, meaningful changes.

[0]: https://github.com/vercel/community/discussions/1559#discuss...


Please note that the above comment is super carefully written, because Vercel puts tremendous efforts into their public image and marketing. However, their real support is terrible, as pointed out in the article.


> The article states that to update SWR `cache-control` headers you need to use the `headers` property of `vercel.json`[3]. This is for changing the headers of static assets, not Vercel Function responses (Serverless or Edge Functions).

Then why does [3] say "This example configures custom response headers for static files, Serverless Functions, and a wildcard that matches all routes." if it isn't for changing the headers on Serverless Functions? (EDIT: my bet would've been that it's adding the headers outgoing from the CDN, not from the function, but your claim above contradicts that too)


That should be clearer! I'll update, thank you. Basically, you should always use the framework or "product native" way of adding caching headers. For Serverless Functions (that use Node.js), that's the `response.setHeader()` API[1]. For Edge Functions (that use Web APIs), that's using the Web Response API and passing a headers object[2]. There's a rough sketch of both after the links below.

Most of the time, folks using Vercel aren't actually using these Functions manually, but instead have framework-defined infrastructure[3] that generates the functions based on their framework-native code (e.g. `getServerSideProps` in Next.js).

[1]: https://vercel.com/docs/concepts/functions/serverless-functi...

[2]: https://vercel.com/docs/concepts/functions/edge-functions/ed...

[3]: https://vercel.com/blog/framework-defined-infrastructure
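
Illustrative sketches of those two shapes (the file names are hypothetical, not from the docs):

    // api/hello.js - Serverless Function, Node.js request/response objects:
    export default function handler(req, res) {
      res.setHeader('Cache-Control', 's-maxage=10, stale-while-revalidate');
      res.end('hello');
    }

    // api/hello-edge.js - Edge Function, Web APIs: headers ride on the Response.
    export const config = { runtime: 'edge' };

    export default function edgeHandler(request) {
      return new Response('hello', {
        headers: { 'Cache-Control': 's-maxage=10, stale-while-revalidate' },
      });
    }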


Hey Lee, Travis here.

One piece of meta feedback for Vercel reading through all this is that whenever you have multiple ways of accomplishing something (like setting headers in vercel.json vs in the handlers themselves), it's bound to cause confusion.

Love Vercel :)


I’ve worked on two projects where there were comments about strange SWR config — pointing towards GitHub issues discussing this exact issue.

The first time I encountered it must have been 3 years ago. I have a feeling vercel doesn’t care.


ive ejected out of the next ecosystem and work in preact now. some of the optimizations being adopted across react and next had me worried about insurmountable lock-in! with that said vercel's github integration is a beautiful thing that i will continue to use. the automated preview deployments are just too clutch


We need to talk about developers.

Vercel and any other company in their space follow the same old playbook:

They play open source to attract users and build nice stuff developers like (not necessarily what they need) to win market share and developers' minds and hearts.

When they are above the competition, thanks to the free contributions of the community, they reveal their true nature and start playing greed.

Developers get upset and start ranting on HN.

How many times do I need to see developers playing this movie? It's the same shit over and over and over again.


I hope this doesn't come off as snarky, but after researching Vercel and its competitors I decided that the iffiness around billing for these services wasn't worth it. I would mostly use it as Infrastructure as a Service anyway. It took me about a night to learn the ins and outs of AWS CloudFormation. Now I know exactly what my stack looks like and I can more easily estimate my end-of-month costs -- plus I can stay in the free tier wherever possible.


There's no option or docs for rate limiting or preventing DDoS of serverless functions. I warned them 3 years ago, but it wasn't listened to.


$250,000 per year for 10 seats with Vercel on AWS Marketplace.

https://twitter.com/eigenseries/status/1645515739280064512?s...


ha, I thought I was just bad at "computer stuff" for not figuring out how to implement SWR correctly on Vercel...


Does any CDN support stale-while-revalidate?

I know Cloudflare doesn't. I thought Vercel did but apparently not.




You don't need to go linking to your marketing comment in every possible subthread for damage control, it isn't nice to spam.


Some odd hate in this thread. Been using Vercel for around 3 years and it’s been amazing; it’s our default for all front-end deployments now. The DX is second to none and the per-branch deployments are great for prototyping.

Yeah, there is no cap on spend (which cloud services do this? None AFAIK), but if you’re really worried about getting DDoSed then put Cloudflare in front of Vercel.



