Neon Postgres vs. Supabase (devtoolsacademy.com)
45 points by TheAnkurTyagi 8 months ago | 47 comments



While you can use Supabase as simply a Postgres provider, the more interesting comparison IMO is to other backend-as-a-service providers. Supabase used to call themselves a “Firebase alternative” but at this point they have surpassed Firebase in almost every way, in my view.


(I'm on the supabase team)

I'd also like to offer some corrections to the linked post:

- Supabase is SOC2 type 2 and HIPAA compliant (https://supabase.com/security)

- Supabase works with all the same Postgres tooling that neon does (dbeaver, PgHero, PgAdmin, etc.)

- Supabase also offers integrations with Auth0, Clerk, Okta, etc.

- Supabase does offer verify-full SSL mode (sketch below)

- Supabase encrypts data in transit and at rest

- Supabase does offer pg_stat_statements and additionally the newer pg_stat_monitor
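
To make the SSL and pg_stat_statements points concrete, here's a rough sketch using node-postgres; the host, CA path, and credentials are placeholders rather than project-specific values, and the column names assume a recent Postgres:

    import { readFileSync } from "fs";
    import { Pool } from "pg";

    // Equivalent of sslmode=verify-full: verify the server certificate against
    // a CA and check the hostname (Node's TLS checks hostnames by default).
    // Host, CA path, and credentials below are placeholders.
    const pool = new Pool({
      host: "db.example.supabase.co",
      user: "postgres",
      password: process.env.PGPASSWORD,
      database: "postgres",
      ssl: {
        ca: readFileSync("./server-ca.crt", "utf8"),
        rejectUnauthorized: true,
      },
    });

    async function topStatements() {
      // pg_stat_statements: the five statements with the highest total execution time
      const { rows } = await pool.query(`
        select query, calls, total_exec_time
        from pg_stat_statements
        order by total_exec_time desc
        limit 5
      `);
      console.log(rows);
      await pool.end();
    }

    topStatements().catch(console.error);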

And I just want to call out that this author works for neon (self-proclaimed: https://www.reddit.com/r/SideProject/comments/1dy2r8b/commen...)


(I work for Neon.) The author does not work for Neon. He did some one-time consulting work for us earlier this year. We didn't review or have input on this article. I'm sure he'll be happy to update it with the Supabase corrections; he is a great guy who is genuinely trying to be helpful.


Thanks for the feedback... I'll update the post, but please check with your founder (Paul).

I messaged Paul on Twitter on Sunday, before even sharing the post, to get any feedback, as I don't want any confusion like you had last time on Reddit.

and I genuinely like both databases and other awesome developer tools.

pls show some fighting spirit.

PS- I'm no longer working with Neon.

happy to show the bank statement :)


In my opinion.

I do not get the value of BaaS platforms. By the time you have spun up your 3rd Postgres instance in a VM you get the hang of it and create a boilerplate Docker container with an API layer.

Firebase has auth for free. Firebase has custom domains for free. Supabase is a BaaS and Firebase is an app platform. You can make Firebase just work, but for Supabase you need other services to complement it.

I may be missing the point of Supabase, but I tried it for a year and it is not for me. VPSes from medium and small vendors are quite cheap, and it is worth the investment to set things up your own way.

Fly.io shut down, PlanetScale canceled its free tier, Heroku canceled its free tier, and frontend platforms like Vercel and Netlify have pricing-related reputational issues.

I just do not trust these platforms anymore. VPSes and dedicated servers are cheap, sustainable, and easy to experiment with. They are tried and true. But that's my opinion; I could be wrong.


As far as I know, fly.io is very much alive?


Ahhhh... my mistake, it was "bit.io".

Bit.io was a serverless Postgres platform similar to Neon. They got acquired by Databricks and their service was shut down extremely fast.


fly.io isn't a BaaS. They give you a VM, just with a nicer dev and deploy experience (at a markup). And they didn't shut down.


Sorry. I meant to say "bit.io"


They still do to this day, because every day Google neglects Firebase is free marketing/customer churn to Supabase. I think they should keep it until ~2 years after Google inevitably kills Firebase.


Yes, please add issues here: it's an open-source project I started and I would love some contributions from the community to keep it going. I'm hungry for ideas and I write about cool dev tools: https://github.com/tyaga001/devtoolsacademy


Do these providers really bring enough value to use them over a colocated managed db? (Not just the same region, but subnet etc)

I was under the impression for quite some time that it wasn't that bad to have 2-3 ms latencies compared to a colocated DB, which is typically <1 ms. However, we recently switched from Neon to a colocated, managed DB and there was a huge improvement. Some of our queries were executing sequentially (due to our ORM, Prisma), so the per-query latency compounded: what was a 3-second transaction was reduced to only 1 second. Yes, this could be rearchitected better, but it illustrates a major flaw in my mind for these companies providing only a DB.
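
For illustration, the pattern was roughly the classic N+1 shape below (hypothetical model names, not our actual schema); each awaited call is its own round trip, so the network latency compounds:

    import { PrismaClient } from "@prisma/client";

    const prisma = new PrismaClient();

    // Sequential: one round trip per order, so a few ms of network latency
    // per query adds up across the whole transaction.
    async function slow(userId: number) {
      const orders = await prisma.order.findMany({ where: { userId } });
      for (const order of orders) {
        const items = await prisma.orderItem.findMany({
          where: { orderId: order.id },
        });
        // ... use items
      }
    }

    // Fewer round trips: fetch the related rows in a single query instead.
    async function faster(userId: number) {
      return prisma.order.findMany({
        where: { userId },
        include: { items: true },
      });
    }

A colocated DB makes the sequential version hurt less, but reducing round trips helps either way.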

Managed vs. unmanaged is a massive difference and would be worth it. But these days I was under the impression most hosting companies also offer managed DBs.


Not everyone needs that kind of performance. Consider for example someone upgrading from an Excel spreadsheet. The fact that you can create a Neon database in just a few minutes, typing only 2 strings, is pretty incredible. And they don’t even ask for a credit card!


fun fact - this blog is also powered by neon

Here’s a quick rundown of the tech stack:

Framework: Next.js
Styling: Tailwind CSS
Database: Prisma paired with Neon
MDX support: I love writing with Markdown
Auth: Clerk (ClerkDev), an absolute game-changer
Animations & icons: Framer Motion and Heroicons
UI components: Radix UI

github - https://github.com/tyaga001/devtoolsacademy


I haven't used either beyond hobby scale, but I've followed both for a while because my day job runs a very large Postgres deployment.

You can run both Neon and Supabase in your cloud account by self-hosting. They may also offer on-prem managed deployments; I haven't looked. I think anyone at large scale interested in using them will “colocate” them.

They offer different capabilities than the incumbent cloud providers' managed Postgres services.

Neon is very interesting for:

- scale up & scale out. Get more cores for your DB than the max incumbent single instance size.

- fast SSD cache in front of S3. Incumbent DBs often use glacially slow network block storage like EBS.

- branching and schema management wizardry

Supabase is “just” a vanilla Postgres instance plus a suite of extra services and tooling. In their case the colocated version can add the services around an incumbent cloud-managed DB.


Sounds like your issue lies with your ORM, not with query latency.


> Yes, this could be rearchitected better, but it illustrates a major flaw in my mind for these companies providing only a DB.

The solution is AWS PrivateLink (or equivalent with other clouds). It allows you to connect internally from VPC to VPC. It's solved.

It's definitely offered by some managed DB vendors, so this is more a case of these startups needing to offer it soon than a fundamental issue.


Price/performance will be a wash at least for steady workloads.

But usability will be massively better. Both platforms offer various things to make developers more efficient and the dev cycle shorter.


Supabase can't become a Firebase alternative if they do not introduce a pay-per-hour pricing model. Their paid plan starts at $25/month (their free plan is shit). Even their auth is not free, unlike Firebase, which has almost everything free to start.


Here is a real-world comparison for 3 startups on Supabase, showing how our Auth pricing compares to Firebase:

https://x.com/kiwicopple/status/1656746963277340672


What is a CPU?

If it's AWS-hosted, for example, it can range from a t2 (low end) to a c7a (high end), with huge performance impacts. How will this change over time?

It's weird that pricing is based on CPU but it's never defined. And how do we compare between offerings when that much is not obvious?


Compute size is measured in CUs. 1 CU is 1 vCPU and 4 GiB of RAM. Database compute and its read replicas autoscale from 0.25 to 10 CU based on load, and down to zero when inactive.

And autoscaling works like this: https://neon.tech/docs/introduction/autoscaling
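
For a rough sense of how that translates into usage, here's some back-of-the-envelope arithmetic assuming the definition above (illustrative numbers only, not the exact billing formula):

    // Back-of-the-envelope compute usage, assuming 1 CU = 1 vCPU + 4 GiB RAM.
    const CU_RAM_GIB = 4;

    // Compute-hours consumed while running at a given compute size.
    function computeHours(cu: number, hours: number): number {
      return cu * hours;
    }

    // Example: autoscaled to 2 CU (2 vCPU, 8 GiB) under load for 3 hours,
    // then scaled to zero for the remaining 21 hours of the day.
    const used = computeHours(2, 3) + computeHours(0, 21); // 6 compute-hours
    console.log(`${used} compute-hours, peak RAM ${2 * CU_RAM_GIB} GiB`);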


> Compute size is measured in CUs. 1 CU is 1 vCPU and 4 GiB of RAM.

Sure, but that precisely doesn't answer the question.

What is 1vCPU in this 1CU? If I benchmark this CPU for example, what do the numbers look like?

I saw discussion comparing the cost of Neon vs Supabase based on CPU, but felt it was lacking without any indication of what a CPU is. One vendor's CPU could be up to 2x the other's.


Ah, it is whatever AWS calls a vCPU: a virtual core in an EC2 instance.


> Ah, it is whatever AWS calls a vCPU.

And again, that varies.

As in the Fargate / Lambda vCPU? There's x86 (old) and arm64 (Graviton 2).

As in an EC2 vCPU, which as indicated could range from a t2 all the way to an m7i / m7a. Even just comparing AWS's own Graviton from 2 to 4 (the latest), you get about a 2x performance improvement.

If we compare to RDS / Aurora, it lets you pick the CPU, so it definitely makes a difference.


Does this specific detail matter if the customer can just scale up (big number mean faster, ug!) and it meets their requirements? Also, I'm sure this has changed and will continue to change over time, so it's not exactly easy for the CEO to come on here and tie the infra engineers' hands by publishing the exact details of whatever they currently have deployed, just for some hardass to bust their balls about it later. If you really care that much ($$$), I wouldn't be surprised if they'd let you use whatever fancy-pants core you have in mind.


> Does this specific detail matter if the customer can just scale up

Yes; as it drifts further and further, the scaling can become meaningless. This is exactly what's been lost with the newer generation and the cloud.

Also, the databases mentioned can't be horizontally scaled: writes go to a single server, so there's a limit.

> so it's not exactly easy for the CEO to come on here and tie the infra engineers hands by publishing the exact details of whatever they currently have deployed at the moment

Nothing to do with infra. This is about the product itself. What are you selling? That's the question. The product could be very expensive or not based on that. It also might not meet an organization's needs, e.g. if they have to meet performance criteria, and that's not just about more CPUs.

> If you really care that much ($$$) I wouldn't be surprised if they'll let you use whatever fancy pants core you have in mind.

And likely the competition can too. Point being? Does that matter? Imagine going to a store and buying a "shirt". You might be allergic to cotton. Now do you just buy any shirt hoping it works?


Within Fargate you even get different capabilities: we were seeing mixed performance between instances and eventually stuck in a log statement to `cat /proc/cpuinfo`. We got different generations of Intel CPU between containers, from Broadwell to Haswell. You had no control over this when provisioning.
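
A minimal sketch of that log statement, assuming a Node.js container on Linux:

    import { readFileSync } from "fs";

    // Log which physical CPU model this container actually landed on.
    const cpuinfo = readFileSync("/proc/cpuinfo", "utf8");
    const model = cpuinfo
      .split("\n")
      .find((line) => line.startsWith("model name"));
    console.log("container CPU:", model?.split(":")[1]?.trim() ?? "unknown");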


I tried using Supabase to back a Vercel project. It was far too slow, so we switched back to the Postgres included with Vercel. Turns out that was Neon under the hood. The project is called webtm.io. It's open source, so you should be able to see this in the history.


Really? How much data were you using? I just started using Supabase and it seems great.


I'm also surprised. Here is a 3rd-party benchmark published last month using various ORMs:

https://www.prisma.io/blog/performance-benchmarks-comparing-...


Perhaps it's how supabase hosts it?


We're hosted on AWS (and so is Vercel) - could it have been the region your database was hosted in?


huh, not sure. If you wanna join our discord we'd be happy to work with you on trying it out again.


sure - I'll jump in next week (after we've wrapped up Launch Week). What is your discord?


check out webtm.io


Supabase is great; not sure why it was so slow compared to Neon. I assumed it was a colocation issue.


Seems weird to compare them since I see them as complementary services.


They can be, but they both offer PostgreSQL services. The article touches on Supabase’s other offerings, but the comparison is mostly on the database offerings.


Why not? Both offer database-as-a-service and use Postgres under the hood.


What is the largest node size Neon will scale to?


Neon CEO:

Storage will be unlimited long term. Short term it is a few terabytes; we are constantly increasing the ceiling as storage sharding improves: https://neon.tech/blog/how-we-scale-an-open-source-multi-ten...

Compute autoscales from 0 to 10 CPUs per read replica in 0.25-CPU increments; you set the min and max. We give our enterprise customers larger compute sizes. And you can have A LOT of read replicas that stand up instantly.


Is it feasible to use Neon when connecting from smaller clouds, such as Hetzner, DO, Linode, etc.? Thanks.


Yes of course. People do it all the time.

For small workloads egress cost doesn’t matter. For larger - well it does.

I wish it were illegal for cloud providers to charge so much for egress.


Thanks, will try. I was worried about latency between distinct providers.


It’s mostly a region question. If the data centers are physically close to each other latencies will be low as well: https://neon.tech/blog/sub-10ms-postgres-queries-for-vercel-...


Cool, thank you for the clarification. And on the write side, what can the write load scale to?





