
$49/month for 60k queries… The pricing of these serverless offerings is always completely out of this world.

My $5 VPS can handle more queries in an hour. Like, I realize there’s more included, but…

Is it truly impossible to serve this stuff somewhere closer to cost? If this is close to cost, is this truly as efficient as it gets?




While in EA (early access), there's no additional cost for Prisma Postgres; you're effectively getting a free Postgres db.

The pricing you are citing is for Accelerate, which does advanced connection pooling and query caching across 300 POPs globally.

All that being said, we'll def address the "can you pls make Prisma Postgres pricing simpler to grok?" question before we GA this thing. Thanks for the feedback!


I think from my perspective there’s some cognitive dissonance when I imagine myself getting the free plan with a cool 60k queries included, and then switching to the paid plan, and finding that I still have a cool 60k queries included. Wat?

I rationally realize there is a price/usage point where this makes sense, but emotionally it doesn’t feel good.

Start the plan at $59 and include 2M queries.


Thanks for the feedback!


Does a simple selection query count the same as a 50 line query joining many tables and doing aggregates?


Yes


What makes its connection pool "advanced"?


- autoscaling
- connection pool size limits
- query limits
- multi-region support out of the box

more here: https://www.prisma.io/docs/accelerate/connection-pooling

also recommend trying out the Accelerate speed test to get a "feel for it": https://accelerate-speed-test.prisma.io/
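
For a rough sense of what that looks like in code: per the Accelerate docs, you point the Prisma client at an Accelerate connection string and add the client extension. A minimal sketch (the `user` model is just a placeholder):

    // npm install @prisma/client @prisma/extension-accelerate
    import { PrismaClient } from "@prisma/client";
    import { withAccelerate } from "@prisma/extension-accelerate";

    // DATABASE_URL is an Accelerate connection string (prisma://...),
    // so queries go through the pooled, globally distributed proxy
    // instead of opening direct connections to the database.
    const prisma = new PrismaClient().$extends(withAccelerate());

    async function main() {
      // An ordinary query; pooling happens transparently on Accelerate's side.
      const users = await prisma.user.findMany({ take: 10 });
      console.log(users);
    }

    main();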


I'm wondering how big the Prisma overhead is here. I know that historically, Prisma has been very slow - the sidecar process, the generic protocol, and the database agnosticism come with a steep cost - and the numbers shown in the benchmark both seem rather low to me. 4x the performance of "very slow" isn't super impressive... but of course, I can't verify this right now since I don't have access to a machine where I can duplicate this and run some raw Postgres queries to compare.


The global caching gets even more interesting and beneficial when the end users are all over the globe, and you're caching queries that simply take longer to execute on the db. You'll save time both on latency and compute that way. For example, everything running on the speedtest is through a single internal Accelerate project so we have some data: the overall average is 58x faster, with 779.53 ms served from origin and 13.28 ms served from cache.
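
Roughly, opting a query into that edge cache looks like this with the Accelerate extension (the model and the ttl/swr values are just illustrative):

    // Continuing the client setup sketched earlier in the thread
    // (PrismaClient extended with withAccelerate() and an Accelerate connection string):
    async function listPublishedProducts() {
      return prisma.product.findMany({
        where: { published: true },
        cacheStrategy: {
          ttl: 60,  // serve from the edge cache for up to 60 seconds
          swr: 120, // then serve stale for up to 120s while revalidating in the background
        },
      });
    }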

Nonetheless, we absolutely have room for improvement on the ORM in terms of performance, and are working on those as well!


When I looked at your pricing, these were my thoughts:

1) 60k queries? I burn through that in an hour. All it takes is the Google bot and some shitty AI scraper to come along and crawl our site - which happens every single day.

2) $18 per million? I don't know how many queries I need per day at the moment, but given 1), I will surely burn through a million dozens of times per month...

...at which point this thing will be just as expensive as an RDS instance on AWS, potentially even more so if we hit traffic peaks (every single user causes hundreds of queries, if not thousands).

3) I don't even understand how to interpret the egress cost. No idea how to predict what the pricing will be. Maybe a calculator where we can slot in the relevant estimated values would be nice?


Thanks for walking through your thinking, feedback taken!

What's your take on the pricing calculator? We've been working on an improved version, and would love to hear your thoughts on this. In your case, what inputs would you find helpful to put in to arrive at a calculation, considering that you're unsure about projecting both queries and egress? How would you go about putting in estimated values for those?


Is there any page which explains how Accelerate works? I am interested in knowing the internal technical details.


It sounds like a clone of Cloudflare hyperdrive and others:

https://developers.cloudflare.com/hyperdrive/get-started/


Cloudflare announced Hyperdrive on September 28, 2023

Prisma announced Accelerate on January 16, 2023

But you are right, they are close in functionality, though Hyperdrive requires a CF Worker to operate.


If you mean beyond the product page and our docs, then no. Accelerate is not an OSS product from Prisma.

All that being said, if there are specific questions that you have which relate to a use case, ask away!


I feel like that is always the crux of these solutions.

For example, I used Aurora Serverless v2 in a deployment, and eventually it just made sense to use a reserved instance because the fee structure doesn't make sense.

If I actually scale my app on these infrastructures, I pay way more. I feel it's only great for products that _aren't_ successful.


I always thought serverless meant you could scale out AND lower the cost. It always seems to turn out that serverless is more expensive the more you use it. I guess at certain volume, a serverless instance is meaningless since it’s always on anyway.


> I guess at certain volume, a serverless instance is meaningless since it’s always on anyway.

Bingo. The pricing alignment makes sense:

You share the risk of idle but provisioned capacity with the provider: no fixed capacity, no fixed pricing.

The capex for the provider is fixed, though.

That's why I think more competition in the serverless Postgres space is fantastic. Sure, it's not pure price competition - providers bundle differently for the slightly different customer groups they focus on.

But underneath it, technology is being built which will make offering serverless ever more cost effective.

We might see a day where serverless (i.e. unbundled storage and compute) with dedicated compute is cheaper than standalone GCP/AWS/Azure Postgres.
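
A back-of-the-envelope sketch of that break-even point (all numbers hypothetical, just to show the shape of the trade-off):

    // Hypothetical prices, purely to illustrate when per-query pricing
    // stops beating a flat-rate, always-on instance.
    const dollarsPerMillionQueries = 8;    // usage-based rate
    const dedicatedInstancePerMonth = 200; // flat monthly rate

    // Monthly query volume at which both cost the same:
    const breakEven = (dedicatedInstancePerMonth / dollarsPerMillionQueries) * 1_000_000;
    console.log(breakEven); // 25,000,000 queries/month, i.e. ~9.6 queries/sec sustained

    // Below that volume the usage-based plan is cheaper (you aren't paying for idle capacity);
    // above it, the always-on instance wins.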


It's $8 per million queries, which really is OK. The baseline of $49 is the price of the general Prisma services. For a cloud-based production workload this is at the low end, and if you work on a product, salaries are always the number one cost.


Indeed, as others have mentioned, you get 60k queries for free! You don't even need to add a card. Beyond that, you pay for the usage you have (primarily by number of queries). The $49 Pro plan you mentioned gives you additional features, such as more projects, higher query limits, and a lower price per million queries. On the Starter plan, though, you can get going absolutely free, incl. those 60k queries, and only pay for the queries above that.

We are also working on making this simpler to understand. We want to make sure our pricing is as easy to grok and as affordable as possible. Keep an eye out for improvements as we get to GA!


That would indeed be crazy; luckily you were too fast reading the pricing:

https://www.prisma.io/pricing

60k are included.

But I totally agree with your overall statement. The premium for hosted DBs is quite high despite the competition.

Usually, if you want hardware to handle real-world production data volumes (not 1 vCPU and 512 MB, but more like 4 vCPUs and 8 GB), you are very soon around $200 to $300. A VPS of that size is around $15?

The hosted solutions are just so damn easy to get started.


> Is it truly impossible to serve this stuff somewhere closer to cost?

Often these types of SaaS are backed by hyperscaler clouds, so their own costs tend to be high.

Don’t know whether that’s the case here. Agreed though that pricing also raised my eyebrows


They need to cover the dev costs and any bloating their organization might have (no idea about Prisma but lots of these startups are bloated). Eventually, the tech will get democratized and the costs will come down.


They've pivoted a number of times already. They started with Graphcool, which was a sort of serverless GraphQL db back around 2016 iirc.

Honestly I'm surprised they lasted this long.


yeah the whole "but you need to do maintenance" aspect of using a real server is overblown

OSes are pretty stable these days, and you can containerize on your server to keep environments separate and easy to duplicate

I guess it just comes with experience, but at the same time, the devops skillset necessary for dealing with serverless stuff is also totally out of this world. At most places I've worked, marketing hasn't even launched a campaign, there's no product validation of how much traffic you'll get, and you're optimizing for all this scale that's never going to happen.


Agreed that existing serverless stacks like Lambda are a nightmare. But the real problem is that they don't solve the state management problem. (You need Step Functions to compose Lambdas AND you need a bunch of custom logic for recovery in case Lambdas crash/pause/time out.)

I do hope tomorrow's engineers won't have to learn devops to use the cloud. My team works on what we think is the better way to do serverless, check it out! https://dbos.dev/


> If this is close to cost, is this truly as efficient as it gets?

I saw them hiring Rust devs recently, which makes me feel like they do things efficiently (hopefully). That being said, serverless is the greed-driven model, where you start by thinking, "meh, we don't need that many queries/executions/whatever anyway, we'll save plenty of moolah we'd otherwise waste renting a reserved instance sitting idle most of the time", then something bad happens, you overrun the bill, and you go into "sh+t, need to always rent that higher tier, else we risk going bankrupt" mode - and since your stuff is already built, you can no longer change it without another big rewrite and the fear of breaking things.


Switching out a Postgres provider surely is on the easier side of things to migrate?

With most of these serverless providers, there's no technological lock-in I'm aware of; it's all just Postgres features paired with DevOps convenience.


What size dataset can you fit on that $5 VPS where it handles those queries in reasonable time? Serious question - all the $5 VPSes I've seen are too low-spec to get anywhere with. E.g. a DigitalOcean $6/mo VPS gets you a single measly gig of RAM and a 25 GiB SSD. Without being more explicit about "realize there's more included", is a $5 VPS really even a valid point of comparison?

I don't know why people ever buy plane tickets, walking's free.


1 million requests in a month is ~0.4 requests per second.

With the Prisma pricing, $1k gets you up to a 48 req/s load average, and that's without the geo balancing. For a little more you can get a dedicated Postgres instance with 128 GB memory and 1 TB+ of disk on DO that would definitely handle magnitudes more load.

Of course there are a bunch of trade-offs, but as the original poster said the gap is pretty wide/wild.
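
For anyone checking the arithmetic (assuming the ~$8/million rate mentioned elsewhere in the thread and a 30-day month):

    const secondsPerMonth = 30 * 24 * 60 * 60; // 2,592,000

    // 1M queries per month as a sustained rate:
    console.log(1_000_000 / secondsPerMonth); // ~0.39 req/s

    // What $1,000/month buys at ~$8 per million queries:
    const monthlyQueries = (1_000 / 8) * 1_000_000; // 125M queries
    console.log(monthlyQueries / secondsPerMonth); // ~48 req/s sustained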


Anything with indexes will be completely fine. Hell, your little instance can probably do hundreds of primary key lookups every second. How fast would you burn through your query allowance on Prisma with that?

The point is that when I buy managed postgres, the thing I expect to be paying for is, well, postgres. Not a bunch of geo load balancing that I’m never going to need.

That’s why the comparison is with the thing that actually does what I want.


On Hetzner that gets you at least 4 cores, 8 GB RAM and 80 GB local SSD. For $49 you can almost get a dedicated server with 8 cores and 64 GB RAM. More than enough to handle that load. Edit: this is for $8, but the general point still stands.



