I have used it for work-related reasons and indeed the service is quite nice. But I don't use Google Cloud Run for personal projects for two reasons:
- No way of limiting the expenses AFAIK. I don't want the possibility of having a huge bill on my name that I cannot pay. This unfortunately applies to other clouds.
- The risk of being locked out. For many, many reasons (including the above), you can get locked out of the whole ecosystem. I depend on Google for both Gmail and Android, so being locked out would be a disaster. To use Google Cloud, I'd basically need to migrate out of Google in other places, which is a huge initial cost.
Both of those are basically risks. I'd much rather overpay $20-50/month than take a small risk of having to pay $100k or being locked out of Gmail/my phone. I cannot have a $100k bill to pay; it'd destroy everything I have.
Also, I haven't needed it so far. I've had a Node.js project on the front page of HN, with 100k+ visits, and the Heroku hobby server used 30% of the CPU with peaks at 50%. Building the software decently does pay off.
I had the same thoughts: even if I like Google Cloud a lot (I use it extensively at work), I don’t feel it’s safe for me to use it at home, since I don’t want to risk having my entire google account locked due to “suspicious activity”, whatever that might mean.
In fact, I recently shut down a personal App Engine service I had been using for myself for a few years just because of this paranoia. The service was not doing anything illegal, just crawling a few websites (forums, ...) I like and sending me emails when there are interesting updates. But you never know if they might determine my outbound traffic is suspicious. I also started the long process of moving my main email from a @gmail.com to a @custom.domain that currently forwards to gmail, just in case I get locked out.
It is quite bizarre that this is the reputation Google gained for themselves.
> since I don’t want to risk having my entire google account locked due to “suspicious activity”, whatever that might mean.
Agreed. I often second-guess my usage of various Google apps and services since I don't want to trigger some process that I would have no way of ever knowing about.
Their reputation has turned dramatically. Back in the day Penn and Teller's episode of BS about the death penalty said they might not mind the death penalty if Google were in charge of it. Maybe they were somehow being presciently ironic?
Not only Google; just visit any online space and you will see a lot of arbitrariness, inconsistent rules, obscure decision making, etc. I honestly think that the best people to rule a country are the politicians: they are corrupt, narcissists and dangerous, but at least they are somewhat professional in what they do.
Sounds a lot more like abuse detection signals firing based on Apple Pay using virtual card numbers.
Apple says that their virtual card numbers protect your privacy because they're untraceable. Ok, but that also means that your use of Apple Pay is mostly indistinguishable from credit card fraud.
But, you ask, does Google really have to worry that much about fraud? Do people really phish known-good Google accounts, add a stolen card, and then buy a whole bunch of ads?
Well.... yeah. That's actually one of the primary uses for stolen credit cards.
I used to use google docs until they randomly locked one of the docs I was working on for a week due to one of their "suspicious activity" scripts. Really hammered in the message that if you don't host it then you don't own it.
That's actually kinda nice of them. Instead of waiting till we were totally and completely locked in to play big bad wolf, they've done it earlier while there's still time to get the message out.
Paid G suite can and does pull this crap. There was a comment on HN a year-ish ago, when someone's entire ~100 person company (almost?) went out of business because Google flagged the personal Gmail of the domain admin, this "spread" to their company email, Google closed it and losing the admin account made the entire domain get deleted. Not "blocked" or "pending review" - deleted! IIRC even pulling personal favors at Google couldn't save them.
Google's main problem here is they can't tell their side of the story.
If only they had some process where a customer could agree to have Google publicly explain why an account was banned, I think we'd see many more explanations along the lines of "This customer was using Google cloud to launch Ddos attacks" or "This customer sent bomb threats to the president".
How is that a problem? What's stopping them? Google can easily write blog posts or release post mortems or have the dozens of PMs that visit HN talk about it.
Considering these links are all about people with valid businesses and apps, I doubt your examples apply for violations.
Both the law and Google's privacy policy stop them from telling the world if you sent bomb threats to the president. That's still your private mail. They can't go looking at it, let alone tell the world about it.
> That's still your private mail. They can't go looking at it,
They definitely subject it to all kinds of automated scanning for spam and potentially abuse. The nearest thing to a public statement from Google on the subject seems to be: “very specific cases where you ask us to and give consent, or where we need to for security purposes, such as investigating a bug or abuse”
i.e. there may be abuse cases where they read your mail without asking.
At least DO was responsible and transparent, and disclosed the bug to the public. After reading through the post, I felt much more confident using their services, especially after this comment: https://news.ycombinator.com/item?id=20119939
Big G's policy is to lock you out, provide no further comment and no contact link.
“Transparency” due to blowup of bad publicity on HN. They’re simply not big enough to ignore this audience. They wouldn’t have given a damn if they were, just like they ignored many cases that didn’t blow up.
Sure, fine, but Google horror stories blow up HN here and there, and yet, because of search (and, I guess, the speculation that 2024 is when they're possibly nuking Cloud anyway), the companies in question still got deleted.
What's the alternative you're proposing? Are you just saying Cloud Bad?
Quite a lot of paid services have done exactly this. They use vague all-encompassing terms of service designed to give them complete control. Pretty much anything can be used as a violation of those terms, allowing them to keep your money while also blocking all access and even deleting your data. Very few customers have the legal and financial resources to unblock this if it does happen.
I ordered a phone from Google that's been lost in delivery. I have Gmail/documents/photos/music... Should I do a charge-back? Sue them in small claims court? I should never have done business with them.
At least have all your account recovery lined up before you try a chargeback. It is taken as an extremely strong signal that the account has been compromised and is being used to defraud the rightful owner.
Source: I work there.
Also: I would exhaust all my escalation options before going the cc route. With any retailer.
Having thought about this some more, I really can't recommend doing a chargeback with any company you want to keep taking your money. Afaiu this doesn't cancel the contract, so you owe them the money unless you manage to void the contract, in which case they should send the money back anyway. Then, teaching all the fraud detection systems that your exact usage pattern leads to fraud seems unpleasant too. There are just too many ways for this to backfire, even if the company doesn't take offense.
This is a response from a dystopian anti-consumer future that we seem to be living in because Google thinks customer service has no value.
> Afaiu this doesn't cancel the contract
There is no contract if Google didn't send the phone. The commenter doesn't owe them money for something they failed to send.
> Then, teaching all the fraud detection systems that your exact usage pattern leads to fraud seems unpleasant too.
Or (hear me out, this might sound insane): a fucking human could talk to the customer and flag it as "not fraud". This is how every other company does it.
The solution to getting screwed by an algorithm is not to give in to the algorithm. It's to talk to a human to override it.
The ultimate solution, I hope, is that the next iteration of the federal government is pro-consumer and enforces our UCC rights and/or breaks up Google.
> There is no contract if Google didn't send the phone. The commenter doesn't owe them money for something they failed to send.
Don't know about your jurisdiction, but in most countries I've lived in the law works otherwise. The moment you checked out, you have a contract. The seller not sending out the goods is failing their contractual obligations. But that failure does not cancel the contract; it only allows you to execute the appropriate clauses of it (some of which usually lead to refund and cancellation). On the other hand, the delivery of goods doesn't usually prevent you from cancelling the contract (you usually have an obligation to return the goods in that case). The clauses injected by laws tend to make this very consumer friendly... But the ones I remember require you to deliver a notice of cancellation, and I'm not sure if failing payment counts as such (IANAL, I don't even know your jurisdiction, yadda yadda).
Same here. It seems very unlikely they would not resend a phone lost in delivery. When it happened to me, they immediately ordered a replacement device.
The “oh noes!” lock-in arguments are comical. Everything is lock-in. And it's unlikely Google makes some radical consumer change to screw people, since that would hurt its efforts to be the #1 or #2 player. If we just focus on building on the things these cloud providers have built, we can stop being leery and focus on the product, and quit wasting cycles on things that don't matter, like the lock-in fear.
You may have misinterpreted the comment chain. It sounds like you're talking about vendor lock-in. They're talking about being locked out of their Google account due to a Google bot incorrectly categorizing their work as spam or abuse. The implication of being locked out is that they can't use anything related to their Google account. That could include their personal phone, personal email account and personal cloud services.
You can set up alerts if you exceed a budget, and you can program a response to turn off billing on a project. Google has a guide for doing exactly what you want (sketched below). It's not a particularly clean fix, but it is fairly easy (copy paste) and can be done. It also allows for fine-grained control, e.g. you could kill an expensive backend process but keep the frontend running.
> You might cap costs because you have a hard limit on how much money you can spend on Google Cloud. This is typical for students, researchers, or developers working in sandbox environments. In these cases you want to stop the spending and might be willing to shutdown all your Google Cloud services and usage when your budget limit is reached.
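For reference, here's a condensed sketch of the pattern that guide describes: a Cloud Function subscribed to the budget's Pub/Sub topic, which detaches the billing account once spend passes the budget. The PROJECT_ID value is a placeholder; see Google's "automated cost control responses" examples for the canonical version.

```python
# Hedged sketch: kill billing on a project once its budget is exceeded.
# Budget notifications arrive as base64-encoded JSON on a Pub/Sub topic.
import base64
import json

from googleapiclient import discovery

PROJECT_ID = "my-project-id"  # placeholder
PROJECT_NAME = f"projects/{PROJECT_ID}"

def stop_billing(data, context):
    notification = json.loads(base64.b64decode(data["data"]).decode("utf-8"))
    if notification["costAmount"] <= notification["budgetAmount"]:
        return  # still under budget, nothing to do

    billing = discovery.build("cloudbilling", "v1", cache_discovery=False)
    # Setting an empty billing account detaches billing, which shuts down
    # all billable resources in the project.
    billing.projects().updateBillingInfo(
        name=PROJECT_NAME, body={"billingAccountName": ""}
    ).execute()
```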
Depends - if your profits increase in line with your expenses, you might be raking in the ad revenue/views/sales/whatever because your product 'went viral' and you might prefer to keep it up.
Or if the rise in costs is something you can mitigate on your end - such as a bad deployment - you might want time to respond yourself, rather than your site going offline.
More generally, few site reliability engineers are looking to add extra ways for the site to be taken offline.
Of course, if you're large enough to be in those situations, your round-the-clock operations staff will be monitoring the billing situation as carefully as they monitor page load times and error rates and database load so an unexpected bill will be very unlikely.
> More generally, few site reliability engineers are looking to add extra ways for the site to be taken offline.
SRE 101 is rate limiting everything and protection against DDoS. With cloud and auto scaling risks of DDoS are less about uptime but more about getting a bill that will bankrupt the business.
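As a toy illustration of "rate limiting everything", the usual primitive is a token bucket; in practice you'd push this to a proxy or CDN edge rather than application code, but the mechanics are the same:

```python
import time

class TokenBucket:
    """Allow `rate` requests/second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # shed the request instead of auto-scaling (and paying) for it
```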
> few site reliability engineers are looking to add extra ways for the site to be taken offline.
Apart from the cases already mentioned in sibling comments, at some scale you start adding in outage switches in many cases. Basically quick ways to take parts of your service offline if something starts misbehaving.
Google's billing systems aren't good enough to guarantee a limit on your spend. Lots of billing is only reconciled daily, for example, meaning you could spend millions of dollars before the billing run at the end of the day.
Google prefers you be on the hook rather than them.
Google themselves recommend using the limits like max instances to mitigate the risk of out of control costs.
I also don't understand why this is being framed as a uniquely Google problem. Other cloud providers with serverless services have similar hazards and similar methods to manage the risk.
I don't think the alternatives are nearly that expensive. Vultr, DigitalOcean and others have virtual private servers for only $5 per month. They're small instances but totally fine for side-projects. I run a cheap $5/month VPS and it was able to withstand my project making the HN front page without issues. I don't use Google cloud hosting for the same reasons as you, I don't want to have too many eggs in one basket.
+1 to just running something on a cheap Digital Ocean style box.
In our modern cloud age, I think we've forgotten how much a single box can actually handle (and how few of us actually _need_ to "scale" from day one). Hacker News front page isn't really that much traffic in the grand scheme of things. My $5 DO instance handled it without a sweat. Hell, even "real" projects can still work under this approach. A $20/mo DO box, sqlite, and a few shell scripts can get you shockingly far ^_^
A few years ago I set up a $5 DO droplet with the Dokku image they provide. Years later it's still running all of my side projects in production, even though I moved from the $5/mo plan to the $20/mo plan as my business grew and my needs increased.
I have 15 containers connected to 10 Postgres instances running right now, handling tens of thousands of views per month for $20/mo, AND I have Heroku-like convenience to deploy with a "git push dokku master", without having to pay a minimum of $7/mo for each app I deploy. I can deploy a new app at no extra cost.
Sure, I have to patch my own OS (minimal effort but still effort) and backups/DR/HA is on me to provide, so it might not be for everyone. But I have a mantra that all my side projects combined need to be able to pay for all my side projects combined to keep me from spending too much, so keeping costs low is important. And that $20/mo would be over $100/mo on Heroku. For me it was a no-brainer. One low-revenue side project pays the bills for all my just-for-fun projects.
Yes the database is on the same instance. The biggest downside to that is my droplet gets low on space as the database grows but so far it hasn’t been too much of an issue. The growing need for SSD space has pretty much matched the growing need for RAM as I increase the droplet size.
For backups: I have a bash script set up every night to run a pg_backup and send it to an S3 bucket where I store the last 7 days of backups. All static files (images mostly) are hosted on S3 with no real backup but that works fine for my particular use case.
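For anyone wanting to replicate that nightly job, here is a rough outline (the commenter uses a bash script; this Python sketch is an equivalent, with the bucket and database names as placeholders):

```python
# Nightly: dump the database, compress it, ship it to S3.
import datetime
import subprocess

import boto3

BUCKET = "my-backups"  # placeholder
stamp = datetime.date.today().isoformat()
dump_path = f"/tmp/db-{stamp}.sql.gz"

# pg_dump | gzip in one pipeline; assumes local Postgres auth is configured.
subprocess.run(f"pg_dump mydb | gzip > {dump_path}", shell=True, check=True)
boto3.client("s3").upload_file(dump_path, BUCKET, f"nightly/{stamp}.sql.gz")
# Keeping only the last 7 days can be handled by an S3 lifecycle rule.
```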
Sounds like a great setup. Do you have any documentation on it that others could follow to build something similar or links to tutorials that helped you?
I just looked up DO's guides for Dokku and it seems like they're redirecting to the deploy page... that's a shame, they were quite good. In case it's just a bug on my side, here's the link: https://www.digitalocean.com/community/tags/dokku?type=tutor...
For deploying, it works basically the same as Heroku except there's no GUI for it. Following Dokku's deploy guide top to bottom works perfectly. Look into Dokku plugins for things you might want/need (database support is a plugin, for example) and it uses a system called "herokuish" to allow Heroku buildpacks to work if you have weird stacks like React on Rails. Or you can bring your own Dockerfiles and avoid buildpacks altogether. Ultimately Dokku just manages Docker containers like a lightweight, single host Kubernetes.
Eventually I'm going to have to migrate to Kubernetes... Dokku's lack of built-in HA/DR/load balancing is its main drawback. But it's served me well for years with very minimal maintenance. I hardly ever even think about my infrastructure stack because it just gets out of the way. Which is incredible because it's so small and lightweight, built mostly with Bash scripts.
Granted, but those are not 1-to-1 with the Google Cloud Run setup mentioned in the article. I normally use Heroku, and the typical "production" server for me consists of:
- Hobby server, at $7/month
- Database, there are many but add another $5-$15/month
- Redis, either $0/month or $15/month, depending on the needs
Cloud hosting is great for businesses, that's why I believe that every web developer should experiment with those services. It's not great for personal use for the reasons you've stated - the risk of your service going down due to it being viral is much easier to bear than the risk of having to pay outrageous amounts of money for those services in that event.
If your hobby project goes down for a while nobody will remember that and having to pay $100k will make you remember that for an eternity.
Years ago I ran a side project that went viral. It grew to 60k visitors a day and my monthly cloud expenses were around $1,500. I cannot fathom a scenario where you get a surprise $100k bill from a viral hit. Anytime my project went down due to scaling, it was painful. You don’t get many chances at going viral.
60k is on the low end for visits just from HN, where I've normally seen 100k+. I saw around 60k only once out of the 4-5 of my projects that hit the front page.
Also, I know I make mistakes, both in coding and in setting things up, which can easily thrash things and turn into a 10x-100x multiplier on the cost. The risk is small, but the consequences are horrible, so I prefer to avoid this risk.
Edit: also note that even $10k would be horrifying to spend on most personal projects, and $1,500 is more than what most programmers in most of the world are saving monthly.
I’ve seen surprise bills not far off that, not from a viral hit, but from bugs in the firmware for connected devices which suddenly switched them from taking 1 action/10 minutes to 1 action/3 seconds. Needless to say firmware QA has become much more focused since that incident.
What was that cost for? 60k visitors is less than 1 req/sec, something a terribly small server should be able to handle with relative ease. Were there a lot of static assets not served by a CDN or cache?
I was getting $3-5k bills from Azure and AWS several times without going viral, just because I enabled some wrong features. Luckily they refunded them. I don't want that crap anymore. We also tried to run on AWS EC2 for a while and it was costing us 10 times more than the dedicated server that we got later on. Ridiculous. Now I have a backup server on Azure just because they give me $50 in credits; it's a basic VM with a 1TB slow disk attached, and they manage to charge me $70 for this, when I can buy a box at https://www.kimsufi.com/en/servers.xml for 5-7 EUR. I think cloud is for idiots.
Understanding the cost of these services is not easy at all, especially for extreme cases/situations. And a calculator/estimator won't fix the problem. That is why I love the fixed $X/month where there's no room for surprises.
I use Heroku for these situations where everything is managed automatically. The only con here would be paying more, $20-50/m (Node.js+DB+Redis?) instead of $5/m for DO, but I'm happy to pay for that and spend no time on manual management.
The trouble with this is what do you do when your "max spend" is reached? Shut everything down? Shut parts of it down? Most "real world" systems aren't built to have the stool kicked from underneath them like that, so there will be data/business loss and pissed off customers (and in the case of Cloud also pissed off customers' customers).
The conversation was about side-projects where it's better to have it shut down than drain the owner's checking account. The alternative being an overloaded web server that becomes unavailable.
I think some folks may be overestimating their ability to put a dent in Google's infrastructure.
- 1 CPU
- 2GB memory
- 80 concurrent requests per container instance
- 1000ms execution time per request
- 5KB outbound network bandwidth per request
- 100 million requests per month
Total: $120.19 per month
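You can roughly reproduce that figure from Cloud Run's published on-demand rates. The unit prices below are assumptions on my part (what the rates were around the time of writing; the official calculator also nets out free-tier credits, which is why its number comes in a bit lower):

```python
# Back-of-the-envelope Cloud Run estimate for the scenario above.
REQUESTS = 100_000_000        # per month
CONCURRENCY = 80              # requests per container instance
EXEC_TIME_S = 1.0             # seconds per request
VCPU_RATE = 0.000024          # $ per vCPU-second (assumed)
MEM_RATE = 0.0000025          # $ per GiB-second (assumed)
REQ_RATE = 0.40 / 1_000_000   # $ per request (assumed)
EGRESS_RATE = 0.12            # $ per GiB (assumed)

# Concurrency means 80 requests share each instance-second.
instance_seconds = REQUESTS * EXEC_TIME_S / CONCURRENCY
cost = (
    instance_seconds * (1 * VCPU_RATE + 2 * MEM_RATE)   # 1 vCPU, 2 GiB
    + REQUESTS * REQ_RATE                               # per-request fee
    + REQUESTS * 5 / (1024 * 1024) * EGRESS_RATE        # 5 KB egress/request
)
print(f"~${cost:,.2f}/month")  # ~$133 before free-tier credits
```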
What if we bump it up to 100kb per request? In my experience only initial requests end up being enormous, especially in single page apps. But to be fair some folks may not have time to optimize. That still only brings the monthly bill to $1,071.48
Then again that second estimate probably isn't relevant since I typically host my static data on a CDN.
I can't say it's "only" $1071/month. I definitely wouldn't go broke because of it though. Especially if it's only a side project I probably would look for ways to reduce cost for subsequent months.
I don't know how optimizing comes into the equation. The app you build and deploy to Cloud Run leverages their infrastructure. A fiber optic cross-connect (with just one telecommunications company) would probably cost close to $1,000/month alone, but Google is probably peered with every telecommunications company in the entire world, and they don't have just one fiber optic line connecting to each of them. So it's not really a one-time optimization when you put your work on Google. It's like paying Google $1,071 to rent the infrastructure of their entire network to receive and distribute data in my name.
Silly me: last week I made an infinite loop on Firestore. An update to a document triggered a Cloud Function, which updated the document, and Firestore is fast...
In just minutes I had passed the free quota, and I was lucky because I happened to check the console.
If I had left that version running for a few hours (and I was sure it was just an innocent commit), I would have been in for a surprise.
A lot of web stacks optimize for concurrency. Based on 2GB memory, which ends up being ~25MB per request (which for me is extremely high), I don't see it as being unrealistic. Especially for the use case I'm considering. Typically the only reason these boxes exist is to allow a web browser to gain access to data in a database, so most of the 1000ms per request wouldn't be spent in CPU, it will be spent waiting for the database to return a response.
I agree about being too scared to do business with Google. FWIW, I have a project that monitors an AWS S3 hosted web site every minute and takes it down if the charges exceed a quota. It ends up costing me 44 cents a month to run it as a lambda function but I think of it as insurance. Pull requests are welcome from whoever understands Terraform better than me because I couldn't figure out how to automate everything about the deployment.
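The shape of such a watchdog, for anyone curious, is roughly this (a hedged sketch, not the commenter's actual code; the bucket name and threshold are placeholders):

```python
# Lambda: check the account's estimated charges; if over the limit,
# block public reads on the site bucket so S3 starts returning 403s.
import datetime

import boto3

BUCKET = "my-static-site"  # placeholder
LIMIT_USD = 10.0           # monthly spend threshold

def handler(event, context):
    # Billing metrics are only published in us-east-1.
    cw = boto3.client("cloudwatch", region_name="us-east-1")
    now = datetime.datetime.utcnow()
    stats = cw.get_metric_statistics(
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        StartTime=now - datetime.timedelta(hours=12),
        EndTime=now,
        Period=21600,
        Statistics=["Maximum"],
    )
    points = stats["Datapoints"]
    if points and max(p["Maximum"] for p in points) > LIMIT_USD:
        boto3.client("s3").put_public_access_block(
            Bucket=BUCKET,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )
```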
For me, this is the main reason I try not to use any Google products, except for Gmail and Android. For mail, I started to migrate away from Google to reduce risk of being locked out.
Why not just make another Google Account just for this project?
It's what I've been doing for years: basically, for every customer I work with, I create a new account and even share the credentials with the customer (if they want them).
I’ve heard about Google correlating these accounts (through billing methods, contact methods, access patterns) and banning them together when one infringes on something. I’m not sure it’s the protection you think it is.
Yes they do. If you want to bypass these rules, just make a ton of accounts, wait a couple of months, and then use those accounts with whatever credit card you have (keep in mind that you should get another card from your bank, because Google can tell when your card is a generated one like privacy.com).
I don't recall ever giving my real name/address for any billing address/name, because most if not all of the time they don't really care (or at least I've never seen a difference).
Has anyone actually been locked out of Gmail because of a google cloud bill? I don't think they are connected in the sense that your Gmail/Youtube/Android etc account will stop working if you don't pay.
Same with Amazon accounts and AWS bills for that matter.
I understand the concern though... and using separate accounts is probably best practice.
This pattern seems common among business people: things working in the common case vs. things working in corner cases. It's how you end up with consumer Windows running critical machines. I'm always shocked and moderately disturbed when I see it, but I guess we all need to accept the reality that most people are very pragmatic. It makes sense: most people's intuition comes from "the real material world", where you have to be pragmatic. I think many of them fail to realize that on a computer you don't have to give up certainty the way you do in "the real world."
Loss aversion[1] describes the phenomenon pretty well:
> Humans may be hardwired to be loss averse due to asymmetric evolutionary pressure on losses and gains: for an organism operating close to the edge of survival, the loss of a day's food could cause death, whereas the gain of an extra day's food would not cause an extra day of life (unless the food could be easily and effectively stored).
For lots of companies an accidental over-use of tens or hundreds of thousands of dollars is an annoyance, but for a single person that could bankrupt them. I generally avoid programmatically interacting with cloud providers on my own time for exactly this reason. One mistake in a loop can get expensive fast.
That's not quite what I'm describing. It's more like how people will ignore git error messages and randomly fiddle with things, thinking "that's just how it is and I can't understand it" rather than figuring out what's actually broken.
I don't know how I would get a $100k bill from a spike. That ignorance is enough for me to avoid using the service - unless I know how the billing works, what my maximum monthly bill is going to be (with a hard limit that cannot be crossed), and exactly where all the gotchas are then I won't use the platform.
Auto-scaling magic is lovely in theory, but in practise it is hard.
Anecdata: a video streaming startup here in Newcastle that enabled DJs to stream live sets was used to illegally stream some football games. The subsequent bandwidth bill killed the startup. Yes, they got some things wrong with their tech, and security, but that's the danger that puts people off using "clever" services.
Wow is the ban thing a real risk that anyone can substantiate? That's horrifying and I had the same thought that everyone else did about using a burner email, which is apparently impossible.
Personally I don't think I can go straight up Heroku or DO because I like things like firestore/dynamo, S3, etc etc. But this is pushing me to move everything I do over to AWS. The only thing is I am very comfortable in GCP, so that would kind of suck. bleh.
If I were to run any public-facing project on Google Cloud, I would definitely use a separate account, created just for that. You never know what might happen to that account. I thought everybody did this.
I wonder how hard it would be to run a script that checks your balance e.g. every 15 minutes and shuts down public access to your services when a certain threshold is triggered. I wonder if a ready-made service for that exists in cloud providers' offerings.
Google somehow is able to link your newly created account to your personal/regular account. So if you do some shady stuff with the new account, your other account is at risk of being locked out, too.
> - No way of limiting the expenses AFAIK. I don't want the possibility of having a huge bill on my name that I cannot pay. This unfortunately applies to other clouds.
It's surprising that no major cloud provides a prepaid option, which would be handy for such cases.
There are a bunch of costs that could increase this number: having to deal with networks, having an emergency and having the max amount for a procedure exceeded, and, in some cases, your deductible.
It is a best-practice to have a GSuite account instead of a consumer-grade Gmail account to manage an associated GCP account.
It is a bit onerous for a hobbyist, admittedly. But if it's anything more ambitious than that, do you _really_ want Google scraping the contents of your email while you build the Next Great Thing? Try not to use a consumer account.
I won't use this for the simple reason that I bought into the Google App Engine stack in the past and it really bit me, for several reasons:
They force-upgraded the Java version. The problem was that their own libraries didn't work with the new version and we had to rewrite a ton of code.
It ended up being insanely expensive at scale.
We were totally locked-in to their system and the way it did things. This would be fine but they would also deprecate certain things we relied upon fairly regularly so there was regular churn to keep the system running.
Support was extremely weak for some parts of the system. Docs for Java were outdated compared with the Python docs.
Support (that we paid for) literally said to us “oh... you’re still using appengine?”
Finally, they can jack up the pricing at any time and there really isn’t anything you can do - you can’t switch to an alternative appengine provider.
Certain pages in the management console were completely broken due to a JS error (on any browser). In order to use them I had to manually patch the JavaScript. Six months after reporting it several times, it was still broken.
Oh, and when we got featured on a bunch of news sites, our "scalable" site hit the billing threshold and stopped working. No problem, just update the threshold, right? Except it takes twenty-four hours (!) for the billing stats to actually update. So we were down on the one day that "unlimited scaling" actually mattered to us.
I’m never again choosing a single-vendor lock-in solution. Especially since it’s not limited to appengine - Google once raised the fees for the maps API from thousands a year to eight figures (seriously) a year with barely any notice.
You're outlining all the reasons why Cloud Run is the successor to App Engine.
App Engine was the very first PaaS, came out before Docker, and did things very uniquely in order to try to only allow scalable apps. App Engine standard has to explicitly create special environments for each of their runtimes, and that's slow and expensive. Services like Datastore and Memcache were tightly coupled.
Cloud Run fixes all that. It's just a Docker container that listens on the PORT env variable. Use whatever runtime you want. Run the same container locally, or on another cloud provider. The other services like Firestore or Memorystore (Redis) are truly optional and external.
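To make that concrete: the entire contract Cloud Run asks of you is a container serving HTTP on the port named by the PORT environment variable. A minimal sketch, using nothing but the standard library:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from a container\n")

# Cloud Run injects PORT; default to 8080 for local runs.
port = int(os.environ.get("PORT", "8080"))
HTTPServer(("", port), Handler).serve_forever()
```

The same image runs unchanged with `docker run -e PORT=8080` locally, on another cloud, or on Knative, which is where the no-lock-in argument comes from.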
Cloud Run is what lets you avoid single-vendor lock-in, but still gives you scale-from-zero.
My understanding is that the 'flexible' version of App Engine runs on the same infrastructure as Cloud Run (and GKE for that matter). IMO running a regular node express app on Node Flexible App Engine isn't that different from running on Cloud Run, and I'm not really using any specific App Engine services (e.g. I don't use datastore, just a regular postgres DB on Cloud SQL). Lets me get up and running quickly with the knowledge I can easily containerize it if needed.
App Engine is indeed problematic. I have an important app on it and various forced upgrades gave me a big headache. It's been stable for several years now, so I'm okay with it, but as much as I like the idea, and indeed the execution, of App Engine I'm not going to do anything new on it because of the lock-in factors.
Cloud Run, however, uses standard containers, so as long as you don't use Google proprietary stuff on the backend it's relatively easy to move. As the article mentions, it's useful for low-traffic projects, and if they pick up you can move them to full-time instances.
These products remind me of that one colleague who doesn't understand why people don't like them. Uh, you've systematically fucked them over for years without even realizing it, and when you get caught you never even apologize, thats why they don't like you.
The thing I really want out of these services is the ability to set a payment cap. It’s probably never going to be an issue, but I have anxiety, and I can’t sleep easily knowing that if I fuck up, if someone sinister abuses my application or whatever I may be stuck with a giant bill.
Interestingly, AWS will not cut you off for non-payment (we had an issue with finance and were 250k in the red by the time we got the first 'Is there any issues over there?' email)
This sounds like a short term 'thinking'. If there are a few stories on the internet (and there are some in this very thread) about developers getting burned by this 'feature', it would turn off a lot of potential users in the future. It would make sense to be transparent and give tools to strictly control usage/billing - both in terms of trust and money.
It is. I'm using the free tier there to build and host my projects. If/when I exceed the free tier, I'm not making a billing account unless I can set a max bill. I'll sooner rewrite everything to run on bare metal with a fixed monthly cost.
I pay my $5 for linode (expanded swap to 2GB, it's SSD backed) and just run Caprover... I'd rather not deal with AWS's billing (even their billing dashboards are delayed by 24 hours)
I've been using gc/firebase. It's very easy to use, with acceptable documentation. I've been able to get dangerous quickly without even trying to. If I weren't on the free tier I would've racked up unaffordable bills. Instead I exhausted my API limits and had to wait. Perfectly acceptable behavior for development. I'll also add some logic to the finished project to handle resource exhaustion. Should I start exhausting resources,and it's not a bug, I'm not going to continue with infinitely scalable/infinitely billable gc. I guess if target customers are companies with 9-12 figure budgets getting a 9 figure bill wouldn't be the end of the World. If I got a high 4 figure bill I'd have to declare bankruptcy.
If you have a payment agreement, GCP won't cut you off, provided your account has been in good standing in the past.
If your only payment method is a credit card, IMHO allowing a past-due account to continue generating cost is a very risky move and an easy target for abuse (imagine stolen credit cards, etc.).
Disclaimer: I work for Google Cloud as SRE and have first hand experience dealing with past due account (it was highly manual and not automated).
One issue in GAE is that it can take 24 hours to update the billing cap. So if you happen to get a ton of fantastic press coverage one day, your site will go down and there is literally nothing you can do to fix it.
These stories are everywhere. A couple months ago, it hit an associate of mine pretty hard, he moved a small python monitoring and statistics application off a laptop in the office to AWS. A couple weeks later he came back and discovered that it had burned up a few thousand $'s in storage and transfer fees for what normally was a couple hundred MBs of data a day being passed around and a some database updates.
Since it wasn't really worth debugging what went "wrong", it got chalked up as a learning experience and moved back to the laptop, where it's happily running. If it dies, it's no big deal: just a git clone; ./runme; and it rebuilds the whole database and creates another web-facing interface.
The IaaS guys are masters of marketing. They have convinced everyone that their offerings are less expensive and more reliable, which repeatedly is proven false in all but the most dynamic of situations. In this case it's saving $7.99 a month over an unlimited-site shared hosting GoDaddy/whatever plan. Just in case it might need to scale a lot, in which case you're going to be paying hundreds if not thousands in surge pricing.
Not sure what happened to your associate but this sounds way, way out there. I run a fairly resource intensive SaaS on AWS (lots of background jobs generating data) and we barely go over $600 a month.
People should not be scared off by these anecdotes, however true they may be. It's perfectly possible to run a very cost-effective business on AWS.
Starting to think the reason we aren't working for Google/AWS is that we make mistakes, whereas engineers at those companies just don't make mistakes, therefore they assume that billing caps are not necessary. Such is our lot in life.
I think the bigger difference (other than a potential skill difference) is rather that they're more willing to take these risks. The ones that are good at it and succeed will rise to the top!
Bug-free is not realistic, but if you are a programmer, and definitely if you are a tech founder, you should be willing to bet your financial future on writing reasonably low-bug code.
If you are a programmer, I find the comment puzzling; maybe I am reading you wrong, but you seem to be saying that you are writing code for some company while being happy to commit it and not care if the company loses money because your code is bad? As in, you would not bet on your work yourself, but you do not mind your employer paying you for it and thereby betting some of its financial future on your work? Sorry if I misunderstood your comment.
When you work for somebody else (or with somebody else) then you try to do as good of a job as possible, but the ultimate responsibility still lies elsewhere - might be your boss or could even be the group. There will be other people who interact with your code and might spot errors. There will be people who are trained, in some capacity, to figure out ways to mitigate against accidentally generating very large bills. It is exceedingly unlikely that these points hold true for a solo developer working on a main project, let alone a side project.
Even if you are hyper competent and can probably get all of this correctly, you can't rest easy. You simply don't know whether you did everything correctly or not. Just one dumb mistake can saddle you with an enormous bill.
This is just like gun safety: don't point your gun at anything you're not intending to shoot. Mistakes happen and the consequences of it can be catastrophic.
You are right, but at some level you are thinking 'you can do it' right? Otherwise you would be pretty miserable I would imagine. But I agree with the rest you wrote. You meant it slightly more nuanced than I read it!
You are absolutely right; no idea why I typed that (thinking hard now about what service I had in mind). I was thinking of something else entirely. No one try this; bad advice; they will indeed come after you. Not
It seems absolutely within amazon's technical ability to allow you to prepay for usage, and then evaluate your use on a per-hour basis.
I have a side project that uses AWS at the moment, and while stuff like serverless RDS instances is really cool, it scares me that somehow Amazon is going to have a bug which empties my checking account. I've read as much of the documentation as I can find and have done everything AWS's documentation says to prevent this from happening, but it still worries me.
In fact, here is a banking feature I'd love to see: per-merchant daily spending limits. I would love to be able to tell my bank that Amazon is allowed access to up to $20/day or so until they get rate limited and I have to intervene.
Uh oh I better stop before I start advocating for the blockchain, haha
Are there any cloud products that have hard $$$ caps? Even non-big 3. Any at all?
Seems like this is a huge dealbreaker. You cannot be expected to perfectly audit your side project for security. Suppose someone finds a remote code execution and starts mining Monero on it. Or someone just points a botnet at it. You could be on the hook for an unlimited amount of money.
Might as well just install Kubernetes on a monthly-fee VPS and pretend you're using GCP.
It's cheap insurance to create an LLC to protect yourself from extreme situations for this kind of thing. Worst case the LLC goes defunct, but you won't be personally liable.
This VERY heavily depends on your state and I'm sure a lot of other factors. Years ago when I set up my single-member LLC my lawyer was quick to point out a thousand ways the LLC would be useless in court. Mingling bank accounts was only one of them.
The IRS, for what it's worth, treats a single-member LLC as a sole proprietorship and all losses/gains are considered personal income.
As always, check your state laws and consult a real, in-person lawyer.
wut? are you suggesting that if there's a cost overrun you just shutdown the LLC? that sounds like a loop that couldn't possibly be true? when you sign TOS you agree to be responsible for service fees. if being an LLC indemnifies against these sorts of charges it would also indemnify you against other legitimate fees? take out a business credit card and cash advance and then close the LLC? free money!
It basically does work like that, however the law is almost never that cut and dry. The situation you're describing would have the person benefiting from the money as being personally liable because that person presumably would be taking the cash advance for self-interest instead of taking the loan in the interest of the business.
Yes. This is exactly what the “limited liability” in LLC means. You are not personally liable, only the business is.
There are nuances to this, obviously fraud is fraud, regardless of using a shell company to perpetrate it, and a new business is unlikely to get a credit limit large enough to be a worthwhile avenue for fraud.
more or less, yes. bankruptcy courts vary by location. the whole point of a 'limited liability corporation' is you not being personally liable.
BUT if you have 'malicious intent' -- meaning you build an llc on purpose to overrun costs and get free stuff, you CAN and WILL be sued and face penalties. cause that's just cut-and-dry fraud.
Companies are doing this all the time and not prosecuted (or prosecuted and not found guilty); they just do not make it obvious. So the ‘will’ is not that cut and dry really; even in countries where bankruptcies are not normal and frowned upon (like NL), there are many companies only created to spend money and killed off when it runs out. On paper they look like real businesses. Also it is not unusual to put employees in a separate llc and kill that when the money runs out; again that happens a lot, even with the intent of doing that. As long as it looks good on the outside, it works. I cannot stand it personally but there are not many solutions to resolve it; you cannot read the minds of the founders to get their real intentions.
Which is exactly why banks require 2+ years of business history, collateral, or a personal guarantee. They will also review your financials when you apply for additional credit facilities.
Banks often ask for personal guarantees from ltd owners for precisely this reason. I've just skimmed the T&Cs and I can't see any indemnity clauses, but that document's massive and I'm just some guy, so who knows.
(Incidentally, it's fairly common to structure a pair of ltds with the assets in one and the liabilities in another. If the whole thing crashes and burns, at least the IP doesn't go with it).
As someone who used to work as an attorney in a past career: there's a doctrine called "piercing the corporate veil" to handle cases like this. Basically, if the only reason an LLC exists is to protect the owner from the consequences of these sorts of shenanigans, the courts can pretend the LLC doesn't exist for purposes of financial responsibility.
Note that this is a vague, general answer and most certainly does NOT constitute legal advice.
A “cap” feature isn’t as easy to implement as you’d think. What happens when you hit the cap? Does it start shutting down instances? If yes, which ones and in what order? And don’t say “I don’t care” because you do care—you are allowing your provider to basically cause an outage in your service.
If you do allow some kind of ordered shutdown of service, what is the UX going to look like? What is the API going to look like?
Having a cap is not an easy feature to design or implement. And on a backlog, it is going to be a “multi quarter, low impact” item—meaning it will never get built.
Why not allow people to set a cap that disables everything set to that billing method once it's been reached? That seems to be the use case that people are saying is missing from cloud offerings.
I imagine most of those companies can afford to have some overages built into their caps. They should also be able to afford the developer time to create fallback behavior, and graceful degradation of services.
Maybe just a simple array with the list of services to be shut down in the order provided (see the sketch below)? This should be enough for hobby and smaller projects; bigger projects will have plans/people in place anyway.
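As a sketch of how small that surface area could be (the service names and the disable() hook are purely illustrative):

```python
# User-supplied priority list: cheapest-to-lose services first.
SHUTDOWN_ORDER = ["batch-workers", "api-backend", "frontend"]

def enforce_cap(current_spend: float, cap: float, disable) -> None:
    """Walk the list in order once spend crosses the cap."""
    if current_spend < cap:
        return
    for service in SHUTDOWN_ORDER:
        disable(service)  # placeholder for whatever API stops the service
```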
It is not that they can't do it - we are talking about a company with insane technical ability and resources. It is just that they don't want to do it.
>Warning: This example removes billing from your project, shutting down all resources.
This is the digital equivalent of literally pulling the plug. I wonder how long it takes for GCP to register that you've pulled billing, though? In some cases I'm sure even milliseconds could make a large difference.
This is ridiculous. I'm guessing a huge revenue source is going after businesses for long-tail billing spikes.
Not only do I have to worry about bad code causing a huge bill, I also have to worry about the quality of code (or config) that emergency stops the billing.
I wonder if you can sign up with a pre-paid credit card and use a fake name.
It's pretty complicated to do that on every service: imagine, for each one, they would have to call an API to get the $$ remaining and compute, before scaling/creation, whether you have enough, etc.
If you wanted to be that precise, sure. But something like an hourly ‘calculate what I owe’ and compare it to a cap is not a big stretch of the imagination, and would suffice for many use cases.
The only complaint I have with Cloud Run now (after many usability updates since the initial release) is that there is no IP rate-limiting to prevent abuse, which has been the primary cause of unexpected costs. (due to how Cloud Run works, IP rate-limiting has to be on Google's end; implementing it on your end via a proxy eliminates the ease-of-use benefits)
I'm currently serving an API that uses a 500MB ResNet v2 model.
The bootup takes too long, so now I have a single instance that can't handle any peaks and costs too much.
Doesn't your model take too long to spin up before being able to serve a request?
You can work around "cold starts" by periodically making requests to your Cloud Run service which can help prevent the container instances from scaling to zero.
Use Google Cloud Scheduler to make requests every few minutes.
Does my application get multiple requests concurrently?
Contrary to most serverless products, Cloud Run is able to send multiple requests to be handled simultaneously to your container instances.
Each container instance on Cloud Run is (currently) allowed to handle up to 80 concurrent requests. This is also the default value.
What if my application can’t handle concurrent requests?
If your application cannot handle this number, you can configure this number while deploying your service in gcloud or Cloud Console.
Most of the popular programming languages can process multiple requests at the same time thanks to multi-threading. But some languages may need additional components to handle concurrent requests (e.g. PHP with Apache, or Python with gunicorn).
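For the Python case, a gunicorn config file (which is itself Python) is enough to line up in-process concurrency with the per-instance limit. A sketch, with worker/thread counts as illustrative assumptions rather than tuned values:

```python
# gunicorn.conf.py
import os

workers = 1      # one worker per vCPU in this example's shape
threads = 80     # match the Cloud Run concurrency setting
bind = f"0.0.0.0:{os.environ.get('PORT', '8080')}"  # Cloud Run sets PORT
```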
Yes unfortunately, but that's the caveat of services-on-demand. I'm looking more into more efficient/cheap model deployment workflows. (it might be just running the equivalent of Cloud Run on Knative/GKE, backed by GPUs)
Does GCP have an equivalent of Aurora Serverless? If so, would choosing that over Cloud SQL have been cheaper?
If you're familiar with AWS: would using AWS Batch exclusively with Spot pricing [0] (or Fargate with Spot pricing) and Aurora Serverless [1] have been cheaper than Cloud Run + Cloud SQL?
[0] Say, the service runs for 5 mins every hour for 30 days. The respectable a1.large instance would cost $0.50 per month, the cheapest t3.nano would cost around $0.19 per month.
[1] Say, the service stores rolling 10 GiB/Month ($1.00) and does about 1000 calls per 5 minutes every hour ($0.20) using 2 ACUs ($3.60). This would cost $4.80 per month.
I haven't created a service for auto-generated tweets yet (just human-curated ones), but for a similar service which outputs tweet-length text (w/ a 2GiB RAM size), it takes about 30s on a cold boot (which makes sense, as it has to load the model into RAM), and ~12s to generate text after a cold boot.
Can't you see this article is a paid advertisement for Google Cloud? Just like those 1-hour-long videos on YouTube where they show how pilots fly an airplane for a specific company and how well it is all organized, or a 1-hour-long video of a German car factory.
Just reading this line makes you suspicious:
"I have built hundreds of side projects over the years "
really? Hundreds?
And then below:
"I am yet to have a side project go ‘viral’"
Out of hundreds of projects over the years, none of them went viral?
And if you look at his "blog" you will see it has 3 entries in total: https://alexolivier.me/
Bummer to see comments like this so high up the page after so many hours.
HN guidelines:
> Please don't post insinuations about .. shilling.. It degrades discussion and is usually mistaken. If you're worried about abuse, email us and we'll look at the data.
Also: every blog starts somewhere. Plenty of stuff in the author's GH and linkedin, both linked from TFA.
This article fails to mention the issue of needing a database. It doesn't matter how seamlessly your application can scale if your data backend won't scale with it.
They mention Cloud SQL, which is of course instance based and would run into scaling issues if your app got suddenly hammered. Not to mention, the cost isn't $0 if your app gets 0 traffic, you are going to have to pay to keep that running around the clock.
I realize some applications are very heavy on the app side and light on needing to hit the DB, but in my experience, that isn't very common.
AWS has Aurora Serverless (https://aws.amazon.com/rds/aurora/serverless/) for that purpose with MySQL and Postgres compatible engines. But I haven't heard anyone using it for side projects yet.
Note that you can either run Aurora Serverless constantly (at a cost of about $43 a month) or have ~30 second startup times if the instance has timed out (~15 minutes).
They also have RDS Proxy (https://aws.amazon.com/rds/proxy/) that lets you pool connections from tons of lambda instances in order not to overwhelm your DB when scaling up.
A colleague keeps reminding me that in the end, if you don't need ACID, you can just use S3 as a key-value database and never pay more than pennies a month (and you get infinite scaling). Just depends if you need a db just for a few minor use cases or the app fundamentally depends on it
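If you want to try the idea, the whole "database" can be a dozen lines of boto3 (a sketch; the bucket name is a placeholder, and you get per-object atomicity only, no transactions):

```python
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "my-kv-bucket"  # placeholder

def put(key: str, value) -> None:
    # Each key is an S3 object; writes are atomic per object.
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(value).encode())

def get(key: str):
    obj = s3.get_object(Bucket=BUCKET, Key=key)
    return json.loads(obj["Body"].read())
```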
Agree. Except I would keep in mind that AWS data egress is very expensive, so if there is any heavy media involved, you would also want to put a CDN in front of s3 with proper caching or you could find yourself with a hell of a bill from AWS.
So if you are serving up just a few megabytes and hit a few million page views you may find an AWS bill for a few thousand dollars.
See https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-p... for more specific guidelines on S3 scaling. It used to be you had to be careful and use randomized prefixes... It looks like that might no longer be the case. Also note there is a limit (arguably pretty high) per prefix on request rates.
As a noob I have a question as to what advantages I have using docker compared to just a service like Heroku where I just push the application to them... and I don't bother with docker?
To me with my limited understanding this seems like just another step.
Now granted, when it comes to work I'm using Docker for specific reasons: I know why I'd want it and what I can specify with it... but for personal projects it just never comes up for me.
Yet on the other hand, more and more of the examples I see involve Docker, and I'm not sure it needs to / what the advantage is.
Obviously there must be some strategic choices / advantages I'm missing.
The first obvious advantage is you can test your app locally in the exact same environment it will run, including any system dependencies and native modules. On any OS.
The first thing that comes to my mind is lock-in. If you push the application to Heroku and "it just works", then when you need to deploy it somewhere else you still have that hurdle to cross. Heroku's pricing adds up quickly.
It takes a lot of virality to go over $5/mo for a smaller side project on Cloud Run. (Essentially, if you got to the point where it would go over, the additional surplus traffic would be well worth it.)
Interesting. I've been looking at options too and opted for essentially the opposite: get a big(ish) VPS and stack everything on top of each other with Docker behind an nginx reverse proxy.
So far so good. Managed to get GitLab, Prometheus, Grafana and Ghost working this weekend, which I'm pretty chuffed about.
Not as clean as OP's, but the intention was learning, so sacrifices on convenience are acceptable.
This is the advice I give early stage startups... don’t waste cycles learning the AWS stack, and getting locked in. Just pay for a cheap VPS, and scale it vertically as you grow. By the time you outgrow vertical scaling you should have the revenue or funding to figure out your at scale architecture.
You’d be surprised how much you can handle with a single beefy VPS or dedicated.
A slight twist on this is a cheap, beefy colo box with nothing installed except something like k3s. This way you end up with all the YAML bureaucracy paid down in case that dreamy future with millions of users finally manifests
What’s the advantage of GCR over AWS Fargate/ECS? I’ve been running an app on ECS for a couple months now and have been pretty happy with the ease of set-up, load-balancing, auto-scaling etc, though there are still kinks I’m figuring out (SSHing into containers to perform database management, for example, or deploying updated tasks without downtime). Is the main selling point of GCR just its price? I haven’t found ECS pricing to be an issue (but I’m also not running anything at scale, and I do pay more than a few cents a month — but still under 10 bucks).
Another advantage not stated by others is that Cloud Run (mostly) adheres to the Knative API spec. This allows you to lift-and-shift your application from Cloud Run to a Kubernetes cluster with the Knative plugin installed, a feature you can't find with any other serverless product.
Thanks. What are the usual cold start and warm start times for Cloud Run? Is there potential for those start times to go down to a few milliseconds in the near-future?
Most definitely, but you can auto scale on custom metrics with a bit of glue. Even a very small node app has a start time in the 10-30 sec range. Every auto scaling system, even cloud run, can struggle to keep up with extremely bursty traffic. If you want no timeouts, you have to overprovision.
AFAIK, PaaS solutions like Heroku have a similar way of working, at least for side projects. Here, you deploy a container and Google runs it somewhere; Heroku containerizes your application every time you push it. Similarly, Heroku's free hobby containers also go to sleep after ~30 min of inactivity.
This is exactly the service that Azure needs and doesn't seem to have: while there is a consumption plan for functions, that's about it, and App Service is incredibly expensive for what you get.
As @GordonS mentions, Azure Container Instances is about as close as you’ll get with Azure, but it’s only close on ease-of-use: rather different from a deployment/scaling characteristics POV.
I’m generally very underwhelmed by Azure offerings, but ACI is one of the few that actually lives up to what is advertised.
Yeah, I did look at that, and I think you're right on ease-of-use - it's a deploy and done deal, which is nice. I think it suffers from two things:
* the cost seems net-net similar to App Service, but more granular. So I can go this route and get slightly more flexible pricing, but at the cost of some arbitrary limits on assignable CPU/memory.
* it's easy to wander into Azure Container Service stuff accidentally, and that has "deprecated" plastered all over it. Shame the naming doesn't separate them more clearly.
Have you seen Azure Container Instances? (I posted another comment about it just now, before I saw yours.)
An aside, but I totally agree that App Service is way too expensive for the piffling amount of CPU/RAM they provide, even though I acknowledge the platform side of the offering is excellent.
If very few people visit a side project, the scale-to-zero behavior is probably bad for Google search: its crawler will be seeing slow response times on cold starts, and Google can penalize you in search results.
More like unintended consequences. As in many real-world cases, each of the policies is reasonable, sound and fair in its own right, but when you combine them they might seem designed with exploitative motives.
Sorry for any confusion, but I didn't mean it WAS based on Cloud Run, just that it's fundamentally the same type of product with a better dev experience built around it.
I was tinkering with it recently. My problem was... support.
After a lot of double-checking on my part, I was finally convinced that Cloud Run had messed things up (in my case: a Content-Type header was changed from ThingsISend to text/html and broke every client). The issue tracker is hard to find and more or less abandoned, SO wasn't helpful (but had people that... love Google Cloud Run and didn't believe me), and only after tweeting a bit did someone look into it.
The issue is fixed now, which is nice. The way to get there was... questionable.
Excuse my ignorance: if a container hasn't been hit in a long time, how long does it take to serve the first request back? Is it spun up from scratch or sort of hot-paused?
For personal projects and client work I prefer a VPS with fixed pricing. Many have quoted DigitalOcean in this thread, but you get much more for your money with a VPS from Hetzner.com. $11.75/month = 8GB RAM, 2 CPUs, 80GB SSD and 20TB of traffic. That's 4 times the RAM, 2 times the CPU and 10 times the traffic compared with a $10 DigitalOcean VPS.
Do the kind of people who have tech jobs and side projects really need those projects to cost nothing to run?
I don't understand this obsession with running projects for "nothing" and contorting software architecture to do so.
$5/mo for a DigitalOcean droplet or $50/month for a beefier VPS (or even dedicated hardware if you know where to look[0]) is not much compared to the normal monthly expenditure of the average person in tech.
If it were all for convenience/efficiency, that'd be one thing. But learning "google cloud run" teaches you nothing about system maintenance, limits your understanding of the full stack, and encourages a myopic view of development, all so that when Google/AWS/Azure raises the temperature of the water in the pot, everyone starts wondering "how did running software get so expensive?"
While I tend to agree, I'm the type to have ~3 projects going at a time. I'm pretty young, but I can see myself spinning up a few projects a year. Each project needs a SQL database hosted by Google for a few bucks a year, a server per service, a company email per project per person, etc. I can easily see this being $50/month/project, so let's call it $150/month, or $1800/year. If you're not making any money, that's a lot to pay over a lifetime just to play around. And if your side project isn't meant to be profit-optimizing, you may never make money.
It really isn't a lot, but not everyone is in SF making a massive salary. In more rural areas of the US, non-senior engineers can make ~$60k-$80k; in non-US roles it's even lower. I can see this price range being meaningful.
Also, I know you can get cheaper than this, but if your goal is to spin up MVPs ASAP, you don't care to squeeze every penny out of your infrastructure. This is a nice breakdown, and I'll probably use it for the current project I'm spinning up as a test.
Running side projects at a loss prevents you from keeping a lot of them around, and might make it harder to decide to create a new one: "I want to do this cool project, but I already spend $X a month on hosting and don't want to spend more."
Sounds quite similar to Azure Container Instances, except ACI seems to be cheaper (at a glance). ACI is also not HTTP-only like this seems to be, but you do need to combine it with a Function App (Azure's serverless offering) if you want to trigger containers using HTTP.
We evaluated using it at work to replace App Engine Flex and unfortunately it was not ready for our use case:
1. There are no liveness/readiness checks, nor a way to shift traffic between versions, so you'll have downtime at every deployment
2. The only way to roll back to a previous version is to redeploy; there's no support in the web interface
3. There is no way to SSH to an instance (not so important)
4. You can't connect to Google Cloud MemoryStore (hosted redis)
Scale to zero + instant deploys would make cloud run a great candidate for staging environments deployed on every pull request but it's not quite there yet.
Interesting. Seems good. I guess the Render free tier is basically competing with Netlify, with the added luxury that you can add paid dynamic services at any time...?
There must be an added cost for managed databases and the like, right?
This sounds a lot like AWS Lambda (except nicer, thanks to just running any container). In AWS's case, you need to pay extra for RDS, Redis, and any other persistence.
Exactly, and this is why I don't use these services yet. It's great if the app itself only costs 18 cents per month, but if you have to pay almost €10 per month for a (SQL) database, all the monetary benefits are gone.
I currently pay €7 for two VPSes which perform great, have a lot of storage and enough power to run all my projects.
> The service will create more and more instances of your application up to the limit you defined (currently the cap is 1000 instances).
> As long as you have architected your application to be stateless - storing data in something like a database (eg CloudSQL) or object storage (eg Cloud Storage) - then you are good to go.
Won’t this just defer the scalability issue to the SQL part of the application? It’s nice that the stateless REST part can be scaled almost infinitely, but if the SQL part doesn’t offer the same scalability, what’s the point? Last time I looked, CloudSQL didn’t offer this kind of scalability.
Another option is Google App Engine. It's a little more limited in terms of supported languages, but the free tier is generous enough that I have never paid anything to run backends for side projects.
GAE and Cloud Functions are gradually going to be replaced by Cloud Run. In fact, Cloud Run and 2nd-generation GAE run on exactly the same gVisor-backed infrastructure, and you can deploy custom containers to either in almost exactly the same manner.
For personal projects I think App Engine is superior due to the included services like Mail API and Memcache API.
There are also features like the Firewall API that are lacking in Cloud Run right now.
One downside of App Engine (due to its 10+ years of history) is that one GCP project can only have one App Engine app, in a single region, and you can't change the region after it's created. You can have an arbitrary number of Cloud Run services in one GCP project, each in a different region.
It's also harder to take an infrastructure-as-code approach with App Engine, as there's no easy way to diff between the deployed version and the intent (in Cloud Run, you can use the container image hash for that purpose).
Disclaimer: SRE working on App Engine and Cloud Run.
Right now you may be able to do more with App Engine, but the trade-off is that you're locked into platform-specific APIs. With Cloud Run there's no platform lock-in, since all it is is a Docker container, running whatever you want, that listens on $PORT.
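To make the contract concrete, here is a minimal sketch in Python using only the standard library; the handler and response are placeholders, and any language or framework that binds to the injected port works the same way:

    # A stateless HTTP server that listens on whatever port the platform
    # injects via the PORT environment variable (Cloud Run defaults to 8080).
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"hello from a container\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        port = int(os.environ.get("PORT", "8080"))
        HTTPServer(("", port), Handler).serve_forever()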
I am using this, and it's a much better development and deployment workflow compared to Cloud Functions. The only thing it lacks is more RAM for ML workloads.
We've been using Firebase and one of the issues we had is that methods become "cold": if it's not used for a few minutes, the latency of the next call is unpredictable and could be in tens of seconds. The way Cloud Run is described ('Only pay when your code is running') suggests that it may suffer from it too. Does anyone know if it actually has the problem?
Started using this... pretty awesome vs the wall of YAML it replaces. Not suitable for all workloads (max 1 CPU / 2GB RAM, 4-minute max pod startup time, can't do background work when not serving a request). But it replaces cert-manager, ingress-nginx, oauth2-proxy, k8s service, k8s deployment, k8s secret, k8s configmap, k8s hpa, k8s pdb, helm charts and cluster management.
Disclaimer: I work closely on Cloud Run as an SRE.
If you want, you can still have that by using "Cloud Run for Anthos" and deploying your container to your GKE cluster with the same Knative API. This gives you more control over memory/CPU etc.
I believe there's no requirement if you just want to use Anthos on GKE (I could be wrong though, Anthos is relatively new).
Alternatively, if what you really want is the ease of managing and deploying containers that automatically scale with traffic, my understanding is you can always use the open-source Knative project with your own Kubernetes cluster. This provides the exact same API supported by Cloud Run and makes it very easy to migrate from managed Cloud Run to an on-prem K8s cluster (or K8s clusters from other cloud vendors).
Thanks, I hope you're right; GCP doesn't appear to be clear on this point. It seems to be offering a free trial up to May 2020 and then who knows. I'm aware of the ability to set it up from open source, which is a great fallback!
I tried out Azure Container Instances a while back, and the startup times were also what killed it for me - they were all over the shop. It was 1-2 years ago, so I don't recall exactly and things may well have improved now, but I seem to recall they ranged from 1-5m.
So far what I've observed is <10s cold-start times on GCR. This is a fairly lightweight web app written using Undertow in Java so carries a JVM as overhead.
This has already been said, but because this is Google, you have no idea when it's going to get killed; most things from Google get killed.
So I would suggest AWS: API Gateway + Lambda. It's basically free for side projects, and setting it up and operating it is trivial. It also scales (though you'll have to shell out real money) if you receive a lot of traffic.
If your deployment can run in 256MB of RAM w/ 1 vCPU, handles an average request in under 250ms, transfers 200KB or less per request on average, and you get 2 requests/minute on average to your site:
The cost is around $2/month USD, which I feel is a more likely scenario for a side project vs the “pennies a month” the OP claims.
I have gone serverless for a Vue frontend app that I deliver with Cloudflare Workers Sites, with a DigitalOcean Space (S3-compatible) serving all images. However, I'm still very reluctant to use Google Cloud or any similar Azure/AWS solution for its backend, because it just seems pretty expensive to me.
I can get a vServer with 48GB RAM, 10 vCPUs, an 800GB SSD, a 1000 Mbit/s connection and unlimited traffic for €20 a month, and run my MariaDB and Redis databases on the same machine as my other containers.
Even if the vServer only runs at 25% load on average, the same performance on Google Cloud would cost me €150 for the CPU and €77 for the RAM (free tier already included), and that still doesn't account for traffic. I would also have to host the MariaDB instance elsewhere (which adds latency). And Google Cloud has very few data centers in Europe, so it wouldn't even be near my main market.
In this case Google Cloud Run (with worse performance) would cost me something like 15x more than running a vServer. That's not a price I'm willing to pay to go completely serverless or to avoid downtime at all costs. Or am I missing something?
> I can get a vServer with 48GB RAM, 10 vCPUs, an 800GB SSD, a 1000 Mbit/s connection and unlimited traffic for €20 a month, and run my MariaDB and Redis databases on the same machine as my other containers.
Interesting! How/why are they so much (~8x) cheaper than e.g. DigitalOcean or Linode? Or am I missing something and they're not comparable in some way?
Sure, but 8x cheaper? And I think most non-European hosting companies have datacentres in Europe; I assumed they'd compete on price but apparently not.
A request is not a visit. Each visit might make up to a hundred requests (or even more when showing live data and caching a PWA on first visit, for example). That would translate to one user every hour or two, which even unpopular side projects can get.
But you're right about the free tier of course.
2 requests/minute = 86,400 requests a month. Plug in the other numbers from my earlier post and you'll see the calculator says exactly what I said: around $2/month, give or take depending on region.
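For anyone who wants to sanity-check that without the AWS pricing calculator, here is the same estimate sketched in Python. The unit prices are approximate us-east-1 list prices from memory and may well have changed:

    # Back-of-the-envelope version of the ~$2/month estimate above.
    req = 2 * 60 * 24 * 30                    # 2 req/min -> 86,400 req/month
    gb_seconds = req * 0.25 * 0.25            # 250ms per request at 256MB
    compute = gb_seconds * 0.0000166667       # Lambda compute:  ~$0.09
    requests = req / 1e6 * 0.20               # Lambda requests: ~$0.02
    gateway = req / 1e6 * 3.50                # API Gateway:     ~$0.30
    egress_gb = req * 200 / 1024 / 1024       # 200KB/request -> ~16.5GB out
    transfer = egress_gb * 0.09               # data transfer:   ~$1.48
    print(round(compute + requests + gateway + transfer, 2))  # ~1.89

Note that Lambda's perpetual free tier would cover the compute and request charges here; data transfer and (after the first free year) API Gateway are what land the bill near $2.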
I've been looking for a service like this for a while now. The only problem I'm seeing for my use case is that I need powerful GPUs for NLP inference that also scale up and down with demand. Can someone explain whether this is possible with GCR and, if so, what I need to do to accomplish it?
Probably a mindset thing. Side projects are lottery tickets in my mind: implementations of an interesting idea in the remote hope that it gets actual traction (and becomes more than a side project). At that point it needs to scale, and fast, while people are talking about it; if you take another week rearchitecting your backend while your site is down, you've lost your chance. So you probably shouldn't expect huge traffic, but you should be ready for it, because there's no point if the project fails the moment it starts to get traction.
My 3 side projects got frontpaged on reddit (and here), and I got better at handling it. Fortunately I always had scale in mind (at peak I got thousands of requests per second), but the first time was not a pleasant experience, because I would have had to pay more than I could afford if it had kept going. Fortunately I found a solution while keeping my site online, and over time the project paid me very handsomely. If I hadn't planned ahead, those projects would have been dead in the water.
I'm just starting to explore cheap hosting for a web app, and my initial digging suggests shared PHP hosting (with MySQL) is promising. It seems much cheaper than Ruby, Node, etc. Can anyone tell me if my initial hunches are correct?
If you can save $10/month by using PHP rather than Ruby (which seems optimistic) and value your time at $10/hour (which seems pessimistic), then writing PHP instead of Ruby only has to take one hour more per month (or the equivalent in pain) to not be worth it.
I get what you're saying and appreciate the perspective. I'm not convinced Rails or perhaps Sinatra are really any nicer than, say, Laravel. But I personally am not a Ruby fan.
In this particular case the front end is very rich and makes up the majority of the effort; for the back end I just need a little bit of CRUD.
Sure, I'm not saying you should use Rails over Laravel, but rather that you shouldn't weigh hosting costs much, since other factors like differences in development time probably dwarf them.
What kinds of applications take no time to start up, though? I'm curious about the spin-up time, because my Rails applications take on the order of minutes. I guess you can schedule a keep-alive elsewhere, though; a sketch of one is below.
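A minimal sketch of such a keep-alive in Python, assuming a hypothetical health endpoint; in practice you'd run something like this from cron or an external uptime monitor rather than an infinite loop on your laptop:

    # Ping the app every few minutes so the platform never idles it out.
    import time
    import urllib.request

    APP_URL = "https://my-side-project.example.com/healthz"  # placeholder URL

    while True:
        try:
            with urllib.request.urlopen(APP_URL, timeout=10) as resp:
                print("keep-alive:", resp.status)
        except Exception as exc:
            print("keep-alive failed:", exc)
        time.sleep(300)  # 5 minutes, comfortably under typical idle timeouts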
Netlify (without extra features) is simply static hosting. With Lambda, it does add server-side code.
Cloud Run is for running a fully custom server, such as an API. It can obviously do more than Netlify; Netlify really excels at static + some server code, while Cloud Run is extremely flexible (server in any language, more resources, etc.).
Netlify's stupid-easy for small projects. You just point it at a Github repo and it does the rest. As long as you're staying under its usage caps, and you can run all the server code in lambda functions, it seems like the best choice.
Once you exceed the free tier, though, no doubt it's much more expensive than GCP/AWS.
What I didn't see mentioned is the latency of the first request when the service has effectively scaled to zero. For getting initial traffic/customers this is very important.
Is there an easy equivalent for scheduled tasks in GCP? A background process that executes for a short period of time and then exits, only charging you for the time it ran?
> When will everyone understand that google cloud (money earning product) is different to all the other free products
When Google demonstrates it. They've already discontinued various parts of their cloud product.
True, they are unlikely to shut down Gmail, Search, or Android.
On the other hand, Google's cloud is a money loser that has slipped to a distant #4 in the cloud rankings.
But like many of Google's revenue-generating products, it exists to show Wall Street that they have an increasingly diversified revenue stream (I think advertising is "down" to around 87%). They have lots of bets to try and improve this, but little to show for it.
They did recently commit $2B to making GCP a player, but I suspect that if it doesn't show results by the end of 2020, investment in this sector will fall precipitously.
You don't need to ask me: Google provides their own handy list of examples (some like messaging API show up under other names like XMPP; others don't appear on this list because they were marketed differently, e.g. Google Print):
Those are just features of a single product (App Engine), which has been around for more than a decade and through several generations. Every product from every cloud vendor has API and feature changes over its lifetime.
GCP has never had a major product line shutdown. The only thing that comes close is Google Maps API pricing changes, but technically the service is still there and has even more functionality.
Many of those seem to have been around for well more than the 1 year deprecation minimum, and most of them appear to be some combination of Alpha/Beta products or language runtimes. Alpha/Beta products aren't really on-topic, and the language runtime stuff doesn't look all that different from how AWS does runtime deprecations.
Google Cloud has an official deprecation policy: if a product is labeled GA, Google Cloud guarantees that the service will continue running for at least 12 months after the deprecation is announced.
Exactly. According to what you said, any service with that policy could literally shut down next year if there is a deprecation announcement this year.
Which is better than some small no-name startup provider that might just go out of business tomorrow, but among all the big cloud providers Google is definitely the hardest to trust with longevity.
A more realistic concern is that Google might substantially change the free tier making this more expensive, or might shut down some auxiliary product that would be painful to switch from without migrating clouds completely, or that a payment might bounce for some reason and now I’m locked out of all of my google services without recourse.
Sure the odds of all of the above are relatively low, but from what I’ve seen they are a lot higher on Google than they are on Azure or AWS.
High-profile deprecations are handled carefully; we usually give them extended deprecation periods with clear migration paths/docs and communications. Notable examples:
1. Python 2.5: 4+ years
2. Java 6: 20 months, with existing users automigrated to Java 7
3. Java 7: 13 months, with existing users automigrated to Java 8
4. Master/slave datastore: 3+ years.
Considering App Engine was the first Google Cloud offering, I'd say we have a very good track record of meeting our deprecation policy, and we usually exceed it.
Can we standardize this recurring comment? Say, a URL that points to the problems people always complain about when using a Google service? Then someone can post that link on every Google HN post and we can all get on with our lives.
They're boilerplate and don't add to the discussion.
Assuming you’re using SQL or something else that’s portable for your data, the app is just a standard app in a container. It would be easy to move elsewhere, or even to just deploy to a different Google Cloud service.
Yes, with serverless you can really deploy your side projects at scale while paying basically nothing if they don't get visited.
The Serverless Framework on AWS Lambda is more mature for doing that, though.
One of the biggest gotchas is using just one AWS Lambda function for the entire web server instead of one Lambda function per endpoint.
It's not. But the cost gap between servers and serverless widens steeply with traffic: once traffic gets really high, serverless becomes extremely expensive compared to a dedicated box.
Definitely something I look out for, but the advantage over something like Cloud Functions (which would be an alternative) is that a Cloud Run service handles multiple concurrent requests per instance, so instance count isn't 1:1 with request rate.
No, the concurrency of GCF is always 1. If your code takes 1s to handle a request, then to handle 100 requests per second you'll need 100 instances running.
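The instance-count arithmetic, sketched in Python; the 80-requests-per-instance figure is Cloud Run's default concurrency cap as far as I recall, so treat it as an assumption:

    # How many instances each platform needs for the same load, assuming
    # 100 req/s at 1s latency, i.e. ~100 requests in flight (Little's law).
    import math

    in_flight = 100 * 1.0            # rps * latency in seconds

    gcf = math.ceil(in_flight / 1)   # GCF: one request per instance -> 100
    run = math.ceil(in_flight / 80)  # Cloud Run: 80 concurrent/instance -> 2
    print(gcf, run)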
It goes up, of course; nothing's free. Don't expect DigitalOcean-level pricing: the premise of cloud is efficiency and scale, not necessarily being the cheapest raw computing provider compared with traditional hosting. In other words, it increases the elasticity of computation, but the price per unit of computation goes up. (Of course there are cost savings in ops, security, uptime etc., but if you're talking about one-off side projects or something ML-related, the raw compute/bandwidth costs far outweigh everything else.) For PaaS, the free tier is subsidized by enterprise and other paying customers.
Not sure why you write "of course" and "nothing's free". If you use dedicated hosting with something like Hetzner, suddenly hitting big traffic doesn't automatically mean more costs for you. The website might get slower, or even crash, but your pricing will stay the same as it was before.
https://scaleway.com prices their base compute very cheaply [0]. They have DCs in AMS and CDG, but unfortunately I haven't got to use them since their DCs are a bit far from my customers.
[0] €2.99 for 2vCPU, 2GB RAM, 200Mbits/s bandwidth.
Out of curiosity, how are people thinking about GCP platform risk after the '2023 deadline fiasco'[1]? Is it still a good idea to use GCP at all in the aftermath of them articulating that it could face budget cuts or even be axed entirely (though the latter seems much less likely)?
If you already have an EC2 instance reserved on AWS for a year, you could just throw all those small projects there.
If they are truly stateless then the bottleneck will probably be the database, anyway.
For anyone starting a new app, I recommend building apps that are TRULY serverless. Then you can make them client-first, work offline, not be tied to one particular domain name, support end-to-end encryption, be embarrassingly parallel and scalable, and take an activist position against continuing centralization.