GPUs Are Now Available for Google Compute Engine and Cloud Machine Learning (googleblog.com)
449 points by boulos on Feb 21, 2017 | 148 comments



(Disclosure: I work on Google Cloud and contributed to this effort).

As I said back in November [1] for our initial announcement, the most exciting thing (for me) is that we let you mix and match cores and GPUs. You can see that spelled out in a nice table in the docs [2].

[1] https://news.ycombinator.com/item?id=12963902

[2] https://cloud.google.com/compute/docs/gpus/


Sorry for being off-topic: can you guys consider introducing something similar to AWS's free tier? It doesn't have to have a lot of free resources, but it should be lengthy (6+ months, ideally at least 1 year). I recently started a new project, and while I'd prefer to use GC, I had to go with AWS because they offer one year for free (and GC's 2 months is not nearly enough to validate an idea).


That's valid feedback. GCP's free tier is more valuable and more flexible than AWS's, but it only lasts 2 months instead of 12.

There are also perpetual free tiers for several GCP services - AppEngine, BigQuery, Firebase.

No comments on your specific feedback, but please do join GCP NEXT March 8-10 for some exciting updates [0].

(Work on Google Cloud, NOT in marketing :))

[0] https://cloudnext.withgoogle.com/


The tricky part of a 2 month free trial is that it's not nearly enough time to explore the offerings unless you're a company that's already got a product ready to go and staff to migrate it full-time (and I assume the latter is exactly who you'd want to charge full price to). I took advantage of the 2-month GCP free trial, and ended up using 1 GCE instance, 2 Cloud Storage buckets, your cloud storage webhosting & DNS, and....that's it. I don't even really know what other services exist. It takes time to program things...by the time you get around to realizing that there are other GCP services that may help your project, the trial's already over.

Since there's a dollar cap as well, I don't see the problem with extending the trial out to a long calendar time-period.


Cloud Datastore has a perpetual free tier, too: https://cloud.google.com/datastore/ :)

(I work on it.)


Thanks, I forgot.. there are probably others I'm missing :/


Any way to get a discounted Cloud Next ticket? I'm here in San Francisco, but even the $549 (one-day) is out of budget. Would really love to go. I am a GCP user and have an ops/devops consultancy startup (https://elasticbyte.net).


no livestream?


they should have one up closer to the date.


You should really reevaluate whether you should use an entire 6 months to validate an idea. That's a lot of sunk time.


To each her own! The most common issue is that you have some idea you start working on (T=0), put it away for a while (T=3 months), and come back to realize your free trial has expired. Alternatively, if you're just tinkering in the background or on weekends, 60 days is suddenly just 9 weekends.


What if you have one idea, and then another one 6 months later?


A product like ours (ML-based medtech) takes a lot longer than 6 months to validate, and we make extensive use of GPUs (both AWS and Google).


Ideally, perpetual free tiers are the best, especially when their usage becomes a rounding error.


Oh I love it!

I'm designing the distributed backend for my Deep Learning startup now (https://signalbox.ai), this is going to be great. Awesome work!


What type of architecture are you using? We're starting to set up a distributed backend at http://www.bugdedupe.com, and there don't seem to be a lot of resources out there about how to do it right.


I wrote it myself: IPC is shared memory, RPC is Thrift, but that really just carries metadata, with a lot of wrangling for performance.


I was excited when I read that Google would offer both NVIDIA and AMD GPUs in the November announcement [1]. Is there any timeline on when AMD GPUs will show up?

[1] https://cloudplatform.googleblog.com/2016/11/announcing-GPUs...


As I said below, "Soon" (and the page says so). You might want to attend Google Cloud NEXT [1], with most of your ticket price being turned into credits for Cloud usage :).

[1] https://cloudnext.withgoogle.com/


Can you explain how this is done? Does this mean that VM and the GPUs can be on different host machines? Will it affect the GPU-CPU communication performance?


Sounds like if you max out the GPUs per CPU core, they have CPU cores "left over" that they then use for CPU-only instances, instead of splitting a host into a predefined pattern of GPU-only instances that use up all the CPUs as well.


Yep!


You guys should talk (internally) to Zync and offer Redshift and Octane rendering on your infrastructure!


A bit off topic, but I recently realized how the cost per CPU is really competitive in cloud offerings compared to reserved instances at regular hosting providers, while the network costs are absolutely outrageous.

CPU costs may be something like 5 to 10 times more expensive in the cloud, but network costs are close to 100 times. Any hosting provider will offer you 250 Mbps of unmetered network access for your machine, whereas consuming that much bandwidth on Google Cloud for a month will cost you more than $1,000.

How come the difference is that big?
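A rough sketch of the arithmetic behind that claim (the per-GB egress rates here are my assumption of typical internet-egress tiers, not exact published pricing):

    # Back-of-the-envelope: saturating a 250 Mbps link for a month, billed as
    # metered cloud egress instead of a flat "unmetered" port fee.
    mbps = 250.0
    seconds_per_month = 30 * 24 * 3600

    gb_per_month = mbps / 8.0 / 1000.0 * seconds_per_month  # Mbps -> MB/s -> GB
    print("%.0f GB/month" % gb_per_month)                   # ~81,000 GB

    for rate in (0.08, 0.12):  # assumed $/GB egress tiers
        print("at $%.2f/GB: $%.0f/month" % (rate, gb_per_month * rate))
    # Even the cheaper assumed tier lands in the thousands of dollars per month,
    # versus a flat monthly fee for an "unmetered" port at a dedicated-server host.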


> the cost per CPU is really competitive in cloud offerings

> CPU costs may be something like 5 to 10 times more expensive in the cloud

So are CPUs more or less expensive in the cloud?

Also I assume by "cloud offerings" you mean AWS, Azure, Google Cloud and the like, and by "regular hosting providers" you mean GoDaddy, BlueHost and the like? Or perhaps you mean PaaS vs SaaS offerings?


I believe he means OVH (+Kimsufi, Soyoustart), Online.net, and Hetzner for example.

If you are interested in what 1/100th of the AWS price looks like, have a look at Online's server order page, for example [1].

[1] https://console.online.net/en/order/server


Yes and no: yes, you can get a quad-core 64GB RAM + SSD server for $55/month (source: https://www.hetzner.de/de/hosting/produkte_rootserver/ex51ss...). But there's one more spec that matters: networking. It takes a completely different amount of effort to provide 1 Gbps connectivity versus 40 Gbps per server.

Most of the providers like Hetzner/OVH provide the former, while GCE provides the latter. I'm not saying it's bad; in fact, for most people 1 Gbps would be more than enough. But it's not something that is fair to omit.

Disclaimer: I don't work for Google; this is just from my experience.


OVH offers 1 Gbps, 10 Gbps and 40 Gbps plans: https://www.ovh.com/us/dedicated-servers/storage/

I am sure Hetzner would be able to offer the same as an add-on if you contacted them directly.

That being said, you would continue to pay orders of magnitude less for your bandwidth than you would at Google or AWS. There's a bubble in cloud bandwidth pricing, and I don't think it's value-related.


CPUs are more expensive on Google Cloud than OVH, but that is expected. I'm OK paying 2x or even 5x for the agility and the power those platforms offer (especially when you need to autoscale up and down every single day).

What I don't understand is the network cost factor of 100x to 1000x.


> How come the difference is that big?

To a first approximation, CPU and RAM are required to get your site up and running at all. Bandwidth is less so. Bandwidth scales up as your growth scales up.

So it makes sense for cloud providers to make CPU and RAM relatively cheap, and charge unreasonable prices for bandwidth. If you're growing, you're more inclined to pay, since you're seeing success. Plus you're already locked in at that point.


That depends a lot on your CPU/bandwidth ratio. A modern website consumes a lot more CPU per request than an online video game server (my use case), which is basically the equivalent of a WebSocket router with a little data transformation in the middle.


No kidding! I liken it to buying a soft drink with dinner. It only costs the restaurant 10c to fill the cup, but they charge $2.99 because the people that want it are willing to pay.


Egress is a bitch. Even on a personal project with about 3 million MAU, the network costs are crazy. They're easily 20x lower by owning my own machines.


FYI Google's K80s ($0.70/h, billed per minute) are cheaper than AWS's K80s ($0.90/h, billed per hour), as well as Azure's K80s ($1.08/h, billed per hour).


Google's $0.70 rate is in addition to the normal instance cost, so the math is more complicated if you don't have a machine already. (Although a low-end high-memory machine is $0.126/hr, so the sum is still lower than Amazon's.)


Yes, but you can connect any number of GPUs to an n1-standard-1, so feel free to add $.05/hr :).
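For the curious, the per-hour totals being compared here, using only the prices quoted in this thread (so treat them as approximate):

    # Hourly cost per K80 die, using the figures quoted in this thread.
    gcp_gpu = 0.70            # K80 die on GCE, billed per minute
    gcp_small_host = 0.05     # n1-standard-1
    gcp_highmem_host = 0.126  # low-end high-memory machine

    aws_k80 = 0.90            # per hour, billed hourly
    azure_k80 = 1.08          # per hour, billed hourly

    print("GCE + n1-standard-1: $%.3f/hr" % (gcp_gpu + gcp_small_host))   # $0.750
    print("GCE + high-memory:   $%.3f/hr" % (gcp_gpu + gcp_highmem_host)) # $0.826
    print("AWS: $%.2f/hr, Azure: $%.2f/hr" % (aws_k80, azure_k80))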


Is the n1-standard-1 instance fast enough to not be the bottleneck for the K80?


It really depends on what you're doing. If you are doing large transfers over PCIe back and forth, not really. But lots of things work just fine.

The bigger challenge for large ML models is the memory you'd need to back it. But with GCE you can happily do a custom machine with up to 6.5 GB per vCPU, just so you can fit the output ;).


Slightly off-topic, but I don't know where to get help with this.

I see one of our projects has gotten a quota of 16 GPUs in asia-east1, us-central1, us-east1. However we seem to have been allocated nothing in europe-west1. Is this an error, or simply something I have to manually ask for somewhere?

"Quota 'NVIDIA_K80_GPUS' exceeded. Limit: 0.0" :(

I'm so psyched to finally be able to use GPUs on GCE instead of AWS, so any help would be appreciated.


Yep! And for distributed training, the per-minute billing is incredibly important. But just a correction: Azure also does per-minute billing (following suit from GCE way back in the day).


What about the cost reduction that usually comes with extended use, i.e., sustained use discounts? Does anyone have that info?


Good question!

We do not currently apply sustained-use discounts to GPUs, but we may do so in the future (we need to gather data first on usage, to understand if people will be running 24x7) [1]:

> You cannot attach GPUs to preemptible instances. GPUs do not receive sustained use discounts.

[1] https://cloud.google.com/compute/pricing#gpus


Slightly off topic, but I have two questions:

1. Are GPUs covered by the free trial (ie, can that $300 be spent towards GPU instances)?

2. How is support for GCP?

I've been curious about trying GCP, but held off over GPU support (since AWS was covering my needs and I do ML stuff mostly) and general support (since Google doesn't have a great reputation for supporting products).

Also, perhaps an affiliated person can chime in with something about the roadmap to stable GPU support. (It currently says there may be breaking changes.)


1. At this time, we don't grant quota for GPUs for free trial customers (hello various coin miners!). However, if you upgrade from your free trial, you do keep your $300, and that's just money :).

2. Unlike consumer-facing products, GCP is focused on business. We offer paid support plans [1] with high (measured) customer satisfaction. I know several of the people in the support teams (and you see them here on HN as well), and we're really trying to defeat the meme of "Google doesn't do support".

As far as "stable GPU support", this is just confusing language surrounding our usual "Beta" terms [2]. Once it becomes Generally Available, no changes would be made. But moreover (for GCE anyway), we don't make API breaking changes from Beta to GA (Beta to GA is "just" about stability in production).

[1] https://cloud.google.com/support/

[2] https://cloud.google.com/terms/launch-stages


>1. At this time, we don't grant quota for GPUs for free trial customers (hello various coin miners!). However, if you upgrade from your free trial, you do keep your $300, and that's just money :).

What do I have to upgrade to? Isn't GCE all per-minute? Do I just have to pay $0.05 for a standard instance and then use the $300 from my trial?

Also, is coin mining against any TOS? I'm not planning on doing it, I'm just curious.


Sorry, upgrading to a paid account (abuse risk is just too high for the free trial, so we keep the quota limits low).

Coin mining is not against the TOS. However, because it's usually economically irrational, it's usually abuse. If you don't pay for your GPUs (fake credit card) it's awfully economically rational though ;).



This is great! Just out of curiosity, how does the $0.70 / hour rate compare to the per-hour electricity cost of running a K80 (at full load)?


https://images.nvidia.com/content/pdf/kepler/Tesla-K80-Board...

"The board is designed for a maximum input power consumption of 300 W"

So one hour at home:

Power used: 300 Wh

Price: $0.20/kWh

Total cost: $0.06/hour

Google cost: $0.70

You could say it's 10X more expensive, but you need to include all additional costs, power is just a fraction of it. The GPU itself is $5k, which over - let's say - 3 years would cost $5/day, or $0.20/hour, and if you're using it a third of the time, just that amounts to $0.60/hour. Add everything else (you might use less than 1/3 of the time) and it would be far more expensive than using it in the cloud. As it is expected to be.
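The same back-of-the-envelope, written out (the 3-year lifetime and 1/3 utilization are the assumptions from above):

    # Owning vs. renting a K80, using the rough numbers above.
    card_price = 5000.0           # USD for the board
    lifetime_hours = 3 * 365 * 24
    utilization = 1.0 / 3.0       # fraction of the time it's actually busy

    power_watts = 300.0
    electricity = 0.20            # USD per kWh at home

    amortized = card_price / (lifetime_hours * utilization)  # ~$0.57 per busy hour
    power_cost = power_watts / 1000.0 * electricity          # ~$0.06 per hour

    print("owning:  ~$%.2f per busy hour" % (amortized + power_cost))
    print("renting:  $0.70 per hour on GCE")
    # And the owning figure still ignores the host machine, cooling, and your time.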


Keep in mind that they're tricking you. Yes, the entire card is $5k, but they're selling access per GPU. This card has two processors on it, so they're actually selling access to half that card.


But on the other hand, everyone assumed that because that's how MS and Amazon do it...


I think that's why they chose the K80 as well. By today's standards it's an extremely old card (2 generations behind), but it's the last professional series card to feature two processors on it. It's easier for them to sell, I guess.


Actually, it's more that it's the best Kepler class part. Maxwell, unfortunately, doesn't have full-speed double precision, locking out an entire (important) segment of the market. As I said in another thread, we'll have P100s and so on as soon as we can.


Fair enough, but Maxwell makes up for the lack of DP with other features that other customers appreciate more. Pascal will solve all those issues, but the P100 unfortunately will probably be extremely expensive to use because of the cost of the card.


M60 was the last to feature 2 GPUs. See this convenient reference: https://en.wikipedia.org/wiki/Nvidia_Tesla#Specifications_an...

But the double precision perf of M60 is very low and less than the K80 due to limitations of the Maxwell microarchitecture. So we could say K80 is the last decent dual-GPU Nvidia card...


Ah, sorry, I forgot about that one. The Tesla Maxwells came much, much later than the GeForce cards, so it wasn't very long until Pascal was released. To meet all the needs of the customer, Maxwell was likely not a good choice, but the single precision was much better than Kepler's.


Unless you are using GeForces, which are much cheaper than $5k and have comparable (single-precision) performance.


A major use for GPUs in the cloud is not being limited to the 8-12GB(ish) limits of enthusiast cards. The processing performance is similar; the ability to store all you need in memory is much closer to why these cards go for $5k.


I'm not sure most GTX users doing HPC can be called "enthusiasts". After all, most research is done on these devices, so most models should fit their memory limits (unless you're doing some hardcore, or lazily engineered, research model). Correct me if I'm wrong, but AFAIK the Tesla cards are designed for regulated markets and are not mass-produced like the GeForces, hence the elevated prices.


The only problem is GeForce 1080s (for example) burn out much quicker under heavy load. They're not designed to sit inside an enterprise chassis


Do you mean that they fail? I have built a mining setup with multiple gpus in the past and haven't lost a card. I am 100% sure that Google is smarter than me at such setups.


Also interested in seeing what sort of failure rates you've had with GPUs. Ours have worked fine, but n=4.


I'm confused, the google cost is $0.70/minute not $0.70/hour correct? Aka it is $42.00/hour?


No. It's charged at a rate of $.70/hour per die, but usage is billed per minute. For example, if you use a K80 for 24 minutes (i.e., 40% of an hour) you pay $.7 * .4 = $0.28 total.


It's not $0.7/min, it's $0.7/hour.


Typo; thanks.


K80 maximum input power is 300W. The average price people in the U.S. pay for electricity is about 12 cents per kilowatt-hour.

So, around 4 cents an hour?


Plus you need to cool that 300W in a datacenter, which costs a few cents I am sure.


Google is renewable at this point I believe.


Which doesn't even remotely mean their energy is free (though they don't pay as much as you do at home per unit, because they specifically build datacenters where they can get power cheaper, because it's a huge cost at datacenter scale)


I didn't say it was even remotely free, but it's at a scale compared to consumers that it's pretty damn close. 70 cents an hour is A LOT to pay to run these GPUs on Google's end. Which is the point of this discussion.


On the market, renewable energy costs more than non-renewable, simply because there is a higher demand for renewable, and non-renewable has no benefit for consumers.


Looking at the docs, it appears that the CUDA drivers have to be manually installed on the host image, which takes time/money.

Is there a GCP Image with CUDA drivers pre-installed? Or is that not possible with the hardware architecture?


SIGH. Because of the GPL, we can't pre-install the NVIDIA driver. At that point, it sadly makes more sense to have you roll your own and then bake the image. I'll gladly hand out refunds for the X minutes this takes (don't forget that on kernel upgrades you get to do it again when the kernel headers change!), but like you I wish it weren't so.


Feed the installer the right flags and it will run unattended. Install DKMS and it'll rebuild automatically on kernel updates.


Yes, that's correct. But you still must have the user do so. Otherwise, you're distributing the resulting artifact (the NVIDIA driver linked against the kernel), which mixes GPL code (the kernel) with non-GPL code (NVIDIA's).

But IANAL.


Amazon Linux AMI for their GPU instances also does not include the CUDA driver.


They have a Deep Learning AMI which does, though: https://aws.amazon.com/marketplace/pp/B01M0AXXQB


Just run a CUDA container.


Can you point to an example of this? I have been installing nvidia-docker, and it sounds like what you're proposing is substantially simpler.


https://hub.docker.com/r/nvidia/cuda/

Edit: still requires nvidia-docker, or a hand crafted docker command that replicates nvidia-docker.


Yeah, my use case is nvidia-docker, but it's still a lot easier than configuring it on your host system IMO.


Takes 10 minutes to install nv-docker correctly, and you're golden.


I know you are working on better GPU support in Kubernetes, but it would be awesome if I could just grab my image that already runs in nvidia-docker and run it on GKE.


You can do that on Nimbix/Jarvice. Submit a Docker image, start batch job and get an e-mail once it's finished. 4 core 32GB RAM machine with K80 costs ~$1.06 there.


Anyone checked if this is compatible with Blender GPU rendering? On my old Mac machine GPU rendering doesn't work; I can still get a 16-core VPS for a few hours when I'm in a hurry but this has the potential for more performance I guess..?


I believe Cycles (the Blender renderer) only relies on CUDA [1], so it should work. Depending on which AMD card you had on your Mac, it sounds like it might not be supported by Cycles. Because the NVIDIA K80 doesn't do display, though, you'd need to run, say, VNC, or run Cycles from the command line.

[1] https://docs.blender.org/manual/en/dev/render/cycles/gpu_ren...


My iMac 2011 doesn't work with GPU Cycles, but that's OK. And I've already successfully sped up renders by renting multicore VPSes by the hour (command line, specifying a different frame range for each machine to render).
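If it helps anyone, a minimal sketch of that frame-range splitting (the .blend path and machine count are placeholders; -b/-s/-e/-a are Blender's standard background-render flags):

    # Split an animation's frames across N rented machines and print the
    # command each one should run.
    blend_file = "scene.blend"   # placeholder path
    start, end = 1, 240          # frame range of the animation
    machines = 4

    frames = end - start + 1
    chunk = -(-frames // machines)  # ceiling division

    for i in range(machines):
        s = start + i * chunk
        e = min(s + chunk - 1, end)
        print("machine %d: blender -b %s -s %d -e %d -a" % (i + 1, blend_file, s, e))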


You might want to check out Golem (https://golem.network/). They're working on very cheap Blender rendering. They're on Slack: http://golemproject.org:3000/


Hopefully soon someone will create a GCE image for WebPageTest. It's been a pig to get up and running, but Amazon's per-hour billing is expensive for a machine needed 10-15 minutes every hour.


(I work at Google, I have my biases, but…)

You might find good fun with our Preemptible VMs, which I think will land about ~$7/month: https://cloud.google.com/compute/docs/instances/preemptible


Preemptible VMs + Kubernetes = Awesome.

If you manage to build your app/infrastructure in a way that can survive nodes shutting down at random times (or if you don't care about restarts), then you can reduce your infra costs by 80%.

We have a staging cluster running on preemptible instances, and as soon as one instance goes away we get a different one. Everything gets deployed automatically. Regular internal users checking out various webpages don't even notice.

We're looking into changing our 24/7 infra (which needs to be 24/7) to something that can run mostly on preemptible instances (with a couple of normal instances for services that can't be randomly killed).

Super happy about our move to GCP and our K8S experience.
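A minimal sketch of the kind of shutdown handling that implies, assuming your workers pull jobs from a queue and can requeue in-flight work when a node is reclaimed (both GCE preemption and Kubernetes pod eviction deliver a SIGTERM with a short grace period):

    import signal
    import sys
    import time

    shutting_down = False

    def handle_sigterm(signum, frame):
        # Preemption / eviction: stop taking new work and drain what's in flight.
        global shutting_down
        shutting_down = True

    signal.signal(signal.SIGTERM, handle_sigterm)

    while not shutting_down:
        # ... pull one unit of work from a queue and process it ...
        time.sleep(1)

    # Flush state and requeue anything unfinished so a replacement node can
    # pick it up, then exit cleanly.
    sys.exit(0)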


AWS CodeBuild now gives you per minute billing. But it has limited instance types.


Unfortunately, CodeBuild looks to be build-only; it won't install a full software suite on the machine and run it.

Otherwise, if it'd load an AMI - ideal!


Sigh. When I first used Google cloud (lowercase c), it was only App Engine (CMIIW). I woke up one day and now it has more than a dozen offerings, as confusing as AWS.

Does anyone have, specifically for AI/ML, a list of "If you want to do X, use Y" for Google Cloud offerings? The official list (https://cloud.google.com/products/machine-learning/) doesn't help much. Would appreciate it if it's explained by layer (higher, such as ready-to-use Speech Recognition, and lower, where you possibly need to set up some infra stuff).

EDIT: I'm looking for something like this (it explains AWS offerings by layer) but for Google -> https://aws.amazon.com/blogs/ai/welcome-to-the-new-aws-ai-bl...




TPUs are for TensorFlow-based computation only, rather than being generally CUDA-compatible like GPUs, which can run everything from deep learning to fluid dynamics simulations. I believe they are also for Google-internal applications right now, so not generally available.


O/T I guess... Does Google have plans for FPGA instances, preferably with design tools (for $.70 an hour :) !)?


AWS has spot instances which are roughly ~1/10th the price of the regular EC2 instances. Anything equivalent here?


There are discounts if your instances can be pre-empted: https://cloud.google.com/preemptible-vms/. The pricing isn't variable like EC2 spot instances, however.


Time for me to shamelessly plug my TL;DR blog on Google's Preemptible VMs:

https://medium.com/@thetinot/google-clouds-spot-instances-wi...


From the site:

> You cannot attach GPUs to preemptible instances. GPUs do not receive sustained use discounts.

So sorry, no.


The question didn't even mention GPUs, and there's no reason that what you mention can't change in the future. It was a pointer.


Are the cards physically in the same machine, or are they in a remote chassis and connected through the network?

I think on P2, Amazon's cards are remote, and there is pretty significant latency when using them for time-sensitive computing.

What kind of performance can we expect from GCP compared to AWS P2?


They are connected directly to the host with PCIe, so you should get "bare metal" performance.


I've heard that before. I'll believe it when I see it :)


Anyone can try it today :)

I'd love to see your findings, I'm curious as well!


The "Restrictions" section may answer some of your questions: https://cloud.google.com/compute/docs/gpus/


I know it's early, but has anyone tried using one of these instead of the AWS p2.xlarge recommended for use in http://course.fast.ai/ ?


Awesome.

Separately, can tensorflow models trained on Cloud ML be downloaded yet?


As Cloud ML is "just" hosted TensorFlow, once you train the model it stores a .meta file in GCS for you. You can import this in TensorFlow [1] for serving elsewhere if you so choose. Is that what you're after?

[1] https://www.tensorflow.org/versions/r0.11/how_tos/meta_graph...
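For example, a bare-bones version of that import (paths and tensor names here are placeholders, not anything Cloud ML guarantees), using the r0.11-era API from [1]:

    import tensorflow as tf

    META_PATH = "model.ckpt.meta"   # the MetaGraph file written to GCS (or a local copy)
    CKPT_PATH = "model.ckpt"        # the matching checkpoint

    with tf.Session() as sess:
        # Rebuild the graph from the exported MetaGraph, then restore the weights.
        saver = tf.train.import_meta_graph(META_PATH)
        saver.restore(sess, CKPT_PATH)

        # Look up tensors by whatever names were used at training time.
        graph = tf.get_default_graph()
        x = graph.get_tensor_by_name("inputs:0")
        y = graph.get_tensor_by_name("outputs:0")

        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))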


Thanks. That works in Cloud ML now? I'll try it.

The copy on https://cloud.google.com/ml/ under Portable Models says "In future phases, models trained using Cloud Machine Learning can be downloaded for local execution" so I hadn't looked into it further.


Huh... maybe I'm mistaken. Lemme ask the experts and get back to you.

[Edit: Okay, we've decided that the exported model it produces is what you'd expect. We're going to update the landing page, once we can agree on what it should say.]


Thanks!


When (if ever) will we see Pascal cards in the cloud?


> AMD FirePro and NVIDIA® Tesla® P100s are coming soon.

from cloud.google.com/gpu (our landing page). That's currently limited on availability of hardware, testing, etc. but it really should be "soon". Note though that P100s are massive and expensive, so we don't intend to get rid of K80s or anything once we have P100s.


Out of curiosity, are there strong reasons to leverage GPUs in standard web application development from a general-purpose standpoint? Can I leverage this to enhance a general-purpose server or database? If so, anywhere I can read more?

Always interested in what kinds of things one can do when new offerings like this are made.


Short answer: no. Long answer: if and only if you can really parallelize your processing into very small calculations, each of which doesn't take a lot of memory, and you have a lot of time to write special code using a special framework. Perhaps real-time streaming encoding could be one case, but that's already sort of beyond a general web application.

GPU is powerful because a GPU usually has 100+ cores. Each core is weak and inefficient, but power adds up when you have 100+ cores available.

https://en.wikipedia.org/wiki/General-purpose_computing_on_g...
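As a concrete (if contrived) illustration of that "many tiny identical calculations" shape, here's what offloading an elementwise operation looks like with CuPy, a CUDA array library (my choice of library, and it assumes a CUDA-capable GPU):

    import numpy as np
    import cupy as cp  # assumes CuPy and a CUDA GPU are available

    # GPUs shine when the same small operation is applied to millions of elements.
    x_cpu = np.random.rand(10000000).astype(np.float32)

    x_gpu = cp.asarray(x_cpu)            # copy to GPU memory
    y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0   # each core works on a slice of the array
    y_cpu = cp.asnumpy(y_gpu)            # copy the result back

    # A typical web request is branchy and memory-bound, so the two copies
    # above usually cost more than the computation saves.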



Besides 3D graphics and machine learning, GPUs are also good for image resizing/encoding and video encoding/transcoding. But I haven't heard of anyone trying to accelerate Node.js or MongoDB on GPUs.


You can run MapD (https://www.mapd.com) and get big speedups over CPU analytic databases (http://tech.marksblogg.com/benchmarks.html). We'll be launching on Google Cloud soon.


GPUs have no application whatsoever for the usual web application.


SQream DB (http://www.sqream.com) is an SQL analytics database which uses GPUs. It is on both AWS P2 and Azure NC machines, and it can do very fast analytics on hundreds of raw terabytes on those.

Having said that, on-premise or bare-metal is about 25% faster than AWS P2 for large datasets (over 2TB). For smaller datasets that may fit in-memory, they function about the same.


Out of curiosity, can you... run video games with it?


No. The K80 is a Compute-only device from NVIDIA (i.e., it won't run OpenGL or DirectX). We've previously announced that cards with Display are coming "soon".


Actually, the K80 does run OpenGL. Source: I'm building a remote rendering solution that runs on a couple of K80's, and it works just fine.


No, but there are guides out there to use SteamLink for gaming on EC2.

https://lg.io/2015/07/05/revised-and-much-faster-run-your-ow...


... if you can deal with the 250-2000ms display lag sure

and you have to use a remote windowing dealie


I'm only 23ms from us-west1. See my other note about OpenGL/DirectX right now, but you shouldn't assume anything like those round-trip times...


Is that number based on your experience using it this way with something like RDP/TurboVNC, or anecdotal, based on other cloud providers?


I tried with AWS GPU once and only once. Probably if you spend time and effort you can get it down a fair bit


From my experience, the network in GCE is way more stable and usable than its AWS counterpart. So maybe we can see another OnLive rise again ;)

(I'm joking, but there has to be a day when it's possible, right?)


Don't use Wi-Fi. My lag was 300ms from my desktop to US-West-2 and then I used an ethernet cable and it dropped to 30ms.


I use ethernet for that sort of thing. I did get 200ms and 2000ms latency regularly anyhow


Does anyone else find it kind of odd that Google, a company famous for having no human support at all anywhere, which shuts down any attempt at contacting a human within its ranks for support on ANY of its products or services, always seems to deploy a small army of senior tech people to answer questions every time there is an article posted on HN? Strange double standards afoot; some users matter, others not so much.


> Does anyone else find it kind of odd that Google, a company famous for having no human support at all anywhere

While a number of people push that false meme repeatedly, everything I've heard from people who've used it (or worked on it, but the latter comes with obvious bias) is that the paid human support on GCP is good.

Heck, on the consumer side, I've gotten good (quality and speed) human support on Google Express, too.


GCP gold support is great, as you actually get through to an SRE. Silver (what I use)... not so great: often you'll submit a ticket with a ton of technical detail only to have them come back with something asinine rather than escalating it if it's past their understanding. The difference in price is pretty striking though, so it's no surprise that the quality of service differs.

Honestly though, I can't say I've ever really needed support on GCP; most of the times I've raised tickets it's been due to funny behaviour (slow spinning disks in us-central1 was the last one).


https://cloud.google.com/support/

Is where you can talk to a human every time you file a ticket related to Google Cloud Platform.

(disclosure: a human that answers those tickets)


How do you distinguish Google intentionally deploying senior tech people to HN from Google just having a lot of senior tech people who read HN? ;)


Yeah, the latter is literally the case. No one is 'deployed' anywhere, and it's a little amusing to think that we have "no support," but would deploy people to web forums, when looking to promote a commercial business.

(We have support, more than 600 security engineers alone, we just offer commercial support for a commercial product.)


For what it's worth, I'm not "deployed" here. And as I've said on this forum numerous times: we offer real, paid support for Google Cloud.


I, for one, still hold out hope Google will one day pay me to hang out on internet forums all day :)


That's what we pay boulos for. /heh


No, but I do find these regurgitated posts regarding their lack of human interaction annoying. I've always been able to talk to someone at Google on Google Play. Why don't you try it? Go to Google Play and initiate a chat session or ask for someone to call you.


When you pay for Google services you actually get support.

Source: Pay for many Google services, received support when needed.


Users that (potentially) fork over kilowads of cash have a way of attracting special attention.


0.15 kilowad/mo is sufficient: https://cloud.google.com/support/



