My company is launching a new web design and development service using Google Cloud services. Initially, this was an experiment because we received some Google Cloud hosting credits. We run LAMP (Linux-Apache-MySQL-PHP) stacks on virtual machine instances on servers worldwide of our choosing.
After Google I/O, some webinars, and testing, I learned how quickly new websites and apps can be deployed, how Google DNS handles domain management, and how App Engine and other load-balancing features can help manage cost.
The ability to SSH into a virtual machine has proven quite useful to enable my team to coordinate and manage projects. I am still open to using other services like AWS but Google Cloud has been great - especially because they also offer customer service for tech and billing issues.
Spendology is the company >>> spendology.net. Our new service is called Blue Apex Digital. I just used Google DNS and Google Compute Engine to publish >>> blueapexdigital.com. Thanks GCP!
I do have a quick question. I am running a LAMP stack with the A standing for Apache. I put an "Index.html" and an "Index.php" file in my "www" folder. I realize that new files are added alongside the old ones rather than replacing them. However, I want to add an ".htaccess" file with a DirectoryIndex directive that prioritizes the PHP file over the HTML.
I tried copying an .htaccess file via the gcloud command line tool but I got an error. I've been looking through the help pages and FAQs and searching, but haven't found a solution yet. Did I miss something? Thanks!
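The usual fix is a DirectoryIndex directive. A minimal sketch of what I'd try (the instance name and zone below are placeholders; also note that filenames on Linux are case-sensitive, so the DirectoryIndex entries must match your actual file names, and the server config must AllowOverride Indexes for an .htaccess to take effect):

```shell
# note the spelling: the file is ".htaccess" (one "a" after "ht")
cat > /tmp/.htaccess <<'EOF'
DirectoryIndex index.php index.html
EOF
cat /tmp/.htaccess

# then copy it into the web root -- "my-lamp-vm" and the zone are placeholders:
#   gcloud compute scp /tmp/.htaccess my-lamp-vm:/var/www/html/.htaccess --zone=us-central1-a
```

With that directive in place, Apache should serve index.php when both files exist in the same directory.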
fwiw, this is currently my favourite site to check for possible optimizations, I ran the test so you wouldn't have to wait 30 seconds in the queue ^_^ http://yellowlab.tools/result/e7v314ikq3
I have noticed that all Google Cloud services now require credit card information for validation. Can this be made optional, relying instead on users' Gmail accounts? (Especially for people just wanting to tinker with new services for fun.)
I suspect that it's unlikely that google can remove this because bad guys tend to spin up many free trial instances and do bad things (sharded bitcoin mining, DDoS'ing, etc). Privacy is #1 so they aren't allowed to look inside your machines or inspect your traffic...so that leaves billing / such metadata to fight abuse.
That makes perfect sense, and I don't argue with this logic at all. But since you're here, I'd like to say something (that isn't horrible :P) that I've actually wanted to [have the opportunity to] get out there about GCP for a while now.
The "Heroku-style free" tier is likely to be all I can manage for the short to medium term future, due to a variety of factors.
As a result, the two go-to systems I would turn to if I need code hosting are Heroku and OpenShift, and I have accounts with these platforms (although I'm leaning toward the latter lately). I am yet to do anything generally interesting with them myself, but these are the systems I take into account when talking to others, because I have no experience with GCP. (I wonder if this is true for others too, and how big that number is.)
I have zero idea how Heroku are fending off precisely what you refer to. I've heard about situations where individuals' AWS accounts have been hacked by automated systems that spin up "instances with everything", but once it was established with billing that the account was hacked, the charges (which in one case were 5 figures) were fully reversed. Perhaps this fact is relevant here.
I think the problem can be solved before it (theoretically) gets to that point, however.
Serving up webpages, doing the occasional database transaction, etc, produces a significantly different CPU load than bitcoin mining does. I don't consider instance CPU usage monitoring a violation of my privacy, so I think it would be perfectly fair to throttle back continuous high CPU usage, but allow for short bursts of high usage. (In fact, I think this kind of thing is already standard...?) For bonus points, make the system track instance CPU usage and adjust its thresholds to allow for periodic high burst usage :P (since, thinking about it, masquerading as a web server and doing 30 seconds of mining every 5 hours is going to work out to zero gain).
I wouldn't consider it at all unfair to offer a free tier with exceptionally aggressive CPU-time QoS; in fact, it would probably suit me (and a lot of other people) perfectly, giving the funemployed community the option to do things like spin up fascinating new environments like Erlang, Dart, Rust, etc, and play with these environments (which are not yet available on Google's cloud hosting infrastructure) for web serving and similar. (For more bonus points, I'd extend the heuristic CPU-time tracking I mentioned before to classify instances as "friendly" over the long term, and let them have exceptionally low latency! :D)
In short, there's a whole demographic of people out there that you're definitely excluding, ranging from people like myself who are just at the "messing around" stage to nervous types who don't like deadlines and run from the "free for X period of time" part of GCP.
I realize and recognize that your response here is only your opinion, not Google's, and I'm very happy to respond here or via email (my address is in my profile) if you'd like to discuss any of the things I've said. ^^
I like this, and it feels "Google-y". Worried that a handful of us will use it, lean on it heavily, and then Google will do what Google does and pull the plug on it.
Well in all honesty if you end up leaning on it heavily in spite of the warning you kind of deserve whatever you get...
>This is a Beta release of Cloud Shell. This feature is not covered by any SLA or deprecation policy and may be subject to backward-incompatible changes.
I wasn't explicit with my earlier comment; you're right about the beta period. I was referring to the time after they exit beta and have been a good, useful service for a year or two.
Which would be understandable if they had ever shut down any cloud service offering of this level before (which they have not).
This whole "Google loves to shut stuff down" is really tired and overplayed.
Go ahead and compare Google's track record with Apple's or Microsoft's, or any other company. They are about on par, yet Google almost always gives 6+ months notice (often over a year), provides one-click alternatives, allows you to export your data in one of many formats, and more often than not has an in-house alternative that you can use automatically.
Take a look at the wiki page of Google's products [1]. I'd be surprised if you even knew about 10% of those, let alone used more than 1 or 2 of them significantly.
On that list that I've used: Buzz, Pack, Desktop, Reader, iGoogle.
To Be Discontinued: Google Drive Hosting, Google Code.
I don't think it's tired and overplayed. It's a consequence of how Google works; try lots of things and don't be afraid to pull the plug. That strategy is great, and it works.
It's just sometimes the services that are on the edge of being worth Google's time to maintain cause the most backlash because a fair number of people used those services.
Google refused to allow you to pay for them? Each of those services could trivially have had some combination of ads and paid services, but Google's management made strategic decisions to put resources elsewhere.
Only using paid services up front might seem to help but one look at the way Google Apps has been in maintenance mode for years suggests that even that offers only limited protection.
Realistically, how many people would have paid for something like Google Reader after getting it for free for so many years? I think it would be a tough sell and you would have witnessed a lot of complaining...
On the other hand, the negative PR they have gotten from Reader (it's pretty much the poster child for the "Google cancels products" meme) - they probably should have kept it around, even if it was not strategic.
I'm pretty sure anyone working at Google knows how to put ads on a free service but in any case, I saw a lot of people calling for a paid option in the period between the de-featuring for the botched Google+ roll-out and actually closing the service down. That would have been a natural approach: free version has ads with some sort of “Pro” option to remove them.
Any other company? You've named three. The truth is that enterprise-grade companies that serve the business market, and don't give their products away for free or participate in a race to the bottom, typically don't do this type of thing to the degree that Google does. Perhaps the companies that do (including parts of MSFT) are those that deal with developers or consumers, who apparently are more likely not to complain too much (other than to whine online) when they get the shaft. The stereotype, unfortunately, is true. At least that is what I have found.
Also, and this is important, there is the benign neglect phase where they simply keep the product minimally working but don't spend any time to improve it (like google voice).
To the best of my recollection, I don't believe we've ever shut down a product. Was there one in particular that hit you? (Reader, Wave, etc. are totally different divisions.)
Google Code was first turned into a graveyard, then deprecated, and soon will shut down altogether. Reader was not strictly a development tool (which is what I guess you refer to as "your division"), but was heavily used by developers, so it significantly disrupted people's workflow. Same for Wave and any web API (there's quite a few of them which were unceremoniously dumped).
It is not really Google's fault that Code was not a successful product, other than that they simply didn't have the energy/budget/will to actually make it a competitive product with Github.
Saying Google "doesn't have the budget" to do something is fairly preposterous, tbh. I can understand that they were wrong-footed by the rise of Github; Code was built to compete with Sourceforge, when GH didn't even exist. From day 1 though, it was clear that Code wasn't even "better enough" to actually kill SF for good; further development was incredibly slow. When Github hit their stride, Google reacted by just giving up. They didn't even attempt a comeback.
well, google isn't a small business, where "the budget" can be synonymous with "the bank account(s)". like many (all?) large businesses, things are organizationally regimented into units (and sub-units, etc), and budgets are allocated toward each unit.
so while "google" might have funds, the "google code development team" may have a very tiny allocation.
Of course, but budget allocation does not descend from Heaven fully formed, so to speak. Google directors (i.e. Google) decided Code was not a priority, so in the end it's Google-the-company's fault that it had to close.
I think there was no reason for them to keep Google Code as a going concern with the rise of Github. They were not going to do as good a job as Github, and it wasn't something they were making any money off of.
Even if they had been trying.
But yes, like many Google products, it was basically released and abandoned. They weren't trying. (If they had been, maybe we never would have had a github...)
I do think as a public service, they could have left all the code (and wiki documentation) accessible read-only virtually perpetually. Surely they can afford that.
Instead, we get tarball download only, and only until late 2016, after which it's all gone forever, if it hasn't been migrated elsewhere by code owners or third parties.
Google Drive hosting: to be fair, it still works and you guys gave plenty of heads-up (August 31, 2016). Just because they are in different divisions doesn't mean Google hasn't shut things down.
Oh brother. I guess if you're using the strict definition that a product is only something people pay for, then you may be right, but in the more generally accepted meaning that a Google product is something Google makes that people outside of Google use, you're just flat out wrong.
I am so tired of this meme. Every business shuts down products and services. To reiterate this meme every time Google announces something is tired. I'm guessing you still hold Google accountable for shutting down Reader - a service you probably never used - but you go to the well each and every time they announce a service or product.
Multiple people (now three!) have downvoted this, but the idea that everyone who talks about Google service closures is complaining about one specific service--Google Reader--is the "meme" that we should all be quite tired of, as it is trotted out like a broken record every single time anyone points out reservations about Google's track record: no matter the context, no matter how many services have been shut down or cut down since, no matter what announcements Google makes about limiting their policies, and no matter whether the person mentioned Google Reader or not... it is even used against people like myself, who had absolutely no specific interest in Google Reader in the first place :/.
It is nothing more than a knee-jerk way to dismiss a rather easily defended position (due to the large number of closures that have been documented, ones that are more extreme or would have been considered less likely than Google Reader) by stereotyping someone's argument down to not just a "strawman" (an argument that is easily defeated), but essentially a purposely broken and laughable version of their argument so as to purposely ill-inform other people (which I feel the need to separate from a "strawman", as the goal is not to defeat the argument but to belittle the opponent). It is frankly one of the more underhanded argumentation tactics that people seem to enjoy defending here.
The reality is that Reader is a non-issue for most people here, as it isn't something you likely built your business or infrastructure around (and blaming the ones who ended up indirectly relying on it is a stretch), but when Google randomly shuts down, cuts down, or entirely reboots APIs and services--something they have done many times, at the extremely painful end with things like Checkout and Login, but also with things such as App Engine or Charts--the fact that people seem to have seriously just forgotten how recent these things have been is distressing, and is made all the worse by people who insist on perpetuating the "you are just whining about Reader" lie :/.
He linked a TON of projects that have been shut down. The logic of your message seems to be "they only shut down things that aren't popular, therefore you should feel fine using this really niche product that will never expand beyond a small group of developers."
And for the record, I did use iGoogle. And there were plenty of people who used Wave, Reader, Code, and Labs.
Reader and Code are the only ones that had traction.
I object to the idea that it's "pulling a Google" when we could make the same argument about many tech companies. I object specifically here because it's been shown to be a talking point in a whole deck of talking points written for an MPAA smear campaign on Google.
And it's not particularly fair. They should call it, "Pulling a startup" given how often we fail at them.
Coming from an AWS shop, I guess I don't understand the use case(s) for this. What does this enable me to do that I couldn't accomplish by ssh'ing to any of the boxes in my environment?
a. Which SDK? You mean the ssh client?
b. Any custom software is wiped from your account in about an hour, though. Also an SSH re-connect is not that big a deal, IMO.
c. There are SSH client Chrome Apps
I guess I still don't see the point but maybe I'm assuming that one is already running a *nix and can either jump into the terminal there or connect a VM in the cloud very easily. The convenience doesn't seem that large.
For b, unless something is actively running it will presumably just freeze the instance and restart it when you return. GCE is pretty quick to do this, and for shells that's not disruptive.
I use AWS extensively as well, but really like this product plan.
Ultimately what AWSClassic does is give you a lot of baremetal boxes to drop AMIs on. Google's changing that to the containerized model by offering direct services that are scheduled on hardware. As Kubernetes becomes more and more the primary method for deploying apps on Google's managed services, this approach will pay off more.
But it's worth noting that for many people this is what Amazon needs to do as well. As VPCs become the mandatory methodology for AWS, it's increasingly annoying just to set up the bare minimum you need to get a capable shell inside your environment. Even experienced AWS users have trouble getting VPCs right given the state of the current documentation.
> As VPCs become the mandatory methodology for AWS, it's increasingly annoying just to set up the bare minimum you need to get a capable shell inside your environment. Even experienced AWS users have trouble getting VPCs right given the state of the current documentation.
Please. Even for an advanced VPC configuration, you're looking at 1-2 hours tops for setup. If you want point and click, go to Digital Ocean. Complex tools are always going to have a learning curve.
> Please. Even for an advanced VPC configuration, you're looking at 1-2 hours tops for setup.
We use DynamoDB. A lot of it. It is a non-trivial setup to get a dynamic, scaling solution to route traffic efficiently to the DynamoDB endpoints. Our SQS traffic sometimes also experienced extreme latency when we would send large groups of messages.
So forgive me if I greet your derisive tone and silverbacking with skepticism; I've got a product in the market already. I have correspondence from the AWS support explaining that it is in fact very difficult to operate at the scale we want to with some AWS Services that aren't supported, and the S3 connection was also not trivial. This is especially so if you're managing multiple discrete apps that don't coordinate usage or bandwidth spikes, as is common in large enterprise settings. Which is, ultimately, what we're doing.
> Complex tools are always going to have a learning curve.
There is no justification for the current mess that is Amazon's documentation for the VPC feature set. The only reason they get away with it is that AWSClassic is a giant band-aid which will be grandfathered onto customers with scaled products.
Arguing that things like Heroku or Digital Ocean or managed Kubernetes are bad and EC2 is good simply because it uses a hypervisor model instead of a container model is grandstanding and nonsensical, plain and simple.
The startup world has 0 use for this kind of thinking. What matters is how quickly you can deploy, and how efficiently and inexpensively you can scale if you get a hit. Everything else is ego, and useless in this environment.
There are systems that need AWS's model, but they're for specialist products. The vast majority of products can and do deploy in DO or Heroku just fine. And if they're sustaining people and serving customers, who are we to judge?
> There are systems that need AWS's model, but they're for specialist products.
Netflix. Tinder. Reddit. Yelp. Slack. Foursquare.
AWS' model works absolutely fine if you have systems knowledge. If you're looking for a PaaS, it's of course not the solution for you. It feels like your comment is "Why doesn't AWS do what it wasn't designed for?"
Netflix, I agree. But what about Tinder, Reddit, Yelp, and Foursquare makes them unsuited to a PaaS? Is this, "It's not fast enough for my EXTREME METAL CLOSENESS?" arguments? Scaling them isn't inherently made easier by AWS. People said the same shit about AWS as opposed to your own data center.
The nice part about Google's approach is that it gives you both worlds in an interoperable package. But I suppose you're more interested in the silverbacking than in the specific technical arguments.
Another option is to use something like my CFNBuilder[0] to generate AWS CloudFormation templates. It should allow you to create a VPC (with a Bastion box, DNS proxy, NAT instance, etc.) with just a couple of keystrokes. VPCs don't have to be scary. :)
The NAT instances are so easy to overwhelm. You have to scale them along with the services you're using if you have significant use of other Amazon services.
Also, it's not clear to me why I should have to do that part on my own. The fact I need to engineer a solution for high volume access to AWS Services (besides S3) in a VPC is stupid. Especially when I can just keep using EC2C and honestly have an easier time of it, while also not having to migrate and rewrite existing infrastructure to become VPC-aware.
So you are now following me around these threads, defending the concept of AWS from all criticism? Even that which has been admitted and promised to be fixed by AWS's support staff?
For someone with 'toomuchtodo' it seems like a very questionable use of time.
It is great if you're not experiencing the problems we are.
If you use AWS services besides S3 it can get very tricky to not overwhelm your egress and not suffer performance issues. Scaling this configuration in an elastic way is certainly not turnkey in the current VPC environment, and probably won't be until Amazon finishes something like the S3 gateway for them.
I think GCP's long-term strategy is still to move all devops work into Google. They've stated this with GAE for a while, and this is a logical extension of that strategy for app devs who like to "SSH into their machines".
Like a lot of "cloud services", this is indeed nothing you can't do on your own box, and I imagine tens of thousands of people have done it before and will do it again on AWS. Google has now done it in a way that generalizes for everyone. Aside from what it lists on the page (installing the GC SDK and setting up authentication), Google handles all of the opsy stuff you would need to do on your own.
Running a Linux server and installing / authenticating an SDK is not really that hard, of course, but it's one less thing to worry about when developing an app. That's almost always a good thing.
I think their long term strategy is to move to a containerized world. The fact that there are managed Kubernetes instances is very telling, because that was a LOT of work.
Another benefit is it's great for doing tutorials or to use when teaching a class. Instead of having to get all the tools and libraries installed locally on everyone's laptops people can just login to the console, fire up the shell and everyone will have the same base environment configured and ready to go.
You can reach your full cloud infrastructure from any browser; all you need to remember is your Google password. No need for a terminal or ssh: a regular Windows, OS X, Linux, or Chromebook machine, or even a tablet, will do. Quite handy.
Handy? That sounds like a security nightmare. Your instances are accessible by simply being able to gain access to the Google account? Even with 2-factor authentication, this is a serious no-no to basic security requirements. A remote server should be only accessible from specific IPs using ssh keys. No passwords, no world-accessible browser interface.
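On GCE, the specific-IP restriction would look something like this firewall rule (the rule name and CIDR range below are made up for illustration; with the default-deny ingress behavior, SSH from everywhere else stays blocked):

```shell
# allow SSH only from an assumed office range; adjust name/network/range to taste
gcloud compute firewall-rules create allow-ssh-office \
    --network=default \
    --allow=tcp:22 \
    --source-ranges=203.0.113.0/24
```

That addresses the network side, though it doesn't change the fact that the cloud console itself remains reachable with just the account credentials.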
> A remote server should be only accessible from specific IPs using ssh keys. No passwords, no world-accessible browser interface.
Maybe if you own the servers.
Since they're Google's or Amazon's servers you have to be able to administer them with some set of Google or Amazon credentials. Otherwise what would you do if sshd crashed?
I understand your point. I host on Linode, which has Lish (an out-of-band terminal that gives you command-line access without ssh). I have it turned off normally, but again, it can be turned on by anyone who gains access to my account with credentials and 2-factor auth. It should take a lot more than that for an attacker to be able to access your servers.
How does that add any security? Anyone with access to your root AWS credentials still has your IP addresses and your EBS volumes/snapshots and can do whatever they want.
This is the trade-off I've wrestled with. I've kicked around setting up a Cloud9 IDE with access to our environments, so that I can access and repair any of our production environments remotely without needing my laptop.
Nice idea. I do a lot of my development SSHing into a resource-rich VPS. For things that take a long time to run (Haskell stack builds, a long-running machine learning calculation, etc.) it is great to not run that stuff on a laptop.
Cloud Shell is a bit different in the sense you would not want to try anything resource intensive on a micro instance. But for coordinating other services it makes sense.
I think Google needs to add one more thing to their cloud development toolkit: a public version of something close to their Cider web-based IDE that gives you access to work with any code you have in your private Google-hosted git repos, AppEngine, and VPS services. nitrous.io has something like this, and I think that Google would do well to offer something similar for the public version of their infrastructure.
Mark, if you'd like to try our version of a "web based IDE that gives you access to work with any code you have in your private Google-hosted git repos", check out the Source Editor feature of the Cloud Source Repositories: https://cloud.google.com/tools/cloud-repositories/docs/cloud...
Also, we're alpha testing some integration with the Source Editor and the Cloud Shell right now, so anyone that would like to participate in that alpha test can drop me a line: csells@google.com.
Chris Sells
Cloud Developer Tooling
Product Manager
So far the biggest advantage of web IDE for me was ability to bypass corporate firewall.
What most web IDEs still get wrong is version control. I want to work on live code and then push it to version control. I do not want another copy of code that I have to push to server and version control.
That's the rate though. For little sessions here and there during your working hours and per-minute billing, I feel like it'll be unusual to do more than say 5 hours a week even if you often forget to exit before the "garbage collector" turns it off for you. That gets you into below $1/month territory pretty easily.
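Back-of-envelope (the hourly rate here is my assumption for an f1-micro; check the current pricing page):

```shell
# assumed f1-micro rate of $0.0076/hour at 5 hours of shell use per week
awk 'BEGIN { rate = 0.0076; hours = 5; printf "$%.2f/month\n", rate * hours * 52 / 12 }'
# prints: $0.16/month
```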
Disclaimer: I work on Compute Engine but not Cloud Shell.
I understand worrying about burn rate and all that, but when the entire service costs the same as a single cup of coffee per month, it might just not matter. How well it works, what features it provides, and how well it interfaces with the rest of the cloud services you're using are much, much more important concerns than saving a buck a month.
Well, not always. As a student, you don’t want to spend 4$ for a cloud console, plus 4$ for cloud IRC, plus 20$ for a VPS, plus 12$ for your phone contract, plus 120$ for a new phone every 2 years...
I – like most people – don’t let useless devices run on standby either, costing me upwards of 30$ a year for a TV on standby.
As a student, I don’t buy totally overpriced coffee that costs that much either.
You're well past the point of diminishing returns where it makes sense for you to focus your efforts more on increasing income rather than reducing expenditures further. If saving $1 per month is significant to you then it sounds like you've already cut your expenditures to the bone, and there's no more blood to squeeze from that stone. Even working a single hour per week at a part-time job would have better returns than extreme penny-pinching on unplugging appliances and rewriting entire applications to save a few bucks per month. I'm a frugal person, but not illogically so, and I know that income has no ceiling but expenditures definitely have a floor.
I’m a student; I don’t have much time to work during the day, and jobs during the night are not exactly growing on trees.
Also, I’d still cut expenditures equally. Even my parents – both of whom studied law – do this. Actually, most people I know do. Why waste money on useless devices if it takes only seconds to unplug them?
If you're capable of rewriting essential subscription web-based services from scratch instead of paying a few bucks a month for them, you basically do have a part-time job at night, except that your few-$-per-month savings aren't paying you anything close to what you're worth. Rather than spending your time doing this, you should be doing some programming that actually pays you a good wage.
You really believe "most people" use socket strips and a remote control to turn their tv's on and off each time? In the words of respected philosopher Nelson (from The Simpsons), "Ha Ha"
Germany, for example (where I live). Most people don’t live in the US, and for most people, wasting 1.20$ on a product that saves you one or two lines of code to integrate is not okay.
I seriously rewrote several web products myself that I'd have to normally pay 4$ or so per month for. I took over development for the QuasselDroid Android client because I could not afford IRCCloud. I ended up rewriting all the features of Reddit Gold last night as a browser extension because I can’t afford wasting money on Reddit Gold.
It’s problematic wasting money in one place, but for a dozen services at once? Nope, that’s gonna bankrupt a student who has no income except for a small grant.
I followed this thread with much interest, and I'm glad I did.
I was just gilded this afternoon, and was able to confirm for the first time that new comment highlighting is not particularly hard for RES to pull off, it just doesn't do it for political reasons.
Hence, the only way would be to quietly develop an extension for the purpose... or, strike the jackpot, and come across someone who's done just that :D
Might I borrow your extension? =P
(My email address is in my profile if you prefer to respond that way)
Second, the way I load it into the page: http://hastebin.com/raw/sugixafeve.js (You’ll need to adapt that, too, as Userscripts or extensions can’t easily interface with the rest of the page normally)
Last: You’ll need to set up localStorage.authkey to be equal to the auth key you set in the php script, you’ll want to host the PHP script on some server, and you’ll want to create an SQLite database on that server, give your PHP environment write access to that database, and initialize the database with a table:
CREATE TABLE `reddit-silver-comments`( `thread-id` TEXT PRIMARY KEY NOT NULL, `values` TEXT NOT NULL);
After that, just modify the URL in the first script to point instead of my server to your server
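If it helps, the database setup from that last step as a one-liner (the file path is my own choice here; point the PHP script at wherever you actually put it):

```shell
# create the SQLite database and the table the PHP script expects
rm -f /tmp/reddit-silver.db
sqlite3 /tmp/reddit-silver.db <<'EOF'
CREATE TABLE `reddit-silver-comments`(
  `thread-id` TEXT PRIMARY KEY NOT NULL,
  `values` TEXT NOT NULL
);
EOF

# confirm the table exists
sqlite3 /tmp/reddit-silver.db "SELECT name FROM sqlite_master WHERE type='table';"
# prints: reddit-silver-comments
```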
------------------------
If you want, I can write you an alternative version that stores the data in local IndexedDB, but then it’s not synced between devices.
You've definitely given me some awesome ideas with this. My implementation is likely to be slightly different, but your links are an awesome start and will definitely come in handy.
I'll likely take a while to get to actually implementing my idea (>.>), but I think it'd be cool to share my implementation when I (finally) finish it. How best can I get back to you in perhaps a month or two?
Sure, contact me at newsletter@kuschku.de (that email goes per default to my spam folder, if you send me an email today I’ll whitelist your address).
And the implementation is very simple: I just store every timestamp when you visit a thread (unless you visited it in the last 5 minutes already), and then just display those in a list.
Highlighting then all comments that were created after that timestamp is pretty simple.
That’s typical for most of the EU. A group of 500 million people. More than the US. Meaning, there are more people unplugging TVs to save money than there are people who are able to waste their money on Starbucks coffees.
And if more people are unplugging their TVs to save money than there are people wasting money on Starbucks coffees, then maybe, just maybe, saving 1.20$ per month on a VPS is worth it.
(Used OVH for several years, but my server with them has an uptime of 1034 days as of today, so customer service might very well be useless - I've never had a reason to talk to them)
We paid OVH for the renewal of our company's domain name several weeks before its expiration. This domain hosted our google apps for business and more. We received an invoice and bank statements for that. OVH never renewed with the registry.
We only found out about this because of a standard email warning about quarantine and expiry from the country's registry itself (not OVH as a registrar)!
We spent over eight hours over several days talking to OVH. I have been able to verify that the problem was on the side of OVH only, and not on the registry's side at all.
In the end, we made a direct payment to the registry to make sure the domain was not deleted.
There seems to be 5GB of associated storage. I'd be interested to know whether they charge for that, whether it's part of your wider Google allocation, and whether, once you generate data in there, it keeps charging forever even if you only use the service intermittently.
They likely don't know how this is going to be used, so they want to gather usage statistics (both user and resource) to determine what it should cost.
This is a big deal for high school students. They can use a school-issued laptop, but they can't install custom software (like an ssh client), so sites like Cloud9 provide access to something that would otherwise be inaccessible to them.
Source: I'm a high school teacher at just such a school.
I have two GCE accounts - one personal, where I do have the cloud console feature, and one business domain currently on a $300 trial, where the button does not show up.
No button for my domain account as well. Have billing enabled. And when I attempt to send feedback on this issue, I get a "submission failed" error!
Have used micro instances in the past as remote development and deployment environments, and it works OK. Wondering if something in between micro (0.7G) and standard (3.5G) might be better, considering a devenv can get pretty beefed up using certain modern frameworks we won't mention by name ;)
I don't have it on my main Google account, so I signed into a test account that I use pretty much for testing apps, but never for anything serious, and voila, there it is. So I'm guessing it's pretty random how it is being rolled out.
It becomes increasingly clear over time why Google is investing so heavily in the "web pages that have the capabilities, performance and experience of a desktop application" branches of modern browser development now.
And the new Nexus Tablet, everyone hoping it will run Android Studio? It's entirely possible next year we'll see JetBrains and Google convert that environment over to web as well. JB's been doing a lot of that work independently, already.
It's still a VM. It uses the browser to "ssh" into the machine. They've got that functionality for all other VMs as well. Now they offer this "ssh as a service".
This doesn't actually appear to be new; at least not that I can tell.
Google Cloud has supported opening an SSH session to any of your instances right from the browser for a while. I've found it to be a killer feature and am really surprised Amazon Web Services doesn't offer the same thing.
It's a preconfigured VM that includes all the tools for developing for Google Cloud Platform. You just click a button on a console and you have everything you need, with a 5GB persistent $HOME directory.
True, you could do everything this is doing with Compute Engine. The point is that it's always one-click away and there's no setup, maintenance, or (for the time being) cost.
Albeit much different, yet sharing some common ideas, I've created a docker “devops environment” image which, if you provide it with a GCE service account, will auto-activate the Google Cloud tooling and even set up Ansible to work with your GCE project. Emacs and vim are provided, pre-configured to a certain degree. The best usage scenario would be emacs+golang, but there is basic support for other languages. There are some other conveniences too, like bash-completion being enabled (no more needing to remember all the git flags).
If anyone is interested or wants to borrow some ideas:
As a long-time user of Google's web-based terminal (accessed normally through the Google Cloud Platform console by listing your VMs and clicking the SSH button):
I love it because I don't need to worry about key management and can access my machines anywhere that has a browser.
However, my few gripes are:
1) When copying text that spans multiple lines out of the web terminal window (in Chrome on Mac OS X), newlines are inserted.
OT, this is a big reason why I encourage people to learn vi/vim.
It doesn't rely on ctrl, and can be used in environments that do weird mappings to control characters. Anything you can do with ctrl, can be done with regular keys and commands.
I cut my engineering teeth having to work on a lot of disparate systems run by the US DOD: VAX, Tru64, PC Linux, SGI. Having a single dependable text editor that worked across all of them consistently was important. Those environments were neither "subpar", nor were they "broken."
I'm going to spin this up today and see how it integrates with rsync.net.
We have 'gsutil' in our environment, so you can do things like:
ssh user@rsync.net gsutil ... blah ... blah ...
but if there is a google shell that you can use to manipulate those same items, then presumably you could do data transfer to/from rsync.net from within that shell.
Not everyone has a use-case like this, but some folks do ... so we'll see how it works ...
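Roughly the pattern I have in mind (bucket, object, and account names below are made up, and I'm assuming the Cloud Shell gsutil is authorized for the bucket):

```shell
# On rsync.net's side, driven over SSH:
ssh user@rsync.net gsutil ls gs://example-bucket

# Or from inside Cloud Shell (gsutil preinstalled): stage the object
# locally, then push it to rsync.net over SSH:
gsutil cp gs://example-bucket/backup.tar.gz .
scp backup.tar.gz user@rsync.net:backups/
```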
Congratulations for getting this far down the page.
Now open the shell and traceroute (install it) to google.com. :)
(Other fun things: traceroute can't find your external IP; you have ~250Mbps download; you're on a Xeon with 32GB RAM of which you have 512MB; I can't remember anything else.)
I really wish GC had some sort of client-facing VPN service so that I don't have to create VM instances with SSH open to the public. This is a good first step, but from what I can see it doesn't give me access to my LAN, just the gcloud CLI.
At SunSed.com we moved from Linode to GC a year ago! It has been great! The only thing I'm currently looking for is GC HTTP(S) load balancing support for the HTTP/1.1 100 Continue response.
Reminds me that I miss GoogleCL a lot. Info: the project is discontinued and no longer works because it used OAuth 1.0... they couldn't upgrade it to OAuth 2.0 oO
I've been pushing my current company towards Google Cloud because of the billing. Sure, AWS is great with reserved instances and the like, but in our environment I don't know what we'll look like 6 months from now in terms of instances. With Google we'd be burning way less cash with our 100 or so instances.