Hacker News: deadfece's comments

I always say something like "Let's make it work so well that it's completely boring. Let's build tomorrow's 'boring'."


I just use a bookmarklet for Archive.Today; it's easy enough.

https://gist.github.com/573/e5cf230a03c5d53f848b58c3ced0bc95
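For reference (this is a sketch of the idea, not the linked gist), an Archive.Today bookmarklet boils down to building the "newest snapshot" URL for the current page and navigating to it:

```javascript
// Sketch of the bookmarklet logic: build the Archive.Today "newest" URL
// for a given page URL. The /newest/ path and URL-encoding are assumptions
// based on how archive.today is commonly linked, not taken from the gist.
function archiveTodayUrl(pageUrl) {
  return "https://archive.today/newest/" + encodeURIComponent(pageUrl);
}

// As an actual bookmarklet, the same idea collapses to one line:
// javascript:location.href="https://archive.today/newest/"+encodeURIComponent(location.href)
```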


Solution building: I don't really see anything in that use case that would prevent it from using inotify/incron, so I would probably evaluate that side first.
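For what it's worth, the incron side of that evaluation can be sketched in a single incrontab line (the paths and handler script here are invented for illustration):

```
# /etc/incron.d/uploads — run a handler whenever a file finishes writing
# ($@ expands to the watched directory, $# to the file name)
/srv/incoming  IN_CLOSE_WRITE  /usr/local/bin/handle-upload.sh $@/$#
```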


This is not a good look, Matt.


Allow me to offer my opinion without reading the article:

I can and do pay for news, I just dislike the bait and switch with modals/popovers that much. Now that I can no longer block domains in my Google search results, I can't remove those paywalled sites from relevancy and it's hard to keep track of everyone who only lets you read the first paragraph and a half before sticking their hand out asking for $10.

ETA: I have now read the article and have no revisions to my statements.


I have to admit I wasn't familiar with IoC for appdev. I only knew of that initialism as "Indicators of Compromise" from infosec.


The article honestly reads as if written by a very smart sysadmin with zero cloud experience.

1:1 lift and shift is always obscenely more expensive. In this case, if the author had been in charge of the migration, then yes, the services would have cost them dearly to operate in the cloud.

I'm sure if I was personally put in charge of moving some aspect of IT into an unfamiliar mode of operation, my inexperience there would make my approach insanely expensive as well.

That says nothing about the target, except that having undertrained and inexperienced staff in charge of its design and implementation is probably foolish from a financial perspective.

There are obviously thousands upon thousands of scenarios where moving to commodity cloud is an absolute slam dunk in aspects that are important to the subject business.

Unfortunately we really get no insight into what the workload truly is in the article's comparison. There's no mention of solution aspects like app architecture, security, HA/DR, SLA, RTO/RPO, security or backups [1]. We only get what is plainly a tunnel-vision view of a comparison.

It's almost like the author doesn't make solutions for a living.

Maybe the author actually realizes their blind spot, and is secretly utilizing Cunningham's law to crowd-source a relatively free solution from the professionals and amateurs in internet comment sections.

The good architects don't work for free. There's a reason why Troy Hunt's web services cost him vanishingly little to operate, and it's certainly not by running IaaS VMs 24x7x365.

[1] I mentioned security twice as part of an ongoing effort to make up for all the times CyberSec/Infosec teams have been forgotten in the planning process. =P


>There's a reason why Troy Hunt's web services cost him vanishingly little to operate

And I thought that was because he is on a Cloudflare premium plan with a workload where 99.8% of requests are cached.


> 1:1 lift and shift is always obscenely more expensive.

Is it? Managed services cost a lot more than a VM. Rewriting software costs a lot more.

Where are the savings?


Fewer IT staff for systems mgt. Reduced costs in off peak hours with on-demand instances. Right sizing resources to application needs.

There are wins that one can have, but nothing is guaranteed. It will vary by application, size and staff.
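The off-peak piece of that, for instance, can be as simple as a cron-driven script. This is a hedged sketch: the tag name (`Schedule=office-hours`), the hours, and the script path are all invented for illustration, and it assumes a configured AWS CLI.

```shell
#!/bin/sh
# Sketch: cron runs this hourly; outside office hours it stops tagged dev instances.

is_off_peak() {
  # off-peak = before 08:00 or from 19:00 local time (hours are an assumption)
  [ "$1" -lt 8 ] || [ "$1" -ge 19 ]
}

stop_tagged_instances() {
  # Hypothetical: stop every running instance tagged Schedule=office-hours.
  ids=$(aws ec2 describe-instances \
    --filters "Name=tag:Schedule,Values=office-hours" \
              "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].InstanceId" --output text)
  [ -n "$ids" ] && aws ec2 stop-instances --instance-ids $ids
}

# Illustrative cron entry: 0 * * * * /usr/local/bin/offpeak.sh
# is_off_peak "$(date +%H)" && stop_tagged_instances
```

A matching `start-instances` job in the morning completes the pair; the saving is simply the instance-hours you no longer run.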


> Fewer IT staff for systems mgt

This hasn't been my experience. Replace sysadmin with cloud engineer/architect, salary bump, no reduction in quantity. This assumes you are mildly competent as an organization.

On managed services, say the database. My experience is that the extra costs of the service are larger (usually much much larger) than any salaries or head count reduction. I'd rather employ more people than not, and actually control my data, given the choice. Particularly when the savings are questionable or false.

I generally prefer a lower dependency count. Code and vendor. Even at modest immediate cost increases, you gain better flexibility and there are fewer things to bite you.

> Reduced costs in off peak hours with on-demand instances.

Agreed. You do increase system complexity to accomplish it. But there are actual cost savings here.

> Right sizing resources to application needs.

This isn't unique to cloud, you can do this in any hypervisor. This is a basic feature.

> There are wins that one can have, but nothing is guaranteed

It does not "always" hold. This is critical missing nuance in the original claim.


I've worked places where they had so many databases from M&A that half of their FTEs were wholly preoccupied all year with outstanding DB maintenance: fixing backups, managing storage, patching, and applying moves/adds/changes.

For them, managed DBMS was a life-changing event. As soon as they had RDS or even Azure SQL MI, they were begging the cloud team for more, so they could get their team back.

In some businesses, it's definitely not a big loss to have a large portion of your team tangled up in infrastructure management, but for others, that constraint is an impediment to their line of business. Some businesses are missing opportunities for want of infrastructure being able to move fast enough.


Much of that can be automated (how do you think AWS does it?), but I get the point. Still, the OP said: "You save money by re-architecting and using more managed services." To me, that isn't implying "save money from personnel costs," it's discussing pure service cost.


That maintenance problem is sometimes also expensive in terms of service costs. Which is more of a tech debt problem than a cloud comparison, honestly, but they're often inseparable.

Extended support costs on old hardware and software are sometimes astronomical. Then you're paying all of that to get something that, compared to modern gear, performs like ass and just breaks all the time.

I've also had C-suites tell me I have two months to start a project and have all the gear live, only to find we're allowed to order direct from the manufacturer alone, with a 90-day lead time to our door. Oh boy. That scenario is difficult to put into terms of money. Sometimes having that one-hop instant supply chain into a cloud service is a huge business enabler, sometimes it's not.


I regret to inform you that you can also incur extended support costs with managed services.


Ah, I thought we were having a discussion in good faith. Later!


My main advice on Windows Containers is to avoid them at all costs.

A lot of the things that ran in Windows Nano Server worked even better in Linux containers, and a full Windows Server Core container is usually only necessary for .NET Framework, where you will need all of the luck in the world to help you with that problem.

Packer and Ansible are also much harder to use for Windows Containers, so if you're already using those to configure VM images and Ansible to perform config mgmt on VMs, get ready for some headaches to make that work against Windows Containers.


Former Windows Container wrangler. First off, why are you using Ansible/Packer against Windows Container images? You wouldn't fault Linux Containers for being terrible to use with Packer/Ansible because you shouldn't be doing that.

Second, yeah, its only use is Windows-specific runtimes like .NET Framework and various Win32 API applications. Windows Containers work well enough if you are good enough with PowerShell to handle the various levels of bullshit that come with Windows.


> First off, why are you using Ansible/Packer against Windows Container images? You wouldn't fault Linux Containers for being terrible to use with Packer/Ansible because you shouldn't be doing that.

You can build Linux containers just fine with Packer/Ansible. But to your point I already stated why I use Ansible and Packer for container images: "if you're already using those to configure VM images and Ansible to perform config mgmt on VMs"

I currently build a number of images with Packer and Ansible across Linux and Windows. I choose not to rewrite all of that from scratch in a crummy Dockerfile when I can just have Packer and Ansible make the images alongside everything else they're already making.
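To make that concrete, a Packer template wired to an Ansible playbook for a container image can be quite small. This is a hedged sketch (registry, playbook path, and base image are all made-up names), using Packer's Docker builder and Ansible provisioner:

```hcl
# Hypothetical Packer HCL2 template: build a Linux container image with the
# same Ansible playbooks used elsewhere, rather than a separate Dockerfile.
source "docker" "app" {
  image  = "debian:bookworm-slim"
  commit = true
}

build {
  sources = ["source.docker.app"]

  provisioner "ansible" {
    playbook_file = "./playbooks/app.yml"   # assumed playbook path
  }

  post-processor "docker-tag" {
    repository = "registry.example.com/app" # assumed registry
    tags       = ["latest"]
  }
}
```

The appeal is exactly the one stated above: one toolchain and one set of playbooks across VM images and container images.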


I have used Ansible for my Windows desktop setup for several years now, and I haven't fought those things in it. It has been pretty straightforward as I'm mostly just installing software and pulling some configs and scripts down.
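A desktop-setup task list along those lines might look like this (a sketch only: host name, package list, and file paths are invented, and it assumes the `chocolatey.chocolatey` and `ansible.windows` collections are installed):

```yaml
# Hypothetical playbook: install software and pull configs down to a Windows host.
- hosts: workstation
  tasks:
    - name: Install tools via Chocolatey
      chocolatey.chocolatey.win_chocolatey:
        name: ['git', '7zip']
        state: present

    - name: Pull down a PowerShell profile
      ansible.windows.win_copy:
        src: files/profile.ps1
        dest: 'C:\Users\me\Documents\PowerShell\profile.ps1'
```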

