I worked for about a year with a consulting firm that handled "Y2K compliance". Unlike this Andersen exercise in legal face-saving, it was a real job. Big companies hired us to do a full inventory of their site equipment (this included manufacturing plants, Pharma stuff) and go line by line with their vendors and figure out which components had known Y2K issues, which had not been tested at all, and which ones were fine/had simple fixes. We helped them replace and fix what needed to be fixed.
Y2K was a real problem. The end-of-the-world blackouts + planes falling from the sky was sensationalism, but there were real issues and most of them got fixed. Not trying to take away from this very interesting story of corrupt cronyism, but there were serious people dealing with serious problems out there. "Remember Y2K? Nothing happened!" is a super toxic lesson to take away from a rare success where people came together and fixed something instead of firefighting disasters.
...and 24 years later, after the paperwork has been filed away, someone will still write that the problem never existed. Y2K minimization and anti-vaxx sentiment are two symptoms of problems solved so successfully that the magnitude of the problem disappears from the collective consciousness.
Place I used to work had a cycle of "Everything is working. We don't need quite this much IT staff." and "Everything is broken. Clearly we need more IT staff."
How the people in charge of this stuff never noticed the cycle is beyond me.
I've seen this phenomenon play out multiple times in my professional career, where a considerable amount of effort goes into creating a robust system only for the effort to be minimized by management due to its stability. Somehow preventing a plane from crashing is not as valuable as digging through the ashes.
In agreement with your overall point; accounting and legal are different, though.
Accounting is not a simple cost center; it's on the front line showing the numbers. They can say how much it will cost to not comply with a rule, or how much they saved through creativity or ingenuity. Being that close to the money is a tremendous advantage.
Legal is more distant, but there's a clear scale of how much is on the line. When you review a contract, it's pretty clear what’s at stake if legal work is botched.
That's my main takeaway: if you care about money, you need to be as close to it as possible. At the same skill level, dealing with user security or financial transaction security won't pay the same.
That's why presentation and "sales" type skills can be useful. A bit of doom-mongering internal PR about the problem, then present the solution. Don't just solve it quietly.
> "Remember Y2K? Nothing happened!" is a super toxic lesson to take away […]
See perhaps:
> Normalcy bias, or normality bias, is a cognitive bias which leads people to disbelieve or minimize threat warnings.[1] Consequently, individuals underestimate the likelihood of a disaster, when it might affect them, and its potential adverse effects.[2] The normalcy bias causes many people to prepare inadequately for natural disasters, market crashes, and calamities caused by human error. About 80% of people reportedly display normalcy bias during a disaster.[3]
> Optimism bias or optimistic bias is a cognitive bias that causes someone to believe that they themselves are less likely to experience a negative event. It is also known as unrealistic optimism or comparative optimism.
I got my first IT job doing Y2K compliance. About 20% of our systems broke when the date was changed to 01/01/2000, including the PABX (continual reboot cycle) and the sales leads system, which would crash every few minutes.
> "Remember Y2K? Nothing happened!" is a super toxic lesson to take away from a rare success where people came together and fixed something instead of firefighting disasters.
My cynicism about Y2K comes from the fact that there were a lot of snarky articles written about how certain countries or companies were not Y2K ready but nothing bad seemed to happen to those countries either. It seems like a natural experiment was conducted and the results indicate there was no correlation between good outcomes and the work done to be Y2K ready.
I have no doubt that the armies of consultants did fix real issues but anyone working in software knows there is a never ending set of things to fix. The real issue is whether that work was necessary for the ongoing functioning of business or society.
"but nothing bad seemed to happen to those countries either."
Bad things still happened everywhere, despite all our efforts. How bad depends on your perspective.
Several people suffered a bizarre form of resurrection, which normally Christians would be all over and jolly excited about. Pensions suddenly started paying out, and tax bills became due from people long dead. If you were not a relative of one of those people it did not affect you, and if you read about it you'd perhaps have said "typical" and got on with life.
Some devices just went a bit weird and needed turning off and on again. Who cares or even noticed? Someone did but again, you did not hear about those.
I spent quite a while patching NetWare boxes and applying some very shaky patches to Windows workstations. To be honest, back then, timezone changes were more frightening than Y2K - they happen twice a year and something would always crash or go wrong.
The sheer amount of stuff that was fixed was vast and I don't think your "countries that did and did not" thought experiment is valid. Especially as it is conducted without personal experience nor much beyond a bit of "ecce, fiat" blather.
Nowadays time is so easy to deal with. Oh, do you remember when MS fucked up certificates on Feb 29 a few years back?
Your examples make my point. Some bad things happened but not on a catastrophic level that warranted the level of investment that was put into Y2K projects.
Most of the companies I was familiar with then did not have enough time or resources to check for and resolve every problem, and these problems were very real. At some companies the engineers were given autonomy, authority, and effectively unlimited budget to do literally whatever was required to mitigate any publicly visible failures that occurred. We had a lot of backup plans to keep operations running, sometimes literally paper and pencil, when the inevitable failures occurred. A lot of companies were furiously faking it and throwing people at the problem.
I directly witnessed a few near catastrophic failures due to Y2K at different companies, literally company killers. We kept everything (barely) running long enough to shore up and address the failures without anyone noticing, partly because we had prepared to operate in the face of those failures since we knew there was no way to fix them beforehand. It was a tremendous decentralized propaganda coup. No one wanted to be the company that failed as a result, the potential liability alone was massive.
The idea that what was averted was minor is a pretty naive take. I was genuinely surprised that we actually managed to hold some of the disasters together long enough — out of sight and out of mind — to fix them without anyone noticing critical systems were offline for months. IT was a bit more glitchy, slow, and unavailable back then, so the excuses were more plausible.
When things got missed, things went _badly_ wrong, and that spurred businesses to take rapid action to respond.
The first "Y2K" bugs were when banks' computer systems started messing up the calculations on long-dated financial securities/mortgages - decades before the millennium. Closer to the time, supermarkets started junking food that had a post-1999 best-before date. Those were company-ending problems if not fixed, and so got overwhelming and rapid focus.
"... a lot of snarky articles written about how certain countries or companies were not Y2K ready..." I know you're talking about articles written after Jan 1, 2000. But there were a lot of articles written before then that were Jeremiah doomsday articles, so the snarky articles were reacting in part to equally wrong articles before then.
One article I recall in particular was in Scientific American some time in (IIRC) 1998 or early 1999. It prophesied (I use that word intentionally) that no matter how much money and effort was put into fixing the problem ahead of time, there would be all kinds of Bad Things happening on January 1. It called out in particular computers that were said to be unreachable, like hundreds of feet underwater on oil platforms. (Whether there actually were such computers, I don't know.) There was a sort of chart with the X-axis being effort spent on preventing the problem and the Y-axis being the scale of the resulting disaster. The graph leveled off while still in the "disaster" range, but still presented a clear message: "Give us more money and we can prevent catastrophes".
Somehow I haven't been able to find that article. Maybe SciAm suppressed it when the outcome turned out to be way short of a disaster.
There was also a TV (remember that?) news show or three that planned coverage beginning at midnight December 31 somewhere in Europe (Russia and China were off the map, I don't remember about Japan). Of course the news was that there was no news. (Yes, there were some computer programs that died or spit out junk, but nothing rising to the level of news.) I think it was an hour or two after midnight Eastern Time (US) that they ended the newscast.
Was there a Y2K problem? Of course. But it was largely taken care of before January 1, 2000, Y2K Jeremiahs notwithstanding.
I think it's either going to be a retirement plan for many people who are young-ish IT workers right now, or "optimists hoard gold, pessimists hoard cans and ammo" time with the pessimists being right. And a lot of this depends on how decision makers will remember Y2K.
Nobody is going to waste as much money on it as they did with Y2K and it's way more common for computers to actually use epoch time... but I think almost everything uses 64-bit time now and we're still more than a decade away.
(Don't reply with examples of things that use 32-bit time.)
That narrative exists only among the uneducated public. Every Y2K project of course documented its many findings and fixes.
In fact, the REASON Y2K got so much budget and attention was the early companies that started discovering the issues and alerted the others. Notables include IBM, General Motors, Citibank and American Express.
Agreed, it was a nice success. We also did pretty well on the paperless office, the ozone layer and acid rain, automobile and airplane safety, and the war on cancer, and now obesity and diabetes.
The public were not uneducated about this. If you remember how Y2K was presented to the public, it was ridiculously extreme - planes crashing, economies collapsing, etc. None of that happened, and not because all the bugs were fixed.
You can't fix all bugs, so if the consequences really were going to be catastrophic then you'd expect at least a handful of catastrophes to sneak through, but that didn't happen at all.
> None of that happened, and not because all the bugs were fixed.
No one is in a position to assert that. We have very little idea how fragile our civilization is. Perhaps it's pretty robust, and networks of interconnected problems (like Y2K) stand no chance of snowballing out of control. Or perhaps it's really, really fragile, and surprisingly little stands between us and a profound collapse.
It's very difficult to be certain, because it's such a complicated system, and one that we can't really test to destruction.
Would all the Y2K bugs have caused a widespread systematic failure if they'd gone un-fixed? Probably not... but maybe? Just like all low-probability, high impact risks, it's very hard for us to reason about.
How much money is it wise to spend on averting the risk of giant asteroid impacts? Hard to say. Probably more than you think, though.
The fact that it went so well is not evidence that no original issue existed. On the other hand, maybe it's evidence that we over-invested a bit into diminishing returns.
A perfectly fine-tuned response would have left a little bit more to fix on January 1. Of course, expecting society to perfectly fine-tune the response to something poorly understood is hard.
Which only makes it more interesting. There are many takeaways one can have from this article; one of them is that:
- Problem X is serious.
- Y will address problem X.
is incomplete reasoning, or even an outright fallacy. Just because it's claimed that Y will address X doesn't mean it actually will.
Especially on high-stakes issues ("our business will collapse", security, safety) or emotive issues (social justice, security) this type of flawed reasoning seems to be a common problem.
> Finally, it was a fake job because the problem that the Conglomerate had hired Andersen to solve was not real,
I don't really like the whole popular understanding of the y2k thing. Substantial problems with infrastructure were fixed. It wouldn't have been Mad Max, but business operations could have been affected for weeks. Instead, we had almost smooth sailing because there was an appropriate amount of preparation beforehand.
(Not to say that there wasn't graft like he describes as part of that preparation).
I really hate this take some people have (especially people who weren't there at the time) that Y2K wasn't a real problem. It was a real, large problem affecting businesses across the globe. But the thing is, the problem wasn't that "everything was going to fail catastrophically at the turn of the year". It was that "some things could fail catastrophically at the turn of the year, and we didn't know which things would". So a lot of time and money went into figuring out which things actually had a problem, and fixing them. It was entirely possible that, if we didn't fix something, planes _could_ fall out of the air, or energy providers could shut down for long periods of time.
So we spent lots of time and money finding out what things needed to be fixed. And "it turns out that not that many things really needed to be fixed" is a GREAT result from that time and money; arguably the best possible result. But it doesn't invalidate that the time and money was spent, in the same way that "I didn't get into an accident" doesn't invalidate the fact that wearing your seat belt is a good idea.
I worked on a Y2K project in 98-99 for a major car manufacturer. They tested the ERP system for Y2K (in 1997) and it failed, in a way that would make the business fail. They replaced it via this project and the business continued to be successful without interruption.
Another company I know of simply backdated their whole system by 10 years. This worked, but unfortunately the contract-cycle renegotiation reminders then did not appear, and competitors took many of their contracts over the next year or so. They went out of business within a couple of years.
I implemented a new manufacturing and retail system in 1992, and we used 4 digit years. The company owner complained that they were a needless expense and annoyance and that there was plenty of time to deal with Y2K. They continued to use the system well into the 2000s.
The space shuttle never flew over New Year's Eve/Day, as its computers couldn't handle a single year rollover, let alone a century. Some use cases can simply avoid the problem.
Strangely enough, a few months ago I came across the bug fix for that Space Shuttle problem. "CR 93160: Year End Roll Over (YERO) Reset. Allows in-flight reset to January 1 without an IPL and complicated procedures while maintaining a good state vector."
(IPL is "Initial Program Load", IBM-speak for a reboot.)
> It's infuriating.
> I worked on a Y2K project in 98-99 for a major car manufacturer. They tested the ERP system for Y2K (in 1997) and it failed, in a way that would make the business fail. They replaced it via this project and the business continued to be successful without interruption.
Same.
I worked on multiple enterprise projects for Y2K where we advanced the dates on the customer's system and ran it so we could see the magnitude of the problems and prioritize the remediation process.
There were some things that were temporarily annoying and there were some things that were critical to the business and could not be worked around manually by throwing people at it, the system needed to be fixed.
I read it as: a real problem wasn't identified by the management side of the company. Finding typos in spreadsheets and printing corrected versions to slap in a binder was a CYA event; the article says as much.
The whole article was full of supercilious snark. A lot of it was aimed, rightly, at Andersen, but it does take away from the fact that Y2K was a real issue that affected millions of computers worldwide.
I patched hundreds of vulnerable systems, rewrote code for mission-critical software, and still ended up with a call-out to a halted manufacturing plant on January 1st, 2000.
If there hadn’t been a global, coordinated response, things would have been a lot worse. And let’s not forget that there were life-changing effects of the bug beyond the snarky “garage door opener” issue mentioned in the article. [0]
Back when I was just out of school, I worked as a computer tech guy. Our company got hired to test all the PC+BIOS version combinations a certain department had, to ensure they'd work after Y2K.
Spent several weekends rigging up computers, and running a battery of tests on them.
Did actually find a handful of combos which misbehaved, mostly incorrectly handling the leap year IIRC. None of them had NTP running, so it could have been an issue. A few didn't handle the transition well at all. Most got resolved by a BIOS update.
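For anyone wondering why the leap year comes up in these stories: 2000 was a leap year because of the divisible-by-400 exception, and code that only knew the "centuries aren't leap years" rule treated 2000 like 1900 and got February wrong. A minimal C sketch of the full Gregorian rule, just to make the failure mode concrete:

    #include <stdbool.h>
    #include <stdio.h>

    /* Gregorian rule: every 4th year is a leap year, except centuries,
     * unless the century is divisible by 400. Code that skipped the last
     * clause treated 2000 like 1900 and broke on 2000-02-29. */
    static bool is_leap(int year)
    {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    int main(void)
    {
        printf("1900 leap? %d\n", is_leap(1900)); /* 0 */
        printf("2000 leap? %d\n", is_leap(2000)); /* 1 -- the special case */
        printf("2100 leap? %d\n", is_leap(2100)); /* 0 */
        return 0;
    }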
I recall thinking after 2000 came and went that sure, a fair share of superfluous work had probably been done, but the reason it seemed like such a nothingburger afterwards was because a lot of work went into ensuring exactly that.
The 2038 bug is less than 14 years away at this point. There is an astonishingly greater amount of software out there compared with 24 years ago. I don’t want to be a doomsayer but it’s probably time to keep this in mind anytime you implement an epoch going forward.
Right now Debian is in the middle of making sure that all of the 32-bit software has 64-bit time_t. Descendant distributions will benefit from their work.
Are other Linux distributions doing the same thing? Sure, and there was a lot of work in the kernel (mostly in the 5.x series) to get it all nailed down. NetBSD and OpenBSD already tackled it.
But "all" can't be guaranteed, because people insist on keeping elevator controllers and industrial processes and anything which hasn't actually had capacitors explode and resistors melt running for an extra decade.
A lot of embedded software is still running on 8 bit microcontrollers. The market has shifted towards 16 and now 32 bits, but 64 bits isn't that common yet. And a lot of the software libraries provided by microcontroller manufacturers are terrible and absolutely affected by Y2k38.
Expect lots of IoT or home security devices to malfunction. Or, more critically, industrial equipment, fire alarm systems, factory automation, etc. A bit over a decade ago I helped develop an industrial spark extinguishing system. It's still sold and each setup will probably be used for 20-40 years. I know it will suffer from Y2k38, though probably (hopefully) only in the event logging. When I raised concerns over this the answer from my boss was "I will be retired by then, don't spend time on that".
> When I raised concerns over this the answer from my boss was "I will be retired by then, don't spend time on that".
But... isn't capitalism supposed to fix this kind of reasoning? Turns out that what you really need is people who care about doing the right thing, not just about money. Yes, often the right thing to do in a business context will lead to more money. But when time-frames expand and you are not the real owner...
Oh, yeah, there are protocols, file formats, old devices, all kinds of interesting things. But more importantly, unlike last time, they are very different from each other, so much so that it's hard to even anticipate what may break.
I imagine a couple of days with the internet misbehaving everywhere is a safe bet.
Even talking about software running on servers, I think there’s more 32-bit software than you assume.
But my real worry is embedded systems, where very little is 64-bit. Almost every embedded Linux system, IoT crapware, Android anything. Almost none of it will be able to keep time after 2038.
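To make the 2038 arithmetic concrete, here's a minimal C sketch (not tied to any particular device or OS) of what happens when a signed 32-bit count of seconds since 1970 runs out:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* The last second representable as signed 32-bit seconds since
         * 1970-01-01: 2^31 - 1 = 2147483647, i.e. 2038-01-19 03:14:07 UTC. */
        int64_t last = INT32_MAX;

        /* One second later, truncated back to 32 bits the way a 32-bit
         * time_t field stores it. The conversion is implementation-defined
         * in C, but on common platforms it wraps to a large negative value. */
        int32_t wrapped = (int32_t)(last + 1);

        time_t ok  = (time_t)last;
        time_t bad = (time_t)wrapped;

        printf("last 32-bit second: %s", ctime(&ok));  /* Jan 19 2038 (local time) */
        printf("after the wrap:     %s", ctime(&bad)); /* back in December 1901 */
        return 0;
    }

A 64-bit time_t makes the same counter good for hundreds of billions of years, which is why the Debian and kernel work mentioned above is largely about finding every place a 32-bit value got baked into an ABI, file format, or protocol.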
> My interview consisted of about twelve minutes with a laconic, mustachioed, middle-aged Arthur Andersen manager named Dick. (One of the services Andersen had been asked to provide was to help hire the Conglomerate’s Y2K team.) “On a scale of one to ten, what’s your knowledge of computer software?” he began. I paused for a moment, unsure of whether our interview would include a demonstrative component, as had so many previous interviews for jobs I had not gotten. But his office was empty. I couldn’t see how he would test me. I said eight.
On a private forum I'm part of there was an engineer who ascended into upper management at a startup. He had an entire thesis that modern tech interviews were bullshit and could be replaced with short conversations and asking candidates to self-rate their abilities.
He posted long and confident writings about how he was going to save his company so much time and attract the best talent by having the shortest interviews and trusting candidates self-ratings of their abilities.
He disappeared for a long time until one day he came back and wrote a post-mortem about his hiring experiment. The quality of their new hires had collapsed. It was so bad that some of their previous top employees were quitting because they were tired of all of the unqualified people joining the company.
As he learned, if candidates sense that their interviewers aren't going to examine their claims too closely, many of them will inflate their claims as much as they think they can get away with.
In hiring, it only takes one candidate grossly exaggerating their experience and abilities to push the truly qualified candidates into 2nd place (aka not getting the job).
We all like to think that we can separate the liars from the truth tellers, but when you're trying to do it in extremely short interviews with questions going both ways, there isn't much time to deep dive.
You need to slow the hiring process significantly to slightly decrease the chances of getting burned. When you are trying to scale up this is very painful. Hence, the common discussion about the problems that occur in a company experiencing rapid growth.
It appears that the author was indeed not too closely familiar with the premise of the Y2K bug, as they mention the change "from 19 to 20"
> ... the Y2K Bug, and it prophesied that on January 1, 2000, computers the world over would be unable to process the thousandth-digit change from 19 to 20 as 1999 rolled into 2000 and would crash ...
That wouldn't be problematic, since the numbers don't loop around (like when going from 99 to 00).
A lot of these systems stored the year as two digits. So 19 to 20 wasn't the problem. The problem was that mainframe-based systems are/were almost entirely based on fixed-length data representations: COBOL copybooks, tape and DASD datasets (i.e. files). Expanding all those from two bytes to four was a lot of work and risk in some organizations.
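A toy illustration of both points (the record layout and field names below are invented for the example, not taken from any real copybook): with a two-character year the arithmetic silently assumes the 1900s, and because every field in a fixed-length record sits at a fixed offset, widening that field shifts everything after it, which is what made the fix so laborious.

    #include <stdio.h>

    /* Hypothetical fixed-length record with a two-character year field,
     * in the spirit of the copybooks described above. */
    struct policy_record {
        char account[8];
        char expiry_yy[2];   /* "99" means 1999; "00" means 1900 as far as this code knows */
    };

    static int years_until_expiry(const struct policy_record *r, int current_year)
    {
        int yy = (r->expiry_yy[0] - '0') * 10 + (r->expiry_yy[1] - '0');
        return (1900 + yy) - current_year;   /* the implicit "19" is the bug */
    }

    int main(void)
    {
        struct policy_record r = { "ACCT001", { '0', '5' } };  /* meant to be 2005 */

        /* Run in 1999, this says the policy expired 94 years ago rather
         * than expiring in 6 years. */
        printf("years until expiry: %d\n", years_until_expiry(&r, 1999));
        return 0;
    }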
A bit perplexing that the author, after all that time thinking about and working on Y2K, isn't really clear on what the bug was. Nor that under all the spreadsheet filling, meeting-note taking, etc. there were some IT deckhands running around updating PC BIOSes and making sure software updates were in place so people could keep doing their daily office work when they got back to the office on Jan... 3rd.
It's not surprising at all that some corporate analyst doesn't know shit about the technical domain. I see it all the time in cybersecurity, there are two types: the hackers who understand the different layers of technology, how they interact, how they can be broken and how they can be fixed, and the checklist monkeys who don't even understand the questions they're asking.
I felt a lot of relief on the morning of the 1st when everything I had done was working perfectly. All the tests showed that, had it not been fixed, there would have been a lot of data corruption.
I have been through another one of these, where one part of a company changed their year entry from 20XX to just XX to save the people entering it time, and multiple systems did not defend against this and suddenly started calculating all intervening years from the time of Jesus up to the present day! It was catastrophic: that simple change caused several days of disaster recovery, and months of defensive programming work as well. Date/time errors can be devastating to business.
Stupefying that after Y2K, a company is willing to deliberately make this mistake to save two keystrokes. They deserve whatever bad will come to them in 2100.
If I save a little bit of work now, my life is better. In the unlikely event that the system is still used in 76 years, it will not be my problem.
I'm generally against rolling problems onto future generations, but accepting a minuscule risk of something going somewhat badly, possibly not even within my children's lifetimes, is not a core concern.
There is a very powerful myth taking hold that the Y2K problem wasn't real, but some kind of false alarm or scam. It's very strange and scary. It's not just isolated people getting it wrong, I see it popping up all over the place. We really need to push back on it.
My sister's department reviewed every line of code in every product for date/time issues, and corrected them. But she was probably anomalous, by all the anecdotes. She ran a tight ship, and did things that actually needed to be done.
The article is less about Y2K itself than it is about the absurdity of work in the corporate consulting world. It’s a good read.
I wasn’t there for the play-doh management fad but I sure experienced capital A Agile and honestly can’t tell which is more infantile.
In a sense, the corporate environment of constant indefinite emergency described in the article is the same as arbitrary deadlines and non-stop “sprints”.
The Y2K problem would have manifested as the year 2000 being interpreted as 1900. As such, it could have affected anything involving dates (age verification, payment due dates, mortgages, etc.). Other more extreme issues (power outages) would have been unlikely, but it is impossible to predict all the knock-on effects Y2K would have had.
There were several issues that happened as a result of Y2K.
Maybe you are being downvoted because this is “obvious”, but as it turns out it is not obvious enough for the TFA itself to not have this error:
> it prophesied that on January 1, 2000, computers the world over would be unable to process the thousandth-digit change from 19 to 20 as 1999 rolled into 2000 and would crash
I came across a reasonable amount of perl code that happily formatted the year 2000 as 19100. The functions that converted from seconds-since-1970 to "human readable" parts returned a year relative to 1900. Most people would format using sprintf("%d/%d/%d",$d,$m+1,$y+1900) but some code I inherited instead used sprintf("%d/%d/19%d",$d,$m+1,$y). I never did figure out if it was due to stupidity or malice.
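The same trap exists in C, which is where Perl's localtime got those semantics: struct tm's tm_year is years since 1900, not a two-digit year. A minimal sketch of the correct formatting next to the classic "19%d" mistake:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);
        struct tm *t = localtime(&now);

        /* Correct: tm_year counts years since 1900, so add 1900. */
        printf("%04d-%02d-%02d\n", t->tm_year + 1900, t->tm_mon + 1, t->tm_mday);

        /* The classic bug: hard-coding the "19" prefix. In 2000, tm_year was
         * 100, so this printed "19100"; run today it glues "19" onto a
         * three-digit number. */
        printf("19%d-%02d-%02d\n", t->tm_year, t->tm_mon + 1, t->tm_mday);
        return 0;
    }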
I kept seeing this behaviour on some websites into the late 00s and possibly even into the early 10s; your comment made me realise I haven't seen it recently though :(
A big part of that problem was Javascript's terrible Date API, which was practically a carbon copy of the (equally terrible) java.util.Date. For no good reason, its Date.getYear() method returned the number of years since 1900.
I remember booting up an emulated CADR for the first time and, after entering the date as something, something, 2008, being startled to see the status bar read a year of 19108.
I also did my quick Lisp sanity test of saying (* 4 3) and expecting 12 -- but I got 14 instead. It was then that I learned the CADR speaks octal by default.
Feels very relatable. A couple of years after Y2K I was asked by my employer at the time to join a multi-billion internal project that was run almost exclusively by contractors. I think the task was to convert a Fortran monolith into Java, but after 20+ years it could also have been Cobol.
The biggest contingent of contractors came from a company of the AA universe that had survived the Enron scandal. My task was to advise on how to interface with a certain internal system. Easy job. The amount of actual code written was minimal and these folks were big on generating class scaffolding from UML paintings.
Funnily, there was a constant rotation of contractors and they didn't realize I was internal. So I was treated by them as one of their own. And it was six weeks of education that you can't get in university or business school.
I learned the art of being billable, which is miles away from doing any work. Matter of fact, the hour didn't have 60 minutes but a hundred. And whatever you faked doing, you did it with double-digit precision. You documented that you documented someone's proposed, i.e. documented, code change that day. Two minutes spent, 1.89 hours documented. Yes, GitHub and the like would only come the following decade. We are talking Word documents here. Being billable meant that people would propose changes and then retract the proposal with another proposal. At least the amount of side effects was limited.
I also learned the art of fake process. The contractors didn't necessarily know what they were doing. But they knew of doing things. So lots of meetings were happening, and random "specialists" and "experts" were flown around the planet to delight meetings with stuff they knew of. Having these people disseminate some talking points from recycled slide decks was .. uhm .. fashionable.
Third was the art of faking yourself. I vividly remember the guy coming in from Tuesday to Thursday who would bring and arrange the same set of golf balls at his desk. I must have asked him if he played or something. In the end it turned out some important dude had made him caddy for an offsite golf event of some partners. And he had picked up the balls that were lying around the course. Some consider that a big no-no. But apparently it gave him some status, since everyone thought the guy had played at said offsite.
Highlights from the weeks being embedded with the contractors include a contractor-only dinner. No idea anymore how I managed to be there, but some partner gave a rousing speech about how they were at the forefront of innovation with this project, and everyone frenetically clapped at the end. Memory has it that the red wine was good, the food less so. Another highlight was an educative session by one of the flown-in experts on what a FIFO queue was and how that aided in-order processing of arriving events. Frenetic clapping ensued.
Now this is all 25-ish years ago and perhaps memories are more pointy than reality was. What is certain though is that the number of lines shipped into production was zero. Zilch. And that was not because someone realized all of it was fake. It was because the company eventually went out of business and was bought, sold, and merged for its customer base and not its systems.
Ugh, upvoted because I've seen this too. Lots of "businessing" being done so it could be billed, but nothing actually gets produced besides reports and talks and dinners and meetings and documentation, and then, the project gets quietly scrapped until the next one starts and it's rinse-repeat. It feels like perpetual money laundering of shareholder value into contractors (and employees, honestly, too).
> It feels like perpetual money laundering of shareholder value into contractors (and employees, honestly, too).
I've come to view the economics of tech this way too (although it's much more true of contractors than of employees). Workers are supposedly the downtrodden sucker class beneath BigWigs. But, if positioned the right way, workers can make out like absolute bandits, often at the expense of executives and majority shareholders. I've worked many useless, bloated projects that lined consultants' pockets before ending in layoff-inducing/executive-firing failure, projects that - ironically - were initiated at the behest of said executives themselves.
In a general sense, I don't really understand where the value is being generated in tech anymore. One of the higher-level things that has slowly gotten me disillusioned with the industry. Just a constant parade of companies either 1. never turning a profit or 2. milking one cash cow built in the 90s, both types of companies just churning stupid projects over and over so that engineers can write new things on their resumes and executives can go from Director to VP.
Bad as they could be back in the day, what would enterprise-wide Y2K remediation projects have looked like with Agile, Jira, Cloud, DevOps, Open-Source packages/dependencies, and today's compliance/risk/audit requirements?
I can't bring myself to read this monolith because the two premises at the top are wrong:
1. Y2K was a real phenomenon that needed fixing in various aspects, and
2. fraudulence is not a problem of "capitalism" - it's a problem of humanity. Hell, the biggest quote used to explain the Soviet economy was "they pretended to pay us and we pretended to work".
Lots of things would have broken that didn't, because we fixed them before they broke. The ones that we missed broke, and we had to fix them while down. It was no fun fixing them before 1/1/2000 and even less fun fixing them after.
Occasionally you still see mentions of "Y2K Compliance" on websites (generally low traffic legacy websites for sleepy businesses that still happily tick over in some obscure niche). I get nostalgic every time.