> "We have no indication of this," company President Brad Smith told New York Times reporter Nicole Perlroth. Perlroth said the company stood by a statement it issued on Sunday saying it had no indication of a vulnerability in any Microsoft product or cloud service in its investigations of the hacking campaign."
I find that not applying electricity to computers makes them perform with flawless reliability. I had an old, dorm refrigerator-sized server as an end table for many years. It never failed.
Edit: Nobody else ever gave it instructions that went counter to mine, either. Security vulnerabilities are 100% non-exploitable without power. I should probably start an anti-APT business with that knowledge.
Tomorrow on HN: "Powering air-gapped computers remotely using some spare copper wire and a bit of python..."
There's a reason the old go to for secure deletion was throwing your hard drive into a wood chipper... (though that's bad for the environment, please don't do that)
We had mandatory registration at restaurants here when they were still open. Some had paper forms, especially initially, others used web forms.
People were worried that it would be easy to grab the filled-out forms and get a restaurant's guests' phone numbers and addresses (and it probably was). Predictably, one of the rushed web forms had non-existent security, making it easy to get all of the restaurants' guests' phone numbers and addresses.
As I understand it from my readings here on HN, that will be a major undertaking, since the malicious software that Microsoft discovered in its systems is actually Windows 10.
I must be experiencing a real case of the Baader-Meinhof phenomenon. I hadn't noticed this attitude towards Rust until yesterday, and now I've seen it multiple times since then.
It is a bit of a meme, and I assume people are downvoting you due to its prevalence, but I suspect if you don’t spend a ton of time on cynical sites like HN and Reddit (at least the programming subreddits) you could very well miss this attitude. I haven’t seen the “rewrite it in Rust” attitude prevalently in most contexts.
I'd say it's a positive phenomenon. Many of us would like to see C/C++ replaced by something as fast but more modern. Each alternative has some problems: being too slow (Python), tied more to one platform (C#, Swift), not backed by a big org (my favorites here are D and Nim), having other issues (Java), or just being too boring (Go). It looks like Rust finally has the momentum to become a language of choice for writing efficient apps, and with time the generated code might be on a par with C's. So I hope it succeeds, even though I wish it had happened with D.
It’s good if it’s grounded in reality. I don’t find Go being “too boring” to be a rational reason to not use it; it’s an accessible and convenient language that works well for applications without a lot of shared state among threads. Boring is good for security. Some would argue Go is not boring enough due to edges with CSP.
Rust is an accomplishment, but it’s also a very complicated language. It has a harsh learning curve and simply put, not all software written will benefit as much from the guarantees Rust provides. For a lot of software that will benefit the most, people are already experimenting with it.
seL4’s approach also seems interesting. It is doing C with proofs, which in theory should be able to offer stronger guarantees than even Rust if done correctly.
I think I saw an article here that says that a few times per century, a star explodes in a distant galaxy and the resulting radiation can interact with a spinning hard drive in such a way as to allow Facebook to enhance their ad targeting profile on you. So no spinning hard drives, please.
I cannot wait until we are able to afford to run our critical business infrastructure on our own computers (again).
I have a lot of trust in Microsoft & Amazon, but with the complexity of their organizations, there is no way they can provide the same kind of security assurances as if I were to have my own locked cage at some colo. Certainly, you could spin a fantastical tale about how AWS's 23+ layer physical security perimeter is superior to whatever is available at my local facilities. But, I have grown to classify this sort of stuff under the "what if 2 SHA256 hashes collide" category of fear-driven development.
I almost have to convince myself on a daily basis now that "everything is fine" with how we are currently using the cloud. The selling points for moving to the cloud are very powerful and I agree with most of them. But at the same time, the idea that you are locked into this same combined fate as everyone else leaves me with the constant sensation that I should have brought a parachute with me.
It's scary that what was once called "spyware", "malware", "adware" or "other"-ware has become so commonplace and accepted.
You type in the Windows 10 search box and it sends your keystrokes to Microsoft. You log into the Windows 10 OS with a Microsoft account (and they push hard for this during setup; they actively make it hard to use an "offline" profile) and it records your every interaction with that computer. With "full" telemetry it records every web page you visit, every app you launch, every app that has an error. (You can download an app from the MS store to see what telemetry W10 is sending to Microsoft; it's quite illuminating.)
These days, more and more of society expects you to have a smartphone and "apps"; "please can you scan a QR code to enter this restaurant". A supermarket has an app and offers in-store discounts on food, your data subsidizes the cost of what you purchase. Many offers are locked behind a social contract of "you give me data and I'll give you some money off". It's amusing to see how 'cheap' people are and how much data they are willing to freely give away in the name of a very very small discount (the data is worth much more than the savings you are getting).
An always online, always connected fully digital society is prone to attacks, hacks and disturbances. We've seen hospitals held to ransom and have paid bitcoins to get critical machines working again, something that shouldn't even be possible, yet one person who opens $phishingEmail.exe can bring down an entire network.
Our life is essentially in the hands of crudely built machines, with absolutely no security against basic human errors - and we trust these with the very foundation of society. One day we will witness a truly devastating hack, a disturbance unlike anything we've known previously, and it'll likely be as devastating as the Beirut explosion. It's not an if, but a when.
I want to return to a time without cars or computers, even just for a brief period (the lockdown was so nice this year. the hum of the birds and not the thunder of engines was a blessing).
A smartphone is much more valuable; it offers richer data and capabilities than a discount card. An app can access tonnes of 'free' information provided by the OS (model, make, package list), as well as log each and every tap/interaction and access personal information (contacts lists, location data, device identifiers you can trace across datasets).
You can see which offers people engage with most and how often they open your app, send well-timed push notifications, and build a reliance on that app. They can also purchase data from that smartphone and have huge data silos to perform studies on. An app provides constant access to a human being, whereas a discount card is a stateless entity in a wallet that never gets used; it's unlikely someone loses their smartphone, it's a prized possession.
Quite honestly, discount cards are probably quite worthless vs smartphone data and the richness of what you can get. I am surprised they are even still used/offered these days (I know of supermarkets that just have apps now).
There are ceiling-mounted facial recognition cameras for that now. I have a customer who brought in some cameras for "people counting" from Xovis. Their gear's purported capabilities are downright scary.
I've found that very often the cashier will swipe their own discount card if you tell them you don't have one of your own. You can also, in practice at Safeway at least, ask for a new one each time. I used to collect them just for shits and giggles, I had dozens before I got bored of it.
As a tracking mechanism, they rely on the customer cooperating and using the card as intended.
> One day we will witness a truly devastating hack, a disturbance unlike anything we've known previously
Yeah I recall Mikko Hyppönen in one of his talks describing how malware got encoded into DNA so it could propagate like a true virus. And there was a story recently of malware getting into some biotech lab and changing the genetic code to create mutant pathogens. If any malware authors are reading this: stop being so ruthless and unethical. You have a duty of care when you write code. You need to wield your technical prowess responsibly!
Maybe they realized I'd flip out if they asked, but apparently Verizon now encourages people to "install McAfee on their router." This does exactly what you would expect: it terminates SSL connections and only allows web access if you install their CA cert.
I only know this because the person in charge of internet at the house my girlfriend currently lives at was convinced by the ISP to set it up and now her machine has malware on it.
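For anyone wondering whether their own traffic is being intercepted like this, a quick sanity check is to look at who actually issued the certificate your machine receives. Here's a minimal Python sketch; the `"mcafee"` substring match is just an illustrative guess at what such a middlebox's issuer name might contain, not a confirmed value:

```python
import socket
import ssl

def issuer_org(host: str, port: int = 443) -> str:
    """Return the organizationName of the CA that issued `host`'s certificate.

    Note: if a middlebox re-signs traffic with a CA your machine does NOT
    trust, this raises SSLCertVerificationError instead, which is itself a
    strong hint that something is sitting in the path.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # `issuer` is a tuple of relative-distinguished-name tuples, e.g.
    # ((('countryName', 'US'),), (('organizationName', 'DigiCert Inc'),), ...)
    flat = {k: v for rdn in cert["issuer"] for (k, v) in rdn}
    return flat.get("organizationName", "")

if __name__ == "__main__":
    org = issuer_org("example.com")
    print("issued by:", org)
    if "mcafee" in org.lower():
        print("looks like a TLS-interception middlebox is re-signing traffic")
```

If every site you visit appears to be issued by the same local CA, you're almost certainly behind an interception proxy.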
There is a version of Windows 10 made just for China to remove all the surveillance because China thought Windows 10 was such an invasion of privacy and security risk that they wouldn’t use it otherwise. Think about that when you’re falling asleep at night.
Actually, the Microsoft developers should think about that when they are trying to fall asleep at night.
I believe it'll happen as soon as on-premise hosting is given a marketing-friendly name. Like "cloudless". Host your containers in a cloudless habitat, at a fraction of the cost!
It will be hailed as a hallmark of innovation. Any voices claiming that such a thing has always been possible will be tutted.
Why do we think that 'our own servers' gives us 'better security'?
I think this is the illusion of control.
"security assurances as if I were to have my own locked cage at some colo."
'Locked cage' - is not going to help us secure it from the kind of intrusion we are afraid of.
Scale means more layers of vms, more redundancy, more sophisticated security teams etc..
Physically, Fort Knox has never been robbed, whereas any mom and pop shop can be.
I'm really surprised the US military has not been working with the FAANGs to produce a new OS that is fundamentally more secure and containerized, including a networking stack with identity built right in, and of course easy solutions on the user side of the equation as well to thwart social attacks. And maybe even system architecture groupings to bifurcate systems from one another.
That would be because the last time DoD did the most workable and approachable solution was one without all those gubbins that you could just bolt on later.
By the way, what you're endorsing is essentially ASICs. They're fundamentally impractical as all hell. The logistics of security cause a verbosity explosion when you actually need to change something. Imagine the world we have now with 1000x the paperwork, and a bigger chunk of the populace devoted to just enabling somebody to go from point A to B to do something.
And even then. You're only as secure as your Users will tolerate.
Security resources are scarce, and large companies pay lots of money and hire them away. So attempting to defend by yourself could leave you more vulnerable. Probably a middle ground is best: benefit from expertise on both sides.
Companies are hiring and working with tons of physical-security pentesters to assess how easy it is to enter their offices. At the same time, nobody is paying anyone to assess the security of my house. But I'm sure it is easier to enter a Google office than my home.
Good point, and that's why the industry is moving to (some companies have already moved to) an approach that assumes your device is compromised: your work laptop and identity are treated as untrusted by default, which does not get you unvetted access to everything. You might have to do many things to get more access... and still there is no 100% security, just a little more security.
We don't get anything by default. Not even a Slack clone or email. And if we don't use something for 90 days, we lose it. It's not uncommon in my role to have to request something every day, then wait for manager approval, sometimes one-up approval, then resource owner approval, then provisioning, etc. Even read-only GitHub access requires senior manager approval (a relatively high role, like 3-4 levels away from the CEO at a Fortune 50 company).
We can't plug in USB drives either, and our database queries are logged; suspicious queries (the heuristics for "suspicious" are not great, admittedly) automatically get sent to your manager. Even the files on the hard drive are scanned. I've had so many false positives for the prior two things.
Also we're totally moving away from laptops. Everything will be a thin client. The performance on the things is miserable due to oversubscription and technical limitations.
I’d love to trust Microsoft. They have a lot of talented people working on a lot of cool things.
But, they voluntarily collaborated with the US Government to spy on their users at home and abroad, and those people are still in charge, as far as I know.
btw, one attack vector was that the on-premise installation was connected to Azure AD; the on-premise installation already had malware, and the malware stole security tokens.
Capex & salary requirements. I personally don't have the time to drive to the datacenter to fix something. We'd need someone who is available pretty much on-demand to manage the hardware acquisition/install/maintenance/decommissioning. We are a very small organization. One extra person is a big deal for us.
That said, it's really close to being possible. We are going to look at it hard next year. With the density coming out of AMD these days, we can probably solve the capex problem pretty easily.
I find it extremely ironic. I’m currently finishing “Countdown to Zero Day” and some people are saying that NOBUS (nobody but us) doctrine reduces the attack surface considerably. Some other people highlight this mentality as extremely dangerous from a defense standpoint.
Ten years after the Stuxnet/Flame saga, the USA is experiencing the same kind of attack, and its stated preparedness has not improved from the levels described in that book.
It’s fascinating.
Edit: No. I’m not enjoying this. There’s no schadenfreude.
Confused, how is NOBUS supposed to reduce any attack surface? NOBUS means you don't patch vulnerabilities that you believe only you have access to. Wouldn't that expand an attack surface (or leave it as-is)? How would a refusal to patch decrease attack surface?
You’re right to be confused. NOBUS assumes that other parties do not have the means to find the same vulnerabilities and/or weaponize them with same effectiveness.
It also layers this assumption with “we can prevent it at the firewall level so we can actually patch it while they can’t” idea.
As we all know, it’s an absolutely brilliant(!) idea.
> Some other people highlight this mentality as extremely dangerous from a defense standpoint.
Often the answer this group offers is yet another third party solution that is popular at the time. Then they give these tools the high level of permissions they need to function. Cycle repeats.
Nope. They advocate immediate disclosure of zero-day exploits found by the government, and mandatory policies against stockpiling these vulnerabilities for use as/in cyber weapons.
That book contains no corporate backing/advertising.
If the current state of affairs is better than governments not doing any such research, yet responsible disclosure of any vulnerabilities is, in turn, better than keeping them private, then any government that believes in the transitive principle should continue their research and start disclosing their findings.
> What you actually seem to be proposing- in practice- is a ban on government-funded vulnerability research.
To be completely honest, I don't have a concrete answer to this question yet because I haven't finished the book and didn't give the hard think it requires.
However, I can see both parties' perspectives and reasoning. I just need to give these perspectives a good rub against my own views.
Sorry, I wasn't referring to the book specifically. Just talking about my experience during these kinds of conversations in government and the corporate world.
I don't see the irony? Was there a zero day that allowed them to trojan the Orion binary? I thought they just had ordinary access to change it. If there wasn't one, NOBUS doesn't really apply here? I haven't seen anything on how they got the initial access to it yet so maybe I've missed it.
NOBUS doctrine has implicit, philosophical problems. It may work with kinetic weapons, because there the research is more expensive, obscure, and dangerous, and accumulation of knowledge has a greater effect.
On the cybersecurity side, its effectiveness is inherently reduced, because in principle any person with a laptop can conduct the same research. So other parties can find breakthroughs or leapfrog each other with relatively minimal effort.
On the other hand, NOBUS has an inherent ego attached. It considers other parties to be permanently inferior: unable now, and unable forever unless we leak something. This is flawed thinking in computer science / programming research.
The core of the irony has two layers. First, the personal one: the timing of reading the book and how the events are unfolding.
The second is in the US government. They were aware that they were not ready to defend themselves if someone used the same weapons against them, yet they were so focused on improving their offensive capabilities that they failed to secure themselves. Moreover, the attack came via a national software company. They were hit from inside. It's like adding insult to injury.
Almost everyone has lost personal information to such hacks at some point or the other. Critical infrastructure (e.g. hospital systems) has been held for ransom. National elections have been swayed by state-sponsored actors. Companies have lost plenty of IP to foreign competitors and governments.
They are already weaponized, and have been for a while. The time to start taking information security seriously was 15 years ago.
> The time to start taking information security seriously was 15 years ago.
Yet we openly embrace hardware we don't understand, riddled with flaws, and software we can't see into or audit, which makes requests without our consent. When you log into $bankingApp, are you aware of all the DNS requests it makes, or the information it's POSTing?
Heck, Facebook's SDK derps and cripples many applications. Our software stack is built on a mountain of hacks, flaws and quite frankly, insanity.
Computing, from the hardware to the kernel needs completely revamping, because nothing else will be able to salvage our current environment.
What makes you say this? I thought it was a common rumor that the NSA had backdoors in everything. Wouldn't Mutually Assured Destruction make an attack irrational, just as it deters nuclear attacks?
That "common rumor" has always struck me as particularly stupid. I believe it's popular because "everything is worse than you think" conspiratorial cynicism is what passes for intelligence these days.
This hack by itself should be proof that the NSA has no such capabilities. And, no, stopping this hack would not have compromised anything. They could have tipped off any one of a rather large number of security researchers, companies, or vendors affected who would have gladly made up a believable story as to how they noticed this.
Mutually Assured Destruction relies on the victim knowing that they've been destroyed.
Lots of these so-called "Cyber weapons" are operated by actors who are very effective at leaving no trace.
It's been in the background before but in the reports about this SolarWinds issue the "leaves no trace" angle is starting to be emphasised much more.
Not knowing if or what has been compromised means that the attacker can choose when or how to use the information they obtained and the victim will be surprised, even a long time down the line.
This is precisely the reason why I'm against moving everything to the cloud. Find your way into AWS or Azure and you will have everything on a silver platter.
This is very bad logic, to be honest. When you have worked for some big companies, and I've worked for dozens and dozens of them, you know their security is basically Stone Age compared to anything in the "cloud". They have "firewalls" with rules nobody knows about, or whether they are still valid; they have applications running on servers they don't know about (on more than one occasion I saw that the "solution" was just to kill the application to see if someone was still using it); they have servers they can't locate; they have MPLS networks connected to remote locations with ZERO physical security (and of course the MPLS network is connected to their datacenters as if it were a private network, i.e. with full access and no security at all); they have hundreds and hundreds of contractors with way too much information and way too much access; they have basically zero knowledge about what's going on in the world of IT security; and they don't have 10% of the people they would need to even try to do security, etc.
I mean, the security of those big companies is so ridiculous that it becomes funny. I suspect those systems don't crash more often only because the hackers inside are actually maintaining them. There is no doubt those companies would be 10 or 100 times more secure if they moved everything to AWS or Azure.
Until one day you read an article about a teen who was able to get access to any private message of any user just by asking support for access. Hmm, wait... that already happened.
Maybe the bigger you are, the more you think about security, but the bigger you are the more difficult it is to protect yourself
An organization that has spawned such a mess, and been unable to clean it up thus far, will be plenty capable of creating an equivalently insecure (but different in the details) mess on a cloud platform.
That might be true for a tech-competent company with a well-staffed security team, but for many companies and for individual data, I'd trust Amazon and Microsoft to secure my data more than I'd trust myself.
I think this is the wrong analogue. I absolutely would not trust Amazon or Microsoft with my personal data (I'm the type that hosts my own e-mail, sync, docs, etc.)
Now for a BUSINESS, I could see the advantage in less maintenance cost and being able to spin up/down services easily. It is easier and that cost is measurable (up to a point; when you get big enough AWS costs become obscene compared to self-hosting. I've been at more than one shop that moved things back on-prem to save money).
Yeah migrating to AWS is not cheaper. It never is. It’s a complete fucking lie to be honest.
I’m actually dealing with an issue where one RDS instance in AWS costs more over three years than the entire infrastructure and software investment did on the on-prem.
Pre-cloud I remember having 10 full time infrastructure guys to support 100 devs by building servers, installing software, setting up backups, buying licenses, setting up the network, creating credentials, patching software, etc...
Most of the projects I work on now have 1-2 IT people supporting 100 devs.
Yeah this line is the new "nobody got fired for buying IBM". No, but they ended up getting laid off eventually.
Will non-tech-competent companies without well-staffed security teams make it? How much individual, private digital data is likely to make it as long as say, a vellum manuscript?
Microsoft probably has a large network and lots of employees and vendors. So I'd assume that every day some machines are being compromised at Microsoft (like anywhere else with lots of employees and stuff like Google, Facebook, Amazon,...) - it's just not that unusual these days.
The question is if any of these leads to lateral movement and access of sensitive information or modification of data and stuff. No company is required to tell you about every single piece of malware they find in their networks - unless something "bad" happened. Maybe we should require more transparency in general.
I guess investigations will show what happened or is happening still.
At least in AWS, practice is to encrypt all connections between components, and to have granular least privilege permissions at every point. Behind the scenes AWS follows the same principles for the infrastructure. I would argue a lot of cloud set-ups are inherently more secure than the equivalent on-prem of large enterprises.
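To make the "granular least privilege" point concrete, here is a minimal sketch of an IAM policy in that spirit: read-only access to a single bucket, with unencrypted transport denied outright. The bucket name is invented for illustration:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyOneBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-reports-bucket/*"
    },
    {
      "Sid": "DenyUnencryptedTransport",
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-reports-bucket",
        "arn:aws:s3:::example-reports-bucket/*"
      ],
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }
  ]
}
```

A role carrying only this policy can fetch objects from that one bucket over TLS and do nothing else, which is the sort of blast-radius limiting that's much harder to retrofit onto a legacy on-prem network.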
Encryption only helps if you can guarantee that your attacker can't get access to the layer below where the runtime decrypts things.
A cloud hack like the parent poster talks about assumes that you get access to the hypervisor layer and can look at the RAM of the guest machines.
This is not inconceivable. Rather, it seems quite reasonable given the complexity of hypervisors and the prevalence of CPU architecture bugs that makes these attacks easier.
If you're referring to the Capital One incident, that had nothing to do with AWS. Their systems behaved as intended. It was an error in the implementation of Capital One's systems.
Your misinterpretation of their comment is the source of your confusion: note that they said "a lot" and "more secure", not perfect. There are many more breaches of on-premise systems, but we don't say those are too risky to use; it all comes down to cost. One big advantage of cloud environments is that you can assume everything is API-driven, and there are off-the-shelf tools to look for common problems like the Capital One WAF setup. You certainly can do that on-premise, but you have more work to do, and the bespoke nature of the environment makes misunderstandings easier.
Having trouble finding a reference since the search terms aren't too friendly (lots of targeted ads though), was there one where it wasn't an account configuration issue?
Yes, but the alternative argument could be made that a million small organizations have no ability to stay on top of every little best practice, update, and piece of software: no dedicated security/zero-day team, no ability to do investigations or in-depth analysis, i.e. easily compromised.
Apparently big companies also can't vet or properly compartmentalize third-party software either.
It's an untenable problem for organizations of any size. There aren't enough man hours to reverse engineer and vet all the third-party software that any sized organization uses. There's no community will to force vendors to do better either.
We need something like an Underwriters Labs for software. It probably will take the insurance industry coming down hard for things to change.
Do you say this because you think some hacker can gain access to the VMs where your data is stored? It’s significantly more likely that this will happen due to one of your engineers getting social engineered. While we don’t know the details of the SolarWinds breach yet, I’d be willing to bet the hackers did not gain access to production VMs.
If 5% of businesses are independently compromised every year, that's a painful but manageable drain on our economy.
If the [inter]national infrastructure goes down, with the firmware on every device on every internet-connected computer bricked at the same time due to a large-scale cyberattack (perhaps followed by a military attack a little while later), we're f-ed.
All this would take is:
- One zero-day each on Windows, MacOS, and Linux.
- Nation-state level resources to create a bricking firmware update for all commonly-used devices.
- Nation-state level resources to create a spreading attack for all major routers and network devices.
- Nation-state level resources to deploy this rapidly enough that response systems can't respond.
With 200 nation-states, it's perhaps just a matter of time....
(And yes, there are a lot more i's to dot and t's to cross, but I think they're all doable with nation-state level resources.)
Why destroy the computers? Just destroy the power grid. Get a few million smart meters to disconnect from the grid simultaneously and watch the sparks fly. Destroy a sizeable quantity of transformers and it will be months or years before power is restored.
I think this is more "movie plot", though. "Smart Grid" security has gotten tons better than it was at the start. A lot of very security conscious and smart people have been working on it.
Precisely what you said: Power grid has security. Compromising 90% of connected devices with a mainstream OS would, at once:
(1) Tank the economy
(2) Likely, be doable with resources totaling in the single-digit million dollars
Modern wars are largely about industrial capacity.
But if it is equally easy to compromise the grid, why not do both?
This may sound like a movie plot, but so did the invasion of Poland at the beginning of WWII, the attack on Pearl Harbor, nuclear bombs on Japan, the rape of Nanjing, or many other actual events which Actually Did Happen.
We tend to underestimate the impact of rare events: our brains are conditioned to discount anything which happens once or less than once per lifetime. That's likely why humanity will kill itself at some point.
Honestly, this is a weird fucking hack. The update went out signed by SolarWinds. Somehow, they got this payload into the build chain their developers use.
I'm looking for info on this same topic. If the update was signed, how did that happen?
The other thing that occurs to me is that this SolarWinds update goes out to a huge number of organizations with a tremendous diversity of capabilities. Why was FireEye the first one to notice? Was the SolarWinds code exempted from monitoring, because it is monitoring? How big is this blind spot?
My understanding was that FireEye noticed because this backdoor was used to attack them. They worked it backwards and found the backdoor.
It would be interesting to know how many "softer" targets the attacker went after and were completely undetected.
It's worth noting that FireEye didn't find the backdoor until after it had been exploited. Nobody can vet modern software. The complexity is too high, let alone accounting for the ridiculous update "velocity" that's coming out of most commercial software today.
Just getting software "manufacturers" to itemize and document the normal communication mechanisms used by their software, so that threat hunters aren't reverse engineering communications protocols, would go a long way. If SolarWinds had provided a list of domains their software needed to communicate with that would have gone a long way to stopping this kind of attack. (I can't be the only person who demands vendors provide this info, and who puts third-party software behind MiTM proxies and hard-assed ACLs.)
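As a sketch of what that vendor-declared list would enable, here is a hypothetical check of a proxy log against the domains a vendor says its product needs. Everything here is invented for illustration: the domain names, the log format, and the `host=` field convention:

```python
# Domains the (hypothetical) vendor declared its product communicates with.
VENDOR_ALLOWLIST = {
    "updates.vendor.example",
    "licensing.vendor.example",
}

def unexpected_destinations(log_lines, allowlist=VENDOR_ALLOWLIST):
    """Return destination hosts seen in the proxy log that the vendor never declared."""
    seen = set()
    for line in log_lines:
        # Assume one "host=<fqdn>" field per whitespace-separated log line, e.g.
        # "2020-12-14T10:02:11 host=updates.vendor.example bytes=512"
        for field in line.split():
            if field.startswith("host="):
                seen.add(field[len("host="):])
    return sorted(seen - set(allowlist))

log = [
    "2020-12-14T10:02:11 host=updates.vendor.example bytes=512",
    "2020-12-14T10:02:12 host=avsvmcloud.example bytes=2048",
]
print(unexpected_destinations(log))  # → ['avsvmcloud.example']
```

The point isn't the ten lines of Python; it's that without the vendor's declared list, "unexpected" has no baseline, and every anomaly hunt starts with reverse engineering.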
I've always seen it as a good indicator of unreliability and poor QA practices. "We update our software frequently" has never seemed like a good thing to me. I read it as "You're paying licenses fees to be our testers."
Maybe after signing it (but before releasing it), the CI/CD should upload the signed hash to an independent third-party publicly-accessible append-only log (like Certificate Transparency).
Assuming the binary is reproducibly buildable, different internal teams can try rebuilding the source and checking that the signature matches what's in the public log, and that each entry in the log has a corresponding commit in the source repo.
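A toy sketch of that idea in Python: an append-only log in which each entry's hash commits to the previous entry, so a team rebuilding from source can check both that its artifact hash matches the published one and that the log's history hasn't been silently rewritten. The entry contents are invented for illustration:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ReleaseLog:
    """Append-only log: each entry's chain hash covers the previous chain hash,
    in the spirit of Certificate Transparency's append-only guarantee."""

    def __init__(self):
        self.entries = []  # list of (artifact_hash, chain_hash)

    def append(self, artifact_hash: str) -> str:
        prev = self.entries[-1][1] if self.entries else ""
        chain = sha256((prev + artifact_hash).encode())
        self.entries.append((artifact_hash, chain))
        return chain

    def verify_chain(self) -> bool:
        prev = ""
        for artifact_hash, chain in self.entries:
            if chain != sha256((prev + artifact_hash).encode()):
                return False
            prev = chain
        return True

# A rebuilding team checks its reproducible build against the published entry.
log = ReleaseLog()
log.append(sha256(b"orion-build-2019.4"))       # published by CI/CD at sign time
rebuilt = sha256(b"orion-build-2019.4")          # hash of the independent rebuild
assert rebuilt == log.entries[-1][0]
assert log.verify_chain()
```

A real deployment would use a Merkle tree with signed tree heads rather than a linear chain, but the property being bought is the same: a tampered release can't be published without leaving public evidence.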
I would love to see a postmortem for how they were handling their code signing cert. Right now I get a "PFX file on a network share and everybody knows the password" kind of feeling.
On Friday last week, SWI was trading around 23.5. Today it closed around 14.2, so it looks like the big money investors are betting on SolarWinds-the-business being toast. Even if it does survive, if it doesn't somehow stage a miracle recovery very quickly, the current senior leadership are surely done as soon as they've finished being ritually sacrificed so their replacements don't get saddled with too much of the fallout.
As another potentially interesting data point, SWI still appears to have a P/E close to 120 even now. Given that traditional value investors might consider something closer to 20 to be reasonable for an established business and the huge ratios of many big tech stocks only make sense if you think they're going to continue the dramatic growth some of them have enjoyed for a while, there could still be a long way for stocks in SWI to fall even if they do eventually stabilise.
Perception is why. Especially for certain clients. Let’s pretend that I work for the French department of the interior and some time early next year I go to install SolarWinds. You think the boss is gonna OK that, knowing it’ll be in the paper the next day? No chance.
They’re screwed for a while, only hope they have is staying afloat long enough for our short memories to forget this.
I don’t even know where to begin. You can put in a thousand firewalls, anti-malware tools, or greater defense systems, but apparently nothing would have stopped this?
I have been searching, and it looks like this is specific to Microsoft systems, though other systems may have helped spread it. From a couple of articles I found, the issue was in the file SolarWinds.Orion.Core.BusinessLayer.dll
The reason I looked was because I never saw anything stating what OS are directly impacted, all they talked about was how bad it was.
I now understand why Microsoft is pushing so many articles about this issue (outside of the fact it looks like it is a big problem for lots of companies/gov/people).
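For what it's worth, the first-pass check defenders could run against that DLL is plain file hashing against published indicators of compromise. A minimal sketch; the hash in the known-bad set is a placeholder, not a real IOC, and the scan path in the comment is only a guess at a typical install location:

```python
import hashlib
from pathlib import Path

# Placeholder IOC list: real SHA-256 hashes of trojaned builds would come
# from vendor or CERT advisories.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def file_sha256(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large DLLs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_bad(path: Path) -> bool:
    return file_sha256(path) in KNOWN_BAD_SHA256

# e.g. scan every copy of the DLL found under a plausible install directory:
# for p in Path("C:/Program Files (x86)/SolarWinds").rglob(
#         "SolarWinds.Orion.Core.BusinessLayer.dll"):
#     print(p, "KNOWN BAD" if is_known_bad(p) else "not in IOC list")
```

Of course this only catches builds someone has already identified as malicious, which is exactly why the backdoor went unnoticed for months.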
How do those people break keyboard based scrolling, and only keyboard based scrolling?
It's incredibly prevalent, but the only way I can imagine doing that is by capturing every key at the top level... and I can't imagine any reason for doing that.
From yesterday:
> "We have no indication of this," company President Brad Smith told New York Times reporter Nicole Perlroth. Perlroth said the company stood by a statement it issued on Sunday saying it had no indication of a vulnerability in any Microsoft product or cloud service in its investigations of the hacking campaign."