U.S. government probes VPN hack within federal agencies, races to find clues (reuters.com)
153 points by c1c2c3 on April 29, 2021 | 58 comments


Prediction: at some point (if it isn't already happening as we speak), the government's insistence on "we need to be able to hack into any software if it's important" will collide with "we need to be able to keep foreign powers out of our software", and there will be bitter internal fights about it, with both sides claiming national security interests.


Bruce Schneier has been complaining about this tradeoff for more than a decade: https://www.schneier.com/blog/archives/2014/05/disclosing_vs...

>The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on foreign computer systems. Both are important US policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability — or of the vulnerability becoming public and criminals starting to use it.

Unsurprisingly, the NSA often chooses to keep zero-days for its own use.


I know this would be hard to keep under wraps, and extremely difficult for closed source software, but it seems like the right answer here would be for the NSA to create patches for government use.

If the government only used open-source software, the NSA could create patches that only the government would use, while keeping zero days that can be used against everyone else.

If the government started requiring all/most software to be open-source, it would create a market. There's no way big government vendors would refuse to create open source software. They would just shift to monetizing more heavily using consulting services, or support, or something.


I think this would be even worse. The number of people who would need access to the patched versions would be too large to secure effectively, and the patch itself would deliver knowledge about the vulnerability to any potential attacker. Government computers would be protected, but contractors and other businesses would be placed at higher risk.


Doesn't SELinux provide some of that?

Tightly enclose all running software within a beyond-root, kernel-level authority/sandbox, so that even vulnerabilities only we know about can't harm us if they're discovered?
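
As a rough illustration of the idea, a minimal SELinux type-enforcement module might look something like this (the daemon name and the single allow rule are made up for the example; a real policy is much larger):

    policy_module(exampled, 1.0)

    # Hypothetical domain and entrypoint types for the confined daemon.
    type exampled_t;
    type exampled_exec_t;
    init_daemon_domain(exampled_t, exampled_exec_t)

    # Only the accesses listed here are permitted; everything else is
    # denied by default, even if the process runs as root.
    allow exampled_t self:tcp_socket { create bind listen accept read write };

The point is default-deny: an exploited exampled process can only do what the policy grants it, which limits what a vulnerability in it is worth.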


If only it was just one agency from one country plinking holes in things...


Makes me wonder if we should have a white-hat government org that notifies big corporations or software projects about critical vulnerabilities in their code.


That was the point of the NSA, but the war on terror has corrupted that mission to the core.


The attack vector makes more sense. Your enemies will always have new backdoors they don't report, so you need as many as you can get. Closing loopholes isn't super effective because there will always be a new bug or exploit.


There's another perspective on this that's often cited and, if I remember correctly, was used to argue against the release of Stuxnet: who has the most to lose if there are widespread cyber attacks? Given the related attacks so far, I'd say it's probably the US, where most people are connected and which everyone either wants to spy on or attack. But then again, attacking sounds so much cooler, which I guess is why we still do it.


The Crypto AG revelations --- a Swiss-based firm selling Telex encryption equipment revealed to be a CIA front --- strongly suggest to me that the principal (though not only) strength of the US intelligence agencies has been based on backdoors. As software-based encryption became more prevalent, they sought either to discourage effective crypto or to impose mandatory backdoors.

The downside of having generally-known weaknesses seems to have been largely discounted.

Rather than "security by obscurity", the operational status has been "insecurity by obscurity". Unknown to users, systems are largely wholly insecure, and it's only ignorance that gives the illusion that they are secure.

I wrote on this recently: https://joindiaspora.com/posts/b596219086b1013991d8002590d8e...

In practice, the "everyone anywhere can attack any online system" status of the Internet, and the porosity of most LANs and even nominally airgapped / detached systems (see the Stuxnet attack on Iran's centrifuge systems) means that virtually all systems are vulnerable.

I suspect that the debate is quite live within government, particularly as the US itself is repeatedly the victim of such attacks.


Here's a good discussion of that very debate, happening right now.

https://www.lawfareblog.com/lawfare-podcast-nicole-perlroth-...


The only reason people use garbage like Pulse is compliance with stupid federal bullshit like FIPS 140.


> at some point

Hasn't this been the debate since encryption came around? I thought we've been having this debate for at least 50 years.


Encryption is a somewhat different debate with different tradeoffs. With encryption, the government can try to use different encryption than everyone else, and many sectors of industry don't rely on encryption. But vulnerabilities in common software apply to everyone; pretty much the same pieces of software and electronics (e.g. mobile phones) are used in every country, by civilians and businesses and governments alike.


I'm sorry, but what industry doesn't rely on encryption? Every financial service relies pretty heavily on encryption.

As an aside, I personally would argue that in the age of big data/information, your populace having security is extremely important. Modern warfare (or all warfare) depends highly on information. Tor only works if average people use it. The military suggests soldiers use Signal because they've gotten in trouble many times when adversaries intercepted SMS messages to loved ones (or to just someone getting some strange).

There is of course a question of balance, but personally I don't see one. Safer to encrypt everything imo.


For sure. My point was that the debate, instead of being between government figures who favor keeping the right to listen in and non-government figures who want to keep them out, will shift (has shifted?) to a within-government debate. In the days after 9/11, I don't get the impression there was much of an intra-government debate at all.


By intra-government you mean like US vs. China? (Or any other competitors? We could say Israel and Germany.) I think this has always existed, though the information age has swung the balance so that it's more important for average citizens to have encrypted data in a general sense, not just for finance.


"Intra" here means inside the same government (you're thinking of "inter"). The hypothesis is that there will be parts of the US government (like perhaps the FBI) that will advocate for government-controlled backdoors into all encryption, while other parts (like perhaps the NSA) will argue for the strongest, backdoor-free encryption possible.

I think it's an interesting hypothesis, but one weakness is that the government can have its cake and eat it too: they can mandate that all encryption have backdoors, except that the government is exempt from that requirement.

Of course, then it just becomes the usual "if you outlaw strong encryption, then only outlaws will have strong encryption". As long as backdoor-free encryption merely exists, the "bad guys" will get their hands on it and use it. So you haven't fixed the problem of being unable to prosecute crimes due to encryption, and at the same time you've weakened everyone's security. This state of affairs is still beneficial to the government, as it makes dragnet surveillance a lot easier, and your average citizen with "nothing to hide" won't seek out the (illegal) strong encryption.


...until the higher ups learn that, yet again, China or Russia or Iran or somebody got their hands on a lot of sensitive data, and they start pressuring the NSA to get a handle on this. I don't know if it's happening yet, but if it hasn't it will.


It probably already happened.

In the defensive world, success is abstract, failure is concrete, and there are always going to be bugs, accidents, lapses, etc. In the offensive world, you demonstrate success by providing actual intel; you can demonstrate value. I've worked on security products for most of my career, and there is a point in the lifecycle, before your product becomes just a requirement, where customers will ask "how do I know I need this, or that it's working?" That can be more challenging to answer than if your product had failed and they got popped - at least then you can help and provide information.

I know who I think would climb the ranks. In terms of long-term strategy, if they split it up and aggressively worked with industry to patch holes, fix things, and encourage best practices, it would probably save the nation trillions, but we would have to use other techniques to get some of our intel.


Yeah, this was a problem back when the NSA's directorate went from defensive to offensive. We would like to patch zero-day issues, but they are just so damn fruitful when attacking enemies of the state... the battle lines are already drawn on this.


Trolling prediction: all U.S. agencies will switch to open-source software on top of Gentoo Linux as a way to more easily patch whatever vulnerabilities the NSA finds and does not disclose.

:)
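
(For what it's worth, Portage already supports user patches: anything dropped into /etc/portage/patches/<category>/<package>/ gets applied at build time. The patch name below is obviously made up:)

    # hypothetical fix for a bug the vendor hasn't patched yet
    mkdir -p /etc/portage/patches/net-misc/openssh
    cp quiet-fix.patch /etc/portage/patches/net-misc/openssh/
    emerge --oneshot net-misc/openssh   # rebuild; the patch is applied automatically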


Some version of that debate has been going on since Roman days.


Wasn't Pulse Secure VPN the one that required an ActiveX control and IE in order to "secure" your system on Windows? I mean, when I see that kind of shit, I kind of assume the vendor sells some shit software.


Just yesterday I was on a federal government web site (FWS, BLM, or some similar agency), and it popped up a window saying that the web site doesn't work in Safari, and I should use IE10.


There are major federal government websites that have office hours. The EIN application website only works between 9am and 5pm. I like to imagine that it's this way because there's no webapp, there's just a team of people who get all the HTTP requests and manually respond to them. This would also explain its speed.


If they're only paying support staff 9 to 5, it might make sense to shut it down outside of supported hours, even if it would probably still work, especially if they're not positive that it won't do something odd like start issuing duplicates if the back-end DB is down.


> The U.S. plans to address some of these systemic issues with an upcoming executive order that will require agencies to identify their most critical software and promote a “bill of materials” that demands a certain level of digital security across products sold to the government.

Interesting, no mention of any requirements towards software manufacturers themselves.

If you think about it, this will further incentivize poor-quality software, as the responsibility for vulnerability response is now laid on the product owner.


This theme keeps coming up. Some cohort of HN is upset that software manufacturers aren't directly required to produce "secure software" [1].

I would suggest people look at a very foundational essay on this [2]. Key quote: "Security is a process, not a product. Products provide some protection, but the only way to effectively do business in an insecure world is to put processes in place that recognize the inherent insecurity in the products. "

How many times do we have to learn this?

[1] In quotes 'cause "secure software" does not exist, in two different ways: software always has bugs, and using a piece of software incorrectly makes a secure system insecure.

[2] https://www.schneier.com/essays/archives/2000/04/the_process...


Security is not a Boolean. It's on a continuum from "You must be fucking joking" (early versions of IE) to "Not perfect, but workable with training" (literal weapons-grade air gapping).

Perfectly secure software does not exist. More secure and less secure software certainly does exist.

Which is why software should be expected to provide some basic level of protection.

In any org there will be a proportion of idiots who cannot be trusted to do the right thing, and software should be designed to minimise the damage their idiocy can generate.

This isn't enough to make an org secure, but it's a good start on whack-a-mole with the most obvious attack vectors.

The real problem is that too much of the industry - both management and devs - lacks maturity and professionalism. There's too much casual hobby tinkering, too much "But that's too expensive", and too much "Get it out the door and worry about it later".

There isn't enough conscientious attention to detail and far, far too little understanding of the disastrous - literally potentially explosive - consequences of serious security failures.

And CS courses teach far too little of all of the above. Academic algo noodling is one thing. But the reality is that computers can literally be as dangerous as weapons - and should be treated as such.


Can you elaborate on what exactly you mean by "software should be expected to provide some basic level of protection"?

In some sense security is binary - if your software happens to have even a single mistake that results in RCE or authentication failure, then it's totally exploitable and does not provide any level of protection whatsoever. And as experience shows, we seem unable to write any software without such mistakes, even when skilled people try really, really hard with security in mind; as far as I recall, every popular piece of software that needed to be secure has had vulnerabilities.


You don't have to be perfectly secure in order to raise the bar past your adversary's level of sophistication. But you do have to stop doing the same stupid shit that's in easy reach of anyone who can program.


Once a complicated exploit is known, it can be added to the arsenal of any script kiddie.

This isn't saying you're wrong that quality can raise the bar. It's saying that time and context also lower it. In particular, not being subject to "what's in easy reach of anyone who can program" isn't guaranteed by any purely default action - notably not guaranteed by spending X dollars.


Not every vulnerability will be exploited, and most hacks use very simple exploits, if any. 80 percent of security can be achieved with 20 percent of the work.


I think there's a couple of issues at play here.

Firstly there's the information asymmetry for non-technical users - they don't think of themselves as buying security, they think of themselves as buying a remote access solution. They therefore don't see this as a process, but instead as a product or solution. That means they're surprised and caught unaware when something goes wrong.

The second issue is that people creating the software aren't themselves thinking about security, because the customer isn't buying security, or comparing security. And how do you measure or quantify or observe security? There's no commercial incentive to invest a month in hardening a product against attack, unless that month of engineering effort sees more sales and revenues. And since the people who buy are satisfied by slideware and specification sheets for security, nothing changes.

I think we need a whole change to how we buy software, hardware, and solutions in general to see this change. The underlying economics don't incentivise secure products; in fact, they actively discourage them.


Sure, but if you're going to sell a "security product" and then the security turns out to be a joke -- or even decent, but flawed -- you should be held responsible for it in some way.

Obviously it's difficult to draw the line, and that's why we have courts. The company will argue that they did all that was possible, but as sometimes happens, something got through; the plaintiff will argue that the company's software had serious flaws because they were negligent or cut corners or had poor development processes or whatever. However imperfect the process is, the court can render judgment case-by-case.

(Before someone suggests this, I'm not trying to say that a random open source developer who works on OpenSSL should be held liable here. But if you're selling a product, you should hold some liability for when that product fails.)


Part of the process of security is to design and code with consideration for potential vulnerabilities. Organizations that ought to care about security, and that even spend a lot of money on security, also buy a lot of bug-riddled crap to run on their "secure" networks. Firewall rules and Group Policies can't fix everything.


The government has no authority to demand a software bill of materials (SBOM) from everyone who publishes software.

Imposing this requirement on their own agencies is enforceable because there's software that can generate an SBOM, at least from container images.
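
For example, an open-source scanner such as Syft (just one option among several, named here as an assumption about tooling) can produce a CycloneDX-format SBOM from a container image in one command:

    # scan a container image and write a CycloneDX JSON SBOM
    syft alpine:3.18 -o cyclonedx-json > sbom.json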

Then the agencies will have to choose software that meets compliance requirements, so they're the ones putting pressure on their chosen vendors. It follows logically that a vendor who wants a better chance of being chosen for more government contracts will make it easy to obtain SBOMs for their software.


>The government has no authority to demand a software bill of materials (SBOM) from everyone who publishes software.

Speaking from a US Gov perspective: if the company is part of a contract (and ~40% of the Gov workforce are contractors), the Gov certainly can.

They can put nearly anything (legal) into the RFP/Q. Even if they do not say "give us your BoM", they can wrap it in requirements that in essence deliver the same exact result.

That said, it is a mistake for the Gov to ask for the BoM. They will do little with it in a timely fashion, lack the expertise to identify risks, and lack the resources to go after it. The best contracts are the ones where the rules and parameters are set for the contractor (i.e. no untested software, no foreign influence, no this, no that, must have this and that), with auditing of compliance.


They could build in a requirement that the software has undergone penetration testing by a security firm, and that a copy of the penetration testing report along with any mitigations applied to the software be provided.

I've never even heard of the software the government is using. Why aren't they using Cisco AnyConnect like literally every other company I've worked for who has a VPN?


Pulse Secure is pretty well regarded (or maybe was better regarded when it was a Juniper product). AnyConnect has had, and will have, its fair share of vulnerabilities as well. A few years ago I had to update the firmware on our ASAs like four times in a year due to new vulnerabilities. Any commercial product you pick is going to have new vulnerabilities, and you just need to stay on top of it.


Not all agencies, but the US gov't does use Cisco AnyConnect and pretty much everything they use for IT is COTS these days.


Federal contractors as well.


If I were a federal contractor wanting to make more from my cost-plus contract, what better way than generating text files full of dependencies that will cause billable meetings to discuss why we should be OK with some old insecure library being used... meetings that will always end with even more billable work to update the old, insecure library. Even better would be if I had to incur some billable time and cost on certifications. Cost-plus FTW (unless you are a taxpayer).


Luckily not all federal contractors think like this. Some would report this behavior, and some of us would be quite happy to report it. There are those who still believe in doing the right thing, value taxpayer dollars, and want to deliver for the American people. If only more of us wanted to completely rip out the rot.


> If only more of us wanted to completely rip out the rot.

Even if you report it, this kind of behavior is so normal, I'd be shocked if it does anything other than create more billable project management hours for the company you are protesting.


> If you think about it, this will further incentivize poor-quality software as responsibility of vulnerability response is now being laid on the product owner.

Not really; this is more about transparency into all components and letting people downstream be aware that there is an issue and either fix it, mitigate it, or raise the issue upstream. My guess is that this is related to Allan Friedman's SBOM work at NTIA (sorry - this is not the most up-to-date link: https://www.csiac.org/podcast/software-bill-of-materials-sbo... )

The problem that keeps on getting hit time and time again is that both end users and product manufacturers do not know everything that is in their system. Consider the case of say, an MRI machine. What OS is it running and how up to date is it? If the end user has an SBOM they can better evaluate that and demand fixes if there are known issues. Likewise if the MRI manufacturer is good at making MRIs, but not so much at knowing if their version of Windows on the MRI is out of date, the SBOM for the MRI can be analyzed to automatically flag problems.

You can regulate all you want about "There must be no open issues" and plenty of certifications for the Fed government do have that language. The problem this answers is forcing a listing of every component so that "Sorry I didn't know OpenSSH v.1.2.3 is out of date" or "I had no idea we were running Windows 95 on this hardware" are no longer valid excuses.
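
As a rough sketch of what that automated flagging could look like (the CycloneDX "components"/"name"/"version" fields are standard, but the known-bad list and file name below are placeholders; a real system would query a vulnerability feed such as the NVD):

    import json

    # Placeholder for a real vulnerability-feed lookup.
    KNOWN_BAD = {("openssh", "1.2.3"), ("windows", "95")}

    def flag_components(sbom_path):
        """Read a CycloneDX JSON SBOM and report known-bad components."""
        with open(sbom_path) as f:
            sbom = json.load(f)
        for comp in sbom.get("components", []):
            name = comp.get("name", "").lower()
            version = comp.get("version", "")
            if (name, version) in KNOWN_BAD:
                print(f"FLAG: {name} {version} has known issues")

    flag_components("mri-machine.cdx.json")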


If anyone is interested in reading more about Software Bill of Materials and how you can implement it, check out the OWASP Dependency-Track project - https://owasp.org/www-project-dependency-track/
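
As a sketch, a CI job can push a CycloneDX SBOM to a Dependency-Track server through its REST API (the server URL, API key, and project UUID below are placeholders; this also assumes the requests library is installed):

    import base64
    import requests

    # Placeholders - substitute your own server, API key, and project UUID.
    DTRACK_BOM_URL = "https://dtrack.example.com/api/v1/bom"
    API_KEY = "your-api-key"
    PROJECT_UUID = "00000000-0000-0000-0000-000000000000"

    with open("sbom.json", "rb") as f:
        bom_b64 = base64.b64encode(f.read()).decode()

    # Dependency-Track accepts a base64-encoded BOM via PUT /api/v1/bom.
    resp = requests.put(
        DTRACK_BOM_URL,
        headers={"X-Api-Key": API_KEY},
        json={"project": PROJECT_UUID, "bom": bom_b64},
    )
    resp.raise_for_status()
    print("Upload accepted, processing token:", resp.json().get("token"))

Dependency-Track then analyzes the uploaded components against known-vulnerability data and flags anything out of date.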


This new arms race could eventually lead us to militarization of the whole economy. Almost every business operation will cost 40% more than now because of security costs. Security doesn't scale well and can't be commoditized (until we get AGI, I guess). You can't just outsource it to Google or some other megacorp.

That would be an insane waste of resources.


Do people know that Fortinet is pretty much a de facto Chinese company?


Any links on that? The only notable incident was when they apparently sold intentionally mislabeled Chinese-made equipment to U.S. government end users. https://en.wikipedia.org/wiki/Fortinet#cite_note-37


I think they are referring to the fact that Fortinet was founded by two Chinese-born brothers.

Though Ken Xie is also a Stanford graduate and has had US citizenship for decades.


Not only that, but they have the bulk of their staff in China.


> not only that

By this logic, does it not mean that Google is a Russian/Soviet company? Sergey Brin was born and spent some years in Russia, and Google also has staff in Russia (and in China, for that matter).


He was born there, but he did not remain a citizen of that country, and he has not been going back and forth in the line of business.

That situation is different from one where a dual citizen operates the sales office of a company based in his home country.


So that we are on the same page, you are stating that any company is de facto to be considered as belonging to country X if: 1) the founder has dual citizenship, 2) it has some of its staff in a foreign country, 3) it has a sales office in its home country.

Is the above what you are trying to say?


In this case, the sales office is their main office, and the Chinese "subsidiary" is where the main body of the company really is.




