This happens in many corporations as well. It's fun and exciting to be on the red team (doing penetration testing, writing exploits, etc.), but the blue team (the infrastructure and developer teams hardening things) is not only boring to most people, it's also the team that gets the most grief from developers for introducing friction. If your company has a red team, ask how big the blue team is and whether they have the same freedom to develop and implement mitigating controls as the red team has to exploit things.
Hacker competitions mirror this. Red teams are allowed to bring in any exploits and do just about anything (as criminals would be expected to do), while blue teams are stifled by bureaucracy and not allowed to bring in anything.
Another, related paradox is that in corporate org structures, the CIO is responsible for making sure the company's systems are available and working correctly, while the CISO is responsible for securing them. The CIO's department is frequently seen as a profit center that unlocks potential for the company, while the CISO's is almost always seen as a cost center that (ostensibly) slows that potential down.
This also contributes to perverse incentives (like the red/blue teams): the CIO frequently gets their way and is more likely to get budget, while CISOs take all the blame when their budget increase requests get declined and IT is tasked with keeping unpatched systems up and stable rather than patching them quickly. Obviously, the best orgs find a way to get both done, but resources are always scarce for the rest of us.
I left a high-paying info-sec position at a large insurance corporation for this very reason. The CIO trumped the (fractional) CISO on literally every security issue that was surfaced, and worse yet, the CIO and CEO refused to acknowledge the risk being onboarded/ignored. The irony of insurance execs refusing to acknowledge information security risk was just too much.
I've been in infosec since the '90s. A lot of the time I think this is on us. As much as I respect the technical acumen and creativity of my colleagues in the industry, I don't think we broadly understand risk all that well, and as a consequence we do a pretty bad job of communicating it. We tend to peg the panic meter with multiplied likelihoods and catastrophized impacts of possible scenarios, while directly causing revenue losses by adding sometimes insane amounts of friction to the product delivery process.
That's not to say there aren't cowboy CxOs recklessly ignoring reality, but accepting risks is part of the job. The real answer generally lies somewhere in the middle of the two extremes.
> As much as I respect the technical acumen and creativity of my colleagues in the industry...we do a pretty bad job of communicating it.
This is the root of so many problems for technical teams in ostensibly non-technical businesses. More developers and engineers need to embrace the reality that their work doesn't always speak for itself; sometimes you have to speak convincingly on its behalf.
Or you wait until it explodes and then get the money either way. Plus, you don't have to bother with people who do not want to understand, which is the second problem commonly faced by technical teams. I've seen more than enough technical people do everything they could to make people understand, but at the end of the day Sinclair's adage about people not understanding something when their income depends on not understanding it holds true.
It's not always about understanding; sometimes it's just about making them believe you. The relationship between business and tech doesn't have to be adversarial. Learning how to get yourself a seat at the table, and what to say once you get there, can be a quality-of-life improvement across the board.
Agreed. It doesn't seem appropriate for info-sec people to be making decisions about which risks to mitigate, ignore, etc. They should provide input into that process, though. We struggled to even get the CIO and CEO to acknowledge and discuss info-sec risk and make decisions about what to do with it.
By "yank the ripcord" do you mean leave the organization? I see this type of behavior at just about every company I have worked for. There is no real priority on fixing security holes even when they are discovered.
Depends on the circumstances and what your career goals are. If you want to develop your leadership skills, stay put and try to drive change. If you're developing your IR/SOC/threat hunting skills, maybe stay put because you're likely to be needed (assuming the org is a large enough target to get interesting attention). If you're doing assessment/red team/pen testing, I'd stay a short while and then move on, because your reports are going to start being recyclable. If you're doing security architecture/engineering/etc., you're going to be resource-starved, so maybe move on.
Moral of the story: determine how it impacts your career goals and choose.
I can imagine the average corporate board member underestimating the risk accumulated by consistently ignoring CISO requests for more cybersecurity investment, but the insurance industry is used to dealing with low-frequency, high-impact payouts.
Do you think it was miscommunication, ignorance, greed, hubris, or something else?
I have hoped that "Cyber Insurance" might be able to price these risks, and also price information assurance best practices into premiums.[1] Do you think this has worked, or could work?
If an insurance company is unable to price its own internal IA risks, either at all or at a non-zero value, I'm discouraged from hoping for a market solution to the problem that, as the truism states, "offense is easy, defense is impossible." I think the intelligence services and law enforcement have also done a bad job, as evidenced by their hoarding of vulnerabilities instead of reporting or fixing them.
Schneier has lately argued that regulation is necessary. The idea of GDPR for infosec is unappetizing, but I have trouble thinking of any other solution that hasn't already failed.
There are currently (or were recently) two large lawsuits regarding cyber insurance claims working their way through litigation. If they both go in a certain direction, the concept of cyber insurance may become much less appealing (far fewer claims could be paid out, making the concept relatively expensive for less benefit than many companies anticipated).
Basically, insurance only works when the insured has faith that the insurer will pay and both parties understand the boundaries of the contract. One of the lawsuits involves the effects of WannaCry, which the insurer claims was a state-sponsored attack. "Acts of war" is one of those common exclusions to insurance policies, so if the insurer wins that case, insurers will have an incentive to always claim cyber attacks are nation-state sponsored.
The other case, I think, is about the difference between a general corporate insurance policy that has some coverage related to fraud and an insurer who claims the insured should have purchased a standalone cyber insurance policy. I think that case partially revolves around the question: when fraud happens on a computer network, is that a "hack" or is it traditional fraud?
I'd actually expect this to be the opposite. Insurance is heavily based on risk analysis. It sounds like they were choosing to take the risks either because you didn't present them properly, or because you don't realize how cheap the actuarial cost of non-compliance is.
I follow your reasoning... but no, that wasn't the case here. A number of board members of this org fought for and succeeded in getting increased investment in a true info-sec program, due to years of a very lax security culture and a series of internal audits detailing the risk to the org. The CEO and CIO were constantly, grossly over budget on pet software dev initiatives, which the board was becoming increasingly concerned with. Then here come the info-sec folks with a laundry list of gaping security holes in said over-budget software projects, in response to which the CEO and CIO proceeded to dodge meetings, ignore risk assessment communications, direct their underlings to exclude and shut out the sec team, and keep the board in the dark. It was a toxic culture; I'm glad I left when I did.
Hacker competitions often seem very contrived to me. I suspect that in order for the red team to make any progress you have to tie the blue team's hands behind their backs. Most of what I see from the penetration testing community is generally pretty gimmicky and situational, and often doesn't take into account the attacker's risk/reward ratio.
I disagree completely. Red team tools and techniques are different and gimmicky for a reason: their goal is to demonstrate the lack, or effectiveness, of security controls and processes, while the bad guys have more time and a more precise target. For example, 0-days and disruptive actions are mostly prohibited for red teamers.
Maybe allow the blue team to fight back? Or to actively track the red team, using active defense instead of only passive defense?
Moreover, the outcomes are different for both teams:
- RedTeam succeeds => they are seen as "real" hackers/heroes and the BlueTeam as poor incompetents
- RedTeam fails => the BlueTeam "only" did its job, the company's investments in cybersec paid off... so the cybersec budget can be reduced.
So, for the RedTeam, it's either a win or a tie. And for the BlueTeam, it's either a tie or a loss...
If the BlueTeam could fight back, maybe this could change...
That's true, but only because it mimics real life. The defenders are always at a disadvantage here: they have the boring job, and one where a single mistake is one too many. And they have to achieve that perfect score while operating within the rules.
On the other side, the attackers have the more exciting job and only need one success, which they can achieve by whatever means they see fit.
You'll see this outside of IT as well, like in sports. Goalkeepers (defenders) vs. strikers come to mind, though at least there they all operate within the same set of rules.
I kind of like the dual approach: the first team to get into the box has to try to hold onto it while still maintaining the specified services it's supposed to be providing in the simulation. The winner is whoever holds it the longest.
It's inherent to the field. A successful blue team is a distributed win - every line of code did what it was supposed to do. A successful red team is a concentrated win, for the people who found the few lines of code that did something else. The job of a red team is to make things interesting. The job of a blue team is to keep things boring.
Let the non-red teams use pre-existing scripts, code, etc., to harden things. This, of course, would make the competition a level playing field and would make it much less fun for the red team. Attendance would drop off quickly and companies would no longer sponsor these events, as the primary purpose is to recruit people out of college.
Actually, this could be structured like a CS:GO competition:
- the RT plays the terrorists
- the BT plays the anti-terrorists
The RT has to "plant" an exploit. The BT can either block/track the RT or "defuse" (find/disable) the exploit.
The "maps" would be the kind of system:
- an AD behind a firewall
- a WebServer with datas to extract from a backend DB
- and so...
The sponsors could sell either the skills of their pen-testers for hire or their solution for securing a system, so it might be a good marketing campaign for the winner...