The biggest areas for growth in Cyber at the moment are the not-so-sexy jobs: asset inventory, patch management, vulnerability management, third-party management, risk management, etc. If you are good at any of those and are innovating in any of those areas, you are as close to naming your own price as you can get in Cyber.
As for the most "needed" areas of Cyber, it comes down to education. Not your bachelor's degree, but educating and raising awareness among your business, your IT staff, and even your development teams. It's extremely tricky to measure your return on investment, but almost always it comes down to a lack of knowledge causing one massive hole in the fence, leading to a breach.
No amount of controls will stop someone truly motivated and skilled, so you're better off raising the fence a bit higher and hoping that it deters the truly malicious.
Disclosure: I run Vulnerability Management and Assessments globally for one of the largest companies in the world, so my answer may be a bit biased :)
I keep telling people who want to get into infosec one thing over and over: most infosec work is not about breaking [into] things, it's about incredibly boring reporting.
The truly interesting bits are on what to investigate/automate, what to report from it - and how.
If you're really good, I recommend focusing your long-term efforts on usability. Security gets a bad rap because far, far, FAR too often increasing the security of <something> means reducing that thing's usability. But if you can find a way to improve <something> so that it becomes more secure and more usable, you can't keep people away.
Fact of life: people gravitate towards convenience.
I keep telling people that the person who applies the patches needs to be qualified, paid, AND TRAINED just as much as the guy who wrote the fancy paper on maintaining security, and that development and infrastructure need to be simplified, otherwise security will likely not be implemented properly... companies rarely heed the warning. And that leads to breaches that the highly paid PR teams within those companies fight furiously to squash.
This is a huge component of my work, and in my industry it is truly underappreciated. I'm the only programmer in a manufacturing environment, and as our business grows, so do our exposure, attack surface, and potential bounty. Some days I feel like my co-workers think I'm goofing off or ignoring my other hats by messing with obscure systems. Sometimes I feel guilty. It's one of those professions where nobody notices you when you're doing things right, and the only way you know for sure it's right is after it's gone horribly wrong.
> No amount of controls will stop someone truly motivated and skilled, so you're better off raising the fence a bit higher and hoping that it deters the truly malicious.
I also want to second this. As angering as this statement is, it's entirely true. You cannot stop someone forever; you can only increase the difficulty of their tasks beyond a reasonable or attainable threshold. A "secure" network with ineffective monitoring can quickly become worse than a terribly insecure network that is tirelessly monitored. Complacency is a killer.
The word cyber is almost exclusively used when discussing security of computer systems. It's used very heavily by government and academic circles and it propagates to other industries from there.
In the 90s I used the word to describe an instant-messenger version of "phone sex", and I haven't been able to take anyone who uses the term seriously after that, but I never really took the government or academia seriously to begin with.
Even in academia, at least in my corner of it, "Cyber" as a term has a very government/military/suit connotation. Academics will sometimes use it when writing grant proposals or presenting in a DARPA-ish context, but most researchers prefer to call what they do "cybersecurity" (or even just "security", if a CS context is clear).
Yeah, not only is "Cybersecurity" a "Thing" (I work in s/w security and hear it all the time) but people still talk about "Artificial Intelligence" when they mean image filters and classifiers, and have been doing so since the 1980's.
Language fires the imagination. This is mostly a good thing. Sometimes it's also stupid, but not necessarily bad for it.
It’s usually used by either people working with the government, or by people who are tech-illiterate (marketing etc). The rest of the security industry is likely to snicker if they hear that word.
“Computer security” is too narrow; you want to secure more than just computers (phones, for example, which are technically just little computers but no one calls them that).
“Information security” is too broad as it covers more than technological systems—sensitive information often exists on paper, for example.
A close plain English name would probably be something like “information systems security” and I have heard some people use that, but it’s kind of a mouthful.
I guess I wonder why people get so upset and offended by the word cyber. Sure it’s a made up word, but all words are made up. IMO a lot of the resistance to using it comes down to weird cultural signaling like “I’m too smart or informed to use this dumb word.” It’s just a word, and even people who complain about it know what it means.
For me that cultural signal goes the other way around. When I'm faced with people using "cyber" unironically, I attribute that to government proximity, weird processes, and TED talks for managerial types rather than actual technical content. Which is fine in context, I guess; that doesn't make the term any less stupid. To me it signifies a certain cultural distance from the subject matter, from a generation that has been left behind. I always assumed the term stemmed from the old use of "cyberspace" or the "cyber information highway" in the 70s/80s(?); we've moved on from that era of the internet being a sci-fi construct. Even school children get at least some technical understanding these days and are made aware of larger implications like privacy.
While I absolutely agree that it's a pointless discussion, I don't believe it's completely insignificant.
You're not the only one. Brussels, the self-proclaimed capital of the European Union and seat of many lobbies, is full of those "cyber-security" types. They proudly declare themselves experts in the field but are hard-pressed to discuss any product other than what they've been told to sell.
Ah, cheers, forgot about that one. I'm not saying the word "cyber" is generally stupid, but its use as a pop-culture prefix is... problematic and misleading, if you'd prefer those terms. It's just used as theatrics by the type of people who classify getting hit by ransomware due to unpatched systems as some APT, at least everywhere I see it.
I agree with you -- I think there's a lot of cultural signalling, perhaps some unintentional by those who use the word 'cyber'. Generally speaking, I'm put off when someone uses the word "cyber", because I generally interpret it as a signal that they don't really know that much about infosec/cyber-security. For example, let's suppose that I'm meeting with a rep from Company X's cybersecurity team and we're reviewing my threat model, counter-measures, the specifics of the encryption, etc... and I'm asked "this all looks good, but is it cyber?" -- it's just plain off-putting. That said, I'll still do my best to smile and be helpful because, at the end of the day, we're trying to improve the world, not cut people down.
They needed a fancy word to attract and anchor the ridiculous investments made. This area has attracted so much attention, it has become an overlay IT organization that demands a lot of care and feeding. Nothing escapes the cyber-amoeba... even sleepy areas like asset management need expensive cyber tools and expensive cyber people.
Think of it like front end web development in the 90s. Webmasters ended up with a lot of independence and cash, because the company had to get on the information superhighway.
This is way off, the security community at large rejected the term cyber for years, but it was necessary to play ball with govt. That's it. The fact that vendors now leverage the word is irrelevant, that happened way later.
Sometimes I wonder how much money is out there waiting for the Magical HN Unicorn that is anti-cloud, anti-network, pro-RDBMS, pro-POSIX, old-school Dirty-Grandpa-Fighter[0] ultraconservative about computer security. My gut tells me $LOTS.
Marshaling the latest innovations in AI, ML and self-driving infrastructure, we protect your company with time-tested compromise-free MIL-SPEC IT solutions!
60000% more secure than Palantir, 134% more secure than AWS Government Cloud, according to "Fair and Balanced" independent testing.
Free yourself from the Cloud! Guaranteed physical isolation of mission-critical assets; armed guards 24/7 in front of your dedicated StatiDyn Security Cell; biometric six-factor authentication using the Gillette's Razor™ protocol.
I thought Asset Management was the next sexy thing in security - heard about a lot of security startups that facilitate in Asset and inventory management - BitDiscovery, Senrio etc.
Everyone is trying to get a piece of the pie :) The trickiest thing right now is defining what an "asset" truly is.
An asset could be ephemeral cloud infrastructure, an uncompiled piece of code, an API endpoint, a server, a compiled application, a third party vendor, a group of microservices, a fax machine, an employee, a filing cabinet with sensitive information, a virtually defined CI/CD pipeline, and a million other things. At what point do you cross the line from paranoia to proper asset inventory, tracking, triaging, remediation, etc.? How do you find commonality between all of these devices, critical infrastructure, and data?
Bonus points of trickiness: how do you manage inventory when it changes constantly, like the cloud, a third party, a web app, etc.? Things like certificate management get extremely dicey. Where do you draw the line between data management, asset management, etc.? It's currently the most open area of IT and Cyber there is, and no one, in my opinion, has a grip on it.
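One pragmatic answer I've seen people converge on is to stop hunting for deep commonality between all those asset types and keep the shared schema deliberately tiny, pushing kind-specific detail into free-form tags. A hedged sketch (all field names here are my own invention, not any product's model):

```python
# Hypothetical minimal asset schema: only the attributes that generalize
# across a VM, a vendor, and a filing cabinet live at the top level.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Asset:
    asset_id: str      # stable identifier, even if the asset itself is ephemeral
    kind: str          # "vm", "api-endpoint", "vendor", "filing-cabinet", ...
    owner: str         # the accountable human or team -- the field that always matters
    criticality: int   # 1 (low) .. 5 (crown jewels)
    last_seen: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    tags: dict = field(default_factory=dict)  # kind-specific detail goes here, not in the schema

inventory = [
    Asset("vm-1234", "vm", "platform-team", 3, tags={"region": "us-east-1"}),
    Asset("acme-corp", "vendor", "procurement", 4),
]

# "What haven't we seen lately?" is often the only query that scales
# across every asset kind at once.
stale = [a for a in inventory
         if (datetime.now(timezone.utc) - a.last_seen).days > 7]
```

The design choice being illustrated: identity, ownership, criticality, and freshness are the common denominators; everything else varies too much between asset kinds to be worth modeling centrally.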
I've never even seen a company that properly tracks assets when they're only defined as "servers" and "software packages". The closest I saw with hardware, before virtualization really took over, was when the datacenter wasn't allowed to hand out IP addresses to new servers without them being in the master inventory list. Then virtualization happened and things got bad again. Any company with Devops is going to run into challenges too.
I've never understood people who say: "No amount of controls will stop someone truly motivated and skilled". I don't think that's true.
Correct me if I'm wrong, but if there are no holes in the application/web stack to be exploited, then there's no getting in. Right? It's not about hacker/pirate skill. It's about whether or not the target has plugged all their holes.
If you’re going against a three letter agency, Israeli or Chinese intelligence, you also have to consider all of your hardware sourcing. They don’t even need to compromise vendors, they just need to intercept a package en route.
Not sure where OP was coming from. It’s virtually impossible to protect yourself against a dedicated advanced persistent threat group.
In the purest, most academic sense of the conversation, yes: it is impossible to comprehensively defend against 0-days, APTs, and nation states.
If we want to be pragmatic about the discussion, then it’s all about your threat model. In that sense, OP is right. If you’re a mom and pop shop selling a catalog of hardware, your LAMP stack isn’t going to face the same scrutiny as a “GooFacePayZon”. According to how he defines his threat model, he can call himself ‘secure’.
Software is only one part. Do you trust your hardware, your people, your supply chain, your physical security? "Truly motivated" can mean extreme resources and a willingness to cross all boundaries.
Are you secure if your admin's child is kidnapped and the ransom demand is for network access?
Are you secure from the Secret Police wanting to hijack your service for their purposes?
Once you accept you CAN'T stop truly all attacks you can be comfortable with acceptable risk and work to mitigate realistic risks.
Yep - this is why you might try to limit pivoting based on an assumption that everything is compromised, you can require coordination from multiple geographies to unlock access to certain highly sensitive resources, you ensure that these protocols aren't published, and above all you follow the New York Times Test: don't type anything that you wouldn't want to see on the front page of the NYT. This requires pride in security at all levels of your organization, and it's something that few organizations outside of the military get right.
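The "coordination from multiple geographies" control is typically built on threshold secret sharing, where no single site holds enough material to unlock anything on its own. A toy sketch of Shamir's scheme, assuming the key is encoded as an integer (illustrative only, not production crypto):

```python
# Toy Shamir secret sharing over a prime field: n sites each hold one share,
# and any k of them can jointly reconstruct the secret.
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic happens in this field

def split(secret, n, k):
    """Split `secret` into n shares via a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the constant term: the secret."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# Five geographies, any three of which can jointly unlock access.
shares = split(123456789, n=5, k=3)
assert combine(shares[:3]) == 123456789
assert combine(shares[2:]) == 123456789
```

The useful property: fewer than k shares reveal nothing about the secret (information-theoretically), which is what makes the geographic split a real control rather than theater.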
I am referring to a CIA (confidentiality, integrity, availability) related incident, less so the availability. If an attacker is truly motivated, the web stack / application stack is not how you compromise the system. The user is how you compromise the system. Do you have proper physical security to prevent unauthorized access? Do you have proper password and two-factor auth configured? Do you educate your employees on how to identify phishing? There are numerous ways to compromise a system other than remotely via the web or application stack :)
>Correct me if I'm wrong, but If there's no holes in the application/web stack to be exploited, then there's no getting in. Right? It's not about hacker/pirate skill. It's about whether or not the target has plugged all their holes or not.
Similarly if a ship is unsinkable the passengers will never drown. Easier said than done.
I think you may be imagining a comprehensive numbered list of exploits. Some products are sold that claim to offer something like this.
It may be possible to write a software component that is not vulnerable to exploits, but any non-trivial system built of many components will almost certainly be exploitable.
As much as people say they value security, they also value delivery of working software.
Additionally, as others have said, no system is invulnerable from the CIA, NSA, KGB, etc. Someone knows the passwords (or where the passwords are stored) for your system. They may be vulnerable to bribery, blackmail, torture, etc.
Unfortunately it is never that simple. Even if you have things well plugged on your end, other software/services that interact with yours may provide a path. I recall one instance a few years ago where a hacker chained password-recovery services together to breach an Apple account by bouncing through Amazon. One of the password-recovery methods for Apple at the time was providing the billing address, and at Amazon you could recover a password by providing the full CC# of a card on file. But Amazon also let you add a CC# to an account you weren't logged into, so the hacker got a Visa gift card, added it to the victim's Amazon account, reset the Amazon password with that CC#, and then used the address on file in Amazon to recover the Apple account password.
Then there are the security holes that exist and are known about by select groups which they sit on and use for big plays...
> Correct me if I'm wrong, but If there's no holes in the application/web stack to be exploited, then there's no getting in. Right?
Right. But there's a saying: "Nothing is unhackable." Therein lies the problem. If you can build an unhackable system you literally can get whatever salary you want. If you can convince someone that such a thing is possible, that is. But I'm pretty sure that'd count as fraud.
> If you can build an unhackable system you literally can get whatever salary you want.
Does it have to be useful?
On a more serious note: similar to being able to break RSA in ‘little’ time, having that kind of skill would result not in financial wealth but in a huge risk to your physical and mental/emotional well-being. Imagine who would come knocking on your door (assuming they wouldn’t straight out abduct you), and trying to tell them no.
> It's about whether or not the target has plugged all their holes or not.
You're not exactly wrong, but you're assuming something that's impossible. How do you know where all the holes are? You (I'm using the generic you here, as though speaking to a CIO) cannot even inventory all the net-connected software and hardware you own, and even if you could the list would be out of date in 24 hours. But let's say you had that fictional inventory. How do you find its vulnerabilities? You might be able to design an automated process to look at your source code and match against the CVE database. Whoops! You don't have source code for most of your resources because they're proprietary and came from outside vendors. So maybe you look at object code. There are tools that do that. Whoops! A lot of the code is in ROM and you cannot extract it. Even if you could extract all your object code and analyze it against CVEs (which you can't), that's only going to catch known vulnerabilities. What about the unknown ones?
Oh and now we have to talk about all the stuff that's not net-connected which is vulnerable to employees plugging in USB drives...
So no, you can't know where all the holes are so there's no way to patch them all. This doesn't mean security is impossible. It just means there's no such thing as perfect security and there are no magic bullets. Security is a necessary, expensive, and mostly boring part of any company's day-to-day business operations, like, say, accounting and the legal department. But that's not quite right, because most of your employees probably don't need to know much about accounting or the law. But they do need to understand the basics of safe computer use, so ongoing training should be a fat budget line item.
Anyway security is a process, not a thing you can just buy a little of from a vendor. You ignore the security process at your peril.
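To make the earlier point concrete: even the tractable half of the problem, matching a known inventory against known vulnerabilities, degrades fast. A toy sketch (the feed format, package names, and versions are all invented):

```python
# Hypothetical inventory and vulnerability feed, invented for illustration.
installed = {
    "openssl": "1.0.2",
    "nginx": "1.17.0",
    "acme-firmware": None,   # a ROM blob we can't even version
}

# (package, fixed-in version) pairs, standing in for a real CVE feed
cve_feed = [
    ("openssl", "1.1.1"),
    ("nginx", "1.16.1"),
    ("acme-firmware", "2.0"),
]

def version_tuple(v):
    # Crude; real version schemes (epochs, letter suffixes, vendor forks) break this fast.
    return tuple(int(part) for part in v.split("."))

findings, unknowns = [], []
for pkg, fixed_in in cve_feed:
    ver = installed.get(pkg)
    if pkg in installed and ver is None:
        unknowns.append(pkg)                  # can't even evaluate: invisible risk
    elif ver is not None and version_tuple(ver) < version_tuple(fixed_in):
        findings.append((pkg, ver, fixed_in))

print(findings)  # the knowable part
print(unknowns)  # everything you can't version is invisible to this approach
```

And this only ever covers known vulnerabilities in inventoried assets; the unknown unknowns never appear in any feed.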
There is no plugging all of the holes. Not in a general case. It's like the halting problem (it's technically equivalent) - maybe you can say for one program there are no holes, but not for arbitrary programs, for arbitrary definitions of holes.
This is Rice's theorem.
More practically, you can simply assume that for an arbitrary program of 'reasonable size' with a moving codebase there are effectively infinite exploitable vulnerabilities.
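For the curious, the reduction behind that claim is short. If a perfect hole-detector existed, you could feed it programs built like the hypothetical gadget below and thereby decide the halting problem, which Turing proved impossible (all names here are illustrative, not a real tool):

```python
# Illustrative only: the standard reduction behind "no perfect hole-finder".
# is_vulnerable() and simulate() are hypothetical; the point is what their
# existence would imply.

def build_gadget(program_source: str, program_input: str) -> str:
    """Source for a program that reaches exploitable code iff the target halts."""
    return (
        "def gadget(untrusted_input):\n"
        f"    simulate({program_source!r}, {program_input!r})  # loops forever if the target never halts\n"
        "    exec(untrusted_input)  # the 'hole': reachable only if the simulation finished\n"
    )

# If a perfect is_vulnerable(src) existed, then
#     halts(p, x) == is_vulnerable(build_gadget(p, x))
# would decide the halting problem -- which is impossible. So it can't exist.
gadget_src = build_gadget("print('hi')", "")
```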
They'll just rubber-hose your teenage son until you give it up. I'd certainly give up a database password before I'd let my son get beaten by Bin Laden.
Two problems with that - knowing about all of your holes, and whether or not they are plugged, is impossible. Second, many breaches don’t even involve holes in your web app stack. Low tech attacks like phishing and malicious attachments are remarkably effective to get a foothold into a network.
I guess the stack itself is probably so deep and wide generally that the attack surface goes on and on. More than anything though, humans. Staff can be exploited easier than anything else in a lot of cases (I'd wager, not my area of expertise).
A combo of in-house tools for creating findings from "non-standard" sources (standard tools being Nessus, AppScan, etc.), such as pen tests, responsible disclosure, red teamings, etc. We partner with Kenna Security pretty heavily in terms of tracking and consolidating our vulnerabilities, along with remediation prioritization and strategy.
Completely agree. We're working on an idea to handle the boring stuff as part of YC's Startup School 2019. GDPR, HIPAA, CCPA, PCI, etc. compliance + penetration testing and risk assessments.
We'll be building it at: https://secquity.com or if anyone has any specific questions, feel free to reach out at info@secquity.com
Not surprising. At Netflix, despite the already high salaries, the security engineers were paid a premium for their skills. The highest paid engineers at Netflix were security engineers. They did stuff like invent entirely new security protocols[0][1].
That’s definitely not the norm though - security engineers are typically classified the same as SDEs for payroll purposes, and tend to have less negotiating leverage than high-level SDEs who build and ship products, save for some very rare exceptions. But most companies also don’t have their security engineers actually write shipping code.
I think this article, like every security related article from bloomberg, is pretty much BS.
Also, they use the example of CISO at a large company. CISOs aren’t actually security experts in the vast majority of companies. They’re usually business people or outright frauds, disappointingly. The people who hire and interview them have no way to validate them.
The article makes a case for why CISOs are worth a lot, justifying the cost of a breach. The problem is that CISOs have virtually no impact on whether or not you get breached, and they usually bear no responsibility for it. It doesn’t matter who you pay how much - it isn’t going to affect the outcome much. Alex Stamos is one of the few that actually has any background in security at all, and look what good that did Yahoo and Facebook. Not much. The other problem is CISOs rarely get any actual authority over product, and when they try to flex, they just get pushed out. The ones who survive are simply master politicians who manage their messaging and their image.
The real reason for this article, I suspect, which appears to be primarily sourced from a security recruiting firm, is that they take a cut of every position they fill. It’s very much in their best interest to pump up the value to justify their fees.
Most security money is very poorly spent. I think part of the problem is that hackers are usually bad managers, and tend to be less interested in playing corporate politics. So the manager jobs go to someone else, who has to make decisions they don’t understand. The higher up you go, the more this gets amplified. For every breach you hear about, that company likely has a few competent security folks saying, “see, I told you...”
I don't know what they did at Microsoft, but this is just pants-are-shirts wrong for SFBA tech companies. I could tell you I know this from near-firsthand experience (getting offers, watching friends get offers, for both SDE and security roles, as well as doing technical recruiting work for our clients, who are almost all SFBA startups), but you're also responding to someone giving you firsthand knowledge at Netflix.
The people telling you security talent commands a premium are not lying and they're not wrong.
There is a general instinct people have to push back on "look at these high salaries" data points because, not to put too fine a point on it, they're not good negotiators and get second-tier offers. That includes a lot of people with truly extraordinary talent; negotiating skill and delivery skill are, of course, orthogonal. But even that is changing; all you have to do is keep your ear to the ground to know that the standard SFBA SDE salary is not the market clearing rate for (e.g.) software security.
I didn’t accuse anyone of lying, so I have no idea what you are on about. I asked how jedberg knew what other engineers were paid. I am not even skeptical of his particular anecdote, just curious.
I have first hand experience at SFBA tech companies, and if you don’t think they are overwhelmingly classifying them the same as SDEs, you’re simply wrong. You haven’t worked inside of one, so that alone is likely going to limit your understanding quite a bit.
This has nothing to do with my own salary either (and I have received an offer from Netflix, not that it means anything).
For whatever it’s worth (probably not much), I actually ran into someone who interviewed with a company that your company was recruiting for, and that person did not pass your interview. Based on their account, your bar for security engineers is far higher than the norm. I seriously question whether you understand how mediocre the mean security engineer actually is, even at top tier companies, given that you seem to surround yourself with top tier talent. There is a vast ocean of OWASP 101 grads landing senior positions out there.
I feel like the opposite thing is true about us, in that our process (effectively, somewhat notoriously) avoids people with substantial prior experience in our field.
Google does not differentiate those roles (like just about every company). Most companies have a “faux title” and a real title. The real title for security engineers is usually the same title as software engineers. I’m not saying that no company exists with security engineers in a separate pay ladder (because who knows), but I’ve personally never heard of one.
It’s about as difficult to answer that as it is to answer what the market rate is for SDEs. They vary wildly. Like hundreds of thousands in variation. And even the same role at the same level varies wildly depending on negotiated initial offer and performance bonuses/discretionary equity grants. Facebook in particular gave out $1M in DE to their top performing SDEs, even at lower pay grades.
Also, appsec tends to be distinct from IT security (who tend to be classified as IT/SRE/Ops or similar), and often outside of the CISO/CSO scope, but not always.
Most of my years spent in the cyber security space revolved around consulting with large companies. The most common activity we consulted on was cyber security related technical transformation. Basically a company would hire a new security manager or CISO who had all their favorite security solutions and vendors. So everything already implemented had to be replaced with the CISO preferred solution. This kept an army of consultants and business analysts and project managers in business as well as gave the security org the feeling they were making a difference.
Might not be the case everywhere, but it was certainly the norm in the consulting world.
I’ve spent time in consulting at and with the name brand pen testing shops and never heard of this type of consulting. I don’t doubt your account, but that’s a foreign concept to me. I’m curious which consulting shops work that way.
Information security system-integration consulting shops. I think we had some pen testing going on, but the money was made selling a new IBM governance tool or SIEM solution; then a new security manager comes in and he prefers SailPoint or Microsoft, so everything gets switched over.
I definitely feel like I have much more negotiating power than eng, as someone who used to be in an eng role and is now in security. There are more engineers and eng is a less niche skillset. There are very, very few people with my skillset. Hiring for my team takes months, minimum, with far far fewer candidates in our pipe.
I work at an SF company and routinely field offers from other companies that I can view on levels.fyi, and my colleagues in eng are open with me about their salaries so I have lots of datapoints to compare to.
To your other comment:
> I seriously question whether you understand how mediocre the mean security engineer actually is, even at top tier companies
I doubt I could give you better advice than anyone else or the internet. Not trying to be dismissive, I just don't feel like I have a good handle on the question myself.
I can say that the trend at companies that pay well is that you will be able to pass an eng interview. More and more, it'll basically be "we expect you to be as good as an eng around your level, maybe one level less" and "we also expect you to be an expert at threat modeling", plus whatever is specific to your niche; for me it's detection and response, so I'm expected to understand operating system services, how attackers go about taking over a computer, the traces they leave behind, etc.
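To give a flavor of that detection-and-response work, here's a deliberately toy example (the log lines are fabricated) of turning attacker traces into a signal:

```python
# Toy detection rule: flag source IPs with repeated failed SSH logins.
# Log lines are fabricated for illustration; real pipelines parse structured
# events, correlate across hosts, and tune thresholds against noise.
from collections import Counter

log = [
    "sshd: Failed password for root from 203.0.113.7",
    "sshd: Failed password for root from 203.0.113.7",
    "sshd: Failed password for admin from 203.0.113.7",
    "sshd: Accepted password for alice from 198.51.100.2",
    "sshd: Failed password for root from 203.0.113.7",
]

failures = Counter(line.rsplit(" ", 1)[-1]       # last token is the source IP
                   for line in log if "Failed password" in line)
suspects = [ip for ip, n in failures.items() if n >= 3]
print(suspects)  # ['203.0.113.7']
```

The interesting work is everything around a rule like this: deciding which traces matter, keeping the false-positive rate tolerable, and automating the response.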
There are a lot of ways you are flat wrong about this issue.
I was a CSO at a large technical company and we were successful in separating the role of Security Engineer from regular Engineer with HR.
With regard to corporate politics, you have hit on an interesting aspect of the problem. Why is it that politics should trump the actions taken to prevent breaches?
The CSO/CISOs that I know all have this stress of knowing what the risks are and the difficulty of moving the organization, incrementally, towards a safer posture. Successful ones know how to navigate whatever the culture is and somehow produce a solution.
The underlying causes of breaches really come down to 1. the corporate culture, 2. the technical competence of the organization, top to bottom, and 3. the strength of the security team. If the security team up through the CSO is very good and the culture is not so good (recent breaches will give you names), or the technical competence of the company is low (e.g., putting a breach response site on the internet disconnected from your root domain), then the effectiveness of the CSO is not very good.
There is a fundamental tension between spewing off features to win market share and producing quality, secure software.
A wise company will find a CSO who can help move the culture and influence the technical practices. Those folks are very rare, and as such will have very high pay.
We talked pretty openly about salary. Also, managers, or anyone involved in setting pay, had a spreadsheet of all the engineers' salaries for level-setting purposes.
Those would be security auditors, and I don't know. They wouldn't have been on the engineering list. You're not a security engineer if you don't check in code.
Agreed on most points, except that in some cases CISOs are actually people who rose up through the ranks within security and actually know their shit. But I concede that it's not very common.
Yes it is. Companies set salary ranges based on cost of labor in given markets. They do this by grouping job codes into job families. The job families are what's benchmarked. It's entirely possible for there to be a broad range of job codes and business titles (or even one job code & multiple business titles) with the same benchmark. Because CoL benchmarking only works if companies share labor data with each other, they typically have no problem sharing this information to clearinghouses that provide this analysis service for pay. It also means the benchmarking tends to be fairly accurate.
There is an endless supply of "infosec specialists" and "ethical hackers."
But there is a massive shortage in motivated experts that ensure packages are up to date and fluent enough in code spelunking to ensure the app isn't trusting user input or allowing privilege escalation.
There's also a shortage in technology leaders willing to spend money on the mundane aspect of security. It requires regular work, not compliance effort and periodic audits/pentests that check off boxes.
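The "app trusting user input" class of finding that this kind of code spelunking turns up is often a one-line difference. A self-contained toy example (not from any real codebase):

```python
# Classic SQL injection versus a parameterized query, using an in-memory DB.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_unsafe(name):
    # Trusts user input: the input is spliced directly into the SQL text.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # injection succeeds: every row comes back
print(lookup_safe(payload))    # empty: no user is literally named that
```

Spotting the first variant buried in a large codebase, and getting it fixed, is exactly the unglamorous work the comment above describes.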
Software complexity in most companies has exploded. Nobody is doing anything to try to reduce or manage complexity, so it's only getting worse. The more complexity there is, the easier it is to find vulnerabilities.
>>Nobody is doing anything to try reduce or manage complexity so it's only getting worse.
I disagree. I see a number of large corporations starting to standardize: 1) their entire development stack, from IDE all the way to how the code is deployed; 2) reengineering entire languages so that one language is used, e.g. Quartz at BofA; 3) at the very least, companies are starting to standardize their middleware stacks, to at least avoid the configuration-related issues of having each development team manage that.
While I do agree that the complexity of third-party libraries has exploded and is increasingly difficult to manage, I'd say companies are well on their way to standardizing that, with tools like Nexus, Sonatype, Black Duck, etc.
We're obviously a long way away from being even 75% effective across the board, but to say nobody is managing the complexity is a bit short-sighted :)
We're trying to address this. If you've got some time I'd really like to compare notes on this and learn how you guys work day-to-day. We're leveraging osquery to assess various aspects of systems and to build threat models where risk cascades as systems change, to help facilitate automated reporting, alongside the traditional mundane day-to-day cybersecurity activities.
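One hedged sketch of what "leveraging osquery" for automated reporting could look like: feed the JSON that `osqueryi --json` emits into a script that flags packages below a minimum acceptable version. The baseline versions and sample rows here are invented for illustration, not taken from the comment above:

```python
import json

# Hypothetical minimum-version baseline; invented for this sketch.
BASELINE = {"openssl": (1, 1, 1), "sudo": (1, 9, 0)}

def parse_version(v):
    """Turn '1.0.2g' into (1, 0, 2), keeping only numeric pieces."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def flag_outdated(rows, baseline=BASELINE):
    """rows: parsed JSON from e.g.
         osqueryi --json "SELECT name, version FROM deb_packages"
    Returns names of packages installed below the baseline version."""
    return [r["name"] for r in rows
            if r["name"] in baseline
            and parse_version(r["version"]) < baseline[r["name"]]]

sample = json.loads(
    '[{"name": "openssl", "version": "1.0.2g"},'
    ' {"name": "sudo", "version": "1.9.5"}]'
)
print(flag_outdated(sample))  # ['openssl']
```

A report generated from output like this is the "incredibly boring" but high-value work the thread keeps coming back to.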
Things are out of date because things that are working don’t get money and time allotted to them to update them.
People still run Windows XP because there’s a piece of software that never got updated to run on Windows 7, much less Windows 10.
There are those Java apps that are stuck on Java 6. Websites that still need IE6. Things that use the unsafe versions of protocols like HTTPS... because Visual Basic 6 doesn’t support the newer ones out of the box.
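The "unsafe versions" point is about legacy TLS. A minimal stdlib-only sketch of how a modern Python client can refuse to negotiate the protocol versions those old stacks still speak:

```python
import ssl

# Build a client context that refuses SSLv3 / TLS 1.0 / TLS 1.1,
# the legacy versions that VB6-era stacks are often stuck on.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
print("context will only negotiate TLS 1.2 or newer")
```

The catch, as the comment notes, is that enforcing this on the server side breaks every client that never got updated, which is exactly why the old versions linger.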
I'm out here imagining all the unethical hackers drooling over the sweet, sweet vectors that are Snap and Flatpak. Unfortunately the QA and auditing aspects of distro packaging seem to be taken for granted, and the resources for that are surely not sufficient to counter motivated adversaries.
Do enough people actually use Snap or Flatpak (especially at large companies) to make it worth anything? I’d imagine that most people would just use real distro packages and stuff compiled from source instead of trusting Snapcraft or random Flatpaks off the Internet, especially in production.
As a software engineer early enough in his career to change tack, how would I go about venturing into this space? Cybersecurity is something that has always interested me, but it seems like such a massive feat that I often find myself overwhelmed and settle back into my comfortable dev job.
Then every time we finish building a new publicly accessible system, we send it off to "the security company" to pen test it. I am always very jealous of them.
Bug bounties are a great way to get your feet wet. I've seen many devs (especially web devs) have a lot of success hacking on websites that are built with frameworks they are familiar with. I would recommend checking out Bugcrowd or Hackerone to get started.
Besides that, there are a ton of great online courses, such as PWK/OSCP, and labs (Hack The Box).
There is the serious end of the security business, the service end, and the fake end. Plus of course all the black-hat ends.
If you want to be in the serious end, which doesn't necessarily pay more than any other software job but can be really interesting work, I would suggest learning about anti-virus and similar attacks (there are books and tutorials) and generally making your server software game as strong as possible. Then get a job with a security company at whatever level and bust your ass looking for challenges. You can rise very quickly if you can move the dial for the customers, and "smart and gets things done" plus "gives a shit about security" is a rarer combination than you'd think.
The service part, e.g. your pen-test company, is going to be much more mercenary. Great experience if you can get it, and probably a good space to start your own company in, but of very limited value in the big world. Security companies will have huge annual contracts, pen-testers and the like will be called in occasionally to check off a box on a security audit. Either one can work for you, but it's best to know what you're getting into.
The fake end of course is companies promising something they won't actually deliver, or will deliver with gross violations of ethics and/or the law. Obviously avoid these as best you can -- for the more serious companies, having your name associated with "SEO" or other spammers can permanently blacklist you from employment at least in the US, obviously the dodgier the play the greater risk of blacklisting. Hiring managers worth their salt have a nose for this, since Ethics is way more important than Skillz for any serious security job.
In case the black-hat part isn't obvious: in many places word gets around if a talented hacker is interested in security. Mafia is mafia even for us nerds. If something sounds suspicious, I strongly suggest you don't take the meeting. (This may be less of an issue in the US.)
Best of luck to you! The world needs more smart people working for a safer Internet!
I dunno about some of this; working for a security company in a non-security software role gets you a lot of adjacent experience (take extra courses mandated by the company, go to extra talks, work with super smart security people), but I don't consider myself anything like an actual security expert after doing this for nearly 8 years.
There's a lot of not-security work to be done in the security industry, and it's not all work that gives you security-specific experience. I like to think I'm good at what I do, but it's not security, even though it's to help security people.
I've been working in Security Per Se for more than 10 years and I would also be reluctant to call myself a "security expert" -- as would most of the people I respect in the business. (Free pass for CVs in motion of course.)
This is because many of us have very specific domain knowledge which probably doesn't map to a layperson's expectation of "security expert" -- and while I don't see much "Impostor Syndrome" I would assert that most branches of Security will humble you if you really know your shit, so a great indicator of someone who doesn't is their readiness to claim broad expertise.
Yes, most of the work in "security" is just "software engineering" -- but my own experience has been that for people who care about the security angle, plenty of domain knowledge accrues over time. You might not even realize how much you have, but others do: for me there is a huge difference between working with an ops person who has internalized the adversarial worldview of Security and one who is "just a sysadmin."
Take the OSCP course, that would get you some good practical exposure and a well-recognized cert. Having a software engineering background you'd be well positioned to do code auditing too, if that was your bag. If you really want to make bank, get into smart contract auditing. Going to events like DefCon and Blackhat is great for networking too. Check out capturetheether.com and hackthebox.eu for some practice!
Cybersecurity is so wild west right now. Change your LinkedIn. Do some public speaking. Read some blogs and you'll get your foot in the door. Where you go from there is about how well you sell yourself.
Do the CCNA and follow it with the CCNA Security certification. That way you can get a job easily. Believe it or not, certifications are important; non-technical people won't trust you without one.
If you're into bug hunting, do it in your leisure time. It takes a lot of time and patience, and it won't always pay the bills. OWASP work you can do as a supplement, not as your primary focus.
The easiest way, I think, to enter security with a guarantee of paying the bills is via the networking domain.
I told my son repeatedly that if he ever wanted a secure and high-paying job, it could be in cybersecurity. We aren't going back to less dependence on networks, and we are putting more and more valuable assets and operations on those networks, which will need to be protected. Although, I suppose it is possible that computer security could move into the automated sphere....
A lot of security issues will boil down to the CISO / consultant saying: you have to spend a lot more on your software infrastructure, hire more staff, keep shit up to date, and not use bottom-barrel $10/hour offshored engineers. It will have to become company culture to keep shit up to date and do the basics of security. Like with Equifax:
And when that is the problem you have to deal with, it's more about executive buy in and management than it's about any sort of security expertise, and that seems fairly difficult to do at most companies with really large systemic security culture issues such as that.
“CEOs don’t know what it’s worth until it’s walking out the door,” Comyns said. “Then they stand in the door and say, ‘You’re not going anywhere.’”
They deserve what they get. If you underfund critical parts of your infrastructure that you don't even realize are critical, what do you expect? Pay for talent. End of story.
True, but if it's critically underfunded everywhere, then we get what we have now, where breaches are just a cost of doing business and a risk you account for.
I would imagine most sites on the internet could be exploited if targeted, and are ultimately relying on the fact that their data/site isn't valuable enough to be worth attacking in the first place.
As a security specialist, I 100% agree. It's just not provable that we are worth anything. There is so much theory and guesswork, and companies still get hacked, because you can't prove whether I'm wrong or right until you actually get hacked. There's too much ground to cover and not enough people watching the alarms. And of course the alarms trigger whenever a stiff wind blows.
Everyone wants to do security but no one knows what security means so they just cut a fat check and pretend like it is working.
Plus "Security Expert" is incredibly vague. Like, what do you do? Are you a pen tester? SIEM analyst? Compliance? Do you work in SecDevOps? Or are you just a firewall/IDS guy? Or maybe you walk around the office and hit your accountants with a wiffle ball bat every time they click on a phishing link.
The world of infosec is so massive that saying you are an expert in 'Security' is useless.
You could say the same thing about being a software developer.
That could be backend development, but even then there are different languages and tools: MongoDB, MSSQL, Oracle. Even the server-side languages can differ vastly: Java, .NET Core, Python, Ruby; the list goes on.
Add "full stack" to that list and now it could include desktop software (Windows Forms, WPF, etc.), JavaScript frameworks, or mobile platforms (which can mean Swift, C, Java, or some of the cross-platform frameworks).
Sure, all these skills are in demand, but good companies will explain what their interpretation of "Software developer" means in the job description.
Then you, an expert in a few of those languages, will pick and choose what jobs to apply for. I don't see why the security field has to be any different.
Search for "Corporate Controller". They typically report directly to the CFO and oversee a lot of other financial roles. Every company has one, and they usually only step out of their office to hit someone with a club or something.
A lot of their job is protecting the company from lawsuits, usually because someone somewhere did something dumb.
Source: worked closely with a corporate controller for many years.
I'm skeptical about the "more than 300,000 unfilled security jobs" stat. Are we talking about thousands of empty chairs for each of the big tech companies and hundreds for each of the remaining fortune 500?
I don’t know about this particular statistic, but what is often done in these studies is to survey organisations to ask how many of these staff they want or need, or how many vacant ‘positions’ they have.
The numbers don’t always reflect how many positions they actually have the budget for, or whether they’re willing to pay a realistic rate.
Arguably, if they were willing to pay market rates there couldn’t be any vacancies, because supply and demand.
I rarely see the concept of "supply and demand" used properly.
In introductory economics, it's just two lines crossing somewhere. In reality though, companies still have to make a buck, so there is a definite limit on how much a company can pay a "security professional" and still be profitable.
And worse, if we are talking about a labor shortage across an industry or country, price doesn't really come into it at all. Price is only a competitive factor; it doesn't create or destroy individual developers. If one employer "scoops" an employee at a higher price, the shortage just moves to wherever that employee left.
Sucking in talent from other industries or countries also has its limits. And on top of all of that there seems to be some anti-competitive effects preventing wage-wars.
I’m failing at finding the original report about this but this is the line I keep seeing repeated in PRs:
> Nationally, there were 301,873 cybersecurity job openings in the private and public sectors during the 12-month period between April 2017 and March 2018. This included 13,610 openings in the public sector.
Cybersecurity is a cat and mouse game; it will never end. If it ends up like another arms race, it's going to be a huge cost to the markets, bigger than any national defense budget. The best way to secure data is not to store it in the first place. I think this will be the next big trend: from big data to small data.
Most cybersecurity threats are "simple" crime, rather than government action, and even the latter is often quite "commercial".
If there is enough law enforcement and enough self-protections by civilians (companies), criminals should get demotivated and the overall level of activity should die down, just like "real" crime. We're far from that though.
It also sounds like the old days, somewhat similar to 'nobody ever got fired for buying IBM'. Why? By throwing a great deal of money at hiring the experts (if that is the case), nobody can be called out the way they could be if they hadn't paid for that insurance.
I'm curious if anyone has found consistent work doing bug bounties and things like HackerOne to a sufficient degree that it could replace a full-time engineering role.
I'd love to hear/learn about someone's experiences if it exists.
I co-manage https://hackerone.com/googleplay and the top contributor there probably makes 5x - 10x of an average software engineering salary for his home country.
Not a lot of hackers care about Android app security, so there are barely any hackers participating and little competition. Most apps have never had anybody do a security review.
Additionally the scope of the program is so wide that you can look through hundreds of apps from companies that have no security posture at all. Finding bugs is easy and payouts are more than generous.
I know a few folks who do full time between things like Synack and Bugcrowd. Synack is, in my opinion, the ideal model for pivoting from part time to full time as a professional bug bounty hunter, although it takes a ton of skill and hard work. I'd say it's the exception more so than the norm.
If you are interested in learning more about Synack and its model, shoot me an email: i@willcode.it. I can try to set up some contacts on their side who are working full time on platforms like it.
Do the folks you know that are doing this full time actually depend on that income for their livelihood? I was on the receiving end of a large program for a short stint and have been watching it casually over the past few years. It's very much a feast and famine way to live, and you need to not only be very skilled, you also need to be dedicated to the effort and be very disciplined with your money. A $20K chained RCE looks great on paper until you're trying to live on a constant diet of clickjacking and IDOR bugs for two months straight.
I would caution anyone thinking about this to do it as a side hustle for at least six months if not a year to test the waters, understand the subculture a bit, and take a few rounds on the roller coaster.
Yes the ones I know do it full time as their only income AFAIK. Most do live in low cost of living areas, none of them that I know are living in places like San Fran :)
I would say only 10-15% of our reports were from folks in the USA, and I don't recall any of them being full time. The dedicated folks were mostly from Eastern Europe and the Middle East... I'm guessing that has changed a bit over the past few years.
Shubham Shah got $80k in 120 days (that would be $243k in 1 year) while additionally working a full time regular job. That's probably the very top end, and he did have to sacrifice his mental health for it.
I want companies to be held liable for their breaches, especially when they are negligent or easily avoidable. I currently have FREE CREDIT MONITORING x3, as my PII has been exposed by multiple reputable companies plus the government. Monitoring doesn't punish them for leaking my data. It doesn't even (in my mind) give them enough incentive to make it much harder to lose. I'd be happy if it were encrypted, both at rest and in motion, and regularly tested to make sure it stayed encrypted.
>Just last week, Capital One Financial Corp. disclosed that personal data of about 100 million customers had been illegally accessed by a Seattle woman, possibly one of the largest breaches affecting a U.S. bank. The firm’s shares have fallen 8.9% since the intrusion was revealed.
Misleading reporting. The whole stock market went down since then, and banks particularly so.
Yes and no. The price many cybersecurity pros want, is to be able to live in a place of their choosing and work remotely. Many will even take lower salaries than the market would dictate where the company is located to do so. While this is slowly improving, a majority of companies are still not willing to accommodate this in my experience.
I'm studying cyber security in Germany. It's the single best decision I've made so far. The work opportunities seem endless and I love working in the industry.
Most of my fellow students simply decided to go into this field because of the high monetary compensation and expect salaries starting from 60k€ upwards with a Bachelors.
The last time my work password expired and I had to increase the number at the end from 7 to 8, I asked our 6-figure-salary cybersecurity pro what is the point of password rotation when all it achieves is that regular users stick it on post-its to their monitors or store it in draft emails in Outlook.
I've been interested in learning cyber security for a while, but I can't help but feel that if there were a successful attack, the fallout and pressure would be intense. Is that a real thing to worry about, or is my imagination getting away from me?
There are all hands on deck situations, but you just do what you're good at, figure stuff out, and fix it asap. Usually it's a team effort. Not awful. If you're interested I really recommend it.
Title is misleading. It should read “Cybersecurity pros who graduated summa cum laude from Ivy League universities with dozens of contacts in the C-levels of Fortune 500 companies and are under 30 can name their own price”. Truly unique times we live in.