One thing I have learned as a sysadmin who has had the privilege to see the inside of hundreds of companies, from medium-sized law firms to F500 oil companies:
There is a lot more incompetence than you would ever want to believe, and it's not always where you think. I've traced most of it to a failure of connection/communication between IT departments and C-levels/boards. The CTO/CIO and the person immediately below them (and the person immediately below them) are the "buck stops here" people for these kinds of issues, but often are either one of two types. 1) Too much MBA, not enough tech. 2) Too much tech, not enough MBA.
Half of the devs from there that I was in contact with were not capable of:
- googling a solution to a problem efficiently. When they hit a wall, they turned to me with an empty look, like they were lost.
- reading an error message to troubleshoot. A stack trace is an utter mystery.
- using the UI of their laptop effectively. Some can't even Ctrl + S to save; they look up the "save" entry in the menu.
We are talking about people writing code every day, in several programming languages: Fortran, C, C++, Java, Python...
Because I'm a freelancer, I don't care. I'm paid extremely well to be very nice to them and solve all their problems.
But I'm very glad I don't have to be held responsible for anything those people end up putting in production. And I have no reason to believe it's different in their security department.
However, and this is a good lesson to all of the geeks like me who think work is about doing the right thing: the output they produce is good enough in our society. Its cost/value hits the sweat spot. Business is not about doing things right, it's about being profitable.
If you have one scandal a year, but it costs you less than making sure you have a secure system, and you are not legally challenged, then you are golden.
In fact, the chances of having even one scandal are very low. The actual risks of failure or attack are low. And the consequences in case of a crisis are low too. People don't care that much about privacy, cyber-security, etc. And policy makers won't enforce their laws anyway, at least not to any extent that will endanger the company.
So if the software allows people to do their job IRL at a reasonable price, under an acceptable deadline, good enough.
I'm also a freelancer, and I've also said exactly this. I was happy to profit handsomely from the incompetence and apathy of my clients.
But in recent years I realized I was wrong. It doesn't matter how well it pays, I feel like I'm wasting precious chunks of a very limited lifetime being a disciplined and methodical janitor for my clients.
Now it's more important to learn, to stretch, or to contribute to a more worthy cause than just getting paid well to do shitwork.
Time well spent is worth far more than anyone will pay for it.
The unintended side effect of prioritizing learning over invoicing is that revenue now seems to take care of itself. It's all a bit Zen.
From an ethical perspective (also a freelancer), I think it's my job to warn my clients when I think a particular idea won't work and help guide them to either abandon it or find a better idea. It is, after all, why they hired an expert -- presumably I know more about the specialty than they do so why wouldn't they expect that their money is paying for my honest advice?
If they insist on doing it the way they want, I am happy to admit that they may well know more about the business than me and do it anyway. But I'd rather have a reputation for steering my clients in good directions with both advice and direct technical help. That way people will be confident I'm not trying to take them for a ride. Plus I get better technical specs out of it and clarity around what needs to be done.
One extreme example: there's this homeless guy who is a retired, very good teacher, with a way of both teaching kids and correcting problematic behaviors. Unfortunately, he doesn't advertise and he doesn't up his rate ($20/hr IIRC). It's a shame.
I used to work with a brilliant chip designer who couldn't find the start button on a Windows machine. We all suck at something. Personally, I think CSS is the devil and we should nuke it from orbit.
True, I have a Ph.D. and am very skilled with electronics and embedded programming. If you handed me an iPhone I would have no idea how to read text messages (actual situation).
Same with the people I help technically at work. They're all brilliant scientists. They get confused by the difference between VGA, DisplayPort, HDMI, and DVI. Or get extremely frustrated when a button on the UI moves.
I think software developers don't quite understand how big a deal it is to a 70 year old when the button to do something moves. Probably a quarter of my day is spent just figuring out how to reconfigure things to their liking, or else spending an hour retraining them because of some unnecessary UI change in Windows 10, after which they will still forget and ask for help again.
God forbid you break apart an application into multiple programs or have online activation or a license server. I think I hear at least a daily rant about how you can't just buy software anymore and now you can only rent it for a bit.
We have versions of software that are 13 years old because the publisher switched from an unlimited permanent license to a per-seat per-year license model. Rarely worth it when the instructors get confused by new software anyway.
> They get confused by the difference between VGA, DisplayPort, HDMI, and DVI
Back when there were RS232 (aka 9 pin) connectors on PCs, my dad's computer had two male connectors, one CGA and one serial port (I think it was a serial port, but I'm not sure, as those connectors were usually female). I took the VGA cable and accidentally plugged it into the serial port. When I turned on the computer, I heard the startup chirping noises, the screen was black for a few seconds, and then white smoke started pouring out of the power supply. I turned it off REAL fast :) Somehow the computer still worked after that.
VGA is DE-15 (3 rows), DE-9 (erroneously called DB-9) is 2 rows. ;) Interestingly, VGA only really needs 6 pins to operate: R G B VSYNC HSYNC & GND, and monochrome only needs 4.
I don't see how that's possible without really crushing it in there. Also, CGA & EGA were the same connector as serial (DE-9), which would've been easier to confuse.
It could've been worse: I knew a guy in high school who plugged a parallel printer into a Mac Classic's SCSI DB-25 (the same physical connector as a parallel port, female on the computer; DB-25 serial is a male connector on a PC) and baked it into "apple pie" with that "lovely" magic smoke aroma.
"I think software developers don't quite understand how big a deal it is to a 70 year old when the button to do something moves."
My 84-year-old mother was technically competent. She currently has mid-stage dementia. Her long-term recall remains impressive, but she is apparently unable to learn new skills or habits.
Every software update is cleaving a few more things from her life.
My siblings and I thought upgrading her to an iPhone was a good idea, some years ago. Initially, sure. But now I wish we had a snapshot of her tech stack from her early 70s, and found a way to keep that working.
My mother, over 90, uses the exact same arrangement which I set up in the mid 90s. Linux, mutt, emacs, fvwm customized to be very simple. Whenever hardware stops working I just need to transfer the setup to a new machine and everything remains identical.
It would've been impossible to do this with commercial software as basically nothing has a 25+ year old support life. Because everything is open source, nothing needs to change.
I feel the same.
A friend of mine asked me to take a photo with his iPhone XR a while ago. I couldn't figure out how to launch the camera app from the lockscreen.
The gesture controls Apple implemented starting with the iPhone X are also very confusing for an Android user, even though I also use gesture controls on my S10.
But in the end, you'd get used to it as you did with Android.
It might. But the Android phone manufacturers tend to build their own interpretation of the stock Android features. The main factor that confused me on the iPhone was the lack of a back button like I'm used to on my Galaxy.
True. And I'm sure most devs that I mention in my comment actually get the job done. In their own way.
I have paired up with some terrible programmers, with bad coding habits, weak knowledge of their ecosystem, and no capacity for any kind of architecture design, but who eventually did better than me because they worked way more and were more persistent.
The idea that bad code will waste productivity doesn't matter that much if one person is willing to work 30 hours more than you every week, including the time spent fixing the mistakes they introduced by writing said code in the first place.
The level of complexity involved in developing even a basic CRUD solution these days requires knowing a tech stack going from the DAL through networking and protocols to the front end. Just getting all that shit to work and being able to reason through it to get something working requires years of experience.
If you can produce something working that satisfies the requirements, you can't be terrible in my book, no matter what I think of your style.
Something that works & satisfies requirements may still be terrible to maintain, expand or debug. That's the real difference between a well / badly developed app.
Agreed on the levels of complexity in basic apps nowadays.
I wonder if anyone's tried to write a language primarily for manipulating the DOM that's less... "quirky" than JavaScript? With WebAssembly being fairly well supported these days I suppose someone with a background in language design could write an interpreter in C for such a domain-specific language and target WASM.
Of course, people would say "what's the point" when JS is a thing and widely supported. I wonder if there'd be a) a compelling improvement in performance and b) a compelling improvement in reliability if we just wholesale replaced it based on lessons learned? I reckon even just keeping JS and eliminating implicit coercions would be a huge improvement (as well as maybe automatic semicolon insertion) and reduce the debugging times considerably.
> I wonder if anyone's tried to write a language primarily for manipulating the DOM that's less... "quirky" than JavaScript?
I don't know about "less quirky", but Microsoft tried with VBScript. Google tried with Dart. Initially, Dart was meant to be interpreted by the browser (not transpiled to JS). I'm sure there have been similar, smaller projects, but Javascript-in-the-browser has momentum that's hard to beat.
Yes. Style with inline tags; you're generating your HTML anyway. Use tag names that describe what they do rather than "semantic" tags that aren't. Rather than agonising over which polyfills you need to get flexbox support, do table-based layouts that render reliably and quickly in any browser. People went crazy for the <video> tag in HTML5, but <embed> achieves the same result with a nicer UX to boot. What's not to like?
Just the opposite. Imagemaps suck because the user can't resize their browser and have the page reflow appropriately. Too many modern styled websites view that as a desirable feature.
> - reading an error message to troubleshoot. A stack trace is an utter mystery.
The number of developers I've worked with who handle broken builds with "The build is broken! Why did you break it?" instead of "I shall now proceed to read the error message which will helpfully tell me precisely why the build is broken and how to fix it" has always astounded me.
I've yet to find a way to handle error messages that will actually convince devs to read them. All I've found to date is apologizing for an unclear error message and offering to help with anything they had trouble understanding.
As someone who was part of the small team that used to do our company’s builds before I automated them, I deliberately would not bother reading stack traces and trying to fix things myself, but would send the build back immediately to the dev.
1. It took too much of my time. Since I was doing this in addition to my dev duties, it was not something I could afford to spend time on.
2. When I first started, I would resolve the issues myself. But I soon realized the builds were usually broken by the same suspects. Sending it back to the devs (and also requiring them to buy a box of donuts on breaking the build) put a quick stop to this.
3. Builds were usually broken by people checking in their code in a hurry right before they were done for the day. Calling them and having them fix it after work (as opposed to me having to sit additional hours after work fixing something they broke) very quickly put an end to the practice of checking in your code in a hurry and leaving for the day because you had a dinner to make it to. I don’t want you to miss your dinner, but if it’s important enough that you can’t be bothered to make sure your code is working then you can also wait till the next morning to checkin your code.
As a developer, 2 and 3 are basically feedback. If you check in in a hurry and no one ever tells you it caused a problem, you will slowly, unconsciously start believing yourself a god: I never cause a problem with this!
Except I do. And as with anything, feedback takes you back to reality.
Plus you learn where you tend to make mistakes and thus have a chance to study a bit more about that area and learn how to not make them.
Doesn't the development happen on a different branch? Of course I push before I go home: if my machine breaks I want to lose a few hours, not a day of work as well.
It took me by surprise too, but now when I give a training, I make sure I include how to:
- read a stack trace
- find things in the doc
- use the debugger
Most people signing up for my __advanced__ Python trainings (yes, specifically requested to be advanced), professional devs, don't know how to do this.
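For what it's worth, the level I'm talking about is roughly this (a minimal, made-up sketch; the module and function names are illustrative):

    # buggy_report.py - a deliberately broken example used in trainings to
    # practice reading a traceback from the bottom (the error) upwards.
    def average(values):
        return sum(values) / len(values)   # ZeroDivisionError when values is empty

    def monthly_report(rows):
        return average([r["total"] for r in rows])

    if __name__ == "__main__":
        monthly_report([])   # ZeroDivisionError: division by zero

    # To step through it rather than staring at the trace:
    #   python -m pdb buggy_report.py
    #   (Pdb) break monthly_report
    #   (Pdb) continue
    #   (Pdb) next / step / p rows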
Again, I try to be extremely nice about this, because nobody likes to feel inadequate, and I'd rather have people happily learning this and improving their life and work.
It takes a lot of courage, if you are 40 and have been in the field for two decades, to admit you need to go back to this.
But it's not what I was expecting from the industry.
My guess would be that a lot of coding is autodidactic, i.e. there is a lot of self-learning, trial and error, etc. This means you learn a lot of bad habits, or develop gaps that a formal training course would avoid.
So when you get to something that is complicated and has the potential for deep strategies and shortcuts for efficiency (e.g debugging), it's no surprise that experienced coders might want a refresher.
Just today a junior on my team (who is also new to my team) complained to me that a set of docker containers he was trying to spin up were "not coming up." I probed him a bit, he sent me a screenshot. I shit you not, "ERROR: Write /some/path/on/disk: no space left on device." It's pretty incredible. You can lead a horse to water, and all of that.
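For the record, the first check that would have pointed straight at the problem is a one-liner; a minimal sketch (the path is illustrative, whatever volume Docker writes to on that box):

    import shutil

    # "no space left on device" usually means exactly what it says: check free
    # space on the volume before digging into Docker itself.
    usage = shutil.disk_usage("/var/lib/docker")  # illustrative path
    print(f"free: {usage.free / 1e9:.1f} GB of {usage.total / 1e9:.1f} GB")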
That seems like the kind of oversight one expects from a junior dev just joining a team; that's where one's job as a senior dev, pushing them through the process, comes in.
> The number of developers I've worked with who handle broken builds with "The build is broken! Why did you break it?"
You have it easy, the devs here in Cardiff push code that doesn’t even compile locally, then start howling that “Jenkins is down!” when the build fails. Or even just “the server is down” with no other details. Like I just telepathically know which of our 5000 VMs is “the”.
As someone outside the software development industry, but with experience programming for various reasons... I'm dumbfounded by this. Is this the result of hiring code bootcamp graduates instead of those with bona fide CS degrees, or what?
I don't see how an academic background would help you avoid this; it's a skill you're far more likely to learn through practical experience. I don't see bootcampers as being particularly vulnerable to these gaps in knowledge.
My experience learning computing in school was going through large, months-long projects where later stages built on earlier stages, but the entire project was thoroughly documented and paint-by-numbers as long as you followed the directions exactly. If you came up with a valid but undocumented solution, you would break the assumptions of the documentation and forever be hacking the program to get it to work, which was still possible, just time consuming, and nobody knew how to help you. The end result is students who can follow docs without fucking up but who also don't know how to fix things other than by rolling back to a known good version.
I've found these sorts of problems to be very difficult to solve in academia due to the need to standardize assignments and grading, yet a few months doing an original project will infuse you with troubleshooting mojo.
Completely not my experience at Georgia Tech. You were left to your own devices, had to learn sensible debugging and logging in order to complete most tasks in a time efficient manner, and your time was crunched. It probably depends, but I don't see how you can get that experience in a web dev bootcamp
> I don't see how an academic background would help you avoid this; it's a skill you're far more likely to learn through practical experience. I don't see bootcampers as being particularly vulnerable to these gaps in knowledge.
Agreed. I was told, and later had to tell others something to the effect of "okay, so what did the error message say? did you search that?"
Another reality check is that fixing a build I broke is typically quick, while fixing a build broken by somebody else takes a lot of time. Those errors are rarely exact enough to let you fix the issue in 5 minutes. Occasionally a stack trace leads to a quick null-pointer fix, but it is tricky, as the real cause can be deeper (the null should not even happen) and the quick fix just hides a larger problem.
Moreover, having the person who made the bug fix it is just good practice. Feedback, basically. When people know about their own errors and have to fix them, they learn more about the errors they make.
That's a bad bias you have. I could handle coredumps before I started CS. Also CS does not teach you efficient programming - that's a different course.
I'm actually generally in favor of boot camps, since I myself have a self-taught CS background with a civil engineering degree, and I actually know quite a lot of computer science. However, there's something to be said for the additional time in and critical thinking that you might get from a CS degree. Then again, some people on here seem to be indicating that their degree didn't make allowances for struggling through things or was "too academic" in nature.
It perplexes me that you could get through a 4 year degree without learning how to read a compiler error message though. I'd be less perplexed at this happening in a super condensed and streamlined boot camp though.
My experience: through the whole master's degree I had to write ~6 working programs. Only 2 were of non-trivial size (a compiler and a pseudo-physics simulation). Some were group efforts where someone else could easily pick up the debugging if needed. My 5-year degree definitely didn't care about debugging, and I think people could get through it with next to no programming skills.
Probably because no one was spoon feeding you or holding your hand. Plus the fact that people who can teach themselves are probably a little more driven or competent to start with.
I would guess autodidactic learners are more likely to pick this up than those who mainly learn through formal education. I don't think stack traces or debuggers were ever really covered in my CS program.
Then again, neither were the languages you were expected to do your homework in. I think there was one "Pascal language" class no one in the major took. The faculty cared about the Computer Science (Math) underpinnings; the language du jour was just a speed bump to help separate the wheat from the chaff.
My program had a pretty good layout as far as languages. I think the standard was 1.5 classes of C, 1.5 classes of Java, then pretty much everything after was in one of those languages.
The only time I felt students got shafted was a Web Dev elective. People who weren't already pretty experienced with JS had a really hard time (we were on the quarter system, and an HTML/CSS/JS + React/Redux + Node stack is pretty ridiculous to cover in one quarter).
No, it's human nature. Do people write code that will make error messages meaningful or not? That's something you see in type and variable names, filenames, warning hygiene, etc.
If the expected cost of decoding the error messages includes a lot of reverse engineering of crap code, people won't bother.
It was a major, major issue in early C++ template metaprogramming. You couldn't give good names to some of your constraints so the errors were nightmares.
The oil and gas developers I worked with who were some of the most incompetent people I've ever known had fancy masters degrees (at a minimum) in computer science.
It's not so much that they'd be directly taught in a CS program, but more that I'm surprised you'd never have bothered to pick up that skill set, even accidentally during a four year degree.
Apart from comedic value, blame doesn't fix anything.
I resisted joining a startup that exited for 8-figures because one of the devs was emotional/blame-oriented and another one of the devs, a female, was too into me.
I'd guess a huge % of engineers don't know what MD5 is. It doesn't exactly come up that often outside of being "some random string that is on download pages now and then."
Heck I know what it is and I've only used it a few times outside of copy/pasting stuff w/o needing to think about the underlying details too much.
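For anyone in that boat: the "random string on the download page" is just a file checksum you can recompute yourself; a minimal sketch:

    import hashlib

    # Compute the MD5 digest of a downloaded file in chunks and compare it
    # to the checksum published on the download page.
    def md5_of(path, chunk_size=1 << 20):
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # print(md5_of("some-installer.iso"))  # compare against the published value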
It's one of the reasons I favor frameworks over libs or micro-frameworks, especially for beginners.
E.g., I tell beginners to use Django, not Flask.
Because people usually don't have the skill or knowledge to make proper design decisions. They need something to guide them, otherwise the project architecture will be terribly wrong.
Last year I worked on a Flask site where the devs stored passwords hashed as unsalted MD5. And another one with an API that sometimes returned JSON, sometimes plain text. And this year, one that, to be deployed, required you to run a build process on the prod server, and defined the upload folder and the static media folder as the same directory.
My take on this is: if you want to go the minimalist route, you need to be very good already.
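For contrast with the unsalted-MD5 story above, here's a minimal sketch of what stdlib-only password storage can look like (the iteration count and salt size are illustrative, not a recommendation):

    import hashlib, hmac, os

    def hash_password(password, salt=None):
        # Fresh random salt per user; PBKDF2 instead of a bare, unsalted hash.
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password, salt, expected):
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, expected)  # constant-time comparison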
> My current approach is to raise an issue once, raise that same issue a second time but on the third attempt, bite my tongue and just leave it at that.
The same thing goes for when you aren't a contract worker and management has slightly different (but not necessarily invalid) opinions on what the right thing is. Depending on your point of view, that and what you said are exactly the same thing with a different lens applied. That is, the "right thing" as viewed from the angle of engineering best practices may or may not be the same as the "right thing" for getting some product out the door, even if many of us wish it was.
This isn't an IT thing. The vast majority of people are average. Anyone who thinks they do better than the rest is likely wrong. It requires insane levels of bureaucratic power and will to truly force anything better than average.
Every single business is full of people who are adequate, from the line staff all the way up to managers. Cynical people call it the Peter Principle, but it might just be that an average manager is only averagely good at judging who is good at their job, and therefore average people are just as likely to get promoted.
> This isn't an IT thing. The vast majority of people are average. Anyone who thinks they do better than the rest is likely wrong. It requires insane levels of bureaucratic power and will to truly force anything better than average.
The point is that smaller companies are less full of _people_. Less bureaucracy, less institutional inertia and more relative influence of each individual.
Yeah, if someone’s not getting with the program it’s easier to walk across the building and talk to them than email across 3+ time zones and potentially a language barrier.
You can blame most of it on SEO. Minimally, it's about making sure what you've created is accurately represented. Maximally, it's about tricking people into thinking you're what they want when you aren't. The way that happens is through worse search results.
It's enough to be obedient, not too curious, without too much self-esteem, and able to cope with doing crap work (and knowing it) because... well, management doesn't understand the value of quality.
(sorry, too cynical... but I've been in the field for more than 20 years)
Networking is always a center of awesomeness in those companies. I've seen things like six (at least) layers of NAT for "extra security" or super fast fiber optics that are unused because someone left everything hanging off a backup cable line months ago and forgot. In the latter case it was discovered after they spent weeks tweaking QoS settings to make it go faster to no avail.
Not to mention the proxy settings nobody knows how to set up for their particular IDE/cmdline tool/browser and that block the one thing you need to access.
I disagree that you are golden if you only have one scandal every few years. You can have zero scandals and still be screwed. If a company has its valuable IP stolen and is then undercut by competitors that never had to spend on R&D it can be a significant loss of revenue.
I also disagree that the risk of an attack is low. An F500 company is the sort of target that can end up in the sights of many different threat actors.
> Business is not about doing things right, it's about being profitable.
Because everybody seems to just read past this and think "oh yeah, yeah that is how things are", I'm gonna say it: That is terrible.
I mean, it's your business whatever you decide to do, but I disapprove.
What gave you the idea that it's ever okay to decisively not do the right thing? Especially when the reason is money. Sorry but if your reasoning is "I'm not doing the right thing but that's okay because it is profitable" then you are actively ruining the world. You get no respect from me for that.
> Business is not about doing things right, it's about being profitable
I find this line particularly insightful. It was the biggest lesson I learned in my transition from academia to industry. No one cared that my work was correct and on time, they cared that the vendor we used caused us to miss the deadline and because I was in charge of the overall system, it was still my fault. A tough lesson, but a good one.
Not sure if "sweat spot" was an intentional mistype of "sweet spot", but given your description of their competence level I read it as a word play on sweating due to the nervousness of their code being ready to blow up, vs. "sweet spot" meaning the good balance between cost and value.
> The CTO/CIO and the person immediately below them (and the person immediately below them) are the "buck stops here" people for these kinds of issues, but often are either one of two types. 1) Too much MBA, not enough tech. 2) Too much tech, not enough MBA.
1) Knows how to get people to do things, but doesn't know what needs to be done.
2) Knows exactly what would fix security, but doesn't know how to get people to do those things.
Eh, while this happens, it's getting rarer and rarer. Execs know that if a cybersecurity issue is brought to them they kinda have to take it seriously (in 2020, anyways) or be the next Experian.
What exactly happened to Experian? Their stock is up almost 100% since the breach announcement, and no one went to jail. If I were an executive looking to Experian as an example I would conclude that the market as a whole does not care about security.
It was Equifax that had the data breach, not Experian. Equifax was fined $700 million. Experian benefitted from the breach though since Equifax provided Experian credit monitoring services to those whose data was released.
Equifax stock has gone up since the 2017 data breach, though, because consumers are not customers of Equifax; its customers are banks and lenders, who are not concerned with data they don't control. If hackers had broken into Equifax and changed people's credit scores, the banks might have been a bit more concerned.
Experian is a pretty unique case, because the people whose data they lost (you and I) don't have a choice to not do business with them. And the people who actually do have a choice (businesses) realistically don't have a choice either.
Well, risk being the next Equifax, not necessarily BE the next Equifax.
If taking on that risk gives someone a 90% chance at a massive comp boost and a 10% chance at a more negative outcome where they're likely still well compensated, many execs would consider that.
Something that I’ve seen as a root of a lot of problems and inertia at big-co IT is that there have been a couple of cycles where people were internally recruited from business to IT: the mainframe era and, more recently, the client/server era.
Nothing wrong with recruiting internally per-se, it’s more about the fact that many lacked basic knowledge.
I’m painting with a big brush here, but I’ve seen effects of this throughout my so called career.
- 40 years ago, career paths opened up for COBOL programmers.
Knowledge was scarce and security was in many ways non-existent.
- 20 years ago, guys got shifted to IT because they knew how to fix the printer.
Client/server knowledge was scarce and security non-existent.
You breed a segment of IT workers, many of whom are in some ways clueless, but it's big business. It's enterprise IT.
Massive breaches, slow to change, and downtime by the buckets. Tickets churning along in “automated” workflows.
Cue the outsourcing strategies!
Might seem a sane choice considering the above, only... it's difficult to outsource something you're not in control of.
This obvious lack of control has led to the ITSM field with its CABs, ITILs and other acronyms, all designed to lull everyone into a false sense of security.
I could go on, perhaps write a book. Anyway, I totally agree with what you’re saying, just wanted to share my perspective.
And this is why IT Audit and IT Security came forward, because IT staff (1st line) will do IT stuff and won't take into account design effectiveness or operating effectiveness for controls/processes.
IT Risk (2nd line) is mostly non-IT people that try to convince IT people "what is right". That never works. One of the advantages I have when I work for the 2nd line is that I've been the 1st line and I know the shit the IT staff have to deal with, and I don't pretend I know better, I do know (((EDIT: I do know exactly what they go through because I've gone through that myself))). And my "risk mitigation advice" is actual, tangible, and comes with real examples.
There is also the 3rd line (internal audit), which in most orgs completely misses the mark: ZERO IT knowledge, probably former Big4 who have never seen a console in their lives, wearing a police hat and barking orders.
No wonder that the IT landscape is suffering.
I also understand that COOs/CEOs don't allow CIOs/CTOs the time to have a freeze period, to STOP running forward and give them the time to pause, breathe, think, repair. You can't be making and perfecting at the same time. Nobody has the resources for that, so it's all accidents waiting to happen.
I'm not fearmongering, I've just seen enough crap on IT systems (mainly at LARGE corporations) that has made my skin crawl.
It's not just incompetence; what we are seeing is far too systemic to be the fault of mere individuals. It has to be rooted in the way modern business culture deals with IT.
Almost every real-world security post mortem I have ever seen ends up being about the untimely application of patches, due to the business either rejecting downtime/changes or simply not having the tools and staffing in place to keep up with changing vendor recommendations and security updates.
And despite every evidence-based report placing the blame for our current mess on "systematic maintenance failures", the only thing we hear from the MBA mills and business leaders is a call for more snakeoil products that can do what nobody has yet demonstrated under real-world conditions: add security to an insecure environment.
This is not incompetence/ignorance, it's denial, likely caused by a "prisoner's dilemma" explained elegantly in H.C. Andersen's "The Emperor's New Clothes", where nobody wants to be the first to admit that they aren't keeping up with best practices for fear of ridicule or worse.
I was a sysadmin from age 17 and a client-facing, on-prem AWS consultant for F200s and startups in the early days.
A lot of big companies outsource their IT to the cheapest vendor, who, in turn, outsources roles onshore/offshore. There's not necessarily incompetence by virtue of people being from elsewhere in the world, but there is plenty of incompetency, second-stringers and lack of accountability because of all the bureaucratic layers and indirection in the responsibilities. As such, technical debt, confusion and substandard work thrive because of all the vampiric technology "ticks": consulting companies who maximize profit before customer value.
The way out is for companies to directly hire fewer but better technical employees, embrace employee ownership, and stay off the IPO bandwagon, given the perverse motivations of publicly-traded companies.
Oh my, yes. I've had very similar privileges, and it's almost universally a complete shitshow. It doesn't really ever matter what the age, size, skillsets, etc, of the company are, either. It's just.... all bad.
Example: one of the first questions I try to ask when starting a new job is, "Can I have an accurate list of the physical hosts and the IP of the management NIC?", and the answer is usually some form of, "We don't maintain a list like that, you'll have to grab everything from [pick two]: (DNS|Spreadsheets|Monitoring System|Configuration Management|nmap|ARP tables)." Then, of course, the lists don't match each other, so literally no source of truth exists for what hosts are on the network. Even for production! It's truly maddening.
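The first thing I usually end up doing is just diffing those sources against each other; a rough sketch (file names are made up, any two exports will do):

    # Compare two host inventories (say, a DNS export vs. the monitoring
    # system's host list) and show what only one side knows about.
    def load_hosts(path):
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    dns = load_hosts("hosts_from_dns.txt")                  # illustrative
    monitoring = load_hosts("hosts_from_monitoring.txt")    # illustrative

    print("only in DNS:        ", sorted(dns - monitoring))
    print("only in monitoring: ", sorted(monitoring - dns))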
Not a pun - OP's talking about these[1], which essentially give an IP address to a serial port, often internet-facing. The joke is about how clueless management is about their infrastructure.
For any of these multi-billion dollar organizations you have worked at or have sufficient trustworthy information about, do you think any of them would be able to withstand a competent pentesting organization with a $1 Million contract to breach their systems? As a corollary, if they implemented all of the suggestions of a type 2, technical CTO/CIO, do you believe they would be able to withstand such a pentest, and if so what would you estimate the cost to totally compromise their systems be?
All of my outsider knowledge indicates that none of the multi-billion dollar organizations can protect themselves from adversaries 1/1000th their size, and I would like to know if this is consistent with the knowledge of people with more direct firsthand experience. Thanks.
I'd venture to guess this is almost a natural law. Look at the pathogen/organism relationship in life. A small attacker with low overhead can cripple a large complex organism. That's been evolving for billions of years without resolution. In essence, I imagine that security vs cost is an exponential curve with security being the independent variable, and cost the dependent variable.
At the lowest levels, this is great because there's some very easy and cheap fixes to get large gains in security, but at the high end, each incremental gain increases in cost until it reaches beyond the realm of feasibility.
Air gap all your computers? Cool, you just killed productivity which massively increased costs. Actively supervise every user? There's some niche systems that probably require this level of security, but it's rare. Even then, I don't think such a thing as perfect security exists.
I don't think any of them could withstand a 50k contract to breach TBH. The only ones that really could generally tend to be DoD/E related.
That's not necessarily the point though, in an era of APT. I like to say breaches are inevitable, the real question is how fast do you know about it so you can mitigate/respond? HIDS/NIDS, auditing, monitoring and logging are the core of this imho.
Perhaps this is less a knock on those people than it is with the position of CTO/CIO itself. A role that only works effectively when it's filled by a special unicorn candidate is a broken role.
I can think of some reasons why this might be:
- It takes a particular kind of personality to ascend to a C-suite role, and that personality isn't a good fit for solving this particular problem
- Concentrating responsibility for this problem in a single C-suite role lets the rest of the C-suite ignore it as "somebody else's problem"
- Concentrating responsibility for this problem in a single C-suite role serves mostly to provide a handy scapegoat when things inevitably go wrong
- This is a cross-cutting problem that can't effectively be siloed into a single role, even a C-suite one; unless the CEO/board also buy in, the CTO/CIO will lack the authority required to override objections or inaction on the part of other officers
They make the argument that every C-suite employee should be tech literate, and if that's the case, then all the "tech decisions" don't fall to one bottleneck on the team, and it isn't that one person's job to persuade everyone else of something they may choose not to understand.
The organizational structure itself perpetuates silos and holds companies back from evolving and growing.
"One of the key reasons the C-suite has not yet developed tech maturity is the presence of the CIO. Executives feel that they can deflect technology responsibilities to the CIO in part because the very existence of a technology executive provides an excuse to do so (why else is he or she there?) In the same way that no self-respecting firm would hire a chief quality officer today (quality should be everyone’s job), firms risk perpetuating the technology equivalent of a chief quality officer when they hire a CIO."
"Even if the CIO’s role endures, expect it to involve more facilitation, coordination and strategic planning rather than implementation and operation, while the rest of the executive team handles day-to-day IT decision-making." CIO should own the process of governance building, and collaboration enablement. Figuring out how to facilitate communities within the company coming together, and supplying them with the resources they need to succeed. Just like any other sort of resource planning.
> There is a lot more incompetence than you would ever want to believe
I work for a company that you've definitely heard of, and a lot of people have actively used our product. We're a very big company. We have domain admins browsing the public web via interactive RDP sessions on the domain controllers themselves. That is one of many horror stories of the security at this company.
> There is a lot more incompetence than you would ever want to believe
I'm currently in ops/sysadmin at a large org. Yes, the incompetency you see is sometimes incredible: I have helped developers install Visual Studio, shown them how to use RDP, and had to explain how saving to local disk is different from saving to a networked folder... I can't think of better examples at the moment, though they exist and are plenteous.
Thing is, I am a hundred percent sure these same people could have turned the tables on me and been aghast at my lack of knowledge in many disciplines.
There are more things to know in this world than any one person can know, and it is thinking you are beyond this that is truly delusional.
Even with a "verified expert" in a subject, whatever that means, we can find ample holes in their knowledge.
I don't think it's a "Tech vs Business" problem. Tech is just another part of the business. Every part of the business may have bottlenecks; sales, marketing, support, finance, etc. C-level has to learn to identify the bottleneck in the value chain and improve it until it goes away. You don't have to become an expert in every field to do that, but you do have to ask a lot of questions and use an improvement loop.
It's the same whether you're in manufacturing, distribution, health care, etc. Yes, you have to be able to learn a bit about the field. But just being a domain expert ("knowing about tech") often does not substantially improve the overall output of the value chain, as experienced engineers learn eventually.
While this is completely true, many people on the 'business side' don't see it this way at all. Thankfully this is changing in the culture, but at a glacial pace.
"lets postpone delivery of the system to rewrite everything again from scratch this month because now the coolest tech is X and I really don't want to be stuck using last months technology even though it has 0 impact to users and actually negatively impacts the business case in terms of skillset staffing and operational overhead" for starters..
Yup. Left to their own devices, engineers will chose to do what is “coolest” using the hippest technology stack and the most pure architecture imaginable. And they’ll build something and it will be cool.. to them.... but it won’t solve any business needs or will solve the wrong ones and not deliver as much value as it could have if it had somebody with a good business head on.
...or they’ll go reinventing the wheel and build massive costly systems that should have been purchased off the shelf from a vendor.
Nope. You need a balance. It isn’t enough to just have tech smarts. You need somebody with business smarts to keep things grounded and solving real business problems that deliver value to the customer.
> In March 2019, the Federal Bureau of Investigation (FBI) alerted Citrix they had reason to believe cybercriminals had gained access to the company’s internal network. The FBI told Citrix the hackers likely got in using a technique called “password spraying,” a relatively crude but remarkably effective attack that attempts to access a large number of employee accounts (usernames/email addresses) using just a handful of common passwords.
Pretty bad when the FBI has to step in and alert you that someone has brute forced their way into your servers.
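Spraying is also fairly easy to spot in your own auth logs if anyone is looking: one source touching many accounts a few times each, rather than hammering one account. A rough detection sketch (field names and thresholds are made up):

    from collections import defaultdict

    # failed_logins: iterable of (source_ip, username) pairs from an auth log.
    def spray_suspects(failed_logins, min_accounts=50, max_per_account=3):
        by_ip = defaultdict(lambda: defaultdict(int))
        for ip, user in failed_logins:
            by_ip[ip][user] += 1
        return [
            ip for ip, users in by_ip.items()
            if len(users) >= min_accounts
            and max(users.values()) <= max_per_account
        ]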
Weird timing that Dec of 18 they forced a password reset to most of its Sharefile "customers." (aka including anyone who has ever received a file from someone through sharefile, and accidentally signed up for a service they didnt want.)
“This is not in response to a breach of Citrix products or services,” wrote spokesperson Jamie Buranich.
I want to know if they knew already in December, and if they lied to the public and their customers. Maybe they could argue that "yes, a breach happened, but this password reset was completely unrelated", but that's a load of livestockwash, if that's the case.
Edit: maybe I should read the article. Looks like they were in back in October! Jamie is likely just a sacrificial lamb, who is there so they have a head to roll, but somebody on the executive team should be in trouble for that kind of lie, unless there were government gag orders.
>Citrix’s letter was prompted by laws in virtually all U.S. states that require companies to notify affected consumers of any incident that jeopardizes their personal and financial data.
Excuse my French, but that's fucking bullshit that they are just admitting to this a year and a half later.
> Resecurity also presented evidence that it notified Citrix of the breach as early as Dec. 28, 2018, a claim Citrix initially denied but later acknowledged.
(This is slightly off the topic of this story) Believe it or not this happens often. I work with R&D / high-tech small businesses that have 1-2 patents or some other IP. They provide services to the government but they aren't exactly flush with cash - meaning their infosec programs are slim. So they stay in contact with the FBI and other TLAs (three letter agencies) who will contact them from time to time to say "hey you may want to check your network, we think someone's broken in."
Basically the FBI is the internet police for these infrastructure / science / tech / etc. firms. It's not hard to understand why this information isn't out on the street more.
Yes, but we’re talking about Citrix here, not some random small-business supplier. In Citrix's case it is bad to be in the position that the FBI had to tell you someone was brute forcing your app.
True, and I agree it is bad. But I stand by my original comment. I believe all this happens far more often than we know right now. I predict we'll be finding out in the not so distant future that we've all been targeted and breached. All our data are belong to them.
How are they finding out in the first place? Either they have the capability to watch and decrypt the overall public traffic or they are already inside themselves.
Usually it will be because they were investigating something else and either seized a hacker's device or gained access to a hacker's servers.
The device or servers will then have evidence of the other things the hacker and their associates have been doing.
Sometimes criminals brag about things.
Other times, the compromised infrastructure is used in other criminal activity that gets detected by the next victim, and the law enforcement agencies work their way back.
The NSA will pass information like this to the FBI as well (through the NCIJTF). They usually omit / redline enough information to make it Unclassified.
Apparently this is more common than you'd expect. A company I worked for initiated an overhaul of our IT security because the FBI turned up in the CEO's office and basically said, "Now we've warned you, shareholders can sue you if you don't act on this." Or at least that's the story everyone in the company was told. It seems plausible, because I can't see any other reason why the CEO would deliberately set fire to the entire development team.
I assume this is why the FBI is mentioned so often in these stories. A lot of companies probably keep hacking incidents quiet, but acknowledge it once the FBI is involved.
It's actually quite common. The typical series of events is something like:
- criminal steals someone's SSN
- criminal uses that SSN to steal an identity
- the SSN owner notices their identity has been stolen and reports it to authorities
- the authorities investigate and are able to trace it back to where it was stolen from (and sometimes even who stole it)
- the authorities notify the company that was breached.
It's more common with larger breaches with more stolen SSNs (and thus more people reporting that their SSN was stolen) because that catches the FBI's attention more readily and makes it easier to trace it back.
txcwpalpha mentions one common path. It is also a pretty standard practice for law enforcement to perform forensics on seized C2 components (sometimes even replacing them with emulators or continuing to operate them for some time in order to collect additional data), so that they can identify and notify victims.
I worked at a big-name university in the IT department for housing and dining. Long before I got there, one of the Oracle database servers for meal-related activities had been pwned for years because it hadn't been behind a firewall and it had a routed public IP address. It was running Windows, so it had accumulated a number of interesting malware, including obscure rootkits with no antivirus patterns. I once booted it up off of clean media, ran some forensics tools, and found a warez dumpsite on it. This box "couldn't be down", so all that happened to it was that it was placed behind a bidirectionally-restricted firewall. It still kept limping along with funky malware because they didn't want to spend time or money fixing it. Sigh. If it were my box, it would've been an immediate disconnection, image hardening, wipe, and reinstall from backups (data-only).
I remember sending some binaries and other deets over to Mark Russinovich at then SysInternals, who's now the CTO of Azure.
How much source code was committed in that span? Time to audit it all. Plus the binaries and other resources that are pulled off network shares at build time. Plus the compilers...
Well, no official government-supported agency has even stepped in to establish a list of norms involving software security, to force large corporations to abide by them.
While the private sector is solely responsible for its own cyber security, and while the NSA wants to keep the upper hand in cybersecurity by holding on to cyber-weaponry supremacy, events like these will keep happening.
Cyber chaos will continue because the NSA is obviously holding massive advancements in cyber weapons. The day the NSA has an adversary that is at least 50% as good as the NSA, you can be sure you will see cyber security standards being passed into law.
If you think about it, having cyber supremacy is a good way to have total power over the world. When you have all the information, you have everything you need to do whatever you want. That sort of describes the US right now.
These norms are already well-known. However they are never followed 100% because they all rest upon a sandy foundation of:
"Don't be clueless."
Social engineering/phishing will always eventually work on someone somewhere in a company with more than 20 employees.
Not to mention the other part, where passwords are allowed to be a vital link in the security chain, yet software vendors like Citrix simultaneously discourage password managers by getting in the way of using them and by forcing perfectly good passwords to be cycled endlessly. The result is people using passwords like Citrix20! (which will change to Citrix21! in 90 days... ironclad security there, guys).
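The rotation pattern is so predictable you can detect it mechanically; a toy sketch of the check a sane policy would need (and which the 90-day rule makes necessary in the first place):

    import re

    # "Citrix20!" -> "Citrix21!": same stem, counter bumped by one.
    def is_lazy_rotation(old, new):
        m_old = re.match(r"^(.*?)(\d+)(\D*)$", old)
        m_new = re.match(r"^(.*?)(\d+)(\D*)$", new)
        if not (m_old and m_new):
            return False
        return (m_old.group(1), m_old.group(3)) == (m_new.group(1), m_new.group(3)) \
            and int(m_new.group(2)) == int(m_old.group(2)) + 1

    # is_lazy_rotation("Citrix20!", "Citrix21!")  -> True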
Isn't this normal? I mean after you break in keeping a low profile and staying undetected for as long as possible sounds like a no brainer to me. I wouldn't be surprised if some APTs were inside some "worst" companies for even 2 years at a time.
Edit: from the Wikipedia page on Advanced Persistent Threats:
> The median "dwell-time", the time an APT attack goes undetected, differs widely between regions. FireEye reports the mean dwell-time for 2018 in the Americas is 71 days, EMEA is 177 days and APAC is 204 days.[4] This allows attackers a significant amount of time to go through the attack cycle, propagate and achieve their objective.
What I find particularly embarrassing for Citrix (although it's only marginally touched on in the article) is the amount of time it took them to close the hole that was in their Netscaler/ADC components.
I mean, this is not a one-man show or an open-source project...
They still have nearly 10k employees and are a big, affluent company. None of their projects should be a one-man project, but even if one were, they should still be able to find enough engineers to work on it in case of emergency.
If it can be done by a small startup with a total of 3 devs in the entire company (and I have seen it done), it can be done by a company the size of Citrix.
thedance is pointing out that we have no idea of the inner workings of Citrix, and how they staff their projects. It's not completely unreasonable to believe a non-core project has minimal staffing levels.
The vulnerability in Netscaler was an old bug in a web server that let you trivially break out of the virtual directory.
So you could navigate to <URL>/vpn/../path and read/write files.
The device is a freaking load balancer. It took the company a month from public disclosure to push out a workaround rule to stop most exploits on some devices, and 6 weeks to patch it. Something went horribly wrong.
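The bug class itself is ancient, and the usual mitigation fits in a few lines; a rough sketch of the kind of check such a handler needs (not Citrix's actual code, obviously; the web root is illustrative):

    from pathlib import Path

    WEB_ROOT = Path("/var/www/vpn").resolve()  # illustrative document root

    def safe_resolve(requested):
        # Normalise "/vpn/../etc/passwd"-style paths and refuse anything that
        # lands outside the document root after resolution.
        target = (WEB_ROOT / requested.lstrip("/")).resolve()
        if target != WEB_ROOT and WEB_ROOT not in target.parents:
            raise PermissionError("path escapes web root: " + requested)
        return target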
Netscaler was a potential spinoff product a while back — my guess is that they stripped the division in anticipation of a sale to make the numbers look good.
I was relieved to see that only internal employee information was impacted. You don't even want to know how many banks, hospitals, and power plants rely on Citrix Receiver for remote desktop access.
I felt like this article was implying that other firms are claiming this is bigger and deeper, with impact to their clients. It looks like maybe the exposure of Citrix's internal networks also impacted their customers but wasn't originally disclosed. Maybe they didn't do enough analysis after the breach to discover this themselves?
If the hack was conducted through account hijacking, then why didn't anybody at Citrix notice? Unless once the hackers got in they moved within the network using as-yet-unknown vulnerabilities. Makes one wonder if there are backdoors in all networking equipment, so the various state security entities can keep an eye on us.
So instead of the old days of having a sign that says "We haven't had a workplace accident in XX days" they need "We haven't had a security breach in XX days".
Not sure if there are more breaches or just more exposure because people started caring. Reading The Cuckoo's Egg at the moment; apparently it was common in the 1980s for military networks to have guest/guest logins, or even ones with no passwords at all.
I hope we can move to a world with ubiquitous two-factor and hardware roots of trust (FIDO2, U2F, etc) across enterprises. That is the only way I see things like this ending.
The core of OSes needs to be treated more like read-only firmware that only gets updated as needed and cannot be overwritten by itself, e.g., send a request to the BIOS to look for a valid public-key-signed image file to be applied on reboot.
Flash is so cheap, 64 GiB mirrored SSD devices should be available for operating system images on system boards. Leave OS images as signed squashfs files on a dumb flash FS like exFAT. Delta updates can be applied by stripping out entropic metadata, patching, and recompressing a previous release to arrive at the valid signature of a complete latest release.
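A rough sketch of the verify-before-apply step (digest check only; verifying the manifest's own signature would use a proper crypto library and is elided, and the file layout is made up):

    import hashlib, json

    def image_ok(image_path, manifest_path):
        # The manifest (assumed already signature-checked) maps the image to
        # its expected SHA-256; only a matching image goes to the update path.
        with open(manifest_path) as f:
            expected = json.load(f)["sha256"]
        digest = hashlib.sha256()
        with open(image_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected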
Mixing operating systems, configuration, programs and user data in together is a recipe for fail.
A common answer to "we depend on this application that only runs on Windows Server 2003" is "well, use Citrix jumphosts to access the insecure app running on the insecure server and you can call the risk mitigated!"
Citrix XenApp, or whatever it's called this week, is a lot better marketed to enterprise and offers a lot more integrations than Windows MultiPoint/Terminal Server. But a lot of it is history: Citrix was a close partner of Microsoft, and so Citrix essentially was RDP in this context for some time before Microsoft decided to try competing on their own. Microsoft's entries have never really caught up in terms of adoption or features; for one, Citrix supports just about every platform there is for the receiver, while Microsoft only has an officially supported RDP client for a couple.
I've sat in many pitches where, ironically, the argument is that Citrix is more secure.
We use multiple desktop products where the modern cloud version was literally just a Citrix cluster per customer, running the same old app. It's absolutely endemic across enterprise products as a "cloud solution".
> Networking software giant Citrix Systems says malicious hackers were inside its networks for five months between 2018 and 2019, making off with personal and financial data on company employees, contractors, interns, job candidates and their dependents.
> There is a lot more incompetence than you would ever want to believe, and it's not always where you think. I've traced most of it to a failure of connection/communication between IT departments and C-levels/boards. The CTO/CIO and the person immediately below them (and the person immediately below them) are the "buck stops here" people for these kinds of issues, but often are either one of two types. 1) Too much MBA, not enough tech. 2) Too much tech, not enough MBA.
Both tend to have pretty similar results.