Directly behind it is the student library, with an echoey, three-story-tall open space [1] (why would you design a library like that?!). When you're there at 2am you can hear the chains dropping and hitting the coffin on the hour, which is not at all terrifying...
> A common pattern in these systems is that there's some frontend, which could be a service or some Javascript or an app, which calls a number of backend services to do what it needs to do.
I think an important idea here is that you should be trying to measure the experience of a user (or as close as possible). If there is a slow service somewhere in your stack, but it has no impact on user experience, then who cares? Conversely, if users are complaining that the app feels sluggish, then it doesn't matter if all your graphs say that everything is OK.
I find it helpful to split up graphs/monitoring into two categories: 1) if these graphs look fine then the service is probably fine, and 2) if problems are being reported then these graphs might give an insight into why things are going wibbly. In general, we alert on the former and diagnose with the latter. Of course, it's nigh-on impossible to get perfect metrics that track actual user experience, but we've definitely found it worthwhile to try and get as close as possible to it.
---
Another fun problem with using summary statistics is that they can easily "lie" if the API can do a variable amount of work. For example, if you have a "get updates" API that is called regularly to fetch updates since the last call, then you end up with two "modes": 1) a small amount of time between calls, so the call is super fast, and 2) a large amount of time between calls, so the call is slow. Now, in any given time period the vast majority of the calls are going to be super quick, but every user will hit the slow case the first time they open the app each day. This results in summary statistics that all but ignore those slow API calls.
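To make this concrete, here's a small sketch with a made-up latency distribution for such an API (all numbers are illustrative, not measured from any real service): 99% of calls are fast incremental updates, 1% are the slow first-open-of-the-day case. Even p99 barely registers the slow mode, despite every user hitting it daily.

```python
import random

# Hypothetical latencies (ms) for a "get updates" API: 9900 fast
# incremental calls, 100 slow first-open-of-the-day calls.
random.seed(0)
latencies = [random.uniform(5, 20) for _ in range(9900)] + \
            [random.uniform(800, 2000) for _ in range(100)]
random.shuffle(latencies)

def percentile(xs, p):
    """Nearest-rank percentile of a list of samples."""
    xs = sorted(xs)
    k = max(0, min(len(xs) - 1, round(p / 100 * len(xs)) - 1))
    return xs[k]

# With only 1% of calls in the slow mode, p50/p90/p99 all sit in the
# fast mode; you have to go out to p99.9 before the slow calls that
# every user experiences show up at all.
for p in (50, 90, 99, 99.9):
    print(f"p{p} = {percentile(latencies, p):.0f}ms")
```

One way out is to split the metric by mode (e.g. tag calls as "initial sync" vs "incremental") so each distribution is unimodal and the percentiles mean something again.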
The flip side is that errors often pollute service latency statistics. If your service is capable of failing fast, for example by instantly returning 503 for all requests when it is overloaded, you need another dimension in your statistics to account for that.
One thing that has always scared me a bit about using CAs for SSH is how you protect the signing certificate. After all, if an attacker gets that cert then they get full access to everything, and can masquerade as anyone. You end up with a choice between a) having lots of SSH keys out in the wild, each with varying degrees of access, or b) having a single cert that lives on your infrastructure but has access to everything. (Not to mention the question of operating the signing service itself: what happens if it crashes? How do you log in without it to sign your SSH key? Falling back to standard trusted SSH keys feels like it somewhat undermines the point of using CAs.)
Has anyone solved this, or got a write-up of some best practices for running this? All I've managed to find are articles about how to run such services, rather than how they fit into the broader security architecture.
Ideally, what I would actually like is the ability to configure OpenSSH to require multiple things to log in, i.e. both that the SSH key is trusted and that it has recently been signed by the signing service. That way gaining access to the signing certificate doesn't help without also gaining a trusted SSH key (it's still bad, but not quite game-over levels of bad). I had a quick look to see if I could hack together a patch to do this, but alas I had forgotten how weak my C-fu is :(
If you have no compliance requirements, you can also just use any PKCS#11 token (with support for non-extractable keys) to secure the key, and set up an air-gapped process on a laptop with a boot CD, etc., to minimize the risk of compromising your process.
> ... what I would actually like is the ability to configure OpenSSH to require multiple things to log in, ...
With OpenSSH, you can require multiple authentication methods to succeed before access is granted.
For example, "publickey,password" to require password authentication after key-based authentication has succeeded. You could even do "publickey,publickey,publickey" to require three different keys to be used!
This has been supported for several years, by the way. See "AuthenticationMethods" in the "sshd_config*" man page.
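As a sketch of what the GP asked for, combining a trusted CA with a second required key in `sshd_config` (paths illustrative; it's worth verifying that your sshd version will count the CA-signed certificate and the plain key in `authorized_keys` as the two distinct publickey authentications):

```
# Accept certificates signed by this CA as one publickey authentication...
TrustedUserCAKeys /etc/ssh/user_ca.pub

# ...but require two successful publickey authentications in total,
# e.g. the CA-signed certificate plus an ordinary trusted key.
AuthenticationMethods publickey,publickey
```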
The main thing I found interesting about this is the idea of trying to break the taboo of working from bed, especially for those who have a disability that makes it hard/impossible to get out of bed some days. After all, this was the reality for a lot of people long before the pandemic. Yes, fine, maybe it's bad for your back or sleep schedule (I personally hate even reading in bed), but it's up to the individual to figure out what works for them.
I really hope that one thing we get from this pandemic is not just that remote work is more accepted, but that there's more understanding and flexibility over people's individual situations (without them having to try and justify themselves). This would help so many people, whether they're disabled, or juggling childcare, or whatever.
To me, all these slogans around security are there to ensure people really, truly, actually think about things before they go against the grain. Is using obscurity as part of your defence always wrong? No, but equally it often adds a false sense of security. Popularising these easy-to-remember slogans helps change people's defaults. Nowadays, if someone sees an attempt at security by obscurity it (hopefully) rings alarm bells and causes them to interrogate it, to ensure that there are also other security measures in place, or that it is otherwise OK. It's the same with "never roll your own crypto".
I find it somewhat interesting that the article uses an example which falls right into another pitfall that "security vs obscurity" is trying to prevent.
> SSH runs in port 64323 and my credentials are utku:123456. What is the likelihood of being compromised?
>
> Now we changed the default port number. Does it help? Firstly, we’ve eliminated the global brute forcers again since they scan only the common ports. ... So, if you switch your port from 22 to 64323, you will eliminate some of them. You will reduce the likelihood and risk.
This is technically correct. However, the author has identified a security concern that he wants to mitigate: brute-force attacks. Now, you could try to reduce that risk by using a different port, which might reduce it by 50%,* or you could fix the issue by deploying fail2ban (or using SSH keys, or VPNs and bastion boxes, etc.), negating that attack vector entirely. There isn't even a usability argument here: making people remember the right port for SSH is less usable than setting up fail2ban. Of course there are tonnes of other attack vectors to consider, but in general, where possible, it's better to "properly" (fsvo.) mitigate those concerns and only rely on obscurity where that isn't possible. If a concern is mitigated then adding obscurity achieves almost nothing, while likely proving more annoying to the end user (like having to specify a port in the above example).
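For concreteness, a minimal fail2ban setup for sshd might look something like this (all values illustrative; the `[sshd]` jail ships with fail2ban, this just enables and tunes it):

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
port     = ssh
maxretry = 5      # ban an IP after 5 failed logins...
findtime = 10m    # ...within a 10-minute window...
bantime  = 1h     # ...for an hour
```

That gets you the actual mitigation (rate-limiting the brute force) without asking every user to remember a non-standard port.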
Now, of course, that's not to say that you should never use obscurity, but if you do then I think it's entirely reasonable to expect a good justification for why it's appropriate. For example, sharing via secret URLs can be easy to justify in some settings, but it equally may not be OK for documents that are really, really sensitive, as it's relatively easy for links to be shared in error with the wrong people.
RE some comments about using obscurity to signal that your deployments would be harder to get into, so that attackers don't bother: I'd genuinely love to know whether that is true or not. I wouldn't be surprised if attackers assumed obfuscation meant that the more advanced security measures hadn't been deployed (otherwise why bother with obfuscation?).
* Based on the Twitter poll in TFA, though for a targeted attack it seems sensible to assume that if port 22 doesn't work they'd try again with other methods.
I did a Maths degree before becoming a software engineer, and honestly I think it's really changed the way I think, just in general. There's something about being given a problem, or a theorem to prove, and grappling with it until you really start to get a deeper understanding. After spending hours on a single problem, sitting there trying various ideas, getting flashes of inspiration, hitting dead ends, grabbing a drink, coming back and doing it all again. Finally getting to the point where it all just suddenly clicks and you realise that actually, if you just think about it in these ways, the solution is just, well, obvious! It's really intensely satisfying; just a three-line proof of "without loss of generality we can assume X, which implies Y, and so clearly Z is true". So satisfying! (Then you realise you still have another nine problems to try to do before tomorrow, oh god...)
Anyway.
To me, it really taught me how to tackle Hard Problems, where you do just sit there making seemingly no progress for hours/days/weeks. When you first start tackling such problems it can feel really frustrating, but with experience you realise that progress is being made as you slowly map out the problem space and get a better intuitive understanding of what's going on. I kinda do imagine it as stumbling around in the dark in an unfamiliar place, slowly groping around, hitting dead ends, then slowly but surely building a mental model of what's around you and how it all interconnects. Once you have that understanding and intuition the problem is often, kinda, easy? Or obviously impossible, and you'll need to make some trade-offs.
Changing the way you think about progress to be less goal-oriented and more about expanding your understanding is, I think, really quite crucial to tackling such problems. It keeps you motivated through the process and stops you from getting discouraged, but it also helps you realise when you've genuinely stopped making progress and should take a bit of a break and come back with a fresh mind.
Most of the time this skill is entirely useless, but sometimes it really is quite powerful. I guess working on Matrix is a bit of a special case, but I would never have been able to sit down and spend weeks trying to come up with a new state resolution algorithm, to pick one example, without that sort of experience. I just wouldn't know where to start, and I'd become demotivated by the end of the first day and likely give up (knowing me).
All of this rambling is to say: I think Maths is really something you have to do. Reading books about it is interesting and great and all, but if you really want a deeper understanding you have to get stuck in, get your hands dirty, and try to solve problems. I don't mean problems where you take that cool theorem you just learnt and figure out how to apply it, but problems where you actually have to come up with ideas and theories of your own. (Now, I have no idea how feasible that is outside a formal setting and without supervisors, but that's really the dream.)
I hope that in some way helps, even if it's probably entirely devoid of practical advice :)
FWIW I've been coding since my early teens and enjoyed it a lot, but when it came to university (in 2009) I had very little interest in doing a CS degree and instead opted for Maths. The two main reasons were: a) I knew that doing Maths instead of CS wasn't really going to hinder my job prospects in any way, and b) CS sounded a lot drier and had fewer options and choices than the Maths degree (in the UK you apply for the course and you generally don't do anything from other courses, so your choice matters a lot).
The first point I think has borne out fairly well, even if it was probably a bit arrogant. Certainly when I'm interviewing grads I'm not actually that interested in whether they did a CS degree (though that might be more because I didn't do one...). We don't really do early-stage training, so we're looking for evidence that the grads can actually code, whether via pair programming, questions, or looking at personal projects, etc.; it's the people who have done it as a hobby that tend to shine there.
The second point is highly subjective and obviously quite personal, but equally, if people know they can get into software engineering without a CS degree then I think they're more likely to do a course that really interests them. After all, if it doesn't affect your job prospects that much then why wouldn't you? There is a fair argument to be made that the industry should be better at hiring non-CS and code camp graduates and doing on-the-job training, but that's not where we're at currently, alas.
If anything, I tend to view CS as the academic arm and software engineering as the practical/vocational arm. In the same way that e.g. law works (at least in the UK), where most lawyers haven't done law as their first degree and instead do a conversion course afterwards (often getting a contract before doing the conversion course and having the firm fund it). Really, it's the classic argument about how much university degrees should be academic vs vocational.
I think this is the crux of it. What we're doing as software developers is not a scientific discipline, it's engineering. So if you want to be a software developer, computer science is the WRONG field for you.
Computer Science is largely concerned about things like algorithms and complexity, theory of automata, and stuff like that. They're doing research.
Software developers care about those algorithms, but we decidedly do not want to be implementing them. I've got libraries, where somebody already took care of coding the hashing or binary-tree-rebalancing algorithms, or databases with the same. There's really no reason I need to be able to explain how quicksort differs from bubblesort.
But what we DO care about is how to gather requirements, how to perform proper modeling and design, and stuff like that. Yet those are classes in the engineering school, and not required of CompSci majors (at least not back when I was in college).
The result is that folks with a CompSci degree are ill-prepared for a career in software development. They never use at least half of what they were taught, while on the other hand, at least half of what they do wind up needing was never taught to them.
I disagree pretty strongly. I wrote lots and lots of practical software as part of my undergrad, and the software I wrote was pretty diverse, including file system drivers, image recognition, data mining / text classification, exploit utilities, etc. Most of the complex theory has been offloaded to graduate school to make way for "practical applications." Plus, I feel like my education set me up to be a little ahead of the job market: data science was a track of my CS program ten years ago, and you'd have left university having built your own little scikit-learn library.
Well said. And THIS is why I have no qualms about not majoring in CS 10 years ago, despite being at a big-name university with a well-regarded CS program.
I love working as a software engineer, and had been tinkering with web development independently since high school -- and it didn't really occur to me to major in CS. Other than both involving working with computers, I didn't see the connection. I had dismissed CS as irrelevant to my interests, too esoteric, boring, dry.
In the course of my current work, there are some things I do wish I'd learned more about, but not enough of them to make me regret not majoring in CS.
It's worth noting that VPN / SSL proxies provide box-to-box (or process-to-box) encryption, whereas native SSL support provides process-to-process encryption. The difference is that if an attacker manages to get access to the box, it becomes easier to capture traffic, since it goes unencrypted between the app and the VPN/SSL proxy. Fundamentally, native SSL support provides strictly better protection than VPNs or SSL proxies alone.
Now, given the context this may or may not be a distinction that you care about, but there certainly are times where you really do care.
(Besides, if I'm running a tcpdump on a box to try and figure out why the network is going wibbly I'm a lot happier knowing all traffic is encrypted and I'm not going to accidentally capture some PII. I've had to tcpdump within docker containers before too, so putting everything in containers doesn't necessarily solve this.)
Yup, if you're using ssh-agent (as opposed to something like gnome-keyring), adding `AddKeysToAgent confirm` to your SSH config should cause a pop-up every time anything requests a key from the agent.
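For reference, a sketch of the relevant client config (applied to all hosts here, though you could scope it to specific ones):

```
# ~/.ssh/config
Host *
    # Keys picked up by ssh are added to the agent, but each subsequent
    # use of them requires explicit confirmation (via ssh-askpass).
    AddKeysToAgent confirm
```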
Isn't the other reason so that adversaries can't tell whether a particular username/email has signed up? This is not so useful for something like GitHub, sure, but it certainly is useful for more embarrassing sites, where users have an expectation that the site won't leak their membership.
So in some ways I've always thought of this as a privacy concern rather than a security one?
Edit: I guess I'm thinking purely of emails where you don't get availability checkers during sign up.
As the post demonstrates, you simply go to the login form to validate the presence of accounts.
Few sites remember to anonymize that, which might be the real PSA. In such a case, if you require an email confirmation anyway, just send the "recover password" email internally, but make it look like the regular sign-up flow.
If you don't require email confirmation, anonymous membership isn't possible (just try to sign up with that account; what is the site supposed to do that looks legit without giving away information?).
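The anonymized flow described above can be sketched as follows (all names hypothetical; `users` and `outbox` stand in for a real user store and mailer). The key property is that the HTTP response is identical whether or not the account exists, so the only place the difference is visible is inside the user's own inbox:

```python
# Sketch of an enumeration-resistant sign-up flow.
def handle_signup(email: str, users: dict, outbox: list) -> str:
    if email in users:
        # Existing account: send a "you already have an account, reset
        # your password" email instead of creating a duplicate.
        outbox.append((email, "password-reset"))
    else:
        users[email] = {"confirmed": False}
        outbox.append((email, "confirm-signup"))
    # Identical response either way, so the form leaks nothing.
    return "Check your email to continue."

users = {"alice@example.com": {"confirmed": True}}
outbox = []
print(handle_signup("alice@example.com", users, outbox))  # existing user
print(handle_signup("bob@example.com", users, outbox))    # new user
```

The same trick applies to the password-reset form: always respond "if that account exists, we've emailed it", and keep the branch on existence server-side.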
[1] A terrible picture of the space, where the back of the clock would be on the left: https://i.pinimg.com/originals/e1/9b/1c/e19b1c997dc06c45e8b9...