Sorry if I'm being thick, but how is this two-dimensional slideset supposed to work? I just want to go to the next slide; should I go all the way down and then right? Or do I need to go up again before going right? Or can I ignore down?
I've seen this being presented before, it seems to be a new thing, but never navigated one myself.
The slides interface is supposed to be for the presenter, really, not for the general public. The benefit to the presenter is if someone from the audience says "I'd like you to go back to the slide where you talked about X" -- finding the slide in a 2D-organized deck is much easier by going to overview and finding the right contextual column.
I find slides.com very handy because I can present from any web browser and don't have to worry about bringing my laptop.
It was a bit annoying at the start but I loved the content. Next time I'm rebuilding my Mac (probably when new ones come out around June 13th) I'm going to use this hardening guide, which might help a bit:
This guide is great. Is there a similar guide out there that's more focused on Linux, particularly Ubuntu? I found [0] a bit lacking.
[0] https://wiki.ubuntu.com/BasicSecurity
Just use your spacebar and it'll navigate you linearly, taking you down and across the pseudo-tree as you need.
Edit: or click the down arrow, if you have no spacebar.
See my comment here [0] with its genuine, but as yet unanswered question:
If you hit space, how do you know you end up seeing every slide?
Everyone seems to be chipping in with these navigation hints, and that's great, but no one seems to answer my question. The site seems to be a triumph of presentation over usability, and I'd really like an insight into the minds of those people who think it's all obvious and usable.
For example, someone else said[1] that if you hit ESC you get an overview. How do you find that? What if you don't have an escape key, such as on my tablet?
I'm not trying to be grumpy or anti-anything, I'm trying to gain an insight into how people think this is a good thing, and to provide, in return, an insight into why I think it's unusable.
Slides.com is focused on the presentation. The audience is not typically interacting directly with the deck.
When you create a deck, there is a pretty good tutorial explaining how the interface works, including what spacebar and escape do.
The real advantage of the service, for me, is the support for embedding any digital media you want into a slide via iframes, and the ability to use your phone/tablet to advance slides and see speaker notes if your venue did not provide you with a clicker.
It also has another pretty neat feature where audience members can pull up the presentation on their laptops/tablets to follow along, and their slides will automatically advance to match my progress through the deck.
I'm just trying to explain why it's very much a tool for the presenter, not the presentee.
Edit: Slides.com is a locked down instance of reveal.js
It seemed to me that every time it went sideways, that new vertical had no upper parts to its tree. I wish I could diagram it here, but it seems this one was well-formed. I often check, and after this happened 3 times, I trusted the storyteller. I can't guarantee that another writer wouldn't do this differently, as I don't know the tech, but this one seemed well-constructed. Did you not look and see that when you were clicking down and it went sideways to the right, there was no up option on the next slide?
I am on mobile, and while there was a set of 4 arrows to navigate vertically and horizontally as required, there was no spacebar equivalent. You had to tap each time to animate in the next bullet point. And you had to differentiate between light brown and slightly lighter brown under your finger to know when you reach the bottom of a column.
I absolutely loathe this style of presentation. It always feels like I've missed a part. Call me old-fashioned, but the UI and UX of linear presentations are so much easier to use.
It's absolutely horrid. Let's say you want to read everything: first you need to experiment to find out whether depth-first search goes down or right. Then you miss stuff, because on some screens the down and up buttons use a lighter shade than you're expecting, one very close to the "nothing here" shade; after figuring this out you go back to re-examine everything you've already seen, because you might have missed something thanks to this mysterious hard-to-distinguish shade. Eventually I gave up; maybe he has interesting stuff to say further on, but I'll never know, because I was defeated by terrible UI patterns.
How do you know? How can you be sure you reach every slide simply by pressing space?
What about on my tablet, which doesn't have a space bar?
Edit: These are genuine questions, and I am deeply frustrated that it's being downvoted without any discussion. People are claiming that hitting the space bar takes you linearly through the entire presentation. How do you know you reach every slide like that?
Really, seriously, how do you know?
As far as I'm concerned the navigation is opaque - if I hit space, I don't know that I'll reach every slide.
How do you know ?!?
I suspect that people are enamoured with the attractiveness of the presentation - it's cool, it's slick, it's gorgeous, it's wonderful - and when I question its usability or discoverability, people are downvoting because they have no answer, and just get pissed off at being questioned.
You have raised a very important issue here. Today's web is festering with this kind of awful UI/UX. It just caters to the demands of novelty-seeking users with no concern for usability/discoverability.
Indeed, you made a very sharp statement. It might be driven, to a very large extent, by the crazy novelty-seeking ideas of the website owners and devs.
Then, if some equally stupid "journalist" gives praise and publicity to their foolish UI/UX, the trend grows and perpetuates itself.
Unfortunately, it seems to be happening to a very large extent.
The irony is that the article talks about security and calls for anti-bloat measures.
Look at the compass rose in the lower right corner. When there's another detail slide below, the down arrow is dark. If you're at the bottom, it's grey. Basically you're navigating a grid and the arrows tell you if the cell in that direction is populated or empty.
I'm not totally sure if moving sideways pops you back up to the top row (which it should) or to the equivalent depth of the neighboring stack.
I actually really like this style. It allows you to quickly skip across the top row to get the main points, then dive deeper when needed.
Thanks for the reply - useful, but it continues to raise questions.
By looking at one of the slides, is what you've said obvious? How can you tell? From the start it's not obvious to me that there's a grid of slides. It seems to me that you need to experiment to work out how it works, and it's really not obvious. You yourself say that you don't know if moving sideways pops you back to the top of the next column.
In short, it's all taking my attention away from the presentation, it's making me work to figure out what's going on, and it's just not obvious. Ask yourself if this is a good thing - making your audience work to figure out how the presentation works, taking them away from the content. It's swish, it's fancy, ...
I'm sorry you all have to read just the slide deck. It's an hour-long presentation and a lot of content is simply not in the deck. :( Unfortunately, every time I've presented it, the talk was not recorded -- hopefully I'll eventually present it somewhere else that will capture it for me.
It's a good presentation with many good points, aside from the horrid formatting. Just turn it into a PDF of the slides, for goodness' sake. Write down the key page numbers on a piece of paper for audience questions where you have to go back. Should work fine. :)
Btw, one thing worth correcting is the false claim that QubesOS was or is the only attempt at workstation security. I've evaluated almost a dozen over the past 10 years, with some still existing. I'll list those here:
You really need to look up separation kernels, as isolating the most critical stuff in a dedicated partition protected by a 4-12 kloc kernel is one of the strongest approaches. seL4 and Muen are examples, with GenodeOS a FOSS attempt at a Nizza-like architecture with a strong foundation and best-of-breed components (esp. the Nitpicker GUI). High-assurance security is moving forward with hardware-software architectures, with one maybe getting an SoC release (plus source code) in 1-2 years. Yet our prior work with separation kernels/VMMs plus safe code (esp. SPARK Ada or C with the Astree Analyzer) for trusted components is still stronger than any of the crap mainstream FOSS, VMware, etc. are making. They rarely learn from the past.
Note: Email me if you want more examples of past and current high-assurance work. I have collected them for most focus areas with papers, prototypes and/or products.
> Btw, one thing worth correcting is the false claim that QubesOS was or is the only attempt at workstation security.
You must look at my statements in the context of presenting this at the Linux Security Summit. You know a lot more about this than me, obviously, but from what I can tell, each of the other solutions you mention runs a custom non-Linux microkernel that provides virtualization to other consumer OSes. I'm ready to be educated here, but I believe I didn't misstate that QubesOS was one of the first pure-Linux mainstream attempts at workstation security through compartmentalization.
Re slides: oh, I must have misread the meaning of one of your comments. I've got a PDF to share now. Good. :)
Re "one of the first pure-Linux mainstream attempts"
Damn, I'd have had you if you hadn't said mainstream. This statement is so well-worded I might have to agree with it. The sad part, though, is that it's because the mainstream rarely accepts anything more secure, esp. high integrity/security. Rust and QubesOS are among a tiny set of exceptions.
If you want to give the talk at the CloudFlare office in SF or London (can get a couple other speakers -- maybe about network services), we could provide food for attendees and either free for 100-300 people or have people make a donation to Linux Foundation. Getting it professionally recorded so you could put it online somewhere would be easy; I can give you the files, or we could find a place to host them.
Thank you very much for the offer. I'm not sure I can easily take you up on that, as SF and London are about equally far from Montreal. I'll try to see if perhaps I can do an on-air hangout.
It's not really a recommendation. It's presented as one of the free software projects attempting to tackle workstation security. Another one is SubgraphOS.
"The only serious attempt at workstation security"
"The Volvo of blah blah"
Quite a slam to those of past and present that handed NSA or DOD pentesters their asses back to them. It might be more accurate to say "a FOSS attempt at workstation security", minus the Volvo part. The Volvo title probably goes to INTEGRITY-178, as the SKPP cert requires more attack areas to be covered plus 2 years of pentesting for the kernel. The Genode architecture is the prime FOSS contender as far as foundations go. Next time a FOSS project claims to be designed securely, just ask for a covert storage and timing channel analysis of any components that handle secrets. They'll either say "Huh? What's a covert channel analysis?" or "We don't really have anyone doing that, as we're too understaffed, or it doesn't really matter." ;)
I slammed Joanna for its issues in the past. It's kind of a merger between the features of Compartmented Mode Workstations and an insecure virtualization platform. This author fails by thinking it's the only attempt at a secure workstation. There were many, with the market mostly ignoring them. It's why only a few remain. A demand-side problem.
Anyway, the best way to do it is a microkernel or separation kernel that virtualizes Linux, with security-critical stuff running directly on the microkernel and communicating through protected IPC. That's the MILS/SKPP model, whose first commercial releases were around 2005 or so, with security kernels doing similar stuff in the 90's. The closest thing to that in FOSS is GenodeOS: it uses many proven components from CompSci, like Genode, Nitpicker, Muen, and seL4. They're a small outfit. They need contributors to get it into beta shape.
Note: CompSci stuff that explains how these things work really well without corporate marketing. :) This is similar to separation kernels below, GenodeOS and Sirrix's TrustedDesktop.
Note: The design techniques listed here make for strong guarantees against apps screwing it up. Esp "hard currency." Worth copying by any FOSS project.
Note: One with more features and risk to make app development easier. Nice visuals showing split between untrusted OS's & runtime partitions w/ careful comms.
Note: Great presentation on CMW capabilities & risks. Several are on market but Argus is most mature & featured. Orange Book taught us to use CMW/OS features for medium-assurance, damage reduction with security kernels isolating those & individual tasks for high-security. Balance in everything. Argus runs on Red Hat Linux now.
So, yes, there are plenty of them going back to the 90's, with a variety of security tradeoffs. The strongest ones use a split app architecture, with untrusted VMs for legacy stuff (or OSes) plus isolated runtimes on high-security kernels. Usually custom, slimmed-down stacks for the filesystem and networking that isolate against OS-level attacks. A few, like GEMSOS (old) or Muen (recent), were implemented in safer languages to reduce error potential. Such features were in my recommendations to the QubesOS mailing list, were rejected entirely, and some were later implemented as if it had been their idea all along. Best to avoid it unless vanilla malware is the concern. ;)
Perhaps I'm wrong, but I've always remembered this as
*Ugly* bags of mostly water.
Am I wrong?
And along with several other commenters here, I strongly dislike this sort of multi-direction navigation with no overall map showing what's there, where I've been, and what I haven't yet visited. Beautifully designed and presented, with no concern for the user experience.
A bit like the cars being described.
    The car is designed    |  The presentation is
    perfectly.             |  designed perfectly.
    Any crashes are the    |  Any inability to navigate
    driver's fault.        |  is the user's fault.
EDIT: The Wikipedia page about this episode agrees with me, but other "quotation" pages have "ugly giant bags of mostly water." Does anyone have the episode to hand and the time to watch it? Am I paying too much attention to trivial details?
EDIT II: Found a script[0]. The first reference is in an initial translation and reads:
Ugly... Ugly... Giants...
Bags of Mostly Water...
Subsequently as the translator gets better it's:
Ugly Bag of Mostly Water
So there we are.
EDIT III: Wow - downvotes! No complaint, obviously people feel either that this comment is wrong, or doesn't belong. I'd appreciate knowing why people might think that, but I guess I'll never know. Which is a shame, I'd welcome the opportunity to learn.
"The presentation is designed perfectly. Any inability to navigate is the user's fault." is spot on. It took me a lot of slides to figure out something was terribly wrong.
On the other hand, organizing slides by index/subindex, plus the fact that it's already on the internet, won me over.
Yes, I have some Star Trek on in the background day and night (I mean that literally). It is exactly as you describe. Ugly giant bags of mostly water. But for some reason, this pared down version ("giant bags of mostly water") has persisted online.
There's no Escape key on my tablet. And the whole experience on a tablet is horrible with respect to the other parts of navigation too: it doesn't fit right on screen, it navigates when I'm trying to adjust, and it jumps around when I'm trying to navigate. Hard to hit the arrows. I bailed out after three slides. Unusable.
I'd be interested to know how you know that. Did you just experiment? Others have said that "space" will take one through the slides in sequence. Is that genuinely discoverable? How does one know, when hitting space, that one has seen all the slides and not just taken a single path through a subset?
These are genuine questions -- I'm heavily into usability (in a different context) and I'd be interested to know what people try, and what they conclude. In my context, people are uneasy about hitting keys at random just to see what happens. On some web sites I'm afraid to move the mouse, because the mouse-over event has been trapped, and things happen that I don't want and can't undo.
So how did you discover that ESC will produce an overview?
Let me know when your team of 10 security people, that the corporation spends a million per year on, is ready to tackle all these issues -- not one overworked IT guy who keeps getting shit from dumb people like the CEO who don't get it. Otherwise forget it; security is not happening, because there is no budget for it. At the end of the day the problem with security is not security, it's money; we have more than enough tools to make everything airtight. What we don't have are the budgets to make this happen, so instead of proactive security you get reactive security, and the CEOs and other executives don't care about that until it happens, because it costs money.
Of course, if the company leadership doesn't care, then you will have a hard time convincing them why the upfront effort of "doing it right" is worth it. When dealing with this situation, I found it useful to compare IT security people to lawyers. Wait, hear me out before you shout me down. :)
To the non-initiated, lawyers and infosec people are seen with nearly-equal amount of both dislike and trepidation. They are seen as a force of lawful evil that descends on your team and starts telling you that all those cool things you're trying to do cannot actually be done, or must be done in a non-obvious roundabout way. When asked for reasons, both lawyers and infosec start talking about concepts that are entirely unfamiliar to most devs (code provenance, license agreements, trademarks, patent litigation, IP isolation, containers and namespaces, RBAC policies, multifactor authentication). All you care about is that this is a person who is telling you that your project, 99% complete after your team worked multiple 60-hour weeks, must be delayed until a bunch of things -- that you don't consider broken! -- are fixed.
However, this is where things usually go differently. If a lawyer comes to management and says "this project cannot launch because a bunch of code was copy-pasted from stackoverflow and links with an incompatibly-licensed library," the management is likely to listen even if they don't understand a word of what was said -- because they know the importance of lawyers and know that, in the long run, litigation is extremely expensive. However, if an infosec person comes to them and says "this project cannot launch because they have a PHP script running as root that listens on external port 80," management will not value this input nearly to the same degree, even though, in the long run, a bad security vulnerability can have just as much of a detrimental impact on a company as litigation -- and probably worse, because you won't be able to hush-hush and "settle out of court."
The reasons for this are multiple -- infosec is in its infancy compared to the legal field, and, sadly, many IT security practitioners tend to look and act in a way that makes their recommendations carry much less weight with upper management.
So, where I'm going with this is -- if you work for a company in an infosec role and you genuinely want to improve things to the point where management actually starts to listen (which translates into $$ for your team and your projects), then you need to both convince them that your expertise is just as important as the lawyers', and probably present yourself with the same gravitas as those working on the legal team.
It's pretty much the same with any aspect of IT. I would like to spend more time making the code better and more maintainable, but management just wants to see new features.
But mostly it was a lot of prodding and poking while setting various things up that helped, and remembering, when something doesn't work, to check SELinux first.
The 'setroubleshoot-server' package on the Red Hat based distributions is also good while you're getting started in dev, as it takes the AVC logs and 'guesses' the likely causes of the problems you've had, giving percentage likelihoods for the policies and booleans that might be responsible.
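For anyone starting down that road, here's a minimal sketch of the usual triage loop on a Fedora/RHEL-style box (the boolean and module names are only examples, not recommendations for your system):

    # Is SELinux actually enforcing?
    getenforce

    # Show recent AVC denials from the audit log
    ausearch -m avc -ts recent

    # Let setroubleshoot analyse the log and suggest likely fixes
    sealert -a /var/log/audit/audit.log

    # Example fix: flip the boolean a denial points at
    setsebool -P httpd_can_network_connect on

    # Last resort: build a local policy module from the denials,
    # review the generated .te file, then load it
    ausearch -m avc -ts recent | audit2allow -M mylocalpolicy
    semodule -i mylocalpolicy.pp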
Hehe, still looking for a walk-through of what commands exist and in what order they're typically used -- basically a couple more annotated sessions. Preferably text, although I have managed to get through a few videos lately.
You might be missing the "down" part of each slide? In the bottom right corner there's the 4-arrow thing, and if down is 'not disabled', there are more slides to see.
This article falls foul of what I might call 'security shopping' -- passing mentions of lots of brightly coloured, complex-sounding security things with very little regard for what exact problem they're solving.
They mention a VPN or an insecure access panel having bad permissions, but recommend a mixed bag of differently coloured jellybeans as the solution, without once recommending shutting down the PHP script, allowing access from the VPN only through certificate, password and hardware two-factor authentication, and ensuring good access controls and employee on- and off-boarding processes.
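To make that VPN recommendation concrete, here is a rough sketch of an OpenVPN server config layering a client certificate with a PAM-checked password; the file paths, plugin location and the idea of backing the "openvpn" PAM service with an OTP/hardware-token module (e.g. pam_yubico) are all assumptions about your setup, not part of the article:

    # /etc/openvpn/server.conf -- sketch only, not a complete working config
    port 1194
    proto udp
    dev tun

    # Factor 1: clients must present a certificate signed by our CA
    ca   /etc/openvpn/ca.crt
    cert /etc/openvpn/server.crt
    key  /etc/openvpn/server.key
    dh   /etc/openvpn/dh.pem
    tls-auth /etc/openvpn/ta.key 0

    # Factors 2 and 3: username/password verified through PAM; the "openvpn"
    # PAM service can itself require an OTP/hardware token, so the password
    # prompt carries the second factor. Plugin path varies by distro.
    plugin /usr/lib/openvpn/openvpn-plugin-auth-pam.so openvpn

Clients then add auth-user-pass (plus their own cert/key) to their config so they are prompted for the password/OTP on top of the certificate check.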
Far more importantly, I question the efficacy of any security recommendation that doesn't mention threat modelling at all. What is it you want to protect? What's it going to cost to protect these things? What's it going to cost to lose them? What's the simplest and most effective way of protecting them? Is it really moving your entire system to a different platform and upgrading all your crypto -- ask yourself -- are we really installing air bags, or are we building our car out of armour plates? Some kid is going to spend 2 hours on XSS in your app if you spend all your resources on in-datacentre encryption and service-to-service authentication.
I'm sure threat modelling is something everyone does implicitly.
As someone who practices security, I found the keywords you can pull from the slides reasonable suggestions to follow up on. There were a couple of places where he went into the weeds, and I think he probably could have talked up iOS security a bit more instead of smart cards, which are a bit of an overkill relative to his other suggestions.
But, this is just a slide deck. Try not to rush to judgement considering we didn't hear the talk that came with it.
> I'm sure threat modelling is something everyone does implicitly.
You may work somewhere that this is the case, but I can't count the number of times I have tested an application where someone has equated security to having an A+ HTTPS rating.
> This is a slide deck
Understood, and something I didn't consider before. That said, I think my comments will still be useful to those here who have also not seen the original talk.
First off: just hit space. You'll see the whole presentation.
----
As a developer, I definitely liked the framing of the presentation. Though I don't think it goes far enough in emphasizing defense in depth. Put simply, user workstations should ideally never be trusted. Getting into the network shouldn't help an attacker much if everything requires authentication even once you're in.
The analogy between IT security and car safety is really insightful. However, it's also interesting that cars are two to three orders of magnitude less safe than rail and air travel, respectively, in terms of injuries and fatalities per passenger mile. Maybe we should be thinking about how to make using computers as safe as air travel, rather than as safe as car travel.
The utility of the car metaphor is that both computers and cars have to work safely for non-professional users. Cars are overwhelmingly driven by people whose job isn't to drive a car and computers are used by non-professional users all the time. This is a critical point, as it's a lot harder to enforce safety practices or education on people who don't have a professional obligation to follow those practices. You have to design your systems to work for people who will do everything wrong.
In terms of mitigating severity from crashes, cars are far ahead of airplanes.
Of course, air travel is a great area to draw on for security when it comes to professional computer users (developers, sysadmins, etc.). A lot of the practices which have improved safety in air travel (automation, checklists, blameless post mortems) are directly applicable to computing safety when it comes to people whose job it is.
Why is the safety of different modes of travel measured in passenger-miles instead of passenger-hours? For measuring the utility of distance traveled it makes sense, but passenger-miles don't reveal how safe it is per hour that I sit in a particular vehicle (train, plane, car, etc.).
> Why is the safety of different modes of travel measured in passenger-miles instead of passenger-hours?
Because it's really the only sensible way to compare different forms of transportation.
Imagine a teleportation device with an accidental death rate of 1 per 10 trillion miles. But it only takes 2ms to operate (regardless of distance), so the death rate measured in hours would be horrible.
Compare that to covered wagons. Per mile, they're quite dangerous—going on a long trip in one might mean a 1 in 10 chance of death. But they're also extremely slow. So measuring death in terms of hours would make them look safe.
From a safety perspective, which would you rather use?
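To make the rate conversion explicit (the trip length, trip duration and speeds below are invented purely for illustration), the per-hour rate is just the per-mile rate times speed:

    \text{deaths/hour} = \text{deaths/mile} \times \text{speed (mph)}

    \text{teleporter: } 10^{-13}\ \text{deaths/mile} \times \tfrac{3000\ \text{mi}}{2\ \text{ms}} \approx 10^{-13} \times 5.4\times 10^{9}\ \text{mph} \approx 5\times 10^{-4}\ \text{deaths/hour}

    \text{covered wagon: } \tfrac{0.1\ \text{deaths}}{2000\ \text{mi}} \times 2\ \text{mph} = 10^{-4}\ \text{deaths/hour}

Measured per hour the wagon actually looks safer than the teleporter, even though per mile (and per trip) it is hundreds of millions of times deadlier -- which is exactly why per-hour comparisons mislead.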
What I love is the smartcard... Well... the main AAA (Authentication/Authorization/Accounting) problem is the cloud.
How can we federate identities and manage them safely?
Plus, once you decide to be connected, how do you make AAA systems talk to each other?
And then, sometimes you need to make money... and, you know, safely pass tokens back and forth... And what standard solution do we have that is not a framework?
Well, 3GPP proposed IMS based on IETF Diameter. Still not there.
Some proposed role-based accounting and proxy authz based on LDAP... well, not really deployed.
So we are also waiting for a new standard for intercommunication between centralized enterprise directories, with tokens, tickets, multiple authentication policies according to origin, roaming, a sane schema...
LDAP is honestly a good tool; it is actually a secured, strongly typed NoSQL store. But I have hardly seen any devs who understand the anonymous-bind-then-authenticated-bind mechanism.
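For devs who haven't seen it, here is a rough sketch of that two-step pattern using OpenLDAP's command-line tools (the server, base DN and user below are made up):

    # Step 1: anonymous simple bind (-x with no -D) just to locate the user's DN
    ldapsearch -x -H ldap://ldap.example.com \
        -b "dc=example,dc=com" "(uid=alice)" dn

    # Step 2: re-bind as that DN to prove the password (-W prompts for it),
    # then read the attributes the application actually needs
    ldapsearch -x -H ldap://ldap.example.com \
        -D "uid=alice,ou=people,dc=example,dc=com" -W \
        -b "dc=example,dc=com" "(uid=alice)" cn mail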
I do feel our biggest problem is not in the tools/technology, but rather in too much education of people who are useless in production.
SAML is based on SOAP, no? SOAP, which Google dropped over security and over-complexity concerns, while security studies tend to say the bigger risk is using a system you cannot fully understand, no?
It seems to me like modern security is built on trust in "experts" rather than understanding the basics. But I guess I am wrong in my appreciation?
The slides mention IP reputation; a great free resource for that is http://GetIPIntel.net, for making sure unexpected IPs don't connect to your services.
IMO, the key challenge in IT security is business justification. The costs are very explicit: it's a shit-ton of work to enable SELinux, and 2-factor auth isn't free. Meanwhile the benefits are fairly nebulous: 2-factor auth doesn't prevent shellcode injection into ImageMagick.
The end result is that you have a low probability of huge impacts occurring, without enough data to suggest the distribution of those low probabilities, or the nature or distribution of the huge impacts. You're sort of selling tiger-repellent rocks, and when the big event does happen, you now have to convince the executives of the counterfactual that if you had had the budget to implement X, Y and Z, the event would have been prevented. Meanwhile, the costs to team velocity are silently ignored.
The best-case scenarios here probably revolve around insurance. Compliance auditors can impose non-compliance fees, and giving security teams a directly avoided financial consequence to point to would help their budget justifications.
I was watching S01E05 of Mr. Robot, which involves the crew breaking into a huge data warehouse facility reminiscent of Fort Knox. My friend asked me if places like that really exist and I laughed: "No company cares enough to pay that much to protect their data."
Exactly. I've been annoyed more than once by pure whining about the format/color/navigation of a site and little to no comment on the content. [Now I'm going meta, whining about the whining, but I'll try not to do it again, I promise.]
"This is what DevOps is about: running Ops like you're Developing an app, not letting your devs run your ops."
This is a very common attitude among sysadmins who think that configuration management is the only thing they need to do to "become DevOps". Sadly, years after the DevOps movement started, the majority of people who are "doing DevOps" are those sysadmins who just added Puppet or Chef to their toolbox.
Security is a very difficult subject when it comes to DevOps practices, but the approach given in this presentation is definitely something I would not want to be part of. Unless what they are securing is a nuclear reactor control center.
On the other hand, I have yet to work with, or even meet, a developer who thinks that DevOps isn't dumping .tar files or Docker images directly into production. The developers love it because it gives them carte blanche to hack production any way they see fit. For instance, I had a Java developer attempt to argue with me that .tar files are the same as .rpms, and that was just one of many incidents. The worst part is that the developers actively spread that propaganda, along with the claim that OS packaging is unsuitable for continuous delivery because it requires root privileges to deploy its payload.
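For what it's worth, the difference is easy to demonstrate: an .rpm carries metadata and verification that a tarball simply doesn't (the package name below is a placeholder):

    # What will this package put on the filesystem?
    rpm -qpl myapp-1.2-1.x86_64.rpm

    # What does it require, and what does it provide?
    rpm -qpR myapp-1.2-1.x86_64.rpm
    rpm -qp --provides myapp-1.2-1.x86_64.rpm

    # Is it signed by a key we trust?
    rpm -K myapp-1.2-1.x86_64.rpm

    # After deployment: have any installed files been altered on disk?
    rpm -V myapp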
I think this is solvable by educating devs, not just by enforcing policies. And the path to production should be a well-defined process that evolves from collaboration between devs and ops, so nobody is able to just "hack production" on their own, and ops could also do dev code reviews to see what exactly they are doing.
Education is critical, along with ownership. A lot of the bad practices on either side happen when someone can just fob a mess off onto the other group rather than trying to fix it. Once they have that responsibility it's easier to get someone to care about e.g. how a tarball doesn't address dependency management.
The slides said something about an explanation, and then they link to a GitHub repo with one of the most minimal READMEs you can create. Well... good attempt, I guess.
So what is "IP Score" in the car analogy? A World-wide DMV? With all user information exposed to either a closed-source entity, or an open source effort?
The car analogy is a good sign that the speaker is not able to express the thought directly. Networks are not that simple, and the analogy breaks down at depth for every comparison. Like the modern drivers who consciously choose not to wear a seat belt. Or the fact that half of the 1.25 million road deaths a year are the "vulnerable road users", as the WHO calls them: http://www.who.int/mediacentre/factsheets/fs358/en/ -- casualties of the laws of momentum, where a 150 lb pedestrian absorbs more energy than a 3000 lb Scion...
Stop talking cars and analogies!
Talk straight!
The author of the presentation lost my respect when I read that he is advocating using SELinux and Linux containers instead of zones and SmartOS, which tells me he is not up to date on security. The whole thing was made even worse by his advocating continuous integration: no, .tar files and Docker images are not the same thing as operating system packages, nor do they come anywhere near the functionality, and therefore "DevOps" is insane -- especially when the whole thing can be pulled off correctly and securely with SmartOS zones, OS packaging for both delivery and configuration management, and change management process modeling. Those are your airbags, crumple zones, and safety systems, not SELinux, "DevOps", nor Linux "containers".

The only thing of technical value in the presentation is how to put SSH keys on SmartCards, and even that is just explained as a series of manual steps (hacking, in system engineering terms) rather than as a turnkey OS-packaged process (full, high quality, repeatable integration). Damn it, how I loathe half-baked stuff from people who should know better but don't: if you are going to present on security, you had better know what you are talking about, or else it is the blind leading the blind.
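For the curious, the smartcard piece largely comes down to OpenSSH's PKCS#11 support; a rough sketch follows (the OpenSC library path varies by distro, and the deck itself may well use a GnuPG-based flow instead -- this is just one common way to do it):

    # List the public keys held on the card via the OpenSC PKCS#11 module
    ssh-keygen -D /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so

    # Load the card-resident key into the agent (prompts for the card PIN)
    ssh-add -s /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so

    # Or point the client at the module directly in ~/.ssh/config:
    #   Host *
    #       PKCS11Provider /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so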
You may be right, but from my perspective you're falling into the same trap that most IT security people do. You're assuming that the mainstream devs should put security first.
They just never will. Follow the money. The money is paying for revenue generating activity.
Using your example that "zones and SmartOS" are preferable to "SELinux and Linux Containers"....that's not going to happen. The people with the money have already chosen the container direction.
A much better strategy would be to give up on what you want, and focus your energy on making the direction that's been chosen more secure.
I suspect if enough talented, security minded engineers descended as contributors for docker, rkt, etc...the situation would improve much faster than the current direction of just complaining they aren't secure.
> Using your example that "zones and SmartOS" are preferable to "SELinux and Linux Containers"....that's not going to happen.
That has actually already taken place, and is taking place. There is unlikely to be one winner-take-all.
Following the money does not mean that the application cannot be programmed from the ground up to support SmartCards and roles, or that it has to be full of security holes.
> I suspect if enough talented, security minded engineers descended as contributors for docker, rkt, etc.
Depends on what is under the etcetera. Docker and rkt are not the silver bullets that everyone who has not yet been busted by them thinks they are; they are just a trend. With all of those you instantly lose lifecycle management, because they are just images of massive file dumps, not images of software and configuration installed with packages. When you use Solaris zones in SmartOS, Docker and rkt become completely superfluous, because you suddenly get a fully working yet completely isolated UNIX server, running at the speed of bare metal no less. Add some OS packages on top of that, make them into an image for imgadm(1M), and in a few seconds you're done. What does one need Docker for in that scenario?
And I should certainly hope that perfect is the enemy of good, because life has taught me, the hard way, that good isn't good enough. I absolutely hate being woken up during the night because of an incident, and will go out of my way to get as close to perfect as possible in order to be able to sleep through the night.
> If you're focused on driving the best solution in some sub niche, yes...you can be successful with that.
I would be hard pressed to call securing containers, virtualization and cloud a niche, since that is exactly what SmartOS has been designed for, from the ground up.
A large part of generating revenue is not having downtime caused by data corruption incidents or security breaches (or both), which means picking and using a substrate which provides guards against that.
Set up a TFTP server and a DHCP server, and use PXEGrub (iPXE is flaky). Boot the system off the network. As soon as you log in, read the manual pages for imgadm(1M) and vmadm(1M), then pull down the "base64" image. Also read up on Solaris zones. Oracle documentation will do, as SmartOS and Solaris 10 are similar to a good degree.
Next, read up on pkgsrc and make a simple "helloworld" package. After you get all that working, get the Gemalto SmartCard code working on SmartOS (as Solaris was a big market for SmartCards, the code should still work on SmartOS).
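Roughly, that loop looks like this from the SmartOS global zone (the image UUID and the zone properties below are placeholders; check imgadm avail for a current base64 image):

    # Find and import a base64 image (UUID below is a placeholder)
    imgadm avail | grep base-64
    imgadm import 00000000-0000-0000-0000-000000000000

    # Describe the zone in a JSON manifest and create it
    cat > web01.json <<'EOF'
    {
      "brand": "joyent",
      "alias": "web01",
      "image_uuid": "00000000-0000-0000-0000-000000000000",
      "max_physical_memory": 512,
      "nics": [{ "nic_tag": "admin", "ip": "dhcp" }]
    }
    EOF
    vmadm create < web01.json

    # Run a command inside the new zone: install a package from pkgsrc
    zlogin $(vmadm lookup -1 alias=web01) pkgin -y install nginx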
> "They are dumb, because they don't do it how I would do it. I know better.
To summarize:
if you want security, you have no business running Linux-anything, not now, not ever: the choices are either OpenBSD or SmartOS, and rejoice that we do have alternatives.
> if you want security, you have no business running Linux-anything, not now, not ever: the choices are either OpenBSD or SmartOS, and rejoice that we do have alternatives.
That's a very bold assertion and you haven't even tried to back it up. Who do you think is going to read that and say “some random Internet commenter is right and Google/Facebook/the NSA/etc. must all be incompetent”? You really need to back that claim up and do so comprehensively across the entire security landscape from basic design to operational concerns like patch management.
(As an example: I've never logged into a Solaris / Illumos machine which was current on security updates because the admins didn't have the confidence which Debian admins have had since the 90s that updates won't break things. I'm open to the argument that they're all paranoid or incompetent but that's probably the most important real world security task and it at least merits discussion)
> Who do you think is going to read that and say “some random Internet commenter is right and Google/Facebook/the NSA/etc. must all be incompetent”?
That depends on one's experience level: anyone who has been doing this kind of work long enough will recognize, when someone mentions it, a solution they should do their homework on. For example, the NSA has been both using and modifying (Open)Solaris for decades now; you can find their papers on securing it with IPsec and locking the OS down, as they've been released to the public. They are still relevant, in both the illumos / SmartOS and security contexts, today. Now, with virtualization, cloud and containers, more so than ever. The NSA was prescient, or at the very least had common sense.
Apropos Google, Facebook, trendy... two things:
"if a hundred million flies all eat shit, surely they can't be wrong?"
I've been doing this container, virtualization and UNIX work since long before Facebook and Google even got the idea, before those companies even existed ;-)
When you dedicate every waking moment of your life to studying and mastering UNIX, not only professionally but privately, in time you obtain the insights needed to understand and evaluate what is and isn't good technology. Put enough priority 1 incidents and sleepless nights behind you, and you will know what works, and what doesn't.
As Paul Graham so aptly put it, "When you choose technology, you have to ignore what other people are doing, and consider only what will work the best."
> As an example: I've never logged into a Solaris / Illumos machine which was current on security updates because the admins didn't have the confidence which Debian admins have had since the 90s that updates won't break things.
I want to address this separately: UNIX and Solaris, from which SmartOS stems, practically invented the concepts of backwards compatibility, the guaranteed-not-to-break application binary interface, and paranoia about end-to-end data integrity. The system administrators' concerns are themselves disconcerting: imagine you need to have a brain tumor removed, and the surgeon who is to remove it is not specialized in neurosurgery. You write about sysadmins who are unfamiliar with the subject matter... and as professional system administrators, they well should be familiar with it. With this in mind, would you trust them to know what they are doing?
I've seen this being presented before, it seems to be a new thing, but never navigated one myself.
Edit: Just saw a comment about pressing space bar, that seems to work linearly. Thanks! (Source: https://news.ycombinator.com/item?id=11653281 )