Hacker News | zigara's comments


It seems the issue was developers using SSH agent forwarding, which was abused to access the production environment.


The attacker seems to have responded:

https://github.com/matrix-org/matrix.org/issues/357 edit: just saw the rest: https://github.com/matrix-org/matrix.org/issues?utf8=%E2%9C%...

"[SECURITY] SSH Agent Forwarding

I noticed in your blog post that you were talking about doing a postmortem and steps you need to take. As someone who is intimately familiar with your entire infrastructure, I thought I could help you out.

Complete compromise could have been avoided if developers were prohibited from using ForwardAgent yes or not using -A in their SSH commands. The flaws with agent forwarding are well documented."
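For anyone wanting to lock this down on their own machines, a minimal sketch of an ~/.ssh/config that disables agent forwarding globally and reaches production through a bastion with ProxyJump (OpenSSH 7.3+) instead; the host names 'prod' and 'bastion' are hypothetical examples:

```
# ~/.ssh/config (sketch; host names are examples)
Host *
    ForwardAgent no        # never expose the local agent socket to remote hosts

Host prod
    ProxyJump bastion      # hop through the bastion without forwarding the agent
```

With ProxyJump the key challenge always happens on your local machine, so a compromised intermediate host never gets a usable agent socket to replay.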


I'm also a UNIX geek, and I would have to disagree with your opinion.

The design looks solid. I don't know anyone who has trouble scrolling on their laptop these days; I found that claim quite bizarre. With daily use, you can be nearly as nimble as with a real mouse.

Could you give some suggestions on what you would change? I'm curious how you would display that much data on the screen in a clean manner.

Not trying to argue here, genuinely interested in improving my UI/UX knowledge. Quite useful when building webapps these days.


How about you use a simple virtualenv wrapper such as https://pypi.python.org/pypi/vex ?

Then you can easily type 'vex myenv python myapp.py', no need to spin up a Linux container for simple development.
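For those who haven't used vex: it is roughly a thin wrapper over the standard virtualenv workflow, saving you the activate/deactivate dance. A hand-rolled equivalent using only the Python stdlib (the 'myenv' name is just an example):

```shell
# What 'vex myenv python myapp.py' does for you, spelled out:
# create an isolated environment, then run the interpreter from inside it.
python3 -m venv myenv
./myenv/bin/python -c 'import sys; print(sys.prefix)'   # prints the myenv path
```

Calling the interpreter by its path inside the environment is all "activation" really does, which is why wrappers like vex can stay so small.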


> no need to spin up a linux container for simple development

But why not do this, anyway?

These containers are very cheap and come with very little overhead; they are truly isolated and easily shareable. I'd say there's no reason not to spin up a container for everything - it's not like you're going to run out of RAM...


While they do make sense in certain situations, they also add complications. On Linux it's much easier, but if I want to run this on Windows or OS X, they tell you to run Docker inside a full VirtualBox VM (and maintain it). That's not ideal, as VirtualBox destroys my battery, among other annoyances.

You also have to maintain those containers/images (not to mention lug around a 600+ MB base OS image and keep it updated). I'm not sure how Docker handles keeping images updated; I assume overlayfs makes it easy to keep your base OS images current, but I'm not sure how it'd handle certain package configurations.


To be fair, Python plus anything that needs compiling on Windows adds complications.

If I had a bunch of money to invest, I'd pay to:

- Get non-virtualenv libraries moved to virtualenv.

- Get non-virtualenv libraries working on Windows.


> If I want to run this on windows, or osx

Yeah, it's complete nonsense then; the whole point of this is to be extremely lightweight. If you have to run a full VM anyway, you may just as well use it directly.

> You also have to maintain those containers/images

Well, that's true, but due to the containers being lightweight and focused you don't really have that much to maintain. I'm not a supporter of one-process-per-container philosophy, but even 3 processes/services inside a container are easier to manage than a full VM.

> not to mention lug around a 600+mb base OS image and update it

Last I checked, the base Ubuntu image was ~120 MB.

> I am not sure how Docker handles keeping images updated

You just rebuild them. Thanks to Dockerfiles it's fully automated, so you can plug it into your CI.
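As a sketch of what that rebuild looks like: a Dockerfile that pulls in package updates on every build, plus one command. The base image, package names and paths below are examples, not anything from the original discussion:

```
# Dockerfile (sketch; image and package names are examples)
FROM ubuntu:14.04
RUN apt-get update && apt-get -y upgrade \
    && apt-get -y install python python-virtualenv
COPY . /app
CMD ["python", "/app/myapp.py"]
```

Running `docker build --no-cache -t myapp .` from CI then produces a freshly patched image each time, since `--no-cache` forces the apt-get layers to re-run.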

The whole point of Docker containers is that they are very easy and cheap to build, run, share and destroy. Personally, I use Docker as a replacement for VirtualBox - my containers are nearly full OSes, with sshd, cron, syslog etc. running in them. But building a new one for a new project - if the project doesn't need something exotic like a different distro - takes tens of seconds at most, and starting a new instance (the equivalent of booting a VirtualBox image) takes a few seconds.

The important thing to remember about Docker is that you can have as little or as much of an OS inside a container as you want. The only thing shared between the container and the host OS is the kernel - inside the container you can install any distro you want, or even get by without any explicit distro (although I don't know if anyone has tried that yet). You can have a full-fledged OS if you really need it - run the container, ssh into it and configure everything you need. Or you can run just a single process inside a container. Or you can stay somewhere in the middle. Whatever you do, Docker gracefully adapts, unlike a VM, which will always need heaps of RAM, have interop problems and so on.

Over in *BSD land they have jails - I was using them for years before Docker existed - and they are a really proven piece of technology. One interesting use case for them is in PC-BSD, a FreeBSD-derived desktop system. There's a nifty wizard-like GUI for spinning up jails to install packages from outside the official package manager (e.g. from sources or ports). The system takes care of tunneling, symlinking the userland and so on; you can build and install three different versions of Firefox in their respective jails and run them seamlessly alongside each other on your native desktop. It's just one of the possibilities this technology - very cheap kernel-level virtualization - can be used for.


Don't get me wrong, I've been using containers for many, many years. I run pure LXC on my desktop at home for simple containers. Containers are great in many situations. It's a shame LXC didn't get as much hype as Docker (I'm not sure if Docker is LXC-based anymore, but it was for quite some time).

However, you do require Docker (or LXC), and you need proper cgroup support in your kernel if you want true isolation. This is perfectly fine for myself and perhaps a few of my developers running Linux, but it starts to look less appealing in other environments.

Personally, I have to maintain multiple FreeBSD servers and even keep a local FreeBSD machine for related purposes. I've also had to develop and maintain Python applications for SmartOS (Solaris-based) machines.

Perhaps one day Docker will support Solaris Zones or FreeBSD jails, who knows. :)


>But why not do this, anyway?

It's one more thing that can break, and if it does, it will take time & effort to fix it.

There's also no benefit as far as I can tell. Virtualenv + apt-get solve the environment issue for me...


I'm happy that it works for you, but many people have complex infrastructures that aren't easily deployed with virtualenv+apt-get alone.

I wish I could deploy our python apps with virtualenv alone, but before docker we ended up having to create our own deployment system to wrap the virtualenv with all the associated dependencies so it could be pushed to various environments in a deterministic manner.


>I'm happy that it works for you, but many people have complex infrastructures that aren't easily deployed with virtualenv+apt-get alone.

For your development environment? Seriously?

>I wish I could deploy our python apps with virtualenv alone

I thought this was about dev environments? For deployment (and testing deployment) I use a virtual machine too - Vagrant/VirtualBox + Ansible. Yes, I wouldn't want to pollute my computer with that stuff either.
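That setup is small to express. A minimal Vagrantfile wiring a VirtualBox VM to an Ansible playbook might look like the sketch below; the box and playbook names are examples, not from this thread:

```ruby
# Vagrantfile (sketch; box and playbook names are examples)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"       # example base box
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "deploy.yml"       # hypothetical playbook
  end
end
```

`vagrant up` then boots the VM and runs the playbook against it, so the same Ansible code can be exercised locally before it touches real servers.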

I wouldn't want to develop inside a virtual machine, though.


A previous python project I worked on had multiple independently developed services, sometimes with conflicting requirements. Production often separated these, but keeping staging and local development environments happy was a constant struggle.

I also like to make sure the devs are working with an environment that translates directly to production (docker makes this much easier now). I really don't enjoy sorting out a blob of code that only works with your specific local configuration:

- OK, it requires OpenCV (or some other complicated dependency). No problem, that's already in production.
- Oh, it only builds against the 2.4.7.1 version you have on your machine.
- Hmm, 2.4.7.1 has a bug when used with common_lib version X on distro Y.
- etc.


If it breaks it's essentially a bug in the kernel.

If you don't see benefits of virtualization then I guess there's no convincing you, and maybe you just really don't need them. But there are people who would benefit from virtualization but don't do it because of the costs of maintaining and running a full VM, and Docker can be a good solution for them.


Python has virtual environments already. I don't see any need to host it in a guest VM.


Because it introduces a lot of complexity with literally zero need or benefit.

