
I've been tracking the beta for a while. I'm confused about this announcement. These issues still seem unresolved?

(1) docker can peg the CPU until it's restarted https://forums.docker.com/t/com-docker-xhyve-and-com-docker-...

(2) pinata was removed, so it can't be configured from CLI scripts https://forums.docker.com/t/pinata-missing-in-latest-mac-bet...

(3) it's not possible to establish an ip-level route from the host to a container, which many dev environments depend on https://forums.docker.com/t/ip-routing-to-container/8424/14

(4) filesystem can be slow https://forums.docker.com/t/file-access-in-mounted-volumes-e...

Are these fixed in stable? I'm personally stuck transitioning from docker-machine and (from the comments) it seems like other folks are as well...



Sadly, given the state of things, be it in the Docker ecosystem or elsewhere, "ready for production" means something much different than it did years ago.

For my definition of ready for production, Debian is a good example of the opposite end of the spectrum from Docker.


I think by 'production', they mean 'ready for general use on developer laptops'. No one in their right mind is deploying actual production software on Docker, on OS X/Windows.

I've been using it on my laptop daily for a month or two now, and it's been great. Certainly much better than the old Virtualbox setup.


>No one in their right mind is deploying actual production software on Docker, on OS X/Windows.

Since the whole point of Docker would be to deploy these in production and not just for development, I don't see how the term 'ready for production' can be used. Isn't this just a beta?


I doubt the problems mentioned happen on Linux or CoreOS, which is likely what a production environment will run on.


> Linux or CoreOS

Well, now I'm confused


Sorry, CoreOS is Linux as well, but in my mind it's enough of a hyper-specialised immutable auto-updatable container-specific version of Linux that it warrants a separate category when talking about Docker.


Docker for Windows is meant to isolate Windows software.

It's not a tool to test Linux containers on Windows.

The deployment target for Docker containers for Windows will be a Windows OS.


Sadly, no, they're using the name "Docker for Windows" to refer to the Docker-on-Linux-in-a-VM-on-Windows version.

Real native Windows containers and a Docker shim to manage them are coming: [1] but not released yet.

[1] https://msdn.microsoft.com/en-us/virtualization/windowsconta...


I don't think so. That's what Jeffrey Snover is working on in Server 2016 with Windows Nano Server.

Unless something has changed since the last time I checked, the WindowsServerCore Docker image was not generally available yet and requires Server 2016 (I think it was TP6 the last time I checked).

Docker, to my knowledge, is still exclusively for Linux flavors. (Though I'm happy to be corrected if someone knows more than me.)


Docker images still aren't generally available, but you can now run Windows Container Images based on the NanoServer docker image (and WindowsServerCore image if you replace nanoserver with windowsservercore in their image URL in the docs below) on Windows 10 (insiders build)[0].

[0]: https://msdn.microsoft.com/en-us/virtualization/windowsconta...


I went wide-eyed about three or four times while reading those instructions!

Super exciting! Thanks for the comment.


I am almost positive that is completely incorrect. Can you give any example of Docker being used to isolate Windows software?


You're right. I was wrong about this


You would use Kubernetes, DC/OS, swarm mode on AWS, etc. for that. Containers are portable... nobody is launching a Windows VM and doing a "docker run" for their production env.


The fact that I can have Bash up and running in any distro I feel like within minutes blows my friggin mind. Docker is the stuff of the future. We were considering moving our development environment to Docker for some fun, but we're still holding off until it is more stable and speedy.


I'm still using VirtualBox. Could you elaborate why Docker is better?


Leaving containers vs VMs aside, docker for Mac leverages a custom hypervisor rather than VirtualBox. My overall experience with it is that it is more performant (generally), plays better with the system clock and power management, and is otherwise less cumbersome than VirtualBox. They are just getting started, but getting rid of VirtualBox is the big winner for me.


It's based on the OS X sandbox and xhyve, which in turn is based on bhyve: https://blog.docker.com/2016/03/docker-for-mac-windows-beta/


Thanks!


When I used VirtualBox for Docker (using Docker machine/toolbox), I would run out of VM space, have to start and stop the VM, and it was just clunky all around.

Docker.app has a very nice tray menu, I don't know or care anything about the VM it's running on, and it is generally just better integrated with OS X. For instance, when I run a container, the port mapping will be on localhost rather than on some internal IP that I would always forget.
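Concretely, something like this just works (the image and port here are only an arbitrary example):

    docker run -d -p 8080:80 nginx
    curl http://localhost:8080    # no VM IP to look up or remember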


I don't think he was comparing Docker to VirtualBox.

In Docker 1.11 they used VirtualBox to host a Linux VM that ran the containers. With the 1.12 Docker for Mac/Windows apps they switched to xhyve on OS X and Microsoft's Hyper-V on Windows.


On the other hand I find my old setup with VMware much more reliable and performant. And I can continue to use the great tools to manage the VM instead of being limited to what docker provides. Some advanced network configuration is simply impossible in docker's VM.


I'm pretty sure they don't mean that, or they would have said that it was still in Beta.


This isn't a product that's "ready for production"; it's a product company declaring that it is.

This means what it's always meant: that the company believes the sum they'll make by convincing people it's "production ready" is greater than the sum they'll lose from people realizing it isn't.

Keep in mind the optimal state of affairs for Docker Inc. is one where everyone is using Docker and everyone requires an enterprise contract to have it work.


So misinformed. Docker for Mac and Docker for Windows are not targeting production. They are designed for local dev envs.


So why call it "production ready"?


I agree that it is confusing. Production ready in the sense that it is stable for the targeted use-case: local development environments. Not for "production". Damn, now I'm confused...


GA would probably be a more appropriate description.


Well, it was beta before.


Exactly. "Ready for production" and "industrial" are constantly abused. All these tools are awesome and we use them, but PROPERLY deploying and supporting them in production is far from painless (or easy).


I think many view "ready for production" as a sign that what they do have in place is stable enough, and that support options are available, so it ticks all the CTO/CEO boxes in business plans.

Which basically comes down to this: when your CTO/CEO or some manager comes in preaching Docker ("we should be doing that, why aren't we?"), you have one less argument to dismiss it now than before.

Yes, many aspects need improving, but what is there is deemed to have gained enough run-time in real environments to be called stable: we can support this in production for off-the-shelf usage without you needing lots of grey-bearded wizards to glue it all in place and keep it that way.


I'm not completely disagreeing with you, but Debian in recent years has taken massive steps backwards as far as production stability goes. Jessie, for example, did not ship with SELinux enabled, which was a key deliverable for Jessie to be classed as stable / ready for production; worse, it doesn't ship with the required SELinux policies, again another requirement before it was to be marked as stable. It's filled with out-of-date packages (you know they're old when they're behind RHEL/CentOS!) and they settled on probably the worst 3.x kernel they could have.


You've given one example: SELinux. Did wheezy ship with SELinux enabled? No. So how is that a step backwards? It would have been a step backwards if they had shipped with it enabled and it was half-assed. SELinux is notoriously hard to get right across the board. See how many Fedora solutions start with "turn off SELinux." Shipping jessie without SELinux enabled was the right thing to do, if the alternative was not shipping jessie, or shipping a borked jessie with borked SELinux support on by default. Those who know what they are doing can turn it on, with all that entails.

You gripe about kernel 3.16 LTS but provide no support for your statement. With a cursory search I can't find any. If it was such a big deal I have to assume I would. For my part I use Jessie on the desktop and server and have not encountered these mysterious kernel problems of which you complain. Again, you may have wished for some reason that they shipped with 3.18 or 4.x, but they shipped. They have 10 official ports and 20K+ packages to deal with, I'm sorry they didn't release with your pet kernel version. Again, those who know what they are doing can upgrade jessie's kernel themselves if they are wedded to the new features.

So, massive steps backwards?


Unfortunately, nobody has stepped up for SELinux maintenance. If this is important to you, you should help maintain those policies.

All your remaining points are vague at best.


Oh believe me, we did try to contribute to Debian. In recent years the community has aged poorly and become toxic and hostile, whereas the Red Hat / CentOS community has grown, is more helpful, and we have found them to be more accepting of people offering their time than ever.


Most people I have spoken to about this say exactly the opposite. In 2014, the project even ratified a Code of Conduct [0].

The only major contentious issue I can recall was the systemd-as-default-init discussion, but that was expected.

[0] https://www.debian.org/code_of_conduct


I genuinely don't know what toxicity and hostility you are speaking of. Any pointer?


It's amazing to me that a tool I use to prove that our stuff is ready for production is having such a hard time achieving the same thing.


Do you run your containers in production with "docker run" ??


Only for a tiny pet project.

The sales pitch I usually give people is that any ops person can read a Dockerfile, but most devs can't figure out or help with vagrant or chef scripts.

But it's a hell of a lot easier to get and keep repeatable builds and integration tests working if the devs and the build system are using docker images.


You are doing it wrong then. People run containers in production using orchestration platforms like ECS, Kubernetes, Mesos, etc. Docker for Mac/Windows is not designed to serve containers in production environments.

They help you build and run containers locally, but when it comes time to deploy you send the container image to those other platforms.

Using docker like that is like running a production rails app with "rails s"


And how do you solve all of the security problems and over-large layer issues that the Docker team has been punting on for the last 2 years?


Which security problems are you referring to? Our containers run web applications, we aren't giving users shell access and asking them to try and break out.

Over-large layers: don't run bloated images with all your build tools. Run lightweight base images like Alpine with only your deployment artifact. You also shouldn't be writing to the filesystem; containers are designed to be stateless.


Credentials captured in layers. Environment variable oversharing between peer containers (depending on the tool).

And the fact that nobody involved in Docker is old enough to remember that half of the exploits against CGI involved exposing environment variables, not modifying them.


With Kubernetes, putting credentials in env vars is an anti-pattern.

You create a secret, and then that secret can be mounted as a volume when the container runs; it never gets captured in a layer.
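A minimal sketch of that flow, assuming kubectl is configured (all names here are hypothetical):

    # store the credential in the cluster, outside any image
    kubectl create secret generic db-creds --from-literal=password=s3cr3t

    # mount it into the pod as a read-only file instead of an env var;
    # it appears at /etc/secrets/password and never enters an image layer
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      containers:
      - name: app
        image: myapp:latest
        volumeMounts:
        - name: creds
          mountPath: /etc/secrets
          readOnly: true
      volumes:
      - name: creds
        secret:
          secretName: db-creds
    EOF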

Also CGI exploits exposing env vars would work just as well on a normal non-container instance would they not?


Two separate issues.

Yes, you can capture runtime secrets in your layers, but it's pretty obvious to everyone when you're doing that and usually people clue in pretty quickly that this isn't going to work.

Build time secrets are a whole other kettle of fish and a big unsolved problem that the Docker team doesn't seem to want to own. If you have a proxy or a module repository (eg, Artifactory) with authentication you're basically screwed.

If you only had to deal with production issues there are a few obvious ways to fix this, like changing the order of your image builds to do more work prior to building your image (eg, in your project's build scripts), but then you have a situation where your build-compile-deploy-test cycle is terrible.
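In practice that workaround looks something like this (commands and names are hypothetical; the point is that the credentialed steps run on the host or CI box, where the Artifactory/proxy auth already lives, and the Dockerfile only copies the finished artifact):

    # credentialed work happens outside the image build
    mvn package                  # or npm install, pip wheel, ... using the host's credentials
    # the Dockerfile then just COPYs the built artifact; no secrets touch a layer
    docker build -t myapp:dev .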

Which would also be pretty easy to fix if Docker weren't so opinionated about symbolic links and volumes. So at the end of the day you have security-minded folks closing tickets to fix these problems one way, and you have a different set that won't provide security concessions in the name of repeatability (which might be understandable if one of their own hadn't so famously asserted the opposite http://nathanleclaire.com/blog/2014/09/29/the-dockerfile-is-... )

I like Docker, but I now understand why the CoreOS guys split off and started building their own tools, like rkt. It's too bad their stuff is such an ergonomics disaster. Feature bingo isn't why Docker is popular. It's because it's stupid simple to start using it.


Regarding secrets in builds, I think a long term goal would be to grow the number of ways of building Docker images (beyond just Docker build), and to make image builds more composable and more flexible.

One example is the work we've experimented with in OpenShift to implement Dockerfile build outside of the Docker daemon with https://github.com/openshift/imagebuilder. That uses a single container and Docker API invocations to execute an entire Dockerfile in a container, and also implements a secret-mount function. Eventually, we'd like to support runC execution directly, or other systems like rkt or chroot.

I think many solutions like this are percolating out there, but it has taken time for people to have a direct enough need to invest.


>> Debian is a good example of the opposite end of the spectrum from Docker.

It is not fair to compare Docker with Debian. Docker Inc (which backs Docker) is a for-profit corporation and is backed by investors. It is understandable why they need to push their products into production as soon as possible.


I use Docker a lot. I also use things like Docker volume plugins and have had to modify code due to API changes/breakages.

"Production ready" in the "container space" for me are Solaris Zones, FreeBSD Jails, and to an extent lxc (it's stable, but I've used it less). I like what Docker/Mesos/etc. bring to the table, but when working with the ecosystem, it takes work to stay on top of what is going on.

It is even harder to consult with a customer or company interested in containers and give the most accurate near/long term option. It becomes a discussion in understanding their application, what approach works now, and guidance for what they should consider down the road.

Networking and Storage are two areas with a lot of churn currently.


What does it matter how fair it is? It's not fair to compare a monkey to a fish in terms of being able to climb trees either, but that doesn't change the fact that one of the two is most likely already sitting on a branch. And ultimately, if you need something that can climb trees, a fish simply won't do, no matter how fairly you try to treat it.


I can't get it to work on OSX without the CPU staying at 100%. Still not fixed:

> There are several other threads on this topic already. Setups that docker build an image and rely on in-Docker storage work well; setups that rely heavily on bind-mounting host directories do not. A complex npm install in a bind-mounted directory breaks Docker entirely, according to at least one thread here.

https://forums.docker.com/t/just-switched-over-from-dlite-cp...


This is another issue that's been preventing my adoption of Docker for Mac: https://forums.docker.com/t/docker-pull-not-using-correct-dn.... The fact that DNS resolution over a VPN still doesn't work correctly makes me wonder how production-worthy this release is. It's a pretty common thing people want to do in my experience.


If you have the time, could you make a report on the issue tracker https://github.com/docker/for-mac/issues and include the contents of /etc/resolv.conf and "scutil --dns" when you connect and disconnect to your VPN? Ideally also include an example resolution of a name by the host with something like "dig @server internalname". I suspect the problem is caused by a DNS server in the "scutil" list being missing from /etc/resolv.conf. We're planning on watching the "scutil --dns" list for changes, but it's not implemented completely yet.


Okay, will do. Resolution of internal hostnames by their FQDN works fine if I set my VPN client (Tunnelblick) to rewrite /etc/resolv.conf. That said, the search domain is not carried into the VM, so name resolution by hostname does not work. Also, Tunnelblick has a name resolution mode that does split DNS (i.e. preserves DHCP-set DNS servers and only forwards DNS requests for the internal domain to the VPN DNS servers). This mode doesn't work at all. Would it be possible to allow forwarding of DNS requests to the host machine like with Virtualbox (VBoxManage modifyvm "VM name" --natdnshostresolver1 on)? I feel like that would simplify things greatly.


Sigh... I need to disconnect from the VPN to use it. I think you can reconnect after creation.


I always thought of production ready as meaning stable, above all things. Feature complete is not a part of it.

Basically, if you can live with the shortcomings a release has (bugs, performance, lack of features) you can use it in production as long as it's stable (and secure).


I wouldn't consider pegging a CPU until restart to be 'stable'.


This bug's been driving us mad because we can't reliably repro it on our machines at Docker, and it only happens to a small subset of users, but it is very annoying when it does trigger. It seems to be related to the OS X version involved, but there aren't enough bug reports to reliably home in on it.

The other possible factor is a long-running Docker.app: since as developers we are frequently killing and restarting the application, it could be something that only happens after a period of time. I've now got two laptops that I work on, and one of them has no Homebrew or developer tools installed outside of containers, and runs the stable version of Docker.app that's just been released. If this can trigger the bug, we will hunt it down and fix it :-) In the meantime, if anyone can trigger it and get a backtrace of the com.docker process, that would be most helpful. Bug reports can go on https://github.com/docker/for-mac/issues


But this release aside, I was more commenting on the whole concept of production ready.


True. So that gives us one issue then?


"Aside from that Mrs Lincoln, how was the play?"


The filesystem is still not as fast as I would like, but it's improved enormously over the last couple of months.

One thing I found was to be a little more cautious about what host volumes you mount into a container: for a Symfony project, mounting `src` instead of the whole folder sped up the project considerably, as Symfony's caching thrashes the file-system by default.
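Roughly, the change was from mounting the whole project to mounting only the code you edit (paths and image name are hypothetical), so cache/ and logs/ stay on the container's native filesystem:

    # before: the entire project tree is bind-mounted (cache thrashing included)
    #   docker run -v "$PWD":/var/www/project my-symfony-image
    # after: only the source directory is bind-mounted
    docker run -v "$PWD/src":/var/www/project/src my-symfony-image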


I have also yet to see a reasonable solution for connecting out of a container back to the host with Docker.app.

On Linux, and on OS X with docker-machine, this is easy with:

    docker run --add-host host:ip.for.docker.interface foo
But there is no equivalent to the docker0 interface or the vboxnet interface for Docker.app.

EDIT: I don't use this for any production environments, but it is very useful for debugging and testing.


What about getting the gateway address from inside the container:

    HOST_IP=$(/sbin/ip route | awk '/default/ { print $3 }')


That works for some use cases, but for others (Elasticsearch, Zookeeper, Kafka, etc.) the service inside the container needs to bind to an interface associated with an IP that's also addressable by the host. Even in host networking mode, eth0 inside a DFM-powered container will be bound to something like 192.168.x.y, but that 192.168.x.0 subnet is completely inaccessible from the host.


The best solution is to add a new stable, unconflicting IP address to the loopback interface on the Mac and connect to that.
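Something along these lines (the address is arbitrary; just pick one that doesn't collide with any network you use):

    # on the Mac host
    sudo ifconfig lo0 alias 10.200.10.1/24
    # containers can then reach the host via that alias
    docker run --add-host dockerhost:10.200.10.1 foo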


Still not as friendly, as it requires system changes on the host, but not totally unreasonable.

I'll give it a try if I evaluate Docker.app again.


Why not just bind a port with -p?


I was an early user of the Mac beta and the 100% CPU would happen 2-3 times daily. Now it maybe happens once every 2 weeks.

Not sure about the others, but the CPU isn't much of an issue anymore. Maybe it's just me being used to how bad it was.


I've been heavily using it since what must have been early closed beta, and cannot recall ever having this issue. Might be something that isn't quite so widespread.


It's about weekly for me on the Mac.


It only happens with a few users, and not at all to the majority. It seems to happen more on older OSX versions, but beyond that there has not been anything identifiable in common about the systems it happens on unfortunately.


Not to mention the lack of host:container socket sharing and the fact that the Moby VM time drifts due to system sleep. I love Docker for Mac, I use it every day, and it's definitely still beta quality.


How much does your time drift? We changed the mechanism so that it should sync from the OS X NTP server now, which seems to be giving good results. If you are having problems, can you create an issue so we can look into it?

Host/container socket sharing will come, but it is complex, as sockets only exist within a single operating system, so we have to bridge them across two. We are using this for the Docker socket, and debugging the issues across Mac and Windows, so it is on the roadmap.


Actually GA may have fixed this. I was able to reproduce it, but may have checked too quickly. I opened https://github.com/docker/for-mac/issues/17 against it, and may end up closing it.


Fixing the file system is going to be a very hard/impossible task.


They should just go with NFS mounts; it is at least 10 times faster than what they have now.


For (3), that is an issue for remote debugging with things like Xdebug for PHP. I have been using this command:

    sudo ifconfig lo0 alias 10.254.254.254

And setting the remote host to 10.254.254.254 instead of localhost inside the container to work around that issue. It's been working pretty well.
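On the container side that ends up looking something like this (the image name is hypothetical; XDEBUG_CONFIG is Xdebug's environment override for settings such as remote_host):

    docker run -e XDEBUG_CONFIG="remote_host=10.254.254.254" my-php-app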

I've been using the Beta version of Docker for Mac for many months and haven't had many issues with it at all. The biggest issue I've seen was the QCow file not releasing space and growing to 60+GB, but deleting it and restarting Docker did the trick (although I had to rebuild or repull any containers).


I had a similar experience trying to switch to docker-machine as it sounds like you've had with the new apps, and ended up giving up.

It's super simple through Vagrant though, just vagrant up and set DOCKER_HOST to the static IP. Plus there are vagrant plugins that let you sync a directory to the vm in a way that gives you inotify events so live build/update tools can run in your containers (which btw is huge, I can't believe the official apps haven't even attempted to address that, as far as I've seen).
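For reference, the workflow ends up roughly like this (the IP is whatever static address your Vagrantfile assigns, and 2375 assumes the daemon in the VM listens on the conventional unencrypted port):

    vagrant up
    export DOCKER_HOST=tcp://192.168.50.4:2375   # hypothetical static IP from the Vagrantfile
    docker ps                                    # now talks to the daemon inside the VM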


The company claimed back in March [0] that Docker for Mac addresses the filesystem events. I observed that it works.

While Docker for Mac has improved somewhat over the beta, unfortunately it's still quite rough. For example, it was only last week that they pushed a fix for the DNS timeout issue [1] (I think maybe it was fixed? I can't check because Docker for Mac is not open source).

[0] https://blog.docker.com/2016/03/docker-for-mac-windows-beta/

[1] https://forums.docker.com/t/intermittent-dns-resolving-issue...


The DNS resolving code in Docker for Mac is in the VPNkit project which is open-source: https://github.com/docker/vpnkit. A DNS timeout is a fairly general symptom and it's hard to diagnose fully without a packet capture, but there's one issue that I'm aware of: if the primary server is down then the UDP DNS queries and responses will use the host's second server. However if a response is large and requires TCP then unfortunately we will still forward it to the primary server, which obviously won't work :( I've filed this issue about it: https://github.com/docker/vpnkit/issues/96. We hope to improve DNS, VPN and general proxy support for 1.13 -- please do file issues for any other bugs you find!


> Plus there are vagrant plugins that let you sync a directory to the vm in a way that gives you inotify events so live build/update tools can run in your containers (which btw is huge, I can't believe the official apps haven't even attempted to address that, as far as I've seen).

If you don't mind, what are these plugins? This is one thing that's sorely missed when I do development with Vagrant. I did a small amount of searching and trial and error, but couldn't find a solution that worked for me.


There used to be a separate vagrant plugin for rsync but it's now built-in. There is also built-in support for NFS and virtualbox/vmware synced folders. These all work reasonably well until you start having fairly large numbers of files/directories.
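One caveat with the built-in rsync type: it only syncs on vagrant up/reload unless you keep the watcher running, e.g.:

    vagrant rsync-auto    # watches the host folders and re-syncs into the VM on change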

Also if you use a native Linux host with LXC or Docker there is no overhead for sharing directories with the container, it's just a bind mount.


I don't believe NFS supports inotify events? At least, that's what I'm using, and I'm forced to use polling for any file change detection. And rsync is one-way IIRC. But yes, LXC on Linux works great when it's feasible; I've just been looking for something that supports file change detection on other platforms.


The official apps do do that. It's one reason their shared fs performance is abysmal so far.


The last one, in my experience, is basically a deal breaker. Simple commands (e.g. rake routes, npm install, etc.) take 100x longer.

I don't have a firm opinion on what is or isn't 'production ready', but if there are major bugs, then there should be some way of disseminating that information instead of everyone rediscovering the same issues.


Number (3) is especially painful. The fact that their documentation makes it very explicit that the host is bridged by default and containers are "pingable" aggravates it a little bit further, as that seems like a very basic prerequisite for the tool to be usable.


For (4) you can use http://docker-sync.io - it's compatible with Docker for Mac and others, supports rsync/unison/unison+unox, and will have NFS support in the near future.

With unison+unox you get fully transparent sync with native performance (no performance loss at all). This is far better than osxfs or NFS.


I wonder, is Microsoft helping to solve those issues? If they are, it shouldn't take too long.


I wish they would just adopt what dinghy did: xhyve with NFS mounts and a DNS server.



