For organizations with more than a handful of engineers, shouldn't deployment happen automatically through something like a CI server? Do we really want every engineer deploying code directly themselves? How do the currently available `vagrant push` deploy strategies handle concurrent deploys happening from different people?
I realize that a lot of this is left open to allow custom plugins/strategies to do more complex things (like automated deployment), but I question the practice of encouraging workflows where developers deploy themselves rather than just pushing code revisions.
Or by "deploy", are we referring to deployment to a local development environment? The distinction is an important one in my opinion.
Agreed. While cool at first glance, this feature is virtually useless to me, because if I want to do things properly, I need a sound deployment system, and pushing things upstream from your local machine is by no means sound. And in fact it doesn't matter how many developers on your project can make releases; they should never deploy that way.
It is an obvious must that you can answer the question "what is running in production right now?" at any time. There's one way to do that: deployment starts from your git/svn/whatever-VCS repository, not from untracked code on some developer's machine.
That is very, very preferable even if there's only one almighty developer (you) on the project, and a total must if you have a couple more developers with different access rights. For example, it's quite possible you don't want every developer to know the production DB password (or a payment-system credential, or something even scarier), which you'll store in configs outside of VCS. Or maybe there are simply some tweaks on the production system which can't be reproduced on development/staging. So you'll have to run some extra custom actions on every deployment and, again, in a centralized fashion, not just from some developer's Vagrant instance.
Using Vagrant this way is essentially the same as encouraging everyone to SSH into prod instances themselves.
In this case, "vagrant push" should interact directly with your CI. It is named "push" for this reason and not "deploy", although the blog post title uses the word "deploy" since that is the end goal (the push should lead to the deploy).
Example: "push" might not send any code at all, so much as signal to the CI that that commit is meant to be deployed (assuming it passes). Or, "push" might send code directly to a CI. It is really up to you.
"push" doesn't require that it go directly to production, and doesn't encourage that (or actively discourage it). It is just meant as a workflow mechanism to unify how deployments are started.
Everything you mentioned seems to be accomplished with a simple "git push". When git (or similar VCS) is involved, the push of a new revision of the codebase should (in my opinion) be what triggers a CI build/deploy. As mentioned in krick's comment, the codebase (specifically the branch that is tracked for releases) is the authoritative place to know what code is running on production.
Otherwise, we have a disconnect between what's in the codebase and what's running on production, and we must ask "did the latest code get deployed yet?" or "who is responsible for `vagrant push`ing this revision?" - all of which can be avoided by letting the VCS workflow determine it.
I love vagrant for what it does best: simplify development environments. I fear that the "push" functionality is treading into unrelated territory. The problem is not what technically happens with a "vagrant push", but rather the idea of "starting deployments" (as you said) from individual developers in their local development context. In my opinion, developers don't initiate deployments; deployments happen passively in response to code changes - assuming it passes any relevant tests/conditions.
I'm having a hard time understanding what this does and what it buys me. If I am understanding correctly, they have added a new subcommand, but you pretty much have to make it do anything useful yourself (if you fall outside of their limited set of plugins). Is this a correct assessment?
This is correct. The benefit is technology abstraction for the developer. To use our own tools as an example: our open source docs are deployed to Heroku, our binaries go to Bintray, our services go to Atlas.
Now every developer at HashiCorp knows that you `git clone` any project in the company, and `vagrant push` to deploy. It doesn't matter how it works under the covers.
Example: We just changed Vagrant itself to use `vagrant push` to deploy Vagrant: `vagrant push www`, `vagrant push docs`, `vagrant push release`. These all do different things.
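Not our actual Vagrantfile, but roughly how multiple named pushes can be declared; the strategy names, app name, and scripts below are illustrative only:

```ruby
# Hypothetical Vagrantfile defining several named pushes.
Vagrant.configure("2") do |config|
  config.push.define "www", strategy: "heroku" do |push|
    push.app = "example-www"                 # placeholder Heroku app
  end

  config.push.define "docs", strategy: "local-exec" do |push|
    push.script = "scripts/push-docs.sh"     # placeholder script
  end

  config.push.define "release", strategy: "local-exec" do |push|
    push.script = "scripts/push-release.sh"  # placeholder script
  end
end
```

Each is invoked by name: `vagrant push www`, `vagrant push docs`, `vagrant push release`.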
> Now every developer at HashiCorp knows that you `git clone` any project in the company, and `vagrant push` to deploy. It doesn't matter how it works under the covers.
We already have a standard for that. It's called "make" to build, and "make deploy" to deploy. Put a Makefile next to your Vagrantfile like you should have done in the first place. :)
Sure :), but that's yet-another-thing you need to manage. You could put all that logic in a Vagrantfile. It's about reducing the number of touch points.
I definitely see your side of the argument, and we certainly have Makefiles in our projects too. Vagrant Push provides a way for an organization or team to choose what is best for them and lower the barrier to entry for developers on a project.
"OOH man, we have all these tools that no one understands anymore! I know, let's make a tool to abstract this away!"
git hooks are another option, as are shell aliases, SCM integration, CI/CD systems, etc. This seems like yet-another-layer-of-abstraction to cover for bad / misunderstood / underutilized abstractions under it.
What happened to "one thing and well"? Vagrant is exceptional at what it does, but I can't see why anybody should use it as their deployment tool among the myriad of deployment tools.
I can understand the fear/confusion here. "vagrant push" is glue to real deployment tools. For example, Vagrant would never itself contain the full logic of something like Capistrano. But "vagrant push" will _execute_ Capistrano.
The idea is that Vagrant is a single workflow for development environments. "vagrant up" to work on any application, "vagrant share" to collaborate on any application, "vagrant push" to deploy (or start to deploy) any application.
EDIT: @ossreality: Your comments are showing up as dead. But I want to respond to you. As I mentioned here, `vagrant push` is glue to real deployment tools. How are you deploying your complex application now? You would hook up `vagrant push` to that. "vagrant push" itself is not a deployment framework.
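A minimal sketch of that glue, assuming Capistrano is already configured in the project and using the `local-exec` strategy:

```ruby
# Hypothetical sketch: "vagrant push" just hands off to Capistrano.
# Capistrano owns the deployment logic; this is only the trigger.
Vagrant.configure("2") do |config|
  config.push.define "local-exec" do |push|
    push.inline = <<-SCRIPT
      bundle exec cap production deploy
    SCRIPT
  end
end
```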
Thanks Mitchell! I was worried Vagrant was starting to head in a different direction, especially after reading the "This is a historic day for Vagrant" paragraph. Thanks for clearing it up.
1) It was never that great an idea in the first place. It's OK-ish for command-line, text-based interfaces, but it's tedious and limiting for more complicated stuff, and it results in tools that lack a coherent story for working together, especially if each of the tools is made by a different entity, with different APIs, flags, etc.
2) It's still here for Vagrant. Vagrant doesn't try to do the deployment itself. It just provides a command that can call your preferred deployment tool, the one that does "deployment well".
I've watched the videos and read the docs, but I'm still a little confused about this feature. How does it know what files to push? Does it diff the base vagrant image? If my application has package dependencies, does it push those too? What about if I need to restart a service as part of the push?
Good question. It does a few heuristics depending on the strategy in use. But it has some VCS detection in it so if you're using Git it'll only upload files in the Git index (staged/committed/not ignored). There are also glob-based include/exclude filters you can specify. If all else fails, then yes, it uploads everything.
> it has some VCS detection in it so if you're using Git it'll only upload files in the Git index (staged/committed/not ignored)
So... if I have some docs in markdown, and a server-config to serve static assets -- it'll upload the config and the markdown, but not the generated html/css/js? The idea is that the server knows what to do with the git repo?
Or am I just being difficult now? (I honestly can't really tell -- I also have a hard time seeing how to make use of this. Not that it matters - if it works for you, then great ;)
To build on what mitchellh said, you can also specify paths outside of the current working directory to include in the push with the `include` flag. It can be specified multiple times in the DSL as well.
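Roughly, the filters look like this; the attribute names and paths are illustrative (check the push strategy docs for the exact DSL), and I'm assuming repeated `include` assignments accumulate as described above:

```ruby
# Hypothetical sketch of glob-based filtering on a push definition.
Vagrant.configure("2") do |config|
  config.push.define "ftp" do |push|
    push.host        = "ftp.example.com"      # placeholder
    push.username    = "deploy"               # placeholder
    push.password    = ENV["FTP_PASSWORD"]    # placeholder
    push.destination = "/var/www/site"        # placeholder

    # Glob filters; without any, the Git-index heuristic (or
    # "upload everything") described above applies.
    push.exclude = "*.md"
    push.include = "build/**/*"
    push.include = "../shared/assets/**/*"    # path outside the cwd
  end
end
```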
We are currently using a solution called Jenkins, which serves our purposes very well because of its maturity and the flexibility it provides. Other than the simplicity of the deployment command once configured, I fail to see the benefit of this tool, especially at such an early stage.
FTP and SFTP are unrelated protocols, the "secure" version of FTP is FTPS. Sorry to be pedantic, but it is a little confusing. Does 'secure = true' enable FTPS or SFTP (which is actually a protocol over SSH)? I'm hoping SFTP, but then how do you enable FTPS?
Good question, it is SFTP. The reason we made them look similar is that we did some user testing and end users are used to FTP clients having just a checkbox for "Secure" (which is typically SFTP). We followed that model.
Under the covers, it is a completely different protocol.
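So a configuration along these lines (placeholder values) would transfer over SFTP, not FTPS:

```ruby
# Hypothetical sketch: the "secure" flag selects SFTP on the ftp strategy.
Vagrant.configure("2") do |config|
  config.push.define "ftp" do |push|
    push.host     = "sftp.example.com"    # placeholder
    push.username = "deploy"              # placeholder
    push.password = ENV["SFTP_PASSWORD"]  # placeholder
    push.secure   = true                  # SFTP (over SSH), not FTPS
  end
end
```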
Interesting question. Vagrant push exists outside of the context of a provisioner, so I don't see what you would be provisioning in the push section. Chef will provision the VM, but the push strategy is pushing the contents of the current working directory to some external destination (like FTP or Heroku). Does that help clarify?
Makes sense. I am put off by the complexity surrounding the initiation of host provisioning. Writing recipes is a breeze with Berkshelf and Vagrant, but diving into "real" provisioning tools is a pain for me.
There is a lot of churn in provisioning and orchestration tools like Chef, Docker, and others. The tension is between the inherent utility of these tools (they make managing/deploying services easier!) and their desire to lock people into a platform. Vagrant captures a middle ground in the midst of that churn.
When you run 'vagrant up' for the first time, you end up with a provisioned host. Why not say that 'vagrant up' on a local VM is the same as 'vagrant push staging' to some remote host? In either case we end up with something modeled by the Vagrantfile. I know other people would say no, but please consider going into competition in the dev-ops space as a wrapper around the churn, a nice abstraction on top of the ambitions of others. As an itinerant developer, I just want a Vagrantfile :)