I was initially a huge fan, but now I hate it. It re-invents so many unnecessary wheels that it is downright obnoxious. There are no loops, no if statements, no variables: things that have been in programming languages since day one. It re-invents modules, badly. It has weird hacks like null resources and external data providers to get around shortcomings in its "declarative" model. And a few other things I'm forgetting. Oh, and now they're adding "environments". That's just dandy, because they got everything else so right.
Every place I've seen Terraform used invariably runs into these shortcomings and the workarounds range from using Erb and Jinja templates to generate Terraform templates to just ditching it entirely and using home-grown solutions. Terraform should have been a library in an actual programming language instead of a gimped external DSL that re-invents the wheel.
I would highly recommend NixOps. It allows you to describe your infrastructure in the Nix expression language, which is a proper dict-based programming language. It supports many popular clouds. I'm not sure if it is NixOS-only, but NixOS itself is awesome to use with it. You just put your provisioning configuration inside your "network" configuration and boom, all your servers and hard drives are created and fully provisioned.
It is a fully-fledged lazy, purely functional programming language, which allows you to abstract a lot of boilerplate into functions and whatnot. https://nixcloud.io/tour/?id=1
I am actually okay with not having loops or advanced constructs. Those things made using Ansible for more complex provisioning scenarios (a) unreadable and (b) horribly kludgey.
The things that Terraform solves for, namely keeping environment state and infrastructure relationships, are really hard to do with other CM tools without diving deep, i.e. writing Python or Ruby.
It's not just the lack of "advanced" constructs, as you say. It is also the total disregard for all the human centuries of effort that have gone into making programming languages and environments convenient vehicles for expressing computation. Things like stack traces, syntax-highlighting editors, auto-complete, interactive debuggers, and a bunch of other stuff you get with an actual programming language that you don't get with an external DSL. There is a reason those things are not part of the package: making them is extremely hard, whereas slapping together a parser is not, but it feels like actual work.
You might consider those things advanced. I consider it the bare minimum to get proper work done.
Here's the thing: HCL isn't a programming language, nor will it ever be, I hope. This problem isn't unique to Terraform; Chef and Puppet are in the same boat.
It's markup. It is meant to be easily readable by devs and non-devs alike while being reasonably capable.
The beauty of these tools is that they are written in, and can be extended with, popular languages. If I wanted to make Terraform deploy Dell servers, all I have to do is write a plug-in in Go. What's more, I don't have to teach a TF user anything new, since its usage pattern will be the same as any other TF plug-in.
Nope. Chef is an embedded DSL. It does not have the same shortcomings. In fact it is the only configuration management tool that approaches the problem from the right angle. Instead of re-inventing the wheel it gives me a library of composable components that I can glue together however I wish.
You must not have seen many TF deployments. I've seen several, and each one is a unique snowflake. Some use Jinja, some use Makefiles, some use Erb, some just use heroic copy-pasting. There are no conventions or standards whatsoever. I don't know what you mean by readable either. When was the last time you navigated 20 Terraform modules while mentally filling in all the variables and thought to yourself, "Yeah, this is totally readable"?
> It's markup. It is meant to be easily readable by devs and non-devs alike while being reasonably capable.
Who are the non-devs reading infrastructure code? I think the target market for Terraform (probably) understands the basics of programming-language concepts like loops, conditions, and variables.
I see your point about plugins/expansion. I think infrastructure hasn't gotten past the "write a plain file as an API" stage, so writing a library to generate those files feels like hacking on top of an incomplete platform.
With Puppet 4, Puppet is light-years ahead of the mess that is HCL. It's not a general-purpose language, but it's strongly typed and you can work with data structures in a functional fashion. I'll admit that some more complicated transformations are less natural than I'd like, but at least they're possible.
If I could use Puppet to manage OpenStack instances at work, I'd use it over Terraform in a heartbeat, because the foundation is much more solid, but unfortunately such a provider does not exist.
I use terraform and want to like it, but every bug and weird behaviour and nonsensical limitation is making it harder and harder. Furthermore, these aren't really fixable implementation issues. The terraform language needs a complete redesign.
You don't need explicit loops and branching to do the kind of declarative programming Terraform needs, but being able to use basic data structures besides strings would be very useful.
Terraform has lists and maps, which takes care of a number of use cases at least. Now, if you want types as opposed to data structures then you start to get into structures that are tough to serialize into JSON that is helpful for interoperability with even less featureful languages / DSLs.
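For example, fanning a resource out over a list and picking values out of a map looks roughly like this in 0.9 (a sketch from memory; the variable names and AMI are made up):

    variable "zones" {
      type    = "list"
      default = ["us-east-1a", "us-east-1b"]
    }

    variable "instance_types" {
      type = "map"
      default = {
        prod = "m4.large"
        dev  = "t2.micro"
      }
    }

    variable "env" {
      default = "dev"
    }

    # One instance per zone; the type comes from the map, keyed by environment.
    resource "aws_instance" "web" {
      count             = "${length(var.zones)}"
      availability_zone = "${element(var.zones, count.index)}"
      instance_type     = "${lookup(var.instance_types, var.env)}"
      ami               = "ami-123456"  # placeholder
    }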
Instead of if statements, pattern-matching constructs would probably suffice to address the concerns behind avoiding loops and branching (keeping the generated DAG and resulting state machine bounded and predictable, as I understand it).
In any case, I've found plenty of power by using custom data providers that use whatever logic I want and feed that to Terraform providers and modules to instantiate. I think the level of maturity with the Terraform community is still in the early stages and emphasis will shift to data provider based constructs for orchestration.
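As a concrete illustration, the external data source (added around 0.8, if I remember right) runs any program that reads a JSON object on stdin and prints a flat JSON object of strings on stdout; the script here is hypothetical:

    data "external" "shards" {
      program = ["python", "${path.module}/compute_shards.py"]

      query = {
        shard_count = "3"
      }
    }

    # Whatever the script printed comes back as a map of strings.
    output "shard_layout" {
      value = "${data.external.shards.result}"
    }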
What would you consider the bare minimum to get the job done, then? Chef? Ansible? Salt? Puppet? Some homegrown monstrosity? I have used the first two, and both of them have major problems with state, and they can get super complicated if you are not careful.
I think part of your issue with Terraform is that you want it to do everything, when you should just use it to get the foundation up and then use a provisioner with more power to do the finer-detail work to fit your needs. Also, let's remember it's not even 1.0 software at this point. We started playing around with it around 0.6 and things have come a long way since then. Maybe it's just not the tool for your needs.
You're setting up a false dichotomy. Expressing a dependency graph of cloud resources does not require a special DSL.
In fact if Terraform was a library it would not do everything nor do I want it to do everything. At the end of the day Terraform traverses a dependency graph and generates a sequence of commands to run. The traversal is basically a topological sort and the command generation is a bunch of API calls. None of those things require a specialized DSL.
And you still didn't answer my question. Based on your other comments, Chef is your tool of choice for setting up servers because you can hack and glue things together how you want. That's great for you, I get it. I assume you are a programmer by trade who wants to use the tools you feel comfortable with to get the job done. That's OK. I am an ops guy who is looking to build reproducible infrastructure without having to figure out some awful Ruby/Chef bastardization. Terraform isn't a programming language. I understand you wish it were, but it simply isn't one. It's not the tool for someone who wants to program their infrastructure in Ruby.
I understand your point, but as a templating language the following should be expressible:
Given a list:

    [ A, B, ..., N ]

create a list of objects:

    [
      { foo: const, bar: A },
      { foo: const, bar: B },
      { foo: const, bar: C },
      ...,
      { foo: const, bar: N }
    ]
You may say that such a complicated programming task is not a job for Terraform. Then why are some DSL functions in place, and others missing?
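To be fair, you can approximate that shape with count, but it is exactly the kind of contortion I'm complaining about (a sketch; null_resource triggers are abused here just to make the structure visible):

    variable "items" {
      type    = "list"
      default = ["A", "B", "C"]
    }

    # One instance per element: "foo" is constant, "bar" varies.
    resource "null_resource" "per_item" {
      count = "${length(var.items)}"

      triggers = {
        foo = "const"
        bar = "${element(var.items, count.index)}"
      }
    }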
I have a feeling that the creators of many cloud tools don't use the cloud, or use it only to have pet infrastructures instead of pet servers. To set up something automated, complicated, scalable, and reproducible, you have to resort to doing it yourself, while these tools are supposed to do it for you.
I feel like ops people are writing these tools for ops people, but instead of good old bash, now in Go. What is the problem with this?
* They reinvent well-known patterns, but call them by different names.
* They disregard the common knowledge of software development tradecraft.
* They don't get these things right the first (or second) time; they just redo their previous workflows, and we are still where we were 10 years ago, just with different tools.
I personally prefer Ansible, but it has its weak points as well, and its development seems to have slowed since RedHat acquired it. The main advantages of Ansible are extensibility and reusability. I can write custom modules and have a descriptive DSL for stuff not thought of by the upstream devs. I can also simply reuse parts, which is a very weak point in Terraform.
Still, we use Terraform, for it has its own merits, but IMHO if you want to do something, either do it or don't, but don't ship a half-assed "solution" that promises much and fails to deliver. This is my feeling with Terraform, where I could not even create a simple mapping of values if they are not strings! (I know, "ops" people love strings. Only those pesky "devs" love structured data.)
HCL is atrocious. Terraform's main issue is that it tries very hard to be declarative, then it has to invent a template language as a superset of the declarative JSON it uses to make up for it. And that superset is invariably shit.
It's like you said: no loops, no proper variables, etc.
Terraform itself though is fantastic and powerful. It's lacking some pretty important stuff still (for example, not having to refresh the entire infrastructure state when you change one single variable for one single tiny little service), but that just makes it an alpha. It's still better than the alternatives by a mile and then some.
But, yeah. Fuck HCL. The good news though is that I think this is "easily" fixable by someone who would really work on it: because the underlying format is 100% JSON, it's easy enough to generate, so you could implement a saner environment for specifying the state and feed that to Terraform. That's enough for me to bet my infrastructure on it: knowing that today's issues are fixable, rather than the effort being doomed from the start.
We've really enjoyed templating terraform. It solves a lot of the problems you're mentioning, and we can write the complex logic we need without interacting _directly_ with cloud APIs...
It also seems pretty much the same as Ansible (which suffers from the same problems you mentioned).
I pretty much agree with you. They should have provided two entry points, IMHO: they can make their DSL and all that, and (eventually) make it Turing-complete, and at the same time they should just provide composable objects in <some> language as well.
I've never quite figured out who the target audience is for these kinds of DSLs: is it people who can't program, or people who can't program well (at least not without adult supervision)?
I don't know either. Puppet, Ansible, Salt, and friends are in the same boat. The best part is that the special syntax doesn't buy them anything. They just end up being gimped languages for expressing some kind of dependency graph.
They all suck in equal measure, but each in a different fashion. You can only try to pick the one tool that sucks the least for your particular use-case and hope that two years down the line you made the right choice.
This isn't much of an argument. That's like saying all programming languages suck and it's a crapshoot. It's not. There are design principles that work and scale to large scale systems and then there are things that don't.
As far as I'm concerned Chef is the right way to do configuration management. Everyone else with their YAML DSLs is doing it wrong. Expressing dependencies between resources does not require specialized external DSLs and sometimes you need to do imperative things to get to the end which is either impossible or just horribly convoluted with any tool that tries to hide complexity behind a markup/serialization language.
Apparently these people think it makes things "simpler". Well, I'm sorry, but having to look up how to format things in YAML, with no autoformat and no IntelliSense, just isn't good and doesn't make things simple.
Even XML with a backing XSD would be better than that, but that's apparently not cool enough anymore.
I forgot about the lack of auto-complete and the horrible error messages. At least with a library, when I have a syntax error or something blows up, I have a debugger and the actual language at my disposal to figure stuff out. No such things are remotely available with all these external DSLs. If there is even any semblance of error reporting.
Several human centuries of effort have been spent on improving these things (syntax highlighting, auto-complete, stack traces, error messages, interactive debuggers, etc.) and then all these DSL authors decide to just chuck it all out. Boggles the mind.
More than a few times, Terraform has wrapped an error message from AWS in the hope of being helpful, while hiding all the details of the actual error and offering no hints on how to rectify things in the DSL.
I had the same experience with Ansible. It also wraps the responses in some ultra-verbose JSON. I really just want to see the remote commands being executed; if you return them wrapped in some kind of format, then you need to provide a GUI for it. Otherwise it's all worse than before.
Do you know of a recent tool based on XML + XSD (or RNG) in this space (I believe IBM had one like 10 years ago, but it got sacked because XML)? I've never been a fan of using markup languages for things other than text/semi-structured data, but YAML/TOML/JSON is such a huge step backwards that I'm having trouble recommending it.
Some basic conditional logic was added in 0.8 [0]. The reliance on 'interesting' declarative tricks, e.g. counts for conditionals and loops, is a little annoying I agree. However, it looks like they're listening to the community and will build on these features in the future. There's a lot to work on :)
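The 0.8-era idiom combines the new ternary with count to fake an if statement, something like this (a sketch; the names and AMI are invented):

    variable "create_bastion" {
      default = "true"
    }

    # count = 0 means the resource simply isn't created.
    resource "aws_instance" "bastion" {
      count         = "${var.create_bastion == "true" ? 1 : 0}"
      ami           = "ami-123456"  # placeholder
      instance_type = "t2.micro"
    }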
Take a look at jsonnet: http://jsonnet.org. It extends JSON with loops, conditionals, variables and more in an elegant way. We don't write JSON anymore and treat it like an "assembly language." We use jsonnet to generate CloudFormation templates, terraform templates and Kubernetes resource definitions.
Hear, hear! Devops right now is in the Middle Ages, kind of like where web frontend programming was a few years ago. Remember all those weird cobbled-together logic-less DSLs that were supposed to save us from ourselves?
I am expecting the Renaissance you mentioned in devops shortly, just like how React transformed the scene.
In short, DSLs should be inside programming languages and we shouldn't be afraid to write code.
But this isn't a programming language; it's a DSL that lets you get the job done. I get that you want to use flow-control statements to make your life easier, but I feel that's part of the reason why Terraform was made and has gained a lot of popularity. For me, it doesn't make things overly complicated. Except for how they handle variables with modules. That stuff sucks.
I don't really agree that Terraform re-invents wheels. In fact it feels like a lot of the things you cite as issues are actually due to Terraform being pretty innovative. There's not much like it for "version-controlled" infrastructure, so the team and community are learning as they go, and can't be expected to come up with perfect solutions right away. I agree the looping hacks are very ugly, really dislike the whole split and join everywhere thing.
As for DSL vs. programming languages, the halting problem provides a pretty compelling argument in favor of limiting your system if you want to have "verifiable convergence". Personally I'm a huge fan of HCL too. It fixes a lot of what's bad about YAML and TOML and has a built in pretty printer.
Ahh, just like Ansible with its YAML-ish templating language. If you want to add variables and scoping, loops, a module resolution system, etc., then I'd rather use a real programming language.
Huge Terraform fan. The first time it creates VMs, disks, IPs, and firewall rules, it is like magic. Infrastructure definition should be code; it just makes sense.
If you're looking to dive in, I wrote a short introductory blog post on getting started with Google Compute Engine.
I love terraform! It is one of the few tools I've found that was easy to understand and easy to integrate with legacy stuff, and it very quickly saved me a huge amount of time.
It only took me about 30 minutes to setup a terraform file that could bring up and teardown an entire web stack with a single command. I was so shocked by how straightforward it was that I brought the stack up and down a few times just to make sure it was actually repeatable.
We had a tough time with Terraform on AWS. The syntax is undeniably better than Cloudformation and having the concept of modularity built in is a big win.
However, when things went wrong we ended up with cryptic error messages, and often when `terraform apply` failed, `terraform destroy` would fail too, leaving us to jump into the AWS console and start clearing up the resources. Particularly painful because AWS has no awareness that your stack came from a Terraform config, so you have to navigate through the subresource menus destroying things one by one.
We ended up switching over to Cloudformation. The extra verbosity does suck, but the tooling for running/updating stacks feels far safer. We can review the changeset when we update the stack, view a realtime event log as resources are created, deleted or updated, and best of all we can always tear the stack down from a single point.
Amazing platform. We've recently taken about 6 months to define our entire infrastructure in Terraform. We've been using it since version 0.6, and with version 0.9 the biggest feature for us is definitely the remote backend and the remote locking mechanism (which we use Terragrunt for atm). The remote backends mean that the state will no longer be stored on the local machine, which is obviously a big plus for security, as the state actually contains all of your infrastructure information and even secrets. Anyway, huge project and huge potential; so much so, I've made it the headline of my CV! Thank you HashiCorp, keep it coming!
It's allowed us to do the following for our developers:
Full integration with the CI/CD process.
Open a branch, commit it to GitHub, add a special label and you will get a completely new setup, just to test your branch. New application name in the service discovery tool (Consul/Eureka), new RDS instances, new network, etc, etc. Once happy, the developers can merge into dev and the branched version of the infrastructure is destroyed.
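For anyone curious, the 0.9 backend is configured as a block in the config itself; an S3 sketch (bucket and key names are placeholders), after which `terraform init` offers to migrate your local state:

    terraform {
      backend "s3" {
        bucket = "example-terraform-state"
        key    = "prod/terraform.tfstate"
        region = "us-east-1"
      }
    }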
I cannot recommend Terraform enough for teams that are looking to gut their old infrastructure deployment methods. We used Terraform ~0.7 to build out our EU VPC on AWS. After a bit of pain understanding how to build modules, we were finally able to say we have our infrastructure truly in code. We were even able to reproduce our exact infrastructure deployment, with a bit of tweaking, for a partner that had strict data privacy requirements. Terraform allowed us to provision a smaller-scale version of our production deployment on their AWS account in about a day's worth of work. We plan on redoing our US deployment in Terraform, and we are going to POC their Pro/Enterprise version very soon.
Can anyone compare Fugue[0] with Terraform/Terraform Enterprise?
I'm launching a venture that requires deploying and maintaining cloud infrastructure across the three major cloud providers (AWS, GCP, Azure) and am considering options. Most of our infrastructure is kubernetes but Fugue caught my eye for their AWS deployments. Curious if anyone has used both.
A lot of the debate here centers on DSLs vs. programming languages, and I don't want to get into the weeds of that. I did use Terraform extensively at a few places, and generally, if you want to do something simple, it does a fine job. When things start getting complex and the team and infrastructure are larger... well, let's just say some warts start coming up. You more or less end up with some type of wrapper around it.
But in another place we decided to use troposphere[0] and I liked it more (at least for AWS-only shops). A lot more flexibility and you're relying on Cloudformation to do some of the heavy lifting around states. We basically ended up with a tool based on that library which would generate deployable CF stacks (and some other things).
How do folks feel about Terraform vs. CloudFormation for AWS-only deployments these days? Do we have a clear winner? I remember even a year back there were certain features of AWS that were made available through Terraform before you could configure them in CF, so the choice was somewhat non-obvious.
I looked at multi-platform tools initially, but decided that CFN was probably much more suitable for my AWS-only needs. There are several limitations with CFN (especially early on) so I built an engine[0] in NodeJS & Handlebars that allows you to create complex CFN templates from simple JSON "blueprints".
Some of the things I like about it are that it allows me to have multiple environments (Prod, Dev, QA, etc.) where each stack can be slightly different but still related. It also has global variables (i.e. office IP address) that allow you to change something in one place and have it be updated everywhere. Finally, AWS resources are defined in Handlebars partials, so it's easy to add new ones or maintain existing ones as AWS adds more functionality.
It's still a work in progress and I update it almost every week. If you like working with NodeJS, I'm always looking for help. :)
Given the inputs you stated, I am full CF because I don't want to manage state files, and I want some more advanced things like Deletion policies and automatic tagging of resources with the stack info. We use these tags in a lot of lookup scripts when operating on the infrastructure.
Terraform is great but there are certain things that I still use Cloudformation for (simply because you can't do it right now using Terraform). One example is rolling updates on your ASGs. Last time I checked you can't do it via the aws_autoscaling_group resource.
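The workaround I've seen (ugly, but it works) is to wrap just the ASG in an embedded CloudFormation stack so you get an UpdatePolicy, roughly like this (a sketch; the AMI and subnet IDs are placeholders):

    resource "aws_launch_configuration" "app" {
      image_id      = "ami-123456"  # placeholder
      instance_type = "t2.micro"
    }

    resource "aws_cloudformation_stack" "asg" {
      name = "app-asg"

      # Terraform interpolates ${...} in the heredoc before CF ever sees it.
      template_body = <<EOF
    {
      "Resources": {
        "AppASG": {
          "Type": "AWS::AutoScaling::AutoScalingGroup",
          "Properties": {
            "LaunchConfigurationName": "${aws_launch_configuration.app.name}",
            "MinSize": "2",
            "MaxSize": "4",
            "VPCZoneIdentifier": ["subnet-abc123"]
          },
          "UpdatePolicy": {
            "AutoScalingRollingUpdate": {
              "MinInstancesInService": "2",
              "PauseTime": "PT5M"
            }
          }
        }
      }
    }
    EOF
    }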
A lot of people including myself have used tools to generate CF or Heat templates in the absence of a need for cross-cloud compatibility that Terraform offers. You can use Troposphere, Stacker, or anything else that can intelligently generate and manage component information as a JSON or Yaml file and get much of the power of Terraform - perhaps more given the option of a full language rather than an external, declarative DSL. I still haven't found an idiomatic, elegant API that can manage all those template references.
The primary advantage CloudFormation has for me is not having to manage the state that Terraform spends so much effort on. CloudFormation does this fairly opaquely, which became much more obvious when I tried to deploy a CF stack while S3 was down a while ago; being able to use an alternate state file location would have been handy then.
In fact our very own jdreaver wrote a Haskell EDSL for generating CloudFormation templates, this seems to be not uncommon among companies who want to do infrastructure as code with that ecosystem: https://github.com/frontrowed/stratosphere
Unfortunately when you inevitably make changes to your CF config, `terraform plan` doesn't help because it's likely you changed something inside the `template_body` and Terraform just knows the string has changed and doesn't tell you what part of the config changed. Parameterising as much as you can helps.
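Concretely, pushing values out of the body and into stack parameters means the plan at least names what changed (a sketch; the file and parameter names are invented):

    variable "desired_count" {
      default = "2"
    }

    resource "aws_cloudformation_stack" "app" {
      name          = "app-stack"
      template_body = "${file("${path.module}/app.json")}"

      # A change here shows up as a readable parameter diff in the plan,
      # rather than one giant changed string.
      parameters = {
        DesiredCount = "${var.desired_count}"
      }
    }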
This is so cool. I literally found Terraform yesterday and I guess I found 0.9 just before the announcement. I spent the entire evening reading and trying to get it running with Linode.
I've got to say, I always end up falling in love with products with cool names, nice logos, great design and easy interfaces. Terraform and most of the Hashicorps products I've seen tick all of those boxes. I'm amazed at the quality of the tools.
However a question:
The documentation makes it really clear that Terraform is not a configuration management tool. Do I understand correctly that this means you need another tool like Chef to run all the base setup, like installing firewalls, setting the timezone, and installing other packages? I've been using Ansible to install a baseline of packages for my servers.
We're getting into using this on AWS. Plus points: it's awesomely powerful and actually useful. Minus point: the version number is not a lie, this is buggy betaware that deprecates functionality fast without looking back, and if you use it for real work at this stage you are live beta testing both the software and its functionality and will tear out your remaining hair. Even given that, we're sticking with it.
What's the general pattern for making sharded applications where each machine needs to be distinguishable on that count?
For instance, say you're sharding across three machines. And now you want to up that to four. Each instance of the application is smart enough to do it provided it knows which shard it's in.
We deploy fully baked AMIs containing an application. Do you make separate AMIs per shard? Or do you supply configuration afterwards? Or do you template and generate your plan file each time?
I haven't found anything on this so I'm hoping someone can tell me.
I would prefer the image to look identical everywhere. Each host is going to have a unique name, and you're going to outsource your routing/discovery to the client or another service anyway, right?
Generally though, if there is some custom config data that must be passed when BOOTSTRAPPING the node, you can use templated values for user-data on cloud-init based systems, or the remote-exec provisioner. This can even include the modified shard name (with count). If you're talking about updates AFTER the machine has been in service, I do not believe this is 'officially' supported by Terraform, but it can be done [0]. You may want to look into a proper CM tool with idempotency for that sort of thing (Chef, Salt, etc.)
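A sketch of the user-data variant (same baked AMI everywhere, only the boot-time config differs; the AMI and file path are made up):

    variable "shard_count" {
      default = "3"
    }

    resource "aws_instance" "shard" {
      count         = "${var.shard_count}"
      ami           = "ami-123456"  # the one fully baked image
      instance_type = "t2.micro"

      # count.index is interpolated per instance, so each node learns its
      # shard at boot without needing a per-shard AMI.
      user_data = <<EOF
    #!/bin/bash
    echo "SHARD_ID=${count.index}" > /etc/app/shard.conf
    EOF
    }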
Oh, no, definitely the former approach. All right. That sounds perfectly all right to me. I was wondering if there was a different way people did things.
I'm guessing there is a finite number of machine types, and they are well defined and do not change often?
What you could do is generate a different packer configuration for each of those types (guessing you use packer since you mentioned AMIs), which will create a different image for each machine type. You can scale each type individually.
Ah, I wasn't terribly clear. Each node is actually the same machine type. It's something like Voldemort with partition-aware routing at the client[0]. Essentially, there's a small amount of configuration to determine which partitions a specific node should serve.
Do people usually just build k AMIs if they have k partition groups that they want to serve?
For a sort of simplistic version of the problem, imagine the data was splittable into 6 partitions and you hashmod the partitions across 3 groups. So instances in group 1 will serve partitions 0,4; instances in group 2 will serve partitions 1,5; and so on. Does one build an AMI for group 1, one for group 2, and one for group 3? Does one provide configuration informing which group a specific node should serve through other means? Or does one stick that in a Terraform definition?
Thank you. It seems to me like you'd need variation in your Terraform file to specify that. How do you do that? The DSL doesn't have control structures.
I'm running a small game on Heroku (Node.js + WebSocket) but I want to migrate it to my own DO/Linode server. However, I know nothing about servers and I'm terrified I will make all kinds of security errors. Is this something I should use to mitigate that?
OT but what I really want is a service where I just click a button and they configure the server for me with automated security updates.
> I know nothing about servers and I'm terrified I will make all kinds of security errors. Is this something I should use to mitigate that?
Not really.
What Terraform does is to allow you to put all your configuration in (versioned) code. So you can have all your existing configuration, and then change the relevant definitions to Digital Ocean or Linode and make the commit 'Move to DO/Linode'.
Exactly how it's configured, security or otherwise, is still up to you though.
> OT but what I really want is a service where I just click a button and they configure the server for me with automated security updates.
Why are you moving from Heroku to Digital Ocean then? Heroku is much closer to, if not exactly that.
> Why are you moving from Heroku to Digital Ocean then? Heroku is much closer to, if not exactly that.
It basically comes down to the daily cycling which I have no control over and the low memory on the hobby dynos. My game is constantly reaching the limit and without rebuilding the whole architecture it leaves me with either upgrading to a much more expensive dyno or migrating away.
I also miss just being able to ssh into my machine and do stuff.
Preliminary note: I use AWS, Heroku, dedicated or baremetal servers, for different purposes.
I'd first ask what makes you want to move out of Heroku, especially if you have no sysadmin experience at all.
Re OT: you should really check out https://www.engineyard.com/, which provides a layer of services on top of AWS. You can handle a Rails or Node application with little experience and have security updates (Disclaimer: I'm a happy customer, but again I use many different providers so I don't think I'm too biased).
See my other comment for reasons why I want to move away from Heroku. I can expand on the cycling bit though:
I keep my game's state in memory all the time and really just want to write to the database once in a while. The random cycling prevents this since it just terminates the app without any warning so any state is lost.
I can see why you'd wanna go to something like DO/Linode for that. It's a relatively fixed cost up front.
This is all anecdotal, but at work we recently switched from hosting marketing sites/blogs on internal infrastructure to farming them out to DO. The biggest concern for the person who manages our fleet is the security implications of handling multiple machines outside our network. With just a minimal amount of Google-fu, I found a plethora of management services that do the work you're talking about on your VPS infrastructure. Some specialize in just DO, but a large number are relatively cloud-agnostic.
We have also just recently invested in using Laravel Forge for provisioning, and it's relatively trivial to write a script that will run 'sudo apt-get update', but proper security usually requires a bit more than that. Package upgrades to major versions can sometimes cause quite a bit of havoc. Having a service or a team of people with the brainpower to handle your case properly is often more than worth the price to someone like me. Finding the right one is its own chore, but from my limited search there's quite a lot out there to cover however hands-off you want to be.
I'm actually using Laravel Forge for my PHP projects and I love how they take care of everything. Maybe I should just use Forge for my Node.js projects as well? I suppose I just need to open up the socket port in nginx.
Terraform allows you to store your configuration changes in version control. Instead of manually clicking buttons in AWS or any kind of IAAS, you can set up Terraform configurations to set up IAM users or spin up EC2 instances. This provides infrastructure configuration that is clearly documented.
Terraform is a tool that captures Infrastructure as Code. Flynn appears to be a PaaS, akin to something like Heroku with scheduling capabilities on top. Under the hood, there is some sort of infrastructure that Flynn is deploying to; it's providing an API on top of it to easily manage deployments and scaling. Terraform is the tool that defines the layout and composition of underlying infrastructure resources.
Another one of Terraform's strengths is that it supports multiple providers. Most of this thread mentions AWS, but Terraform also supports GCP, Azure, and DigitalOcean. But Terraform can manage anything with an API, so you can codify things like GitHub teams and permissions, DNS configuration, PagerDuty escalation policies, and so much more (https://www.terraform.io/docs/providers/index.html).
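For example (a quick sketch; provider credentials are omitted and the names are invented):

    resource "github_team" "ops" {
      name        = "ops"
      description = "Operations team"
    }

    resource "cloudflare_record" "www" {
      domain = "example.com"
      name   = "www"
      type   = "A"
      value  = "203.0.113.10"
    }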
All of these definitions are captured in code. You can bring that code under management of source control to get peer review, pull requests, or permission models (signoffs) too.
If you are an individual developer happy with Heroku or Flynn, Terraform probably isn't the best choice for you, especially if you do not have a background in operations or you do not want to manage servers. However, most medium->large scale production applications cannot be supported by such technologies and demand a tool like Terraform to manage the complexity.
I hope that clears up the differences a bit.
Disclaimer: I work for HashiCorp, the company that makes Terraform.
So why isn't it also a replacement for Ansible? I don't understand this artificially limited scope. You set machines up; why not also set up their software and repos?
I don't know a lot about Terraform, but it sounds like they are very different things? Terraform talks mostly to cloud provider APIs; Ansible usually deals with running code on the servers themselves.
In my internship this past summer, I used Terraform to easily describe the infrastructure and deploy my application on AWS, i.e., what type of EC2 instance I wanted. I really enjoyed my time using it.
It's not for development, it's for infrastructure.
Terraform allows you to manage VPCs, networks, routing tables, firewalls, and gateways in the cloud. There is no other software that does that (only manual clicking in the AWS console).
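For example, the basic plumbing looks roughly like this (a sketch):

    resource "aws_vpc" "main" {
      cidr_block = "10.0.0.0/16"
    }

    resource "aws_internet_gateway" "gw" {
      vpc_id = "${aws_vpc.main.id}"
    }

    # A route table with a default route out through the gateway.
    resource "aws_route_table" "public" {
      vpc_id = "${aws_vpc.main.id}"

      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = "${aws_internet_gateway.gw.id}"
      }
    }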
It can also create AMI, instances and auto scaling groups but there are many tools in that field that do it better.
I haven't used Flynn, however I am using terraform. Main purpose is to completely automate AWS configuration so we can bring up entire site in CI. This makes it really easy to have staging and production deployed from CI.
Great changes! State environments is the feature that stands out the most for me. Currently, this is done by managing different sets of TF VARs. That's not going away, but this should allow for more nuanced modules.
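As I read the 0.9 docs, you create and switch environments with `terraform env new staging` / `terraform env select staging`, and can branch on the current one in config, roughly like this (a sketch; the AMI is a placeholder and the `terraform.env` interpolation is my reading of the docs):

    variable "ami_id" {
      default = "ami-123456"  # placeholder
    }

    resource "aws_instance" "app" {
      ami           = "${var.ami_id}"
      instance_type = "${terraform.env == "prod" ? "m4.large" : "t2.micro"}"

      tags {
        Environment = "${terraform.env}"
      }
    }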
I hope they will fix their VMware + Docker compatibility. I love Terraform, but right now I need to manually delete my VMs to keep terraform apply from crashing when trying to capture IP addresses from vSphere.
I'm not yet ready to jump into docker and the likes, and I've used terraform for setting up the infrastructure for a 3-tiered project and I was extremely pleased with this tool.
[Serious] if I'm already happy using CloudFormation for AWS, and I don't plan on supporting AWS + another cloud, is Terraform for me? Or is it for other use-cases?
Hey there, I'm one of the creators of Terraform. Honestly, if you're happy using CloudFormation and you're sticking with AWS, just stick with CloudFormation. Terraform will be around when/if circumstances change and it'll be better than ever by that time. No rush if things are working for you.
The best time to look into something like Terraform is if you begin managing more than just AWS resources. Even so, you can choose to use CF for AWS and TF for non-AWS. Or, if you want to start taking advantage of the higher-level features Terraform is beginning to support.
At the end of the day though, I'm a big fan of working with what you're most expert with until circumstances require otherwise.
I think TF is better at post-setup execution. I haven't used CF in a while, but I remember having it run commands post-setup (outside of user data) being kind of kludgey. I also really, really dig its module system; it lets you get incredibly creative. I also heavily prefer HCL over Amazon's DSL (at least from what I remember of it when I used it heavily last year; self-referencing security groups were a huge pain in CF but pretty easy in TF).
The biggest win for TF (IMO) is that it doesn't lock you into any provider. It is a lot easier to design TF configs to work in cross-cloud scenarios (well, "easy") than it is to port CF (which is AWS-only) over to something else.
I'm sure you've got a substantive point here but (a) you haven't explained it and (b) you added incivility instead. To make it a good comment, you could (a) remove the incivility and (b) add explanation.