Getting Started with Terraform on AWS

Published on 25 January 2019

Having looked at Vagrant previously, the next in the HashiCorp stable is Terraform. Terraform is great because you define the state of the infrastructure you want to have. So rather than manually creating AWS resources such as an EC2 instance, or an S3 bucket, you create a configuration file that contains the definition of what the environment should look like. Terraform will then ensure that your environment meets that definition.

If you want to remove resources previously managed by Terraform, you can edit the configuration file and Terraform will remove those too. And just like Vagrant, you can use the "destroy" command to remove whatever you stood up as part of your plan, so at least you won't be paying for it any longer.

You don't have to start by managing your entire infrastructure with Terraform though; you can start with a single EC2 instance and build up from there.

Firstly though, you need to define a provider for AWS. As Terraform is cross-platform, you can manage not just AWS, but Azure, VMware, and others. You can define the AWS provider with the following code:

        ##################################################################################
        # PROVIDERS
        ##################################################################################

        provider "aws" {
        access_key = "${var.aws_access_key}"
        secret_key = "${var.aws_secret_key}"
        region     = "us-east-1"
        }

What we are doing here is passing the access key and the secret key (which you get when you create your user in IAM) in from variables - that is what the "${var...}" interpolation syntax is doing. This means that you don't need to keep credentials in your code. You could of course do the same for the region. I will show you how to define those variables at the end.
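
As an illustration, the region could be pulled from a variable in exactly the same way. The "region" variable below is a hypothetical addition, not part of the configuration used in the rest of this post:

        variable "region" {
          default = "us-east-1"
        }

        provider "aws" {
          access_key = "${var.aws_access_key}"
          secret_key = "${var.aws_secret_key}"
          region     = "${var.region}"
        }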

You can then create your instance.

    resource "aws_instance" "Webserver1" {
        ami           = "ami-c58c1dd3"
        instance_type = "t2.micro"
        key_name        = "${var.key_name}"

        connection {
            user        = "ec2-user"
            private_key = "${file(var.private_key_path)}"
        }

        provisioner "remote-exec" {
            inline = [
            "sudo yum install nginx -y",
            "sudo service nginx start",
            "sudo cp /home/ec2-user/.s3cfg /root/.s3cfg",
            "sudo cp /home/ec2-user/nginx /etc/logrotate.d/nginx",
            "sudo pip install s3cmd",
            "s3cmd get s3://${aws_s3_bucket.web_bucket.id}/website/index.html .",
            "s3cmd get s3://${aws_s3_bucket.web_bucket.id}/website/logo.png .",
            "sudo cp /home/ec2-user/index.html /usr/share/nginx/html/index.html",
            "sudo cp /home/ec2-user/logo.png /usr/share/nginx/html/logo.png"                
            ]
        }

        tags {
            Name = "${var.environment_tag}-webserver1"
            BillingCode = "${var.billing_code_tag}"
            Environment = "${var.environment_tag}"
        }
    }

That is a lot to start with, but here we are creating an instance called "Webserver1", and specifying the AMI, instance type, and key name. Again, we are getting the key_name and the private_key from variables.

We are then using a provisioner called "remote-exec" to run the configuration commands once the instance has been created. Lastly, we are tagging the instance. Again, we are pulling values from variables so that we can reuse the code, and also store the code without having to redact secrets.
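
You will notice the remote-exec commands reference an S3 bucket resource, aws_s3_bucket.web_bucket, which holds the website content. That resource isn't shown above; a minimal sketch of what it could look like, using the bucket_name variable defined below, might be:

        # Bucket that holds the website content the webserver pulls down
        resource "aws_s3_bucket" "web_bucket" {
          bucket        = "${var.bucket_name}"
          acl           = "private"
          force_destroy = true
        }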

How do we provide the variables? We can declare them at the top of the file:

        ##################################################################################
        # VARIABLES
        ##################################################################################

        variable "aws_access_key" {}
        variable "aws_secret_key" {}
        variable "private_key_path" {}
        variable "key_name" {
        default = "GregH2O-USEast-KP"
        }
        variable "network_address_space" {
        default = "10.1.0.0/16"
        }
        variable "subnet1_address_space" {
        default = "10.1.0.0/24"
        }
        variable "subnet2_address_space" {
        default = "10.1.1.0/24"
        }
        variable "billingcode_tag" {}
        variable "environment_tag" {}
        variable "bucket_name" {}

Variables can be supplied on the command line, but generally we would place the values that we don't change often - and that we would consider secret - in a separate variables file. This has an extension of .tfvars, so it could be called terraform.tfvars.

The file can look like this:

        aws_access_key = "AKIAJUBMDA3W93XXX111"

        aws_secret_key = "BcGWosfs%fFAgaa7Ys/fb5555A8RuIukfassjK"

        private_key_path = "c:\\aws\\GregH2O-USEast-KP.pem"

        bucket_name = "gregh2o-bucket-terraform"

        environment_tag = "dev"

        billing_code_tag = "IT9004S"

With those values stored as text in the terraform.tfvars file, we can then use Terraform to create the resources.
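
One step worth mentioning first: before the first plan or apply in a new configuration directory, Terraform needs to initialise that directory and download the AWS provider plugin, which is done with terraform init:

        PS C:\terraform> terraform init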

Like Vagrant, we have several commands that are really useful in provisioning our infrastructure.

  • Plan
  • Apply
  • Destroy

Plan

        PS C:\terraform> terraform plan -var-file="c:\aws\terraform.tfvars"

This command will run terraform (assuming it is in your path) with the "plan" option, reading variables from the c:\aws\terraform.tfvars file. The "plan" option is very useful as it will look at what is currently configured and tell you what it "plans" to do. If resources will be added, changed, or removed, you will receive a summary.

The output will be a list of all resources changed, and a count:

        Plan: 18 to add, 0 to change, 0 to destroy.
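
If you want to override an individual variable at runtime rather than putting it in the .tfvars file, the -var flag can be passed alongside -var-file. For example (the "test" value here is just an illustration):

        PS C:\terraform> terraform plan -var "environment_tag=test" -var-file="c:\aws\terraform.tfvars"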

Apply

        PS C:\terraform> terraform apply -var-file="c:\aws\terraform.tfvars"

This will apply the plan. All the resources that need to be created or modified will be changed.
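
With recent Terraform versions, apply will show you the plan again and prompt for confirmation before making any changes. If you are running it non-interactively, for example from a script, the -auto-approve flag skips that prompt:

        PS C:\terraform> terraform apply -auto-approve -var-file="c:\aws\terraform.tfvars"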

Destroy

        PS C:\terraform> terraform destroy -var-file="c:\aws\terraform.tfvars"

This will tear down your infrastructure according to your plan. So if you have written a terraform file to create a pair of instances, a load balancer, security groups, subnet, etc., and have finished testing, run the "destroy" option so you no longer have to pay.
