Introduction
This was just something I wanted to play around with for a while. I wanted an immutable Ansible server in AWS, so I could just fire it up on demand, and I didn't want to have to build my inventory file manually each time. Who wants that kind of pain? Yes, I know there is an AWS module for Lambda, but sometimes it is nice to do things your own way, and learn a bit at the same time. Originally I was going to do it all with Powershell, but I thought I would include bits in my Terraform configuration, and also make use of Lambda to call a Python script. Because why not just use everything?
In a nutshell then, we will use Terraform to package a small Python script and deploy it as a Lambda function. We will then use Powershell to create an EC2 instance that, as part of its userdata, installs Ansible, invokes the Lambda function to regenerate our inventory.yml file in S3, and pulls it down locally.
Create the Python script for Lambda - update_ansible_inventory.py
It really doesn't matter which order you do this stuff in, but as I want to call a Lambda function, it makes sense to start with that. What this script will do is use Boto3 to list the EC2 instances (apart from terminated ones!), grab the name and private IP address of each, and then write them to our inventory.yml file. One thing to note is that "ansible678867" is the name of my S3 bucket where this file gets written to. You will need to use your own S3 bucket!
import boto3

ec2 = boto3.client('ec2')
s3 = boto3.resource('s3')

def lambda_handler(event, context):
    # Grab every EC2 instance in the region
    response = ec2.describe_instances()
    body = "---\n[servers]\n"
    for x in response["Reservations"]:
        instance = x["Instances"][0]
        if instance["State"]["Name"] != 'terminated':
            # Pull out the Name tag so we know which host each IP belongs to
            instancename = ''
            for tags in instance.get("Tags", []):
                if tags["Key"] == 'Name':
                    instancename = tags["Value"]
            ip = instance["PrivateIpAddress"]
            body = body + ip + "   # " + instancename + "\n"
    # Write the inventory to S3 - "ansible678867" is my bucket, use your own!
    s3.Object('ansible678867', 'inventory.yml').put(Body=body)
You will want to save that as "update_ansible_inventory.py", or change the name of it in the Terraform script. The file should be placed alongside your Terraform configuration.
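For reference, the file the function writes ends up looking something like this (the IPs and names here are just made-up examples):
---
[servers]
10.0.1.15   # WebServer01
10.0.2.27   # Ansible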
Use Terraform to Create your Lambda function
Assuming you have a basic Terraform configuration already, you don't need to add a lot here. What you will need to add is:
- Our Python code (.py) for the Lambda function, which Terraform will package and upload as a .zip.
- An IAM role.
- The Lambda function itself.
All of that is fairly straightforward in Terraform. One thing to keep in mind is that the role below only carries the trust policy; the function still needs permission to describe EC2 instances and write to your bucket, so if this isn't just for playing around you will want to be fairly granular there (see the sketch after the code):
##############################################
# Lambda Functions
##############################################

# To update the S3 Inventory file
data "archive_file" "zip" {
  type        = "zip"
  source_file = "update_ansible_inventory.py"
  output_path = "update_ansible_inventory.zip"
}

data "aws_iam_policy_document" "policy" {
  statement {
    sid    = ""
    effect = "Allow"

    principals {
      identifiers = ["lambda.amazonaws.com"]
      type        = "Service"
    }

    actions = ["sts:AssumeRole"]
  }
}

resource "aws_iam_role" "iam_for_lambda" {
  name               = "iam_for_lambda"
  assume_role_policy = "${data.aws_iam_policy_document.policy.json}"
}

resource "aws_lambda_function" "lambda" {
  function_name    = "update_ansible_inventory"
  filename         = "${data.archive_file.zip.output_path}"
  source_code_hash = "${data.archive_file.zip.output_base64sha256}"
  role             = "${aws_iam_role.iam_for_lambda.arn}"
  handler          = "update_ansible_inventory.lambda_handler"
  runtime          = "python3.7"

  environment {
    variables = {
      greeting = "Hello"
    }
  }
}
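As promised, here is the permissions sketch. The role above only has the trust policy attached, so as written the function won't actually be allowed to describe instances or write to the bucket. As a rough example (using the AmazonEC2ReadOnlyAccess managed policy and an inline statement for my bucket name, so adjust both to taste), something like this would do it:
# Rough sketch: let the function read EC2 instance details...
resource "aws_iam_role_policy_attachment" "lambda_ec2_read" {
  role       = "${aws_iam_role.iam_for_lambda.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess"
}

# ...and write the inventory file to the bucket (swap in your own bucket name)
resource "aws_iam_role_policy" "lambda_s3_write" {
  name = "lambda_s3_write"
  role = "${aws_iam_role.iam_for_lambda.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::ansible678867/*"
    }
  ]
}
EOF
}
You may also want to attach the AWSLambdaBasicExecutionRole managed policy so the function can write its logs to CloudWatch.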
Create your Ansible Server with Powershell
The key thing here is the userdata that you will pass to the instance. What we want to do here is create an instance, install Ansible (and the prerequisites), and then call the Lambda function that was created by Terraform. A couple of things to note: you will probably want to provide your own key pair, and consider the region. Additionally, it is looking at my bucket, so you will need to change "ansible678867" to whatever bucket you create for this. Also, the ImageID might need to be updated depending on when you run this; it is just a basic Amazon Linux AMI though. Lastly, the instance profile "EC2ServerRole" is what gives the instance access to S3 (and lets it invoke the Lambda function), so replace that with whatever role you use for that.
The tags are optional of course too.
function New-EC2AnsibleServer {
    $mykey         = "GregH2O-USEast-KP"
    $region        = "us-east-1"
    $SecurityGroup = "MyWebDMZ"
    $Userdatafile  = "MyUserData.txt"

    # Expandable here-string (@"..."@) so that $region is substituted into the bash script
    $Userdata = @"
#!/bin/bash
yum update -y
yum install python-pip -y
pip install xmltodict
pip install pywinrm
mkdir ansiblestuff
aws configure set region $region
aws lambda invoke --function-name update_ansible_inventory /tmp/outfile.txt
pip install ansible
aws s3 cp s3://ansible678867/ansiblescript.sh ansiblestuff/ansiblescript.sh
chmod +x ansiblestuff/ansiblescript.sh
ansiblestuff/ansiblescript.sh
aws s3 cp s3://ansible678867/inventory.yml ansiblestuff/inventory.yml
"@

    $Params = @{
        KeyName              = $mykey
        InstanceType         = "t2.micro"
        Userdata             = $UserData
        Region               = $region
        EncodeUserData       = $True
        SecurityGroup        = $SecurityGroup
        InstanceProfile_Name = "EC2ServerRole"
        ImageID              = "ami-0b69ea66ff7391e80"
    }

    $tag1 = @{ Key="Name"; Value="Ansible" }
    $tag2 = @{ Key="cost-center"; Value="cc123" }

    $tagspec1 = New-Object Amazon.EC2.Model.TagSpecification
    $tagspec1.ResourceType = "instance"
    $tagspec1.Tags.Add($tag1)

    $tagspec2 = New-Object Amazon.EC2.Model.TagSpecification
    $tagspec2.ResourceType = "volume"
    $tagspec2.Tags.Add($tag2)

    # Create the instance profile if it doesn't already exist
    if (!(Get-IAMInstanceProfile "EC2ServerRole")) {
        New-IAMInstanceProfile -InstanceProfileName "EC2ServerRole"
        Add-IAMRoleToInstanceProfile -InstanceProfileName "EC2ServerRole" -RoleName "EC2ServerRole"
    }

    $Instance = (New-EC2Instance @Params -TagSpecification $tagspec1,$tagspec2)
}
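To actually use it, dot-source the script and call the function. This assumes you saved it as something like New-EC2AnsibleServer.ps1 (the filename is up to you), and that the AWS Tools for PowerShell are installed and authenticated:
# Load the function into the session and launch the Ansible server
. .\New-EC2AnsibleServer.ps1
New-EC2AnsibleServer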
Running it all
First, apply the configuration with Terraform and you should be able to see the Lambda function in the console. If you run the function, you should see it create the inventory.yml file in your S3 bucket. Then you can run the Powershell function to create your instance. Give it a few minutes to finish installing, and you should be able to log on to your instance and see your populated inventory file!
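If you want to drive the Terraform and Lambda part of that from the command line, it would look something like this (the outfile name is arbitrary, and the bucket is the one from earlier):
terraform init
terraform apply
# Invoke the function manually and confirm the inventory file landed in the bucket
aws lambda invoke --function-name update_ansible_inventory outfile.txt
aws s3 ls s3://ansible678867/inventory.yml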