Last time, we set up our local machine for accessing AWS programmatically. This will allow us to use Terraform and Terragrunt to easily create all the infrastructure needed for our data warehouse. Now, let's set up Terraform and Terragrunt.
Install Terraform
Navigate to the Terraform downloads page and follow the installation instructions for your operating system.
After installing Terraform, enter the following in the terminal:
terraform --version
You should be greeted with output similar to:
Terraform v1.2.8
on darwin_arm64
Install Terragrunt
Terragrunt is a thin wrapper for Terraform that adds a few tools for managing IaC projects.
Download and install it:
* Terragrunt Downloads
* Terragrunt Package Manager Install
After installing Terragrunt, type the following in the terminal:
terragrunt --version
You should get the Terragrunt version as output:
terragrunt version v0.38.7
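A quick taste of what those "additional tools" buy you: Terragrunt can keep settings Terraform makes you repeat, like remote state configuration, in one place. A minimal sketch of a root terragrunt.hcl (the bucket name is a placeholder, not something this project requires):
remote_state {
  backend = "s3"
  config = {
    bucket  = "my-terraform-state-bucket" # placeholder; use your own bucket
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = "us-west-2"
    encrypt = true
  }
}
Every module folder beneath it can then inherit that state configuration through an include block, which we'll see later in this article.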
Create a Git Repository
I'll be using GitHub, but any Git hosting service should work similarly.
Use My IaC Template
If you want to skip the next part and use my template repository, visit the page and fork the repository into your GitHub account, then clone it locally. Open a terminal and type the following, replacing <YOUR_GITHUB_NAME> with your GitHub username:
git clone https://github.com/<YOUR_GITHUB_NAME>/self_sensored_iac.git
I'll explain in the next section why this repository is set up the way it is.
Create an IaC Template
In GitHub, create a new repository and call it self_sensored_iac, or whatever name you'd like to give your personal enterprise. While creating it, check "Add README.md" and "Add .gitignore," and for the .gitignore template select Terraform. Then clone the repository locally:
git clone https://github.com/<YOUR_NAME>/<YOUR_REPO>.git
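For reference, the Terraform .gitignore template keeps local state and provider caches out of your repository; its important entries look roughly like this (abbreviated):
# Local .terraform directories
**/.terraform/*

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log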
Set Up the Enterprise Terragrunt Project
Whew, we made it. Our work machine is set up. Now, we need to create a Terragrunt project.
The idea of our Terragrunt project is to separate code into two major categories. First, common_modules will contain a folder for each of the major resources you plan to deploy; imagine these are class definitions. The second category contains all the inputs needed to initialize the resources defined in common_modules.
The easiest way to create this project is with a folder structure like this:
.
├── README.md
├── common_modules
│   └── vpc
│       ├── data.tf
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── empty.yaml
└── prod
    ├── environment.yaml
    └── us-west-2
        ├── aws_provider.tf
        ├── terragrunt.hcl
        ├── region.yaml
        └── vpc
            └── terragrunt.hcl
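A note on the yaml files before moving on: a common Terragrunt pattern, and the one this layout suggests, is for the root terragrunt.hcl to read per-environment and per-region values with yamldecode(), with empty.yaml serving as a fallback when no such file is found. I'm guessing at the exact keys, but the contents would look something like this:
# prod/environment.yaml -- hypothetical contents
environment_name: prod

# prod/us-west-2/region.yaml -- hypothetical contents
region: us-west-2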
common_modules
As I mentioned, common_modules is similar to a class in object-oriented programming. Each module is a blueprint of a resource and should be coded to provide appropriate flexibility. That is, if we create a blueprint to set up a VPC, it should take only a few inputs, and those inputs change certain behavior per deployment of the resource.
In Terragrunt, a module is defined by a folder containing a collection of files ending in .tf. Collectively, these files tell Terraform how to create the needed infrastructure within AWS.
Let's look at the VPC code in ./common_modules/vpc/. As the file names may suggest, main.tf is where most of the magic happens. Let's take a look:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "${var.vpc_name}"
cidr = "${var.vpc_network_prefix}.0.0/16"
azs = ["${var.region}a", "${var.region}b", "${var.region}c"]
private_subnets = ["${var.vpc_network_prefix}.1.0/24", "${var.vpc_network_prefix}.2.0/24", "${var.vpc_network_prefix}.3.0/24"]
public_subnets = ["${var.vpc_network_prefix}.101.0/24", "${var.vpc_network_prefix}.102.0/24", "${var.vpc_network_prefix}.103.0/24"]
enable_nat_gateway = false
single_nat_gateway = true
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Terraform = "true"
Environment = "${var.environment_name}"
}
nat_gateway_tags = {
Project = "${var.project_name}"
Terraform = "true"
Environment = "${var.environment_name}"
}
}
resource "aws_security_group" "allow-ssh" {
vpc_id = data.aws_vpc.vpc_id.id
name = "allow-ssh"
description = "Security group that allows ssh and all egress traffic"
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
// Allow direct access to the EC2 boxes for me only.
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${chomp(data.http.myip.body)}/32"]
}
tags = {
Name = "allow-ssh"
}
}
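Two files referenced by the code above but not shown are data.tf and variables.tf. I won't reproduce the repository's versions line for line, but given what main.tf consumes, they must look roughly like this (the IP lookup URL is my assumption; any service that echoes your public IP as plain text works):
# ./common_modules/vpc/data.tf -- a sketch inferred from main.tf
data "http" "myip" {
  # Fetch the public IP of the machine running Terraform,
  # used to restrict SSH ingress to just you.
  url = "http://ipv4.icanhazip.com" # assumption: any plain-text IP echo service
}

data "aws_vpc" "vpc_id" {
  # Look up the VPC created above by its Name tag.
  tags = {
    Name = var.vpc_name
  }
}

# ./common_modules/vpc/variables.tf -- the inputs main.tf consumes
variable "vpc_name" {
  type        = string
  description = "Name tag for the VPC."
}

variable "vpc_network_prefix" {
  type        = string
  description = "First two octets of the VPC CIDR, e.g., 10.17."
}

variable "region" {
  type        = string
  description = "AWS region to deploy into."
}

variable "environment_name" {
  type        = string
  description = "Environment tag, e.g., prod."
}

variable "project_name" {
  type        = string
  description = "Project tag applied to the NAT gateway."
}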
Note, the module "vpc" block pulls in a prebuilt VPC module from the Terraform Registry, which handles a lot of the boilerplate setup. On top of it, we pin down even more settings we don't often want to change. Every place you see ${var.something}, we are creating an input that may be changed at the time of deployment.
Take the VPC name parameter for example:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "${var.vpc_name}"
...
Keep this in mind while we look at the prod folder.
prod
The production folder and its subfolders are where we consume the modules defined in the common_modules folder. Let's look at the /prod/us-west-2/vpc/terragrunt.hcl file.
terraform {
  source = "../../..//common_modules/vpc"
}

...

inputs = {
  vpc_name           = "ladviens-analytics-stack-vpc"
  vpc_network_prefix = "10.17"
}
The important definitions here are the terraform and inputs maps. The terraform map tells Terragrunt, when it is run from this folder, to treat the specified source directory as a Terraform module.
The inputs map contains all of the variables needed to make sure the VPC is deployed correctly. You'll notice we have hardcoded everything in our vpc module except the name and network prefix. This may not be ideal for you; feel free to change anything in the vpc module files to make it more reusable. To help, I'll provide an example a bit later in the article.
Planning Our VPC
One of the joys of Terraform is its ability to report what infrastructure would be built before it is actually built. This is done by running the terragrunt plan command from inside the ./prod/us-west-2/vpc/ directory.
At the terminal, navigate to the VPC definition directory:
cd prod/us-west-2/vpc
Then run:
terragrunt plan
You should end up with something like this:
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 3.18.1 for vpc...
- vpc in .terraform/modules/vpc
Initializing the backend...
...
After a little while, Terraform should print a complete plan, which consists of a bunch of diffs. A green + indicates what will be added, a yellow ~ flags what will be changed, and a red - marks resources to be destroyed. At this point, the plan should show everything as needing to be built.
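The tail end of the plan for this module will look something like the following; the exact resource count will vary:
  # aws_security_group.allow-ssh will be created
  + resource "aws_security_group" "allow-ssh" {
      + description = "Security group that allows ssh and all egress traffic"
      + name        = "allow-ssh"
      ...
    }

Plan: 20 to add, 0 to change, 0 to destroy.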
Deploying the VPC
Let's deploy the VPC. Still in the ./prod/us-west-2/vpc/ directory, type:
terragrunt apply
Again, Terraform will assess the inputs in your terragrunt.hcl and the module definitions in ./common_modules/vpc/, then print out what will be created. This time, however, it will ask if you want to deploy. Type yes and hit return.
Terraform will begin requesting resources in AWS on your behalf. Once it is done, I encourage you to open your AWS console and navigate to the "VPC" section. You should see a newly created VPC alongside your default VPC (the one with no name).
Huzzah!
Modifying Modules to Increase Reusability
Let's look at how to add a new input to the vpc module we've made. Open the ./common_modules/vpc/variables.tf file, go to the bottom, and add:
variable "enable_dns" {
}
We will add more to it, but this is good for now.
A variable in Terraform acts as an input for a module or file. With the enable_dns variable in place, run the following from the ./prod/us-west-2/vpc/ directory:
terragrunt apply
This time you should be prompted with:
var.enable_dns
Enter a value:
This is Terragrunt seeing a variable definition with no matching input in your ./prod/us-west-2/vpc/terragrunt.hcl, so it prompts for one at the command line. This can come in handy when, say, a variable holds a password you don't want hardcoded in your repository.
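As an aside, when a variable really is a secret, two Terraform features pair nicely with that prompt: marking the variable sensitive, so its value is redacted from plan and apply output, and the TF_VAR_ environment variable convention, where Terraform reads TF_VAR_db_password into var.db_password. A hypothetical example; nothing in this project's vpc module needs it:
variable "db_password" {
  type        = string
  sensitive   = true # redacted from plan/apply output
  description = "Hypothetical secret; supply via prompt or TF_VAR_db_password."
}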
But let's go ahead and adjust our terragrunt.hcl to contain the needed input. In the file ./prod/us-west-2/vpc/terragrunt.hcl, add the line enable_dns = true. The result should look like this:
terraform {
  source = "../../..//common_modules/vpc"
}

include {
  path = find_in_parent_folders()
}

inputs = {
  vpc_name           = "ladviens-analytics-stack-vpc"
  vpc_network_prefix = "10.17"
  enable_dns         = true
}
Now run terragrunt apply again. This time Terragrunt should find the input in the terragrunt.hcl file matching the variable name in the module.
Of course, we're not quite done. We still need to use the enable_dns variable in our Terraform module. Open the file ./common_modules/vpc/main.tf and edit the vpc module by modifying these two lines:
enable_dns_hostnames = true
enable_dns_support   = true
It should look like this:
enable_dns_hostnames = var.enable_dns
enable_dns_support   = var.enable_dns
Now, if you run terragrunt apply from ./prod/us-west-2/vpc/ again, you should not be prompted for a variable input.
A couple of clean-up items. First, let's go back to the enable_dns variable definition and add a description and a default value. Back in ./common_modules/vpc/variables.tf, update the enable_dns variable to:
variable "enable_dns" {
type = bool
default = true
description = "Should DNS services be enabled."
}
It's a best practice to add descriptions to all your Terraform definitions, as they double as documentation for anyone reusing the module. You can also generate a dependency graph from the resource definition folder by running:
terragrunt graph
But more importantly, make sure you add a type to all your variables. There are instances where Terraform assumes a variable is one type when the input turns out not to be compatible; that is, Terraform may assume a variable is a string when you meant to provide a boolean. Trust me: declaring an appropriate type on all variables will save you a lot of time debugging Terraform code one day. Not that I've ever lost a day's work to such a problem.
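To make that pitfall concrete, here is a sketch of the two declarations side by side; they are alternatives, not meant to live in the same file:
# Without a type, Terraform infers one from the input it receives;
# enable_dns = "true" would arrive as a string.
variable "enable_dns" {
  default = "true"
}

# With an explicit type, an incompatible input fails loudly at plan time
# instead of sneaking through as the wrong type.
variable "enable_dns" {
  type    = bool
  default = true
}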
For the last clean-up item, let's go back to the ./common_modules/vpc/ directory and run:
terraform fmt
This will ensure our code stays nice and tidy.
This last bit is optional, but if you've made additional changes and want to keep them, don't forget to commit them and push them to your repository:
git add .
git commit -m "Init"
git push
Let's Destroy a VPC
Lastly, let's use Terragrunt to destroy the VPC we made. I'm often experimenting on my own AWS account and worry I'll leave something running and wake up to an astronomical bill. Getting comfortable with Terragrunt has taken a lot of that fear away, as at the end of the day, clean-up is often as easy as running terragrunt destroy.
Let's use it to destroy our VPC. And don't worry: we can redeploy it by running terragrunt apply again.
In the terminal, navigate to the ./prod/us-west-2/vpc/ folder and run:
terragrunt destroy
Before you type "yes," ensure you are in the correct directory. If you are, type "yes," hit enter, and watch while Terraform destroys everything we built together.
What's Next
In the next article, we'll begin to add resources to our VPC, but we'll take a break from Terraform and Terragrunt and use the Serverless Framework to attach an API Gateway to our existing VPC. I'm so excited! 🙌🏽