My Introduction to Terraform
My time spent learning Terraform and the project I completed in the process.
I’ve been billed monthly by Amazon Web Services (AWS) for a while now. Plenty of nights find me up late clicking through the UI, navigating hundreds of services, thinking to myself,
“There has to be a better way to do this?!”
Maybe my bill wouldn’t be so high if I didn’t have two t2.xlarge EC2 instances running in the background while I read the documentation on how to resize an EBS volume on a running instance or configure traffic mirroring to a different subnet.
Well…there is. Thanks to Terraform and Infrastructure as Code (IaC).
What is IaC? What is Terraform?
Infrastructure as Code (IaC) manages technology infrastructure using machine-readable configuration files, making it easier to automate and maintain infrastructure resources without manual intervention. Basically, instead of wasting time scrolling through “Advanced details” trying to find where to enable a Nitro-based instance, the setting can be defined in an HCL or JSON configuration file, allowing for fast, repeatable infrastructure while reducing the human error that comes with manually provisioning 100 of “the same” VPCs.
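As a toy illustration of the idea (everything here is a made-up example, and the AMI ID is a placeholder, not a real image), the same EC2 instance you would otherwise click through the console to launch can be declared in a few lines of HCL:

```hcl
# Hypothetical example: one EC2 instance declared as code.
# Every value is explicit, reviewable, and repeatable.
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "declared-not-clicked"
  }
}
```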
Terraform is the most popular IaC tool in use today.
HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share.
Why did I want to learn Terraform, you ask?
Besides the cost savings of being able to tear down all my infrastructure at once with a simple ‘terraform destroy’ (and no longer wondering why I’m being charged $2 a week for some Elastic IPs I forgot to release), I knew Terraform would be a useful skill to have as I grow in my career. Terraform is used across many different roles, and as I, like most companies, transition into the cloud, this is a skill I want to have.
Learning Methodology
My methodology for learning Terraform looked something like this:
Start with the basics
Read the documentation
Follow along with someone else’s project
Do my own project
Start With the Basics
To start, I was recommended KodeKloud’s Terraform for Beginners course. This course covered the basics I needed: the syntax of HashiCorp Configuration Language (HCL) and the core Terraform commands. It also introduced some basics of the AWS Terraform provider. I would recommend it if you are a complete beginner with Terraform. If you have trouble signing up with KodeKloud, try the YouTube version of this course!
Read the Documentation
Reading documentation is a lifelong task when using Terraform, especially when trying to understand specific providers. To better understand the syntax and the AWS provider, I went through the documentation. This stage repeated throughout the entire learning process; there is no running from it! There is also documentation on the Terraform CLI, which I would recommend reading as well. It will help you be more efficient and squeeze even more automation out of your code.
Follow Along
For my follow-along project, I followed Derek Morgan’s Learn Terraform (and AWS) by Building a Dev Environment – Full Course for Beginners. This course prepared me to take on Terraform on my own. About halfway through the course, I could anticipate what the next steps were and do them myself.
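One note before the config: the instance resource below references a `data.aws_ami.server_ami` data source that isn’t shown in this file. A sketch of what such a lookup might look like for a Canonical Ubuntu image (the owner ID and name filter here are my own illustrative assumptions, not copied from the course):

```hcl
# Assumed AMI lookup: most recent Canonical Ubuntu 20.04 image.
# The owner ID is Canonical's commonly cited AWS account.
data "aws_ami" "server_ami" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}
```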
Here is the final configuration file:
resource "aws_vpc" "tf-test-vpc" {
  cidr_block           = "10.1.0.0/16"
  enable_dns_hostnames = true

  tags = {
    Name = "tf-test"
  }
}

resource "aws_subnet" "tft-subnet1_public" {
  vpc_id                  = aws_vpc.tf-test-vpc.id
  cidr_block              = "10.1.1.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "us-east-1a"

  tags = {
    Name = "tft-public"
  }
}

resource "aws_subnet" "tft-subnet2_private" {
  vpc_id            = aws_vpc.tf-test-vpc.id
  cidr_block        = "10.1.2.0/24"
  availability_zone = "us-east-1a"

  tags = {
    Name = "tft-private"
  }
}

resource "aws_internet_gateway" "tf-test-int_gate" {
  vpc_id = aws_vpc.tf-test-vpc.id

  tags = {
    Name = "tft-igw"
  }
}

resource "aws_route_table" "tft-public-rt" {
  vpc_id = aws_vpc.tf-test-vpc.id

  tags = {
    Name = "tft-pb-rt"
  }
}

resource "aws_route" "default_route" {
  route_table_id         = aws_route_table.tft-public-rt.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.tf-test-int_gate.id
}

resource "aws_route_table_association" "tft-public-assoc" {
  subnet_id      = aws_subnet.tft-subnet1_public.id
  route_table_id = aws_route_table.tft-public-rt.id
}

resource "aws_security_group" "tft-public-SG" {
  name        = "tft-public-SG"
  description = "Allow all traffic from my IP"
  vpc_id      = aws_vpc.tf-test-vpc.id

  ingress {
    description = "All traffic from my IP"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "tft-public-SG"
  }
}

resource "aws_key_pair" "tft-auth" {
  key_name   = "tft-key"
  public_key = file("~/.ssh/tft-key.pub")
}

resource "aws_instance" "ubuntu-tft" {
  ami                    = data.aws_ami.server_ami.id
  instance_type          = "t2.micro"
  key_name               = aws_key_pair.tft-auth.id
  vpc_security_group_ids = [aws_security_group.tft-public-SG.id]
  subnet_id              = aws_subnet.tft-subnet1_public.id
  user_data              = file("userdata.tpl")

  root_block_device {
    volume_size = 8
  }

  tags = {
    Name = "tft-test"
  }
}

My Project
Okay, onto the fun part.
Having felt comfortable enough to write Terraform on my own, I decided to create my own project. I asked ChatGPT to assume the role of a Principal Cloud Security Architect handing down requirements for a new project to a Senior Cloud Security Engineer, and here’s what I got:
Requirements to reference during Terraform project:
- Define IAM Roles, Policies, and permissions. Follow least privilege.
- Configure VPC with subnets, security groups, and ACLs. Ensure network segmentation.
- Use consistent and meaningful tagging of resources.
- Implement encryption at rest. Utilize KMS and S3, RDS, and/or EBS with encryption enabled.
- Ensure data storage such as S3 has proper access control policies.
- Use Terraform scanning tools to identify and mitigate insecure AWS infrastructure.
- Avoid hardcoding sensitive information into Terraform configuration.
- Maintain documentation and version control of Terraform configuration files.

Some of these requirements overlap, like avoiding hardcoded sensitive information in configuration files, or using scanning tools to check Terraform code for vulnerabilities/misconfigurations. Prepare to see some of this overlap.
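To satisfy the no-hardcoding requirement, values like the account ID are referenced as variables (`var.account_id` shows up throughout the IAM policy later) and supplied at plan time. A minimal sketch, assuming the value comes from a gitignored `terraform.tfvars` file or the `TF_VAR_account_id` environment variable:

```hcl
# The actual value lives outside version control
# (terraform.tfvars or the TF_VAR_account_id environment variable).
variable "account_id" {
  description = "AWS account ID used to build IAM policy ARNs"
  type        = string
  sensitive   = true
}
```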
Time to build!
VPC
I started with the VPC because all of the infrastructure (except S3) lives inside of it. This and IAM are, to me, the most essential pieces of the deployment.
Per the requirements, sensitive information like my IP address and account IDs is scrubbed from this deployment. Here is the VPC I created, before running it through tfsec to tighten up the deployment’s security:
resource "aws_vpc" "tf-project-vpc" {
  cidr_block           = "10.1.0.0/16"
  enable_dns_hostnames = true

  tags = {
    Name = "tf-vpc"
  }
}

resource "aws_subnet" "tf-project-public" {
  vpc_id                  = aws_vpc.tf-project-vpc.id
  cidr_block              = "10.1.1.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "us-east-1a"

  tags = {
    Name = "tf-project-pub"
  }
}

resource "aws_subnet" "tfr-project-private" {
  vpc_id            = aws_vpc.tf-project-vpc.id
  cidr_block        = "10.1.2.0/24"
  availability_zone = "us-east-1b"

  tags = {
    Name = "tf-project-priv"
  }
}

resource "aws_internet_gateway" "tf-project-igw" {
  vpc_id = aws_vpc.tf-project-vpc.id

  tags = {
    Name = "tf-project-igw"
  }
}

resource "aws_eip" "tf-project-nat-eip" {
  domain = "vpc"
}

resource "aws_nat_gateway" "tf-project-ngw" {
  allocation_id = aws_eip.tf-project-nat-eip.allocation_id
  subnet_id     = aws_subnet.tf-project-public.id
}

resource "aws_route_table" "tf-project-pub-rt" {
  vpc_id = aws_vpc.tf-project-vpc.id

  tags = {
    Name = "tf-project-pub-rt"
  }
}

resource "aws_route_table" "tf-project-priv-rt" {
  vpc_id = aws_vpc.tf-project-vpc.id

  tags = {
    Name = "tf-project-priv-rt"
  }
}

resource "aws_route" "tf-public-default-route" {
  route_table_id         = aws_route_table.tf-project-pub-rt.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.tf-project-igw.id
}

resource "aws_route" "tf-private-default-route" {
  route_table_id         = aws_route_table.tf-project-priv-rt.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.tf-project-ngw.id # NAT gateways use nat_gateway_id, not gateway_id
}

resource "aws_route_table_association" "tf-project-pub-assoc" {
  subnet_id      = aws_subnet.tf-project-public.id
  route_table_id = aws_route_table.tf-project-pub-rt.id
}

resource "aws_route_table_association" "tf-project-priv-assoc" {
  subnet_id      = aws_subnet.tfr-project-private.id
  route_table_id = aws_route_table.tf-project-priv-rt.id
}

resource "aws_security_group" "tf-project-pub-ssh" {
  name        = "tf-project-pub-ssh"
  description = "Allow SSH from my IP"
  vpc_id      = aws_vpc.tf-project-vpc.id

  ingress {
    description = "Allow ssh from my IP"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["<0.0.0.0/0>"] # scrubbed; my IP in the real config
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "tf-project-pub-ssh"
  }
}

resource "aws_security_group" "tf-project-priv-ssh" {
  name        = "tf-project-priv-ssh"
  description = "Allow Bastion hosts from public subnet"
  vpc_id      = aws_vpc.tf-project-vpc.id

  ingress {
    description = "Allow Bastion hosts from public subnet"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.1.1.0/24"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "tf-project-priv-ssh"
  }
}

resource "aws_security_group" "tf-project-pub-web" {
  name        = "tf-project-pub-web"
  description = "Allow web traffic from my IP"
  vpc_id      = aws_vpc.tf-project-vpc.id

  ingress {
    description = "Allow web traffic from my IP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["<0.0.0.0/0>"] # scrubbed; my IP in the real config
  }

  ingress {
    description = "Allow web traffic from my IP"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "tf-project-pub-web"
  }
}

Pretty long, I know. Unfortunately, there will be plenty more of these long blocks of Terraform. Onto IAM…
IAM
The requirement for IAM is:
“Define IAM Roles, Policies, and permissions. Follow least privilege.”
Because of this, I wanted to restrict users of this deployment to the VPC created above. As stated earlier, the reading of documentation never goes away; solving this problem took extensive research, reading, and knowing what questions to Google. Here is what I ended up with. Just like the last file, this is before running tfsec to scan for potential vulnerabilities resulting from misconfiguration.
resource "aws_iam_policy" "LockdownVPC-1" {
  name   = "LockdownVPC-1"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ec2:DetachVolume",
        "ec2:AttachVolume",
        "ec2:RebootInstances",
        "ec2:TerminateInstances",
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "arn:aws:ec2:us-east-1:${var.account_id}:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:InstanceProfile": "arn:aws:iam::${var.account_id}:instance-profile/VPCLockDown"
        }
      }
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:us-east-1:${var.account_id}:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:InstanceProfile": "arn:aws:iam::${var.account_id}:instance-profile/VPCLockDown"
        }
      }
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:us-east-1:${var.account_id}:subnet/*",
      "Condition": {
        "StringEquals": {
          "ec2:vpc": "${aws_vpc.tf-project-vpc.arn}"
        }
      }
    },
    {
      "Sid": "VisualEditor3",
      "Effect": "Allow",
      "Action": [
        "ec2:RevokeSecurityGroupIngress",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:DeleteRoute",
        "ec2:DeleteNetworkAcl",
        "ec2:DeleteNetworkAclEntry",
        "ec2:DeleteRouteTable"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:vpc": "${aws_vpc.tf-project-vpc.arn}"
        }
      }
    },
    {
      "Sid": "VisualEditor4",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:us-east-1:${var.account_id}:key-pair/*",
        "arn:aws:ec2:us-east-1:${var.account_id}:volume/*",
        "arn:aws:ec2:us-east-1::image/*",
        "arn:aws:ec2:us-east-1::snapshot/*",
        "arn:aws:ec2:us-east-1:${var.account_id}:network-interface/*",
        "arn:aws:ec2:us-east-1:${var.account_id}:security-group/*"
      ]
    },
    {
      "Sid": "VisualEditor5",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "iam:GetInstanceProfile",
        "ec2:CreateKeyPair",
        "ec2:CreateSecurityGroup",
        "iam:ListInstanceProfiles"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor6",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::${var.account_id}:role/VPCLockDown"
    },
    {
      "Sid": "VisualEditor7",
      "Effect": "Allow",
      "Action": "iam:ChangePassword",
      "Resource": [
        "${aws_iam_user.tf-project-user.arn}",
        "${aws_iam_user.tf-project-user-2.arn}"
      ]
    },
    {
      "Sid": "VisualEditor8",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor9",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "${aws_s3_bucket.<UNIQUE-BUCKET-NAME>.arn}"
    }
  ]
}
EOF
}

resource "aws_iam_policy_attachment" "tf-project-role-attachment" {
  name       = "tf-project-role-attachment"
  users      = [aws_iam_user.tf-project-user.name, aws_iam_user.tf-project-user-2.name]
  roles      = [aws_iam_role.tf-project-vpc-role.name]
  policy_arn = aws_iam_policy.LockdownVPC-1.arn
}

resource "aws_iam_role" "tf-project-vpc-role" {
  name               = "TfProjectVPCUserRole"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com",
        "AWS": "arn:aws:iam::${var.account_id}:user/${aws_iam_user.tf-project-user.name}"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_user" "tf-project-user" {
  name          = "TfProjectUser"
  force_destroy = true
}

resource "aws_iam_user" "tf-project-user-2" {
  name = "TfProjectUser2"
}

resource "pgp_key" "TfProjectUser" {
  name    = "TfProjectUser"
  email   = "xxxxx@xxxxx"
  comment = "Generated PGP Key"
}

resource "pgp_key" "TfProjectUser2" {
  name    = "TfProjectUser2"
  email   = "xxxxx@xxxxx"
  comment = "Generated Second PGP Key"
}

resource "aws_iam_user_login_profile" "TfProjectUser-Login" {
  user                    = aws_iam_user.tf-project-user.name
  pgp_key                 = pgp_key.TfProjectUser.public_key_base64
  password_reset_required = true
}

resource "aws_iam_user_login_profile" "TfProjectUser2-Login" {
  user                    = aws_iam_user.tf-project-user-2.name
  pgp_key                 = pgp_key.TfProjectUser2.public_key_base64
  password_reset_required = true
}

data "pgp_decrypt" "TfProjectUser" {
  private_key         = pgp_key.TfProjectUser.private_key
  ciphertext          = aws_iam_user_login_profile.TfProjectUser-Login.encrypted_password
  ciphertext_encoding = "base64"
}

data "pgp_decrypt" "TfProjectUser2" {
  private_key         = pgp_key.TfProjectUser2.private_key
  ciphertext          = aws_iam_user_login_profile.TfProjectUser2-Login.encrypted_password
  ciphertext_encoding = "base64"
}

output "password-TfProjectUser" {
  value     = data.pgp_decrypt.TfProjectUser.plaintext
  sensitive = true
}

output "password-TfProjectUser2" {
  value     = data.pgp_decrypt.TfProjectUser2.plaintext
  sensitive = true
}

This file restricts access to EC2 instances inside of the project’s VPC, creates a PGP key for each user, and generates a temporary password for the two users that must be changed on first login. I’m pretty proud of this. It could be a lot “cleaner”, but the feeling I got when I validated that this was working is irreplaceable. I think I followed the concept of least privilege pretty well.
S3 + Encryption at Rest
For S3, I used a similar strategy to IAM: least privilege. I only wanted to grant access to the bucket created by this deployment. Encryption at rest also needed to be enabled per the requirements. Before security scanning, here’s what I produced:
resource "aws_s3_bucket" "<UNIQUE-BUCKET-NAME>" {
  bucket        = "<UNIQUE-BUCKET-NAME>"
  force_destroy = true

  tags = {
    Name = "Tf-Project-Bucket"
  }
}

resource "aws_s3_bucket_ownership_controls" "tf-project-own-control" {
  bucket = aws_s3_bucket.<UNIQUE-BUCKET-NAME>.id

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

resource "aws_s3_bucket_acl" "tf-project-s3-acl" {
  depends_on = [aws_s3_bucket_ownership_controls.tf-project-own-control]
  bucket     = aws_s3_bucket.<UNIQUE-BUCKET-NAME>.id
  acl        = "private"
}

resource "aws_s3_bucket" "<UNIQUE-LOG-BUCKET-NAME>" {
  bucket        = "<UNIQUE-LOG-BUCKET-NAME>"
  force_destroy = true

  tags = {
    Name = "Tf-Project-s3-Log-Bucket"
  }
}

resource "aws_s3_bucket_logging" "tf-project-logging" {
  bucket        = aws_s3_bucket.<UNIQUE-BUCKET-NAME>.id
  target_bucket = aws_s3_bucket.<UNIQUE-LOG-BUCKET-NAME>.id
  target_prefix = "logs/"
}

resource "aws_s3_bucket_versioning" "tf-bucket-versioning" {
  bucket = aws_s3_bucket.<UNIQUE-BUCKET-NAME>.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_policy" "tf-project-bucket-policy" {
  bucket = aws_s3_bucket.<UNIQUE-BUCKET-NAME>.id
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "${aws_iam_user.tf-project-user.arn}",
          "${aws_iam_user.tf-project-user-2.arn}"
        ]
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Resource": [
        "${aws_s3_bucket.<UNIQUE-BUCKET-NAME>.arn}",
        "${aws_s3_bucket.<UNIQUE-BUCKET-NAME>.arn}/*"
      ]
    }
  ]
}
EOF
}

resource "aws_kms_key" "tf-project-bucket-key" {
  description             = "Encryption key for Project Bucket"
  deletion_window_in_days = 10
}

resource "aws_kms_alias" "tf-project-key-alias" {
  name          = "alias/tf-project-bucket-key"
  target_key_id = aws_kms_key.tf-project-bucket-key.key_id
}

resource "aws_s3_bucket_server_side_encryption_configuration" "tf-project-sse" {
  bucket = aws_s3_bucket.<UNIQUE-BUCKET-NAME>.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.tf-project-bucket-key.arn
      sse_algorithm     = "aws:kms"
    }
    bucket_key_enabled = true
  }
}

Another proud moment for me after this was created and validated. At this point, I felt like I could start doing DevOps today.
Additional Requirements
As for the additional requirements: version control was kept for this project in a GitHub repository that is linked at the end of this post, all significant resources were tagged with a consistent naming convention, and sensitive information was kept out of the files. I’d love to know your opinion if you see anything that I missed.
Security Scanning
To finish this project, these configuration files needed to be run through tfsec to scan for misconfigurations that weaken the security of your infrastructure. So I ran the files through and corrected all the misconfigurations it found. To spare the length of this blog, I’ll just post the GitHub repo that contains the final version of all the files.
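As one example of the kind of finding tfsec raises, a bucket without a public access block gets flagged even if its ACL is private; the fix is a short resource like the sketch below (the resource name is my own, and the bucket reference keeps the same placeholder used above):

```hcl
# Explicitly blocks every form of public access to the project bucket,
# the standard remediation for tfsec's S3 public-access findings.
resource "aws_s3_bucket_public_access_block" "tf-project-pab" {
  bucket = aws_s3_bucket.<UNIQUE-BUCKET-NAME>.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```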
Attached is the final GitHub repo, containing all the files from the project, with version control. Try it for yourself!
Lessons Learned
In conclusion, my journey learning Terraform has been an amazing experience. It’s helped me find a better way to manage my resources on AWS and reduce unnecessary costs. By using Terraform, I was able to automate and maintain my cloud infrastructure, resulting in faster, repeatable deployments while eliminating the human error that comes with manually clicking through the UI. The hands-on approach I took gave me the skills to build some fairly complex configurations while trying to adhere to best practices. With Terraform in my toolkit, I feel confident in my ability to navigate the ever-changing landscape of cloud infrastructure and streamline deployments throughout my professional career. Thank you for reading!