During the last few years, we have been hearing about a lot of new practices applied to Software Development and DevOps. One of these topics is Infrastructure as Code. Two of the most well-known solutions in this space are probably Terraform and CloudFormation.

We have already discussed Terraform on this blog. If you take a look at the basic code in that example, or if you are already using Terraform, you are probably aware that as the infrastructure grows, the Terraform code, being quite verbose, grows fast too and, unless we have a very good code structure, it can get very messy. The same can be said about CloudFormation. In addition, neither is as developer-friendly as common programming languages are, and both need to be learned as a new language.

To solve the first problem, code turning messy over time, there are some measures to split the Terraform code, like creating modules. However, judging from the projects I have looked at and the articles I have read, there seems to be no agreement about the best way to split the code and, if you do not work regularly with this kind of project, it is usually very hard to find things and move around.
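For reference, splitting Terraform code into modules typically looks something like this (just a sketch; the module path and variable name here are hypothetical):

```hcl
# Root configuration calling a hypothetical local module
module "network" {
  source = "./modules/network"

  # Input variable that the module would define
  vpc_cidr = "10.0.0.0/16"
}
```

Each module lives in its own directory with its own resources, variables and outputs, which is exactly where the disagreements about structure tend to start.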

Trying to solve this problem and the second one, using languages well known by developers, there are projects like Pulumi that bring infrastructure code to familiar programming languages and tools, including packaging and APIs.

Investigating a little bit, I have found another one, the AWS Cloud Development Kit (AWS CDK), which is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. As said, AWS CDK was initially designed for CloudFormation, but there is also an implementation that generates Terraform code.

Today’s purpose is just to play a little bit with this technology, specifically the variant that generates CloudFormation code, and not to compare the tools (I do not know enough about them to compare them… yet).

I was going to write a long post with examples but I have found a great introduction workshop offered by AWS that does the job nicely. For this reason, I am just leaving here my project on GitHub.

One of the nice things I have found about AWS CDK is that it supports multiple languages:

  • TypeScript
  • JavaScript
  • Python
  • Java
  • C#

And, after writing a few lines, it feels very comfortable to be writing code using Java APIs and all the power of the multiple tools that exist in the Java ecosystem. And, of course, to be using Java packages to organise the infrastructure code.

Just another alternative in our toolbox that seems worth exploring.


Terraform: EC2 + Apache

Today, let’s do some hands-on work. We are going to have a (very) short introduction to Terraform and, in addition to covering some commands, we are going to deploy an EC2 instance running an Apache server in AWS.

This article assumes the reader has some basic AWS knowledge because the different components created on AWS, and the reasons they are created, are not in the scope of this small demo. If you do not have any previous AWS knowledge, you can still follow the demo and take the opportunity to learn a few basic AWS components and one of its most basic architectures.

We are going to be using version 0.12.29 of Terraform for this small demo.

Terraform is a tool that allows us to specify, provision and manage our infrastructure and run it on the most popular providers. It is a tool that embraces the principle of Infrastructure as Code.

Some basic commands we will be using during this article are:

  • init: The 'terraform init' command is used to initialise a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.
  • validate: The 'terraform validate' command validates the configuration files in a directory, referring only to the configuration and not accessing any remote services such as remote state, provider APIs, etc.
  • plan: The 'terraform plan' command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.
  • apply: The 'terraform apply' command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a 'terraform plan' execution plan. A specific resource can be addressed with the flag '-target'.
  • destroy: The 'terraform destroy' command is used to destroy the Terraform-managed infrastructure. A specific resource can be addressed with the flag '-target'.

The option '-auto-approve' skips the interactive confirmation (answering yes).

The infrastructure we need to deploy our Apache server in AWS is:

  1. Create a VPC
  2. Create an internet gateway
  3. Create a custom route table
  4. Create a subnet
  5. Associate the subnet with the route table
  6. Create a security group to allow ports 22, 80, 443
  7. Create a network interface with IP in the subnet
  8. Assign an elastic IP to the network interface
  9. Create an Ubuntu server and install/enable Apache 2

Let’s build our infrastructure now.

First thing we need to do is to define our provider. In this case, AWS.

# Configure the AWS Provider
provider "aws" {
  version = "~> 2.70"
  # Optional
  region  = "eu-west-1"
}

Now, we can start to follow the steps defined previously:

# 1. Create a VPC
resource "aws_vpc" "pr03-vpc" {
  # Example range; adjust to your needs
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "pr03-vpc"
  }
}

Here, we are creating a VPC and assigning a range of IPs.
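As a side note, hard-coded values like this CIDR range can be extracted into a variable, which makes the configuration easier to reuse (a sketch; the variable name is hypothetical):

```hcl
variable "vpc_cidr" {
  description = "CIDR range for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

# The resource would then reference it as:
#   cidr_block = var.vpc_cidr
```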

# 2. Create an internet gateway
resource "aws_internet_gateway" "pr03-gw" {
  vpc_id = aws_vpc.pr03-vpc.id

  tags = {
    Name = "pr03-gw"
  }
}

In this step, we are defining our Internet Gateway and attaching it to the created VPC.

# 3. Create a custom route table
resource "aws_route_table" "pr03-r" {
  vpc_id = aws_vpc.pr03-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.pr03-gw.id
  }

  route {
    ipv6_cidr_block = "::/0"
    gateway_id      = aws_internet_gateway.pr03-gw.id
  }

  tags = {
    Name = "pr03-r"
  }
}

Now, we need to allow the traffic to arrive at our gateway. In this case, we are creating one entry for IPv4 and one for IPv6.

# 4. Create a subnet
resource "aws_subnet" "pr03-subnet-1" {
  vpc_id            = aws_vpc.pr03-vpc.id
  # Example range within the VPC CIDR block
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-west-1a"

  tags = {
    Name = "pr03-subnet-1"
  }
}

In this step, we are creating a subnet in our VPC where our server will live.

# 5. Associate the subnet with the route table
resource "aws_route_table_association" "pr03-a" {
  subnet_id      = aws_subnet.pr03-subnet-1.id
  route_table_id = aws_route_table.pr03-r.id
}

And, we need to associate the subnet with the routing table.

# 6. Create a security group to allow ports 22, 80, 443
resource "aws_security_group" "pr03-allow_web_traffic" {
  name        = "allow_web_traffic"
  description = "Allow web traffic"
  vpc_id      = aws_vpc.pr03-vpc.id

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    # We want internet to access it
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    # We want internet to access it
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    # We want internet to access it
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "pr03-allow_web_traffic"
  }
}

Now, it is time to create our security group to guarantee that only the ports we want are accessible and nothing else. In this specific case, the ports are going to be 22, 80 and 443 for incoming traffic and, we are going to allow all outgoing traffic.
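As a side note, since Terraform 0.12 the three almost identical ingress blocks could be generated with a 'dynamic' block instead of being repeated (a sketch; the variable name is hypothetical):

```hcl
variable "ingress_ports" {
  description = "Ports open to incoming traffic"
  type        = list(number)
  default     = [22, 80, 443]
}

# Inside the aws_security_group resource, this replaces
# the three hand-written ingress blocks:
dynamic "ingress" {
  for_each = var.ingress_ports
  content {
    from_port   = ingress.value
    to_port     = ingress.value
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Whether this is clearer than three explicit blocks is a matter of taste; for a small demo the explicit version is arguably easier to read.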

# 7. Create a network interface with an IP in the subnet
resource "aws_network_interface" "pr03-nic" {
  subnet_id       = aws_subnet.pr03-subnet-1.id
  # Example address within the subnet range
  private_ips     = ["10.0.1.50"]
  security_groups = [aws_security_group.pr03-allow_web_traffic.id]
}

The next step is to create a network interface in our subnet.

# 8. Assign an elastic IP to the network interface
resource "aws_eip" "pr03-eip" {
  vpc                       = true
  network_interface         = aws_network_interface.pr03-nic.id
  # Must match the private IP assigned to the network interface
  associate_with_private_ip = "10.0.1.50"
  depends_on                = [aws_internet_gateway.pr03-gw]
}

And, we associate an Elastic IP with the network interface to make the server reachable from the Internet.

# 9. Create an Ubuntu server and install/enable Apache 2
resource "aws_instance" "pr03-web_server" {
  ami               = "ami-099926fbf83aa61ed"
  instance_type     = "t2.micro"
  availability_zone = "eu-west-1a"
  key_name          = "terraform-tutorial"

  network_interface {
    device_index         = 0
    network_interface_id = aws_network_interface.pr03-nic.id
  }

  # Install Apache 2
  user_data = <<-EOF
              #!/bin/bash
              sudo apt update -y
              sudo apt install apache2 -y
              sudo systemctl start apache2
              sudo bash -c 'echo your very first web server > /var/www/html/index.html'
              EOF

  tags = {
    Name = "pr03-web_server"
  }
}

The almost-final step is to create our Ubuntu server and install our Apache service.
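A small detail worth noting in the user_data script is the 'sudo bash -c' wrapper: with a plain 'sudo echo ... > file' the redirection would run in the unprivileged shell, not as root. The effect of that line can be simulated locally (using a scratch directory instead of /var/www/html):

```shell
# Simulate the index.html written by the user_data script,
# writing to a temporary directory instead of /var/www/html
mkdir -p /tmp/pr03-www
bash -c 'echo your very first web server > /tmp/pr03-www/index.html'
cat /tmp/pr03-www/index.html
```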

Finally, we are going to add a few outputs to check some of the resultant information after the execution has been completed. Just to double-check everything has worked without the need to go to the AWS Console:

# Print the elastic IP when execution finishes
output "server_public_ip" {
  value = aws_eip.pr03-eip.public_ip
}

# Print the private IP of the server when execution finishes
output "server_private_ip" {
  value = aws_instance.pr03-web_server.private_ip
}

# Print the ID of the server when execution finishes
output "server_id" {
  value = aws_instance.pr03-web_server.id
}

With all of this created, we can start executing Terraform commands:

  1. terraform init: This will initialise Terraform.
  2. terraform plan: It will show us what actions are going to be executed.
  3. terraform apply -auto-approve: It will execute our changes in AWS.

If you have been following along, once the terraform apply command has finished, we will see something like:

Execution result

Now, we can check on the AWS Console that our different components are there and, using the public IP, that the Apache service is running.

In theory, everything we have deployed on AWS belongs to the AWS Free Tier but, just in case, and to avoid any costs, let's take down our infrastructure now. For this, we execute the terraform destroy command:

terraform destroy -auto-approve

The code for this demo can be found here.
