New Book Day: A Developer’s Essential Guide to Docker Compose

As developers, we now have a wide variety of software components and cloud services at our disposal, a scenario we could not even imagine in the past.
I still remember when we had to set up our application servers and databases on bare-metal servers.
This burst of computing functionality, in the form of the cloud and managed services, allows us to use more tools for our applications and build better products.
At the same time, concerns like orchestrating workloads and isolating and shipping them moved to a whole different level.
Containerization came to the rescue.
Docker took over the microservice world and became the dominant solution for deploying microservices, and in certain cases even databases and brokers.
This brings us to the development process. Production deployments are a huge chapter of their own: platform engineers need to take care of the security, scaling, and robustness of container-based deployments.
But the development process is also affected:

  • SQL/NoSQL databases, as well as purpose-built databases like InfluxDB, need to be available locally
  • Microservice application scenarios need to be tested
  • Other components, such as message brokers, need to be available for testing

A step towards the challenges mentioned above is to utilise Docker and its rich functionality. As convenient as it is to spin up containers locally, you still end up managing containers, volumes, and networks. Most of the time these are spun up ad hoc or through scripts that a team has to maintain.

Docker Compose is one of the solutions to the problems described. With Compose you can spin up multiple containers locally, organised using YAML files.
Here are some of the benefits when used during development:

  • Containers are organised and can be spun up or shut down in an orderly way
  • Applications can be placed on different networks
  • Volumes can be created and attached to containers in a managed way
  • Containers resolve each other’s location automatically through built-in DNS; manual linking is not needed
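
To give a rough idea of what this looks like in practice, here is a minimal, illustrative docker-compose.yml; the service names, images, and network/volume names are assumptions made up for this example, not taken from the book:

version: "3.8"

services:
  app:
    image: my-service:latest          # hypothetical application image
    networks:
      - backend
    depends_on:
      - db

  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # data survives container restarts
    networks:
      - backend

networks:
  backend:

volumes:
  db-data:

With a file like this, the app container reaches the database simply as db through Compose’s built-in DNS, both services share the backend network, and the whole stack starts and stops with a single docker compose up or docker compose down.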

I have been using Compose for years. It has made my development process much more efficient, and in certain cases it has even helped me with actual production deployments. Writing this book was an opportunity for me to advocate for Docker Compose.

Thanks to the amazing people at Packt Publishing, it was possible to write this book and give back to the community.

The book is focused on various aspects of Compose.

It begins with an extensive look at Docker Compose: how it is implemented, how it interacts with the Docker engine, the available commands, and the functionality it provides.

From there we dive deep into day-to-day development using Compose. We spin up complex infrastructure locally and simulate microservice environments. We take this concept further and incrementally simulate situations we have to deal with in a production deployment, along with workarounds. Lastly, we use Compose for CI/CD jobs on popular solutions like GitHub Actions, Travis, and Bitbucket Pipelines.

The last part of the book is all about deploying to production. All the knowledge acquired previously can be applied to actual production deployments, which come with some standards:

  • Infrastructure as Code
  • Container registry
  • Networks
  • Load Balancing
  • Autoscaling

The above, in combination with the knowledge accumulated so far using Compose, is used for a deployment on AWS and Azure, the most popular cloud providers, with the extra help of Infrastructure as Code using Terraform.

Lastly, since many production deployments nowadays reside on Kubernetes, we build a bridge between Compose and Kubernetes by migrating the existing Compose deployment using Kompose.

You can find the book on Amazon as well as on the Packt portal.
Happy Learning!

Autoscaling Groups with terraform on AWS Part 3: Elastic Load Balancer and health check

Previously we set up some Apache Ignite servers in an autoscaling group. The next step is to add a Load Balancer in front of the autoscaling group.

Before any steps, let’s add some variables to variables.tf.

variable "autoscalling_group_elb_name" {
  type = string
  default = "autoscallinggroupelb"
}

variable "elb_security_group_name" {
  type = string
  default = "elb_name"
}

First we shall add the security group for the Load Balancer.

resource "aws_security_group" "elb_security_group" {
  name = var.elb_security_group_name
  # allow all outbound traffic from the Load Balancer
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  # allow inbound TCP traffic on ports 80-8080 from anywhere
  ingress {
    from_port = 80
    to_port = 8080
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Then we need to retrieve the availability zones for the Load Balancer.

data "aws_availability_zones" "available" {
  state = "available"
}

Then let’s add the Load Balancer.

resource "aws_elb" "autoscalling_group_elb" {
  name = var.autoscalling_group_elb_name
  security_groups = ["${aws_security_group.elb_security_group.id}"]
  availability_zones = data.aws_availability_zones.available.names
  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    interval = 30
    # poll the Ignite REST API version endpoint on each instance
    target = "HTTP:8080/ignite?cmd=version"
  }
  # forward HTTP traffic from port 80 on the ELB to port 8080 on the instances
  listener {
    lb_port = 80
    lb_protocol = "http"
    instance_port = "8080"
    instance_protocol = "http"
  }
}

Then let’s attach the Load Balancer to the autoscaling group and set the health check type to ELB.

resource "aws_autoscaling_group" "autoscalling_group_config" {
  name = var.auto_scalling_group_name
  max_size = 3
  min_size = 2
  health_check_grace_period = 300
  health_check_type = "ELB"
  desired_capacity = 3
  force_delete = true
  vpc_zone_identifier = [for s in data.aws_subnet.subnet_values: s.id]
  load_balancers = ["${aws_elb.autoscalling_group_elb.name}"]

  launch_configuration = aws_launch_configuration.launch-configuration.name

  lifecycle {
    create_before_destroy = true
  }
}

As before, apply your Terraform configuration:

> terraform apply
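
To verify the setup once the apply finishes, it can help to expose the Load Balancer’s DNS name as a Terraform output. A minimal sketch, where the output name elb_dns_name is just illustrative:

output "elb_dns_name" {
  # public DNS name of the Elastic Load Balancer
  value = aws_elb.autoscalling_group_elb.dns_name
}

After the next apply, the printed DNS name can be used to hit the same /ignite?cmd=version path that the health check targets, this time through port 80 on the Load Balancer.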

Autoscaling Groups with terraform on AWS Part 2: Instance security group and Boot Script

Previously we followed the minimum steps required to spin up an autoscaling group with Terraform. In this post we shall add a security group for the instances of the autoscaling group and an HTTP server to serve the requests.

Using our base configuration we shall create the security group for the instances.

resource "aws_security_group" "instance_security_group" {
  name = "autoscalling_security_group"
  ingress {
    from_port = 8080
    to_port = 8080
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port = 0
    protocol = "-1"
    to_port = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Our instances will spin up a server listening on port 8080, thus the security group shall allow ingress traffic to that port. Pay attention to the egress rule: we shall access resources from the internet, thus we want to be able to download them.

Then we just set the security group on the launch configuration.

resource "aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
  security_groups = ["${aws_security_group.instance_security_group.id}"]
}

Now it’s time to spin up a server on those instances. The aws_launch_configuration resource gives us the option to specify a startup script (user data on AWS EC2). I shall use the Apache Ignite server and its HTTP interface.

resource "aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
  security_groups = ["${aws_security_group.instance_security_group.id}"]
  user_data = <<-EOF
              #!/bin/bash
              # install Java and unzip
              yum install java unzip -y
              # download and extract Apache Ignite
              curl https://www-eu.apache.org/dist/ignite/2.7.6/apache-ignite-2.7.6-bin.zip -o apache-ignite.zip
              unzip apache-ignite.zip -d /opt/apache-ignite
              cd /opt/apache-ignite/apache-ignite-2.7.6-bin/
              # enable the ignite-rest-http module so the REST API is served on port 8080
              cp -r libs/optional/ignite-rest-http/ libs/ignite-rest-http/
              # start an Ignite node with the example cache configuration
              ./bin/ignite.sh ./examples/config/example-cache.xml
              EOF

And now we are ready to spin up the autoscaling group as shown previously.

> terraform init
> terraform apply

We successfully added an instance security group and a bootstrap script. The next challenge is to add load balancing and health checks.

Autoscaling Groups with terraform on AWS Part 1: Basic Steps

So you want to create an autoscaling group on AWS using Terraform. The following are the minimum steps to achieve this.

Before writing the actual code, you shall specify the AWS Terraform provider as well as the region in the provider.tf file.

provider "aws" {
  version = "~> 2.0"
  region  = "eu-west-1"
}

terraform {
  required_version = "~>0.12.0"
}

The next step is to define some variables in the variables.tf file.

variable "vpc_id" {
  type = string
  default = "your-vpc-id"
}

variable "launch_configuration_name" {
  type = string
  default = "launch_configuration_name"
}

variable "auto_scalling_group_name" {
  type = string
  default = "auto_scalling_group_name"
}

variable "image_id" {
  type = string
  default =  "image-id-based-on-the-region"
}

variable "instance_type" {
  type = string
  default = "t2.micro"
}

Then we add the autoscaling group configuration to the autoscalling_group.tf file.

data "aws_subnet_ids" "subnets" {
  vpc_id = var.vpc_id
}

data "aws_subnet" "subnet_values" {
  for_each = data.aws_subnet_ids.subnets.ids
  id       = each.value
}

resource "aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
}

resource "aws_autoscaling_group" "autoscalling_group_config" {
  name = var.auto_scalling_group_name
  max_size = 3
  min_size = 2
  health_check_grace_period = 300
  health_check_type = "EC2"
  desired_capacity = 3
  force_delete = true
  vpc_zone_identifier = [for s in data.aws_subnet.subnet_values: s.id]

  launch_configuration = aws_launch_configuration.launch-configuration.name

  lifecycle {
    create_before_destroy = true
  }
}

Let’s break them down.
The VPC id is needed in order to identify the subnets used by your autoscaling group.
Thus the vpc_zone_identifier value derives the subnets from the VPC defined.

Then you have to create a launch configuration.
The launch configuration specifies the image id, which is based on your region, and the instance type.
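
Instead of hard-coding the image id, one option is to look the AMI up with a data source. A minimal sketch, assuming an Amazon Linux 2 AMI is wanted (the filter values are illustrative):

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  # filter on the Amazon Linux 2 naming convention
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

The launch configuration could then reference data.aws_ami.amazon_linux.id in place of var.image_id.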

To execute this, provided you have your AWS credentials in place, you have to initialize and then apply:

> terraform init
> terraform apply

In the next tutorial we shall focus on adding an instance security group and a boot script.