Kafka & Zookeeper for Development: Connecting Clients to the Cluster

Previously we managed to connect our Kafka brokers to a ZooKeeper ensemble. We also brought down some brokers, checked the leader election, and produced/consumed some messages.

Now we want to make sure that we will be able to connect to those nodes from the host. The problem with connecting to the ensemble we created previously is that it is located inside the container network. When a client interacts with one of the brokers and receives the full list of brokers, it will receive a list of IPs that are not accessible to it.

So the initial handshake of a client will be successful, but then the client will try to interact with some unreachable hosts.

To tackle this we will use a combination of workarounds.

The first one is to bind the port of each Kafka broker to a different local IP.

kafka-1 will be mapped to 127.0.0.1:9092
kafka-2 will be mapped to 127.0.0.2:9092
kafka-3 will be mapped to 127.0.0.3:9092

So let’s create the aliases for those addresses.

sudo ifconfig lo0 alias 127.0.0.2
sudo ifconfig lo0 alias 127.0.0.3
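
The ifconfig syntax above is the macOS one. On Linux the same loopback aliases can usually be added with the ip tool; a minimal sketch, assuming the loopback interface is named lo:

# add secondary loopback addresses so the broker ports can bind to them
sudo ip addr add 127.0.0.2/32 dev lo
sudo ip addr add 127.0.0.3/32 dev lo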

Now it’s possible to do the IP binding. Let’s also put those entries into our /etc/hosts. By doing this, our local machine and our Docker network agree on the hostname through which each broker should be reached.

127.0.0.1	kafka-1
127.0.0.2	kafka-2
127.0.0.3	kafka-3

The next step is to change KAFKA_ADVERTISED_LISTENERS on each broker. We will set it to the DNS entry of each broker. By setting KAFKA_ADVERTISED_LISTENERS, outside clients connect to an address that is reachable to them and not an address that only works inside the internal network. Further explanations can be found on this blog.

  kafka-1:
    container_name: kafka-1
    image: confluent/kafka
    ports:
      - "127.0.0.1:9092:9092"
    volumes:
      - type: bind
        source: ./server1.properties
        target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-1:9092"
  kafka-2:
    container_name: kafka-2
    image: confluent/kafka
    ports:
      - "127.0.0.2:9092:9092"
    volumes:
      - type: bind
        source: ./server2.properties
        target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-2:9092"
  kafka-3:
    container_name: kafka-3
    image: confluent/kafka
    ports:
      - "127.0.0.3:9092:9092"
    volumes:
      - type: bind
        source: ./server3.properties
        target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-3:9092"

We see the port binding change as well as the KAFKA_ADVERTISED_LISTENERS change. Now let’s wrap everything together in our docker-compose file.

version: "3.8"
services:
  zookeeper-1:
    container_name: zookeeper-1
    image: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: "1"
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  zookeeper-2:
    container_name: zookeeper-2
    image: zookeeper
    ports:
      - "2182:2181"
    environment:
      ZOO_MY_ID: "2"
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  zookeeper-3:
    container_name: zookeeper-3
    image: zookeeper
    ports:
      - "2183:2181"
    environment:
      ZOO_MY_ID: "3"
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
  kafka-1:
    container_name: kafka-1
    image: confluent/kafka
    ports:
      - "127.0.0.1:9092:9092"
    volumes:
      - type: bind
        source: ./server1.properties
        target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-1:9092"
  kafka-2:
    container_name: kafka-2
    image: confluent/kafka
    ports:
      - "127.0.0.2:9092:9092"
    volumes:
      - type: bind
        source: ./server2.properties
        target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-2:9092"
  kafka-3:
    container_name: kafka-3
    image: confluent/kafka
    ports:
      - "127.0.0.3:9092:9092"
    volumes:
      - type: bind
        source: ./server3.properties
        target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-3:9092"
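
To sanity check the setup, point a client on your host at one broker and let it discover the rest through the advertised listeners. A minimal sketch, assuming the Apache Kafka command line tools are installed locally; the topic name is just an example, and older tool versions use --broker-list instead of --bootstrap-server for the console producer:

# create a replicated topic, produce to one broker, consume from another
kafka-topics.sh --bootstrap-server kafka-1:9092 --create --topic connectivity-test --partitions 3 --replication-factor 3
kafka-console-producer.sh --bootstrap-server kafka-1:9092 --topic connectivity-test
kafka-console-consumer.sh --bootstrap-server kafka-2:9092 --topic connectivity-test --from-beginning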

Last but not least you can find the code on github.

Apache Ignite on your Kubernetes Cluster Part 4: Deployment explained

Previously we saw the Ignite configuration that comes with the Kubernetes installation.
The default configuration does not have persistence enabled so we won’t focus on any storage classes provided by the helm chart.

The default installation uses a StatefulSet. You can find more information on StatefulSets in the Kubernetes documentation.

> kubectl get statefulset ignite-cache -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: 2020-04-09T12:29:04Z
  generation: 1
  labels:
    app.kubernetes.io/instance: ignite-cache
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ignite
    helm.sh/chart: ignite-1.0.1
  name: ignite-cache
  namespace: default
  resourceVersion: "281390"
  selfLink: /apis/apps/v1/namespaces/default/statefulsets/ignite-cache
  uid: fcaa7bef-84cd-4e7c-aa33-a4312a1d47a9
spec:
  podManagementPolicy: OrderedReady
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ignite-cache
  serviceName: ignite-cache
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ignite-cache
    spec:
      containers:
      - env:
        - name: IGNITE_QUIET
          value: "false"
        - name: JVM_OPTS
          value: -Djava.net.preferIPv4Stack=true
        - name: OPTION_LIBS
          value: ignite-kubernetes,ignite-rest-http
        image: apacheignite/ignite:2.7.6
        imagePullPolicy: IfNotPresent
        name: ignite
        ports:
        - containerPort: 11211
          protocol: TCP
        - containerPort: 47100
          protocol: TCP
        - containerPort: 47500
          protocol: TCP
        - containerPort: 49112
          protocol: TCP
        - containerPort: 10800
          protocol: TCP
        - containerPort: 8080
          protocol: TCP
        - containerPort: 10900
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/ignite/apache-ignite/config
          name: config-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: ignite-cache
      serviceAccountName: ignite-cache
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: ignite-config.xml
            path: default-config.xml
          name: ignite-cache-configmap
        name: config-volume
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
status:
  replicas: 0

As you can see the Ignite configuration has been mounted through the ConfigMap. Also you can see that this pod will use a specific service account.
Through the environment variables certain libraries are enabled which provide more features for the Ignite cluster. Also the ports needed for communication and the various protocols are specified.

The last step is the service. All the Ignite nodes shall be load balanced behind the Kubernetes service.

> kubectl get svc ignite-cache -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2020-04-09T12:29:04Z
  labels:
    app: ignite-cache
  name: ignite-cache
  namespace: default
  resourceVersion: "281389"
  selfLink: /api/v1/namespaces/default/services/ignite-cache
  uid: 5be68e28-a57c-4cb5-b610-b708bff80da7
spec:
  clusterIP: None
  ports:
  - name: jdbc
    port: 11211
    protocol: TCP
    targetPort: 11211
  - name: spi-communication
    port: 47100
    protocol: TCP
    targetPort: 47100
  - name: spi-discovery
    port: 47500
    protocol: TCP
    targetPort: 47500
  - name: jmx
    port: 49112
    protocol: TCP
    targetPort: 49112
  - name: sql
    port: 10800
    protocol: TCP
    targetPort: 10800
  - name: rest
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: thin-clients
    port: 10900
    protocol: TCP
    targetPort: 10900
  selector:
    app: ignite-cache
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Whether you add a new server node or an Ignite client node, your Ignite cluster shall be reached through this Kubernetes service. Apart from that, based on the Kubernetes service type, you can make this cache public or internal.
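
For a quick check from your workstation you can port-forward one of the pods and hit Ignite’s REST API; a minimal sketch, assuming the pod name from the vanilla installation:

# forward the REST port of the first pod and ask for the cluster version
kubectl port-forward ignite-cache-0 8080:8080 &
curl "http://localhost:8080/ignite?cmd=version"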

Apache Ignite on your Kubernetes Cluster Part 3: Configuration explained

Previously we had a look at the RBAC needed for an Ignite cluster in Kubernetes.
This blog focuses on the configuration of the cache.

The default Ignite installation uses an XML-based configuration. It is easy to mount files using ConfigMaps.

> kubectl get configmap ignite-cache-configmap
NAME                     DATA   AGE
ignite-cache-configmap   1      32d
> kubectl get configmap ignite-cache-configmap -o yaml
apiVersion: v1
data:
  ignite-config.xml: "....\n"
kind: ConfigMap
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache-configmap
  namespace: default
  resourceVersion: "137521"
  selfLink: /api/v1/namespaces/default/configmaps/ignite-cache-configmap
  uid: ff530e3d-10d6-4708-817f-f9845886c1b0

Since viewing the XML from the ConfigMap is cumbersome, here is the actual XML.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="peerClassLoadingEnabled" value="false"/>
        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            </bean>
        </property>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                        <property name="namespace" value="default"/>
                        <property name="serviceName" value="ignite-cache"/>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

The default DataStorageConfiguration is being used.
What differs from other Ignite installations is the discovery mechanism: the TcpDiscoverySpi is configured with the Kubernetes-based TcpDiscoveryKubernetesIpFinder.
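
Since the TcpDiscoveryKubernetesIpFinder asks the Kubernetes API for the endpoints of the ignite-cache service, you can inspect the addresses it will discover:

# the pod IPs listed here are what the IP finder resolves
kubectl get endpoints ignite-cache -n default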

The next blog focuses on the services and the deployment.

Apache Ignite on your Kubernetes Cluster Part 2: RBAC Explained

So previously we had a vanilla installation of Apache Ignite on Kubernetes.

You had a cache service running, however all you did was install a helm chart.
In this blog we shall evaluate what was installed and take notes for our future helm charts.

The first step would be to view the helm chart.

> helm list
NAME        	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART       	APP VERSION
ignite-cache	default  	1       	2020-03-07 22:23:49.918924 +0000 UTC	deployed	ignite-1.0.1	2.7.6

Now let’s download it

> helm fetch stable/ignite
> tar xvf ignite-1.0.1.tgz
> cd ignite/; ls -R
Chart.yaml	README.md	templates	values.yaml

./templates:
NOTES.txt			account-role.yaml		persistence-storage-class.yaml	service-account.yaml		svc.yaml
_helpers.tpl			configmap.yaml			role-binding.yaml		stateful-set.yaml		wal-storage-class.yaml

Reading through the template files is a bit challenging (well, they are templates :P) so we shall just check what was installed through our previous blog.

Let’s get started with the account role. The cluster role that Ignite shall use needs to be able to get/list/watch pods and endpoints. This makes sense since there is a need for discovery between the nodes.

> kubectl get ClusterRole ignite-cache -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  resourceVersion: "137525"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/ignite-cache
  uid: 0cad0689-2f94-4b74-87bc-b468e2ac78ae
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - endpoints
  verbs:
  - get
  - list
  - watch

In order to use this role you need a service account. A service account is created along with a token.

> kubectl get serviceaccount ignite-cache -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  namespace: default
  resourceVersion: "137524"
  selfLink: /api/v1/namespaces/default/serviceaccounts/ignite-cache
  uid: 7aab67e5-04db-41a8-b73d-e76e34ca1d8e
secrets:
- name: ignite-cache-token-8rln4

Then we have the role binding. It binds the ignite-cache service account to the ignite-cache cluster role.

> kubectl get ClusterRoleBinding ignite-cache -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  resourceVersion: "137526"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/ignite-cache
  uid: 1e180bd1-567f-4979-a278-ba2e420ed482
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ignite-cache
subjects:
- kind: ServiceAccount
  name: ignite-cache
  namespace: default

It is important for your Ignite workloads to use this service account and its token. By doing so they have the permissions needed to discover the other nodes in your cluster.
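
A quick way to confirm that the service account really has these permissions is kubectl auth can-i with impersonation; a small sketch:

# both commands should print "yes"
kubectl auth can-i list endpoints --as=system:serviceaccount:default:ignite-cache
kubectl auth can-i watch pods --as=system:serviceaccount:default:ignite-cache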

The next blog focuses on the configuration.

Apache Ignite on your Kubernetes Cluster Part 1: Vanilla installation

By all means Apache Ignite is an amazing open source project.
Don’t assume it’s just a cache. It provides way more.

Kubernetes gets more popular by the day and is also a very convenient tool.
In this tutorial we shall integrate Ignite with Kubernetes.

The first step would be to spin up Minikube.

To get ignite on your Kubernetes installation the first step would be to install the helm chart.

>helm repo add stable https://kubernetes-charts.storage.googleapis.com
>helm install ignite-cache stable/ignite
NAME: ignite-cache
LAST DEPLOYED: Sat Mar  7 22:23:49 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To check cluster state please run:

kubectl exec -n default ignite-cache-0 -- /opt/ignite/apache-ignite/bin/control.sh --state

Eventually, after these commands are issued, you should have an Ignite cache set up on your Kubernetes cluster.

>kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
ignite-cache-0   1/1     Running   0          79s
ignite-cache-1   1/1     Running   0          13s
>kubectl get svc ignite-cache
ignite-cache   ClusterIP   None         <none>        11211/TCP,47100/TCP,47500/TCP,49112/TCP,10800/TCP,8080/TCP,10900/TCP   6m24s
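
For a quick smoke test from inside the cluster you can hit the REST port of the service from a throwaway pod; a minimal sketch, where the pod name and the curl image are just examples:

# run a one-off pod that calls Ignite's REST API through the service
kubectl run rest-check --rm -it --restart=Never --image=curlimages/curl -- curl "http://ignite-cache:8080/ignite?cmd=version"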

To those familiar with Kubernetes, an Ignite cache has just been spun up in your cluster and your applications can use the Ignite service from within the cluster.
The next blog focuses on the service account needed.

Use local Docker images on Minikube

You use Minikube and you want to run the development images that you create locally. This might seem tricky, since Minikube needs to download images from a registry, however your images are only available locally.

In any case you can still use your local images with Minikube, so let’s get started.

Before running any container, let’s issue the following command.

> eval $(minikube docker-env)

This actually reuses the docker host from Minikube for your current bash session.

See for yourself.

> minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="/Users/gkatzioura/.minikube/certs"
# Run this command to configure your shell:
# eval $(minikube docker-env)

Then spin up an nginx image. Most of the commands are taken from this tutorial.

>docker run -d -p 8080:80 --name my-nginx nginx
>docker ps --filter name=my-nginx
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
128ce006ecae        nginx               "nginx -g 'daemon of…"   13 seconds ago      Up 12 seconds       0.0.0.0:8080->80/tcp   my-nginx

Now let’s create an image from the running container.

docker commit 128ce006ecae dockerimage:version1

Then let’s run our custom image on minikube.

kubectl create deployment test-image --image=dockerimage:version1

Let’s also expose the service

kubectl expose deployment test-image --type=LoadBalancer --port=80
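
You can also ask Minikube for a URL that reaches the exposed service from your host; a small sketch:

# print the reachable URL and hit it from the host
minikube service test-image --url
curl $(minikube service test-image --url)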

Let’s take it to the next level and try to wget our service.

> kubectl exec -it podwithbinbash /bin/bash
bash-4.4# wget test-image
Connecting to test-image (10.101.70.7:80)
index.html           100% |***********************************************************************************************************|   612  0:00:00 ETA
bash-4.4# cat index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Pay extra attention: the above will work only in the terminal in which you executed the command

eval $(minikube docker-env)

If you want to, you can set up your bash_profile to do it for every terminal, but this is up to you.
Eventually this is one of the quick ways to use your local images on Minikube, and most probably there are others available.
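
If you do want every new terminal to point at Minikube’s Docker daemon, appending the eval to your profile is enough; a sketch for bash:

# run the eval on every new bash session
echo 'eval $(minikube docker-env)' >> ~/.bash_profile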

Autoscaling Groups with terraform on AWS Part 3: Elastic Load Balancer and health check

Previously we set up some Apache Ignite servers in an autoscaling group. The next step is to add a Load Balancer in front of the autoscaling group.

Before any steps let’s add some variables to variables.tf.

variable "autoscalling_group_elb_name" {
  type = string
  default = "autoscallinggroupelb"
}

variable "elb_security_group_name" {
  type = string
  default = "elb_name"
}

First we shall add the security group for the Load Balancer.

resource "aws_security_group" "elb_security_group" {
  name = var.elb_security_group_name
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = 80
    to_port = 8080
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Then we need to retrieve the availability zones for the Load Balancer.

data "aws_availability_zones" "available" {
  state = "available"
}

Then let’s add the Load Balancer.

resource "aws_elb" "autoscalling_group_elb" {
  name = var.autoscalling_group_elb_name
  security_groups = ["${aws_security_group.elb_security_group.id}"]
  availability_zones = data.aws_availability_zones.available.names
  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    interval = 30
    target = "HTTP:8080/ignite?cmd=version"
  }
  listener {
    lb_port = 80
    lb_protocol = "http"
    instance_port = "8080"
    instance_protocol = "http"
  }
}

Then let’s attach the Load Balancer to the autoscaling group and set the health check type to ELB.

resource "aws_autoscaling_group" "autoscalling_group_config" {
  name = var.auto_scalling_group_name
  max_size = 3
  min_size = 2
  health_check_grace_period = 300
  health_check_type = "ELB"
  desired_capacity = 3
  force_delete = true
  vpc_zone_identifier = [for s in data.aws_subnet.subnet_values: s.id]
  load_balancers = ["${aws_elb.autoscalling_group_elb.name}"]

  launch_configuration = aws_launch_configuration.launch-configuration.name

  lifecycle {
    create_before_destroy = true
  }
}

As before, apply your Terraform configuration.

> terraform apply
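
Once the apply finishes you can resolve the load balancer’s DNS name and check that the Ignite REST endpoint answers through it; a sketch, assuming the AWS CLI is configured for the same account and region:

# look up the classic ELB by the name set in variables.tf and curl through port 80
ELB_DNS=$(aws elb describe-load-balancers \
  --load-balancer-names autoscallinggroupelb \
  --query 'LoadBalancerDescriptions[0].DNSName' --output text)
curl "http://${ELB_DNS}/ignite?cmd=version"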

Autoscaling Groups with terraform on AWS Part 2: Instance security group and Boot Script

Previously we followed the minimum steps required in order to spin up an autoscaling group in Terraform. In this post we shall add a security group to the autoscaling group and an HTTP server to serve the requests.

Using our base configuration we shall create the security group for the instances.

resource "aws_security_group" "instance_security_group" {
  name = "autoscalling_security_group"
  ingress {
    from_port = 8080
    to_port = 8080
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port = 0
    protocol = "-1"
    to_port = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Our instances shall spin up a server listening on port 8080, thus the security group shall allow ingress traffic to that port. Pay attention to the egress: we shall access resources from the internet, thus we want to be able to download them.

Then we will just set the security group on the launch configuration.

resource "aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
  security_groups = ["${aws_security_group.instance_security_group.id}"]
}

Now it’s time to spin up a server on those instances. The aws_launch_configuration gives us the option to specify the startup script (user data on aws ec2). I shall use the Apache Ignite server and its http interface.

resource "aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
  security_groups = ["${aws_security_group.instance_security_group.id}"]
  user_data =  <<-EOF
              #!/bin/bash
              yum install java unzip -y
              curl https://www-eu.apache.org/dist/ignite/2.7.6/apache-ignite-2.7.6-bin.zip -o apache-ignite.zip
              unzip apache-ignite.zip -d /opt/apache-ignite
              cd /opt/apache-ignite/apache-ignite-2.7.6-bin/
              cp -r libs/optional/ignite-rest-http/ libs/ignite-rest-http/
              ./bin/ignite.sh ./examples/config/example-cache.xml
              EOF
}

And now we are ready to spin up the autoscaling group as shown previously.

> terraform init
> terraform apply
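
To check that the boot script worked you can look up the instances launched by the group (they are tagged with the group name automatically) and hit port 8080 directly; a sketch, assuming the AWS CLI is configured and your subnets assign public IPs:

# find the public IPs of the instances belonging to the autoscaling group
aws ec2 describe-instances \
  --filters "Name=tag:aws:autoscaling:groupName,Values=auto_scalling_group_name" \
  --query 'Reservations[].Instances[].PublicIpAddress' --output text
# then hit the Ignite REST endpoint on one of them
curl "http://<one-of-the-public-ips>:8080/ignite?cmd=version"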

We successfully added an instance security group and a bootstrap script. The next challenge is to add some load balancing and health checks.

Autoscaling Groups with terraform on AWS Part 1: Basic Steps

So you want to create an autoscaling group on AWS using Terraform. The following are the minimum steps in order to achieve that.

Before writing the actual code you shall specify the AWS Terraform provider as well as the region in the provider.tf file.

provider "aws" {
  version = "~> 2.0"
  region  = "eu-west-1"
}

terraform {
  required_version = "~>0.12.0"
}

The next step would be to define some variables on the variables.tf file.

variable "vpc_id" {
  type = string
  default = "your-vpc-id"
}

variable "launch_configuration_name" {
  type = string
  default = "launch_configuration_name"
}

variable "auto_scalling_group_name" {
  type = string
  default = "auto_scalling_group_name"
}

variable "image_id" {
  type = string
  default =  "image-id-based-on-the-region"
}

variable "instance_type" {
  type = string
  default = "t2.micro"
}

Then we are going to have the autoscaling group configuration in the autoscalling_group.tf file.

data "aws_subnet_ids" "subnets" {
  vpc_id = var.vpc_id
}

data "aws_subnet" "subnet_values" {
  for_each = data.aws_subnet_ids.subnets.ids
  id       = each.value
}

resource "aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
}

resource "aws_autoscaling_group" "autoscalling_group_config" {
  name = var.auto_scalling_group_name
  max_size = 3
  min_size = 2
  health_check_grace_period = 300
  health_check_type = "EC2"
  desired_capacity = 3
  force_delete = true
  vpc_zone_identifier = [for s in data.aws_subnet.subnet_values: s.id]

  launch_configuration = aws_launch_configuration.launch-configuration.name

  lifecycle {
    create_before_destroy = true
  }
}

Let’s break them down.
The VPC id is needed in order to identify the subnets used by your autoscaling group.
Thus the vpc_zone_identifier value shall derive the subnets from the VPC defined.

Then you have to create a launch configuration.
The launch configuration shall specify the image id which is based on your region and the instance type.

To execute this, provided you have your AWS credentials in place, you have to initialize and then apply.

> terraform init
> terraform apply
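
After the apply you can verify that the group reached its desired capacity; a sketch, assuming the AWS CLI is configured for the same account and region:

# list the instances of the group and their lifecycle state (expect InService)
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names auto_scalling_group_name \
  --query 'AutoScalingGroups[0].Instances[].[InstanceId,LifecycleState]' --output table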

On the next tutorial we shall focus on adding an instance security group and a boot script.

Docker basics: Docker compose

Docker Compose is a tool that allows you to run multi-container applications. With Compose we can use yaml files to configure our application’s services and then, using a single command, create and start all of the configured services. I use this tool a lot when it comes to local development in a microservice environment. It is also lightweight and needs just a small effort. Instead of managing how to run each service while developing, you can have the environment and services needed preconfigured and focus on the service that you currently develop.

With Docker Compose we can configure a network for our services, volumes, mount points, environment variables, just about everything.
To showcase this we are going to solve a problem. Our goal is to extract data from MongoDB using Grafana. Grafana does not have out-of-the-box support for MongoDB, therefore we will have to use a plugin.

As a first step we shall create our networks. Creating a network is not strictly necessary, since your services, once started, will join the default network; we shall showcase custom networks anyway. We shall have a network for backend services and a network for frontend services. Network configuration can get more advanced: you can specify custom network drivers or even configure static addresses.

version: '3.5'

networks:
  frontend:
    name: frontend-network
  backend:
    name: backend-network
    internal: true

The backend network is going to be internal, therefore the containers attached to it won’t have any external connectivity.

Then we shall set up our MongoDB instance.

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    volumes:
      - ${DB_PATH}:/data/db
    networks:
      - backend

As you see, we specified a volume. Volumes can also be specified separately and attached to a service.
Also we used environment variables for the root account; you might as well have spotted that the credentials are going to be provided through environment variables, i.e. MONGO_USER=root MONGO_PASSWORD=root docker-compose -f stack.yaml up. The same applies to the volume path. You can have a more advanced volume configuration in your compose file and reference it from your service.

Our next goal is to set up the proxy server which shall sit between our Grafana and MongoDB servers. Since a custom Dockerfile is needed to create it, we shall build it through docker-compose. Compose has the capability to spin up a service by building it from a Dockerfile.

So let’s start with the Dockerfile.

FROM node

WORKDIR /usr/src/mongografanaproxy

COPY . /usr/src/mongografanaproxy

EXPOSE 3333

RUN cd /usr/src/mongografanaproxy
RUN npm install
ENTRYPOINT ["npm","run","server"]

Then let’s add it to compose

version: '3.5'

services:
  mongo-proxy:
    build:
      context: .
      dockerfile: ProxyDockerfile
    restart: always
    networks:
      - backend

And the same shall be done for the Grafana image that we want to use. Instead of using a ready-made Grafana image, we shall create one with the plugin preinstalled.

FROM grafana/grafana

COPY . /var/lib/grafana/plugins/mongodb-grafana

EXPOSE 3000

version: '3.5'

services:
  grafana:
    build:
      context: .
      dockerfile: GrafanaDockerfile
    restart: always
    ports:
      - 3000:3000
    networks:
      - backend
      - frontend

Let’s wrap them all together

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    volumes:
      - ${DB_PATH}:/data/db
    networks:
      - backend
  mongo-proxy:
    build:
      context: .
      dockerfile: ProxyDockerfile
    restart: always
    networks:
      - backend
  grafana:
    build:
      context: .
      dockerfile: GrafanaDockerfile
    restart: always
    ports:
      - 3000:3000
    networks:
      - backend
      - frontend
networks:
  frontend:
    name: frontend-network
  backend:
    name: backend-network
    internal: true

So let’s run them all together.

docker-compose -f stack.yaml build
MONGO_USER=root MONGO_PASSWORD=root DB_PATH=~/grafana-mongo  docker-compose -f stack.yaml up
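
Once the stack is up you can verify that the containers are running and that Grafana answers on the published port; a small sketch, using Grafana’s standard health endpoint:

# list the services of the stack and check Grafana's health endpoint
docker-compose -f stack.yaml ps
curl -s http://localhost:3000/api/health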

The above can be found on github.

You might as well find the Docker Images, Docker Containers and Docker registry posts useful.