Apache Ignite on your Kubernetes Cluster Part 2: RBAC Explained

So previously we had a vanilla installation of Apache Ignite on Kubernetes.

You had a cache service running, however all you did was install a Helm chart.
In this blog we shall evaluate what was installed and take notes for our future Helm charts.

The first step would be to view the helm chart.

> helm list
NAME        	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART       	APP VERSION
ignite-cache	default  	1       	2020-03-07 22:23:49.918924 +0000 UTC	deployed	ignite-1.0.1	2.7.6

Now let's download and extract it:

> helm fetch stable/ignite
> tar xvf ignite-1.0.1.tgz
> cd ignite/; ls -R
Chart.yaml	README.md	templates	values.yaml

./templates:
NOTES.txt			account-role.yaml		persistence-storage-class.yaml	service-account.yaml		svc.yaml
_helpers.tpl			configmap.yaml			role-binding.yaml		stateful-set.yaml		wal-storage-class.yaml

Reading through the template files is a bit challenging (well, they are templates :P), so we shall just check what was installed through our previous blog.

Let's get started with the account-role. The ClusterRole that Ignite shall use needs to be able to get/list/watch pods and endpoints. That makes sense, since the nodes need to discover each other.

> kubectl get ClusterRole ignite-cache -o yaml
kind: ClusterRole
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  resourceVersion: "137525"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/ignite-cache
  uid: 0cad0689-2f94-4b74-87bc-b468e2ac78ae
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - endpoints
  verbs:
  - get
  - list
  - watch

In order to use this role you need a service account. A service account is created with a token.

> kubectl get serviceaccount ignite-cache -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  namespace: default
  resourceVersion: "137524"
  selfLink: /api/v1/namespaces/default/serviceaccounts/ignite-cache
  uid: 7aab67e5-04db-41a8-b73d-e76e34ca1d8e
secrets:
- name: ignite-cache-token-8rln4

Then we have the role binding. It binds the ignite-cache service account to the ignite-cache ClusterRole.

> kubectl get ClusterRoleBinding ignite-cache -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  resourceVersion: "137526"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/ignite-cache
  uid: 1e180bd1-567f-4979-a278-ba2e420ed482
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ignite-cache
subjects:
- kind: ServiceAccount
  name: ignite-cache
  namespace: default
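To quickly verify that the binding grants what we expect, we can impersonate the service account (assuming the chart was installed in the default namespace and your own user is allowed to impersonate):

> kubectl auth can-i list pods --as=system:serviceaccount:default:ignite-cache
> kubectl auth can-i watch endpoints --as=system:serviceaccount:default:ignite-cache

Both commands should answer yes.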

It is important for your Ignite workloads to use this service account and its token. By doing so they have the permissions needed to discover the other nodes in your cluster.
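For illustration, this is roughly how a pod template references the service account; a minimal sketch of mine, abbreviated and not copied from the chart's stateful-set.yaml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ignite-cache
spec:
  serviceName: ignite-cache
  selector:
    matchLabels:
      app: ignite-cache
  template:
    metadata:
      labels:
        app: ignite-cache
    spec:
      # the service account and token created by the chart
      serviceAccountName: ignite-cache
      containers:
        - name: ignite
          image: apacheignite/ignite:2.7.6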

The next blog focuses on the configuration.

Apache Ignite on your Kubernetes Cluster Part 1: Vanilla installation

By all means, Apache Ignite is an amazing open source project.
Don't assume it's just a cache; it provides way more.

Kubernetes gets more popular by the day and is also a very convenient tool.
In this tutorial we shall integrate Ignite and Kubernetes.

The first step would be to spin up Minikube.
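If you don't have a cluster running yet, something like the following should be enough, assuming Minikube is already installed:

> minikube start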

To get Ignite on your Kubernetes installation, the next step is to install the Helm chart.

>helm repo add stable https://kubernetes-charts.storage.googleapis.com
>helm install ignite-cache stable/ignite
NAME: ignite-cache
LAST DEPLOYED: Sat Mar  7 22:23:49 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To check cluster state please run:

kubectl exec -n default ignite-cache-0 -- /opt/ignite/apache-ignite/bin/control.sh --state

Eventually, after this command is issued, you should have an Ignite cache set up on your Kubernetes cluster.

>kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
ignite-cache-0   1/1     Running   0          79s
ignite-cache-1   1/1     Running   0          13s
>kubectl get svc ignite-cache
ignite-cache   ClusterIP   None         <none>        11211/TCP,47100/TCP,47500/TCP,49112/TCP,10800/TCP,8080/TCP,10900/TCP   6m24s

To those familiar with Kubernetes, an Ignite cache has just been spun up in your cluster and your applications can use the ignite-cache service from within the cluster, as sketched below.
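As a rough illustration (the cache name and the fully qualified service DNS name below are my assumptions, not something the chart dictates, and the ignite-core dependency is assumed to be on the classpath), an application running inside the cluster could reach Ignite through the thin client on port 10800:

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class IgniteSmokeTest {

    public static void main(String[] args) {
        // the headless ignite-cache service exposes the thin client port 10800
        ClientConfiguration cfg = new ClientConfiguration()
                .setAddresses("ignite-cache.default.svc.cluster.local:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // hypothetical cache, created on demand
            ClientCache<String, String> cache = client.getOrCreateCache("smoke-test");
            cache.put("greeting", "hello from kubernetes");
            System.out.println(cache.get("greeting"));
        }
    }
}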
The next blog focuses on the service account needed.

Spring Boot and Micrometer with InfluxDB Part 3: Servlets and JDBC

In the previous blog we set up a reactive application with Micrometer backed by InfluxDB.

In this tutorial we shall use our old school blocking Servlet-based Spring stack with JDBC.
My database of choice is PostgreSQL, and I shall use the same scripts as in a previous blog post.

Thus we shall have the script that initializes the database

#!/bin/bash
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
    create schema spring_data_jpa_example;

    create table spring_data_jpa_example.employee(
        id  SERIAL PRIMARY KEY,
        firstname   TEXT    NOT NULL,
        lastname    TEXT    NOT NULL,
        email       TEXT    not null,
        age         INT     NOT NULL,
        salary         real,
        unique(email)
    );

    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 1','john1@doe.com',18,1234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 2','john2@doe.com',19,2234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 3','john3@doe.com',20,3234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 4','john4@doe.com',21,4234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 5','john5@doe.com',22,5234.23);
EOSQL

Then we shall have a Docker Compose file that contains InfluxDB, PostgreSQL and Grafana.

version: '3.5'

services:
  influxdb:
    image: influxdb
    restart: always
    ports:
      - 8086:8086
  grafana:
    image: grafana/grafana
    restart: always
    ports:
      - 3000:3000
  postgres:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: db-user
      POSTGRES_PASSWORD: your-password
      POSTGRES_DB: postgres
    ports:
      - 5432:5432
    volumes:
      - $PWD/init-db-script.sh:/docker-entrypoint-initdb.d/init-db-script.sh
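Assuming the compose file and init-db-script.sh sit in the current directory, everything comes up with:

docker-compose up -d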

Now it's time to build our Spring application, starting with our Maven dependencies.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.4.RELEASE</version>
    </parent>

    <groupId>com.gkatzioura</groupId>
    <artifactId>EmployeeApi</artifactId>
    <version>1.0-SNAPSHOT</version>

    <build>
        <defaultGoal>spring-boot:run</defaultGoal>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>8</source>
                    <target>8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>42.2.8</version>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-core</artifactId>
            <version>1.3.2</version>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-influx</artifactId>
            <version>1.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.12</version>
            <scope>provided</scope>
        </dependency>
   </dependencies>
</project>

Since this application is backed by JDBC we shall create the entities and the repositories.

package com.gkatzioura.employee.model;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

import lombok.Data;

@Data
@Entity
@Table(name = "employee", schema="spring_data_jpa_example")
public class Employee {

	@Id
	@Column(name = "id")
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private Long id;

	@Column(name = "firstname")
	private String firstName;

	@Column(name = "lastname")
	private String lastname;

	@Column(name = "email")
	private String email;

	@Column(name = "age")
	private Integer age;

	@Column(name = "salary")
	private Integer salary;

}

Then let’s add the Repository

package com.gkatzioura.employee.repository;

import com.gkatzioura.employee.model.Employee;
import org.springframework.data.jpa.repository.JpaRepository;

public interface EmployeeRepository extends JpaRepository<Employee,Long> {
}

And the controller

package com.gkatzioura.employee.controller;

import java.util.List;

import com.gkatzioura.employee.model.Employee;
import com.gkatzioura.employee.repository.EmployeeRepository;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class EmployeeController {

	private final EmployeeRepository employeeRepository;

	public EmployeeController(EmployeeRepository employeeRepository) {
		this.employeeRepository = employeeRepository;
	}

	@RequestMapping("/employee")
	public List<Employee> getEmployees() {
		return employeeRepository.findAll();
	}

}

Last but not least the Application class

package com.gkatzioura.employee;


import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

As well as the configuration:

spring:
  datasource:
    platform: postgres
    driverClassName: org.postgresql.Driver
    username: db-user
    password: your-password
    url: jdbc:postgresql://127.0.0.1:5432/postgres
management:
  metrics:
    export:
      influx:
        enabled: true
        db: employeeapi
        uri: http://127.0.0.1:8086
  endpoints:
    web:
      exposure:
        include: "*"

Let’s try it

curl http://localhost:8080/employee
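Assuming the actuator endpoints are exposed as configured above, we can also peek at an individual metric before it gets shipped to InfluxDB:

curl http://localhost:8080/actuator/metrics/http.server.requests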

After some requests we can find the metric entries persisted in InfluxDB.

docker exec -it influxdb-local influx
> SHOW DATABASES;
name: databases
name
----
_internal
employeeapi
> use employeeapi
Using database employeeapi
> SHOW MEASUREMENTS
name: measurements
name
----
hikaricp_connections
hikaricp_connections_acquire
hikaricp_connections_active
hikaricp_connections_creation
hikaricp_connections_idle
hikaricp_connections_max
hikaricp_connections_min
hikaricp_connections_pending
hikaricp_connections_timeout
hikaricp_connections_usage
http_server_requests
jdbc_connections_active
jdbc_connections_idle
jdbc_connections_max
jdbc_connections_min
jvm_buffer_count
jvm_buffer_memory_used
jvm_buffer_total_capacity
jvm_classes_loaded
jvm_classes_unloaded
jvm_gc_live_data_size
jvm_gc_max_data_size
jvm_gc_memory_allocated
jvm_gc_memory_promoted
jvm_gc_pause
jvm_memory_committed
jvm_memory_max
jvm_memory_used
jvm_threads_daemon
jvm_threads_live
jvm_threads_peak
jvm_threads_states
logback_events
process_cpu_usage
process_files_max
process_files_open
process_start_time
process_uptime
system_cpu_count
system_cpu_usage
system_load_average_1m
tomcat_sessions_active_current
tomcat_sessions_active_max
tomcat_sessions_alive_max
tomcat_sessions_created
tomcat_sessions_expired
tomcat_sessions_rejected

As you can see the metrics are a bit different from the previous example. We have JDBC and HikariCP connection metrics, Tomcat metrics and all the metrics relevant to our application.
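For instance, the requests we just issued should show up under http_server_requests; a quick check from the same influx shell:

> SELECT * FROM http_server_requests WHERE "uri" = '/employee' LIMIT 5;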
You can find the source code on GitHub.

Use local Docker images on Minikube

You use Minikube and you want to run the development images that you create locally. This might seem tricky, since Minikube needs to download images from a registry, yet your images are only published to your local registry.

In any case you can still use your local images with Minikube, so let's get started.

Before running any container let's issue:

> eval $(minikube docker-env)

This actually reuses the docker host from Minikube for your current bash session.

See for yourself.

> minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="/Users/gkatzioura/.minikube/certs"
# Run this command to configure your shell:
# eval $(minikube docker-env)

Then spin up an nginx image. Most of the commands are taken from this tutorial.

>docker run -d -p 8080:80 --name my-nginx nginx
>docker ps --filter name=my-nginx
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
128ce006ecae        nginx               "nginx -g 'daemon of…"   13 seconds ago      Up 12 seconds       0.0.0.0:8080->80/tcp   my-nginx

Now let’s create an image from the running container.

docker commit 128ce006ecae dockerimage:version1

Then let’s run our custom image on minikube.

kubectl create deployment test-image --image=dockerimage:version1

Let’s also expose the service

kubectl expose deployment test-image --type=LoadBalancer --port=80

Let's take it to the next level and try to wget our service from another pod that has a shell available.

> kubectl exec -it podwithbinbash /bin/bash
bash-4.4# wget test-image
Connecting to test-image (10.101.70.7:80)
index.html           100% |***********************************************************************************************************|   612  0:00:00 ETA
bash-4.4# cat index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Bear in mind that the above will only work in the terminal from which you executed the command

eval $(minikube docker-env)

If you want to, you can set up your bash_profile to do this for every terminal, as shown below, but that is up to you.
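A one-liner sketch, assuming bash and that ~/.bash_profile is the file your shell sources:

echo 'eval $(minikube docker-env)' >> ~/.bash_profile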
Eventually, this is one of the quick ways to use your local images on Minikube, and most probably there are others available.

Spring Boot and Micrometer with InfluxDB Part 2: Adding InfluxDB

Since we added our base application it is time for us to spin up an InfluxDB instance.

We shall follow a previous tutorial and add a docker instance.

docker run --rm -p 8086:8086 --name influxdb-local influxdb
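To make sure it is up you can hit InfluxDB's ping endpoint; a 204 response means it is ready (assuming the default port mapping above):

curl -i http://127.0.0.1:8086/ping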

Time to add the Micrometer InfluxDB dependency to our pom:

<dependencies>
...
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-core</artifactId>
            <version>1.3.2</version>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-influx</artifactId>
            <version>1.3.2</version>
        </dependency>
...
</dependencies>

Time to add the configuration through the application.yaml

management:
  metrics:
    export:
      influx:
        enabled: true
        db: devjobsapi
        uri: http://127.0.0.1:8086
  endpoints:
    web:
      exposure:
        include: "*"

Let’s spin up our application and do some requests.
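A couple of calls against the jobs endpoint we build further down will generate some measurements (the page number is arbitrary):

curl http://localhost:8080/jobs/github/1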
After some time we can check the database and the data contained.

docker exec -it influxdb-local influx
> SHOW DATABASES;
name: databases
name
----
_internal
devjobsapi
> use devjobsapi
Using database devjobsapi
> SHOW MEASUREMENTS
name: measurements
name
----
http_server_requests
jvm_buffer_count
jvm_buffer_memory_used
jvm_buffer_total_capacity
jvm_classes_loaded
jvm_classes_unloaded
jvm_gc_live_data_size
jvm_gc_max_data_size
jvm_gc_memory_allocated
jvm_gc_memory_promoted
jvm_gc_pause
jvm_memory_committed
jvm_memory_max
jvm_memory_used
jvm_threads_daemon
jvm_threads_live
jvm_threads_peak
jvm_threads_states
logback_events
process_cpu_usage
process_files_max
process_files_open
process_start_time
process_uptime
system_cpu_count
system_cpu_usage
system_load_average_1m

That’s pretty awesome. Let’s check the endpoints accessed.

> SELECT * FROM http_server_requests;
name: http_server_requests
time                count exception mean        method metric_type outcome status sum         upper       uri
----                ----- --------- ----        ------ ----------- ------- ------ ---         -----       ---
1582586157093000000 1     None      252.309331  GET    histogram   SUCCESS 200    252.309331  252.309331  /actuator
1582586157096000000 0     None      0           GET    histogram   SUCCESS 200    0           2866.531375 /jobs/github/{page}

Pretty great! The next step would be to visualise those metrics.

Spring Boot and Micrometer with InfluxDB Part 1: The base project

To those who follow this blog it's no secret that I tend to use InfluxDB a lot. I like the fact that it is a true single-purpose database (time series) with many features, and it also comes with enterprise support.

Spring is also one of my tools of choice.
Thus in this blog we shall integrate Spring with Micrometer and InfluxDB.

Our application will be a REST API for jobs.
Initially it will fetch the jobs from GitHub's Jobs API as shown here.

Let’s start with a pom

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.4.RELEASE</version>
    </parent>

    <groupId>com.gkatzioura</groupId>
    <artifactId>DevJobsApi</artifactId>
    <version>1.0-SNAPSHOT</version>

    <build>
        <defaultGoal>spring-boot:run</defaultGoal>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>8</source>
                    <target>8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-webflux</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.12</version>
            <scope>provided</scope>
        </dependency>
   </dependencies>
</project>

Let’s add the Job Repository for GitHub.

package com.gkatzioura.jobs.repository;

import java.util.List;

import org.springframework.http.HttpMethod;
import org.springframework.stereotype.Repository;
import org.springframework.web.reactive.function.client.WebClient;

import com.gkatzioura.jobs.model.Job;

import reactor.core.publisher.Mono;

@Repository
public class GitHubJobRepository {

    private WebClient githubClient;

    public GitHubJobRepository() {
        this.githubClient = WebClient.create("https://jobs.github.com");
    }

    public Mono<List<Job>> getJobsFromPage(int page) {

        return githubClient.method(HttpMethod.GET)
                           .uri("/positions.json?page=" + page)
                           .retrieve()
                           .bodyToFlux(Job.class)
                           .collectList();
    }

}

The Job model

package com.gkatzioura.jobs.model;

import lombok.Data;

@Data
public class Job {

    private String id;
    private String type;
    private String url;
    private String createdAt;
    private String company;
    private String companyUrl;
    private String location;
    private String title;
    private String description;

}

The controller

package com.gkatzioura.jobs.controller;

import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import com.gkatzioura.jobs.model.Job;
import com.gkatzioura.jobs.repository.GitHubJobRepository;

import reactor.core.publisher.Mono;

@RestController
@RequestMapping("/jobs")
public class JobsController {

    private final GitHubJobRepository gitHubJobRepository;

    public JobsController(GitHubJobRepository gitHubJobRepository) {
        this.gitHubJobRepository = gitHubJobRepository;
    }

    @GetMapping("/github/{page}")
    public Mono<List<Job>> getJobsByPage(@PathVariable int page) {
        return gitHubJobRepository.getJobsFromPage(page);
    }

}

And last but not least the main application.

package com.gkatzioura;


import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.security.reactive.ReactiveSecurityAutoConfiguration;

@SpringBootApplication
@EnableAutoConfiguration(exclude = {
        ReactiveSecurityAutoConfiguration.class
})
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

In the next blog we are going to integrate with InfluxDB and Micrometer.

Autoscaling Groups with terraform on AWS Part 3: Elastic Load Balancer and health check

Previously we set up some Apache Ignite servers in an autoscaling group. The next step is to add a Load Balancer in front of the autoscaling group.

Before any other steps let's add some variables to variables.tf.

variable "autoscalling_group_elb_name" {
  type = string
  default = "autoscallinggroupelb"
}

variable "elb_security_group_name" {
  type = string
  default = "elb_name"
}

First we shall add the security group for the Load Balancer.

resource "aws_security_group" "elb_security_group" {
  name = var.elb_security_group_name
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = 80
    to_port = 8080
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Then we need to retrieve the availability zones for the Load Balancer.

data "aws_availability_zones" "available" {
  state = "available"
}

Then let’s add the Load Balancer.

resource "aws_elb" "autoscalling_group_elb" {
  name = var.autoscalling_group_elb_name
  security_groups = ["${aws_security_group.elb_security_group.id}"]
  availability_zones = data.aws_availability_zones.available.names
  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    interval = 30
    target = "HTTP:8080/ignite?cmd=version"
  }
  listener {
    lb_port = 80
    lb_protocol = "http"
    instance_port = "8080"
    instance_protocol = "http"
  }
}
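To make the result easier to test later on, we can also output the load balancer's DNS name; this output is my addition and not strictly required:

output "elb_dns_name" {
  value = aws_elb.autoscalling_group_elb.dns_name
}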

Then let’s match the Load Balancer with the autoscaling group and set the health type to ELB.

resource "aws_autoscaling_group" "autoscalling_group_config" {
  name = var.auto_scalling_group_name
  max_size = 3
  min_size = 2
  health_check_grace_period = 300
  health_check_type = "ELB"
  desired_capacity = 3
  force_delete = true
  vpc_zone_identifier = [for s in data.aws_subnet.subnet_values: s.id]
  load_balancers = ["${aws_elb.autoscalling_group_elb.name}"]

  launch_configuration = aws_launch_configuration.launch-configuration.name

  lifecycle {
    create_before_destroy = true
  }
}

As before, you apply your Terraform configuration.

> terraform apply
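Once the instances pass their health checks, a request through the load balancer should reach Ignite's REST endpoint. This assumes the elb_dns_name output sketched above; adjust if you skipped it or named it differently:

> curl "http://$(terraform output elb_dns_name)/ignite?cmd=version"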

Autoscaling Groups with terraform on AWS Part 2: Instance security group and Boot Script

Previously we followed the minimum steps required in order to spin up an autoscaling group with Terraform. In this post we shall add a security group to the autoscaling group and an HTTP server to serve the requests.

Using our base configuration we shall create the security group for the instances.

resource "aws_security_group" "instance_security_group" {
  name = "autoscalling_security_group"
  ingress {
    from_port = 8080
    to_port = 8080
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port = 0
    protocol = "-1"
    to_port = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Our instances shall spin up a server listening on port 8080, thus the security group shall allow ingress traffic to that port. Pay attention to the egress: we shall fetch resources from the internet, thus we want to be able to download them.

Then we will just set the security group on the launch configuration.

resource"aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
  security_groups = ["${aws_security_group.instance_security_group.id}"]
}

Now it's time to spin up a server on those instances. The aws_launch_configuration resource gives us the option to specify a startup script (user data on AWS EC2). I shall use the Apache Ignite server and its HTTP interface.

resource"aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
  security_groups = ["${aws_security_group.instance_security_group.id}"]
  user_data =  <<-EOF
              #!/bin/bash
              yum install java unzip -y
              curl https://www-eu.apache.org/dist/ignite/2.7.6/apache-ignite-2.7.6-bin.zip -o apache-ignite.zip
              unzip apache-ignite.zip -d /opt/apache-ignite
              cd /opt/apache-ignite/apache-ignite-2.7.6-bin/
              cp -r libs/optional/ignite-rest-http/ libs/ignite-rest-http/
              ./bin/ignite.sh ./examples/config/example-cache.xml
              EOF

And now we are ready to spin up the autoscaling group as shown previously.

> terraform init
> terraform apply
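Since the ignite-rest-http module is enabled in the user data, you can verify an instance directly over its REST API; the public IP below is a placeholder, use one of the instances from your autoscaling group (and make sure the security group allows your address on port 8080):

> curl "http://<instance-public-ip>:8080/ignite?cmd=version"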

Autoscaling Groups with terraform on AWS Part 1: Basic Steps

So you want to create an autoscaling group on AWS using Terraform. The following are the minimum steps in order to achieve that.

Before writing the actual code you shall specify the AWS Terraform provider as well as the region in the provider.tf file.

provider "aws" {
  version = "~> 2.0"
  region  = "eu-west-1"
}

terraform {
  required_version = "~>0.12.0"
}

The first step would be to define some variables in the variables.tf file.

variable "vpc_id" {
  type = string
  default = "your-vpc-id"
}

variable "launch_configuration_name" {
  type = string
  default = "launch_configuration_name"
}

variable "auto_scalling_group_name" {
  type = string
  default = "auto_scalling_group_name"
}

variable "image_id" {
  type = string
  default =  "image-id-based-on-the-region"
}

variable "instance_type" {
  type = "string" 
  default = "t2.micro"
}

Then we are going to have the autoscaling group configuration in the autoscalling_group.tf file.

data "aws_subnet_ids" "subnets" {
  vpc_id = var.vpc_id
}

data "aws_subnet" "subnet_values" {
  for_each = data.aws_subnet_ids.subnets.ids
  id       = each.value
}

resource"aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
}

resource "aws_autoscaling_group" "autoscalling_group_config" {
  name = var.auto_scalling_group_name
  max_size = 3
  min_size = 2
  health_check_grace_period = 300
  health_check_type = "EC2"
  desired_capacity = 3
  force_delete = true
  vpc_zone_identifier = [for s in data.aws_subnet.subnet_values: s.id]

  launch_configuration = aws_launch_configuration.launch-configuration.name

  lifecycle {
    create_before_destroy = true
  }
}

Let's break them down.
The VPC id is needed in order to identify the subnets used by your autoscaling group.
Thus the value of vpc_zone_identifier derives its subnets from the VPC defined.

Then you have to create a launch configuration.
The launch configuration shall specify the image id, which depends on your region, and the instance type.

To execute this, provided you have your AWS credentials in place, you have to initialize and then apply:

> terraform init
> terraform apply

Scala Main class

Adding a main class in Scala is something that I always end up searching for, so next time it shall be through my blog.

You can go for the extends App option

One way is to add a main class by extending the App trait. Everything else that gets executed in that block is part of the "main" function.

package com.gkatzioura

object MainClass extends App {

  println("Hello world"!)
}

Then you can access the arguments, since they are available as a field of the App trait.

package com.gkatzioura

object MainClass extends App {

  for( arg <- args ) {
    println(arg)
  }

}

Add a main method

This is the option most familiar to Java developers.

package com.gkatzioura

object MainClass {

  def main(args: Array[String]): Unit = {
    println("Hello, world!")
  }

}

As expected you receive the program arguments through the function arguments.

package com.gkatzioura

object MainClass {

  def main(args: Array[String]): Unit = {
    for( arg <- args ) {
      println(arg)
    }
  }

}
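To try either variant from the command line, something along these lines should work, assuming scalac and scala are on your path and the object lives in MainClass.scala:

> scalac MainClass.scala
> scala com.gkatzioura.MainClass first second

The second command runs the compiled object, and the two arguments are printed one per line.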