Spring Boot and Micrometer with InfluxDB Part 3: Servlets and JDBC

In the previous blog post we set up a reactive application with Micrometer backed by InfluxDB.

In this tutorial we shall use our old school blocking Servlet-based Spring stack with JDBC.
My database of choice is PostgreSQL, and I shall use the same scripts as in a previous blog post.

Thus we shall have the script that initializes the database

#!/bin/bash
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
    create schema spring_data_jpa_example;

    create table spring_data_jpa_example.employee(
        id  SERIAL PRIMARY KEY,
        firstname   TEXT    NOT NULL,
        lastname    TEXT    NOT NULL,
        email       TEXT    not null,
        age         INT     NOT NULL,
        salary         real,
        unique(email)
    );

    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 1','john1@doe.com',18,1234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 2','john2@doe.com',19,2234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 3','john3@doe.com',20,3234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 4','john4@doe.com',21,4234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 5','john5@doe.com',22,5234.23);
EOSQL

Then we shall have a Docker Compose file that contains InfluxDB, Postgres and Grafana.

version: '3.5'

services:
  influxdb:
    image: influxdb
    restart: always
    ports:
      - 8086:8086
  grafana:
    image: grafana/grafana
    restart: always
    ports:
      - 3000:3000
  postgres:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: db-user
      POSTGRES_PASSWORD: your-password
      POSTGRES_DB: postgres
    ports:
      - 5432:5432
    volumes:
      - $PWD/init-db-script.sh:/docker-entrypoint-initdb.d/init-db-script.sh

Now it’s time to build our spring application starting with our maven dependencies.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.4.RELEASE</version>
    </parent>

    <groupId>com.gkatzioura</groupId>
    <artifactId>EmployeeApi</artifactId>
    <version>1.0-SNAPSHOT</version>

    <build>
        <defaultGoal>spring-boot:run</defaultGoal>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>8</source>
                    <target>8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>42.2.8</version>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-core</artifactId>
            <version>1.3.2</version>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-influx</artifactId>
            <version>1.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.12</version>
            <scope>provided</scope>
        </dependency>
   </dependencies>
</project>

Since this application is backed by JPA over JDBC, we shall create the entity and the repository.

package com.gkatzioura.employee.model;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

import lombok.Data;

@Data
@Entity
@Table(name = "employee", schema="spring_data_jpa_example")
public class Employee {

	@Id
	@Column(name = "id")
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private Long id;

	@Column(name = "firstname")
	private String firstName;

	@Column(name = "lastname")
	private String lastName;

	@Column(name = "email")
	private String email;

	@Column(name = "age")
	private Integer age;

	@Column(name = "salary")
	private Double salary;

}

Then let’s add the Repository

package com.gkatzioura.employee.repository;

import com.gkatzioura.employee.model.Employee;
import org.springframework.data.jpa.repository.JpaRepository;

public interface EmployeeRepository extends JpaRepository<Employee,Long> {
}

And the controller

package com.gkatzioura.employee.controller;

import java.util.List;

import com.gkatzioura.employee.model.Employee;
import com.gkatzioura.employee.repository.EmployeeRepository;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class EmployeeController {

	private final EmployeeRepository employeeRepository;

	public EmployeeController(EmployeeRepository employeeRepository) {
		this.employeeRepository = employeeRepository;
	}

	@RequestMapping("/employee")
	public List<Employee> getEmployees() {
		return employeeRepository.findAll();
	}

}

Last but not least the Application class

package com.gkatzioura.employee;


import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

And the configuration as well.

spring:
  datasource:
    platform: postgres
    driverClassName: org.postgresql.Driver
    username: db-user
    password: your-password
    url: jdbc:postgresql://127.0.0.1:5432/postgres
management:
  metrics:
    export:
      influx:
        enabled: true
        db: employeeapi
        uri: http://127.0.0.1:8086
  endpoints:
    web:
      exposure:
        include: "*"

Let’s try it

curl http://localhost:8080/employee

After issuing a few requests we can find the metric entries persisted in InfluxDB.

docker exec -it influxdb-local influx
> SHOW DATABASES;
name: databases
name
----
_internal
employeeapi
> use employeeapi
Using database employeeapi
> SHOW MEASUREMENTS
name: measurements
name
----
hikaricp_connections
hikaricp_connections_acquire
hikaricp_connections_active
hikaricp_connections_creation
hikaricp_connections_idle
hikaricp_connections_max
hikaricp_connections_min
hikaricp_connections_pending
hikaricp_connections_timeout
hikaricp_connections_usage
http_server_requests
jdbc_connections_active
jdbc_connections_idle
jdbc_connections_max
jdbc_connections_min
jvm_buffer_count
jvm_buffer_memory_used
jvm_buffer_total_capacity
jvm_classes_loaded
jvm_classes_unloaded
jvm_gc_live_data_size
jvm_gc_max_data_size
jvm_gc_memory_allocated
jvm_gc_memory_promoted
jvm_gc_pause
jvm_memory_committed
jvm_memory_max
jvm_memory_used
jvm_threads_daemon
jvm_threads_live
jvm_threads_peak
jvm_threads_states
logback_events
process_cpu_usage
process_files_max
process_files_open
process_start_time
process_uptime
system_cpu_count
system_cpu_usage
system_load_average_1m
tomcat_sessions_active_current
tomcat_sessions_active_max
tomcat_sessions_alive_max
tomcat_sessions_created
tomcat_sessions_expired
tomcat_sessions_rejected

As you can see the metrics are a bit different from the previous example. We now have JDBC connection metrics, Tomcat metrics, and all the other metrics relevant to our application.
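Besides the metrics that come out of the box, Micrometer also lets us register application-specific meters. Below is a small optional sketch of the controller with a custom counter; the meter name employee.list.requests is made up for this example and is not part of the original code. Spring Boot's actuator auto-configures a MeterRegistry bean, so it can simply be injected.

package com.gkatzioura.employee.controller;

import java.util.List;

import io.micrometer.core.instrument.MeterRegistry;

import com.gkatzioura.employee.model.Employee;
import com.gkatzioura.employee.repository.EmployeeRepository;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class EmployeeController {

	private final EmployeeRepository employeeRepository;
	private final MeterRegistry meterRegistry;

	public EmployeeController(EmployeeRepository employeeRepository, MeterRegistry meterRegistry) {
		this.employeeRepository = employeeRepository;
		this.meterRegistry = meterRegistry;
	}

	@RequestMapping("/employee")
	public List<Employee> getEmployees() {
		// hypothetical custom counter; it is exported to InfluxDB like the rest of the metrics
		meterRegistry.counter("employee.list.requests").increment();
		return employeeRepository.findAll();
	}

}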
You can find the source code on GitHub.

Use local Docker images on Minikube.

You use Minikube and you want to run the development images that you create locally. This might seem tricky, since Minikube needs to pull your images from a registry, whereas your images are being uploaded to your local registry.

In any case you can still use your local images with Minikube, so let's get started.

Before running any container let's issue the following.

> eval $(minikube docker-env)

This actually reuses the docker host from Minikube for your current bash session.

See for yourself.

> minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="/Users/gkatzioura/.minikube/certs"
# Run this command to configure your shell:
# eval $(minikube docker-env)

Then spin up an nginx image. Most of the commands are taken from this tutorial.

>docker run -d -p 8080:80 --name my-nginx nginx
>docker ps --filter name=my-nginx
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
128ce006ecae        nginx               "nginx -g 'daemon of…"   13 seconds ago      Up 12 seconds       0.0.0.0:8080->80/tcp   my-nginx

Now let’s create an image from the running container.

docker commit 128ce006ecae dockerimage:version1

Then let’s run our custom image on minikube.

kubectl create deployment test-image --image=dockerimage:version1

Let’s also expose the service

kubectl expose deployment test-image --type=LoadBalancer --port=80

Let’s take it to the next level and try to wget our service.

> kubectl exec -it podwithbinbash /bin/bash
bash-4.4# wget test-image
Connecting to test-image (10.101.70.7:80)
index.html           100% |***********************************************************************************************************|   612  0:00:00 ETA
bash-4.4# cat index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Pay extra attention: the above will work only in the terminal in which you executed the command

eval $(minikube docker-env)

If you want to, you can just set up your bash_profile to do it for every terminal, but this is up to you.
Eventually this is one of the quick ways to use your local images on Minikube, and most probably there are others available.

Spring Boot and Micrometer with InfluxDB Part 2: Adding InfluxDB

Now that we have added our base application, it is time to spin up an InfluxDB instance.

We shall follow a previous tutorial and add a docker instance.

docker run --rm -p 8086:8086 --name influxdb-local influxdb

Time to add the Micrometer InfluxDB dependencies to our pom.

<dependencies>
...
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-core</artifactId>
            <version>1.3.2</version>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-influx</artifactId>
            <version>1.3.2</version>
        </dependency>
...
</dependencies>

Time to add the configuration through the application.yaml

management:
  metrics:
    export:
      influx:
        enabled: true
        db: devjobsapi
        uri: http://127.0.0.1:8086
  endpoints:
    web:
      exposure:
        include: "*"

Let’s spin up our application and do some requests.
After some time we can check the database and the data contained.

docker exec -it influxdb-local influx
> SHOW DATABASES;
name: databases
name
----
_internal
devjobsapi
> use devjobsapi
Using database devjobsapi
> SHOW MEASUREMENTS
name: measurements
name
----
http_server_requests
jvm_buffer_count
jvm_buffer_memory_used
jvm_buffer_total_capacity
jvm_classes_loaded
jvm_classes_unloaded
jvm_gc_live_data_size
jvm_gc_max_data_size
jvm_gc_memory_allocated
jvm_gc_memory_promoted
jvm_gc_pause
jvm_memory_committed
jvm_memory_max
jvm_memory_used
jvm_threads_daemon
jvm_threads_live
jvm_threads_peak
jvm_threads_states
logback_events
process_cpu_usage
process_files_max
process_files_open
process_start_time
process_uptime
system_cpu_count
system_cpu_usage
system_load_average_1m

That’s pretty awesome. Let’s check the endpoints accessed.

> SELECT * FROM http_server_requests;
name: http_server_requests
time                count exception mean        method metric_type outcome status sum         upper       uri
----                ----- --------- ----        ------ ----------- ------- ------ ---         -----       ---
1582586157093000000 1     None      252.309331  GET    histogram   SUCCESS 200    252.309331  252.309331  /actuator
1582586157096000000 0     None      0           GET    histogram   SUCCESS 200    0           2866.531375 /jobs/github/{page}

Pretty great! The next step would be to visualise those metrics.

Spring Boot and Micrometer with InfluxDB Part 1: The base project

To those who follow this blog it’s no wonder that I tend to use InfluxDB a lot. I like the fact that it is a real single purpose database (time series) with many features and also comes with enterprise support.

Spring is also one of my tools of choice.
Thus in this blog post we shall integrate Spring with Micrometer and InfluxDB.

Our application will be a REST API for jobs.
Initially it will fetch the jobs from GitHub's jobs API as shown here.

Let’s start with a pom

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.4.RELEASE</version>
    </parent>

    <groupId>com.gkatzioura</groupId>
    <artifactId>DevJobsApi</artifactId>
    <version>1.0-SNAPSHOT</version>

    <build>
        <defaultGoal>spring-boot:run</defaultGoal>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>8</source>
                    <target>8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-webflux</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.12</version>
            <scope>provided</scope>
        </dependency>
   </dependencies>
</project>

Let’s add the Job Repository for GitHub.

package com.gkatzioura.jobs.repository;

import java.util.List;

import org.springframework.http.HttpMethod;
import org.springframework.stereotype.Repository;
import org.springframework.web.reactive.function.client.WebClient;

import com.gkatzioura.jobs.model.Job;

import reactor.core.publisher.Mono;

@Repository
public class GitHubJobRepository {

    private WebClient githubClient;

    public GitHubJobRepository() {
        this.githubClient = WebClient.create("https://jobs.github.com");
    }

    public Mono<List<Job>> getJobsFromPage(int page) {

        return githubClient.method(HttpMethod.GET)
                           .uri("/positions.json?page=" + page)
                           .retrieve()
                           .bodyToFlux(Job.class)
                           .collectList();
    }

}

The Job model

package com.gkatzioura.jobs.model;

import lombok.Data;

@Data
public class Job {

    private String id;
    private String type;
    private String url;
    private String createdAt;
    private String company;
    private String companyUrl;
    private String location;
    private String title;
    private String description;

}

The controller

package com.gkatzioura.jobs.controller;

import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import com.gkatzioura.jobs.model.Job;
import com.gkatzioura.jobs.repository.GitHubJobRepository;

import reactor.core.publisher.Mono;

@RestController
@RequestMapping("/jobs")
public class JobsController {

    private final GitHubJobRepository gitHubJobRepository;

    public JobsController(GitHubJobRepository gitHubJobRepository) {
        this.gitHubJobRepository = gitHubJobRepository;
    }

    @GetMapping("/github/{page}")
    private Mono<List<Job>> getJobsByPage(@PathVariable int page) {
        return gitHubJobRepository.getJobsFromPage(page);
    }

}

And last but not least the main application.

package com.gkatzioura;


import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.security.reactive.ReactiveSecurityAutoConfiguration;

@SpringBootApplication
@EnableAutoConfiguration(exclude = {
        ReactiveSecurityAutoConfiguration.class
})
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

In the next blog post we are going to integrate with InfluxDB and Micrometer.

Autoscaling Groups with terraform on AWS Part 3: Elastic Load Balancer and health check

Previously we set up some Apache Ignite servers in an autoscaling group. The next step is to add a Load Balancer in front of the autoscaling group.

Before any other steps, let’s add some variables to variables.tf.

variable "autoscalling_group_elb_name" {
  type = string
  default = "autoscallinggroupelb"
}

variable "elb_security_group_name" {
  type = string
  default = "elb_name"
}

First we shall add the security group for the Load Balancer.

resource "aws_security_group" "elb_security_group" {
  name = var.elb_security_group_name
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = 80
    to_port = 8080
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Then we need to retrieve the availability zones for the Load Balancer.

data "aws_availability_zones" "available" {
  state = "available"
}

Then let’s add the Load Balancer.

resource "aws_elb" "autoscalling_group_elb" {
  name = var.autoscalling_group_elb_name
  security_groups = ["${aws_security_group.elb_security_group.id}"]
  availability_zones = data.aws_availability_zones.available.names
  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    interval = 30
    target = "HTTP:8080/ignite?cmd=version"
  }
  listener {
    lb_port = 80
    lb_protocol = "http"
    instance_port = "8080"
    instance_protocol = "http"
  }
}

Then let’s match the Load Balancer with the autoscaling group and set the health type to ELB.

resource "aws_autoscaling_group" "autoscalling_group_config" {
  name = var.auto_scalling_group_name
  max_size = 3
  min_size = 2
  health_check_grace_period = 300
  health_check_type = "ELB"
  desired_capacity = 3
  force_delete = true
  vpc_zone_identifier = [for s in data.aws_subnet.subnet_values: s.id]
  load_balancers = ["${aws_elb.autoscalling_group_elb.name}"]

  launch_configuration = aws_launch_configuration.launch-configuration.name

  lifecycle {
    create_before_destroy = true
  }
}

As before, you apply your Terraform configuration.

> terraform apply

Autoscaling Groups with terraform on AWS Part 2: Instance security group and Boot Script

Previously we followed the minimum steps required in order to spin up an autoscaling group in Terraform. In this post we shall add a security group to the autoscaling group and an HTTP server to serve the requests.

Using our base configuration we shall create the security group for the instances.

resource "aws_security_group" "instance_security_group" {
  name = "autoscalling_security_group"
  ingress {
    from_port = 8080
    to_port = 8080
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port = 0
    protocol = "-1"
    to_port = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Our instances shall spin up a server listening on port 8080, thus the security group shall allow ingress traffic to that port. Pay attention to the egress: we shall access resources from the internet, thus we want to be able to download them.

Then we will just set the security group on the launch configuration.

resource"aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
  security_groups = ["${aws_security_group.instance_security_group.id}"]
}

Now it’s time to spin up a server on those instances. The aws_launch_configuration gives us the option to specify the startup script (user data on aws ec2). I shall use the Apache Ignite server and its http interface.

resource"aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
  security_groups = ["${aws_security_group.instance_security_group.id}"]
  user_data =  <<-EOF
              #!/bin/bash
              yum install java unzip -y
              curl https://www-eu.apache.org/dist/ignite/2.7.6/apache-ignite-2.7.6-bin.zip -o apache-ignite.zip
              unzip apache-ignite.zip -d /opt/apache-ignite
              cd /opt/apache-ignite/apache-ignite-2.7.6-bin/
              cp -r libs/optional/ignite-rest-http/ libs/ignite-rest-http/
              ./bin/ignite.sh ./examples/config/example-cache.xml
              EOF

And now we are ready to spin up the autoscaling group as shown previously.

> terraform init
> terraform apply

Autoscaling Groups with terraform on AWS Part 1: Basic Steps

So you want to create an autoscaling group on AWS using terraform. The following are the minimum steps in order to achieve so.

Before writing the actual code you shall specify the AWS Terraform provider as well as the region in the provider.tf file.

provider "aws" {
  version = "~> 2.0"
  region  = "eu-west-1"
}

terraform {
  required_version = "~>0.12.0"
}

The first step would be to define some variables in the variables.tf file.

variable "vpc_id" {
  type = string
  default = "your-vpc-id"
}

variable "launch_configuration_name" {
  type = string
  default = "launch_configuration_name"
}

variable "auto_scalling_group_name" {
  type = string
  default = "auto_scalling_group_name"
}

variable "image_id" {
  type = string
  default =  "image-id-based-on-the-region"
}

variable "instance_type" {
  type = "string" 
  default = "t2.micro"
}

Then we are going to have the autoscaling group configuration in the autoscalling_group.tf file.

data "aws_subnet_ids" "subnets" {
  vpc_id = var.vpc_id
}

data "aws_subnet" "subnet_values" {
  for_each = data.aws_subnet_ids.subnets.ids
  id       = each.value
}

resource"aws_launch_configuration" "launch-configuration" {
  name = var.launch_configuration_name
  image_id = var.image_id
  instance_type = var.instance_type
}

resource "aws_autoscaling_group" "autoscalling_group_config" {
  name = var.auto_scalling_group_name
  max_size = 3
  min_size = 2
  health_check_grace_period = 300
  health_check_type = "EC2"
  desired_capacity = 3
  force_delete = true
  vpc_zone_identifier = [for s in data.aws_subnet.subnet_values: s.id]

  launch_configuration = aws_launch_configuration.launch-configuration.name

  lifecycle {
    create_before_destroy = true
  }
}

Let’s break them down.
The vpc id is needed in order to identify the subnets used by your autoscaling group.
Thus the value vpc_zone_identifier shall derive the subnets from the vpc defined.

Then you have to create a launch configuration.
The launch configuration shall specify the image id which is based on your region and the instance type.

To execute this, provided you have your AWS credentials in place, you have to initialize and then apply.

> terraform init
> terraform apply

Scala Main class

Adding a main class in Scala is something that I always end up searching for, so next time it shall be through my blog.

You can go for the extends App option

One way is to add a main class by extending the App trait. Everything that gets executed in that block is part of the “main” function.

package com.gkatzioura

object MainClass extends App {

  println("Hello world"!)
}

Then you can access the arguments since they are available through the args variable of App.

package com.gkatzioura

object MainClass extends App {

  for( arg <- args ) {
    println(arg)
  }

}

Add a main method

This is the option most familiar to Java developers.

package com.gkatzioura

object MainClass {

  def main(args: Array[String]): Unit = {
    println("Hello, world!")
  }

}

As expected you receive the program arguments through the function arguments.

package com.gkatzioura

object MainClass {

  def main(args: Array[String]): Unit = {
    for( arg <- args ) {
      println(arg)
    }
  }

}

AtomicInteger on Java and Round-Robin

AtomicInteger belongs to the family of atomic variables. The main benefit is that it is non-blocking; by avoiding blocking synchronization you also avoid the suspension and rescheduling of threads.

The AtomicInteger is based on the Compare and Swap mechanism and is part of the scalar group of the atomic variables.
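To get a feel for how compare-and-swap behaves, here is a minimal standalone snippet; it is only illustrative and not part of the original example, and the values used are arbitrary.

package com.gkatzioura.concurrency;

import java.util.concurrent.atomic.AtomicInteger;

public class CompareAndSwapExample {

    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(5);

        // succeeds, because the current value matches the expected value 5
        boolean firstSwap = value.compareAndSet(5, 6);
        System.out.println(firstSwap + " " + value.get());

        // fails, because the current value is now 6 and no longer matches the expected value 5
        boolean secondSwap = value.compareAndSet(5, 7);
        System.out.println(secondSwap + " " + value.get());
    }
}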

Our first use case would be a function on a web page which might be accessed multiple times.

package com.gkatzioura.concurrency;

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicIntegerExample {

    private AtomicInteger atomicInteger = new AtomicInteger();
    public void serveRequest() {
        atomicInteger.incrementAndGet();
        /**
         * logic
         */
    }

    public int requestsServed() {
        return atomicInteger.get();
    }
}

And the test for our use case

package com.gkatzioura.concurrency;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

public class AtomicIntegerExampleTest {

    private AtomicIntegerExample atomicIntegerExample;

    @BeforeEach
    void setUp() {
        atomicIntegerExample = new AtomicIntegerExample();
    }

    @Test
    void testConcurrentIncrementAndGet() throws ExecutionException, InterruptedException {
        final int threads = 10;

        ExecutorService executorService = Executors.newFixedThreadPool(threads);

        List<Future> futures = new ArrayList();

        for (int i = 0; i < threads; i++) {
            futures.add(executorService.submit(() -> {
                atomicIntegerExample.serveRequest();
                return null;
            }));
        }

        for(Future future: futures) {
            future.get();
        }

        Assertions.assertEquals(10,atomicIntegerExample.requestsServed());
    }

}

Apart from using AtomicInteger as a counter, you can use it in various cases, for example a thread-safe round-robin algorithm.

package com.gkatzioura.concurrency;

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicIntegerRoundRobin {

    private final int totalIndexes;
    private final AtomicInteger atomicInteger = new AtomicInteger(-1);

    public AtomicIntegerRoundRobin(int totalIndexes) {
        this.totalIndexes = totalIndexes;
    }

    public int index() {
        int currentIndex;
        int nextIndex;

        do {
            currentIndex = atomicInteger.get();
            nextIndex = currentIndex < Integer.MAX_VALUE ? currentIndex + 1 : 0;
        } while (!atomicInteger.compareAndSet(currentIndex, nextIndex));

        return nextIndex % totalIndexes;
    }

}

The totalIndexes is the total number of indexes. When the next index is requested, the counter shall be incremented and a compare-and-set operation will take place. If it fails due to another thread, the operation will be retried and will get the next value of the counter.
A modulo operation will give the current index. If the AtomicInteger reaches its max value it shall be reset to zero. The reset can cause an edge case and change the order of the indexes. If this is an issue you can adjust the max value based on your total index size in order to avoid this, as sketched below.
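A minimal sketch of that adjustment, assuming we wrap the counter just below the largest multiple of totalIndexes instead of Integer.MAX_VALUE; the class name and the limit calculation are mine and not part of the original example.

package com.gkatzioura.concurrency;

import java.util.concurrent.atomic.AtomicInteger;

public class BoundedAtomicIntegerRoundRobin {

    private final int totalIndexes;
    private final int limit;
    private final AtomicInteger atomicInteger = new AtomicInteger(-1);

    public BoundedAtomicIntegerRoundRobin(int totalIndexes) {
        this.totalIndexes = totalIndexes;
        // one less than the greatest multiple of totalIndexes that fits in an int,
        // so wrapping back to zero does not disturb the modulo sequence
        this.limit = (Integer.MAX_VALUE / totalIndexes) * totalIndexes - 1;
    }

    public int index() {
        int currentIndex;
        int nextIndex;

        do {
            currentIndex = atomicInteger.get();
            nextIndex = currentIndex < limit ? currentIndex + 1 : 0;
        } while (!atomicInteger.compareAndSet(currentIndex, nextIndex));

        return nextIndex % totalIndexes;
    }

}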

Also some testing on that.

package com.gkatzioura.concurrency;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class AtomicIntegerRoundRobinTest {

    private static final int MAX_INDEX = 10;

    private AtomicIntegerRoundRobin atomicIntegerRoundRobin;

    @BeforeEach
    void setUp() {
        atomicIntegerRoundRobin = new AtomicIntegerRoundRobin(MAX_INDEX);
    }

    @Test
    void testIndexesSerially() {
        for(long i=0;i<MAX_INDEX*20;i++) {
            System.out.println(atomicIntegerRoundRobin.index());
        }

        Assertions.assertEquals(0, atomicIntegerRoundRobin.index());
    }

    @Test
    void testIndexesConcurrently() throws ExecutionException, InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(4);

        List<Future> futures = new ArrayList();

        // the number of submitted tasks must be a multiple of MAX_INDEX
        // so that the final index() call below wraps back to 0
        for (int i = 0; i < MAX_INDEX * 4; i++) {
            futures.add(executorService.submit(() -> atomicIntegerRoundRobin.index()));
        }

        for(Future future: futures) {
            System.out.println(future.get());
        }

        Assertions.assertEquals(0,atomicIntegerRoundRobin.index());
    }

}

My most used Git commands on open source projects.

The basic step when contributing to open source projects is to fork the project.
Then the process is easy: you create your branch and you make a pull request. However, from time to time you need to adjust your branch based on the latest changes.

This is how you sync your fork to the original one.

git fetch upstream
git checkout master
git merge upstream/master

This is pretty easy but you might want something more than just synchronizing with the original repository.

For example, there might be a pull request which never got merged for various reasons and you want to pick up from where it was left off.

The first step is to add the repository needed

git remote add $remote_repo_identifier $remote_repo_url

So we just added another remote to our repository.

The next step is to fetch the branches from the remote.

git fetch $remote_repo_identifier

Then you can switch to the branch of your choice, create a new branch from it and continue with a pull request.

git checkout $remote_branch

Remove the upstream

 git remote remove $remote_repo_identifier

Set upstream to the current repo

git branch -u $remote_repo_identifier/$remote_branch $remote_branch
git branch --set-upstream-to=$remote_repo_identifier/$remote_branch $remote_branch

For example change the upstream to the origin one

git push --set-upstream origin $remote_branch