Apache Ignite on your Kubernetes Cluster Part 2: RBAC Explained

So previously we had a vanilla installation of Apache Ignite on Kubernetes.

You had a cache service running, however all you did was install a Helm chart.
In this blog we shall evaluate what was installed and take notes for our future Helm charts.

The first step would be to view the helm chart.

> helm list
NAME        	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART       	APP VERSION
ignite-cache	default  	1       	2020-03-07 22:23:49.918924 +0000 UTC	deployed	ignite-1.0.1	2.7.6

Now let’s download it

> helm fetch stable/ignite
> tar xvf ignite-1.0.1.tgz
> cd ignite/; ls -R
Chart.yaml	README.md	templates	values.yaml

./templates:
NOTES.txt			account-role.yaml		persistence-storage-class.yaml	service-account.yaml		svc.yaml
_helpers.tpl			configmap.yaml			role-binding.yaml		stateful-set.yaml		wal-storage-class.yaml

Reading through the template files is a bit challenging (well, they are templates :P), so we shall instead inspect what was installed through our previous blog.

Let's get started with the account role. The ClusterRole that Ignite uses needs to be able to get/list/watch pods and endpoints. This makes sense, since the Ignite nodes need to discover each other.

> kubectl get ClusterRole ignite-cache -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  resourceVersion: "137525"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/ignite-cache
  uid: 0cad0689-2f94-4b74-87bc-b468e2ac78ae
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - endpoints
  verbs:
  - get
  - list
  - watch

In order to use this role you need a service account. A service account is created along with a token.

> kubectl get serviceaccount ignite-cache -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  namespace: default
  resourceVersion: "137524"
  selfLink: /api/v1/namespaces/default/serviceaccounts/ignite-cache
  uid: 7aab67e5-04db-41a8-b73d-e76e34ca1d8e
secrets:
- name: ignite-cache-token-8rln4
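
If you are curious about the token itself, you can inspect the referenced secret; it holds the CA certificate and the bearer token that get mounted into pods using this service account:

> kubectl describe secret ignite-cache-token-8rln4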

Then we have the role binding. The ClusterRoleBinding binds the ClusterRole ignite-cache to the ServiceAccount ignite-cache.

> kubectl get ClusterRoleBinding ignite-cache -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  resourceVersion: "137526"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/ignite-cache
  uid: 1e180bd1-567f-4979-a278-ba2e420ed482
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ignite-cache
subjects:
- kind: ServiceAccount
  name: ignite-cache
  namespace: default
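
With the binding in place you can verify the granted permissions by impersonating the service account (assuming your own user is allowed to impersonate):

> kubectl auth can-i list pods --as=system:serviceaccount:default:ignite-cache
yes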

It is important for your Ignite workloads to use this service account and its token. By doing so, they have the permissions needed to discover the other nodes in your cluster.
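
For reference, here is a minimal, hypothetical pod spec showing how a workload opts in to the service account; the chart's stateful-set.yaml is expected to wire this up for you, so the snippet is purely illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ignite-debug
spec:
  serviceAccountName: ignite-cache
  containers:
  - name: ignite
    image: apacheignite/ignite:2.7.6
EOF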

The next blog focuses on the configuration.

Apache Ignite on your Kubernetes Cluster Part 1: Vanilla installation

By all means, Apache Ignite is an amazing open source project.
Don't assume it's just a cache; it provides way more.


Kubernetes gets more popular by the day and is also a very convenient tool.
In this tutorial we shall integrate Ignite with Kubernetes.

The first step would be to spin up Minikube.

To get Ignite on your Kubernetes installation, the first step is to install the Helm chart.

>helm repo add stable https://kubernetes-charts.storage.googleapis.com
>helm install ignite-cache stable/ignite
NAME: ignite-cache
LAST DEPLOYED: Sat Mar  7 22:23:49 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To check cluster state please run:

kubectl exec -n default ignite-cache-0 -- /opt/ignite/apache-ignite/bin/control.sh --state

Eventually, after this command is issued, you should have an Ignite cache set up on your Kubernetes cluster.

>kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
ignite-cache-0   1/1     Running   0          79s
ignite-cache-1   1/1     Running   0          13s
>kubectl get svc ignite-cache
ignite-cache   ClusterIP   None         <none>        11211/TCP,47100/TCP,47500/TCP,49112/TCP,10800/TCP,8080/TCP,10900/TCP   6m24s

To those familiar with Kubernetes: an Ignite cache has just been spun up in your cluster, and your applications can use the Ignite service from within the cluster.
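
For a quick sanity check from your workstation, you can port-forward to a pod and hit Ignite's REST API (the 8080 port exposed by the service above suggests the REST module is enabled); then, in a second terminal, ask for the version:

> kubectl port-forward ignite-cache-0 8080:8080
> curl "http://localhost:8080/ignite?cmd=version"
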
The next blog focuses on the service account needed.

Use local docker image on minikube.

You use Minikube and you want to run the development images that you build locally. This might seem tricky, since Minikube pulls images from a registry, whereas your images live only in your workstation's local Docker daemon.

In any case you can still use your local images with Minikube, so let's get started.

Before running any container, let's issue the following:

> eval $(minikube docker-env)

This reuses the Docker host from Minikube for your current bash session: every docker command you issue now talks to the Docker daemon inside the Minikube VM.

See for yourself.

> minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="/Users/gkatzioura/.minikube/certs"
# Run this command to configure your shell:
# eval $(minikube docker-env)

Then spin up an nginx image. Most of the commands are taken from this tutorial.

>docker run -d -p 8080:80 --name my-nginx nginx
>docker ps --filter name=my-nginx
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
128ce006ecae        nginx               "nginx -g 'daemon of…"   13 seconds ago      Up 12 seconds       0.0.0.0:8080->80/tcp   my-nginx

Now let’s create an image from the running container.

docker commit 128ce006ecae dockerimage:version1

Then let’s run our custom image on minikube.

kubectl create deployment test-image --image=dockerimage:version1
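
A note in case the pod reports an image pull error: since the tag is not :latest, the default imagePullPolicy is IfNotPresent, which should find the image in Minikube's Docker daemon. If Kubernetes still tries to pull, you can pin the policy explicitly (a sketch; kubectl create deployment names the container after the image, here dockerimage):

kubectl patch deployment test-image -p '{"spec":{"template":{"spec":{"containers":[{"name":"dockerimage","imagePullPolicy":"Never"}]}}}}'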

Let’s also expose the service

kubectl expose deployment test-image --type=LoadBalancer --port=80

Let's take it to the next level and try to wget our service from within the cluster.

> kubectl exec -it podwithbinbash /bin/bash
bash-4.4# wget test-image
Connecting to test-image (10.101.70.7:80)
index.html           100% |***********************************************************************************************************|   612  0:00:00 ETA
bash-4.4# cat index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Pay extra attention: the above will work only in the terminal where you executed the command

eval $(minikube docker-env)

If you want, you can set up your bash_profile to do this for every terminal, as sketched below; this is up to you.
Eventually this is one of the quick ways to use your local images with Minikube, and most probably there are others available.
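
A minimal sketch for the bash_profile approach, assuming you use bash as your shell:

echo 'eval $(minikube docker-env)' >> ~/.bash_profile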

Read replicas and Spring Data Part 2: Configuring the base project

In our previous post we set up multiple PostgreSQL instances with the same data.
Our next step is to configure our Spring project to use both servers.

As stated previously we shall use some of the code taken from the Spring Boot JPA post, since we use exactly the same database.

This shall be our gradle build file

plugins {
	id 'org.springframework.boot' version '2.1.9.RELEASE'
	id 'io.spring.dependency-management' version '1.0.8.RELEASE'
	id 'java'
}

group = 'com.gkatzioura'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'

repositories {
	mavenCentral()
}

dependencies {
	implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
	implementation 'org.springframework.boot:spring-boot-starter-web'
	implementation "org.postgresql:postgresql:42.2.8"
	testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

Now let's proceed with creating the model, based on the table created in the previous blog.

package com.gkatzioura.springdatareadreplica.entity;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "employee", catalog="spring_data_jpa_example")
public class Employee {

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "firstname")
    private String firstName;

    @Column(name = "lastname")
    private String lastname;

    @Column(name = "email")
    private String email;

    @Column(name = "age")
    private Integer age;

    @Column(name = "salary")
    private Integer salary;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastname() {
        return lastname;
    }

    public void setLastname(String lastname) {
        this.lastname = lastname;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public Integer getAge() {
        return age;
    }

    public void setAge(Integer age) {
        this.age = age;
    }

    public Integer getSalary() {
        return salary;
    }

    public void setSalary(Integer salary) {
        this.salary = salary;
    }

}

And the next step is to create a spring data repository.

package com.gkatzioura.springdatareadreplica.repository;

import org.springframework.data.jpa.repository.JpaRepository;
import com.gkatzioura.springdatareadreplica.entity.Employee;

public interface EmployeeRepository extends JpaRepository<Employee,Long> {
}

Also we are going to add a controller.

package com.gkatzioura.springdatareadreplica.controller;

import java.util.List;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import com.gkatzioura.springdatareadreplica.entity.Employee;
import com.gkatzioura.springdatareadreplica.repository.EmployeeRepository;

@RestController
public class EmployeeController {

    private final EmployeeRepository employeeRepository;

    public EmployeeController(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    @RequestMapping("/employee")
    public List<Employee> getEmployees() {
        return employeeRepository.findAll();
    }

}

All it takes is to add the right properties in your application.yaml:

spring:
  datasource:
    platform: postgres
    driverClassName: org.postgresql.Driver
    username: db-user
    password: your-password
    url: jdbc:postgresql://127.0.0.2:5432/postgres

Nowadays, Spring Boot makes it possible not to bother with manual JPA configuration.

This is all you need in order to run the application. Once your application is running just try to fetch the employees.

curl http://localhost:8080/employee
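
Assuming the seed data from the previous post is in place, the response should look roughly like this (abbreviated; the entity maps salary to an Integer, so the decimals are dropped):

[{"id":1,"firstName":"John","lastname":"Doe 1","email":"john1@doe.com","age":18,"salary":1234}, ...]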

As you have seen, we did not do any JPA configuration. Since Spring Boot 2, specifying the database url is sufficient for the auto-configuration to kick in and do all of this configuration for you.

However in our case we want to have multiple datasource and entity manager configurations. In the next post we shall configure the entity managers for our application.

Read replicas and Spring Data Part 1: Configuring the Databases

This is a series of blog posts on our quest to increase our application’s performance by utilizing read replicas.

For this project our goal is to set up our Spring Data application so that writes go through the primary database while reads go through repositories backed by the read replicas.

In order to simulate this environment we shall use PostgreSQL instances through Docker.

The motives are simple. Your Spring application has become increasingly popular and you want it to handle more requests. Most of the applications out there have a higher demand for read operations rather than write operations. Thus I assume that your application falls into the same category.
Although SQL databases are not horizontally scalable on their own, you can work your way around this by using read replicas.

Our goal is not to set up actual read replication in PostgreSQL; therefore, instead of configuring any replication, we will simply load the same data into both databases.

This is the script we shall use to populate the databases.

#!/bin/bash
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
    create schema spring_data_jpa_example;

    create table spring_data_jpa_example.employee(
        id  SERIAL PRIMARY KEY,
        firstname   TEXT    NOT NULL,
        lastname    TEXT    NOT NULL,
        email       TEXT    not null,
        age         INT     NOT NULL,
        salary         real,
        unique(email)
    );

    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 1','john1@doe.com',18,1234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 2','john2@doe.com',19,2234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 3','john3@doe.com',20,3234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 4','john4@doe.com',21,4234.23);
    insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
    values ('John','Doe 5','john5@doe.com',22,5234.23);
EOSQL

Since we shall use Docker and Docker Compose, the script above will be used to initialize the databases.
Now on to create our Docker Compose stack.

version: '3.5'

services:
  write-db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: db-user
      POSTGRES_PASSWORD: your-password
      POSTGRES_DB: postgres
    networks:
      - postgresql-network
    ports:
      - "127.0.0.2:5432:5432"
    volumes:
      - $PWD/init-db-script.sh:/docker-entrypoint-initdb.d/init-db-script.sh
  read-db-1:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: db-user
      POSTGRES_PASSWORD: your-password
      POSTGRES_DB: postgres
    networks:
      - postgresql-network
    ports:
      - "127.0.0.3:5432:5432"
    volumes:
      - $PWD/init-db-script.sh:/docker-entrypoint-initdb.d/init-db-script.sh
networks:
  postgresql-network:
    name: postgresql-network

As you can see, our configuration is pretty simple. If you look carefully, you will notice that I numbered the read-db with one. This is because in the future we will add more read replicas.

I also bound the containers to different local IPs.

If you have problems binding addresses like 127.0.0.*:5432, you should try:

sudo ifconfig lo0 alias 127.0.0.2 up
sudo ifconfig lo0 alias 127.0.0.3 up
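
To remove the aliases later on, the same command accepts -alias:

sudo ifconfig lo0 -alias 127.0.0.2
sudo ifconfig lo0 -alias 127.0.0.3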

If you are unsuccessful, just change the ports and it will work. It might not be as convenient, but it's still ok.

So let's get our Docker Compose stack up and running.

docker-compose -f ./postgresql-stack.yaml up

We should now be able to query data in both PostgreSQL instances.

docker exec -it deploy_read-db-1_1 /bin/bash
root@07c502968cb3:/# psql -v --username "$POSTGRES_USER" --dbname "$POSTGRES_DB"
db-user=# select*from spring_data_jpa_example.employee;
 id | firstname | lastname |     email     | age | salary
----+-----------+----------+---------------+-----+---------
  1 | John      | Doe 1    | john1@doe.com |  18 | 1234.23
  2 | John      | Doe 2    | john2@doe.com |  19 | 2234.23
  3 | John      | Doe 3    | john3@doe.com |  20 | 3234.23
  4 | John      | Doe 4    | john4@doe.com |  21 | 4234.23
  5 | John      | Doe 5    | john5@doe.com |  22 | 5234.23
(5 rows)

We are pretty much set for our next step. We have the databases up and running, and we are going to spin up a Spring application on top of them. The next blog focuses on implementing an application running against our primary database.

Using Minikube on osx

Docker Compose works wonders for me when it comes to running some simple components on my workstation.

Spawning and simulating an infrastructure locally is fast and lightweight.
However most teams nowadays use Kubernetes.
If you want to simulate a Kubernetes environment locally, the tool to use is Minikube.

With Minikube you need to have a VM running on your workstation. This is normal; after all, your workloads in a Kubernetes environment run on multiple VMs.

So if you use OSX it's very simple, provided you have a hypervisor installed (my default one is VirtualBox).
I just went for downloading the binary directly:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \
  && chmod +x minikube
sudo mv minikube /usr/local/bin

Depending on your workstation the installation varies, but it remains an easy one.

So let’s get started

minikube start

Now you might face a challenge with Minikube on osx.

Progress state: NS_ERROR_FAILURE
VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
VBoxManage: error: Context: "RTEXITCODE handleCreate(HandlerArg *)" at line 94 of file VBoxManageHostonly.cpp

As you can understand, you need to reinstall VirtualBox. However, you might face a challenge with the installation; I was getting the error 'The installation failed'. As explained here, you need to edit your OSX security settings and allow 'System software from Oracle' to load.

In the end, just run:

> minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100

and you are ready to go.
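
When you are done, you can stop the VM or remove it altogether:

> minikube stop
> minikube delete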

Debug your container by overriding the command.

The main problem with debugging on Docker has to do with images that already have their command (CMD) specified.

If something goes wrong that has to do with the filesystem, or with commands that should have taken effect but did not, you need to do some troubleshooting.

Overriding the command which is executed is helpful here. I do this all the time on custom images, since I need a shell in order to troubleshoot.

Supposing I need to troubleshoot something on the default nginx image.

Normally, the nginx image runs like this:

docker run --rm nginx

Now, instead of running the server, we shall override the entrypoint and enter a shell session by executing /bin/sh:


docker run --rm -it --entrypoint "/bin/sh" nginx

If you also want to pass arguments to the executable you specify, append them after the image name:


docker run --rm -it --entrypoint "ls" nginx -l
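
If you are unsure what an image executes by default, you can inspect its entrypoint and command before overriding them:

docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' nginx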


Docker compose: run stack dynamically

I use docker compose every day for my local development needs.

During the day I might turn various databases or servers on and off, so I need to do it fast and in a managed way.

Usually your docker-compose file contains the configuration for many containers, networks, volumes etc.

stack.yaml

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: username
      MONGO_INITDB_ROOT_PASSWORD: password
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: username
      ME_CONFIG_MONGODB_ADMINPASSWORD: password

This works if you always want the same services up and running.

However it does have a cost in resources, and most of the time you don't need the full stack.

What you can do in these cases is split the stack into separate files and choose which ones to use.

mongo.yaml

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: username
      MONGO_INITDB_ROOT_PASSWORD: password

express.yaml

version: '3.5'

services:
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: username
      ME_CONFIG_MONGODB_ADMINPASSWORD: password

Then choosing what to run becomes very easy: pass the files you need and omit the rest.

docker-compose -f mongo.yaml -f express.yaml up
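
And if you only need the database, pass just that one file:

docker-compose -f mongo.yaml up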

Pass multiple commands on Docker run

Docker, apart from serving our workloads efficiently, is also an amazing tool when it comes to not installing additional binaries on your workstation.

Eventually you will find it very easy and simple to run just a single command on Docker.

For example I want to run a hello world in go.

My source code is going to be the simple hello world.

package main

import "fmt"

func main() {
    fmt.Println("hello world")
}

Pretty simple! The file shall be named hello_world.go

Now let’s run this in a container.

docker run -v $(pwd):/go/src/app --rm --name helloworld golang:1.8 go run src/app/hello_world.go
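
If everything is wired up correctly, the container should simply print:

hello world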

How about installing some Go packages and then running our application, all in a one-liner?

If you try to do so, you shall realize that Docker won't interpret the commands the way you want, so here's how to get the result that you want.

If your image contains the /bin/bash or /bin/sh binary, you can pass the commands you want to execute as a string.

docker run -v $(pwd):/go/src/app --rm --name helloworld golang:1.8 /bin/bash -c "cd src/app && go get yourpackage && go run hello_world.go"

That's it! Now you can run complex bash one-liners without worrying about installing additional software on your workstation.

Spin up an InfluxDB instance with docker for testing.

It is a reality that we tend to make things harder than they need to be when we try to set up and connect to various databases.
Since Docker came out, things have become a lot easier.

Most databases like MongoDB, InfluxDB etc. come with the binaries needed to spin up the database, but also with the clients needed in order to connect. It is pretty much becoming a standard.

We will showcase this by using InfluxDB's Docker image and the sample data from the official walkthrough.

Let’s start with spinning up the instance.

docker run --rm -p 8086:8086 --name influxdb-local influxdb

We now have an InfluxDB instance running on port 8086 under the name influxdb-local. Since we passed --rm, once the container is stopped it will also be deleted.

The first step is to connect to the InfluxDB shell and interact with the database.

docker exec -it influxdb-local influx
> CREATE DATABASE NOAA_water_database
> exit

Now let’s import some data

docker exec -it influxdb-local /bin/bash
curl https://s3.amazonaws.com/noaa.water-database/NOAA_data.txt -o NOAA_data.txt
influx -import -path=NOAA_data.txt -precision=s -database=NOAA_water_database
rm NOAA_data.txt

The next step is to connect to the shell and query some data.

docker exec -it influxdb-local influx -precision rfc3339 -database NOAA_water_database
Connected to http://localhost:8086 version 1.4.x
InfluxDB shell 1.4.x
> SHOW measurements
name: measurements
name
----
average_temperature
h2o_feet
h2o_pH
h2o_quality
h2o_temperature
>
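
From here you can run the queries from the walkthrough against the sample data, for example:

> SELECT COUNT("water_level") FROM h2o_feet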

As you can see, we just created an InfluxDB instance with data loaded, ready to execute queries and run some tests! Pretty simple and clean. Once we are done, stopping the container removes the container along with all its data.