Testing with Hoverfly and Java Part 2: Delays

In the previous post we implemented JSON- and Java-based Hoverfly scenarios.
Now it’s time to dive deeper and use other Hoverfly features.

A big part of testing has to do with negative scenarios, and one of them is delays. Although we always mock a server and can successfully reproduce erroneous responses, one thing that is key to simulate in today’s microservices-driven world is delay.
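
Reproducing a plain erroneous response is a one-liner in the DSL we saw in the previous post. As a quick reminder, below is a minimal sketch simulating a 500 response; the /unstable endpoint is made up for illustration, and serverError comes from the same ResponseCreators class as success.

// Simulate a 500 response for a hypothetical /unstable endpoint.
var errorSimulation = SimulationSource.dsl(service("http://localhost:8085")
		.get("/unstable")
		.willReturn(serverError()));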

Delay is the more interesting case, so let’s make a server respond with a 30-second delay.

public class SimulateDelayTests {

	private Hoverfly hoverfly;

	@BeforeEach
	void setUp() {
		var simulation = SimulationSource.dsl(service("http://localhost:8085")
				.get("/delay")
				.willReturn(success("{\"username\":\"test-user\"}", "application/json").withDelay(30, TimeUnit.SECONDS)));

		var localConfig = HoverflyConfig.localConfigs().disableTlsVerification().asWebServer().proxyPort(8085);
		hoverfly = new Hoverfly(localConfig, SIMULATE);
		hoverfly.start();
		hoverfly.simulate(simulation);
	}

	@AfterEach
	void tearDown() {
		hoverfly.close();
	}

}

Let’s add the delay test.

@Test
void testWithDelay() {
   var client = HttpClient.newHttpClient();
   var request = HttpRequest.newBuilder()
         .uri(URI.create("http://localhost:8085/delay"))
         .build();
   var start = Instant.now();
   var res = client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
         .thenApply(HttpResponse::body)
         .join();
   var end = Instant.now();
   Assertions.assertEquals("{\"username\":\"test-user\"}", res);

   var seconds = Duration.between(start, end).getSeconds();
   Assertions.assertTrue(seconds >= 30);
}

Delay simulation is there, up and running, so let’s try to simulate timeouts.

	@Test
	void testTimeout() {
		var client = HttpClient.newHttpClient();
		var request = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/delay"))
				.timeout(Duration.ofSeconds(10))
				.build();
		assertThrows(HttpTimeoutException.class, () -> {
			try {
				client.sendAsync(request, HttpResponse.BodyHandlers.ofString()).join();
			} catch (CompletionException ex) {
				throw ex.getCause();
			}
		});
	}

That’s it, we have both delays and timeouts!
Other test scenarios involve state, which is covered in the next tutorial.

Testing with Hoverfly and Java Part 1: Get started with Simulation Mode

These days a major problem exists when it comes to testing code that interacts with cloud services for which no test tools are provided.
For example, although you might have tools for local Pub/Sub testing, including Docker images, you might not have anything that can mock BigQuery.

This causes an issue when it comes to CI jobs: testing is part of the requirements, yet there might be blockers to testing against the actual service. The thing is, you still need to cover all the pessimistic scenarios (for example timeouts).

And this is where Hoverfly can help.

Hoverfly is a lightweight, open source API simulation tool. Using Hoverfly, you can create realistic simulations of the APIs your application depends on.

Our first examples will have to do with simulating just a web server. The first step is to add the Hoverfly dependency.

    <dependencies>
        <dependency>
            <groupId>io.specto</groupId>
            <artifactId>hoverfly-java</artifactId>
            <version>0.12.2</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

Instead of using the Hoverfly Docker image, we shall use the Java library for some extra flexibility.

We have two options for configuring the Hoverfly simulation mode: one is the Java DSL and the other is JSON.
Let’s cover both.

The example below uses the Java DSL. We spin up Hoverfly on port 8085 and load this configuration.

class SimulationJavaDSLTests {

	private Hoverfly hoverfly;

	@BeforeEach
	void setUp() {
		var simulation = SimulationSource.dsl(service("http://localhost:8085")
				.get("/user")
				.willReturn(success("{\"username\":\"test-user\"}", "application/json")));

		var localConfig = HoverflyConfig.localConfigs().disableTlsVerification().asWebServer().proxyPort(8085);
		hoverfly = new Hoverfly(localConfig, SIMULATE);
		hoverfly.start();
		hoverfly.simulate(simulation);
	}

	@AfterEach
	void tearDown() {
		hoverfly.close();
	}

	@Test
	void testHttpGet() {
		var client = HttpClient.newHttpClient();
		var request = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/user"))
				.build();
		var res = client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
				.thenApply(HttpResponse::body)
				.join();
		Assertions.assertEquals("{\"username\":\"test-user\"}",res);
	}
}

Now let’s do the same with JSON. Instead of handcrafting the JSON, we can make the code do the work for us.

var simulation = SimulationSource.dsl(service("http://localhost:8085")
			.get("/user")
			.willReturn(success("{\"username\":\"test-user\"}", "application/json")));

var simulationStr = simulation.getSimulation();
System.out.println(simulationStr);

This gives us the JSON generated by the Java DSL. The result looks like this.

{
  "data": {
    "pairs": [
      {
        "request": {
          "path": [
            {
              "matcher": "exact",
              "value": "/user"
            }
          ],
          "method": [
            {
              "matcher": "exact",
              "value": "GET"
            }
          ],
          "destination": [
            {
              "matcher": "exact",
              "value": "localhost:8085"
            }
          ],
          "scheme": [
            {
              "matcher": "exact",
              "value": "http"
            }
          ],
          "query": {},
          "body": [
            {
              "matcher": "exact",
              "value": ""
            }
          ],
          "headers": {},
          "requiresState": {}
        },
        "response": {
          "status": 200,
          "body": "{\"username\":\"test-user\"}",
          "encodedBody": false,
          "templated": true,
          "headers": {
            "Content-Type": [
              "application/json"
            ]
          }
        }
      }
    ],
    "globalActions": {
      "delays": []
    }
  },
  "meta": {
    "schemaVersion": "v5"
  }
}

Let’s place this file in the test resources folder under the name simulation.json.

And with some code changes we get exactly the same result.


public class SimulationJsonTests {

	private Hoverfly hoverfly;

	@BeforeEach
	void setUp() {
		var simulationUrl = SimulationJsonTests.class.getClassLoader().getResource("simulation.json");
		var simulation = SimulationSource.url(simulationUrl);

		var localConfig = HoverflyConfig.localConfigs().disableTlsVerification().asWebServer().proxyPort(8085);
		hoverfly = new Hoverfly(localConfig, SIMULATE);
		hoverfly.start();
		hoverfly.simulate(simulation);
	}

	@AfterEach
	void tearDown() {
		hoverfly.close();
	}

	@Test
	void testHttpGet() {
		var client = HttpClient.newHttpClient();
		var request = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/user"))
				.build();
		var res = client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
				.thenApply(HttpResponse::body)
				.join();
		Assertions.assertEquals("{\"username\":\"test-user\"}",res);
	}

}

Sometimes there is also the need to combine simulations, regardless of whether they are JSON- or Java-based. This is supported as well, by loading more than one simulation.

	@Test
	void testMixedConfiguration() {
		var simulationUrl = SimulationJsonTests.class.getClassLoader().getResource("simulation.json");
		var jsonSimulation = SimulationSource.url(simulationUrl);


		var javaSimulation = SimulationSource.dsl(service("http://localhost:8085")
				.get("/admin")
				.willReturn(success("{\"username\":\"test-admin\"}", "application/json")));

		hoverfly.simulate(jsonSimulation, javaSimulation);

		var client = HttpClient.newHttpClient();
		var jsonConfigBasedRequest = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/user"))
				.build();
		var userResponse = client.sendAsync(jsonConfigBasedRequest, HttpResponse.BodyHandlers.ofString())
				.thenApply(HttpResponse::body)
				.join();
		Assertions.assertEquals("{\"username\":\"test-user\"}",userResponse);

		var javaConfigBasedRequest = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/admin"))
				.build();
		var adminResponse = client.sendAsync(javaConfigBasedRequest, HttpResponse.BodyHandlers.ofString())
				.thenApply(HttpResponse::body)
				.join();
		Assertions.assertEquals("{\"username\":\"test-admin\"}",adminResponse);
	}

That’s it; we are all set up to continue exploring Hoverfly and its capabilities. The next blog is about delays.

Dependency management and Maven

Maven is great and mature; there is a solution for almost everything. The main case you might stumble on in organisation projects is dependency management. Instead of each project pinning its own dependency versions, you want a centralised way to inherit those dependencies.

In those cases you declare the managed dependencies in the parent pom. In my example I just want to include the Akka stream dependencies.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>org.example</groupId>
	<artifactId>maven-dependency-management</artifactId>
	<packaging>pom</packaging>
	<version>1.0-SNAPSHOT</version>

	<properties>
		<akka.version>2.5.31</akka.version>
		<akka.http.version>10.1.11</akka.http.version>
		<scala.binary.version>2.12</scala.binary.version>
	</properties>

	<modules>
		<module>child-one</module>
	</modules>


	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>com.typesafe.akka</groupId>
				<artifactId>akka-stream_2.12</artifactId>
				<version>${akka.version}</version>
			</dependency>
			<dependency>
				<groupId>com.typesafe.akka</groupId>
				<artifactId>akka-http_2.12</artifactId>
				<version>${akka.http.version}</version>
			</dependency>
			<dependency>
				<groupId>com.typesafe.akka</groupId>
				<artifactId>akka-http-spray-json_2.12</artifactId>
				<version>${akka.http.version}</version>
			</dependency>
		</dependencies>
	</dependencyManagement>

</project>

The part doing the work is the dependencyManagement block.

Now the child project is able to include those libraries without specifying a version. Having the version derived and managed is essential; many unpleasant surprises can come from an incompatible version.

In the child module the dependencies are declared without a version, since the version is inherited from the parent.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<parent>
		<artifactId>maven-dependency-management</artifactId>
		<groupId>org.example</groupId>
		<version>1.0-SNAPSHOT</version>
	</parent>
	<modelVersion>4.0.0</modelVersion>

	<artifactId>child-one</artifactId>

	<dependencies>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-stream_2.12</artifactId>
		</dependency>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-http_2.12</artifactId>
		</dependency>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-http-spray-json_2.12</artifactId>
		</dependency>
	</dependencies>

</project>

On another note, sometimes we want to use another project’s dependency management without that project being our parent. This covers the case where you need to include the dependency management of another project while you already have a parent project.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

	<modelVersion>4.0.0</modelVersion>

	<groupId>org.example</groupId>
	<artifactId>independent-project</artifactId>
	<version>1.0-SNAPSHOT</version>

	<dependencyManagement>
		<dependencies>
			<dependency>
				<artifactId>maven-dependency-management</artifactId>
				<groupId>org.example</groupId>
				<version>1.0-SNAPSHOT</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>

	<dependencies>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-stream_2.12</artifactId>
		</dependency>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-http_2.12</artifactId>
		</dependency>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-http-spray-json_2.12</artifactId>
		</dependency>
	</dependencies>
</project>

The key part is this block:

	<dependencyManagement>
		<dependencies>
			<dependency>
				<artifactId>maven-dependency-management</artifactId>
				<groupId>org.example</groupId>
				<version>1.0-SNAPSHOT</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>

Here we imported the dependency management of another project by declaring it as a pom-type dependency with import scope. By declaring more than one such import, you can inherit dependencies from multiple projects.

Spring Boot and Micrometer with Prometheus Part 6: Securing metrics

Previously we successfully spun up our Spring Boot application with Prometheus. An endpoint in our Spring application exposes our metric data so that Prometheus is able to retrieve it.
The main question that comes to mind is how to secure this information.

Spring already provides us with its great security framework, so it will be fairly easy to use it for our application. The goal is to require basic authentication for the actuator/prometheus endpoint and to configure Prometheus to access that information using basic authentication.

So the first step is to enable security on our app by adding the security dependency.

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>

The Spring Boot application will get secured on its own by generating a password for the default user.
However, we want control over the username and password, so we are going to use some environment variables.

By running the application with credentials for the default user, we have the Prometheus endpoint secured with minimal configuration.

SPRING_SECURITY_USER_NAME=test-user SPRING_SECURITY_USER_PASSWORD=test-password mvn spring-boot:run
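
If you want finer-grained control, for example requiring authentication only on the actuator endpoints while leaving the rest of the API open, a minimal sketch along the following lines should do. It assumes the WebFlux-based project from the previous parts; the class name is made up, and none of this is required for the minimal setup above.

import org.springframework.boot.actuate.autoconfigure.security.reactive.EndpointRequest;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.web.server.ServerHttpSecurity;
import org.springframework.security.web.server.SecurityWebFilterChain;

@Configuration
public class ActuatorSecurityConfig {

	@Bean
	public SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
		// Only the actuator endpoints (including /actuator/prometheus) require basic auth.
		return http.authorizeExchange(exchanges -> exchanges
						.matchers(EndpointRequest.toAnyEndpoint()).authenticated()
						.anyExchange().permitAll())
				.httpBasic(Customizer.withDefaults())
				.build();
	}
}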

So now that we have security set up on our app, it’s time to update our Prometheus config.

scrape_configs:
  - job_name: 'prometheus-spring'
    scrape_interval: 1m
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['my.local.machine:8080']
    basic_auth:
      username: "test-user"
      password: "test-password"

Now let’s run Prometheus again, as described previously.

To sum up, after this change Prometheus will gather our application’s metrics in a secure way.

Spring Boot and Micrometer with Prometheus Part 5: Spinning up prometheus

Previously we adapted our Spring Boot application to expose the endpoints for Prometheus.
This blog will focus on setting up Prometheus and configuring it to scrape the Spring Boot endpoints.
So let’s get started by spinning up the Prometheus server using Docker.

Before spinning up Prometheus we need to supply a configuration file, so that it can pull data from our application.
Thus we should supply a prometheus.yaml file with the following contents.

scrape_configs:
  - job_name: 'prometheus-spring'
    scrape_interval: 1m
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['my.local.machine:8080']

Since we are running Prometheus on OSX through Docker, we need a workaround for the container to be able to connect to the application on the host:

sudo ifconfig lo0 alias 172.16.222.111

Then we can run Prometheus directly with Docker:

docker run -v /path/to/prometheus.yaml:/etc/prometheus/prometheus.yml -p 9090:9090 --add-host="my.local.machine:172.16.222.111" prom/prometheus

By doing the above, we are able to interact with our local application from inside the Docker container.

So if we navigate to http://localhost:9090/graph we shall be greeted with the Prometheus screen.
From inside the Prometheus container we are also able to communicate with our application, which runs locally.

So let’s give it some time to collect data, and then go to the Prometheus status page: http://localhost:9090/status.

We shall be greeted by the JVM information of our application.

On the next blog we shall focus on securing our prometheus endpoints.

Spring Boot and Micrometer with Prometheus Part 4: The base project

In previous posts we had a look at Spring Micrometer and InfluxDB. So you may ask: why Prometheus?
The reason is that Prometheus operates on a pull model, versus the push model of InfluxDB.

This means that if you use Micrometer with InfluxDB you will have some overhead pushing the results to the database, and it is one extra pain point to keep the InfluxDB database always available to handle all the requests.

So what if, instead of pushing the data, we used another tool to pull the data from the applications?
This is one of the things you get with Prometheus: Prometheus asks your application for the data, so your application does not have to push it anywhere.

So what we are going to do is use exactly the same project we used in the first tutorial.

The only changes needed are in application.yaml and pom.xml.

We shall start with pom.xml and add the Micrometer registry for Prometheus.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.2.4.RELEASE</version>
	</parent>

	<groupId>com.gkatzioura</groupId>
	<artifactId>spring-prometheus-micrometer</artifactId>
	<version>1.0-SNAPSHOT</version>

	<properties>
		<micrometer.version>1.3.2</micrometer.version>
	</properties>

	<build>
		<defaultGoal>spring-boot:run</defaultGoal>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-compiler-plugin</artifactId>
				<configuration>
					<source>8</source>
					<target>8</target>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-webflux</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-actuator</artifactId>
		</dependency>
		<dependency>
			<groupId>io.micrometer</groupId>
			<artifactId>micrometer-core</artifactId>
			<version>${micrometer.version}</version>
		</dependency>
		<dependency>
			<groupId>io.micrometer</groupId>
			<artifactId>micrometer-registry-prometheus</artifactId>
			<version>${micrometer.version}</version>
		</dependency>
		<dependency>
			<groupId>org.projectlombok</groupId>
			<artifactId>lombok</artifactId>
			<version>1.18.12</version>
			<scope>provided</scope>
		</dependency>
	</dependencies>
</project>

Then we shall add an application.yaml which enables the prometheus endpoint.

management:
  endpoints:
    web:
      exposure:
        include: prometheus

So now we are ready to run the application.

> mvn spring-boot:run

If we access the actuator, we are presented with the prometheus endpoint.

> curl http://localhost:8080/actuator
{
  "_links": {
    "self": {
      "href": "http://localhost:8080/actuator",
      "templated": false
    },
    "prometheus": {
      "href": "http://localhost:8080/actuator/prometheus",
      "templated": false
    }
  }
}

The http://localhost:8080/actuator/prometheus endpoint is the one our Prometheus server will use to pull data.
So our Prometheus server needs to be configured to access the data exposed by that endpoint.
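
Everything exposed there comes from the meters registered with Micrometer, JVM metrics included. As a quick illustration of where custom metrics would come from, here is a minimal sketch of a counter wired into a controller; the PingController class and the ping.requests meter name are made up for this example.

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PingController {

	private final Counter pingCounter;

	PingController(MeterRegistry meterRegistry) {
		// Any meter registered with the registry shows up on /actuator/prometheus automatically.
		this.pingCounter = meterRegistry.counter("ping.requests");
	}

	@GetMapping("/ping")
	public String ping() {
		pingCounter.increment();
		return "pong";
	}
}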

In the next blog we shall deploy Prometheus and view some metrics.

Apache Ignite and Spring on your Kubernetes Cluster Part 3: Testing the application

In the previous blog we created the Kubernetes deployment files for our Ignite application. In this blog we shall deploy the Ignite application to Kubernetes. I will use minikube for this.

Let’s build first:

mvn clean install

I shall create a simple Docker image, thus a Dockerfile is needed.
Let’s add a Dockerfile to the root of our project.

FROM adoptopenjdk/openjdk11

COPY target/job-api-ignite-0.0.1-SNAPSHOT.jar app.jar

ENTRYPOINT ["java","-jar","app.jar"]

Now we want to deploy this to our local Kubernetes. Follow this guide on how to use local images on Kubernetes.

Then let’s build our image:

docker build -f Dockerfile -t job-api:1.0 .

Time to apply our Kubernetes yaml files.

kubectl apply -f job-cache-rbac.yaml
kubectl apply -f job-api-deployment.yaml
kubectl apply -f job-api-service.yaml

Give it some time and check your pods:

> kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
job-api-deployment-86f54c9d75-dpnsc   1/1     Running   0          11m
job-api-deployment-86f54c9d75-xj267   1/1     Running   0          11m

Let’s issue a request through the first pod. This request will reach GitHub and then cache the results in memory.

kubectl exec -it job-api-deployment-86f54c9d75-dpnsc -- curl localhost:8080/jobs/github/1

Then we shall use the other endpoint in order to fetch the data straight from Ignite.

kubectl exec -it job-api-deployment-86f54c9d75-xj267 -- curl localhost:8080/jobs/github/ignite/1

So we are successful, which means that our Ignite cluster is running in our Kubernetes workloads. The data is cached and shared between the nodes.

You can find the code on GitHub.

Apache Ignite and Spring on your Kubernetes Cluster Part 2: Kubernetes deployment

Previously we successfully created our first Spring Boot application powered by Apache Ignite.

In this blog we shall focus on what needs to be done on the Kubernetes side in order to spin up our application.

As described in a previous blog, we need to have our Kubernetes RBAC policies in place.

We need a role, a service account, and a binding between them.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: job-cache
rules:
  - apiGroups:
    - ""
    resources:
    - pods
    - endpoints
    verbs:
    - get
    - list
    - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: job-cache
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: job-cache
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: job-cache
subjects:
  - kind: ServiceAccount
    name: job-cache
    namespace: "default"

Our service account will be job-cache. This means that we should use the job-cache service account for our Ignite-based workloads.

The next step is to create the deployment. The configuration is not very different from the StatefulSet explained in a previous post.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: job-api-deployment
  labels:
    app: job-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: job-api
  template:
    metadata:
      labels:
        app: job-api
    spec:
      containers:
        - name: job-api
          image: job-api:1.0
          env:
            - name: IGNITE_QUIET
              value: "false"
            - name: IGNITE_CACHE_CLIENT
              value: "false"
          ports:
            - containerPort: 11211
              protocol: TCP
            - containerPort: 47100
              protocol: TCP
            - containerPort: 47500
              protocol: TCP
            - containerPort: 49112
              protocol: TCP
            - containerPort: 10800
              protocol: TCP
            - containerPort: 8080
              protocol: TCP
            - containerPort: 10900
              protocol: TCP
      serviceAccount: job-cache
      serviceAccountName: job-cache

This is simpler, since the Ignite configuration has been done through Java code.
The image that you see is supposed to be the dockerised Java application we worked on before.
The next big step is to define the services. I will not use one service for everything; instead I will create a service for the cache and a separate service for our API.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: job-cache
  name: job-cache
spec:
  ports:
    - name: jdbc
      port: 11211
      protocol: TCP
      targetPort: 11211
    - name: spi-communication
      port: 47100
      protocol: TCP
      targetPort: 47100
    - name: spi-discovery
      port: 47500
      protocol: TCP
      targetPort: 47500
    - name: jmx
      port: 49112
      protocol: TCP
      targetPort: 49112
    - name: sql
      port: 10800
      protocol: TCP
      targetPort: 10800
    - name: rest
      port: 8080
      protocol: TCP
      targetPort: 8080
    - name: thin-clients
      port: 10900
      protocol: TCP
      targetPort: 10900
  selector:
    app: job-api
  type: ClusterIP

Without getting into Kubernetes details, the Ignite nodes shall synchronize using the job-cache internal DNS name. So we rely on Kubernetes’ internal DNS capabilities to communicate with the Ignite cluster.
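
The service name matters here: it is exactly what the application’s Kubernetes IP finder (configured in the previous post) looks up, so the two must match.

import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

// From the application configuration in the previous post: the IP finder
// resolves the endpoints of the job-cache service in the default namespace.
TcpDiscoveryKubernetesIpFinder tcpDiscoveryKubernetesIpFinder = new TcpDiscoveryKubernetesIpFinder();
tcpDiscoveryKubernetesIpFinder.setNamespace("default");
tcpDiscoveryKubernetesIpFinder.setServiceName("job-cache");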

The next step is to create the service for the actual job api application.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: job-api
  name: job-api
spec:
  ports:
    - name: rest-api
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: job-api
  sessionAffinity: None
  type: ClusterIP

In the following blog we shall apply our configuration to Kubernetes and test our codebase.

Apache Ignite and Spring on your Kubernetes Cluster Part 1: Spring Boot application

In a previous series of blogs we spun up an Ignite cluster on a Kubernetes cluster.
In this tutorial we shall use that previously created Ignite cluster with a Spring Boot application.


Let’s create our project using Spring Boot. The Spring Boot application will connect to the Ignite cluster.

Let’s add our dependencies.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.2.5.RELEASE</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.gkatzioura</groupId>
	<artifactId>job-api-ignite</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>job-api-ignite</name>
	<description>Demo project for Spring Boot</description>

	<properties>
		<java.version>1.8</java.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-cache</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>
		<dependency>
			<groupId>org.apache.ignite</groupId>
			<artifactId>ignite-kubernetes</artifactId>
			<version>2.7.6</version>
		</dependency>
		<dependency>
			<groupId>org.apache.ignite</groupId>
			<artifactId>ignite-spring</artifactId>
			<version>2.7.6</version>
			<exclusions>
				<exclusion>
					<groupId>org.apache.ignite</groupId>
					<artifactId>ignite-indexing</artifactId>
				</exclusion>
			</exclusions>
		</dependency>
		<dependency>
			<groupId>org.projectlombok</groupId>
			<artifactId>lombok</artifactId>
			<version>1.18.12</version>
			<scope>provided</scope>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
			<exclusions>
				<exclusion>
					<groupId>org.junit.vintage</groupId>
					<artifactId>junit-vintage-engine</artifactId>
				</exclusion>
			</exclusions>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

</project>

As in previous tutorials, we shall use GitHub’s Jobs API.

The first step is to add the Job model that the API responses will be deserialized into.

package com.gkatzioura.jobapi.model;

import java.io.Serializable;

import lombok.Data;

@Data
public class Job implements Serializable {

	private String id;
	private String type;
	private String url;
	private String createdAt;
	private String company;
	private String companyUrl;
	private String location;
	private String title;
	private String description;

}

Then we need a repository for the jobs. Beware: the class needs to be serializable, since Ignite caches data off-heap.

package com.gkatzioura.jobapi.repository;

import java.util.ArrayList;
import java.util.List;

import com.gkatzioura.jobapi.model.Job;
import lombok.Data;
import org.apache.ignite.Ignite;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Repository;
import org.springframework.web.client.RestTemplate;

@Repository
public class GitHubJobRepository {

	private static final String JOB_API_CONSTANST = "https://jobs.github.com/positions.json?page={page}";
	public static final String GITHUBJOB_CACHE = "githubjob";

	private final RestTemplate restTemplate;
	private final Ignite ignite;

	GitHubJobRepository(Ignite ignite) {
		this.restTemplate = new RestTemplate();
		this.ignite = ignite;
	}

	@Cacheable(value = GITHUBJOB_CACHE)
	public List<Job> getJob(int page) {
		return restTemplate.getForObject(JOB_API_CONSTANST,JobList.class,page);
	}

	public List<Job> fetchFromIgnite(int page) {
		for(String cache: ignite.cacheNames()) {
			if(cache.equals(GITHUBJOB_CACHE)) {
				// The @Cacheable key of getJob(int) is the page number, so look the entry up by page.
				return (List<Job>) ignite.getOrCreateCache(cache).get(page);
			}
		}

		return new ArrayList<>();
	}

	@Data
	private static class JobList  extends ArrayList<Job> {
	}
}

The main reason the JobList class exists is convenience when unmarshalling.
As you can see, the repository has the @Cacheable annotation. This means that our requests will be cached. The fetchFromIgnite method is a test method for the sake of this example; we shall use it to access the data cached by Ignite directly.

We shall also add the controller.

package com.gkatzioura.jobapi.controller;

import java.util.List;

import com.gkatzioura.jobapi.model.Job;
import com.gkatzioura.jobapi.repository.GitHubJobRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/jobs")
public class JobsController {

	private final GitHubJobRepository gitHubJobRepository;

	JobsController(GitHubJobRepository gitHubJobRepository) {
		this.gitHubJobRepository = gitHubJobRepository;
	}

	@GetMapping("/github/{page}")
	public List<Job> gitHub(@PathVariable("page") int page) {
		return this.gitHubJobRepository.getJob(page);
	}

	@GetMapping("/github/ignite/{page}")
	public List<Job> gitHubIgnite(@PathVariable("page") int page) {
		return this.gitHubJobRepository.fetchFromIgnite(page);
	}

}

The controller has two methods: one fetches the data as usual and caches it behind the scenes, and the other one we shall use for testing.

It’s time for us to configure the Ignite client that uses the nodes on our Kubernetes cluster.

package com.gkatzioura.jobapi.config;


import lombok.extern.slf4j.Slf4j;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.spring.SpringCacheManager;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
@Slf4j
public class SpringCacheConfiguration {

	@Bean
	public Ignite igniteInstance() {
		log.info("Creating ignite instance");
		TcpDiscoveryKubernetesIpFinder tcpDiscoveryKubernetesIpFinder = new TcpDiscoveryKubernetesIpFinder();
		tcpDiscoveryKubernetesIpFinder.setNamespace("default");
		tcpDiscoveryKubernetesIpFinder.setServiceName("job-cache");

		TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
		tcpDiscoverySpi.setIpFinder(tcpDiscoveryKubernetesIpFinder);

		IgniteConfiguration igniteConfiguration = new IgniteConfiguration();

		igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
		igniteConfiguration.setClientMode(false);

		return Ignition.start(igniteConfiguration);
	}

	@Bean
	public SpringCacheManager cacheManager(Ignite ignite) {
		SpringCacheManager springCacheManager =new SpringCacheManager();
		springCacheManager.setIgniteInstanceName(ignite.name());
		return springCacheManager;
	}

}

We created our cache configuration. It uses the Kubernetes TCP discovery mode.

The next step is to add our Main class.

package com.gkatzioura.jobapi;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

@SpringBootApplication
@EnableCaching
public class IgniteKubeClusterApplication {

	public static void main(String[] args) {
		SpringApplication.run(IgniteKubeClusterApplication.class, args);
	}

}

The next blog will be focused on shipping the solution to Kubernetes.

Apache Ignite on your Kubernetes Cluster Part 4: Deployment explained

Previously we saw the Ignite configuration that comes with the Kubernetes installation.
The default configuration does not have persistence enabled, so we won’t focus on the storage classes provided by the Helm chart.

The default installation uses a StatefulSet. You can find more information about StatefulSets in the Kubernetes documentation.

> kubectl get statefulset ignite-cache -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: 2020-04-09T12:29:04Z
  generation: 1
  labels:
    app.kubernetes.io/instance: ignite-cache
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ignite
    helm.sh/chart: ignite-1.0.1
  name: ignite-cache
  namespace: default
  resourceVersion: "281390"
  selfLink: /apis/apps/v1/namespaces/default/statefulsets/ignite-cache
  uid: fcaa7bef-84cd-4e7c-aa33-a4312a1d47a9
spec:
  podManagementPolicy: OrderedReady
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ignite-cache
  serviceName: ignite-cache
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ignite-cache
    spec:
      containers:
      - env:
        - name: IGNITE_QUIET
          value: "false"
        - name: JVM_OPTS
          value: -Djava.net.preferIPv4Stack=true
        - name: OPTION_LIBS
          value: ignite-kubernetes,ignite-rest-http
        image: apacheignite/ignite:2.7.6
        imagePullPolicy: IfNotPresent
        name: ignite
        ports:
        - containerPort: 11211
          protocol: TCP
        - containerPort: 47100
          protocol: TCP
        - containerPort: 47500
          protocol: TCP
        - containerPort: 49112
          protocol: TCP
        - containerPort: 10800
          protocol: TCP
        - containerPort: 8080
          protocol: TCP
        - containerPort: 10900
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/ignite/apache-ignite/config
          name: config-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: ignite-cache
      serviceAccountName: ignite-cache
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: ignite-config.xml
            path: default-config.xml
          name: ignite-cache-configmap
        name: config-volume
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
status:
  replicas: 0

As you can see, the Ignite configuration has been mounted through the ConfigMap. You can also see that this pod uses a specific service account.
Through the environment variables, certain libraries are enabled which provide more features on the Ignite cluster. The ports needed for communication over the various protocols are also specified.

The last step is the service. All the Ignite nodes shall be load balanced behind the Kubernetes service.

> kubectl get svc ignite-cache -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2020-04-09T12:29:04Z
  labels:
    app: ignite-cache
  name: ignite-cache
  namespace: default
  resourceVersion: "281389"
  selfLink: /api/v1/namespaces/default/services/ignite-cache
  uid: 5be68e28-a57c-4cb5-b610-b708bff80da7
spec:
  clusterIP: None
  ports:
  - name: jdbc
    port: 11211
    protocol: TCP
    targetPort: 11211
  - name: spi-communication
    port: 47100
    protocol: TCP
    targetPort: 47100
  - name: spi-discovery
    port: 47500
    protocol: TCP
    targetPort: 47500
  - name: jmx
    port: 49112
    protocol: TCP
    targetPort: 49112
  - name: sql
    port: 10800
    protocol: TCP
    targetPort: 10800
  - name: rest
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: thin-clients
    port: 10900
    protocol: TCP
    targetPort: 10900
  selector:
    app: ignite-cache
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Whether you add a new server node or an Ignite client node, your Ignite cluster shall be reached through this Kubernetes service. Apart from that, based on the Kubernetes service type you can make this cache public or internal.