Java-based Akka application Part 1: Your base project

Akka is a free, open-source toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications on the JVM. Along with Akka you have akka-streams, a module that makes the ingestion and processing of streams easy, and Alpakka, a Reactive Enterprise Integration library for Java and Scala, based on Reactive Streams and Akka.

In this blog post I shall focus on creating an Akka project using Java, as well as packaging it.


You already know that Akka is built with Scala, so why Java and not Scala? There are various reasons to go with Java.

  • Akka is a toolkit running on the JVM, so you don’t have to be proficient in Scala to use it.
  • You might have a team already proficient in Java but not in Scala.
  • It’s much easier to evaluate Akka if you already have a Java codebase and the accompanying build tools (Maven, etc.).

We shall go for the simple route and download the application from the Lightbend quickstart. The project we receive is backed by typed actors.

After some adaptation the Maven file will look like this; take note that we shall use Lombok.

<project>
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.gkatzioura</groupId>
    <artifactId>akka-java-app</artifactId>
    <version>1.0</version>

    <properties>
      <akka.version>2.6.10</akka.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.typesafe.akka</groupId>
            <artifactId>akka-actor-typed_2.13</artifactId>
            <version>${akka.version}</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.2.3</version>
        </dependency>

        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.16</version>
            <scope>provided</scope>
        </dependency>

    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.0</version>
                <configuration>
                    <source>11</source>
                    <target>11</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.6.0</version>
                <configuration>
                    <executable>java</executable>
                    <arguments>
                        <argument>-classpath</argument>
                        <classpath />
                        <argument>com.gkatzioura.Application</argument>
                    </arguments>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
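With the exec-maven-plugin configured as above, once the Application class shown later is in place, the application can be run with:

mvn compile exec:exec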

Now, there is one actor that is responsible for managing your other actors. This is the top-level actor, called the guardian actor. It is created along with the ActorSystem, and when it stops the ActorSystem stops too.

In order to create an actor you define the messages the actor will receive and you specify how it will behave when handling those messages.

package com.gkatzioura;

import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.AbstractBehavior;
import akka.actor.typed.javadsl.ActorContext;
import akka.actor.typed.javadsl.Behaviors;
import akka.actor.typed.javadsl.Receive;
import lombok.AllArgsConstructor;
import lombok.Getter;

public class AppGuardian extends AbstractBehavior<AppGuardian.GuardianMessage> {

	public interface GuardianMessage {}

	static Behavior<GuardianMessage> create() {
		return Behaviors.setup(AppGuardian::new);
	}

	@Getter
	@AllArgsConstructor
	public static class MessageToGuardian implements GuardianMessage {
		private String message;
	}

	private AppGuardian(ActorContext<GuardianMessage> context) {
		super(context);
	}

	@Override
	public Receive<GuardianMessage> createReceive() {
		return newReceiveBuilder().onMessage(MessageToGuardian.class, this::receiveMessage).build();
	}

	private Behavior<GuardianMessage> receiveMessage(MessageToGuardian messageToGuardian) {
		getContext().getLog().info("Message received: {}",messageToGuardian.getMessage());
		return this;
	}

}

Akka is message driven, so the guardian actor should be able to consume messages sent to it. Therefore, messages that implement the GuardianMessage interface can be processed.

When the actor is created, the createReceive method is used to register handlers for the messages the actor should handle.

Be aware that when it comes to logging, instead of spinning up a logger in the class, you should use getContext().getLog()

Behind the scenes the log messages will have the path of the actor automatically added as the akkaSource Mapped Diagnostic Context (MDC) value.
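For illustration, a logback.xml along these lines would include that value in the log output (the appender and pattern are a typical assumption, not taken from the quickstart project):

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %-5level %X{akkaSource} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>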

Last step would be to add the Main class.

package com.gkatzioura;

import java.io.IOException;

import akka.actor.typed.ActorSystem;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class Application {

	public static final String APP_NAME = "akka-java-app";

	public static void main(String[] args) {
		final ActorSystem<AppGuardian.GuardianMessage> appGuardian = ActorSystem.create(AppGuardian.create(), APP_NAME);
		appGuardian.tell(new AppGuardian.MessageToGuardian("First Akka Java App"));

		try {
			System.out.println(">>> Press ENTER to exit <<<");
			System.in.read();
		}
		catch (IOException ignored) {
		}
		finally {
			appGuardian.terminate();
		}
	}

}

The expected outcome is for our guardian actor to print the message submitted. By pressing Enter the Akka application terminates through the guardian actor.
In the next blog post we will go one step further and add a unit test that validates the message received.
As always, you can find the source code on GitHub.

Locking for multiple nodes the easy way: GCS

It happens to all of us. We develop stateless applications that can scale horizontally without any effort.
However sometimes cases arise where you need to achieve some type of coordination.

You can go really advanced on this one. For example, you can use a framework like Akka and its cluster capabilities. Or you can go really simple and roll a mechanism on your own, as long as it gives you the results needed. On another note, you can just have different node groups based on the work you need them to do. The options and the solutions change based on the problem.

If your problem can go with a simple option and you happen to use Google Cloud Storage, one way to do so is to use its locking capabilities.
Imagine, for example, a scenario of four nodes. They scale dynamically, but each time a new node registers you want to change its actions by having it acquire a unique configuration that does not collide with a configuration another node might have received.

The strategy can be to use a file on Google Cloud Storage for locking and a file that acts as a centralised configuration registry.

The lock file is nothing more than a file on Cloud Storage which shall be created and deleted. What gives us lock abilities is the option on GCS to create a file only if it does not exist.
Thus a process on one node will try to create the `lock` file; this action is equivalent to obtaining the lock.
Once the process is done it will delete the file; this action is equivalent to releasing the lock.
Other processes that try to create the file (acquire the lock) in the meantime will fail, because the file already exists.
Meanwhile the process that successfully created the file (acquired the lock) will change the centralised configuration registry and, once done, will delete the file (release the lock).

So let’s start with the lock object.

package com.gkatzioura.gcs.lock;

import java.util.Optional;

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageException;

public class GCSLock {

	public static final String LOCK_STRING = "_lock";
	private final Storage storage;
	private final String bucket;
	private final String keyName;

	private Optional<Blob> acquired = Optional.empty();

	GCSLock(Storage storage, String bucket, String keyName) {
		this.storage = storage;
		this.bucket = bucket;
		this.keyName = keyName;
	}

	public boolean acquire() {
		try {
			var blobInfo = BlobInfo.newBuilder(bucket, keyName).build();
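			// the doesNotExist option makes the create conditional: it succeeds only if no object with this key exists yet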
			var blob = storage.create(blobInfo, LOCK_STRING.getBytes(), Storage.BlobTargetOption.doesNotExist());
			acquired = Optional.of(blob);
			return true;
		} catch (StorageException storageException) {
			return false;
		}
	}

	public void release() {
		if(!acquired.isPresent()) {
			throw new IllegalStateException("Lock was never acquired");
		}
		storage.delete(acquired.get().getBlobId());
	}

}

As you can see, the write specifies to create the object only if it does not exist. Behind the scenes this operation uses the x-goog-if-generation-match header, which is used for concurrency control.
Thus only one node at a time will be able to acquire the lock and change the configuration file.
Afterwards it can delete the lock. If an exception is raised, the create operation failed, most likely because the lock has already been acquired by another process.

To make the example more complete, let's make the configuration file. The configuration file will be a simple JSON file acting as a key-value map.
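For instance, after a couple of updates the contents of the file would look along these lines (the keys and values here are just an illustration):

{
  "property-0": "value-0",
  "property-1": "value-1"
}

The utility below reads and updates such a file.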

package com.gkatzioura.gcs.lock;

import java.util.HashMap;
import java.util.Map;

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import org.json.JSONObject;

public class GCSConfiguration {

	private final Storage storage;
	private final String bucket;
	private final String keyName;

	GCSConfiguration(Storage storage, String bucket, String keyName) {
		this.storage = storage;
		this.bucket = bucket;
		this.keyName = keyName;
	}

	public void addProperty(String key, String value) {
		var blobId = BlobId.of(bucket, keyName);
		var blob = storage.get(blobId);

		final JSONObject configJson;

		if(blob==null) {
			configJson = new JSONObject();
		} else {
			configJson = new JSONObject(new String(blob.getContent()));
		}

		configJson.put(key, value);

		var blobInfo = BlobInfo.newBuilder(blobId).build();
		storage.create(blobInfo, configJson.toString().getBytes());
	}

	public Map<String,String> properties() {

		var blobId = BlobId.of(bucket, keyName);
		var blob = storage.get(blobId);

		var map = new HashMap<String,String>();

		if(blob!=null) {
			var jsonObject = new JSONObject(new String(blob.getContent()));
			for(var key: jsonObject.keySet()) {
				map.put(key, jsonObject.getString(key));
			}
		}

		return map;
	}

}

It is a simple configuration utility backed by GCS. It could be changed to operate the lock inside the addProperty operation; that is up to the user and the code. For the purpose of this blog post we shall just acquire the lock, change the configuration and release the lock.
Our main class will look like this.

package com.gkatzioura.gcs.lock;

import com.google.cloud.storage.StorageOptions;

public class Application {

	public static void main(String[] args) {
		var storage = StorageOptions.getDefaultInstance().getService();

		final String bucketName = "bucketName";
		final String lockFileName = "lockFileName";
		final String configFileName = "configFileName";

		var lock = new GCSLock(storage, bucketName, lockFileName);
		var gcsConfig = new GCSConfiguration(storage, bucketName, configFileName);

		var lockAcquired = lock.acquire();
		if(lockAcquired) {
			gcsConfig.addProperty("testProperty", "testValue");
			lock.release();
		}

		var config = gcsConfig.properties();

		for(var key: config.keySet()) {
			System.out.println("Key "+key+" value "+config.get(key));
		}

	}

}

Now let's go for some multithreading. Ten threads will try to put values; some of their lock acquisitions are expected to fail and be retried.

package com.gkatzioura.gcs.lock;

import java.util.ArrayList;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class ApplicationConcurrent {

	private static final String bucketName = "bucketName";
	private static final String lockFileName = "lockFileName";
	private static final String configFileName = "configFileName";

	public static void main(String[] args) throws ExecutionException, InterruptedException {
		var storage = StorageOptions.getDefaultInstance().getService();

		final int threads = 10;
		var service = Executors.newFixedThreadPool(threads);
		var futures = new ArrayList<Future>(threads);

		for (var i = 0; i < threads; i++) {
			futures.add(service.submit(update(storage, "property-"+i, "value-"+i)));
		}

		for (var f : futures) {
			f.get();
		}

		service.shutdown();

		var gcsConfig = new GCSConfiguration(storage, bucketName, configFileName);
		var properties = gcsConfig.properties();

		for (var i = 0; i < threads; i++) {
			System.out.println(properties.get("property-" + i));
		}
	}

	private static Runnable update(final Storage storage, String property, String value) {
		return () -> {
			var lock = new GCSLock(storage, bucketName, lockFileName);
			var gcsConfig = new GCSConfiguration(storage, bucketName, configFileName);

			boolean lockAcquired = false;

			while (!lockAcquired) {
				lockAcquired = lock.acquire();
				if (!lockAcquired) {
					System.out.println("Could not acquire lock");
				}
			}

			gcsConfig.addProperty(property, value);
			lock.release();
		};
	}
}

Obviously ten threads are enough to display the capabilities. During thread initialization and execution some threads will try to acquire the lock simultaneously; only one will succeed, while the others will fail and retry until the lock becomes available.

In the end, what is expected is for all of them to have their values added to the configuration.
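Since each thread retries until it acquires the lock, the final loop should print all ten values:

value-0
value-1
...
value-9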
That’s it. If your problem has a simple nature this approach might do the trick. Obviously you can use the HTTP API instead of the SDK. You can find the code on GitHub.

Testing with Hoverfly and Java Part 3: State

Previously we simulated a delay scenario using Hoverfly. Now it's time to dive deeper and go for state-based testing. With a stateful simulation we can change the way the simulated endpoints behave based on how the state changes.


Hoverfly does have a state capability. State in a Hoverfly simulation is like a map: initially it is empty, but you can define how it gets populated per request.

Our strategy will be to have a request that initializes the state and then specify other requests that change that state.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import io.specto.hoverfly.junit.core.Hoverfly;
import io.specto.hoverfly.junit.core.HoverflyConfig;
import io.specto.hoverfly.junit.core.SimulationSource;

import static io.specto.hoverfly.junit.core.HoverflyMode.SIMULATE;
import static io.specto.hoverfly.junit.dsl.HoverflyDsl.service;
import static io.specto.hoverfly.junit.dsl.ResponseCreators.serverError;
import static io.specto.hoverfly.junit.dsl.ResponseCreators.success;

public class SimulationStateTests {

	private Hoverfly hoverfly;

	@BeforeEach
	void setUp() {
		var simulation = SimulationSource.dsl(service("http://localhost:8085")
				.get("/initialize")
				.willReturn(success("{\"initialized\":true}", "application/json")
						.andSetState("shouldSucceed", "true")
				)
				.get("/state")
				.withState("shouldSucceed", "false")
				.willReturn(serverError().andSetState("shouldSucceed", "true"))
				.get("/state")
				.withState("shouldSucceed", "true")
				.willReturn(success("{\"username\":\"test-user\"}", "application/json")
						.andSetState("shouldSucceed", "false"))

		);

		var localConfig = HoverflyConfig.localConfigs().disableTlsVerification().asWebServer().proxyPort(8085);
		hoverfly = new Hoverfly(localConfig, SIMULATE);
		hoverfly.start();
		hoverfly.simulate(simulation);
	}

	@AfterEach
	void tearDown() {
		hoverfly.close();
	}

}

Unfortunately, state values can only be specified in a key-value fashion; we cannot pass a function that computes a key's value.
However, with the right workaround many scenarios can be simulated.
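For instance, a "use once" endpoint can be emulated purely through state transitions. The following is a hypothetical sketch (the endpoint and state names are made up) using the same DSL calls as our setUp simulation:

		var voucherSimulation = SimulationSource.dsl(service("http://localhost:8085")
				// re-arms the voucher endpoint
				.get("/reset")
				.willReturn(success("{\"reset\":true}", "application/json")
						.andSetState("voucherUsed", "false"))
				// succeeds once, then flips the state
				.get("/voucher")
				.withState("voucherUsed", "false")
				.willReturn(success("{\"voucher\":\"10-percent-off\"}", "application/json")
						.andSetState("voucherUsed", "true"))
				// every further call fails until /reset is called again
				.get("/voucher")
				.withState("voucherUsed", "true")
				.willReturn(serverError()));

After a call to /reset, the first call to /voucher succeeds and every subsequent one fails until /reset is invoked again.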

In the simulation of our setUp method we first initialize the state and then issue requests that behave differently based on the state, while also changing it.

So we expect the /state endpoint to alternate between succeeding and failing, which is depicted in the following test.

	@Test
	void testWithState() {
		var client = HttpClient.newHttpClient();
		var initializationRequest = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/initialize"))
				.build();
		var initializationResponse = client.sendAsync(initializationRequest, HttpResponse.BodyHandlers.ofString())
				.thenApply(HttpResponse::body)
				.join();
		Assertions.assertEquals("{\"initialized\":true}", initializationResponse);

		var statefulRequest = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/state"))
				.build();

		for (int i = 0; i < 100; i++) {
			var response = client.sendAsync(statefulRequest, HttpResponse.BodyHandlers.ofString())
					.join();

			int statusCode = i % 2 == 0 ? 200 : 500;

			Assertions.assertEquals(statusCode, response.statusCode());
		}
	}

That's all about stateful simulation. In the next part we shall proceed to Hoverfly matchers.

Testing with Hoverfly and Java Part 2: Delays

In the previous post we implemented JSON- and Java-based Hoverfly scenarios.
Now it's time to dive deeper and use other Hoverfly features.

A big part of testing has to do with negative scenarios, and one of them is delays. Although we usually mock a server and successfully reproduce erroneous responses, one thing that is key to simulate in today's microservices-driven world is delay.

So let's make a server with a 30-second delay.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import io.specto.hoverfly.junit.core.Hoverfly;
import io.specto.hoverfly.junit.core.HoverflyConfig;
import io.specto.hoverfly.junit.core.SimulationSource;

import static io.specto.hoverfly.junit.core.HoverflyMode.SIMULATE;
import static io.specto.hoverfly.junit.dsl.HoverflyDsl.service;
import static io.specto.hoverfly.junit.dsl.ResponseCreators.success;
import static org.junit.jupiter.api.Assertions.assertThrows;

public class SimulateDelayTests {

	private Hoverfly hoverfly;

	@BeforeEach
	void setUp() {
		var simulation = SimulationSource.dsl(service("http://localhost:8085")
				.get("/delay")
				.willReturn(success("{\"username\":\"test-user\"}", "application/json").withDelay(30, TimeUnit.SECONDS)));

		var localConfig = HoverflyConfig.localConfigs().disableTlsVerification().asWebServer().proxyPort(8085);
		hoverfly = new Hoverfly(localConfig, SIMULATE);
		hoverfly.start();
		hoverfly.simulate(simulation);
	}

	@AfterEach
	void tearDown() {
		hoverfly.close();
	}

}

Let's add the delay test.

@Test
void testWithDelay() {
   var client = HttpClient.newHttpClient();
   var request = HttpRequest.newBuilder()
         .uri(URI.create("http://localhost:8085/delay"))
         .build();
   var start = Instant.now();
   var res = client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
         .thenApply(HttpResponse::body)
         .join();
   var end = Instant.now();
   Assertions.assertEquals("{\"username\":\"test-user\"}", res);

   var seconds = Duration.between(start, end).getSeconds();
   Assertions.assertTrue(seconds >= 30);
}

Delay simulation is there, up and running, so let’s try to simulate timeouts.

	@Test
	void testTimeout() {
		var client = HttpClient.newHttpClient();
		var request = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/delay"))
				.timeout(Duration.ofSeconds(10))
				.build();
		assertThrows(HttpTimeoutException.class, () -> {
					try {
						client.sendAsync(request, HttpResponse.BodyHandlers.ofString()).join();
					} catch (CompletionException ex) {
						throw ex.getCause();
					}
				}

		);
	}

That's it, we got delays and timeouts!
Other test scenarios involve state, which is covered in the next tutorial.

Testing with Hoverfly and Java Part 1: Get started with Simulation Mode

These days a major problem exists when it comes to testing code that interacts with various cloud services for which test tools are not provided.
For example, although you might have the tools for local Pub/Sub testing, including Docker images, you might not have anything that can mock BigQuery.

This causes an issue when it comes to CI jobs: testing is part of the requirements, yet there might be blockers on testing against the actual service. The fact remains that you need to cover all the pessimistic scenarios (for example, timeouts).

And this is where Hoverfly can help.

Hoverfly is a lightweight, open source API simulation tool. Using Hoverfly, you can create realistic simulations of the APIs your application depends on.

Our first examples will have to do with simulating just a web server. The first step is to add the Hoverfly dependency.

    <dependencies>
        <dependency>
            <groupId>io.specto</groupId>
            <artifactId>hoverfly-java</artifactId>
            <version>0.12.2</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

Instead of using the Hoverfly Docker image we shall use the Java library for some extra flexibility.

We have two options for configuring Hoverfly in simulation mode: one through the Java DSL and the other through JSON.
Let's cover both.

The example below uses the Java DSL. We spin up Hoverfly on port 8085 and load this configuration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import io.specto.hoverfly.junit.core.Hoverfly;
import io.specto.hoverfly.junit.core.HoverflyConfig;
import io.specto.hoverfly.junit.core.SimulationSource;

import static io.specto.hoverfly.junit.core.HoverflyMode.SIMULATE;
import static io.specto.hoverfly.junit.dsl.HoverflyDsl.service;
import static io.specto.hoverfly.junit.dsl.ResponseCreators.success;

class SimulationJavaDSLTests {

	private Hoverfly hoverfly;

	@BeforeEach
	void setUp() {
		var simulation = SimulationSource.dsl(service("http://localhost:8085")
				.get("/user")
				.willReturn(success("{\"username\":\"test-user\"}", "application/json")));

		var localConfig = HoverflyConfig.localConfigs().disableTlsVerification().asWebServer().proxyPort(8085);
		hoverfly = new Hoverfly(localConfig, SIMULATE);
		hoverfly.start();
		hoverfly.simulate(simulation);
	}

	@AfterEach
	void tearDown() {
		hoverfly.close();
	}

	@Test
	void testHttpGet() {
		var client = HttpClient.newHttpClient();
		var request = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/user"))
				.build();
		var res = client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
				.thenApply(HttpResponse::body)
				.join();
		Assertions.assertEquals("{\"username\":\"test-user\"}",res);
	}
}

Now let's do the same with JSON. Instead of writing the JSON by hand, we can make the code do the work for us.

var simulation = SimulationSource.dsl(service("http://localhost:8085")
			.get("/user")
			.willReturn(success("{\"username\":\"test-user\"}", "application/json")));

var simulationStr = simulation.getSimulation();
System.out.println(simulationStr);

We can get the JSON generated by the Java DSL; the result will look like this.

{
  "data": {
    "pairs": [
      {
        "request": {
          "path": [
            {
              "matcher": "exact",
              "value": "/user"
            }
          ],
          "method": [
            {
              "matcher": "exact",
              "value": "GET"
            }
          ],
          "destination": [
            {
              "matcher": "exact",
              "value": "localhost:8085"
            }
          ],
          "scheme": [
            {
              "matcher": "exact",
              "value": "http"
            }
          ],
          "query": {},
          "body": [
            {
              "matcher": "exact",
              "value": ""
            }
          ],
          "headers": {},
          "requiresState": {}
        },
        "response": {
          "status": 200,
          "body": "{\"username\":\"test-user\"}",
          "encodedBody": false,
          "templated": true,
          "headers": {
            "Content-Type": [
              "application/json"
            ]
          }
        }
      }
    ],
    "globalActions": {
      "delays": []
    }
  },
  "meta": {
    "schemaVersion": "v5"
  }
}

Let's place this file in the test resources folder (src/test/resources in a standard Maven layout) under the name simulation.json.

And with some code changes we get exactly the same result.


import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import io.specto.hoverfly.junit.core.Hoverfly;
import io.specto.hoverfly.junit.core.HoverflyConfig;
import io.specto.hoverfly.junit.core.SimulationSource;

import static io.specto.hoverfly.junit.core.HoverflyMode.SIMULATE;
import static io.specto.hoverfly.junit.dsl.HoverflyDsl.service;
import static io.specto.hoverfly.junit.dsl.ResponseCreators.success;

public class SimulationJsonTests {

	private Hoverfly hoverfly;

	@BeforeEach
	void setUp() {
		var simulationUrl = SimulationJsonTests.class.getClassLoader().getResource("simulation.json");
		var simulation = SimulationSource.url(simulationUrl);

		var localConfig = HoverflyConfig.localConfigs().disableTlsVerification().asWebServer().proxyPort(8085);
		hoverfly = new Hoverfly(localConfig, SIMULATE);
		hoverfly.start();
		hoverfly.simulate(simulation);
	}

	@AfterEach
	void tearDown() {
		hoverfly.close();
	}

	@Test
	void testHttpGet() {
		var client = HttpClient.newHttpClient();
		var request = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/user"))
				.build();
		var res = client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
				.thenApply(HttpResponse::body)
				.join();
		Assertions.assertEquals("{\"username\":\"test-user\"}",res);
	}

}

Sometimes there is also the need to combine simulations, regardless of whether they are JSON- or Java-based ones. This is facilitated by loading more than one simulation.

	@Test
	void testMixedConfiguration() {
		var simulationUrl = SimulationJsonTests.class.getClassLoader().getResource("simulation.json");
		var jsonSimulation = SimulationSource.url(simulationUrl);


		var javaSimulation = SimulationSource.dsl(service("http://localhost:8085")
				.get("/admin")
				.willReturn(success("{\"username\":\"test-admin\"}", "application/json")));

		hoverfly.simulate(jsonSimulation, javaSimulation);

		var client = HttpClient.newHttpClient();
		var jsonConfigBasedRequest = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/user"))
				.build();
		var userResponse = client.sendAsync(jsonConfigBasedRequest, HttpResponse.BodyHandlers.ofString())
				.thenApply(HttpResponse::body)
				.join();
		Assertions.assertEquals("{\"username\":\"test-user\"}",userResponse);

		var javaConfigBasedRequest = HttpRequest.newBuilder()
				.uri(URI.create("http://localhost:8085/admin"))
				.build();
		var adminResponse = client.sendAsync(javaConfigBasedRequest, HttpResponse.BodyHandlers.ofString())
				.thenApply(HttpResponse::body)
				.join();
		Assertions.assertEquals("{\"username\":\"test-admin\"}",adminResponse);
	}

That's it, we are all set up to continue exploring Hoverfly and its capabilities.

Dependency management and Maven

Maven is great and mature. There is a solution for almost everything. One of the main cases you might stumble upon in organisation projects is dependency management. Instead of each project declaring its own dependency versions, you want a centralised way for projects to inherit them.


In those cases you declare the managed dependencies in the parent pom. In my example I just want to include the Akka stream dependencies.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>org.example</groupId>
	<artifactId>maven-dependency-management</artifactId>
	<packaging>pom</packaging>
	<version>1.0-SNAPSHOT</version>

	<properties>
		<akka.version>2.5.31</akka.version>
		<akka.http.version>10.1.11</akka.http.version>
		<scala.binary.version>2.12</scala.binary.version>
	</properties>

	<modules>
		<module>child-one</module>
	</modules>


	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>com.typesafe.akka</groupId>
				<artifactId>akka-stream_2.12</artifactId>
				<version>${akka.version}</version>
			</dependency>
			<dependency>
				<groupId>com.typesafe.akka</groupId>
				<artifactId>akka-http_2.12</artifactId>
				<version>${akka.http.version}</version>
			</dependency>
			<dependency>
				<groupId>com.typesafe.akka</groupId>
				<artifactId>akka-http-spray-json_2.12</artifactId>
				<version>${akka.http.version}</version>
			</dependency>
		</dependencies>
	</dependencyManagement>

</project>

What I use here is the dependencyManagement block.

Now the child projects are able to include those libraries without specifying a version. Having the version derived and managed is essential; many unpleasant surprises can come from incompatible versions.

In the child module the dependencies are declared without a version, since the version is inherited from the parent.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<parent>
		<artifactId>maven-dependency-management</artifactId>
		<groupId>org.example</groupId>
		<version>1.0-SNAPSHOT</version>
	</parent>
	<modelVersion>4.0.0</modelVersion>

	<artifactId>child-one</artifactId>

	<dependencies>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-stream_2.12</artifactId>
		</dependency>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-http_2.12</artifactId>
		</dependency>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-http-spray-json_2.12</artifactId>
		</dependency>
	</dependencies>

</project>

On another note, sometimes we want to use another project's dependency management without that project being our parent. Those are the cases where you need to import the dependency management from another pom while you already have a parent project.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

	<modelVersion>4.0.0</modelVersion>

	<groupId>org.example</groupId>
	<artifactId>independent-project</artifactId>
	<version>1.0-SNAPSHOT</version>

	<dependencyManagement>
		<dependencies>
			<dependency>
				<artifactId>maven-dependency-management</artifactId>
				<groupId>org.example</groupId>
				<version>1.0-SNAPSHOT</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>

	<dependencies>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-stream_2.12</artifactId>
		</dependency>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-http_2.12</artifactId>
		</dependency>
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-http-spray-json_2.12</artifactId>
		</dependency>
	</dependencies>
</project>

The key part is the following block.

	<dependencyManagement>
		<dependencies>
			<dependency>
				<artifactId>maven-dependency-management</artifactId>
				<groupId>org.example</groupId>
				<version>1.0-SNAPSHOT</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>

We imported the dependency management from another project; this approach can be applied to inherit managed dependencies from multiple projects.
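To verify which versions a child or importing project actually resolves, the dependency tree can be inspected, for example:

mvn dependency:tree -Dincludes=com.typesafe.akka

Each Akka artifact should appear with the version managed by maven-dependency-management.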

Spring Boot and Micrometer with Prometheus Part 6: Securing metrics

Previously we successfully spun up our Spring Boot application with Prometheus. An endpoint in our Spring application exposes our metric data so that Prometheus is able to retrieve it.
The main question that comes to mind is how to secure this information.

Spring already provides us with its great security framework, so it will be fairly easy to use it for our application. The goal is to use basic authentication for the actuator/prometheus endpoint and to configure Prometheus to access that endpoint using basic authentication.

So the first step is to enable security on our app by adding the security starter.

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>

The Spring Boot application will then get secured on its own by generating a password for the default user.
However, we want to have control over the username and password, so we are going to use some environment variables.

By running the application with credentials for the default user, we have the prometheus endpoint secured with minimal configuration.

SPRING_SECURITY_USER_NAME=test-user SPRING_SECURITY_USER_PASSWORD=test-password mvn spring-boot:run
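Equivalently, the spring.security.user.name and spring.security.user.password properties could be placed in application.yaml; the environment variables simply keep the credentials out of the codebase. We can verify that the endpoint now requires the credentials:

curl -u test-user:test-password http://localhost:8080/actuator/prometheus

Without the -u flag the request should be rejected with a 401.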

So now that we have security set up on our app, it's time to update our Prometheus config.

scrape_configs:
  - job_name: 'prometheus-spring'
    scrape_interval: 1m
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['my.local.machine:8080']
    basic_auth:
      username: "test-user"
      password: "test-password"

So let's run Prometheus again as described previously.

To sum up, after this change Prometheus will gather the metric data of our application in a secure way.

Spring Boot and Micrometer with Prometheus Part 5: Spinning up prometheus

Previously we adapted our Spring Boot application to expose the endpoints for Prometheus.
This blog post will focus on setting up Prometheus and configuring it to scrape the Spring Boot endpoint.
So let's get started by spinning up the Prometheus server using Docker.

Before doing that we need to supply a configuration file so that Prometheus pulls data from our application.
Thus we should supply a prometheus.yaml file with the following contents.

scrape_configs:
  - job_name: 'prometheus-spring'
    scrape_interval: 1m
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['my.local.machine:8080']

Because we are running Prometheus on OSX through Docker, we need a workaround so that the container can connect to the application running on the host. We alias an extra IP address to the loopback interface:

sudo ifconfig lo0 alias 172.16.222.111

Then we can run Prometheus directly with Docker:

docker run -v /path/to/prometheus.yaml:/etc/prometheus/prometheus.yml -p 9090:9090 --add-host="my.local.machine:172.16.222.111" prom/prometheus

By doing the above, we shall be able to reach our local application from inside the Docker container.

So if we navigate to http://localhost:9090/graph we shall be greeted with the Prometheus screen.
From inside the Prometheus container we are also able to communicate with our application, which runs locally.

So let's give it some time to collect data and then go to the Prometheus status page, http://localhost:9090/status.

We shall be greeted by the JVM information of our application.
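As a quick sanity check, querying one of the metrics that Micrometer's JVM binders register, for example the one below, should return data points in the expression browser:

jvm_memory_used_bytes{area="heap"}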

In the next blog post we shall focus on securing our Prometheus endpoint.

Spring Boot and Micrometer with Prometheus Part 4: The base project

In previous posts we had a look at Spring Micrometer and InfluxDB. So you are going to ask me: why Prometheus?
The reason is that Prometheus operates on a pull model, versus the push model of InfluxDB.

This means that if you use Micrometer with InfluxDB you are definitely going to have some overhead from pushing the results to the database, and keeping the InfluxDB database always available to handle all the requests is one extra pain point.

So what if, instead of pushing the data, we used another tool to pull the data from the applications?
This is one of the things you get by using Prometheus: Prometheus asks the application for the data, so the application does not have to push it anywhere.

So what we are going to do is use exactly the same project we used in the first tutorial.

The only changes needed are in the application.yaml as well as the pom.xml.

We shall start with the pom.xml and add the Micrometer registry for Prometheus.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.2.4.RELEASE</version>
	</parent>

	<groupId>com.gkatzioura</groupId>
	<artifactId>spring-prometheus-micrometer</artifactId>
	<version>1.0-SNAPSHOT</version>

	<properties>
		<micrometer.version>1.3.2</micrometer.version>
	</properties>

	<build>
		<defaultGoal>spring-boot:run</defaultGoal>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-compiler-plugin</artifactId>
				<configuration>
					<source>8</source>
					<target>8</target>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-webflux</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-actuator</artifactId>
		</dependency>
		<dependency>
			<groupId>io.micrometer</groupId>
			<artifactId>micrometer-core</artifactId>
			<version>${micrometer.version}</version>
		</dependency>
		<dependency>
			<groupId>io.micrometer</groupId>
			<artifactId>micrometer-registry-prometheus</artifactId>
			<version>${micrometer.version}</version>
		</dependency>
		<dependency>
			<groupId>org.projectlombok</groupId>
			<artifactId>lombok</artifactId>
			<version>1.18.12</version>
			<scope>provided</scope>
		</dependency>
	</dependencies>
</project>

Then we shall add an application.yaml which exposes the prometheus endpoint.

management:
  endpoints:
    web:
      exposure:
        include: prometheus

So now we are ready to run the application.

> mvn spring-boot:run

If we try to access the actuator we are presented with the prometheus endpoint.

> curl http://localhost:8080/actuator
{
  "_links": {
    "self": {
      "href": "http://localhost:8080/actuator",
      "templated": false
    },
    "prometheus": {
      "href": "http://localhost:8080/actuator/prometheus",
      "templated": false
    }
  }
}

http://localhost:8080/actuator/prometheus is the endpoint that our Prometheus server will use to pull data.
So our Prometheus server needs to be configured to access the data exposed by that endpoint.

In the next blog post we shall deploy Prometheus and view some metrics.

Apache Ignite and Spring on your Kubernetes Cluster Part 3: Testing the application

In the previous blog post we created the Kubernetes deployment files for our Ignite application. In this post we shall deploy the Ignite application on Kubernetes. I will use minikube for this.

Let’s build first

mvn clean install

I shall create a simple Docker image, thus a Dockerfile is needed.
Let's add a Dockerfile to the root of our project.

FROM adoptopenjdk/openjdk11

COPY target/job-api-ignite-0.0.1-SNAPSHOT.jar app.jar

ENTRYPOINT ["java","-jar","app.jar"]

Now we want to deploy this to our local Kubernetes. Follow this guide on how to use local images on Kubernetes.
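With minikube, one way to make a locally built image available is to build it against minikube's Docker daemon:

eval $(minikube docker-env)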

Then let's build our image:

docker build -f Dockerfile -t job-api:1.0 .

Time to apply our Kubernetes yaml files.

kubectl apply -f job-cache-rbac.yaml
kubectl apply -f job-api-deployment.yaml
kubectl apply -f job-api-service.yaml

Give it some time and check your pods

> kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
job-api-deployment-86f54c9d75-dpnsc   1/1     Running   0          11m
job-api-deployment-86f54c9d75-xj267   1/1     Running   0          11m

Let's issue a request through the first pod. This request will reach GitHub and then cache the results in memory.

kubectl exec -it job-api-deployment-86f54c9d75-dpnsc -- curl localhost:8080/jobs/github/1

Then we shall use the other endpoint in order to fetch the data straight from Ignite.

kubectl exec -it job-api-deployment-86f54c9d75-xj267 -- curl localhost:8080/jobs/github/ignite/1

So we were successful: our Ignite cluster is running in our Kubernetes workloads, and the data is cached and shared between the nodes.

You can find the code on GitHub.