Testing using Testcontainers

Many of our everyday CI/CD tasks involve spinning up containers for tests to run against.
So what if you could control those containers directly from your tests and tailor them to your scenarios?
And what if you could do this in a managed way?

Testcontainers is a Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container.

You can pretty much guess what it is all about: our tests can spin up the containers we need, with the parameters we need. We will get started by using it in our JUnit tests.

It all starts with the right dependencies, supposing we use Maven for this tutorial.

	<properties>
		<junit-jupiter.version>5.4.2</junit-jupiter.version>
		<testcontainers.version>1.15.0</testcontainers.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.testcontainers</groupId>
			<artifactId>testcontainers</artifactId>
			<version>${testcontainers.version}</version>
			<scope>test</scope>
		</dependency>

		<dependency>
			<groupId>org.testcontainers</groupId>
			<artifactId>junit-jupiter</artifactId>
			<version>${testcontainers.version}</version>
			<scope>test</scope>
		</dependency>

		<dependency>
			<groupId>org.junit.jupiter</groupId>
			<artifactId>junit-jupiter</artifactId>
			<version>${junit-jupiter.version}</version>
			<scope>test</scope>
		</dependency>
	</dependencies>

I shall use an example we already have with Hoverfly.
We can use Hoverfly in our tests either by running it through Java or by running a Hoverfly container with the simulations preloaded.
In the previous blog post Hoverfly was integrated into our tests through the Java binary.
For this blog post we shall use the Hoverfly container.

Our end result will look like this (the test uses Java 11’s HTTP client).

package com.gkatzioura.hoverfly.docker;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.BindMode;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
public class ContainerBasedSimulation {

	@Container
	public static GenericContainer<?> gcs = new GenericContainer<>("spectolabs/hoverfly")
			// expose both the admin port (8888) and the simulation port (8500) in one call
			.withExposedPorts(8888, 8500)
			// start Hoverfly in webserver mode and import the simulation on startup
			.withCommand("-webserver", "-import", "/var/hoverfly/simulation.json")
			// mount the simulation file from the test classpath into the container
			.withClasspathResourceMapping("simulation.json", "/var/hoverfly/simulation.json", BindMode.READ_ONLY);


	@Test
	void testHttpGet() {
		var hoverFlyHost = gcs.getHost();
		var hoverFlyPort = gcs.getMappedPort(8500);
		var client = HttpClient.newHttpClient();
		var request = HttpRequest.newBuilder()
				.uri(URI.create("http://"+hoverFlyHost+":"+ hoverFlyPort +"/user"))
				.build();
		var res = client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
				.thenApply(HttpResponse::body)
				.join();
		Assertions.assertEquals("{\"username\":\"test-user\"}",res);
	}

}

Let’s break it down.

The @Testcontainers annotation is needed for the Jupiter integration.

@Testcontainers
public class ContainerBasedSimulation {
}

Hoverfly does not come with a preconfigured Testcontainers module (as, for example, Elasticsearch does), thus we shall use the GenericContainer class.

@Container
public static GenericContainer<?> gcs = new GenericContainer<>("spectolabs/hoverfly")
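
For comparison, when a dedicated module does exist, the container class encodes the image defaults and exposed ports for you. A minimal sketch, assuming the org.testcontainers:elasticsearch module is on the test classpath:

import org.testcontainers.elasticsearch.ElasticsearchContainer;

	// a module-backed container: ports and wait strategy come preconfigured
	@Container
	public static ElasticsearchContainer elastic = new ElasticsearchContainer("docker.elastic.co/elasticsearch/elasticsearch:7.9.3");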

Since we want to load a simulation into the container, we need to make the simulation file available inside it. With withClasspathResourceMapping we can mount files that reside on our classpath, for example the test resources, directly into the container.

			.withClasspathResourceMapping("simulation.json", "/var/hoverfly/simulation.json", BindMode.READ_ONLY);
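
For reference, the simulation.json under src/test/resources could look roughly like this. This is a sketch following the Hoverfly v5 simulation schema (consult the Hoverfly documentation for the full format), returning the payload our test expects:

{
  "data": {
    "pairs": [
      {
        "request": {
          "path": [ { "matcher": "exact", "value": "/user" } ]
        },
        "response": {
          "status": 200,
          "body": "{\"username\":\"test-user\"}"
        }
      }
    ]
  },
  "meta": {
    "schemaVersion": "v5"
  }
}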

Hoverfly needs both the admin port and the simulation port to be exposed, so we shall instruct Testcontainers to expose those ports and map them to the host. Note that withExposedPorts takes a varargs list; calling it twice would overwrite the first call instead of adding a port.

new GenericContainer<>("spectolabs/hoverfly")
			.withExposedPorts(8888, 8500)

Alternatively, if the simulation file does not reside on the classpath, withFileSystemBind binds a path on the host machine to a path on the container. In that case we resolve the host path ourselves:

private static final String SIMULATION_HOST_PATH = ContainerBasedSimulation.class.getClassLoader().getResource("simulation.json").getPath();

...
.withFileSystemBind(SIMULATION_HOST_PATH, "/var/hoverfly/simulation.json", BindMode.READ_ONLY)
...

Docker images might also need some extra command-line arguments, so we shall use withCommand to pass them.

...
.withCommand("-webserver","-import","/var/hoverfly/simulation.json")
...

Technically we are now ready to connect to the container; however, when running Testcontainers the container is not accessible through the port we exposed. After all, if tests run in parallel there would be port collisions. Instead, Testcontainers maps each exposed container port to a random free local port, and this way port collisions are avoided.

	@Test
	void testHttpGet() {
		var hoverFlyHost = gcs.getHost();
		var hoverFlyPort = gcs.getMappedPort(8500);
		var client = HttpClient.newHttpClient();
		var request = HttpRequest.newBuilder()
				.uri(URI.create("http://"+hoverFlyHost+":"+ hoverFlyPort +"/user"))
				.build();
		var res = client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
				.thenApply(HttpResponse::body)
				.join();
		Assertions.assertEquals("{\"username\":\"test-user\"}",res);
	}

Using getMappedPort(8500) we retrieve the actual local port we have to use to interact with the container. getHost() is equally essential, since the Docker host will not always resolve to localhost (for example with a remote or virtualized Docker daemon).
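
To avoid repeating the host-and-port plumbing in every test, a small helper can resolve the base URL once. A minimal sketch; simulationUrl is a hypothetical name, not part of the Testcontainers API:

	// hypothetical helper: builds the simulation endpoint URL from the mapped port
	private static String simulationUrl(String path) {
		return "http://" + gcs.getHost() + ":" + gcs.getMappedPort(8500) + path;
	}

	// usage inside a test:
	// var request = HttpRequest.newBuilder().uri(URI.create(simulationUrl("/user"))).build();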

Last but not least, if you are curious enough while the tests run, do a docker ps:

docker ps 
>04a322447226        testcontainers/ryuk:0.3.0   "/app"                   3 seconds ago       Up 2 seconds        0.0.0.0:32814->8080/tcp    testcontainers-ryuk-fb60c3c6-5f31-4f4e-9ab7-ce25a00eeccc

You shall see a container running that we did not ask for in our test. The Ryuk container is spawned by Testcontainers itself and is responsible for removing containers, networks, volumes and images by a given filter after a specified delay, so nothing leaks once the tests finish.
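
If your environment cannot run Ryuk (for example a restricted CI agent), the Testcontainers documentation describes disabling it through an environment variable, at the cost of cleaning up containers yourself:

export TESTCONTAINERS_RYUK_DISABLED=true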

That’s it! We just ran the container we needed from within a test, and we successfully migrated a previous test to one using Testcontainers.

Upload and download files to S3 using Maven

Throughout the years I’ve seen many teams using Maven in many different ways. Maven can take over various CI/CD tasks instead of extra pipeline code, or it can prepare the development environment before running tests.
Generally it is a convenient tool, widely used among Java teams, and it will stay that way since there is a huge ecosystem around it.

The CloudStorageMaven plugin helps you use various cloud storage buckets as a private Maven repository. Recently CloudStorageMaven for S3 got a huge upgrade: you can now also use it as a plugin to download files from and upload files to S3.

The plugin assumes that your environment is configured properly to access the S3 resources needed.
This can be achieved individually through aws configure:

aws configure

Other ways are through environment variables or the appropriate IAM role.
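
For example, on a CI agent the standard AWS SDK environment variables can be exported instead (the values below are placeholders):

export AWS_ACCESS_KEY_ID=your-access-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-access-key
export AWS_REGION=eu-west-1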

Suppose you want to download certain files from a path in S3:

<build>
        <plugins>
            <plugin>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>s3-storage-wagon</artifactId>
                <version>1.6</version>
                <executions>
                    <execution>
                        <id>download-one</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/local/download/path</downloadPath>
                            <keys>1.txt,2.txt,directory/3.txt</keys>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
</build>

Once the execution finishes, the files 1.txt, 2.txt and directory/3.txt shall reside in the local directory specified
(/local/download/path).
Be aware that file discovery on S3 is prefix-based, thus if you have the files 1.txt and 1.txt.jpg, both shall be downloaded.

You can also download a single key to a single local file that you specify, as long as the mapping is one to one:

                    <execution>
                        <id>download-single</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/path/to/local/your-file.txt</downloadPath>
                            <keys>a-key-to-download.txt</keys>
                        </configuration>
                    </execution>

Files whose keys contain directory-like prefixes (directories are fake ones on S3, since keys are flat) will be downloaded to the directory specified in the form of directories and subdirectories:

                    <execution>
                        <id>download-prefix</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/path/to/local/</downloadPath>
                            <keys>s3-prefix</keys>
                        </configuration>
                    </execution>

The next part is about uploading files to S3.

Uploading one file:

                    <execution>
                        <id>upload-one</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/your-file.txt</path>
                            <key>key-to-upload.txt</key>
                        </configuration>
                    </execution>

Uploading a directory:

                    <execution>
                        <id>upload-directory</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/directory</path>
                            <key>prefix</key>
                        </configuration>
                    </execution>

Uploading to the root of the bucket:

                    <execution>
                        <id>upload-multiple-files-no-key</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/directory</path>
                        </configuration>
                    </execution>

That’s it! Since it is an open source project you can contribute or open pull requests on GitHub.