Execute mTLS calls using Java

Previously we secured an Nginx instance with SSL and mTLS. If you are using Java, interacting with a service secured by mTLS requires some changes to your code base. In this tutorial we shall enable our Java application to use mTLS with different HTTP clients.

To get started quickly, we shall spin up a server exactly the same way we did in the mTLS blog post. This keeps things streamlined and ensures the client credentials are already in place.

In order to configure SSL for our Java clients, we first need to set up an SSLContext. This simplifies things, since the same SSLContext can be reused across the various HTTP clients out there.

Since we have the client certificate and private key, we need to convert the private key to PKCS#8 format so that it can be loaded in Java.

openssl pkcs8 -topk8 -inform PEM -outform PEM -in /path/to/generated/client.key -out /path/to/generated/client.key.pkcs8 -nocrypt

Because we use a local Nginx service for this example, we need to disable hostname verification.

        final Properties props = System.getProperties();
        props.setProperty("jdk.internal.httpclient.disableHostnameVerification", Boolean.TRUE.toString());

Other clients might require a HostnameVerifier that accepts all connections.

        HostnameVerifier allHostsValid = new HostnameVerifier() {
            public boolean verify(String hostname, SSLSession session) {
                return true;
            }
        };

The next step is to load the client keys in Java code and create a KeyManagerFactory.

        String privateKeyPath = "/path/to/generated/client.key.pkcs8";
        String publicKeyPath = "/path/to/generated/client.crt";

        final byte[] publicData = Files.readAllBytes(Path.of(publicKeyPath));
        final byte[] privateData = Files.readAllBytes(Path.of(privateKeyPath));

        String privateString = new String(privateData, Charset.defaultCharset())
                .replace("-----BEGIN PRIVATE KEY-----", "")
                .replaceAll(System.lineSeparator(), "")
                .replace("-----END PRIVATE KEY-----", "");

        byte[] encoded = Base64.getDecoder().decode(privateString);

        final CertificateFactory certificateFactory = CertificateFactory.getInstance("X.509");
        final Collection<? extends Certificate> chain = certificateFactory.generateCertificates(
                new ByteArrayInputStream(publicData));

        Key key = KeyFactory.getInstance("RSA").generatePrivate(new PKCS8EncodedKeySpec(encoded));

        KeyStore clientKeyStore = KeyStore.getInstance("jks");
        final char[] pwdChars = "test".toCharArray();
        clientKeyStore.load(null, null);
        clientKeyStore.setKeyEntry("test", key, pwdChars, chain.toArray(new Certificate[0]));

        KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance("SunX509");
        keyManagerFactory.init(clientKeyStore, pwdChars);

In the above snippet:

  • We read the bytes from the files.
  • We created a certificate chain from the public certificate.
  • We created a Key instance using the private key.
  • We created a KeyStore using the chain and the key.
  • We created a KeyManagerFactory.

Now that we have a KeyManagerFactory, we can use it to create an SSLContext.

Since we are using self-signed certificates, we need a TrustManager that will accept them. In this example the TrustManager accepts all certificates presented by the server.

        TrustManager[] acceptAllTrustManager = {
                new X509TrustManager() {
                    public X509Certificate[] getAcceptedIssuers() {
                        return new X509Certificate[0];
                    }

                    public void checkClientTrusted(
                            X509Certificate[] certs, String authType) {
                    }

                    public void checkServerTrusted(
                            X509Certificate[] certs, String authType) {
                    }
                }
        };

Then comes the SSLContext initialization.

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(keyManagerFactory.getKeyManagers(), acceptAllTrustManager, new java.security.SecureRandom());

Let's use the Java 11 HttpClient and see how it behaves.

        HttpClient client = HttpClient.newBuilder()
                                      .sslContext(sslContext)
                                      .build();

        HttpRequest exactRequest = HttpRequest.newBuilder()
                                      .uri(URI.create("https://127.0.0.1"))
                                      .GET()
                                      .build();

        var exactResponse = client.sendAsync(exactRequest, HttpResponse.BodyHandlers.ofString())
                                  .join();
        System.out.println(exactResponse.statusCode());

We shall receive a 404 status code (the default for that Nginx installation), which means that our request completed a successful mTLS handshake.

Now let's try another client, the old-school synchronous HttpsURLConnection. Note that we use the allHostsValid verifier created previously.

        HttpsURLConnection httpsURLConnection = (HttpsURLConnection) new URL("https://127.0.0.1").openConnection();
        httpsURLConnection.setSSLSocketFactory(sslContext.getSocketFactory());
        httpsURLConnection.setHostnameVerifier(allHostsValid);

        InputStream  inputStream = httpsURLConnection.getInputStream();
        String result =  new String(inputStream.readAllBytes(), Charset.defaultCharset());

This will result in a 404 error, which again means that the handshake took place successfully.

So whether you use an asynchronous HTTP client or a synchronous one, provided you have the right SSLContext configured, you should be able to complete the handshake.
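
For example, the same SSLContext and the allHostsValid verifier can be plugged into the Apache HttpClient as well. The following is a minimal sketch, assuming the httpclient 4.5 dependency is on the classpath:

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// Reuse the SSLContext and the permissive hostname verifier created earlier.
CloseableHttpClient httpClient = HttpClients.custom()
        .setSSLContext(sslContext)
        .setSSLHostnameVerifier(allHostsValid)
        .build();

try (CloseableHttpResponse response = httpClient.execute(new HttpGet("https://127.0.0.1"))) {
    // Again, a 404 from the default Nginx installation means the mTLS handshake succeeded.
    System.out.println(response.getStatusLine().getStatusCode());
}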

Add mTLS to Nginx

Previously we added SSL to an Nginx server. In this example we shall enhance our security by adding mTLS to Nginx.

Apart from encrypting the traffic between client and server, SSL is also a way for the client to make sure that the server it connects to is a trusted source.

mTLS, on the other hand, is a way for the server to ensure that the client is trusted as well. The client still accepts the SSL connection to the server, but it also has to present a certificate signed by an authority the server accepts. By validating the certificate the client presents, the server can allow the connection.

We shall more or less build upon the previous example. The SSL certificates stay the same, but we shall add the configuration for mTLS.

The server SSL certificate creation:

mkdir certs

cd certs

openssl genrsa -des3 -out ca.key 4096
#Remove passphrase for example purposes
openssl rsa -in ca.key -out ca.key
openssl req -new -x509 -days 3650 -key ca.key -subj "/CN=*.your.hostname" -out ca.crt

printf test > passphrase.txt
openssl genrsa -des3 -passout file:passphrase.txt -out server.key 2048
openssl req -new -passin file:passphrase.txt -key server.key -subj "/CN=*.your.hostname" -out server.csr

openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

The above is sufficient to secure our Nginx with SSL. So let's create the mTLS certificates for the clients.
In order to create a certificate for mTLS we need a certificate authority. For convenience, the certificate authority will be the same as the one we generated in the previous example.

printf test > client_passphrase.txt
openssl genrsa -des3 -passout file:client_passphrase.txt -out client.key 2048
openssl rsa -passin file:client_passphrase.txt -in client.key -out client.key
openssl req -new -key client.key -subj "/CN=*.client.hostname" -out client.csr

##Sign the certificate with the certificate authority
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt

Take note that the client common name needs to be different from the server certificate's common name, or else your request will be rejected.

So we have our client certificate generated.
The next step is to configure Nginx to enforce mTLS connections signed by a specific certificate authority.

server {
    error_log /var/log/nginx/error.log debug;
    listen 443 ssl;
    server_name  test.your.hostname;
    ssl_password_file /etc/nginx/certs/password;
    ssl_certificate /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;

    ssl_client_certificate /etc/nginx/mtls/ca.crt;
    ssl_verify_client on;
    ssl_verify_depth  3;

    ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;

    location / {
    }

}

The ssl_client_certificate directive points to the certificate authority that the client certificates should be signed by.
By setting ssl_verify_client to on, we enforce mTLS connections.

Since we have all the certificates generated, let's spin up the Nginx server using Docker.

docker run --rm --name mtls-nginx -p 443:443 -v $(pwd)/certs/ca.crt:/etc/nginx/mtls/ca.crt -v $(pwd)/certs/server.key:/etc/nginx/certs/tls.key -v $(pwd)/certs/server.crt:/etc/nginx/certs/tls.crt -v $(pwd)/nginx.mtls.conf:/etc/nginx/conf.d/nginx.conf -v $(pwd)/certs/passphrase.txt:/etc/nginx/certs/password nginx

Our server is up and running. Let's try a request using curl without any client certificates.

curl https://localhost/ --insecure

The result shall be

<html>
<head><title>400 No required SSL certificate was sent</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>No required SSL certificate was sent</center>
<hr><center>nginx/1.21.3</center>
</body>
</html>

As expected, our request is rejected.
Let's use the client certificate we generated, signed by the expected certificate authority.

curl --key certs/client.key --cert certs/client.crt https://127.0.0.1 --insecure
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.3</center>
</body>
</html>

The connection was established and the client could connect to the Nginx instance.

Let’s put them all together

mkdir certs

cd certs

openssl genrsa -des3 -out ca.key 4096
#Remove passphrase for example purposes
openssl rsa -in ca.key -out ca.key
openssl req -new -x509 -days 3650 -key ca.key -subj "/CN=*.your.hostname" -out ca.crt

printf test > passphrase.txt
openssl genrsa -des3 -passout file:passphrase.txt -out server.key 2048
openssl req -new -passin file:passphrase.txt -key server.key -subj "/CN=*.your.hostname" -out server.csr

openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

printf test > client_passphrase.txt
openssl genrsa -des3 -passout file:client_passphrase.txt -out client.key 2048
openssl rsa -passin file:client_passphrase.txt -in client.key -out client.key
openssl req -new -key client.key -subj "/CN=*.client.hostname" -out client.csr

##Sign the certificate with the certificate authority
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt

cd ../

docker run --rm --name mtls-nginx -p 443:443 -v $(pwd)/certs/ca.crt:/etc/nginx/mtls/ca.crt -v $(pwd)/certs/server.key:/etc/nginx/certs/tls.key -v $(pwd)/certs/server.crt:/etc/nginx/certs/tls.crt -v $(pwd)/nginx.mtls.conf:/etc/nginx/conf.d/nginx.conf -v $(pwd)/certs/passphrase.txt:/etc/nginx/certs/password nginx

You can find the code on GitHub.

Add SSL to Nginx

Nginx is a versatile tool with many uses; it can serve as a reverse proxy, a load balancer, and more.

A common use is handling the SSL traffic in front of applications. Instead of handling SSL in your application layer, you can have Nginx in front.

In our example we shall generate the certificates and have Nginx do the TLS termination. I will use self-signed certificates backed by our own certificate authority, which will also help us in another example. In the real world the certificate authority is something external like Let's Encrypt or GlobalSign; by creating our own certificate authority we can simulate them.

openssl genrsa -des3 -out ca.key 4096
#Remove passphrase for example purposes
openssl rsa -in ca.key -out ca.key
openssl req -new -x509 -days 3650 -key ca.key -subj "/CN=*.your.hostname" -out ca.crt

Now that we have a certificate authority, let's create the server key and certificate. The first step is to create the key.

printf test > passphrase.txt
openssl genrsa -des3 -passout file:passphrase.txt -out server.key 2048
openssl req -new -passin file:passphrase.txt -key server.key -subj "/CN=*.your.hostname" -out server.csr

The result is a private key and a certificate signing request (CSR). The CSR needs to be signed by a certificate authority; in our case that is the one we created previously. Take note that we did not remove the passphrase from server.key. This was done on purpose, to show how to load a password-protected key in Nginx. If you don't want to deal with it, remove the passphrase as shown in the certificate authority example.

So let's sign the CSR.

openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

Now we are ready to install them on Nginx. We shall use Docker for this.
This is how the Nginx configuration should look. We shall mount the files we generated previously into our Docker container.

server {

    listen 443 ssl;
    server_name  test.your.hostname;
    ssl_password_file /etc/nginx/certs/password;
    ssl_certificate /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;


    location / {

        error_log /var/log/front_end_errors.log;
    }

    location = /swagger.json {
        proxy_pass https://petstore.swagger.io/v2/swagger.json;
    }

}

Our Docker command, mounting the files:

docker run --rm --name some-nginx -p 443:443 -v $(pwd)/certs/server.key:/etc/nginx/certs/tls.key -v $(pwd)/certs/server.crt:/etc/nginx/certs/tls.crt -v $(pwd)/nginx.conf:/etc/nginx/conf.d/nginx.conf -v $(pwd)/certs/passphrase.txt:/etc/nginx/certs/password nginx

Since this is a self-signed certificate it cannot be accessed by a browser without tweaks, but we can use curl --insecure to inspect the results. With a trusted certificate authority this would not be the case.

curl https://localhost/ -v --insecure

Let’s put them all in a script

mkdir certs

cd certs

openssl genrsa -des3 -out ca.key 4096
#Remove passphrase for example purposes
openssl rsa -in ca.key -out ca.key
openssl req -new -x509 -days 3650 -key ca.key -subj "/CN=*.your.hostname" -out ca.crt

printf test > passphrase.txt
openssl genrsa -des3 -passout file:passphrase.txt -out server.key 2048
openssl req -new -passin file:passphrase.txt -key server.key -subj "/CN=*.your.hostname" -out server.csr

openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

cd ../

docker run --rm --name some-nginx -p 443:443 -v $(pwd)/certs/server.key:/etc/nginx/certs/tls.key -v $(pwd)/certs/server.crt:/etc/nginx/certs/tls.crt -v $(pwd)/nginx.conf:/etc/nginx/conf.d/nginx.conf -v $(pwd)/certs/passphrase.txt:/etc/nginx/certs/password nginx

You can find the code on GitHub.

In the next blog post we shall configure Nginx to support mTLS.

Kafka & Zookeeper for Development: Connecting Clients to the Cluster

Previously we got our Kafka brokers to connect to a ZooKeeper ensemble. We also brought down some brokers, checked the leader election, and produced/consumed some messages.

Now we want to make sure we can connect to those nodes from outside. The problem with connecting to the cluster we created previously is that it is located inside the container network. When a client interacts with one of the brokers and receives the full list of brokers, it receives a list of IPs that are not accessible to it.

So the client's initial handshake will be successful, but then the client will try to interact with unreachable hosts.

In order to tackle this we will use a combination of workarounds.

The first one is to bind the port of each Kafka broker to a different local IP.

kafka-1 will be mapped to 127.0.0.1:9092
kafka-2 will be mapped to 127.0.0.2:9092
kafka-3 will be mapped to 127.0.0.3:9092

So let's create aliases for those addresses (the ifconfig commands below are for macOS).

sudo ifconfig lo0 alias 127.0.0.2
sudo ifconfig lo0 alias 127.0.0.3

Now the IP binding is possible. Let's also add those entries to our /etc/hosts. By doing this, our local network and our Docker network agree on which broker address to use.

127.0.0.1	kafka-1
127.0.0.2	kafka-2
127.0.0.3	kafka-3

The next step is to change KAFKA_ADVERTISED_LISTENERS on each broker. We will adapt it to the DNS entry of each broker. By setting KAFKA_ADVERTISED_LISTENERS, outside clients can connect to the broker through an address that is reachable to them, instead of an address on the internal network. Further explanations can be found on this blog.

  kafka-1:
    container_name: kafka-1
    image: confluent/kafka
    ports:
    - "127.0.0.1:9092:9092"
    volumes:
    - type: bind
      source: ./server1.properties
      target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-1:9092"
  kafka-2:
    container_name: kafka-2
    image: confluent/kafka
    ports:
      - "127.0.0.2:9092:9092"
    volumes:
      - type: bind
        source: ./server2.properties
        target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-2:9092"
  kafka-3:
    container_name: kafka-3
    image: confluent/kafka
    ports:
      - "127.0.0.3:9092:9092"
    volumes:
      - type: bind
        source: ./server3.properties
        target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-3:9092"

We see the port binding change as well as the KAFKA_ADVERTISED_LISTENERS change. Now let's wrap everything together in our docker-compose file.

version: "3.8"
services:
  zookeeper-1:
    container_name: zookeeper-1
    image: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: "1"
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  zookeeper-2:
    container_name: zookeeper-2
    image: zookeeper
    ports:
      - "2182:2181"
    environment:
      ZOO_MY_ID: "2"
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  zookeeper-3:
    container_name: zookeeper-3
    image: zookeeper
    ports:
      - "2183:2181"
    environment:
      ZOO_MY_ID: "3"
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
  kafka-1:
    container_name: kafka-1
    image: confluent/kafka
    ports:
    - "127.0.0.1:9092:9092"
    volumes:
    - type: bind
      source: ./server1.properties
      target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-1:9092"
  kafka-2:
    container_name: kafka-2
    image: confluent/kafka
    ports:
      - "127.0.0.2:9092:9092"
    volumes:
      - type: bind
        source: ./server2.properties
        target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-2:9092"
  kafka-3:
    container_name: kafka-3
    image: confluent/kafka
    ports:
      - "127.0.0.3:9092:9092"
    volumes:
      - type: bind
        source: ./server3.properties
        target: /etc/kafka/server.properties
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka-3:9092"
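
With everything up, a client running on the host machine should be able to reach the brokers. As a quick sanity check, here is a minimal producer sketch in Java; it assumes the kafka-clients dependency is on the classpath and uses a hypothetical topic named tutorial-topic:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

Properties properties = new Properties();
// These addresses resolve through the /etc/hosts entries added above.
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092,kafka-2:9092,kafka-3:9092");
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

try (KafkaProducer<String, String> producer = new KafkaProducer<>(properties)) {
    // get() blocks until a broker acknowledges the record, so a failure surfaces immediately.
    producer.send(new ProducerRecord<>("tutorial-topic", "key", "hello from the host")).get();
}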

Last but not least, you can find the code on GitHub.

Apache Ignite on your Kubernetes Cluster Part 4: Deployment explained

Previously we saw the Ignite configuration that comes with the Kubernetes installation.
The default configuration does not have persistence enabled, so we won't focus on the storage classes provided by the Helm chart.

The default installation uses a StatefulSet. You can find more information on StatefulSets in the Kubernetes documentation.

> kubectl get statefulset ignite-cache -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: 2020-04-09T12:29:04Z
  generation: 1
  labels:
    app.kubernetes.io/instance: ignite-cache
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ignite
    helm.sh/chart: ignite-1.0.1
  name: ignite-cache
  namespace: default
  resourceVersion: "281390"
  selfLink: /apis/apps/v1/namespaces/default/statefulsets/ignite-cache
  uid: fcaa7bef-84cd-4e7c-aa33-a4312a1d47a9
spec:
  podManagementPolicy: OrderedReady
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ignite-cache
  serviceName: ignite-cache
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ignite-cache
    spec:
      containers:
      - env:
        - name: IGNITE_QUIET
          value: "false"
        - name: JVM_OPTS
          value: -Djava.net.preferIPv4Stack=true
        - name: OPTION_LIBS
          value: ignite-kubernetes,ignite-rest-http
        image: apacheignite/ignite:2.7.6
        imagePullPolicy: IfNotPresent
        name: ignite
        ports:
        - containerPort: 11211
          protocol: TCP
        - containerPort: 47100
          protocol: TCP
        - containerPort: 47500
          protocol: TCP
        - containerPort: 49112
          protocol: TCP
        - containerPort: 10800
          protocol: TCP
        - containerPort: 8080
          protocol: TCP
        - containerPort: 10900
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/ignite/apache-ignite/config
          name: config-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: ignite-cache
      serviceAccountName: ignite-cache
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: ignite-config.xml
            path: default-config.xml
          name: ignite-cache-configmap
        name: config-volume
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
status:
  replicas: 0

As you can see, the Ignite configuration has been mounted through the ConfigMap, and the pod uses a specific service account.
Through the environment variables, certain libraries are enabled that provide extra features on the Ignite cluster. The ports needed for communication over the various protocols are also specified.

The last step is the Service. All the Ignite nodes shall be load balanced behind the Kubernetes Service.

> kubectl get svc ignite-cache -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2020-04-09T12:29:04Z
  labels:
    app: ignite-cache
  name: ignite-cache
  namespace: default
  resourceVersion: "281389"
  selfLink: /api/v1/namespaces/default/services/ignite-cache
  uid: 5be68e28-a57c-4cb5-b610-b708bff80da7
spec:
  clusterIP: None
  ports:
  - name: jdbc
    port: 11211
    protocol: TCP
    targetPort: 11211
  - name: spi-communication
    port: 47100
    protocol: TCP
    targetPort: 47100
  - name: spi-discovery
    port: 47500
    protocol: TCP
    targetPort: 47500
  - name: jmx
    port: 49112
    protocol: TCP
    targetPort: 49112
  - name: sql
    port: 10800
    protocol: TCP
    targetPort: 10800
  - name: rest
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: thin-clients
    port: 10900
    protocol: TCP
    targetPort: 10900
  selector:
    app: ignite-cache
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Whether you add a new server node or an Ignite client node, your Ignite cluster shall be reached through this Kubernetes Service. Apart from that, depending on the Kubernetes Service type you can make this cache public or internal.
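
For instance, an application running inside the same Kubernetes cluster could reach the cache through the Service name with the Ignite thin client. A minimal sketch, assuming the ignite-core dependency and a hypothetical cache name, could look like this:

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

// The ignite-cache Service exposes the thin-client port 10800 (see the Service definition above).
ClientConfiguration clientConfiguration = new ClientConfiguration()
        .setAddresses("ignite-cache:10800");

try (IgniteClient igniteClient = Ignition.startClient(clientConfiguration)) {
    // "my-cache" is just an example cache name.
    ClientCache<String, String> cache = igniteClient.getOrCreateCache("my-cache");
    cache.put("greeting", "hello from inside the cluster");
    System.out.println(cache.get("greeting"));
}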

Apache Ignite on your Kubernetes Cluster Part 3: Configuration explained

Previously we had a look at the RBAC needed for an Ignite cluster in Kubernetes.
This blog post focuses on the deployment and the configuration of the cache.

The default Ignite installation uses an XML-based configuration. It is easy to mount configuration files using ConfigMaps.

> kubectl get configmap ignite-cache-configmap -o yaml
apiVersion: v1
data:
  ignite-config.xml: "....\n"
kind: ConfigMap
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache-configmap
  namespace: default
  resourceVersion: "137521"
  selfLink: /api/v1/namespaces/default/configmaps/ignite-cache-configmap
  uid: ff530e3d-10d6-4708-817f-f9845886c1b0

Since viewing the XML through the ConfigMap is cumbersome, this is the actual XML:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="peerClassLoadingEnabled" value="false"/>
        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            </bean>
        </property>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                        <property name="namespace" value="default"/>
                        <property name="serviceName" value="ignite-cache"/>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

The default DataStorageConfiguration is being used.
What differs from other Ignite installations is the discovery mechanism: the TcpDiscoverySpi is configured with the Kubernetes-based IP finder, which discovers the other nodes through the ignite-cache Kubernetes Service.
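
For reference, a sketch of the same setup expressed programmatically in Java (assuming the ignite-core and ignite-kubernetes modules are on the classpath) would look roughly like this:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

// Discover the other nodes through the ignite-cache Kubernetes Service in the default namespace.
TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
ipFinder.setNamespace("default");
ipFinder.setServiceName("ignite-cache");

IgniteConfiguration igniteConfiguration = new IgniteConfiguration()
        .setPeerClassLoadingEnabled(false)
        .setDataStorageConfiguration(new DataStorageConfiguration())
        .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));

Ignition.start(igniteConfiguration);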

The next blog focuses on the services and the deployment.

Apache Ignite on your Kubernetes Cluster Part 2: RBAC Explained

So previously we had a vanilla installation of Apache Ignite on Kubernetes.

You had a cache service running, however all you did was install a Helm chart.
In this blog post we shall evaluate what is installed and take notes for our future Helm charts.

The first step is to view the Helm chart.

> helm list
NAME        	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART       	APP VERSION
ignite-cache	default  	1       	2020-03-07 22:23:49.918924 +0000 UTC	deployed	ignite-1.0.1	2.7.6

Now let’s download it

> helm fetch stable/ignite
> tar xvf ignite-1.0.1.tgz
> cd ignite/; ls -R
Chart.yaml	README.md	templates	values.yaml

./templates:
NOTES.txt			account-role.yaml		persistence-storage-class.yaml	service-account.yaml		svc.yaml
_helpers.tpl			configmap.yaml			role-binding.yaml		stateful-set.yaml		wal-storage-class.yaml

Reading through the template files is a bit challenging (well, they are templates :P), so we shall just check what was installed by the previous blog post's installation.

Let's get started with the account role. The ClusterRole that Ignite uses needs to be able to get/list/watch the pods and the endpoints. This makes sense, since the nodes need to discover each other.

> kubectl get ClusterRole ignite-cache -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  resourceVersion: "137525"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/ignite-cache
  uid: 0cad0689-2f94-4b74-87bc-b468e2ac78ae
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - endpoints
  verbs:
  - get
  - list
  - watch

In order to use this role you need a service account. The service account is created along with a token.

> kubectl get serviceaccount ignite-cache -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  namespace: default
  resourceVersion: "137524"
  selfLink: /api/v1/namespaces/default/serviceaccounts/ignite-cache
  uid: 7aab67e5-04db-41a8-b73d-e76e34ca1d8e
secrets:
- name: ignite-cache-token-8rln4

Then we have the role binding. The ClusterRoleBinding binds the new ignite-cache service account to the ignite-cache ClusterRole.

> kubectl get ClusterRoleBinding ignite-cache -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2020-03-07T22:23:50Z
  name: ignite-cache
  resourceVersion: "137526"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/ignite-cache
  uid: 1e180bd1-567f-4979-a278-ba2e420ed482
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ignite-cache
subjects:
- kind: ServiceAccount
  name: ignite-cache
  namespace: default

It is important for your Ignite workloads to use this service account and its token. By doing so, they have the permissions to discover the other nodes in your cluster.

The next blog focuses on the configuration.

Apache Ignite on your Kubernetes Cluster Part 1: Vanilla installation

By all means, Apache Ignite is an amazing open source project. Don't assume it's just a cache; it provides way more.

Kubernetes gets more popular by the day and is also a very convenient tool.
In this tutorial we shall integrate Ignite and Kubernetes.

The first step would be to spin up Minikube.

To get Ignite on your Kubernetes installation, the next step is to install the Helm chart.

>helm repo add stable https://kubernetes-charts.storage.googleapis.com
>helm install ignite-cache stable/ignite
NAME: ignite-cache
LAST DEPLOYED: Sat Mar  7 22:23:49 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To check cluster state please run:

kubectl exec -n default ignite-cache-0 -- /opt/ignite/apache-ignite/bin/control.sh --state

Eventually, after this command is issued, you should have an Ignite cache set up on your Kubernetes cluster.

>kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
ignite-cache-0   1/1     Running   0          79s
ignite-cache-1   1/1     Running   0          13s
>kubectl get svc ignite-cache
ignite-cache   ClusterIP   None         <none>        11211/TCP,47100/TCP,47500/TCP,49112/TCP,10800/TCP,8080/TCP,10900/TCP   6m24s

For those familiar with Kubernetes, an Ignite cache has just been spun up in your Kubernetes cluster, and your applications can use the Ignite service from within the cluster.
The next blog post focuses on the service account needed.

Use a local Docker image on Minikube

You use Minikube and you want to run the development images that you build locally. This might seem tricky, since Minikube needs to pull images from a registry, while your images live only in your local Docker daemon.

In any case, you can still use your local images with Minikube, so let's get started.

Before running any container, let's issue:

> eval $(minikube docker-env)

This reuses Minikube's Docker host for your current shell session.

See for yourself.

> minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="/Users/gkatzioura/.minikube/certs"
# Run this command to configure your shell:
# eval $(minikube docker-env)

Then spin up an nginx container. Most of the commands are taken from this tutorial.

>docker run -d -p 8080:80 --name my-nginx nginx
>docker ps --filter name=my-nginx
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
128ce006ecae        nginx               "nginx -g 'daemon of…"   13 seconds ago      Up 12 seconds       0.0.0.0:8080->80/tcp   my-nginx

Now let’s create an image from the running container.

docker commit 128ce006ecae dockerimage:version1

Then let's run our custom image on Minikube.

kubectl create deployment test-image --image=dockerimage:version1

Let's also expose the deployment as a service.

kubectl expose deployment test-image --type=LoadBalancer --port=80

Let's take it to the next level and try to wget our service from another pod.

> kubectl exec -it podwithbinbash /bin/bash
bash-4.4# wget test-image
Connecting to test-image (10.101.70.7:80)
index.html           100% |***********************************************************************************************************|   612  0:00:00 ETA
bash-4.4# cat index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Take note that the above will work only in the terminal where you executed the command

eval $(minikube docker-env)

If you want, you can set up your bash_profile to do this for every terminal, but that is up to you.
Eventually, this is one of the quick ways to use your local images on Minikube; most probably there are others available.

Autoscaling Groups with terraform on AWS Part 3: Elastic Load Balancer and health check

Previously we set up some Apache Ignite servers in an autoscaling group. The next step is to add a Load Balancer in front of the autoscaling group.

Before anything else, let's add some variables to variables.tf.

variable "autoscalling_group_elb_name" {
  type = string
  default = "autoscallinggroupelb"
}

variable "elb_security_group_name" {
  type = string
  default = "elb_name"
}

First we shall add the security group for the Load Balancer.

resource "aws_security_group" "elb_security_group" {
  name = var.elb_security_group_name
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = 80
    to_port = 8080
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Then we need to retrieve the availability zones for the Load Balancer.

data "aws_availability_zones" "available" {
  state = "available"
}

Then let’s add the Load Balancer.

resource "aws_elb" "autoscalling_group_elb" {
  name = var.autoscalling_group_elb_name
  security_groups = ["${aws_security_group.elb_security_group.id}"]
  availability_zones = data.aws_availability_zones.available.names
  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    interval = 30
    target = "HTTP:8080/ignite?cmd=version"
  }
  listener {
    lb_port = 80
    lb_protocol = "http"
    instance_port = "8080"
    instance_protocol = "http"
  }
}
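
The ELB health check above probes Ignite's HTTP REST endpoint. To check that endpoint by hand from a machine that can reach an instance, a small Java sketch (assuming Java 11's HttpClient and a placeholder instance address) could be:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Replace instance-address with the address of one of the autoscaling group instances.
HttpClient httpClient = HttpClient.newHttpClient();
HttpRequest versionRequest = HttpRequest.newBuilder()
        .uri(URI.create("http://instance-address:8080/ignite?cmd=version"))
        .GET()
        .build();

HttpResponse<String> versionResponse =
        httpClient.send(versionRequest, HttpResponse.BodyHandlers.ofString());

// A 200 response with the Ignite version in the body is what the ELB considers healthy.
System.out.println(versionResponse.statusCode() + " " + versionResponse.body());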

Then let's attach the Load Balancer to the autoscaling group and set the health check type to ELB.

resource "aws_autoscaling_group" "autoscalling_group_config" {
  name = var.auto_scalling_group_name
  max_size = 3
  min_size = 2
  health_check_grace_period = 300
  health_check_type = "ELB"
  desired_capacity = 3
  force_delete = true
  vpc_zone_identifier = [for s in data.aws_subnet.subnet_values: s.id]
  load_balancers = ["${aws_elb.autoscalling_group_elb.name}"]

  launch_configuration = aws_launch_configuration.launch-configuration.name

  lifecycle {
    create_before_destroy = true
  }
}

As before, apply your Terraform configuration.

> terraform apply