Run your first Gatling load test using Scala.

Gatling is a neat tool. You can create your load tests just by coding in Scala. JMeter lets you do something similar through a plugin or BeanShell, but it is not as direct as the way Gatling does it.

I will start by adding the Gatling sbt plugin to project/plugins.sbt.

addSbtPlugin("io.gatling" % "gatling-sbt" % "3.0.0")

The next step is to change the build.sbt.


version := "0.1"
scalaVersion := "2.12.8"

enablePlugins(GatlingPlugin)

scalacOptions := Seq(
  "-encoding", "UTF-8", "-target:jvm-1.8", "-deprecation",
  "-feature", "-unchecked", "-language:implicitConversions", "-language:postfixOps")
libraryDependencies += "io.gatling.highcharts" % "gatling-charts-highcharts" % "3.1.2" % "test,it"
libraryDependencies += "io.gatling"            % "gatling-test-framework"    % "3.1.2" % "test,it"

The above is no different from what you can find on the official site when it comes to sbt setup and Gatling.

Our next step is to add a simple HTTP test. Be aware that you should place it under src/test or src/it, since the sbt dependencies above are scoped to the test and it configurations and therefore only take effect on those directories.

I shall put this test in src/test/scala/com/gkatzioura/BasicSimulation.scala.

package com.gkatzioura

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class BasicSimulation extends Simulation {

  val httpConf = http.baseUrl("http://yourapi.com")
      .doNotTrackHeader("1")

  val scn = scenario("BasicSimulation")
    .exec(http("request_1")
    .get("/"))
    .pause(5)

  setUp(scn.inject(atOnceUsers(1))).protocols(httpConf)
}

Afterwards testing is simple. You enter the sbt shell and execute the tests.

sbt
>gatling:testOnly com.gkatzioura.BasicSimulation
>gatling:test

The first command runs only the simulation specified; the second one runs every simulation.

That’s it! Pretty simple.

A guide to the InfluxDBMapper and QueryBuilder for Java: Into and Order

Previously we used the group by statement extensively in order to execute complex aggregation queries.

In this tutorial we are going to have a look at ‘into’ statements and the ‘order by’ clause.

Apart from inserting or selecting data, we might also want to persist the results of one query into another measurement. The use cases for something like this vary; for example, you might have a complex operation that cannot be executed in a single query.

Before we continue, make sure you have an InfluxDB instance up and running.

The most common action with an into query would be to populate a measurement with the results of a previous query.

Let’s copy a database.

Query query = select()
                .into("\"copy_NOAA_water_database\".\"autogen\".:MEASUREMENT")
                .from(DATABASE, "\"NOAA_water_database\".\"autogen\"./.*/")
                .groupBy(new RawText("*"));

The result of this query will be to copy every measurement of NOAA_water_database into the copy_NOAA_water_database database, keeping the measurement names.

SELECT * INTO "copy_NOAA_water_database"."autogen".:MEASUREMENT FROM "NOAA_water_database"."autogen"./.*/ GROUP BY *;

Now let’s just copy a column into another table.

        Query query = select().column("water_level")
                               .into("h2o_feet_copy_1")
                               .from(DATABASE,"h2o_feet")
                               .where(eq("location","coyote_creek"));

Below is the query which is going to be executed.

SELECT water_level INTO h2o_feet_copy_1 FROM h2o_feet WHERE location = 'coyote_creek';

Also we can do exactly the same thing with aggregations.

Query query = select()
                .mean("water_level")
                .into("all_my_averages")
                .from(DATABASE,"h2o_feet")
                .where(eq("location","coyote_creek"))
                .and(gte("time","2015-08-18T00:00:00Z"))
                .and(lte("time","2015-08-18T00:30:00Z"))
                .groupBy(time(12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

This generates a query which persists the aggregation result into a measurement.

SELECT MEAN(water_level) INTO all_my_averages FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m);

Order clauses

InfluxDB does provide ordering; however, it is limited to ordering by time.
So we will execute a query with ascending order.

Query query = select().from(DATABASE, "h2o_feet")
                               .where(eq("location","santa_monica"))
                               .orderBy(asc());

And we get the ascending ordering as expected.

SELECT * FROM h2o_feet WHERE location = 'santa_monica' ORDER BY time ASC;

And we shall execute the same query with descending order.

Query query = select().from(DATABASE, "h2o_feet")
                               .where(eq("location","santa_monica"))
                               .orderBy(desc());

Below is the query generated.

SELECT * FROM h2o_feet WHERE location = 'santa_monica' ORDER BY time DESC;

That’s it! We just created new databases and measurements using only the existing data in our database. We also executed some statements specifying the time ordering.
You can find the source code on GitHub.

A guide to the InfluxDBMapper and QueryBuilder for Java: Group By

Previously we executed some selection examples and aggregations against an InfluxDB database. In this tutorial we are going to check the group by functionality that the QueryBuilder provides us with.

Before you start you need to spin up an influxdb instance with the data needed.

Supposing that we want to group by a single tag, we shall use the groupBy function.

        Query query = select().mean("water_level").from(DATABASE, "h2o_feet").groupBy("location");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The query to be executed shall be

SELECT MEAN(water_level) FROM h2o_feet GROUP BY location;

If we want to group by multiple tags we will pass an array of tags.

        Query query = select().mean("index").from(DATABASE,"h2o_feet")
                              .groupBy("location","randtag");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result will be

SELECT MEAN(index) FROM h2o_feet GROUP BY location,randtag;

Another option is to group by all tags.

        Query query = select().mean("index").from(DATABASE,"h2o_feet")
                              .groupBy(raw("*"));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

Below is the query generated.

SELECT MEAN(index) FROM h2o_feet GROUP BY *;

Since InfluxDB is a time series database we have great group by functionality based on time.

For example, let’s group query results into 12-minute intervals.

        Query query = select().count("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:00:00Z"))
                              .and(lte("time","2015-08-18T00:30:00Z"))
                              .groupBy(time(12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We get the result

SELECT COUNT(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m);

Group results by 12 minute intervals and location.

        Query query = select().count("water_level").from(DATABASE,"h2o_feet")
                              .where()
                              .and(gte("time","2015-08-18T00:00:00Z"))
                              .and(lte("time","2015-08-18T00:30:00Z"))
                              .groupBy(time(12l,MINUTE),"location");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We get the following query.

SELECT COUNT(water_level) FROM h2o_feet WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m),location;

Let’s get more advanced and group query results into 18-minute intervals while shifting the preset time boundaries forward.

        Query query = select().mean("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:06:00Z"))
                              .and(lte("time","2015-08-18T00:54:00Z"))
                              .groupBy(time(18l,MINUTE,6l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

Below is the query generated.

SELECT MEAN(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:06:00Z' AND time <= '2015-08-18T00:54:00Z' GROUP BY time(18m,6m);

Or group query results into 18-minute intervals and shift the preset time boundaries back by 12 minutes.

        Query query = select().mean("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:06:00Z"))
                              .and(lte("time","2015-08-18T00:54:00Z"))
                              .groupBy(time(18l,MINUTE,-12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result would be

SELECT MEAN(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:06:00Z' AND time <= '2015-08-18T00:54:00Z' GROUP BY time(18m,-12m);

Finally, we can group by and fill.

        Query query = select()
                .column("water_level")
                .from(DATABASE, "h2o_feet")
                .where(gt("time", op(ti(24043524l, MINUTE), SUB, ti(6l, MINUTE))))
                .groupBy("water_level")
                .fill(100);
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result would be

SELECT water_level FROM h2o_feet WHERE time > 24043524m - 6m GROUP BY water_level fill(100);

That’s it! We just ran some fairly complex group by queries against our InfluxDB database. The QueryBuilder makes it possible to create these queries using only Java.
You can find the source code on GitHub.

A guide to the InfluxDBMapper and QueryBuilder for Java Part: 2

Previously we set up an InfluxDB instance running through Docker and we also ran our first InfluxDBMapper code against an InfluxDB database.

The next step is to execute some queries against influxdb using the QueryBuilder combined with the InfluxDBMapper.

Let’s get started and select everything from the h2o_feet measurement, mapping the results to the H2OFeetMeasurement class.

private static final String DATABASE = "NOAA_water_database";

public static void main(String[] args) {
    InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "root", "root");

    InfluxDBMapper influxDBMapper = new InfluxDBMapper(influxDB);

    Query query = select().from(DATABASE,"h2o_feet");
    List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);
}

Let’s get more specific: we will select measurements with a water level higher than 8.

        Query query = select().from(DATABASE,"h2o_feet").where(gt("water_level",8));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> higherThanMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

I bet you noticed the query.getCommand() detail. If you want to see the actual query that is being executed, you can call the getCommand() method on the query.

Apart from where statements we can perform certain operations on fields such as calculations.

        Query query = select().op(op(cop("water_level",MUL,2),"+",4)).from(DATABASE,"h2o_feet");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We just used the cop function to multiply the water level by 2. The cop function creates a clause which applies an operation to a column. Then we add 4 to the result of the previous operation by using the op function. The op function creates a clause which applies an operation to the two arguments given.
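
If everything is in place, the generated query should look roughly like the one below (based on the operators used; the exact quoting may differ).

SELECT (water_level * 2) + 4 FROM h2o_feet;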

The next case is to select using a specific string field key-value.

        Query query = select().from(DATABASE,"h2o_feet").where(eq("location","santa_monica"));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

Things can get even more specific and select data that have specific field key-values and tag key-values.

        Query query = select().column("water_level").from(DATABASE,"h2o_feet")
                              .where(neq("location","santa_monica"))
                              .andNested()
                              .and(lt("water_level",-0.59))
                              .or(gt("water_level",9.95))
                              .close();
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

Since InfluxDB is a time series database, it is essential to issue queries with specific timestamps.

        Query query = select().from(DATABASE,"h2o_feet")
                              .where(gt("time",subTime(7,DAY)));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);
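
This should generate a query roughly like the following, with the time condition expressed relative to now().

SELECT * FROM h2o_feet WHERE time > now() - 7d;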

Last but not least we can make a query for specific fields. I will create a model just for the fields that we are going to retrieve.

package com.gkatzioura.mapper.showcase;

import java.time.Instant;
import java.util.concurrent.TimeUnit;

import org.influxdb.annotation.Column;
import org.influxdb.annotation.Measurement;

@Measurement(name = "h2o_feet", timeUnit = TimeUnit.SECONDS)
public class LocationWithDescription {

    @Column(name = "time")
    private Instant time;

    @Column(name = "level description")
    private String levelDescription;

    @Column(name = "location")
    private String location;

    public Instant getTime() {
        return time;
    }

    public void setTime(Instant time) {
        this.time = time;
    }

    public String getLevelDescription() {
        return levelDescription;
    }

    public void setLevelDescription(String levelDescription) {
        this.levelDescription = levelDescription;
    }

    public String getLocation() {
        return location;
    }

    public void setLocation(String location) {
        this.location = location;
    }
}

And now I shall query for them.

Query selectFields = select("level description","location").from(DATABASE,"h2o_feet");
List<LocationWithDescription> locationWithDescriptions = influxDBMapper.query(selectFields, LocationWithDescription.class);

As you can see, we can also map just certain fields to a model. For now, mapping to models can be done only when the data comes from a single measurement. We shall proceed with more QueryBuilder-specific examples next time.

You can find the source code on github.

A guide to the InfluxDBMapper and QueryBuilder for Java Part: 1

With the release of the latest influxdb-java driver version came the InfluxDBMapper.

To get started we need to spin up an InfluxDB instance, and Docker is the easiest way to do so. We just follow the steps described here.

Now we have a database with some data and we are ready to execute our queries.

We have the measurement h2o_feet.

> SELECT * FROM "h2o_feet"

name: h2o_feet
--------------
time                   level description      location       water_level
2015-08-18T00:00:00Z   below 3 feet           santa_monica   2.064
2015-08-18T00:00:00Z   between 6 and 9 feet   coyote_creek   8.12
[...]
2015-09-18T21:36:00Z   between 3 and 6 feet   santa_monica   5.066
2015-09-18T21:42:00Z   between 3 and 6 feet   santa_monica   4.938

So we shall create a model for that.

package com.gkatzioura.mapper.showcase;

import java.time.Instant;
import java.util.concurrent.TimeUnit;

import org.influxdb.annotation.Column;
import org.influxdb.annotation.Measurement;

@Measurement(name = "h2o_feet", database = "NOAA_water_database", timeUnit = TimeUnit.SECONDS)
public class H2OFeetMeasurement {

    @Column(name = "time")
    private Instant time;

    @Column(name = "level description")
    private String levelDescription;

    @Column(name = "location")
    private String location;

    @Column(name = "water_level")
    private Double waterLevel;

    public Instant getTime() {
        return time;
    }

    public void setTime(Instant time) {
        this.time = time;
    }

    public String getLevelDescription() {
        return levelDescription;
    }

    public void setLevelDescription(String levelDescription) {
        this.levelDescription = levelDescription;
    }

    public String getLocation() {
        return location;
    }

    public void setLocation(String location) {
        this.location = location;
    }

    public Double getWaterLevel() {
        return waterLevel;
    }

    public void setWaterLevel(Double waterLevel) {
        this.waterLevel = waterLevel;
    }
}

And then we shall fetch all the entries of the h2o_feet measurement.

package com.gkatzioura.mapper.showcase;

import java.util.List;
import java.util.logging.Logger;

import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.impl.InfluxDBImpl;
import org.influxdb.impl.InfluxDBMapper;

public class InfluxDBMapperShowcase {

    private static final Logger LOGGER = Logger.getLogger(InfluxDBMapperShowcase.class.getName());

    public static void main(String[] args) {

        InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "root", "root");

        InfluxDBMapper influxDBMapper = new InfluxDBMapper(influxDB);
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(H2OFeetMeasurement.class);

    }
}

After successfully fetching the data, we will continue with persisting data.


        H2OFeetMeasurement h2OFeetMeasurement = new H2OFeetMeasurement();
        h2OFeetMeasurement.setTime(Instant.now());
        h2OFeetMeasurement.setLevelDescription("Just a test");
        h2OFeetMeasurement.setLocation("London");
        h2OFeetMeasurement.setWaterLevel(1.4d);

        influxDBMapper.save(h2OFeetMeasurement);

        List<H2OFeetMeasurement> measurements = influxDBMapper.query(H2OFeetMeasurement.class);

        H2OFeetMeasurement h2OFeetMeasurement1 = measurements.get(measurements.size()-1);
        assert h2OFeetMeasurement1.getLevelDescription().equals("Just a test");

Obviously, fetching all the measurements just to get the last entry is not the most efficient thing to do. In the upcoming tutorials we are going to see how to use the InfluxDBMapper with more advanced InfluxDB queries.
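
As a small taste of what is coming, a sketch like the one below could fetch only the latest point instead of the whole measurement; it assumes the QueryBuilder helpers (select, orderBy, desc and limit) covered in the follow-up posts.

        // Sketch only: fetch just the most recent h2o_feet point instead of everything.
        Query latestQuery = select().from("NOAA_water_database", "h2o_feet")
                                    .orderBy(desc())
                                    .limit(1);
        List<H2OFeetMeasurement> latest = influxDBMapper.query(latestQuery, H2OFeetMeasurement.class);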

Spin up an InfluxDB instance with Docker for testing.

It is a reality that we tend to make things harder than they need to be when we try to set up and connect to various databases.
Since Docker came out, things have become a lot easier.

Most databases, like MongoDB, InfluxDB etc., come with the binaries needed to spin up the database but also with the clients needed in order to connect. It is pretty much becoming a standard.

We will showcase this by using InfluxDB’s Docker image and the sample data walkthrough.

Let’s start with spinning up the instance.

docker run --rm -p 8086:8086 --name influxdb-local influxdb

We have an InfluxDB instance running on port 8086 under the name influxdb-local. Since we used the --rm flag, once the container is stopped it will also be deleted.

The first step is to connect to an InfluxDB shell and interact with the database.

docker exec -it influxdb-local influx
> CREATE DATABASE NOAA_water_database
> exit

Now let’s import some data

docker exec -it influxdb-local /bin/bash
curl https://s3.amazonaws.com/noaa.water-database/NOAA_data.txt -o NOAA_data.txt
influx -import -path=NOAA_data.txt -precision=s -database=NOAA_water_database
rm NOAA_data.txt

Next step is to connect to the shell and query some data.

docker exec -it influxdb-local influx -precision rfc3339 -database NOAA_water_database
Connected to http://localhost:8086 version 1.4.x
InfluxDB shell 1.4.x
> SHOW measurements
name: measurements
name
----
average_temperature
h2o_feet
h2o_pH
h2o_quality
h2o_temperature
>

As you can see, we just created an InfluxDB instance with data, ready to execute queries and run some tests! Pretty simple and clean. Once we are done, stopping the container removes all the data along with the container itself.

Docker basics: Docker compose

Docker Compose is a tool that allows you to run multi-container applications. With Compose we can use YAML files to configure our application’s services and then, with a single command, create and start all of the configured services. I use this tool a lot when it comes to local development in a microservice environment. It is also lightweight and requires only a small effort. Instead of managing how to run each service while developing, you can have the needed environment and services preconfigured and focus on the service you are currently developing.

With Docker Compose we can configure networks for our services, volumes, mount points, environment variables: just about everything.
To showcase this, we are going to solve a problem. Our goal is to extract data from MongoDB using Grafana. Grafana does not have out-of-the-box support for MongoDB, therefore we will have to use a plugin.

As a first step we shall create our networks. Creating a network is not strictly necessary, since your services, once started, will join the default network; we do it to showcase custom networks. We shall have a network for backend services and a network for frontend services. Network configuration can get more advanced: you can specify custom network drivers or even configure static addresses.

version: '3.5'

networks:
  frontend:
    name: frontend-network
  backend:
    name: backend-network
    internal: true

The backend network is going to be internal therefore there won’t be any outbound connectivity to the containers attached to it.

Then we shall setup our mongodb instance.

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    volumes:
      - ${DB_PATH}:/data/db
    networks:
      - backend

As you see, we specified a volume. Volumes can also be declared separately in the Compose file and then attached to a service.
We also used environment variables for the root account; you might have spotted that the credentials are going to be provided through environment variables, i.e. MONGO_USER=root MONGO_PASSWORD=root docker-compose -f stack.yaml up. The same applies to the volume path too. You can have a more advanced configuration for volumes in your Compose file and reference them from your services.

Our next goal is to set up the proxy server which shall sit between our Grafana and MongoDB servers. Since it needs a custom Dockerfile, we shall build it through docker-compose. Compose has the capability to spin up a service by building it from a specified Dockerfile.

So let’s start with the Dockerfile.

FROM node

WORKDIR /usr/src/mongografanaproxy

COPY . /usr/src/mongografanaproxy

EXPOSE 3333

RUN cd /usr/src/mongografanaproxy
RUN npm install
ENTRYPOINT ["npm","run","server"]

Then let’s add it to Compose.

version: '3.5'

services:
  mongo-proxy:
    build:
      context: .
      dockerfile: ProxyDockerfile
    restart: always
    networks:
      - backend

And the same shall be done for the Grafana image that we want to use. Instead of using a stock Grafana image, we shall create one with the plugin preinstalled.

FROM grafana/grafana

COPY . /var/lib/grafana/plugins/mongodb-grafana

EXPOSE 3000

version: '3.5'

services:
  grafana:
    build:
      context: .
      dockerfile: GrafanaDockerfile
    restart: always
    ports:
      - 3000:3000
    networks:
      - backend
      - frontend

Let’s wrap them all together

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    volumes:
      - ${DB_PATH}:/data/db
    networks:
      - backend
  mongo-proxy:
    build:
      context: .
      dockerfile: ProxyDockerfile
    restart: always
    networks:
      - backend
  grafana:
    build:
      context: .
      dockerfile: GrafanaDockerfile
    restart: always
    ports:
      - 3000:3000
    networks:
      - backend
      - frontend
networks:
  frontend:
    name: frontend-network
  backend:
    name: backend-network
    internal: true

So let’s run them all together.

docker-compose -f stack.yaml build
MONGO_USER=root MONGO_PASSWORD=root DB_PATH=~/grafana-mongo  docker-compose -f stack.yaml up

The above can be found on GitHub.

You might also find the Docker Images, Docker Containers and Docker Registry posts useful.

Behavioural Design Patterns: Visitor

The last of the behavioural design patterns we are going to cover is the visitor pattern.

We use the visitor pattern when we want to make it possible to define a new operation for classes of an object structure without changing the classes.

Imagine the scenario of software that executes HTTP requests against an API. Most HTTP APIs out there have certain limits and allow a specific number of requests to be executed per minute. We might have different classes that execute requests and also take into consideration the business logic of the APIs they interact with.
In case we want to inspect those calls and print some information or persist request-related information to the database, the visitor pattern might be a good fit.

We will start with the visitor interface.

package com.gkatzioura.design.behavioural.visitor;

public interface Visitor {
}

This interface does not specify any methods; however, interfaces which extend it will contain visit methods for the specific types to visit. We do this in order to be able to have loosely coupled visitor implementations (or even composition-based visitors).

Then we shall implement the visitable interface.

package com.gkatzioura.design.behavioural.visitor;

public interface Visitable<T extends Visitor> {

     void accept(T visitor);

}

Based on the above we shall create our request execution classes which are visitable.

package com.gkatzioura.design.behavioural.visitor;

public class LocationRequestExecutor implements Visitable<LocationVisitor> {

    private int successfulRequests = 0;
    private double requestsPerMinute = 0.0;

    public void executeRequest() {
        /**
         * Execute the request and change the successfulRequests and requestsPerMinute value
         */
    }

    @Override
    public void accept(LocationVisitor visitor) {
        visitor.visit(this);
    }

    public int getSuccessfulRequests() {
        return successfulRequests;
    }

    public double getRequestsPerMinute() {
        return requestsPerMinute;
    }

}

package com.gkatzioura.design.behavioural.visitor;

public class RouteRequestExecutor implements Visitable<RouteVisitor> {

    private int successfulRequests = 0;
    private double requestsPerMinute = 0.0;

    public void executeRequest() {
        /**
         * Execute the request and change the successfulRequests and requestsPerMinute value
         */
    }

    @Override
    public void accept(RouteVisitor visitor) {
        visitor.visit(this);
    }

    public int getSuccessfulRequests() {
        return successfulRequests;
    }

    public double getRequestsPerMinute() {
        return requestsPerMinute;
    }
}

And then we shall add the visitor interfaces for these types of executors.

package com.gkatzioura.design.behavioural.visitor;

public interface LocationVisitor extends Visitor {

    void visit(LocationRequestExecutor locationRequestExecutor);
}

package com.gkatzioura.design.behavioural.visitor;

public interface RouteVisitor extends Visitor {

    void visit(RouteRequestExecutor routeRequestExecutor);
}

The last step would be to create a visitor that implements the above interfaces.

package com.gkatzioura.design.behavioural.visitor;

public class RequestVisitor implements LocationVisitor, RouteVisitor {

    @Override
    public void visit(LocationRequestExecutor locationRequestExecutor) {
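        // This is where inspection logic would go, for example logging or persisting
        // locationRequestExecutor.getSuccessfulRequests() and getRequestsPerMinute().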

    }

    @Override
    public void visit(RouteRequestExecutor routeRequestExecutor) {
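        // Likewise, inspect routeRequestExecutor here, for example print or persist its request statistics.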

    }
}

So let’s put them all together.

package com.gkatzioura.design.behavioural.visitor;

public class VisitorMain {

    public static void main(String[] args) {
        final LocationRequestExecutor locationRequestExecutor = new LocationRequestExecutor();
        final RouteRequestExecutor routeRequestExecutor = new RouteRequestExecutor();
        final RequestVisitor requestVisitor = new RequestVisitor();

        locationRequestExecutor.accept(requestVisitor);
        routeRequestExecutor.accept(requestVisitor);
    }
}

That’s it! You can find the source code on GitHub.

Upload and download files to S3 using Maven.

Throughout the years I’ve seen many teams using Maven in many different ways. Maven can be used for many CI/CD tasks instead of extra pipeline code, or it can be used to prepare the development environment before running some tests.
Generally it is a convenient tool, widely used among Java teams, and it will continue to be since there is a huge ecosystem around it.

The CloudStorage Maven plugin helps you use various cloud buckets as a private Maven repository. Recently CloudStorageMaven for S3 got a huge upgrade, and you can use it as a plugin in order to download files from or upload files to S3.

The plugin assumes that your environment is configured properly to access the S3 resources needed.
This can be achieved individually through aws configure.

aws configure

Other ways are through environment variables or by using the appropriate IAM role.
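
For example, assuming the plugin picks up the standard AWS SDK environment variables, an invocation could look roughly like the following.

AWS_ACCESS_KEY_ID=your-access-key AWS_SECRET_ACCESS_KEY=your-secret-key mvn package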

Suppose you want to download certain files from a path in S3.

<build>
        <plugins>
            <plugin>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>s3-storage-wagon</artifactId>
                <version>1.6</version>
                <executions>
                    <execution>
                        <id>download-one</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/local/download/path</downloadPath>
                            <keys>1.txt,2.txt,directory/3.txt</keys>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
</build>

Once the execution is finished, the files 1.txt, 2.txt and directory/3.txt shall reside in the local directory specified (/local/download/path).
Be aware that file discovery on S3 is done by prefix; thus if you have the files 1.txt and 1.txt.jpg, both files shall be downloaded.

You can also download a single key to a specific local file, as long as the mapping is one to one.

                    <execution>
                        <id>download-prefix</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/path/to/local/your-file.txt</downloadPath>
                            <keys>a-key-to-download.txt</keys>
                        </configuration>
                    </execution>

Files under a prefix that contains directories (they are fake ones on S3) will be downloaded to the specified directory in the form of directories and subdirectories.

                    <execution>
                        <id>download-prefix</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/path/to/local/</downloadPath>
                            <keys>s3-prefix</keys>
                        </configuration>
                    </execution>

The next part is about uploading files to s3.

Uploading one file

                    <execution>
                        <id>upload-one</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/your-file.txt</path>
                            <key>key-to-download.txt</key>
                        </configuration>
                    </execution>

Upload a directory

                    <execution>
                        <id>upload-one</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/directory</path>
                            <key>prefix</key>
                        </configuration>
                    </execution>

Upload to the root of the bucket.

                    <execution>
                        <id>upload-multiples-files-no-key</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/directory</path>
                        </configuration>
                    </execution>

That’s it! Since it is an open source project you can contribute or issue pull requests on GitHub.

Behavioural Design Patterns: Template method

Previously we used the strategy pattern in order to solve the problem of choosing among various speeding algorithms based on the road type. The next behavioural design pattern we are going to use is the template method.
By using the template method we define the skeleton of the algorithm and the implementation of certain steps is done by subclasses.

Thus we have methods with concrete implementations, and methods without any implementation. The latter will be implemented based on the application logic that needs to be achieved.

Imagine the case of a coffee machine. There are many types of coffee and different ways to make them; however, some steps are common and other steps, although they vary, also need to be implemented. Processing the beans, boiling and processing the milk are all actions that differ based on the type of coffee. Pouring into a cup and serving, however, are actions that do not differ.

package com.gkatzioura.design.behavioural.template;

public abstract class CoffeeMachineTemplate {

    protected abstract void processBeans();

    protected abstract void processMilk();

    protected abstract void boil();

    public void pourToCup() {
        /**
         * pour to various cups based on the size
         */
    }

    public void serve() {
        processBeans();
        boil();
        processMilk();
        pourToCup();
    }

}

Then we shall add an implementation for the espresso. So here’s our espresso machine.

package com.gkatzioura.design.behavioural.template;

public class EspressoMachine extends CoffeeMachineTemplate {

    @Override
    protected void processBeans() {
        /**
         * Grind the beans
         */
    }

    @Override
    protected void processMilk() {
        /**
         * Use milk to create leaf art
         */
    }

    @Override
    protected void boil() {
        /**
         * Mix water and beans
         */
    }
}
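
To see the template in action, here is a minimal usage sketch with a hypothetical CoffeeMachineMain class; the serve method drives the fixed sequence of steps while EspressoMachine fills in the variable ones.

package com.gkatzioura.design.behavioural.template;

public class CoffeeMachineMain {

    public static void main(String[] args) {
        // serve() is the template method: it runs the fixed skeleton,
        // while EspressoMachine provides the espresso specific steps.
        CoffeeMachineTemplate coffeeMachine = new EspressoMachine();
        coffeeMachine.serve();
    }
}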

As you can see, we can create various coffee machines no matter how different some steps might be.
You can find the source code on GitHub.