Spring Boot & Hibernate: Print queries and variables

It’s late in the office and you are stuck with some strange JPA code full of JoinColumns and cascades, and you cannot find what goes wrong. You wish there was a way to view the queries that get printed, along with the actual values.
With a little tweaking to your Spring Boot application this is possible.

 

With the help of Lombok, here is our JPA model (@Data generates the getters, setters, equals, hashCode and toString for us).

package com.gkatzioura.hibernatelog.dao;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

import lombok.Data;

@Data
@Entity
@Table(name = "application_user")
public class ApplicationUser {

    @Id
    private Long id;

    private String username;

    private String password;

}

Its repository

package com.gkatzioura.hibernatelog.dao;

import org.springframework.data.repository.CrudRepository;

public interface ApplicationUserRepository extends CrudRepository<ApplicationUser, Long> {
}

A not found exception

package com.gkatzioura.hibernatelog.controller;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

@ResponseStatus(value = HttpStatus.NOT_FOUND)
class ApplicationUserNotFoundException extends RuntimeException {

    public ApplicationUserNotFoundException() {
    }

    public ApplicationUserNotFoundException(String message) {
        super(message);
    }

    public ApplicationUserNotFoundException(String message, Throwable cause) {
        super(message, cause);
    }

    public ApplicationUserNotFoundException(Throwable cause) {
        super(cause);
    }

    public ApplicationUserNotFoundException(String message, Throwable cause, boolean enableSuppression, boolean writableStackTrace) {
        super(message, cause, enableSuppression, writableStackTrace);
    }
}

And a controller

package com.gkatzioura.hibernatelog.controller;

import java.util.Optional;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

import com.gkatzioura.hibernatelog.dao.ApplicationUser;
import com.gkatzioura.hibernatelog.dao.ApplicationUserRepository;

@RestController
public class ApplicationUserController {

    private final ApplicationUserRepository applicationUserRepository;

    public ApplicationUserController(ApplicationUserRepository applicationUserRepository) {
        this.applicationUserRepository = applicationUserRepository;
    }

    @GetMapping("/user/{id}")
    @ResponseBody
    public ApplicationUser getApplicationUser(@PathVariable Long id) {
        Optional<ApplicationUser> applicationUser = applicationUserRepository.findById(id);
        if(applicationUser.isPresent()) {
            return applicationUser.get();
        } else {
            throw new ApplicationUserNotFoundException();
        }
    }

}
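
As a side note, since findById returns an Optional, the lookup above can be written more compactly with orElseThrow; an equivalent sketch of the method body:

        return applicationUserRepository.findById(id)
                .orElseThrow(ApplicationUserNotFoundException::new);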

By adding the following to application.yaml we ensure that Hibernate creates the table, that the executed queries are logged and formatted, and that the actual parameter values are displayed.

spring:
  jpa:
    hibernate:
      ddl-auto: create
    properties:
      hibernate:
        show_sql: true
        use_sql_comments: true
        format_sql: true
logging:
  level:
    org:
      hibernate:
        type: trace

Just

curl http://localhost:8080/user/1

And you get your logs.

Run your first Gatling load test using Scala

Gatling is a neat tool. You can create your load tests by just coding in Scala. JMeter allows you to do something similar through a plugin or BeanShell, but it is not as direct as the way Gatling does it.

I will start by adding the Gatling sbt plugin to project/plugins.sbt

addSbtPlugin("io.gatling" % "gatling-sbt" % "3.0.0")

The next step is to change build.sbt


version := "0.1"
scalaVersion := "2.12.8"

enablePlugins(GatlingPlugin)

scalacOptions := Seq(
  "-encoding", "UTF-8", "-target:jvm-1.8", "-deprecation",
  "-feature", "-unchecked", "-language:implicitConversions", "-language:postfixOps")
libraryDependencies += "io.gatling.highcharts" % "gatling-charts-highcharts" % "3.1.2" % "test,it"
libraryDependencies += "io.gatling"            % "gatling-test-framework"    % "3.1.2" % "test,it"

The above is no different from what you can find on the official Gatling site when it comes to sbt setup.

Our next step is to add a simple http test. Be aware that you should add it under src/test or src/it, since the sbt dependencies above are scoped to the test and it configurations and only take effect on those directories.

I shall put this test in src/test/scala/com/gkatzioura/BasicSimulation.scala

package com.gkatzioura

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class BasicSimulation extends Simulation {

  val httpConf = http.baseUrl("http://yourapi.com")
      .doNotTrackHeader("1")

  val scn = scenario("BasicSimulation")
    .exec(http("request_1")
    .get("/"))
    .pause(5)

  setUp(scn.inject(atOnceUsers(1))).protocols(httpConf)
}

Afterwards testing is simple. You enter the sbt shell and execute the tests.

sbt
>gatling:testOnly com.gkatzioura.BasicSimulation
>gatling:test

The first command runs just the one simulation; the second one runs everything.
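
The same tasks can also be run non-interactively, straight from the command line:

sbt gatling:test
sbt "gatling:testOnly com.gkatzioura.BasicSimulation"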

That’s it! Pretty simple.

A guide to the InfluxDBMapper and QueryBuilder for Java: Into and Order

Previously we used the group by statement extensively in order to execute complex aggregation queries.

In this tutorial we are going to have a look at ‘into’ statements and the ‘order by’ clause.

Apart from inserting or selecting data, we might also want to persist the results of one query into another measurement. The use cases for something like this vary; for example, you might have a complex operation that cannot be executed in one single query.

Before we continue, make sure you have an influxdb instance up and running.

The most common action with an into query would be to populate a measurement with the results of a previous query.

Let’s copy a database.

Query query = select()
                .into("\"copy_NOAA_water_database\".\"autogen\".:MEASUREMENT")
                .from(DATABASE, "\"NOAA_water_database\".\"autogen\"./.*/")
                .groupBy(new RawText("*"));

The result of this query will be to copy every measurement of the NOAA_water_database into the copy_NOAA_water_database database.

SELECT * INTO "copy_NOAA_water_database"."autogen".:MEASUREMENT FROM "NOAA_water_database"."autogen"./.*/ GROUP BY *;

Now let’s just copy a column into another table.

        Query query = select().column("water_level")
                               .into("h2o_feet_copy_1")
                               .from(DATABASE,"h2o_feet")
                               .where(eq("location","coyote_creek"));

Below is the query which is going to be executed.

SELECT water_level INTO h2o_feet_copy_1 FROM h2o_feet WHERE location = 'coyote_creek';

Also we can do exactly the same thing with aggregations.

Query query = select()
                .mean("water_level")
                .into("all_my_averages")
                .from(DATABASE,"h2o_feet")
                .where(eq("location","coyote_creek"))
                .and(gte("time","2015-08-18T00:00:00Z"))
                .and(lte("time","2015-08-18T00:30:00Z"))
                .groupBy(time(12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

And generate a query which persists the aggregation result into a table.

SELECT MEAN(water_level) INTO all_my_averages FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m);

Order clauses

InfluxDB does provide ordering, however it is limited to time only.
So we will execute a query with ascending order.

Query query = select().from(DATABASE, "h2o_feet")
                               .where(eq("location","santa_monica"))
                               .orderBy(asc());

And we get the ascending ordering as expected.

SELECT * FROM h2o_feet WHERE location = 'santa_monica' ORDER BY time ASC;

And we shall execute the same query with descending order.

Query query = select().from(DATABASE, "h2o_feet")
                               .where(eq("location","santa_monica"))
                               .orderBy(desc());

And the query generated shall be

SELECT * FROM h2o_feet WHERE location = 'santa_monica' ORDER BY time DESC;

That’s it! We just created new databases and measurements using only the existing data in our database. We also executed some statements where we specified the time ordering.
You can find the source code on GitHub.

A guide to the InfluxDBMapper and QueryBuilder for Java: Group By

Previously we executed some selection examples and aggregations against an InfluxDB database. In this tutorial we are going to check the group by functionality that the Query Builder provides us with.

Before you start you need to spin up an influxdb instance with the data needed.

Supposing that we want to group by a single tag, we shall use the groupBy function.

        Query query = select().mean("water_level").from(DATABASE, "h2o_feet").groupBy("location");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The query to be executed shall be

SELECT MEAN(water_level) FROM h2o_feet GROUP BY location;

If we want to group by multiple tags we will pass an array of tags.

        Query query = select().mean("index").from(DATABASE,"h2o_feet")
                              .groupBy("location","randtag");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result will be

SELECT MEAN(index) FROM h2o_feet GROUP BY location,randtag;

Another option is to group by all tags.

        Query query = select().mean("index").from(DATABASE,"h2o_feet")
                              .groupBy(raw("*"));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The query to be executed shall be

SELECT MEAN(index) FROM h2o_feet GROUP BY *;

Since InfluxDB is a time series database we have great group by functionality based on time.

For example, let’s group query results into 12 minute intervals.

        Query query = select().count("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:00:00Z"))
                              .and(lte("time","2015-08-18T00:30:00Z"))
                              .groupBy(time(12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We get the result

SELECT COUNT(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m);

Group results by 12 minute intervals and location.

        Query query = select().count("water_level").from(DATABASE,"h2o_feet")
                              .where()
                              .and(gte("time","2015-08-18T00:00:00Z"))
                              .and(lte("time","2015-08-18T00:30:00Z"))
                              .groupBy(time(12l,MINUTE),"location");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We get the following query.

SELECT COUNT(water_level) FROM h2o_feet WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m),location;

We will get more advanced and group query results into 18 minute intervals and shift the preset time boundaries forward.

        Query query = select().mean("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:06:00Z"))
                              .and(lte("time","2015-08-18T00:54:00Z"))
                              .groupBy(time(18l,MINUTE,6l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The query to be executed shall be

SELECT MEAN(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:06:00Z' AND time <= '2015-08-18T00:54:00Z' GROUP BY time(18m,6m);

Or group query results into 18 minute intervals and shift the preset time boundaries back.

        Query query = select().mean("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:06:00Z"))
                              .and(lte("time","2015-08-18T00:54:00Z"))
                              .groupBy(time(18l,MINUTE,-12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result would be

SELECT MEAN(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:06:00Z' AND time <= '2015-08-18T00:54:00Z' GROUP BY time(18m,-12m);

Finally, we can group by and fill.

        Query query = select()
                .column("water_level")
                .from(DATABASE, "h2o_feet")
                .where(gt("time", op(ti(24043524l, MINUTE), SUB, ti(6l, MINUTE))))
                .groupBy("water_level")
                .fill(100);
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result would be

SELECT water_level FROM h2o_feet WHERE time > 24043524m - 6m GROUP BY water_level fill(100);

That’s it! We just ran some really complex group by queries against our InfluxDB database. The query builder makes it possible to create these queries using only Java.
You can find the source code on GitHub.

A guide to the InfluxDBMapper and QueryBuilder for Java Part: 2

Previously we set up an InfluxDB instance running through Docker and we also ran our first InfluxDBMapper code against an InfluxDB database.

The next step is to execute some queries against influxdb using the QueryBuilder combined with the InfluxDBMapper.

Let’s get started and select everything from the h2o_feet measurement, mapped by the H2OFeetMeasurement class.

private static final String DATABASE = "NOAA_water_database";

public static void main(String[] args) {
    InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "root", "root");

    InfluxDBMapper influxDBMapper = new InfluxDBMapper(influxDB);

    Query query = select().from(DATABASE,"h2o_feet");
    List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);
}

Let’s get more specific: we will select measurements with a water level higher than 8.

        Query query = select().from(DATABASE,"h2o_feet").where(gt("water_level",8));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> higherThanMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

I bet you noticed the query.getCommand() detail. If you want to see the actual query that is being executed you can call the getCommand() method from the query.

Apart from where statements we can perform certain operations on fields such as calculations.

        Query query = select().op(op(cop("water_level",MUL,2),"+",4)).from(DATABASE,"h2o_feet");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We just used the cop function to multiply the water level by 2. The cop function creates a clause which executes an operation on a column. Then we increment the product of the previous operation by 4 using the op function. The op function creates a clause which executes an operation on the two arguments given.
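
Following the pattern of the rest of this series, the statement above should generate a query along these lines (a sketch based on the operators used, not output copied from a run):

SELECT (water_level * 2) + 4 FROM h2o_feet;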

The next case is to select using a specific string field key-value.

        Query query = select().from(DATABASE,"h2o_feet").where(eq("location","santa_monica"));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

Things can get even more specific: we can select data that have specific field key-values and tag key-values.

        Query query = select().column("water_level").from(DATABASE,"h2o_feet")
                              .where(neq("location","santa_monica"))
                              .andNested()
                              .and(lt("water_level",-0.59))
                              .or(gt("water_level",9.95))
                              .close();
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

Since influxdb is a time series database it is essential to issue queries with specific timestamps.

        Query query = select().from(DATABASE,"h2o_feet")
                              .where(gt("time",subTime(7,DAY)));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);
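
Assuming subTime renders a relative offset from now(), the generated query should look roughly like the following (again a sketch, not captured output):

SELECT * FROM h2o_feet WHERE time > now() - 7d;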

Last but not least we can make a query for specific fields. I will create a model just for the fields that we are going to retrieve.

package com.gkatzioura.mapper.showcase;

import java.time.Instant;
import java.util.concurrent.TimeUnit;

import org.influxdb.annotation.Column;
import org.influxdb.annotation.Measurement;

@Measurement(name = "h2o_feet", timeUnit = TimeUnit.SECONDS)
public class LocationWithDescription {

    @Column(name = "time")
    private Instant time;

    @Column(name = "level description")
    private String levelDescription;

    @Column(name = "location")
    private String location;

    public Instant getTime() {
        return time;
    }

    public void setTime(Instant time) {
        this.time = time;
    }

    public String getLevelDescription() {
        return levelDescription;
    }

    public void setLevelDescription(String levelDescription) {
        this.levelDescription = levelDescription;
    }

    public String getLocation() {
        return location;
    }

    public void setLocation(String location) {
        this.location = location;
    }
}

And now I shall query for them.

Query selectFields = select("level description","location").from(DATABASE,"h2o_feet");
List<LocationWithDescription> locationWithDescriptions = influxDBMapper.query(selectFields, LocationWithDescription.class);

As you can see we can also map only certain fields to a model. For now, mapping to models can be done only when the data comes from a single measurement. Thus we shall proceed with more query-builder-specific examples next time.

You can find the source code on github.

A guide to the InfluxDBMapper and QueryBuilder for Java Part: 1

With the release of the latest influxdb-java driver version came the InfluxDBMapper.

To get started we need to spin up an influxdb instance, and docker is the easiest way to do so. We just follow the steps as described here.

Now we have a database with some data and we are ready to execute our queries.

We have the measurement h2o_feet

> SELECT * FROM "h2o_feet"

name: h2o_feet
--------------
time                   level description      location       water_level
2015-08-18T00:00:00Z   below 3 feet           santa_monica   2.064
2015-08-18T00:00:00Z   between 6 and 9 feet   coyote_creek   8.12
[...]
2015-09-18T21:36:00Z   between 3 and 6 feet   santa_monica   5.066
2015-09-18T21:42:00Z   between 3 and 6 feet   santa_monica   4.938

So we shall create a model for that.

package com.gkatzioura.mapper.showcase;

import java.time.Instant;
import java.util.concurrent.TimeUnit;

import org.influxdb.annotation.Column;
import org.influxdb.annotation.Measurement;

@Measurement(name = "h2o_feet", database = "NOAA_water_database", timeUnit = TimeUnit.SECONDS)
public class H2OFeetMeasurement {

    @Column(name = "time")
    private Instant time;

    @Column(name = "level description")
    private String levelDescription;

    @Column(name = "location")
    private String location;

    @Column(name = "water_level")
    private Double waterLevel;

    public Instant getTime() {
        return time;
    }

    public void setTime(Instant time) {
        this.time = time;
    }

    public String getLevelDescription() {
        return levelDescription;
    }

    public void setLevelDescription(String levelDescription) {
        this.levelDescription = levelDescription;
    }

    public String getLocation() {
        return location;
    }

    public void setLocation(String location) {
        this.location = location;
    }

    public Double getWaterLevel() {
        return waterLevel;
    }

    public void setWaterLevel(Double waterLevel) {
        this.waterLevel = waterLevel;
    }
}

And then we shall fetch all the entries of the h2o_feet measurement.

package com.gkatzioura.mapper.showcase;

import java.util.List;
import java.util.logging.Logger;

import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.impl.InfluxDBImpl;
import org.influxdb.impl.InfluxDBMapper;

public class InfluxDBMapperShowcase {

    private static final Logger LOGGER = Logger.getLogger(InfluxDBMapperShowcase.class.getName());

    public static void main(String[] args) {

        InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "root", "root");

        InfluxDBMapper influxDBMapper = new InfluxDBMapper(influxDB);
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(H2OFeetMeasurement.class);

    }
}

After successfully fetching the data we will continue with persisting data.


        H2OFeetMeasurement h2OFeetMeasurement = new H2OFeetMeasurement();
        h2OFeetMeasurement.setTime(Instant.now());
        h2OFeetMeasurement.setLevelDescription("Just a test");
        h2OFeetMeasurement.setLocation("London");
        h2OFeetMeasurement.setWaterLevel(1.4d);

        influxDBMapper.save(h2OFeetMeasurement);

        List<H2OFeetMeasurement> measurements = influxDBMapper.query(H2OFeetMeasurement.class);

        H2OFeetMeasurement h2OFeetMeasurement1 = measurements.get(measurements.size()-1);
        assert h2OFeetMeasurement1.getLevelDescription().equals("Just a test");

Obviously, fetching all the measurements just to get the last entry is not the most efficient thing to do. In the upcoming tutorials we are going to see how to use the InfluxDBMapper with more advanced InfluxDB queries.
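
As a teaser, the query builder's ordering and limit helpers (shown elsewhere in this series) can already do better than fetching everything. A rough sketch, assuming the select(), desc() and limit() helpers of the influxdb-java QueryBuilder:

        // fetch only the latest h2o_feet entry instead of the whole measurement
        Query last = select().from("NOAA_water_database", "h2o_feet")
                             .orderBy(desc())
                             .limit(1);
        List<H2OFeetMeasurement> latest = influxDBMapper.query(last, H2OFeetMeasurement.class);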

Behavioural Design Patterns: Visitor

Our last pattern of the behavioural design patterns is going to be the visitor pattern.

We use the visitor pattern when we want to make it possible to define a new operation for classes of an object structure without changing the classes.

Imagine the scenario of software that executes http requests to an api. Most http apis out there have certain limits and allow a specific number of requests to be executed per minute. We might have different classes that execute requests and also take into consideration the business logic of the apis they interact with.
In case we want to inspect those calls and print some information, or persist request-related information to the database, the visitor pattern might be a good fit.

We will start with the visitor interface.

package com.gkatzioura.design.behavioural.visitor;

public interface Visitor {
}

This interface does not specify any methods; interfaces which extend it will declare visit methods for the specific types they can visit. We do this in order to have loosely coupled visitor implementations (or even composition-based visitors).

Then we shall implement the visitable interface.

package com.gkatzioura.design.behavioural.visitor;

public interface Visitable<T extends Visitor> {

     void accept(T visitor);

}

Based on the above we shall create our request execution classes which are visitable.

package com.gkatzioura.design.behavioural.visitor;

public class LocationRequestExecutor implements Visitable<LocationVisitor> {

    private int successfulRequests = 0;
    private double requestsPerMinute = 0.0;

    public void executeRequest() {
        /**
         * Execute the request and change the successfulRequests and requestsPerMinute value
         */
    }

    @Override
    public void accept(LocationVisitor visitor) {
        visitor.visit(this);
    }

    public int getSuccessfulRequests() {
        return successfulRequests;
    }

    public double getRequestsPerMinute() {
        return requestsPerMinute;
    }

}
package com.gkatzioura.design.behavioural.visitor;

public class RouteRequestExecutor implements Visitable<RouteVisitor> {

    private int successfulRequests = 0;
    private double requestsPerMinute = 0.0;

    public void executeRequest() {
        /**
         * Execute the request and change the successfulRequests and requestsPerMinute value
         */
    }

    @Override
    public void accept(RouteVisitor visitor) {
        visitor.visit(this);
    }

    public int getSuccessfulRequests() {
        return successfulRequests;
    }

    public double getRequestsPerMinute() {
        return requestsPerMinute;
    }
}

And then we shall add the visitor interfaces for these types of executors.

package com.gkatzioura.design.behavioural.visitor;

public interface LocationVisitor extends Visitor {

    void visit(LocationRequestExecutor locationRequestExecutor);
}
package com.gkatzioura.design.behavioural.visitor;

public interface RouteVisitor extends Visitor {

    void visit(RouteRequestExecutor routeRequestExecutor);
}

The last step would be to create a visitor that implements the above interfaces.

package com.gkatzioura.design.behavioural.visitor;

public class RequestVisitor implements LocationVisitor, RouteVisitor {

    @Override
    public void visit(LocationRequestExecutor locationRequestExecutor) {
        // inspect the location executor here, e.g. print or persist its request statistics
    }

    @Override
    public void visit(RouteRequestExecutor routeRequestExecutor) {
        // inspect the route executor here, e.g. print or persist its request statistics
    }
}

So let’s put them all together.

package com.gkatzioura.design.behavioural.visitor;

public class VisitorMain {

    public static void main(String[] args) {
        final LocationRequestExecutor locationRequestExecutor = new LocationRequestExecutor();
        final RouteRequestExecutor routeRequestExecutor = new RouteRequestExecutor();
        final RequestVisitor requestVisitor = new RequestVisitor();

        locationRequestExecutor.accept(requestVisitor);
        routeRequestExecutor.accept(requestVisitor);
    }
}

That’s it! You can find the source code on GitHub.

Upload and Download files to S3 using Maven

Throughout the years I’ve seen many teams using Maven in many different ways. Maven can be used for many ci/cd tasks instead of extra pipeline code, or it can be used to prepare the development environment before running some tests.
Generally it is a convenient tool, widely used among Java teams, and it will continue to be, since there is a huge ecosystem around it.

The CloudStorage Maven plugin helps you use various cloud buckets as a private Maven repository. Recently CloudStorageMaven for S3 got a huge upgrade, and you can now also use it as a plugin to download files from or upload files to S3.

The plugin assumes that your environment is configured properly to access the S3 resources needed.
This can be achieved through aws configure

aws configure

Other ways are through environment variables or by using the appropriate IAM role.
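
For example, the standard AWS credential environment variables can be exported before running Maven (placeholder values, substitute your own):

export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
# the region can be set through aws configure or the relevant region environment variable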

Suppose you want to download certain files from a path in S3.

<build>
        <plugins>
            <plugin>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>s3-storage-wagon</artifactId>
                <version>1.6</version>
                <executions>
                    <execution>
                        <id>download-one</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/local/download/path</downloadPath>
                            <keys>1.txt,2.txt,directory/3.txt</keys>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
</build>

Once the execution is finished, the files 1.txt, 2.txt and directory/3.txt shall reside in the local directory specified
(/local/download/path).
Be aware that file discovery on S3 is done by prefix, thus if you have the files 1.txt and 1.txt.jpg, both files shall be downloaded.

You can also download a single key to a single local file that you specify, as long as the mapping is one to one.

                    <execution>
                        <id>download-prefix</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/path/to/local/your-file.txt</downloadPath>
                            <keys>a-key-to-download.txt</keys>
                        </configuration>
                    </execution>

Files under a prefix that contains directories (they are fake ones on S3) will be downloaded to the directory specified, recreating the directory and sub-directory structure.

                    <execution>
                        <id>download-prefix</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/path/to/local/</downloadPath>
                            <keys>s3-prefix</keys>
                        </configuration>
                    </execution>

The next part is about uploading files to s3.

Uploading one file

                    <execution>
                        <id>upload-one</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/your-file.txt</path>
                            <key>key-to-download.txt</key>
                        </configuration>
                    </execution>

Upload a directory

                    <execution>
                        <id>upload-one</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/directory</path>
                            <key>prefix</key>
                        </configuration>
                    </execution>

Upload to the root of the bucket.

                    <execution>
                        <id>upload-multiples-files-no-key</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/directory</path>
                        </configuration>
                    </execution>

That’s it! Since it is an open source project you can contribute or open pull requests on GitHub.

Behavioural Design Patterns: Template method

Previously we used the strategy pattern in order to solve the problem of choosing various speeding algorithms based on the road type. The next behavioural design pattern we are going to use is the template method.
By using the template method we define the skeleton of the algorithm and the implementation of certain steps is done by subclasses.

Thus we have methods with concrete implementations, and methods without any implementation. The latter will be implemented based on the application logic that needs to be achieved.

Imagine the case of a coffee machine. There are many types of coffee and different ways to prepare them; some steps are common, while other steps vary but still need to be implemented. Processing the beans, boiling, and processing the milk are all actions that differ based on the type of coffee. Pouring into a cup and serving, however, are actions that do not differ.

package com.gkatzioura.design.behavioural.template;

public abstract class CoffeeMachineTemplate {

    protected abstract void processBeans();

    protected abstract void processMilk();

    protected abstract void boil();

    public void pourToCup() {
        /**
         * pour to various cups based on the size
         */
    }

    public void serve() {
        processBeans();
        boil();
        processMilk();
        pourToCup();
    }

}

Then we shall add an implementation for the espresso. So here’s our espresso machine.

package com.gkatzioura.design.behavioural.template;

public class EspressoMachine extends CoffeeMachineTemplate {

    @Override
    protected void processBeans() {
        /**
         * Grind the beans
         */
    }

    @Override
    protected void processMilk() {
        /**
         * Use milk to create leaf art
         */
    }

    @Override
    protected void boil() {
        /**
         * Mix water and beans
         */
    }
}

As you can see we can create various coffee machines, no matter how different some of the steps might be.
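To round things off, here is a minimal usage sketch of the machine we just defined; serve() drives the whole templated sequence:

        CoffeeMachineTemplate machine = new EspressoMachine();
        machine.serve(); // processBeans -> boil -> processMilk -> pourToCup
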
You can find the source code on GitHub.

Behavioural Design Patterns: Strategy

Previously we used the state pattern in order to add some functionality to an application based on the user state. Our next behavioural design pattern is Strategy.
The strategy pattern enables us to select an algorithm at runtime. Based on runtime conditions our program picks the most suitable algorithm instead of implementing a single algorithm directly. This makes our codebase more flexible and keeps it clean of any extra logic.

Our example shall revolve around vehicles and the speed that is allowed based on the type of road. For example, if a vehicle is on a four lane road the allowed speed would be way different than on an urban area road.
So we are actually going to implement the strategy pattern with regards to speed adjustment.

We will start with the speeding interface.

package com.gkatzioura.design.behavioural.strategy;

public interface Speeding {

    Double adjustSpeed(Double currentSpeed);

}

Then we shall create some implementations based on the road type.
The four lane speeding implementation adjusts the speed when driving on a four lane road.

package com.gkatzioura.design.behavioural.strategy;

public class FourLaneSpeeding implements Speeding {

    private static final Double upperLimit = 50d;

    @Override
    public Double adjustSpeed(Double currentSpeed) {
        if(currentSpeed>upperLimit) {
            currentSpeed = upperLimit;
        }

        System.out.println("Speed adjusted at "+currentSpeed);

        return currentSpeed;
    }

}

The urban area speeding implementation adjusts the speed when driving on an urban area road.

package com.gkatzioura.design.behavioural.strategy;

public class UrbanAreaSpeeding implements Speeding {

    private static final Double upperLimit = 30d;

    @Override
    public Double adjustSpeed(Double currentSpeed) {
        if(currentSpeed>upperLimit) {
            currentSpeed = upperLimit;
        }

        System.out.println("Speed adjusted at "+currentSpeed);

        return currentSpeed;
    }

}

And then we shall create the vehicle class.

package com.gkatzioura.design.behavioural.strategy;

public class Vehicle {

    private Speeding speeding;
    private Double currentSpeed;

    public void drive() {

        // guard against a missing strategy and keep the adjusted speed
        if (speeding != null) {
            currentSpeed = speeding.adjustSpeed(currentSpeed);
        }

        /**
         * Driving related actions.
         */
    }

    public void setSpeeding(Speeding speeding) {
        this.speeding = speeding;
    }

    public void setCurrentSpeed(Double currentSpeed) {
        this.currentSpeed = currentSpeed;
    }
}

As you can see the vehicle changes its speeding strategy based on the road it is driving on.
Let’s put them all together.

package com.gkatzioura.design.behavioural.strategy;

public class Strategy {

    public static void main(String[] args) {
        Vehicle vehicle = new Vehicle();

        vehicle.setCurrentSpeed(70d);
        
        vehicle.drive();
        
        /**
         * Changed route
         */
        
        vehicle.setSpeeding(new FourLaneSpeeding());

        vehicle.drive();

        /**
         * Changed route
         */
        
        vehicle.setSpeeding(new UrbanAreaSpeeding());

        vehicle.drive();
    }
}

That’s all for now! You can find the source code on GitHub.