Use JMH for your Java applications with Gradle

If you want to benchmark your code, the Java Microbenchmark Harness (JMH) is the tool of choice.
In our example we shall use the refill-rate-limiter project.

Since refill-rate-limiter uses Gradle, we will use the JMH Gradle plugin:

plugins {
...
  id "me.champeau.gradle.jmh" version "0.5.3"
...
}

We shall place the benchmark in the src/jmh/java/io/github/resilience4j/ratelimiter folder.

Our benchmark should look like this:

package io.github.resilience4j.ratelimiter;

import io.github.resilience4j.ratelimiter.internal.RefillRateLimiter;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;
import org.openjdk.jmh.profile.GCProfiler;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

import java.time.Duration;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@BenchmarkMode(Mode.All)
public class RateLimiterBenchmark {

    private static final int FORK_COUNT = 2;
    private static final int WARMUP_COUNT = 10;
    private static final int ITERATION_COUNT = 10;
    private static final int THREAD_COUNT = 2;

    private RefillRateLimiter refillRateLimiter;

    private Supplier<String> refillGuardedSupplier;

    public static void main(String[] args) throws RunnerException {
        Options options = new OptionsBuilder()
                .addProfiler(GCProfiler.class)
                .build();
        new Runner(options).run();
    }

    @Setup
    public void setUp() {

        RefillRateLimiterConfig refillRateLimiterConfig = RefillRateLimiterConfig.custom()
                                                                                 .limitForPeriod(1)
                                                                                 .limitRefreshPeriod(Duration.ofNanos(1))
                                                                                 .timeoutDuration(Duration.ofSeconds(5))
                                                                                 .build();

        refillRateLimiter = new RefillRateLimiter("refillBased", refillRateLimiterConfig);

        Supplier<String> stringSupplier = () -> {
            Blackhole.consumeCPU(1);
            return "Hello Benchmark";
        };

        refillGuardedSupplier = RateLimiter.decorateSupplier(refillRateLimiter, stringSupplier);
    }

    @Benchmark
    @Threads(value = THREAD_COUNT)
    @Warmup(iterations = WARMUP_COUNT)
    @Fork(value = FORK_COUNT)
    @Measurement(iterations = ITERATION_COUNT)
    public String refillPermission() {
        return refillGuardedSupplier.get();
    }

}

Let’s now check the elements one by one.

By using the Benchmark scope, all threads running the benchmark will share the same state object. We do so because we want to test how refill-rate-limiter performs in a multithreaded scenario.

@State(Scope.Benchmark)
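
Had we wanted each thread to work on its own instance instead, JMH also provides a per-thread scope; a minimal sketch (the class and field are illustrative, not part of the benchmark above):

@State(Scope.Thread)
public static class PerThreadState {
    // each benchmark thread gets its own copy of this state
    long invocations;
}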

We would like our results to be reported in microseconds; therefore we shall use the OutputTimeUnit annotation.

@OutputTimeUnit(TimeUnit.MICROSECONDS)

On JMH we have various benchmark modes depending on what we want to measure:

Throughput: measures the number of operations per unit of time.
AverageTime: measures the average time per operation.
SampleTime: samples the time of each operation, reporting min, max and percentiles rather than just the average.
SingleShotTime: measures the time of a single invocation. This helps when we want to see how the operation behaves on a cold start.

We also have the option to measure all of the above.

@BenchmarkMode(Mode.All)
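
Running all modes is thorough but slow. If we only care about a subset, we can list the modes explicitly; for example, throughput and average time only:

@BenchmarkMode({Mode.Throughput, Mode.AverageTime})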

Those options configured on the class level will apply to the benchmark methods we shall add.

Let’s also examine how the benchmark will run.

We will specify the number of Threads by using the Threads annotation.

@Threads(value = THREAD_COUNT)

Also, we want to warm up before we run the actual benchmarks. This way our code gets initialized, runtime optimizations take place, and the JVM adapts to the conditions before the measured runs.

@Warmup(iterations = WARMUP_COUNT)
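
Apart from the iteration count, @Warmup can also take the duration of each warmup iteration; the values below are illustrative:

@Warmup(iterations = WARMUP_COUNT, time = 1, timeUnit = TimeUnit.SECONDS)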

Using Fork we instruct how many separate JVM forks the benchmark will run in.

@Fork(value = FORK_COUNT)
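
Since each fork is a fresh JVM, @Fork is also the place to append JVM arguments for the forked runs; the heap settings below are illustrative:

@Fork(value = FORK_COUNT, jvmArgsAppend = {"-Xms512m", "-Xmx512m"})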

Then we need to specify the number of iterations we want to measure.

@Measurement(iterations = ITERATION_COUNT)

We can start our benchmark by just using:

gradle jmh

The results will be saved to a file.

...
2022-10-28T09:08:44.522+0100 [QUIET] [system.out] Benchmark result is saved to /path/refill-rate-limiter/build/reports/jmh/results.txt
...

Let’s examine the results.

Benchmark                                                         Mode       Cnt      Score   Error   Units
RateLimiterBenchmark.refillPermission                            thrpt        20     13.594 ± 0.217  ops/us
RateLimiterBenchmark.refillPermission                             avgt        20      0.147 ± 0.002   us/op
RateLimiterBenchmark.refillPermission                           sample  10754462      0.711 ± 0.025   us/op
RateLimiterBenchmark.refillPermission:refillPermission·p0.00    sample                  ≈ 0           us/op
RateLimiterBenchmark.refillPermission:refillPermission·p0.50    sample                0.084           us/op
RateLimiterBenchmark.refillPermission:refillPermission·p0.90    sample                0.125           us/op
RateLimiterBenchmark.refillPermission:refillPermission·p0.95    sample                0.125           us/op
RateLimiterBenchmark.refillPermission:refillPermission·p0.99    sample                0.209           us/op
RateLimiterBenchmark.refillPermission:refillPermission·p0.999   sample              139.008           us/op
RateLimiterBenchmark.refillPermission:refillPermission·p0.9999  sample              935.936           us/op
RateLimiterBenchmark.refillPermission:refillPermission·p1.00    sample            20709.376           us/op
RateLimiterBenchmark.refillPermission                               ss        20     14.700 ± 4.003   us/op

As we can see, we have all the modes listed.
Cnt is the number of measured iterations (2 forks × 10 iterations = 20). Apart from throughput, where we measure operations per unit of time, the rest report time per operation.
Throughput, average and single shot are straightforward; sample additionally lists the percentiles. Error is the margin of error.

That’s it! Happy benchmarking!

Gradle: Push to Maven Repository

If you are a developer, sharing your artifacts is a common task that needs to be in place from the start.

In most teams and companies a Maven repository is already set up; it is used mostly through CI/CD tasks, enabling developers to distribute the generated artifacts.

In order to make the example possible we shall spin up a Nexus repository using Docker Compose.

First let’s create a default password file and the directory that will hold Nexus’ data. For now the password is clear text; it only serves us for our first action. Once the admin password is reset on Nexus, the file is removed, so no clear-text password file remains.

mkdir nexus-data
echo -n 'a-test-password' > admin.password

Then onwards to the Compose file:

services:
  nexus:
    image: sonatype/nexus3
    ports:
      - 8081:8081
    environment:
      INSTALL4J_ADD_VM_PARAMS: "-Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m"
    volumes:
      - nexus-data:/nexus-data
      - ./admin.password:/nexus-data/admin.password
volumes:
  nexus-data:

Let’s examine the file.
By using INSTALL4J_ADD_VM_PARAMS we override the JVM parameters of the Java command that runs the Nexus server; this way we can instruct it to use more memory.
Because Nexus takes long to initialize, we create a Docker volume. Every time we run the Compose file, initialization does not start from scratch; the results of the previous initialization are reused. By mounting the admin password created previously, we get a predefined password for the service.

By issuing the following command, the server will be up and running:

docker compose up

You can find more on Compose on the Developers Essential Guide to Docker Compose.

After some time Nexus will be initialized and running, so we can proceed to our Gradle configuration.

We will keep track of the version in the gradle.properties file:

version=1.0-SNAPSHOT

The build.gradle file:

plugins {
    id 'java'
    id 'maven-publish'
}

group 'org.example'
version '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.8.1'
    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.8.1'
}

test {
    useJUnitPlatform()
}

publishing {
    publications {
        mavenJava(MavenPublication) {
            groupId 'org.example'
            artifactId 'gradle-push'
            version version
            from components.java
            versionMapping {
                usage('java-api') {
                    fromResolutionOf('runtimeClasspath')
                }
                usage('java-runtime') {
                    fromResolutionResult()
                }
            }
            pom {
                name = 'gradle-push'
                description = 'Gradle Push to Nexus'
                url = 'https://github.com/gkatzioura/egkatzioura.wordpress.com.git'
                licenses {
                    license {
                        name = 'The Apache License, Version 2.0'
                        url = 'http://www.apache.org/licenses/LICENSE-2.0.txt'
                    }
                }
                developers {
                    developer {
                        id = 'John'
                        name = 'John Doe'
                        email = 'an-email@gmail.com'
                    }
                }
                scm {
                    connection = 'scm:git:https://github.com/gkatzioura/egkatzioura.wordpress.com.git'
                    developerConnection = 'scm:git:https://github.com/gkatzioura/egkatzioura.wordpress.com.git'
                    url = 'https://github.com/gkatzioura/egkatzioura.wordpress.com.git'
                }
            }
        }
    }

    repositories {
        maven {
            def releasesRepoUrl = "http://localhost:8081/repository/maven-releases/"
            def snapshotsRepoUrl = "http://localhost:8081/repository/maven-snapshots/"
            url = version.endsWith('SNAPSHOT') ? snapshotsRepoUrl : releasesRepoUrl

            credentials {
                username "admin"
                password "a-test-password"
            }
        }
    }
}

The plugin to be used is the maven-publish plugin. It generates a Maven pom file, which is then used to execute the publish to Nexus. The target repositories are configured in the repositories section, including the Nexus credentials we defined previously. Depending on whether the version is a SNAPSHOT or a release, the corresponding repository endpoint is picked. As we can see, a clear-text password is used; in the next blog we will examine how we can avoid that.

Now it is sufficient to run

gradle publish

This will deploy the binary to the snapshot repository.

You can find the source code on GitHub.

Use properties with your Gradle project

Most of us are used to the Maven way of specifying properties, using the properties tag:


<properties>
    <springdata.commons>2.0.9.RELEASE</springdata.commons>
</properties>

and then when it comes to a dependency it can be easily referenced as ${springdata.commons}.


<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-commons</artifactId>
    <version>${springdata.commons}</version>
</dependency>

This is a bit tricky when it comes to Gradle. The way to do so is by using the extra properties extension.

ext{
  springdataCommons = "2.0.9.RELEASE"
}
dependencies {
    compile "org.springframework.data:spring-data-commons:${springdataCommons}"
}

Be aware that in order for this to work you should use double quotes, since in Groovy double-quoted string literals support string interpolation while single-quoted ones don’t.

This might work for you, but you might want to use the dot character in the property name, for example springdata.commons.

That’s not possible the previous way, since springdata would be interpreted as a symbol.
A workaround is to use the set method of ext.

ext{
    set('springdata.commons', "2.0.9.RELEASE")
}

dependencies {
    compile "org.springframework.data:spring-data-commons:${property('springdata.commons')}"
}

Pay attention to the compile section, since you are going to have the same problem there with the property name; thus you need to use the ${property('springdata.commons')} expression.

Another way is to use a gradle.properties file and add your properties there.

springdata.commons=2.0.9.RELEASE

Your compile section will stay just the same.

dependencies {
    compile "org.springframework.data:spring-data-commons:${property('springdata.commons')}"
}

Gradle multi project build – parent pom like structure

When you come from a Maven background, you have most probably been used to the parent pom structure.

When it comes to Gradle, things are a little bit different.

Imagine the scenario of having a project including the interfaces and various other implementations.
This is going to be our project structure.

multi-project-gradle
-- specification
-- core
-- implementation-a
-- implementation-b

The specification project contains the interfaces, which the implementations will be based upon. The core project will contain functionality which needs to be shared among implementations.

The next step is to create each project inside the multi-project-gradle.

Each project is actually a directory with a build.gradle file.

plugins {
    id 'java'
}

repositories {
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.12'
}

Once done, you need to link the parent project with the child projects.
To do so, you create the multi-project-gradle/settings.gradle file and include the other projects.

rootProject.name = 'com.gkatzioura'
include 'specification'
include 'core'
include 'implementation-a'
include 'implementation-b'

Now if you set up the build.gradle file for every subproject, you will realise that you have included the junit dependency and the mavenCentral repository everywhere.

One of the main benefits of using multi-project builds is removing duplication.

To do so we shall create the multi-project-gradle/build.gradle file and add the junit dependency and the mavenCentral reference there.

subprojects {
    apply plugin: 'java'

    repositories {
        mavenCentral()
    }

    dependencies {
        testCompile group: 'junit', name: 'junit', version: '4.12'
    }

}

Now we can add our dependencies to each project and even declare dependencies between the sub-projects.

For example, the core project uses the specification project:

dependencies {
  compile project(':specification')
}

and each implementation project uses the core project:

dependencies {
    compile project(':core')
}

You can find the project on GitHub.

Spring Boot and Cache Abstraction with Hazelcast

Previously we got started with the Spring cache abstraction using the default cache manager that Spring provides.

Although this approach might suit our needs for simple applications, in case of complex problems we need different tools with more capabilities. Hazelcast is one of them: it is hands down a great caching tool for JVM-based applications. By using Hazelcast as a cache, data is evenly distributed among the nodes of a cluster, allowing for horizontal scaling of the available storage.

We will run our codebase using Spring profiles; ‘hazelcast-cache’ will be our profile name.

group 'com.gkatzioura'
version '1.0-SNAPSHOT'


buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.4.2.RELEASE")
    }
}

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'

repositories {
    mavenCentral()
}


sourceCompatibility = 1.8
targetCompatibility = 1.8

dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    compile("org.springframework.boot:spring-boot-starter-cache")
    compile("org.springframework.boot:spring-boot-starter")
    compile("com.hazelcast:hazelcast:3.7.4")
    compile("com.hazelcast:hazelcast-spring:3.7.4")

    testCompile("junit:junit")
}

bootRun {
    systemProperty "spring.profiles.active", "hazelcast-cache"
}

As you can see, we updated the Gradle file from the previous example and added two extra dependencies: hazelcast and hazelcast-spring. We also changed the profile that our application runs with by default.

Our next step is to configure the Hazelcast cache manager.

package com.gkatzioura.caching.config;

import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

/**
 * Created by gkatzioura on 1/10/17.
 */
@Configuration
@Profile("hazelcast-cache")
public class HazelcastCacheConfig {

    @Bean
    public Config hazelCastConfig() {

        Config config = new Config();
        config.setInstanceName("hazelcast-cache");

        MapConfig allUsersCache = new MapConfig();
        allUsersCache.setTimeToLiveSeconds(20);
        allUsersCache.setEvictionPolicy(EvictionPolicy.LFU);
        config.getMapConfigs().put("alluserscache",allUsersCache);

        MapConfig usercache = new MapConfig();
        usercache.setTimeToLiveSeconds(20);
        usercache.setEvictionPolicy(EvictionPolicy.LFU);
        config.getMapConfigs().put("usercache",usercache);

        return config;
    }

}

We just created two maps with a TTL policy of 20 seconds; 20 seconds after an entry is put into a map, it becomes eligible for eviction. For more Hazelcast configuration options please refer to the official Hazelcast documentation.
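
Spring Boot picks up the Config bean above through its Hazelcast auto-configuration. If we wired the instance and the cache manager by hand instead, a minimal sketch using the hazelcast-spring module could look like this (the explicit beans are an assumption, not part of the original example):

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.spring.cache.HazelcastCacheManager;
import org.springframework.cache.CacheManager;
import org.springframework.context.annotation.Bean;

@Bean
public HazelcastInstance hazelcastInstance(Config config) {
    // boots a Hazelcast member using the map configs defined above
    return Hazelcast.newHazelcastInstance(config);
}

@Bean
public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
    // bridges Hazelcast maps to Spring's cache abstraction
    return new HazelcastCacheManager(hazelcastInstance);
}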

Another change we have to make is turning UserPayload into a serializable Java object, since objects stored in Hazelcast must be Serializable.

package com.gkatzioura.caching.model;

import java.io.Serializable;

/**
 * Created by gkatzioura on 1/5/17.
 */
public class UserPayload implements Serializable {

    private String userName;
    private String firstName;
    private String lastName;

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}

Last but not least we add another repository bound to the hazelcast-cache profile.

The result is our previous Spring Boot application integrated with Hazelcast instead of the default cache, configured with a TTL policy.

You can find the source code on GitHub.

Spring Boot and Cache Abstraction

Caching is a major ingredient of most applications, and as long as we try to avoid disk access it will stay strong.
Spring has great support for caching with a wide range of configurations. You can start as simple as you want and progress to something much more customizable.

This is an example of the simplest form of caching that Spring provides.
Spring comes by default with an in-memory cache which is pretty easy to set up.

Let us start with our gradle file.

group 'com.gkatzioura'
version '1.0-SNAPSHOT'


buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.4.2.RELEASE")
    }
}

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'

repositories {
    mavenCentral()
}


sourceCompatibility = 1.8
targetCompatibility = 1.8

dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    compile("org.springframework.boot:spring-boot-starter-cache")
    compile("org.springframework.boot:spring-boot-starter")
    testCompile("junit:junit")
}

bootRun {
    systemProperty "spring.profiles.active", "simple-cache"
}

Since the same project will be used for different cache providers, there are going to be multiple Spring profiles. The profile for this tutorial is simple-cache, since we are going to use the ConcurrentMap-based cache, which happens to be the default.

We will implement an application which will fetch user information from our local file system.
The information will reside in the users.json file:

[
  {"userName":"user1","firstName":"User1","lastName":"First"},
  {"userName":"user2","firstName":"User2","lastName":"Second"},
  {"userName":"user3","firstName":"User3","lastName":"Third"},
  {"userName":"user4","firstName":"User4","lastName":"Fourth"}
]

Also we will specify a simple model for the data to be retrieved.

package com.gkatzioura.caching.model;

/**
 * Created by gkatzioura on 1/5/17.
 */
public class UserPayload {

    private String userName;
    private String firstName;
    private String lastName;

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}

Then we will add a bean that will read the information.

package com.gkatzioura.caching.config;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.gkatzioura.caching.model.UserPayload;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.io.Resource;

import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/**
 * Created by gkatzioura on 1/5/17.
 */
@Configuration
@Profile("simple-cache")
public class SimpleDataConfig {

    @Autowired
    private ObjectMapper objectMapper;

    @Value("classpath:/users.json")
    private Resource usersJsonResource;

    @Bean
    public List<UserPayload> payloadUsers() throws IOException {

        try(InputStream inputStream = usersJsonResource.getInputStream()) {

            UserPayload[] payloadUsers = objectMapper.readValue(inputStream,UserPayload[].class);
            return Collections.unmodifiableList(Arrays.asList(payloadUsers));
        }
    }
}

In order to access the information, we will use this instantiated bean, which contains all the user information.

Next step will be to create a repository interface to specify the methods that will be used.

package com.gkatzioura.caching.repository;

import com.gkatzioura.caching.model.UserPayload;

import java.util.List;

/**
 * Created by gkatzioura on 1/6/17.
 */
public interface UserRepository {

    List<UserPayload> fetchAllUsers();

    UserPayload firstUser();

    UserPayload userByFirstNameAndLastName(String firstName,String lastName);

}

Now let’s dive into the implementation which will contain the cache annotations needed.

package com.gkatzioura.caching.repository;

import com.gkatzioura.caching.model.UserPayload;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Repository;

import java.util.List;
import java.util.Optional;

/**
 * Created by gkatzioura on 12/30/16.
 */
@Repository
@Profile("simple-cache")
public class UserRepositoryLocal implements UserRepository {

    @Autowired
    private List<UserPayload> payloadUsers;

    private static final Logger LOGGER = LoggerFactory.getLogger(UserRepositoryLocal.class);

    @Override
    @Cacheable("alluserscache")
    public List<UserPayload> fetchAllUsers() {

        LOGGER.info("Fetching all users");

        return payloadUsers;
    }

    @Override
    @Cacheable(cacheNames = "usercache",key = "#root.methodName")
    public UserPayload firstUser() {

        LOGGER.info("fetching firstUser");

        return payloadUsers.get(0);
    }

    @Override
    @Cacheable(cacheNames = "usercache",key = "{#firstName,#lastName}")
    public UserPayload userByFirstNameAndLastName(String firstName,String lastName) {

        LOGGER.info("fetching user by firstname and lastname");

        Optional<UserPayload> user = payloadUsers.stream().filter(
                p-> p.getFirstName().equals(firstName)
                &&p.getLastName().equals(lastName))
                .findFirst();

        if(user.isPresent()) {
            return user.get();
        } else {
            return null;
        }
    }

}

Methods annotated with @Cacheable trigger cache population, contrary to methods annotated with @CacheEvict, which trigger cache eviction.
With @Cacheable, instead of just specifying the cache our values will be stored in, we can also specify keys based on the method name or the method arguments; thus we achieve method caching.
For example, the method firstUser uses the method name as its key, whilst the method userByFirstNameAndLastName builds its key from the method arguments.
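
Since the key attribute is a SpEL expression, we could refine the caching further; a hypothetical variant of the lookup method that also avoids caching null results:

@Override
@Cacheable(cacheNames = "usercache", key = "{#firstName,#lastName}", unless = "#result == null")
public UserPayload userByFirstNameAndLastName(String firstName, String lastName) {
    // same lookup as above; misses return null and are not cached
    return payloadUsers.stream()
            .filter(p -> p.getFirstName().equals(firstName) && p.getLastName().equals(lastName))
            .findFirst()
            .orElse(null);
}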

Two methods with the @CacheEvict annotation will empty the caches specified.

LocalCacheEvict will be the component that will handle the eviction.

package com.gkatzioura.caching.repository;

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

/**
 * Created by gkatzioura on 1/7/17.
 */
@Component
@Profile("simple-cache")
public class LocalCacheEvict {

    @CacheEvict(cacheNames = "alluserscache",allEntries = true)
    public void evictAllUsersCache() {

    }

    @CacheEvict(cacheNames = "usercache",allEntries = true)
    public void evictUserCache() {

    }

}

Since we use a very simple form of cache, TTL eviction is not supported. Therefore we will add a scheduler just for this case, which will evict the caches after a certain period of time.

package com.gkatzioura.caching.scheduler;

import com.gkatzioura.caching.repository.LocalCacheEvict;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Profile;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

/**
 * Created by gkatzioura on 1/7/17.
 */
@Component
@Profile("simple-cache")
public class EvictScheduler {

    @Autowired
    private LocalCacheEvict localCacheEvict;

    private static final Logger LOGGER = LoggerFactory.getLogger(EvictScheduler.class);

    @Scheduled(fixedDelay=10000)
    public void clearCaches() {

        LOGGER.info("Invalidating caches");

        localCacheEvict.evictUserCache();
        localCacheEvict.evictAllUsersCache();
    }


}

To wrap up, we will use a controller to call the methods specified.

package com.gkatzioura.caching.controller;

import com.gkatzioura.caching.model.UserPayload;
import com.gkatzioura.caching.repository.UserRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

/**
 * Created by gkatzioura on 12/30/16.
 */
@RestController
public class UsersController {

    @Autowired
    private UserRepository userRepository;

    @RequestMapping(path = "/users/all",method = RequestMethod.GET)
    public List<UserPayload> fetchUsers() {

        return userRepository.fetchAllUsers();
    }

    @RequestMapping(path = "/users/first",method = RequestMethod.GET)
    public UserPayload fetchFirst() {
        return userRepository.firstUser();
    }

    @RequestMapping(path = "/users/",method = RequestMethod.GET)
    public UserPayload findByFirstNameLastName(String firstName,String lastName ) {

        return userRepository.userByFirstNameAndLastName(firstName,lastName);
    }

}

Last but not least, our Application class should contain two extra annotations: @EnableScheduling is needed in order to enable schedulers, and @EnableCaching in order to enable caching.

package com.gkatzioura.caching;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.scheduling.annotation.EnableScheduling;

/**
 * Created by gkatzioura on 12/30/16.
 */
@SpringBootApplication
@EnableScheduling
@EnableCaching
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class,args);
    }

}

You can find the source code on GitHub.

Integrate Spring Boot and EC2 using CloudFormation

On a previous blog we integrated a Spring Boot application with Elastic Beanstalk.
The application was a servlet-based application responding to requests.

In this tutorial we are going to deploy a Spring Boot application which executes some scheduled tasks on an EC2 instance.
The application will be pretty much the same as the one from the official Spring guide, with some minor differences in packages.

The name of our application will be ec2-deployment

rootProject.name = 'ec2-deployment'

Then we will schedule a task to our spring boot application.

package com.gkatzioura.deployment.task;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

/**
 * Created by gkatzioura on 12/16/16.
 */
@Component
public class SimpleTask {

    private static final Logger LOGGER = LoggerFactory.getLogger(SimpleTask.class);

    @Scheduled(fixedRate = 5000)
    public void reportCurrentTime() {
        LOGGER.info("This is a simple task on ec2");
    }

}
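
For the @Scheduled annotation to fire, scheduling must be enabled on the application class; a minimal sketch mirroring the Application classes of the caching examples (the package name is inferred from the task above):

package com.gkatzioura.deployment;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;

@SpringBootApplication
@EnableScheduling
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}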


The next step is to build the application and deploy it to our S3 bucket.

gradle build
aws s3 cp build/libs/ec2-deployment-1.0-SNAPSHOT.jar s3://{your bucket name}/ec2-deployment-1.0-SNAPSHOT.jar 

What comes next is a bootstrapping script to run our application once the server is up and running.

#!/usr/bin/env bash
aws s3 cp s3://{bucket with code}/ec2-deployment-1.0-SNAPSHOT.jar /home/ec2-user/ec2-deployment-1.0-SNAPSHOT.jar
sudo yum -y install java-1.8.0
sudo yum -y remove java-1.7.0-openjdk
cd /home/ec2-user/
sudo nohup java -jar ec2-deployment-1.0-SNAPSHOT.jar > ec2dep.log

This script is pretty much self-explanatory: we download the application from the bucket we previously uploaded it to, install the Java version needed, and run the application (this script serves example purposes; there are certainly many ways to set up your Java application on Linux).

The next step is to proceed to our CloudFormation script. Since we will download our application from S3, it is essential to have an IAM policy that allows downloading items from the S3 bucket we used previously. Therefore we will create a role with the needed policy:

"RootRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version" : "2012-10-17",
          "Statement": [ {
            "Effect": "Allow",
            "Principal": {
              "Service": [ "ec2.amazonaws.com" ]
            },
            "Action": [ "sts:AssumeRole" ]
          } ]
        },
        "Path": "/",
        "Policies": [ {
          "PolicyName": "root",
          "PolicyDocument": {
            "Version" : "2012-10-17",
            "Statement": [ {
              "Effect": "Allow",
              "Action": [
                "s3:Get*",
                "s3:List*"
              ],
              "Resource": {"Fn::Join" : [ "", [ "arn:aws:s3:::", {"Ref":"SourceCodeBucket"},"/*"] ] }
            } ]
          }
        } ]
      }
    }

The next step is to encode our bootstrapping script to Base64 in order to pass it as user data.
Once the EC2 instance is up and running, it will execute the shell commands previously specified.
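
Any tool can produce the Base64 string; for illustration, a small Java sketch using the JDK's Base64 encoder (the script file name is hypothetical):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class EncodeUserData {

    public static void main(String[] args) throws Exception {
        // read the bootstrapping script and print its Base64 form for the UserData property
        byte[] script = Files.readAllBytes(Paths.get("bootstrap.sh"));
        System.out.println(Base64.getEncoder().encodeToString(script));
    }
}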

The last step is to create our instance profile and specify the EC2 instance to be launched:

    "RootInstanceProfile": {
      "Type": "AWS::IAM::InstanceProfile",
      "Properties": {
        "Path": "/",
        "Roles": [ {
          "Ref": "RootRole"
        } ]
      }
    },
    "Ec2Instance":{
      "Type":"AWS::EC2::Instance",
      "Properties":{
        "ImageId":"ami-9398d3e0",
        "InstanceType":"t2.nano",
        "KeyName":"TestKey",
        "IamInstanceProfile": {"Ref":"RootInstanceProfile"},
"UserData":"IyEvdXNyL2Jpbi9lbnYgYmFzaA0KYXdzIHMzIGNwIHMzOi8ve2J1Y2tldCB3aXRoIGNvZGV9L2VjMi1kZXBsb3ltZW50LTEuMC1TTkFQU0hPVC5qYXIgL2hvbWUvZWMyLXVzZXIvZWMyLWRlcGxveW1lbnQtMS4wLVNOQVBTSE9ULmphcg0Kc3VkbyB5dW0gLXkgaW5zdGFsbCBqYXZhLTEuOC4wDQpzdWRvIHl1bSAteSByZW1vdmUgamF2YS0xLjcuMC1vcGVuamRrDQpjZCAvaG9tZS9lYzItdXNlci8NCnN1ZG8gbm9odXAgamF2YSAtamFyIGVjMi1kZXBsb3ltZW50LTEuMC1TTkFQU0hPVC5qYXIgPiBlYzJkZXAubG9n"
      }
    }

KeyName stands for the SSH key name, in case you want to log in to the EC2 instance.

So we are good to go and create our CloudFormation stack. Note that you have to add the CAPABILITY_IAM flag:

aws s3 cp ec2spring.template s3://{bucket with templates}/ec2spring.template
aws cloudformation create-stack --stack-name SpringEc2 --parameters ParameterKey=SourceCodeBucket,ParameterValue={bucket with code} --template-url https://s3.amazonaws.com/{bucket with templates}/ec2spring.template --capabilities CAPABILITY_IAM

That’s it! Now you have your Spring application up and running on an EC2 instance.
You can download the source code from GitHub.

Integrate Redis to your Spring project

This article shows how to integrate a Redis cache into your Spring project through annotation-based configuration.

We will begin with our Gradle configuration. We will use the Jedis driver.

group 'com.gkatzioura.spring'
version '1.0-SNAPSHOT'

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'spring-boot'

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.2.5.RELEASE")
    }
}

jar {
    baseName = 'gs-serving-web-content'
    version =  '0.1.0'
}

sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    compile "org.springframework.boot:spring-boot-starter-thymeleaf"
    compile 'org.slf4j:slf4j-api:1.6.6'
    compile 'ch.qos.logback:logback-classic:1.0.13'
    compile 'redis.clients:jedis:2.7.0'
    compile 'org.springframework.data:spring-data-redis:1.5.0.RELEASE'
    testCompile group: 'junit', name: 'junit', version: '4.11'
}

task wrapper(type: Wrapper) {
    gradleVersion = '2.3'
}

We will proceed with the Redis configuration using Spring annotations.

package com.gkatzioura.spring.config;

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.RedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
@EnableCaching
public class RedisConfig extends CachingConfigurerSupport {

    @Bean
    public JedisConnectionFactory redisConnectionFactory() {

        JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory();
        jedisConnectionFactory.setUsePool(true);
        return jedisConnectionFactory;
    }

    @Bean
    public RedisSerializer redisStringSerializer() {
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
        return stringRedisSerializer;
    }

    @Bean(name="redisTemplate")
    public RedisTemplate<String, String> redisTemplate(RedisConnectionFactory cf,RedisSerializer redisSerializer) {
        RedisTemplate<String, String> redisTemplate = new RedisTemplate<String, String>();
        redisTemplate.setConnectionFactory(cf);
        redisTemplate.setDefaultSerializer(redisSerializer);
        return redisTemplate;
    }

    @Bean
    public CacheManager cacheManager() {
        return new RedisCacheManager(redisTemplate(redisConnectionFactory(),redisStringSerializer()));
    }

}

Next step is to create our caching interface

package com.gkatzioura.spring.cache;

import java.util.Date;
import java.util.List;

public interface CacheService {

    public void addMessage(String user,String message);

    public List<String> listMessages(String user);

}

A user will add messages and will be able to retrieve them.
However, in our implementation, each user’s messages will have a time to live of one minute.

Our CacheService implementation using Redis follows.

package com.gkatzioura.spring.cache.impl;

import com.gkatzioura.spring.cache.CacheService;
import org.springframework.data.redis.core.ListOperations;
import org.springframework.data.redis.core.RedisOperations;
import org.springframework.data.redis.core.SetOperations;
import org.springframework.stereotype.Service;

import javax.annotation.Resource;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;
import java.util.Date;
import java.util.List;

@Service("cacheService")
public class RedisService implements CacheService {

    @Resource(name = "redisTemplate")
    private ListOperations<String, String> messageList;

    @Resource(name = "redisTemplate")
    private RedisOperations<String,String> latestMessageExpiration;

    @Override
    public void addMessage(String user,String message) {

        messageList.leftPush(user,message);

        ZonedDateTime zonedDateTime = ZonedDateTime.now();
        Date date = Date.from(zonedDateTime.plus(1, ChronoUnit.MINUTES).toInstant());
        latestMessageExpiration.expireAt(user,date);
    }

    @Override
    public List<String> listMessages(String user) {
        return messageList.range(user,0,-1);
    }

}

Our cache mechanism will retain a list of messages sent by each user. To achieve this we employ the ListOperations interface, using the user as the key.
The RedisOperations interface gives us the ability to specify a time to live for a key. In our case it is used on the user key.
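
Equivalently, instead of computing an absolute Date, RedisOperations offers a relative expire method; a one-line sketch of what addMessage could use (java.util.concurrent.TimeUnit assumed imported):

// relative TTL: expire the user's message list one minute from now
latestMessageExpiration.expire(user, 1, TimeUnit.MINUTES);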

Next we create a controller with the cache service injected.

package com.gkatzioura.spring.controller;

import com.gkatzioura.spring.cache.CacheService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
public class MessageController {


    @Autowired
    private CacheService cacheService;

    @RequestMapping(value = "/message",method = RequestMethod.GET)
    @ResponseBody
    public List<String> greeting(String user) {

        List<String> messages = cacheService.listMessages(user);

        return messages;
    }

    @RequestMapping(value = "/message",method = RequestMethod.POST)
    @ResponseBody
    public String saveGreeting(String user,String message) {

        cacheService.addMessage(user,message);

        return "OK";

    }

}

Last but not least our Application class

package com.gkatzioura.spring;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}

In order to run, just issue:

gradle bootRun

Set up a Spring Data project using Apache Cassandra

In this post we will use Gradle and Spring Boot in order to create a project that integrates spring-mvc and the Apache Cassandra database.

First we will begin with our Gradle configuration:

group 'com.gkatzioura'
version '1.0-SNAPSHOT'

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'spring-boot'

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.2.5.RELEASE")
    }
}

jar {
    baseName = 'gs-serving-web-content'
    version =  '0.1.0'
}

sourceCompatibility = 1.8

repositories {
    mavenCentral()
}


dependencies {
    compile "org.springframework.boot:spring-boot-starter-thymeleaf"
    compile "org.springframework.data:spring-data-cassandra:1.2.2.RELEASE"
    compile 'org.slf4j:slf4j-api:1.6.6'
    compile 'ch.qos.logback:logback-classic:1.0.13'
    testCompile "junit:junit"
}

task wrapper(type: Wrapper) {
    gradleVersion = '2.3'
}

We will create the keyspace and the table in our Cassandra database:

CREATE KEYSPACE IF NOT EXISTS example WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}  AND durable_writes = true;

CREATE TABLE IF NOT EXISTS example.greetings (
    user text,
    id timeuuid,
    greet text,
    creation_date timestamp,
    PRIMARY KEY (user, id)
) WITH CLUSTERING ORDER BY (id DESC);

We can run a file containing CQL statements by using cqlsh:

cqlsh -f database_creation.cql

The Cassandra connection information will reside in META-INF/cassandra.properties:

cassandra.contactpoints=localhost
cassandra.port=9042
cassandra.keyspace=example

Now we can proceed with the Cassandra configuration using Spring annotations.

package com.gkatzioura.spring.config;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.core.env.Environment;
import org.springframework.data.cassandra.config.CassandraClusterFactoryBean;
import org.springframework.data.cassandra.config.CassandraSessionFactoryBean;
import org.springframework.data.cassandra.config.SchemaAction;
import org.springframework.data.cassandra.convert.CassandraConverter;
import org.springframework.data.cassandra.convert.MappingCassandraConverter;
import org.springframework.data.cassandra.core.CassandraOperations;
import org.springframework.data.cassandra.core.CassandraTemplate;
import org.springframework.data.cassandra.mapping.BasicCassandraMappingContext;
import org.springframework.data.cassandra.mapping.CassandraMappingContext;
import org.springframework.data.cassandra.repository.config.EnableCassandraRepositories;


@Configuration
@PropertySource(value = {"classpath:META-INF/cassandra.properties"})
@EnableCassandraRepositories(basePackages = {"com.gkatzioura.spring"})
public class CassandraConfig {

    @Autowired
    private Environment environment;

    private static final Logger LOGGER = LoggerFactory.getLogger(CassandraConfig.class);

    @Bean
    public CassandraClusterFactoryBean cluster() {

        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(environment.getProperty("cassandra.contactpoints"));
        cluster.setPort(Integer.parseInt(environment.getProperty("cassandra.port")));

        return cluster;
    }

    @Bean
    public CassandraMappingContext mappingContext() {
        return new BasicCassandraMappingContext();
    }

    @Bean
    public CassandraConverter converter() {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {

        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName(environment.getProperty("cassandra.keyspace"));
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.NONE);

        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }

}

Then we create the Greeting entity.

package com.gkatzioura.spring.entity;

import com.datastax.driver.core.utils.UUIDs;
import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;

import java.util.Date;
import java.util.UUID;

@Table(value = "greetings")
public class Greeting {

    @PrimaryKeyColumn(name = "id",ordinal = 1,type = PrimaryKeyType.CLUSTERED)
    private UUID id = UUIDs.timeBased();

    @PrimaryKeyColumn(name="user",ordinal = 0,type = PrimaryKeyType.PARTITIONED)
    private String user;

    @Column(value = "greet")
    private String greet;

    @Column(value = "creation_date")
    private Date creationDate;

    public UUID getId() {
        return id;
    }

    public void setId(UUID id) {
        this.id = id;
    }

    public Date getCreationDate() {
        return creationDate;
    }

    public void setCreationDate(Date creationDate) {
        this.creationDate = creationDate;
    }

    public String getUser() {
        return user;
    }

    public void setUser(String user) {
        this.user = user;
    }

    public String getGreet() {
        return greet;
    }

    public void setGreet(String greet) {
        this.greet = greet;
    }
}

In order to access the data, a repository should be created. In our case we will add some extra functionality to the repository by adding some queries.

package com.gkatzioura.spring.repository;

import com.gkatzioura.spring.entity.Greeting;
import org.springframework.data.cassandra.repository.CassandraRepository;
import org.springframework.data.cassandra.repository.Query;
import org.springframework.data.repository.NoRepositoryBean;

import java.util.UUID;

public interface GreetRepository extends CassandraRepository<Greeting> {

    @Query("SELECT * FROM greetings WHERE user=?0 LIMIT ?1")
    Iterable<Greeting> findByUser(String user,Integer limit);

    @Query("SELECT * FROM greetings WHERE user=?0 AND id<?1 LIMIT ?2")
    Iterable<Greeting> findByUserFrom(String user,UUID from,Integer limit);

}
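
The two queries combine naturally into timeuuid-based paging: the first fetches the newest greetings, the second continues from the last id seen. A hypothetical helper illustrating the flow:

// hypothetical paging helper built on top of GreetRepository
public Iterable<Greeting> pageOfGreetings(GreetRepository repo, String user, UUID lastSeenId, int pageSize) {
    if (lastSeenId == null) {
        // first page: the newest greetings for the user
        return repo.findByUser(user, pageSize);
    }
    // subsequent pages: greetings older than the last id already seen
    return repo.findByUserFrom(user, lastSeenId, pageSize);
}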

Now we can implement the controller in order to access the data through HTTP.
With a POST we can save a Greeting entity.
Through a GET we can fetch all the greetings received.
By specifying a user we can use the Cassandra query to fetch greetings for that specific user.

package com.gkatzioura.spring.controller;

import com.gkatzioura.spring.entity.Greeting;
import com.gkatzioura.spring.repository.GreetRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

@RestController
public class GreetingController {

    @Autowired
    private GreetRepository greetRepository;

    @RequestMapping(value = "/greeting",method = RequestMethod.GET)
    @ResponseBody
    public List<Greeting> greeting() {
        List<Greeting> greetings = new ArrayList<>();
        greetRepository.findAll().forEach(e->greetings.add(e));
        return greetings;
    }

    @RequestMapping(value = "/greeting/{user}/",method = RequestMethod.GET)
    @ResponseBody
    public List<Greeting> greetingUserLimit(@PathVariable String user,Integer limit) {
        List<Greeting> greetings = new ArrayList<>();
        greetRepository.findByUser(user,limit).forEach(e -> greetings.add(e));
        return greetings;
    }

    @RequestMapping(value = "/greeting",method = RequestMethod.POST)
    @ResponseBody
    public String saveGreeting(@RequestBody Greeting greeting) {

        greeting.setCreationDate(new Date());
        greetRepository.save(greeting);

        return "OK";
    }

}

Last but not least our Application class

package com.gkatzioura.spring;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}

In order to run the application, just issue:

gradle bootRun