A guide to the InfluxDBMapper and QueryBuilder for Java, Part 1

With the release of the latest influxdb-java driver version came the InfluxDBMapper.

To get started we need to spin up an InfluxDB instance, and Docker is the easiest way to do so. We just follow the steps described here.
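
For example, a single command using the official influxdb Docker image should be enough for a local playground (the image tag is an assumption; pick a 1.x version that matches the driver):

docker run -d --name influxdb -p 8086:8086 influxdb:1.8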

Now we have a database with some data and we are ready to execute our queries.

We have the measurement h2o_feet:

> SELECT * FROM "h2o_feet"

name: h2o_feet
--------------
time                   level description      location       water_level
2015-08-18T00:00:00Z   below 3 feet           santa_monica   2.064
2015-08-18T00:00:00Z   between 6 and 9 feet   coyote_creek   8.12
[...]
2015-09-18T21:36:00Z   between 3 and 6 feet   santa_monica   5.066
2015-09-18T21:42:00Z   between 3 and 6 feet   santa_monica   4.938

So we shall create a model for that.

package com.gkatzioura.mapper.showcase;

import java.time.Instant;
import java.util.concurrent.TimeUnit;

import org.influxdb.annotation.Column;
import org.influxdb.annotation.Measurement;

@Measurement(name = "h2o_feet", database = "NOAA_water_database", timeUnit = TimeUnit.SECONDS)
public class H2OFeetMeasurement {

    @Column(name = "time")
    private Instant time;

    @Column(name = "level description")
    private String levelDescription;

    @Column(name = "location")
    private String location;

    @Column(name = "water_level")
    private Double waterLevel;

    public Instant getTime() {
        return time;
    }

    public void setTime(Instant time) {
        this.time = time;
    }

    public String getLevelDescription() {
        return levelDescription;
    }

    public void setLevelDescription(String levelDescription) {
        this.levelDescription = levelDescription;
    }

    public String getLocation() {
        return location;
    }

    public void setLocation(String location) {
        this.location = location;
    }

    public Double getWaterLevel() {
        return waterLevel;
    }

    public void setWaterLevel(Double waterLevel) {
        this.waterLevel = waterLevel;
    }
}

And then we shall fetch all the entries of the h2o_feet measurement.

package com.gkatzioura.mapper.showcase;

import java.util.List;
import java.util.logging.Logger;

import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.impl.InfluxDBMapper;

public class InfluxDBMapperShowcase {

    private static final Logger LOGGER = Logger.getLogger(InfluxDBMapperShowcase.class.getName());

    public static void main(String[] args) {

        InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "root", "root");

        InfluxDBMapper influxDBMapper = new InfluxDBMapper(influxDB);
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(H2OFeetMeasurement.class);

    }
}

After successfully fetching the data, we will continue with persisting data.


        H2OFeetMeasurement h2OFeetMeasurement = new H2OFeetMeasurement();
        h2OFeetMeasurement.setTime(Instant.now());
        h2OFeetMeasurement.setLevelDescription("Just a test");
        h2OFeetMeasurement.setLocation("London");
        h2OFeetMeasurement.setWaterLevel(1.4d);

        influxDBMapper.save(h2OFeetMeasurement);

        List<H2OFeetMeasurement> measurements = influxDBMapper.query(H2OFeetMeasurement.class);

        H2OFeetMeasurement h2OFeetMeasurement1 = measurements.get(measurements.size()-1);
        assert h2OFeetMeasurement1.getLevelDescription().equals("Just a test");

Apparently, fetching all the measurements just to get the last entry is not the most efficient thing to do. In upcoming tutorials we are going to see how to use the InfluxDBMapper with more advanced InfluxDB queries; as a teaser, a sketch follows.
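
This is a hedged sketch rather than the final approach of the series: it assumes InfluxDBMapper exposes a query(Query, Class) overload accepting an org.influxdb.dto.Query, which recent influxdb-java versions do.

import org.influxdb.dto.Query;

Query lastEntry = new Query("SELECT * FROM h2o_feet ORDER BY time DESC LIMIT 1", "NOAA_water_database");
List<H2OFeetMeasurement> lastMeasurements = influxDBMapper.query(lastEntry, H2OFeetMeasurement.class);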

Hibernate Caching with Hazelcast: Basic configuration

Previously we went through an introduction to JPA caching, its mechanisms and what Hibernate offers.

What comes next is a Hibernate project using Hazelcast as a second level cache.

We will use a basic Spring Boot project for this purpose with JPA. Spring Boot uses Hibernate as the default JPA provider.
Our setup will be pretty close to the one of a previous post.
We will use PostgreSQL with Docker for our SQL database.
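
For reference, a minimal way to spin up such a database with the official postgres image (the credentials match the application.yml shown below):

docker run -d --name postgres -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres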

group 'com.gkatzioura'
version '1.0-SNAPSHOT'

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.5.1.RELEASE")
    }
}

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'

repositories {
    mavenCentral()
}

dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-data-jpa'
    compile group: 'org.postgresql', name:'postgresql', version:'9.4-1206-jdbc42'
    compile group: 'org.springframework', name: 'spring-jdbc'
    compile group: 'com.zaxxer', name: 'HikariCP', version: '2.6.0'
    compile group: 'com.hazelcast', name: 'hazelcast-hibernate5', version: '1.2'
    compile group: 'com.hazelcast', name: 'hazelcast', version: '3.7.5'
    testCompile group: 'junit', name: 'junit', version: '4.11'
}

By examining the dependencies carefully we see the Hikari connection pool, the PostgreSQL driver, Spring Data JPA and of course Hazelcast.

Instead of creating the database manually, we will automate it by utilizing the database initialization feature of Spring Boot.

We shall create a file called schema.sql under the resources folder.

create schema spring_data_jpa_example;

create table spring_data_jpa_example.employee(
    id  SERIAL PRIMARY KEY,
    firstname   TEXT    NOT NULL,
    lastname    TEXT    NOT NULL,
    email       TEXT    not null,
    age         INT     NOT NULL,
    salary         real,
    unique(email)
);

insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
values ('Test','Me','test@me.com',18,3000.23);

To keep it simple and avoid any further configuration, we shall put the configuration for the datasource, JPA and caching inside the application.yml file.

spring:
  datasource:
    continue-on-error: true
    type: com.zaxxer.hikari.HikariDataSource
    url: jdbc:postgresql://172.17.0.2:5432/postgres
    driver-class-name: org.postgresql.Driver
    username: postgres
    password: postgres
    hikari:
      idle-timeout: 10000
  jpa:
    properties:
      hibernate:
        cache:
          use_second_level_cache: true
          use_query_cache: true
          region:
            factory_class: com.hazelcast.hibernate.HazelcastCacheRegionFactory
    show-sql: true

The spring.datasource.continue-on-error configuration is crucial: once the application is relaunched there will be a second attempt to create the schema, which will fail because it already exists, so without this setting a crash is inevitable.

Any Hibernate-specific properties reside under the spring.jpa.properties path. We enabled the second level cache and the query cache.

We also set show-sql to true. This means that whenever a query hits the database it will be logged to the console.

Then we create our Employee entity.

package com.gkatzioura.hibernate.entities;

import javax.persistence.*;

import java.io.Serializable;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

/**
 * Created by gkatzioura on 2/6/17.
 */
@Entity
@Table(name = "employee", schema="spring_data_jpa_example")
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Employee implements Serializable {

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "firstname")
    private String firstName;

    @Column(name = "lastname")
    private String lastname;

    @Column(name = "email")
    private String email;

    @Column(name = "age")
    private Integer age;

    @Column(name = "salary")
    private Integer salary;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastname() {
        return lastname;
    }

    public void setLastname(String lastname) {
        this.lastname = lastname;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public Integer getAge() {
        return age;
    }

    public void setAge(Integer age) {
        this.age = age;
    }

    public Double getSalary() {
        return salary;
    }

    public void setSalary(Double salary) {
        this.salary = salary;
    }
}

Everything is set up. Spring Boot will detect the entity and create an EntityManagerFactory on its own.
What comes next is the repository class for the employee.

package com.gkatzioura.hibernate.repository;

import com.gkatzioura.hibernate.entities.Employee;
import org.springframework.data.jpa.repository.JpaRepository;

/**
 * Created by gkatzioura on 2/11/17.
 */
public interface EmployeeRepository extends JpaRepository<Employee, Long> {
}

And the last one is the controller.

package com.gkatzioura.hibernate.controller;

import com.gkatzioura.hibernate.entities.Employee;
import com.gkatzioura.hibernate.repository.EmployeeRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

/**
 * Created by gkatzioura on 2/6/17.
 */
@RestController
public class EmployeeController {

    @Autowired
    private EmployeeRepository employeeRepository;

    @RequestMapping("/employee")
    public List<Employee> testIt() {

        return employeeRepository.findAll();
    }

    @RequestMapping("/employee/{employeeId}")
    public Employee getEmployee(@PathVariable Long employeeId) {

        return employeeRepository.findOne(employeeId);
    }

}

Once we issue a request at
http://localhost:8080/employee/1

The console will display the query issued against the database:

Hibernate: select employee0_.id as id1_0_0_, employee0_.age as age2_0_0_, employee0_.email as email3_0_0_, employee0_.firstname as firstnam4_0_0_, employee0_.lastname as lastname5_0_0_, employee0_.salary as salary6_0_0_ from spring_data_jpa_example.employee employee0_ where employee0_.id=?

The second time we issue the request, since we have the second level cache enabled, there won’t be a query issued against the database. Instead the entity will be fetched from the second level cache.

You can download the project from GitHub.

Hibernate Caching With Hazelcast: JPA caching basics

One of the greatest capabilities of Hazelcast is its support for Hibernate’s second level cache.

JPA has two levels of cache.
The first level cache caches an object’s state for the duration of a transaction. By querying the same object twice within a transaction, you get back the object you retrieved the first time.
However, in the case of complex queries which include the object you retrieved and access your database, chances are that the results will be out of sync, since they will not reflect the changes you applied to the object in memory during the transaction. You can tackle this with flush().
Once a JPA session is initiated, its first level cache is restricted to that session; it will not affect other sessions.
The first level cache is required as a part of JPA.
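
A minimal sketch of the flush() scenario described above, assuming a hypothetical User entity and an injected EntityManager:

// Inside a transaction: the change below lives in the first level cache
User user = entityManager.find(User.class, 1L);
user.setUsername("updated");

// flush() synchronizes the in-memory state with the database,
// so a query that hits the database will now see the change
entityManager.flush();
List<User> results = entityManager
        .createQuery("SELECT u FROM User u WHERE u.username = 'updated'", User.class)
        .getResultList();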

The second level cache, contrary to the first level cache, is associated with the SessionFactory, thus it is shared across sessions. Commonly used data can be stored in memory and retrieved faster.

Once you have the second level cache enabled, Hibernate will cache the entities retrieved in a Hibernate region. To do so you have to set your entities as cacheable. Under the hood the information that resides in an entity is cached in a dehydrated format.
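
For example, marking an entity as cacheable is a matter of annotations; a minimal sketch with a hypothetical User entity:

import javax.persistence.Cacheable;
import javax.persistence.Entity;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class User {
    // id, fields and accessors omitted for brevity
}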

Hazelcast can be used with the second level cache in two forms of architecture:
client-server or cluster-only.
For a start we will investigate a cluster-only architecture.
Hazelcast creates a separate distributed map for each Hibernate cache region, therefore for each entity. You can easily configure these regions via the Hazelcast map configuration; the name of each region has a corresponding Hazelcast map. For example, if one of our entities is called User and its full package path is ‘com.gkatzioura.User’, then our Hazelcast map will have the name ‘com.gkatzioura.User’. Provided that this map is distributed across the Hazelcast nodes, once the entity is retrieved on one node the cached information will be shared with the other Hazelcast nodes. Once an entity gets updated on a node, the cached information will be invalidated on the other nodes.
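
A hedged sketch of tuning such a region programmatically (Hazelcast 3.x API; the values are placeholders):

import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;

// Configure the distributed map backing the 'com.gkatzioura.User' region
Config config = new Config();
MapConfig userRegion = config.getMapConfig("com.gkatzioura.User");
userRegion.setTimeToLiveSeconds(3600);            // placeholder TTL for cached entries
userRegion.setEvictionPolicy(EvictionPolicy.LRU); // evict least recently used entries first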

Hibernate also provides us with a query cache. The query cache caches query results. For example, in the case of a JPQL query

SELECT usr.username, usr.firstname FROM User usr

the cached result would be a map with a key composed of the query and its parameters, and with the results retrieved as the value. In this case the data retrieved are primitive values and they are stored as they are.
However, there are cases in which a query might retrieve entities.
For example

SELECT c FROM Customer c

In such cases, instead of storing all the information retrieved, the entities are cached in the second level cache, whilst the query cache holds an entry using the query and its parameters as the key and the entity ids as the value.
Once the same query is issued again, the query cache will fetch the ids and will look up the corresponding entities in the second level cache. If an entity does not exist in the second level cache, then a query is issued in order to fetch the missing entity.
When it comes to second level cache and query cache configuration, we need to pay attention to the eviction mechanisms of both the second level cache and the query cache. You might stumble upon cases of ids being cached in the query cache while their corresponding entities have been evicted from the second level cache. In such cases there is a performance hit, since Hibernate will issue a query for each missing entity.
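
Note that a query participates in the query cache only when explicitly marked as cacheable; a minimal sketch with a hypothetical Customer entity:

import java.util.List;
import javax.persistence.TypedQuery;

TypedQuery<Customer> query = entityManager
        .createQuery("SELECT c FROM Customer c", Customer.class);
query.setHint("org.hibernate.cacheable", true); // route this query through the query cache
List<Customer> customers = query.getResultList();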

Hazelcast has support for the query cache; however, it is local to the node and never distributed across the Hazelcast cluster.
Although the results fetched from a query remain on the specific node, the entities specified by the cached query shall be retrieved from the distributed map which is used as the second level cache.

This is the theory we need so far. In the next blog post we will write some Spring Data JPA code and do some Hazelcast configuration.

Set up a Spring Data project using Apache Cassandra

In this post we will use Gradle and Spring Boot in order to create a project that integrates Spring MVC and the Apache Cassandra database.

First we will begin with our Gradle configuration

group 'com.gkatzioura'
version '1.0-SNAPSHOT'

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'spring-boot'

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.2.5.RELEASE")
    }
}

jar {
    baseName = 'gs-serving-web-content'
    version =  '0.1.0'
}

repositories {
    mavenCentral()
}


sourceCompatibility = 1.8


dependencies {
    compile "org.springframework.boot:spring-boot-starter-thymeleaf"
    compile "org.springframework.data:spring-data-cassandra:1.2.2.RELEASE"
    compile 'org.slf4j:slf4j-api:1.6.6'
    compile 'ch.qos.logback:logback-classic:1.0.13'
    testCompile "junit:junit"
}

task wrapper(type: Wrapper) {
    gradleVersion = '2.3'
}

We will create the keyspace and the table in our Cassandra database:

CREATE KEYSPACE IF NOT EXISTS example WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}  AND durable_writes = true;

CREATE TABLE IF NOT EXISTS example.greetings (
    user text,
    id timeuuid,
    greet text,
    creation_date timestamp,
    PRIMARY KEY (user, id)
) WITH CLUSTERING ORDER BY (id DESC);

We can run a file containing CQL statements by using cqlsh:

cqlsh -f database_creation.cql

Cassandra connection information will reside in META-INF/cassandra.properties:

cassandra.contactpoints=localhost
cassandra.port=9042
cassandra.keyspace=example

Now we can proceed with the Cassandra configuration using Spring annotations.

package com.gkatzioura.spring.config;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.core.env.Environment;
import org.springframework.data.cassandra.config.CassandraClusterFactoryBean;
import org.springframework.data.cassandra.config.CassandraSessionFactoryBean;
import org.springframework.data.cassandra.config.SchemaAction;
import org.springframework.data.cassandra.convert.CassandraConverter;
import org.springframework.data.cassandra.convert.MappingCassandraConverter;
import org.springframework.data.cassandra.core.CassandraOperations;
import org.springframework.data.cassandra.core.CassandraTemplate;
import org.springframework.data.cassandra.mapping.BasicCassandraMappingContext;
import org.springframework.data.cassandra.mapping.CassandraMappingContext;
import org.springframework.data.cassandra.repository.config.EnableCassandraRepositories;


@Configuration
@PropertySource(value = {"classpath:META-INF/cassandra.properties"})
@EnableCassandraRepositories(basePackages = {"com.gkatzioura.spring"})
public class CassandraConfig {

    @Autowired
    private Environment environment;

    private static final Logger LOGGER = LoggerFactory.getLogger(CassandraConfig.class);

    @Bean
    public CassandraClusterFactoryBean cluster() {

        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(environment.getProperty("cassandra.contactpoints"));
        cluster.setPort(Integer.parseInt(environment.getProperty("cassandra.port")));

        return cluster;
    }

    @Bean
    public CassandraMappingContext mappingContext() {
        return new BasicCassandraMappingContext();
    }

    @Bean
    public CassandraConverter converter() {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {

        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName(environment.getProperty("cassandra.keyspace"));
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.NONE);

        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }

}

Then we create the Greeting entity.

package com.gkatzioura.spring.entity;

import com.datastax.driver.core.utils.UUIDs;
import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;

import java.util.Date;
import java.util.UUID;

@Table(value = "greetings")
public class Greeting {

    @PrimaryKeyColumn(name = "id",ordinal = 1,type = PrimaryKeyType.CLUSTERED)
    private UUID id = UUIDs.timeBased();

    @PrimaryKeyColumn(name="user",ordinal = 0,type = PrimaryKeyType.PARTITIONED)
    private String user;

    @Column(value = "greet")
    private String greet;

    @Column(value = "creation_date")
    private Date creationDate;

    public UUID getId() {
        return id;
    }

    public void setId(UUID id) {
        this.id = id;
    }

    public Date getCreationDate() {
        return creationDate;
    }

    public void setCreationDate(Date creationDate) {
        this.creationDate = creationDate;
    }

    public String getUser() {
        return user;
    }

    public void setUser(String user) {
        this.user = user;
    }

    public String getGreet() {
        return greet;
    }

    public void setGreet(String greet) {
        this.greet = greet;
    }
}

In order to access the data, a repository should be created. In our case we will add some extra functionality to the repository by adding some queries.

package com.gkatzioura.spring.repository;

import com.gkatzioura.spring.entity.Greeting;
import org.springframework.data.cassandra.repository.CassandraRepository;
import org.springframework.data.cassandra.repository.Query;

import java.util.UUID;

public interface GreetRepository extends CassandraRepository<Greeting> {

    @Query("SELECT*FROM greetings WHERE user=?0 LIMIT ?1")
    Iterable<Greeting> findByUser(String user,Integer limit);

    @Query("SELECT*FROM greetings WHERE user=?0 AND id<?1 LIMIT ?2")
    Iterable<Greeting> findByUserFrom(String user,UUID from,Integer limit);

}

Now we can implement the controller in order to access the data through HTTP.
With POST we can save a Greeting entity.
Through GET we can fetch all the greetings received.
By specifying a user we can use the Cassandra query to fetch greetings for a specific user.

package com.gkatzioura.spring.controller;

import com.gkatzioura.spring.entity.Greeting;
import com.gkatzioura.spring.repository.GreetRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

@RestController
public class GreetingController {

    @Autowired
    private GreetRepository greetRepository;

    @RequestMapping(value = "/greeting",method = RequestMethod.GET)
    @ResponseBody
    public List<Greeting> greeting() {
        List<Greeting> greetings = new ArrayList<>();
        greetRepository.findAll().forEach(e->greetings.add(e));
        return greetings;
    }

    @RequestMapping(value = "/greeting/{user}/",method = RequestMethod.GET)
    @ResponseBody
    public List<Greeting> greetingUserLimit(@PathVariable String user, @RequestParam Integer limit) {
        List<Greeting> greetings = new ArrayList<>();
        greetRepository.findByUser(user,limit).forEach(e -> greetings.add(e));
        return greetings;
    }

    @RequestMapping(value = "/greeting",method = RequestMethod.POST)
    @ResponseBody
    public String saveGreeting(@RequestBody Greeting greeting) {

        greeting.setCreationDate(new Date());
        greetRepository.save(greeting);

        return "OK";
    }

}

Last but not least, our Application class:

package com.gkatzioura.spring;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}

To run the application, just run

gradle bootRun
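
Once the application is up, a quick smoke test could look like the following (the payload values are made up):

curl -X POST -H "Content-Type: application/json" -d '{"user":"john","greet":"hello"}' http://localhost:8080/greeting
curl http://localhost:8080/greeting
curl "http://localhost:8080/greeting/john/?limit=10"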