Pub/Sub local emulator

Pub/Sub is a nice tool provided by GCP. It is really handy and can help you with the messaging challenges your application might face. In fact, if you work with GCP, it is the managed messaging solution at your disposal.

As expected, working with the actual Pub/Sub service comes with quotas, so for development it is essential to use something that is not going to cost you anything.

In these cases you can use the Pub/Sub emulator. To get started with the emulator you need to install it:

gcloud components install pubsub-emulator

It is more convenient, however, to have a Docker image, since it is way more portable. Unfortunately there is no official image for it from Google Cloud, but you can use one of the solutions available on Docker Hub.
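
For instance, with the google/cloud-sdk image from Docker Hub it would look roughly like this (a sketch only; I assume the image bundles the emulator component, so double-check whichever image you pick):

# assumption: the image ships the pubsub emulator component
docker run --rm -p 8085:8085 google/cloud-sdk gcloud beta emulators pubsub start --project=test-project --host-port=0.0.0.0:8085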

Now let’s run it

gcloud beta emulators pubsub start --project=test-project

After that your application can connect to the Pub/Sub emulator. The default port is 8085.
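
If your application uses the official client libraries, you can point them to the emulator through the PUBSUB_EMULATOR_HOST environment variable; gcloud can set it for you:

$(gcloud beta emulators pubsub env-init)

The unit test below wires the channel explicitly instead, so it does not rely on this variable.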

I will use a Java unit test as an example for this one.

package org.gkatzioura.pubsub;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

import com.google.api.gax.core.CredentialsProvider;
import com.google.api.gax.core.NoCredentialsProvider;
import com.google.api.gax.grpc.GrpcTransportChannel;
import com.google.api.gax.rpc.FixedTransportChannelProvider;
import com.google.api.gax.rpc.TransportChannelProvider;
import com.google.cloud.pubsub.v1.Publisher;
import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
import com.google.cloud.pubsub.v1.SubscriptionAdminSettings;
import com.google.cloud.pubsub.v1.TopicAdminClient;
import com.google.cloud.pubsub.v1.TopicAdminSettings;
import com.google.cloud.pubsub.v1.stub.GrpcSubscriberStub;
import com.google.cloud.pubsub.v1.stub.SubscriberStub;
import com.google.cloud.pubsub.v1.stub.SubscriberStubSettings;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.ProjectTopicName;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.PullRequest;
import com.google.pubsub.v1.PullResponse;
import com.google.pubsub.v1.PushConfig;
import com.google.pubsub.v1.Subscription;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class LocalPubSubTest {

    private static final String PROJECT = "test-project";
    private static final String SUBSCRIPTION_NAME = "SUBSCRIBER";
    private static final String TOPIC_NAME = "test-topic-id";

    private static final String hostPort = "127.0.0.1:8085";

    private ManagedChannel channel;
    private TransportChannelProvider channelProvider;
    private TopicAdminClient topicAdmin;

    private Publisher publisher;
    private SubscriberStub subscriberStub;
    private SubscriptionAdminClient subscriptionAdminClient;

    private ProjectTopicName topicName = ProjectTopicName.of(PROJECT, TOPIC_NAME);
    private ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(PROJECT, SUBSCRIPTION_NAME);

    private Subscription subscription;

    @Before
    public void setUp() throws Exception {
        // Plaintext channel pointing to the local emulator; no TLS or credentials needed.
        channel = ManagedChannelBuilder.forTarget(hostPort).usePlaintext().build();
        channelProvider = FixedTransportChannelProvider.create(GrpcTransportChannel.create(channel));

        CredentialsProvider credentialsProvider = NoCredentialsProvider.create();

        // Create the topic and the subscription that the test will publish to and pull from.
        topicAdmin = createTopicAdmin(credentialsProvider);
        topicAdmin.createTopic(topicName);

        publisher = createPublisher(credentialsProvider);
        subscriberStub = createSubscriberStub(credentialsProvider);
        subscriptionAdminClient = createSubscriptionAdmin(credentialsProvider);
        subscription = subscriptionAdminClient.createSubscription(subscriptionName, topicName, PushConfig.getDefaultInstance(), 0);
    }

    @After
    public void tearDown() throws Exception {
        topicAdmin.deleteTopic(topicName);
        subscriptionAdminClient.deleteSubscription(subscription.getName());
        channel.shutdownNow();
    }

    @Test
    public void testLocalPubSub() throws Exception {
        final String messageText = "text";
        PubsubMessage pubsubMessage = PubsubMessage.newBuilder()
                                                   .setData(ByteString.copyFrom(messageText, StandardCharsets.UTF_8))
                                                   .build();
        publisher.publish(pubsubMessage).get();

        PullRequest pullRequest = PullRequest.newBuilder()
                                             .setMaxMessages(1)
                                             .setReturnImmediately(true) // return immediately if messages are not available
                                             .setSubscription(subscription.getName())
                                             .build();

        PullResponse pullResponse = subscriberStub.pullCallable().call(pullRequest);
        String receiveMessageText = pullResponse.getReceivedMessages(0).getMessage().getData().toStringUtf8();

        Assert.assertEquals(messageText, receiveMessageText);
    }

    private TopicAdminClient createTopicAdmin(CredentialsProvider credentialsProvider) throws IOException {
        return TopicAdminClient.create(
                TopicAdminSettings.newBuilder()
                                  .setTransportChannelProvider(channelProvider)
                                  .setCredentialsProvider(credentialsProvider)
                                  .build()
        );
    }

    private SubscriptionAdminClient createSubscriptionAdmin(CredentialsProvider credentialsProvider) throws IOException {
        SubscriptionAdminSettings subscriptionAdminSettings = SubscriptionAdminSettings.newBuilder()
                                                                                       .setCredentialsProvider(credentialsProvider)
                                                                                       .setTransportChannelProvider(channelProvider)
                                                                                       .build();
        return SubscriptionAdminClient.create(subscriptionAdminSettings);
    }

    private Publisher createPublisher(CredentialsProvider credentialsProvider) throws IOException {
        return Publisher.newBuilder(topicName)
                        .setChannelProvider(channelProvider)
                        .setCredentialsProvider(credentialsProvider)
                        .build();
    }

    private SubscriberStub createSubscriberStub(CredentialsProvider credentialsProvider) throws IOException {
        SubscriberStubSettings subscriberStubSettings = SubscriberStubSettings.newBuilder()
                                                                              .setTransportChannelProvider(channelProvider)
                                                                              .setCredentialsProvider(credentialsProvider)
                                                                              .build();
        return GrpcSubscriberStub.create(subscriberStubSettings);
    }

}

That’s it. Now you can have some cost-efficient unit tests!

Using Minikube on macOS

Docker Compose works wonders for me when it comes to running some simple components on my workstation.

Spawning and simulating an infrastructure locally is fast, and it is lightweight too.
However, most teams nowadays use Kubernetes.
If you want to simulate a Kubernetes environment locally, the tool to use is Minikube.

With Minikube you need to have a VM running on your workstation. This is normal; after all, your workloads in a Kubernetes environment run on multiple VMs.

So if you use macOS it’s very simple, provided you have a hypervisor installed (my default one is VirtualBox).
I just went for the direct binary download.

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \
  && chmod +x minikube
sudo mv minikube /usr/local/bin

Depending on your workstation the installation varies, but it is still an easy one.

So let’s get started

minikube start
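
If you happen to have more than one hypervisor installed, you can pin the driver explicitly (assuming your minikube version supports the flag):

minikube start --vm-driver=virtualbox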

Now you might face a challenge with Minikube on macOS.

Progress state: NS_ERROR_FAILURE
VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
VBoxManage: error: Context: "RTEXITCODE handleCreate(HandlerArg *)" at line 94 of file VBoxManageHostonly.cpp

As you can understand, you need to reinstall VirtualBox. However, you might face a challenge with the installation itself. I was getting the error ‘The installation failed’. As explained here, you need to edit your macOS security settings and allow ‘System software from Oracle’ to load.

In the end, just run:

> minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100

and you are ready to go.
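
As a quick smoke test, you can list the cluster nodes; the single minikube node should show up.

kubectl get nodes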

Debug your container by overriding the command

The main problem when debugging with Docker has to do with images that already have the CMD (or ENTRYPOINT) specified.

If something goes wrong that has to do with the filesystem, or with commands that should have taken effect but did not, you need to do some troubleshooting.

Overriding the command which is executed should be helpful. I do this all the time on custom images since I need some bash in order to be able to troubleshoot.

Supposing I need to troubleshoot something on the default nginx image.

Normally the nginx image runs like this:

docker run --rm nginx

Now, instead of running the server, we shall override the entrypoint and get a shell session by executing /bin/sh.


docker run --rm -it --entrypoint "/bin/sh" nginx

If you also want to pass arguments to the executable specified, you can append them after the image name:


docker run --rm -it --entrypoint "ls" nginx -l
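
As a side note: if the container you want to inspect is already running, there is no need to override anything; docker exec drops you into a shell, assuming the image ships one. The container name below is hypothetical.

# 'my-nginx' is a hypothetical running container name
docker exec -it my-nginx /bin/sh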


Spring Boot & Hibernate: Print queries and variables

It’s late in the office and you are stuck with this strange JPA code with join columns and cascades, and you cannot find what goes wrong. You wish there was a way to view the queries printed along with their actual values.
With a little tweaking to your Spring Boot application this is possible.


With the help of Lombok, here is our JPA model.

package com.gkatzioura.hibernatelog.dao;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

import lombok.Data;

@Data
@Entity
@Table(name = "application_user")
public class ApplicationUser {

    @Id
    private Long id;

    private String username;

    private String password;

}

Its repository:

package com.gkatzioura.hibernatelog.dao;

import org.springframework.data.repository.CrudRepository;

public interface ApplicationUserRepository extends CrudRepository<ApplicationUser, Long> {
}

A not found exception

package com.gkatzioura.hibernatelog.controller;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

@ResponseStatus(value = HttpStatus.NOT_FOUND)
class ApplicationUserNotFoundException extends RuntimeException {

    public ApplicationUserNotFoundException() {
    }

    public ApplicationUserNotFoundException(String message) {
        super(message);
    }

    public ApplicationUserNotFoundException(String message, Throwable cause) {
        super(message, cause);
    }

    public ApplicationUserNotFoundException(Throwable cause) {
        super(cause);
    }

    public ApplicationUserNotFoundException(String message, Throwable cause, boolean enableSuppression, boolean writableStackTrace) {
        super(message, cause, enableSuppression, writableStackTrace);
    }
}

And a controller

package com.gkatzioura.hibernatelog.controller;

import java.util.Optional;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

import com.gkatzioura.hibernatelog.dao.ApplicationUser;
import com.gkatzioura.hibernatelog.dao.ApplicationUserRepository;

@RestController
public class ApplicationUserController {

    private final ApplicationUserRepository applicationUserRepository;

    public ApplicationUserController(ApplicationUserRepository applicationUserRepository) {
        this.applicationUserRepository = applicationUserRepository;
    }

    @GetMapping("/user/{id}")
    @ResponseBody
    public ApplicationUser getApplicationUser(@PathVariable Long id) {
        Optional<ApplicationUser> applicationUser = applicationUserRepository.findById(id);
        if(applicationUser.isPresent()) {
            return applicationUser.get();
        } else {
            throw new ApplicationUserNotFoundException();
        }
    }

}

By adding the following to application.yaml we ensure the creation of the table through Hibernate, the logging of the queries, the formatting of the logged SQL, and also the display of the actual parameter values.

spring:
  jpa:
    hibernate:
      ddl-auto: create
    properties:
      hibernate:
        show_sql: true
        use_sql_comments: true
        format_sql: true
logging:
  level:
    org:
      hibernate:
        type: trace
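
If you prefer application.properties over YAML, the equivalent configuration should be the following (to the best of my knowledge):

spring.jpa.hibernate.ddl-auto=create
spring.jpa.properties.hibernate.show_sql=true
spring.jpa.properties.hibernate.use_sql_comments=true
spring.jpa.properties.hibernate.format_sql=true
logging.level.org.hibernate.type=trace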

Just

curl http://localhost:8080/user/1

And you’ve got your logs.

Docker Compose: run stack dynamically

I use Docker Compose every day for my local development needs.


During the day I might turn various databases or servers on and off, thus I need to do it fast and in a managed way.

Usually your docker-compose file contains the configuration for many containers, networks, volumes, etc.

stack.yaml

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: username
      MONGO_INITDB_ROOT_PASSWORD: password
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: username
      ME_CONFIG_MONGODB_ADMINPASSWORD: password

This works if you always want the same services up and running.

However, it does have a cost in resources, and most of the time you don’t need the full stack.

What you can do in these cases is split them into files and choose what to use.

mongo.yaml

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: username
      MONGO_INITDB_ROOT_PASSWORD: password

express.yaml

version: '3.5'

services:
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: username
      ME_CONFIG_MONGODB_ADMINPASSWORD: password

Then choosing what to use becomes very easy: just pass the files you need and omit the rest.

docker-compose -f mongo.yaml -f express.yaml up
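
And if only the database is needed:

docker-compose -f mongo.yaml up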

Pass multiple commands on Docker run

Docker, apart from serving our workloads efficiently, is also an amazing tool when it comes to not installing additional binaries on your workstation.


Eventually you will find it very easy and simple to run just a single command in Docker.

For example, I want to run a hello world in Go.

My source code is going to be the simple hello world.

package main

import "fmt"

func main() {
    fmt.Println("hello world")
}

Pretty simple! The file shall be named hello_world.go.

Now let’s run this in a container.

docker run -v $(pwd):/go/src/app --rm --name helloworld golang:1.8 go run src/app/hello_world.go

How about installing some Go packages and then running our application in a one-liner?

If you try to do so, you shall realise that Docker won’t interpret the commands the way you want. Here’s how to get the result that you want.

If your image contains the /bin/bash or /bin/sh binary, you can pass the commands you want to execute as a string.

docker run -v $(pwd):/go/src/app --rm --name helloworld golang:1.8 /bin/bash -c "cd src/app; go get yourpackage; go run hello_world.go"

That’s it! Now you can run complex bash one-liners without worrying about installing additional software on your workstation.

Run your first Gatling load test using Scala

Gatling is a neat tool. You can create your load tests just by coding in Scala. JMeter allows you to do so through a plugin or BeanShell, but it is not as direct as the way Gatling does it.

I will start by adding the Gatling plugin to project/plugins.sbt.

addSbtPlugin("io.gatling" % "gatling-sbt" % "3.0.0")

The next step is to change build.sbt.


version := "0.1"
scalaVersion := "2.12.8"

enablePlugins(GatlingPlugin)

scalacOptions := Seq(
  "-encoding", "UTF-8", "-target:jvm-1.8", "-deprecation",
  "-feature", "-unchecked", "-language:implicitConversions", "-language:postfixOps")
libraryDependencies += "io.gatling.highcharts" % "gatling-charts-highcharts" % "3.1.2" % "test,it"
libraryDependencies += "io.gatling"            % "gatling-test-framework"    % "3.1.2" % "test,it"

The above is no different from what you can find on the official site when it comes to sbt commands and Gatling.

Our next step is to add a simple HTTP test. Be aware that you should add it under src/test or src/it, since the sbt dependencies above are scoped to the test and it configurations and only take effect on these directories.

I shall put this test in src/test/scala/com/gkatzioura/BasicSimulation.scala.

package com.gkatzioura

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class BasicSimulation extends Simulation {

  val httpConf = http.baseUrl("http://yourapi.com")
      .doNotTrackHeader("1")

  val scn = scenario("BasicSimulation")
    .exec(http("request_1")
    .get("/"))
    .pause(5)

  setUp(scn.inject(atOnceUsers(1))).protocols(httpConf)
}

Afterwards testing is simple. You go to sbt mode and execute the test.

sbt
>gatling:testOnly com.gkatzioura.BasicSimulation
>gatling:test

The first command instructs sbt to run just one test; the second one runs everything.

That’s it! Pretty simple.

A guide to the InfluxDBMapper and QueryBuilder for Java: Into and Order

Previously we used the group by statement extensively in order to execute complex aggregation queries.

In this tutorial we are going to have a look at ‘into’ statements and the ‘order by’ clause.

Apart from inserting or selecting data, we might as well want to persist the results of one query into another table. The use cases for something like this vary: for example, you might have a complex operation that cannot be executed in one single query.

Before we continue, make sure you have an InfluxDB instance up and running.

The most common action with an into query would be to populate a measurement with the results of a previous query.

Let’s copy a database.

Query query = select()
                .into("\"copy_NOAA_water_database\".\"autogen\".:MEASUREMENT")
                .from(DATABASE, "\"NOAA_water_database\".\"autogen\"./.*/")
                .groupBy(new RawText("*"));

The result of this query will be to copy the data of every measurement in NOAA_water_database into matching measurements in copy_NOAA_water_database.

SELECT * INTO "copy_NOAA_water_database"."autogen".:MEASUREMENT FROM "NOAA_water_database"."autogen"./.*/ GROUP BY *;

Now let’s copy just one column into another table.

        Query query = select().column("water_level")
                               .into("h2o_feet_copy_1")
                               .from(DATABASE,"h2o_feet")
                               .where(eq("location","coyote_creek"));

Below is the query which is going to be executed.

SELECT water_level INTO h2o_feet_copy_1 FROM h2o_feet WHERE location = 'coyote_creek';

Also we can do exactly the same thing with aggregations.

Query query = select()
                .mean("water_level")
                .into("all_my_averages")
                .from(DATABASE,"h2o_feet")
                .where(eq("location","coyote_creek"))
                .and(gte("time","2015-08-18T00:00:00Z"))
                .and(lte("time","2015-08-18T00:30:00Z"))
                .groupBy(time(12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

This generates a query which persists the aggregation result into a table.

SELECT MEAN(water_level) INTO all_my_averages FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m);

Order clauses

InfluxDB does provide ordering; however, it is limited to the time field only.
So we will execute a query with ascending order.

Query query = select().from(DATABASE, "h2o_feet")
                               .where(eq("location","santa_monica"))
                               .orderBy(asc());

And we get the ascending ordering as expected.

SELECT * FROM h2o_feet WHERE location = 'santa_monica' ORDER BY time ASC;

And the same query executed with descending order:

Query query = select().from(DATABASE, "h2o_feet")
                               .where(eq("location","santa_monica"))
                               .orderBy(desc());

SELECT * FROM h2o_feet WHERE location = 'santa_monica' ORDER BY time DESC;

That’s it! We just created some new databases and measurements by using existing data in our database. We also executed some statements where we specified the time ordering.
You can find the source code on GitHub.

A guide to the InfluxDBMapper and QueryBuilder for Java: Group By

Previously we executed some selection examples and aggregations against an InfluxDB database. In this tutorial we are going to check the group by functionality that the Query Builder provides us with.

Before you start, you need to spin up an InfluxDB instance with the data needed.

Supposing that we want to group by a single tag, we shall use the groupBy function.

        Query query = select().mean("water_level").from(DATABASE, "h2o_feet").groupBy("location");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The query to be executed shall be

SELECT MEAN(water_level) FROM h2o_feet GROUP BY location;

If we want to group by multiple tags we will pass an array of tags.

        Query query = select().mean("index").from(DATABASE,"h2o_feet")
                              .groupBy("location","randtag");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result will be

SELECT MEAN(index) FROM h2o_feet GROUP BY location,randtag;

Another option is to group by all tags.

        Query query = select().mean("index").from(DATABASE,"h2o_feet")
                              .groupBy(raw("*"));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

SELECT MEAN(index) FROM h2o_feet GROUP BY *;

Since InfluxDB is a time series database, we have great group by functionality based on time.

For example, let’s group query results into 12-minute intervals.

        Query query = select().count("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:00:00Z"))
                              .and(lte("time","2015-08-18T00:30:00Z"))
                              .groupBy(time(12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We get the result

SELECT COUNT(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m);

Group results by 12-minute intervals and location.

        Query query = select().count("water_level").from(DATABASE,"h2o_feet")
                              .where()
                              .and(gte("time","2015-08-18T00:00:00Z"))
                              .and(lte("time","2015-08-18T00:30:00Z"))
                              .groupBy(time(12l,MINUTE),"location");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We get the following query.

SELECT COUNT(water_level) FROM h2o_feet WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m),location;

We will get more advanced and group query results into 18-minute intervals, shifting the preset time boundaries forward.

        Query query = select().mean("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:06:00Z"))
                              .and(lte("time","2015-08-18T00:54:00Z"))
                              .groupBy(time(18l,MINUTE,6l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

SELECT MEAN(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:06:00Z' AND time <= '2015-08-18T00:54:00Z' GROUP BY time(18m,6m);

Or group query results into 18-minute intervals and shift the preset time boundaries back:

        Query query = select().mean("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:06:00Z"))
                              .and(lte("time","2015-08-18T00:54:00Z"))
                              .groupBy(time(18l,MINUTE,-12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result would be

SELECT MEAN(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:06:00Z' AND time <= '2015-08-18T00:54:00Z' GROUP BY time(18m,-12m);

Finally, we can group by and fill.

        Query query = select()
                .column("water_level")
                .from(DATABASE, "h2o_feet")
                .where(gt("time", op(ti(24043524l, MINUTE), SUB, ti(6l, MINUTE))))
                .groupBy("water_level")
                .fill(100);
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result would be

SELECT water_level FROM h2o_feet WHERE time > 24043524m - 6m GROUP BY water_level fill(100);

That’s it! We just ran some really complex group by queries against our InfluxDB database. The query builder makes it possible to create these queries using only Java.
You can find the source code on GitHub.

A guide to the InfluxDBMapper and QueryBuilder for Java Part: 2

Previously we set up an InfluxDB instance running through Docker, and we also ran our first InfluxDBMapper code against an InfluxDB database.

The next step is to execute some queries against InfluxDB using the QueryBuilder combined with the InfluxDBMapper.

Let’s get started and select everything from the h2o_feet measurement, mapping the results to the H2OFeetMeasurement class.

private static final String DATABASE = "NOAA_water_database";

public static void main(String[] args) {
    InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "root", "root");

    InfluxDBMapper influxDBMapper = new InfluxDBMapper(influxDB);

    Query query = select().from(DATABASE,"h2o_feet");
    List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);
}

Let’s get more specific: we will select measurements with a water level higher than 8.

        Query query = select().from(DATABASE,"h2o_feet").where(gt("water_level",8));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> higherThanMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

I bet you noticed the query.getCommand() detail. If you want to see the actual query that is being executed, you can call the getCommand() method of the query.
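
For the query above, the printed command should look more or less like this (my assumption of the builder’s output):

SELECT * FROM h2o_feet WHERE water_level > 8;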

Apart from where statements, we can perform certain operations on fields, such as calculations.

        Query query = select().op(op(cop("water_level",MUL,2),"+",4)).from(DATABASE,"h2o_feet");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We just used the cop function to multiply the water level by 2; cop creates a clause which executes an operation on a column. Then we increment the product of the previous operation by 4 using the op function, which creates a clause executing an operation between the two arguments given.
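
The command generated should be something along these lines (again, my assumption of the builder’s output):

SELECT water_level * 2 + 4 FROM h2o_feet;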

The next case is to select using a specific string field key-value.

        Query query = select().from(DATABASE,"h2o_feet").where(eq("location","santa_monica"));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

Things can get even more specific: we can select data that has specific field key-values and tag key-values.

        Query query = select().column("water_level").from(DATABASE,"h2o_feet")
                              .where(neq("location","santa_monica"))
                              .andNested()
                              .and(lt("water_level",-0.59))
                              .or(gt("water_level",9.95))
                              .close();
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

Since InfluxDB is a time series database, it is essential to issue queries with specific timestamps.

        Query query = select().from(DATABASE,"h2o_feet")
                              .where(gt("time",subTime(7,DAY)));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);
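
The subTime function should translate to relative time arithmetic, so the generated query would presumably be:

SELECT * FROM h2o_feet WHERE time > now() - 7d;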

Last but not least, we can make a query for specific fields. I will create a model just for the fields that we are going to retrieve.

package com.gkatzioura.mapper.showcase;

import java.time.Instant;
import java.util.concurrent.TimeUnit;

import org.influxdb.annotation.Column;
import org.influxdb.annotation.Measurement;

@Measurement(name = "h2o_feet", timeUnit = TimeUnit.SECONDS)
public class LocationWithDescription {

    @Column(name = "time")
    private Instant time;

    @Column(name = "level description")
    private String levelDescription;

    @Column(name = "location")
    private String location;

    public Instant getTime() {
        return time;
    }

    public void setTime(Instant time) {
        this.time = time;
    }

    public String getLevelDescription() {
        return levelDescription;
    }

    public void setLevelDescription(String levelDescription) {
        this.levelDescription = levelDescription;
    }

    public String getLocation() {
        return location;
    }

    public void setLocation(String location) {
        this.location = location;
    }
}

And now I shall query for them.

Query selectFields = select("level description","location").from(DATABASE,"h2o_feet");
List<LocationWithDescription> locationWithDescriptions = influxDBMapper.query(selectFields, LocationWithDescription.class);
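
The generated query should be along these lines (since the field key contains a space, I would expect it to be quoted):

SELECT "level description", location FROM h2o_feet;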

As you can see, we can also map certain fields to a model. For now, mapping to models can be done only when data comes from a certain measurement. Thus we shall proceed with more query-builder-specific examples next time.

You can find the source code on GitHub.