Spring Boot and Micrometer with Prometheus Part 6: Securing metrics

Previously we successfully spun up our Spring Boot application with Prometheus. An endpoint in our Spring application exposes our metric data so that Prometheus is able to retrieve it.
The main question that comes to mind is how to secure this information.

Spring already provides us with its great security framework, so it will be fairly easy to use it for our application. The goal is to protect the actuator/prometheus endpoint with basic authentication and to configure Prometheus so that it accesses that endpoint using those credentials.

So the first step is to enable security on our app by adding the Spring Security starter dependency.

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>

The Spring Boot application will secure itself by generating a password for the default user.
However, we do want control over the username and password, so we are going to supply them through environment variables.

By running the application with credentials for the default user, we have the Prometheus endpoint secured with minimal configuration.

SPRING_SECURITY_USER_NAME=test-user SPRING_SECURITY_USER_PASSWORD=test-password mvn spring-boot:run
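
The starter alone secures every endpoint with the default user. If we want to be explicit about which paths require authentication, a security configuration could look like the sketch below. This is only an illustrative sketch, assuming a servlet-based Spring Boot 2.x application; the class name and matcher choices are not part of the original setup.

import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

// Illustrative sketch: require basic authentication for the actuator endpoints only.
@EnableWebSecurity
public class ActuatorSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                // the Prometheus scrape endpoint lives under /actuator
                .antMatchers("/actuator/**").authenticated()
                // everything else is left open here; adjust to your needs
                .anyRequest().permitAll()
                .and()
                .httpBasic();
    }
}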

So now that we have security set up on our app, it's time to update our Prometheus config.

scrape_configs:
  - job_name: 'prometheus-spring'
    scrape_interval: 1m
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['my.local.machine:8080']
    basic_auth:
      username: "test-user"
      password: "test-password"

So let's run Prometheus again, as described previously.

To sum up, after this change Prometheus will gather the metrics of our application in a secure way.

Spring Boot and Micrometer with InfluxDB Part 2: Adding InfluxDB

Now that we have our base application in place, it is time to spin up an InfluxDB instance.

We shall follow a previous tutorial and run it as a Docker container.

docker run --rm -p 8086:8086 --name influxdb-local influxdb

Time to add the Micrometer InfluxDB dependency to our pom:

<dependencies>
...
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-core</artifactId>
            <version>1.3.2</version>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-influx</artifactId>
            <version>1.3.2</version>
        </dependency>
...
</dependencies>

Time to add the configuration through application.yaml:

management:
  metrics:
    export:
      influx:
        enabled: true
        db: devjobsapi
        uri: http://127.0.0.1:8086
  endpoints:
    web:
      exposure:
        include: "*"
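
Besides the auto-configured JVM and HTTP server metrics, any custom meter we register through Micrometer is exported to InfluxDB the same way. A minimal sketch (the service class and the jobs.fetched counter name are illustrative, not part of the original project):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

import org.springframework.stereotype.Service;

@Service
public class JobMetrics {

    private final Counter jobsFetched;

    public JobMetrics(MeterRegistry meterRegistry) {
        // should show up in InfluxDB as a jobs_fetched measurement
        this.jobsFetched = Counter.builder("jobs.fetched")
                                  .description("Number of job fetch requests served")
                                  .register(meterRegistry);
    }

    public void markFetched() {
        jobsFetched.increment();
    }
}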

Let’s spin up our application and do some requests.
After some time we can check the database and the data it contains.

docker exec -it influxdb-local influx
> SHOW DATABASES;
name: databases
name
----
_internal
devjobsapi
> use devjobsapi
Using database devjobsapi
> SHOW MEASUREMENTS
name: measurements
name
----
http_server_requests
jvm_buffer_count
jvm_buffer_memory_used
jvm_buffer_total_capacity
jvm_classes_loaded
jvm_classes_unloaded
jvm_gc_live_data_size
jvm_gc_max_data_size
jvm_gc_memory_allocated
jvm_gc_memory_promoted
jvm_gc_pause
jvm_memory_committed
jvm_memory_max
jvm_memory_used
jvm_threads_daemon
jvm_threads_live
jvm_threads_peak
jvm_threads_states
logback_events
process_cpu_usage
process_files_max
process_files_open
process_start_time
process_uptime
system_cpu_count
system_cpu_usage
system_load_average_1m

That’s pretty awesome. Let’s check the endpoints accessed.

> SELECT * FROM http_server_requests;
name: http_server_requests
time                count exception mean        method metric_type outcome status sum         upper       uri
----                ----- --------- ----        ------ ----------- ------- ------ ---         -----       ---
1582586157093000000 1     None      252.309331  GET    histogram   SUCCESS 200    252.309331  252.309331  /actuator
1582586157096000000 0     None      0           GET    histogram   SUCCESS 200    0           2866.531375 /jobs/github/{page}

Pretty great! The next step would be to visualise those metrics.

Spring Boot and Micrometer with InfluxDB Part 1: The base project

For those who follow this blog it is no surprise that I tend to use InfluxDB a lot. I like the fact that it is a true single-purpose (time series) database with many features, and it also comes with enterprise support.

Spring is also one of my tools of choice.
Thus, in this post we shall integrate Spring with Micrometer and InfluxDB.

Our application will be a REST API for jobs.
Initially it will fetch the jobs from GitHub's job API, as shown here.

Let's start with the pom:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.4.RELEASE</version>
    </parent>

    <groupId>com.gkatzioura</groupId>
    <artifactId>DevJobsApi</artifactId>
    <version>1.0-SNAPSHOT</version>

    <build>
        <defaultGoal>spring-boot:run</defaultGoal>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>8</source>
                    <target>8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-webflux</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.12</version>
            <scope>provided</scope>
        </dependency>
   </dependencies>
</project>

Let’s add the Job Repository for GitHub.

package com.gkatzioura.jobs.repository;

import java.util.List;

import org.springframework.http.HttpMethod;
import org.springframework.stereotype.Repository;
import org.springframework.web.reactive.function.client.WebClient;

import com.gkatzioura.jobs.model.Job;

import reactor.core.publisher.Mono;

@Repository
public class GitHubJobRepository {

    private WebClient githubClient;

    public GitHubJobRepository() {
        this.githubClient = WebClient.create("https://jobs.github.com");
    }

    public Mono<List<Job>> getJobsFromPage(int page) {

        return githubClient.method(HttpMethod.GET)
                           .uri("/positions.json?page=" + page)
                           .retrieve()
                           .bodyToFlux(Job.class)
                           .collectList();
    }

}

The Job model

package com.gkatzioura.jobs.model;

import lombok.Data;

@Data
public class Job {

    private String id;
    private String type;
    private String url;
    private String createdAt;
    private String company;
    private String companyUrl;
    private String location;
    private String title;
    private String description;

}

The controller

package com.gkatzioura.jobs.controller;

import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import com.gkatzioura.jobs.model.Job;
import com.gkatzioura.jobs.repository.GitHubJobRepository;

import reactor.core.publisher.Mono;

@RestController
@RequestMapping("/jobs")
public class JobsController {

    private final GitHubJobRepository gitHubJobRepository;

    public JobsController(GitHubJobRepository gitHubJobRepository) {
        this.gitHubJobRepository = gitHubJobRepository;
    }

    @GetMapping("/github/{page}")
    public Mono<List<Job>> getJobsByPage(@PathVariable int page) {
        return gitHubJobRepository.getJobsFromPage(page);
    }

}

And last but not least the main application.

package com.gkatzioura;


import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.security.reactive.ReactiveSecurityAutoConfiguration;

@SpringBootApplication
@EnableAutoConfiguration(exclude = {
        ReactiveSecurityAutoConfiguration.class
})
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

In the next blog post we are going to integrate with InfluxDB and Micrometer.

A guide to the InfluxDBMapper and QueryBuilder for Java: Into and Order

Previously we used the group by statement extensively in order to execute complex aggregation queries.

In this tutorial we are going to have a look at 'into' statements and the 'order by' clause.

Apart from inserting or selecting data, we may also want to persist the results of one query into another measurement. The use cases for this vary; for example, you might have a complex operation that cannot be executed in a single query.

Before we continue, make sure you have an influxdb instance up and running.
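
One note on the snippets that follow: they rely on the static factory methods of the influxdb-java QueryBuilder (select, eq, gte, asc and friends), so the corresponding static imports need to be in place, presumably along these lines (check the driver's query builder documentation for the exact class):

// assumed static imports for the QueryBuilder factory methods used below
import static org.influxdb.querybuilder.BuiltQuery.QueryBuilder.*;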

The most common action with an into query would be to populate a measurement with the results of a previous query.

Let’s copy a database.

Query query = select()
                .into("\"copy_NOAA_water_database\".\"autogen\".:MEASUREMENT")
                .from(DATABASE, "\"NOAA_water_database\".\"autogen\"./.*/")
                .groupBy(new RawText("*"));

The result of this query will be to copy every measurement of the NOAA_water_database into the copy_NOAA_water_database database.

SELECT * INTO "copy_NOAA_water_database"."autogen".:MEASUREMENT FROM "NOAA_water_database"."autogen"./.*/ GROUP BY *;

Now let's copy just a single column into another measurement.

        Query query = select().column("water_level")
                               .into("h2o_feet_copy_1")
                               .from(DATABASE,"h2o_feet")
                               .where(eq("location","coyote_creek"));

Below is the query which is going to be executed.

SELECT water_level INTO h2o_feet_copy_1 FROM h2o_feet WHERE location = 'coyote_creek';

Also we can do exactly the same thing with aggregations.

Query query = select()
                .mean("water_level")
                .into("all_my_averages")
                .from(DATABASE,"h2o_feet")
                .where(eq("location","coyote_creek"))
                .and(gte("time","2015-08-18T00:00:00Z"))
                .and(lte("time","2015-08-18T00:30:00Z"))
                .groupBy(time(12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

This generates a query which persists the aggregation result into a measurement.

SELECT MEAN(water_level) INTO all_my_averages FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m);

Order clauses

InfluxDB does provide ordering; however, it is limited to ordering by time.
So we will execute a query with ascending order.

Query query = select().from(DATABASE, "h2o_feet")
                               .where(eq("location","santa_monica"))
                               .orderBy(asc());

And we get the ascending ordering as expected.

SELECT * FROM h2o_feet WHERE location = 'santa_monica' ORDER BY time ASC;

And we shall execute the same query with descending order.

Query query = select().from(DATABASE, "h2o_feet")
                               .where(eq("location","santa_monica"))
                               .orderBy(desc());

SELECT * FROM h2o_feet WHERE location = 'santa_monica' ORDER BY time DESC;

That's it! We just created new databases and measurements using only the existing data in our database. We also executed some statements specifying the time ordering.
You can find the source code on GitHub.

A guide to the InfluxDBMapper and QueryBuilder for Java: Group By

Previously we executed some selection examples and aggregations against an InfluxDB database. In this tutorial we are going to check the group by functionality that the QueryBuilder provides.

Before you start, you need to spin up an InfluxDB instance with the data needed.

Supposing that we want to group by a single tag, we shall use the groupBy function.

        Query query = select().mean("water_level").from(DATABASE, "h2o_feet").groupBy("location");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The query to be executed shall be

SELECT MEAN(water_level) FROM h2o_feet GROUP BY location;
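
The returned QueryResult holds one series per group. If you want to inspect what came back, a small illustrative loop over the result (not part of the original post; QueryResult here is org.influxdb.dto.QueryResult) is enough:

        // illustrative: print every series returned by influxDB.query(query)
        for (QueryResult.Result result : queryResult.getResults()) {
            if (result.getSeries() == null) {
                continue;
            }
            for (QueryResult.Series series : result.getSeries()) {
                // each GROUP BY combination comes back as its own series with its tag values
                LOGGER.info(series.getName() + " " + series.getTags());
                LOGGER.info(String.valueOf(series.getColumns()));
                series.getValues().forEach(values -> LOGGER.info(String.valueOf(values)));
            }
        }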

If we want to group by multiple tags we will pass an array of tags.

        Query query = select().mean("index").from(DATABASE,"h2o_feet")
                              .groupBy("location","randtag");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result will be

SELECT MEAN(index) FROM h2o_feet GROUP BY location,randtag;

Another option is to group by all tags.

        Query query = select().mean("index").from(DATABASE,"h2o_feet")
                              .groupBy(raw("*"));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

SELECT MEAN(index) FROM h2o_feet GROUP BY *;

Since InfluxDB is a time series database we have great group by functionality based on time.

For example, let's group query results into 12-minute intervals.

        Query query = select().count("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:00:00Z"))
                              .and(lte("time","2015-08-18T00:30:00Z"))
                              .groupBy(time(12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We get the result

SELECT COUNT(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m);

Now let's group results into 12-minute intervals and by location.

        Query query = select().count("water_level").from(DATABASE,"h2o_feet")
                              .where()
                              .and(gte("time","2015-08-18T00:00:00Z"))
                              .and(lte("time","2015-08-18T00:30:00Z"))
                              .groupBy(time(12l,MINUTE),"location");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We get the following query.

SELECT COUNT(water_level) FROM h2o_feet WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z' GROUP BY time(12m),location;

Let's get more advanced and group query results into 18-minute intervals, shifting the preset time boundaries forward by 6 minutes.

        Query query = select().mean("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:06:00Z"))
                              .and(lte("time","2015-08-18T00:54:00Z"))
                              .groupBy(time(18l,MINUTE,6l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

SELECT MEAN(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:06:00Z' AND time <= '2015-08-18T00:54:00Z' GROUP BY time(18m,6m);

Or group query results into 18-minute intervals and shift the preset time boundaries back by 12 minutes.

        Query query = select().mean("water_level").from(DATABASE,"h2o_feet")
                              .where(eq("location","coyote_creek"))
                              .and(gte("time","2015-08-18T00:06:00Z"))
                              .and(lte("time","2015-08-18T00:54:00Z"))
                              .groupBy(time(18l,MINUTE,-12l,MINUTE));
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result would be

SELECT MEAN(water_level) FROM h2o_feet WHERE location = 'coyote_creek' AND time >= '2015-08-18T00:06:00Z' AND time <= '2015-08-18T00:54:00Z' GROUP BY time(18m,-12m);

Finally, we can group by and fill missing values.

        Query query = select()
                .column("water_level")
                .from(DATABASE, "h2o_feet")
                .where(gt("time", op(ti(24043524l, MINUTE), SUB, ti(6l, MINUTE))))
                .groupBy("water_level")
                .fill(100);
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

The result would be

SELECT water_level FROM h2o_feet WHERE time > 24043524m - 6m GROUP BY water_level fill(100);

That's it! We just ran some fairly complex group by queries against our InfluxDB database. The QueryBuilder makes it possible to create such queries using only Java.
You can find the source code on GitHub.

A guide to the InfluxDBMapper and QueryBuilder for Java Part: 2

Previously we set up an InfluxDB instance running through Docker and ran our first InfluxDBMapper code against an InfluxDB database.

The next step is to execute some queries against influxdb using the QueryBuilder combined with the InfluxDBMapper.

Let's get started and select everything from the h2o_feet measurement, mapped to the H2OFeetMeasurement class.

private static final String DATABASE = "NOAA_water_database";

public static void main(String[] args) {
    InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "root", "root");

    InfluxDBMapper influxDBMapper = new InfluxDBMapper(influxDB);

    Query query = select().from(DATABASE,"h2o_feet");
    List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);
}

Let's get more specific: we will select measurements with a water level higher than 8.

        Query query = select().from(DATABASE,"h2o_feet").where(gt("water_level",8));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> higherThanMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

I bet you noticed the query.getCommand() detail. If you want to see the actual query that is being executed you can call the getCommand() method from the query.

Apart from where clauses, we can perform certain operations on fields, such as calculations.

        Query query = select().op(op(cop("water_level",MUL,2),"+",4)).from(DATABASE,"h2o_feet");
        LOGGER.info("Executing query "+query.getCommand());
        QueryResult queryResult = influxDB.query(query);

We just used the cop function to multiply the water level by 2; cop creates a clause which applies an operation to a column. Then we add 4 to the result of the previous operation by using the op function; op creates a clause which applies an operation to the two arguments given.

The next case is to select using a specific string field key-value.

        Query query = select().from(DATABASE,"h2o_feet").where(eq("location","santa_monica"));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

Things can get even more specific: we can select data that have specific field key-values and tag key-values.

        Query query = select().column("water_level").from(DATABASE,"h2o_feet")
                              .where(neq("location","santa_monica"))
                              .andNested()
                              .and(lt("water_level",-0.59))
                              .or(gt("water_level",9.95))
                              .close();
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

Since InfluxDB is a time series database, it is essential to issue queries with specific timestamps.

        Query query = select().from(DATABASE,"h2o_feet")
                              .where(gt("time",subTime(7,DAY)));
        LOGGER.info("Executing query "+query.getCommand());
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(query, H2OFeetMeasurement.class);

Last but not least we can make a query for specific fields. I will create a model just for the fields that we are going to retrieve.

package com.gkatzioura.mapper.showcase;

import java.time.Instant;
import java.util.concurrent.TimeUnit;

import org.influxdb.annotation.Column;
import org.influxdb.annotation.Measurement;

@Measurement(name = "h2o_feet", timeUnit = TimeUnit.SECONDS)
public class LocationWithDescription {

    @Column(name = "time")
    private Instant time;

    @Column(name = "level description")
    private String levelDescription;

    @Column(name = "location")
    private String location;

    public Instant getTime() {
        return time;
    }

    public void setTime(Instant time) {
        this.time = time;
    }

    public String getLevelDescription() {
        return levelDescription;
    }

    public void setLevelDescription(String levelDescription) {
        this.levelDescription = levelDescription;
    }

    public String getLocation() {
        return location;
    }

    public void setLocation(String location) {
        this.location = location;
    }
}

And now I shall query for them.

Query selectFields = select("level description","location").from(DATABASE,"h2o_feet");
List<LocationWithDescription> locationWithDescriptions = influxDBMapper.query(selectFields, LocationWithDescription.class);

As you can see, we can also map only certain fields to a model. For now, mapping to models can be done only when the data comes from a single measurement. We shall proceed with more QueryBuilder-specific examples next time.

You can find the source code on GitHub.

A guide to the InfluxDBMapper and QueryBuilder for Java Part: 1

With the release of the latest influxdb-java driver version came the InfluxDBMapper.

To get started we need to spin up an InfluxDB instance, and Docker is the easiest way to do so. We just follow the steps described here.

Now we have a database with some data and we are ready to execute our queries.

We have the measurement h2o_feet:

> SELECT * FROM "h2o_feet"

name: h2o_feet
--------------
time                   level description      location       water_level
2015-08-18T00:00:00Z   below 3 feet           santa_monica   2.064
2015-08-18T00:00:00Z   between 6 and 9 feet   coyote_creek   8.12
[...]
2015-09-18T21:36:00Z   between 3 and 6 feet   santa_monica   5.066
2015-09-18T21:42:00Z   between 3 and 6 feet   santa_monica   4.938

So we shall create a model for that.

package com.gkatzioura.mapper.showcase;

import java.time.Instant;
import java.util.concurrent.TimeUnit;

import org.influxdb.annotation.Column;
import org.influxdb.annotation.Measurement;

@Measurement(name = "h2o_feet", database = "NOAA_water_database", timeUnit = TimeUnit.SECONDS)
public class H2OFeetMeasurement {

    @Column(name = "time")
    private Instant time;

    @Column(name = "level description")
    private String levelDescription;

    @Column(name = "location")
    private String location;

    @Column(name = "water_level")
    private Double waterLevel;

    public Instant getTime() {
        return time;
    }

    public void setTime(Instant time) {
        this.time = time;
    }

    public String getLevelDescription() {
        return levelDescription;
    }

    public void setLevelDescription(String levelDescription) {
        this.levelDescription = levelDescription;
    }

    public String getLocation() {
        return location;
    }

    public void setLocation(String location) {
        this.location = location;
    }

    public Double getWaterLevel() {
        return waterLevel;
    }

    public void setWaterLevel(Double waterLevel) {
        this.waterLevel = waterLevel;
    }
}

And then we shall fetch all the entries of the h2o_feet measurement.

package com.gkatzioura.mapper.showcase;

import java.util.List;
import java.util.logging.Logger;

import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.impl.InfluxDBImpl;
import org.influxdb.impl.InfluxDBMapper;

public class InfluxDBMapperShowcase {

    private static final Logger LOGGER = Logger.getLogger(InfluxDBMapperShowcase.class.getName());

    public static void main(String[] args) {

        InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "root", "root");

        InfluxDBMapper influxDBMapper = new InfluxDBMapper(influxDB);
        List<H2OFeetMeasurement> h2OFeetMeasurements = influxDBMapper.query(H2OFeetMeasurement.class);

    }
}

After successfully fetching the data, we will continue with persisting data.


        H2OFeetMeasurement h2OFeetMeasurement = new H2OFeetMeasurement();
        h2OFeetMeasurement.setTime(Instant.now());
        h2OFeetMeasurement.setLevelDescription("Just a test");
        h2OFeetMeasurement.setLocation("London");
        h2OFeetMeasurement.setWaterLevel(1.4d);

        influxDBMapper.save(h2OFeetMeasurement);

        List<H2OFeetMeasurement> measurements = influxDBMapper.query(H2OFeetMeasurement.class);

        H2OFeetMeasurement h2OFeetMeasurement1 = measurements.get(measurements.size()-1);
        assert h2OFeetMeasurement1.getLevelDescription().equals("Just a test");

Obviously, fetching all the measurements just to get the last entry is not the most efficient thing to do. In the upcoming tutorials we are going to see how to use the InfluxDBMapper with more advanced InfluxDB queries.
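
As a small preview, and only as a hedged sketch, the mapper can also execute an explicit query (Query here is org.influxdb.dto.Query), so we could ask InfluxDB itself for just the latest point of the measurement used above:

        // sketch: fetch only the most recent h2o_feet point instead of mapping the whole measurement
        Query latest = new Query("SELECT * FROM h2o_feet ORDER BY time DESC LIMIT 1", "NOAA_water_database");
        List<H2OFeetMeasurement> lastEntry = influxDBMapper.query(latest, H2OFeetMeasurement.class);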

Spin up an InfluxDB instance with Docker for testing

It is a reality that we tend to make things harder than they need to be when we try to use and connect to various databases.
Since Docker came out, things have become a lot easier.

Most databases, like MongoDB, InfluxDB, etc., come with Docker images that include not only the binaries needed to spin up the database but also the clients needed to connect to it. This has pretty much become a standard.

We will showcase this by using InfluxDB's Docker image and the sample data walkthrough.

Let’s start with spinning up the instance.

docker run --rm -p 8086:8086 --name influxdb-local influxdb

We now have an InfluxDB instance running on port 8086 under the name influxdb-local. Once the container is stopped, it will also be removed (due to the --rm flag).

The first step is to connect to an InfluxDB shell and interact with the database.

docker exec -it influxdb-local influx
> CREATE DATABASE NOAA_water_database
> exit

Now let’s import some data

docker exec -it influxdb-local /bin/bash
curl https://s3.amazonaws.com/noaa.water-database/NOAA_data.txt -o NOAA_data.txt
influx -import -path=NOAA_data.txt -precision=s -database=NOAA_water_database
rm NOAA_data.txt

The next step is to connect to the shell and query some data.

docker exec -it influxdb-local influx -precision rfc3339 -database NOAA_water_database
Connected to http://localhost:8086 version 1.4.x
InfluxDB shell 1.4.x
> SHOW measurements
name: measurements
name
----
average_temperature
h2o_feet
h2o_pH
h2o_quality
h2o_temperature
>

As you can see, we just created an InfluxDB instance loaded with data, ready to execute queries and run some tests! Pretty simple and clean. Once we are done and stop the container, the container and all its data will be removed.

Spring Security and Custom Password Encoding

In a previous post we added password encoding to our Spring Security configuration using JDBC and MD5 password encoding.

However, in the case of a custom UserDetailsService we need to make some tweaks to our security configuration.
We need to create a DaoAuthenticationProvider bean and register it with the AuthenticationManagerBuilder.

Since we need a custom UserDetailsService, I will use the Spring Security/MongoDB example codebase.

What we have to do is change our Spring Security configuration.

package com.gkatzioura.spring.security.config;

import com.gkatzioura.spring.security.service.CustomerUserDetailsService;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Profile;
import org.springframework.security.authentication.dao.DaoAuthenticationProvider;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

/**
 * Created by gkatzioura on 10/5/16.
 */
@EnableWebSecurity
@Profile("encodedcustompassword")
public class PasswordCustomEncodedSecurityConfig extends WebSecurityConfigurerAdapter {

    @Bean
    public UserDetailsService mongoUserDetails() {
        return new CustomerUserDetailsService();
    }

    @Bean
    public DaoAuthenticationProvider authProvider() {
        DaoAuthenticationProvider authProvider = new DaoAuthenticationProvider();
        authProvider.setUserDetailsService(mongoUserDetails());
        authProvider.setPasswordEncoder(new BCryptPasswordEncoder());
        return authProvider;
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {

        auth.authenticationProvider(authProvider());
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {

        http.authorizeRequests()
                .antMatchers("/public").permitAll()
                .anyRequest().authenticated()
                .and()
                .formLogin()
                .permitAll()
                .and()
                .logout()
                .permitAll();
    }

}

In most cases this works fine. However, we might want to roll our own PasswordEncoder, which is pretty easy.

package com.gkatzioura.spring.security.encoder;

import org.springframework.security.crypto.bcrypt.BCrypt;
import org.springframework.security.crypto.password.PasswordEncoder;

/**
 * Created by gkatzioura on 10/5/16.
 */
public class CustomPasswordEncoder implements PasswordEncoder {

    @Override
    public String encode(CharSequence rawPassword) {

        String hashed = BCrypt.hashpw(rawPassword.toString(), BCrypt.gensalt(12));

        return hashed;
    }

    @Override
    public boolean matches(CharSequence rawPassword, String encodedPassword) {

        return BCrypt.checkpw(rawPassword.toString(), encodedPassword);
    }

}

So we will change our configuration in order to use the new PasswordEncoder.

    @Bean
    public DaoAuthenticationProvider authProvider() {
        DaoAuthenticationProvider authProvider = new DaoAuthenticationProvider();
        authProvider.setUserDetailsService(mongoUserDetails());
        authProvider.setPasswordEncoder(new CustomPasswordEncoder());
        return authProvider;
    }

The next step is to create an encoded password.

   @Test
    public void customEncoder() {

        CustomPasswordEncoder customPasswordEncoder = new CustomPasswordEncoder();
        String encoded = customPasswordEncoder.encode("custom_pass");

        LOGGER.info("Custom encoded "+encoded);
    }

Then we add a user with the hashed password to our MongoDB database.

db.users.insert({"name":"John","surname":"doe","email":"john2@doe.com","password":"$2a$12$qB.L7buUPi2RJHZ9fYceQ.XdyEFxjAmiekH9AEkJvh1gLFPGEf9mW","authorities":["user","admin"]})

All we need to do is change the default profile in our Gradle script and we are good to go.

bootRun {
    systemProperty "spring.profiles.active", "encodedcustompassword"
}

You can find the source code on GitHub.

Scan DynamoDB Items with DynamoDBMapper

Previously we covered how to query a DynamoDB database, either using DynamoDBMapper or the low-level Java API.

Apart from issuing queries, DynamoDB also offers scan functionality.
What scan does is fetch all the items in your DynamoDB table.
Therefore, scan does not require any conditions based on your partition key or your global/local secondary indexes.
What scan offers is filtering of the items already fetched, and returning only specific attributes of them.

The snippet below issues a scan on the Logins table, filtering items with a timestamp lower than the given date.

    public List<Login> scanLogins(Long date) {

        Map<String, String> attributeNames = new HashMap<String, String>();
        attributeNames.put("#timestamp", "timestamp");

        Map<String, AttributeValue> attributeValues = new HashMap<String, AttributeValue>();
        attributeValues.put(":from", new AttributeValue().withN(date.toString()));

        DynamoDBScanExpression dynamoDBScanExpression = new DynamoDBScanExpression()
                .withFilterExpression("#timestamp < :from")
                .withExpressionAttributeNames(attributeNames)
                .withExpressionAttributeValues(attributeValues);

        List<Login> logins = dynamoDBMapper.scan(Login.class, dynamoDBScanExpression);

        return logins;
    }
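
DynamoDBScanExpression also carries a projection expression, so the same scan can be narrowed to return only specific attributes. A hedged sketch, assuming the same Login mapping as above (the projected attribute choice is illustrative):

    public List<Login> scanLoginTimestamps(Long date) {

        Map<String, String> attributeNames = new HashMap<String, String>();
        attributeNames.put("#timestamp", "timestamp");

        Map<String, AttributeValue> attributeValues = new HashMap<String, AttributeValue>();
        attributeValues.put(":from", new AttributeValue().withN(date.toString()));

        DynamoDBScanExpression dynamoDBScanExpression = new DynamoDBScanExpression()
                .withFilterExpression("#timestamp < :from")
                // bring back only the timestamp attribute of each matching item
                .withProjectionExpression("#timestamp")
                .withExpressionAttributeNames(attributeNames)
                .withExpressionAttributeValues(attributeValues);

        // the resulting Login objects will have only the projected attribute populated
        return dynamoDBMapper.scan(Login.class, dynamoDBScanExpression);
    }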

Another great feature of DynamoDBMapper is parallel scan. Parallel scan divides the scan task among multiple workers, one for each logical segment. The workers process the data in parallel and return the results.
Generally, the performance of a scan request depends largely on the number of items stored in a DynamoDB table. Therefore, parallel scan might lift some of the performance issues of a scan request when you have to deal with large amounts of data.

    public List<Login> scanLogins(Long date,Integer workers) {

        Map<String, String> attributeNames = new HashMap<String, String>();
        attributeNames.put("#timestamp", "timestamp");

        Map<String, AttributeValue> attributeValues = new HashMap<String, AttributeValue>();
        attributeValues.put(":from", new AttributeValue().withN(date.toString()));

        DynamoDBScanExpression dynamoDBScanExpression = new DynamoDBScanExpression()
                .withFilterExpression("#timestamp < :from")
                .withExpressionAttributeNames(attributeNames)
                .withExpressionAttributeValues(attributeValues);

        List<Login> logins = dynamoDBMapper.parallelScan(Login.class, dynamoDBScanExpression,workers);

        return logins;
    }

Before using scan in our application we have to take into consideration that scan fetches all table items. Therefore it has a high cost, both in charges and in performance, and it might consume your provisioned capacity.
Generally it is better to stick to queries and avoid scans.
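
If a scan cannot be avoided, one way to keep its footprint under control is to page through the results instead of loading everything at once. Below is a hedged sketch using the mapper's scanPage with a limit, reusing the same hypothetical Login table:

    public void processLoginsPaged(Long date) {

        Map<String, String> attributeNames = new HashMap<String, String>();
        attributeNames.put("#timestamp", "timestamp");

        Map<String, AttributeValue> attributeValues = new HashMap<String, AttributeValue>();
        attributeValues.put(":from", new AttributeValue().withN(date.toString()));

        DynamoDBScanExpression dynamoDBScanExpression = new DynamoDBScanExpression()
                .withFilterExpression("#timestamp < :from")
                .withExpressionAttributeNames(attributeNames)
                .withExpressionAttributeValues(attributeValues)
                // evaluate at most 100 items per page before filtering
                .withLimit(100);

        ScanResultPage<Login> page;
        do {
            page = dynamoDBMapper.scanPage(Login.class, dynamoDBScanExpression);

            for (Login login : page.getResults()) {
                // process each login here
            }

            // continue from where the previous page stopped
            dynamoDBScanExpression.setExclusiveStartKey(page.getLastEvaluatedKey());
        } while (page.getLastEvaluatedKey() != null);
    }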

You can find the full source code with unit tests on GitHub.