Secure a docker registry using ssl

As mentioned in a previous article, having a registry with a username and password is not secure if the registry is not configured with SSL.


So we are going to add SSL certificates to our registry. To make things easier we will use Let's Encrypt, which is free.
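
Below is a minimal sketch of issuing the certificate with certbot, assuming certbot is installed on the host and that the subdomain (registry.example.com is just a placeholder) already resolves to it.

# issue a certificate for the registry's subdomain in standalone mode
certbot certonly --standalone -d registry.example.com
# the generated files end up under /etc/letsencrypt/live/registry.example.com/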

Once we have generated the certificates we have to add them to the registry. We will create a directory called certificates which will contain the certificate pem file and the key pem file. Then we will move the generated certificates into the certificates directory with the names crt.pem and key.pem.
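
A sketch of that step, assuming the Let's Encrypt output paths from the sketch above (registry.example.com remains a placeholder):

mkdir certificates
# fullchain.pem becomes the certificate, privkey.pem the private key
cp /etc/letsencrypt/live/registry.example.com/fullchain.pem certificates/crt.pem
cp /etc/letsencrypt/live/registry.example.com/privkey.pem certificates/key.pem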

We will follow exactly the same steps we followed in the previous article to generate the password.

docker run --entrypoint htpasswd registry:2 -Bbn {your-user} {your-password} > auth/password-file

Now we are ready to create our registry, this time also specifying the certificates. To do so we will mount the certificates directory into our docker container. Then we will specify where the registry is going to find the certificates and the credentials on the container's filesystem.

docker run -d -p 5000:5000 --restart=always --name registry -v `pwd`/auth:/auth -v `pwd`/certificates:/certificates -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/password-file -e REGISTRY_HTTP_TLS_CERTIFICATE=/certificates/crt.pem -e REGISTRY_HTTP_TLS_KEY=/certificates/key.pem registry:2

So your registry will pick up the credentials specified and will also use the certificates created.
The next step is to do the DNS mapping and add a DNS entry which directs your subdomain to your registry's IP.

However, if you just want to test it, you can run your registry locally, edit your /etc/hosts and add this entry.

127.0.0.1 registry.{your certificate's domain}

Once you navigate with your browser to https://registry.{your certificate's domain}:5000
you will get a 200 status code and your browser will identify your connection as secure.
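
You can also verify it from the command line; for example, a basic request against the registry API (same placeholders as above) should return a 200 and the list of repositories:

# list the repositories over https using basic auth
curl -u {your-user}:{your-password} https://registry.{your certificate's domain}:5000/v2/_catalog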

Docker basics: Docker Registry

By default when using docker you pull images from the DockerHub docker registry. Most probably you have your own docker images for your application and you want to distribute them in a secure way. One way to do so is to go with the ready-made options, such as a paid plan from DockerHub or the registries provided by cloud providers like Amazon, Azure etc.

The other option is setting up your own docker registry.
In any case, since you use docker, you need a registry to distribute your images so that they can make it into production.
There are many benefits to managing your own registry, but be aware that it requires effort on your side to provision and maintain it.
Therefore we will create our docker registry:

docker run -d -p 5000:5000 --restart=always --name registry registry:2

So we have a docker registry running on port 5000 and the registry will always restart.

Now let’s test our registry and push an image.
First I will build a simple image with no specific purpose.

FROM ubuntu
ENTRYPOINT top

It is just a dummy image that runs top.

So let's build it:

docker build --tag top-ubuntu:1.0 .

The key is to tag your image based on the domain under which your registry runs.
Currently our registry runs on localhost, therefore by tagging we also specify the location of the registry.

docker tag top-ubuntu:1.0 localhost:5000/top-ubuntu:1.0

And now we push our image:

docker push localhost:5000/top-ubuntu:1.0

Now let’s remove our images and see if our image will be downloaded from our running registry

docker rmi top-ubuntu:1.0
docker rmi localhost:5000/top-ubuntu:1.0

And let’s pull

docker pull localhost:5000/top-ubuntu:1.0

As you can see our image has been downloaded from our local registry and is ready to be used.

So far so good. The next step is securing our registry with a username and password.

Let’s start by setting the username and password

First let’s create a directory which shall contain our credentials

mkdir auth

Then we shall create the password file:

docker run --entrypoint htpasswd registry:2 -Bbn {your-user} {your-password} > auth/password-file

The file will contain your username and password information. The password is stored hashed.
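
If you are curious, you can take a quick look at the generated file; the username is stored in clear text and the password as a bcrypt hash (because of the -B flag):

cat auth/password-file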

Now let’s run our secured registry

 docker run -d -p 5000:5000 --restart=always --name registry -v `pwd`/auth:/auth -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/password-file registry:2

As you can see we mounted the auth directory into the docker container and specified the location of the password file.

Let’s try to push our image

docker push localhost:5000/top-ubuntu:1.0
.
.
.

059ad60bcacf: Preparing 
8db5f072feec: Preparing 
67885e448177: Preparing 
ec75999a0cb1: Preparing 
65bdd50ee76a: Preparing 
no basic auth credentials

It’s time to login to our registry

docker login localhost:5000

Once you have provided your credentials you will be able to push the image to your local registry.

docker push localhost:5000/top-ubuntu:1.0

Be aware that our registry is still not secure. Protecting your registry with credentials is not enough; you also need SSL encryption.

In the next tutorial we will secure a docker registry with SSL.

 

Docker basics: Containers

So we have just created a docker image and now it’s time for us to run a container and interact with it.

docker run nginx

This will run an nginx container and, since it runs attached in the foreground, the terminal we issued the command from is no longer available.

We will press ctrl-c. By doing so the main process exits and our container is no longer running. A container's lifecycle is bound to its main process: if the process exits, the container stops.

So let’s run our container in the detached mode and interact with it

docker run -d nginx 

The -d option makes the container run in detached mode.

Now let's have a look at our running containers.

docker ps

Only one container is listed. By default the ps command shows only the running containers. It gives us various information such as when the container was created, its status and the image it was created from.
If we want to see our previously stopped container all we have to do is execute ps with the -a option.

docker ps -a

We can remove a container after stopping it, or we can specify that the container should be removed automatically once it stops.

docker run --rm -d nginx 

Otherwise in order to remove a container we just have to issue the rm command

docker rm {container-id|container-name}

Be aware that in order to delete a container, the container must be stopped. Otherwise you must use the -f argument, which forces the container to stop and then deletes it.
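
For completeness, the forced removal looks like this:

# stop and remove the container in one step (not graceful)
docker rm -f {container-id|container-name}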

In order to stop the container issue

docker stop {container-id|container-name}

As we can see, we can remove a container by specifying its name. In the previous examples we did not assign any names, so let's do it now.

We can set a name by either renaming a running container

docker rename 53552a9f0a95 my-nginx

Or we can specify the name of the container while we create it.

docker run --name='my-nginx' -d nginx 

Now that we have only one container the ps command serves us well; however, when we run multiple containers we need the filter argument that comes with the ps command.

You can filter by id, name, label, status etc.
In the case of the name you can also filter with substrings.

docker ps --filter=name="nginx"

So everything is set. As we have seen, running a container in detached mode does not display any logs in the console. To get the logs we can use the logs command.

docker logs my-nginx

If we want to keep following the output we can use the -f flag.

docker logs -f my-nginx

The next step is to get into interactive mode with our container while it is running

docker run --rm --name='my-nginx' -it nginx /bin/bash

With this command our container is running and we are dropped into the container's bash console. Be aware that in cases where an entrypoint has been specified you need to override it.

docker run -it --entrypoint "/bin/bash" {image}

Still, we might need to run some extra commands against an already running container. One way to do this is to use the docker exec command.

docker exec my-nginx echo 'hello world'

As you can see, we specified the container name and the command to execute.
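
In the same way you can get an interactive shell inside the already running container:

# open an interactive bash session in the running container
docker exec -it my-nginx /bin/bash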

Most of the things we need have been covered. The last tip has to do with exposing the port of our docker container.
Our container is accessible through the IP it was assigned at creation time; however, there are cases where we want to map the container's listening port to our host.

To do so we will use the -p option.

docker run -d -p 8080:80 --name my-nginx nginx 

In the above example 8080 is the port of our host and 80 is the port of our docker container.
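
A quick way to check the mapping, assuming curl is available on the host:

# nginx now answers on the host port
curl -I http://localhost:8080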

Last but not least, you can create an image from a running container by committing it.

docker commit e2e222e22e2e dockerimage:version1

So that was a quick reference for the docker commands you might use on a daily basis.

Docker basics: Images

So what is a docker image? A docker image is a read-only layer consisting of the binaries, libraries and software which the container shall execute.

A docker image can have another docker image as its base. Images are actually read-only templates.
Once created they are stored on your file system.

To view the docker images currently installed on your system you can issue

docker images

Let’s download an image

docker pull ubuntu

By default your images are downloaded from DockerHub. You can also upload your images to DockerHub if you want to share them.
In case you want to share your images privately, you have to check the private hosting options on DockerHub or other providers such as Google, Amazon, Azure etc.

Since we have just downloaded an image, we will list our images again. This time we will go a bit further and issue

docker images -a

What you can see is that some images don't have any name on them. Those are called intermediate images. Behind the scenes, the image that you use consists of many other images; it is actually a parent-child relation. An intermediate image might be shared by two different images due to common dependencies and binaries.
So next time you download an image, only the layers you are missing will be downloaded along with your image.
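
If you want to see the layers a specific image consists of, docker history lists them:

# show the layers (including intermediate ones) that make up the image
docker history ubuntu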

Now let’s filter some of our images.
Current filters supported are
dangling, label, before, since, reference.

Most probably reference is the one you are going to use the most.

docker images --filter=reference='*ubuntu*'

What you actually do with the reference filter is filtering based on the image reference and the pattern you specified.

Now let's build an image. To build an image you need to create a Dockerfile. The Dockerfile specifies the image which you inherit from and any extra actions you need to perform.

Here’s the content of the Dockerfile

FROM ubuntu
ARG username
ENV USERNAME $username
ENTRYPOINT echo $USERNAME

A build argument is passed, and the default command executed when the container runs is to echo the USERNAME environment variable, which is set based on that argument.

So let’s build the image

docker build --build-arg username=john .

As you can see, the intermediate images and the resulting image are printed during the build. Try building it again and the same steps and ids will fill your screen; changes and new intermediate images will appear only if you change your Dockerfile.

Next step is to check all our images.

docker images

It seems our image is different from the others. The others have a name and tags, whilst ours only has the autogenerated image id.

Let’s add one

docker tag d88ef0502ecf ubuntu-hello:1.0

Also we can do the tagging while we build the image

docker build --build-arg username=john --tag ubuntu-hello:1.0 .

As you can see I have put a version on the image. For your application you will most probably create more than one image, therefore you can tag your images in order to keep track of the versions.

Now time to clean up.

docker rmi ubuntu-hello:1.0

This one will not be successful if you have already run the image. You can force its deletion by using the -f argument, but that is not graceful, so don't do it. Instead, delete the containers created from this image and then delete the image.
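
A sketch of that cleanup, using the ancestor filter to locate the containers created from the image ({container-id} is a placeholder):

# find the containers created from the image, remove them, then remove the image
docker ps -a --filter=ancestor=ubuntu-hello:1.0
docker rm {container-id}
docker rmi ubuntu-hello:1.0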

Also be aware that when you issue rmi using tags, if your image has been tagged with multiple tags, each rmi only removes that tag from the image until the image is left with one tag. If your image has only one tag left, then the image itself gets removed as well.

Docker basics: An introduction

On this blog docker is mentioned many times, and many tutorials utilize docker. It is always good to go over the docker basics and provide a quick reference; in our workday, the more we get absorbed by certain problems the easier it is to forget the fundamentals.

With docker you can achieve containerization: a dedicated runtime in which your process is executed in isolation, without affecting any other resources of your system.

Instead of spawning a new virtual machine for your applications and consuming the extra resources a VM needs to simulate a machine and an operating system, you can use docker.

So here are some of the benefits of using docker.

  • Isolation
  • Portability
  • Safety

Isolation

If your application needs certain binaries installed on your OS, instead of installing them on your OS you can have them installed in your docker image. This way your application does not affect your host system's installation.

Portability

Your application and its dependencies are stored in the form of an image. This image can be shipped, distributed and run on any docker installation without the need to take any extra action such as installing dependencies.

Safety

The software that comes packaged with your image is used by the container only. It is not installed on or used by your OS. If there is anything untrusted in it, it will be run only by your container's process. Your container is isolated; it cannot interact with other parts of your system.

The main parts which we will discuss are images, containers and the docker registry.

In the meantime the best way to start is just by running a hello-world application on docker.

 docker run hello-world 

The best thing about the above image is that its output documents, step by step, the process of running a docker image.

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

Dockerize your Scala application

Dockerizing a Scala application is pretty easy.

The first concern is creating a fat jar. We all come from different backgrounds, including maven/gradle and the different plugins that handle this task.
If you use sbt the way to go is to use the sbt-assembly plugin.

To use it we should add it to our project/plugins.sbt file. If the file does not exist create it.

logLevel := Level.Warn

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")

So by executing

sbt clean assembly

we will end up with a fat jar located at the target/scala-**/**.jar path.

Now the easy part is putting our application inside docker, thus a Dockerfile is needed.

We will use the openjdk alpine as a base image.

FROM openjdk:8-jre-alpine

ADD target/scala-**/your-fat-jar app.jar

ENTRYPOINT ["java","-jar","/app.jar"]

The above approach works fine and gives you the control needed to customize your build process.
For a more out-of-the-box experience you can use the sbt native packager.

All you need to do is to add the plugin to project/plugins.sbt file.

logLevel := Level.Warn

addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.4")

Then we specify the main class of our application and enable the Java and Docker plugins from the native packager at the build.sbt file.

mainClass in Compile := Some("your.package.MainClass")

enablePlugins(JavaAppPackaging)
enablePlugins(DockerPlugin)

The next step is to issue the sbt command.

sbt docker:publishLocal

This command will build your application, package it with the binaries needed, containerize it and publish the image to your local Docker daemon.

Spring Data with JPA and @NamedQueries

If you use Spring Data and @NamedQuery annotations on your JPA entities, you can easily use them in a more convenient way through your spring data repository.

In a previous blog post we created a spring data project using spring boot and docker. We will use pretty much the same project and enhance our repository's functionality.

We will implement a named query that fetches employees only if their last name has exactly as many characters as specified.

package com.gkatzioura.springdata.jpa.persistence.entity;

import javax.persistence.*;

/**
 * Created by gkatzioura on 6/2/16.
 */
@Entity
@Table(name = "employee", schema="spring_data_jpa_example")
@NamedQuery(name = "Employee.fetchByLastNameLength",
        query = "SELECT e FROM Employee e WHERE CHAR_LENGTH(e.lastname) =:length "
)
public class Employee {

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;

    @Column(name = "firstname")
    private String firstName;

    @Column(name = "lastname")
    private String lastname;

    @Column(name = "email")
    private String email;

    @Column(name = "age")
    private Integer age;

    @Column(name = "salary")
    private Integer salary;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastname() {
        return lastname;
    }

    public void setLastname(String lastname) {
        this.lastname = lastname;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public Integer getAge() {
        return age;
    }

    public void setAge(Integer age) {
        this.age = age;
    }

    public Integer getSalary() {
        return salary;
    }

    public void setSalary(Integer salary) {
        this.salary = salary;
    }
}

Pay extra attention to the query name and the convention we follow: {EntityName}.{queryName}.
Then we will add the method to our spring data repository.

package com.gkatzioura.springdata.jpa.persistence.repository;

import com.gkatzioura.springdata.jpa.persistence.entity.Employee;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;

import java.util.List;

/**
 * Created by gkatzioura on 6/2/16.
 */
@Repository
public interface EmployeeRepository extends JpaRepository<Employee,Long>, EmployeeRepositoryCustom {

    List<Employee> fetchByLastNameLength(@Param("length") Long length);
}

And last but not least add some functionality to our controller.

package com.gkatzioura.springdata.jpa.controller;

import com.gkatzioura.springdata.jpa.persistence.entity.Employee;
import com.gkatzioura.springdata.jpa.persistence.repository.EmployeeRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

/**
 * Created by gkatzioura on 6/2/16.
 */
@RestController
public class TestController {

    @Autowired
    private EmployeeRepository employeeRepository;

    @RequestMapping("/employee")
    public List<Employee> getTest() {

        return employeeRepository.findAll();
    }

    @RequestMapping("/employee/filter")
    public List<Employee> getFiltered(String firstName,@RequestParam(defaultValue = "0") Double bonusAmount) {

        return employeeRepository.getFirstNamesLikeAndBonusBigger(firstName,bonusAmount);
    }

    @RequestMapping("/employee/lastnameLength")
    public List<Employee> fetchByLength(Long length) {
        return employeeRepository.fetchByLastNameLength(length);
    }

}
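
Once the application is up, the new endpoint can be tried with a plain http request; for example, assuming the application runs on the default port 8080:

# fetch the employees whose last name is exactly 3 characters long
curl "http://localhost:8080/employee/lastnameLength?length=3"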

You can find the source code on github.

Hibernate Caching with HazelCast: Basic configuration

Previously we went through an introduction on JPA caching, the mechanisms and what hibernate offers.

What comes next is a hibernate project using Hazelcast as a second level cache.

We will use a basic spring boot project for this purpose with JPA. Spring boot uses hibernate as the default JPA provider.
Our setup will be pretty close to the one of a previous post.
We will use postgresql with docker for our sql database.

group 'com.gkatzioura'
version '1.0-SNAPSHOT'

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.5.1.RELEASE")
    }
}

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'

repositories {
    mavenCentral()
}

dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-data-jpa'
    compile group: 'org.postgresql', name:'postgresql', version:'9.4-1206-jdbc42'
    compile group: 'org.springframework', name: 'spring-jdbc'
    compile group: 'com.zaxxer', name: 'HikariCP', version: '2.6.0'
    compile group: 'com.hazelcast', name: 'hazelcast-hibernate5', version: '1.2'
    compile group: 'com.hazelcast', name: 'hazelcast', version: '3.7.5'
    testCompile group: 'junit', name: 'junit', version: '4.11'
}

By examining the dependencies carefully we see the hikari pool, the postgresql driver, spring data jpa and of course hazelcast.

Instead of creating the database manually we will automate it by utilizing the database initialization feature of Spring boot.

We shall create a file called schema.sql under the resources folder.

create schema spring_data_jpa_example;

create table spring_data_jpa_example.employee(
    id  SERIAL PRIMARY KEY,
    firstname   TEXT    NOT NULL,
    lastname    TEXT    NOT NULL,
    email       TEXT    not null,
    age         INT     NOT NULL,
    salary         real,
    unique(email)
);

insert into spring_data_jpa_example.employee (firstname,lastname,email,age,salary)
values ('Test','Me','test@me.com',18,3000.23);

To keep it simple and avoid any further configurations we shall put the configurations for datasource, jpa and caching inside the application.yml file.

spring:
  datasource:
    continue-on-error: true
    type: com.zaxxer.hikari.HikariDataSource
    url: jdbc:postgresql://172.17.0.2:5432/postgres
    driver-class-name: org.postgresql.Driver
    username: postgres
    password: postgres
    hikari:
      idle-timeout: 10000
  jpa:
    properties:
      hibernate:
        cache:
          use_second_level_cache: true
          use_query_cache: true
          region:
            factory_class: com.hazelcast.hibernate.HazelcastCacheRegionFactory
    show-sql: true

The spring.datasource.continue-on-error configuration is crucial: once the application is relaunched, the schema creation script runs again and fails because the schema already exists, so without this setting a crash would be inevitable.

Any hibernate specific properties reside at the spring.jpa.properties path. We enabled the second level cache and the query cache.

Also we set show-sql to true. This means that once a query hits the database it shall be logged through the console.

Then create our employee entity.

package com.gkatzioura.hibernate.enitites;

import javax.persistence.*;

import java.io.Serializable;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

/**
 * Created by gkatzioura on 2/6/17.
 */
@Entity
@Table(name = "employee", schema="spring_data_jpa_example")
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Employee implements Serializable {

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;

    @Column(name = "firstname")
    private String firstName;

    @Column(name = "lastname")
    private String lastname;

    @Column(name = "email")
    private String email;

    @Column(name = "age")
    private Integer age;

    @Column(name = "salary")
    private Integer salary;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastname() {
        return lastname;
    }

    public void setLastname(String lastname) {
        this.lastname = lastname;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public Integer getAge() {
        return age;
    }

    public void setAge(Integer age) {
        this.age = age;
    }

    public Integer getSalary() {
        return salary;
    }

    public void setSalary(Integer salary) {
        this.salary = salary;
    }
}

Everything is set up. Spring boot will detect the entity and create an EntityManagerFactory on its own.
What comes next is the repository class for employee.

package com.gkatzioura.hibernate.repository;

import com.gkatzioura.hibernate.enitites.Employee;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.repository.CrudRepository;

/**
 * Created by gkatzioura on 2/11/17.
 */
public interface EmployeeRepository extends JpaRepository<Employee, Long> {
}

And the last one is the controller

package com.gkatzioura.hibernate.controller;

import com.gkatzioura.hibernate.enitites.Employee;
import com.gkatzioura.hibernate.repository.EmployeeRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

/**
 * Created by gkatzioura on 2/6/17.
 */
@RestController
public class EmployeeController {

    @Autowired
    private EmployeeRepository employeeRepository;

    @RequestMapping("/employee")
    public List<Employee> testIt() {

        return employeeRepository.findAll();
    }

    @RequestMapping("/employee/{employeeId}")
    public Employee getEmployee(@PathVariable Long employeeId) {

        return employeeRepository.findOne(employeeId);
    }

}

Once we issue a request at
http://localhost:8080/employee/1

the console will display the query issued against the database:

Hibernate: select employee0_.id as id1_0_0_, employee0_.age as age2_0_0_, employee0_.email as email3_0_0_, employee0_.firstname as firstnam4_0_0_, employee0_.lastname as lastname5_0_0_, employee0_.salary as salary6_0_0_ from spring_data_jpa_example.employee employee0_ where employee0_.id=?

The second time we issue the request, since we have the second level cache enabled, no query will be issued against the database. Instead the entity will be fetched from the second level cache.
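
A simple way to observe this, assuming the application runs on the default port 8080:

# the first call logs the SQL above, the second one is served from the second level cache
curl http://localhost:8080/employee/1
curl http://localhost:8080/employee/1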

You can download the project from github.

Push Spring Boot Docker images on ECR

On a previous blog we integrated a spring boot application with EC2.
It is one of the most raw forms of deployment that you can have on Amazon Web Services.

In this tutorial we will create a docker image containing our application, which will be stored in the Amazon EC2 Container Registry.

You need to have the aws cli tool installed.

We will keep our spring application as simple as we can, therefore we will use an example from the official spring source page. The only changes applied will be to the packaging and the application name.

Our application shall be named ecs-deployment

rootProject.name = 'ecs-deployment'

Then we build and run our application

gradle build
gradle bootRun

Now let’s dockerize our application.
First we shall create a Dockerfile that will reside in src/main/docker.

FROM frolvlad/alpine-oraclejdk8
VOLUME /tmp
ADD ecs-deployment-1.0-SNAPSHOT.jar app.jar
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

Then we should edit our gradle file in order to add the docker dependency, the docker plugin and an extra gradle task in order to create our docker image.

buildscript {
    ...
    dependencies {
        ...
        classpath('se.transmode.gradle:gradle-docker:1.2')
    }
}

...
apply plugin: 'docker'


task buildDocker(type: Docker, dependsOn: build) {
    push = false
    applicationName = jar.baseName
    dockerfile = file('src/main/docker/Dockerfile')
}

And we are ready to build our docker image.

./gradlew build buildDocker

You can also run your docker application from the newly created image.

docker run -p 8080:8080 -t com.gkatzioura.deployment/ecs-deployment:1.0-SNAPSHOT

The first step is to create our ECR repository.

aws ecr create-repository  --repository-name ecs-deployment

Then let us proceed with our docker registry authentication.

aws ecr get-login

Then run the command given in the output. The login attempt will succeed and you are ready to push your image.

First tag the image in order to specify the repository that we previously created and then do a docker push.

docker tag {imageid} {aws account id}.dkr.ecr.{aws region}.amazonaws.com/ecs-deployment:1.0-SNAPSHOT
docker push {aws account id}.dkr.ecr.{aws region}.amazonaws.com/ecs-deployment:1.0-SNAPSHOT

And we are done! Our spring boot docker image is deployed on the Amazon EC2 container registry.
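
If you want to double check, the aws cli can list the images stored in the repository we created:

# list the images pushed to the ecs-deployment repository
aws ecr list-images --repository-name ecs-deployment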

You can find the source code on github.

Spring boot with Spring Security and NoSQL

In the previous post we set up a spring security configuration by providing custom queries for user and authority retrieval from an sql database.

Nowadays many modern applications utilize NoSQL databases. Spring security does not come with an out of the box solution for NoSQL databases.

In those cases we need to provide a solution by implementing a custom UserDetailsService.

We will use a MongoDB database for this example.
I will use a docker image; however, it is just as easy to set up a mongodb database by downloading it from the official website.

Here are some commands to get started with docker and mongodb (feel free to ignore them if you don't use docker):

#pull the mongo image
docker pull mongo
#create a mongo container
docker run --name some-mongo -d mongo
#get the docker container id
docker ps
#get the containers ip
docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CID
#connection using the ip retrieved
mongo $mongodb_container_ip

Then we will write a simple initialization script called createuser.js. The script creates a document containing user information such as username, password and authorities.

use springsecurity
db.users.insert({"name":"John","surname":"doe","email":"john@doe.com","password":"cleartextpass","authorities":["user","admin"]})

We will use mongo cli to execute it.

mongo 172.17.0.2:27017 < createuser.js
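
To double check that the document landed in the users collection, you can query it straight from the shell (same container ip as above):

# print the inserted user document
mongo 172.17.0.2:27017/springsecurity --eval 'db.users.find().forEach(printjson)'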

In order to use spring security with mongodb we need to retrieve the user information from the users collection.

The first step is to add the mongodb dependencies to our gradle file, including the mongodb driver. Note that we will use a profile called 'customuserdetails'.

group 'com.gkatzioura'
version '1.0-SNAPSHOT'

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.4.0.RELEASE")
    }
}

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'spring-boot'

sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    compile("org.thymeleaf:thymeleaf-spring4")
    compile("org.springframework.boot:spring-boot-starter-security")
    compile("org.mongodb:mongo-java-driver:1.3")
    compile("org.slf4j:slf4j-api:1.6.6")
    compile("ch.qos.logback:logback-core:1.1.7")
    compile("ch.qos.logback:logback-classic:1.1.7")
    testCompile "junit:junit:4.11"
}

bootRun {
    systemProperty "spring.profiles.active", "customuserdetails"
}

Then we shall create a mongodb connection bean.

package com.gkatzioura.spring.security.config;

import com.mongodb.Mongo;
import com.mongodb.MongoClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

/**
 * Created by gkatzioura on 9/27/16.
 */
@Configuration
@Profile("customuserdetails")
public class MongoConfiguration {

    @Bean
    public MongoClient createConnection() {

        //You should put your mongo ip here
        return new MongoClient("172.17.0.2:27017");
    }
}

Then we will create a custom user details object.

package com.gkatzioura.spring.security.model;

import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.userdetails.UserDetails;

import java.util.Collection;
import java.util.List;

/**
 * Created by gkatzioura on 9/27/16.
 */
public class MongoUserDetails implements UserDetails {

    private String username;
    private String password;
    private List<GrantedAuthority> grantedAuthorities;
    
    public MongoUserDetails(String username,String password,String[] authorities) {
        this.username = username;
        this.password = password;
        this.grantedAuthorities = AuthorityUtils.createAuthorityList(authorities);
    }
    
    @Override
    public Collection<? extends GrantedAuthority> getAuthorities() {
        return grantedAuthorities;
    }

    @Override
    public String getPassword() {
        return password;
    }

    @Override
    public String getUsername() {
        return username;
    }

    @Override
    public boolean isAccountNonExpired() {
        return true;
    }

    @Override
    public boolean isAccountNonLocked() {
        return true;
    }

    @Override
    public boolean isCredentialsNonExpired() {
        return true;
    }

    @Override
    public boolean isEnabled() {
        return true;
    }
}

As the next step we will add a custom UserDetailsService that retrieves user details from the mongodb database.

package com.gkatzioura.spring.security.service;

import com.gkatzioura.spring.security.model.MongoUserDetails;
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import org.bson.Document;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.stereotype.Service;

import java.util.List;

/**
 * Created by gkatzioura on 9/27/16.
 */
public class CustomerUserDetailsService implements UserDetailsService {

    @Autowired
    private MongoClient mongoClient;

    @Override
    public UserDetails loadUserByUsername(String email) throws UsernameNotFoundException {

        MongoDatabase database = mongoClient.getDatabase("springsecurity");
        MongoCollection<Document> collection = database.getCollection("users");

        Document document = collection.find(Filters.eq("email",email)).first();

        if(document!=null) {

            String name = document.getString("name");
            String surname = document.getString("surname");
            String password = document.getString("password");
            List<String> authorities = (List<String>) document.get("authorities");

            MongoUserDetails mongoUserDetails = new MongoUserDetails(email,password,authorities.toArray(new String[authorities.size()]));

            return mongoUserDetails;
        } else {

           throw new UsernameNotFoundException("username not found");
        }
    }

}

The final step is to provide a spring security configuration using the custom UserDetailsService we implemented previously.

package com.gkatzioura.spring.security.config;

import com.gkatzioura.spring.security.service.CustomerUserDetailsService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Profile;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.core.userdetails.UserDetailsService;

/**
 * Created by gkatzioura on 9/27/16.
 */
@EnableWebSecurity
@Profile("customuserdetails")
public class CustomUserDetailsSecurityConfig extends WebSecurityConfigurerAdapter {

    @Bean
    public UserDetailsService mongoUserDetails() {
        return new CustomerUserDetailsService();
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {

        UserDetailsService userDetailsService = mongoUserDetails();
        auth.userDetailsService(userDetailsService);
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {

        http.authorizeRequests()
                .antMatchers("/public").permitAll()
                .anyRequest().authenticated()
                .and()
                .formLogin()
                .permitAll()
                .and()
                .logout()
                .permitAll();
    }

}

To run the application issue

gradle bootRun

You can find the source code on github