Dockerize your Scala application

Dockerizing a Scala application is pretty easy.

The first concern is creating a fat jar. We all come from different backgrounds, including Maven/Gradle and the various plugins that handle this issue.
If you use sbt, the way to go is the sbt-assembly plugin.

To use it we should add it to our project/plugins.sbt file. If the file does not exist, create it.

logLevel := Level.Warn

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")

So by executing

sbt clean assembly

we will end up with a fat jar located at the target/scala-**/**.jar path.

Now the easy part is putting our application inside Docker, so a Dockerfile is needed.

We will use the openjdk alpine image as a base.

FROM openjdk:8-jre-alpine

ADD target/scala-**/your-fat-jar app.jar

ENTRYPOINT ["java","-jar","/app.jar"]
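
With the fat jar built and the Dockerfile at the root of the project, building and running the image comes down to two commands (the image name scala-app below is just an example):

docker build -t scala-app .
docker run scala-app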

The above approach works fine and gives you the control needed to customize your build process.
For a more out-of-the-box experience you can use the sbt-native-packager.

All you need to do is to add the plugin to project/plugins.sbt file.

logLevel := Level.Warn

addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.4")

Then we specify the main class of our application and enable the Java and Docker plugins from the native packager in the build.sbt file.

mainClass in Compile := Some("your.package.MainClass")

enablePlugins(JavaAppPackaging)
enablePlugins(DockerPlugin)

The next step is to issue the sbt command.

sbt docker:publishLocal

This command will build your application, bundle the jars needed, containerize your application and publish the image to your local Docker daemon.
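
If you need more control over the generated image, sbt-native-packager also exposes Docker-related settings in build.sbt. A minimal sketch, assuming you want to pin the base image and expose a port (the values shown are only examples):

// build.sbt - optional Docker customisations for sbt-native-packager
dockerBaseImage := "openjdk:8-jre-alpine"
dockerExposedPorts ++= Seq(8080)
packageName in Docker := "your-app"
version in Docker := "latest"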

Spring Security with Spring Boot 2.0: UserDetailsService

As we have seen in a previous post, the username and password for our Spring application were configured through environment variables. This is fine for prototyping purposes, however in real life scenarios we have to provide another way for users to be able to log in to the application.
To do so we use the UserDetailsService interface.

The UserDetailsService comes with the loadUserByUsername function, which locates a user based on the username. If a user is found, the credentials submitted through the login form are then validated against the user information retrieved through the UserDetailsService.

So let’s start with a very simple custom user details service.

import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.stereotype.Service;

@Service
public class UserDetailsServiceImpl implements UserDetailsService {

    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {

        if(username.equals("test")) {

            return User.withDefaultPasswordEncoder()
                       .username("test")
                       .password("test")
                       .roles("test")
                       .build();
        } else {
            // the contract requires an exception, not null, when the user cannot be found
            throw new UsernameNotFoundException(username);
        }
    }
}

As you can see, the only user who is able to log in is the one with the username test. Also, Spring provides us with a builder when it comes to user details. We used withDefaultPasswordEncoder, which encodes the clear-text password we provide with a default password encoder; it is only intended for samples and prototypes, since the password lives in the source code in clear text.

Although password encoders will be covered in another tutorial, it is always good to remember that passwords stored in a database should always be hashed for security reasons.

Now, do you need to add any extra configuration? No. Having a bean that implements UserDetailsService in your Spring context is enough. Spring Security will pick up the UserDetailsService implementation you provided and use it to authenticate.

For example, you can even provide the UserDetailsService by using a @Bean method inside a @Configuration class.

@Configuration
public class SecurityConfig {

    @Bean
    public UserDetailsService createUserDetailsService() {
        return new UserDetailsServiceImpl();
    }
    
}

This way, regardless of where you store your user information, whether it is an SQL database, a NoSQL database or even a CSV file, the only thing you have to do inside loadUserByUsername is load the user and return it in the form of a UserDetails object.
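
For instance, a sketch of a UserDetailsService backed by a persistent store could look like the following. The UserRepository and its findByUsername method are hypothetical placeholders for your own data access layer, not part of Spring:

import java.util.Optional;

import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.stereotype.Service;

@Service
public class PersistentUserDetailsService implements UserDetailsService {

    // hypothetical data access object, swap in your own SQL/NoSQL/CSV lookup
    private final UserRepository userRepository;

    public PersistentUserDetailsService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {

        // findByUsername returning an Optional is assumed, it is not part of Spring
        return userRepository.findByUsername(username)
                             .map(user -> User.builder()
                                              .username(user.getUsername())
                                              .password(user.getPassword()) // expected to already be hashed in the store
                                              .roles(user.getRoles())       // assumed to return a String[] of role names
                                              .build())
                             .orElseThrow(() -> new UsernameNotFoundException(username));
    }
}

Spring Security will then call this bean during authentication, just as it does with the in-memory example above.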

Spring Security with Spring Boot 2.0: Simple authentication using the Servlet Stack

Spring Security is a great framework that saves developers lots of time and effort. It is also flexible enough to be customized and adapted to your needs. As Spring evolves, Spring Security evolves with it, making it easier to set up security in your project.

Spring Boot 2.0 is out there and we will take advantage of it for our security projects.

In this post we aim at creating a security-backed project that is as simple as possible. To get started we shall create a simple Spring Boot 2.0 project.

We can use the SPRING INITIALIZR application.

The end result is a Spring Boot 2 project built with Gradle.

buildscript {
	ext {
		springBootVersion = '2.0.1.RELEASE'
	}
	repositories {
		mavenCentral()
	}
	dependencies {
		classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
	}
}

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'org.springframework.boot'
apply plugin: 'io.spring.dependency-management'

group = 'com.gkatzioura.security'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = 1.8

repositories {
	mavenCentral()
}


dependencies {
	compile('org.springframework.boot:spring-boot-starter-security')
        compile('org.springframework.boot:spring-boot-starter-web')
	testCompile('org.springframework.boot:spring-boot-starter-test')
	testCompile('org.springframework.security:spring-security-test')
}

Be aware that with Spring Boot 2 there are two stacks to choose from: the Servlet stack and the WebFlux reactive stack. In this tutorial we shall use the Servlet stack; WebFlux will be covered in another tutorial.

Let’s go and add our first controller.

package com.gkatzioura.security.simple.controller;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloWorldController {

    @GetMapping("/hello")
    public ResponseEntity<String> hello(String name) {

        return new ResponseEntity<>("Hello "+name, HttpStatus.OK);
    }

}

If we try to access the endpoint http://localhost:8080/hello?name=john we will be presented with a login screen.
Thus, just by including the security dependency in our project, our endpoints are auto-secured and a user with a password is configured.
The username would be ‘user’ and the password is the one that Spring autogenerates; you can retrieve it from the application logs printed at startup.

Of course using an autogenerated password is not sufficient, so we are going to provide a username and password of our choice.

One of the ways is to set your username and password in the application.yaml file.

spring:
  security:
    user:
      name: test-user
      password: test-password

Now, putting your passwords on the file system, especially unencrypted, is not a good practice, let alone uploading them to your version control system, since application.yaml is a source file. Also, anyone with access to the binary can retrieve the username and password.

Therefore, instead of putting this sensitive information in the application.yaml file, you can set it using environment variables.

So your environment variables would be

SPRING_SECURITY_USER_NAME=test-user
SPRING_SECURITY_USER_PASSWORD=test-password
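
To verify the setup, a minimal test sketch using the spring-security-test dependency already present in the Gradle file could look like the following; the package name and the test-user/test-password values are assumptions based on the configuration above:

package com.gkatzioura.security.simple;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;

import static org.springframework.security.test.web.servlet.request.SecurityMockMvcRequestPostProcessors.httpBasic;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
public class HelloWorldControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void helloReturnsGreetingForAuthenticatedUser() throws Exception {

        // the credentials below assume the test-user/test-password configuration shown above is active
        mockMvc.perform(get("/hello").param("name", "john")
                       .with(httpBasic("test-user", "test-password")))
               .andExpect(status().isOk())
               .andExpect(content().string("Hello john"));
    }
}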

To sum up, this was the easiest and fastest way to add security to your project.
In the next blog post we will do the same using the WebFlux reactive stack.

The MoSCoW Prioritization

One of the hardest parts when it comes to implementing new ideas and features throughout the lifecycle of a project is making sure that the specific feature, or the new framework to be added, will play an essential part in the project's success.

By implementing features that are hard to implement and not essential to the customer, or by adding tools to our stack which require certain expertise and an investment in learning time but have limited use in this project, we risk the project's success and our customer will be unhappy.
For example, one of your business analysts, John, comes up with a brilliant idea which he believes should be moved to the top of the product backlog. Also, one of your top developers, Jack, might stumble upon a new tool which will help automate the testing process.

Is the new feature that John proposes as great and as essential for the client as it seems? Is the new tool that Jack proposes really needed, so that we should incorporate it into one of our next sprints?

A good and simple way to avoid waste is to use the MoSCoW Prioritization.
So here is what MoSCoW prioritization stands for:

  • M : Must have
  • S : Should have
  • C : Could have
  • W : Won’t have

Must have

Is our final solution useless if we don't ship this specific feature with our product? Are we going to have constant problems throughout the product's development lifecycle, risking its delivery, if we don't use this specific tool?

If the above cases are satisfied then the feature or the non-functional requirement is a must-have, and we should work our way towards including it.

Should have

Is the feature important but not necessary for the customer to get started? Can we find a workaround if it does not make the delivery date? Will the product delivery be unaffected if we don't use this tool, although it would save us time and effort?

If the above cases are satisfied then the feature or the non-functional requirement is a should-have, and we can find workarounds for it.

Could have

Is this feature going to enhance the customer experience, while not affecting the project goal at all if it doesn't get delivered? Will the development process continue to be successful even without that framework, although using it would help us speed things up?

If the above cases are satisfied then the feature or the non-functional requirement is a could-have, and it is not of high importance. However, if the must-haves and the should-haves are already satisfied, there is some real value in implementing the could-haves.

Won’t Have

Is the project scope not affected at all if we don't implement this feature? Is the new tool introduced through our development process not used at all?

If the above cases are satisfied then the feature or the non-functional requirement is a won't-have, and you don't have to bother with it at all.

To sum up, from the beginning of the project our main focus is on the must-have items. Once they are satisfied we can move to the should-have items, and finally we can tackle the could-have items.

All in all, this approach can help us avoid waste and stay efficient with the resources needed.

 

Host your maven artifacts on the cloud using CloudStorageMaven

One of the major issues when dealing with large codebases in our teams has to do with artifact sharing and artifact storage.

There are various options out there that provide many features such as jfrog, nexus, archiva etc.

I have used them, set them up and configured them, and they certainly provide many features. Also, having your own repository installation gives you a lot of flexibility. Furthermore, Docker has made things a lot easier, so setting them up takes almost no time.

Now, if you use a cloud provider like Amazon, Azure or Google, there is a more lightweight option that is pretty easy to set up. These providers give you cheap and easy access to storage, and their storage options can also be used to host your private artifacts, or even your public ones.

To do so you need a Maven wagon capable of communicating with your cloud provider's storage options, and this is exactly what the CloudStorageMaven project deals with.

The CloudStorageMaven project provides you with wagons interacting with Amazon S3, Azure Blob Storage and Google Cloud Storage.

If you already use one of these cloud services, hosting your artifacts on them seems like a no-brainer, and these wagons make it a lot easier to do so.

I have compiled some tutorials on how to get started with each one of them.

Happy coding!

Host your maven artifacts using Google Cloud Storage

If you use Google Cloud and Java for your projects, then Google Cloud Storage is a great place to host your team's artifacts.

It is easy to set up and pretty cheap. It is also much simpler than setting up one of the existing repository options (JFrog, Nexus, Archiva etc.) if you are not particularly interested in their features.

To get started you need to specify a maven wagon which supports google cloud storage.
We will use the Google storage wagon.

Let’s get started by creating a maven project

mvn archetype:generate -DgroupId=com.test.apps -DartifactId=GoogleWagonTest -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

We are going to add a simple service.

package com.test.apps;

public class HelloService {

    public String sayHello() {

        return "Hello";
    }
}

Then we are going to add the maven wagon which will upload our binaries to, and fetch them from, Google Cloud Storage.

    <build>
        <extensions>
            <extension>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>google-storage-wagon</artifactId>
                <version>1.0</version>
            </extension>
        </extensions>
    </build>

Then we shall create the google cloud storage bucket which will host our artifacts.

Our bucket shall be called mavenrepository.
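
For example, assuming you have the Cloud SDK installed, the bucket could be created from the command line (bucket names are globally unique, so you may need to pick a different one):

gsutil mb gs://mavenrepository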

Now that we have set up our bucket in google we shall set the distribution management on our maven project.

    <distributionManagement>
        <snapshotRepository>
            <id>my-repo-bucket-snapshot</id>
            <url>gs://mavenrepository/snapshot</url>
        </snapshotRepository>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>gs://mavenrepository/release</url>
        </repository>
    </distributionManagement>

From the maven documentation

Where as the repositories element specifies in the POM the location and manner in which Maven may download remote artifacts for use by the current project, distributionManagement specifies where (and how) this project will get to a remote repository when it is deployed. The repository elements will be used for snapshot distribution if the snapshotRepository is not defined.

The next step is the most crucial one, and it has to do with authenticating to Google Cloud.

One way is to have the gcloud command line tool set up on your system and to issue a login with
‘gcloud auth login --brief’ using an account that has access to the bucket we created previously.
The other way is to use the GOOGLE_APPLICATION_CREDENTIALS environment variable, which sets the path to your Google application credentials file.
The credentials file should also be able to access the bucket we created previously.

And now the easiest part which is deploying.

mvn deploy

Now that your artifact has been deployed, you can use it in another project by specifying your repository and your wagon.

    <repositories>
        <repository>
            <id>my-repo-bucket-snapshot</id>
            <url>gs://mavenrepository/snapshot</url>
        </repository>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>gs://mavenrepository/release</url>
        </repository>
    </repositories>

    <build>
        <extensions>
            <extension>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>google-storage-wagon</artifactId>
                <version>1.0</version>
            </extension>
        </extensions>
    </build>

That’s it! Next thing you know, your artifact will be downloaded by maven through google cloud storage and used as a dependency in your new project.

Host your maven artifacts using Amazon s3

If you use Amazon Web Services and Java for your projects, then Amazon S3 is a great place to host your team's artifacts.

It is easy to set up and pretty cheap. It is also much simpler than setting up one of the existing repository options (JFrog, Nexus, Archiva etc.) if you are not particularly interested in their features.

To get started you need to specify a maven wagon which supports s3.
We will use the s3 storage wagon.

Let’s get started by creating a maven project

mvn archetype:generate -DgroupId=com.test.apps -DartifactId=S3WaggonTest -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

We are going to add a simple service.

package com.test.apps;

public class HelloService {

    public String sayHello() {

        return "Hello";
    }
}

Then we are going to add the maven wagon which will upload our binaries to, and fetch them from, S3.

    <build>
        <extensions>
            <extension>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>s3-storage-wagon</artifactId>
                <version>1.0</version>
            </extension>
        </extensions>
    </build>

Then we shall create the s3 bucket that will host our artifacts.

aws s3 mb s3://my-test-repo

Now that we have created our bucket, we shall set the distribution management on our maven project.

    <distributionManagement>
        <snapshotRepository>
            <id>my-repo-bucket-snapshot</id>
            <url>s3://my-test-repo/snapshot</url>
        </snapshotRepository>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>s3://my-test-repo/release</url>
        </repository>
    </distributionManagement>

From the maven documentation

Where as the repositories element specifies in the POM the location and manner in which Maven may download remote artifacts for use by the current project, distributionManagement specifies where (and how) this project will get to a remote repository when it is deployed. The repository elements will be used for snapshot distribution if the snapshotRepository is not defined.

The next step is the most crucial one, and it has to do with authenticating to AWS.
The easy way is to have the AWS CLI configured to point to the region where your bucket is located, with credentials that have read and write access to the s3 bucket which will host your binaries.

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

The other way is the Maven way: specifying our AWS credentials in the ~/.m2/settings.xml file.

  <servers>
    <server>
      <id>my-repo-bucket-snapshot</id>
      <username>EXAMPLEEXAMPLEXAMPLE</username>
      <password>eXampLEkeyEMI/K7EXAMP/bPxRfiCYEXAMPLEKEY</password>
    </server>
    <server>
      <id>my-repo-bucket-release</id>
      <username>EXAMPLEEXAMPLEXAMPLE</username>
      <password>eXampLEkeyEMI/K7EXAMP/bPxRfiCYEXAMPLEKEY</password>
    </server>
  </servers>

Be aware that you have to specify credentials for each repository specified.
Also, we are not done yet, since it is crucial to specify the region of the bucket.
To do so you can either set it up the Amazon way, by specifying it in an environment variable

AWS_DEFAULT_REGION=us-east-1

Or you can pass it as a property while executing the deploy command.

-DAWS_DEFAULT_REGION=us-east-1

And now the easiest part which is deploying.

mvn deploy

Now that your artifact has been deployed, you can use it in another project by specifying your repository and your wagon.

    <repositories>
        <repository>
            <id>my-repo-bucket-snapshot</id>
            <url>s3://my-test-repo/snapshot</url>
        </repository>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>s3://my-test-repo/release</url>
        </repository>
    </repositories>

    <build>
        <extensions>
            <extension>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>s3-storage-wagon</artifactId>
                <version>1.0</version>
            </extension>
        </extensions>
    </build>

That’s it! Next thing you know your artifact will be downloaded by maven through s3 and used as a dependency in your new project.

Host your maven artifacts using Azure Blob Storage

If you use Microsoft Azure and Java for your projects, then Azure Blob Storage is a great place to host your team's artifacts.

It is easy to set up and pretty cheap. It is also much simpler than setting up one of the existing repository options (JFrog, Nexus, Archiva etc.) if you are not particularly interested in their features.

To get started you need to specify a maven wagon which supports azure blob storage.
We will use the Azure storage wagon.

Let’s get started by creating a maven project

mvn archetype:generate -DgroupId=com.test.apps -DartifactId=AzureWagonTest -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

We are going to add a simple service.

package com.test.apps;

public class HelloService {

    public String sayHello() {

        return "Hello";
    }
}

Then we are going to add the maven wagon which will upload our binaries to, and fetch them from, Azure Blob Storage.

    <build>
        <extensions>
            <extension>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>azure-storage-wagon</artifactId>
                <version>1.0</version>
            </extension>
        </extensions>
    </build>

Then we shall create the azure storage account that will host our artifacts.
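
For example, assuming the Azure CLI is installed and a resource group named my-resource-group already exists, the storage account could be created like this (storage account names are globally unique, so you may need a different one):

az storage account create --name mavenrepository --resource-group my-resource-group --location westeurope --sku Standard_LRS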

Then we shall create a new container called snapshot. This container will contain our snapshot repositories.

We can go through the same process in order to create a release repository.
Be aware that there is no need to create different containers for each repository; you can have multiple repositories under the same container.

Now that we have set up our storage account in azure we shall set the distribution management on our maven project.

    <distributionManagement>
        <snapshotRepository>
            <id>my-repo-bucket-snapshot</id>
            <url>bs://mavenrepository/snapshot</url>
        </snapshotRepository>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>bs://mavenrepository/release</url>
        </repository>
    </distributionManagement>

From the maven documentation

Where as the repositories element specifies in the POM the location and manner in which Maven may download remote artifacts for use by the current project, distributionManagement specifies where (and how) this project will get to a remote repository when it is deployed. The repository elements will be used for snapshot distribution if the snapshotRepository is not defined.

The next step is the most crucial one, and it has to do with authenticating to Azure.

What you need is your storage account name and the key of the storage account.
In order to retrieve both, navigate to the Access keys entry in the Settings section of your Storage Account.

Then we shall specify our storage account credentials on the ~/.m2/settings.xml

  <servers>
    <server>
      <id>my-repo-bucket-snapshot</id>
      <username>mavenrepository</username>
      <password>eXampLEkeyEMI/K7EXAMP/bPxRfiCYEXAMPLEKEY</password>
    </server>
    <server>
      <id>my-repo-bucket-release</id>
      <username>mavenrepository</username>
      <password>eXampLEkeyEMI/K7EXAMP/bPxRfiCYEXAMPLEKEY</password>
    </server>
  </servers>

Be aware that you have to specify credentials for each repository specified.

And now the easiest part which is deploying.

mvn deploy

Now that your artifact has been deployed, you can use it in another project by specifying your repository and your wagon.

    <repositories>
        <repository>
            <id>my-repo-bucket-snapshot</id>
            <url>bs://mavenrepository/snapshot</url>
        </repository>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>bs://mavenrepository/release</url>
        </repository>
    </repositories>

    <build>
        <extensions>
            <extension>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>azure-storage-wagon</artifactId>
                <version>1.0</version>
            </extension>
        </extensions>
    </build>

That’s it! Next thing you know your artifact will be downloaded by maven through azure blob storage and used as a dependency in your new project.

Creational Design Patterns: Prototype Pattern

The prototype pattern is used in order to create a copy of an object. This pattern can be really useful, especially when creating an object from scratch is costly.
In comparison with the builder, factory and abstract factory patterns, it does not create an object from scratch; it clones/recreates an existing one.
In comparison with the singleton pattern, it creates multiple copies of an instance, whilst the singleton has to ensure that only one exists.

Imagine the scenario of an object which requires a very resource-intensive method in order to be created. It could be a database query with many joins, or even the result of a federated search.
We want that data to be processed by various algorithms, using one thread per algorithm. Every thread should have its own copy of the original instance, since sharing the same object would lead to inconsistent results.

So we have an interface for representing the result of a Search.

package com.gkatzioura.design.creational.prototype;

public interface SearchResult {

    SearchResult clone();

    int totalEntries();

    String getPage(int page);
}

And we have the FederatedSearchResult, an implementation of SearchResult.

package com.gkatzioura.design.creational.prototype;

import java.util.ArrayList;
import java.util.List;

public class FederatedSearchResult implements SearchResult {

    private List<String> pages = new ArrayList<String>();

    public FederatedSearchResult(List<String> pages) {
        this.pages = pages;
    }

    @Override
    public SearchResult clone() {

        final List<String> resultCopies = new ArrayList<String>();
        pages.forEach(p->resultCopies.add(p));
        FederatedSearchResult federatedSearchResult = new FederatedSearchResult(resultCopies);
        return federatedSearchResult;
    }

    @Override
    public int totalEntries() {
        return pages.size();
    }

    @Override
    public String getPage(int page) {
        return pages.get(page);
    }
}

So we can use the clone method to create as many copies as we need of an object that is very expensive to create.
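
As a minimal usage sketch, every worker thread below receives its own clone of the expensive original result, so the algorithms cannot interfere with each other (the page contents are of course made up):

package com.gkatzioura.design.creational.prototype;

import java.util.Arrays;
import java.util.List;

public class PrototypeExample {

    public static void main(String[] args) {

        // the expensive original result, created only once
        List<String> pages = Arrays.asList("page 0", "page 1", "page 2");
        SearchResult original = new FederatedSearchResult(pages);

        // every algorithm/thread works on its own copy of the result
        for (int i = 0; i < 3; i++) {
            SearchResult copy = original.clone();
            new Thread(() -> {
                for (int page = 0; page < copy.totalEntries(); page++) {
                    System.out.println(Thread.currentThread().getName() + " processing " + copy.getPage(page));
                }
            }).start();
        }
    }
}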

You can find the sourcecode on github.

Also I have compiled a cheat sheet containing a summary of the Creational Design Patterns.
Sign up in the link to receive it.

Creational Design Patterns: Singleton Pattern

The singleton design pattern is a software design pattern that restricts the instantiation of a class to one object.
In comparison with other creational design patterns such as the abstract factory, the factory and the builder pattern, the singleton will create an object but will also be responsible for ensuring that only one instance of that object exists.

When creating a class as a singleton there are certain problems that it has to tackle:

  • How can it be ensured that a class has only one instance?
  • How can the sole instance of a class be accessed easily?
  • How can a class control its instantiation?
  • How can the number of instances of a class be restricted?

 

Suppose we have a class that sends messages.
The Messenger class.

package com.gkatzioura.design.creational.singleton;

public class Messenger {

    public void send(String message) {
        
    }
}

However we want the messaging procedure to be handled by only one instance of the Messenger class. Imagine the scenario where the Messenger class opens a TCP connection (for example XMPP) and has to keep the connection alive in order to send messages. It would be pretty inefficient to open a new XMPP connection each time we have to send a message.

Therefore we will proceed and make the messenger class a singleton.

package com.gkatzioura.design.creational.singleton;

public class Messenger {

    private static Messenger messenger = new Messenger();

    private Messenger() {}

    public static Messenger getInstance() {
        return messenger;
    }

    public void send(String message) {

    }
}

As you can see, we made the Messenger constructor private and initialized a messenger using a static variable.
Static variables are class-level variables; their memory allocation happens only once, when the class is loaded into memory. This way we ensure that the Messenger class will be instantiated only once.
The getInstance method fetches the static messenger instance when called.
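
For example, every caller ends up with the exact same instance:

Messenger first = Messenger.getInstance();
Messenger second = Messenger.getInstance();

first.send("Hello!");

// both variables reference the same object
System.out.println(first == second); // prints true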

Obviously the previous approach has its pros and cons.
We don't have to worry about thread safety, and the instance will be created only when the Messenger class is loaded. However, it lacks flexibility. Consider the scenario of passing configuration variables to the Messenger constructor; that is not possible using the previous approach.

A workaround is to instantiate the messenger inside the getInstance method.

package com.gkatzioura.design.creational.singleton.lait;

public class Messenger {

    private static Messenger messenger;

    private Messenger() {}

    public static Messenger getInstance() {

        if(messenger==null) {
            messenger = new Messenger();
        }

        return messenger;
    }
    
    public void send(String message) {

    }
}

The above approach might work in certain cases, but it lacks thread safety when the class gets instantiated in a multithreaded environment.

The easiest approach to make our class thread safe is to synchronize the getInstance method.

package com.gkatzioura.design.creational.singleton.lait;

public class Messenger {

    private static Messenger messenger;

    private Messenger() {}

    public synchronized static Messenger getInstance() {

        if(messenger==null) {
            messenger = new Messenger();
        }

        return messenger;
    }

    public void send(String message) {

    }
}

That one will work. At least the creation of the messenger will be synchronized and no duplicates will be created.
The problem with this approach is that the synchronization is only needed once, when the object is created; with the code above, every subsequent call to getInstance pays an unnecessary locking overhead.

Another approach is to use Double-Checked Locking. Double-checked locking needs extra care, since it is easy to pick a broken implementation over the correct one.
The best approach is to implement lazy loading using the volatile keyword.

package com.gkatzioura.design.creational.singleton.dcl;

public class Messenger {

    private static final Object lock = new Object();
    private static volatile Messenger messenger;

    private Messenger() {}

    public static Messenger getInstance() {

        if(messenger==null) {
            synchronized (lock) {
                if(messenger==null) {
                    messenger = new Messenger();
                }
            }
        }

        return messenger;
    }

    public void send(String message) {

    }
}

By using the volatile keyword we prevent writes to the volatile field from being reordered with respect to any previous reads or writes, and reads of the volatile field from being reordered with respect to any following reads or writes.
Also, a mutex object is used to achieve synchronization.

To sum up, we created an object and made sure that only one instance of that object will ever exist. We also made sure that there won't be any problems instantiating the object in a multi-threaded environment.

You can find the sourcecode on github.

On the next blog post we will have a look at the prototype pattern.

Also I have compiled a cheat sheet containing a summary of the Creational Design Patterns.
Sign up in the link to receive it.