Host your Maven artifacts on the cloud using CloudStorageMaven

One of the major issues when dealing with large codebases across teams is artifact sharing and artifact storage.

There are various options out there that provide many features, such as JFrog Artifactory, Sonatype Nexus, and Apache Archiva.

I have used them, set them up, and configured them, and they certainly provide you with many features. Having your own repository installation also gives you a lot of flexibility. Furthermore, Docker has made things a lot easier, so setting them up takes almost no time.

Now if you use a cloud provider such as Amazon, Azure, or Google, there is a more lightweight option that is pretty easy to set up. These providers give you cheap and easy access to storage, and their storage options can also be used to host your private artifacts, or even your public ones.

To do so you need a Maven wagon capable of communicating with the storage options your cloud provider offers, and this is exactly what the CloudStorageMaven project deals with.

The CloudStorageMaven project provides wagons that interact with Amazon S3, Azure Blob Storage, and Google Cloud Storage.

If you already use one of these cloud services, hosting your artifacts on them is a no-brainer, and these wagons make it a lot easier to do so.

I have compiled some tutorials below on how to get started with each one of them.

Happy coding!

Host your Maven artifacts using Amazon S3

If you use Amazon Web Services and Java for your projects, then Amazon S3 is a great place to host your team's artifacts.

It is easy to set up and pretty cheap. It is also much simpler than setting up one of the existing repository managers (JFrog, Nexus, Archiva, etc.) if you are not particularly interested in their features.

To get started you need a Maven wagon that supports S3.
We will use the S3 Storage Wagon.

Let’s get started by creating a Maven project:

mvn archetype:generate -DgroupId=com.test.apps -DartifactId=S3WaggonTest -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

We are going to add a simple service.

package com.test.apps;

public class HelloService {

    public String sayHello() {

        return "Hello";
    }
}

Then we are going to add the Maven wagon which will upload and fetch our binaries to and from S3.

    <build>
        <extensions>
            <extension>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>s3-storage-wagon</artifactId>
                <version>1.0</version>
            </extension>
        </extensions>
    </build>

Then we shall create the S3 bucket that will host our artifacts.

aws s3 mb s3://my-test-repo

Now that we have created our bucket, we shall set the distribution management on our Maven project.

    <distributionManagement>
        <snapshotRepository>
            <id>my-repo-bucket-snapshot</id>
            <url>s3://my-test-repo/snapshot</url>
        </snapshotRepository>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>s3://my-test-repo/release</url>
        </repository>
    </distributionManagement>

From the Maven documentation:

Whereas the repositories element specifies in the POM the location and manner in which Maven may download remote artifacts for use by the current project, distributionManagement specifies where (and how) this project will get to a remote repository when it is deployed. The repository elements will be used for snapshot distribution if the snapshotRepository is not defined.

The next step is the most crucial one, and it has to do with authenticating to AWS.
The easy way is to have the AWS CLI configured to point to the region where your bucket is located, with credentials that have read and write access to the S3 bucket which will host your binaries.

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

The other way is the Maven way: specify your AWS credentials in ~/.m2/settings.xml.

  <servers>
    <server>
      <id>my-repo-bucket-snapshot</id>
      <username>EXAMPLEEXAMPLEXAMPLE</username>
      <password>eXampLEkeyEMI/K7EXAMP/bPxRfiCYEXAMPLEKEY</password>
    </server>
    <server>
      <id>my-repo-bucket-release</id>
      <username>EXAMPLEEXAMPLEXAMPLE</username>
      <password>eXampLEkeyEMI/K7EXAMP/bPxRfiCYEXAMPLEKEY</password>
    </server>
  </servers>
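Since these are plain AWS credentials, you may prefer to keep them encrypted using Maven's standard password encryption: generate a master password for ~/.m2/settings-security.xml and then encrypt each server password before placing it in settings.xml. This relies on Maven's usual settings decryption, so treat it as a suggestion rather than a feature of the wagon itself:

mvn --encrypt-master-password
mvn --encrypt-password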

Be aware that you have to specify credentials for each repository specified.
Also, we are not done yet, since it is crucial to specify the region of the bucket.
To do so you can either set it the Amazon way, by specifying it in an environment variable

AWS_DEFAULT_REGION=us-east-1

Or you can pass it as a property while executing the deploy command.

-DAWS_DEFAULT_REGION=us-east-1
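For example, assuming the us-east-1 region used above, the full invocation would look like this:

mvn deploy -DAWS_DEFAULT_REGION=us-east-1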

And now for the easiest part: deploying.

mvn deploy

Now that your artifact has been deployed, you can use it in another project by specifying your repository and your wagon.

    <repositories>
        <repository>
            <id>my-repo-bucket-snapshot</id>
            <url>s3://my-test-repo/snapshot</url>
        </repository>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>s3://my-test-repo/release</url>
        </repository>
    </repositories>

    <build>
        <extensions>
            <extension>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>s3-storage-wagon</artifactId>
                <version>1.0</version>
            </extension>
        </extensions>
    </build>

That’s it! Next thing you know, your artifact will be downloaded by Maven through S3 and used as a dependency in your new project.
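For instance, to consume the HelloService artifact we deployed earlier, declare it as a regular dependency. The coordinates below come from the archetype invocation at the start; the version assumes the quickstart archetype's default of 1.0-SNAPSHOT:

    <dependency>
        <groupId>com.test.apps</groupId>
        <artifactId>S3WaggonTest</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>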

Integrate Spring Boot and Elastic Beanstalk using CloudFormation

AWS Elastic Beanstalk is an Amazon Web Services offering that does most of the configuration for you and creates an infrastructure suitable for a horizontally scalable application. The alternative to Beanstalk would be to configure load balancers and auto scaling groups yourself, which requires a bit of AWS expertise and time.

In this tutorial we are going to deploy a Spring Boot jar application using Amazon Elastic Beanstalk and a CloudFormation template.

Less is more, therefore we are going to use pretty much the same Spring Boot application taken from the official Spring guide as a template.

The only changes are altering rootProject.name to beanstalk-deployment and some adjustments to the package structure. Downloading the project from GitHub is sufficient.

Then we can build and run the project

./gradlew build
java -jar build/libs/beanstalk-deployment-1.0-SNAPSHOT.jar
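If everything went fine the application should be up; assuming it listens on Spring Boot's default port 8080, a quick smoke test is:

curl http://localhost:8080/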

The next step is to upload the application to S3.

aws s3 cp build/libs/beanstalk-deployment-1.0-SNAPSHOT.jar s3://{your bucket name}/beanstalk-deployment-1.0-SNAPSHOT.jar

It helps to have the AWS CLI installed, since its elasticbeanstalk commands cover most Beanstalk operations.

Since we will use Java 8, I will get a list of the available Elastic Beanstalk solution stacks in order to retrieve the correct SolutionStackName.

aws elasticbeanstalk list-available-solution-stacks | grep Java

Based on the results I will use the “64bit Amazon Linux 2016.09 v2.3.0 running Java 8” solution stack name.

Now we are ready to proceed to our CloudFormation template.

We will specify a parameter, which will be the bucket containing the application code:

  "Parameters" : {
    "SourceCodeBucket" : {
      "Type" : "String"
    }
  }

Then we will define the Elastic Beanstalk application:

    "SpringBootApplication": {
      "Type": "AWS::ElasticBeanstalk::Application",
      "Properties": {
        "Description":"Spring boot and elastic beanstalk"
      }
    }

The next step is to specify the application version:

    "SpringBootApplicationVersion": {
      "Type": "AWS::ElasticBeanstalk::ApplicationVersion",
      "Properties": {
        "ApplicationName":{"Ref":"SpringBootApplication"},
        "SourceBundle": {
                  "S3Bucket": {"Ref":"SourceCodeBucket"},
                  "S3Key": "beanstalk-deployment-1.0-SNAPSHOT.jar"
        }
      }
    }

And then we specify our configuration template.

    "SpringBootBeanStalkConfigurationTemplate": {
      "Type": "AWS::ElasticBeanstalk::ConfigurationTemplate",
      "Properties": {
        "ApplicationName": {"Ref":"SpringBootApplication"},
        "Description":"A display of speed boot application",
        "OptionSettings": [
          {
            "Namespace": "aws:autoscaling:asg",
            "OptionName": "MinSize",
            "Value": "2"
          },
          {
            "Namespace": "aws:autoscaling:asg",
            "OptionName": "MaxSize",
            "Value": "2"
          },
          {
            "Namespace": "aws:elasticbeanstalk:environment",
            "OptionName": "EnvironmentType",
            "Value": "LoadBalanced"
          }
        ],
        "SolutionStackName": "64bit Amazon Linux 2016.09 v2.3.0 running Java 8"
      }
    }

The last step is to glue the above resources together by defining an environment:

    "SpringBootBeanstalkEnvironment": {
      "Type": "AWS::ElasticBeanstalk::Environment",
      "Properties": {
        "ApplicationName": {"Ref":"SpringBootApplication"},
        "EnvironmentName":"JavaBeanstalkEnvironment",
        "TemplateName": {"Ref":"SpringBootBeanStalkConfigurationTemplate"},
        "VersionLabel": {"Ref": "SpringBootApplicationVersion"}
      }
    }
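All of the above fragments live side by side under the Resources element of a single CloudFormation template. A minimal skeleton of the final file (with the resource bodies shown earlier elided) looks like this:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "SourceCodeBucket": { "Type": "String" }
  },
  "Resources": {
    "SpringBootApplication": { ... },
    "SpringBootApplicationVersion": { ... },
    "SpringBootBeanStalkConfigurationTemplate": { ... },
    "SpringBootBeanstalkEnvironment": { ... }
  }
}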

Now you are ready to upload your CloudFormation template and deploy your Beanstalk application:

aws s3 cp beanstalkspring.template s3://{bucket with templates}/beanstalkspring.template
aws cloudformation create-stack --stack-name SpringBeanStalk --parameters ParameterKey=SourceCodeBucket,ParameterValue={bucket with code} --template-url https://s3.amazonaws.com/{bucket with templates}/beanstalkspring.template
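Stack creation takes a few minutes; you can follow its progress through the AWS CLI, using the stack name from the command above:

aws cloudformation describe-stacks --stack-name SpringBeanStalk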

You can download the full source code and the CloudFormation template from GitHub.

Testing Amazon Web Services Codebase: DynamoDB and S3

When switching to an Amazon Web Services infrastructure, one of the main challenges is testing.

Components such as DynamoDB and S3 come in handy, however they come with a cost.
When it comes to continuous integration, you will end up spending resources if you use the real Amazon components.

Some of these components have clones that are capable of running locally.

You can use DynamoDB locally.

By issuing

java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb

you will have a local DynamoDB instance up and running.

Also, at http://localhost:8000/shell you have a DynamoDB shell (based on JavaScript) which will help you get started.

In order to connect to the local instance you need to set the endpoint on your DynamoDB client.

On Java

AmazonDynamoDBClient client = new AmazonDynamoDBClient();
client.setEndpoint("http://localhost:8000"); 

On Node.js

var AWS = require('aws-sdk');
var config = {"endpoint":"http://localhost:8000"};
var client = new AWS.DynamoDB(config);

Another base component of Amazon Web Services is the Simple Storage Service (S3).

Luckily Fake-S3, a lightweight server clone of Amazon S3, exists.

Installing and running fake-s3 is pretty simple

gem install fakes3
fakes3 -r /mnt/fakes3_root -p 4567

In order to connect you have to specify the endpoint, using the port fake-s3 was started with (4567 in the example above).

On Java

AmazonS3 client = new AmazonS3Client();
client.setEndpoint("http://localhost:8000"); 

On Node.js

var AWS = require('aws-sdk');
var config = {"endpoint":"http://localhost:8000"};
var client = new AWS.S3(config);

These tools will come in handy during the development phase, especially when you are getting started and want a simple example. By running them locally you avoid the overhead of permissions and configuration that comes with each component you run on Amazon.