New Book Day: A Developer’s Essential Guide to Docker Compose

As developers, we now have a wide variety of software components and cloud services to use. This is a scenario we could not even have imagined in the past.
I still remember when we had to set up our application servers and databases on top of bare-metal servers.
This burst of computing functionality, in the form of the cloud and managed services, allows us to utilise more tools for our applications and build better products.
At the same time, concerns like orchestrating workloads, isolating them and shipping them moved to a whole different level.
Containerization came to the rescue: Docker took over the microservice world and became the dominant solution for deploying microservices, and in certain cases even databases and brokers.
This brings us to the development process. Production deployments are a huge chapter of their own; platform engineers need to take care of the security, scaling and robustness of container-based deployments.
But the development process is also affected:

  • SQL/NoSQL databases, as well as purpose-built databases like InfluxDB, need to be available locally
  • Microservice application scenarios need to be tested
  • Other components such as brokers need to be available for testing

A step towards tackling the challenges mentioned above is to utilise Docker and its rich functionality. As convenient as it is to spin up containers locally, you still end up managing containers, volumes and networks. Most of the time those are spun up ad hoc or through scripts that a team has to maintain.

Docker Compose is one of the solutions to those problems. With Compose you can spin up multiple containers locally, organised through YAML files.
Here are some of the benefits when used during development:

  • The containers are organised and can be brought up or shut down together in an orderly way (see the commands below)
  • Applications can be placed on different networks
  • Volumes can be managed and attached to containers in a controlled way
  • Containers resolve each other’s location through DNS automatically; manual linking is not needed
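
In day-to-day use this boils down to a handful of commands. A minimal sketch, assuming a docker-compose.yml sits in the project directory:

# start every service defined in the Compose file, in dependency order
docker compose up -d

# list the containers that Compose manages
docker compose ps

# tear everything down, removing the containers and networks Compose created
docker compose down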

I have been using Compose for years. It has made my development process much more efficient, and in certain cases it has also helped me with actual production deployments. Writing this book was an opportunity for me to advocate for Docker Compose.


Thanks to the amazing people at Packt Publishing, it was possible to write this book and give back to the community.

The book is focused on various aspects of Compose.

The book begins with an extensive look at Docker Compose: how it is implemented, how it interacts with the Docker engine, the available commands, and the functionality it provides.

From there we dive deep into day-to-day development with Compose. We spin up complex infrastructure locally and simulate microservice environments using Compose. We then take this concept further and incrementally simulate issues we have to deal with in a production deployment, along with workarounds. Lastly, we use Compose for CI/CD jobs on popular solutions like GitHub Actions, Travis, and Bitbucket Pipelines.

The last part of the book is all about deploying to production. All the knowledge acquired previously can be used for actual production deployments. A production deployment comes with some standards:

  • Infrastructure as Code
  • Container registry
  • Networks
  • Load Balancing
  • Autoscaling

The above, in combination with the knowledge accumulated so far using Compose, is put to use for deployments on AWS and Azure, the most popular cloud providers, with the extra help of Infrastructure as Code using Terraform.

Lastly, since many production deployments nowadays reside on Kubernetes, we build a bridge between Compose and Kubernetes by migrating the existing Compose deployment using Kompose.
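
As a small taste of that bridge, Kompose can translate a Compose file into Kubernetes manifests. A minimal sketch, assuming kompose is installed and a docker-compose.yml is present:

# generate Kubernetes deployment and service manifests from the Compose file
kompose convert -f docker-compose.yml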

You can find the book on Amazon as well as on the Packt portal.
Happy Learning!

Upload and download files to S3 using Maven

Throughout the years I’ve seen many teams using Maven in many different ways. Maven can be used for many CI/CD tasks instead of extra pipeline code, or it can be used to prepare the development environment before running some tests.
Generally it is a convenient tool, widely used among Java teams, and it will continue to be so, since there is a huge ecosystem around it.

The CloudStorageMaven plugin helps you use various cloud storage buckets as a private Maven repository. Recently CloudStorageMaven for S3 got a huge upgrade: you can now also use it as a plugin to download files from or upload files to S3.

The plugin assumes that your environment is configured properly to access the S3 resources needed.
This can be achieved individually through aws configure:

aws configure

Other ways are through environment variables or by using the appropriate IAM role.
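
For reference, the environment-variable route would look roughly like the following; the values are the placeholder credentials from the AWS documentation, not real keys:

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1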

Suppose you want to download certain files from a path in S3.

<build>
        <plugins>
            <plugin>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>s3-storage-wagon</artifactId>
                <version>1.6</version>
                <executions>
                    <execution>
                        <id>download-one</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/local/download/path</downloadPath>
                            <keys>1.txt,2.txt,directory/3.txt</keys>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
</build>

Once the execution is finished, the files 1.txt, 2.txt and directory/3.txt shall reside in the local directory specified
(/local/download/path).
Be aware that file discovery on S3 is done by prefix, thus if you have the files 1.txt and 1.txt.jpg, both of them shall be downloaded.
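
If in doubt, you can check which keys share a prefix before running the plugin; aws s3 ls treats the path as a prefix (your-bucket being a placeholder here):

# lists every object whose key starts with 1.txt, e.g. both 1.txt and 1.txt.jpg
aws s3 ls s3://your-bucket/1.txt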

You can also download a single key to a single local file that you specify, as long as the mapping is one to one.

                    <execution>
                        <id>download-prefix</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/path/to/local/your-file.txt</downloadPath>
                            <keys>a-key-to-download.txt</keys>
                        </configuration>
                    </execution>

Files under a prefix that contains directories (these are virtual on S3) will be downloaded to the directory specified, recreating the directory and sub-directory structure.

                    <execution>
                        <id>download-prefix</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-download</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <downloadPath>/path/to/local/</downloadPath>
                            <keys>s3-prefix</keys>
                        </configuration>
                    </execution>

The next part is about uploading files to S3.

Uploading one file

                    <execution>
                        <id>upload-one</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/your-file.txt</path>
                            <key>key-to-download.txt</key>
                        </configuration>
                    </execution>

Upload a directory

                    <execution>
                        <id>upload-one</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/directory</path>
                            <key>prefix</key>
                        </configuration>
                    </execution>

Upload to the root of the bucket.

                    <execution>
                        <id>upload-multiples-files-no-key</id>
                        <phase>package</phase>
                        <goals>
                            <goal>s3-upload</goal>
                        </goals>
                        <configuration>
                            <bucket>your-bucket</bucket>
                            <path>/path/to/local/directory</path>
                        </configuration>
                    </execution>

That’s it! Since it is an open-source project you can contribute or open pull requests on GitHub.

Migrate your elastic beanstalk workers to docker containers

Amazon Elastic Beanstalk is one of the most popular services that AWS provides. Elastic Beanstalk comes with two options: the worker and the web application.

The worker application consumes messages from an SQS queue and processes them. If processing is successful the message is removed from the queue; if not, the message remains in the queue, or after a number of failed attempts it is moved to a dead-letter queue. If you want to get more into Elastic Beanstalk, I have made a tutorial on deploying your Spring application using Elastic Beanstalk and CloudFormation.

Elastic Beanstalk workers are really great because they are managed, they can be scaled up and down depending on your workload, and they provide a wide variety of runtime environments like Java and Node.js; you can also use Docker images.

Although Elastic Beanstalk can work wonders if you have an AWS-based infrastructure, you might face some issues when you try to move to a container-based infrastructure using a container orchestration engine.

Most probably your containerized worker application will work seamlessly without any extra configuration; however, you need to find an alternative to the agent which dispatches the queue messages to your application.

In order to make things simple I implemented a mechanism which retrieves the messages from the queue and sends them to the worker application.

The container-queue-worker project aims to provide an easy way to migrate your Elastic Beanstalk workers to a Docker orchestration system.

Since the solution is Scala-based, it can either be used as a standalone JVM application or run as a container using the image from Docker Hub.

Once it is set up, you need to add the routing configuration.

This can be done using environment variables:

WORKER_TYPE=sqs
WORKER_SERVER_ENDPOINT=http://{docker-service}
WORKER_AWS_QUEUE_ENDPOINT=http://{amazon queue endpoint}

Or, if you run it as a container, you can add a config file at the path /etc/worker/worker.conf.

worker {
  type =  sqs
  server-endpoint = http://{docker-service}
  aws {
    queue-endpoint =  http://{amazon queue endpoint}
  }
}

In order to make things easier for you, I added a Docker Compose file simulating the desired outcome.

version: '3.5'
networks:
  queue-worker-network:
    name: queue-worker-network
services:
  worker-server:
    build:
      context: ./worker-server
      dockerfile: Dockerfile
    ports:
      - 8080:8080
    networks:
      - queue-worker-network
  elasticmq:
    build:
      context: ./elasticmq
      dockerfile: Dockerfile
    ports:
      - 9324:9324
    networks:
      - queue-worker-network
  container-queue-worker:
    image: gkatzioura/container-queue-worker:0.1
    depends_on:
      - elasticmq
      - worker-server
    environment:
      WORKER_TYPE: sqs
      WORKER_SERVER_ENDPOINT: http://worker-server:8080/
      WORKER_AWS_QUEUE_ENDPOINT: http://elasticmq:9324/queue/test-queue
      AWS_DEFAULT_REGION: eu-west-1
      AWS_ACCESS_KEY_ID: access-key
      AWS_SECRET_ACCESS_KEY: secret-key
    networks:
      - queue-worker-network
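
To try the setup out you could push a test message to the local elasticmq queue with the AWS CLI. This is just a sketch: it assumes the test-queue referenced above exists in the elasticmq container and that the CLI has some (dummy) credentials and a region configured:

aws sqs send-message \
    --endpoint-url http://localhost:9324 \
    --queue-url http://localhost:9324/queue/test-queue \
    --message-body '{"greeting":"hello worker"}'

The container-queue-worker service should then pick the message up and forward it to the worker-server endpoint.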

Host your maven artifacts on the cloud using CloudStorageMaven

One of the major issues when dealing with large codebases in our teams has to do with artifact sharing and artifact storage.

There are various options out there that provide many features, such as JFrog Artifactory, Nexus, Archiva, etc.

I have used them, set them up and configured them, and they certainly provide you with many features. Having your own repository installation also gives you a lot of flexibility. Furthermore, Docker has made things a lot easier, so setting them up takes almost no time.

Now, if you use a cloud provider like Amazon, Azure or Google, there is a more lightweight option that is pretty easy to set up. These providers give you cheap and easy access to storage, and the storage options they provide can also be used to host your private artifacts, or even your public ones.

To do so you need a Maven wagon capable of communicating with your cloud provider’s storage, and this is exactly what the CloudStorageMaven project deals with.

The CloudStorageMaven project provides you with wagons interacting with Amazon S3, Azure Blob Storage and Google Cloud Storage.

If you already use one of these cloud services, hosting your artifacts on them seems like a no-brainer, and these wagons make it a lot easier to do so.

I have compiled some tutorials on how to get started with each one of them.

Happy coding!

Host your maven artifacts using Amazon s3

If you use Amazon Web Services and Java for your projects, then Amazon S3 is a great place to host your team’s artifacts.

It is easy to set up and pretty cheap. It is also much simpler than setting up one of the existing repository options (JFrog Artifactory, Nexus, Archiva, etc.) if you are not particularly interested in their features.

To get started you need to specify a Maven wagon which supports S3.
We will use the s3-storage-wagon.

Let’s get started by creating a maven project

mvn archetype:generate -DgroupId=com.test.apps -DartifactId=S3WaggonTest -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

We are going to add a simple service.

package com.test.apps;

public class HelloService {

    public String sayHello() {

        return "Hello";
    }
}

Then we are going to add the Maven wagon which will upload our binaries to S3 and fetch them from it.

    <build>
        <extensions>
            <extension>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>s3-storage-wagon</artifactId>
                <version>1.0</version>
            </extension>
        </extensions>
    </build>

Then we shall create the s3 bucket that will host our artifacts.

aws s3 mb s3://artifactbucket

Now that we have created our bucket, we shall set the distribution management on our Maven project.

    <distributionManagement>
        <snapshotRepository>
            <id>my-repo-bucket-snapshot</id>
            <url>s3://my-test-repo/snapshot</url>
        </snapshotRepository>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>s3://my-test-repo/release</url>
        </repository>
    </distributionManagement>

From the maven documentation

Whereas the repositories element specifies in the POM the location and manner in which Maven may download remote artifacts for use by the current project, distributionManagement specifies where (and how) this project will get to a remote repository when it is deployed. The repository elements will be used for snapshot distribution if the snapshotRepository is not defined.

The next step is the most crucial one, and it has to do with authenticating to AWS.
The easy way is to have the AWS CLI configured to point to the region where your bucket is located, with credentials that have read and write access to the S3 bucket which will host your binaries.

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

The other way is the Maven way: specify your AWS credentials in ~/.m2/settings.xml.

  <servers>
    <server>
      <id>my-repo-bucket-snapshot</id>
      <username>EXAMPLEEXAMPLEXAMPLE</username>
      <password>eXampLEkeyEMI/K7EXAMP/bPxRfiCYEXAMPLEKEY</password>
    </server>
    <server>
      <id>my-repo-bucket-release</id>
      <username>EXAMPLEEXAMPLEXAMPLE</username>
      <password>eXampLEkeyEMI/K7EXAMP/bPxRfiCYEXAMPLEKEY</password>
    </server>
  </servers>

Be aware that you have to specify credentials for each repository defined.
Also, we are not done yet, since it is crucial to specify the region of the bucket.
To do so you can either set it up the Amazon way, by specifying it in an environment variable:

AWS_DEFAULT_REGION=us-east-1

Or you can pass it as a property while executing the deploy command.

-DAWS_DEFAULT_REGION=us-east-1

And now the easiest part which is deploying.

mvn deploy

Now that your artifact has been deployed, you can use it in another project by specifying your repository and your wagon.

    <repositories>
        <repository>
            <id>my-repo-bucket-snapshot</id>
            <url>s3://my-test-repo/snapshot</url>
        </repository>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>s3://my-test-repo/release</url>
        </repository>
    </repositories>

    <build>
        <extensions>
            <extension>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>s3-storage-wagon</artifactId>
                <version>1.0</version>
            </extension>
        </extensions>
    </build>

That’s it! Next thing you know, your artifact will be downloaded by Maven through S3 and used as a dependency in your new project.

Push Spring Boot Docker images on ECR

In a previous blog post we integrated a Spring Boot application with EC2.
It is one of the most basic forms of deployment that you can have on Amazon Web Services.

In this tutorial we will create a Docker image with our application, which will be stored in the Amazon EC2 Container Registry (ECR).

You need to have the AWS CLI tool installed.
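
If you don’t have it yet, one common way to install it is through pip; any of the official installation methods will do:

pip install awscli
aws --version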

We will keep our Spring application as simple as we can, therefore we will use an example from the official Spring site. The only changes applied will be to the packaging and the application name.

Our application shall be named ecs-deployment

rootProject.name = 'ecs-deployment'

Then we build and run our application

gradle build
gradle bootRun

Now let’s dockerize our application.
First we shall create a Dockerfile that will reside in src/main/docker.

FROM frolvlad/alpine-oraclejdk8
VOLUME /tmp
ADD ecs-deployment-1.0-SNAPSHOT.jar app.jar
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

Then we should edit our Gradle file in order to add the Docker dependency, apply the Docker plugin, and add an extra Gradle task that creates our Docker image.

buildscript {
    ...
    dependencies {
        ...
        classpath('se.transmode.gradle:gradle-docker:1.2')
    }
}

...
apply plugin: 'docker'


task buildDocker(type: Docker, dependsOn: build) {
    push = false
    applicationName = jar.baseName
    dockerfile = file('src/main/docker/Dockerfile')
}

And we are ready to build our docker image.

./gradlew build buildDocker

You can also run your docker application from the newly created image.

docker run -p 8080:8080 -t com.gkatzioura.deployment/ecs-deployment:1.0-SNAPSHOT

The first step is to create our ECR repository:

aws ecr create-repository  --repository-name ecs-deployment

Then let us proceed with our docker registry authentication.

aws ecr get-login

Then run the command given in the output. The login attempt will succeed, and you are ready to push your image.
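
For reference, the command that aws ecr get-login prints is itself a docker login command, roughly of the following shape (the temporary password, account id and region are filled in by the CLI):

docker login -u AWS -p {temporary password} https://{aws account id}.dkr.ecr.{aws region}.amazonaws.com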

First tag the image in order to specify the repository that we previously created and then do a docker push.

docker tag {imageid} {aws account id}.dkr.ecr.{aws region}.amazonaws.com/ecs-deployment:1.0-SNAPSHOT
docker push {aws account id}.dkr.ecr.{aws region}.amazonaws.com/ecs-deployment:1.0-SNAPSHOT

And we are done! Our Spring Boot Docker image is deployed on the Amazon EC2 Container Registry.

You can find the source code on GitHub.

Integrate Spring Boot and EC2 using Cloudformation

In a previous blog post we integrated a Spring Boot application with Elastic Beanstalk.
The application was a servlet-based application responding to requests.

In this tutorial we are going to deploy a Spring Boot application which executes some scheduled tasks, on an EC2 instance.
The application will be pretty much the same application taken from the official Spring guide, with some minor differences in the packages.

The name of our application will be ec2-deployment

rootProject.name = 'ec2-deployment'

Then we will schedule a task to our spring boot application.

package com.gkatzioura.deployment.task;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

/**
 * Created by gkatzioura on 12/16/16.
 */
@Component
public class SimpleTask {

    private static final Logger LOGGER = LoggerFactory.getLogger(SimpleTask.class);

    @Scheduled(fixedRate = 5000)
    public void reportCurrentTime() {
        LOGGER.info("This is a simple task on ec2");
    }

}


The next step is to build the application and upload it to our S3 bucket.

gradle build
aws s3 cp build/libs/ec2-deployment-1.0-SNAPSHOT.jar s3://{your bucket name}/ec2-deployment-1.0-SNAPSHOT.jar 

What comes next is a bootstrapping script in order to run our application once the server is up and running.

#!/usr/bin/env bash
aws s3 cp s3://{bucket with code}/ec2-deployment-1.0-SNAPSHOT.jar /home/ec2-user/ec2-deployment-1.0-SNAPSHOT.jar
sudo yum -y install java-1.8.0
sudo yum -y remove java-1.7.0-openjdk
cd /home/ec2-user/
sudo nohup java -jar ec2-deployment-1.0-SNAPSHOT.jar > ec2dep.log

This script is pretty much self-explanatory: we download the application from the bucket we previously uploaded it to, install the Java version needed, and then run the application (the script serves example purposes; there are certainly many ways to set up your Java application on Linux).

The next step is to proceed to our CloudFormation script. Since we will download our application from S3, it is essential to have an IAM policy that allows us to download items from the S3 bucket we used previously. Therefore we will create a role with the policy needed:

"RootRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version" : "2012-10-17",
          "Statement": [ {
            "Effect": "Allow",
            "Principal": {
              "Service": [ "ec2.amazonaws.com" ]
            },
            "Action": [ "sts:AssumeRole" ]
          } ]
        },
        "Path": "/",
        "Policies": [ {
          "PolicyName": "root",
          "PolicyDocument": {
            "Version" : "2012-10-17",
            "Statement": [ {
              "Effect": "Allow",
              "Action": [
                "s3:Get*",
                "s3:List*"
              ],
              "Resource": {"Fn::Join" : [ "", [ "arn:aws:s3:::", {"Ref":"SourceCodeBucket"},"/*"] ] }
            } ]
          }
        } ]
      }
    }

The next step is to encode our bootstrapping script to Base64 in order to be able to pass it as user data.
Once the EC2 instance is up and running, it will run the shell commands previously specified.
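
Any Base64 tool will do for the encoding; for example, assuming the script above is saved as bootstrap.sh:

# GNU coreutils: -w 0 keeps the output on a single line
base64 -w 0 bootstrap.sh

# on macOS
base64 -i bootstrap.sh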

The last step is to create our instance profile and specify the EC2 instance to be launched:

    "RootInstanceProfile": {
      "Type": "AWS::IAM::InstanceProfile",
      "Properties": {
        "Path": "/",
        "Roles": [ {
          "Ref": "RootRole"
        } ]
      }
    },
    "Ec2Instance":{
      "Type":"AWS::EC2::Instance",
      "Properties":{
        "ImageId":"ami-9398d3e0",
        "InstanceType":"t2.nano",
        "KeyName":"TestKey",
        "IamInstanceProfile": {"Ref":"RootInstanceProfile"},
"UserData":"IyEvdXNyL2Jpbi9lbnYgYmFzaA0KYXdzIHMzIGNwIHMzOi8ve2J1Y2tldCB3aXRoIGNvZGV9L2VjMi1kZXBsb3ltZW50LTEuMC1TTkFQU0hPVC5qYXIgL2hvbWUvZWMyLXVzZXIvZWMyLWRlcGxveW1lbnQtMS4wLVNOQVBTSE9ULmphcg0Kc3VkbyB5dW0gLXkgaW5zdGFsbCBqYXZhLTEuOC4wDQpzdWRvIHl1bSAteSByZW1vdmUgamF2YS0xLjcuMC1vcGVuamRrDQpjZCAvaG9tZS9lYzItdXNlci8NCnN1ZG8gbm9odXAgamF2YSAtamFyIGVjMi1kZXBsb3ltZW50LTEuMC1TTkFQU0hPVC5qYXIgPiBlYzJkZXAubG9n"
      }
    }

KeyName stands for the SSH key name, in case you want to log in to the EC2 instance.

So we are good to go and create our CloudFormation stack. Note that you have to add the CAPABILITY_IAM flag.

aws s3 cp ec2spring.template s3://{bucket with templates}/ec2spring.template
aws cloudformation create-stack --stack-name SpringEc2 --parameters ParameterKey=SourceCodeBucket,ParameterValue={bucket with code} --template-url https://s3.amazonaws.com/{bucket with templates}/ec2spring.template --capabilities CAPABILITY_IAM
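
You can follow the stack creation from the CLI as well, either by inspecting the stack events or by blocking until creation completes:

aws cloudformation describe-stack-events --stack-name SpringEc2
aws cloudformation wait stack-create-complete --stack-name SpringEc2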

That’s it. Now you have your Spring application up and running on top of an EC2 instance.
You can download the source code from GitHub.

Integrate Spring boot and Elastic Beanstalk using Cloudformation

AWS Elastic Beanstalk is an Amazon web service that does most of the configuration for you and creates an infrastructure suitable for a horizontally scalable application. The alternative to Beanstalk would be configuring load balancers and auto scaling groups yourself, which requires a bit of AWS expertise and time.

In this tutorial we are going to deploy a Spring Boot jar application using Amazon Elastic Beanstalk and a CloudFormation template.

Less is more, therefore we are going to use pretty much the same Spring Boot application taken from the official Spring guide as a template.

The only changes are altering rootProject.name to beanstalk-deployment and some adjustments to the package structure. Downloading the project from GitHub is sufficient.

Then we can build and run the project

gradlew build
java -jar build/libs/beanstalk-deployment-1.0-SNAPSHOT.jar 

The next step is to upload the application to S3.

aws s3 cp build/libs/beanstalk-deployment-1.0-SNAPSHOT.jar s3://{your bucket name}/beanstalk-deployment-1.0-SNAPSHOT.jar

You need to install the Elastic Beanstalk CLI, since it helps a lot with most Beanstalk operations.
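
One common way to install it is through pip (the awsebcli package); any of the official installation methods will work:

pip install awsebcli
eb --version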

Since we will use Java 8, we will get a list of the available Elastic Beanstalk solution stacks in order to retrieve the correct SolutionStackName.

aws elasticbeanstalk list-available-solution-stacks |grep Java 

Based on the results I will use the “64bit Amazon Linux 2016.09 v2.3.0 running Java 8” stackname.

Now we are ready to proceed to our cloudformation script.

We will specify a parameter, which will be the bucket containing the application code:

  "Parameters" : {
    "SourceCodeBucket" : {
      "Type" : "String"
    }
  }

Then we will specify the name of the application

    "SpringBootApplication": {
      "Type": "AWS::ElasticBeanstalk::Application",
      "Properties": {
        "Description":"Spring boot and elastic beanstalk"
      }
    }

Next step will be to specify the application version

    "SpringBootApplicationVersion": {
      "Type": "AWS::ElasticBeanstalk::ApplicationVersion",
      "Properties": {
        "ApplicationName":{"Ref":"SpringBootApplication"},
        "SourceBundle": {
                  "S3Bucket": {"Ref":"SourceCodeBucket"},
                  "S3Key": "beanstalk-deployment-1.0-SNAPSHOT.jar"
        }
      }
    }

And then we specify our configuration template.

    "SpringBootBeanStalkConfigurationTemplate": {
      "Type": "AWS::ElasticBeanstalk::ConfigurationTemplate",
      "Properties": {
        "ApplicationName": {"Ref":"SpringBootApplication"},
        "Description":"A display of speed boot application",
        "OptionSettings": [
          {
            "Namespace": "aws:autoscaling:asg",
            "OptionName": "MinSize",
            "Value": "2"
          },
          {
            "Namespace": "aws:autoscaling:asg",
            "OptionName": "MaxSize",
            "Value": "2"
          },
          {
            "Namespace": "aws:elasticbeanstalk:environment",
            "OptionName": "EnvironmentType",
            "Value": "LoadBalanced"
          }
        ],
        "SolutionStackName": "64bit Amazon Linux 2016.09 v2.3.0 running Java 8"
      }
    }

The last step is to glue the above resources together by defining an environment:

    "SpringBootBeanstalkEnvironment": {
      "Type": "AWS::ElasticBeanstalk::Environment",
      "Properties": {
        "ApplicationName": {"Ref":"SpringBootApplication"},
        "EnvironmentName":"JavaBeanstalkEnvironment",
        "TemplateName": {"Ref":"SpringBootBeanStalkConfigurationTemplate"},
        "VersionLabel": {"Ref": "SpringBootApplicationVersion"}
      }
    }

Now you are ready to upload your cloudformation template and deploy your beanstalk application

aws s3 cp beanstalkspring.template s3://{bucket with templates}/beanstalkspring.template
aws cloudformation create-stack --stack-name SpringBeanStalk --parameters ParameterKey=SourceCodeBucket,ParameterValue={bucket with code} --template-url https://s3.amazonaws.com/{bucket with templates}/beanstalkspring.template

You can download the full source code and the CloudFormation template from GitHub.

Java on the AWS cloud using Lambda, Api Gateway and CloudFormation

In a previous post we implemented a Java-based AWS Lambda function and deployed it using CloudFormation.

Since we have our Lambda function set up, we will integrate it with an HTTP endpoint using AWS API Gateway.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any Web application

For this example, think of API Gateway as an HTTP connector.

We will change our original function in order to implement a division.

package com.gkatzioura.deployment.lambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.math.BigDecimal;
import java.util.Map;
import java.util.logging.Logger;

/**
 * Created by gkatzioura on 9/10/2016.
 */
public class RequestFunctionHandler implements RequestHandler<Map<String,String>,String> {

    private static final String NUMERATOR_KEY = "numerator";
    private static final String DENOMINATOR_KEY = "denominator";

    private static final Logger LOGGER = Logger.getLogger(RequestFunctionHandler.class.getName());

    public String handleRequest(Map <String,String> values, Context context) {

        LOGGER.info("Handling request");

        if(!values.containsKey(NUMERATOR_KEY)||!values.containsKey(DENOMINATOR_KEY)) {
            return "You need both numerator and denominator";
        }

        try {
            BigDecimal numerator = new BigDecimal(values.get(NUMERATOR_KEY));
            BigDecimal denominator= new BigDecimal(values.get(DENOMINATOR_KEY));
            return  numerator.divide(denominator).toString();
        } catch (Exception e) {
            return "Please provide valid values";
        }
    }

}

Then we will package our changed Lambda code and upload it to S3.

aws s3 cp build/distributions/JavaLambdaDeployment.zip s3://lambda-functions/JavaLambdaDeployment.zip

The next step is to update our CloudFormation template and add the API Gateway resources that forward requests to our Lambda function.

First we have to declare our REST API:

    "AGRA16PAA": {
      "Type": "AWS::ApiGateway::RestApi",
      "Properties": {"Name": "CalculationApi"}
    }

Then we need to add a REST resource. Inside the DependsOn element we can see the id of our REST API; therefore CloudFormation will create this resource after the REST API has been created.

"AGR2JDQ8": {
      "Type": "AWS::ApiGateway::Resource",
      "Properties": {
        "RestApiId": {"Ref": "AGRA16PAA"},
        "ParentId": {
          "Fn::GetAtt": ["AGRA16PAA","RootResourceId"]
        },
        "PathPart": "divide"
      },
      "DependsOn": [
        "AGRA16PAA"
      ]
    }

Another crucial part is to add a permission so that API Gateway is able to invoke our Lambda function.

    "LPI6K5": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:invokeFunction",
        "FunctionName": {"Fn::GetAtt": ["LF9MBL", "Arn"]},
        "Principal": "apigateway.amazonaws.com",
        "SourceArn": {"Fn::Join": ["",
          ["arn:aws:execute-api:", {"Ref": "AWS::Region"}, ":", {"Ref": "AWS::AccountId"}, ":", {"Ref": "AGRA16PAA"}, "/*"]
        ]}
      }
    }

The last step is to add the API Gateway method so that our Lambda function can be invoked through the API Gateway. Furthermore, we will add an API Gateway deployment resource.

"Deployment": {
      "Type": "AWS::ApiGateway::Deployment",
      "Properties": {
        "RestApiId": { "Ref": "AGRA16PAA" },
        "Description": "First Deployment",
        "StageName": "StagingStage"
      },
      "DependsOn" : ["AGM25KFD"]
    },
    "AGM25KFD": {
      "Type": "AWS::ApiGateway::Method",
      "Properties": {
        "AuthorizationType": "NONE",
        "HttpMethod": "POST",
        "ResourceId": {"Ref": "AGR2JDQ8"},
        "RestApiId": {"Ref": "AGRA16PAA"},
        "Integration": {
          "Type": "AWS",
          "IntegrationHttpMethod": "POST",
          "IntegrationResponses": [{"StatusCode": 200}],
          "Uri": {
            "Fn::Join": [
              "",
              [
                "arn:aws:apigateway:",
                {"Ref": "AWS::Region"},
                ":lambda:path/2015-03-31/functions/",
                {"Fn::GetAtt": ["LF9MBL", "Arn"]},
                "/invocations"
              ]
            ]
          }
        },
        "MethodResponses": [{
          "StatusCode": 200
        }]
      }
    }

So we ended up with our new CloudFormation template.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "LF9MBL": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": "lambda-functions",
          "S3Key": "JavaLambdaDeployment.zip"
        },
        "FunctionName": "SimpleRequest",
        "Handler": "com.gkatzioura.deployment.lambda.RequestFunctionHandler",
        "MemorySize": 128,
        "Role": "arn:aws:iam::274402012893:role/lambda_basic_execution",
        "Runtime": "java8"
      }
    },
    "Deployment": {
      "Type": "AWS::ApiGateway::Deployment",
      "Properties": {
        "RestApiId": { "Ref": "AGRA16PAA" },
        "Description": "First Deployment",
        "StageName": "StagingStage"
      },
      "DependsOn" : ["AGM25KFD"]
    },
    "AGM25KFD": {
      "Type": "AWS::ApiGateway::Method",
      "Properties": {
        "AuthorizationType": "NONE",
        "HttpMethod": "POST",
        "ResourceId": {"Ref": "AGR2JDQ8"},
        "RestApiId": {"Ref": "AGRA16PAA"},
        "Integration": {
          "Type": "AWS",
          "IntegrationHttpMethod": "POST",
          "IntegrationResponses": [{"StatusCode": 200}],
          "Uri": {
            "Fn::Join": [
              "",
              [
                "arn:aws:apigateway:",
                {"Ref": "AWS::Region"},
                ":lambda:path/2015-03-31/functions/",
                {"Fn::GetAtt": ["LF9MBL","Arn"]},
                "/invocations"
              ]
            ]
          }
        },
        "MethodResponses": [{"StatusCode": 200}]
      },
      "DependsOn": ["LF9MBL","AGR2JDQ8","LPI6K5"]
    },
    "AGR2JDQ8": {
      "Type": "AWS::ApiGateway::Resource",
      "Properties": {
        "RestApiId": {"Ref": "AGRA16PAA"},
        "ParentId": {
          "Fn::GetAtt": ["AGRA16PAA","RootResourceId"]
        },
        "PathPart": "divide"
      },
      "DependsOn": ["AGRA16PAA"]
    },
    "AGRA16PAA": {
      "Type": "AWS::ApiGateway::RestApi",
      "Properties": {
        "Name": "CalculationApi"
      }
    },
    "LPI6K5": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:invokeFunction",
        "FunctionName": {"Fn::GetAtt": ["LF9MBL", "Arn"]},
        "Principal": "apigateway.amazonaws.com",
        "SourceArn": {"Fn::Join": ["",
          ["arn:aws:execute-api:", {"Ref": "AWS::Region"}, ":", {"Ref": "AWS::AccountId"}, ":", {"Ref": "AGRA16PAA"}, "/*"]
        ]}
      }
    }
 }
}

Last but not least, we have to update our previous cloudformation stack.

So we uploaded our latest template

aws s3 cp cloudformationjavalambda2.template s3://cloudformation-templates/cloudformationjavalambda2.template

And all we have to do is to update our stack.

aws cloudformation update-stack --stack-name JavaLambdaStack --template-url https://s3.amazonaws.com/cloudformation-templates/cloudformationjavalambda2.template
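
If you want to block until CloudFormation reports the update as complete before testing, you can use the wait subcommand:

aws cloudformation wait stack-update-complete --stack-name JavaLambdaStack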

Our stack has just been updated.
We can go to our API Gateway endpoint and try to issue a POST request.

curl -H "Content-Type: application/json" -X POST -d '{"numerator":1,"denominator":"2"}' https://{your api gateway endpoint}/StagingStage/divide
"0.5"

You can find the source code on GitHub.

Java on the AWS cloud using Lambda

Amazon Web Services gets more popular by the day. Java is a first-class citizen on AWS, and it is pretty easy to get started.
Deploying your application is a bit different, but still easy and convenient.

AWS Lambda is a compute service where you can upload your code to AWS Lambda and the service can run the code on your behalf using AWS infrastructure. After you upload your code and create what we call a Lambda function, AWS Lambda takes care of provisioning and managing the servers that you use to run the code.

Think of a Lambda as running a task that needs up to five minutes to finish. For simple actions or jobs that are not time-consuming and don’t require a huge framework, AWS Lambda is the way to go. AWS Lambda is also great for horizontal scaling.

The most stripped down example would be to create a lambda function that responds to a request.

We shall implement the RequestHandler interface.

package com.gkatzioura.deployment.lambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;
import java.util.logging.Logger;

/**
 * Created by gkatzioura on 9/10/2016.
 */
public class RequestFunctionHandler implements RequestHandler<Map<String,String>,String> {

    private static final Logger LOGGER = Logger.getLogger(RequestFunctionHandler.class.getName());

    public String handleRequest(Map <String,String> values, Context context) {

        LOGGER.info("Handling request");

        return "You invoked a lambda function";
    }

}

In a way, the RequestHandler is like a controller.

To proceed, we have to create an archive with the dependencies needed, therefore we will create a custom Gradle task:

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile (
            'com.amazonaws:aws-lambda-java-core:1.1.0',
            'com.amazonaws:aws-lambda-java-events:1.1.0'
    )
}

task buildZip(type: Zip) {
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtime
    }
}

build.dependsOn buildZip

Then we should build

gradle build

Now we have to upload our packaged code so that our Lambda function can use it.

I have an S3 bucket on Amazon just for Lambda functions. Suppose our bucket is called lambda-functions (I am pretty sure that name is already taken).
We will use the AWS CLI wherever possible.

aws s3 cp build/distributions/JavaLambdaDeployment.zip s3://lambda-functions/JavaLambdaDeployment.zip

Now, instead of creating the Lambda function the manual way, we are going to do so by creating a CloudFormation template.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "LF9MBL": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": "lambda-functions",
          "S3Key" : "JavaLambdaDeployment.zip",
        },
        "FunctionName": "SimpleRequest",
        "Handler": "com.gkatzioura.deployment.lambda.RequestFunctionHandler",
        "MemorySize": 128,
        "Role":"arn:aws:iam::274402012893:role/lambda_basic_execution",
        "Runtime":"java8"
      },
      "Metadata": {
        "AWS::CloudFormation::Designer": {
          "id": "66b2b325-f19a-4d7d-a7a9-943dd8cd4a5c"
        }
      }
    }
  }
}

The next step is to upload our CloudFormation template to an S3 bucket. Personally, I use a separate bucket for my templates. Suppose that our bucket is called cloudformation-templates:

aws s3 cp cloudformationjavalambda.template s3://cloudformation-templates/cloudformationjavalambda.template

The next step is to create our CloudFormation stack using the template specified:

aws cloudformation create-stack --stack-name JavaLambdaStack --template-url https://s3.amazonaws.com/cloudformation-templates/cloudformationjavalambda.template

In order to check it, we shall invoke the Lambda function through the AWS CLI:

aws lambda invoke --invocation-type RequestResponse --function-name SimpleRequest --region eu-west-1 --log-type Tail --payload '{}' outputfile.txt

And the result is the expected one:

"You invoked a lambda function"

You can find the source code on GitHub.