My language will overtake yours

From time to time I stumble upon articles comparing one programming language to another, or articles claiming that ‘one language will rule them all’.
Most of these comparisons rely largely on popularity metrics based on searches and hits (for example, the TIOBE index).

The question is not whether the data is accurate, but whether comparisons like these are still relevant to the current landscape of software development.

In the past, languages were developed to replace other languages and make developers' lives easier. Some languages became obsolete, and some were overtaken by others.
However, as the industry continued to evolve, languages started to be associated with certain fields. For example, when it comes to machine learning and deep learning, Python is the first language that comes to mind. As those fields evolved, the languages associated with them evolved too, leading to the development of tools and frameworks. As a result, we now have programming languages with huge ecosystems. These tools and frameworks were not built in a day; it took years of effort, skill, and experience on the very specific problems they had to tackle. As the years pass, these tools mature and evolve.

Even though there are many good options out there, the major factors in adoption are not the language itself but the tools that come with it.
Take, for example, Java and the Java EE ecosystem. Although there is a great number of articles discussing the death of Java, Java continues to be the first choice, especially when it comes to enterprise development. There are certainly languages with better syntax and more convenient tooling, but Java comes with a huge ecosystem.
I believe the discussion should shift from language comparison to ecosystem comparison within each field.

Another factor to consider is the growth of certain industries. Some industries require very specific solutions and tools, which boosts the adoption of those tools and frameworks. It is not that a language simply became more popular among developers; the industry itself just got bigger.

Finally, we should take universities into consideration. Computer science departments play a leading role in shaping the language popularity landscape.
For example, many of the C language hits are due to university assignments. Furthermore, R&D work finds its way into the industry, re-implemented with different tools for environments much different from the original one.

All in all, I believe that language comparison on its own is no longer relevant. The industry has evolved a lot, and the applications we develop today are largely different from the ones we used to develop. In the past we mostly developed console applications, and it made sense to compare syntax and extra features. Nowadays we develop large-scale applications in clustered environments employing various architectures, and we need more than just a language to tackle these problems.


Java on the AWS cloud using Lambda, Api Gateway and CloudFormation

In a previous post we implemented a Java-based AWS Lambda function and deployed it using CloudFormation.

Since we have our Lambda function set up, we will integrate it with an HTTP endpoint using AWS API Gateway.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any Web application

For this example, think of API Gateway as an HTTP connector sitting in front of the function.

We will change our original function in order to implement a division.

package com.gkatzioura.deployment.lambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.math.BigDecimal;
import java.util.Map;
import java.util.logging.Logger;

/**
 * Created by gkatzioura on 9/10/2016.
 */
public class RequestFunctionHandler implements RequestHandler<Map<String,String>,String> {

    private static final String NUMERATOR_KEY = "numerator";
    private static final String DENOMINATOR_KEY = "denominator";

    private static final Logger LOGGER = Logger.getLogger(RequestFunctionHandler.class.getName());

    public String handleRequest(Map<String, String> values, Context context) {

        LOGGER.info("Handling request");

        if (!values.containsKey(NUMERATOR_KEY) || !values.containsKey(DENOMINATOR_KEY)) {
            return "You need both numerator and denominator";
        }

        try {
            BigDecimal numerator = new BigDecimal(values.get(NUMERATOR_KEY));
            BigDecimal denominator = new BigDecimal(values.get(DENOMINATOR_KEY));
            return numerator.divide(denominator).toString();
        } catch (Exception e) {
            return "Please provide valid values";
        }
    }

}
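
Before wiring the function up to API Gateway, it can be handy to sanity check the handler locally. Below is a minimal sketch of such a check; the class is not part of the original project and its name is made up for illustration. Since handleRequest never touches the Context argument, null is passed for it.

package com.gkatzioura.deployment.lambda;

import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical local check for RequestFunctionHandler, assuming it sits in the same package.
 */
public class RequestFunctionHandlerCheck {

    public static void main(String[] args) {

        RequestFunctionHandler handler = new RequestFunctionHandler();

        Map<String, String> values = new HashMap<>();
        values.put("numerator", "1");
        values.put("denominator", "2");

        // Prints 0.5, the same value the deployed function should return
        System.out.println(handler.handleRequest(values, null));
    }
}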

Then we rebuild our Lambda code and upload the new archive to S3.

aws s3 cp build/distributions/JavaLambdaDeployment.zip s3://lambda-functions/JavaLambdaDeployment.zip

The next step is to update our CloudFormation template, adding the API Gateway resources that forward requests to our Lambda function.

First we have to declare our REST API.

    "AGRA16PAA": {
      "Type": "AWS::ApiGateway::RestApi",
      "Properties": {"Name": "CalculationApi"}
    }

Then we need to add a REST resource. The DependsOn element references the logical id of our REST API, so CloudFormation will only create this resource after the REST API has been created.

"AGR2JDQ8": {
      "Type": "AWS::ApiGateway::Resource",
      "Properties": {
        "RestApiId": {"Ref": "AGRA16PAA"},
        "ParentId": {
          "Fn::GetAtt": ["AGRA16PAA","RootResourceId"]
        },
        "PathPart": "divide"
      },
      "DependsOn": [
        "AGRA16PAA"
      ]
    }

Another crucial part is to add a permission so that API Gateway is allowed to invoke our Lambda function.

    "LPI6K5": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:invokeFunction",
        "FunctionName": {"Fn::GetAtt": ["LF9MBL", "Arn"]},
        "Principal": "apigateway.amazonaws.com",
        "SourceArn": {"Fn::Join": ["",
          ["arn:aws:execute-api:", {"Ref": "AWS::Region"}, ":", {"Ref": "AWS::AccountId"}, ":", {"Ref": "AGRA16PAA"}, "/*"]
        ]}
      }
    }

The last step is to add the API Gateway method that invokes our Lambda function from the API Gateway. Furthermore, we will add an API Gateway deployment so that the API becomes callable.

"Deployment": {
      "Type": "AWS::ApiGateway::Deployment",
      "Properties": {
        "RestApiId": { "Ref": "AGRA16PAA" },
        "Description": "First Deployment",
        "StageName": "StagingStage"
      },
      "DependsOn" : ["AGM25KFD"]
    },
    "AGM25KFD": {
      "Type": "AWS::ApiGateway::Method",
      "Properties": {
        "AuthorizationType": "NONE",
        "HttpMethod": "POST",
        "ResourceId": {"Ref": "AGR2JDQ8"},
        "RestApiId": {"Ref": "AGRA16PAA"},
        "Integration": {
          "Type": "AWS",
          "IntegrationHttpMethod": "POST",
          "IntegrationResponses": [{"StatusCode": 200}],
          "Uri": {
            "Fn::Join": [
              "",
              [
                "arn:aws:apigateway:",
                {"Ref": "AWS::Region"},
                ":lambda:path/2015-03-31/functions/",
                {"Fn::GetAtt": ["LF9MBL", "Arn"]},
                "/invocations"
              ]
            ]
          }
        },
        "MethodResponses": [{
          "StatusCode": 200
        }]
      }
    }

So we ended up with our new CloudFormation configuration.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "LF9MBL": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": "lambda-functions",
          "S3Key": "JavaLambdaDeployment.zip"
        },
        "FunctionName": "SimpleRequest",
        "Handler": "com.gkatzioura.deployment.lambda.RequestFunctionHandler",
        "MemorySize": 128,
        "Role": "arn:aws:iam::274402012893:role/lambda_basic_execution",
        "Runtime": "java8"
      }
    },
    "Deployment": {
      "Type": "AWS::ApiGateway::Deployment",
      "Properties": {
        "RestApiId": { "Ref": "AGRA16PAA" },
        "Description": "First Deployment",
        "StageName": "StagingStage"
      },
      "DependsOn" : ["AGM25KFD"]
    },
    "AGM25KFD": {
      "Type": "AWS::ApiGateway::Method",
      "Properties": {
        "AuthorizationType": "NONE",
        "HttpMethod": "POST",
        "ResourceId": {"Ref": "AGR2JDQ8"},
        "RestApiId": {"Ref": "AGRA16PAA"},
        "Integration": {
          "Type": "AWS",
          "IntegrationHttpMethod": "POST",
          "IntegrationResponses": [{"StatusCode": 200}],
          "Uri": {
            "Fn::Join": [
              "",
              [
                "arn:aws:apigateway:",
                {"Ref": "AWS::Region"},
                ":lambda:path/2015-03-31/functions/",
                {"Fn::GetAtt": ["LF9MBL","Arn"]},
                "/invocations"
              ]
            ]
          }
        },
        "MethodResponses": [{"StatusCode": 200}]
      },
      "DependsOn": ["LF9MBL","AGR2JDQ8","LPI6K5"]
    },
    "AGR2JDQ8": {
      "Type": "AWS::ApiGateway::Resource",
      "Properties": {
        "RestApiId": {"Ref": "AGRA16PAA"},
        "ParentId": {
          "Fn::GetAtt": ["AGRA16PAA","RootResourceId"]
        },
        "PathPart": "divide"
      },
      "DependsOn": ["AGRA16PAA"]
    },
    "AGRA16PAA": {
      "Type": "AWS::ApiGateway::RestApi",
      "Properties": {
        "Name": "CalculationApi"
      }
    },
    "LPI6K5": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:invokeFunction",
        "FunctionName": {"Fn::GetAtt": ["LF9MBL", "Arn"]},
        "Principal": "apigateway.amazonaws.com",
        "SourceArn": {"Fn::Join": ["",
          ["arn:aws:execute-api:", {"Ref": "AWS::Region"}, ":", {"Ref": "AWS::AccountId"}, ":", {"Ref": "AGRA16PAA"}, "/*"]
        ]}
      }
    }
 }
}

Last but not least, we have to update our previous CloudFormation stack.

So we upload our latest template

aws s3 cp cloudformationjavalambda2.template s3://cloudformation-templates/cloudformationjavalambda2.template
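
Optionally, before touching the stack, we can ask CloudFormation to validate the template we just uploaded; this is just an extra safety step, pointing at the same bucket and key as above.

aws cloudformation validate-template --template-url https://s3.amazonaws.com/cloudformation-templates/cloudformationjavalambda2.template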

And all we have to do is to update our stack.

aws cloudformation update-stack --stack-name JavaLambdaStack --template-url https://s3.amazonaws.com/cloudformation-templates/cloudformationjavalambda2.template

Our stack has just been updated.
We can go to our API Gateway endpoint and try to issue a POST.
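
In case the invoke URL is not at hand, the id of the REST API can be listed through the CLI, and the endpoint then follows the standard execute-api form for the region and the StagingStage stage defined above.

aws apigateway get-rest-apis

The invoke URL will look like https://{rest-api-id}.execute-api.{region}.amazonaws.com/StagingStage/divide.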

curl -H "Content-Type: application/json" -X POST -d '{"numerator":"1","denominator":"2"}' https://{your api gateway endpoint}/StagingStage/divide
"0.5"

You can find the source code on GitHub.

Java on the AWS cloud using Lambda

Amazon Web Services gets more popular by the day. Java is a first-class citizen on AWS and it is pretty easy to get started.
Deploying your application is a bit different from what you may be used to, but still easy and convenient.

AWS Lambda is a compute service where you can upload your code to AWS Lambda and the service can run the code on your behalf using AWS infrastructure. After you upload your code and create what we call a Lambda function, AWS Lambda takes care of provisioning and managing the servers that you use to run the code.

Think of a Lambda as a task that needs up to five minutes to finish. For simple actions or jobs that are not time consuming and don't require a huge framework, AWS Lambda is the way to go. It is also great for horizontal scaling.

The most stripped-down example is a Lambda function that simply responds to a request.

We shall implement the RequestHandler interface.

package com.gkatzioura.deployment.lambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;
import java.util.logging.Logger;

/**
 * Created by gkatzioura on 9/10/2016.
 */
public class RequestFunctionHandler implements RequestHandler<Map<String,String>,String> {

    private static final Logger LOGGER = Logger.getLogger(RequestFunctionHandler.class.getName());

    public String handleRequest(Map<String, String> values, Context context) {

        LOGGER.info("Handling request");

        return "You invoked a lambda function";
    }

}

You can think of the RequestHandler as something similar to a controller.

To proceed we have to package our compiled code along with its dependencies into a zip file, therefore we will add a custom Gradle task

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile (
            'com.amazonaws:aws-lambda-java-core:1.1.0',
            'com.amazonaws:aws-lambda-java-events:1.1.0'
    )
}

task buildZip(type: Zip) {
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtime
    }
}

build.dependsOn buildZip

Then we should build

gradle build

Now we have to upload our deployment package to S3 so that the Lambda function can use it.

I keep an S3 bucket on Amazon just for Lambda functions. Suppose that our bucket is called lambda-functions (I am pretty sure that name is already taken).
We will use the AWS CLI wherever possible.
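
If such a bucket does not exist yet, it can be created up front through the CLI; this is an optional step, and you should substitute a bucket name of your own.

aws s3 mb s3://lambda-functions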

aws s3 cp build/distributions/JavaLambdaDeployment.zip s3://lambda-functions/JavaLambdaDeployment.zip

Now, instead of creating the Lambda function manually, we are going to do so by creating a CloudFormation template.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "LF9MBL": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": "lambda-functions",
          "S3Key": "JavaLambdaDeployment.zip"
        },
        "FunctionName": "SimpleRequest",
        "Handler": "com.gkatzioura.deployment.lambda.RequestFunctionHandler",
        "MemorySize": 128,
        "Role": "arn:aws:iam::274402012893:role/lambda_basic_execution",
        "Runtime": "java8"
      },
      "Metadata": {
        "AWS::CloudFormation::Designer": {
          "id": "66b2b325-f19a-4d7d-a7a9-943dd8cd4a5c"
        }
      }
    }
  }
}

The next step is to upload our CloudFormation template to an S3 bucket. Personally I use a separate bucket for my templates. Suppose that our bucket is called cloudformation-templates

aws s3 cp cloudformationjavalambda.template s3://cloudformation-templates/cloudformationjavalambda.template

The next step is to create our CloudFormation stack using the template specified

aws cloudformation create-stack --stack-name JavaLambdaStack --template-url https://s3.amazonaws.com/cloudformation-templates/cloudformationjavalambda.template
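
While the stack is being created, its status can be watched through the CLI until it reaches CREATE_COMPLETE; this is an optional check using the stack name we just specified.

aws cloudformation describe-stacks --stack-name JavaLambdaStack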

In order to check, we shall invoke the Lambda function through the AWS CLI

aws lambda invoke --invocation-type RequestResponse --function-name SimpleRequest --region eu-west-1 --log-type Tail --payload '{}' outputfile.txt

And the result is the expected one

"You invoked a lambda function"

You can find the source code on GitHub.