Spring Boot and Cache Abstraction with Hazelcast

Previously we got started with the Spring cache abstraction, using the default cache manager that Spring provides.

Although this approach might suit our needs for simple applications, more complex problems call for tools with more capabilities. Hazelcast is one of them: it is hands down a great caching tool for JVM-based applications. By using Hazelcast as a cache, data is evenly distributed among the nodes of a cluster, allowing the available storage to scale horizontally.

We will run our codebase using Spring profiles; ‘hazelcast-cache’ will be our profile name.

group 'com.gkatzioura'
version '1.0-SNAPSHOT'


buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.4.2.RELEASE")
    }
}

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'

repositories {
    mavenCentral()
}


sourceCompatibility = 1.8
targetCompatibility = 1.8

dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    compile("org.springframework.boot:spring-boot-starter-cache")
    compile("org.springframework.boot:spring-boot-starter")
    compile("com.hazelcast:hazelcast:3.7.4")
    compile("com.hazelcast:hazelcast-spring:3.7.4")

    testCompile("junit:junit")
}

bootRun {
    systemProperty "spring.profiles.active", "hazelcast-cache"
}

As you can see, we updated the gradle file from the previous example and added two extra dependencies, hazelcast and hazelcast-spring. We also changed the profile that our application runs with by default.

Our next step is to configure the Hazelcast cache manager.

package com.gkatzioura.caching.config;

import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

/**
 * Created by gkatzioura on 1/10/17.
 */
@Configuration
@Profile("hazelcast-cache")
public class HazelcastCacheConfig {

    @Bean
    public Config hazelCastConfig() {

        Config config = new Config();
        config.setInstanceName("hazelcast-cache");

        MapConfig allUsersCache = new MapConfig();
        allUsersCache.setTimeToLiveSeconds(20);
        allUsersCache.setEvictionPolicy(EvictionPolicy.LFU);
        config.getMapConfigs().put("alluserscache",allUsersCache);

        MapConfig usercache = new MapConfig();
        usercache.setTimeToLiveSeconds(20);
        usercache.setEvictionPolicy(EvictionPolicy.LFU);
        config.getMapConfigs().put("usercache",usercache);

        return config;
    }

}

We just created two maps with a TTL policy of 20 seconds; entries will therefore be evicted 20 seconds after they were added to a map. For more Hazelcast configuration options please refer to the official Hazelcast documentation.
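
With the dependencies above in place, Spring Boot detects this Config bean, creates a HazelcastInstance from it and backs the cache abstraction with it, so no further wiring is needed. Should you want to declare the cache manager explicitly, a minimal sketch (equivalent to what the auto-configuration provides) would be:

@Bean
public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
    // HazelcastCacheManager, shipped with hazelcast-spring, adapts the
    // running HazelcastInstance to Spring's CacheManager abstraction.
    return new com.hazelcast.spring.cache.HazelcastCacheManager(hazelcastInstance);
}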

Another change we have to make is turning UserPayload into a serializable Java object, since objects stored in Hazelcast must implement Serializable.

package com.gkatzioura.caching.model;

import java.io.Serializable;

/**
 * Created by gkatzioura on 1/5/17.
 */
public class UserPayload implements Serializable {

    private String userName;
    private String firstName;
    private String lastName;

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}

Last but not least, we add another repository bound to the hazelcast-cache profile; a sketch of it follows.
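
The actual implementation is in the source code on GitHub; as a minimal sketch, it could mirror the UserRepositoryLocal from the previous post, differing only in the active profile (UserRepositoryHazelcast is a hypothetical name):

package com.gkatzioura.caching.repository;

import com.gkatzioura.caching.model.UserPayload;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Repository;

import java.util.List;

@Repository
@Profile("hazelcast-cache")
public class UserRepositoryHazelcast implements UserRepository {

    @Autowired
    private List<UserPayload> payloadUsers;

    @Override
    @Cacheable("alluserscache")
    public List<UserPayload> fetchAllUsers() {
        return payloadUsers;
    }

    @Override
    @Cacheable(cacheNames = "usercache", key = "#root.methodName")
    public UserPayload firstUser() {
        return payloadUsers.get(0);
    }

    @Override
    @Cacheable(cacheNames = "usercache", key = "{#firstName,#lastName}")
    public UserPayload userByFirstNameAndLastName(String firstName, String lastName) {
        // The cached values now live in the Hazelcast maps configured above.
        return payloadUsers.stream()
                .filter(p -> p.getFirstName().equals(firstName)
                        && p.getLastName().equals(lastName))
                .findFirst()
                .orElse(null);
    }
}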

The result is our previous Spring Boot application integrated with Hazelcast instead of the default cache, configured with a TTL policy.
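
A quick way to verify the integration (assuming the application runs locally on the default port) is to hit an endpoint twice within the 20-second TTL window; the second response should be served from the cache without the repository method executing:

curl http://localhost:8080/users/all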

You can find the source code on GitHub.

Spring Boot and Cache Abstraction

Caching is a major ingredient of most applications, and as long as we try to avoid expensive operations such as disk access, it is here to stay.
Spring has great support for caching with a wide range of configurations: you can start as simple as you want and progress to something much more customizable.

This post is an example of the simplest form of caching that Spring provides.
Spring comes by default with an in-memory cache which is pretty easy to set up.
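
Under the hood the default provider backs each cache with a ConcurrentHashMap. A minimal sketch of the equivalent cache manager declaration follows, although the auto-configuration already registers it for you:

@Bean
public CacheManager cacheManager() {
    // Lazily creates a ConcurrentHashMap-backed cache for every cache name
    // requested at runtime (e.g. "alluserscache", "usercache").
    return new org.springframework.cache.concurrent.ConcurrentMapCacheManager();
}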

Let us start with our gradle file.

group 'com.gkatzioura'
version '1.0-SNAPSHOT'


buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.4.2.RELEASE")
    }
}

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'

repositories {
    mavenCentral()
}


sourceCompatibility = 1.8
targetCompatibility = 1.8

dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    compile("org.springframework.boot:spring-boot-starter-cache")
    compile("org.springframework.boot:spring-boot-starter")
    testCompile("junit:junit")
}

bootRun {
    systemProperty "spring.profiles.active", "simple-cache"
}

Since the same project will be used for different cache providers, there are going to be multiple Spring profiles. The profile for this tutorial is simple-cache, since we are going to use the ConcurrentMap-based cache, which happens to be the default.

We will implement an application which fetches user information from our local file system.
The information resides in the users.json file.

[
  {"userName":"user1","firstName":"User1","lastName":"First"},
  {"userName":"user2","firstName":"User2","lastName":"Second"},
  {"userName":"user3","firstName":"User3","lastName":"Third"},
  {"userName":"user4","firstName":"User4","lastName":"Fourth"}
]

Also we will specify a simple model for the data to be retrieved.

package com.gkatzioura.caching.model;

/**
 * Created by gkatzioura on 1/5/17.
 */
public class UserPayload {

    private String userName;
    private String firstName;
    private String lastName;

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}

Then we will add a bean that will read the information.

package com.gkatzioura.caching.config;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.gkatzioura.caching.model.UserPayload;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.io.Resource;

import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/**
 * Created by gkatzioura on 1/5/17.
 */
@Configuration
@Profile("simple-cache")
public class SimpleDataConfig {

    @Autowired
    private ObjectMapper objectMapper;

    @Value("classpath:/users.json")
    private Resource usersJsonResource;

    @Bean
    public List<UserPayload> payloadUsers() throws IOException {

        try(InputStream inputStream = usersJsonResource.getInputStream()) {

            UserPayload[] payloadUsers = objectMapper.readValue(inputStream,UserPayload[].class);
            return Collections.unmodifiableList(Arrays.asList(payloadUsers));
        }
    }
}

In order to access the information we will use the instantiated bean containing all the user information.

Next step will be to create a repository interface to specify the methods that will be used.

package com.gkatzioura.caching.repository;

import com.gkatzioura.caching.model.UserPayload;

import java.util.List;

/**
 * Created by gkatzioura on 1/6/17.
 */
public interface UserRepository {

    List<UserPayload> fetchAllUsers();

    UserPayload firstUser();

    UserPayload userByFirstNameAndLastName(String firstName,String lastName);

}

Now let’s dive into the implementation which will contain the cache annotations needed.

package com.gkatzioura.caching.repository;

import com.gkatzioura.caching.model.UserPayload;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Repository;

import java.util.List;
import java.util.Optional;

/**
 * Created by gkatzioura on 12/30/16.
 */
@Repository
@Profile("simple-cache")
public class UserRepositoryLocal implements UserRepository {

    @Autowired
    private List<UserPayload> payloadUsers;

    private static final Logger LOGGER = LoggerFactory.getLogger(UserRepositoryLocal.class);

    @Override
    @Cacheable("alluserscache")
    public List<UserPayload> fetchAllUsers() {

        LOGGER.info("Fetching all users");

        return payloadUsers;
    }

    @Override
    @Cacheable(cacheNames = "usercache",key = "#root.methodName")
    public UserPayload firstUser() {

        LOGGER.info("fetching firstUser");

        return payloadUsers.get(0);
    }

    @Override
    @Cacheable(cacheNames = "usercache",key = "{#firstName,#lastName}")
    public UserPayload userByFirstNameAndLastName(String firstName,String lastName) {

        LOGGER.info("fetching user by firstname and lastname");

        Optional<UserPayload> user = payloadUsers.stream()
                .filter(p -> p.getFirstName().equals(firstName)
                        && p.getLastName().equals(lastName))
                .findFirst();

        return user.orElse(null);
    }

}

Methods annotated with @Cacheable trigger cache population, in contrast to methods annotated with @CacheEvict, which trigger cache eviction.
@Cacheable lets us do more than just specify the cache where values will be stored: we can also specify keys based on the method name or the method arguments, thus achieving method caching.
For example the method firstUser uses the method name as its key, whilst the method userByFirstNameAndLastName builds its key from the method arguments.

Two methods annotated with @CacheEvict will empty the specified caches.

LocalCacheEvict will be the component that handles the eviction.

package com.gkatzioura.caching.repository;

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

/**
 * Created by gkatzioura on 1/7/17.
 */
@Component
@Profile("simple-cache")
public class LocalCacheEvict {

    @CacheEvict(cacheNames = "alluserscache",allEntries = true)
    public void evictAllUsersCache() {

    }

    @CacheEvict(cacheNames = "usercache",allEntries = true)
    public void evictUserCache() {

    }

}

Since we use a very simple form of cache, TTL eviction is not supported. Therefore we will add a scheduler, just for this particular case, which will evict the caches after a certain period of time.

package com.gkatzioura.caching.scheduler;

import com.gkatzioura.caching.repository.LocalCacheEvict;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Profile;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

/**
 * Created by gkatzioura on 1/7/17.
 */
@Component
@Profile("simple-cache")
public class EvictScheduler {

    @Autowired
    private LocalCacheEvict localCacheEvict;

    private static final Logger LOGGER = LoggerFactory.getLogger(EvictScheduler.class);

    @Scheduled(fixedDelay=10000)
    public void clearCaches() {

        LOGGER.info("Invalidating caches");

        localCacheEvict.evictUserCache();
        localCacheEvict.evictAllUsersCache();
    }


}

To wrap up, we will use a controller to call the methods specified.

package com.gkatzioura.caching.controller;

import com.gkatzioura.caching.model.UserPayload;
import com.gkatzioura.caching.repository.UserRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

/**
 * Created by gkatzioura on 12/30/16.
 */
@RestController
public class UsersController {

    @Autowired
    private UserRepository userRepository;

    @RequestMapping(path = "/users/all",method = RequestMethod.GET)
    public List<UserPayload> fetchUsers() {

        return userRepository.fetchAllUsers();
    }

    @RequestMapping(path = "/users/first",method = RequestMethod.GET)
    public UserPayload fetchFirst() {
        return userRepository.firstUser();
    }

    @RequestMapping(path = "/users/",method = RequestMethod.GET)
    public UserPayload findByFirstNameLastName(String firstName,String lastName ) {

        return userRepository.userByFirstNameAndLastName(firstName,lastName);
    }

}

Last but not least, our Application class should contain two extra annotations: @EnableScheduling in order to enable schedulers and @EnableCaching in order to enable caching.

package com.gkatzioura.caching;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.scheduling.annotation.EnableScheduling;

/**
 * Created by gkatzioura on 12/30/16.
 */
@SpringBootApplication
@EnableScheduling
@EnableCaching
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class,args);
    }

}

You can find the source code on GitHub.

Integrate Spring Boot and EC2 using CloudFormation

On a previous post we integrated a Spring Boot application with Elastic Beanstalk.
The application was a servlet-based application responding to requests.

On this tutorial we are going to deploy a Spring Boot application which executes some scheduled tasks on an EC2 instance.
The application will be pretty much the same application taken from the official Spring guide, with some minor differences in the packages.

The name of our application will be ec2-deployment.

rootProject.name = 'ec2-deployment'

Then we will schedule a task to our spring boot application.

package com.gkatzioura.deployment.task;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

/**
 * Created by gkatzioura on 12/16/16.
 */
@Component
public class SimpleTask {

    private static final Logger LOGGER = LoggerFactory.getLogger(SimpleTask.class);

    @Scheduled(fixedRate = 5000)
    public void reportCurrentTime() {
        LOGGER.info("This is a simple task on ec2");
    }

}


Next step is to build the application and upload it to our s3 bucket.

gradle build
aws s3 cp build/libs/ec2-deployment-1.0-SNAPSHOT.jar s3://{your bucket name}/ec2-deployment-1.0-SNAPSHOT.jar 

What comes next is a bootstrapping script in order to run our application once the server is up and running.

#!/usr/bin/env bash
aws s3 cp s3://{bucket with code}/ec2-deployment-1.0-SNAPSHOT.jar /home/ec2-user/ec2-deployment-1.0-SNAPSHOT.jar
sudo yum -y install java-1.8.0
sudo yum -y remove java-1.7.0-openjdk
cd /home/ec2-user/
sudo nohup java -jar ec2-deployment-1.0-SNAPSHOT.jar > ec2dep.log

This script is pretty much self-explanatory: we download the application from the bucket we previously uploaded it to, we install the Java version needed and then we run the application (the script serves example purposes; there are certainly many ways to set up your Java application to run on Linux).

Next step is to proceed to our CloudFormation script. Since we will download our application from s3, it is essential to have an IAM policy that allows us to download items from the s3 bucket we used previously. Therefore we will create a role with the needed policy.

"RootRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version" : "2012-10-17",
          "Statement": [ {
            "Effect": "Allow",
            "Principal": {
              "Service": [ "ec2.amazonaws.com" ]
            },
            "Action": [ "sts:AssumeRole" ]
          } ]
        },
        "Path": "/",
        "Policies": [ {
          "PolicyName": "root",
          "PolicyDocument": {
            "Version" : "2012-10-17",
            "Statement": [ {
              "Effect": "Allow",
              "Action": [
                "s3:Get*",
                "s3:List*"
              ],
              "Resource": {"Fn::Join" : [ "", [ "arn:aws:s3:::", {"Ref":"SourceCodeBucket"},"/*"] ] }
            } ]
          }
        } ]
      }
    }

Next step is to encode our bootstrapping script to Base64 in order to pass it as user data.
Once the EC2 instance is up and running, it will execute the shell commands previously specified.
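
A small helper can produce the encoded string; a minimal sketch, where bootstrap.sh is the assumed file name of the script above:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

// Prints the Base64-encoded bootstrap script so that it can be pasted into
// the UserData property of the CloudFormation template.
public class UserDataEncoder {

    public static void main(String[] args) throws Exception {
        byte[] script = Files.readAllBytes(Paths.get("bootstrap.sh"));
        System.out.println(Base64.getEncoder().encodeToString(script));
    }
}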

Last step is to create our instance profile and specify the ec2 instance to be launched

    "RootInstanceProfile": {
      "Type": "AWS::IAM::InstanceProfile",
      "Properties": {
        "Path": "/",
        "Roles": [ {
          "Ref": "RootRole"
        } ]
      }
    },
    "Ec2Instance":{
      "Type":"AWS::EC2::Instance",
      "Properties":{
        "ImageId":"ami-9398d3e0",
        "InstanceType":"t2.nano",
        "KeyName":"TestKey",
        "IamInstanceProfile": {"Ref":"RootInstanceProfile"},
"UserData":"IyEvdXNyL2Jpbi9lbnYgYmFzaA0KYXdzIHMzIGNwIHMzOi8ve2J1Y2tldCB3aXRoIGNvZGV9L2VjMi1kZXBsb3ltZW50LTEuMC1TTkFQU0hPVC5qYXIgL2hvbWUvZWMyLXVzZXIvZWMyLWRlcGxveW1lbnQtMS4wLVNOQVBTSE9ULmphcg0Kc3VkbyB5dW0gLXkgaW5zdGFsbCBqYXZhLTEuOC4wDQpzdWRvIHl1bSAteSByZW1vdmUgamF2YS0xLjcuMC1vcGVuamRrDQpjZCAvaG9tZS9lYzItdXNlci8NCnN1ZG8gbm9odXAgamF2YSAtamFyIGVjMi1kZXBsb3ltZW50LTEuMC1TTkFQU0hPVC5qYXIgPiBlYzJkZXAubG9n"
      }
    }

KeyName stands for the ssh key name, in case you want to log in to the EC2 instance.
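
For instance, once the stack is up you can check that the application is running; a hypothetical session, so substitute your own key file and the instance's public DNS:

ssh -i TestKey.pem ec2-user@{instance public dns}
tail -f /home/ec2-user/ec2dep.log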

So we are good to go and create our CloudFormation stack. You have to add the CAPABILITY_IAM flag.

aws s3 cp ec2spring.template s3://{bucket with templates}/ec2spring.template
aws cloudformation create-stack --stack-name SpringEc2 --parameters ParameterKey=SourceCodeBucket,ParameterValue={bucket with code} --template-url https://s3.amazonaws.com/{bucket with templates}/ec2spring.template --capabilities CAPABILITY_IAM

That’s it. Now you have your Spring application up and running on top of an EC2 instance.
You can download the source code from GitHub.

Integrate Spring Boot and Elastic Beanstalk using CloudFormation

AWS Elastic Beanstalk is an Amazon Web Service that does most of the configuration for you and creates an infrastructure suitable for a horizontally scalable application. The alternative to Beanstalk would be to configure load balancers and auto scaling groups yourself, which requires a bit of AWS expertise and time.

On this tutorial we are going to deploy a Spring Boot jar application using Amazon Elastic Beanstalk and a CloudFormation template.

Less is more, therefore we are going to use pretty much the same Spring Boot application taken from the official Spring guide as a template.

The only changes are altering rootProject.name to beanstalk-deployment and some adjustments to the package structure. Downloading the project from GitHub is sufficient.

Then we can build and run the project

gradlew build
java -jar build/libs/beanstalk-deployment-1.0-SNAPSHOT.jar 

Next step is to upload the application to s3.

aws s3 cp build/libs/beanstalk-deployment-1.0-SNAPSHOT.jar s3://{your bucket name}/beanstalk-deployment-1.0-SNAPSHOT.jar

You need to install the Elastic Beanstalk client, since it helps a lot with most Beanstalk operations.

Since we will use Java 8, we will get a list of the available Elastic Beanstalk solution stacks in order to retrieve the correct SolutionStackName.

aws elasticbeanstalk list-available-solution-stacks |grep Java 

Based on the results, I will use the “64bit Amazon Linux 2016.09 v2.3.0 running Java 8” stack name.

Now we are ready to proceed to our CloudFormation script.

We will specify a parameter, which will be the bucket containing the application code.

  "Parameters" : {
    "SourceCodeBucket" : {
      "Type" : "String"
    }
  }

Then we will define the application.

    "SpringBootApplication": {
      "Type": "AWS::ElasticBeanstalk::Application",
      "Properties": {
        "Description":"Spring boot and elastic beanstalk"
      }
    }

Next step will be to specify the application version

    "SpringBootApplicationVersion": {
      "Type": "AWS::ElasticBeanstalk::ApplicationVersion",
      "Properties": {
        "ApplicationName":{"Ref":"SpringBootApplication"},
        "SourceBundle": {
                  "S3Bucket": {"Ref":"SourceCodeBucket"},
                  "S3Key": "beanstalk-deployment-1.0-SNAPSHOT.jar"
        }
      }
    }

And then we specify our configuration template.

    "SpringBootBeanStalkConfigurationTemplate": {
      "Type": "AWS::ElasticBeanstalk::ConfigurationTemplate",
      "Properties": {
        "ApplicationName": {"Ref":"SpringBootApplication"},
        "Description":"A display of speed boot application",
        "OptionSettings": [
          {
            "Namespace": "aws:autoscaling:asg",
            "OptionName": "MinSize",
            "Value": "2"
          },
          {
            "Namespace": "aws:autoscaling:asg",
            "OptionName": "MaxSize",
            "Value": "2"
          },
          {
            "Namespace": "aws:elasticbeanstalk:environment",
            "OptionName": "EnvironmentType",
            "Value": "LoadBalanced"
          }
        ],
        "SolutionStackName": "64bit Amazon Linux 2016.09 v2.3.0 running Java 8"
      }
    }

The last step is to glue the above properties together by defining an environment.

    "SpringBootBeanstalkEnvironment": {
      "Type": "AWS::ElasticBeanstalk::Environment",
      "Properties": {
        "ApplicationName": {"Ref":"SpringBootApplication"},
        "EnvironmentName":"JavaBeanstalkEnvironment",
        "TemplateName": {"Ref":"SpringBootBeanStalkConfigurationTemplate"},
        "VersionLabel": {"Ref": "SpringBootApplicationVersion"}
      }
    }

Now you are ready to upload your CloudFormation template and deploy your Beanstalk application.

aws s3 cp beanstalkspring.template s3://{bucket with templates}/beanstalkspring.template
aws cloudformation create-stack --stack-name SpringBeanStalk --parameters ParameterKey=SourceCodeBucket,ParameterValue={bucket with code} --template-url https://s3.amazonaws.com/{bucket with templates}/beanstalkspring.template

You can download the full source code and the CloudFormation template from GitHub.

Embed Jython in your Java codebase

Jython is a great tool for quick scripts on the JVM, written in a pretty solid syntax. It actually works wonderfully when it comes to implementing maintenance or monitoring scripts with JMX for your Java apps.

In case you work with other teams with a Python background, it makes absolute sense to integrate Python into your Java applications.

First let’s import the Jython interpreter, using the standalone version.

group 'com.gkatzioura'
version '1.0-SNAPSHOT'

apply plugin: 'java'

sourceCompatibility = 1.5

repositories {
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
    compile group: 'org.python', name: 'jython-standalone', version: '2.7.0'
}

The easiest thing to do is to execute a Python file residing on our classpath. The file would be hello_world.py

print "Hello World"

And then pass the file as an InputStream to the interpreter.

package com.gkatzioura;

import org.python.util.PythonInterpreter;

import java.io.InputStream;

/**
 * Created by gkatzioura on 19/10/2016.
 */
public class JythonCaller {

    private PythonInterpreter pythonInterpreter;

    public JythonCaller() {
        pythonInterpreter = new PythonInterpreter();
    }

    public void invokeScript(InputStream inputStream) {

        pythonInterpreter.execfile(inputStream);
    }

}

And a test to verify the invocation:

    @Test
    public void testInvokeScript() {

        InputStream inputStream = this.getClass().getClassLoader().getResourceAsStream("hello_world.py");
        jythonCaller.invokeScript(inputStream);
    }

Next step is to create a Python class file, and another Python file that will import the class and instantiate it.

The class file would be divider.py.

class Divider:

    def divide(self,numerator,denominator):

        return numerator/denominator;

And the file importing the Divider class would be classcaller.py

from divider import Divider

divider = Divider()

print divider.divide(10,5);

So let us test it

    @Test
    public void testInvokeClassCaller() {

        InputStream inputStream = this.getClass().getClassLoader().getResourceAsStream("classcaller.py");
        jythonCaller.invokeScript(inputStream);
    }

What we can see from this example is that the interpreter successfully imports files from the classpath.

Running files using the interpreter is fine, however we also need to fully utilize classes and functions implemented in Python.
Therefore the next step is to create a Python class and use its functions from Java.

package com.gkatzioura;

import org.python.core.PyClass;
import org.python.core.PyInteger;
import org.python.core.PyObject;
import org.python.util.PythonInterpreter;

/**
 * Created by gkatzioura on 19/10/2016.
 */
public class JythonCaller {

    private PythonInterpreter pythonInterpreter;

    public JythonCaller() {
        pythonInterpreter = new PythonInterpreter();
    }

    public void invokeClass() {
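        // Import the module, look up the class object, instantiate it, and
        // call its divide function with Jython's PyInteger wrappers.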

        pythonInterpreter.exec("from divider import Divider");
        PyClass dividerDef = (PyClass) pythonInterpreter.get("Divider");
        PyObject divider = dividerDef.__call__();
        PyObject pyObject = divider.invoke("divide",new PyInteger(20),new PyInteger(4));

        System.out.println(pyObject.toString());
    }

}
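
As before, a small test exercises the class invocation; a sketch, assuming the same jythonCaller test fixture as above:

    @Test
    public void testInvokeClass() {

        // Should print 5, the result of 20 / 4 computed in Python.
        jythonCaller.invokeClass();
    }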

You can find the source code on GitHub.

Java on the AWS cloud using Lambda, API Gateway and CloudFormation

On a previous post we implemented a Java-based AWS Lambda function and deployed it using CloudFormation.

Since we have our lambda function set up, we will integrate it with an HTTP endpoint using AWS API Gateway.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any Web application

For this example, imagine API Gateway as if it were an HTTP connector.

We will change our original function in order to implement a division.

package com.gkatzioura.deployment.lambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.math.BigDecimal;
import java.util.Map;
import java.util.logging.Logger;

/**
 * Created by gkatzioura on 9/10/2016.
 */
public class RequestFunctionHandler implements RequestHandler<Map<String,String>,String> {

    private static final String NUMERATOR_KEY = "numerator";
    private static final String DENOMINATOR_KEY = "denominator";

    private static final Logger LOGGER = Logger.getLogger(RequestFunctionHandler.class.getName());

    public String handleRequest(Map <String,String> values, Context context) {

        LOGGER.info("Handling request");

        if(!values.containsKey(NUMERATOR_KEY)||!values.containsKey(DENOMINATOR_KEY)) {
            return "You need both numberator and denominator";
        }

        try {
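            // Note: BigDecimal.divide throws an ArithmeticException when the
            // result is non-terminating (e.g. 1/3); the catch below turns
            // that, like any malformed input, into an error message.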
            BigDecimal numerator = new BigDecimal(values.get(NUMERATOR_KEY));
            BigDecimal denominator= new BigDecimal(values.get(DENOMINATOR_KEY));
            return  numerator.divide(denominator).toString();
        } catch (Exception e) {
            return "Please provide valid values";
        }
    }

}

Then we will rebuild our lambda code and upload it to s3 again.

aws s3 cp build/distributions/JavaLambdaDeployment.zip s3://lambda-functions/JavaLambdaDeployment.zip

Next step is to update our CloudFormation template and add the API Gateway resources that forward requests to our lambda function.

First we have to declare our REST API.

    "AGRA16PAA": {
      "Type": "AWS::ApiGateway::RestApi",
      "Properties": {"Name": "CalculationApi"}
    }

Then we need to add a REST resource. Inside the DependsOn element we reference the logical id of our REST API; therefore CloudFormation will create this resource after the REST API has been created.

"AGR2JDQ8": {
      "Type": "AWS::ApiGateway::Resource",
      "Properties": {
        "RestApiId": {"Ref": "AGRA16PAA"},
        "ParentId": {
          "Fn::GetAtt": ["AGRA16PAA","RootResourceId"]
        },
        "PathPart": "divide"
      },
      "DependsOn": [
        "AGRA16PAA"
      ]
    }

Another crucial part is to add a permission so that API Gateway is able to invoke our lambda function.

    "LPI6K5": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:invokeFunction",
        "FunctionName": {"Fn::GetAtt": ["LF9MBL", "Arn"]},
        "Principal": "apigateway.amazonaws.com",
        "SourceArn": {"Fn::Join": ["",
          ["arn:aws:execute-api:", {"Ref": "AWS::Region"}, ":", {"Ref": "AWS::AccountId"}, ":", {"Ref": "AGRA16PAA"}, "/*"]
        ]}
      }
    }

The last step is to add the API Gateway method, so that our lambda function can be invoked through the API Gateway. Furthermore, we will add an API Gateway deployment instruction.

"Deployment": {
      "Type": "AWS::ApiGateway::Deployment",
      "Properties": {
        "RestApiId": { "Ref": "AGRA16PAA" },
        "Description": "First Deployment",
        "StageName": "StagingStage"
      },
      "DependsOn" : ["AGM25KFD"]
    },
    "AGM25KFD": {
      "Type": "AWS::ApiGateway::Method",
      "Properties": {
        "AuthorizationType": "NONE",
        "HttpMethod": "POST",
        "ResourceId": {"Ref": "AGR2JDQ8"},
        "RestApiId": {"Ref": "AGRA16PAA"},
        "Integration": {
          "Type": "AWS",
          "IntegrationHttpMethod": "POST",
          "IntegrationResponses": [{"StatusCode": 200}],
          "Uri": {
            "Fn::Join": [
              "",
              [
                "arn:aws:apigateway:",
                {"Ref": "AWS::Region"},
                ":lambda:path/2015-03-31/functions/",
                {"Fn::GetAtt": ["LF9MBL", "Arn"]},
                "/invocations"
              ]
            ]
          }
        },
        "MethodResponses": [{
          "StatusCode": 200
        }]
      }

So we ended up with our new CloudFormation template.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "LF9MBL": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": "lambda-functions",
          "S3Key": "JavaLambdaDeployment.zip"
        },
        "FunctionName": "SimpleRequest",
        "Handler": "com.gkatzioura.deployment.lambda.RequestFunctionHandler",
        "MemorySize": 128,
        "Role": "arn:aws:iam::274402012893:role/lambda_basic_execution",
        "Runtime": "java8"
      }
    },
    "Deployment": {
      "Type": "AWS::ApiGateway::Deployment",
      "Properties": {
        "RestApiId": { "Ref": "AGRA16PAA" },
        "Description": "First Deployment",
        "StageName": "StagingStage"
      },
      "DependsOn" : ["AGM25KFD"]
    },
    "AGM25KFD": {
      "Type": "AWS::ApiGateway::Method",
      "Properties": {
        "AuthorizationType": "NONE",
        "HttpMethod": "POST",
        "ResourceId": {"Ref": "AGR2JDQ8"},
        "RestApiId": {"Ref": "AGRA16PAA"},
        "Integration": {
          "Type": "AWS",
          "IntegrationHttpMethod": "POST",
          "IntegrationResponses": [{"StatusCode": 200}],
          "Uri": {
            "Fn::Join": [
              "",
              [
                "arn:aws:apigateway:",
                {"Ref": "AWS::Region"},
                ":lambda:path/2015-03-31/functions/",
                {"Fn::GetAtt": ["LF9MBL","Arn"]},
                "/invocations"
              ]
            ]
          }
        },
        "MethodResponses": [{"StatusCode": 200}]
      },
      "DependsOn": ["LF9MBL","AGR2JDQ8","LPI6K5"]
    },
    "AGR2JDQ8": {
      "Type": "AWS::ApiGateway::Resource",
      "Properties": {
        "RestApiId": {"Ref": "AGRA16PAA"},
        "ParentId": {
          "Fn::GetAtt": ["AGRA16PAA","RootResourceId"]
        },
        "PathPart": "divide"
      },
      "DependsOn": ["AGRA16PAA"]
    },
    "AGRA16PAA": {
      "Type": "AWS::ApiGateway::RestApi",
      "Properties": {
        "Name": "CalculationApi"
      }
    },
    "LPI6K5": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:invokeFunction",
        "FunctionName": {"Fn::GetAtt": ["LF9MBL", "Arn"]},
        "Principal": "apigateway.amazonaws.com",
        "SourceArn": {"Fn::Join": ["",
          ["arn:aws:execute-api:", {"Ref": "AWS::Region"}, ":", {"Ref": "AWS::AccountId"}, ":", {"Ref": "AGRA16PAA"}, "/*"]
        ]}
      }
    }
  }
}

Last but not least, we have to update our previous CloudFormation stack.

So we upload our latest template

aws s3 cp cloudformationjavalambda2.template s3://cloudformation-templates/cloudformationjavalambda2.template

And all we have to do is to update our stack.

aws cloudformation update-stack --stack-name JavaLambdaStack --template-url https://s3.amazonaws.com/cloudformation-templates/cloudformationjavalambda2.template

Our stack has just been updated.
We can go to our API Gateway endpoint and try to issue a POST.

curl -H "Content-Type: application/json" -X POST -d '{"numerator":1,"denominator":"2"}' https://{you api gateway endpoint}/StagingStage/divide
"0.5"

You can find the source code on GitHub.

Java on the AWS cloud using Lambda

Amazon Web Services gets more popular by the day. Java is a first-class citizen on AWS, and it is pretty easy to get started.
Deploying your application is a bit different, but still easy and convenient.

AWS Lambda is a compute service where you can upload your code to AWS Lambda and the service can run the code on your behalf using AWS infrastructure. After you upload your code and create what we call a Lambda function, AWS Lambda takes care of provisioning and managing the servers that you use to run the code.

Think of lambda as a task that needs up to five minutes to finish. For simple actions or jobs that are not time consuming and don’t require a huge framework, AWS Lambda is the way to go. AWS Lambda is also great for horizontal scaling.

The most stripped down example would be to create a lambda function that responds to a request.

We shall implement the RequestHandler interface.

package com.gkatzioura.deployment.lambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;
import java.util.logging.Logger;

/**
 * Created by gkatzioura on 9/10/2016.
 */
public class RequestFunctionHandler implements RequestHandler<Map<String,String>,String> {

    private static final Logger LOGGER = Logger.getLogger(RequestFunctionHandler.class.getName());

    public String handleRequest(Map <String,String> values, Context context) {

        LOGGER.info("Handling request");

        return "You invoked a lambda function";
    }

}

In a way, the RequestHandler is like a controller.

To proceed we will have to create a zip distribution containing our code and the dependencies needed, therefore we will create a custom gradle task.

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile (
            'com.amazonaws:aws-lambda-java-core:1.1.0',
            'com.amazonaws:aws-lambda-java-events:1.1.0'
    )
}

task buildZip(type: Zip) {
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtime
    }
}

build.dependsOn buildZip

Then we should build

gradle build

Now we have to upload the code for our lambda function.

I have an s3 bucket on Amazon for lambda functions only. Supposing that our bucket is called lambda-functions (I am pretty sure that name is already reserved).
We will use the aws cli wherever possible.

aws s3 cp build/distributions/JavaLambdaDeployment.zip s3://lambda-functions/JavaLambdaDeployment.zip

Now, instead of creating the lambda function the manual way, we are going to do so by creating a CloudFormation template.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "LF9MBL": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": "lambda-functions",
          "S3Key" : "JavaLambdaDeployment.zip",
        },
        "FunctionName": "SimpleRequest",
        "Handler": "com.gkatzioura.deployment.lambda.RequestFunctionHandler",
        "MemorySize": 128,
        "Role":"arn:aws:iam::274402012893:role/lambda_basic_execution",
        "Runtime":"java8"
      },
      "Metadata": {
        "AWS::CloudFormation::Designer": {
          "id": "66b2b325-f19a-4d7d-a7a9-943dd8cd4a5c"
        }
      }
    }
  }
}

Next step is to upload our CloudFormation template to an s3 bucket. Personally, I use a separate bucket for my templates. Supposing that our bucket is called cloudformation-templates:

aws s3 cp cloudformationjavalambda.template s3://cloudformation-templates/cloudformationjavalambda.template

Next step is to create our CloudFormation stack using the specified template

aws cloudformation create-stack --stack-name JavaLambdaStack --template-url https://s3.amazonaws.com/cloudformation-templates/cloudformationjavalambda.template

In order to check it, we shall invoke the lambda function through the aws cli

aws lambda invoke --invocation-type RequestResponse --function-name SimpleRequest --region eu-west-1 --log-type Tail --payload '{}' outputfile.txt

And the result is the expected one

"You invoked a lambda function"

You can find the source code on GitHub.