Configure Hazelcast with EC2

Hazelcast is a great distributed caching tool for JVM-based applications, and if you use Amazon Web Services it integrates wonderfully.

The first task is to create a policy that allows describing EC2 instances. We will name this policy describe-instances-policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1467219263000",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Applications that access Amazon resources should use either a user or a role with the required policies attached. Using an Amazon user for your application is bad practice: managing keys becomes a maintenance headache, let alone a security issue.
Therefore we will focus on configuring Hazelcast using IAM roles.

Our role will be called my-ec2-role and will have the policy describe-instances-policy attached.

By doing so, an EC2 instance running Hazelcast will be able to retrieve the private IPs of other EC2 instances and attempt to identify which of them are eligible to join the distributed cache.

Now we can proceed to the Hazelcast configuration.
We can use either a Java-based or an XML-based configuration.

Let us start with the XML configuration.

<hazelcast
        xsi:schemaLocation="https://hazelcast.com/schema/config https://hazelcast.com/schema/config/hazelcast-config-3.7.xsd"
        xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <group>
        <name>ec2-group</name>
        <password>ec2-password</password>
    </group>
    <network>
        <join>
            <multicast enabled="false">
            </multicast>
            <tcp-ip enabled="false">
            </tcp-ip>
            <aws enabled="true">
                <!--optional, default is us-east-1 -->
                <region>eu-west-1</region>
                <iam-role>my-ec2-role</iam-role>
                <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
                <security-group-name></security-group-name>
                <tag-key></tag-key>
                <tag-value></tag-value>
            </aws>
        </join>
    </network>
</hazelcast>

And the main class that loads the XML file.

package com.gkatzioura.hazelcastec2;

import com.hazelcast.config.*;
import com.hazelcast.core.Hazelcast;

/**
 * Created by gkatzioura on 7/26/16.
 */
public class HazelCastXMLExample {

    public static void main(String args[]) {

        Config config = new ClasspathXmlConfig("hazelcast.xml");

        Hazelcast.newHazelcastInstance(config);
    }

}

Pay extra attention: multicast and tcp-ip must be disabled.
Since we specify an IAM role, there is no need to provide credentials.
tag-key and tag-value represent tags that you can add to an EC2 machine. If you specify a tag key and value, a connection will be established only with machines that have the same tag and value.
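For example, restricting discovery to instances carrying a specific tag might look like the following; the tag key and value here are hypothetical, pick whatever you tagged your instances with:

```xml
<aws enabled="true">
    <region>eu-west-1</region>
    <iam-role>my-ec2-role</iam-role>
    <!-- hypothetical tag: only instances tagged cluster=hazelcast-cache are discovered -->
    <tag-key>cluster</tag-key>
    <tag-value>hazelcast-cache</tag-value>
</aws>
```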

You can leave security-group-name empty. Hazelcast uses this information for instance filtering; however, you must make sure that the security group the EC2 instances use has ports 5701, 5702, and 5703 open for inbound and outbound traffic.
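The ports Hazelcast binds to can also be pinned down in the network section, so that they line up with the security group rules; a sketch matching the three ports above:

```xml
<network>
    <!-- start at 5701 and try up to port-count consecutive ports -->
    <port auto-increment="true" port-count="3">5701</port>
    <!-- join configuration as shown above -->
</network>
```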

The Java configuration follows the same rules.

package com.gkatzioura.hazelcastec2;

import com.hazelcast.aws.AWSClient;
import com.hazelcast.config.AwsConfig;
import com.hazelcast.config.Config;
import com.hazelcast.config.GroupConfig;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

/**
 * Created by gkatzioura on 7/25/16.
 */
public class HazelCastJavaExample {

    public static void main(String args[]) {

        Config config = new Config();

        GroupConfig groupConfig = new GroupConfig();
        groupConfig.setName("ec2-group");
        groupConfig.setPassword("ec2-password");

        config.setGroupConfig(groupConfig);

        JoinConfig joinConfig = config.getNetworkConfig().getJoin();
        joinConfig.getTcpIpConfig().setEnabled(false);
        joinConfig.getMulticastConfig().setEnabled(false);

        AwsConfig awsConfig = joinConfig.getAwsConfig();
        awsConfig.setIamRole("my-ec2-role");
        awsConfig.setEnabled(true);
        awsConfig.setRegion("eu-west-1");

        Hazelcast.newHazelcastInstance(config);
    }

}

After uploading your Hazelcast applications to EC2 and running them, you should see a log entry like the following.

Jul 26, 2016 6:34:50 PM com.hazelcast.cluster.ClusterService
INFO: [172.31.33.104]:5701 [dev] [3.5.4] 

Members [2] {
	Member [172.31.33.104]:5701 this
	Member [172.31.41.154]:5701
}

I have added a Gradle file for some quick testing with either the XML or the Java configuration.

group 'com.gkatzioura'
version '1.0-SNAPSHOT'

apply plugin: 'java'

sourceCompatibility = 1.5

repositories {
    mavenCentral()
}

apply plugin: 'idea'

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
    compile group: 'com.hazelcast', name:'hazelcast-cloud', version:'3.5.4'
}

task javaConfJar(type: Jar) {
    manifest {
        attributes 'Main-Class': 'com.gkatzioura.hazelcastec2.HazelCastJavaExample'
    }
    baseName = project.name + '-jconf'
    from { configurations.compile.collect { it.isDirectory() ? it : zipTree(it) } }
    with jar
}

task javaXMLJar(type: Jar) {
    manifest {
        attributes 'Main-Class': 'com.gkatzioura.hazelcastec2.HazelCastXMLExample'
    }
    baseName = project.name + '-xmlconf'
    from { configurations.compile.collect { it.isDirectory() ? it : zipTree(it) } }
    with jar
}

You can find the source code on github.

Query DynamoDB Items with Node.js Part 2

On a previous post we had the chance to issue some basic DynamoDB query actions.

However, apart from the basic actions, the DynamoDB API provides us with some extra functionality.

Projection expressions offer select-like functionality:
you choose which attributes of a DynamoDB item shall be fetched. Keep in mind that using a projection has no impact on your query billing, since the whole item is still read.

var getRegisterDate = function(email,callback) {
	
	var docClient = new AWS.DynamoDB.DocumentClient();
	
	var params = {
		    TableName: "Users",
		    KeyConditionExpression: "#email = :email",
		    ExpressionAttributeNames:{
		        "#email": "email"
		    },
		    ExpressionAttributeValues: {
		        ":email":email
		    },
		    ProjectionExpression: 'registerDate'
		};
	
	docClient.query(params,callback);
}

Apart from selecting attributes, we can also specify the order of the results according to our range key. We shall query the Logins table in descending order by setting ScanIndexForward to false.

var fetchLoginsDesc = function(email,callback) {

	var docClient = new AWS.DynamoDB.DocumentClient();

	var params = {
	    TableName:"Logins",
	    KeyConditionExpression:"#email = :emailValue",
	    ExpressionAttributeNames: {
	    	"#email":"email"
	    },
	    ExpressionAttributeValues: {
	    	":emailValue":email
	    },
	    ScanIndexForward: false
	};
	
	docClient.query(params,callback);
}

A common piece of database functionality is counting the items persisted in a collection. In our case we want to count the login occurrences of a specific user. However, pay extra attention: the count functionality does nothing more than count the items fetched, therefore it will cost you as much as fetching the items themselves.

var countLogins = function(email,callback) {

	var docClient = new AWS.DynamoDB.DocumentClient();

	var params = {
	    TableName:"Logins",
	    KeyConditionExpression:"#email = :emailValue",
	    ExpressionAttributeNames: {
	    	"#email":"email"
	    },
	    ExpressionAttributeValues: {
	    	":emailValue":email
	    },
	    Select:'COUNT'
	};
	
	docClient.query(params,callback);
}

Another feature of DynamoDB is getting items in batches, even if they belong to different tables. This is really helpful in cases where data belonging to a specific context is spread across different tables. Every item retrieved is handled and charged as a DynamoDB read action. In a batch get, the full key of each item has to be specified, since the purpose of every entry in a BatchGetItem request is to fetch a single item.
It is important to know that you can fetch up to 16 MB of data and up to 100 items per BatchGetItem request.

var getMultipleInformation = function(email,name,callback) {
	
	var params = {
			"RequestItems" : {
			    "Users": {
			      "Keys" : [
			        {"email" : { "S" : email }}
			      ]
			    },
			    "Supervisors": {
				   "Keys" : [
					{"name" : { "S" : name }}
				  ]
			    }
			  }
			};
	
	dynamodb.batchGetItem(params,callback);
};

You can find the source code on github.

Query DynamoDB Items with Java Part 2

On a previous post we had the chance to issue some basic DynamoDB query actions.

However, apart from the basic actions, the DynamoDB API provides us with some extra functionality.

Projection expressions offer select-like functionality:
you choose which attributes of a DynamoDB item shall be fetched. Keep in mind that using a projection has no impact on your query billing, since the whole item is still read.

public Map<String,AttributeValue> getRegisterDate(String email) {

        Map<String,String> expressionAttributesNames = new HashMap<>();
        expressionAttributesNames.put("#email","email");

        Map<String,AttributeValue> expressionAttributeValues = new HashMap<>();
        expressionAttributeValues.put(":emailValue",new AttributeValue().withS(email));

        QueryRequest queryRequest = new QueryRequest()
                .withTableName(TABLE_NAME)
                .withKeyConditionExpression("#email = :emailValue")
                .withExpressionAttributeNames(expressionAttributesNames)
                .withExpressionAttributeValues(expressionAttributeValues)
                .withProjectionExpression("registerDate");

        QueryResult queryResult = amazonDynamoDB.query(queryRequest);

        List<Map<String,AttributeValue>> attributeValues = queryResult.getItems();

        if(attributeValues.size()>0) {
            return attributeValues.get(0);
        } else {
            return null;
        }
    }

Apart from selecting attributes, we can also specify the order of the results according to our range key. We shall query the Logins table in descending order by setting ScanIndexForward to false.

    public List<Map<String,AttributeValue>> fetchLoginsDesc(String email) {

        List<Map<String,AttributeValue>> items = new ArrayList<>();

        Map<String,String> expressionAttributesNames = new HashMap<>();
        expressionAttributesNames.put("#email","email");

        Map<String,AttributeValue> expressionAttributeValues = new HashMap<>();
        expressionAttributeValues.put(":emailValue",new AttributeValue().withS(email));

        QueryRequest queryRequest = new QueryRequest()
                .withTableName(TABLE_NAME)
                .withKeyConditionExpression("#email = :emailValue")
                .withExpressionAttributeNames(expressionAttributesNames)
                .withExpressionAttributeValues(expressionAttributeValues)
                .withScanIndexForward(false);

        Map<String,AttributeValue> lastKey = null;

        do {
            QueryResult queryResult = amazonDynamoDB.query(queryRequest);
            items.addAll(queryResult.getItems());
            lastKey = queryResult.getLastEvaluatedKey();
            // Feed the last evaluated key into the next request, otherwise the same page is fetched forever.
            queryRequest.setExclusiveStartKey(lastKey);
        } while (lastKey != null);

        return items;
    }

A common piece of database functionality is counting the items persisted in a collection. In our case we want to count the login occurrences of a specific user. However, pay extra attention: the count functionality does nothing more than count the items fetched, therefore it will cost you as much as fetching the items themselves.

   public Integer countLogins(String email) {

        Map<String,String> expressionAttributesNames = new HashMap<>();
        expressionAttributesNames.put("#email","email");

        Map<String,AttributeValue> expressionAttributeValues = new HashMap<>();
        expressionAttributeValues.put(":emailValue",new AttributeValue().withS(email));

        QueryRequest queryRequest = new QueryRequest()
                .withTableName(TABLE_NAME)
                .withKeyConditionExpression("#email = :emailValue")
                .withExpressionAttributeNames(expressionAttributesNames)
                .withExpressionAttributeValues(expressionAttributeValues)
                .withSelect(Select.COUNT);

        QueryResult queryResult = amazonDynamoDB.query(queryRequest);

        // With Select.COUNT only the count of this response page is returned.
        return queryResult.getCount();
    }

Another feature of DynamoDB is getting items in batches, even if they belong to different tables. This is really helpful in cases where data belonging to a specific context is spread across different tables. Every item retrieved is handled and charged as a DynamoDB read action. In a batch get, the full key of each item has to be specified, since the purpose of every entry in a BatchGetItem request is to fetch a single item.
It is important to know that you can fetch up to 16 MB of data and up to 100 items per BatchGetItem request.

    public Map<String,List<Map<String,AttributeValue>>> getMultipleInformation(String email,String name) {

        Map<String,KeysAndAttributes> requestItems = new HashMap<>();

        List<Map<String,AttributeValue>> userKeys = new ArrayList<>();
        Map<String,AttributeValue> userAttributes = new HashMap<>();
        userAttributes.put("email",new AttributeValue().withS(email));
        userKeys.add(userAttributes);
        requestItems.put(UserRepository.TABLE_NAME,new KeysAndAttributes().withKeys(userKeys));

        List<Map<String,AttributeValue>> supervisorKeys = new ArrayList<>();
        Map<String,AttributeValue> supervisorAttributes = new HashMap<>();
        supervisorAttributes.put("name",new AttributeValue().withS(name));
        supervisorKeys.add(supervisorAttributes);
        requestItems.put(SupervisorRepository.TABLE_NAME,new KeysAndAttributes().withKeys(supervisorKeys));

        BatchGetItemRequest batchGetItemRequest = new BatchGetItemRequest();
        batchGetItemRequest.setRequestItems(requestItems);

        BatchGetItemResult batchGetItemResult = amazonDynamoDB.batchGetItem(batchGetItemRequest);

        Map<String,List<Map<String,AttributeValue>>> responses = batchGetItemResult.getResponses();

        return responses;
    }

You can find the source code on github.

Query DynamoDB Items with Node.js

On a previous post we proceeded with inserting data into a DynamoDB database.

In this tutorial we will issue some basic queries against our DynamoDB tables.

The main rule is that every query has to use the hash key.

The simplest form of query uses the hash key only. We will query the Users table this way. Since there can be at most one result, there is no need to iterate over the Items list.

var getUser = function(email,callback) {
	
	var docClient = new AWS.DynamoDB.DocumentClient();
	
	var params = {
		    TableName: "Users",
		    KeyConditionExpression: "#email = :email",
		    ExpressionAttributeNames:{
		        "#email": "email"
		    },
		    ExpressionAttributeValues: {
		        ":email":email
		    }
		};
	
	docClient.query(params,callback);
};

However, we can issue more complex queries using conditions.
The Logins table suits this example well: we will issue a query that fetches the login attempts between two dates.

var queryLogins = function(email,from,to,callback) {

	var docClient = new AWS.DynamoDB.DocumentClient();
	
	var params = {
	    TableName:"Logins",
	    KeyConditionExpression:"#email = :emailValue and #timestamp BETWEEN :from AND :to",
	    ExpressionAttributeNames: {
	    	"#email":"email",
	    	"#timestamp":"timestamp"
	    },
	    ExpressionAttributeValues: {
	    	":emailValue":email,
	    	":from": from.getTime(),
	    	":to":to.getTime()
	    }			
	};
	
	var items = []
	
	var queryExecute = function(callback) {
	
		docClient.query(params,function(err,result) {

			if(err) {
				callback(err);
			} else {
			
				console.log(result)
				
				items = items.concat(result.Items);
			
				if(result.LastEvaluatedKey) {

					params.ExclusiveStartKey = result.LastEvaluatedKey;
					queryExecute(callback);				
				} else {
					callback(err,items);
				}	
			}
		});
	}
	
	queryExecute(callback);
};

Keep in mind that DynamoDB fetches data in pages, so you have to issue the same request more than once when there are multiple pages, passing the last evaluated key to each subsequent request. In case of many entries, be aware of the call stack size, since this example pages recursively.

Last but not least, querying on indexes is one of the basic actions. It is the same routine for both local and global secondary indexes.
Keep in mind that the results fetched depend on the projection type specified when creating the table; in our case the projection includes all fields.

We shall use the Supervisors table.

	var docClient = new AWS.DynamoDB.DocumentClient();
	
	var params = {
		    TableName: "Supervisors",
		    IndexName: "FactoryIndex",
		    KeyConditionExpression:"#company = :companyValue and #factory = :factoryValue",
		    ExpressionAttributeNames: {
		    	"#company":"company",
		    	"#factory":"factory"
		    },
		    ExpressionAttributeValues: {
		    	":companyValue": company,
		    	":factoryValue": factory
		    }
		};

	docClient.query(params,callback);

You can find full source code with unit tests on github.

Query DynamoDB Items with Java

On a previous post we proceeded with inserting data into a DynamoDB database.

In this tutorial we will issue some basic queries against our DynamoDB tables.

The main rule is that every query has to use the hash key.

The simplest form of query uses the hash key only. We will query the Users table this way. Since there can be at most one result, there is no need to iterate over the Items list.

    public Map<String,AttributeValue> getUser(String email) {

        Map<String,String> expressionAttributesNames = new HashMap<>();
        expressionAttributesNames.put("#email","email");

        Map<String,AttributeValue> expressionAttributeValues = new HashMap<>();
        expressionAttributeValues.put(":emailValue",new AttributeValue().withS(email));

        QueryRequest queryRequest = new QueryRequest()
                .withTableName(TABLE_NAME)
                .withKeyConditionExpression("#email = :emailValue")
                .withExpressionAttributeNames(expressionAttributesNames)
                .withExpressionAttributeValues(expressionAttributeValues);

        QueryResult queryResult = amazonDynamoDB.query(queryRequest);

        List<Map<String,AttributeValue>> attributeValues = queryResult.getItems();

        if(attributeValues.size()>0) {
            return attributeValues.get(0);
        } else {
            return null;
        }
    }

However, we can issue more complex queries using conditions.
The Logins table suits this example well: we will issue a query that fetches the login attempts between two dates.

    public List<Map<String ,AttributeValue>> queryLoginsBetween(String email, Date from, Date to) {

        List<Map<String,AttributeValue>> items = new ArrayList<>();

        Map<String,String> expressionAttributesNames = new HashMap<>();
        expressionAttributesNames.put("#email","email");
        expressionAttributesNames.put("#timestamp","timestamp");

        Map<String,AttributeValue> expressionAttributeValues = new HashMap<>();
        expressionAttributeValues.put(":emailValue",new AttributeValue().withS(email));
        expressionAttributeValues.put(":from",new AttributeValue().withN(Long.toString(from.getTime())));
        expressionAttributeValues.put(":to",new AttributeValue().withN(Long.toString(to.getTime())));

        QueryRequest queryRequest = new QueryRequest()
                .withTableName(TABLE_NAME)
                .withKeyConditionExpression("#email = :emailValue and #timestamp BETWEEN :from AND :to ")
                .withExpressionAttributeNames(expressionAttributesNames)
                .withExpressionAttributeValues(expressionAttributeValues);

        Map<String,AttributeValue> lastKey = null;

        do {
            QueryResult queryResult = amazonDynamoDB.query(queryRequest);
            items.addAll(queryResult.getItems());
            lastKey = queryResult.getLastEvaluatedKey();
            // Feed the last evaluated key into the next request, otherwise the same page is fetched forever.
            queryRequest.setExclusiveStartKey(lastKey);
        } while (lastKey != null);

        return items;
    }

Keep in mind that DynamoDB fetches data in pages, so you have to issue the same request more than once when there are multiple pages, passing the last evaluated key to each subsequent request.
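The paging contract can be sketched without the SDK. In the snippet below, Page and fetchPage are hypothetical stand-ins for QueryResult and the query call; the loop feeds the last evaluated key into the next request until it comes back null:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PagingSketch {

    // Hypothetical stand-in for one query response page.
    static class Page {
        final List<String> items;
        final String lastEvaluatedKey; // null on the final page

        Page(List<String> items, String lastEvaluatedKey) {
            this.items = items;
            this.lastEvaluatedKey = lastEvaluatedKey;
        }
    }

    // Simulated data source returning two pages; stands in for amazonDynamoDB.query.
    static Page fetchPage(String exclusiveStartKey) {
        if (exclusiveStartKey == null) {
            return new Page(Arrays.asList("login-1", "login-2"), "login-2");
        }
        return new Page(Arrays.asList("login-3"), null);
    }

    static List<String> fetchAll() {
        List<String> items = new ArrayList<>();
        String startKey = null;
        do {
            Page page = fetchPage(startKey);
            items.addAll(page.items);
            startKey = page.lastEvaluatedKey; // becomes the ExclusiveStartKey of the next request
        } while (startKey != null);
        return items;
    }

    public static void main(String[] args) {
        System.out.println(fetchAll()); // [login-1, login-2, login-3]
    }
}
```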

Last but not least, querying on indexes is one of the basic actions. It is the same routine for both local and global secondary indexes.
Keep in mind that the results fetched depend on the projection type specified when creating the table; in our case the projection includes all fields.

We shall use the Supervisors table.

    public Map<String ,AttributeValue> getSupervisor(String company,String factory) {

        Map<String,String> expressionAttributesNames = new HashMap<>();
        expressionAttributesNames.put("#company","company");
        expressionAttributesNames.put("#factory","factory");

        Map<String,AttributeValue> expressionAttributeValues = new HashMap<>();
        expressionAttributeValues.put(":company",new AttributeValue().withS(company));
        expressionAttributeValues.put(":factory",new AttributeValue().withS(factory));

        QueryRequest queryRequest = new QueryRequest()
                .withTableName(TABLE_NAME)
                .withKeyConditionExpression("#company = :company and #factory = :factory ")
                .withIndexName("FactoryIndex")
                .withExpressionAttributeNames(expressionAttributesNames)
                .withExpressionAttributeValues(expressionAttributeValues);

        QueryResult queryResult = amazonDynamoDB.query(queryRequest);

        List<Map<String,AttributeValue>> attributeValues = queryResult.getItems();

        if(attributeValues.size()>0) {
            return attributeValues.get(0);
        } else {
            return null;
        }
    }

You can find full source code with unit tests on github.

Insert Items to DynamoDB Tables using Node.js

On a previous article we learned how to create DynamoDB Tables using Node.js.

The next step is to insert items into the DynamoDB tables previously created.

Keep in mind that for the insert action the most basic requirement is to specify the primary key.
For the table Users the primary key is the attribute email. You can add as many attributes as you want; however, the cumulative item size should not surpass 400 KB.

var AWS = require("aws-sdk");

	var dynamodb = new AWS.DynamoDB();
	var params = {
			TableName:"Users",
		    Item:{
		    	email : { S:"jon@doe.com"},
		        fullname: { S:"Jon Doe"}
		    }
		};
	
	dynamodb.putItem(params,callback);

DynamoDB also supports batch writes. In this case the main benefit lies in fewer I/O operations; however, nothing changes regarding consumed capacity. In our case we will add a batch of login attempts.

var AWS = require("aws-sdk");

var insertBatchLogins = function(callback) {
	
	var dynamodb = new AWS.DynamoDB();
	var batchRequest = {
			RequestItems: {
				"Logins": [
				           {
				        	   PutRequest: { 
				        		   Item: {
				        			   "email": { S: "jon@doe.com" },
				        			   "timestamp": { N: "1467041009976" }
				        			   }
				           }},
				           {
				        	   PutRequest: { 
				        		   Item: {
				        			   "email": { S: "jon@doe.com" },
				        			   "timestamp": { N: "1467041019976" }
				        			   }
				           }}]
		    }
		};

	dynamodb.batchWriteItem(batchRequest,callback);
};

In case of an insert with a global/local secondary index, all you have to do is specify the corresponding attributes for the index. Take into consideration that the index-related attributes may be absent or even duplicated.

	var dynamodb = new AWS.DynamoDB();
	
	var params = {
			TableName:"Supervisors",
		    Item:{
		    	name: { S:"Random SuperVisor"},
		    	company: { S:"Random Company"},
		    	factory: { S:"Jon Doe"}
		    }
		};
	
	dynamodb.putItem(params,callback);

You can find the source code on github.

Insert Items to DynamoDB Tables using Java

On a previous article we learned how to create DynamoDB Tables using Java.

The next step is to insert items into the DynamoDB tables previously created.

Keep in mind that for the insert action the most basic requirement is to specify the primary key.
For the table Users the primary key is the attribute email. You can add as many attributes as you want; however, the cumulative item size should not surpass 400 KB.

 Map<String,AttributeValue> attributeValues = new HashMap<>();
        attributeValues.put("email",new AttributeValue().withS("jon@doe.com"));
        attributeValues.put("fullname",new AttributeValue().withS("Jon Doe"));

        PutItemRequest putItemRequest = new PutItemRequest()
                .withTableName("Users")
                .withItem(attributeValues);

        PutItemResult putItemResult = amazonDynamoDB.putItem(putItemRequest);

DynamoDB also supports batch writes. In this case the main benefit lies in fewer I/O operations; however, nothing changes regarding consumed capacity. In our case we will add a batch of login attempts.

        Map<String,AttributeValue> firstAttributeValues = new HashMap<>();
        firstAttributeValues.put("email",new AttributeValue().withS("jon@doe.com"));

        Long date = new Date().getTime();

        firstAttributeValues.put("timestamp",new AttributeValue().withN(Long.toString(date)));

        PutRequest firstPutRequest = new PutRequest();
        firstPutRequest.setItem(firstAttributeValues);

        WriteRequest firstWriteRequest = new WriteRequest();
        firstWriteRequest.setPutRequest(firstPutRequest);


        Map<String,AttributeValue> secondAttributeValues = new HashMap<>();
        secondAttributeValues.put("email",new AttributeValue().withS("jon@doe.com"));
        secondAttributeValues.put("timestamp",new AttributeValue().withN(Long.toString(date+100)));

        PutRequest secondPutRequest = new PutRequest();
        secondPutRequest.setItem(secondAttributeValues);

        WriteRequest secondWriteRequest = new WriteRequest();
        secondWriteRequest.setPutRequest(secondPutRequest);

        List<WriteRequest> batchList = new ArrayList<WriteRequest>();
        batchList.add(firstWriteRequest);
        batchList.add(secondWriteRequest);

        Map<String, List<WriteRequest>> batchTableRequests = new HashMap<String, List<WriteRequest>>();
        batchTableRequests.put("Logins",batchList);

        BatchWriteItemRequest batchWriteItemRequest = new BatchWriteItemRequest();
        batchWriteItemRequest.setRequestItems(batchTableRequests);

        amazonDynamoDB.batchWriteItem(batchWriteItemRequest);

In case of an insert with a global/local secondary index, all you have to do is specify the corresponding attributes for the index. Take into consideration that the index-related attributes may be absent or even duplicated.

        Map<String,AttributeValue> attributeValues = new HashMap<>();
        attributeValues.put("name",new AttributeValue().withS("Random SuperVisor"));
        attributeValues.put("company",new AttributeValue().withS("Random Company"));
        attributeValues.put("factory",new AttributeValue().withS("Jon Doe"));


        PutItemRequest putItemRequest = new PutItemRequest()
                .withTableName("Supervisors")
                .withItem(attributeValues);

        PutItemResult putItemResult = amazonDynamoDB.putItem(putItemRequest);

You can find the source code on github.