Implement custom JMeter samplers

As we move across different architectures and implementations, the need for versatile stress-testing tools rises.

Apache JMeter is one of the most well-known tools when it comes to load testing. It supports many protocols, such as FTP, HTTP and TCP, and it can also be used easily for distributed testing.

JMeter also provides an easy way to create custom samplers. For example, if you need to load test an HTTP endpoint that requires a specific procedure for signing the headers, a custom sampler will come in handy.

The goal is to implement a custom sampler project which will load test a simple function.

I use Gradle for this example.

group 'com.gkatzioura.jmeter'
version '1.0-SNAPSHOT'

apply plugin: 'java'

sourceCompatibility = 1.6

repositories {
    mavenCentral()
}


dependencies {
    compile 'org.apache.jmeter:ApacheJMeter_java:2.11'
    compile 'org.json:json:20151123'
    testCompile group: 'junit', name: 'junit', version: '4.11'
}

task copySample(type: Copy, dependsOn: [build]) {
    from project.buildDir.getPath() + '/libs/jmeter-sampler-1.0-SNAPSHOT.jar'
    into 'pathtoyourjmeterinstallation/apache-jmeter-2.13/lib/ext/'
}

I include the ApacheJMeter_java dependency in the project, since the sampler has to extend AbstractJavaSamplerClient.
The copySample task copies the resulting jar to the lib/ext path of JMeter, where all samplers reside.
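
To build and deploy the sampler in one step, simply invoke the task (assuming the path above has been adjusted to point to your JMeter installation):

gradle copySample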

A simple function will be called by the sampler

package com.gkatzioura.jmeter;

/**
 * Created by gkatzioura on 30/1/2016.
 */
public class FunctionalityForSampling {

    public String testFunction(String argument1, String argument2) throws Exception {

        if (argument1.equals(argument2)) {
            throw new Exception();
        }

        return argument1 + argument2;
    }

}

The CustomSampler class extends the AbstractJavaSamplerClient class and invokes the testFunction.
By overriding the getDefaultParameters function we can apply default parameters that can be used with the request.

package com.gkatzioura.jmeter;

import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.Serializable;

/**
 * Created by gkatzioura on 30/1/2016.
 */
public class CustomSampler extends AbstractJavaSamplerClient implements Serializable {

    private static final String METHOD_TAG = "method";
    private static final String ARG1_TAG = "arg1";
    private static final String ARG2_TAG = "arg2";

    private static final Logger LOGGER = LoggerFactory.getLogger(CustomSampler.class);

    @Override
    public Arguments getDefaultParameters() {

        Arguments defaultParameters = new Arguments();
        defaultParameters.addArgument(METHOD_TAG,"test");
        defaultParameters.addArgument(ARG1_TAG,"arg1");
        defaultParameters.addArgument(ARG2_TAG,"arg2");

        return defaultParameters;
    }

    @Override
    public SampleResult runTest(JavaSamplerContext javaSamplerContext) {

        String method = javaSamplerContext.getParameter(METHOD_TAG);
        String arg1 = javaSamplerContext.getParameter(ARG1_TAG);
        String arg2 = javaSamplerContext.getParameter(ARG2_TAG);

        FunctionalityForSampling functionalityForSampling = new FunctionalityForSampling();

        SampleResult sampleResult = new SampleResult();
        sampleResult.sampleStart();

        try {
            String message = functionalityForSampling.testFunction(arg1,arg2);
            sampleResult.sampleEnd();
            sampleResult.setSuccessful(Boolean.TRUE);
            sampleResult.setResponseCodeOK();
            sampleResult.setResponseMessage(message);
        } catch (Exception e) {
            LOGGER.error("Request was not successfully processed",e);
            sampleResult.sampleEnd();
            sampleResult.setResponseMessage(e.getMessage());
            sampleResult.setSuccessful(Boolean.FALSE);

        }

        return sampleResult;
    }

}

Once compilation has finished, the resulting jar must be copied to the lib/ext directory of the JMeter installation home.
Also, in case there are additional dependencies that have to be included, they should be copied to the lib path of the JMeter installation home.

Once the process is complete, we can add a Java Request sampler to a JMeter Thread Group and choose our custom sampler.
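
The test plan can then be run either from the GUI or in non-GUI mode, which is the recommended way to generate load; a sketch of the command (the .jmx file name is just an example):

jmeter -n -t custom-sampler-plan.jmx -l results.jtl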


You can also find the source code here.

Testing Amazon Web Services Codebase: DynamoDB and S3

When switching to an Amazon Web Services infrastructure, one of the main challenges is testing.

Components such as DynamoDB and S3 come in handy; however, they come with a cost.
When it comes to continuous integration, you will end up spending money if you test against the actual Amazon components.

Some of these components have clones that are capable of running locally.

You can use DynamoDB locally.

By issuing

java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb

you will have a local DynamoDB instance up and running.

Also, at http://localhost:8000/shell you have a DynamoDB shell (based on JavaScript) that will help you get started.

In order to connect to the local instance you need to set the endpoint on your DynamoDB client.

On Java

AmazonDynamoDBClient client = new AmazonDynamoDBClient();
client.setEndpoint("http://localhost:8000"); 

On Node.js

var AWS = require('aws-sdk');
var config = {"endpoint":"http://localhost:8000"};
var client = new AWS.DynamoDB(config);
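
As a quick smoke test against the local instance, a minimal sketch along these lines could create a table (the table name, key schema and dummy credentials below are just for illustration; DynamoDB Local does not validate credentials):

var AWS = require('aws-sdk');

var dynamodb = new AWS.DynamoDB({
    endpoint: 'http://localhost:8000',
    region: 'local',            // any region value works against DynamoDB Local
    accessKeyId: 'fake',        // credentials are not validated locally
    secretAccessKey: 'fake'
});

dynamodb.createTable({
    TableName: 'sample_table',
    AttributeDefinitions: [{AttributeName: 'id', AttributeType: 'S'}],
    KeySchema: [{AttributeName: 'id', KeyType: 'HASH'}],
    ProvisionedThroughput: {ReadCapacityUnits: 1, WriteCapacityUnits: 1}
}, function(err, data) {

    if (err) {
        return console.log(err);
    }

    // the local instance behaves like the real service for basic operations
    console.log('Created table ' + data.TableDescription.TableName);
});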

Another base component of Amazon Web Services is the Simple Storage Service (S3).

Luckily, fake-s3 exists: a lightweight server that clones the basics of Amazon S3.

Installing and running fake-s3 is pretty simple

gem install fakes3
fakes3 -r /mnt/fakes3_root -p 4567

In order to connect you have to specify the endpoint

On Java

AmazonS3 client = new AmazonS3Client();
client.setEndpoint("http://localhost:8000"); 

On Node.js

var AWS = require('aws-sdk');
var config = {"endpoint":"http://localhost:8000"};
var client = new AWS.S3(config);
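
A minimal sketch of storing an object on the fake S3 instance could look like the following (the bucket and key names are made up; fake-s3 does not validate credentials, and path-style addressing avoids bucket-name based hostnames):

var AWS = require('aws-sdk');

var s3 = new AWS.S3({
    endpoint: 'http://localhost:4567',
    s3ForcePathStyle: true,     // use http://host/bucket/key style URLs
    accessKeyId: 'fake',        // credentials are not validated by fake-s3
    secretAccessKey: 'fake'
});

s3.createBucket({Bucket: 'sample-bucket'}, function(err) {

    if (err) {
        return console.log(err);
    }

    // store a small text object and print its ETag
    s3.putObject({Bucket: 'sample-bucket', Key: 'greeting.txt', Body: 'hello'}, function(err, data) {
        console.log(err || 'Stored object with ETag ' + data.ETag);
    });
});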

These tools will come in handy during the development phase, especially when you are getting started and want a simple example. By running them locally you avoid the overhead of permissions and configuration that comes with each component you set up on Amazon.

Async for Node.js

The async module for Node.js saves the day when it comes to synchronizing asynchronous tasks, or executing them in a serial manner.

To execute tasks in order, you can use the series method.


var async = require('async');

var somethingAsynchronous = function(callback) { 
    console.log('Called something asynchronous'); 
    callback(); 
};

var somethingElseAsynchronous = function(callback) { 
    console.log('called something else asynchronous'); 
    callback() 
};

async.series([
  function(callback) {
    somethingAsynchronous(function(err,result) {
      if(err) {
        callback(err);
      } else {
        callback(null,result);
      }
    });
  },
  function(callback) {
    somethingElseAsynchronous(function(err,result) {
      if(err) {
        callback(err);
      } else {
        callback(null,result);
      }
    });
  }
]);

To execute tasks in order and use the results of previous tasks, you have to use the waterfall method.
The last function specified will handle the result of the executed tasks. When an error occurs before all of the specified tasks have executed, the remaining tasks will not execute and the last function will handle the error.


var somethingAsynchronous = function(callback) { 
    callback(null,'This is a result'); 
};

var somethingElseAsynchronous = function(firstResult,callback) { 
   callback(null,firstResult+" and this is appended");
};

async.waterfall([
  function (callback){
    somethingAsynchronous(callback);
  },
  function(result,callback) {
    somethingElseAsynchronous(result,callback);
  }
],
function(err,result) {
  console.log('The end result is: '+result);
});

The parallel method is used to execute tasks in parallel and synchronize them after their execution.


var somethingAsynchronous = function(callback) { 
    
    /*
        Asynchronous code
    */
    
    callback(null,'23'); 
};

var somethingElseAsynchronous = function(callback) { 

    /*
        Asynchronous code
    */
    
    callback(null,'sad');
};

async.parallel([
somethingAsynchronous,
somethingElseAsynchronous
],function(err,result){
  console.log('The result is '+result);
});

When we have an array of items that need to be processed in an asynchronous manner, we can use map.


var items = ['a','b','c','d'];

async.map(items,function(item,callback) {

   callback(null,'Did something asynchronous with '+item);
},
function(err,results){

  results.forEach(function(result) {
      console.log(result);
  });
});

Integrate MongoDB to your Spring project

This article shows how to integrate MongoDB into your Spring project through annotation-based configuration.

We will begin with our Gradle configuration.

group 'com.gkatzioura.spring'
version '1.0-SNAPSHOT'


buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.2.7.RELEASE")
    }
}

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'spring-boot'

jar {
    baseName = 'mdb-spring-boot'
    version =  '0.1.0'
}

repositories {
    mavenCentral()
}

sourceCompatibility = 1.8
targetCompatibility = 1.8

dependencies {


    compile("org.springframework.boot:spring-boot-starter-web")


    compile('com.googlecode.json-simple:json-simple:1.1.1')
    compile("org.springframework.boot:spring-boot-starter-actuator")
    compile("org.springframework.data:spring-data-mongodb:1.8.0.RELEASE")
    compile("ch.qos.logback:logback-classic:1.1.3")
    compile("ch.qos.logback:logback-core:1.1.3")
    compile("org.json:json:20150729")
    compile("com.google.code.gson:gson:2.4")

    compile("org.slf4j:slf4j-api:1.7.12")

    testCompile("junit:junit")
    testCompile('org.springframework.boot:spring-boot-starter-test')
}

task wrapper(type: Wrapper) {
    gradleVersion = '2.3'
}

We will proceed with the MongoDB configuration using Spring annotations.

package com.gkatzioura.spring.config;

import com.mongodb.MongoClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.MongoDbFactory;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoDbFactory;

import java.net.UnknownHostException;

/**
 * Created by oSeven3 on 21/10/2015.
 */
@Configuration
public class MongoDbConfiguration {

    public @Bean MongoDbFactory getMongoDbFactory() throws UnknownHostException {
        return new SimpleMongoDbFactory(new MongoClient("localhost",27017),"mongotest");
    }

    public @Bean(name = "mongoTemplate") MongoTemplate getMongoTemplate() throws UnknownHostException {

        MongoTemplate mongoTemplate = new MongoTemplate(getMongoDbFactory());
        return mongoTemplate;
    }

}

Next we will define our model.
We shall create the Location model, which will contain the latitude and longitude.

package com.gkatzioura.spring.persistence.entities;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

import java.math.BigDecimal;
import java.util.Date;
import java.util.UUID;

@Document(collection = "location")
public class Location {

    @Id
    private String id;
    private BigDecimal latitude;
    private BigDecimal longitude;
    private Date timestamp;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public BigDecimal getLatitude() {
        return latitude;
    }

    public void setLatitude(BigDecimal latitude) {
        this.latitude = latitude;
    }

    public BigDecimal getLongitude() {
        return longitude;
    }

    public void setLongitude(BigDecimal longitude) {
        this.longitude = longitude;
    }

    public Date getTimestamp() {
        return timestamp;
    }

    public void setTimestamp(Date timestamp) {
        this.timestamp = timestamp;
    }
}

Then we shall create our repository

package com.gkatzioura.spring.persistence.repositories;

import com.gkatzioura.spring.persistence.entities.Location;
import org.springframework.data.repository.CrudRepository;

import java.util.UUID;

public interface LocationRepository extends CrudRepository<Location,String> {
}

Then we shall define our controller

package com.gkatzioura.spring.controller;

import com.gkatzioura.spring.persistence.entities.Location;
import com.gkatzioura.spring.persistence.repositories.LocationRepository;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.ArrayList;
import java.util.List;

import java.io.IOException;

/**
 * Created by oSeven3 on 21/10/2015.
 */

@RestController
@RequestMapping("/location")
public class LocationController {

    @Autowired
    private LocationRepository locationRepository;

    private static final Logger LOGGER = LoggerFactory.getLogger(LocationController.class);

    @RequestMapping(value = "/",method = RequestMethod.POST)
    @ResponseBody
    public String post(@RequestBody Location location) {

        locationRepository.save(location);

        return "OK";
    }

    @RequestMapping(value = "/",method = RequestMethod.GET)
    @ResponseBody
    public List<Location> get() {

        List<Location> locations = new ArrayList<>();
        locationRepository.findAll().forEach(l->locations.add(l));
        return locations;
    }

}

Last but not least our Application class

package com.gkatzioura.spring;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/**
 * Created by gkatziourasemmanouil on 8/15/15.
 */
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}

In order to run the application, just issue

gradle bootRun
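
Assuming the default Spring Boot port (8080) and a MongoDB instance running locally, the endpoints can then be exercised with something like the following (the coordinates are arbitrary sample values):

curl -H "Content-Type: application/json" -X POST \
     -d '{"latitude":37.97,"longitude":23.72}' \
     http://localhost:8080/location/

curl http://localhost:8080/location/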

Use Map Reduce for Tf-Idf ranking on a Node.js and MongoDB environment

When developing a document search application, one of the challenges is to order your results according to the occurrence of the term that you search for. Tf-Idf is a numerical statistic that assists you in weighing the results of your search.
Tf stands for term frequency.
Idf stands for inverse document frequency.
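
In the sample that follows, a simple variant of the statistic is used:

tf(term, document)   = (words of the document containing the term) / (total words of the document)
idf(term, documents) = ln(number of documents / number of documents containing the term)
weight               = tf * idf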

To get a grasp, we will develop a sample tf-idf implementation in JavaScript, as a Node.js module.

function TfIdf() {
}

TfIdf.prototype.weights = function(documents,term) {
    
    var results = []
    
    var idf = this.idf(documents,term)
    
    for(var i=0;i<documents.length;i++) {
        
        var tf = this.tf(documents[i],term)
        var tfidf = tf*idf
        var result = {weight:tfidf,doc:documents[i]}    
        
        results.push(result)
    }

    return results
}

TfIdf.prototype.tf = function(words,term) {

    var result = 0
    
    for(var i=0;i<words.length;i++) {

        var word = words[i]

        if(word.indexOf(term)!=-1) {
            result = result+1
        }    
    }

    return result/words.length
}

TfIdf.prototype.idf = function(documents,term) {
   
    var occurence = 0

    for(var j=0;j<documents.length;j++) {
        
        var doc = documents[j]
        
        if(this.__wordInsideDoc(doc,term)){
            occurence = occurence+1
        }                  
    }

    if(occurence==0) {
        return undefined    
    }

    return Math.log(documents.length/occurence)
}

TfIdf.prototype.__wordInsideDoc = function(doc,term) {
    
    for(var i=0;i<doc.length;i++) {

        var word = doc[i]

        if(word.indexOf(term)!=-1) {
            return true
        }
    }    

    return false
}

module.exports = TfIdf

The weights function accepts the documents and the term to search for.

An example follows

var TfIdf = require('./TfIdf')

var tfIdf = new TfIdf()

var docs = [["latest","sprint"],["lair","laugh","fault"],["lemma","on"]]

console.log(tfIdf.weights(docs,"la"))

The result is

[ { weight: 0.2027325540540822, doc: [ 'latest', 'sprint' ] },
  { weight: 0.27031007207210955,
    doc: [ 'lair', 'laugh', 'fault' ] },
  { weight: 0, doc: [ 'lemma', 'on' ] } ]

Now we shall proceed with the map reduce approach.

I will use Node.js.

First we will install the MongoDB driver

npm install mongodb

Then we will set up our MongoDB connection. Once initialized, in case there are no records, we will populate the database for testing purposes.

var MongoClient = require('mongodb').MongoClient
Server = require('mongodb').Server

var url = 'mongodb://localhost:27017/mapreduceexample'

function TfIdfMongo() {
}

TfIdfMongo.prototype.__getConnection = function(callback) {

    var tfIdfMongo = this

    MongoClient.connect(url,function (err, connection) {
        if (err) {
            callback(err)
        } else {

            var documents = connection.collection('documents');

            documents.count({}, function (error, numOfDocs) {
                if (numOfDocs == 0) {
                    tfIdfMongo.__insertTestRecords(connection,function(err) {
                        callback(err,connection)
                    })
                } else {
                    callback(undefined,connection)
                }
            })
        }
    })
}

TfIdfMongo.prototype.__insertTestRecords = function(connection,callback) {

    var documents = connection.collection('documents');

    var latestDocuments = [
        {words:["latest","sprint"]},
        {words:["lair","laugh","fault"]},
        {words:["lemma","on"]}
    ]

    documents.insert(latestDocuments,
        function(err,result) {
            callback(err)
        })
}

This is going to be a two-phase process.
In the first phase we have to calculate the idf.
To do so we will issue a map-reduce job.
The term variable has to be passed in order to be used by the map-reduce process.
In order to use a dynamic variable in map reduce we will employ the scope parameter.

TfIdfMongo.prototype.__idf = function(connection,term,callback) {

    var tfIdfMongo = this

    var documents = connection.collection('documents');

    documents.mapReduce(
        tfIdfMongo.__mapIdf,
        tfIdfMongo.__reduceIdf,
        {
            scope: {permterm:term},
            out: "tfidf_results"
        },
        function(err,results) {

            if(err) {
                callback(err)
            }

            results.findOne({},function(err,result) {

                if(err) {
                    callback(err)
                }

                if(result.value.occurrence==0) {
                    return;
                }

                var idf = Math.log(result.value.count/result.value.occurrence)

                callback(undefined,idf)
            })
        }
    )
}

TfIdfMongo.prototype.__mapIdf = function() {

    var term = permterm

    var occurrence = 0

    for (var i = 0; i < this.words.length; i++) {

        var word = this.words[i]

        if (word.indexOf(term) != -1) {

            if (occurrence <=0 ) {

                occurrence = 1
            }
        }
    }

     emit("idf", occurrence)
}

TfIdfMongo.prototype.__reduceIdf = function(key,values) {

    var result = {count:values.length,occurrence:0}

    for(var i=0;i<values.length;i++) {

        if(values[i]==1) {
            result.occurrence += 1
        }
    }

    return result
}

The result of this phase is a single number: the idf value for the term.

In the second phase we have to calculate the tf for each document and multiply the result by the idf value calculated previously.
Map reduce will be used in this case too.
This time, through the scope parameter, we are going to pass not only the term that we search for but also the idf value.

TfIdfMongo.prototype.__tf = function(connection,term,idf,callback) {

    var tfIdfMongo = this

    var documents = connection.collection('documents');

    documents.mapReduce(
        tfIdfMongo.__mapTf,
        function(key,values) {

            return values
        },
        {
            scope: {permTerm:term,permIdf:idf},
            out: "tf_results"
        },
        function(err,results) {

            if(err) {
                callback(err)
            }

            results.find({},function(err,docs) {

                if(err) {
                    callback(err)
                }

                docs.toArray(function (err,documents) {
                    callback(err,documents)
                })
            })
        }
    )
}

TfIdfMongo.prototype.__mapTf = function() {

    var term = permTerm
    var idf = permIdf

    var occurrence = 0

    for(var i=0;i<this.words.length;i++) {

        var word = this.words[i]
        if (word.indexOf(term) != -1) {

            occurrence += 1
        }
    }

    var weight = idf*(occurrence/this.words.length)

    emit(this, weight)
}

We will implement the tfIdf function which combines the two previous steps.
The function takes the term that we need to search for as an argument.

var MongoClient = require('mongodb').MongoClient
Server = require('mongodb').Server

var url = 'mongodb://localhost:27017/mapreduceexample'

function TfIdfMongo() {
}

TfIdfMongo.prototype.tfIdf = function(term,callback) {

    var tfIdfMongo = this

    tfIdfMongo.__getConnection(function(err,connection) {

        if(err) {
            callback(err)
        }

        tfIdfMongo.__idf(connection,term,function(err,idf) {

            if(err) {
                callback(err)
            }

            tfIdfMongo.__tf(connection,term,idf,function(err,documents) {

                if(err) {
                    callback(err)
                }

                connection.close()

                callback(undefined,documents)

            })

        })
    })
}

TfIdfMongo.prototype.__getConnection = function(callback) {

    var tfIdfMongo = this

    MongoClient.connect(url,function (err, connection) {
        if (err) {
            callback(err)
        } else {

            var documents = connection.collection('documents');

            documents.count({}, function (error, numOfDocs) {
                if (numOfDocs == 0) {
                    tfIdfMongo.__insertTestRecords(connection,function(err) {
                        callback(err,connection)
                    })
                } else {
                    callback(undefined,connection)
                }
            })
        }
    })
}

TfIdfMongo.prototype.__insertTestRecords = function(connection,callback) {

    var documents = connection.collection('documents');

    var latestDocuments = [
        {words:["latest","sprint"]},
        {words:["lair","laugh","fault"]},
        {words:["lemma","on"]}
    ]

    documents.insert(latestDocuments,
        function(err,result) {
            callback(err)
        })

}

TfIdfMongo.prototype.__tf = function(connection,term,idf,callback) {

    var tfIdfMongo = this

    var documents = connection.collection('documents');

    documents.mapReduce(
        tfIdfMongo.__mapTf,
        function(key,values) {

            return values
        },
        {
            scope: {permTerm:term,permIdf:idf},
            out: "tf_results"
        },
        function(err,results) {

            if(err) {
                callback(err)
            }

            results.find({},function(err,docs) {

                if(err) {
                    callback(err)
                }

                docs.toArray(function (err,documents) {
                    callback(err,documents)
                })
            })
        }
    )
}

TfIdfMongo.prototype.__mapTf = function() {

    var term = permTerm
    var idf = permIdf

    var occurrence = 0

    for(var i=0;i<this.words.length;i++) {

        var word = this.words[i]
        if (word.indexOf(term) != -1) {

            occurrence += 1
        }
    }

    var weight = idf*(occurrence/this.words.length)

    emit(this, weight)
}


TfIdfMongo.prototype.__idf = function(connection,term,callback) {

    var tfIdfMongo = this

    var documents = connection.collection('documents');

    documents.mapReduce(
        tfIdfMongo.__mapIdf,
        tfIdfMongo.__reduceIdf,
        {
            scope: {permterm:term},
            out: "tfidf_results"
        },
        function(err,results) {

            if(err) {
                callback(err)
            }

            results.findOne({},function(err,result) {

                if(err) {
                    callback(err)
                }

                if(result.value.occurrence==0) {
                    return;
                }

                var idf = Math.log(result.value.count/result.value.occurrence)

                callback(undefined,idf)
            })
        }
    )
}

TfIdfMongo.prototype.__mapIdf = function() {

    var term = permterm

    var occurrence = 0

    for (var i = 0; i < this.words.length; i++) {

        var word = this.words[i]

        if (word.indexOf(term) != -1) {

            if (occurrence <=0 ) {

                occurrence = 1
            }
        }
    }

     emit("idf", occurrence)
}

TfIdfMongo.prototype.__reduceIdf = function(key,values) {

    var result = {count:values.length,occurrence:0}

    for(var i=0;i<values.length;i++) {

        if(values[i]==1) {
            result.occurrence += 1
        }
    }

    return result
}



module.exports = TfIdfMongo

Our test showcase

var TfIdf = require('./TfIdf')
var TfIdfMongo = require('./TfIdfMongo')

var tfIdf = new TfIdf()

var docs = [["latest","sprint"],["lair","laugh","fault"],["lemma","on"]]


console.log("The results are "+JSON.stringify(tfIdf.tfIdf(docs,"la")))

var tfIdfMongo = new TfIdfMongo()

tfIdfMongo.tfIdf("la",function(err,results) {


    console.log("The results are "+JSON.stringify(results))

})

And we get the same results for both cases.

The results are [{"weight":0.2027325540540822,"doc":["latest","sprint"]},{"weight":0.27031007207210955,"doc":["lair","laugh","fault"]},{"weight":0,"doc":["lemma","on"]}]
The results are [{"_id":{"_id":"55f46602947446bb1a7f7933","words":["latest","sprint"]},"value":0.2027325540540822},{"_id":{"_id":"55f46602947446bb1a7f7934","words":["lair","laugh","fault"]},"value":0.27031007207210955},{"_id":{"_id":"55f46602947446bb1a7f7935","words":["lemma","on"]},"value":0}]

Why should I use map reduce for this problem?

The tf-idf ranking problem is a problem that involves computations which can be parallelised.
The sequential approach could be an option for other environments, but for Node.js there are many drawbacks.
Node.js is a single-threaded environment; it was not designed for heavy computational tasks.
Its magic has to do with how well it executes I/O operations.
Consider the scenario of a large data set.
While the Node.js process is busy executing the time-consuming computations, incoming requests will not be served appropriately.
There are some workarounds for solutions based on Node.js, such as spawning extra Node.js processes and implementing a way of communication between them.

To sum up

Map reduce fits the ranking problem well. Not only does it take away much of the computational overhead, it also removes a good part of the implementation overhead.

Integrate Redis to a Node.js project

In this article we are going to add caching to our Node.js application using Redis.

We will install the recommended client for node.js as mentioned on the official Redis page.

npm install redis --save

Next we shall create our client connection

var redis = require('redis')

var hostname = '127.0.0.1'
var port = '6379'
var password = 'yourpassword'

var client = redis.createClient(port,hostname,{no_ready_check: true})

client.auth(password)

client.on('connect', function() {
        console.log('Client was connected')
})

The password provided on auth will be stashed and used on every connect.
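
It is also a good idea to register an error listener, so that connection or command failures are logged instead of crashing the process; a minimal sketch:

client.on('error', function(err) {
    console.log('Something went wrong: ' + err)
})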

Let’s see some basic actions

SMEMBERS actions


client.sadd('aset', 2)
client.sadd('aset', 1)
client.sadd('aset', 5)

client.smembers('aset',function(err,reply) {
    console.log(reply)
})

Get and Set actions

client.set('akey', "This is the value")

client.get('akey',function(err,reply) {
    console.log(reply)
})
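
Since the use case here is caching, it is often useful to set a key together with an expiration time; a minimal sketch using SETEX (time to live in seconds) could look like this:

client.setex('cachedkey', 60, 'This value expires in one minute')

client.get('cachedkey', function(err, reply) {
    console.log(reply)
})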

Hash value actions

client.hset('hashone', 'fieldone', 'some value');
client.hset('hashone', 'fieldtwo', 'another value');

var hash = 'hashone'

client.hkeys(hash, function (err, fields) {

    fields.forEach(function(field,i) {

        console.log('The field is '+field)

        client.hget(hash,field,function (err, value) {
            console.log('The content is '+value)
        })
    })

});

List actions

client.rpush(['mylist', 'firstItem', 'secondItem'], function(err, listsize) {
    console.log(listsize)
});

client.lrange('mylist',0,-1,function(err,values) {
    console.log(values)
})

client.lpop('mylist',function(err,value) {
    console.log('Got '+value)
})


Conclusion

The Redis client for Node.js is pretty straightforward and easy to get started with.

Keep in mind that one connection is adequate. Redis is single threaded, therefore there is no need to open multiple connections.

You can refer to the github page for more examples and usage showcases.

Why I use Node.js

It has been a while since I took up Node.js development.
My early impressions were pretty positive, and after some months of Node.js development I have to say that I am amazed.

There are many reasons to continue using Node.js for my projects.

Great for applications with heavy I/O

The asynchronous nature of Node.js enables you to stay focused on your implementation. You don't need any extra configuration, as you do with multithreaded environments. Also, long I/O operations don't have to be dispatched to any custom mechanism, which helps you avoid extra development costs.
Provided your application is mostly based on I/O and less on computation, chances are that Node.js will work for you.

Bootstrapping

Node.js gives one of the smoothest bootstrapping experiences I have had with a programming environment. All you need is to have node and npm installed. There are libraries for almost everything you need, and the configuration required is minimal.
Also, getting started with the implementation of your Node.js application takes no time at all.

Setup simplicity

All it takes to set up your project is your source code and a package.json file with your dependencies.

Make use of your JavaScript skills

Although I am a backend developer, I have had to write some JavaScript in the past. The same applies to other developers I know, even the most backend-focused ones. Learning a language is an investment. You can make more out of your JavaScript knowledge by using Node.js on your projects, provided it suits their needs.

Not another web framework

Node.js is not just another web application framework. Due to its asynchronous nature and efficiency, it can be applied to many problems. For example, it can be used as the glue among components of your infrastructure. Also, thanks to its heavy development, you don't just have a runtime environment; you have a whole ecosystem with tools that apply to a wide variety of problems.

Conclusion

Node.js is already part of the tools that I use on a daily basis. However, it should be used wisely, making sure that it fits your project's nature.
It can be really challenging to deal with callback hell, but in exchange you get a pretty promising and fast-growing ecosystem.