Testing has become mandatory (and there are no more excuses)

There are many posts out there about the value of automated testing and why it is a must throughout the life cycle of a software product.

Most people and teams totally agree with this statement, so what goes wrong?

It is just a prototype

It seems like a valid statement, but if we consider how many times a prototype has made it into production, it is not so valid.
In fact most prototype codebases end up in production, because that is the goal of a prototype.
Adding tests during the prototype phase makes absolute sense.

Too difficult to test

Yes, there are cases where testing is really difficult due to limitations. For example, the Android and iPhone emulators do not give you the ability
to set accelerometer events, or you use a service which provides you with no test utilities or testing environment at all.
Even if mocking does not make for an absolute test case, it can really assist you in creating a specification of how things work.
The other scenario is testing being hard to implement due to the codebase.
Consider this an indicator that things are not simple enough. If it is just your codebase then you are lucky: it is up to you to make it more testable and simpler.

Too much to do, too little time

This is our weakest spot and the one we are most prone to succumb to.
Suppose you develop a smartphone application. With manual testing, each mistake will cost you 3-5 minutes: starting the emulator (or, even worse, deploying to a physical device), loading the application and pressing some buttons to create events, and in case of an error repeating and losing another 3-5 minutes.
If you develop a server application, things can get even worse: connect to the server, upload the application, check that the upload was successful, manually test that it works, and in case of an error repeat and lose time again.
Also keep in mind that in projects with more than one developer involved, mistakes are more likely to happen.
We tend to believe that the best case will happen and everything will work as expected.
It might work most of the time, but when a problem occurs you lose big. We end up hooked on an “It’s ok, all it needs is another version upload” mode.
Next time this happens, count how much time you end up losing by testing manually until everything is ok.
Then estimate how much time tests will cost you and how much time you win in such cases.
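To make that estimate concrete, here is a rough back-of-the-envelope sketch; the numbers are illustrative assumptions, not measurements:

```javascript
// Illustrative numbers: how many failed manual debug cycles does it take
// before writing an automated test pays for itself?
var manualCycleMinutes = 4;    // start emulator, deploy, click through the app
var automatedRunMinutes = 0.5; // one automated test run
var testWritingMinutes = 40;   // one-off cost of writing the test

var savedPerCycle = manualCycleMinutes - automatedRunMinutes;
var breakEvenCycles = Math.ceil(testWritingMinutes / savedPerCycle);

console.log('Break-even after ' + breakEvenCycles + ' debug cycles');
```

With these (made-up) numbers the test pays for itself after about a dozen debug cycles, which a busy project burns through quickly.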

Everything changes so rapidly

Yes, a project in its initial phase will have a completely different codebase in two months.
But when big changes are made, more errors are likely to occur. A minimal amount of tests ensures that basic features continue to work.

All in all, testing is not as hard as it used to be.
Nowadays almost every service or utility that we use comes with some test utilities.
As the software industry becomes more challenging, codebases without tests will become extinct.
Pick whatever methodology you want but just do it. Your life will become much easier.

Scheduling jobs on Node.js with node-schedule

Batch processing is a big part of today's software development. The business world runs on batch, from bank statements to promotional emails.

Node.js has some good libraries for such cases.

Node Schedule is a lightweight cron-like scheduler for Node.js.

npm install node-schedule

In case you are used to cron and the cron expression format, it will be pretty easy for you.
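For reference, a standard cron expression has five fields: minute, hour, day of month, month and day of week. A tiny dependency-free sketch of how such an expression breaks down (parseCron is a hypothetical helper for illustration, not part of node-schedule, which does the real parsing for you):

```javascript
// Split a five-field cron expression into named fields.
function parseCron(expression) {
  var fields = expression.trim().split(/\s+/);
  if (fields.length !== 5) {
    throw new Error('Expected 5 cron fields, got ' + fields.length);
  }
  return {
    minute: fields[0],
    hour: fields[1],
    dayOfMonth: fields[2],
    month: fields[3],
    dayOfWeek: fields[4]
  };
}

// '0 0 1 * *' fires at 00:00 on the first day of every month.
console.log(parseCron('0 0 1 * *'));
```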


var scheduler = require('node-schedule');

var monthlyJob = scheduler.scheduleJob('0 0 1 * *', function() {
  console.log('I run on the first day of the month');
});

But you also have a JavaScript object approach

var scheduler = require('node-schedule');

var rule = new scheduler.RecurrenceRule();
rule.hour = 7;
rule.minute = 0;
rule.dayOfWeek = new scheduler.Range(0, 6);

var dailyJob = scheduler.scheduleJob(rule, function() {
  console.log('I run every day at 7:00');
});

You can also submit a job by giving a specific date

var scheduler = require('node-schedule');

// Date months are zero-based, so 0 is January.
var date = new Date(2017, 0, 1, 0, 0, 0);
var newYearJob = scheduler.scheduleJob(date, function() {
    console.log("Happy new year");
});
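One gotcha worth remembering here: JavaScript Date months are zero-based, so an off-by-one in the month argument silently shifts your job by a whole month:

```javascript
// Months are zero-based: 0 is January, 11 is December.
var newYear = new Date(2017, 0, 1, 0, 0, 0);
console.log(newYear.getFullYear() + '-' + (newYear.getMonth() + 1)); // 2017-1

// new Date(2017, 1, 1) is actually the 1st of February, not New Year.
var notNewYear = new Date(2017, 1, 1);
console.log(notNewYear.getMonth()); // 1 (February)
```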

However, in case your job is no longer needed, you can cancel it pretty easily


newYearJob.cancel();

Systemd and Upstart Services

Most Linux servers that I use are either Debian-based or Red Hat-based.

A common task is adding daemon services.

Suppose that we want to start a Tomcat application on startup.

First we shall install Tomcat

mkdir /opt/tomcat
groupadd tomcat
useradd -s /bin/false -g tomcat -d /opt/tomcat tomcat
wget http://apache.cc.uoc.gr/tomcat/tomcat-8/v8.0.33/bin/apache-tomcat-8.0.33.tar.gz
tar xvf apache-tomcat-8.0.33.tar.gz
mv apache-tomcat-8.0.33/* /opt/tomcat
rm -r apache-tomcat-8.0.33 apache-tomcat-8.0.33.tar.gz 
cd /opt/tomcat
chgrp -R tomcat conf
chmod g+rwx conf
chmod g+r conf/*
chown -R tomcat work/ temp/ logs/

In case of systemd we should add a tomcat.service file under /etc/systemd/system.
The file /etc/systemd/system/tomcat.service shall contain

[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target

[Service]
Type=forking

Environment=JAVA_HOME=/usr/java/default
Environment=CATALINA_PID=/opt/tomcat/temp/tomcat.pid
Environment=CATALINA_HOME=/opt/tomcat
Environment=CATALINA_BASE=/opt/tomcat
Environment='CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC'
Environment='JAVA_OPTS=-Duser.timezone=UTC -Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom'

ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/bin/kill -15 $MAINPID

User=tomcat
Group=tomcat

[Install]
WantedBy=multi-user.target

I specified the unit to start after the syslog and network targets are up.
With User and Group we specify the user and the group that the process should run as.
Since the service type is forking, systemd treats Tomcat as a daemon and stops it by sending SIGTERM to the main PID.

To enable and run the service you have to issue

systemctl enable tomcat
systemctl start tomcat

In case of Upstart we should create a tomcat.conf file in /etc/init/.
The content of /etc/init/tomcat.conf

description     "Tomcat instance"
author          "Emmanouil Gkatziouras"

respawn
respawn limit 2 5

start on runlevel [2345]
stop on runlevel [!2345]

setuid tomcat
setgid tomcat

env CATALINA_HOME=/opt/tomcat

script
        $CATALINA_HOME/bin/catalina.sh run
end script

post-stop script
        rm -rf $CATALINA_HOME/temp/*
end script

It will start on runlevels 2, 3, 4 or 5.
The user and group it will execute as are tomcat.
After Tomcat is stopped, the post-stop script block will remove the temp files.
Instead of starting the process in the background as a daemon, Upstart manages the process in the foreground.

To start just issue

sudo initctl start tomcat

Implement a DynamoDB Docker Image

When you use DynamoDB and have good test coverage of your codebase, chances are that you use DynamoDB Local a lot.
Docker comes in really handy for distributing a pre-configured DynamoDB Local image among your dev teams or your continuous integration server.

I will use a CentOS image.

We will need Java.
I prefer the Oracle JDK, therefore I have to accept the license and download the Java RPM locally.

In case you want OpenJDK you can just install it through yum.

So we create the Dockerfile.
I will use the default port, which is 8000, so I will expose port 8000.
jdk-8u91-linux-x64.rpm is the Oracle Java RPM I downloaded previously.

FROM centos

ADD jdk-8u91-linux-x64.rpm /

RUN rpm -Uvh jdk-8u91-linux-x64.rpm

RUN rm /jdk-8u91-linux-x64.rpm

RUN mkdir /opt/DynamoDB

RUN curl -O -L http://dynamodb-local.s3-website-us-west-2.amazonaws.com/dynamodb_local_latest.tar.gz

RUN mv dynamodb_local_latest.tar.gz /opt/DynamoDB/

RUN cd /opt/DynamoDB && tar xvf dynamodb_local_latest.tar.gz && rm dynamodb_local_latest.tar.gz

EXPOSE 8000

ENTRYPOINT ["java","-Djava.library.path=/opt/DynamoDB/DynamoDBLocal_lib","-jar","/opt/DynamoDB/DynamoDBLocal.jar","-sharedDb"]

Then we build our image

docker build -t dynamodb .

Now we run the container in the background

docker run -p 8000:8000 -d dynamodb

Implement a SciPy Stack Docker Image

SciPy is a powerful Python library, but it has many dependencies, including Fortran.
So running your SciPy code in a Docker container makes absolute sense.

We will use a private registry

docker run -d -p 5000:5000 --name registry registry:2

I will use a CentOS image.
CentOS is a very popular Linux distribution based on Red Hat, which is a commercial Linux distribution. Oracle Linux and Amazon Linux are also based on Red Hat.

docker pull centos
docker tag centos localhost:5000/centos
docker push localhost:5000/centos

Then we start a container

docker run -i -t --name centoscontainer localhost:5000/centos /bin/bash

We install all binary dependencies

yum install -y epel-release
yum -y update
yum -y groupinstall "Development Tools"
yum -y install python-devel
yum -y install blas --enablerepo=epel
yum -y install lapack --enablerepo=epel
yum -y install Cython --enablerepo=epel
yum -y install python-pip

Then we install the SciPy stack

pip install boto3
pip install numpy
pip install pandas
pip install scipy
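Before committing, it is worth sanity-checking that the stack actually imports inside the container; a small plain-Python sketch (the package names are assumed to match the pip installs above):

```python
def stack_versions(names=("numpy", "pandas", "scipy")):
    """Return a mapping of package name -> version string, or None if missing."""
    versions = {}
    for name in names:
        try:
            versions[name] = __import__(name).__version__
        except ImportError:
            versions[name] = None
    return versions

# Prints the installed versions, or None for anything that failed to install.
print(stack_versions())
```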

And we are ready. Now we should proceed with committing the image.

docker commit -m 'Added scipy stack' -a "Emmanouil Gkatziouras" 4954f603d93b localhost:5000/scipy
docker push localhost:5000/scipy

Now we are ready to run our SciPy-enabled container.

docker run -t -i localhost:5000/scipy /bin/bash

Last but not least, we clean up our registry.

docker stop registry && docker rm -v registry

Writing unit tests for Sails.js app using mocha

Sails.js is a wonderful Node.js framework.

Writing unit tests for Sails.js using mocha is pretty easy.
In the before hook of a mocha test you have to lift the Sails application, and in the after hook you have to lower it.

var Sails = require('sails');

describe('SailsMochaTest',function() {

    before(function(done) {
        this.timeout(50000);

        Sails.lift({},
            function(err,server) {
                if(err) {
                    done(err);
                } else {
                    done();
                }
            });
    });

    it('testmethod',function(done) {

        Sails.services.sampleService.fetchRecords()
            .then(function(results) {
                done();
            })
            .catch(function(err) {
                done(err);
            });
    });

    after(function(done) {
        Sails.lower(done);
    });
});

This works pretty well; however, there is a gotcha. In case you want to run multiple test files, for example using the --recursive argument on mocha, you will get an exception.

Cannot load or lift an app after it has already been lowered. 
You can make a new app instance with:
var SailsApp = require('sails').Sails;
var sails = new SailsApp();

For a case like this you can follow the recommended solution and lift a new Sails app instance.

var SailsApp = require('sails').Sails;

describe('SailsMochaTest',function() {
    
    var sails = new SailsApp();

    before(function(done) {
        sails.lift({},
            function(err,server) {
                if(err) {
                    done(err);
                } else {
                    done();
                }
            });
    });

    it('testmethod',function(done) {

        sails.services.sampleService.fetchRecords()
            .then(function(results) {
                done();
            })
            .catch(function(err) {
                done(err);
            });
    });

    after(function(done) {
        sails.lower(done);
    });
});

AWS SQS and Spring JMS integration

Amazon Web Services provides us with the SQS messaging service. The Java SDK for SQS is compatible with JMS.

Therefore, instead of using SQS through a simple Spring bean, we can integrate it with the JMS integration framework that Spring provides.

I will use Spring Boot and Gradle.

The gradle file

group 'com.gkatzioura.sqstesting'
version '1.0-SNAPSHOT'

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.2.7.RELEASE")
    }
}

apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'spring-boot'

sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    compile "org.springframework.boot:spring-boot-starter-thymeleaf"
    compile "com.amazonaws:aws-java-sdk:1.10.55"
    compile "org.springframework:spring-jms"
    compile "com.amazonaws:amazon-sqs-java-messaging-lib:1.0.0"
    compile 'org.slf4j:slf4j-api:1.6.6'
    compile 'ch.qos.logback:logback-classic:1.0.13'
    testCompile "junit:junit:4.11"
}

The application class

package com.gkatzioura.sqstesting;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/**
 * Created by gkatziourasemmanouil on 8/26/15.
 */
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}

And the application.yml file

queue:
  endpoint: http://localhost:9324
  name: sample-queue

I specify a localhost endpoint since I use ElasticMQ.

The SQSConfig class is a configuration class that makes an SQS client available as a Spring bean.

package com.gkatzioura.sqstesting.config;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQSClient;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * Created by gkatziourasemmanouil on 25/02/16.
 */
@Configuration
public class SQSConfig {

    @Value("${queue.endpoint}")
    private String endpoint;

    @Value("${queue.name}")
    private String queueName;

    @Bean
    public AmazonSQSClient createSQSClient() {

        AmazonSQSClient amazonSQSClient = new AmazonSQSClient(new BasicAWSCredentials("",""));
        amazonSQSClient.setEndpoint(endpoint);

        amazonSQSClient.createQueue(queueName);

        return amazonSQSClient;
    }

}

The SQSListener is a listener class implementing the JMS MessageListener interface.

package com.gkatzioura.sqstesting.listeners;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

/**
 * Created by gkatziourasemmanouil on 25/02/16.
 */
@Component
public class SQSListener implements MessageListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(SQSListener.class);

    public void onMessage(Message message) {

        TextMessage textMessage = (TextMessage) message;

        try {
            LOGGER.info("Received message "+ textMessage.getText());
        } catch (JMSException e) {
            LOGGER.error("Error processing message ",e);
        }
    }
}

The JMSSQSConfig class contains configuration for the JmsTemplate and the DefaultMessageListenerContainer. Through the JMSSQSConfig class we register the JMS MessageListeners.

package com.gkatzioura.sqstesting.config;

import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.auth.*;
import com.gkatzioura.sqstesting.listeners.SQSListener;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

/**
 * Created by gkatziourasemmanouil on 25/02/16.
 */
@Configuration
public class JMSSQSConfig {

    @Value("${queue.endpoint}")
    private String endpoint;

    @Value("${queue.name}")
    private String queueName;

    @Autowired
    private SQSListener sqsListener;

    @Bean
    public DefaultMessageListenerContainer jmsListenerContainer() {

        SQSConnectionFactory sqsConnectionFactory = SQSConnectionFactory.builder()
                .withAWSCredentialsProvider(awsCredentialsProvider)
                .withEndpoint(endpoint)
                .withNumberOfMessagesToPrefetch(10).build();

        DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
        dmlc.setConnectionFactory(sqsConnectionFactory);
        dmlc.setDestinationName(queueName);

        dmlc.setMessageListener(sqsListener);

        return dmlc;
    }

    @Bean
    public JmsTemplate createJMSTemplate() {

        SQSConnectionFactory sqsConnectionFactory = SQSConnectionFactory.builder()
                .withAWSCredentialsProvider(awsCredentialsProvider)
                .withEndpoint(endpoint)
                .withNumberOfMessagesToPrefetch(10).build();

        JmsTemplate jmsTemplate = new JmsTemplate(sqsConnectionFactory);
        jmsTemplate.setDefaultDestinationName(queueName);
        jmsTemplate.setDeliveryPersistent(false);


        return jmsTemplate;
    }

    private final AWSCredentialsProvider awsCredentialsProvider = new AWSCredentialsProvider() {
        @Override
        public AWSCredentials getCredentials() {
            return new BasicAWSCredentials("", "");
        }

        @Override
        public void refresh() {

        }
    };

}

MessageService is a service that uses the JmsTemplate in order to send messages to the queue.

package com.gkatzioura.sqstesting;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;
import org.springframework.stereotype.Service;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;

/**
 * Created by gkatziourasemmanouil on 28/02/16.
 */
@Service
public class MessageService {

    @Autowired
    private JmsTemplate jmsTemplate;

    @Value("${queue.name}")
    private String queueName;

    private static final Logger LOGGER = LoggerFactory.getLogger(MessageService.class);

    public void sendMessage(final String message) {

        jmsTemplate.send(queueName, new MessageCreator() {
            @Override
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage(message);
            }
        });
    }

}

Last but not least, a controller is added. The controller sends the POST request body to the queue as a message.

package com.gkatzioura.sqstesting;

import com.amazonaws.util.IOUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.io.InputStream;

/**
 * Created by gkatziourasemmanouil on 24/02/16.
 */
@Controller
@RequestMapping("/main")
public class MainController {

    @Autowired
    private MessageService messageService;

    @RequestMapping(value = "/write",method = RequestMethod.POST)
    public void write(HttpServletRequest servletRequest,HttpServletResponse servletResponse) throws IOException {

        InputStream inputStream = servletRequest.getInputStream();

        String message = IOUtils.toString(inputStream);

        messageService.sendMessage(message);
    }

}

You can download the source code here.