Thursday, 16 August 2018

How to send an email using the AWS SES service?

Background

In many business use cases, you need to notify your users via email. AWS provides its own service for this: AWS SES (Simple Email Service). In this post, we will see how to configure the AWS SES service and send out emails.

If you want to skip the text post below, you can refer to the video I have created covering the same thing -





Setting up configurations in AWS SES service

First and foremost, go to the SES service in your AWS console. The AWS SES service is available only in a limited set of regions, so please choose one of those regions -



The next thing you need to do is verify your email address - the address from which you want to send out emails. For this, go to the "Email addresses" section in the left panel and click the "Verify a New Email Address" button. Enter your email address and click "Verify this email address". Once done, you should get an email to confirm and validate your email address.



Click on the confirmation link. Once done, you should see your email address marked as verified in the SES "Email addresses" section.
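If you prefer to trigger this verification programmatically, the AWS SDK for Node.js also exposes a verifyEmailIdentity call. A minimal sketch, assuming the SDK is installed and credentials are configured (the email address below is a placeholder) -

const aws = require('aws-sdk');
const ses = new aws.SES({ region: 'us-east-1' });

// Sends the verification mail to the given address; you still have to click
// the confirmation link in that mail to complete the verification.
ses.verifyEmailIdentity({ EmailAddress: 'you@example.com' }, function (err, data) {
    if (err) {
        console.error('Failed to start verification : ' + err);
    } else {
        console.log('Verification email sent : ' + JSON.stringify(data));
    }
});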




The next thing we need to set up is the email template. The email template is the content that you want to send in your email. This includes a unique template name, subject, body etc. Also, this can be parameterized, which means you can have placeholders in the template that get replaced at runtime. A sample template would be -


{
    "Template": {
        "TemplateName": "testtemplate",
        "SubjectPart": "Test email for username {{userName}}",
        "TextPart": "Test email body!",
        "HtmlPart": "<html>\r\n\r\n<head>\r\n    <title>Test Title<\/title>\r\n<\/head>\r\n\r\n<body>\r\n    <h1> Username : {{userName}}<\/h1>\r\n<\/body>\r\n\r\n<\/html>"
    }
}

HtmlPart is the JSON-escaped version of your actual HTML content. TemplateName is a unique name for your template that you will reference later from your code to send the mail. SubjectPart is the email subject. For more details, refer to the AWS SES developer guide.
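An easy way to produce the JSON-escaped HtmlPart is to let Node.js do the escaping for you - a small sketch (the HTML below is just the sample markup used in the template above) -

// JSON.stringify escapes the quotes and newlines so that the resulting string
// can be pasted directly into the template JSON as the HtmlPart value.
const html = '<html>\r\n<head>\r\n    <title>Test Title</title>\r\n</head>\r\n<body>\r\n    <h1> Username : {{userName}}</h1>\r\n</body>\r\n</html>';
console.log(JSON.stringify(html));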

Notice how we have added {{userName}} in the subject and HTML body. This is a placeholder that will be replaced at runtime - we will see how shortly. Once your template is set up, it is time to add it to AWS SES. To do this you need to run the following AWS CLI command -
  • aws ses create-template --cli-input-json fileb://test_template.json --region us-east-1 --profile osfg

If you have already created the template and want to update it you can execute -

  • aws ses update-template --cli-input-json fileb://test_template.json --region us-east-1 --profile osfg

NOTE: You need to have the AWS CLI configured on your machine and set up with your AWS profile. I have multiple profiles on my machine, which is why you see the --profile argument. If you just have one, you do not have to mention it.
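If you have not configured the AWS CLI yet, you can set up a profile with the aws configure command (the profile name below simply matches the one used in this post) -
  • aws configure --profile osfg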

Once you have executed this command, you can see your template under the "Email Templates" section of the SES service.
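You can also confirm this from code - the SDK exposes a getTemplate call. A minimal sketch, assuming the template name used above -

const aws = require('aws-sdk');
const ses = new aws.SES({ region: 'us-east-1' });

// Fetches the template we just created so we can confirm it was registered.
ses.getTemplate({ TemplateName: 'testtemplate' }, function (err, data) {
    if (err) {
        console.error('Could not fetch template : ' + err);
    } else {
        console.log('Template : ' + JSON.stringify(data.Template));
    }
});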




Now that you have your email verified and your template set up, you can send an email. Following is the Node.js code to do it -

const aws = require('aws-sdk');
aws.config.loadFromPath('./config.json');
const ses = new aws.SES();

/**
 * author: athakur
 * aws ses create-template --cli-input-json fileb://test_template.json --region us-east-1 --profile osfg
 * aws ses update-template --cli-input-json fileb://test_template.json --region us-east-1 --profile osfg
 * @param {*} callback 
 */
var sendMail = function (callback) {
    var params = {};

    var destination = {
        "ToAddresses": ["opensourceforgeeks@gmail.com"]
    };
    var templateData = {};
    templateData.userName = "athakur";

    params.Source = "opensourceforgeeks@gmail.com";
    params.Destination = destination;
    params.Template = "testtemplate";
    params.TemplateData = JSON.stringify(templateData);

    ses.sendTemplatedEmail(params, function (email_err, email_data) {
        if (email_err) {
            console.error("Failed to send the email : " + email_err);
            callback(email_err, email_data)
        } else {
            console.info("Successfully sent the email : " + JSON.stringify(email_data));
            callback(null, email_data);
        }
    })
}


sendMail(function(err, data){
    if(err){
        console.log("send mail failed");
    }
    else {
        console.log("Send mail succedded");
    }
})


In the above code, you can see that we provide ToAddresses, which is an array field, so you can give multiple addresses here. I have used the same email address as the one we used in the Source (the from address - the one we verified in the first step of the setup). Also, notice the template name that we have provided - it is the same name we configured in the JSON template file. Finally, notice the placeholder value we have provided for userName. This will be replaced by "athakur" in the template when the email is sent.
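A quick note on the aws.config.loadFromPath('./config.json') call at the top of the code: that file holds your credentials and region. It typically looks like the following (the values are placeholders; pick a region where SES is available) -

{
    "accessKeyId": "YOUR_ACCESS_KEY_ID",
    "secretAccessKey": "YOUR_SECRET_ACCESS_KEY",
    "region": "us-east-1"
}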

You can find this code snippet in my GitHub gist-



And you should get an email. In my case, it looks as follows -




NOTE: If you are not able to see the mail, check your spam folder. Since it is sent by the AWS SES service, it might land there. Mark it as not spam and you should be good going forward.




Saturday, 11 August 2018

Working with async module in Node.js - Part 2 (async.eachSeries)

Background

This is a continuation of my previous post on the async module in Node.js -
In the last post, we saw how neatly we can write code using async.waterfall. In this post, I will show you a similar trick with the async.eachSeries method.


Without async

Let's consider the following scenario.

We get a list of event types and we need to run them in a loop, in order. Following is sample code to do that -

/**
 * Program to demonstrate the async Node.js module
 * @author : athakur
 */

const async = require("async");

var startDemo = function () {
    console.log("Starting Demo");
    var events = ["Download", "Process", "Upload", "Del"];
    events.forEach(event => {
        process(event, function () {
            console.log("Got callback for : " + event);
        });
    });
    console.log("Ending Demo");
}

var process = function (processType, callback) {
    var processTime = 0;
    switch (processType) {
        case "Download":
            processTime = 2000;
            break;
        case "Process":
            processTime = 1000;
            break;
        case "Upload":
            processTime = 4000;
            break;
        case "Del":
            processTime = 100;
            break;
    }
    setTimeout(function () {
        console.log("Finished : " + processType);
        callback();
    }, processTime);
}

startDemo();


And the output is -



Wait, what happened here? We looped over our events array in order -

  1. Download
  2. Process
  3. Upload
  4. Delete
But that is clearly not what happened, mostly because each process takes a different amount of time to finish. In a real-world scenario, these steps would be API calls or disk I/O whose duration we cannot predict. Let's see how async comes to our rescue.

Change the main code as follows -

var startDemo = function () {
    console.log("Starting Demo");
    var events = ["Download", "Process", "Upload", "Del"];
    async.eachSeries(events, function (event, callback) {
        process(event, callback);
    }, function(err){
        if(!err){
            console.log("Ending Demo");
        }
    });
}

and rerun the code.




Now you can see all of them executed in a series.



NOTE1: async.forEach (an alias for async.each) runs through the array in parallel, meaning it will start the iterator function for every item in the array immediately; when all of them have called their callback (2nd argument), the final callback (3rd argument) is invoked.

NOTE2: async.eachSeries runs through the array in series, meaning it will run the iterator function for one item at a time and wait for it to call its callback (2nd argument) before moving to the next item; when all are done, the final callback (3rd argument) is invoked.
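To see the difference, here is a minimal sketch of the same demo using async.each (parallel) instead of async.eachSeries - the per-item order is no longer guaranteed, but the final callback still fires only after every item has finished -

async.each(events, function (event, callback) {
    process(event, callback);
}, function (err) {
    if (!err) {
        // Runs once every event has called back, regardless of finish order.
        console.log("Ending Demo");
    }
});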




Working with async module in Node.js - Part 1 (async.waterfall)

Background

JavaScript, as we know, runs on a single thread, and to avoid blocking on operations like a network call or disk I/O we use asynchronous callbacks. This essentially means tasks run in the background and we get a callback when the operation is done. If you wish to understand more about how JavaScript works, please watch the video below -




So as you must know by now, a lot of asynchronous work happens. Sometimes we need to order these operations to suit our business logic. Consider a simple example -

  1. Download an mp4 file from the server
  2. Convert it into a gif locally
  3. Upload the gif back to the server
  4. Delete the mp4 from the server

Now in this example, all 4 steps are asynchronous operations. Also, we cannot move to the next step until the previous one is finished.

The Callback way

We can use callbacks for this. Something like below -
/**
 * Program to demonstrate the async Node.js module
 * @author : athakur
 */

var startDemo = function () {
    console.log("Starting Demo");
    download(function () {
        process(function () {
            upload(function () {
                del(function () {
                    console.log("Ending Demo");
                })
            })
        })
    });
}

var download = function (callback) {
    console.log("Starting download");
    delay();
    console.log("Finishing download");
    callback();
}

var process = function (callback) {
    console.log("Starting process");
    delay();
    console.log("Finishing process");
    callback();
}

var upload = function (callback) {
    console.log("Starting upload");
    delay();
    console.log("Finishing upload");
    callback();
}

var del = function (callback) {
    console.log("Starting del");
    delay();
    console.log("Finishing del");
    callback();
}

var delay = function () {
    var i, j;
    for (i = 0; i < 100000; i++) {
        for (j = 0; j < 10000; j++) {
            //do nothing
        }
    }
}

startDemo();

This prints the below output -




As you can see, this is the mess created by cascading callbacks (often called callback hell). Also, if at any point there is an error, we need to pass it back through the callbacks, and each step would need an if-else check to handle it. Let us see how much easier it is with the async module.


The async way




First, you need to install the async Node.js module. To do so, run the following command -

  • npm install async


Now using async our program becomes -

/**
 * Program to demonstrate the async Node.js module
 * @author : athakur
 */

const async = require("async");

var startDemo = function () {
    console.log("Starting Demo");
    async.waterfall([download,
        process,
        upload,
        del],
        function (err, data) {
            if(err) {
                console.log("There was an error in the demo : " + err);
            }else {
                console.log("Demo complete successfully");
            }
        });
}


NOTE:  I have not included the actual methods again to avoid repetition.

And the output is -



Notice how much cleaner our code has become. Async takes care of chaining the callbacks. It also provides a mechanism to send data from one step to another: if you call the callback with data, it will be available in the next step.

If you change our download and process methods slightly, like below -

var download = function (callback) {
    console.log("Starting download");
    delay();
    console.log("Finishing download");
    callback(null, "Downloaded file URL");
}

var process = function (data, callback) {
    console.log("Starting process");
    console.log("In process method. Data from download: " + data);
    delay();
    console.log("Finishing process");
    callback();
}


and re-execute, we will get -




Also, it provides a cleaner way of handling errors. Let's say our download fails. Change the download method as below -

var download = function (callback) {
    console.log("Starting download");
    delay();
    console.log("Finishing download");
    callback("Error in download", "Downloaded file URL");
}


This essentially means our download failed - the error is the 1st argument in the callback. In this scenario, the next steps in the waterfall will not be executed, and the final callback that you provided after the waterfall array is invoked with the error. On executing the above you will get -



That's all for the async module's waterfall. In the next post, I will show you how we can use the async module for looping over an array of data -




Friday, 10 August 2018

How to make HTTP/HTTPS request in Node.js

Background

Many times you need to make an external API call from your Node.js application. A simple example would be calling an API Gateway endpoint from your Node.js based Lambda in your AWS environment. In this post, I will show you two ways to do this -
  1. The standard http/https library
  2. The request library


Using the standard http/https library

Let's see how we can use the standard https library to make an API request.

To use standard http or https library you can simply import the module using -

const https = require("https");
const http = require("http");

Now you can use these to make your http or https calls. A sample is provided below -


/**
 * Node.js code to demonstrate https calls.
 * @author athakur
 */
const https = require("https");

var startDemo = function () {
    console.log("starting demo code");
    executeHttps(function (err, data) {
        if (err) {
            console.log("Error in running demo code");
        }
        else {
            console.log("Successfully ending demo code");
        }

    });
}


var executeHttps = function (callback) {
    var options = {
        hostname: "opensourceforgeeks.blogspot.com",
        port: 443,
        path: "/p/about-me.html",
        method: 'GET',
        headers: {
            'Content-Type': 'text/html'
        }
    };

    var req = https.request(options, function (res) {
        console.log("Status for API call : " + res.statusCode);
        console.log("Headers for API call : " + JSON.stringify(res.headers));
        res.setEncoding('utf8');

        var body = '';

        res.on('data', function (chunk) {
            body = body + chunk;
        });

        res.on('end', function () {
            console.log("Body for API call : " + body.length);
            if (res.statusCode != 200) {
                console.log("API call failed with response code " + res.statusCode);
                callback("API call failed with response code " + res.statusCode, null)
            } else {
                console.log("Got response : " + body.length);
                callback(null, body);
            }
        });
    });

    req.on('error', function (e) {
        console.log("problem with API call : " + e.message);
        callback(e, null);
    });

    req.end();
}


startDemo();


You can get this code on my Github gist as well - https://gist.github.com/aniket91/2f6e92a005eb2a62fcc1ddd39aac6dc2


To execute, just run the following (assuming your file name is test.js) -
  • node test.js


You can similarly do it for http as well. For http you need to -
  • use const http = require("http");
  • change the port to 80 in options
  • call http.request instead of https.request
NOTE: Notice how we are accumulating the body in the 'data' event listener and then processing the response in the 'end' event. I have seen developers processing data in the 'data' listener only, which is not correct. It will break if your response is large and arrives in multiple chunks.

Similarly, you can execute a POST request. Here post_data is the string payload you want to send. Change the options to -

    var options = {
        hostname: "opensourceforgeeks.blogspot.com",
        port: 80,
        path: "/p/about-me.html",
        method: 'POST',
        headers: {
            'Content-Type': 'text/html',
            'Content-Length': Buffer.byteLength(post_data)
        }
    };



and then, before you close the request with req.end(), add -
  • req.write(post_data);
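Putting those pieces together, a minimal POST sketch could look like the following (the hostname, path and payload below are placeholders, not a real endpoint) -

const https = require("https");

var post_data = JSON.stringify({ message: "hello" });

var options = {
    hostname: "example.com",
    port: 443,
    path: "/api/echo",
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(post_data)
    }
};

var req = https.request(options, function (res) {
    res.setEncoding('utf8');
    var body = '';
    res.on('data', function (chunk) {
        body = body + chunk;
    });
    res.on('end', function () {
        console.log("Status : " + res.statusCode + ", body length : " + body.length);
    });
});

req.on('error', function (e) {
    console.log("problem with API call : " + e.message);
});

// Write the payload before closing the request.
req.write(post_data);
req.end();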



Now that we have seen how the http/https modules work in Node.js, let's see how the request module works.

Using the request library

The request module is more user-friendly to use.


To begin with, you need to install the request module dependency, since it is not a standard library that comes with Node.js. To install it, execute the following command -
  • npm install request


You should see a folder called node_modules created in your directory, with the request module and its dependencies installed inside.

You can import request module using -
  • const request = require('request');

Then you can use it as follows -

/**
 * Node.js code to demonstrate https calls.
 * @author athakur
 */
const request = require('request');

var startDemo = function () {
    console.log("starting demo code");
    executeRequest(function (err, data) {
        if (err) {
            console.log("Error in running demo code");
        }
        else {
            console.log("Successfully ending demo code");
        }

    });
}


var executeRequest = function(callback){
    var headers = {};
    headers['Content-type'] = 'text/html';
    request({
        url: 'https://opensourceforgeeks.blogspot.com//p/about-me.html',
        method: 'GET',
        headers: headers
    }, function (err, response, body) {
        if (err) {
            console.error('API failed : ', err)
            callback(err)
        } else {
            console.log("Statuscode: " + response.statusCode);
            console.log("Got response : " + body.length);
            callback(null, body);

        }
    })
}



And the output is -


You can execute a POST call as well by changing the method type to POST and supplying a body (here payload is the data you want to send). For example -

    request({
        url: 'https://opensourceforgeeks.blogspot.com//p/about-me.html',
        method: 'POST',
        body: payload,
        headers: headers
    }, function (err, response, body) {
        if (err) {
            console.error('API failed : ', err)
            callback(err)
        } else {
            console.log("Statuscode: " + response.statusCode);
            console.log("Got response : " + body.length);
            callback(null, body);

        }
    });
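If your payload is a JavaScript object, the request module can also take care of the JSON serialization for you via the json option - a minimal sketch (the URL is a placeholder) -

    request({
        url: 'https://example.com/api/echo',
        method: 'POST',
        json: true,    // serializes the body to JSON and parses the JSON response
        body: { message: 'hello' }
    }, function (err, response, body) {
        if (err) {
            console.error('API failed : ', err)
        } else {
            console.log("Statuscode: " + response.statusCode);
            console.log("Got response : " + JSON.stringify(body));
        }
    });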




Hope this helps! Let me know if you have any questions. Thanks.








Monday, 16 July 2018

How to fix "Unable to find a region via the region provider chain" exception with AWS SDK

Background

Recently I was working on a task that needed to upload and download a test file to and from an AWS S3 bucket. For this, I used the AWS Java SDK. The functionality seemed to work fine until the code was deployed to production. In this post, I will try to explain why this exception occurs and how we can fix it.



Code that I had used for the test upload was as follows -

  try {
   BasicAWSCredentials awsCreds = new BasicAWSCredentials("YourAccessKeyId", "YourSecretKey");
   AmazonS3 s3client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                    .build();
   s3client.putObject(storage.getBucketName(), "test.txt", "Done!");

  } catch (AmazonServiceException ase) {
   logger.error(
     "Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason. storage : {}",
     storage, ase);
   logger.error("Error Message:    {}", ase.getMessage());
   logger.error("HTTP Status Code: {}", ase.getStatusCode());
   logger.error("AWS Error Code:   {}", ase.getErrorCode());
   logger.error("Error Type:       {}", ase.getErrorType());
   logger.error("Request ID:       {}", ase.getRequestId());
  } catch (AmazonClientException ace) {
   logger.error(
     "Caught an AmazonClientException, which means the client encountered an internal error while trying to communicate with S3, such as not being able to access the network storage : {}",
     storage, ace);
   logger.error("Error Message: {}", ace.getMessage());
  } catch (Exception ex) {
   logger.error("Got exception while testing upload to S3", ex);
  }

I had tested this code by deploying it on one of our EC2 instances and it worked fine. I will explain later why it worked on the EC2 instance, but there is a major flaw in the above code that eventually ends up throwing an exception - "Unable to find a region via the region provider chain"

Relevant Stacktrace:

com.amazonaws.SdkClientException: Unable to find a region via the region provider chain.
Must provide an explicit region in the builder or setup environment to supply a region.
 at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:386)
 at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:352)



The problem

The problem with the above piece of code is that we are not supplying the AmazonS3ClientBuilder with a region. Even though S3 is a global service, the buckets that you create are region specific. Each region has its own endpoint, and hence the AWS S3 SDK needs to know the region in order to determine the endpoint it should make the API call to.

The Solution

The simplest solution is to pass the region to the AmazonS3ClientBuilder explicitly, as follows -

AmazonS3 s3client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .withRegion(Regions.US_EAST_1)
        .build();


NOTE: After you build a client with the builder, it's immutable and the region cannot be changed. If you are working with multiple AWS Regions for the same service, you should create multiple clients—one per region.

Understanding the solution

The problem and the solution might appear simple but there are various aspects you need to understand about setting region in your S3 builder.

There are various ways the AWS SDK can infer the region to use if you do not explicitly provide it yourself.

NOTE: You must use the client builders to have the SDK automatically detect the region your code is running in. This does not apply if you are using a client constructor, in which case the SDK's default region is used.

If you don't explicitly set a region using the withRegion methods, the SDK consults the default region provider chain to try and determine the region to use.


  1. Any explicit region set by using withRegion or setRegion on the builder itself takes precedence over anything else.
  2. The AWS_REGION environment variable is checked. If it's set, that region is used to configure the client.
    1. NOTE: This environment variable is set by the Lambda container.
  3. The SDK checks the AWS shared configuration file (usually located at ~/.aws/config). If the region property is present, the SDK uses it (a sample config file is shown after this list).
    1. The AWS_CONFIG_FILE environment variable can be used to customize the location of the shared config file.
    2. The AWS_PROFILE environment variable or the aws.profile system property can be used to customize the profile that is loaded by the SDK.
  4. The SDK attempts to use the Amazon EC2 instance metadata service to determine the region of the currently running Amazon EC2 instance.

If the SDK still hasn't found a region by this point, client creation fails with an exception - and we saw above which exception it throws :) That is the reason I am writing this post. Also, you must have realized why it worked for me on the EC2 instance: as per point number 4, the SDK uses the Amazon EC2 instance metadata service to determine the region. Hope this helps!





