Tuesday, 16 October 2018

Lerna Tutorial - Managing Monorepos with Lerna

Background

Lerna is a tool that allows you to maintain multiple npm packages within one repository. There are multiple benefits of using such an approach - one repo, multiple packages. This paradigm is called a monorepo (mono - single, repo - repository).
To summarise, the pros are -
  • Single lint, build, test and release process.
  • Easy to coordinate changes across modules. 
  • A single place to report issues.
  • Easier to set up a development environment.
  • Tests across modules are run together, which makes it easier to find bugs that touch multiple modules.
Now that you understand what a monorepo is, let's come back to Lerna. Lerna is a tool that helps you manage your monorepos. In this post, I am going to show you a complete tutorial on how you can use lerna to manage a custom multi-package repo that we will create.


 Lerna Tutorial -  Managing Monorepos with Lerna

First of all, you need to install lerna. Lerna is a CLI. To install it, execute the following command -
  • npm install --global lerna
This should install lerna globally on your machine. You can run the following command to check the version of lerna installed -
  • lerna -v
 For me, it's 3.4.1 at the time of writing this post. Now let's create a directory for our lerna demo project.  Let's call it lerna-demo. You can create it with the following command -
  • mkdir lerna-demo 
Now navigate inside this directory -
  • cd lerna-demo
Now execute the following command -
  • lerna init
This should create the basic structure of the monorepo. Take a look at the picture below to understand its structure.


It does the following things -
  1. Creates a packages folder. All packages go under this.
  2. Creates a package.json at the root. This defines global dependencies. It has a dependency on lerna by default.
  3. Creates a lerna.json at the root. This identifies the lerna repo root (see the sample below).
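For reference, the generated lerna.json at the time of writing (lerna 3.4.1) looks something like this -

{
  "packages": [
    "packages/*"
  ],
  "version": "0.0.0"
}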
Now let's go to the packages folder and start creating our packages. We will then see how we can link and use them. So navigate to the packages directory -
  • cd packages

Package - AdditionModule

Let's create a package that takes care of addition. Let's call it AdditionModule. Create a directory for it inside the packages folder and execute the following commands -
  • mkdir AdditionModule 
  • cd AdditionModule 
  • npm init -y
This should create a file called package.json inside the AdditionModule directory. Now, in the same folder, create a file called index.js and add the following content to it -

module.exports.add = function(x,y){
    return x + y;
}


Save the file. This basically exposes the add method to any other package that has a dependency on this. Your 1st package is done. Let's create one more package for subtraction.

Package - SubtractionModule

Run similar commands inside the packages folder -

  • mkdir SubtractionModule
  • cd SubtractionModule
  • npm init -y
Now, in the SubtractionModule folder, create a file called index.js and add the following code to it -

module.exports.subtract = function(x,y){
    return x - y;
}


Save the file. This basically exposes the subtract method to any other package that has a dependency on this. Now let's create a package that depends on AdditionModule and SubtractionModule and can use the add and subtract functions.

Package - Calc

Our final package - let's call it Calc - will have a dependency on the AdditionModule and SubtractionModule packages. So let's create the package first -

  • mkdir Calc
  • cd Calc
  • npm init -y
Now create a file called index.js and add the following content to it -

var add = require('AdditionModule');
var subtract = require('SubtractionModule');

var sum = add.add(2,3);
var diff = subtract.subtract(3,2);

console.log("Sum: " + sum + " Diff: " + diff);

Save the file. Now open package.json and add our dependencies to it -

  "dependencies": {
      "AdditionModule": "1.0.0",
      "SubtractionModule": "1.0.0"
   },


So your entire package.json looks like -

{
  "name": "Calc",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
      "AdditionModule": "1.0.0",
      "SubtractionModule": "1.0.0"
   },
}

Now you are all set up. Your directory structure should look like below -



Now it's time to use lerna to link the packages. Go to the lerna-demo directory and execute the following command -
  • lerna bootstrap
This should do all the linking for you. It creates a node_modules directory in the Calc package and adds symlinks to AdditionModule and SubtractionModule. Your directory structure would now look like -


Now you can simply run Calc's index.js as follows -

  • node packages/Calc/index.js
And you should get the expected output -
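Given the code in Calc/index.js above, that output is -

Sum: 5 Diff: 1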




The important thing here is the linking that lerna does for you, so that you do not have to worry about it.


For complete details on all available commands see - https://github.com/lerna/lerna


Sunday, 14 October 2018

What is the purpose of Node.js module.exports and how do you use it?

Background

If you have ever used nodejs and npm, you must have encountered the module.exports syntax, or perhaps just exports. If you have, you might be aware that it exposes certain functionality of the code so that other outside code can reference and execute it. Though this is the short version of what the exports syntax does, in this post we will see a lot more of how it behaves, with some examples to help us understand it better.




What is the purpose of Node.js module.exports and how do you use it?

The module.exports object is created by the Module system. module.exports is the object that's actually returned as the result of a require call. exports is just an alias to module.exports, so you could use either. There are some things that you would need to take care of if you are using just exports, but more on that later. For now, let's take a simple example and see how it actually works.

First, create a file called calc.js  with the following content -

var add = function (x, y) {
    return x + y;
}

var subtract = function (x, y) {
    return x - y;
}

module.exports = {
    add: add,
    subtract: subtract
};



This code simply defines two functions - add and subtract - and exports them. Notice how these functions are attached to module.exports at the end. We will revisit this in a moment. Now let's try to use this code from a separate node file.

Now create another file - let's call it demo.js - and add the following code to it -

var calc = require('./calc.js');

console.log("2 + 3 = " + calc.add(2,3));
console.log("3 - 2 = " + calc.subtract(3,2));

And finally, run the demo.js file as -

  • node demo.js
It should give you the expected results -
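Given the code above, the result is -

2 + 3 = 5
3 - 2 = 1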




If you understood the above logic, you now have a basic idea of how exports works. To revisit our earlier statement - "module.exports is the object that's actually returned as the result of a require call." In this case, it returns a map of add and subtract which point to the corresponding functions that you can invoke.

NOTE: Notice the "./" in the require statement. This is required to tell node that it is a local package.

Another way to use the same module.exports would be as follows -

module.exports.add = function (x, y) {
    return x + y;
}

module.exports.subtract = function (x, y) {
    return x - y;
}


And if you now run demo.js again you would still see the same output. Both are just different ways to expose your code outside.

NOTE: Note that the assignment to module.exports must be done immediately. It cannot be done in a callback. So the following will not work -


var add = function (x, y) {
    return x + y;
}

var subtract = function (x, y) {
    return x - y;
}

setTimeout(() => {
    module.exports = { add: add, subtract: subtract };
}, 0);



This will fail with the below error -

TypeError: calc.add is not a function
    at Object.<anonymous> (demo.js:3:31)
    at Module._compile (module.js:653:30)
    at Object.Module._extensions..js (module.js:664:10)
    at Module.load (module.js:566:32)
    at tryModuleLoad (module.js:506:12)
    at Function.Module._load (module.js:498:3)
    at Function.Module.runMain (module.js:694:10)
    at startup (bootstrap_node.js:204:16)
    at bootstrap_node.js:625:3



So make sure your exports are done immediately and not in any callbacks.



Now let's come back to the exports keyword that we said is just an alias to module.exports. Let us rewrite the code with just exports now.  Let's say your code is as follows -


exports.add = function (x, y) {
    return x + y;
}

exports.subtract = function (x, y) {
    return x - y;
}


Now run demo.js and you should still get your desired output. Like I said earlier, exports is just an alias to module.exports. Now let's try our 1st approach (assigning a new object) with just exports -

var add = function (x, y) {
    return x + y;
}

var subtract = function (x, y) {
    return x - y;
}

exports = {
    add: add,
    subtract: subtract
}


And if you run demo.js again you will get an error -

TypeError: calc.add is not a function
    at Object.<anonymous> (demo.js:3:31)
    at Module._compile (module.js:653:30)
    at Object.Module._extensions..js (module.js:664:10)
    at Module.load (module.js:566:32)
    at tryModuleLoad (module.js:506:12)
    at Function.Module._load (module.js:498:3)
    at Function.Module.runMain (module.js:694:10)
    at startup (bootstrap_node.js:204:16)
    at bootstrap_node.js:625:3

So why did this happen? This brings us to another crucial fact -

If you overwrite exports then it will no longer refer to module.exports. The exports variable is available within a module's file-level scope, and is assigned the value of module.exports before the module is evaluated. So if you overwrite exports it is no longer an alias to module.exports and hence it no longer works.

So make sure you do not overwrite the exports variable. It is always safer to just use module.exports.

To summarize, module.exports conceptually works as follows -
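Below is a simplified mental model (a sketch, not Node's literal implementation) -

// Node effectively wraps every module like this:
var module = { exports: {} };
var exports = module.exports;  // exports starts off as an alias to module.exports

// ... your module code runs here ...
// exports.add = fn        -> attaches to module.exports, visible to require()
// module.exports = {...}  -> replaces the exported object, visible to require()
// exports = {...}         -> rebinds only the local variable, NOT visible

// whatever module.exports references at the end is what require() returns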




Thursday, 11 October 2018

How to install Eclipse plugin manually from a local zip file

Background

Eclipse is an integrated development environment (IDE) used in computer programming and is the most widely used Java IDE. It contains a base workspace and an extensible plug-in system for customizing the environment. You can download Eclipse from their official site.
It also supports installing plugins to customize your IDE environment. If you have already used Eclipse, you might know how to install a new plugin. All you need is the link of the site where the plugin is hosted and you are all set. You can do this from -
  • Help -> Install new Software



In this post, I will show you how to install a plugin downloaded as a zip file from the local disk.

 How to install Eclipse plugin manually from a local zip file

First, make sure your zip file is a valid Eclipse plugin. To verify that, you can simply view the contents of the zip. You should see two folders -
  1. features,
  2. plugins
Newer versions of Eclipse might also have the following files -
  1. content.jar, 
  2. artifacts.jar
Eg.



If the above files exist, it means this is an archived update site. Once you have verified that your plugin zip file is correct, it's time to get it installed.

Now go back to
  • "Help" -> "Install New Software"
Now click on "Add"


 Now click on "Archive"



 Now select the zip file you downloaded earlier. You can leave the name blank. Now click on Ok.

You can now see the plugin details. Select it for installation. Make sure you unselect "Contact all update sites...." checkbox as it can sometimes create problems.

Click "Next", accept terms and conditions and click "Finish".


You should now see a warning that you are installing unsigned content.



You can view the details if you want. Finally, click OK and restart Eclipse for the changes to take effect.

 Once installed you can go to -
  • "Help" -> "Installation details" 
to see the installed plugin details.



 You can also uninstall the plugin from here. Just click on the "Uninstall" button at the bottom.


Saturday, 6 October 2018

How to do iterative tasks with a delay in AWS Step Functions

Background

Very recently I did a video series on building a web application using AWS serverless architecture.
In one of the videos, I covered the AWS Step Functions service that can be used to create distributed applications using visual workflows.

Step functions use Lambdas internally to execute the business logic within the provided workflow. But there are limitations that can arise. Eg. -
  1. The business logic that the Lambda is executing might take more than the maximum allowed time of 5 minutes.
  2. The API that you are calling from the Lambda might have rate limiting.
One straightforward way would be to break down the Lambda into multiple Lambdas and execute the business logic. But that is not always possible. For example, let's say you are importing some data from a site through an API and then saving it in your DB. You may not always have control over the number of items returned and processed, so it is always a good idea to handle it on your end. Fortunately, step functions provide a way to handle such scenarios. In this post, I will try to explain the same.

 How to do iterative tasks with a delay in AWS Step Functions

 Let us assume we have the following flow -
  1. Get remote data from an API. Let's say the API that we call to process each item allows only 50 calls per minute, so we can process only 50 items in a minute.
  2. Let's assume we get more than 50 items in the fetch API call. Now we have to batch 50 items at a time and process them.
  3. Wait for 60 seconds and then process the next batch of 50.
  4. Do this till all the fetched items are processed.


To do this we can use the following state machine definition -


 {
    "Comment": "Step function to import external data",
    "StartAt": "FetchDataFromAPI",
    "States": {
        "FetchDataFromAPI": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:499222264523:function:fetch-data",
            "Next": "ProcessData"
        },

        "ProcessData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:499222264523:function:process-data",
            "Next": "ProcessMoreChoiceState"
        },

        "ProcessMoreChoiceState": {
            "Type": "Choice",
            "Choices": [{
                "Variable": "$.done",
                "BooleanEquals": false,
                "Next": "WaitAndProcessMore"
            }],
            "Default": "Done"
        },

        "WaitAndProcessMore": {
            "Type": "Wait",
            "Seconds": 60,
            "Next": "ProcessData"
        },

        "Done": {
            "Type": "Pass",
            "End": true
        }
    }
}


Visually it looks like below -




If you have gone through the video link shared earlier, most of this would have made sense to you by now. The only difference here is the "Wait" state that waits for 60 seconds before the next batch is processed.

You will have to send the entire array and the number of items processed so far as output to the "ProcessMoreChoiceState" and subsequently to the "WaitAndProcessMore" state, so that it can be sent back to the "ProcessData" state again to process the remaining entries. If all entries are processed, we just set the "done" variable to true, which transitions to the "Done" state, finishing the state machine execution.
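For illustration, below is a minimal sketch of what the process-data Lambda handler could look like. The field names items and processedCount are hypothetical - the state machine above only relies on the "done" flag -

exports.handler = function (event, context, callback) {
    // items and processedCount are illustrative fields carried through the state machine
    var items = event.items || [];
    var processed = event.processedCount || 0;
    var batch = items.slice(processed, processed + 50); // process 50 items per iteration

    // ... call the rate-limited API for each item in the batch here ...

    var newCount = processed + batch.length;
    // "done" drives the ProcessMoreChoiceState choice state defined above
    callback(null, {
        items: items,
        processedCount: newCount,
        done: newCount >= items.length
    });
};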

Hope this helps. If you have any questions add it in the comments below. Thanks.

Friday, 5 October 2018

Building a web application using AWS serverless architecture

Background

The last couple of weeks I have tried to record Youtube videos to demonstrate how to build a web application using AWS serverless architecture.




Serverless essentially means you do not have to worry about the physical hardware, the operating system, or the runtimes. It's all taken care of by the service offered under the serverless architecture. This has essentially given way to "Function as a Service".



In this post, I will try to summarize what I have covered in those videos over the last 2 weeks so that anyone looking to put it all together can refer to this.
If you do not wish to proceed and are just interested in the video links, here's the list -

  1. AWS Serverless 101(Building a web application): https://youtu.be/I5bW0Oi0tY4
  2. AWS Serverless 102(Understanding the UI code): https://youtu.be/dQJCr0r_RuM
  3. AWS Serverless 103(AWS Lambda): https://youtu.be/Kn86Lq29IMA
  4. AWS Serverless 104(API Gateway): https://youtu.be/yKI_UCYblio
  5. AWS Serverless 105(CI/CD with code pipeline): https://youtu.be/GEWrpZuBEkQ
  6. AWS Serverless 106(Cloudwatch Events): https://youtu.be/9gUB2n0hV7Q
  7. AWS Serverless 107(Step functions): https://youtu.be/rL6EqaMbC5U

Github repo: https://github.com/aniket91/AWS_Serverless

I will go over each of the topics and explain what has been covered in each of the videos. If you think a specific topic interests you, you can pick that up. However, note that some videos might have references to setup done in previous videos. So it is recommended to go through them in the order provided. 


AWS Serverless 101(Building a web application)




This video is primarily to introduce you to the world of serverless. It covers a bit of cloud deployment history and information about serverless architecture - what and why. It also covers the prerequisites for this series and what exactly we are trying to build here. If you are completely new to AWS or the AWS serverless concept, this is a good place to start.

This web application has a UI with a button, on click of which an API call is made to get all the image files from an S3 bucket and render them on the UI. The API call goes to API Gateway, which forwards the request to Lambda. The Lambda executes to get all the image files from S3 and returns them to API Gateway, which in turn returns the response to the UI code (javascript). The response is then parsed as a JSON array and the images are rendered on the UI.

AWS Serverless 102(Understanding the UI code)




This video covers the UI component of the web application. This is in plain HTML and javascript. UI code is hosted on a static website in an S3 bucket.

AWS Serverless 103(AWS Lambda)



AWS Lambda forms the basic unit of AWS serverless architecture. This video talks about AWS Lambda - what is Lambda, How to configure it, How to use it and finally code to get the list of files from an S3 bucket.

AWS Serverless 104(API Gateway) 




API Gateway is an AWS service to create and expose APIs. This video talks about the API Gateway service - how you can create APIs, resources, and methods, how we can integrate API Gateway with Lambda, and how to create stages and see the corresponding endpoints. It also talks about CORS and logging in the API Gateway service.

AWS Serverless 105(CI/CD with code pipeline)




This video shows how we can automate backend code deployment using the AWS CodePipeline service. This includes using the CodePipeline, CodeBuild, and CloudFormation services.

AWS Serverless 106(Cloudwatch Events)



This video shows how you can use CloudWatch events to trigger or schedule periodic jobs, just like cron jobs in a Linux based system. We are using this service to do a periodic cleanup of non-image files in the S3 bucket.

 AWS Serverless 107(Step functions)





This video covers how to use state machines of the AWS Step Functions service to build distributed applications using visual workflows. We are using this service to do asynchronous processing of various image files like jpg, png etc.

Thursday, 13 September 2018

How to use API Gateway stage variables to call specific Lambda alias?

Background

When you are using API Gateway, you create stages - for example for dev, QA, and production. Sometimes you might want to call a different alias of the same Lambda for each stage you have created. Stage variables let you do that. Let's see how we can achieve that in this post.


 If you do not wish to read the post below you can just view the youtube video that covers the same flow.




How to use API Gateway stage variables to call specific Lambda alias?

I am going to assume you -
  • Have already created an API
  • Deployed it to a stage called "dev"
  • You have a Lambda created. You have published a new version, created a new alias, and pointed the alias to this new version.

Once you have the above setup, go to your API Gateway and create a resource/method as you like. Once done, select the integration type as "Lambda" and in place of the lambda function add -
  • MyTestLambda:${stageVariables.lambdaAlias}

Here MyTestLambda is the Lambda name and stageVariables.lambdaAlias references a stage variable named lambdaAlias. We will see how we can set this later. Once done, save this configuration.


On save, AWS will prompt you saying you are using a stage variable and you need to grant the appropriate permission -

You defined your Lambda function as a stage variable. Please ensure that you have the appropriate Function Policy on all functions you will use. You can do this by running the below AWS CLI command for each function, replacing the stage variable in the function-name parameter with the necessary function name. 



Execute this command in your console.
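With placeholder values (the account ID, API ID, and statement ID below are illustrative; replace the stage variable with the alias, e.g. dev), the command looks something like -

aws lambda add-permission \
    --function-name "arn:aws:lambda:us-east-1:123456789012:function:MyTestLambda:dev" \
    --source-arn "arn:aws:execute-api:us-east-1:123456789012:abcde12345/*/GET/" \
    --principal apigateway.amazonaws.com \
    --statement-id apigateway-dev-test \
    --action lambda:InvokeFunction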


NOTE: You should have the AWS CLI configured along with a profile that corresponds to the AWS account you are trying this on. If you have multiple AWS profiles configured, you might want to use the --profile and --region parameters as well. See the screenshot below for the command and response.




You can now test your method invocation by manually providing the lambdaAlias stage variable value, which is "dev" in our case.



You should see the appropriate response once executed. Now let's see how we can add it to our actual stages. First, click on your API and deploy it to the stage you need it in. For me, it is the stage called "dev". In this, go to "Stage Variables".


Now create a stage variable with the name "lambdaAlias" and value "dev" and save it. Now you are all set up. You can now invoke the actual API URL and you should get back the response returned by the Lambda alias function.




Wednesday, 12 September 2018

How to enable CloudWatch Logs for APIs in API Gateway

Background

AWS API Gateway lets you create APIs that can scale. In this post, I will show you how to turn on CloudWatch logging for your API Gateway.

 If you do not wish to read the post below you can just view the youtube video that covers the same flow.



How to enable CloudWatch Logs for APIs in API Gateway

I am assuming you already have an API created in API gateway and have deployed it in a stage. 


Before you turn on CloudWatch logging for your API deployed in a stage, you need to give API Gateway a role with permission to send logs to CloudWatch. To do so, first create a role in the IAM service for API Gateway with permission to send logs to CloudWatch: go to IAM, then Roles, and click on "Create role".








Next, select the permission that it shows - the one that allows API gateway to publish logs to cloud watch. Click review and create this role.



Once the role is created open it and copy the role ARN.




Now go to API gateway and go to Settings. Here you should see "CloudWatch log role ARN" field. Paste the copied ARN into this and save.





Once this is set up, all that is left is to turn on CloudWatch logging for your API. To turn it on, go to the stage where your API is deployed. Next, go to the "Logs/Tracing" tab and select the checkbox that says "Enable CloudWatch Logs". You can also optionally select "Log full requests/responses data".




Now you can go to CloudWatch -> Logs and see the logs corresponding to each stage of your API Gateway.






NOTES:

  1.  If you successfully enabled CloudWatch Logs for API Gateway, you will see the entry /aws/apigateway/welcome listed in the Log Groups section of the right pane.
  2. You might need to redeploy your API after enabling CloudWatch logs from the API Gateway console before your logs are visible in the CloudWatch console.
  3.  Your API will have a Log Group titled API-Gateway-Execution-Logs_api-id/ that contains numerous log streams.


Tuesday, 21 August 2018

How to delete private EC2 AMI from AWS

Background

Most of the time you would want to customize the EC2 image that you run. It helps you quickly spin up a new instance or put it behind an auto scaling group. To do this, you can create an image out of your currently running EC2 instance. But this is an incremental process, so you may want to create a new image after making some changes to an EC2 instance spun up from the current image. Once the new image is created, you can delete the previous AMI/image. In this post, I will tell you the steps to do so.


How to delete private EC2 AMI from AWS

To delete a private AMI follow the steps below -

  1. Open the Amazon EC2 console  (https://console.aws.amazon.com/ec2/).
  2. Select the region from the drop-down at the top based on which region your AMI is located in.
  3. In the navigation pane, click AMIs.
  4. Select the AMI, click Actions, and then click Deregister. When prompted for confirmation, click Continue.
    • NOTE: It may take a few minutes before the console removes the AMI from the list. Choose Refresh to refresh the status.
  5. In the navigation pane, click Snapshots.
  6. Select the snapshot, click Actions, and then click Delete. When prompted for confirmation, click Yes, Delete.
  7. Terminate any EC2 instances that might be running with old AMI.
  8. Delete any EBS volumes for those EC2 instances if "delete on termination" is not set.

When you create an AMI, it creates a snapshot for each volume associated with that image. So let's say your EC2 instance has 2 EBS volumes, a C drive and a D drive, of 30 GB and 50 GB respectively, then you would see 2 snapshots for them under the snapshots section. Your AMI size will be the sum of the individual snapshot sizes (i.e. 30 GB + 50 GB = 80 GB). To successfully delete the AMI you need to deregister the AMI and then delete the snapshots manually. And finally, when both are done, terminate the EC2 instances that you might have running with the old AMI.
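If you prefer the AWS CLI over the console, the deregister and snapshot deletion steps (with placeholder IDs) look like -
  • aws ec2 deregister-image --image-id ami-xxxxxxxx
  • aws ec2 delete-snapshot --snapshot-id snap-xxxxxxxx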








NOTES:
  1. When you deregister an Amazon EBS-backed AMI, it doesn't affect the snapshot that was created for the root volume of the instance during the AMI creation process. You'll continue to incur storage costs for this snapshot. Therefore, if you are finished with the snapshot, you should delete it.
  2. You can deregister an AMI when you have finished using it. After you deregister an AMI, you can't use it to launch new instances.
  3. When you deregister an AMI, it doesn't affect any instances that you've already launched from the AMI. You'll continue to incur usage costs for these instances. Therefore, if you are finished with these instances, you should terminate them.
  4. AWS will not let you delete a snapshot associated with an AMI before you deregister the AMI.
  5. Delete EBS volumes (unless they are set to delete on termination, in which case they would be removed on terminating the EC2 instance). This isn't necessary for S3-backed instances.


Thursday, 16 August 2018

How to send an email using the AWS SES service?

Background

In many business use cases, you need to notify your users over email. AWS provides its own service to do so. You can use AWS SES (Simple Email Service) to send out emails to your users. In this post, we will see how we can configure the AWS SES service and send out emails.

If you want to skip all the text below, you can refer to the video I have created covering the same thing -





Setting up configurations in AWS SES service

First and foremost, go to the SES service in your AWS console. There are limited regions where the AWS SES service is available. Please choose one of those regions -



The next thing that you need to do is verify your email address - the address from which you want to send out emails. For this, go to the "Email Addresses" section in the left panel. Now click on the "Verify a New Email Address" button. Enter your email address and click "Verify This Email Address". Once done, you should get an email to confirm and validate your email address.



Click on the confirmation link. Once done you should see your email address as verified in SES "Email addresses" section.




The next thing that we need to set up is the email template. The email template is the content that you want to send in your email. This includes a unique template name, subject, body etc. Also, this can be parameterized, which means you can have placeholders in the template that are replaced at runtime. A sample template would be -


{
    "Template": {
        "TemplateName": "testtemplate",
        "SubjectPart": "Test email for username {{userName}}",
        "TextPart": "Test email body!",
        "HtmlPart": "<html>\r\n\r\n<head>\r\n    <title>Test Title<\/title>\r\n<\/head>\r\n\r\n<body>\r\n    <h1> Username : {{userName}}<\/h1>\r\n<\/body>\r\n\r\n<\/html>"
    }
}

HtmlPart is the JSON-escaped version of your actual HTML content. TemplateName is the unique name of your template that you will reference later from your code to send the mail. SubjectPart is the email subject. For more details refer to the AWS SES documentation.

Notice how we have added {{userName}} in the subject and body. This is a placeholder that will be replaced at runtime. We will see how in some time. Once your template is set up, it is time to add it to AWS SES. To do this you need to call the following API -
  • aws ses create-template --cli-input-json fileb://test_template.json --region us-east-1 --profile osfg

If you have already created the template and want to update it you can execute -

  • aws ses update-template --cli-input-json fileb://test_template.json --region us-east-1 --profile osfg

NOTE: You need to have the AWS CLI configured on your machine and set up with your AWS profile. I have multiple profiles on my machine, which is why you see the --profile argument. If you just have one, then you do not have to mention it.

Once you have executed this API, you can see your template under the "Email Templates" section of the SES service.




Now that you have your email verified and your template set up, you can send an email. Following is the Node.js code to do it -

const aws = require('aws-sdk');
aws.config.loadFromPath('./config.json');
const ses = new aws.SES();

/**
 * author: athakur
 * aws ses create-template --cli-input-json fileb://test_template.json --region us-east-1 --profile osfg
 * aws ses update-template --cli-input-json fileb://test_template.json --region us-east-1 --profile osfg
 * @param {*} callback 
 */
var sendMail = function (callback) {
    var params = {};

    var destination = {
        "ToAddresses": ["opensourceforgeeks@gmail.com"]
    };
    var templateData = {};
    templateData.userName = "athakur";

    params.Source = "opensourceforgeeks@gmail.com";
    params.Destination = destination;
    params.Template = "testtemplate";
    params.TemplateData = JSON.stringify(templateData)



    ses.sendTemplatedEmail(params, function (email_err, email_data) {
        if (email_err) {
            console.error("Failed to send the email : " + email_err);
            callback(email_err, email_data)
        } else {
            console.info("Successfully sent the email : " + JSON.stringify(email_data));
            callback(null, email_data);
        }
    })
}


sendMail(function(err, data){
    if(err){
        console.log("send mail failed");
    }
    else {
        console.log("send mail succeeded");
    }
})
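For reference, aws.config.loadFromPath at the top of the script expects a JSON file with your credentials and region; a minimal config.json (with placeholder values) looks like -

{
    "accessKeyId": "YOUR_ACCESS_KEY_ID",
    "secretAccessKey": "YOUR_SECRET_ACCESS_KEY",
    "region": "us-east-1"
}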


In the above code, you can see that we provide ToAddresses, which is an array field, so you can give multiple addresses here. I have used the same email address as the one in Source (the from address - the one we verified in the first step of the setup). Also, notice the template name that we have provided - it is the same name we had configured in the JSON template file. Finally, notice the placeholder value we have provided for userName. This will be replaced by "athakur" in the template and the email will be sent.

You can find this code snippet in my GitHub gist -



And you should get an email. In my case, it looks as follows -




NOTE: If you are not able to see the mail check your spam folder. Since it is sent by AWS SES service it might land up there. Mark it as non-spam and you should be good going forward.

Saturday, 11 August 2018

Working with async module in Node.js - Part 2 (async.eachSeries)

Background

This is a continuation of my previous post on async module in Node.js -
In the last post, we saw how neatly we can write code using async.waterfall. In this post, I will show you a similar trick with async.eachSeries method. 


Without async

Let's consider the following scenario.

We get a list of event types and we need to run them in a loop, in order. Following is a sample code to do that -

/**
 * Program to demonstrate the async nodejs module
 * @author : athalur
 */

const async = require("async");

var startDemo = function () {
    console.log("Starting Demo");
    var events = ["Download", "Process", "Upload", "Del"];
    events.forEach(event => {
        process(event, function () {
            console.log("Got callback for : " + event);
        });
    });
    console.log("Ending Demo");
}

var process = function (processType, callback) {
    var processTime = 0;
    switch (processType) {
        case "Download":
            processTime = 2000;
            break;
        case "Process":
            processTime = 1000;
            break;
        case "Upload":
            processTime = 4000;
            break;
        case "Del":
            processTime = 100;
            break;
    }
    setTimeout(function () {
        console.log("Finished : " + processType);
        callback();
    }, processTime);
}

startDemo();


And the output is -
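Based on the timeouts in the code above -

Starting Demo
Ending Demo
Finished : Del
Got callback for : Del
Finished : Process
Got callback for : Process
Finished : Download
Got callback for : Download
Finished : Upload
Got callback for : Upload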



Wait, what happened here? We looped over our events array in order -

  1. Download
  2. Process
  3. Upload
  4. Delete
But that is clearly not the order in which things finished, mostly because each process takes a different amount of time to finish. In a real-world scenario, it would be making API calls or doing disk I/O, the duration of which we cannot predict. Let's see how async comes to our rescue.

Change the main code as follows -

var startDemo = function () {
    console.log("Starting Demo");
    var events = ["Download", "Process", "Upload", "Del"];
    async.eachSeries(events, function (event, callback) {
        process(event, callback);
    }, function(err){
        if(!err){
            console.log("Ending Demo");
        }
    });
}

and rerun the code.
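This time the output is -

Starting Demo
Finished : Download
Finished : Process
Finished : Upload
Finished : Del
Ending Demo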




Now you can see all of them executed in series.



NOTE1: async.forEach runs through the array in parallel, meaning it will run the function for each item in the array immediately, and then, when all of them have executed their callbacks (2nd argument), the final callback function (3rd argument) will be called.

NOTE2: async.eachSeries runs through the array in series, meaning it will run the function for each item in the array and wait for it to execute its callback (2nd argument) before going to the next item; finally, when all are done, the final callback function (3rd argument) will be called.


Working with async module in Node.js - Part 1 (async.waterfall)

Background

Javascript, as we know, runs on a single thread, and to prevent blocking operations like a network call or disk I/O, we use asynchronous callbacks. This essentially means tasks run in the background and we get a callback when the operation is done. If you wish to understand more about how Javascript works, please watch the video below -




So, as you must know by now, there is a lot of asynchronous stuff happening. Sometimes we need to order these operations to suit our business logic. Consider a simple example -

  1. Download an mp4 file from the server
  2. Convert it into a gif locally
  3. Upload the gif back to the server
  4. Delete the mp4 from the server

Now in this example, all 4 steps are asynchronous operations. Also, we cannot move to the next step until the previous step has finished.

The Callback way

We can use callbacks for this. Something like below -
/**
 * Program to demonstrate the async nodejs module
 * @author : athalur
 */

var startDemo = function () {
    console.log("Starting Demo");
    download(function () {
        process(function () {
            upload(function () {
                del(function () {
                    console.log("Ending Demo");
                })
            })
        })
    });
}

var download = function (callback) {
    console.log("Starting download");
    delay();
    console.log("Finishing download");
    callback();
}

var process = function (callback) {
    console.log("Starting process");
    delay();
    console.log("Finishing process");
    callback();
}

var upload = function (callback) {
    console.log("Starting upload");
    delay();
    console.log("Finishing upload");
    callback();
}

var del = function (callback) {
    console.log("Starting del");
    delay();
    console.log("Finishing del");
    callback();
}

var delay = function () {
    var i, j;
    for (i = 0; i < 100000; i++) {
        for (j = 0; j < 10000; j++) {
            //do nothing
        }
    }
}

startDemo();
This prints the below output -




As you can see, this is the mess created by cascading callbacks. Also, if at any point there is an error, we need to send it back through the callbacks as well, and each step would need an if-else check to handle it. Let us see how easy this becomes with the async module.


The async way




First, you need to install the async nodejs module. To do so, run the following command -

  • npm install async


Now using async our program becomes -

/**
 * Program to demonstrate the async nodejs module
 * @author : athakur
 */

const async = require("async");

var startDemo = function () {
    console.log("Starting Demo");
    async.waterfall([download,
        process,
        upload,
        del],
        function (err, data) {
            if(err) {
                console.log("There was an error in the demo : " + err);
            }else {
                console.log("Demo complete successfully");
            }
        });
}


NOTE:  I have not included the actual methods again to avoid repetition.

And the output is -
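Starting Demo
Starting download
Finishing download
Starting process
Finishing process
Starting upload
Finishing upload
Starting del
Finishing del
Demo complete successfully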



Notice how much cleaner our code has become. Async takes care of all the callbacks. It also provides a mechanism to send data from one step to another: if you call the callback with data, it will be available in the next step.

If you change our download and process method slightly like below -

var download = function (callback) {
    console.log("Starting download");
    delay();
    console.log("Finishing download");
    callback(null, "Downloaded file URL");
}

var process = function (data, callback) {
    console.log("Starting process");
    console.log("In process method. Data from download: " + data);
    delay();
    console.log("Finishing process");
    callback();
}


and re-execute, we will get -
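Starting Demo
Starting download
Finishing download
Starting process
In process method. Data from download: Downloaded file URL
Finishing process
Starting upload
Finishing upload
Starting del
Finishing del
Demo complete successfully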




Also, it provides a cleaner way of error handling. Let's say our download fails. Change the download method as below -

var download = function (callback) {
    console.log("Starting download");
    delay();
    console.log("Finishing download");
    callback("Error in download", "Downloaded file URL");
}


This essentially means our download failed - the 1st argument in the callback is the error. In this scenario, the next steps in the waterfall model will not be executed, and the final callback that you provided at the end of the waterfall array will be invoked with the error. On executing the above you will get -
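Starting Demo
Starting download
Finishing download
There was an error in the demo : Error in download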



That's all for the async module's waterfall. In the next post, I will show you how we can use the async module for looping over an array of data -

Friday, 10 August 2018

How to make HTTP/HTTPS request in Node.js

Background

Many times you need to make an external API call from your Node.js application. A simple example would be calling API Gateway from your Node.js based Lambda in your AWS environment. In this post, I will show you two ways to do this -
  1. The standard http/https library
  2. The request library


Using the standard http/https library

Let's see how we can use the standard http/https library to make an API request.

To use standard http or https library you can simply import the module using -

const https = require("https");
const http = require("http");

Now you can use these to make your http or https calls. A sample is provided below -


/**
 * Node.js code to demonstrate https calls.
 * @author athakur
 */
const https = require("https");

var startDemo = function () {
    console.log("starting demo code");
    executeHttps(function (err, data) {
        if (err) {
            console.log("Error in running demo code");
        }
        else {
            console.log("Successfully ending demo code");
        }

    });
}


var executeHttps = function (callback) {
    var options = {
        hostname: "opensourceforgeeks.blogspot.com",
        port: 443,
        path: "/p/about-me.html",
        method: 'GET',
        headers: {
            'Content-Type': 'text/html'
        }
    };

    var req = https.request(options, function (res) {
        console.log("Status for API call : " + res.statusCode);
        console.log("Headers for API call : " + JSON.stringify(res.headers));
        res.setEncoding('utf8');

        var body = '';

        res.on('data', function (chunk) {
            body = body + chunk;
        });

        res.on('end', function () {
            console.log("Body for API call : " + body.length);
            if (res.statusCode != 200) {
                console.log("API call failed with response code " + res.statusCode);
                callback("API call failed with response code " + res.statusCode, null)
            } else {
                console.log("Got response : " + body.length);
                callback(null, body);
            }
        });
    });

    req.on('error', function (e) {
        console.log("problem with API call : " + e.message);
        callback(e, null);
    });

    req.end();
}


startDemo();


You can get this code on my Github gist as well - https://gist.github.com/aniket91/2f6e92a005eb2a62fcc1ddd39aac6dc2


To execute it, just run (assuming your file name is test.js) -
  • node test.js


You can similarly do it for http as well. For http you need to -
  • use const http = require("http");
  • change the port to 80 in options
  • call http.request instead of https.request
NOTE: Notice how we are building the body in the 'data' event listener and then processing the response in the 'end' event. I have seen developers processing the data in the 'data' event listener only, which is not correct. It will break if your response is huge and comes in chunks.

Similarly, you can execute a POST request. Change the options to -

    var post_data = JSON.stringify({ "msg": "Hello World!" });  // hypothetical payload for illustration

    var options = {
        hostname: "opensourceforgeeks.blogspot.com",
        port: 80,
        path: "/p/about-me.html",
        method: 'POST',
        headers: {
            'Content-Type': 'text/html',
            'Content-Length': Buffer.byteLength(post_data)
        }
    };



and then, before you close the request using req.end(), add -
  • req.write(post_data);



Now that we have seen how the http/https modules work in nodejs, let's see how the request module works.

Using the request library

The request module is more user-friendly to use.


To begin with, you need to install the request module dependency since it is not a standard library that comes with nodejs. To install it, execute the following command -
  • npm install request


You should see a folder called node_modules getting created in your directory with the request and other dependent modules installed.

You can import request module using -
  • const request = require('request');

Then you can use it as follows -

/**
 * Node.js code to demonstrate https calls.
 * @author athakur
 */
const request = require('request');

var startDemo = function () {
    console.log("starting demo code");
    executeRequest(function (err, data) {
        if (err) {
            console.log("Error in running demo code");
        }
        else {
            console.log("Successfully ending demo code");
        }

    });
}


var executeRequest = function(callback){
    var headers = {}
    headers['Content-type'] = 'text/html'
    request({
        url: 'https://opensourceforgeeks.blogspot.com//p/about-me.html',
        method: 'GET',
        headers: headers
    }, function (err, response, body) {
        if (err) {
            console.error('API failed : ', err)
            callback(err)
        } else {
            console.log("Statuscode: " + response.statusCode);
            console.log("Got response : " + body.length);
            callback(null, body);

        }
    })
}



And the output is -


You can execute a POST call as well by changing the method type to POST. Note that payload below is a placeholder for your request body. Eg -

    request({
        url: 'https://opensourceforgeeks.blogspot.com//p/about-me.html',
        method: 'POST',
        body: payload,
        headers: headers
    }, function (err, response, body) {
        if (err) {
            console.error('API failed : ', err)
            callback(err)
        } else {
            console.log("Statuscode: " + response.statusCode);
            console.log("Got response : " + body.length);
            callback(null, body);

        }
    });




Hope this helps! Let me know if you have any questions. Thanks.



