Thursday, 30 November 2017

How to enable and use cross account access to services in AWS with APIs - PART 1 - Bucket Policies

Background

AWS is the most widely used cloud platform today. It is easy to use, cost effective and takes no time to set up. I could go on and on about its benefits over your own data center, but that's not the goal of this post. In this post I am going to show how you can access cross account services in AWS.

More specifically, I will demo accessing a cross account S3 bucket and show two approaches to do so. The first is specific to cross account bucket access; the second is generic and can be used to access any service.

This post assumes you have basic knowledge of AWS services, specifically S3 and IAM (roles, policies, users).

IAM User Setup

Let's start by creating an IAM user in Account A (the account you own). Create a user with complete access to the S3 service. You can attach the S3 full access managed policy directly. The other way to do it is to attach an inline policy as follows -


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

NOTE : I have purposely not restricted the bucket name here since, for cross account bucket access, we may not know the name of Account B's bucket beforehand.

Also enable programmatic access for this IAM user. We will need the access key ID and secret key to use in our API calls. Save these details somewhere, as you will not be able to retrieve the secret key again from the AWS console; you would have to regenerate it.

Also note down the ARN of this IAM user. For me it is -
  • arn:aws:iam::499222264523:user/athakur
We will need this later in our setup.


Project Setup

You need to create a new Java project to test these changes out. I am using a Maven project for dependency management, but you can choose whatever you wish. You need a dependency on the AWS Java SDK.

        <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk</artifactId>
            <version>1.11.238</version>
        </dependency>


NOTE : Language should not be a barrier here. You can use any language you want - Python, Node.js, etc. For this post I am going to use Java, but other languages have similar APIs.

Approach 1 (Using Bucket policies)

The first approach to cross account access for S3 buckets is to use S3 bucket policies. To begin with you need an IAM user in your own account (let's call it Account A). Then there is Account B, whose S3 bucket you need to read from and write to.


Now let's say the name of the S3 bucket in the cross account (Account B) is aniket.help. Go ahead and configure the bucket policy for this bucket as follows -


 {
    "Version": "2012-10-17",
    "Id": "Policy1511782738232",
    "Statement": [
        {
            "Sid": "Stmt1511782736332",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::499222264523:user/athakur"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::aniket.help/*"
        }
    ]
}


The above bucket policy grants cross account access to our IAM user from Account A (notice the ARN is the same as that of the IAM user we created in Account A). Also note we are only granting permission for S3 GET, PUT and DELETE, and only on the specific bucket named aniket.help.

NOTE : Bucket names are globally unique (even though your bucket resides in a particular AWS region), so do not try to use the same bucket name as above; use any other name you want.


Now you can run the following Java code to upload a file to the S3 bucket of Account B.


    public static boolean validateUpload() {
        
        try {
            BasicAWSCredentials credentials = new BasicAWSCredentials(awsAcessKeyId, awsSecretKey);
            AmazonS3 s3client = AmazonS3ClientBuilder.standard().withRegion(BUCKET_REGION)
                    .withCredentials(new AWSStaticCredentialsProvider(credentials)).build();
            s3client.putObject(BUCKET_NAME, "test.txt", "This is from cross account!");
            
        }catch (AmazonServiceException ase) {
            System.out.println(
                    "Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason." );
            System.out.println("Error Message:    " +  ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
            ase.printStackTrace();
            return false;
        } catch (AmazonClientException ace) {
            System.out.println(
                    "Caught an AmazonClientException, which means the client encountered an internal error while trying to communicate with S3, such as not being able to access the network");
            System.out.println("Error Message: {}" +  ace.getMessage());
            ace.printStackTrace();
            return false;
        } catch (Exception ex) {
            System.out.println("Got exception while validation bucket configuration.");
            ex.printStackTrace();
            return false;
        }
        return true;
    } 


NOTE : Replace BUCKET_NAME and BUCKET_REGION with the actual bucket name and region that you created in Account B. Also replace awsAcessKeyId and awsSecretKey with the actual IAM credentials that we created in Account A.

You can simply run this and validate the output -

    public static final String awsAcessKeyId = "REPLACE_THIS";
    public static final String awsSecretKey = "REPLACE_THIS";
    public static final String BUCKET_NAME = "aniket.help";
    public static final String BUCKET_REGION = "us-east-1";
    
    public static void main(String args[]) {
        System.out.println("validated Upload : " + validateUpload());
    }

You should get -
validated Upload : true

 You can verify the file is actually uploaded to the S3 bucket.
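If you prefer to confirm it from code instead of the S3 console, here is a minimal sketch that checks the uploaded object's metadata (a HEAD request, which only needs the s3:GetObject permission the bucket policy above already grants). It reuses the same constants and client setup as the upload code -

    public static boolean validateObjectExists() {
        try {
            BasicAWSCredentials credentials = new BasicAWSCredentials(awsAcessKeyId, awsSecretKey);
            AmazonS3 s3client = AmazonS3ClientBuilder.standard().withRegion(BUCKET_REGION)
                    .withCredentials(new AWSStaticCredentialsProvider(credentials)).build();
            // HEAD the object we just uploaded; throws if it is missing or access is denied
            ObjectMetadata metadata = s3client.getObjectMetadata(BUCKET_NAME, "test.txt");
            System.out.println("test.txt found, size : " + metadata.getContentLength() + " bytes");
        } catch (Exception ex) {
            ex.printStackTrace();
            return false;
        }
        return true;
    }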



Let's do the same for download as well.


Code is as follows -

    public static boolean validateDownload() {
        
        try {
            BasicAWSCredentials credentials = new BasicAWSCredentials(awsAcessKeyId, awsSecretKey);
            AmazonS3 s3client = AmazonS3ClientBuilder.standard().withRegion(BUCKET_REGION)
                    .withCredentials(new AWSStaticCredentialsProvider(credentials)).build();
            // read back the object we uploaded earlier
            String content = s3client.getObjectAsString(BUCKET_NAME, "test.txt");
            System.out.println("Read File from S3 bucket. Content : " + content);
            
        } catch (AmazonServiceException ase) {
            System.out.println(
                    "Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason." );
            System.out.println("Error Message:    " +  ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
            ase.printStackTrace();
            return false;
        } catch (AmazonClientException ace) {
            System.out.println(
                    "Caught an AmazonClientException, which means the client encountered an internal error while trying to communicate with S3, such as not being able to access the network");
            System.out.println("Error Message: " + ace.getMessage());
            ace.printStackTrace();
            return false;
        } catch (Exception ex) {
            System.out.println("Got exception while validating bucket download.");
            ex.printStackTrace();
            return false;
        }
        return true;
    }


You can test it out as  -


    public static void main(String args[]) {
        System.out.println("validated Download : " + validateDownload());

    }
   


and the output is as follows -
Read File from S3 bucket. Content : This is from cross account!
validated Download : true


 Drawback : The drawback of using a bucket policy is that Account B cannot use KMS encryption (SSE-KMS) on their bucket, since the IAM user from Account A does not have access to the KMS key in Account B. They can still use SSE-S3 (AES-256) encryption. (These are encryption at rest - S3 takes care of encrypting files before saving them to disk and decrypting them before sending them back.) This can be resolved by taking approach 2 (assume role).

NOTE : Security is the most important aspect in the cloud, since potentially anyone can reach your resources. It is the responsibility of the individual setting these up to ensure they are securely deployed. Never give out your IAM credentials or check them into any repository. Make your roles and policies as granular as you can. In the above case, if you only need GET and PUT, grant just those actions in the IAM policy - do not use wildcards there.
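For example, if the IAM user in Account A will only ever read and write objects, the inline policy from the setup section could be narrowed down to something like this (the bucket name below is just an illustration - scope it to whatever bucket you actually use):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::aniket.help/*"
        }
    ]
}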

Stay tuned for PART 2 of this post. In that we will see how we can use assume role to access any service in Account B (securely, of course). We will not need a bucket policy in that case.

Part 2 - How to enable and use cross account access to services in AWS with APIs - PART 2 - Assume Role


CORS - Cross origin resource sharing

Note: if you are trying to access the S3 bucket from a browser on a domain different from the domain of the actual site, then you need to set a CORS policy on your bucket (not applicable to the demo above) -

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <ExposeHeader>ETag</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>


The above allows all types of requests from any origin. You can restrict it as per your use case.
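For example, if only GET requests from a single site need to go through, a tighter configuration could look like this (the origin below is just a placeholder):

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
</CORSRule>
</CORSConfiguration>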

Related Links

Sunday, 26 November 2017

How to connect to postgres RDS from AWS Lambda

Background

In one of the previous posts we saw how serverless code works with AWS Lambda and API Gateway.
In this post we will see how we can configure a Lambda function to connect to an RDS instance and run queries on it. RDS is the AWS Relational Database Service. It offers multiple database engines like -
  • MySQL
  • Aurora
  • PostgreSQL
  • Oracle, etc.
For this particular post we are going to use a Postgres DB. This post is about the Lambda function, so it assumes you have a Postgres DB running in RDS and have its endpoint, username and password handy.

https://www.pgadmin.org/ : If you want a GUI based client to test Postgres locally, try using pgAdmin.

How to connect to postgres RDS from AWS Lambda

Code for the Lambda function to connect to RDS is as follows -

'use strict';

const pg = require('pg');
const async = require('async');

const databaseUser = process.env.DB_USER;
const databasePassword = process.env.DB_PASSWORD;
const databaseName = process.env.DB_DB_NAME;
const databaseHost = process.env.DB_HOST;
const databasePort = process.env.DB_PORT;
const databaseMaxCon = process.env.DB_MAX_CONNECTIONS;

exports.handler = (event, context, callback) => {
    console.log('Received event : ' + JSON.stringify(event) + ' at ' + new Date());

    let dbConfig = {
        user: databaseUser,
        password: databasePassword,
        database: databaseName,
        host: databaseHost,
        port: databasePort,
        max: databaseMaxCon
    };

    let pool = new pg.Pool(dbConfig);
    pool.connect(function(err, client, done) {

        if (err) {
            console.error('Error connecting to pg server' + err.stack);
            callback(err);
        } else {
            console.log('Connection established with pg db server');

            client.query("select * from employee", (err, res) => {

                    if (err) {
                        console.error('Error executing query on pg db' + err.stack);
                        callback(err);
                    } else {
                        console.log('Got query results : ' + res.rows.length);
                        
                        
                       async.each(res.rows, function(empRecord, cb) {
                                console.log(empRecord.name);
                                cb();
                        });
                    }
                    client.release();
                    pool.end();
                    console.log('Ending lambda at ' + new Date());

                });
        }

    });    
    
};

    

Explanation 

Here we are using the Postgres library called pg. You can install this module using -
  • npm install pg

  1. In the first part we create a pool of connections, giving the required parameters to connect to the Postgres DB. Notice how we are reading these parameters from environment variables.
  2. Next we call connect on it and pass a callback to get the connection when successful
  3. In the callback we can execute client.query() and pass a callback to get rows of data we need for the employee table.
  4. Finally we iterate over each record using async and print the employee record name.
  5. Release the client when you are done with that particular connection
  6. You can end the pool when all the DB operations are done.
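If you want to try the handler locally before deploying, a minimal test harness could look like the sketch below. It assumes the code above is saved as index.js, that pg and async are installed via npm, and that a Postgres instance with an employee table is reachable with these placeholder credentials (the handler itself prints the query results; the callback here only reports errors).

'use strict';

// placeholder connection details - point these at your own test instance
process.env.DB_USER = 'testuser';
process.env.DB_PASSWORD = 'testpassword';
process.env.DB_DB_NAME = 'testdb';
process.env.DB_HOST = 'localhost';
process.env.DB_PORT = '5432';
process.env.DB_MAX_CONNECTIONS = '1';

const lambda = require('./index');

// invoke the handler the same way Lambda would
lambda.handler({}, {}, function(err) {
    if (err) {
        console.error('Handler reported an error : ' + err);
    }
});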



AWS specific notes

  • By default AWS Lambda has an internet connection, so it can access web resources.
  • Lambda by default does not have access to AWS services running in a private subnet.
  • If you want to access services in a private subnet, e.g. RDS running in a private subnet, then you need to configure the VPC, the private subnets to run Lambda in, and a security group, in the Network section of the Lambda configuration.
  • However, once you do this you will no longer have access to the internet (since Lambda now runs in a private subnet).
  • If you still need internet access, you need to spin up a NAT gateway or a NAT instance in a public subnet and add a route from the private subnet to this NAT.
  • Note: if you are encrypting Lambda environment variables using KMS you will need internet access (the KMS endpoint requires it). So if your RDS is running in a private subnet you need to follow the above steps to make it work; otherwise you will get a bunch of timeout exceptions.
  • Also note the maximum run time of a Lambda is 5 minutes, so make sure your Lambda execution completes within that time. You should probably limit the rows returned by the DB and process only that much in one Lambda execution.
  • You can also run Lambda as a scheduled batch job (using a cron expression) from CloudWatch.


Related Links

Saturday, 11 November 2017

Understanding Promises in javascript

Background

I am really new to Node.js and JavaScript for that matter. I have used these in the past, but mainly to manipulate DOM elements, post forms, handle events, etc. We saw in the last post on Node.js working how Node.js uses a single threaded, non-blocking and asynchronous programming model.
This is achieved mainly using events. Let's say you want to read a file on the file system. You make a call to access this file and provide a callback function which will be invoked on successful completion of the file access; meanwhile the program can carry on with its next set of functions.

Now sometimes we do need to sequence tasks. For example, let's say we need to finish reading a file's JSON content before making a network call that depends on it. One possible way is to make the network call inside the callback of the file access function, so that when file access completes we read the content and then make the network call.

This sounds simple enough when a single pair of operations needs to be sequenced. Now try to imagine multiple functions that need to be chained this way - it quickly becomes a nightmare to code, writing a new function inside each function's callback (cascading it), as in the sketch below.
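For instance, chaining just three steps with plain callbacks already nests three levels deep (a rough illustration - makeNetworkCall and processResponse are made-up placeholder functions):

const fs = require('fs');

// read a config file, use its contents for a network call, then process the response -
// each step nests one level deeper inside the previous callback
fs.readFile('config.json', 'utf8', function(err, data) {
    if (err) { return console.error(err); }
    makeNetworkCall(JSON.parse(data), function(err, response) {
        if (err) { return console.error(err); }
        processResponse(response, function(err, result) {
            if (err) { return console.error(err); }
            console.log('All done : ' + result);
        });
    });
});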

This is where promises come into picture.



Understanding Promises in javascript


It is literally what it means - a promise. It is an object that may produce a result in the future. Think of it like a Future object in Java (I come from a Java background, hence the reference; ignore it if you are not familiar) -

A promise can be in one of the following states -
  1. Fulfilled : The asynchronous operation corresponding to this promise completed successfully.
  2. Rejected : The asynchronous operation corresponding to this promise failed. The promise will hold the reason why it failed.
  3. Pending : The asynchronous operation is still in progress and the promise is neither fulfilled nor rejected.
  4. Settled : A generic state - the asynchronous operation is complete and the promise is either fulfilled or rejected.
A promise stays pending while the asynchronous operation is in progress. Once the operation completes, the state becomes fulfilled or rejected, and from then on it cannot change.

NOTE : We say asynchronous operation because promises are generally created for asynchronous work, but that need not be the case - a promise may wrap a synchronous operation as well.

Consider a simple promise as follows -


var testPromise = new Promise(function(resolve, reject){
        //your test operation - can be async
        let testSuccess  = true; // can be false depending on if your test async operation failed  
        if(testSuccess) {
                resolve("success");
        }
        else {
                reject("failure");
        }
});

testPromise.then(function(successResult){
        console.log("Test promise succeded with result : " + successResult);
}).catch(function(failureResult){
        console.log("Test promise failed with result : " + failureResult);
});

This prints output : Test promise succeded with result : success
You can change testSuccess to false and it will print : Test promise failed with result : failure

So let's see what happened here. 
  • First we created a new promise with the constructor new Promise().
  • The constructor takes a function argument that defines the operation to be performed as part of that promise.
  • This function receives two callbacks -
    • resolve()
    • reject()
  • You call resolve() when your operation is successful and reject() when it fails. resolve() essentially puts the promise in the Fulfilled state whereas reject() puts it in the Rejected state we saw above.
  • Depending on the result of our operation (which can be asynchronous) we call resolve() or reject().
  • Once the promise object is created we can consume it using its then() method. The then() callback is invoked when the promise is fulfilled and the catch() callback when it is rejected/failed.
  • You can toggle the value of the testSuccess boolean and see for yourself.
  • The callbacks passed to then() and catch() receive the value passed to resolve() or reject(), which in this case is "success" or "failure".

Now that we know what a promise is and how it behaves, let's see if it can solve our sequencing problem. We have 3 asynchronous operations and we need to run them one after the other -


var test1 = new Promise(function(resolve,reject){
        resolve('test1');
});
var test2 = new Promise(function(resolve,reject){
        resolve('test2');
});
var test3 = new Promise(function(resolve,reject){
        resolve('test3');
});


test1.then(function(test1Result){
        console.log('completed : ' + test1Result);
        return test2;
}).then(function(test2Result){
        console.log('completed : ' + test2Result);
        return test3;
}).then(function(test3Result){
        console.log('completed : ' + test3Result);
});


This one outputs - 
completed : test1
completed : test2
completed : test3

The only difference here is that in each then() callback we return the next promise, so the following then() waits on it and the operations run sequentially.

Alternatively you can also do -

var test1Func = function() {
        return test1;
};
var test2Func = function() {
        return test2;
};
var test3Func = function() {
        return test3;
};
test1Func().then(function(test1Result){
        console.log('completed : ' + test1Result);
        return test2Func();
}).then(function(test2Result){
        console.log('completed : ' + test2Result);
        return test3Func();
}).then(function(test3Result){
        console.log('completed : ' + test3Result);
});

Now what if I want to do some task when all three promises are done? For that you can do -

Promise.all([test1Func(),test2Func(),test3Func()]).then(function(){
        console.log('All tests finished');
});


And this will output - All tests finished
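Promise.all also passes the then() callback an array holding each promise's resolved value, in the same order as the promises, in case you need the individual results:

Promise.all([test1Func(),test2Func(),test3Func()]).then(function(results){
        // results is ['test1', 'test2', 'test3'] for the promises above
        console.log('All tests finished with results : ' + results.join(', '));
});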

Similarly, if you want to do something as soon as any one of the promises completes, you can use Promise.race -


Promise.race([test1Func(),test2Func(),test3Func()]).then(function(){
        console.log('One of the tests finished');
});

and this will output - One of the tests finished

To sum it up: a promise starts in the pending state and settles into either the fulfilled state (via resolve) or the rejected state (via reject), after which its state never changes.



Related Links

Wednesday, 1 November 2017

Understanding Node.js working

Background

As you might already know -
  1. Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine.
  2. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. 
  3. Node.js' package ecosystem, npm, is the largest ecosystem of open source libraries in the world.

Node.js uses a single threaded, non-blocking, asynchronous programming model, which is very memory efficient. We will see this in a moment.

Understanding Node.js working

Traditional server-client architecture works as follows: the server keeps a thread pool ready to serve incoming client requests. As soon as a client request is received, the server takes one of the free threads and uses it to process the request. This limits the number of simultaneous client connections a server can handle - each thread takes up memory/RAM and processor/CPU, eventually exhausting computational resources, not to mention the context switching involved. All of this limits the maximum connections and load a traditional server can take.


This is where Node.js differs. As mentioned before, Node.js uses a single threaded, non-blocking, asynchronous programming model, which is very memory efficient. So it's just a single thread handling client requests; the difference is that the programming is asynchronous, non-blocking and event driven. When a client request is received, the single server thread starts processing it asynchronously and is immediately ready to accept the next client request. When processing for the first request is over, the main thread gets a callback and the result is returned to the client. The async I/O work itself is handed off to worker threads behind the scenes.
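A tiny sketch of this non-blocking style, using Node's built-in fs module (the file path is just a placeholder):

const fs = require('fs');

console.log('Before file read');

// the read is handed off; the main thread does not wait for it
fs.readFile('/etc/hostname', 'utf8', function(err, data) {
    if (err) {
        return console.error('Read failed : ' + err);
    }
    console.log('Read completed : ' + data.trim());
});

console.log('After file read - main thread is already free for the next request');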






That being said, Node.js is not good for CPU intensive tasks given it is single threaded. So if your use case is CPU intensive, Node.js is probably not the way to go.



Related Links
