Background
AWS is the most widely used cloud platform today. It is easy to use, cost effective, and takes very little time to set up. I could go on about its benefits over running your own data center, but that is not the goal of this post. In this post I am going to show how you can access cross-account services in AWS.
More specifically, I will demo accessing a cross-account S3 bucket using two approaches. The first is specific to cross-account bucket access; the second is generic and can be used to access any AWS service.
This post assumes you have basic knowledge of AWS services, specifically S3 and IAM (roles, policies, users).
IAM User Setup
Let's start by creating an IAM user in Account A (the account you own). Create a user with complete access to the S3 service. You can attach the S3 full access policy directly, or you can attach an inline policy as follows -
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
NOTE: I have purposefully not provided a bucket name here. Since this is cross-account bucket access, we may not know the bucket name in Account B beforehand.
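That said, if you do know the target bucket name up front, the same inline policy can be scoped down to just the actions and bucket you need. A possible tighter variant (using aniket.help, the example bucket name from later in this post) might look like:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject" ],
            "Resource": "arn:aws:s3:::aniket.help/*"
        }
    ]
}
```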
Also enable programmatic access for this IAM user. We will need the access key ID and secret access key to use in our API calls. Save these details somewhere, as you will not be able to retrieve them again from the AWS console; you would have to regenerate them.
Also note down the ARN of this IAM user. For me it is -
arn:aws:iam::499222264523:user/athakur
We will need these later in our setup.
Project Setup
You need to create a new Java project to test these changes out. I am using a Maven project for dependency management, but you can choose whatever you wish. You need a dependency on the AWS Java SDK.
<!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.11.238</version>
</dependency>
NOTE: Language should not be a barrier here. You can use any language you want, such as Python or Node.js. For this post I am going to use Java, but other languages have similar APIs.
Approach 1 (Using Bucket policies)
The first approach to cross-account access for S3 buckets is to use S3 bucket policies. To begin with, you need an IAM user in your own account (let's call it Account A). Then there is Account B, whose S3 bucket you need to read from and write to.
Now let's say the S3 bucket in the cross account (Account B) is named aniket.help. Go ahead and configure the bucket policy for this bucket as follows -
{
    "Version": "2012-10-17",
    "Id": "Policy1511782738232",
    "Statement": [
        {
            "Sid": "Stmt1511782736332",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::499222264523:user/athakur"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::aniket.help/*"
        }
    ]
}
The above bucket policy provides cross-account access to our IAM user from Account A (notice the ARN is the same as that of the IAM user we created in Account A). Also note we are only granting permission for S3 GET, PUT, and DELETE, and only on a specific bucket named aniket.help.
NOTE: Bucket names are globally unique (and the S3 namespace is global), even though your bucket resides in a particular AWS region. So do not try to use the same bucket name as above; use any other name you want.
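If you find yourself setting up this kind of policy for many user/bucket pairs, it can help to generate the policy document programmatically. The following is just an illustrative sketch using plain Java string building; BucketPolicyBuilder and generateBucketPolicy are hypothetical names, not part of the AWS SDK:

```java
public class BucketPolicyBuilder {

    // Builds a bucket policy JSON document granting the given IAM user
    // GET/PUT/DELETE on all objects in the given bucket.
    public static String generateBucketPolicy(String userArn, String bucketName) {
        return "{\n"
            + "  \"Version\": \"2012-10-17\",\n"
            + "  \"Statement\": [{\n"
            + "    \"Effect\": \"Allow\",\n"
            + "    \"Principal\": { \"AWS\": \"" + userArn + "\" },\n"
            + "    \"Action\": [\"s3:PutObject\", \"s3:GetObject\", \"s3:DeleteObject\"],\n"
            + "    \"Resource\": \"arn:aws:s3:::" + bucketName + "/*\"\n"
            + "  }]\n"
            + "}";
    }

    public static void main(String[] args) {
        // Same ARN and bucket name as used elsewhere in this post.
        System.out.println(generateBucketPolicy(
                "arn:aws:iam::499222264523:user/athakur", "aniket.help"));
    }
}
```

You can paste the generated document straight into the bucket policy editor in the S3 console.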
Now you can run the following Java code to upload a file to S3 bucket of Account B.
public static boolean validateUpload() {
    try {
        BasicAWSCredentials credentials = new BasicAWSCredentials(awsAcessKeyId, awsSecretKey);
        AmazonS3 s3client = AmazonS3ClientBuilder.standard()
                .withRegion(BUCKET_REGION)
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .build();
        s3client.putObject(BUCKET_NAME, "test.txt", "This is from cross account!");
    } catch (AmazonServiceException ase) {
        System.out.println("Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason.");
        System.out.println("Error Message: " + ase.getMessage());
        System.out.println("HTTP Status Code: " + ase.getStatusCode());
        System.out.println("AWS Error Code: " + ase.getErrorCode());
        System.out.println("Error Type: " + ase.getErrorType());
        System.out.println("Request ID: " + ase.getRequestId());
        ase.printStackTrace();
        return false;
    } catch (AmazonClientException ace) {
        System.out.println("Caught an AmazonClientException, which means the client encountered an internal error while trying to communicate with S3, such as not being able to access the network.");
        System.out.println("Error Message: " + ace.getMessage());
        ace.printStackTrace();
        return false;
    } catch (Exception ex) {
        System.out.println("Got exception while validating bucket configuration.");
        ex.printStackTrace();
        return false;
    }
    return true;
}
NOTE: Replace BUCKET_NAME and BUCKET_REGION with the actual bucket name and region that you have created in Account B. Also replace awsAcessKeyId and awsSecretKey with the actual IAM credentials we created in Account A.
You can simply run this and validate output -
public static final String awsAcessKeyId = "REPLACE_THIS";
public static final String awsSecretKey = "REPLACE_THIS";
public static final String BUCKET_NAME = "aniket.help";
public static final String BUCKET_REGION = "us-east-1";

public static void main(String args[]) {
    System.out.println("validated Upload : " + validateUpload());
}
You should get -
validated Upload : true
You can verify that the file is actually uploaded to the S3 bucket.
Let's do the same for download as well.
Code is as follows -
public static boolean validateDownload() {
    try {
        BasicAWSCredentials credentials = new BasicAWSCredentials(awsAcessKeyId, awsSecretKey);
        AmazonS3 s3client = AmazonS3ClientBuilder.standard()
                .withRegion(BUCKET_REGION)
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .build();
        S3Object s3Object = s3client.getObject(BUCKET_NAME, "test.txt");
        BufferedReader reader = new BufferedReader(new InputStreamReader(s3Object.getObjectContent()));
        String content = reader.lines().collect(Collectors.joining("\n"));
        System.out.println("Read File from S3 bucket. Content : " + content);
    } catch (AmazonServiceException ase) {
        System.out.println("Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason.");
        System.out.println("Error Message: " + ase.getMessage());
        System.out.println("HTTP Status Code: " + ase.getStatusCode());
        System.out.println("AWS Error Code: " + ase.getErrorCode());
        System.out.println("Error Type: " + ase.getErrorType());
        System.out.println("Request ID: " + ase.getRequestId());
        ase.printStackTrace();
        return false;
    } catch (AmazonClientException ace) {
        System.out.println("Caught an AmazonClientException, which means the client encountered an internal error while trying to communicate with S3, such as not being able to access the network.");
        System.out.println("Error Message: " + ace.getMessage());
        ace.printStackTrace();
        return false;
    } catch (Exception ex) {
        System.out.println("Got exception while validating bucket configuration.");
        ex.printStackTrace();
        return false;
    }
    return true;
}
You can test it out as -
public static void main(String args[]) { System.out.println("validated Download : " + validateDownload()); }
and the output is as follows -
Read File from S3 bucket. Content : This is from cross account!
validated Download : true
Drawback: The drawback of using a bucket policy is that Account B cannot use KMS encryption on their bucket, since the IAM user from Account A does not have access to the KMS key in Account B. They can still use AES encryption (SSE-S3). These are encryption-at-rest schemes: S3 takes care of encrypting files before saving them to disk and decrypting them before sending them back. This limitation can be resolved by taking approach 2 (assume role).
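SSE-S3 works in this setup because no cross-account key permissions are involved; S3 manages the key itself. As a rough sketch of what requesting SSE-S3 looks like with the same SDK version used above (this requires the AWS SDK and valid credentials, so treat it as illustrative; uploadWithSSE and the object key test-encrypted.txt are names made up for this example):

```java
public static void uploadWithSSE() {
    BasicAWSCredentials credentials = new BasicAWSCredentials(awsAcessKeyId, awsSecretKey);
    AmazonS3 s3client = AmazonS3ClientBuilder.standard()
            .withRegion(BUCKET_REGION)
            .withCredentials(new AWSStaticCredentialsProvider(credentials))
            .build();
    byte[] payload = "This is from cross account!".getBytes(StandardCharsets.UTF_8);
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(payload.length);
    // Ask S3 to encrypt the object at rest with AES-256 (SSE-S3).
    metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
    s3client.putObject(new PutObjectRequest(BUCKET_NAME, "test-encrypted.txt",
            new ByteArrayInputStream(payload), metadata));
}
```

The object is decrypted transparently on GET, so the download code shown earlier works unchanged.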
NOTE: Security is the most important aspect of the cloud since potentially anyone can access it. It is the responsibility of the individual setting this up to ensure it is securely deployed. Never give out your IAM credentials or check them into any repository. Make your access roles and policies as granular as you can. In the above case, if you need just GET and PUT, grant only those in the IAM policy. Do not use wildcards there.
Stay tuned for Part 2 of this post, in which we will see how we can use assume role to access any service in Account B (securely, of course). We need not use a bucket policy in that case.
Part 2 - How to enable and use cross account access to services in AWS with APIs - PART 2 - Assume Role
CORS - Cross origin resource sharing
Note: if you are trying to access the S3 bucket from a domain different from the domain of the actual site, you need to set a CORS policy on your bucket (not applicable to the above demo) -
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <ExposeHeader>ETag</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
The above allows all types of requests from any origin. You can restrict it as per your use case.
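For example, if your front end only needs to read objects and is served from a single site, a more restrictive configuration might look like the following (https://example.com is a placeholder; substitute your actual origin):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://example.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
```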