Thursday, 28 December 2017

How to enable logging in your Jira plugin

Background

In one of our earlier posts we saw how to create a Jira cloud plugin -
You can similarly create a plugin for Jira server. Though these tutorials help you develop the plugin, what I felt was inadequate is the documentation around logging. Logging is very important in any code you write. It helps you understand the flow and find issues when they arise. In this post we will see how logging works for Jira.


How to enable logging in your Jira plugin

Jira uses log4j for runtime logging, so you do not have to set up a logging framework yourself. If you are using the Atlassian SDK you can straight away start using slf4j logging in your code (it will use log4j underneath). A sample example could be -

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyClass {
    private static final Logger log = LoggerFactory.getLogger(MyClass.class);

    public void myMethod() {
        ...
        log.info("Log a message here");
        ...
    }
}


And that's it. You can see those logs in your log file, which is located at -
  • <your_addon_dir>/target/jira/home/log
You should see multiple log files like -

-rw-rw-r--  1 athakur athakur       0 Dec 27 14:31 atlassian-greenhopper.log
-rw-rw-r--  1 athakur athakur 1248201 Dec 29 00:40 atlassian-jira.log
-rw-rw-r--  1 athakur athakur    2558 Dec 29 00:39 atlassian-jira-security.log
-rw-rw-r--  1 athakur athakur     223 Dec 27 14:33 atlassian-jira-slow-queries.log
-rw-rw-r--  1 athakur athakur   18217 Dec 28 18:39 atlassian-servicedesk.log



Jira plugin logs should be part of the atlassian-jira.log file.


Logging levels

There are five logging levels available in log4j: 

'DEBUG', 'INFO', 'WARN', 'ERROR' and 'FATAL'. Listed from most to least verbose, they are:
  • 'DEBUG'
  • 'INFO'
  • 'WARN'
  • 'ERROR'
  • 'FATAL'
'DEBUG' provides the most verbose logging and 'FATAL' provides the least verbose logging. The default level is WARN, meaning warnings and errors are displayed. Sometimes it is useful to adjust this level to see more detail.
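For example, in plugin code using slf4j you can log at the different levels, and only messages at or above the configured threshold end up in atlassian-jira.log. A small illustrative sketch (class name and messages are made up):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LevelDemo {
    private static final Logger log = LoggerFactory.getLogger(LevelDemo.class);

    public void process(String issueKey) {
        log.debug("Entering process() for issue {}", issueKey);   // shown only if the level is DEBUG
        log.info("Processing issue {}", issueKey);                 // hidden at the default WARN level
        log.warn("Issue {} took longer than expected", issueKey);  // visible at the default WARN level
        log.error("Failed to process issue {}", issueKey);         // visible at the default WARN level
    }
}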

You can see these configurations in a file called log4j.properties. The location of this file is -
  • <your-addon-dir>/target/jira/webapp/WEB-INF/classes/log4j.properties
In this file you should see a line saying -

# To turn more verbose logging on - change "WARN" to "DEBUG"
log4j.rootLogger=WARN, console, filelog, cloudAppender
Just change WARN to DEBUG if you want to see debug logs. If you only want verbose logs from your own plugin, you can instead add a package-specific entry like log4j.logger.com.mycompany.myplugin=DEBUG (replace with your plugin's package) and leave the root logger at WARN.


Alternatively, you can temporarily change the log level or add a new log level for a particular package using the Jira admin console. Go to System -> Logging and profiling.




Here you can see the default logger set to WARN. You can change this to DEBUG or add a new package with a corresponding log level.





Again, note that changes done from this admin console are temporary and do not persist over server restarts. To make permanent changes you need to edit the file -
<your-addon-dir>/target/jira/webapp/WEB-INF/classes/log4j.properties




Related Links

Friday, 22 December 2017

Why use slf4j over log4j or logback for logging in Java

Background

In the last post we saw how we can use slf4j with log4j and logback -
But the question is why we would use slf4j over any logging implementation rather than using the actual implementation directly. In this post we will try to answer this question.




SLF4J (Simple Logging Facade for Java) is not really a logging implementation but an abstraction that can sit on top of any logging implementation, like -
  • java.util.logging, 
  • Apache log4j, 
  • logback etc
So consider this - you have developed a project that uses log4j for logging. But your project depends on some other module/library that uses, let's say, logback for logging. In this case you will need to include the logback jar in your application as well. This is just unnecessary overhead. If the module your project depends on had used slf4j, it could have reused our existing log4j configuration and jar.

Another way to see this: let's say you are writing a library that you want someone else to use. In this case you can use slf4j and let the user of your library choose the actual logging implementation, rather than using an actual implementation like log4j and forcing the user of your library to stick to the same.


In short, slf4j makes your code independent of any logging implementation, especially if your code is part of a public API/library.

Now that we know the very basics of why one would use slf4j, let's see some of its advantages -


Why use slf4j over log4j or logback for logging in Java

Let us see how a log statement would look with a plain log4j implementation -


if (logger.isDebugEnabled()) {
    logger.debug("Inputs are input1 : " + input1 + " input2 : " + input2 );
}


A couple of quick observations -
  1. Lots of boilerplate. We need to check if the debug level is enabled every time we want to log a debug statement.
  2. Lots of string concatenation every time we call this debug statement.
In slf4j it would be as simple as -

logger.debug("Inputs are input1 : {} , input2 : {}" , input1, input2 );

Here {} are the placeholders and are replaced by the comma separated arguments provided later in the call. Yes, the method takes variable arguments. And this is cool because - no more string concatenation!

You also avoid the boilerplate code, since slf4j will internally take care of the logging levels and proceed only if the debug level is enabled. So if debug is not enabled, the final string to be logged is not even created. This helps save not only memory but also CPU.
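The same placeholder style also covers exceptions: if the last argument is a Throwable, slf4j logs its stack trace while the {} placeholders are still filled from the earlier arguments. A small sketch (class and inputs are made up):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PlaceholderDemo {
    private static final Logger logger = LoggerFactory.getLogger(PlaceholderDemo.class);

    public void run(String input1, String input2) {
        try {
            throw new IllegalStateException("boom"); // stand-in for real work that fails
        } catch (Exception e) {
            // input1 and input2 fill the {} placeholders; e is logged with its stack trace
            logger.error("Processing failed for input1 : {} , input2 : {}", input1, input2, e);
        }
    }
}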

Related Links

How to configure Slf4j logging in your web application with log4j or logback implementations

Background

One of the important parts of building any application is to implement proper logging. Slf4j is widely used for this. Slf4j itself is not a logging implementation but a facade over existing implementations like log4j etc. In this post we will see how we can configure our web application to use slf4j logging with -
  1. log4j
  2. logback
For log4j I am going to use Ivy as the dependency management tool and for logback I will use Maven. But you can use either, as long as you include the correct dependencies in your application.

Using slf4j with log4j

To use log4j you need to include the following dependencies in your application. Your ivy.xml file would look like -

<ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd">
    <info
        organisation="osfg"
        module="WebDynamo"
        status="integration">
    </info>
    
    <dependencies>
        <dependency org="org.slf4j" name="slf4j-api" rev="1.7.21"/>
        <!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 -->
        <dependency org="org.slf4j" name="slf4j-log4j12" rev="1.7.21"/>
    </dependencies>
</ivy-module>


You can see the complete xml here - https://github.com/aniket91/WebDynamo/blob/master/ivy.xml 
and complete working app here - https://github.com/aniket91/WebDynamo

Once you have the dependencies in place you need to give log4j a configuration file that tells it how your logging should behave. A sample configuration file would look like -

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration debug="true"
  xmlns:log4j='http://jakarta.apache.org/log4j/'>

    <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern"
            value="%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n" />
        </layout>
    </appender>

    <appender name="FILE" class="org.apache.log4j.RollingFileAppender">
        <param name="append" value="false" />
        <param name="maxFileSize" value="10MB" />
        <param name="maxBackupIndex" value="10" />
        <param name="file" value="${catalina.home}/logs/webdynamo.log" />
        <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern"
            value="%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n" />
        </layout>
    </appender>
    
    <category name="org.springframework">
        <priority value="debug" />
    </category>

    <category name="org.springframework.beans">
        <priority value="debug" />
    </category>

    <category name="org.springframework.security">
        <priority value="debug" />
    </category>

    <root>
        <level value="DEBUG" />
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="FILE" />
    </root>

</log4j:configuration>

Again you can see this file in the same project mentioned above - https://github.com/aniket91/WebDynamo/blob/master/src/log4j.xml

This configuration file should be on the classpath. The log4j implementation by default looks for a file called log4j.properties or log4j.xml on your classpath.
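If you cannot (or do not want to) put the file on the classpath, log4j 1.x also honours the log4j.configuration system property, which takes a URL pointing to the config file. A minimal sketch (the file path is hypothetical, and the property must be set before the first logger is created):

public class LoggingBootstrap {
    public static void main(String[] args) {
        // Equivalent to passing -Dlog4j.configuration=file:/etc/webdynamo/log4j.xml to the JVM
        System.setProperty("log4j.configuration", "file:/etc/webdynamo/log4j.xml");

        org.slf4j.Logger log = org.slf4j.LoggerFactory.getLogger(LoggingBootstrap.class);
        log.info("log4j configured from an external file");
    }
}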

You can visualize this with the following diagram -



The application code uses the slf4j interface, which in turn uses the log4j-slf4j bridge (slf4j-log4j12) to talk to the log4j implementation.

We will see how to actually use the slf4j logger a bit later in this post. Let's look at how to do the same with a logback implementation.



Using slf4j with logback

For this you need to add the following dependency. Your pom.xml dependencies section would look like -

            <dependency>
                <groupId>ch.qos.logback</groupId>
                <artifactId>logback-classic</artifactId>
                <version>1.0.13</version>
            </dependency>

NOTE : you do not need slf4j-api here as logback has it as a compile time dependency. You can see that here - https://mvnrepository.com/artifact/ch.qos.logback/logback-classic/1.0.13.

NOTE : The logback-classic module can be assimilated to a significantly improved version of log4j. Moreover, logback-classic natively implements the SLF4J API so that you can readily switch back and forth between logback and other logging frameworks such as log4j or java.util.logging (JUL).  (Source)

You can visualize this with the following diagram -




As we saw with the log4j implementation, we need to supply a configuration file that tells the implementation how logging should work. In the case of logback it expects a file called logback.xml to be on the classpath. A sample file could be -

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n</Pattern>
        </layout>
    </appender>
    
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${catalina.home}/logs/springdemo.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <FileNamePattern>springdemo.%d{yyyy-MM-dd}.%i.log</FileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy
            class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>10MB</maxFileSize>
                </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>10</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date{HH:mm:ss.SSS} %-5p [%t] %c{1} - %m%n</pattern>
        </encoder>
        <append>true</append>
    </appender>    
    
    <root level="DEBUG">
        <appender-ref ref="STDOUT" />
    </root>

    <logger name="com.osfg" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="FILE" />
    </logger>
</configuration>




This is in the same app mentioned above. You can see this file here - https://github.com/aniket91/SpringFeaturesDemo/blob/master/src/logback.xml


And that's it. Your logging framework is all set to be used. We will now see how we can actually use this logger.


Using slf4j logger in your application

Following is a simple controller that uses slf4j logging (the implementation underneath can be anything - log4j, logback etc.)

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
/**
 * 
 * @author athakur
 * Test controller
 */
@Controller
public class TestController {
    
    Logger logger = LoggerFactory.getLogger(TestController.class);

    @RequestMapping(value="/test/{data}",method=RequestMethod.GET)
    public String test(@PathVariable String data, ModelMap model,
            HttpServletRequest request, HttpServletResponse response) {
        logger.debug("Received request for test controller with data : {}", data);
        model.put("adminName", properties.getAdminName());
        return "test";    
    }
}

You can just run the code and see that logging works. This is again part of the same Maven app I mentioned above for the logback implementation. You can see this file here - https://github.com/aniket91/SpringFeaturesDemo/blob/master/src/com/osfg/controllers/TestController.java





Related Links

Friday, 15 December 2017

Fixing 'Error:Unsupported method: BaseConfig.getApplicationIdSuffix()' issue in Android Studio

Background

I tried importing an old Android project into my Android Studio and the build was failing with the following error -

Error:Unsupported method: BaseConfig.getApplicationIdSuffix().
The version of Gradle you connect to does not support that method.
To resolve the problem you can change/upgrade the target version of Gradle you connect to.
Alternatively, you can ignore this exception and read other information from the model.




Solution

As the error itself says, the solution is to upgrade the version of the Android Gradle plugin (and with it the Gradle version used by the project).

Go to the project level build.gradle. For me it looks like below -

// Top-level build file where you can add configuration options common to all sub-projects/modules.

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:1.2.3'

        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        jcenter()
    }
}


Just upgrade the Android Gradle plugin version from 1.2.3 (or whatever you have) to 3.0.1. You can try other higher versions as well; this is what worked for me. Note that newer plugin versions are distributed via Google's Maven repository and need a newer Gradle, so you may also have to add google() to the buildscript repositories and let Android Studio update the Gradle wrapper when prompted. The updated build.gradle looks like below -

// Top-level build file where you can add configuration options common to all sub-projects/modules.

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.0.1'

        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        jcenter()
    }
}




Once you do that, click on "Try again" and the Gradle sync should go through.



Monday, 11 December 2017

Understanding the @Value annotation in Spring

Background

The Spring framework is based on the concept of dependency injection -
While wiring things up you may need to set the values of some variables in your Spring application based on the environment, or abstract them out to a properties file - for example the base URL of your application, or username/password and other database connection details. Basically, properties that may vary in each environment.

The @Value annotation is used for exactly this: to set the value of a variable from a properties file or from environment variables. We will see the usage of this annotation next.


Setup

Before you start using the @Value annotation you need to set up the properties file from which your configured values can be read. To set up the properties file you can use the @PropertySource annotation in your configuration class. Example -

package com.osfg.config;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

/**
 * 
 * @author athakur
 * Root application context
 * Services and data sources should go here - common to all web application contexts
 */
@Configuration
@ComponentScan({ "com.osfg" })
@PropertySource(value = { "classpath:com/osfg/resources/spring-props.properties" })
public class RootApplicationConfig {

}



Source : https://github.com/aniket91/SpringFeaturesDemo/blob/master/src/com/osfg/config/RootApplicationConfig.java
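One thing to keep in mind (depending on your Spring version and setup): for the ${...} placeholders used with @Value to be resolved against the @PropertySource above, a PropertySourcesPlaceholderConfigurer bean usually needs to be registered as a static @Bean. A minimal sketch (the class name is just for illustration; the bean can equally live in RootApplicationConfig itself):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

@Configuration
public class PlaceholderConfig {

    // Must be static: this is a BeanFactoryPostProcessor and has to be created
    // early, before the rest of the configuration class is processed.
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}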

You do not necessarily need the properties file. You can also set values from environment variables. We will see this part next.



Your properties file will have simple key=value content. Eg.
  • adminName=athakur
See https://github.com/aniket91/SpringFeaturesDemo/blob/master/src/com/osfg/resources/spring-props.properties


Usage

You can use this as follows -

package com.osfg.models;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import lombok.Data;

@Component
@Data
public class Properties {
    
    @Value("${adminName}")
    String adminName;
}



The above usage will automatically inject values from your properties file into your Java model class. Now you are free to inject the Properties class anywhere in your Spring project and access the variable.




@Controller
public class TestController {
    
    @Autowired
    Properties properties;
    
    
    Logger logger = LoggerFactory.getLogger(TestController.class);

    @RequestMapping(value="/test/{data}",method=RequestMethod.GET)
    public String test(@PathVariable String data, ModelMap model,
            HttpServletRequest request, HttpServletResponse response) {
        logger.debug("Received request for test controller with data : {}", data);
        model.put("adminName", properties.getAdminName());
        model.put("dara", data);
        return "test";
        
    }    
    
}

That's the basic usage.

NOTE : If the same property is present as a system property (or environment variable) and also in the properties file, then the system property takes precedence.


You can also directly give the value of the variable. Eg.

    @Value("athakur")
    String adminName;
}



You can also give a default value in the @Value annotation.


@Data
public class Properties {
    @Value("${adminName:athakur}")
    String adminName;
}


So if adminName is not defined in the system properties or in the properties file, the default value specified after ":" is picked up and used.

Advanced usage

You can also use this annotation in more advanced ways, as follows -

private String adminName;

@Value("#{config['adminName'] ?: 'athakur'}")
private String adminName;

@Value("#{someBean.adminName ?: 'athakur'}")
private String adminName;

Using the above you can choose to look up the value in system properties, a config file, or some predefined bean, with a fallback default. This uses the Elvis operator (?:) of the Spring Expression Language (SpEL).
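For example, SpEL exposes the JVM system properties through the predefined systemProperties variable, so a system property with a fallback default can be read in the same style (the class and field below are just for illustration):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class SystemPropsDemo {

    // Picks up -DadminName=... if set on the JVM, otherwise falls back to 'athakur'
    @Value("#{systemProperties['adminName'] ?: 'athakur'}")
    private String adminName;
}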


You can see a sample demo in my GitHub repo mentioned in the Related Links section below -

Related Links



Tuesday, 5 December 2017

Creating an addon for Jira Cloud with Atlassian Connect

Background

Atlassian has multiple products: Jira, Confluence, HipChat, Crucible etc. Jira is one of the most widely used issue tracking systems. In this post we are going to see how to create a plugin for Jira Cloud. This will include creating a small demo app on your local machine and deploying it.

NOTE : I will be using the words addon and plugin interchangeably. Both mean the same thing.

There are two different ways to create an Atlassian addon -
  1. Atlassian Connect
  2. Plugins 2 framework
Plugins built with Atlassian Connect are meant to run on a Jira Cloud instance, whereas plugins developed with the Plugins 2 framework are supposed to run in Jira Server.

Jira Cloud is the cloud version of Jira in which all you need to do is create an account and get started with Jira products, whereas Jira Server is the on-premise counterpart where you run your own Jira server with licenses. As you must have guessed, running an on-prem version gives you more flexibility in creating and developing addons, whereas there are a lot of constraints when developing a plugin for a Jira Cloud instance, since the developer does not have control over the Jira system and everything happens remotely.

In this post we are going to see how to develop a simple app using Atlassian Connect and deploy it to a Jira Cloud instance. All apps developed this way run remotely on your own hosted server. Jira Cloud makes it possible to integrate your hosted app with Jira. To an end user it will look like the plugin is running on Jira itself. That's the power of the Atlassian Connect framework. We will see this in detail in a moment.


Step 1. Get an Atlassian Cloud instance

  • Go to http://go.atlassian.com/cloud-dev and create your Jira Cloud instance for local plugin development.
  • This is a common account and you get multiple products with it, like Jira, Confluence etc.
  • Note that there are various limitations on this cloud development account, so you cannot add a lot of users and do stuff around it. You can read about these limitations on the same page.
  • Go through the steps shown and get your account set up. For me it is - https://athakur.atlassian.net
  • Next go to Jira. You should already be an administrator. You can do stuff like create projects, add users etc.
  • Now go to settings (cog icon at the top) > Add-ons > Manage add-ons
  • Next, on the Manage add-ons page, select Settings.
  • Here enable Development mode


Your Jira Cloud instance is all set up for plugin deployment. We will come back to this later. Let's go ahead and look at plugin development.

Step 2.  Setting up your local development environment

Now we are going to set up the local environment that is needed to develop our Jira Cloud addon.
We will need 2 npm modules to be installed. This obviously expects nodejs and npm to be installed on your machine. If they are not, please install them first.

  1. http-server
  2. ngrok
As I mentioned before, Jira Cloud apps based on Atlassian Connect are hosted remotely on your own servers and Jira Cloud just integrates them with the cloud instance. So we need http-server to host our Jira plugin on a server, and we need ngrok to make our local traffic accessible from the internet where the actual Jira Cloud instance is running (https://athakur.atlassian.net in this case). ngrok helps tunnel local ports to public URLs and inspect traffic. You can just run the following commands to set up the above modules -


sudo npm install -g http-server
sudo npm install -g ngrok
ngrok help


This should suffice for our local setup for now. We will come back to this when we have developed our app and need to deploy it.

Step 3. Building your app

The most basic file that is needed is named atlassian-connect.json. It is called the plugin descriptor file. It basically tells the Jira Cloud instance what your plugin is, where it resides etc. It needs to be supplied to the cloud instance while configuring your Jira addon there, which is why this file has to be available over the internet - hence the http-server and ngrok.

For now create a folder for your app. Let's call it helloworld-jira. Navigate to this folder and create a file called atlassian-connect.json with the following content -

{
     "name": "Hello World Jira",
     "description": "Sample Atlassian Connect app",
     "key": "com.osfg.helloworld",
     "baseUrl": "https://<YOUR-APP-URL>",
     "vendor": {
         "name": "OSFG",
         "url": "http://opensourceforgeeks.blogspot.in/"
     },
     "authentication": {
         "type": "none"
     },
     "apiVersion": 1,
     "modules": {
         "generalPages": [
             {
                 "url": "/helloworld.html",
                 "key": "hello-world",
                 "location": "system.top.navigation.bar",
                 "name": {
                     "value": "Welcome"
                 }
             }
         ]
     }
 
}


Couple of important points -
  • baseUrl is the URL where your app is hosted. We will supply our ngrok URL here, so leave it as a placeholder for now.
  • The other settings are really descriptions of your plugin and your company.
  • Next we have the generalPages section, which defines which pages are part of your plugin. Here we are defining just one page. We also give its relative path (relative to the base URL), location and a unique key.
  • You can have multiple types of pages, like -
    • generalPages
    • adminPages
    • profilePages
  • You can see more details on these at - https://developer.atlassian.com/cloud/confluence/modules/page/
  • Save your file with the above contents.
Next let's create the actual app page helloworld.html that we defined in the plugin descriptor file above. Create a file named helloworld.html and add the following content to it -

<!DOCTYPE html>

<html lang="en">
 <head>
     <link rel="stylesheet" href="//aui-cdn.atlassian.com/aui-adg/5.9.12/css/aui.min.css" media="all">
 </head>
 <body>
     <section id="content" class="ac-content">
         <div class="aui-page-header">
             <div class="aui-page-header-main">
                 <h1>Hello World from Jira!</h1>
             </div>
         </div>
     </section>
     <script id="connect-loader" data-options="sizeToParent:true;">
         (function() {
             var getUrlParam = function (param) {
                 var codedParam = (new RegExp(param + '=([^&]*)')).exec(window.location.search)[1];
                 return decodeURIComponent(codedParam);
             };
             var baseUrl = getUrlParam('xdm_e') + getUrlParam('cp');
             var options = document.getElementById('connect-loader').getAttribute('data-options');
             var script = document.createElement("script");
             script.src = baseUrl + '/atlassian-connect/all.js';
             if(options) {
                 script.setAttribute('data-options', options);
             }
             document.getElementsByTagName("head")[0].appendChild(script);
         })();
     </script>
 </body>
</html>


You need to understand a couple of things from the above HTML page before we proceed -

  • AUI is the Atlassian User Interface library. It gives you CSS to make your plugin look like a standard Jira page. For more details refer to - https://docs.atlassian.com/aui/
  • Next is just HTML content showing "Hello World from Jira!". We should be able to see this when we deploy our app to the Jira Cloud instance.
  • The next and last section just adds a script to the DOM. This script is the Atlassian Connect JavaScript API. It simplifies client interactions with the Atlassian application, e.g. making an XMLHttpRequest. The file can be found at the URL - https://<yourhostname.atlassian.net>/atlassian-connect/all.js
  • In my case it is https://athakur.atlassian.net/atlassian-connect/all.js. This should be present for all accounts.
  • You can read more about the JavaScript API at - https://developer.atlassian.com/cloud/jira/platform/about-the-javascript-api/
Once you have saved this file your app is ready. Let's see how we can deploy this.


Step 4. Deploy your app


The first step is to host your app on a server. So go to the helloworld-jira directory where our app resides and execute the following command -

  • http-server -p 8000
This should host your app on localhost on port 8000.




You can make sure your URLs are accessible -

  • http://localhost:8000/atlassian-connect.json
  • http://localhost:8000/helloworld.html

Next you need to make this accessible from the internet, and for this we will use the ngrok we have already set up. Just run the following command -

  • ngrok http 8000

This will tunnel our local traffic to the internet. You should be able to see the URLs that you can use.
We are interested in the https version of this URL -



You can again test your URLs with this to check your files are available. In my case they are -
  • https://8d543c3d.ngrok.io/atlassian-connect.json
  • https://8d543c3d.ngrok.io/helloworld.html

Once this is done you are pretty much all set up. Your app is built and is accessible from the internet. The last thing to do is update this URL in the baseUrl field of the descriptor file, which we had left as a placeholder. So your baseUrl is as follows -
  • "baseUrl": "https://8d543c3d.ngrok.io/"

Now simply go to Manage add-ons in the Jira Cloud instance we created in Step 1 and click on Upload add-on. Provide the URL to the atlassian-connect.json. In my case it is -
  • https://8d543c3d.ngrok.io/atlassian-connect.json

and your addon should get installed.




Now you can easily test out your addon. Just reload the page and you should see Welcome in the header section. Click on it and you should see our content - "Hello World from Jira!"



Production Deployment

This was local deployment and testing. For production you need a proper web server to host your app. You can use a service like Heroku, or AWS services like S3, EC2 or Elastic Beanstalk.

Related Links

Friday, 1 December 2017

How to enable and use cross account access to services in AWS with APIs - PART 2 - Assume Role

Background

Please refer to PART 1 of this post for details on the background and approach 1 to achieve cross account access for an S3 bucket using bucket policies -
In this post I will try to explain and demo how cross account access works with assume role. This is way more secure and flexible than approach 1 (and generic too - approach 1 was specific to S3).

This post assumes you have the required setup and prerequisite knowledge mentioned in PART 1. If you have not already, I would highly recommend reading PART 1 first.

So we are going to try the following: we already have an IAM user in Account A, and we will try accessing an S3 bucket of Account B using an assumed cross account role.

NOTE : Remove the bucket policy on the bucket if you set any while following PART 1 of this post.

Changes to the policy of the IAM user of Account A

Since in this approach we are going to call assume role, we need to give that access to the IAM user of Account A. So edit the inline policy of this IAM user to add the following statement -

        {
            "Sid": "Stmt1511168304001",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "*"
            ]
        }

This will basically allow the Account A IAM user to call sts:AssumeRole on any role in any account (for tighter security you could restrict the Resource to the specific role ARN of Account B instead of "*").

Cross account role setup

Before we start with the code, let's configure a cross account role in Account B.

Go to the IAM console of Account B and create a role as follows -

  • Select a cross account role -


  • Next provide the Account ID of Account A in the input. Also select the external ID requirement. The external ID provides added security. (In abstract terms, the external ID allows the user that is assuming the role to assert the circumstances in which they are operating. It also provides a way for the account owner to permit the role to be assumed only under specific circumstances. The primary function of the external ID is to address and prevent the "confused deputy" problem - more details)



  • Note the external ID we have used here. We are going to use it later. In this case we are using a string called - SECRET
  • Do not select any policies for now. We will come to that later. Just review, name your role and create it.

  • Now once you have finished creating this role, go to the role, select add inline policy and add the below policy -
 {
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1512121576471",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::aniket.help/*"
    }
  ]
}



NOTE : If you need help creating policies you can go to - AWS policy generator and generate policy from there.

NOTE : The reason for doing this is that we did not want to give our cross account role full S3 access, or even access to any bucket other than the aniket.help bucket.

Finally, note the role ARN. In this case it is -
  • arn:aws:iam::706469024316:role/athakur-cross-account-s3-access

Assuming the role and cross account access

Now that our cross account role is set up, let's go to the code where we can call assume role and access our S3 bucket.

Code is as follows -

    public static boolean validateUpload() {

        try {
            BasicAWSCredentials credentials = new BasicAWSCredentials(awsAcessKeyId, awsSecretKey);
            AssumeRoleRequest assumeRoleRequest = new AssumeRoleRequest().withRoleArn(ROLE_ARN)
                    .withExternalId(EXTERNAL_ID).withDurationSeconds(3600).withRoleSessionName("testSession");
            AWSSecurityTokenService stsClient = AWSSecurityTokenServiceClientBuilder.standard()
                    .withCredentials(new AWSStaticCredentialsProvider(credentials)).build();
            AssumeRoleResult assumeResult = stsClient.assumeRole(assumeRoleRequest);
            Credentials sessionCredentials = assumeResult.getCredentials();
            BasicSessionCredentials basicSessionCredentials = new BasicSessionCredentials(
                    sessionCredentials.getAccessKeyId(), sessionCredentials.getSecretAccessKey(),
                    sessionCredentials.getSessionToken());
            AmazonS3 s3client = AmazonS3ClientBuilder.standard().withRegion(BUCKET_REGION)
                    .withCredentials(new AWSStaticCredentialsProvider(basicSessionCredentials)).build();
            s3client.putObject(BUCKET_NAME, "test.txt", "This is from cross account!");

        } catch (AmazonServiceException ase) {
            System.out.println(
                    "Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
            ase.printStackTrace();
            return false;
        } catch (AmazonClientException ace) {
            System.out.println(
                    "Caught an AmazonClientException, which means the client encountered an internal error while trying to communicate with S3, such as not being able to access the network");
            System.out.println("Error Message: {}" + ace.getMessage());
            ace.printStackTrace();
            return false;
        } catch (Exception ex) {
            System.out.println("Got exception while validation bucket configuration.");
            ex.printStackTrace();
            return false;
        }
        return true;
    }
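The constants referenced above (ROLE_ARN, EXTERNAL_ID etc.) are not shown in the snippet; filled in with the values used in this post they would look roughly like this (treat them as placeholders for your own values):

    public static final String awsAcessKeyId = "REPLACE_THIS"; // access key of the Account A IAM user
    public static final String awsSecretKey = "REPLACE_THIS";
    public static final String BUCKET_NAME = "aniket.help";
    public static final String BUCKET_REGION = "us-east-1";
    public static final String ROLE_ARN = "arn:aws:iam::706469024316:role/athakur-cross-account-s3-access";
    public static final String EXTERNAL_ID = "SECRET";

    public static void main(String args[]) {
        System.out.println("validated Upload : " + validateUpload());
    }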

Now run it the same way we did in the previous post (PART 1). The output will be -
validated Upload : true
as expected.
NOTE : You can try different scenarios here, like changing the external ID, not using an external ID at all, or playing around with the policies. In all those cases you should get a not authorized error.


Similarly, the code for validateDownload() would be -

    public static boolean validateDownload() {

        try {
            BasicAWSCredentials credentials = new BasicAWSCredentials(awsAcessKeyId, awsSecretKey);
            AssumeRoleRequest assumeRoleRequest = new AssumeRoleRequest().withRoleArn(ROLE_ARN)
                    .withExternalId(EXTERNAL_ID).withDurationSeconds(3600).withRoleSessionName("testSession");
            AWSSecurityTokenService stsClient = AWSSecurityTokenServiceClientBuilder.standard()
                    .withCredentials(new AWSStaticCredentialsProvider(credentials)).build();
            AssumeRoleResult assumeResult = stsClient.assumeRole(assumeRoleRequest);
            Credentials sessionCredentials = assumeResult.getCredentials();
            BasicSessionCredentials basicSessionCredentials = new BasicSessionCredentials(
                    sessionCredentials.getAccessKeyId(), sessionCredentials.getSecretAccessKey(),
                    sessionCredentials.getSessionToken());
            AmazonS3 s3client = AmazonS3ClientBuilder.standard().withRegion(BUCKET_REGION)
                    .withCredentials(new AWSStaticCredentialsProvider(basicSessionCredentials)).build();
            GetObjectRequest rangeObjectRequest = new GetObjectRequest(BUCKET_NAME, "test.txt");
            rangeObjectRequest.setRange(0, 26);
            S3Object s3Object = s3client.getObject(rangeObjectRequest);
            BufferedReader reader = new BufferedReader(new InputStreamReader(s3Object.getObjectContent()));
            StringBuilder sb = new StringBuilder();
            String readLine;
            while ((readLine = reader.readLine()) != null) {
                sb.append(readLine);
            }
            System.out.println("Read File from S3 bucket. Content : " + sb.toString());

        } catch (AmazonServiceException ase) {
            System.out.println(
                    "Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
            ase.printStackTrace();
            return false;
        } catch (AmazonClientException ace) {
            System.out.println(
                    "Caught an AmazonClientException, which means the client encountered an internal error while trying to communicate with S3, such as not being able to access the network");
            System.out.println("Error Message: {}" + ace.getMessage());
            ace.printStackTrace();
            return false;
        } catch (Exception ex) {
            System.out.println("Got exception while validation bucket configuration.");
            ex.printStackTrace();
            return false;
        }
        return true;
    }


Again run it as we did in the last post. The output should be -
Read File from S3 bucket. Content : This is from cross account!
validated Download : true


Understanding the Workflow

Let's try to understand the workflow here -
  1. We have the credentials of the IAM user of Account A.
  2. We use these credentials to make the assume role call with the cross account role created in Account B to give Account A access.
  3. We also use the external ID to validate that the Account A user is authorized to make this call.
  4. When the assumeRole call is made, the 1st thing that is checked is whether this user has access to make the assume call. Since we added this in the inline policy of the IAM user of Account A, it goes through.
  5. The next check is whether assumeRole succeeds. This verifies that the user calling assumeRole belongs to the account configured in the cross account role of Account B and that the same external ID is used.
  6. Once these checks pass, the user from Account A gets temporary credentials corresponding to the role.
  7. Using these we can make the S3 upload/download calls.
  8. When these calls are made, it is checked whether the role has access to GET/PUT on S3. If not, access is denied. Since we explicitly added these policies for our cross account role, this step is also accepted.
  9. And finally we have access to S3 GET/PUT.
  10. But note that, due to our role policy, anyone assuming this role will only have access to GET/PUT on the aniket.help bucket - no other AWS service and no other S3 bucket. This is why roles and policies are so important.
  11. The same goes for the IAM user policy of the user in Account A. It can only make the sts:AssumeRole call and has access to S3. Nothing else.
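One practical note on step 6: the temporary credentials returned by AssumeRole expire (we asked for 3600 seconds above), so a long running caller should check the expiry and assume the role again when needed. A rough sketch, reusing the stsClient, assumeRoleRequest and sessionCredentials objects from the code above:

// Temporary credentials expire; re-assume the role and rebuild the S3 client if needed
if (sessionCredentials.getExpiration().before(new java.util.Date())) {
    sessionCredentials = stsClient.assumeRole(assumeRoleRequest).getCredentials();
}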



NOTE : A good thing about this approach is that Account B can also give the role access to KMS, so you can have KMS based encryption as well (which was not possible with the previous approach).
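For instance, once the role in Account B is also allowed to use a KMS key, the same upload can request SSE-KMS encryption. A hedged sketch (the key ARN is hypothetical; s3client and BUCKET_NAME are the ones built in the code above):

import java.io.File;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;

// Upload a local file encrypted at rest with a customer managed KMS key owned by Account B
PutObjectRequest putRequest = new PutObjectRequest(BUCKET_NAME, "test.txt", new File("test.txt"))
        .withSSEAwsKeyManagementParams(
                new SSEAwsKeyManagementParams("arn:aws:kms:us-east-1:706469024316:key/EXAMPLE-KEY-ID"));
s3client.putObject(putRequest);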


To summarize, the flow can be shown in a diagram as follows -


Again, this is just a simplistic overview. All the things that happen in the background are listed in the workflow section above.

 Related Links

Thursday, 30 November 2017

How to enable and use cross account access to services in AWS with APIs - PART 1 - Bucket Policies

Background

AWS is the most widely used cloud platform today. It is easy to use, cost effective and takes no time to set up. I could go on and on about its benefits over your own data center, but that's not the goal of this post. In this post I am going to show how you can access cross account services in AWS.

More specifically, I will demo accessing a cross account S3 bucket. I will show 2 approaches to do so. The 1st one is very specific to cross account bucket access and the 2nd is generic and can be used to access any service.

This post assumes you have basic knowledge of AWS services, specifically S3 and IAM (roles, policies, users).

IAM User Setup

Let's start by creating an IAM user in Account A (the account you own). Create a user with complete access to the S3 service. You can attach the S3 full access policy directly. Another way to do it is to attach an inline policy as follows -


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

NOTE : I have purposefully not provided a bucket name here. Since this is cross account bucket access, we may not know the bucket name of Account B beforehand.

Also enable programmatic access for this IAM user. We will need the access key ID and secret key to use in our API calls. You need to save these details somewhere, as you will not be able to retrieve them again from the AWS console - you would have to regenerate them.

Also note down the ARN of this IAM user. For me it is -
  • arn:aws:iam::499222264523:user/athakur
We will need these later in our setup.


Project Setup

You need to create a new Java project to test these changes out. I am using a Maven project for dependency management, but you can choose whatever you wish. You need a dependency on the AWS Java SDK.

        <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk</artifactId>
            <version>1.11.238</version>
        </dependency>


NOTE : Language should not be a barrier here. You can use any language you want - Python, Node.js etc. For this post I am going to use Java, but other languages have similar APIs.

Approach 1 (Using Bucket policies)

The 1st approach to cross account access for S3 buckets is to use S3 bucket policies. To begin with you need an IAM user in your own account (let's call it Account A). And then there is Account B, to whose S3 bucket you need read/write access.


Now let's say the name of the S3 bucket in the cross account (Account B) is aniket.help. Go ahead and configure the bucket policy for this bucket as follows -


 {
    "Version": "2012-10-17",
    "Id": "Policy1511782738232",
    "Statement": [
        {
            "Sid": "Stmt1511782736332",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::499222264523:user/athakur"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::aniket.help/*"
        }
    ]
}


The above bucket policy basically provides cross account access to our IAM user from Account A (notice the ARN is the same as that of the IAM user we created in Account A). Also note we are only giving permission for S3 GET, PUT and DELETE, and only on a very specific bucket named aniket.help.

NOTE : Bucket names are global and so is the S3 service, even though your bucket may reside in a particular AWS region. So do not try to use the same bucket name as above - you can use any other name you want.


Now you can run the following Java code to upload a file to the S3 bucket of Account B.


    public static boolean validateUpload() {
        
        try {
            BasicAWSCredentials credentials = new BasicAWSCredentials(awsAcessKeyId, awsSecretKey);
            AmazonS3 s3client = AmazonS3ClientBuilder.standard().withRegion(BUCKET_REGION)
                    .withCredentials(new AWSStaticCredentialsProvider(credentials)).build();
            s3client.putObject(BUCKET_NAME, "test.txt", "This is from cross account!");
            
        }catch (AmazonServiceException ase) {
            System.out.println(
                    "Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason." );
            System.out.println("Error Message:    " +  ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
            ase.printStackTrace();
            return false;
        } catch (AmazonClientException ace) {
            System.out.println(
                    "Caught an AmazonClientException, which means the client encountered an internal error while trying to communicate with S3, such as not being able to access the network");
            System.out.println("Error Message: {}" +  ace.getMessage());
            ace.printStackTrace();
            return false;
        } catch (Exception ex) {
            System.out.println("Got exception while validation bucket configuration.");
            ex.printStackTrace();
            return false;
        }
        return true;
    } 


NOTE : Replace BUCKET_NAME and BUCKET_REGION with the actual bucket name and region that you have created in Account B. Also replace awsAcessKeyId and awsSecretKey with the credentials of the IAM user we created in Account A.

You can simply run this and validate the output -

    public static final String awsAcessKeyId = "REPLACE_THIS";
    public static final String awsSecretKey = "REPLACE_THIS";
    public static final String BUCKET_NAME = "aniket.help";
    public static final String BUCKET_REGION = "us-east-1";
    
    public static void main(String args[]) {
        System.out.println("validated Upload : " + validateUpload());
    }

You should get -
validated Upload : true

You can verify the file is actually uploaded to the S3 bucket.



Let's do the same for download as well.


Code is as follows -

    public static boolean validateDownload() {

        try {
            BasicAWSCredentials credentials = new BasicAWSCredentials(awsAcessKeyId, awsSecretKey);
            AmazonS3 s3client = AmazonS3ClientBuilder.standard().withRegion(BUCKET_REGION)
                    .withCredentials(new AWSStaticCredentialsProvider(credentials)).build();
            GetObjectRequest rangeObjectRequest = new GetObjectRequest(BUCKET_NAME, "test.txt");
            rangeObjectRequest.setRange(0, 26);
            S3Object s3Object = s3client.getObject(rangeObjectRequest);
            BufferedReader reader = new BufferedReader(new InputStreamReader(s3Object.getObjectContent()));
            StringBuilder sb = new StringBuilder();
            String readLine;
            while ((readLine = reader.readLine()) != null) {
                sb.append(readLine);
            }
            System.out.println("Read File from S3 bucket. Content : " + sb.toString());

        } catch (AmazonServiceException ase) {
            System.out.println(
                    "Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
            ase.printStackTrace();
            return false;
        } catch (AmazonClientException ace) {
            System.out.println(
                    "Caught an AmazonClientException, which means the client encountered an internal error while trying to communicate with S3, such as not being able to access the network");
            System.out.println("Error Message: {}" + ace.getMessage());
            ace.printStackTrace();
            return false;
        } catch (Exception ex) {
            System.out.println("Got exception while validation bucket configuration.");
            ex.printStackTrace();
            return false;
        }
        return true;
    }


You can test it out as  -


    public static void main(String args[]) {
        System.out.println("validated Download : " + validateDownload());

    }
   


and the output is as follows -
Read File from S3 bucket. Content : This is from cross account!
validated Download : true


Drawback : The drawback of using a bucket policy is that Account B cannot use KMS encryption on their bucket, since the IAM user of Account A does not have access to the KMS key of Account B. They can still use AES-256 encryption. (These are encryption at rest - S3 takes care of encrypting files before saving them to disk and decrypting them before sending them back.) This can be resolved by taking approach 2 (assume role).

NOTE : Security is the most important aspect in the cloud, since potentially anyone can access it. It is the responsibility of the individual setting these up to ensure everything is securely deployed. Never give out your IAM credentials or check them into any repository. Make access roles and policies as granular as you can. In the above case, if you need just GET and PUT, provide only those in the IAM policy. Do not use wildcards there.
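In the same spirit, instead of hardcoding awsAcessKeyId/awsSecretKey you can let the SDK resolve credentials on its own (environment variables, the ~/.aws/credentials file, or an EC2 instance profile). A small sketch, assuming one of those is already configured for the Account A IAM user:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Resolves credentials from the environment, ~/.aws/credentials or an instance profile,
// so nothing sensitive is checked into source control
AmazonS3 s3client = AmazonS3ClientBuilder.standard()
        .withRegion(BUCKET_REGION)
        .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
        .build();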

Stay tuned for PART 2 of this post. In it we will see how we can use assume role to access any service in Account B (securely of course). We need not use a bucket policy in that case.

Part 2 - How to enable and use cross account access to services in AWS with APIs - PART 2 - Assume Role


CORS - Cross origin resource sharing

Note: if you are trying to access the S3 bucket from a domain different from the domain of the actual site, then you need to set a CORS policy on your bucket (not applicable for the above demo) -

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <ExposeHeader>ETag</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>


The above allows all types of requests. You can restrict it as per your use case.

Related Links
