Saturday, 24 September 2016

How to delete recently opened files history in Ubuntu 14.04

Background

In this post we will see how to delete recently opened files from Ubuntu's Unity dashboard. Why would you do that, you ask? The answer is privacy, though it largely depends on the use case. There is little need for this on a strictly personal computer, but if the machine is shared it is better to delete your history when you leave.


How to delete recently opened files history in Ubuntu 14.04

Open the Security & Privacy settings from the Unity dashboard.




Then go to Files & Applications, click Clear Usage Data, and select the period you want to clear data for -



Click Ok and you are done.
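If you prefer the terminal, the same history can be cleared by removing the files where Unity keeps it. A sketch, assuming the default locations used by GTK's recent-files list and the Zeitgeist activity logger that backs the dashboard:

```shell
# Assumed default locations - adjust if your setup differs.
# GTK's recently-used files list:
rm -f ~/.local/share/recently-used.xbel
# Zeitgeist activity log used by the Unity dashboard:
rm -rf ~/.local/share/zeitgeist
```

You may need to log out and back in for the dashboard to reflect the change.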






Wednesday, 21 September 2016

Programmatically upload files to amazon (AWS) S3

Background

This post will show you how to programmatically upload files to your AWS S3 account using the AWS S3 SDK for Java.

It assumes you already have an S3 account set up in your AWS console.



Credentials setup


Let's start by creating an access key. The following screenshots show how to create one using AWS IAM (Identity and Access Management).






Once you have created a user you need to give it access to S3. Follow the next set of steps for that -



This will give your new user access to S3. Now you can use these credentials in the code. Before we get to the code there is one more step: you need to set up your credentials file. It is located at
  • ~/.aws/credentials
If it is not there, create one and add the credentials of the user you created in the IAM console, as shown below.
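The credentials file uses the standard AWS profile format. A minimal example with placeholder values (substitute your own access key ID and secret access key from the IAM console):

```ini
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```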


I have used placeholder values here; replace them with your exact access key ID and secret key. Let's go on to the code now.

Programmatically upload files to amazon s3

I am using Eclipse with Ivy dependency management, so your ivy file should look like the following -

<ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd">
    <info
        organisation="osfg"
        module="AwsS3Demo"
        status="integration">
    </info>
    
    <dependencies>
        <dependency org="com.amazonaws" name="aws-java-sdk-s3" rev="1.11.36"/>
    </dependencies>
</ivy-module>


Note the dependency we have used: it is for the AWS S3 module only.

Now let's head on to the code -

package com.osfg;

import java.io.File;
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;

/**
 * 
 * @author athakur
 *
 */
public class AwsS3Demo {
    
    private static String AWS_BUCKET_NAME = "test-athakur";
    private static String AWS_KEY_NAME = "testData";
    private static String UPLOAD_FILE = "/Users/athakur/Desktop/data.txt";
    
    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            System.out.println("Uploading a new object to S3 from a file\n");
            File file = new File(UPLOAD_FILE);
            s3client.putObject(new PutObjectRequest(
                    AWS_BUCKET_NAME, AWS_KEY_NAME, file));

         } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response" +
                    " for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }

}

Just run the above code and it should upload your file to the AWS bucket. Make sure -
  1. The credentials are correct in the ~/.aws/credentials file
  2. The credentials have access to the S3 module
  3. The file is present on your machine
  4. The bucket name is valid
NOTE : ProfileCredentialsProvider internally uses the ~/.aws/credentials file for the credentials to authenticate against AWS S3.

You can also find the above code snippet at -
  • http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpJava.html

If all goes correctly the file should get uploaded to S3 -








Saturday, 17 September 2016

Spring configuration files XML schema versions

Background

If you have previously worked on a Spring web project then you must have come across Spring configuration files. They are XMLs which reference the namespaces they use and the schema versions they point to. Pick up any one of the Spring posts we have seen before -
You would see Spring configuration files looking something like -

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans 
    http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">


Notice the 2.5 version - it's pretty old now. You can find all the schema versions here -
But as your project evolves, who remembers to update these files? You upgrade the Spring version (jars) as you go forward, but these configuration files remain as is. Let's see how we can resolve this problem.

Spring configuration XML schema: with or without version?

So how do you resolve this, you ask? Do not specify the version. Yes, I repeat: do not specify the version. It should be something like -

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans 
    http://www.springframework.org/schema/beans/spring-beans.xsd">

When you do not specify a version, Spring picks the latest schema available in the jar on your classpath, so the schema upgrades automatically when you upgrade the jar. The schema mappings live in a file called spring.schemas inside the spring-beans jar -
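The file is a properties-style mapping from schema URL to the XSD bundled inside the jar. An illustrative excerpt (the exact entries depend on your jar version; this sketch assumes a spring-beans 4.3 jar):

```properties
http\://www.springframework.org/schema/beans/spring-beans-2.5.xsd=org/springframework/beans/factory/xml/spring-beans-2.5.xsd
http\://www.springframework.org/schema/beans/spring-beans-4.3.xsd=org/springframework/beans/factory/xml/spring-beans-4.3.xsd
http\://www.springframework.org/schema/beans/spring-beans.xsd=org/springframework/beans/factory/xml/spring-beans-4.3.xsd
```

Note how the versionless URL maps to the newest XSD bundled in that jar.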



From this file Spring resolves the latest schema version without you needing to worry about it.



NOTE : You should try to move away from Spring XML configuration entirely and start using Java class based configuration and annotations.



Saturday, 10 September 2016

What's new in Java 8?

Background

In this post we'll see what new features and changes have come in the Java 8 release.

What's new?

  • Default methods are introduced in Java 8, which means you can provide a method with a body in your interface and concrete classes need not implement it. They can override it though. For this you use the default keyword on the method. More details - 
  • Java 8 has also introduced lambda expressions, which use functional interfaces. You can see more details below - 
  • As you know, for local variables to be accessed by methods in anonymous classes, the local variable needed to be declared final. However, from Java 8 it is accessible even if it is only effectively final. More details - 
  • As we know, variables in an interface are implicitly public, static and final, and methods were implicitly public and abstract. Variables remain the same, but with the default methods we saw in point 1, non-abstract methods are now possible in an interface. Static methods are also allowed in interfaces now. The following code snippet works from Java 8 -

    public interface TestInterface {
        
        String NAME = "Aniket";    //public static final 
        String getName();    //public abstract
        
        default String getDefaultName() { // non static default method
            return "Abhijit";
        }
        
        static String getNonDefaultStaticName() { // static method
            return NAME;
        }
    
    }
    
  • Changes in HashMap : performance has been improved by using balanced trees instead of linked lists under specific circumstances. This has been implemented only in the classes -
    • java.util.HashMap,
    • java.util.LinkedHashMap and 
    • java.util.concurrent.ConcurrentHashMap.

      This will improve the worst case performance from O(n) to O(log n).
  • Java 8 introduces another new syntax called method references, covered in a new post -
  • Java 8 also introduces a new class called Optional, which is a better way to represent values that may not be present, instead of using null and adding null checks -
  • Lastly, another major change is the Stream API. You can read all about it in the following post -
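The points above about default methods, static interface methods and lambdas can be seen working together in a small runnable sketch (the names here are made up for illustration):

```java
interface Greeter {
    String name(); // the single abstract method

    // Default method - implementing classes inherit this body
    default String greet() {
        return "Hello, " + name();
    }

    // Static method on the interface itself (allowed from Java 8)
    static Greeter of(String name) {
        return () -> name; // lambda implements the single abstract method
    }
}

public class Java8InterfaceDemo {
    public static void main(String[] args) {
        Greeter g = Greeter.of("Aniket");
        System.out.println(g.greet()); // prints "Hello, Aniket"
    }
}
```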



Understanding Java 8 Stream API

Background

Java 8 has introduced a new set of APIs involving streams. They look very powerful in terms of processing and also use the functional programming features we have seen in the last couple of posts (refer to the links in the Related Links section at the bottom of this post). In this post we will see what these streams are and how we can leverage them.

A stream in Java is essentially a sequence of data that you operate on through what is called a pipeline. A stream pipeline comprises 3 parts -

  1. Source : Think of it as the data set that is used to generate the stream. Depending on the data set, a stream can be finite or infinite.
  2. Intermediate operations : These are operations you perform on the given data set to filter or process your data. You can have as many intermediate operations as you desire. Each intermediate operation returns the processed stream so that you can perform more intermediate operations on it. Since streams use lazy evaluation, the intermediate operations do not run until the terminal operation runs.
  3. Terminal operation : This actually produces a result. There can be only one terminal operation. As a stream can be used only once, it becomes invalid after the terminal operation.
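The three parts can be seen in a minimal pipeline (a sketch):

```java
import java.util.Arrays;
import java.util.List;

public class PipelineDemo {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5, 6);
        int sumOfEvens = nums.stream()           // source
                .filter(n -> n % 2 == 0)         // intermediate operation
                .mapToInt(Integer::intValue)     // intermediate operation
                .sum();                          // terminal operation
        System.out.println(sumOfEvens);          // prints 12
    }
}
```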





NOTE : Intermediate operations return a new stream. They are always lazy; executing an intermediate operation such as filter() does not actually perform any filtering, but instead creates a new stream that, when traversed, contains the elements of the initial stream that match the given predicate. Traversal of the pipeline source does not begin until the terminal operation of the pipeline is executed.

Intermediate vs terminal operations



Creating a Stream

You can create Streams in one of the following ways -

        Stream<String> emptyStream = Stream.empty();
        Stream<Integer> singleElementStream = Stream.of(1);
        Stream<Integer> streamFromArray = Stream.of(1,2,3,4);
        List<String> listForStream = Arrays.asList("ABC","PQR","XYZ");
        Stream<String> streamFromList = listForStream.stream();
        Stream<Double> randomInfiniteStream = Stream.generate(Math::random);
        Stream<Integer> sequencedInfiniteStream = Stream.iterate(1, n -> n+1);



Line 1 creates an empty stream. Line 2 creates a stream having one element. Line 3 creates a stream containing multiple elements. Line 5 creates a stream out of an existing List (created on line 4). Lines 6 and 7 generate infinite streams: line 6 takes a Supplier as the argument to generate the sequence, whereas line 7 takes a seed (an integer to start with) and a UnaryOperator used to generate the sequence.

If you try to print out an infinite stream, your program will hang until you terminate it. You can try -

sequencedInfiniteStream.forEach(System.out::println);

Terminal and intermediate Stream operations

We will not go into the details of each terminal and intermediate stream operation. Instead I will list them out and then we will see examples.

Common terminal operations
  1. allMatch()/anyMatch()/noneMatch()
  2. collect()
  3. count()
  4. findAny()/findFirst()
  5. forEach()
  6. min()/max()
  7. reduce()
Common intermediate operations
  1. filter()
  2. distinct()
  3. limit() and skip()
  4. map()
  5. sorted()
  6. peek()
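A few of the listed operations that do not appear in the examples below can be sketched quickly (sample data is made up for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class StreamOpsDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Aniket", "Amit", "Aditi");

        // allMatch - do all elements satisfy the predicate?
        boolean allStartWithA = names.stream().allMatch(s -> s.startsWith("A"));
        System.out.println(allStartWithA);            // true

        // count - number of elements in the stream
        System.out.println(names.stream().count());   // 3

        // reduce - fold the elements into a single value
        Optional<String> joined = names.stream().reduce((a, b) -> a + "," + b);
        System.out.println(joined.get());             // Aniket,Amit,Aditi
    }
}
```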

Now let's start with how to print a Stream's content, because that's what we do when we are in doubt.

You can print a Stream in one of the following ways -

        List<String> listForStream = Arrays.asList("ABC","PQR","XYZ");
        Stream<String> streamFromList = listForStream.stream();
        //printing using forEach terminal operation
        streamFromList.forEach(System.out::println);
        //recreate stream as stream once operated on is invalid
        streamFromList = listForStream.stream();
        //printing using peek intermediate operation
        streamFromList.peek(System.out::println).count();
        streamFromList = listForStream.stream();
        //printing using collect terminal operation
        System.out.println(streamFromList.collect(Collectors.toList()));


Line 4 uses the forEach terminal operation to print out the Stream. It takes a consumer as the argument, which in this case is "System.out::println". We have used a method reference here because that's common, but the corresponding lambda expression would be "s -> System.out.println(s)".
Line 8 uses peek, which is an intermediate operation to look at the stream elements. It also takes a consumer as the argument. Lastly, in Line 11 we have used the collect terminal operation to collect the results as a List and then print it out. You can define your own Collectors or use the ones Java provides in the java.util.stream.Collectors class. For example, here we have used Collectors.toList().

Note that if you have an infinite Stream these print methods will hang and you will have to manually terminate the program.

Also note that you cannot modify the base data structure directly while using it in a Stream. So -

        List<String> listForStream = new ArrayList<>(Arrays.asList("ABC","PQR","XYZ"));
        Stream<String> streamFromList = listForStream.stream();
        streamFromList.forEach(elm -> listForStream.remove(elm));
        System.out.println(listForStream);


will give you -

Exception in thread "main" java.util.ConcurrentModificationException
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1380)
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
    at HelloWorld.main(HelloWorld.java:34)


as you are iterating on a List and modifying it simultaneously. Instead you could filter the stream -

        List<String> listForStream = Arrays.asList("ABC","PQR","XYZ");
        Stream<String> streamFromList = listForStream.stream();
        listForStream = streamFromList.filter(x -> x.contains("A")).collect(Collectors.toList());
        System.out.println(listForStream);


You will get - [ABC]

Examples of Streams usage

Let's see examples of common usage now.

Let's say you have a list of names. You want all the names from that list that start with A, sorted, and you want to return 3 of them.

        List<String> listForStream = Arrays.asList("Aniket", "Amit", "Ram", "John", "Anubhav", "Kate", "Aditi");
        Stream<String> streamFromList = listForStream.stream();
        streamFromList
        .filter(x -> x.startsWith("A"))
        .sorted()
        .limit(3)
        .forEach(System.out::println);



You will get :

Aditi
Amit
Aniket

Let's see what we did here. First we get the stream out of the List, then we add a filter to keep only those elements in the stream which start with A. Next we call sorted, which sorts the sequence of data remaining in the stream - a natural sort on the names. Lastly we limit to 3 entries and print them.

Now guess what the following code does -

        Stream.iterate(1, n -> n+1)
        .filter(x -> x%5==0)
        .limit(5)
        .forEach(System.out::println);


And the output is -
5
10
15
20
25

Firstly we are creating an infinite Stream here using iterate. It generates the sequence 1, 2, 3, 4, 5... and so on. Next we apply a filter to keep only multiples of 5. Then we limit to only 5 such results, which reduces our infinite stream to a finite one. Lastly we print out those 5 results. Hence the result.

Now let's move on to using peek -

        Stream.iterate(1, n -> n+1)
        .filter(x -> x%5==0)
        .peek(System.out::println)
        .limit(5)
        .forEach(System.out::println);


What would the above code snippet print? The answer is -
5
5
10
10
15
15
20
20
25
25

So here each element is printed once by peek (after the filter) and once by forEach (after the limit). Hence the result.

Similarly we have streams for primitives as well. There are three types of primitive streams:
  • IntStream: Used for the primitive types int, short, byte, and char
  • LongStream: Used for the primitive type long
  • DoubleStream: Used for the primitive types double and float
They additionally have range() and rangeClosed() methods. Calling range(1, 100) on IntStream or LongStream creates a stream of the primitives from 1 to 99, whereas rangeClosed(1, 100) creates a stream of the primitives from 1 to 100. The primitive streams also have math operations including average(), max(), and sum(), plus a summaryStatistics() method to get many statistics in one call.

E.g. -

private static int range(IntStream ints) {
    IntSummaryStatistics stats = ints.summaryStatistics();
    if (stats.getCount() == 0) throw new RuntimeException();
    return stats.getMax() - stats.getMin();
}
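summaryStatistics() can be exercised directly on a ranged stream; a runnable sketch:

```java
import java.util.IntSummaryStatistics;
import java.util.stream.IntStream;

public class SummaryStatsDemo {
    public static void main(String[] args) {
        // Collect min, max, sum, count and average in a single pass
        IntSummaryStatistics stats = IntStream.rangeClosed(1, 100).summaryStatistics();
        System.out.println(stats.getMin());     // 1
        System.out.println(stats.getMax());     // 100
        System.out.println(stats.getSum());     // 5050
        System.out.println(stats.getAverage()); // 50.5
    }
}
```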


Also there are functional interfaces specific to streams.


