Saturday 27 July 2019

How to install IntelliJ Idea plugin from local disk

Background

In the last post, we saw a basic tutorial on how to create a custom plugin for IntelliJ IDEs. In this post, I will show you how to install a plugin you have on your local disk.

How to install IntelliJ Idea plugin from local disk

To install a plugin from local disk, go to Settings in the IDE (Ctrl+Alt+S) -> Plugins. Next, click on the gear icon and select "Install plugin from disk".




Select the zip file of your plugin that you have stored locally and click OK. The plugin should get installed. You may have to restart the IDE for the change to take effect.



Now you can see the plugin in the Installed tab of the Plugins section of Settings.




Directories used by the IDE to store plugins

If you are wondering where plugins are installed, the path is config\plugins under your IDEA directory. For me it is:

  • C:\Users\anike\.IdeaIC2019.1\config\plugins
The IDE should put your plugin jar in the above directory.







Related Links


Creating an Intellij plugin

Background

IntelliJ IDEA is one of the most popular IDEs (Integrated Development Environments) used for Java development. IntelliJ comes in a number of variants, like -
  1. Pycharm - For Python
  2. Webstorm - For web development
  3. IDEA - For Java
etc. In this post, I will show how you can write your own plugin for any of these IDEs. To develop a plugin you need the IntelliJ IDEA IDE; you can use it to create a plugin for any other variant. In fact, the underlying platform remains the same, so you can create a plugin that can potentially work in all of these IDEs.

To start with, download IntelliJ IDEA. I am using version 2019.1.3 (Community Edition).



Idea

In this plugin, we are going to add a simple action that takes the selected text and searches for it on the Stack Overflow site. The action will be visible when you right-click in the editor panel of your IDE. Let's see how to do that.

Creating an IntelliJ plugin


Open IDEA and create a new project: File -> New -> Project -> IntelliJ Platform Plugin



Once done, click Next, enter your project name and submit. This should create a new project for you. One of the important files is Project\resources\META-INF\plugin.xml. This gives information about your plugin. Think of it as the manifest file of an Android project (if you have worked on Android apps before). For me the location is C:\Users\anike\IdeaProjects\StackOverflowSearch\resources\META-INF\plugin.xml and my project name is StackOverflowSearch.

NOTE: You can choose Groovy as well to develop your plugin. I have selected default, which uses Java.

In the source folder create a class called StackoverflowSearch. This is going to be our action class. Make this class extend com.intellij.openapi.actionSystem.AnAction. AnAction is an abstract class provided by the IntelliJ Platform SDK. Once you extend it, you will have to implement its abstract methods -

    @Override
    public void actionPerformed(@NotNull AnActionEvent anActionEvent) {
        
    }

Then you can add the following code to complete your simple action -

import com.intellij.ide.BrowserUtil;
import com.intellij.openapi.actionSystem.AnAction;
import com.intellij.openapi.actionSystem.AnActionEvent;
import com.intellij.openapi.actionSystem.CommonDataKeys;
import com.intellij.openapi.editor.CaretModel;
import com.intellij.openapi.editor.Editor;
import com.intellij.psi.PsiFile;
import org.jetbrains.annotations.NotNull;


public class StackoverflowSearch extends AnAction {
    @Override
    public void actionPerformed(@NotNull AnActionEvent anActionEvent) {
        PsiFile file = anActionEvent.getData(CommonDataKeys.PSI_FILE);
        Editor editor = anActionEvent.getRequiredData(CommonDataKeys.EDITOR);
        CaretModel caretModel = editor.getCaretModel();
        String selectedText = caretModel.getCurrentCaret().getSelectedText();
        BrowserUtil.open("https://stackoverflow.com/search?q=" + selectedText);
    }
}


This essentially gets the selected text from your editor and opens a browser URL with that text as the query parameter. If you do not understand much of this code, don't worry, just go to the official documentation and see what each class means. For e.g. PsiFile - it is the file representation in the IntelliJ Platform world. You can see more details here.
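
As an optional refinement (my own addition, not something the original action needs), you can also override AnAction's update() method so that the menu entry is enabled only when some text is actually selected -

import com.intellij.openapi.actionSystem.AnAction;
import com.intellij.openapi.actionSystem.AnActionEvent;
import com.intellij.openapi.actionSystem.CommonDataKeys;
import com.intellij.openapi.editor.Editor;
import org.jetbrains.annotations.NotNull;

public class StackoverflowSearch extends AnAction {
    // Called by the IDE to decide whether the action should be shown/enabled.
    @Override
    public void update(@NotNull AnActionEvent e) {
        Editor editor = e.getData(CommonDataKeys.EDITOR);
        boolean hasSelection = editor != null && editor.getSelectionModel().hasSelection();
        e.getPresentation().setEnabledAndVisible(hasSelection);
    }

    @Override
    public void actionPerformed(@NotNull AnActionEvent e) {
        // same search logic as shown above
    }
}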



Once done, you will have to register this action in the plugin.xml file we saw above. In this file, you should see the <actions> tag. Add the below content inside it.


    <action
            id="Action.Stackoverflow.Search"
            class="StackoverflowSearch"
            text="Search Text on Stack Overflow"
            description="Search Text on Stack Overflow">
      <add-to-group group-id="EditorPopupMenu" anchor="last"/>
    </action>

Once done, you are all set to go. The only important thing to note above is the add-to-group group-id attribute. EditorPopupMenu means this action will be shown in the editor when you right-click. You could use other possible values to show it in the console or the top menu bar.

My complete plugin.xml looks like below -

<idea-plugin>
  <id>com.your.company.unique.plugin.id</id>
  <name>Stackoverflow Search Plugin</name>
  <version>1.0</version>
  <vendor email="opensourceforgeeks@gmail.com" url="http://opensourceforgeeks.blogspot.com/">OSFG</vendor>

  <description><![CDATA[
      Simple plugin to open selected text in the Stack Overflow site
    ]]></description>

  <change-notes><![CDATA[
      Simple plugin to open selected text in the Stack Overflow site
    ]]>
  </change-notes>

  <!-- please see http://www.jetbrains.org/intellij/sdk/docs/basics/getting_started/build_number_ranges.html for description -->
  <idea-version since-build="173.0"/>

  <!-- please see http://www.jetbrains.org/intellij/sdk/docs/basics/getting_started/plugin_compatibility.html
       on how to target different products -->
  <!-- uncomment to enable plugin in all products
  <depends>com.intellij.modules.lang</depends>
  -->

  <extensions defaultExtensionNs="com.intellij">
    <!-- Add your extensions here -->
  </extensions>

  <actions>
    <!-- Add your actions here -->
    <action
            id="Action.Stackoverflow.Search"
            class="StackoverflowSearch"
            text="Search Text on Stack Overflow"
            description="Search Text on Stack Overflow">
      <add-to-group group-id="EditorPopupMenu" anchor="last"/>
    </action>
  </actions>

</idea-plugin>



Now you can simply run your project.



A run configuration should automatically be created when you click Run. It should be similar to the following -


NOTE: Notice that the JRE is the IntelliJ IDEA SDK.

This should start a new IDE instance with your plugin activated. You can verify your plugin is installed by going to Settings (Ctrl+Alt+S) -> Plugins -> Installed.


Now you can select some text, right-click and see the "Search Text on Stack Overflow" action. Click it and it should open the Stack Overflow site with your selected text as the search parameter.



Distributing the plugin

To distribute the plugin, simply right-click your plugin project and select "Prepare plugin module for deployment". This should export a zip file that can be distributed. You should see a message like below when you have selected the above option.



To know how to install a plugin from local disk, refer to - How to install IntelliJ Idea plugin from local disk

You can also put it into a plugin repository for others to use instead of distributing the zip file. For more details on the plugin repository, see here. I will be adding more details on how to create an IntelliJ plugin, so stay tuned.


Related Links

Sunday 26 May 2019

How to remove computer name from prompt in Ubuntu terminal

Background

If you are using the Ubuntu terminal, you must have noticed that the prompt looks like below -


Notice that the prompt looks like -

athakur@athakur-Inspiron-7572:~/Documents/code$

The problem is that this prompt is fairly long, and as you navigate into deeply nested directories there is much less space left to type actual commands. In this post, I will show how to fix this.

How to remove computer name from prompt in Ubuntu terminal


To fix this you need to edit the ~/.bashrc file. In this file there is a variable called PS1 that governs how your command prompt looks. If you open ~/.bashrc you should see something like below -

if [ "$color_prompt" = yes ]; then
    PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
    PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt

# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
    PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
    ;;
*)
    ;;
esac



Let me explain a little bit about the PS1 variable and the snippet above.

  • debian_chroot: This is applicable if you are running in a chroot (your root directory is different from the default one). If you do not know what chroot is, do not worry; this is just an empty variable and can be ignored for now.
  • xterm: This is for the xterm terminal emulator. If you are using xterm, the PS1 under this case takes effect.
  • color_prompt: This is the default case for most people, as your terminal most likely supports a color prompt. So the PS1 under "$color_prompt" = yes is the one you need to edit.
If you have not heard of these, don't worry about it; just change the PS1 variable under the color_prompt if statement. The escapes used in it are -

  • \u: expands to the current username
  • \h: expands to the current hostname
  • \w: expands to the current working directory
  • \$: expands to # for root and $ for all other users

Now that you know what each escape means, and once you have figured out which PS1 to change, you can simply remove "@\h" from it.



So change

if [ "$color_prompt" = yes ]; then

    PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '


to

if [ "$color_prompt" = yes ]; then

    PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '



Once done, save it and run -

  • source ~/.bashrc
or simply open a new terminal and you should see the change.


Related Links


Customizing tray/taskbar date display in Ubuntu to show the actual date

Background

In my Ubuntu 18.04.2 LTS, I see that the date/time displayed in the taskbar at the top of the desktop looks like - "Sun 12.00 PM".

 

You have to actually click on it to see the current date. In this post, I will show you how to add the date to the format that you see at the top. It would have been much easier if Ubuntu gave us this customization in Settings itself, but unfortunately, that is not the case, at least not yet.

Customizing tray/taskbar date display in Ubuntu to show the actual date

There are two ways you can do this. I will show the more user-friendly method first. But if you are more comfortable with the Linux command line, go to the "2. Advanced - CLI Way" section below.



1. User-Friendly - GUI Way

If you do not prefer using the command line, then you can install a GUI-based tool called "gnome-tweak-tool". You can search for "GNOME Tweaks" in the Ubuntu Software Center and install it.




Launch it and you should see the following screen. Go to Top Bar for date settings.





You can change the configuration as needed. You can see there are a bunch of other options here as well - like showing the battery percentage. All yours to play with :)


NOTE: If you prefer apt-get over the Ubuntu Software Center to install software, you can use the following commands to install and launch the above application.

sudo apt install gnome-tweak-tool
gnome-tweaks  #  now launch it





2. Advanced - CLI Way

There are two commands that you need to know here -
  1. gsettings set org.gnome.desktop.interface clock-show-date true
    • makes the date appear
  2. gsettings set org.gnome.desktop.interface clock-show-seconds true
    • switches the seconds display on
You can similarly replace "set" with "get" to see the current values. By default, both values are set to false, which is why you don't see the date or seconds.



Now let's go ahead and set these values to true and see the change of date/time format in the taskbar above.



And the result is -




I personally don't like seconds showing up; just the date works for me, so I have set it accordingly. You can use the settings that best suit you. You can do a similar thing with the battery percentage as well with the following command -

  • gsettings set org.gnome.desktop.interface show-battery-percentage true


Related Links


Saturday 25 May 2019

How to fix hitting arrow keys adds characters in vi editor issue in Ubuntu

Background

I just purchased a new laptop and the obvious next thing to do was add Ubuntu to it. There are some common problems that everyone faces when a fresh Ubuntu OS is installed, and one such problem is that hitting arrow keys adds characters in the vi editor. In this post, I will show you how to fix this issue.




Fixing hitting arrow keys adds characters in vi editor issue

To fix this issue, edit a file called .vimrc in your home directory. If it's not present, create one. You can use the following command.

  • vi ~/.vimrc
Now add the following content to it -

  • set nocompatible
If backspace is not working, add the following content as well -

  • set backspace=2

And that's it. Save the file and you should be good to go. 



Another cleaner way is just to install vim -
  • sudo apt-get install vim
Now that the problem is resolved, let's try to understand the issue here. vi as an editor has normal and insert modes. When you open a file using vi you are in normal mode. Here you can go to any line, use the arrow keys, and the behavior is as expected. When you type "i", you go into insert mode. In this mode, vi does not expect you to move left, right, up or down using arrow keys. You can always do that by pressing "ESC" and going back to normal mode. Insert mode is just for adding text to your file, and that's the behavior of vi.

Hope this helps :)




Saturday 13 April 2019

How to install Oracle Java 11 in Ubuntu

Background

In one of my previous posts I covered how you can install Java 8 on your Ubuntu machine. In this post I will show how you can install Java 11, which is the latest SE version released (as of April 2019).

Oracle Java 11 is the first LTS (Long Term Support) release of Oracle Java.

NOTE: Oracle uses a commercial license now. You can download and use Oracle Java for development and testing without any cost, but to use it in production you need to pay a fee.

If you do not want to pay, you can always use OpenJDK 11. From Java 11 forward, Oracle JDK builds and OpenJDK builds are essentially identical. You can read more about this -

How to install Oracle Java 11 in Ubuntu

You can get the Oracle Java 11 installer using the linuxuprising PPA. To add this PPA, execute the following commands -

  • sudo add-apt-repository ppa:linuxuprising/java
  • sudo apt-get update



Next, install the installer package by executing the following command -

  • sudo apt-get install oracle-java11-installer
You will need to accept the terms and conditions and continue with the installer. Once installed, you can check the version of Java installed with the following command -

  • java -version

To make Java 11 the default, you can install the following package by executing the following command -

  • sudo apt-get install oracle-java11-set-default
If you do not want this as the default, simply remove the above package -
  • sudo apt-get remove oracle-java11-set-default


Related Links

Thursday 28 March 2019

Writing unit tests for AWS Lambda in Node.js

Background

In the last post, we saw -
In this post, we will see how we can extend this to write test cases for AWS Lambda functions. We are still going to use Chai and Mocha, but also some additional dependencies. Following is the list of final dependencies we need -


    "devDependencies": {
        "chai": "^4.2.0",
        "lambda-tester": "^3.5.0",
        "mocha": "^6.0.2",
        "mock-require": "^3.0.3"
    }


Also, note that these are dev dependencies; they are required only for development/testing. Here -
  • lambda-tester is used for simulating lambda behavior. You can use it to send mock events and handle the result or errors.
  • mock-require is used to mock any other dependencies your lambda might depend on, e.g. any service or network call.

Writing unit tests for AWS Lambda in Node.js

Let's say following is our lambda code -


exports.handler = (event, context, callback) => {

    var searchId = event.pathParameters.searchId;

    if (searchId) {
        var result = {
            status: "success",
            code: 200,
            data: "Got searchId in the request"
        };
        callback(null, result);
    }
    else {
        var result = {
            status: "failure",
            code: 400,
            data: "Missing searchId in the request"
        };
        callback(result, null);
    }
};



It's just simple code that looks for searchId as a path parameter and, if it is not found, returns an error. Now the Mocha tests for this lambda would look like -

const assert = require('chai').assert;
const expect = require('chai').expect;
const lambdaTester = require('lambda-tester');
const mockRequire = require('mock-require');

const index = require("../index");

describe("Lambda Tests", function(){

    describe("Successful Invocation", function(){
        it("Successful Invocation with results", function(done) {

            const mockData = {
                pathParameters : {
                    searchId : 10
                }
            };

            lambdaTester(index.handler).event(mockData)
            .expectResult((result) => {
                expect(result.status).to.exist;
                expect(result.code).to.exist;
                expect(result.data).to.exist;

                assert.equal(result.status, "success");
                assert.equal(result.code, 200);
                assert.equal(result.data, "Got searchId in the request");
            }).verify(done);

        });
       
    });
   
    describe("Failed Invocation", function(){
        it("Unsuccessful invocation", function(done) {

            const mockData = {
                pathParameters : {
                    newSearchId : 5
                }
            };

            lambdaTester(index.handler).event(mockData)
            .expectError((result) => {
                expect(result.status).to.exist;
                expect(result.code).to.exist;
                expect(result.data).to.exist;

                assert.equal(result.status, "failure");
                assert.equal(result.code, 400);
                assert.equal(result.data, "Missing searchId in the request");
            }).verify(done);


        });
    });

})


You need to follow the same conventions we saw in the last post. Create package.json and add the dev dependencies mentioned above. Run npm install. Create a folder called test and add your test file to it (content mentioned above). Once done, you can run mocha at the root level.





As I mentioned above, we also need "mock-require". This is used when you actually need to mock a service. Let's say your lambda uses a custom logger -

const logger = require('customLogger').logger;

then you can mock this by -


const mockedLogger = {
    logger: {
        log: function(message) {
            console.log("message: " + message);
        }
    }
};

mockRequire('customLogger', mockedLogger);

This could be a network service or a database query. You can mock it as per your requirement.

Related Links


Sunday 24 February 2019

How to write Mocha and Chai unit tests for your Node.js app?

Background

Testing is an important part of the development lifecycle. Testing can be of many types - unit testing, integration testing, manual testing, QA automation testing. In this tutorial, I am going to show how you can write unit tests for your Node.js application using Mocha and Chai frameworks.  

Mocha is a javascript test framework and Chai is an assertion library. 

Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun. Mocha tests run serially, allowing for flexible and accurate reporting while mapping uncaught exceptions to the correct test cases.

Chai is a BDD / TDD assertion library for node and the browser that can be delightfully paired with any javascript testing framework.


Sample Node.js App

Let's start by writing a simple Node.js app that does addition and subtraction and then we will see how we can write tests for it using Mocha and Chai.

Create a directory called mochatest and navigate to that directory using the following commands -
  • mkdir mochatest
  • cd mochatest
Now initialize it as an npm module using -
  • npm init -y



You can see the package.json content initialized with default values. Now let's install our dependencies - mocha and chai. Execute the following command -

  • npm install mocha chai --save-dev
Notice we are saving them as dev dependencies and not actual dependencies, since we need them for testing only.




This should create a node_modules directory in your current mochatest directory and install your dependencies there. You can also see the dependencies added in package.json.



Now let's create our main app. You can see in the package.json above that the main file is called index.js. At this point, I would request you to open your application in your favorite IDE. I will be using Visual Studio Code for this.

Now create a file called index.js in the mochatest folder and add the following content to it -


var addition = function(a,b) {
    return a + b;
}
var subtraction = function(a,b) {
    return a - b;
}


module.exports = {
    add: addition,
    subtract: subtraction
}

This essentially exports 2 functions -
  • add 
  • subtract
that do exactly what the names say - add and subtract two numbers. Now let's see how we can write Mocha tests for these.


How to write Mocha and Chai unit tests for your Node.js app?

In the mochatest folder create a folder called test. We will keep the test files in this folder. It is recommended to have the same file structure as your actual app. Since we just have a single file called index.js in our app, create a file called indexTest.js in your test folder and add the following content to it.


const assert = require('chai').assert;
const index = require("../index");

describe("Index Tests", function(){

    describe("Addition", function(){
        it("Addition functionality test", function() {
            let result = index.add(4,5);
            assert.equal(result,9);
        });
        it("Addition return type test", function() {
            let result = index.add(4,5);
            assert.typeOf(result,'number');
        });
    });

    describe("Subtraction", function(){
        it("Subtraction functionality test", function() {
            let result = index.subtract(5,4);
            assert.equal(result,1);
        });
        it("Subtraction return type test", function() {
            let result = index.subtract(5,4);
            assert.typeOf(result,'number');
        });
    });   

});

Let me explain this before we go and test it out. As I mentioned before, chai is an assertion library, so we get a reference to its assert interface. Then you get a reference to the actual app file - index.js. Remember, it exports two functions -

  • add
  • subtract
Then we have "describe" keyword. Each describe is a logical grouping of tests and you can cascade them further. For example in the above case, we start a grouping on the basis if file meaning these group has tests for file index.js. Inside it, we have cascaded describe for each method - add and subtract. Each "describe" takes 2 arguments - 1st one is the string that defines what the grouping is for and the 2nd one is the function that has the logic of what that group does. It could have other subgroups as you see above.

Inside a describe you can define the actual tests using the "it" method. This method also takes 2 arguments - the 1st one says what the test does and the 2nd one is a function that gets executed to test it. Inside it, we use the assert reference we got from the chai module. In this case, we have used -
  • equal
  • typeOf
You can see others in https://www.chaijs.com/guide/styles/



NOTE: By default, mocha looks for the glob "./test/*.js", so you may want to put your test files in the test/ folder.

Now let's run it. Use the following command -


  • mocha




And you can see the results. You can change the logic in index.js to see the tests fail. For example, let's change the add method to return a-b instead of a+b and rerun the tests. You will see the following output -


Finally, let's plug this into npm. package.json already has a test script as follows -

  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },


change this to


  "scripts": {
    "test": "mocha"
  },


and you can simply run the following command to execute the tests -

  • npm run test



Let me know if you have any questions. Thanks.


Related Links



Sunday 27 January 2019

Sorting Techniques

Background

This post summarises various sorting techniques.


Sorting Techniques

I have written posts over time to show the implementation of some of the sorting techniques. Following are links to the same -

Related Links

What is transient keyword in Java?

Background

In one of my earlier posts, I covered what serialization in Java is and how the transient keyword can be used in that context. You can take a look at that post -
We saw that in the serialization context, instance variables marked as transient are not serialized, and on deserialization they get default values. In this post, we will see some more details about the transient keyword with an example.

Transient use cases

  • The transient keyword should be used when you do not want to serialize your instance variables (see the sketch after this list). E.g.
    • An instance of a Logger class. There is no state associated with a logger instance, so we do not need to serialize it.
    • Similarly, any secure data like passwords, which you may not want to serialize.
  • You cannot have a reference to a class that does not implement Serializable inside a class that is Serializable unless that reference is marked transient. Otherwise serialization will throw a java.io.NotSerializableException.
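
Here is a minimal sketch of that behavior (the class and field names are made up for illustration) - the transient password field comes back as null after deserialization:

import java.io.*;

class User implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    transient String password; // not written out during serialization

    User(String name, String password) {
        this.name = name;
        this.password = password;
    }
}

public class TransientDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new User("Aniket", "secret"));
        }
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            User copy = (User) in.readObject();
            System.out.println(copy.name);     // Aniket
            System.out.println(copy.password); // null - transient fields get default values
        }
    }
}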

The transient and final keyword

The transient keyword treats final fields a bit differently. So let's take an example to understand this -

//final field 1
public final transient String myName = "Aniket";
//final field 2
public final transient Logger myLogger = LoggerFactory.getLogger(MyClass.class.getName());

If you consider the above fields in a class, by our logic above they should not be serialized since they are marked as transient. However, if a final variable is evaluated as a "constant expression", as in the case of myName above, it will be serialized. So in the above case myName is serialized and myLogger is not.

On a side note, recall serialVersionUID, which is static and final. It is the only static variable that gets serialized. Static variables do not form part of the state of an object, so by the very definition of serialization, they are not serialized.

Use of transient keyword in HashMap.

If you look at the HashMap implementation, you can see that the array backing it is marked as transient.


    /**
     * The table, resized as necessary. Length MUST Always be a power of two.
     */
    transient Entry[] table;


If this array is not serialized, how does the map get its state back when it is deserialized? The class does implement Serializable -

public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable {


    private static final long serialVersionUID = 362498820763181265L;


The reason for this is that two instances of the same class do not generate the same hash code (unless, of course, you override the hashCode method to do so). The native implementation uses the object's memory location, which will be different for an object before serialization and after deserialization. So it is not guaranteed that the entries will end up in the same buckets and the same positions as in the HashMap that was serialized. This is why the array itself is excluded from serialization by marking it as transient. So how is the instance state restored?

For this, the HashMap implementation overrides the writeObject and readObject methods. In writeObject, all entries from the entry array are read in sequence and serialized. Similarly, in readObject, during deserialization, they are read back in the same order and re-inserted into the new internal table array. So the buckets and positions are recalculated dynamically during deserialization from the same data.
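
You can use the same trick in your own classes. Below is a minimal sketch of my own (illustrative only, not the actual HashMap code) showing how writeObject/readObject can rebuild a transient structure during deserialization:

import java.io.*;
import java.util.HashMap;
import java.util.Map;

class Cache implements Serializable {
    private static final long serialVersionUID = 1L;
    // Rebuilt on deserialization instead of being written out directly.
    private transient Map<String, String> entries = new HashMap<>();

    void put(String key, String value) {
        entries.put(key, value);
    }

    String get(String key) {
        return entries.get(key);
    }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();          // writes the non-transient fields
        out.writeInt(entries.size());      // then writes each entry explicitly
        for (Map.Entry<String, String> e : entries.entrySet()) {
            out.writeObject(e.getKey());
            out.writeObject(e.getValue());
        }
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        int size = in.readInt();
        entries = new HashMap<>();         // re-create and re-populate the transient map
        for (int i = 0; i < size; i++) {
            String key = (String) in.readObject();
            String value = (String) in.readObject();
            entries.put(key, value);
        }
    }
}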

PS: This is a good interview question :)



Related Links

Saturday 19 January 2019

How to print odd and even numbers in order with two threads

Background

Let us say you have two threads - one thread prints even numbers and the other one prints odd numbers. You need to design this in such a way that all the numbers are printed in natural order, i.e. 1, 2, 3, 4, etc.

This is more of a synchronization question than a data structure question. You need to understand how threads and synchronization work in order to solve it.

How to print odd and even numbers in order with two threads

We can do this two ways -
  1. Using semaphores
  2. Using wait and notify

Let us see how we can do this using semaphores -

public static void withSemaphores() throws InterruptedException, ExecutionException {

    Semaphore oddLock = new Semaphore(1);
    Semaphore evenLock = new Semaphore(0);

    Runnable printOdd = () -> {
        for (int i = 1; i < 10; i = i + 2) {
            try {
                oddLock.acquire();
            } catch (Exception e) {
                e.printStackTrace();
            }
            System.out.println(i);
            evenLock.release();
        }
    };

    Runnable printEven = () -> {
        for (int i = 2; i < 10; i = i + 2) {
            try {
                evenLock.acquire();
            } catch (Exception e) {
                e.printStackTrace();
            }
            System.out.println(i);
            oddLock.release();
        }
    };

    new Thread(printOdd).start();
    new Thread(printEven).start();
}


Before we see how this actually works, you need to understand how semaphores work. They are essentially permits. You initialize a semaphore with an initial number of permits. Let us say a semaphore has 2 permits to begin with. In this case, 2 threads can acquire these permits. A 3rd thread that comes along has to wait till one of the 2 permits becomes available again. Threads that have acquired permits can release them once they are done. Permits are acquired with the acquire() method and released with the release() method. To read more about semaphores, you can refer to a post I wrote earlier -
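
As a quick standalone illustration of the permit idea (a toy example of my own, not part of the odd/even solution), consider a semaphore with 2 permits shared by 3 threads - the 3rd thread blocks until one of the first two releases its permit:

import java.util.concurrent.Semaphore;

public class SemaphorePermitsDemo {
    public static void main(String[] args) {
        Semaphore permits = new Semaphore(2); // only 2 threads may hold a permit at a time

        Runnable worker = () -> {
            try {
                permits.acquire();            // the 3rd thread blocks here until a permit is released
                try {
                    System.out.println(Thread.currentThread().getName() + " acquired a permit");
                    Thread.sleep(1000);       // simulate some work
                } finally {
                    System.out.println(Thread.currentThread().getName() + " releasing permit");
                    permits.release();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        for (int i = 1; i <= 3; i++) {
            new Thread(worker, "worker-" + i).start();
        }
    }
}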

Also, note I have used Java 8 lambda syntax in the code above. You can read more about lambdas -
Now with this understanding let's see how the above logic works.


The printOdd runnable is responsible for printing odd numbers, whereas printEven prints even numbers. Each for loop increments by 2 to keep printing its respective numbers. We need to start with 1, which is odd, so the odd thread must go first. Notice we have 2 semaphores - one for odd and one for even. The odd semaphore has 1 permit whereas the even semaphore has 0 to begin with. The odd thread can get the permit from the odd semaphore and print the first odd value, which is 1. Meanwhile, the even thread is blocked since no permits are available on the even semaphore. Only when the odd thread releases a permit on the even semaphore will the even thread go ahead and print 2. That's how the threads gate each other until the numbers are printed in sequence.

You can do the same with wait and notify; a minimal sketch follows below. You can see both of the above approaches in my GitHub repository on data structures - https://github.com/aniket91/DataStructures/blob/master/src/com/osfg/questions/PrintOddEven.java
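
For reference, here is a minimal sketch of the wait/notify approach (my own illustrative version; the code in the linked repository may differ):

public class PrintOddEvenWaitNotify {
    private static final Object lock = new Object();
    private static int current = 1;
    private static final int MAX = 10;

    public static void main(String[] args) {
        // The odd thread prints when 'current' is odd, the even thread when it is even.
        Runnable printer = () -> {
            boolean odd = Thread.currentThread().getName().equals("odd");
            synchronized (lock) {
                while (current < MAX) {
                    if ((current % 2 == 1) == odd) {
                        System.out.println(current++);
                        lock.notifyAll();      // wake the other thread
                    } else {
                        try {
                            lock.wait();       // not our turn yet
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                }
                lock.notifyAll(); // let the other thread exit its loop too
            }
        };

        new Thread(printer, "odd").start();
        new Thread(printer, "even").start();
    }
}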


Related Links

Thursday 17 January 2019

How to use Callable interface with Threads in Java?

Background

In one of the previous posts, on creating threads with the Executor service, we saw an interface called Callable. Executor services have a method called submit that takes a Callable object and returns a Future object which holds the result. Unlike the Runnable interface, the Callable interface lets us return a result or throw checked exceptions. In this post, we will see how to use the Callable interface with the plain Thread construct.

How to use Callable interface with Threads in Java?

We know that Thread takes a Runnable only. No surprises there. So we cannot directly use objects implementing the Callable interface. Also, to get the result we need a Future object, as we saw with Executors. Remember any design pattern that suits this use case? Yes, it is the Adapter pattern. Java provides a class called FutureTask which implements both Runnable and Future, combining both functionalities conveniently. Let's see an example of how we can do this -

package com.osfg;

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;


public class Test {
 public static void main(String args[]) throws InterruptedException, ExecutionException {

  Callable<String> callable = () -> {
   Thread.sleep(5000);
   return "Hello!";
  };

  FutureTask<String> futureTask = new FutureTask<>(callable);
  Thread t = new Thread(futureTask);
  t.start();

  while (!futureTask.isDone()) {
   System.out.println("Task not done yet!");
   Thread.sleep(1000);
  }

  System.out.println(futureTask.get());
 }

}




You can see here that our Callable waits for 5 seconds and then returns the result. In the main thread, we check every 1 second to see if the result is available in the FutureTask. So if you run the above code you will get -

Task not done yet!
Task not done yet!
Task not done yet!
Task not done yet!
Task not done yet!
Hello!

FutureTask has the following methods -

  • public boolean cancel(boolean mayInterrupt): Used to stop the task. It stops the task if it has not started. If it has started, it interrupts the task only if mayInterrupt is true (see the short sketch after this list).
  • public Object get() throws InterruptedException, ExecutionException: Used to get the result of the task. If the task is complete, it returns the result immediately; otherwise, it waits until the task is complete and then returns the result.
  • public boolean isDone(): Returns true if the task is complete and false otherwise.
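
To see cancel() in action, here is a small sketch (my own example, reusing the Callable from above) - cancelling a task that has not finished interrupts it, and the task then reports itself as done:

import java.util.concurrent.FutureTask;

public class CancelDemo {
    public static void main(String[] args) throws Exception {
        FutureTask<String> task = new FutureTask<>(() -> {
            Thread.sleep(5000);
            return "Hello!";
        });
        new Thread(task).start();

        Thread.sleep(1000);                     // let the task start running
        boolean cancelled = task.cancel(true);  // interrupts the task if it is already running
        System.out.println("Cancelled: " + cancelled); // true
        System.out.println("Done: " + task.isDone());  // true - a cancelled task also counts as done
        // task.get() would now throw a CancellationException
    }
}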

Related Links
