Sunday, June 14, 2015

Stack Implementation in Java

A Stack is a LIFO (Last In First Out) data structure and supports the operations specified by the interface below:
public interface IStack<T> {
    public boolean push(T value);
    public T pop();
    public void clear();
    public boolean contains(T value);
    public int size();
}

The Stack can be implemented using an array or a linked list. The array implementation keeps an underlying array that is resized as needed. The source code is pretty straightforward and does not need much explanation.
import java.util.Arrays;

public class ArrayStack<T> implements IStack<T> {
 private static final int MINIMUM_SIZE = 10;

 @SuppressWarnings("unchecked") // generic array creation is not possible, so cast an Object[]
 private T[] array = (T[]) new Object[MINIMUM_SIZE];
 private int size = 0;

 @Override
 public boolean push(T value) {
  int stackSize = this.size;
  if (stackSize >= array.length) {
   array = Arrays.copyOf(array, (stackSize + (stackSize>>1)));
  }
  array[size++] = value;
  return true;
 }

 @Override
 public T pop() {
  if (size <= 0) return null;
  T t = array[--size];
  array[size] = null; // clear the slot so the reference can be garbage collected
  // Shrink the backing array once it is mostly empty, but never below the minimum size
  if (array.length > MINIMUM_SIZE && size < (array.length >> 1)) {
   array = Arrays.copyOf(array, Math.max(MINIMUM_SIZE, size + (size >> 1)));
  }
  return t;
 }
 @Override
 public void clear() {
  size = 0;
 }
 
 @Override
 public boolean contains(T value) {
  for (int i = 0; i < size; i++) {
     T item = array[i];
     if (item.equals(value))
    return true;
  }
  return false;
 }
 @Override
 public int size() {
  return size;
 }
}
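
A quick usage sketch (assuming the classes above compile as shown):
public class StackDemo {
    public static void main(String[] args) {
        IStack<Integer> stack = new ArrayStack<>();
        stack.push(1);
        stack.push(2);
        stack.push(3);

        System.out.println(stack.pop());        // 3 (last in, first out)
        System.out.println(stack.contains(1));  // true
        System.out.println(stack.size());       // 2
    }
}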

Similarly, we can use a linked list of nodes to implement the Stack in Java.
public class LinkedStack<T> implements IStack<T> {
 private Node<T> top = null;
 private int size = 0;

 public LinkedStack() {
  top = null;
  size = 0;
 }
 @Override
 public boolean push(T value) {
  return push(new Node<T>(value));
 }
 private boolean push(Node<T> node) {
  if (top == null) { 
   top = node;
  } else {
   Node<T> oldTop = top;
   top = node;
   top.below = oldTop;
   oldTop.above = top;
  }
  size++;
  return true;
 }
 @Override
 public T pop() {
  if (top==null) return null;

  Node<T> nodeToRemove = top;
  top = nodeToRemove.below;
  if (top != null) top.above = null;

  T value = nodeToRemove.value;
  size--;
  return value;
 }
 @Override
 public void clear() {
  top = null;
  size = 0;
 }
 @Override
 public boolean contains(T value) {
  if (top == null) return false;
  Node<T> node = top;
  while (node != null) {
   if (node.value.equals(value)) return true;
   node = node.below;
  }
  return false;
 }

 @Override
 public int size() {
  return size;
 }
}
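
The Node class used by LinkedStack is not shown here; a minimal sketch of what it might look like (the field names value, above and below are taken from the code above) is:
public class Node<T> {
    T value;
    Node<T> above; // the node pushed after this one (closer to the top)
    Node<T> below; // the node pushed before this one

    Node(T value) {
        this.value = value;
    }
}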

On a side note, Java also ships a java.util.Stack class, but its use is discouraged in favour of the Deque implementations such as ArrayDeque. That's all. I hope you liked the post!

Saturday, June 13, 2015

AVL Binary Search Tree Implementation in Java

AVL (Adelson-Velsky and Landis) is an example of a self-balancing Binary Search Tree. It has the same basic operations as a regular BST, but modifications are followed by zero or more tree rotations. Every node also stores a balance factor which is corrected after insertion or deletion. If the recomputed balance factor stays in the range -1 to +1, only the balance factors need to be updated and no rotations are necessary. But if the recomputed balance factor becomes less than -1 or greater than +1, the subtree rooted at this node is unbalanced and a rotation is needed. The AVLNode class (extending Node from the BinarySearchTree post) can be declared like:
public static class AVLNode<T extends Comparable<T>> extends Node<T> {
    protected int height = 1;
    protected AVLNode(Node<T> parent, T value) {
        super(parent, value);
    }

    protected boolean isLeaf() {
        return ((getLeft() == null) && (getRight() == null));
    }

    protected void updateHeight() {
        int lesserHeight = 0;
        int greaterHeight = 0;
        if (getLeft() != null) {
            AVLNode<T> lesserAVLNode = (AVLNode<T>) getLeft();
            lesserHeight = lesserAVLNode.height;
        }
        if (getRight() != null) {
            AVLNode<T> greaterAVLNode = (AVLNode<T>) getRight();
            greaterHeight = greaterAVLNode.height;
        }
        if (lesserHeight > greaterHeight) {
            height = lesserHeight + 1;
        } else {
            height = greaterHeight + 1;
        }
    }

    protected int getBalanceFactor() {
        int lesserHeight = 0;
        int greaterHeight = 0;
        if (getLeft() != null) {
            AVLNode<T> lesserAVLNode = (AVLNode<T>) getLeft();
            lesserHeight = lesserAVLNode.height;
        }
        if (getRight() != null) {
            AVLNode<T> greaterAVLNode = (AVLNode<T>) getRight();
            greaterHeight = greaterAVLNode.height;
        }
        return greaterHeight - lesserHeight;
    }

    @Override
    public String toString() {
        return "value=" + getData() + " height=" + height + " parent=" + ((getParent() != null) ? getParent().getData() : "NULL")
                    + " left=" + ((getLeft() != null) ? getLeft().getData() : "NULL") + " right="
                    + ((getRight() != null) ? getRight().getData() : "NULL");
    }
}

The AVLTree class can be declared like:
public class AVLTree<T extends Comparable<T>> extends RotatableBinarySearchTree<T> {
   public AVLTree() { }
}
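
For the balancing logic below to see AVLNode instances, AVLTree also needs to override the node-creation hook of the base class. A minimal sketch, assuming the protected createNewNode factory from the BinarySearchTree post:
// Inside AVLTree: make sure every node carries the height field used for balancing.
@Override
protected Node<T> createNewNode(T value) {
    return new AVLNode<T>(null, value);
}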

We need to re-balance after insertion and deletion. Consider the insertion method first:
@Override
protected Node<T> insertValue(T id) {
 Node<T> returnedNode = super.insertValue(id);
 AVLNode<T> insertedNode = (AVLNode<T>) returnedNode;
 while (insertedNode != null) {
  insertedNode.updateHeight();
  balanceAfterInsert(insertedNode);
  insertedNode = (AVLNode<T>) insertedNode.getParent();
 }
 return returnedNode;
}
private enum RotationSequence {
        LEFT_LEFT, LEFT_RIGHT, RIGHT_LEFT, RIGHT_RIGHT
};
private void balanceAfterInsert(AVLNode<T> node) {
 int balanceFactor = node.getBalanceFactor();
 
 if (balanceFactor > 1 || balanceFactor < -1) {
  AVLNode<T> parent = null;
  AVLNode<T> child = null;
  RotationSequence rotation = null;
  
  if (balanceFactor < 0) {
   parent = (AVLNode<T>) node.getLeft();
   balanceFactor = parent.getBalanceFactor();
   if (balanceFactor < 0) {
    child = (AVLNode<T>) parent.getLeft();
    rotation = RotationSequence.LEFT_LEFT;
   } else {
    child = (AVLNode<T>) parent.getRight();
    rotation = RotationSequence.LEFT_RIGHT;
   }
  } else {
   parent = (AVLNode<T>) node.getRight();
   balanceFactor = parent.getBalanceFactor();
   if (balanceFactor < 0) {
    child = (AVLNode<T>) parent.getLeft();
    rotation = RotationSequence.RIGHT_LEFT;
   } else {
    child = (AVLNode<T>) parent.getRight();
    rotation = RotationSequence.RIGHT_RIGHT;
   }
  }

  if (rotation == RotationSequence.LEFT_RIGHT) {
   // Left-Right (Left rotation, right rotation)
   rotateLeft(parent);
   rotateRight(node);
  } else if (rotation == RotationSequence.RIGHT_LEFT) {
   // Right-Left (Right rotation, left rotation)
   rotateRight(parent);
   rotateLeft(node);
  } else if (rotation == RotationSequence.LEFT_LEFT) {
   // Left-Left (Right rotation)
   rotateRight(node);
  } else {
   // Right-Right (Left rotation)
   rotateLeft(node);
  }

  // Update the heights of the three nodes involved in the rotation
  node.updateHeight();
  child.updateHeight();
  parent.updateHeight();
 }
}

The deletion is similar: find the node in the tree, find its replacement, replace the node, and then re-balance accordingly.
@Override
protected Node<T> deleteValue(T value) {
 // Find node to remove
 Node<T> nodeToRemoved = this.searchAndGetNode(value);
 if (nodeToRemoved != null) {
  // Find the replacement node
  Node<T> replacementNode = this.getReplacementNode(nodeToRemoved);

  // Find the parent of the replacement node to re-factor the
  // height/balance of the tree
  AVLNode<T> nodeToRefactor = null;
  if (replacementNode != null)
   nodeToRefactor = (AVLNode<T>) replacementNode.getParent();
  if (nodeToRefactor == null)
   nodeToRefactor = (AVLNode<T>) nodeToRemoved.getParent();
  if (nodeToRefactor != null && nodeToRefactor.equals(nodeToRemoved))
   nodeToRefactor = (AVLNode<T>) replacementNode;

  // Replace the node
  deleteAndReplaceNode(nodeToRemoved, replacementNode);

  // Re-balance the tree all the way up the tree
  if (nodeToRefactor != null) {
   while (nodeToRefactor != null) {
    nodeToRefactor.updateHeight();
    balanceAfterDelete(nodeToRefactor);
    nodeToRefactor = (AVLNode<T>) nodeToRefactor.getParent();
   }
  }
 }
 return nodeToRemoved;
}

private void balanceAfterDelete(AVLNode<T> node) {
 int balanceFactor = node.getBalanceFactor();
 if (balanceFactor == -2 || balanceFactor == 2) {
  if (balanceFactor == -2) {
   AVLNode<T> ll = (AVLNode<T>) node.getLeft().getLeft();
   int lesser = (ll != null) ? ll.height : 0;
   AVLNode<T> lr = (AVLNode<T>) node.getLeft().getRight();
   int greater = (lr != null) ? lr.height : 0;
   if (lesser >= greater) {
    rotateRight(node);
    node.updateHeight();
    if (node.getParent() != null)
     ((AVLNode<T>) node.getParent()).updateHeight();
   } else {
    rotateLeft(node.getLeft());
    rotateRight(node);

    AVLNode<T> p = (AVLNode<T>) node.getParent();
    if (p.getLeft() != null)
     ((AVLNode<T>) p.getLeft()).updateHeight();
    if (p.getRight() != null)
     ((AVLNode<T>) p.getRight()).updateHeight();
    p.updateHeight();
   }
  } else if (balanceFactor == 2) {
   AVLNode<T> rr = (AVLNode<T>) node.getRight().getRight();
   int greater = (rr != null) ? rr.height : 0;
   AVLNode<T> rl = (AVLNode<T>) node.getRight().getLeft();
   int lesser = (rl != null) ? rl.height : 0;
   if (greater >= lesser) {
    rotateLeft(node);
    node.updateHeight();
    if (node.getParent() != null)
     ((AVLNode<T>) node.getParent()).updateHeight();
   } else {
    rotateRight(node.getRight());
    rotateLeft(node);

    AVLNode<T> p = (AVLNode<T>) node.getParent();
    if (p.getLeft() != null)
     ((AVLNode<T>) p.getLeft()).updateHeight();
    if (p.getRight() != null)
     ((AVLNode<T>) p.getRight()).updateHeight();
    p.updateHeight();
   }
  }
 }
}

The remaining methods are inherited from the BinarySearchTree class. So the major changes are in insertion and deletion, along with the re-balancing work. But considering the benefits it provides, it is worth it.
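
A quick way to see the balancing in action is to insert a sorted sequence, which would degenerate a plain BST into a list. A small sketch, assuming the classes compile as shown:
public class AVLTreeDemo {
    public static void main(String[] args) {
        AVLTree<Integer> tree = new AVLTree<>();
        for (int value : new int[] {10, 20, 30, 40, 50}) {
            tree.insert(value); // rotations keep the tree height-balanced
        }
        System.out.println(tree.size());       // 5
        System.out.println(tree.contains(30)); // true
    }
}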

Friday, June 12, 2015

Self-balancing Binary Search Tree

Before we go any further, let's see what happens if we insert sorted numbers into a regular Binary Search Tree (BST). Suppose we insert 10, 20, 30, 40 and 50 in that order. Then what we get is a highly unbalanced BST:

Actually it has become a linked list now. You can read more about skewed BSTs here. As this hurts performance badly, there is always a need for a balanced Binary Search Tree.

A self-balancing binary search tree is one that automatically keeps its height (maximal number of levels below the root) small in the face of arbitrary item insertions and deletions. An example is the AVL (Adelson-Velsky and Landis) tree, which uses rotations to keep itself balanced.

As per wikipedia: "In discrete mathematics, tree rotation is an operation on a binary tree that changes the structure without interfering with the order of the elements. A tree rotation moves one node up in the tree and one node down. It is used to change the shape of the tree, and in particular to decrease its height by moving smaller subtrees down and larger subtrees up, resulting in improved performance of many tree operations."

The Wikipedia page also demonstrates how rotation works:


An AVL tree is generally compared with a Red-Black tree, as both have certain common properties and both rely on rotations. They also support the same operations and take O(log n) time for the basic operations (the base of the logarithm does not matter asymptotically). For lookup-intensive applications AVL trees are supposed to be faster than Red-Black trees, but both are height-balanced.

We have already covered the implementation of a Binary Search Tree in a previous article. Here I will create an abstract class that adds the rotation methods.
abstract public class RotatableBinarySearchTree<T extends Comparable<T>> extends BinarySearchTree<T> {
    protected enum Position {
       LEFT, RIGHT
    };
    public RotatableBinarySearchTree() { }
}

The methods that will perform rotation are: rotateLeft and rotateRight. These are shown below:
protected void rotateLeft(Node<T> node) {
 Position parentPosition = null;
 Node<T> parent = node.getParent();
 if (parent != null) {
  if (node.equals(parent.getLeft())) {
   // Lesser
   parentPosition = Position.LEFT;
  } else {
   // Greater
   parentPosition = Position.RIGHT;
  }
 }

 Node<T> greater = node.getRight();
 node.setRight(null);
 Node<T> lesser = greater.getLeft();

 greater.setLeft(node);
 node.setParent(greater);

 node.setRight(lesser);
 if (lesser != null)
  lesser.setParent(node);

 if (parentPosition != null) {
  if (parentPosition == Position.LEFT) {
   parent.setLeft(greater);
  } else {
   parent.setRight(greater);
  }
  greater.setParent(parent);
 } else {
  root = greater;
  greater.setParent(null);
 }
}

protected void rotateRight(Node<T> node) {
 Position parentPosition = null;
 Node<T> parent = node.getParent();
 if (parent != null) {
  if (node.equals(parent.getLeft())) {
   // Lesser
   parentPosition = Position.LEFT;
  } else {
   // Greater
   parentPosition = Position.RIGHT;
  }
 }

 Node<T> lesser = node.getLeft();
 node.setLeft(null);
 Node<T> greater = lesser.getRight();

 lesser.setRight(node);
 node.setParent(lesser);

 node.setLeft(greater);
 if (greater != null)
  greater.setParent(node);

 if (parentPosition != null) {
  if (parentPosition == Position.LEFT) {
   parent.setLeft(lesser);
  } else {
   parent.setRight(lesser);
  }
  lesser.setParent(parent);
 } else {
  root = lesser;
  lesser.setParent(null);
 }
}

This abstract class provides the rotation methods and will be extended by the AVL and Red-Black tree implementations when we discuss them.
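
To make the effect of a rotation concrete, here is a small sketch (assuming the classes compile as shown) that left-rotates the root of a right-skewed chain; the DemoTree subclass is hypothetical and exists only to expose the protected rotation:
public class RotationDemo {
    static class DemoTree<T extends Comparable<T>> extends RotatableBinarySearchTree<T> {
        void rotateRootLeft() {
            rotateLeft(root); // root and rotateLeft are protected, so a subclass can reach them
        }
    }

    public static void main(String[] args) {
        DemoTree<Integer> tree = new DemoTree<>();
        tree.insert(10);
        tree.insert(20);
        tree.insert(30);       // plain BST insertion yields the chain 10 -> 20 -> 30

        tree.rotateRootLeft(); // 20 becomes the new root with 10 and 30 as its children
    }
}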

Binary Search Tree implementation in Java

Binary search trees keep their keys in sorted order, so that lookup and other operations can use the principle of binary search: when looking for a key in a tree (or a place to insert a new key), they traverse the tree from root to leaf, making comparisons to keys stored in the nodes of the tree and deciding, based on the comparison, to continue searching in the left or right subtrees. This post is not about the basics of BST, in case you are interested please check this pdf.

I will first declare an interface Tree which we expect all our tree implementations to implement:
public interface Tree<T> {
    public boolean insert(T value);
    public T delete(T value);
    public void clear();
    public boolean contains(T value);
    public int size();
}

This interface can be implemented by the class as follows:
public class BinarySearchTree<T extends Comparable<T>> implements Tree<T> {
    protected Node<T> root = null;
    protected int size = 0;
    public BinarySearchTree() { }
}

We use generics, and we also want the elements to be mutually comparable. We will first provide the implementation of the insert method:
@Override
public boolean insert(T value) {
     Node<T> nodeAdded = this.insertValue(value);
     return (nodeAdded != null);
}
protected Node<T> insertValue(T value) {
     Node<T> newNode = createNewNode(value); // node-creation hook; subclasses such as AVLTree can override it

     // If root is null, assign
     if (root == null) {
       root = newNode;
       size++;
       return newNode;
     }

     Node<T> currentNode = root;
     while (currentNode != null) {
        if (newNode.getData().compareTo(currentNode.getData()) <= 0) { // Less than or equal to goes left
           if(currentNode.getLeft() == null) {
              insertNodeToLeft(currentNode, newNode);
              break;
           }
           currentNode = currentNode.getLeft();
        } else { // Greater than goes right
           if (currentNode.getRight() == null) {
              insertNodeToRight(currentNode, newNode);
              break;
           }
          currentNode = currentNode.getRight();
        }
     }
     return newNode;
}

The insert method will insert the node in left or right branch based on its value. The deletion part is slightly tricky. The process of deletion is as follows:
  1. First locate the node that needs to be deleted.
  2. Then find the replacement node for this node. We can select it from the left or the right subtree, and we do not want to always select from the same subtree, as that can lead to a skewed tree.
  3. Then we need to delete this node and replace it with its replacement found in step 2.
This is performed by the following code:
        @Override
 public T delete(T value) {
    Node<T> nodeToRemove = this.deleteValue(value);
    return ((nodeToRemove!=null)?nodeToRemove.getData():null);
 }
 
 protected Node<T> deleteValue(T value) {
    Node<T> nodeToRemoved = this.searchAndGetNode(value);
    if (nodeToRemoved != null) nodeToRemoved = deleteNode(nodeToRemoved);
    return nodeToRemoved;
 }

 protected Node<T> searchAndGetNode(T value) {
    Node<T> currentNode = root;
    while (currentNode != null && currentNode.getData() != null) {
       int valueComparison = value.compareTo(currentNode.getData());
       if (valueComparison == 0) {
         return currentNode;
       } else if (valueComparison < 0) {
         currentNode = currentNode.getLeft();
       } else {
         currentNode = currentNode.getRight();
       }
    }
  return null;
 }

 protected Node<T> deleteNode(Node<T> nodeToRemoved) {
    if (nodeToRemoved != null) {
      Node<T> replacementNode = this.getReplacementNode(nodeToRemoved);
      deleteAndReplaceNode(nodeToRemoved, replacementNode);
    }
  return nodeToRemoved;
 }

 private boolean toggleReplacementNodeSelection = true;

 protected Node<T> getReplacementNode(Node<T> nodeToRemoved) {
    Node<T> replacement = null;
    if (nodeToRemoved.getLeft() != null && nodeToRemoved.getRight() == null) { // Using the less subtree
      replacement = nodeToRemoved.getLeft();
    } else if (nodeToRemoved.getRight() != null && nodeToRemoved.getLeft() == null) {
      // Using the greater subtree (there is no lesser subtree)
      replacement = nodeToRemoved.getRight();
    } else if (nodeToRemoved.getRight() != null && nodeToRemoved.getLeft() != null) {
      // Two children add some randomness to deletions, so we don't always use the greatest/least on deletion
      if (toggleReplacementNodeSelection) {
        replacement = this.getLargestNodeInSubTree(nodeToRemoved.getLeft());
        if (replacement == null)
          replacement = nodeToRemoved.getLeft();
      } else {
        replacement = this.getSmallestNodeInSubTree(nodeToRemoved.getRight());
        if (replacement == null)
          replacement = nodeToRemoved.getRight();
      }
      toggleReplacementNodeSelection = !toggleReplacementNodeSelection;
    }
   return replacement;
 }

 protected void deleteAndReplaceNode(Node<T> nodeToDelete, Node<T> replacementNode) {
    if (replacementNode != null) {
      // Record left and right child of replacementNode for later use
      Node<T> leftChildOfReplacementNode = replacementNode.getLeft();
      Node<T> rightChildOfReplacementNode = replacementNode.getRight();

      // For left and right children of nodeToDelete, new parent is replacementNode
      Node<T> leftChildOfNodeToDelete = nodeToDelete.getLeft();
      if (leftChildOfNodeToDelete != null && !leftChildOfNodeToDelete.equals(replacementNode)) {
         replacementNode.setLeft(leftChildOfNodeToDelete);
         leftChildOfNodeToDelete.setParent(replacementNode);
      }
   
      Node<T> rightChildOfNodeToDelete = nodeToDelete.getRight();
      if (rightChildOfNodeToDelete != null && !rightChildOfNodeToDelete.equals(replacementNode)) {
         replacementNode.setRight(rightChildOfNodeToDelete);
         rightChildOfNodeToDelete.setParent(replacementNode);
      }

      // Update the link of parent of replacementNode as well. For the parent of replacementNode kids of replacementNode are its new kids. In short grand-kids are kids now.
      Node<T> parentOfReplacementNode = replacementNode.parent;
      if (parentOfReplacementNode != null && !parentOfReplacementNode.equals(nodeToDelete)) {
         Node<T> leftChildOfParentOfReplacementNode = parentOfReplacementNode.getLeft();
         Node<T> rightChildOfParentOfReplacementNode = parentOfReplacementNode.getRight();

         // Check whether the replacementNode is left or right child of its parent.
         if (leftChildOfParentOfReplacementNode != null && leftChildOfParentOfReplacementNode.equals(replacementNode)) {
            parentOfReplacementNode.setLeft(rightChildOfReplacementNode);
            if (rightChildOfReplacementNode != null)
               rightChildOfReplacementNode.setParent(parentOfReplacementNode);
         } else if (rightChildOfParentOfReplacementNode != null && rightChildOfParentOfReplacementNode.equals(replacementNode)) {
            parentOfReplacementNode.setRight(leftChildOfReplacementNode);
            if (leftChildOfReplacementNode != null)
               leftChildOfReplacementNode.setParent(parentOfReplacementNode);
         }
       }
     } 

     // Update the link in the tree from the nodeToRemoved to the replacementNode
     Node<T> parentOfNodeToDelete = nodeToDelete.getParent();

     if (parentOfNodeToDelete == null) {
        // We are deleting root node. So replacing the root node.
        root = replacementNode;
        if (root != null) root.parent = null;
     } else if (parentOfNodeToDelete.getLeft() != null && (parentOfNodeToDelete.getLeft().getData().compareTo(nodeToDelete.getData()) == 0)) {
        parentOfNodeToDelete.setLeft(replacementNode);
        if (replacementNode != null) replacementNode.parent = parentOfNodeToDelete;
     } else if (parentOfNodeToDelete.getRight() != null && (parentOfNodeToDelete.getRight().getData().compareTo(nodeToDelete.getData()) == 0)) {
        parentOfNodeToDelete.setRight(replacementNode);
        if (replacementNode != null) replacementNode.parent = parentOfNodeToDelete;
     }
     size--;
 }

Now the remaining methods are relatively easy ones.
@Override
 public void clear() {
  root = null;
  size = 0;
 }
 @Override
 public boolean contains(T value) {
  Node<T> node = searchAndGetNode(value);
  return (node != null);
 }
 @Override
 public int size() {
  return size;
 }

This completes the implementation of BST in Java.

Friday, May 1, 2015

Implementing Binary Search Tree in Java

I believe we all know what a Binary Search Tree (also called an ordered tree) is. I will explain the step-by-step implementation of a BST in Java. First of all we will need a node class which will store the data/info kept in the node, the parent of the node, and the left and right children. We could make it a static nested class, because it will not be used by any other class, but I will go with a separate class as it keeps the code clean and makes the explanation easier.
public class Node<T extends Comparable<T>> {
    private T data = null;
    private Node<T> left;
    private Node<T> right;
    private Node<T> parent;

    public Node(Node<T> parent, T data) {
        this.parent = parent;
        this.data = data;
    }

    public T getData() {
        return data;
    }

    public void setData(T data) {
        this.data = data;
    }

    public Node<T> getLeft() {
        return left;
    }

    public void setLeft(Node<T> left) {
        this.left = left;
    }

    public Node<T> getRight() {
        return right;
    }

    public void setRight(Node<T> right) {
        this.right = right;
    }

    public Node<T> getParent() {
        return parent;
    }

    public void setParent(Node<T> parent) {
        this.parent = parent;
    }

    @Override
    public String toString() {
        return "Data: " + data + " Parent: " + ((parent != null) ? parent.data : "NULL") + " Left: " + ((left != null) ? left.data : "NULL") + " Right: " + ((right != null) ? right.data : "NULL");
    }
}

We want to be able to store anything inside a node (String, Integer, or even our own class objects), so I have used generics. We also want the data stored in the Node class to be comparable, which is why we have used:
Node<T extends Comparable<T>>

Next we will declare an interface ITree which will define the methods to be implemented by BST later on:
public interface ITree<T> {
    public boolean insert(T value);
    public T delete(T value);
    public void clear();
    public boolean contains(T value);
    public int size();
    public boolean validate();
    public java.util.Collection<T> toCollection();
}

Now we will move ahead with a class named BinarySearchTree which will implement this interface and body will look like:
import java.util.Collection;

public class BinarySearchTree<T extends Comparable<T>> implements ITree<T> {
    protected Node<T> root;
    protected int size = 0;

    @Override
    public boolean insert(T value) {
        return false;
    }

    @Override
    public T delete(T value) {
        return null;
    }

    @Override
    public void clear() {

    }

    @Override
    public boolean contains(T value) {
        return false;
    }

    @Override
    public int size() {
        return 0;
    }

    @Override
    public boolean validate() {
        return false;
    }

    @Override
    public Collection<T> toCollection() {
        return null;
    }
}

We have also declared two protected fields (as we want them available in derived classes as well): the root and the size of the tree. The next task is to provide the method implementations. We will consider insert first and write two methods that insert a node to the left or right of a parent.
private void insertNodeToRight(Node<T> parent, Node<T> child) {
    parent.setRight(child);
    child.setParent(parent);
    size++;
}

private void insertNodeToLeft(Node<T> parent, Node<T> child) {
    parent.setLeft(child);
    child.setParent(parent);
    size++;
}

Now we will write a method which does the actual insertion of a node into the tree. It traverses the tree to figure out the place where the node can be inserted.
// Node-creation hook: subclasses (e.g. an AVL tree) can override this to create their own node type.
protected Node<T> createNewNode(T value) {
 return new Node<T>(null, value);
}
protected Node<T> insertValue(T value) {
 Node<T> newNode = createNewNode(value);

 // If root is null, assign
 if (root == null) {
  root = newNode;
  size++;
  return newNode;
 }

 Node<T> currentNode = root;
 while (currentNode != null) {
  if (newNode.getData().compareTo(currentNode.getData()) <= 0) { // Less than or equal to goes left
   if(currentNode.getLeft() == null) {
    insertNodeToLeft(currentNode, newNode);
    break;
   }
   currentNode = currentNode.getLeft();
  } else {          // Greater than goes right
   if (currentNode.getRight() == null) {
    insertNodeToRight(currentNode, newNode);
    break;
   }
   currentNode = currentNode.getRight();
  }
 }

 return newNode;
}

This method is then actually called by our public insert method as:
public boolean insert(T value) {
 Node<T> nodeAdded = this.insertValue(value);
 return (nodeAdded != null);
}

This completes the insertion functionality. Next we will work on the search functionality, which looks for a value in the tree. If the value is less than the value at the current node it goes left, otherwise it goes right (the assumption is that we don't have duplicates).
public boolean contains(T value) {
 Node<T> node = searchAndGetNode(value);
 return (node != null);
}
protected Node<T> searchAndGetNode(T value) {
 Node<T> currentNode = root;
 while (currentNode != null && currentNode.getData() != null) {
  int valueComparison = value.compareTo(currentNode.getData());
  if (valueComparison == 0) {
   return currentNode;
  } else if (valueComparison < 0) {
   currentNode = currentNode.getLeft();
  } else {
   currentNode = currentNode.getRight();
  }
 }
 return null;
}

Next we can work on deletion of a node, but before that we need to figure out which node will replace it. There is no concern if the node to be deleted has zero or one children, but if it has both left and right children then we need to design our code carefully. In that case we can delete the node and replace it with either the rightmost (largest) node of the left subtree or the leftmost (smallest) node of the right subtree. If we keep using only one of these strategies it can quickly lead to a skewed tree.
private boolean toggleReplacementNodeSelection = true;
protected Node<T> getReplacementNode(Node<T> nodeToRemoved) {
 Node<T> replacement = null;
 if (nodeToRemoved.getLeft() != null && nodeToRemoved.getRight() == null) {
  // Using the less subtree
  replacement = nodeToRemoved.getLeft();
 } else if (nodeToRemoved.getRight() != null && nodeToRemoved.getLeft() == null) {
  // Using the greater subtree (there is no lesser subtree, no
  // refactoring)
  replacement = nodeToRemoved.getRight();
 } else if (nodeToRemoved.getRight() != null && nodeToRemoved.getLeft() != null) {
  // Two children
  // Add some randomness to deletions, so we don't always use the
  // greatest/least on deletion
  if (toggleReplacementNodeSelection) {
   replacement = this.getLargestNodeInSubTree(nodeToRemoved.getLeft());
   if (replacement == null)
    replacement = nodeToRemoved.getLeft();
  } else {
   replacement = this.getSmallestNodeInSubTree(nodeToRemoved.getRight());
   if (replacement == null)
    replacement = nodeToRemoved.getRight();
  }
  toggleReplacementNodeSelection = !toggleReplacementNodeSelection;
 }
 return replacement;
}
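
The helper methods getLargestNodeInSubTree and getSmallestNodeInSubTree are not shown in this post; a minimal sketch of what they might look like is:
// Walk right from the given subtree root to reach its largest node (or null if the subtree is empty).
protected Node<T> getLargestNodeInSubTree(Node<T> startNode) {
 Node<T> currentNode = startNode;
 while (currentNode != null && currentNode.getRight() != null) {
  currentNode = currentNode.getRight();
 }
 return currentNode;
}

// Walk left from the given subtree root to reach its smallest node (or null if the subtree is empty).
protected Node<T> getSmallestNodeInSubTree(Node<T> startNode) {
 Node<T> currentNode = startNode;
 while (currentNode != null && currentNode.getLeft() != null) {
  currentNode = currentNode.getLeft();
 }
 return currentNode;
}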

Now, after getting the replacement node, we need to perform the deletion. We need to point the children of the node to be deleted at their new parent, update the left and right children of the replacement node, and also update its parent. The method below does that:
protected void deleteAndReplaceNode(Node<T> nodeToDelete, Node<T> replacementNode) {
 if (replacementNode != null) {

  // Record left and right child of replacementNode for later use
  Node<T> leftChildOfReplacementNode = replacementNode.getLeft();
  Node<T> rightChildOfReplacementNode = replacementNode.getRight();

  // For left and right children of nodeToDelete, new parent is replacementNode
  Node<T> leftChildOfNodeToDelete = nodeToDelete.getLeft();
  if (leftChildOfNodeToDelete != null && !leftChildOfNodeToDelete.equals(replacementNode)) {
   replacementNode.setLeft(leftChildOfNodeToDelete);
   leftChildOfNodeToDelete.setParent(replacementNode);
  }
  Node<T> rightChildOfNodeToDelete = nodeToDelete.getRight();
  if (rightChildOfNodeToDelete != null && !rightChildOfNodeToDelete.equals(replacementNode)) {
   replacementNode.setRight(rightChildOfNodeToDelete);
   rightChildOfNodeToDelete.setParent(replacementNode);
  }

  // Update the link of parent of replacementNode as well. For the parent of replacementNode kids of replacementNode are its new kids.
  // In short grand-kids are kids now.
  Node<T> parentOfReplacementNode = replacementNode.parent;
  if (parentOfReplacementNode != null && !parentOfReplacementNode.equals(nodeToDelete)) {
   Node<T> leftChildOfParentOfReplacementNode = parentOfReplacementNode.getLeft();
   Node<T> rightChildOfParentOfReplacementNode = parentOfReplacementNode.getRight();

   // Check whether the replacementNode is left or right child of its parent.
   if (leftChildOfParentOfReplacementNode != null && leftChildOfParentOfReplacementNode.equals(replacementNode)) {
    parentOfReplacementNode.setLeft(rightChildOfReplacementNode);
    if (rightChildOfReplacementNode != null)
     rightChildOfReplacementNode.setParent(parentOfReplacementNode);
   } else if (rightChildOfParentOfReplacementNode != null && rightChildOfParentOfReplacementNode.equals(replacementNode)) {
    parentOfReplacementNode.setRight(leftChildOfReplacementNode);
    if (leftChildOfReplacementNode != null)
     leftChildOfReplacementNode.setParent(parentOfReplacementNode);
   }
  }
 }

 // Update the link in the tree from the nodeToRemoved to the replacementNode
 Node<T> parentOfNodeToDelete = nodeToDelete.getParent();

 if (parentOfNodeToDelete == null) {
  // We are deleting root node. So replacing the root node.
  root = replacementNode;
  if (root != null) root.parent = null;
 } else if (parentOfNodeToDelete.getLeft() != null && (parentOfNodeToDelete.getLeft().getData().compareTo(nodeToDelete.getData()) == 0)) {
  parentOfNodeToDelete.setLeft(replacementNode);
  if (replacementNode != null) replacementNode.parent = parentOfNodeToDelete;
 } else if (parentOfNodeToDelete.getRight() != null && (parentOfNodeToDelete.getRight().getData().compareTo(nodeToDelete.getData()) == 0)) {
  parentOfNodeToDelete.setRight(replacementNode);
  if (replacementNode != null) replacementNode.parent = parentOfNodeToDelete;
 }
 size--;
}

Now this can be called by the public method as:
public T delete(T value) {
 Node<T> nodeToRemove = this.deleteValue(value);
 return ((nodeToRemove!=null)?nodeToRemove.getData():null);
}
protected Node<T> deleteValue(T value) {
 Node<T> nodeToRemoved = this.searchAndGetNode(value);
 if (nodeToRemoved != null) nodeToRemoved = deleteNode(nodeToRemoved);
 return nodeToRemoved;
}

Now similarly other methods can also be written.
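
A short usage sketch (assuming the classes compile as shown):
public class BinarySearchTreeDemo {
    public static void main(String[] args) {
        BinarySearchTree<Integer> tree = new BinarySearchTree<>();
        tree.insert(50);
        tree.insert(30);
        tree.insert(70);

        System.out.println(tree.contains(30)); // true
        System.out.println(tree.delete(30));   // 30
        System.out.println(tree.contains(30)); // false
    }
}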




Saturday, April 18, 2015

Java Concurrency Problem: A Producer which produces N work items and waits for them to be over.

“With great power often comes great confusion.” ― Dan Allen, Seam in Action

Many times we encounter concurrency problems in Java where existing constructs need to be tweaked. Recently I wrote about one such problem where we wanted to accumulate results of multiple threads and these threads could write or update values randomly.

This time I am going to write about a problem where we have one producer which produces N work items (of course N is not known in advance), and then the producer needs to wait for all those N items to be over. Before we move further I would like to stress again that what I am going to present here is an oversimplified version of the actual problem. I am not going to write complete code, rather only those pieces that make sense in the context of this post.

Using ExecutorCompletionService
If you are not familiar with it then have a look at this post, which explains how ExecutorCompletionService differs from ExecutorService. My worker will just do some work and that's all.
public class WorkerThread implements Runnable {
    @Override
    public void run() {
        try {
            Thread.sleep(1000);  // do some work
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

At first it seems that we can use this to submit the jobs, collect the list of futures, and later get the results out of it. When we retrieve the results using take(), the producer will obviously block.
public class WorkProducerUsingECS implements Runnable{
    private final CompletionService<Integer> service;
    private final List<Future<Integer>> listOfFutures;

    public WorkProducerUsingECS() {
        this.listOfFutures = new ArrayList<>();
        service = new ExecutorCompletionService<>(Executors.newFixedThreadPool(5));
    }

    @Override
    public void run() {
        produceRandomWorkers();
    }

    private void produceRandomWorkers() {
        Random random = new Random();
        int numberOfWorkers = random.nextInt(20) + 1;
        System.out.println("Workers count: " + numberOfWorkers);
        for (int i=0; i<numberOfWorkers; i++){
             listOfFutures.add(service.submit(new WorkerThread(),1));
        }
    }

    private void getResultAfterWorkIsOver() throws InterruptedException, ExecutionException {
        for(int i=0; i<listOfFutures.size(); i++) {
            Integer result = service.take().get();
            System.out.println("Result: "  + result);
        }
    }
}

This can be called using the following code:
public static void main(String[] args) {
   WorkProducerUsingECS producer =  new WorkProducerUsingECS();
   Thread thread = new Thread(producer);
   thread.start();
}

Now the problem is: once all the workers are done, how can we signal the producer so that it can call the getResultAfterWorkIsOver method?

Using CountDownLatch (CDL)
Using a latch seems a good option, but the problem is that we don't know the actual number of worker threads in advance. If it were known we could simply create a CDL and have the producer thread wait (using the await method) until all the work was complete.

Let's give it a try. We will create a wrapper class WorkerTask which takes a Runnable (to be executed) and an atomic reference to a CDL as constructor parameters. An AtomicReference can be updated atomically.
public class WorkerTask implements Runnable {
    private final Runnable runnable;
    private AtomicReference<CountDownLatch> latchAtomicReference;

    public WorkerTask(Runnable runnable, AtomicReference<CountDownLatch> latchAtomicReference) {
        this.runnable = runnable;
        this.latchAtomicReference = latchAtomicReference;
    }

    @Override
    public void run(){
        runnable.run();
        while (latchAtomicReference.get() == null) {
            try {
                Thread.sleep(1000L);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        latchAtomicReference.get().countDown();
    }
}

If a worker thread finishes before the CDL has been set, it sleeps for some time and then checks again whether it has been set. Once it is set, the worker invokes countDown on it. The AtomicReference is used because it can be updated atomically and will not create unintended problems in multi-threaded code.

Another thing to note is that I have called run() directly on the Runnable passed in the constructor, rather than wrapping it in a new Thread and calling start(), as I do not want to spawn an extra thread. You can read more here and here. This can be used with a producer as:
public class WorkProducerUsingCDL implements Runnable{
    private final AtomicReference<CountDownLatch> latchAtomicReference;

    public WorkProducerUsingCDL() {
        latchAtomicReference = new AtomicReference<>();
    }

    @Override
    public void run() {
        produceRandomWorkers();
    }

    private void produceRandomWorkers() {
        Random random = new Random();
        int numberOfWorkers = random.nextInt(20) + 1;
        System.out.println("Workers count: " + numberOfWorkers);
        for (int i=0; i<numberOfWorkers; i++){
            try {
                createWorkerTask().start();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        // Add some delay to simulate some processing
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        //By now all workers have been added. Some of them may be over and some may be processing.
        latchAtomicReference.set(new CountDownLatch(numberOfWorkers));

        // Now producer will wait for latch to be over.
        try {
            latchAtomicReference.get().await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // Now all workers are definitely over.
        try {
            processAfterWorkIsOver();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }

    private Thread createWorkerTask() {
        WorkerTask workerTask = new WorkerTask(new WorkerThread(), latchAtomicReference);
        Thread thread = new Thread(workerTask);
        return  thread;
    }

    private void processAfterWorkIsOver() throws InterruptedException, ExecutionException {
        System.out.println("Work is over by all workers.");
    }

}

And we can verify this code as:
public static void main(String[] args) {
    WorkProducerUsingCDL producerUsingCDL = new WorkProducerUsingCDL();
    Thread thread = new Thread(producerUsingCDL);
    thread.start();
}

One thing to observe in this code is that the number of threads is unknown to us, and it is possible that some of the threads are already near completion by the time we create a CDL and set it in the AtomicReference. I have knowingly not used ExecutorService in this example because that would have returned Futures, and when we invoke get on them the producer would block. Here the producer is also blocked, but by the await method called on the CDL in the AtomicReference.

This code should work, but in my opinion it is not a neat solution. One more point which has not been discussed yet: the process is cyclic. Every time a cycle starts, the producer produces an unknown number of threads and waits for them to complete. It seems we could also make use of CyclicBarrier to solve this problem. Think for a moment before going further: can we solve it using CyclicBarrier?

Using Phaser
As we do not have a flexible CDL, there is one construct that fits the problem, and that is Phaser. To know why and how to use Phaser check this post. We need to pass the Phaser reference to the worker as:
public class WorkerThread implements Runnable {
    private final Phaser phaser;

    public WorkerThread(Phaser phaser) {
        this.phaser = phaser;
    }

    @Override
    public void run() {
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Work is over.");
        phaser.arrive();
    }
}

Once the worker's work is over, it calls the arrive method on the Phaser to notify that this particular worker is done and has arrived at that particular phase. The Phaser is a mix of CyclicBarrier and CountDownLatch: when one phase is over it starts a new phase, and the phase number goes up to Integer.MAX_VALUE, after which it wraps around to zero and starts again. The producer can be written as:
public class WorkProducerUsingPhaser implements Runnable{
    private final Phaser phaser;
    private final ExecutorService executorService;

    public WorkProducerUsingPhaser() {
        phaser = new Phaser();
        executorService = Executors.newFixedThreadPool(5);
    }

    @Override
    public void run() {
        produceRandomWorkers();
    }

    private void produceRandomWorkers() {
        Random random = new Random();
        int numberOfWorkers = random.nextInt(20) + 1;
        System.out.println("Workers count: " + numberOfWorkers);

        phaser.register();

        for (int i=0; i<numberOfWorkers; i++){
            phaser.register();
            executorService.submit(getWorker());
        }

        phaser.arriveAndAwaitAdvance();

        // Now all workers are definitely over.
        try {
            processAfterWorkIsOver();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }

    private Runnable getWorker() {
        // No need to wrap the worker in a Thread; the ExecutorService only needs a Runnable.
        return new WorkerThread(phaser);
    }

    private void processAfterWorkIsOver() throws InterruptedException, ExecutionException {
        System.out.println("Work is over by all workers.");
    }
}

Here I have used a thread pool of 5 (you can pick any suitable number) and submit all the workers to the ExecutorService. The producer, along with all the workers, is registered with the Phaser and then waits by calling the arriveAndAwaitAdvance method. Each worker calls arrive when it is done, and when the last worker does so the producer is notified and can move ahead. A new phase also starts. This solution seems cleaner and fits the problem better.
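
As with the earlier variants, this can be verified with a small driver (a sketch, assuming the classes above):
public static void main(String[] args) {
    WorkProducerUsingPhaser producer = new WorkProducerUsingPhaser();
    Thread thread = new Thread(producer);
    thread.start();
}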

That is all and I hope you liked it. Please drop your feedback in comments.

Sunday, April 12, 2015

Overview of Enum in Java

As per the doc, "An enum type is a special data type that enables for a variable to be a set of predefined constants. The variable must be equal to one of the values that have been predefined for it." Some examples:
  • List of directions e.g. east, west, north and south
  • Set of suits in a deck of cards: clubs, diamonds, spades, hearts.
  • Rank in a deck of cards: ace, deuce, three etc.
  • Supported currency in a stock application: dollar, rupee, yen etc.
  • Supported operating systems by a backup application: windows, unix, ubuntu etc.
Consider the following enum which lists the supported operating systems for a product:
public enum SupportedOS {
    WINDOW, UNIX;
}

The concerned product can be installed only on Windows and Unix. This may be used by a product installer class as:
public class ProductInstaller {
   SupportedOS supportedOS;

    public ProductInstaller(SupportedOS supportedOS) {
        this.supportedOS = supportedOS;
    }

    public void install() {
        switch (supportedOS) {
            case WINDOW:
                installRoutineForWindows();
                break;
            case UNIX:
                installRoutineForUnix();
                break;
        }
    }

    private void installRoutineForUnix() {
        // code for installation for Unix
    }

    private void installRoutineForWindows() {
        // code for installation for Windows
    }
}

Where should I use an Enum?
When a variable can take one of the values from a small set of predefined values, using an Enum may make sense. One classical example of using an enum is writing a thread-safe singleton, as mentioned in Effective Java.
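
A sketch of such an enum-based singleton (the Registry name and its methods are just an illustration, not from any particular library):
public enum Registry {
    INSTANCE;

    private final java.util.Map<String, String> entries = new java.util.concurrent.ConcurrentHashMap<>();

    public void put(String key, String value) {
        entries.put(key, value);
    }

    public String get(String key) {
        return entries.get(key);
    }
}

The JVM guarantees that INSTANCE is created exactly once, and the enum approach is also safe against the serialization and reflection tricks that break other singleton implementations.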

What is an EnumSet?
If we use enums and want to allow a set of values, then we should use an EnumSet, but all the values must come from a single enum type that is specified when the set is created.
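
A small sketch using the SupportedOS enum from above (java.util.EnumSet is assumed to be imported):
// All values must come from the single enum type SupportedOS.
EnumSet<SupportedOS> supported = EnumSet.of(SupportedOS.WINDOW, SupportedOS.UNIX);
boolean canInstall = supported.contains(SupportedOS.UNIX); // true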

Why should I prefer an Enum?
Suppose in the above example we are passing the OS in string form:
public class ProductInstaller {
   String supportedOS;

    public ProductInstaller(String supportedOS) {
        this.supportedOS = supportedOS;
    }

    public void install() {
        switch (supportedOS) {
            case "WINDOW":
                installRoutineForWindows();
                break;
            case "UNIX":
                installRoutineForUnix();
                break;
        }
    }

    private void installRoutineForUnix() {
        // code for installation for Unix
    }

    private void installRoutineForWindows() {
        // code for installation for Windows
    }
}

Now the user can even pass an operating system for which the product has no installation support; worse, (s)he can pass any random string. So using an Enum:
  • provides type safety and better compile-time error checking.
  • makes the parameter self-documenting, in the sense that only one of the values from that enum can be passed.
  • avoids errors from passing invalid constants. We can also document the legal use cases for the enum.
  • lets us print a more sensible value if we override the toString method.
  • helps with auto-completion when using an IDE, e.g. IntelliJ IDEA.
  • gives us some helpful compiler-generated methods, e.g. values, which returns an array containing all of the values of the enum in the order they are declared.
Why can't an enum extend anything else?
Because all enums implicitly extend java.lang.Enum, and a class can only extend one class in Java.

How does an enum differ from a class?
An enum is similar to a regular class, but it always has a private (or package-private) constructor. We also get some additional supporting methods like values, valueOf etc. The intent is also different: with an enum we want a fixed number of instances, which is not the case with a class.

Are fields of an enum implicitly final?
No they are not. Consider the following example:
public class MainTest {
    enum Product {
        PRODUCT_ONE(1.1),PRODUCT_TWO(2.1),PRODUCT_THREE(3.1);

        private double version;

        Product(double version) {
            this.version = version;
        }

        @Override
        public String toString() {
            switch (this) {
                case PRODUCT_ONE:
                    System.out.println("Version of PRODUCT_ONE: " + version);
                    break;
                case PRODUCT_TWO:
                    System.out.println("Version of PRODUCT_TWO: " + version);
                    break;
                case PRODUCT_THREE:
                    System.out.println("Version of PRODUCT_THREE: " + version);
                    break;
            }
            return super.toString();
        }
    }

    public static void main(String[] args) {
        System.out.println(Product.PRODUCT_ONE);
        Product.PRODUCT_ONE.version = 1.3;
        System.out.println(Product.PRODUCT_ONE);
    }
}

The output would be:
Version of PRODUCT_ONE: 1.1
PRODUCT_ONE
Version of PRODUCT_ONE: 1.3
PRODUCT_ONE

Generally we would not create mutable enums, but there are some examples, e.g. lazy initialization (when we need to compute some field value when it is first used) or a regular singleton object such as a Registry. In most cases enum objects should be immutable and their fields final. We can also use reflection to examine an enum.

Why can't an enum have a public or protected constructor?
An enum constructor must have package-private or private access. We can think of an enum as a class with a fixed number of instances, and that number is not going to change at run time. We provide public or protected constructors when we want to allow more instances to be created, so it makes sense that an enum does not need them. An enum automatically creates its constants at the beginning of the enum body, and we cannot invoke an enum constructor ourselves.

Can we declare an Enum inside a method?
An enum can be declared inside or outside of a class, but not in a method. As per the doc: "Nested enum types are implicitly static. It is permissible to explicitly declare a nested enum type to be static." This implies that it is impossible to define a local enum, or to define an enum in an inner class.

Can we mark an Enum final?
As mentioned above, all nested enums are implicitly static. But we cannot mark an enum declared outside of a class with any of the following: static, abstract, final, protected or private.

How can we ensure that all enum values are used?
In one of our projects we are using an enum to specify the appliance type, which can have various possible values, e.g. Red Hat Linux, SUSE, Windows etc.
public enum ApplianceType {
    APPLIANCE_TYPE_RHL("LinuxBox"),APPLIANCE_TYPE_SUSE("SuseBox"),APPLIANCE_TYPE_WIN("WinBox"), APPLIANCE_TYPE_UBUNTU("Ubuntu");

    private final String applianceType;

    ApplianceType(String applianceType) {
        this.applianceType = applianceType;
    }

    @Override
    public String toString() {
        return applianceType;
    }
}

And then there is a service which does processing specific to the appliance. Here is the stripped-down version:
public class ApplianceService {
    public static void initializeAppliances(ApplianceType applianceType) {
        if(ApplianceType.APPLIANCE_TYPE_RHL == applianceType) {
            // Do some processing specific to Red Hat Appliance.
        }
        else if(ApplianceType.APPLIANCE_TYPE_SUSE == applianceType) {
            // Do some processing specific to SUSE Appliance.
        }
        else if(ApplianceType.APPLIANCE_TYPE_WIN == applianceType) {
            // Do some processing specific to Windows Appliance.
        }
        System.out.println("Initialization over for appliance type: "+applianceType);
    }

    public static void main(String[] args) {
        initializeAppliances(ApplianceType.APPLIANCE_TYPE_RHL);
    }
}

Everything is fine till here. But after some time a new appliance type gets introduced, e.g. an Ubuntu appliance, and now we need to scan the code to add handling for this new kind of appliance. This feels like a code smell and needs to be handled. The objective is to ensure that every enum value is used.

The point to observe is that every appliance type should have its own processing method, so we can introduce an abstract method in the enum itself; the next step is to ensure that every enum value provides an implementation for that method.
package enums;

public enum ApplianceType {
    APPLIANCE_TYPE_RHL("LinuxBox") {
        @Override
        public void initializeAppliance() {
           // code to initialize RHL.
        }
    },APPLIANCE_TYPE_SUSE("SuseBox") {
        @Override
        public void initializeAppliance() {
            // code to initialize Suse.
        }
    },APPLIANCE_TYPE_WIN("WinBox") {
        @Override
        public void initializeAppliance() {
            // code to initialize Windows.
        }
    }, APPLIANCE_TYPE_UBUNTU("Ubuntu") {
        @Override
        public void initializeAppliance() {
            // code to initialize Ubuntu.
        }
    };

    private final String applianceType;

    ApplianceType(String applianceType) {
        this.applianceType = applianceType;
    }

    // Force every constant to implement initializeAppliance.
    public abstract void initializeAppliance();

    @Override
    public String toString() {
        return applianceType;
    }
}

Now the next time we introduce a new appliance type, it must also implement the abstract method or the code will not compile. Another option is to declare an interface and make the enum implement it.
public interface ApplianceInitializer {
    public void initializeAppliance();
}

public enum ApplianceType implements ApplianceInitializer{
    APPLIANCE_TYPE_RHL("LinuxBox") {
        @Override
        public void initializeAppliance() {
           // code to initialize RHL.
        }
    },APPLIANCE_TYPE_SUSE("SuseBox") {
        @Override
        public void initializeAppliance() {
            // code to initialize Suse.
        }
    },APPLIANCE_TYPE_WIN("WinBox") {
        @Override
        public void initializeAppliance() {
            // code to initialize Windows.
        }
    }, APPLIANCE_TYPE_UBUNTU("Ubuntu") {
        @Override
        public void initializeAppliance() {
            // code to initialize Ubuntu.
        }
    };

    private final String applianceType;

    ApplianceType(String applianceType) {
        this.applianceType = applianceType;
    }

    @Override
    public String toString() {
        return applianceType;
    }
}

This implementation also ensures that all values of the enum are used, but it is more flexible in the sense that the interface can be implemented by other classes as well (in case we need it).
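
With either approach the if/else chain in ApplianceService collapses to a single polymorphic call; a sketch:
public class ApplianceService {
    public static void initializeAppliances(ApplianceType applianceType) {
        applianceType.initializeAppliance();
        System.out.println("Initialization over for appliance type: " + applianceType);
    }

    public static void main(String[] args) {
        initializeAppliances(ApplianceType.APPLIANCE_TYPE_UBUNTU);
    }
}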

That's all folks! Please leave your feedback in comments.

Saturday, April 4, 2015

What are nested classes and why do we need them?

If a class A is declared inside another class B then class A is a nested class; it is a member of the enclosing class. If a nested class is marked static then it is called a static nested class, and if it is not then it is called a non-static nested class or inner class.

There is nothing called a top-level static class or a static inner class. We only have static nested classes.

Why do we need nested class?
As per Oracle's official page:
  1. If a class is useful to only one other class then it seems logical to embed it as a nested class. For example, if a class RedBlackNode (representing a node of a Red-Black Tree) is used only by the class RedBlackTree, then it makes sense to make RedBlackNode a nested class of RedBlackTree. So it is a way of logically grouping classes that are only used in one place.
  2. A nested class increases encapsulation. Consider a class A whose members are declared private, but class B needs to access them. In that case we can hide class B inside A, and B can access the members of A despite the fact that they are private. B can also be hidden from the outside world when declared private.
  3. It can lead to more readable and maintainable code.
A static nested class interacts with the instance members of its outer class (and other classes) just like any other top-level class. In effect, a static nested class is behaviourally a top-level class that has been nested in another top-level class for packaging convenience.

Is a nested class a violation of encapsulation?
Some developers feel that a nested class is an extreme violation of encapsulation. IMO this feature is safe if it is used sensibly. It is not a violation of encapsulation and does not violate any programming principle. As I mentioned above, nested class B (when declared private) is hidden from the outside world and it needs to be part of class A in some way; also, the members of class A are still private, so IMO it is perfectly fine and follows the Single Responsibility Principle. If you are not sure whether you need a nested class then I believe it is better to avoid one, but if you really need it then it is fine to use it.

How do we use static nested class and non-static nested class (inner class)?
Static nested classes are always accessed using the enclosing class's name.
OuterClass.StaticNestedClass nestedObject = new OuterClass.StaticNestedClass();

An instance of an inner class (non-static nested class) can exist only within an instance of the outer class.
OuterClass.InnerClass innerObject = outerObject.new InnerClass();

Types of Non-static nested class (Inner Class)
There are two types: anonymous and local. We generally use an anonymous class (a class with no name) when we want to create an instance that overrides some method without declaring a separate named subclass; we simply instantiate the anonymous class in place. The classic example is creating an anonymous class for the Runnable interface:
Thread thread = new Thread(new Runnable() {
        @Override
        public void run() {
            System.out.println("New Thread started.");
        }
    });

IMO, since Java 8 it makes more sense to use a lambda expression in such places to replace the anonymous class, as below:
Thread thread = new Thread(() -> System.out.println("New Thread started"));

Another type is local class. Local classes are classes that are defined in a block, which is a group of zero or more statements between balanced braces. We typically find local classes defined in the body of a method.
public class ValidateUser {
    public void validateEmails(String emailId, String alternativeEmailId) {

        final int emailIdLength = 20;

        class Email {

            String formattedEmail;

            Email(String email) {
                if (email.length() == emailIdLength)
                    formattedEmail = email;
                else
                    formattedEmail = null;
            }

            public String getFormattedEmail() {
                return formattedEmail;
            }

            // Valid in JDK 8 and later
            public void printOriginalEmailId() {
                System.out.println("Original email id is: " + emailId);
            }

        }

        Email email = new Email(emailId);
        if(email.getFormattedEmail() == null) {
            System.out.println("Email is invalid");
        }

        Email alternativeEmail = new Email(alternativeEmailId);
        if(alternativeEmail.getFormattedEmail() == null) {
            System.out.println("Alternative email is invalid");
        }
        
    } // method validateEmails ends here.
}

In the above example the method validateEmails validates the emails provided by the user. It defines a local class Email to represent a user's email id. A local class has access to the members of its enclosing class. It also has access to local variables that are declared final: in the above example the local variable emailIdLength is final and can be accessed in the constructor of the local class Email.

However starting in Java 8 there are two changes:
  1. A local class can access local variables and parameters of the enclosing block that are final or effectively final. A variable or parameter whose value is never changed after it is initialized is effectively final. If the variable emailIdLength were not declared final and were reassigned somewhere in the method, it would no longer be effectively final and the compiler would complain where the local class uses it.
  2. If we declare the local class in a method it can access the parameters of the enclosing method. In the above example the method printOriginalEmailId was able to access the method parameter emailId.
Anonymous classes are essentially local classes that do not have a name. We should use them when we need a local class only once.

Should I always mark a nested class static?
As Jon Skeet points out, if we need a nested class it is better to start with a static nested class and make it non-static (an inner class) only if usage really demands it. Whenever we see an inner class we should ask whether we really need the extra complexity and the implicit (rather than explicit and cleaner) reference to the outer containing class.

If an instance of an inner class is strongly referenced then the outer instance is strongly referenced too. This can lead to confusion when the outer instance is not garbage collected even though nothing appears to reference it. We must remember that every inner class instance maintains an implicit reference to an instance of its outer class.
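A minimal sketch of that hidden reference (the class names are hypothetical): as long as the inner instance stays strongly reachable, the Outer instance that created it, together with everything it holds, cannot be garbage collected.
public class Outer {
    private final byte[] bigBuffer = new byte[10 * 1024 * 1024];   // large state held by the outer instance

    public class Inner {
        // no explicit field here, but every Inner carries an implicit reference to its Outer
    }

    public Inner createInner() {
        return new Inner();
    }

    public static void main(String[] args) {
        Inner inner = new Outer().createInner();
        // The Outer instance (and its 10 MB buffer) stays reachable through 'inner',
        // even though no variable refers to the Outer instance directly.
        System.out.println(inner);
    }
}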


How to figure out if we need an inner class (non-static nested class)?
One point to note: inner classes are not allowed to declare static methods or static fields (other than compile-time constants). If we find ourselves passing a lot of the outer class's state into the constructor of a separate top-level class, that is a hint that an inner class might help: an inner class can see all the fields of the outer class, so we do not have to pass that state around explicitly.
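A minimal sketch of that hint (the Report and Header names are made up): rather than passing title and author into a helper's constructor, the helper is an inner class and reads the outer fields directly.
public class Report {
    private String title = "Quarterly numbers";
    private String author = "Team A";

    // Inner class: reads the outer fields directly instead of receiving them via its constructor.
    class Header {
        String render() {
            return title + " by " + author;
        }
    }

    public String printHeader() {
        return new Header().render();
    }
}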

Difference between static nested class and non-static nested class (inner class)

Static nested class | Non-static nested class (inner class)
Does not need an instance of the outer class, as it is not associated with any instance. | Needs an instance of the outer class for instantiation.
Declared with the static keyword, so it is a static member of the outer class and can be accessed as such. | Not a static member; every instance keeps an implicit reference to an instance of the outer class.
Can be imported using static imports. | Not applicable.
Should generally be preferred, for the reasons above. | Use only when a reference to the enclosing instance is genuinely needed.

Can we mark the nested classes final?
First of all, the static keyword can be applied only to a nested class, not to a top-level class. Marking a variable final makes it a constant that cannot be reassigned, and marking a class final means it cannot be inherited. So if we mark a nested class (static or non-static) final then it cannot be extended, just like any regular final class.

Can we extend nested classes (static or inner)?
Yes, we can extend both of them. Consider the following class, which contains one static nested class and one inner class.
public class OuterClass {
    static class StaticNestedClass {
        void display() {
            System.out.println("Inside StaticNestedClass");
        }
    }
    class InnerClass {
        void display() {
            System.out.println("Inside InnerClass");
        }
    }
}

The following class extends OuterClass, and its nested classes extend the respective nested classes of OuterClass:
public class OuterClassDerivedClass extends OuterClass{
    static class StaticNestedDerivedClass extends OuterClass.StaticNestedClass {
        @Override
        void display() {
            System.out.println("Inside StaticNestedDerivedClass");
        }
    }
    class InnerDerivedClass extends OuterClass.InnerClass {
        @Override
        void display() {
            System.out.println("Inside InnerDerivedClass");
        }
    }
}

Now we can test these classes as:
public class InheritanceTest {
    public static void main(String[] args) {
        OuterClass outerClass = new OuterClass();
        OuterClass.InnerClass innerClass = outerClass.new InnerClass();
        innerClass.display();
        OuterClass.StaticNestedClass staticNestedClass = new OuterClass.StaticNestedClass();
        staticNestedClass.display();

        OuterClassDerivedClass outerClassDerivedClass = new OuterClassDerivedClass();
        OuterClassDerivedClass.InnerDerivedClass innerDerivedClass = outerClassDerivedClass.new InnerDerivedClass();
        innerDerivedClass.display();
        OuterClassDerivedClass.StaticNestedDerivedClass staticNestedDerivedClass = new OuterClassDerivedClass.StaticNestedDerivedClass();
        staticNestedDerivedClass.display();
    }
}

As expected the output will be:
Inside InnerClass 
Inside StaticNestedClass 
Inside InnerDerivedClass 
Inside StaticNestedDerivedClass

Can a Java File contain more than one public class?
A Java file can contain only one public top-level class, but it may contain any number of public nested classes. Check the class below:
public class Sample {
    public class InnerClassOne {
        public void display() {
            System.out.println("In class innerClassOne");
        }
    }
    public class InnerClassTwo {
        public void display() {
            System.out.println("In class innerClassTwo");
        }
    }
    public static class StaticNestedClassOne {
        public void display() {
            System.out.println("In class StaticNestedClassOne");
        }
    }
    public static class StaticNestedClassTwo {
        public void display() {
            System.out.println("In class StaticNestedClassTwo");
        }
    }
}

This can be tested as:
Sample sample = new Sample();
Sample.InnerClassOne innerClassOne = sample.new InnerClassOne();
Sample.InnerClassTwo innerClassTwo = sample.new InnerClassTwo();

Sample.StaticNestedClassOne staticNestedClassOne = new Sample.StaticNestedClassOne();
Sample.StaticNestedClassTwo staticNestedClassTwo = new Sample.StaticNestedClassTwo();

innerClassOne.display();
innerClassTwo.display();
staticNestedClassOne.display();
staticNestedClassTwo.display();

Can a nested class extend its outer class itself?
Yes it can. If we look inside the class Arc2D in the java.awt.geom package, we will notice that it has static nested classes Float and Double which extend the outer class itself.
public abstract class Arc2D extends RectangularShape {
  public static class Float extends Arc2D implements Serializable { .. }
  public static class Double extends Arc2D implements Serializable { .. }
}

This is done to logically group the classes and their logic. If a user wishes to use the abstract class Arc2D, it is helpful to find the concrete implementations they can use inside the class itself.

In a similar way, a non-static nested class (inner class) can also extend its own outer class. But why would we want that? Consider an outer class Car with an inner class Wheel: every instance of Car has a Wheel, but it makes no conceptual sense for Wheel to extend Car. IMO such a design is problematic and should be avoided.

References:
https://docs.oracle.com/javase/tutorial/java/javaOO/nested.html
http://mindprod.com/jgloss/nestedclasses.html#INHERITANCE

Thursday, April 2, 2015

Default methods in Java

Default methods were added in Java 8. They are also known as defender methods or virtual extension methods. Their prime purpose is to allow new functionality to be added to interfaces in existing libraries while keeping binary compatibility with code written against the older versions of those interfaces.

When an interface contains a default method and another interface extends this interface, we can do one of the following (sketched in the example after this list):
  • We do not mention the default method at all, in which case the default method is inherited by the new interface.
  • We can re-declare the method, which makes it abstract again.
  • We can redefine the method, which overrides the inherited default.
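A minimal sketch of the three options (the interface names are made up):
public interface Vehicle {
    default String describe() { return "some vehicle"; }
}

// Option 1: say nothing; the default describe() is inherited as-is.
interface Bicycle extends Vehicle {
}

// Option 2: re-declare the method; it becomes abstract again and implementors must define it.
interface Truck extends Vehicle {
    String describe();
}

// Option 3: redefine the method, overriding the inherited default.
interface Bus extends Vehicle {
    @Override
    default String describe() { return "a bus"; }
}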
One aspect where confusion generally arises is the diamond pattern, where a class implements two interfaces and both interfaces define a method with the same signature. Consider the following example:
public interface Engine {  
    default void start(){  
        System.out.println("Starting engine Engine.start()");  
    }  
}

public interface CNGKit {
    default void start(){
        System.out.println("Starting CNG Kit CNGKit.start()");
    }
}

public class Car implements Engine, CNGKit {

}

A car can have both an engine and a CNG kit, and in that case we need to specify whether the car should run on the engine (petrol or diesel) or on the CNG kit. The example is only meant to explain the concept and is not an ideal piece of object-oriented design. The class above will not compile, and we need to resolve the conflict manually by overriding the conflicting method:
public class Car implements Engine, CNGKit {
    @Override
    public void start() {
        Engine.super.start();
    }
}

What is the motivation for adding default methods in Java?
  1. They provide an option to extend existing libraries with new methods while preserving binary compatibility. An example is the addition of the forEach method to Collection, whose default implementation is written in terms of the iterator method.
  2. They provide the flexibility for an interface to define a method implementation that is used as the default whenever a concrete class implementing the interface does not provide its own implementation.
  3. Thanks to default methods, an interface can stay a functional interface even if it has more than one method. A functional interface has only one non-default abstract method, which can be implemented using a lambda expression.
Regarding the last point, consider the functional interface Predicate, which has only one non-default abstract method, test, but provides default methods for negating a predicate or combining it with another predicate. Without default methods, these methods would have to live in a separate utility class, like the pre-Java 8 Collections class (since we do not want to give up the possibility of lambda implementations for such an interface).
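For instance, here is a minimal sketch of a hypothetical Predicate-like functional interface: the single abstract method keeps it lambda-friendly, while the default methods supply the combinators.
@FunctionalInterface
public interface Check<T> {
    boolean test(T value);                 // the single abstract method, implementable as a lambda

    default Check<T> negate() {            // default combinator: does not break the functional interface
        return value -> !test(value);
    }

    default Check<T> and(Check<T> other) { // default combinator: compose two checks
        return value -> test(value) && other.test(value);
    }
}

It can still be used with lambdas, for example:
Check<String> nonEmpty = s -> !s.isEmpty();
Check<String> valid = nonEmpty.and(s -> s.length() < 20);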

Why default methods cannot be declared final?
Consider the following abstract class Dispatcher, which can be used to dispatch an event with an empty message or an event with a specific message:
abstract class Dispatcher {
    final void dispatchMessage() {
        dispatchMessage(null);
    }
    abstract void dispatchMessage(String message);
}
The method declared final is a convenience method to dispatch an empty message, whereas the other method must be implemented by concrete subclasses. If default methods were allowed to be final, we might imagine using an interface like the following instead:
public interface IDispatcher {
    default final void dispatchMessage() {
        dispatchMessage(null);
    }
    void dispatchMessage(String message);
}

This seems like a good use case, so why were default methods not allowed to be declared final?

First: we need to understand the primary goal behind the decision to add default methods. It was interface evolution, not turning interfaces into traits. The idea behind a default method is that it is an interface method which will be used if the implementing class does not provide a specific implementation for it; if the implementing class provides one, that specific implementation will be used.

Since the purpose of default methods is interface evolution, a default method that could be declared final would no longer be a default implementation; it would be the only implementation.

Second: consider the diamond problem explained above, where both interfaces (Engine and CNGKit) have the same default method start. Suppose CNGKit makes its default method final and Engine is not under our control (it may be third party); then our class Car is irretrievably broken, because it cannot compile without overriding start(), yet it cannot override start() because it is final in CNGKit. Final methods make more sense for single-inheritance classes than for interfaces, which allow multiple inheritance.

Third: the default implementation of a default method is only considered if the class (or a superclass) does not provide any declaration, abstract or concrete, of the method. If a default method were declared final but one of the superclasses already implemented the method, the default would be ignored, which is probably not what we intended when marking it final.

Default methods were meant for API evolution but is it wrong to use them in new interfaces?
IMO every feature of a language is right if it is used in the right manner. Suppose we have certain convenience methods that would otherwise be implemented identically as non-default methods by every class. Consider the example of logging, which has a convenience method to obtain a logger and is needed by each class. This method can be declared as a default method in an interface ILoggable, and every class can simply implement the interface.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// assumes SLF4J's Logger and LoggerFactory are on the classpath
public interface ILoggable {
    default Logger logger() {
        return LoggerFactory.getLogger(this.getClass());
    }
}

In this example the method is strictly for convenience and is an ideal fit for a default method. As explained by Brian Goetz, default methods fit the following use cases:
  1. Interface evolution: we all agree on this one.
  2. Optional methods: implementors need not implement these methods if they can live with the default implementation, e.g. Iterator.remove is given a default (which throws UnsupportedOperationException) and many implementations of Iterator have exactly this behavior.
  3. Convenience methods: methods that are strictly for convenience and would otherwise be implemented as non-default methods on each class; the logger() method above is an example.
  4. Combinators: compositional methods that create new instances of an interface based on the current instance, for example Predicate.and() and Comparator.thenComparing().

Why default methods cannot be declared synchronized?
Actually we cannot mark any method of an interface (default, static, or abstract) synchronized. We can use a synchronized block within a default (or static) method, but marking the method itself synchronized is not allowed.
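A minimal sketch (the Journal interface is hypothetical): the commented-out declaration would be rejected by the compiler, while the synchronized block inside the default method is fine.
import java.util.List;

public interface Journal {
    // default synchronized void log(List<String> sink, String message) { ... }
    // would not compile: 'synchronized' is not allowed on interface methods

    default void log(List<String> sink, String message) {
        synchronized (sink) {      // a synchronized block inside the default method is allowed;
            sink.add(message);     // the lock object belongs to the caller, not to the interface
        }
    }
}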

Locking is all about coordinating shared access to mutable state. Each object should have a synchronization policy that determines which lock guards its state. Many objects use the Java Monitor Pattern, where the state of the object is guarded by its intrinsic lock, and the synchronized keyword on a method implicitly assumes this pattern. The synchronization policy is determined by the state, and the state is owned by the class, not by the interface. Using a synchronized method in an interface would assume a particular synchronization policy for which we have no reasonable basis; it would give us a false sense of thread safety, and no error message would tell us that we are assuming the wrong synchronization policy.

It is already tough to maintain a synchronization policy within a single source file; an interface can have many implementing classes, and making sure every implementation follows a synchronization policy dictated by the interface would be really hard.

Why synchronized cannot be used with a regular method?
Actually synchronized is an implementation detail. One implementation of a method might need to be synchronized whereas another might not. The caller of the method does not care whether the method is synchronized or not; it is not part of the contract that explains what the method does. So synchronized does not belong in an interface, but in the concrete implementation of the interface.

Why we cannot define a default method for a method from Object class?
In some scenarios it may seem like a good idea to define a default version of a method from the Object class, such as toString. But this is not allowed. This mail covers a lot of interesting material on the subject. There were several design considerations:
  • To keep the interface model simple.
  • Inheriting equals/hashCode/toString is strongly tied to single inheritance, whereas interfaces support multiple inheritance and are also stateless.
  • It can lead to some surprising behaviors.

When we talk about inheritance and conflicts, the resolution rules are simple (see the sketch after this list):
  1. Classes win over interfaces.
  2. Derived interfaces win over super interfaces: a default from List wins over a default from Collection.
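A minimal sketch of both rules (all names are made up):
public class ResolutionDemo {
    interface Animal {
        default String sound() { return "generic"; }
    }
    interface Dog extends Animal {
        @Override
        default String sound() { return "woof"; }   // rule 2: the more derived interface wins over Animal
    }
    static class LoudAnimal {
        public String sound() { return "LOUD"; }
    }
    static class Husky extends LoudAnimal implements Dog { }   // rule 1: the class method wins over the default
    static class Beagle implements Dog { }                     // only interfaces in play: Dog's default wins

    public static void main(String[] args) {
        System.out.println(new Husky().sound());    // prints LOUD
        System.out.println(new Beagle().sound());   // prints woof
    }
}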
These methods (equals/hashCode/toString) are all about the state of the object, and the class owns the state, not the interface. The class is also in a better position to determine what equality means for it (see Effective Java on equality). Defaults should act only as defaults; they should not change the semantics of concrete implementing classes, and in this case that would not hold.

Why default methods cannot be static?
First, think about why we would really need that. Default methods provide a default implementation that implementing classes can override; if we declared them static, overriding would not be possible. And if the intention is that implementing classes should not override the method, we can simply declare it as a static interface method (also allowed since Java 8) in the first place.

With the advent of default methods, interfaces start to look a lot like abstract classes. Why were these methods added in the first place?
To understand the difference between an abstract class and an interface, check this post. The motivation has already been explained above, and a nice illustration is that we can now use
list.sort(ordering);
instead of
Collections.sort(list, ordering);

That's all for now. Enjoy!

Sunday, March 29, 2015

How CAS (Compare And Swap) in Java works?

Before we dig into the CAS (Compare And Swap) strategy and how it is used by atomic constructs like AtomicInteger, first consider this code:
public class MyApp
{
    private volatile int count = 0;
    public void updateVisitors()
    {
       ++count; //increment the visitors count
    }
}

This sample code tracks the count of visitors to the application. Is there anything wrong with it? What will happen if multiple threads try to update count? The problem is that simply marking count as volatile does not guarantee atomicity, and ++count is not an atomic operation (it is a read-modify-write sequence). To read more check this.
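A quick way to see the lost updates for ourselves, assuming a throwaway test with two threads and 100,000 increments each is enough to expose the race on a given machine:
public class LostUpdateDemo {
    private static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++;    // read-modify-write: not atomic even though count is volatile
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Usually prints something less than 200000 because some increments were lost.
        System.out.println("count = " + count);
    }
}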

Can we solve this problem if we mark the method itself synchronized as shown below:
public class MyApp
{
    private int count = 0;
    public synchronized void updateVisitors()
    {
       ++count; //increment the visitors count
    }
}

Will this work? If yes then what changes have we made actually?
Does this code guarantee atomicity? Yes.
Does this code guarantee visibility? Yes.

Then what is the problem?
It makes use of locking, which introduces a lot of delay and overhead. Check this article. This is a very expensive way of making things work.

To overcome these problems, atomic constructs were introduced. If we use an AtomicInteger to track the count, it will work correctly without explicit locking.
public class MyApp
{
    private AtomicInteger count = new AtomicInteger(0);
    public void updateVisitors()
    {
       count.incrementAndGet(); //increment the visitors count
    }
}

The classes that support atomic operations, e.g. AtomicInteger, AtomicLong etc., make use of CAS. CAS does not use locking; rather, it is optimistic in nature. It follows these steps:
  • Read the current value and compute the new value from it.
  • Compare the variable's current value to the value read earlier. If they do not match, some other thread has changed the value in between and we retry; otherwise the new value is swapped in atomically.

Check the following code in AtomicLong class:
public final long incrementAndGet() {
    for (;;) {
        long current = get();
        long next = current + 1;
        if (compareAndSet(current, next))
          return next;
    }
}

In JDK 8 the above code has been changed to a single intrinsic:
public final long incrementAndGet() {
        return unsafe.getAndAddLong(this, valueOffset, 1L) + 1L;
}

What advantage does this single intrinsic have?
This single line is a JVM intrinsic that the JIT compiler translates into an optimized instruction sequence. On the x86 architecture it becomes a single LOCK XADD instruction, which can yield better performance than the classic load/CAS loop.

Now think about the case of high contention, where many threads want to update the same atomic variable. Under extremely high contention locking may outperform atomic variables, but at realistic contention levels atomic variables outperform locks. Java 8 introduced one more construct, LongAdder. As per the documentation:
This class is usually preferable to AtomicLong when multiple threads update a common sum that is used for purposes such as collecting statistics, not for fine-grained synchronization control. Under low update contention, the two classes have similar characteristics. But under high contention, expected throughput of this class is significantly higher, at the expense of higher space consumption.
So LongAdder is not always a replacement for AtomicLong. We need to consider the following aspects (a minimal usage sketch follows the list):

  • When no contention is present AtomicLong performs better.
  • LongAdder allocates Cell objects (a final class declared in the abstract class Striped64) to avoid contention, and those cells consume memory. So if we have a tight memory budget we should prefer AtomicLong.
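A minimal usage sketch of LongAdder for the statistics-counter case described in the quoted documentation (the class and method names are made up):
import java.util.concurrent.atomic.LongAdder;

public class RequestStats {
    private final LongAdder requests = new LongAdder();

    public void onRequest() {
        requests.increment();   // cheap under contention: updates are striped across internal cells
    }

    public long totalRequests() {
        return requests.sum();  // sum() folds the cells; it is not an atomic snapshot under concurrent updates
    }
}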

That's all folks. Hope you enjoyed it.

Tuesday, March 24, 2015

Is multi-threading really worth it?

"Premature optimization is the root of all evil" - Donald E. Knuth

This article is in no way against the usage of multithreading when we really need it. It is just an attempt to explain the cost that multithreading comes with, as there ain't no such thing as a free lunch. I have seen many real-life projects where multithreading is used in the name of making things faster. We should always benchmark such changes to conclude whether the concurrent code has really made things faster or not; it is quite possible that it introduces a lot of delay, in which case it is better to avoid it. There is always context-switch overhead, and a thread needs more resources to run.

I will explain it with one very simple example, run on my machine with an Intel Core i5 processor and 4 GB of DDR3 RAM. The example uses a very simple counter class with a single field count:
public class Counter {
    long count;
    public Counter(long count) {
        this.count = count;
    }
    void  incrementByOne() { count++; }
    long getCount() { return count; }
}

This class will be used to increment the counter to 1 billion (1,000,000,000) as shown below:
public static void main(String[] args) {
    Counter counter = new Counter(0);
    long startTime = System.currentTimeMillis();
    for (long index = 0; index<1000000000L; index++) {
        counter.incrementByOne();
    }
    long endTime = System.currentTimeMillis();
    System.out.println("Time taken: " + (endTime - startTime) + " ms");
    System.out.println("Value is: " + counter.getCount());
}

There is only one thread, and it increments the counter to one billion. The time taken on average is 2450 ms. I have also printed the value of count to verify that it was incremented to one billion successfully and nothing went wrong. Next I am going to mark the count variable in the Counter class volatile, run the same code again, and note the time taken.
public class Counter {
    volatile long count;
    public Counter(long count) {
        this.count = count;
    }
    void  incrementByOne() { count++; }
    long getCount() { return count; }
}

This takes 8680 ms on average, so the time has increased by a factor of 3.54. We also have atomic classes we can use to increment a counter. Let's write another counter class using AtomicLong:
public class AtomicCounter {
    private AtomicLong count;

    public AtomicCounter() {
        count = new AtomicLong(0);
    }
    void  incrementByOne() { count.incrementAndGet(); }
    long getCount() { return count.get(); }
}

We will now use this class with a single thread only, to check the time taken.
public static void main(String[] args) {
        AtomicCounter counter = new AtomicCounter();
        long startTime = System.currentTimeMillis();
        for (long index = 0; index<1000000000L; index++) {
            counter.incrementByOne();
        }
        long endTime = System.currentTimeMillis();
        System.out.println("Time taken: " + (endTime - startTime) + " ms");
        System.out.println("Value is: " + counter.getCount());
    }

The average time taken using AtomicLong is 8720 ms, so the time has increased by a factor of 3.55. Next we will use a Lock to implement a similar counter. We will use the same Counter class but acquire a lock while incrementing the counter:
public static void main(String[] args) {
        Counter counter = new Counter(0);
        Lock lock = new ReentrantLock();
        long startTime = System.currentTimeMillis();
        for (long index = 0; index < 1000000000L; index++) {
            lock.lock();
            try {
                counter.incrementByOne();
            } finally {
                lock.unlock();
            }
        }
        long endTime = System.currentTimeMillis();
        System.out.println("Time taken: " + (endTime - startTime) + " ms");
        System.out.println("Value is: " + counter.getCount());
    }

This time it takes 28340 ms on average, which means the time has increased by a whopping factor of 11.5, and the fun part is that we are still using a single thread. Let's now move to two threads sharing the same instance of AtomicCounter:
public static void main(String[] args) {
        AtomicCounter counter = new AtomicCounter();

        Thread thread1 = new Thread(() -> {
            for (long index = 0; index < 500000000L; index++) {
                counter.incrementByOne();
            }
        });
        Thread thread2 = new Thread(() -> {
            for (long index = 0; index < 500000000L; index++) {
                counter.incrementByOne();
            }
        });
        long startTime = System.currentTimeMillis();

        thread1.start();
        thread2.start();
        try {
            thread1.join();
            thread2.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        long endTime = System.currentTimeMillis();
        System.out.println("Time taken: " + (endTime - startTime) + " ms");
        System.out.println("Value is: " + counter.getCount());
    }

With a single thread the time taken was 8720 ms; with two threads it took an average of 20687 ms, roughly 2.4 times as long. Now how much time will two threads with locks take?
public static void main(String[] args) {

        Counter counter = new Counter(0);
        Lock lock = new ReentrantLock();

        Thread thread1 = new Thread(() -> {
            for (long index = 0; index < 500000000L; index++) {
                lock.lock();
                try {
                    counter.incrementByOne();
                } finally {
                    lock.unlock();
                }
            }
        });

        Thread thread2 = new Thread(() -> {
            for (long index = 0; index < 500000000L; index++) {
                lock.lock();
                try {
                    counter.incrementByOne();
                } finally {
                    lock.unlock();
                }
            }
        });

        long startTime = System.currentTimeMillis();

        thread1.start();
        thread2.start();
        try {
            thread1.join();
            thread2.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        long endTime = System.currentTimeMillis();
        System.out.println("Time taken: " + (endTime - startTime) + " ms");
        System.out.println("Value is: " + counter.getCount());
    }

Here is the summary of the result:

Threads                        | Time Taken (in milliseconds)
Single Thread                  | 2450
Single Thread using volatile   | 8680 (3.54 times)
Single Thread using AtomicLong | 8720 (3.55 times)
Single Thread using Lock       | 28340 (11.5 times)
Two Threads using AtomicLong   | 20687
Two Threads using Lock         | 90245

So the moral of the story is that concurrency comes with its own cost, and we should use it only when we really need it. That's all for now. Enjoy!!