Why Microservice?



Companies like Netflix, Amazon, and others have adopted the concept of microservices in their products. Microservices are one of the hottest topics in the software industry, and many organizations want to adopt them. Especially helpful is the fact that DevOps can play very well with microservices.
But what is a microservice? Why should an organization adopt microservices?
To understand them, let's first take a look at monolithic software.
In monolithic software, we mainly use a three-tier architecture:
  • Presentation layer
  • Business layer
  • Data access layer
Say a traditional web application client (a browser) posts a request. The business tier executes the business logic, the database collects/stores application specific persistence data, and the UI shows the data to the user.
However, there are several problems with this type of system. All code (presentation, business layer, and data access layer) is maintained within the same code base. Although logically we divide the services like JMS Service and Data-Access Service, they are on the same code base and deployed as a single unit.
Even though you created a multi-module project, one module depends on another, and each module needs its dependent modules in its classpath. Although you use a distributed environment, everything runs under a single process context.
So, in a single process, different services are communicating with each other. To achieve this, all artifacts and their required libraries (jars) are required in each application container.
Say a JMS service wants to use the data access layer. The JMS container needs the data access layer jars and the jars on which the data access layer itself depends (second-level dependencies).
In this concept, there are lots of pain points, and the architecture is very rigid in nature.
Here are some of the problems you face with a monolith.

Problem 1

As there is one codebase, it grows gradually. Every programmer, whether a UI developer or a business layer developer, commits to the same code base, which becomes very inefficient to manage. Suppose one developer only works in the JMS module; he still has to pull the whole codebase to his local machine and configure every module in order to run it on a local server. Why? He should only have to concentrate on the JMS module, but the current scenario doesn't allow for that.

Problem 2

As there is one code base and the modules are dependent on each other, even a minimal change in one module requires regenerating all artifacts and redeploying them to every server pool in the distributed environment.
Suppose that in a multi-module project the JMS module and business module depend on the data access module. A simple change in the data access module means we need to re-package the JMS module and business module and deploy them to their server pools.

Problem 3

As monolithic software uses a three-tier architecture, three cross-functional teams are involved in developing a feature. Even though a three-tier architecture allows for separation of responsibility, in the long-run, the boundaries are crossed and the layers lose their fluidity and become rigid.
Suppose an inventory management feature has been developed. The UI, business layer, and data access layer have their own jobs. But everyone wants to take control of the main business part so that when defects come up, they can solve them and are not dependent on another layer's developer. Due to this competition, those boundaries end up being crossed, which results in inefficient architecture.

Problem 4

In many projects, I have seen that there is a developer team and another support team. The developer team only develops the project, and after it's released, they hand it over to the support team. I personally don't support this culture. Although some knowledge transfer happens during the handover, it doesn't solve the problem. For critical incidents, the support team has to get help from the developer team, which hurts their credibility.

Problem 5

As our system is monolithic, so is our team management. Often, we create teams based on the tier: UI developers, backend developers, database programmers, etc. They are experts in their domains, but they have little knowledge of the other layers. So when there's a critical problem, it spans every layer, and the blame game starts. Not only that, but it takes additional time to decide which layer the problem belongs to and who needs to solve it.
Netflix and Amazon address these problems with a solution called microservices.
Microservice architecture tells us to break a product or project into independent services, so that each service can be deployed and managed on its own and does not depend on the other services.
After seeing this definition, an obvious question comes to mind. On what basis do I break down my project into independent services?
Many people have the wrong idea about microservices. Microservices aren't telling you to break your project down based on the tier, such as JMS, UI, logging, etc.
That is absolutely not the idea. We need to break the project down by function. A complete function may consist of UI, business logic, logging, JMS, data access, a JNDI lookup service, etc.
The function should not be divisible further and should not depend on other functions.
So if the project has Inventory, Order, Billing, Shipping, and UI shopping cart modules, we can break each one out as an independently deployable module. Each has its own maintenance, monitoring, application servers, and database. So with microservices, there is no centralized database; each module has its own.
And it could be a relational or a NoSQL database; the choice is yours based on the module. This creates polyglot persistence.
The most important aspect of microservice culture is that whoever develops the service, it is that team's responsibility to manage it. This avoids the handover concept and the problems associated with it.

Microservice Benefits and Shortcomings


Benefit 1

In monolithic software, you develop the whole code base in one language, say Java. With microservices, as each service is independent and is effectively a new project, each service can be developed in whatever language best fits the requirement.

Benefit 2

Each developer concentrates on a particular service only, so the code base is very small, and the developer knows the code very well.

Benefit 3

When one service needs to talk to another service, it can do so via an API, specifically a REST service. The REST call is just the communication medium, so there is very little transformation. Unlike in SOA, a microservice message bus is much thinner than an ESB, which does lots of transformation, categorization, and routing.
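As an illustration, here is a minimal sketch (assuming a hypothetical Inventory service exposing /inventory/items over HTTP) of one service calling another over REST with plain java.net.HttpURLConnection. The point is simply that the communication is a lightweight HTTP call with no ESB-style transformation in between.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class OrderServiceClient {

    // Hypothetical endpoint of the Inventory microservice
    private static final String INVENTORY_URL = "http://inventory-service:8080/inventory/items/42";

    public static void main(String[] args) throws Exception {
        // The Order service simply issues an HTTP GET against the Inventory service's API
        URL url = new URL(INVENTORY_URL);
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("GET");
        con.setRequestProperty("Accept", "application/json");

        BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
        StringBuilder response = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            response.append(line);
        }
        in.close();

        // The JSON payload is consumed directly by the caller; there is no ESB doing
        // transformation or routing in the middle.
        System.out.println("Inventory response: " + response);
    }
}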

Benefit 4

There is no centralized database. Each module has its own, so there's data decentralization. You can use NoSQL or a relational database depending on the module, which introduces that polyglot persistence I mentioned before.
A lot of people think SOA and microservices are the same thing. By definition, they look the same, but SOA is used for communicating different systems over an ESB, where the ESB takes a lot of responsibility to manage data, do categorization, etc.
But microservices use a dumb message bus which just transfers data from one service to another, while the endpoints are smart enough to do the aforementioned tasks. In short: a dumb message bus, but smart endpoints.
As microservices communicate through REST, the transformation scope is very small — only one service is dependent on another service via API call.

But Microservices Have Shortcomings, Too

As every functional aspect is an individual service, a big project ends up with many services. Monitoring all of these services adds overhead.
Not only that, but when there's a service failure, tracking it down can be a painstaking job.
Services call one another, so tracing the request path and debugging can be difficult, too.
Each service generates a log, so there is no central log monitoring. That's painful stuff, and we need a very good log management system for it.
With microservices, each service communicates through API/remote calls, which have more overhead than with monolithic software's interprocess communication calls.
But in spite of all of these drawbacks, microservices give us a real separation of responsibilities.
























































Rise of DevOps



Understanding the Basics of DevOps

DevOps is a buzzword in the IT industry today. Many people are talking about it, and every organization is trying to adopt it.
But the questions that pop up in my mind are:
  • What is DevOps?
  • Why should I use this?
  • What are the problems with legacy software management?
  • Is DevOps a necessity or just ornamental?
To answer these questions, I am trying to look back at the typical model of software management.
In a typical Software development cycle, we do the following tasks:
  • Gather requirements from the client.
  • Do some HLD, LLD, blah, blah, blah.
  • Set up a Version Control System (VCS) to maintain the code base.
  • The developer implements code on their local machine.
  • Run unit tests on that implementation.
  • Commit the code into the Version Control System.
  • The developer raises a ticket for the infrastructure team to build and deploy the code in a QA environment.
  • The infrastructure team deploys the code into the test environment.
  • QA tests the code according to test cases and scripts.
  • Then QA raises a ticket for the infrastructure team to deploy the code in an SIT environment.
  • The infrastructure team deploys the code into the SIT environment.
  • QA does the integration testing.
  • If all the previous steps are successful, we are ready to deploy to production.
  • We set up a meeting with the client and set a fixed date for deployment.
  • The infrastructure team gets ready to deploy the product to production.
  • The infrastructure team completes the deployment and sends a status report to the team about the deployment.
  • In production, if any post-production bugs are reported, we follow the same steps again.
Wow, it’s a long procedure, isn’t it?
What I understand from those steps is:
  1. Many cross-functional teams are involved in the cycle.
  2. Developers need an environment to work seamlessly in.
  3. As per the previous project model (waterfall), a product will be delivered to production a long time after requirements are gathered.
So, in the pre-DevOps days, the pain points were:

Communication

As I said earlier, there are many cross-functional teams involved in the cycle, so handshaking between them is inevitable. This is the potential point where project progress gets blocked.
Let's take an example: suppose a developer develops a feature and runs the unit tests. After a successful unit test, he/she commits the code to the VCS. Then the same developer wants that code tested by QA according to the test scripts, so he or she has to mail them or raise a ticket in a bug-tracking system. The developer updates the status and assigns it to the testers, but the testers are currently testing other features, so the developer has to wait for confirmation. Here the blocking state starts.

Blocking State


The developer then picks up another feature from his plate. After 2-3 days, the tester tests the implementation and returns it to the developer because there is a defect. But by now the developer is busy with new requirements, so the bug sits waiting for something to happen.
This is the typical scenario we work in. If I look for the actual cause, I find that it is the process and the chaos created when too many functional teams are involved and they are highly dependent on each other.
Another scenario to consider: suppose a bug is found in production. The developer inspects it and finds that it is due to a memory space issue, so to solve the problem we need to add a server to the server pool. The developer contacts the IT operations team, who suggest raising a ticket and assigning it to Operations. Once it is approved by the Operations head, the Operations team adds the server to the pool. To solve a small problem, we have to wait 2-3 days just to communicate with other functional teams.
The development, operations, and testing teams do not take on work without a ticket, because their performance is measured by those tickets. So, process hinders smooth delivery.

Infrastructure

Infrastructure is another pain point. In a typical project, I have seen developers working in a VDI. The biggest problem is that developers work on Windows, but the production and SIT environments run on Linux, so the developer's machine is not a replica of the production server. When a developer is not confident about their code, silly things can happen.
Suppose you have a property file where you mention the path where uploaded files should be placed. The developer's system is different from production, so the developer sets this path to the local home directory, which is a Windows home, and forgets to revert it when he/she commits the files.
In the SIT environment, the tester suddenly discovers that the upload functionality fails due to the wrong home directory.
Not only that, the developer's VDI machine is changed frequently to improve performance, so on each new VDI machine he has to set up the whole project again, which takes 1-2 days just to set up the dependencies and get it running on a local server.

Production Release

Another problem is that a waterfall approach takes a long time to release a project/product to the clients. So it can happen that your project functionality is a unique idea, but because the release is delayed, a competitor thinks of the same idea and releases it before you. You end up in a losing spiral just to maintain the process.
Due to these pain points, DevOps rises and tries to rescue us from this apocalypse.
DevOps is a culture which promotes Continuous Delivery. Or, I can say, it promotes a delivery pipeline concept where everything, from a commit to a production release, goes through a pipeline with full automation and no human intervention needed.
There is a subtle difference between Continuous Integration, Continuous Deployment, and Continuous Delivery. I will discuss them in an another article.
For now, we can consider DevOps as dissolving the problem of "cross-functional team involvement."

DevOps is like a conveyor belt, and it consists of multiple tools that take care of all the steps I mentioned earlier. DevOps also takes care of orchestration and makes sure that the developer's environment is the same as production. This can be achieved by using Puppet or Chef. Docker is another container management tool which works very well in the context of DevOps.
Agile methodology is an integral part of DevOps, so we can build a minimum viable product, show it to the client, and take the next steps based on client feedback. Don't go for one big target; break it down into small, feasible targets.

Now the last question: is DevOps a necessity or ornamental?
It totally depends on your product. If your product is simple and won't change very frequently, the cost of implementing DevOps will be high. On the other hand, if your product is complex and very costly, I personally think you should go for DevOps.

The "Programming to an Interface" Design Principle in Java



Programming to an Interface    

I think it will be better to first discuss the design principles in Java on which design patterns are based.

The very first principle is "Programming to an Interface." What does it mean?

Let's try to understand the principle in detail.

In a general scenario, if you look at any problem statement or business solution, you can find two parts:

1. The fixed part
2. The variable part

The fixed part is some kind of boilerplate code, but when we design, we take care of the variable part.
All design patterns were discovered to manage these variable parts, because they are ever-changing. If your code is not flexible enough for future enhancements, then your code is not up to the mark.

The question is: in Java, how can you maintain your variable parts?
To drill down to an answer, let's magnify the statement "maintain variable parts." We know the code will perform an operation, but we don't know how that operation will be achieved. Specifically, we know the type of the operation but not the details of the implementation; moreover, client needs can change, so the operation's implementation may change in the future.

Let's take an example. A computer monitor is designed for display purposes. So I can say that if Computer is a product and the monitor is a part of that product, then the monitor is responsible for the display operation.

Now, later on, the client's needs change: they want to display through a projector.

If our solution cannot accommodate this need, it will be nothing but a waste product.

Analyzing the new need, we can see that it still performs the same action, display, but the module should change from Monitor to Projector.

So the display module in the Computer product should be flexible so that we can change it easily, or even dynamically at runtime. We can say the display module is like a strategy, and the client has now changed the strategy.

So our Java solution looks like the following.

interface displayModule
{
    public void display();
}

public class Monitor implements displayModule
{
    public void display()
    {
        System.out.println("Display through Monitor");
    }
}

public class Projector implements displayModule
{
    public void display()
    {
        System.out.println("Display through Projector");
    }
}

public class Computer
{
    displayModule dm;   // HAS-A relationship with the abstraction, not a concrete class

    public void setDisplayModule(displayModule dm)
    {
        this.dm = dm;
    }

    public void display()
    {
        dm.display();
    }

    public static void main(String args[])
    {
        Computer cm = new Computer();
        displayModule dm = new Monitor();
        displayModule dm1 = new Projector();
        cm.setDisplayModule(dm);
        cm.display();             // Display through Monitor
        cm.setDisplayModule(dm1);
        cm.display();             // Display through Projector
    }
}

Look at the solution: we know the display module should be flexible, and we know its operation is display, but according to the client it may change later. In a computer there should always be a display module, but we don't know what that equipment will be. It may be a monitor, a projector, or anything else.


So we create an interface, and every display part should implement it and provide its own definition of display.

Look at the Computer class: here I create a HAS-A relationship with displayModule. Because we know the display module changes frequently as per the client's needs, we always keep the display module abstract, so we can swap in the actual implementation at runtime.

Remember: always code to an interface so you can change your strategy at runtime with the actual implementation.
"Interface" here means a Java interface or an abstract class.
Model the variable parts as an interface or abstract class: you know the operation, which never changes, but its implementation, or the implementing module, can change.

Refactoring Legacy Code to Support a Multithreading Environment




Refactoring Java Code for a Multithreading Environment


In my professional life, I frequently encounter a problem where code gives weird results in staging or production, but when it runs on the developer's machine it works perfectly.

The clue is the weird result: try to run the code many times and watch the result pattern. If it varies, it is almost certainly due to threading. You can take a thread dump for analysis, but multiple test runs will already give you a rough idea that it is a thread problem.

Staging and production are distributed environments where your resources are shared among multiple requests, whereas on the developer's machine there is only one thread involved. So it runs fine on the developer's machine.


It is very important to design your code for a multithreaded environment.

In this context, I recall a problem I once faced: a module was designed for a standalone application, so the developers implemented the code without thinking about multithreading. They knew they would package the code into a jar and invoke it from the command line, which spawns a JVM and serves only one request.
So far so good, but later a change request came in: the module should be hosted on the web, not as a standalone app. The web means multiple requests, so a new problem appeared from nowhere. There was no way around it; the module had to be refactored to support multithreading.

There are many classes in a module, and some of them are pivotal classes. Making them multithread-capable is not an easy task, so we had to think of a procedure by which we could do it in less time while affecting the classes minimally.

I want to share that experience with you.

In that exercise, we tried to understand where and how a multithreaded environment differs from a single-threaded one.

The outcomes were:
1. In a multithreaded environment, problems occur with shareable resources that are not synchronized.
2. Filtering statement 1 further: a shareable resource causes a problem only if at least one thread wants to update/write its state; only then can we encounter dirty data.
3. If multiple threads only want to read from a resource, we don't need to alter anything, as reading does not harm the state.
4. So what makes a shareable resource?
5. If a class defines member variables and has setter methods, it is a potential candidate for causing problems (see the sketch after this list).
6. If a class has methods and all variables are method-local, it is inherently thread safe. We don't need to bother with it.
7. If a class has static variables, they are very dangerous: a static variable is bound to the class and one copy is shared by all objects, so if one thread changes its value, the change is visible to all objects.
8. In our project, every utility class was designed as a singleton, so we had to mark them as candidates for multithreading refactoring.
9. It often happens that a step depends on a previous step or on some derived value generated by business logic. Developers keep these values in a cache or a context, and they are potential candidates for multithreading problems.
10. Immutable classes are thread safe, so they don't need to be refactored.
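To make items 5 and 6 concrete, here is a minimal sketch (class and method names are hypothetical) contrasting shared mutable state with method-local state:

public class CounterExample {

    // Shared mutable member (item 5): two threads calling addUnsafe() can interleave
    // between the read and the write of 'total' and lose updates.
    private int total = 0;

    public void addUnsafe(int amount) {
        total = total + amount;   // read-modify-write, not atomic
    }

    // Method-local state (item 6): each thread gets its own 'sum' on its own stack,
    // so this method is inherently thread safe.
    public int addSafe(int a, int b) {
        int sum = a + b;
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        final CounterExample example = new CounterExample();

        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    example.addUnsafe(1);
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Usually prints less than 200000 because concurrent updates are lost.
        System.out.println("Total: " + example.total);
    }
}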


Based on these outcomes, we targeted our module and identified the classes that needed to be refactored. Believe me, this exercise will help you find the exact classes you need to refactor, and you will discover that they are the high-level classes: managers, factories, utility classes, and so on.

Next, the important question: how do we refactor them?

To refactor them, put on your thinking cap and try to pick the tool which will cause minimal change.

Tool 1:
Check whether you can refactor member variables into method variables.
First check how many methods in that class use the member variable, then search your workspace to find out where those methods are called from. If the number is small, pass the variable as a method parameter and remove the member variable.

Suppose that in the EmployeeManager class, Employee is a member variable and checkValidEmployee is a method where you use the Employee object.

Code snippet

public class EmployeeManager
{
    Employee emp;

    public void checkValidEmployee()
    {
        // some code using the emp member variable
    }
}

Refactor the code as follows.

Code

public class EmployeeManager
{
    // Employee emp;  // member variable removed

    public void checkValidEmployee(Employee emp)
    {
        // some code using the emp method parameter
    }
}

Call it as: employeeManager.checkValidEmployee(new Employee());



Tool 2:

If there are too many member variables and you can't make them local variables, put the class in a ThreadLocal.

Code snippet

EmployeeManager mgr;

public void validate()
{
    mgr.checkValidEmployee();
}


Refactored code snippet

private static final ThreadLocal<EmployeeManager> THREAD_LOCAL_CACHE = new ThreadLocal<EmployeeManager>();

public void validate()
{
    if (THREAD_LOCAL_CACHE.get() != null)
    {
        THREAD_LOCAL_CACHE.get().checkValidEmployee();
    }
    else
    {
        THREAD_LOCAL_CACHE.set(new EmployeeManager());
        THREAD_LOCAL_CACHE.get().checkValidEmployee();
    }
}


With ThreadLocal, we ensure that one EmployeeManager object exists per thread. The EmployeeManager becomes thread-specific, so no other thread can access another thread's EmployeeManager, which mitigates the risk of dirty data.


Tool 3:
Identify the classes which are singletons. Typically, context, cache, and utility classes are singletons, so make them thread safe. We can turn them into one singleton instance per thread.

We can refactor the singleton with a small trick so that the calling classes don't change.


The singleton class looks like this:

public class Utility
{
    private static Utility instance = new Utility();

    private Utility()
    {
    }

    public static Utility getInstance() {
        return instance;
    }
}


Refactor it as follows to avoid changes in the calling classes:


public class Utility
{
    private static final ThreadLocal<Utility> THREAD_LOCAL_CACHE = new ThreadLocal<Utility>();

    private Utility()
    {
    }

    public static Utility getInstance() {
        if (THREAD_LOCAL_CACHE.get() == null)
        {
            THREAD_LOCAL_CACHE.set(new Utility());
        }
        return THREAD_LOCAL_CACHE.get();
    }
}

This solves the multithreading problem: each thread gets a new Utility object if it does not already have one. Since there is one object per thread, we can even store some state in it and use it later.


Please choose the option carefully. It is your experience and your design decision which tool to use so that you alter your code minimally.



















Free Java seminar at Bagbazar on 30-Jul-2016

Free seminar on Java. Absolutely free, no registration fees. Based on a first-come, first-served basis.
HOST: Shamik Mitra(IBM Tech-Lead)
Agenda
What is Java
Why Java programming
OOP principles
Class and Object
Basic syntax and keywords
Operators
Conditional statements
Overloading and overriding basics
Collection frameworks
Data structures
Thread overview
Discussion



Prerequisites: basic knowledge of computers and a basic sense of programming.

Bringing a laptop will be beneficial for students.
 
Feel free to contact me between 10:00 AM and 11:00 PM.
mob : 9830471739
email : mitrashamik@gmail.com

Passing User Input from the Command Line in Java

There are various ways to read parameters from the console, such as BufferedReader or a raw InputStream, but the easiest is the Scanner class. It takes an InputStream (System.in) as a parameter, from which we can read the values the user enters.

Let's see an example.



package com.example.userInput;

import java.util.Scanner;

public class UserInput {

    public void add(int i,int j)
    {
        int result = i+j;
        System.out.println("sum is " + result);
    }

    public void multiply(int i,int j)
    {
        int result = i*j;
        System.out.println("
Multiplication  is " + result);
   
    }

    public static void main(String[] args) {
   
        Scanner sc= new Scanner(System.in);
        System.out.println("Enter Choice either a or m");
        System.out.println("Enter First Operend");
        int op1 = sc.nextInt();
        System.out.println("Enter Second Operend");
        int op2 = sc.nextInt();
        System.out.println("Enter Choice");
        String choice = sc.next();
   
   
        UserInput input = new UserInput();
        if("a".equalsIgnoreCase(choice))
        {
            input.add(op1, op2);
        }
        else if("m".equalsIgnoreCase(choice))
        {
            input.multiply(op1, op2);
       
        }
        else
        {
            System.out.println("Wrong choice Entered");
        }
    }
}

Spring Framework Download and Integration with Eclipse, Step by Step

In this tutorial, we will learn how to download the Spring Framework jars and integrate them with Eclipse. Then we will create a basic Spring example to check whether our setup is successful.

Here are the steps to configure Spring Core with the Eclipse IDE:

1. Install the JDK and Eclipse.
2. Create a Java project in Eclipse called SpringTest.
3. Create a lib folder under the project (SpringTest\lib).
4. Download commons-logging-1.2.jar from http://commons.apache.org/proper/commons-logging/download_logging.cgi
5. Extract it and put the jars into the lib folder created earlier.
6. Download Spring from http://repo.spring.io/release/org/springframework/spring/4.1.6.RELEASE/
7. Extract it and put all the jars into the lib folder.
8. Right-click on SpringTest -> Properties -> Java Build Path.
9. Click on Add External JARs and add all the jars under the lib folder.
10. Create a folder called configFiles under the SpringTest/src folder.
11. Create a beans.xml file in configFiles.

Add the following lines:

<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

   <bean id="helloWorld" class="com.example.HelloWorld">
       <property name="greet" value="Hello World! Welcome to Spring"/>
   </bean>

</beans>


12. Create a package com.example under SpringTest\src.

13. Create a Java file HelloWorld.java under the package com.example.

14. Write the following in HelloWorld:


package com.example;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class HelloWorld {
   
    private String greet;

    public String getGreet() {
        return greet;
    }

    public void setGreet(String greet) {
        this.greet = greet;
    }
   
   
    public static void main(String[] args) {
       
        ApplicationContext ctx = new ClassPathXmlApplicationContext("configFiles/beans.xml");
       
        HelloWorld bean =(HelloWorld) ctx.getBean("helloWorld");
        System.out.println(bean.getGreet());
       
       
    }
   
   

}
15. Run it as a Java application. The output will be:
Hello World! Welcome to Spring

Multiple Inheritance in Java

Java does not directly support multiple inheritance, but it supports the same through interfaces.

To understand why Java does not support multiple inheritance, we first need to understand the diamond problem.

The diamond problem says:

Suppose we have a parent class Color which has a method called displayColor().

Now it has two children, Yellow and Blue; invoking displayColor() on them returns yellow and blue respectively.

Let's assume Java supported multiple inheritance. If I create a child, say Green, which inherits from both Yellow and Blue, then which displayColor() does it inherit? That creates an ambiguous situation, and for this reason Java does not support multiple inheritance of classes directly. We can, however, solve the problem with interfaces. An interface is a contract: methods are only declared in an interface, with no definitions. So if Green implements yellow and blue, the Green class simply overrides displayColor() and defines it itself; there will be no problem, as the only concrete implementation is in the Green class.

Example:
public class Color
{
public void displayColor()
{
System.out.println("white");
}
}


public class Yellow extends Color
{
public void displayColor()
{
System.out.println("yellow");
}

}


public class Blue extends Color
{
public void displayColor()
{
System.out.println("blue");
}
}




interface color
{
  void displayColor();
}

interface yellow extends color
{
  void displayColor();
}

interface blue extends color
{
  void displayColor();
}


class Green implements blue, yellow
{
    public void displayColor()
    {
        System.out.println("Green");
    }
}








Can Runtime Polymorphism Be Achieved with Data Members?

No. Overriding cannot be performed on properties/data members; a field access is always resolved against the reference type's value.

Let's take an example.

public class Father
{
    public int age = 60;
}

public class Child extends Father
{
    public int age = 30;
}


Now, if we write Father f = new Child();, then f.age prints 60, because fields are resolved at compile time based on the reference type (Father), not on the runtime object.
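A small runnable sketch of the same point (the wrapper class FieldHidingDemo and the nested-class layout are just for illustration):

public class FieldHidingDemo {

    static class Father {
        public int age = 60;
    }

    static class Child extends Father {
        public int age = 30;   // hides Father.age; it does not override it
    }

    public static void main(String[] args) {
        Father f = new Child();
        // Field access is resolved at compile time from the reference type,
        // so this prints 60 even though the actual object is a Child.
        System.out.println(f.age);
    }
}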