DDD::Interchange Context and Microservices

In my previous two articles, I discussed Bounded Contexts and the Context Map. We learned how to segregate domain models using Bounded Contexts, and how to use a Context Map judiciously to understand the relationship between two contexts. Here, "relationship" is a broad term: it does not only describe the technical relationship, it also describes the relationship between the owners of the services, who is in the commanding position, who acts as the downstream, and so on. We also learned about the different strategies based on those relationships (Partnership, Upstream/Downstream, Anti-Corruption Layer, etc.).
In this article, we will deal with a special case: when something goes wrong while designing a microservice, how does that error bleed into and corrupt other services, and how can we solve that problem using an Interchange Context?

Take an example: in our online Student Registration system, the Registration module is in the commanding position. It has a partnership relationship with the Analytics module, while the Payment module and the Batch Scheduler are downstream of it. Because the Registration module is in the commanding position, everyone has to follow the language it publishes: whatever message the Registration API publishes, the Batch and Payment modules have to consume as-is. If the Registration module publishes a language R, every other module has to consume that language R, so R becomes the de facto language standard of this system. You can compare this with the real world: because the British conquered much of the world, the language they speak, English, became the de facto standard language. A British person and an Indian automatically communicate in English, not the reverse, and even an Indian and a Chinese person talk to each other in English, although no British person is involved.
Now, the problem occurs if the Registration module has design issues: the boundary is not properly defined, the message structure is not properly designed, or edge cases and exceptions are not handled properly. In that case, in spite of the physical boundary (the Bounded Context), these problems bleed into all dependent services, and the whole Context Map becomes polluted by the error in the Registration module. The Registration module now acts as a Big Ball of Mud (BBoM); Eric Evans calls this a small ball of mud. So one thing is clear: if the upstream/commanding system is in trouble, the whole microservice architecture becomes a mess. As architects, we somehow have to prevent errors from bleeding from upstream microservices/systems into downstream ones.
If we dig through the DDD patterns, we find that the Anti-Corruption Layer (ACL) does exactly this: it sits in front of the consumer microservice/system, translates the producer/upstream message structure into the consumer's message structure, and handles any errors, so the ACL protects the consumer services from external systems. But think about a system where two services are in a partnership relationship:
they produce and consume each other's messages, they depend on each other, and other services depend on them. In that case, rather than putting an ACL in front of both partners, we can use a concept called the Interchange Context.
With an Interchange Context, both partners agree upon the model and the message structure, so there is one ubiquitous language for both partners; they publish one message together, and the other systems consume that message. In this way, the two partners can independently change their internal code and internal message structures, but in the Interchange Context they maintain a common terminology/ubiquitous language understood by both of them. Every other service that depends on these two partners consumes messages from the Interchange Context.
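To make this concrete, here is a minimal sketch, in Java, of what such an agreed interchange message might look like; the class and field names are my own assumptions, not something prescribed by DDD. Both partners translate their internal models into this structure before publishing, and every dependent service consumes only this structure.

import java.time.LocalDate;

// Hypothetical interchange-context message agreed upon by the Registration and
// Analytics partners. Internal models are translated into this structure before
// publishing; every dependent service consumes only this contract.
public final class StudentRegisteredMessage {

    private final String studentId;
    private final String courseCode;
    private final LocalDate registrationDate;

    public StudentRegisteredMessage(String studentId, String courseCode, LocalDate registrationDate) {
        this.studentId = studentId;
        this.courseCode = courseCode;
        this.registrationDate = registrationDate;
    }

    public String getStudentId() { return studentId; }

    public String getCourseCode() { return courseCode; }

    public LocalDate getRegistrationDate() { return registrationDate; }
}

Because the partners own this contract together, either of them can refactor its internal Student or Course model freely, as long as the translation into this message stays intact.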


Conclusion: The Interchange Context is more robust than an Anti-Corruption Layer because it stops errors from bleeding out of upstream services. If an upstream service has direct consumers, its developers have to be very careful about the API, because any change or error in the API breaks the others. The Interchange Context is a kind of centralized facade where the partners create the ubiquitous language and every other service consumes messages from the Interchange Context. Partners and dependent services are all free to modify their own data models and message structures, but when they communicate through the Interchange Context, their internal models must be converted into the Interchange Context message structures.

DDD: Thinking in terms of Context Map

In my previous article, I discussed the Bounded Context in detail and how it tackles the complexity of a domain: the best way is to divide the domain into several subdomains and map them to different Bounded Contexts, where each business entity/value object has a specific meaning in that context. Every stakeholder of the business, the product owner, developers, architects, and sponsors, understands the context and refers to each entity by its proper name; there should be no confusion about naming when we discuss terminology within a context. This is the ubiquitous language, which creates a unified language among business stakeholders.
With Bounded Contexts we properly define the business model and create different contexts based on the business domains, but a piece of functionality almost always spans multiple business entities, and those entities live in different Bounded Contexts/domains. So it is of utmost importance to understand the relationships between the Bounded Contexts in order to architect the business solution. A Context Map is a technique by which we can visualize the relationships between different contexts, so the integration architect can choose the best integration patterns for communicating with other contexts.

Why is the Context Map so important when designing a solution?
With a UML diagram an architect can understand how different parts will communicate with each other; it gives the architect a view of the communication between different contexts. That is fine, but the Context Map steps in before the UML diagram: it helps to visualize the nature of the relationship, and based on that nature, architects can decide what kind of technical solution to adopt.
The best part of Context Map visualization is that it talks about the nature of the relationship. It does not only tell you whether a relationship is upstream or downstream, publisher or subscriber; it also tells you how different teams depend on one another, their motives, their politics, everything. Counting all of that, the architect is now in a position to decide on the optimal solution that minimizes the risk of integrating with another context.
In the era of microservices, the Context Map is a key player, because before designing the holistic microservice architecture, where every team owns a microservice, it is important to understand how one team depends on another, which teams are in a commanding position, and which teams only seek help; then you can architect the solution in the best possible way.
Think of our Student Online Tutor app as a full-grown application, say with more than fifty microservices deployed in production, so fifty teams come into play. To develop a feature, say student registration, multiple contexts are affected, so multiple teams are involved. What is their relationship? While designing this feature, which is the pivotal service whose data is needed most? Obviously that service is in the commanding position: if that team is not ready, the other teams cannot do anything, so all teams have to align with it and sync their product backlogs with it. Here the internal politics come into the picture. If the data of a service comes from an external team outside the organization, the solution is far more complex, because you cannot force them; the only way is to request the change and wait for it. Based on these different scenarios and politics, the Context Map offers different solutions. I will cover the most important ones here.
1. Shared Kernel: A Shared Kernel describes a partnership relationship where two or more teams share a common data model/value object. It reduces code duplication, as different contexts use the common model, but that common model/value object is very sensitive: any change, major or minor, has to be agreed upon by all the parties, otherwise it would break the other parties' code. So more communication and synchronization is needed among those teams. Say one team needs a change in the common model but another team is not ready; then the first team has to wait for its partner to be ready, or the other partner has to change its code unnecessarily, not because it helps them, but just to stay in sync. Different shades of problems are woven into maintaining the partnership relationship, so choose this pattern only when your common model remains constant or changes once in an era.
In our example, say we developed an Analytics module which analyzes which courses are chosen most by students, which students have chosen more than five courses, and so on. That module works with the Student and Course models, so our Analytics module can share the Registration module's Student model, and the two teams agree on any change to the Student model. A minimal sketch of such a shared kernel is shown below.
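As a rough illustration (package, class, and field names are assumed, not taken from a real codebase), the shared kernel could be a tiny module containing the commonly agreed value object that both the Registration and Analytics contexts depend on:

// Hypothetical shared-kernel value object used by both the Registration and
// Analytics contexts. Any change here must be agreed upon by both teams.
package sharedkernel;

import java.util.Objects;

public final class StudentId {

    private final String value;

    public StudentId(String value) {
        this.value = Objects.requireNonNull(value, "student id is required");
    }

    public String value() {
        return value;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof StudentId && ((StudentId) o).value.equals(value);
    }

    @Override
    public int hashCode() {
        return value.hashCode();
    }
}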

2. Customer/Supplier: Generally, this is the typical relationship between two contexts: one context consumes or depends on data from another context. The context that produces the data is marked as upstream, and the context that consumes it is called downstream. When we visualize this relationship in terms of politics, we see that the power distribution has many shades, like the following.
Upstream as the leader: In this type of relationship, the upstream team is in the commanding position. That team does not care much about the downstream team: they produce the data, and the downstream team is at the mercy of the upstream team, always adapting its model to the data structure the upstream produces.
In our Student Registration app, the relationship between the Payment module and the Notification module is of this kind: the Payment module decides what information it provides and in which structure, and the Notification module consumes that data structure.

Downstream as the leader: In some cases the relationship is reversed: although the upstream produces the data, it has to follow the rules and the data structure required by the downstream. In this scenario, the downstream is in the commanding position.
Say in our student registration system we need to submit Form 16 to the government as a taxpayer, so our Payment module has to submit Form 16 data to an API exposed by the government. But the government API has strict rules and a fixed data structure for submitting Form 16 data, so although the government API is downstream, it has total control; our Payment module has to communicate with the downstream in such a way that it fulfils the downstream's rules. A small sketch of this direction of dependence follows.
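A minimal, hypothetical sketch of that dependence: the Payment module (upstream) maps its internal data into whatever structure the downstream government API dictates. The class and field names here are invented for illustration.

// Hypothetical sketch: the Payment context (upstream) must conform to the
// data structure dictated by the downstream government Form 16 API.
public class Form16Submitter {

    // Structure dictated by the downstream API, not by our internal model.
    public static class GovernmentForm16Request {
        public String pan;
        public String assessmentYear;
        public double taxDeducted;
    }

    public GovernmentForm16Request toGovernmentFormat(String taxId, String financialYear, double taxAmount) {
        GovernmentForm16Request request = new GovernmentForm16Request();
        request.pan = taxId;
        request.assessmentYear = financialYear;
        request.taxDeducted = taxAmount;
        return request;
    }
}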

The customer/supplier relationship works best when both parties, upstream and downstream, are aligned on the work: both agree on the interfaces and on changes to the structure. In case of any change in the contract, both parties discuss it, synchronize their priority backlogs, and agree on the change. If one party does not care about the other, the contract will be broken every time, and it is tough to maintain a customer/supplier relationship.
3. Conformist: Sometimes the relationship between two parties is such that the downstream team always depends on the upstream team but cannot reach a mutual agreement with it about requirements. The upstream is not aligned with the downstream and does not care; it is free to change its published endpoint or contract at any time and does not take requests from the downstream. This happens when the upstream team is an external system or sits under a different management hierarchy, and many downstream systems are registered with it, so it cannot give priority to any one downstream. Instead, all downstream systems must align themselves with the upstream contract and data structures; if the upstream contracts or data structures change, it is the downstream's responsibility to change accordingly.
Say in our online student registration app we have a Free Tutorial module, where all students or other applications can consume our free tutorials and embed them in their own applications. Here our Free Tutorial module acts as the upstream and is independent of any third-party app that consumes our free tutorials: we can't give any priority to them and we don't have any contract with them. If we change the contracts or data structures, it is the third parties' duty to change their applications accordingly in order to keep consuming our free tutorials. The other parties act as conformists.

4. Anti-Corruption Layer: When two systems interact, if we consume the data directly from the upstream, we pollute our downstream system, because the upstream data structure leaks into the downstream; if the upstream becomes polluted, our downstream becomes polluted too, as it imitates the upstream data while consuming it. So it is a good idea, when consuming data from a third party or from a legacy application, to always use a translation layer where the upstream data is translated into the downstream data structure before being fed into the downstream. That way we resist data leakage from the upstream: if the upstream contract changes, it does not pollute the downstream's internals; only the translation layer has to change in order to adopt the new upstream data structure and convert it into the downstream data structure. This technique is called the Anti-Corruption Layer (ACL). The Anti-Corruption Layer saves the downstream system from upstream changes.
In our application, the Notification module can implement an ACL while consuming data from the Payment module, so if the Payment module's data structure changes, only the ACL is affected; a minimal sketch follows.
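Here is a minimal sketch of such a translation layer, with hypothetical class and field names: the Notification context never sees the Payment module's structure directly, only the result of the translation.

// Hypothetical anti-corruption layer inside the Notification context.
// It translates the upstream Payment message into the Notification context's
// own model, so upstream changes only affect this class.
public class PaymentMessageTranslator {

    // Upstream (Payment module) structure as it arrives on the wire.
    public static class PaymentCompletedMessage {
        public String candidateId;
        public String courseCode;
        public String paymentStatus;
    }

    // Downstream (Notification context) internal model.
    public static class PaymentNotification {
        public final String studentId;
        public final String course;
        public final boolean successful;

        public PaymentNotification(String studentId, String course, boolean successful) {
            this.studentId = studentId;
            this.course = course;
            this.successful = successful;
        }
    }

    public PaymentNotification translate(PaymentCompletedMessage upstream) {
        return new PaymentNotification(
                upstream.candidateId,
                upstream.courseCode,
                "SUCCESS".equalsIgnoreCase(upstream.paymentStatus));
    }
}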

5. Open Host: In some cases your domain API needs to be accessed by many other services, like our Free Tutorial module: many external or internal domains want to consume this service. As the upstream, it should be hosted as a service and maintain a protocol and service contract, such as REST with a JSON structure, so that other systems can consume the data.
6. Published Language: Often two or more systems receive and send messages among themselves. In that case a common language is needed for transforming the data from one system to another, such as XML or JSON; we call that structure the published language. The sketch below combines an open host service with a published-language DTO.
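As a rough sketch combining the two ideas, and assuming a Spring-style REST controller (the endpoint path and DTO fields are invented for illustration), the Free Tutorial module could be exposed as an open host service whose JSON DTO acts as the published language for every consumer:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical open host service: the Free Tutorial module publishes a stable
// REST endpoint, and TutorialDto (serialized as JSON) is the published language
// that every internal or external consumer works against.
@RestController
public class FreeTutorialController {

    public static class TutorialDto {
        public String tutorialId;
        public String title;
        public String courseCode;
    }

    @GetMapping("/tutorials/{id}")
    public TutorialDto getTutorial(@PathVariable("id") String id) {
        TutorialDto dto = new TutorialDto();
        dto.tutorialId = id;
        dto.title = "Sample tutorial";   // placeholder data for the sketch
        dto.courseCode = "COURSE-101";
        return dto;
    }
}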
A holistic view of our Student Online Registration app in terms of the Context Map:

Conclusion: The Context Map is a very important exercise for realizing how one domain communicates with another. It gives a proper view of the organization structure: how different domains are distributed, how domain owners depend on each other, what the relationships between the teams are, and whether they can stay aligned while developing a feature. Based on all these parameters, the integration architect can adopt a suitable pattern to integrate the domains. Before designing the integration solution, the architect always has to draw the Context Map to understand the relationships and the structure of the teams; based on that, the architect can choose the best possible solution.

Bounded Context in my view

In this article, I will share my view of the Bounded Context:
What does it mean?
Why is it required?
What is the connection between Bounded Contexts and microservices?
I will try to keep it simple. This article is targeted at the audience who hear the term Bounded Context while developing microservices but have a hard time understanding the concept.
Before deep diving into the Bounded Context in terms of DDD (Domain-Driven Design), let's understand what it means in the real world.
We know that humans are the most intelligent species and created different countries for governing themselves. But the question is: why did they create different countries, or logical boundaries? On what basis is a boundary drawn between two countries? Before human civilization, the Earth only had land; there was no concept of a country.
One compelling answer that comes to my mind is: to separate administration, culture, laws, and economics, each country abstracts its people from other countries. Otherwise it would be an impossible task to create one unified culture and language, as in ancient times different tribes had their own languages and cultures.
Within a country there are predefined rules, a language, and a style that every citizen follows. They understand the common language and are aware of the laws, the currency, and the style. In brief, citizens are in sync with each other within a country, and the country has one unified culture that every citizen follows and understands.
In programming, we apply a Bounded Context in the same manner, to separate different models/subdomains from each other in a domain/problem space. In the strategic design part of Domain-Driven Design, we introduce the Bounded Context so that within a certain context a model has a certain meaning, just like in a particular country a certain language or currency has a specific meaning, while in a different country that currency or language is not understood, because it has no meaning, or a different meaning, there. Like the word "fool": in English it means a stupid person, but in Bengali it means a flower!
So if we consider a country as a Bounded Context, and its language and currency as models inside that context, we can easily map the concept of a domain model onto a certain Bounded Context. A model has some meaning within one Bounded Context, but the same model has no meaning, or a different meaning, in another Bounded Context. With a Bounded Context we create a logical boundary where the model and business terms have a certain meaning, and the Bounded Context separates/hides its models from the outer world; all communication should be done via an API. It follows that under a Bounded Context the model and business logic obey certain rules and maintain their own persistent storage, which is not directly accessible to other Bounded Contexts.

Bounded Context communication: Any design has two common parts: abstracting the data model, and communicating with the other parts of the system. With Bounded Contexts we separate the data model, in simple terms abstracting the commonalities in the business, but how does one Bounded Context communicate with the others?
Here the concept of the Context Map steps in. Using a Context Map we can discover how one context depends on other Bounded Contexts: do two contexts have strong dependencies, does one domain simply conform to another (Conformist), or do they use a shared kernel/shared model? I will talk about the Context Map later in a separate article, but for now you can think of the Context Map as the means of communicating clearly between Bounded Contexts.
At this point I believe you have a fair idea of what a Bounded Context is, but if you still wonder how it fits into an architecture, go to the next paragraph.
Let's say we have to design an online student management system where a student can register on the site, choose courses accordingly, and pay the course fee; then the student is tagged to a batch, and the teacher and student are notified about the batch start date and time slot.
As an architect, you have to identify the Bounded Contexts of the different domains related to this business logic.
If we divide the business logic based on related functionality, we find four basic functions:
1. Registration Process: Takes care of the registration of a student.
2. Payment System: Processes the course fee and publishes the online payment status.
3. Batch Scheduling: Upon confirmation of payment, checks teacher availability and batch availability, and based on that creates a batch and assigns the candidates, or updates an existing batch with the candidate.
4. Notification System: Notifies the teacher and student about the timings and slot information.

So there are four Bounded Contexts: Registration, Payment, Scheduling, and Notification.
Now let's dig into how each module represents the Teacher, Student, and Course models; a short sketch of two of these context-specific models follows the list.
Registration Process: It only wants the student's information, and it needs demographic data such as name, age, sex, address, and which course the student chooses.
The term "Teacher" has no meaning in this context.
Payment System: It treats a student as a candidate. In the Payment System, only the candidate's financial information is required, such as the account number, and based on the course fee it deducts the amount from the given account. So here the perspective on a student is totally different: the information needed in the Payment service is quite different from Registration, although the Payment System may still need a few pieces of information such as the student's name and address, which are very minimal.
The term "Teacher" is not valid here either: in the Payment System a teacher is treated as Faculty, and faculty can be permanent or a contractor; based on the faculty type, the Payment System chooses the payment calculation, either per hour or per annum.
Batch Scheduling: The Batch Scheduling System needs only bare minimum information about teachers and students, such as name and address, but it needs detailed information about courses and the batches under them.
Notification System: The Notification System just needs the teacher's and student's email address or phone number and name, nothing else, plus the name of the student management system and the course details. Here the student management site acts as the sender, and the teacher and student are the receivers.
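To make the difference tangible, here is a minimal sketch, with assumed fields, of how the same real-world person is modeled in two of these contexts; each class would live inside its own bounded context and carry only the data that context cares about.

// Hypothetical models of the same real-world person in two different bounded contexts.

// Registration context: demographic details and the chosen course matter.
class Student {
    String name;
    int age;
    String address;
    String chosenCourse;
}

// Payment context: the same person appears as a Candidate, and only the
// financial details needed to collect the course fee are modeled.
class Candidate {
    String name;
    String accountNumber;
    double courseFee;
}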



So far we have seen that the same domain objects, Teacher, Student, and Course, have different meanings and use cases in different contexts. This is the beauty of the Bounded Context: we have multiple canonical models for the same domain object, based on different contexts, so developers, business people, and users are always on the same page when they talk about a context. The concept of the ubiquitous language is woven in here: using the ubiquitous language, DDD creates a unified system where every participant understands the language based on the context.
Now, the common question is: why is the term Bounded Context so popular in microservices?
To answer this question, we first have to understand that DDD is applicable to a monolith as well as to microservices. But in a monolith it is vaguer and more of a logical segregation, so only good developers can see it. The main reason is that in a monolith we have a single giant codebase; we may break it into multiple modules based on DDD and create an ACL/translator when one module talks to another, but each module still needs the others as dependent jars in order to invoke their methods. Another point is that, since this is a single codebase with multiple coders working on it, a less skilful coder can pollute the boundary or the domain objects, and the architect can't create a physical boundary based on the Bounded Context. In a microservice architecture, on the other hand, it is inherent: microservices say that rather than one large codebase we create small services, each with its own codebase, and the services talk to each other through APIs or messaging. So it clearly says: understand the business domain, break the business logic into multiple Bounded Contexts, and make each Bounded Context a separate codebase; they communicate as described by the Context Map. To design the Context Map you have to design the APIs carefully, and you may use a ports-and-adapters (hexagonal) architecture so the code under a Bounded Context does not communicate directly with the outer world and is never polluted. Microservices offer this kind of strong segregation of Bounded Contexts; the Bounded Context is far more visible and understandable in the context of microservices.
Conclusion: The Bounded Context is the basic need when you are trying to break up a large piece of business logic; it helps you understand how different parts of the system use the same domain objects in different manners, with different terminology. The Bounded Context is a view that properly organizes the business logic based on functionality, but to make the business logic work, communication between Bounded Contexts is needed, and that uses the Context Map. In my next article, I will discuss the Context Map.

File Upload Using Angular4/Microservice

Uploading a file is a regular feature of web programming; every business needs this facility. We know how to upload a file using JSP/HTML as the front end and Servlet/Struts/Spring MVC at the server end.


But how do we achieve the same thing with an Angular 4/microservice combination?
In this tutorial I will show you, step by step. Before that, let me clarify one thing: I am assuming you have a basic understanding of Angular 4 and microservices.
Now let's jump directly to the problem statement:
I want to create an upload feature that invokes a FileUpload microservice and stores the profile picture of an employee.
Let's create an Angular 4 project using the Angular.io plugin in Eclipse.
After creating the application, modify the app.component.ts file under the app module:
import { UploadFileService } from './fileUpload.service';
import { Component } from '@angular/core';
import { HttpClient, HttpResponse, HttpEventType } from '@angular/common/http';


@Component({
  selector: 'app-root',
  templateUrl: './view.component.html',
  styleUrls: ['./app.component.css'],
  providers: [UploadFileService]
})

export class AppComponent {

  selectedFiles: FileList;
  currentFileUpload: File;

  constructor(private uploadService: UploadFileService) {}

  selectFile(event) {
    this.selectedFiles = event.target.files;
  }

  upload() {
    this.currentFileUpload = this.selectedFiles.item(0);
    this.uploadService.pushFileToStorage(this.currentFileUpload).subscribe(event => {
      if (event instanceof HttpResponse) {
        console.log('File is completely uploaded!');
      }
    });

    this.selectedFiles = undefined;
  }

}
In the @Component decorator, I changed the template URL to view.component.html, which actually holds the file-upload form components. After that, I added UploadFileService as a provider; it actually posts the selected files to the microservice.
Then I define a method called selectFile, which captures an event (the onchange event of the file-upload form field) and extracts the files from the event target, in this case the file input field.
Then I add another method called upload, which calls the file upload service and subscribes to the Observable it returns; when an HttpResponse event arrives, the upload has completed.

Here is the view.component.html file:
<div style="text-align:center">
<label>
<input type="file" (change)="selectFile($event)">
</label>
<button [disabled]="!selectedFiles"
(click)="upload()">Upload</button>
</div>
Here I just add a file upload field; when we select a file, an onchange event is fired, which calls the selectFile method and passes the event to it.
The Upload button then calls the upload method.
Let's see the file upload service:
import {Injectable} from '@angular/core';
import {HttpClient, HttpRequest, HttpEvent} from '@angular/common/http';
import {Observable} from 'rxjs/Observable';

@Injectable()
export class UploadFileService {

  constructor(private http: HttpClient) {}

  pushFileToStorage(file: File): Observable<HttpEvent<{}>> {
    const formdata: FormData = new FormData();
    formdata.append('file', file);

    const req = new HttpRequest('POST', 'http://localhost:8085/profile/uploadpicture', formdata, {
      reportProgress: true,
      responseType: 'text'
    }

    );

    return this.http.request(req);
  }

}
Here I create a FormData object and append the uploaded file to it; then, using Angular's HttpClient, I POST the form data to a microservice running on port 8085 which publishes a REST endpoint called /profile/uploadpicture.
Hooray, we have successfully written the UI part of the file upload using Angular 4.
If you start the Angular app (ng serve), it will look like the following:

Let's build the microservice part. Create a project called EmployeeFileUploadService in STS or using start.spring.io, and select the Eureka client module to register this microservice with Eureka.
After that, rename application.properties to bootstrap.properties and add the following entries:
spring.application.name=EmployeePhotoStorageService
eureka.client.serviceUrl.defaultZone=http://localhost:9091/eureka
server.port=8085
security.basic.enabled=false
management.security.enabled=false
My Eureka server is running on port 9091. I give this microservice the logical name EmployeePhotoStorageService, and it runs on port 8085.
Now I am going to create a REST controller which accepts the request from Angular and binds the multipart form.
Let's see the code snippet of the FileController:
package com.example.EmployeePhotoStorageService.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class FileController {

    @Autowired
    FileService fileservice;

    @CrossOrigin(origins = "http://localhost:4200") // call from the local Angular app
    @PostMapping("/profile/uploadpicture")
    public ResponseEntity<String> handleFileUpload(@RequestParam("file") MultipartFile file) {
        String message = "";
        try {
            fileservice.store(file);
            message = "You successfully uploaded " + file.getOriginalFilename() + "!";
            return ResponseEntity.status(HttpStatus.OK).body(message);
        } catch (Exception e) {
            message = "Failed to upload profile picture " + file.getOriginalFilename() + "!";
            return ResponseEntity.status(HttpStatus.EXPECTATION_FAILED).body(message);
        }
    }

}

A few things to notice here: I use the @CrossOrigin annotation to instruct Spring to allow requests coming from localhost:4200. In production, the microservice and the Angular app are hosted on different domains, so to allow requests from other domains we must provide the cross-origin configuration. I also autowire a FileService, which actually writes the file content to disk.
Let's see the FileService code:
package com.example.EmployeePhotoStorageService.controller;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

@Service
public class FileService {

    private final Path rootLocation = Paths.get("ProfilePictureStore");

    public void store(MultipartFile file) {
        try {
            System.out.println(file.getOriginalFilename());
            System.out.println(rootLocation.toUri());
            Files.copy(file.getInputStream(), this.rootLocation.resolve(file.getOriginalFilename()));
        } catch (Exception e) {
            throw new RuntimeException("FAIL!");
        }
    }

}

Here I create a directory called ProfilePictureStore under the project, at the same level as the src folder. Then I copy the file's input stream to that location using java.nio's Files.copy() static method.
Now, to run this microservice, I have to write the Spring Boot application class. Let's see the code:
package com.example.EmployeePhotoStorageService;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@EnableDiscoveryClient
@SpringBootApplication
public class EmployeePhotoStorageService {

    public static void main(String[] args) {
        SpringApplication.run(EmployeePhotoStorageService.class, args);
    }

}

OK, we are all set; the only piece missing from this tutorial is the pom.xml file.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>com.example</groupId>
<artifactId>EmployeeFileUploadService</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>

<name>EmployeeFileUploadService</name>
<description>Demo project for Spring Boot</description>

<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.5.4.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
<spring-cloud.version>Dalston.SR1</spring-cloud.version>
</properties>

<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-eureka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jersey</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>

<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-feign</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-ribbon</artifactId>
</dependency>
 <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-hystrix</artifactId>
   </dependency>
</dependencies>


<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>

<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>

Yeah, that's it. If we run the microservice and upload a file from Angular, we can see that the file is stored in the ProfilePictureStore folder. Very easy, isn't it?
Conclusion: This is a very simple example, or I can say a prototype, of file upload, without any validation and without passing any information from the UI apart from the raw file (comments, file name, tags, etc.). You can enrich this basic example using the FormData object in Angular. Another observation: I call the microservice instance directly from Angular. That is not the case in production, where you would introduce a Zuul API gateway which accepts the request from Angular, does some security checking, then consults Eureka and routes the request to the actual microservice. For simplicity I skipped that part.

Change Method Call On the Fly:: CallSite

In my previous article, I talked about invokedynamic. In this article, I will show you the code, so you can see how to leverage the power of invokedynamic.
We all know that to load a class dynamically and call a method at runtime we use Java reflection; framework developers often use reflection to load classes and call methods at runtime. But reflection has a performance cost, because it performs security checks on every call. Method handles, the machinery behind invokedynamic, avoid that repeated checking, and we can use them to select and call methods at runtime.
To call a method at runtime we need a MethodHandle. Now, the interesting thing is that we can also change the underlying method call based on some parameter. That is a very powerful thing to do: the same call can invoke different implementations at runtime based on a parameter! A quick comparison of plain reflection and a method handle is sketched below.
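As a quick, hedged comparison using a hypothetical target method named greet (not part of the example that follows), a reflective call and a method handle call look like this; the method handle performs its access check once, at lookup time, rather than on every invocation.

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Method;

public class ReflectionVsMethodHandle {

    public String greet(String name) {
        return "Hello " + name;
    }

    public static void main(String[] args) throws Throwable {
        ReflectionVsMethodHandle target = new ReflectionVsMethodHandle();

        // Reflection: access is checked on every invoke call.
        Method reflective = ReflectionVsMethodHandle.class.getMethod("greet", String.class);
        System.out.println(reflective.invoke(target, "Reflection"));

        // Method handle: access is checked once, when the handle is looked up.
        MethodHandle handle = MethodHandles.lookup().findVirtual(
                ReflectionVsMethodHandle.class, "greet",
                MethodType.methodType(String.class, String.class));
        System.out.println((String) handle.invoke(target, "MethodHandle"));
    }
}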

Let me explain with a simple example. Say there are two methods, bark and mewaoo; based on the animal passed in, I want to call the corresponding method dynamically.
I will show you the code first and then explain it:
package com.example.methodhandle;

import java.lang.invoke.CallSite;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.MutableCallSite;

public class MethodHandleExample {

    MutableCallSite callSite = null;

    public void bark() {
        System.out.println("I am a Dog :: So barking");
    }

    public void mewaoo() {
        System.out.println("I am a Cat :: So Mewaooing");
    }

    public MethodHandle findMethodHandle(String command) throws NoSuchMethodException, IllegalAccessException {
        MethodHandle mh = null;
        if ("DOG".equalsIgnoreCase(command)) {
            mh = createMethodHandle(MethodType.methodType(void.class), "bark");
        } else if ("CAT".equalsIgnoreCase(command)) {
            mh = createMethodHandle(MethodType.methodType(void.class), "mewaoo");
        } else {
            throw new RuntimeException("Give a Proper Command");
        }
        return mh;
    }

    public CallSite createCallSite(String command) throws NoSuchMethodException, IllegalAccessException {
        if (callSite == null) {
            callSite = new MutableCallSite(findMethodHandle(command));
        } else {
            callSite.setTarget(findMethodHandle(command));
        }
        return callSite;
    }

    public MethodHandle createMethodHandle(MethodType methodType, String methodName)
            throws NoSuchMethodException, IllegalAccessException {
        return MethodHandles.lookup().findVirtual(this.getClass(), methodName, methodType);
    }

    public static void main(String[] args) {
        MethodHandleExample example = new MethodHandleExample();
        try {
            example.createCallSite("DOG").dynamicInvoker().invoke(example);
            example.createCallSite("CAT").dynamicInvoker().invoke(example);
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
}
If I run this code, the output will be:
I am a Dog :: So barking
I am a Cat :: So Mewaooing
Explanation: To call the bark and mewaoo methods dynamically based on the passed parameter, we first need to create a MethodHandle for each method.
A MethodHandle is an object that represents an invocable reference to a method and carries the method's metadata, such as its name and signature.
So here I create a generic method called createMethodHandle, which returns a method handle.
To create a method handle we need two major things: the name of the method and a MethodType object. The MethodType object describes the signature of the method (its return type and parameter types), and the name parameter denotes the name of the method.
I also create a method called findMethodHandle, which passes the required parameters to createMethodHandle based on the given command. If the command is DOG, it passes the method name bark and a method type of void, because bark takes nothing and returns nothing (the signature of the bark method).
Now, to change which method is invoked, we need a CallSite, which actually flips the call. For the two methods we have two different method handles; the call site is bound to a method handle and actually calls that underlying method handle. A MutableCallSite can change its underlying method handle (its target), so the call site switches the MethodHandle based on the command given.
Here I create a method called createCallSite, which creates a single CallSite instance and, based on the command, changes the call site's target, aka the MethodHandle.

Conclusion: This is a simple example, but using CallSite and MethodHandle you can create a factory of strategies and choose the optimal one for the situation. From the caller's perspective it is just a method call, but as a framework developer you can decide what to call and when, much like lambdas in Java 8 are wired up through the LambdaMetafactory.