DDD: Interchange Context and Microservices

In my previous two articles, I wrote at length about Bounded Contexts and Context Maps. We learned how to segregate domain models using Bounded Contexts, and how to use a Context Map judiciously to understand the relationship between two contexts. Here "relationship" is a broad term: it does not only represent the technical relationship, it also describes the relationship between the owners of the services, who is in the commanding position, who acts as downstream, and so on. We also learned the different strategies based on those relationships (Partnership, Upstream/Downstream, Anti-corruption Layer, etc.).
In this article, we will deal with a special case: what happens when something goes wrong while designing a microservice, how that error bleeds into and corrupts other services, and how to solve the problem using an Interchange Context.

Take an example: in our online student registration system, the Registration module is in the commanding position. It has a Partnership relationship with the Analytics module, while the Payment module and the Batch Scheduler both have downstream relationships with it. Because the Registration module is in the commanding position, everyone has to follow the language it publishes: whatever message the Registration module's API publishes, the Batch and Payment modules have to receive as-is. If the Registration module publishes a language R, every other module has to consume that language R, so in this system R is the de facto language standard. You can compare this situation with the real world: the British conquered much of the world in earlier times, so the language they speak, English, became the de facto language standard for the world. A British person and an Indian automatically communicate in English, not the reverse, and even an Indian and a Chinese person talk to each other in English (where no British people are involved).
Now, a problem occurs if the Registration module has design issues: say its boundary is not properly defined, its message structure is not properly designed, or edge cases and exceptions are not handled properly. In that case, in spite of a physical boundary (the Bounded Context), these problems bleed into all dependent services; the whole Context Map is polluted by the error in the Registration module. The Registration module is now acting as a Big Ball of Mud (BBoM); Eric Evans calls this a Small Ball of Mud. So one thing is clear: if the upstream/commanding system is in trouble, the whole microservice architecture becomes a mess. As architects, we somehow have to prevent errors from bleeding from upstream microservices/systems into downstream ones.
If we dig through the DDD patterns, we find that the ACL (Anti-corruption Layer) does exactly this: it sits in front of a consumer microservice/system, translates the producer/upstream message structure into the consumer's message structure, and handles any kind of error, so the ACL protects the consumer service from external systems. But think about a system where two services are in a Partnership relationship: they produce and consume each other's messages, both depend on each other, and other services depend on them. In that case, rather than putting an ACL in front of each partner, we can use a new concept called the Interchange Context.
An Interchange Context is a technique where both partners agree upon the model and message structure, so there is one ubiquitous language for both partners: they publish one message together, and other systems consume that message. In this way the two partners can independently change their internal code and internal message structures, but in the Interchange Context they maintain a common terminology/ubiquitous language understood by both. Every other service that depends on these two partners consumes messages from the Interchange Context.
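To make this concrete, here is a minimal Java sketch of what a message type owned by the Interchange Context might look like. The names (StudentRegisteredMessage and its fields) are hypothetical illustrations, not part of any real codebase; the point is that the type belongs to the Interchange Context, and each partner translates its internal model into it before publishing.

// A message type owned by the Interchange Context, not by either partner.
// Both partners translate their internal models into this agreed structure
// before publishing; consumers only ever see this shape.
public final class StudentRegisteredMessage {

    private final String studentId;
    private final String courseCode;
    private final String registeredOn; // ISO-8601 date string agreed upon by both partners

    public StudentRegisteredMessage(String studentId, String courseCode, String registeredOn) {
        this.studentId = studentId;
        this.courseCode = courseCode;
        this.registeredOn = registeredOn;
    }

    public String getStudentId() { return studentId; }

    public String getCourseCode() { return courseCode; }

    public String getRegisteredOn() { return registeredOn; }
}

Each partner keeps a small translator that converts its internal registration model into this message, so internal refactoring never leaks past the Interchange Context.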


Conclusion: An Interchange Context is more robust than an Anti-corruption Layer; it stops errors from bleeding out of upstream services. If an upstream service has direct consumers, its developers have to be very careful about the API, because any change or error in the API breaks the others. The Interchange Context is a kind of centralized facade where the partners create the ubiquitous language and everyone else consumes messages from the Interchange Context. Partners and dependent services alike are free to modify their own data models and message structures, but when they communicate with the Interchange Context, their internal models must be converted into the Interchange Context's message structures.

DDD: Thinking in terms of Context Map

In my previous article, I discussed the Bounded Context in detail, and we learned how to tackle the complexity of a domain: the best way is to divide the domain into several subdomains and map them to different Bounded Contexts, where each business entity/value object has a precise meaning in its context. Every stakeholder of the business (product owner, developers, architects, sponsors) then understands the context and refers to each entity by its proper name, and there is no confusion about naming when we discuss terminology within a context. A ubiquitous language creates a unified language among business stakeholders.
With Bounded Contexts we properly define the business model and create different contexts based on the business domains, but a piece of functionality almost always spans multiple business entities, and those entities live in different Bounded Contexts/domains. So, to architect the business solution, it is of the utmost importance to understand the relationships between the Bounded Contexts. A Context Map is a technique by which we can visualize the relationships between different contexts, so an integration architect can choose the best integration patterns for communicating with other contexts.

Why is a Context Map so important when designing a solution?
From a UML diagram, an architect can understand how different parts will communicate with each other; it gives the architect a view of the communication between different contexts. That is fine, but the Context Map steps in before the UML diagram: it helps to visualize the nature of the relationships, and based on that nature, architects can decide what kind of technical solution to adopt.
The best part of Context Map visualization is that it talks about the nature of the relationship. It not only tells whether a relationship is upstream or downstream, publisher or subscriber; it also tells how different teams depend on one another, their motives, their politics. Taking all of that into account, the architect is in a position to choose the optimal solution and minimize the risk of integrating with another context.
In the era of microservices, the Context Map is a key player, because before designing a holistic microservice architecture, where every team owns a microservice, it is important to understand how one team depends on another: which teams are in a commanding position and which teams only seek help. Only then can you architect the solution in the best possible way.
Think of our student online tutoring app as a fully grown application with, say, more than fifty microservices deployed in production. Fifty teams come into play, and to develop a feature like student registration, multiple contexts are affected, so multiple teams are involved. What is their relationship? While designing this feature, which is the pivot service whose data is needed most? Obviously that service is in the commanding position: if its team is not ready, the other teams cannot do anything, so all teams have to align with that team and sync their product backlogs with it. This is where internal politics enters the picture. If a service's data comes from an external team outside the organization, the solution is far more complex, because you cannot force them; the only way is to request changes and wait for them. Based on these different scenarios and politics, the Context Map offers different solutions. I will cover the most important ones here.
 1. Shared Kernel: A Shared Kernel describes a partnership relationship where two or more teams share a common data model/value object. It reduces code duplication, since different contexts use the common model, but that common model/value object is very sensitive: any change, major or minor, must be agreed upon by all parties, otherwise it breaks the other parties' code. This demands more communication and synchronization among the teams. Say one team needs a change in the common model but another team is not ready; the first team has to wait for its partner to be ready, or the other partner has to change its code unnecessarily just to stay in sync. Different shades of problems arise while maintaining the partnership, so choose this pattern only when the common model stays constant, or changes once in an era.
In our example, say we develop an analytics module that analyzes which courses are chosen most by students, which students have chosen more than five courses, and so on. That module works with the Student and Course models, so I can say our Analytics module shares the Registration module's Student model, and both teams have agreed upon any changes to it.
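As a rough sketch of what such a shared model could look like in Java (the package and class names here are mine, for illustration only): a small, stable value object that both the Registration and Analytics contexts compile against.

package sharedkernel; // hypothetical package shared by Registration and Analytics

// Shared-kernel value object: both contexts depend on this single class,
// so any change to it must be agreed upon by both teams.
public final class StudentId {

    private final String value;

    public StudentId(String value) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException("StudentId must not be empty");
        }
        this.value = value;
    }

    public String getValue() { return value; }

    @Override
    public boolean equals(Object other) {
        return other instanceof StudentId && ((StudentId) other).value.equals(value);
    }

    @Override
    public int hashCode() { return value.hashCode(); }
}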

 2. Customer/Supplier: Generally, this is the most common relationship between two contexts: one context consumes or depends on data from another. The context that produces data is marked as upstream, and the context that consumes it is called downstream. When we visualize this relationship in terms of politics, the power distribution has many shades, like the following.
Upstream as the leader: In this type of relationship, the upstream team is in the commanding position. That team does not care about the downstream team: it produces the data, and the downstream team is at its mercy, always changing its model to follow the data structure the upstream produces.
In our student registration app, the relationship between the Payment module and the Notification module is of this kind: the Payment module decides what information it provides and in which structure, and the Notification module consumes that data structure.

Downstream as the leader: In some cases the relationship is reversed: although the upstream produces the data, it has to follow the rules and the data structure demanded by the downstream. In this scenario the downstream is in the commanding position.
Say, in our student registration system, we need to submit Form 16 to the government as a taxpayer, so our Payment module has to submit Form 16 data to a government-exposed API. The government API has strict rules and a fixed data structure for submitting Form 16 data, so although the government API is downstream, it has total control: our Payment module has to communicate with the downstream in such a way that it fulfills the downstream's rules.

The Customer/Supplier relationship works best when both parties, upstream and downstream, are aligned on the work, and both agree upon the interfaces and any changes to the structure. When the contract changes, both parties discuss it, synchronize their priority backlogs, and agree upon the changes. If one party does not care about the other, the contract will be broken every time, and it is tough to maintain a Customer/Supplier relationship.
 3. Conformist: Sometimes the relationship between two parties is such that the downstream team always depends on the upstream team and cannot reach a mutual agreement with it about requirements. The upstream is not aligned with the downstream and does not care: it is free to change its published endpoints or contracts at any time and does not take requests from the downstream. This happens when the upstream team belongs to an external system or a different management hierarchy, and many downstream systems are registered with it, so it cannot give priority to any one downstream. Instead, all downstream systems must align themselves with the upstream's contracts and data structures; if those change, it is the downstream's responsibility to change accordingly.
Say, in our online student registration app, we have a free tutorial module through which all students or other applications can consume our free tutorials and embed them in their own applications. Here our free tutorial module acts as the upstream and is independent of any third-party app that consumes the tutorials: we can't give priority to any of them, and we don't have any contract with them. If we change the contracts or data structures, it is the third parties' duty to change their applications accordingly to keep consuming our free tutorials. The other parties are acting as conformists.

 4. Anti-corruption Layer: When two systems interact, if we consume data directly from the upstream, we pollute our downstream system, because the upstream's data structure leaks into the downstream; if the upstream becomes polluted, so does our downstream, since it imitates the upstream data while consuming it. So, while consuming data from a third party or a legacy application, it is a good idea to always use a translation layer, where the upstream data is translated into the downstream data structure before being fed into the downstream. That way we resist data leakage from the upstream: if the upstream contract changes, it does not pollute the downstream's internal system; only the translation layer has to change in order to adopt the new upstream data structure and convert it into the downstream structure. This technique is called the Anti-corruption Layer (ACL). The ACL saves the downstream system from upstream changes.
In our application, the Notification module can implement an ACL while consuming data from the Payment module, so that if the Payment module's data structure changes, only the ACL is affected. A minimal sketch of such a translator follows.
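In this Java sketch, the upstream and internal message shapes are invented for illustration; the essential property is that only this one class knows the Payment module's structure.

// Anti-corruption layer inside the Notification module (names are illustrative).
// Only this class knows the upstream Payment module's message shape; if that
// shape changes, the translation below is the only code that needs to change.
public class PaymentMessageTranslator {

    // Upstream structure as published by the Payment module.
    public static class UpstreamPaymentMessage {
        public String txnId;
        public String payerEmail;
        public long amountInPaise;
    }

    // The Notification module's own internal model.
    public static class PaymentNotification {
        public final String transactionId;
        public final String recipient;
        public final double amountInRupees;

        public PaymentNotification(String transactionId, String recipient, double amountInRupees) {
            this.transactionId = transactionId;
            this.recipient = recipient;
            this.amountInRupees = amountInRupees;
        }
    }

    // Translate the upstream shape into the internal shape at the boundary.
    public PaymentNotification translate(UpstreamPaymentMessage upstream) {
        return new PaymentNotification(
                upstream.txnId,
                upstream.payerEmail,
                upstream.amountInPaise / 100.0);
    }
}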

 5. Open Host: In some cases, your domain API needs to be accessed by many other services, like our free tutorial module: many external and internal domains want to consume it. In that case, the upstream should be hosted as a service that maintains a protocol and a service contract, such as REST with a JSON structure, so that other systems can consume the data.
 6. Published Language: Often two or more systems receive and send messages among themselves. In that case, a common language is needed for transforming data from one system to another, such as XML or JSON; we call that structure the published language.
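For example, if JSON is the published language, each context serializes its outgoing messages into the agreed structure. Here is a small Java sketch, assuming the Jackson library is on the classpath; the CourseMessage shape is illustrative.

import com.fasterxml.jackson.databind.ObjectMapper;

public class PublishedLanguageDemo {

    // The agreed wire shape: any context may produce or consume it,
    // as long as this structure is honored.
    public static class CourseMessage {
        public String courseId;
        public String title;
    }

    public static void main(String[] args) throws Exception {
        CourseMessage message = new CourseMessage();
        message.courseId = "C101";
        message.title = "Algebra Basics";

        // Serialize the internal object into the published JSON structure.
        String json = new ObjectMapper().writeValueAsString(message);
        System.out.println(json); // {"courseId":"C101","title":"Algebra Basics"}
    }
}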
The holistic view of our student online registration app in terms of a Context Map:

Conclusion: The Context Map is a very important exercise for realizing how one domain communicates with another. It gives a proper view of the organization's structure: how the different domains are distributed, how the domain owners depend on each other, what the relationships between the team structures are, and whether teams can stay aligned while developing a feature. Based on all these parameters, an integration architect can adopt a suitable integration pattern to integrate the domains. Before designing the integration solution, the architect always has to draw the Context Map to understand the relationships and the structure of the teams; based on that, the architect can choose the best possible solution.

File Upload Using Angular 4/Microservice

Uploading a file is a regular feature of web programming; every business needs this facility. We know how to upload a file using JSP/HTML as the front end and Servlet/Struts/Spring MVC at the server end.


But how do we achieve the same with an Angular 4/microservice combination?
In this tutorial, I will show you exactly that, step by step. Before that, let me clarify one thing: I am assuming you have a basic understanding of Angular 4 and microservices.
Now let's jump directly to the problem statement:
I want to create an upload feature that invokes a FileUpload microservice and stores the profile picture of an employee.
Let's create an Angular 4 project using the Angular.io plugin in Eclipse.
After creating the application, modify the app.component.ts file under the app module.
import { UploadFileService } from './fileUpload.service';
import { Component } from '@angular/core';
import { HttpClient, HttpResponse, HttpEventType } from '@angular/common/http';


@Component({
  selector: 'app-root',
  templateUrl: './view.component.html',
  styleUrls: ['./app.component.css'],
  providers: [UploadFileService]
})

export class AppComponent {
  selectedFiles: FileList;
  currentFileUpload: File;

  constructor(private uploadService: UploadFileService) {}

  selectFile(event) {
    this.selectedFiles = event.target.files;
  }

  upload() {
    this.currentFileUpload = this.selectedFiles.item(0);
    this.uploadService.pushFileToStorage(this.currentFileUpload).subscribe(event => {
      if (event instanceof HttpResponse) {
        console.log('File is completely uploaded!');
      }
    });

    this.selectedFiles = undefined;
  }
}
In the @Component decorator, I changed the template URL to view.component.html, which actually holds the file-upload form components. After that, I added UploadFileService as a provider; it actually posts the selected files to the microservice.
Next, I defined a method called selectFile, which captures an event (the onchange event of the file-upload form field) and extracts the file from the event target, in this case the file form field.
Then I added another method called upload, which calls the file-upload service and subscribes to the returned Observable, logging a message when the final HttpResponse event arrives.

Here is the view.component.html file:
<div style="text-align:center">
  <label>
    <input type="file" (change)="selectFile($event)">
  </label>
  <button [disabled]="!selectedFiles" (click)="upload()">Upload</button>
</div>
Here I just added a file-upload field; when we select a file, an onchange event fires, which calls the selectFile method and passes the event to it.
The Upload button then calls the upload method.
Let's see the file-upload service.
import {Injectable} from '@angular/core';
import {HttpClient, HttpRequest, HttpEvent} from '@angular/common/http';
import {Observable} from 'rxjs/Observable';

@Injectable()
export class UploadFileService {

  constructor(private http: HttpClient) {}

  pushFileToStorage(file: File): Observable<HttpEvent<{}>> {
    const formdata: FormData = new FormData();
    formdata.append('file', file);

    const req = new HttpRequest('POST', 'http://localhost:8085/profile/uploadpicture', formdata, {
      reportProgress: true,
      responseType: 'text'
    });

    return this.http.request(req);
  }

}
Here I create a FormData object and append the uploaded file to it; then, using Angular's HttpClient, I post the form data to a microservice running on port 8085 that publishes a REST endpoint called /profile/uploadpicture.
Hooray, we have successfully written the UI part of the file upload using Angular 4.
If you start the Angular app (ng serve), it will look like the following.

Let's build the microservice part. Create a project called EmployeeFileUploadService in STS or via start.spring.io, selecting the Eureka client module to register this microservice with Eureka.
After that, rename application.properties to bootstrap.properties and add the following entries:
spring.application.name=EmployeePhotoStorageService
eureka.client.serviceUrl.defaultZone=http://localhost:9091/eureka
server.port=8085
security.basic.enabled=false
management.security.enabled=false
My Eureka server is located on port 9091, and I gave this microservice the logical name EmployeePhotoStorageService; it runs on port 8085.
Now I am going to create a REST controller that accepts the request from Angular and binds the multipart form.
Let's see the code snippet of FileController.
package com.example.EmployeePhotoStorageService.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class FileController {

    @Autowired
    FileService fileservice;

    @CrossOrigin(origins = "http://localhost:4200") // allow calls from the local Angular app
    @PostMapping("/profile/uploadpicture")
    public ResponseEntity<String> handleFileUpload(@RequestParam("file") MultipartFile file) {
        String message = "";
        try {
            fileservice.store(file);
            message = "You successfully uploaded " + file.getOriginalFilename() + "!";
            return ResponseEntity.status(HttpStatus.OK).body(message);
        } catch (Exception e) {
            message = "Failed to upload profile picture " + file.getOriginalFilename() + "!";
            return ResponseEntity.status(HttpStatus.EXPECTATION_FAILED).body(message);
        }
    }
}

A few things to notice here: I used the @CrossOrigin annotation to instruct Spring to allow requests coming from localhost:4200. In production, the microservice and the Angular app are hosted on different domains, and to allow requests from other domains we must provide the cross-origin annotation. I also autowired the FileService, which actually writes the file content to disk.
Let's see the FileService code.
package com.example.EmployeePhotoStorageService.controller;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

@Service
public class FileService {

    private final Path rootLocation = Paths.get("ProfilePictureStore");

    public void store(MultipartFile file) {
        try {
            System.out.println(file.getOriginalFilename());
            System.out.println(rootLocation.toUri());
            Files.copy(file.getInputStream(), this.rootLocation.resolve(file.getOriginalFilename()));
        } catch (Exception e) {
            throw new RuntimeException("FAIL!");
        }
    }
}

Here I created a directory called ProfilePictureStore under the project, at the same level as the src folder. I then copy the file's input stream to that location using java.nio's Files.copy() static method.
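If you want to sanity-check the store() method without starting the whole stack, a quick sketch like the following could work, assuming the spring-test dependency (which provides MockMultipartFile) is on the classpath and the ProfilePictureStore directory already exists; the file name and bytes are just sample values.

package com.example.EmployeePhotoStorageService.controller;

import org.springframework.mock.web.MockMultipartFile;

public class FileServiceSmokeCheck {

    public static void main(String[] args) {
        FileService service = new FileService();

        // MockMultipartFile implements MultipartFile, so it can stand in for a
        // real upload; arguments: form field name, original file name,
        // content type, and the file bytes.
        MockMultipartFile file = new MockMultipartFile(
                "file", "profile.png", "image/png", new byte[] {1, 2, 3, 4});

        service.store(file);
        // After this runs, ProfilePictureStore/profile.png should exist.
    }
}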
Now, to run this microservice, I have to write the Spring Boot application class. Let's see the code.
package com.example.EmployeePhotoStorageService;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@EnableDiscoveryClient
@SpringBootApplication
public class EmployeePhotoStorageService {

    public static void main(String[] args) {
        SpringApplication.run(EmployeePhotoStorageService.class, args);
    }
}

OK, we are all set; only the last piece is missing from this tutorial, the pom.xml file.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>EmployeeFileUploadService</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>EmployeeFileUploadService</name>
  <description>Demo project for Spring Boot</description>

  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.4.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
  </parent>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>
    <spring-cloud.version>Dalston.SR1</spring-cloud.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-config</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-eureka</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-jersey</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-feign</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-ribbon</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-hystrix</artifactId>
    </dependency>
  </dependencies>

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-dependencies</artifactId>
        <version>${spring-cloud.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>

And that's it! If we run the microservice and upload a file from Angular, we can see that the file is stored in the ProfilePictureStore folder. Very easy, isn't it?
Conclusion: This is a very simple example, or I could say a prototype, of file upload, without any validation and without passing any information from the UI apart from the raw file (comments, file name, tags, etc.). You can enrich this basic example using the FormData object in Angular. Another observation: I call the microservice instance directly from Angular. That is not the case in production; there you have to introduce a Zuul API gateway that accepts the request from Angular, does some security checking, then talks to Eureka and routes the request to the actual microservice. For simplicity, I skipped that part.

5 Best Practices for Building and Deploying Microservices

Many organizations that are going to adopt, or have adopted, a microservices culture often think microservices simply means breaking a big business functionality into small independent services that can be scaled automatically. But this is only half of the story. The other half is often ignored, yet it is equally important, and if you do not implement it, please go back and sit with your architects, because you are not getting the full advantage of the microservices culture. That other half is an efficient way to handle build and deployment, so that developer code is always production ready.

Today I will discuss five best practices for the build and deployment of microservices.



  1. Independent Deployment: When an organization adopts a microservices culture, the first rule to set is that each and every microservice should be independent. By independent I mean: each microservice should have a separate code base, publish/consume well-defined APIs, and have a separate persistence layer, separate build and deployment scripts, and separate deployable artifacts. The developers of a microservice then own the process end to end; they must have complete control over their service, both technically and managerially, which helps protect them from external impediments. When you build a microservice, you should know the APIs of the other microservices you are going to consume, and you should also publish an API so that other microservices can consume your service. By doing that, you become independent: since you know the request/response of the services you consume, you can mock them and work independently (a minimal mocking sketch follows below), and since you publish your API with a well-defined method signature, you can do whatever you like with your code base, even deploy it a thousand times a day, without affecting others, as long as you do not change the API signature. From the management perspective, the team must own the build and deployment pipeline and have administrator access in each and every environment, even production! If your organization does not allow this, you are not really doing microservices and will not get their essence. Sometimes dependency on other services is unavoidable (management dependencies, in contrast, are in our hands), or you need to revamp your API to cope with future changes, but those are exceptional cases; in my previous article I talked about how to handle them. As a rule, every service is independent and has its sole identity, and the team behind the service owns everything about it. This brings development speed, and as an organization you can publish features to end users easily.
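As an illustration of the mocking idea, here is a hedged Java sketch using the WireMock library (any stubbing tool would do; the endpoint and payload are invented). The consuming team points its client at the stub's URL and keeps developing even when the real upstream is not ready.

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class UpstreamStubExample {

    public static void main(String[] args) {
        // Stand-in for the real upstream service, listening on a local port.
        WireMockServer upstreamStub = new WireMockServer(8089);
        upstreamStub.start();

        // Stub the upstream's published API with a known request/response pair.
        upstreamStub.stubFor(get(urlEqualTo("/api/students/1"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"studentId\":\"1\",\"name\":\"Alice\"}")));

        // The service under development now calls http://localhost:8089
        // exactly as it would call the real upstream; stop() when finished.
    }
}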

  2. CI/CD: One of the main causes of slowness in publishing features to production is the dependency between the dev team and the ops team: the dev team holds the technical part and ops holds the environment, which creates barriers (communication, availability of environments/artifacts, etc.) because they depend on each other. In a competitive market, even a one-hour delay matters; this is not the way to adopt a microservices culture. In an ideal microservices environment, any code from a developer's environment can be production ready at any time: if the developer's code passes all criteria, it can be published to production in no time. There may be a manual intervention to publish it to production, but everything else should be automated. The microservices culture welcomes the DevOps culture; without DevOps, an organization does not utilize the power of microservices to the fullest. There should be a build pipeline: whenever a user checks in code, it triggers a build; the artifact then goes to the UAT environment, where unit testing is done; then it moves to SIT, where integration tests are fired; and at last it can be published to production, with or without manual intervention.

  3. Environment Replication: In the traditional project management lifecycle, our approach is to create different environments: a UAT environment, a SIT environment, a PROD environment, and so on. This is a good approach, but the problem is that the environments are not identical. In SIT we have a few servers managed by a load balancer, but in production the infrastructure is totally different: there may be primary and secondary pools of servers for blue/green deployment, or different data centers based on geographical affinity. So we have one artifact but different topologies: one artifact tested against multiple, differing environments. As a result, we can't predict how the artifact will behave in production, because we do not have production-like infrastructure to test on. The most common production bugs concern how artifacts behave under heavy load. To identify how exactly the code behaves, we need an environment identical to production, with load-testing capability. This can be done if the organization adopts IaaS, where we can spawn machines seamlessly and clone the developers' environment. Our artifacts should therefore not be only JARs, WARs, or gems; an artifact should contain the whole environment, i.e., the OS, the application server, the persistence layer, and all ancillary settings. Nowadays we can achieve this through IaaS (Infrastructure as a Service). Docker is also a very popular option: it spawns a container, and through a Dockerfile we can specify what should be packaged inside the container. Puppet and Chef are good options too. If you want it managed by a third party, you can go for AWS, Cloud Foundry, Bluemix, etc.



  4. Artifact Repository: Although this comes with CI/CD, I want to highlight it separately. Why does an organization adopt a microservices architecture, in business terms?
The answers are:
  1. They want to release features quickly to end users, to stay ahead of competitors.
  2. They want no downtime, which increases their reputation.

    To achieve this, the organization has to have a robust framework where it can store all artifacts with their version numbers, so that if any production issue occurs with the current release, we can either patch the fix and deploy it to production or, if the fix takes time, easily revert the change and deploy the old artifact, so users do not have to wait for the next release. It is therefore of the utmost importance to maintain an artifact repository, and deployment to production should be very easy: no cumbersome process that needs special expertise. Anyone should be able to do it; it should be like a single CLI command.

Like: deploy <ServiceName> <VersionId> <Environment>, where
Service Name: the logical name of the microservice.
Version Id: the version you want to deploy.

Environment: a logical name like prod1, prod2, or sit1; internally it contains all the information about the servers and applies the artifact to all of them.



  5. Scrum of Scrums: We know that one microservice holds one domain's information. So when an organization wants to build a feature that is distributed over multiple domains, the work is naturally distributed across multiple agile teams. Although in a microservices culture each team is independent, since they can mock the other services, making the functionality fully work needs integration. So it is of the utmost importance that every team's work items are in sync, so that within a defined timeframe all teams finish their jobs and all the work can be integrated and tested in the integrated environment. The organization must therefore conduct a Scrum of Scrums, where each team's scrum master and product owner sit together and decide the priority items, so that the business functionality is published to production according to the announced release.



Conclusion: Breaking a complex business domain into small moving parts, each working independently, is just one side of the coin. To build and publish new features in a hassle-free manner, we also need to focus on the management structure of the organization.

I am curious to hear from organizations that have adopted, or are about to adopt, a microservices culture: how did they change their management policy, and what other management-related challenges did they face when moving into the microservices culture?