DDD: Thinking in terms of Context Map

In my previous article, I discussed the Bounded Context in detail. We learned that the best way to tackle the complexity of a domain is to divide it into several subdomains and map them to different bounded contexts, where each business entity/value object has a specific meaning in its context. Every stakeholder of the business, like the product owner, developers, architects, and sponsors, then understands the context and refers to each entity by its proper name, so there is no confusion over naming when we discuss terminology within a context. This unified language among business stakeholders is the ubiquitous language.
With bounded contexts we properly define a business model and create different contexts based on the business domains. But a piece of functionality almost always spans multiple business entities, and those entities live in different bounded contexts/domains, so it is of the utmost importance to understand the relationships between the bounded contexts. To architect the business solution, the Context Map is the technique by which we can visualize those relationships, so the integration architect can choose the best integration patterns for communicating with other contexts.

Why is a Context Map so important when designing a solution?
With a UML diagram, an architect can understand how different parts will communicate with each other; it gives the architect a view of the communication between different contexts. That is fine, but the context map steps in before the UML diagram: it helps visualize the nature of each relationship, and based on that nature, architects can decide what kind of technical solution to adopt.
The best part of context map visualization is that it talks about the nature of the relationship. It not only tells us whether a relationship is upstream or downstream, publisher or subscriber; it also tells us how different teams depend on one another, their motives, their politics, everything. Counting all of that, the architect is now in a position to choose the optimal solution and minimize the risk of integrating with another context.
In the era of microservices, the context map is a key player, because before designing a holistic microservice architecture where every team owns a microservice, it is important to understand how one team depends on another, which teams are in a commanding position, and which teams only seek help; then you can architect the solution in the best possible way.
Think of our online Student Tutor app as a fully grown application, with, say, more than fifty microservices deployed in production. Here fifty teams come into play, and a feature like student registration affects multiple contexts, so multiple teams will be involved in implementing it. What will their relationship be? While designing this feature, whose data is most needed? Obviously the pivot service owning that data is in a commanding position: if its team is not ready, the other teams can't do anything, so all teams should align with that team and sync their product backlogs with it. Here internal politics enters the picture. And if the data comes from an external team, not within the organization, the solution is far more complex, as you can't force them; the only way is to request changes and wait for them. Based on these different scenarios and politics, the context map offers different solutions. I will cover the most important ones here.
Shared Kernel: A shared kernel describes a partnership relationship where two or more teams share a common data model/value object. It reduces code duplication, since different contexts use the common model, but that common model/value object is very sensitive: any change, major or minor, must be agreed upon by all the parties, or it will break the other parties' code. So more communication and synchronization are needed among those teams. Say one team needs a change in the common model but another team is not ready; the first team has to wait for its partner to be ready, or the other partner has to change its code unnecessarily just to stay in sync, even though the change does not help them. Different shades of problems are woven into maintaining the partnership relationship, so choose this pattern only when your common model remains constant or changes once in an era.
In our example, say we developed an analytics module which analyzes which courses are chosen most by students, which students have chosen more than five courses, and so on. That module works with the Student and Course models, so our Analytics module can share the Registration module's Student model, with both teams agreeing on any changes to it.
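To make this concrete, here is a minimal sketch in Java (the names are illustrative, not taken from any real codebase) of what such a shared kernel might look like: a tiny, jointly owned package that both teams compile against.

// Shared kernel: a small, jointly owned package that both the
// Registration and Analytics contexts depend on. Any change here
// must be agreed upon by both teams before it is made.
package sharedkernel;

import java.util.Objects;

public final class Student {

    // Deliberately minimal: only the fields both contexts agree on.
    private final String studentId;
    private final String name;

    public Student(String studentId, String name) {
        this.studentId = Objects.requireNonNull(studentId);
        this.name = Objects.requireNonNull(name);
    }

    public String getStudentId() { return studentId; }

    public String getName() { return name; }
}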

Customer/Supplier: This is generally the most common relationship between two contexts: one context consumes or depends on data from another. The context that produces data is marked as upstream, and the context that consumes it is called downstream. When we visualize this relationship in terms of politics, the power distribution has many shades, like the following.
Upstream as the leader: In this type of relationship, the upstream team is in a commanding position. That team does not care about the downstream team: it produces the data, and the downstream team is at the mercy of the upstream team, always changing its model to match the data structures the upstream produces.
In our student registration app, the relationship between the Payment module and the Notification module is this kind of upstream/downstream: the Payment module decides which information it provides and in what structure, and the Notification module consumes that data structure.

Downstream as the leader: In some cases the relationship is reversed: although the upstream produces the data, it has to follow the rules, i.e. the data structure, set by the downstream. In this scenario the downstream is in a commanding position.
Say in our student registration system we need to submit Form 16 to the government as a taxpayer. Our Payment module has to submit Form 16 data to the API exposed by the government, but that API has strict rules and a fixed data structure for submitting Form 16 data. So although the government API is downstream, it has total control; our Payment module must communicate with the downstream in such a way that it fulfills the downstream's rules.
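As a hedged illustration (the field names below are invented, not the real Form 16 schema), the upstream Payment module ends up with an adapter whose whole job is to satisfy the downstream's contract:

// Our internal payment model (simplified for this sketch).
class Payment {
    String taxId;
    String financialYear;
    long taxAmountPaise;
}

// Structure dictated by the downstream (government) API.
// These field names are illustrative, not the real Form 16 schema.
class Form16Request {
    String panNumber;
    String assessmentYear;
    long taxDeducted;
}

// The upstream conforms: it reshapes its data to the downstream's rules.
class Form16Adapter {
    Form16Request toForm16(Payment payment) {
        Form16Request request = new Form16Request();
        request.panNumber = payment.taxId;
        request.assessmentYear = payment.financialYear;
        request.taxDeducted = payment.taxAmountPaise;
        return request;
    }
}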

The customer/supplier relationship works best when both parties, upstream and downstream, are aligned on the work and agree on the interfaces and on changes to the structure. In case of any change to the contract, both parties discuss it, synchronize their priority backlogs, and agree on the changes. If one party does not care about the other, the contract will be broken every time, and it becomes tough to maintain a customer/supplier relationship.
Conformist: Sometimes the relationship between two parties is such that the downstream team always depends on the upstream team but can't reach any mutual agreement with it about requirements. The upstream is not aligned with the downstream and does not care about it; it is free to change its published endpoints or contracts at any time and does not take requests from the downstream. This happens when the upstream is an external system, or sits under a different management hierarchy, with many downstream systems registered to it, so it can't give priority to any one of them. Instead, all downstream systems must align themselves with the upstream's contracts and data structures; if those change, it is the downstream's responsibility to change accordingly.
Say our online student registration app has a free tutorial module, where all students or other applications can consume our free tutorials and embed them in their own applications. Here our free tutorial module acts as the upstream and is independent of any third-party app that consumes the tutorials; we can't give priority to them, and we don't have a contract with them. If we change the contracts or data structures, it is the third parties' duty to change their applications accordingly to keep consuming our free tutorials. The other parties act as conformists.

Anti-Corruption Layer: When two systems interact, if we consume data directly from the upstream, we pollute our downstream system, because the upstream's data structures leak into the downstream; if the upstream becomes polluted, our downstream does too, as it imitates the upstream data while consuming it. So when consuming data from a third party or a legacy application, it is a good idea to always use a translation layer, where the upstream data is translated into the downstream's data structures before being fed into the downstream. That way we resist data leakage from the upstream: if the upstream contract changes, it does not pollute the downstream's internals; only the translation layer has to change in order to adopt the new upstream structure and convert it into the downstream one. This technique is called the Anti-Corruption Layer (ACL). Anti-corruption layers save the downstream system from upstream changes.
In our application, the Notification module can implement an ACL while consuming data from the Payment module, so if the Payment module's data structure changes, only the ACL layer is affected.
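A minimal sketch of such an ACL, assuming an invented Payment payload, might look like this; the translator is the only class that knows both models, so an upstream change is absorbed in one place.

// Upstream (Payment) representation, as received over the wire.
class PaymentEvent {
    String payerId;
    String status;      // e.g. "SETTLED"
    long amountPaise;
}

// Downstream (Notification) internal model, kept free of upstream terms.
class PaymentNotification {
    String studentId;
    boolean successful;
    String displayAmount;
}

// The anti-corruption layer: the only place that knows both models.
// If the Payment contract changes, only this translator changes.
class PaymentTranslator {
    PaymentNotification translate(PaymentEvent event) {
        PaymentNotification n = new PaymentNotification();
        n.studentId = event.payerId;
        n.successful = "SETTLED".equals(event.status);
        n.displayAmount = String.format("Rs %.2f", event.amountPaise / 100.0);
        return n;
    }
}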

Open Host: In some cases your domain API needs to be accessed by many other services, like our free tutorial module: many external and internal domains want to consume it. So, as the upstream, it should be hosted as a service and maintain a protocol and service contract, like REST with a JSON structure, so that other systems can consume its data.
Published Language: Often two or more systems receive and send messages among themselves. In that case a common language is needed for transforming data from one system to another, like XML or JSON; we call that structure the published language.
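Putting the two together, here is a hedged sketch (Spring MVC, as used later in this series; the endpoint path and fields are illustrative) of the free tutorial module acting as an open host service that speaks a JSON published language:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Published language: the JSON shape every consumer, internal or
// external, agrees to read, e.g.
// {"tutorialId":"t-1","title":"...","courseName":"..."}
class TutorialDto {
    public String tutorialId;
    public String title;
    public String courseName;
}

// Open host service: one well-documented protocol (REST + JSON)
// offered to all consumers instead of per-consumer contracts.
@RestController
class FreeTutorialController {

    @GetMapping("/tutorials/{id}")
    public TutorialDto getTutorial(@PathVariable String id) {
        TutorialDto dto = new TutorialDto();
        dto.tutorialId = id;
        dto.title = "Introduction to DDD";
        dto.courseName = "Architecture Basics";
        return dto;
    }
}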
The holistic view of our online student registration app in terms of the context map:

Conclusion: The context map is a very important exercise for realizing how one domain communicates with another. It gives a proper view of the organization's structure: how different domains are distributed, how domain owners depend on each other, what the relationships between teams are, and whether they can be aligned while developing a feature. Based on all these parameters, the integration architect can adopt a suitable integration pattern to integrate the domains. Before designing the integration solution, the architect always has to draw the context map to understand the relationships and the structure of the teams; based on that, the architect can choose the best possible solution.

Bounded Context in my view

In this article, I will share my view of the Bounded Context:
What does it mean?
Why is it required?
What is the connection between Bounded Contexts and microservices?
I will try to keep it simple. This article is targeted at readers who hear the term Bounded Context while developing microservices but have a hard time understanding the concept.
Before deep diving into the Bounded Context in terms of DDD (Domain-Driven Design), let's understand what it means in the real world.
We know humans are the most intelligent species, and they created different countries in order to govern. But the question is: why did they create different countries, or logical boundaries? On what basis was a boundary drawn between two countries? Before human civilization, the Earth only had land; there was no concept of a country.
One compelling answer that comes to my mind is: to separate administration, culture, laws, and economics. Each country abstracts its people away from other countries, since it would otherwise be an impossible task to create one unified culture and language; even in ancient times, different tribes had their own languages and cultures.
Within one country there are predefined rules, a language, and a style that every citizen follows. They understand the common language and are aware of the laws, the currency, the style. In brief: citizens are in sync with each other within a country, and the country has one unified culture that every citizen follows and understands.
In programming, we apply the Bounded Context in the same manner, to separate different models/subdomains from each other within a domain/problem space. In the strategic design part of Domain-Driven Design, we introduce the Bounded Context so that a model has a specific meaning within a certain context, just like countries, where for a particular country a certain language or currency has a specific meaning, but in a different country that currency or language is not understood, because it has no meaning, or a different meaning, there. Take the word "Fool": in English it means a stupid person, but in Bengali it means flower!
So if we consider a country as a bounded context, and languages/currencies as models inside that context, we can easily map the concept of a domain model onto a certain bounded context. A model has a certain meaning in one bounded context, but the same model has no meaning, or a different meaning, in another bounded context. With a bounded context we create a logical boundary within which the model and the business terms have a specific meaning; the bounded context separates and hides its models from the outer world, and all communication happens via an API. So it follows that, inside a bounded context, the model and business logic obey certain laws and maintain their own persistence storage, which is not directly accessible to other bounded contexts.

Bounded Context communication: Any design has two common parts: abstraction of the data model, and communication with other parts of the system. With bounded contexts we separate the data models, in simple terms abstracting the commonalities in the business. But how does one bounded context communicate with others?
Here the concept of the Context Map steps in. Using a context map we can discover how one context depends on other bounded contexts: do two contexts have strong dependencies, does one domain simply conform to another's contract (Conformist), or do they use a shared kernel/shared model? I will talk about the context map in a separate article, but for now, you can think of the context map as the means by which bounded contexts communicate clearly with each other.
At this point, I believe you have a fair idea of what a bounded context is. But if you still have the question "How does it fit into an architecture?", go to the next paragraph.
Let's say we have to design an online student management system, where a student can register on the site, choose a course accordingly, and pay the course fee; the student is then tagged to a batch, and the teacher and student are notified about the batch start date and time slot.
As an architect, you have to identify the bounded contexts of the different domains related to this business logic.
If we divide the business logic based on related functionality, we find four basic functions:
1. Registration process: takes care of registering the student.
2. Payment system: processes the course fee and publishes the online payment status.
3. Batch scheduling: upon confirmation of payment, this function checks teacher availability and batch availability, and based on that creates a batch and assigns the candidates, or updates an existing batch with the candidate.
4. Notification system: notifies the teacher and the student about the timing and slot information.

So there are four bounded contexts: Registration, Payment, Scheduling, and Notification.
Now let's dig into how each module represents the Teacher, Student, and Course models.
Registration process: It only wants the student's information, and it needs demographic information: name, age, sex, address, and which course the student chose.
The term Teacher has no meaning in this context.
Payment system: It treats the student as a Candidate. In the payment system, only the candidate's financial information is required, like the account number, and based on the course fee it deducts the amount from the given account. So here the perspective on a student is totally different: the information needed by the payment service differs entirely from Registration's, although the payment system may need a few pieces of student information, like name and address, which are very minimal.
The term "Teacher" is not valid here. In the payment system a teacher is treated as Faculty, who can be permanent or a contractor; based on the faculty type, the payment system chooses the payment calculation, either per hour or per annum.
Batch scheduling: The batch scheduling system needs only bare-minimum information about the teacher and student, like name and address, but it needs detailed information about courses and the batches under each course.
Notification system: For the notification system, we just need the teacher's and student's email addresses or phone numbers and their names, nothing else, plus the name of the student management system and the course details. Here the student management site acts as the sender, and the teacher and student are the receivers.
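A short sketch of how these per-context models might look in code (illustrative Java, one class per context; in a real system each would live in its own module or service):

// The same real-world "Student" and "Teacher", modelled per bounded context.

// Registration context: demographic details and the chosen course.
class Student {
    String name;
    int age;
    String address;
    String chosenCourse;
}

// Payment context: the student is a Candidate; only billing data matters.
class Candidate {
    String name;
    String accountNumber;
    long courseFeePaise;
}

// Payment context: a teacher is Faculty, paid per hour or per annum.
class Faculty {
    String name;
    boolean permanent;   // permanent => per-annum, contractor => per-hour

    long pay(long hoursWorked, long perHourRate, long perAnnumSalary) {
        return permanent ? perAnnumSalary : hoursWorked * perHourRate;
    }
}

// Notification context: just enough information to reach someone.
class Recipient {
    String name;
    String email;
    String phone;
}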



So far we have seen that the same domain objects, Teacher, Student, and Course, have different meanings and use cases in different contexts. This is the beauty of the Bounded Context: we have multiple canonical models for the same domain object, based on the context, so developers, business people, and users are always on the same page when they talk within a context. The concept of the ubiquitous language is woven in here: using a ubiquitous language, DDD creates a unified system where every participant understands the language based on the context.
Now, the common question: why is the term Bounded Context so popular in microservices?
To answer this question, first we have to understand that DDD applies to monoliths as well as microservices, but in a monolith it is vaguer, more of a logical segregation, so only good developers can see it. The main reason is that a monolith has a single giant codebase; we may break it into multiple modules based on DDD and create an ACL/translator when one module talks to another, but each module still needs the other modules as dependency jars in order to invoke their methods. Another point: because this is a single codebase with multiple coders working on it, a less skillful coder can pollute the boundary or the domain objects, and the architect can't create a physical boundary based on the bounded contexts.

In a microservice architecture, that boundary is inherent: microservices say that rather than one large codebase, we create small services, each with its own codebase, that talk to each other through APIs or messaging. That maps directly onto DDD: understand the business domain, break the business logic into multiple bounded contexts, make each bounded context a separate codebase, and let them communicate through the context map. To design the context map, you have to design the APIs carefully; you might use a port-and-adapter style (see the sketch below) so the code inside a bounded context never communicates with the outer world directly and is never polluted. Microservices offer this strong segregation of bounded contexts, which makes the Bounded Context far more visible and understandable there.
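By a port-and-adapter style I mean something like the following minimal sketch (names invented for illustration): the port is the only published doorway into the bounded context, and the adapter translates transport-level requests into port calls.

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Port: the only published way into the Registration bounded context.
// The domain model behind it is never exposed to the outer world.
interface RegistrationPort {
    String register(String studentName, String courseId); // returns a registration id
}

// Adapter: translates an HTTP request into a port call, so transport
// concerns never leak into the domain and the boundary stays clean.
@RestController
class RegistrationRestAdapter {

    private final RegistrationPort port;

    RegistrationRestAdapter(RegistrationPort port) {
        this.port = port;
    }

    @PostMapping("/registrations")
    public String register(@RequestParam String studentName,
                           @RequestParam String courseId) {
        return port.register(studentName, courseId);
    }
}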
Conclusion: The Bounded Context is the basic need when you are trying to break up large business logic: it helps you understand how different parts of the system use domain objects in different manners, with different terminology. The Bounded Context is just a view for properly organizing the business logic based on functionality, but to make the business logic work, communication between bounded contexts is needed, and the Context Map serves that purpose. In my next article, I will discuss the Context Map.

File Upload Using Angular 4/Microservices

Uploading a file is a regular feature of web programming; every business needs this facility. We know how to upload a file using JSP/HTML as the front end and Servlet/Struts/Spring MVC at the server end.


But how do we achieve the same with an Angular 4/microservice combination?
In this tutorial, I will show you exactly that, step by step. Before that, let me clarify one thing: I am assuming you have a basic understanding of Angular 4 and microservices.
Now let's jump directly to the problem statement:
I want to create upload functionality which invokes a FileUpload microservice and stores the profile picture of an employee.
Let's create an Angular 4 project using the Angular.io plugin in Eclipse.
After creating the application, modify the app.component.ts file under the app module.
import { Component } from '@angular/core';
import { HttpResponse } from '@angular/common/http';
import { UploadFileService } from './fileUpload.service';

@Component({
  selector: 'app-root',
  templateUrl: './view.component.html',
  styleUrls: ['./app.component.css'],
  providers: [UploadFileService]
})
export class AppComponent {
  selectedFiles: FileList;
  currentFileUpload: File;

  constructor(private uploadService: UploadFileService) {}

  // Capture the files chosen in the file input's change event.
  selectFile(event) {
    this.selectedFiles = event.target.files;
  }

  // Send the first selected file to the upload service and log completion.
  upload() {
    this.currentFileUpload = this.selectedFiles.item(0);
    this.uploadService.pushFileToStorage(this.currentFileUpload).subscribe(event => {
      if (event instanceof HttpResponse) {
        console.log('File is completely uploaded!');
      }
    });

    this.selectedFiles = undefined;
  }
}
In the @Component decorator, I changed the template URL to view.component.html, which actually holds the file-upload form components. After that, I added UploadFileService as a provider; this service actually posts the selected files to the microservice.
Next, I defined a method called selectFile, which captures an event (the onchange event on the file-upload form field) and extracts the files from the event target, in this case the file input field.
Then I added another method called upload, which calls the file upload service and subscribes to the Observable<HttpEvent> it returns, reacting once the final HttpResponse arrives.

Here is the view.component.html file:
<div style="text-align:center">
  <label>
    <input type="file" (change)="selectFile($event)">
  </label>
  <button [disabled]="!selectedFiles" (click)="upload()">Upload</button>
</div>
Here I just added a file upload field; when we select a file, an onchange event fires, which calls the selectFile method and passes the event to it.
The Upload button then calls the upload method.
Let's see the file upload service.
import { Injectable } from '@angular/core';
import { HttpClient, HttpRequest, HttpEvent } from '@angular/common/http';
import { Observable } from 'rxjs/Observable';

@Injectable()
export class UploadFileService {

  constructor(private http: HttpClient) {}

  // Wrap the file in a multipart form and POST it to the microservice.
  pushFileToStorage(file: File): Observable<HttpEvent<{}>> {
    const formdata: FormData = new FormData();
    formdata.append('file', file);

    const req = new HttpRequest('POST', 'http://localhost:8085/profile/uploadpicture', formdata, {
      reportProgress: true,
      responseType: 'text'
    });

    return this.http.request(req);
  }
}
Here I create a FormData object, add the uploaded file to it, and, using Angular's HttpClient, post the form data to a microservice running on port 8085 which publishes a REST endpoint called /profile/uploadpicture.
Hooray, we have successfully written the UI part of the file upload using Angular 4.
If you start the Angular app (ng serve), it will look like the following.

Let's build the microservice part. Create a project called EmployeeFileUploadService in STS or via start.spring.io, and select the Eureka client module to register this microservice with Eureka.
After that, rename application.properties to bootstrap.properties and add the following entries.
spring.application.name=EmployeePhotoStorageBoard
eureka.client.serviceUrl.defaultZone=http://localhost:9091/eureka
server.port=8085
security.basic.enabled=false
management.security.enabled=false
My Eureka server is located on port 9091. I gave this microservice the logical name EmployeePhotoStorageBoard (the spring.application.name above), and it runs on port 8085.
Now I am going to create a REST controller which accepts the request from Angular and binds the multipart form.
Let's see the code snippet of the FileController.
package com.example.EmployeePhotoStorageService.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class FileController {

    @Autowired
    FileService fileservice;

    @CrossOrigin(origins = "http://localhost:4200") // allow calls from the local Angular app
    @PostMapping("/profile/uploadpicture")
    public ResponseEntity<String> handleFileUpload(@RequestParam("file") MultipartFile file) {
        String message = "";
        try {
            fileservice.store(file);
            message = "You successfully uploaded " + file.getOriginalFilename() + "!";
            return ResponseEntity.status(HttpStatus.OK).body(message);
        } catch (Exception e) {
            message = "Failed to upload profile picture " + file.getOriginalFilename() + "!";
            return ResponseEntity.status(HttpStatus.EXPECTATION_FAILED).body(message);
        }
    }
}

A few things to notice here: I use the @CrossOrigin annotation to instruct Spring to allow requests coming from localhost:4200. In production, the microservice and the Angular app are hosted on different domains, and to allow requests from other domains we must provide the cross-origin annotation. I also autowire a FileService, which actually writes the file content to disk.
Let's see the FileService code.
package com.example.EmployeePhotoStorageService.controller;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

@Service
public class FileService {

    // Target directory, relative to the project root (same level as src).
    private final Path rootLocation = Paths.get("ProfilePictureStore");

    public void store(MultipartFile file) {
        try {
            System.out.println(file.getOriginalFilename());
            System.out.println(rootLocation.toUri());
            // Copy the uploaded stream into ProfilePictureStore/<original name>.
            Files.copy(file.getInputStream(), this.rootLocation.resolve(file.getOriginalFilename()));
        } catch (Exception e) {
            throw new RuntimeException("FAIL!");
        }
    }
}

Here I create a directory called ProfilePictureStore under the project, at the same level as the src folder, and copy the file's input stream to that location using java.nio's Files.copy() static method.
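One caveat: Files.copy() throws an exception if the ProfilePictureStore directory does not already exist (or if a file with the same name is already there). If you don't want to create the folder by hand, a slightly more defensive variant of the store() method above could create it on first use; a sketch:

public void store(MultipartFile file) {
    try {
        // Create ProfilePictureStore on first use; no-op if it already exists.
        Files.createDirectories(rootLocation);
        Files.copy(file.getInputStream(), this.rootLocation.resolve(file.getOriginalFilename()));
    } catch (Exception e) {
        throw new RuntimeException("Failed to store " + file.getOriginalFilename(), e);
    }
}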
Now, to run this microservice, I have to write the Spring Boot application class. Let's see the code.
package com.example.EmployeePhotoStorageService;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@EnableDiscoveryClient
@SpringBootApplication
public class EmployeePhotoStorageService {

    public static void main(String[] args) {
        SpringApplication.run(EmployeePhotoStorageService.class, args);
    }
}

OK, we are all set. Only the last piece is missing from this tutorial: the pom.xml file.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>EmployeeFileUploadService</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>EmployeeFileUploadService</name>
    <description>Demo project for Spring Boot</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.4.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
        <spring-cloud.version>Dalston.SR1</spring-cloud.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-config</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-eureka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-jersey</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-feign</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-ribbon</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-hystrix</artifactId>
        </dependency>
    </dependencies>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

Yeah, that's it. If we run the microservice and upload a file from Angular, we can see that the file is stored in the ProfilePictureStore folder. Very easy, isn't it?
Conclusion: This is a very simple example, or I should say a prototype, of file upload, without any validation and without passing any extra information from the UI besides the raw file, such as comments, file name, or tags. You can enrich this basic example using the FormData object in Angular. Another observation: I call the microservice instance directly from Angular. That is not the case in production; there you would introduce a Zuul API gateway which accepts the request from Angular, does some security checking, then communicates with Eureka and routes the request to the actual microservice. For simplicity, I skipped that part.

5 Best Practices for Building and Deploying Microservices

Many organizations that are adopting, or have adopted, a microservices culture often think microservices just means breaking a big piece of business functionality into small independent services which can be scaled automatically. But this is only half of the story. The other half is often ignored, yet it is equally important, and if you do not implement it, please go back and sit with your architects, because you are not getting the full advantage of the microservices culture. That other half is an efficient way to handle build and deployment, so that developer code is always production ready.

Today I will discuss 5 best practices for building and deploying microservices.



  1. Independent Deployment: When an organization adopts a microservices culture, the first rule it has to set is that each and every microservice should be independent. What I mean by independent is: each microservice should have a separate codebase, publish/consume well-defined APIs, and have a separate persistence layer, separate build and deployment scripts, and separate deployable artifacts. The developers of a microservice then own the process end to end; they must have complete control over their service, both technically and managerially, which helps protect them from external impediments. When you build a microservice, you should know the APIs of the other microservices you will consume, and also publish an API so other microservices can consume yours. By doing that you become independent: since you know the request/response of the services you consume, you can mock them and work independently (see the sketch below), and since your published API has a well-defined method signature, as long as you don't change that signature, you can do whatever you like with your codebase, even deploy it a thousand times a day, without affecting others. From a management perspective, the team must own the build and deployment pipeline, with administrator access in each and every environment, even production! They should have complete control over everything related to administration; if your organization does not allow this, then you are not doing microservices and will not get their essence. Sometimes dependency on other services is unavoidable (management dependencies, at least, are in our own hands), or you need to revamp your API to cope with future changes, but those are exception cases; I talked about handling them in my previous article. The rule is that every service is independent and has its own identity, and the team behind the service owns everything about it. This brings development speed, and as an organization, you can publish features to the end user easily.
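For example, here is a hedged sketch (JUnit 4 and Mockito, with invented names) of how a team can keep working against a mocked version of an upstream's published contract:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// The upstream's published API, expressed as a small client interface.
interface PaymentClient {
    String paymentStatus(String studentId); // e.g. "SETTLED" or "PENDING"
}

public class BatchSchedulerTest {

    @Test
    public void worksAgainstMockedUpstream() {
        // The real Payment service may not even be deployed yet;
        // knowing its published contract is enough to keep developing.
        PaymentClient payment = mock(PaymentClient.class);
        when(payment.paymentStatus("s-1")).thenReturn("SETTLED");

        assertEquals("SETTLED", payment.paymentStatus("s-1"));
    }
}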

  2. CI/CD: One of the main causes of slowness in publishing features to production is the dependency between the Dev team and the Ops team. The Dev team holds the technical part and Ops holds the environments, and this creates barriers (communication, availability of environments and artifacts, etc.) because they depend on each other. In a competitive market, even a delay of one hour matters; this is not the way to adopt a microservices culture. In an ideal microservices environment, any code from a developer's environment can be production ready at any time: if the developer's code passes all criteria, it can be published to production in no time. There may be a manual approval step for publishing to production, but everything else should be automated. The microservices culture welcomes the DevOps culture automatically; without DevOps, an organization does not utilize the power of microservices to the fullest. There should be a build pipeline: whenever a user checks in code, it triggers a build; the artifacts go to a UAT environment where unit tests run, then move to SIT where integration tests fire, and at last the build can be published to production with or without manual intervention.

  3. Environment Replication: In the traditional project management lifecycle, our approach is to create different environments: a UAT environment, a SIT environment, a PROD environment, etc. This is a good approach, but the problem is that the environments are not identical. In SIT we have a few servers managed by a load balancer, but in production, the infrastructure is totally different: there may be primary and secondary pools of servers for blue/green deployment, or different data centers based on geographical affinity. So we have one artifact but different topologies, one artifact tested against multiple environments; as a result, we can't predict how the artifact will behave in production, because we have no production-like infrastructure to test on. The most common production bugs concern how artifacts behave under heavy load, so to learn how the code really behaves, we need an environment identical to production, with load testing capability. This can be done if the organization adopts IaaS, where we can spawn machines seamlessly and clone the developer's environment. Our artifacts should then not only be JARs, WARs, or gems; an artifact should contain the whole environment, i.e. the OS, the application server, the persistence layer, and all ancillary settings. Nowadays we can achieve this through IaaS (Infrastructure as a Service). Docker is also a very popular option: we spawn a container, and with a Dockerfile we instruct what should be packaged inside it. Puppet and Chef are also good options. If you want it managed by a third party, you can go for AWS, Cloud Foundry, Bluemix, etc.



  4. Artifact Repository: Although this comes with CI/CD, I want to highlight it separately. Why does an organization adopt a microservice architecture, in business terms?
The answers are:
  1. They want to release features to end users quickly, to stay ahead of competitors.
  2. No downtime, which increases their reputation.

    To achieve this, the organization has to have a robust framework where it can store all artifacts with their version numbers, so that if a production issue occurs with the current release, we can either patch the fix and deploy it to production, or, if the fix takes time, easily revert the change and deploy the old artifact, so users do not have to wait for the next release. It is therefore of the utmost importance to maintain an artifact repository, and deploying to production should be very easy: no cumbersome process that needs special expertise. Anyone should be able to do it; it should be like one command in a CLI.

Like deploy <ServiceName> <VersionId> <Environment>, where:
ServiceName: the logical name of the microservice.
VersionId: the version you want to deploy.
Environment: a logical name like prod1, prod2, or sit1; internally it holds all the server information and applies the artifact to every server.



  5. Scrum of Scrums: We know that one microservice holds one domain's information, so when an organization wants to build a feature that is distributed over multiple domains, the work is naturally distributed across multiple agile teams. Although in a microservices culture each team is independent, since they can mock the other services, making the functionality fully work requires integration. So it is of the utmost importance that every team's work items be in sync, so that within a defined timeframe all teams finish their jobs and all the work can be integrated and tested in the integrated environment. The organization must therefore conduct a Scrum of Scrums, where each team's scrum master and product owner sit together and decide the priority items, so that the business functionality is published to production according to the announced release.



Conclusion: Breaking a complex business domain into small moving parts, each working independently, is just one side of the coin. To build and publish new features in a hassle-free manner, we also need to focus on the management structure of the organization.

I am curious to hear from organizations that have adopted, or are about to adopt, a microservices culture: how did they change their management policy, and what other management-related challenges did they face when moving to the microservices culture?