This reference guide is the second in a four-part series about designing, building, and deploying microservices. This series describes the various elements of a microservices architecture, including the benefits and drawbacks of the microservices architecture pattern and how to apply it.
This series is intended for application developers and architects who design and implement the migration of a monolithic application to a microservices application. The process of transforming a monolithic application into microservices is a form of application modernization. To accomplish application modernization, we recommend that you don't refactor all of your code at the same time. Instead, we recommend that you incrementally refactor your monolithic application: you gradually build a new application that consists of microservices and run it alongside your monolithic application. This approach is also known as the Strangler Fig pattern. Over time, the amount of functionality that the monolithic application implements shrinks until it either disappears entirely or becomes another microservice.

To decouple capabilities from a monolith, you have to carefully extract the capability's data, logic, and user-facing components, and redirect them to the new service. It's important that you have a good understanding of the problem space before you move into the solution space. When you understand the problem space, you understand the natural boundaries in the domain that provide the right level of isolation. We recommend that you create larger services instead of smaller services until you thoroughly understand the domain.

Defining service boundaries is an iterative process. Because this process involves a non-trivial amount of work, you need to continuously evaluate the cost of decoupling against the benefits that you get. The following factors can help you evaluate how to approach decoupling a monolith:
The following sections discuss various strategies to decouple services and incrementally migrate your monolithic application.

Decouple by domain-driven design

Microservices should be designed around business capabilities, not horizontal layers such as data access or messaging. Microservices should also have loose coupling and high functional cohesion. Microservices are loosely coupled if you can change one service without requiring other services to be updated at the same time. A microservice is cohesive if it has a single, well-defined purpose, such as managing user accounts or processing payments.

Domain-driven design (DDD) requires a good understanding of the domain for which the application is written. The domain knowledge needed to create the application resides with the people who understand it: the domain experts. You can apply the DDD approach retroactively to an existing application as follows:
The following diagram shows how you can apply bounded contexts to an existing ecommerce application:

Figure 1. Application capabilities are separated into bounded contexts that migrate to services.

In figure 1, the ecommerce application's capabilities are separated into bounded contexts and migrated to services as follows:
Prioritize services for migration

An ideal starting point to decouple services is to identify the loosely coupled modules in your monolithic application. You can choose a loosely coupled module as one of the first candidates to convert to a microservice. To complete a dependency analysis of each module, look at the following:
Migrating a module with heavy data dependencies is usually a nontrivial task. If you migrate features first and migrate the related data later, you might temporarily be reading from and writing to multiple databases. Therefore, you must account for data integrity and synchronization challenges.

We recommend that you extract modules that have resource requirements that differ from the rest of the monolith. For example, if a module has an in-memory database, you can convert it into a service that can then be deployed on hosts with more memory. When you turn modules with particular resource requirements into services, you can make your application much easier to scale.

From an operations standpoint, refactoring a module into its own service also means adjusting your existing team structures. The best path to clear accountability is to empower small teams that own an entire service.

Additional factors that can affect how you prioritize services for migration include business criticality, comprehensive test coverage, the security posture of the application, and organizational buy-in. Based on your evaluations, you can rank services by the benefit you receive from refactoring, as described in the first document in this series.

After you identify an ideal service candidate, you must identify a way for the microservice and the monolithic modules to coexist. One way to manage this coexistence is to introduce an inter-process communication (IPC) adapter, which can help the modules work together. Over time, the microservice takes on the load and the monolithic component is retired. This incremental process reduces the risk of moving from the monolithic application to the new microservice because you can detect bugs or performance issues gradually. The following diagram shows how to implement the IPC approach:

Figure 2. An IPC adapter coordinates communication between the monolithic application and a microservices module.
In figure 2, module Z is the service candidate that you want to extract from the monolithic application. Modules X and Y are dependent upon module Z. Microservice modules X and Y use an IPC adapter in the monolithic application to communicate with module Z through a REST API. The next document in this series, Interservice communication in a microservices setup, describes the Strangler Fig pattern and how to deconstruct a service from the monolith.

Manage a monolithic database

Typically, monolithic applications have their own monolithic databases. One of the principles of a microservices architecture is to have one database for each microservice. Therefore, when you modernize your monolithic application into microservices, you must split the monolithic database based on the service boundaries that you identify.

To determine where to split a monolithic database, first analyze the database mappings. As part of the service extraction analysis, you gathered some insights on the microservices that you need to create. You can use the same approach to analyze database usage and to map tables or other database objects to the new microservices. Tools like SchemaCrawler, SchemaSpy, and ERBuilder can help you perform such an analysis. Mapping tables and other objects helps you understand the coupling between database objects that spans your potential microservices boundaries.

However, splitting a monolithic database is complex because there might not be clear separation between database objects. You also need to consider other issues, such as data synchronization, transactional integrity, joins, and latency. The following sections describe patterns that can help you respond to these issues when you split your monolithic database.

Reference tables

In monolithic applications, it's common for modules to access required data from a different module through an SQL join to the other module's table.
The following diagram uses the previous ecommerce application example to show this SQL join access process:

Figure 3. A module joins data to a different module's table.

In figure 3, to get product information, the order module runs an SQL join against the product module's table. However, if you deconstruct modules as individual services, we recommend that you don't have the order service directly call the product service's database to run a join operation. The following sections describe options that you can consider to segregate the database objects.

Share data through an API

When you separate the core functionalities or modules into microservices, you typically use APIs to share and expose data. The referenced service exposes data as an API that the calling service needs, as shown in the following diagram:

Figure 4. A service uses an API call to get data from another service.

In figure 4, an order module uses an API call to get data from a product module. This implementation has obvious performance issues due to additional network and database calls. However, sharing data through an API works well when data size is limited. Also, if the called service returns data that has a well-known rate of change, you can implement a local TTL cache on the caller to reduce network requests to the called service.

Replicate data

Another way to share data between two separate microservices is to replicate data in the dependent service database. The data replication is read-only and can be rebuilt at any time. This pattern enables the service to be more cohesive. The following diagram shows how data replication works between two microservices:

Figure 5. Data from a service is replicated in a dependent service database.

In figure 5, the product service database is replicated to the order service database. This implementation lets the order service get product data without repeated calls to the product service. To build data replication, you can use techniques like materialized views, change data capture (CDC), and event notifications.
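The replication approach above can be approximated with a small sketch: the order service keeps a read-only local copy of product data and updates it by applying change events. This is a simplified stand-in for CDC or event-notification pipelines; the class name and event shape are illustrative assumptions, not from the original text.

```python
# Sketch: a read-only, eventually consistent replica of product data kept
# inside the order service. Change events (a simplified stand-in for CDC or
# event notifications) arrive from the product service's change stream.

class ProductReplica:
    """Local read-only copy of product data; can be rebuilt at any time."""

    def __init__(self):
        self._products = {}  # product id -> product data

    def apply_change_event(self, event: dict):
        # Apply one change event from the product service.
        if event["op"] == "upsert":
            self._products[event["id"]] = event["data"]
        elif event["op"] == "delete":
            self._products.pop(event["id"], None)

    def get(self, product_id):
        # Local read: no network call to the product service is needed.
        return self._products.get(product_id)
```

With this sketch, the order service serves product lookups locally, at the cost of the consistency lag discussed next.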
The replicated data is eventually consistent; because there can be lag in replicating data, there is a risk of serving stale data.

Static data as configuration

Static data, such as country codes and supported currencies, is slow to change. You can inject such static data as configuration in a microservice. Modern microservices and cloud frameworks provide features to manage such configuration data using configuration servers, key-value stores, and vaults. You can include these features declaratively.

Shared mutable state

Monolithic applications have a common pattern known as shared mutable state. In a shared mutable state configuration, multiple modules use a single table, as shown in the following diagram:

Figure 6. Multiple modules use a single table.

In figure 6, the order, payment, and shipping functionalities of the ecommerce application use the same table to maintain shared state. To migrate a shared mutable state monolith, you can develop a separate ShoppingStatus microservice to manage that table and expose its data through APIs, as shown in the following diagram:

Figure 7. A microservice exposes APIs to multiple other services.

In figure 7, the payment, order, and shipping microservices use the ShoppingStatus microservice APIs. If the database table is closely related to one of the services, we recommend that you move the data to that service. You can then expose the data through an API for other services to consume. This implementation helps you ensure that you don't have too many fine-grained services that call each other frequently. If you split services incorrectly, you need to revisit your service boundary definitions.

Distributed transactions

After you isolate a service from the monolith, a local transaction in the original monolithic system might become distributed between multiple services. A transaction that spans multiple services is considered a distributed transaction. In the monolithic application, the database system ensures that transactions are atomic.
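One common way to handle such distributed transactions is a saga-style coordinator that runs each service's local step in order and, when a step fails, executes compensating actions for the steps that already succeeded. The following is a minimal, hypothetical sketch; the step and compensation callables are illustrative, not from the original text.

```python
# Hedged sketch of saga-style coordination for a distributed transaction.
# steps: list of (action, compensation) pairs, one pair per service-local step.

def run_distributed_transaction(steps):
    """Run each step; on failure, compensate completed steps in reverse order.

    Returns True if every step succeeded, False if a rollback occurred.
    """
    completed = []  # compensations for steps that have already succeeded
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            # A step failed: undo the earlier steps so the global view of
            # the data stays consistent across services.
            for undo in reversed(completed):
                undo()
            return False
    return True
```

In practice each action and compensation would be a call to a service (for example, "reserve inventory" paired with "release inventory"), and the coordinator would also need timeouts and retries.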
To handle transactions between various services in a microservice-based system, you need to create a global transaction coordinator. The transaction coordinator handles rollback, compensating actions, and other transaction patterns that are described in the next document in this series, Interservice communication in a microservices setup.

Data consistency

Distributed transactions introduce the challenge of maintaining data consistency across services. All updates must be done atomically. In a monolithic application, the properties of transactions guarantee that a query returns a consistent view of the database based on its isolation level. In contrast, consider a multistep transaction in a microservices-based architecture. If any one service transaction fails, data must be reconciled by rolling back the steps that succeeded in the other services. Otherwise, the global view of the application's data is inconsistent between services.

It can be challenging to determine when a step that implements eventual consistency has failed. For example, a step might not fail immediately, but instead could block or time out. Therefore, you might need to implement some kind of time-out mechanism. Also, caching or replicating data between services to reduce network latency can result in inconsistent data if the duplicated data is stale when the called service accesses it. The next document in the series, Interservice communication in a microservices setup, provides an example of a pattern to handle distributed transactions across microservices.

Design interservice communication

In a monolithic application, components (or application modules) invoke each other directly through function calls. In contrast, a microservices-based application consists of multiple services that interact with each other over the network. When you design interservice communication, first think about how services are expected to interact with each other. Service interactions can be one of the following:
Also consider whether the interaction is synchronous or asynchronous:
The following table shows combinations of interaction styles:
Each service typically uses a combination of these interaction styles.

Implement interservice communication

To implement interservice communication, you can choose from different IPC technologies. For example, services can use synchronous request-response communication mechanisms such as HTTP-based REST, gRPC, or Thrift. Alternatively, services can use asynchronous, message-based communication mechanisms such as AMQP or STOMP. You can also choose from various message formats. For example, services can use human-readable, text-based formats such as JSON or XML. Alternatively, services can use a binary format such as Avro or Protocol Buffers. Configuring services to directly call other services leads to high coupling between services. Instead, we recommend using messaging or event-based communication:
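To illustrate event-based communication, the following is a minimal in-memory sketch of a publish-subscribe event bus. A production system would use a message broker (for example, an AMQP broker) rather than an in-process object; the class, topic, and event names here are assumptions for illustration.

```python
# Minimal in-memory sketch of event-based interservice communication.
from collections import defaultdict

class EventBus:
    """Toy publish-subscribe bus standing in for a real message broker."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic: str, handler):
        # A service registers interest in a topic.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict):
        # The publisher does not know which services consume the event,
        # which keeps the services loosely coupled.
        for handler in self._subscribers[topic]:
            handler(event)
```

For example, an order service could publish an `order.created` event that a shipping service subscribes to; the order service never calls the shipping service directly.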
In a microservices application, we recommend using asynchronous interservice communication instead of synchronous communication. Request-response is a well-understood architectural pattern, so designing a synchronous API might feel more natural than designing an asynchronous system. Asynchronous communication between services can be implemented using messaging or event-driven communication. Using asynchronous communication provides the following advantages:
However, following are some challenges to using asynchronous messaging effectively:
The next document in the series, Interservice communication in a microservices setup, provides a reference implementation to address some of the challenges in the preceding list.

What's next