Conceptual Article

Best Practices for Rearchitecting Monolithic Applications to Microservices

Published on May 24, 2022

Senior DevOps Technical Writer

Serverless architecture allows backend web services to be implemented on an as-needed basis. Rather than maintaining your own server configuration, you can architect your software for serverless providers to minimize the overhead involved. Serverless applications are typically deployed from a Git repository into an environment that can scale up or down as needed.

Serverless deployments usually involve microservices. Using microservices is an approach to software architecture that structures an application as a collection of services that are loosely coupled, independently deployable, and independently maintainable and testable. Microservice architectures predate the widespread use of serverless deployments, but they are a natural fit together. Microservices can be used in any context that allows them to be deployed independently and managed by a central process or job server. Serverless implementations abstract away this central process management, leaving you to focus on your application logic.

This tutorial will review some best practices for rearchitecting monolithic applications to use microservices.

An Overview of Microservices

Rearchitecting, or refactoring, a monolithic application is often invisible to end users by design. If you plan to significantly rewrite your application logic while introducing no new features, your goal should be to avoid service disruptions to the greatest extent possible. This can entail using some form of blue-green deployment. When implementing microservices, it usually also entails replacing your application’s functionality on a step-by-step basis. This requires you to implement thorough unit tests to ensure that your application gracefully handles any unexpected edge cases. It also provides many opportunities to review your application logic and evaluate how to replace existing features with distinct microservices.

Microservices are equally well-supported by almost all major programming languages, and adopting a microservice-driven architecture can facilitate combining multiple different languages or frameworks within the same project. This allows you to adopt the best possible solution for each component of your stack, but can also change the way that you think about code maintenance.

Frameworks and State Management

Some architectures are a more natural fit for microservices than others. If your application logic contains multiple sequential steps that all depend on one another, it may not be a good idea to abstract each of them into individual microservices. In that case, you would need a sophisticated controller architecture that could handle and route any mid-stage errors. This is possible with a microservice architecture that uses a framework like Gearman to dispatch subprocesses, but it is less convenient when working with serverless deployments and can add complexity without necessarily solving problems.

Instead of delineating microservices between stages of the same input processing pipeline, you could delineate microservices between application state changes, or every time some output is returned to a user. This way, you do not need to pass the same data between public API calls as part of a single process. Handling your application state can be challenging with a microservice architecture, because each microservice will only have access to its own input, rather than to a globally defined scope. Wherever possible, you should create and pass similar data structures to each of your microservices, so that you can make reliable assumptions about the scope available to each of them.
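For example, you might standardize on a small input envelope that every microservice accepts, so each one can make the same assumptions about its scope. The following is a minimal sketch; the field names ("request_id", "payload", "metadata") are hypothetical, not part of any standard:

```python
def make_envelope(request_id, payload, metadata=None):
    """Wrap arbitrary input in a predictable structure shared by all microservices."""
    return {
        "request_id": request_id,
        "payload": payload,
        "metadata": metadata or {},
    }

def handle(envelope):
    # Every microservice can rely on these keys existing,
    # regardless of which service produced the envelope.
    return {
        "request_id": envelope["request_id"],
        "result": envelope["payload"].upper(),
    }
```

Because every service consumes the same shape of input, you can swap services in and out of a pipeline without renegotiating their interfaces.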

Consider creating and maintaining your own application libraries for core logic and functions that are likely to be used in multiple places, and then create microservices which join together unique combinations of this logic. Remember that microservices can scale to zero: there is no penalty for maintaining unused code paths. This way, you can create microservices which do not directly depend on other microservices, because they each include a complete, linear set of application logic, composed of function calls which you maintain in a separate repository.

Deploying from Git

When working with microservices, you should employ the principles of GitOps as much as possible. Treat Git repositories as a single source of truth for deployment purposes. Most language-specific package managers, such as pip for Python and npm for Node.js, provide syntax to deploy packages from your own Git repositories. This can be used in addition to the default functionality of installing from PyPI, or other upstream repositories. This way, you can gracefully combine your own in-development functions with third-party libraries without deviating from best practices around maintainability or reproducibility.
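As an illustration, pip lets you pin a Git-hosted library alongside ordinary PyPI dependencies in the same requirements.txt. The repository URL and package name below are hypothetical placeholders:

```
# requirements.txt — mixing PyPI and Git-hosted dependencies
requests==2.28.1
my-core-library @ git+https://github.com/example-org/my-core-library.git@main
```

Pinning to a branch, tag, or commit hash in the Git URL keeps deployments reproducible while the library is still under active development.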

API Endpoints

Each of your microservices can implement its own API, and depending on the complexity of your application, you can implement another API layer on top of that (and so on, and so on), and plan to only expose the highest-level API to your users. Although maintaining multiple different API routes can add complexity, this complexity can be resolved through good documentation of each of your individual microservices’ API endpoints. Communicating between processes using well-defined API calls, such as HTTP GET and POST, adds virtually no overhead and will make your microservices much more reusable than if they used more idiosyncratic interprocess communication.

Adopting microservices may naturally push you toward also adopting more Software-as-a-Service (SaaS) tooling as a drop-in replacement for various parts of your application stack. This is almost always good in principle. While you are under no obligation to replace your own function calls with third-party services, retaining the option to do so will keep your application logic more flexible and more contemporary.

Migrating to Microservices

Effectively migrating to microservices requires you to synthesize a number of best practices around software development and deployment.

Using CI/CD Principles

When rearchitecting an application to use microservices, you should follow the best practices for Continuous Integration and Continuous Delivery to incrementally replace features of your monolithic architecture. For example, you can use branching by abstraction — building an abstraction layer within an existing implementation so that a new implementation can be built out behind the abstraction in parallel — to refactor production code without any disruption to users. You can also use decorators, a language feature of TypeScript and Python, to add more code paths to existing functions. This way, you can progressively toggle or roll back functionality.


Microservices have become popular at the same time as containerization frameworks like Docker for good reason. They have similar goals and architectural assumptions:

  • Containers provide process and dependency isolation so that they can be deployed on an individual basis.

  • Containers allow other applications running in tandem with them to function as a “black box” — they don’t need to share state or any information other than input and output.

  • Container registries, such as Docker Hub, make it possible to publish and use your own dependencies interchangeably with third-party dependencies.

In theory, your microservices should be equally suited to running in a Docker container or a Kubernetes cluster as they are in a serverless deployment. In practice, there may be significant advantages to one or the other. Highly CPU-intensive microservices such as video processing may not be economical in serverless environments, whereas maintaining a Kubernetes control plane and configuration details requires a significant commitment. However, building with portability in mind is always a worthwhile investment. Depending on the complexity of your architecture, you may be able to support multiple environments merely by creating the relevant .yml metadata declarations and Dockerfiles. Prototyping for both Kubernetes and serverless environments can improve the overall resilience of your architecture.
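To illustrate how little container metadata a microservice may need, a single-module Python service might be packaged with a Dockerfile this short (the file names are placeholders):

```
# Minimal container image for a single-module Python microservice.
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY microservice.py .
CMD ["python", "microservice.py"]
```

The same module, unchanged, can also be deployed to a serverless platform; only the surrounding metadata differs.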

Generally speaking, you should not need to worry about database concurrency or other storage scaling issues inside of microservices themselves. Any relevant optimizations should be addressed and implemented directly by your database, your database abstraction layer, or your Database-as-a-Service (DBaaS) provider, so that your microservices can perform any create-read-update-delete (CRUD) operations without embellishment. Microservices must be able to concurrently query and update the same data sources, and your database backend should support these assumptions.


When making breaking, non-backwards-compatible updates to your microservices, you should provide new endpoints. For example, you might provide a /my/service/v2 endpoint in addition to a preexisting /my/service/v1 endpoint, and plan to gradually deprecate the /v1 endpoint. This is important because production microservices are likely to become useful and supported outside of their originally intended context. For this reason, many serverless providers will automatically version your URL endpoints to /v1 when deploying new functions.
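Using Flask (as in the migration example below), running both versions side by side might look like the following sketch; the routes and handler logic are hypothetical:

```python
from flask import Flask

app = Flask(__name__)

# The original endpoint, kept alive while existing clients migrate.
@app.route('/my/service/v1/<string:text>', methods=["GET"])
def my_service_v1(text):
    return {"result": text.upper()}

# The new, non-backwards-compatible endpoint.
@app.route('/my/service/v2/<string:text>', methods=["GET"])
def my_service_v2(text):
    return {"result": text.upper(), "version": 2}
```

Both versions share the same process here for simplicity; in a microservice deployment, /v1 and /v2 could just as easily be separate services behind one gateway.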

Microservice Migration Example

Implementing microservices in your application can replace nested function calls or private methods by promoting them to their own standalone service. Take this example of a Flask application, which performs a Google query based on a user’s input into a web form, then manipulates the result before returning it to the user:
from flask import Flask, render_template, Markup
from googleapiclient.discovery import build
# serve() is provided by the waitress WSGI server
from waitress import serve
from config import keys

app = Flask(__name__)

def google_query(query, api_key, cse_id, **kwargs):
    query_service = build("customsearch", "v1", developerKey=api_key)
    query_results = query_service.cse().list(q=query, cx=cse_id, **kwargs).execute()
    return query_results['items']

def manipulate_result(input, cli=False):
    search_results = google_query(input, keys["api_key"], keys["cse_id"])
    manipulated_text = ""
    for result in search_results:
        # The original manipulation logic is elided; as a placeholder,
        # concatenate each result's snippet.
        manipulated_text += result.get('snippet', '')
    return manipulated_text

@app.route('/<string:text>', methods=["GET"])
def get_url(text):
    manipulated_text = manipulate_result(text)
    return render_template('index.html', prefill=text, value=Markup(manipulated_text))

if __name__ == "__main__":
    serve(app, host='0.0.0.0', port=5000)

This application provides its own web endpoint, which includes an HTTP GET method. Providing a text string to that endpoint calls a function called manipulate_result(), which first sends the text to another function google_query(), then manipulates the text from the query results before returning it to the user.

This application could be refactored into two separate microservices, both of which take HTTP GET parameters as input arguments. The first would return Google query results based on some input, using the googleapiclient Python library:
from googleapiclient.discovery import build
from config import keys

def main(input_text, **kwargs):
    query_service = build("customsearch", "v1", developerKey=keys["api_key"])
    query_results = query_service.cse().list(q=input_text, cx=keys["cse_id"], **kwargs).execute()
    return query_results['items']

A second microservice would then manipulate and extract the relevant data to be returned to the user from those search results:
import requests

def main(search_string, standalone=True):
    if not standalone:
        # When chained, fetch search results from the first microservice over HTTP.
        search_results = requests.get('https://path/to/microservice_1/v1/' + search_string).text
    else:
        # When standalone, the input is assumed to already contain search results.
        search_results = search_string
    manipulated_text = ""
    for result in search_results:
        # The original manipulation logic is elided; as a placeholder,
        # concatenate each result.
        manipulated_text += str(result)
    return manipulated_text

In this example, the second microservice performs all of the input handling, and calls the first microservice directly via an HTTP GET request if an additional argument, standalone=False, has been provided. You could optionally create a separate, third function to join both microservices together, if you preferred to keep them entirely separate but still provide their full functionality with a single API call.

This is a straightforward example, and the original Flask code does not appear to present a significant maintenance burden, but there are still advantages to being able to remove Flask from your stack. If you no longer need to run your own web request handler, you could then return these results to a static site, using a Jamstack environment, rather than needing to maintain a Flask backend.


Conclusion

In this tutorial, you reviewed some best practices for migrating monolithic applications to microservices, and followed a brief example for decomposing a Flask application into two separate microservice endpoints.

Next, you may want to learn more about efficient monitoring of microservice architectures to better understand the optimization of serverless deployments. You may also want to understand how to write a serverless function.
