
Why MVP is Important
The main task of any MVP (Minimum Viable Product) is to bring a solution to market quickly, reach the widest possible audience, and attract customers. The key factor in launching an MVP is speed.
However, an MVP must be of high quality. When users visit the project site, they should not encounter a broken layout or a multitude of errors—this will drive them away permanently. The same applies to internal systems. A user must be able to seamlessly complete all key actions:
- Register;
- Log in;
- Browse listings or find a specialist;
- See CRM integration;
- Receive SMS or push notifications, etc.
All major user steps must work without bugs. This is especially true for payment processing. Any bug in an MVP is a significant problem, so bugs should be kept to a minimum. We can overlook minor rough edges in the process or occasional manual actions, but everything else must function correctly.
Therefore, the MVP should contain a minimal set of viable features. You quickly launch a quality product with limited functionality, which allows the business to test a theory or capture market share without the burnout of working 120-hour weeks.
Challenges Teams Face
- Vague Requirements at the Start
At the beginning, the client often doesn’t fully understand how the project will look—not in terms of design, but in terms of business logic. Questions arise such as how payment flows will work, which sections are needed, and what in the initial scope is actually important.
- Almost always, nuances get lost at the start, and the final scope expands
The business itself evolves, and initial ideas often don’t survive until the end. They develop and change in priority. New important features might arise that are crucial to launch. Conversely, a feature the team worked hard on might suddenly become irrelevant.
- Mistakes in Implementing Integrations or Object Models
It’s crucial to understand that the decisions made at the start will stay with the project for a long time. Changing the object model is difficult because you don’t have time to rewrite all the services tied to it. The same goes for architecture—if you make a mistake, you’ll likely have to live with it until the end of the project.
- Hidden Challenges
There can be many non-obvious issues—such as network infrastructure problems, database policies that don’t align with the company’s requirements, server limitations, and more. For example, on a project in Kazakhstan, we were surprised to discover that the local cloud provider didn’t have Kubernetes. We had to test and set it up together with the team, which took two weeks. Due to these issues, the initial architecture may suffer.
- Architecture as a Key Factor
Architecture is perhaps the most crucial aspect of your project. How you design the system, which databases you choose, who will act as the broker, how services will communicate, and which modules will be used are all critical decisions. A mistake at this stage multiplies the hours required for debugging and refactoring.
- Any Mistake in the CORE is Expensive
By CORE, I refer to the modules or services of your system, such as MDM (Master Data Management), IDM (Identity Management), and other core packages that integrate with all services in the system.
The issues above may seem like a problem for managers, team leads, or architects, but it’s at this stage that the foundation for future problems is laid. Understanding these circumstances is vital for developers as well.
Main Mistakes in MVP Development
- Incorrect Scope Management
A common mistake in MVP development is not properly defining the scope of work.
For example, early in one of our projects, we agreed with the client that a review system wasn’t essential at the MVP stage due to expected low traffic, so we postponed it. We agreed on a simple form for rating sessions and writing comments.
However, a second mistake occurred.
- Failing to Lock the Scope
The review service grew before the MVP release. We ended up adding multiple feedback options, each with its own set of checkboxes. Instead of focusing on the core MVP features, we implemented this feature, which wasn’t a priority, and even expanded it further.
It may seem like a small change, but small tasks like this can quickly add up. A developer might spend two hours on it, a tester half an hour testing it, front-end developers might need an hour or more, and the team lead will validate it in about the same amount of time. What seemed like a 30-minute task ends up taking 4-6 hours.
When you multiply that by 30 such tasks, it can lead to weeks of unnecessary work and stress.
- Incorrect Architecture Choice
Choosing the right architecture is a critical step. Sometimes teams spend too little or too much time on this aspect, which can result in poor performance or scalability issues down the line.
- Spending Too Much or Too Little Time on the Technical Specification (TDS)
Sometimes, a very detailed TDS is written for simple functionality, which doesn’t really help the development process but consumes a lot of time. On the other hand, sometimes no TDS is created for functionality that affects other services’ logic, leading to data inconsistencies.
Developers can influence the items on this list, and with them the final scope and its quality. Every developer is a highly skilled professional whose opinion should be considered. Don’t hesitate to suggest to the team lead that some services can be removed from the MVP because they aren’t critical, or that certain features can be simplified. You can always come back and refactor them after the MVP is released.
The key is to deliver quality and speed—everything else can be adjusted later.
Packages and Utilities
In this article, I mainly focus on microservices, because a monolithic MVP is easier to develop: with limited functionality, a monolith is simpler to implement. However, the market trend is moving toward microservices, which is a logical progression, because scaling and evolving monolithic systems is much harder.
For an MVP, your main strategy should focus on properly dividing the logic and creating a modular monolith, which can later be easily split into services. But even when dealing with microservices, you’ll face the same issues we’re going to discuss.
When to Choose Microservices in MVP
It’s important to understand that microservices are always more expensive to develop than a monolith. This is because they require numerous utilities for implementation: communication between services, standardization of their operation, Single Sign-On (SSO) management, logging, API standardization, traffic distribution, and monitoring.
The second challenge is the more complex infrastructure. For example, a monolith can run as a plain systemd service managed with systemctl, while microservices are more complicated: at the very least, you need more time for debugging, system setup, and testing.
SSO and Internal Requests in Microservices
Let’s consider two services: a payment gateway service and a shopping cart service. These services need to communicate with each other. When a user submits their cart for payment, the frontend sends a payment request to the payment gateway. The payment gateway needs to verify the cart’s correctness—like checking if the cost is accurate, etc. This requires an internal network request from the payment gateway to the cart service. The cart service responds confirming everything is correct, and the payment is processed.
However, there’s a problem: we need to make an internal network request, and we need to address two factors:
Who is making the request from the payment gateway to the cart service? Is it on behalf of the user or the service itself?
Where is the other service located? We need to know the address of the cart service.
To solve this problem, we can proxy the user’s token or create a token for the service itself. The latter will always be required, regardless of whether we’re making a request on behalf of the user or the service.
How Do We Approach This?
To do this, we need to create a client that retrieves the user’s token, passes it to the new request, and forwards it to the other service. At the same time, we need to know where the cart service is located. We can’t hard-code the address, as it might break the system’s logic when network configurations change.
Thus, our package needs to solve three key problems:
Token Proxying: The ability to pass tokens between services.
Service Communication: Ensuring communication between services.
Encapsulation of Service Addresses: Avoid hardcoding service addresses.
To configure this properly, we’ll need to obtain the addresses of neighboring services from our DevOps team. Environment variables work great for this, as they can be injected into the container during build time. Inside the code, we can create a configuration class and store these environment variables. To serialize this data, we’ll use Pydantic models.
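For illustration, here is a minimal sketch of such a configuration class, assuming the pydantic-settings package (in Pydantic v1 the equivalent class is pydantic.BaseSettings); the service names and variables are hypothetical:

```python
# A minimal sketch: neighboring-service addresses and service credentials are
# read from environment variables (CART_SERVICE_URL, SSO_SERVICE_URL, etc.);
# the names here are hypothetical.
from pydantic_settings import BaseSettings


class ServiceSettings(BaseSettings):
    # Base URLs of neighboring services, injected by DevOps as env variables
    cart_service_url: str = "http://cart-service:8000"
    sso_service_url: str = "http://sso-service:8000"

    # Service-account credentials for service-to-service authentication
    s2s_login: str = ""
    s2s_password: str = ""


settings = ServiceSettings()
```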
Proxying the User’s Token
Proxying the user’s token is fairly simple. We extract the token from the incoming request and pass it along to the next request. But what happens when we need to make an inter-service request during a background task without having the user’s token? This is more complex.
One option is to configure internal Kubernetes networks that are only accessible between containers, but this might be restricted by the security team. In such cases, we opted to implement this mechanism as follows:
Implementing the Solution
We created a client for communicating with services like SSO (Single Sign-On). This client extends a base S2SClient, which requires parameters like the service’s login and password, as well as other configurations like logging, timeouts, etc.
For performance optimization, we decided to store the token directly in memory and work with it from there. If the token expires or isn’t available in memory, the client re-authenticates with the SSO service and retrieves a new token. Additionally, we provide an option to cache the token instead of storing it in memory.
Inside the S2SClient, we encapsulate the logic for obtaining the token from SSO, refreshing the token when it expires, retrying in case the service doesn’t respond, and logging all requests—both successful and failed.
Streamlining Inter-Service Communication
After this, all we need to do is create a new client for communication with the neighboring service, inheriting from the S2SClient. This allows us to extend the logic if needed. In the end, creating a unified client for communication with any service requires much less time, the rules of communication remain consistent, errors are minimized, and working with the system becomes easier.
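The real S2SClient is internal to our projects, so the snippet below is only a hypothetical sketch of the idea: a base client that authenticates with its own credentials, keeps the token in memory, re-authenticates on expiry, proxies a user token when one is available, and is extended by a per-service client. The SSO endpoint path, httpx as the HTTP library, and the CartClient name are assumptions, and the retry and logging logic described above is omitted for brevity.

```python
import time

import httpx


class S2SClient:
    """Base client: authenticates as the service via SSO and keeps the token in memory."""

    def __init__(self, base_url: str, sso_url: str, login: str, password: str,
                 timeout: float = 5.0):
        self.base_url = base_url
        self.sso_url = sso_url
        self.login = login
        self.password = password
        self.timeout = timeout
        self._token: str | None = None
        self._expires_at: float = 0.0

    def _service_token(self) -> str:
        # Re-authenticate with SSO only when there is no token or it has expired
        if self._token is None or time.time() >= self._expires_at:
            resp = httpx.post(
                f"{self.sso_url}/auth/token",  # hypothetical SSO endpoint
                json={"login": self.login, "password": self.password},
                timeout=self.timeout,
            )
            resp.raise_for_status()
            data = resp.json()
            self._token = data["access_token"]
            self._expires_at = time.time() + data.get("expires_in", 300)
        return self._token

    def request(self, method: str, path: str,
                user_token: str | None = None, **kwargs) -> httpx.Response:
        # Proxy the user's token when we have one; otherwise act as the service
        token = user_token or self._service_token()
        headers = kwargs.pop("headers", {})
        headers["Authorization"] = f"Bearer {token}"
        return httpx.request(method, f"{self.base_url}{path}",
                             headers=headers, timeout=self.timeout, **kwargs)


class CartClient(S2SClient):
    """Client for the cart service, extending the base S2S logic."""

    def validate_cart(self, cart_id: str, user_token: str | None = None) -> dict:
        resp = self.request("GET", f"/carts/{cart_id}/validate", user_token=user_token)
        resp.raise_for_status()
        return resp.json()
```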
Key Takeaways
SSO integration: Implementing SSO-based communication for inter-service requests is essential for ensuring security and proper authentication.
Token management: Storing and managing tokens efficiently (in memory, cache, or re-authentication) is crucial for smooth communication between services.
Unified client: By extending a base client like S2SClient, we create a reusable, consistent, and easy-to-maintain solution for inter-service communication.
Logging in Microservices: A Key to Effective Troubleshooting
In a microservices architecture, logging becomes even more crucial because it enables developers to track requests and identify errors across multiple services. Unlike monolithic systems where debugging is more straightforward, microservices involve inter-service communication, making it harder to pinpoint exactly where issues occur.
The main question arises: What happens when something goes wrong? The answer lies in logging.
Logging allows you to understand what went wrong, where it happened, and with which data. It’s a fundamental tool for troubleshooting, monitoring, and improving your system. In microservices, we often need to track the following:
Which services did the request pass through?
How many requests were made to neighboring services?
How long did each request take to execute?
Without a good logging setup, it's challenging to figure out where the most time was spent during execution, especially when the request is routed through several services.
Key Principles of Effective Logging
Here are the key principles of logging in microservices:
Centralized Log Storage:
Logs should be stored in a centralized logging system that can pull logs from the standard output (stdout), such as ELK (Elasticsearch, Logstash, Kibana), Graylog, or similar systems. These systems index logs for easy searching and analysis.
Standardized Log Format:
For effective log search and analysis, all services must log in the same format. Mixing XML, JSON, and plain text formats can complicate indexing and searching. JSON is the ideal format because it’s machine-readable, easy to parse, and works well with modern logging systems.
Structured Logging:
Each log entry should contain not only the event description but also metadata (e.g., request parameters, response results, service name, timestamps, etc.). This helps make logs more informative and useful for debugging.
Avoid simple messages like:
"Start integration. Time %%%%"
"Integration finished. Time %%%%"
"Integration error: %errors"
Instead, include key details:
"Integration started. Request parameters: { ... }"
"Integration finished. Response: { ... }"
"Error during integration. Parameters: { ... }, Error: { ... }"
Log Levels:
Logs should be categorized by their severity level: Info, Error, Debug, etc. This ensures that you can easily filter logs based on their importance and focus on critical issues when necessary.
Debug logs are invaluable in tracking the flow of requests and understanding system behavior during development.
Metadata-Enriched Logs:
In microservices, it’s not enough to just log the request and the response. You need to capture all the parameters of the request, the service it was sent to, and the response received. Additionally, any errors should be logged with sufficient context, including the data used in the request and response.
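As a hypothetical illustration of these principles, here is a minimal structured-logging setup using only the standard library; the service name and metadata fields are examples, not our actual format:

```python
# A minimal sketch of structured JSON logging: every record goes to stdout as
# one JSON object carrying metadata alongside the message.
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "service": "payment-gateway",  # hypothetical service name
            "message": record.getMessage(),
        }
        # Attach any extra metadata passed via `extra={"meta": {...}}`
        entry.update(getattr(record, "meta", {}))
        return json.dumps(entry, ensure_ascii=False)


handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("payment-gateway")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Log the request parameters and the result, not just "started/finished"
logger.info(
    "Integration finished",
    extra={"meta": {"endpoint": "/carts/42/validate", "status": 200,
                    "duration_ms": 123}},
)
```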
Example of a Logging System in Action
In our microservices architecture, we implemented a centralized logging system in which every inter-service communication logs important details.
For example, the S2SClient handles communication between services. When initializing it, we create various clients for service communication (e.g., for SSO or other services). These clients are inherited from a BaseClient that takes care of the common behavior of requests, such as SSL certificates, retry policies, and timeouts.
The BaseClient also handles logging. Every time a request is made, we log the following details:
Request parameters
Service endpoint being called
Response data
Errors, if any, with detailed context
These logs are sent to stdout and then stored in the centralized log management system (like ELK). They are invaluable when debugging and monitoring the system in production.
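The real BaseClient is project-specific; the sketch below only illustrates the idea of logging every outgoing request with its parameters, endpoint, status, and errors. httpx and the field names are assumptions, and the logger is expected to use a JSON formatter like the one shown earlier.

```python
import logging

import httpx

logger = logging.getLogger("s2s")


class BaseClient:
    """Hypothetical base HTTP client that logs every outgoing request."""

    def __init__(self, base_url: str, timeout: float = 5.0):
        self.base_url = base_url
        self.timeout = timeout

    def request(self, method: str, path: str, **kwargs) -> httpx.Response:
        url = f"{self.base_url}{path}"
        try:
            response = httpx.request(method, url, timeout=self.timeout, **kwargs)
            logger.info(
                "Outgoing request finished",
                extra={"meta": {"method": method, "endpoint": url,
                                "params": kwargs.get("json"),
                                "status": response.status_code}},
            )
            return response
        except httpx.HTTPError as exc:
            # Log failures with full context before re-raising
            logger.error(
                "Outgoing request failed",
                extra={"meta": {"method": method, "endpoint": url,
                                "params": kwargs.get("json"),
                                "error": str(exc)}},
            )
            raise
```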
The Benefits of This Logging Approach
Traceability: You can trace a request’s journey from one service to another and understand where it might have failed or where delays occurred.
Transparency: Logs provide transparency into the inter-service communication, making it easier to diagnose issues.
Efficient Troubleshooting: With well-structured, detailed logs, you can quickly identify the root cause of problems without having to manually inspect each service.
Automated Tests in Microservices: Principles and Approaches
Automated tests are an essential part of developing quality applications. They not only speed up the development process but also help maintain system stability amidst constant changes. At AGIMA, we encourage all our employees to write automated tests, and in most Python projects, we rely on them.
Using Pytest and Developing Our Own Test Framework
Pytest-Django is an excellent tool for testing in Django, offering convenient helpers for working with databases and migrations. However, frameworks like FastAPI, Flask, and Aiohttp don't dictate which testing tools to use, so we had to develop our own test framework.
We based our testing framework on the functionality provided by Pytest-Django, implementing key elements such as:
Creating a test database.
Applying migrations to ensure the schema is up-to-date.
Proper transaction handling for rolling back changes after each test.
Cloning the database for each worker when running tests in parallel with pytest-xdist.
JWT token generators with the necessary permissions and roles.
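As an example of the last item, here is a hypothetical sketch of a token-generator fixture, assuming PyJWT and a shared test signing key; the claim names and roles are illustrative:

```python
# A test-only JWT generator fixture; the secret, claims, and roles are
# hypothetical and exist purely for illustration.
import time

import jwt
import pytest

TEST_SECRET = "test-secret"  # test-only signing key


@pytest.fixture
def make_token():
    def _make(roles: list[str], permissions: list[str], ttl: int = 300) -> str:
        payload = {
            "sub": "test-user",
            "roles": roles,
            "permissions": permissions,
            "exp": int(time.time()) + ttl,
        }
        return jwt.encode(payload, TEST_SECRET, algorithm="HS256")

    return _make


def test_token_has_three_segments(make_token):
    token = make_token(roles=["manager"], permissions=["contracts:create"])
    assert isinstance(token, str) and token.count(".") == 2
```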
Testing in Microservice Architecture
For microservices, it is important to create automated tests that ensure the stability of each service while also testing interactions between them.
Key Principles for Writing Tests
Unit Tests:
These are necessary to check the key functions and classes of the system. They help ensure that changes to one service do not break others.
API Endpoint Tests:
Testing API endpoints allows us to test not only business logic but also the entire application stack, including routing and error handling. If possible, individual layers of the application can also be tested separately.
Deterministic Data:
When testing API endpoints, we make sure that all input data is deterministic, so that test results are always predictable.
Comparison with Reference Results:
We always compare the HTTP status and response text with the expected data. This helps us spot changes in the data format that could lead to errors.
Mocks:
We actively use mocks to simulate responses from external services, allowing us to test the interaction with these services without actually calling them.
Example of a Standard Test
Our standard tests involve loading the main fixtures, where all external requests are mocked. Then, a request for contract creation with specific data is simulated. We check the status of the response and compare the received data with the expected contract.
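Our actual fixtures and endpoints are project-specific, so the following is only a self-contained sketch of the pattern: deterministic input, a mocked external call, and a comparison of both the HTTP status and the response body against a reference. FastAPI, the /contracts endpoint, and fetch_customer are all hypothetical.

```python
# A hypothetical endpoint plus its test: the external call is mocked so the
# test never leaves the process, and the response is compared with a reference.
from unittest.mock import patch

from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()


def fetch_customer(customer_id: int) -> dict:
    # In a real service this would call a neighboring microservice.
    raise NotImplementedError


@app.post("/contracts")
def create_contract(payload: dict):
    customer = fetch_customer(payload["customer_id"])
    return {"customer_name": customer["name"], "amount": payload["amount"]}


def test_create_contract():
    client = TestClient(app)
    # Mock the external request and use deterministic input data
    with patch(f"{__name__}.fetch_customer", return_value={"name": "ACME"}):
        response = client.post("/contracts", json={"customer_id": 1, "amount": 100})
    assert response.status_code == 200
    assert response.json() == {"customer_name": "ACME", "amount": 100}
```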
Universal Tests for Microservices
We have also developed several universal tests that are used across different services:
Healthcheck endpoint tests.
Tests for checking endpoints with debug information.
Tests for endpoints that always "fail" to check integration with error monitoring systems (e.g., Sentry).
Migration tests, checking the correctness of applying and rolling back migrations step by step.
Permission tests for checking the correctness of access rights on endpoints.
Checking Authorization Configurations
One of the most critical aspects is verifying proper authorization and permission settings. To do this, we have developed modules that automatically check that each endpoint has the correct protection against unauthorized access:
JWT token checks: testing the absence of a token, incorrect header format, or expired/invalid token.
Permission checks: testing that endpoints do not allow requests without the necessary rights.
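For illustration, a hypothetical parametrized version of such a check might look like this; the client fixture, endpoint path, and expected status codes are assumptions about the service under test:

```python
# Hypothetical sketch: hit the same protected endpoint with a missing token,
# a malformed header, and an expired/invalid token; each must be rejected.
import pytest

UNAUTHORIZED_CASES = [
    ({}, 401),                                              # no token at all
    ({"Authorization": "Token abc"}, 401),                  # wrong header format
    ({"Authorization": "Bearer expired.or.invalid"}, 401),  # expired/invalid JWT
]


@pytest.mark.parametrize("headers,expected_status", UNAUTHORIZED_CASES)
def test_protected_endpoint_rejects_bad_tokens(client, headers, expected_status):
    # `client` is assumed to be a test-client fixture for the service under test
    response = client.get("/api/v1/contracts", headers=headers)
    assert response.status_code == expected_status
```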
Recommendations for MVP Development
It's impossible to cover all the challenges that arise when developing an MVP (Minimum Viable Product) in a single article. However, here are some key areas that deserve special attention:
- Security Audits (Information Security)
Information security issues are always critical. It's crucial to continuously scan the code to prevent critical vulnerabilities from appearing. Regular security checks and updates should be integral to your development process.
- Load Testing
Load testing is essential. It's difficult to predict in advance what level of load your system can handle. Problems often arise when the system exceeds a certain Requests Per Second (RPS) threshold. It’s best to stress-test the system under real-world conditions to identify potential weak points.
- S3 Storage Management
In monolithic architectures, file management can be done within a single package, where file volume, extensions, executable file checks, and the limit on the number of uploads are typically monitored. In microservices, this task shifts to a separate microservice responsible for file management. You'll need to aggregate all logic related to file handling within this microservice, which introduces additional overhead. You’ll also need to manage metadata, such as knowing which service uploaded a file and checking whether the file has expired. Furthermore, direct uploads to S3 require an asynchronous procedure to synchronize data, notifying the microservice about the file's location, identifier, and validity.
- Auto-generated Documentation
Always use auto-generated documentation to save time for both frontend and mobile developers. FastAPI offers auto-generation out of the box, while Django projects can use drf-yasg for this purpose. Ready-made documentation saves a huge amount of time.
- Emphasize Typing
Type hints are a great way to ensure you're using packages and classes correctly, avoiding silly mistakes. This is especially true in languages like Python, where dynamic typing can lead to subtle bugs. Strong type-checking and annotations improve code quality and maintainability.
- Write Automated Tests
Where there’s a lot of communication between services, automated tests are indispensable. They serve as protection against unnecessary bugs and are a great development tool. This is particularly true in a microservice architecture, where integration testing is crucial for ensuring seamless communication.
- Ask for Help from Colleagues When Stuck
Don't hesitate to ask for help if you're stuck on a problem. It's not something to be ashamed of; in fact, it's essential to avoid missing deadlines or getting caught up in self-criticism. Brainstorming sessions with colleagues are an excellent practice to leverage collective expertise and solve problems faster.
- Set Up Sentry
Sentry is a simple yet powerful tool that can easily be set up in stand-alone mode. It's highly configurable and can be integrated with any framework. You can implement it in your project in no more than 30 minutes, and it will help monitor errors, improve debugging, and optimize performance. A minimal setup sketch is shown after this list.
- Track Library Versions
In our projects, we use Poetry by default, but the concept applies to all dependency managers. Make sure to constrain the version of every library you use. This is important to avoid conflicts between libraries, especially when using popular dependencies. Relying on only a minimum version can lead to issues if incompatible newer versions are installed. Always be specific about versions and dependencies to avoid package conflicts.
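As mentioned in the Sentry item above, a minimal setup sketch (assuming the sentry-sdk package and a placeholder DSN) could be:

```python
# Minimal Sentry wiring: call init() once at application startup; for FastAPI
# or Django the SDK enables the relevant integrations automatically.
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    environment="mvp-staging",   # hypothetical environment name
    traces_sample_rate=0.1,      # sample 10% of transactions for performance data
)
```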
By focusing on these key areas, you can improve the efficiency, security, and stability of your MVP, ensuring it is robust enough to handle the challenges of real-world usage.