Overhauling a Food Delivery System with Modular Monolith Architecture
Introduction
For the past month, I’ve been deeply immersed in the task of overhauling a food delivery system. My goal? Not only to revamp its existing functionalities but also to equip it with the potential to evolve into a bustling marketplace.
This endeavor unfolds within the realm of a modular monolith architecture, carefully crafted using .NET Core.
Key Architectural Decisions
1. Database Schema Separation
One of the pivotal decisions I’ve made is to implement physical separation at the database schema level. This enables the possibility of seamless future migration across multiple databases, giving us flexibility as the system grows.
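With EF Core, for example, each module’s DbContext can pin its tables to a module-owned schema. A minimal sketch of the idea (the OrderingDbContext and Order names are illustrative, not from the actual system):

using Microsoft.EntityFrameworkCore;

public class Order { public long Id { get; set; } }

public class OrderingDbContext : DbContext
{
    public OrderingDbContext(DbContextOptions<OrderingDbContext> options) : base(options) { }

    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Every table in this context lives under the "ordering" schema,
        // so the module can later be lifted into its own database.
        modelBuilder.HasDefaultSchema("ordering");
    }
}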
2. Using MediatR for Communication
To ensure efficient communication between modules and promote loose coupling, I am leveraging MediatR.
MediatR and Its Role
MediatR, a renowned .NET library, has been instrumental in orchestrating communication within the system. By embracing the mediator pattern in C#, MediatR facilitates the exchange of messages between components while maintaining a clear separation between senders and receivers.
The elegance of MediatR lies in its simplicity:
- Requests are encapsulated as classes.
- Handlers are defined as methods.
This provides a structured and organized approach to code management. The paradigm shift toward modularity not only enhances maintainability but also lays the foundation for a robust and scalable architecture.
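A minimal sketch of that request/handler shape (hypothetical names, not taken from the actual system):

using MediatR;

// The request is just a class (here a record) carrying its input data.
public record PingRequest(string Message) : IRequest<string>;

// The handler lives apart from the sender; MediatR routes between them.
public class PingHandler : IRequestHandler<PingRequest, string>
{
    public Task<string> Handle(PingRequest request, CancellationToken cancellationToken)
        => Task.FromResult($"Pong: {request.Message}");
}

The caller only sees mediator.Send(new PingRequest("hello")) and never references PingHandler directly.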
3. Command Query Separation (CQS) Strategy
In tandem with MediatR, I’ve embraced the Command Query Separation (CQS) strategy.
- Commands → Trigger actions.
- Queries → Retrieve data.
By adhering to this principle, the system fosters clarity and simplicity, making it easier to reason about, test, and maintain.
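Expressed in MediatR terms (again with illustrative names), the split looks like this:

using System;
using System.Collections.Generic;
using MediatR;

// Command: triggers an action; it changes state and returns nothing beyond acknowledgement.
public record PlaceOrderCommand(Guid CustomerId, IReadOnlyList<long> ItemIds) : IRequest;

// Query: retrieves data and must not mutate state.
public record GetDeliveryEtaQuery(long OrderId) : IRequest<TimeSpan>;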
4. Communication Between Modules
To facilitate interaction between business modules, I’ve established a set of public packages.
These packages act as the primary means of communication between different system components. This approach:
- Promotes reusability.
- Ensures a cohesive and organized architecture.
- Lays the groundwork for future scalability, should we choose to transition into microservices or micro-monoliths.
Example: Business Module Contracts
This is how I structured the solution: each business module publishes its contracts as a package, and other modules reference only that package.
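A hypothetical contract from such a package might look like this (the Ordering names are my illustration):

using System;
using System.Threading;
using System.Threading.Tasks;

namespace Ordering.Contracts;

// A cross-module message: plain data, no Ordering internals leak out.
public record OrderSummary(Guid OrderId, Guid CustomerId, decimal Total);

// The only surface other modules may depend on.
public interface IOrderingApi
{
    Task<OrderSummary?> GetOrderAsync(Guid orderId, CancellationToken cancellationToken);
}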
Choosing the Right Approach
When deciding how to send data into the reporting schema, several options were evaluated. After careful analysis, we selected a post-processor mechanism as the most suitable approach.
Why Post-Processor?
- Data Transformation Requirements
- Reporting data is not just raw — it often requires transformations or processing tailored to reporting needs.
- With a post-processor, we can apply these transformations before dispatching data, ensuring accuracy and consistency.
- Decoupling from Domain Event Interceptors
- We wanted to separate reporting from domain event handling (e.g., persistence or business logic).
- A post-processor ensures autonomy — reporting operates independently and does not interfere with the normal flow of data processing.
- Flexibility & Compatibility
- The post-processor integrates seamlessly into our modular monolith + MediatR-based communication framework.
- It avoids heavy architectural changes while maintaining stability and backward compatibility.
In short: the post-processor gave us the right balance of data transformation, decoupling, and smooth integration.
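MediatR already exposes a post-processing hook, IRequestPostProcessor, which is one natural way to wire this up. The sketch below forwards each handled request into the reporting pipeline; the generic post-processor itself is my illustration, not the original code (the CreateOrUpdateEvent notification appears in the implementation section below):

using MediatR;
using MediatR.Pipeline;

// Runs after a request's handler has produced its response,
// keeping reporting concerns out of the domain handler itself.
public class ReportingPostProcessor<TRequest, TResponse> : IRequestPostProcessor<TRequest, TResponse>
    where TRequest : notnull
{
    private readonly IPublisher _publisher;

    public ReportingPostProcessor(IPublisher publisher) => _publisher = publisher;

    public async Task Process(TRequest request, TResponse response, CancellationToken cancellationToken)
    {
        // Transform the handled result into a reporting event before dispatching it.
        await _publisher.Publish(new CreateOrUpdateEvent
        {
            EventName = typeof(TRequest).FullName,
            AssemblyName = typeof(TRequest).Assembly.GetName().Name,
            ObjectValue = response
        }, cancellationToken);
    }
}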
Challenge: Synchronous Processing
Even with async/await, the MediatR pipeline remained synchronous in effect: Publish waits for every notification handler to complete before the request can continue.
This became a performance bottleneck, especially with large volumes of reporting data.
Introducing Asynchronous Processing
To overcome this, I designed an asynchronous processing scheme.
Key Components
- Event Storage
- A dedicated table stores all reporting-related async events.
- This table resides in its own schema or database, ensuring modularity and flexibility.
- Background Task
- A robust background worker fetches and processes stored events asynchronously.
- It runs on a recurring schedule, ensuring continuous event handling.
- Processing Logic
The background task follows a structured sequence:
- Attempt to send the async MediatR event for processing.
- Mark the event as chosen for processing.
- On failure: log the error and mark the event accordingly.
- Implement retry logic for transient failures.
- On success: mark the event as processed (or erase it) to keep the staging table clean.
Benefits
- Resilience → Retries & error logging ensure robustness.
- Scalability → Decoupled processing supports large reporting workloads.
- Flexibility → Works seamlessly with existing architecture without major refactoring.
- Independence → Reporting doesn’t block or slow down domain workflows.
In summary:
By adopting a post-processor + asynchronous background processing strategy, we were able to decouple reporting, handle transformed data efficiently, and eliminate synchronous bottlenecks. This design gives us a scalable, resilient, and modular reporting infrastructure.

Asynchronous Event Processing for Reporting in a Modular Monolith
Introduction
To facilitate asynchronous processing of events for reporting purposes, I’ve designed an event entity, notification system, and background processing service. This design ensures that reporting data is efficiently stored, transformed, and processed without blocking core business workflows.
1. Event Entity
The Event entity represents an event stored in the database for later processing:
public class Event
{
    public long Id { get; init; }                              // Unique identifier
    public Guid IdentityCode { get; set; } = Guid.NewGuid();   // Unique identity code
    public string? EventName { get; set; }                     // Event type name
    public string? AssemblyName { get; set; }                  // Assembly where the event type is defined
    public string? ObjectValue { get; set; }                   // Serialized object payload
    public string? Error { get; set; }                         // Error message on failure
    public byte RetryCount { get; set; }                       // Retry attempt counter
    public bool Processing { get; set; }                       // Processing flag
    public DateTime CreatedAt { get; set; } = DateTime.UtcNow; // Creation timestamp
    public DateTime? UpdatedAt { get; set; }                   // Last update timestamp
}
This table serves as the staging area for asynchronous reporting events.
2. Event Notification
Events are published into the system using a notification object:
public class CreateOrUpdateEvent : INotification
{
    public Guid IdentityCode { get; set; } = Guid.NewGuid();
    public string? EventName { get; set; }
    public string? AssemblyName { get; set; }
    public object? ObjectValue { get; set; }
}
This decouples the reporting mechanism from the domain logic while leveraging MediatR.
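Any module can then push a reporting event with a single Publish call. A sketch under assumed names (OrderPlacedNotification is illustrative, not from the original code):

using System;
using MediatR;

public record OrderPlacedNotification(Guid OrderId, decimal Total) : INotification;

// Forwards a domain notification into the reporting event store.
public class OrderPlacedReportingHandler : INotificationHandler<OrderPlacedNotification>
{
    private readonly IMediator _mediator;

    public OrderPlacedReportingHandler(IMediator mediator) => _mediator = mediator;

    public Task Handle(OrderPlacedNotification notification, CancellationToken cancellationToken)
        => _mediator.Publish(new CreateOrUpdateEvent
        {
            EventName = typeof(OrderPlacedNotification).FullName,
            AssemblyName = typeof(OrderPlacedNotification).Assembly.GetName().Name,
            ObjectValue = notification
        }, cancellationToken);
}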
3. Event Notification Handler
The CreateOrUpdateEventHandler is responsible for saving or updating events in the database:
using MediatR;
using Microsoft.EntityFrameworkCore;
using Newtonsoft.Json;

public class CreateOrUpdateEventHandler : INotificationHandler<CreateOrUpdateEvent>
{
    private readonly EventsDbContext _eventDbContext;

    public CreateOrUpdateEventHandler(EventsDbContext eventDbContext)
    {
        _eventDbContext = eventDbContext;
    }

    public async Task Handle(CreateOrUpdateEvent notification, CancellationToken cancellationToken)
    {
        var eventEntity = await _eventDbContext.Events
            .FirstOrDefaultAsync(x => x.IdentityCode == notification.IdentityCode, cancellationToken);

        // Serialize the payload, ignoring reference loops in the object graph.
        string eventData = JsonConvert.SerializeObject(notification.ObjectValue, Formatting.None,
            new JsonSerializerSettings { ReferenceLoopHandling = ReferenceLoopHandling.Ignore });

        if (eventEntity is null)
        {
            eventEntity = new Event
            {
                IdentityCode = notification.IdentityCode, // keep the caller's identity so later updates find this row
                EventName = notification.EventName,
                ObjectValue = eventData,
                AssemblyName = notification.AssemblyName,
                RetryCount = 0,
            };
            await _eventDbContext.Events.AddAsync(eventEntity, cancellationToken);
        }
        else
        {
            eventEntity.ObjectValue = eventData;
            eventEntity.AssemblyName = notification.AssemblyName;
            _eventDbContext.Events.Update(eventEntity);
        }

        try
        {
            await _eventDbContext.SaveChangesAsync(cancellationToken);
        }
        catch (Exception ex)
        {
            // Swallow persistence failures so publishing never breaks the caller;
            // a structured logger would be a better fit than Console in production.
            Console.WriteLine(ex.Message);
        }
    }
}
This ensures all reporting events are persisted reliably in the database.
4. Background Processing
The EventsHostedService continuously processes stored events in the background:
- Runs independently of request/response flows.
- Uses a semaphore lock to avoid concurrent processing.
- Retrieves batches of events, processes them, retries failures, and deletes successful ones.
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class EventsHostedService : IHostedService, IDisposable
{
    private readonly IServiceScopeFactory _scopeFactory;
    private readonly CancellationTokenSource _cancellationTokenSource;
    private readonly SemaphoreSlim _lock = new SemaphoreSlim(1, 1);

    public EventsHostedService(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
        _cancellationTokenSource = new CancellationTokenSource();
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Fire-and-forget loop; the service reports itself started immediately.
        Task.Run(async () => await DoWorkLoopAsync(), cancellationToken);
        return Task.CompletedTask;
    }

    private async Task DoWorkLoopAsync()
    {
        while (!_cancellationTokenSource.Token.IsCancellationRequested)
        {
            await DoWorkAsync();
            try
            {
                await Task.Delay(TimeSpan.FromMilliseconds(100), _cancellationTokenSource.Token);
            }
            catch (TaskCanceledException)
            {
                // Shutdown requested; leave the loop.
            }
        }
    }

    private async Task DoWorkAsync()
    {
        // The semaphore guarantees a single processing pass at a time.
        await _lock.WaitAsync();
        try
        {
            using var scope = _scopeFactory.CreateScope();
            var jobService = scope.ServiceProvider.GetRequiredService<IEventProcessor>();
            await jobService.ProcessEvents(_cancellationTokenSource.Token);
        }
        finally
        {
            _lock.Release();
        }
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        _cancellationTokenSource.Cancel();
        return Task.CompletedTask;
    }

    public void Dispose()
    {
        _cancellationTokenSource.Dispose();
        _lock.Dispose();
    }
}
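Wiring everything together at startup could look like this (a sketch using MediatR 12-style registration; the SQL Server provider and connection-string name are assumptions):

// Program.cs (minimal hosting model)
builder.Services.AddMediatR(cfg =>
    cfg.RegisterServicesFromAssembly(typeof(CreateOrUpdateEventHandler).Assembly));
builder.Services.AddDbContext<EventsDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Events")));
builder.Services.AddScoped<IEventProcessor, EventProcessor>();
builder.Services.AddHostedService<EventsHostedService>();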
5. Event Processing Logic
The EventProcessor executes the actual event handling:
- Retrieves events in batches of 10.
- Deserializes payloads using their stored type metadata (AssemblyName, EventName).
- Publishes them via MediatR for downstream handling.
- Retries up to 5 times before marking an event as failed.
- Cleans up successfully processed events.
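The processor itself isn’t reproduced above, but a sketch matching that behavior might look like this (my reconstruction, not the original implementation):

using MediatR;
using Microsoft.EntityFrameworkCore;
using Newtonsoft.Json;

public interface IEventProcessor
{
    Task ProcessEvents(CancellationToken cancellationToken);
}

public class EventProcessor : IEventProcessor
{
    private const int BatchSize = 10;
    private const int MaxRetries = 5;

    private readonly EventsDbContext _dbContext;
    private readonly IMediator _mediator;

    public EventProcessor(EventsDbContext dbContext, IMediator mediator)
    {
        _dbContext = dbContext;
        _mediator = mediator;
    }

    public async Task ProcessEvents(CancellationToken cancellationToken)
    {
        // Fetch the next batch of unprocessed events that still have retries left.
        var events = await _dbContext.Events
            .Where(e => !e.Processing && e.RetryCount < MaxRetries)
            .OrderBy(e => e.CreatedAt)
            .Take(BatchSize)
            .ToListAsync(cancellationToken);

        foreach (var evt in events)
        {
            try
            {
                evt.Processing = true; // mark as chosen for processing

                // Resolve the event type from its stored metadata and rebuild the payload.
                var type = Type.GetType($"{evt.EventName}, {evt.AssemblyName}")
                           ?? throw new InvalidOperationException($"Unknown event type {evt.EventName}");
                var payload = JsonConvert.DeserializeObject(evt.ObjectValue!, type);

                // Hand the reconstructed event to its downstream MediatR handlers.
                await _mediator.Publish(payload!, cancellationToken);

                // Success: remove the event to keep the staging table clean.
                _dbContext.Events.Remove(evt);
            }
            catch (Exception ex)
            {
                // Failure: record the error and schedule a retry.
                evt.Error = ex.Message;
                evt.RetryCount++;
                evt.Processing = false;
                evt.UpdatedAt = DateTime.UtcNow;
            }
        }

        await _dbContext.SaveChangesAsync(cancellationToken);
    }
}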
Conclusion
By implementing this asynchronous processing scheme, we’ve successfully addressed the challenge of managing and storing data for reporting within a modular monolith architecture.
This solution:
- Ensures efficient utilization of system resources by decoupling reporting from the main execution pipeline.
- Provides resilience and reliability through retries, error handling, and background processing.
- Establishes a flexible foundation that supports future scalability and adaptability as the system evolves.
With careful schema design and meticulous processing logic, we’ve created a robust infrastructure for seamless reporting and historical data management, ensuring that the architecture can grow and adapt to new business needs.
