Understanding Event Sourcing with a Digital Wallet Case Study
Tuesday, May 27, 2025 at 9:29:23 AM GMT+8
Event Sourcing is an architectural pattern where every change to an application's state is stored as an immutable event, rather than just storing the final state. This fundamentally changes how systems record, reconstruct, and interact with data over time.
In this blog, we’ll explore what Event Sourcing is, why it’s important, when to use it, its benefits and trade-offs, and then bring it all together with a concrete case study: a digital wallet application.
What is Event Sourcing?
Traditional systems tend to persist only the latest version of data. For example, in a user table, we store a user’s current name, email, and preferences. If a user changes their name three times, we typically only keep the last one, unless we explicitly log historical records elsewhere.
Event Sourcing flips this approach. Instead of persisting only the current state, we persist a log of all the events that led to that state. These events are stored in an append-only event store, forming a complete chronological sequence of what happened in the system.
Key Concepts:
- Event: A record of something that happened in the system (e.g., `UserRegistered`, `MoneyDeposited`).
- Event Store: A specialized database or log that stores events in the order they occurred.
- Rehydration / Replay: The process of rebuilding the current state of an entity by replaying all its events from the beginning.
This approach is inspired by transaction logs in databases and event logs in distributed systems (like Apache Kafka).
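To make the event-store concept concrete, here is a minimal in-memory sketch in JavaScript. The class and method names (`EventStore`, `append`, `readStream`) are illustrative assumptions, not the API of any real event-store product; a production store would add durability, concurrency control, and optimistic versioning.

```javascript
// Minimal append-only event store sketch. Events are only ever appended,
// never updated or deleted, and reads return them in insertion order.
class EventStore {
  constructor() {
    this.events = [];
  }

  // Append an event for a given stream (e.g., one wallet).
  append(streamId, event) {
    this.events.push({ streamId, seq: this.events.length, ...event });
  }

  // Read all events for a stream in the order they occurred, for replay.
  readStream(streamId) {
    return this.events.filter((e) => e.streamId === streamId);
  }
}
```

Note that `readStream` is all a consumer needs for rehydration: replaying the returned list from the beginning reconstructs the entity's current state.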
Why Use Event Sourcing?
1. Complete Audit Trail
Every change is recorded. You can see exactly what happened, when, and why.
2. Time Travel / Historical State
You can recreate the state of the system at any point in the past. This is incredibly useful for debugging or compliance.
3. Undo/Redo Capabilities
Because every change is an event, you can reverse or replay them easily, allowing powerful features like rollback or state simulation.
4. Decoupled Read and Write Models
Event Sourcing pairs naturally with CQRS (Command Query Responsibility Segregation), where write operations generate events and read models are optimized views of that data.
5. Scalability & Flexibility
You can project the same event stream into multiple read models tailored for different business needs.
When Should You Use Event Sourcing?
While Event Sourcing offers powerful benefits, it’s not for every application. You should consider using it when:
- The domain is complex and benefits from modeling behavior explicitly (e.g., finance, logistics, healthcare).
- You require a full audit trail or regulatory compliance.
- You want to support advanced features like event replay, analytics on history, or user activity tracing.
- You’re building a system that evolves over time, and want to decouple the logic for data capture from the logic for data usage.
Avoid Event Sourcing if:
- Your domain is very simple and doesn’t change frequently.
- Your team is not experienced with eventual consistency or distributed systems.
- You don’t need history and are optimizing for simplicity and rapid development.
How Event Sourcing Works: A Step-by-Step Breakdown
1. Command Handling: The user initiates an action (e.g., deposit money).
2. Command Validation: The system validates the command (e.g., the user exists and has sufficient permissions).
3. Event Creation: If the command is valid, an event (e.g., `MoneyDeposited`) is created.
4. Event Persistence: The event is saved to the event store, which is append-only and immutable.
5. Event Replay (Rehydration): To rebuild the current state of a user’s wallet, all related events are replayed in order.
6. Projection to Read Model: Events are consumed by handlers that update read models (e.g., current balance, transaction history).
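The steps above can be sketched end to end in a few lines. All names here (`handleDeposit`, `eventLog`, `balances`) are hypothetical, and error handling is reduced to a single validation check to keep the flow visible.

```javascript
// Sketch of the command → event → persistence → projection flow.
const eventLog = [];          // append-only event store
const balances = new Map();   // read model: current balance per user

// Command handling: validate the command, then create an event if valid.
function handleDeposit(userId, amount) {
  if (amount <= 0) throw new Error("Deposit must be positive");
  const event = { type: "MoneyDeposited", userId, amount };
  eventLog.push(event);       // event persistence (append-only)
  project(event);             // projection to the read model
  return event;
}

// Projection handler: updates the read model incrementally per event.
function project(event) {
  if (event.type === "MoneyDeposited") {
    balances.set(event.userId, (balances.get(event.userId) ?? 0) + event.amount);
  }
}
```

Notice that the write path only validates and appends; the read model is derived, which is what later allows it to be rebuilt or reshaped at will.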
Case Study: Digital Wallet System
Features:
- Users can register an account.
- Users can deposit and withdraw funds.
- Users can view their balance and transaction history.
Event Types:
{
  "type": "UserCreated",
  "userId": "1",
  "name": "Ari"
}
{
  "type": "MoneyDeposited",
  "userId": "1",
  "amount": 100
}
{
  "type": "MoneyWithdrawn",
  "userId": "1",
  "amount": 30
}
Each event is stored chronologically in the event store.
Rebuilding State:
// Replay the user's events in order to derive the current balance.
let balance = 0;
for (const event of userEvents) {
  if (event.type === "MoneyDeposited") balance += event.amount;
  if (event.type === "MoneyWithdrawn") balance -= event.amount;
}
Optimization: Read Model (/balance)
Replaying all events every time a user requests their balance is inefficient, especially when the number of events becomes large. To solve this, we build read models, which are materialized views or projections that are updated in response to events.
What is a Read Model?
A read model is a denormalized and query-optimized view of your data, often stored in a different storage engine (e.g., SQL, MongoDB, Redis).
For example:
- A /balance endpoint reads from a `balance_projection` table that is updated every time a `MoneyDeposited` or `MoneyWithdrawn` event occurs.
- A /transactions endpoint uses a `transaction_log` projection that simply appends new entries.
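The two projections above can be driven by a single event handler. This is a sketch with in-memory stand-ins: `balanceProjection` and `transactionLog` are assumed names for the `balance_projection` table and `transaction_log` projection; in practice each would live in its own storage engine.

```javascript
// One event stream feeding two read models.
const balanceProjection = new Map(); // userId -> current balance
const transactionLog = [];           // append-only history entries

function applyToReadModels(event) {
  if (event.type !== "MoneyDeposited" && event.type !== "MoneyWithdrawn") return;
  // Update the precomputed balance incrementally.
  const sign = event.type === "MoneyWithdrawn" ? -1 : 1;
  const prev = balanceProjection.get(event.userId) ?? 0;
  balanceProjection.set(event.userId, prev + sign * event.amount);
  // Append to the transaction history projection.
  transactionLog.push({ userId: event.userId, type: event.type, amount: event.amount });
}
```

The `/balance` endpoint then becomes a single key lookup, and `/transactions` a simple range read, regardless of how many events exist.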
Read Model Optimization Techniques:
1. Precompute Aggregates: For values like balance, update them incrementally as new events arrive, instead of recomputing from scratch.
2. Store Snapshots: Save intermediate state snapshots periodically to reduce the number of events needed during rehydration.
3. Async Event Handlers: Use background workers or message queues to process events and update read models asynchronously, ensuring fast writes and eventual consistency.
4. Polyglot Storage: Use the best storage type per read model: Redis for caching, PostgreSQL for reporting, Elasticsearch for search.
5. Sharding/Partitioning: Distribute projections across shards if the volume grows significantly.
These optimizations allow your event-sourced system to handle read-heavy workloads efficiently without sacrificing the advantages of a full event history.
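Technique 2 (snapshots) can be sketched as follows. The interval of 3 and the function names are illustrative; real systems typically snapshot every few hundred or thousand events and persist snapshots alongside the stream.

```javascript
// Snapshot sketch: persist derived state every N events so rehydration
// replays only the tail of the stream instead of the full history.
const SNAPSHOT_INTERVAL = 3;

function rehydrate(events, snapshot) {
  // Start from the latest snapshot if one exists, else from zero.
  let balance = snapshot ? snapshot.balance : 0;
  const start = snapshot ? snapshot.version : 0;
  for (const event of events.slice(start)) {
    if (event.type === "MoneyDeposited") balance += event.amount;
    if (event.type === "MoneyWithdrawn") balance -= event.amount;
  }
  return balance;
}

function maybeSnapshot(events) {
  // Every SNAPSHOT_INTERVAL events, record the computed state and position.
  if (events.length === 0 || events.length % SNAPSHOT_INTERVAL !== 0) return null;
  return { version: events.length, balance: rehydrate(events, null) };
}
```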
Benefits Recap
Event Sourcing provides several strategic benefits:
- It enables a complete audit trail, recording every user action in the system.
- Undo and redo functionality becomes feasible, as the system can reconstruct any prior state from its event history.
- Debugging and analytics are easier because you can replay and analyze event flows.
- Scalable read models can be tailored for different use cases, improving performance and flexibility.
- The architecture encourages decoupling, where each component reacts to events without tight integration.
Challenges and Considerations
Despite its advantages, Event Sourcing comes with certain challenges:
- Event schema evolution can become complex. To manage changes in event formats, you may need versioning, upcasting, or adapters.
- Eventual consistency requires a mental shift. Developers must understand that write and read paths may not be immediately in sync.
- Debugging across asynchronous boundaries may need custom tools to inspect and trace events effectively.
- The learning curve is steeper than traditional CRUD systems. Your team must be comfortable with asynchronous workflows and immutability.
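Of these, schema evolution is the most common stumbling block, so here is a sketch of upcasting: migrating old event versions to the current shape at read time. The versioned fields shown (`version`, `currency`) are invented for illustration.

```javascript
// Upcasting sketch: suppose v1 MoneyDeposited events carried only a bare
// amount, and v2 added a currency field. Old events stay untouched in the
// store; they are upgraded in memory as they are read.
function upcast(event) {
  if (event.type === "MoneyDeposited" && event.version === 1) {
    return { ...event, version: 2, currency: "USD" }; // assumed default
  }
  return event; // already current, pass through unchanged
}
```

Consumers then only ever see the latest schema, while the append-only store keeps its immutability guarantee.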
Tools and Libraries
Popular tools that support Event Sourcing:
- Event Store – purpose-built database for event sourcing.
- Axon Framework (Java) – provides CQRS and event sourcing support.
- Akka Persistence (Scala) – for event-sourced actors.
- Kafka + Kafka Streams – can be adapted to support event sourcing.
- NEventStore / Marten (C#) – .NET solutions.
- TypeORM + custom event store (Node.js) – lightweight approach.
Final Thoughts
Event Sourcing is a powerful pattern for systems that need visibility into how state evolves. While it introduces complexity, the ability to replay, audit, and model system behavior over time is a compelling advantage.
If you’re building systems that handle transactions, user behavior, domain complexity, or require traceability — Event Sourcing could unlock significant long-term value.