Messaging Broker Migration: Why Lock-in Hurts and How to Avoid It

Have you ever struggled with messaging broker migration? I've heard this opinion countless times: "Why do we need that? We're on Amazon and use SNS/SQS - it works perfectly." While that might be true today, what happens when things change?

The reality is that business needs change, and what works today might not work tomorrow. Let me share why messaging broker lock-in is a problem and how to build systems that can adapt easily.

When You Need to Migrate

Even the most satisfied AWS (or any other provider) users might face situations where migration becomes necessary:

Business Opportunities: You're a startup built on AWS and Google offers you $200K in credits through their startup program. That's real money that could help your business grow.

Market Innovation: A new messaging broker appears that's much more efficient and cheaper than your current solution. Do you want to be stuck with old technology?

Strategic Changes: You decide to build your own messaging provider or move to a completely different setup.

Compliance Requirements: New regulations require data to stay in specific regions that your current provider doesn't support.

Performance Problems: Your current broker can't handle growing message volumes (or can, but at a high cost) or latency requirements, but alternatives can.

Cost Problems: What happens when your provider decides to raise prices a lot? You need options.

I've seen several of these scenarios play out, and teams with tightly coupled messaging systems found themselves in a painful spot.

The Migration Nightmare: Common Anti-Patterns

Anti-Pattern 1: Broker as Event Storage

The worst case from my perspective is when teams use the broker as their primary event storage. You're essentially married to that provider forever, and they can do whatever they want - including increasing costs dramatically.

Anti-Pattern 2: Dual-Send Without Atomicity

Some teams think they can solve migration by adding one more line that sends messages to the new provider alongside the existing one. On top of the original atomicity problem you already had with two operations (storing the model and sending the message), you now have a third operation. What happens when one of them fails?

kotlin
class OrderService(
    private val orderRepository: OrderRepository,
    private val kafkaPublisher: KafkaPublisher,
    private val rabbitPublisher: RabbitPublisher,
) {

    fun save(order: Order) {
        // ...
        orderRepository.save(order)
        kafkaPublisher.send(order)
        rabbitPublisher.send(order) // one more point of failure that can block saving the order
    }
}

Anti-Pattern 3: Feature-Dependent Architecture

Many systems become deeply dependent on broker-specific features:

  • FIFO queue guarantees for ordering
  • Exactly-once delivery promises
  • Dead letter queue mechanisms
  • Broker-specific retry policies

When migration time comes, replicating these features across different brokers becomes a complex engineering challenge.
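
To make this concrete, here's a hypothetical example of what feature coupling looks like in practice. This isn't code from my system - it's a sketch using the AWS SDK for Java v2, where ordering and deduplication are delegated to SQS FIFO attributes that simply don't exist in other brokers:

kotlin
import software.amazon.awssdk.services.sqs.SqsClient
import software.amazon.awssdk.services.sqs.model.SendMessageRequest

// Hypothetical publisher that leans on SQS FIFO features.
// Ordering (messageGroupId) and deduplication (messageDeduplicationId) live in the broker,
// not in our code - migrating away means reimplementing both somewhere else.
class SqsFifoOrderPublisher(
    private val sqs: SqsClient,
    private val queueUrl: String, // must point to a .fifo queue for these attributes to apply
) {
    fun publish(orderId: String, payload: String) {
        val request = SendMessageRequest.builder()
            .queueUrl(queueUrl)
            .messageBody(payload)
            .messageGroupId(orderId)         // SQS-specific ordering guarantee
            .messageDeduplicationId(orderId) // SQS-specific deduplication window
            .build()
        sqs.sendMessage(request)
    }
}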

The Path to Broker Independence

To make migration easier, we need to treat message brokers as a transport layer only and avoid using broker-specific features. Here's how:

Design Principles

Don't Rely on Event Ordering: Instead of depending on FIFO queues, make your consumers handle ordering. Each event should have a unique ID and creation timestamp to help manage sequencing.

Handle Duplicates Well: Don't rely on exactly-once delivery guarantees. Design your consumers to handle the same message multiple times without problems.

Avoid Dead Letter Queues: If you store events in your database, you don't need broker-managed dead letter queues. I remember how painful it was to manage and restore events from dead letter queues.

Control Your Own Destiny: Store events in your database where you control delivery status, error handling, and retries. If delivery attempts hit their limit because of transient network issues, it's easy to reset and resend them because you control the data.
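
Here's a minimal sketch of what these principles look like on the consumer side. The EventEnvelope and ProjectionRepository types are assumptions I'm using for illustration - the point is that deduplication and ordering checks live in your code and your database, not in the broker:

kotlin
import java.time.Instant

// Assumed event envelope: every event carries a unique ID and a creation timestamp.
data class EventEnvelope(val id: String, val createdAt: Instant, val payload: String)

// Hypothetical consumer that tolerates duplicates and out-of-order delivery.
class UserProjectionConsumer(private val repository: ProjectionRepository) {

    fun handle(event: EventEnvelope) {
        // Duplicate handling: if we've already applied this event ID, do nothing.
        if (repository.isProcessed(event.id)) return

        // Ordering handling: ignore events older than the state we already have.
        val lastApplied = repository.lastAppliedAt()
        if (lastApplied != null && event.createdAt.isBefore(lastApplied)) {
            repository.markProcessed(event.id) // remember it, but don't regress the state
            return
        }

        repository.apply(event)            // update the projection
        repository.markProcessed(event.id) // record the ID for future dedup checks
    }
}

// Assumed persistence interface - in practice this is backed by your own database.
interface ProjectionRepository {
    fun isProcessed(eventId: String): Boolean
    fun lastAppliedAt(): Instant?
    fun apply(event: EventEnvelope)
    fun markProcessed(eventId: String)
}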

The Migration Strategy: Multi-Broker Publishing

Here's how I designed the system to handle broker migration seamlessly:

The processedBrokers Field

Apart from storing multiple event DTO versions (which I described in my previous post), I store a processedBrokers field with each event. During event execution, the system iterates through all implementations of BrokerProducer.

Here's how an event looks in MongoDB with the processedBrokers tracking:

json
{
  "_id": "1cc64bb3-17f0-3619-a68d-a2564bf1644d",
  "topic": "user.model.created.v1",
  "event": {
    "body": {...},
    "metadata": {...}
  },
  "notification": {
    "status": "SENT",
    "attempts": 1,
    "failedReasons": [],
    "processedBrokers": [
      "kafka",
      "spring"
    ]
  }
}

Notice the processedBrokers array contains both "kafka" and "spring", indicating this event was successfully sent to both brokers. The attempts counter and failedReasons array help track delivery issues and retry logic.

This approach allows us to:

  • Map multiple brokers to the same event
  • Send the same message to all configured brokers
  • Avoid resending to already-processed brokers during retries
  • Handle both external brokers (Kafka, SQS) and internal events (Spring pub/sub)
  • Track delivery status per broker independently
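
To make the mechanics concrete, here's a simplified sketch of that dispatch loop. The types below (BrokerMessageV1, BrokerProducer, StoredEvent) are assumed shapes mirroring the MongoDB document above, not the exact production classes, but the core idea is the same: iterate over every registered producer and skip the ones already listed in processedBrokers:

kotlin
// Assumed message payload shape, mirroring the "event" object above.
data class BrokerMessageV1(val body: Any?, val metadata: Map<String, Any?>)

// Assumed broker abstraction: every producer declares its name and a transport-only send.
interface BrokerProducer {
    val broker: String
    fun send(topic: String, event: BrokerMessageV1)
}

// Assumed delivery-tracking state, mirroring the "notification" object above.
data class Notification(
    var status: String,
    var attempts: Int,
    val failedReasons: MutableList<String>,
    val processedBrokers: MutableSet<String>,
)

data class StoredEvent(
    val id: String,
    val topic: String,
    val event: BrokerMessageV1,
    val notification: Notification,
)

// Simplified dispatcher: the DI container injects every BrokerProducer implementation it finds.
class EventDispatcher(private val producers: List<BrokerProducer>) {

    fun dispatch(stored: StoredEvent) {
        stored.notification.attempts++

        producers
            .filterNot { it.broker in stored.notification.processedBrokers } // skip brokers we already sent to
            .forEach { producer ->
                try {
                    producer.send(stored.topic, stored.event)
                    stored.notification.processedBrokers.add(producer.broker)
                } catch (exception: Exception) {
                    stored.notification.failedReasons.add("${producer.broker}: ${exception.message}")
                }
            }

        stored.notification.status =
            if (stored.notification.processedBrokers.size == producers.size) "SENT" else "FAILED"
        // Persisting the updated document back to MongoDB is left out of this sketch.
    }
}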

The Spring Pub/Sub Example

Let me walk you through a concrete example from my implementation. We'll look at how the system handles internal Spring pub/sub messaging, which demonstrates the same principles used for external broker migration.

Producer Implementation

kotlin
@Component
@ConditionalOnProperty(
    value = ["vt.events.producer.type.spring.enabled"],
    havingValue = "true",
    matchIfMissing = true,
)
class SpringBrokerProducer(
    private val eventPublisher: ApplicationEventPublisher,
) : BrokerProducer {

    override val broker: String = "spring"

    override fun send(topic: String, event: BrokerMessageV1) {
        eventPublisher.publishEvent(PayloadApplicationEvent(topic, event))
    }
}

The SpringBrokerProducer implements the same BrokerProducer interface as our Kafka, SQS, or RabbitMQ producers. The system automatically discovers all implementations and sends events to each configured broker.
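
For comparison, a Kafka-backed producer could look like the sketch below. This is my illustration rather than the project's actual class: it reuses BrokerProducer and BrokerMessageV1 from the shared messaging module, borrows the property key from the configuration section later in this post, and assumes a KafkaTemplate with JSON serialization configured elsewhere:

kotlin
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty
import org.springframework.kafka.core.KafkaTemplate
import org.springframework.stereotype.Component

// Hypothetical Kafka counterpart with the same shape as SpringBrokerProducer.
// BrokerProducer and BrokerMessageV1 come from the shared messaging module.
@Component
@ConditionalOnProperty(
    value = ["vt.events.producer.type.kafka.enabled"],
    havingValue = "true",
)
class KafkaBrokerProducer(
    private val kafkaTemplate: KafkaTemplate<String, BrokerMessageV1>, // JSON serializer assumed to be configured elsewhere
) : BrokerProducer {

    override val broker: String = "kafka"

    override fun send(topic: String, event: BrokerMessageV1) {
        // Fire-and-forget: delivery status and retries are tracked in the event store, not in the broker.
        kafkaTemplate.send(topic, event)
    }
}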

Consumer Implementation

kotlin
@Component
@ConditionalOnProperty(
    value = ["vt.events.consumer.type.spring.enabled"],
    havingValue = "true",
    matchIfMissing = true
)
class SpringEventListener(
    private val topicSubscriber: SpringTopicSubscriber,
) {

    private val log = KotlinLogging.logger {}

    @EventListener
    fun onEvent(event: PayloadApplicationEvent<BrokerMessageV1>) {
        val topic = event.source as String
        val consumers: MutableList<Consumer>? = topicSubscriber.topicConsumers[topic]
        if (consumers.isNullOrEmpty()) {
            log.debug { "No consumers found for topic '$topic'" }
            return
        }

        consumers.forEach { consumer ->
            try {
                log.debug { "Processing event for topic '$topic' with consumer '${consumer.type.simpleName}'" }
                consumer.handler(event.payload)
            } catch (exception: Exception) {
                log.error(exception) { "Error in consumer handler for topic '$topic': ${event.payload}" }
            }
        }
    }
}

The beauty of this design is that the same consumer code works regardless of the underlying broker. The framework abstracts away the transport layer completely.

Topic Subscription

kotlin
@Component
@ConditionalOnProperty(
    value = ["vt.events.consumer.type.spring.enabled"],
    havingValue = "true",
    matchIfMissing = true
)
class SpringTopicSubscriber : TopicSubscriber {

    private val log = KotlinLogging.logger {}

    // Not private: SpringEventListener reads this map to find the consumers for a topic.
    val topicConsumers = mutableMapOf<String, MutableList<Consumer>>()

    override fun subscribe(topic: String, consumers: MutableList<Consumer>) {
        topicConsumers[topic] = consumers
        log.info { "Registered consumers: ${consumers.map { it.type.simpleName }} for topic: $topic" }
    }
}
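
The Consumer and TopicSubscriber types aren't shown in full in this post, so here's my assumption of their shape based on how they're used above (type.simpleName, handler(payload), subscribe(topic, consumers)), together with a hypothetical registration example:

kotlin
import kotlin.reflect.KClass

// BrokerMessageV1 comes from the shared messaging module.

// Assumed shape of the consumer descriptor used by SpringEventListener:
// 'type' identifies the handling class for logging, 'handler' does the actual work.
data class Consumer(
    val type: KClass<*>,
    val handler: (BrokerMessageV1) -> Unit,
)

// Assumed broker-agnostic subscription contract implemented by SpringTopicSubscriber.
interface TopicSubscriber {
    fun subscribe(topic: String, consumers: MutableList<Consumer>)
}

// Hypothetical registration: the consumer code never mentions Spring events, Kafka, or SQS.
class UserCreatedHandler {
    fun onUserCreated(event: BrokerMessageV1) {
        // project the event into a local read model, send an email, etc.
    }
}

fun registerConsumers(subscriber: TopicSubscriber, handler: UserCreatedHandler) {
    subscriber.subscribe(
        "user.model.created.v1", // topic taken from the MongoDB example above
        mutableListOf(Consumer(type = UserCreatedHandler::class, handler = handler::onUserCreated)),
    )
}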

Configuration Management

Notice how both producer and consumer use @ConditionalOnProperty annotations. This allows us to control which brokers are active through configuration:

# Enable Spring internal messaging
vt.events.producer.type.spring.enabled=true
vt.events.consumer.type.spring.enabled=true

# Enable Kafka
vt.events.producer.type.kafka.enabled=true
vt.events.consumer.type.kafka.enabled=true

Maven Module Organization

The modular structure of the project's pom.xml files adds another layer of flexibility:

xml
<!-- Spring messaging module -->
<dependency>
    <groupId>dev.vibetdd.kotlin</groupId>
    <artifactId>vt-messaging-consumer-spring</artifactId>
</dependency>

<!-- Kafka messaging module -->
<dependency>
    <groupId>dev.vibetdd.kotlin</groupId>
    <artifactId>vt-messaging-consumer-kafka</artifactId>
</dependency>

You can manage broker support by including/excluding Maven modules, similar to the API client approach I described in the monolith split post.

The Migration Process in Action

Here's how a typical migration would work with this system:

Phase 1: Producer Preparation

  1. Add New Broker Module: Include the new broker's Maven dependency
  2. Deploy Producer: The system automatically starts sending to both brokers

Phase 2: Consumer Migration

  1. Update Client Dependencies: Consumer services update their event library dependency
  2. Gradual Migration: Each consumer team switches to the new broker when ready
  3. No Coordination Required: Teams work independently during the transition period

Phase 3: Cleanup

  1. Monitor Delivery: Ensure all consumers successfully receive events from the new broker
  2. Remove Old Broker: Disable the old broker configuration
  3. Clean Dependencies: Remove old broker Maven modules
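
As an illustration, here's how those phases could map onto the configuration flags shown earlier, assuming a hypothetical SQS module as the migration target (the sqs keys follow the same naming pattern but are my assumption):

# Phase 1: dual publishing - events go to both the old and the new broker
vt.events.producer.type.kafka.enabled=true
vt.events.producer.type.sqs.enabled=true

# Phase 2: consumer teams switch to the new broker at their own pace
vt.events.consumer.type.kafka.enabled=true
vt.events.consumer.type.sqs.enabled=true

# Phase 3: cleanup - disable the old broker once every consumer has moved
vt.events.producer.type.kafka.enabled=false
vt.events.consumer.type.kafka.enabled=false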

Benefits of This Approach

This approach is particularly valuable for:

Startup Growth: Early-stage companies can start with simple solutions using internal messaging and evolve to enterprise-grade messaging without architectural rewrites.

Monolith Migration: When extracting services from a monolith, you can start replacing internal messaging with external brokers as services become independent.

Cloud Migration: Moving between cloud providers becomes much less risky when your messaging layer can adapt.

Compliance Changes: When regulations require data to stay in specific regions, you can quickly change your messaging setup.

Conclusion

Messaging broker lock-in is a real problem that teams often don't recognize until it's too late. By designing your event-driven architecture with broker independence from the start, you maintain the flexibility to adapt as your business needs evolve.

The key insights are:

  • Treat brokers as transport layers, not feature providers
  • Store events in your database for complete control
  • Use abstraction layers that hide broker-specific implementation details
  • Design for multi-broker publishing from day one

Remember: the best time to prepare for migration is when you don't need it yet. By the time migration becomes urgent, it's often too late to implement these patterns without significant pain.

Your future self (and your team) will thank you for building systems that can adapt rather than systems that lock you in.

Built by a software engineer for engineers )))