Today I learnt about the Request-Reply Pattern. At first it seemed to deviate from the core concept of asynchronous messaging, but a few arguments convinced me otherwise. In this blog I will jot down notes on the Request-Reply Pattern for my future self.
The Request-Reply Pattern is a fundamental communication style in distributed systems, where a requester sends a message to a responder and waits for a reply. It’s widely used in systems that require synchronous communication, enabling the requester to receive a response for further processing.
Characteristics of the Request-Reply Pattern
- Synchronous Communication: The requester sends a request and waits for a reply before proceeding.
- Timeout Handling: The requester often has a timeout mechanism to handle unresponsive requests (see the sketch after this list).
- Reliability: Ensures delivery of requests and replies in reliable systems.
- Common Use Cases:
  - Remote Procedure Calls (RPC)
  - API communication
  - Messaging systems like RabbitMQ or Kafka
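To make the synchronous-communication and timeout characteristics concrete, here is a minimal sketch of request-reply over plain HTTP using the requests library. The URL is a placeholder, not a real service:

import requests

try:
    # The requester blocks here until the responder replies or the timeout expires.
    reply = requests.get("http://localhost:8000/status", timeout=5)
    print("Reply:", reply.status_code, reply.text)
except requests.exceptions.Timeout:
    # Timeout handling: give up instead of waiting forever on an unresponsive service.
    print("No reply within 5 seconds")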

Steps in the RabbitMQ-based RPC Implementation
- Client (Requester):
  - Sends the request message to the rpc_queue.
  - Specifies a callback queue (a temporary queue) in the message properties.
  - Generates a unique correlation ID to associate the response with the request.
  - Waits for the response from the callback queue.
- Server (Responder):
  - Listens to the rpc_queue for incoming requests.
  - Processes the request message and generates a response.
  - Sends the response to the callback queue specified in the request’s properties.
  - Includes the same correlation ID in the response for matching.
- Client (Requester):
  - Receives the response from the callback queue.
  - Uses the correlation ID to ensure the response corresponds to the original request.
Simple Implementation in Python
Requester (Client)
The Requester sends a message to a specific queue, listens for a response on a callback queue, and waits for the server’s reply.
import pika
import uuid


class RpcClient:
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
        self.channel = self.connection.channel()

        # Declare a unique callback queue for the client
        result = self.channel.queue_declare(queue='', exclusive=True)
        self.callback_queue = result.method.queue

        # Set up a consumer on the callback queue
        self.channel.basic_consume(
            queue=self.callback_queue,
            on_message_callback=self.on_response,
            auto_ack=True,
        )

        self.response = None
        self.corr_id = None

    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body.decode()

    def call(self, message):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(
            exchange='',
            routing_key='rpc_queue',  # Send to the RPC queue
            properties=pika.BasicProperties(
                reply_to=self.callback_queue,  # Specify the callback queue
                correlation_id=self.corr_id,  # Unique correlation ID
            ),
            body=message,
        )
        # Wait for the response
        while self.response is None:
            self.connection.process_data_events()
        return self.response


# Usage
if __name__ == "__main__":
    rpc_client = RpcClient()
    print("Sending request...")
    response = rpc_client.call("Hello, Server!")
    print("Response from server:", response)
Responder (Server)
The Responder listens to a queue for incoming requests, processes them, and sends back a response to the callback queue specified by the requester.
import pika


def on_request(ch, method, props, body):
    print("Received request:", body.decode())
    response = f"Processed: {body.decode()}"

    # Publish the response to the callback queue
    ch.basic_publish(
        exchange='',
        routing_key=props.reply_to,  # Reply to the callback queue
        properties=pika.BasicProperties(correlation_id=props.correlation_id),
        body=response,
    )
    ch.basic_ack(delivery_tag=method.delivery_tag)  # Acknowledge the request


if __name__ == "__main__":
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    # Declare the RPC queue
    channel.queue_declare(queue='rpc_queue')

    # Set up a consumer on the RPC queue
    channel.basic_consume(queue='rpc_queue', on_message_callback=on_request)

    print("Awaiting RPC requests...")
    channel.start_consuming()
Deciding on the Reply Queue
At first it may seem simple, but deciding on the reply queue is tricky: should there be a separate queue for each request, one queue per client, or a shared durable queue? Below are the different approaches.
Exclusive Reply Queue Per Request
- What happens: Each time you send a request, a temporary queue is created just for that request to receive the reply. Once the reply is received, the queue disappears (a rough sketch follows this list).
- Benefits:
  - Easy to manage since each request has its own queue. No mix-ups with other responses.
  - If the client disconnects, RabbitMQ automatically cleans up the queue.
- Drawbacks:
  - Performance hit because a new queue and consumer are created for every request.
  - If something goes wrong (e.g., the server doesn’t send a reply), the temporary queue hangs around until it’s manually cleaned up.
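A rough sketch of the per-request approach, assuming a pika BlockingConnection channel and the same rpc_queue as above. The function name is made up, and a real implementation would add a timeout instead of polling forever:

import pika

def call_with_per_request_queue(channel, message):
    # Declare a fresh, exclusive, server-named queue just for this one reply
    result = channel.queue_declare(queue='', exclusive=True)
    reply_queue = result.method.queue

    channel.basic_publish(
        exchange='',
        routing_key='rpc_queue',
        properties=pika.BasicProperties(reply_to=reply_queue),
        body=message,
    )

    # Poll the one-shot queue until the reply arrives
    while True:
        method, props, body = channel.basic_get(queue=reply_queue, auto_ack=True)
        if method:
            channel.queue_delete(queue=reply_queue)  # clean up the one-shot queue
            return body.decode()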
Exclusive Reply Queue Per Client
- What happens: Each client creates a single queue that is reused for all its requests. Each response includes a unique ID (called a correlation ID) to match it to the right request (a usage sketch follows this list).
- Benefits:
  - More efficient since you don’t create a new queue for every request.
  - RabbitMQ automatically deletes the queue if the client disconnects.
- Drawbacks:
  - The client must keep track of requests and their responses using correlation IDs.
  - Responses still get lost if the client disconnects while waiting for a reply.
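This is the approach the RpcClient implementation above already takes: the callback queue is declared once in __init__ and reused across calls, with the correlation ID doing the matching. A small usage sketch:

# One client, one callback queue, several requests; the correlation ID
# generated inside call() keeps each reply matched to its own request.
client = RpcClient()
for name in ("alpha", "beta", "gamma"):
    print(name, "->", client.call(f"Hello, {name}!"))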
Durable Reply Queue
- What happens: A permanent queue is used to store replies, and it isn’t tied to any specific client or request. It remains available even if the client disconnects (a rough sketch follows this list).
- Benefits:
  - Responses are not lost if the client disconnects while waiting for them.
  - Useful for long-running tasks or unstable networks.
- Drawbacks:
  - Requires careful management to ensure responses go to the correct client.
  - Risk of misrouting responses if multiple clients share queues incorrectly.
  - Adds complexity, removing some of the simplicity of using RabbitMQ.
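A rough sketch of the durable approach. The queue name rpc_replies and the correlation ID are made up for illustration, and the correlation-ID bookkeeping shown here is exactly the "careful management" the drawbacks mention:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# A named, durable queue that survives broker restarts and client disconnects
channel.queue_declare(queue='rpc_replies', durable=True)

channel.basic_publish(
    exchange='',
    routing_key='rpc_queue',
    properties=pika.BasicProperties(
        reply_to='rpc_replies',       # server replies to the shared durable queue
        correlation_id='request-42',  # made-up ID used to claim the reply later
        delivery_mode=2,              # persist the request message as well
    ),
    body='Hello, Server!',
)

# Later (possibly after a reconnect) the client drains the durable queue and
# only accepts the reply whose correlation ID it recognises.
def on_reply(ch, method, props, body):
    if props.correlation_id == 'request-42':
        print("Got reply:", body.decode())
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        # Not ours: put it back so the right client can pick it up
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

channel.basic_consume(queue='rpc_replies', on_message_callback=on_reply)
channel.start_consuming()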
If we wait for a reply, aren’t we synchronous?
Waiting for a reply on the client side does introduce synchronous behavior, which might seem at odds with the notion of asynchronous systems. However, there are valid reasons and scenarios where this is necessary and valuable, and it’s important to understand when and why this pattern is used.
Why Does the Client Wait?
- Synchronous Workflow Requirements:
  - Sometimes, the client’s operation cannot proceed without the result of the remote procedure.
  - For example, if a client needs user authentication from a remote service, it must wait for the response before continuing.
- RPC Mimics Local Function Calls:
  - RPC is designed to make a remote function call appear similar to a local function call. This often implies blocking behavior where the client waits for the function’s return value.
- Request-Reply Guarantee:
  - The waiting mechanism ensures that the client receives the correct response (matching via correlation ID).
  - This blocking pattern provides a straightforward way to handle scenarios requiring immediate feedback.
Why Is This Still Considered Asynchronous?
Even though the client waits, the communication mechanism is asynchronous. Here’s why:
- Message Handling is Decoupled:
  - The requester and responder communicate via queues, not direct connections.
  - RabbitMQ ensures the messages are queued and delivered reliably, regardless of whether the server is immediately available.
- Responder Can Be Asynchronous:
  - The server can process the request in its own time. For example, it might be handling other requests concurrently.
- Client-Side Timeout:
  - The client can implement a timeout, allowing it to avoid waiting indefinitely if the server is slow or unresponsive (a sketch follows this list).
- Scalability:
  - Multiple responders can process requests from the same queue concurrently, enabling horizontal scaling.
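For example, the blocking loop in the RpcClient above can be given a deadline. Below is a sketch of a timeout-aware variant of call(), meant to live on the same class; the method name is made up:

import time
import uuid

import pika

# Assumes this becomes a method of the RpcClient class defined earlier.
def call_with_timeout(self, message, timeout=5.0):
    self.response = None
    self.corr_id = str(uuid.uuid4())
    self.channel.basic_publish(
        exchange='',
        routing_key='rpc_queue',
        properties=pika.BasicProperties(
            reply_to=self.callback_queue,
            correlation_id=self.corr_id,
        ),
        body=message,
    )
    deadline = time.monotonic() + timeout
    while self.response is None:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError(f"No reply from server within {timeout} seconds")
        # Returns after at most `remaining` seconds, whether or not a message arrived
        self.connection.process_data_events(time_limit=remaining)
    return self.response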
