Message Queues for .NET Developers - Part 1: The Basics

What is a Message Queue?

A message queue is a form of asynchronous service-to-service communication. Message queues are common in serverless and microservice architectures because independent systems need to communicate with each other. In a message queue, each message is processed once and then removed from the queue (we will discuss this more later). A system that creates messages and adds them to the queue is called a producer, whereas a system that processes messages from the queue is called a consumer.

Breaking It Down

What is a Queue?

In the general sense, a queue can be thought of as a line. When you wait in a line, the first person to enter the line is the first person to exit it. This is known as first-in-first-out (FIFO). A queue contrasts with a stack (think of a stack of poker chips), where items are taken from the top: the item on top of the stack is the last item that was added. This is known as last-in-first-out (LIFO).
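The FIFO/LIFO distinction is easiest to see in code. Here is a minimal sketch in Python (used purely for brevity; the same behavior holds for .NET's Queue&lt;T&gt; and Stack&lt;T&gt;):

```python
from collections import deque

# FIFO queue: the first element added is the first element removed
line = deque()
line.append("Alice")   # Alice joins the line first
line.append("Bob")
line.append("Carol")
first_out = line.popleft()   # first in, first out
print(first_out)             # Alice

# LIFO stack: the last element added is the first element removed
chips = []
chips.append("red")
chips.append("blue")
chips.append("green")        # green chip placed on top last
top_chip = chips.pop()       # last in, first out
print(top_chip)              # green
```

Message queues are built on the FIFO structure: the oldest waiting message is always handed out first.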

What is a Message?

Simply put, a message is the data being transported between the producer and the consumer.

Putting It Together

A message queue, then, is a sequence of work items (messages) waiting to be processed, where the oldest message put into the queue will be the first message processed.
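To make the definition concrete, here is a sketch using Python's standard-library queue.Queue as a stand-in for a real message broker (in a .NET system this role would typically be played by something like RabbitMQ or Azure Service Bus; the message names here are illustrative):

```python
import queue

# A FIFO queue; the producer and consumer only touch the queue,
# never each other.
q = queue.Queue()

def produce(q, messages):
    """The producer adds messages to the queue as it creates them."""
    for m in messages:
        q.put(m)

def consume(q):
    """The consumer takes the oldest message first and processes it."""
    handled = []
    while not q.empty():
        msg = q.get()              # oldest message comes out first
        handled.append(msg.upper())  # "process" the message
        q.task_done()              # the message is done and gone from the queue
    return handled

produce(q, ["order-1", "order-2", "order-3"])
processed = consume(q)
print(processed)   # ['ORDER-1', 'ORDER-2', 'ORDER-3']
```

Note that each message is processed exactly once and removed, and processing order matches insertion order.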

Why Do We Use Message Queues?

At this point you might be asking yourself, "Why do we use message queues?" or even, "When should I use a message queue?" Message queues improve the performance, reliability, and scalability of our projects. They allow us to decouple a monolithic application into smaller pieces, or services, which eases some of the burdens of developing, deploying, and maintaining our application.


In a simple sense, message queues increase the performance of our application because no component ever waits on another component to perform an action. Since our services work independently, our producers can place messages in the queue as fast as they can create them; they do not need to wait for that data to be processed before creating new messages. Our consumers, in turn, process messages only when they are available. The producing and consuming components only ever interact with the queue, never with each other. Message processing can also be batched for efficiency: inserting 100 rows once is faster than inserting 1 row 100 times.
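The batching point can be sketched in a few lines. Here, a consumer drains the queue in batches of up to 100 messages, so a backlog of 250 messages becomes 3 database writes instead of 250 (the batch size and message shape are illustrative):

```python
import queue

q = queue.Queue()
for i in range(250):
    q.put({"id": i})   # 250 messages waiting to be processed

def drain_batch(q, max_batch=100):
    """Pull up to max_batch messages so they can be written in one operation."""
    batch = []
    while len(batch) < max_batch and not q.empty():
        batch.append(q.get())
    return batch

batch_sizes = []
while not q.empty():
    batch = drain_batch(q)
    # ...one bulk insert per batch would go here...
    batch_sizes.append(len(batch))

print(batch_sizes)   # [100, 100, 50]
```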


Separating components with message queues makes your system more fault tolerant. If one part of the system is down or unreachable, the others can still interact with the message queue. If a consumer goes down, its messages remain in the queue, and processing resumes once the service is restored.


Components can be scaled independently, as needed. When workloads peak, for example, more producer instances can be started to write to the queue. Inversely, if the queue is growing faster than the consumer(s) can process it, more consumer instances can be started to manage the load. In general, you need enough producers to handle peak workloads, but only enough consumers to keep up with the average workload.


There are three important metrics to monitor in your message queue:

  1. The number of messages in the queue
  2. The rate of messages entering the queue
  3. The rate of messages being processed from the queue
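These three metrics can be derived from the running totals that most brokers expose. A rough sketch, assuming hypothetical snapshot counters taken one minute apart from a broker's management API:

```python
# Hypothetical counters sampled from a broker's management API.
enqueued_total = 10_400   # messages ever added to the queue
dequeued_total = 10_250   # messages ever processed from the queue
prev_enqueued = 10_000    # same counters, sampled 60 seconds earlier
prev_dequeued = 9_950
interval_seconds = 60

# Metric 1: messages currently waiting in the queue
queue_depth = enqueued_total - dequeued_total

# Metric 2: rate of messages entering the queue (per second)
inflow_rate = (enqueued_total - prev_enqueued) / interval_seconds

# Metric 3: rate of messages being processed (per second)
outflow_rate = (dequeued_total - prev_dequeued) / interval_seconds

print(queue_depth)   # 150 messages waiting
# If inflow_rate stays above outflow_rate, the queue keeps growing:
# that is the signal to start more consumer instances.
print(inflow_rate > outflow_rate)   # True
```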

Message Queue vs Publish/Subscribe Pattern

Message queues and the publish/subscribe pattern (pub/sub) are both common messaging patterns, and many message broker systems support both. So, what is the difference? In a message queue, each message is processed by a single consumer. In the publish/subscribe pattern, each message published to a topic is delivered to every subscriber of that topic (zero, one, or many consumers).
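The difference fits in a short sketch. With a queue, each message goes to exactly one consumer; with pub/sub, every subscriber of a topic gets its own copy (the topic and subscriber names below are illustrative):

```python
from collections import defaultdict, deque

# Message queue: each message is taken by exactly one consumer.
work_queue = deque(["msg-1", "msg-2"])
taken_by_a = work_queue.popleft()   # consumer A takes msg-1
taken_by_b = work_queue.popleft()   # consumer B takes msg-2; no message is seen twice

# Pub/sub: each message on a topic is copied to every subscriber.
subscribers = defaultdict(list)

def subscribe(topic, inbox):
    subscribers[topic].append(inbox)

def publish(topic, message):
    for inbox in subscribers[topic]:   # fan out to all subscribers of the topic
        inbox.append(message)

billing_inbox, audit_inbox = [], []
subscribe("orders", billing_inbox)
subscribe("orders", audit_inbox)
publish("orders", "order-created")

print(billing_inbox, audit_inbox)   # both receive the same message
```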