As developers, we often face the challenge of building real-time features. Imagine you are tasked with creating a live dashboard that displays system metrics, a breaking news feed, or real-time notifications. The initial, almost instinctual, thought for many of us is, “I need WebSockets.” This often leads us down a path of managing complex bidirectional connections, handling protocol upgrades, and writing custom reconnection logic. But what if you only need to push data from the server to the client? Is there a simpler, more elegant way?
The answer is yes. Meet Server-Sent Events (SSE), a mature, browser-native HTML5 standard that leverages the simplicity and ubiquity of HTTP for powerful, unidirectional server-to-client streaming. SSE isn’t a new, flashy technology. It’s a reliable workhorse that has been part of the web platform for years, yet it remains one of its most underrated features.
In this post, we will embark on a journey to master Server-Sent Events in ASP.NET Core. We will start by deconstructing the protocol, then build a production-ready, scalable SSE solution. Finally, I will equip you with the architectural knowledge to make the right choice between SSE, WebSockets, and SignalR. By the end, you will not only know how to implement SSE but, more importantly, when and why to choose it.
Deconstructing the SSE Protocol
At its heart, the genius of Server-Sent Events lies in its simplicity. It operates on a model that can be described as “one request, infinite response”. The client initiates a standard HTTP GET request, and the server responds in a way that keeps the connection perpetually open, allowing it to push messages to the client whenever new data is available. This elegant approach avoids the protocol upgrade dance required by WebSockets and works seamlessly with existing HTTP infrastructure, such as proxies and firewalls.
The communication begins with a simple, yet specific, handshake defined by HTTP headers.
- The client signals its intent to establish an event stream by sending a request with an `Accept: text/event-stream` header. This tells the server, "I'm a client capable of understanding and processing Server-Sent Events."
- Upon receiving this request, the server must respond with a specific set of headers to establish the stream. The most crucial is `Content-Type: text/event-stream`. Additionally, to ensure that no intermediaries cache the response or close the connection prematurely, the server should also send `Cache-Control: no-cache` and `Connection: keep-alive`.
This header exchange forms the contract that transforms a regular HTTP connection into a persistent, one-way data channel from the server to the client.
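Concretely, the handshake looks something like this (an illustrative sketch; the path is arbitrary):

```http
GET /stream HTTP/1.1
Host: example.com
Accept: text/event-stream

HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache
Connection: keep-alive
```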
The Event Stream Format
Once the connection is established, the server begins sending data in a simple, human-readable, text-based format. Each message, or “event,” is a block of text terminated by a pair of newline characters (\n\n). The structure of these messages is defined by a few key fields:
- `data:` This field contains the actual payload of the message. It can be a simple string or a more complex structure, such as a JSON object. If a message spans multiple lines, the server can send multiple consecutive `data:` fields. The client will automatically concatenate their values, inserting a newline character between each one.

  Example: sending a JSON payload

  ```
  data: {"user": "Alice", "message": "Hello, world!"}
  ```

- `event:` This optional field specifies the event name. On the client side, this allows you to create specific event listeners for different message types, enabling more organized, modular code. If the `event:` field is omitted, the message will trigger a generic client-side event.

  Example: a named event

  ```
  event: user-joined
  data: {"username": "Bob", "timestamp": "2023-10-27T10:00:00Z"}
  ```

- `id:` This field attaches a unique identifier to an event. This is the cornerstone of SSE's built-in reliability. If the connection to the server is lost, the browser will automatically attempt to reconnect. When it does, it will send a special HTTP header, `Last-Event-ID`, containing the value of the last `id` it received. This allows the server to detect the disconnect and resume the stream, sending any messages the client might have missed.

  Example: an event with an ID

  ```
  id: msg-123
  data: This is a message with an identifier.
  ```

- `retry:` The server can use this field to specify, in milliseconds, how long the client should wait before attempting to reconnect after a connection is lost. The browser will use this value instead of its default timeout.

  Example: setting a 10-second reconnection timeout

  ```
  retry: 10000
  ```

- Comments: Any line beginning with a colon (`:`) is treated as a comment and ignored by the client. This seemingly minor feature is incredibly useful for implementing "heartbeats" or keep-alive pings. Sending a comment periodically (e.g., every 15-20 seconds) prevents network intermediaries like proxies and load balancers from assuming the connection is idle and terminating it.
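Putting the fields together, a short stretch of a raw event stream might look like this (hand-written for illustration, not output from any code in this post):

```
retry: 10000

: heartbeat

id: msg-124
event: user-joined
data: {"username": "Bob", "timestamp": "2023-10-27T10:00:00Z"}

id: msg-125
data: a payload that spans
data: two lines
```

Note how the last event's two `data:` fields reach the client as a single string joined by a newline, per the concatenation rule above.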
The following diagram illustrates the complete lifecycle of an SSE connection, from the initial request to the continuous stream of events.

The protocol’s design choices have profound implications for reliability. Network connections in the real world are fragile. The designers of the SSE standard anticipated this and baked a robust reconnection and recovery mechanism directly into the protocol and the corresponding browser APIs. This means developers get a significant degree of fault tolerance for free. In contrast, building a similar recovery system on top of raw WebSockets requires considerable manual effort, including client-side state management and custom reconnection logic. For a vast number of applications where eventual consistency is acceptable, such as news feeds or notification systems, SSE’s built-in reliability is more than sufficient and dramatically reduces development complexity.
Building Your First SSE Endpoint in ASP.NET Core
Now that we understand the theory, let’s get our hands dirty and build a functional SSE endpoint using ASP.NET Core. For this example, we will create a simple endpoint that streams a countdown from 30 to 0, sending an update every second.
```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/countdown", async (HttpContext context) =>
{
    // 1. Set the necessary headers for an SSE response.
    context.Response.Headers.Append("Content-Type", "text/event-stream");
    context.Response.Headers.Append("Cache-Control", "no-cache");
    context.Response.Headers.Append("Connection", "keep-alive");

    // 2. Start a loop to send messages.
    for (int i = 30; i >= 0; i--)
    {
        // 3. Check if the client has disconnected.
        if (context.RequestAborted.IsCancellationRequested)
        {
            break;
        }

        // 4. Write the event to the response stream in the correct format.
        await context.Response.WriteAsync($"data: {i}\n\n");

        // 5. Flush the response stream to ensure the message is sent immediately.
        await context.Response.Body.FlushAsync();

        // 6. Wait for a second before sending the next message.
        await Task.Delay(1000);
    }
});

app.Run();
```

Let's break down this code piece by piece to understand what is happening.
- Setting Headers: The first three lines inside the endpoint handler are critical. We set the `Content-Type` to `text/event-stream` to declare the response as an event stream. We also set `Cache-Control` to `no-cache` and `Connection` to `keep-alive` to inform the client and any intermediaries that this is a live, persistent connection that should not be cached. This is the server-side implementation of the contract we discussed earlier.
- The Infinite Loop: The `for` loop is the engine of our stream. In a real-world application, this might be a `while (true)` loop that waits for new messages from a service or a message queue, or an `IAsyncEnumerable<T>` (see the sketch after this list). This loop is what keeps the HTTP connection open and allows us to send multiple messages over its lifetime.
- Graceful Disconnects: Inside the loop, the line `if (context.RequestAborted.IsCancellationRequested)` is essential for robust server behavior. The `RequestAborted` property is a `CancellationToken` that gets triggered when the client closes the connection (e.g., by closing the browser tab). Checking this token allows us to break out of the loop and gracefully terminate the server-side process, freeing up resources. Without this check, the server would continue sending data to a disconnected client, leading to exceptions and wasted resources.
- Writing the Event: `await context.Response.WriteAsync($"data: {i}\n\n");` formats our message according to the SSE protocol. We prepend `data: ` to our payload (the current value of `i`) and append `\n\n` to signify the end of the event.
- The Magic Combo: The most crucial part of sending real-time updates is the combination of `WriteAsync` and `FlushAsync`. By default, ASP.NET Core buffers response data for efficiency. `WriteAsync` simply writes our event string to this buffer. It is the call to `await context.Response.Body.FlushAsync()` that forces the server to immediately send whatever is in the buffer to the client over the network. If you forget to call `FlushAsync`, your client will not receive any updates until the buffer fills up or the connection closes, completely defeating the purpose of a real-time stream.
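To make the "wait for messages" variant concrete, here is a minimal sketch of my own, assuming a singleton `Channel<string>` registered in DI as the in-process message source (the `/events` path and the channel registration are illustrative, not from the countdown example):

```csharp
using System.Threading.Channels;

// Assumed registration: builder.Services.AddSingleton(Channel.CreateUnbounded<string>());
app.MapGet("/events", async (HttpContext context, Channel<string> channel) =>
{
    context.Response.Headers.Append("Content-Type", "text/event-stream");
    context.Response.Headers.Append("Cache-Control", "no-cache");

    try
    {
        // Each iteration completes only when a producer writes to the channel;
        // the loop ends when the client disconnects and RequestAborted fires.
        await foreach (var message in channel.Reader.ReadAllAsync(context.RequestAborted))
        {
            await context.Response.WriteAsync($"data: {message}\n\n");
            await context.Response.Body.FlushAsync();
        }
    }
    catch (OperationCanceledException)
    {
        // The client disconnected; simply end the response.
    }
});
```

One caveat: a single `Channel<T>` delivers each message to exactly one reader, so this pattern suits per-client queues. The one-to-many broadcast scenario is covered in the production section below.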
With this simple endpoint, you have a fully functional Server-Sent Events stream ready to be consumed by a client.
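If you want to watch the raw stream without writing any client code, curl can do it: run `curl -N http://localhost:5000/countdown` (the `-N` flag disables curl's output buffering; adjust the port to match your launch profile) and you should see a new `data:` line arrive every second.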
Consuming SSE with JavaScript’s EventSource
With our ASP.NET Core endpoint streaming data, we now need a client to listen to it. Fortunately, all modern browsers provide a native JavaScript API called EventSource specifically for this purpose. This means you can consume SSE streams without needing any third-party libraries or complex boilerplate code.
Connecting to the Stream
Connecting to our countdown endpoint is a simple one-liner. Create an index.html file and add the following script block:
```javascript
const eventSource = new EventSource('/countdown');
```

This single line of code instructs the browser to make a GET request to the `/countdown` URL with the appropriate `Accept: text/event-stream` header and to keep the connection open to receive events.
The EventSource object is an event emitter with a few key events you can listen for to manage the lifecycle of the connection.
- `onopen`: This event fires exactly once when the connection is successfully established. It is a good place to update your UI to indicate a "connected" state.
- `onmessage`: This is the default event handler. It is triggered for any message received from the server that does not have an `event:` field defined. The payload sent from the server is available in the `event.data` property.
- `onerror`: This event fires if a connection error occurs (e.g., the server becomes unavailable). A key feature of the `EventSource` API is that after an error, it will automatically try to reconnect. The delay before reconnection is determined by the `retry:` value sent by the server or a browser-defined default.
- `addEventListener('eventName', ...)`: For handling custom, named events, you use the standard `addEventListener` method. If your server sends a message like `event: stockUpdate`, you would listen for it on the client with `eventSource.addEventListener('stockUpdate', (event) => { ... });`. This allows for much more structured communication than relying solely on the generic `onmessage` handler.

Putting it all together, here is a complete client for our countdown endpoint:
```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>SSE Countdown</title>
</head>
<body>
    <h1>Countdown: <span id="countdown-display">Waiting for connection...</span></h1>
    <script>
        const display = document.getElementById('countdown-display');
        const eventSource = new EventSource('/countdown');

        eventSource.onopen = () => {
            console.log('Connection to server opened.');
            display.textContent = 'Connected!';
        };

        eventSource.onmessage = (event) => {
            const countdownValue = event.data;
            display.textContent = countdownValue;
            if (countdownValue === '0') {
                display.textContent = 'Blast off!';
                eventSource.close(); // Close the connection when done
            }
        };

        eventSource.onerror = (err) => {
            console.error('EventSource failed:', err);
            display.textContent = 'Connection error!';
            eventSource.close();
        };
    </script>
</body>
</html>
```

When you run your ASP.NET Core application and open this HTML file in your browser, you will see the countdown update in real time.
Finally, you can programmatically close the connection from the client side by calling `eventSource.close()`. This sets the `readyState` property of the `EventSource` object to `CLOSED` and tears down the underlying HTTP request, which in turn triggers the `RequestAborted` token we implemented earlier on the server.
From Demo to Production
Our simple countdown timer is a great start, but real-world applications have more complex requirements. A production-ready SSE implementation needs to handle broadcasting messages to multiple clients, ensure reliable delivery after network interruptions, and maintain connection health.
Broadcasting to Multiple Clients
One of the most common requirements is to send a single event to all currently connected clients. For example, in a notification system, a new alert should be pushed to every active user. Our initial endpoint can’t do this, as it only manages the connection for a single client.
The solution is to create a shared service, typically registered as a singleton, that maintains a collection of all active client connections. This service can then provide a method to broadcast a message to every connection it is tracking.
Here is an example of a simple SseService that uses a ConcurrentDictionary to safely manage connections from multiple threads.
```csharp
// SseService.cs
using System.Collections.Concurrent;

public interface ISseService
{
    void AddClient(HttpResponse response);
    void RemoveClient(HttpResponse response);
    Task BroadcastMessageAsync(string message);
}

public class SseService : ISseService
{
    private readonly ConcurrentDictionary<HttpResponse, bool> _clients = new();

    public void AddClient(HttpResponse response)
    {
        _clients.TryAdd(response, true);
    }

    public void RemoveClient(HttpResponse response)
    {
        _clients.TryRemove(response, out _);
    }

    public async Task BroadcastMessageAsync(string message)
    {
        var disconnectedClients = new List<HttpResponse>();

        foreach (var client in _clients.Keys)
        {
            try
            {
                await client.WriteAsync($"data: {message}\n\n");
                await client.Body.FlushAsync();
            }
            catch (Exception)
            {
                // Mark client for removal if sending fails
                disconnectedClients.Add(client);
            }
        }

        // Remove disconnected clients from the collection
        foreach (var client in disconnectedClients)
        {
            RemoveClient(client);
        }
    }
}
```
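The service must be registered with the singleton lifetime, since every request needs to share the same connection list. A minimal registration in `Program.cs` (assuming the interface and class above):

```csharp
builder.Services.AddSingleton<ISseService, SseService>();
```

Your SSE endpoint would then use this service to manage the connection: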
app.MapGet("/notifications", async (HttpContext context, ISseService sseService) =>
{
context.Response.Headers.Append("Content-Type", "text/event-stream");
sseService.AddClient(context.Response);
// Keep the connection open until the client disconnects
await context.RequestAborted.WaitHandle.WaitOneAsync();
sseService.RemoveClient(context.Response);
});
app.MapPost("/broadcast", async (string message, ISseService sseService) =>
{
await sseService.BroadcastMessageAsync(message);
return Results.Ok();
});This effectively decouples message generation from connection management, creating a robust system for one-to-many communication.
Ensuring Reliability and Scalability
As we discussed, the id field is the key to recovering from network disruptions. When a client with EventSource reconnects, it automatically includes the Last-Event-ID header in its request. Your server-side code can leverage this to provide a more resilient experience.
Upon receiving a connection request, the endpoint should check for this header. If it is present, the server can query its database or message log for any events that occurred after the specified ID and send them to the client as a batch before resuming the live stream.
app.MapGet("/resilient-stream", async (HttpContext context) =>
{
//... set headers...
if (context.Request.Headers.TryGetValue("Last-Event-ID", out var lastEventId))
{
// Logic to retrieve and send missed messages since 'lastEventId'.
// For example, query your database for messages where ID > lastEventId.
var missedMessages = GetMissedMessages(lastEventId);
foreach (var msg in missedMessages)
{
await context.Response
.WriteAsync($"id: {msg.Id}\ndata: {msg.Content}\n\n");
}
await context.Response.Body.FlushAsync();
}
//... begin live streaming, sending an 'id' with each message...
});This transforms SSE from a simple “fire-and-forget” stream into a durable messaging channel, ensuring that clients do not lose critical information during temporary network outages.
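Once every live message carries an `id:` (and possibly an `event:`), it pays to centralize the formatting. Here is a small helper of my own, a sketch built from the field rules covered earlier rather than from the endpoints above:

```csharp
using System.Text;

// Writes one SSE event: optional id/event lines, one "data:" line per payload line
// (which the browser rejoins with newlines), and a blank line to terminate the event.
static async Task WriteEventAsync(
    HttpResponse response, string data, string? id = null, string? eventName = null)
{
    var sb = new StringBuilder();
    if (id is not null) sb.Append("id: ").Append(id).Append('\n');
    if (eventName is not null) sb.Append("event: ").Append(eventName).Append('\n');

    foreach (var line in data.Split('\n'))
    {
        sb.Append("data: ").Append(line).Append('\n');
    }

    sb.Append('\n'); // the blank line terminates the event
    await response.WriteAsync(sb.ToString());
    await response.Body.FlushAsync();
}
```

The live-streaming loop can then emit each message with a single call instead of hand-building the field strings everywhere.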
Connection Health and Keep-Alives
An SSE connection can remain silent for long periods if no new data is available. Many network intermediaries, such as corporate proxies and load balancers, are configured to terminate HTTP connections that they perceive as idle, often after 30-60 seconds. This can cause frequent and unwanted disconnects for your clients.
The solution is to periodically send a “heartbeat” ping to keep the connection active. The SSE protocol provides an elegant, built-in mechanism for this: comments. A line starting with a colon (:) is ignored by the client’s EventSource parser but is still considered network traffic, which resets the idle timers on intermediaries.
You can implement this in your server-side loop by sending a comment every 15-20 seconds.
```csharp
// Inside your streaming loop
while (!context.RequestAborted.IsCancellationRequested)
{
    // ... wait for a new message or a timeout ...

    // If no message arrived within the window (e.g., 15 seconds), send a heartbeat.
    await context.Response.WriteAsync(": heartbeat\n\n");
    await context.Response.Body.FlushAsync();
    await Task.Delay(TimeSpan.FromSeconds(15));
}
```

This simple technique is crucial for maintaining the stability and longevity of SSE connections in real-world network environments.
Scaling SSE in the Real World
As your application grows, you will inevitably face challenges related to scale. For real-time systems like those using SSE, two primary bottlenecks emerge: browser connection limits and the complexities of horizontal scaling.
Bottleneck 1: The HTTP/1.1 Connection Limit
For years, a significant drawback of SSE was the browser’s limitation on the number of concurrent HTTP/1.1 connections to a single domain, typically capped at six. Since a long-lived SSE stream consumes one of these precious connection slots, opening multiple tabs or having other background requests could exhaust the pool, blocking further communication with your server.
Fortunately, this classic problem has been largely solved by the widespread adoption of HTTP/2. With HTTP/2, the browser establishes a single TCP connection to a domain and multiplexes multiple logical streams over it. The limit on the number of simultaneous streams is far higher, often 100 or more. By ensuring your server infrastructure (like Kestrel in ASP.NET Core) is configured for HTTP/2, you effectively neutralize this historical limitation, making SSE a much more viable option for modern applications. This is a critical point that updates the conventional wisdom surrounding SSE’s perceived drawbacks.
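If Kestrel terminates TLS itself, enabling HTTP/2 is mostly a matter of configuring the endpoint protocols. A minimal sketch (the port and certificate choices are illustrative):

```csharp
using Microsoft.AspNetCore.Server.Kestrel.Core;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(options =>
{
    options.ListenAnyIP(5001, listenOptions =>
    {
        // HTTP/2 is negotiated via ALPN over TLS; older clients fall back to HTTP/1.1.
        listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
        listenOptions.UseHttps(); // uses the ASP.NET Core development certificate here
    });
});

var app = builder.Build();
app.Run();
```

If a reverse proxy or load balancer sits in front of Kestrel, it is the proxy's client-facing protocol that determines whether browsers get HTTP/2, so verify its configuration as well.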
Bottleneck 2: Horizontal Scaling and Statefulness
An SSE connection is inherently stateful. The server holds an open connection for a specific client. This presents a challenge when you need to scale your application horizontally by adding more server instances behind a load balancer. How do you ensure a broadcast message, initiated on one server, reaches a client connected to a different server?
Strategy A: Sticky Sessions (Session Affinity)
The most straightforward approach is to configure your load balancer to use “sticky sessions”. This ensures that once a client connects to a particular server instance, all subsequent requests from that client are routed to the same instance.
- Pros: It is simple to configure at the load balancer level and requires no changes to your application code.
- Cons: It undermines the principles of true load balancing and fault tolerance. If a server instance goes down, all clients connected to it are disconnected. It also prevents the load from being distributed perfectly evenly across all instances.
Strategy B: The Backplane Architecture (Stateless Servers)
A far more robust and scalable solution is to make your web servers stateless by introducing a "backplane." This is a centralized message bus, such as Redis Pub/Sub, RabbitMQ, or Azure Service Bus, that all server instances connect to.
The workflow is as follows:
- A client establishes an SSE connection with any available server instance via the load balancer.
- When an event needs to be broadcast (e.g., from an API call), the server that receives the request publishes the message to the backplane.
- The backplane then distributes this message to all subscribed server instances.
- Each server instance, upon receiving the message from the backplane, pushes it to all of its locally connected SSE clients.
This architecture decouples the servers from the connection state, allowing you to add or remove instances seamlessly. If a server fails, its clients will simply reconnect through the load balancer to another healthy instance and resume their streams.
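As a concrete illustration, here is a minimal sketch of steps 2 through 4 using Redis Pub/Sub via the StackExchange.Redis package (recent versions), reusing the `ISseService` from earlier; the channel name and wiring are my own assumptions:

```csharp
using StackExchange.Redis;

// A hosted service that subscribes this instance to the backplane channel (steps 3-4).
public class RedisBackplaneSubscriber : IHostedService
{
    private const string Channel = "sse-broadcast"; // illustrative channel name
    private readonly IConnectionMultiplexer _redis;
    private readonly ISseService _sseService;

    public RedisBackplaneSubscriber(IConnectionMultiplexer redis, ISseService sseService)
    {
        _redis = redis;
        _sseService = sseService;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        await _redis.GetSubscriber().SubscribeAsync(
            RedisChannel.Literal(Channel),
            (_, message) =>
            {
                // Fire-and-forget in this sketch: push each backplane message
                // to the SSE clients connected to this instance.
                _ = _sseService.BroadcastMessageAsync(message.ToString());
            });
    }

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}
```

The `/broadcast` endpoint then publishes to the backplane (step 2) instead of writing to local clients directly:

```csharp
app.MapPost("/broadcast", async (string message, IConnectionMultiplexer redis) =>
{
    await redis.GetSubscriber().PublishAsync(RedisChannel.Literal("sse-broadcast"), message);
    return Results.Ok();
});
```

Registration follows the usual pattern: a singleton `IConnectionMultiplexer` (e.g., from `ConnectionMultiplexer.Connect(...)`) plus `builder.Services.AddHostedService<RedisBackplaneSubscriber>()`.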

It is important to recognize that this scaling challenge is not unique to Server-Sent Events. Any technology that relies on persistent server connections, including WebSockets and gRPC streaming, introduces state into the web tier. The backplane pattern is the canonical solution to this architectural problem, enabling you to build highly scalable and resilient real-time systems regardless of the underlying transport protocol. The decision to implement a backplane is driven by your requirements for scale and resilience, not by your choice of SSE over another technology.
SSE vs. WebSockets vs. SignalR
SSE vs. WebSockets
This is the classic comparison, and the decision hinges on one primary question: do you need bidirectional communication?
- Communication Direction: SSE is strictly unidirectional (server-to-client). WebSockets provide a full-duplex, bidirectional channel where both the client and server can send messages at any time. If your client needs to send a continuous stream of data back to the server over the same connection (e.g., in a chat application or a multiplayer game), you need WebSockets.
- Protocol: SSE is built on standard HTTP/1.1 or HTTP/2, making it simple and highly compatible with existing network infrastructure. WebSockets require an initial HTTP "Upgrade" handshake to establish a separate, TCP-based `ws://` or `wss://` connection.
- Built-in Features: SSE, via the `EventSource` API, comes with built-in support for automatic reconnection, event IDs for message recovery, and named events. Raw WebSockets are a lower-level transport; you must implement these reliability features yourself.
- Data Types: SSE is limited to UTF-8 text-based messages. WebSockets can transmit both text and binary data, making them suitable for streaming media or other non-textual payloads.
SSE vs. SignalR
For .NET developers, the choice often comes down to implementing SSE directly versus using the SignalR library. This is not a comparison of two equal protocols; it is a choice between directly implementing a standard and using a comprehensive framework that abstracts that standard away.
SignalR is a library that provides an abstraction over multiple real-time transport protocols. Its primary value lies in the rich feature set it builds on top of the underlying transport:
- Automatic Transport Negotiation: SignalR automatically detects the best available transport supported by both the client and server, gracefully falling back in order: WebSocket, then Server-Sent Events, and finally HTTP Long Polling. This ensures your application works even on older browsers or restrictive networks.
- Rich API (Hubs): SignalR uses a Hub-based, Remote Procedure Call (RPC) model. You can define methods on your server-side Hub and call them directly from your client-side code (and vice-versa), complete with strongly-typed parameters. This is a much higher level of abstraction than simply sending text messages.
- Advanced Features: SignalR provides powerful, out-of-the-box features like “Groups” (for sending messages to subsets of clients), user management, and seamless scale-out support through official backplanes for Redis and the Azure SignalR Service.
When should you choose to implement SSE directly over using SignalR?
- When your requirements are simple: you only need unidirectional, server-to-client push of text-based data.
- When you want zero third-party dependencies and prefer to work directly with a web standard.
- When you need maximum control over the low-level communication protocol.
When is SignalR the better choice?
- When you need bidirectional communication.
- When you need to support older browsers that do not have native EventSource or WebSocket support.
- When your application logic benefits from an RPC-style API and features like Groups.
- When you want a managed, turnkey solution for scaling out across multiple servers, especially with the Azure SignalR Service.
Conclusion
For years, the conversation around real-time web applications has been dominated by WebSockets. While incredibly powerful, their complexity is often overkill for a very common class of problems: the simple need to push updates from a server to a client. Server-Sent Events provide a solution that is not only simpler but, in many ways, more robust, thanks to its foundation in standard HTTP and its built-in reliability mechanisms.
The evolution of the web platform, particularly the widespread adoption of HTTP/2, has quietly solved one of SSE’s most significant historical drawbacks, making it more relevant and powerful today than ever before. Its production-readiness is not theoretical; major companies like Shopify and Split use SSE to push trillions of events at massive scale, proving its capability in demanding, real-world environments.
As architects and developers, we have a responsibility to choose the right tool for the job, not just the most powerful one. The next time a real-time requirement lands on your desk, resist the automatic impulse to reach for a complex, bidirectional framework. Instead, ask the critical question: “Is the communication unidirectional?” If the answer is yes, give Server-Sent Events the serious consideration it deserves. By doing so, you can build simpler, more maintainable, and remarkably resilient systems that embrace the power and elegance of the open web platform.