When I first started web development, I learned a bit of JS, then some DOM manipulation, then a JS framework. My intro to the backend was MongoDB with a Node.js server. This type of app is usually called a CRUD app, and the majority of the applications I was hired to build early on fit that mold: every feature, no matter how complex it looked on the surface, eventually collapsed down to creating, reading, updating, or deleting a record. At some point I picked up queues and background jobs, and that opened up a whole new category of work I could take on.
But the longer I spent in that world, the more I noticed where the CRUD shape starts to strain. A "create booking" handler grows into something that also writes an audit log, sends a confirmation email, updates an analytics counter, invalidates a cache, and notifies two other services — all in the name of one POST. Reads suffer the same gravitational pull: a single endpoint stitches together five tables, derives state on the fly, and slowly turns into the hardest-to-change part of the codebase. You end up with handlers that are the business logic, and a database that records the present but forgets how it got there.
My recent experimentation with Datastar signals and NATS has pushed me toward a different way of thinking about this. Rather than one handler doing everything in response to a request, you look at the actions a user takes on the page and treat those as the events your system publishes. From there, one or more consumers each do one thing: write to the database, update the UI, send the email, log the activity, fan out to other connected viewers. The handler shrinks to "validate input, publish event." The database becomes one consumer among several. And as a bonus, the event log itself becomes a first-class artifact you can replay, debug, and stream to other clients.
It sounds heavier than CRUD at first, but in practice it's the maintainability story that wins. Adding a new feature later — a notification, an analytics hook, a live activity feed — is no longer a surgery on a fat handler. It's a new consumer subscribing to an event that already exists.
A good test drive for this theory is building a theatre where someone can create a screening, anyone with the link can come in and book seats, and everyone watching sees what the others are doing in real time. This post is about the first half of that build: how I rendered the seat map, and how I thought through the events that flow over it.
The Seat Map Problem
The first real decision in building a theatre booking UI is the one that looks the simplest: how do you render seats?
If you look at Ticketmaster, seats are SVG paths. Each seat is a hand-placed polygon on a venue map, curved to match the arc of the row, sized differently for premium sections, with hover states and zoom controls and a mini-map in the corner. It's impressive engineering. It's also solving a problem I don't have.
I'm building a demo theatre — fixed layout, three sections, uniform rows. So I went the other direction entirely: a CSS grid of squares.
Three sections — Left Wing, Centre, Right Wing — each rendered as a CSS grid of rows and columns. Each seat is a 36×36px rounded square with a border. Inactive positions (the curved edges where seats don't exist) are rendered as empty spacer divs of the same dimensions, so the grid holds its shape.
The whole layout is plain HTML and CSS. No canvas, no SVG, no JS layout engine.
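In templ terms, a section is just a nested loop. Here's a minimal sketch, assuming a `[][]Seat` layout and the Seat struct shown in the next section (class names are illustrative, not the demo's actual markup):

```go
templ SeatSection(rows [][]Seat) {
	<div class="seat-grid">
		for _, row := range rows {
			for _, seat := range row {
				if seat.IsActive {
					// A real seat: addressable by ID, styled by status.
					<div id={ seat.SeatID() } class={ "seat", seat.Status }></div>
				} else {
					// Inactive position: same footprint, keeps the grid aligned.
					<div class="seat-spacer"></div>
				}
			}
		}
	</div>
}
```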
Four states, not two
The naive version of a seat map has two states: available and booked. That's fine if one person books at a time. Real concurrency needs more.
I settled on four:
- available — grey, hoverable, clickable
- selected — orange, in your current cart
- other_selected — yellow and faded, someone else is looking at it right now
- booked — blue, permanent, not clickable
type Seat struct {
Row string // A–E
Col int
Section string // "L", "C", "R"
IsActive bool
Status string // "available" | "selected" | "other_selected" | "booked"
}
other_selected is the interesting one. It's not a database state — it exists only in memory, for the duration of a live session. When you connect to a screening, you see orange where you've picked seats, yellow where other live viewers have tentatively picked seats, and blue where seats are permanently gone. When someone disconnects, their yellow seats go back to grey.
This distinction matters a lot for the event design, which I'll get to in a moment.
Seat IDs as coordinates
Each seat ID encodes its own location: L-A1, C-B5, R-D3. Section, row, column — everything you need to find it in the grid without a lookup table. The DOM element ID matches the seat ID directly, which means the server can tell the browser "update seat C-B5" and the browser knows exactly which element to patch.
func (s *Seat) SeatID() string {
if !s.IsActive {
return ""
}
return fmt.Sprintf("%s-%s%d", s.Section, s.Row, s.Col)
}
This becomes important once you're pushing seat state updates over SSE — you want the server to be able to address any seat directly, without the client needing to maintain a map.
Thinking Through the Events
Once you have a real-time multi-user UI, you need to think carefully about what actually needs to be an event, and what the payload of each event should be.
I ended up with three distinct event types, each doing a different job.
1. Selection events
When a user clicks a seat, two things happen immediately: their own UI updates (optimistic), and an event is published so every other viewer's UI can update too.
type Signal struct {
ScreeningID string `json:"screening_id"`
SessionID string `json:"session_id"`
SelectedSeatIDs []string `json:"selected_seat_ids"`
SeatID string `json:"seat_id"`
Action string `json:"action"` // "select" | "deselect"
}
Notice SelectedSeatIDs — the full list of seats the user currently has selected, not just the one they just clicked. This is deliberate. If you only send the delta (one seat, one action), consumers need to maintain state to know what a viewer's full selection looks like. By sending the full list every time, each event is self-contained. Any consumer can reconstruct the current state from any single event.
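That self-containedness shows up directly in consumer code. A sketch, with the delta-based alternative as a comment for contrast:

```go
// Full-list events: any single Signal tells you the viewer's whole selection.
selection := make(map[string]bool, len(sig.SelectedSeatIDs))
for _, id := range sig.SelectedSeatIDs {
	selection[id] = true
}

// Delta-only events would force every consumer to carry state instead:
//   perSession[sig.SessionID] = applyDelta(perSession[sig.SessionID], sig.SeatID, sig.Action)
```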
Action is still there for the activity log — you want to say "User joined Left Wing A3" or "User released Centre B2" — but the seat rendering logic ignores it and just works from the full list.
2. Booking events
When a user confirms their selection, that's a fundamentally different event. It's permanent. It writes to the database. And it needs to tell everyone — not just "this user selected these seats" but "these seats are now gone for everyone."
type BookingEvent struct {
ScreeningID string `json:"screening_id"`
BookedSeats []string `json:"booked_seats"`
SessionID string `json:"session_id"`
}
When other connected viewers receive a booking event, they check if any of the newly booked seats overlap with their own selection. If they do, those seats get removed from the viewer's cart automatically — you can't book something someone else just confirmed.
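A sketch of that check on the consumer side (variable names are mine, not the demo's literal code):

```go
// Index the freshly booked seats, then keep only the selections that
// survived. Anything removed gets re-rendered out of the cart.
booked := make(map[string]bool, len(evt.BookedSeats))
for _, id := range evt.BookedSeats {
	booked[id] = true
}
kept := mySelection[:0]
for _, id := range mySelection {
	if !booked[id] {
		kept = append(kept, id)
	}
}
mySelection = kept
```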
3. Presence events
The third event type has nothing to do with seats. It tracks who's in the room.
type ViewerEvent struct {
SessionID string `json:"session_id"`
Action string `json:"action"` // "join" | "leave"
Color string `json:"color"`
}
Each viewer gets a deterministic color generated from their session ID — same session always gets the same color, no coordination needed, no database lookup. When you join, a presence event fires. When you disconnect (or close the tab), a leave event fires. The viewer list in the sidebar updates in real time.
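A minimal sketch of that determinism, assuming an FNV hash over the session ID and a fixed palette (the demo's actual colours may differ):

```go
import "hash/fnv"

// palette is illustrative; any fixed, ordered set of colours works.
var palette = []string{"#e4572e", "#17bebb", "#ffc914", "#76b041", "#9b5de5"}

// sessionColor is pure: the same session ID always hashes to the same
// palette entry, so every process agrees with no coordination or storage.
func sessionColor(sessionID string) string {
	h := fnv.New32a()
	h.Write([]byte(sessionID))
	return palette[h.Sum32()%uint32(len(palette))]
}
```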
Why three separate event types?
The honest answer is that the events fell out of the UI, not the other way around. Before I wrote a single line of NATS code, I sketched the page: the seat map sits in the middle, an activity feed streams down the left, the list of active anonymous viewers lives on the right, and the user's currently-selected seats sit in a strip beneath the theatre. Four regions, each updating independently, each in response to something different happening on the network.
Splitting the events along the same seams as the UI also makes the stream itself easier to reason about — when I'm watching messages flow past in nats sub, I can tell at a glance which region of the screen is moving without decoding a payload.
You could model all of this as one generic "theatre event" with a type field. I deliberately didn't. Beyond matching the UI, the three types differ along axes that matter operationally:
| | Selection | Booking | Presence |
|---|---|---|---|
| Persisted to DB? | No | Yes | No |
| Affects seat state? | Yes (temporary) | Yes (permanent) | No |
| Affects viewer list? | No | No | Yes |
| Replayable? | Yes | Yes | Yes |
If you played with the theatre demo — opened it in two tabs, picked some seats, watched the activity feed update on the other side, saw another viewer's avatar appear in the sidebar — you might assume there's a fair amount of machinery behind it. A typical real-time UI built the React way would drag in a lot of things to make that work:
- A Redux or Zustand store with reducers for seat state, the viewer list, and the activity feed
- A WebSocket client wrapper with reconnect logic, exponential backoff, and a "is my connection alive?" heartbeat
- A `switch` statement on incoming messages, dispatching to the right reducer based on a `type` discriminator
- Optimistic update logic for clicks, plus rollback paths for when the server rejects
- A separate API client layer for the booking POST, returning new state to merge into the store
- `useEffect` hooks coordinating when to mount the socket, when to tear it down, and how to avoid the double-subscribe in StrictMode
- React Query or SWR for the initial fetch, kept in sync with the live socket through some bespoke glue
That's not a strawman — that's a fair description of what you'd actually write. Each piece is reasonable on its own; together they're the load-bearing structure of most production real-time apps.
The theatre has none of it.
One line opens the connection
Here is the entire client-side setup for the live page:
<div
data-signals={ templ.JSONString(sig) }
data-init={ datastar.GetSSE("/theatre/screening/%s/events", sig.ScreeningID) }
>
...
</div>
data-init runs once when the element is mounted. datastar.GetSSE opens an SSE connection to the URL and keeps it open. From that moment on, every patch the server sends — a seat changing colour, a viewer joining, a new line in the activity feed — is applied directly to the DOM.
There is no client state. There is no reducer. There is no if (socket.readyState === OPEN) check. The browser is a dumb terminal; the server is the only place that knows anything.
The only other piece of client-side wiring is the click handler on a seat:
<div
id={ seat.SeatID() }
data-on:click={ datastar.PostSSE("/theatre/seat/%s/toggle", seat.SeatID()) }
...
></div>
That's it. Click sends a POST. The POST returns nothing. The seat changes colour because the SSE stream delivers the patch a moment later — not because the POST handler returned new HTML.
That last sentence is the whole architecture of the page in one line. Everything else is a consequence of it.
A command/query split at the transport layer
What's happening here is a shape borrowed from CQRS — Command Query Responsibility Segregation. I want to be precise about what I mean, because CQRS in the strict sense is a bigger architectural pattern, with separate write and read models, eventual consistency between them, and often separate stores. The theatre doesn't do any of that. It has one Postgres database and one stream.
What the theatre does do is split commands and queries at the transport layer:
- Commands travel over POST. A click on a seat is a `POST /theatre/seat/L-A1/toggle`. A confirmed booking is a `POST /theatre/book`. The server validates, mutates state (in memory or in the database), publishes an event, and returns `200 OK` with no body. The HTTP response is not the UI update — it's just the acknowledgement that the command was accepted.
- Queries travel over SSE. The single open connection at `/screening/{id}/events` is the channel through which the browser learns about every state change — its own clicks included. The SSE stream is the read model.
This matters because it changes what each side of the wire is responsible for. The browser is no longer trying to predict what the server will do, mutate local state optimistically, and reconcile when the server disagrees. It just sends commands and renders whatever HTML the server pushes back. The server is no longer trying to compose a response payload that fits the caller's expectation — it just publishes events, and the SSE consumer decides what HTML fragment each event becomes.
The query side: anatomy of StreamEvents
The route that does the heavy lifting:
r.Get("/screening/{screening_id}/events", h.StreamEvents)
Every connected viewer holds one of these open for the duration of their visit. The first version I wrote put everything in one handler — open the SSE writer, register presence, subscribe to JetStream, loop forever. That's the natural shape if you've been writing CRUD handlers for years: the request comes in, the handler does the thing. I'll show that version first, because it's the one that taught me where the seams should be. Later in the post I'll show what I changed.
1. Open an SSE writer.
sse := datastar.NewSSE(w, r)
Datastar's helper wraps the http.ResponseWriter and gives us methods like PatchElementTempl and MarshalAndPatchSignals that emit correctly framed SSE messages.
2. Register presence.
activeViewers.Lock()
activeViewers.Users[mySession] = myColor
activeViewers.Unlock()
h.publishPresence(r.Context(), screeningID, mySession, "join")
defer func() {
activeViewers.Lock()
delete(activeViewers.Users, mySession)
activeViewers.Unlock()
h.publishPresence(context.Background(), screeningID, mySession, "leave")
}()
Joining is just inserting yourself into a process-local map and publishing a presence event. Leaving is a defer that runs when the SSE handler returns — which happens when the client disconnects, the tab closes, or the user navigates away. No heartbeats, no timeouts. The TCP connection itself is the liveness signal.
3. Subscribe to JetStream.
subject := fmt.Sprintf("theatre.screening.%s.>", screeningID)
cons, err := h.js.OrderedConsumer(r.Context(), natsStream, jetstream.OrderedConsumerConfig{
FilterSubjects: []string{subject},
DeliverPolicy: jetstream.DeliverNewPolicy,
})
One stream (theatre_demo) holds every theatre event ever published. The subject hierarchy theatre.screening.{id}.{type} lets a single subscription with a wildcard filter pick out exactly the events for the screening this viewer is watching, ignoring everything else.
DeliverNewPolicy means "only deliver messages that arrive after I subscribe" — a live viewer doesn't want the entire history replayed at them. (The replay endpoint, which I'll get to, uses DeliverAllPolicy for exactly the opposite reason.)
OrderedConsumer is the lightweight, ephemeral consumer flavour. There's no consumer state stored on the server — when the connection drops, the consumer is gone. That's the right trade-off for a live UI: if you reconnect, you don't want to be replayed messages from before the disconnect, you want to see the world as it is now.
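For context, the stream itself is declared once at startup. Roughly, and with the caveat that the demo's exact config may differ (as noted later, it sets no MaxAge):

```go
js, _ := jetstream.New(nc) // nc: an existing *nats.Conn
if _, err := js.CreateStream(ctx, jetstream.StreamConfig{
	Name:     "theatre_demo",
	Subjects: []string{"theatre.screening.>"}, // every screening, every event type
	Storage:  jetstream.FileStorage,           // durable on disk: restarts and replay both work
}); err != nil {
	log.Fatal(err)
}
```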
4. Loop, decode, patch.
for {
msg, err := msgs.Next()
if err != nil {
return // client disconnected or context cancelled
}
parts := strings.Split(msg.Subject(), ".")
eventType := parts[3]
switch eventType {
case "selection":
// decode Signal, patch the seat, update activity feed
case "booked":
// decode BookingEvent, patch seats to "booked", remove from cart
case "presence":
// decode ViewerEvent, append or remove viewer card
}
}
The full version is longer, but the shape is exactly that: pull the next message, look at the subject's fourth segment to find out what kind of event it is, decode the payload, and emit one or more HTML patches.
A representative slice of the selection branch:
case "selection":
var sig Signal
if err := json.Unmarshal(msg.Data(), &sig); err != nil {
continue
}
if sig.SessionID == mySession {
// It's my own click coming back to me — colour the seat orange
// and update my client-side selection state.
sse.PatchElementTempl(
TheatreSeat(sig.SeatID, "selected"),
datastar.WithModeOuter(),
datastar.WithSelectorID(sig.SeatID),
)
sse.MarshalAndPatchSignals(sig)
sse.PatchElementTempl(SeatFooter(sig.SelectedSeatIDs, []string{}), datastar.WithSelector("#seat-footer"))
} else {
// It's someone else's click — colour the seat yellow.
sse.PatchElementTempl(
TheatreSeat(sig.SeatID, "other_selected"),
datastar.WithModeOuter(),
datastar.WithSelectorID(sig.SeatID),
)
}
Two things about this that I want to call out.
First, the same event produces a different HTML fragment depending on who's watching. My own click paints orange; your click paints yellow. There's no separate "broadcast to others" channel — every viewer subscribes to the same subject, and the consumer decides per-viewer what fragment to render. That logic lives in exactly one place: the SSE handler.
Second, PatchElementTempl takes a templ component and a CSS selector. The seat ID is the DOM element ID (we set this up in the seat ID section — L-A1, C-B5, etc.), so the server can address any seat in any viewer's DOM directly. No client-side lookup table, no virtual DOM diff — just "swap the element with id C-B5 for this freshly rendered HTML."
The command side: a click sends, doesn't render
The companion handler:
func (h *handler) ToggleSeat(w http.ResponseWriter, r *http.Request) {
seatID := chi.URLParam(r, "seat_id")
var sig Signal
if err := datastar.ReadSignals(r, &sig); err != nil {
http.Error(w, "Invalid signal: "+err.Error(), http.StatusBadRequest)
return
}
sig.SessionID = sessionID(r)
sig.SeatID = seatID
if found, idx := sliceContains(sig.SelectedSeatIDs, seatID); found {
sig.SelectedSeatIDs = sliceRemoveAt(sig.SelectedSeatIDs, idx)
sig.Action = "deselect"
} else {
sig.SelectedSeatIDs = append(sig.SelectedSeatIDs, seatID)
sig.Action = "select"
}
subject := fmt.Sprintf("theatre.screening.%s.selection.%s", sig.ScreeningID, sig.SessionID)
data, _ := json.Marshal(sig)
h.js.Publish(r.Context(), subject, data)
w.WriteHeader(http.StatusOK)
}
Notice what's missing: there is no HTML in this response. No templ.Handler, no sse.Patch.... The handler reads the current selection out of the request, mutates it, publishes the event to JetStream, and returns 200.
The seat doesn't change colour because of this handler. It changes colour because — a few milliseconds later — the JetStream message reaches every connected StreamEvents goroutine (including the one belonging to the user who just clicked), and those are the things that emit the HTML patches.
This is the part that surprises people coming from request/response thinking. The POST is fire-and-forget for UI purposes. If you closed the SSE stream and only kept the POST, the click would still be accepted and the event published — you just wouldn't see anything happen in the browser.
That's the command/query split made concrete: the command travels one direction and confirms with a status code; the query channel decides what the UI looks like.
I'll be honest: this is the part that took me the longest to get my head around. Years of writing CRUD handlers train you to think of a POST as the moment the UI updates — you mutate, you re-render, you return the new state, the browser swaps it in. Sometimes there's a redirect. Sometimes there's a JSON payload the client merges into a store. Either way, the request and the visual change are bound together in your head as one thing. Decoupling them feels wrong at first, like you've forgotten a step. The handler runs, returns 200, and… nothing happens? It took building two or three of these small features before the model finally clicked: the POST is just an instruction to the server to do something and tell the world about it. What the world does in response is somebody else's job — specifically, the SSE handler's. Once that landed, a lot of the patterns I'd carried over from CRUD-shaped apps stopped feeling necessary.
What this buys you
The architecture isn't free — you're paying for it with a permanent open connection per viewer, an event broker in the middle, and a slightly more abstract mental model. What you get back:
- No client state machine. Nothing on the browser side decides what a seat should look like. The server pushes HTML; the DOM applies it. The bug class of "client and server disagree about state" doesn't exist because there is no client state to disagree with.
- Multi-viewer fan-out is free. One viewer's click is just an event on a subject. Anyone subscribed to that subject reacts. Going from one tab to ten tabs requires zero new code.
- The activity feed, viewer list, and seat map all use the same plumbing. They're three regions of the screen driven by three event types on the same SSE stream. Adding a fourth region — a chat panel, a live booking counter, a "now showing" banner — means writing one more case in the switch statement and one more templ component. The transport, the consumer loop, and the connection management all stay the same.
- You get replay almost for free. Because every state-changing event lives in JetStream forever, the same handler can be pointed at the historical log instead of the live tail. That's the next section.
Replay: the same code, a different deliver policy
Before I show the replay route, it's worth saying a word about what a stream actually is, because the rest of this section depends on it. A stream is an ordered, append-only, immutable log: events arrive, they get a sequence number, they sit there in order, and you can't go back and modify history — you can only append to it. That's the entire data structure. It's the same shape as a Kafka topic, a Postgres WAL, a git commit history. Boring on the surface, surprisingly powerful in practice.
The interesting consequence is that, because state never lives optimistically on the client and every change in the system was published as an event, we get to choose what's worth remembering and the log holds onto it for as long as we want. In a CRUD app, by the time a user clicks a seat, hovers another, changes their mind, and confirms a third, the database has only one record of the final outcome — the journey is gone. Here, the journey is the data. If a future-me decides it's worth knowing which seats got hovered before being booked, or how long a typical user takes between joining a screening and committing to a selection, the answer is already sitting in the log. I didn't have to add analytics code; I just have to read the events I was already publishing for the live UI.
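Answering one of those questions is just another consumer pointed at the top of the log. A sketch (the function and its names are mine, not code from the demo; it assumes at least one selection event exists):

```go
// How many times was each seat tentatively selected in this screening?
// Derived entirely from events that were already being published anyway.
func selectionCounts(ctx context.Context, js jetstream.JetStream, screeningID string) (map[string]int, error) {
	cons, err := js.OrderedConsumer(ctx, "theatre_demo", jetstream.OrderedConsumerConfig{
		FilterSubjects: []string{"theatre.screening." + screeningID + ".selection.>"},
		DeliverPolicy:  jetstream.DeliverAllPolicy, // read from the beginning of history
	})
	if err != nil {
		return nil, err
	}
	msgs, err := cons.Messages()
	if err != nil {
		return nil, err
	}
	defer msgs.Stop()
	counts := map[string]int{}
	for {
		msg, err := msgs.Next()
		if err != nil {
			return counts, err
		}
		var sig Signal
		if json.Unmarshal(msg.Data(), &sig) == nil && sig.Action == "select" {
			counts[sig.SeatID]++
		}
		if meta, err := msg.Metadata(); err == nil && meta.NumPending == 0 {
			return counts, nil // caught up: end of the historical log
		}
	}
}
```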
That property — the log as the durable, replayable source of truth — is what makes the next thing possible. Since every event for every screening is sitting there in order, I can wire up a second SSE endpoint that consumes the same stream from the beginning instead of the live tail, and watch the screening happen again. So I built it.
The replay route is registered next to the live one:
r.Get("/screening/{screening_id}/replay/events", h.StreamReplayEvents)
The handler is structurally identical to StreamEvents. It opens an SSE connection, creates an ordered consumer, loops through messages, decodes them, and emits HTML patches. There are exactly two differences worth knowing about:
cons, err := h.js.OrderedConsumer(r.Context(), natsStream, jetstream.OrderedConsumerConfig{
FilterSubjects: []string{subject},
DeliverPolicy: jetstream.DeliverAllPolicy, // ← was DeliverNewPolicy
})
DeliverAllPolicy tells JetStream "give me every message ever published to this subject, in order." The same consumer code that processes live events now processes the historical log, oldest-first.
The other difference is a time.Sleep(500 * time.Millisecond) in the loop, so the events play back at a watchable pace instead of arriving in a single instant. There's also a small progress indicator — current / total derived from msg.Metadata().NumPending — patched into the page on each tick.
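Put together, the pacing and the progress indicator look roughly like this (ReplayProgress is a hypothetical templ component; msgs and sse are the handler's consumer and SSE writer from above):

```go
var current, total uint64
for {
	msg, err := msgs.Next()
	if err != nil {
		return
	}
	meta, err := msg.Metadata()
	if err != nil {
		continue
	}
	current++
	if total == 0 {
		// NumPending is how many messages remain after this one,
		// so the first message fixes the denominator.
		total = current + meta.NumPending
	}
	sse.PatchElementTempl(ReplayProgress(current, total), datastar.WithSelector("#replay-progress"))
	// ... decode the message and emit patches exactly as in the live loop ...
	if meta.NumPending == 0 {
		return // history fully replayed
	}
	time.Sleep(500 * time.Millisecond)
}
```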
That's the entire replay feature. No separate event log, no separate replay infrastructure, no "rebuild the world from scratch" routine. The events were already durable in JetStream because the live viewers needed them to be; replay is just a different way of consuming them.
There's one small adjustment in the replay branch: it always paints other-people's selections as yellow (other_selected), never as orange, because nobody owns the cart in a replay — you're a spectator watching the past, not a participant.
The thing you don't have to write
Step back from the code and look at what's not in the codebase:
- No WebSocket reconnect logic. SSE is just HTTP, and Datastar's client handles reconnection if the TCP connection drops.
- No client-side state. There's no `seats[]` array on the browser, no `viewers[]`, no `activity[]`. Those things exist as DOM nodes, period.
- No diffing. The server sends fully-rendered HTML for the elements that changed, addressed by ID. The browser swaps them in.
- No serialisation contract between front and back. There's no `SeatDTO` or `ViewerDTO` shared between Go and TypeScript. The "API" is HTML fragments rendered by templ.
- No optimistic updates. The round-trip is fast enough — POST, JetStream, SSE, DOM patch — that you don't notice the absence.
- No `useEffect`. (Not just in this app — in the entire stack.)
The trade-off is real: you give up the offline story, the rich client-side interactivity that demands no round-trip, and the option to deploy a static frontend separately from the backend. For a UI like a theatre booking system — where state is shared, the server is authoritative, and the user experience is the server pushing updates — those aren't trade-offs. They're things you didn't need anyway.
What could be improved
This is a demo, and there are real seams: process-local presence breaks behind a load balancer, late-joiners don't see live other_selected seats, reconnects drop events, the JetStream stream has no MaxAge, and the booking handler has a race condition. Two of these are worth dwelling on.
The booking race condition, and why the fix is a UX change
There are actually two related problems here, and they share the same root cause.
The first is a true race condition in bookSeats. Two users with overlapping carts hit "Confirm" within the same millisecond. Both transactions run their INSERT ... ON CONFLICT DO NOTHING batch against seat_bookings. The faster transaction inserts every row it asked for; the slower one silently inserts only the non-overlapping seats and returns no error. The slow user sees "booking confirmed" and walks away missing seats they thought they bought.
The second is a softer version of the same thing, and it shows up in the live flow rather than at confirm time. When another user books seats that overlap with what you currently have selected, my SSE handler quietly removes those seats from your selection and updates your footer. The cart corrects itself, which feels tidy, but it's logically wrong: you picked those seats, they vanished, and nothing told you why. Even if you'd been sitting on a selection for two minutes and someone else booked one of your seats out from under you, all you'd see is the chip silently disappear from the strip beneath the theatre.
Both problems are the same shape. There's no notion of "I have a claim on this seat" — only "I clicked it, in memory, and I hope nobody beats me to confirming." Until the database insert at the end, every other user is racing me, and the system silently picks a winner.
The instinct is to fix this at the SQL layer — check the inserted row count, fail if it doesn't match what was requested, return an error. That works, but it's the wrong altitude. The real problem isn't that two writes collided; it's that the UX gave both users the impression they could safely commit those seats at the same moment. A serious booking system doesn't book in one click — it holds the seats while the user goes through checkout, payment, and confirmation. The hold is the lock.
In a production version of this app, that means a seat_locks table with a (screening_id, seat_id) UNIQUE constraint and an expires_at column. Clicking a seat acquires a lock that expires in five minutes; entering checkout extends it; completing payment converts the locks into rows in seat_bookings; abandoning the tab lets the lock expire and the seats return to the pool. The unique constraint is what makes the acquisition race-free; the expiry is what keeps abandoned carts from poisoning the system.
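For concreteness, acquiring a hold could look something like this. It's a sketch I haven't built; the table and column names are illustrative:

```go
// One row per held seat. UNIQUE (screening_id, seat_id) makes acquisition
// race-free; the conditional upsert lets a new session take over a lock
// only once the previous hold has expired.
const acquireHold = `
INSERT INTO seat_locks (screening_id, seat_id, session_id, expires_at)
VALUES ($1, $2, $3, now() + interval '5 minutes')
ON CONFLICT (screening_id, seat_id) DO UPDATE
SET session_id = EXCLUDED.session_id, expires_at = EXCLUDED.expires_at
WHERE seat_locks.expires_at < now()`

res, err := tx.ExecContext(ctx, acquireHold, screeningID, seatID, sessionID)
if err != nil {
	return err
}
if n, _ := res.RowsAffected(); n == 0 {
	return fmt.Errorf("seat %s is already held by someone else", seatID)
}
```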
I didn't build that. The whole point of this demo was to get my head around event sourcing and how to drive a UI from a stream — not to ship a Ticketmaster competitor. But it's worth flagging, because someone reading the code might assume the booking flow is correct just because it works in the happy path.
What I changed: one consumer per screening, not per viewer
The first version of StreamEvents had a real architectural problem, and it's the most interesting thing in this post — both because of what was wrong and because the fix forced me to unlearn a CRUD instinct.
Look back at step 3 of the walkthrough. Every connected viewer creates its own OrderedConsumer against JetStream. That means if 500 people are watching the same screening, JetStream has 500 separate consumers — each one filtering with the same wildcard, each one reading the same stream of messages, each one decoding the same payloads. JetStream is happy to do this, but it's wasteful: I'm asking the server to do the same work 500 times for what should be one read and a fan-out.
Coming from a CRUD background, my instinct was to put everything inside the SSE handler — that's where the "request" was, so that's where the work went. Splitting the consumer out felt unnecessary at first. But the SSE handler isn't a CRUD handler; it's a per-viewer renderer. The actual reading of events from JetStream is a shared concern that doesn't belong inside it.
The mental model that made this click for me: NATS is the durability layer; in-process fan-out is the broadcast layer. They're two different jobs and conflating them costs you linear-with-viewer work that should have been constant. The SSE handler is just the last mile.
So I split it. There's a new file, hub.go, that introduces three pieces:
// One per active screening. Owns the JetStream consumer.
type hub struct {
screeningID string
js jetstream.JetStream
mu sync.Mutex
subscribers map[chan decodedEvent]struct{}
cancel context.CancelFunc
}
// Process-wide map of screeningID → *hub.
type hubRegistry struct {
sync.Mutex
hubs map[string]*hub
js jetstream.JetStream
}
subscribe is the one method viewers actually call. It returns a Go channel of decoded events plus an unsubscribe func:
func (r *hubRegistry) subscribe(screeningID string) (<-chan decodedEvent, func()) {
r.Lock()
h, ok := r.hubs[screeningID]
if !ok {
ctx, cancel := context.WithCancel(context.Background())
h = &hub{ /* ... */ cancel: cancel }
r.hubs[screeningID] = h
go h.run(ctx) // first subscriber starts the consumer
}
r.Unlock()
ch := make(chan decodedEvent, 64)
h.mu.Lock()
h.subscribers[ch] = struct{}{}
h.mu.Unlock()
unsubscribe := func() {
// ... if this was the last subscriber, shut the hub down ...
}
return ch, unsubscribe
}
The first viewer for a screening creates the hub and kicks off its goroutine. Every subsequent viewer just attaches a channel. The last viewer to leave shuts the hub down so it doesn't leak forever.
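The elided unsubscribe body is the mirror image of subscribing. A sketch; the real code may order things differently:

```go
unsubscribe := func() {
	h.mu.Lock()
	delete(h.subscribers, ch)
	last := len(h.subscribers) == 0
	h.mu.Unlock()
	if last {
		// Last viewer out: drop the hub from the registry so a fresh one
		// can be created later, then stop its JetStream consumer.
		r.Lock()
		delete(r.hubs, screeningID)
		r.Unlock()
		h.cancel()
	}
}
```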
The hub's run method is the loop that used to live inside StreamEvents — open an OrderedConsumer, pull messages, decode based on subject, broadcast to every subscriber's channel:
func (h *hub) run(ctx context.Context) {
cons, _ := h.js.OrderedConsumer(ctx, natsStream, jetstream.OrderedConsumerConfig{
FilterSubjects: []string{"theatre.screening." + h.screeningID + ".>"},
DeliverPolicy: jetstream.DeliverNewPolicy,
})
msgs, _ := cons.Messages()
defer msgs.Stop()
for {
msg, err := msgs.Next()
if err != nil { return }
evt := decode(msg) // single unmarshal, shared across all viewers
h.broadcast(evt)
}
}
And the SSE handler — which used to be 100+ lines of "open consumer, decode, render" — is now a thin renderer:
events, unsubscribe := h.hubs.subscribe(screeningID)
defer unsubscribe()
for {
select {
case <-r.Context().Done():
return
case evt, ok := <-events:
if !ok { return }
// turn evt into HTML patches for this viewer
}
}
The handler still does the per-viewer work — deciding whether to paint a seat orange or yellow, updating the cart for the user who clicked, ignoring presence events for sessions that aren't relevant. That's where it should be, because that's the only thing the viewer's identity matters for. Everything that's not per-viewer (subject filtering, JetStream I/O, JSON decoding) now happens once.
There's one trade-off baked into the hub: each subscriber channel has a 64-message buffer, and if a subscriber falls behind, the hub drops the message rather than blocking. A wedged viewer can miss events; the alternative is one slow viewer freezing every other viewer's UI. For a real-time UI where the next event is usually only a second away, missing one is recoverable; locking up the page is not.
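That drop-instead-of-block policy is just a non-blocking send. A sketch of what the hub's broadcast method, which I didn't show above, could look like:

```go
func (h *hub) broadcast(evt decodedEvent) {
	h.mu.Lock()
	defer h.mu.Unlock()
	for ch := range h.subscribers {
		select {
		case ch <- evt: // subscriber has buffer room
		default: // buffer full: drop for this viewer rather than stall the hub
		}
	}
}
```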
The lesson I keep coming back to: in event-driven UIs, the handler is the last thing you write, not the first. The consumer, the decoding, the fan-out — those belong to the system, not to a request. Treating the SSE handler as a request handler instead of a render function is the CRUD habit that took me longest to drop.