Supabase Realtime in Next.js: Building Live Features
TL;DR
Supabase Realtime gives you three primitives — database changes, Presence, and Broadcast — all over a single WebSocket connection. I use it in production across multiple projects: EuroParts Lanka has real-time order status updates so customers see their part move from "Processing" to "Shipped" without refreshing. uvin.lk has live notification delivery. This guide covers everything I have learned building those features: setting up subscriptions, handling INSERT/UPDATE/DELETE events with proper typing, tracking who is online with Presence, sending custom events with Broadcast, the RLS gotcha that will silently break your subscriptions, managing channel limits, cleaning up subscriptions to avoid memory leaks, and building a complete live notification system. If you are building anything with Supabase and Next.js through my services or on your own, this is the guide I wish I had when I started.
What Supabase Realtime Does
Supabase Realtime is a server built on top of PostgreSQL's replication functionality. When a row gets inserted, updated, or deleted, PostgreSQL emits a change event. Supabase captures that event and broadcasts it to every client subscribed to that table over WebSockets.
But it is more than just database change notifications. Supabase Realtime gives you three distinct features:
- Postgres Changes — Subscribe to INSERT, UPDATE, and DELETE events on any table. Filter by column values. Get the old and new row data.
- Presence — Track and share ephemeral state between clients. Think "who is online" indicators, cursor positions in collaborative editors, or "5 people are viewing this product" counters.
- Broadcast — Send arbitrary messages to all clients in a channel. No database involved. Think typing indicators, cursor movements, or game state updates.
All three ride on the same WebSocket connection. You do not need Socket.IO. You do not need a separate WebSocket server. You do not need to manage reconnection logic. Supabase handles all of it.
The mental model I use: Postgres Changes is for durable data that lives in your database. Presence is for ephemeral state tied to a user session. Broadcast is for fire-and-forget messages between clients.
Setting Up Realtime Subscriptions
Before any subscription works, you need two things: the Supabase client configured in your Next.js app, and Realtime enabled on the tables you want to watch.
First, the client setup. I always create a shared Supabase client for client components:
// lib/supabase/client.ts
import { createBrowserClient } from "@supabase/ssr";
import type { Database } from "@/types/supabase";
export function createClient() {
return createBrowserClient<Database>(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
}

Then enable Realtime on the tables you need. In the Supabase dashboard, go to Database > Replication and toggle the tables on. Or do it with SQL, which I prefer because it is version-controlled in my migrations:
-- Enable realtime for the orders table
ALTER PUBLICATION supabase_realtime ADD TABLE orders;
-- Enable realtime for notifications table
ALTER PUBLICATION supabase_realtime ADD TABLE notifications;

One thing people miss: Realtime is not enabled by default on any table. If you subscribe and nothing happens, this is the first thing to check.
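A quick way to audit which tables are actually publishing changes is to query the publication catalog directly:

```sql
-- Lists every table currently included in the Realtime publication
SELECT schemaname, tablename
FROM pg_publication_tables
WHERE pubname = 'supabase_realtime';
```

If the table you are subscribing to is missing from this list, no event will ever arrive, no matter how correct your client code is.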
Database Changes — INSERT/UPDATE/DELETE
This is the feature most people reach for first. You want to know when a row changes so you can update the UI without polling.
Here is how I subscribe to order status changes in EuroParts Lanka:
// hooks/use-order-updates.ts
"use client";
import { useEffect, useState } from "react";
import { createClient } from "@/lib/supabase/client";
import type { RealtimePostgresChangesPayload } from "@supabase/supabase-js";
import type { Database } from "@/types/supabase";
type Order = Database["public"]["Tables"]["orders"]["Row"];
export function useOrderUpdates(orderId: string) {
const [order, setOrder] = useState<Order | null>(null);
const supabase = createClient();
useEffect(() => {
// Fetch initial data
async function fetchOrder() {
const { data } = await supabase
.from("orders")
.select("*")
.eq("id", orderId)
.single();
if (data) setOrder(data);
}
fetchOrder();
// Subscribe to changes
const channel = supabase
.channel(`order-${orderId}`)
.on<Order>(
"postgres_changes",
{
event: "UPDATE",
schema: "public",
table: "orders",
filter: `id=eq.${orderId}`,
},
(payload: RealtimePostgresChangesPayload<Order>) => {
if (payload.new && "id" in payload.new) {
setOrder(payload.new);
}
}
)
.subscribe();
return () => {
supabase.removeChannel(channel);
};
}, [orderId]);
return order;
}

A few things to note about this pattern:
Always fetch initial data first. The subscription only fires on future changes. If you skip the initial fetch, your UI is empty until the next change event.
Use filters aggressively. Without the `id=eq.${orderId}` filter, you would receive every UPDATE on the entire orders table. That is wasted bandwidth and wasted processing. Supabase supports eq, neq, lt, lte, gt, and gte filters on Realtime subscriptions.
Channel names must be unique. If two components subscribe with the same channel name, they share the channel. That can be what you want (shared subscription) or a bug (conflicting event handlers). I prefix channel names with the entity type and ID to keep them unique.
Type your payloads. The generic on<Order>() gives you typed payload.new and payload.old. Without it, everything is Record<string, unknown> and you are casting everywhere.
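To keep typed payload handling in one place, I like reducing every event through a single pure function. Here is a sketch (applyChange and ChangeEvent are illustrative helpers, not part of supabase-js; the field names mirror RealtimePostgresChangesPayload):

```typescript
// Field names mirror RealtimePostgresChangesPayload; this is a
// hypothetical helper, not a supabase-js API.
type ChangeEvent<T> = {
  eventType: "INSERT" | "UPDATE" | "DELETE";
  new: Partial<T>;
  old: Partial<T>;
};

function applyChange<T extends { id: string }>(
  rows: T[],
  payload: ChangeEvent<T>
): T[] {
  switch (payload.eventType) {
    case "INSERT":
      // payload.new carries the full row on INSERT
      return [payload.new as T, ...rows];
    case "UPDATE": {
      const updated = payload.new as T;
      return rows.map((r) => (r.id === updated.id ? updated : r));
    }
    case "DELETE":
      // payload.old may hold only the primary key (see REPLICA IDENTITY below)
      return rows.filter((r) => r.id !== payload.old.id);
  }
}
```

Every handler then becomes a one-liner that calls the same reducer, which is also trivially unit-testable without a WebSocket in sight.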
You can also listen for multiple events on the same channel:
const channel = supabase
.channel("all-orders")
.on<Order>(
"postgres_changes",
{ event: "INSERT", schema: "public", table: "orders" },
(payload) => {
// New order created
addOrder(payload.new);
}
)
.on<Order>(
"postgres_changes",
{ event: "DELETE", schema: "public", table: "orders" },
(payload) => {
// Order deleted — payload.old has the deleted row
removeOrder(payload.old.id);
}
)
.subscribe();

For DELETE events, payload.new is empty. The deleted row data comes from payload.old. But here is a gotcha: payload.old only contains data if you have replica identity set to FULL on the table. By default, PostgreSQL only includes the primary key in payload.old:
-- Enable full row data in DELETE and UPDATE old records
ALTER TABLE orders REPLICA IDENTITY FULL;

Without this, your DELETE handler gets an object with just { id: "..." } and nothing else. I set replica identity to FULL on any table where I need the old row data in my subscriptions.
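To check a table's current setting, pg_class.relreplident reports 'd' for the default (primary key only) and 'f' for full:

```sql
-- 'd' = default (primary key only), 'f' = full row in old records
SELECT relname, relreplident
FROM pg_class
WHERE relname = 'orders';
```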
Presence — Who's Online
Presence is different from database changes. It does not involve your database at all. It is an in-memory feature that tracks which clients are connected to a channel and what state they are sharing.
I use it for "X people viewing this item" indicators and admin dashboards where I need to see which team members are online:
// hooks/use-online-users.ts
"use client";
import { useEffect, useState } from "react";
import { createClient } from "@/lib/supabase/client";
import type { RealtimePresenceState } from "@supabase/supabase-js";
interface UserPresence {
id: string;
name: string;
avatar: string;
lastSeen: string;
}
export function useOnlineUsers(roomId: string) {
const [onlineUsers, setOnlineUsers] = useState<UserPresence[]>([]);
const supabase = createClient();
useEffect(() => {
const channel = supabase.channel(`room-${roomId}`);
channel
.on("presence", { event: "sync" }, () => {
const state: RealtimePresenceState<UserPresence> =
channel.presenceState();
const users = Object.values(state).flatMap((presences) =>
presences.map((p) => ({
id: p.id,
name: p.name,
avatar: p.avatar,
lastSeen: p.lastSeen,
}))
);
setOnlineUsers(users);
})
.on("presence", { event: "join" }, ({ newPresences }) => {
console.log("Users joined:", newPresences);
})
.on("presence", { event: "leave" }, ({ leftPresences }) => {
console.log("Users left:", leftPresences);
})
.subscribe(async (status) => {
if (status === "SUBSCRIBED") {
await channel.track({
id: crypto.randomUUID(),
name: "Uvin",
avatar: "/avatar.jpg",
lastSeen: new Date().toISOString(),
});
}
});
return () => {
supabase.removeChannel(channel);
};
}, [roomId]);
return onlineUsers;
}

The key Presence events are:
- sync — Fires whenever the presence state changes. This is the one you use most. Call channel.presenceState() inside it to get the current full state.
- join — A new client joined. newPresences is an array of their tracked state.
- leave — A client disconnected. leftPresences is an array of their tracked state.
The track() call registers your client's state with the channel. Call it after the subscription is confirmed (status === "SUBSCRIBED"). If you call it before, it silently fails.
One limit to know: Presence state is ephemeral. When a client disconnects, their state disappears. If you need to persist "last seen" data, write it to your database separately.
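A sketch of that, assuming a profiles table with a last_seen_at column (the column is my assumption, not part of the schema shown elsewhere in this guide):

```sql
-- Assumed column for persisting presence out of band
ALTER TABLE profiles ADD COLUMN IF NOT EXISTS last_seen_at TIMESTAMPTZ;

-- Run from the client on a heartbeat interval or on page unload;
-- RLS on profiles should restrict this to the user's own row.
UPDATE profiles SET last_seen_at = now() WHERE id = auth.uid();
```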
Broadcast — Custom Events
Broadcast lets you send arbitrary messages to all clients in a channel without touching the database. It is the lightest of the three primitives — just messages over WebSockets.
I use it for typing indicators and real-time cursor positions:
// hooks/use-typing-indicator.ts
"use client";
import { useEffect, useCallback, useRef } from "react";
import { createClient } from "@/lib/supabase/client";
export function useTypingIndicator(
channelId: string,
userId: string
) {
const supabase = createClient();
const channelRef = useRef<ReturnType<typeof supabase.channel> | null>(
null
);
useEffect(() => {
const channel = supabase.channel(`chat-${channelId}`);
channelRef.current = channel;
channel
.on("broadcast", { event: "typing" }, ({ payload }) => {
if (payload.userId !== userId) {
showTypingIndicator(payload.userId, payload.userName);
}
})
.subscribe();
return () => {
supabase.removeChannel(channel);
};
}, [channelId, userId]);
const sendTyping = useCallback(() => {
channelRef.current?.send({
type: "broadcast",
event: "typing",
payload: { userId, userName: "Uvin" },
});
}, [userId]);
return { sendTyping };
}
function showTypingIndicator(userId: string, userName: string) {
// Update UI to show typing indicator
}

Broadcast messages do not get persisted anywhere. They are not stored in the database. They are not logged. If a client is offline when a message is sent, they miss it. That is by design — use database changes for anything that needs durability.
By default, the sender does not receive its own broadcast messages. If you want it to (handy for driving the sender's UI through the same code path as everyone else's), pass { config: { broadcast: { self: true } } } when creating the channel.
Realtime with RLS — The Gotcha
This is the single biggest gotcha with Supabase Realtime, and it burns everyone at least once.
Row Level Security policies apply to Realtime subscriptions.
If you have RLS enabled on a table (and you should — see my RLS guide), your Realtime subscription will only receive events for rows that the authenticated user has permission to SELECT.
Here is what happens in practice. Say you have this RLS policy on your orders table:
CREATE POLICY "Users can view their own orders"
ON orders FOR SELECT
USING (auth.uid() = user_id);

If User A inserts a new order, User B will not receive that INSERT event through Realtime — even if User B is subscribed to the orders table — because User B does not have SELECT permission on User A's rows.
This is correct and secure behavior. But it catches people off guard when they are building admin dashboards or shared views.
The fix depends on your use case:
For user-scoped data (like order status), this is exactly what you want. Each user only sees their own updates. No extra work needed.
For admin views where you need to see all events, create a service role client. But never use the service role key in client-side code. Instead, create a server endpoint that forwards events:
// For admin dashboards: use a separate RLS policy
// that grants admins SELECT access to all rows
CREATE POLICY "Admins can view all orders"
ON orders FOR SELECT
USING (
EXISTS (
SELECT 1 FROM profiles
WHERE profiles.id = auth.uid()
AND profiles.role = 'admin'
)
);

Another RLS-related trap: if a user's JWT expires while they are subscribed, the subscription keeps working with the old token until the WebSocket reconnects. Supabase handles token refresh automatically in the client library, but there is a brief window during reconnection where events might be missed. Handle this by re-fetching the current state after any reconnection:
const channel = supabase
.channel("orders")
.on("postgres_changes", { /* ... */ }, handleChange)
.subscribe((status) => {
if (status === "SUBSCRIBED") {
// Re-fetch current state to catch any missed events
refetchOrders();
}
});

Client-Side Subscription Management
Supabase has a limit on concurrent channels per client. The default is 100 channels per connection on the free tier. That sounds like a lot until you have a dashboard with 20 widgets, each subscribing to a different table with different filters.
Here is how I manage subscriptions efficiently:
Consolidate channels. Instead of one channel per subscription, group related subscriptions into a single channel:
// Instead of this (3 channels):
const ch1 = supabase.channel("orders-insert").on(/* ... */);
const ch2 = supabase.channel("orders-update").on(/* ... */);
const ch3 = supabase.channel("orders-delete").on(/* ... */);
// Do this (1 channel):
const channel = supabase
.channel("orders-all")
.on("postgres_changes", { event: "INSERT", /* ... */ }, onInsert)
.on("postgres_changes", { event: "UPDATE", /* ... */ }, onUpdate)
.on("postgres_changes", { event: "DELETE", /* ... */ }, onDelete)
.subscribe();

Share channels across components. I use a simple channel manager that creates a channel once and shares it:
// lib/supabase/channel-manager.ts
import type { SupabaseClient, RealtimeChannel } from "@supabase/supabase-js";
const activeChannels = new Map<string, RealtimeChannel>();
export function getOrCreateChannel(
supabase: SupabaseClient,
name: string
): RealtimeChannel {
const existing = activeChannels.get(name);
if (existing) return existing;
const channel = supabase.channel(name);
activeChannels.set(name, channel);
return channel;
}
export function removeChannel(
supabase: SupabaseClient,
name: string
): void {
const channel = activeChannels.get(name);
if (channel) {
supabase.removeChannel(channel);
activeChannels.delete(name);
}
}

Subscribe only when visible. For components below the fold or in hidden tabs, use the Intersection Observer API to subscribe only when the component is visible:
export function useLazySubscription(
  channelName: string,
  ref: React.RefObject<HTMLElement | null>
) {
  const supabase = createClient();
  useEffect(() => {
    if (!ref.current) return;
    let channel: ReturnType<typeof supabase.channel> | null = null;
    const observer = new IntersectionObserver(
      ([entry]) => {
        if (entry.isIntersecting && !channel) {
          channel = supabase
            .channel(channelName)
            .on("postgres_changes", { /* ... */ }, handleChange)
            .subscribe();
          // One subscription is enough; stop observing
          observer.disconnect();
        }
      },
      { threshold: 0.1 }
    );
    observer.observe(ref.current);
    return () => {
      observer.disconnect();
      // Without this, a channel created after the element became
      // visible would leak when the component unmounts
      if (channel) supabase.removeChannel(channel);
    };
  }, [channelName]);
}

Cleanup and Memory Leaks

This is where I see the most bugs in production. People set up subscriptions but never clean them up. The result: memory leaks, duplicate event handlers, and a client with dozens of zombie channels eating resources.
This is where I see the most bugs in production. People set up subscriptions but never clean them up. The result: memory leaks, duplicate event handlers, and a client with dozens of zombie channels eating resources.
The rules are simple:
Always return a cleanup function from useEffect. Every subscription created inside useEffect must be removed in the cleanup function:
useEffect(() => {
const channel = supabase
.channel("my-channel")
.on("postgres_changes", { /* ... */ }, handleChange)
.subscribe();
// This runs when the component unmounts or deps change
return () => {
supabase.removeChannel(channel);
};
}, []);Use `removeChannel`, not `unsubscribe`. The removeChannel method does three things: unsubscribes from the channel, removes all event listeners, and cleans up the channel from the client's internal map. unsubscribe() only does the first part. Always use removeChannel.
Watch your dependency arrays. If your useEffect depends on a value that changes frequently (like a search query), every change creates a new subscription and tears down the old one. Debounce the value before passing it as a dependency:
function useRealtimeSearch(query: string) {
const debouncedQuery = useDebounce(query, 300);
useEffect(() => {
if (!debouncedQuery) return;
const channel = supabase
.channel(`search-${debouncedQuery}`)
.on("postgres_changes", {
event: "*",
schema: "public",
table: "products",
filter: `name=ilike.%${debouncedQuery}%`,
}, handleChange)
.subscribe();
return () => {
supabase.removeChannel(channel);
};
}, [debouncedQuery]);
}

Handle component remounting in React Strict Mode. In development, React 18+ mounts components twice to catch effect bugs. This means your subscription setup runs twice. If your channel name is the same both times, the second subscription replaces the first cleanly. But if you are tracking side effects (like incrementing a counter on subscribe), you will see double values. The fix: make your subscription setup idempotent.
Building a Live Notification System
Let me walk through a real example — the notification system I built for uvin.lk. Users get instant notifications when something happens, without polling.
The database table:
CREATE TABLE notifications (
id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
user_id UUID REFERENCES auth.users(id) NOT NULL,
type TEXT NOT NULL CHECK (type IN ('info', 'success', 'warning', 'error')),
title TEXT NOT NULL,
message TEXT NOT NULL,
read BOOLEAN DEFAULT false,
created_at TIMESTAMPTZ DEFAULT now()
);
-- RLS: users see only their own notifications
ALTER TABLE notifications ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users read own notifications"
ON notifications FOR SELECT
USING (auth.uid() = user_id);
-- Enable realtime
ALTER PUBLICATION supabase_realtime ADD TABLE notifications;

The React hook:
// hooks/use-notifications.ts
"use client";
import { useEffect, useState, useCallback } from "react";
import { createClient } from "@/lib/supabase/client";
import type { Database } from "@/types/supabase";
type Notification = Database["public"]["Tables"]["notifications"]["Row"];
export function useNotifications() {
const [notifications, setNotifications] = useState<Notification[]>([]);
const [unreadCount, setUnreadCount] = useState(0);
const supabase = createClient();
const fetchNotifications = useCallback(async () => {
const { data } = await supabase
.from("notifications")
.select("*")
.order("created_at", { ascending: false })
.limit(50);
if (data) {
setNotifications(data);
setUnreadCount(data.filter((n) => !n.read).length);
}
}, []);
useEffect(() => {
fetchNotifications();
const channel = supabase
.channel("user-notifications")
.on<Notification>(
"postgres_changes",
{
event: "INSERT",
schema: "public",
table: "notifications",
},
(payload) => {
  if (payload.new && "id" in payload.new) {
    const inserted = payload.new;
    setNotifications((prev) => [inserted, ...prev]);
    setUnreadCount((prev) => prev + 1);
    // Show a browser notification if permitted. Our Notification
    // row type shadows the browser's Notification constructor,
    // so reach the API through window explicitly.
    if ("Notification" in window && window.Notification.permission === "granted") {
      new window.Notification(inserted.title, {
        body: inserted.message,
      });
    }
  }
}
)
.on<Notification>(
"postgres_changes",
{
event: "UPDATE",
schema: "public",
table: "notifications",
},
(payload) => {
  if (payload.new && "id" in payload.new) {
    const updated = payload.new;
    setNotifications((prev) => {
      const next = prev.map((n) =>
        n.id === updated.id ? updated : n
      );
      // Recalculate the unread count from the same updated list
      // so the two pieces of state never drift apart
      setUnreadCount(next.filter((n) => !n.read).length);
      return next;
    });
  }
}
)
.subscribe((status) => {
if (status === "SUBSCRIBED") {
// Re-fetch to catch anything missed during connection
fetchNotifications();
}
});
return () => {
supabase.removeChannel(channel);
};
}, [fetchNotifications]);
const markAsRead = useCallback(
async (id: string) => {
await supabase
.from("notifications")
.update({ read: true })
.eq("id", id);
},
[]
);
const markAllAsRead = useCallback(async () => {
await supabase
.from("notifications")
.update({ read: true })
.eq("read", false);
}, []);
return {
notifications,
unreadCount,
markAsRead,
markAllAsRead,
};
}

Notice that markAsRead never touches local state directly. When I call supabase.from("notifications").update({ read: true }), the resulting UPDATE event flows back through the subscription, which updates the list and recalculates the count. One source of truth.
The RLS policy means each user only receives notification events for their own rows. I do not need to filter by user_id in the subscription filter — RLS does it for me. This is one place where RLS and Realtime work beautifully together.
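Delivering a notification is then just an inserted row; any server-side code path works. A sketch with illustrative values (the title and message here are made up):

```sql
-- From server code with the service role, pass an explicit user id
-- instead of auth.uid(); RLS and Realtime handle delivery from there.
INSERT INTO notifications (user_id, type, title, message)
VALUES (
  auth.uid(),
  'success',
  'Order shipped',
  'Your part is on the way'
);
```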
When Realtime Is Overkill
Not everything needs a WebSocket connection. I have seen teams reach for Realtime when simpler solutions work better.
Polling is fine for data that changes slowly. If your dashboard data updates every 5 minutes, a setInterval with a fetch call is simpler, more predictable, and uses fewer resources than a persistent WebSocket connection. I use React Query's refetchInterval for this:
const { data } = useQuery({
queryKey: ["analytics"],
queryFn: fetchAnalytics,
refetchInterval: 5 * 60 * 1000, // 5 minutes
});

Server-Sent Events (SSE) work for one-way updates. If you only need to push data from server to client (no client-to-server messages), SSE is lighter than WebSockets. Next.js Route Handlers support streaming responses natively.
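A minimal sketch of that approach (the route path, payload shape, and 5-second interval are my assumptions, not from the original):

```typescript
// app/api/updates/route.ts — a minimal SSE stream from a Route Handler.
// Uses only web-standard APIs (ReadableStream, Response, TextEncoder).
export async function GET(request: Request) {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    start(controller) {
      const timer = setInterval(() => {
        // SSE frames are "data: <payload>\n\n"
        controller.enqueue(
          encoder.encode(`data: ${JSON.stringify({ at: Date.now() })}\n\n`)
        );
      }, 5000);
      // Stop streaming when the client disconnects
      request.signal.addEventListener("abort", () => {
        clearInterval(timer);
        controller.close();
      });
    },
  });
  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
```

On the client, a plain `new EventSource("/api/updates")` consumes this; no WebSocket, no subscription bookkeeping.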
Static regeneration handles public content. Blog posts, product listings, marketing pages — use ISR with revalidate. No realtime needed.
Here is my decision framework:
| Scenario | Solution |
|---|---|
| Data changes every few seconds, users need instant feedback | Supabase Realtime |
| Multiple users collaborating on the same document | Realtime + Presence |
| Chat or messaging features | Realtime + Broadcast |
| Dashboard data, updated every few minutes | Polling with React Query |
| Public content, updated by CMS | ISR / On-demand revalidation |
| One-off user-triggered updates | Optimistic UI + API call |
The cost of Realtime is not just the WebSocket connection. It is the complexity of managing subscriptions, handling reconnections, keeping client state in sync with server state, and debugging issues that only happen under specific timing conditions. Use it when the UX demands it. Skip it when it does not.
Key Takeaways
- Enable Realtime explicitly. Tables are not subscribed by default. Use ALTER PUBLICATION supabase_realtime ADD TABLE in your migrations.
- Use filters on subscriptions. Never subscribe to an entire table when you only need specific rows. Filters reduce bandwidth and processing.
- Set REPLICA IDENTITY FULL on tables where you need old row data in UPDATE and DELETE events.
- RLS applies to Realtime. Subscriptions only deliver events for rows the authenticated user can SELECT. This is a feature, not a bug — but you must design your policies with Realtime in mind.
- Always clean up. Use supabase.removeChannel(channel) in your useEffect cleanup. Never rely on unsubscribe() alone; it does not fully clean up.
- Consolidate channels. Group related subscriptions into one channel. Share channels across components with a channel manager. You have a limit.
- Re-fetch on reconnection. After a WebSocket reconnects, you may have missed events. Fetch current state in the subscribe callback when status is SUBSCRIBED.
- Do not default to Realtime. Polling, SSE, and ISR solve most update problems with less complexity. Reserve Realtime for features where instant feedback is essential.
- Presence is ephemeral. It disappears when the client disconnects. Persist "last seen" data separately if you need it.
- Broadcast is fire-and-forget. Offline clients miss broadcast messages. Use database changes for anything that must be durable.
*I build production Next.js applications with Supabase Realtime for clients across Sri Lanka and the UK. If you need live features — order tracking, notifications, collaborative tools, or real-time dashboards — check out my services or reach out at contact@uvin.lk.*
Uvin Vindula
Web3 and AI engineer based in Sri Lanka and the UK. Author of The Rise of Bitcoin. Director of Blockchain and Software Solutions at Terra Labz. Founder of uvin.lk — Sri Lanka's Bitcoin education platform with 10,000+ learners.