Supabase Edge Functions: When and How to Use Them
TL;DR
Supabase Edge Functions are Deno-based serverless functions that run on the edge, close to your users. I use them for three things: webhooks from external services like Stripe, scheduled background tasks that should not depend on my frontend deployment, and custom server-side logic that does not belong in a Next.js API route. This guide covers when Edge Functions are the right tool versus when they are overkill, how to set them up locally, how to write and deploy them, and the specific patterns I use across my services projects. If you are already running Supabase and wondering whether you need Edge Functions, this article will give you a clear answer.
What Edge Functions Are
Supabase Edge Functions are serverless functions powered by Deno Deploy. They run TypeScript on the edge — meaning they execute in data centres geographically close to whoever triggered them. Each function is a single Deno file that receives an HTTP request and returns an HTTP response. That is the entire mental model.
Under the hood, Supabase uses Deno's runtime instead of Node.js. That means you get TypeScript out of the box with no build step, Web Standard APIs like fetch and Request/Response, and ES modules via URL imports instead of npm. If you have written Node.js your entire career, the first few minutes feel slightly alien. After that, it clicks.
Here is the simplest possible Edge Function:
```typescript
import { serve } from "https://deno.land/std@0.168.0/http/server.ts";

serve(async (req: Request) => {
  const { name } = await req.json();

  return new Response(
    JSON.stringify({ message: `Hello, ${name}!` }),
    { headers: { "Content-Type": "application/json" } }
  );
});
```

That is a function. It receives a POST request with a JSON body, extracts a name, and returns a greeting. Every Edge Function follows this pattern — import serve, handle the request, return a response. The runtime handles scaling, cold starts, and infrastructure. You handle the logic.
The key difference from something like AWS Lambda or Vercel Serverless Functions is that Edge Functions live inside your Supabase project. They have native access to your Supabase client, your database, your auth system, and your storage buckets. You do not need to configure connection strings or manage separate infrastructure.
When to Use Them vs API Routes
This is the question I get asked most. If you are already running a Next.js app with API routes, why would you also need Edge Functions?
Here is my decision framework. I use Edge Functions when:
The logic should not depend on the frontend deployment. If I redeploy my Next.js app on Vercel, I do not want my Stripe webhook handler to go down for thirty seconds. Edge Functions live in Supabase infrastructure, completely independent of the frontend. If Vercel has an incident, my webhooks keep processing.
The work is triggered by an external service. Stripe sends webhook events, Twilio sends SMS status callbacks, GitHub sends repository events. These do not need to go through my Next.js app. They can hit a Supabase Edge Function directly. Fewer hops, fewer failure points.
I need a scheduled task. Supabase supports cron-triggered Edge Functions via pg_cron. Clean up expired tokens every hour. Send a daily digest email. Recalculate analytics aggregates at midnight. These are background jobs that have nothing to do with user-facing routes.
Multiple frontends share the same backend logic. If I have a web app and a mobile app both needing the same custom logic, an Edge Function avoids duplicating that logic across two API layers.
I stick with Next.js API routes when:
The logic is tightly coupled to the frontend. Form submissions, server-side rendering data fetches, auth callbacks that redirect back to specific pages. These belong in your Next.js app because they are part of the user flow.
I need the Node.js ecosystem. Some npm packages do not work in Deno. If I need a specific Node library without a Deno equivalent, API routes are the pragmatic choice.
The response feeds directly into a page render. Server Actions and Route Handlers in Next.js can feed data directly into RSC streams. Edge Functions cannot.
The rule of thumb: if the work is infrastructure-level and should survive a frontend redeployment, use an Edge Function. If the work is user-facing and part of a page flow, use an API route.
Setting Up
You need the Supabase CLI installed locally. If you do not have it yet:
```bash
npm install -g supabase
```

Initialize your project (if you have not already):

```bash
supabase init
```

This creates a supabase directory in your project root. Edge Functions live inside supabase/functions/. Each function gets its own directory with an index.ts file.
Create your first function:
```bash
supabase functions new hello-world
```

This generates:

```text
supabase/
  functions/
    hello-world/
      index.ts
```

To run it locally:

```bash
supabase start
supabase functions serve
```

Your function is now running at http://localhost:54321/functions/v1/hello-world. You can hit it with curl or any HTTP client. The local development server watches for file changes and reloads automatically.
One thing that tripped me up early: supabase start needs Docker running. The Supabase local development stack runs PostgreSQL, GoTrue, PostgREST, and everything else in containers. If Docker is not running, you get a cryptic error about missing services.
Writing Your First Function
Let me walk through a function I actually use in production — a function that generates a short-lived signed URL for a private file in Supabase Storage.
```typescript
import { serve } from "https://deno.land/std@0.168.0/http/server.ts";
import { createClient } from "https://esm.sh/@supabase/supabase-js@2";

serve(async (req: Request) => {
  try {
    const authHeader = req.headers.get("Authorization");
    if (!authHeader) {
      return new Response(
        JSON.stringify({ error: "Missing authorization header" }),
        { status: 401, headers: { "Content-Type": "application/json" } }
      );
    }

    const supabase = createClient(
      Deno.env.get("SUPABASE_URL")!,
      Deno.env.get("SUPABASE_ANON_KEY")!,
      {
        global: { headers: { Authorization: authHeader } },
      }
    );

    const { data: { user }, error: authError } = await supabase.auth.getUser();
    if (authError || !user) {
      return new Response(
        JSON.stringify({ error: "Unauthorized" }),
        { status: 401, headers: { "Content-Type": "application/json" } }
      );
    }

    const { filePath } = await req.json();
    if (!filePath || typeof filePath !== "string") {
      return new Response(
        JSON.stringify({ error: "filePath is required" }),
        { status: 400, headers: { "Content-Type": "application/json" } }
      );
    }

    const { data, error } = await supabase.storage
      .from("private-documents")
      .createSignedUrl(filePath, 60);

    if (error) {
      return new Response(
        JSON.stringify({ error: error.message }),
        { status: 500, headers: { "Content-Type": "application/json" } }
      );
    }

    return new Response(
      JSON.stringify({ signedUrl: data.signedUrl }),
      { status: 200, headers: { "Content-Type": "application/json" } }
    );
  } catch (err) {
    return new Response(
      JSON.stringify({ error: "Internal server error" }),
      { status: 500, headers: { "Content-Type": "application/json" } }
    );
  }
});
```

A few things to notice. First, I am passing the user's Authorization header through to the Supabase client. This means the Supabase client inside the Edge Function acts as the authenticated user, not as the service role. Row Level Security still applies. The user can only generate signed URLs for files they have permission to access.
Second, I validate the input before doing anything with it. The filePath must exist and must be a string. No type coercion, no assumptions.
Third, every error path returns a structured JSON response with an appropriate HTTP status code. Never return raw error strings. Your frontend deserves consistent error shapes.
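One way to keep those error shapes consistent is a tiny helper. This is a sketch of my own — the name `jsonResponse` is not part of the Supabase SDK — and it uses only the Web-standard `Response`, which is available in Deno (and modern Node):

```typescript
// Hypothetical helper: one function builds every JSON response, so all
// error paths share the same status/header/body shape.
function jsonResponse(status: number, body: Record<string, unknown>): Response {
  return new Response(JSON.stringify(body), {
    status,
    headers: { "Content-Type": "application/json" },
  });
}

// Usage inside a handler:
// return jsonResponse(400, { error: "filePath is required" });
```

With a helper like this, the handler above collapses to a series of one-line returns, and the frontend can rely on every error carrying an `error` field.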
Calling from Your Frontend
Supabase provides a built-in method to invoke Edge Functions from the client library. You do not need to construct URLs manually.
```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

async function getSignedUrl(filePath: string): Promise<string> {
  const { data, error } = await supabase.functions.invoke(
    "generate-signed-url",
    {
      body: { filePath },
    }
  );

  if (error) {
    throw new Error(`Failed to generate signed URL: ${error.message}`);
  }

  return data.signedUrl;
}
```

The supabase.functions.invoke method automatically includes the current user's auth token in the request headers. If the user is signed in via Supabase Auth, their JWT is forwarded to the Edge Function. You do not need to manage tokens manually.
For functions that do not require authentication — public endpoints like a contact form handler — you can skip the auth check in the function and call it without a logged-in session.
If you need to call an Edge Function from a server context (like a Next.js Server Action), create a Supabase client with the service role key:
```typescript
import { createClient } from "@supabase/supabase-js";

const supabaseAdmin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

const { data, error } = await supabaseAdmin.functions.invoke(
  "admin-task",
  { body: { action: "cleanup" } }
);
```

The service role key bypasses RLS, so use it only in server-side code. Never expose it to the client.
Webhooks with Edge Functions
This is where Edge Functions earn their keep. Webhook handlers need to be reliable, fast, and independent of your frontend deployment. Here is a Stripe webhook handler I use in production:
```typescript
import { serve } from "https://deno.land/std@0.168.0/http/server.ts";
import Stripe from "https://esm.sh/stripe@13?target=deno";
import { createClient } from "https://esm.sh/@supabase/supabase-js@2";

const stripe = new Stripe(Deno.env.get("STRIPE_SECRET_KEY")!, {
  apiVersion: "2023-10-16",
  httpClient: Stripe.createFetchHttpClient(),
});

const cryptoProvider = Stripe.createSubtleCryptoProvider();

serve(async (req: Request) => {
  const signature = req.headers.get("Stripe-Signature");
  if (!signature) {
    return new Response("Missing signature", { status: 400 });
  }

  const body = await req.text();

  let event: Stripe.Event;
  try {
    event = await stripe.webhooks.constructEventAsync(
      body,
      signature,
      Deno.env.get("STRIPE_WEBHOOK_SECRET")!,
      undefined,
      cryptoProvider
    );
  } catch (err) {
    console.error("Webhook signature verification failed:", err);
    return new Response("Invalid signature", { status: 400 });
  }

  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
  );

  switch (event.type) {
    case "checkout.session.completed": {
      const session = event.data.object as Stripe.Checkout.Session;
      await supabase
        .from("orders")
        .update({ status: "paid", stripe_session_id: session.id })
        .eq("id", session.metadata?.order_id);
      break;
    }
    case "customer.subscription.updated": {
      const subscription = event.data.object as Stripe.Subscription;
      await supabase
        .from("subscriptions")
        .update({
          status: subscription.status,
          current_period_end: new Date(
            subscription.current_period_end * 1000
          ).toISOString(),
        })
        .eq("stripe_subscription_id", subscription.id);
      break;
    }
    default:
      console.log(`Unhandled event type: ${event.type}`);
  }

  return new Response(JSON.stringify({ received: true }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
});
```

Critical details here. First, I verify the webhook signature before doing anything else. Never trust an incoming webhook without verifying it was actually sent by the service it claims to be from. Stripe uses HMAC-based signatures. The constructEventAsync method handles this.
Second, I use the service role key for the Supabase client because webhook handlers run without a user context. There is no logged-in user — it is Stripe calling your endpoint. The service role bypasses RLS so you can update any row.
Third, I return 200 for event types I do not handle. Stripe retries failed webhooks. If you return a non-200 for an event type you intentionally ignore, Stripe will keep sending it, which clutters your logs and wastes your function invocations.
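As a handler grows beyond two or three event types, the switch statement can be replaced with a dispatch table. This is a hypothetical sketch of that refactor — the names and the payload type are mine, and the handler bodies are stubbed:

```typescript
// Hypothetical dispatch table: map event types to handlers, and treat
// anything unlisted as intentionally ignored. Either way the webhook
// endpoint should still respond 200.
type EventHandler = (payload: unknown) => void;

const handlers: Record<string, EventHandler> = {
  "checkout.session.completed": (_payload) => {
    // ... mark the order as paid ...
  },
  "customer.subscription.updated": (_payload) => {
    // ... sync the subscription status ...
  },
};

// Returns true if the event type had a registered handler,
// false if it was deliberately ignored.
function dispatch(eventType: string, payload: unknown): boolean {
  const handler = handlers[eventType];
  if (!handler) {
    console.log(`Unhandled event type: ${eventType}`);
    return false;
  }
  handler(payload);
  return true;
}
```

The win is that adding a new event type becomes one entry in the table instead of another case block, and the "ignore but acknowledge" behaviour lives in exactly one place.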
In your Stripe dashboard, point the webhook URL to https://<project-ref>.supabase.co/functions/v1/stripe-webhook. That is it. No CORS configuration needed because Stripe is calling your function server-to-server.
Scheduled Functions
Supabase supports scheduled Edge Functions using PostgreSQL's pg_cron extension. You define a cron schedule in SQL, and Supabase invokes the function at that interval.
First, enable the pg_cron extension in your Supabase dashboard (Database > Extensions). Then create the schedule:
```sql
SELECT cron.schedule(
  'cleanup-expired-tokens',
  '0 * * * *', -- Every hour
  $$
  SELECT
    net.http_post(
      url := 'https://<project-ref>.supabase.co/functions/v1/cleanup-tokens',
      headers := jsonb_build_object(
        'Content-Type', 'application/json',
        'Authorization', 'Bearer ' || current_setting('supabase.service_role_key')
      ),
      body := '{}'::jsonb
    ) AS request_id;
  $$
);
```

And the corresponding Edge Function:
```typescript
import { serve } from "https://deno.land/std@0.168.0/http/server.ts";
import { createClient } from "https://esm.sh/@supabase/supabase-js@2";

serve(async (req: Request) => {
  const authHeader = req.headers.get("Authorization");
  const expectedToken = `Bearer ${Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")}`;

  if (authHeader !== expectedToken) {
    return new Response("Unauthorized", { status: 401 });
  }

  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
  );

  const oneHourAgo = new Date(
    Date.now() - 60 * 60 * 1000
  ).toISOString();

  const { data, error } = await supabase
    .from("password_reset_tokens")
    .delete()
    .lt("expires_at", oneHourAgo)
    .select("id");

  if (error) {
    console.error("Cleanup failed:", error);
    return new Response(
      JSON.stringify({ error: error.message }),
      { status: 500, headers: { "Content-Type": "application/json" } }
    );
  }

  console.log(`Cleaned up ${data?.length ?? 0} expired tokens`);

  return new Response(
    JSON.stringify({ cleaned: data?.length ?? 0 }),
    { status: 200, headers: { "Content-Type": "application/json" } }
  );
});
```

I verify the service role key on scheduled functions because anyone could try to hit the endpoint manually. The cron job passes the service role key as the Bearer token, so the function can verify it is being called by the scheduler and not some random HTTP request.
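If you want to be strict about that token check, a plain `!==` comparison can in principle leak information through timing, because it returns as soon as the first character differs. Node's crypto module exposes timingSafeEqual for byte buffers; here is a manual string-level sketch of the same idea (my own helper, not a library call):

```typescript
// Sketch of a constant-time string comparison for bearer-token checks.
// Every character is compared regardless of where the first mismatch
// occurs. The early return on length still reveals the length, which is
// acceptable for fixed-length secrets.
function timingSafeEqual(a: string, b: string): boolean {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}

// In the handler:
// if (!timingSafeEqual(authHeader ?? "", expectedToken)) { ... 401 ... }
```

For a cron-only endpoint this is arguably belt-and-braces, but it costs one small function and removes a class of attack from consideration.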
Common cron patterns I use:
- `0 * * * *` — Every hour. Token cleanup, session pruning.
- `0 0 * * *` — Daily at midnight UTC. Analytics aggregation, report generation.
- `*/5 * * * *` — Every five minutes. Health checks, queue processing.
- `0 9 * * 1` — Every Monday at 9 AM UTC. Weekly digest emails.
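A malformed cron expression is an easy mistake to ship, since pg_cron only complains when the schedule is registered. A minimal structural check — a sketch, not a full cron parser, and the function name is mine — can catch the obvious errors before the SQL ever runs:

```typescript
// Validates that a cron expression has exactly five whitespace-separated
// fields (minute, hour, day-of-month, month, day-of-week). It checks
// structure only, not every legal field value.
function parseCron(expr: string): string[] {
  const fields = expr.trim().split(/\s+/);
  if (fields.length !== 5) {
    throw new Error(`Expected 5 cron fields, got ${fields.length}`);
  }
  return fields; // [minute, hour, dayOfMonth, month, dayOfWeek]
}
```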
To list or remove scheduled jobs:
```sql
-- List all scheduled jobs
SELECT * FROM cron.job;

-- Remove a scheduled job
SELECT cron.unschedule('cleanup-expired-tokens');
```

Environment Variables
Edge Functions access environment variables through Deno.env.get(). Supabase automatically injects SUPABASE_URL, SUPABASE_ANON_KEY, and SUPABASE_SERVICE_ROLE_KEY into every function. You do not need to configure these.
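Since Deno.env.get() returns undefined for missing variables, a function can limp along with a broken config until the first real request. I prefer failing fast. A hedged sketch — `requireEnv` is my own helper, written against a lookup function so the same code works with Deno.env.get or a test stub:

```typescript
// Hypothetical fail-fast accessor for required environment variables.
// The lookup is injected so the helper is runtime-agnostic:
// in an Edge Function you would call it as
//   requireEnv((k) => Deno.env.get(k), "SUPABASE_URL")
function requireEnv(
  get: (key: string) => string | undefined,
  name: string
): string {
  const value = get(name);
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Calling this for every secret at the top of the handler turns a silent misconfiguration into an immediate, clearly-logged error.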
For custom secrets like API keys, set them through the CLI:
```bash
supabase secrets set STRIPE_SECRET_KEY=sk_live_...
supabase secrets set STRIPE_WEBHOOK_SECRET=whsec_...
supabase secrets set RESEND_API_KEY=re_...
```

To list existing secrets:

```bash
supabase secrets list
```

For local development, create a .env.local file in your supabase directory:
```text
STRIPE_SECRET_KEY=sk_test_...
STRIPE_WEBHOOK_SECRET=whsec_test_...
RESEND_API_KEY=re_test_...
```

Then run `supabase functions serve --env-file supabase/.env.local`.
Never commit .env.local to version control. Add it to your .gitignore. I have seen developers push test Stripe keys to public repositories. Do not be that developer.
Debugging and Logging
Edge Functions write logs to Supabase's logging infrastructure. Use console.log, console.error, and console.warn as you normally would. Logs appear in the Supabase dashboard under Edge Functions > Logs.
For local development, logs appear directly in your terminal when running supabase functions serve.
Here is my logging pattern for production functions:
```typescript
serve(async (req: Request) => {
  const startTime = Date.now();
  const requestId = crypto.randomUUID();

  console.log(JSON.stringify({
    level: "info",
    requestId,
    method: req.method,
    url: req.url,
    timestamp: new Date().toISOString(),
  }));

  try {
    // ... function logic ...

    const duration = Date.now() - startTime;
    console.log(JSON.stringify({
      level: "info",
      requestId,
      duration,
      status: 200,
    }));

    return new Response(/* ... */);
  } catch (err) {
    const duration = Date.now() - startTime;
    console.error(JSON.stringify({
      level: "error",
      requestId,
      duration,
      error: err instanceof Error ? err.message : "Unknown error",
      stack: err instanceof Error ? err.stack : undefined,
    }));

    return new Response(
      JSON.stringify({ error: "Internal server error", requestId }),
      { status: 500, headers: { "Content-Type": "application/json" } }
    );
  }
});
```

Structured JSON logs are searchable and filterable. When something breaks at 3 AM, you want to search by requestId and see the full lifecycle of that request — not scroll through pages of unstructured text.
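The repeated JSON.stringify calls above can be factored into a small formatter. A sketch under my own naming — `formatLogEntry` is not a Supabase or Deno API — that guarantees every line carries the same base fields:

```typescript
// Hypothetical structured-log formatter: one JSON object per line,
// always carrying level, requestId, and timestamp, plus whatever
// request-specific fields the caller adds.
function formatLogEntry(
  level: "info" | "warn" | "error",
  requestId: string,
  fields: Record<string, unknown>
): string {
  return JSON.stringify({
    level,
    requestId,
    timestamp: new Date().toISOString(),
    ...fields,
  });
}

// In a handler:
// console.log(formatLogEntry("info", requestId, { status: 200, duration }));
```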
You can also tail logs in real time using the CLI:
```bash
supabase functions logs hello-world --scroll
```

Deploy your function when it is ready:

```bash
supabase functions deploy hello-world
```

This deploys to your Supabase project's edge infrastructure. The function is live within seconds.
When NOT to Use Edge Functions
I want to be honest about this. Edge Functions are not always the answer, and I have seen developers reach for them when simpler solutions exist.
Do not use Edge Functions for simple database queries. If you just need to read or write data with some filtering, use the Supabase client library directly. PostgREST handles this without a function layer. Adding an Edge Function between your client and your database for a straightforward CRUD operation is pure overhead.
Do not use Edge Functions for logic that PostgreSQL can handle. Database triggers, PostgreSQL functions, and computed columns can handle a surprising amount of server-side logic. Need to automatically set updated_at on every row update? That is a database trigger, not an Edge Function. Need to calculate a derived field? That is a generated column.
```sql
-- This does not need an Edge Function
CREATE OR REPLACE FUNCTION update_timestamp()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach it to a table (the table name here is an example):
CREATE TRIGGER set_updated_at
  BEFORE UPDATE ON your_table
  FOR EACH ROW
  EXECUTE FUNCTION update_timestamp();
```

Do not use Edge Functions for auth flows. Supabase Auth handles sign-up, sign-in, password reset, OAuth, and MFA out of the box. Writing custom auth logic in Edge Functions is reinventing the wheel with worse security.
Do not use Edge Functions when cold starts matter for UX. Edge Functions have cold starts. They are fast — usually under 200ms — but they exist. If you are rendering a page where every millisecond of TTFB counts, a Next.js Server Component fetching data at the edge with ISR will be faster than a round trip to a Supabase Edge Function.
Do not use Edge Functions for complex, long-running jobs. Edge Functions have a maximum execution time (currently around 150 seconds on the Pro plan, shorter on Free). If you are processing large CSV uploads or running batch operations that take minutes, you need a proper queue system with a background worker.
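If a job is only slightly over the limit, one mitigation short of a full queue system is to process a bounded chunk per invocation and let the next scheduled run pick up the remainder. The batching itself is trivial; here is a minimal sketch of the helper (my own function, not a library API):

```typescript
// Splits a work list into fixed-size chunks so each invocation can
// process one chunk and stay well under the execution time limit.
function chunk<T>(items: T[], size: number): T[][] {
  if (size <= 0) throw new Error("chunk size must be positive");
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

For anything where a single chunk cannot finish comfortably inside the limit, or where retries and ordering matter, a real queue with a background worker is still the right answer.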
The pattern I follow: start with the simplest layer that works. Supabase client for reads and writes. Database triggers for reactive logic. Edge Functions for anything that needs custom server-side code outside the database and outside your frontend. API routes for frontend-coupled logic.
Key Takeaways
Edge Functions solve a specific problem. They give you a server-side execution environment inside your Supabase project. Use them for webhooks, scheduled tasks, and logic that should not live in your frontend deployment.
The Deno runtime is not a problem. Yes, it is different from Node.js. No, it does not take long to get comfortable. TypeScript works out of the box. Standard Web APIs work out of the box. The import model is different but not difficult.
Webhook handlers are the strongest use case. Stripe, Twilio, GitHub, any external service that sends events to your application. Edge Functions handle these reliably without coupling them to your frontend infrastructure.
Scheduled functions via pg_cron are underrated. Most applications need background cleanup, aggregation, or notification jobs. Supabase gives you this without setting up a separate cron server or worker process.
Do not over-engineer. If a database trigger or a direct client query solves the problem, you do not need an Edge Function. Every function you deploy is another thing to maintain, monitor, and debug.
Verify everything. Verify webhook signatures. Verify auth tokens on scheduled functions. Validate input shapes. Return structured errors. The Edge Function is part of your security boundary — treat it that way.
If you are building a project with Supabase and need help deciding where Edge Functions fit into your architecture, check out my services page. I have shipped these patterns across multiple production applications and can help you avoid the mistakes that waste weeks.
*Written by Uvin Vindula — Web3/AI engineer building production-grade applications from Sri Lanka and the UK. Follow my work at @IAMUVIN or explore my projects at uvin.lk.*