SSR Was the Wrong Choice for My Portfolio
When I first built my portfolio, I used SSR because it felt like the default in Next.js. Fetch data on every request, keep things dynamic, and move on. It worked fine locally, and nothing seemed off until I deployed it using Supabase on the free tier.
After a few days of inactivity, I noticed a pattern. The first time I opened the site, it was slow. Not unusable, but slow enough to feel like something was wrong. If I refreshed, it was instantly fast again. That kept repeating, and it didn’t take long to realize it wasn’t random.
Supabase pauses after a week of inactivity on the free tier. That means the first request after that period has to wait for the database to spin back up. Because I was using SSR, every page load depended on that database call. So the first visitor would always take the hit, and everyone after would get a fast response. For a portfolio, that tradeoff doesn’t make sense. The first visit is the one that matters.
At the time, I was using a server-side Supabase client tied to the request:

```ts
import { createServerClient } from '@supabase/ssr'
import { cookies } from 'next/headers'

export async function createServerSupabaseClient() {
  const cookieStore = await cookies()

  return createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY!,
    {
      cookies: {
        getAll() {
          return cookieStore.getAll()
        },
        setAll(cookiesToSet) {
          try {
            cookiesToSet.forEach(({ name, value, options }) =>
              cookieStore.set(name, value, options)
            )
          } catch {
            // Ignored: setAll can fail when called from a Server Component.
          }
        },
      },
    }
  )
}
```

This setup made sense for SSR, but it also meant every request depended on Supabase being active. That was fine under constant traffic, but not for something like a portfolio, where visits are unpredictable.
Instead of trying to work around the cold start, I removed the dependency entirely. I switched to a build-time client and stopped treating Supabase as something that needed to run on every request.

```ts
import { createClient } from '@supabase/supabase-js'

export function createBuildTimeSupabaseClient() {
  return createClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY!,
    {
      auth: {
        persistSession: false,
        autoRefreshToken: false,
      },
    }
  )
}
```

From there, I moved the pages to static generation. For dynamic routes like /work/[slug], everything is generated at build time instead of request time.
```ts
export async function generateStaticParams() {
  const slugs = await getAllWorkSlugs();
  return slugs.map((slug) => ({ slug }));
}
```

The page itself just reads the data that was already fetched during the build:

```ts
const work = await getWorkBySlug(slug);
```

After this change, the delay disappeared completely. There are no more runtime calls to Supabase, so there’s nothing that can “wake up” or slow down the first request. The site loads the same way every time, regardless of when it was last visited.
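For completeness, a minimal sketch of what a helper like getWorkBySlug might look like. The `works` table name, the row shape, and the narrow client interface here are all assumptions for illustration; in the real project the client would come from createBuildTimeSupabaseClient().

```ts
// Result shape returned by the query chain (an assumption, kept minimal
// so the sketch stays self-contained).
type QueryResult = { data: unknown; error: { message: string } | null }

// Just the slice of the Supabase query builder this helper touches.
interface WorksClient {
  from(table: string): {
    select(columns: string): {
      eq(column: string, value: string): {
        single(): Promise<QueryResult>
      }
    }
  }
}

// Hypothetical row shape for a portfolio entry.
type Work = { slug: string; title: string }

// Runs only during the build, so a paused database would surface as a
// build failure instead of a slow first visit.
export async function getWorkBySlug(
  slug: string,
  client: WorksClient
): Promise<Work> {
  const { data, error } = await client
    .from('works')
    .select('*')
    .eq('slug', slug)
    .single()
  if (error) throw new Error(error.message)
  return data as Work
}
```

Because the client is passed in, the lookup logic can also be exercised with a stub in tests, without a live Supabase instance.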
This works because my content doesn’t change often. I only update my portfolio when I add new work, so fetching data during the build is enough. When something changes, I rebuild and deploy. There’s no need to fetch fresh data on every request when the data itself is mostly static.
If this were a different kind of app, the decision would be different. Anything user-specific or frequently updated would need SSR or something more dynamic. But for a portfolio, SSR added a dependency that didn’t need to exist in the first place.
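For content that changes more often but still isn’t user-specific, Next.js offers a middle ground between SSR and a full rebuild: incremental static regeneration via a route-level `revalidate` export. A sketch, with an arbitrary interval:

```ts
// In a page or route segment: re-generate the static page at most once
// per hour instead of on every request. The interval is an illustrative
// choice, not something from the original setup.
export const revalidate = 3600
```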
The main thing that changed for me wasn’t learning SSG itself, but realizing that the problem wasn’t performance. I was relying on something at runtime that I didn’t actually need, and removing that dependency solved the issue completely.