How Next.js 15's Full Route Cache Served Stale Prices at Checkout for 3 Hours
March 14, 2026 · Architecture · 9 min read


2:14 PM, Tuesday. First support ticket: "I was charged $49 but the page showed $29." Then another. Then seven more. By the time we traced it, our checkout page had been showing the old promotional price to every visitor for three hours while the database was already on the new price. Nineteen customers got manual refunds. The culprit wasn't Cloudflare, wasn't our database, wasn't a bad deploy. It was Next.js 15's own caching layer, doing exactly what it was designed to do.

Production failure

We'd migrated our SaaS marketing and checkout flow from a PHP/Laravel monolith to Next.js 15 App Router six weeks earlier. It felt smooth. Lighthouse scores went up, TTFB dropped from 420ms to 68ms, everyone was happy. We had a promotional campaign running: $29/month for the first 3 months, reverting to $49/month on March 11 at noon.

At noon we updated the pricing record in PostgreSQL. No code deploy. The checkout component fetched pricing server-side from our FastAPI service, which read from the database. On Pages Router (the old setup) this would have been instant. Every request hits the server, fetches live data, renders. On App Router with Next.js 15 defaults, something very different happened, and nobody on the team had a mental model for it.

19  manual refunds issued
3 hrs  stale cache window
$380  revenue exposure
0  errors in logs

Zero errors. Zero alerts. Monitoring showed healthy response times and a 200 on every checkout request. The system was working perfectly; it was just serving the wrong price.


False assumptions

First instinct: Cloudflare. We had aggressive caching rules for static assets and I assumed a CDN cache hadn't been busted after the pricing change. I ran cf-cache-status checks on the checkout URL.

terminal
$ curl -I https://app.example.com/checkout
HTTP/2 200
cf-cache-status: DYNAMIC
cache-control: no-store, must-revalidate
x-vercel-cache: HIT
...

Cloudflare showed DYNAMIC. Not caching the route. But that x-vercel-cache: HIT was a hint I ignored for another 40 minutes. Second assumption: Redis cache in our FastAPI pricing service. Redis TTL on pricing keys was 60 seconds. Ruled out.

Third assumption: a stale server-side import or module-level variable holding old pricing. We redeployed. Checkout page showed $49. Fixed. Or so I thought. Twenty minutes later, the first post-redeploy support ticket arrived. Stale price again.

"We deployed three times trying to fix a cache we didn't know existed."

Profiling / investigation

After the third failed redeploy I started reading Next.js 15 internals. App Router has a four-layer caching model that Pages Router never had. Most of us had been trained on Pages Router and assumed App Router behaved the same way for server-rendered routes. It doesn't.

Next.js 15 App Router — 4-Layer Cache Stack
═══════════════════════════════════════════════

  Browser Request
       │
       ▼
┌─────────────────────┐
│   Router Cache      │  ← Client-side, in-memory
│   (5 min TTL        │    Prefetched route segments
│    for static segs) │
└──────────┬──────────┘
           │ miss
           ▼
┌─────────────────────┐
│   Full Route Cache  │  ← Server-side, on-disk
│   (indefinite TTL   │    Entire rendered HTML + RSC payload
│    by default)      │    ⚠️  THIS WAS OUR PROBLEM
└──────────┬──────────┘
           │ miss
           ▼
┌─────────────────────┐
│    Data Cache       │  ← Per fetch() call
│    (opt-in caching  │    Cached fetches persist
│     in Next.js 15)  │    across requests & deploys
└──────────┬──────────┘
           │ miss
           ▼
┌─────────────────────┐
│   Upstream / DB     │  ← Actual data source
│   FastAPI + PG      │
└─────────────────────┘
  

The key discovery: in Next.js 15, any route that renders with no dynamic functions (no cookies(), no headers(), no searchParams access) is automatically treated as a static route and stored in the Full Route Cache. Our checkout page used none of those. It fetched pricing via fetch() inside a Server Component and rendered the price directly into HTML.

Next.js captured that HTML at build time (and again on first request after deploy), stored it in the Full Route Cache, and served it to every subsequent visitor without ever hitting our FastAPI service or database again. A fresh deploy busted the cache, which is why redeploying fixed it briefly. But after the first post-deploy request, the new stale HTML was cached again.
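
The behaviour we hit can be compressed into a toy model. This is an illustration of the observable semantics, not Next.js internals: the rendered HTML is keyed by build ID and route, produced once, and served from cache until the key changes. The `FullRouteCache` class, `Db` type, and render string are invented for illustration.

```typescript
// Toy model: a "static" route is rendered once per build ID, then every
// subsequent request is a cache HIT and the upstream is never called.
type Db = { monthlyPrice: number };

class FullRouteCache {
  private cache = new Map<string, string>(); // key: buildId + route

  render(route: string, buildId: string, db: Db): string {
    const key = `${buildId}:${route}`;
    const hit = this.cache.get(key);
    if (hit !== undefined) return hit; // cache HIT: db never read
    const html = `<main>$${db.monthlyPrice}/month</main>`; // "build-time" render
    this.cache.set(key, html);
    return html;
  }
}

const db: Db = { monthlyPrice: 29 };
const frc = new FullRouteCache();

const first = frc.render('/checkout', 'build-1', db);      // renders "$29", caches it
db.monthlyPrice = 49;                                      // DB-only change, no deploy
const stale = frc.render('/checkout', 'build-1', db);      // still "$29": our incident
const redeployed = frc.render('/checkout', 'build-2', db); // new build ID → "$49"
```

Swap in a new build ID and the first render picks up the new price, which is exactly why our redeploys appeared to fix it.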

Timeline of the Incident
════════════════════════

12:00 PM  ── Pricing updated in PostgreSQL ($29 → $49)
              No deploy triggered (expected: DB-only change)

12:00 PM  ── Full Route Cache still holds HTML with "$29"
              ↳ Every request served from cache (HIT)
              ↳ FastAPI never called
              ↳ 0 errors in logs

12:47 PM  ── First support ticket: "charged $49, saw $29"

 1:10 PM  ── We redeploy (cache busted by build ID change)
              ↳ First request after deploy: FastAPI called → $49 rendered
              ↳ Full Route Cache repopulated with "$49" HTML ✓

 1:30 PM  ── Fixed? No.
              ↳ First post-redeploy support ticket: "$29" again
              ↳ Primary region now serves "$49" ← so where is "$29" coming from?

Wait — why did tickets keep coming after redeploy?

 1:10 PM  ── Redeploy only invalidated one region (IAD)
              ↳ Edge regions CDG, SIN still serving old Full Route Cache
              ↳ Geography-split stale window for another 90 min
  

The redeploys did clear the US East cache, but Next.js's Full Route Cache is replicated to each edge region on first request. Our non-US users were still hitting warmed caches in Frankfurt and Singapore that hadn't yet seen a post-deploy request. This is the part I should have known and didn't.


Root cause

Two compounding issues, both from misunderstood App Router defaults.

  • Full Route Cache with no revalidation. Our checkout Server Component fetched pricing without { cache: 'no-store' } or any revalidate setting and accessed no dynamic APIs, so Next.js classified the route as static, rendered it once, and cached the entire HTML indefinitely.
  • Edge region cache warming after deploy. Redeploying busts the build ID and invalidates the server's Full Route Cache, but edge replicas re-warm independently on first request per region. A deploy with no warm-up traffic to non-primary regions leaves stale caches active for 60 to 90 minutes per region.
app/checkout/page.tsx — the broken fetch
// ❌ This is cached indefinitely by Next.js 15 Full Route Cache
// No dynamic signal = treated as fully static route
async function CheckoutPage() {
  const pricing = await fetch('https://api.internal/pricing/current')
    .then(r => r.json());

  return (
    <main>
      <PricingCard price={pricing.monthlyPrice} />
      <CheckoutForm priceId={pricing.stripeId} />
    </main>
  );
}

export default CheckoutPage;

Architecture fix

The fix required changes at two levels: opting the checkout route out of the Full Route Cache, and ensuring our fetch calls were never cached beyond the TTL we defined.

app/checkout/page.tsx — fixed
// Force dynamic rendering — opts out of Full Route Cache entirely
export const dynamic = 'force-dynamic';

async function CheckoutPage() {
  const pricing = await fetch('https://api.internal/pricing/current', {
    cache: 'no-store', // ← bypass the Data Cache on every request
    // Note: tag-based revalidation (next: { tags: ['pricing'] }) is the
    // alternative for routes that stay cached; tags have no effect on a
    // 'no-store' fetch, so pick one strategy per fetch.
  }).then((r) => r.json());

  return (
    <main>
      <PricingCard price={pricing.monthlyPrice} />
      <CheckoutForm priceId={pricing.stripeId} />
    </main>
  );
}

export default CheckoutPage;

We picked export const dynamic = 'force-dynamic' at the route level rather than cache: 'no-store' per fetch. The checkout page has multiple data fetches (pricing, user session, active coupons), and setting no-store on each individually is fragile. A future developer adding a new fetch would silently re-enable caching for that piece. Route-level force-dynamic is a single, unambiguous signal that's hard to undo by accident.
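
If you do go the per-fetch route instead, the fragility can be reduced with a wrapper that pins the cache mode so a new call site cannot silently re-enable caching. This is a hypothetical helper, not what we shipped; `liveFetch` and the minimal `LiveInit` type are invented for illustration:

```typescript
// Hypothetical helper: the 'cache' key is deliberately absent from
// LiveInit, so a call site cannot override 'no-store' without a type error.
type LiveInit = { headers?: Record<string, string>; next?: { tags?: string[] } };

function liveFetch(url: string, init: LiveInit = {}) {
  // Always bypass the Data Cache, regardless of what the caller passes.
  return fetch(url, { ...init, cache: 'no-store' });
}
```

Every pricing, session, and coupon fetch would then go through `liveFetch`, turning "remember to add no-store" into a compile-time guarantee.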

For the edge region warming problem, we added a post-deploy step to our GitHub Actions pipeline.

.github/workflows/deploy.yml — post-deploy cache warming
post-deploy:
  runs-on: ubuntu-latest
  needs: deploy
  steps:
    - name: Warm edge regions after deploy
      run: |
        REGIONS=("iad" "cdg" "sin" "syd")
        ROUTES=("/checkout" "/pricing" "/login")

        for region in "${REGIONS[@]}"; do
          for route in "${ROUTES[@]}"; do
            curl -sf \
              -H "x-vercel-deployment-url: $DEPLOYMENT_URL" \
              -H "x-vercel-edge-region: ${region}" \
              "https://$DEPLOYMENT_URL${route}" > /dev/null
            echo "Warmed ${region}: ${route}"
          done
        done

We also added on-demand revalidation to our pricing admin panel. Whenever a pricing record is updated in the database, a webhook fires to /api/revalidate.

app/api/revalidate/route.ts
import { revalidateTag } from 'next/cache';
import { NextRequest, NextResponse } from 'next/server';

export async function POST(req: NextRequest) {
  const secret = req.headers.get('x-revalidate-secret');

  if (secret !== process.env.REVALIDATE_SECRET) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }

  const { tag } = await req.json();
  revalidateTag(tag); // e.g. 'pricing'

  return NextResponse.json({ revalidated: true, tag });
}
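
The backend side of that webhook is a single POST. Our admin service is FastAPI, but the shape of the call is the same in any language; here is a sketch in TypeScript for consistency with the rest of the post. The `triggerRevalidate` function, base URL, and secret value are assumptions; the endpoint path and header name match the route handler above.

```typescript
// Sketch: after a pricing write commits, fire the revalidation webhook.
async function triggerRevalidate(baseUrl: string, secret: string, tag: string) {
  const res = await fetch(`${baseUrl}/api/revalidate`, {
    method: 'POST',
    headers: {
      'x-revalidate-secret': secret,
      'content-type': 'application/json',
    },
    body: JSON.stringify({ tag }),
  });
  if (!res.ok) throw new Error(`Revalidation failed: ${res.status}`);
  return res.json();
}
```

Firing this inside the same transaction-commit path as the pricing write keeps the cache and the database from drifting apart.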
Revised Architecture — Pricing Update Flow
═══════════════════════════════════════════

Admin Panel
    │
    ▼
FastAPI /admin/pricing  ──► PostgreSQL (write)
    │
    ▼
Webhook → /api/revalidate
    │
    ▼
revalidateTag('pricing')
    │
    ├─► Full Route Cache: invalidated for tagged routes
    ├─► Data Cache: invalidated for tagged fetches
    │
    ▼
Next request per region
    │
    ▼
FastAPI /pricing/current ──► PostgreSQL (read live price)
    │
    ▼
Fresh HTML rendered & cached with new price
  

Pricing changes now propagate in under 2 seconds globally. Before the fix, the propagation window was unbounded. Whatever the Full Route Cache held, users got.

<2s  pricing propagation (was 3+ hrs)
68ms  TTFB maintained (no perf regression)
4  edge regions warmed on every deploy
0  stale-price tickets since fix

Lessons learned

  • Next.js 15 App Router caching is opt-out, not opt-in, for static-looking routes. If your Server Component fetches data without any dynamic function access, Next.js will cache the entire rendered output by default. Pages Router never did this. Migrating teams need an explicit audit of which routes contain live data.
  • Audit every route that displays user-facing pricing, inventory, or session data. Add export const dynamic = 'force-dynamic' or tag-based revalidation before migrating these routes.
  • Redeploying is not a reliable cache-busting strategy at the edge. Each edge region re-warms independently. A deploy without explicit warming leaves a 60 to 90 minute stale window per region. Build warming into your CI/CD.
  • On-demand revalidation (revalidateTag) is the right primitive for data that changes outside of code deploys. Attach tags to your fetches and fire a revalidation webhook from your backend whenever the data changes.
  • Zero errors in logs is not the same as correct behaviour. The system was healthy by every infra metric: latency, error rate, status codes. What we needed was a business-layer check: does the displayed price match the database price? Add synthetic monitoring that validates content, not just availability.
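
That last lesson can be made concrete with a tiny content-level synthetic check: fetch the rendered page, pull out the displayed price, and compare it to what the pricing API reports. This is a sketch; the `$NN/month` format and the `monthlyPrice` field are assumptions based on the examples in this post, and the URLs would come from your monitoring config.

```typescript
// Pull the displayed price out of rendered HTML (assumed "$NN/month" format).
function extractPrice(html: string): number | null {
  const m = html.match(/\$(\d+(?:\.\d{2})?)\s*\/?\s*month/);
  return m ? Number(m[1]) : null;
}

// The alert we were missing: infra healthy, content wrong.
async function checkPriceConsistency(pageUrl: string, apiUrl: string) {
  const [html, pricing] = await Promise.all([
    fetch(pageUrl).then((r) => r.text()),
    fetch(apiUrl).then((r) => r.json()),
  ]);
  const shown = extractPrice(html);
  if (shown !== pricing.monthlyPrice) {
    throw new Error(`Stale price: page shows ${shown}, API says ${pricing.monthlyPrice}`);
  }
}
```

Run it on a schedule against every region you serve from, since a per-region stale cache is exactly what a single-region probe will miss.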