Next.js observability gaps and how to close them


Sergiy Dybskiy


This blog post is based on a recent live workshop. You can watch the full livestream on YouTube.

Next.js gives you a lot for free: server-side rendering, file-based routing, edge runtimes. What it doesn’t give you is a clear picture of what’s actually happening in production. The framework’s three-runtime architecture (client, server, edge) means errors can surface in one layer while originating in another, database queries hide behind ORM abstractions, and server actions swallow useful error messages before they ever reach the browser.

This post walks through five specific observability gaps in Next.js apps, why they exist, and how to close them with Sentry.

TL;DR

  • Next.js production builds strip error details from server actions. The client sees “An error occurred in a server component render” with zero context. Sentry captures the original server-side exception with full stack traces.

  • Hydration errors are among the most common and least helpful errors in React. Sentry provides an HTML diff view that shows exactly which DOM nodes diverged between server and client renders.

  • Server actions don’t emit OpenTelemetry spans, so they need manual instrumentation with withServerActionInstrumentation to appear in your traces.

  • Database queries through ORMs like Drizzle are invisible to tracing by default. Adding an integration for your database client (like libSQL for Turso) surfaces every query as a span.

  • AI agent monitoring using the Vercel AI SDK integration gives you per-model token usage, cost breakdowns, and tool call traces without leaving Sentry.

The setup: three runtimes, three config files

Next.js runs your code in three environments: the browser, the Node.js server, and edge runtimes. Each needs its own Sentry initialization, and running the Sentry wizard gets you started:

```shell
npx @sentry/wizard@latest -i nextjs
```

The wizard creates separate initialization files for each: instrumentation-client.ts for the browser, sentry.server.config.ts for Node.js, and sentry.edge.config.ts for edge runtimes.

The wizard also adds a global error boundary (global-error.tsx) and wraps your next.config.ts with withSentryConfig. The wrapper handles source map uploads for readable stack traces and configures tunnel routing, which sends Sentry data through your own server to avoid ad blockers.

A few things worth noting about the config:

  • Sample rates matter. Set tracesSampleRate to 1.0 in development, 10–20% in production. Going higher burns through quota fast.

  • sendDefaultPii attaches user IP addresses to replays and events. Optional, but useful for correlating sessions to real users.

  • Edge config can differ. If your middleware just reroutes requests, you can safely disable tracing in the edge config to reduce noise.
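As an illustration of that last point, here is a minimal sketch of an edge config with tracing disabled. It assumes the wizard’s default file name and DSN environment variable, and that your middleware only reroutes requests:

```typescript
// sentry.edge.config.ts
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  // No edge spans to cut noise; errors from middleware are still captured.
  tracesSampleRate: 0,
});
```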

One more thing about the setup: call Sentry.setUser() once after authentication to propagate user context across errors, logs, traces, and replays.
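A minimal sketch of that pattern, assuming a hypothetical `user` object coming out of your own auth flow:

```typescript
import * as Sentry from "@sentry/nextjs";

// Hypothetical helper: call once after your auth flow succeeds.
// The shape of `user` depends on your auth provider.
export function identifyUser(user: { id: string; email?: string }) {
  Sentry.setUser({ id: user.id, email: user.email });
}

// Clear the context on sign-out so later events aren't misattributed.
export function clearUser() {
  Sentry.setUser(null);
}
```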

Hydration errors: common and not very helpful

Hydration is the process where React attaches event handlers to server-rendered HTML, making it interactive. Hydration errors happen when the markup rendered by React on the client doesn’t match the initial server-rendered HTML, or when invalid HTML was sent by the server, and React couldn’t fix it.

The classic cause: a theme toggle that reads from localStorage. The server renders the light theme (it has no access to localStorage), the client reads the stored dark theme preference, and React throws a hydration error because the HTML doesn’t match.

In production, the browser gives you almost nothing useful. You get a minified React error pointing to a decoder URL, and a stack trace full of chunk files.

The HTML diff that actually helps

To help you debug hydration errors, Sentry provides a diff tool that shows the differences between client-rendered and server-rendered HTML. If you have Session Replay enabled, Sentry will detect hydration errors and bring them into your issue stream.

The diff shows before (server) and after (client) in a format that looks like a GitHub PR review. Seeing the page before and after React hydrates makes it easy to find the element or attribute that caused the error. The easiest mismatches to spot are text content differences, incorrectly nested HTML elements, and attribute changes.

If you’re already using Session Replay, you get automatic grouped hydration error issues for free. They’re generated from Replays, so they have no impact on your error quota.

The fix for theme-related hydration errors is usually straightforward: defer the theme read to a useEffect so the initial server and client renders match, then apply the stored preference after hydration completes.
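A sketch of that fix, using a hypothetical `ThemeToggle` component. The server always renders the light theme, and the stored preference is only applied after hydration:

```tsx
"use client";

import { useEffect, useState } from "react";

export function ThemeToggle() {
  // Initial state matches what the server rendered: always "light".
  const [theme, setTheme] = useState<"light" | "dark">("light");

  // Runs only on the client, after hydration, so the initial server
  // and client renders are identical and no mismatch is thrown.
  useEffect(() => {
    const stored = window.localStorage.getItem("theme");
    if (stored === "dark") setTheme("dark");
  }, []);

  return (
    <button onClick={() => setTheme(theme === "light" ? "dark" : "light")}>
      {theme === "light" ? "Switch to dark" : "Switch to light"}
    </button>
  );
}
```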

Server actions are a tracing blind spot

Server actions are Next.js’s pattern for handling form submissions and mutations, essentially typed POST requests. Sentry automatically instruments most operations, but server actions require manual setup.

The reason: server actions don’t emit OTel spans that Sentry can hook into. Because of how Turbopack bundles them, auto-instrumentation would be extremely error-prone; it would effectively require building a Next.js server actions compiler, which isn’t a reasonable undertaking.

Without instrumentation, a server action shows up as an anonymous HTTP POST. With it, you get a named span, timing data, and (critically) distributed trace continuity between client and server.

Wrapping a server action

Wrap your server actions with Sentry.withServerActionInstrumentation(). Here’s what that looks like:

```typescript
"use server";

import * as Sentry from "@sentry/nextjs";
import { headers } from "next/headers";

export async function login(formData: FormData) {
  return Sentry.withServerActionInstrumentation(
    "login", // Name that appears in Sentry traces
    {
      headers: await headers(), // Connects client and server traces
      formData,
      recordResponse: true,
    },
    async () => {
      // Your actual login logic
      const result = await authenticateUser(formData);
      return result;
    },
  );
}
```

The withServerActionInstrumentation wrapper creates named spans for each action, captures timing and errors, connects client and server traces via headers, and attaches form data to Sentry events.

The headers parameter is what makes distributed tracing work. Sentry reads the trace ID and baggage from the request headers to stitch together the client-initiated trace with the server-side execution. Without it, you get two disconnected traces instead of one continuous picture.

Production error messages are useless (by design)

There’s another reason server action observability matters. In production builds, Next.js intentionally strips error details from server-side failures before they reach the client. All the user sees is: “An error occurred in a server component render. The specific message is omitted in production builds to avoid leaking sensitive details.”

This is the right security decision. It’s also completely useless for debugging. But because Sentry instruments the server side directly, you still get the full exception, like “Database connection lost during authentication”, instead of the sanitized nothing. This alone justifies the setup cost if you’re using server actions for anything important.

Logs and metrics: choosing the right signal

Errors, logs, and metrics serve different purposes, and the distinction matters for how you instrument a Next.js app.

  • Errors (Sentry.captureException) — something is broken and needs fixing. Creates an issue, triggers alerts, feeds into Seer for root cause analysis.

  • Logs (Sentry.logger) — contextual breadcrumbs. What happened before, during, and after a failure. High-cardinality, queryable, trace-connected.

  • Metrics (Sentry.metrics) — counters, durations, gauges. Good for dashboards and alerts on aggregate patterns.

To enable logs, add enableLogs: true to each of your Sentry init files:

```typescript
// instrumentation-client.ts, sentry.server.config.ts, sentry.edge.config.ts
Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tracesSampleRate: 0.1,
  enableLogs: true,
});
```

Once enabled, Sentry.logger sends structured logs from anywhere in your application:

```typescript
import * as Sentry from "@sentry/nextjs";

Sentry.logger.info("User added talk to schedule", {
  userId: session.user.id,
  talkId: talk.id,
  action: "add_to_schedule",
});
```

Because logs are trace-connected, when you open an issue in Sentry, you see every log emitted during that trace. You can also navigate to the Log Explorer, filter by any attribute (like talkId or userId), and build alerts or dashboards from the results.

One important distinction: logs and metrics aren’t sampled. If your tracesSampleRate is 10%, you’ll still get 100% of your logs and metric data points. Traces use statistical sampling and Sentry extrapolates aggregate numbers, but logs and metrics give you exact counts.

Database queries disappear behind your ORM

If you’re using an ORM like Drizzle with a database like Turso, your traces will show server actions and API routes, but the actual SQL queries inside them are invisible by default. You’ll see that a request took 850ms but not why.

Fixing this requires two things: wiring up the database client integration and adding it to your Sentry server config.

For a Turso (libSQL) database, add the libsqlIntegration to your server config:

```typescript
// sentry.server.config.ts
import * as Sentry from "@sentry/nextjs";
import { libsqlIntegration } from "@sentry/nextjs";
import { client } from "./db"; // Your libSQL client instance

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 0.1,
  integrations: [
    libsqlIntegration({ client }),
  ],
});
```

You’ll also need to add @libsql/client to the serverExternalPackages in your next.config.ts so it bundles correctly.
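A sketch of what that looks like, assuming the wizard has already wrapped your config with withSentryConfig:

```typescript
// next.config.ts
import type { NextConfig } from "next";
import { withSentryConfig } from "@sentry/nextjs";

const nextConfig: NextConfig = {
  // Keep the libSQL client out of the server bundle so the Sentry
  // integration can wrap the real module at runtime.
  serverExternalPackages: ["@libsql/client"],
};

export default withSentryConfig(nextConfig);
```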

Once configured, every Drizzle query surfaces as a span with the actual SQL, even though you wrote your queries using Drizzle’s TypeScript API. Sentry translates the ORM calls into their SQL equivalents in the trace waterfall. This means you can use the Query Insights view to see operations per minute, average duration, and get automatic alerts for N+1 queries or slow database calls.

The same pattern applies to other databases. For Postgres (including Neon), the Sentry Node SDK includes Postgres instrumentation by default, so you might not need any custom configuration. For Supabase, there’s a dedicated Supabase integration.

AI agent monitoring: tracing token spend back to users

If your Next.js app includes AI features (chat interfaces, agent workflows, generated content, etc.), you probably have a decent-sized bill from your model provider. What you probably don’t have is a breakdown of which features, which users, or which agent paths are responsible for that cost.

The vercelAIIntegration adds instrumentation for the AI SDK by Vercel to capture spans using the AI SDK’s built-in telemetry. This integration is enabled by default in the Node runtime, but not in the Edge runtime.
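If you run AI calls on the edge runtime, you can opt in explicitly. A minimal sketch, assuming the wizard’s default edge config file:

```typescript
// sentry.edge.config.ts
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tracesSampleRate: 0.1,
  // Not enabled by default on the edge runtime, so add it manually.
  integrations: [Sentry.vercelAIIntegration()],
});
```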

For each AI function call, you can enable detailed telemetry:

```typescript
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const result = await streamText({
  model: anthropic("claude-sonnet-4-20250514"),
  prompt: userMessage,
  experimental_telemetry: {
    isEnabled: true,
    functionId: "search-agent", // Shows up in Sentry as the span name
    recordInputs: true,
    recordOutputs: true,
  },
});
```

Setting functionId in experimental_telemetry makes it easier to correlate captured spans with function calls. If you have multiple agents, say a router that delegates to a search agent and an info agent, each using different models, each gets its own named span in the trace.

In Sentry’s Agent Monitoring view, you get:

  • Model cost breakdown — which models you’re using, how much, and what it costs

  • Token usage — input and output tokens per model, per request

  • Tool call visibility — every tool invocation, including errors, linked back to the triggering trace

  • Full trace context — AI calls shown alongside database queries, API calls, and everything else in the request

That last point is the one that matters most. If an AI response takes five seconds, is it because the model is slow, or because the tool call triggered a slow database query? The trace waterfall shows you both in the same view, rather than requiring you to cross-reference your Anthropic dashboard with your application logs.

Both recordInputs and recordOutputs default to true. Set these to false if your prompts or responses contain sensitive data you don’t want sent to Sentry.

The recap

  • Three runtimes, three configs. Next.js splits across client, server, and edge. Instrument all of them, but configure each appropriately.

  • Hydration errors need visual diffs. The browser error message is useless in production. Sentry’s diff tool shows you the actual DOM divergence.

  • Server actions need manual wrapping. No OTel spans means no auto-instrumentation. Use withServerActionInstrumentation and pass headers for distributed tracing.

  • Logs and metrics aren’t sampled. Unlike traces, you get every single one. Use them for the data that can’t afford gaps.

  • ORM queries are invisible by default. Add a database integration to see actual SQL in your traces and catch N+1 queries automatically.

  • AI monitoring connects cost to context. Token spend is meaningless without knowing which users, features, and code paths generated it.

Get started with the Next.js SDK docs, or check out the debugging Next.js series on YouTube for more stuff like this.
