How To Optimize ASP.NET Core Performance With Monitoring Tools
ON THIS PAGE
- Monitoring basics for ASP.NET Core
- Sentry in ASP.NET Core performance monitoring
- Best practices for performance monitoring with Sentry in ASP.NET Core
- Key takeaways (so you can get back to coding)
Slow requests are never subtle. They clog your logs, grumble in your profilers, and surface the moment your PM walks by. ASP.NET Core gives you the basics: built-in logging, the Visual Studio Diagnostic Tools, and dotnet-trace for those “I swear this only happens in prod” mysteries. Turning those snapshots into a story you can act on still takes more than a stopwatch and good intentions.
This guide starts with the usual suspects: profilers, logs, and traces. Then we show how continuous monitoring with Sentry stitches those clues into a timeline you can replay. By the end, you’ll know which tool to reach for, when “good enough” needs an upgrade, and how to fix performance issues before they become support tickets.
Monitoring basics for ASP.NET Core
Monitoring shows how your application behaves in real-world conditions. In software engineering, performance monitoring typically involves several complementary approaches that work together to provide insight into your system’s behavior:
Logging: Captures and records events as they happen. Logs help you trace behavior over time and are especially useful for debugging issues after the fact.
Profiling: Measures how your code executes: how much CPU it consumes, where memory is allocated, how long operations run, and where bottlenecks occur.
Tracing: Follows a request or operation as it moves through your system, often across services and layers. Traces provide visibility into execution paths and help identify latency or failure points.
Metrics: Track numeric indicators over time, such as request rates, error counts, or memory usage. Metrics help you understand trends and set performance baselines.
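Logging examples appear later in this guide; as a quick illustration of the metrics side, .NET’s built-in System.Diagnostics.Metrics API lets you emit counters like this (a minimal sketch – the meter and counter names here are illustrative, not from a real project):

```csharp
using System.Diagnostics.Metrics;

// Create a named meter once, typically as a static field.
var meter = new Meter("TodoApi.Metrics");

// A counter tracks a monotonically increasing value over time.
var requestCounter = meter.CreateCounter<long>("todo.requests", unit: "requests");

// Somewhere in a request handler: increment with an optional dimension.
requestCounter.Add(1, new KeyValuePair<string, object?>("route", "/todos"));
```

Collectors such as OpenTelemetry or dotnet-counters can then read these instruments and chart request rates over time.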
ASP.NET Core has built-in tools that support key aspects of monitoring, such as logging, profiling, and tracing. While these tools are useful during development for debugging a feature, tracking a regression, or analyzing performance locally, they aren’t comprehensive.
To fully understand your application’s performance, you’ll likely need a combination of built-in and external tools. Let’s look at some commonly used options for monitoring ASP.NET Core performance in development.
Visual Studio diagnostic tools
If you use Visual Studio, you get access to live diagnostics while running your application with the debugger attached. These include:
CPU Usage: Helps you identify methods consuming the most CPU time.
Memory Usage: Lets you inspect object allocations and track potential memory leaks.
Timeline View: Shows how execution unfolds across threads over time.
These tools help identify slow methods, memory spikes, or thread pool issues like async deadlocks.
In ASP.NET Core, async deadlocks can occur when an awaited operation blocks the thread it’s supposed to resume on, often due to Task.Result or .Wait() being called on async methods, especially on the main thread. This stalls the request pipeline and can cause severe bottlenecks under load.
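A minimal sketch of the pattern (the FetchAsync helper is illustrative, not from a real codebase):

```csharp
// Deadlock-prone: blocking on an async method with .Result.
// If the continuation needs the blocked thread (or the thread pool
// is exhausted under load), the task can never complete.
public string GetDataBlocking()
{
    return FetchAsync().Result; // blocks the calling thread
}

// Safe: await frees the thread while the operation completes.
public async Task<string> GetDataAsync()
{
    return await FetchAsync();
}

// Hypothetical helper standing in for real I/O.
private async Task<string> FetchAsync()
{
    await Task.Delay(100);
    return "done";
}
```

The async version lets the runtime return the thread to the pool during the wait, which is exactly what keeps the request pipeline moving under load.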
Despite their usefulness, the Visual Studio diagnostic tools have limitations. They only work when the debugger is attached. You can’t use them in production. You can’t trace requests across services. And once the process exits, the data is gone. As a result, these diagnostics are useful for reproducing and analyzing known problems in a local environment, but ineffective for catching intermittent issues.
The dotnet-trace tool
The cross-platform dotnet-trace command-line profiler captures runtime events from a live .NET application. It works on Windows, Linux, and macOS, and outputs a .json file tailored for Speedscope, a web-based flame graph viewer.
If dotnet-trace is not already installed on your system, add it with:
dotnet tool install --global dotnet-trace
Before collecting a trace with dotnet-trace, you’ll need to retrieve the process ID (PID) of your running ASP.NET Core application:
dotnet-trace ps
The output will look something like this:
Screenshot of the output from dotnet-trace containing the application’s PID
Copy the PID from the left-hand column and use it in place of <PID> in the command below to profile your running application:
dotnet-trace collect --process-id <PID> --format speedscope -o trace.json
This command collects data in the Speedscope format so that you can:
Track CPU samples, garbage collection events, and thread activity.
Visualize flame graphs to find performance bottlenecks.
Identify code that’s blocked, waiting, or consuming too many resources.
The output is a trace.speedscope.json file.
Now you can visit Speedscope and manually upload the trace file by clicking the Browse button and selecting your file.
Screenshot of the Speedscope home page
In the Speedscope interface, you can explore flame graphs that show which functions consumed CPU over time and where threads were blocked or idle.
Screenshot of a Speedscope flame graph
For example, the flame graph above shows that Thread (3282525) remains active for more than 40 seconds, but the call stack doesn’t change during that time. At the bottom of the stack, execution starts from Program.Main(), moving up through TaskAwaiter.HandleNonSuccessAndDebuggerNotification(), and finally to synchronization methods like SpinThenBlockingWait(), ManualResetEventSlim.Wait(), and Monitor.Wait().
This pattern indicates that the thread is stalled, likely waiting on another thread or resource that hasn’t responded. When a thread spends this much time inside Monitor.Wait(), it often indicates that the system is waiting indefinitely, possibly due to an async deadlock or a blocking operation not completing.
This kind of stalled thread can introduce serious performance issues, especially in high-throughput applications. If you encounter a trace like this, it’s worth checking other threads in the trace, especially the main thread or any background worker threads, to see whether they’re holding onto a resource or lock that’s causing the delay.
While powerful, dotnet-trace is still a manual tool. You need to know when the application misbehaves, start recording at the right time, and analyze the results yourself. That makes dotnet-trace better suited to local debugging or targeted staging environments than to continuous monitoring in production.
Interpreting the flame graphs the dotnet-trace output produces in Speedscope also requires some familiarity. These visualizations expose a lot of detail, including internal .NET runtime behavior, and take time to master. Expect a learning curve if you’re just getting started with performance profiling. Learn more about reading flame graphs in the usage section of the Speedscope GitHub repository.
Logging in ASP.NET Core
An essential part of software development, logging allows you to capture events as they happen, track application behavior, and investigate failures after the fact. Quality logs can save you hours of debugging, especially when issues don’t reproduce locally.
In ASP.NET Core, the ILogger<T> interface supports both basic and structured logging. Basic logging emits plain-text messages, often written to the console or a file. Structured logging emits key-value pairs, making it easier to search, filter, and aggregate logs using observability tools.
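A quick illustration of the difference, using hypothetical userId and elapsed variables:

```csharp
// Basic logging: the values are baked into a plain string, so the
// provider only ever sees one opaque message.
_logger.LogInformation($"User {userId} checked out in {elapsed}ms");

// Structured logging: {UserId} and {Elapsed} become named key-value
// fields that supporting providers can index, filter, and aggregate.
_logger.LogInformation("User {UserId} checked out in {Elapsed}ms", userId, elapsed);
```

The message templates look similar, but only the second form lets a tool like Seq answer “show me all checkouts slower than 500 ms for this user.”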
Logging providers
A logging provider determines where your logs go. ASP.NET Core supports several logging providers out of the box, including:
Console: Writes real-time logs to the terminal – useful during development.
File: Persists logs to disk for local review or archiving.
Seq, ELK Stack, Application Insights: Forward logs to centralized platforms for searching and analyzing at scale.
Configure logging providers in your app’s settings or in Program.cs. You can also use multiple providers simultaneously, for example, by writing to the console during local development and to tools like Seq in production.
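As a sketch, registering two built-in providers side by side in Program.cs looks like this (AddConsole and AddDebug ship with Microsoft.Extensions.Logging; third-party sinks such as Seq register themselves with similar extension methods):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Start from a clean slate, then opt in to the providers you want.
builder.Logging.ClearProviders();
builder.Logging.AddConsole(); // real-time output in the terminal
builder.Logging.AddDebug();   // visible in the IDE's debug output window
```

Every ILogger<T> resolved from dependency injection now writes to both destinations at once.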
Example: Logging slow HTTP requests with middleware
One everyday use case for logging is to track slow HTTP requests. These performance outliers can signal blocking I/O, inefficient queries, or other issues that degrade user experience.
In ASP.NET Core, you can use custom middleware like the example below to measure how long a request takes and log a warning if it exceeds a certain threshold.
using System.Diagnostics;

namespace TodoApi.Middlewares
{
    public class RequestTimingMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger<RequestTimingMiddleware> _logger;

        public RequestTimingMiddleware(RequestDelegate next, ILogger<RequestTimingMiddleware> logger)
        {
            _next = next;
            _logger = logger;
        }

        public async Task InvokeAsync(HttpContext context)
        {
            var sw = Stopwatch.StartNew();
            await _next(context);
            sw.Stop();

            if (sw.ElapsedMilliseconds > 500)
            {
                _logger.LogWarning("⚠️ Slow request: {Path} took {Elapsed}ms", context.Request.Path, sw.ElapsedMilliseconds);
            }
        }
    }
}
The middleware logs the request path and elapsed time for any request that takes longer than 500 milliseconds. The {Path} and {Elapsed} values are automatically captured as structured fields if your logging provider supports it.
To enable the middleware, register it in Program.cs:
using TodoApi.Middlewares;
// ...
var app = builder.Build();
app.UseMiddleware<RequestTimingMiddleware>();
// ...
Now, every request with an execution time greater than 500 ms will be logged in your terminal, as shown in the image below.
Screenshot showing a warning log from the middleware
This kind of logging helps catch performance issues early, especially when paired with tools like Visual Studio’s diagnostic features and dotnet-trace. Together, they provide a reliable way to identify problems in local development or staging environments.
Sentry in ASP.NET Core performance monitoring
While ASP.NET Core and its supporting tools (such as Visual Studio diagnostics, dotnet-trace, and built-in logging) offer valuable insights, they have significant limitations: they mostly give you insight into what’s happening in a single process, and only while you are actively watching. When the app restarts or the moment passes, the data is lost.
These tools don’t help you:
Trace problems across services.
Analyze production performance in real time.
Identify trends across users, routes, or deployments.
Detect N+1 queries or serialization bottlenecks.
Attribute latency to specific environments, users, or tenants.
These gaps are why real-world systems require continuous, real-time visibility. To diagnose slowdowns, catch regressions, and understand failures across users and services, you need monitoring that works retroactively, not just while you’re watching.
This is where tools like Sentry come in.
Once you move beyond local development, you need more than logs or occasional CPU snapshots — you need visibility into what’s happening in production. Not just whether a request failed, but why it failed, how long it took, what code paths or queries were involved, and which users were affected.
Sentry provides that context through real-time monitoring, tracing, and profiling. Tracing captures the full lifecycle of each request, from controller to database to third-party calls, with every operation recorded as a span. Still in beta, profiling reveals CPU and memory behavior during execution, helping you isolate hot paths or blocking operations. With alerting and performance insights, Sentry enables you to detect, prioritize, and fix issues before users notice them.
Setting up Sentry in ASP.NET Core
You can integrate Sentry with your ASP.NET Core app using the Sentry.AspNetCore SDK. Once installed and configured, the SDK automatically captures exceptions, traces requests, and profiles performance.
The code used for the following examples is available at this sample app GitHub repo.
First, install the Sentry.AspNetCore NuGet package:
dotnet add package Sentry.AspNetCore -v 5.13.0
In the ASP.NET Core API’s Program.cs file, configure the builder to initialize Sentry with options:
builder.WebHost.UseSentry(options =>
{
    options.Environment = builder.Environment.EnvironmentName;
    options.TracesSampleRate = 1.0;
    options.ProfilesSampleRate = 1.0;
});
Now configure Sentry in the appsettings.json file:
"Sentry": {
  "Dsn": "https://2b747069....",
  "SendDefaultPii": true,
  "MaxRequestBodySize": "Always",
  "MinimumBreadcrumbLevel": "Debug",
  "MinimumEventLevel": "Warning",
  "AttachStackTrace": true,
  "Debug": true,
  "DiagnosticLevel": "Error",
  "TracesSampleRate": 1.0
}
With these configurations in place, Sentry starts capturing:
Exceptions and stack traces
Slow requests and spans
External calls (HTTP and SQL)
N+1 queries
Now let’s see the monitoring platform in action.
Detecting N+1 query problems with Sentry
An N+1 query problem occurs when your code executes one query to fetch a collection of items, and then an additional query per item to fetch related data.
This pattern is inefficient – it slows down requests, adds load to the database, and gets worse as the dataset grows. What works in dev with five records can fall apart in production with 5,000.
Our sample app allows you to create to-do items, each with multiple comments. When you fetch a list of to-dos, you also want to retrieve their comments and return both in the API response.
A common implementation might:
Query the to-do items.
Query each item’s comments separately.
Manually attach the comments to each item.
Here’s a naive controller example from the Controllers/TodoApi.cs file:
[HttpGet("nplusone-comments")]
public async Task<IActionResult> NPlusOneComments()
{
    var todos = await _context
        .TodoItems.OrderBy(t => t.Id)
        .Take(500)
        .ToListAsync(); // One query

    foreach (var todo in todos)
    {
        todo.Comments = await _context
            .Comments.Where(c => c.TodoItemId == todo.Id)
            .ToListAsync(); // 500 additional queries
    }

    return Ok(todos);
}
This code looks fine at first glance. It works, it returns the correct data, and the logic seems clear. But Sentry alerts us to an issue:
Screenshot of the Sentry dashboard showing an N+1 query issue
Zooming in on the issue gives us more information about what’s happening.
Screenshot of the Sentry dashboard with the N+1 query highlighted
Sentry shows us that we have repeating spans of SQL queries. Blue warning icons flag the SQL transactions identified as N+1 queries.
The trace clearly reveals the problem: redundant queries, high span count, and a long total duration — all without a single exception being thrown. This is what makes tracing so valuable: it catches what logging misses.
To fix the N+1 issue, replace the loop that performs one query per item with a single query that loads all the related data simultaneously. In Entity Framework, this is done using .Include(), which tells the ORM to eager-load the related entities in a single database operation.
Let’s add another API to the TodoApiController class that uses .Include() to eager-load the comments:
[HttpGet("with-comments-fixed")]
public async Task<IActionResult> WithCommentsFixed()
{
    var todos = await _context
        .TodoItems
        .Take(500)
        .Include(t => t.Comments)
        .ToListAsync(); // One query with join

    return Ok(todos);
}
With this change, Entity Framework generates a single SQL query that joins the TodoItems and Comments tables. Instead of executing 500 separate queries to fetch comments individually, it loads everything in one pass.
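As a side note (not part of the sample app), very wide joins can themselves become a cost when each parent row is duplicated for every child. EF Core offers .AsSplitQuery() for that case, keeping the eager loading but splitting the work into a couple of predictable queries instead of one large join:

```csharp
// Same eager loading as before, but EF Core issues one query for the
// to-dos and a second for their comments, avoiding row duplication
// from the join. Whether this is faster depends on your data shape.
var todos = await _context
    .TodoItems
    .Take(500)
    .Include(t => t.Comments)
    .AsSplitQuery()
    .ToListAsync();
```

Either way, you stay at a constant number of queries rather than one per item.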
Sentry goes beyond detecting N+1 database queries. It can also surface slow database operations, N+1 external API calls, render-blocking assets on the frontend, compressed or oversized HTTP payloads, consecutive duplicate HTTP requests, and more. It can even help track slow endpoints or bottlenecks introduced by third-party APIs, whether that delay happens inside your code or during external HTTP calls.
Even better, you get to decide what counts as a problem. Sentry gives you complete control over what gets flagged. For example, you can configure it to report N+1 queries only if they last longer than 100 ms and occur more than five times, or flag any database query exceeding 500 ms. These values are adjustable so that you can tune alerts based on your definition of “too slow.”
To fine-tune Sentry alerts, select your project in the upper-left corner of the sidebar, then click the Settings button in the top right of the Project Details page. Open Performance in the left-hand sidebar; all configuration options are under Performance Issues – Detector Threshold Settings.
Screenshot of the Sentry dashboard showing where to find performance settings
Sentry also allows you to set up alerts when performance issues are detected in your application. To create an alert, go to your project details page and click Create Alert, choose the performance metric you want to monitor (for example, transaction duration), and then set your thresholds.
In the example in the screenshot below, we’ve configured Sentry to treat any transaction lasting over 100 ms as critical, and anything exceeding 50 ms as a warning.
Screenshot of the Sentry dashboard showing where to find alert settings
Here we opt to be notified of issues and warnings by email, but you can receive Sentry alerts in Slack, Microsoft Teams, or any webhook-compatible service. Once your conditions and actions are set, save the rule, and Sentry will start monitoring based on those parameters.
Best practices for performance monitoring with Sentry in ASP.NET Core
Once Sentry is set up, it’s easy to rely on the defaults and let the platform passively collect errors and traces. But to really improve your application’s performance and reliability, you need to go a little deeper.
Here’s a checklist of practices that will help you get the most value out of Sentry in real-world ASP.NET Core projects, especially when performance and stability matter.
Complete your instrumentation first
When setting up tracing and profiling, start with 100% capture. Capturing everything early ensures important endpoints, jobs, and external calls are properly instrumented. Without full coverage, you risk missing spans when debugging.
Once validated, adjust sampling rates to control costs.
builder.WebHost.UseSentry(options =>
{
    // Capture 100% of transactions for now to verify coverage
    options.TracesSampleRate = 1.0;
    options.ProfilesSampleRate = 1.0;
    // After validation, lower these values to reduce ingestion costs
});
Attach User and Tenant IDs to every scope
By default, Sentry captures the trace, but not who was affected. You can add user context to every event with just a few lines of code:
SentrySdk.ConfigureScope(scope =>
{
    scope.User = new SentryUser
    {
        Id = userId,
        Email = email,
        Username = username
    };
    scope.SetTag("tenant_id", tenantId);
});
This will help you:
Prioritize issues based on customer impact.
Debug multi-tenant edge cases.
Segment by account, role, or region.
Use Sentry in staging, pre-production, and production
Don’t limit Sentry to production. Enable it in staging, sandbox, and QA environments to:
Catch performance regressions before production.
Debug errors under production-like configurations.
Run profiling safely before real users see the issue.
You can configure the environment like this in your Program.cs file:
builder.WebHost.UseSentry(options =>
{
    options.Environment = builder.Environment.EnvironmentName;
});
Log manual errors or warnings with breadcrumbs
In Sentry, breadcrumbs represent a timeline of events leading up to an issue. Depending on your platform and enabled integrations, the SDK automatically captures breadcrumbs such as outgoing HTTP requests, database calls, or user interactions.
When an error is reported, Sentry attaches these breadcrumbs to the issue, giving you the full execution context without digging through raw logs. This is especially useful when debugging problems that don’t produce exceptions, like unexpected user behavior, race conditions, or silent failures that leave no trace in logs or metrics. Instead of guessing what happened before the failure, you see it — step by step.
You can also add breadcrumbs manually to highlight specific parts of your application’s flow:
SentrySdk.AddBreadcrumb(
    message: "Entered payment retry handler",
    category: "payments",
    level: BreadcrumbLevel.Info
);
For soft failures or logic branches that indicate problems (but don’t crash), you can send a warning message instead:
SentrySdk.AddBreadcrumb(
    "Payment provider returned non-success status, retrying...",
    category: "payments",
    level: BreadcrumbLevel.Warning
);
Learn more about other breadcrumb types in the breadcrumb documentation.
Avoid sensitive data in error context
When you send telemetry to a third-party service, it’s critical to understand what data is being captured, how it’s processed, and where it’s stored. Sentry provides several ways to control sensitive data, both at the SDK and server levels.
Sentry scrubs many common fields (like passwords and credit card numbers) by default. But if your application sends custom data, such as raw request bodies, headers, or form content, you should actively filter out anything sensitive before it’s sent.
You can scrub data using the BeforeSend hook when initializing the SDK in your Program.cs file. The hook gives you access to the event payload before it’s sent to Sentry:
builder.WebHost.UseSentry(options =>
    options.SetBeforeSend((sentryEvent, hint) =>
    {
        if (sentryEvent.Exception != null
            && sentryEvent.Exception.Message.Contains("user_birthday"))
        {
            return null; // Drop the event entirely
        }
        return sentryEvent;
    })
);
Add tags, scope data, and context early
Every trace and error in Sentry can carry additional metadata. Use this to filter, group, and search issues more effectively.
SentrySdk.ConfigureScope(scope =>
{
    scope.SetTag("feature_flag", "v2-checkout");
    scope.SetExtra("cart_items", cart.Count);
    scope.TransactionName = "POST /checkout/submit";
});
Key takeaways (so you can get back to coding)
Tracking down performance issues after deployment is essential in any production-ready system. Without proper tooling, you’re left guessing – relying on logs, user complaints, and scattered metrics that don’t always tell the full story.
Sentry helps teams with real-time visibility, making ASP.NET Core performance tuning and optimization more actionable. You can detect, understand, and fix issues before they affect your users. Enabling tracing, setting up profiling, or receiving real-time notifications through the right channels allows you to act faster and more precisely – especially when issues are user-specific, intermittent, or tied to performance regressions that aren’t obvious during development.
By following the best practices described in this guide, you ensure Sentry works for you – not just capturing issues, but giving you the right context to act fast. When something goes wrong, you’ll know:
Who was affected
What caused it
Where it happened
From there, you can use native tools to locally reproduce and fix the issue faster. And if it only affects specific users, Sentry already gives you the context — user ID, tenant, and flags — to isolate it quickly.
Learn more about how to set up Sentry in your ASP.NET Core applications in our official documentation.