APIs are the backbone of modern applications, but even the cleanest code can drag if performance isn’t top of mind.
I often get this question after a session at an event, or at work from my colleagues: how can I make sure my APIs are fast?
These aren’t theoretical tips; they’re battle-tested improvements I’ve used (and seen developers forget all too often).
Oh, and yes, we’ll even let GitHub Copilot take a shot at refactoring for speed. 🚀
Use Asynchronous Requests Properly
In .NET, asynchronous programming isn't just a nice-to-have—it's a must for scalable APIs. Blocking calls can choke your thread pool, delay responses, and reduce overall throughput. Fortunately, ASP.NET Core makes writing async code pretty painless.
⚠️ Warning: If you see .Result or .Wait() in your code, chances are you’re leaving performance on the table, or worse, risking deadlocks.
Real-World Example
// ❌ Bad (Blocking)
[HttpGet("weather")]
public IActionResult GetWeather()
{
    var forecast = _weatherService.GetForecast().Result;
    var log = _dbContext.Logs.FirstOrDefault();
    return Ok(new { forecast, log });
}

// ✅ Good (Async All The Way)
[HttpGet("weather")]
public async Task<IActionResult> GetWeather()
{
    var forecast = await _weatherService.GetForecastAsync();
    var log = await _dbContext.Logs.FirstOrDefaultAsync();
    return Ok(new { forecast, log });
}
Tip: Always make the entire call chain async—from controller to service to data layer.
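To illustrate what "async all the way" looks like below the controller, here's a minimal sketch of a service/repository chain. The class and method names are hypothetical, and Task.Delay stands in for real I/O:

```csharp
using System.Threading.Tasks;

// Hypothetical repository; Task.Delay simulates a real I/O call (DB, HTTP, etc.)
public class ForecastRepository
{
    public async Task<string> GetForecastAsync()
    {
        await Task.Delay(10); // the thread is released while we "wait"
        return "Sunny";
    }
}

// The service just forwards the Task — no .Result, no .Wait() anywhere
public class WeatherService
{
    private readonly ForecastRepository _repo;
    public WeatherService(ForecastRepository repo) => _repo = repo;

    public Task<string> GetForecastAsync() => _repo.GetForecastAsync();
}
```

The controller then awaits GetForecastAsync(), so no thread pool thread is blocked at any layer of the chain.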
Use Pagination for Large Data Collections
Returning thousands of records in a single API call is one of the fastest ways to tank performance. Pagination helps by delivering data in manageable chunks.
[HttpGet("products")]
public async Task<IActionResult> GetProducts([FromQuery] int page = 1, [FromQuery] int pageSize = 20)
{
    var products = await _dbContext.Products
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .ToListAsync();

    return Ok(products);
}
Bonus: Return Pagination Metadata
var totalCount = await _dbContext.Products.CountAsync();

return Ok(new {
    data = products,
    pagination = new {
        currentPage = page,
        pageSize,
        totalCount
    }
});
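Clients usually also want to know when to stop paging, so a totalPages value is a common addition to that metadata. The numbers here are hypothetical, just to show the ceiling-division math:

```csharp
using System;

// Hypothetical counts, only to demonstrate the calculation
int totalCount = 95;
int pageSize = 20;

// Ceiling division: 95 items at 20 per page -> 5 pages
int totalPages = (int)Math.Ceiling(totalCount / (double)pageSize);
Console.WriteLine(totalPages); // prints 5
```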
Use AsNoTracking Whenever Possible
By default, EF Core tracks every entity it loads. That’s unnecessary for read-only operations and adds overhead.
// ✅ Optimized with no tracking
var products = await _dbContext.Products
    .AsNoTracking()
    .ToListAsync();
Combine With Projection
var productList = await _dbContext.Products
    .AsNoTracking()
    .Select(p => new ProductDto {
        Id = p.Id,
        Name = p.Name,
        Price = p.Price
    })
    .ToListAsync();
[UPDATE]: some of you pointed out that when you project to a DTO (and don't project the full entity), the query is already no-tracking, so in that case the explicit AsNoTracking() call above is redundant.
Enable Gzip or Brotli Compression
Compressing your responses can dramatically reduce payload size, especially for JSON-heavy APIs.
Keep in mind that compression costs some CPU on every request! 💀
Setup
builder.Services.AddResponseCompression(options =>
{
    options.EnableForHttps = true;
    options.Providers.Add<BrotliCompressionProvider>();
    options.Providers.Add<GzipCompressionProvider>();
});

builder.Services.Configure<BrotliCompressionProviderOptions>(opts =>
{
    opts.Level = CompressionLevel.Fastest;
});

builder.Services.Configure<GzipCompressionProviderOptions>(opts =>
{
    opts.Level = CompressionLevel.SmallestSize;
});

app.UseResponseCompression();
✅ ASP.NET Core will prefer Brotli if the client supports it.
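The server only compresses when the client advertises support via the Accept-Encoding header. If you're also writing the client in .NET, HttpClient can opt in with the built-in AutomaticDecompression support; here's a sketch (no endpoint shown):

```csharp
using System.Net;
using System.Net.Http;

// Opting in to Brotli/Gzip makes HttpClient send the Accept-Encoding header
// and transparently decompress responses for you
var handler = new HttpClientHandler
{
    AutomaticDecompression = DecompressionMethods.Brotli | DecompressionMethods.GZip
};
using var client = new HttpClient(handler);
// Responses from client.GetAsync(...) now travel compressed over the wire
```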
Use Cache for Frequently Accessed Data
Stop reloading the same data on every request. Use IMemoryCache or IDistributedCache to improve response time and reduce DB load.
In-Memory Example
public class ProductService
{
    private readonly IMemoryCache _cache;
    private readonly AppDbContext _db;

    public ProductService(IMemoryCache cache, AppDbContext db)
    {
        _cache = cache;
        _db = db;
    }

    public async Task<List<Product>> GetFeaturedProductsAsync()
    {
        return await _cache.GetOrCreateAsync("featured_products", async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);

            return await _db.Products
                .Where(p => p.IsFeatured)
                .AsNoTracking()
                .ToListAsync();
        });
    }
}
🫠 For distributed environments, use a distributed cache such as Redis (Azure also offers a managed Redis service) to keep cached data consistent across instances.
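For the Redis route, registration is a short config fragment via the Microsoft.Extensions.Caching.StackExchangeRedis package; the connection string and prefix below are placeholders:

```csharp
// Program.cs — requires the Microsoft.Extensions.Caching.StackExchangeRedis package
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // placeholder connection string
    options.InstanceName = "myapi:";          // optional prefix for cache keys
});
```

Your services then inject IDistributedCache instead of IMemoryCache, and the cached entries survive restarts and are shared across all instances.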
Avoid Overfetching With Proper DTOs
Entities often contain fields your frontend doesn't need, and shouldn’t see.
Entity vs DTO
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string InternalCode { get; set; }
    public DateTime CreatedAt { get; set; }
    public bool IsArchived { get; set; }
}

public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}
Query with DTO
var products = await _dbContext.Products
    .AsNoTracking()
    .Select(p => new ProductDto
    {
        Id = p.Id,
        Name = p.Name
    })
    .ToListAsync();
Ask GitHub Copilot to Refactor Your Code (Agent Mode 😎)
Copilot isn’t just for boilerplate; it can help you spot real performance issues.
Example Prompts
"Analyze this ASP.NET Core controller and suggest improvements for performance."
"Refactor this service class to reduce database queries, avoid overfetching, and use caching."
🤖 Copilot can detect blocking calls, suggest .AsNoTracking(), promote pagination, and even refactor long service methods.
The more specific your prompt, the better the result.
Thanks for reading this post! I hope you found it interesting.
Feel free to follow me to get notified when new articles are out 🙂