Performance Budgets for Web Projects: A Practical Setup Guide
Last year I worked on a project that started fast and ended slow. Not dramatically slow - it wasn’t unusable. But the Lighthouse score that started at 95 was sitting at 68 by the time we launched. A few unoptimised images here, a couple of extra npm packages there, a third-party analytics script, a chat widget, and suddenly the bundle was 400KB larger than planned and the page took four seconds to become interactive on mobile.
Nobody noticed it happening in real time. Each individual addition was small. The accumulated effect was significant. This is how most web performance degrades - not through one bad decision, but through dozens of small ones that nobody tracks.
Performance budgets are the fix. They’re limits you set in advance on measurable performance metrics. When a change would push you over the budget, the build fails, the deploy blocks, or at minimum, someone gets a warning. It’s the same concept as a financial budget: decide what you can afford before you start spending.
What to Measure
Not everything needs a budget. Focus on the metrics that correlate most strongly with user experience.
Total JavaScript bundle size is the most impactful thing to budget. JavaScript is the most expensive asset type per byte - it has to be downloaded, parsed, compiled, and executed. A 500KB JavaScript bundle has a much larger impact on interactivity than a 500KB image, because the image doesn’t block the main thread.
Set a budget for your total JS bundle size (compressed) and for your largest individual chunk. For a typical content-heavy site, I aim for under 150KB total compressed JS. For a complex web application, 300KB is a reasonable upper limit. If you’re over 500KB, something has gone wrong.
Largest Contentful Paint (LCP) measures when the largest visible element finishes rendering. It’s Google’s primary loading metric and directly affects Core Web Vitals scores. Budget: under 2.5 seconds on mobile.
Cumulative Layout Shift (CLS) measures visual stability - how much elements jump around during loading. Budget: under 0.1. This is usually an easy target to hit if you set explicit dimensions on images and avoid dynamically injected content above the fold.
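Explicit dimensions let the browser reserve the right amount of space before the file arrives. The file name below is illustrative:

```html
<!-- The browser derives the aspect ratio from width/height and reserves
     the space in the layout, so nothing shifts when the image loads -->
<img src="/images/team.jpg" width="800" height="600" alt="Team photo">
```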
Total Transfer Size covers everything: HTML, CSS, JS, images, fonts, third-party scripts. Budget this per page, not for the entire site. A homepage might have a different budget than a product detail page. For most sites, I target under 1MB total transfer for the initial page load.
Third-party script size deserves its own budget because it’s the category most likely to blow up without anyone on the team noticing. Marketing adds a new analytics tool. Support adds a chat widget. Sales adds a heatmap tracker. Each one is “just a small script.” Together, they can add hundreds of KB and dozens of network requests.
Setting Up With Webpack or Vite
If you’re using Webpack, the built-in performance hints give you a basic budget:
```javascript
// webpack.config.js
module.exports = {
  performance: {
    maxEntrypointSize: 250000, // 250KB
    maxAssetSize: 200000, // 200KB per asset
    hints: 'error', // fail the build
  },
};
```
Setting hints: 'error' makes the build fail if any asset exceeds the limit. This is deliberately strict - I prefer builds that fail over warnings that get ignored.
For Vite, there’s no built-in equivalent, but you can add a reporting plugin such as rollup-plugin-bundle-size, or write a custom one:
```javascript
// vite.config.js
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          // Split vendor code to track it separately
          vendor: ['react', 'react-dom'],
        },
      },
    },
    chunkSizeWarningLimit: 250, // KB
  },
});
```
But Vite’s chunkSizeWarningLimit only warns - it doesn’t fail the build. For enforcement, you need something more.
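One option is a small custom plugin that throws when an emitted file exceeds a byte limit, which aborts the build. This is a minimal sketch, not a published package; enforceBudget is a hypothetical name, and it checks uncompressed sizes, so set the limit accordingly if your budgets are gzip numbers:

```javascript
// Sketch of a custom Vite/Rollup plugin that fails the build when any
// emitted chunk or asset exceeds a byte budget (uncompressed sizes).
function enforceBudget(maxBytes) {
  return {
    name: 'enforce-budget',
    // Rollup calls generateBundle with every chunk and asset before writing
    generateBundle(_options, bundle) {
      for (const [fileName, output] of Object.entries(bundle)) {
        const size = output.type === 'chunk'
          ? Buffer.byteLength(output.code)
          : Buffer.byteLength(output.source);
        if (size > maxBytes) {
          // Throwing inside a build hook aborts the build with a non-zero exit
          throw new Error(
            `${fileName} is ${size} bytes, over the ${maxBytes}-byte budget`
          );
        }
      }
    },
  };
}
```

Register it in vite.config.js with something like plugins: [enforceBudget(250 * 1024)].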
Lightweight CI Enforcement
The most reliable way to enforce performance budgets is in CI. Here’s a simple approach using bundlesize:
```json
// package.json
{
  "bundlesize": [
    {
      "path": "./dist/assets/index-*.js",
      "maxSize": "150 kB",
      "compression": "gzip"
    },
    {
      "path": "./dist/assets/vendor-*.js",
      "maxSize": "100 kB",
      "compression": "gzip"
    },
    {
      "path": "./dist/assets/*.css",
      "maxSize": "30 kB",
      "compression": "gzip"
    }
  ]
}
```
Run npx bundlesize in your CI pipeline after the build step. If any file exceeds its budget, the pipeline fails.
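In GitHub Actions, for example, that's one extra step after the build. The workflow below is a sketch; the file name, Node version, and npm scripts are assumptions about your setup:

```yaml
# .github/workflows/budget.yml (sketch)
name: performance-budget
on: [pull_request]
jobs:
  budget:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
      - run: npx bundlesize # fails the job if any budget is exceeded
```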
For a more comprehensive approach, Lighthouse CI runs a full Lighthouse audit on every pull request and compares against budgets:
```json
// lighthouserc.json
{
  "ci": {
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-byte-weight": ["warning", { "maxNumericValue": 1000000 }]
      }
    }
  }
}
```
This gives you both bundle-level and user-experience-level budgets, checked on every commit.
Handling Third-Party Scripts
Third-party scripts are the biggest threat to performance budgets because they’re often added outside the normal development process. Someone adds a script tag to the HTML template and bypasses the entire build pipeline, bundling tools, and CI checks.
Two strategies help:
Script inventory. Maintain a documented list of every third-party script loaded on the site, including its purpose, who added it, what it costs in terms of bytes and network requests, and whether it’s still needed. Review the list quarterly. You’ll be surprised how many scripts are still loading for features nobody uses anymore.
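One lightweight way to keep that inventory honest is to check it into the repo so it shows up in code review. The fields here are a suggestion, not a standard format:

```json
[
  {
    "script": "https://analytics.example.com/tracker.js",
    "purpose": "Page analytics",
    "owner": "marketing",
    "added": "2024-03-10",
    "transferKb": 48,
    "requests": 3
  }
]
```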
Loading strategy. Load third-party scripts with the async or defer attribute so they don’t block HTML parsing. If a script isn’t needed for initial rendering, it shouldn’t block initial rendering:
```html
<!-- Bad: blocks rendering -->
<script src="https://analytics.example.com/tracker.js"></script>

<!-- Better: doesn't block rendering -->
<script src="https://analytics.example.com/tracker.js" async></script>

<!-- Best for non-critical scripts: load after the page is interactive -->
<script>
  window.addEventListener('load', () => {
    const script = document.createElement('script');
    script.src = 'https://analytics.example.com/tracker.js';
    document.body.appendChild(script);
  });
</script>
```
For critical scripts that must load on every page (like analytics), consider self-hosting them to eliminate the DNS lookup and connection overhead. The Web Almanac reports that third-party requests account for roughly 45% of all requests on the median web page. That’s a lot of performance impact from code you didn’t write.
Images
Images are typically the largest assets on a page by byte count, but they’re easier to manage than JavaScript because they don’t block the main thread.
Still, set budgets:
```json
{
  "bundlesize": [
    {
      "path": "./dist/images/*.{jpg,png,webp}",
      "maxSize": "200 kB",
      "compression": "none"
    }
  ]
}
```
(bundlesize measures gzipped size by default; these image formats are already compressed, so compare raw sizes with "compression": "none".)
Better yet, automate image optimisation in your build pipeline. Sharp can resize and compress images at build time. Most frameworks (Next.js, Astro, Nuxt) have built-in image optimisation that generates appropriately sized variants.
The biggest image-related performance mistake I see is loading hero images that are 2400px wide on viewports that are 375px wide. Set srcset and sizes attributes properly and this problem disappears.
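With hypothetical file names, that looks like this; the browser picks the smallest candidate that satisfies sizes for the current viewport and device pixel ratio:

```html
<img
  src="/images/hero-800.jpg"
  srcset="/images/hero-400.jpg 400w,
          /images/hero-800.jpg 800w,
          /images/hero-2400.jpg 2400w"
  sizes="(max-width: 600px) 100vw, 1200px"
  width="2400" height="1200"
  alt="Hero image">
```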
What Happens When You Exceed the Budget
This is the question that determines whether your performance budget is a real constraint or a suggestion.
My recommendation: fail the build. Not a warning. Not a notification. A build failure that blocks the deploy.
Is this strict? Yes. Will developers complain? Sometimes. But warnings get ignored. I’ve watched teams accumulate hundreds of “performance warning” annotations on pull requests while the site got progressively slower. Nobody reads the fifteenth warning. Everyone notices a failed build.
When a build fails on a performance budget, the developer has three options:
- Optimise their change to fit within the budget.
- Remove something else to make room.
- Get team agreement to increase the budget (which should require justification).
All three of these are productive conversations that lead to a faster site. Ignoring a warning leads to nothing.
Starting From Where You Are
If your site is already over any reasonable performance budget, don’t set aspirational targets that fail every build. That just leads to disabling the checks.
Instead, measure your current state, set budgets at that level (or slightly below), and ratchet them down over time. Each sprint, reduce the budget by 5-10%. This creates steady pressure to optimise without creating impossible targets.
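The arithmetic of the ratchet is worth seeing once. This is a throwaway helper, not a published tool: it starts at the current measured size, steps down by a fixed percentage per sprint, and lands exactly on the long-term target:

```javascript
// Sketch: compute a budget ratchet schedule, e.g. one entry per sprint.
// currentKb is today's measured size, targetKb the long-term goal,
// stepPct the per-sprint reduction (5% by default).
function ratchetSchedule(currentKb, targetKb, stepPct = 0.05) {
  const schedule = [Math.round(currentKb)];
  let budget = currentKb;
  // Keep stepping down while a full step still leaves us above the target
  while (budget * (1 - stepPct) > targetKb) {
    budget *= 1 - stepPct;
    schedule.push(Math.round(budget));
  }
  schedule.push(targetKb); // final step lands exactly on the target
  return schedule;
}
```

Each entry becomes the maxSize for that sprint, so the pressure is steady but every intermediate budget is achievable.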
Performance is a feature, even if nobody puts it in the sprint backlog. Users notice speed. Google rewards it in rankings. And once you’ve lost it, getting it back is much harder than keeping it in the first place. That’s what budgets are for.