Browser Speed Benchmark (2026)
Browser performance remains a critical factor for web developers and power users. Whether you are optimizing a complex web application or choosing the right browser for development work, understanding how to measure browser speed accurately helps you make informed decisions.
This guide covers practical methods to benchmark browser speed in 2026, with code examples you can run immediately.
Why Benchmark Browser Speed
Different browsers use different rendering engines, JavaScript engines, and hardware acceleration strategies. These differences manifest in real-world performance variations that affect page load times, animation smoothness, and JavaScript execution speed.
For developers, benchmarking helps identify performance bottlenecks in web applications. For power users, it provides data-driven basis for browser selection.
Beyond raw speed, benchmarking also surfaces consistency problems. A browser that renders your application quickly on average but occasionally stalls for 500ms creates a worse user experience than one that is uniformly 20% slower. Understanding variance, not just averages, is one of the most practical insights benchmarking provides.
The Browser Landscape in 2026
Before diving into methodology, it helps to understand the current competitive landscape. The four major desktop browsers each use distinct engine stacks:
| Browser | Rendering engine | JavaScript engine | Maintained by |
|---|---|---|---|
| Chrome | Blink | V8 | Google |
| Firefox | Gecko | SpiderMonkey | Mozilla |
| Edge | Blink | V8 | Microsoft |
| Safari | WebKit | JavaScriptCore | Apple |
Chrome and Edge share the Blink rendering engine and V8 JavaScript engine, which means their performance characteristics are often very similar. Differences between them tend to come from Microsoft’s additions (sleeping tabs, startup boost) rather than the underlying engine.
Firefox uses SpiderMonkey, which has improved dramatically over the past several years. Its performance on many workloads is now competitive with V8, and it often leads on memory efficiency.
Safari uses WebKit and JavaScriptCore, and on Apple Silicon hardware it benefits from tight hardware integration that synthetic benchmarks frequently undercount. Battery efficiency metrics especially favor Safari on macOS.
Key Metrics to Measure
Before running benchmarks, understand which metrics matter:
- JavaScript Execution Time - How fast the browser processes JavaScript code
- Rendering Performance - Frames per second during animations and DOM updates
- Page Load Time - Time to interactive for complete page loads
- Memory Usage - How efficiently the browser handles memory under load
These four categories map to four different user experiences. Slow JavaScript execution affects the feel of interactive applications: forms, search, dashboards. Poor rendering performance causes jank in animations and scrolling. Slow page load time affects bounce rate and first impressions. Poor memory management causes tabs to feel sluggish after extended browsing sessions.
Knowing which metric matters for your workload helps you design targeted benchmarks rather than running generic tests that may not reflect your users’ actual experience.
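Several of these metrics can be sampled directly from any loaded page's console via the Navigation Timing API, which the later Puppeteer examples build on:

```javascript
// Quick page-load snapshot from the browser console (run after the page loads)
const nav = performance.getEntriesByType('navigation')[0];
console.table({
  'TTFB (ms)': (nav.responseStart - nav.requestStart).toFixed(1),
  'DOM interactive (ms)': (nav.domInteractive - nav.fetchStart).toFixed(1),
  'Load complete (ms)': (nav.loadEventEnd - nav.fetchStart).toFixed(1),
});
```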
Simple JavaScript Benchmark
The most straightforward way to measure JavaScript performance uses the performance.now() API:
```javascript
function runBenchmark(iterations = 100000) {
  const start = performance.now();

  // Test target: array operations
  const arr = [];
  for (let i = 0; i < iterations; i++) {
    arr.push(i * 2);
  }
  // Filter and map once over the full array; calling these inside the
  // loop above would make the test quadratic and hang the tab
  const filtered = arr.filter(x => x % 4 === 0);
  const mapped = filtered.map(x => x + 1); // result intentionally unused

  const end = performance.now();
  return end - start;
}

// Run multiple times for accuracy
const results = [];
for (let i = 0; i < 5; i++) {
  results.push(runBenchmark());
}
const average = results.reduce((a, b) => a + b) / results.length;
console.log(`Average execution time: ${average.toFixed(2)}ms`);
```
Run this in different browsers’ developer consoles to compare JavaScript engine performance.
A few important caveats about this type of micro-benchmark: modern JavaScript engines apply JIT compilation, which means performance often improves significantly between the first and subsequent runs of the same code. Running the benchmark five times and averaging the results, as shown above, captures the warmed-up JIT performance rather than cold-start performance.
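If you want steady-state numbers only, a common refinement is to run a few untimed warm-up passes first and discard them. A minimal sketch, reusing the runBenchmark function above:

```javascript
// Warm up the JIT with untimed passes, then measure steady-state performance
function steadyStateBenchmark(warmups = 3, samples = 5) {
  for (let i = 0; i < warmups; i++) {
    runBenchmark(); // results discarded; lets the JIT optimize the hot code
  }
  const results = [];
  for (let i = 0; i < samples; i++) {
    results.push(runBenchmark());
  }
  return results.reduce((a, b) => a + b) / results.length;
}

console.log(`Steady-state average: ${steadyStateBenchmark().toFixed(2)}ms`);
```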
To measure cold-start performance specifically (relevant for short-lived scripts), wrap the test in a fresh function call each time:
```javascript
function coldStartBenchmark() {
  const results = [];
  for (let i = 0; i < 5; i++) {
    // Create a new function each time to prevent JIT reuse
    const fn = new Function(`
      const start = performance.now();
      const arr = [];
      for (let j = 0; j < 50000; j++) {
        arr.push(j * 2);
      }
      return performance.now() - start;
    `);
    results.push(fn());
  }
  const avg = results.reduce((a, b) => a + b) / results.length;
  console.log('Cold-start average:', avg.toFixed(2) + 'ms');
  console.log('Min:', Math.min(...results).toFixed(2) + 'ms');
  console.log('Max:', Math.max(...results).toFixed(2) + 'ms');
  console.log('Spread (max - min):', (Math.max(...results) - Math.min(...results)).toFixed(2) + 'ms');
}

coldStartBenchmark();
```
Reporting both average and variance gives you a more honest picture of what your users will experience.
DOM Manipulation Benchmark
DOM operations often represent the biggest performance bottleneck in web applications:
```javascript
function domBenchmark() {
  const container = document.getElementById('benchmark-container');
  if (!container) {
    console.error('Create a div with id="benchmark-container" first');
    return;
  }
  container.innerHTML = '';

  const start = performance.now();

  // Create 1000 elements
  for (let i = 0; i < 1000; i++) {
    const div = document.createElement('div');
    div.textContent = `Item ${i}`;
    div.className = 'benchmark-item';
    container.appendChild(div);
  }

  // Measure read operations (values intentionally unused)
  for (let i = 0; i < 1000; i++) {
    const el = container.children[i];
    const text = el.textContent;
    const classList = el.classList;
  }

  const end = performance.now();
  console.log(`DOM benchmark: ${(end - start).toFixed(2)}ms`);
}
```
This benchmark has a well-known performance trap built in: appending elements inside a loop causes repeated layout recalculations. A faster pattern, and a good secondary benchmark, uses DocumentFragment to batch all DOM insertions into a single operation:
```javascript
function domBenchmarkOptimized() {
  const container = document.getElementById('benchmark-container');
  container.innerHTML = '';

  const start = performance.now();

  // Use DocumentFragment to batch insertions
  const fragment = document.createDocumentFragment();
  for (let i = 0; i < 1000; i++) {
    const div = document.createElement('div');
    div.textContent = `Item ${i}`;
    div.className = 'benchmark-item';
    fragment.appendChild(div);
  }
  container.appendChild(fragment); // Single DOM mutation

  const end = performance.now();
  console.log(`Optimized DOM benchmark: ${(end - start).toFixed(2)}ms`);
}
```
Run both versions and compare the numbers. The ratio between them tells you how efficiently each browser batches layout operations, a detail that matters when your application does a lot of list rendering.
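To quantify the difference on a given browser, a small harness (a hypothetical helper; it assumes both functions above are defined and the #benchmark-container div exists) can run each version back to back:

```javascript
// Run the naive and batched versions back to back and report the ratio
function compareDomBenchmarks() {
  const t0 = performance.now();
  domBenchmark();
  const naive = performance.now() - t0;

  const t1 = performance.now();
  domBenchmarkOptimized();
  const batched = performance.now() - t1;

  console.log(`Layout batching speedup: ${(naive / batched).toFixed(2)}x`);
}

compareDomBenchmarks();
```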
Canvas Animation and Rendering Benchmark
JavaScript engine speed is only one axis of browser performance. Rendering and animation throughput directly affect how smooth your UI feels. The canvas-based test below stresses the 2D drawing pipeline:
```javascript
function animationBenchmark() {
  const canvas = document.createElement('canvas');
  canvas.width = 800;
  canvas.height = 600;
  document.body.appendChild(canvas);
  const ctx = canvas.getContext('2d');

  let frameCount = 0;
  let startTime = performance.now();
  const duration = 3000; // 3 seconds

  function drawFrame() {
    ctx.clearRect(0, 0, 800, 600);
    // Draw 200 moving circles
    for (let i = 0; i < 200; i++) {
      const x = (Math.sin(frameCount * 0.01 + i) + 1) * 400;
      const y = (Math.cos(frameCount * 0.01 + i * 0.7) + 1) * 300;
      ctx.beginPath();
      ctx.arc(x, y, 5, 0, Math.PI * 2);
      ctx.fillStyle = `hsl(${i * 1.8}, 70%, 50%)`;
      ctx.fill();
    }
    frameCount++;

    const elapsed = performance.now() - startTime;
    if (elapsed < duration) {
      requestAnimationFrame(drawFrame);
    } else {
      const fps = (frameCount / elapsed) * 1000;
      document.body.removeChild(canvas);
      console.log(`Rendering benchmark: ${fps.toFixed(1)} FPS`);
    }
  }

  requestAnimationFrame(drawFrame);
}

animationBenchmark();
```
This canvas benchmark measures how many frames per second the browser can sustain while performing geometric calculations and 2D rendering. Because requestAnimationFrame is capped at the display refresh rate, 60 FPS is the ceiling on a standard 60Hz display; expect results around 55-60 FPS on modern hardware in browsers with good GPU acceleration. Results below 40 FPS suggest the browser is not using hardware acceleration effectively, which you can verify by checking the browser’s GPU process in its task manager.
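Average FPS can also hide isolated stalls. A complementary measurement is the distribution of frame intervals, where dropped frames show up as gaps well above the ~16.7ms budget of a 60Hz display. A minimal sketch:

```javascript
// Record frame-to-frame intervals for 3 seconds and report dropped frames
function frameIntervalBenchmark(duration = 3000) {
  const intervals = [];
  let last = performance.now();
  const start = last;

  function tick(now) {
    intervals.push(now - last);
    last = now;
    if (now - start < duration) {
      requestAnimationFrame(tick);
    } else {
      // On a 60Hz display the budget is ~16.7ms; >25ms means a dropped frame
      const dropped = intervals.filter(ms => ms > 25).length;
      const worst = Math.max(...intervals);
      console.log(`Frames: ${intervals.length}, dropped: ${dropped}, worst interval: ${worst.toFixed(1)}ms`);
    }
  }
  requestAnimationFrame(tick);
}

frameIntervalBenchmark();
```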
Using Speedometer 3.0
Speedometer, originally developed by Apple’s WebKit team and now maintained jointly by the major browser vendors, remains the industry standard for browser responsiveness testing. In 2026, Speedometer 3.0 provides comprehensive testing across multiple browser subsystems.
Access it at browserbench.org/speedometer:
```bash
# Open in your browser (macOS; use xdg-open on Linux or start on Windows)
open https://browserbench.org/speedometer
```
Speedometer 3.0 tests:
- DOM operations
- Simulated JavaScript framework workloads
- Graphics rendering
- CSS animation performance
Higher scores indicate better performance. As of March 2026, scores range from roughly 200 to 400 depending on browser and hardware.
Compared to its predecessor, Speedometer 3.0 added workloads built with React, Angular, Vue, Svelte, and other popular frameworks, which makes it significantly more representative of real-world developer workloads. If your application uses a specific framework, Speedometer’s framework-specific subtests are particularly valuable because they exercise the exact rendering patterns your application relies on.
Other standardized benchmarks worth running:
- MotionMark (also from browserbench.org): Focuses on graphics rendering and animation
- JetStream 2: Emphasizes JavaScript and WebAssembly performance
- Basemark Web 3.0: Tests WebGL, CSS, HTML, and general web platform features
Running all four gives you a multi-dimensional profile rather than a single number that can mislead.
Browser-Specific Performance APIs
Modern browsers expose specialized APIs for performance measurement:
```javascript
// Performance Observer for long tasks
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${entry.duration}ms`);
    console.log(`Start time: ${entry.startTime}`);
  }
});
observer.observe({ entryTypes: ['longtask'] });

// Measure First Input Delay (superseded by INP as a Core Web Vital,
// but still observable via the first-input entry type)
const fidObserver = new PerformanceObserver((list) => {
  const firstInput = list.getEntries()[0];
  console.log(`FID: ${firstInput.processingStart - firstInput.startTime}ms`);
});
fidObserver.observe({ type: 'first-input', buffered: true });

// Core Web Vitals
function measureCoreWebVitals() {
  // Largest Contentful Paint
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    const lastEntry = entries[entries.length - 1];
    console.log(`LCP: ${lastEntry.renderTime || lastEntry.loadTime}ms`);
  }).observe({ type: 'largest-contentful-paint', buffered: true });

  // Cumulative Layout Shift
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (!entry.hadRecentInput) {
        console.log(`CLS: ${entry.value}`);
      }
    }
  }).observe({ type: 'layout-shift', buffered: true });
}

measureCoreWebVitals();
```
Long task detection is especially useful during development. Any task taking over 50ms blocks the main thread and delays input response. Observing long tasks while using your application tells you exactly where the jank originates, which is more actionable than a generic “this feels slow” observation.
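Long task entries also carry an attribution array that can hint at where the work originated. Browser support for attribution detail varies, so treat this as a Chromium-oriented sketch:

```javascript
// Log where long tasks originate (Chromium exposes basic attribution)
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const source = entry.attribution?.[0];
    console.log(
      `Long task: ${entry.duration.toFixed(0)}ms`,
      source
        ? `container: ${source.containerType} ${source.containerSrc || source.containerName || ''}`
        : '(no attribution)'
    );
  }
}).observe({ entryTypes: ['longtask'] });
```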
To collect Core Web Vitals data from real users rather than synthetic tests, integrate the web-vitals library from Google:
```javascript
import { onCLS, onFCP, onLCP, onTTFB, onINP } from 'web-vitals';

function sendToAnalytics(metric) {
  // Send to your analytics endpoint
  navigator.sendBeacon('/analytics', JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating, // 'good', 'needs-improvement', or 'poor'
    id: metric.id,
  }));
}

onCLS(sendToAnalytics);
onFCP(sendToAnalytics);
onLCP(sendToAnalytics);
onTTFB(sendToAnalytics);
// Interaction to Next Paint replaced FID as a Core Web Vital;
// onFID was removed from the web-vitals library in v4
onINP(sendToAnalytics);
```
This captures real-world performance across your actual user base, which synthetic benchmarks can never fully replicate. Field data and lab data together give you the complete picture.
Comparing Results Across Browsers
Run the same benchmarks across Chrome, Firefox, Edge, and Safari to get comparative data. For accurate results:
- Close unnecessary tabs and extensions
- Use incognito/private mode to avoid extension interference
- Restart browsers between tests
- Run each test multiple times and average results
- Ensure no background processes compete for resources
Typical 2026 results on modern hardware show Chrome and Edge leading in JavaScript benchmarks, while Safari often excels in graphics rendering and battery efficiency.
A structured comparison table helps you document and share results clearly:
| Benchmark | Chrome 123 | Firefox 125 | Edge 123 | Safari 17.4 |
|---|---|---|---|---|
| Speedometer 3.0 | ~380 | ~310 | ~375 | ~400 (Apple Silicon) |
| JetStream 2 | ~290 | ~250 | ~285 | ~320 (Apple Silicon) |
| MotionMark | ~1800 | ~1400 | ~1750 | ~2100 (Apple Silicon) |
| JS Array benchmark (ms, lower=better) | ~45 | ~52 | ~46 | ~38 (Apple Silicon) |
Numbers here are representative ranges, not precise measurements. Your hardware, OS, and background load will produce different absolute numbers. What matters for your decisions is the relative ranking on your specific machine with your specific workload.
Automating Benchmarks with Puppeteer
For continuous performance testing, automate browser benchmarks using Puppeteer:
```javascript
const puppeteer = require('puppeteer');

async function benchmarkBrowser() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Set a realistic viewport
  await page.setViewport({ width: 1920, height: 1080 });

  // Navigate to your target page
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });

  // Measure metrics
  const metrics = await page.evaluate(() => {
    const perfData = performance.getEntriesByType('navigation')[0];
    return {
      loadTime: perfData.loadEventEnd - perfData.fetchStart,
      domContentLoaded: perfData.domContentLoadedEventEnd - perfData.fetchStart,
      firstPaint: performance.getEntriesByType('paint')[0]?.startTime || 0,
    };
  });

  console.log('Performance metrics:', metrics);
  await browser.close();
}

benchmarkBrowser();
```
Extend this to run across multiple URLs and collect structured data over time:
```javascript
const puppeteer = require('puppeteer');
const fs = require('fs');

async function auditPages(urls) {
  const browser = await puppeteer.launch({ headless: 'new' });
  const results = [];

  for (const url of urls) {
    const page = await browser.newPage();

    // Enable performance tracing; the output path and trace categories
    // go in start(), and stop() finishes writing the trace
    await page.tracing.start({
      path: `trace-${Date.now()}.json`,
      categories: ['devtools.timeline'],
    });
    await page.goto(url, { waitUntil: 'networkidle0' });
    await page.tracing.stop();

    const metrics = await page.evaluate(() => {
      const nav = performance.getEntriesByType('navigation')[0];
      const paints = performance.getEntriesByType('paint');
      const fcp = paints.find(p => p.name === 'first-contentful-paint');
      return {
        url: location.href,
        ttfb: nav.responseStart - nav.requestStart,
        fcp: fcp ? fcp.startTime : null,
        domInteractive: nav.domInteractive - nav.fetchStart,
        loadComplete: nav.loadEventEnd - nav.fetchStart,
        transferSize: nav.transferSize,
      };
    });

    results.push(metrics);
    await page.close();
  }

  await browser.close();
  return results;
}

const urls = [
  'https://yourapp.com/',
  'https://yourapp.com/products',
  'https://yourapp.com/checkout',
];

auditPages(urls).then(results => {
  console.table(results);
  fs.writeFileSync('perf-audit.json', JSON.stringify(results, null, 2));
});
```
Running this in CI on every pull request catches performance regressions before they reach production. Store results over time and you build a performance history that lets you attribute changes to specific commits.
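To turn those results into a CI gate, compare each run against a stored baseline and fail the build on a significant regression. A minimal sketch, assuming the perf-audit.json produced above and a hypothetical baseline.json committed to the repository:

```javascript
const fs = require('fs');

// Fail CI if any page regressed more than 20% versus the stored baseline.
// baseline.json is a hypothetical file with the same shape as perf-audit.json.
const baseline = JSON.parse(fs.readFileSync('baseline.json', 'utf8'));
const current = JSON.parse(fs.readFileSync('perf-audit.json', 'utf8'));
const THRESHOLD = 1.2; // allow up to 20% slowdown before failing

let failed = false;
for (const result of current) {
  const base = baseline.find(b => b.url === result.url);
  if (!base) continue; // new page, nothing to compare against
  if (result.loadComplete > base.loadComplete * THRESHOLD) {
    console.error(
      `Regression on ${result.url}: ${result.loadComplete.toFixed(0)}ms ` +
      `vs baseline ${base.loadComplete.toFixed(0)}ms`
    );
    failed = true;
  }
}
process.exit(failed ? 1 : 0);
```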
Memory Usage Benchmarking
Memory efficiency affects long-running browser sessions. A browser that leaks memory across tab navigation will feel sluggish after an hour of use even if its benchmark scores are excellent.
For a programmatic measurement, use performance.measureUserAgentSpecificMemory(). Despite living on the standard performance object, it is currently implemented only in Chromium-based browsers, so feature-detect before relying on it:
```javascript
async function measureMemory() {
  if (!performance.measureUserAgentSpecificMemory) {
    console.log('Memory API not available in this browser');
    return;
  }
  try {
    const result = await performance.measureUserAgentSpecificMemory();
    console.log('Memory breakdown:');
    result.breakdown.forEach(item => {
      const mb = (item.bytes / 1024 / 1024).toFixed(2);
      console.log(`  ${item.types.join(', ')}: ${mb} MB`);
    });
    console.log(`Total: ${(result.bytes / 1024 / 1024).toFixed(2)} MB`);
  } catch (err) {
    console.error('Memory measurement failed:', err);
  }
}

// Measure before and after a complex operation
async function memoryLeakTest() {
  console.log('Before:');
  await measureMemory();

  // Simulate your application's workload (Array.from creates 100k distinct
  // objects; new Array(n).fill(obj) would reuse one shared object)
  const largeArray = Array.from({ length: 100000 }, () => ({ data: 'x'.repeat(100) }));
  await new Promise(resolve => setTimeout(resolve, 1000));

  // Clean up
  largeArray.length = 0;
  // Force GC suggestion (not guaranteed)
  if (window.gc) window.gc();

  console.log('After cleanup:');
  await measureMemory();
}

memoryLeakTest();
```
Note that performance.measureUserAgentSpecificMemory() requires a cross-origin isolated context (the page must be served with appropriate COOP and COEP headers). For development testing, Chrome DevTools’ Memory panel provides detailed heap snapshots without this restriction.
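Chromium also ships the older, non-standard performance.memory object, which works without cross-origin isolation and is handy for rough heap tracking during development; other browsers generally do not expose it:

```javascript
// Rough heap tracking via the legacy, non-standard, Chromium-only API
function logHeapUsage(label) {
  if (!performance.memory) {
    console.log('performance.memory not available in this browser');
    return;
  }
  const used = (performance.memory.usedJSHeapSize / 1024 / 1024).toFixed(1);
  const total = (performance.memory.totalJSHeapSize / 1024 / 1024).toFixed(1);
  console.log(`${label}: ${used} MB used of ${total} MB allocated`);
}

logHeapUsage('Baseline');
```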
Interpreting Your Results
When analyzing benchmark data, focus on consistency over single high or low scores. Performance variance often matters more than absolute numbers. A browser that consistently performs at 100ms is more predictable than one that sometimes hits 50ms and other times 200ms.
Consider your specific use case. Developers working with React or Vue applications may prioritize JavaScript performance. Those building data visualizations might focus on rendering speed. Power users with many open tabs care about memory efficiency.
A practical framework for interpreting results (the first two points are automated in the helper sketched after this list):
- Discard outliers: the highest and lowest individual runs often reflect system noise (background processes, thermal throttling)
- Calculate p95, not just the average: the 95th percentile represents the experience of your worst-performing 5% of users
- Test on representative hardware: benchmarking on a high-end developer machine understates the performance problems that real users with mid-range devices will experience
- Test on a slow network: browser parsing and rendering performance compounds with network latency; tools like Chrome DevTools network throttling let you simulate 3G conditions
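A minimal sketch, assuming an array of timing samples like the ones the benchmarks above produce:

```javascript
// Summarize benchmark samples: trimmed mean (outliers discarded) and p95
function summarize(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Discard the single highest and lowest runs as likely system noise
  const trimmed = sorted.length > 2 ? sorted.slice(1, -1) : sorted;
  const mean = trimmed.reduce((a, b) => a + b, 0) / trimmed.length;
  // p95: the value that 95% of runs stayed at or below
  const p95 = sorted[Math.min(sorted.length - 1, Math.ceil(sorted.length * 0.95) - 1)];
  return { mean: +mean.toFixed(2), p95: +p95.toFixed(2) };
}

console.log(summarize([48.2, 45.1, 46.7, 102.3, 44.9, 47.5, 46.1]));
// => { mean: 46.72, p95: 102.3 } (the outlier shows up in p95, not the mean)
```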
Conclusion
Browser speed benchmarking provides actionable data for both development optimization and browser selection. The techniques covered here, from simple JavaScript timing to automated Puppeteer testing, give you the tools to measure performance objectively.
Remember that synthetic benchmarks represent controlled conditions. Real-world performance depends on your specific workload, hardware, and usage patterns. Use these methods as guidelines, then test with your actual applications.
The most useful benchmarking practice combines three sources: standardized benchmarks (Speedometer, JetStream) for cross-browser comparison, custom micro-benchmarks targeting your specific code patterns, and field data via the Web Vitals library capturing what real users actually experience. No single source tells the whole story, but together they give you a reliable, actionable view of browser performance for your workload.
Run your own benchmarks and share results. Performance characteristics change with each browser update, making regular testing valuable for staying informed.
Related Reading
- Chrome Update Broke Speed? Fix Performance Issues After Updates
- Chrome vs Edge Memory 2026: Which Browser Uses Less RAM?