
Commit

Blog post
sindresorhus committed Aug 14, 2024
1 parent 9941793 commit 7fee7f5
Showing 2 changed files with 43 additions and 0 deletions.
41 changes: 41 additions & 0 deletions source/content/blog/micro-benchmark-fallacy.md
@@ -0,0 +1,41 @@
---
title: The Micro-Benchmark Fallacy
description: It's a trap.
pubDate: 2024-08-13
tags:
- programming
---

Micro-benchmarks, while seemingly insightful, often mislead developers by presenting a skewed view of performance. They isolate small fragments of code, amplifying their significance in ways that don’t reflect real-world scenarios.

### Lack of Real-World Context

Micro-benchmarks strip away the complexities of actual apps. A function may look impressively fast in isolation, yet its impact on overall system performance can be negligible.
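
A minimal sketch of what this looks like (hypothetical Node/TypeScript; `sumLoop` and `sumReduce` are made-up examples, not from any real project): the benchmark crowns a "winner", but either version costs microseconds in a request that spends milliseconds on I/O.

```ts
// Hypothetical micro-benchmark: times one small function in isolation.
import {performance} from 'node:perf_hooks';

const numbers = Array.from({length: 1000}, (_, index) => index);

function sumLoop(values: number[]): number {
	let total = 0;
	for (const value of values) {
		total += value;
	}

	return total;
}

function sumReduce(values: number[]): number {
	return values.reduce((total, value) => total + value, 0);
}

function time(label: string, fn: () => number, iterations = 100_000): void {
	const start = performance.now();
	for (let index = 0; index < iterations; index++) {
		fn();
	}

	console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms for ${iterations} runs`);
}

time('for-of loop', () => sumLoop(numbers));
time('reduce', () => sumReduce(numbers));

// Whichever "wins", the per-call difference is tiny compared to the I/O,
// parsing, and rendering a real request spends its time on.
```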

### Artificial Testing Conditions

These benchmarks typically run under ideal circumstances, failing to account for real-world factors like I/O operations, network latency, and user interactions that significantly affect performance.

### Misguided Optimization Focus

Obsessing over micro-optimizations often yields diminishing returns. It's more productive to focus on optimizing critical paths and bottlenecks that genuinely impact user experience.

### Overlooking Concurrency and Parallelism

Apps generally rely heavily on concurrent and parallel execution. Micro-benchmarks typically test single-threaded performance, which may not accurately represent how code behaves in a multi-threaded environment.

### Neglecting Startup and Initialization Costs

Some operations have significant startup or initialization costs that are amortized over time in long-running apps. Micro-benchmarks may overemphasize these costs if they don't run long enough, leading to misguided optimization efforts.
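
A rough illustration (hypothetical code; the lookup-table function is invented for this example): include a one-time setup cost in a short run and it dominates the per-call number; run long enough and it amortizes away.

```ts
// Hypothetical: a one-time setup cost dominates a short benchmark
// but is amortized in a long-running app.
import {performance} from 'node:perf_hooks';

let table: number[] | undefined;

function squareWithTable(n: number): number {
	// One-time initialization on first use.
	table ??= Array.from({length: 1_000_000}, (_, index) => index * index);
	return table[n];
}

function perCallCost(iterations: number): number {
	table = undefined; // Reset so the setup cost is included in the measurement.
	const start = performance.now();
	for (let index = 0; index < iterations; index++) {
		squareWithTable(index % 1_000_000);
	}

	return (performance.now() - start) / iterations;
}

console.log(`100 calls:       ${perCallCost(100).toFixed(5)} ms/call`); // Setup dominates.
console.log(`1,000,000 calls: ${perCallCost(1_000_000).toFixed(5)} ms/call`); // Setup amortized.
```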

### JIT and Caching Complications

Modern runtimes use Just-In-Time (JIT) compilation and caching to optimize performance dynamically. Micro-benchmarks include warm-up phases to account for this, but they often miss the full impact: a function that benchmarks well in isolation might run faster or slower when integrated with other code due to changes in execution paths, resource contention, or different inlining decisions.
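
A sketch of the warm-up pattern and why it's not enough (hypothetical code; `add` is just a stand-in):

```ts
// Hypothetical warm-up-then-measure micro-benchmark.
import {performance} from 'node:perf_hooks';

function add(a: number, b: number): number {
	return a + b;
}

// Warm-up phase: give the JIT a chance to optimize the hot path.
for (let index = 0; index < 100_000; index++) {
	add(index, index);
}

// Measurement phase: times the optimized, monomorphic version of `add`…
let total = 0;
const start = performance.now();
for (let index = 0; index < 1_000_000; index++) {
	total += add(index, index);
}

console.log(`${(performance.now() - start).toFixed(1)} ms (checksum: ${total})`);

// …but in a real app the same function may be called with different argument
// types, deoptimized, or inlined differently, so this number may not transfer.
```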

### Misleading Marketing

Be skeptical of projects touting micro-benchmark results. These figures are frequently cherry-picked and fail to reflect real-world app performance.

### The Bottom Line

While micro-benchmarks can provide some insights, they shouldn't be the primary basis for performance optimization decisions. Instead, focus on comprehensive profiling of your app under realistic conditions to identify and address genuine performance bottlenecks.
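
One way to start (not prescribed here — just a hedged sketch): measure complete operations under realistic load with Node's `perf_hooks`, or capture a CPU profile of the running app with `node --cpu-prof` and read it in Chrome DevTools.

```ts
// Sketch: measure a whole, realistic operation end to end rather than an
// isolated function. handleRequest and its contents are placeholders.
import {performance, PerformanceObserver} from 'node:perf_hooks';

const observer = new PerformanceObserver(list => {
	for (const entry of list.getEntries()) {
		console.log(`${entry.name}: ${entry.duration.toFixed(1)} ms`);
	}
});
observer.observe({entryTypes: ['measure']});

async function handleRequest(): Promise<void> {
	performance.mark('request-start');

	// Placeholders for the real work a request does: I/O, parsing, rendering.
	await new Promise(resolve => {
		setTimeout(resolve, 50);
	});
	JSON.parse(JSON.stringify({data: Array.from({length: 10_000}, (_, index) => index)}));

	performance.mark('request-end');
	performance.measure('full request', 'request-start', 'request-end');
}

await handleRequest(); // Top-level await assumes an ES module.
```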
2 changes: 2 additions & 0 deletions source/content/config.ts
@@ -54,7 +54,9 @@ const blogCollection = defineCollection({
description: z.string().nonempty().optional(),
pubDate: z.date(),
tags: z.array(z.enum([
'programming',
'open-source',
'swift',
'javascript',
'nodejs',
])).optional(),
