
Commit 1043570

posts: The psychology behind why humans suck at reading perf data
1 parent 23eac92 commit 1043570

File tree

1 file changed: +40 -0

Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
---
layout: post
title: The psychology behind why humans suck at reading performance data
---

People often think that performance testing frameworks exist because
machines are good at finding patterns in performance data and humans
are not.

Actually, humans are *very* good at finding patterns in data. In fact,
[we're too
good](https://en.wikipedia.org/wiki/Apophenia#Models_of_pattern_recognition).
We see patterns in data where none exist because our minds are
hardwired to notice them. Detecting patterns and ascribing meaning to
them was once thought to be a psychotic thought process linked with
[schizophrenia](https://en.wikipedia.org/wiki/Klaus_Conrad), but
psychologists now understand it to be an evolutionary skill that
allows us to make predictions about the world we inhabit.

But those pattern recognition skills that helped our ancestors find
food in the wild have all kinds of consequences ranging from the
bizarre (sometimes we [see faces in
clouds](https://www.livescience.com/25448-pareidolia.html)) to the
downright illogical (thinking a coin that has turned up heads for the
last few flips [is likely to be heads next
time](https://en.wikipedia.org/wiki/Gambler%27s_fallacy)).

And it's because of this [cognitive
bias](https://en.wikipedia.org/wiki/Cognitive_bias) that we are really
bad at reading and comparing performance numbers without the help of
performance analysis tools -- our brains simply cannot view the data
without trying to find patterns.

For long-running performance tests, it's common to run the test case
standalone, outside of the test suite, for example, when changing the
code between runs. But if you've ever eyeballed a test result
instead of using the reporting framework of your chosen test suite,
you've potentially fallen victim to this quirk of human nature.

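If you don't have a full reporting framework handy, even a crude
significance test beats eyeballing two columns of timings. Here's a
minimal sketch in plain Python -- a permutation test over two sets of
hypothetical run times (the numbers below are made up for
illustration):

```python
import random
from statistics import mean

def permutation_test(before, after, trials=10_000):
    """Estimate how often a mean difference at least as large as the
    observed one appears by chance when the run labels are shuffled."""
    observed = abs(mean(before) - mean(after))
    pooled = list(before) + list(after)
    extreme = 0
    for _ in range(trials):
        random.shuffle(pooled)
        diff = abs(mean(pooled[:len(before)]) - mean(pooled[len(before):]))
        if diff >= observed:
            extreme += 1
    return extreme / trials

# Made-up timings (seconds) from runs before and after a code change.
before = [1.02, 0.98, 1.05, 1.01, 0.99, 1.03, 1.00, 1.04]
after = [0.97, 1.01, 0.95, 1.00, 0.96, 0.99, 0.98, 1.02]

p = permutation_test(before, after)
print(f"p ~= {p:.3f}")  # small p: likely a real change; large p: likely noise
```

The point isn't this particular test -- it's that the shuffle, not
your pattern-hungry brain, decides whether the difference is real.
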
Measuring performance is hard. Let the machines help.
