The social perceptiveness of a dead fish
Finding real patterns in a sea of information
“Natural selection has designed organisms to be ruthless pattern finders, to ignore almost all the information that’s officially available at their surfaces and just focus on what matters to them.” — Daniel Dennett
Humans are pattern-finding creatures. We are hyper-alert for patterns. We look for patterns in systems in order to make bets on the future. Can we get across the road before the traffic arrives? Will this stock or crypto go up next week? Will it rain this afternoon? Will Dad get cranky if I drink his scotch?1
Organisations are also pattern-finding entities. What are our competitors doing? Are our customers happy? Is the team on track? How is Joe performing? Almost all organisational activity involves finding, interpreting, and acting on patterns.
If organisations are deeply reliant on pattern-finding for growth and survival, then the fidelity of pattern detection is crucial. If we misinterpret a pattern, or worse, ‘see’ a pattern that does not really exist, then any decisions flowing from that misreading will be flawed.
Unfortunately, this happens all the time.
The dead salmon study
An amusing example of how easy it is for us to ‘see’ patterns where none exist was provided by a team of psychologists who placed a dead Atlantic salmon in an fMRI scanner and ‘found’ signs of psychological engagement.
While scanning the fish, they showed it photographs of people in social situations and asked it to determine how each person must be feeling. Analysed without correcting for multiple comparisons, the data produced patterns that seemed to show the salmon miraculously engaging with the task.2
The study, which won its authors an Ig Nobel Prize in neuroscience, highlighted the ease with which false positives can occur and the importance of statistical correction.3
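The salmon's lesson can be reproduced in a few lines of simulation. The sketch below is illustrative only: the voxel count, sample size, and z-approximation are my assumptions, not the study's actual analysis. Testing thousands of pure-noise "voxels" at an uncorrected threshold flags hundreds of phantom activations; a Bonferroni correction flags essentially none.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
n_voxels = 10_000   # hypothetical number of brain "voxels"
n_scans = 30        # hypothetical scans per voxel
alpha = 0.05

nd = NormalDist()

def voxel_z(samples):
    # z-statistic of the sample mean against a true mean of zero
    return mean(samples) * len(samples) ** 0.5 / stdev(samples)

# Pure noise: no signal exists anywhere, so every "active" voxel is a false positive.
zs = [voxel_z([random.gauss(0, 1) for _ in range(n_scans)]) for _ in range(n_voxels)]

z_uncorrected = nd.inv_cdf(1 - alpha / 2)              # ~1.96
z_bonferroni = nd.inv_cdf(1 - alpha / (2 * n_voxels))  # ~4.56

uncorrected_hits = sum(abs(z) > z_uncorrected for z in zs)
corrected_hits = sum(abs(z) > z_bonferroni for z in zs)

print(uncorrected_hits)  # roughly 5% of 10,000 - hundreds of phantom "activations"
print(corrected_hits)    # typically zero
```

An fMRI scan runs one such test per voxel across tens of thousands of voxels, which is why uncorrected thresholds will find "engagement" even in a dead fish.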
The philosopher Daniel C. Dennett argues that the presence of a pattern in data is a matter of degree.4 He cleverly uses a checkerboard image rendered with various degrees of noise as a metaphor.5
In the above images, is there a single underlying pattern in all cases? Does a pattern actually exist even if it is indiscernible? In order to make good predictions we need to know we are dealing with an actual pattern. “Where utter patternlessness or randomness prevails, nothing is predictable.”6
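Dennett's metaphor can be made quantitative. In this illustrative sketch (the board size and noise levels are my choices, not Dennett's), each cell of a true checkerboard is flipped with some probability; as that probability approaches 0.5, agreement with the underlying pattern falls to chance level and the pattern, for practical purposes, ceases to be recoverable.

```python
import random

random.seed(0)
SIZE = 32  # hypothetical board size

def checkerboard(size):
    # the "real" underlying pattern: alternating 0s and 1s
    return [[(r + c) % 2 for c in range(size)] for r in range(size)]

def add_noise(board, flip_prob):
    # flip each cell independently with probability flip_prob
    return [[cell ^ (random.random() < flip_prob) for cell in row]
            for row in board]

def agreement(a, b):
    # fraction of cells on which the two boards agree (1.0 = identical, 0.5 = chance)
    size = len(a)
    matches = sum(a[r][c] == b[r][c] for r in range(size) for c in range(size))
    return matches / size ** 2

truth = checkerboard(SIZE)
for p in (0.0, 0.1, 0.3, 0.5):
    print(p, round(agreement(truth, add_noise(truth, p)), 2))
```

At a flip probability of 0.5 the noisy board is statistically indistinguishable from one with no checkerboard underneath at all, which is exactly the question the images pose.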
Noise levels aside, patterns are also subject to the whims of personal interpretations. Two people looking at the same data can perceive entirely different patterns, and place different predictive bets accordingly. Disagreement about Covid vaccination is a conspicuous example. Our perceptions are filtered by our personal cognitive idiosyncrasies, our individual set of beliefs, desires, and intentions.
To discern real patterns, patterns that matter, within the sea of data that envelops us, we need to be ruthless. We need to strip away superfluous information and learn to overcome our inherent biases. In our inquiries, our searches for reality, we should seek to answer, as Tim Maudlin suggests, at least two fundamental questions: what is there (ontology), and how does it behave (dynamics)?7
How real are your analytics?
Organisations spend a vast amount of time and resources trying to read patterns. Do they really understand what they are studying?
Does NPS represent a real pattern? Are people really promoting or detracting from the brand as reported in their survey responses? Is the data really predictive of consumer behaviour?
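For concreteness, the score itself is simple arithmetic: respondents scoring 9-10 on the "likelihood to recommend" scale count as promoters, 0-6 as detractors, and NPS is the percentage-point gap between the two groups. A minimal sketch, with invented sample responses:

```python
def nps(scores):
    """Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# hypothetical survey responses on the 0-10 scale
sample = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(nps(sample))  # -> 10.0
```

Everything the metric reports is derived from stated intentions on that single question, which is why the studies below, comparing those statements with actual behaviour, cut to the heart of whether the pattern is real.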
A detailed study of 16,000 consumers revealed that only about half the people who expressed an intention to recommend specific firms actually did so. Furthermore, the most loyal customers were not the strongest advocates, and the best referrers had relatively low purchasing patterns: a doing-saying gap.9
In another study, it was found that 52% of all people who actively discouraged others from using a brand had also actively recommended it. Across the NPS scale, consumers were found who had both actively promoted and actively criticised the same brand. “Time and again, actual behavioural patterns didn’t align with expectations created by the NPS ratings,” said Christina Stahlkopf, the study’s author.10
Given the above, you’d have to ask yourself whether NPS is a ‘dead fish scan’ or merely a ‘fuzzy checkerboard’. Is it producing spurious analytics, or is there actually a pattern beneath the noise?
It’s the job of every executive to ask deep questions about the underlying patterns they are trying to read, and to question whether their current methods and models are producing useful predictions.
If not, they should be chucked in the sea.
Salmon Illustration, via OpenClipart-Vectors
Dead Salmon Scan, via Bennet et al.
Fuzzy checkerboards, my image
Yes, he will!
Many fMRI studies have used uncorrected results
Bennett, C., Miller, M., & Wolford, G. (2009). Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: an argument for multiple comparisons correction. NeuroImage, 47, S125. https://doi.org/10.1016/s1053-8119(09)71202-9
Millhouse, T. (2021). Really Real Patterns, Australasian Journal of Philosophy. https://doi.org/10.1080/00048402.2021.1941153
Dennett, D.C. (1991). Real patterns, The Journal of Philosophy, 88(1), 27-51. https://doi.org/10.7551/mitpress/9780262026215.003.0011
Maudlin, T. (2015). Physics, philosophy, and the nature of reality. Annals of the New York Academy of Sciences, 1361(1), 63-68. https://doi.org/10.1111/nyas.12877
The point here is not about disparaging NPS. I only refer to it because most people are familiar with it. Exactly the same critique can be applied to almost any metric or analytic.
Kumar, V., Petersen, J. A., Leone, R.P., How Valuable Is Word of Mouth?, Harvard Business Review, Oct 2007.
Stahlkopf, C., Where Net Promoter Score Goes Wrong, Harvard Business Review, Oct 2019.