So! I have a new research rabbit hole that has captured my attention. It comes from cognitive science, which is not really my field, but it relates to autism and disability theory in some very interesting ways. My primary source for what follows is a 2014 paper by Van de Cruys et al., titled “Precise Minds in Uncertain Worlds: Predictive Coding in Autism.” I’ll include a full citation for it at the end of this post.
Let’s start with the words “predictive coding” in that title. The basic gist of predictive coding models (if I understand correctly) is that our brains make predictions all the time about what to expect from our environment (including our internal environment). Those predictions happen on various levels in a hierarchical way, with more complex predictions happening at a higher level than more basic processes, such as sensory input. We base these predictions on mental models of our environment, and we can update those models when our predictions fail.
Of course, the sensory world is not always predictable; in fact, it mostly isn’t. Nothing is ever exactly the same from situation to situation, plus our biological sensory systems aren’t perfectly accurate or consistent. Differences between our predictions and reality are called prediction errors, and as I mentioned above, those errors can be used to update our internal models—in other words, we can learn from them. But to be efficient in that learning, we need to be able to determine which of those errors are due to important changes in our environment that we should learn from (signal) versus random fluctuations in either our environment or our sensory systems (noise).
This involves having an idea of how much natural variability there is in the situation or environment; if something is different, we want to know if that’s an expected difference (which can therefore be ignored) or an unexpected one (which we should learn from). In the paper mentioned above, this is called “meta-learning”—learning what can be learned. Ideally, prediction errors are weighted such that differences that seem important (signals) are used to adjust the model while those that seem random are ignored. That weighting is based on what is called the “precision” of our prediction errors, which is an estimate of how much variability we expect in the system. If precision is high, it means we want to pay attention to more of these prediction errors, because we expect them to be meaningful; if precision is low, it means we expect most of that variability to be random noise that can be ignored.
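To make that weighting concrete, here’s a toy sketch of precision-weighted learning. This is my own illustration, not code from the paper; the function name, the single-number “model,” and the learning rate are all simplifications I’ve invented. The idea is just that precision scales how much each prediction error is allowed to update the model.

```python
def update(estimate, observation, precision, learning_rate=0.5):
    """Update a running estimate from one observation.

    precision: 0..1 -- how meaningful we expect prediction errors to be.
    High precision means every error drives a big update;
    low precision means most error is treated as noise and ignored.
    """
    error = observation - estimate  # the prediction error
    return estimate + learning_rate * precision * error

# With low precision, a noisy blip barely moves the model:
low = update(10.0, 14.0, precision=0.1)   # -> 10.2
# With precision set high, the same blip rewrites the model:
high = update(10.0, 14.0, precision=1.0)  # -> 12.0
```

Same observation, same error; the only thing that changes is how seriously the system takes it.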
With me so far? Because this is where the HIPPEA model of autism comes in. HIPPEA stands for “High, Inflexible Precision of Prediction Errors in Autism,” and it basically suggests that the precision of our sensory prediction errors is constantly on “high.” This means that we treat every incoming difference between our predictions and the environment as being important and meaningful, no matter how small and/or random it is. So we are constantly updating our models and learning from all of our sensory input, instead of treating some of it as random and ignoring it.
If that sounds exhausting, that’s because it is. Constantly feeling like we’re adjusting to new information results in chronic uncertainty, which leads to anxiety. And the more complex the sensory environment is, the more energy it takes and the more uncertainty we’re likely to encounter—and one very complex environment is the social environment. As Van de Cruys et al. put it, “we wonder whether social may just be a synonym of complex here” (p. 665, italics in original).
As laid out in their paper, this model predicts a number of common features of autism, including executive functioning issues, sensory perception differences (including sensory overload due to a constant influx of prediction errors), face recognition difficulties, and atypical social communication. One aspect that really caught my attention, though, had to do with stimming. As the authors pointed out, in addition to making predictions about our environment, we also make predictions about the outcomes of our actions; for example, I make a prediction that if I press down on the space bar on my keyboard, it will advance the cursor on my screen. I expect a certain tactile feedback from the keyboard, and visual feedback from my screen, and unless my space bar is stuck (or my hands aren’t where I thought they were), I get both.
In this model the purpose of stimming, then, is to create feedback that is predictable—predictable because it is something we ourselves initiated. I flap my hands and feel them flap; I rub something soft and feel the softness; I make a sound and immediately hear it. If I am in an environment where everything else feels unpredictable—meaning that I am adjusting to a large number of prediction errors—generating sensory input that is predictable helps to mitigate that. This is the case even when the form of stimming might otherwise seem to make sensory overload worse (like blasting loud music when you’re on the edge of a meltdown; yes, it’s loud, but I made it loud).
The HIPPEA model is still relatively new, and more research is needed to test its applicability, but I find it very intriguing, and a lot of it really makes sense to me. I initially came across it in another very interesting paper, Legault, Bourdon, and Poirier’s “Neurocognitive Variety in Neurotypical Environments: The Source of ‘Deficit’ in Autism” (2019). This paper used the HIPPEA model to argue that the so-called cognitive “deficits” in autism are actually caused by a mismatch between the kind of environments favored by autistics and the predominance of environments created by and for neurotypicals.
As Legault et al. explain it, HIPPEA suggests that autistic people end up with “overfitted” mental models: because we are constantly refining them based on every new piece of information, they become very detailed and very specific, but less generalizable to other situations. Neurotypicals, on the other hand, tend to have “underfitted” models, with less detail but more general applicability. Because they’re not including everything, they can tolerate (and even enjoy) much noisier, more unpredictable situations. They can also find new social situations more familiar, because they can generalize from their models, while we’re analyzing all of the differences.
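The overfitted/underfitted contrast can be caricatured in a few lines of code. This is my own analogy, not anything from Legault et al.: the “overfitted” model here memorizes every training example exactly, while the “underfitted” one keeps only a single coarse average. Both function names and the toy data are invented for illustration.

```python
def overfit_model(training):
    # Perfect recall of everything seen, but no answer for anything new.
    table = dict(training)
    def predict(x):
        return table.get(x)  # None for an unfamiliar situation
    return predict

def underfit_model(training):
    # One coarse summary: it generalizes everywhere, but detail is lost.
    mean = sum(y for _, y in training) / len(training)
    def predict(x):
        return mean
    return predict

data = [(1, 1.0), (2, 2.2), (3, 2.9)]
detailed = overfit_model(data)
coarse = underfit_model(data)

detailed(2)  # -> 2.2 (exact, for a situation seen before)
detailed(5)  # -> None (a genuinely new situation doesn't match the model)
coarse(5)    # -> ~2.03 (rough, but it has *an* answer anywhere)
```

The detailed model is superb on familiar ground and stranded off it; the coarse model is never precise but never stranded. That asymmetry is roughly what the “new social situations feel more familiar to neurotypicals” point is getting at.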
Meanwhile, being in their environments leaves us exhausted trying to process all of the noise (both literal noise and figurative noise-as-opposed-to-signal noise) that they’re ignoring. But that’s where we typically are—in environments that favor the neurotypical form of prediction processing (not to mention the neurotypical definition of what is considered worth paying attention to). And as Legault et al. pointed out, framing autism as a set of “deficits” in autistic people ignores this imbalance in who gets to define the dominant social environment(s) in which we find ourselves.
So that is what I have been reading this weekend, and again, it really makes a lot of sense to me. But it’s also still a fairly new model, and I haven’t explored every aspect of it in depth (and again, cognitive science isn’t my field). Any misunderstandings or misstatements in how I’ve presented it are unintentional, and if you think I’ve made any, please let me know. And please let me know how you think this model relates (or not) to your own experience!
Legault, M., Bourdon, J. N., & Poirier, P. (2019). Neurocognitive variety in neurotypical environments: The source of “deficit” in autism. Journal of Behavioral and Brain Science, 9(6), 246-272.
Van de Cruys, S., Evers, K., Van der Hallen, R., Van Eylen, L., Boets, B., de-Wit, L., & Wagemans, J. (2014). Precise minds in uncertain worlds: Predictive coding in autism. Psychological Review, 121(4), 649-675.