I presented at a two-day Masterclass in Edinburgh on Action-Oriented Predictive Coding (26th and 27th of October), headed by Prof. Andy Clark and Dr. Dave Ward.
The “predictive coding framework” (or, as I prefer to call it, the “predictive processing framework”) is a way of thinking about cognition. On this framework (see Clark 2013 for a review), the brain (or at least the cortex) can be usefully thought of as a prediction machine. Each level in the cortical hierarchy is constantly trying to predict the next input from the level below. The vast majority of what determines your perceptual experience is what your brain has already predicted, its best current hypothesis. In other words, on this framework, the incoming signal plays a much smaller role in determining your conscious perceptual experience than it does on more traditional pictures. In fact,
an expected event does not need to be […] communicated to higher cortical areas which have processed all of its relevant features prior to its occurrence. (Bubic et al. 2010, p.10)
Why would anyone hold such a view? There are a number of different reasons, both theoretical and experimental (see Clark 2013 for the full array). I will present one that I find compelling. The brain, as a result of evolutionary pressures, will naturally strive towards efficiency. It will try to get the required computational results with as little effort as possible. How does it do this? An illustrative analogy can be made with data compression in informatics. Data compression enables you to minimise the informational load of a signal by only passing on the information that “matters”. Obviously, information doesn’t matter absolutely, but only relative to an interpreter of the signal. What matters depends on what the “interpreter” of the signal already “knows”. So, the information that doesn’t matter, that is left out of the signal, is the information that the interpreter of the signal can fill in for itself, the information that it already expects or predicts. We might say, using the terminology of Hosoya et al. (2005), that in data compression, only the parts of the signal that are deemed “newsworthy” are passed on. Of course, unlike a computer, what the brain already “knows” (or, perhaps better, “hypothesises”) will not be pre-programmed: a lot of it will be learnt from past experience, and some of it might be “innate”.
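To make the analogy concrete, here is a minimal sketch of this kind of predictive (delta-style) encoding. It is purely my own illustration, and the function names are invented for this post: the “sender” transmits only the residuals, the “newsworthy” differences between each sample and what the receiver can already predict from what it has seen so far.

```python
# Minimal sketch of predictive (delta) encoding. Both sides share the same
# simple "model": predict that the next sample equals the previous one.
# Only prediction errors (residuals) are transmitted.

def encode(signal):
    """Transmit only the residuals with respect to the previous value."""
    prediction = 0          # both sides agree on an initial "hypothesis"
    residuals = []
    for sample in signal:
        residuals.append(sample - prediction)  # only the "newsworthy" part
        prediction = sample                    # update the shared model
    return residuals

def decode(residuals):
    """The receiver fills in what it already 'knows' and adds the news."""
    prediction = 0
    signal = []
    for r in residuals:
        sample = prediction + r
        signal.append(sample)
        prediction = sample
    return signal

signal = [10, 11, 11, 12, 30, 30, 31]
residuals = encode(signal)
print(residuals)            # mostly small numbers: little "news" to transmit
assert decode(residuals) == signal
```

Notice that a smooth, well-predicted signal compresses into a stream of near-zero residuals; only the surprising jump generates a large message. That is the sense in which a good predictor lets the channel carry far less.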
My presentation was entitled “Hierarchical Predictive Processing and the Varieties of Voice-Hearing”. The predictive processing framework radically changes one’s explanatory focus in trying to account for hallucinations. On a standard view, where front-line sensory stimuli get gradually processed and passed on up the hierarchy, hallucinations make one wonder: Where does this erroneous sensory stimulation come from? However, when, instead, one adopts the predictive processing framework, incoming stimuli play a much smaller role in determining the conscious percept, even where healthy, accurate perception is concerned. Hallucinations then make us wonder, instead: Why has the brain adopted such an unusual hypothesis?
Prominent self-monitoring theories (e.g. Frith 1992), although they have much in common with predictive processing, work within the traditional framework. They answer the question “Where does the stimulus come from?” by saying that it is a self-produced stimulus, such as inner speech, which, owing to a self-monitoring failure (sometimes explicitly taken by these theories to be the unifying, definitive abnormality in schizophrenia), gets misattributed to an external source. However, if we view the data that support self-monitoring theories through the predictive processing framework, we see self-monitoring not as the basic deficit in schizophrenia, but as one mechanism that is affected by something more basic, namely, a problem with predictive processing. On this framework, it is not only self-produced stimuli that need to be predicted: it is all stimuli.
There are conscious effects that are suggestive of predictive processing in the non-clinical population. The Hollow Mask Illusion is an effect where a concave face (the back of a mask, the imprint of a face) is experienced as convex.
This is because your brain expects (has a “prior”, to use the technical term) that the faces it encounters will be convex. As a result, it “corrects” the input. We get something similar in Binocular Rivalry. Under experimental conditions, each eye is presented with very different, but meaningful, stimuli. One standard example involves presenting one eye with a picture of a house, and the other with a picture of a face. People do not report visually experiencing, as one might expect, a mixture of face and house. Rather, they experience a “bi-stable” switching from face to house and back (the switching is often reported as a gradual “breaking through” of the other image). On the predictive processing account, you experience the bi-stable state because your brain has a prior that faces and houses simply do not simultaneously occupy the same part of your visual field, and as a result it switches between hypotheses about which is actually out there. When it settles on one hypothesis, say, the face hypothesis, inputs from the house image fail to accord with this hypothesis and prediction error signals are sent up the hierarchy. When enough prediction error accumulates, the hypothesis switches (and with that, what the subject consciously experiences) to the house hypothesis, but then the input from the face image doesn’t accord, and so on, and so forth. Interestingly, those with a diagnosis of schizophrenia tend not to experience the Hollow Mask Illusion (Schneider et al. 1996; Emrich et al. 1997), and they have binocular rivalry switching rates that are, on average, half those of non-clinical subjects (Heslop 2012).
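The switching story can be caricatured in a few lines of code. This is purely my own toy sketch, not a model from the literature, and every name and parameter in it is invented for illustration: whichever hypothesis is currently held leaves the other image unexplained, prediction error accumulates, and the percept flips once that error crosses a threshold.

```python
# Toy caricature of binocular rivalry under predictive processing.
# The current hypothesis explains one image; the other image keeps
# generating prediction error, which accumulates until the percept flips.

def simulate_rivalry(steps=30, error_per_step=1.0, threshold=5.0):
    hypothesis = "face"          # the brain's current best hypothesis/percept
    accumulated_error = 0.0      # error from the unexplained image
    percepts = []
    for _ in range(steps):
        percepts.append(hypothesis)
        accumulated_error += error_per_step    # the other image disagrees
        if accumulated_error >= threshold:     # too much unexplained input:
            hypothesis = "house" if hypothesis == "face" else "face"
            accumulated_error = 0.0            # new hypothesis fits the input
    return percepts

percepts = simulate_rivalry()
print(percepts[:12])
```

On this toy picture, weakening the error signal (say, halving `error_per_step`) halves the switching rate, which at least gestures at how an abnormality in prediction error signalling could produce the slower rivalry alternation reported in schizophrenia.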
I tentatively suggested that adopting a predictive processing theory might help us to account for more of the positive symptoms of schizophrenia and, more specifically, to account for two subtypes of voice-hearing in clinical contexts, initially put forward by Dodgson and Gordon (2009) and subsequently corroborated by Garwood et al. (2013). These subtypes are dubbed “Inner Speech Hallucinations”, which occur in quiet contexts where attention is inwardly directed, and “Hypervigilance Hallucinations”, which occur in loud contexts where attention is outwardly directed. The idea might then be that, whereas the former can be explained, as self-monitoring theories hypothesised, as badly monitored (and hence badly predicted and misattributed) inner speech, the latter should be explained in terms of external stimuli (e.g. the sound of neighbours talking, the radio, traffic noise, etc.) producing excessive prediction error, thereby causing the brain to adopt a hypothesis (to minimise said prediction error) that embellishes the conscious percept into the experience of a voice. Obviously, a great deal needs to be said about why the voices have the auditory properties that they have and not others, and why they express certain contents and not others. In some cases the contents might relate to certain worries or concerns that the person might have and which might be caused by various distressing life events (see Dodgson and Gordon 2009). A great deal more needs to be said, also, about the precise nature of the abnormality in predictive processing. This is something I’m exploring in detail now.
Works cited:
Bubic, A., von Cramon, D. Y. & Schubotz, R. I. (2010) Prediction, cognition and the brain. Frontiers in Human Neuroscience 4(25):1–15.
Clark, A. (2013) Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36(3):181–204.
Dodgson, G. & Gordon, S. (2009) Avoiding false negatives: Are some auditory hallucinations an evolved design flaw? Behavioural and Cognitive Psychotherapy 37:325–334.
Emrich, H. M., Leweke, F. M. & Schneider, U. (1997) Towards a cannabinoid hypothesis of schizophrenia: Cognitive impairments due to dysregulation of the endogenous cannabinoid system. Pharmacology Biochemistry and Behavior 56(4):803–807.
Frith, C. D. (1992) The Cognitive Neuropsychology of Schizophrenia. Lawrence Erlbaum Associates.
Garwood, L., Dodgson, G., Bruce, V. & McCarthy-Jones, S. (2013) A preliminary investigation into the existence of a hypervigilance subtype of auditory hallucination in people with psychosis. Behavioural and Cognitive Psychotherapy, 1–11.
Heslop, K.R. (2012) Binocular rivalry and visuospatial ability in individuals with schizophrenia. PhD thesis, Queensland University of Technology.
Hosoya, T., Baccus, S. A. & Meister, M. (2005) Dynamic predictive coding by the retina. Nature 436(7):71–77.
Schneider, U., Leweke, F. M., Sternemann, U., Weber, M. M. & Emrich, H. M. (1996) Visual 3D illusion: A systems-theoretical approach to psychosis. European Archives of Psychiatry and Clinical Neuroscience 246(5):256–260.