What Are We Really Talking About?
A Predictive Processing Perspective on the Learning–Knowing Debate
The recent back-and-forth between Bernard Andrews and David Didau on the meanings of “knowing,” “learning,” and “understanding” offers a microcosm of a larger issue in education: the clash between language, lived experience, and learning theory.
On one side, Andrews defends the conceptual distinctions in ordinary language, arguing that treating “understanding” as mere “remembering in disguise” collapses important differences. On the other, Didau acknowledges these concerns but insists that our lived cognitive experience often pushes against linguistic boundaries. Both make compelling arguments. But what if we could move beyond this impasse?
Predictive Processing and Active Inference, the latest thinking in cognitive science and neuroscience, offer exactly that possibility—a framework that respects philosophical distinctions, explains lived experience, and guides practical teaching decisions. At their heart lies a radical but increasingly influential claim: the mind is a prediction machine.
To learn more about Predictive Processing theory, visit my Predictably Correct blog post, Teachers Are Prediction Error Managers, and/or watch some of the videos listed in the Bibliography.
A Philosophical Theory of Mind, Not Just a Learning Theory
Unlike many of the models still shaping educational discourse, Predictive Processing / Active Inference didn’t begin with classroom data or lab experiments. It was born from a philosophical ambition to understand consciousness, experience, and intelligent behaviour.
Authors like Anil Seth (Being You: A New Science of Consciousness), Andy Clark (The Experience Machine), Jakob Hohwy (The Predictive Mind), and Karl Friston (with Thomas Parr and Giovanni Pezzulo, Active Inference: The Free Energy Principle in Mind, Brain, and Behavior) each explore this paradigm from different angles—bridging philosophy, neuroscience, and cognitive science.
Predictive Processing did not emerge from education research, but from attempts in neuroscience and philosophy to model how and why humans perceive, understand, and act in the world (Clark, 2023; Hohwy, 2013; Parr et al., 2022).
These thinkers describe the mind not as a passive processor of inputs or a memory storehouse, but as a Bayesian inference engine—constantly generating predictions about the world, updating them when they’re wrong, and acting to minimise surprise or prediction error. This changes everything.
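To make the “prediction machine” idea concrete, here is a deliberately tiny sketch in Python. It is my own toy illustration under simplifying assumptions (a single Gaussian belief, known sensory noise), not code drawn from any of the authors above. A belief is repeatedly compared with incoming observations; the gap is the prediction error, and the belief shifts in proportion to how much the evidence is trusted relative to the prediction:

```python
# Toy sketch of a "prediction machine" (illustrative only, not from the cited authors):
# one Gaussian belief about a hidden quantity is nudged toward each observation,
# weighted by how uncertain the prediction is relative to the sensory evidence.

mu, sigma2 = 0.0, 4.0        # prior belief: predicted value and its uncertainty (variance)
obs_noise = 1.0              # assumed variance of the sensory evidence

for y in [2.0, 1.5, 2.5, 2.0]:               # a short stream of observations
    error = y - mu                            # prediction error: world minus model
    gain = sigma2 / (sigma2 + obs_noise)      # precision-weighted learning rate
    mu += gain * error                        # belief shifts toward the evidence
    sigma2 *= (1 - gain)                      # uncertainty shrinks as evidence accumulates
    print(f"error={error:+.2f}  belief={mu:.2f}  uncertainty={sigma2:.2f}")
```

Run it and the errors shrink while the belief settles: “minimising prediction error” in miniature.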
A Tired Toolbox: Why Current Cognitive Science Models Are Lagging Behind
Many educators today still rely on models from the “science of learning” that are decades old: working memory, long-term memory transfer, cognitive load theory, and desirable difficulties. These ideas have offered value, especially in contexts demanding structure and clarity.
But they no longer reflect our best scientific understanding of how cognition actually works.
As Bernard Andrews rightly points out, educators often end up contorting these older models to fit classroom phenomena they weren’t designed to explain. Terms like “load,” “transfer,” or “retrieval strength” begin to feel like blunt instruments—useful in some cases, but ultimately reductive. And worse, they encourage a compartmentalised view of cognition that treats learning as a series of disconnected stages or “stores.”
In contrast, Predictive Processing describes cognition as an ongoing, embodied, active process. It explains learning not as the “movement of information,” but as the ongoing refinement of internal models to reduce uncertainty. Memory, attention, emotion, and action are not separate functions—they are aspects of a unified system aiming to anticipate and regulate interaction with the world.
Rethinking Learning, Knowing, and Struggle
In this framework:
Learning is the process of securely updating our prediction machinery (generative model) to minimise long-term prediction error.
It's triggered by a persistent prediction error, one that can only be resolved by a change in the generative model.
The changes to the model, a patch or update to the design of the prediction machinery, are held in a temporary predictive workspace (largely centred in the hippocampus) and wired into the permanent generative model in areas such as the frontal cortex through sleep and wakeful rest.
Repeated activation of the updated portion of the generative model strengthens the prediction machinery involved (the synaptic connections across a network of neurons), preventing degradation of the update.
Knowing reflects a confident, precise prediction—a section of the generative model that accurately anticipates sensory and social consequences.
Understanding arises when the generative model is not just accurate, but structured, generalisable, and explanatory.
Struggling to know is not an awkward phrase—it’s a cognitively valid state. It describes the period in which Bayesian inference is still at work: adjusting, selecting better-fitting predictions, taking time, but steadily working to minimise prediction error.
Predictive Processing helps us see these distinctions not as semantics, but as the lived dynamics of predictive inference. It allows us to interpret hesitation, doubt, forgetting, and the “tip-of-the-tongue” phenomenon as signal-rich moments—not deficits, but opportunities for model refinement.
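For readers who want the shorthand behind these claims, here is a minimal, informal sketch in notation (my own shorthand for the standard predictive-coding story, not a formula lifted from the books above). If $y$ is what the senses deliver, $g(\mu)$ is what the current generative model predicts, and $\pi$ is the precision (confidence) given to the evidence, then roughly:

$$\varepsilon = y - g(\mu), \qquad \Delta\mu \;\propto\; \pi\,\varepsilon$$

The prediction error $\varepsilon$ is the gap between world and model, and the model’s beliefs $\mu$ move in proportion to the precision-weighted error. On this reading, knowing is the regime where $\varepsilon$ stays small and precision is high; struggling to know is the regime where $\varepsilon$ is still sizeable and $\mu$ is still on the move.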
What This Means for Educators
For educators, the core question is always: How can this help students learn more effectively?
Predictive Processing reframes learning not as the storage of facts or skill sequences, but as the continuous refinement of the prediction machine (the generative model). This insight bridges theory and practice in several powerful ways:
It reinterprets cognitive load theory and working memory limits in terms of prediction error bandwidth. Learning stalls when the brain is overwhelmed by too much unpredicted input—this is not a “working memory overload” in a strict capacity sense, but an excess of unresolved uncertainty. Effective learning occurs when prediction errors are manageable: large enough to trigger model updates, but not so large they destabilise the system. This maps cleanly onto ideas like desirable difficulty, germane load, and extraneous load—but from a principled, system-level perspective. A toy numerical sketch of this “bandwidth” idea follows this list. More on this in my post: Teachers Are Prediction Error Managers →
It explains expert–novice differences through the lens of model precision. Expert learners do not have “more memory”; they have more finely tuned, high-precision models that can better filter out noise and guide attention. Novices, by contrast, experience more surprise (that is, prediction error), leading to greater uncertainty and cognitive strain. More on this in my post on expert vs. novice learning →
It clarifies the power of prior knowledge and meaningful learning. Prior knowledge is not just background information—it is the structure of the existing generative model itself. When new information links causally or narratively to what learners already have a secure model for, it is more likely to be encoded because it reduces uncertainty in a way the brain finds meaningful. This also explains why stories, causally linked events, and coherent explanations are easier to “remember”—because they are structured as a predictive sequence, matching the way the generative model itself is organised.
It supports a deeper understanding of emotional regulation and classroom readiness. Emotional dysregulation can be reframed as an issue of overwhelming or unresolved prediction errors. Strategies that minimise surprise and reduce sensory input therefore make sense. For more on this, see my post on the 6 stages of crisis through the lens of predictive processing →
It reframes scaffolding as precision tuning—not simply “reducing difficulty,” but managing the uncertainty gap so students can take on challenges without hitting their prediction error bandwidth limits. Just enough prediction error to trigger learning, not enough to overload. It’s not about complexity; it’s about predictability and focused surprise.
It gives educators a unified theory of feedback. Feedback is not just a correction mechanism—it is an opportunity for learners to resolve uncertainty and recalibrate their models. Timely, relevant, and emotionally safe feedback enhances learning because it helps the system converge on better predictions.
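As flagged above, here is a minimal sketch of the “prediction error bandwidth” idea in Python. It is a caricature rather than a model: the band boundaries are invented for illustration, and real precision-weighting is continuous rather than a hard threshold. But it captures the scaffolding intuition that the teacher’s job is to manage the size of the surprise, not to remove it:

```python
# Toy illustration of the "prediction error bandwidth" idea (illustrative only):
# learning happens when the error is big enough to matter but small enough
# not to destabilise the learner. The thresholds are invented, not empirical.

LOW, HIGH = 0.2, 2.0   # hypothetical bounds of the learnable band

def respond_to_error(error: float) -> str:
    """Classify a prediction error by whether it can drive a model update."""
    size = abs(error)
    if size < LOW:
        return "already predicted: nothing new to learn"
    if size > HIGH:
        return "overwhelming surprise: the error stays unresolved, learning stalls"
    return "manageable surprise: the model updates, learning happens"

for e in [0.1, 0.8, 1.5, 3.0]:
    print(f"error={e:+.1f} -> {respond_to_error(e)}")
```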
Above all, Predictive Processing and Active Inference provide a shared language. One that integrates insights from psychology, neuroscience, philosophy, and education. A language that helps us talk about attention, motivation, metacognition, scaffolding, memory, and emotion, not as isolated mechanisms, but as parts of a coherent system designed to minimise uncertainty and make meaning.
And this is just the beginning. I believe there are many more insights waiting to be drawn out and applied—something I hope we can continue to explore and develop together as a community.
Final Thought: What Are We Really Talking About?
Ultimately, this debate reminds us that different disciplines ask different—but equally important—questions:
Philosophy asks: Are we talking about the same thing?
Science asks: How does this thing work?
Education asks: How can this help students learn more effectively?
Predictive Processing and Active Inference offer a rare opportunity to honour all three. They uphold the conceptual clarity that philosophy demands, provide rigorous models of how the mind works, and, crucially, offer educators a framework that more accurately reflects how learning unfolds in real classrooms. In doing so, they move us toward a shared language—one that respects meaning, mechanism, and meaningful impact.
Maybe the real question isn’t whether we can say something like “struggle to know”—but whether we have a framework that makes such experiences intelligible, describable, and genuinely useful for understanding how we learn, teach, and live.
🔥 Spicy Take – A Postscript
I sometimes wonder whether the continued reliance on decades-old cognitive science frameworks—like working memory models, cognitive load theory, and retrieval practice—is less about dogma and more about safety. Perhaps it reflects the comfort of sticking with well-established models that feel familiar and actionable. Or maybe it highlights something deeper: the real challenge of accessing, interpreting, and applying newer, more complex research, especially when the language of that research can feel abstract or alien to everyday classroom practice. It is perhaps also true that, just as with a new curriculum change, the feeling of “I’ve only just got used to the last change” leaves little appetite for the effort that embracing something new will require.
But if that’s true, then the burden doesn’t fall solely on teachers. It also belongs to those shaping the public discourse—thought leaders like Bernard and David, whose debates help guide what counts as credible, useful, or even sayable in education. If the field is shifting—and Predictive Processing suggests that it is—then perhaps one of the most vital roles of those leading the conversation is to signpost these new directions, to help bridge the gap between cutting-edge theory and everyday teaching.
Predictive Processing and Active Inference are absolutely not final answers; rather, they are the latest in the scientific tradition of evolving models and theories. I believe they are our most comprehensive and integrative models yet, emerging from an interdisciplinary research base that spans neuroscience, philosophy, psychology, and artificial intelligence. They offer educators something that older models increasingly cannot: a framework that unites emotion, memory, attention, perception, and action in one coherent system.
If we are truly science-informed, then we also carry the responsibility to be theory-responsive—to keep asking, with curiosity and courage, whether there might be a better model for understanding how our students learn, and how we teach.
Bibliography & Further Watching
Key Books
Seth, A. (2021). Being You: A New Science of Consciousness. Faber.
Clark, A. (2023). The Experience Machine: How Our Minds Predict and Shape Reality. Penguin.
Hohwy, J. (2013). The Predictive Mind. Oxford University Press.
Parr, T., Pezzulo, G., & Friston, K. (2022). Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. MIT Press.
Recommended Talks & Videos