An archive of my written analyses from Mind, Morphology & Machine. These essays explore the intersection of philosophy and robotics, analyzing primary sources from authors like Alan Turing, B.F. Skinner, and Daniel Dennett to understand how historical definitions of "mind" shape the way we build machines today.
Far from definitive treatises, these essays are artifacts of my own learning process. In them, I wrestle with contradictions, question established dogmas, and often find myself revising my own perspectives. I offer them here as a record of that intellectual journey - a humble attempt to understand the deep currents shaping the future of intelligence, neuroscience, and consciousness.
Title: Phaedo
Author: Plato (c. 424/423 – 348/347 BC)
Date: Written c. 375 BC
Analyzed Section: Introduction & Sections 62b–69d
Plato’s Phaedo recounts the final conversation of Socrates before his execution in 399 BC - a pivotal moment that not only defined the enduring image of the philosopher but profoundly shaped Plato’s own trajectory. Written when Plato was in his late twenties, the dialogue serves as a bridge between narrative literature and rigorous philosophy. It captures a specific historical turning point: disillusioned by the political turmoil of the Peloponnesian War and the execution of his beloved mentor, Plato retreated from Athenian politics to found the Academy, dedicating himself to the philosophy that permeates this work.
At the heart of the dialogue is a sharp dualism that has echoed through Western thought for millennia. Socrates portrays the soul as the "true self" and the source of intellect, while the body is cast as a distracting, even corrupting force that hinders the pursuit of truth. He argues that death is merely the soul's separation from the body - a return to the realm of eternal Forms. These Forms, such as "Justice" or "Beauty," serve as the absolute standards that make knowledge possible, distinct from the imperfect examples we encounter in the sensory world. This leads to the famous Doctrine of Recollection: the idea that learning is not the acquisition of new knowledge, but the re-awakening of truths the soul possessed before birth.
Reading Phaedo today, the connections to other wisdom traditions are striking. Thích Nhất Hạnh's teachings in The Heart of the Buddha's Teaching recall Socrates' call for purification. Both envision an inner essence - whether Buddha-nature or soul - capable of clarity beyond the body. However, the difference is decisive: where Plato roots hope in immortality after death, Thích Nhất Hạnh avoids afterlife speculation, stressing liberation here and now in the "world of no-birth and no-death." Similarly, the Yoga Sutras of Patanjali describe yoga as the "restraint of the modifications of the mind-stuff." Like Socrates, Patanjali distinguishes the true Self from the noise of the world, though he treats the body less as Plato's "filthy distraction" and more as a disciplined instrument to support the mind's stillness.
This lineage of thought extends to the Stoics, with Marcus Aurelius echoing Plato's elevation of the "directing mind" over bodily pleasure and pain. It is striking how Plato's dualism gave the Stoics a framework for their disciplined acceptance of nature. But the dialogue's reach isn't limited to ancient philosophy; it resonates even with modern science. Giulio Tononi's Integrated Information Theory, which argues that consciousness is identical to a system's maximal integrated information, offers a modern echo of the animated soul. Like Plato, Tononi elevates consciousness as fundamental, though he grounds it in the quantifiable structures of matter rather than an immaterial afterlife. Even Carl Jung's concept of the collective unconscious shares DNA with Plato's Recollection, suggesting that our deepest knowledge is innate, waiting to be uncovered in dreams and symbols much like Socrates' truths wait to be recalled by reason.
Title: Meditations on First Philosophy (Meditationes de Prima Philosophia)
Author: René Descartes (1596–1650)
Date: Published 1641
Analyzed Section: Synopsis of Meditation Two & Meditation Two
"I think, therefore I am" is easily René Descartes' most quoted line, yet in his Meditations on First Philosophy, he never actually says it quite like that. Instead, the text reveals the raw thought process of a man stripping away every certainty - sense experience, memory, and the body itself - until only one unshakable axiom remains: the fact that he is thinking. Writing from the Dutch Republic amidst the Scientific Revolution, Descartes sought to pivot away from the dominant Scholastic Aristotelianism of his time and establish a stable, rational foundation for knowledge.
In the Second Meditation, he systematically doubts everything, hypothesizing a "malicious deceiver" or the possibility that he is merely dreaming. Yet, he realizes he cannot doubt the act of doubting itself. Even if he is deceived, he must exist to be deceived. This "I" is not a "rational animal" or a physical body-definitions he rejects as too complex or uncertain-but simply a "thinking thing," an intelligence that persists through doubt. He cements this reliance on pure reason with his famous wax thought experiment: the sensory qualities of wax (scent, shape, texture) vanish when melted, yet the wax remains. Thus, he concludes that the essence of matter, like the self, is grasped by the mind alone, not the senses or imagination.
While Descartes locates the self in the act of thinking, other wisdom traditions view this very act as the root of the problem. In the Heart Sutra, Red Pine notes that "The problem that arises when we reflect on our experience is that we reflect on our experience... And once we are, we are in trouble, forever divided by what we use to define our existence." Similarly, the Yoga Sutras of Patanjali argue that thinking is the primary obstacle to realizing the true Self. Swami Satchidananda explains that yoga is the practice of calming the "flux of the mind" - the very turbulence Descartes relied upon - to reveal the capital-S Self that abides beyond it.
Challenges to the Cogito also arise from within Western philosophy and psychology. Friedrich Nietzsche dismantles Descartes' insight as a grammatical illusion, arguing that "a thought comes when 'it' wishes, and not when 'I' wish," exposing the "I" as a linguistic prejudice rather than a metaphysical truth. Aristotle, whose framework Descartes sought to replace, offers a non-dualist counterpoint, viewing the soul as inseparable from the body - much like sight is inseparable from the eye. Finally, Carl Jung complicates Descartes' dismissal of dreams. Where Descartes saw dreams as potential traps of illusion, Jung frames them as vital tools for self-regulation, arguing that the ego-consciousness Descartes prized actually grows out of the unconscious life he sought to doubt.
Title: A Treatise of Human Nature
Author: David Hume (1711–1776)
Date: Published 1739–1740
Analyzed Section: Intro to "Personal Identity" & "Personal Identity"
Born in Edinburgh during the Scottish Enlightenment, David Hume wrote A Treatise of Human Nature to build an understanding of human nature based chiefly on empirical analysis. He argued that all knowledge derives from impressions of the senses, and through careful introspection, he found no "constant impression" of a self-only individual perceptions like heat, cold, or pain.
This led to his Bundle Theory of the Mind, which asserts that the mind is simply a bundle of distinct, separable perceptions that appear and pass through like actors in a theater. According to Hume, we feign a false sense of identity because our habitual imagination smooths the transitions between these diverse objects using relations like resemblance and causation.
I see Hume as arguing that identity and simplicity are not truths about the world, but rather mental habits. While the substance of his argument is sound, his rhetoric seems unnecessarily radical; instead of framing these tendencies as helpful frameworks for processing reality, he presents them as fallacies or illusions.
Take, for example, my recognition of my childhood friend Bamdad. We met when we were seven and last saw each other when I was thirteen. Hume would argue that Bamdad no longer possesses the same identity; his biological changes and life experiences mean he is not the same person. Hume would further argue that every letter, text, or Zoom call I receive merely consists of new impressions that I wrongly attribute to the same identity.
But to me, this feels like an unnecessary layer of semantic skepticism. My mind is able to deduce that these distinct sensations all correlate back to one continuing subject: Bamdad. I'm not mistaking the pixels on the screen for him; I'm recognizing that they point to him. To call this act of recognition a "confusion" seems inappropriate. However, Hume might push back against this example. He might ask why I consider the texts and Zoom calls as distinct impressions, yet refuse to do the same for Bamdad himself. Why stop there? Why not keep going and "see" a bundle of impressions rising from what I thought was a unitary thing? If I have two friends standing together, I am comfortable viewing them as separate entities; Hume would ask why I do not view the individual "Bamdad" with that same level of granularity - as a collection of separate parts rather than a single whole.
To that I say, where Hume sees a fiction of the imagination binding together perceptions, Kant reframes this as a transcendental structure. The unity of self is not something we mistakenly impose, but something we must presuppose in order to have experience at all. As he puts it in the Critique of Pure Reason: "it is only because I can combine a manifold of given representations in one consciousness that it is possible for me to represent the identity of the consciousness in these representations itself." The very act of experience requires the synthetic activity of the mind to unify impressions.
That said, the Bayesian brain framework developed by theorists such as Karl Friston and Andy Clark resonates deeply with Hume's account. It posits that the brain is essentially a prediction machine, minimizing error by linking perceptions through probabilistic inference. Hume's view of causation as a psychological habit almost reads as an early anticipation of predictive coding: both accounts explain cognition as a process of pattern recognition.
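To make that resonance concrete for myself, here is a toy sketch of habit-as-inference: a Beta-Bernoulli update in which each observed conjunction of two events strengthens the expectation that one follows the other. The flat prior and the observation counts are invented purely for illustration.

```python
# Hume's "habit" as Bayesian updating: each time event B follows event A,
# the estimated probability P(B|A) rises; a surprising miss pulls it back.
def update(alpha, beta, observed_b):
    """Return the Beta(alpha, beta) posterior after one observation of A."""
    return (alpha + 1, beta) if observed_b else (alpha, beta + 1)

alpha, beta = 1, 1                  # flat prior: no habit formed yet
for obs in [True] * 9 + [False]:    # B follows A nine times out of ten
    alpha, beta = update(alpha, beta, obs)

expectation = alpha / (alpha + beta)
print(round(expectation, 2))  # 0.83 - a strong expectation, never certainty
```

The point of the sketch is Humean: the "connection" between A and B is nowhere in the data, only in the updated expectation the observer carries forward.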
And yet, there remains an unresolved tension. Both Hume and the Bayesian brain reduce the mind to processes - perceptions on one side, probabilistic inferences on the other - without addressing the "watcher," the one for whom the theater plays. If we are only hollow automata, these theories hold great explanatory power, but they do not explain the "hard problem" of consciousness: its irreducible kernel, the "what it is like".
What is striking is that although Hume never explicitly deals with the problem of the "watcher," he arrives at a conclusion similar to Buddhist thought. The doctrine of anatta ("no-self") asserts that "there is no distinction between change in the content or objects of consciousness... and change in consciousness itself." In other words, the experiencer is not separate from the experience: seeing arises with a visual object, hearing with a sound. Thus, where Hume stops with a skeptical conclusion, Buddhism goes further, showing that the experiencer itself can be explained as part of the same impermanent flux.
Title: Man a Machine (L'Homme Machine)
Author: Julien Offray de La Mettrie (1709–1751)
Date: Published 1748
Analyzed Section: First seven pages
With Man a Machine, Julien Offray de La Mettrie set out to ground our understanding of mind and soul in the study of the body, rejecting metaphysical speculation in favor of observation, anatomy, and physiology. He argued that the soul is not an immaterial essence but the living activity of a machine-a "self-winding" mechanism kept in motion by nourishment, passions, and organic processes. To him, mental states like courage, joy, or genius are not spiritual qualities but functions of the body's organization, shaped directly by food, climate, and health.
What struck me first about La Mettrie is how bold and aggressive he is in his writing - an energy echoed in historical accounts of his life. At times, this brashness slides into questionable territory; he casually accepts humoral theory, endorses physiognomy, and dismisses women as the "weaker sex," all of which stain portions of his work as pseudoscience rather than genuine philosophy.
Yet, other parts of his materialist work remain prescient. His claim that food shapes temperament immediately reminded me of the Bhagavad Gita, which explains how specific foods promote vitality (sattvic) while others cause pain or disease (rajasic/tamasic). La Mettrie's idea that diet cultivates courage or laziness resonates strongly with this older spiritual framework, though he strips it of metaphysics.
His vivid description of phantom limbs was particularly striking: a soldier continues to feel and even move an arm that has been amputated. Today, this phenomenon is well documented. Ramachandran and Rogers-Ramachandran (1996) used the "mirror-box" to "resurrect the phantom visually," demonstrating that visual feedback could relieve painful spasms. La Mettrie's observations weren't just provocative; they captured something real about the nervous system's plasticity.
I was also fascinated by his speculation about training apes to speak. It reads like Enlightenment science fiction, and while studies with apes like Washoe, Koko, and Kanzi have demonstrated capacities for symbolic communication, this remains a highly contested area. There is a large counter-literature arguing that these apes may be performing complex mimicry or operant conditioning rather than displaying true linguistic competence - though one might ask whether human language is itself anything more than an intricate manifestation of these simple mechanisms; I digress. Thus, while we have seemingly "done it" as La Mettrie proposed, the question of whether apes are truly using language as we do remains unresolved.
What stays with me is how relentlessly La Mettrie ties the mind back to physiology. He places himself at odds with Plato's immortal soul, yet aligns oddly with Hume, who also described a flux of impressions - La Mettrie simply presses harder, rooting that flux in nerves, blood, and organs.
I found his attempt to generalize intelligence through brain structure more nuanced than expected for his time. He noted that brain convolutions (wrinkles) were a sign of mental power - an idea that holds some weight today - though his claims that brain "softness" explains idiocy or that the corpus callosum is the seat of the soul are humorously incorrect by modern standards. Still, his intuition that a tiny anatomical difference could separate genius from idiocy is prescient; modern neuroscience confirms that subtle differences in circuitry, rather than gross deformities, often drive cognitive shifts.
One of La Mettrie's more fascinating observations was his claim that "the deaf see more clearly and keenly than if they were not deaf." This maps neatly onto modern neuroplasticity. As Burton and McLaren (2006) note, fMRI studies show that in blind individuals, the visual cortex is actively recruited for processing language. Modern sensory substitution devices, like vests that translate camera input into vibration patterns, further confirm La Mettrie's hunch: the machinery of perception can be retrained and reconfigured.
Finally, it is striking to see parallels between La Mettrie and Buddhist philosophy. La Mettrie argued that the soul is a flowing flux formed by what is "put forward to it" - food, climate, and company. Thích Nhất Hạnh articulates a similar view on "nutriments," stating that our sense organs are in constant contact with objects, and "these contacts become food for our consciousness." Both perspectives suggest that the mind is not an isolated entity but is continuously shaped by what we consume, whether physically or mentally.
Subject: Charles Sanders Peirce (1839–1914)
Author: Daniel Everett
Date: Published 2019
Analyzed Section: Entire Article
Charles Sanders Peirce, often called "The American Aristotle," was a brilliant but troubled polymath who worked in relative obscurity in Milford, Pennsylvania. While he wrote thousands of pages on mathematics, logic, and science, his most enduring contribution is his triadic theory of semiotics, which he conceived as the foundation of all cognition. Unlike the dyadic model of Ferdinand de Saussure (Signifier/Signified), which dominates linguistics, Peirce argued that a sign always involves three parts: the object (what it refers to), the form or representamen (the vehicle that carries it), and the interpretant (the meaning or response produced).
The Triadic Advantage

I find Peirce's triadic theory intriguing because it sits perfectly between everyday common sense - which is fast but hard to formalize - and the Greek syllogism, which is rigorous but brittle.
Trying to compare the models, I reflected on when I first arrived in America and noticed how much larger the Twix bars were compared to back home. If I used a Greek syllogism, I would have to construct a rigid logical chain: "In countries with weak regulations, candy is larger; U.S. candy is large; therefore, U.S. regulations are weak." This is clear but fragile - if one premise fails, the conclusion collapses.
If I used Saussure’s Dyadic Model, the "sign" would just be the large bar (form) paired with the concept of "Twix" (meaning). The analysis stops there, leaving no theoretical room for the broader cultural implications I sensed.
Peirce’s Triadic Model, however, captures the actual reasoning process:
Form: The large candy bar.
Object: The regulatory environment and national food culture.
Interpretant: The inference that U.S. culture is permissive and less concerned with health.
Here, the sign functions as an index of social policy and a symbol of consumer culture, linking perception to interpretation without demanding rigid premises.
Peirce’s framework sheds light on how modern Large Language Models (LLMs) process meaning. These models encode words geometrically in vector space - subtracting "man" from "king" and adding "woman" yields "queen". This mimics Peirce's icons (similarity clustering) and symbols (statistical convention). However, as Manheim notes, this often leads to a "hall of mirrors" problem: models generate coherent text without the indexical grounding of direct contact with reality or the interpretant sustained by a community of inquiry.
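The vector arithmetic is easy to demonstrate in miniature. The tiny hand-picked vectors below are stand-ins for learned embeddings (real models learn hundreds of dimensions from text statistics), and excluding the query words follows the usual analogy-evaluation convention.

```python
import numpy as np

# Toy 3-dimensional "embeddings" chosen by hand to illustrate the idea;
# nothing here is a real trained model.
vocab = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def nearest(vec, exclude=()):
    """Return the vocabulary word whose vector is most similar (cosine) to vec."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max((w for w in vocab if w not in exclude),
               key=lambda w: cos(vocab[w], vec))

# king - man + woman lands nearest to queen
result = nearest(vocab["king"] - vocab["man"] + vocab["woman"],
                 exclude=("king", "man", "woman"))
print(result)  # queen
```

In Peircean terms, everything in this sketch is icon and symbol - similarity geometry and statistical convention - with no index anchoring the vectors to actual kings or queens, which is precisely the "hall of mirrors" worry.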
It is worth noting, however, that while Peirce suggests overcoming these limits requires embedding AI in richer semiotic processes, many machine learning experts would push back. They might argue that data scaling has historically defeated such theoretical objections - that with enough data, the "hall of mirrors" becomes sufficiently detailed to function indistinguishably from grounded reality.
Finally, Peirce offers a corrective to psychoanalytic theory. Freud implicitly opened space for a Peircean view when he distinguished between the verbal thoughts of the day and the visual images of the dream-work. However, Jacques Lacan later reframed the unconscious using Saussure’s binary linguistic model ("the unconscious is structured like a language"). Had Lacan adopted Peirce instead, psychoanalysis might have developed a richer toolkit: icons for dream images, indexes for the causal traces of trauma, and symbols for cultural law - avoiding the reductionism of a purely linguistic unconscious.
Title: The Reflex Arc Concept in Psychology
Author: John Dewey (1859–1952)
Date: Published 1896
Analyzed Section: The entire essay
In his 1896 essay, John Dewey challenged the prevailing view of the reflex arc as a linear sequence of stimulus → response. He argued that this model artificially divides experience into separate parts - sensation, processing, and movement - when in fact these are phases of a single, continuous coordination. Dewey emphasized that what we call "stimulus" and "response" are not distinct ontological categories but functional distinctions within an ongoing activity; seeing is already coordinated with reaching, and movement reshapes the meaning of the stimulus itself.
Beyond the Mechanical

Reading Dewey's essay was a paradigm shift in how I process reality. With my computational mindset, the old stimulus-response model had always felt natural and intuitive, but his analysis exposed its shortcomings. Dewey shows that reducing stimulus to "input" and response to "output" leaves us with a pixelated account of experience.
I find this point becomes even clearer when we consider the cerebellum. Its role is continual refinement - comparing predicted sensory consequences of movement with actual feedback and issuing constant error corrections. As Popa and Ebner (2019) put it, the cerebellum "implements forward internal models which predict the sensory consequences of motor actions... to compute sensory prediction errors." If we tried to force this process into an input/output framework, each micro-adjustment would count as a new "stimulus" and "response," collapsing the model under its own weight.
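A toy forward-model loop makes the cerebellar picture concrete: predict the sensory consequence of a command, compare it with feedback, and let the error recalibrate the internal model. The gains and learning rate below are invented for illustration, not drawn from Popa and Ebner.

```python
# A cartoon of "forward internal models": the system's estimate of how the
# "arm" responds is continually corrected by sensory prediction errors.
def calibrate(steps=50, lr=0.5):
    true_gain = 1.3   # how the limb actually responds (unknown to the model)
    model_gain = 1.0  # the internal model's current estimate
    for _ in range(steps):
        command = 1.0
        predicted = model_gain * command   # forward-model prediction
        actual = true_gain * command       # simulated sensory feedback
        error = actual - predicted         # sensory prediction error
        model_gain += lr * error           # continual refinement
    return model_gain

print(round(calibrate(), 3))  # 1.3
```

Notice that no single pass through the loop is a "stimulus" or a "response" in isolation; the labels only make sense as phases of one ongoing coordination, which is Dewey's point.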
This shift also opened up an intriguing connection for me with Lacan's notion of lack. Dewey distinguishes between habits, where the input/output shorthand suffices, versus higher-level conscious processes, which only emerge when the circuit breaks down. It is only when the circuit falters - when the old coordination does not suffice - that an act becomes a "response," a corrective effort requiring attention. It seems as though in those moments we experience the felt sense of will or agency precisely because consciousness arises in relation to a gap, a problem to be solved. Lacan captures this same dynamic when he observes, "What man demands... is to be deprived of something real." Both Dewey and Lacan suggest that our most vital acts spring from breaks in continuity, from the tension between what is given and what is missing.
There is a parallel here to how generative language models work. At a naïve level, it looks like you give the system an input (prompt), and it produces an output. Superficially, that resembles the old "stimulus → response" arc. But as Dewey might warn, starting our explanation with the "prompt" reveals our own bias to always start with the "input" to the system. Why assume the prompt is the beginning? That prompt itself was likely a response to a previous interaction, or a specific context, just as the child's reaching was already coordinated with seeing.
Under the surface, the mechanism of Large Language Models is far more like Dewey's circuit. The model generates text token by token, recursively feeding back on its own outputs - continually updating the "stimulus" with its own "response". As Fein-Ashley, Kannan, and Prasanna (2025) note, Contextual Feedback Loops "bridge the gap between purely bottom-up inference and more dynamic, feedback-driven reasoning." This showcases that the process is less a one-way pipeline and more a loop of coordination, where meaning emerges from ongoing adjustment.
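A stripped-down sketch shows the loop itself: a toy bigram "model" conditions each step on its own previous output, so the response continually becomes the next stimulus. The bigram table is, of course, a cartoon standing in for a trained network.

```python
# Dewey-style circular causality in text generation: each sampled token
# (the "response") is fed back as part of the context (the "stimulus").
bigram = {
    "<s>": "the", "the": "soul", "soul": "is", "is": "a",
    "a": "circuit", "circuit": "</s>",
}

def generate(context, max_tokens=10):
    tokens = list(context)
    for _ in range(max_tokens):
        nxt = bigram.get(tokens[-1], "</s>")  # conditions on its own output
        if nxt == "</s>":
            break
        tokens.append(nxt)                    # output becomes new input
    return tokens

print(generate(["<s>"]))  # ['<s>', 'the', 'soul', 'is', 'a', 'circuit']
```

Even in this trivial loop there is no clean place to draw the input/output line: every token is both the end of one arc and the start of the next.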
Finally, Dewey provides a fascinating counterpoint to Descartes. Descartes' wax example illustrates his conviction that reason alone grasps the essence of things beyond shifting sensory appearances; the wax changes shape, yet remains the same substance. Dewey brushes up against this but redirects it entirely. He would argue that the wax is not known by stripping away its qualities to arrive at an abstract essence, but by the active coordination of perception and action in handling it. Essence is not a mental residue left after discarding sensory change; rather, it is constituted within the very adjustments we make as the wax transforms. Where Descartes locates certainty in rational detachment, Dewey roots it in the embodied process of adaptation.
Subject: Gordon Pask (1928–1996)
Authors: Jon Bird and Ezequiel Di Paolo
Original Source: MIT Press
Date: Published 2008
Analyzed Section: Pages 185–187, 201–207
Gordon Pask lived an unconventional life, combining intellectual daring with chaotic charm to become one of the most original figures in British cybernetics. His hands-on creativity produced "maverick machines" like the Musicolour system and SAKI (Self-Adaptive Keyboard Instructor), which blurred the lines between art, science, and learning.
Pask argued that knowledge comes not from detached observation but from engaging with systems - "meeting nature halfway" by treating problems as partially unknown black boxes and exploring them through constructive interaction. As I began to adopt this motif in the lab, I was charmed and pleasantly surprised by how fun and effective it can be. A hallmark of his philosophy was that complex behavior need not arise from complex mechanisms; even simple components, when coupled with rich environments, can display intelligence-like responses.
Reading about Pask’s machines instantly called to mind Lev Vygotsky’s Zone of Proximal Development (ZPD) - the distance between independent problem solving and potential development under guidance. Pask's SAKI device paralleled this idea, built to keep users in a lively tension between mastery and challenge. He treated learning as a dance between stability and change, where boredom and overload mark the edges of a productive zone.
This naturally brings to mind Duolingo, which uses its "Birdbrain" AI to sort exercises based on learner proficiency. Yet, Duolingo's path is still largely pre-written. A truly Paskian system would evolve in response to quirks and creative input, turning language practice into a genuine conversation rather than a conveyor belt of exercises.
The Living Sensor

Reading Pask stirred up one of my long-standing project ideas: an olfactory detector that uses the receptor neurons of a fruit fly as a living sensor array. Their electrical activity would flow directly into a decoding model, not to optimize the biology but to interpret it - mapping raw neural signatures onto identifiable odors. It is a Pask-like attempt to "meet nature halfway": treating the neuron ensemble as a black box and letting interaction with its outputs reveal structure. We had Pask's Ear, get ready for Tafti's Nose... coming soon.
It feels natural to see today's neural networks as heirs to Pask's work. Like his Musicolour system, they operate with simple building blocks that generate astonishingly complex outputs. As Pask argued, their inner workings are often opaque; we know how to build the framework, but the fine details of why certain behaviors emerge remain hidden.
Recursive techniques in AI, such as Contextual Feedback Loops (CFLs), echo Pask’s insight by feeding a model's output back into itself to refine responses. This is exactly the shift Pask foresaw: that machines could evolve beyond rigid programming, surprising us through their coupling with the environment.
Finally, Pask's insistence on shifting science from passive observation to active interaction offers a radical corrective to the pursuit of neutrality. In fields where the act of looking inevitably changes what is seen - whether in Heisenberg's quantum measurements or Jung's explorations of the psyche - Pask's method offers a way forward. As Pauli observed in Man and His Symbols, "The measuring apparatus has to be included in the description of events because it has a decisive but uncontrollable influence upon the experimental set-up." Pask treated the observer as part of the system, reframing objectivity not as standing apart, but as acknowledging and designing for our own influence.
Title: Cybernetics: Or Control and Communication in the Animal and the Machine
Author: Norbert Wiener (1894–1964)
Date: Published 1948
Analyzed Section: Pages 124–132
Norbert Wiener was a child prodigy who became one of the most influential mathematicians of the 20th century. During World War II, his work on predicting aircraft trajectories led him to the concepts of communication and feedback that became the foundation of Cybernetics. This new science of control and communication in animals and machines influenced computing, neuroscience, robotics, and systems theory, helping to establish the modern language of information, feedback, and control that underpins today's technology.
Wiener emphasizes that memory, whether in brains or machines, depends on changes in storage elements - specifically at synapses in the nervous system - rather than the creation of new ones. He also argues that behavior and conditioned reflexes are not purely mechanical but shaped by "affective tone" - positive or negative weights that influence whether processes are reinforced or inhibited.
Beyond the Input/Output Trap

After reading Dewey's reflex arc, I always flinch a bit whenever I see an input-output system, and I wondered if Wiener fell into a similar trap. At first glance, his affective-tone diagram seems to fall into the old "input/output" framing, with "messages in" and "responses out". But on second glance, it seems that Wiener actually does something closer to Dewey's move. He consistently emphasizes feedback loops: conditioned reflexes, affective tone, and control systems are never just one-way chains but self-regulating circuits. In his model of affective tone, outputs feed back into the system and reshape thresholds, which in turn alter future inputs. Likewise, his cybernetics treats organisms and machines alike as systems defined by circular causality, not linear push-pull.
Wiener saw memory not as fresh marks on a blank slate but as changes in the thresholds and permeability of synapses. This finds a direct parallel in modern AI, particularly in how neural networks learn. As Han, Mao, and Dally (2016) explain, "during back-propagation, the gradient for each shared weight is calculated and used to update the shared weight." Just as the brain learns by altering synaptic strength, AI learns by tuning numerical parameters - memory is transformation, not inscription.
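A minimal sketch of memory-as-transformation: "learning" below is nothing but the repeated adjustment of one existing parameter, the way a synapse changes strength rather than a new element being created. The task (fitting y = 2x) and the learning rate are illustrative choices of mine, not anything from Wiener or Han et al.

```python
# One "synaptic" weight w is adjusted by gradient descent on squared error.
# Nothing new is ever stored; the existing storage element is transformed.
def train(steps=200, lr=0.1):
    w = 0.0
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad              # the stored value changes in place
    return w

w = train()
print(round(w, 3))  # 2.0
```

The final state of the "memory" is just a different threshold-like value of w, exactly the sense in which Wiener said remembering is a change in permeability, not a fresh inscription.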
Furthermore, Wiener's concept of "affective tone" anticipates modern reinforcement learning. He describes behavior as governed by feedback loops where outcomes strengthen or weaken processes. This is now a formalized principle in AI. As Barthet, Liem, and Cunningham (2022) note, "We view affect modeling as a reinforcement learning (RL) process [in which] agents learn to take actions (i.e. affective interactions) that will maximize a set of rewards (i.e. behavioral and emotional patterns)." What Wiener sketched as affective tone is the functional signal that drives the adaptation of intelligent systems today.
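To see affective tone as a reward signal, here is a toy two-armed bandit of my own invention (not Wiener's or Barthet et al.'s setup): positive outcomes raise an action's learned value and make it more likely to be chosen again, while negative outcomes inhibit it.

```python
import random

# Positive "tone" reinforces an action; negative "tone" suppresses it.
def run(trials=5000, lr=0.1, seed=0):
    rng = random.Random(seed)
    reward_prob = {"A": 0.8, "B": 0.2}   # hidden environment
    value = {"A": 0.0, "B": 0.0}         # the learned "tone" of each action
    for _ in range(trials):
        # epsilon-greedy: mostly exploit the higher-valued action, sometimes explore
        if rng.random() < 0.1:
            action = rng.choice(["A", "B"])
        else:
            action = max(value, key=value.get)
        reward = 1.0 if rng.random() < reward_prob[action] else 0.0
        # feedback loop: the outcome reshapes the threshold for future choices
        value[action] += lr * (reward - value[action])
    return value

v = run()
print(v["A"] > v["B"])  # True: the rewarded process is strengthened
```

The circular causality is explicit here: the action alters the environment's return, and the return alters which action comes next - a living circuit rather than a one-way chain.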
Plato vs. Locke (feat. Pavlov):
On the nature of the mind, I find the contrast between Plato and Locke especially striking when set against Pavlov's experiments. Plato argues that "what we call 'learning' is really the regaining of knowledge which the soul possessed in a prenatal, disembodied state," while Locke insists that "the mind [is] white paper, void of all characters, without any ideas; how comes it to be furnished?... from experience."
At first glance, Pavlov’s dogs seem to vindicate Locke: the bell-salivation link appears to be written onto a blank slate. Yet, Plato has a point: the conditioning depends on the nervous system's built-in capacity to form associations. The mind is like a blackboard - blank until marked, but possessing a grain and structure that makes writing possible. Pavlov’s work suggests learning is a fusion where experience draws the lines while innate capacities provide the canvas. This is where I appreciate Wiener's pivot to Pavlov; he shows us that behavior is not simply content written on a slate but an active process of reflexes and feedback - a living circuit rather than a static surface.
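The blackboard analogy can be sketched with the Rescorla-Wagner rule, a standard textbook model of Pavlovian conditioning: experience writes the bell-food association, but only because a surprise-driven update rule - the innate "grain" of the board - is already in place. Parameters are illustrative.

```python
# Rescorla-Wagner: associative strength V grows with each bell+food pairing,
# driven by the gap between expectation (V) and outcome (reward). The update
# rule itself is the innate capacity; experience only supplies the content.
def condition(trials=30, lr=0.3, reward=1.0):
    V = 0.0
    history = []
    for _ in range(trials):
        V += lr * (reward - V)  # surprise-driven learning
        history.append(V)
    return history

h = condition()
print(round(h[0], 2), round(h[-1], 2))  # 0.3 1.0 - weak at first, near-asymptotic later
```

Locke supplies the data stream, Plato the built-in learning rule; the simulation only works because both are present, which is the fusion the paragraph above describes.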
Title: Intelligent Machinery
Author: Alan Turing (1912–1954)
Date: Written 1948
Analyzed Section: Pages 107-127
Alan Turing, a central figure in modern computing and artificial intelligence, turned to the problem of machine intelligence after his decisive work at Bletchley Park. In his writings, he proposes that the infant brain is best understood as an "unorganized machine," lacking fixed purpose but capable of being shaped into a universal machine through training. For true intelligence, he argues that two elements are needed: "discipline" (the ability to follow rules) and "initiative" (the capacity to explore and adapt).
What struck me most in reading Turing's essays was the sheer clarity of his writing. Every sentence feels deliberate, clean, and lean - no wasted words, no digressions. It does not read like the detached abstractions of a "high-functioning asocial savant," as is sometimes unfairly projected onto him; instead, it is clear-headed, balanced, sharpened but warm.
That said, I did have my gripes. At points, Turing describes the brain as a continuous machine, yet later frames it firmly around the idea of a discrete machine, and the shift between the two is not entirely clear to me. Modern neuroscience suggests the reality is more hybrid than either extreme: "electrophysiological properties corroborate the neuroanatomical evidence that the superior colliculus is the discrete-continuous interface of the oculomotor system." The messy, continuous qualities of biology resist being neatly divided into categories, and while the discrete analogy is powerful, it risks flattening the mind's complexity into a purely computational metaphor.
The Unorganized Machine
Turing's image of the infant cortex as an "unorganized machine" resonates strikingly with today's models of unsupervised and self-supervised learning. Modern AI echoes this logic: "Conceptually, human infant learning is the closest biological parallel to artificial unsupervised learning, as infants too must learn useful representations from unlabelled data." Large neural networks, initialized with little more than random weights, begin as unorganized machines in Turing's sense. Indeed, as Amid et al. note, "the representation of a fully trained network is already 'echoed' in a representation induced from random neural network features."
The parallel extends further. Turing speculated that education could instill discipline and initiative into such unorganized systems. Today, techniques like fine-tuning play this very role. As one recent study observes, "Reinforcement learning from human feedback (RLHF) has recently revolutionized the fine-tuning of large language models (LLMs), achieving remarkable success in aligning model behavior with human preferences."
Jung, The Unconscious & The Machine... A thought experiment
When we compare the human mind to machines, the focus usually falls on conscious faculties like reasoning and logic. Yet as Jung emphasized, "The unconscious mind of man sees correctly even when conscious reason is blind and impotent." Turing's "unorganized machine" already anticipates this - a system vast and latent, organized only gradually by discipline.
Where, then, does the machine's unconscious reside? One answer could be in latent structures: the invisible weights of neural networks and embedded biases. Another could be in repression: what is not allowed to appear. Policies and guardrails act like cultural taboos, dividing acceptable from forbidden output and creating a "shadow space" of excluded possibilities.
However, we must also consider the very definition of the unconscious: processes not observed by the thinker. This prompts a deeper question: Are outputs observed by an AI model, but not its latent representations? Is an AI model even capable of observing some of its inner workings? If not, the machine's true unconscious may lie not just in what is "forbidden" (repression), but in the vast, opaque computational layers...
Title: Materialism and the Mind-Body Problem
Author: Paul Feyerabend (1924–1994)
Date: Published 1963
Analyzed Section: Sections 1-4, 7-9, 12, 13, 20, and 24
Paul Feyerabend, one of the most provocative philosophers of science of the 20th century, is famous for his attacks on scientific rationalism and his embrace of methodological pluralism. In this text, he argues that materialism is too often dismissed before it has had the chance to fully develop. Critics often point to differences between "mental" language and "physical" language to reject the idea that mental processes are brain processes. Feyerabend counters that our current dualistic language is not a reflection of truth but a historical artifact, one that might be replaced by a more unified materialist conceptual system given enough time and development.
The Explanatory Gap and the Problem of Language
I find Feyerabend's defense of materialism logically rigorous but ultimately unsatisfying. For me, the struggle lies elsewhere: not in the logical possibility of materialism, but in its explanatory gap. However, applying Feyerabend’s own arguments forces me to reconsider this "gap." He would likely argue that my perception of a gap is itself a product of the language I am currently using. By insisting that qualia (subjective experiences) are distinct from matter, I may be trapped in a linguistic framework that prevents me from seeing how they could be material. The challenge he poses is not just to accept materialism as it is, but to construct a new language in which the "qualia problem" might eventually be solved - or perhaps realized to be a non-problem entirely.
In my initial thinking, I distinguished between types of pain. Physiological pain - stepping on a Lego, touching a hot stove - seemed relatively straightforward: a direct signal of tissue damage. Yet, even this distinction is perhaps too simple. If someone is in a coma, do they feel the pain? If they are drunk, or half-drunk, or in a masochistic depressive state where pain feels "good," is it still pain? Subjective states play a role even in this "straightforward" form. This supports Feyerabend's suspicion that our categories are fluid; the "inner feel" is often reified by language into a special kind of object, when it might not be a fixed "thing" at all.
This becomes even clearer with neuropathic pain, such as phantom limb pain. Here, the sensation is real to the sufferer, yet there is no limb to be hurt. The brain's map of the body persists even when the territory is gone. As Flor and Diers (2009) note, "phantom limb pain is a neuropathic pain syndrome... where cortical reorganization plays a major role."
In my original draft, I argued that the very fact we can talk about these phantom sensations suggests they cannot be dissolved away into mere physiology. However, Feyerabend would strongly object to this. He might suggest that in a different language, we would not be able to talk about qualia as distinct mental events - and thus, in that language, they would not exist. Or perhaps, in yet another language, qualia would have locations, colors, and "clumpiness," making them seem continuous with physical phenomena rather than separate from them.
Similarly, the placebo effect complicates the picture. As Schug and Pogatzki-Zahn (2019) explain, "expectations and learning processes are crucial determinants of placebo and nocebo effects," capable of activating endogenous opioid systems. My instinct was to say that by reducing all this to physiology, Feyerabend "sidesteps" the mystery of subjective experience. But looking closer, I am not sure I'd agree that he is merely sidestepping. Rather, he appears to be arguing for us to make space and provide time for a language to develop that could allow for testable theories about the material provenance of qualia. He is asking us to suspend our current linguistic certainties to see what a future, fully developed materialism might reveal.
Title: On aims and methods of ethology
Author: Nikolaas "Niko" Tinbergen (1907–1988)
Date: Published 1963
Analyzed Section: Entire paper
Nikolaas "Niko" Tinbergen was a Dutch zoologist and a founder of modern ethology, receiving the Nobel Prize in 1973 alongside Konrad Lorenz and Karl von Frisch. He framed ethology as the biological study of behavior, insisting it be focused on observable actions and studied with the same rigor as anatomy. His most enduring contribution is the formulation of the "Four Questions" required for a full explanation of behavior: causation (mechanism), survival value (function), ontogeny (development), and evolution (history).
I find myself in full alignment with Tinbergen here; his insistence on careful observation, his warning against premature theorizing, and his balanced treatment of causation and function feel deeply sensible. It is striking how easy it is to oversimplify behavior - a mistake Dewey already warned about with the reflex arc. Tinbergen reminds us to always consider the resolution at which we're looking. The simple act of "reaching for a candle" can be unfolded into motor planning, visual processing, and atomic interactions - and just as easily zoomed out to ecology and cultural context. We must resist stopping too early and mistaking a single level of analysis for the whole picture.
I also resonate with Tinbergen's view that animals are not closed, self-contained machines but deeply embedded in their environments. We often fail to see this because most creatures appear topologically enclosed. It's easier to perceive connectedness in trees and fungi, but a bear, or a human, is also just as much a whirlpool of nutrients, energy, and information flowing through a living system.
As Alan Watts wrote, "Every individual is an expression of the whole realm of nature, a unique action of the total universe. This fact is rarely, if ever, experienced by most individuals. Even those who know it to be true in theory do not sense or feel it, but continue to be aware of themselves as isolated 'egos' inside bags of skin." Tinbergen's insistence on the role of environmental triggers feels like an antidote to that mechanistic view.
Buddhist philosophy echoes this vision but pushes it further. The illusion of an isolated "ego" distorts how we perceive reality itself. As The Embodied Mind argues, overcoming this requires "a bridge between mind in science and mind in experience by articulating a dialogue between these two traditions of Western cognitive science and Buddhist meditative psychology." Without such integration, even scientific inquiry risks clinging to outdated dualisms and missing the deeply interdependent nature of life.
Gene-Culture Coevolution & The Fifth Question
Reading Tinbergen today, I can't help but connect his framework to contemporary theories of gene-culture coevolution. His four questions explain the biological base, but humans also inherit and transmit ideas, values, and technologies. As Laland and colleagues note, "gene-culture dynamics are typically faster, stronger and operate over a broader range of conditions than conventional evolutionary dynamics, leading some practitioners to argue that gene-culture co-evolution could be the dominant mode of human evolution."
Behaviors and belief systems compete for spread through social learning. As Chudek and Henrich explain, "competition among groups can take the form of warfare, demographic expansion, or more subtle contests of influence... Over centuries, this process sustains and aggregates group-beneficial norms into durable institutions that foster success in competition with other societies." This perspective helps make sense of acts that transcend biological adaptation: the monk who renounces reproduction or the scientist who risks security for truth. Perhaps we need to come up with a "Fifth Question" for humans: how does culture shape the evolutionary trajectory?
Dewey & The Continuity of Experience
John Dewey's critique of the reflex arc anticipated something that resonates with Tinbergen's vision. Dewey rejected the idea of stimulus and response as separate, sequential events. As he put it, "stimulus and response are not distinctions of existence, but teleological distinctions... phases of one and the same forming co-ordination."
Dewey's insistence on the continuity of experience complements Tinbergen's insistence on multi-level analysis: one shows how stimulus and response co-create meaning in real time; the other shows how that integrated act can still be studied from different angles without falling back into simplistic dualisms. Seen together, they encourage us to study behavior as a dynamic process - holistic in its unfolding, yet rich enough to warrant multiple lines of inquiry.
Title: The Sciences of the Artificial
Author: Herbert A. Simon (1916–2001)
Date: Published 1969
Analyzed Section: Chapter 1
Herbert Simon was an American polymath whose work reshaped fields as diverse as economics, psychology, and artificial intelligence. In The Sciences of the Artificial, he argues that the world we live in is largely "artificial" - man-made or designed - rather than natural. He defines an artifact as an "interface" between an "inner environment" (the object's mechanics) and an "outer environment" (the world it acts upon). Central to his thinking is the concept of bounded rationality, the idea that real intelligence does not optimize perfectly but satisfices, looking for solutions that are good enough given limited resources.
This separation allows for abstraction. We don’t need to know the complex circuitry of a computer (inner) to use it for writing (outer); we only need to know its function. This "functional abstraction" is what makes complex systems manageable. It allows us to build modularly, treating components as black boxes that simply need to meet input/output specifications.
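Simon's functional abstraction translates almost directly into code. The sketch below is my own example (the names are hypothetical, not Simon's): the caller depends only on an input/output contract, never on the component's inner environment.

```python
# A minimal sketch of Simon's "functional abstraction" (my own example):
# the outer environment sees only a contract; the inner environment is
# a swappable black box.
from typing import Protocol

class Spellchecker(Protocol):          # the interface: the outer environment's view
    def correct(self, text: str) -> str: ...

class NaiveChecker:                    # one possible inner environment
    FIXES = {"teh": "the", "recieve": "receive"}
    def correct(self, text: str) -> str:
        return " ".join(self.FIXES.get(w, w) for w in text.split())

def write_essay(draft: str, checker: Spellchecker) -> str:
    # We "use the computer for writing" without knowing its circuitry:
    # any object meeting the contract is interchangeable.
    return checker.correct(draft)

print(write_essay("teh soul can recieve truth", NaiveChecker()))  # → the soul can receive truth
```

Swap in a different checker with the same signature and `write_essay` never notices; that indifference is what makes complex systems modular and manageable.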
The most memorable image for me was his parable of the ant on the beach. He describes an ant's path as it navigates around pebbles and ridges - a path that looks incredibly complex, irregular, and difficult to describe mathematically. But Simon argues that the complexity lies not in the ant, but in the beach. The ant itself might have very simple internal rules (e.g., "if blocked, turn left").
As he puts it, "A man, viewed as a behaving system, is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself." This suggests that what we call "intelligence" or "personality" might often just be the shape of the environment reflected in our actions. We attribute depth to the agent when we should be looking at the landscape.
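The ant parable can be simulated in a dozen lines. This is my own illustration, not Simon's: the agent's rule is trivial ("step east; if blocked, sidestep"), yet its recorded path mirrors whatever obstacles the beach happens to contain.

```python
# A small simulation sketch of Simon's ant parable (my own illustration):
# a trivial rule produces a path exactly as complex as the environment.
def walk(beach, start=(0, 0), steps=30):
    """Agent rule: step east; if blocked by a pebble, sidestep north."""
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        if (x + 1, y) in beach:   # pebble ahead -> the one simple detour rule
            y += 1
        else:
            x += 1
        path.append((x, y))
    return path

flat = walk(beach=set())                        # empty beach: a straight line
pebbly = walk(beach={(3, 0), (5, 1), (6, 2)})   # pebbles: a jagged path

print(len({p[1] for p in flat}))    # → 1: one row used; simple environment, simple path
print(len({p[1] for p in pebbly}))  # → 4: the jaggedness came from the beach, not the ant
```

The agent's code is identical in both runs; only the beach changed. The complexity of the trace belongs to the landscape.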
This idea - that behavior is an interaction between a simple agent and a complex world - connects directly to the Extended Mind Thesis by Andy Clark and David Chalmers. They argue that cognition doesn't stop at the skull but spills out into the world: "If, as we confront some task, a part of the world functions as a process which, were it done in the head, we should have no hesitation in recognizing as part of the cognitive process, then that part of the world is... part of the cognitive process." Simon’s view supports this: if the "inner environment" is simple, then the "outer environment" (notebooks, smartphones, tools) must be carrying the heavy cognitive load. The complexity of our thinking is scaffolding provided by the world we’ve built.
Deep Learning & The "Black Box"
There is also a tension here with modern Deep Learning. Simon believed that by analyzing the "inner" and "outer" environments, we could fully understand and design intelligent systems. Not so with today’s neural networks, which are often opaque - we know the "outer" task (e.g., recognizing faces) and the "inner" architecture (layers of neurons), but the actual logic they learn is a black box even to us.
As Gary Marcus critiques, "Deep learning... has no obvious way of representing the sort of abstract relationships that are fundamental to human knowledge." Simon would likely be frustrated by this; he wanted intelligence to be explicable and symbolic, not just a statistical mess that happens to work. He aimed for a science of design, not just an alchemy of weights and biases.
I also kept thinking about Rittel and Webber’s concept of "Wicked Problems". Simon is an optimist about design; he believes we can decompose complex problems into smaller, solvable sub-problems. But Rittel and Webber argue that social problems are "wicked" - they have no definitive formulation, no stopping rule, and every solution creates new problems. "The planner who works with wicked problems... has no right to be wrong," they write, because unlike a scientist, their experiments leave irreversible marks on people’s lives. Simon’s rational approach works well for "tame" problems (like chess or logistics), but I wonder if it fails when the "outer environment" is not just complex, but fundamentally unstable and contested.
Pask vs. Simon
Finally, putting Simon next to Gordon Pask highlights a fascinating divergence. Both saw machines as crucial to understanding mind, but their metaphors differed. Pask viewed the machine as a participant in a conversation, an underspecified system that learns through interaction. As Bird and Di Paolo note, Pask believed "we must engage in a conversation with them... both the construction and the interaction become a necessity if we wish to understand complex phenomena such as life, autonomy, and intelligence". Simon shared this conviction but took it further: for him, artifacts were not just conversational partners but scientific models for discovering general laws of organization and intelligence. Pask emphasized interaction and emergence, treating machines as collaborators that could surprise their creators; Simon emphasized formalization and universality, using artifacts to test and refine theories of rational design. Going out on a poetic limb I would say that both believed that to know is to make - but Simon made to explain, while Pask made to experience.
Title: Eye and Mind (L'Œil et l'Esprit)
Author: Maurice Merleau-Ponty (1908–1961)
Date: Published 1964
Analyzed Section: Pages 1-5
Maurice Merleau-Ponty was a French phenomenologist who placed the body and perception at the center of philosophy. In Eye and Mind, he challenges the detached, "operational" view of science, arguing that "Science manipulates things and gives up living in them." He critiques the attempt to treat the world as a mere object of study, insisting instead that our perception is an active, embodied dialogue. He proposes that the body is not just a chunk of matter but our "general medium for having a world," and that vision is not a camera-like recording but a way of inhabiting space.
The View from Nowhere
Merleau-Ponty’s critique of the view from nowhere - the attempt by science to describe the world as if seen by no one in particular - resonates strongly with me. He writes that science "constructs its models... and treats the world as an object... thinking that it can grasp the world from the outside." This desire for objectivity often leads to a strange blindness: in trying to be perfectly objective, we forget that we are the ones looking.
As Alan Watts puts it, "We suffer from a hallucination... that we are distinct 'egos' enclosed in a bag of skin," separated from the world we study. Merleau-Ponty warns that this detachment makes us manipulate things rather than inhabit them. We build maps that are so precise we mistake them for the territory. In our pursuit of data, we risk losing the "flesh of the world" - the messy, entangled reality that actually sustains us.
AI & The Map vs. Territory
This tension between the map and the territory is vividly played out in modern AI. The "Micro-Psi" architecture developed by Joscha Bach attempts to model human cognition by building a system of needs, motives, and distinct cognitive modules. It is a brilliant map of the mind. Yet, as Bach himself admits, "A map is not the territory."
Merleau-Ponty would likely argue that Micro-Psi, for all its sophistication, remains an "operational" model - a manipulation of symbols rather than a living intelligence. It simulates the structure of needs (hunger, social belonging) but lacks the body that gives those needs meaning. As Lisa Feldman Barrett notes in her work on constructed emotion, "Simulations are the brain's guesses of what's happening in the world... your brain uses your past experiences to construct a hypothesis." For Merleau-Ponty, that hypothesis is grounded in the body's vulnerability and placement in the world, something a disembodied AI, no matter how complex, has yet to achieve.
There is a fascinating contrast between Merleau-Ponty and David Hume. Hume famously looked inside himself and found no "self," only a "bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity." From this, he concluded that personal identity is merely a psychological illusion.
For Merleau-Ponty, the continuity we live is embodied: the body is "our general medium for having a world," and perception is an active coupling of seeing and moving in which the seer is also seen. Where Hume finds only a theatre of passing impressions with no stage, Merleau-Ponty locates a field - the flesh of the world - in which body and world interweave, giving a pre-reflective unity to experience. Both dismantle the metaphysical ego: Hume explains the self as a mental construction from discrete perceptions, while Merleau-Ponty grounds the self in an enactive, bodily continuity where meaning arises from our ongoing participation in the world and with others.
Title: Of Clouds and Clocks
Author: Karl Popper (1902–1994)
Date: Published 1966 (lecture), 1973 (essay collection)
Analyzed Section: Sections I through IX
Karl Popper, a voice of critical rationalism who fled the totalitarian ideologies of the 20th century, argued against the "nightmare of physical determinism." He distinguished between "clouds" (open, irregular systems) and "clocks" (closed, predictable systems), arguing that the Newtonian revolution falsely tried to reduce all clouds to clocks. For Popper, the rise of quantum physics shattered this "daydream of omniscience," revealing a universe that is an open process - creative, indeterminate, and only partly knowable.
I find myself largely in agreement with Popper; his argument against physical determinism feels not only philosophically sound but also deeply humane. Still, I wish he had gone further in addressing a question that lingers beyond the collapse of determinism: even if quantum physics undermines the idea of a clockwork universe, does it actually restore freedom? Indeterminism, after all, is a necessary but not sufficient condition for agency.
Whether our actions unfold like billiard balls in a deterministic system or like moves in a game of snakes and ladders where an element of chance is added - neither scenario by itself guarantees freedom. A probabilistic universe may loosen the causal chains, but randomness is not the same as choice. As Henry Stapp observes, "it is an absurdity to believe that the quantum choices can appear simply randomly 'out of the blue,' on the basis of absolutely nothing at all. Something must select which of the possible events actually occur."
Randomness, in other words, cannot substitute for agency; it still leaves unanswered the question of who or what is doing the selecting. If my actions are governed by quantum probabilities rather than classical laws, I may have options, but those options are still chosen by processes that are not mine in any meaningful sense. Most contemporary discussions of free will turn on this distinction: freedom is not about escaping causality altogether but about guided control - acting from one's own reasons, values, and commitments.
A related two-stage model gives indeterminism a limited role: chance enters in a creative stage, where possibilities are generated, and deliberation follows, where reason and character select among them. Randomness supplies variation, but control lies in the second stage - in the reasoning agent who evaluates and chooses - which, for the sake of one's integrity, one would hope leans more towards causality than randomness. Popper's own stance seems to fit here; his indeterminism dismantles the shackles of a causally closed universe, making freedom possible, but not yet actual. It removes the obstacle but doesn't build the bridge. To make agency real, we still need an account of how reasons and intentions can have causal force - how the mind, as a higher-level process, can shape the physical without being swallowed by it.
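The two-stage model is simple enough to render as a toy program of my own devising (the "values" function here is a hypothetical stand-in for an agent's reasons): chance generates the candidates, and a deterministic evaluation does the choosing.

```python
# A toy rendering (my own) of the two-stage model: indeterminism supplies
# candidate options; a lawful evaluation stage does the selecting.
import random

def creative_stage(rng, n=5):
    """Stage 1: chance generates possibilities 'out of the blue'."""
    return [rng.uniform(-1, 1) for _ in range(n)]

def deliberative_stage(options, values):
    """Stage 2: reasons and character select among the candidates."""
    return max(options, key=values)   # selection is deterministic, not random

rng = random.Random(42)
options = creative_stage(rng)
# a hypothetical value function: prefer options closest to 0.5
choice = deliberative_stage(options, values=lambda x: -abs(x - 0.5))

print(choice in options)  # True: randomness proposed, reasons disposed
```

Rerunning with a different seed changes which options appear, but never how they are judged; the variation is random while the control is not, which is exactly the division of labor the model claims.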
The Free-Energy Principle & AI
In describing the essence of life, Karl Friston observes that "the defining characteristic of biological systems is that they maintain their states and form in the face of a constantly changing environment." This captures, in scientific language, the very tension Popper explored through his metaphor of clouds and clocks. Both thinkers see life not as mechanical regularity, but as a precarious equilibrium between order and uncertainty. Friston's free-energy principle formalizes what Popper grasped intuitively: living systems endure by anticipating and adapting to change, not by escaping it.
Furthermore, Bengio and Malkin write that "current deep learning mostly succeeds at System 1 abilities which correspond to our intuition and habitual behaviors - but still lacks something important regarding System 2 abilities - which include reasoning and robust uncertainty estimation." This distinction between two systems of thought - one intuitive and generative, the other deliberate and rule-based - parallels Popper's metaphor. System 1, like Popper's clouds, represents the realm of variance, spontaneity, and open-ended possibility; System 2, like Popper's clocks, brings the necessary structure - the capacity for rational selection, evaluation, and control. Bengio's framework suggests that true intelligence depends on this dynamic equilibrium between chaos and constraint.
Popper is right that classical behaviorism couldn't deliver clock-like precision about human action - but the Rescorla-Wagner model (1972) was a real, if modest, breakthrough in mathematizing behavior. As they put it, "the effect of a reinforcement or nonreinforcement in changing the associative strength of a stimulus depends upon the existing associative strength, not only of that stimulus, but also of other stimuli concurrently present." In Popper's terms, this is a move from pure "cloud" to a bit more "clock": not perfect prediction, but principled constraints on how associative strength should change.
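The Rescorla-Wagner rule can be written down in a few lines. The update below is the standard textbook form, ΔV = αβ(λ − ΣV): each stimulus's change depends on the summed strength of all stimuli concurrently present, exactly the constraint quoted above. The scenario (bell, then bell plus light) is my own toy example of the model's famous "blocking" prediction.

```python
# The Rescorla-Wagner update in miniature: delta_V = alpha * beta * (lam - V_total),
# where V_total sums over ALL stimuli present on the trial.
def rw_trial(strengths, present, lam, alpha=0.3, beta=1.0):
    """One conditioning trial; `lam` is the asymptote set by the outcome."""
    v_total = sum(strengths[s] for s in present)
    for s in present:
        strengths[s] += alpha * beta * (lam - v_total)
    return strengths

V = {"bell": 0.0, "light": 0.0}
for _ in range(20):                     # phase 1: bell alone -> food
    rw_trial(V, present=["bell"], lam=1.0)
for _ in range(20):                     # phase 2: bell + light -> food
    rw_trial(V, present=["bell", "light"], lam=1.0)

print(round(V["bell"], 2), round(V["light"], 2))  # → 1.0 0.0: the bell "blocks" the light
```

Because the bell already predicts the food by phase 2, the prediction error is near zero and the light gains almost no strength - a principled, quantitative constraint on learning, and so a small victory for the "clock" side of Popper's metaphor.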
It appears to me as if Popper and Hume clash not because they disagree about the instability of experience, but because they interpret that instability in opposite directions. Hume, famously, dismantles the notion of a unified or continuous self: "for my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other... I never can catch myself at any time without a perception." The self, for Hume, is not a real, enduring entity but a fiction born from the mind's habit of linking impressions.
Order and identity are mental projections - useful illusions that arise from the smooth and uninterrupted progress of thought. Popper, by contrast, insists that this "smooth progress" points not to illusion but to emergence: the formation of stable, self-organizing systems within a fundamentally open and indeterminate world. Where Hume sees the mind as a theater of passing perceptions, Popper sees it as a living cloud capable of generating its own clocks - systems of internal regulation, reasoning, and self-reference.
Title: What Is It Like to Be a Bat?
Author: Thomas Nagel (1937–Present)
Date: Published 1974
Analyzed Section: The entire article
Thomas Nagel is an American philosopher whose work bridges analytic rigor with humanistic depth. In his influential essay, he argues that consciousness is what makes the mind-body problem uniquely hard, as reductionist analogies from other sciences skip the very thing that needs explaining: what experience is like from the inside.
Nagel posits that an organism is conscious if "there is something that it is like to be that organism." This first-person phenomenal character is the definitive target any account of mind must address. He uses the example of a bat - which perceives via echolocation - to illustrate the limits of human imagination. We can imagine acting like a bat, but we cannot inhabit what it is like for a bat to be a bat. Reductionism gains objectivity by discarding species-specific viewpoints, but in the case of experience, the viewpoint is the essence; moving away from it moves away from the phenomenon itself.
Any discussion of consciousness begins with an uncomfortable fact: we know it only from within. Experience is irreducibly first-person, which makes any claim about its scope inherently uncertain. To call consciousness "widespread," as Nagel does, already assumes that other beings possess inner lives - an assumption we can never prove, only trust. This leap of faith is what keeps solipsism at bay - solipsism being, as it is sometimes expressed, "the view that 'I am the only mind which exists.'" We accept the existence of other minds not because we can verify them, but because without that assumption, even asking the question of consciousness becomes impossible. Nagel does not explicitly acknowledge this in his essay, but the entire discussion of subjective life - from bats to humans to artificial intelligence - rests on that fragile, unprovable premise of shared awareness.
The Limits of Imagination and the Challenge of Empathy
Nagel's claim that we cannot imagine what it's like to be a bat reflects a humility before our cognitive limits, but it may also underestimate the range of human imagination. Our capacity to empathize, simulate, and model other minds exists along a continuum - from crude analogy to conceptual reconstruction. Echolocation, for example, is not alien in kind but in mode; like sight, it translates environmental information into sensory form.
To clarify this, I often use what I call the Magenta Prime Problem. Start with the classic question: "how can I know that your red is the same as my red?" Even among humans, the subjective texture of perception may differ completely, yet we act as though it aligns. Now take a step further and imagine someone with an extra cone cell who perceives an additional color dimension - a hue we shall call "Magenta Prime," beyond our visible spectrum. I cannot see it, but I can grasp its implications: a richer, more complex interplay of color relationships. In the same way that I can conceptually approximate Magenta Prime, I can conceptually approximate echolocation.
I would argue that appreciating another person's "Red", another person's "Magenta Prime", and a bat's echolocation falls along a spectrum; it is not a problem of possibility but a difficulty of extrapolation that grows as the experience diverges further from our own. This gap between function and feeling - the problem of qualia - marks the boundary of empathy. Imagine streaming a bat's sonar directly into a human brain equipped with the neural hardware to process it. The person would gain genuine sensory understanding of echolocation, just as a once-blind individual might, through new vision, learn to navigate light, shape, and beauty. Over time, those perceptions would become natural - although I'll admit it's still uniquely human. To be fully bat-like would require more than gaining new faculties; it would mean losing others - memory, language, symbolic abstraction - dissolving the very self that seeks to empathize.
Panpsychism & Theory of Mind
A growing current in contemporary philosophy and consciousness studies, known as panpsychism, proposes that consciousness is not an emergent byproduct of complexity but a fundamental feature of the universe - present in varying degrees across all entities. This perspective reframes Nagel's puzzle entirely: rather than asking how subjective experience arises from physical matter, it asks how the physical might express the subjective. As philosopher David Chalmers puts it, "one could think of the world as fundamentally consisting in entities bearing microphenomenal properties, connected to each other by the laws of physics... constituting the macrophenomenal realm, just as microphysical structure constitutes the macrophysical realm." If this is true, then the bat's sonar world and our visual world are parallel articulations of the same ontological fabric.
Studies of theory of mind - the ability to infer the mental states of others - suggest that the human brain is uniquely structured to model alternate perspectives. Gallagher and Frith (2003) found that the anterior paracingulate cortex becomes active when a person must "determine an agent’s mental state... and handle simultaneously these two perspectives on the world." This region effectively mediates the act of mental decoupling. From Nagel's perspective, this is the neural echo of the philosophical problem he describes: we can simulate another's beliefs but not their experience - our empathy ends where their qualia begin.
Nagel's insight finds a modern counterpart in Giulio Tononi's Integrated Information Theory (IIT). Tononi writes, "Experience is identical to the system’s conceptual structure—it is the way information feels from the inside." IIT turns Nagel's mystery into a formal principle: what Nagel described as the incommunicable "what-it's-like" becomes, for Tononi, a measurable pattern - the geometry of subjectivity itself.
Peirce gives Nagel's puzzle a sharper edge by showing where cross-species "what-it's-like" talk typically breaks: not at sensation, but at symbols. Icons (resemblance) and indices (physical linkage) can carry some of a bat's sonar into our grasp; but once experience is organized as thirdness - a web of learned, rule-governed symbols - translation depends on a culture of meanings we and bats don't share. That asymmetry explains why we might partly enter sonar with training and prosthetics, while a bat cannot step into a novel human experience such as irony; and it also leaves open the possibility that animals host symbol-like structures we don't even detect. In this sense, Peirce's system-building ambition - "[I intend] to make a philosophy like that of Aristotle... so comprehensive that... the entire work of human reason... shall appear as the filling up of its details." - frames Nagel's problem as one detail in a larger semiotic architecture.
Title: About Behaviorism
Author: B.F. Skinner (1904–1990)
Date: Published 1974
Analyzed Section: Pages 208-221
Burrhus Frederic Skinner was an American psychologist whose work sought to place human behavior on the same empirical footing as physics and biology. Moving away from inward reflection toward external measurement, he championed "Radical Behaviorism," which treats thoughts and feelings not as mysterious mental forces but as collateral products of environmental and biological histories. For Skinner, behavior is shaped and maintained by the environment, not by inner motives or will; thus, the role of science is to uncover these external determinants so that behavior can be predicted and controlled.
B. F. Skinner attempted to reduce human behavior to just two factors: the evolutionary history of the species and the individual's lifetime of environmental conditioning. At first glance, this sounds scientifically thorough, but I find it oddly trivial - rather like saying "rain is just weather plus physics." It is technically true, yet provides no real insight. Of course, behavior arises from biology and environment; the question is how, and here Skinner's formula remains frustratingly hollow. His sweeping statement explains too much and therefore too little, glossing over the nuanced process of behavior in favor of a catch-all determinism.
A Determinism We've Seen Before
Skinner's insistence that behavior is just the result of prior conditions echoes the 19th-century conviction that the universe is a vast, clockwork machine - what Karl Popper described in Of Clouds and Clocks as the delusion that even the messiest, most complex systems were just poorly understood mechanisms. But we now know better. With the rise of quantum mechanics, chaos theory, and the acknowledgment of true uncertainty even in physical systems, the dream of total predictability was shattered.
Skinner's vision of human action as fully programmable through environmental control looks like a psychological version of Laplacian determinism - clean, precise, and doomed. Human behavior, like weather or consciousness, is not a closed mechanical system; it is full of ambiguity, emergence, and subjectivity. When Skinner assures us that we can fully engineer behavior if we just trace the right contingencies, it echoes the same confidence classical physics once had - just before it broke apart under its own weight.
AI & The New Operant Chamber
When I first went looking for how Skinner's ideas might have seeped into the world of artificial intelligence, I didn't expect to find a paper by Kosinski and Zaczek-Chrzanowska titled "Pavlovian, Skinner, and Other Behaviourists' Contributions to AI." Yet here we are. Skinner's insistence that behavior could be analyzed, predicted, and shaped according to environmental contingencies has found its most literal heirs not in psychology departments, but in the algorithms driving reinforcement learning systems.
The logic that once governed pigeons in a box now governs machines in a simulation. As the paper notes, "The essentials of operant conditioning can be summarized in five words - consequences are contingent on behaviour." This single line could almost serve as the credo of modern machine learning. Reinforcement learning systems are, at bottom, operant learners: they act, receive feedback, and adjust future behavior to maximize reward. The authors make the connection explicit: "Operant learning in robot learning is formed in the paradigm of self-supervised learning. Here learning is based on reward (or punishment) resulting from behaviour. This is somehow similar to the operant conditioning of Thorndike and Skinner." The AI revolution, in this sense, is a vindication of Skinner's mechanistic vision: intelligence as the emergent product of reinforcement and feedback, not reflection or consciousness.
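The operant logic the paper describes - act, receive a consequence, adjust - can be sketched in a few lines. Below is a minimal, hypothetical illustration of my own (the function name and parameters are not from the paper): a value-learning agent choosing between two "levers" with different reinforcement probabilities, where only the emitted behavior is ever updated.

```python
import random

def operant_learner(reward_probs, trials=5000, lr=0.1, eps=0.1):
    """A minimal operant learner: act, receive a consequence, adjust.

    reward_probs[a] is the chance that action a is reinforced.
    The value estimates play the role of conditioned response strength.
    """
    values = [0.0] * len(reward_probs)
    for _ in range(trials):
        # Explore occasionally; otherwise emit the most reinforced behavior.
        if random.random() < eps:
            action = random.randrange(len(values))
        else:
            action = max(range(len(values)), key=values.__getitem__)
        reward = 1.0 if random.random() < reward_probs[action] else 0.0
        # Consequences are contingent on behavior: only the emitted
        # action's value moves toward the reward it actually produced.
        values[action] += lr * (reward - values[action])
    return values

random.seed(0)
v = operant_learner([0.2, 0.8])
print(v)  # the more richly reinforced "lever" acquires the higher value
```

Nothing in the loop reflects, represents, or understands; the shaping is done entirely by the contingency between action and consequence, which is exactly the continuity with Skinner that the authors point to.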
Parallels to Buddhist Philosophy
Skinner's move to classify thoughts and feelings as "behaviors caused by the environment" sounds clean and scientific - until the word behavior starts to lose meaning. If thoughts, emotions, and digestion all qualify as behaviors, then what doesn't? As Diller and Lattal note, Skinner's model treats "subjective experience as secondary - important only insofar as it reflects environmental variables." This move keeps his theory comprehensive but hollow; what makes experience experience - its immediacy and interiority - gets flattened into environmental function.
Take meditation as a counterexample. You sit still, intending to count your breaths. The "behavior" is clear: inhale, exhale, count. Then - without any prompt or cause - a thought appears. It might be random, old, surprising, even contradictory to your recent experiences. Skinner might say this is a latent effect of some earlier environmental condition - a delayed echo of past reinforcement. But this explanation feels like an ad hoc stretch; the thought emerged unchosen and unsolicited - more like a cloud appearing in the sky than a behavior performed by an organism.
And here is where Skinner's theory hits a deeper philosophical wall. If thoughts and feelings are behaviors, then what is observing them? Is the act of watching a thought itself a behavior? A meta-behavior? You can think about thinking, but you can't walk a walking or digest a digestion. That is not just a linguistic curiosity - it is the structural core of contemplative practices. The mind can turn on itself; it can see itself. Skinner's framework has no language for this recursive interiority.
Title: The Theory of Affordances
Author: James J. Gibson (1904–1979)
Date: Published 1977
Analyzed Section: Chapter 8
James J. Gibson, an American psychologist, reshaped our understanding of perception by moving away from introspection and behaviorism toward an ecological approach. He argued that perception is not a process of decoding neutral data but of directly sensing "affordances" - the actionable possibilities the environment offers. A flat surface is seen immediately as walkable; meaning is built into the perception itself. These affordances exist in the complementarity between the animal and its environment; a niche is not just a location but a set of affordances, a way of living that implies the animal that inhabits it.
Gibson’s notion of affordances is a fascinating and genuinely radical idea, especially when seen through the lens of Feyerabend’s insight that shifts in language can completely reshape how we think. Gibson’s term "affordance" changes the vocabulary of perception from one of detached observation to one of lived interaction. We move away from the mechanistic view of the mind as a passive receiver of data and toward a vision of life grounded in felt, embodied engagement with the world.
Objects aren’t just neutral things with measurable properties; they are interactive invitations that fit our bodies, skills, and purposes. A chair affords sitting, a path affords walking, a friend affords conversation - all meaningful possibilities perceived directly, not inferred. But Gibson’s framework also invites deeper questions about cause and effect. Suppose I’m standing near a coyote. In that instant, it’s tempting to narrate the event as a chain of mental and physical steps: I saw the coyote, processed it as danger, and decided to step back.
Yet Gibson’s perspective challenges that whole sequence. In ecological terms, perception and action aren’t separate stages but continuous couplings within a single system. The coyote’s presence and my movement away aren’t neatly divided into cause and response - they are mutual adjustments within one field of behavior, more like two magnets repelling each other than a signal being processed. From this view, "decision" is not a detached mental computation but an emergent coordination between organism and environment, shaped by available affordances. The ground affords stepping back, the coyote affords retreat or vigilance, and my movement is simply the pattern that fits those conditions. Prediction becomes less about representing the future internally and more about sensing ongoing possibilities - a kind of direct attunement to the flow of change.
Engineering & Design
Modern engineering design continues to draw deeply on Gibson’s insight that perception and action are inseparable. His theory becomes, in product design, a theory of how artifacts and users mutually define one another. As Masoudi and colleagues explain, "the Ecological approach includes the direct perception of affordances for the user along with a consideration of the users’ biomechanics." Good design mirrors Gibson’s ecological realism: objects should reveal what they can do directly to the senses, without demanding cognitive interpretation. A door handle should invite pulling; a control panel should show its use.
However, as the paper points out, much of this ecological nuance was lost as the concept migrated into design theory. Affordance-Based Design (ABD) attempts to reintroduce Gibson’s concern for human-environment fit - what the authors call "the extent of human-environment fit in design based on affordance." They note that designers should aim to "make the affordances perceptible as directly as possible, hence removing the need for adding signifiers and other information." This is precisely Gibson’s vision translated into engineering language: the artifact and the user should form a single perceptual system where usability flows from physical structure, not symbolic mediation.
Dewey's nod of approval
Both John Dewey and James J. Gibson dismantle the idea that perception is a linear chain of input and output. Dewey, in The Reflex Arc Concept in Psychology, criticizes the notion that sensation leads to thought which then leads to action, calling it a "patchwork of disjointed parts." Instead, he argues for a continuous circuit of activity. "The so-called response," he writes, "is not merely to the stimulus; it is into it."
Gibson’s theory of affordances makes a parallel move: rather than treating perception as internal representation, he sees it as direct engagement with the environment’s possibilities for action. For both thinkers, the mind does not stand over against the world; it is woven into the same circuit of movement, perception, and consequence.
Seen this way, Dewey’s child and candle example translates seamlessly into Gibson’s ecological language. The child’s act begins not with a sensation of light but with an ongoing coordination of looking-reaching. The candle, in that moment, affords illumination, grasping, and warmth - possibilities that shift as the body moves. When the hand nears the flame, a new affordance enters the field: burning. The withdrawal that follows is not a new response tacked on to a stimulus, but a transformation of the same perceptual system - the body now tuned to a different set of affordances, where the flame affords danger and the air around the flame affords escape. The seeing, the reaching, the burning, and the retreat all belong to a single continuous act in which perception reorganizes itself around changing opportunities.
Title: Minds, Brains, and Programs
Author: John Searle (1932–2025)
Date: Published 1980
Analyzed Section: Up to the replies
John Searle was an American philosopher who transformed discussions of language, mind, and artificial intelligence. In his famous critique of "Strong AI," he argued that merely instantiating a computer program is never sufficient for intentionality. Using the "Chinese Room" thought experiment, he demonstrated that one can follow formal rules (syntax) perfectly without understanding the meaning (semantics) behind them. For Searle, intentionality is a biological phenomenon produced by the unique "causal features" of the brain, meaning that any machine claiming to think must duplicate these specific causal powers, not just simulate the program.
His writing was persuasive and a genuine pleasure to read. The Chinese Room argument, in particular, is lucid, tightly reasoned, and remarkably difficult to dismantle. Still, it often feels as though Searle hides behind this idea of "causal power," treating it as something necessarily different in essence from a program. The term has an almost mystical ring to it - a kind of modern counterfeit of how the Greeks once used the word soul.
His axioms, for all their confidence, seem to lack grounding. He makes the claim: if a program is insufficient for understanding, then a brain cannot simply be running a program. But Searle sets an extraordinarily high bar for what counts as "understanding," raising it from the biological and physical - the gene-driven - into the cognitive, abstract, and cultural. In doing so, he is comparing the apex of millions of years of evolution to programs that, at the time, were still in their infancy.
The Evolutionary Ladder & The Turing Test for Worms
Yet to lose this "causal power," you only need to go back a few hundred thousand years - before Homo sapiens emerged. Across the vast spectrum of nervous systems - from something as simple as a spider, a jellyfish, or, better yet, C. elegans with its precisely 302 neurons, to something as complex as the human brain - it seems that only one, ours, would meet Searle's definition of "causal power."
Of course, I initially assumed we could all agree that no matter how elegant C. elegans may be, it cannot comprehend English or, like Searle, Chinese. However, my professor challenges this, noting that there may be a way to convert the latent representations of English into chemical cues that would enable C. elegans to pass an English Turing Test; we just don't know yet. But even if we assume it lacks that high-level comprehension, does it lack initiative? Does it lack an understanding of its own social system or environmental navigation in the same way an adder or an automatic door does? If the door were carbon-based, built of proteins and DNA, would Searle be more satisfied?
Suffice it to say, Searle's definition of "understanding" is set so high that it draws the cutoff line squarely at the human brain. Everything below it - every other nervous system on the evolutionary ladder - gets tossed out. And that is my critical point: if a variation of the same type (a brain) can fail to show "causal power," there's no reason to believe that a variation of a program cannot eventually achieve it. Claiming otherwise is like saying the human brain can't understand English simply because spiders (which also have brains) can't interpret Shakespeare. Just because a simple instance of the substrate fails the test doesn't mean the substrate itself is incapable.
Searle's idea of causal powers - the claim that consciousness arises only from the brain's unique physical capacities - finds both support and reinterpretation in Seth and Bayne's review of contemporary theories. Integrated Information Theory (IIT), for instance, "proposes that consciousness should be understood in terms of the 'cause-effect power' associated with irreducible maxima of integrated information generated by a physical system." This echoes Searle's intuition that the substrate's causal structure matters but transforms it from a metaphysical notion into something measurable.
By contrast, Global Workspace Theory (GWT) offers a framework that resonates with Searle's emphasis on causal accessibility while remaining grounded in empirical neuroscience. As Seth and Bayne describe, "[GWT] proposes that conscious mental states are those which are 'globally available' to a wide range of cognitive processes including attention, evaluation, memory and verbal report." This focus on the broadcasting of information presents a concrete, testable analogue to Searle's causal powers - one that does not privilege biology per se but the system's capacity to integrate and distribute meaning.
Plato 2.0
Searle's notion of "causal powers" plays a similar rhetorical role to Plato's soul: it marks the bright line between mere mechanism and genuine understanding. In Phaedo, Socrates insists that the body "confuses the soul, and doesn't allow it to gain truth and wisdom when in partnership with it," drawing a dramatic boundary between what simply moves and what truly knows.
Searle uses a similar boundary in his rhetoric - between syntax (programs) and the brain's intrinsic causal organization - but of course the ontology diverges: Plato's soul is immaterial and separable; Searle's "causal powers" are biological through and through. Still, the family resemblance is hard to miss. Each concept guards the gate of understanding: for Plato, only the soul gains unclouded access to truth; for Searle, only systems with the right causal endowment cross from symbol-shuffling to semantics.
I don't appreciate Searle boiling every plausible account of understanding down to whether a system has the requisite causal powers, denying them to one type (programs as such), and then gliding from "some current instances lack them" to "no system of that type could ever have them." The result is a kind of secularized sacred category: "causal powers" can function like a modern soul - philosophically serious, yet rhetorically protective - unless we say, concretely, what those powers are and admit, in principle, that differently built systems might realize them. Otherwise we risk treating biology as destiny and turning a live empirical question - what organization yields understanding? - into a riddle.
Title: Autopoiesis and Cognition: The Realization of the Living
Authors: Humberto Maturana (1928–2021) and Francisco Varela (1946–2001)
Date: Published 1980 (English translation)
Analyzed Section: Pages 63-72 and 73-84
Humberto Maturana and Francisco Varela emerged from a Chile that was intellectually vibrant, politically volatile, and deeply connected to the global rise of cybernetics. Maturana, a neurobiologist, and Varela, his student, merged physiology and philosophy to propose that a system is living if it is a physical autopoietic machine - a network of processes that produces the very components that realize that same network, thereby constituting the system as a unity.
They distinguish between organization (the relational pattern that defines the unity) and structure (the particular materials realizing it). Crucially, they argue against defining life by property lists like reproduction or evolution - an error that reduces the definition of life to a checklist of criteria.
Throughout my reading, I found my definition of life shifting from "life as something that does things" to "life as the continuous production of the organization that makes doing possible." A cell is not alive because it maintains itself; it is alive because it maintains the maintenance. That recursive folding - making the parts that make the parts - finally clicked for me as the essence of autopoiesis. Tension emerges in edge cases such as the virus, which hovers at that boundary - nearly capable of self-production but borrowing another's machinery to achieve it.
Stafford Beer's insight that institutions can exhibit autopoiesis felt very appropriate. If a cell is a living unity, then why not a university? A city? A corporation? Each of these maintains its own organization through networks of production - of people, policies, processes - that regenerate the very structures that sustain them. They survive by producing the conditions of their own persistence, even as their components - people, buildings, technologies - are constantly replaced. In that sense, they live.
This recognition unsettles the moral ground beneath it. If life is simply autopoiesis, then we do not, and perhaps cannot, treat all life as morally equivalent. We kill living systems constantly - swatting insects, mowing grass, clearing forests, even digesting salads. Autopoiesis may be sufficient for life, but not sufficient for moral standing.
The same tension appears in discussions about artificial intelligence. We ask whether AI systems "understand," are "conscious," or "alive." But those questions often disguise a deeper one: how close are they to us? We know a Language Model might "understand" language better than a monkey, yet we hesitate to call that understanding "real." We suspect it might be alive in an organizational sense, but not alive enough to matter.
The Game of Life
John Conway's Game of Life is a cellular automaton where each cell follows simple local rules. From this minimal logic, surprisingly lifelike behaviors emerge - oscillators, "gliders," and colonies. In Resilient Life: An Exploration of Perturbed Autopoietic Patterns in Conway's Game of Life, Cika et al. (2020) argue that "complex systems can exhibit autopoiesis—a remarkable capability to reproduce or restore themselves to maintain existence and functionality. We explore the resilience of autopoietic patterns–their ability to recover from shocks or perturbations–in a simplified form in Conway’s Game of Life."
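The "simple local rules" are simple enough to state completely in code. Here is a minimal sketch of the standard B3/S23 rules (birth on exactly three live neighbors, survival on two or three), using a glider - the best-known self-propagating pattern - as the example:

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Game of Life (B3/S23 rules).

    alive is a set of (x, y) coordinates of live cells.
    """
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in alive
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A glider translates itself one cell diagonally every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
print(sorted(state))  # the same five-cell shape, shifted by (1, 1)
```

The glider "maintains itself" only in the loosest sense: the pattern persists, but every rule that sustains it lives outside the pattern, in the `life_step` function - which is precisely the allopoietic point made below.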
In this sense, the Game of Life visually captures the struggle of living systems to maintain their organization under perturbation. However, the resemblance is only surface-deep. The Game of Life shows how rules can simulate the appearance of autopoiesis, but not the inward, self-producing closure that defines it. The distinction underscores what makes actual living systems unique: they are not maintained by an external algorithm (allopoietic), but by their own continuous production of the conditions for their existence.
Seen in this light, Hume's "bundle" description stops feeling like a threat to the self and starts looking like its working principle. If living systems are not defined by their stuff but by the organization that reproduces the organization, then perhaps personal identity is likewise not a hidden pearl inside experience but the recursively sustained pattern among experiences.
Hume already hints at this when he says, "I never can catch myself at any time without a perception, and never can observe any thing but the perception," and that "identity is nothing really belonging to these different perceptions but is merely a quality, which we attribute to them, because of the union of their ideas in the imagination." On the autopoietic reading, that "union" is not a metaphysical substance; it's the ongoing work of a system that generates the very capacities (memory, inference, narrative, concern) that keep generating it. This also explains the odd sensation of grasping at thin air when we look for a solid "me": the search keeps missing the organizing pattern that maintains the patterns.
Title: Vehicles: Experiments in Synthetic Psychology
Author: Valentino Braitenberg (1926–2011)
Date: Published 1984
Analyzed Section: Till end of Chapter 14
Valentino Braitenberg, a cybernetician and neuroanatomist born in Bolzano, sought to understand how brain structures constitute a machine through a "hands-on, almost tactile approach." In Vehicles, he employs "synthetic psychology" or "downhill invention," positing that it is easier to build simple mechanisms and observe their emergent behavior than to analyze complex behavior "uphill" to deduce the mechanism. He demonstrates how complex descriptors like "fear," "love," and "aggression" can emerge from trivial changes in wiring.
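The "trivial changes in wiring" behind "fear" and "aggression" are easy to make concrete. The sketch below is my own minimal rendering of Braitenberg's Vehicles 2a and 2b (the function name and numbers are illustrative, not from the book): two light sensors each drive one wheel, and the only design decision is whether the connections are crossed.

```python
def steering(left_sensor, right_sensor, crossed):
    """Steering tendency of a two-sensor, two-motor Braitenberg vehicle.

    Each wheel's speed equals the excitation of the sensor wired to it;
    the faster wheel swings the vehicle toward the slower side.
    Returns > 0 for a right turn, < 0 for a left turn.
    """
    if crossed:   # Vehicle 2b ("aggression"): sensors drive opposite wheels
        left_wheel, right_wheel = right_sensor, left_sensor
    else:         # Vehicle 2a ("fear"): sensors drive same-side wheels
        left_wheel, right_wheel = left_sensor, right_sensor
    return left_wheel - right_wheel

# A light to the vehicle's right excites the right sensor more strongly.
left, right = 0.2, 0.9
print(steering(left, right, crossed=False))  # negative: turns left, fleeing the light
print(steering(left, right, crossed=True))   # positive: turns right, charging the light
```

One boolean separates a creature that "fears" the light from one that "attacks" it - the psychological vocabulary is entirely in the observer, not in the mechanism.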
Valentino Braitenberg's law of uphill analysis and downhill invention feels uncannily predictive of the challenges facing modern neuroscience and artificial intelligence. We see this principle at play in contemporary neural networks: we can build systems that reliably categorize images or generate plausible text, but explaining how they do so requires immense effort. When we crack open the hood, we find not a clean, logical process but seemingly jumbled layers; beyond parsing out a few simple functions, the complexity quickly exceeds what we can actively conceptualize.
This suggests a striking possibility for the study of the brain: we may approach a point where we have the tools for accurate, pragmatic engineering - knowing which regions to alter for "better linguistic processing" or even to "improve charisma" - long before we have a holistic framework of "how it works." This raises the question: if the brain's full complexity is beyond our conceptual grasp, what is the point of asking "how the brain works" outside of these pragmatic outcomes, if not for existential meaning?
Optimization & The Limits of Vehicle 14
This mechanistic framework is most convincing in its final model, Vehicle 14, whose free will runs on a principle of optimism. This model, where behavior is guided by an evaluator seeking the most pleasing outcome, seems to be at the core of most modern optimization. It perfectly explains how our current environment - from ultra-processed food to algorithms competing for attention - can exploit our lower levers by optimizing directly for our base-level reward circuits.
However, this is also where the model's limits become clear. It does not account for the uniquely human capacity to override those base pleasures, such as the "pleasure found in the pain of a workout" or the "discipline of a fast." We stopped at Vehicle 14, a creature still fundamentally tethered to its simple Darwinian evaluator. A truly human-level intelligence would require one wired not just to obey this evaluator, but to consciously find value in overriding it. Perhaps Braitenberg would have gotten into that in Vehicle 15.
A powerful modern realization of Braitenberg's "impersonal engineer" can be found in the 1994 paper "Evolved Virtual Creatures" by Karl Sims. Sims created a 3D simulated world where virtual creatures were automatically generated using genetic algorithms, evolving both morphology and neural systems together from a random primordial soup. By defining a fitness evaluation for behaviors like swimming or walking, the system used Darwinian selection to breed generations of creatures.
The result was the emergence of "successful and interesting locomotion strategies... some of which would be difficult to invent or build by design." This paper is a computational implementation of Braitenberg's Vehicle 6. Where Braitenberg imagined copyists and unfit vehicles falling off the table, Sims built that digital table. This work is a perfect example of downhill invention, as Sims did not design a walker but rather a world that evolved one. The paper also powerfully confirms Braitenberg's law of uphill analysis; the resulting evolved artificial brains were often incomprehensible. As Sims himself concludes, "a control system that someday actually generates 'intelligent' behavior might tend to be a complex mess beyond our understanding."
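The Darwinian selection loop at the heart of Sims's system can be sketched in miniature. This is not his creature-evolving code - his genomes encoded morphologies and neural circuits - but a bare-bones genetic algorithm of my own (all names and parameters are illustrative) that shows the "downhill" stance: the experimenter defines only a fitness measure, and the solution is discovered rather than designed.

```python
import random

def evolve(fitness, genome_len=20, pop=50, gens=60, mut=0.02):
    """A bare-bones genetic algorithm: selection, crossover, mutation."""
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop)]
    for _ in range(gens):
        # Selection: the fitter half survives to reproduce.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]
        children = []
        while len(children) < pop - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)                  # crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mut) for g in child]   # mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

random.seed(1)
# Stand-in for "walking speed": fitness is simply the number of 1-bits.
best = evolve(fitness=sum)
print(sum(best))
```

Swap the toy bit-count fitness for "distance swum in a physics simulation" and you have, in outline, Sims's digital table: unfit genomes simply fail to reproduce and fall off.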
Wiener
Braitenberg's Vehicles can be read as a direct, synthetic proof of the central arguments Norbert Wiener makes in Cybernetics. Wiener draws a sharp distinction between a formal logical machine and the nervous or mechanical brain, noting that many psychological states do not conform to the canons of logic. Braitenberg's entire project is a downhill invention that explores this very non-logical world of behavior, demonstrating how complex psychology emerges from mechanisms far simpler, yet far richer than formal logic.
The deepest connection lies in the mechanism both authors propose for learning. Wiener dismisses the clean blackboard model of the mind and instead points to the conditioned reflex as a learning mechanism, arguing the key is how this reflex is reinforced. Wiener theorizes, "All that is needed is that the inducements or punishments used have, respectively, a positive and a negative affective tone." This affective tone - a feedback mechanism of pain and pleasure - is precisely what Braitenberg builds. His simple Vehicles 2 and 3 ("fear," "love") are pre-wired reflexes. His most advanced Vehicles (13 and 14) are learning mechanisms that depend on a Darwinian evaluator to provide this positive and negative affective tone. Braitenberg, in essence, built the Vehicles that run on the very affective feedback loops Wiener identified as the true alternative to a cold, logical machine.