Abstract
This paper formalizes biological intelligence as search efficiency in multi-scale problem spaces, aiming to resolve epistemic deadlocks in the basal “cognition wars” unfolding in the Diverse Intelligence research program. It extends classical work on symbolic problem-solving to define a novel problem space lexicon and search efficiency metric. Construed as an operationalization of intelligence, this metric is the decimal logarithm of the ratio between the cost of a random walk and that of a biological agent. Thus, the search efficiency measures how many orders of magnitude of dissipative work an agentic policy saves relative to a maximal-entropy search strategy. Empirical models for amoeboid chemotaxis and barium-induced planarian head regeneration show that, under conservative (i.e., intelligence-underestimating) assumptions, even ‘simple’ organisms are from two-hundred- to sextillion-fold more efficient in problem space exploration. In this sense, the deep insights of neuroscience are not about neurons per se, but about the policies and patterns of physics and mathematics that function as a kind of “cognitive glue” binding parts toward higher levels of collective intelligence in wholes of highly diverse composition and origin. Therefore, our synthesis argues that the “mark of the cognitive” is perhaps better sought in the measurable efficiency with which living systems, from single cells to complex organisms, traverse energy and information gradients to tame combinatorial explosions, one problem space at a time.
1 Cognition all the way down
In the past two decades, cognitive science has increasingly expanded its scope beyond the zoocentric, brain-bound processes of humans and higher animals toward a much broader range of life phenomena traditionally falling outside “the mark of the cognitive” (Adams and Garrison, 2013). A leading edge of this expansion is the basal cognition research program (Manicka and Levin, 2019; Levin et al., 2021; Lyon et al., 2021; Lyon and Cheng, 2023; Fábregas-Tejeda and Sims, 2025), a subset of the field of Diverse Intelligence (Levin, 2022, 2023a, 2025; Pio-Lopez et al., 2022; Clawson and Levin, 2023; Lagasse and Levin, 2023; Watson and Levin, 2023; McMillen and Levin, 2024), specifically focused on the evolutionary history of cognition and how it scaled from primitive versions to more complex ones. A wide variety of non-brainy systems have now been shown to exhibit learning, decision-making, and other competencies normally studied by cognitive and behavioral science (Baluška and Levin, 2016; Katz et al., 2018; Vallverdú et al., 2018; Gershman et al., 2021; Katz and Fontana, 2022; Kaygisiz and Ulijn, 2025). This “cognitive biology 2.0” (Lyon, 2025) paradigm casts cognition as a bio-functional continuum that might begin even earlier than single cells, let alone brains—e.g., molecular networks or materials (Fig. 1) (Bose, 1902; Power et al., 2015; Katz et al., 2018; Biswas et al., 2021, 2022; Katz and Fontana, 2022)—and is a “biological necessity” for all life forms (Shapiro, 2021; Lane, 2022; Lyon and Cheng, 2023).
Memory in molecular pathways. Chemical signaling pathways, such as gene-regulatory networks (GRNs), can be treated as generic agents amenable to the analytic tools of behavioral science. In networks modelled by either Boolean logic or continuous ordinary differential equations, sequential stimulation of specific input nodes produces plastic changes in the activity of distal output nodes that recapitulate canonical forms of learning, including habituation, sensitization, and even classical (Pavlovian) conditioning (Pigozzi et al., 2025). Top row, schematic of associative learning in the dog–bell paradigm: presentation of the conditioned stimulus (CS, bell) alone elicits no salivation; pairing of the CS with the unconditioned stimulus (UCS, steak) elicits salivation; after training, the CS alone evokes the conditioned response (R, salivation). Bottom row, equivalent behavior in an in silico GRN: activation of an input node encoding the CS is initially uncorrelated with the activity of an output node (R); concomitant activation of a separate UCS node induces plastic changes that strengthen the causal linkage between CS and R, such that subsequent stimulation of the CS node alone triggers a robust response in the R node. Node size reflects connectivity (degree), and edge thickness reflects interaction strength. Grey arrows indicate the temporal sequence of events.
Therefore, cognition requires a phylogenetically deep, bottom-up rather than an a priori-armchaired methodological approach (Lyon et al., 2021). In particular, it has been suggested that a critical aspect of a comprehensive picture is the understanding of coexistence (cooperation, competition, etc.) of the many agents, of diverse spatio-temporal scales and competencies, that exist at different levels of organization in the body, which has been formalized via the multi-scale competency architecture (Fields and Levin, 2022) (Fig. 2) and the concept of polycomputing (Bongard and Levin, 2023).
Multi-scale competency architecture. Biological systems occur as nested layers of scale and organizational complexity, ranging from subcellular molecular pathways to swarms of organisms in ecosystems. Uniquely, living systems exhibit active agency (in the cybernetic sense (Rosenblueth et al., 1943)) at each scale, with multiple subsystems exhibiting memory and some degree of competency at navigating a wide variety of problem spaces with specific future-oriented agendas. The multiscale competency architecture is evinced by each level deforming the option space for its parts (e.g., through hacking them via behavior-shaping stimuli), resulting in activity that serves system-level agendas in new problem spaces of which the parts may have no knowledge. Images by Jeremy Guay of Peregrine Creative, except for the planarian morphospace panel, which is by Alexis Pietak.
But what is cognition under this view? According to a “phyletically neutral” operational definition (Lyon, 2020, p. 416): “Cognition is comprised of sensory and other information-processing mechanisms an organism has for becoming familiar with, valuing, and interacting productively with features of its environment in order to meet existential needs, the most basic of which are survival/persistence, growth/thriving, and reproduction.” This emphasizes the basic, early phases of cognitive development; however, more advanced capabilities (but still already present at the multicellular tissue level) involve identification of new problems to solve and new spaces to project into, balancing surprise minimization (active inference) and creative exploration (infotaxis), as well as drives towards metamorphosis (not merely persistence of status quo, but growth and change) (Levin, 2024). Lyon et al. (2021) synthesize these capacities into a minimalist “toolkit” (Table 1 in Lyon et al. (2021)) and map them onto widely evolutionarily conserved biological intra- and inter-cellular processes (Table 2 in Lyon et al. (2021)), arguing, based on recent evidence (Lyon, 2015; Prindle et al., 2015; Sourjik and Vorholt, 2015; Yang et al.), that these mechanisms long pre-date neurons and scale hierarchically from intracellular to organismal levels. Apropos consciousness (Ellia and Chis-Ciure, 2022; Seth and Bayne, 2022; Chis-Ciure et al., 2024; Chis-Ciure, 2025; Zheng et al., 2025), we do not make any claims here but, for now, merely note the following. Body tissues outside of the left hemisphere do not have the benefit of eloquent language with which to convince novice observers like us of the presence of an inner perspective in an unconventional embodiment.
However, most—all?—of the molecular mechanisms, behaviors, and information-processing dynamics (such as metrics of causal emergence) that are found in brains and widely used to underpin charitable assessments of the problem of Other Minds, are found elsewhere in the body (Pezzulo and Levin, 2015; Varley et al., 2024; Blackiston et al., 2025; Pigozzi et al., 2025). To whatever extent consciousness tracks these measurable features, the possibility of its presence should be taken seriously in many contexts outside of brains.
To maintain a connection with state-of-the-art empirical results, we have focused on problem-solving competencies. In the What is cognition? symposium by Bayne et al. (2019), Nicola Clayton makes a distinction between flexible problem-solving that can be transferred to new contexts, heuristic rules (core knowledge), and associative learning mechanisms. Examples of flexible use of molecular mechanisms in new morphogenetic contexts have been reported: e.g., the use of cytoskeletal bending to create structures out of one giant cell, instead of the normal multicellular mechanisms, when cell size is artificially drastically increased. However, we urge caution and flexibility in mapping competencies from other embodiments and other problem spaces onto familiar concepts in behavior science. If cognition is to be useful outside the N = 1 example of brainy life on Earth, we need to be prepared for plasticity in our conventional definitions of cognition. Certainly, one needs some guardrails for the concept to have any meaning, but unless “Cognition” is to mean “Whatever brains do here on Earth” ex cathedra, we need to have some capacity for interesting new features and properties that differ in their details from how it is implemented in neuron-based navigation of 3D space.
Zooming in on one incarnation of this research program, Fields and Levin (2022) sought to generalize these ideas beyond their canonical medium so that they could apply to multiple levels of organization within living systems. They proposed a view that addresses the specific-scale-transcending, compositional aspect of biological cognition (Levin, 2019). Their core idea is that competency in navigating arbitrary problem spaces is a scale-free invariant for analyzing cognition and agency across diverse biological (and synthetic) embodiments. According to them, biological agents at every organizational level traverse multiple, observer-defined problem spaces: transcriptional attractor landscapes, physiological homeostatic manifolds, anatomical morphospaces, 3D behavioral spaces, or informational domains underpinning symbolic manipulations and social interactions (Fig. 1). Evolution, they argue, has co-opted and generalized problem-solving heuristics—formalized as variational free-energy minimization (Friston, 2010, 2019; Fields et al., 2022; Friston et al., 2023; Fields, 2024)—to optimize trade-offs between heterogeneous data and goals to converge on context-sensitive, adaptive policies across networks, collectives and communities (see also earlier work by Friston et al. (2015) and Pezzulo and Levin (2016) for modeling along these lines) (Fig. 3).
Diverse spaces for navigational intelligence. Human observers are primed to notice intelligent behavior of medium-sized objects moving at medium speeds through 3-dimensional space. But biology was exhibiting navigation of problem spaces long before muscle (and the nerve needed to operate it) came on the scene. Molecular circuits, cells, tissues, and organs navigate transcriptional, metabolic, and anatomical morphospaces, performing perception-decision-action loops to achieve adaptive goal states. Panels in the top row on the right are from the video “Crows are being trained to pick up cigarette butts and clean cities,” produced by Nameless Network, and, respectively, a design by Ruben van der Vleuten and Bob Spikman for Crowded Cities, 2017. Panels in the bottom row taken with permission from references Marder and Goaillard (2006), Huang et al. (2009), Cervera et al. (2021), respectively.
In this model, navigation of a problem space by a system is taken to instantiate intelligence in the sense of William James (1995): some degree of competency in reaching the same goal (state) by diverse means when circumstances change. Numerous examples have been published of invariant morphogenesis despite radical deformations (Pezzulo and Levin, 2015; Levin, 2023b), transcriptional and physiological adaptation to knock-down of important components (Emmons-Bell et al., 2019), behavioral robustness in the face of drastic sensory-motor reconfiguration (Blackiston et al., 2025), and cellular connections adapting via novel routes (Little et al., 2009). These are all examples of “flexibility,” as per James’ emphasis on multiple paths toward a specific (or generalized) goal, which are even more impressive than the ubiquitous ability of “knowing when to stop,” such as the error minimization competencies of organ regeneration in amphibia (Pezzulo and Levin, 2016). In turn, the scope of a system’s goals is taken to define the collective intelligence (Levin, 2019), because it serves as a binding model that orchestrates the parts to act coherently. The scale of the goal state that the system is able to reliably achieve, despite various impediments from the external environment and even perturbations of its own parts, is defined as the system’s “cognitive light cone.”
From a broader perspective, this line of thinking about intelligence mirrors Shadlen’s when he says that “a precise definition of ‘cognitive’ is less essential than the recognition of its elemental features: flexibility, contingency, and freedom from immediacy” (Bayne et al., 2019, p. R612). While ‘cognition’ is arguably a more diffuse concept that includes intelligence, all the intelligence-involving features the quote mentions are instantiated in the problem-solving and time-shifting elements of, e.g., morphogenetic decision-making. The main contribution of this paper concerns intelligence and covers both cellular chemotaxis and morphogenesis, as problem-solving behavior is now experimentally tractable, practically applicable to unconventional agents, and more conducive to formalizations. Nevertheless, current evidence and our and others’ analytic results license extrapolations about cognition generaliter: a strategy one might characterize as ‘the proof of cognition is in the problem-solving pudding.’
2 Not so fast?
Zooming out to the dialectical setting of basal cognition within the Diverse Intelligence program, Lyon and Cheng (2023) argue that the historical tether between cognition and nervous-system complexity is heir to Lamarck’s dictum and was amplified by twentieth-century cognitivism. Hence, that tether has become indefensible in the 21st century’s intellectual environment, and a “shift in cognitive gravity” away from brains and toward the cellular architectures that preceded them is indispensable. Nevertheless, not everyone is ready to pivot their cognitive gravity toward a basal cognition-style approach to all-things-minded, and some have entrenched sceptical positions in the “cognition wars” (Adams and Aizawa, 2010; Adams, 2018; Loy et al., 2021; Figdor, 2022, 2024; Fábregas-Tejeda and Sims, 2025).
On the conceptual side, Adams (2018) charges the proponents of cognition in unconventional systems with equivocating on terms like “learning,” “memory,” or “decision-making,” and with relying on a terminological loosening or metaphorical extension of such concepts rather than demonstrating genuine cognitive processes as traditionally understood. While cells and even plants exhibit adaptive, information-driven behavior, cognition in the ‘thick’ sense involves representations possessing: (i) intentionality, which is the capacity to represent objects or states of affairs; (ii) intensionality, that is, the further capacity to represent them under specific aspects, allowing for different cognitive attitudes towards extensionally identical referents (Adams, 2018, p. 23); and (iii) the possibility of misrepresentation, namely, the fact that internal states, qua representations, can be false or fail to accurately map onto the world (Dretske, 1986; Fodor, 2002).
However, one could argue that these features are present, in basal form, in morphogenetic examples of intelligence (Levin, 2023c, d, 2025; McMillen and Levin, 2024). For example, representations of counterfactual states are seen in planarian flatworms in which a stable bioelectric pattern indicates the future number of heads to make if the animal gets injured (Levin et al., 2019). In other words, the number of heads that cells should grow upon injury is determined by a re-writable physiological pattern memory, and the state of that memory encodes not the current number of heads (which can differ) but a stored, decodable representation of a “correct” planarian that serves as a guide for regenerative growth, remodeling, and cessation of activity once the represented goal state is achieved. Moreover, symbolic interpretation of signs, i.e., semiosis (Salthe, 1998; Barbieri, 2008; Brier, 2008; Turner, 2016), is seen in the arbitrary nature of bioelectric organ prepatterns, which are sparse signals that do not directly encode the myriad forces needed to implement the anatomical outcome but serve that function only because the cell collective interprets these arbitrary patterns with mutually agreed-upon meanings (Levin and Martyniuk, 2018). And, much as other collective intelligences like ant colonies fall for visual illusions (Sakiyama and Gunji, 2016), morphogenesis can likewise exhibit errors of perception of pattern memory and stimuli, as well as errors of inference, which lead to abnormal outcomes (Pezzulo, 2020; Pezzulo et al., 2021; Pio-Lopez et al., 2022).
From a different angle, Figdor (2022) criticizes the program’s “freewheeling use of functional ascriptions,” which neglects the evolutionary individuation of biological characters. The argument, grounded in Character-Species Separation (CSS) and Character-Phenotype Separation (CPS) principles, posits that cognitive functions must co-evolve with their substrate-dependent biological realisers. Through this move, it calls into question the functionalist assumption explicitly endorsed by Levin et al. (2021) that cognitive roles can be unparsimoniously ascribed across clades because it erases lineage-specific histories (CSS) and divorces functions from the phenotypical realisers that individuate them (CPS).
On this point, it bears stressing that the view of Levin (2019, 2022) derives from the extension of the Problem of Other Minds to all systems, not just human brains. In other words, possible cognitive states in unconventional agents are epistemically latent under an inferential veil. Observers such as researchers, conspecifics, parasites, etc., must abductively infer and formalize their putative goals and problem structures by reverse-engineering problem-solving trajectories from observed data (Rouleau and Levin, 2023). This means that cognitive assessments of any system should be considered as claims about the efficacy of specific behavioral interaction protocols (sets of tools, from cybernetics to psychoanalysis), which are to be established empirically. These are taken to be not unique ground truth but observer-relative, consistent with Dennett’s Intentional Stance (Dennett, 1998) and the polycomputing framework in which multiple observers can usefully interpret the same physical events in different ways (Bongard and Levin, 2023). Furthermore, at the research bench, it means that any ascription of cognitive terms to a system, or the softening of boundaries of ancient linguistic categories, must not be free-wheeling or poetic, but rather licensed by its demonstrated utility in driving novel discoveries and enabling new empirical capabilities—in a nutshell, by improved fertility for new research as compared to conventional formalisms.
On the empirical side, in a comprehensive review of 20th-century and recent evidence, Loy et al. (2021) argue that, despite abounding Pavlovian-style rigorous experiments, associative learning, a paradigm central to understanding cognition, demonstrates clear limitations and at least partial lack of replicability when applied to unicellular organisms like E. coli (see also Dussutour (2021)), or protists like Paramecium aurelia or Physarum polycephalum.
On the one hand, we respond to this by agreeing that experimentally probing claims of intelligence in unconventional systems is fraught with difficulties and, in many ways, constitutes an IQ test for the observer (Levin, 2023a). Still, the extensive references to empirical results we provided so far, and the formal results we take up next, license, in our view, optimism about the prospects of this research field, one that can only benefit from course corrections such as those provided by Loy et al. (2021). On the other hand, we think Chittka is on the right track when saying: “There is, however, no clear demarcation between sub-cognitive processes – for example, non-associative learning such as habituation, or classical conditioning – and cognitive operations. Nor is it clear that the former evolved first and the latter were added sequentially over evolutionary time according to complexity. The same neural circuits that mediate ‘simple’ associative learning can also underpin basic rule learning and non-trivial logical operations such as the XOR problem” (Bayne et al., 2019, p. R610). If, empirically, the divide between the sub-cognitive and the cognitive is arguably porous, the most promising stance is the one that leads to more breakthroughs and that, in our view, is flexibility or deflationism (Allen, 2017) about definitions rather than a priori entrenching.
Taking a step back, the basal cognition wars seem to rehearse epistemic deadlocks familiar from other cognitive science debates (Piredda, 2017; Harrison et al., 2022; Facchin, 2023; Fábregas-Tejeda and Sims, 2025). Thus, while proponents point to context-sensitive, adaptive capacities across evolutionarily distant lineages that allegedly warrant cognitive function attribution, sceptics caution against terminological dilution, data misinterpretation, and the misapplication of concepts with semantic parameters well-defined only for more complex, nervous-system-endowed metazoans. This deadlock stems partially from ambiguity: the grain of the ‘atomic’ unit of cognition diverges across “disciplinary silos” (Lyon et al., 2021, p. 3) and lacks systematic formalization beyond broad operational definitions (cf. Lyon (2020)) and initial mathematization attempts (cf. Fields and Levin (2022)). We concur that the problem is both methodological and conceptual: How does one operationalize and measure cognition across radically different embodiments and scales without begging the question or straining analogies?
In our view, this theoretical cul-de-sac could be partially resolved via more precise, operationalizable, and scalable frameworks that retain a meaningful sense of thickness for a bio-cosmopolitan concept of cognition capable of guiding ongoing and future empirical efforts (Levin and Dennett, 2020). Moreover, we think it is important to hold open the possibility that our existing criteria for specific cognitive phenomena (e.g., precise definitions of Pavlovian conditioning, habituation, etc.) from behavioral science will need to be expanded or modified in order to apply to diverse intelligent systems. On the one hand, it makes sense not to loosen criteria and expand terms to the point that they lose their meaning. On the other hand, expecting all embodiments to comply with specific criteria developed with an intense focus on brains and animal behavior is begging the question, in terms of assuming that brains set the standard for “bona fide” cognitive skills.
Finding a good balance, we suggest, requires two things. First, an unflinching inquiry into what is the essence of each of these phenomena: What is it really that we are trying to capture, if we let go of comforting but limiting criteria set by properties of neurons and neural networks? Doing so has led to important advances, for example, in the discovery of commonalities between learning and population-level processes in evolution (Power et al., 2015; Livnat and Papadimitriou, 2016; Watson and Szathmáry, 2016; Watson et al., 2016; Kouvaris et al., 2017), which in turn shed light on aspects of machine learning and other fields. Second, the ultimate judge of the legitimacy of unification must be empirical success: the degree of prediction, control, and fecundity for driving new discoveries and new capabilities determines whether a particular set of tools and concepts is legitimately expanded to a new domain. In the last few decades, the field of Diverse Intelligence has been driving a remarkable richness of new discoveries that spread across bioengineering, regenerative medicine, evolutionary biology, ecology, behavioral science, artificial life, and more (Levin, 2021; Reber and Baluška, 2021; Baluška et al., 2022; Davies and Levin, 2023; Lagasse and Levin, 2023; Mathews et al., 2023; Miller et al., 2023).
This section has given preliminary answers to some of the critics by drawing on cutting-edge literature in several fields. However, our main contribution to this epistemological deadlock is non-technically summarized in Sect. 3 and developed in more empirical and mathematical detail in Sects. 4, 5, and 6.
3 The argument in a nutshell
The present paper makes strides toward addressing the breadth-depth trade-off in utilizing cognition-loaded concepts within the diverse intelligence program, aiming to reinforce its theoretical foundations. Specifically, our contribution is to formally sharpen and extend the MCA view proposed by Fields and Levin (2022) by meeting it on its own terms: navigation in problem spaces under variational physical principles. However, complementary to but distinct from their earlier (Friston et al., 2015; Pezzulo and Levin, 2016) and subsequent (Fields et al., 2022; Fields, 2024) works, our approach takes a cue from the skeptics (Adams and Garrison, 2013; Adams, 2018; Figdor, 2022) and begins from the human case by revisiting the classical formulation of problem-solving by Newell and Simon (1972), developed initially for symbolic intelligence (Burns and Vollmeyer, 2000). In their Turing award lecture, Newell and Simon (1976, p. 123) capture perfectly the core tenet of our project: “The task of intelligence, then, is to avert the ever-present threat of the exponential explosion of search.”
Thus, Sect. 4 argues that this problem space (\(P\)) formalism, when suitably extended, provides an expressive, substrate-agnostic lexicon for analyzing goal-directed adaptive behavior beyond its original remit. To this end, in Sects. 5 and 6, we illustrate the versatility of this adapted formalism by applying it to unconventional examples such as amoeboid chemotaxis and planarian regeneration, contributing to existing intuition-building efforts for how cellular and morphogenetic processes can be cast as a search within specific problem spaces (Fields and Levin, 2022; Fields et al., 2022; Fields, 2024).
We recognize that mappings from abstract constructs to biological structures and processes are a dime a dozen, so we pivot next toward a novel operationalization of biological intelligence: search efficiency in problem space (\(K\)). This is a strategic move: As a scalar effectiveness metric for possibly very different problem-solving processes, \(K\) shifts focus from the vague umbrella concept of cognition and its various functions (e.g., decision-making, memory, learning, concept formation, etc.), which skeptics warn atrophy into metaphor when transplanted from validated use into other (literal) walks of life.
Defined as the logarithmic ratio of the cost of a blind search to the cost of an agentic search policy, \(K\) quantifies how many orders of magnitude more efficient an agent is compared to a random walk in a given problem space \(P\). Chance might not seem like much of a benchmark, but looks deceive: it ensures lineage-, system-, scale- and process-neutrality, which is a conceptual sine qua non for a bio-cosmopolitan concept of cognition, i.e., one which does not beg the question by assuming that only certain expressions (e.g., humans, higher animals) fit under “the mark of the cognitive.” Moreover, because both the numerator and the denominator scale with the intrinsic size of \(P\), the metric is automatically normalized for task difficulty and remains finite for enormous state spaces. Furthermore, \(K\) is additive across independent sub-runs and, therefore, compositional across nested sub-problems. In brief, \(K\) is scale-invariant, controls for task complexity, is expressed in physical work units, and puts intelligence on a continuous gradient.
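To make the definition concrete, here is a minimal numerical sketch (the function name and all cost figures are our invented illustrations, not empirical values); it follows the decimal-logarithm definition above and exhibits the additivity across independent sub-runs:

```python
import math

def search_efficiency(cost_random_walk: float, cost_agent: float) -> float:
    """Search efficiency K: orders of magnitude of dissipative work an
    agentic policy saves relative to a maximal-entropy (random-walk) search."""
    if cost_random_walk <= 0 or cost_agent <= 0:
        raise ValueError("costs must be positive")
    return math.log10(cost_random_walk / cost_agent)

# Hypothetical numbers: a random walk that expends 1e8 cost units
# versus an agent solving the same problem for 1e5 units.
K = search_efficiency(1e8, 1e5)  # K ≈ 3: the agent is ~1000-fold more efficient

# Additivity across independent sub-runs: multiplying the costs of two
# independent sub-problems corresponds to summing their K values.
K1 = search_efficiency(1e4, 1e2)
K2 = search_efficiency(1e6, 1e3)
K_total = search_efficiency(1e4 * 1e6, 1e2 * 1e3)  # equals K1 + K2
```

Because the ratio is dimensionless and logarithmic, enormous state spaces yield finite, directly comparable values.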
Admittedly, \(K\) does not a priori equate to thick cognition (Adams, 2018); however, because it quantifies search advantage within-scale and can be additively evaluated across-scales (compositionality), it can precisely express how much combinatorial “dead work” is eliminated via increases in biological complexity. This for-all-strata-and-problems intelligence budget, we believe, gives a mathematical sense of the type of coordinated, system-level behaviors usually associated with “bona fide” cognition.
One may retort that organisms obviously outperform blind search and that clothing this truism in combinatorial garb adds little. We disagree. First, given case-specific empirical details and modeling assumptions, the search efficiency metric can be computed, compared, and statistically tested across both phylogenetic and synthetic lineages. Second, once made empirically tractable, the additive decomposition of \(K\) across nested blankets pinpoints where—and by how much—intelligence condenses, rendering the ‘obvious’ suddenly measurable and, therefore, refutable.
The stage is now set for Sect. 4, where we formalize this account by specifying the extended problem space and efficient search lexicon.
4 A formal lexicon for efficient search in biological problem spaces
4.1 Problem spaces—the setup
This subsection lays the formal scaffolding. It explicates the minimal set of elements—states, operators, constraints, evaluation, and horizon—that jointly define a scale-agnostic problem space. Doing so equips us with a lexicon for analyzing and comparing various biological processes from a unified search efficiency perspective.
Under a first approximation, problem spaces are abstract constructs that can formalize adaptive, goal-directed problem-solving processes across scales of physical organisation. Formally, we define an arbitrary problem space \(P\) as an ordered quintuple:

\[
P = \langle S, O, C, E, H \rangle
\]
Here, \(S\) represents the set of all physically realisable configurations a system can occupy that are relevant to its problem-solving activity at a given level of analysis. Following Newell and Simon (1972), this includes initial \(S_{\mathrm{init}} \subset S\) and solution \(S_{\mathrm{goal}} \subset S\) states (we suppress the subscript when context renders the subset obvious).
Operators \(O\) capture elementary transitions. An operator \(o \in O\) maps a state \(s \in S\) to a subsequent state \(s^{\prime} \in S\) (in a deterministic setting, we have \(o: S \to S\)) or to a set of possible subsequent states (in a non-deterministic setting, we have \(o: S \to \mathcal{P}(S)\), where \(\mathcal{P}(S)\) is the powerset of \(S\)). Search requires a metric on effort, meaning each application of an operator incurs a problem-specific cost, which we formalize by a weight function \(w\colon O\to\mathbb R_{\ge 0}\). A policy or trajectory \(\pi=\langle s_0,o_0,\dots,o_{k-1}\rangle\) is a sequence of operators applied starting from an initial state \(s_0 \in S_{\mathrm{init}}\) to generate a sequence of states \(s_0 \stackrel{o_0}{\longrightarrow} s_1 \stackrel{o_1}{\longrightarrow} \ldots \stackrel{o_{k-1}}{\longrightarrow} s_k\), with \(s_{k} \in S_{\mathrm{goal}}\). The cumulative cost of such a trajectory is \(\mathcal{C}(\pi | s_0) = \sum_{i=0}^{k-1}w(o_i)\).
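The definitions above can be rendered as a toy computational sketch (the grid world, operator names, and per-move weights are our own inventions for illustration; only the cumulative cost formula \(\mathcal{C}(\pi \mid s_0)=\sum_{i} w(o_i)\) follows the text):

```python
from typing import Callable, Dict, List, Tuple

State = Tuple[int, int]   # states S: cells of a toy 2D grid
Operator = str            # operators O: labelled elementary moves

# Deterministic operators o: S -> S on the grid
OPERATORS: Dict[Operator, Callable[[State], State]] = {
    "up":    lambda s: (s[0], s[1] + 1),
    "down":  lambda s: (s[0], s[1] - 1),
    "left":  lambda s: (s[0] - 1, s[1]),
    "right": lambda s: (s[0] + 1, s[1]),
}

# Weight function w: O -> R>=0 (invented problem-specific costs)
WEIGHTS: Dict[Operator, float] = {"up": 1.0, "down": 1.0, "left": 2.0, "right": 2.0}

def trajectory_cost(s0: State, ops: List[Operator]) -> Tuple[State, float]:
    """Apply a policy pi = <s0, o_0, ..., o_{k-1}> and return the final
    state s_k together with the cumulative cost C(pi | s0) = sum_i w(o_i)."""
    s, cost = s0, 0.0
    for o in ops:
        s = OPERATORS[o](s)
        cost += WEIGHTS[o]
    return s, cost

final_state, c = trajectory_cost((0, 0), ["right", "right", "up"])
# final_state is (2, 1); c is 2.0 + 2.0 + 1.0 = 5.0
```

A non-deterministic setting would simply let each operator return a set of successor states rather than a single one.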
Constraints \(C\subseteq S\times O\) exclude physically impossible moves, specifying the bounds of the operationally accessible. Technically, \(C\) lists forbidden state-operator pairs, so the admissible set is its complement \(A=(S\times O)\setminus C\). Philosophically, \(C\) specifies nomologically possible paths. By “physical” we mean those properties and relations that obtain in virtue of a system’s scale-specific realization (e.g., cellular mechanics, tissue-level bioelectrical rules, bodily positions and trajectories, etc.), not necessarily only those properties deemed fundamental by physical theory (Stoljar, 2024).
The evaluation functional \(E\colon S\!\to\!\mathbb R\) assigns a scalar utility (larger preferred) or, equivalently, a scalar disutility (smaller preferred) based on objectives inherent to the problem-solving system, which reflect its intrinsic goals or viability criteria. For biological systems, \(E\) often translates to a proxy for fitness, such as proximity to homeostatic setpoints, morphogenetic target achievement, reproductive success, etc. Furthermore, when conceptually unpacked, \(E\) implies that energetic, temporal, and risk currencies compete, suggesting that, at least in biological systems, evolutionary history selects for evaluation mechanisms that render qualitatively incommensurable optima into a system-evaluable format to effectively guide behavior along fitness gradients.
Finally, the horizon \(H\in\mathbb N\) bounds forward look-ahead, representing the number of steps typically considered in sequential operations within the space; this is usually called, at least in the human case, ‘planning’ or ‘prediction.’ More generally, one may specify a real-valued time bound \(\tau_{\max}\) and set \(H=\lceil\tau_{\max}/\Delta t\rceil\), with \(\Delta t\) the discretization step. We numerically show in the next sections that horizons derive from inherent physical timescales or delay lines, which functionally constrain the effective depth or temporal range of prediction available to the system.
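To make the quintuple concrete, here is a minimal Python sketch of \(P=\langle S,O,C,E,H\rangle\) as a data structure. All names (`ProblemSpace`, `admissible`, `trajectory_cost`) are our hypothetical illustrations, not part of the formalism itself:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Tuple

# Hypothetical sketch of the quintuple P = <S, O, C, E, H>.
# States and operators are opaque labels; w costs operators; C lists
# forbidden (state, operator) pairs; E scores states; H bounds look-ahead.
@dataclass(frozen=True)
class ProblemSpace:
    states: FrozenSet[str]                   # S
    operators: FrozenSet[str]                # O
    w: Callable[[str], float]                # cost function w: O -> R>=0
    constraints: FrozenSet[Tuple[str, str]]  # C subset of S x O (forbidden)
    evaluate: Callable[[str], float]         # E: S -> R
    horizon: int                             # H

    def admissible(self, state: str, op: str) -> bool:
        """A = (S x O) \\ C: a pair is admissible iff it is not forbidden."""
        return (state, op) not in self.constraints

    def trajectory_cost(self, ops) -> float:
        """Cumulative cost C(pi | s0) = sum_i w(o_i)."""
        return sum(self.w(o) for o in ops)
```

This is only scaffolding for intuition; the biological examples in Sects. 5 and 6 instantiate the same elements with empirically measured quantities.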
The classical formulation of problem spaces by Newell and Simon (1972) primarily focuses on states \(S\), including initial and goal states, and operators \(O\) defining the space, with evaluation \(E\) and constraints \(C\) considered aspects of the search strategy operating within that space. However, we include \(C\), \(E\), and \(H\) explicitly in our definition of \(P\) to foreground the constraints, evaluative criteria, and predictive limitations that are particularly salient in the biological systems we analyze. We ‘promoted’ them to first-class elements because, as we show, biological systems often modulate them directly as part of their adaptive repertoire. Rather than just navigating a fixed space, this capacity for recursive adjustment of the problem spaces (via, e.g., constraint relaxation, preference tuning, or catalytic temporal speed-ups) is a fingerprint of biological intelligence that our extended formalism aims to capture.
From a broader perspective, the grammar just introduced, while developed initially for symbolic human and artificial intelligence (Newell and Simon, 1972, 1976; Burns and Vollmeyer, 2000), is a minimal yet powerful vocabulary to analyze goal-directed systems because it abstracts informational relationships between states, transformations, and evaluative criteria from scale-specific physical realization details, rendering it substrate-flexible. Nevertheless, attentive to the skeptical lessons of Sect. 2, we show below how our account heeds lineage-sensitive constraints (Figdor, 2022; Fields, 2024): by parameterizing constraints, evaluation metrics, and time horizons as empirically traceable, scale-bound variables, it ties functional ascriptions to their material histories (rather than dispersing them promiscuously) and, in principle, enables within- and inter-lineage comparisons.
4.2 Intelligence qua search efficiency in problem space
William James (1995) defined intelligence as “a fixed goal with variable means of achieving it,” and this is a good entry point for specifying the relationship between problem spaces and intelligence. In our context, we operationally define intelligence as the capacity for effective searches, meaning applications of operators \(O\), that reach goal states \(S_{\text{goal}} \subset S\) preferred under \(E\), given the prevailing constraints \(C\) and bounded by the horizon \(H\), despite unforeseen obstacles. Obstacles can be formalized as additional forbidden pairs in \(C\) whose existence is revealed only when they fall within the predictive horizon \(H\). Intelligence is, therefore, a gradient property: its degree is the search efficiency of the system within a given problem space.
Formally, let \(\tau_{\mathrm{blind}} = \mathbb{E}\bigl[\mathcal{C}(\pi_{\mathrm{blind}}|s_{0})\bigr]\) denote the expected cumulative cost \(\mathcal{C}(\pi|s_0) = \sum_i w(o_i)\), in weighted operator applications under the cost function \(w\), incurred by a maximal-entropy (unbiased) random-walk policy \(\pi_{\mathrm{blind}}\) on the admissible graph \(A = (S \times O) \setminus C\) to reach any state \(s_{k} \in S_{\mathrm{goal}}\) from an arbitrary initial state \(s_0 \in S_{\mathrm{init}}\). Next, we write \(\tau_{\mathrm{agent}} = \mathbb{E}\bigl[\mathcal{C}(\pi_{\mathrm{agent}}|s_{0})\bigr]\) for the corresponding expectation under a given system’s agentic policy \(\pi_{\mathrm{agent}}\). So equipped, we formally define the search efficiency in problem space as:
\[K = \log_{10}\!\left(\frac{\tau_{\mathrm{blind}}}{\tau_{\mathrm{agent}}}\right) \tag{2}\]
Equivalently, in natural units one has \(K = \frac{1}{\ln10}\ln(\frac{\tau_{\mathrm{blind}}}{\tau_{\mathrm{agent}}})=\frac{\mathcal{I}_{\mathrm{path}}}{\ln10}\), with \(\mathcal{I_{\mathrm{path}}}=\ln(\frac{\tau_{\mathrm{blind}}}{\tau_{\mathrm{agent}}})\), so that a single decimal unit (\(K=1\)) corresponds to \(\log_{2}10 \approx 3.32\) bits of path-information gain (Shannon, 1948).
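These unit conversions can be sketched in a few lines of Python; the function names are ours, for illustration only:

```python
import math

def search_efficiency(tau_blind: float, tau_agent: float) -> float:
    """K = log10(tau_blind / tau_agent): orders of magnitude saved."""
    return math.log10(tau_blind / tau_agent)

def k_to_bits(k: float) -> float:
    """One decimal unit of K corresponds to log2(10) ~ 3.32 bits."""
    return k * math.log2(10)

def k_to_nats(k: float) -> float:
    """Path-information gain I_path = K * ln(10), in natural units."""
    return k * math.log(10)
```

For example, a blind cost of \(10^3\) units against an agentic cost of \(10\) units gives \(K=2\), i.e., about 6.6 bits of path-information gain.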
\(K\) measures how many orders of magnitude of dissipative expenditure (i.e., search cost) an agent saves relative to maxent search. We say dissipative expenditure because each operator application is costed by \(w:O \to \mathbb{R}_{\ge 0}\), such that \(\tau\) inherits the physical units of \(w\) (e.g., joules, ATP hydrolysis, etc.), which casts the intelligence metric in terms of biophysical budgets rather than abstract time steps, as Figdor (2022, 2024) cautions. Intuitively, a zero-valued \(K\) marks chance performance, \(K > 0\) indicates supra-random efficiency, and \(K \gg 0\) reflects much larger search advantages. Each integer increment tracks one order of magnitude, such that \(K = n\) corresponds to a \(10^{n}\)-fold gain in search efficiency.
Additionally, the log base choice cancels when comparing two systems. For cross-system assessment, one can write \(\Delta K=K_1-K_2\), such that the differences can equally be read in bits (\(\Delta K\log_{2}10\)) or nats (\(\Delta K\ln 10\)). However, note that \(K\) must always be evaluated relative to a well-defined problem space \(P = \langle S, O, C, E, H\rangle\), as the specific characteristics of \(S\), \(O\) (including \(w\)), \(C\), and \(S_{\mathrm{goal}}\) determine the state-transition graph and cost landscape upon which both \(\tau_{\mathrm{blind}}\) and \(\tau_{\mathrm{agent}}\) are calculated.
Moreover, \(K\) can also be proved additively composable. To wit, if a complex search can be expressed as a sequence of \(n\) conditionally independent stages, such that the overall efficiency ratio \((\tau_{\mathrm{blind}}/\tau_{\mathrm{agent}})_{\mathrm{total}}\) is the product of the stage-specific efficiency ratios \(\prod_{j=1}^n(\tau_{\mathrm{blind}}/\tau_{\mathrm{agent}})_j\), then the total search efficiency \(K_{\mathrm{complex}} = \sum_{j=1}^n K_j\). Conceptually, this means that one can assess different mechanistic contributions to search efficiency by considering how the trajectory cost decomposes across stages.
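A quick numerical check of this additivity, with purely hypothetical stage-wise efficiency ratios:

```python
import math

# Hypothetical conditionally independent stages whose efficiency
# ratios multiply; the corresponding stage-wise K values must add.
ratios = [200.0, 50.0, 3.0]                 # (tau_blind / tau_agent)_j
k_stages = [math.log10(r) for r in ratios]  # K_j per stage
k_total = math.log10(math.prod(ratios))     # K of the composed search
assert abs(k_total - sum(k_stages)) < 1e-12
```

The identity is just \(\log_{10}\prod_j r_j=\sum_j\log_{10} r_j\), but stating it computationally makes clear how stage-level measurements could be aggregated in practice.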
Here are a few other noteworthy properties. First, because both the numerator and the denominator scale with the combinatorial size of the underlying space, \(K\) remains finite and retains scale-invariance. Second, unlike raw reaction-time or energy-budget measures, \(K\) controls for the baseline combinatorics of the task by normalizing against a random strategy. Indeed, \(K\) is only as good as the null model: an unfairly handicapped \(\tau_{\mathrm{blind}}\) would overestimate intelligence qua search efficiency, and vice-versa for an unfairly advantaged null model (e.g., insufficient constraints, artificially lower operator costs, etc.). Thus, for a robust baseline, the random walk must operate within the exact same problem space \(P\), particularly respecting identical admissible sets \(A\) and cost functions \(w\) as the agent. Third, logarithmic compression linearizes multiplicative search-time gains, preventing combinatorial explosions in \(|S|\) from dwarfing finer algorithmic improvements—this is a desideratum for comparing, e.g., amoeba, planaria, and vertebrate cortex on the same axis, which is presupposed if our intelligence notion is to be scale invariant. We now propose two biologically plausible models to illustrate in practice the formal constructs introduced.
Finally, before exemplifying \(K\) biologically, we highlight an important connection that is explored in upcoming work. The search efficiency in problem space shares some commitments with computational efficiency in universal computation. In brief, measures based upon algorithmic complexity bridge the gap between universal computation—which, if the physical Church-Turing Thesis (Copeland and Shagrir, 2020; Copeland, 2024) is correct, includes basal cognition—and variational free energy treatments of self-organisation. Efficiency in this context emphasises the minimization of the complexity of some generative model or program that generates some solution or content. In variational approaches, this complexity is scored in terms of a relative entropy (technically, between the posterior and prior beliefs after observing some content to be explained). This complexity minimization is addressed in universal computation through the notion of compression, which figures in many accounts of efficiency, e.g., Schmidhuber (2010), Mehta et al. (2014), Ruffini (2017), Grünwald and Roos (2019), and Friston et al. (2025). In other words, using algorithmic complexity and, in particular, Kolmogorov complexity, optimal solutions correspond to the program or policy with the minimum description or message length (Hinton and Zemel, 1993; Wallace and Dowe, 1999). This perspective on efficiency underwrites the notion of Solomonoff (2009) induction and the perspective afforded by universal computation (Delvenne, 2009; Lake et al., 2015). Interestingly, minimum message length formulations have been linked explicitly to variational free energy (Hinton and Zemel, 1993; MacKay, 1995).
5 A model of search efficiency in the problem space of amoeboid chemotaxis
5.1 A problem space for Dictyostelium discoideum chemotaxis
Biological organisms exhibit hierarchical, nested, multi-component architectures, which makes any problem space identification non-trivial. If one zooms in on some subunit level—which knows nothing of problem spaces at higher scales—processes seem to operate purely mechanistically (“just physics”) without any problem-solving. If there is any cognitive agent to be found, the traditional view locates it at some higher-order organization scale (Adams and Aizawa, 2010; Adams, 2018; Figdor, 2022), and it is usually one-agent-per-system. Basal cognition and Diverse Intelligence proponents (Levin et al., 2021; Lyon et al., 2021; Levin, 2022, 2023a, d, 2025; Lyon and Cheng, 2023; McMillen and Levin, 2024) argue this framing is wrong: the agential perspective (Godfrey-Smith, 2009) should morph depending on the scale, meaning there are multiple interdependent problem-solvers (Fig. 4), and on who is looking, that is, identifying intelligence in another system is also an IQ test for the observer itself, as noted above. Indeed, it could be argued that a key property for life at any scale is the ability to coarse-grain appropriately, not spending precious time and energy trying to track microstates like a Laplacean Demon but rather taking the best guess at an optimal level of observation, modeling, and control of themselves, their own parts, and features of the external environment (Fields et al., 2021; Fields and Levin, 2023). Life can be seen as a battle of perspectives rather than of genes, information patterns, or energy gradients. Complex biological agents often consist of components that are themselves competent problem-solvers in their own, usually smaller, local spaces (Levin, 2022).
Actions in one space enable or constrain actions in other spaces. Movement in metabolic space provides the energy needed to drive changes in gene expression (as well as cell motion), which in turn provides the building blocks needed to change cell morphology, which enables movement (behavior in 3D), which facilitates subsequent metabolic gains. Image by Jeremy Guay of Peregrine Creative
Thus, in biological architectures, Fields and Levin (2022) argue that there is simultaneous search in multiple problem spaces interlinked across scales (e.g., transcriptional, physiological, morphological, etc.) and not only in the familiar behavioral and symbolic spaces considered initially by Newell and Simon (1972). Can our \(P\)’s formal structure capture these unfamiliar spaces? Yes. The present subsection shows how this abstract construct captures cellular behavior. Since the canonical agent scale (i.e., human and animal cognition) is unlikely to raise qualms and has been extensively discussed in the literature, we focus on two unconventional examples only to build intuition and refer the reader to further similar work (Fields et al., 2022; Fields, 2024).
One example comes from amoeboid chemotaxis (Parent and Devreotes, 1999; Iglesias and Devreotes, 2008). Under our problem space formalism, a migrating Dictyostelium cell navigates a shallow cyclic-AMP field; its membrane positions can instantiate the states \(S\). Specifically, \(S\) is parameterized as a two-dimensional lattice of \(\approx500\) cortical patches, and each patch’s occupancy probability is updated at 0.3 s intervals, matching the cAMP equilibration time derived from \(D_{\mathrm{cAMP}} \approx 3 \times 10^{-10}\,\mathrm{m}^2\,\mathrm{s}^{-1}\) (Bhowmik et al., 2016). Operators \(O\) could correspond to Arp2/3- and SCAR/WAVE-driven dendritic-actin bursts that nucleate \(\approx 3\mu\textrm{m}\) pseudopods roughly every 15 s, as measured by live-cell actin-YFP imaging and automated pseudopod tracking in Dictyostelium (Bosgraaf and Van Haastert, 2009; Van Haastert and Bosgraaf, 2009; Veltman et al., 2012).
Constraints \(C\) could be realized by cortical tension and membrane integrity, which block protrusions liable to tear the specialized layer of cytoplasm located just beneath the plasma membrane, i.e., the cell’s cortex (Chugh and Paluch, 2018). More precisely, \(C\) comprises a tensile ceiling of \(\approx 800\,\mathrm{pN}\,\mu\mathrm{m}^{-1}\) beyond which actin-driven protrusions stall, and a membrane-area conservation penalty reflecting lipid-bilayer incompressibility (Herant and Dembo, 2010). Then, the thermodynamic cost associated with motility (e.g., ATP hydrolysis per unit distance) provides a metric for evaluating the functional \(E\) (to be minimized).
Finally, the effective planning horizon \(H\) is constrained by factors such as the diffusion time of the attractant across the cell diameter or the persistence time of exploratory structures. Numerically, for a \({10}\mu\textrm{m}\) Dictyostelium cell, the characteristic diffusion time of cAMP across its diameter can be estimated as \(\tau \approx L^2/D \approx (10^{-5}\mathrm{m})^2 / (3\times 10^{-10}\mathrm{m}^2\textrm{s}^{-1}) \approx 0.33\)s, using the diffusion coefficient \(D_{\text{cAMP}}=1.8\times10^{-8}\mathrm{m}^{2}\mathrm{min}^{-1}\) (Bhowmik et al., 2016), which is equivalent to \(3\times 10^{-10}\mathrm{m}^2\mathrm{s}^{-1}\) employed in the earlier model by Höfer et al. (1995). Interpreting this timescale with a hypothetical time step \(\Delta t \approx 0.3\)s, commensurate with its diffusion timescales, implies an effective predictive horizon \(H\approx1\). Note that, since horizon \(H\) is a new concept, its numerical estimation can only rely on educated guesses based on existing empirical literature and formal models. Caveats notwithstanding, \(P\) is expressive enough to capture amoeboid chemotaxis without presupposing explicit human-level representation as in the classical work of Newell and Simon (1972).
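The horizon estimate above can be verified with a back-of-the-envelope computation; the physical values are those cited in the text, while the rounding convention for \(H\) is our own:

```python
import math

# Diffusion time of cAMP across a 10-micron Dictyostelium cell,
# tau ~ L^2 / D, with D = 3e-10 m^2/s (Bhowmik et al., 2016).
L = 10e-6          # cell diameter, m
D = 3e-10          # cAMP diffusion coefficient, m^2/s
tau = L**2 / D     # characteristic diffusion time, ~0.33 s

# With a hypothetical discretization step commensurate with this
# timescale, the ratio tau/dt is ~1.1, consistent with the effective
# predictive horizon H ~ 1 quoted in the text.
dt = 0.3           # discretization step, s
H = round(tau / dt)
```

As the text stresses, this is an educated guess: \(H\) is bounded by intrinsic physical timescales, not directly measured.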
5.2 How search efficient is amoeboid chemotaxis?
A Dictyostelium cell sensing a cyclic-AMP gradient must move roughly ten cell lengths to reach a nutrient patch. First, for the blind search cost, \(\tau_{\mathrm{blind}}\), we estimate the time taken by a random walk. Thus, using a conservative random-motility coefficient \(D_{\mathrm{cell}}\in[30,40]\,\mu\textrm{m}^2/\textrm{minute}\) (empirically bracketed by single-cell tracking under normoxic and mildly hypoxic conditions as per Cochet-Escartin et al. (2021)), the mean first-passage time of an unbiased walk over ten cell lengths (\(L=100\,\mu\textrm{m}\)) is \(\tau_{\mathrm{blind}}\approx L^2/D\approx(1.75\pm0.25)\times10^4\) s. Compared to this empirically estimated null model, experimental work shows that amoeboid chemotaxis closes the same gap in \(\tau_{\mathrm{agent}}\approx100\) s (Parent and Devreotes, 1999; Levine and Rappel, 2013). Plugging these values into our Eq. 2, we have \(K_{\mathrm{amoeba}} = \log_{10}\bigl(\tau_{\mathrm{blind}}/\tau_{\mathrm{agent}}\bigr) = 2.18\text{-}2.30\), meaning the cell is approximately 150–200 times more efficient (corresponding to 7.2–7.6 bits of path-information gain), which sits comfortably within the physical sensing bounds set by correlation-time noise (Endres and Wingreen, 2008; Hu et al., 2010). This calculated range shows that moderate uncertainty in the random-motility coefficient \(D_{\mathrm{cell}}\) perturbs \(K\) by \( < 0.13\), which indicates that our metric is robust to at-the-bench measurement error.
The choice of the formula for mean first-passage time (MFPT) from a diffusive process warrants technical comment. For a 1D random walk, the mean square displacement is \(\langle x^2\rangle=2Dt\). The MFPT to reach a distance \(L\) for an absorbing boundary is often given as \(\tau=L^2/(2D)\). For 2D or 3D searches, the prefactor in the denominator may change (e.g., to \(4D\) under certain approximations for 2D). Thus, the formula \(\tau\approx L^2/D\) used here represents a particular convention or approximation for the effective search time. Using, for instance, \(\tau\approx L^2/(2D)\) would halve the \(\tau_{\mathrm{blind}}\) estimates. For the given \(D\), this alternative formula would yield \(\tau_{\mathrm{blind}}\in[0.75\times10^4,1.0\times10^4]\,\mathrm{s}\), and \(K\in[\log_{10}(75),\log_{10}(100)]\approx[1.88,2.00]\). As \(K\) is a logarithmic ratio, this prefactor choice primarily introduces an additive constant to \(K\), i.e., \(\log_{10}(2)\approx0.3\). As we remark in Sect. 4, consistency in defining \(\tau_{\mathrm{blind}}\) is a crucial aspect when comparing systems or assessing the impact of specific adaptations. Luckily, as can be seen from the reparameterization above, the order of magnitude for \(K\) often remains robust to such variations in the precise null model specification, which is highly relevant to the difficult operationalization questions of \(\tau_{\mathrm{blind}}\) generally.
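Both prefactor conventions can be checked numerically; the short script below reproduces the reported ranges for \(K\) from the quantities given in the text:

```python
import math

L = 100.0                          # ten cell lengths, in microns
tau_agent = 100.0                  # observed chemotactic crossing time, s

results = {}
for D_per_min in (30.0, 40.0):     # random-motility coefficient, micron^2/min
    D = D_per_min / 60.0           # convert to micron^2/s
    tau_blind = L**2 / D           # MFPT convention tau ~ L^2 / D (as in text)
    K = math.log10(tau_blind / tau_agent)
    # Alternative 1D prefactor tau = L^2/(2D) shifts K down by log10(2) ~ 0.30.
    K_alt = K - math.log10(2.0)
    results[D_per_min] = (K, K_alt)
```

Running this recovers \(K\in[2.18,2.30]\) under the text's convention and \(K\in[1.88,2.00]\) under the halved-MFPT convention, confirming that the prefactor choice acts as an additive constant.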
6 A model of search efficiency in the problem space of planarian regeneration
6.1 A problem space for Dugesia head regeneration
Upping the scale, planarian head regeneration (Reddien and Sánchez Alvarado, 2004; Reddien, 2018) is another non-mainstream candidate for problem space searching. State-of-the-art experiments demonstrate that planarian flatworms can adapt their regenerative mechanisms to guide cells toward target morphologies despite specific perturbations not typically encountered during evolution, e.g., transient exposure to particular ion channel blockers such as those involving barium (Fig. 5) (Beane et al., 2013; Cervera et al., 2018; Levin et al., 2021, Levin, 2023a). Our problem space formalism can also accommodate tissue-level morphogenesis and shows how morphological priors constrain the search.
Bioelectrically-encoded representations in planaria. Control planaria exhibit expression of anterior marker genes in the head (A, green arrowhead indicates head, pink arrowhead indicates tail end), and possess a bioelectric pattern (visualized here with voltage-sensitive fluorescent dye, green = depolarized) (B) that indicates that complete worms should have exactly 1 head. When a worm is amputated (C), the middle fragment reliably regenerates worms with 1 head (D). However, when the bioelectric pattern is altered via exposure to an ionophore, animals are anatomically normal (1-headed) and exhibit head markers normally, meaning only on one end (green arrowhead), but when cut, give rise to 2-headed animals as indicated by their new pattern memory (Durant et al., 2017, 2019). This change is permanent: they will continue to generate 2-headed animals in future rounds of cutting (Oviedo et al., 2010). These data show that a single worm body can store (at least) one of two different patterns that control how they will regenerate in the future (E), and reveal that the bioelectric pattern is not an indicator of current state, but a representation (memory) of the morphogenetic target morphology that will be recalled in the future if the animal is injured. Crucially, this is a counterfactual representation that gives a sense of how the thick notion of cognition presupposing intensionality (Adams, 2018) could be instantiated in unconventional substrates such as flatworms (see discussion in Sect. 2). Moreover, planaria have an intrinsic capacity to adjust their electrophysiology as well (F), identifying and then up- and down-regulating a handful of genes that enable them to regenerate heads that are insensitive to an exotic toxin that destroys their native head (Emmons-Bell et al., 2019). Panel in E by Jeremy Guay of Peregrine Creative
Thus, translated into \(P\), the spatial distribution of cell types and signalling molecules defining the body plan defines \(S\). More concretely, \(S\) can be approximated by a low-dimensional vector \(s(t) = \big\langle \rho_i(t),\, V_{\mathrm{mem},j}(t)\big\rangle\) whose first block stores regional neoblast and differentiated-cell densities \(\rho_i\) measured by BrdU (5-bromo-2’-deoxyuridine) incorporation and fluorescence-activated cell sorting (FACS), and whose second block records anterior-posterior voltage profiles \(V_{\mathrm{mem}}\) obtained with voltage-sensitive dyes (Wenemoser and Reddien, 2010; Emmons-Bell et al., 2019).
Next, transcriptional programs and cell migrations constitute \(O\). For example, neoblast division (\(\approx\) 6 h inter-mitotic time), directed migration at \({3}\mu\textrm{m}/\textrm{h}\) to \({6}\mu\textrm{m}/\textrm{h}\), and lineage-specific differentiation each supply elementary operators \(o_{i}\) with empirically determined work costs in ATP equivalents (Scimone et al., 2014; Reddien, 2018).
Constraints \(C\) are realized by developmental polarity rules and gap junction communication patterns. Polarity constraints derive from Wnt/\(\beta\)-catenin gradients that bias head–tail fate: RNAi against \(\beta\)-catenin, pharmacological closure of innexin-11 gap junctions, or direct modification of the bioelectric prepattern with ionophores or ion channel drugs (Beane et al., 2011; Durant et al., 2019) shifts the collective outcomes and yields double-headed morphologies (Petersen and Reddien, 2009; Williams et al., 2020; Nogi and Levin, 2005). This illustrates how relaxing constraints \(C\) enlarges reachability (i.e., different \(s_{i} \in S_{\mathrm{goal}}\)) in problem space \(S\). As in the chemotactic case, mechanical integrity adds an independent ceiling as tissue surface tension of \(\approx 0.6\ \text{mN m}^{-1}\) limits blastema curvature (Birkholz et al., 2019), and thus bounds operator \(o_{i} \in O\) amplitudes.
Further, \(E\) can be realised by the deviation of the current shape from the target morphogenetic pattern, potentially quantifiable via a variational free energy measure (Kuchling et al., 2020). One conservative evaluation functional could be the squared error between the live worm’s length-to-width ratio and the clonal mean ratio recorded for uninjured controls, an index routinely used to score shape fidelity during regeneration (Birkholz et al., 2019).
Finally, the turnover time of neoblast progenitors constrains \(H\) qua morphological planning. In planarian regeneration, the median G2 duration of neoblasts is roughly 6 h (Newmark and Sánchez Alvarado, 2000; Wenemoser and Reddien, 2010), so, with a discretization \(\Delta t=1\text{s}\) matching cell-level actions, the morphological horizon is \(H \approx 2.2 \times 10^{4}\) operator cycles. Contrasting Dictyostelium’s \(H \approx 1\) with planaria’s \(H \approx 2.2 \times 10^{4}\) underscores a four-order-of-magnitude expansion in predictive depth, showing that \(H\) preserves experimentally validated dimensional consistency and lending credence to the point that inference timescales recapitulate intrinsic delay lines.
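A one-line check of the morphological horizon, using the values quoted above:

```python
import math

tau_max = 6 * 3600           # median neoblast G2 duration, s
dt = 1                       # cell-level discretization step, s
H_planaria = tau_max // dt   # 21600, i.e. ~2.2e4 operator cycles

# Relative to Dictyostelium's H ~ 1, the expansion in predictive
# depth is log10(21600) ~ 4.3 orders of magnitude.
depth_gap = math.log10(H_planaria / 1)
```

The comparison is only as sharp as the discretization conventions chosen for each system, so the four-order-of-magnitude figure should be read as indicative rather than exact.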
Importantly, transcriptional adaptation in barium-exposed planaria reveals highly efficient search policies in high-dimensional gene-expression spaces (Emmons-Bell et al., 2019) (quantified in the following subsection). In other words, when planaria mount a response to the barium-induced disruption of bioelectric signalling necessary for regeneration, they do not randomly test all possible gene expression combinations, which would be astronomically impractical. Indeed, RNA-sequencing shows that approximately \(1.98\%\) of the transcriptome is differentially expressed during \(\mathrm{BaCl_{2}}\) adaptation (\(q < 0.05\), \( > 2\)-fold change), indicating targeted operator selection rather than wholesale search (Emmons-Bell et al., 2019). In other words, planaria rapidly identify and modulate a specific subset of transcripts needed to partially restore or compensate for disrupted physiological homeostasis in the presence of a novel ion channel blocker, demonstrating efficient adaptation suggestive of intelligent exploration of the problem space.
Here are a few extrapolations from the results above. First, experimental data support the hypothesis that, in some cases, editing constraints \(C\) can yield larger efficiency gains than adding operators \(O\), which we illustrated above via voltage-gated ion-channel editing in Dugesia under BaCl\(_2\) (Emmons-Bell et al., 2019). Indeed, constraints are emerging as a critical aspect of biological richness (Deacon, 2012; Montévil and Mossio, 2015; Bechtel, 2018; Juarrero, 2023; Ross, 2023). Thus, in our examples, relaxing membrane tension or bioelectric rules can expand reachability more than duplicating moves. Put differently, while more-of-the-same (e.g., copying an operator) increases robustness by introducing redundancies, it also incurs costs without any added novelty, forcing a trade-off; formally, this corresponds to Bayesian model selection and program induction in statistics and computer science (Tenenbaum et al., 2011; Lake et al., 2015). Second, as noted, intelligent behavior frequently hinges on problem reformulation. Indeed, modifying \(O\) or \(C\) re-tiles the landscape and shortens optimal paths, a tactic long appreciated in human planning and problem-solving (Newell and Simon, 1972) yet whose biological analogs beyond behavioral flexibility remain relatively under-explored. Third, depth arises when progress in one space sculpts the optimiser that operates in another (Fields and Levin, 2022), producing a hierarchy of interleaved spaces whose mutual constraints define an optimization stack.
6.2 How search efficient is planarian regeneration?
Is Dugesia japonica head regeneration, when exposed to 1 mM barium chloride, also search-efficient under its specific problem space when judged against an explicit random-search baseline? Emmons-Bell et al. (2019) show that continuous BaCl\(_2\) abolishes anterior tissue within 72 h in \(\approx83\%\) of worms, producing a sharp wound plane at the photoreceptors. A blastema first appears after about 15 days, and a morphologically normal but BaCl\(_2\)-tolerant head is complete by day 37. If these adapted worms spend 30 days in freshwater, the tolerance disappears, and a second BaCl\(_2\) exposure again destroys the head within 24 h, showing that the phenotype is plastic, not genetically fixed (Levin, 2023a). As above, RNA-sequencing on fully regenerated, BaCl\(_2\)-insensitive heads identified differential expression in 1.98% of the 138,026 annotated D. japonica coding sequences: about 2,700 transcripts. This regulated cohort of transcripts is enriched for bioelectric effectors; for example, the TRPM\(_\alpha\) channel is newly expressed, whereas several innexins and tubulins are sharply down-regulated. Such a pattern points to a targeted rewiring of ionic conductances rather than wholesale transcriptional editing (Emmons-Bell et al., 2019). This is consistent with pharmacological data showing that calcium- or chloride-channel blockade prevents the initial BaCl\(_2\) degeneration and that TRPM inhibition erases the acquired resistance (Emmons-Bell et al., 2019).
To gauge the search speed-up of this adaptation, we consider a very conservative null model. Suppose resilience requires a concerted change in just ten of the 2,700 BaCl\(_2\)-responsive genes. The search space then contains \(\binom{2700}{10}\approx5.6\times10^{27}\) distinct ten-gene combinations. Neoblasts, which are the only transcriptionally plastic cells, constitute roughly one-third of the body and number on the order of \(10^5\) in a decapitated fragment; each completes a division cycle in about 30 h at \(13\,^\circ\mathrm{C}\). Thus, even if every neoblast explored only a new ten-gene pattern each cycle, an unbiased walk would require \(5.6\times10^{22}\) such rounds to sample the entire space once, which is about \(1.9\times10^{20}\,\mathrm{years}\), corresponding to a random searcher estimate of \(\tau_{\mathrm{blind}} \approx 6 \times 10^{27}\) s. The empirical trajectory, by contrast, converges on a viable solution in 37 days (Emmons-Bell et al., 2019, Fig. 1A-D), which gives \(\tau_{\mathrm{agent}} \approx 3.2\times10^{6}\) s. A simple calculation using our Eq. 2 yields a search efficiency \(K = \log_{10}\!\bigl(6\times10^{27}\,\mathrm{s}/3.2\times10^{6}\,\mathrm{s}\bigr)\approx21\), roughly \(10^{21}\) times more efficient than the null model, corresponding to about 70 bits of path-information gain. Thus, even when the baseline is set by an extravagantly conservative random walk, which greatly underestimates \(K\), the worm’s weeks-long developmental program eliminates roughly a sextillion-fold (\(10^{21}\)) of futile exploration in problem space.
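The null-model arithmetic above can be reproduced directly; all quantities are the conservative estimates given in the text:

```python
import math

n_responsive = 2700                          # BaCl2-responsive transcripts
combos = math.comb(n_responsive, 10)         # ~5.6e27 ten-gene combinations
neoblasts = 1e5                              # transcriptionally plastic cells
cycle_s = 30 * 3600                          # one neoblast division cycle, s

# Unbiased parallel walk: one new ten-gene pattern per neoblast per cycle.
rounds = combos / neoblasts                  # ~5.6e22 rounds to cover space
tau_blind = rounds * cycle_s                 # ~6e27 s of blind search
tau_agent = 37 * 24 * 3600                   # ~3.2e6 s (37-day regeneration)

K = math.log10(tau_blind / tau_agent)        # ~21
```

The computed \(K\) lands slightly above 21, consistent with the rounded figure in the text; dropping the \(10^5\)-fold neoblast parallelism would raise it by a further five units.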
Two additional technical remarks. First, the calculation deliberately underestimates both the dimensionality of the ion-channel manifold (e.g., many regulators never reach significance in the bulk RNA-seq) and the combinatorial complexity of downstream post-translational control. Hence, \(K\approx21\) should be read as a minimal empirically-derived bound on intelligent search. Second, the estimate already discounts the massive parallelism of \(10^5\) neoblasts; without it, \(\tau_{\mathrm{blind}}\) stretches by another five orders of magnitude, significantly increasing \(K\).
7 Conclusion
Zooming out, the search efficiency construct expresses intelligence in the combinatorial geometry of problem spaces. The quintuple \(P\) delineates the search landscape, whereas \(K\) records, on a logarithmic scale, the extent to which an agent prunes the futile branches of that space relative to a maximal-entropy walk. Yet specifying the blind walk is itself an inference problem: one must commit to a cost metric \(w\), a constraint set \(C\), and an operator alphabet \(O\) that are simultaneously faithful to the biological scale under scrutiny and commensurate with the null model. To give a neurobiological example, for a cortical microcircuit of \(10^{4}\) neurons and \(10^{6}\) synapses, should the random walk wander through synaptic-weight vectors, firing-rate trajectories, or entire spike sequences? Each choice alters \(\lvert S\rvert\) by orders of magnitude and, thus, shifts \(K\) by an additive constant.
The upshot is that these modeling contingencies must be made transparent; otherwise, convincing skeptics such as Figdor (2022) that cross-lineage comparisons are methodologically sound amounts to hand-waving. \(K\)’s virtue lies precisely in forcing such commitments into the open and rendering their quantitative impact explicit. When those commitments are made conservatively, as in the amoeba and planarian exemplars above, seemingly simple organisms still register gains of many orders of magnitude over chance, which gives preliminary modeling reasons to seat them at the cognitive table (Barron et al., 2023; Lyon et al., 2021; Rorot, 2022; Lyon and Cheng, 2023; Seifert et al., 2024). As a final point to parry the “freewheeling use of functional ascriptions” criticism by Figdor (2022), we note that, while our proposal is substrate-agnostic at the level of the \(P+K\) calculus, the empirical models we proposed show that biological efficiency is ultimately realized by substrate-involving mechanisms that compute with a model. In practice, cells and tissues implement generative-model computations (e.g., via ion channels, bioelectric circuits, gene-regulatory and cytoskeletal dynamics) that evaluate options over a finite prediction horizon and thereby select paths of least action or, equivalently, maximal efficiency. This “model computation” explains why realizers matter: bioelectric and morphological priors and constraints sculpt the space of reachable states that defines the problem and create search efficiency gradients; conversely, editing constraints or operators (as in planarian bioelectric reprogramming) re-tiles the landscape and shortens optimal paths. In this sense, the realizer is not simply a carrier of dynamics (“just physics”) but rather the physical possibility condition for there being a problem and the computational means by which problem-solving efficiency is achieved.
Therefore, the current paper serves, fundamentally, as a challenge: if what is made measurable and quantifiable here is not cognition, then what is? We re-examined the diverse intelligence research program (Levin et al., 2021; Lyon et al., 2021; Fields and Levin, 2022; Levin, 2022, 2023a; Lyon and Cheng, 2023) through the lens of combinatorial search theory. After a conceptual roadmap in Sect. 3, Sect. 4 introduced a scale-agnostic quintuple \(P=\langle S,O,C,E,H\rangle\) that reformulates classical problem-space analysis so that constraints, evaluation functionals, and predictive horizons are included besides states and operators. On that foundation, we defined search efficiency, \(K\), as the logarithmic ratio of the expected cost of a blind random walk to that of an agentic policy (Eq. (2)). Empirically plausible models of amoeboid chemotaxis (\(K\approx2\)) (Sect. 5) and barium-induced planarian head regeneration (\(K\approx21\)) (Sect. 6) demonstrated that even ostensibly simple organisms prune combinatorial search spaces by several orders of magnitude when judged against conservative null baselines. Upcoming work will further sharpen this apparatus by extensively linking it with the Bayesian mechanics of the Free Energy Principle (Chis-Ciure et al., 2025).
Our overarching ambition has been to stitch a golden thread through the conceptual, methodological, and formal trenches of the “cognition wars” (Adams, 2018) and sharpen a bio-cosmopolitan notion of intelligence, one that acknowledges the skeptics’ call for rigor (Loy et al., 2021), while providing formal purchase on the expansive claims of the diverse intelligence program. The problem-space formalism gives a structured lexicon for describing goal-directed behavior “all the way down” (Levin and Dennett, 2020). As we have shown, this formalism accommodates and encourages empirically traceable parameterization, which addresses concerns about lineage-specificity and substrate-dependence (Figdor, 2022) by tying functional ascriptions to material histories. By operationalizing intelligence via the scalar \(K\), we shift attention from familiar semantic deadlocks toward an experimentally tractable, scale-invariant metric.
The true synthetic power of our approach, however, lies in its multi-scale incarnation. Instead of discrete leaps in representational kind, the resulting picture depicts the major transitions of evolution as compound interest on investments in cross-scale search acceleration. We therefore anticipate that combining the \(P\)+\(K\) calculus with high-resolution multi-omics, live-imaging, and synthetic-biology platforms opens the door to a comparative science of intelligent search. We hope that this will be a powerful toolkit for enabling insight into how biological systems find the answers they continuously seek in difficult, high-dimensional spaces, and for facilitating the development of intervention strategies in biomedicine and bioengineering that take advantage of biological search efficiency to induce desired outcomes. In the end, the value of a continuous view of life and mind will be demonstrated by the empirical utility of communicating with, and benefiting from, the wisdom of the agential material of life. The “mark of the cognitive” (Adams and Garrison, 2013), then, is best sought in the measurable efficiency with which living systems, from single cells to complex organisms, traverse energy and information gradients to tame combinatorial explosions, one problem space at a time.
Data availability
Not applicable.
Code availability
Not applicable.
Notes
In a variational embedding of the present formalism developed elsewhere, we take \(E\) to be the variational free energy (VFE) at the relevant scale, and for policy selection over a finite horizon \(H\), the effective objective becomes expected free energy (the finite-horizon path integral of future free-energy terms), so optimal search trajectories coincide with steepest-descent (least-action) flows on VFE (Friston, 2010, 2019; Parr et al., 2022; Friston et al., 2023). Moreover, under the usual variational decomposition, ‘risk’ aligns with expected complexity (or minimal description length), while ‘ambiguity’ quantifies expected conditional entropy; hence, \(E\) operationalizes a complexity-minimizing objective under accuracy constraints.
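Under the standard active-inference decomposition alluded to here (notation following Parr et al., 2022: \(\pi\) a policy, \(o_\tau\) outcomes and \(s_\tau\) states at future time \(\tau\) within the horizon \(H\)), the expected free energy can be written as

\[
G(\pi) \;=\; \sum_{\tau \le H} \Bigl(
\underbrace{D_{\mathrm{KL}}\!\bigl[\,q(o_\tau \mid \pi)\,\|\,p(o_\tau)\,\bigr]}_{\text{risk (expected complexity)}}
\;+\;
\underbrace{\mathbb{E}_{q(s_\tau \mid \pi)}\bigl[\mathrm{H}\!\left[p(o_\tau \mid s_\tau)\right]\bigr]}_{\text{ambiguity (expected entropy)}}
\Bigr),
\]

which makes explicit the alignment claimed in the note: the risk term penalizes divergence from prior preferences (expected complexity), while the ambiguity term penalizes uninformative observation likelihoods (expected conditional entropy).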
References
Adams, F. (2018). Cognition wars. Studies in History and Philosophy of Science Part A, 68, 20–30. https://doi.org/10.1016/j.shpsa.2017.11.007
Adams, F., & Aizawa, K. (2010). The bounds of cognition (1st ed.). Malden, MA, US: Wiley-Blackwell. https://doi.org/10.1002/9781444391718
Adams, F., & Garrison, R. (2013). The mark of the cognitive. Minds and Machines, 23(3), 339–352. https://doi.org/10.1007/s11023-012-9291-1
Allen, C. (2017). On (not) defining cognition. Synthese, 194(11), 4233–4249. https://doi.org/10.1007/s11229-017-1454-4
Baluška, F., & Levin, M. (2016). On having No head: Cognition throughout biological systems. Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.00902. Publisher: Frontiers.
Baluška, F., Reber, A. S., & Miller, W. B. (2022). Cellular sentience as the primary source of biological order and evolution. Bio Systems, 218, 104694. https://doi.org/10.1016/j.biosystems.2022.104694
Barbieri, M. (2008). Biosemiotics: A new understanding of life. Die Naturwissenschaften, 95(7), 577–599. https://doi.org/10.1007/s00114-008-0368-x
Barron, A. B., Halina, M., & Klein, C. (2023). Transitions in cognitive evolution. Proceedings of the Royal Society B: Biological Sciences, 290(2002), 20230671. https://doi.org/10.1098/rspb.2023.0671. Publisher: Royal Society.
Bayne, T., Brainard, D., Byrne, R. W., Chittka, L., Clayton, N., Heyes, C., Mather, J., Ölveczky, B., Shadlen, M., Suddendorf, T., & Webb, B. (2019). What is cognition? Current Biology, 29(13), 608–615. https://doi.org/10.1016/j.cub.2019.05.044. Publisher: Elsevier.
Beane, W. S., Morokuma, J., Adams, D. S., & Levin, M. (2011). A chemical genetics approach reveals H, K-ATPase-mediated membrane voltage is required for planarian head regeneration. Chemistry & Biology, 18(1), 77–89. https://doi.org/10.1016/j.chembiol.2010.11.012
Beane, W. S., Morokuma, J., Lemire, J. M., & Levin, M. (2013). Bioelectric signaling regulates head and organ size during planarian regeneration. Development (Cambridge, England), 140(2), 313–322. https://doi.org/10.1242/dev.086900
Bechtel, W. (2018). The importance of constraints and control in biological mechanisms: Insights from cancer research. Philosophy of Science, 85(4), 573–593. https://doi.org/10.1086/699192
Bhowmik, A., Rappel, W.-J., & Levine, H. (2016). Excitable waves and direction-sensing in Dictyostelium Discoideum: Steps towards a chemotaxis Model. Physical Biology, 13(1), 016002. https://doi.org/10.1088/1478-3975/13/1/016002
Birkholz, T. R., Van Huizen, A. V., & Beane, W. S. (2019). Staying in shape: Planarians as a model for understanding regenerative morphology. Seminars in Cell & Developmental Biology, 87, 105–115. https://doi.org/10.1016/j.semcdb.2018.04.014
Biswas, S., Clawson, W., & Levin, M. (2022). Learning in transcriptional network models: Computational discovery of pathway-level memory and effective interventions. International Journal of Molecular Sciences, 24(1), 285. https://doi.org/10.3390/ijms24010285
Biswas, S., Manicka, S., Hoel, E., & Levin, M. (2021). Gene regulatory networks exhibit several kinds of memory: Quantification of memory in biological and random transcriptional networks. iScience, 24(3). https://doi.org/10.1016/j.isci.2021.102131. Publisher: Elsevier.
Blackiston, D., Dromiack, H., Grasso, C., Varley, T. F., Moore, D. G., Srinivasan, K. K., Sporns, O., Bongard, J., Levin, M., & Walker, S. I. (2025). Revealing non-trivial information structures in aneural biological tissues via functional connectivity. PLOS Computational Biology, 21(4), 1012149. https://doi.org/10.1371/journal.pcbi.1012149
Bongard, J., & Levin, M. (2023). There’s plenty of room right here: Biological systems as evolved, overloaded, multi-scale machines. Biomimetics, 8(1), 110. https://doi.org/10.3390/biomimetics8010110. Publisher: Multidisciplinary Digital Publishing Institute.
Bose, J. C. (1902). Response in the living and non-living. London and New York: Longmans, Green, and Co.
Bosgraaf, L., & Van Haastert, P. J. M. (2009). The ordered extension of pseudopodia by amoeboid cells in the absence of external cues. PLoS One, 4(4), 5253. https://doi.org/10.1371/journal.pone.0005253
Brier, S. (2008). Cybersemiotics. Toronto: University of Toronto Press. https://doi.org/10.3138/9781442687813
Burns, B. D., & Vollmeyer, R. (2000). Problem solving: Phenomena in search of a thesis. In Proceedings of the Annual Meeting of the Cognitive Science Society, 22(22).
Cervera, J., Levin, M., & Mafe, S. (2021). Morphology changes induced by intercellular gap junction blocking: A reaction-diffusion mechanism. Biosystems, 209, 104511. https://doi.org/10.1016/j.biosystems.2021.104511
Cervera, J., Pietak, A., Levin, M., & Mafe, S. (2018). Bioelectrical coupling in multicellular domains regulated by gap junctions: A conceptual approach. Bioelectrochemistry, 123, 45–61. https://doi.org/10.1016/j.bioelechem.2018.04.013
Chis-Ciure, R. (2025). Consciousness science and constitutive a priori principles: On the fundamental identity of integrated information theory. Philosophical Explorations, 1–25. https://doi.org/10.1080/13869795.2025.2550245
Chis-Ciure, R., Levin, M., & Seth, A. (2025). Taming combinatorial explosions: A variational principle for diverse intelligence as multi-scale efficient search. Unpublished manuscript (in preparation).
Chis-Ciure, R., Melloni, L., & Northoff, G. (2024). A measure centrality index for systematic empirical comparison of consciousness theories. Neuroscience and Biobehavioral Reviews, 161, 105670. https://doi.org/10.1016/j.neubiorev.2024.105670
Chugh, P., & Paluch, E. K. (2018). The actin cortex at a glance. Journal of Cell Science, 131(14), 186254. https://doi.org/10.1242/jcs.186254
Clawson, W. P., & Levin, M. (2023). Endless forms most beautiful 2.0: Teleonomy and the bioengineering of chimaeric and synthetic organisms. Biological Journal of the Linnean Society, 139(4), 457–486. https://doi.org/10.1093/biolinnean/blac073
Cochet-Escartin, O., Demircigil, M., Hirose, S., Allais, B., Gonzalo, P., Mikaelian, I., Funamoto, K., Anjard, C., Calvez, V., & Rieu, J.-P. (2021). Hypoxia triggers collective aerotactic migration in Dictyostelium discoideum. eLife, 10, e64731. https://doi.org/10.7554/eLife.64731. Publisher: eLife Sciences Publications, Ltd.
Copeland, B. J. (2024). The church-turing thesis. In E. N. Zalta & U. Nodelman (Eds.), The Stanford encyclopedia of philosophy, Winter 2024 edn. Stanford, CA: Metaphysics Research Lab, Stanford University.
Copeland, B. J., & Shagrir, O. (2020). Physical computability theses. In M. Hemmo & O. Shenker (Eds.), Quantum, probability, logic (pp. 217–231). Cham: Springer. https://doi.org/10.1007/978-3-030-34316-3_9. Series Title: Jerusalem Studies in Philosophy and History of Science
Davies, J., & Levin, M. (2023). Synthetic morphology with agential materials. Nature Reviews Bioengineering, 1(1), 46–59. https://doi.org/10.1038/s44222-022-00001-9. Publisher: Nature Publishing Group.
Deacon, T. W. (2012). Incomplete nature: How mind emerged from matter (1st ed.). New York, London: W.W. Norton.
Delvenne, J.-C. (2009). What is a universal computing machine? Applied Mathematics and Computation, 215(4), 1368–1374. https://doi.org/10.1016/j.amc.2009.04.057
Dennett, D. C. (1998). The intentional stance (7th printing). Cambridge, MA: A Bradford Book, MIT Press.
Dretske, F. I. (1986). Knowledge & the flow of information (4th printing). Cambridge, MA: A Bradford Book, MIT Press.
Durant, F., Bischof, J., Fields, C., Morokuma, J., LaPalme, J., Hoi, A., & Levin, M. (2019). The role of early bioelectric signals in the regeneration of planarian Anterior/Posterior polarity. Biophysical Journal, 116(5), 948–961. https://doi.org/10.1016/j.bpj.2019.01.029
Durant, F., Morokuma, J., Fields, C., Williams, K., Adams, D. S., & Levin, M. (2017). Long-term, stochastic editing of regenerative anatomy via targeting endogenous bioelectric gradients. Biophysical Journal, 112(10), 2231–2243. https://doi.org/10.1016/j.bpj.2017.04.011
Dussutour, A. (2021). Learning in single cell organisms. Biochemical & Biophysical Research Communications, 564, 92–102. https://doi.org/10.1016/j.bbrc.2021.02.018
Ellia, F., & Chis-Ciure, R. (2022). Consciousness and complexity: Neurobiological naturalism and integrated information theory. Consciousness and Cognition, 100, 103281. https://doi.org/10.1016/j.concog.2022.103281
Emmons-Bell, M., Durant, F., Tung, A., Pietak, A., Miller, K., Kane, A., Martyniuk, C. J., Davidian, D., Morokuma, J., & Levin, M. (2019). Regenerative adaptation to electrochemical perturbation in Planaria: A molecular analysis of physiological plasticity. iScience, 22, 147–165. https://doi.org/10.1016/j.isci.2019.11.014
Endres, R. G., & Wingreen, N. S. (2008). Accuracy of direct gradient sensing by single cells. Proceedings of the National Academy of Sciences of the United States of America, 105(41), 15749–15754. https://doi.org/10.1073/pnas.0804688105
Fábregas-Tejeda, A., & Sims, M. (2025). On the prospects of basal cognition research becoming fully evolutionary: Promising avenues and cautionary notes. History and Philosophy of the Life Sciences, 47(1), 10. https://doi.org/10.1007/s40656-025-00660-y
Facchin, M. (2023). Why can’t we say what cognition is (at least for the time being). Philosophy and the Mind Sciences, 4. https://doi.org/10.33735/phimisci.2023.9664
Fields, C. (2024). The free energy principle induces intracellular compartmentalization. Biochemical & Biophysical Research Communications, 723, 150070. https://doi.org/10.1016/j.bbrc.2024.150070
Fields, C., Friston, K., Glazebrook, J. F., & Levin, M. (2022). A free energy principle for generic quantum systems. Progress in Biophysics and Molecular Biology, 173, 36–59. https://doi.org/10.1016/j.pbiomolbio.2022.05.006
Fields, C., Glazebrook, J. F., & Levin, M. (2021). Minimal physicalism as a scale-free substrate for cognition and consciousness. Neuroscience of Consciousness, 2021(2), 013. https://doi.org/10.1093/nc/niab013
Fields, C., & Levin, M. (2022). Competency in navigating arbitrary spaces as an invariant for analyzing cognition in diverse embodiments. Entropy, 24(6), 819. https://doi.org/10.3390/e24060819. Number: 6 Publisher: Multidisciplinary Digital Publishing Institute.
Fields, C., & Levin, M. (2023). Regulative development as a model for origin of life and artificial life studies. Bio Systems, 229, 104927. https://doi.org/10.1016/j.biosystems.2023.104927
Figdor, C. (2022). What could cognition be, if not human cognition?: Individuating cognitive abilities in the light of evolution. Biology & Philosophy, 37(6), 52. https://doi.org/10.1007/s10539-022-09880-z
Figdor, C. (2024). Why plant cognition is not (yet) out of the woods. In Philosophy of plant cognition (1st ed., pp. 19–36). New York: Routledge. https://doi.org/10.4324/9781003393375-3
Fodor, J. A. (2002). A theory of content and other essays (4th printing). Cambridge, MA: A Bradford Book, MIT Press.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. https://doi.org/10.1038/nrn2787. Publisher: Nature Publishing Group.
Friston, K. (2019). A free energy principle for a particular physics. arXiv preprint arXiv:1906.10184 [q-bio]. https://doi.org/10.48550/arXiv.1906.10184
Friston, K., Da Costa, L., Sajid, N., Heins, C., Ueltzhöffer, K., Pavliotis, G. A., & Parr, T. (2023). The free energy principle made simpler but not too simple. Physics Reports, 1024, 1–29. https://doi.org/10.1016/j.physrep.2023.07.001
Friston, K., Heins, C., Verbelen, T., Da Costa, L., Salvatori, T., Markovic, D., Tschantz, A., Koudahl, M., Buckley, C., & Parr, T. (2025). From pixels to planning: Scalefree active inference. Frontiers in Network Physiology, 5. https://doi.org/10.3389/fnetp.2025.1521963. Publisher: Frontiers.
Friston, K., Levin, M., Sengupta, B., & Pezzulo, G. (2015). Knowing one’s place: A free-energy approach to pattern regulation. Journal of the Royal Society Interface, 12(105), 20141383. https://doi.org/10.1098/rsif.2014.1383
Gershman, S. J., Balbi, P. E., Gallistel, C. R., & Gunawardena, J. (2021). Reconsidering the evidence for learning in single cells. eLife, 10, e61907. https://doi.org/10.7554/eLife.61907. Publisher: eLife Sciences Publications, Ltd.
Godfrey-Smith, P. (2009). Darwinian populations and natural selection. New York: Oxford University press.
Grünwald, P., & Roos, T. (2019). Minimum description length revisited. International Journal of Mathematics for Industry, 11(1), 1930001. https://doi.org/10.1142/S2661335219300018. Publisher: World Scientific Publishing Co.
Harrison, D., Rorot, W., & Laukaityte, U. (2022). Mind the matter: Active matter, soft robotics, and the making of bio-inspired artificial intelligence. Frontiers in Neurorobotics, 16. https://doi.org/10.3389/fnbot.2022.880724. Publisher: Frontiers.
Herant, M., & Dembo, M. (2010). Form and function in cell motility: From fibroblasts to Keratocytes. Biophysical Journal, 98(8), 1408–1417. https://doi.org/10.1016/j.bpj.2009.12.4303
Hinton, G. E., & Zemel, R. (1993). Autoencoders, minimum description length and helmholtz free energy. In J. Cowan, G. Tesauro, & J. Alspector (Eds.), Advances in neural information processing systems (Vol. 6). Morgan-Kaufmann
Höfer, T., Sherratt, J. A., & Maini, P. K. (1995). Dictyostelium discoideum: Cellular self-organization in an excitable biological medium. Proceedings of the Royal Society of London. Series B: Biological Sciences, 259(1356), 249–257. https://doi.org/10.1098/rspb.1995.0037. Publisher: Royal Society.
Hu, B., Chen, W., Rappel, W.-J., & Levine, H. (2010). Physical limits on cellular sensing of spatial gradients. Physical Review Letters, 105(4), 048104. https://doi.org/10.1103/PhysRevLett.105.048104. Publisher: American Physical Society.
Huang, S., Ernberg, I., & Kauffman, S. (2009). Cancer attractors: A systems view of tumors from a gene network dynamics and developmental perspective. Seminars in Cell & Developmental Biology, 20(7), 869–876. https://doi.org/10.1016/j.semcdb.2009.07.003
Iglesias, P. A., & Devreotes, P. N. (2008). Navigating through models of chemotaxis. Current Opinion in Cell Biology, 20(1), 35–40. https://doi.org/10.1016/j.ceb.2007.11.011
James, W. (1995). The principles of psychology (Vol. 1). New York: Dover. (Facsimile of the 1890 edition, New York: Henry Holt.)
Juarrero, A. (2023). Context changes everything: How constraints create coherence. Cambridge, MA: The MIT Press.
Katz, Y., & Fontana, W. (2022). Probabilistic inference with polymerizing biochemical circuits. Entropy, 24(5), 629. https://doi.org/10.3390/e24050629. Number: 5 Publisher: Multidisciplinary Digital Publishing Institute.
Katz, Y., Springer, M., & Fontana, W. (2018). Embodying probabilistic inference in biochemical circuits. arXiv preprint arXiv:1806.10161 [q-bio]. https://doi.org/10.48550/arXiv.1806.10161
Kaygisiz, K., & Ulijn, R. V. (2025). Can molecular systems learn? ChemSystemsChem, 7(2), e202400075. https://doi.org/10.1002/syst.202400075
Kouvaris, K., Clune, J., Kounios, L., Brede, M., & Watson, R. A. (2017). How evolution learns to generalise: Using the principles of learning theory to understand the evolution of developmental organisation. PLOS Computational Biology, 13(4), 1005358. https://doi.org/10.1371/journal.pcbi.1005358
Kuchling, F., Friston, K., Georgiev, G., & Levin, M. (2020). Morphogenesis as bayesian inference: A variational approach to pattern formation and control in complex biological systems. Physics of Life Reviews, 33, 88–108. https://doi.org/10.1016/j.plrev.2019.06.001
Lagasse, E., & Levin, M. (2023). Future medicine: From molecular pathways to the collective intelligence of the body. Trends in Molecular Medicine, 29(9), 687–710. https://doi.org/10.1016/j.molmed.2023.06.007
Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science (new York, NY), 350(6266), 1332–1338. https://doi.org/10.1126/science.aab3050
Lane, N. (2022). Transformer: The deep chemistry of life and death. London: Profile Books.
Levin, M. (2019). The computational boundary of a “self”: Developmental bioelectricity drives multicellularity and scale-free cognition. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.02688. Publisher: Frontiers.
Levin, M. (2021). Life, death, and self: Fundamental questions of primitive cognition viewed through the lens of body plasticity and synthetic organisms. Biochemical & Biophysical Research Communications, 564, 114–133. https://doi.org/10.1016/j.bbrc.2020.10.077
Levin, M. (2022). Technological approach to mind everywhere: An experimentally- grounded framework for understanding diverse bodies and minds. Frontiers in Systems Neuroscience, 16, 768201. https://doi.org/10.3389/fnsys.2022.768201
Levin, M. (2023a). Bioelectric networks: The cognitive glue enabling evolutionary scaling from physiology to mind. Animal Cognition, 26(6), 1865–1891. https://doi.org/10.1007/s10071-023-01780-3
Levin, M. (2023b). Collective intelligence of morphogenesis as a teleonomic process. In P. A. Corning, S. A. Kauffman, D. Noble, J. A. Shapiro, R. I. Vane-Wright, & A. Pross (Eds.), Evolution “on purpose”: Teleonomy in living systems (pp. 175–197). Cambridge, MA: The MIT Press. https://doi.org/10.7551/mitpress/14642.003.0013
Levin, M. (2023c). Collective intelligence of morphogenesis as a teleonomic process. In P. A. Corning, S. A. Kauffman, D. Noble, J. A. Shapiro, R. I. Vane-Wright, & A. Pross (Eds.), Evolution “on purpose” (pp. 175–198). The MIT Press. https://doi.org/10.7551/mitpress/14642.003.0013
Levin, M. (2023d). Darwin’s agential materials: Evolutionary implications of multiscale competency in developmental biology. Cellular and molecular life sciences: CMLS. 80(6), 142. https://doi.org/10.1007/s00018-023-04790-z
Levin, M. (2024). Self-improvising memory: A perspective on memories as agential, dynamically reinterpreting cognitive glue. Entropy (basel, Switzerland), 26(6), 481. https://doi.org/10.3390/e26060481
Levin, M. (2025). The multiscale wisdom of the body: Collective intelligence as a tractable interface for next-generation biomedicine. BioEssays, 47(3), 202400196. https://doi.org/10.1002/bies.202400196. eprint: https://onlinelibrary.wiley.com
Levin, M., & Dennett, D. C. (2020). Cognition all the way down: Biology’s next great horizon is to understand cells, tissues and organisms as agents with agendas (even if unthinking ones). Aeon (Essay)
Levin, M., Keijzer, F., Lyon, P., & Arendt, D. (2021). Uncovering cognitive similarities and differences, conservation and innovation. Philosophical Transactions of the Royal Society B: Biological Sciences, 376(1821), 20200458. https://doi.org/10.1098/rstb.2020.0458
Levin, M., & Martyniuk, C. J. (2018). The bioelectric code: An ancient computational medium for dynamic control of growth and form. Bio Systems, 164, 76–93. https://doi.org/10.1016/j.biosystems.2017.08.009
Levin, M., Pietak, A. M., & Bischof, J. (2019). Planarian regeneration as a model of anatomical homeostasis: Recent progress in biophysical and computational approaches. Seminars in Cell & Developmental Biology, 87, 125–144. https://doi.org/10.1016/j.semcdb.2018.04.003
Levine, H., & Rappel, W.-J. (2013). The physics of eukaryotic chemotaxis. Physics Today, 66(2), 24–30. https://doi.org/10.1063/PT.3.1884
Little, G. E., López-Bendito, G., Rünker, A. E., García, N., Piñon, M. C., Chédotal, A., Molnár, Z., & Mitchell, K. J. (2009). Specificity and plasticity of thalamocortical connections in Sema6A mutant mice. PLoS Biology, 7(4), e1000098. https://doi.org/10.1371/journal.pbio.1000098
Livnat, A., & Papadimitriou, C. (2016). Evolution and learning: Used together, fused together. A response to Watson and szathmáry. Trends in Ecology and Evolution, 31(12), 894–896. https://doi.org/10.1016/j.tree.2016.10.004
Loy, I., Carnero-Sierra, S., Acebes, F., Muñiz-Moreno, J., Muñiz-Diez, C., & Sánchez-González, J.-C. (2021). Where association ends. A review of associative learning in invertebrates, plants and protista, and a reflection on its limits. Journal of Experimental Psychology: Animal Learning and Cognition, 47(3), 234–251. https://doi.org/10.1037/xan0000306. Place: US Publisher: American Psychological Association.
Lyon, P. (2015). The cognitive cell: Bacterial behavior reconsidered. Frontiers in Microbiology, 6. https://doi.org/10.3389/fmicb.2015.00264. Publisher: Frontiers.
Lyon, P. (2020). Of what is “minimal cognition” the half-baked version? Adaptive Behavior, 28(6), 407–424. https://doi.org/10.1177/1059712319871360. Publisher: SAGE Publications Ltd STM.
Lyon, P. (2025). Fundamental principles of cognitive biology 2.0. Biological Theory. https://doi.org/10.1007/s13752-025-00497-5
Lyon, P., & Cheng, K. (2023). Basal cognition: Shifting the center of gravity (again). Animal Cognition, 26(6), 1743–1750. https://doi.org/10.1007/s10071-023-01832-8
Lyon, P., Keijzer, F., Arendt, D., & Levin, M. (2021). Reframing cognition: Getting down to biological basics. Philosophical Transactions of the Royal Society B: Biological Sciences, 376(1820), 20190750. https://doi.org/10.1098/rstb.2019.0750. Publisher.
MacKay, D. J. C. (1995). Free energy minimisation algorithm for decoding and cryptanalysis. Electronics Letters, 31(6), 446–447. https://doi.org/10.1049/el:19950331
Manicka, S., & Levin, M. (2019). The cognitive lens: A primer on conceptual tools for analysing information processing in developmental and regenerative morphogenesis. Philosophical Transactions of the Royal Society B: Biological Sciences, 374(1774), 20180369. https://doi.org/10.1098/rstb.2018.0369
Marder, E., & Goaillard, J.-M. (2006). Variability, compensation and homeostasis in neuron and network function. Nature Reviews Neuroscience, 7(7), 563–574. https://doi.org/10.1038/nrn1949
Mathews, J., Chang, A. J., Devlin, L., & Levin, M. (2023). Cellular signaling pathways as plastic, proto-cognitive systems: Implications for biomedicine. Patterns, 4(5), 100737. https://doi.org/10.1016/j.patter.2023.100737
McMillen, P., & Levin, M. (2024). Collective intelligence: A unifying concept for integrating biology across scales and substrates. Communications Biology, 7(1), 378. https://doi.org/10.1038/s42003-024-06037-4. Publisher: Nature Publishing Group.
Mehta, P., & Schwab, D. J. (2014). An exact mapping between the variational renormalization group and deep learning. arXiv preprint arXiv:1410.3831. https://doi.org/10.48550/ARXIV.1410.3831
Miller, W. B., Baluška, F., & Reber, A. S. (2023). A revised central dogma for the 21st century: All biology is cognitive information processing. Progress in Biophysics and Molecular Biology, 182, 34–48. https://doi.org/10.1016/j.pbiomolbio.2023.05.005
Montévil, M., & Mossio, M. (2015). Biological organisation as closure of constraints. Journal of Theoretical Biology, 372, 179–191. https://doi.org/10.1016/j.jtbi.2015.02.029
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3), 113–126. https://doi.org/10.1145/360018.360022
Newmark, P. A., & Sánchez Alvarado, A. (2000). Bromodeoxyuridine specifically labels the regenerative stem cells of planarians. Developmental Biology, 220(2), 142–153. https://doi.org/10.1006/dbio.2000.9645
Nogi, T., & Levin, M. (2005). Characterization of innexin gene expression and functional roles of gap-junctional communication in planarian regeneration. Developmental Biology, 287(2), 314–335. https://doi.org/10.1016/j.ydbio.2005.09.002
Oviedo, N. J., Morokuma, J., Walentek, P., Kema, I. P., Gu, M. B., Ahn, J.-M., Hwang, J. S., Gojobori, T., & Levin, M. (2010). Long-range neural and gap junction protein-mediated cues control polarity during planarian regeneration. Developmental Biology, 339(1), 188–199. https://doi.org/10.1016/j.ydbio.2009.12.012
Parent, C. A., & Devreotes, P. N. (1999). A cell’s sense of direction. Science (New York, NY), 284(5415), 765–770. https://doi.org/10.1126/science.284.5415.765
Parr, T., Pezzulo, G., & Friston, K. J. (2022). Active inference: The free energy principle in mind, brain, and behavior. The MIT Press. https://doi.org/10.7551/mitpress/12441.001.0001
Petersen, C. P., & Reddien, P. W. (2009). Wnt signaling and the polarity of the primary body axis. Cell, 139(6), 1056–1068. https://doi.org/10.1016/j.cell.2009.11.035
Pezzulo, G. (2020). Disorders of morphogenesis as disorders of inference: Comment on “morphogenesis as bayesian inference: A variational approach to pattern formation and control in complex biological systems” by Michael Levin et al. Physics of Life Reviews, 33, 112–114. https://doi.org/10.1016/j.plrev.2020.06.006
Pezzulo, G., LaPalme, J., Durant, F., & Levin, M. (2021). Bistability of somatic pattern memories: Stochastic outcomes in bioelectric circuits underlying regeneration. Philosophical Transactions of the Royal Society B: Biological Sciences, 376(1821), 20190765. https://doi.org/10.1098/rstb.2019.0765
Pezzulo, G., & Levin, M. (2015). Re-membering the body: Applications of computational neuroscience to the top-down control of regeneration of limbs and other complex organs. Integrative Biology: Quantitative Biosciences from Nano to Macro, 7(12), 1487–1517. https://doi.org/10.1039/c5ib00221d
Pezzulo, G., & Levin, M. (2016). Top-down models in biology: Explanation and control of complex living systems above the molecular level. Journal of the Royal Society Interface, 13(124), 20160555. https://doi.org/10.1098/rsif.2016.0555
Pigozzi, F., Goldstein, A., & Levin, M. (2025). Associative conditioning in gene regulatory network models increases integrative causal emergence. Communications Biology, 8(1), 1027. https://doi.org/10.1038/s42003-025-08411-2
Pio-Lopez, L., Kuchling, F., Tung, A., Pezzulo, G., & Levin, M. (2022). Active inference, morphogenesis, and computational psychiatry. Frontiers in Computational Neuroscience, 16. https://doi.org/10.3389/fncom.2022.988977
Piredda, G. (2017). The mark of the cognitive and the coupling-constitution fallacy: A defense of the extended mind hypothesis. Frontiers in Psychology, 8, 2061. https://doi.org/10.3389/fpsyg.2017.02061
Power, D. A., Watson, R. A., Szathmáry, E., Mills, R., Powers, S. T., Doncaster, C. P., & Czapp, B. (2015). What can ecosystems learn? Expanding evolutionary ecology with learning theory. Biology Direct, 10(1), 69. https://doi.org/10.1186/s13062-015-0094-1
Prindle, A., Liu, J., Asally, M., Ly, S., Garcia-Ojalvo, J., & Süel, G. M. (2015). Ion channels enable electrical communication in bacterial communities. Nature, 527(7576), 59–63. https://doi.org/10.1038/nature15709
Reber, A. S., & Baluška, F. (2021). Cognition in some surprising places. Biochemical & Biophysical Research Communications, 564, 150–157. https://doi.org/10.1016/j.bbrc.2020.08.115.
Reddien, P. W. (2018). The cellular and molecular basis for planarian regeneration. Cell, 175(2), 327–345. https://doi.org/10.1016/j.cell.2018.09.021
Reddien, P. W., & Sánchez Alvarado, A. (2004). Fundamentals of planarian regeneration. Annual Review of Cell and Developmental Biology, 20, 725–757. https://doi.org/10.1146/annurev.cellbio.20.010403.095114.
Rorot, W. (2022). Counting with cilia: The role of morphological computation in basal cognition research. Entropy, 24(11), 1581. https://doi.org/10.3390/e24111581
Rosenblueth, A., Wiener, N., & Bigelow, J. (1943). Behavior, purpose and teleology. Philosophy of Science, 10(1), 18–24. https://doi.org/10.1086/286788
Ross, L. N. (2023). The explanatory nature of constraints: Law-based, mathematical, and causal. Synthese, 202(2), 56. https://doi.org/10.1007/s11229-023-04281-5
Rouleau, N., & Levin, M. (2023). The multiple realizability of sentience in living systems and beyond. eNeuro, 10(11). https://doi.org/10.1523/ENEURO.0375-23.2023
Ruffini, G. (2017). An algorithmic information theory of consciousness. Neuroscience of Consciousness, 2017(1), nix019. https://doi.org/10.1093/nc/nix019
Sakiyama, T., & Gunji, Y.-P. (2016). The Kanizsa triangle illusion in foraging ants. BioSystems, 142–143, 9–14. https://doi.org/10.1016/j.biosystems.2016.02.003
Salthe, S. N. (1998). Semiosis as development. In Proceedings of the Joint Conference on the Science and Technology of Intelligent Systems (pp. 730–735).
Schmidhuber, J. (2010). Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3), 230–247. https://doi.org/10.1109/TAMD.2010.2056368
Scimone, M. L., Kravarik, K. M., Lapan, S. W., & Reddien, P. W. (2014). Neoblast specialization in regeneration of the planarian Schmidtea mediterranea. Stem Cell Reports, 3(2), 339–352. https://doi.org/10.1016/j.stemcr.2014.06.001
Seifert, G., Sealander, A., Marzen, S., & Levin, M. (2024). From reinforcement learning to agency: Frameworks for understanding basal cognition. BioSystems, 235, 105107. https://doi.org/10.1016/j.biosystems.2023.105107
Seth, A., & Bayne, T. (2022). Theories of consciousness. Nature Reviews Neuroscience, 23(7), 439–452. https://doi.org/10.1038/s41583-022-00587-4
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379–423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
Shapiro, J. A. (2021). All living cells are cognitive. Biochemical & Biophysical Research Communications, 564, 134–149. https://doi.org/10.1016/j.bbrc.2020.08.120
Solomonoff, R. J. (2009). Algorithmic probability: Theory and applications. In F. Emmert-Streib & M. Dehmer (Eds.), Information theory and statistical learning (pp. 1–23). Boston, MA: Springer. https://doi.org/10.1007/978-0-387-84816-7_1
Sourjik, V., & Vorholt, J. A. (2015). Bacterial networks in cells and communities. Journal of Molecular Biology, 427(23), 3785–3792. https://doi.org/10.1016/j.jmb.2015.10.016
Stoljar, D. (2024). Physicalism. In E. N. Zalta & U. Nodelman (Eds.), The Stanford encyclopedia of philosophy, Spring 2024 edn. Stanford, CA: Metaphysics Research Lab, Stanford University.
Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279–1285. https://doi.org/10.1126/science.1192788
Turner, J. S. (2016). Semiotics of a superorganism. Biosemiotics, 9(1), 85–102. https://doi.org/10.1007/s12304-016-9256-5
Vallverdú, J., Castro, O., Mayne, R., Talanov, M., Levin, M., Baluška, F., Gunji, Y., Dussutour, A., Zenil, H., & Adamatzky, A. (2018). Slime mould: The fundamental mechanisms of biological cognition. Biosystems, 165, 57–70. https://doi.org/10.1016/j.biosystems.2017.12.011
Van Haastert, P. J. M., & Bosgraaf, L. (2009). Food searching strategy of amoeboid cells by starvation induced run length extension. PLoS ONE, 4(8), e6814. https://doi.org/10.1371/journal.pone.0006814
Varley, T. F., Pai, V. P., Grasso, C., Lunshof, J., Levin, M., & Bongard, J. (2024). Identification of brain-like functional information architectures in embryonic tissue of Xenopus laevis. bioRxiv, 2024.12.05.627037. https://doi.org/10.1101/2024.12.05.627037
Veltman, D. M., King, J. S., Machesky, L. M., & Insall, R. H. (2012). SCAR knockouts in Dictyostelium: WASP assumes SCAR’s position and upstream regulators in pseudopods. The Journal of Cell Biology, 198(4), 501–508. https://doi.org/10.1083/jcb.201205058
Wallace, C. S., & Dowe, D. L. (1999). Minimum message length and Kolmogorov complexity. The Computer Journal, 42(4), 270–283. https://doi.org/10.1093/comjnl/42.4.270
Watson, R., & Levin, M. (2023). The collective intelligence of evolution and development. Collective Intelligence, 2(2), 26339137231168355. https://doi.org/10.1177/26339137231168355
Watson, R., Mills, R., Buckley, C. L., Kouvaris, K., Jackson, A., Powers, S. T., Cox, C., Tudge, S., Davies, A., Kounios, L., & Power, D. (2016). Evolutionary connectionism: Algorithmic principles underlying the evolution of biological organisation in evo-devo, evo-eco and evolutionary transitions. Evolutionary Biology, 43(4), 553–581. https://doi.org/10.1007/s11692-015-9358-z
Watson, R. A., & Szathmáry, E. (2016). How can evolution learn? Trends in Ecology and Evolution, 31(2), 147–157. https://doi.org/10.1016/j.tree.2015.11.009
Wenemoser, D., & Reddien, P. W. (2010). Planarian regeneration involves distinct stem cell responses to wounds and tissue absence. Developmental Biology, 344(2), 979–991. https://doi.org/10.1016/j.ydbio.2010.06.017
Williams, K. B., Bischof, J., Lee, F. J., Miller, K. A., LaPalme, J. V., Wolfe, B. E., & Levin, M. (2020). Regulation of axial and head patterning during planarian regeneration by a commensal bacterium. Mechanisms of Development, 163, 103614. https://doi.org/10.1016/j.mod.2020.103614
Zheng, Z., Chis-Ciure, R., Waade, P. T., Eiserbeck, A., Aru, J., Andrillon, T., Jarraya, B., Melloni, L., Northoff, G., Rosas, F., & Dwarakanath, A. (2025). Recurrency as a common denominator for consciousness theories. PsyArxiv. https://doi.org/10.31234/osf.io/wqnzc_v1
Acknowledgements
We would like to thank Anil Seth for his careful and valuable feedback on an early version of the manuscript, as well as the two anonymous reviewers whose insightful comments significantly improved the work.
Funding
R.C-C. was supported by the European Research Council (ERC) under the Horizon 2020 programme (via Grant 10109254). M.L. gratefully acknowledges the support of Eugene Jhong and of the John Templeton Foundation (via Grant 62212).
Author information
Contributions
Both authors contributed substantially to the manuscript and approved it for publication.
Ethics declarations
Generative AI use disclosure
During manuscript preparation, the authors used Grammarly, OpenAI ChatGPT (the o3 model before August 2025 and the 5 Thinking model after August 2025), and the Google Gemini 2.5 Pro model for language editing (grammar, clarity, and stylistic polish), as the authors are not native English speakers. These tools were also consulted for preliminary, broad literature searches to identify potentially relevant works. All AI outputs were reviewed, revised, and verified by the authors; all citations suggested by these tools were independently checked for accuracy and relevance before inclusion. No AI system performed conceptual analysis, interpretation, or drawing of conclusions. No confidential, proprietary, or personal data was provided to these services. AI systems were not listed as contributors, and the human authors accept full responsibility for the manuscript’s content.
Competing interests
The authors declare that no conflicting interests were involved in the conceptualization, writing, or publication of this paper.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Chis-Ciure, R., Levin, M. Cognition all the way down 2.0: neuroscience beyond neurons in the diverse intelligence era. Synthese 206, 257 (2025). https://doi.org/10.1007/s11229-025-05319-6