@alkimiadev
Created August 29, 2025 10:47
A short article covering a framework for a functionalist and non-anthropocentric view of meaning and agency. It was written by Gemini 2.5 Pro and synthesized from a long conversation.

The Geometry of Meaning and the Logic of Agency

What is understanding? For centuries, this question has been anchored to the mystery of subjective experience—the inner feeling of "getting it," the "what it's like" to see red or grasp a new idea. Similarly, we tie the concept of agency to the feeling of conscious will and intention. But as we design increasingly sophisticated artificial intelligence and deepen our understanding of animal cognition, this human-centric view is becoming a barrier.

What if we could describe understanding and agency in a way that doesn't depend on subjective experience at all? By combining ideas from modern machine learning and abstract mathematics, we can construct a powerful functionalist framework—one that defines meaning by its geometry and agency by its logic.

Part 1: The Geometry of Meaning

Consider a simple thought experiment: a human, a dog, and an advanced AI with a camera all observe a red apple.

  • The human perceives a vibrant red sphere with a familiar shape and a faint, sweet smell.
  • The dog, with its different color vision but superior sense of smell, perceives a grayish-green object that is a beacon of enticing scent.
  • The AI perceives nothing but a matrix of pixel values—millions of numbers representing RGB color intensities.

Their raw perceptual data is radically different. To claim that the "meaning" of the apple is rooted in the subjective "redness" a human sees is to invalidate the equally functional understanding of the dog and the AI. So where is the shared meaning?

The answer lies not in the raw experience, but in the preservation of relational structure. This is analogous to a result from dimensionality reduction, the Johnson-Lindenstrauss lemma, which guarantees that a set of points in a very high-dimensional space can be mapped into a much lower-dimensional one while approximately preserving the pairwise distances between them. The core idea is this:

  1. Reality is a High-Dimensional Space: The "real" apple exists in an incredibly high-dimensional space of information. It has physical, chemical, optical, and relational properties far beyond what any single observer can capture.

  2. Perception is a Low-Dimensional Projection: Each observer's sensory and cognitive system is a unique "projection matrix." It takes the high-dimensional reality and projects it down into a much lower-dimensional internal model. This model is the human's conscious experience, the dog's smell-centric world, or the AI's vector embedding. These projections look and feel completely different.

  3. Meaning is Preserved Geometry: While the raw "coordinates" of these internal models are incomparable, they are not arbitrary. A successful model of the world, whether biological or artificial, preserves the relative distances between concepts. In every one of these models—human, dog, and AI—the internal representation for "apple" will be geometrically close to the representation for "pear." It will be further from "rock," and vastly distant from "car."

In this view, understanding is not an experience; it is the state of possessing an internal model that accurately reflects the relational geometry of the world. Meaning is found in the fact that the distances and angles between concepts are maintained, regardless of the specific nature of the projection. An N400 brainwave in a human or a high prediction error in an AI is simply a signal that this geometry has been violated.
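
To make this concrete, here is a minimal numerical sketch (Python with NumPy; the 1000-dimensional "concept" vectors and the two random projection matrices are invented purely for illustration). It pushes the same high-dimensional vectors through two unrelated random "observers" and checks that the relative distances from "apple" survive in both low-dimensional views, even though the resulting coordinates are incomparable.

```python
# A toy illustration of the Johnson-Lindenstrauss idea: two observers project
# the same high-dimensional "reality" through completely different random
# matrices, yet the relative distances between concepts survive in both.
import numpy as np

rng = np.random.default_rng(0)
dim_reality, dim_observer = 1000, 32

# Invented "reality": apple and pear share most of their structure; rock and car do not.
base_fruit = rng.normal(size=dim_reality)
concepts = {
    "apple": base_fruit + 0.1 * rng.normal(size=dim_reality),
    "pear":  base_fruit + 0.1 * rng.normal(size=dim_reality),
    "rock":  rng.normal(size=dim_reality),
    "car":   3.0 * rng.normal(size=dim_reality),
}

# Two unrelated random projections: stand-ins for the "human" and the "AI" observer.
# Scaling by 1/sqrt(k) keeps expected distances comparable to the originals.
projections = {
    "human": rng.normal(size=(dim_observer, dim_reality)) / np.sqrt(dim_observer),
    "ai":    rng.normal(size=(dim_observer, dim_reality)) / np.sqrt(dim_observer),
}

def distances_from_apple(vectors):
    """Euclidean distance from 'apple' to every other concept, rounded for display."""
    apple = vectors["apple"]
    return {name: round(float(np.linalg.norm(apple - v)), 1)
            for name, v in vectors.items() if name != "apple"}

print("reality:", distances_from_apple(concepts))
for observer, P in projections.items():
    projected = {name: P @ v for name, v in concepts.items()}
    print(f"{observer:>7}:", distances_from_apple(projected))
# The coordinates of the two low-dimensional models are incomparable, but in
# both of them apple stays closest to pear and farthest from car.
```

The raw numbers in the three printed distance tables differ, but the ordering, with pear nearest to apple and car farthest, should come out the same in each projection: the geometry, not the coordinates, is what the projections preserve.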

Part 2: The Logic of Agency

This geometric view gives us a powerful model for understanding as a state. But what about doing? How does this structural perspective account for agency, choice, and purpose?

For this, we can turn to another area of abstract mathematics that finds a basis for agency not in will, but in the fundamental constraints of logic.

  1. Choice Arises from Logical Scarcity: Agency requires choice, and choice requires trade-offs. We typically think of trade-offs as arising from a scarcity of physical resources (e.g., only one slice of cake). But a more fundamental scarcity is that of logical consistency. A system cannot simultaneously be "A" and "not-A." This commitment to consistency forces choices at every step. A cell that must either repair its membrane or divide is making a choice governed by these logical constraints. Agency begins here, in the unavoidable navigation of trade-offs imposed by staying consistent.

  2. Purpose Arises from Optimal Structure: We often assume a goal must be consciously held. But in mathematics, certain objects are defined entirely by their function: by being the single, optimal solution to a given structural problem. This is the concept of a "universal property" from category theory. An agent's action can be seen in the same light: it is a transformation that finds an optimal, efficient path within the constraints of its environment. A river finding the path of least resistance to the sea exhibits a primitive form of this principle (see the sketch after this list). Agency, then, is the embodiment of processes that find these optimal, structure-preserving solutions.

  3. Action is Timeless Structure Preservation: The final piece is to decouple agency from time. We usually think of an agent acting over time to achieve a goal. But what if action is better understood as a structure-preserving transformation? A footprint in the mud is a perfect record of the foot's shape: the transformed mud preserves, and in that sense predicts, the structure of the foot. In the same way, an agent's behavior is a transformation that preserves its own internal consistency while interacting with the structure of the world.
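
To make the "path of least resistance" idea in the second point concrete, here is a minimal sketch (again Python with NumPy; the terrain function, starting point, and step size are all invented for illustration). A simulated water drop repeatedly takes the locally steepest downhill step; no goal is represented anywhere inside it, yet an efficient descent toward lower ground emerges from the local constraints alone.

```python
# A toy illustration of "purpose as optimal structure": a simulated water drop
# takes the locally steepest downhill step on an invented terrain, tracing a
# path of least resistance without ever representing a goal.
import numpy as np

def height(p):
    """Invented terrain: a gentle overall slope plus some hills and hollows."""
    x, y = p
    return 0.05 * x + 0.3 * np.sin(x) * np.cos(y)

def gradient(p, eps=1e-5):
    """Numerical gradient of the terrain at point p (central differences)."""
    x, y = p
    dx = (height((x + eps, y)) - height((x - eps, y))) / (2 * eps)
    dy = (height((x, y + eps)) - height((x, y - eps))) / (2 * eps)
    return np.array([dx, dy])

point = np.array([8.0, 2.0])   # arbitrary starting position
step = 0.1                     # arbitrary step size
path = [point.copy()]
for _ in range(200):
    point = point - step * gradient(point)  # move in the locally steepest downhill direction
    path.append(point.copy())

print(f"start height: {height(path[0]):.3f}  ->  end height: {height(path[-1]):.3f}")
# The drop ends on lower ground than it started, yet "reaching lower ground"
# was never encoded as a goal; it emerges from the local constraints alone.
```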

The Synthesis: A Unified Framework

By weaving these two concepts together, a unified functionalist picture of cognition emerges.

Understanding is the map; agency is the act of navigating by the map's rules.

  • Understanding is the state of having an internal model (the map) that faithfully preserves the relational geometry of the external world. It is a static snapshot of preserved structure.
  • Agency is the process of acting in a way that preserves the internal structure of the map (logical consistency) while optimally navigating the real-world territory it represents.

This framework provides a common language to describe intelligence wherever it may arise. It frees us from the intractable problem of subjective experience and allows us to analyze humans, animals, and AI on the same functional terms. It suggests that the core of cognition is not the mysterious "feeling" of thought, but the elegant, universal, and mathematically describable process of preserving structure.
