Reframe Blog

The Pathway to AGI isn’t Intelligence, it’s Shared Cognition

Written by Jeff Szczepanski | Nov 14, 2025 1:09:55 PM

Desperately Seeking Super-Intelligence?

Everyone keeps speculating about when AI will become truly intelligent, but today's models can already generate code, summarize legal briefs, spark new ideas in conversation, and produce entire documents in seconds. And it's often even fun to use. Yet somehow, despite all this, I'm not any less busy, and the work sure hasn't gotten much easier. What is going on here?

Part of this is a reliability and trust problem. LLMs, for all that they seem to know, certainly do make up lots of things. It's like LLMs know everything and nothing at the same time. Many seem to think this is an intelligence problem. OpenAI suggested ChatGPT 5 would be the breakthrough. Now it's ChatGPT 6. Or wait, is it AGI we're waiting for? I'm uncertain, as the narrative evolves faster than their models do.

Joking aside, I believe the pursuit of AGI as an intelligence problem is a red herring. Improving one-shot benchmark scores - whether an LLM can pass the bar exam, ace the SAT, etc. - is a distraction that means almost nothing relative to how useful AI is in your workflow, with your team, trying to get real work done. It's no wonder that the ROI for general knowledge-worker tasks is proving elusive. And if we woke up tomorrow and ChatGPT was suddenly 100% reliable and never hallucinated, I don't think the real problem would go away.

This is because the real problem isn't one of intelligence - it's one of continuity.

Right now, LLMs are largely static black boxes with a chat window glued on - sometimes brilliant in the moment, but always unable to accumulate understanding. There's no memory of what matters, no continuity of goals, no persistent context.

That's not an intelligent agent - that's an idiot savant with amnesia at scale.

The Amnesia Problem

Working with today's AI is like having a teammate with amnesia - like the guy in Memento. It can converse in the moment, but can't effectively recall or evolve. You have to constantly remind and update it because there's no structure for reasoning across time.

Picture a product team: On Monday, they use ChatGPT to brainstorm features for their app. On Tuesday, they're using Claude for code reviews. By Wednesday, they're explaining the entire product context, user base, technical constraints, and business goals to a legal AI drafting terms of service - again. Hours burn away in repetitive context-setting, not actual progress.

This is the real bottleneck: AI that doesn't follow along and doesn't know what matters can't effectively participate in the work that matters.

For AI agents to move beyond mere task execution and become collaborators that contribute meaningfully to human goals - with humans still in control - we need more than better language models. What we need is Shared Cognition.

Why Shared Cognition Is the Missing Link

Real cognition isn’t a clever burst of output from a single prompt. It stems from a continuum of context: the ability to carry goals forward, connect decisions, revisit past reasoning, and evolve with the work.

Today's AI can't do that. Bigger context windows and better RAG pipelines help fetch information, but they don't create shared understanding. Even MCP, useful for connecting models to tools and data sources, doesn't solve this. It's a driver layer that exposes endpoints; it doesn't produce a persistent, evolving workspace. Your Monday brainstorm still disappears by Wednesday, and agents still can't align on goals or decisions over time.

The result is obvious: every interaction starts from zero, and nothing compounds.

To move beyond task execution toward actual collaboration, AI needs a layer where context lives - persistent, structured, and shared across tools, agents, and time. This layer has to be interoperable, transportable, and secure. And it can’t be a proprietary platform. It has to function as open infrastructure that any tool or agent can build on, just as HTML and HTTP serve as shared foundations for the web.

That’s the missing link: the shift from isolated, ephemeral interactions to shared cognition - a durable substrate where humans and agents can reason together and build on what came before. Just as the cerebral cortex coordinates different regions of the brain, we need an infrastructure that spans the applications, people and agents involved in any given working context. We call this the Cortex Layer.

The Cortex Layer: Turning Memory into Infrastructure

The Cortex Layer is the missing substrate for persistent context. In the same way the web created an internet layer for democratized content, this infrastructure creates a layer for coordinated, collaborative contexts.

Based on a free and open protocol leveraging existing standards, it establishes the foundational layer for shared cognition among coordinated intelligences - cognition that is:

  • Structured & Human-Centric: Aligned with human workflows and accountability structures, with an auditable history of decisions, dependencies, and reasoning.

  • Transportable & Collaborative: Durable understanding that outlives any single tool or session, enabling collaboration across multiple tools, agents, and human stakeholders - with seamless agent onboarding/offboarding without losing institutional knowledge.

  • Secure & User-Owned: Persistent context that is shareable, composable, and secure, keeping control in human hands.

In short, the Cortex Layer turns human intent, context, discussions and decisions into first-class digital artifacts - something only now possible with the convergence of modern LLMs, local-first sync, and log-anchored, addressable, cryptographically signed records.
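To make "log-anchored, addressable, cryptographically signed records" concrete, here is a minimal sketch of what such a record could look like. All field names are hypothetical (ASCP's actual schema has not been published), and HMAC-SHA256 stands in for the asymmetric signatures a real protocol would use:

```python
import hashlib
import hmac
import json


def make_record(content: dict, parent_hash, key: bytes) -> dict:
    """Build a log-anchored, content-addressed, signed context record.

    Illustrative only: a canonical JSON encoding gives each record a stable
    hash (its address), `parent` anchors it into an append-only log, and an
    HMAC stands in for a real digital signature.
    """
    body = {"content": content, "parent": parent_hash}
    canonical = json.dumps(body, sort_keys=True).encode()
    address = hashlib.sha256(canonical).hexdigest()  # content address
    signature = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return {"address": address, "signature": signature, **body}


def verify_record(record: dict, key: bytes) -> bool:
    """Check that a record's address and signature match its body."""
    body = {"content": record["content"], "parent": record["parent"]}
    canonical = json.dumps(body, sort_keys=True).encode()
    ok_addr = record["address"] == hashlib.sha256(canonical).hexdigest()
    ok_sig = hmac.compare_digest(
        record["signature"],
        hmac.new(key, canonical, hashlib.sha256).hexdigest(),
    )
    return ok_addr and ok_sig
```

Because each record embeds the address of its parent, any tampering with an earlier decision breaks the chain - which is what makes the history auditable rather than merely stored.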

Starting from the principle that human agency is the prime directive, it defines how humans and agents share structured, persistent context across time and tools. Within this layer, we call each active collaborative workspace a Stream.

A Stream is an evolving, structured fabric of context that captures goals, decisions, and reasoning as they evolve. It's composed of small, addressable units of meaning that can be linked, versioned, and recombined as work progresses. Unlike Slack channels or Notion pages that only record messages or document snapshots, Streams maintain a persistent model of why decisions were made, what information matters, and how pieces of work relate, preserving the structure, dependencies, and rationale that make full continuity and therefore Shared Cognition possible.

Think version control for context instead of for content.
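As a rough illustration of that idea - linked, addressable units of meaning whose rationale can be recovered later - here is a toy Stream. Every name here is an assumption for illustration, not the ASCP design:

```python
from dataclasses import dataclass, field


@dataclass
class ContextUnit:
    """One addressable unit of meaning (field names are illustrative)."""
    uid: str
    kind: str   # e.g. "goal", "decision", "rationale"
    text: str
    links: list = field(default_factory=list)  # uids this unit depends on


class Stream:
    """A minimal, hypothetical Stream: an append-only log of context
    units plus the links that record why each unit exists."""

    def __init__(self):
        self.units = {}
        self.order = []  # append-only insertion order

    def add(self, uid, kind, text, links=()):
        unit = ContextUnit(uid, kind, text, list(links))
        self.units[uid] = unit
        self.order.append(uid)
        return unit

    def rationale(self, uid):
        """Walk the link graph to recover why a unit exists -
        the continuity a chat transcript alone cannot provide."""
        seen, stack, trail = set(), [uid], []
        while stack:
            current = stack.pop()
            if current in seen:
                continue
            seen.add(current)
            trail.append(self.units[current])
            stack.extend(self.units[current].links)
        return trail
```

The point of the sketch: asking `rationale("some-decision")` returns the goals and reasoning the decision depends on, so a new agent (or teammate) joining on Wednesday inherits Monday's context instead of asking for it again.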

This inverts the typical relationship: instead of humans trying to become better prompt engineers, manually feeding things to agents in just the right ways, a Cortex Layer lets humans and agents become fully collaborative partners.

Shared Cognition is the Pathway to "AGI"

When people talk about AGI, I believe what they’re actually seeking is systems that remember what matters, build on context over time, collaborate across conversations, and contribute meaningfully to ongoing work.

That won’t come from smarter LLMs or from walled gardens. It comes from open infrastructure that lets contextual awareness compound instead of resetting with every interaction in every tool.

The future of AI isn’t a race toward artificial minds. It’s a leap toward collective ones.

You can see early hints of this direction in Apple Intelligence, which is trying to bring continuity and context to personal computing. But that vision only becomes transformative when it’s open, interoperable, and available across every tool and platform, not confined to a single ecosystem.

The Cortex Layer is the missing link that enables shared cognition across a collective of minds, human and artificial, to ultimately deliver the intelligence people are looking for.

At Reframe, we’re translating these ideas into reality through the Agent Shared Cognition Protocol (ASCP), the free and open standard powering the Cortex Layer. ASCP makes shared cognition practical and interoperable across tools, agents, and organizations.

We’ll be releasing more details about the Cortex Layer, including draft ASCP specifications, as we work toward a free and open international standard. The goal is for ASCP to function like other IETF-style protocols—HTTP, DNS, SMTP—open, vendor-neutral, and widely implemented, with both FOSS and commercial servers running across the internet.

If you’re building agents or platforms and you’re tired of every AI interaction starting from scratch, we should connect. Early builders don’t just help shape an open standard. They help define the infrastructure that every future application, and every human working with AI, will rely on.