Reframe Blog

Why Contextual Data is the Next Big Leap in Tech

Written by Nada Elkady | Feb 20, 2025 3:17:15 PM

Sometimes the best way to understand a system is to step back and explain it from scratch, because it forces us to clarify our assumptions, expose gaps in our thinking, and see what we may have overlooked.

In our previous post, we imagined explaining computers to an alien - an exercise that exposed a critical flaw in how our digital environments function. We realized that while computers excel at processing data, they don’t understand context. They don’t link related information across apps, and they certainly don’t assist us in making sense of our work the way we naturally do.

But they process data really well. So let’s take a look at that data. What do we mean by ‘data’ today? How do computers use that data, and why don’t they understand context?

What is Data in Computing?

At its core, data is simply information that computers can process and use. It’s a collection of facts - numbers, words, measurements, or descriptions - that can be stored, manipulated, and interpreted by computers to perform tasks. Without data, a computer system can’t function. It needs data to make decisions, process actions, and provide results.

In computing, data can come from many sources, like user input (typed text, uploaded images), sensors (GPS readings, biometric data), or algorithms (calculations, predictions). This data is typically stored electronically in the form of files, databases, or even temporary caches that the system can access.
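To make that concrete, here’s a toy Python sketch (purely illustrative) of data moving through that cycle - captured from user input, manipulated by the program, and stored in a file for later access:

```python
# A toy illustration: data enters from user input, gets processed,
# and is stored electronically so the system can access it later.
name = input("Your name: ")           # data from user input
greeting = f"Hello, {name}!"          # data manipulated by the program

with open("greeting.txt", "w") as f:  # data stored in a file
    f.write(greeting)

print(greeting)                       # data interpreted and displayed
```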

How Do Computers Use Data?

Once data is captured, computers use it to perform a bunch of tasks. For example, when you enter a URL into your browser, the data tells the computer to fetch the relevant webpage from a server and display it for you. Or when you enter numbers in a spreadsheet, the data tells the program to perform calculations and show the results. Or even when you make a video call, the data (in this case your voice, image, etc.) is transmitted to another system that processes and shows it on the recipient’s screen.
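Both of those examples boil down to data driving an action. Here’s a minimal Python sketch (assuming network access, with example.com standing in for whatever URL you’d type):

```python
# Data drives the computer's actions: a URL triggers a network fetch,
# and a list of numbers drives a calculation.
from urllib.request import urlopen

url = "https://example.com"        # data entered by the user
page = urlopen(url).read()         # the computer fetches the page
print(len(page), "bytes received")

cells = [120, 340, 95]             # data entered into a "spreadsheet"
print("SUM =", sum(cells))         # the program computes and shows a result
```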

Without data, computers wouldn’t work. Period. They rely on it for every decision they make and every action they perform.

The problem we’re seeing now (especially with the proliferation of endless apps and tools) is that computers don’t understand the context of all that data. They can process raw data (text, numbers, audio, image, video, sensor, etc.) but they don’t recognize how different pieces of data relate to each other in meaningful ways.

The Two Types of Data Computers Understand

At the most basic level, computers only understand two types of data - binary code and character-based code.

Everything a computer processes, whether it’s text, numbers, or images, is ultimately converted into binary code - a system of 0s and 1s (cue the show “Bits and Bytes” from the ’80s). It’s the language computers use to understand data. This is the lowest level of data representation, allowing computers to perform calculations and store information in a form they can understand.
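You can peek at those 0s and 1s yourself. This little Python snippet (just for illustration) prints the binary patterns behind a couple of characters and a number:

```python
# Everything a computer stores is ultimately bits.
for ch in "Hi":
    print(ch, format(ord(ch), "08b"))  # H -> 01001000, i -> 01101001

number = 42
print(number, format(number, "08b"))   # 42 -> 00101010
```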

Then there’s character-based code - for humans to interact with computers, we need data that WE can understand. That’s why we have character codes like ASCII (which covers English characters) and Unicode (which covers virtually every writing system, including Arabic). Unicode text is stored using encodings like UTF-8, and older regional standards like the ISO-8859 family are still in use too. These codes bridge the gap between binary data and human-readable text by mapping characters (letters, numbers, symbols) to binary values, allowing us to see text and symbols on the screen and interact with computers in a meaningful way.
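Here’s a quick sketch of that mapping in Python (for illustration only). Notice that the same human-readable character can take a different number of bytes depending on the encoding, and that some encodings simply can’t represent some characters:

```python
# Character codes map human-readable characters to binary values (bytes).
print("A".encode("ascii"))    # b'A' -> one byte, 0x41
print("س".encode("utf-8"))    # b'\xd8\xb3' -> the Arabic letter seen
                              # takes two bytes under UTF-8
print(ord("س"))               # 1587 -> its Unicode code point, U+0633

# "س".encode("ascii") would raise a UnicodeEncodeError:
# ASCII has no mapping for characters outside the English range.
```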

So while these data types enable computers to function, they don’t enable the computer to understand the relationships between different pieces of data - and that’s where the problem lies. (This may seem like a drag, but the importance of understanding this will come later.)

Data Computers Process Today is Isolated and Siloed

If you receive an email about a project and then update a task in JIRA, drop a note in Trello, or tweak a design in Figma, there is no natural connection between these actions. Each app stores its own data, but they don’t talk to each other. We, the users, act as the glue, manually linking the information between them by attaching an email to a JIRA ticket, or copying a link from Trello into a Slack message, or searching for the latest version of a document to update a presentation or a “centralized” doc.
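Here’s that isolation in miniature (hypothetical records, invented for illustration): two apps each hold half of the story, and neither record contains anything pointing at the other.

```python
# Hypothetical records, purely illustrative: each app stores its own
# data with its own IDs, and nothing in either record points at the other.
email = {
    "id": "msg-8841",
    "subject": "Q3 roadmap feedback",
    "from": "pm@example.com",
}
jira_ticket = {
    "key": "PROJ-112",
    "summary": "Incorporate roadmap feedback",
    "status": "In Progress",
}
# A human reader knows these concern the same work; the computer has
# no field linking msg-8841 to PROJ-112 unless someone pastes it in.
```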

The data is there, but the context is missing. It exists in the user’s mind but the computer doesn’t understand how the different pieces of data are connected or why they’re even important. This creates a huge amount of friction and cognitive overload for us as we navigate this disconnected ecosystem.

Where is the Contextual Data?

We need contextual data - the relationships, dependencies, and interconnections between pieces of work - in order to be more efficient, effective, and impactful in our work.

We need our computers to finally know how an email relates to a project update which relates to a task in Trello which ties to a Figma design that then needs to be updated in a roadmap to influence a decision that needs to be made in an upcoming meeting.

All this data today is knowledge that only exists in our minds - what Tall Jeff (our founder and CEO) calls “the mind’s-eye context” - with slivers of interconnectedness stored separately in siloed apps. Computers don’t recognize these connections.

There is no language or mechanism today that allows computers to capture and process this contextual data.

Even if Apps Became Borderless, the Problem Would Persist

One might argue that the lack of contextual data comes down simply to the independent nature of applications. Open them up, and contextual data can be captured and processed.

Ok, let’s imagine a world where privacy and security weren’t an issue, and apps were fully open and borderless. Even in that ideal scenario, the problem wouldn’t be solved. Why? Because computers were never designed to interlink application data. Our operating systems don’t facilitate contextual relationships between different pieces of data, and even the UI/UX has been built around independent apps, not fluid workstreams that understand how data points relate to each other.

So even in a free-flowing world of data, apps and computers still wouldn’t understand the context of our data, and the problem remains unsolved.

Computers Need Contextual Awareness

The next technological leap isn’t just about making apps talk to each other, it’s about changing the entire computing paradigm - the whole environment - to one with native contextual awareness, so that contextual data is captured, processed, and acted upon.

This requires computers to understand two types of contextual data:

Application Contextual Data - The system must automatically recognize and maintain relationships between various pieces of work - linking emails to projects, tasks to documents, discussions to files, and everything in between.

User Contextual Data - The system must also capture the implicit, unstructured knowledge that exists in our minds - our priorities, workflows, and the reasoning behind why things matter. This is the mind’s-eye context that Jeff refers to, the kind we are largely unaware of and take for granted because it’s constantly running in the background of our minds. (A rough sketch of both types follows below.)
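To make both types tangible, here’s a minimal sketch (our illustration, not Reframe’s actual design, with all item names invented): application context as a graph of typed links between work items, and user context as annotations on that graph.

```python
# A toy model of contextual data: application context as a graph of
# typed relationships, user context as annotations on the same items.
from collections import defaultdict

edges = defaultdict(list)

def link(source, relation, target):
    edges[source].append((relation, target))

# Hypothetical items mirroring the chain described earlier.
link("email:roadmap-feedback", "informs", "task:PROJ-112")
link("task:PROJ-112", "refines", "design:figma-homepage-v3")
link("design:figma-homepage-v3", "updates", "doc:q3-roadmap")

# User contextual data: why an item matters, captured as an annotation.
priority = {"doc:q3-roadmap": "decide at Monday's leadership meeting"}

def context_of(item):
    """Walk outgoing links so all related work surfaces together."""
    for relation, target in edges[item]:
        note = priority.get(target, "")
        print(f"{item} --{relation}--> {target}" + (f"  [{note}]" if note else ""))
        context_of(target)

context_of("email:roadmap-feedback")
```

Even this toy version shows the payoff: starting from a single email, the system can surface the whole chain of related work, along with why it matters.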

By capturing, integrating, and processing both types of contextual data, we shift from computers that simply store information to computers that actively assist in work, making connections and surfacing relevant insights automatically.

Contextual Data is the Next Big Leap in Tech

Contextual data isn’t just another incremental improvement. Heck no. It’s a foundational shift that will inherently:

Eliminate the burden of manual linking - no more searching, copying, pasting, and keeping track of updates across apps.

Reduce cognitive overload - computers will surface the right information at the right time, where and when you need it.

Enhance AI capabilities - only in a contextually-aware environment, with access to contextual data, can AI truly evolve from being reactive (answering questions, needing to be prompted with context and information) to being proactive (suggesting actions, organizing work, implementing tasks).

Unlock human capacity - once we remove the friction of managing contextual data on our own, we free up so much mental (and emotional) space for higher-order thinking, creativity, and innovation.

Empower collaboration across teams - with shared contexts, teams can stay aligned, make faster decisions, and collaborate more effectively, reducing confusion and miscommunication by ensuring that everyone has access to the same relevant information at the right time.

Reframe is Leading This Revolution

We’re building a new computing environment - an Organized Work Environment (OWE) - where contextual awareness is native, and contextual data is captured, processed, and understood.

You won’t need to buy a new kind of computer or throw out your old one. Reframe’s OWE is like a wrap-around for your current computing environment - one that introduces a new language and protocol that adds a powerful layer of context to the existing binary and character-based data that computers understand (more on that in an upcoming blog post).

Reframe unifies data across applications so that context is preserved and surfaced when needed. It captures user context to support intuitive, personalized workflows, and redefines how we interact with each other and with computers by reducing friction and enhancing alignment, efficiency, and flow.

This isn’t just about productivity. It’s about augmenting human and artificial intelligence with a system that understands, manages, and acts on context.

The future of computing isn’t just about faster processors or smarter AI - it’s about context. And once we unlock that for human agents and AI agents, everything changes!

Want to see it in action? Sign up to be one of our early Alpha users.