Scaffolding AI · Part 1

What is context and how do you eat it?

About our blindness to complexity and why AI struggles with it.

01

Complex vs hard

One of my favorite definitions of intelligence is that it’s the ability to solve complex problems. However, it is often confused with “the ability to solve hard problems”. Hard implies difficulty, while complex means that multiple interconnected parts are involved.

Although complex problems are in general difficult and hard problems are often complex, I find it necessary to emphasize that complex is the right word. Lifting a 30 kg weight with one hand is hard but not complex. Cooking rice in a pot is not necessarily hard (though one could argue it is), but it is certainly complex if we think about all the elements that take part in doing so.

It is very likely that you never pay attention to all the utensils, steps or strategies involved in day-to-day activities, and that’s absolutely normal. Because our life requires a continuous interaction with an environment of almost infinite complexity, we have evolved to automatically internalize complex situations until they feel trivial.

Our set of tools — which is hard to enumerate — allows us to observe, form assumptions, validate them and extract patterns from our conclusions. Unconsciously, every new pattern becomes part of the unknown structure in which we organize information in our heads, and is then available when we face a new situation in which it might be useful.

Doing this over and over, at an unimaginable speed and with little control, is how we handle incredibly well both known and new complexity.

Cooking rice — a seemingly simple task: water, rice, pot, lid, heat source, temperature, time, water ratio, rinsing and resting all take part.

Built in our image, Artificial Intelligence (AI) is certainly able to handle complexity too, but with some nuances. While our strength lies in how we autonomously interact with the environment and iteratively increase our knowledge, AI comes with a built-in knowledge base that allows it to handle complexity from the get-go, but it is limited both in its interaction toolkit and in its ability to integrate new information.

If you have played around with the latest AI agents, I’m sure you have been both amazed at their ability to answer and reason about general-knowledge information — and by general I mean that it can be found on the internet, not that everyone knows it — and frustrated when they were unable to help you solve a task involving several different systems, tools or topics. Remember complexity?

Don’t get me wrong. AI can handle complexity. It just can’t handle ALL the complexity out of the box. Just as you would not expect someone without experience in a certain task to solve it instantly and in the best way, we cannot expect AI to automatically do so. It simply does not have the “full context”.

Context: the information required to understand a certain situation, statement, environment or idea and make decisions based on it.
02

The nature of context

Amazing! Then we just need to provide our AI with the context it needs, right?

While context has a very clear definition, it has a property that makes it very hard to share effectively. It’s completely subjective.

Our previous knowledge and our strengths and weaknesses affect our perception of every piece of information we gather from an environment. Even if two people with very similar backgrounds read this article, they would very likely differ in how they explained it to a friend.

This is not just due to our nature. Context is everything and nothing at the same time. It is the color of this website, it is the style of writing, it is how you got to this article. In the end, context is information, and information is how we perceive the world. We could easily dive into philosophy here, but let’s try to go back to the topic.

Despite this subjectivity, we are constantly sharing context. We do it at work, at home, with friends — it just feels natural. However, communication is one of those infinitely complex things that we have learnt to trivialize. Communication is so complex because, when shared, context becomes DOUBLY subjective.

Same article, different understanding: Person A and Person B read the same article, yet different lines resonate with different readers.

We used this article before to exemplify that two individuals might infer different insights from it. It was treated as a one-way process, but in reality I am writing it based on my perceptions too. I am giving relevance to topics I believe are useful for an introduction to using AI, assuming a certain knowledge base from the readers, and even writing it in a way I believe could be enjoyable to read.

However, the way you interpret this is completely out of my control. It is the collision of our knowledge and perceptions that will determine the utility, enjoyment or the learnings you extract from the text and its context. This is precisely why in the same class two students learn at different paces, or why requirements shared in a work meeting are not equally clear for every employee.

This opens the gate to the concepts of bad and good context. If the information that we gave to our counterparts allowed them to understand the scenario in the way we wanted, we'd be sharing good context. If it did not, we probably gave them bad context. This is certainly not a black and white science, but rather a scale of greys.

In general we don’t give this much thought because of our inherent toolkit to deal with complexity. When we speak, we try to estimate what our counterpart knows and focus on providing the information we believe they need. When we listen, we use all our knowledge to interpret what we are being told and if it’s not enough we ask, research or feel through our senses.

However, due to laziness, bad estimates of others’ previous knowledge, or other reasons, there are a lot of cases where we give others bad context, forcing them to compensate by asking, guessing or investigating.

Bad context in action: asked “Where are you?”, the answer “Here!” only forces another question.

This is a problem that we need to avoid when dealing with AI, because it has three particularities that differentiate it from us and directly affect the interaction with it.

First of all, it has no autonomy to decide to learn something — or to do anything at all, really; in other words, it is not alive. This means that even if we give it bad context, the AI won’t take an interest in diving deeper into the topic we shared while it is not interacting with us.

Secondly, it does not have the necessary tools to explore the environment out of the box. Even if it thinks the information that we gave is not sufficient, its only way of obtaining more information is from the interaction with the user.

Lastly, we could say AI has two brains, or memories: a long-term one, formed by all the training of the model, which can’t be updated; and a short-term one, which keeps all the information from the current conversation and is lost as soon as the conversation finishes or a new one is started.
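A minimal sketch of this two-memory setup, under toy assumptions: `model_reply` below is a hypothetical placeholder, not a real API. The point it illustrates is that the model’s training (long-term memory) is frozen, so the only thing it can “know” about your situation is whatever is inside the conversation you resend with every call.

```python
# Toy illustration of AI's two memories. `model_reply` is a hypothetical
# stand-in for a real model call: it can only use what is in `history`
# (short-term memory) plus its frozen training data (long-term memory).

def model_reply(history: list[str]) -> str:
    """The 'model' only sees the conversation it is handed."""
    return f"reply based on {len(history)} prior messages"

# A conversation accumulates context turn by turn...
conversation = []
conversation.append("My project uses Python 3.12 and FastAPI.")
print(model_reply(conversation))  # the model now sees 1 message of context

conversation.append("Now add an endpoint for user login.")
print(model_reply(conversation))  # the model sees both messages

# ...but a new conversation starts from zero: nothing is carried over.
new_conversation = []
print(model_reply(new_conversation))  # back to 0 messages of context
```

Real chat APIs work the same way under the hood: each call resends the whole message history, which is why closing a conversation loses everything that was shared in it.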

These differences make the effects of bad context much worse, because AI can’t compensate for it like humans do. We will later also show that they bring some opportunities, but let’s focus on the fact that we really need to provide our agents with good context.

03

Quality vs quantity

Given that we have seen that: i) context is good when it allows the receiver to understand the situation and make decisions based on it, and ii) AI can only use the information that is given to it; the first thing that comes to mind is to provide AI with as much context as we can, so that it can extract from it every element it needs to handle our complex situation. But is more always better?

Given the framing of the question, one can easily guess this is not the case. For humans, more information can be detrimental because both our processing capacity and our attention are limited. If we overload someone with information, even if it is correct, we dilute the signals that truly matter: essential elements compete with less relevant ones, making what’s relevant harder to distinguish.

For AI, this is even more important! If you recall, we mentioned that AI has a long-term memory — based on the information it was trained on — and a short-term memory, which holds the context you share with it but is lost at the end of every conversation. This short-term memory is not infinite. In fact, it is delimited by what we call context windows. Each model has one, and it refers to the amount of information that the AI can hold “in memory” before overflowing.

Context window: cluttered → compressed
In a cluttered window, bloated interaction data and irrelevant noise crowd out the useful context, quickly reaching the limit. After compression there is room to work again, but the interaction, the noise and the useful context have all been compressed alike.
Everything the AI touches fills the window — useful or not. The more relevant the information, the better the results.

Once the amount of data surpasses the limits of this window, the information is compressed, often losing relevant details in exchange for “freeing” space for the next interaction. So flooding an AI with data does double damage: it buries the signal in noise — just as it would with a human — and it fills the context window faster, triggering compaction that degrades everything shared so far.
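The overflow-and-compact behavior can be sketched with deliberately toy assumptions: here “tokens” are just whitespace-separated words, and “compression” is truncating the oldest messages. Real systems use proper tokenizers and model-generated summaries; `MAX_TOKENS`, `tokens` and `compact` are illustrative names, not any library’s API.

```python
# Sketch of context-window compaction under toy assumptions:
# words stand in for tokens, truncation stands in for summarization.

MAX_TOKENS = 20  # a toy context-window limit

def tokens(messages: list[str]) -> int:
    """Count 'tokens' as whitespace-separated words."""
    return sum(len(m.split()) for m in messages)

def compact(messages: list[str]) -> list[str]:
    """Drop detail from the oldest messages until we fit the window."""
    messages = list(messages)
    i = 0
    while tokens(messages) > MAX_TOKENS and i < len(messages):
        # crude "summary": keep only the first three words
        messages[i] = " ".join(messages[i].split()[:3]) + " ..."
        i += 1
    return messages

history = [
    "the deploy script lives in scripts/deploy.sh and needs an API key",
    "the staging database was migrated last Tuesday by the infra team",
    "please write the release notes for version two",
]
print(tokens(history))          # over the toy limit
compacted = compact(history)
print(tokens(compacted))        # fits again, but details are gone
print(compacted[0])             # the oldest message lost most of its content
```

Notice what gets sacrificed: the oldest messages lose exactly the kind of specifics (paths, dates, names) that often turn out to matter later. That is the double damage of flooding the window.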

Good context, then, is not about quantity but about quality. If more information can actually make things worse, what variables make context genuinely useful? It comes down to three: being correct, sufficient and precise.

Correct refers to the truthfulness of what is being shared. If the information is incorrect, everything else does not matter. It is closely tied to how well the sender understands the topic.

Sufficient refers to what must be included to properly understand and act. It is not about what the sender thinks is needed, but about what is actually relevant for the receiver based on its knowledge and capabilities.

Precise refers to how clearly and unambiguously the information is expressed. Vague context leaves too much room for interpretation.

What makes context useful?
Correct: truthful for the problem at hand. Precise: clear, not vague or generic. Sufficient: no more, no less than what’s needed. Missing one leaves context that is accurate but maybe too much, relevant but vague, or detailed but wrong.
Good context lives at the intersection: information that is correct for the problem, precise enough to act on, and only what’s sufficient — nothing more.

These three are not independent — each one only matters if the previous one is already in place. If the information is wrong, it does not matter how much of it you share or how clearly you express it. If key pieces are missing, no amount of precision will close the gap. And without precision, even correct and complete information can be misunderstood.

Getting all three right at once is hard. Especially sufficiency, because there is no reliable way of knowing beforehand exactly what the other side needs to know — we went over this before, context is subjective. Thankfully for us, in this case, AI models' long-term and short-term memories are tools in our favor.

04

The power of iteration

A good way of understanding AI is thinking about it as having access to a technical expert — one who is brilliant in many domains, but suffers from context Alzheimer’s. The expert has an enormous amount of knowledge, at times even more than you, but you constantly need to remind them about all the important details they do not have immediate access to.

While inconvenient — since it would be much better to just have a never-forgetting, omniscient artificial expert — this eternal cycle of calling the artificial intelligence, giving it the context for our goal, and watching it forget everything actually simplifies our task. It allows us to correct bad context.

If we share bad context with a person, we may not even notice until the consequences surface later, once the task, or part of it, is done. More importantly, that person may not be able to just dust it off: bad context might keep affecting future endeavors.

AI models, on the other hand, generate a quicker output which clearly shows us the consequences of our poor context. This allows us to iterate on it faster, starting a new conversation but tweaking the initial context to add more, change wording or remove irrelevant elements, until we get the desired result. New interactions will never suffer the consequences of past bad context!

Iterating on context
Original: result 3/10 → Iteration 1: result 6/10 → Iteration 2: result 10/10

This basically means that AI interaction is a science. You can run A/B tests on context until the result is good enough, no matter the degree of complexity. If you do it enough, and with rigor, you can likely get AI to handle any complexity you want. But this is not trivial, and it is tiring.
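The A/B-testing loop can be sketched like this, with `run_model` and `score` as hypothetical placeholders: in practice the first would be a real model call and the second your own evaluation (passing tests, a rubric, human judgment).

```python
# Sketch of context iteration as an experiment loop.
# `run_model` and `score` are hypothetical stand-ins, not real APIs.

def run_model(context: str) -> str:
    # Toy stand-in: the "model" answers well only if the context
    # mentions both the file format and the key constraint.
    ok = "CSV" in context and "no headers" in context
    return "correct parser" if ok else "wrong guess"

def score(output: str) -> int:
    """Your evaluation of the result, on a 0-10 scale."""
    return 10 if output == "correct parser" else 3

# Three attempts at context for the same task, from vague to sufficient.
variants = [
    "Parse this file.",                         # vague
    "Parse this CSV file.",                     # correct but insufficient
    "Parse this CSV file; it has no headers.",  # correct and sufficient
]

# Keep the variant whose output scores best.
best = max(variants, key=lambda ctx: score(run_model(ctx)))
print(best)  # the variant that produced the best-scoring result
```

The loop mirrors the diagram above it: each iteration tweaks the initial context — adding detail, rewording, removing noise — and the fresh conversation guarantees past bad context cannot contaminate the new attempt.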

We are used to deterministic tools: you press a button and you get the expected result, like a function that receives some data and generates the desired output. Interacting with AI means going from determinism to probabilism. We try to give the right input and check whether we get the output we expect. If we do not, we adapt the input, without truly knowing what exactly would be needed to get the desired result.

As we mentioned before, this is basically what we do with human interaction too. However, in this case we are putting on our shoulders all the context responsibility that we often share in conversations. And there is also a differential factor: the speed of results.

In general, complex tasks take time. This allows us to iteratively gather context for them, divide them into simpler subtasks, and slowly shift the results as we progress. AI churns out solutions for complex tasks in seconds, forcing us to address them directly — maybe even without fully understanding the underlying complexity. This is more draining than one would think.


For now, I think this is more than enough for an introduction, but don't worry. As always with humans, there are ways to reduce all this complexity. In the following chapters, if we can call them so, we will explore how we can make this process less tiring and more effective, entering the world of context engineering.

I hope this first article wasn’t too dense; it’s not a simple topic. I wanted to share not just some basic concepts relevant to this field, but also a perspective on AI and its similarities and differences with ourselves. Happy to get any feedback!

See you in the next iteration, and remember, nobody likes being caught… out of context.