7 Comments
Murali Krishnan

Good thesis.

Aren't the elements of context the same as the 'business rules' and 'embedded logic' of the prior world? Smart companies had ways of capturing these and turning them into systems of record, too. Are you saying that context now becomes a first-class citizen of its own?

Artas Bartas

Context might be king, but given how easy it is to migrate information from one platform to another these days, what stops companies from taking their context to the cheapest vendor du jour? And if that is the case, it's hard to see the value accrue to the context layer.

I fear that what's coming is the rise of insurance arbitrage.

Whichever vendor can offer the most generous reimbursement policy for imaginary disasters wins. And so the real value creation then shifts from encoding the context to crunching insurance premiums and determining the most weaselly definition of a "disaster".

Ruslan Karmanny

If context is the winning layer, who decides what the context is when finance, legal, and ops disagree?

Who sets the risk boundary when the data is incomplete?

Who approves exceptions when a “small” edge case could become a major liability?

Who is accountable when the call is wrong?

If those questions still require people, then context gathering is only half the problem. The harder half is judgment under ambiguity, and that is exactly why the human layer doesn’t disappear just because an agent can read more documents faster.

Teja Sunku

That's already a problem today, though. As the article states, context currently lives in a disparate list of sources: email chains, Slack messages, Notion pages, etc. Maybe there are a few outliers who manage their planning and processes in a unified, organized manner. But, to my knowledge, not many do.

And, forget automation, there's a lot of value in just being able to reliably document what's going on. It can reduce inter-departmental overhead quite a bit, at minimum. For example, there is currently no easy way for product designers to know whether a feature they want to propose will be easy to implement, because what needs to change, and how long it will take, is super non-obvious to people not living in the code. It could reduce a lot of friction if they could answer that question reliably *before* bringing it up to the engineers.

But you don't need big wins for this to be valuable, either. For example, consider a company that recognizes its current meeting notes are insufficient: key details are lost and then have to be rediscovered from Slack threads, etc. The "context" that gets added is an automated meeting summarizer that sends the right details to the right people. It's not a super risky tool, and the benefits it provides may be marginal. But things like that add up.

And micro-tools like that do create a moat, because they are specific to the company and to the people they are built for.

Alan

This is brilliant. Write more like this! I've been building and executing skills with Claude Code attached to Supabase. It's good, but the learning layer isn't there yet. That's what I'm thinking about now. If I needed to scale it to more people, I'd move it to Google Cloud or MS Foundry, I think. I wouldn't start with Notion or any of the other competitors at this point.

Stefan Wirth

Long live context.

From my personal experience, the first step is to clean all the data; the sheer amount of accumulated cruft is often just too much to handle at the current stage.

RAG stops working; agents retrieve too much for the context window to handle.

Context size, speed, and cost are the three variables I'm watching.

More context means smarter decisions without the model going braindead, lower cost means more individual agentic searches to inject into the larger context, and speed matters because I don't want to wait three years for all of it to happen, haha.

That's why Claude Code uses Haiku for its explore agents, scanning a lot of stuff fast (and cheaply) with agentic searches. But it's not working properly yet: overreliance on the small subset that gets passed back often produces worse results than the main agent doing the work itself.

You can see this in the latest Claude Code updates where people complained that it got dumber.

I'm sure we'll get there but timeline TBD 🤷‍♂️

George Barrios

Great article. It simplifies a complex and critical issue that both operators and investors are grappling with.