When first experimenting with LLMs and agentic AI, Notion's software engineers approached the technology the way engineers typically do: with elaborate code generation, complex schemas, and heavy instructions.
But the team soon learned through trial and error that they could get rid of all that complicated data modeling. Notion's AI engineering lead Ryan Nystrom and his team instead settled on simple prompts, human-readable representations, minimal abstraction, and familiar markdown formats. The result was dramatically improved model performance.
Applying this updated approach, the AI-native company released version 3 of its productivity software in September. The standout feature: customizable AI agents, which have quickly become Notion's most successful AI tool yet. Judging by usage patterns compared with previous versions, Nystrom considers it far more than an "incremental feature improvement."
“It’s the feeling of the product being pulled out of you instead of you trying to push it,” Nystrom explains in a VB Beyond the Pilot podcast. “We knew very early on that we had something. Now it’s, ‘How can I ever use Notion without this feature?'”
‘Rewiring’ for the age of AI agents
As a traditional software engineer, Nystrom was used to “extremely deterministic” experiences. But a lightbulb moment came when a colleague advised him to simply write his AI prompts the way he would describe a task to a human, rather than creating rules about how agents should behave in different scenarios. The rationale: LLMs are designed to understand, “see” and reason about content in much the same way humans do.
“Now, when I work with AI, I will re-read the directions and tool descriptions and… [ask myself] Is this something I could give to someone without context and who could understand what’s going on?” Nystrom said on the podcast. “If it doesn’t [read that way], it will do a poor job.”
Nystrom and his team moved away from the “quite complicated representation” of data within Notion (such as JSON or XML) and represented Notion pages as markdown, the popular device-independent markup language that defines structure and meaning in plain text, without HTML tags or specialized editors. This lets the model read, search, and edit pages as if they were simple text files.
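The idea of trading a nested data representation for plain markdown can be sketched as below. The block schema here is invented for illustration; it is not Notion's actual internal format.

```python
# Hypothetical sketch: rendering a nested block structure (the kind of
# JSON a document store might hold) as plain markdown before handing it
# to a model. All field names are assumptions for illustration only.

def blocks_to_markdown(blocks, depth=0):
    """Render a list of block dicts as markdown lines."""
    lines = []
    for block in blocks:
        kind, text = block["type"], block.get("text", "")
        if kind == "heading":
            lines.append("#" * block.get("level", 1) + " " + text)
        elif kind == "bullet":
            lines.append("  " * depth + "- " + text)
        elif kind == "paragraph":
            lines.append(text)
        # Children are rendered recursively, indented under their parent.
        lines.extend(blocks_to_markdown(block.get("children", []), depth + 1))
    return lines

page = [
    {"type": "heading", "level": 1, "text": "Q3 Planning"},
    {"type": "bullet", "text": "Ship agents", "children": [
        {"type": "bullet", "text": "Dogfood internally"},
    ]},
    {"type": "paragraph", "text": "Draft due Friday."},
]

print("\n".join(blocks_to_markdown(page)))
```

The output is ordinary markdown a human could read at a glance, which is exactly the property that makes it legible to an LLM as well.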
Ultimately, this required Notion to rewire its systems, with Nystrom’s team focusing largely on the middleware translation layer.
They also recognized early on the importance of restraint when it comes to context. It’s tempting to load as much information as possible into a model, but that can slow things down and confuse the model. For Notion, Nystrom described a limit of 100,000 to 150,000 tokens as the sweet spot.
“There are cases where you can load tons and tons of content into your context window and the model will struggle,” he said. “The more you put in the context window, you’ll see a degradation in performance, latency and also accuracy.”
A spartan approach is also important when it comes to tooling; this can help teams avoid the “slippery slope” of endless features, Nystrom advised. Notion focuses on a “curated menu” of tools rather than a bulky Cheesecake Factory-style menu that creates a paradox of choice for users.
“If people ask for new features, we can just add a tool to the model or agent,” he said. But “the more tools we add, the more decisions the model has to make.”
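The "curated menu" pattern can be sketched as a small tool registry: each tool carries a one-line, human-readable description, and shipping a new feature means registering one more tool rather than reworking the agent. All tool names and behaviors below are invented for illustration, not Notion's actual tool set.

```python
# Hedged sketch of a curated tool menu for an agent. Each tool pairs a
# plain-English description with a function; the menu handed to the
# model stays short and readable.

from typing import Callable

TOOLS: dict[str, tuple[str, Callable[..., str]]] = {}

def tool(name: str, description: str):
    """Register a function as an agent tool with a one-line description."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = (description, fn)
        return fn
    return register

@tool("search_pages", "Find pages whose text matches a query.")
def search_pages(query: str) -> str:
    return f"results for {query!r}"  # stub behavior for illustration

@tool("edit_page", "Apply a markdown edit to a named page.")
def edit_page(page: str, patch: str) -> str:
    return f"edited {page}"  # stub behavior for illustration

def tool_menu() -> str:
    """The short, readable menu given to the model each turn."""
    return "\n".join(f"{name}: {desc}" for name, (desc, _) in TOOLS.items())
```

Keeping every description to one plain sentence is the same discipline as the prompts: if a person without context couldn't pick the right tool from the menu, neither can the model.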
The bottom line: Channel the model. Use APIs as intended. Don’t try to be fancy, don’t try to make it too complicated. Just use English.
Listen to the full podcast and hear more about:
Why AI is still in the pre-Blackberry, pre-iPhone era;
The importance of “dogfooding” in product development;
Why you don’t need to worry about the cost-efficiency of your AI features in the early stages – costs can be optimized later;
How engineering teams can keep tools minimal in the MCP era;
Notion’s evolution from wikis to full-fledged AI assistants.
Subscribe to Beyond the Pilot on Apple Podcasts, Spotify, and YouTube.

