Why “which API should I call?” is the wrong question in the LLM era

For decades we have adapted to software. We learned shell commands, memorized HTTP method names, and wired SDKs together. Every interface assumed we would speak its language. In the 1980s we typed ‘grep’, ‘ssh’, and ‘ls’ into a shell; in the mid-2000s we called REST endpoints such as GET /users; by the 2010s we were importing SDKs (client.orders.list()) so we didn’t have to think about HTTP. But each of these steps rested on the same principle: exposing capabilities in a structured form so that other programs, and the people behind them, could call on them.

But now we are entering the next interface paradigm. Modern LLMs challenge the idea that a user must choose a function or remember a method signature. Instead of “Which API should I call?” the question becomes, “What outcome am I trying to achieve?” In other words, the interface shifts from code to language. In this shift, the Model Context Protocol (MCP) emerges as the abstraction that enables models to interpret human intent, discover capabilities, and execute workflows, effectively exposing software functions not as programmers know them, but as natural language requests.

MCP is not a hype term; multiple independent studies identify the architectural shift required to make tools “LLM-consumable.” A blog post by Akamai engineers describes the transition from traditional APIs to ‘language-driven integrations’ for LLMs. An academic article, “AI agentic workflows and enterprise APIs,” argues that enterprise API architecture must evolve to support purpose-driven agents instead of human-driven calls. In short: we no longer design APIs just for code; we design capabilities for intent.

Why is this important for companies? Because enterprises are drowning in internal systems, proliferating integrations, and user training costs. Employees struggle not because they don’t have tools, but because they have too many tools, each with its own interface. When natural language becomes the primary interface, the barrier of “which function should I call?” disappears. A recent business blog noted that natural language interfaces (NLIs) enable self-service data access for marketers who previously had to wait for analysts to write SQL. When the user simply expresses intent (such as “retrieve last quarter’s revenue for a region”), the system handles the rest.

Natural language becomes not a convenience, but the interface

To understand how this evolution works, consider the interface ladder:

| Era | Interface | Who it was built for |
| --- | --- | --- |
| CLI | Shell commands | Expert users typing text |
| API | Web or RPC endpoints | Developers who integrate systems |
| SDK | Library functions | Programmers who use abstractions |
| Natural Language (MCP) | Intent-based requests | Humans and AI agents explaining what they want |

At every step, humans had to learn the language of the machine. With MCP, the machine absorbs the human’s language and works out the rest. That’s not just a UX improvement; it’s an architectural change.

Under MCP, the code functions are still present: data access, business logic, and orchestration. But they are discovered rather than manually invoked. For example, instead of calling billingApi.fetchInvoices(customerId=…), you say, “View all invoices for Acme Corp since January and flag any late payments.” The model resolves the entities, calls the appropriate systems, filters the results, and returns structured insight. The developer’s work shifts from wiring endpoints to defining capability surfaces and guardrails.
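The discovery step above can be sketched in a few lines. This is a minimal illustration, not a real MCP SDK: the registry, the billing function, and the trivial word-overlap resolver (standing in for the model) are all hypothetical names invented for this example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Capability:
    name: str
    description: str          # language-friendly metadata the model reads
    handler: Callable[..., object]

def fetch_invoices(customer: str, since: str) -> list[dict]:
    # Placeholder: a real system would query the billing service here.
    return [{"customer": customer, "date": "2024-02-10", "late": True}]

REGISTRY = [
    Capability(
        name="billing.fetch_invoices",
        description="list invoices for a customer since a given date",
        handler=fetch_invoices,
    ),
]

def resolve(intent: str) -> Capability:
    """Stand-in for the model: pick the capability whose description
    shares the most words with the user's stated intent."""
    words = set(intent.lower().split())
    return max(REGISTRY, key=lambda c: len(words & set(c.description.split())))

cap = resolve("View all invoices for Acme Corp since January")
result = cap.handler(customer="Acme Corp", since="2024-01-01")
```

The point is the inversion of control: the caller supplies intent, and selection of the concrete function happens at runtime from declared metadata.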

This shift is transforming the developer experience and enterprise integration. Teams often struggle to adopt new tools because they need to map schemas, write glue code, and train users. With a natural-language front end, onboarding involves naming business entities, declaring capabilities, and exposing them through the protocol. The human (or AI agent) no longer needs to know parameter names or call order. Studies show that using LLMs as interfaces to APIs can reduce the time and resources required to develop chatbots or tool-driven workflows.
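Declaring capabilities so an agent can discover them amounts to publishing a manifest. The sketch below is loosely modeled on MCP-style tool descriptions, but the field names and the tool itself are illustrative assumptions, not a normative schema.

```python
import json

# Hypothetical capability manifest served by the protocol layer so that
# agents can discover tools instead of being hard-wired to call order.
manifest = {
    "tools": [
        {
            "name": "crm.find_customer",
            "description": "Look up a customer record by name or email",
            "input_schema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }
    ]
}

# What actually travels over the wire to the agent.
wire_form = json.dumps(manifest)
```

Onboarding a new system then means adding an entry to this catalog rather than writing bespoke glue code for every consumer.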

The change also brings productivity benefits. Companies using LLM-driven interfaces can convert data access latency (hours or days) into call latency (seconds). For example, where an analyst previously needed to export CSVs, perform transformations, and assemble slides, a language interface makes it possible to say “summarize the top five risk factors for churn in the last quarter” and generate narrative and visual elements in one go. People then assess, adapt, and act, shifting from data plumber to decision maker. That matters: according to a study by McKinsey & Company, 63% of organizations using generative AI already create text output, and more than a third generate images or code. While many are still in the early stages of achieving enterprise-wide ROI, the signal is clear: language as an interface unlocks new value.

In architectural terms, this means that software design must evolve. MCP requires systems that publish capability metadata, support semantic routing, retain context memory, and enforce guardrails. API design no longer asks “Which function will the user invoke?” but rather “What intent can the user express?” A recently published framework for enhancing enterprise APIs for LLMs shows how APIs can be enriched with language-friendly metadata so agents can dynamically select tools. The implication: software becomes modular around intent surfaces rather than function surfaces.

Language-first systems also come with risks and demands. Natural language is inherently ambiguous, so companies must implement authentication, logging, provenance, and access control just as they did for APIs. Without these guardrails, an agent could call the wrong system, leak data, or misinterpret intent. One post on “prompt collapse” names the danger: as the natural language interface becomes dominant, software turns into “a capability accessible through conversation” and the business into “an API with a natural language front end.” That transformation is powerful, but only safe if systems are designed for introspection, audit, and governance.
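One way to make those guardrails concrete is to wrap every tool invocation in an access check plus an audit log. The sketch below assumes a role-based allow-list; the roles, tool names, and revenue figure are all hypothetical.

```python
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Hypothetical allow-list: which roles may invoke which tools.
PERMISSIONS = {"analyst": {"finance.quarterly_revenue"}}

class AccessDenied(Exception):
    pass

def guarded(tool_name: str):
    """Decorator: check permissions and write an audit record
    before the underlying tool is allowed to run."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, role: str, **kwargs):
            if tool_name not in PERMISSIONS.get(role, set()):
                log.warning("denied %s for role=%s", tool_name, role)
                raise AccessDenied(tool_name)
            log.info("invoking %s role=%s args=%r", tool_name, role, kwargs)
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@guarded("finance.quarterly_revenue")
def quarterly_revenue(region: str, quarter: str) -> float:
    return 1_250_000.0  # placeholder figure

amount = quarterly_revenue(region="EMEA", quarter="2024-Q1", role="analyst")
```

Because the check and the log sit in one chokepoint, every agent-initiated call leaves a provenance trail regardless of which tool it reaches.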

The shift also has cultural and organizational consequences. For decades, companies have hired integration engineers to design APIs and middleware. With MCP-driven models, companies will increasingly hire ontology engineers, capability architects, and agent-enablement specialists. These roles focus on defining the semantics of business operations, mapping business entities to system capabilities, and managing context memory. Because the interface is now human-oriented, skills such as domain knowledge, prompt framing, monitoring, and evaluation become central.

What should business leaders do today? First, treat natural language as the interface layer, not as a fancy add-on. Map the business workflows that can safely be invoked via language. Next, catalog the underlying capabilities you already have: data services, analytics, and APIs. Then ask: “Are these discoverable? Can they be invoked via intent?” Finally, pilot an MCP-style layer: build out a small domain (customer support triage, say) where a user or agent can express a desired outcome in language and let the systems handle the orchestration. Iterate and scale from there.
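A pilot for the suggested triage domain can start very small. The sketch below assumes three hypothetical downstream capabilities and uses crude keyword matching as a stand-in for model-based intent resolution, just enough to exercise the catalog-and-route loop end to end.

```python
# Hypothetical capability catalog for a support-triage pilot.
CAPABILITIES = {
    "refund": "process a refund request",
    "outage": "escalate a service outage report",
    "faq": "answer a general product question",
}

def triage(ticket: str) -> str:
    """Crude keyword triage standing in for model-based intent
    resolution; returns the capability key to route the ticket to."""
    text = ticket.lower()
    if "refund" in text or "charge" in text:
        return "refund"
    if "down" in text or "outage" in text or "error" in text:
        return "outage"
    return "faq"
```

Once this loop works, the keyword matcher is the piece you swap for the model, while the catalog, routing, and guardrails around it stay in place.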

Natural language is not just the new front end. It is becoming the standard interface layer for software, succeeding the CLI, then the API, then the SDK. MCP is the abstraction that makes this possible. The benefits include faster integration, modular systems, higher productivity, and new roles. For organizations still tied to manually calling endpoints, the shift will feel like relearning a platform. The question is no longer “which function should I call?” but “what do I want to do?”

Dhyey Mavani works on accelerating generative AI and computational mathematics.
