Vol.01 · No.10 CS · AI · Infra May 13, 2026

AI Glossary

GlossaryReferenceLearn
LLM & Generative AI

MCP

Model Context Protocol

Difficulty

Plain Explanation

MCP starts from a practical integration problem: AI apps need databases, files, SaaS tools, internal APIs, search systems, and local developer tools, but one-off connectors do not scale. MCP splits the system into host, client, and server. The host is the user-facing AI app. An MCP client inside the host manages a session with each server. An MCP server exposes what it can provide through standard messages: tools, resources, and prompt templates.

The key boundary is that the model does not directly operate external systems. The host mediates the call. A server can expose primitives such as tools/list, resources/read, and prompts/get; the host decides whether a user, model, or workflow is allowed to use them. Local tools can connect through STDIO, while remote services can use Streamable HTTP. This makes MCP a connection layer between RAG, tool calling, internal automation, and agent tool ecosystems.

Examples & Analogies

  • Internal data access: wrapping an HR database as an MCP server lets an AI app call approved read-only resources instead of running arbitrary SQL.

  • Developer tooling: a local file-analysis server can use STDIO, while an issue tracker or documentation service can use Streamable HTTP in the same host.

  • Shared prompt templates: a team can expose review prompts as MCP prompts, avoiding copy-pasted prompt versions across apps.

At a Glance


MCPFunction CallingOpenAPI
FocusStandard connection layer for AI tools, data, and promptsModel proposes structured function callsHTTP API contract
StructureHost, client, server sessionsModel API plus host execution logicEndpoints, methods, schemas
Main primitivesresources, tools, prompts, sampling, elicitationfunction/tool callpaths, methods, schemas
TransportSTDIO, Streamable HTTPInside model APIHTTP-centric
Security boundaryHost mediates consent, permissions, and auditDepends on host implementationDepends on API authz/authn

MCP does not simply replace function calling. It standardizes the connection layer for managing many tools and data sources over time.

Where and Why It Matters

  • Tool ecosystem growth: apps can reuse MCP servers instead of building every integration directly.

  • Clear permission boundary: the host, not the model, mediates user consent, tool approval, and audit logging.

  • Lower context stuffing: resources can be read on demand rather than pasted wholesale into prompts.

  • Local and remote support: local developer tools can use STDIO while remote services use Streamable HTTP under the same protocol.

Common Misconceptions

  • Myth: Connecting an MCP server gives the model access to everything → Reality: exposed server capabilities and host policy define the boundary.

  • Myth: MCP is just OpenAPI for AI → Reality: MCP directly models AI primitives such as prompts, resources, tools, sampling, and elicitation.

  • Myth: MCP automatically solves security → Reality: scoped permissions, user consent, tool-poisoning defenses, and audit logs still need design.

How It Sounds in Conversation

  • "Expose this database as read-only MCP resources, and put updates behind an approved tool call."

  • "Use Streamable HTTP for the remote service and keep local developer tools as STDIO servers."

  • "Tool descriptions are model-visible input, so prompt-injection checks need to be part of the deployment gate."

  • "Only allow sampling requests after host policy approval, and log which server requested them."

Related Reading

References

Helpful?