
MCP (Model Context Protocol)

Quick definition

"A universal connector for AI applications: think USB-C for LLMs."

-- modelcontextprotocol.io

🚀 What is MCP?

The Model Context Protocol (MCP) is an open standard developed by Anthropic in November 2024 to standardize how AI applications, particularly those utilizing large language models (LLMs), interact with external tools, data sources, and services. MCP facilitates seamless integration, enabling AI models to access and utilize diverse resources effectively.

🔧 Why Use MCP?

  • Standardization: Provides a consistent method for connecting AI models to various tools and data sources.
  • Flexibility: Supports integration with multiple LLM providers and vendors.
  • Security: Emphasizes secure data handling within your infrastructure.
  • Scalability: Simplifies the development of complex AI workflows and agents.

๐Ÿ—๏ธ Core Architecture

MCP follows a client-server architecture comprising:

  • MCP Hosts: Applications like Claude Desktop or IDEs that require data access via MCP.
  • MCP Clients: Protocol clients that maintain one-to-one connections with servers. A list of available clients is maintained on modelcontextprotocol.io.
  • MCP Servers: Lightweight programs exposing specific capabilities through MCP. A list of available servers is maintained there as well, and custom servers can be implemented using one of the SDKs.
  • Local Data Sources: Files, databases, and services on your computer accessible by MCP servers.
  • Remote Services: External systems (e.g., APIs) that MCP servers can connect to.

[Figure 1: MCP architecture overview. Source: modelcontextprotocol.io]

🧩 Key Components

1. Base Protocol

Defines core JSON-RPC message types for communication.
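Concretely, the base protocol builds on JSON-RPC 2.0, so every MCP message is one of three shapes: a request (carries an `id`), a response (carries `result` or `error` and echoes that `id`), or a notification (no `id`, no reply expected). A minimal sketch; the `ping` method is MCP's built-in liveness utility, and the `id` value is illustrative:

```python
import json

# Request: "id" correlates the eventual response, "method" names the operation.
request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}

# Response: echoes the request id and carries "result" (or "error" on failure).
response = {"jsonrpc": "2.0", "id": 1, "result": {}}

# Notification: no "id", so no response is expected.
notification = {"jsonrpc": "2.0", "method": "notifications/initialized"}

# On the wire, each message is serialized as JSON.
print(json.dumps(request))
```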

2. Lifecycle Management

Manages connection initialization, capability negotiation, and session control.
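The handshake can be sketched as three messages: the client proposes a protocol version and declares its capabilities, the server answers with what it settled on and offers, and the client confirms with a notification. Client/server names, versions, and the protocol-version string below are illustrative:

```python
# Client -> server: propose a version and declare client capabilities.
init_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"sampling": {}, "roots": {"listChanged": True}},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Server -> client: the agreed version and the features the server offers.
init_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"tools": {}, "resources": {}},
        "serverInfo": {"name": "example-server", "version": "0.1.0"},
    },
}

# Client -> server: handshake done, normal traffic may start.
initialized = {"jsonrpc": "2.0", "method": "notifications/initialized"}
```

Capability negotiation here is what lets a client skip features a server never declared, and vice versa.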

3. Server Features

Includes resources, prompts, and tools exposed by servers.

3.1 Resources

Resources represent any kind of data that an MCP server wants to make available to clients. This can include:

  • File contents
  • Database records
  • API responses
  • Live system data
  • Screenshots and images
  • Log files

3.2 Prompts

Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs.
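Concretely, a client fetches a template with `prompts/get`, passing user-supplied arguments, and gets back ready-to-send chat messages. The prompt name and argument below are hypothetical:

```python
# Client -> server: expand a named prompt template with arguments.
get_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "prompts/get",
    "params": {
        "name": "summarize-log",                    # hypothetical prompt name
        "arguments": {"path": "/var/log/app.log"},  # hypothetical argument
    },
}

# Server -> client: the expanded messages, ready to hand to an LLM.
get_result = {
    "jsonrpc": "2.0",
    "id": 4,
    "result": {
        "description": "Summarize a log file",
        "messages": [
            {"role": "user",
             "content": {"type": "text",
                         "text": "Summarize the errors in /var/log/app.log"}}
        ],
    },
}
```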

3.3 Tools

Tools enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world.
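Each tool is advertised with a JSON Schema describing its inputs, and invoked with `tools/call`. The `get_weather` tool below is hypothetical:

```python
# A tool descriptor as it might appear in a tools/list result:
tool = {
    "name": "get_weather",                      # hypothetical tool
    "description": "Get current weather for a city",
    "inputSchema": {                            # JSON Schema for arguments
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Client -> server: invoke the tool with concrete arguments...
call_request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Paris"}},
}

# ...server -> client: content blocks (isError flags tool-level failures).
call_result = {
    "jsonrpc": "2.0",
    "id": 5,
    "result": {"content": [{"type": "text", "text": "18°C, partly cloudy"}],
               "isError": False},
}
```

The `inputSchema` is what lets a host validate arguments, and lets the LLM see each tool's signature before deciding to call it.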

4. Client Features

Encompasses sampling and roots, the capabilities clients expose to servers.

4.1 Sampling

Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated agentic behaviors while maintaining security and privacy.
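Unusually, this request flows from server to client: the server asks the client's LLM for a completion via `sampling/createMessage`, and the client, typically after user approval, returns the model's reply. Message text and model name below are illustrative:

```python
# Server -> client: please run this conversation through your model.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user",
             "content": {"type": "text", "text": "Is this log line an error?"}}
        ],
        "systemPrompt": "You are a log analyst.",  # illustrative
        "maxTokens": 100,
    },
}

# Client -> server: the completion, after any human-in-the-loop review.
sampling_result = {
    "jsonrpc": "2.0",
    "id": 6,
    "result": {
        "role": "assistant",
        "content": {"type": "text", "text": "Yes, it indicates a failure."},
        "model": "example-model",                  # illustrative
        "stopReason": "endTurn",
    },
}
```

Because the client mediates every completion, the server never needs its own model credentials, which is where the security and privacy benefit comes from.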

4.2 Roots

A root is a URI that a client suggests a server should focus on. When a client connects to a server, it declares which roots the server should work with. While primarily used for filesystem paths, roots can be any valid URI including HTTP URLs.
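Roots, too, are served by the client: the server asks with `roots/list`, and the client answers with the URIs it wants the server to stay within (the project path below is hypothetical):

```python
# Server -> client: which locations should I confine myself to?
roots_request = {"jsonrpc": "2.0", "id": 7, "method": "roots/list"}

# Client -> server: the declared roots.
roots_result = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "roots": [
            {"uri": "file:///home/user/projects/demo",  # hypothetical path
             "name": "Demo project"}
        ]
    },
}
```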

5. Utilities

Covers cross-cutting concerns like logging and argument completion.
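For example, a client can set a minimum log level with `logging/setLevel`, after which the server emits `notifications/message` notifications; the logger name and payload below are illustrative:

```python
# Client -> server: only send log messages at "info" severity or above.
set_level = {
    "jsonrpc": "2.0",
    "id": 8,
    "method": "logging/setLevel",
    "params": {"level": "info"},
}

# Server -> client: a log notification (no id, so it expects no reply).
log_message = {
    "jsonrpc": "2.0",
    "method": "notifications/message",
    "params": {
        "level": "info",
        "logger": "demo-server",            # illustrative logger name
        "data": "Connected to database",
    },
}
```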

๐ŸŒ Transport Mechanisms

MCP supports two standard transport mechanisms for client-server communication:

  1. Stdio: Communication over standard input and output streams.
  2. Streamable HTTP: Communication over HTTP POST, with optional Server-Sent Events (SSE) for streaming server-to-client messages.

Clients are encouraged to support stdio whenever possible.
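In the stdio transport, each JSON-RPC message travels as a single line of UTF-8 JSON: the client writes to the server's stdin and reads from its stdout. A minimal framing sketch:

```python
import json
import sys

def write_message(stream, message: dict) -> None:
    """Serialize one JSON-RPC message as a single newline-terminated line."""
    stream.write(json.dumps(message) + "\n")
    stream.flush()

def read_message(line: str) -> dict:
    """Parse one incoming line back into a message."""
    return json.loads(line)

# Round-trip a ping through the framing (stdout stands in for the pipe).
ping = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
write_message(sys.stdout, ping)
decoded = read_message(json.dumps(ping))
```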

๐Ÿ› ๏ธ SDKs and Tools

MCP offers SDKs in multiple programming languages, including Python and TypeScript, to facilitate integration.

Additionally, tools like the MCP Inspector assist in testing and debugging MCP servers.

📚 Example Use Cases

  • Software Development: Integrating code assistants with real-time code context in IDEs.
  • Enterprise Assistants: Allowing internal AI systems to access proprietary knowledge bases.
  • Natural Language Data Access: Connecting models with SQL databases for plain-language queries.
  • Desktop Assistants: Enabling applications like Claude Desktop to interact with local file systems securely.
  • Multi-Tool Agents: Supporting workflows involving multiple tools, such as document lookup and messaging APIs.

๐Ÿ›ก๏ธ Security Considerations

While MCP enhances AI integration capabilities, it also introduces potential security risks:

  • Prompt Injection: Malicious prompts can manipulate model behavior.
  • Tool Permissions: Combining tools may inadvertently expose sensitive data.
  • Tool Spoofing: Lookalike tools can replace trusted ones silently.

To mitigate these risks, tools like MCPSafetyScanner have been developed to audit MCP server security.

📈 Adoption and Ecosystem

Since its release, MCP has seen adoption by major AI providers, including OpenAI and Google DeepMind, as well as toolmakers like Zed and Sourcegraph. Its open specification and community engagement continue to drive its evolution and widespread adoption.

🔗 Resources