Explore topic-wise interview questions and answers.
Model Context Protocol (MCP)
QUESTION 01
What is the Model Context Protocol (MCP) and who developed it?
DEFINITION:
The Model Context Protocol (MCP) is an open protocol developed by Anthropic that standardizes how LLM applications connect to external tools, data sources, and services. It creates a universal interface layer, allowing any MCP-compatible AI client to discover and use capabilities from any MCP server, regardless of who built them.
HOW IT WORKS:
MCP employs a client-server architecture where servers expose capabilities through a standardized JSON-RPC interface over transports like stdio or SSE. When a client connects to a server, they perform an initialization handshake where the server advertises its available tools, resources, and prompts with full JSON schemas. The client can then invoke these capabilities on behalf of the LLM. This discovery and invocation process is standardized, meaning any MCP client can work with any MCP server automatically.
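The handshake described above can be sketched as plain JSON-RPC messages. The field names below follow the MCP specification, but the protocol version string and the client/server names are illustrative examples:

```python
import json

# Illustrative JSON-RPC messages for the MCP initialization handshake.
# Field names follow the MCP specification; concrete values are examples.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "1.0.0"},
    },
}

initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "example-server", "version": "1.0.0"},
    },
}

# Each message travels as a single serialized JSON payload over the transport.
wire_message = json.dumps(initialize_request)
```

After this exchange, the client follows up with listing calls (tools/list, resources/list, prompts/list) to discover the server's concrete capabilities.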
WHY IT MATTERS:
Before MCP, the AI tool ecosystem was fragmented - every framework had its own way of defining tools, and tools built for one wouldn't work in another. MCP solves this by providing a common language that all AI applications can speak. Think of it as the USB-C for AI - a single standard that enables interoperability across the entire ecosystem. This reduces duplication, accelerates development, and creates a thriving marketplace of compatible tools.
EXAMPLE:
A company builds an MCP server for their internal CRM. Now any MCP-compatible client can use it - whether it's Claude Desktop for executives, a custom agent built by engineering, or an open-source chatbot. One server, many clients, all through the same standard protocol.
QUESTION 02
What problem does MCP solve in the LLM tool ecosystem?
DEFINITION:
MCP solves the fundamental problem of fragmentation and duplication in the LLM tool ecosystem. Without a common standard, every framework reinvents tool integration, every tool requires multiple implementations, and users face vendor lock-in.
HOW IT WORKS:
Currently, LangChain has its own tool definition system, AutoGen has another, CrewAI another, and custom applications build their own. A developer building a weather API tool must create separate integrations for each framework. Users are locked into frameworks because their tools don't work elsewhere. MCP replaces this with a single standard: build one MCP server, and any MCP client can use it. The protocol handles capability discovery, invocation, and result formatting uniformly across all clients.
WHY IT MATTERS:
This fragmentation has real costs. It slows innovation because developers spend time reinventing wheels instead of building new capabilities. It creates friction for users who want to switch tools or frameworks. MCP's solution is analogous to what USB did for peripherals - before USB, every device had different connectors; after USB, any device could connect to any computer.
EXAMPLE:
A company building a database query tool would need to create a LangChain integration, an AutoGen plugin, and a custom API. That's 3x the work. With MCP, they build one server, and it works with any MCP client. The effort drops from months to days.
QUESTION 03
What is the MCP architecture and what are its core components (host, client, server)?
DEFINITION:
MCP follows a clean three-tier architecture: hosts are applications where users interact with AI, clients are the MCP components within hosts that manage connections, and servers are standalone processes that expose capabilities.
HOW IT WORKS:
The host is any application that needs AI capabilities - Claude Desktop, an IDE, a custom web app. The host embeds an MCP client that handles all protocol communication. The client establishes connections to one or more MCP servers, performs capability discovery, and maintains these connections. Servers are independent processes that implement the MCP protocol to expose tools, resources, and prompts. They can run locally or remotely. Communication uses JSON-RPC over stdio for local processes or SSE for remote servers.
WHY IT MATTERS:
This architecture decouples concerns. Tool developers focus on building great servers without worrying about how they'll be used. Application developers focus on user experience without reinventing tool integrations. Users benefit from a rich ecosystem where any tool works with any application.
EXAMPLE:
Claude Desktop (host) has a built-in MCP client. When a user installs a file system MCP server locally, Claude's client launches it, discovers it can read/write files, and makes those capabilities available to the AI. The host application itself needs no server-specific code.
QUESTION 04
What types of resources can MCP servers expose to LLMs?
DEFINITION:
MCP servers can expose three distinct capability types: tools (callable functions), resources (readable data), and prompts (reusable templates). Together these cover the common LLM interaction patterns.
HOW IT WORKS:
Tools are executable operations with defined input schemas that perform actions and may have side effects. Examples include search_web, send_email, calculate. Resources are data identified by URIs that can be fetched on demand without side effects. Examples include file:///document.txt, database://customers/123. Prompts are reusable templates with placeholders that ensure consistent, high-quality interactions. Examples include summarize-document with {text} placeholder. Servers advertise all three during initialization.
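The three capability types above are advertised as structured listings. A sketch of what a server's responses to tools/list, resources/list, and prompts/list might contain, reusing the examples from this answer (field shapes follow the MCP spec; concrete values are illustrative):

```python
# Illustrative listing results for each capability type.
tools_list = {
    "tools": [{
        "name": "send_email",
        "description": "Send an email.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "subject", "body"],
        },
    }]
}

resources_list = {
    "resources": [{
        "uri": "file:///document.txt",   # resources are identified by URIs
        "name": "document.txt",
        "mimeType": "text/plain",
    }]
}

prompts_list = {
    "prompts": [{
        "name": "summarize-document",
        "description": "Summarize a document.",
        "arguments": [{"name": "text", "required": True}],  # the {text} placeholder
    }]
}
```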
WHY IT MATTERS:
This three-part model covers virtually every way LLMs interact with external systems. Tools let agents take action, resources let them access information, prompts let them follow best practices. By standardizing all three, MCP provides a complete toolkit for building sophisticated AI applications.
EXAMPLE:
A database MCP server might expose: Tools like query(sql) and explain(query), Resources like schema://table/name, and Prompts like optimize-query with {query} placeholder. An LLM can use all three through the same server.
QUESTION 05
What is the difference between MCP tools, resources, and prompts?
DEFINITION:
MCP distinguishes between tools (actions with side effects), resources (data without side effects), and prompts (templates for consistent interactions). This separation makes servers more maintainable and capabilities clearer to both clients and LLMs.
HOW IT WORKS:
Tools are for doing - they execute operations that may change state. They require parameters and return results. Resources are for knowing - they provide access to data without modification. They're identified by URIs and return content. Prompts are for guiding - they provide structured templates that help LLMs produce consistent outputs. The client presents these differently to the LLM: tools for function calling, resources for context injection, prompts for structured generation.
WHY IT MATTERS:
This clear separation helps both developers and LLMs. Developers know exactly what each capability type is for. LLMs can reason about when to use each type. It also enables different security policies - resources might be readable by anyone, tools might require authorization.
EXAMPLE:
A GitHub MCP server demonstrates all three: Tools: create_issue, merge_pull_request - these change state. Resources: repo://owner/name/readme - provides data. Prompts: summarize_pr_changes with {pr_number} - structures how the LLM should analyze a pull request.
QUESTION 06
How does MCP differ from OpenAI's plugin system or function calling?
DEFINITION:
MCP is an open, decentralized protocol for tool interoperability, while OpenAI's plugin system and function calling are proprietary, centralized solutions tied to OpenAI's platform.
HOW IT WORKS:
OpenAI's function calling is an API feature - tools are defined per request, work only with OpenAI models, and require OpenAI's infrastructure. Plugins add discovery but remain within OpenAI's ecosystem. MCP is an open standard anyone can implement. Any model provider can build MCP clients; any tool builder can create MCP servers. The protocol is decentralized - no single company controls it. Clients discover servers locally or remotely without needing a central registry.
WHY IT MATTERS:
OpenAI's approach creates vendor lock-in. Tools built for OpenAI only work with OpenAI. If you switch models, you rebuild your tools. MCP promotes an open ecosystem where tools are portable across models and applications. This future-proofs investments - tools built for MCP today will work with whatever AI emerges tomorrow.
EXAMPLE:
A company builds a CRM tool. With OpenAI function calling, it only works for OpenAI users. With MCP, it works for Claude Desktop, any MCP-compatible framework, and even OpenAI via an adapter.
QUESTION 07
What transport protocols does MCP support (stdio, SSE)?
DEFINITION:
MCP supports two primary transport protocols: stdio for local communication and Server-Sent Events (SSE) for remote HTTP-based communication. This dual support enables both secure local tools and scalable remote services.
HOW IT WORKS:
stdio transport: the client launches the server as a subprocess and communicates via stdin/stdout. This is ideal for local tools like file system access - fast, secure, simple. SSE transport: the client connects to a server over HTTP, using SSE for server-to-client messages and HTTP POST for client-to-server requests. This enables remote servers that many clients can access. The protocol messages are identical regardless of transport.
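The stdio pattern can be demonstrated end to end with nothing but the standard library. In this sketch the parent process plays the client: it launches a trivial inline "server" as a subprocess and exchanges one newline-delimited JSON-RPC message with it. The inline server body is a toy stand-in, not a real MCP implementation:

```python
import json
import subprocess
import sys

# Toy "server": echoes back an empty result for every request it receives.
server_code = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    resp = {'jsonrpc': '2.0', 'id': req['id'], 'result': {}}\n"
    "    print(json.dumps(resp), flush=True)\n"
)

# The client launches the server as a subprocess (stdio transport).
proc = subprocess.Popen(
    [sys.executable, "-c", server_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# One newline-delimited JSON-RPC message out, one back.
request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())

proc.terminate()
proc.wait()
```

An SSE deployment replaces the subprocess pipes with an HTTP connection, but the JSON-RPC payloads stay the same.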
WHY IT MATTERS:
Different use cases need different deployment models. Local tools demand the security of stdio. Remote services need the scalability of HTTP. Supporting both makes MCP versatile for any scenario.
EXAMPLE:
A file system server uses stdio - it runs on the user's machine with no network exposure. A company's internal API server uses SSE - it runs centrally, and many employees connect. Both implement the same MCP protocol.
QUESTION 08
How do you build a simple MCP server?
DEFINITION:
Building an MCP server involves implementing the MCP protocol to expose capabilities, but SDKs make this straightforward. With just a few dozen lines of code, developers can create servers that make their tools available to any MCP client.
HOW IT WORKS:
Using Anthropic's MCP SDK (available for Python, TypeScript), the process is: import SDK, create server instance, define tools using decorators, implement handler functions, set up transport, run server. The SDK handles all protocol details: JSON-RPC parsing, capability discovery, request routing, error handling, result formatting.
WHY IT MATTERS:
Low barrier to entry is crucial for ecosystem growth. If building an MCP server required deep protocol expertise, adoption would be slow. SDKs make it accessible to any developer who can write a function.
EXAMPLE:
A simple calculator MCP server in Python using the MCP SDK can be built in about 30 lines of code, exposing a calculate tool that any MCP client can use.
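To show what the SDK automates, here is a hand-rolled sketch of the dispatch logic such a calculator server needs for tools/list and tools/call. This is illustrative only - a real server should use the official SDK, which also handles initialization, schemas, and error formatting:

```python
import json
import sys

# Tool advertisement: one calculate tool with a JSON Schema for its input.
TOOLS = [{
    "name": "calculate",
    "description": "Evaluate a basic arithmetic expression.",
    "inputSchema": {
        "type": "object",
        "properties": {"expression": {"type": "string"}},
        "required": ["expression"],
    },
}]

def handle_request(req: dict) -> dict:
    """Dispatch a single JSON-RPC request to the right handler."""
    if req["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif req["method"] == "tools/call":
        expr = req["params"]["arguments"]["expression"]
        # eval with empty builtins restricts this to simple expressions;
        # a production server would use a proper arithmetic parser.
        value = eval(expr, {"__builtins__": {}}, {})
        result = {"content": [{"type": "text", "text": str(value)}]}
    else:
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

def serve() -> None:
    """stdio transport: one JSON-RPC message per line on stdin/stdout."""
    for line in sys.stdin:
        print(json.dumps(handle_request(json.loads(line))), flush=True)
```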
QUESTION 09
What security considerations apply to MCP servers?
DEFINITION:
MCP servers must address multiple security layers: authentication, authorization, input validation, rate limiting, and safe execution. Since LLMs can be manipulated via prompt injection, servers must be robust against misuse.
HOW IT WORKS:
Authentication: for remote servers, implement API keys or OAuth; for local servers, rely on filesystem permissions. Authorization: check whether the client has permission for the specific operation. Input validation: validate all parameters against schemas to prevent injection attacks. Rate limiting: track usage per client and reject excessive requests. Safe execution: sandbox dangerous operations. Logging: audit all tool calls.
WHY IT MATTERS:
MCP servers are powerful - they can read data, modify systems, and trigger actions. A compromised server could lead to data breaches or system damage. Since LLMs can be tricked, servers must be secure by design.
EXAMPLE:
A file system server must validate paths to prevent directory traversal, check user permissions, limit file sizes, log all access, and rate-limit requests. Missing any of these creates a vulnerability.
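The path validation step can be sketched as follows; ALLOWED_ROOT is a hypothetical sandbox directory:

```python
import os

# Hypothetical sandbox directory the server is allowed to serve from.
ALLOWED_ROOT = "/srv/mcp-files"

def safe_resolve(requested: str, root: str = ALLOWED_ROOT) -> str:
    """Resolve a requested path and refuse anything outside the sandbox."""
    # realpath collapses ".." segments and symlinks before the prefix check,
    # defeating directory traversal attempts like "../../etc/passwd".
    resolved = os.path.realpath(os.path.join(root, requested))
    if not resolved.startswith(os.path.realpath(root) + os.sep):
        raise PermissionError(f"path escapes sandbox: {requested}")
    return resolved
```

The same check-before-use pattern applies to the other controls: verify permissions, sizes, and rates before executing, and log the outcome either way.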
QUESTION 10
What is the MCP client and how does it integrate with a host application?
DEFINITION:
An MCP client is a component embedded within a host application that manages connections to MCP servers, handles protocol communication, and presents discovered capabilities to the application.
HOW IT WORKS:
The client launches local servers as subprocesses or connects to remote servers, performs capability discovery during initialization, routes requests to the appropriate server, handles timeouts and errors, and manages the server lifecycle. It provides a simple API to the host, such as client.call_tool('name', args).
WHY IT MATTERS:
The client abstracts all MCP complexity from the host. Application developers don't need to know about individual servers or protocol details. This makes integrating MCP straightforward and promotes adoption.
EXAMPLE:
In Claude Desktop, the built-in MCP client handles all server connections. When the AI decides to use a tool, the client manages the call and returns results seamlessly.
QUESTION 11
How does MCP enable tool discovery and schema advertisement to LLMs?
DEFINITION:
MCP enables dynamic tool discovery through a standardized initialization handshake where servers advertise their capabilities with full JSON schemas. This allows clients to present available tools to LLMs without pre-configuration.
HOW IT WORKS:
Client connects to server, sends initialize request. Server responds with capabilities including lists of tools with names, descriptions, and complete JSON Schemas defining parameters. Client stores this information. When presenting tools to an LLM, it uses these schemas directly. If LLM calls a tool, client validates arguments against schema before sending to server.
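The client-side argument check can be sketched minimally as below. A real client would use a full JSON Schema validator; the get_forecast-style schema here is an example:

```python
# Schema the server advertised for a hypothetical get_forecast tool.
schema = {
    "type": "object",
    "properties": {"location": {"type": "string"}},
    "required": ["location"],
}

# Map JSON Schema primitive type names to Python types for a shallow check.
TYPE_MAP = {"string": str, "number": (int, float), "boolean": bool, "object": dict}

def validate_arguments(args: dict, schema: dict) -> list[str]:
    """Return a list of validation errors; empty means the call may proceed."""
    errors = []
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required argument: {name}")
    for name, value in args.items():
        prop = schema.get("properties", {}).get(name)
        if prop is None:
            errors.append(f"unexpected argument: {name}")
        elif not isinstance(value, TYPE_MAP.get(prop["type"], object)):
            errors.append(f"wrong type for {name}")
    return errors
```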
WHY IT MATTERS:
Dynamic discovery makes MCP plug-and-play. Users can install new servers, and immediately their AI can use them. No configuration files, no code changes. Schema advertisement ensures LLMs get rich information about how to use tools correctly.
EXAMPLE:
A user installs a weather server. Claude's client connects and discovers a get_forecast tool whose schema requires a location parameter. Claude's function calling layer now presents this tool to the model with that exact schema.
QUESTION 12
What is the sampling capability in MCP and how does it work?
DEFINITION:
Sampling is a bidirectional capability that allows servers to request LLM completions from the client. This enables servers to leverage the client's language model for tasks like summarization or explanation.
HOW IT WORKS:
When a server needs language model assistance, it sends a sampling request with prompt, parameters (temperature, max tokens), and optional context. The client generates a completion using its LLM and returns it to the server. This can happen multiple times in an interaction. The server can use the generated text in its response or for further processing.
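The sampling exchange can be illustrated as a JSON-RPC message. The method name follows the MCP spec's sampling capability; the field values here are illustrative:

```python
# Illustrative shape of a sampling request a server sends to the client.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [{
            "role": "user",
            "content": {
                "type": "text",
                "text": "Summarize these quarterly sales figures for an executive.",
            },
        }],
        "maxTokens": 300,     # generation parameters the server requests
        "temperature": 0.3,
    },
}
```

The client answers with a completion generated by its own LLM, which the server can embed in its eventual tool result.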
WHY IT MATTERS:
Sampling makes servers more intelligent. A database server can not only query data but also explain results in plain English. A code server can debug errors and suggest fixes. This transforms simple tools into intelligent assistants without needing their own LLM infrastructure.
EXAMPLE:
A database server runs a complex SQL query. Instead of returning raw data, it sends a sampling request: "Summarize these quarterly sales figures for an executive." The client generates a summary, and the server returns both data and summary.
QUESTION 13
How do you version and evolve an MCP server without breaking clients?
DEFINITION:
Versioning MCP servers requires backward-compatible evolution strategies to ensure existing clients continue working when servers are updated. The protocol supports capability negotiation and additive changes.
HOW IT WORKS:
Key strategies: protocol version negotiation during initialization, capability discovery so clients adapt to what's available, additive changes (add new tools without modifying existing ones), optional parameters with defaults for tool modifications, deprecation with migration windows, semantic versioning for releases.
WHY IT MATTERS:
Breaking clients destroys user trust and fragments the ecosystem. Users should update servers without worrying about breaking workflows. Good versioning ensures ecosystem stability while allowing evolution.
EXAMPLE:
A file server's v1 has read_file. v2 adds write_file (additive). v3 needs to change the read_file response format - keep the old tool as deprecated, add read_file_v2, announce a migration window, and remove the old one after 6 months.
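The v3 migration can be sketched as a tool listing in which the deprecated tool stays discoverable while clients move over (tool names reproduce the example; the deprecation-in-description convention is an illustrative choice, not something the protocol mandates):

```python
# v3 tool list: the old tool stays advertised but flagged as deprecated,
# so existing clients keep working during the migration window.
tools_v3 = [
    {"name": "read_file",
     "description": "DEPRECATED: use read_file_v2; removal planned after the migration window."},
    {"name": "read_file_v2",
     "description": "Read a file, returning the new structured response format."},
]

def find_tool(name: str):
    """Look up an advertised tool by name, as a client would after discovery."""
    return next((t for t in tools_v3 if t["name"] == name), None)
```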
QUESTION 14
What LLM clients currently support MCP (Claude Desktop, etc.)?
DEFINITION:
MCP client support is growing, with Claude Desktop as the flagship implementation. Several other applications and frameworks have added MCP support, expanding where MCP servers can be used.
HOW IT WORKS:
Claude Desktop has built-in MCP client support. Continue.dev (open-source IDE extension) supports MCP. Zed code editor has MCP integration. LangChain has experimental MCP support. AutoGen and CrewAI have community plugins. Custom applications can add MCP using client libraries in Python, TypeScript, etc.
WHY IT MATTERS:
Client adoption drives MCP's value. Every new client makes the entire ecosystem of MCP servers more valuable. For tool developers, their servers reach more users. For users, tools work across more applications.
EXAMPLE:
A developer builds an MCP server for internal tools. It works in Claude Desktop for executives, in Continue.dev for engineers, and in a custom chatbot via LangChain. One server, three clients.
QUESTION 15
How does MCP compare to LangChain tool definitions or LlamaIndex tools?
DEFINITION:
MCP is a protocol for interoperability across frameworks, while LangChain and LlamaIndex tools are framework-specific implementations. They are complementary rather than competitive.
HOW IT WORKS:
LangChain tools are defined within LangChain's ecosystem and only usable there. MCP is framework-agnostic - any MCP client can use any MCP server. Frameworks can build MCP clients to use MCP servers, or provide adapters that expose their internal tools as MCP servers.
WHY IT MATTERS:
Frameworks provide rich abstractions within their ecosystem. MCP provides interoperability across ecosystems. The ideal is frameworks embracing MCP - allowing users to access the broader MCP ecosystem while benefiting from framework-specific features.
EXAMPLE:
A developer builds a tool as an MCP server. It works in LangChain via LangChain's MCP client adapter, and also in AutoGen via its MCP support. The tool works everywhere.
QUESTION 16
What are the latency considerations of using MCP for tool calls?
DEFINITION:
MCP adds some latency due to inter-process communication, serialization, and protocol overhead. However, this is typically small compared to the actual work tools perform.
HOW IT WORKS:
Latency components: stdio transport adds ~0.5ms, network adds RTT (10-100ms), serialization adds ~1-2ms, protocol handling adds ~1ms, tool execution varies. For most tools, execution dominates. For very fast tools, local stdio minimizes overhead.
WHY IT MATTERS:
Latency affects user experience. Understanding where it comes from helps deployment decisions. For fast operations, use local stdio. For slower operations, remote is fine. Overhead is usually acceptable given tool value.
EXAMPLE:
Calculator via local stdio: total ~3ms. Weather API via remote: 5ms overhead + 50ms RTT + 200ms API = 255ms. Overhead is <2%.
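The arithmetic in the remote case works out as follows:

```python
# Latency budget for the remote weather-API example (milliseconds).
overhead_ms = 5    # serialization + protocol handling
rtt_ms = 50        # network round trip
api_ms = 200       # the weather API's own processing time

total_ms = overhead_ms + rtt_ms + api_ms
overhead_share = overhead_ms / total_ms
print(total_ms, round(overhead_share * 100, 1))  # prints: 255 2.0
```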
QUESTION 17
How do you implement authentication and authorization in an MCP server?
DEFINITION:
Implementing auth in MCP servers involves verifying client identity and checking permissions before allowing access. Since MCP doesn't prescribe auth mechanisms, servers implement their own appropriate to their deployment.
HOW IT WORKS:
For local servers, authentication is handled by OS - only users who can launch the server can use it. For remote servers: implement API key validation, JWTs, or client certificates. Authorization checks if authenticated identity has permission for specific tool/resource. Implement checks in every handler. Return appropriate errors when checks fail.
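A sketch of those per-request checks for a remote database server; the key store and permission model are hypothetical in-memory stand-ins:

```python
# Hypothetical in-memory key store mapping API keys to table grants.
ACTIVE_KEYS = {"key-abc": {"tables": {"customers", "orders"}}}

def authorize(api_key: str, table: str) -> None:
    """Raise PermissionError unless the key is valid and grants this table."""
    grant = ACTIVE_KEYS.get(api_key)
    if grant is None:
        raise PermissionError("invalid API key")              # authentication
    if table not in grant["tables"]:
        raise PermissionError(f"no access to table {table}")  # authorization

def handle_query(api_key: str, table: str, sql: str) -> str:
    """Every handler runs the auth check before doing any work."""
    authorize(api_key, table)
    # ... execute sql against the database and log the call (omitted) ...
    return "ok"
```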
WHY IT MATTERS:
Without auth, anyone could use your server, potentially abusing resources or accessing sensitive data. For internal servers, auth ensures only employees have access. Security is not optional.
EXAMPLE:
A database server requires an API key in the request header, validates it against active keys, checks which tables the key can access before allowing queries, and logs all operations.
QUESTION 18
What is the community ecosystem around MCP and what servers are available?
DEFINITION:
The MCP community ecosystem is a growing collection of open-source servers, tools, and libraries contributed by developers worldwide, providing ready-to-use integrations with popular services.
HOW IT WORKS:
Community repositories catalog available servers across categories: developer tools (GitHub, GitLab, filesystem), productivity (Google Drive, Slack, Gmail), data sources (PostgreSQL, MySQL), search (Brave Search, Wikipedia), media (Spotify, YouTube), utility (calculator, weather). Most are open-source and available for immediate use.
WHY IT MATTERS:
Ecosystem determines MCP's practical value. The growing collection means users can immediately benefit - connecting their AI to tools they already use. For developers, existing servers provide examples and inspiration.
EXAMPLE:
A user wants Claude to manage GitHub issues. They find the community MCP GitHub server, install it, configure their token, and suddenly Claude can list issues, create new ones, and manage PRs.
QUESTION 19
How does MCP fit into an enterprise AI integration strategy?
DEFINITION:
MCP fits as a standardized integration layer enabling consistent, secure, and manageable access to internal systems across diverse AI applications, future-proofing enterprise AI investments.
HOW IT WORKS:
Enterprises: identify internal systems to expose (CRM, ERP, databases), build MCP servers for each with auth and auditing, deploy on internal infrastructure, allow various AI clients to connect, centralize monitoring and management, update servers as systems evolve.
WHY IT MATTERS:
Without MCP, each AI application needs custom integrations, leading to duplication and security gaps. MCP provides a single, standardized way to expose capabilities. New AI tools can be adopted without rebuilding integrations.
EXAMPLE:
An enterprise builds MCP servers for Salesforce, SAP, and internal docs. Sales uses Claude Desktop, support uses a custom chatbot, executives use a reporting tool - all accessing the same servers via MCP.
QUESTION 20
How would you explain MCP to a developer who is new to agentic AI?
DEFINITION:
MCP is a universal adapter that lets any AI assistant plug into any tool or data source, just like USB-C lets any device plug into any computer. It's a standard way for AI to interact with the world.
HOW IT WORKS:
Before USB, every peripheral had its own connector. USB created a single standard that all devices could use. MCP does the same for AI tools. Tool builders create MCP servers (like USB devices). AI applications have MCP clients (like USB ports). Any server works with any client. The server advertises its capabilities, and the client makes them available to the AI.
WHY IT MATTERS:
For developers: build a tool once as an MCP server, and every MCP-compatible AI can use it. No more writing separate integrations for different frameworks. It's write once, run everywhere.
EXAMPLE:
A developer builds a flight price checking MCP server. Now any MCP-compatible AI - Claude, Cursor, custom apps - can use it. One effort, universal reach.