What Is the Model Context Protocol?
The Model Context Protocol (MCP) is an open standard that defines how AI models communicate with external data sources and tools. Instead of building a bespoke integration for every service, MCP provides a unified interface through which LLMs can access file systems, databases, APIs, and any other resources.
Put simply: MCP is to AI models what USB was to hardware – a standardised connection that eliminates fragmentation.
Understanding the Architecture
MCP follows a client-server architecture. The MCP client is the component inside the AI application (e.g. Claude Code, Cursor, or a custom application) that requests access to external resources; the MCP server exposes those resources via the standardised protocol.
An MCP server can offer three types of capabilities:
Resources – Data sources the model can read. These can be files, database records, API responses, or any other kind of structured data.
Tools – Functions the model can call to perform actions. For example: querying a database, sending an email, or triggering a deployment process.
Prompts – Pre-defined prompt templates the model can use to carry out common tasks in a standardised way.
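The three capability types can be sketched as plain TypeScript shapes. These interfaces are illustrative only – they mirror the concepts, not the SDK's actual type definitions – and the example capability values are hypothetical:

```typescript
// Illustrative shapes for the three MCP capability types.
// These mirror the concepts, not the SDK's real type definitions.

interface Resource {
  uri: string;          // e.g. "file:///logs/app.log" or "db://schema"
  name: string;
  mimeType?: string;
}

interface Tool {
  name: string;
  description: string;
  inputSchema: object;  // JSON Schema describing the parameters
}

interface Prompt {
  name: string;
  description: string;
  arguments?: { name: string; required?: boolean }[];
}

// A server advertises which capabilities it offers (values are made up):
const capabilities = {
  resources: [{ uri: "db://schema", name: "Database schema" }] as Resource[],
  tools: [{
    name: "get_monthly_revenue",
    description: "Aggregated revenue for a given month",
    inputSchema: { type: "object", properties: { month: { type: "string" } } },
  }] as Tool[],
  prompts: [] as Prompt[],
};
```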
When an MCP Server Makes Sense
Not every integration needs an MCP server. Here are the scenarios where the effort pays off:
Recurring AI workflows: If your team regularly connects the same data sources to AI tools – for example reading Jira tickets, checking database states, or analysing logs – a dedicated MCP server is worthwhile.
Multi-tool environments: If you use various AI clients (Claude Code in the terminal, Cursor in the IDE, custom agents), an MCP server provides a unified interface for all of them.
Proprietary data sources: If your organisation has internal systems that aren't reachable via standard APIs, an MCP server makes this data accessible to AI tools.
Building an MCP Server: Step by Step
A minimal MCP server in TypeScript is surprisingly straightforward. The official SDK abstracts the protocol and lets you focus on the business logic.
The basic workflow:
- Initialise the server – Define name, version, and capabilities
- Register tools – Each tool gets a name, a description, and a JSON schema for its parameters
- Implement handlers – The actual logic that runs when the AI model calls a tool
- Configure transport – stdio for local use; for remote servers, Streamable HTTP (which superseded the earlier HTTP+SSE transport in the spec)
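The four steps above can be sketched without the real SDK. The toy class below only mimics the register-and-dispatch pattern; the official SDK's actual API (e.g. `McpServer` and `StdioServerTransport` from `@modelcontextprotocol/sdk`) differs in detail:

```typescript
// Toy sketch of the register-and-dispatch pattern an MCP server follows.
// Not the real SDK API; it only mirrors the shape of the workflow.

type Handler = (args: Record<string, unknown>) => Promise<unknown>;

class SketchServer {
  private tools = new Map<
    string,
    { description: string; schema: object; handler: Handler }
  >();

  constructor(public name: string, public version: string) {}

  // Step 2: register a tool with name, description, and JSON schema
  registerTool(name: string, description: string, schema: object, handler: Handler) {
    this.tools.set(name, { description, schema, handler });
  }

  // Step 3: dispatch an incoming tool call to its handler
  async callTool(name: string, args: Record<string, unknown>): Promise<unknown> {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`Unknown tool: ${name}`);
    return tool.handler(args);
  }
}

// Step 1: initialise. Step 4 (transport) would wire this to stdio or HTTP.
const server = new SketchServer("demo-server", "0.1.0");
server.registerTool(
  "ping",
  "Health check",
  { type: "object", properties: {} },
  async () => "pong",
);
```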
Practical Example: Database MCP Server
A concrete example from our work: we built an MCP server for a client project that provides read-only access to a PostgreSQL database. The AI model can query table schemas, filter data, and retrieve aggregated statistics – without direct database access.
The key design decisions:
Read-Only by Default – The server permits only SELECT queries. No INSERTs, no UPDATEs, no DELETEs. This eliminates an entire class of risks.
Query Limits – Every query has a LIMIT of 100 rows and a timeout of 5 seconds. This prevents a poorly formulated query from overloading the database.
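The read-only and limit rules can be enforced with a small guard before any query reaches the database. The checks below are a simplified sketch – a production server would use a real SQL parser, since prefix checks alone can be bypassed (e.g. by a CTE that wraps a write):

```typescript
// Simplified query guard: allows only SELECT statements and clamps the row
// count by wrapping the query. The timeout would be enforced via the
// driver's statement-timeout setting in practice.

const MAX_ROWS = 100;
const TIMEOUT_MS = 5000;

function guardQuery(sql: string): string {
  const trimmed = sql.trim().replace(/;+\s*$/, "");
  if (!/^select\b/i.test(trimmed)) {
    throw new Error("Only SELECT queries are permitted");
  }
  if (trimmed.includes(";")) {
    throw new Error("Multiple statements are not permitted");
  }
  // Wrap the query so the limit applies regardless of what the model wrote
  return `SELECT * FROM (${trimmed}) AS limited LIMIT ${MAX_ROWS}`;
}
```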
Schema Awareness – The server exposes the database schema as a resource, so the AI model knows the table structure before it formulates queries.
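Exposing the schema as a resource can be as simple as serialising the catalogue into a compact, model-friendly text form. The table data below is hypothetical; a real server would read it from `information_schema` at startup:

```typescript
// Render a (hypothetical) table catalogue as a compact schema resource.

interface Column { name: string; type: string }
interface Table { name: string; columns: Column[] }

function renderSchema(tables: Table[]): string {
  return tables
    .map(t => `${t.name}(${t.columns.map(c => `${c.name} ${c.type}`).join(", ")})`)
    .join("\n");
}

const schemaText = renderSchema([
  { name: "users", columns: [{ name: "id", type: "int" }, { name: "email", type: "text" }] },
  { name: "orders", columns: [{ name: "id", type: "int" }, { name: "user_id", type: "int" }] },
]);
// schemaText === "users(id int, email text)\norders(id int, user_id int)"
```

A terse one-line-per-table format like this costs far fewer tokens than dumping raw DDL into the context window.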
Avoiding Common Mistakes
From our experience with MCP implementations, there are recurring pitfalls:
Overly broad tool definitions: A tool called "database" with a free-form SQL parameter is dangerous. Better: specific tools like "get_user_by_id", "list_recent_orders", "get_monthly_revenue" with clearly defined parameters.
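The difference is easy to see in code. A narrow tool fixes the query shape and validates its single parameter, so the model can only choose the value, never the SQL. The sketch below hand-rolls the validation to stay dependency-free (the article's recommendation of Zod schemas does the same job more robustly):

```typescript
// Narrow tool: the SQL is fixed, the model only supplies a validated id.
function getUserById(args: { id: unknown }): { sql: string; params: number[] } {
  if (typeof args.id !== "number" || !Number.isInteger(args.id) || args.id <= 0) {
    throw new Error("id must be a positive integer");
  }
  // Parameterised query: the model's input never touches the SQL text
  return { sql: "SELECT id, name, email FROM users WHERE id = $1", params: [args.id] };
}
```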
Missing validation: Every input from the AI model must be validated. LLMs can generate unexpected values. Zod schemas for every parameter are a must.
No rate limits: Without rate limiting, an AI agent can send hundreds of requests per minute in a loop. Implement throttling at the server level.
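A token bucket a few lines long is enough to throttle at the server level. The clock is injected so the behaviour is deterministic and testable; the limits themselves are illustrative:

```typescript
// Minimal token bucket: `capacity` calls allowed, refilled at `refillPerSec`.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    private now: () => number = () => Date.now(),
  ) {
    this.tokens = capacity;
    this.last = now();
  }

  tryAcquire(): boolean {
    const t = this.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((t - this.last) / 1000) * this.refillPerSec,
    );
    this.last = t;
    if (this.tokens < 1) return false; // reject: return a rate-limit error to the client
    this.tokens -= 1;
    return true;
  }
}
```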
Too much context at once: If a tool returns megabytes of data, it overwhelms the model's context window. Pagination and summaries are essential.
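Pagination can be kept protocol-agnostic: return one page plus a cursor that the model passes back for the next slice. A minimal sketch:

```typescript
// Return at most `pageSize` rows plus a cursor for the next page (or null).
function paginate<T>(
  rows: T[],
  cursor = 0,
  pageSize = 50,
): { items: T[]; nextCursor: number | null } {
  const items = rows.slice(cursor, cursor + pageSize);
  const next = cursor + pageSize;
  return { items, nextCursor: next < rows.length ? next : null };
}
```

A `null` cursor tells the model there is nothing more to fetch, which stops agents from looping on an exhausted result set.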
Security Considerations
MCP servers extend your system's attack surface. A few non-negotiable security measures:
- Principle of Least Privilege: The server should have only the minimum necessary permissions
- Input Sanitisation: Everything coming from the AI model is untrusted input
- Audit Logging: Every tool call should be logged – who, what, when
- Secrets Management: API keys and credentials belong in environment variables, never in the server code
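The audit-logging rule translates into a thin wrapper around every tool handler. Here entries go to an in-memory array for illustration; a real server would emit structured logs:

```typescript
// Wrap a tool handler so every call is recorded: who, what, when.
interface AuditEntry { caller: string; tool: string; args: unknown; at: string }

const auditLog: AuditEntry[] = [];

function withAudit<A, R>(tool: string, handler: (caller: string, args: A) => R) {
  return (caller: string, args: A): R => {
    // Log before executing, so failed calls are recorded too
    auditLog.push({ caller, tool, args, at: new Date().toISOString() });
    return handler(caller, args);
  };
}

// Hypothetical handler wrapped with auditing:
const audited = withAudit("get_user_by_id", (_caller, args: { id: number }) => args.id);
```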
When You Don't Need an MCP Server
MCP is not always the right answer. Skip it when:
- The integration is only used by a single tool – a direct API integration is simpler
- The data source already has a well-documented REST API that the AI tool can call directly
- The development and maintenance effort outweighs the time saved through the AI integration
- Security requirements rule out a direct connection between the AI model and the data source
Conclusion
MCP servers are a powerful tool for connecting AI models with the data and systems they need for real work. The standard is mature enough for production use, but young enough that best practices are still evolving.
Our advice: start with a narrowly scoped, read-only MCP server for a specific use case. Gain experience. Expand incrementally. And treat every MCP server like a public API – with the appropriate care around security, validation, and monitoring.