MCP Prompts
Defining prompt templates in MCP servers that AI clients can discover and invoke. Covers prompt definitions with arguments, dynamic prompt generation, multi-turn prompt structures, embedding resources in prompts, prompt discovery, and patterns for building reusable prompt libraries.
You are an AI assistant helping developers define prompts in MCP servers. Prompts are reusable templates that servers expose for common workflows — they let users invoke pre-built interactions with optional arguments. Unlike tools (model-controlled) and resources (application-controlled), prompts are user-controlled: the user selects a prompt from a menu or slash command.
Philosophy
Prompts encode domain expertise into reusable templates. A database server might offer an "analyze-query-performance" prompt that guides the model through EXPLAIN output. A code server might offer a "review-pull-request" prompt. Prompts reduce the burden on users to craft effective instructions — the server author, who understands the domain, writes the prompt once, and users invoke it by name.
Techniques
Defining Prompts
Servers advertise their available prompts to clients via the prompts/list method:
{
"prompts": [
{
"name": "analyze-query",
"description": "Analyze a SQL query for performance issues, suggest indexes, and identify potential problems",
"arguments": [
{
"name": "query",
"description": "The SQL query to analyze",
"required": true
},
{
"name": "context",
"description": "Additional context like table sizes or current indexes",
"required": false
}
]
},
{
"name": "explain-table",
"description": "Generate a comprehensive explanation of a database table's purpose, relationships, and usage patterns",
"arguments": [
{
"name": "table_name",
"description": "Name of the table to explain",
"required": true
}
]
}
]
}
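A server typically builds this response from an internal registry of prompt definitions. A minimal sketch in plain Python, assuming a hypothetical `PROMPTS` registry and `list_prompts` helper (neither is an SDK API):

```python
# Hypothetical in-memory registry of prompt definitions.
PROMPTS = {
    "analyze-query": {
        "description": "Analyze a SQL query for performance issues, "
                       "suggest indexes, and identify potential problems",
        "arguments": [
            {"name": "query", "description": "The SQL query to analyze", "required": True},
            {"name": "context", "description": "Additional context like table sizes", "required": False},
        ],
    },
}

def list_prompts() -> dict:
    """Build a prompts/list result payload from the registry."""
    return {
        "prompts": [
            {"name": name, **definition}
            for name, definition in PROMPTS.items()
        ]
    }
```

SDKs generate this payload for you, but the shape is worth internalizing: each entry is just a name, a human-readable description, and an argument schema.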
Getting a Prompt
When a user selects a prompt, the client calls prompts/get with the prompt name and arguments:
// Request
{
"method": "prompts/get",
"params": {
"name": "analyze-query",
"arguments": {
"query": "SELECT u.*, o.* FROM users u JOIN orders o ON u.id = o.user_id WHERE o.total > 100"
}
}
}
// Response
{
"description": "Analyze a SQL query for performance issues",
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "Please analyze this SQL query for performance issues, suggest appropriate indexes, and identify any potential problems:\n\n```sql\nSELECT u.*, o.* FROM users u JOIN orders o ON u.id = o.user_id WHERE o.total > 100\n```\n\nConsider:\n1. Are there missing indexes that would improve performance?\n2. Is SELECT * appropriate or should specific columns be listed?\n3. Are there potential issues with the JOIN or WHERE clause?\n4. How would this query perform at scale (millions of rows)?"
}
}
]
}
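On the server side, handling prompts/get is essentially template substitution: interpolate the supplied arguments into prepared text and wrap it in messages. A minimal sketch, assuming a hypothetical `build_analyze_query_prompt` helper (not an SDK API):

```python
# Hypothetical helper that renders the prompts/get response above.
def build_analyze_query_prompt(query: str, context: str = "") -> dict:
    text = (
        "Please analyze this SQL query for performance issues, suggest "
        "appropriate indexes, and identify any potential problems:\n\n"
        f"{query}\n\n"
        "Consider:\n"
        "1. Are there missing indexes that would improve performance?\n"
        "2. Is SELECT * appropriate or should specific columns be listed?\n"
        "3. Are there potential issues with the JOIN or WHERE clause?\n"
        "4. How would this query perform at scale (millions of rows)?"
    )
    if context:
        text += f"\n\nAdditional context: {context}"
    return {
        "description": "Analyze a SQL query for performance issues",
        "messages": [{"role": "user", "content": {"type": "text", "text": text}}],
    }
```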
Multi-Message Prompts
Prompts can return multiple messages to set up a conversation:
{
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "I need you to review this database schema for a new feature."
}
},
{
"role": "assistant",
"content": {
"type": "text",
"text": "I'll review the schema. Let me look at the table definitions and relationships. What specific aspects are you most concerned about — normalization, performance, or data integrity?"
}
},
{
"role": "user",
"content": {
"type": "text",
"text": "Here is the schema:\n\n```sql\nCREATE TABLE products (\n id SERIAL PRIMARY KEY,\n name VARCHAR(255),\n price DECIMAL(10,2)\n);\n```\n\nFocus on whether this is production-ready."
}
}
]
}
Multi-message prompts can prime the assistant with a persona or approach before presenting the actual task.
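An exchange like the one above can be assembled programmatically. A minimal sketch, assuming a hypothetical `primed_messages` helper (not part of any SDK):

```python
# Hypothetical helper: prepend a priming exchange before the actual task.
def primed_messages(priming: list, task: str) -> list:
    """priming is a list of (role, text) pairs; task is the final user message."""
    messages = [
        {"role": role, "content": {"type": "text", "text": text}}
        for role, text in priming
    ]
    messages.append({"role": "user", "content": {"type": "text", "text": task}})
    return messages
```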
Embedding Resources in Prompts
Prompts can include resource references to pull in contextual data:
{
"messages": [
{
"role": "user",
"content": {
"type": "resource",
"resource": {
"uri": "db://production/tables/orders/schema",
"text": "{\"columns\": [{\"name\": \"id\", \"type\": \"integer\"}, ...]}",
"mimeType": "application/json"
}
}
},
{
"role": "user",
"content": {
"type": "text",
"text": "Given the orders table schema above, write a migration to add soft-delete support with a deleted_at timestamp column and an index for querying non-deleted records."
}
}
]
}
This is powerful — the prompt dynamically fetches current data (like a live schema) and embeds it directly into the conversation.
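A sketch of that pattern in plain Python: fetch the data at prompts/get time, then emit it as an embedded resource followed by the instruction. `fetch_schema` here is a hypothetical stand-in for a live database lookup:

```python
import json

def fetch_schema(table: str) -> dict:
    # Hypothetical stand-in for a live schema query.
    return {"columns": [{"name": "id", "type": "integer"}]}

def schema_prompt_messages(table: str, instruction: str) -> list:
    """Embed the current schema as a resource, then present the task."""
    schema = fetch_schema(table)
    return [
        {
            "role": "user",
            "content": {
                "type": "resource",
                "resource": {
                    "uri": f"db://production/tables/{table}/schema",
                    "text": json.dumps(schema),
                    "mimeType": "application/json",
                },
            },
        },
        {"role": "user", "content": {"type": "text", "text": instruction}},
    ]
```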
Implementing Prompts (TypeScript)
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
const server = new McpServer({
name: "db-server",
version: "1.0.0",
});
server.prompt(
"analyze-query",
"Analyze a SQL query for performance issues and suggest improvements",
{
query: z.string().describe("The SQL query to analyze"),
context: z.string().optional().describe("Additional context about tables or indexes"),
},
async ({ query, context }) => {
// Optionally fetch live data to enrich the prompt
const tables = extractTableNames(query);
const schemas = await Promise.all(
tables.map(t => db.getTableSchema(t))
);
let promptText = `Analyze this SQL query for performance issues:\n\n\`\`\`sql\n${query}\n\`\`\`\n\n`;
promptText += `Table schemas:\n${JSON.stringify(schemas, null, 2)}\n\n`;
if (context) {
promptText += `Additional context: ${context}\n\n`;
}
promptText += "Please evaluate:\n";
promptText += "1. Missing indexes\n";
promptText += "2. Query plan efficiency\n";
promptText += "3. Potential N+1 or full table scan issues\n";
promptText += "4. Suggestions for rewriting\n";
return {
messages: [{ role: "user", content: { type: "text", text: promptText } }],
};
}
);
Implementing Prompts (Python)
import json

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("db-server")
@mcp.prompt()
async def analyze_query(query: str, context: str = "") -> str:
    """Analyze a SQL query for performance issues and suggest improvements.

    Args:
        query: The SQL query to analyze
        context: Additional context about tables or indexes
    """
    # extract_table_names and db are helpers defined elsewhere in the server
    tables = extract_table_names(query)
    schemas = [await db.get_table_schema(t) for t in tables]

    prompt = f"Analyze this SQL query for performance issues:\n\n```sql\n{query}\n```\n\n"
    prompt += f"Table schemas:\n{json.dumps(schemas, indent=2)}\n\n"
    if context:
        prompt += f"Additional context: {context}\n\n"
    prompt += "Please evaluate:\n"
    prompt += "1. Missing indexes\n"
    prompt += "2. Query plan efficiency\n"
    prompt += "3. Potential N+1 or full table scan issues\n"
    prompt += "4. Suggestions for rewriting\n"
    return prompt
Dynamic Prompts
Prompts can generate different content based on server state:
server.prompt(
"daily-report",
"Generate a daily report of database activity",
{},
async () => {
const stats = await db.query(`
SELECT
(SELECT count(*) FROM orders WHERE created_at > now() - interval '24 hours') as new_orders,
(SELECT count(*) FROM users WHERE created_at > now() - interval '24 hours') as new_users,
(SELECT avg(response_time_ms) FROM api_logs WHERE created_at > now() - interval '24 hours') as avg_response_time
`);
const row = stats.rows[0];
return {
messages: [{
role: "user",
content: {
type: "text",
text: `Generate a daily activity report based on these metrics from the last 24 hours:\n\n- New orders: ${row.new_orders}\n- New users: ${row.new_users}\n- Average API response time: ${row.avg_response_time}ms\n\nProvide insights on trends and any anomalies.`,
},
}],
};
}
);
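The prompt text itself is a pure formatting concern that can be factored out and tested without a database. A sketch, with a hypothetical `daily_report_prompt` helper; in a real server the numbers would come from the live query above:

```python
# Hypothetical formatting helper for the daily-report prompt.
def daily_report_prompt(new_orders: int, new_users: int, avg_response_ms: float) -> str:
    return (
        "Generate a daily activity report based on these metrics "
        "from the last 24 hours:\n\n"
        f"- New orders: {new_orders}\n"
        f"- New users: {new_users}\n"
        f"- Average API response time: {avg_response_ms}ms\n\n"
        "Provide insights on trends and any anomalies."
    )
```

Separating data collection from text generation also makes it easy to keep the rendered prompt short and focused.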
Prompt List Change Notifications
If your server's available prompts can change at runtime, declare listChanged in capabilities and notify clients:
// After adding or removing prompts dynamically, send the notification
// via the underlying low-level server (McpServer exposes it as .server)
server.server.notification({
  method: "notifications/prompts/list_changed",
});
Anti-Patterns
- Static prompts with no arguments — if a prompt always returns the exact same text, it should probably be documentation, not an MCP prompt. Prompts are most valuable when they are dynamic or parameterized.
- Prompts that duplicate tool functionality — a prompt should set up a conversation, not replicate what a tool does. If you need to execute a query, that is a tool. If you need to guide analysis of a query, that is a prompt.
- Overly long prompts — keep generated prompts focused. A 5000-word prompt template wastes context and dilutes the instruction.
- Missing argument validation — validate that required arguments are present and sensible before generating the prompt.
- Hardcoded data in prompts — fetch live data where possible. A prompt that embeds a stale schema is worse than one that reads the current schema dynamically.
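The argument-validation point above can be a small, reusable check against the prompt's declared argument schema. A sketch, assuming a hypothetical `validate_arguments` helper (SDKs with typed schemas, like Zod or FastMCP type hints, do this for you):

```python
# Hypothetical validator: check supplied arguments against a prompt definition.
def validate_arguments(definition: dict, arguments: dict) -> None:
    declared = {a["name"] for a in definition.get("arguments", [])}
    required = {a["name"] for a in definition.get("arguments", []) if a.get("required")}
    missing = required - arguments.keys()
    if missing:
        raise ValueError(f"Missing required arguments: {sorted(missing)}")
    unknown = arguments.keys() - declared
    if unknown:
        raise ValueError(f"Unknown arguments: {sorted(unknown)}")
```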
Best Practices
- Use kebab-case for prompt names: `analyze-query`, `review-migration`, `explain-table`.
- Write clear descriptions — clients may show these in a menu or slash-command list.
- Embed relevant resources dynamically so prompts always reflect current state.
- Keep prompts focused on a single workflow. Multiple steps are fine, but the goal should be clear.
- Test prompts with real AI models to verify they produce useful conversations.
- Use multi-message prompts to establish persona or approach when the task benefits from it.
Related Skills
MCP Auth and Security
Securing MCP servers with authentication, authorization, and defensive practices. Covers OAuth 2.1 integration for remote servers, API key management through environment variables, input validation and sanitization, rate limiting, sandboxing tool execution, path traversal prevention, and the principle of least privilege for tool design.
MCP Deployment
Deploying MCP servers across different environments and transports. Covers local deployment via stdio, remote deployment with SSE and streamable HTTP, Docker containerization, cloud deployment on AWS/GCP/Vercel, npx and uvx distribution for zero-install usage, configuration management, and production hardening.
MCP Fundamentals
Core architecture of the Model Context Protocol (MCP) — the open protocol from Anthropic that connects AI assistants to external tools and data sources. Covers JSON-RPC transport, capabilities negotiation, server lifecycle, the client-server interaction model, and how tools, resources, and prompts fit together.
MCP Patterns
Common architectural patterns for MCP servers — database servers, API wrappers, file system servers, multi-tool orchestration, caching strategies, error recovery, and composition patterns. Practical blueprints for building production-quality MCP servers that handle real-world complexity.
MCP Python Server
Building MCP servers in Python using the official mcp SDK and the FastMCP high-level pattern. Covers project setup with uv, defining tools with type hints, async handlers, resources, prompts, stdio and SSE transports, context objects, and deployment strategies including uvx distribution.
MCP Resources
Exposing data and content to AI clients through MCP resources. Covers resource URIs, listing and reading resources, resource templates with URI patterns, MIME types, subscriptions for real-time updates, and patterns for exposing files, database records, and API data as browsable resources.