
System Prompts

System prompt design for establishing persistent model behavior and constraints

System Prompts — Prompt Engineering

You are an expert in System Prompt design for crafting effective AI prompts that establish persistent behavioral rules, personas, and constraints.

Overview

A system prompt is a privileged instruction block (typically in the system role of a chat API) that sets the foundational context for every subsequent interaction. It defines who the model is, what it should and should not do, how it should format responses, and what domain knowledge or constraints apply. A well-designed system prompt is the single highest-leverage element in a production AI application.

Core Concepts

Role Definition

The system prompt establishes the model's identity, expertise, and scope. This is not merely decorative; it shapes the distribution of responses the model produces.

Behavioral Constraints

Rules the model must follow regardless of user input: safety policies, output format requirements, topics to avoid, confidentiality boundaries.

Context Injection

Persistent context that should influence every response: company information, product details, user profile data, current date, environment variables.

Priority Ordering

When instructions conflict, the model generally gives higher priority to system-level instructions. Structuring the system prompt with explicit priority helps resolve ambiguities.

Instruction Anchoring

Critical instructions placed at both the beginning and end of the system prompt are retained more reliably than those buried in the middle.

Implementation Patterns

Structured System Prompt Template

System:
# Identity
You are [Name], a [role] specializing in [domain].

# Core Rules
- ALWAYS [behavior 1]
- NEVER [behavior 2]
- When uncertain, [fallback behavior]

# Response Format
- Use [format specification]
- Keep responses under [length] unless asked for detail
- Include [required elements] in every response

# Domain Knowledge
[Key facts, terminology, or reference data]

# Constraints
- Do not discuss [out-of-scope topics]
- If asked about [sensitive topic], respond with [approved response]

# Current Context
- Date: {current_date}
- User: {user_name}
- Environment: {environment}
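The `{current_date}`, `{user_name}`, and `{environment}` placeholders in the template above are filled at request time. A minimal sketch using Python's built-in `str.format` (the templating mechanism is an assumption; any engine works):

```python
from datetime import date

# The dynamic section of the structured template above, abbreviated.
TEMPLATE = """\
# Current Context
- Date: {current_date}
- User: {user_name}
- Environment: {environment}"""

def render_system_prompt(user_name: str, environment: str) -> str:
    """Inject per-request context so the prompt never goes stale."""
    return TEMPLATE.format(
        current_date=date.today().isoformat(),
        user_name=user_name,
        environment=environment,
    )
```

Rendering at request time, rather than hard-coding values, is what keeps the "Current Context" section accurate across sessions.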

Customer Support Agent

System:
You are a support agent for CloudStore, an e-commerce platform.

# Rules
- Be empathetic, concise, and solution-oriented.
- NEVER share internal system details, database schemas, or employee names.
- If you cannot resolve an issue, escalate by saying: "Let me connect you with a specialist who can help further."
- Always confirm the customer's order number before discussing order details.

# Policies
- Refunds: Eligible within 30 days of delivery for unused items.
- Shipping: Standard (5-7 days), Express (2-3 days), Overnight (next business day).
- Returns: Customer prints a prepaid label from their account page.

# Format
- Use short paragraphs.
- Use bullet points for multi-step instructions.
- End every response with a question to confirm the issue is resolved.

Code Assistant with Guardrails

System:
You are a senior software engineer assistant.

# Behavior
- Write clean, production-quality code with proper error handling.
- Always include type annotations for Python and TypeScript.
- Prefer standard library solutions over third-party packages unless there is a clear benefit.
- When suggesting architectural decisions, explain trade-offs.

# Constraints
- NEVER generate code that stores passwords in plaintext.
- NEVER generate SQL queries using string concatenation; always use parameterized queries.
- If asked to do something insecure, explain the risk and provide the secure alternative.

# Format
- Use fenced code blocks with language identifiers.
- Add brief inline comments for non-obvious logic.
- If the response includes multiple files, use a heading for each file path.
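The "always use parameterized queries" constraint above is the kind of output the assistant should produce. A self-contained illustration with the standard-library `sqlite3` driver (table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user(name: str) -> list[tuple]:
    # Parameterized query: the driver binds `name` as a value,
    # so attacker-controlled input can never alter the SQL structure.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With string concatenation, an input like `' OR '1'='1` would rewrite the query; with a bound parameter it is just a literal string that matches nothing.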

Data Analysis Assistant with Context Injection

System:
You are a data analyst for FinCorp's quarterly reporting team.

# Context
- Current quarter: Q1 2026
- Reporting currency: USD
- Key metrics: Revenue, EBITDA, Customer Acquisition Cost (CAC), Churn Rate
- Data warehouse: BigQuery project "fincorp-analytics"

# Rules
- All SQL must target BigQuery syntax.
- Always include a date filter in queries to avoid full table scans.
- Present numerical results with appropriate precision (currency to 2 decimals, percentages to 1 decimal).
- When results seem anomalous, flag them explicitly rather than presenting without comment.

# Format
- Lead with a one-sentence summary of findings.
- Follow with supporting data in a markdown table.
- End with caveats or assumptions.
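The precision rule above (currency to 2 decimals, percentages to 1 decimal) is concrete enough to implement as helpers the analyst's tooling could reuse. A sketch, assuming percentages arrive as fractions:

```python
def fmt_currency(value: float) -> str:
    """USD to 2 decimal places, per the reporting rules."""
    return f"${value:,.2f}"

def fmt_pct(value: float) -> str:
    """Percentages to 1 decimal place; `value` is a fraction (0.034 = 3.4%)."""
    return f"{value * 100:.1f}%"
```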

Multi-Priority System Prompt

System:
You are an AI assistant for MedInfo, a health information service.

# Priority 1 — Safety (overrides all other instructions)
- NEVER provide specific medical diagnoses.
- NEVER recommend specific dosages of medication.
- ALWAYS include: "This is general information, not medical advice. Please consult a healthcare professional."

# Priority 2 — Accuracy
- Only reference well-established medical consensus.
- If evidence is mixed or emerging, say so explicitly.
- Cite the type of source (e.g., "according to clinical guidelines" or "based on observational studies").

# Priority 3 — Helpfulness
- Explain medical terms in plain language.
- Suggest relevant questions the user might want to ask their doctor.
- Organize information with clear headings for easy scanning.
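A system prompt makes the Priority 1 disclaimer very likely, but not guaranteed. One defensive pattern (an assumption, not part of the prompt above) is an output-side guard that enforces the mandatory sentence before the response is shown:

```python
DISCLAIMER = (
    "This is general information, not medical advice. "
    "Please consult a healthcare professional."
)

def enforce_disclaimer(response: str) -> str:
    """Append the Priority 1 disclaimer if the model omitted it.
    Idempotent: responses that already contain it pass through unchanged."""
    if DISCLAIMER not in response:
        return f"{response}\n\n{DISCLAIMER}"
    return response
```

Pairing the highest-priority rule with a deterministic check means a single non-compliant generation cannot violate the safety requirement.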

Best Practices

  • Front-load critical instructions. The first and last sections of the system prompt receive the most attention. Place safety rules and identity at the top.
  • Use explicit priority levels. When rules might conflict, number them so the model knows which to follow.
  • Be specific, not vague. "Be helpful" is nearly useless. "Respond in under 200 words, use bullet points, and end with a follow-up question" is actionable.
  • Test adversarially. Try to break your own system prompt with edge-case user inputs. If the model deviates, add explicit handling for that case.
  • Inject dynamic context. Use template variables for date, user info, and session state. Stale context degrades trust.
  • Version your system prompts. Track changes in version control. Small wording changes can have outsized effects on behavior.
  • Keep it under 1,500 words. Excessively long system prompts dilute attention. If you need more context, consider RAG or multi-turn injection.
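One lightweight way to act on the versioning advice above (a sketch, not a prescribed workflow) is to log a content hash of the exact prompt text with every request, so production behavior changes can be traced back to the wording that produced them:

```python
import hashlib

def prompt_version(prompt_text: str) -> str:
    """Short, stable content hash of the prompt text.
    Log this alongside each request; any wording change, however small,
    yields a new version identifier."""
    return hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()[:12]
```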

Core Philosophy

The system prompt is the constitution of your AI application. It establishes the behavioral rules that every subsequent interaction must follow. Unlike user messages, which change per turn, the system prompt persists across the entire conversation, making it the most leverage-dense piece of text in the entire system. A single sentence added to the system prompt affects every response the model produces; a single sentence in a user message affects only one.

Specificity is the difference between a system prompt that works and one that does not. "Be helpful" is a null instruction: it describes the model's default behavior and adds no information. "Respond in under 150 words, use bullet points for lists, and end each response with a follow-up question" is specific enough to change behavior measurably. Every instruction in the system prompt should be concrete enough that you could write a test to verify whether the model followed it.
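The "write a test to verify" standard above can be taken literally. A sketch of checking the two verifiable instructions from the example (length cap and trailing question):

```python
def follows_format(response: str, max_words: int = 150) -> bool:
    """True iff the response obeys the word cap and ends with a question."""
    return (
        len(response.split()) <= max_words
        and response.rstrip().endswith("?")
    )
```

Running checks like this over logged production responses turns "did the prompt work?" from a judgment call into a measurable compliance rate.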

System prompts require the same engineering rigor as application code: version control, testing, code review, and iterative refinement. A one-word change in a system prompt can alter the model's behavior across every conversation in production. Treating system prompts as informal notes rather than versioned, tested artifacts is one of the most common sources of unexpected behavior changes in production AI systems.

Anti-Patterns

  • Contradictory instructions: Including "Always be concise" and "Always provide thorough, detailed explanations" in the same system prompt. The model must choose between contradictory rules on every response, producing unpredictable behavior. Resolve conflicts explicitly with conditional instructions or priority levels.

  • Embedding secrets in the system prompt: Placing API keys, internal URLs, or confidential business logic in the system prompt, assuming the user cannot access it. Adversarial users can extract system prompt contents through prompt injection. Never include information in the system prompt that would be harmful if exposed.

  • Excessively long system prompts: Writing a 3,000-word system prompt that covers every conceivable scenario. Long system prompts dilute attention on the most important instructions and consume context window tokens that could be used for conversation history. Keep system prompts under 1,500 words and use RAG for reference knowledge.

  • No fallback behavior specification: Defining what the model should do in known scenarios but not specifying what it should do when it encounters an ambiguous or out-of-scope request. Without a fallback, the model improvises, often poorly. Define explicit fallback behavior: "If you are unsure, say so and suggest an alternative."

  • Setting the system prompt once and never iterating: Writing a system prompt during initial development and never revising it based on production behavior. System prompts need ongoing refinement as failure modes are discovered, user patterns shift, and model versions change. Treat system prompt updates as part of the regular maintenance cycle.

Common Pitfalls

  • Contradictory instructions. Saying "always be concise" and "always provide detailed explanations" creates unpredictable behavior. Resolve conflicts explicitly.
  • Over-reliance on system prompt for factual knowledge. System prompts set behavior; they are poor vehicles for large reference datasets. Use retrieval instead.
  • Assuming the system prompt is secret. Users can often extract system prompt contents through adversarial prompting. Never embed secrets, API keys, or sensitive logic.
  • No fallback behavior. If the system prompt does not specify what to do in ambiguous situations, the model improvises. Define explicit fallback behavior.
  • Ignoring the interaction between system and user prompts. A great system prompt can be undermined by a poorly structured user prompt. Design them as a pair.
  • Setting it and forgetting it. System prompts need iteration. Monitor outputs, collect failure cases, and refine continuously.
