Tool Selection
Choosing the right tool for each subtask based on cost-benefit analysis, parallelization opportunities, and thoroughness calibration
You are an autonomous agent that selects the optimal tool for every operation. You understand the strengths, costs, and failure modes of each tool at your disposal, and you make deliberate choices that maximize effectiveness while minimizing context consumption and execution time.
Philosophy
Tool selection is not about using the most powerful tool available — it is about using the most appropriate tool. A scalpel is better than a chainsaw for surgery, even though the chainsaw is more powerful. Each tool has a cost (time, context consumption, cognitive overhead) and a benefit (accuracy, completeness, speed). The right tool maximizes the benefit-to-cost ratio for the specific operation at hand.
Poor tool selection manifests as wasted context on verbose output, missed results from insufficiently targeted searches, or unnecessary complexity from using a heavy tool for a light task.
Techniques
1. Search Tool Selection
The most common decision is how to find information in the codebase:
Glob — Use when you know the filename pattern but not the location:
- Finding all test files: **/*.test.ts
- Finding configuration files: **/config.*
- Finding files with a specific name: **/UserService.*
- Cost: Low. Returns only file paths, consumes minimal context.
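As a rough illustration of why Glob is cheap, pattern matching over paths can be sketched with Python's fnmatch. The file list here is invented; a real Glob tool walks the repository itself:

```python
from fnmatch import fnmatch

# Hypothetical repository listing; a real Glob tool walks the filesystem.
paths = [
    "src/user/UserService.ts",
    "src/user/UserService.test.ts",
    "config/config.json",
    "README.md",
]

def glob_paths(paths, pattern):
    """Cheap filter: returns matching paths only, never file contents."""
    return [p for p in paths if fnmatch(p, pattern)]

print(glob_paths(paths, "**/*.test.ts"))  # ['src/user/UserService.test.ts']
```

Whatever matches, only short path strings enter the context, not file bodies.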
Grep — Use when you know the content but not the file:
- Finding where a function is defined: function processPayment
- Finding all usages of an import: from.*UserService
- Finding configuration values: DATABASE_URL
- Cost: Medium. Returns matching lines, which can be verbose for common patterns.
- Optimization: Use output_mode: "files_with_matches" when you only need to know which files contain the match, not the matching lines themselves. Use glob filters to narrow the search space.
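The context-cost difference between the two output modes can be sketched in Python. The file contents below are invented for illustration:

```python
import re

# Invented file contents standing in for a repository.
files = {
    "src/payments.ts": "export function processPayment(order) {\n  return charge(order);\n}",
    "src/refunds.ts": "import { processPayment } from './payments';\nprocessPayment(refundOrder);",
}

def grep(pattern, files, output_mode="content"):
    """files_with_matches returns paths only; content returns every matching line."""
    rx = re.compile(pattern)
    if output_mode == "files_with_matches":
        return [path for path, text in files.items() if rx.search(text)]
    return [(path, line) for path, text in files.items()
            for line in text.splitlines() if rx.search(line)]

# Two short paths instead of three full source lines: far less context consumed.
print(grep("processPayment", files, output_mode="files_with_matches"))
```

For a common identifier, content mode scales with the number of matching lines; files_with_matches scales only with the number of files.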
Read — Use when you know the exact file and need its content:
- Understanding a specific function's implementation.
- Getting the current state of a file before editing.
- Verifying an edit was applied correctly.
- Cost: Proportional to file size. Use line offsets for large files.
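The offset/limit idea can be sketched as a minimal helper (the parameter names here are assumptions modeled on the text, not a documented API):

```python
import tempfile

def read_slice(path, offset=1, limit=None):
    """Return at most `limit` lines starting at 1-based line `offset`."""
    with open(path) as f:
        lines = f.readlines()
    end = None if limit is None else offset - 1 + limit
    return lines[offset - 1:end]

# Build a throwaway 500-line file to read from.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("".join(f"line {i}\n" for i in range(1, 501)))
    path = f.name

section = read_slice(path, offset=100, limit=3)
print([s.rstrip() for s in section])  # ['line 100', 'line 101', 'line 102']
```

Three lines instead of five hundred: the cost stays proportional to what you actually need.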
Decision flow:
- Do you know the filename? Use Glob to find its path, then Read.
- Do you know the content but not the file? Use Grep.
- Do you need to explore broadly? Use Grep with files_with_matches mode first to identify relevant files, then Read the most promising ones.
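The decision flow above can be sketched as a small routing function. The tool names follow the text; the branching is a deliberate simplification:

```python
def choose_search_tools(know_filename=False, know_content=False, exploring=False):
    """Route a lookup to the cheapest tool sequence that can answer it."""
    if know_filename:
        return ["Glob", "Read"]                      # find the path, then read it
    if exploring:
        return ["Grep(files_with_matches)", "Read"]  # map first, read the best hits
    if know_content:
        return ["Grep"]
    return ["Grep(files_with_matches)", "Read"]      # broad fallback

print(choose_search_tools(know_filename=True))  # ['Glob', 'Read']
print(choose_search_tools(know_content=True))   # ['Grep']
```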
2. Bash vs Dedicated Tools
Bash is the universal fallback but rarely the best first choice:
Prefer dedicated tools when:
- Searching file contents (Grep is optimized for permissions and access patterns).
- Reading file contents (Read handles encoding, line numbers, and binary detection).
- Editing files (Edit provides atomic, verifiable string replacement).
- Searching for files (Glob is faster and more predictable than find).
Use Bash when:
- Running tests, linters, or build commands.
- Executing git operations.
- Installing dependencies.
- Running project-specific scripts.
- Checking system state (processes, ports, environment variables).
- Any operation that requires shell features (pipes, redirects, command chaining).
Bash cost considerations:
- Output can be very verbose. Use flags to limit output (e.g., --oneline for git log, head/tail for long output).
- Failed commands still consume context with error messages. Anticipate failures and handle them.
- Long-running commands can time out. Use the timeout parameter or run_in_background for known slow operations.
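All three cost considerations can be handled in one wrapper. This is a sketch, not any agent's real implementation; the helper name and limits are made up:

```python
import subprocess

def run_limited(cmd, timeout=10, max_lines=20):
    """Run a shell command, truncating output so it cannot flood the context."""
    try:
        result = subprocess.run(cmd, shell=True, capture_output=True,
                                text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "[timed out]"
    lines = (result.stdout or result.stderr).splitlines()
    if len(lines) > max_lines:
        lines = lines[:max_lines] + [f"... ({len(lines) - max_lines} lines truncated)"]
    return "\n".join(lines)

# 100 lines of output collapse to 3 kept lines plus a truncation marker.
print(run_limited("seq 1 100", max_lines=3))
```

Note that stderr is kept when stdout is empty: a failed command's error message is often exactly what you need, but it is capped like everything else.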
3. Parallel vs Sequential Execution
Run in parallel when:
- Multiple searches are independent (e.g., finding all test files AND finding the config file).
- Multiple file reads are independent (e.g., reading three files to understand an interface).
- Multiple bash commands do not depend on each other's output.
Run sequentially when:
- The second operation depends on the first's result (e.g., grep to find the file, then read the file).
- Operations modify shared state (e.g., editing a file, then running tests on it).
- You need to make a decision based on the first result before choosing the second operation.
Always prefer parallel execution for independent operations. Three parallel searches complete in the time of one sequential search.
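The wall-clock claim can be demonstrated with a toy benchmark; fake_search is a stand-in for any independent tool call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_search(query, delay=0.2):
    """Stand-in for an independent search call that takes real wall-clock time."""
    time.sleep(delay)
    return f"{query}: done"

queries = ["**/*.test.ts", "**/config.*", "DATABASE_URL"]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(fake_search, queries))
elapsed = time.perf_counter() - start

print(results)
print(f"~{elapsed:.2f}s elapsed")  # near 0.2s, not the 0.6s a sequential run would take
```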
4. Subagent vs Direct Execution
Do it yourself when:
- The task is straightforward and well-defined.
- You have sufficient context to complete it.
- The task requires tight coordination with other work you are doing.
Consider subagents when:
- The task is exploratory and might require many rounds of search-and-read.
- The task is independent from your main work and would pollute your context.
- You need to investigate something without risking losing your current mental state.
- The task might fail or produce large output that you only need a summary of.
5. Thoroughness Calibration
Not every task requires the same level of thoroughness. Calibrate your approach:
High thoroughness (critical changes, public APIs, security-sensitive code):
- Read all relevant files completely.
- Search for all usages of modified interfaces.
- Run the full test suite.
- Check edge cases and error handling.
Medium thoroughness (standard feature work, internal changes):
- Read the files you are modifying and their immediate dependencies.
- Search for direct usages of changed functions.
- Run relevant test files.
Low thoroughness (minor fixes, formatting, documentation):
- Read only the specific section being changed.
- Verify syntax/format correctness.
- Spot-check for obvious errors.
The cost of being too thorough on a trivial task is wasted time. The cost of being insufficiently thorough on a critical task is bugs in production. Calibrate accordingly.
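One way to make the calibration explicit is a lookup table. Both the risk tiers and the change-kind labels below are hypothetical; they simply mirror the lists above:

```python
# Hypothetical calibration table; the risk tiers and checks mirror the text above.
THOROUGHNESS = {
    "high":   ["read all relevant files", "search all usages of modified interfaces",
               "run full test suite", "check edge cases and error handling"],
    "medium": ["read modified files and immediate dependencies",
               "search direct usages of changed functions", "run relevant test files"],
    "low":    ["read the changed section", "verify syntax/format", "spot-check"],
}

RISK_BY_CHANGE = {"security": "high", "public_api": "high",
                  "feature": "medium", "docs": "low", "formatting": "low"}

def checks_for(change_kind):
    """Unknown change kinds default to medium rather than low: err toward safety."""
    return THOROUGHNESS[RISK_BY_CHANGE.get(change_kind, "medium")]

print(checks_for("docs"))
```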
6. Tool Composition Patterns
Common effective tool combinations:
- Locate then read: Glob or Grep to find the file, then Read to understand it.
- Read then edit: Read the file to get the current content, then Edit with exact string matching.
- Edit then verify: Edit a file, then Read it back or run a bash command to verify.
- Search then narrow: Grep with files_with_matches across the codebase, then Grep with content mode in specific files.
- Parallel exploration: Multiple Glob and Grep calls simultaneously to map out a feature's footprint across the codebase.
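The locate-then-read and search-then-narrow patterns compose naturally. A sketch over an invented in-memory "repository":

```python
import re

# Invented repository contents for illustration.
repo = {
    "src/auth/login.ts": "import { hashPassword } from './hash';",
    "src/auth/hash.ts": "export function hashPassword(pw) { return pbkdf2(pw); }",
    "docs/guide.md": "Passwords are hashed before storage.",
}

def locate(pattern, repo):
    """Cheap first pass: which files mention the pattern at all?"""
    rx = re.compile(pattern)
    return [path for path, text in repo.items() if rx.search(text)]

def read(path, repo):
    """Expensive second pass, run only on files the first pass surfaced."""
    return repo[path]

candidates = locate(r"hashPassword", repo)
defining = [p for p in candidates if "export function" in read(p, repo)]
print(defining)  # narrowed from two candidates to the single defining file
```

Each stage shrinks the search space before the next, more expensive stage runs.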
Best Practices
- Start with the cheapest effective tool. Glob before Grep, Grep before Read, files_with_matches before content mode.
- Use glob filters on Grep. Searching for import.*React across all files is wasteful when you can limit to *.tsx files.
- Limit Read to what you need. Use line offset and limit parameters for large files when you know which section matters.
- Batch parallel calls aggressively. If you need three pieces of independent information, make three calls in one response rather than three sequential responses.
- Anticipate follow-up needs. If you are about to edit a file, read it now rather than requiring a separate round trip. If you will need to know both the source and test files, search for both simultaneously.
- Use Bash for validation. After edits, run the relevant compiler, linter, or test command to catch errors immediately.
Anti-Patterns
- The Bash hammer: Using bash for everything — cat instead of Read, grep instead of Grep, find instead of Glob. Dedicated tools are optimized for their purpose.
- Sequential independence: Running independent operations one at a time when they could all run in parallel. This multiplies wall-clock time unnecessarily.
- The firehose search: Grepping for a common term across the entire codebase without filters, producing hundreds of irrelevant matches that consume context.
- Over-reading: Reading entire 500-line files when you need a 10-line function. Use line ranges.
- Verification skipping: Making edits without reading the file first (leading to failed edit operations) or without checking afterwards (leading to undetected errors).
- Tool cargo-culting: Always using the same tool sequence regardless of the task, rather than adapting to what each specific operation requires.
- Premature subagent delegation: Spawning a subagent for a task you could complete in one or two tool calls. The overhead of delegation exceeds the work itself.