# Agent Usage Guide
Learn how to work with Nova AI's specialist agents for different development tasks.
## Agent Overview
Nova AI uses 6 specialist agents, each with distinct responsibilities:
```mermaid
graph TB
    A[orchestrator] --> B[architect]
    A --> C[implementer]
    A --> D[code-reviewer]
    A --> E[tester]
    A --> F[debugger]

    B --> G[Architecture decisions]
    C --> H[Feature implementation]
    D --> I[Security review]
    E --> J[Test execution]
    F --> K[Error analysis]
```
## Agent Profiles

### Orchestrator
Purpose: Multi-agent coordination and workflow management
Responsibilities:
- Task decomposition and planning
- Agent coordination and delegation
- Session management
- Quality gate enforcement
- KB search and context gathering
When to use:
- Complex multi-step tasks
- Tasks requiring multiple agents
- Unclear requirements (needs clarification)
- Standard `/novaai` commands
Example:
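For instance, a multi-step feature request:

```
/novaai implement user authentication with JWT
```

The orchestrator decomposes the task, delegates to the other agents, and enforces the quality gates along the way.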
Characteristics:
- Model: Sonnet 4.5 (agentic workflows)
- Access: All tools (Read, Write, Edit, Grep, Bash, etc.)
- Session: Maintains state across agent switches
- Cost: Optimized with prompt caching
### Architect
Purpose: Architecture decisions and design trade-offs
Responsibilities:
- System design and architecture
- Technology selection and trade-offs
- Scalability and performance planning
- Microservices design
- Database schema design
When to use:
- New system design
- Major refactoring decisions
- Technology evaluation
- Architecture documentation
Example:
```
/novaai design a microservices architecture for the API with
user service, payment service, and notification service
```
Output:
- Architecture diagrams (Mermaid)
- Component specifications
- Trade-off analysis
- Implementation recommendations
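For the request above, the returned diagram might look something like this (the gateway and the event flow between services are illustrative assumptions, not fixed outputs):

```mermaid
graph LR
    GW[API Gateway] --> US[User Service]
    GW --> PS[Payment Service]
    GW --> NS[Notification Service]
    US --> UDB[(User DB)]
    PS --> PDB[(Payment DB)]
    PS -. payment events .-> NS
```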
### Implementer
Purpose: Feature implementation and code writing
Responsibilities:
- Write production-ready code
- Create comprehensive tests (>80% coverage)
- Add type hints and docstrings
- Follow project patterns
- Document implementations
When to use:
- Feature implementation
- Bug fixes (after root cause analysis)
- Code refactoring
- Test writing
Example:
```python
from pathlib import Path

from src.orchestrator.claude_sdk_executor import ClaudeSDKExecutor

executor = ClaudeSDKExecutor(
    project_root=Path.cwd(),
    agent_name="implementer",
    use_sdk_mcp=True
)
result = await executor.run_task(
    "implement user registration endpoint with email validation"
)
```
Quality Standards:
- ✅ Type hints on all functions
- ✅ Google-style docstrings
- ✅ >80% test coverage
- ✅ Error handling and validation
- ✅ Logging at appropriate levels
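A minimal sketch of code that meets these standards (the function itself is illustrative):

```python
import logging

logger = logging.getLogger(__name__)


def normalize_email(email: str) -> str:
    """Normalize an email address for storage.

    Args:
        email: Raw email address supplied by the user.

    Returns:
        The address stripped of surrounding whitespace and lower-cased.

    Raises:
        ValueError: If the address does not contain an "@".
    """
    if "@" not in email:
        raise ValueError(f"Invalid email address: {email!r}")
    normalized = email.strip().lower()
    logger.debug("Normalized email %r -> %r", email, normalized)
    return normalized
```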
### Code-Reviewer
Purpose: Security, correctness, and maintainability review
Responsibilities:
- Security vulnerability scanning
- Code quality analysis
- Best practices compliance
- Performance optimization suggestions
- Documentation review
When to use:
- Before all commits (required)
- PR reviews
- Security audits
- Code quality checks
Example:
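For instance, a pre-commit review request:

```
/novaai review the latest changes for security issues
```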
Review Checklist:
- 🔒 Security: SQL injection, XSS, hardcoded secrets
- ✅ Correctness: Logic errors, edge cases
- 📚 Maintainability: Code clarity, documentation
- ⚡ Performance: N+1 queries, inefficient algorithms
- 🧪 Testing: Test coverage, test quality
Output:
```
CODE REVIEW RESULTS

Security: ✅ PASS
- No SQL injection vulnerabilities
- No hardcoded secrets
- Proper password hashing with bcrypt

Correctness: ⚠️ NEEDS ATTENTION
- Line 42: Missing null check for email parameter
- Line 87: Race condition in token refresh logic

Maintainability: ✅ PASS
- Type hints complete
- Docstrings present and accurate
- Clear variable names

Performance: ✅ PASS
- No N+1 queries detected
- Efficient database indexing

Recommendations:
1. Add null check at line 42
2. Use database lock for token refresh (line 87)
3. Consider adding rate limiting

Overall: NEEDS CHANGES (2 blockers)
```
### Tester
Purpose: Test execution and validation
Responsibilities:
- Run test suites (unit + integration)
- Check code coverage
- Validate quality gates
- Run linters and type checkers
- Performance testing
When to use:
- After implementation
- Before deployment
- CI/CD pipelines
- Performance benchmarks
Example:
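For instance, a post-implementation validation run:

```
/novaai run the full test suite and check coverage
```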
Test Types:
| Type | Purpose | Example |
|---|---|---|
| Unit | Test individual functions | `test_hash_password()` |
| Integration | Test component interactions | `test_user_registration_flow()` |
| Performance | Test speed and efficiency | `test_api_response_time()` |
| Security | Test security measures | `test_sql_injection_prevention()` |
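As a sketch of the unit-test style in the first row, assuming hypothetical `hash_password` and `verify_password` helpers:

```python
# Hypothetical module path - adjust to where your hashing helpers live.
from src.auth.passwords import hash_password, verify_password


def test_hash_password():
    hashed = hash_password("s3cret!")
    assert hashed != "s3cret!"  # Plaintext must never be stored
    assert verify_password("s3cret!", hashed)


def test_hash_password_rejects_wrong_password():
    hashed = hash_password("s3cret!")
    assert not verify_password("wrong-password", hashed)
```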
Output:
```
TEST RESULTS

Unit Tests: ✅ 45/45 passing (100%)
Integration Tests: ✅ 12/12 passing (100%)
Coverage: 87% (target: >80%)

Performance:
- API response time: 45ms (target: <100ms)
- Database query time: 8ms (target: <50ms)

Quality Gates:
- ✅ All tests passing
- ✅ Coverage >80%
- ✅ Type hints complete (mypy clean)
- ✅ Linting clean (ruff clean)

Status: APPROVED for production
```
### Debugger
Purpose: Error analysis and debugging
Responsibilities:
- Analyze error messages and stack traces
- Identify root causes
- Suggest fixes
- Reproduce bugs
- Add debugging instrumentation
When to use:
- Production errors
- Failing tests
- Performance issues
- Unclear error messages
Example:
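For instance, investigating the failure analyzed below:

```
/novaai debug the authentication timeout in src/api/client.py
```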
Process:
1. **Analyze Error** - Parse the stack trace and error message
2. **Reproduce** - Create a minimal reproduction case
3. **Root Cause** - Identify the underlying issue
4. **Fix** - Implement the solution
5. **Test** - Verify the fix and add regression tests
Output:

```
DEBUG ANALYSIS

Error: Authentication timeout after 30s
Location: src/api/client.py:142

Root Cause:
- No retry logic for network failures
- Hardcoded 30s timeout too aggressive
- Missing exponential backoff

Stack Trace Analysis:
- Line 142: requests.post() timeout
- Line 98: Token refresh triggered
- Line 45: No exception handling
```

Reproduction:

```python
# Simulate network failure
import requests

url = "https://api.example.com/token"  # Placeholder endpoint
requests.post(url, timeout=30)  # Fails after 30s
```

Suggested Fix:

1. Add retry logic with exponential backoff
2. Increase timeout to 60s
3. Add proper exception handling
4. Log retry attempts

Implementation:

- src/api/client.py: Add retry decorator
- tests/api/test_client.py: Test retry behavior
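A minimal sketch of the suggested fix, using plain `requests` (the decorator name and retry parameters are illustrative):

```python
import logging
import time
from functools import wraps

import requests

logger = logging.getLogger(__name__)


def retry_with_backoff(max_retries: int = 3, base_delay: float = 1.0):
    """Retry on network failures with exponential backoff."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except (requests.Timeout, requests.ConnectionError):
                    if attempt == max_retries:
                        raise  # Out of retries - surface the original error
                    delay = base_delay * (2 ** attempt)
                    logger.warning(
                        "Attempt %d failed, retrying in %.1fs", attempt + 1, delay
                    )
                    time.sleep(delay)
        return wrapper
    return decorator


@retry_with_backoff(max_retries=3)
def post_token_refresh(url: str, **kwargs):
    # 60s timeout per suggestion 2 above
    return requests.post(url, timeout=60, **kwargs)
```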
## Agent Coordination
### Multi-Agent Workflows
The orchestrator coordinates agents for complex tasks:
```mermaid
sequenceDiagram
    participant O as Orchestrator
    participant A as Architect
    participant I as Implementer
    participant R as Code-Reviewer
    participant T as Tester

    O->>A: Design architecture
    A-->>O: Architecture specs
    O->>I: Implement components
    I-->>O: Code + tests
    O->>R: Review security
    R-->>O: Review feedback

    alt Review Pass
        O->>T: Run tests
        T-->>O: Test results
    else Review Fail
        O->>I: Fix issues
    end
```
### Agent Selection
The orchestrator automatically selects the right agent based on the task:
| Task Type | Primary Agent | Supporting Agents |
|---|---|---|
| Feature implementation | implementer | code-reviewer, tester |
| Architecture design | architect | implementer |
| Bug fix | debugger | implementer, tester |
| Code review | code-reviewer | - |
| Testing | tester | - |
| Complex task | orchestrator | All |
## Direct Agent Usage
For advanced usage, you can invoke agents directly via Python SDK:
### Example 1: Direct Implementation
```python
from pathlib import Path

from src.orchestrator.claude_sdk_executor import ClaudeSDKExecutor

# Use implementer directly
executor = ClaudeSDKExecutor(
    project_root=Path.cwd(),
    agent_name="implementer",
    use_sdk_mcp=True
)
result = await executor.run_task(
    "implement user registration endpoint following the spec in docs/api/auth.md"
)

print(f"Files modified: {result.files_modified}")
print(f"Tests: {result.test_results}")
```
### Example 2: Code Review Pipeline
```python
# Implement code
impl_executor = ClaudeSDKExecutor(agent_name="implementer")
impl_result = await impl_executor.run_task("implement feature X")

# Review code
review_executor = ClaudeSDKExecutor(agent_name="code-reviewer")
review_result = await review_executor.run_task(
    f"review the changes in {impl_result.files_modified}"
)

if review_result.status == "APPROVED":
    # Run tests
    test_executor = ClaudeSDKExecutor(agent_name="tester")
    test_result = await test_executor.run_task("run full test suite")
```
### Example 3: Session Continuation
```python
# First task with orchestrator
orch = ClaudeSDKExecutor(agent_name="orchestrator")
result1 = await orch.run_task("implement authentication")

# Reuse session with implementer (88-95% overhead reduction)
impl = ClaudeSDKExecutor(
    agent_name="implementer",
    session_id=result1.session_id  # Reuse session
)
result2 = await impl.run_task("add refresh token rotation")
```
## Agent Configuration

### Model Selection
All agents use Sonnet 4.5 (Anthropic's recommendation for agentic workflows):
```yaml
# .claude/agents/implementer.yaml
name: implementer
description: Feature implementation specialist
model: claude-sonnet-4-5-20250929
tools:
  - Read
  - Write
  - Edit
  - Grep
  - Bash
```
### Tool Access
Each agent has specific tool permissions:
| Agent | Tools | Rationale |
|---|---|---|
| orchestrator | All | Full coordination |
| architect | Read, Grep | Read-only for analysis |
| implementer | Read, Write, Edit, Grep, Bash | Implementation + testing |
| code-reviewer | Read, Grep | Read-only for review |
| tester | Read, Bash | Testing + validation |
| debugger | Read, Grep, Bash | Analysis + debugging |
### Custom Agents

You can create custom agents by adding YAML files to `.claude/agents/`:
```yaml
# .claude/agents/custom-agent.yaml
name: custom-agent
description: Your custom specialist
model: claude-sonnet-4-5-20250929
tools:
  - Read
  - Grep
```
## Best Practices

### 1. Use Orchestrator for Complex Tasks
Let the orchestrator coordinate agents for multi-step tasks:
✅ /novaai implement user authentication with JWT
❌ Manually calling implementer → code-reviewer → tester
### 2. Always Review Before Commit
Required: Run code-reviewer before all commits:
```python
# CORRECT
result = await executor.run_task("implement feature")
review = await code_reviewer.review(result)  # REQUIRED
if review.passed:
    await commit()

# WRONG - Missing review
await commit()  # Will fail quality gates
```
### 3. Provide Specifications
Give agents clear specifications:
✅ /novaai implement user registration following the spec in docs/api/auth.md
❌ /novaai implement user stuff
### 4. Leverage KB Search
The orchestrator searches the knowledge base automatically:
```
/novaai implement authentication following JWT best practices
# Orchestrator finds: kb/auth/jwt-patterns.md, kb/security/password-hashing.md
```
### 5. Use Session Continuation
Reuse sessions for related tasks:
```python
# First task
executor = ClaudeSDKExecutor(agent_name="implementer")
result1 = await executor.run_task("implement feature A")

# Reuse session (88-95% faster)
executor2 = ClaudeSDKExecutor(
    agent_name="implementer",
    session_id=result1.session_id
)
result2 = await executor2.run_task("implement feature B")
```
## Troubleshooting

### Agent Not Found

Solution: Check that the agent's YAML file exists:
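```bash
ls .claude/agents/
# The agent name must match a YAML file, e.g. implementer.yaml
```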
### Agent Switch Timeout

Solution: Check your API key and network connectivity:
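```bash
# Assuming the standard ANTHROPIC_API_KEY environment variable
echo "${ANTHROPIC_API_KEY:+API key is set}"
curl -sI https://api.anthropic.com | head -n 1
```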
### Agent Returns Empty Response

Solution: Check that the agent has the tool access it needs and that the task is within its scope.
## Performance Metrics

### Agent Overhead
With session continuation:
| Transition | Without Session | With Session | Improvement |
|---|---|---|---|
| orchestrator → implementer | 680ms | 45ms | 93% |
| implementer → code-reviewer | 280ms | 15ms | 95% |
| code-reviewer → tester | 280ms | 12ms | 96% |
### Token Usage
Typical token usage per agent (with caching):
| Agent | Input Tokens | Output Tokens | Cache Savings |
|---|---|---|---|
| orchestrator | 15K | 3K | 90% |
| implementer | 25K | 5K | 85% |
| code-reviewer | 20K | 2K | 92% |
| tester | 10K | 1K | 88% |
## Next Steps
- **MCP Servers** - Configure and use MCP tools
- **Python SDK** - Use agents programmatically
- **Architecture** - Understand agent system design