Testing Guide¶
Comprehensive guide for writing and running tests in Nova AI.
Overview¶
Nova AI follows test-driven development with strict quality gates:
- >80% coverage required - All new code must be tested
- Unit + integration tests - Test individual functions and workflows
- Security tests - Validate security measures
- Performance tests - Ensure speed requirements
Test Structure¶
tests/
├── unit/ # Unit tests (fast, isolated)
│ ├── orchestrator/
│ ├── agents/
│ └── kb_store/
├── integration/ # Integration tests (slower, full stack)
│ ├── agent_workflows/
│ └── mcp_integration/
├── performance/ # Performance benchmarks
└── conftest.py # Shared fixtures
Running Tests¶
All Tests¶
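A typical invocation (assuming pytest is installed in the project environment; the `tests/` path matches the layout above):

```shell
pytest tests/
```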
With Coverage¶
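Assuming the pytest-cov plugin is installed:

```shell
pytest tests/ --cov=src --cov-report=term-missing
```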
Specific Test¶
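A single test can be selected by file and node ID; the file and test names below are illustrative placeholders:

```shell
pytest tests/unit/orchestrator/test_executor.py::test_executor_initialization
```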
By Marker¶
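Using the markers defined in the Test Markers section below:

```shell
pytest -m integration      # only integration tests
pytest -m "not slow"       # everything except slow tests
pytest -m security         # only security tests
```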
Writing Tests¶
Unit Test Example¶
```python
import pytest
from src.orchestrator.claude_sdk_executor import ClaudeSDKExecutor

def test_executor_initialization():
    """Test executor initializes with correct defaults."""
    executor = ClaudeSDKExecutor(
        agent_name="orchestrator",
        use_sdk_mcp=True
    )
    assert executor.agent_name == "orchestrator"
    assert executor.use_sdk_mcp is True

@pytest.mark.asyncio
async def test_executor_run_task():
    """Test executor can run simple task."""
    executor = ClaudeSDKExecutor(agent_name="orchestrator")
    result = await executor.run_task("test task")
    assert result.status in ["APPROVED", "NEEDS_CHANGES"]
    assert result.session_id is not None
```
Integration Test Example¶
```python
import pytest
from src.orchestrator.claude_sdk_executor import ClaudeSDKExecutor

@pytest.mark.asyncio
@pytest.mark.integration
async def test_full_implementation_workflow():
    """Test complete implementation workflow."""
    # 1. Orchestrator creates plan
    orch = ClaudeSDKExecutor(agent_name="orchestrator")
    plan_result = await orch.run_task("implement feature X")

    # 2. Implementer executes
    impl = ClaudeSDKExecutor(
        agent_name="implementer",
        session_id=plan_result.session_id
    )
    impl_result = await impl.run_task("execute plan")

    # 3. Code-reviewer validates
    reviewer = ClaudeSDKExecutor(
        agent_name="code-reviewer",
        session_id=impl_result.session_id
    )
    review_result = await reviewer.run_task("review code")
    assert review_result.status == "APPROVED"
```
Test Fixtures¶
```python
# tests/conftest.py
import pytest
from pathlib import Path

from src.orchestrator.claude_sdk_executor import ClaudeSDKExecutor

@pytest.fixture
def project_root():
    """Project root directory."""
    return Path(__file__).parent.parent

@pytest.fixture
def test_kb_dir(tmp_path):
    """Temporary KB directory for testing."""
    kb_dir = tmp_path / "test_kb"
    kb_dir.mkdir()
    return kb_dir

@pytest.fixture
async def mock_executor():
    """Mock executor for testing."""
    executor = ClaudeSDKExecutor(agent_name="test")
    yield executor
    await executor.cleanup()
```
Test Markers¶
```python
# Mark slow tests
@pytest.mark.slow
def test_expensive_operation():
    ...

# Mark security tests
@pytest.mark.security
def test_sql_injection_prevention():
    ...

# Mark integration tests
@pytest.mark.integration
async def test_full_workflow():
    ...
```
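Custom markers such as these must be registered, otherwise pytest emits `PytestUnknownMarkWarning` (and fails under `--strict-markers`). A sketch of the registration, assuming a `pytest.ini` at the project root:

```ini
# pytest.ini (or the [tool.pytest.ini_options] table in pyproject.toml)
[pytest]
markers =
    slow: long-running tests
    security: security validation tests
    integration: full-stack integration tests
```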
Coverage Requirements¶
Minimum: 80% overall coverage
Target: 90%+ for critical modules
Critical Modules (>90% required):
- src/orchestrator/claude_sdk_executor.py
- src/orchestrator/session_manager.py
- src/orchestrator/cost_tracker.py
- src/kb_store/retriever.py
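The 80% gate can be enforced automatically so CI fails on under-tested changes; a sketch assuming coverage.py's standard config file:

```ini
# .coveragerc -- illustrative; section and option names are coverage.py's
[report]
fail_under = 80
show_missing = True
```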
Best Practices¶
- Test behavior, not implementation
- Use descriptive test names
- One assertion per test (when possible)
- Use fixtures for setup
- Mock external dependencies
- Test edge cases and errors
- Keep tests fast (<1s per test)
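Mocking external dependencies (the fifth practice above) keeps tests fast and deterministic. A minimal sketch using the standard library's `unittest.mock`; the `summarize` function and its `fetch_docs` client method are hypothetical stand-ins for code that would otherwise hit an external service:

```python
from unittest.mock import MagicMock

# Hypothetical code under test: depends on an external KB client.
def summarize(kb_client):
    """Return a short summary of all KB documents."""
    docs = kb_client.fetch_docs()  # external call we want to avoid in tests
    return f"{len(docs)} docs: " + ", ".join(d["title"] for d in docs)

def test_summarize_uses_mocked_client():
    # Replace the real client with a MagicMock so no network is needed.
    client = MagicMock()
    client.fetch_docs.return_value = [{"title": "a"}, {"title": "b"}]
    assert summarize(client) == "2 docs: a, b"
    client.fetch_docs.assert_called_once()

test_summarize_uses_mocked_client()
```

The mock records every call, so the test can also assert *how* the dependency was used (`assert_called_once`), not just what was returned.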