Testing Guide

Comprehensive guide for writing and running tests in Nova AI.

Overview

Nova AI follows test-driven development with strict quality gates:

  • >80% coverage required - All new code must be tested
  • Unit + integration tests - Test individual functions and workflows
  • Security tests - Validate security measures
  • Performance tests - Ensure speed requirements

Test Structure

tests/
├── unit/                    # Unit tests (fast, isolated)
│   ├── orchestrator/
│   ├── agents/
│   └── kb_store/
├── integration/             # Integration tests (slower, full stack)
│   ├── agent_workflows/
│   └── mcp_integration/
├── performance/             # Performance benchmarks
└── conftest.py             # Shared fixtures

Running Tests

All Tests

pytest tests/

With Coverage

pytest --cov=src --cov-report=term-missing --cov-report=html

Specific Test

pytest tests/unit/orchestrator/test_executor.py

By Marker

pytest -m "not slow"  # Skip slow tests
pytest -m "security"  # Only security tests
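Custom markers should be registered so pytest does not warn about them (and so `--strict-markers` rejects typos). A minimal sketch, assuming configuration lives in `pyproject.toml`:

```toml
# pyproject.toml — register the project's custom markers
[tool.pytest.ini_options]
markers = [
    "slow: long-running tests, skipped in quick runs",
    "security: security-focused tests",
    "integration: full-stack workflow tests",
]
```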

Writing Tests

Unit Test Example

import pytest
from src.orchestrator.claude_sdk_executor import ClaudeSDKExecutor

def test_executor_initialization():
    """Test executor initializes with correct defaults."""
    executor = ClaudeSDKExecutor(
        agent_name="orchestrator",
        use_sdk_mcp=True
    )

    assert executor.agent_name == "orchestrator"
    assert executor.use_sdk_mcp is True

@pytest.mark.asyncio
async def test_executor_run_task():
    """Test executor can run simple task."""
    executor = ClaudeSDKExecutor(agent_name="orchestrator")

    result = await executor.run_task("test task")

    assert result.status in ["APPROVED", "NEEDS_CHANGES"]
    assert result.session_id is not None
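Edge cases are often easiest to cover with `pytest.mark.parametrize`, which runs one test body against many inputs. A sketch using a hypothetical `normalize_status` helper (not part of Nova AI's API), included here only to make the example self-contained:

```python
import pytest

def normalize_status(raw: str) -> str:
    """Hypothetical helper: map raw agent output to a canonical status."""
    cleaned = raw.strip().upper()
    if cleaned not in {"APPROVED", "NEEDS_CHANGES"}:
        raise ValueError(f"unknown status: {raw!r}")
    return cleaned

@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("approved", "APPROVED"),            # case normalization
        ("  NEEDS_CHANGES \n", "NEEDS_CHANGES"),  # surrounding whitespace
    ],
)
def test_normalize_status(raw, expected):
    assert normalize_status(raw) == expected

def test_normalize_status_rejects_unknown():
    with pytest.raises(ValueError):
        normalize_status("maybe")
```

Each parameter tuple becomes its own test case in the report, so a failing edge case is pinpointed without extra assertions.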

Integration Test Example

@pytest.mark.asyncio
@pytest.mark.integration
async def test_full_implementation_workflow():
    """Test complete implementation workflow."""
    # 1. Orchestrator creates plan
    orch = ClaudeSDKExecutor(agent_name="orchestrator")
    plan_result = await orch.run_task("implement feature X")

    # 2. Implementer executes
    impl = ClaudeSDKExecutor(
        agent_name="implementer",
        session_id=plan_result.session_id
    )
    impl_result = await impl.run_task("execute plan")

    # 3. Code-reviewer validates
    reviewer = ClaudeSDKExecutor(
        agent_name="code-reviewer",
        session_id=impl_result.session_id
    )
    review_result = await reviewer.run_task("review code")

    assert review_result.status == "APPROVED"

Test Fixtures

# tests/conftest.py
import pytest
import pytest_asyncio
from pathlib import Path

from src.orchestrator.claude_sdk_executor import ClaudeSDKExecutor

@pytest.fixture
def project_root():
    """Project root directory."""
    return Path(__file__).parent.parent

@pytest.fixture
def test_kb_dir(tmp_path):
    """Temporary KB directory for testing."""
    kb_dir = tmp_path / "test_kb"
    kb_dir.mkdir()
    return kb_dir

# Async fixtures need the pytest-asyncio decorator, not plain @pytest.fixture
@pytest_asyncio.fixture
async def mock_executor():
    """Executor instance with async teardown."""
    executor = ClaudeSDKExecutor(agent_name="test")
    yield executor
    await executor.cleanup()

Test Markers

# Mark slow tests
@pytest.mark.slow
def test_expensive_operation():
    ...

# Mark security tests
@pytest.mark.security
def test_sql_injection_prevention():
    ...

# Mark integration tests (async tests also need the asyncio marker)
@pytest.mark.asyncio
@pytest.mark.integration
async def test_full_workflow():
    ...

Coverage Requirements

Minimum: 80% overall coverage

Target: 90%+ for critical modules

Critical Modules (>90% required):

  • src/orchestrator/claude_sdk_executor.py
  • src/orchestrator/session_manager.py
  • src/orchestrator/cost_tracker.py
  • src/kb_store/retriever.py
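pytest-cov can enforce the coverage floor automatically, so a drop below 80% fails the run rather than going unnoticed. A minimal sketch, assuming pytest options are configured in `pyproject.toml`:

```toml
# pyproject.toml — apply the coverage gate to every test run
[tool.pytest.ini_options]
addopts = "--cov=src --cov-fail-under=80"
```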

Best Practices

  1. Test behavior, not implementation
  2. Use descriptive test names
  3. One assertion per test (when possible)
  4. Use fixtures for setup
  5. Mock external dependencies
  6. Test edge cases and errors
  7. Keep tests fast (<1s per test)
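Practice 5 (mock external dependencies) keeps unit tests fast and deterministic. A sketch using `unittest.mock.Mock`; `KBClient` and `top_result` are hypothetical names standing in for any code that calls out to a network or disk:

```python
from unittest.mock import Mock

class KBClient:
    """Hypothetical external dependency (e.g. a KB search service)."""
    def search(self, query: str) -> list[str]:
        raise RuntimeError("network call — must never run in a unit test")

def top_result(client: KBClient, query: str) -> str:
    """Code under test: return the first search hit, or an empty string."""
    hits = client.search(query)
    return hits[0] if hits else ""

def test_top_result_uses_first_hit():
    # Mock(spec=...) rejects calls to methods KBClient does not define
    client = Mock(spec=KBClient)
    client.search.return_value = ["doc-a", "doc-b"]

    assert top_result(client, "feature X") == "doc-a"
    client.search.assert_called_once_with("feature X")

def test_top_result_handles_no_hits():
    client = Mock(spec=KBClient)
    client.search.return_value = []
    assert top_result(client, "missing") == ""
```

`spec=KBClient` is the important detail: a bare `Mock()` silently accepts any attribute access, which can let tests pass against methods that no longer exist.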

Next Steps

  • Security Guide
  • Development Guide