# Test Coverage

Carbon Connect maintains comprehensive test coverage across the backend, frontend, and pipeline layers.


## Coverage Targets

| Layer | Target | Current status |
| --- | --- | --- |
| Backend | 85%+ | Active enforcement via CI |
| Frontend | 80%+ | Tracked via Vitest |
| Pipelines | 70%+ | Scraper and normalizer coverage |

The quality gates workflow enforces a minimum coverage threshold of 69% (`--cov-fail-under=69`) to allow for incremental improvement toward the 85% target.


## Backend Tests (533 total)

### Test Breakdown by Category

| Category | Count | Scope |
| --- | --- | --- |
| Authentication | 14 | JWT, login, registration, token refresh |
| Tenant middleware | 6 | Cross-tenant prevention, inactive tenant rejection |
| Company CRUD | 13 | Create, read, update, delete with pagination |
| Grant search | 19 | Full-text search, filters, pagination, sorting |
| Matching engine | 32 | Unit, integration, API, performance tests |
| Application assistant | 43 | Content generation, templates, error handling |
| Cohesion client | 14 | SODA API, pagination, error handling |
| Innovate UK client | 17 | GtR API, project fetching, pagination |
| Grant pipeline | 57 | Normalization, deduplication, embedding |
| Email service | 42 | SES sending, templates, batch operations |
| Meilisearch service | 32 | Index management, search, fallback |
| Storage service | 28 | S3 upload, download, presigned URLs |
| Task manager | 29 | Celery task routing, retry logic |
| Secrets service | 21 | AWS Secrets Manager integration |
| Grant pipeline Celery tasks | 2 | Task execution, async bridge |
| Worker email provider | 1 | SMTP sending |
| Celery app/tasks | 40 | App configuration, task routing |
| Sync tasks | 49 | Source sync, scheduling |
| Application API | 23 | CRUD, content generation endpoints |
| Reference data API | 12 | NACE codes, countries |
| Dashboard API | 10 | Stats, activity feed |
| Partner API | 21 | Registration, referrals, commissions |
| **Total** | **533** | |
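
To give a sense of what these categories contain, here is a minimal sketch of a cross-tenant prevention test from the tenant middleware category. The `tenant_a_client` fixture and the company ID are illustrative placeholders, not the suite's actual fixtures:

```python
import pytest


@pytest.mark.asyncio
async def test_cross_tenant_access_is_rejected(tenant_a_client):
    """A client authenticated against tenant A must not see tenant B's data."""
    # Placeholder ID for a company owned by a different tenant.
    response = await tenant_a_client.get("/api/v1/companies/company-owned-by-tenant-b")
    # The middleware should either deny access outright or hide the
    # resource's existence; it must never return the other tenant's data.
    assert response.status_code in (403, 404)
```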

## Running Coverage

### Backend Coverage

```bash
# Run with coverage report
poetry run pytest --cov=backend --cov-report=html

# Run with terminal output
poetry run pytest --cov=backend --cov-report=term-missing

# Run with minimum threshold enforcement
poetry run pytest --cov=backend --cov-fail-under=69

# Run a specific category
poetry run pytest tests/unit/services/test_matching_engine.py --cov=backend.app.services.matching_engine
```

### Coverage Report

After running with `--cov-report=html`, open `htmlcov/index.html` in a browser to view the detailed coverage report.
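
To open the report from a script instead, the standard library can do it; this is just a convenience, assuming the report was generated in the default `htmlcov/` directory:

```python
import webbrowser
from pathlib import Path

# Build a file:// URL so the browser resolves the report regardless of
# the current working directory.
webbrowser.open(Path("htmlcov/index.html").resolve().as_uri())
```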

### Frontend Coverage

```bash
cd frontend
npm run test -- --coverage
```

## CI Coverage Enforcement

### Quality Gates Workflow

The `quality-gates.yml` workflow runs pytest with coverage checks:

```yaml
- name: Run Tests with Coverage
  run: |
    poetry run pytest tests/ \
      --maxfail=3 \
      --cov=backend \
      --cov-report=xml:coverage.xml \
      --cov-report=term-missing \
      --cov-fail-under=69
```

### Codecov Integration

Coverage reports are automatically uploaded to Codecov on every push to `main`:

```yaml
- name: Upload coverage
  uses: codecov/codecov-action@v4
  with:
    files: coverage.xml
    token: ${{ secrets.CODECOV_TOKEN }}
```

## Improving Coverage

### Priority Areas

When improving coverage, focus on these areas in order:

  1. Business logic -- Matching engine, carbon scoring, grant normalization (see the sketch after this list)
  2. API endpoints -- Request validation, authorization, error responses
  3. Error handling -- Network failures, invalid data, edge cases
  4. Integration points -- Database queries, cache operations
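
For example, a unit test aimed squarely at a business rule exercises real logic rather than plumbing. The `score_grant_match` function and its dict-based signature below are hypothetical stand-ins for the actual matching-engine API:

```python
# Hypothetical import; the real matching-engine API may differ.
from backend.app.services.matching_engine import score_grant_match


def test_carbon_focused_grant_outscores_generic_grant():
    """Business rule: for a carbon-focused company, a carbon-focused
    grant should score higher than an otherwise similar generic grant."""
    company = {"nace_code": "D35.1", "country": "DE", "is_carbon_focused": True}
    carbon_grant = {"is_carbon_focused": True, "country": "DE"}
    generic_grant = {"is_carbon_focused": False, "country": "DE"}

    assert score_grant_match(company, carbon_grant) > score_grant_match(company, generic_grant)
```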

### Writing High-Value Tests

Tests that provide the most coverage value:

```python
import pytest


# Test the happy path AND the error cases (an error-case sketch follows below)
@pytest.mark.asyncio
async def test_grant_search_with_all_filters(client):
    """Test search with every filter applied simultaneously."""
    response = await client.get("/api/v1/grants", params={
        "query": "renewable energy",
        "country": "DE",
        "nace_code": "D35.1",
        "is_carbon_focused": True,
        "status": "active",
        "min_funding": 50000,
        "max_funding": 500000,
        "page": 1,
        "per_page": 10,
        "sort_by": "deadline",
    })
    assert response.status_code == 200
```
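
The error side of the same endpoint is just as valuable. A minimal sketch, assuming FastAPI-style request validation (where a type error in a query parameter yields a 422):

```python
@pytest.mark.asyncio
async def test_grant_search_rejects_non_numeric_funding(client):
    """Validation failures should surface as 422s, never 500s."""
    # A non-numeric min_funding should fail request validation.
    response = await client.get(
        "/api/v1/grants", params={"min_funding": "not-a-number"}
    )
    assert response.status_code == 422
```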

### Tests to Avoid

- Tests that only verify mock behavior, not real functionality (see the example after this list)
- Tests that duplicate coverage already provided by type checking
- Overly specific tests that break with minor refactors
- Tests that take longer than 5 seconds without good reason
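
As a concrete example of the first anti-pattern, the test below proves only that the mock records calls, which the mock library already guarantees; no application code runs. The names are illustrative:

```python
import pytest
from unittest.mock import AsyncMock


@pytest.mark.asyncio
async def test_send_email_low_value():
    """Anti-pattern: asserts on the mock, never exercises real logic."""
    email_service = AsyncMock()
    await email_service.send(to="user@example.com", subject="Welcome")
    # This assertion can never fail in an interesting way.
    email_service.send.assert_called_once()
```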