# Frequently Asked Questions
## Stakeholder Questions
### What does Carbon Connect do?
Carbon Connect is an AI-powered SaaS platform that helps small and medium-sized businesses find and apply to EU carbon and sustainability grants. It combines carbon footprint analysis with intelligent grant matching and AI-powered application drafting -- think of it as "TurboTax for Carbon Incentives."
### How much does it cost?
Carbon Connect offers four pricing tiers:
| Tier | Price | Best For |
|---|---|---|
| Freemium | EUR 0 | Exploring available grants |
| Professional | EUR 199/month | Active grant seekers |
| Enterprise | EUR 999/month | Multi-company portfolios |
| Partner | Revenue share | Sustainability consultants |
### How does the matching algorithm work?
The platform evaluates each company-grant pair across five dimensions: rule-based eligibility (30%), semantic text similarity (25%), carbon profile alignment (25%), peer behavior patterns (10%), and deadline urgency (10%). These are combined into a single match score that tells you how well you fit each grant. See AI & Carbon Intelligence for a detailed explanation.
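As a rough illustration of how the five weights combine, the sketch below computes a weighted sum over normalized per-dimension scores. The component names, function signature, and example values are assumptions for illustration, not the platform's actual code.

```python
# Illustrative weighted-sum sketch; names and values are hypothetical, not the real service.
WEIGHTS = {
    "eligibility": 0.30,  # rule-based eligibility
    "semantic": 0.25,     # semantic text similarity
    "carbon": 0.25,       # carbon profile alignment
    "peer": 0.10,         # peer behavior patterns
    "deadline": 0.10,     # deadline urgency
}

def match_score(components: dict[str, float]) -> float:
    """Combine per-dimension scores (each in [0, 1]) into one match score."""
    return sum(WEIGHTS[name] * components[name] for name in WEIGHTS)

# Example: strong eligibility and carbon fit, weaker peer signal, distant deadline
score = match_score({"eligibility": 1.0, "semantic": 0.7, "carbon": 0.9, "peer": 0.5, "deadline": 0.2})
print(round(score, 2))  # 0.77
```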
### How accurate is the matching?
The system targets greater than 80% eligibility match accuracy -- meaning more than 80% of recommended grants should genuinely be ones the company qualifies for. The matching includes hard disqualification rules (wrong country or company size) to prevent obviously wrong recommendations.
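To show how hard disqualification rules sit in front of the weighted score, here is a minimal sketch; the field names and structure are hypothetical, not the platform's actual eligibility logic.

```python
# Hypothetical hard-rule check that runs before any weighted scoring.
def is_disqualified(company: dict, grant: dict) -> bool:
    """Return True when a hard eligibility rule rules the grant out entirely."""
    if company["country"] not in grant["eligible_countries"]:
        return True  # wrong country
    if not (grant["min_employees"] <= company["employees"] <= grant["max_employees"]):
        return True  # wrong company size
    return False
```

A disqualified company-grant pair never reaches the weighted score, which keeps obviously ineligible grants out of the recommendations.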
### Is it GDPR compliant?
Yes. Core GDPR controls are implemented: all data is stored in the EU (Stockholm), encrypted at rest and in transit, and isolated by tenant. Data subject request workflows (export and deletion) are planned for production launch. See GDPR & Data Protection for full details.
### Where is data stored?
All data is stored in the EU (Stockholm, eu-north-1) AWS region. This includes the database, file storage, cache, search indexes, and logs. No data is transferred outside the EU for storage.
### What data sources does Carbon Connect use?
Currently four live sources: CORDIS (EU research funding), EU Funding & Tenders Portal (all EU programs), Cohesion Open Data Portal (ERDF/ESF), and Innovate UK (UK innovation funding). German (BAFA, KfW) and French (Bpifrance) sources are planned. See Data Sources.
### How much does AI application generation cost?
Under $0.001 per application draft generated -- compared to EUR 5,000-25,000 charged by traditional grant consultants. This is the raw AI cost; users access this through their subscription plan.
### What is the current development status?
Seven of seven planned development phases are complete. The platform is feature-complete for MVP, with 60+ API endpoints and 533 automated tests passing. Production deployment on AWS is the immediate next step. See Implementation Status for full details.
### What makes Carbon Connect different from other grant platforms?
Three things: (1) Carbon-first design -- the entire platform is built around carbon and sustainability funding, not generic grants; (2) AI-powered matching -- five-component algorithm that goes far beyond keyword search; (3) AI application writing -- Claude-powered drafts at a fraction of consultant costs. No other platform combines all three. See Competitive Landscape.
### How does the partner program work?
Sustainability consultants, banks, and industry associations can register as partners. They receive tracked referral links, earn commissions on referred customers (10-20% depending on tier), and can manage multiple clients through a dedicated partner dashboard. Enterprise partners can white-label the platform under their own brand.
### What is the revenue model?
SaaS subscription revenue (primary), plus partner commission revenue. The freemium tier drives user acquisition, Professional and Enterprise tiers generate recurring revenue, and the Partner tier generates distribution through third-party channels. Revenue projections: EUR 2.4M (2026), EUR 8.5M (2027), EUR 22M (2028). See Business Model.
### How big is the market opportunity?
The Total Addressable Market is EUR 12.4 billion annually (6.9 million EU SMEs with sustainability needs). The Serviceable Addressable Market is EUR 882 million annually (420,000 carbon-focused grant seekers). See Market Opportunity.
## Developer Questions
### How do I set up the development environment?
- Install prerequisites: Python 3.11+, Poetry, Node.js 18+, Docker Desktop
- Start services: `docker compose up -d` (PostgreSQL on port 5433, Valkey on 6379, Meilisearch on 7700)
- Install dependencies: `poetry install` (backend), `npm install` (frontend)
- Run migrations: `poetry run alembic upgrade head`
- Start the API: `poetry run uvicorn backend.app.main:app --reload`
- Start the frontend: `npm run dev -- --turbo`
### Why is PostgreSQL on port 5433 instead of 5432?
To avoid conflicts with local PostgreSQL installations on Windows, which typically use port 5432. The project uses 5433 for the Docker-managed PostgreSQL instance.
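For reference, a connection URL on a developer machine would point at port 5433; the driver choice and variable layout below are illustrative assumptions (the user and database names appear in the verification commands later on this page), and credentials are omitted.

```python
# Illustrative only -- not the project's actual configuration.
DATABASE_URL = "postgresql+asyncpg://grant_user:<password>@localhost:5433/grant_engine"
```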
### How do I run tests?
```bash
# All backend tests
poetry run pytest tests/ -v

# With coverage report
poetry run pytest --cov=backend --cov-report=html

# Specific test file
poetry run pytest tests/unit/services/test_application_assistant.py -v

# Excluding end-to-end tests
pytest -m "not e2e"
```
### What is the TDD requirement?
Test-driven development is mandatory. Every feature must have automated tests written before the production code. Pull requests must include a `## TDD Proof` section with evidence of failing tests, and the PR TDD Gate GitHub Action enforces this automatically.
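A minimal illustration of the tests-first flow the gate expects, using a made-up feature name; the exact evidence format for the `## TDD Proof` section is defined in the contribution docs.

```python
# Hypothetical example of writing the test before the implementation.
# Step 1 (red): this import fails until the feature exists -- capture that failing run.
def test_normalize_grant_title_strips_whitespace_and_lowercases():
    from backend.app.services.example_feature import normalize_grant_title  # not written yet
    assert normalize_grant_title("  Horizon Europe  ") == "horizon europe"

# Step 2 (green): implement normalize_grant_title until the test passes.
# Step 3: paste the failing and passing runs into the PR's ## TDD Proof section.
```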
### How do I create a database migration?
```bash
# Generate migration after model changes
poetry run alembic revision --autogenerate -m "Description of changes"

# Apply migrations
poetry run alembic upgrade head

# Rollback one step
poetry run alembic downgrade -1
```
### What coding standards apply?
Python: Python 3.11+ with mandatory type hints, 4-space indentation, 100-character line limit. Formatting with Black, linting with Ruff, import sorting with isort. `snake_case` for modules/functions, `PascalCase` for classes.
TypeScript: ES2022+ with strict mode, ESLint + Prettier. `camelCase` for functions/hooks, `PascalCase` for components. Functional components with hooks are preferred.
### How should I mock external APIs in tests?
Mock external APIs (Claude, Climatiq, CORDIS) completely using AsyncMock. Never mock SQLAlchemy sessions or internal services. Be careful with MagicMock -- always set None explicitly for attributes that should be None, because MagicMock attributes are always truthy.
```python
from unittest.mock import MagicMock

# Correct
mock_response = MagicMock()
mock_response.text = "Generated content"
mock_response.prompt_feedback = None  # Explicit None
mock_response.total_tokens = 500  # Explicit int

# Wrong -- all attributes become truthy MagicMock objects
mock_response = MagicMock()
mock_response.text = "Content"
# mock_response.prompt_feedback is now a MagicMock (truthy!)
```
### How do I add a new grant data source?
Follow the existing scraper client pattern (a sketch follows the list):

1. Create a client class with async HTTP, rate limiting, and retry logic
2. Use httpx for async HTTP requests
3. Implement an `async with` context manager for resource cleanup
4. Create dataclass-based configuration and response models
5. Write comprehensive tests (see existing clients for examples)
6. Add the client to the grant pipeline normalization flow
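Below is a hedged sketch of that client shape under the assumptions in the list: httpx for async HTTP, a dataclass config, retry with backoff, and `async with` cleanup. The class, field, and endpoint names are hypothetical, and rate limiting is reduced to a simple backoff for brevity.

```python
import asyncio
from dataclasses import dataclass

import httpx


@dataclass
class ExampleSourceConfig:
    """Hypothetical configuration for a new grant data source."""
    base_url: str = "https://api.example-funding-source.eu"
    timeout_seconds: float = 30.0
    max_retries: int = 3


class ExampleSourceClient:
    """Async client skeleton; resources are released via ``async with``."""

    def __init__(self, config: ExampleSourceConfig | None = None) -> None:
        self.config = config or ExampleSourceConfig()
        self._client = httpx.AsyncClient(
            base_url=self.config.base_url, timeout=self.config.timeout_seconds
        )

    async def __aenter__(self) -> "ExampleSourceClient":
        return self

    async def __aexit__(self, *exc_info) -> None:
        await self._client.aclose()

    async def fetch_grants(self, page: int = 1) -> list[dict]:
        """Fetch one page of raw grant records, retrying transient HTTP failures."""
        for attempt in range(self.config.max_retries):
            try:
                response = await self._client.get("/grants", params={"page": page})
                response.raise_for_status()
                return response.json()
            except httpx.HTTPError:
                if attempt == self.config.max_retries - 1:
                    raise
                await asyncio.sleep(2 ** attempt)  # crude backoff; real clients also rate-limit
        return []
```

Usage would look like `async with ExampleSourceClient() as client: grants = await client.fetch_grants()`, after which the raw records are handed to the normalization flow.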
### What are the performance targets?
| Metric | Target |
|---|---|
| API response time | Under 200ms |
| Search latency | Under 100ms for 100k documents |
| Matching calculation | Under 500ms per company |
| AI application generation | Under 3 seconds |
### How do I verify services are running?
```bash
# Automated check
poetry run python scripts/verify_services.py

# Manual checks
docker exec granted_carbon_postgres psql -U grant_user -d grant_engine -c "\dt"
docker exec granted_carbon_valkey valkey-cli ping
curl http://localhost:7700/health
```
### What if I get an "AsyncMock vs MagicMock" error?
This typically means you used MagicMock() for a method that gets awaited. Use AsyncMock() for any method called with await:
```python
from unittest.mock import AsyncMock, MagicMock

mock_client = MagicMock()
mock_client.request = AsyncMock(return_value=mock_response)
mock_client.aclose = AsyncMock()  # Required for async context managers
```
### How does multi-tenancy work in the code?
Every database table includes a tenant_id column. The get_current_tenant dependency in backend/app/api/deps.py extracts the tenant from the authenticated user's JWT token and injects it into every request. All queries are automatically scoped by tenant_id. PostgreSQL row-level security policies enforce isolation at the database level.
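A simplified sketch of that pattern is below. The real dependency lives in backend/app/api/deps.py and reads the tenant from validated JWT claims; the placeholder user lookup and route here are illustrative only.

```python
from fastapi import Depends, FastAPI

app = FastAPI()


def get_current_user():
    """Placeholder: the real dependency validates the JWT and loads the user."""
    class User:
        tenant_id = "tenant-123"
    return User()


def get_current_tenant(user=Depends(get_current_user)) -> str:
    """Every request resolves a tenant so queries can be scoped by tenant_id."""
    return user.tenant_id


@app.get("/grants/matches")
async def list_matches(tenant_id: str = Depends(get_current_tenant)):
    # Queries filter on tenant_id; PostgreSQL row-level security enforces the same boundary.
    return {"tenant_id": tenant_id, "matches": []}
```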
### How do I contribute?
- Read the `CONTRIBUTING.md` file and `docs/GITHUB_SETUP.md`
- Follow TDD practices (tests first, then implementation)
- Ensure all 533 tests pass before submitting a PR
- Include a `## TDD Proof` section in your PR description
- Ensure Ruff linting passes with zero warnings
- Run `poetry run pre-commit run --all-files` before committing