feat(migration): implement interactive mapping resolution workflow

- Add SQLite database integration for environments and mappings
- Update TaskManager to support pausing tasks (AWAITING_MAPPING)
- Modify MigrationPlugin to detect missing mappings and wait for resolution
- Add frontend UI for handling missing mappings interactively
- Create dedicated migration routes and API endpoints
- Update .gitignore and project documentation
2025-12-25 22:27:29 +03:00
parent 43b4c75e36
commit 2ffc3cc68f
38 changed files with 2437 additions and 51 deletions


@@ -0,0 +1,34 @@
# Specification Quality Checklist: Migration Process and UI Redesign
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: 2025-12-20
**Feature**: [specs/001-migration-ui-redesign/spec.md](specs/001-migration-ui-redesign/spec.md)
## Content Quality
- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed
## Requirement Completeness
- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified
## Feature Readiness
- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification
## Notes
- Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`


@@ -0,0 +1,115 @@
# API Contracts: Migration Process and UI Redesign
## Environment Management
### GET /api/environments
List all configured environments.
**Response (200 OK)**:
```json
[
  {
    "id": "uuid",
    "name": "Development",
    "url": "https://superset-dev.example.com"
  }
]
```
### GET /api/environments/{id}/databases
Fetch the list of databases from a specific environment.
**Response (200 OK)**:
```json
[
  {
    "uuid": "db-uuid",
    "database_name": "Dev Clickhouse",
    "engine": "clickhouse"
  }
]
```
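
A minimal sketch of how these two endpoints might be wired in FastAPI. The in-memory store, the `fetch_databases` helper, and the absence of authentication are assumptions for illustration, not the actual implementation; only the Superset URL `GET /api/v1/database/` comes from the research notes.

```python
# Illustrative routes for the contract above; store and helpers are assumed.
import httpx
from fastapi import APIRouter, HTTPException

router = APIRouter(prefix="/api/environments", tags=["environments"])

# Stand-in for the SQLite-backed environment store.
ENVIRONMENTS: dict[str, dict] = {
    "env-dev": {"name": "Development", "url": "https://superset-dev.example.com"},
}

async def fetch_databases(env: dict) -> list[dict]:
    """Wrap Superset's GET /api/v1/database/ (authentication omitted)."""
    async with httpx.AsyncClient(base_url=env["url"], timeout=10) as client:
        resp = await client.get("/api/v1/database/")
        resp.raise_for_status()
        return [
            {
                "uuid": db.get("uuid"),
                "database_name": db.get("database_name"),
                "engine": db.get("backend"),
            }
            for db in resp.json()["result"]
        ]

@router.get("/")
async def list_environments() -> list[dict]:
    return [
        {"id": env_id, "name": env["name"], "url": env["url"]}
        for env_id, env in ENVIRONMENTS.items()
    ]

@router.get("/{env_id}/databases")
async def list_environment_databases(env_id: str) -> list[dict]:
    env = ENVIRONMENTS.get(env_id)
    if env is None:
        raise HTTPException(status_code=404, detail="Unknown environment")
    try:
        return await fetch_databases(env)
    except httpx.HTTPError as exc:
        # Structured 502 for unreachable environments, per the spec's edge cases.
        raise HTTPException(status_code=502, detail=f"Environment unreachable: {exc}")
```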
## Database Mapping
### GET /api/mappings
List all saved database mappings.
**Query Parameters**:
- `source_env_id`: Filter by source environment.
- `target_env_id`: Filter by target environment.
**Response (200 OK)**:
```json
[
  {
    "id": "uuid",
    "source_env_id": "uuid",
    "target_env_id": "uuid",
    "source_db_uuid": "uuid",
    "target_db_uuid": "uuid",
    "source_db_name": "Dev Clickhouse",
    "target_db_name": "Prod Clickhouse"
  }
]
```
### POST /api/mappings
Create or update a database mapping.
**Request Body**:
```json
{
  "source_env_id": "uuid",
  "target_env_id": "uuid",
  "source_db_uuid": "uuid",
  "target_db_uuid": "uuid"
}
```
### POST /api/mappings/suggest
Get suggested mappings based on fuzzy matching.
**Request Body**:
```json
{
  "source_env_id": "uuid",
  "target_env_id": "uuid"
}
```
**Response (200 OK)**:
```json
[
  {
    "source_db_uuid": "uuid",
    "target_db_uuid": "uuid",
    "confidence": 0.95
  }
]
```
## Migration Execution
### POST /api/migrations
Start a migration job.
**Request Body**:
```json
{
  "source_env_id": "uuid",
  "target_env_id": "uuid",
  "assets": [
    {"type": "dashboard", "id": 123}
  ],
  "replace_db": true
}
```
**Response (202 Accepted)**:
```json
{
  "job_id": "uuid",
  "status": "RUNNING"
}
```
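
For reference, a client-side call matching this contract might look like the following sketch; the host and IDs are placeholders.

```python
# Hypothetical client call for POST /api/migrations; host and IDs are
# placeholders, and error handling is reduced to raise_for_status().
import requests

payload = {
    "source_env_id": "00000000-0000-0000-0000-000000000001",
    "target_env_id": "00000000-0000-0000-0000-000000000002",
    "assets": [{"type": "dashboard", "id": 123}],
    "replace_db": True,
}

resp = requests.post("http://localhost:8000/api/migrations", json=payload, timeout=30)
resp.raise_for_status()
job = resp.json()
print(job["job_id"], job["status"])  # e.g. "RUNNING", or AWAITING_MAPPING later
```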


@@ -0,0 +1,48 @@
# Data Model: Migration Process and UI Redesign
## Entities
### Environment
Represents a Superset instance.
| Field | Type | Description |
|-------|------|-------------|
| `id` | UUID | Primary Key |
| `name` | String | Display name (e.g., "Development", "Production") |
| `url` | String | Base URL of the Superset instance |
| `credentials_id` | String | Reference to encrypted credentials in the config manager |
### DatabaseMapping
Represents a mapping between a database in the source environment and a database in the target environment.
| Field | Type | Description |
|-------|------|-------------|
| `id` | UUID | Primary Key |
| `source_env_id` | UUID | Foreign Key to Environment (Source) |
| `target_env_id` | UUID | Foreign Key to Environment (Target) |
| `source_db_uuid` | String | UUID of the database in the source environment |
| `target_db_uuid` | String | UUID of the database in the target environment |
| `source_db_name` | String | Name of the database in the source environment (for UI) |
| `target_db_name` | String | Name of the database in the target environment (for UI) |
| `engine` | String | Database engine type (e.g., "clickhouse", "postgres") |
### MigrationJob
Represents a single migration execution.
| Field | Type | Description |
|-------|------|-------------|
| `id` | UUID | Primary Key |
| `source_env_id` | UUID | Foreign Key to Environment |
| `target_env_id` | UUID | Foreign Key to Environment |
| `status` | Enum | `PENDING`, `RUNNING`, `COMPLETED`, `FAILED`, `AWAITING_MAPPING` |
| `replace_db` | Boolean | Whether to apply database mappings |
| `created_at` | DateTime | Timestamp of creation |
## Relationships
- `DatabaseMapping` belongs to a pair of `Environments`.
- `MigrationJob` references two `Environments`.
## Validation Rules
- `source_env_id` and `target_env_id` must be different.
- `source_db_uuid` and `target_db_uuid` must belong to databases with compatible engines (optional warning).
- Mappings must be unique for a given `(source_env_id, target_env_id, source_db_uuid)` triplet.
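
A sketch of how these entities could be declared with SQLModel, per the SQLite/SQLAlchemy decision in research.md. Field names follow the tables above; defaults, the ID helper, and the uniqueness enforcement shown are illustrative.

```python
# Sketch of the entities as SQLModel tables (SQLite + SQLAlchemy per
# research.md). Defaults and types here are illustrative assumptions.
import enum
import uuid
from datetime import datetime, timezone
from typing import Optional

from sqlalchemy import UniqueConstraint
from sqlmodel import Field, SQLModel

def new_id() -> str:
    return str(uuid.uuid4())

class JobStatus(str, enum.Enum):
    PENDING = "PENDING"
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"
    AWAITING_MAPPING = "AWAITING_MAPPING"

class Environment(SQLModel, table=True):
    id: str = Field(default_factory=new_id, primary_key=True)
    name: str
    url: str
    credentials_id: str

class DatabaseMapping(SQLModel, table=True):
    # Enforces the (source env, target env, source db) uniqueness rule above.
    __table_args__ = (
        UniqueConstraint("source_env_id", "target_env_id", "source_db_uuid"),
    )
    id: str = Field(default_factory=new_id, primary_key=True)
    source_env_id: str = Field(foreign_key="environment.id")
    target_env_id: str = Field(foreign_key="environment.id")
    source_db_uuid: str
    target_db_uuid: str
    source_db_name: str
    target_db_name: str
    engine: Optional[str] = None  # e.g. "clickhouse", "postgres"

class MigrationJob(SQLModel, table=True):
    id: str = Field(default_factory=new_id, primary_key=True)
    source_env_id: str = Field(foreign_key="environment.id")
    target_env_id: str = Field(foreign_key="environment.id")
    status: JobStatus = JobStatus.PENDING
    replace_db: bool = True
    created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
```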


@@ -0,0 +1,79 @@
# Implementation Plan: Migration Process and UI Redesign
**Branch**: `001-migration-ui-redesign` | **Date**: 2025-12-20 | **Spec**: [specs/001-migration-ui-redesign/spec.md](specs/001-migration-ui-redesign/spec.md)
## Summary
Redesign the migration process to support environment-based selection and automated database mapping. The technical approach involves using a SQLite database to persist mappings between source and target databases, implementing a fuzzy matching algorithm for empirical suggestions, and intercepting asset definitions during migration to apply these mappings.
## Technical Context
**Language/Version**: Python 3.9+, Node.js 18+
**Primary Dependencies**: FastAPI, SvelteKit, Tailwind CSS, Pydantic, SQLite
**Storage**: SQLite (for database mappings and environment metadata)
**Testing**: pytest (Backend), Vitest/Playwright (Frontend)
**Target Platform**: Linux server
**Project Type**: Web application (FastAPI + SvelteKit SPA)
**Performance Goals**: SC-001: Users can complete a full database mapping for 5+ databases in under 60 seconds.
**Constraints**: SPA-First Architecture (Constitution Principle I), API-Driven Communication (Constitution Principle II).
**Scale/Scope**: Support for multiple environments and hundreds of database mappings.
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
| Principle | Status | Notes |
|-----------|--------|-------|
| I. SPA-First Architecture | PASS | SvelteKit will be built as a static SPA and served by FastAPI. |
| II. API-Driven Communication | PASS | All mapping and migration actions will go through FastAPI endpoints. |
| III. Modern Stack Consistency | PASS | Using FastAPI, SvelteKit, and Tailwind CSS. |
| IV. Semantic Protocol Adherence | PASS | Code will include GRACE-Poly anchors and contracts. |
## Project Structure
### Documentation (this feature)
```text
specs/001-migration-ui-redesign/
├── plan.md # This file
├── research.md # Phase 0 output
├── data-model.md # Phase 1 output
├── quickstart.md # Phase 1 output
├── contracts/ # Phase 1 output
└── tasks.md # Phase 2 output
```
### Source Code (repository root)
```text
backend/
├── src/
│   ├── api/
│   │   └── routes/
│   │       ├── environments.py   # New: Env selection
│   │       └── mappings.py       # New: DB mapping management
│   ├── core/
│   │   └── migration_engine.py   # Update: DB replacement logic
│   └── models/
│       └── mapping.py            # New: SQLite models
└── tests/
frontend/
├── src/
│   ├── components/
│   │   ├── MappingTable.svelte   # New: DB mapping UI
│   │   └── EnvSelector.svelte    # New: Source/Target selection
│   └── routes/
│       └── migration/            # New: Migration dashboard
└── tests/
```
**Structure Decision**: Web application structure (Option 2) is selected to maintain separation between the FastAPI backend and SvelteKit frontend while adhering to the SPA-first principle.
## Complexity Tracking
> **Fill ONLY if Constitution Check has violations that must be justified**
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| None | N/A | N/A |


@@ -0,0 +1,39 @@
# Quickstart: Migration Process and UI Redesign
## Setup
1. **Install Dependencies**:
   ```bash
   pip install rapidfuzz sqlalchemy
   cd frontend && npm install
   ```
2. **Configure Environments**:
Ensure you have at least two Superset environments configured in the application settings.
3. **Initialize Database**:
The system will automatically create the `mappings.db` SQLite file on the first run (sketched below).
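
A minimal sketch of what that first-run initialization might look like, assuming the SQLModel tables from the data model; the actual startup hook and file path may differ.

```python
# Illustrative first-run initialization; the real startup hook may differ.
from sqlmodel import SQLModel, create_engine

engine = create_engine("sqlite:///mappings.db")
SQLModel.metadata.create_all(engine)  # no-op if the tables already exist
```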
## Usage
### 1. Define Mappings
1. Navigate to the **Database Mapping** tab.
2. Select your **Source** and **Target** environments.
3. Click **Fetch Databases**.
4. Review the **Suggested Mappings** (highlighted in green).
5. Manually adjust any mappings using the dropdowns.
6. Click **Save Mappings**.
### 2. Run Migration
1. Go to the **Migration** dashboard.
2. Select the **Source** and **Target** environments.
3. Select the assets (Dashboards/Datasets) you want to migrate.
4. Enable the **Replace Database** toggle.
5. Click **Start Migration**.
6. If a database is missing a mapping, a modal will appear prompting you to select a target database.
## Troubleshooting
- **Connection Error**: Ensure the backend can reach both Superset instances. Check credentials in settings.
- **Mapping Not Applied**: Verify that the "Replace Database" toggle was enabled and that the mapping exists for the specific environment pair.
- **Fuzzy Match Failure**: If names are too different, manual mapping is required. The system learns from manual overrides.


@@ -0,0 +1,33 @@
# Research: Migration Process and UI Redesign
## Decision: Fuzzy Matching Algorithm
- **Choice**: `RapidFuzz` library with `fuzz.token_sort_ratio`.
- **Rationale**: `RapidFuzz` is significantly faster than `FuzzyWuzzy` and provides robust string similarity metrics. `token_sort_ratio` is ideal for database names because it ignores word order and is less sensitive to prefixes like "Dev-" or "Prod-". A short sketch of the suggestion pass follows the alternatives below.
- **Alternatives considered**:
- `Levenshtein`: Too sensitive to string length and prefixes.
- `Jaro-Winkler`: Good for short strings but less effective for multi-word names with different orders.
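
The sketch below pairs each source database with its best-scoring target by name; databases are assumed to arrive as dicts with `database_name` and `uuid` keys, and the 80-point cutoff is an assumed tuning value, not a project constant.

```python
# Sketch of the suggestion pass; cutoff and data shapes are assumptions.
from rapidfuzz import fuzz

def suggest_mappings(source_dbs: list[dict], target_dbs: list[dict],
                     cutoff: float = 80.0) -> list[dict]:
    """Pair each source database with its best-scoring target by name."""
    suggestions = []
    for src in source_dbs:
        if not target_dbs:
            break
        best = max(
            target_dbs,
            key=lambda tgt: fuzz.token_sort_ratio(
                src["database_name"], tgt["database_name"]
            ),
        )
        score = fuzz.token_sort_ratio(src["database_name"], best["database_name"])
        if score >= cutoff:
            suggestions.append({
                "source_db_uuid": src["uuid"],
                "target_db_uuid": best["uuid"],
                "confidence": score / 100,  # the contract reports 0..1
            })
    return suggestions

# token_sort_ratio compares sorted tokens, so the shared "Clickhouse" token
# dominates and the Dev/Prod prefixes cost little:
print(fuzz.token_sort_ratio("Dev Clickhouse", "Prod Clickhouse"))  # ~83
```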
## Decision: Asset Interception Strategy
- **Choice**: ZIP-based transformation during migration.
- **Rationale**: Superset's native export/import format is a ZIP archive containing YAML definitions. Intercepting this archive allows for precise modification of database references (UUIDs) before they reach the target environment.
- **Implementation** (condensed in the sketch after these steps):
1. Export dashboard/dataset from source (ZIP).
2. Extract ZIP to a temporary directory.
3. Iterate through `datasets/*.yaml` files.
4. Replace `database_uuid` values based on the mapping table.
5. Re-package the ZIP.
6. Import to target.
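
A condensed sketch of steps 2-5, assuming PyYAML is available and mappings arrive as a source-to-target UUID dict; the `database_uuid` key follows Superset's dataset export format, while file layout handling, error handling, and temp-directory cleanup are simplified. Steps 1 and 6 remain the existing export/import calls of the migration engine.

```python
# Condensed sketch of steps 2-5; assumes PyYAML and a source->target UUID map.
import shutil
import zipfile
from pathlib import Path

import yaml

def rewrite_database_uuids(bundle: Path, uuid_map: dict[str, str]) -> Path:
    """Extract an export ZIP, remap dataset database UUIDs, re-pack it."""
    workdir = bundle.with_suffix("")  # e.g. export.zip -> export/
    with zipfile.ZipFile(bundle) as zf:
        zf.extractall(workdir)

    for dataset_file in workdir.rglob("datasets/*/*.yaml"):
        doc = yaml.safe_load(dataset_file.read_text())
        old = doc.get("database_uuid")
        if old in uuid_map:
            doc["database_uuid"] = uuid_map[old]
            dataset_file.write_text(yaml.safe_dump(doc, sort_keys=False))

    out_base = bundle.with_name(bundle.stem + "_mapped")
    return Path(shutil.make_archive(str(out_base), "zip", workdir))
```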
## Decision: Database Mapping Persistence
- **Choice**: SQLite with SQLAlchemy/SQLModel.
- **Rationale**: SQLite is lightweight, requires no separate server, and is perfect for storing local configuration and mappings. It aligns with the project's existing stack.
- **Schema**:
- `Environment`: `id`, `name`, `url`, `credentials_id`.
- `DatabaseMapping`: `id`, `source_env_id`, `target_env_id`, `source_db_uuid`, `target_db_uuid`, `source_db_name`, `target_db_name`.
## Decision: Superset API Integration
- **Choice**: Extend existing `SupersetClient`.
- **Rationale**: `SupersetClient` already handles authentication, network requests, and basic CRUD for dashboards/datasets. Adding environment-specific fetching and database listing is a natural extension.
- **New Endpoints to use**:
- `GET /api/v1/database/`: List all databases.
- `GET /api/v1/database/{id}`: Get detailed database config.
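
A hypothetical shape for this extension, assuming `SupersetClient` keeps a pre-authenticated `requests.Session`; the constructor and method names are illustrative, only the two Superset URLs above are from the decision.

```python
# Hypothetical extension of the existing SupersetClient; the class shape
# and the pre-authenticated session are assumptions about the codebase.
import requests

class SupersetClient:
    def __init__(self, base_url: str, session: requests.Session):
        self.base_url = base_url.rstrip("/")
        self.session = session  # assumed to already carry auth headers

    def list_databases(self) -> list[dict]:
        """GET /api/v1/database/ - list all databases with metadata."""
        resp = self.session.get(f"{self.base_url}/api/v1/database/")
        resp.raise_for_status()
        return resp.json()["result"]

    def get_database(self, db_id: int) -> dict:
        """GET /api/v1/database/{id} - detailed config for one database."""
        resp = self.session.get(f"{self.base_url}/api/v1/database/{db_id}")
        resp.raise_for_status()
        return resp.json()["result"]
```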


@@ -0,0 +1,109 @@
# Feature Specification: Migration Process and UI Redesign
**Feature Branch**: `001-migration-ui-redesign`
**Created**: 2025-12-20
**Status**: Draft
**Input**: User description (translated from Russian): "I want to rework the migration process and interface. 1. There must be a dropdown list of environments (source and target), plus a simple checkbox for database replacement. 2. Database replacement must use predefined pairs; a separate tab is needed that reads the databases from the source and the target and lets them be mapped, initially suggesting pairs of the form 'Dev Clickhouse' -> 'Prod Clickhouse' empirically. The mapping must be saved and remain editable."
## Clarifications
### Session 2025-12-20
- Q: Scope of Database Mapping → A: Map the full configuration object obtained from the Superset API.
- Q: Persistence of mappings → A: Use a SQLite database for storing mappings.
- Q: Handling of missing mappings during migration → A: Show a modal dialog during the migration process to prompt for missing mappings.
- Q: Empirical matching algorithm details → A: Use name-based fuzzy matching (ignoring common prefixes like Dev/Prod).
- Q: Scope of "Replace Database" toggle → A: Apply replacement to all assets (Dashboards, Datasets, Charts) included in the migration.
- Q: Backend exposure of Superset databases → A: Dedicated environment database endpoints (e.g., `/api/environments/{id}/databases`).
- Q: Superset API authentication → A: Use stored environment credentials from the backend.
- Q: Error handling for unreachable environments → A: Return structured error responses (502/503) with descriptive messages.
- Q: Database list filtering → A: Return all available databases with metadata (engine type, etc.).
- Q: Handling large database lists → A: Return full list (no pagination) for simplicity.
## User Scenarios & Testing *(mandatory)*
### User Story 1 - Environment-Based Migration Setup (Priority: P1)
As a migration operator, I want to easily select the source and target environments from a list so that I can quickly define the scope of my migration without manual URL entry.
**Why this priority**: This is the core interaction for starting any migration. Using predefined environments reduces errors and improves speed.
**Independent Test**: Can be tested by opening the migration page and verifying that the "Source" and "Target" dropdowns are populated with configured environments and can be selected.
**Acceptance Scenarios**:
1. **Given** multiple environments are configured in settings, **When** I open the migration page, **Then** I should see two dropdowns for "Source" and "Target" containing these environments.
2. **Given** a source and target are selected, **When** I toggle the "Replace Database" checkbox, **Then** the system should prepare to apply database mappings during the next migration step.
---
### User Story 2 - Database Mapping Management (Priority: P1)
As an administrator, I want to define how databases in my development environment map to databases in production so that my dashboards and datasets work correctly after migration.
**Why this priority**: Migrations often fail or require manual fixups because database references point to the wrong environment. Automated mapping is critical for reliable migrations.
**Independent Test**: Can be tested by navigating to the "Database Mapping" tab, fetching databases, and verifying that mappings can be created, saved, and edited.
**Acceptance Scenarios**:
1. **Given** a source and target environment are selected, **When** I open the "Database Mapping" tab, **Then** the system should fetch and display lists of databases from both environments.
2. **Given** the database lists are loaded, **When** the system identifies similar names (e.g., "Dev Clickhouse" and "Prod Clickhouse"), **Then** it should automatically suggest these as a mapping pair.
3. **Given** suggested or manual mappings, **When** I click "Save Mappings", **Then** these pairs should be persisted and associated with the selected environment pair.
---
### User Story 3 - Migration with Automated DB Replacement (Priority: P2)
As a user, I want the migration process to automatically update database references based on my saved mappings so that I don't have to manually edit exported files or post-migration settings.
**Why this priority**: This delivers the actual value of the mapping feature by automating a tedious and error-prone task.
**Independent Test**: Can be tested by running a migration with "Replace Database" enabled and verifying that the resulting assets in the target environment point to the mapped databases.
**Acceptance Scenarios**:
1. **Given** saved mappings exist for the selected environments, **When** I start a migration with "Replace Database" enabled, **Then** the system should replace all source database IDs/names with their corresponding target values during the transfer.
2. **Given** "Replace Database" is enabled but a source database has no mapping, **When** the migration runs, **Then** the system should pause and show a modal dialog prompting the user to provide a mapping on-the-fly for the missing database.
---
### Edge Cases
- **Environment Connectivity**: If the source or target environment is unreachable, the backend MUST return a structured error (502/503), and the frontend MUST display a clear connection error with a retry option.
- **Duplicate Mappings**: How does the system handle multiple source databases mapping to the same target database? (Assumption: This is allowed, as multiple dev DBs might consolidate into one prod DB).
- **Missing Target Database**: What if a mapped target database no longer exists in the target environment? (Assumption: Validation should occur before migration starts, highlighting broken mappings).
## Requirements *(mandatory)*
### Functional Requirements
- **FR-001**: System MUST provide dropdown menus for selecting "Source Environment" and "Target Environment" on the migration screen.
- **FR-002**: System MUST provide a "Replace Database" checkbox that, when enabled, triggers the database mapping logic for all assets (Dashboards, Datasets, Charts) during migration.
- **FR-003**: System MUST include a dedicated "Database Mapping" tab or view accessible from the migration interface.
- **FR-004**: System MUST fetch available databases from both source and target environments via their respective APIs when the mapping tab is opened.
- **FR-005**: System MUST implement a name-based fuzzy matching algorithm to suggest initial mappings, ignoring common environment prefixes (e.g., "Dev", "Prod").
- **FR-006**: System MUST allow users to manually override suggested mappings and create new ones via a drag-and-drop or dropdown-based interface.
- **FR-007**: System MUST persist database mappings in a local SQLite database, keyed by the source and target environment identifiers.
- **FR-008**: System MUST provide an "Edit" capability for existing mappings, allowing users to update or delete them.
- **FR-009**: During migration, if "Replace Database" is active, the system MUST intercept asset definitions (JSON/YAML) and replace database references according to the active mapping table.
### Key Entities *(include if feature involves data)*
- **Environment**: A configured Superset instance (Name, URL, Credentials).
- **Database Mapping**: A record linking a source database configuration (including metadata like engine type) to a target database configuration for a specific `source_env` -> `target_env` pair.
- **Migration Configuration**: The set of parameters for a migration job, including selected environments and the "Replace Database" toggle state.
## Success Criteria *(mandatory)*
### Measurable Outcomes
- **SC-001**: Users can complete a full database mapping for 5+ databases in under 60 seconds using the empirical suggestions.
- **SC-002**: 100% of assets migrated with "Replace Database" enabled correctly reference the target databases as defined in the mapping table.
- **SC-003**: Mapping persistence allows users to run subsequent migrations between the same environments without re-configuring database pairs in 100% of cases.
- **SC-004**: The system successfully identifies and suggests at least 90% of matching pairs when naming follows a "Prefix + Name" pattern (e.g., "Dev-Sales" -> "Prod-Sales").
## Assumptions
- **AS-001**: Environments are already configured in the application's global settings.
- **AS-002**: The backend has access to stored credentials for both source and target environments to perform API requests.
- **AS-003**: Database names or IDs are stable enough within an environment to be used as reliable mapping keys.


@@ -0,0 +1,186 @@
---
description: "Task list for Migration Process and UI Redesign implementation"
---
# Tasks: Migration Process and UI Redesign
**Input**: Design documents from `specs/001-migration-ui-redesign/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, quickstart.md
**Tests**: Tests are NOT explicitly requested in the feature specification, so they are omitted from this task list.
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
## Format: `[ID] [P?] [Story] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions
## Path Conventions
- **Web app**: `backend/src/`, `frontend/src/`
---
## Phase 1: Setup (Shared Infrastructure)
**Purpose**: Project initialization and basic structure
- [ ] T001 Create project structure per implementation plan in `backend/src/` and `frontend/src/`
- [ ] T002 [P] Install backend dependencies (rapidfuzz, sqlalchemy) in `backend/requirements.txt`
- [ ] T003 [P] Install frontend dependencies (if any new) in `frontend/package.json`
---
## Phase 2: Foundational (Blocking Prerequisites)
**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
**⚠️ CRITICAL**: No user story work can begin until this phase is complete
- [ ] T004 Setup SQLite database schema and SQLAlchemy models in `backend/src/models/mapping.py`
- [ ] T005 [P] Implement fuzzy matching utility using RapidFuzz in `backend/src/core/utils/matching.py`
- [ ] T006 [P] Extend SupersetClient to support database listing and metadata fetching in `backend/src/core/superset_client.py`
- [ ] T007 Configure database mapping persistence layer in `backend/src/core/database.py`
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel
---
## Phase 3: User Story 1 - Environment-Based Migration Setup (Priority: P1) 🎯 MVP
**Goal**: Enable selection of source and target environments and toggle database replacement.
**Independent Test**: Open the migration page and verify that the "Source" and "Target" dropdowns are populated with configured environments and can be selected.
### Implementation for User Story 1
- [ ] T008 [P] [US1] Implement environment selection API endpoints in `backend/src/api/routes/environments.py`
- [ ] T009 [P] [US1] Create `EnvSelector.svelte` component for source/target selection in `frontend/src/components/EnvSelector.svelte`
- [ ] T010 [US1] Integrate `EnvSelector` and "Replace Database" toggle into migration dashboard in `frontend/src/routes/migration/+page.svelte`
- [ ] T011 [US1] Add validation to ensure source and target environments are different in `frontend/src/routes/migration/+page.svelte`
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently.
---
## Phase 4: User Story 2 - Database Mapping Management (Priority: P1)
**Goal**: Fetch databases from environments, suggest mappings using fuzzy matching, and allow manual overrides/persistence.
**Independent Test**: Navigate to the "Database Mapping" tab, fetch databases, and verify that mappings can be created, saved, and edited.
### Implementation for User Story 2
- [ ] T012 [P] [US2] Implement database mapping CRUD API endpoints in `backend/src/api/routes/mappings.py`
- [ ] T013 [US2] Implement mapping service with fuzzy matching logic in `backend/src/services/mapping_service.py`
- [ ] T014 [P] [US2] Create `MappingTable.svelte` component for displaying and editing pairs in `frontend/src/components/MappingTable.svelte`
- [ ] T015 [US2] Create database mapping management view in `frontend/src/routes/migration/mappings/+page.svelte`
- [ ] T016 [US2] Implement "Fetch Databases" action and suggestion highlighting in `frontend/src/routes/migration/mappings/+page.svelte`
**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently.
---
## Phase 5: User Story 3 - Migration with Automated DB Replacement (Priority: P2)
**Goal**: Intercept assets during migration, apply database mappings, and prompt for missing ones.
**Independent Test**: Run a migration with "Replace Database" enabled and verify that the resulting assets in the target environment point to the mapped databases.
### Implementation for User Story 3
- [ ] T017 [US3] Implement ZIP-based asset interception and YAML transformation logic in `backend/src/core/migration_engine.py`
- [ ] T018 [US3] Integrate database mapping application into the migration job execution flow in `backend/src/core/task_manager.py`
- [ ] T019 [P] [US3] Create `MissingMappingModal.svelte` for on-the-fly mapping prompts in `frontend/src/components/MissingMappingModal.svelte`
- [ ] T020 [US3] Implement backend pause and frontend modal trigger for missing mappings in `backend/src/api/routes/tasks.py` and `frontend/src/components/TaskRunner.svelte`
**Checkpoint**: All user stories should now be independently functional.
---
## Phase 6: Polish & Cross-Cutting Concerns
**Purpose**: Improvements that affect multiple user stories
- [ ] T021 [P] Update documentation in `docs/` to include database mapping instructions
- [ ] T022 Code cleanup and refactoring of migration logic
- [ ] T023 [P] Performance optimization for fuzzy matching and ZIP processing
- [ ] T024 Run `quickstart.md` validation to ensure end-to-end flow works as documented
---
## Dependencies & Execution Order
### Phase Dependencies
- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
- User stories can then proceed in parallel (if staffed)
- Or sequentially in priority order (P1 → P2)
- **Polish (Final Phase)**: Depends on all desired user stories being complete
### User Story Dependencies
- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 3 (P2)**: Can start after Foundational (Phase 2) - Depends on US1/US2 for mapping data and configuration
### Within Each User Story
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority
### Parallel Opportunities
- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once Foundational phase completes, US1 and US2 can start in parallel
- Models and UI components within a story marked [P] can run in parallel
---
## Parallel Example: User Story 2
```bash
# Launch backend and frontend components for User Story 2 together:
Task: "Implement database mapping CRUD API endpoints in backend/src/api/routes/mappings.py"
Task: "Create MappingTable.svelte component for displaying and editing pairs in frontend/src/components/MappingTable.svelte"
```
---
## Implementation Strategy
### MVP First (User Stories 1 & 2)
1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. Complete Phase 4: User Story 2
5. **STOP and VALIDATE**: Test environment selection and mapping management independently
6. Deploy/demo if ready
### Incremental Delivery
1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo
3. Add User Story 2 → Test independently → Deploy/Demo (MVP!)
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories
---
## Notes
- [P] tasks = different files, no dependencies
- [Story] label maps task to specific user story for traceability
- Each user story should be independently completable and testable
- Commit after each task or logical group
- Stop at any checkpoint to validate story independently
- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence