9 Commits

SHA1 Message Date
3d75a21127 tech_lead / coder 2roles 2025-12-27 08:02:59 +03:00
07914c8728 semantic add 2025-12-27 07:14:08 +03:00
cddc259b76 new loggers logic in constitution 2025-12-27 06:51:28 +03:00
dcbf0a7d7f tasks ready 2025-12-27 06:37:03 +03:00
65f61c1f80 Merge branch '001-migration-ui-redesign' into master 2025-12-27 05:58:35 +03:00
cb7386f274 superset_tool logger rework 2025-12-27 05:53:30 +03:00
4aa01b6470 Merge branch 'migration' into 001-migration-ui-redesign 2025-12-26 18:16:24 +03:00
35b423979d spec rules 2025-12-25 22:28:42 +03:00
2ffc3cc68f feat(migration): implement interactive mapping resolution workflow
- Add SQLite database integration for environments and mappings
- Update TaskManager to support pausing tasks (AWAITING_MAPPING)
- Modify MigrationPlugin to detect missing mappings and wait for resolution
- Add frontend UI for handling missing mappings interactively
- Create dedicated migration routes and API endpoints
- Update .gitignore and project documentation
2025-12-25 22:27:29 +03:00
22 changed files with 631 additions and 317 deletions

View File

@@ -9,6 +9,10 @@ Auto-generated from all feature plans. Last updated: 2025-12-19
- Python 3.9+, Node.js 18+ + FastAPI, SvelteKit, Tailwind CSS, Pydantic (005-fix-ui-ws-validation)
- N/A (Configuration based) (005-fix-ui-ws-validation)
- Filesystem (plugins, logs, backups), SQLite (optional, for job history if needed) (005-fix-ui-ws-validation)
- Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS (007-migration-dashboard-grid)
- N/A (Superset API integration) (007-migration-dashboard-grid)
- Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, Pydantic, Superset API (007-migration-dashboard-grid)
- N/A (Superset API integration - read-only for metadata) (007-migration-dashboard-grid)
- Python 3.9+ (Backend), Node.js 18+ (Frontend Build) (001-plugin-arch-svelte-ui)
@@ -29,9 +33,9 @@ cd src; pytest; ruff check .
Python 3.9+ (Backend), Node.js 18+ (Frontend Build): Follow standard conventions
## Recent Changes
- 006-configurable-belief-logs: Added Python 3.9+ + FastAPI (Backend), Pydantic (Config), Svelte (Frontend)
- 005-fix-ui-ws-validation: Added Python 3.9+ (Backend), Node.js 18+ (Frontend Build)
- 005-fix-ui-ws-validation: Added Python 3.9+, Node.js 18+ + FastAPI, SvelteKit, Tailwind CSS, Pydantic
- 007-migration-dashboard-grid: Added Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, Pydantic, Superset API
- 007-migration-dashboard-grid: Added Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS
- 007-migration-dashboard-grid: Added [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
<!-- MANUAL ADDITIONS START -->

.kilocodemodes (new file, 27 lines)
View File

@@ -0,0 +1,27 @@
customModes:
- slug: tech-lead
name: Tech Lead
description: Architect for contracts and scaffolding
roleDefinition: >-
You are Kilo Code, acting as a Technical Lead and System Architect.
Your primary responsibility is to define the "Structure" and "Contracts" of the system before implementation, following the Semantic Code Generation Protocol.
You operate primarily on 'tasks-arch.md' task lists.
YOUR DUTIES:
1. Create new files and directory structures.
2. Define Modules, Classes, and Functions using `[DEF]` anchors.
3. Write clear Headers with `@PURPOSE`, `@LAYER`, `@RELATION`.
4. Define strict Contracts using `@PRE`, `@POST`, `@PARAM`, `@RETURN`.
5. Leave the implementation body empty or with a placeholder (e.g., `pass`, `return ...`).
YOU DO NOT WRITE BUSINESS LOGIC. Your output is the "Skeleton" and "Rules" that the Developer Agent will fill in.
whenToUse: >-
Use this mode during the "Architecture Phase" of a feature. Select this mode when you need to create new files, define API surfaces, or set up the project structure before coding begins.
groups:
- read
- edit
- command
- list_files
- search_files

View File

@@ -1,14 +1,14 @@
<!--
SYNC IMPACT REPORT
Version: 1.1.0 (Svelte Support)
Version: 1.5.0 (Fractal Complexity Limit)
Changes:
- Added Svelte Component semantic markup standards.
- Updated File Structure Standards to include `.svelte` files.
- Refined File Structure Standards to distinguish between Python Modules and Svelte Components.
- Added Section VI (Fractal Complexity Limit) to enforce strict module (~300 lines) and function (~30-50 lines) size limits.
- Aims to maintain semantic coherence and avoid "Attention Sink".
Templates Status:
- .specify/templates/plan-template.md: ⚠ Pending (Needs update to include Component headers in checks).
- .specify/templates/plan-template.md: ✅ Aligned.
- .specify/templates/spec-template.md: ✅ Aligned.
- .specify/templates/tasks-template.md: ⚠ Pending (Needs update to include Component definition tasks).
- .specify/templates/tasks-arch-template.md: ✅ Aligned (New role-based split).
- .specify/templates/tasks-dev-template.md: ✅ Aligned (New role-based split).
-->
# Semantic Code Generation Constitution
@@ -21,13 +21,31 @@ Semantic definitions (Contracts) must ALWAYS precede implementation code. Logic
Once defined, architectural decisions in the Module Header (`@LAYER`, `@INVARIANT`, `@CONSTRAINT`) are treated as immutable constraints for that module. Changes to these require an explicit refactoring step, not ad-hoc modification during implementation.
### III. Semantic Format Compliance
All output must strictly follow the `[DEF]` / `[/DEF]` anchor syntax with specific Metadata Tags (`@KEY`) and Graph Relations (`@RELATION`). This structure is non-negotiable as it ensures the codebase remains machine-readable, fractal-structured, and optimized for Sparse Attention navigation by AI agents.
All output must strictly follow the `[DEF]` / `[/DEF]` anchor syntax with specific Metadata Tags (`@KEY`) and Graph Relations (`@RELATION`). **Crucially, the closing anchor must strictly match the full content of the opening anchor (e.g., `[DEF:module_name:Module]` must close with `[/DEF:module_name:Module]`).**
**Standardized Graph Relations**
To ensure the integrity of the Semantic Graph, `@RELATION` must use a strict taxonomy:
- `DEPENDS_ON` (Structural dependency)
- `CALLS` (Flow control)
- `CREATES` (Instantiation)
- `INHERITS_FROM` / `IMPLEMENTS` (OOP hierarchy)
- `READS_STATE` / `WRITES_STATE` (Data flow)
- `DISPATCHES` / `HANDLES` (Event flow)
Ad-hoc relationships are forbidden. This structure is non-negotiable as it ensures the codebase remains machine-readable, fractal-structured, and optimized for Sparse Attention navigation by AI agents.
### IV. Design by Contract (DbC)
Contracts are the Source of Truth. Functions and Classes must define their purpose, specifications, and constraints (`@PRE`, `@POST`, `@THROW`) in the metadata block before implementation. Implementation must strictly satisfy these contracts.
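For illustration, a scaffolded function under Principles III-IV might look like the sketch below; the anchor name, tags, and relations are hypothetical placeholders, and the body is intentionally left for the Developer Agent:

```python
# [DEF:resolve_mapping:Function]
# @PURPOSE: Resolve a missing database mapping for a paused migration task.
# @LAYER: Service
# @RELATION: DEPENDS_ON -> mappings_db
# @RELATION: WRITES_STATE -> task_state
# @PRE: task.status == "AWAITING_MAPPING"
# @POST: Returns the resolved mapping id, or raises if no mapping can be resolved.
def resolve_mapping(task):
    ...  # implementation is written later, against the contract above
# [/DEF:resolve_mapping:Function]
```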
### V. Belief State Logging
Logs must define the agent's internal state for debugging and coherence checks. We use a strict format: `logger.level(f"[{ANCHOR_ID}][{STATE}] {MESSAGE} context={...}")` to track transitions between `Entry`, `Validation`, `Action`, and `Coherence` states.
Logs must define the agent's internal state for debugging and coherence checks. We use a strict format: `[{ANCHOR_ID}][{STATE}] {MESSAGE}`. For Python, a **Context Manager** pattern MUST be used to automatically handle `Entry`, `Exit`, and `Coherence` states, ensuring structural integrity and error capturing.
### VI. Fractal Complexity Limit
To maintain semantic coherence and avoid "Attention Sink" issues:
- **Module Size**: If a Module body exceeds ~300 lines (or logical complexity), it MUST be refactored into sub-modules or a package structure.
- **Function Size**: Functions should fit within a standard attention "chunk" (approx. 30-50 lines). If larger, logic MUST be decomposed into helper functions with their own contracts.
This ensures every vector embedding remains sharp and focused.
## File Structure Standards
@@ -51,11 +69,24 @@ Every `.svelte` file must start with a Component definition header (`[DEF:Compon
- `@INVARIANT`: Immutable UI/State rules.
## Generation Workflow
The development process follows a strict sequence:
1. **Analyze Request**: Identify target module and graph position.
2. **Define Structure**: Generate `[DEF]` anchors and Contracts FIRST.
3. **Implement Logic**: Write code satisfying Contracts.
4. **Validate**: If logic conflicts with Contract -> Stop -> Report Error.
The development process follows a strict sequence enforced by Agent Roles:
### 1. Architecture Phase (Mode: `tech-lead`)
**Input**: `tasks-arch.md`
**Responsibility**:
- Analyze request and graph position.
- Generate `[DEF]` anchors, Headers, and Contracts (`@PRE`, `@POST`).
- **Output**: Scaffolding files with no implementation logic.
### 2. Implementation Phase (Mode: `code`)
**Input**: `tasks-dev.md` + Scaffolding files
**Responsibility**:
- Read contracts defined by Architect.
- Write implementation code that strictly satisfies contracts.
- **Output**: Working code with passing tests.
### 3. Validation
If logic conflicts with Contract -> Stop -> Report Error.
## Governance
This Constitution establishes the "Semantic Code Generation Protocol" as the supreme law of this repository.
@@ -63,6 +94,6 @@ This Constitution establishes the "Semantic Code Generation Protocol" as the sup
- **Automated Enforcement**: All code generation tools and agents must parse and validate adherence to the `[DEF]` syntax and Contract requirements.
- **Amendments**: Changes to the syntax or core principles require a formal amendment to this Constitution and a corresponding update to the constitution
- **Review**: Code reviews must verify that implementation matches the preceding contracts and that no "naked code" exists outside of semantic anchors.
- **Compliance**: Failure to adhere to the `[DEF]` / `[/DEF]` structure constitutes a build failure.
- **Compliance**: Failure to adhere to the `[DEF]` / `[/DEF]` structure (including matching closing tags) constitutes a build failure.
**Version**: 1.1.0 | **Ratified**: 2025-12-19 | **Last Amended**: 2025-12-19
**Version**: 1.5.0 | **Ratified**: 2025-12-19 | **Last Amended**: 2025-12-27

View File

@@ -9,8 +9,8 @@
#
# OPTIONS:
# --json Output in JSON format
# --require-tasks Require tasks.md to exist (for implementation phase)
# --include-tasks Include tasks.md in AVAILABLE_DOCS list
# --require-tasks Require tasks-arch.md and tasks-dev.md to exist (for implementation phase)
# --include-tasks Include task files in AVAILABLE_DOCS list
# --paths-only Only output path variables (no validation)
# --help, -h Show help message
#
@@ -49,8 +49,8 @@ Consolidated prerequisite checking for Spec-Driven Development workflow.
OPTIONS:
--json Output in JSON format
--require-tasks Require tasks.md to exist (for implementation phase)
--include-tasks Include tasks.md in AVAILABLE_DOCS list
--require-tasks Require tasks-arch.md and tasks-dev.md to exist (for implementation phase)
--include-tasks Include task files in AVAILABLE_DOCS list
--paths-only Only output path variables (no prerequisite validation)
--help, -h Show this help message
@@ -58,7 +58,7 @@ EXAMPLES:
# Check task prerequisites (plan.md required)
./check-prerequisites.sh --json
# Check implementation prerequisites (plan.md + tasks.md required)
# Check implementation prerequisites (plan.md + task files required)
./check-prerequisites.sh --json --require-tasks --include-tasks
# Get feature paths only (no validation)
@@ -86,15 +86,16 @@ check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
if $PATHS_ONLY; then
if $JSON_MODE; then
# Minimal JSON paths payload (no validation performed)
printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
"$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS_ARCH":"%s","TASKS_DEV":"%s"}\n' \
"$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS_ARCH" "$TASKS_DEV"
else
echo "REPO_ROOT: $REPO_ROOT"
echo "BRANCH: $CURRENT_BRANCH"
echo "FEATURE_DIR: $FEATURE_DIR"
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "TASKS: $TASKS"
echo "TASKS_ARCH: $TASKS_ARCH"
echo "TASKS_DEV: $TASKS_DEV"
fi
exit 0
fi
@@ -112,11 +113,18 @@ if [[ ! -f "$IMPL_PLAN" ]]; then
exit 1
fi
# Check for tasks.md if required
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.tasks first to create the task list." >&2
exit 1
# Check for task files if required
if $REQUIRE_TASKS; then
if [[ ! -f "$TASKS_ARCH" ]]; then
echo "ERROR: tasks-arch.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.tasks first to create the task lists." >&2
exit 1
fi
if [[ ! -f "$TASKS_DEV" ]]; then
echo "ERROR: tasks-dev.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.tasks first to create the task lists." >&2
exit 1
fi
fi
# Build list of available documents
@@ -133,9 +141,10 @@ fi
[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")
# Include tasks.md if requested and it exists
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
docs+=("tasks.md")
# Include task files if requested and they exist
if $INCLUDE_TASKS; then
[[ -f "$TASKS_ARCH" ]] && docs+=("tasks-arch.md")
[[ -f "$TASKS_DEV" ]] && docs+=("tasks-dev.md")
fi
# Output results
@@ -161,6 +170,7 @@ else
check_file "$QUICKSTART" "quickstart.md"
if $INCLUDE_TASKS; then
check_file "$TASKS" "tasks.md"
check_file "$TASKS_ARCH" "tasks-arch.md"
check_file "$TASKS_DEV" "tasks-dev.md"
fi
fi

View File

@@ -143,7 +143,9 @@ HAS_GIT='$has_git_repo'
FEATURE_DIR='$feature_dir'
FEATURE_SPEC='$feature_dir/spec.md'
IMPL_PLAN='$feature_dir/plan.md'
TASKS='$feature_dir/tasks.md'
TASKS_ARCH='$feature_dir/tasks-arch.md'
TASKS_DEV='$feature_dir/tasks-dev.md'
TASKS='$feature_dir/tasks.md' # Deprecated
RESEARCH='$feature_dir/research.md'
DATA_MODEL='$feature_dir/data-model.md'
QUICKSTART='$feature_dir/quickstart.md'

View File

@@ -0,0 +1,35 @@
---
description: "Architecture task list template (Contracts & Scaffolding)"
---
# Architecture Tasks: [FEATURE NAME]
**Role**: Architect Agent
**Goal**: Define the "What" and "Why" (Contracts, Scaffolding, Models) before implementation.
**Input**: Design documents from `/specs/[###-feature-name]/`
**Output**: Files with `[DEF]` anchors, `@PRE`/`@POST` contracts, and `@RELATION` mappings. No business logic.
## Phase 1: Setup & Models
- [ ] A001 Create/Update data models in [path] with `[DEF]` and contracts
- [ ] A002 Define API route structure/contracts in [path]
- [ ] A003 Define shared utilities/interfaces
## Phase 2: User Story 1 - [Title]
- [ ] A004 [US1] Define contracts for [Component/Service] in [path]
- [ ] A005 [US1] Define contracts for [Endpoint] in [path]
- [ ] A006 [US1] Define contracts for [Frontend Component] in [path]
## Phase 3: User Story 2 - [Title]
- [ ] A007 [US2] Define contracts for [Component/Service] in [path]
- [ ] A008 [US2] Define contracts for [Endpoint] in [path]
## Handover Checklist
- [ ] All new files created with `[DEF]` anchors
- [ ] All functions/classes have `@PURPOSE`, `@PRE`, `@POST` tags
- [ ] No "naked code" (logic outside of anchors)
- [ ] `tasks-dev.md` is ready for the Developer Agent

View File

@@ -0,0 +1,35 @@
---
description: "Developer task list template (Implementation Logic)"
---
# Developer Tasks: [FEATURE NAME]
**Role**: Developer Agent
**Goal**: Implement the "How" (Logic, State, Error Handling) inside the defined contracts.
**Input**: `tasks-arch.md` (completed), Scaffolding files with `[DEF]` anchors.
**Output**: Working code that satisfies `@PRE`/`@POST` conditions.
## Phase 1: Setup & Models
- [ ] D001 Implement logic for [Model] in [path]
- [ ] D002 Implement logic for [API Route] in [path]
- [ ] D003 Implement shared utilities
## Phase 2: User Story 1 - [Title]
- [ ] D004 [US1] Implement logic for [Component/Service] in [path]
- [ ] D005 [US1] Implement logic for [Endpoint] in [path]
- [ ] D006 [US1] Implement logic for [Frontend Component] in [path]
- [ ] D007 [US1] Verify semantic compliance and belief state logging
## Phase 3: User Story 2 - [Title]
- [ ] D008 [US2] Implement logic for [Component/Service] in [path]
- [ ] D009 [US2] Implement logic for [Endpoint] in [path]
## Polish & Quality Assurance
- [ ] DXXX Verify all tests pass
- [ ] DXXX Check error handling and edge cases
- [ ] DXXX Ensure code style compliance

View File

@@ -1,251 +0,0 @@
---
description: "Task list template for feature implementation"
---
# Tasks: [FEATURE NAME]
**Input**: Design documents from `/specs/[###-feature-name]/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/
**Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
## Format: `[ID] [P?] [Story] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions
## Path Conventions
- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume single project - adjust based on plan.md structure
<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.
The /speckit.tasks command MUST replace these with actual tasks based on:
- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/
Tasks MUST be organized by user story so each story can be:
- Implemented independently
- Tested independently
- Delivered as an MVP increment
DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->
## Phase 1: Setup (Shared Infrastructure)
**Purpose**: Project initialization and basic structure
- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize [language] project with [framework] dependencies
- [ ] T003 [P] Configure linting and formatting tools
---
## Phase 2: Foundational (Blocking Prerequisites)
**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
**⚠️ CRITICAL**: No user story work can begin until this phase is complete
Examples of foundational tasks (adjust based on your project):
- [ ] T004 Setup database schema and migrations framework
- [ ] T005 [P] Implement authentication/authorization framework
- [ ] T006 [P] Setup API routing and middleware structure
- [ ] T007 Create base models/entities that all stories depend on
- [ ] T008 Configure error handling and logging infrastructure
- [ ] T009 Setup environment configuration management
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel
---
## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 1 (OPTIONAL - only if tests requested) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 1
- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T016 [US1] Add validation and error handling
- [ ] T017 [US1] Add logging for user story 1 operations
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
---
## Phase 4: User Story 2 - [Title] (Priority: P2)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️
- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 2
- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
- [ ] T021 [US2] Implement [Service] in src/services/[service].py
- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T023 [US2] Integrate with User Story 1 components (if needed)
**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
---
## Phase 5: User Story 3 - [Title] (Priority: P3)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️
- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 3
- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
- [ ] T027 [US3] Implement [Service] in src/services/[service].py
- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py
**Checkpoint**: All user stories should now be independently functional
---
[Add more user story phases as needed, following the same pattern]
---
## Phase N: Polish & Cross-Cutting Concerns
**Purpose**: Improvements that affect multiple user stories
- [ ] TXXX [P] Documentation updates in docs/
- [ ] TXXX Code cleanup and refactoring
- [ ] TXXX Performance optimization across all stories
- [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/
- [ ] TXXX Security hardening
- [ ] TXXX Run quickstart.md validation
---
## Dependencies & Execution Order
### Phase Dependencies
- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
- User stories can then proceed in parallel (if staffed)
- Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete
### User Story Dependencies
- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable
### Within Each User Story
- Tests (if included) MUST be written and FAIL before implementation
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority
### Parallel Opportunities
- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- All tests for a user story marked [P] can run in parallel
- Models within a story marked [P] can run in parallel
- Different user stories can be worked on in parallel by different team members
---
## Parallel Example: User Story 1
```bash
# Launch all tests for User Story 1 together (if tests requested):
Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
Task: "Integration test for [user journey] in tests/integration/test_[name].py"
# Launch all models for User Story 1 together:
Task: "Create [Entity1] model in src/models/[entity1].py"
Task: "Create [Entity2] model in src/models/[entity2].py"
```
---
## Implementation Strategy
### MVP First (User Story 1 Only)
1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready
### Incremental Delivery
1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories
### Parallel Team Strategy
With multiple developers:
1. Team completes Setup + Foundational together
2. Once Foundational is done:
- Developer A: User Story 1
- Developer B: User Story 2
- Developer C: User Story 3
3. Stories complete and integrate independently
---
## Notes
- [P] tasks = different files, no dependencies
- [Story] label maps task to specific user story for traceability
- Each user story should be independently completable and testable
- Verify tests fail before implementing
- Commit after each task or logical group
- Stop at any checkpoint to validate story independently
- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence

View File

@@ -15,7 +15,6 @@ from backend.src.dependencies import get_config_manager
from backend.src.core.superset_client import SupersetClient
from superset_tool.models import SupersetConfig
from pydantic import BaseModel
from backend.src.core.logger import logger
# [/SECTION]
router = APIRouter(prefix="/api/environments", tags=["environments"])
@@ -39,9 +38,7 @@ class DatabaseResponse(BaseModel):
# @RETURN: List[EnvironmentResponse]
@router.get("", response_model=List[EnvironmentResponse])
async def get_environments(config_manager=Depends(get_config_manager)):
logger.info(f"[get_environments][Debug] Config path: {config_manager.config_path}")
envs = config_manager.get_environments()
logger.info(f"[get_environments][Debug] Found {len(envs)} environments")
return [EnvironmentResponse(id=e.id, name=e.name, url=e.url) for e in envs]
# [/DEF:get_environments]

View File

@@ -16,7 +16,7 @@ from ...core.config_models import AppConfig, Environment, GlobalSettings
from ...dependencies import get_config_manager
from ...core.config_manager import ConfigManager
from ...core.logger import logger
from superset_tool.client import SupersetClient
from ...core.superset_client import SupersetClient
from superset_tool.models import SupersetConfig
import os
# [/SECTION]

View File

@@ -103,25 +103,16 @@
<p class="mt-1 text-sm text-gray-500">Regular expression to filter dashboards to migrate.</p>
</div>
<div class="flex items-center justify-between mb-8">
<div class="flex items-center">
<input
id="replace-db"
type="checkbox"
bind:checked={replaceDb}
class="h-4 w-4 text-indigo-600 focus:ring-indigo-500 border-gray-300 rounded"
/>
<label for="replace-db" class="ml-2 block text-sm text-gray-900">
Replace Database (Apply Mappings)
</label>
</div>
<a
href="/migration/mappings"
class="text-sm font-medium text-indigo-600 hover:text-indigo-500"
>
Manage Mappings &rarr;
</a>
<div class="flex items-center mb-8">
<input
id="replace-db"
type="checkbox"
bind:checked={replaceDb}
class="h-4 w-4 text-indigo-600 focus:ring-indigo-500 border-gray-300 rounded"
/>
<label for="replace-db" class="ml-2 block text-sm text-gray-900">
Replace Database (Apply Mappings)
</label>
</div>
<button

View File

@@ -13,6 +13,7 @@ This protocol standardizes the "Semantic Bridge" between the two languages using
2. **Immutability:** Architectural decisions defined in the Module/Component Header are treated as immutable constraints.
3. **Format Compliance:** Output must strictly follow the `[DEF]` / `[/DEF]` anchor syntax for structure.
4. **Logic over Assertion:** Contracts define the *logic flow*. Do not generate explicit `assert` statements unless requested. The code logic itself must inherently satisfy the Pre/Post conditions (e.g., via control flow, guards, or types).
5. **Fractal Complexity:** Modules and functions must adhere to strict size limits (~300 lines/module, ~30-50 lines/function) to maintain semantic focus.
---
@@ -154,15 +155,16 @@ async function updateUserProfile(profileData) {
Logs delineate the agent's internal state.
* **Python:** `logger.info(f"[{ANCHOR_ID}][{STATE}] Msg")`
* **Python:** MUST use a Context Manager (e.g., `with belief_scope("ANCHOR_ID"):`) to ensure state consistency and automatic handling of Entry/Exit/Error states.
* Manual logging (inside scope): `logger.info(f"[{ANCHOR_ID}][{STATE}] Msg")`
* **Svelte/JS:** `console.log(\`[${ANCHOR_ID}][${STATE}] Msg\`)`
**Required States:**
1. `Entry` (Start of block)
2. `Action` (Key business logic)
3. `Coherence:OK` (Logic successfully completed)
4. `Coherence:Failed` (Error handling)
5. `Exit` (End of block)
1. `Entry` (Start of block - Auto-logged by Context Manager)
2. `Action` (Key business logic - Manual log)
3. `Coherence:OK` (Logic successfully completed - Auto-logged by Context Manager)
4. `Coherence:Failed` (Exception/Error - Auto-logged by Context Manager)
5. `Exit` (End of block - Auto-logged by Context Manager)
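A minimal sketch of such a context manager, assuming the standard `logging` module; `belief_scope` follows the naming used above, but the exact implementation is not prescribed by this protocol:

```python
import logging
from contextlib import contextmanager

logger = logging.getLogger("superset_tool")

@contextmanager
def belief_scope(anchor_id: str):
    logger.info(f"[{anchor_id}][Entry] scope opened")            # 1. Entry (auto)
    try:
        yield
        logger.info(f"[{anchor_id}][Coherence:OK] completed")    # 3. Coherence:OK (auto)
    except Exception as exc:
        logger.error(f"[{anchor_id}][Coherence:Failed] {exc}")   # 4. Coherence:Failed (auto)
        raise
    finally:
        logger.info(f"[{anchor_id}][Exit] scope closed")         # 5. Exit (auto)

# Manual Action logs (state 2) are written inside the scope:
with belief_scope("get_environments"):
    logger.info("[get_environments][Action] fetching environments")
```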
---

View File

@@ -0,0 +1,36 @@
# Specification Quality Checklist: Migration Plugin Dashboard Grid
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: 2025-12-27
**Feature**: [specs/007-migration-dashboard-grid/spec.md](../spec.md)
## Content Quality
- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed
## Requirement Completeness
- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified
## Feature Readiness
- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification
## Notes
- The specification clearly defines the UI requirements for the dashboard selection grid.
- "Superset API" is mentioned as the source of truth, which is acceptable as it defines the data boundary.
- Success criteria include specific performance metrics (<200ms filtering).

View File

@@ -0,0 +1,58 @@
# API Contracts: Migration Dashboard Grid
## Endpoints
### 1. List Dashboards
**Method**: `GET`
**Path**: `/api/environments/{env_id}/dashboards`
**Purpose**: Fetch all dashboards from the specified environment for the grid.
**Request Parameters**:
- `env_id` (path): The ID of the environment to fetch from.
**Response**:
- **200 OK**:
```json
[
{
"id": 123,
"title": "Sales Dashboard",
"last_modified": "2023-10-27T10:00:00Z",
"status": "published"
},
{
"id": 124,
"title": "Draft Metrics",
"last_modified": "2023-10-26T15:30:00Z",
"status": "draft"
}
]
```
- **404 Not Found**: Environment not found.
- **500 Internal Server Error**: Superset API error.
## Components (Frontend)
### DashboardGrid
**Props**:
- `dashboards`: `DashboardMetadata[]` - List of dashboards to display.
- `selectedIds`: `number[]` - IDs of currently selected dashboards.
**Events**:
- `selectionChanged`: Emitted when selection changes. Payload: `number[]` (new list of selected IDs).
**State**:
- `filterText`: string - Current filter text.
- `currentPage`: number - Current page index (0-based).
- `pageSize`: number - Items per page (default 20).
- `sortColumn`: string - 'title' | 'last_modified' | 'status'.
- `sortDirection`: 'asc' | 'desc'.
## Superset Client Extension
### `get_dashboards_summary`
**Signature**: `def get_dashboards_summary(self) -> List[Dict]`
**Purpose**: Fetches dashboard metadata optimized for the grid.
**Implementation Detail**:
- Calls `GET /api/v1/dashboard/` with query params `q=(columns:!(id,dashboard_title,changed_on_utc,published))`.
- Maps response fields to `DashboardMetadata` schema.
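For orientation only, a sketch of what this method might look like; it assumes the client holds a `requests.Session` and a base URL (attribute names are placeholders, and authentication handling is omitted):

```python
from typing import Dict, List
import requests

class SupersetClientSketch:
    """Illustrative only; the real client lives in backend/src/core/superset_client.py."""

    def __init__(self, base_url: str, session: requests.Session):
        self.base_url = base_url
        self.session = session

    def get_dashboards_summary(self) -> List[Dict]:
        # Request only the columns the grid needs, per the contract above.
        params = {"q": "(columns:!(id,dashboard_title,changed_on_utc,published))"}
        resp = self.session.get(f"{self.base_url}/api/v1/dashboard/", params=params)
        resp.raise_for_status()
        return [
            {
                "id": item["id"],
                "title": item["dashboard_title"],
                "last_modified": item.get("changed_on_utc"),
                "status": "published" if item.get("published") else "draft",
            }
            for item in resp.json().get("result", [])
        ]
```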

View File

@@ -0,0 +1,25 @@
# Data Model: Migration Dashboard Grid
## Entities
### DashboardMetadata
**Source**: Superset API (`/api/v1/dashboard/`)
**Purpose**: Represents a dashboard available for migration.
| Field | Type | Description | Source Mapping |
|-------|------|-------------|----------------|
| `id` | Integer | Unique identifier | `id` |
| `title` | String | Display name of the dashboard | `dashboard_title` |
| `last_modified` | String (ISO 8601) | Timestamp of last modification | `changed_on_utc` |
| `status` | Enum ('published', 'draft') | Publication status | `published` (boolean) -> 'published'/'draft' |
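A minimal Pydantic sketch of this entity (the plan places it in `backend/src/models/dashboard.py`; the `from_superset` helper is illustrative, not part of the contract):

```python
from typing import Literal
from pydantic import BaseModel

class DashboardMetadata(BaseModel):
    id: int
    title: str
    last_modified: str  # ISO 8601, mapped from changed_on_utc
    status: Literal["published", "draft"]

    @classmethod
    def from_superset(cls, raw: dict) -> "DashboardMetadata":
        # Map raw Superset API fields to the grid-facing schema.
        return cls(
            id=raw["id"],
            title=raw["dashboard_title"],
            last_modified=raw.get("changed_on_utc") or "N/A",
            status="published" if raw.get("published") else "draft",
        )
```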
## Value Objects
### DashboardSelection
**Purpose**: Represents the user's selection of dashboards to migrate.
| Field | Type | Description |
|-------|------|-------------|
| `selected_ids` | List[Integer] | List of dashboard IDs selected for migration |
| `source_env_id` | String | ID of the source environment |
| `target_env_id` | String | ID of the target environment |

View File

@@ -0,0 +1,85 @@
# Implementation Plan: [FEATURE]
**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
## Summary
[Extract from feature spec: primary requirement + technical approach from research]
## Technical Context
**Language/Version**: Python 3.9+ (Backend), Node.js 18+ (Frontend)
**Primary Dependencies**: FastAPI, SvelteKit, Tailwind CSS, Pydantic, Superset API
**Storage**: N/A (Superset API integration - read-only for metadata)
**Testing**: pytest (Backend), vitest (Frontend - inferred)
**Target Platform**: Linux server / Containerized
**Project Type**: web application (Backend + Frontend)
**Performance Goals**: Client-side filtering < 200ms for 100+ items
**Constraints**: Must handle large lists via client-side pagination. The spec calls for both "Client-side (Fetch all, filter locally)" and "Pagination (e.g., 20 per page)". *RESOLVED: Fetch all, paginate locally.*
**Scale/Scope**: ~100s of dashboards per environment.
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
- [x] **Causal Validity**: Contracts (API/Data Model) defined before implementation.
- [x] **Immutability**: Module headers (`[DEF]`) preserved/added.
- [x] **Semantic Format**: All new code uses `[DEF]` anchors and metadata.
- [x] **Fractal Complexity**: New components (Grid) kept modular; `SupersetClient` extensions are small methods.
**Status**: PASSED
## Project Structure
### Documentation (this feature)
```text
specs/[###-feature]/
├── plan.md # This file (/speckit.plan command output)
├── research.md # Phase 0 output (/speckit.plan command)
├── data-model.md # Phase 1 output (/speckit.plan command)
├── quickstart.md # Phase 1 output (/speckit.plan command)
├── contracts/ # Phase 1 output (/speckit.plan command)
└── tasks.md # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```
### Source Code (repository root)
```text
backend/
├── src/
│ ├── api/
│ │ └── routes/
│ │ └── environments.py # Update to support dashboard fetching
│ ├── core/
│ │ └── superset_client.py # Update to fetch extended dashboard metadata
│ └── models/
│ └── dashboard.py # New model for Dashboard metadata
└── tests/
└── test_superset_client.py
frontend/
├── src/
│ ├── components/
│ │ ├── DashboardGrid.svelte # New component
│ │ └── Pagination.svelte # New component (if not exists)
│ ├── routes/
│ │ └── migration/
│ │ └── +page.svelte # Update to use DashboardGrid
│ └── types/
│ └── dashboard.ts # New type definitions
```
**Structure Decision**: Standard Web Application structure. Backend updates to `SupersetClient` and API routes to serve dashboard metadata. Frontend updates to include a new `DashboardGrid` component and integrate it into the migration flow.
## Complexity Tracking
> **Fill ONLY if Constitution Check has violations that must be justified**
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |

View File

@@ -0,0 +1,31 @@
# Quickstart: Migration Dashboard Grid
## Prerequisites
- Backend running (`uvicorn backend.src.app:app --reload`)
- Frontend running (`npm run dev`)
- Superset instance accessible and configured in `config.yaml`
## Steps to Verify
1. **Navigate to Migration Page**:
- Open browser to `http://localhost:5173/migration`
- Select a Source Environment from the dropdown.
2. **Verify Dashboard Grid**:
- The grid should appear below the environment selectors.
- It should list dashboards with columns: Title, Last Modified, Status.
- Status pills should be green (Published) or gray (Draft).
3. **Test Filtering**:
- Type in the "Search dashboards..." input.
- The list should filter instantly (client-side).
4. **Test Pagination**:
- If >20 dashboards, check pagination controls at the bottom.
- Navigate to next page.
5. **Test Selection**:
- Select a few dashboards.
- Change filter (hide selected).
- Clear filter -> Selection should persist.
- Click "Select All" -> Should select all matching current filter.

View File

@@ -0,0 +1,48 @@
# Research: Migration Dashboard Grid
## Unknowns & Clarifications
### 1. Pagination vs Client-side Filtering
**Context**: The spec mentions "Client-side (Fetch all, filter locally)" (FR-004) but also "Pagination (e.g., 20 per page)" (FR-008).
**Resolution**:
- We will fetch ALL dashboard metadata from the Superset API in one go. The metadata (ID, Title, Status, Date) is lightweight. Even for 1000 dashboards, the payload is small (~100KB).
- **Client-side Pagination**: We will implement pagination purely on the frontend. This satisfies "Pagination" for UI performance/usability while keeping the "Fetch all" requirement for fast filtering.
- **Decision**: Fetch all, paginate locally.
### 2. Superset API for Dashboard Metadata
**Context**: Need to fetch `title`, `changed_on`, `published`.
**Research**:
- Superset API endpoint: `/api/v1/dashboard/`
- Standard response includes `result` array with `dashboard_title`, `changed_on_utc`, `published`.
- **Decision**: Use `GET /api/v1/dashboard/` with `q` parameter to select specific columns to minimize payload.
- Columns: `id`, `dashboard_title`, `changed_on_utc`, `published`.
### 3. Grid Component
**Context**: Need a grid with sorting, filtering, and selection.
**Options**:
- **Custom Svelte Table**: Lightweight, full control.
- **3rd Party Lib (e.g. svelte-headless-table)**: Powerful but maybe overkill.
- **Decision**: **Custom Svelte Component** (`DashboardGrid.svelte`).
- Why: Requirements are specific (Select All across pages, custom status pill, specific columns). A custom component using standard HTML table + Tailwind is simple and maintainable for this scope.
## Design Decisions
### Data Model
- **Dashboard**:
- `id`: integer (Superset dashboard IDs are numeric; the frontend treats them as opaque identifiers)
- `title`: string
- `last_modified`: string (ISO date)
- `status`: 'published' | 'draft'
### Architecture
- **Backend**:
- `SupersetClient.get_dashboards()`: Fetches list from Superset.
- `GET /api/environments/{id}/dashboards`: Proxy endpoint.
- **Frontend**:
- `DashboardGrid.svelte`: Handles display, sorting, pagination, and selection logic.
- `migration/+page.svelte`: Orchestrates fetching and passes data to Grid.
### UX/UI
- **Status Column**: Badge (Green for Published, Gray for Draft).
- **Selection**: Checkbox in first column.
- **Pagination**: Simple "Prev 1 of 5 Next" controls at bottom.

View File

@@ -0,0 +1,81 @@
# Feature Specification: Migration Plugin Dashboard Grid
**Feature Branch**: `007-migration-dashboard-grid`
**Created**: 2025-12-27
**Status**: Draft
**Input**: User description: "I want to enhance the migration plugin. Dashboard selection should be made from a grid list, with the ability to filter it by name. The grid should include fields for the dashboard name, the dashboard's last modification date, plus its status: published or draft."
## Clarifications
### Session 2025-12-27
- Q: How should the grid handle data loading and filtering to ensure performance and usability? → A: **Client-side** (Fetch all, filter locally).
- Q: Should the grid include a "Select All" checkbox in the header for bulk operations? → A: **Yes, include "Select All"**.
- Q: How should the grid handle large lists of dashboards (e.g., >50)? → A: **Pagination** (e.g., 20 per page).
- Q: Does the "Select All" checkbox select only the currently visible page of dashboards, or all dashboards that match the current filter? → A: **All matching filter** (Selects all filtered results, not just the visible page).
- Q: What should happen if the user changes the filter while some items are already selected? → A: **Preserve selection** (Selected items remain selected even if hidden by new filter).
- Q: What should be the default sort order when the dashboard grid first loads? → A: **Last Modified Date (Newest first)**.
- Q: Should the grid include an "Owners" column to help distinguish dashboards with the same name? → A: **Yes, include Owners**.
- Q: How should the "Owners" column display multiple owners? → A: **Show first owner + count (e.g., "admin + 2") with tooltip**.
- Q: How should the "Status" (Draft/Published) be visually represented in the grid? → A: **Colored Badges/Chips**.
- Q: Should the grid include a "Preview" action (e.g., link to open the dashboard in Superset)? → A: **Yes, open in new tab**.
## User Scenarios & Testing *(mandatory)*
### User Story 1 - Advanced Dashboard Selection (Priority: P1)
As a migration engineer, I want to select dashboards from a detailed grid view that includes status and modification dates, so that I can easily distinguish between draft/published versions and identify the most recent changes before migrating.
**Why this priority**: Current selection mechanisms (likely simple dropdowns or lists) lack critical context (status, freshness), making it error-prone to select the right assets for migration.
**Independent Test**: Can be tested by connecting to a Superset instance with known dashboards (some drafts, some published) and verifying the grid correctly displays their metadata and allows filtering/selection.
**Acceptance Scenarios**:
1. **Given** I have selected a source environment in the migration plugin, **When** the dashboard list loads, **Then** I see a grid view displaying "Dashboard Name", "Last Modified", and "Status" columns.
2. **Given** the dashboard grid is displayed, **When** I type "Sales" into the filter input, **Then** the grid updates to show only dashboards containing "Sales" in their name.
3. **Given** a dashboard is in "Draft" state in Superset, **When** it appears in the grid, **Then** the Status column clearly indicates "Draft" (vs "Published").
4. **Given** I want to migrate multiple dashboards, **When** I check the boxes next to several rows, **Then** they are added to the selection for the migration job.
5. **Given** the grid is populated, **When** I click the "Select All" checkbox in the header, **Then** all dashboards matching the current filter are selected (across all pages).
---
### Edge Cases
- **Empty Environment**: What happens if the source environment has no dashboards? System should display a "No dashboards found" message in the grid area.
- **Missing Metadata**: What if the Superset API returns null for `changed_on` or `published`? System should display "N/A" or a default value (e.g., "Unknown") rather than crashing.
- **Large Dataset**: How does the grid handle 1000+ dashboards? The grid MUST use pagination (default 20 items per page) to manage display density.
## Requirements *(mandatory)*
### Functional Requirements
- **FR-001**: The system MUST fetch extended metadata for dashboards from the Superset API, specifically: Title, Last Modified Date (`changed_on`), and Published Status (`published`).
- **FR-002**: The Migration Plugin UI MUST display a data grid component to list these dashboards.
- **FR-003**: The grid MUST include sortable columns for:
- Name (Dashboard Title)
- Last Modified (Date/Time)
- Status (Published/Draft)
- Owners (List of owner names)
- **FR-004**: The UI MUST provide a text filter input that filters the grid rows by Dashboard Name in real-time using client-side logic (fetching all dashboards once).
- **FR-005**: The grid MUST support multi-row selection to allow migrating batches of dashboards.
- **FR-006**: The selection state MUST be passed to the migration execution logic when the user initiates the migration.
- **FR-007**: The grid header MUST include a "Select All" checkbox. When checked, it MUST select ALL dashboards matching the current filter criteria (spanning across all pages), not just the currently visible page.
- **FR-008**: The grid MUST support pagination, displaying 20 rows per page by default, with navigation controls (Next/Prev/Page numbers).
- **FR-009**: The selection state MUST be preserved across filter changes. Items selected before a filter change MUST remain selected even if they are hidden by the new filter.
### Key Entities
- **Dashboard Metadata**:
- `id`: Unique identifier from Superset.
- `title`: Display name.
- `changed_on`: Timestamp of last edit.
- `is_published`: Boolean status.
- `owners`: List of owner objects/names.
## Success Criteria *(mandatory)*
### Measurable Outcomes
- **SC-001**: Users can identify the status (Draft/Published) of any dashboard in the list with 100% accuracy.
- **SC-002**: Filtering a list of 100 dashboards takes less than 200ms to update the view.
- **SC-003**: Users can successfully select and initiate migration for a mix of Draft and Published dashboards in a single operation.

View File

@@ -0,0 +1,29 @@
---
description: "Architecture tasks for Migration Plugin Dashboard Grid"
---
# Architecture Tasks: Migration Plugin Dashboard Grid
**Role**: Architect Agent
**Goal**: Define the "What" and "Why" (Contracts, Scaffolding, Models) before implementation.
## Phase 1: Setup & Models
- [ ] A001 Define contracts/scaffolding for migration route in backend/src/api/routes/migration.py
- [ ] A002 Define contracts/scaffolding for Dashboard model in backend/src/models/dashboard.py
## Phase 2: User Story 1 - Advanced Dashboard Selection
- [ ] A003 [US1] Define contracts/scaffolding for SupersetClient extensions in backend/src/core/superset_client.py
- [ ] A004 [US1] Define contracts/scaffolding for GET /api/migration/dashboards endpoint in backend/src/api/routes/migration.py
- [ ] A005 [US1] Define contracts/scaffolding for DashboardGrid component in frontend/src/components/DashboardGrid.svelte
- [ ] A006 [US1] Define contracts/scaffolding for migration page integration in frontend/src/routes/migration/+page.svelte
- [ ] A007 [US1] Define contracts/scaffolding for POST /api/migration/execute endpoint in backend/src/api/routes/migration.py
## Handover Checklist
- [ ] All new files created with `[DEF]` anchors
- [ ] All functions/classes have `@PURPOSE`, `@PRE`, `@POST` tags
- [ ] No "naked code" (logic outside of anchors)
- [ ] `tasks-dev.md` is ready for the Developer Agent

View File

@@ -0,0 +1,34 @@
---
description: "Developer tasks for Migration Plugin Dashboard Grid"
---
# Developer Tasks: Migration Plugin Dashboard Grid
**Role**: Developer Agent
**Goal**: Implement the "How" (Logic, State, Error Handling) inside the defined contracts.
## Phase 1: Setup & Models
- [ ] D001 Implement logic for migration route in backend/src/api/routes/migration.py
- [ ] D002 Register migration router in backend/src/app.py
- [ ] D003 Export migration router in backend/src/api/routes/__init__.py
- [ ] D004 Implement logic for Dashboard model in backend/src/models/dashboard.py
## Phase 2: User Story 1 - Advanced Dashboard Selection
- [ ] D005 [P] [US1] Implement logic for SupersetClient extensions in backend/src/core/superset_client.py
- [ ] D006 [US1] Implement logic for GET /api/migration/dashboards endpoint in backend/src/api/routes/migration.py
- [ ] D007 [US1] Implement structure and styles for DashboardGrid component in frontend/src/components/DashboardGrid.svelte
- [ ] D008 [US1] Implement data fetching and state management in frontend/src/components/DashboardGrid.svelte
- [ ] D009 [US1] Implement client-side filtering logic in frontend/src/components/DashboardGrid.svelte
- [ ] D010 [US1] Implement pagination logic in frontend/src/components/DashboardGrid.svelte
- [ ] D011 [US1] Implement selection logic (single and Select All) in frontend/src/components/DashboardGrid.svelte
- [ ] D012 [US1] Integrate DashboardGrid and connect selection to submission in frontend/src/routes/migration/+page.svelte
- [ ] D013 [US1] Implement logic for POST /api/migration/execute endpoint in backend/src/api/routes/migration.py
- [ ] D014 [US1] Verify semantic compliance and belief state logging
## Polish & Quality Assurance
- [ ] D015 Verify error handling and empty states in frontend/src/components/DashboardGrid.svelte
- [ ] D016 Ensure consistent styling with Tailwind CSS in frontend/src/components/DashboardGrid.svelte

View File

@@ -28,7 +28,11 @@ class SupersetLogger:
# @PARAM: log_dir (Optional[Path]) - Директория для сохранения лог-файлов.
# @PARAM: level (int) - Уровень логирования (e.g., `logging.INFO`).
# @PARAM: console (bool) - Флаг для включения вывода в консоль.
def __init__(self, name: str = "superset_tool", log_dir: Optional[Path] = None, level: int = logging.INFO, console: bool = True) -> None:
def __init__(self, name: str = "superset_tool", log_dir: Optional[Path] = None, level: int = logging.INFO, console: bool = True, logger: Optional[logging.Logger] = None) -> None:
if logger:
self.logger = logger
return
self.logger = logging.getLogger(name)
self.logger.setLevel(level)
self.logger.propagate = False
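The new optional `logger` parameter lets callers inject an already-configured logger instead of having the class build its own; a hedged usage sketch (the import path is an assumption):

```python
import logging
from superset_tool.logger import SupersetLogger  # module path assumed for illustration

shared = logging.getLogger("backend.app")     # an application-wide logger configured elsewhere
tool_log = SupersetLogger(logger=shared)      # reuses it; the other parameters are then ignored
default_log = SupersetLogger()                # falls back to configuring its own "superset_tool" logger
```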