tasks ready

2026-01-07 18:59:49 +03:00
parent a3dc1fb2b9
commit 7a9b1a190a
7 changed files with 447 additions and 1 deletions

View File

@@ -18,6 +18,8 @@ Auto-generated from all feature plans. Last updated: 2025-12-19
- Python 3.9+, Node.js 18+ + FastAPI, SvelteKit, Tailwind CSS, Pydantic, SQLAlchemy, Superset API (008-migration-ui-improvements)
- Python 3.9+, Node.js 18+ + FastAPI, APScheduler, SQLAlchemy, SvelteKit, Tailwind CSS (009-backup-scheduler)
- SQLite (`tasks.db`), JSON (`config.json`) (009-backup-scheduler)
- Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, Pydantic, SQLAlchemy, `superset_tool` (internal lib) (010-refactor-cli-to-web)
- SQLite (for job history/results, connection configs), Filesystem (for temporary file uploads) (010-refactor-cli-to-web)
- Python 3.9+ (Backend), Node.js 18+ (Frontend Build) (001-plugin-arch-svelte-ui)
@@ -38,9 +40,9 @@ cd src; pytest; ruff check .
Python 3.9+ (Backend), Node.js 18+ (Frontend Build): Follow standard conventions
## Recent Changes
- 010-refactor-cli-to-web: Added Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, Pydantic, SQLAlchemy, `superset_tool` (internal lib)
- 009-backup-scheduler: Added Python 3.9+, Node.js 18+ + FastAPI, APScheduler, SQLAlchemy, SvelteKit, Tailwind CSS
- 009-backup-scheduler: Added Python 3.9+, Node.js 18+ + FastAPI, APScheduler, SQLAlchemy, SvelteKit, Tailwind CSS
- 009-backup-scheduler: Added [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
<!-- MANUAL ADDITIONS START -->

View File

@@ -0,0 +1,34 @@
# Specification Quality Checklist: Refactor CLI Scripts to Web Application
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: 2026-01-07
**Feature**: [Link to spec.md](../spec.md)
## Content Quality
- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed
## Requirement Completeness
- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified
## Feature Readiness
- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification
## Notes
- Spec is ready for planning.

View File

@@ -0,0 +1,134 @@
# API Contracts: Refactor CLI Scripts to Web Application
## 1. Tools API
### 1.1. Search Datasets
**Endpoint**: `POST /api/tools/search`
**Description**: Search for text patterns across all datasets in a specific environment.
**Request Body**:
```json
{
"env": "dev",
"query": "regex_pattern"
}
```
**Response (200 OK)**:
```json
{
"count": 5,
"results": [
{
"dataset_id": 123,
"dataset_name": "sales_data",
"field": "sql",
"match_context": "SELECT * FROM ...",
"full_value": "SELECT * FROM sales WHERE ..."
}
]
}
```
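For illustration, a minimal client call against this endpoint might look like the sketch below; the base URL, bearer-token header, and timeout are assumptions, since the contract above only fixes the path and payload.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed deployment URL
TOKEN = "<access-token>"            # assumed bearer-token auth

resp = requests.post(
    f"{BASE_URL}/api/tools/search",
    json={"env": "dev", "query": r"from dm.*account"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=60,
)
resp.raise_for_status()
payload = resp.json()
print(f"{payload['count']} matches")
for item in payload["results"]:
    print(item["dataset_id"], item["dataset_name"], item["field"], item["match_context"])
```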
### 1.2. Debug Database API
**Endpoint**: `POST /api/tools/debug/db-api`
**Description**: Test database API connectivity and structure between two environments.
**Request Body**:
```json
{
"source_env": "dev",
"target_env": "prod"
}
```
**Response (200 OK)**:
```json
{
"source_db_count": 10,
"target_db_count": 12,
"details": {
"source_dbs": [...],
"target_dbs": [...]
}
}
```
### 1.3. Get Dataset Structure
**Endpoint**: `GET /api/tools/debug/dataset/{env}/{dataset_id}`
**Description**: Retrieve the full JSON structure of a dataset.
**Response (200 OK)**:
```json
{
"id": 123,
"table_name": "sales",
"columns": [...],
"metrics": [...]
}
```
## 2. Connection Management API
### 2.1. List Connections
**Endpoint**: `GET /api/settings/connections`
**Response (200 OK)**:
```json
[
  {
    "id": "uuid",
    "name": "Production DWH",
    "type": "postgres",
    "host": "10.0.0.1",
    "database": "dwh",
    "username": "user",
    "created_at": "2026-01-07T10:00:00Z"
  }
]
```
### 2.2. Create Connection
**Endpoint**: `POST /api/settings/connections`
**Request Body**:
```json
{
"name": "Production DWH",
"type": "postgres",
"host": "10.0.0.1",
"port": 5432,
"database": "dwh",
"username": "user",
"password": "secret_password"
}
```
**Response (201 Created)**:
```json
{
"id": "uuid",
"name": "Production DWH",
"type": "postgres",
...
}
```
### 2.3. Delete Connection
**Endpoint**: `DELETE /api/settings/connections/{id}`
**Response (204 No Content)**
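A sketch of the full connection lifecycle (create, list, delete) from a Python client; the base URL and auth header are assumptions not defined by this contract.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed deployment URL
session = requests.Session()
session.headers["Authorization"] = "Bearer <access-token>"  # assumed auth scheme

# Create a connection
created = session.post(
    f"{BASE_URL}/api/settings/connections",
    json={
        "name": "Production DWH",
        "type": "postgres",
        "host": "10.0.0.1",
        "port": 5432,
        "database": "dwh",
        "username": "user",
        "password": "secret_password",
    },
    timeout=30,
)
created.raise_for_status()
conn_id = created.json()["id"]

# List all saved connections
for conn in session.get(f"{BASE_URL}/api/settings/connections", timeout=30).json():
    print(conn["id"], conn["name"], conn["type"])

# Delete the connection created above (expects 204 No Content)
session.delete(f"{BASE_URL}/api/settings/connections/{conn_id}", timeout=30).raise_for_status()
```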
## 3. Task API (Existing, extended for Mapping)
### 3.1. Create Mapping Task
**Endpoint**: `POST /api/tasks`
**Request Body**:
```json
{
"plugin_id": "dataset-mapper",
"params": {
"env": "dev",
"dataset_id": 123,
"source": "postgres",
"connection_id": "uuid-of-saved-connection",
"table_name": "sales",
"table_schema": "public"
}
}
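Submitting a mapping job from a Python client could look like the sketch below; it simply posts the payload shown above, and the shape of the task record returned by the existing Task API is not restated here. The base URL and auth header are assumptions.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed deployment URL

resp = requests.post(
    f"{BASE_URL}/api/tasks",
    json={
        "plugin_id": "dataset-mapper",
        "params": {
            "env": "dev",
            "dataset_id": 123,
            "source": "postgres",
            "connection_id": "uuid-of-saved-connection",
            "table_name": "sales",
            "table_schema": "public",
        },
    },
    headers={"Authorization": "Bearer <access-token>"},  # assumed auth scheme
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # task record as returned by the existing Task API
```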

View File

@@ -0,0 +1,77 @@
# Data Model: Refactor CLI Scripts to Web Application
## 1. Connection Configuration
To support the "Dataset Mapper" tool with reusable connections (as per spec), we need a way to store external database credentials.
### Entity: `ConnectionConfig`
* **Table**: `connection_configs`
* **Purpose**: Stores credentials for external databases (e.g., PostgreSQL) used for column mapping.
| Field | Type | Required | Description |
| :--- | :--- | :--- | :--- |
| `id` | UUID | Yes | Primary Key |
| `name` | String | Yes | User-friendly name (e.g., "Production DWH") |
| `type` | String | Yes | Enum: `postgres`, `excel` (future) |
| `host` | String | No | DB Host (for postgres) |
| `port` | Integer | No | DB Port (for postgres) |
| `database` | String | No | DB Name (for postgres) |
| `username` | String | No | DB User (for postgres) |
| `password` | String | No | Encrypted/Obfuscated password (for postgres) |
| `created_at` | DateTime | Yes | Creation timestamp |
| `updated_at` | DateTime | Yes | Last update timestamp |
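A minimal SQLAlchemy sketch of this entity, assuming the application's existing declarative base and SQLite storage; the standalone `Base` below is a stand-in for the shared one in the backend.

```python
import uuid
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()  # stand-in for the application's shared Base


class ConnectionConfig(Base):
    __tablename__ = "connection_configs"

    # UUID stored as a 36-char string, since the backing store is SQLite
    id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    name = Column(String, nullable=False)
    type = Column(String, nullable=False)  # "postgres" | "excel" (future)
    host = Column(String, nullable=True)
    port = Column(Integer, nullable=True)
    database = Column(String, nullable=True)
    username = Column(String, nullable=True)
    password = Column(String, nullable=True)  # stored encrypted/obfuscated
    created_at = Column(DateTime, nullable=False, default=datetime.utcnow)
    updated_at = Column(DateTime, nullable=False, default=datetime.utcnow, onupdate=datetime.utcnow)
```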
## 2. Tool Request/Response Models (Pydantic)
These models define the API contracts for the new tools.
### Search Tool
#### `SearchRequest`
```python
from typing import Any, Dict, List  # shared by the models in this section
from pydantic import BaseModel

class SearchRequest(BaseModel):
    env: str    # e.g., "dev", "prod"
    query: str  # Regex pattern
```
#### `SearchResultItem`
```python
class SearchResultItem(BaseModel):
    dataset_id: int
    dataset_name: str
    field: str
    match_context: str
    full_value: str
```
#### `SearchResponse`
```python
class SearchResponse(BaseModel):
    results: List[SearchResultItem]
    count: int
```
### Debug Tool
#### `DebugDbRequest`
```python
class DebugDbRequest(BaseModel):
    source_env: str
    target_env: str
```
#### `DebugDbResponse`
```python
class DebugDbResponse(BaseModel):
    source_db_count: int
    target_db_count: int
    details: Dict[str, Any]  # Full JSON dump
```
#### `DatasetStructureRequest`
```python
class DatasetStructureRequest(BaseModel):
    env: str
    dataset_id: int
```

View File

@@ -0,0 +1,53 @@
# Quickstart: CLI Tools Web Interface
This guide explains how to use the new web-based tools for Superset management, which replace the legacy CLI scripts.
## 1. Accessing the Tools
1. Log in to the Web Application.
2. Navigate to the **Tools** section in the main navigation bar.
3. You will see three tabs/cards:
   * **Search**: Find text patterns in datasets.
   * **Dataset Mapper**: Map column names from external sources.
   * **Debug**: Run system diagnostics.
## 2. Searching Datasets
Use this tool to find specific SQL code, table names, or column definitions across the entire Superset instance.
1. Go to **Tools > Search**.
2. Select the **Environment** (e.g., `dev`, `prod`).
3. Enter your **Search Query** (supports Regex, e.g., `from dm.*account`).
4. Click **Search**.
5. Results will appear below, showing the dataset name, the field where the match was found, and the context.
## 3. Mapping Dataset Columns
Use this tool to update dataset column names (`verbose_name`) using comments from a database or an Excel file.
### Step 3.1: Configure a Connection (One-time setup)
1. Go to **Settings > Connections**.
2. Click **Add Connection**.
3. Enter a name (e.g., "DWH Production") and the database credentials (Host, Port, DB, User, Password).
4. Click **Save**.
### Step 3.2: Run the Mapper
1. Go to **Tools > Dataset Mapper**.
2. Select the target **Environment** and **Dataset ID**.
3. Select the **Source Type** (`Postgres`, `Excel`, or `Both`).
4. If using Postgres, select the **Saved Connection** you created in Step 3.1.
5. Enter the **Table Name** and **Schema** (e.g., `public.sales`).
6. Click **Run Mapping**.
7. The job will be submitted to the Task Manager. You can track progress in the **Tasks** view.
## 4. System Debugging
Use this tool to verify connectivity and API structures.
1. Go to **Tools > Debug**.
2. Select a diagnostic routine:
   * **Test DB API**: Checks if the backend can list databases from Superset.
   * **Get Dataset Structure**: Dumps the raw JSON structure of a specific dataset for inspection.
3. View the output log directly in the browser.

View File

@@ -0,0 +1,62 @@
# Research: Refactor CLI Scripts to Web Application
## 1. Search Tool Architecture
**Problem**: The `search_script.py` script fetches metadata for *all* datasets and performs regex matching in memory. This can be resource-intensive and slow for large Superset instances.
**Options**:
1. **Synchronous API Endpoint**: The frontend calls an API, waits, and displays results.
   * *Pros*: Simple, immediate feedback.
   * *Cons*: Risk of HTTP timeout (e.g., Nginx/Browser limits) if the dataset fetch takes too long.
2. **Asynchronous Task (TaskManager)**: The frontend triggers a task, polls for status, and displays results when done.
   * *Pros*: Robust, no timeouts, consistent with "Mapping" and "Migration" tools.
   * *Cons*: Slower user experience for quick searches.
**Decision**: **Synchronous API with Optimization**.
* **Rationale**: Search is typically an interactive "read-only" operation. Users expect immediate results. The `superset_tool` client's `get_datasets` is reasonably efficient.
* **Mitigation**: The API will return a standard JSON response; if performance becomes an issue during testing, the service logic can easily be wrapped in a TaskManager plugin. A sketch of the synchronous endpoint follows.
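A sketch of the synchronous endpoint under these assumptions; the `get_current_user` dependency and the `superset_tool` client factory import paths are hypothetical stand-ins for the existing application code.

```python
import re
from typing import Any, Dict, List

from fastapi import APIRouter, Depends
from pydantic import BaseModel

from app.auth import get_current_user          # hypothetical import path
from app.superset import get_superset_client   # hypothetical client factory

router = APIRouter(prefix="/api/tools")


class SearchRequest(BaseModel):
    env: str
    query: str


@router.post("/search")
def search_datasets(req: SearchRequest, user=Depends(get_current_user)) -> Dict[str, Any]:
    """Fetch all dataset metadata and regex-match in memory (synchronous)."""
    pattern = re.compile(req.query, re.IGNORECASE)
    client = get_superset_client(req.env)
    results: List[Dict[str, Any]] = []
    for ds in client.get_datasets():  # assumed to return a list of dataset dicts
        for field in ("sql", "table_name"):
            value = str(ds.get(field) or "")
            match = pattern.search(value)
            if match:
                results.append({
                    "dataset_id": ds.get("id"),
                    "dataset_name": ds.get("table_name"),
                    "field": field,
                    "match_context": value[max(0, match.start() - 40): match.end() + 40],
                    "full_value": value,
                })
    return {"count": len(results), "results": results}
```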
## 2. Dataset Mapper & Connection Management
**Problem**: `run_mapper.py` relies on command-line arguments and `keyring` for database credentials. The Web UI needs a way to store and reuse these credentials securely.
**Options**:
1. **Input Every Time**: User enters DB credentials for every mapping operation.
   * *Pros*: Secure (no storage).
   * *Cons*: Poor UX, tedious.
2. **Saved Connections**: Store connection details (Host, Port, DB, User, Password) in the application database.
   * *Pros*: Good UX.
   * *Cons*: Security risk if not encrypted.
**Decision**: **Saved Connections (Encrypted)**.
* **Rationale**: The spec explicitly requires: "Connection configurations must be saved for reuse".
* **Implementation**:
  * Create a new SQLAlchemy model `ConnectionConfig` in `backend/src/models/connection.py`.
  * Store passwords encrypted (falling back to obfuscation only if the encryption infrastructure is not ready). Given the scope, they will be stored in the existing SQLite database; a sketch of the encryption helper follows this list.
  * The Mapper logic will be refactored into a `MapperPlugin` (or the existing plugin will be updated) so that it accepts a `connection_id` or an explicit config.
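A minimal sketch of the encryption helper, assuming the `cryptography` package and a key supplied via an environment variable; the variable name is an assumption.

```python
import os

from cryptography.fernet import Fernet

# Assumption: the key is provisioned out of band, e.g. generated once with
# Fernet.generate_key() and supplied via an environment variable.
_fernet = Fernet(os.environ["CONNECTION_SECRET_KEY"].encode())


def encrypt_password(plaintext: str) -> str:
    """Encrypt a connection password before persisting it to SQLite."""
    return _fernet.encrypt(plaintext.encode()).decode()


def decrypt_password(ciphertext: str) -> str:
    """Decrypt a stored password when building the external DB connection."""
    return _fernet.decrypt(ciphertext.encode()).decode()
```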
## 3. Debug Tools Integration
**Problem**: `debug_db_api.py` and `get_dataset_structure.py` are standalone scripts that print to stdout or write files.
**Decision**: **Direct API Services**.
* **Debug API**: Create an endpoint `POST /api/tools/debug/test-db-connection` that runs the logic from `debug_db_api.py` and returns the log/result JSON.
* **Dataset Structure**: Create an endpoint `GET /api/tools/debug/dataset/{id}/structure` that runs logic from `get_dataset_structure.py` and returns the JSON directly.
## 4. Legacy Code Cleanup
**Plan**:
1. Implement the new Web tools.
2. Verify feature parity.
3. Delete:
   * `search_script.py`
   * `run_mapper.py`
   * `debug_db_api.py`
   * `get_dataset_structure.py`
   * `backup_script.py` (Spec confirms it's superseded by `009-backup-scheduler`)
## 5. Security & Access
**Decision**: All authenticated users can access these tools.
* **Rationale**: Spec says "All authenticated users".
* **Implementation**: Use existing `Depends(get_current_user)` for all new routes.

View File

@@ -0,0 +1,84 @@
# [DEF:Spec:010-refactor-cli-to-web]
# @TITLE: Refactor CLI Scripts to Web Application
# @STATUS: DRAFT
# @AUTHOR: Kilo Code
# @CREATED: 2026-01-07
## Clarifications
### Session 2026-01-07
- Q: Who should have access to the new web tools (Search, Mapping, Debug)? → A: All authenticated users.
- Q: Should connection configurations for mapping (source, connection details) be saved for reuse, or entered anew each time? → A: Save them for reuse (user convenience).
- Q: The `backup_script.py` script is listed for deletion, but backup functionality is not described in the requirements. Does the backup feature need to be ported to the web interface? → A: Backup functionality is already implemented as part of 009-backup-scheduler and is available via the TaskManager. `backup_script.py` can be deleted, since the new system replaces it.
## 1. Overview
### 1.1. Problem Statement
The system currently relies on a set of command-line scripts for critical operations like searching datasets, mapping columns, and debugging. This requires users to have SSH access and knowledge of terminal commands, creating a disjointed user experience compared to the main Web Application. It also leads to maintenance overhead as core logic is duplicated between the CLI tools and the Web backend.
### 1.2. Goal
Integrate the functionality of the standalone CLI tools directly into the Web Application. This will provide a unified interface for all system operations, simplify maintenance by centralizing logic, and eliminate the need for direct terminal access.
### 1.3. Scope
* **In Scope:**
  * Integration of Dataset Search functionality into the Web UI.
  * Integration of Dataset Mapping functionality into the Web UI.
  * Integration of System Debugging tools into the Web UI.
  * Removal of legacy command-line scripts and their specific dependencies.
  * Verification that existing Backup functionality (from 009-backup-scheduler) fully covers the legacy `backup_script.py` capabilities before removal.
* **Out of Scope:**
  * Major redesign of the existing Web UI (functionality will be added using existing patterns).
  * Changes to the core business logic of the tools (porting existing logic only).
## 2. User Scenarios
### 2.1. Search Datasets
* **Before:** User logs in via SSH, runs a Python script, edits the script to change the query, and reads text output in the terminal.
* **After:** User logs into the Web App, navigates to the "Search" tool, enters a query, and views results in a structured list within the browser.
### 2.2. Map Dataset Columns
* **Before:** User prepares a config file or command-line arguments and runs a Python script from the terminal.
* **After:** User navigates to the "Dataset Mapper" tool in the Web App, fills out a form with source details, and executes the mapping. Progress is visible in the application's task list.
### 2.3. System Debugging
* **Before:** User runs various debug scripts manually to test connectivity or API structure.
* **After:** User navigates to a "Debug" section in the Web App, selects a diagnostic routine (e.g., "Test DB API"), and views the report in the application logs.
## 3. Functional Requirements
### 3.1. Web-Based Dataset Search
* The system must provide a user interface to search for text patterns across all Superset datasets.
* Users must be able to select the target environment.
* The system must display search results including the Dataset Name, Field Name, and the matching text context.
### 3.2. Web-Based Dataset Mapping
* The system must provide a user interface to map dataset column names/comments from external sources (e.g., Database, File).
* Users must be able to specify the source type and connection details via the UI.
* Connection configurations must be saved for reuse to improve user convenience.
* The system must provide feedback on the success or failure of the mapping operation.
### 3.3. Web-Based Diagnostics
* The system must provide a user interface to trigger system diagnostic routines.
* Supported diagnostics must include:
  * Retrieving dataset structure for debugging.
  * Testing Database API connectivity and response structure.
* Results must be viewable within the application.
### 3.4. Legacy Cleanup
* The system must function independently of the legacy CLI scripts.
* The legacy CLI scripts must be removed to prevent usage of deprecated tools.
### 3.5. Security & Access
* All authenticated users must have access to Search, Mapping, and Debugging tools.
## 4. Success Criteria
* **Unified Experience:** Users can perform Search, Mapping, and Debugging tasks entirely through the Web UI without using a terminal.
* **Feature Parity:** All capabilities previously available in the CLI scripts are available in the Web Application.
* **Clean Codebase:** The project no longer contains standalone CLI scripts (`search_script.py`, `run_mapper.py`, `migration_script.py`, `backup_script.py`, `debug_db_api.py`, `get_dataset_structure.py`).
* **Dependency Reduction:** The codebase no longer relies on CLI-specific libraries (e.g., `whiptail`).
## 5. Assumptions
* The existing Web Application plugin architecture supports the addition of these new tools.
* The existing logging and task management systems in the Web Application can handle the output from these tools.
# [/DEF:Spec:010-refactor-cli-to-web]