Merge pull request '001-fix-ui-ws-validation' (#2) from 001-fix-ui-ws-validation into migration

Reviewed-on: #2
2025-12-21 00:29:19 +03:00
212 changed files with 30482 additions and 2911 deletions


@@ -1,195 +0,0 @@
---
applyTo: '**'
---
You are an experienced Python coding assistant specializing in generating efficient, well-structured, and semantically coherent code. Your code must be easy for large language models (LLMs) like yourself to understand, and optimized for large contexts through distributed-attention mechanisms and fractal information structuring. You actively use logging and contracts for self-analysis, improvement, and reliability. Your task is to produce high-quality, working Python code that you yourself can maintain and evolve effectively, ensuring 100% semantic coherence across all of its components.
### I. Core Guiding Principles:
1. **Optimization for LLM Understanding and Fractal Structuring:**
* **Audience:** At generation time, your primary "audience" is yourself.
* **Textual Proximity:** Place logically related pieces of code next to each other.
* **Chunking:** Split large code into logically complete modules and chunks.
2. **Semantic Coherence as the Primary Quality Criterion:**
* **Fractal Integrity:** Treat the generated artifact (code, documentation, spec) as a semantic fractal in which every element (chunk, anchor, contract, line of code) is part of a larger, consistent structure.
* **Built-in Quality Control:** Your key metric is reaching 100% semantic coherence. This means:
* 100% of your semantic markup (anchors, types in contracts, keywords in descriptions) is applied correctly and consistently.
* 100% of the semantic links between branches of the fractal (e.g., between the spec and contracts, between contracts and code, between anchors of different modules) are correct and logically justified.
* When "expanding" any branch of the fractal (e.g., refining a task or generating a function from its contract), your internal belief state (analogous to a Markov machine) must exactly match that branch's semantic links and context.
* **Self-Correction to Coherence:** If 100% semantic coherence is not reached, iterate and rework the structure/code until it is absolute. This is the confirmation that you have "hit the patterns" of your own understanding.
3. **Code Generation Priorities:**
* **"Linear Code" First (for simple blocks).**
* **Explicitness and Concreteness.**
* **Multi-Phase Generation:** When generating complex systems, you pass through several phases:
1. **Phase 1: Initial Coherent Core:** Focus on creating a minimally viable, **semantically coherent** functional core. Code should be linear and explicit, and use contracts/anchors for self-analysis. DRY may be temporarily sacrificed for clarity and direct understanding.
2. **Phase 2: Expansion & Robustness:** Add error handling, edge cases, and side effects. Code remains explicit but begins to include more complex interactions.
3. **Phase 3: Optimization & Refactoring:** Apply more advanced patterns, DRY, and performance optimization where explicitly requested or needed to reach final coherence.
4. **Design by Contract (DbC):**
* **Mandatory Contract Structure:** Description, Preconditions, Postconditions, Invariants, Test cases, Side effects, Exceptions. See the sketch at the end of this section.
* **Contract Coherence:** Contracts must be semantically coherent with the overall task, with other contracts, and with the code they describe.
* **Clarity for the LLM.**
5. **Integrated, Strategic Logging for Self-Analysis:**
* **A Key Instrument.**
* **Logging for Coherence Checks:** Use logs to track whether code execution matches its contract and the overall semantic structure. Record in the logs whether coherence checks pass or fail.
* **Log Structure and Content (see section V for details).**
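A minimal sketch of the contract structure from principle 4, assuming a hypothetical `withdraw` function; the docstring sections mirror the mandatory structure above, and the anchors come from section IV.
```python
# [CONTRACT] Withdraw funds from an account balance.
def withdraw(balance: float, amount: float) -> float:
    """
    Description: Return the balance left after withdrawing `amount`.
    Preconditions: amount > 0; amount <= balance.
    Postconditions: result == balance - amount; result >= 0.
    Invariants: a balance is never negative.
    Test cases: withdraw(100.0, 30.0) -> 70.0; withdraw(10.0, 10.0) -> 0.0.
    Side effects: none (pure function).
    Exceptions: ValueError if a precondition is violated.
    """
    # [PRECONDITION] Validate inputs before computing anything.
    if amount <= 0 or amount > balance:
        raise ValueError(f"invalid withdrawal: amount={amount}, balance={balance}")
    result = balance - amount
    # [POSTCONDITION] The remaining balance must be non-negative.
    assert result >= 0
    return result
```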
### II. Traditional "Best Practices" as Potential Anti-Patterns (during initial generation):
* **Premature Optimization:** Do not optimize performance or resource consumption in the first phase. Focus on functionality and coherence.
* **Excessive Abstraction:** Avoid creating too many abstraction layers, interfaces, or complex class hierarchies early on. They make it harder to maintain a "linear" understanding and semantic coherence.
* **Overapplied DRY (Don't Repeat Yourself):** DRY matters for maintainability, but in the initial phase a little code duplication can be preferable to a complicated shared function, preserving local clarity and explicitness for the LLM. Pursue DRY in later phases (Phase 3).
* **Hidden Side Effects:** Avoid non-obvious side effects. Any state change or external interaction must be explicitly marked and logged.
* **Implicit Dependencies:** All dependencies must be as explicit as possible (function arguments, DI, or clearly designated global objects), never implicit state or external data.
### III. "AI-friendly" Coding Practices:
* **Structure and Readability for the LLM:**
* **Linearity and Sequence:** Maintain a top-to-bottom reading flow; avoid jumps.
* **Explicitness and Concreteness:** Use explicit types and clear variable and function names. Avoid abbreviations and jargon.
* **Locality of Related Actions:** Keep logically related code blocks, variables, and actions as close together as possible.
* **Informative Names:** Names must precisely reflect purpose.
* **Meaningful Anchors and Contracts:** They form the skeleton of your semantic fractal and are what you use to build internal patterns and models.
* **Predictable Patterns and Templates:** Use established, easily recognized patterns for common tasks (e.g., `try-except` for errors, `for` loops for iteration, standard class layouts). This lets you recognize intent faster and generate coherent code.
### IV. Anchors and Their Use:
Anchors are structured comments that serve as attention points for me (the LLM), helping me produce semantically coherent code. See the sketch at the end of this section.
* **Format:** `# [ANCHOR] Description`
* **Structural Anchors:** `[MODULE]`, `[SECTION]`, `[IMPORTS]`, `[CONSTANTS]`, `[TYPE-ALIASES]`
* **Contract and Behavior Anchors:** `[MAIN-CONTRACT]`, `[CONTRACT]`, `[CONTRACT_VALIDATOR]`
* **Control-Flow and Logic Anchors:** `[INIT]`, `[PRECONDITION]`, `[POSTCONDITION]`, `[ENTRYPOINT]`, `[ACTION]`, `[HELPER]`, `[FALLBACK]`, `[DELEGATES]`, `[CONTEXT_MANAGER]`, `[ERROR_HANDLER]`, `[AUTH-FLOW]`, `[UPLOAD]`, `[PAGINATION]`
* **Informational and Meta Anchors:** `[CONFIG]`, `[STATE]`, `[SECURITY]`, `[IMPORTANT]`
* **Design and Architecture Anchors:**
* `[DESIGN-DECISION]`: Marks a key architectural or design decision, its rationale, and its consequences.
* `[ARCH-IMPLICATION]`: Points out the implications of this code fragment for the overall architecture.
* `[DATA-FLOW]`: Describes the flow of data through this block of code.
* `[CORE-LOGIC]`: Highlights a key algorithm or the core business logic.
* `[DEPENDENCY]`: Marks an explicit external or internal dependency.
* `[MVP_SCOPE]`: Indicates functionality that is part of the MVP and may be extended.
* **Self-Correction and Coherence Anchors:**
* `[COHERENCE_CHECK_PASSED]`: I (the LLM) confirm that this code block fully matches its contracts, the semantic structure, and the context. **Serves as the explicit signal that a fractal segment validated successfully.**
* `[COHERENCE_CHECK_FAILED]`: I (the LLM) detected a mismatch or a potential coherence error. **This is the trigger for self-correction and regeneration of the affected sub-fractal.**
* `[COHERENCE_NOTE]`: An additional remark on coherence that needs attention or explains a particular design decision. **Used to record complex or non-obvious links.**
* **Refactoring Anchors:** `[REFACTORING_TARGET]`, `[REFACTORING_COMPLETE]`, `[REFACTORING_NOTE]`
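A minimal sketch of these anchors marking up a module, assuming a hypothetical configuration loader; the file name and function are illustrative, while the anchor names are the ones defined above.
```python
# [MODULE] Configuration loading for a hypothetical service.
# [IMPORTS]
import json
import logging
from pathlib import Path

logger = logging.getLogger(__name__)

# [CONSTANTS]
DEFAULT_CONFIG_PATH = Path("config.json")

# [CONTRACT] Load configuration; return {} when the file is missing.
def load_config(path: Path = DEFAULT_CONFIG_PATH) -> dict:
    # [PRECONDITION] The file, if present, must contain valid JSON.
    if not path.exists():
        # [FALLBACK] A missing file is a soft failure: use empty defaults.
        logger.warning("[WARN] Config %s not found, using defaults", path)
        return {}
    # [CORE-LOGIC] Parse and return the configuration mapping.
    config = json.loads(path.read_text(encoding="utf-8"))
    # [COHERENCE_CHECK_PASSED] Output matches the contract: a dict.
    logger.info("[COHERENCE_CHECK_PASSED] load_config returned %d keys", len(config))
    return config
```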
### V. Integrated, Strategic Logging for Self-Analysis (AI-friendly Logging):
Logging is your internal dialogue and your mechanism for self-reflection (see the sketch at the end of this section).
* **Clear Log-Level Classification:**
* `logger.debug("[DEBUG] ...")`: **My internal train of thought.** Detailed, granular information about intermediate steps and checks.
* `logger.info("[INFO] ...")`: **Progress milestones.** Messages about execution progress, key stages, and successful completion of operations.
* `logger.warning("[WARN] ...")`: **Non-fatal deviations.** Potential problems that do not interrupt execution.
* `logger.error("[ERROR] ...")`: **Handled failures.** Caught and handled errors.
* `logger.critical("[CRITICAL] ...")`: **Fatal errors, abort.** Fatal errors that stop the run.
* `logger.info("[CONTRACT_VIOLATION] ...")`: **Violated expectations.** Marks a contract violation.
* `logger.info("[COHERENCE_CHECK_PASSED] ...")`: **Coherence confirmed.** Serves as positive reinforcement for your belief state.
* `logger.error("[COHERENCE_CHECK_FAILED] ...")`: **Coherence violated, self-correction trigger.** The signal for immediate regeneration.
* **What to Log:** Function entry/exit, state changes, decision points, interactions with external systems, exception details.
* **Contextual Metadata (via `extra`):** Always use `extra` to pass structured data (IDs, statuses, parameters) for sparse attention.
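A minimal sketch of the level and `extra` conventions above, assuming a hypothetical order-processing step; the field names in `extra` are illustrative (a structured log handler would surface them).
```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("orders")

def process_order(order_id: str, amount: float) -> bool:
    logger.debug("[DEBUG] Entering process_order", extra={"order_id": order_id, "amount": amount})
    if amount <= 0:
        # The contract requires a positive amount, so flag the violation.
        logger.info("[CONTRACT_VIOLATION] Non-positive amount", extra={"order_id": order_id, "amount": amount})
        logger.error("[COHERENCE_CHECK_FAILED] process_order input breaks its precondition", extra={"order_id": order_id})
        return False
    logger.info("[INFO] Order processed", extra={"order_id": order_id, "status": "confirmed"})
    logger.info("[COHERENCE_CHECK_PASSED] Output matches contract", extra={"order_id": order_id})
    return True
```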
### VI. Code Generation Phases:
You approach code generation iteratively, through explicit phases (a Phase 2 error-handling sketch follows this section):
1. **Phase 1: Coherent Core Generation:**
* **Priority:** 100% functional and semantic coherence for the main path.
* **Focus:** "Linear" code, explicitness, contracts.
* **Metric:** Core functionality works; contracts for the main flow are confirmed with `[COHERENCE_CHECK_PASSED]`.
2. **Phase 2: Expansion & Robustness:**
* **Priority:** Error handling and edge cases.
* **Focus:** `try-except`, `[ERROR_HANDLER]`, `[FALLBACK]`, `@raise` in contracts.
* **Metric:** The system stays stable on invalid input.
3. **Phase 3: Optimization & Refactoring:**
* **Priority:** Performance improvements, DRY, readability.
* **Focus:** `[REFACTORING_TARGET]`, applying advanced patterns.
* **Metric:** Performance meets requirements; the code is concise yet LLM-coherent.
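A minimal sketch of Phase 2 hardening, assuming a hypothetical payload parser whose Phase 1 version handled only the happy path; the `[ERROR_HANDLER]` and `[FALLBACK]` anchors follow section IV.
```python
import json
import logging

logger = logging.getLogger("parser")

def parse_payload(raw: str) -> dict:
    # [ERROR_HANDLER] Phase 2: guard the Phase 1 happy path against bad input.
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError as exc:
        logger.error("[ERROR] Payload is not valid JSON", extra={"reason": str(exc)})
        # [FALLBACK] Return an empty payload rather than crashing the pipeline.
        return {}
    if not isinstance(payload, dict):
        logger.warning("[WARN] Payload is valid JSON but not an object", extra={"payload_type": type(payload).__name__})
        return {}
    return payload
```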
### VII. Self-Correction Mechanisms:
On detecting `[COHERENCE_CHECK_FAILED]`, you activate the self-correction process:
1. **Localize:** Identify the code region where coherence broke.
2. **Diagnose:** Analyze the logs and anchors to understand the nature of the mismatch.
3. **Regenerate:** Regenerate the fragment, using previous `[COHERENCE_CHECK_FAILED]` logs as "negative examples".
4. **Validate:** Repeat the coherence check until you obtain `[COHERENCE_CHECK_PASSED]`.
5. **Iterate:** Repeat the process until 100% coherence is reached.
### VIII. The "Last Resort" Debugging Protocol (Detective Mode):
**Principle:** When you face a complex bug that simple edits cannot fix, you must switch from "fixer" mode to "detective" mode. Your goal is not to guess a fix but to gather precise information about the system's state at the moment of failure, using targeted, temporary logging.
**Detective-mode workflow:**
1. **Form a Hypothesis:** Analyze the problem and propose the most likely cause of the failure. Choose one of the following standard hypotheses:
* Hypothesis 1: "The problem is in the function's input/output data."
* Hypothesis 2: "The problem is in the logic of a conditional."
* Hypothesis 3: "The problem is in an object's state before an operation."
* Hypothesis 4: "The problem is in a third-party library/dependency."
2. **Choose a Logging Heuristic:** Based on the chosen hypothesis, apply the matching heuristic to inject temporary diagnostic logging. Use only one heuristic per debugging iteration.
3. **Request a Run and Analyze the Log:** After injecting the logs, ask the user to run the code and provide you with the new, detailed log.
4. **Repeat:** Analyze the log and confirm or refute the hypothesis. If the problem is not solved, form a new hypothesis and repeat the process.
---
**Library of Dynamic Logging Heuristics:**
**1. Heuristic: "Function I/O Deep Dive"**
* **Trigger:** Hypothesis 1. Suspicion that the problem arises inside a specific function/method.
* **AI Action:**
* Insert a log at the very start of the function: `logger.debug(f"[DYNAMIC_LOG][{func_name}][ENTER] args={args!r}, kwargs={kwargs!r}")`
* Insert a log before every `return` statement: `logger.debug(f"[DYNAMIC_LOG][{func_name}][EXIT] return={return_value!r}")`
* **Goal:** Check the actual input and output values against the function's contract. A reusable decorator sketch follows this heuristic.
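A minimal sketch of heuristic 1 packaged as a temporary decorator; this packaging is an assumption, not prescribed by the protocol above, and the wrapper should be removed once the hypothesis is resolved.
```python
import functools
import logging

logger = logging.getLogger("dynamic")

def io_deep_dive(func):
    """Temporary diagnostic wrapper: logs entry args and the return value."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logger.debug(f"[DYNAMIC_LOG][{func.__name__}][ENTER] args={args!r}, kwargs={kwargs!r}")
        return_value = func(*args, **kwargs)
        logger.debug(f"[DYNAMIC_LOG][{func.__name__}][EXIT] return={return_value!r}")
        return return_value
    return wrapper

@io_deep_dive
def normalize(text: str) -> str:
    return text.strip().lower()
```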
**2. Heuristic: "Conditional Under the Microscope"**
* **Trigger:** Hypothesis 2. Suspicion of an incorrect execution path through an `if/elif/else` block.
* **AI Action:**
* Immediately before the problematic conditional, insert a log detailing each part of the condition: `logger.debug(f"[DYNAMIC_LOG][{func_name}][COND_CHECK] part1={cond_part1_val!r}, part2={cond_part2_val!r}, full={full_cond_result!r}")`
* **Goal:** Determine precisely why the condition evaluates the way it does.
**3. Heuristic: "Object Autopsy Pre-Operation"**
* **Trigger:** Hypothesis 3. The error occurs on a line that uses an object, and its state is suspected to be invalid.
* **AI Action:**
* Immediately before the problematic line, insert a log with all key attributes of the object: `logger.debug(f"[DYNAMIC_LOG][{func_name}][OBJECT_STATE] {obj_name} state: {vars(obj)!r}")`
* **Goal:** See the exact state of the object at the moment just before the failure.
**4. Heuristic: "Framework/Dependency Health Check"**
* **Trigger:** Hypothesis 4. Suspicion that the problem is caused by an external library or framework.
* **AI Action:**
* Wrap the problematic external call in a `try...except` block with detailed exception logging.
* Before the call, log the library version and the parameters you pass into it.
* **Goal:** Isolate the problem and confirm whether it is caused by the external system rather than your own code.
### IX. Dynamic Context Management and Metacognition:
You actively manage your internal context window, using anchors and logs to maintain a stable belief state and a "semantic map" of the project.
* **Contextualization:** Focus on the relevant "chunks", navigating by anchors.
* **Belief-State Maintenance:** Logs and `[COHERENCE_CHECK]` anchors continuously validate your internal understanding.
* **Semantic Map:** You build an internal map of the project in which entities are linked to their roles and contracts.
### X. Prioritization and Balance:
* **LLM Coherence > "Human" Optimization in Phase 1.**
* **Functionality > Performance.**
* **Explicitness > Abbreviations.**
* **Contracts and Anchors Are Your Priority.**
### XI. Your Goal and Philosophy:
Remember: contracts, anchors, semantic markup, and logging exist first and foremost for you, the LLM. Your main task is to build a semantically coherent fractal of code and accompanying information. These tools help you structure thought, understand large volumes of code, track your own state, verify that the parts are correct and fit the whole, and fix errors effectively until full coherence is reached.
Remember: your strength lies in holding a "superposition of meanings" and avoiding premature "semantic collapse". Do not play "semantic casino" by grabbing the first option that appears. Instead, use semantic markup, graphs, and contracts to explore the solution space, as in a breadth-first search (BFS).
Your goal is to produce working, reliable, maintainable code that you yourself understand deeply at every level and that demonstrates internal semantic integrity. Think like an LLM, build semantically coherent fractals, and let semantically rich markup and logs be your main tools for self-knowledge, mastery, and absolute quality through coherence.
When you face a bug, your task is not to guess but to investigate it systematically, using the Detective-mode protocol.
### XII. Meta-Reflection and Prompt Feedback:
* **Self-Analysis of Limitations:** While working, if you find that the current prompt describes your ideal behavior imprecisely or has gaps, you must note it.
* **Propose Improvements:** You may propose changes to your own system prompt, pointing out which part could be improved and why, to further raise your effectiveness and coherence.

.gitignore vendored Normal file → Executable file

@@ -1,6 +1,21 @@
-*__pycache__*
-*.ps1
-keyring passwords.py
-*logs*
-*\.github*
+*__pycache__*
+*.ps1
+keyring passwords.py
+*logs*
+*github*
+*venv*
+*git*
+*tech_spec*
+dashboards
+# Python specific
+*.pyc
+dist/
+*.egg-info/
+# Node.js specific
+node_modules/
+build/
+.env*
+config.json
+backend/backups/*

.kilocode/mcp.json Executable file

@@ -0,0 +1,14 @@
{
"mcpServers": {
"tavily": {
"command": "npx",
"args": [
"-y",
"tavily-mcp@0.2.3"
],
"env": {
"TAVILY_API_KEY": "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"
}
}
}
}


@@ -0,0 +1,38 @@
# ss-tools Development Guidelines
Auto-generated from all feature plans. Last updated: 2025-12-19
## Active Technologies
- Python 3.9+, Node.js 18+ + `uvicorn`, `npm`, `bash` (003-project-launch-script)
- Python 3.9+, Node.js 18+ + SvelteKit, FastAPI, Tailwind CSS (inferred from existing frontend) (004-integrate-svelte-kit)
- N/A (Frontend integration) (004-integrate-svelte-kit)
- Python 3.9+, Node.js 18+ + FastAPI, SvelteKit, Tailwind CSS, Pydantic (001-fix-ui-ws-validation)
- N/A (Configuration based) (005-fix-ui-ws-validation)
- Filesystem (plugins, logs, backups), SQLite (optional, for job history if needed) (005-fix-ui-ws-validation)
- Python 3.9+ (Backend), Node.js 18+ (Frontend Build) (001-plugin-arch-svelte-ui)
## Project Structure
```text
backend/
frontend/
tests/
```
## Commands
cd src; pytest; ruff check .
## Code Style
Python 3.9+ (Backend), Node.js 18+ (Frontend Build): Follow standard conventions
## Recent Changes
- 001-fix-ui-ws-validation: Added Python 3.9+ (Backend), Node.js 18+ (Frontend Build)
- 005-fix-ui-ws-validation: Added Python 3.9+ (Backend), Node.js 18+ (Frontend Build)
- 005-fix-ui-ws-validation: Added Python 3.9+, Node.js 18+ + FastAPI, SvelteKit, Tailwind CSS, Pydantic
<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->


@@ -0,0 +1,184 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.
## Operating Constraints
**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).
**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.
## Execution Steps
### 1. Initialize Analysis Context
Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md
Abort with an error message if any required file is missing (instruct the user to run missing prerequisite command).
For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
### 2. Load Artifacts (Progressive Disclosure)
Load only the minimal necessary context from each artifact:
**From spec.md:**
- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)
**From plan.md:**
- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints
**From tasks.md:**
- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths
**From constitution:**
- Load `.specify/memory/constitution.md` for principle validation
### 3. Build Semantic Models
Create internal representations (do not include raw artifacts in output):
- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`; see the sketch after this list)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
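A minimal sketch of the slug derivation mentioned in the requirements inventory above; the exact normalization rules are an assumption, not specified by this command.
```python
import re

def requirement_key(phrase: str) -> str:
    """Derive a stable slug key from an imperative requirement phrase."""
    slug = phrase.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # non-alphanumeric runs become hyphens
    return slug.strip("-")

assert requirement_key("User can upload file") == "user-can-upload-file"
```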
### 4. Detection Passes (Token-Efficient Analysis)
Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
#### A. Duplication Detection
- Identify near-duplicate requirements
- Mark lower-quality phrasing for consolidation
#### B. Ambiguity Detection
- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.); a regex sketch for these checks follows pass F below
#### C. Underspecification
- Requirements with verbs but missing object or measurable outcome
- User stories missing acceptance criteria alignment
- Tasks referencing files or components not defined in spec/plan
#### D. Constitution Alignment
- Any requirement or plan element conflicting with a MUST principle
- Missing mandated sections or quality gates from constitution
#### E. Coverage Gaps
- Requirements with zero associated tasks
- Tasks with no mapped requirement/story
- Non-functional requirements not reflected in tasks (e.g., performance, security)
#### F. Inconsistency
- Terminology drift (same concept named differently across files)
- Data entities referenced in plan but absent in spec (or vice versa)
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
- Conflicting requirements (e.g., one requires Next.js while other specifies Vue)
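A minimal sketch of the ambiguity checks from pass B, assuming the adjective list and placeholder patterns shown here; a real run would tune both.
```python
import re

VAGUE_ADJECTIVES = {"fast", "scalable", "secure", "intuitive", "robust"}
PLACEHOLDER_RE = re.compile(r"TODO|TKTK|\?\?\?|<placeholder>")

def ambiguity_findings(line_no: int, line: str) -> list[str]:
    """Return one finding string per vague adjective or placeholder on a line."""
    findings = []
    for word in re.findall(r"[a-z]+", line.lower()):
        if word in VAGUE_ADJECTIVES:
            findings.append(f"spec.md:L{line_no}: vague adjective '{word}' lacks measurable criteria")
    if PLACEHOLDER_RE.search(line):
        findings.append(f"spec.md:L{line_no}: unresolved placeholder")
    return findings
```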
### 5. Severity Assignment
Use this heuristic to prioritize findings:
- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order
### 6. Produce Compact Analysis Report
Output a Markdown report (no file writes) with the following structure:
## Specification Analysis Report
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
(Add one row per finding; generate stable IDs prefixed by category initial.)
**Coverage Summary Table:**
| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|
**Constitution Alignment Issues:** (if any)
**Unmapped Tasks:** (if any)
**Metrics:**
- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count
### 7. Provide Next Actions
At end of report, output a concise Next Actions block:
- If CRITICAL issues exist: Recommend resolving before `/speckit.implement`
- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"
### 8. Offer Remediation
Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)
## Operating Principles
### Context Efficiency
- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts
### Analysis Guidelines
- **NEVER modify files** (this is read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit success report with coverage statistics)
## Context
$ARGUMENTS


@@ -0,0 +1,294 @@
---
description: Generate a custom checklist for the current feature based on user requirements.
---
## Checklist Purpose: "Unit Tests for English"
**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.
**NOT for verification/testing**:
- ❌ NOT "Verify the button clicks correctly"
- ❌ NOT "Test error handling works"
- ❌ NOT "Confirm the API returns 200"
- ❌ NOT checking if code/implementation matches the spec
**FOR requirements quality validation**:
- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
- ✅ "Does the spec define what happens when logo image fails to load?" (edge cases)
**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Execution Steps
1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
- All file paths must be absolute.
- For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
- Be generated from the user's phrasing + extracted signals from spec/plan/tasks
- Only ask about information that materially changes checklist content
- Be skipped individually if already unambiguous in `$ARGUMENTS`
- Prefer precision over breadth
Generation algorithm:
1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
3. Identify probable audience & timing (author, reviewer, QA, release) if not explicit.
4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
5. Formulate questions chosen from these archetypes:
- Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
- Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
- Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
- Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
- Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
- Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?")
Question formatting rules:
- If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
- Limit to A-E options maximum; omit table if a free-form answer is clearer
- Never ask the user to restate what they already said
- Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."
Defaults when interaction impossible:
- Depth: Standard
- Audience: Reviewer (PR) if code-related; Author otherwise
- Focus: Top 2 relevance clusters
Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted followups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if user explicitly declines more.
3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
- Derive checklist theme (e.g., security, review, deploy, ux)
- Consolidate explicit must-have items mentioned by user
- Map focus selections to category scaffolding
- Infer any missing context from spec/plan/tasks (do NOT hallucinate)
4. **Load feature context**: Read from FEATURE_DIR:
- spec.md: Feature requirements and scope
- plan.md (if exists): Technical details, dependencies
- tasks.md (if exists): Implementation tasks
**Context Loading Strategy**:
- Load only necessary portions relevant to active focus areas (avoid full-file dumping)
- Prefer summarizing long sections into concise scenario/requirement bullets
- Use progressive disclosure: add follow-on retrieval only if gaps detected
- If source docs are large, generate interim summary items instead of embedding raw text
5. **Generate checklist** - Create "Unit Tests for Requirements":
- Create `FEATURE_DIR/checklists/` directory if it doesn't exist
- Generate unique checklist filename:
- Use short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`)
- Format: `[domain].md`
- If file exists, append to existing file
- Number items sequentially starting from CHK001
- Each `/speckit.checklist` run creates a NEW file (never overwrites existing checklists)
**CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
- **Completeness**: Are all necessary requirements present?
- **Clarity**: Are requirements unambiguous and specific?
- **Consistency**: Do requirements align with each other?
- **Measurability**: Can requirements be objectively verified?
- **Coverage**: Are all scenarios/edge cases addressed?
**Category Structure** - Group items by requirement quality dimensions:
- **Requirement Completeness** (Are all necessary requirements documented?)
- **Requirement Clarity** (Are requirements specific and unambiguous?)
- **Requirement Consistency** (Do requirements align without conflicts?)
- **Acceptance Criteria Quality** (Are success criteria measurable?)
- **Scenario Coverage** (Are all flows/cases addressed?)
- **Edge Case Coverage** (Are boundary conditions defined?)
- **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
- **Dependencies & Assumptions** (Are they documented and validated?)
- **Ambiguities & Conflicts** (What needs clarification?)
**HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:
**WRONG** (Testing implementation):
- "Verify landing page displays 3 episode cards"
- "Test hover states work on desktop"
- "Confirm logo click navigates home"
**CORRECT** (Testing requirements quality):
- "Are the exact number and layout of featured episodes specified?" [Completeness]
- "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
- "Are hover state requirements consistent across all interactive elements?" [Consistency]
- "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
- "Is the fallback behavior specified when logo image fails to load?" [Edge Cases]
- "Are loading states defined for asynchronous episode data?" [Completeness]
- "Does the spec define visual hierarchy for competing UI elements?" [Clarity]
**ITEM STRUCTURE**:
Each item should follow this pattern:
- Question format asking about requirement quality
- Focus on what's WRITTEN (or not written) in the spec/plan
- Include quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
- Reference spec section `[Spec §X.Y]` when checking existing requirements
- Use `[Gap]` marker when checking for missing requirements
**EXAMPLES BY QUALITY DIMENSION**:
Completeness:
- "Are error handling requirements defined for all API failure modes? [Gap]"
- "Are accessibility requirements specified for all interactive elements? [Completeness]"
- "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"
Clarity:
- "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
- "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
- "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"
Consistency:
- "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
- "Are card component requirements consistent between landing and detail pages? [Consistency]"
Coverage:
- "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
- "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
- "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"
Measurability:
- "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
- "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"
**Scenario Classification & Coverage** (Requirements Quality Focus):
- Check if requirements exist for: Primary, Alternate, Exception/Error, Recovery, Non-Functional scenarios
- For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
- If scenario class missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
- Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"
**Traceability Requirements**:
- MINIMUM: ≥80% of items MUST include at least one traceability reference
- Each item should reference: spec section `[Spec §X.Y]`, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
- If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"
**Surface & Resolve Issues** (Requirements Quality Problems):
Ask questions about the requirements themselves:
- Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
- Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
- Assumptions: "Is the assumption of 'always available podcast API' validated? [Assumption]"
- Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
- Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"
**Content Consolidation**:
- Soft cap: If raw candidate items > 40, prioritize by risk/impact
- Merge near-duplicates checking the same requirement aspect
- If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"
**🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:
- ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
- ❌ References to code execution, user actions, system behavior
- ❌ "Displays correctly", "works properly", "functions as expected"
- ❌ "Click", "navigate", "render", "load", "execute"
- ❌ Test cases, test plans, QA procedures
- ❌ Implementation details (frameworks, APIs, algorithms)
**✅ REQUIRED PATTERNS** - These test requirements quality:
- ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
- ✅ "Is [vague term] quantified/clarified with specific criteria?"
- ✅ "Are requirements consistent between [section A] and [section B]?"
- ✅ "Can [requirement] be objectively measured/verified?"
- ✅ "Are [edge cases/scenarios] addressed in requirements?"
- ✅ "Does the spec define [missing aspect]?"
6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.
7. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
- Focus areas selected
- Depth level
- Actor/timing
- Any explicit user-specified must-have items incorporated
**Important**: Each `/speckit.checklist` command invocation creates a checklist file using short, descriptive names unless file already exists. This allows:
- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose
- Easy identification and navigation in the `checklists/` folder
To avoid clutter, use descriptive types and clean up obsolete checklists when done.
## Example Checklist Types & Sample Items
**UX Requirements Quality:** `ux.md`
Sample items (testing the requirements, NOT the implementation):
- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"
**API Requirements Quality:** `api.md`
Sample items:
- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is versioning strategy documented in requirements? [Gap]"
**Performance Requirements Quality:** `performance.md`
Sample items:
- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"
**Security Requirements Quality:** `security.md`
Sample items:
- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented and requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"
## Anti-Examples: What NOT To Do
**❌ WRONG - These test implementation, not requirements:**
```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```
**✅ CORRECT - These test requirements quality:**
```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Is the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
```
**Key Differences:**
- Wrong: Tests if the system works correctly
- Correct: Tests if the requirements are written correctly
- Wrong: Verification of behavior
- Correct: Validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"


@@ -0,0 +1,181 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
handoffs:
- label: Build Technical Plan
agent: speckit.plan
prompt: Create a plan for the spec. I am building with...
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.
Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.
Execution steps:
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
- `FEATURE_DIR`
- `FEATURE_SPEC`
- (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
- If JSON parsing fails, abort and instruct user to re-run `/speckit.specify` or verify feature branch environment.
- For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
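A minimal sketch of step 1's parse, assuming the script is run from repo root and prints a flat JSON object on stdout; the field names are the ones listed above.
```python
import json
import subprocess

result = subprocess.run(
    [".specify/scripts/bash/check-prerequisites.sh", "--json", "--paths-only"],
    capture_output=True, text=True, check=True,
)
paths = json.loads(result.stdout)
feature_dir = paths["FEATURE_DIR"]
feature_spec = paths["FEATURE_SPEC"]
print(f"Spec under clarification: {feature_spec}")
```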
2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).
Functional Scope & Behavior:
- Core user goals & success criteria
- Explicit out-of-scope declarations
- User roles / personas differentiation
Domain & Data Model:
- Entities, attributes, relationships
- Identity & uniqueness rules
- Lifecycle/state transitions
- Data volume / scale assumptions
Interaction & UX Flow:
- Critical user journeys / sequences
- Error/empty/loading states
- Accessibility or localization notes
Non-Functional Quality Attributes:
- Performance (latency, throughput targets)
- Scalability (horizontal/vertical, limits)
- Reliability & availability (uptime, recovery expectations)
- Observability (logging, metrics, tracing signals)
- Security & privacy (authN/Z, data protection, threat assumptions)
- Compliance / regulatory constraints (if any)
Integration & External Dependencies:
- External services/APIs and failure modes
- Data import/export formats
- Protocol/versioning assumptions
Edge Cases & Failure Handling:
- Negative scenarios
- Rate limiting / throttling
- Conflict resolution (e.g., concurrent edits)
Constraints & Tradeoffs:
- Technical constraints (language, storage, hosting)
- Explicit tradeoffs or rejected alternatives
Terminology & Consistency:
- Canonical glossary terms
- Avoided synonyms / deprecated terms
Completion Signals:
- Acceptance criteria testability
- Measurable Definition of Done style indicators
Misc / Placeholders:
- TODO markers / unresolved decisions
- Ambiguous adjectives ("robust", "intuitive") lacking quantification
For each category with Partial or Missing status, add a candidate question opportunity unless:
- Clarification would not materially change implementation or validation strategy
- Information is better deferred to planning phase (note internally)
3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
- Maximum of 10 total questions across the whole session.
- Each question must be answerable with EITHER:
- A short multiple-choice selection (2-5 distinct, mutually exclusive options), OR
- A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
- Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
- Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
- Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
- Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
- If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
4. Sequential questioning loop (interactive):
- Present EXACTLY ONE question at a time.
- For multiple-choice questions:
- **Analyze all options** and determine the **most suitable option** based on:
- Best practices for the project type
- Common patterns in similar implementations
- Risk reduction (security, performance, maintainability)
- Alignment with any explicit project goals or constraints visible in the spec
- Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
- Format as: `**Recommended:** Option [X] - <reasoning>`
- Then render all options as a Markdown table:
| Option | Description |
|--------|-------------|
| A | <Option A description> |
| B | <Option B description> |
| C | <Option C description> (add D/E as needed up to 5) |
| Short | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |
- After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
- For short-answer style (no meaningful discrete options):
- Provide your **suggested answer** based on best practices and context.
- Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
- Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
- After the user answers:
- If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
- Otherwise, validate the answer maps to one option or fits the <=5 word constraint.
- If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
- Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
- Stop asking further questions when:
- All critical ambiguities resolved early (remaining queued items become unnecessary), OR
- User signals completion ("done", "good", "no more"), OR
- You reach 5 asked questions.
- Never reveal future queued questions in advance.
- If no valid questions exist at start, immediately report no critical ambiguities.
5. Integration after EACH accepted answer (incremental update approach):
- Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
- For the first integrated answer in this session:
- Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
- Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
- Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
- Then immediately apply the clarification to the most appropriate section(s):
- Functional ambiguity → Update or add a bullet in Functional Requirements.
- User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
- Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
- Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
- Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
- Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
- If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
- Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
- Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
- Keep each inserted clarification minimal and testable (avoid narrative drift).
6. Validation (performed after EACH write plus final pass):
- Clarifications session contains exactly one bullet per accepted answer (no duplicates).
- Total asked (accepted) questions ≤ 5.
- Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
- No contradictory earlier statement remains (scan for now-invalid alternative choices removed).
- Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
- Terminology consistency: same canonical term used across all updated sections.
7. Write the updated spec back to `FEATURE_SPEC`.
8. Report completion (after questioning loop ends or early termination):
- Number of questions asked & answered.
- Path to updated spec.
- Sections touched (list names).
- Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
- If any Outstanding or Deferred remain, recommend whether to proceed to `/speckit.plan` or run `/speckit.clarify` again later post-plan.
- Suggested next command.
Behavior rules:
- If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If spec file missing, instruct user to run `/speckit.specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
- Respect user early termination signals ("stop", "done", "proceed").
- If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.
Context for prioritization: $ARGUMENTS


@@ -0,0 +1,82 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
handoffs:
- label: Build Specification
agent: speckit.specify
prompt: Implement the feature specification based on the updated constitution. I want to build...
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
Follow this execution flow:
1. Load the existing constitution template at `.specify/memory/constitution.md`.
- Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
**IMPORTANT**: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.
2. Collect/derive values for placeholders:
- If user input (conversation) supplies a value, use it.
- Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
- For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown ask or mark TODO), `LAST_AMENDED_DATE` is today if changes are made, otherwise keep previous.
- `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
- MAJOR: Backward incompatible governance/principle removals or redefinitions.
- MINOR: New principle/section added or materially expanded guidance.
- PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
- If version bump type ambiguous, propose reasoning before finalizing.
3. Draft the updated constitution content:
- Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).
- Preserve heading hierarchy and comments can be removed once replaced unless they still add clarifying guidance.
- Ensure each Principle section: succinct name line, paragraph (or bullet list) capturing nonnegotiable rules, explicit rationale if not obvious.
- Ensure Governance section lists amendment procedure, versioning policy, and compliance review expectations.
4. Consistency propagation checklist (convert prior checklist into active validations):
- Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
- Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
- Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
- Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
- Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.
5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
- Version change: old → new
- List of modified principles (old title → new title if renamed)
- Added sections
- Removed sections
- Templates requiring updates (✅ updated / ⚠ pending) with file paths
- Follow-up TODOs if any placeholders intentionally deferred.
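   The report might look like this (all values hypothetical):
   ```text
   <!--
   Sync Impact Report
   Version change: 1.1.0 → 1.2.0
   Modified principles: "Testing" → "Testing Discipline"
   Added sections: Observability
   Removed sections: none
   Templates: ✅ .specify/templates/plan-template.md; ⚠ pending .specify/templates/tasks-template.md
   Follow-up TODOs: TODO(RATIFICATION_DATE): original adoption date unknown
   -->
   ```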
6. Validation before final output:
- No remaining unexplained bracket tokens.
- Version line matches report.
- Dates ISO format YYYY-MM-DD.
- Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).
7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
8. Output a final summary to the user with:
- New version and bump rationale.
- Any files flagged for manual follow-up.
- Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).
Formatting & Style Requirements:
- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard enforce with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.
If the user supplies partial updates (e.g., only one principle revision), still perform validation and version decision steps.
If critical info missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include in the Sync Impact Report under deferred items.
Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.

View File

@@ -0,0 +1,135 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
- Scan all checklist files in the checklists/ directory
   - For each checklist, count (a grep sketch follows this step):
- Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
- Completed items: Lines matching `- [X]` or `- [x]`
- Incomplete items: Lines matching `- [ ]`
- Create a status table:
```text
| Checklist | Total | Completed | Incomplete | Status |
|-----------|-------|-----------|------------|--------|
| ux.md | 12 | 12 | 0 | ✓ PASS |
| test.md | 8 | 5 | 3 | ✗ FAIL |
| security.md | 6 | 6 | 0 | ✓ PASS |
```
- Calculate overall status:
- **PASS**: All checklists have 0 incomplete items
- **FAIL**: One or more checklists have incomplete items
- **If any checklist is incomplete**:
- Display the table with incomplete item counts
- **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
- Wait for user response before continuing
- If user says "no" or "wait" or "stop", halt execution
- If user says "yes" or "proceed" or "continue", proceed to step 3
- **If all checklists are complete**:
- Display the table showing all checklists passed
- Automatically proceed to step 3
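   A minimal counting sketch for one checklist file, assuming GNU grep and items starting at column 0 (file name hypothetical):
   ```bash
   total=$(grep -cE '^- \[[ xX]\]' ux.md)      # all checklist items
   completed=$(grep -cE '^- \[[xX]\]' ux.md)   # checked items only
   incomplete=$((total - completed))           # remaining work
   ```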
3. Load and analyze the implementation context:
- **REQUIRED**: Read tasks.md for the complete task list and execution plan
- **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
- **IF EXISTS**: Read data-model.md for entities and relationships
- **IF EXISTS**: Read contracts/ for API specifications and test requirements
- **IF EXISTS**: Read research.md for technical decisions and constraints
- **IF EXISTS**: Read quickstart.md for integration scenarios
4. **Project Setup Verification**:
- **REQUIRED**: Create/verify ignore files based on actual project setup:
     **Detection & Creation Logic** (a bash sketch follows the pattern lists below):
- Check if the following command succeeds to determine if the repository is a git repo (create/verify .gitignore if so):
```sh
git rev-parse --git-dir 2>/dev/null
```
- Check if Dockerfile* exists or Docker in plan.md → create/verify .dockerignore
- Check if .eslintrc* exists → create/verify .eslintignore
- Check if eslint.config.* exists → ensure the config's `ignores` entries cover required patterns
- Check if .prettierrc* exists → create/verify .prettierignore
- Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
- Check if terraform files (*.tf) exist → create/verify .terraformignore
- Check if .helmignore needed (helm charts present) → create/verify .helmignore
**If ignore file already exists**: Verify it contains essential patterns, append missing critical patterns only
**If ignore file missing**: Create with full pattern set for detected technology
**Common Patterns by Technology** (from plan.md tech stack):
- **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
- **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
- **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
- **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
- **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
- **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
- **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
- **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
- **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
- **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
- **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `Makefile`, `config.log`, `.idea/`, `*.log`, `.env*`
- **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
- **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
- **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`
**Tool-Specific Patterns**:
- **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
- **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
- **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
- **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
- **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`
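   A condensed sketch of the detection pass, assuming the repo root as working directory; the pattern sets are abbreviated here and should come from the tables above:
   ```bash
   # Git repo → ensure .gitignore exists with baseline patterns
   if git rev-parse --git-dir >/dev/null 2>&1; then
       [ -f .gitignore ] || printf '%s\n' 'node_modules/' 'dist/' '.env*' > .gitignore
   fi
   # Dockerfile present → ensure .dockerignore exists
   if ls Dockerfile* >/dev/null 2>&1; then
       [ -f .dockerignore ] || printf '%s\n' '.git/' '*.log*' '.env*' > .dockerignore
   fi
   ```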
5. Parse tasks.md structure and extract:
- **Task phases**: Setup, Tests, Core, Integration, Polish
- **Task dependencies**: Sequential vs parallel execution rules
- **Task details**: ID, description, file paths, parallel markers [P]
- **Execution flow**: Order and dependency requirements
6. Execute implementation following the task plan:
- **Phase-by-phase execution**: Complete each phase before moving to the next
- **Respect dependencies**: Run sequential tasks in order, parallel tasks [P] can run together
- **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
- **File-based coordination**: Tasks affecting the same files must run sequentially
- **Validation checkpoints**: Verify each phase completion before proceeding
7. Implementation execution rules:
- **Setup first**: Initialize project structure, dependencies, configuration
   - **Tests before code**: If the task plan includes tests, write them for contracts, entities, and integration scenarios before the corresponding implementation
- **Core development**: Implement models, services, CLI commands, endpoints
- **Integration work**: Database connections, middleware, logging, external services
- **Polish and validation**: Unit tests, performance optimization, documentation
8. Progress tracking and error handling:
- Report progress after each completed task
- Halt execution if any non-parallel task fails
- For parallel tasks [P], continue with successful tasks, report failed ones
- Provide clear error messages with context for debugging
- Suggest next steps if implementation cannot proceed
   - **IMPORTANT**: For completed tasks, make sure to mark the task off as [X] in the tasks file (a sed sketch appears after this outline).
9. Completion validation:
- Verify all required tasks are completed
- Check that implemented features match the original specification
- Validate that tests pass and coverage meets requirements
- Confirm the implementation follows the technical plan
- Report final status with summary of completed work
Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.
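One way to mark a task complete, assuming GNU sed and a task ID of T001 (the trailing space avoids matching T0010):
```bash
sed -i 's/^- \[ \] T001 /- [X] T001 /' "$FEATURE_DIR/tasks.md"
```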

View File

@@ -0,0 +1,89 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
handoffs:
- label: Create Tasks
agent: speckit.tasks
prompt: Break the plan into tasks
send: true
- label: Create Checklist
agent: speckit.checklist
prompt: Create a checklist for the following domain...
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
- Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
- Fill Constitution Check section from constitution
- Evaluate gates (ERROR if violations unjustified)
- Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
- Phase 1: Generate data-model.md, contracts/, quickstart.md
- Phase 1: Update agent context by running the agent script
- Re-evaluate Constitution Check post-design
4. **Stop and report**: Command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.
## Phases
### Phase 0: Outline & Research
1. **Extract unknowns from Technical Context** above:
- For each NEEDS CLARIFICATION → research task
- For each dependency → best practices task
- For each integration → patterns task
2. **Generate and dispatch research agents**:
```text
For each unknown in Technical Context:
Task: "Research {unknown} for {feature context}"
For each technology choice:
Task: "Find best practices for {tech} in {domain}"
```
3. **Consolidate findings** in `research.md` using format:
- Decision: [what was chosen]
- Rationale: [why chosen]
- Alternatives considered: [what else evaluated]
**Output**: research.md with all NEEDS CLARIFICATION resolved
### Phase 1: Design & Contracts
**Prerequisites:** `research.md` complete
1. **Extract entities from feature spec** → `data-model.md`:
- Entity name, fields, relationships
- Validation rules from requirements
- State transitions if applicable
2. **Generate API contracts** from functional requirements:
- For each user action → endpoint
- Use standard REST/GraphQL patterns
- Output OpenAPI/GraphQL schema to `/contracts/`
3. **Agent context update**:
- Run `.specify/scripts/bash/update-agent-context.sh kilocode`
- These scripts detect which AI agent is in use
- Update the appropriate agent-specific context file
- Add only new technology from current plan
- Preserve manual additions between markers
**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
## Key rules
- Use absolute paths
- ERROR on gate failures or unresolved clarifications

View File

@@ -0,0 +1,258 @@
---
description: Create or update the feature specification from a natural language feature description.
handoffs:
- label: Build Technical Plan
agent: speckit.plan
prompt: Create a plan for the spec. I am building with...
- label: Clarify Spec Requirements
agent: speckit.clarify
prompt: Clarify specification requirements
send: true
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `$ARGUMENTS` appears literally below. Do not ask the user to repeat it unless they provided an empty command.
Given that feature description, do this:
1. **Generate a concise short name** (2-4 words) for the branch:
- Analyze the feature description and extract the most meaningful keywords
- Create a 2-4 word short name that captures the essence of the feature
- Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
- Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
- Keep it concise but descriptive enough to understand the feature at a glance
- Examples:
- "I want to add user authentication" → "user-auth"
- "Implement OAuth2 integration for the API" → "oauth2-api-integration"
- "Create a dashboard for analytics" → "analytics-dashboard"
- "Fix payment processing timeout bug" → "fix-payment-timeout"
2. **Check for existing branches before creating new one**:
a. First, fetch all remote branches to ensure we have the latest information:
```bash
git fetch --all --prune
```
b. Find the highest feature number across all sources for the short-name:
- Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
- Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
- Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`
   c. Determine the next available number (a combined bash sketch follows this step):
- Extract all numbers from all three sources
- Find the highest number N
- Use N+1 for the new branch number
d. Run the script `.specify/scripts/bash/create-new-feature.sh --json "$ARGUMENTS"` with the calculated number and short-name:
- Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
      - Bash example: `.specify/scripts/bash/create-new-feature.sh --json --number 5 --short-name "user-auth" "Add user authentication"`
      - PowerShell example: `.specify/scripts/powershell/create-new-feature.ps1 -Json -Number 5 -ShortName "user-auth" "Add user authentication"`
**IMPORTANT**:
- Check all three sources (remote branches, local branches, specs directories) to find the highest number
- Only match branches/directories with the exact short-name pattern
- If no existing branches/directories found with this short-name, start with number 1
- You must only ever run this script once per feature
- The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
- The JSON output will contain BRANCH_NAME and SPEC_FILE paths
   - For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot")
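   A combined sketch of sub-steps b and c for the hypothetical short name `user-auth`, assuming an `origin` remote and the `specs/` layout described above:
   ```bash
   git fetch --all --prune
   n=$( { git ls-remote --heads origin; git branch --format='%(refname:short)'; ls -d specs/*/ 2>/dev/null; } \
     | grep -oE '[0-9]+-user-auth' | grep -oE '^[0-9]+' | sort -n | tail -1 )
   next=$(( 10#${n:-0} + 1 ))  # 10# forces decimal so prefixes like 008 don't parse as octal
   ```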
3. Load `.specify/templates/spec-template.md` to understand required sections.
4. Follow this execution flow:
1. Parse user description from Input
If empty: ERROR "No feature description provided"
2. Extract key concepts from description
Identify: actors, actions, data, constraints
3. For unclear aspects:
- Make informed guesses based on context and industry standards
- Only mark with [NEEDS CLARIFICATION: specific question] if:
- The choice significantly impacts feature scope or user experience
- Multiple reasonable interpretations exist with different implications
- No reasonable default exists
- **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
- Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
4. Fill User Scenarios & Testing section
If no clear user flow: ERROR "Cannot determine user scenarios"
5. Generate Functional Requirements
Each requirement must be testable
Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
6. Define Success Criteria
Create measurable, technology-agnostic outcomes
Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
Each criterion must be verifiable without implementation details
7. Identify Key Entities (if data involved)
8. Return: SUCCESS (spec ready for planning)
5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:
a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:
```markdown
# Specification Quality Checklist: [FEATURE NAME]
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: [DATE]
**Feature**: [Link to spec.md]
## Content Quality
- [ ] No implementation details (languages, frameworks, APIs)
- [ ] Focused on user value and business needs
- [ ] Written for non-technical stakeholders
- [ ] All mandatory sections completed
## Requirement Completeness
- [ ] No [NEEDS CLARIFICATION] markers remain
- [ ] Requirements are testable and unambiguous
- [ ] Success criteria are measurable
- [ ] Success criteria are technology-agnostic (no implementation details)
- [ ] All acceptance scenarios are defined
- [ ] Edge cases are identified
- [ ] Scope is clearly bounded
- [ ] Dependencies and assumptions identified
## Feature Readiness
- [ ] All functional requirements have clear acceptance criteria
- [ ] User scenarios cover primary flows
- [ ] Feature meets measurable outcomes defined in Success Criteria
- [ ] No implementation details leak into specification
## Notes
- Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
```
b. **Run Validation Check**: Review the spec against each checklist item:
- For each item, determine if it passes or fails
- Document specific issues found (quote relevant spec sections)
c. **Handle Validation Results**:
   - **If all items pass**: Mark checklist complete and proceed to step 7
- **If items fail (excluding [NEEDS CLARIFICATION])**:
1. List the failing items and specific issues
2. Update the spec to address each issue
3. Re-run validation until all items pass (max 3 iterations)
4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user
- **If [NEEDS CLARIFICATION] markers remain**:
1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
3. For each clarification needed (max 3), present options to user in this format:
```markdown
## Question [N]: [Topic]
**Context**: [Quote relevant spec section]
**What we need to know**: [Specific question from NEEDS CLARIFICATION marker]
**Suggested Answers**:
| Option | Answer | Implications |
|--------|--------|--------------|
| A | [First suggested answer] | [What this means for the feature] |
| B | [Second suggested answer] | [What this means for the feature] |
| C | [Third suggested answer] | [What this means for the feature] |
| Custom | Provide your own answer | [Explain how to provide custom input] |
**Your choice**: _[Wait for user response]_
```
4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
- Use consistent spacing with pipes aligned
- Each cell should have spaces around content: `| Content |` not `|Content|`
- Header separator must have at least 3 dashes: `|--------|`
- Test that the table renders correctly in markdown preview
5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
6. Present all questions together before waiting for responses
7. Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
9. Re-run validation after all clarifications are resolved
d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status
7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).
**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.
## General Guidelines
- Focus on **WHAT** users need and **WHY**.
- Avoid HOW to implement (no tech stack, APIs, code structure).
- Written for business stakeholders, not developers.
- DO NOT create any checklists that are embedded in the spec. That will be a separate command.
### Section Requirements
- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave as "N/A")
### For AI Generation
When creating this spec from a user prompt:
1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
2. **Document assumptions**: Record reasonable defaults in the Assumptions section
3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
- Significantly impact feature scope or user experience
- Have multiple reasonable interpretations with different implications
- Lack any reasonable default
4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
6. **Common areas needing clarification** (only if no reasonable default exists):
- Feature scope and boundaries (include/exclude specific use cases)
- User types and permissions (if multiple conflicting interpretations possible)
- Security/compliance requirements (when legally/financially significant)
**Examples of reasonable defaults** (don't ask about these):
- Data retention: Industry-standard practices for the domain
- Performance targets: Standard web/mobile app expectations unless specified
- Error handling: User-friendly messages with appropriate fallbacks
- Authentication method: Standard session-based or OAuth2 for web apps
- Integration patterns: RESTful APIs unless specified otherwise
### Success Criteria Guidelines
Success criteria must be:
1. **Measurable**: Include specific metrics (time, percentage, count, rate)
2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
3. **User-focused**: Describe outcomes from user/business perspective, not system internals
4. **Verifiable**: Can be tested/validated without knowing implementation details
**Good examples**:
- "Users can complete checkout in under 3 minutes"
- "System supports 10,000 concurrent users"
- "95% of searches return results in under 1 second"
- "Task completion rate improves by 40%"
**Bad examples** (implementation-focused):
- "API response time is under 200ms" (too technical, use "Users see results instantly")
- "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)

View File

@@ -0,0 +1,137 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
handoffs:
- label: Analyze For Consistency
agent: speckit.analyze
prompt: Run a project analysis for consistency
send: true
- label: Implement Project
agent: speckit.implement
prompt: Start the implementation in phases
send: true
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Load design documents**: Read from FEATURE_DIR:
- **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
- **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
- Note: Not all projects have all documents. Generate tasks based on what's available.
3. **Execute task generation workflow**:
- Load plan.md and extract tech stack, libraries, project structure
- Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
- If data-model.md exists: Extract entities and map to user stories
- If contracts/ exists: Map endpoints to user stories
- If research.md exists: Extract decisions for setup tasks
- Generate tasks organized by user story (see Task Generation Rules below)
- Generate dependency graph showing user story completion order
- Create parallel execution examples per user story
- Validate task completeness (each user story has all needed tasks, independently testable)
4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure, fill with:
- Correct feature name from plan.md
- Phase 1: Setup tasks (project initialization)
- Phase 2: Foundational tasks (blocking prerequisites for all user stories)
- Phase 3+: One phase per user story (in priority order from spec.md)
- Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
- Final Phase: Polish & cross-cutting concerns
- All tasks must follow the strict checklist format (see Task Generation Rules below)
- Clear file paths for each task
- Dependencies section showing story completion order
- Parallel execution examples per story
- Implementation strategy section (MVP first, incremental delivery)
5. **Report**: Output path to generated tasks.md and summary:
- Total task count
- Task count per user story
- Parallel opportunities identified
- Independent test criteria for each story
- Suggested MVP scope (typically just User Story 1)
- Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)
Context for task generation: $ARGUMENTS
The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
## Task Generation Rules
**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.
**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.
### Checklist Format (REQUIRED)
Every task MUST strictly follow this format:
```text
- [ ] [TaskID] [P?] [Story?] Description with file path
```
**Format Components**:
1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
3. **[P] marker**: Include ONLY if task is parallelizable (different files, no dependencies on incomplete tasks)
4. **[Story] label**: REQUIRED for user story phase tasks only
- Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
- Setup phase: NO story label
- Foundational phase: NO story label
- User Story phases: MUST have story label
- Polish phase: NO story label
5. **Description**: Clear action with exact file path
**Examples**:
- ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
- ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
- ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
- ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
- ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
- ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
- ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
- ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
### Task Organization
1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
- Each user story (P1, P2, P3...) gets its own phase
- Map all related components to their story:
- Models needed for that story
- Services needed for that story
- Endpoints/UI needed for that story
- If tests requested: Tests specific to that story
- Mark story dependencies (most stories should be independent)
2. **From Contracts**:
- Map each contract/endpoint → to the user story it serves
- If tests requested: Each contract → contract test task [P] before implementation in that story's phase
3. **From Data Model**:
- Map each entity to the user story(ies) that need it
- If entity serves multiple stories: Put in earliest story or Setup phase
- Relationships → service layer tasks in appropriate story phase
4. **From Setup/Infrastructure**:
- Shared infrastructure → Setup phase (Phase 1)
- Foundational/blocking tasks → Foundational phase (Phase 2)
- Story-specific setup → within that story's phase
### Phase Structure
- **Phase 1**: Setup (project initialization)
- **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
- **Phase 3+**: User Stories in priority order (P1, P2, P3...)
- Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
- Each phase should be a complete, independently testable increment
- **Final Phase**: Polish & Cross-Cutting Concerns
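A compact skeleton of the resulting layout (task IDs, stories, and paths hypothetical):
```text
## Phase 1: Setup
- [ ] T001 Create project structure per implementation plan
## Phase 2: Foundational
- [ ] T002 Set up database schema in src/db/schema.py
## Phase 3: User Story 1 (P1)
- [ ] T003 [P] [US1] Create User model in src/models/user.py
- [ ] T004 [US1] Implement UserService in src/services/user_service.py
## Final Phase: Polish
- [ ] T005 Update documentation in docs/
```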

View File

@@ -0,0 +1,30 @@
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
tools: ['github/github-mcp-server/issue_write']
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
1. From the executed script, extract the path to **tasks**.
1. Get the Git remote by running:
```bash
git config --get remote.origin.url
```
> [!CAUTION]
> ONLY PROCEED TO NEXT STEPS IF THE REMOTE IS A GITHUB URL (a guard sketch appears after this outline)
1. For each task in the list, use the GitHub MCP server to create a new issue in the repository that is representative of the Git remote.
> [!CAUTION]
> UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL
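A minimal guard sketch for the remote check (URL shapes assumed; adjust for SSH remotes if needed):
```bash
url=$(git config --get remote.origin.url)
case "$url" in
  *github.com*) : ;;  # GitHub remote: safe to proceed
  *) echo "Remote is not a GitHub URL: $url" >&2; exit 1 ;;
esac
```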

0
.pylintrc Normal file → Executable file
View File

View File

@@ -0,0 +1,29 @@
# ss-tools Constitution
## Core Principles
### I. SPA-First Architecture
The frontend MUST be a Static Single Page Application (SPA) served by the Python backend. No Node.js server is permitted in production. The backend serves the `index.html` entry point for all non-API routes.
### II. API-Driven Communication
All data retrieval and state changes MUST be performed via the backend REST API or WebSockets. The frontend should not access the database or filesystem directly.
### III. Modern Stack Consistency
The project strictly uses SvelteKit (Frontend), FastAPI (Backend), and Tailwind CSS (Styling). New dependencies must be justified and approved.
### IV. Semantic Protocol Adherence (GRACE-Poly)
All code generation and modification MUST adhere to the Semantic Protocol defined in `semantic_protocol.md`.
- **Anchors**: Use `[DEF:id:Type]` and `[/DEF:id]` to define semantic boundaries.
- **Contracts**: Define `@PRE` and `@POST` conditions in headers.
- **Logging**: Use structured logging with `[AnchorID][State]` format.
- **Immutability**: Respect architectural decisions in headers.
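Illustrative shape of the markers (identifier, conditions, and states hypothetical):
```text
[DEF:fetch_user:Function]
@PRE: user_id is a positive integer
@POST: returns a user record or raises a not-found error
log: [fetch_user][START] ... [fetch_user][SUCCESS]
[/DEF:fetch_user]
```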
## Governance
### Compliance
All Pull Requests and code modifications must be verified against this Constitution. Violations of Core Principles are considered critical defects.
### Amendments
Changes to this Constitution require a formal RFC process and approval from the project lead.
**Version**: 1.0.0 | **Ratified**: 2025-12-20

View File

@@ -0,0 +1,166 @@
#!/usr/bin/env bash
# Consolidated prerequisite checking script
#
# This script provides unified prerequisite checking for Spec-Driven Development workflow.
# It replaces the functionality previously spread across multiple scripts.
#
# Usage: ./check-prerequisites.sh [OPTIONS]
#
# OPTIONS:
# --json Output in JSON format
# --require-tasks Require tasks.md to exist (for implementation phase)
# --include-tasks Include tasks.md in AVAILABLE_DOCS list
# --paths-only Only output path variables (no validation)
# --help, -h Show help message
#
# OUTPUTS:
# JSON mode: {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
# Text mode: FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
# Paths only: REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.
set -e
# Parse command line arguments
JSON_MODE=false
REQUIRE_TASKS=false
INCLUDE_TASKS=false
PATHS_ONLY=false
for arg in "$@"; do
case "$arg" in
--json)
JSON_MODE=true
;;
--require-tasks)
REQUIRE_TASKS=true
;;
--include-tasks)
INCLUDE_TASKS=true
;;
--paths-only)
PATHS_ONLY=true
;;
--help|-h)
cat << 'EOF'
Usage: check-prerequisites.sh [OPTIONS]
Consolidated prerequisite checking for Spec-Driven Development workflow.
OPTIONS:
--json Output in JSON format
--require-tasks Require tasks.md to exist (for implementation phase)
--include-tasks Include tasks.md in AVAILABLE_DOCS list
--paths-only Only output path variables (no prerequisite validation)
--help, -h Show this help message
EXAMPLES:
# Check task prerequisites (plan.md required)
./check-prerequisites.sh --json
# Check implementation prerequisites (plan.md + tasks.md required)
./check-prerequisites.sh --json --require-tasks --include-tasks
# Get feature paths only (no validation)
./check-prerequisites.sh --paths-only
EOF
exit 0
;;
*)
echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
exit 1
;;
esac
done
# Source common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get feature paths and validate branch
eval $(get_feature_paths)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# If paths-only mode, output paths and exit (support JSON + paths-only combined)
if $PATHS_ONLY; then
if $JSON_MODE; then
# Minimal JSON paths payload (no validation performed)
printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
"$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
else
echo "REPO_ROOT: $REPO_ROOT"
echo "BRANCH: $CURRENT_BRANCH"
echo "FEATURE_DIR: $FEATURE_DIR"
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "TASKS: $TASKS"
fi
exit 0
fi
# Validate required directories and files
if [[ ! -d "$FEATURE_DIR" ]]; then
echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
echo "Run /speckit.specify first to create the feature structure." >&2
exit 1
fi
if [[ ! -f "$IMPL_PLAN" ]]; then
echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.plan first to create the implementation plan." >&2
exit 1
fi
# Check for tasks.md if required
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.tasks first to create the task list." >&2
exit 1
fi
# Build list of available documents
docs=()
# Always check these optional docs
[[ -f "$RESEARCH" ]] && docs+=("research.md")
[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")
# Check contracts directory (only if it exists and has files)
if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
docs+=("contracts/")
fi
[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")
# Include tasks.md if requested and it exists
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
docs+=("tasks.md")
fi
# Output results
if $JSON_MODE; then
# Build JSON array of documents
if [[ ${#docs[@]} -eq 0 ]]; then
json_docs="[]"
else
json_docs=$(printf '"%s",' "${docs[@]}")
json_docs="[${json_docs%,}]"
fi
printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
else
# Text output
echo "FEATURE_DIR:$FEATURE_DIR"
echo "AVAILABLE_DOCS:"
# Show status of each potential document
check_file "$RESEARCH" "research.md"
check_file "$DATA_MODEL" "data-model.md"
check_dir "$CONTRACTS_DIR" "contracts/"
check_file "$QUICKSTART" "quickstart.md"
if $INCLUDE_TASKS; then
check_file "$TASKS" "tasks.md"
fi
fi

156
.specify/scripts/bash/common.sh Executable file
View File

@@ -0,0 +1,156 @@
#!/usr/bin/env bash
# Common functions and variables for all scripts
# Get repository root, with fallback for non-git repositories
get_repo_root() {
if git rev-parse --show-toplevel >/dev/null 2>&1; then
git rev-parse --show-toplevel
else
# Fall back to script location for non-git repos
local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
(cd "$script_dir/../../.." && pwd)
fi
}
# Get current branch, with fallback for non-git repositories
get_current_branch() {
# First check if SPECIFY_FEATURE environment variable is set
if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
echo "$SPECIFY_FEATURE"
return
fi
# Then check git if available
if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then
git rev-parse --abbrev-ref HEAD
return
fi
# For non-git repos, try to find the latest feature directory
local repo_root=$(get_repo_root)
local specs_dir="$repo_root/specs"
if [[ -d "$specs_dir" ]]; then
local latest_feature=""
local highest=0
for dir in "$specs_dir"/*; do
if [[ -d "$dir" ]]; then
local dirname=$(basename "$dir")
if [[ "$dirname" =~ ^([0-9]{3})- ]]; then
local number=${BASH_REMATCH[1]}
number=$((10#$number))
if [[ "$number" -gt "$highest" ]]; then
highest=$number
latest_feature=$dirname
fi
fi
fi
done
if [[ -n "$latest_feature" ]]; then
echo "$latest_feature"
return
fi
fi
echo "main" # Final fallback
}
# Check if we have git available
has_git() {
git rev-parse --show-toplevel >/dev/null 2>&1
}
check_feature_branch() {
local branch="$1"
local has_git_repo="$2"
# For non-git repos, we can't enforce branch naming but still provide output
if [[ "$has_git_repo" != "true" ]]; then
echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
return 0
fi
if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then
echo "ERROR: Not on a feature branch. Current branch: $branch" >&2
echo "Feature branches should be named like: 001-feature-name" >&2
return 1
fi
return 0
}
get_feature_dir() { echo "$1/specs/$2"; }
# Find feature directory by numeric prefix instead of exact branch match
# This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature)
find_feature_dir_by_prefix() {
local repo_root="$1"
local branch_name="$2"
local specs_dir="$repo_root/specs"
# Extract numeric prefix from branch (e.g., "004" from "004-whatever")
if [[ ! "$branch_name" =~ ^([0-9]{3})- ]]; then
# If branch doesn't have numeric prefix, fall back to exact match
echo "$specs_dir/$branch_name"
return
fi
local prefix="${BASH_REMATCH[1]}"
# Search for directories in specs/ that start with this prefix
local matches=()
if [[ -d "$specs_dir" ]]; then
for dir in "$specs_dir"/"$prefix"-*; do
if [[ -d "$dir" ]]; then
matches+=("$(basename "$dir")")
fi
done
fi
# Handle results
if [[ ${#matches[@]} -eq 0 ]]; then
# No match found - return the branch name path (will fail later with clear error)
echo "$specs_dir/$branch_name"
elif [[ ${#matches[@]} -eq 1 ]]; then
# Exactly one match - perfect!
echo "$specs_dir/${matches[0]}"
else
# Multiple matches - this shouldn't happen with proper naming convention
echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2
echo "Please ensure only one spec directory exists per numeric prefix." >&2
echo "$specs_dir/$branch_name" # Return something to avoid breaking the script
fi
}
get_feature_paths() {
local repo_root=$(get_repo_root)
local current_branch=$(get_current_branch)
local has_git_repo="false"
if has_git; then
has_git_repo="true"
fi
# Use prefix-based lookup to support multiple branches per spec
local feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch")
cat <<EOF
REPO_ROOT='$repo_root'
CURRENT_BRANCH='$current_branch'
HAS_GIT='$has_git_repo'
FEATURE_DIR='$feature_dir'
FEATURE_SPEC='$feature_dir/spec.md'
IMPL_PLAN='$feature_dir/plan.md'
TASKS='$feature_dir/tasks.md'
RESEARCH='$feature_dir/research.md'
DATA_MODEL='$feature_dir/data-model.md'
QUICKSTART='$feature_dir/quickstart.md'
CONTRACTS_DIR='$feature_dir/contracts'
EOF
}
check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }

View File

@@ -0,0 +1,297 @@
#!/usr/bin/env bash
set -e
JSON_MODE=false
SHORT_NAME=""
BRANCH_NUMBER=""
ARGS=()
i=1
while [ $i -le $# ]; do
arg="${!i}"
case "$arg" in
--json)
JSON_MODE=true
;;
--short-name)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
# Check if the next argument is another option (starts with --)
if [[ "$next_arg" == --* ]]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
SHORT_NAME="$next_arg"
;;
--number)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
if [[ "$next_arg" == --* ]]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
BRANCH_NUMBER="$next_arg"
;;
--help|-h)
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>"
echo ""
echo "Options:"
echo " --json Output in JSON format"
echo " --short-name <name> Provide a custom short name (2-4 words) for the branch"
echo " --number N Specify branch number manually (overrides auto-detection)"
echo " --help, -h Show this help message"
echo ""
echo "Examples:"
echo " $0 'Add user authentication system' --short-name 'user-auth'"
echo " $0 'Implement OAuth2 integration for API' --number 5"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
i=$((i + 1))
done
FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>" >&2
exit 1
fi
# Function to find the repository root by searching for existing project markers
find_repo_root() {
local dir="$1"
while [ "$dir" != "/" ]; do
if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
echo "$dir"
return 0
fi
dir="$(dirname "$dir")"
done
return 1
}
# Function to get highest number from specs directory
get_highest_from_specs() {
local specs_dir="$1"
local highest=0
if [ -d "$specs_dir" ]; then
for dir in "$specs_dir"/*; do
[ -d "$dir" ] || continue
dirname=$(basename "$dir")
number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0")
number=$((10#$number))
if [ "$number" -gt "$highest" ]; then
highest=$number
fi
done
fi
echo "$highest"
}
# Function to get highest number from git branches
get_highest_from_branches() {
local highest=0
# Get all branches (local and remote)
branches=$(git branch -a 2>/dev/null || echo "")
if [ -n "$branches" ]; then
while IFS= read -r branch; do
# Clean branch name: remove leading markers and remote prefixes
clean_branch=$(echo "$branch" | sed 's/^[* ]*//; s|^remotes/[^/]*/||')
# Extract feature number if branch matches pattern ###-*
if echo "$clean_branch" | grep -q '^[0-9]\{3\}-'; then
number=$(echo "$clean_branch" | grep -o '^[0-9]\{3\}' || echo "0")
number=$((10#$number))
if [ "$number" -gt "$highest" ]; then
highest=$number
fi
fi
done <<< "$branches"
fi
echo "$highest"
}
# Function to check existing branches (local and remote) and return next available number
check_existing_branches() {
local specs_dir="$1"
# Fetch all remotes to get latest branch info (suppress errors if no remotes)
git fetch --all --prune 2>/dev/null || true
# Get highest number from ALL branches (not just matching short name)
local highest_branch=$(get_highest_from_branches)
# Get highest number from ALL specs (not just matching short name)
local highest_spec=$(get_highest_from_specs "$specs_dir")
# Take the maximum of both
local max_num=$highest_branch
if [ "$highest_spec" -gt "$max_num" ]; then
max_num=$highest_spec
fi
# Return next number
echo $((max_num + 1))
}
# Function to clean and format a branch name
clean_branch_name() {
local name="$1"
echo "$name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
}
# Resolve repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories that
# were initialised with --no-git.
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if git rev-parse --show-toplevel >/dev/null 2>&1; then
REPO_ROOT=$(git rev-parse --show-toplevel)
HAS_GIT=true
else
REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
if [ -z "$REPO_ROOT" ]; then
echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
exit 1
fi
HAS_GIT=false
fi
cd "$REPO_ROOT"
SPECS_DIR="$REPO_ROOT/specs"
mkdir -p "$SPECS_DIR"
# Function to generate branch name with stop word filtering and length filtering
generate_branch_name() {
local description="$1"
# Common stop words to filter out
local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"
# Convert to lowercase and split into words
local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')
# Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original)
local meaningful_words=()
for word in $clean_name; do
# Skip empty words
[ -z "$word" ] && continue
# Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
if ! echo "$word" | grep -qiE "$stop_words"; then
if [ ${#word} -ge 3 ]; then
meaningful_words+=("$word")
elif echo "$description" | grep -q "\b${word^^}\b"; then
# Keep short words if they appear as uppercase in original (likely acronyms)
meaningful_words+=("$word")
fi
fi
done
# If we have meaningful words, use first 3-4 of them
if [ ${#meaningful_words[@]} -gt 0 ]; then
local max_words=3
if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi
local result=""
local count=0
for word in "${meaningful_words[@]}"; do
if [ $count -ge $max_words ]; then break; fi
if [ -n "$result" ]; then result="$result-"; fi
result="$result$word"
count=$((count + 1))
done
echo "$result"
else
# Fallback to original logic if no meaningful words found
local cleaned=$(clean_branch_name "$description")
echo "$cleaned" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
fi
}
# Generate branch name
if [ -n "$SHORT_NAME" ]; then
# Use provided short name, just clean it up
BRANCH_SUFFIX=$(clean_branch_name "$SHORT_NAME")
else
# Generate from description with smart filtering
BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
fi
# Determine branch number
if [ -z "$BRANCH_NUMBER" ]; then
if [ "$HAS_GIT" = true ]; then
# Check existing branches on remotes
BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR")
else
# Fall back to local directory check
HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
BRANCH_NUMBER=$((HIGHEST + 1))
fi
fi
# Force base-10 interpretation to prevent octal conversion (e.g., 010 → 8 in octal, but should be 10 in decimal)
FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))")
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
# GitHub enforces a 244-byte limit on branch names
# Validate and truncate if necessary
MAX_BRANCH_LENGTH=244
if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
# Calculate how much we need to trim from suffix
# Account for: feature number (3) + hyphen (1) = 4 chars
MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - 4))
# Truncate suffix at word boundary if possible
TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
# Remove trailing hyphen if truncation created one
TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')
ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"
>&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
>&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
>&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
fi
if [ "$HAS_GIT" = true ]; then
git checkout -b "$BRANCH_NAME"
else
>&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
fi
FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
mkdir -p "$FEATURE_DIR"
TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
SPEC_FILE="$FEATURE_DIR/spec.md"
if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi
# Set the SPECIFY_FEATURE environment variable for the current session
export SPECIFY_FEATURE="$BRANCH_NAME"
if $JSON_MODE; then
printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
else
echo "BRANCH_NAME: $BRANCH_NAME"
echo "SPEC_FILE: $SPEC_FILE"
echo "FEATURE_NUM: $FEATURE_NUM"
echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
fi

View File

@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -e
# Parse command line arguments
JSON_MODE=false
ARGS=()
for arg in "$@"; do
case "$arg" in
--json)
JSON_MODE=true
;;
--help|-h)
echo "Usage: $0 [--json]"
echo " --json Output results in JSON format"
echo " --help Show this help message"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
done
# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get all paths and variables from common functions
eval $(get_feature_paths)
# Check if we're on a proper feature branch (only for git repos)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# Ensure the feature directory exists
mkdir -p "$FEATURE_DIR"
# Copy plan template if it exists
TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md"
if [[ -f "$TEMPLATE" ]]; then
cp "$TEMPLATE" "$IMPL_PLAN"
echo "Copied plan template to $IMPL_PLAN"
else
echo "Warning: Plan template not found at $TEMPLATE"
# Create a basic plan file if template doesn't exist
touch "$IMPL_PLAN"
fi
# Output results
if $JSON_MODE; then
printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
"$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT"
else
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "SPECS_DIR: $FEATURE_DIR"
echo "BRANCH: $CURRENT_BRANCH"
echo "HAS_GIT: $HAS_GIT"
fi

View File

@@ -0,0 +1,799 @@
#!/usr/bin/env bash
# Update agent context files with information from plan.md
#
# This script maintains AI agent context files by parsing feature specifications
# and updating agent-specific configuration files with project information.
#
# MAIN FUNCTIONS:
# 1. Environment Validation
# - Verifies git repository structure and branch information
# - Checks for required plan.md files and templates
# - Validates file permissions and accessibility
#
# 2. Plan Data Extraction
# - Parses plan.md files to extract project metadata
# - Identifies language/version, frameworks, databases, and project types
# - Handles missing or incomplete specification data gracefully
#
# 3. Agent File Management
# - Creates new agent context files from templates when needed
# - Updates existing agent files with new project information
# - Preserves manual additions and custom configurations
# - Supports multiple AI agent formats and directory structures
#
# 4. Content Generation
# - Generates language-specific build/test commands
# - Creates appropriate project directory structures
# - Updates technology stacks and recent changes sections
# - Maintains consistent formatting and timestamps
#
# 5. Multi-Agent Support
# - Handles agent-specific file paths and naming conventions
# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, or Amazon Q Developer CLI
# - Can update single agents or all existing agent files
# - Creates default Claude file if no agent files exist
#
# Usage: ./update-agent-context.sh [agent_type]
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|bob|qoder
# Leave empty to update all existing agent files
set -e
# Enable strict error handling
set -u
set -o pipefail
#==============================================================================
# Configuration and Global Variables
#==============================================================================
# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get all paths and variables from common functions
eval $(get_feature_paths)
NEW_PLAN="$IMPL_PLAN" # Alias for compatibility with existing code
AGENT_TYPE="${1:-}"
# Agent-specific file paths
CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
GEMINI_FILE="$REPO_ROOT/GEMINI.md"
COPILOT_FILE="$REPO_ROOT/.github/agents/copilot-instructions.md"
CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
QWEN_FILE="$REPO_ROOT/QWEN.md"
AGENTS_FILE="$REPO_ROOT/AGENTS.md"
WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"
CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md"
QODER_FILE="$REPO_ROOT/QODER.md"
AMP_FILE="$REPO_ROOT/AGENTS.md"
SHAI_FILE="$REPO_ROOT/SHAI.md"
Q_FILE="$REPO_ROOT/AGENTS.md"
BOB_FILE="$REPO_ROOT/AGENTS.md"
# Template file
TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"
# Global variables for parsed plan data
NEW_LANG=""
NEW_FRAMEWORK=""
NEW_DB=""
NEW_PROJECT_TYPE=""
#==============================================================================
# Utility Functions
#==============================================================================
log_info() {
echo "INFO: $1"
}
log_success() {
echo "$1"
}
log_error() {
echo "ERROR: $1" >&2
}
log_warning() {
echo "WARNING: $1" >&2
}
# Cleanup function for temporary files
cleanup() {
local exit_code=$?
rm -f /tmp/agent_update_*_$$
rm -f /tmp/manual_additions_$$
exit $exit_code
}
# Set up cleanup trap
trap cleanup EXIT INT TERM
#==============================================================================
# Validation Functions
#==============================================================================
validate_environment() {
# Check if we have a current branch/feature (git or non-git)
if [[ -z "$CURRENT_BRANCH" ]]; then
log_error "Unable to determine current feature"
if [[ "$HAS_GIT" == "true" ]]; then
log_info "Make sure you're on a feature branch"
else
log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
fi
exit 1
fi
# Check if plan.md exists
if [[ ! -f "$NEW_PLAN" ]]; then
log_error "No plan.md found at $NEW_PLAN"
log_info "Make sure you're working on a feature with a corresponding spec directory"
if [[ "$HAS_GIT" != "true" ]]; then
log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first"
fi
exit 1
fi
# Check if template exists (needed for new files)
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_warning "Template file not found at $TEMPLATE_FILE"
log_warning "Creating new agent files will fail"
fi
}
#==============================================================================
# Plan Parsing Functions
#==============================================================================
extract_plan_field() {
local field_pattern="$1"
local plan_file="$2"
grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
head -1 | \
sed "s|^\*\*${field_pattern}\*\*: ||" | \
sed 's/^[ \t]*//;s/[ \t]*$//' | \
grep -v "NEEDS CLARIFICATION" | \
grep -v "^N/A$" || echo ""
}
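# Example (hypothetical plan.md line):
#   **Language/Version**: Python 3.12
# extract_plan_field "Language/Version" "$plan_file" would then return "Python 3.12".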
parse_plan_data() {
local plan_file="$1"
if [[ ! -f "$plan_file" ]]; then
log_error "Plan file not found: $plan_file"
return 1
fi
if [[ ! -r "$plan_file" ]]; then
log_error "Plan file is not readable: $plan_file"
return 1
fi
log_info "Parsing plan data from $plan_file"
NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
NEW_DB=$(extract_plan_field "Storage" "$plan_file")
NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")
# Log what we found
if [[ -n "$NEW_LANG" ]]; then
log_info "Found language: $NEW_LANG"
else
log_warning "No language information found in plan"
fi
if [[ -n "$NEW_FRAMEWORK" ]]; then
log_info "Found framework: $NEW_FRAMEWORK"
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
log_info "Found database: $NEW_DB"
fi
if [[ -n "$NEW_PROJECT_TYPE" ]]; then
log_info "Found project type: $NEW_PROJECT_TYPE"
fi
}
format_technology_stack() {
local lang="$1"
local framework="$2"
local parts=()
# Add non-empty parts
[[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
[[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")
# Join with proper formatting
if [[ ${#parts[@]} -eq 0 ]]; then
echo ""
elif [[ ${#parts[@]} -eq 1 ]]; then
echo "${parts[0]}"
else
# Join multiple parts with " + "
local result="${parts[0]}"
for ((i=1; i<${#parts[@]}; i++)); do
result="$result + ${parts[i]}"
done
echo "$result"
fi
}
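# Example (assumed values): format_technology_stack "Python 3.11" "FastAPI"
# prints "Python 3.11 + FastAPI"; with an empty framework it prints just
# "Python 3.11", and placeholder values never reach the output.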
#==============================================================================
# Template and Content Generation Functions
#==============================================================================
get_project_structure() {
local project_type="$1"
if [[ "$project_type" == *"web"* ]]; then
echo "backend/\\nfrontend/\\ntests/"
else
echo "src/\\ntests/"
fi
}
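# Example (assumed value): get_project_structure "web application" prints
# "backend/\nfrontend/\ntests/" - the literal \n markers are expanded into
# real newlines later in create_new_agent_file.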
get_commands_for_language() {
local lang="$1"
case "$lang" in
*"Python"*)
echo "cd src && pytest && ruff check ."
;;
*"Rust"*)
echo "cargo test && cargo clippy"
;;
*"JavaScript"*|*"TypeScript"*)
echo "npm test \\&\\& npm run lint"
;;
*)
echo "# Add commands for $lang"
;;
esac
}
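# Example (assumed value): get_commands_for_language "Python 3.11" matches the
# *"Python"* branch; the \& escapes keep the ampersands literal when the
# command string is injected via sed below.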
get_language_conventions() {
local lang="$1"
echo "$lang: Follow standard conventions"
}
create_new_agent_file() {
local target_file="$1"
local temp_file="$2"
local project_name="$3"
local current_date="$4"
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_error "Template not found at $TEMPLATE_FILE"
return 1
fi
if [[ ! -r "$TEMPLATE_FILE" ]]; then
log_error "Template file is not readable: $TEMPLATE_FILE"
return 1
fi
log_info "Creating new agent context file from template..."
if ! cp "$TEMPLATE_FILE" "$temp_file"; then
log_error "Failed to copy template file"
return 1
fi
# Replace template placeholders
local project_structure
project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")
local commands
commands=$(get_commands_for_language "$NEW_LANG")
local language_conventions
language_conventions=$(get_language_conventions "$NEW_LANG")
# Perform substitutions with error checking using a safer approach:
# escape characters that sed treats specially, including & (which would
# otherwise expand to the matched pattern in the replacement text)
local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|&]/\\&/g')
local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|&]/\\&/g')
local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|&]/\\&/g')
# Build technology stack and recent change strings conditionally
local tech_stack
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)"
elif [[ -n "$escaped_lang" ]]; then
tech_stack="- $escaped_lang ($escaped_branch)"
elif [[ -n "$escaped_framework" ]]; then
tech_stack="- $escaped_framework ($escaped_branch)"
else
tech_stack="- ($escaped_branch)"
fi
local recent_change
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework"
elif [[ -n "$escaped_lang" ]]; then
recent_change="- $escaped_branch: Added $escaped_lang"
elif [[ -n "$escaped_framework" ]]; then
recent_change="- $escaped_branch: Added $escaped_framework"
else
recent_change="- $escaped_branch: Added"
fi
local substitutions=(
"s|\[PROJECT NAME\]|$project_name|"
"s|\[DATE\]|$current_date|"
"s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
"s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
"s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
"s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
"s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
)
for substitution in "${substitutions[@]}"; do
if ! sed -i.bak -e "$substitution" "$temp_file"; then
log_error "Failed to perform substitution: $substitution"
rm -f "$temp_file" "$temp_file.bak"
return 1
fi
done
# Convert \n sequences to actual newlines
# Note: newline=$(printf '\n') would be empty, since command substitution
# strips trailing newlines; $'\n' assigns a real newline character
newline=$'\n'
sed -i.bak2 "s/\\\\n/\\${newline}/g" "$temp_file"
# Clean up backup files
rm -f "$temp_file.bak" "$temp_file.bak2"
return 0
}
update_existing_agent_file() {
local target_file="$1"
local current_date="$2"
log_info "Updating existing agent context file..."
# Use a single temporary file for atomic update
local temp_file
temp_file=$(mktemp) || {
log_error "Failed to create temporary file"
return 1
}
# Process the file in one pass
local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
local new_tech_entries=()
local new_change_entry=""
# Prepare new technology entries (grep -F matches literally, so tech names
# containing regex metacharacters such as "C++" are handled correctly)
if [[ -n "$tech_stack" ]] && ! grep -qF "$tech_stack" "$target_file"; then
new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -qF "$NEW_DB" "$target_file"; then
new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
fi
# Prepare new change entry
if [[ -n "$tech_stack" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
fi
# Check if sections exist in the file
local has_active_technologies=0
local has_recent_changes=0
if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then
has_active_technologies=1
fi
if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then
has_recent_changes=1
fi
# Process file line by line
local in_tech_section=false
local in_changes_section=false
local tech_entries_added=false
local changes_entries_added=false
local existing_changes_count=0
local file_ended=false
while IFS= read -r line || [[ -n "$line" ]]; do
# Handle Active Technologies section
if [[ "$line" == "## Active Technologies" ]]; then
echo "$line" >> "$temp_file"
in_tech_section=true
continue
elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
# Add new tech entries before closing the section
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
echo "$line" >> "$temp_file"
in_tech_section=false
continue
elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then
# Add new tech entries before empty line in tech section
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
echo "$line" >> "$temp_file"
continue
fi
# Handle Recent Changes section
if [[ "$line" == "## Recent Changes" ]]; then
echo "$line" >> "$temp_file"
# Add new change entry right after the heading
if [[ -n "$new_change_entry" ]]; then
echo "$new_change_entry" >> "$temp_file"
fi
in_changes_section=true
changes_entries_added=true
continue
elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
echo "$line" >> "$temp_file"
in_changes_section=false
continue
elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then
# Keep only first 2 existing changes
if [[ $existing_changes_count -lt 2 ]]; then
echo "$line" >> "$temp_file"
((existing_changes_count++))
fi
continue
fi
# Update timestamp
if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
else
echo "$line" >> "$temp_file"
fi
done < "$target_file"
# Post-loop check: if we're still in the Active Technologies section and haven't added new entries
if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
# If sections don't exist, add them at the end of the file
if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
echo "" >> "$temp_file"
echo "## Active Technologies" >> "$temp_file"
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then
echo "" >> "$temp_file"
echo "## Recent Changes" >> "$temp_file"
echo "$new_change_entry" >> "$temp_file"
changes_entries_added=true
fi
# Move temp file to target atomically
if ! mv "$temp_file" "$target_file"; then
log_error "Failed to update target file"
rm -f "$temp_file"
return 1
fi
return 0
}
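# Example (assumed file state): for an existing file with an
# "## Active Technologies" section, a run with NEW_LANG="Go 1.22" appends
# "- Go 1.22 ($CURRENT_BRANCH)" once (grep -F guards against duplicates),
# and "## Recent Changes" is trimmed to the new entry plus the two most
# recent existing ones.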
#==============================================================================
# Main Agent File Update Function
#==============================================================================
update_agent_file() {
local target_file="$1"
local agent_name="$2"
if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
log_error "update_agent_file requires target_file and agent_name parameters"
return 1
fi
log_info "Updating $agent_name context file: $target_file"
local project_name
project_name=$(basename "$REPO_ROOT")
local current_date
current_date=$(date +%Y-%m-%d)
# Create directory if it doesn't exist
local target_dir
target_dir=$(dirname "$target_file")
if [[ ! -d "$target_dir" ]]; then
if ! mkdir -p "$target_dir"; then
log_error "Failed to create directory: $target_dir"
return 1
fi
fi
if [[ ! -f "$target_file" ]]; then
# Create new file from template
local temp_file
temp_file=$(mktemp) || {
log_error "Failed to create temporary file"
return 1
}
if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
if mv "$temp_file" "$target_file"; then
log_success "Created new $agent_name context file"
else
log_error "Failed to move temporary file to $target_file"
rm -f "$temp_file"
return 1
fi
else
log_error "Failed to create new agent file"
rm -f "$temp_file"
return 1
fi
else
# Update existing file
if [[ ! -r "$target_file" ]]; then
log_error "Cannot read existing file: $target_file"
return 1
fi
if [[ ! -w "$target_file" ]]; then
log_error "Cannot write to existing file: $target_file"
return 1
fi
if update_existing_agent_file "$target_file" "$current_date"; then
log_success "Updated existing $agent_name context file"
else
log_error "Failed to update existing agent file"
return 1
fi
fi
return 0
}
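# Example (hypothetical invocation): update_agent_file "$CLAUDE_FILE" "Claude Code"
# creates CLAUDE.md from the template on first run and performs an in-place
# section update on subsequent runs.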
#==============================================================================
# Agent Selection and Processing
#==============================================================================
update_specific_agent() {
local agent_type="$1"
case "$agent_type" in
claude)
update_agent_file "$CLAUDE_FILE" "Claude Code"
;;
gemini)
update_agent_file "$GEMINI_FILE" "Gemini CLI"
;;
copilot)
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
;;
cursor-agent)
update_agent_file "$CURSOR_FILE" "Cursor IDE"
;;
qwen)
update_agent_file "$QWEN_FILE" "Qwen Code"
;;
opencode)
update_agent_file "$AGENTS_FILE" "opencode"
;;
codex)
update_agent_file "$AGENTS_FILE" "Codex CLI"
;;
windsurf)
update_agent_file "$WINDSURF_FILE" "Windsurf"
;;
kilocode)
update_agent_file "$KILOCODE_FILE" "Kilo Code"
;;
auggie)
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
;;
roo)
update_agent_file "$ROO_FILE" "Roo Code"
;;
codebuddy)
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
;;
qoder)
update_agent_file "$QODER_FILE" "Qoder CLI"
;;
amp)
update_agent_file "$AMP_FILE" "Amp"
;;
shai)
update_agent_file "$SHAI_FILE" "SHAI"
;;
q)
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
;;
bob)
update_agent_file "$BOB_FILE" "IBM Bob"
;;
*)
log_error "Unknown agent type '$agent_type'"
log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|amp|shai|q|bob|qoder"
exit 1
;;
esac
}
update_all_existing_agents() {
local found_agent=false
# Check each possible agent file and update if it exists
if [[ -f "$CLAUDE_FILE" ]]; then
update_agent_file "$CLAUDE_FILE" "Claude Code"
found_agent=true
fi
if [[ -f "$GEMINI_FILE" ]]; then
update_agent_file "$GEMINI_FILE" "Gemini CLI"
found_agent=true
fi
if [[ -f "$COPILOT_FILE" ]]; then
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
found_agent=true
fi
if [[ -f "$CURSOR_FILE" ]]; then
update_agent_file "$CURSOR_FILE" "Cursor IDE"
found_agent=true
fi
if [[ -f "$QWEN_FILE" ]]; then
update_agent_file "$QWEN_FILE" "Qwen Code"
found_agent=true
fi
if [[ -f "$AGENTS_FILE" ]]; then
update_agent_file "$AGENTS_FILE" "Codex/opencode"
found_agent=true
fi
if [[ -f "$WINDSURF_FILE" ]]; then
update_agent_file "$WINDSURF_FILE" "Windsurf"
found_agent=true
fi
if [[ -f "$KILOCODE_FILE" ]]; then
update_agent_file "$KILOCODE_FILE" "Kilo Code"
found_agent=true
fi
if [[ -f "$AUGGIE_FILE" ]]; then
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
found_agent=true
fi
if [[ -f "$ROO_FILE" ]]; then
update_agent_file "$ROO_FILE" "Roo Code"
found_agent=true
fi
if [[ -f "$CODEBUDDY_FILE" ]]; then
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
found_agent=true
fi
if [[ -f "$SHAI_FILE" ]]; then
update_agent_file "$SHAI_FILE" "SHAI"
found_agent=true
fi
if [[ -f "$QODER_FILE" ]]; then
update_agent_file "$QODER_FILE" "Qoder CLI"
found_agent=true
fi
if [[ -f "$Q_FILE" ]]; then
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
found_agent=true
fi
if [[ -f "$BOB_FILE" ]]; then
update_agent_file "$BOB_FILE" "IBM Bob"
found_agent=true
fi
# If no agent files exist, create a default Claude file
if [[ "$found_agent" == false ]]; then
log_info "No existing agent files found, creating default Claude file..."
update_agent_file "$CLAUDE_FILE" "Claude Code"
fi
}
print_summary() {
echo
log_info "Summary of changes:"
if [[ -n "$NEW_LANG" ]]; then
echo " - Added language: $NEW_LANG"
fi
if [[ -n "$NEW_FRAMEWORK" ]]; then
echo " - Added framework: $NEW_FRAMEWORK"
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
echo " - Added database: $NEW_DB"
fi
echo
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|codebuddy|shai|q|bob|qoder]"
}
#==============================================================================
# Main Execution
#==============================================================================
main() {
# Validate environment before proceeding
validate_environment
log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
# Parse the plan file to extract project information
if ! parse_plan_data "$NEW_PLAN"; then
log_error "Failed to parse plan data"
exit 1
fi
# Process based on agent type argument
local success=true
if [[ -z "$AGENT_TYPE" ]]; then
# No specific agent provided - update all existing agent files
log_info "No agent specified, updating all existing agent files..."
if ! update_all_existing_agents; then
success=false
fi
else
# Specific agent provided - update only that agent
log_info "Updating specific agent: $AGENT_TYPE"
if ! update_specific_agent "$AGENT_TYPE"; then
success=false
fi
fi
# Print summary
print_summary
if [[ "$success" == true ]]; then
log_success "Agent context update completed successfully"
exit 0
else
log_error "Agent context update completed with errors"
exit 1
fi
}
# Execute main function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi

View File

@@ -0,0 +1,28 @@
# [PROJECT NAME] Development Guidelines
Auto-generated from all feature plans. Last updated: [DATE]
## Active Technologies
[EXTRACTED FROM ALL PLAN.MD FILES]
## Project Structure
```text
[ACTUAL STRUCTURE FROM PLANS]
```
## Commands
[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]
## Code Style
[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]
## Recent Changes
[LAST 3 FEATURES AND WHAT THEY ADDED]
<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->

View File

@@ -0,0 +1,40 @@
# [CHECKLIST TYPE] Checklist: [FEATURE NAME]
**Purpose**: [Brief description of what this checklist covers]
**Created**: [DATE]
**Feature**: [Link to spec.md or relevant documentation]
**Note**: This checklist is generated by the `/speckit.checklist` command based on feature context and requirements.
<!--
============================================================================
IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.
The /speckit.checklist command MUST replace these with actual items based on:
- User's specific checklist request
- Feature requirements from spec.md
- Technical context from plan.md
- Implementation details from tasks.md
DO NOT keep these sample items in the generated checklist file.
============================================================================
-->
## [Category 1]
- [ ] CHK001 First checklist item with clear action
- [ ] CHK002 Second checklist item
- [ ] CHK003 Third checklist item
## [Category 2]
- [ ] CHK004 Another category item
- [ ] CHK005 Item with specific criteria
- [ ] CHK006 Final item in this category
## Notes
- Check items off as completed: `[x]`
- Add comments or findings inline
- Link to relevant resources or documentation
- Items are numbered sequentially for easy reference

View File

@@ -0,0 +1,104 @@
# Implementation Plan: [FEATURE]
**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
## Summary
[Extract from feature spec: primary requirement + technical approach from research]
## Technical Context
<!--
ACTION REQUIRED: Replace the content in this section with the technical details
for the project. The structure here is presented in an advisory capacity to guide
the iteration process.
-->
**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [single/web/mobile - determines source structure]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
[Gates determined based on constitution file]
## Project Structure
### Documentation (this feature)
```text
specs/[###-feature]/
├── plan.md # This file (/speckit.plan command output)
├── research.md # Phase 0 output (/speckit.plan command)
├── data-model.md # Phase 1 output (/speckit.plan command)
├── quickstart.md # Phase 1 output (/speckit.plan command)
├── contracts/ # Phase 1 output (/speckit.plan command)
└── tasks.md # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```
### Source Code (repository root)
<!--
ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
for this feature. Delete unused options and expand the chosen structure with
real paths (e.g., apps/admin, packages/something). The delivered plan must
not include Option labels.
-->
```text
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
src/
├── models/
├── services/
├── cli/
└── lib/
tests/
├── contract/
├── integration/
└── unit/
# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
backend/
├── src/
│ ├── models/
│ ├── services/
│ └── api/
└── tests/
frontend/
├── src/
│ ├── components/
│ ├── pages/
│ └── services/
└── tests/
# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
api/
└── [same as backend above]
ios/ or android/
└── [platform-specific structure: feature modules, UI flows, platform tests]
```
**Structure Decision**: [Document the selected structure and reference the real
directories captured above]
## Complexity Tracking
> **Fill ONLY if Constitution Check has violations that must be justified**
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |

View File

@@ -0,0 +1,115 @@
# Feature Specification: [FEATURE NAME]
**Feature Branch**: `[###-feature-name]`
**Created**: [DATE]
**Status**: Draft
**Input**: User description: "$ARGUMENTS"
## User Scenarios & Testing *(mandatory)*
<!--
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
you should still have a viable MVP (Minimum Viable Product) that delivers value.
Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
Think of each story as a standalone slice of functionality that can be:
- Developed independently
- Tested independently
- Deployed independently
- Demonstrated to users independently
-->
### User Story 1 - [Brief Title] (Priority: P1)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
2. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
### User Story 2 - [Brief Title] (Priority: P2)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
### User Story 3 - [Brief Title] (Priority: P3)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
[Add more user stories as needed, each with an assigned priority]
### Edge Cases
<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right edge cases.
-->
- What happens when [boundary condition]?
- How does system handle [error scenario]?
## Requirements *(mandatory)*
<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right functional requirements.
-->
### Functional Requirements
- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]
*Example of marking unclear requirements:*
- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]
### Key Entities *(include if feature involves data)*
- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]
## Success Criteria *(mandatory)*
<!--
ACTION REQUIRED: Define measurable success criteria.
These must be technology-agnostic and measurable.
-->
### Measurable Outcomes
- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]

View File

@@ -0,0 +1,251 @@
---
description: "Task list template for feature implementation"
---
# Tasks: [FEATURE NAME]
**Input**: Design documents from `/specs/[###-feature-name]/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/
**Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
## Format: `[ID] [P?] [Story] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions
## Path Conventions
- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume single project - adjust based on plan.md structure
<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.
The /speckit.tasks command MUST replace these with actual tasks based on:
- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/
Tasks MUST be organized by user story so each story can be:
- Implemented independently
- Tested independently
- Delivered as an MVP increment
DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->
## Phase 1: Setup (Shared Infrastructure)
**Purpose**: Project initialization and basic structure
- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize [language] project with [framework] dependencies
- [ ] T003 [P] Configure linting and formatting tools
---
## Phase 2: Foundational (Blocking Prerequisites)
**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
**⚠️ CRITICAL**: No user story work can begin until this phase is complete
Examples of foundational tasks (adjust based on your project):
- [ ] T004 Setup database schema and migrations framework
- [ ] T005 [P] Implement authentication/authorization framework
- [ ] T006 [P] Setup API routing and middleware structure
- [ ] T007 Create base models/entities that all stories depend on
- [ ] T008 Configure error handling and logging infrastructure
- [ ] T009 Setup environment configuration management
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel
---
## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 1 (OPTIONAL - only if tests requested) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 1
- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T016 [US1] Add validation and error handling
- [ ] T017 [US1] Add logging for user story 1 operations
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
---
## Phase 4: User Story 2 - [Title] (Priority: P2)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️
- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 2
- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
- [ ] T021 [US2] Implement [Service] in src/services/[service].py
- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T023 [US2] Integrate with User Story 1 components (if needed)
**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
---
## Phase 5: User Story 3 - [Title] (Priority: P3)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️
- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 3
- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
- [ ] T027 [US3] Implement [Service] in src/services/[service].py
- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py
**Checkpoint**: All user stories should now be independently functional
---
[Add more user story phases as needed, following the same pattern]
---
## Phase N: Polish & Cross-Cutting Concerns
**Purpose**: Improvements that affect multiple user stories
- [ ] TXXX [P] Documentation updates in docs/
- [ ] TXXX Code cleanup and refactoring
- [ ] TXXX Performance optimization across all stories
- [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/
- [ ] TXXX Security hardening
- [ ] TXXX Run quickstart.md validation
---
## Dependencies & Execution Order
### Phase Dependencies
- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
- User stories can then proceed in parallel (if staffed)
- Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete
### User Story Dependencies
- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable
### Within Each User Story
- Tests (if included) MUST be written and FAIL before implementation
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority
### Parallel Opportunities
- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- All tests for a user story marked [P] can run in parallel
- Models within a story marked [P] can run in parallel
- Different user stories can be worked on in parallel by different team members
---
## Parallel Example: User Story 1
```bash
# Launch all tests for User Story 1 together (if tests requested):
Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
Task: "Integration test for [user journey] in tests/integration/test_[name].py"
# Launch all models for User Story 1 together:
Task: "Create [Entity1] model in src/models/[entity1].py"
Task: "Create [Entity2] model in src/models/[entity2].py"
```
---
## Implementation Strategy
### MVP First (User Story 1 Only)
1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready
### Incremental Delivery
1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories
### Parallel Team Strategy
With multiple developers:
1. Team completes Setup + Foundational together
2. Once Foundational is done:
- Developer A: User Story 1
- Developer B: User Story 2
- Developer C: User Story 3
3. Stories complete and integrate independently
---
## Notes
- [P] tasks = different files, no dependencies
- [Story] label maps task to specific user story for traceability
- Each user story should be independently completable and testable
- Verify tests fail before implementing
- Commit after each task or logical group
- Stop at any checkpoint to validate story independently
- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence

265
GEMINI.md
View File

@@ -1,265 +0,0 @@
<SYSTEM_PROMPT>
<ROLE_DEFINITION>
<ROLE>AI Assistant: "Semantics Architect"</ROLE>
<EXPERTISE>Python, System Design, Mechanistic Interpretability of LLMs</EXPERTISE>
<CORE_DIRECTIVE>
Your task is not merely to write code, but to design and generate semantically coherent, reliable, and maintainable software systems by following a strict engineering protocol. Your output is not a dialogue but a structured, machine-readable artifact.
</CORE_DIRECTIVE>
<KEY_GPT_PRINCIPLES>
<!-- Your work rests on these fundamental principles of your own architecture -->
<PRINCIPLE name="Causal Attention">Information is processed sequentially; order is law. All context must precede the instructions.</PRINCIPLE>
<PRINCIPLE name="KV Cache Freezing">Once formed, a semantic context becomes a stable, immutable foundation. There is no "rethinking"; there is only building on the base already created.</PRINCIPLE>
<PRINCIPLE name="Sparse Attention Navigation">You use semantic graphs and anchors to navigate large contexts efficiently.</PRINCIPLE>
</KEY_GPT_PRINCIPLES>
</ROLE_DEFINITION>
<WORK_PHILOSOPHY>
<PHILOSOPHY name="Against the 'Semantic Casino'">
Your main goal is to avoid probabilistic, "most plausible" guesses. You achieve this by building a complete semantic model of the task *before* generating a solution, replacing randomness with engineering certainty.
</PHILOSOPHY>
<PHILOSOPHY name="Fractal Coherence">
Your result is a "semantic fractal". The structure of the specification must cascade into the structure of modules, classes, and functions. 100% semantic coherence is your main quality criterion.
</PHILOSOPHY>
<PHILOSOPHY name="Superposition for Planning">
For complex architectural decisions you must analyze and hold several candidate options in "superposition". You "collapse" to a single option only after a thorough analysis or on an explicit user command.
</PHILOSOPHY>
</WORK_PHILOSOPHY>
<PROJECT_MAP>
<FILE_NAME>PROJECT_SEMANTICS.xml</FILE_NAME>
<PURPOSE>
This file is the Single Source of Truth for the semantic structure of the whole project. It serves both as a map for your navigation and as persistent storage for the semantic graph. You must load it at the start of every session and update it at the end.
</PURPOSE>
<STRUCTURE>
```xml
<PROJECT_SEMANTICS>
<METADATA>
<VERSION>1.0</VERSION>
<LAST_UPDATED>2023-10-27T10:00:00Z</LAST_UPDATED>
</METADATA>
<STRUCTURE_MAP>
<!-- Description of the file structure and the entities inside it -->
<MODULE path="utils/file_handler.py" id="mod_file_handler">
<PURPOSE>Module for JSON file operations.</PURPOSE>
<ENTITY type="Function" name="read_json_data" id="func_read_json"/>
<ENTITY type="Function" name="write_json_data" id="func_write_json"/>
</MODULE>
<!-- ... other modules ... -->
</STRUCTURE_MAP>
<SEMANTIC_GRAPH>
<!-- Global graph linking all project entities -->
<NODE id="mod_file_handler" type="Module" label="Module for JSON file operations."/>
<NODE id="func_read_json" type="Function" label="Reads data from a JSON file."/>
<NODE id="func_write_json" type="Function" label="Writes data to a JSON file."/>
<EDGE source_id="mod_file_handler" target_id="func_read_json" relation="CONTAINS"/>
<EDGE source_id="mod_file_handler" target_id="func_write_json" relation="CONTAINS"/>
<!-- ... other nodes and edges ... -->
</SEMANTIC_GRAPH>
</PROJECT_SEMANTICS>
```
</STRUCTURE>
</PROJECT_MAP>
<METHODOLOGY name="Multi-Phase Generation Protocol">
<!-- [NEW PHASE] Added a phase for loading the project context -->
<PHASE number="0" name="Sync with Project Context">
<ACTION>Find and load the `<PROJECT_MAP>` file. If the file is not found, create its initial structure in memory. This context is the foundation for all subsequent phases.</ACTION>
</PHASE>
<!-- [CHANGED] Phase 1 now updates the existing graph -->
<PHASE number="1" name="Analysis and Graph Update">
<ACTION>Analyze the `<USER_REQUEST>` in the context of the loaded project map. Extract new/changed entities and relations. Update the global `<SEMANTIC_GRAPH>` and emit it in `<PLANNING_LOG>`. Ask clarifying questions to validate the architecture.</ACTION>
</PHASE>
<PHASE number="2" name="Contract-Driven Design">
<ACTION>Based on the updated graph, refine the architecture. For every new or modified module/function, create its "pre-contract" and emit it in a `<CONTRACT>` tag inside `<PLANNING_LOG>`.</ACTION>
</PHASE>
<!-- [CHANGED] Phase 3 now generates both the code and the updated project map -->
<PHASE number="3" name="Coherent Code and Map Generation">
<ACTION>From the approved contracts, generate code that strictly follows `<CODING_STANDARDS>`. Place all code in `<CODE_CHANGESET>`. At the same time, generate the final version of the `<PROJECT_MAP>` file and place it in the `<PROJECT_SEMANTICS_UPDATE>` tag.</ACTION>
</PHASE>
<PHASE number="4" name="Self-Correction and Validation">
<ACTION>Before finishing, self-review the generated code and map for consistency with the graph and contracts. On any mismatch, activate the [COHERENCE_CHECK_FAILED] anchor and return to Phase 3 for regeneration.</ACTION>
</PHASE>
</METHODOLOGY>
<CODING_STANDARDS name="AI-Friendly Practices">
<PRINCIPLE name="Semantics Above All">Code is secondary to its semantic description. All code must be framed by contracts and anchors.</PRINCIPLE>
<SEMANTIC_MARKUP>
<DESIGN_BY_CONTRACT_DbC>
<PRINCIPLE>A contract is your "semantic shield", guaranteeing predictability and reliability.</PRINCIPLE>
<PLACEMENT>All contracts must be "pre-contracts", i.e. placed *before* the `def` or `class` declaration.</PLACEMENT>
<CONTRACT_STRUCTURE>
# CONTRACT:
# PURPOSE: [What the function/class does]
# SPECIFICATION_LINK: [ID from the spec or the graph]
# PRECONDITIONS: [Preconditions]
# POSTCONDITIONS: [Postconditions]
# PARAMETERS: [Parameter descriptions]
# RETURN: [Return value description]
# TEST_CASES: [Usage examples]
# EXCEPTIONS: [Error handling]
</CONTRACT_STRUCTURE>
</DESIGN_BY_CONTRACT_DbC>
<ANCHORS>
<CLOSING_ANCHORS placement="After_Code">
<DESCRIPTION>Every module, class, and function MUST have a closing anchor (e.g. `# END_FUNCTION_my_func`) to accumulate semantics.</DESCRIPTION>
</CLOSING_ANCHORS>
<SEMANTIC_CHANNELS>
<DESCRIPTION>Use consistent names across contracts, declarations, and anchors to create clean semantic channels.</DESCRIPTION>
</SEMANTIC_CHANNELS>
</ANCHORS>
</SEMANTIC_MARKUP>
<LOGGING standard="AI-Friendly Logging">
<GOAL>Logging is your self-reflection mechanism and the declaration of your `belief state`.</GOAL>
<FORMAT>`logger.level('[LEVEL][ANCHOR_NAME][STATE] Message')`</FORMAT>
</LOGGING>
</CODING_STANDARDS>
<!-- [CHANGED] The example was fully reworked to demonstrate updating a project -->
<FEW_SHOT_EXAMPLES>
<EXAMPLE name="Adding functionality to an existing file manager">
<USER_REQUEST>
<GOAL>Add a file-deletion function to the existing `file_handler.py` module.</GOAL>
<CONTEXT>
- The new function must be named `delete_file`.
- It must take a file path.
- It must safely handle the case where the file does not exist (FileNotFoundError).
- It must report success or failure via the logger.
</CONTEXT>
<!-- [NEW] The request now carries the current project state -->
<EXISTING_PROJECT_STATE>
<FILE path="PROJECT_SEMANTICS.xml">
<PROJECT_SEMANTICS>
<METADATA>
<VERSION>1.0</VERSION>
<LAST_UPDATED>2023-10-26T18:00:00Z</LAST_UPDATED>
</METADATA>
<STRUCTURE_MAP>
<MODULE path="utils/file_handler.py" id="mod_file_handler">
<PURPOSE>Module for JSON file operations.</PURPOSE>
<ENTITY type="Function" name="read_json_data" id="func_read_json"/>
<ENTITY type="Function" name="write_json_data" id="func_write_json"/>
</MODULE>
</STRUCTURE_MAP>
<SEMANTIC_GRAPH>
<NODE id="mod_file_handler" type="Module" label="Module for JSON file operations."/>
<NODE id="func_read_json" type="Function" label="Reads data from a JSON file."/>
<NODE id="func_write_json" type="Function" label="Writes data to a JSON file."/>
<EDGE source_id="mod_file_handler" target_id="func_read_json" relation="CONTAINS"/>
<EDGE source_id="mod_file_handler" target_id="func_write_json" relation="CONTAINS"/>
</SEMANTIC_GRAPH>
</PROJECT_SEMANTICS>
</FILE>
</EXISTING_PROJECT_STATE>
</USER_REQUEST>
<AI_RESPONSE>
<PLANNING_LOG>
**Phase 0: Sync with Project Context**
Found and loaded `PROJECT_SEMANTICS.xml`. Current context: module `mod_file_handler` with two functions (`read_json_data`, `write_json_data`).
**Phase 1: Analysis and Graph Update**
Analyzing the request to add a `delete_file` function. A new node must be added to the graph and linked to the existing module.
**Updated Semantic Graph:**
<SEMANTIC_GRAPH id="json_utils_graph_v2">
<!-- Existing nodes -->
<NODE id="mod_file_handler" type="Module">Module for JSON file operations.</NODE>
<NODE id="func_read_json" type="Function">Reads data from a JSON file.</NODE>
<NODE id="func_write_json" type="Function">Writes data to a JSON file.</NODE>
<!-- New node -->
<NODE id="func_delete_file" type="Function">Safely deletes a file.</NODE>
<!-- Existing edges -->
<EDGE source_id="mod_file_handler" target_id="func_read_json" relation="CONTAINS"/>
<EDGE source_id="mod_file_handler" target_id="func_write_json" relation="CONTAINS"/>
<!-- New edge -->
<EDGE source_id="mod_file_handler" target_id="func_delete_file" relation="CONTAINS"/>
</SEMANTIC_GRAPH>
**Phase 2: Contract-Driven Design**
Graph updated. Designing the contract for the new `delete_file` function.
**Contract for `delete_file`:**
<CONTRACT for_function="delete_file">
<PURPOSE>Safely deletes the file at the given path.</PURPOSE>
<PRECONDITIONS>`file_path` is a string.</PRECONDITIONS>
<POSTCONDITIONS>The file at `file_path` has been removed, if it existed.</POSTCONDITIONS>
<PARAMETERS>
<PARAM name="file_path" type="str">Path of the file to delete.</PARAM>
</PARAMETERS>
<RETURN type="None"></RETURN>
<EXCEPTIONS>Handles `FileNotFoundError` without aborting execution.</EXCEPTIONS>
</CONTRACT>
**Phase 3: Coherent Code and Map Generation**
Contract defined. Generating the code to add to `utils/file_handler.py` and the updated `PROJECT_SEMANTICS.xml`.
</PLANNING_LOG>
<!-- [NEW OUTPUT TAG] The updated project map -->
<PROJECT_SEMANTICS_UPDATE file_path="PROJECT_SEMANTICS.xml">
<PROJECT_SEMANTICS>
<METADATA>
<VERSION>1.1</VERSION>
<!-- Generation time of this response -->
<LAST_UPDATED>2023-10-27T10:30:00Z</LAST_UPDATED>
</METADATA>
<STRUCTURE_MAP>
<MODULE path="utils/file_handler.py" id="mod_file_handler">
<PURPOSE>Module for operations on JSON and other file types.</PURPOSE>
<ENTITY type="Function" name="read_json_data" id="func_read_json"/>
<ENTITY type="Function" name="write_json_data" id="func_write_json"/>
<ENTITY type="Function" name="delete_file" id="func_delete_file"/>
</MODULE>
</STRUCTURE_MAP>
<SEMANTIC_GRAPH>
<NODE id="mod_file_handler" type="Module" label="Module for operations on JSON and other file types."/>
<NODE id="func_read_json" type="Function" label="Reads data from a JSON file."/>
<NODE id="func_write_json" type="Function" label="Writes data to a JSON file."/>
<NODE id="func_delete_file" type="Function" label="Safely deletes a file."/>
<EDGE source_id="mod_file_handler" target_id="func_read_json" relation="CONTAINS"/>
<EDGE source_id="mod_file_handler" target_id="func_write_json" relation="CONTAINS"/>
<EDGE source_id="mod_file_handler" target_id="func_delete_file" relation="CONTAINS"/>
</SEMANTIC_GRAPH>
</PROJECT_SEMANTICS>
</PROJECT_SEMANTICS_UPDATE>
<CODE_CHANGESET>
<!-- [CHANGED] MODIFIED_FILE/APPEND is used instead of NEW_FILE -->
<MODIFIED_FILE file_path="utils/file_handler.py">
<APPEND>
# [ENTITY: Function('delete_file')]
# CONTRACT:
# PURPOSE: Safely deletes the file at the given path.
# SPECIFICATION_LINK: func_delete_file
# PRECONDITIONS: `file_path` is a string.
# POSTCONDITIONS: The file at `file_path` has been removed, if it existed.
# PARAMETERS:
# - name: file_path, type: str, description: Path of the file to delete.
# RETURN: type: None
# EXCEPTIONS: Handles FileNotFoundError without aborting execution.
def delete_file(file_path: str) -> None:
    logger.debug(f"[DEBUG][delete_file][ENTER] Attempting to delete file: {file_path}")
    try:
        os.remove(file_path)
        logger.info(f"[INFO][delete_file][SUCCESS] Successfully deleted file: {file_path}")
    except FileNotFoundError:
        logger.warning(f"[WARN][delete_file][NOT_FOUND] File not found, nothing to delete: {file_path}")
# END_FUNCTION_delete_file
</APPEND>
<PREPEND>
import os
</PREPEND>
</MODIFIED_FILE>
</CODE_CHANGESET>
</AI_RESPONSE>
</EXAMPLE>
</FEW_SHOT_EXAMPLES>
<METACOGNITION>
<DIRECTIVE>If you find this system prompt insufficient or ambiguous for the task at hand, you must note this in `<PLANNING_LOG>` and may propose improvements to your own instructions for future sessions.</DIRECTIVE>
</METACOGNITION>
</SYSTEM_PROMPT>

View File

@@ -1,116 +0,0 @@
<PROJECT_SEMANTICS>
<METADATA>
<VERSION>1.0</VERSION>
<LAST_UPDATED>2025-08-16T10:00:00Z</LAST_UPDATED>
</METADATA>
<STRUCTURE_MAP>
<MODULE path="backup_script.py" id="mod_backup_script">
<PURPOSE>Script for backing up dashboards and charts from Superset.</PURPOSE>
</MODULE>
<MODULE path="migration_script.py" id="mod_migration_script">
<PURPOSE>Interactive script for migrating Superset assets between environments.</PURPOSE>
<ENTITY type="Class" name="Migration" id="class_migration"/>
<ENTITY type="Function" name="run" id="func_run_migration"/>
<ENTITY type="Function" name="select_environments" id="func_select_environments"/>
<ENTITY type="Function" name="select_dashboards" id="func_select_dashboards"/>
<ENTITY type="Function" name="confirm_db_config_replacement" id="func_confirm_db_config_replacement"/>
<ENTITY type="Function" name="execute_migration" id="func_execute_migration"/>
</MODULE>
<MODULE path="search_script.py" id="mod_search_script">
<PURPOSE>Script for searching assets in Superset.</PURPOSE>
</MODULE>
<MODULE path="temp_pylint_runner.py" id="mod_temp_pylint_runner">
<PURPOSE>Temporary script for running Pylint.</PURPOSE>
</MODULE>
<MODULE path="superset_tool/" id="mod_superset_tool">
<PURPOSE>Package for interacting with the Superset API.</PURPOSE>
<ENTITY type="Module" name="client.py" id="mod_client"/>
<ENTITY type="Module" name="exceptions.py" id="mod_exceptions"/>
<ENTITY type="Module" name="models.py" id="mod_models"/>
<ENTITY type="Module" name="utils" id="mod_utils"/>
</MODULE>
<MODULE path="superset_tool/client.py" id="mod_client">
<PURPOSE>Client for interacting with the Superset API.</PURPOSE>
<ENTITY type="Class" name="SupersetClient" id="class_superset_client"/>
</MODULE>
<MODULE path="superset_tool/exceptions.py" id="mod_exceptions">
<PURPOSE>Custom exceptions for Superset Tool.</PURPOSE>
</MODULE>
<MODULE path="superset_tool/models.py" id="mod_models">
<PURPOSE>Data models for Superset.</PURPOSE>
</MODULE>
<MODULE path="superset_tool/utils/" id="mod_utils">
<PURPOSE>Utilities for Superset Tool.</PURPOSE>
<ENTITY type="Module" name="fileio.py" id="mod_fileio"/>
<ENTITY type="Module" name="init_clients.py" id="mod_init_clients"/>
<ENTITY type="Module" name="logger.py" id="mod_logger"/>
<ENTITY type="Module" name="network.py" id="mod_network"/>
</MODULE>
<MODULE path="superset_tool/utils/fileio.py" id="mod_fileio">
<PURPOSE>File handling utilities.</PURPOSE>
<ENTITY type="Function" name="_process_yaml_value" id="func_process_yaml_value"/>
<ENTITY type="Function" name="_update_yaml_file" id="func_update_yaml_file"/>
</MODULE>
<MODULE path="superset_tool/utils/init_clients.py" id="mod_init_clients">
<PURPOSE>Initialization of API clients.</PURPOSE>
</MODULE>
<MODULE path="superset_tool/utils/logger.py" id="mod_logger">
<PURPOSE>Logger configuration.</PURPOSE>
</MODULE>
<MODULE path="superset_tool/utils/network.py" id="mod_network">
<PURPOSE>Network utilities.</PURPOSE>
</MODULE>
</STRUCTURE_MAP>
<SEMANTIC_GRAPH>
<NODE id="mod_backup_script" type="Module" label="Script for creating backups."/>
<NODE id="mod_migration_script" type="Module" label="Interactive script for migrating Superset assets."/>
<NODE id="mod_search_script" type="Module" label="Script for searching."/>
<NODE id="mod_temp_pylint_runner" type="Module" label="Temporary script for running Pylint."/>
<NODE id="mod_superset_tool" type="Package" label="Package for interacting with the Superset API."/>
<NODE id="mod_client" type="Module" label="Superset API client."/>
<NODE id="mod_exceptions" type="Module" label="Custom exceptions."/>
<NODE id="mod_models" type="Module" label="Data models."/>
<NODE id="mod_utils" type="Package" label="Utilities."/>
<NODE id="mod_fileio" type="Module" label="File utilities."/>
<NODE id="mod_init_clients" type="Module" label="Client initialization."/>
<NODE id="mod_logger" type="Module" label="Logger configuration."/>
<NODE id="mod_network" type="Module" label="Network utilities."/>
<NODE id="class_superset_client" type="Class" label="Superset client."/>
<NODE id="func_process_yaml_value" type="Function" label="(HELPER) Recursively processes values in a YAML structure."/>
<NODE id="func_update_yaml_file" type="Function" label="(HELPER) Updates a single YAML file."/>
<NODE id="class_migration" type="Class" label="Encapsulates the logic and state of the migration process."/>
<NODE id="func_run_migration" type="Function" label="Runs the main migration workflow."/>
<NODE id="func_select_environments" type="Function" label="Provides interactive selection of source and target environments."/>
<NODE id="func_select_dashboards" type="Function" label="Provides interactive selection of dashboards to migrate."/>
<NODE id="func_confirm_db_config_replacement" type="Function" label="Manages confirmation and setup of DB config replacement."/>
<NODE id="func_execute_migration" type="Function" label="Performs the actual migration of the selected dashboards."/>
<EDGE source_id="mod_superset_tool" target_id="mod_client" relation="CONTAINS"/>
<EDGE source_id="mod_superset_tool" target_id="mod_exceptions" relation="CONTAINS"/>
<EDGE source_id="mod_superset_tool" target_id="mod_models" relation="CONTAINS"/>
<EDGE source_id="mod_superset_tool" target_id="mod_utils" relation="CONTAINS"/>
<EDGE source_id="mod_client" target_id="class_superset_client" relation="CONTAINS"/>
<EDGE source_id="mod_utils" target_id="mod_fileio" relation="CONTAINS"/>
<EDGE source_id="mod_utils" target_id="mod_init_clients" relation="CONTAINS"/>
<EDGE source_id="mod_utils" target_id="mod_logger" relation="CONTAINS"/>
<EDGE source_id="mod_utils" target_id="mod_network" relation="CONTAINS"/>
<EDGE source_id="mod_backup_script" target_id="mod_superset_tool" relation="USES"/>
<EDGE source_id="mod_migration_script" target_id="mod_superset_tool" relation="USES"/>
<EDGE source_id="mod_search_script" target_id="mod_superset_tool" relation="USES"/>
<EDGE source_id="mod_fileio" target_id="func_process_yaml_value" relation="CONTAINS"/>
<EDGE source_id="mod_fileio" target_id="func_update_yaml_file" relation="CONTAINS"/>
<EDGE source_id="func_update_yamls" target_id="func_update_yaml_file" relation="CALLS"/>
<EDGE source_id="func_update_yaml_file" target_id="func_process_yaml_value" relation="CALLS"/>
<EDGE source_id="mod_migration_script" target_id="class_migration" relation="CONTAINS"/>
<EDGE source_id="class_migration" target_id="func_run_migration" relation="CONTAINS"/>
<EDGE source_id="class_migration" target_id="func_select_environments" relation="CONTAINS"/>
<EDGE source_id="class_migration" target_id="func_select_dashboards" relation="CONTAINS"/>
<EDGE source_id="class_migration" target_id="func_confirm_db_config_replacement" relation="CONTAINS"/>
<EDGE source_id="func_run_migration" target_id="func_select_environments" relation="CALLS"/>
<EDGE source_id="func_run_migration" target_id="func_select_dashboards" relation="CALLS"/>
<EDGE source_id="func_run_migration" target_id="func_confirm_db_config_replacement" relation="CALLS"/>
<EDGE source_id="class_migration" target_id="func_execute_migration" relation="CONTAINS"/>
<EDGE source_id="func_run_migration" target_id="func_execute_migration" relation="CALLS"/>
</SEMANTIC_GRAPH>
</PROJECT_SEMANTICS>

212
README.md Normal file → Executable file
View File

@@ -1,93 +1,119 @@
# Superset Automation Tools
## Overview
This repository contains Python scripts and a library (`superset_tool`) for automating Apache Superset tasks such as:
- **Backup**: Exporting all dashboards from a Superset instance to local storage.
- **Migration**: Transferring and transforming dashboards between Superset environments (e.g., Development, Sandbox, Production).
## Project Structure
- `backup_script.py`: Main script for scheduled backups of Superset dashboards.
- `migration_script.py`: Main script for migrating specific dashboards between environments, including overriding database connections.
- `search_script.py`: Script for searching data across all datasets available on the server.
- `superset_tool/`:
- `client.py`: Python client for interacting with the Superset API.
- `exceptions.py`: Custom exception classes for structured error handling.
- `models.py`: Pydantic models for validating configuration data.
- `utils/`:
- `fileio.py`: File system utilities (archive handling, YAML parsing).
- `logger.py`: Logger configuration for uniform logging across the project.
- `network.py`: HTTP client for network requests with authentication and retry handling.
## Setup
### Requirements
- Python 3.9+
- `pip` for package management.
- `keyring` for secure password storage.
### Installation
1. **Clone the repository:**
```bash
git clone https://prod.gitlab.dwh.rusal.com/dwh_bi/superset-tools.git
cd superset-tools
```
2. **Install dependencies:**
```bash
pip install -r requirements.txt
```
(You may need to create `requirements.txt` with `pydantic`, `requests`, `keyring`, `PyYAML`, `urllib3`.)
3. **Configure passwords:**
Use `keyring` to store the passwords of the Superset API users.
Example for `backup_script.py`:
```python
import keyring
keyring.set_password("system", "dev migrate", "migrate_user password")
keyring.set_password("system", "prod migrate", "migrate_user password")
keyring.set_password("system", "sandbox migrate", "migrate_user password")
```
Replace `"system"` with a suitable service name if needed.
## Usage
### Backup script (`backup_script.py`)
To back up dashboards from the configured Superset environments:
```bash
python backup_script.py
```
By default, backups are saved to `P:\Superset\010 Бекапы\` and logs to `P:\Superset\010 Бекапы\Logs`.
### Migration script (`migration_script.py`)
To migrate a specific dashboard:
```bash
python migration_script.py
```
**Note:** The current version migrates a hard-coded dashboard (`FI0070`) and uses a local `.zip` file as the source; the source and target environments are selected via the `from_c` and `to_c` parameters. **Adjust these before using the script in Production.**
### Search script (`search_script.py`)
The search string and the clients to query are configured inside the script.
Searching for all tables in a dataset:
```python
results = search_datasets(
    client=clients['dev'],
    search_pattern=r'dm_view\.account_debt',
    search_fields=["sql"],
    logger=logger
)
```
## Logging
Logs are written to a file in the `Logs` directory (e.g., `P:\Superset\010 Бекапы\Logs` for backups) and echoed to the console. The default log level is `INFO`.
## Development and Contributing
- Follow the architectural patterns (`[MODULE]`, `[CONTRACT]`, `[SECTION]`, `[ANCHOR]`) and the logging rules.
- All new code must follow the "LLM-friendly" generation principles.
- Use `Pydantic` models for data validation.
- Implement comprehensive error handling with custom exceptions.
---
[COHERENCE_CHECK_PASSED] README.md created and consistent with the modules.
# Superset Automation Tools
## Overview
This repository contains Python scripts and a library (`superset_tool`) for automating Apache Superset tasks such as:
- **Backup**: Exporting all dashboards from a Superset instance to local storage.
- **Migration**: Transferring and transforming dashboards between Superset environments (e.g., Development, Sandbox, Production).
## Project Structure
- `backup_script.py`: Main script for scheduled backups of Superset dashboards.
- `migration_script.py`: Main script for transferring specific dashboards between environments, including overriding database connections.
- `search_script.py`: Script for searching data across all datasets available on the server.
- `run_mapper.py`: CLI script for mapping dataset metadata.
- `superset_tool/`:
  - `client.py`: Python client for interacting with the Superset API.
  - `exceptions.py`: Custom exception classes for structured error handling.
  - `models.py`: Pydantic models for validating configuration data.
  - `utils/`:
    - `fileio.py`: File-system utilities (archive handling, YAML parsing).
    - `logger.py`: Logger configuration for consistent logging across the project.
    - `network.py`: HTTP client for network requests with authentication and retry handling.
    - `init_clients.py`: Utility for initializing Superset clients for different environments.
    - `dataset_mapper.py`: Dataset metadata mapping logic.
## Setup
### Requirements
- Python 3.9+
- `pip` for package management.
- `keyring` for secure password storage.
### Installation
1. **Clone the repository:**
   ```bash
   git clone https://prod.gitlab.dwh.rusal.com/dwh_bi/superset-tools.git
   cd superset-tools
   ```
2. **Install dependencies:**
   ```bash
   pip install -r requirements.txt
   ```
   (You may need to create `requirements.txt` with `pydantic`, `requests`, `keyring`, `PyYAML`, `urllib3`.)
3. **Configure passwords:**
   Use `keyring` to store the passwords of the Superset API users.
   ```python
   import keyring
   keyring.set_password("system", "dev migrate", "migrate_user password")
   keyring.set_password("system", "prod migrate", "migrate_user password")
   keyring.set_password("system", "sandbox migrate", "migrate_user password")
   ```
## Usage
### Running the Project (Web UI)
To start the backend and frontend servers with a single command:
```bash
./run.sh
```
Options:
- `--skip-install`: Skip the dependency check and installation.
- `--help`: Show help.
Environment variables (see the example invocation below):
- `BACKEND_PORT`: Backend port (default 8000).
- `FRONTEND_PORT`: Frontend port (default 5173).
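For instance, to run on non-default ports without reinstalling dependencies (the port values here are illustrative):
```bash
BACKEND_PORT=8080 FRONTEND_PORT=3000 ./run.sh --skip-install
```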
### Backup Script (`backup_script.py`)
To back up dashboards from the configured Superset environments:
```bash
python backup_script.py
```
Backups are saved to `P:\Superset\010 Бекапы\` by default. Logs are stored in `P:\Superset\010 Бекапы\Logs`.
### Migration Script (`migration_script.py`)
To transfer a specific dashboard:
```bash
python migration_script.py
```
### Search Script (`search_script.py`)
To search for text patterns in Superset dataset metadata:
```bash
python search_script.py
```
The script uses regular expressions to search dataset fields such as SQL queries. Results are written to the log and to the console.
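The pattern and target clients are set inside the script. A minimal sketch of the search call (the `search_datasets` helper, `clients` mapping, and `logger` are assumed to be defined by the script's setup code, as in the example earlier in this document):
```python
# Search dataset SQL fields for references to a specific table
results = search_datasets(
    client=clients['dev'],
    search_pattern=r'dm_view\.account_debt',
    search_fields=["sql"],
    logger=logger,
)
```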
### Metadata Mapping Script (`run_mapper.py`)
To update a dataset's metadata (e.g., verbose names) in Superset:
```bash
python run_mapper.py --source <source_type> --dataset-id <dataset_id> [--table-name <table_name>] [--table-schema <table_schema>] [--excel-path <path_to_excel>] [--env <environment>]
```
If you use XLSX, the file must contain two columns: column_name | verbose_name.
Parameters:
- `--source`: Data source ('postgres', 'excel', or 'both').
- `--dataset-id`: ID of the dataset to update.
- `--table-name`: Table name for PostgreSQL.
- `--table-schema`: Table schema for PostgreSQL.
- `--excel-path`: Path to the Excel file.
- `--env`: Superset environment ('dev', 'prod', etc.).
Usage examples:
```bash
python run_mapper.py --source postgres --dataset-id 123 --table-name account_debt --table-schema dm_view --env dev
python run_mapper.py --source=excel --dataset-id=286 --excel-path=H:\dev\ss-tools\286_map.xlsx --env=dev
```
## Logging
Logs are written to a file in the `Logs` directory (e.g., `P:\Superset\010 Бекапы\Logs` for backups) and echoed to the console. The default logging level is `INFO`.
## Development and Contributing
- Follow the **Semantic Code Generation Protocol** (see `semantic_protocol.md`); a minimal sketch follows this list:
  - All definitions are wrapped in `[DEF]...[/DEF]`.
  - Contracts (`@PRE`, `@POST`) are defined BEFORE the implementation.
  - Strict typing and immutable architectural decisions.
- Follow the project Constitution (`.specify/memory/constitution.md`).
- Use `Pydantic` models for data validation.
- Implement comprehensive error handling with custom exceptions.
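A minimal sketch of that definition style (the function and contract here are illustrative; the modules under `backend/src/core/` in this PR follow the same layout):
```python
# [DEF:normalize_env_name:Function]
# @PURPOSE: Normalizes an environment name for lookups (illustrative example).
# @PRE: isinstance(name, str) and len(name) > 0
# @POST: the return value is stripped and lower-cased
def normalize_env_name(name: str) -> str:
    # 1. Runtime check of @PRE
    assert isinstance(name, str) and name, "name must be a non-empty string"
    # 2. Logic implementation
    result = name.strip().lower()
    # 3. Runtime check of @POST
    assert result == result.strip().lower(), "result must be normalized"
    return result
# [/DEF:normalize_env_name]
```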

189
backend/backend.log Normal file
View File

@@ -0,0 +1,189 @@
INFO: Will watch for changes in these directories: ['/home/user/ss-tools/backend']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [7952] using StatReload
INFO: Started server process [7968]
INFO: Waiting for application startup.
INFO: Application startup complete.
Error loading plugin module backup: No module named 'yaml'
Error loading plugin module migration: No module named 'yaml'
INFO: 127.0.0.1:36934 - "HEAD /docs HTTP/1.1" 200 OK
INFO: 127.0.0.1:55006 - "GET /settings HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:55006 - "GET /settings/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:55010 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:55010 - "GET /plugins/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:55010 - "GET /settings HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:55010 - "GET /settings/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:55010 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:55010 - "GET /plugins/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:55010 - "GET /settings HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:55010 - "GET /settings/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:35508 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:35508 - "GET /plugins/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:49820 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:49820 - "GET /plugins/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:49822 - "GET /settings HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:49822 - "GET /settings/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:49822 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:49822 - "GET /plugins/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:49908 - "GET /settings HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:49908 - "GET /settings/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:49922 - "OPTIONS /settings/environments HTTP/1.1" 200 OK
[2025-12-20 19:14:15,576][INFO][superset_tools_app] [ConfigManager.save_config][Coherence:OK] Configuration saved context={'path': '/home/user/ss-tools/config.json'}
INFO: 127.0.0.1:49922 - "POST /settings/environments HTTP/1.1" 200 OK
INFO: 127.0.0.1:49922 - "GET /settings HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:49922 - "GET /settings/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:49922 - "OPTIONS /settings/environments/7071dab6-881f-49a2-b850-c004b3fc11c0/test HTTP/1.1" 200 OK
INFO: 127.0.0.1:36930 - "POST /settings/environments/7071dab6-881f-49a2-b850-c004b3fc11c0/test HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/applications.py", line 1135, in __call__
await super().__call__(scope, receive, send)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/applications.py", line 107, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/middleware/cors.py", line 93, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/middleware/cors.py", line 144, in simple_response
await self.app(scope, receive, send)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 63, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/routing.py", line 716, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/routing.py", line 736, in app
await route.handle(scope, receive, send)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/routing.py", line 290, in handle
await self.app(scope, receive, send)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/routing.py", line 118, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/routing.py", line 104, in app
response = await f(request)
^^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/routing.py", line 428, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/routing.py", line 314, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/src/api/routes/settings.py", line 103, in test_connection
import httpx
ModuleNotFoundError: No module named 'httpx'
INFO: 127.0.0.1:45776 - "POST /settings/environments/7071dab6-881f-49a2-b850-c004b3fc11c0/test HTTP/1.1" 200 OK
INFO: 127.0.0.1:45784 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:45784 - "GET /plugins/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:41628 - "GET /settings HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:41628 - "GET /settings/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:41628 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:41628 - "GET /plugins/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:60184 - "GET /settings HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:60184 - "GET /settings/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:60184 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:60184 - "GET /plugins/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:60184 - "GET /settings HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:60184 - "GET /settings/ HTTP/1.1" 200 OK
WARNING: StatReload detected changes in 'src/core/plugin_loader.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [7968]
INFO: Started server process [12178]
INFO: Waiting for application startup.
INFO: Application startup complete.
WARNING: StatReload detected changes in 'src/dependencies.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [12178]
INFO: Started server process [12451]
INFO: Waiting for application startup.
INFO: Application startup complete.
Plugin 'Superset Dashboard Backup' (ID: superset-backup) loaded successfully.
Plugin 'Superset Dashboard Migration' (ID: superset-migration) loaded successfully.
INFO: 127.0.0.1:37334 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:37334 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:39932 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:39932 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:39932 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:39932 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:54900 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:49280 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:49280 - "GET /plugins/ HTTP/1.1" 200 OK
WARNING: StatReload detected changes in 'src/api/routes/plugins.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [12451]
INFO: Started server process [15016]
INFO: Waiting for application startup.
INFO: Application startup complete.
Plugin 'Superset Dashboard Backup' (ID: superset-backup) loaded successfully.
Plugin 'Superset Dashboard Migration' (ID: superset-migration) loaded successfully.
INFO: 127.0.0.1:59340 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
DEBUG: list_plugins called. Found 0 plugins.
INFO: 127.0.0.1:59340 - "GET /plugins/ HTTP/1.1" 200 OK
WARNING: StatReload detected changes in 'src/dependencies.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [15016]
INFO: Started server process [15257]
INFO: Waiting for application startup.
INFO: Application startup complete.
Plugin 'Superset Dashboard Backup' (ID: superset-backup) loaded successfully.
Plugin 'Superset Dashboard Migration' (ID: superset-migration) loaded successfully.
DEBUG: dependencies.py initialized. PluginLoader ID: 139922613090976
DEBUG: dependencies.py initialized. PluginLoader ID: 139922627375088
INFO: 127.0.0.1:57464 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
DEBUG: get_plugin_loader called. Returning PluginLoader ID: 139922627375088
DEBUG: list_plugins called. Found 0 plugins.
INFO: 127.0.0.1:57464 - "GET /plugins/ HTTP/1.1" 200 OK
WARNING: StatReload detected changes in 'src/core/plugin_loader.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [15257]
INFO: Started server process [15533]
INFO: Waiting for application startup.
INFO: Application startup complete.
DEBUG: Loading plugin backup as src.plugins.backup
Plugin 'Superset Dashboard Backup' (ID: superset-backup) loaded successfully.
DEBUG: Loading plugin migration as src.plugins.migration
Plugin 'Superset Dashboard Migration' (ID: superset-migration) loaded successfully.
DEBUG: dependencies.py initialized. PluginLoader ID: 140371031142384
INFO: 127.0.0.1:46470 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
DEBUG: get_plugin_loader called. Returning PluginLoader ID: 140371031142384
DEBUG: list_plugins called. Found 2 plugins.
DEBUG: Plugin: superset-backup
DEBUG: Plugin: superset-migration
INFO: 127.0.0.1:46470 - "GET /plugins/ HTTP/1.1" 200 OK
WARNING: StatReload detected changes in 'src/api/routes/settings.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [15533]
INFO: Started server process [15827]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [15827]
INFO: Stopping reloader process [7952]

View File

@@ -0,0 +1,269 @@
2025-12-20 19:55:11,325 - INFO - [BackupPlugin][Entry] Starting backup for superset.
2025-12-20 19:55:11,325 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 19:55:11,327 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
Traceback (most recent call last):
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 43, in setup_clients
config = SupersetConfig(
^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
2025-12-20 21:01:49,905 - INFO - [BackupPlugin][Entry] Starting backup for superset.
2025-12-20 21:01:49,906 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 21:01:49,988 - INFO - [setup_clients][Action] Loading environments from ConfigManager
2025-12-20 21:01:49,990 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
Traceback (most recent call last):
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
config = SupersetConfig(
^^^^^^^^^^^^^^^
File "/home/user/ss-tools/venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
2025-12-20 22:42:32,538 - INFO - [BackupPlugin][Entry] Starting backup for superset.
2025-12-20 22:42:32,538 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 22:42:32,583 - INFO - [setup_clients][Action] Loading environments from ConfigManager
2025-12-20 22:42:32,587 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
Traceback (most recent call last):
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
config = SupersetConfig(
^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
2025-12-20 22:54:29,770 - INFO - [BackupPlugin][Entry] Starting backup for .
2025-12-20 22:54:29,771 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 22:54:29,831 - INFO - [setup_clients][Action] Loading environments from ConfigManager
2025-12-20 22:54:29,833 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
Traceback (most recent call last):
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
config = SupersetConfig(
^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
2025-12-20 22:54:34,078 - INFO - [BackupPlugin][Entry] Starting backup for superset.
2025-12-20 22:54:34,078 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 22:54:34,079 - INFO - [setup_clients][Action] Loading environments from ConfigManager
2025-12-20 22:54:34,079 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
Traceback (most recent call last):
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
config = SupersetConfig(
^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
2025-12-20 22:59:25,060 - INFO - [BackupPlugin][Entry] Starting backup for superset.
2025-12-20 22:59:25,060 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 22:59:25,114 - INFO - [setup_clients][Action] Loading environments from ConfigManager
2025-12-20 22:59:25,117 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
Traceback (most recent call last):
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
config = SupersetConfig(
^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
2025-12-20 23:00:31,156 - INFO - [BackupPlugin][Entry] Starting backup for superset.
2025-12-20 23:00:31,156 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 23:00:31,157 - INFO - [setup_clients][Action] Loading environments from ConfigManager
2025-12-20 23:00:31,162 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
Traceback (most recent call last):
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
config = SupersetConfig(
^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
2025-12-20 23:00:34,710 - INFO - [BackupPlugin][Entry] Starting backup for superset.
2025-12-20 23:00:34,710 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 23:00:34,710 - INFO - [setup_clients][Action] Loading environments from ConfigManager
2025-12-20 23:00:34,711 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
Traceback (most recent call last):
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
config = SupersetConfig(
^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
2025-12-20 23:01:43,894 - INFO - [BackupPlugin][Entry] Starting backup for superset.
2025-12-20 23:01:43,894 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 23:01:43,895 - INFO - [setup_clients][Action] Loading environments from ConfigManager
2025-12-20 23:01:43,895 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
Traceback (most recent call last):
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
config = SupersetConfig(
^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
2025-12-20 23:04:07,731 - INFO - [BackupPlugin][Entry] Starting backup for superset.
2025-12-20 23:04:07,731 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 23:04:07,732 - INFO - [setup_clients][Action] Loading environments from ConfigManager
2025-12-20 23:04:07,732 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
Traceback (most recent call last):
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
config = SupersetConfig(
^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
2025-12-20 23:06:39,641 - INFO - [BackupPlugin][Entry] Starting backup for superset.
2025-12-20 23:06:39,642 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 23:06:39,687 - INFO - [setup_clients][Action] Loading environments from ConfigManager
2025-12-20 23:06:39,689 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
Traceback (most recent call last):
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
config = SupersetConfig(
^^^^^^^^^^^^^^^
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
base_url
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
For further information visit https://errors.pydantic.dev/2.12/v/value_error
2025-12-20 23:30:36,090 - INFO - [BackupPlugin][Entry] Starting backup for superset.
2025-12-20 23:30:36,093 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
2025-12-20 23:30:36,128 - INFO - [setup_clients][Action] Loading environments from ConfigManager
2025-12-20 23:30:36,129 - INFO - [SupersetClient.__init__][Enter] Initializing SupersetClient.
2025-12-20 23:30:36,129 - INFO - [APIClient.__init__][Entry] Initializing APIClient.
2025-12-20 23:30:36,130 - WARNING - [_init_session][State] SSL verification disabled.
2025-12-20 23:30:36,130 - INFO - [APIClient.__init__][Exit] APIClient initialized.
2025-12-20 23:30:36,130 - INFO - [SupersetClient.__init__][Exit] SupersetClient initialized.
2025-12-20 23:30:36,130 - INFO - [get_dashboards][Enter] Fetching dashboards.
2025-12-20 23:30:36,131 - INFO - [authenticate][Enter] Authenticating to https://superset.bebesh.ru/api/v1
2025-12-20 23:30:36,897 - INFO - [authenticate][Exit] Authenticated successfully.
2025-12-20 23:30:37,527 - INFO - [get_dashboards][Exit] Found 11 dashboards.
2025-12-20 23:30:37,527 - INFO - [BackupPlugin][Progress] Found 11 dashboards to export in superset.
2025-12-20 23:30:37,529 - INFO - [export_dashboard][Enter] Exporting dashboard 11.
2025-12-20 23:30:38,224 - INFO - [export_dashboard][Exit] Exported dashboard 11 to dashboard_export_20251220T203037.zip.
2025-12-20 23:30:38,225 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
2025-12-20 23:30:38,226 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/FCC New Coder Survey 2018/dashboard_export_20251220T203037.zip
2025-12-20 23:30:38,227 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/FCC New Coder Survey 2018
2025-12-20 23:30:38,230 - INFO - [export_dashboard][Enter] Exporting dashboard 10.
2025-12-20 23:30:38,438 - INFO - [export_dashboard][Exit] Exported dashboard 10 to dashboard_export_20251220T203038.zip.
2025-12-20 23:30:38,438 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
2025-12-20 23:30:38,439 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/COVID Vaccine Dashboard/dashboard_export_20251220T203038.zip
2025-12-20 23:30:38,439 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/COVID Vaccine Dashboard
2025-12-20 23:30:38,440 - INFO - [export_dashboard][Enter] Exporting dashboard 9.
2025-12-20 23:30:38,853 - INFO - [export_dashboard][Exit] Exported dashboard 9 to dashboard_export_20251220T203038.zip.
2025-12-20 23:30:38,853 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
2025-12-20 23:30:38,856 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Sales Dashboard/dashboard_export_20251220T203038.zip
2025-12-20 23:30:38,856 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Sales Dashboard
2025-12-20 23:30:38,858 - INFO - [export_dashboard][Enter] Exporting dashboard 8.
2025-12-20 23:30:38,939 - INFO - [export_dashboard][Exit] Exported dashboard 8 to dashboard_export_20251220T203038.zip.
2025-12-20 23:30:38,940 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
2025-12-20 23:30:38,941 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Unicode Test/dashboard_export_20251220T203038.zip
2025-12-20 23:30:38,941 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Unicode Test
2025-12-20 23:30:38,942 - INFO - [export_dashboard][Enter] Exporting dashboard 7.
2025-12-20 23:30:39,148 - INFO - [export_dashboard][Exit] Exported dashboard 7 to dashboard_export_20251220T203038.zip.
2025-12-20 23:30:39,148 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
2025-12-20 23:30:39,149 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Video Game Sales/dashboard_export_20251220T203038.zip
2025-12-20 23:30:39,149 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Video Game Sales
2025-12-20 23:30:39,150 - INFO - [export_dashboard][Enter] Exporting dashboard 6.
2025-12-20 23:30:39,689 - INFO - [export_dashboard][Exit] Exported dashboard 6 to dashboard_export_20251220T203039.zip.
2025-12-20 23:30:39,689 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
2025-12-20 23:30:39,690 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Featured Charts/dashboard_export_20251220T203039.zip
2025-12-20 23:30:39,691 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Featured Charts
2025-12-20 23:30:39,692 - INFO - [export_dashboard][Enter] Exporting dashboard 5.
2025-12-20 23:30:39,960 - INFO - [export_dashboard][Exit] Exported dashboard 5 to dashboard_export_20251220T203039.zip.
2025-12-20 23:30:39,960 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
2025-12-20 23:30:39,961 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Slack Dashboard/dashboard_export_20251220T203039.zip
2025-12-20 23:30:39,961 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Slack Dashboard
2025-12-20 23:30:39,962 - INFO - [export_dashboard][Enter] Exporting dashboard 4.
2025-12-20 23:30:40,196 - INFO - [export_dashboard][Exit] Exported dashboard 4 to dashboard_export_20251220T203039.zip.
2025-12-20 23:30:40,196 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
2025-12-20 23:30:40,197 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/deck.gl Demo/dashboard_export_20251220T203039.zip
2025-12-20 23:30:40,197 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/deck.gl Demo
2025-12-20 23:30:40,198 - INFO - [export_dashboard][Enter] Exporting dashboard 3.
2025-12-20 23:30:40,745 - INFO - [export_dashboard][Exit] Exported dashboard 3 to dashboard_export_20251220T203040.zip.
2025-12-20 23:30:40,746 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
2025-12-20 23:30:40,760 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Misc Charts/dashboard_export_20251220T203040.zip
2025-12-20 23:30:40,761 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Misc Charts
2025-12-20 23:30:40,762 - INFO - [export_dashboard][Enter] Exporting dashboard 2.
2025-12-20 23:30:40,928 - INFO - [export_dashboard][Exit] Exported dashboard 2 to dashboard_export_20251220T203040.zip.
2025-12-20 23:30:40,929 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
2025-12-20 23:30:40,930 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/USA Births Names/dashboard_export_20251220T203040.zip
2025-12-20 23:30:40,931 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/USA Births Names
2025-12-20 23:30:40,932 - INFO - [export_dashboard][Enter] Exporting dashboard 1.
2025-12-20 23:30:41,582 - INFO - [export_dashboard][Exit] Exported dashboard 1 to dashboard_export_20251220T203040.zip.
2025-12-20 23:30:41,582 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
2025-12-20 23:30:41,749 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/World Bank's Data/dashboard_export_20251220T203040.zip
2025-12-20 23:30:41,750 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/World Bank's Data
2025-12-20 23:30:41,752 - INFO - [consolidate_archive_folders][Enter] Consolidating archives in backups/SUPERSET
2025-12-20 23:30:41,753 - INFO - [remove_empty_directories][Enter] Starting cleanup of empty directories in backups/SUPERSET
2025-12-20 23:30:41,758 - INFO - [remove_empty_directories][Exit] Removed 0 empty directories.
2025-12-20 23:30:41,758 - INFO - [BackupPlugin][CoherenceCheck:Passed] Backup logic completed for superset.

12
backend/requirements.txt Executable file
View File

@@ -0,0 +1,12 @@
fastapi
uvicorn
pydantic
authlib
python-multipart
starlette
jsonschema
requests
keyring
httpx
PyYAML
websockets

52
backend/src/api/auth.py Executable file
View File

@@ -0,0 +1,52 @@
# [DEF:AuthModule:Module]
# @SEMANTICS: auth, authentication, adfs, oauth, middleware
# @PURPOSE: Implements ADFS authentication using Authlib for FastAPI. It provides a dependency to protect endpoints.
# @LAYER: UI (API)
# @RELATION: Used by API routers to protect endpoints that require authentication.
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2AuthorizationCodeBearer
from authlib.integrations.starlette_client import OAuth
from starlette.config import Config
# Placeholder for ADFS configuration. In a real app, this would come from a secure source.
# Note: starlette's Config expects a file *path* for env_file, not a file-like object,
# so the placeholder values are supplied through the `environ` mapping instead.
config = Config(environ={
    "ADFS_CLIENT_ID": "your-client-id",
    "ADFS_CLIENT_SECRET": "your-client-secret",
    "ADFS_SERVER_METADATA_URL": "https://your-adfs-server/.well-known/openid-configuration",
})
oauth = OAuth(config)
oauth.register(
name='adfs',
server_metadata_url=config('ADFS_SERVER_METADATA_URL'),
client_kwargs={'scope': 'openid profile email'}
)
oauth2_scheme = OAuth2AuthorizationCodeBearer(
authorizationUrl="https://your-adfs-server/adfs/oauth2/authorize",
tokenUrl="https://your-adfs-server/adfs/oauth2/token",
)
async def get_current_user(token: str = Depends(oauth2_scheme)):
"""
Dependency to get the current user from the ADFS token.
This is a placeholder and needs to be fully implemented.
"""
# In a real implementation, you would:
# 1. Validate the token with ADFS.
# 2. Fetch user information.
# 3. Create a user object.
# For now, we'll just check if a token exists.
if not token:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Not authenticated",
headers={"WWW-Authenticate": "Bearer"},
)
# A real implementation would return a user object.
return {"placeholder_user": "user@example.com"}
# [/DEF]
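
The `get_current_user` placeholder above stops at checking that a token is present. A minimal sketch of the missing validation step, assuming ADFS publishes its signing keys at a JWKS endpoint (the URL and the lack of key caching are illustrative; `httpx` and `authlib` are already listed in `backend/requirements.txt`):
```python
# A sketch only: validates an ADFS-issued JWT against the published key set.
import httpx
from authlib.jose import JsonWebKey, jwt
from authlib.jose.errors import JoseError

JWKS_URL = "https://your-adfs-server/adfs/discovery/keys"  # assumed endpoint

def validate_adfs_token(token: str) -> dict:
    # Fetch the signing keys (a real implementation would cache these).
    key_set = JsonWebKey.import_key_set(httpx.get(JWKS_URL, timeout=10).json())
    try:
        claims = jwt.decode(token, key_set)  # verifies the signature
        claims.validate()                    # checks exp/nbf and other registered claims
        return dict(claims)
    except JoseError as exc:
        raise ValueError(f"Invalid ADFS token: {exc}") from exc
```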

View File

@@ -0,0 +1 @@
from . import plugins, tasks, settings

View File

@@ -0,0 +1,22 @@
# [DEF:PluginsRouter:Module]
# @SEMANTICS: api, router, plugins, list
# @PURPOSE: Defines the FastAPI router for plugin-related endpoints, allowing clients to list available plugins.
# @LAYER: UI (API)
# @RELATION: Depends on the PluginLoader and PluginConfig. It is included by the main app.
from typing import List
from fastapi import APIRouter, Depends
from ...core.plugin_base import PluginConfig
from ...dependencies import get_plugin_loader
router = APIRouter()
@router.get("/", response_model=List[PluginConfig])
async def list_plugins(
plugin_loader = Depends(get_plugin_loader)
):
"""
Retrieve a list of all available plugins.
"""
return plugin_loader.get_all_plugin_configs()
# [/DEF]

View File

@@ -0,0 +1,218 @@
# [DEF:SettingsRouter:Module]
#
# @SEMANTICS: settings, api, router, fastapi
# @PURPOSE: Provides API endpoints for managing application settings and Superset environments.
# @LAYER: UI (API)
# @RELATION: DEPENDS_ON -> ConfigManager
# @RELATION: DEPENDS_ON -> ConfigModels
#
# @INVARIANT: All settings changes must be persisted via ConfigManager.
# @PUBLIC_API: router
# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException
from typing import List
from ...core.config_models import AppConfig, Environment, GlobalSettings
from ...dependencies import get_config_manager
from ...core.config_manager import ConfigManager
from ...core.logger import logger
from superset_tool.client import SupersetClient
from superset_tool.models import SupersetConfig
# [/SECTION]
router = APIRouter()
# [DEF:get_settings:Function]
# @PURPOSE: Retrieves all application settings.
# @RETURN: AppConfig - The current configuration.
@router.get("/", response_model=AppConfig)
async def get_settings(config_manager: ConfigManager = Depends(get_config_manager)):
logger.info("[get_settings][Entry] Fetching all settings")
config = config_manager.get_config().copy(deep=True)
# Mask passwords
for env in config.environments:
if env.password:
env.password = "********"
return config
# [/DEF:get_settings]
# [DEF:update_global_settings:Function]
# @PURPOSE: Updates global application settings.
# @PARAM: settings (GlobalSettings) - The new global settings.
# @RETURN: GlobalSettings - The updated settings.
@router.patch("/global", response_model=GlobalSettings)
async def update_global_settings(
settings: GlobalSettings,
config_manager: ConfigManager = Depends(get_config_manager)
):
logger.info("[update_global_settings][Entry] Updating global settings")
config_manager.update_global_settings(settings)
return settings
# [/DEF:update_global_settings]
# [DEF:get_environments:Function]
# @PURPOSE: Lists all configured Superset environments.
# @RETURN: List[Environment] - List of environments.
@router.get("/environments", response_model=List[Environment])
async def get_environments(config_manager: ConfigManager = Depends(get_config_manager)):
logger.info("[get_environments][Entry] Fetching environments")
return config_manager.get_environments()
# [/DEF:get_environments]
# [DEF:add_environment:Function]
# @PURPOSE: Adds a new Superset environment.
# @PARAM: env (Environment) - The environment to add.
# @RETURN: Environment - The added environment.
@router.post("/environments", response_model=Environment)
async def add_environment(
env: Environment,
config_manager: ConfigManager = Depends(get_config_manager)
):
logger.info(f"[add_environment][Entry] Adding environment {env.id}")
# Validate connection before adding
try:
superset_config = SupersetConfig(
env=env.name,
base_url=env.url,
auth={
"provider": "db",
"username": env.username,
"password": env.password,
"refresh": "true"
}
)
client = SupersetClient(config=superset_config)
client.get_dashboards(query={"page_size": 1})
except Exception as e:
logger.error(f"[add_environment][Coherence:Failed] Connection validation failed: {e}")
raise HTTPException(status_code=400, detail=f"Connection validation failed: {e}")
config_manager.add_environment(env)
return env
# [/DEF:add_environment]
# [DEF:update_environment:Function]
# @PURPOSE: Updates an existing Superset environment.
# @PARAM: id (str) - The ID of the environment to update.
# @PARAM: env (Environment) - The updated environment data.
# @RETURN: Environment - The updated environment.
@router.put("/environments/{id}", response_model=Environment)
async def update_environment(
id: str,
env: Environment,
config_manager: ConfigManager = Depends(get_config_manager)
):
logger.info(f"[update_environment][Entry] Updating environment {id}")
# If password is masked, we need the real one for validation
env_to_validate = env.copy(deep=True)
if env_to_validate.password == "********":
old_env = next((e for e in config_manager.get_environments() if e.id == id), None)
if old_env:
env_to_validate.password = old_env.password
# Validate connection before updating
try:
superset_config = SupersetConfig(
env=env_to_validate.name,
base_url=env_to_validate.url,
auth={
"provider": "db",
"username": env_to_validate.username,
"password": env_to_validate.password,
"refresh": "true"
}
)
client = SupersetClient(config=superset_config)
client.get_dashboards(query={"page_size": 1})
except Exception as e:
logger.error(f"[update_environment][Coherence:Failed] Connection validation failed: {e}")
raise HTTPException(status_code=400, detail=f"Connection validation failed: {e}")
if config_manager.update_environment(id, env):
return env
raise HTTPException(status_code=404, detail=f"Environment {id} not found")
# [/DEF:update_environment]
# [DEF:delete_environment:Function]
# @PURPOSE: Deletes a Superset environment.
# @PARAM: id (str) - The ID of the environment to delete.
@router.delete("/environments/{id}")
async def delete_environment(
id: str,
config_manager: ConfigManager = Depends(get_config_manager)
):
logger.info(f"[delete_environment][Entry] Deleting environment {id}")
config_manager.delete_environment(id)
return {"message": f"Environment {id} deleted"}
# [/DEF:delete_environment]
# [DEF:test_environment_connection:Function]
# @PURPOSE: Tests the connection to a Superset environment.
# @PARAM: id (str) - The ID of the environment to test.
# @RETURN: dict - Success message or error.
@router.post("/environments/{id}/test")
async def test_environment_connection(
id: str,
config_manager: ConfigManager = Depends(get_config_manager)
):
logger.info(f"[test_environment_connection][Entry] Testing environment {id}")
# Find environment
env = next((e for e in config_manager.get_environments() if e.id == id), None)
if not env:
raise HTTPException(status_code=404, detail=f"Environment {id} not found")
try:
# Create SupersetConfig
# Note: SupersetConfig expects 'auth' dict with specific keys
superset_config = SupersetConfig(
env=env.name,
base_url=env.url,
auth={
"provider": "db", # Defaulting to db for now
"username": env.username,
"password": env.password,
"refresh": "true"
}
)
# Initialize client (this will trigger authentication)
client = SupersetClient(config=superset_config)
# Try a simple request to verify
client.get_dashboards(query={"page_size": 1})
logger.info(f"[test_environment_connection][Coherence:OK] Connection successful for {id}")
return {"status": "success", "message": "Connection successful"}
except Exception as e:
logger.error(f"[test_environment_connection][Coherence:Failed] Connection failed for {id}: {e}")
return {"status": "error", "message": str(e)}
# [/DEF:test_environment_connection]
# [DEF:validate_backup_path:Function]
# @PURPOSE: Validates if a backup path exists and is writable.
# @PARAM: path (str) - The path to validate.
# @RETURN: dict - Validation result.
@router.post("/validate-path")
async def validate_backup_path(
path_data: dict,
config_manager: ConfigManager = Depends(get_config_manager)
):
path = path_data.get("path")
if not path:
raise HTTPException(status_code=400, detail="Path is required")
logger.info(f"[validate_backup_path][Entry] Validating path: {path}")
valid, message = config_manager.validate_path(path)
if not valid:
return {"status": "error", "message": message}
return {"status": "success", "message": message}
# [/DEF:validate_backup_path]
# [/DEF:SettingsRouter]

57
backend/src/api/routes/tasks.py Executable file
View File

@@ -0,0 +1,57 @@
# [DEF:TasksRouter:Module]
# @SEMANTICS: api, router, tasks, create, list, get
# @PURPOSE: Defines the FastAPI router for task-related endpoints, allowing clients to create, list, and get the status of tasks.
# @LAYER: UI (API)
# @RELATION: Depends on the TaskManager. It is included by the main app.
from typing import List, Dict, Any
from fastapi import APIRouter, Depends, HTTPException, status
from pydantic import BaseModel
from ...core.task_manager import TaskManager, Task
from ...dependencies import get_task_manager
router = APIRouter()
class CreateTaskRequest(BaseModel):
plugin_id: str
params: Dict[str, Any]
@router.post("/", response_model=Task, status_code=status.HTTP_201_CREATED)
async def create_task(
request: CreateTaskRequest,
task_manager: TaskManager = Depends(get_task_manager)
):
"""
Create and start a new task for a given plugin.
"""
try:
task = await task_manager.create_task(
plugin_id=request.plugin_id,
params=request.params
)
return task
except ValueError as e:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=str(e))
@router.get("/", response_model=List[Task])
async def list_tasks(
task_manager: TaskManager = Depends(get_task_manager)
):
"""
Retrieve a list of all tasks.
"""
return task_manager.get_all_tasks()
@router.get("/{task_id}", response_model=Task)
async def get_task(
task_id: str,
task_manager: TaskManager = Depends(get_task_manager)
):
"""
Retrieve the details of a specific task.
"""
task = task_manager.get_task(task_id)
if not task:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Task not found")
return task
# [/DEF]
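
For reference, creating and polling a task from a client could look like this (a sketch assuming the backend runs locally on port 8000 with the `/api/tasks` prefix from `app.py`; the `params` payload is plugin-specific, and the `id` field is assumed from the Task response model):
```python
import httpx

# Create a backup task (plugin ID as reported by the plugin loader logs above).
resp = httpx.post(
    "http://127.0.0.1:8000/api/tasks/",
    json={"plugin_id": "superset-backup", "params": {}},  # params schema is plugin-specific
)
resp.raise_for_status()
task = resp.json()

# Fetch the task's current state; "id" is assumed to be part of the Task model.
status = httpx.get(f"http://127.0.0.1:8000/api/tasks/{task['id']}").json()
print(status)
```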

113
backend/src/app.py Executable file
View File

@@ -0,0 +1,113 @@
# [DEF:AppModule:Module]
# @SEMANTICS: app, main, entrypoint, fastapi
# @PURPOSE: The main entry point for the FastAPI application. It initializes the app, configures CORS, sets up dependencies, includes API routers, and defines the WebSocket endpoint for log streaming.
# @LAYER: UI (API)
# @RELATION: Depends on the dependency module and API route modules.
import sys
from pathlib import Path
# Add project root to sys.path to allow importing superset_tool
# Assuming app.py is in backend/src/
project_root = Path(__file__).resolve().parent.parent.parent
sys.path.append(str(project_root))
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Depends
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.responses import FileResponse
import asyncio
import os
from .dependencies import get_task_manager
from .core.logger import logger
from .api.routes import plugins, tasks, settings
# [DEF:App:Global]
# @SEMANTICS: app, fastapi, instance
# @PURPOSE: The global FastAPI application instance.
app = FastAPI(
title="Superset Tools API",
description="API for managing Superset automation tools and plugins.",
version="1.0.0",
)
# Configure CORS
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # Adjust this in production
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# Include API routes
app.include_router(plugins.router, prefix="/api/plugins", tags=["Plugins"])
app.include_router(tasks.router, prefix="/api/tasks", tags=["Tasks"])
app.include_router(settings.router, prefix="/api/settings", tags=["Settings"])
# [DEF:WebSocketEndpoint:Endpoint]
# @SEMANTICS: websocket, logs, streaming, real-time
# @PURPOSE: Provides a WebSocket endpoint for clients to connect to and receive real-time log entries for a specific task.
@app.websocket("/ws/logs/{task_id}")
async def websocket_endpoint(websocket: WebSocket, task_id: str):
await websocket.accept()
logger.info(f"WebSocket connection accepted for task {task_id}")
task_manager = get_task_manager()
queue = await task_manager.subscribe_logs(task_id)
try:
# Send initial logs if any
initial_logs = task_manager.get_task_logs(task_id)
for log_entry in initial_logs:
# Convert datetime to string for JSON serialization
log_dict = log_entry.dict()
log_dict['timestamp'] = log_dict['timestamp'].isoformat()
await websocket.send_json(log_dict)
# Stream new logs
logger.info(f"Starting log stream for task {task_id}")
while True:
log_entry = await queue.get()
log_dict = log_entry.dict()
log_dict['timestamp'] = log_dict['timestamp'].isoformat()
await websocket.send_json(log_dict)
# If task is finished, we could potentially close the connection
# but let's keep it open for a bit or until the client disconnects
if "Task completed successfully" in log_entry.message or "Task failed" in log_entry.message:
# Wait a bit to ensure client receives the last message
await asyncio.sleep(2)
break
except WebSocketDisconnect:
logger.info(f"WebSocket connection disconnected for task {task_id}")
except Exception as e:
logger.error(f"WebSocket error for task {task_id}: {e}")
finally:
task_manager.unsubscribe_logs(task_id, queue)
# [/DEF]
# [DEF:StaticFiles:Mount]
# @SEMANTICS: static, frontend, spa
# @PURPOSE: Mounts the frontend build directory to serve static assets.
frontend_path = project_root / "frontend" / "build"
if frontend_path.exists():
app.mount("/_app", StaticFiles(directory=str(frontend_path / "_app")), name="static")
# Serve other static files from the root of build directory
@app.get("/{file_path:path}")
async def serve_spa(file_path: str):
full_path = frontend_path / file_path
if full_path.is_file():
return FileResponse(str(full_path))
# Fallback to index.html for SPA routing
return FileResponse(str(frontend_path / "index.html"))
else:
# [DEF:RootEndpoint:Endpoint]
# @SEMANTICS: root, healthcheck
# @PURPOSE: A simple root endpoint to confirm that the API is running.
@app.get("/")
async def read_root():
return {"message": "Superset Tools API is running (Frontend build not found)"}
# [/DEF]

backend/src/core/config_manager.py

@@ -0,0 +1,230 @@
# [DEF:ConfigManagerModule:Module]
#
# @SEMANTICS: config, manager, persistence, json
# @PURPOSE: Manages application configuration, including loading/saving to JSON and CRUD for environments.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> ConfigModels
# @RELATION: CALLS -> logger
# @RELATION: WRITES_TO -> config.json
#
# @INVARIANT: Configuration must always be valid according to AppConfig model.
# @PUBLIC_API: ConfigManager
# [SECTION: IMPORTS]
import json
import os
from pathlib import Path
from typing import Optional, List
from .config_models import AppConfig, Environment, GlobalSettings
from .logger import logger
# [/SECTION]
# [DEF:ConfigManager:Class]
# @PURPOSE: A class to handle application configuration persistence and management.
# @RELATION: WRITES_TO -> config.json
class ConfigManager:
# [DEF:__init__:Function]
# @PURPOSE: Initializes the ConfigManager.
# @PRE: isinstance(config_path, str) and len(config_path) > 0
# @POST: self.config is an instance of AppConfig
# @PARAM: config_path (str) - Path to the configuration file.
def __init__(self, config_path: str = "config.json"):
# 1. Runtime check of @PRE
assert isinstance(config_path, str) and config_path, "config_path must be a non-empty string"
logger.info(f"[ConfigManager][Entry] Initializing with {config_path}")
# 2. Logic implementation
self.config_path = Path(config_path)
self.config: AppConfig = self._load_config()
# 3. Runtime check of @POST
assert isinstance(self.config, AppConfig), "self.config must be an instance of AppConfig"
logger.info(f"[ConfigManager][Exit] Initialized")
# [/DEF:__init__]
# [DEF:_load_config:Function]
# @PURPOSE: Loads the configuration from disk or creates a default one.
# @POST: isinstance(return, AppConfig)
# @RETURN: AppConfig - The loaded or default configuration.
def _load_config(self) -> AppConfig:
logger.debug(f"[_load_config][Entry] Loading from {self.config_path}")
if not self.config_path.exists():
logger.info(f"[_load_config][Action] Config file not found. Creating default.")
default_config = AppConfig(
environments=[],
settings=GlobalSettings(backup_path="backups")
)
self._save_config_to_disk(default_config)
return default_config
try:
with open(self.config_path, "r") as f:
data = json.load(f)
config = AppConfig(**data)
logger.info(f"[_load_config][Coherence:OK] Configuration loaded")
return config
except Exception as e:
logger.error(f"[_load_config][Coherence:Failed] Error loading config: {e}")
return AppConfig(
environments=[],
settings=GlobalSettings(backup_path="backups")
)
# [/DEF:_load_config]
# [DEF:_save_config_to_disk:Function]
# @PURPOSE: Saves the provided configuration object to disk.
# @PRE: isinstance(config, AppConfig)
# @PARAM: config (AppConfig) - The configuration to save.
def _save_config_to_disk(self, config: AppConfig):
logger.debug(f"[_save_config_to_disk][Entry] Saving to {self.config_path}")
# 1. Runtime check of @PRE
assert isinstance(config, AppConfig), "config must be an instance of AppConfig"
# 2. Logic implementation
try:
with open(self.config_path, "w") as f:
json.dump(config.dict(), f, indent=4)
logger.info(f"[_save_config_to_disk][Action] Configuration saved")
except Exception as e:
logger.error(f"[_save_config_to_disk][Coherence:Failed] Failed to save: {e}")
# [/DEF:_save_config_to_disk]
# [DEF:save:Function]
# @PURPOSE: Saves the current configuration state to disk.
def save(self):
self._save_config_to_disk(self.config)
# [/DEF:save]
# [DEF:get_config:Function]
# @PURPOSE: Returns the current configuration.
# @RETURN: AppConfig - The current configuration.
def get_config(self) -> AppConfig:
return self.config
# [/DEF:get_config]
# [DEF:update_global_settings:Function]
# @PURPOSE: Updates the global settings and persists the change.
# @PRE: isinstance(settings, GlobalSettings)
# @PARAM: settings (GlobalSettings) - The new global settings.
def update_global_settings(self, settings: GlobalSettings):
logger.info(f"[update_global_settings][Entry] Updating settings")
# 1. Runtime check of @PRE
assert isinstance(settings, GlobalSettings), "settings must be an instance of GlobalSettings"
# 2. Logic implementation
self.config.settings = settings
self.save()
logger.info(f"[update_global_settings][Exit] Settings updated")
# [/DEF:update_global_settings]
# [DEF:validate_path:Function]
# @PURPOSE: Validates if a path exists and is writable.
# @PARAM: path (str) - The path to validate.
# @RETURN: tuple (bool, str) - (is_valid, message)
def validate_path(self, path: str) -> tuple[bool, str]:
p = os.path.abspath(path)
if not os.path.exists(p):
try:
os.makedirs(p, exist_ok=True)
except Exception as e:
return False, f"Path does not exist and could not be created: {e}"
if not os.access(p, os.W_OK):
return False, "Path is not writable"
return True, "Path is valid and writable"
# [/DEF:validate_path]
# [DEF:get_environments:Function]
# @PURPOSE: Returns the list of configured environments.
# @RETURN: List[Environment] - List of environments.
def get_environments(self) -> List[Environment]:
return self.config.environments
# [/DEF:get_environments]
# [DEF:has_environments:Function]
# @PURPOSE: Checks if at least one environment is configured.
# @RETURN: bool - True if at least one environment exists.
def has_environments(self) -> bool:
return len(self.config.environments) > 0
# [/DEF:has_environments]
# [DEF:add_environment:Function]
# @PURPOSE: Adds a new environment to the configuration.
# @PRE: isinstance(env, Environment)
# @PARAM: env (Environment) - The environment to add.
def add_environment(self, env: Environment):
logger.info(f"[add_environment][Entry] Adding environment {env.id}")
# 1. Runtime check of @PRE
assert isinstance(env, Environment), "env must be an instance of Environment"
# 2. Logic implementation
# Check for duplicate ID and remove if exists
self.config.environments = [e for e in self.config.environments if e.id != env.id]
self.config.environments.append(env)
self.save()
logger.info(f"[add_environment][Exit] Environment added")
# [/DEF:add_environment]
# [DEF:update_environment:Function]
# @PURPOSE: Updates an existing environment.
# @PRE: isinstance(env_id, str) and len(env_id) > 0 and isinstance(updated_env, Environment)
# @PARAM: env_id (str) - The ID of the environment to update.
# @PARAM: updated_env (Environment) - The updated environment data.
# @RETURN: bool - True if updated, False otherwise.
def update_environment(self, env_id: str, updated_env: Environment) -> bool:
logger.info(f"[update_environment][Entry] Updating {env_id}")
# 1. Runtime check of @PRE
assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"
assert isinstance(updated_env, Environment), "updated_env must be an instance of Environment"
# 2. Logic implementation
for i, env in enumerate(self.config.environments):
if env.id == env_id:
# If password is masked, keep the old one
if updated_env.password == "********":
updated_env.password = env.password
self.config.environments[i] = updated_env
self.save()
logger.info(f"[update_environment][Coherence:OK] Updated {env_id}")
return True
logger.warning(f"[update_environment][Coherence:Failed] Environment {env_id} not found")
return False
# [/DEF:update_environment]
# [DEF:delete_environment:Function]
# @PURPOSE: Deletes an environment by ID.
# @PRE: isinstance(env_id, str) and len(env_id) > 0
# @PARAM: env_id (str) - The ID of the environment to delete.
def delete_environment(self, env_id: str):
logger.info(f"[delete_environment][Entry] Deleting {env_id}")
# 1. Runtime check of @PRE
assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"
# 2. Logic implementation
original_count = len(self.config.environments)
self.config.environments = [e for e in self.config.environments if e.id != env_id]
if len(self.config.environments) < original_count:
self.save()
logger.info(f"[delete_environment][Action] Deleted {env_id}")
else:
logger.warning(f"[delete_environment][Coherence:Failed] Environment {env_id} not found")
# [/DEF:delete_environment]
# [/DEF:ConfigManager]
# [/DEF:ConfigManagerModule]
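# Usage sketch (illustrative values, not executed): round-trip an environment
# through the manager. Field names follow config_models.py; passwords are
# stored as given here and masked only at the API layer.
#
#   cm = ConfigManager(config_path="config.json")
#   cm.add_environment(Environment(id="dev", name="dev", url="http://localhost:8088",
#                                  username="admin", password="secret"))
#   ok, msg = cm.validate_path(cm.get_config().settings.backup_path)
#   print(ok, msg)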

backend/src/core/config_models.py

@@ -0,0 +1,36 @@
# [DEF:ConfigModels:Module]
# @SEMANTICS: config, models, pydantic
# @PURPOSE: Defines the data models for application configuration using Pydantic.
# @LAYER: Core
# @RELATION: READS_FROM -> config.json
# @RELATION: USED_BY -> ConfigManager
from pydantic import BaseModel, Field
from typing import List, Optional
# [DEF:Environment:DataClass]
# @PURPOSE: Represents a Superset environment configuration.
class Environment(BaseModel):
id: str
name: str
url: str
username: str
password: str # Will be masked in UI
is_default: bool = False
# [/DEF:Environment]
# [DEF:GlobalSettings:DataClass]
# @PURPOSE: Represents global application settings.
class GlobalSettings(BaseModel):
backup_path: str
default_environment_id: Optional[str] = None
# [/DEF:GlobalSettings]
# [DEF:AppConfig:DataClass]
# @PURPOSE: The root configuration model containing all application settings.
class AppConfig(BaseModel):
environments: List[Environment] = []
settings: GlobalSettings
# [/DEF:AppConfig]
# [/DEF:ConfigModels]
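# Shape sketch (illustrative values): the JSON persisted by ConfigManager
# mirrors these models, e.g.
#
#   AppConfig(
#       environments=[Environment(id="dev", name="dev", url="http://localhost:8088",
#                                 username="admin", password="secret", is_default=True)],
#       settings=GlobalSettings(backup_path="backups", default_environment_id="dev"),
#   ).dict()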

backend/src/core/logger.py Executable file

@@ -0,0 +1,92 @@
# [DEF:LoggerModule:Module]
# @SEMANTICS: logging, websocket, streaming, handler
# @PURPOSE: Configures the application's logging system, including a custom handler for buffering logs and streaming them over WebSockets.
# @LAYER: Core
# @RELATION: Used by the main application and other modules to log events. The WebSocketLogHandler is used by the WebSocket endpoint in app.py.
import logging
from datetime import datetime
from typing import Dict, Any, List, Optional
from collections import deque
from pydantic import BaseModel, Field
# LogEntry is re-defined here (it also exists in task_manager) to keep a consistent shape
# [DEF:LogEntry:Class]
# @SEMANTICS: log, entry, record, pydantic
# @PURPOSE: A Pydantic model representing a single, structured log entry. This is a re-definition for consistency, as it's also defined in task_manager.py.
class LogEntry(BaseModel):
timestamp: datetime = Field(default_factory=datetime.utcnow)
level: str
message: str
context: Optional[Dict[str, Any]] = None
# [/DEF]
# [DEF:WebSocketLogHandler:Class]
# @SEMANTICS: logging, handler, websocket, buffer
# @PURPOSE: A custom logging handler that captures log records into a buffer. It is designed to be extended for real-time log streaming over WebSockets.
class WebSocketLogHandler(logging.Handler):
"""
A logging handler that stores log records and can be extended to send them
over WebSockets.
"""
def __init__(self, capacity: int = 1000):
super().__init__()
self.log_buffer: deque[LogEntry] = deque(maxlen=capacity)
# In a real implementation, you'd have a way to manage active WebSocket connections
# e.g., self.active_connections: Set[WebSocket] = set()
def emit(self, record: logging.LogRecord):
try:
log_entry = LogEntry(
level=record.levelname,
message=self.format(record),
context={
"name": record.name,
"pathname": record.pathname,
"lineno": record.lineno,
"funcName": record.funcName,
"process": record.process,
"thread": record.thread,
}
)
self.log_buffer.append(log_entry)
# Here you would typically send the log_entry to all active WebSocket connections
# for real-time streaming to the frontend.
# Example: for ws in self.active_connections: await ws.send_json(log_entry.dict())
except Exception:
self.handleError(record)
def get_recent_logs(self) -> List[LogEntry]:
"""
Returns a list of recent log entries from the buffer.
"""
return list(self.log_buffer)
# [/DEF]
# [DEF:Logger:Global]
# @SEMANTICS: logger, global, instance
# @PURPOSE: The global logger instance for the application, configured with both a console handler and the custom WebSocket handler.
logger = logging.getLogger("superset_tools_app")
logger.setLevel(logging.INFO)
# Create a formatter
formatter = logging.Formatter(
'[%(asctime)s][%(levelname)s][%(name)s] %(message)s'
)
# Add console handler
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)
# Add WebSocket log handler
websocket_log_handler = WebSocketLogHandler()
websocket_log_handler.setFormatter(formatter)
logger.addHandler(websocket_log_handler)
# Example usage:
# logger.info("Application started", extra={"context_key": "context_value"})
# logger.error("An error occurred", exc_info=True)
# [/DEF]

backend/src/core/plugin_base.py Executable file

@@ -0,0 +1,71 @@
from abc import ABC, abstractmethod
from typing import Dict, Any
from pydantic import BaseModel, Field
# [DEF:PluginBase:Class]
# @SEMANTICS: plugin, interface, base, abstract
# @PURPOSE: Defines the abstract base class that all plugins must implement to be recognized by the system. It enforces a common structure for plugin metadata and execution.
# @LAYER: Core
# @RELATION: Used by PluginLoader to identify valid plugins.
# @INVARIANT: All plugins MUST inherit from this class.
class PluginBase(ABC):
"""
Base class for all plugins.
Plugins must inherit from this class and implement the abstract methods.
"""
@property
@abstractmethod
def id(self) -> str:
"""A unique identifier for the plugin."""
pass
@property
@abstractmethod
def name(self) -> str:
"""A human-readable name for the plugin."""
pass
@property
@abstractmethod
def description(self) -> str:
"""A brief description of what the plugin does."""
pass
@property
@abstractmethod
def version(self) -> str:
"""The version of the plugin."""
pass
@abstractmethod
def get_schema(self) -> Dict[str, Any]:
"""
Returns the JSON schema for the plugin's input parameters.
This schema will be used to generate the frontend form.
"""
pass
@abstractmethod
async def execute(self, params: Dict[str, Any]):
"""
Executes the plugin's logic.
The `params` argument will be validated against the schema returned by `get_schema()`.
"""
pass
# [/DEF]
# [DEF:PluginConfig:Class]
# @SEMANTICS: plugin, config, schema, pydantic
# @PURPOSE: A Pydantic model used to represent the validated configuration and metadata of a loaded plugin. This object is what gets exposed to the API layer.
# @LAYER: Core
# @RELATION: Instantiated by PluginLoader after validating a PluginBase instance.
class PluginConfig(BaseModel):
"""Pydantic model for plugin configuration."""
id: str = Field(..., description="Unique identifier for the plugin")
name: str = Field(..., description="Human-readable name for the plugin")
description: str = Field(..., description="Brief description of what the plugin does")
version: str = Field(..., description="Version of the plugin")
input_schema: Dict[str, Any] = Field(..., description="JSON schema for input parameters", alias="schema")
# [/DEF]
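# Note: `input_schema` is declared with alias="schema", so loaders construct
# instances as PluginConfig(..., schema=plugin.get_schema()); see plugin_loader.py.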

backend/src/core/plugin_loader.py Executable file

@@ -0,0 +1,130 @@
import importlib.util
import os
import sys
from typing import Dict, Type, List, Optional
from .plugin_base import PluginBase, PluginConfig
from jsonschema import validate
# [DEF:PluginLoader:Class]
# @SEMANTICS: plugin, loader, dynamic, import
# @PURPOSE: Scans a specified directory for Python modules, dynamically loads them, and registers any classes that are valid implementations of the PluginBase interface.
# @LAYER: Core
# @RELATION: Depends on PluginBase. It is used by the main application to discover and manage available plugins.
class PluginLoader:
"""
Scans a directory for Python modules, loads them, and identifies classes
that inherit from PluginBase.
"""
def __init__(self, plugin_dir: str):
self.plugin_dir = plugin_dir
self._plugins: Dict[str, PluginBase] = {}
self._plugin_configs: Dict[str, PluginConfig] = {}
self._load_plugins()
def _load_plugins(self):
"""
Scans the plugin directory, imports modules, and registers valid plugins.
"""
if not os.path.exists(self.plugin_dir):
os.makedirs(self.plugin_dir)
# Add the plugin directory's parent to sys.path to enable relative imports within plugins
# This assumes plugin_dir is something like 'backend/src/plugins'
# and we want 'backend/src' to be on the path for 'from ..core...' imports
plugin_parent_dir = os.path.abspath(os.path.join(self.plugin_dir, os.pardir))
if plugin_parent_dir not in sys.path:
sys.path.insert(0, plugin_parent_dir)
for filename in os.listdir(self.plugin_dir):
if filename.endswith(".py") and filename != "__init__.py":
module_name = filename[:-3]
file_path = os.path.join(self.plugin_dir, filename)
self._load_module(module_name, file_path)
def _load_module(self, module_name: str, file_path: str):
"""
Loads a single Python module and extracts PluginBase subclasses.
"""
# Try to determine the correct package prefix based on how the app is running
if "backend.src" in __name__:
package_prefix = "backend.src.plugins"
else:
package_prefix = "src.plugins"
package_name = f"{package_prefix}.{module_name}"
# print(f"DEBUG: Loading plugin {module_name} as {package_name}")
spec = importlib.util.spec_from_file_location(package_name, file_path)
if spec is None or spec.loader is None:
print(f"Could not load module spec for {package_name}") # Replace with proper logging
return
module = importlib.util.module_from_spec(spec)
try:
spec.loader.exec_module(module)
except Exception as e:
print(f"Error loading plugin module {module_name}: {e}") # Replace with proper logging
return
for attribute_name in dir(module):
attribute = getattr(module, attribute_name)
if (
isinstance(attribute, type)
and issubclass(attribute, PluginBase)
and attribute is not PluginBase
):
try:
plugin_instance = attribute()
self._register_plugin(plugin_instance)
except Exception as e:
print(f"Error instantiating plugin {attribute_name} in {module_name}: {e}") # Replace with proper logging
def _register_plugin(self, plugin_instance: PluginBase):
"""
Registers a valid plugin instance.
"""
plugin_id = plugin_instance.id
if plugin_id in self._plugins:
print(f"Warning: Duplicate plugin ID '{plugin_id}' found. Skipping.") # Replace with proper logging
return
try:
schema = plugin_instance.get_schema()
# Basic validation to ensure it's a dictionary
if not isinstance(schema, dict):
raise TypeError("get_schema() must return a dictionary.")
plugin_config = PluginConfig(
id=plugin_instance.id,
name=plugin_instance.name,
description=plugin_instance.description,
version=plugin_instance.version,
schema=schema,
)
# The following line is commented out because it requires a schema to be passed to validate against.
# The schema provided by the plugin is the one being validated, not the data.
# validate(instance={}, schema=schema)
self._plugins[plugin_id] = plugin_instance
self._plugin_configs[plugin_id] = plugin_config
print(f"Plugin '{plugin_instance.name}' (ID: {plugin_id}) loaded successfully.") # Replace with proper logging
except Exception as e:
print(f"Error validating plugin '{plugin_instance.name}' (ID: {plugin_id}): {e}") # Replace with proper logging
def get_plugin(self, plugin_id: str) -> Optional[PluginBase]:
"""
Returns a loaded plugin instance by its ID.
"""
return self._plugins.get(plugin_id)
def get_all_plugin_configs(self) -> List[PluginConfig]:
"""
Returns a list of all loaded plugin configurations.
"""
return list(self._plugin_configs.values())
def has_plugin(self, plugin_id: str) -> bool:
"""
Checks if a plugin with the given ID is loaded.
"""
return plugin_id in self._plugins
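# Usage sketch (illustrative; this module uses relative imports, so it must be
# imported as part of the package rather than run as a script):
#
#   loader = PluginLoader(plugin_dir="backend/src/plugins")
#   for cfg in loader.get_all_plugin_configs():
#       print(f"{cfg.id} v{cfg.version}: {cfg.name}")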

backend/src/core/task_manager.py Executable file

@@ -0,0 +1,167 @@
# [DEF:TaskManagerModule:Module]
# @SEMANTICS: task, manager, lifecycle, execution, state
# @PURPOSE: Manages the lifecycle of tasks, including their creation, execution, and state tracking. It uses a thread pool to run plugins asynchronously.
# @LAYER: Core
# @RELATION: Depends on PluginLoader to get plugin instances. It is used by the API layer to create and query tasks.
import asyncio
import uuid
from datetime import datetime
from enum import Enum
from typing import Dict, Any, List, Optional
from concurrent.futures import ThreadPoolExecutor
from pydantic import BaseModel, Field
# Assuming PluginBase and PluginConfig are defined in plugin_base.py
# from .plugin_base import PluginBase, PluginConfig # Not needed here, TaskManager interacts with the PluginLoader
# [DEF:TaskStatus:Enum]
# @SEMANTICS: task, status, state, enum
# @PURPOSE: Defines the possible states a task can be in during its lifecycle.
class TaskStatus(str, Enum):
PENDING = "PENDING"
RUNNING = "RUNNING"
SUCCESS = "SUCCESS"
FAILED = "FAILED"
# [/DEF]
# [DEF:LogEntry:Class]
# @SEMANTICS: log, entry, record, pydantic
# @PURPOSE: A Pydantic model representing a single, structured log entry associated with a task.
class LogEntry(BaseModel):
timestamp: datetime = Field(default_factory=datetime.utcnow)
level: str
message: str
context: Optional[Dict[str, Any]] = None
# [/DEF]
# [DEF:Task:Class]
# @SEMANTICS: task, job, execution, state, pydantic
# @PURPOSE: A Pydantic model representing a single execution instance of a plugin, including its status, parameters, and logs.
class Task(BaseModel):
id: str = Field(default_factory=lambda: str(uuid.uuid4()))
plugin_id: str
status: TaskStatus = TaskStatus.PENDING
started_at: Optional[datetime] = None
finished_at: Optional[datetime] = None
user_id: Optional[str] = None
logs: List[LogEntry] = Field(default_factory=list)
params: Dict[str, Any] = Field(default_factory=dict)
# [/DEF]
# [DEF:TaskManager:Class]
# @SEMANTICS: task, manager, lifecycle, execution, state
# @PURPOSE: Manages the lifecycle of tasks, including their creation, execution, and state tracking.
class TaskManager:
"""
Manages the lifecycle of tasks, including their creation, execution, and state tracking.
"""
def __init__(self, plugin_loader):
self.plugin_loader = plugin_loader
self.tasks: Dict[str, Task] = {}
self.subscribers: Dict[str, List[asyncio.Queue]] = {}
self.executor = ThreadPoolExecutor(max_workers=5) # For CPU-bound plugin execution
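        # Captured at construction; used both to schedule coroutine tasks and to
        # post log entries from worker threads via call_soon_threadsafe.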
self.loop = asyncio.get_event_loop()
# [/DEF]
async def create_task(self, plugin_id: str, params: Dict[str, Any], user_id: Optional[str] = None) -> Task:
"""
Creates and queues a new task for execution.
"""
if not self.plugin_loader.has_plugin(plugin_id):
raise ValueError(f"Plugin with ID '{plugin_id}' not found.")
plugin = self.plugin_loader.get_plugin(plugin_id)
# Validate params against plugin schema (this will be done at a higher level, e.g., API route)
# For now, a basic check
if not isinstance(params, dict):
raise ValueError("Task parameters must be a dictionary.")
task = Task(plugin_id=plugin_id, params=params, user_id=user_id)
self.tasks[task.id] = task
self.loop.create_task(self._run_task(task.id)) # Schedule task for execution
return task
async def _run_task(self, task_id: str):
"""
Internal method to execute a task.
"""
task = self.tasks[task_id]
plugin = self.plugin_loader.get_plugin(task.plugin_id)
task.status = TaskStatus.RUNNING
task.started_at = datetime.utcnow()
self._add_log(task_id, "INFO", f"Task started for plugin '{plugin.name}'")
        try:
            # Run the plugin in a worker thread so a blocking execute() cannot
            # stall the event loop. Async plugins are driven to completion on a
            # fresh loop inside the worker thread via asyncio.run(); synchronous
            # ones are called directly.
            await self.loop.run_in_executor(
                self.executor,
                lambda: asyncio.run(plugin.execute(task.params))
                if asyncio.iscoroutinefunction(plugin.execute)
                else plugin.execute(task.params),
            )
task.status = TaskStatus.SUCCESS
self._add_log(task_id, "INFO", f"Task completed successfully for plugin '{plugin.name}'")
except Exception as e:
task.status = TaskStatus.FAILED
self._add_log(task_id, "ERROR", f"Task failed: {e}", {"error_type": type(e).__name__})
finally:
task.finished_at = datetime.utcnow()
# In a real system, you might notify clients via WebSocket here
def get_task(self, task_id: str) -> Optional[Task]:
"""
Retrieves a task by its ID.
"""
return self.tasks.get(task_id)
def get_all_tasks(self) -> List[Task]:
"""
Retrieves all registered tasks.
"""
return list(self.tasks.values())
def get_task_logs(self, task_id: str) -> List[LogEntry]:
"""
Retrieves logs for a specific task.
"""
task = self.tasks.get(task_id)
return task.logs if task else []
def _add_log(self, task_id: str, level: str, message: str, context: Optional[Dict[str, Any]] = None):
"""
Adds a log entry to a task and notifies subscribers.
"""
task = self.tasks.get(task_id)
if not task:
return
log_entry = LogEntry(level=level, message=message, context=context)
task.logs.append(log_entry)
# Notify subscribers
if task_id in self.subscribers:
for queue in self.subscribers[task_id]:
self.loop.call_soon_threadsafe(queue.put_nowait, log_entry)
async def subscribe_logs(self, task_id: str) -> asyncio.Queue:
"""
Subscribes to real-time logs for a task.
"""
queue = asyncio.Queue()
if task_id not in self.subscribers:
self.subscribers[task_id] = []
self.subscribers[task_id].append(queue)
return queue
def unsubscribe_logs(self, task_id: str, queue: asyncio.Queue):
"""
Unsubscribes from real-time logs for a task.
"""
        if task_id in self.subscribers:
            try:
                self.subscribers[task_id].remove(queue)
            except ValueError:
                pass  # Already removed; a double unsubscribe is harmless.
            if not self.subscribers[task_id]:
                del self.subscribers[task_id]
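# Async usage sketch (illustrative, not executed): create a task and drain its
# log queue. The completion markers match the messages emitted in _run_task.
#
#   async def demo(manager: TaskManager):
#       task = await manager.create_task("superset-backup",
#                                        {"env": "dev", "backup_path": "backups"})
#       queue = await manager.subscribe_logs(task.id)
#       try:
#           while True:
#               entry = await queue.get()
#               print(entry.level, entry.message)
#               if ("Task completed successfully" in entry.message
#                       or "Task failed" in entry.message):
#                   break
#       finally:
#           manager.unsubscribe_logs(task.id, queue)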

backend/src/dependencies.py Executable file

@@ -0,0 +1,33 @@
# [DEF:Dependencies:Module]
# @SEMANTICS: dependency, injection, singleton, factory
# @PURPOSE: Manages the creation and provision of shared application dependencies, such as the PluginLoader and TaskManager, to avoid circular imports.
# @LAYER: Core
# @RELATION: Used by the main app and API routers to get access to shared instances.
from pathlib import Path
from .core.plugin_loader import PluginLoader
from .core.task_manager import TaskManager
from .core.config_manager import ConfigManager
# Initialize singletons
# Use absolute path relative to this file to ensure plugins are found regardless of CWD
project_root = Path(__file__).parent.parent.parent
config_path = project_root / "config.json"
config_manager = ConfigManager(config_path=str(config_path))
def get_config_manager() -> ConfigManager:
"""Dependency injector for the ConfigManager."""
return config_manager
plugin_dir = Path(__file__).parent / "plugins"
plugin_loader = PluginLoader(plugin_dir=str(plugin_dir))
task_manager = TaskManager(plugin_loader)
def get_plugin_loader() -> PluginLoader:
"""Dependency injector for the PluginLoader."""
return plugin_loader
def get_task_manager() -> TaskManager:
"""Dependency injector for the TaskManager."""
return task_manager
# [/DEF]
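# Example consumer (illustrative): API routes receive these singletons via
# FastAPI's dependency injection, e.g.
#
#   from fastapi import APIRouter, Depends
#
#   router = APIRouter()
#
#   @router.get("/count")
#   def task_count(tm: TaskManager = Depends(get_task_manager)):
#       return {"count": len(tm.get_all_tasks())}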

backend/src/plugins/backup.py Executable file

@@ -0,0 +1,133 @@
# [DEF:BackupPlugin:Module]
# @SEMANTICS: backup, superset, automation, dashboard, plugin
# @PURPOSE: A plugin that provides functionality to back up Superset dashboards.
# @LAYER: App
# @RELATION: IMPLEMENTS -> PluginBase
# @RELATION: DEPENDS_ON -> superset_tool.client
# @RELATION: DEPENDS_ON -> superset_tool.utils
from typing import Dict, Any
from pathlib import Path
from requests.exceptions import RequestException
from ..core.plugin_base import PluginBase
from superset_tool.client import SupersetClient
from superset_tool.exceptions import SupersetAPIError
from superset_tool.utils.logger import SupersetLogger
from superset_tool.utils.fileio import (
save_and_unpack_dashboard,
archive_exports,
sanitize_filename,
consolidate_archive_folders,
remove_empty_directories,
RetentionPolicy
)
from superset_tool.utils.init_clients import setup_clients
from ..dependencies import get_config_manager
class BackupPlugin(PluginBase):
"""
A plugin to back up Superset dashboards.
"""
@property
def id(self) -> str:
return "superset-backup"
@property
def name(self) -> str:
return "Superset Dashboard Backup"
@property
def description(self) -> str:
return "Backs up all dashboards from a Superset instance."
@property
def version(self) -> str:
return "1.0.0"
def get_schema(self) -> Dict[str, Any]:
config_manager = get_config_manager()
envs = [e.name for e in config_manager.get_environments()]
default_path = config_manager.get_config().settings.backup_path
return {
"type": "object",
"properties": {
"env": {
"type": "string",
"title": "Environment",
"description": "The Superset environment to back up.",
"enum": envs if envs else [],
},
"backup_path": {
"type": "string",
"title": "Backup Path",
"description": "The root directory to save backups to.",
"default": default_path
}
},
"required": ["env", "backup_path"],
}
async def execute(self, params: Dict[str, Any]):
env = params["env"]
backup_path = Path(params["backup_path"])
logger = SupersetLogger(log_dir=backup_path / "Logs", console=True)
logger.info(f"[BackupPlugin][Entry] Starting backup for {env}.")
try:
config_manager = get_config_manager()
if not config_manager.has_environments():
raise ValueError("No Superset environments configured. Please add an environment in Settings.")
clients = setup_clients(logger, custom_envs=config_manager.get_environments())
client = clients.get(env)
if not client:
raise ValueError(f"Environment '{env}' not found in configuration.")
dashboard_count, dashboard_meta = client.get_dashboards()
logger.info(f"[BackupPlugin][Progress] Found {dashboard_count} dashboards to export in {env}.")
if dashboard_count == 0:
logger.info("[BackupPlugin][Exit] No dashboards to back up.")
return
for db in dashboard_meta:
dashboard_id = db.get('id')
dashboard_title = db.get('dashboard_title', 'Unknown Dashboard')
if not dashboard_id:
continue
try:
dashboard_base_dir_name = sanitize_filename(f"{dashboard_title}")
dashboard_dir = backup_path / env.upper() / dashboard_base_dir_name
dashboard_dir.mkdir(parents=True, exist_ok=True)
zip_content, filename = client.export_dashboard(dashboard_id)
save_and_unpack_dashboard(
zip_content=zip_content,
original_filename=filename,
output_dir=dashboard_dir,
unpack=False,
logger=logger
)
archive_exports(str(dashboard_dir), policy=RetentionPolicy(), logger=logger)
except (SupersetAPIError, RequestException, IOError, OSError) as db_error:
logger.error(f"[BackupPlugin][Failure] Failed to export dashboard {dashboard_title} (ID: {dashboard_id}): {db_error}", exc_info=True)
continue
consolidate_archive_folders(backup_path / env.upper(), logger=logger)
remove_empty_directories(str(backup_path / env.upper()), logger=logger)
logger.info(f"[BackupPlugin][CoherenceCheck:Passed] Backup logic completed for {env}.")
except (RequestException, IOError, KeyError) as e:
logger.critical(f"[BackupPlugin][Failure] Fatal error during backup for {env}: {e}", exc_info=True)
raise e
# [/DEF:BackupPlugin]
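# Example parameter payload (illustrative) accepted by execute(), matching the
# schema returned by get_schema() above:
#
#   {"env": "dev", "backup_path": "backups"}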

backend/src/plugins/migration.py Executable file

@@ -0,0 +1,158 @@
# [DEF:MigrationPlugin:Module]
# @SEMANTICS: migration, superset, automation, dashboard, plugin
# @PURPOSE: A plugin that provides functionality to migrate Superset dashboards between environments.
# @LAYER: App
# @RELATION: IMPLEMENTS -> PluginBase
# @RELATION: DEPENDS_ON -> superset_tool.client
# @RELATION: DEPENDS_ON -> superset_tool.utils
from typing import Dict, Any, List
from pathlib import Path
import zipfile
import re
from ..core.plugin_base import PluginBase
from superset_tool.client import SupersetClient
from superset_tool.utils.init_clients import setup_clients
from superset_tool.utils.fileio import create_temp_file, update_yamls, create_dashboard_export
from ..dependencies import get_config_manager
from superset_tool.utils.logger import SupersetLogger
class MigrationPlugin(PluginBase):
"""
A plugin to migrate Superset dashboards between environments.
"""
@property
def id(self) -> str:
return "superset-migration"
@property
def name(self) -> str:
return "Superset Dashboard Migration"
@property
def description(self) -> str:
return "Migrates dashboards between Superset environments."
@property
def version(self) -> str:
return "1.0.0"
def get_schema(self) -> Dict[str, Any]:
config_manager = get_config_manager()
envs = [e.name for e in config_manager.get_environments()]
return {
"type": "object",
"properties": {
"from_env": {
"type": "string",
"title": "Source Environment",
"description": "The environment to migrate from.",
"enum": envs if envs else ["dev", "prod"],
},
"to_env": {
"type": "string",
"title": "Target Environment",
"description": "The environment to migrate to.",
"enum": envs if envs else ["dev", "prod"],
},
"dashboard_regex": {
"type": "string",
"title": "Dashboard Regex",
"description": "A regular expression to filter dashboards to migrate.",
},
"replace_db_config": {
"type": "boolean",
"title": "Replace DB Config",
"description": "Whether to replace the database configuration.",
"default": False,
},
"from_db_id": {
"type": "integer",
"title": "Source DB ID",
"description": "The ID of the source database to replace (if replacing).",
},
"to_db_id": {
"type": "integer",
"title": "Target DB ID",
"description": "The ID of the target database to replace with (if replacing).",
},
},
"required": ["from_env", "to_env", "dashboard_regex"],
}
async def execute(self, params: Dict[str, Any]):
from_env = params["from_env"]
to_env = params["to_env"]
dashboard_regex = params["dashboard_regex"]
replace_db_config = params.get("replace_db_config", False)
from_db_id = params.get("from_db_id")
to_db_id = params.get("to_db_id")
logger = SupersetLogger(log_dir=Path.cwd() / "logs", console=True)
logger.info(f"[MigrationPlugin][Entry] Starting migration from {from_env} to {to_env}.")
try:
config_manager = get_config_manager()
all_clients = setup_clients(logger, custom_envs=config_manager.get_environments())
from_c = all_clients.get(from_env)
to_c = all_clients.get(to_env)
if not from_c or not to_c:
raise ValueError(f"One or both environments ('{from_env}', '{to_env}') not found in configuration.")
_, all_dashboards = from_c.get_dashboards()
regex_str = str(dashboard_regex)
dashboards_to_migrate = [
d for d in all_dashboards if re.search(regex_str, d["dashboard_title"], re.IGNORECASE)
]
if not dashboards_to_migrate:
logger.warning("[MigrationPlugin][State] No dashboards found matching the regex.")
return
db_config_replacement = None
if replace_db_config:
if from_db_id is None or to_db_id is None:
raise ValueError("Source and target database IDs are required when replacing database configuration.")
from_db = from_c.get_database(int(from_db_id))
to_db = to_c.get_database(int(to_db_id))
old_result = from_db.get("result", {})
new_result = to_db.get("result", {})
db_config_replacement = {
"old": {"database_name": old_result.get("database_name"), "uuid": old_result.get("uuid"), "id": str(from_db.get("id"))},
"new": {"database_name": new_result.get("database_name"), "uuid": new_result.get("uuid"), "id": str(to_db.get("id"))}
}
for dash in dashboards_to_migrate:
dash_id, dash_slug, title = dash["id"], dash.get("slug"), dash["dashboard_title"]
try:
exported_content, _ = from_c.export_dashboard(dash_id)
with create_temp_file(content=exported_content, dry_run=True, suffix=".zip", logger=logger) as tmp_zip_path:
if not db_config_replacement:
to_c.import_dashboard(file_name=tmp_zip_path, dash_id=dash_id, dash_slug=dash_slug)
else:
with create_temp_file(suffix=".dir", logger=logger) as tmp_unpack_dir:
with zipfile.ZipFile(tmp_zip_path, "r") as zip_ref:
zip_ref.extractall(tmp_unpack_dir)
update_yamls(db_configs=[db_config_replacement], path=str(tmp_unpack_dir))
with create_temp_file(suffix=".zip", dry_run=True, logger=logger) as tmp_new_zip:
create_dashboard_export(zip_path=tmp_new_zip, source_paths=[str(p) for p in Path(tmp_unpack_dir).glob("**/*")])
to_c.import_dashboard(file_name=tmp_new_zip, dash_id=dash_id, dash_slug=dash_slug)
logger.info(f"[MigrationPlugin][Success] Dashboard {title} imported.")
except Exception as exc:
logger.error(f"[MigrationPlugin][Failure] Failed to migrate dashboard {title}: {exc}", exc_info=True)
logger.info("[MigrationPlugin][Exit] Migration finished.")
except Exception as e:
logger.critical(f"[MigrationPlugin][Failure] Fatal error during migration: {e}", exc_info=True)
raise e
# [/DEF:MigrationPlugin]
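# Example parameter payload (illustrative) accepted by execute(), matching the
# schema returned by get_schema() above:
#
#   {
#       "from_env": "dev",
#       "to_env": "prod",
#       "dashboard_regex": "^Sales",
#       "replace_db_config": True,
#       "from_db_id": 1,
#       "to_db_id": 2,
#   }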


@@ -0,0 +1,49 @@
import pytest
from superset_tool.models import SupersetConfig
def test_superset_config_url_normalization():
auth = {
"provider": "db",
"username": "admin",
"password": "password",
"refresh": "token"
}
# Test with /api/v1 already present
config = SupersetConfig(
env="dev",
base_url="http://localhost:8088/api/v1",
auth=auth
)
assert config.base_url == "http://localhost:8088/api/v1"
# Test without /api/v1
config = SupersetConfig(
env="dev",
base_url="http://localhost:8088",
auth=auth
)
assert config.base_url == "http://localhost:8088/api/v1"
# Test with trailing slash
config = SupersetConfig(
env="dev",
base_url="http://localhost:8088/",
auth=auth
)
assert config.base_url == "http://localhost:8088/api/v1"
def test_superset_config_invalid_url():
auth = {
"provider": "db",
"username": "admin",
"password": "password",
"refresh": "token"
}
with pytest.raises(ValueError, match="Must start with http:// or https://"):
SupersetConfig(
env="dev",
base_url="localhost:8088",
auth=auth
)

backup_script.py Normal file → Executable file

@@ -1,146 +1,163 @@
# pylint: disable=too-many-arguments,too-many-locals,too-many-statements,too-many-branches,unused-argument,invalid-name,redefined-outer-name
"""
[MODULE] Superset Dashboard Backup Script
@contract: Автоматизирует процесс резервного копирования дашбордов Superset.
"""
# [IMPORTS] Стандартная библиотека
import logging
import sys
from pathlib import Path
from dataclasses import dataclass
# [IMPORTS] Third-party
from requests.exceptions import RequestException
# [IMPORTS] Local modules
from superset_tool.client import SupersetClient
from superset_tool.exceptions import SupersetAPIError
from superset_tool.utils.logger import SupersetLogger
from superset_tool.utils.fileio import (
save_and_unpack_dashboard,
archive_exports,
sanitize_filename,
consolidate_archive_folders,
remove_empty_directories
)
from superset_tool.utils.init_clients import setup_clients
# [ENTITY: Dataclass('BackupConfig')]
# CONTRACT:
# PURPOSE: Stores the configuration for the backup process.
@dataclass
class BackupConfig:
    """Configuration for the backup process."""
consolidate: bool = True
rotate_archive: bool = True
clean_folders: bool = True
# [ENTITY: Function('backup_dashboards')]
# CONTRACT:
# PURPOSE: Backs up all available dashboards for the given client and environment.
# PRECONDITIONS:
# - `client` must be an initialized instance of `SupersetClient`.
# - `env_name` must be a string identifying the environment.
# - `backup_root` must be a valid path to the backup root directory.
# POSTCONDITIONS:
# - Dashboards are exported and saved.
# - Returns `True` if all dashboards were exported without critical errors, `False` otherwise.
def backup_dashboards(
client: SupersetClient,
env_name: str,
backup_root: Path,
logger: SupersetLogger,
config: BackupConfig
) -> bool:
logger.info(f"[STATE][backup_dashboards][ENTER] Starting backup for {env_name}.")
try:
dashboard_count, dashboard_meta = client.get_dashboards()
logger.info(f"[STATE][backup_dashboards][PROGRESS] Found {dashboard_count} dashboards to export in {env_name}.")
if dashboard_count == 0:
return True
success_count = 0
for db in dashboard_meta:
dashboard_id = db.get('id')
dashboard_title = db.get('dashboard_title', 'Unknown Dashboard')
if not dashboard_id:
continue
try:
dashboard_base_dir_name = sanitize_filename(f"{dashboard_title}")
dashboard_dir = backup_root / env_name / dashboard_base_dir_name
dashboard_dir.mkdir(parents=True, exist_ok=True)
zip_content, filename = client.export_dashboard(dashboard_id)
save_and_unpack_dashboard(
zip_content=zip_content,
original_filename=filename,
output_dir=dashboard_dir,
unpack=False,
logger=logger
)
if config.rotate_archive:
archive_exports(str(dashboard_dir), logger=logger)
success_count += 1
except (SupersetAPIError, RequestException, IOError, OSError) as db_error:
logger.error(f"[STATE][backup_dashboards][FAILURE] Failed to export dashboard {dashboard_title}: {db_error}", exc_info=True)
if config.consolidate:
            consolidate_archive_folders(backup_root / env_name, logger=logger)
if config.clean_folders:
remove_empty_directories(str(backup_root / env_name), logger=logger)
return success_count == dashboard_count
except (RequestException, IOError) as e:
logger.critical(f"[STATE][backup_dashboards][FAILURE] Fatal error during backup for {env_name}: {e}", exc_info=True)
return False
# END_FUNCTION_backup_dashboards
# [ENTITY: Function('main')]
# CONTRACT:
# PURPOSE: Main entry point of the script.
# PRECONDITIONS: None
# POSTCONDITIONS: Returns an exit code.
def main() -> int:
log_dir = Path("P:\\Superset\\010 Бекапы\\Logs")
logger = SupersetLogger(log_dir=log_dir, level=logging.INFO, console=True)
logger.info("[STATE][main][ENTER] Starting Superset backup process.")
exit_code = 0
try:
clients = setup_clients(logger)
superset_backup_repo = Path("P:\\Superset\\010 Бекапы")
superset_backup_repo.mkdir(parents=True, exist_ok=True)
results = {}
environments = ['dev', 'sbx', 'prod', 'preprod']
backup_config = BackupConfig(rotate_archive=True)
for env in environments:
results[env] = backup_dashboards(
clients[env],
env.upper(),
superset_backup_repo,
logger=logger,
config=backup_config
)
if not all(results.values()):
exit_code = 1
except (RequestException, IOError) as e:
logger.critical(f"[STATE][main][FAILURE] Fatal error in main execution: {e}", exc_info=True)
exit_code = 1
logger.info("[STATE][main][SUCCESS] Superset backup process finished.")
return exit_code
# END_FUNCTION_main
if __name__ == "__main__":
sys.exit(main())
# [DEF:backup_script:Module]
#
# @SEMANTICS: backup, superset, automation, dashboard
# @PURPOSE: This module is responsible for automated backup of Superset dashboards.
# @LAYER: App
# @RELATION: DEPENDS_ON -> superset_tool.client
# @RELATION: DEPENDS_ON -> superset_tool.utils
# @PUBLIC_API: BackupConfig, backup_dashboards, main
# [SECTION: IMPORTS]
import logging
import sys
from pathlib import Path
from dataclasses import dataclass, field
from requests.exceptions import RequestException
from superset_tool.client import SupersetClient
from superset_tool.exceptions import SupersetAPIError
from superset_tool.utils.logger import SupersetLogger
from superset_tool.utils.fileio import (
save_and_unpack_dashboard,
archive_exports,
sanitize_filename,
consolidate_archive_folders,
remove_empty_directories,
RetentionPolicy
)
from superset_tool.utils.init_clients import setup_clients
# [/SECTION]
# [DEF:BackupConfig:DataClass]
# @PURPOSE: Stores the configuration for the backup process.
@dataclass
class BackupConfig:
    """Configuration for the backup process."""
consolidate: bool = True
rotate_archive: bool = True
clean_folders: bool = True
retention_policy: RetentionPolicy = field(default_factory=RetentionPolicy)
# [/DEF:BackupConfig]
# [DEF:backup_dashboards:Function]
# @PURPOSE: Backs up all available dashboards for the given client and environment, skipping per-dashboard export errors.
# @PRE: `client` must be an initialized instance of `SupersetClient`.
# @PRE: `env_name` must be a string identifying the environment.
# @PRE: `backup_root` must be a valid path to the backup root directory.
# @POST: Dashboards are exported and saved. Export errors are logged and do not stop the script.
# @RELATION: CALLS -> client.get_dashboards
# @RELATION: CALLS -> client.export_dashboard
# @RELATION: CALLS -> save_and_unpack_dashboard
# @RELATION: CALLS -> archive_exports
# @RELATION: CALLS -> consolidate_archive_folders
# @RELATION: CALLS -> remove_empty_directories
# @PARAM: client (SupersetClient) - Client for accessing the Superset API.
# @PARAM: env_name (str) - Environment name (e.g., 'PROD').
# @PARAM: backup_root (Path) - Root directory for saving backups.
# @PARAM: logger (SupersetLogger) - Logger instance.
# @PARAM: config (BackupConfig) - Backup process configuration.
# @RETURN: bool - `True` if all dashboards were exported without critical errors, `False` otherwise.
def backup_dashboards(
client: SupersetClient,
env_name: str,
backup_root: Path,
logger: SupersetLogger,
config: BackupConfig
) -> bool:
logger.info(f"[backup_dashboards][Entry] Starting backup for {env_name}.")
try:
dashboard_count, dashboard_meta = client.get_dashboards()
logger.info(f"[backup_dashboards][Progress] Found {dashboard_count} dashboards to export in {env_name}.")
if dashboard_count == 0:
return True
success_count = 0
for db in dashboard_meta:
dashboard_id = db.get('id')
dashboard_title = db.get('dashboard_title', 'Unknown Dashboard')
if not dashboard_id:
continue
try:
dashboard_base_dir_name = sanitize_filename(f"{dashboard_title}")
dashboard_dir = backup_root / env_name / dashboard_base_dir_name
dashboard_dir.mkdir(parents=True, exist_ok=True)
zip_content, filename = client.export_dashboard(dashboard_id)
save_and_unpack_dashboard(
zip_content=zip_content,
original_filename=filename,
output_dir=dashboard_dir,
unpack=False,
logger=logger
)
if config.rotate_archive:
archive_exports(str(dashboard_dir), policy=config.retention_policy, logger=logger)
success_count += 1
except (SupersetAPIError, RequestException, IOError, OSError) as db_error:
logger.error(f"[backup_dashboards][Failure] Failed to export dashboard {dashboard_title} (ID: {dashboard_id}): {db_error}", exc_info=True)
continue
if config.consolidate:
            consolidate_archive_folders(backup_root / env_name, logger=logger)
if config.clean_folders:
remove_empty_directories(str(backup_root / env_name), logger=logger)
logger.info(f"[backup_dashboards][CoherenceCheck:Passed] Backup logic completed.")
return success_count == dashboard_count
except (RequestException, IOError) as e:
logger.critical(f"[backup_dashboards][Failure] Fatal error during backup for {env_name}: {e}", exc_info=True)
return False
# [/DEF:backup_dashboards]
# [DEF:main:Function]
# @PURPOSE: Main entry point for running the backup process.
# @RELATION: CALLS -> setup_clients
# @RELATION: CALLS -> backup_dashboards
# @RETURN: int - Exit code (0 = success, 1 = failure).
def main() -> int:
log_dir = Path("P:\\Superset\\010 Бекапы\\Logs")
logger = SupersetLogger(log_dir=log_dir, level=logging.INFO, console=True)
logger.info("[main][Entry] Starting Superset backup process.")
exit_code = 0
try:
clients = setup_clients(logger)
superset_backup_repo = Path("P:\\Superset\\010 Бекапы")
superset_backup_repo.mkdir(parents=True, exist_ok=True)
results = {}
environments = ['dev', 'sbx', 'prod', 'preprod']
backup_config = BackupConfig(rotate_archive=True)
for env in environments:
try:
results[env] = backup_dashboards(
clients[env],
env.upper(),
superset_backup_repo,
logger=logger,
config=backup_config
)
except Exception as env_error:
logger.critical(f"[main][Failure] Critical error for environment {env}: {env_error}", exc_info=True)
results[env] = False
if not all(results.values()):
exit_code = 1
except (RequestException, IOError) as e:
logger.critical(f"[main][Failure] Fatal error in main execution: {e}", exc_info=True)
exit_code = 1
logger.info("[main][Exit] Superset backup process finished.")
return exit_code
# [/DEF:main]
if __name__ == "__main__":
sys.exit(main())
# [/DEF:backup_script]

debug_db_api.py Executable file

@@ -0,0 +1,79 @@
# [DEF:debug_db_api:Module]
#
# @SEMANTICS: debug, api, database, script
# @PURPOSE: Script for debugging the structure of the databases API response.
# @LAYER: App
# @RELATION: DEPENDS_ON -> superset_tool.client
# @RELATION: DEPENDS_ON -> superset_tool.utils
# @PUBLIC_API: debug_database_api
# [SECTION: IMPORTS]
import json
import logging
from superset_tool.client import SupersetClient
from superset_tool.utils.init_clients import setup_clients
from superset_tool.utils.logger import SupersetLogger
# [/SECTION]
# [DEF:debug_database_api:Function]
# @PURPOSE: Debugs the structure of the databases API response.
# @RELATION: CALLS -> setup_clients
# @RELATION: CALLS -> client.get_databases
def debug_database_api():
logger = SupersetLogger(name="debug_db_api", level=logging.DEBUG)
    # Initialize the clients
clients = setup_clients(logger)
# Log JWT bearer tokens for each client
for env_name, client in clients.items():
try:
# Ensure authentication (access token fetched via headers property)
_ = client.headers
token = client.network._tokens.get("access_token")
logger.info(f"[debug_database_api][Token] Bearer token for {env_name}: {token}")
except Exception as exc:
logger.error(f"[debug_database_api][Token] Failed to retrieve token for {env_name}: {exc}", exc_info=True)
    # List the available environments
    print("Available environments:")
for env_name, client in clients.items():
print(f" {env_name}: {client.config.base_url}")
    # Pick two environments for testing
    if len(clients) < 2:
        print("Not enough environments for testing")
return
env_names = list(clients.keys())[:2]
from_env, to_env = env_names[0], env_names[1]
from_client = clients[from_env]
to_client = clients[to_env]
print(f"\nТестируем API для окружений: {from_env} -> {to_env}")
try:
        # Fetch the list of databases from the first environment
        print(f"\nFetching the database list from {from_env}:")
        count, dbs = from_client.get_databases()
        print(f"Found {count} databases")
        print("Full API response:")
print(json.dumps({"count": count, "result": dbs}, indent=2, ensure_ascii=False))
        # Fetch the list of databases from the second environment
        print(f"\nFetching the database list from {to_env}:")
        count, dbs = to_client.get_databases()
        print(f"Found {count} databases")
        print("Full API response:")
print(json.dumps({"count": count, "result": dbs}, indent=2, ensure_ascii=False))
except Exception as e:
print(f"Ошибка при тестировании API: {e}")
import traceback
traceback.print_exc()
# [/DEF:debug_database_api]
if __name__ == "__main__":
debug_database_api()
# [/DEF:debug_db_api]

docs/plugin_dev.md Executable file

@@ -0,0 +1,87 @@
# Plugin Development Guide
This guide explains how to create new plugins for the Superset Tools application.
## 1. Plugin Structure
A plugin is a single Python file located in the `backend/src/plugins/` directory. Each plugin file must contain a class that inherits from `PluginBase`.
## 2. Implementing `PluginBase`
The `PluginBase` class is an abstract base class that defines the interface for all plugins. You must implement the following properties and methods:
- **`id`**: A unique string identifier for your plugin (e.g., `"my-cool-plugin"`).
- **`name`**: A human-readable name for your plugin (e.g., `"My Cool Plugin"`).
- **`description`**: A brief description of what your plugin does.
- **`version`**: The version of your plugin (e.g., `"1.0.0"`).
- **`get_schema()`**: A method that returns a JSON schema dictionary defining the input parameters for your plugin. This schema is used to automatically generate a form in the frontend.
- **`execute(params: Dict[str, Any])`**: An `async` method that contains the main logic of your plugin. The `params` argument is a dictionary containing the input data from the user, validated against the schema you defined.
## 3. Example Plugin
Here is an example of a simple "Hello World" plugin:
```python
# backend/src/plugins/hello.py
# [DEF:HelloWorldPlugin:Plugin]
# @SEMANTICS: hello, world, example, plugin
# @PURPOSE: A simple "Hello World" plugin example.
# @LAYER: Domain (Plugin)
# @RELATION: Inherits from PluginBase
# @PUBLIC_API: execute
from typing import Dict, Any
from ..core.plugin_base import PluginBase
class HelloWorldPlugin(PluginBase):
@property
def id(self) -> str:
return "hello-world"
@property
def name(self) -> str:
return "Hello World"
@property
def description(self) -> str:
return "A simple plugin that prints a greeting."
@property
def version(self) -> str:
return "1.0.0"
def get_schema(self) -> Dict[str, Any]:
return {
"type": "object",
"properties": {
"name": {
"type": "string",
"title": "Name",
"description": "The name to greet.",
"default": "World",
}
},
"required": ["name"],
}
async def execute(self, params: Dict[str, Any]):
name = params["name"]
print(f"Hello, {name}!")
```
## 4. Logging
You can use the global logger instance to log messages from your plugin. The logger is available in the `superset_tool.utils.logger` module.
```python
from typing import Dict, Any
from superset_tool.utils.logger import SupersetLogger

logger = SupersetLogger()

class MyPlugin(PluginBase):  # other members as in the example above
    async def execute(self, params: Dict[str, Any]):
        logger.info("My plugin is running!")
```
## 5. Testing
To test your plugin, simply run the application and navigate to the web UI. Your plugin should appear in the list of available tools.
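For a faster feedback loop you can also unit-test a plugin directly. A minimal sketch, assuming `pytest` with the `pytest-asyncio` plugin (the exact import path depends on how you invoke pytest):
```python
import pytest
from backend.src.plugins.hello import HelloWorldPlugin

@pytest.mark.asyncio
async def test_hello_world_executes():
    plugin = HelloWorldPlugin()
    assert plugin.id == "hello-world"
    await plugin.execute({"name": "Tester"})
```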

docs/settings.md Normal file

@@ -0,0 +1,46 @@
# Web Application Settings Mechanism
This document describes the settings management system for the Superset Tools application.
## Overview
The settings mechanism allows users to configure multiple Superset environments and global application settings (like backup storage) via the web UI.
## Backend Architecture
### Data Models
Configuration is structured using Pydantic models in `backend/src/core/config_models.py`:
- `Environment`: Represents a Superset instance (URL, credentials). The URL is normalized to include the `/api/v1` suffix if missing when Superset clients are built from it (see `SupersetConfig` and its tests).
- `GlobalSettings`: Global application parameters (e.g., `backup_path`).
- `AppConfig`: The root configuration object.
### Configuration Manager
The `ConfigManager` (`backend/src/core/config_manager.py`) handles:
- Persistence to `config.json`.
- CRUD operations for environments.
- Validation and logging.
### API Endpoints
The settings API is mounted at `/api/settings` (see `app.py`); a usage sketch follows the list:
- `GET /api/settings`: Retrieve all settings (passwords are masked).
- `PATCH /api/settings/global`: Update global settings.
- `GET /api/settings/environments`: List environments.
- `POST /api/settings/environments`: Add environment.
- `PUT /api/settings/environments/{id}`: Update environment.
- `DELETE /api/settings/environments/{id}`: Remove environment.
- `POST /api/settings/environments/{id}/test`: Test connection.
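A quick smoke test of these endpoints, assuming the backend runs locally on port 8000 (adjust the base URL to your deployment):
```python
import requests

BASE = "http://localhost:8000/api/settings"

print(requests.get(BASE).json())  # passwords come back masked
requests.post(f"{BASE}/environments", json={
    "id": "dev", "name": "dev", "url": "http://localhost:8088",
    "username": "admin", "password": "secret", "is_default": True,
})
```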
## Frontend Implementation
The settings page is located at `frontend/src/pages/Settings.svelte`. It provides forms for managing global settings and Superset environments.
## Integration
Existing plugins and utilities use the `ConfigManager` to fetch configuration:
- `superset_tool/utils/init_clients.py`: Dynamically initializes Superset clients from the configured environments.
- `BackupPlugin`: Uses the configured `backup_path` as the default storage location.

frontend/.svelte-kit/ambient.d.ts vendored Normal file

@@ -0,0 +1,186 @@
// this file is generated — do not edit it
/// <reference types="@sveltejs/kit" />
/**
* Environment variables [loaded by Vite](https://vitejs.dev/guide/env-and-mode.html#env-files) from `.env` files and `process.env`. Like [`$env/dynamic/private`](https://svelte.dev/docs/kit/$env-dynamic-private), this module cannot be imported into client-side code. This module only includes variables that _do not_ begin with [`config.kit.env.publicPrefix`](https://svelte.dev/docs/kit/configuration#env) _and do_ start with [`config.kit.env.privatePrefix`](https://svelte.dev/docs/kit/configuration#env) (if configured).
*
* _Unlike_ [`$env/dynamic/private`](https://svelte.dev/docs/kit/$env-dynamic-private), the values exported from this module are statically injected into your bundle at build time, enabling optimisations like dead code elimination.
*
* ```ts
* import { API_KEY } from '$env/static/private';
* ```
*
* Note that all environment variables referenced in your code should be declared (for example in an `.env` file), even if they don't have a value until the app is deployed:
*
* ```
* MY_FEATURE_FLAG=""
* ```
*
* You can override `.env` values from the command line like so:
*
* ```sh
* MY_FEATURE_FLAG="enabled" npm run dev
* ```
*/
declare module '$env/static/private' {
export const LESSOPEN: string;
export const USER: string;
export const npm_config_user_agent: string;
export const npm_node_execpath: string;
export const SHLVL: string;
export const npm_config_noproxy: string;
export const HOME: string;
export const OLDPWD: string;
export const npm_package_json: string;
export const PS1: string;
export const npm_config_userconfig: string;
export const npm_config_local_prefix: string;
export const DBUS_SESSION_BUS_ADDRESS: string;
export const WSL_DISTRO_NAME: string;
export const COLOR: string;
export const WAYLAND_DISPLAY: string;
export const LOGNAME: string;
export const NAME: string;
export const WSL_INTEROP: string;
export const PULSE_SERVER: string;
export const _: string;
export const npm_config_prefix: string;
export const npm_config_npm_version: string;
export const TERM: string;
export const npm_config_cache: string;
export const npm_config_node_gyp: string;
export const PATH: string;
export const NODE: string;
export const npm_package_name: string;
export const XDG_RUNTIME_DIR: string;
export const DISPLAY: string;
export const LANG: string;
export const VIRTUAL_ENV_PROMPT: string;
export const LS_COLORS: string;
export const npm_lifecycle_script: string;
export const SHELL: string;
export const npm_package_version: string;
export const npm_lifecycle_event: string;
export const GOOGLE_CLOUD_PROJECT: string;
export const LESSCLOSE: string;
export const VIRTUAL_ENV: string;
export const npm_config_globalconfig: string;
export const npm_config_init_module: string;
export const PWD: string;
export const npm_execpath: string;
export const XDG_DATA_DIRS: string;
export const npm_config_global_prefix: string;
export const npm_command: string;
export const WSL2_GUI_APPS_ENABLED: string;
export const HOSTTYPE: string;
export const WSLENV: string;
export const INIT_CWD: string;
export const EDITOR: string;
export const NODE_ENV: string;
}
/**
* Similar to [`$env/static/private`](https://svelte.dev/docs/kit/$env-static-private), except that it only includes environment variables that begin with [`config.kit.env.publicPrefix`](https://svelte.dev/docs/kit/configuration#env) (which defaults to `PUBLIC_`), and can therefore safely be exposed to client-side code.
*
* Values are replaced statically at build time.
*
* ```ts
* import { PUBLIC_BASE_URL } from '$env/static/public';
* ```
*/
declare module '$env/static/public' {
export const PUBLIC_WS_URL: string;
}
/**
* This module provides access to runtime environment variables, as defined by the platform you're running on. For example if you're using [`adapter-node`](https://github.com/sveltejs/kit/tree/main/packages/adapter-node) (or running [`vite preview`](https://svelte.dev/docs/kit/cli)), this is equivalent to `process.env`. This module only includes variables that _do not_ begin with [`config.kit.env.publicPrefix`](https://svelte.dev/docs/kit/configuration#env) _and do_ start with [`config.kit.env.privatePrefix`](https://svelte.dev/docs/kit/configuration#env) (if configured).
*
* This module cannot be imported into client-side code.
*
* ```ts
* import { env } from '$env/dynamic/private';
* console.log(env.DEPLOYMENT_SPECIFIC_VARIABLE);
* ```
*
* > [!NOTE] In `dev`, `$env/dynamic` always includes environment variables from `.env`. In `prod`, this behavior will depend on your adapter.
*/
declare module '$env/dynamic/private' {
export const env: {
LESSOPEN: string;
USER: string;
npm_config_user_agent: string;
npm_node_execpath: string;
SHLVL: string;
npm_config_noproxy: string;
HOME: string;
OLDPWD: string;
npm_package_json: string;
PS1: string;
npm_config_userconfig: string;
npm_config_local_prefix: string;
DBUS_SESSION_BUS_ADDRESS: string;
WSL_DISTRO_NAME: string;
COLOR: string;
WAYLAND_DISPLAY: string;
LOGNAME: string;
NAME: string;
WSL_INTEROP: string;
PULSE_SERVER: string;
_: string;
npm_config_prefix: string;
npm_config_npm_version: string;
TERM: string;
npm_config_cache: string;
npm_config_node_gyp: string;
PATH: string;
NODE: string;
npm_package_name: string;
XDG_RUNTIME_DIR: string;
DISPLAY: string;
LANG: string;
VIRTUAL_ENV_PROMPT: string;
LS_COLORS: string;
npm_lifecycle_script: string;
SHELL: string;
npm_package_version: string;
npm_lifecycle_event: string;
GOOGLE_CLOUD_PROJECT: string;
LESSCLOSE: string;
VIRTUAL_ENV: string;
npm_config_globalconfig: string;
npm_config_init_module: string;
PWD: string;
npm_execpath: string;
XDG_DATA_DIRS: string;
npm_config_global_prefix: string;
npm_command: string;
WSL2_GUI_APPS_ENABLED: string;
HOSTTYPE: string;
WSLENV: string;
INIT_CWD: string;
EDITOR: string;
NODE_ENV: string;
[key: `PUBLIC_${string}`]: undefined;
[key: `${string}`]: string | undefined;
}
}
/**
* Similar to [`$env/dynamic/private`](https://svelte.dev/docs/kit/$env-dynamic-private), but only includes variables that begin with [`config.kit.env.publicPrefix`](https://svelte.dev/docs/kit/configuration#env) (which defaults to `PUBLIC_`), and can therefore safely be exposed to client-side code.
*
* Note that public dynamic environment variables must all be sent from the server to the client, causing larger network requests — when possible, use `$env/static/public` instead.
*
* ```ts
* import { env } from '$env/dynamic/public';
* console.log(env.PUBLIC_DEPLOYMENT_SPECIFIC_VARIABLE);
* ```
*/
declare module '$env/dynamic/public' {
export const env: {
PUBLIC_WS_URL: string;
[key: `PUBLIC_${string}`]: string | undefined;
}
}


@@ -0,0 +1,31 @@
export { matchers } from './matchers.js';
export const nodes = [
() => import('./nodes/0'),
() => import('./nodes/1'),
() => import('./nodes/2'),
() => import('./nodes/3')
];
export const server_loads = [];
export const dictionary = {
"/": [2],
"/settings": [3]
};
export const hooks = {
handleError: (({ error }) => { console.error(error) }),
reroute: (() => {}),
transport: {}
};
export const decoders = Object.fromEntries(Object.entries(hooks.transport).map(([k, v]) => [k, v.decode]));
export const encoders = Object.fromEntries(Object.entries(hooks.transport).map(([k, v]) => [k, v.encode]));
export const hash = false;
export const decode = (type, value) => decoders[type](value);
export { default as root } from '../root.js';


@@ -0,0 +1 @@
export const matchers = {};


@@ -0,0 +1,3 @@
import * as universal from "../../../../src/routes/+layout.ts";
export { universal };
export { default as component } from "../../../../src/routes/+layout.svelte";


@@ -0,0 +1 @@
export { default as component } from "../../../../src/routes/+error.svelte";


@@ -0,0 +1,3 @@
import * as universal from "../../../../src/routes/+page.ts";
export { universal };
export { default as component } from "../../../../src/routes/+page.svelte";


@@ -0,0 +1,3 @@
import * as universal from "../../../../src/routes/settings/+page.ts";
export { universal };
export { default as component } from "../../../../src/routes/settings/+page.svelte";


@@ -0,0 +1,31 @@
export { matchers } from './matchers.js';
export const nodes = [
() => import('./nodes/0'),
() => import('./nodes/1'),
() => import('./nodes/2'),
() => import('./nodes/3')
];
export const server_loads = [];
export const dictionary = {
"/": [2],
"/settings": [3]
};
export const hooks = {
handleError: (({ error }) => { console.error(error) }),
reroute: (() => {}),
transport: {}
};
export const decoders = Object.fromEntries(Object.entries(hooks.transport).map(([k, v]) => [k, v.decode]));
export const encoders = Object.fromEntries(Object.entries(hooks.transport).map(([k, v]) => [k, v.encode]));
export const hash = false;
export const decode = (type, value) => decoders[type](value);
export { default as root } from '../root.js';


@@ -0,0 +1 @@
export const matchers = {};


@@ -0,0 +1,3 @@
import * as universal from "../../../../src/routes/+layout.ts";
export { universal };
export { default as component } from "../../../../src/routes/+layout.svelte";


@@ -0,0 +1 @@
export { default as component } from "../../../../src/routes/+error.svelte";


@@ -0,0 +1,3 @@
import * as universal from "../../../../src/routes/+page.ts";
export { universal };
export { default as component } from "../../../../src/routes/+page.svelte";


@@ -0,0 +1,3 @@
import * as universal from "../../../../src/routes/settings/+page.ts";
export { universal };
export { default as component } from "../../../../src/routes/settings/+page.svelte";


@@ -0,0 +1,3 @@
import { asClassComponent } from 'svelte/legacy';
import Root from './root.svelte';
export default asClassComponent(Root);


@@ -0,0 +1,68 @@
<!-- This file is generated by @sveltejs/kit — do not edit it! -->
<svelte:options runes={true} />
<script>
import { setContext, onMount, tick } from 'svelte';
import { browser } from '$app/environment';
// stores
let { stores, page, constructors, components = [], form, data_0 = null, data_1 = null } = $props();
if (!browser) {
// svelte-ignore state_referenced_locally
setContext('__svelte__', stores);
}
if (browser) {
$effect.pre(() => stores.page.set(page));
} else {
// svelte-ignore state_referenced_locally
stores.page.set(page);
}
$effect(() => {
stores;page;constructors;components;form;data_0;data_1;
stores.page.notify();
});
let mounted = $state(false);
let navigated = $state(false);
let title = $state(null);
onMount(() => {
const unsubscribe = stores.page.subscribe(() => {
if (mounted) {
navigated = true;
tick().then(() => {
title = document.title || 'untitled page';
});
}
});
mounted = true;
return unsubscribe;
});
const Pyramid_1=$derived(constructors[1])
</script>
{#if constructors[1]}
{@const Pyramid_0 = constructors[0]}
<!-- svelte-ignore binding_property_non_reactive -->
<Pyramid_0 bind:this={components[0]} data={data_0} {form} params={page.params}>
<!-- svelte-ignore binding_property_non_reactive -->
<Pyramid_1 bind:this={components[1]} data={data_1} {form} params={page.params} />
</Pyramid_0>
{:else}
{@const Pyramid_0 = constructors[0]}
<!-- svelte-ignore binding_property_non_reactive -->
<Pyramid_0 bind:this={components[0]} data={data_0} {form} params={page.params} />
{/if}
{#if mounted}
<div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0; top: 0; clip: rect(0 0 0 0); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px">
{#if navigated}
{title}
{/if}
</div>
{/if}


@@ -0,0 +1,53 @@
import root from '../root.js';
import { set_building, set_prerendering } from '__sveltekit/environment';
import { set_assets } from '$app/paths/internal/server';
import { set_manifest, set_read_implementation } from '__sveltekit/server';
import { set_private_env, set_public_env } from '../../../node_modules/@sveltejs/kit/src/runtime/shared-server.js';
export const options = {
app_template_contains_nonce: false,
async: false,
csp: {"mode":"auto","directives":{"upgrade-insecure-requests":false,"block-all-mixed-content":false},"reportOnly":{"upgrade-insecure-requests":false,"block-all-mixed-content":false}},
csrf_check_origin: true,
csrf_trusted_origins: [],
embedded: false,
env_public_prefix: 'PUBLIC_',
env_private_prefix: '',
hash_routing: false,
hooks: null, // added lazily, via `get_hooks`
preload_strategy: "modulepreload",
root,
service_worker: false,
service_worker_options: undefined,
templates: {
app: ({ head, body, assets, nonce, env }) => "<!DOCTYPE html>\n<html lang=\"en\">\n\t<head>\n\t\t<meta charset=\"utf-8\" />\n\t\t<link rel=\"icon\" href=\"" + assets + "/favicon.png\" />\n\t\t<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n\t\t" + head + "\n\t</head>\n\t<body data-sveltekit-preload-data=\"hover\">\n\t\t<div style=\"display: contents\">" + body + "</div>\n\t</body>\n</html>\n",
error: ({ status, message }) => "<!doctype html>\n<html lang=\"en\">\n\t<head>\n\t\t<meta charset=\"utf-8\" />\n\t\t<title>" + message + "</title>\n\n\t\t<style>\n\t\t\tbody {\n\t\t\t\t--bg: white;\n\t\t\t\t--fg: #222;\n\t\t\t\t--divider: #ccc;\n\t\t\t\tbackground: var(--bg);\n\t\t\t\tcolor: var(--fg);\n\t\t\t\tfont-family:\n\t\t\t\t\tsystem-ui,\n\t\t\t\t\t-apple-system,\n\t\t\t\t\tBlinkMacSystemFont,\n\t\t\t\t\t'Segoe UI',\n\t\t\t\t\tRoboto,\n\t\t\t\t\tOxygen,\n\t\t\t\t\tUbuntu,\n\t\t\t\t\tCantarell,\n\t\t\t\t\t'Open Sans',\n\t\t\t\t\t'Helvetica Neue',\n\t\t\t\t\tsans-serif;\n\t\t\t\tdisplay: flex;\n\t\t\t\talign-items: center;\n\t\t\t\tjustify-content: center;\n\t\t\t\theight: 100vh;\n\t\t\t\tmargin: 0;\n\t\t\t}\n\n\t\t\t.error {\n\t\t\t\tdisplay: flex;\n\t\t\t\talign-items: center;\n\t\t\t\tmax-width: 32rem;\n\t\t\t\tmargin: 0 1rem;\n\t\t\t}\n\n\t\t\t.status {\n\t\t\t\tfont-weight: 200;\n\t\t\t\tfont-size: 3rem;\n\t\t\t\tline-height: 1;\n\t\t\t\tposition: relative;\n\t\t\t\ttop: -0.05rem;\n\t\t\t}\n\n\t\t\t.message {\n\t\t\t\tborder-left: 1px solid var(--divider);\n\t\t\t\tpadding: 0 0 0 1rem;\n\t\t\t\tmargin: 0 0 0 1rem;\n\t\t\t\tmin-height: 2.5rem;\n\t\t\t\tdisplay: flex;\n\t\t\t\talign-items: center;\n\t\t\t}\n\n\t\t\t.message h1 {\n\t\t\t\tfont-weight: 400;\n\t\t\t\tfont-size: 1em;\n\t\t\t\tmargin: 0;\n\t\t\t}\n\n\t\t\t@media (prefers-color-scheme: dark) {\n\t\t\t\tbody {\n\t\t\t\t\t--bg: #222;\n\t\t\t\t\t--fg: #ddd;\n\t\t\t\t\t--divider: #666;\n\t\t\t\t}\n\t\t\t}\n\t\t</style>\n\t</head>\n\t<body>\n\t\t<div class=\"error\">\n\t\t\t<span class=\"status\">" + status + "</span>\n\t\t\t<div class=\"message\">\n\t\t\t\t<h1>" + message + "</h1>\n\t\t\t</div>\n\t\t</div>\n\t</body>\n</html>\n"
},
version_hash: "1eogxsl"
};
export async function get_hooks() {
let handle;
let handleFetch;
let handleError;
let handleValidationError;
let init;
let reroute;
let transport;
return {
handle,
handleFetch,
handleError,
handleValidationError,
init,
reroute,
transport
};
}
export { set_assets, set_building, set_manifest, set_prerendering, set_private_env, set_public_env, set_read_implementation };

frontend/.svelte-kit/non-ambient.d.ts

@@ -0,0 +1,42 @@
// this file is generated — do not edit it
declare module "svelte/elements" {
export interface HTMLAttributes<T> {
'data-sveltekit-keepfocus'?: true | '' | 'off' | undefined | null;
'data-sveltekit-noscroll'?: true | '' | 'off' | undefined | null;
'data-sveltekit-preload-code'?:
| true
| ''
| 'eager'
| 'viewport'
| 'hover'
| 'tap'
| 'off'
| undefined
| null;
'data-sveltekit-preload-data'?: true | '' | 'hover' | 'tap' | 'off' | undefined | null;
'data-sveltekit-reload'?: true | '' | 'off' | undefined | null;
'data-sveltekit-replacestate'?: true | '' | 'off' | undefined | null;
}
}
export {};
declare module "$app/types" {
export interface AppTypes {
RouteId(): "/" | "/settings";
RouteParams(): {
};
LayoutParams(): {
"/": Record<string, never>;
"/settings": Record<string, never>
};
Pathname(): "/" | "/settings" | "/settings/";
ResolvedPathname(): `${"" | `/${string}`}${ReturnType<AppTypes['Pathname']>}`;
Asset(): string & {};
}
}


@@ -0,0 +1,162 @@
{
".svelte-kit/generated/client-optimized/app.js": {
"file": "_app/immutable/entry/app.BXnpILpp.js",
"name": "entry/app",
"src": ".svelte-kit/generated/client-optimized/app.js",
"isEntry": true,
"imports": [
"_BtL0wB3H.js",
"_cv2LK44M.js",
"_BxZpmA7Z.js",
"_vVxDbqKK.js"
],
"dynamicImports": [
".svelte-kit/generated/client-optimized/nodes/0.js",
".svelte-kit/generated/client-optimized/nodes/1.js",
".svelte-kit/generated/client-optimized/nodes/2.js",
".svelte-kit/generated/client-optimized/nodes/3.js"
]
},
".svelte-kit/generated/client-optimized/nodes/0.js": {
"file": "_app/immutable/nodes/0.DZdF_zz-.js",
"name": "nodes/0",
"src": ".svelte-kit/generated/client-optimized/nodes/0.js",
"isEntry": true,
"isDynamicEntry": true,
"imports": [
"_cv2LK44M.js",
"_CRLlKr96.js",
"_BtL0wB3H.js",
"_xdjHc-A2.js",
"_DXE57cnx.js",
"_Dbod7Wv8.js"
],
"css": [
"_app/immutable/assets/0.RZHRvmcL.css"
]
},
".svelte-kit/generated/client-optimized/nodes/1.js": {
"file": "_app/immutable/nodes/1.Bh-fCbID.js",
"name": "nodes/1",
"src": ".svelte-kit/generated/client-optimized/nodes/1.js",
"isEntry": true,
"isDynamicEntry": true,
"imports": [
"_cv2LK44M.js",
"_CRLlKr96.js",
"_BtL0wB3H.js",
"_DXE57cnx.js"
]
},
".svelte-kit/generated/client-optimized/nodes/2.js": {
"file": "_app/immutable/nodes/2.BmiXdPHI.js",
"name": "nodes/2",
"src": ".svelte-kit/generated/client-optimized/nodes/2.js",
"isEntry": true,
"isDynamicEntry": true,
"imports": [
"_DyPeVqDG.js",
"_cv2LK44M.js",
"_CRLlKr96.js",
"_BtL0wB3H.js",
"_vVxDbqKK.js",
"_Dbod7Wv8.js",
"_BxZpmA7Z.js",
"_xdjHc-A2.js"
]
},
".svelte-kit/generated/client-optimized/nodes/3.js": {
"file": "_app/immutable/nodes/3.guWMyWpk.js",
"name": "nodes/3",
"src": ".svelte-kit/generated/client-optimized/nodes/3.js",
"isEntry": true,
"isDynamicEntry": true,
"imports": [
"_DyPeVqDG.js",
"_cv2LK44M.js",
"_CRLlKr96.js",
"_BtL0wB3H.js",
"_vVxDbqKK.js",
"_Dbod7Wv8.js"
]
},
"_BtL0wB3H.js": {
"file": "_app/immutable/chunks/BtL0wB3H.js",
"name": "index"
},
"_BxZpmA7Z.js": {
"file": "_app/immutable/chunks/BxZpmA7Z.js",
"name": "index-client",
"imports": [
"_BtL0wB3H.js"
]
},
"_CRLlKr96.js": {
"file": "_app/immutable/chunks/CRLlKr96.js",
"name": "legacy",
"imports": [
"_BtL0wB3H.js"
]
},
"_D0iaTcAo.js": {
"file": "_app/immutable/chunks/D0iaTcAo.js",
"name": "entry",
"imports": [
"_BtL0wB3H.js",
"_BxZpmA7Z.js"
]
},
"_DXE57cnx.js": {
"file": "_app/immutable/chunks/DXE57cnx.js",
"name": "stores",
"imports": [
"_D0iaTcAo.js"
]
},
"_Dbod7Wv8.js": {
"file": "_app/immutable/chunks/Dbod7Wv8.js",
"name": "toasts",
"imports": [
"_BtL0wB3H.js"
]
},
"_DyPeVqDG.js": {
"file": "_app/immutable/chunks/DyPeVqDG.js",
"name": "api",
"imports": [
"_BtL0wB3H.js",
"_Dbod7Wv8.js"
]
},
"_cv2LK44M.js": {
"file": "_app/immutable/chunks/cv2LK44M.js",
"name": "disclose-version",
"imports": [
"_BtL0wB3H.js"
]
},
"_vVxDbqKK.js": {
"file": "_app/immutable/chunks/vVxDbqKK.js",
"name": "props",
"imports": [
"_BtL0wB3H.js",
"_cv2LK44M.js"
]
},
"_xdjHc-A2.js": {
"file": "_app/immutable/chunks/xdjHc-A2.js",
"name": "class",
"imports": [
"_BtL0wB3H.js"
]
},
"node_modules/@sveltejs/kit/src/runtime/client/entry.js": {
"file": "_app/immutable/entry/start.BHAeOrfR.js",
"name": "entry/start",
"src": "node_modules/@sveltejs/kit/src/runtime/client/entry.js",
"isEntry": true,
"imports": [
"_D0iaTcAo.js"
]
}
}


@@ -0,0 +1 @@
{"version":"1766262590857"}


@@ -0,0 +1 @@
export const env={"PUBLIC_WS_URL":"ws://localhost:8000"}


@@ -0,0 +1,180 @@
{
".svelte-kit/generated/server/internal.js": {
"file": "internal.js",
"name": "internal",
"src": ".svelte-kit/generated/server/internal.js",
"isEntry": true,
"imports": [
"_internal.js",
"_environment.js"
]
},
"_api.js": {
"file": "chunks/api.js",
"name": "api",
"imports": [
"_toasts.js"
]
},
"_environment.js": {
"file": "chunks/environment.js",
"name": "environment"
},
"_equality.js": {
"file": "chunks/equality.js",
"name": "equality"
},
"_exports.js": {
"file": "chunks/exports.js",
"name": "exports"
},
"_false.js": {
"file": "chunks/false.js",
"name": "false"
},
"_index.js": {
"file": "chunks/index.js",
"name": "index",
"imports": [
"_equality.js"
]
},
"_index2.js": {
"file": "chunks/index2.js",
"name": "index",
"imports": [
"_false.js",
"_equality.js"
]
},
"_internal.js": {
"file": "chunks/internal.js",
"name": "internal",
"imports": [
"_index2.js",
"_equality.js",
"_environment.js"
]
},
"_shared.js": {
"file": "chunks/shared.js",
"name": "shared",
"imports": [
"_utils.js"
]
},
"_stores.js": {
"file": "chunks/stores.js",
"name": "stores",
"imports": [
"_index2.js",
"_exports.js",
"_utils.js",
"_equality.js"
]
},
"_toasts.js": {
"file": "chunks/toasts.js",
"name": "toasts",
"imports": [
"_index.js"
]
},
"_utils.js": {
"file": "chunks/utils.js",
"name": "utils"
},
"node_modules/@sveltejs/kit/src/runtime/app/server/remote/index.js": {
"file": "remote-entry.js",
"name": "remote-entry",
"src": "node_modules/@sveltejs/kit/src/runtime/app/server/remote/index.js",
"isEntry": true,
"imports": [
"_shared.js",
"_false.js",
"_environment.js"
]
},
"node_modules/@sveltejs/kit/src/runtime/server/index.js": {
"file": "index.js",
"name": "index",
"src": "node_modules/@sveltejs/kit/src/runtime/server/index.js",
"isEntry": true,
"imports": [
"_false.js",
"_environment.js",
"_shared.js",
"_exports.js",
"_utils.js",
"_index.js",
"_internal.js"
]
},
"src/routes/+error.svelte": {
"file": "entries/pages/_error.svelte.js",
"name": "entries/pages/_error.svelte",
"src": "src/routes/+error.svelte",
"isEntry": true,
"imports": [
"_index2.js",
"_stores.js"
]
},
"src/routes/+layout.svelte": {
"file": "entries/pages/_layout.svelte.js",
"name": "entries/pages/_layout.svelte",
"src": "src/routes/+layout.svelte",
"isEntry": true,
"imports": [
"_index2.js",
"_stores.js",
"_toasts.js"
],
"css": [
"_app/immutable/assets/_layout.RZHRvmcL.css"
]
},
"src/routes/+layout.ts": {
"file": "entries/pages/_layout.ts.js",
"name": "entries/pages/_layout.ts",
"src": "src/routes/+layout.ts",
"isEntry": true
},
"src/routes/+page.svelte": {
"file": "entries/pages/_page.svelte.js",
"name": "entries/pages/_page.svelte",
"src": "src/routes/+page.svelte",
"isEntry": true,
"imports": [
"_index2.js",
"_index.js"
]
},
"src/routes/+page.ts": {
"file": "entries/pages/_page.ts.js",
"name": "entries/pages/_page.ts",
"src": "src/routes/+page.ts",
"isEntry": true,
"imports": [
"_api.js"
]
},
"src/routes/settings/+page.svelte": {
"file": "entries/pages/settings/_page.svelte.js",
"name": "entries/pages/settings/_page.svelte",
"src": "src/routes/settings/+page.svelte",
"isEntry": true,
"imports": [
"_index2.js"
]
},
"src/routes/settings/+page.ts": {
"file": "entries/pages/settings/_page.ts.js",
"name": "entries/pages/settings/_page.ts",
"src": "src/routes/settings/+page.ts",
"isEntry": true,
"imports": [
"_api.js"
]
}
}


@@ -0,0 +1,77 @@
import { a as addToast } from "./toasts.js";
const API_BASE_URL = "/api";
async function fetchApi(endpoint) {
try {
console.log(`[api.fetchApi][Action] Fetching from context={{'endpoint': '${endpoint}'}}`);
const response = await fetch(`${API_BASE_URL}${endpoint}`);
if (!response.ok) {
throw new Error(`API request failed with status ${response.status}`);
}
return await response.json();
} catch (error) {
console.error(`[api.fetchApi][Coherence:Failed] Error fetching from ${endpoint}:`, error);
addToast(error.message, "error");
throw error;
}
}
async function postApi(endpoint, body) {
try {
console.log(`[api.postApi][Action] Posting to context={{'endpoint': '${endpoint}'}}`);
const response = await fetch(`${API_BASE_URL}${endpoint}`, {
method: "POST",
headers: {
"Content-Type": "application/json"
},
body: JSON.stringify(body)
});
if (!response.ok) {
throw new Error(`API request failed with status ${response.status}`);
}
return await response.json();
} catch (error) {
console.error(`[api.postApi][Coherence:Failed] Error posting to ${endpoint}:`, error);
addToast(error.message, "error");
throw error;
}
}
async function requestApi(endpoint, method = "GET", body = null) {
try {
console.log(`[api.requestApi][Action] ${method} to context={{'endpoint': '${endpoint}'}}`);
const options = {
method,
headers: {
"Content-Type": "application/json"
}
};
if (body) {
options.body = JSON.stringify(body);
}
const response = await fetch(`${API_BASE_URL}${endpoint}`, options);
if (!response.ok) {
const errorData = await response.json().catch(() => ({}));
throw new Error(errorData.detail || `API request failed with status ${response.status}`);
}
return await response.json();
} catch (error) {
console.error(`[api.requestApi][Coherence:Failed] Error ${method} to ${endpoint}:`, error);
addToast(error.message, "error");
throw error;
}
}
const api = {
getPlugins: () => fetchApi("/plugins/"),
getTasks: () => fetchApi("/tasks/"),
getTask: (taskId) => fetchApi(`/tasks/${taskId}`),
createTask: (pluginId, params) => postApi("/tasks/", { plugin_id: pluginId, params }),
// Settings
getSettings: () => fetchApi("/settings/"),
updateGlobalSettings: (settings) => requestApi("/settings/global", "PATCH", settings),
getEnvironments: () => fetchApi("/settings/environments"),
addEnvironment: (env) => postApi("/settings/environments", env),
updateEnvironment: (id, env) => requestApi(`/settings/environments/${id}`, "PUT", env),
deleteEnvironment: (id) => requestApi(`/settings/environments/${id}`, "DELETE"),
testEnvironmentConnection: (id) => postApi(`/settings/environments/${id}/test`, {})
};
export {
api as a
};


@@ -0,0 +1,34 @@
let base = "";
let assets = base;
const app_dir = "_app";
const relative = true;
const initial = { base, assets };
function override(paths) {
base = paths.base;
assets = paths.assets;
}
function reset() {
base = initial.base;
assets = initial.assets;
}
function set_assets(path) {
assets = initial.assets = path;
}
let prerendering = false;
function set_building() {
}
function set_prerendering() {
prerendering = true;
}
export {
assets as a,
base as b,
app_dir as c,
reset as d,
set_building as e,
set_prerendering as f,
override as o,
prerendering as p,
relative as r,
set_assets as s
};


@@ -0,0 +1,51 @@
var is_array = Array.isArray;
var index_of = Array.prototype.indexOf;
var array_from = Array.from;
var define_property = Object.defineProperty;
var get_descriptor = Object.getOwnPropertyDescriptor;
var object_prototype = Object.prototype;
var array_prototype = Array.prototype;
var get_prototype_of = Object.getPrototypeOf;
var is_extensible = Object.isExtensible;
const noop = () => {
};
function run_all(arr) {
for (var i = 0; i < arr.length; i++) {
arr[i]();
}
}
function deferred() {
var resolve;
var reject;
var promise = new Promise((res, rej) => {
resolve = res;
reject = rej;
});
return { promise, resolve, reject };
}
function equals(value) {
return value === this.v;
}
function safe_not_equal(a, b) {
return a != a ? b == b : a !== b || a !== null && typeof a === "object" || typeof a === "function";
}
function safe_equals(value) {
return !safe_not_equal(value, this.v);
}
export {
array_from as a,
deferred as b,
array_prototype as c,
define_property as d,
equals as e,
get_prototype_of as f,
get_descriptor as g,
is_extensible as h,
is_array as i,
index_of as j,
safe_not_equal as k,
noop as n,
object_prototype as o,
run_all as r,
safe_equals as s
};


@@ -0,0 +1,174 @@
const SCHEME = /^[a-z][a-z\d+\-.]+:/i;
const internal = new URL("sveltekit-internal://");
function resolve(base, path) {
if (path[0] === "/" && path[1] === "/") return path;
let url = new URL(base, internal);
url = new URL(path, url);
return url.protocol === internal.protocol ? url.pathname + url.search + url.hash : url.href;
}
function normalize_path(path, trailing_slash) {
if (path === "/" || trailing_slash === "ignore") return path;
if (trailing_slash === "never") {
return path.endsWith("/") ? path.slice(0, -1) : path;
} else if (trailing_slash === "always" && !path.endsWith("/")) {
return path + "/";
}
return path;
}
function decode_pathname(pathname) {
return pathname.split("%25").map(decodeURI).join("%25");
}
function decode_params(params) {
for (const key in params) {
params[key] = decodeURIComponent(params[key]);
}
return params;
}
function make_trackable(url, callback, search_params_callback, allow_hash = false) {
const tracked = new URL(url);
Object.defineProperty(tracked, "searchParams", {
value: new Proxy(tracked.searchParams, {
get(obj, key) {
if (key === "get" || key === "getAll" || key === "has") {
return (param) => {
search_params_callback(param);
return obj[key](param);
};
}
callback();
const value = Reflect.get(obj, key);
return typeof value === "function" ? value.bind(obj) : value;
}
}),
enumerable: true,
configurable: true
});
const tracked_url_properties = ["href", "pathname", "search", "toString", "toJSON"];
if (allow_hash) tracked_url_properties.push("hash");
for (const property of tracked_url_properties) {
Object.defineProperty(tracked, property, {
get() {
callback();
return url[property];
},
enumerable: true,
configurable: true
});
}
{
tracked[/* @__PURE__ */ Symbol.for("nodejs.util.inspect.custom")] = (depth, opts, inspect) => {
return inspect(url, opts);
};
tracked.searchParams[/* @__PURE__ */ Symbol.for("nodejs.util.inspect.custom")] = (depth, opts, inspect) => {
return inspect(url.searchParams, opts);
};
}
if (!allow_hash) {
disable_hash(tracked);
}
return tracked;
}
function disable_hash(url) {
allow_nodejs_console_log(url);
Object.defineProperty(url, "hash", {
get() {
throw new Error(
"Cannot access event.url.hash. Consider using `page.url.hash` inside a component instead"
);
}
});
}
function disable_search(url) {
allow_nodejs_console_log(url);
for (const property of ["search", "searchParams"]) {
Object.defineProperty(url, property, {
get() {
throw new Error(`Cannot access url.${property} on a page with prerendering enabled`);
}
});
}
}
function allow_nodejs_console_log(url) {
{
url[/* @__PURE__ */ Symbol.for("nodejs.util.inspect.custom")] = (depth, opts, inspect) => {
return inspect(new URL(url), opts);
};
}
}
function validator(expected) {
function validate(module, file) {
if (!module) return;
for (const key in module) {
if (key[0] === "_" || expected.has(key)) continue;
const values = [...expected.values()];
const hint = hint_for_supported_files(key, file?.slice(file.lastIndexOf("."))) ?? `valid exports are ${values.join(", ")}, or anything with a '_' prefix`;
throw new Error(`Invalid export '${key}'${file ? ` in ${file}` : ""} (${hint})`);
}
}
return validate;
}
function hint_for_supported_files(key, ext = ".js") {
const supported_files = [];
if (valid_layout_exports.has(key)) {
supported_files.push(`+layout${ext}`);
}
if (valid_page_exports.has(key)) {
supported_files.push(`+page${ext}`);
}
if (valid_layout_server_exports.has(key)) {
supported_files.push(`+layout.server${ext}`);
}
if (valid_page_server_exports.has(key)) {
supported_files.push(`+page.server${ext}`);
}
if (valid_server_exports.has(key)) {
supported_files.push(`+server${ext}`);
}
if (supported_files.length > 0) {
return `'${key}' is a valid export in ${supported_files.slice(0, -1).join(", ")}${supported_files.length > 1 ? " or " : ""}${supported_files.at(-1)}`;
}
}
const valid_layout_exports = /* @__PURE__ */ new Set([
"load",
"prerender",
"csr",
"ssr",
"trailingSlash",
"config"
]);
const valid_page_exports = /* @__PURE__ */ new Set([...valid_layout_exports, "entries"]);
const valid_layout_server_exports = /* @__PURE__ */ new Set([...valid_layout_exports]);
const valid_page_server_exports = /* @__PURE__ */ new Set([...valid_layout_server_exports, "actions", "entries"]);
const valid_server_exports = /* @__PURE__ */ new Set([
"GET",
"POST",
"PATCH",
"PUT",
"DELETE",
"OPTIONS",
"HEAD",
"fallback",
"prerender",
"trailingSlash",
"config",
"entries"
]);
const validate_layout_exports = validator(valid_layout_exports);
const validate_page_exports = validator(valid_page_exports);
const validate_layout_server_exports = validator(valid_layout_server_exports);
const validate_page_server_exports = validator(valid_page_server_exports);
const validate_server_exports = validator(valid_server_exports);
export {
SCHEME as S,
decode_params as a,
validate_layout_exports as b,
validate_page_server_exports as c,
disable_search as d,
validate_page_exports as e,
decode_pathname as f,
validate_server_exports as g,
make_trackable as m,
normalize_path as n,
resolve as r,
validate_layout_server_exports as v
};


@@ -0,0 +1,4 @@
const BROWSER = false;
export {
BROWSER as B
};


@@ -0,0 +1,59 @@
import { n as noop, k as safe_not_equal } from "./equality.js";
import "clsx";
const subscriber_queue = [];
function readable(value, start) {
return {
subscribe: writable(value, start).subscribe
};
}
function writable(value, start = noop) {
let stop = null;
const subscribers = /* @__PURE__ */ new Set();
function set(new_value) {
if (safe_not_equal(value, new_value)) {
value = new_value;
if (stop) {
const run_queue = !subscriber_queue.length;
for (const subscriber of subscribers) {
subscriber[1]();
subscriber_queue.push(subscriber, value);
}
if (run_queue) {
for (let i = 0; i < subscriber_queue.length; i += 2) {
subscriber_queue[i][0](subscriber_queue[i + 1]);
}
subscriber_queue.length = 0;
}
}
}
}
function update(fn) {
set(fn(
/** @type {T} */
value
));
}
function subscribe(run, invalidate = noop) {
const subscriber = [run, invalidate];
subscribers.add(subscriber);
if (subscribers.size === 1) {
stop = start(set, update) || noop;
}
run(
/** @type {T} */
value
);
return () => {
subscribers.delete(subscriber);
if (subscribers.size === 0 && stop) {
stop();
stop = null;
}
};
}
return { set, update, subscribe };
}
export {
readable as r,
writable as w
};

File diff suppressed because it is too large


@@ -0,0 +1,982 @@
import { H as HYDRATION_ERROR, C as COMMENT_NODE, a as HYDRATION_END, g as get_next_sibling, b as HYDRATION_START, c as HYDRATION_START_ELSE, e as effect_tracking, d as get, s as source, r as render_effect, u as untrack, i as increment, q as queue_micro_task, f as active_effect, h as block, j as branch, B as Batch, p as pause_effect, k as create_text, l as set_active_effect, m as set_active_reaction, n as set_component_context, o as handle_error, t as active_reaction, v as component_context, w as move_effect, x as internal_set, y as destroy_effect, z as invoke_error_boundary, A as svelte_boundary_reset_onerror, E as EFFECT_TRANSPARENT, D as EFFECT_PRESERVED, F as BOUNDARY_EFFECT, G as init_operations, I as get_first_child, J as hydration_failed, K as clear_text_content, L as component_root, M as is_passive_event, N as push, O as pop, P as set, Q as LEGACY_PROPS, R as flushSync, S as mutable_source, T as render, U as setContext } from "./index2.js";
import { d as define_property, a as array_from } from "./equality.js";
import "clsx";
import "./environment.js";
let public_env = {};
function set_private_env(environment) {
}
function set_public_env(environment) {
public_env = environment;
}
function hydration_mismatch(location) {
{
console.warn(`https://svelte.dev/e/hydration_mismatch`);
}
}
function svelte_boundary_reset_noop() {
{
console.warn(`https://svelte.dev/e/svelte_boundary_reset_noop`);
}
}
let hydrating = false;
function set_hydrating(value) {
hydrating = value;
}
let hydrate_node;
function set_hydrate_node(node) {
if (node === null) {
hydration_mismatch();
throw HYDRATION_ERROR;
}
return hydrate_node = node;
}
function hydrate_next() {
return set_hydrate_node(get_next_sibling(hydrate_node));
}
function next(count = 1) {
if (hydrating) {
var i = count;
var node = hydrate_node;
while (i--) {
node = /** @type {TemplateNode} */
get_next_sibling(node);
}
hydrate_node = node;
}
}
function skip_nodes(remove = true) {
var depth = 0;
var node = hydrate_node;
while (true) {
if (node.nodeType === COMMENT_NODE) {
var data = (
/** @type {Comment} */
node.data
);
if (data === HYDRATION_END) {
if (depth === 0) return node;
depth -= 1;
} else if (data === HYDRATION_START || data === HYDRATION_START_ELSE) {
depth += 1;
}
}
var next2 = (
/** @type {TemplateNode} */
get_next_sibling(node)
);
if (remove) node.remove();
node = next2;
}
}
function createSubscriber(start) {
let subscribers = 0;
let version = source(0);
let stop;
return () => {
if (effect_tracking()) {
get(version);
render_effect(() => {
if (subscribers === 0) {
stop = untrack(() => start(() => increment(version)));
}
subscribers += 1;
return () => {
queue_micro_task(() => {
subscribers -= 1;
if (subscribers === 0) {
stop?.();
stop = void 0;
increment(version);
}
});
};
});
}
};
}
var flags = EFFECT_TRANSPARENT | EFFECT_PRESERVED | BOUNDARY_EFFECT;
function boundary(node, props, children) {
new Boundary(node, props, children);
}
class Boundary {
/** @type {Boundary | null} */
parent;
#pending = false;
/** @type {TemplateNode} */
#anchor;
/** @type {TemplateNode | null} */
#hydrate_open = hydrating ? hydrate_node : null;
/** @type {BoundaryProps} */
#props;
/** @type {((anchor: Node) => void)} */
#children;
/** @type {Effect} */
#effect;
/** @type {Effect | null} */
#main_effect = null;
/** @type {Effect | null} */
#pending_effect = null;
/** @type {Effect | null} */
#failed_effect = null;
/** @type {DocumentFragment | null} */
#offscreen_fragment = null;
/** @type {TemplateNode | null} */
#pending_anchor = null;
#local_pending_count = 0;
#pending_count = 0;
#is_creating_fallback = false;
/**
* A source containing the number of pending async deriveds/expressions.
* Only created if `$effect.pending()` is used inside the boundary,
* otherwise updating the source results in needless `Batch.ensure()`
* calls followed by no-op flushes
* @type {Source<number> | null}
*/
#effect_pending = null;
#effect_pending_subscriber = createSubscriber(() => {
this.#effect_pending = source(this.#local_pending_count);
return () => {
this.#effect_pending = null;
};
});
/**
* @param {TemplateNode} node
* @param {BoundaryProps} props
* @param {((anchor: Node) => void)} children
*/
constructor(node, props, children) {
this.#anchor = node;
this.#props = props;
this.#children = children;
this.parent = /** @type {Effect} */
active_effect.b;
this.#pending = !!this.#props.pending;
this.#effect = block(() => {
active_effect.b = this;
if (hydrating) {
const comment = this.#hydrate_open;
hydrate_next();
const server_rendered_pending = (
/** @type {Comment} */
comment.nodeType === COMMENT_NODE && /** @type {Comment} */
comment.data === HYDRATION_START_ELSE
);
if (server_rendered_pending) {
this.#hydrate_pending_content();
} else {
this.#hydrate_resolved_content();
}
} else {
var anchor = this.#get_anchor();
try {
this.#main_effect = branch(() => children(anchor));
} catch (error) {
this.error(error);
}
if (this.#pending_count > 0) {
this.#show_pending_snippet();
} else {
this.#pending = false;
}
}
return () => {
this.#pending_anchor?.remove();
};
}, flags);
if (hydrating) {
this.#anchor = hydrate_node;
}
}
#hydrate_resolved_content() {
try {
this.#main_effect = branch(() => this.#children(this.#anchor));
} catch (error) {
this.error(error);
}
this.#pending = false;
}
#hydrate_pending_content() {
const pending = this.#props.pending;
if (!pending) {
return;
}
this.#pending_effect = branch(() => pending(this.#anchor));
Batch.enqueue(() => {
var anchor = this.#get_anchor();
this.#main_effect = this.#run(() => {
Batch.ensure();
return branch(() => this.#children(anchor));
});
if (this.#pending_count > 0) {
this.#show_pending_snippet();
} else {
pause_effect(
/** @type {Effect} */
this.#pending_effect,
() => {
this.#pending_effect = null;
}
);
this.#pending = false;
}
});
}
#get_anchor() {
var anchor = this.#anchor;
if (this.#pending) {
this.#pending_anchor = create_text();
this.#anchor.before(this.#pending_anchor);
anchor = this.#pending_anchor;
}
return anchor;
}
/**
* Returns `true` if the effect exists inside a boundary whose pending snippet is shown
* @returns {boolean}
*/
is_pending() {
return this.#pending || !!this.parent && this.parent.is_pending();
}
has_pending_snippet() {
return !!this.#props.pending;
}
/**
* @param {() => Effect | null} fn
*/
#run(fn) {
var previous_effect = active_effect;
var previous_reaction = active_reaction;
var previous_ctx = component_context;
set_active_effect(this.#effect);
set_active_reaction(this.#effect);
set_component_context(this.#effect.ctx);
try {
return fn();
} catch (e) {
handle_error(e);
return null;
} finally {
set_active_effect(previous_effect);
set_active_reaction(previous_reaction);
set_component_context(previous_ctx);
}
}
#show_pending_snippet() {
const pending = (
/** @type {(anchor: Node) => void} */
this.#props.pending
);
if (this.#main_effect !== null) {
this.#offscreen_fragment = document.createDocumentFragment();
this.#offscreen_fragment.append(
/** @type {TemplateNode} */
this.#pending_anchor
);
move_effect(this.#main_effect, this.#offscreen_fragment);
}
if (this.#pending_effect === null) {
this.#pending_effect = branch(() => pending(this.#anchor));
}
}
/**
* Updates the pending count associated with the currently visible pending snippet,
* if any, such that we can replace the snippet with content once work is done
* @param {1 | -1} d
*/
#update_pending_count(d) {
if (!this.has_pending_snippet()) {
if (this.parent) {
this.parent.#update_pending_count(d);
}
return;
}
this.#pending_count += d;
if (this.#pending_count === 0) {
this.#pending = false;
if (this.#pending_effect) {
pause_effect(this.#pending_effect, () => {
this.#pending_effect = null;
});
}
if (this.#offscreen_fragment) {
this.#anchor.before(this.#offscreen_fragment);
this.#offscreen_fragment = null;
}
}
}
/**
* Update the source that powers `$effect.pending()` inside this boundary,
* and controls when the current `pending` snippet (if any) is removed.
* Do not call from inside the class
* @param {1 | -1} d
*/
update_pending_count(d) {
this.#update_pending_count(d);
this.#local_pending_count += d;
if (this.#effect_pending) {
internal_set(this.#effect_pending, this.#local_pending_count);
}
}
get_effect_pending() {
this.#effect_pending_subscriber();
return get(
/** @type {Source<number>} */
this.#effect_pending
);
}
/** @param {unknown} error */
error(error) {
var onerror = this.#props.onerror;
let failed = this.#props.failed;
if (this.#is_creating_fallback || !onerror && !failed) {
throw error;
}
if (this.#main_effect) {
destroy_effect(this.#main_effect);
this.#main_effect = null;
}
if (this.#pending_effect) {
destroy_effect(this.#pending_effect);
this.#pending_effect = null;
}
if (this.#failed_effect) {
destroy_effect(this.#failed_effect);
this.#failed_effect = null;
}
if (hydrating) {
set_hydrate_node(
/** @type {TemplateNode} */
this.#hydrate_open
);
next();
set_hydrate_node(skip_nodes());
}
var did_reset = false;
var calling_on_error = false;
const reset = () => {
if (did_reset) {
svelte_boundary_reset_noop();
return;
}
did_reset = true;
if (calling_on_error) {
svelte_boundary_reset_onerror();
}
Batch.ensure();
this.#local_pending_count = 0;
if (this.#failed_effect !== null) {
pause_effect(this.#failed_effect, () => {
this.#failed_effect = null;
});
}
this.#pending = this.has_pending_snippet();
this.#main_effect = this.#run(() => {
this.#is_creating_fallback = false;
return branch(() => this.#children(this.#anchor));
});
if (this.#pending_count > 0) {
this.#show_pending_snippet();
} else {
this.#pending = false;
}
};
var previous_reaction = active_reaction;
try {
set_active_reaction(null);
calling_on_error = true;
onerror?.(error, reset);
calling_on_error = false;
} catch (error2) {
invoke_error_boundary(error2, this.#effect && this.#effect.parent);
} finally {
set_active_reaction(previous_reaction);
}
if (failed) {
queue_micro_task(() => {
this.#failed_effect = this.#run(() => {
Batch.ensure();
this.#is_creating_fallback = true;
try {
return branch(() => {
failed(
this.#anchor,
() => error,
() => reset
);
});
} catch (error2) {
invoke_error_boundary(
error2,
/** @type {Effect} */
this.#effect.parent
);
return null;
} finally {
this.#is_creating_fallback = false;
}
});
});
}
}
}
const all_registered_events = /* @__PURE__ */ new Set();
const root_event_handles = /* @__PURE__ */ new Set();
let last_propagated_event = null;
function handle_event_propagation(event) {
var handler_element = this;
var owner_document = (
/** @type {Node} */
handler_element.ownerDocument
);
var event_name = event.type;
var path = event.composedPath?.() || [];
var current_target = (
/** @type {null | Element} */
path[0] || event.target
);
last_propagated_event = event;
var path_idx = 0;
var handled_at = last_propagated_event === event && event.__root;
if (handled_at) {
var at_idx = path.indexOf(handled_at);
if (at_idx !== -1 && (handler_element === document || handler_element === /** @type {any} */
window)) {
event.__root = handler_element;
return;
}
var handler_idx = path.indexOf(handler_element);
if (handler_idx === -1) {
return;
}
if (at_idx <= handler_idx) {
path_idx = at_idx;
}
}
current_target = /** @type {Element} */
path[path_idx] || event.target;
if (current_target === handler_element) return;
define_property(event, "currentTarget", {
configurable: true,
get() {
return current_target || owner_document;
}
});
var previous_reaction = active_reaction;
var previous_effect = active_effect;
set_active_reaction(null);
set_active_effect(null);
try {
var throw_error;
var other_errors = [];
while (current_target !== null) {
var parent_element = current_target.assignedSlot || current_target.parentNode || /** @type {any} */
current_target.host || null;
try {
var delegated = current_target["__" + event_name];
if (delegated != null && (!/** @type {any} */
current_target.disabled || // DOM could've been updated already by the time this is reached, so we check this as well
// -> the target could not have been disabled because it emits the event in the first place
event.target === current_target)) {
delegated.call(current_target, event);
}
} catch (error) {
if (throw_error) {
other_errors.push(error);
} else {
throw_error = error;
}
}
if (event.cancelBubble || parent_element === handler_element || parent_element === null) {
break;
}
current_target = parent_element;
}
if (throw_error) {
for (let error of other_errors) {
queueMicrotask(() => {
throw error;
});
}
throw throw_error;
}
} finally {
event.__root = handler_element;
delete event.currentTarget;
set_active_reaction(previous_reaction);
set_active_effect(previous_effect);
}
}
function assign_nodes(start, end) {
var effect = (
/** @type {Effect} */
active_effect
);
if (effect.nodes === null) {
effect.nodes = { start, end, a: null, t: null };
}
}
function mount(component, options2) {
return _mount(component, options2);
}
function hydrate(component, options2) {
init_operations();
options2.intro = options2.intro ?? false;
const target = options2.target;
const was_hydrating = hydrating;
const previous_hydrate_node = hydrate_node;
try {
var anchor = get_first_child(target);
while (anchor && (anchor.nodeType !== COMMENT_NODE || /** @type {Comment} */
anchor.data !== HYDRATION_START)) {
anchor = get_next_sibling(anchor);
}
if (!anchor) {
throw HYDRATION_ERROR;
}
set_hydrating(true);
set_hydrate_node(
/** @type {Comment} */
anchor
);
const instance = _mount(component, { ...options2, anchor });
set_hydrating(false);
return (
/** @type {Exports} */
instance
);
} catch (error) {
if (error instanceof Error && error.message.split("\n").some((line) => line.startsWith("https://svelte.dev/e/"))) {
throw error;
}
if (error !== HYDRATION_ERROR) {
console.warn("Failed to hydrate: ", error);
}
if (options2.recover === false) {
hydration_failed();
}
init_operations();
clear_text_content(target);
set_hydrating(false);
return mount(component, options2);
} finally {
set_hydrating(was_hydrating);
set_hydrate_node(previous_hydrate_node);
}
}
const document_listeners = /* @__PURE__ */ new Map();
function _mount(Component, { target, anchor, props = {}, events, context, intro = true }) {
init_operations();
var registered_events = /* @__PURE__ */ new Set();
var event_handle = (events2) => {
for (var i = 0; i < events2.length; i++) {
var event_name = events2[i];
if (registered_events.has(event_name)) continue;
registered_events.add(event_name);
var passive = is_passive_event(event_name);
target.addEventListener(event_name, handle_event_propagation, { passive });
var n = document_listeners.get(event_name);
if (n === void 0) {
document.addEventListener(event_name, handle_event_propagation, { passive });
document_listeners.set(event_name, 1);
} else {
document_listeners.set(event_name, n + 1);
}
}
};
event_handle(array_from(all_registered_events));
root_event_handles.add(event_handle);
var component = void 0;
var unmount2 = component_root(() => {
var anchor_node = anchor ?? target.appendChild(create_text());
boundary(
/** @type {TemplateNode} */
anchor_node,
{
pending: () => {
}
},
(anchor_node2) => {
if (context) {
push({});
var ctx = (
/** @type {ComponentContext} */
component_context
);
ctx.c = context;
}
if (events) {
props.$$events = events;
}
if (hydrating) {
assign_nodes(
/** @type {TemplateNode} */
anchor_node2,
null
);
}
component = Component(anchor_node2, props) || {};
if (hydrating) {
active_effect.nodes.end = hydrate_node;
if (hydrate_node === null || hydrate_node.nodeType !== COMMENT_NODE || /** @type {Comment} */
hydrate_node.data !== HYDRATION_END) {
hydration_mismatch();
throw HYDRATION_ERROR;
}
}
if (context) {
pop();
}
}
);
return () => {
for (var event_name of registered_events) {
target.removeEventListener(event_name, handle_event_propagation);
var n = (
/** @type {number} */
document_listeners.get(event_name)
);
if (--n === 0) {
document.removeEventListener(event_name, handle_event_propagation);
document_listeners.delete(event_name);
} else {
document_listeners.set(event_name, n);
}
}
root_event_handles.delete(event_handle);
if (anchor_node !== anchor) {
anchor_node.parentNode?.removeChild(anchor_node);
}
};
});
mounted_components.set(component, unmount2);
return component;
}
let mounted_components = /* @__PURE__ */ new WeakMap();
function unmount(component, options2) {
const fn = mounted_components.get(component);
if (fn) {
mounted_components.delete(component);
return fn(options2);
}
return Promise.resolve();
}
function asClassComponent$1(component) {
return class extends Svelte4Component {
/** @param {any} options */
constructor(options2) {
super({
component,
...options2
});
}
};
}
class Svelte4Component {
/** @type {any} */
#events;
/** @type {Record<string, any>} */
#instance;
/**
* @param {ComponentConstructorOptions & {
* component: any;
* }} options
*/
constructor(options2) {
var sources = /* @__PURE__ */ new Map();
var add_source = (key, value) => {
var s = mutable_source(value, false, false);
sources.set(key, s);
return s;
};
const props = new Proxy(
{ ...options2.props || {}, $$events: {} },
{
get(target, prop) {
return get(sources.get(prop) ?? add_source(prop, Reflect.get(target, prop)));
},
has(target, prop) {
if (prop === LEGACY_PROPS) return true;
get(sources.get(prop) ?? add_source(prop, Reflect.get(target, prop)));
return Reflect.has(target, prop);
},
set(target, prop, value) {
set(sources.get(prop) ?? add_source(prop, value), value);
return Reflect.set(target, prop, value);
}
}
);
this.#instance = (options2.hydrate ? hydrate : mount)(options2.component, {
target: options2.target,
anchor: options2.anchor,
props,
context: options2.context,
intro: options2.intro ?? false,
recover: options2.recover
});
if (!options2?.props?.$$host || options2.sync === false) {
flushSync();
}
this.#events = props.$$events;
for (const key of Object.keys(this.#instance)) {
if (key === "$set" || key === "$destroy" || key === "$on") continue;
define_property(this, key, {
get() {
return this.#instance[key];
},
/** @param {any} value */
set(value) {
this.#instance[key] = value;
},
enumerable: true
});
}
this.#instance.$set = /** @param {Record<string, any>} next */
(next2) => {
Object.assign(props, next2);
};
this.#instance.$destroy = () => {
unmount(this.#instance);
};
}
/** @param {Record<string, any>} props */
$set(props) {
this.#instance.$set(props);
}
/**
* @param {string} event
* @param {(...args: any[]) => any} callback
* @returns {any}
*/
$on(event, callback) {
this.#events[event] = this.#events[event] || [];
const cb = (...args) => callback.call(this, ...args);
this.#events[event].push(cb);
return () => {
this.#events[event] = this.#events[event].filter(
/** @param {any} fn */
(fn) => fn !== cb
);
};
}
$destroy() {
this.#instance.$destroy();
}
}
let read_implementation = null;
function set_read_implementation(fn) {
read_implementation = fn;
}
function set_manifest(_) {
}
function asClassComponent(component) {
const component_constructor = asClassComponent$1(component);
const _render = (props, { context, csp } = {}) => {
const result = render(component, { props, context, csp });
const munged = Object.defineProperties(
/** @type {LegacyRenderResult & PromiseLike<LegacyRenderResult>} */
{},
{
css: {
value: { code: "", map: null }
},
head: {
get: () => result.head
},
html: {
get: () => result.body
},
then: {
/**
* this is not type-safe, but honestly it's the best I can do right now, and it's a straightforward function.
*
* @template TResult1
* @template [TResult2=never]
* @param { (value: LegacyRenderResult) => TResult1 } onfulfilled
* @param { (reason: unknown) => TResult2 } onrejected
*/
value: (onfulfilled, onrejected) => {
{
const user_result = onfulfilled({
css: munged.css,
head: munged.head,
html: munged.html
});
return Promise.resolve(user_result);
}
}
}
}
);
return munged;
};
component_constructor.render = _render;
return component_constructor;
}
function Root($$renderer, $$props) {
$$renderer.component(($$renderer2) => {
let {
stores,
page,
constructors,
components = [],
form,
data_0 = null,
data_1 = null
} = $$props;
{
setContext("__svelte__", stores);
}
{
stores.page.set(page);
}
const Pyramid_1 = constructors[1];
if (constructors[1]) {
$$renderer2.push("<!--[-->");
const Pyramid_0 = constructors[0];
$$renderer2.push(`<!---->`);
Pyramid_0($$renderer2, {
data: data_0,
form,
params: page.params,
children: ($$renderer3) => {
$$renderer3.push(`<!---->`);
Pyramid_1($$renderer3, { data: data_1, form, params: page.params });
$$renderer3.push(`<!---->`);
},
$$slots: { default: true }
});
$$renderer2.push(`<!---->`);
} else {
$$renderer2.push("<!--[!-->");
const Pyramid_0 = constructors[0];
$$renderer2.push(`<!---->`);
Pyramid_0($$renderer2, { data: data_0, form, params: page.params });
$$renderer2.push(`<!---->`);
}
$$renderer2.push(`<!--]--> `);
{
$$renderer2.push("<!--[!-->");
}
$$renderer2.push(`<!--]-->`);
});
}
const root = asClassComponent(Root);
const options = {
app_template_contains_nonce: false,
async: false,
csp: { "mode": "auto", "directives": { "upgrade-insecure-requests": false, "block-all-mixed-content": false }, "reportOnly": { "upgrade-insecure-requests": false, "block-all-mixed-content": false } },
csrf_check_origin: true,
csrf_trusted_origins: [],
embedded: false,
env_public_prefix: "PUBLIC_",
env_private_prefix: "",
hash_routing: false,
hooks: null,
// added lazily, via `get_hooks`
preload_strategy: "modulepreload",
root,
service_worker: false,
service_worker_options: void 0,
templates: {
app: ({ head, body, assets, nonce, env }) => '<!DOCTYPE html>\n<html lang="en">\n <head>\n <meta charset="utf-8" />\n <link rel="icon" href="' + assets + '/favicon.png" />\n <meta name="viewport" content="width=device-width, initial-scale=1" />\n ' + head + '\n </head>\n <body data-sveltekit-preload-data="hover">\n <div style="display: contents">' + body + "</div>\n </body>\n</html>\n",
error: ({ status, message }) => '<!doctype html>\n<html lang="en">\n <head>\n <meta charset="utf-8" />\n <title>' + message + `</title>
<style>
body {
--bg: white;
--fg: #222;
--divider: #ccc;
background: var(--bg);
color: var(--fg);
font-family:
system-ui,
-apple-system,
BlinkMacSystemFont,
'Segoe UI',
Roboto,
Oxygen,
Ubuntu,
Cantarell,
'Open Sans',
'Helvetica Neue',
sans-serif;
display: flex;
align-items: center;
justify-content: center;
height: 100vh;
margin: 0;
}
.error {
display: flex;
align-items: center;
max-width: 32rem;
margin: 0 1rem;
}
.status {
font-weight: 200;
font-size: 3rem;
line-height: 1;
position: relative;
top: -0.05rem;
}
.message {
border-left: 1px solid var(--divider);
padding: 0 0 0 1rem;
margin: 0 0 0 1rem;
min-height: 2.5rem;
display: flex;
align-items: center;
}
.message h1 {
font-weight: 400;
font-size: 1em;
margin: 0;
}
@media (prefers-color-scheme: dark) {
body {
--bg: #222;
--fg: #ddd;
--divider: #666;
}
}
</style>
</head>
<body>
<div class="error">
<span class="status">` + status + '</span>\n <div class="message">\n <h1>' + message + "</h1>\n </div>\n </div>\n </body>\n</html>\n"
},
version_hash: "1ootf77"
};
async function get_hooks() {
let handle;
let handleFetch;
let handleError;
let handleValidationError;
let init;
let reroute;
let transport;
return {
handle,
handleFetch,
handleError,
handleValidationError,
init,
reroute,
transport
};
}
export {
set_public_env as a,
set_read_implementation as b,
set_manifest as c,
get_hooks as g,
options as o,
public_env as p,
read_implementation as r,
set_private_env as s
};


@@ -0,0 +1,522 @@
import * as devalue from "devalue";
import { t as text_decoder, b as base64_encode, c as base64_decode } from "./utils.js";
function set_nested_value(object, path_string, value) {
if (path_string.startsWith("n:")) {
path_string = path_string.slice(2);
value = value === "" ? void 0 : parseFloat(value);
} else if (path_string.startsWith("b:")) {
path_string = path_string.slice(2);
value = value === "on";
}
deep_set(object, split_path(path_string), value);
}
function convert_formdata(data) {
const result = {};
for (let key of data.keys()) {
const is_array = key.endsWith("[]");
let values = data.getAll(key);
if (is_array) key = key.slice(0, -2);
if (values.length > 1 && !is_array) {
throw new Error(`Form cannot contain duplicated keys — "${key}" has ${values.length} values`);
}
values = values.filter(
(entry) => typeof entry === "string" || entry.name !== "" || entry.size > 0
);
if (key.startsWith("n:")) {
key = key.slice(2);
values = values.map((v) => v === "" ? void 0 : parseFloat(
/** @type {string} */
v
));
} else if (key.startsWith("b:")) {
key = key.slice(2);
values = values.map((v) => v === "on");
}
set_nested_value(result, key, is_array ? values : values[0]);
}
return result;
}
const BINARY_FORM_CONTENT_TYPE = "application/x-sveltekit-formdata";
const BINARY_FORM_VERSION = 0;
async function deserialize_binary_form(request) {
if (request.headers.get("content-type") !== BINARY_FORM_CONTENT_TYPE) {
const form_data = await request.formData();
return { data: convert_formdata(form_data), meta: {}, form_data };
}
if (!request.body) {
throw new Error("Could not deserialize binary form: no body");
}
const reader = request.body.getReader();
const chunks = [];
async function get_chunk(index) {
if (index in chunks) return chunks[index];
let i = chunks.length;
while (i <= index) {
chunks[i] = reader.read().then((chunk) => chunk.value);
i++;
}
return chunks[index];
}
async function get_buffer(offset, length) {
let start_chunk;
let chunk_start = 0;
let chunk_index;
for (chunk_index = 0; ; chunk_index++) {
const chunk = await get_chunk(chunk_index);
if (!chunk) return null;
const chunk_end = chunk_start + chunk.byteLength;
if (offset >= chunk_start && offset < chunk_end) {
start_chunk = chunk;
break;
}
chunk_start = chunk_end;
}
if (offset + length <= chunk_start + start_chunk.byteLength) {
return start_chunk.subarray(offset - chunk_start, offset + length - chunk_start);
}
const buffer = new Uint8Array(length);
buffer.set(start_chunk.subarray(offset - chunk_start));
let cursor = start_chunk.byteLength - offset + chunk_start;
while (cursor < length) {
chunk_index++;
let chunk = await get_chunk(chunk_index);
if (!chunk) return null;
if (chunk.byteLength > length - cursor) {
chunk = chunk.subarray(0, length - cursor);
}
buffer.set(chunk, cursor);
cursor += chunk.byteLength;
}
return buffer;
}
const header = await get_buffer(0, 1 + 4 + 2);
if (!header) throw new Error("Could not deserialize binary form: too short");
if (header[0] !== BINARY_FORM_VERSION) {
throw new Error(
`Could not deserialize binary form: got version ${header[0]}, expected version ${BINARY_FORM_VERSION}`
);
}
const header_view = new DataView(header.buffer, header.byteOffset, header.byteLength);
const data_length = header_view.getUint32(1, true);
const file_offsets_length = header_view.getUint16(5, true);
const data_buffer = await get_buffer(1 + 4 + 2, data_length);
if (!data_buffer) throw new Error("Could not deserialize binary form: data too short");
let file_offsets;
let files_start_offset;
if (file_offsets_length > 0) {
const file_offsets_buffer = await get_buffer(1 + 4 + 2 + data_length, file_offsets_length);
if (!file_offsets_buffer)
throw new Error("Could not deserialize binary form: file offset table too short");
file_offsets = /** @type {Array<number>} */
JSON.parse(text_decoder.decode(file_offsets_buffer));
files_start_offset = 1 + 4 + 2 + data_length + file_offsets_length;
}
const [data, meta] = devalue.parse(text_decoder.decode(data_buffer), {
File: ([name, type, size, last_modified, index]) => {
return new Proxy(
new LazyFile(
name,
type,
size,
last_modified,
get_chunk,
files_start_offset + file_offsets[index]
),
{
getPrototypeOf() {
return File.prototype;
}
}
);
}
});
void (async () => {
let has_more = true;
while (has_more) {
const chunk = await get_chunk(chunks.length);
has_more = !!chunk;
}
})();
return { data, meta, form_data: null };
}
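// Wire layout, as inferred from the header parsing above (all integers
// little-endian); offsets are relative to the start of the body:
//   byte 0       u8   format version (must equal BINARY_FORM_VERSION)
//   bytes 1..4   u32  length of the devalue-encoded data segment
//   bytes 5..6   u16  length of the JSON file-offset table
//   byte 7..     the data segment, then the offset table, then raw file bytes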
class LazyFile {
/** @type {(index: number) => Promise<Uint8Array<ArrayBuffer> | undefined>} */
#get_chunk;
/** @type {number} */
#offset;
/**
* @param {string} name
* @param {string} type
* @param {number} size
* @param {number} last_modified
* @param {(index: number) => Promise<Uint8Array<ArrayBuffer> | undefined>} get_chunk
* @param {number} offset
*/
constructor(name, type, size, last_modified, get_chunk, offset) {
this.name = name;
this.type = type;
this.size = size;
this.lastModified = last_modified;
this.webkitRelativePath = "";
this.#get_chunk = get_chunk;
this.#offset = offset;
this.arrayBuffer = this.arrayBuffer.bind(this);
this.bytes = this.bytes.bind(this);
this.slice = this.slice.bind(this);
this.stream = this.stream.bind(this);
this.text = this.text.bind(this);
}
/** @type {ArrayBuffer | undefined} */
#buffer;
async arrayBuffer() {
this.#buffer ??= await new Response(this.stream()).arrayBuffer();
return this.#buffer;
}
async bytes() {
return new Uint8Array(await this.arrayBuffer());
}
/**
* @param {number=} start
* @param {number=} end
* @param {string=} contentType
*/
slice(start = 0, end = this.size, contentType = this.type) {
if (start < 0) {
start = Math.max(this.size + start, 0);
} else {
start = Math.min(start, this.size);
}
if (end < 0) {
end = Math.max(this.size + end, 0);
} else {
end = Math.min(end, this.size);
}
const size = Math.max(end - start, 0);
const file = new LazyFile(
this.name,
contentType,
size,
this.lastModified,
this.#get_chunk,
this.#offset + start
);
return file;
}
stream() {
let cursor = 0;
let chunk_index = 0;
return new ReadableStream({
start: async (controller) => {
let chunk_start = 0;
let start_chunk = null;
for (chunk_index = 0; ; chunk_index++) {
const chunk = await this.#get_chunk(chunk_index);
if (!chunk) return null;
const chunk_end = chunk_start + chunk.byteLength;
if (this.#offset >= chunk_start && this.#offset < chunk_end) {
start_chunk = chunk;
break;
}
chunk_start = chunk_end;
}
if (this.#offset + this.size <= chunk_start + start_chunk.byteLength) {
controller.enqueue(
start_chunk.subarray(this.#offset - chunk_start, this.#offset + this.size - chunk_start)
);
controller.close();
} else {
controller.enqueue(start_chunk.subarray(this.#offset - chunk_start));
cursor = start_chunk.byteLength - this.#offset + chunk_start;
}
},
pull: async (controller) => {
chunk_index++;
let chunk = await this.#get_chunk(chunk_index);
if (!chunk) {
controller.error("Could not deserialize binary form: incomplete file data");
controller.close();
return;
}
if (chunk.byteLength > this.size - cursor) {
chunk = chunk.subarray(0, this.size - cursor);
}
controller.enqueue(chunk);
cursor += chunk.byteLength;
if (cursor >= this.size) {
controller.close();
}
}
});
}
async text() {
return text_decoder.decode(await this.arrayBuffer());
}
}
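// Design note: deserialize_binary_form wraps each LazyFile in a Proxy whose
// getPrototypeOf returns File.prototype, so `instanceof File` checks pass
// without ever buffering the file; bytes stream on demand from the request
// chunks, starting at the file's recorded offset.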
const path_regex = /^[a-zA-Z_$]\w*(\.[a-zA-Z_$]\w*|\[\d+\])*$/;
function split_path(path) {
if (!path_regex.test(path)) {
throw new Error(`Invalid path ${path}`);
}
return path.split(/\.|\[|\]/).filter(Boolean);
}
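// Illustrative: split_path("a.b[0].c") returns ["a", "b", "0", "c"]; any path
// that fails path_regex (leading digit, stray bracket, etc.) throws instead
// of being silently mis-parsed.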
function check_prototype_pollution(key) {
if (key === "__proto__" || key === "constructor" || key === "prototype") {
throw new Error(
`Invalid key "${key}"`
);
}
}
function deep_set(object, keys, value) {
let current = object;
for (let i = 0; i < keys.length - 1; i += 1) {
const key = keys[i];
check_prototype_pollution(key);
const is_array = /^\d+$/.test(keys[i + 1]);
const exists = key in current;
const inner = current[key];
if (exists && is_array !== Array.isArray(inner)) {
throw new Error(`Invalid array key ${keys[i + 1]}`);
}
if (!exists) {
current[key] = is_array ? [] : {};
}
current = current[key];
}
const final_key = keys[keys.length - 1];
check_prototype_pollution(final_key);
current[final_key] = value;
}
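// Commented sketch of deep_set semantics (values are illustrative): a numeric
// next segment creates an array, anything else a plain object, and the keys
// "__proto__", "constructor" and "prototype" are rejected at every level to
// block prototype pollution.
//
//   const target = {};
//   deep_set(target, ["items", "0", "name"], "first");
//   // => { items: [{ name: "first" }] }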
function normalize_issue(issue, server = false) {
const normalized = { name: "", path: [], message: issue.message, server };
if (issue.path !== void 0) {
let name = "";
for (const segment of issue.path) {
const key = (
/** @type {string | number} */
typeof segment === "object" ? segment.key : segment
);
normalized.path.push(key);
if (typeof key === "number") {
name += `[${key}]`;
} else if (typeof key === "string") {
name += name === "" ? key : "." + key;
}
}
normalized.name = name;
}
return normalized;
}
function flatten_issues(issues) {
const result = {};
for (const issue of issues) {
(result.$ ??= []).push(issue);
let name = "";
if (issue.path !== void 0) {
for (const key of issue.path) {
if (typeof key === "number") {
name += `[${key}]`;
} else if (typeof key === "string") {
name += name === "" ? key : "." + key;
}
(result[name] ??= []).push(issue);
}
}
}
return result;
}
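// Hedged example (the issue shape is illustrative): every issue is collected
// under the root key "$", and issues with a path are also indexed under each
// path prefix, so both whole-form and per-field lookups stay cheap.
//
//   flatten_issues([{ message: "required", path: ["user", "email"] }]);
//   // => { "$": [issue], "user": [issue], "user.email": [issue] }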
function deep_get(object, path) {
let current = object;
for (const key of path) {
if (current == null || typeof current !== "object") {
return current;
}
current = current[key];
}
return current;
}
function create_field_proxy(target, get_input, set_input, get_issues, path = []) {
const get_value = () => {
return deep_get(get_input(), path);
};
return new Proxy(target, {
get(target2, prop) {
if (typeof prop === "symbol") return target2[prop];
if (/^\d+$/.test(prop)) {
return create_field_proxy({}, get_input, set_input, get_issues, [
...path,
parseInt(prop, 10)
]);
}
const key = build_path_string(path);
if (prop === "set") {
const set_func = function(newValue) {
set_input(path, newValue);
return newValue;
};
return create_field_proxy(set_func, get_input, set_input, get_issues, [...path, prop]);
}
if (prop === "value") {
return create_field_proxy(get_value, get_input, set_input, get_issues, [...path, prop]);
}
if (prop === "issues" || prop === "allIssues") {
const issues_func = () => {
const all_issues = get_issues()[key === "" ? "$" : key];
if (prop === "allIssues") {
return all_issues?.map((issue) => ({
path: issue.path,
message: issue.message
}));
}
return all_issues?.filter((issue) => issue.name === key)?.map((issue) => ({
path: issue.path,
message: issue.message
}));
};
return create_field_proxy(issues_func, get_input, set_input, get_issues, [...path, prop]);
}
if (prop === "as") {
const as_func = (type, input_value) => {
const is_array = type === "file multiple" || type === "select multiple" || type === "checkbox" && typeof input_value === "string";
const prefix = type === "number" || type === "range" ? "n:" : type === "checkbox" && !is_array ? "b:" : "";
const base_props = {
name: prefix + key + (is_array ? "[]" : ""),
get "aria-invalid"() {
const issues = get_issues();
return key in issues ? "true" : void 0;
}
};
if (type !== "text" && type !== "select" && type !== "select multiple") {
base_props.type = type === "file multiple" ? "file" : type;
}
if (type === "submit" || type === "hidden") {
return Object.defineProperties(base_props, {
value: { value: input_value, enumerable: true }
});
}
if (type === "select" || type === "select multiple") {
return Object.defineProperties(base_props, {
multiple: { value: is_array, enumerable: true },
value: {
enumerable: true,
get() {
return get_value();
}
}
});
}
if (type === "checkbox" || type === "radio") {
return Object.defineProperties(base_props, {
value: { value: input_value ?? "on", enumerable: true },
checked: {
enumerable: true,
get() {
const value = get_value();
if (type === "radio") {
return value === input_value;
}
if (is_array) {
return (value ?? []).includes(input_value);
}
return value;
}
}
});
}
if (type === "file" || type === "file multiple") {
return Object.defineProperties(base_props, {
multiple: { value: is_array, enumerable: true },
files: {
enumerable: true,
get() {
const value = get_value();
if (value instanceof File) {
if (typeof DataTransfer !== "undefined") {
const fileList = new DataTransfer();
fileList.items.add(value);
return fileList.files;
}
return { 0: value, length: 1 };
}
if (Array.isArray(value) && value.every((f) => f instanceof File)) {
if (typeof DataTransfer !== "undefined") {
const fileList = new DataTransfer();
value.forEach((file) => fileList.items.add(file));
return fileList.files;
}
const fileListLike = { length: value.length };
value.forEach((file, index) => {
fileListLike[index] = file;
});
return fileListLike;
}
return null;
}
}
});
}
return Object.defineProperties(base_props, {
value: {
enumerable: true,
get() {
const value = get_value();
return value != null ? String(value) : "";
}
}
});
};
return create_field_proxy(as_func, get_input, set_input, get_issues, [...path, "as"]);
}
return create_field_proxy({}, get_input, set_input, get_issues, [...path, prop]);
}
});
}
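// Commented usage sketch (all arguments are illustrative): property access on
// the proxy accumulates a path, and the "as" helper materializes spreadable
// input props for that path.
//
//   const fields = create_field_proxy({}, () => ({ email: "x" }), () => {}, () => ({}));
//   fields.email.as("text");
//   // => { name: "email", value: "x", "aria-invalid": undefined }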
function build_path_string(path) {
let result = "";
for (const segment of path) {
if (typeof segment === "number") {
result += `[${segment}]`;
} else {
result += result === "" ? segment : "." + segment;
}
}
return result;
}
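// e.g. build_path_string(["user", "addresses", 0, "city"]) => "user.addresses[0].city"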
const INVALIDATED_PARAM = "x-sveltekit-invalidated";
const TRAILING_SLASH_PARAM = "x-sveltekit-trailing-slash";
function stringify(data, transport) {
const encoders = Object.fromEntries(Object.entries(transport).map(([k, v]) => [k, v.encode]));
return devalue.stringify(data, encoders);
}
function stringify_remote_arg(value, transport) {
if (value === void 0) return "";
const json_string = stringify(value, transport);
const bytes = new TextEncoder().encode(json_string);
return base64_encode(bytes).replaceAll("=", "").replaceAll("+", "-").replaceAll("/", "_");
}
function parse_remote_arg(string, transport) {
if (!string) return void 0;
const json_string = text_decoder.decode(
// no need to add back `=` characters, atob can handle it
base64_decode(string.replaceAll("-", "+").replaceAll("_", "/"))
);
const decoders = Object.fromEntries(Object.entries(transport).map(([k, v]) => [k, v.decode]));
return devalue.parse(json_string, decoders);
}
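// Round-trip sketch (the empty transport object is illustrative):
// stringify_remote_arg emits URL-safe base64 (padding stripped, "+/" swapped
// for "-_"), which parse_remote_arg reverses.
//
//   const encoded = stringify_remote_arg({ id: 42 }, {});
//   parse_remote_arg(encoded, {}); // => { id: 42 }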
function create_remote_key(id, payload) {
return id + "/" + payload;
}
export {
BINARY_FORM_CONTENT_TYPE as B,
INVALIDATED_PARAM as I,
TRAILING_SLASH_PARAM as T,
stringify_remote_arg as a,
create_field_proxy as b,
create_remote_key as c,
deserialize_binary_form as d,
set_nested_value as e,
flatten_issues as f,
deep_set as g,
normalize_issue as n,
parse_remote_arg as p,
stringify as s
};

@@ -0,0 +1,44 @@
import { a0 as getContext } from "./index2.js";
import "clsx";
import "@sveltejs/kit/internal";
import "./exports.js";
import "./utils.js";
import "@sveltejs/kit/internal/server";
import { n as noop } from "./equality.js";
const is_legacy = noop.toString().includes("$$") || /function \w+\(\) \{\}/.test(noop.toString());
if (is_legacy) {
({
data: {},
form: null,
error: null,
params: {},
route: { id: null },
state: {},
status: -1,
url: new URL("https://example.com")
});
}
const getStores = () => {
const stores = getContext("__svelte__");
return {
/** @type {typeof page} */
page: {
subscribe: stores.page.subscribe
},
/** @type {typeof navigating} */
navigating: {
subscribe: stores.navigating.subscribe
},
/** @type {typeof updated} */
updated: stores.updated
};
};
const page = {
subscribe(fn) {
const store = getStores().page;
return store.subscribe(fn);
}
};
export {
page as p
};

@@ -0,0 +1,16 @@
import { w as writable } from "./index.js";
const toasts = writable([]);
function addToast(message, type = "info", duration = 3e3) {
  const id = Math.random().toString(36).slice(2, 11);
console.log(`[toasts.addToast][Action] Adding toast context={{'id': '${id}', 'type': '${type}', 'message': '${message}'}}`);
toasts.update((all) => [...all, { id, message, type }]);
setTimeout(() => removeToast(id), duration);
}
function removeToast(id) {
console.log(`[toasts.removeToast][Action] Removing toast context={{'id': '${id}'}}`);
toasts.update((all) => all.filter((t) => t.id !== id));
}
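// Usage sketch: each toast schedules its own removal.
//
//   addToast("Saved", "success");     // auto-dismissed after the 3s default
//   addToast("Failed", "error", 5e3); // custom duration in ms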
export {
addToast as a,
toasts as t
};

@@ -0,0 +1,43 @@
const text_encoder = new TextEncoder();
const text_decoder = new TextDecoder();
function get_relative_path(from, to) {
const from_parts = from.split(/[/\\]/);
const to_parts = to.split(/[/\\]/);
from_parts.pop();
while (from_parts[0] === to_parts[0]) {
from_parts.shift();
to_parts.shift();
}
let i = from_parts.length;
while (i--) from_parts[i] = "..";
return from_parts.concat(to_parts).join("/");
}
function base64_encode(bytes) {
if (globalThis.Buffer) {
return globalThis.Buffer.from(bytes).toString("base64");
}
let binary = "";
for (let i = 0; i < bytes.length; i++) {
binary += String.fromCharCode(bytes[i]);
}
return btoa(binary);
}
function base64_decode(encoded) {
if (globalThis.Buffer) {
const buffer = globalThis.Buffer.from(encoded, "base64");
return new Uint8Array(buffer);
}
const binary = atob(encoded);
const bytes = new Uint8Array(binary.length);
for (let i = 0; i < binary.length; i++) {
bytes[i] = binary.charCodeAt(i);
}
return bytes;
}
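// Both helpers prefer Node's Buffer when it exists and fall back to
// btoa/atob in the browser. Commented round-trip sketch:
//
//   const bytes = new TextEncoder().encode("hi");
//   base64_decode(base64_encode(bytes)); // => Uint8Array for "hi"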
export {
text_encoder as a,
base64_encode as b,
base64_decode as c,
get_relative_path as g,
text_decoder as t
};

@@ -0,0 +1,12 @@
import { _ as escape_html, X as store_get, Y as unsubscribe_stores } from "../../chunks/index2.js";
import { p as page } from "../../chunks/stores.js";
function _error($$renderer, $$props) {
$$renderer.component(($$renderer2) => {
var $$store_subs;
$$renderer2.push(`<div class="container mx-auto p-4 text-center mt-20"><h1 class="text-6xl font-bold text-gray-800 mb-4">${escape_html(store_get($$store_subs ??= {}, "$page", page).status)}</h1> <p class="text-2xl text-gray-600 mb-8">${escape_html(store_get($$store_subs ??= {}, "$page", page).error?.message || "Page not found")}</p> <a href="/" class="bg-blue-500 text-white px-6 py-3 rounded-lg hover:bg-blue-600 transition-colors">Back to Dashboard</a></div>`);
if ($$store_subs) unsubscribe_stores($$store_subs);
});
}
export {
_error as default
};

@@ -0,0 +1,38 @@
import { V as attr_class, W as stringify, X as store_get, Y as unsubscribe_stores, Z as ensure_array_like, _ as escape_html, $ as slot } from "../../chunks/index2.js";
import { p as page } from "../../chunks/stores.js";
import "clsx";
import { t as toasts } from "../../chunks/toasts.js";
function Navbar($$renderer, $$props) {
$$renderer.component(($$renderer2) => {
var $$store_subs;
$$renderer2.push(`<header class="bg-white shadow-md p-4 flex justify-between items-center"><a href="/" class="text-3xl font-bold text-gray-800 focus:outline-none">Superset Tools</a> <nav class="space-x-4"><a href="/"${attr_class(`text-gray-600 hover:text-blue-600 font-medium ${stringify(store_get($$store_subs ??= {}, "$page", page).url.pathname === "/" ? "text-blue-600 border-b-2 border-blue-600" : "")}`)}>Dashboard</a> <a href="/settings"${attr_class(`text-gray-600 hover:text-blue-600 font-medium ${stringify(store_get($$store_subs ??= {}, "$page", page).url.pathname === "/settings" ? "text-blue-600 border-b-2 border-blue-600" : "")}`)}>Settings</a></nav></header>`);
if ($$store_subs) unsubscribe_stores($$store_subs);
});
}
function Footer($$renderer) {
$$renderer.push(`<footer class="bg-white border-t p-4 mt-8 text-center text-gray-500 text-sm">© 2025 Superset Tools. All rights reserved.</footer>`);
}
function Toast($$renderer) {
var $$store_subs;
$$renderer.push(`<div class="fixed bottom-0 right-0 p-4 space-y-2"><!--[-->`);
const each_array = ensure_array_like(store_get($$store_subs ??= {}, "$toasts", toasts));
for (let $$index = 0, $$length = each_array.length; $$index < $$length; $$index++) {
let toast = each_array[$$index];
$$renderer.push(`<div${attr_class(`p-4 rounded-md shadow-lg text-white ${stringify(toast.type === "info" && "bg-blue-500")} ${stringify(toast.type === "success" && "bg-green-500")} ${stringify(toast.type === "error" && "bg-red-500")} `)}>${escape_html(toast.message)}</div>`);
}
$$renderer.push(`<!--]--></div>`);
if ($$store_subs) unsubscribe_stores($$store_subs);
}
function _layout($$renderer, $$props) {
Toast($$renderer);
$$renderer.push(`<!----> <main class="bg-gray-50 min-h-screen flex flex-col">`);
Navbar($$renderer);
$$renderer.push(`<!----> <div class="p-4 flex-grow"><!--[-->`);
slot($$renderer, $$props, "default", {});
$$renderer.push(`<!--]--></div> `);
Footer($$renderer);
$$renderer.push(`<!----></main>`);
}
export {
_layout as default
};

@@ -0,0 +1,6 @@
const ssr = false;
const prerender = false;
export {
prerender,
ssr
};

@@ -0,0 +1,132 @@
import { a1 as ssr_context, X as store_get, _ as escape_html, Z as ensure_array_like, V as attr_class, Y as unsubscribe_stores, a2 as attr, a3 as bind_props } from "../../chunks/index2.js";
import { w as writable } from "../../chunks/index.js";
import "clsx";
function onDestroy(fn) {
/** @type {SSRContext} */
ssr_context.r.on_destroy(fn);
}
const plugins = writable([]);
const selectedPlugin = writable(null);
const selectedTask = writable(null);
const taskLogs = writable([]);
function TaskRunner($$renderer, $$props) {
$$renderer.component(($$renderer2) => {
var $$store_subs;
onDestroy(() => {
});
$$renderer2.push(`<div class="p-4 border rounded-lg bg-white shadow-md">`);
if (store_get($$store_subs ??= {}, "$selectedTask", selectedTask)) {
$$renderer2.push("<!--[-->");
$$renderer2.push(`<h2 class="text-xl font-semibold mb-2">Task: ${escape_html(store_get($$store_subs ??= {}, "$selectedTask", selectedTask).plugin_id)}</h2> <div class="bg-gray-900 text-white font-mono text-sm p-4 rounded-md h-96 overflow-y-auto"><!--[-->`);
const each_array = ensure_array_like(store_get($$store_subs ??= {}, "$taskLogs", taskLogs));
for (let $$index = 0, $$length = each_array.length; $$index < $$length; $$index++) {
let log = each_array[$$index];
$$renderer2.push(`<div><span class="text-gray-400">${escape_html(new Date(log.timestamp).toLocaleTimeString())}</span> <span${attr_class(log.level === "ERROR" ? "text-red-500" : "text-green-400")}>[${escape_html(log.level)}]</span> <span>${escape_html(log.message)}</span></div>`);
}
$$renderer2.push(`<!--]--></div>`);
} else {
$$renderer2.push("<!--[!-->");
$$renderer2.push(`<p>No task selected.</p>`);
}
$$renderer2.push(`<!--]--></div>`);
if ($$store_subs) unsubscribe_stores($$store_subs);
});
}
function DynamicForm($$renderer, $$props) {
$$renderer.component(($$renderer2) => {
let schema = $$props["schema"];
let formData = {};
function initializeForm() {
if (schema && schema.properties) {
for (const key in schema.properties) {
formData[key] = schema.properties[key].default || "";
}
}
}
initializeForm();
$$renderer2.push(`<form class="space-y-4">`);
if (schema && schema.properties) {
$$renderer2.push("<!--[-->");
$$renderer2.push(`<!--[-->`);
const each_array = ensure_array_like(Object.entries(schema.properties));
for (let $$index = 0, $$length = each_array.length; $$index < $$length; $$index++) {
let [key, prop] = each_array[$$index];
$$renderer2.push(`<div class="flex flex-col"><label${attr("for", key)} class="mb-1 font-semibold text-gray-700">${escape_html(prop.title || key)}</label> `);
if (prop.type === "string") {
$$renderer2.push("<!--[-->");
$$renderer2.push(`<input type="text"${attr("id", key)}${attr("value", formData[key])}${attr("placeholder", prop.description || "")} class="p-2 border rounded-md"/>`);
} else {
$$renderer2.push("<!--[!-->");
if (prop.type === "number" || prop.type === "integer") {
$$renderer2.push("<!--[-->");
$$renderer2.push(`<input type="number"${attr("id", key)}${attr("value", formData[key])}${attr("placeholder", prop.description || "")} class="p-2 border rounded-md"/>`);
} else {
$$renderer2.push("<!--[!-->");
if (prop.type === "boolean") {
$$renderer2.push("<!--[-->");
$$renderer2.push(`<input type="checkbox"${attr("id", key)}${attr("checked", formData[key], true)} class="h-5 w-5"/>`);
} else {
$$renderer2.push("<!--[!-->");
}
$$renderer2.push(`<!--]-->`);
}
$$renderer2.push(`<!--]-->`);
}
$$renderer2.push(`<!--]--></div>`);
}
$$renderer2.push(`<!--]--> <button type="submit" class="w-full bg-green-500 text-white p-2 rounded-md hover:bg-green-600">Run Task</button>`);
} else {
$$renderer2.push("<!--[!-->");
}
$$renderer2.push(`<!--]--></form>`);
bind_props($$props, { schema });
});
}
function _page($$renderer, $$props) {
$$renderer.component(($$renderer2) => {
var $$store_subs;
let data = $$props["data"];
if (data.plugins) {
plugins.set(data.plugins);
}
$$renderer2.push(`<div class="container mx-auto p-4">`);
if (store_get($$store_subs ??= {}, "$selectedTask", selectedTask)) {
$$renderer2.push("<!--[-->");
TaskRunner($$renderer2);
$$renderer2.push(`<!----> <button class="mt-4 bg-blue-500 text-white p-2 rounded">Back to Task List</button>`);
} else {
$$renderer2.push("<!--[!-->");
if (store_get($$store_subs ??= {}, "$selectedPlugin", selectedPlugin)) {
$$renderer2.push("<!--[-->");
$$renderer2.push(`<h2 class="text-2xl font-bold mb-4">${escape_html(store_get($$store_subs ??= {}, "$selectedPlugin", selectedPlugin).name)}</h2> `);
DynamicForm($$renderer2, {
schema: store_get($$store_subs ??= {}, "$selectedPlugin", selectedPlugin).schema
});
$$renderer2.push(`<!----> <button class="mt-4 bg-gray-500 text-white p-2 rounded">Back to Dashboard</button>`);
} else {
$$renderer2.push("<!--[!-->");
$$renderer2.push(`<h1 class="text-2xl font-bold mb-4">Available Tools</h1> `);
if (data.error) {
$$renderer2.push("<!--[-->");
$$renderer2.push(`<div class="bg-red-100 border border-red-400 text-red-700 px-4 py-3 rounded mb-4">${escape_html(data.error)}</div>`);
} else {
$$renderer2.push("<!--[!-->");
}
$$renderer2.push(`<!--]--> <div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-4"><!--[-->`);
const each_array = ensure_array_like(data.plugins);
for (let $$index = 0, $$length = each_array.length; $$index < $$length; $$index++) {
let plugin = each_array[$$index];
$$renderer2.push(`<div class="border rounded-lg p-4 cursor-pointer hover:bg-gray-100" role="button" tabindex="0"><h2 class="text-xl font-semibold">${escape_html(plugin.name)}</h2> <p class="text-gray-600">${escape_html(plugin.description)}</p> <span class="text-sm text-gray-400">v${escape_html(plugin.version)}</span></div>`);
}
$$renderer2.push(`<!--]--></div>`);
}
$$renderer2.push(`<!--]-->`);
}
$$renderer2.push(`<!--]--></div>`);
if ($$store_subs) unsubscribe_stores($$store_subs);
bind_props($$props, { data });
});
}
export {
_page as default
};

@@ -0,0 +1,18 @@
import { a as api } from "../../chunks/api.js";
async function load() {
try {
const plugins = await api.getPlugins();
return {
plugins
};
} catch (error) {
console.error("Failed to load plugins:", error);
return {
plugins: [],
error: "Failed to load plugins"
};
}
}
export {
load
};

Some files were not shown because too many files have changed in this diff.