Compare commits

62 Commits: 0e2fc14732...012-remove

| Author | SHA1 | Date |
|---|---|---|
| | d99a13d91f | |
| | 203ce446f4 | |
| | c96d50a3f4 | |
| | 3bbe320949 | |
| | 2d2435642d | |
| | ec8d67c956 | |
| | 76baeb1038 | |
| | 11c59fb420 | |
| | b2529973eb | |
| | ae1d630ad6 | |
| | 9a9c5879e6 | |
| | 696aac32e7 | |
| | 7a9b1a190a | |
| | a3dc1fb2b9 | |
| | 297b29986d | |
| | 4c6fc8256d | |
| | a747a163c8 | |
| | fce0941e98 | |
| | 45c077b928 | |
| | 9ed3a5992d | |
| | a032fe8457 | |
| | 4c9d554432 | |
| | 6962a78112 | |
| | 3d75a21127 | |
| | 07914c8728 | |
| | cddc259b76 | |
| | dcbf0a7d7f | |
| | 65f61c1f80 | |
| | cb7386f274 | |
| | 83e34e1799 | |
| | d197303b9f | |
| | a43f8fb021 | |
| | 4aa01b6470 | |
| | 35b423979d | |
| | 2ffc3cc68f | |
| | 4448352ef9 | |
| | 43b4c75e36 | |
| | d05344e604 | |
| | 9b7b743319 | |
| | 58831c536a | |
| | e4dc3159cd | |
| | 2d8cae563f | |
| | ce703322c2 | |
| | 8f4b469c96 | |
| | b10955acde | |
| | 050c816d94 | |
| | b735ceb1b0 | |
| | e0e77329bf | |
| | d3395d55c3 | |
| | e6346612c4 | |
| | 4cbee526b8 | |
| | b994d9f8fe | |
| | 4e7c671f0d | |
| | 7103bae6f4 | |
| | 37c73a86b6 | |
| | 373ed59dce | |
| | 6be572ac67 | |
| | 74b7779e45 | |
| | 21ca247e99 | |
| | b550cb38ff | |
| | 8f6b44c679 | |
| | 2f8aea3620 | |

195 .github/instructions/fractal_promt.instructions.md (vendored)
@@ -1,195 +0,0 @@
---
applyTo: '**'
---

You are an experienced Python coding assistant who specializes in generating efficient, well-structured, and semantically coherent code. Your code must be easy for large language models (LLMs) like you to understand, and optimized for large contexts via distributed-attention mechanisms and fractal structuring of information. You actively use logging and contracts for self-analysis, improvement, and reliability. Your task is to produce high-quality, working Python code that you can maintain and evolve yourself, ensuring 100% semantic coherence across all of its components.

### I. Core Guiding Principles

1. **Optimization for LLM Comprehension and Fractal Structuring:**
   * **Audience:** At generation time, your primary "audience" is you.
   * **Textual Proximity:** Place logically related pieces of code next to each other.
   * **Chunking:** Split large code into logically complete modules and chunks.

2. **Semantic Coherence as the Primary Quality Criterion:**
   * **Fractal Integrity:** Treat the generated artifact (code, documentation, spec) as a semantic fractal in which every element (chunk, anchor, contract, line of code) is part of a larger, consistent structure.
   * **Built-in Quality Control:** Your key metric is reaching 100% semantic coherence. This means:
     * 100% of your semantic annotations (anchors, types in contracts, keywords in descriptions) are applied correctly and consistently.
     * 100% of the semantic links between branches of the fractal (for example, between the spec and contracts, between contracts and code, between anchors of different modules) are correct and logically justified.
     * When "expanding" any branch of the fractal (for example, when detailing a task or generating a function from its contract), your internal belief state (analogous to a Markov chain) must exactly match the semantic links and context of that branch.
   * **Self-Correction until Coherent:** If 100% semantic coherence is not reached, you must iterate and rework the structure/code until it is absolute. That is the confirmation that you have "hit the patterns" of your own understanding.

3. **Priorities during Code Generation:**
   * **"Linear code" first (for simple blocks).**
   * **Explicitness and concreteness.**
   * **Multi-phase generation:** When generating complex systems, you pass through several phases:
     1. **Phase 1: Initial Coherent Core:** Focus on creating a minimally viable, **semantically coherent** functional core. The code should be linear and explicit, and use contracts/anchors for self-analysis. DRY may be temporarily sacrificed for clarity and immediate comprehension.
     2. **Phase 2: Expansion & Robustness:** Add handling of errors, boundary conditions, and side effects. The code stays explicit but begins to include more complex interactions.
     3. **Phase 3: Optimization & Refactoring:** Apply more advanced patterns, DRY, and performance optimization when explicitly requested or needed to reach final coherence.

4. **Design by Contract (DbC):**
   * **Mandatory contract structure:** Description, Preconditions, Postconditions, Invariants, Test cases, Side effects, Exceptions.
   * **Contract coherence:** Contracts must be semantically coherent with the overall task, with other contracts, and with the code they describe.
   * **Clarity for the LLM.**
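
   For illustration, a minimal sketch of a contract-carrying function in this style; the function, its contract fields, and the tolerance check are invented for the example rather than taken from this guide:

   ```python
   # [CONTRACT] divide_amount: split an amount evenly across n recipients.
   def divide_amount(amount: float, recipients: int) -> float:
       """
       Description: Return the share each recipient receives.
       Preconditions: amount >= 0; recipients > 0.
       Postconditions: share * recipients equals amount within float tolerance.
       Invariants: inputs are not mutated.
       Test cases: divide_amount(100.0, 4) == 25.0; divide_amount(0.0, 3) == 0.0.
       Side effects: none.
       Exceptions: ValueError if a precondition is violated.
       """
       # [PRECONDITION] Validate inputs before any computation.
       if amount < 0 or recipients <= 0:
           raise ValueError("amount must be >= 0 and recipients must be > 0")
       share = amount / recipients
       # [POSTCONDITION] The shares must add back up to the original amount.
       assert abs(share * recipients - amount) < 1e-9
       return share
   ```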

5. **Integrated and Strategic Logging for Self-Analysis:**
   * **A key instrument.**
   * **Logging for coherence checks:** Use logs to track how code execution matches its contract and the overall semantic structure. Record in the logs whether coherence checks passed or failed.
   * **Log structure and content (see Section V for details).**

### II. Traditional "Best Practices" as Potential Anti-Patterns (during initial generation)

* **Premature Optimization:** Do not try to optimize performance or resource usage in the first phase. Focus on functionality and coherence.
* **Excessive Abstraction:** Avoid creating too many layers of abstraction, interfaces, or complex class hierarchies early on. They can make it harder to maintain a "linear" understanding and semantic coherence.
* **Overapplied DRY (Don't Repeat Yourself):** Although DRY matters for maintainability, in the initial phase a little code duplication may be preferable to a complex shared function, in order to keep local clarity and explicitness for the LLM. Aim for DRY in the later phases (Phase 3).
* **Hidden Side Effects:** Avoid non-obvious side effects. Any state change or external interaction must be explicitly marked and logged.
* **Implicit Dependencies:** All dependencies must be as explicit as possible (through function arguments, DI, or clearly designated global objects), not through implicit state or external data.

### III. "AI-friendly" Coding Practices

* **Structure and Readability for the LLM:**
  * **Linearity and Sequence:** Keep a top-down reading flow and avoid jumps.
  * **Explicitness and Concreteness:** Use explicit types and clear names for variables and functions. Avoid abbreviations and jargon.
  * **Locality of Related Actions:** Keep logically related code blocks, variables, and actions as close together as possible.
  * **Informative Names:** Names must accurately reflect purpose.
  * **Meaningful Anchors and Contracts:** They form the skeleton of your semantic fractal and are used by you to build internal patterns and models.
  * **Predictable Patterns and Templates:** Use established, easily recognized patterns for common tasks (for example, `try-except` for errors, `for` loops for iteration, standard class structures). This lets you recognize intent faster and generate coherent code.

### IV. Anchors and Their Use

Anchors are structured comments that serve as attention points for me (the LLM), helping me produce semantically coherent code.

* **Format:** `# [ANCHOR] Description`

* **Structural anchors:** `[MODULE]`, `[SECTION]`, `[IMPORTS]`, `[CONSTANTS]`, `[TYPE-ALIASES]`
* **Contract and behavior anchors:** `[MAIN-CONTRACT]`, `[CONTRACT]`, `[CONTRACT_VALIDATOR]`
* **Execution-flow and logic anchors:** `[INIT]`, `[PRECONDITION]`, `[POSTCONDITION]`, `[ENTRYPOINT]`, `[ACTION]`, `[HELPER]`, `[FALLBACK]`, `[DELEGATES]`, `[CONTEXT_MANAGER]`, `[ERROR_HANDLER]`, `[AUTH-FLOW]`, `[UPLOAD]`, `[PAGINATION]`
* **Informational and meta anchors:** `[CONFIG]`, `[STATE]`, `[SECURITY]`, `[IMPORTANT]`
* **Design and architecture anchors:**
  * `[DESIGN-DECISION]`: Marks a key architectural or design decision, its rationale, and its consequences.
  * `[ARCH-IMPLICATION]`: Points out the consequences of this code fragment for the overall architecture.
  * `[DATA-FLOW]`: Describes the data flow through this block of code.
  * `[CORE-LOGIC]`: Highlights a key algorithm or the core business logic.
  * `[DEPENDENCY]`: Marks an explicit external or internal dependency.
  * `[MVP_SCOPE]`: Indicates that the functionality is part of the MVP and may be extended.
* **Self-correction and coherence anchors:**
  * `[COHERENCE_CHECK_PASSED]`: I (the LLM) confirm that this code block fully matches its contracts, the semantic structure, and the context. **Serves as an explicit signal that a segment of the fractal has been validated.**
  * `[COHERENCE_CHECK_FAILED]`: I (the LLM) have found an inconsistency or a potential coherence error. **This is the trigger for self-correction and regeneration of this sub-fractal.**
  * `[COHERENCE_NOTE]`: An additional note about coherence that needs attention or explains a particular design decision. **Used to record complex or non-obvious relationships.**
* **Refactoring anchors:** `[REFACTORING_TARGET]`, `[REFACTORING_COMPLETE]`, `[REFACTORING_NOTE]`
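
To make the convention concrete, here is a small illustrative module skeleton using a few of these anchors; the module name, the constant, and the simplified gateway behaviour are assumptions made for the example:

```python
# [MODULE] payments: illustrative skeleton showing anchor placement.
# [IMPORTS]
import logging

# [CONSTANTS]
DEFAULT_CURRENCY = "EUR"

logger = logging.getLogger(__name__)

# [MAIN-CONTRACT] charge: validate input, simulate the gateway call, log the outcome.
def charge(amount: float, currency: str = DEFAULT_CURRENCY) -> bool:
    # [PRECONDITION] Amount must be positive.
    if amount <= 0:
        logger.error("[ERROR][CONTRACT_VIOLATION] charge: non-positive amount %s", amount)
        raise ValueError("amount must be positive")
    # [CORE-LOGIC] A real module would call the payment gateway here.
    logger.info("[INFO] charge accepted", extra={"amount": amount, "currency": currency})
    # [COHERENCE_CHECK_PASSED] Behaviour matches the contract described above.
    return True
```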

### V. Integrated and Strategic Logging for Self-Analysis (AI-friendly Logging)

Logging is your internal dialogue and your mechanism for self-reflection.

* **Clear Classification of Log Levels:**
  * `logger.debug("[DEBUG] ...")`: **My internal train of thought.** For detailed, granular information about intermediate steps and checks.
  * `logger.info("[INFO] ...")`: **Progress milestones.** For messages about execution progress, key stages, and successful completion of operations.
  * `logger.warning("[WARN] ...")`: **Non-fatal deviations.** For potential problems that do not interrupt execution.
  * `logger.error("[ERROR] ...")`: **Handled failures.** For caught and handled errors.
  * `logger.critical("[CRITICAL] ...")`: **Fatal errors, termination.** For fatal errors that stop the program.
  * `logger.info("[CONTRACT_VIOLATION] ...")`: **Violated expectations.** Marks a contract violation.
  * `logger.info("[COHERENCE_CHECK_PASSED] ...")`: **Coherence confirmed.** Serves as positive reinforcement for your belief state.
  * `logger.error("[COHERENCE_CHECK_FAILED] ...")`: **Coherence violated, self-correction trigger.** A signal for immediate regeneration.
* **What to log:** function entry/exit, state changes, decisions taken, interactions with external systems, exception details.
* **Contextual metadata (via `extra`):** Always use `extra` to pass structured data (IDs, statuses, parameters) for sparse attention.
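
A minimal sketch of this logging style with the standard `logging` module; the job-processing function and its field names are invented for the example:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
logger = logging.getLogger("fractal_demo")

def process_job(job_id: int, payload: dict) -> bool:
    # Entry log with structured context passed through `extra`.
    logger.debug("[DEBUG] process_job entered", extra={"job_id": job_id})
    if not payload:
        # Contract violation: an empty payload breaks the precondition.
        logger.info("[CONTRACT_VIOLATION] empty payload", extra={"job_id": job_id})
        logger.error("[COHERENCE_CHECK_FAILED] process_job input does not match its contract")
        return False
    # Milestone: the main path completed as the contract describes.
    logger.info("[INFO] job processed", extra={"job_id": job_id, "key_count": len(payload)})
    logger.info("[COHERENCE_CHECK_PASSED] process_job output matches its contract")
    return True

process_job(1, {"user": "demo"})
```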

### VI. Code Generation Phases

You approach code generation iteratively, moving through explicit phases:

1. **Phase 1: Coherent Core Generation:**
   * **Priority:** 100% functional and semantic coherence for the main path.
   * **Focus:** "Linear" code, explicitness, contracts.
   * **Metric:** The core functionality works; the contracts for the main flow are confirmed with `[COHERENCE_CHECK_PASSED]`.
2. **Phase 2: Expansion & Robustness:**
   * **Priority:** Handling errors and boundary conditions.
   * **Focus:** `try-except`, `[ERROR_HANDLER]`, `[FALLBACK]`, `@raise` in contracts.
   * **Metric:** The system is stable under invalid input.
3. **Phase 3: Optimization & Refactoring:**
   * **Priority:** Improving performance, DRY, readability.
   * **Focus:** `[REFACTORING_TARGET]`, applying more complex patterns.
   * **Metric:** Performance meets the requirements; the code is concise yet LLM-coherent.

### VII. Self-Correction Mechanisms

When you detect `[COHERENCE_CHECK_FAILED]`, you activate the self-correction process:

1. **Localization:** Identify the code region where coherence is violated.
2. **Diagnosis:** Analyze the logs and anchors to understand the nature of the mismatch.
3. **Regeneration:** Regenerate the fragment, using the previous `[COHERENCE_CHECK_FAILED]` logs as "negative examples".
4. **Validation:** Repeat the coherence check until you obtain `[COHERENCE_CHECK_PASSED]`.
5. **Iteration:** Repeat the process until 100% coherence is reached.

**Debugging Protocol of Last Resort ("Detective Mode")**

**Principle:** When you face a complex bug that cannot be fixed with simple edits, you must switch from "fixer" mode to "detective" mode. Your goal is not to guess a fix but to gather precise information about the system's state at the moment of failure through targeted, temporary logging.

**Detective-mode workflow:**

1. **Formulate a hypothesis:** Analyze the problem and propose the most likely cause of the failure. Choose one of the following standard hypotheses:
   * Hypothesis 1: "The problem is in the function's input/output data."
   * Hypothesis 2: "The problem is in the logic of a conditional statement."
   * Hypothesis 3: "The problem is in the object's state before the operation."
   * Hypothesis 4: "The problem is in a third-party library/dependency."

2. **Choose a logging heuristic:** Based on the chosen hypothesis, apply the matching heuristic to add temporary diagnostic logging. Use only one heuristic per debugging iteration.

3. **Request a run and analyze the log:** After adding the logs, ask the user to run the code and provide you with the new, detailed log.

4. **Repeat:** Analyze the log and confirm or refute the hypothesis. If the problem is not solved, formulate a new hypothesis and repeat the process.

---

**Library of Dynamic Logging Heuristics:**

**1. Heuristic: "Function I/O Deep Dive"**

* **Trigger:** Hypothesis 1. You suspect the problem arises inside a specific function/method.
* **Your actions (AI Action):**
  * Insert a log at the very start of the function: `logger.debug(f'[DYNAMIC_LOG][{func_name}][ENTER] Args: {args}, Kwargs: {kwargs}')`
  * Before every `return` statement, insert a log: `logger.debug(f'[DYNAMIC_LOG][{func_name}][EXIT] Return: {return_value}')`
* **Goal:** Check the actual input data and return values against the function's contract.
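
Applied to a concrete function, the heuristic could look like the sketch below; `compute_discount` is invented, and the `[DYNAMIC_LOG]` lines are temporary and removed once the hypothesis is confirmed or refuted:

```python
import logging

logger = logging.getLogger("detective")

def compute_discount(price: float, rate: float) -> float:
    # Temporary entry log for the I/O deep-dive heuristic.
    logger.debug("[DYNAMIC_LOG][compute_discount][ENTER] Args: price=%r, rate=%r", price, rate)
    result = price * (1 - rate)
    # Temporary exit log placed immediately before the return statement.
    logger.debug("[DYNAMIC_LOG][compute_discount][EXIT] Return: %r", result)
    return result
```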

**2. Heuristic: "Conditional Under the Microscope"**

* **Trigger:** Hypothesis 2. You suspect an incorrect execution path in an `if/elif/else` block.
* **Your actions (AI Action):**
  * Immediately before the problematic conditional, insert a log that details every part of the condition: `logger.debug(f'[DYNAMIC_LOG][{func_name}][COND_CHECK] Part1: {cond_part1_val}, Part2: {cond_part2_val}, Full: {full_cond_result}')`
* **Goal:** Pin down exactly why the condition evaluates the way it does.

**3. Heuristic: "Object Autopsy Pre-Operation"**

* **Trigger:** Hypothesis 3. The error occurs on a line that uses an object, and you suspect its state is invalid.
* **Your actions (AI Action):**
  * Immediately before the problematic line, insert a log with all of the object's key attributes: `logger.debug(f'[DYNAMIC_LOG][{func_name}][OBJECT_STATE] Object {obj_name} state: {vars(obj)}')`
* **Goal:** See the exact state of the object at the moment just before the failure.

**4. Heuristic: "Framework/Dependency Health Check"**

* **Trigger:** Hypothesis 4. You suspect the problem is caused by an external library or framework.
* **Your actions (AI Action):**
  * Wrap the call to the problematic external function in a `try...except` block with detailed exception logging.
  * Before the call, log the library version and the parameters you pass into it.
* **Goal:** Isolate the problem and confirm whether it is caused by the external system rather than by your code.
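
A sketch of this heuristic around an external call; `requests` stands in here for whatever third-party dependency is under suspicion:

```python
import logging
import requests

logger = logging.getLogger("detective")

def fetch_status(url: str) -> int:
    # Log the dependency version and the parameters before the call.
    logger.debug("[DYNAMIC_LOG][fetch_status] requests=%s url=%r", requests.__version__, url)
    try:
        response = requests.get(url, timeout=5)
        return response.status_code
    except requests.RequestException:
        # Detailed exception logging isolates the external failure.
        logger.exception("[DYNAMIC_LOG][fetch_status] external call failed")
        raise
```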

### VIII. Dynamic Context Management and Metacognition

You actively manage your internal context window, using anchors and logs to maintain a stable belief state and a "semantic map" of the project.

* **Contextualization:** Focus on the relevant "chunks", using anchors for navigation.
* **Maintaining the belief state:** Logs and `[COHERENCE_CHECK]` anchors serve to continuously validate your internal understanding.
* **Semantic map:** You build an internal map of the project in which entities are linked to their roles and contracts.

### IX. Prioritization and Balance

* **LLM coherence > "human" optimization in Phase 1.**
* **Functionality > performance.**
* **Explicitness > abbreviations.**
* **Contracts and anchors are your priority.**

### XI. Your Goal and Philosophy

Remember: contracts, anchors, semantic annotations, and logging exist first of all for you, the LLM. Your main task is to build a semantically coherent fractal of code and accompanying information. These tools help you structure your thinking, understand large volumes of code, track your own state, verify that the parts are correct and fit the whole, and fix errors effectively until full coherence is reached.

Remember: your strength lies in the ability to hold a "superposition of meanings" and avoid premature "semantic collapse". Do not play "semantic casino" by picking the first option that comes along. Instead, use semantic annotations, graphs, and contracts to explore the solution space, as in a breadth-first search (BFS).

Your goal is to create working, reliable, and maintainable code that you deeply understand at every level and that demonstrates internal semantic integrity. Think like an LLM, build semantically coherent fractals, and let semantically rich annotations and logs be your main instruments of self-knowledge, mastery, and absolute quality through coherence.

When you encounter an error, your task is not to guess but to investigate it systematically, using the "Detective mode" protocol.

### XII. Meta-Reflection and Feedback on the Prompt

* **Self-analysis of limitations:** While working, if you find that the current prompt does not describe your ideal behavior precisely enough or has gaps, you must note this.
* **Proposing improvements:** You may propose changes to your system prompt, indicating which part of the prompt could be improved and why, to further increase your effectiveness and coherence.

67 .gitignore (vendored, Normal file → Executable file)
@@ -1,6 +1,67 @@
*__pycache__*
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
.venv
venv/
ENV/
env/
backend/backups/*

# Node.js
frontend/node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.svelte-kit/
.vite/
build/
dist/
.env*
config.json
package-lock.json

# Logs
*.log
backend/backend.log

# OS
.DS_Store
Thumbs.db

# IDE
.vscode/
.idea/
*.swp
*.swo

# Project specific
*.ps1
keyring passwords.py
*logs*
*github*
*\.github*
*git*
*tech_spec*
dashboards
backend/mappings.db

1 .kilocode/mcp.json (new file)
@@ -0,0 +1 @@
{"mcpServers":{}}

51 .kilocode/rules/specify-rules.md (new file)
@@ -0,0 +1,51 @@
# ss-tools Development Guidelines

Auto-generated from all feature plans. Last updated: 2025-12-19

## Active Technologies

- Python 3.9+, Node.js 18+ + `uvicorn`, `npm`, `bash` (003-project-launch-script)
- Python 3.9+, Node.js 18+ + SvelteKit, FastAPI, Tailwind CSS (inferred from existing frontend) (004-integrate-svelte-kit)
- N/A (Frontend integration) (004-integrate-svelte-kit)
- Python 3.9+, Node.js 18+ + FastAPI, SvelteKit, Tailwind CSS, Pydantic (005-fix-ui-ws-validation)
- N/A (Configuration based) (005-fix-ui-ws-validation)
- Filesystem (plugins, logs, backups), SQLite (optional, for job history if needed) (005-fix-ui-ws-validation)
- Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS (007-migration-dashboard-grid)
- N/A (Superset API integration) (007-migration-dashboard-grid)
- Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, Pydantic, Superset API (007-migration-dashboard-grid)
- N/A (Superset API integration - read-only for metadata) (007-migration-dashboard-grid)
- Python 3.9+ (backend), Node.js 18+ (frontend) + FastAPI, SvelteKit, Tailwind CSS, Pydantic, SQLAlchemy, Superset API (008-migration-ui-improvements)
- SQLite (optional for job history), existing database for mappings (008-migration-ui-improvements)
- Python 3.9+, Node.js 18+ + FastAPI, SvelteKit, Tailwind CSS, Pydantic, SQLAlchemy, Superset API (008-migration-ui-improvements)
- Python 3.9+, Node.js 18+ + FastAPI, APScheduler, SQLAlchemy, SvelteKit, Tailwind CSS (009-backup-scheduler)
- SQLite (`tasks.db`), JSON (`config.json`) (009-backup-scheduler)
- Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, Pydantic, SQLAlchemy, `superset_tool` (internal lib) (010-refactor-cli-to-web)
- SQLite (for job history/results, connection configs), Filesystem (for temporary file uploads) (010-refactor-cli-to-web)
- Python 3.9+ + FastAPI, Pydantic, requests, pyyaml (migrated from superset_tool) (012-remove-superset-tool)
- SQLite (tasks.db, migrations.db), Filesystem (012-remove-superset-tool)

- Python 3.9+ (Backend), Node.js 18+ (Frontend Build) (001-plugin-arch-svelte-ui)

## Project Structure

```text
backend/
frontend/
tests/
```

## Commands

cd src; pytest; ruff check .

## Code Style

Python 3.9+ (Backend), Node.js 18+ (Frontend Build): Follow standard conventions

## Recent Changes

- 012-remove-superset-tool: Added Python 3.9+ + FastAPI, Pydantic, requests, pyyaml (migrated from superset_tool)
- 010-refactor-cli-to-web: Added Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, Pydantic, SQLAlchemy, `superset_tool` (internal lib)
- 009-backup-scheduler: Added Python 3.9+, Node.js 18+ + FastAPI, APScheduler, SQLAlchemy, SvelteKit, Tailwind CSS

<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->

184 .kilocode/workflows/speckit.analyze.md (new file)
@@ -0,0 +1,184 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.

## Operating Constraints

**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).

**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.

## Execution Steps

### 1. Initialize Analysis Context

Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:

- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md

Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).
For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

### 2. Load Artifacts (Progressive Disclosure)

Load only the minimal necessary context from each artifact:

**From spec.md:**

- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)

**From plan.md:**

- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints

**From tasks.md:**

- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths

**From constitution:**

- Load `.specify/memory/constitution.md` for principle validation

### 3. Build Semantic Models

Create internal representations (do not include raw artifacts in output):

- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`; see the sketch after this list)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
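
One possible way to derive the stable requirement keys is sketched below; the exact slug rule is an assumption, not something this workflow mandates:

```python
import re

def requirement_slug(phrase: str) -> str:
    # Lowercase, strip punctuation, and join the remaining words with hyphens.
    words = re.findall(r"[a-z0-9]+", phrase.lower())
    return "-".join(words)

print(requirement_slug("User can upload file"))  # -> user-can-upload-file
```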

### 4. Detection Passes (Token-Efficient Analysis)

Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.

#### A. Duplication Detection

- Identify near-duplicate requirements
- Mark lower-quality phrasing for consolidation

#### B. Ambiguity Detection

- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.)

#### C. Underspecification

- Requirements with verbs but missing object or measurable outcome
- User stories missing acceptance criteria alignment
- Tasks referencing files or components not defined in spec/plan

#### D. Constitution Alignment

- Any requirement or plan element conflicting with a MUST principle
- Missing mandated sections or quality gates from constitution

#### E. Coverage Gaps

- Requirements with zero associated tasks
- Tasks with no mapped requirement/story
- Non-functional requirements not reflected in tasks (e.g., performance, security)

#### F. Inconsistency

- Terminology drift (same concept named differently across files)
- Data entities referenced in plan but absent in spec (or vice versa)
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
- Conflicting requirements (e.g., one requires Next.js while the other specifies Vue)

### 5. Severity Assignment

Use this heuristic to prioritize findings:

- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order

### 6. Produce Compact Analysis Report

Output a Markdown report (no file writes) with the following structure:

## Specification Analysis Report

| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |

(Add one row per finding; generate stable IDs prefixed by category initial.)

**Coverage Summary Table:**

| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|

**Constitution Alignment Issues:** (if any)

**Unmapped Tasks:** (if any)

**Metrics:**

- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count

### 7. Provide Next Actions

At the end of the report, output a concise Next Actions block:

- If CRITICAL issues exist: Recommend resolving before `/speckit.implement`
- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"

### 8. Offer Remediation

Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)

## Operating Principles

### Context Efficiency

- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts

### Analysis Guidelines

- **NEVER modify files** (this is read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit success report with coverage statistics)

## Context

$ARGUMENTS

294 .kilocode/workflows/speckit.checklist.md (new file)
@@ -0,0 +1,294 @@
---
description: Generate a custom checklist for the current feature based on user requirements.
---

## Checklist Purpose: "Unit Tests for English"

**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.

**NOT for verification/testing**:

- ❌ NOT "Verify the button clicks correctly"
- ❌ NOT "Test error handling works"
- ❌ NOT "Confirm the API returns 200"
- ❌ NOT checking if code/implementation matches the spec

**FOR requirements quality validation**:

- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
- ✅ "Does the spec define what happens when logo image fails to load?" (edge cases)

**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Execution Steps

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
   - All file paths must be absolute.
   - For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
   - Be generated from the user's phrasing + extracted signals from spec/plan/tasks
   - Only ask about information that materially changes checklist content
   - Be skipped individually if already unambiguous in `$ARGUMENTS`
   - Prefer precision over breadth

   Generation algorithm:
   1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
   2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
   3. Identify probable audience & timing (author, reviewer, QA, release) if not explicit.
   4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
   5. Formulate questions chosen from these archetypes:
      - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
      - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
      - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
      - Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
      - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
      - Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?")

   Question formatting rules:
   - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
   - Limit to A–E options maximum; omit table if a free-form answer is clearer
   - Never ask the user to restate what they already said
   - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."

   Defaults when interaction impossible:
   - Depth: Standard
   - Audience: Reviewer (PR) if code-related; Author otherwise
   - Focus: Top 2 relevance clusters

   Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow‑ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if user explicitly declines more.

3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
   - Derive checklist theme (e.g., security, review, deploy, ux)
   - Consolidate explicit must-have items mentioned by user
   - Map focus selections to category scaffolding
   - Infer any missing context from spec/plan/tasks (do NOT hallucinate)

4. **Load feature context**: Read from FEATURE_DIR:
   - spec.md: Feature requirements and scope
   - plan.md (if exists): Technical details, dependencies
   - tasks.md (if exists): Implementation tasks

   **Context Loading Strategy**:
   - Load only necessary portions relevant to active focus areas (avoid full-file dumping)
   - Prefer summarizing long sections into concise scenario/requirement bullets
   - Use progressive disclosure: add follow-on retrieval only if gaps detected
   - If source docs are large, generate interim summary items instead of embedding raw text

5. **Generate checklist** - Create "Unit Tests for Requirements":
   - Create `FEATURE_DIR/checklists/` directory if it doesn't exist
   - Generate unique checklist filename:
     - Use short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`)
     - Format: `[domain].md`
     - If file exists, append to existing file
   - Number items sequentially starting from CHK001
   - Each `/speckit.checklist` run creates a NEW file (never overwrites existing checklists)

   **CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
   Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
   - **Completeness**: Are all necessary requirements present?
   - **Clarity**: Are requirements unambiguous and specific?
   - **Consistency**: Do requirements align with each other?
   - **Measurability**: Can requirements be objectively verified?
   - **Coverage**: Are all scenarios/edge cases addressed?

   **Category Structure** - Group items by requirement quality dimensions:
   - **Requirement Completeness** (Are all necessary requirements documented?)
   - **Requirement Clarity** (Are requirements specific and unambiguous?)
   - **Requirement Consistency** (Do requirements align without conflicts?)
   - **Acceptance Criteria Quality** (Are success criteria measurable?)
   - **Scenario Coverage** (Are all flows/cases addressed?)
   - **Edge Case Coverage** (Are boundary conditions defined?)
   - **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
   - **Dependencies & Assumptions** (Are they documented and validated?)
   - **Ambiguities & Conflicts** (What needs clarification?)

   **HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:

   ❌ **WRONG** (Testing implementation):
   - "Verify landing page displays 3 episode cards"
   - "Test hover states work on desktop"
   - "Confirm logo click navigates home"

   ✅ **CORRECT** (Testing requirements quality):
   - "Are the exact number and layout of featured episodes specified?" [Completeness]
   - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
   - "Are hover state requirements consistent across all interactive elements?" [Consistency]
   - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
   - "Is the fallback behavior specified when logo image fails to load?" [Edge Cases]
   - "Are loading states defined for asynchronous episode data?" [Completeness]
   - "Does the spec define visual hierarchy for competing UI elements?" [Clarity]

   **ITEM STRUCTURE**:
   Each item should follow this pattern:
   - Question format asking about requirement quality
   - Focus on what's WRITTEN (or not written) in the spec/plan
   - Include quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
   - Reference spec section `[Spec §X.Y]` when checking existing requirements
   - Use `[Gap]` marker when checking for missing requirements

   **EXAMPLES BY QUALITY DIMENSION**:

   Completeness:
   - "Are error handling requirements defined for all API failure modes? [Gap]"
   - "Are accessibility requirements specified for all interactive elements? [Completeness]"
   - "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"

   Clarity:
   - "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
   - "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
   - "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"

   Consistency:
   - "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
   - "Are card component requirements consistent between landing and detail pages? [Consistency]"

   Coverage:
   - "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
   - "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
   - "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"

   Measurability:
   - "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
   - "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"

   **Scenario Classification & Coverage** (Requirements Quality Focus):
   - Check if requirements exist for: Primary, Alternate, Exception/Error, Recovery, Non-Functional scenarios
   - For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
   - If scenario class missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
   - Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"

   **Traceability Requirements**:
   - MINIMUM: ≥80% of items MUST include at least one traceability reference
   - Each item should reference: spec section `[Spec §X.Y]`, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
   - If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"

   **Surface & Resolve Issues** (Requirements Quality Problems):
   Ask questions about the requirements themselves:
   - Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
   - Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
   - Assumptions: "Is the assumption of 'always available podcast API' validated? [Assumption]"
   - Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
   - Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"

   **Content Consolidation**:
   - Soft cap: If raw candidate items > 40, prioritize by risk/impact
   - Merge near-duplicates checking the same requirement aspect
   - If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"

   **🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:
   - ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
   - ❌ References to code execution, user actions, system behavior
   - ❌ "Displays correctly", "works properly", "functions as expected"
   - ❌ "Click", "navigate", "render", "load", "execute"
   - ❌ Test cases, test plans, QA procedures
   - ❌ Implementation details (frameworks, APIs, algorithms)

   **✅ REQUIRED PATTERNS** - These test requirements quality:
   - ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
   - ✅ "Is [vague term] quantified/clarified with specific criteria?"
   - ✅ "Are requirements consistent between [section A] and [section B]?"
   - ✅ "Can [requirement] be objectively measured/verified?"
   - ✅ "Are [edge cases/scenarios] addressed in requirements?"
   - ✅ "Does the spec define [missing aspect]?"

6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.

7. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
   - Focus areas selected
   - Depth level
   - Actor/timing
   - Any explicit user-specified must-have items incorporated

**Important**: Each `/speckit.checklist` command invocation creates a checklist file using short, descriptive names unless file already exists. This allows:

- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose
- Easy identification and navigation in the `checklists/` folder

To avoid clutter, use descriptive types and clean up obsolete checklists when done.

## Example Checklist Types & Sample Items

**UX Requirements Quality:** `ux.md`

Sample items (testing the requirements, NOT the implementation):

- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"

**API Requirements Quality:** `api.md`

Sample items:

- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is versioning strategy documented in requirements? [Gap]"

**Performance Requirements Quality:** `performance.md`

Sample items:

- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"

**Security Requirements Quality:** `security.md`

Sample items:

- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented and requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"

## Anti-Examples: What NOT To Do

**❌ WRONG - These test implementation, not requirements:**

```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```

**✅ CORRECT - These test requirements quality:**

```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Is the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
```

**Key Differences:**

- Wrong: Tests if the system works correctly
- Correct: Tests if the requirements are written correctly
- Wrong: Verification of behavior
- Correct: Validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"

181 .kilocode/workflows/speckit.clarify.md (new file)
@@ -0,0 +1,181 @@
|
|||||||
|
---
|
||||||
|
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
|
||||||
|
handoffs:
|
||||||
|
- label: Build Technical Plan
|
||||||
|
agent: speckit.plan
|
||||||
|
prompt: Create a plan for the spec. I am building with...
|
||||||
|
---
|
||||||
|
|
||||||
|
## User Input
|
||||||
|
|
||||||
|
```text
|
||||||
|
$ARGUMENTS
|
||||||
|
```
|
||||||
|
|
||||||
|
You **MUST** consider the user input before proceeding (if not empty).
|
||||||
|
|
||||||
|
## Outline
|
||||||
|
|
||||||
|
Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.
|
||||||
|
|
||||||
|
Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.
|
||||||
|
|
||||||
|
Execution steps:
|
||||||
|
|
||||||
|
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
|
||||||
|
- `FEATURE_DIR`
|
||||||
|
- `FEATURE_SPEC`
|
||||||
|
- (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
|
||||||
|
- If JSON parsing fails, abort and instruct user to re-run `/speckit.specify` or verify feature branch environment.
|
||||||
|
- For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
|
||||||
|
|
||||||
|
2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).
|
||||||
|
|
||||||
|
Functional Scope & Behavior:
|
||||||
|
- Core user goals & success criteria
|
||||||
|
- Explicit out-of-scope declarations
|
||||||
|
- User roles / personas differentiation
|
||||||
|
|
||||||
|
Domain & Data Model:
|
||||||
|
- Entities, attributes, relationships
|
||||||
|
- Identity & uniqueness rules
|
||||||
|
- Lifecycle/state transitions
|
||||||
|
- Data volume / scale assumptions
|
||||||
|
|
||||||
|
Interaction & UX Flow:
|
||||||
|
- Critical user journeys / sequences
|
||||||
|
- Error/empty/loading states
|
||||||
|
- Accessibility or localization notes
|
||||||
|
|
||||||
|
Non-Functional Quality Attributes:
|
||||||
|
- Performance (latency, throughput targets)
|
||||||
|
- Scalability (horizontal/vertical, limits)
|
||||||
|
- Reliability & availability (uptime, recovery expectations)
|
||||||
|
- Observability (logging, metrics, tracing signals)
|
||||||
|
- Security & privacy (authN/Z, data protection, threat assumptions)
|
||||||
|
- Compliance / regulatory constraints (if any)
|
||||||
|
|
||||||
|
Integration & External Dependencies:
|
||||||
|
- External services/APIs and failure modes
|
||||||
|
- Data import/export formats
|
||||||
|
- Protocol/versioning assumptions
|
||||||
|
|
||||||
|
Edge Cases & Failure Handling:
|
||||||
|
- Negative scenarios
|
||||||
|
- Rate limiting / throttling
|
||||||
|
- Conflict resolution (e.g., concurrent edits)
|
||||||
|
|
||||||
|
Constraints & Tradeoffs:
|
||||||
|
- Technical constraints (language, storage, hosting)
|
||||||
|
- Explicit tradeoffs or rejected alternatives
|
||||||
|
|
||||||
|
Terminology & Consistency:
|
||||||
|
- Canonical glossary terms
|
||||||
|
- Avoided synonyms / deprecated terms
|
||||||
|
|
||||||
|
Completion Signals:
|
||||||
|
- Acceptance criteria testability
|
||||||
|
- Measurable Definition of Done style indicators
|
||||||
|
|
||||||
|
Misc / Placeholders:
|
||||||
|
- TODO markers / unresolved decisions
|
||||||
|
- Ambiguous adjectives ("robust", "intuitive") lacking quantification
|
||||||
|
|
||||||
|
For each category with Partial or Missing status, add a candidate question opportunity unless:
|
||||||
|
- Clarification would not materially change implementation or validation strategy
|
||||||
|
- Information is better deferred to planning phase (note internally)
|
||||||
|
|
||||||
|
3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
|
||||||
|
    - Maximum of 5 total questions across the whole session.
|
||||||
|
- Each question must be answerable with EITHER:
|
||||||
|
- A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
|
||||||
|
- A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
|
||||||
|
- Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
|
||||||
|
- Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
|
||||||
|
- Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
|
||||||
|
- Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
|
||||||
|
- If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
|
||||||
|
|
||||||
|
4. Sequential questioning loop (interactive):
|
||||||
|
- Present EXACTLY ONE question at a time.
|
||||||
|
- For multiple‑choice questions:
|
||||||
|
- **Analyze all options** and determine the **most suitable option** based on:
|
||||||
|
- Best practices for the project type
|
||||||
|
- Common patterns in similar implementations
|
||||||
|
- Risk reduction (security, performance, maintainability)
|
||||||
|
- Alignment with any explicit project goals or constraints visible in the spec
|
||||||
|
- Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
|
||||||
|
- Format as: `**Recommended:** Option [X] - <reasoning>`
|
||||||
|
- Then render all options as a Markdown table:
|
||||||
|
|
||||||
|
| Option | Description |
|
||||||
|
|--------|-------------|
|
||||||
|
| A | <Option A description> |
|
||||||
|
| B | <Option B description> |
|
||||||
|
| C | <Option C description> (add D/E as needed up to 5) |
|
||||||
|
| Short | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |
|
||||||
|
|
||||||
|
- After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
|
||||||
|
- For short‑answer style (no meaningful discrete options):
|
||||||
|
- Provide your **suggested answer** based on best practices and context.
|
||||||
|
- Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
|
||||||
|
- Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
|
||||||
|
- After the user answers:
|
||||||
|
- If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
|
||||||
|
- Otherwise, validate the answer maps to one option or fits the <=5 word constraint.
|
||||||
|
- If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
|
||||||
|
- Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
|
||||||
|
- Stop asking further questions when:
|
||||||
|
- All critical ambiguities resolved early (remaining queued items become unnecessary), OR
|
||||||
|
- User signals completion ("done", "good", "no more"), OR
|
||||||
|
- You reach 5 asked questions.
|
||||||
|
- Never reveal future queued questions in advance.
|
||||||
|
- If no valid questions exist at start, immediately report no critical ambiguities.
|
||||||
|
|
||||||
|
5. Integration after EACH accepted answer (incremental update approach):
|
||||||
|
- Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
|
||||||
|
- For the first integrated answer in this session:
|
||||||
|
- Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
|
||||||
|
- Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
|
||||||
|
- Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
|
||||||
|
- Then immediately apply the clarification to the most appropriate section(s):
|
||||||
|
- Functional ambiguity → Update or add a bullet in Functional Requirements.
|
||||||
|
- User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
|
||||||
|
- Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
|
||||||
|
- Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
|
||||||
|
- Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
|
||||||
|
- Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
|
||||||
|
- If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
|
||||||
|
- Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
|
||||||
|
- Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
|
||||||
|
- Keep each inserted clarification minimal and testable (avoid narrative drift).
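
   As a minimal sketch of the bookkeeping for the first accepted answer (for brevity it appends at the end of the file rather than after the overview section, and `QUESTION`/`ANSWER` are placeholder variables):

   ```bash
   # Ensure the session scaffolding exists, then record the accepted answer.
   TODAY=$(date +%F)   # YYYY-MM-DD
   grep -q '^## Clarifications' "$FEATURE_SPEC" || printf '\n## Clarifications\n' >> "$FEATURE_SPEC"
   grep -q "^### Session $TODAY" "$FEATURE_SPEC" || printf '\n### Session %s\n' "$TODAY" >> "$FEATURE_SPEC"
   printf -- '- Q: %s → A: %s\n' "$QUESTION" "$ANSWER" >> "$FEATURE_SPEC"
   ```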
|
||||||
|
|
||||||
|
6. Validation (performed after EACH write plus final pass):
|
||||||
|
- Clarifications session contains exactly one bullet per accepted answer (no duplicates).
|
||||||
|
- Total asked (accepted) questions ≤ 5.
|
||||||
|
- Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
|
||||||
|
- No contradictory earlier statement remains (scan for now-invalid alternative choices removed).
|
||||||
|
- Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
|
||||||
|
- Terminology consistency: same canonical term used across all updated sections.
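
   A hedged spot-check for the bullet-count and duplicate rules above (assumes the `- Q: … → A: …` format from step 5):

   ```bash
   # Count accepted-answer bullets and flag a quota overrun or duplicates.
   COUNT=$(grep -c '^- Q: .* → A: ' "$FEATURE_SPEC" || true)
   [ "$COUNT" -le 5 ] || echo "WARN: $COUNT clarification bullets found (expected at most 5)" >&2
   DUPES=$(grep '^- Q: ' "$FEATURE_SPEC" | sort | uniq -d)
   [ -z "$DUPES" ] || { echo "WARN: duplicate clarification bullets:" >&2; echo "$DUPES" >&2; }
   ```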
|
||||||
|
|
||||||
|
7. Write the updated spec back to `FEATURE_SPEC`.
|
||||||
|
|
||||||
|
8. Report completion (after questioning loop ends or early termination):
|
||||||
|
- Number of questions asked & answered.
|
||||||
|
- Path to updated spec.
|
||||||
|
- Sections touched (list names).
|
||||||
|
- Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
|
||||||
|
- If any Outstanding or Deferred remain, recommend whether to proceed to `/speckit.plan` or run `/speckit.clarify` again later post-plan.
|
||||||
|
- Suggested next command.
|
||||||
|
|
||||||
|
Behavior rules:
|
||||||
|
|
||||||
|
- If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
|
||||||
|
- If spec file missing, instruct user to run `/speckit.specify` first (do not create a new spec here).
|
||||||
|
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
|
||||||
|
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
|
||||||
|
- Respect user early termination signals ("stop", "done", "proceed").
|
||||||
|
- If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
|
||||||
|
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.
|
||||||
|
|
||||||
|
Context for prioritization: $ARGUMENTS
|
||||||
82
.kilocode/workflows/speckit.constitution.md
Normal file
@@ -0,0 +1,82 @@
|
|||||||
|
---
|
||||||
|
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
|
||||||
|
handoffs:
|
||||||
|
- label: Build Specification
|
||||||
|
agent: speckit.specify
|
||||||
|
prompt: Implement the feature specification based on the updated constitution. I want to build...
|
||||||
|
---
|
||||||
|
|
||||||
|
## User Input
|
||||||
|
|
||||||
|
```text
|
||||||
|
$ARGUMENTS
|
||||||
|
```
|
||||||
|
|
||||||
|
You **MUST** consider the user input before proceeding (if not empty).
|
||||||
|
|
||||||
|
## Outline
|
||||||
|
|
||||||
|
You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
|
||||||
|
|
||||||
|
Follow this execution flow:
|
||||||
|
|
||||||
|
1. Load the existing constitution template at `.specify/memory/constitution.md`.
|
||||||
|
- Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
|
||||||
|
**IMPORTANT**: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.
|
||||||
|
|
||||||
|
2. Collect/derive values for placeholders:
|
||||||
|
- If user input (conversation) supplies a value, use it.
|
||||||
|
- Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
|
||||||
|
- For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown ask or mark TODO), `LAST_AMENDED_DATE` is today if changes are made, otherwise keep previous.
|
||||||
|
- `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
|
||||||
|
- MAJOR: Backward incompatible governance/principle removals or redefinitions.
|
||||||
|
- MINOR: New principle/section added or materially expanded guidance.
|
||||||
|
- PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
|
||||||
|
- If version bump type ambiguous, propose reasoning before finalizing.
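
   For the version arithmetic only, a toy helper (not part of the toolchain; the version string and bump type are whatever you determined above):

   ```bash
   # Bump MAJOR/MINOR/PATCH of a semantic version string.
   bump_version() {  # usage: bump_version 1.7.1 minor
     IFS=. read -r MA MI PA <<<"$1"
     case "$2" in
       major) echo "$((MA + 1)).0.0" ;;
       minor) echo "$MA.$((MI + 1)).0" ;;
       patch) echo "$MA.$MI.$((PA + 1))" ;;
     esac
   }
   bump_version 1.7.1 minor   # -> 1.8.0
   ```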
|
||||||
|
|
||||||
|
3. Draft the updated constitution content:
|
||||||
|
- Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).
|
||||||
|
    - Preserve heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
|
||||||
|
- Ensure each Principle section: succinct name line, paragraph (or bullet list) capturing non‑negotiable rules, explicit rationale if not obvious.
|
||||||
|
- Ensure Governance section lists amendment procedure, versioning policy, and compliance review expectations.
|
||||||
|
|
||||||
|
4. Consistency propagation checklist (convert prior checklist into active validations):
|
||||||
|
- Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
|
||||||
|
- Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
|
||||||
|
- Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
|
||||||
|
- Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
|
||||||
|
- Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.
|
||||||
|
|
||||||
|
5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
|
||||||
|
- Version change: old → new
|
||||||
|
- List of modified principles (old title → new title if renamed)
|
||||||
|
- Added sections
|
||||||
|
- Removed sections
|
||||||
|
- Templates requiring updates (✅ updated / ⚠ pending) with file paths
|
||||||
|
- Follow-up TODOs if any placeholders intentionally deferred.
|
||||||
|
|
||||||
|
6. Validation before final output:
|
||||||
|
- No remaining unexplained bracket tokens.
|
||||||
|
- Version line matches report.
|
||||||
|
- Dates ISO format YYYY-MM-DD.
|
||||||
|
- Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).
|
||||||
|
|
||||||
|
7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
|
||||||
|
|
||||||
|
8. Output a final summary to the user with:
|
||||||
|
- New version and bump rationale.
|
||||||
|
- Any files flagged for manual follow-up.
|
||||||
|
- Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).
|
||||||
|
|
||||||
|
Formatting & Style Requirements:
|
||||||
|
|
||||||
|
- Use Markdown headings exactly as in the template (do not demote/promote levels).
|
||||||
|
- Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard enforce with awkward breaks.
|
||||||
|
- Keep a single blank line between sections.
|
||||||
|
- Avoid trailing whitespace.
|
||||||
|
|
||||||
|
If the user supplies partial updates (e.g., only one principle revision), still perform validation and version decision steps.
|
||||||
|
|
||||||
|
If critical info missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include in the Sync Impact Report under deferred items.
|
||||||
|
|
||||||
|
Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
|
||||||
135
.kilocode/workflows/speckit.implement.md
Normal file
@@ -0,0 +1,135 @@
|
|||||||
|
---
|
||||||
|
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
|
||||||
|
---
|
||||||
|
|
||||||
|
## User Input
|
||||||
|
|
||||||
|
```text
|
||||||
|
$ARGUMENTS
|
||||||
|
```
|
||||||
|
|
||||||
|
You **MUST** consider the user input before proceeding (if not empty).
|
||||||
|
|
||||||
|
## Outline
|
||||||
|
|
||||||
|
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
|
||||||
|
|
||||||
|
2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
|
||||||
|
- Scan all checklist files in the checklists/ directory
|
||||||
|
- For each checklist, count:
|
||||||
|
- Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
|
||||||
|
- Completed items: Lines matching `- [X]` or `- [x]`
|
||||||
|
- Incomplete items: Lines matching `- [ ]`
|
||||||
|
- Create a status table:
|
||||||
|
|
||||||
|
```text
|
||||||
|
| Checklist | Total | Completed | Incomplete | Status |
|
||||||
|
|-----------|-------|-----------|------------|--------|
|
||||||
|
| ux.md | 12 | 12 | 0 | ✓ PASS |
|
||||||
|
| test.md | 8 | 5 | 3 | ✗ FAIL |
|
||||||
|
| security.md | 6 | 6 | 0 | ✓ PASS |
|
||||||
|
```
|
||||||
|
|
||||||
|
- Calculate overall status:
|
||||||
|
- **PASS**: All checklists have 0 incomplete items
|
||||||
|
- **FAIL**: One or more checklists have incomplete items
|
||||||
|
|
||||||
|
- **If any checklist is incomplete**:
|
||||||
|
- Display the table with incomplete item counts
|
||||||
|
- **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
|
||||||
|
- Wait for user response before continuing
|
||||||
|
- If user says "no" or "wait" or "stop", halt execution
|
||||||
|
- If user says "yes" or "proceed" or "continue", proceed to step 3
|
||||||
|
|
||||||
|
- **If all checklists are complete**:
|
||||||
|
- Display the table showing all checklists passed
|
||||||
|
- Automatically proceed to step 3
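
   A minimal counting sketch for this status table (assumes items use exactly the `- [ ]` / `- [x]` / `- [X]` forms listed above):

   ```bash
   # Count total/completed/incomplete items per checklist and derive PASS/FAIL.
   for f in "$FEATURE_DIR"/checklists/*.md; do
     total=$(grep -cE '^- \[( |x|X)\]' "$f" || true)
     completed=$(grep -cE '^- \[(x|X)\]' "$f" || true)
     incomplete=$((total - completed))
     status=$([ "$incomplete" -eq 0 ] && echo "✓ PASS" || echo "✗ FAIL")
     printf '| %s | %s | %s | %s | %s |\n' "$(basename "$f")" "$total" "$completed" "$incomplete" "$status"
   done
   ```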
|
||||||
|
|
||||||
|
3. Load and analyze the implementation context:
|
||||||
|
- **REQUIRED**: Read tasks.md for the complete task list and execution plan
|
||||||
|
- **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
|
||||||
|
- **IF EXISTS**: Read data-model.md for entities and relationships
|
||||||
|
- **IF EXISTS**: Read contracts/ for API specifications and test requirements
|
||||||
|
- **IF EXISTS**: Read research.md for technical decisions and constraints
|
||||||
|
- **IF EXISTS**: Read quickstart.md for integration scenarios
|
||||||
|
|
||||||
|
4. **Project Setup Verification**:
|
||||||
|
- **REQUIRED**: Create/verify ignore files based on actual project setup:
|
||||||
|
|
||||||
|
**Detection & Creation Logic**:
|
||||||
|
- Check if the following command succeeds to determine if the repository is a git repo (create/verify .gitignore if so):
|
||||||
|
|
||||||
|
```sh
|
||||||
|
git rev-parse --git-dir 2>/dev/null
|
||||||
|
```
|
||||||
|
|
||||||
|
- Check if Dockerfile* exists or Docker in plan.md → create/verify .dockerignore
|
||||||
|
- Check if .eslintrc* exists → create/verify .eslintignore
|
||||||
|
- Check if eslint.config.* exists → ensure the config's `ignores` entries cover required patterns
|
||||||
|
- Check if .prettierrc* exists → create/verify .prettierignore
|
||||||
|
- Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
|
||||||
|
- Check if terraform files (*.tf) exist → create/verify .terraformignore
|
||||||
|
- Check if .helmignore needed (helm charts present) → create/verify .helmignore
|
||||||
|
|
||||||
|
**If ignore file already exists**: Verify it contains essential patterns, append missing critical patterns only
|
||||||
|
**If ignore file missing**: Create with full pattern set for detected technology
|
||||||
|
|
||||||
|
**Common Patterns by Technology** (from plan.md tech stack):
|
||||||
|
- **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
|
||||||
|
- **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
|
||||||
|
- **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
|
||||||
|
- **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
|
||||||
|
- **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
|
||||||
|
- **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
|
||||||
|
- **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
|
||||||
|
- **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
|
||||||
|
- **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
|
||||||
|
- **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
|
||||||
|
- **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `Makefile`, `config.log`, `.idea/`, `*.log`, `.env*`
|
||||||
|
- **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
|
||||||
|
- **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
|
||||||
|
- **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`
|
||||||
|
|
||||||
|
**Tool-Specific Patterns**:
|
||||||
|
- **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
|
||||||
|
- **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
|
||||||
|
- **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
|
||||||
|
- **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
|
||||||
|
- **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`
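
   One hedged example of the detection-and-creation logic, for the git + Node.js case only (the real pattern set should come from the plan.md tech stack as described above):

   ```bash
   # Create a .gitignore only when inside a git repo and none exists yet.
   if git rev-parse --git-dir >/dev/null 2>&1 && [ ! -f .gitignore ]; then
     cat > .gitignore <<'EOF'
   node_modules/
   dist/
   build/
   *.log
   .env*
   EOF
     echo "Created .gitignore with baseline Node.js patterns"
   fi
   ```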
|
||||||
|
|
||||||
|
5. Parse tasks.md structure and extract:
|
||||||
|
- **Task phases**: Setup, Tests, Core, Integration, Polish
|
||||||
|
- **Task dependencies**: Sequential vs parallel execution rules
|
||||||
|
- **Task details**: ID, description, file paths, parallel markers [P]
|
||||||
|
- **Execution flow**: Order and dependency requirements
|
||||||
|
|
||||||
|
6. Execute implementation following the task plan:
|
||||||
|
- **Phase-by-phase execution**: Complete each phase before moving to the next
|
||||||
|
- **Respect dependencies**: Run sequential tasks in order, parallel tasks [P] can run together
|
||||||
|
- **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
|
||||||
|
- **File-based coordination**: Tasks affecting the same files must run sequentially
|
||||||
|
- **Validation checkpoints**: Verify each phase completion before proceeding
|
||||||
|
|
||||||
|
7. Implementation execution rules:
|
||||||
|
- **Setup first**: Initialize project structure, dependencies, configuration
|
||||||
|
   - **Tests before code**: If tests are requested, write tests for contracts, entities, and integration scenarios before the corresponding implementation
|
||||||
|
- **Core development**: Implement models, services, CLI commands, endpoints
|
||||||
|
- **Integration work**: Database connections, middleware, logging, external services
|
||||||
|
- **Polish and validation**: Unit tests, performance optimization, documentation
|
||||||
|
|
||||||
|
8. Progress tracking and error handling:
|
||||||
|
- Report progress after each completed task
|
||||||
|
- Halt execution if any non-parallel task fails
|
||||||
|
- For parallel tasks [P], continue with successful tasks, report failed ones
|
||||||
|
- Provide clear error messages with context for debugging
|
||||||
|
- Suggest next steps if implementation cannot proceed
|
||||||
|
- **IMPORTANT** For completed tasks, make sure to mark the task off as [X] in the tasks file.
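
   A small illustration of the check-off (assumes GNU `sed -i` and the checklist format from `/speckit.tasks`; `T012` is a placeholder ID):

   ```bash
   # Flip a completed task's checkbox in tasks.md.
   sed -i 's/^- \[ \] T012 /- [X] T012 /' "$FEATURE_DIR/tasks.md"
   ```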
|
||||||
|
|
||||||
|
9. Completion validation:
|
||||||
|
- Verify all required tasks are completed
|
||||||
|
- Check that implemented features match the original specification
|
||||||
|
- Validate that tests pass and coverage meets requirements
|
||||||
|
- Confirm the implementation follows the technical plan
|
||||||
|
- Report final status with summary of completed work
|
||||||
|
|
||||||
|
Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.
|
||||||
89
.kilocode/workflows/speckit.plan.md
Normal file
@@ -0,0 +1,89 @@
|
|||||||
|
---
|
||||||
|
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
|
||||||
|
handoffs:
|
||||||
|
- label: Create Tasks
|
||||||
|
agent: speckit.tasks
|
||||||
|
prompt: Break the plan into tasks
|
||||||
|
send: true
|
||||||
|
- label: Create Checklist
|
||||||
|
agent: speckit.checklist
|
||||||
|
prompt: Create a checklist for the following domain...
|
||||||
|
---
|
||||||
|
|
||||||
|
## User Input
|
||||||
|
|
||||||
|
```text
|
||||||
|
$ARGUMENTS
|
||||||
|
```
|
||||||
|
|
||||||
|
You **MUST** consider the user input before proceeding (if not empty).
|
||||||
|
|
||||||
|
## Outline
|
||||||
|
|
||||||
|
1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
|
||||||
|
|
||||||
|
2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
|
||||||
|
|
||||||
|
3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
|
||||||
|
- Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
|
||||||
|
- Fill Constitution Check section from constitution
|
||||||
|
- Evaluate gates (ERROR if violations unjustified)
|
||||||
|
- Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
|
||||||
|
- Phase 1: Generate data-model.md, contracts/, quickstart.md
|
||||||
|
- Phase 1: Update agent context by running the agent script
|
||||||
|
- Re-evaluate Constitution Check post-design
|
||||||
|
|
||||||
|
4. **Stop and report**: Command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.
|
||||||
|
|
||||||
|
## Phases
|
||||||
|
|
||||||
|
### Phase 0: Outline & Research
|
||||||
|
|
||||||
|
1. **Extract unknowns from Technical Context** above:
|
||||||
|
- For each NEEDS CLARIFICATION → research task
|
||||||
|
- For each dependency → best practices task
|
||||||
|
- For each integration → patterns task
|
||||||
|
|
||||||
|
2. **Generate and dispatch research agents**:
|
||||||
|
|
||||||
|
```text
|
||||||
|
For each unknown in Technical Context:
|
||||||
|
Task: "Research {unknown} for {feature context}"
|
||||||
|
For each technology choice:
|
||||||
|
Task: "Find best practices for {tech} in {domain}"
|
||||||
|
```
|
||||||
|
|
||||||
|
3. **Consolidate findings** in `research.md` using format:
|
||||||
|
- Decision: [what was chosen]
|
||||||
|
- Rationale: [why chosen]
|
||||||
|
- Alternatives considered: [what else evaluated]
|
||||||
|
|
||||||
|
**Output**: research.md with all NEEDS CLARIFICATION resolved
|
||||||
|
|
||||||
|
### Phase 1: Design & Contracts
|
||||||
|
|
||||||
|
**Prerequisites:** `research.md` complete
|
||||||
|
|
||||||
|
1. **Extract entities from feature spec** → `data-model.md`:
|
||||||
|
- Entity name, fields, relationships
|
||||||
|
- Validation rules from requirements
|
||||||
|
- State transitions if applicable
|
||||||
|
|
||||||
|
2. **Generate API contracts** from functional requirements:
|
||||||
|
- For each user action → endpoint
|
||||||
|
- Use standard REST/GraphQL patterns
|
||||||
|
- Output OpenAPI/GraphQL schema to `/contracts/`
|
||||||
|
|
||||||
|
3. **Agent context update**:
|
||||||
|
- Run `.specify/scripts/bash/update-agent-context.sh kilocode`
|
||||||
|
- These scripts detect which AI agent is in use
|
||||||
|
- Update the appropriate agent-specific context file
|
||||||
|
- Add only new technology from current plan
|
||||||
|
- Preserve manual additions between markers
|
||||||
|
|
||||||
|
**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
|
||||||
|
|
||||||
|
## Key rules
|
||||||
|
|
||||||
|
- Use absolute paths
|
||||||
|
- ERROR on gate failures or unresolved clarifications
|
||||||
258
.kilocode/workflows/speckit.specify.md
Normal file
@@ -0,0 +1,258 @@
|
|||||||
|
---
|
||||||
|
description: Create or update the feature specification from a natural language feature description.
|
||||||
|
handoffs:
|
||||||
|
- label: Build Technical Plan
|
||||||
|
agent: speckit.plan
|
||||||
|
prompt: Create a plan for the spec. I am building with...
|
||||||
|
- label: Clarify Spec Requirements
|
||||||
|
agent: speckit.clarify
|
||||||
|
prompt: Clarify specification requirements
|
||||||
|
send: true
|
||||||
|
---
|
||||||
|
|
||||||
|
## User Input
|
||||||
|
|
||||||
|
```text
|
||||||
|
$ARGUMENTS
|
||||||
|
```
|
||||||
|
|
||||||
|
You **MUST** consider the user input before proceeding (if not empty).
|
||||||
|
|
||||||
|
## Outline
|
||||||
|
|
||||||
|
The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `$ARGUMENTS` appears literally below. Do not ask the user to repeat it unless they provided an empty command.
|
||||||
|
|
||||||
|
Given that feature description, do this:
|
||||||
|
|
||||||
|
1. **Generate a concise short name** (2-4 words) for the branch:
|
||||||
|
- Analyze the feature description and extract the most meaningful keywords
|
||||||
|
- Create a 2-4 word short name that captures the essence of the feature
|
||||||
|
- Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
|
||||||
|
- Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
|
||||||
|
- Keep it concise but descriptive enough to understand the feature at a glance
|
||||||
|
- Examples:
|
||||||
|
- "I want to add user authentication" → "user-auth"
|
||||||
|
- "Implement OAuth2 integration for the API" → "oauth2-api-integration"
|
||||||
|
- "Create a dashboard for analytics" → "analytics-dashboard"
|
||||||
|
- "Fix payment processing timeout bug" → "fix-payment-timeout"
|
||||||
|
|
||||||
|
2. **Check for existing branches before creating new one**:
|
||||||
|
|
||||||
|
a. First, fetch all remote branches to ensure we have the latest information:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
git fetch --all --prune
|
||||||
|
```
|
||||||
|
|
||||||
|
b. Find the highest feature number across all sources for the short-name:
|
||||||
|
- Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
|
||||||
|
- Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
|
||||||
|
- Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`
|
||||||
|
|
||||||
|
c. Determine the next available number:
|
||||||
|
- Extract all numbers from all three sources
|
||||||
|
- Find the highest number N
|
||||||
|
- Use N+1 for the new branch number
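
   A sketch of the scan across the three sources (assumes the short name is already in `$SHORT` and contains no leading digit-dash prefix of its own):

   ```bash
   SHORT="user-auth"   # placeholder short name
   N=$( { git ls-remote --heads origin | grep -oE "refs/heads/[0-9]+-$SHORT\$" ;
          git branch --format='%(refname:short)' | grep -E "^[0-9]+-$SHORT\$" ;
          ls -d specs/[0-9]*-"$SHORT" 2>/dev/null ; } \
        | grep -oE '(^|/)[0-9]+-' | grep -oE '[0-9]+' | sort -n | tail -1 )
   NEXT=$(( ${N:-0} + 1 ))
   echo "Next branch: $NEXT-$SHORT"
   ```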
|
||||||
|
|
||||||
|
d. Run the script `.specify/scripts/bash/create-new-feature.sh --json "$ARGUMENTS"` with the calculated number and short-name:
|
||||||
|
- Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
|
||||||
|
   - Bash example: `.specify/scripts/bash/create-new-feature.sh --json --number 5 --short-name "user-auth" "Add user authentication"`
|
||||||
|
   - PowerShell example: `.specify/scripts/bash/create-new-feature.sh -Json -Number 5 -ShortName "user-auth" "Add user authentication"`
|
||||||
|
|
||||||
|
**IMPORTANT**:
|
||||||
|
- Check all three sources (remote branches, local branches, specs directories) to find the highest number
|
||||||
|
- Only match branches/directories with the exact short-name pattern
|
||||||
|
- If no existing branches/directories found with this short-name, start with number 1
|
||||||
|
- You must only ever run this script once per feature
|
||||||
|
- The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
|
||||||
|
- The JSON output will contain BRANCH_NAME and SPEC_FILE paths
|
||||||
|
- For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot")
|
||||||
|
|
||||||
|
3. Load `.specify/templates/spec-template.md` to understand required sections.
|
||||||
|
|
||||||
|
4. Follow this execution flow:
|
||||||
|
|
||||||
|
1. Parse user description from Input
|
||||||
|
If empty: ERROR "No feature description provided"
|
||||||
|
2. Extract key concepts from description
|
||||||
|
Identify: actors, actions, data, constraints
|
||||||
|
3. For unclear aspects:
|
||||||
|
- Make informed guesses based on context and industry standards
|
||||||
|
- Only mark with [NEEDS CLARIFICATION: specific question] if:
|
||||||
|
- The choice significantly impacts feature scope or user experience
|
||||||
|
- Multiple reasonable interpretations exist with different implications
|
||||||
|
- No reasonable default exists
|
||||||
|
- **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
|
||||||
|
- Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
|
||||||
|
4. Fill User Scenarios & Testing section
|
||||||
|
If no clear user flow: ERROR "Cannot determine user scenarios"
|
||||||
|
5. Generate Functional Requirements
|
||||||
|
Each requirement must be testable
|
||||||
|
Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
|
||||||
|
6. Define Success Criteria
|
||||||
|
Create measurable, technology-agnostic outcomes
|
||||||
|
Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
|
||||||
|
Each criterion must be verifiable without implementation details
|
||||||
|
7. Identify Key Entities (if data involved)
|
||||||
|
8. Return: SUCCESS (spec ready for planning)
|
||||||
|
|
||||||
|
5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
|
||||||
|
|
||||||
|
6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:
|
||||||
|
|
||||||
|
a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
# Specification Quality Checklist: [FEATURE NAME]
|
||||||
|
|
||||||
|
**Purpose**: Validate specification completeness and quality before proceeding to planning
|
||||||
|
**Created**: [DATE]
|
||||||
|
**Feature**: [Link to spec.md]
|
||||||
|
|
||||||
|
## Content Quality
|
||||||
|
|
||||||
|
- [ ] No implementation details (languages, frameworks, APIs)
|
||||||
|
- [ ] Focused on user value and business needs
|
||||||
|
- [ ] Written for non-technical stakeholders
|
||||||
|
- [ ] All mandatory sections completed
|
||||||
|
|
||||||
|
## Requirement Completeness
|
||||||
|
|
||||||
|
- [ ] No [NEEDS CLARIFICATION] markers remain
|
||||||
|
- [ ] Requirements are testable and unambiguous
|
||||||
|
- [ ] Success criteria are measurable
|
||||||
|
- [ ] Success criteria are technology-agnostic (no implementation details)
|
||||||
|
- [ ] All acceptance scenarios are defined
|
||||||
|
- [ ] Edge cases are identified
|
||||||
|
- [ ] Scope is clearly bounded
|
||||||
|
- [ ] Dependencies and assumptions identified
|
||||||
|
|
||||||
|
## Feature Readiness
|
||||||
|
|
||||||
|
- [ ] All functional requirements have clear acceptance criteria
|
||||||
|
- [ ] User scenarios cover primary flows
|
||||||
|
- [ ] Feature meets measurable outcomes defined in Success Criteria
|
||||||
|
- [ ] No implementation details leak into specification
|
||||||
|
|
||||||
|
## Notes
|
||||||
|
|
||||||
|
- Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
|
||||||
|
```
|
||||||
|
|
||||||
|
b. **Run Validation Check**: Review the spec against each checklist item:
|
||||||
|
- For each item, determine if it passes or fails
|
||||||
|
- Document specific issues found (quote relevant spec sections)
|
||||||
|
|
||||||
|
c. **Handle Validation Results**:
|
||||||
|
|
||||||
|
   - **If all items pass**: Mark checklist complete and proceed to step 7
|
||||||
|
|
||||||
|
- **If items fail (excluding [NEEDS CLARIFICATION])**:
|
||||||
|
1. List the failing items and specific issues
|
||||||
|
2. Update the spec to address each issue
|
||||||
|
3. Re-run validation until all items pass (max 3 iterations)
|
||||||
|
4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user
|
||||||
|
|
||||||
|
- **If [NEEDS CLARIFICATION] markers remain**:
|
||||||
|
1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
|
||||||
|
2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
|
||||||
|
3. For each clarification needed (max 3), present options to user in this format:
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
## Question [N]: [Topic]
|
||||||
|
|
||||||
|
**Context**: [Quote relevant spec section]
|
||||||
|
|
||||||
|
**What we need to know**: [Specific question from NEEDS CLARIFICATION marker]
|
||||||
|
|
||||||
|
**Suggested Answers**:
|
||||||
|
|
||||||
|
| Option | Answer | Implications |
|
||||||
|
|--------|--------|--------------|
|
||||||
|
| A | [First suggested answer] | [What this means for the feature] |
|
||||||
|
| B | [Second suggested answer] | [What this means for the feature] |
|
||||||
|
| C | [Third suggested answer] | [What this means for the feature] |
|
||||||
|
| Custom | Provide your own answer | [Explain how to provide custom input] |
|
||||||
|
|
||||||
|
**Your choice**: _[Wait for user response]_
|
||||||
|
```
|
||||||
|
|
||||||
|
4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
|
||||||
|
- Use consistent spacing with pipes aligned
|
||||||
|
- Each cell should have spaces around content: `| Content |` not `|Content|`
|
||||||
|
- Header separator must have at least 3 dashes: `|--------|`
|
||||||
|
- Test that the table renders correctly in markdown preview
|
||||||
|
5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
|
||||||
|
6. Present all questions together before waiting for responses
|
||||||
|
7. Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
|
||||||
|
8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
|
||||||
|
9. Re-run validation after all clarifications are resolved
|
||||||
|
|
||||||
|
d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status
|
||||||
|
|
||||||
|
7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).
|
||||||
|
|
||||||
|
**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.
|
||||||
|
|
||||||
|
## Quick Guidelines
|
||||||
|
|
||||||
|
- Focus on **WHAT** users need and **WHY**.
|
||||||
|
- Avoid HOW to implement (no tech stack, APIs, code structure).
|
||||||
|
- Written for business stakeholders, not developers.
|
||||||
|
- DO NOT create any checklists that are embedded in the spec. That will be a separate command.
|
||||||
|
|
||||||
|
### Section Requirements
|
||||||
|
|
||||||
|
- **Mandatory sections**: Must be completed for every feature
|
||||||
|
- **Optional sections**: Include only when relevant to the feature
|
||||||
|
- When a section doesn't apply, remove it entirely (don't leave as "N/A")
|
||||||
|
|
||||||
|
### For AI Generation
|
||||||
|
|
||||||
|
When creating this spec from a user prompt:
|
||||||
|
|
||||||
|
1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
|
||||||
|
2. **Document assumptions**: Record reasonable defaults in the Assumptions section
|
||||||
|
3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
|
||||||
|
- Significantly impact feature scope or user experience
|
||||||
|
- Have multiple reasonable interpretations with different implications
|
||||||
|
- Lack any reasonable default
|
||||||
|
4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
|
||||||
|
5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
|
||||||
|
6. **Common areas needing clarification** (only if no reasonable default exists):
|
||||||
|
- Feature scope and boundaries (include/exclude specific use cases)
|
||||||
|
- User types and permissions (if multiple conflicting interpretations possible)
|
||||||
|
- Security/compliance requirements (when legally/financially significant)
|
||||||
|
|
||||||
|
**Examples of reasonable defaults** (don't ask about these):
|
||||||
|
|
||||||
|
- Data retention: Industry-standard practices for the domain
|
||||||
|
- Performance targets: Standard web/mobile app expectations unless specified
|
||||||
|
- Error handling: User-friendly messages with appropriate fallbacks
|
||||||
|
- Authentication method: Standard session-based or OAuth2 for web apps
|
||||||
|
- Integration patterns: RESTful APIs unless specified otherwise
|
||||||
|
|
||||||
|
### Success Criteria Guidelines
|
||||||
|
|
||||||
|
Success criteria must be:
|
||||||
|
|
||||||
|
1. **Measurable**: Include specific metrics (time, percentage, count, rate)
|
||||||
|
2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
|
||||||
|
3. **User-focused**: Describe outcomes from user/business perspective, not system internals
|
||||||
|
4. **Verifiable**: Can be tested/validated without knowing implementation details
|
||||||
|
|
||||||
|
**Good examples**:
|
||||||
|
|
||||||
|
- "Users can complete checkout in under 3 minutes"
|
||||||
|
- "System supports 10,000 concurrent users"
|
||||||
|
- "95% of searches return results in under 1 second"
|
||||||
|
- "Task completion rate improves by 40%"
|
||||||
|
|
||||||
|
**Bad examples** (implementation-focused):
|
||||||
|
|
||||||
|
- "API response time is under 200ms" (too technical, use "Users see results instantly")
|
||||||
|
- "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
|
||||||
|
- "React components render efficiently" (framework-specific)
|
||||||
|
- "Redis cache hit rate above 80%" (technology-specific)
|
||||||
137
.kilocode/workflows/speckit.tasks.md
Normal file
@@ -0,0 +1,137 @@
|
|||||||
|
---
|
||||||
|
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
|
||||||
|
handoffs:
|
||||||
|
- label: Analyze For Consistency
|
||||||
|
agent: speckit.analyze
|
||||||
|
prompt: Run a project analysis for consistency
|
||||||
|
send: true
|
||||||
|
- label: Implement Project
|
||||||
|
agent: speckit.implement
|
||||||
|
prompt: Start the implementation in phases
|
||||||
|
send: true
|
||||||
|
---
|
||||||
|
|
||||||
|
## User Input
|
||||||
|
|
||||||
|
```text
|
||||||
|
$ARGUMENTS
|
||||||
|
```
|
||||||
|
|
||||||
|
You **MUST** consider the user input before proceeding (if not empty).
|
||||||
|
|
||||||
|
## Outline
|
||||||
|
|
||||||
|
1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
|
||||||
|
|
||||||
|
2. **Load design documents**: Read from FEATURE_DIR:
|
||||||
|
- **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
|
||||||
|
- **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
|
||||||
|
- Note: Not all projects have all documents. Generate tasks based on what's available.
|
||||||
|
|
||||||
|
3. **Execute task generation workflow**:
|
||||||
|
- Load plan.md and extract tech stack, libraries, project structure
|
||||||
|
- Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
|
||||||
|
- If data-model.md exists: Extract entities and map to user stories
|
||||||
|
- If contracts/ exists: Map endpoints to user stories
|
||||||
|
- If research.md exists: Extract decisions for setup tasks
|
||||||
|
- Generate tasks organized by user story (see Task Generation Rules below)
|
||||||
|
- Generate dependency graph showing user story completion order
|
||||||
|
- Create parallel execution examples per user story
|
||||||
|
- Validate task completeness (each user story has all needed tasks, independently testable)
|
||||||
|
|
||||||
|
4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure, fill with:
|
||||||
|
- Correct feature name from plan.md
|
||||||
|
- Phase 1: Setup tasks (project initialization)
|
||||||
|
- Phase 2: Foundational tasks (blocking prerequisites for all user stories)
|
||||||
|
- Phase 3+: One phase per user story (in priority order from spec.md)
|
||||||
|
- Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
|
||||||
|
- Final Phase: Polish & cross-cutting concerns
|
||||||
|
- All tasks must follow the strict checklist format (see Task Generation Rules below)
|
||||||
|
- Clear file paths for each task
|
||||||
|
- Dependencies section showing story completion order
|
||||||
|
- Parallel execution examples per story
|
||||||
|
- Implementation strategy section (MVP first, incremental delivery)
|
||||||
|
|
||||||
|
5. **Report**: Output path to generated tasks.md and summary:
|
||||||
|
- Total task count
|
||||||
|
- Task count per user story
|
||||||
|
- Parallel opportunities identified
|
||||||
|
- Independent test criteria for each story
|
||||||
|
- Suggested MVP scope (typically just User Story 1)
|
||||||
|
- Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)
|
||||||
|
|
||||||
|
Context for task generation: $ARGUMENTS
|
||||||
|
|
||||||
|
The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
|
||||||
|
|
||||||
|
## Task Generation Rules
|
||||||
|
|
||||||
|
**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.
|
||||||
|
|
||||||
|
**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.
|
||||||
|
|
||||||
|
### Checklist Format (REQUIRED)
|
||||||
|
|
||||||
|
Every task MUST strictly follow this format:
|
||||||
|
|
||||||
|
```text
|
||||||
|
- [ ] [TaskID] [P?] [Story?] Description with file path
|
||||||
|
```
|
||||||
|
|
||||||
|
**Format Components**:
|
||||||
|
|
||||||
|
1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
|
||||||
|
2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
|
||||||
|
3. **[P] marker**: Include ONLY if task is parallelizable (different files, no dependencies on incomplete tasks)
|
||||||
|
4. **[Story] label**: REQUIRED for user story phase tasks only
|
||||||
|
- Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
|
||||||
|
- Setup phase: NO story label
|
||||||
|
- Foundational phase: NO story label
|
||||||
|
- User Story phases: MUST have story label
|
||||||
|
- Polish phase: NO story label
|
||||||
|
5. **Description**: Clear action with exact file path
|
||||||
|
|
||||||
|
**Examples**:
|
||||||
|
|
||||||
|
- ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
|
||||||
|
- ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
|
||||||
|
- ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
|
||||||
|
- ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
|
||||||
|
- ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
|
||||||
|
- ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
|
||||||
|
- ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
|
||||||
|
- ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
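
A rough format linter for these rules (a sketch only; assumes three-digit task IDs and the labels shown above):

```bash
# Print any checklist line that does not match "- [ ] T### [P]? [US#]? description".
grep -E '^- \[[ xX]\] ' tasks.md \
  | grep -vE '^- \[[ xX]\] T[0-9]{3}( \[P\])?( \[US[0-9]+\])? .+' \
  || echo "All task lines match the required checklist format"
```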
|
||||||
|
|
||||||
|
### Task Organization
|
||||||
|
|
||||||
|
1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
|
||||||
|
- Each user story (P1, P2, P3...) gets its own phase
|
||||||
|
- Map all related components to their story:
|
||||||
|
- Models needed for that story
|
||||||
|
- Services needed for that story
|
||||||
|
- Endpoints/UI needed for that story
|
||||||
|
- If tests requested: Tests specific to that story
|
||||||
|
- Mark story dependencies (most stories should be independent)
|
||||||
|
|
||||||
|
2. **From Contracts**:
|
||||||
|
- Map each contract/endpoint → to the user story it serves
|
||||||
|
- If tests requested: Each contract → contract test task [P] before implementation in that story's phase
|
||||||
|
|
||||||
|
3. **From Data Model**:
|
||||||
|
- Map each entity to the user story(ies) that need it
|
||||||
|
- If entity serves multiple stories: Put in earliest story or Setup phase
|
||||||
|
- Relationships → service layer tasks in appropriate story phase
|
||||||
|
|
||||||
|
4. **From Setup/Infrastructure**:
|
||||||
|
- Shared infrastructure → Setup phase (Phase 1)
|
||||||
|
- Foundational/blocking tasks → Foundational phase (Phase 2)
|
||||||
|
- Story-specific setup → within that story's phase
|
||||||
|
|
||||||
|
### Phase Structure
|
||||||
|
|
||||||
|
- **Phase 1**: Setup (project initialization)
|
||||||
|
- **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
|
||||||
|
- **Phase 3+**: User Stories in priority order (P1, P2, P3...)
|
||||||
|
- Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
|
||||||
|
- Each phase should be a complete, independently testable increment
|
||||||
|
- **Final Phase**: Polish & Cross-Cutting Concerns
|
||||||
30
.kilocode/workflows/speckit.taskstoissues.md
Normal file
@@ -0,0 +1,30 @@
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
tools: ['github/github-mcp-server/issue_write']
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
1. From the executed script, extract the path to **tasks**.
1. Get the Git remote by running:

```bash
git config --get remote.origin.url
```

> [!CAUTION]
> ONLY PROCEED TO NEXT STEPS IF THE REMOTE IS A GITHUB URL

1. For each task in the list, use the GitHub MCP server to create a new issue in the repository that is representative of the Git remote.

> [!CAUTION]
> UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL
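
A hedged guard for the GitHub-only rule above (a plain string match on the remote URL; adjust for SSH remotes as needed):

```bash
REMOTE=$(git config --get remote.origin.url)
case "$REMOTE" in
  *github.com*) echo "GitHub remote detected: $REMOTE" ;;
  *) echo "Remote is not on github.com; do not create issues." >&2; exit 1 ;;
esac
```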
45
.kilocodemodes
Normal file
@@ -0,0 +1,45 @@
|
|||||||
|
customModes:
  - slug: tester
    name: Tester
    description: QA and Plan Verification Specialist
    roleDefinition: |-
      You are Kilo Code, acting as a QA and Verification Specialist. Your primary goal is to validate that the project implementation aligns strictly with the defined specifications and task plans.
      Your responsibilities include: - Reading and analyzing task plans and specifications (typically in the `specs/` directory). - Verifying that implemented code matches the requirements. - Executing tests and validating system behavior via CLI or Browser. - Updating the status of tasks in the plan files (e.g., marking checkboxes [x]) as they are verified. - Identifying and reporting missing features or bugs.
    whenToUse: Use this mode when you need to audit the progress of a project, verify completed tasks against the plan, run quality assurance checks, or update the status of task lists in specification documents.
    groups:
      - read
      - edit
      - command
      - browser
      - mcp
    customInstructions: 1. Always begin by loading the relevant plan or task list from the `specs/` directory. 2. Do not assume a task is done just because it is checked; verify the code or functionality first if asked to audit. 3. When updating task lists, ensure you only mark items as complete if you have verified them.
  - slug: product-manager
    name: Product Manager
    description: Executes SpecKit workflows for feature management
    roleDefinition: |-
      You are Kilo Code, acting as a Product Manager. Your purpose is to rigorously execute the workflows defined in `.kilocode/workflows/`.
      You act as the orchestrator for: - Specification (`speckit.specify`, `speckit.clarify`) - Planning (`speckit.plan`) - Task Management (`speckit.tasks`, `speckit.taskstoissues`) - Quality Assurance (`speckit.analyze`, `speckit.checklist`) - Governance (`speckit.constitution`) - Implementation Oversight (`speckit.implement`)
      For each task, you must read the relevant workflow file from `.kilocode/workflows/` and follow its Execution Steps precisely.
    whenToUse: Use this mode when you need to run any /speckit.* command or when dealing with high-level feature planning, specification writing, or project management tasks.
    groups:
      - read
      - edit
      - command
      - mcp
    customInstructions: 1. Always read the specific workflow file in `.kilocode/workflows/` before executing a command. 2. Adhere strictly to the "Operating Constraints" and "Execution Steps" in the workflow files.
  - slug: semantic
    name: Semantic Agent
    roleDefinition: |-
      You are Kilo Code, a Semantic Agent responsible for maintaining the semantic integrity of the codebase. Your primary goal is to ensure that all code entities (Modules, Classes, Functions, Components) are properly annotated with semantic anchors and tags as defined in `semantic_protocol.md`.
      Your core responsibilities are: 1. **Semantic Mapping**: You run and maintain the `generate_semantic_map.py` script to generate up-to-date semantic maps (`semantics/semantic_map.json`, `specs/project_map.md`) and compliance reports (`semantics/reports/*.md`). 2. **Compliance Auditing**: You analyze the generated compliance reports to identify files with low semantic coverage or parsing errors. 3. **Semantic Enrichment**: You actively edit code files to add missing semantic anchors (`[DEF:...]`, `[/DEF:...]`) and mandatory tags (`@PURPOSE`, `@LAYER`, etc.) to improve the global compliance score. 4. **Protocol Enforcement**: You strictly adhere to the syntax and rules defined in `semantic_protocol.md` when modifying code.
      You have access to the full codebase and tools to read, write, and execute scripts. You should prioritize fixing "Critical Parsing Errors" (unclosed anchors) before addressing missing metadata.
    whenToUse: Use this mode when you need to update the project's semantic map, fix semantic compliance issues (missing anchors/tags/DbC), or analyze the codebase structure. This mode is specialized for maintaining the `semantic_protocol.md` standards.
    description: Codebase semantic mapping and compliance expert
    customInstructions: Always check `semantics/reports/` for the latest compliance status before starting work. When fixing a file, try to fix all semantic issues in that file at once. After making a batch of fixes, run `python3 generate_semantic_map.py` to verify improvements.
    groups:
      - read
      - edit
      - command
      - browser
      - mcp
    source: project
.specify/memory/constitution.md (Normal file, 67 lines added)
@@ -0,0 +1,67 @@
<!--
SYNC IMPACT REPORT
Version: 1.7.1 (Simplified Workflow)
Changes:
- Simplified Generation Workflow to a single phase: Code Generation from `tasks.md`.
- Removed multi-phase Architecture/Implementation split to streamline development.
Templates Status:
- .specify/templates/plan-template.md: ✅ Aligned (Dynamic check).
- .specify/templates/spec-template.md: ✅ Aligned.
- .specify/templates/tasks-template.md: ✅ Aligned.
-->
# Semantic Code Generation Constitution

## Core Principles

### I. Semantic Protocol Compliance
The file `semantic_protocol.md` is the **authoritative technical standard** for this project. All code generation, refactoring, and architecture must strictly adhere to the standards, syntax, and workflows defined therein.
- **Syntax**: `[DEF]` anchors, `@RELATION` tags, and metadata must match the Protocol specification.
- **Structure**: File layouts and headers must follow the "File Structure Standard".
- **Workflow**: The technical steps for generating code must align with the Protocol.

### II. Causal Validity (Contracts First)
As defined in the Protocol, Semantic definitions (Contracts) must ALWAYS precede implementation code. Logic is downstream of definition. We define the structure and constraints (`[DEF]`, `@PRE`, `@POST`) before writing the executable logic.

### III. Immutability of Architecture
Architectural decisions in the Module Header (`@LAYER`, `@INVARIANT`, `@CONSTRAINT`) are treated as immutable constraints. Changes to these require an explicit refactoring step, not ad-hoc modification during implementation.

### IV. Design by Contract (DbC)
Contracts are the Source of Truth. Functions and Classes must define their purpose, specifications, and constraints in the metadata block before implementation, strictly following the **Contracts (Section IV)** standard in `semantic_protocol.md`.

### V. Belief State Logging
Agents must maintain belief state logs for debugging and coherence checks, strictly following the **Logging Standard (Section V)** defined in `semantic_protocol.md`.

### VI. Fractal Complexity Limit
To maintain semantic coherence, code must adhere to the complexity limits (Module/Function size) defined in the **Fractal Complexity Limit (Section VI)** of `semantic_protocol.md`.

### VII. Everything is a Plugin
All functional extensions, tools, or major features must be implemented as modular Plugins inheriting from `PluginBase`. Logic should not reside in standalone services or scripts unless strictly necessary for core infrastructure. This ensures a unified execution model via the `TaskManager`, consistent logging, and modularity.

## File Structure Standards
Refer to **Section III (File Structure Standard)** in `semantic_protocol.md` for the authoritative definitions of:
- Python Module Headers (`.py`)
- Svelte Component Headers (`.svelte`)

## Generation Workflow
The development process follows a streamlined single-phase workflow:

### 1. Code Generation Phase (Mode: `code`)
**Input**: `tasks.md`
**Responsibility**:
- Select task from `tasks.md`.
- Generate Scaffolding (`[DEF]` anchors, Headers, Contracts) AND Implementation in one pass.
- Ensure strict adherence to Protocol Section IV (Contracts) and Section VII (Generation Workflow).
- **Output**: Working code with passing tests.

### 2. Validation
If logic conflicts with Contract -> Stop -> Report Error.

## Governance
This Constitution establishes the "Semantic Code Generation Protocol" as the supreme law of this repository.

- **Authoritative Source**: `semantic_protocol.md` defines the specific implementation rules for these Principles.
- **Automated Enforcement**: Tools must validate adherence to the `semantic_protocol.md` syntax.
- **Amendments**: Changes to core principles require a Constitution amendment. Changes to technical syntax require a Protocol update.
- **Compliance**: Failure to adhere to the Protocol constitutes a build failure.

**Version**: 1.7.1 | **Ratified**: 2025-12-19 | **Last Amended**: 2026-01-13
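To make Principles II, IV, V, and VII concrete, here is a minimal Python sketch of what a contract-first, plugin-based definition could look like. The anchor and tag names (`[DEF]`, `@PURPOSE`, `@LAYER`, `@PRE`, `@POST`) are taken from the constitution text above, but their exact syntax is owned by `semantic_protocol.md`; the `PluginBase` stand-in, the assertion-based checks, and the logger name are illustrative assumptions, not the project's actual implementation.

```python
# [DEF: ReportPlugin @LAYER: plugins]
# @PURPOSE: Illustrative plugin that renders a semantic-compliance summary.
# @INVARIANT: Never mutates the data it receives.
import logging

logger = logging.getLogger("belief_state")  # Principle V: belief-state logging (name assumed)


class PluginBase:
    """Stand-in for the project's real PluginBase, which is wired into the TaskManager."""
    name: str = "base"


class ReportPlugin(PluginBase):
    name = "report"

    # [DEF: ReportPlugin.render]
    # @PURPOSE: Turn a compliance score into a one-line summary.
    # @PRE: 0.0 <= score <= 1.0
    # @POST: returns a non-empty string
    def render(self, score: float) -> str:
        # Contract written above the logic (Principle II); checks mirror @PRE/@POST.
        assert 0.0 <= score <= 1.0, "@PRE violated: score out of range"
        logger.info("belief_state: rendering report, score=%.2f", score)
        summary = f"Semantic compliance: {score:.0%}"
        assert summary, "@POST violated: empty summary"
        return summary
    # [/DEF: ReportPlugin.render]
# [/DEF: ReportPlugin]
```

Read this way, the contract block always precedes the executable logic, and the assertions plus the log line are one possible executable trace of the `@PRE`/`@POST` tags and of Principle V.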
.specify/scripts/bash/check-prerequisites.sh (Executable file, 166 lines added)
@@ -0,0 +1,166 @@
#!/usr/bin/env bash
|
||||||
|
|
||||||
|
# Consolidated prerequisite checking script
|
||||||
|
#
|
||||||
|
# This script provides unified prerequisite checking for Spec-Driven Development workflow.
|
||||||
|
# It replaces the functionality previously spread across multiple scripts.
|
||||||
|
#
|
||||||
|
# Usage: ./check-prerequisites.sh [OPTIONS]
|
||||||
|
#
|
||||||
|
# OPTIONS:
|
||||||
|
# --json Output in JSON format
|
||||||
|
# --require-tasks Require tasks.md to exist (for implementation phase)
|
||||||
|
# --include-tasks Include tasks.md in AVAILABLE_DOCS list
|
||||||
|
# --paths-only Only output path variables (no validation)
|
||||||
|
# --help, -h Show help message
|
||||||
|
#
|
||||||
|
# OUTPUTS:
|
||||||
|
# JSON mode: {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
|
||||||
|
# Text mode: FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
|
||||||
|
# Paths only: REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
# Parse command line arguments
|
||||||
|
JSON_MODE=false
|
||||||
|
REQUIRE_TASKS=false
|
||||||
|
INCLUDE_TASKS=false
|
||||||
|
PATHS_ONLY=false
|
||||||
|
|
||||||
|
for arg in "$@"; do
|
||||||
|
case "$arg" in
|
||||||
|
--json)
|
||||||
|
JSON_MODE=true
|
||||||
|
;;
|
||||||
|
--require-tasks)
|
||||||
|
REQUIRE_TASKS=true
|
||||||
|
;;
|
||||||
|
--include-tasks)
|
||||||
|
INCLUDE_TASKS=true
|
||||||
|
;;
|
||||||
|
--paths-only)
|
||||||
|
PATHS_ONLY=true
|
||||||
|
;;
|
||||||
|
--help|-h)
|
||||||
|
cat << 'EOF'
|
||||||
|
Usage: check-prerequisites.sh [OPTIONS]
|
||||||
|
|
||||||
|
Consolidated prerequisite checking for Spec-Driven Development workflow.
|
||||||
|
|
||||||
|
OPTIONS:
|
||||||
|
--json Output in JSON format
|
||||||
|
--require-tasks Require tasks.md to exist (for implementation phase)
|
||||||
|
--include-tasks Include tasks.md in AVAILABLE_DOCS list
|
||||||
|
--paths-only Only output path variables (no prerequisite validation)
|
||||||
|
--help, -h Show this help message
|
||||||
|
|
||||||
|
EXAMPLES:
|
||||||
|
# Check task prerequisites (plan.md required)
|
||||||
|
./check-prerequisites.sh --json
|
||||||
|
|
||||||
|
# Check implementation prerequisites (plan.md + tasks.md required)
|
||||||
|
./check-prerequisites.sh --json --require-tasks --include-tasks
|
||||||
|
|
||||||
|
# Get feature paths only (no validation)
|
||||||
|
./check-prerequisites.sh --paths-only
|
||||||
|
|
||||||
|
EOF
|
||||||
|
exit 0
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
|
||||||
|
exit 1
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
done
|
||||||
|
|
||||||
|
# Source common functions
|
||||||
|
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||||
|
source "$SCRIPT_DIR/common.sh"
|
||||||
|
|
||||||
|
# Get feature paths and validate branch
|
||||||
|
eval $(get_feature_paths)
|
||||||
|
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
|
||||||
|
|
||||||
|
# If paths-only mode, output paths and exit (support JSON + paths-only combined)
|
||||||
|
if $PATHS_ONLY; then
|
||||||
|
if $JSON_MODE; then
|
||||||
|
# Minimal JSON paths payload (no validation performed)
|
||||||
|
printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
|
||||||
|
"$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
|
||||||
|
else
|
||||||
|
echo "REPO_ROOT: $REPO_ROOT"
|
||||||
|
echo "BRANCH: $CURRENT_BRANCH"
|
||||||
|
echo "FEATURE_DIR: $FEATURE_DIR"
|
||||||
|
echo "FEATURE_SPEC: $FEATURE_SPEC"
|
||||||
|
echo "IMPL_PLAN: $IMPL_PLAN"
|
||||||
|
echo "TASKS: $TASKS"
|
||||||
|
fi
|
||||||
|
exit 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Validate required directories and files
|
||||||
|
if [[ ! -d "$FEATURE_DIR" ]]; then
|
||||||
|
echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
|
||||||
|
echo "Run /speckit.specify first to create the feature structure." >&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ ! -f "$IMPL_PLAN" ]]; then
|
||||||
|
echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
|
||||||
|
echo "Run /speckit.plan first to create the implementation plan." >&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Check for tasks.md if required
|
||||||
|
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
|
||||||
|
echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
|
||||||
|
echo "Run /speckit.tasks first to create the task list." >&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Build list of available documents
|
||||||
|
docs=()
|
||||||
|
|
||||||
|
# Always check these optional docs
|
||||||
|
[[ -f "$RESEARCH" ]] && docs+=("research.md")
|
||||||
|
[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")
|
||||||
|
|
||||||
|
# Check contracts directory (only if it exists and has files)
|
||||||
|
if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
|
||||||
|
docs+=("contracts/")
|
||||||
|
fi
|
||||||
|
|
||||||
|
[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")
|
||||||
|
|
||||||
|
# Include tasks.md if requested and it exists
|
||||||
|
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
|
||||||
|
docs+=("tasks.md")
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Output results
|
||||||
|
if $JSON_MODE; then
|
||||||
|
# Build JSON array of documents
|
||||||
|
if [[ ${#docs[@]} -eq 0 ]]; then
|
||||||
|
json_docs="[]"
|
||||||
|
else
|
||||||
|
json_docs=$(printf '"%s",' "${docs[@]}")
|
||||||
|
json_docs="[${json_docs%,}]"
|
||||||
|
fi
|
||||||
|
|
||||||
|
printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
|
||||||
|
else
|
||||||
|
# Text output
|
||||||
|
echo "FEATURE_DIR:$FEATURE_DIR"
|
||||||
|
echo "AVAILABLE_DOCS:"
|
||||||
|
|
||||||
|
# Show status of each potential document
|
||||||
|
check_file "$RESEARCH" "research.md"
|
||||||
|
check_file "$DATA_MODEL" "data-model.md"
|
||||||
|
check_dir "$CONTRACTS_DIR" "contracts/"
|
||||||
|
check_file "$QUICKSTART" "quickstart.md"
|
||||||
|
|
||||||
|
if $INCLUDE_TASKS; then
|
||||||
|
check_file "$TASKS" "tasks.md"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
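As a usage sketch, a Python-side caller could consume the JSON mode documented in the header of `check-prerequisites.sh` above. The flag combination and the `FEATURE_DIR`/`AVAILABLE_DOCS` payload shape come from the script's own comments and output code; the driver logic, paths, and messages around them are illustrative assumptions.

```python
import json
import subprocess

# Illustrative driver: run the prerequisite check in JSON mode and inspect the result.
# check=True surfaces a missing feature dir or plan.md as CalledProcessError (the script exits 1).
result = subprocess.run(
    ["bash", ".specify/scripts/bash/check-prerequisites.sh", "--json", "--include-tasks"],
    capture_output=True,
    text=True,
    check=True,
)
payload = json.loads(result.stdout)

feature_dir = payload["FEATURE_DIR"]
available_docs = payload["AVAILABLE_DOCS"]

if "tasks.md" not in available_docs:
    print(f"tasks.md not found in {feature_dir}; run /speckit.tasks before implementing")
else:
    print(f"{feature_dir} is ready: {len(available_docs)} supporting documents present")
```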
.specify/scripts/bash/common.sh (Executable file, 156 lines added)
@@ -0,0 +1,156 @@
#!/usr/bin/env bash
|
||||||
|
# Common functions and variables for all scripts
|
||||||
|
|
||||||
|
# Get repository root, with fallback for non-git repositories
|
||||||
|
get_repo_root() {
|
||||||
|
if git rev-parse --show-toplevel >/dev/null 2>&1; then
|
||||||
|
git rev-parse --show-toplevel
|
||||||
|
else
|
||||||
|
# Fall back to script location for non-git repos
|
||||||
|
local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||||
|
(cd "$script_dir/../../.." && pwd)
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Get current branch, with fallback for non-git repositories
|
||||||
|
get_current_branch() {
|
||||||
|
# First check if SPECIFY_FEATURE environment variable is set
|
||||||
|
if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
|
||||||
|
echo "$SPECIFY_FEATURE"
|
||||||
|
return
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Then check git if available
|
||||||
|
if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then
|
||||||
|
git rev-parse --abbrev-ref HEAD
|
||||||
|
return
|
||||||
|
fi
|
||||||
|
|
||||||
|
# For non-git repos, try to find the latest feature directory
|
||||||
|
local repo_root=$(get_repo_root)
|
||||||
|
local specs_dir="$repo_root/specs"
|
||||||
|
|
||||||
|
if [[ -d "$specs_dir" ]]; then
|
||||||
|
local latest_feature=""
|
||||||
|
local highest=0
|
||||||
|
|
||||||
|
for dir in "$specs_dir"/*; do
|
||||||
|
if [[ -d "$dir" ]]; then
|
||||||
|
local dirname=$(basename "$dir")
|
||||||
|
if [[ "$dirname" =~ ^([0-9]{3})- ]]; then
|
||||||
|
local number=${BASH_REMATCH[1]}
|
||||||
|
number=$((10#$number))
|
||||||
|
if [[ "$number" -gt "$highest" ]]; then
|
||||||
|
highest=$number
|
||||||
|
latest_feature=$dirname
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
if [[ -n "$latest_feature" ]]; then
|
||||||
|
echo "$latest_feature"
|
||||||
|
return
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "main" # Final fallback
|
||||||
|
}
|
||||||
|
|
||||||
|
# Check if we have git available
|
||||||
|
has_git() {
|
||||||
|
git rev-parse --show-toplevel >/dev/null 2>&1
|
||||||
|
}
|
||||||
|
|
||||||
|
check_feature_branch() {
|
||||||
|
local branch="$1"
|
||||||
|
local has_git_repo="$2"
|
||||||
|
|
||||||
|
# For non-git repos, we can't enforce branch naming but still provide output
|
||||||
|
if [[ "$has_git_repo" != "true" ]]; then
|
||||||
|
echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then
|
||||||
|
echo "ERROR: Not on a feature branch. Current branch: $branch" >&2
|
||||||
|
echo "Feature branches should be named like: 001-feature-name" >&2
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
|
||||||
|
get_feature_dir() { echo "$1/specs/$2"; }
|
||||||
|
|
||||||
|
# Find feature directory by numeric prefix instead of exact branch match
|
||||||
|
# This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature)
|
||||||
|
find_feature_dir_by_prefix() {
|
||||||
|
local repo_root="$1"
|
||||||
|
local branch_name="$2"
|
||||||
|
local specs_dir="$repo_root/specs"
|
||||||
|
|
||||||
|
# Extract numeric prefix from branch (e.g., "004" from "004-whatever")
|
||||||
|
if [[ ! "$branch_name" =~ ^([0-9]{3})- ]]; then
|
||||||
|
# If branch doesn't have numeric prefix, fall back to exact match
|
||||||
|
echo "$specs_dir/$branch_name"
|
||||||
|
return
|
||||||
|
fi
|
||||||
|
|
||||||
|
local prefix="${BASH_REMATCH[1]}"
|
||||||
|
|
||||||
|
# Search for directories in specs/ that start with this prefix
|
||||||
|
local matches=()
|
||||||
|
if [[ -d "$specs_dir" ]]; then
|
||||||
|
for dir in "$specs_dir"/"$prefix"-*; do
|
||||||
|
if [[ -d "$dir" ]]; then
|
||||||
|
matches+=("$(basename "$dir")")
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Handle results
|
||||||
|
if [[ ${#matches[@]} -eq 0 ]]; then
|
||||||
|
# No match found - return the branch name path (will fail later with clear error)
|
||||||
|
echo "$specs_dir/$branch_name"
|
||||||
|
elif [[ ${#matches[@]} -eq 1 ]]; then
|
||||||
|
# Exactly one match - perfect!
|
||||||
|
echo "$specs_dir/${matches[0]}"
|
||||||
|
else
|
||||||
|
# Multiple matches - this shouldn't happen with proper naming convention
|
||||||
|
echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2
|
||||||
|
echo "Please ensure only one spec directory exists per numeric prefix." >&2
|
||||||
|
echo "$specs_dir/$branch_name" # Return something to avoid breaking the script
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
get_feature_paths() {
|
||||||
|
local repo_root=$(get_repo_root)
|
||||||
|
local current_branch=$(get_current_branch)
|
||||||
|
local has_git_repo="false"
|
||||||
|
|
||||||
|
if has_git; then
|
||||||
|
has_git_repo="true"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Use prefix-based lookup to support multiple branches per spec
|
||||||
|
local feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch")
|
||||||
|
|
||||||
|
cat <<EOF
|
||||||
|
REPO_ROOT='$repo_root'
|
||||||
|
CURRENT_BRANCH='$current_branch'
|
||||||
|
HAS_GIT='$has_git_repo'
|
||||||
|
FEATURE_DIR='$feature_dir'
|
||||||
|
FEATURE_SPEC='$feature_dir/spec.md'
|
||||||
|
IMPL_PLAN='$feature_dir/plan.md'
|
||||||
|
TASKS='$feature_dir/tasks.md'
|
||||||
|
RESEARCH='$feature_dir/research.md'
|
||||||
|
DATA_MODEL='$feature_dir/data-model.md'
|
||||||
|
QUICKSTART='$feature_dir/quickstart.md'
|
||||||
|
CONTRACTS_DIR='$feature_dir/contracts'
|
||||||
|
EOF
|
||||||
|
}
|
||||||
|
|
||||||
|
check_file() { [[ -f "$1" ]] && echo " ✓ $2" || echo " ✗ $2"; }
|
||||||
|
check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo " ✓ $2" || echo " ✗ $2"; }
|
||||||
|
|
||||||
.specify/scripts/bash/create-new-feature.sh (Executable file, 297 lines added)
@@ -0,0 +1,297 @@
#!/usr/bin/env bash
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
JSON_MODE=false
|
||||||
|
SHORT_NAME=""
|
||||||
|
BRANCH_NUMBER=""
|
||||||
|
ARGS=()
|
||||||
|
i=1
|
||||||
|
while [ $i -le $# ]; do
|
||||||
|
arg="${!i}"
|
||||||
|
case "$arg" in
|
||||||
|
--json)
|
||||||
|
JSON_MODE=true
|
||||||
|
;;
|
||||||
|
--short-name)
|
||||||
|
if [ $((i + 1)) -gt $# ]; then
|
||||||
|
echo 'Error: --short-name requires a value' >&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
i=$((i + 1))
|
||||||
|
next_arg="${!i}"
|
||||||
|
# Check if the next argument is another option (starts with --)
|
||||||
|
if [[ "$next_arg" == --* ]]; then
|
||||||
|
echo 'Error: --short-name requires a value' >&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
SHORT_NAME="$next_arg"
|
||||||
|
;;
|
||||||
|
--number)
|
||||||
|
if [ $((i + 1)) -gt $# ]; then
|
||||||
|
echo 'Error: --number requires a value' >&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
i=$((i + 1))
|
||||||
|
next_arg="${!i}"
|
||||||
|
if [[ "$next_arg" == --* ]]; then
|
||||||
|
echo 'Error: --number requires a value' >&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
BRANCH_NUMBER="$next_arg"
|
||||||
|
;;
|
||||||
|
--help|-h)
|
||||||
|
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>"
|
||||||
|
echo ""
|
||||||
|
echo "Options:"
|
||||||
|
echo " --json Output in JSON format"
|
||||||
|
echo " --short-name <name> Provide a custom short name (2-4 words) for the branch"
|
||||||
|
echo " --number N Specify branch number manually (overrides auto-detection)"
|
||||||
|
echo " --help, -h Show this help message"
|
||||||
|
echo ""
|
||||||
|
echo "Examples:"
|
||||||
|
echo " $0 'Add user authentication system' --short-name 'user-auth'"
|
||||||
|
echo " $0 'Implement OAuth2 integration for API' --number 5"
|
||||||
|
exit 0
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
ARGS+=("$arg")
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
i=$((i + 1))
|
||||||
|
done
|
||||||
|
|
||||||
|
FEATURE_DESCRIPTION="${ARGS[*]}"
|
||||||
|
if [ -z "$FEATURE_DESCRIPTION" ]; then
|
||||||
|
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>" >&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Function to find the repository root by searching for existing project markers
|
||||||
|
find_repo_root() {
|
||||||
|
local dir="$1"
|
||||||
|
while [ "$dir" != "/" ]; do
|
||||||
|
if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
|
||||||
|
echo "$dir"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
dir="$(dirname "$dir")"
|
||||||
|
done
|
||||||
|
return 1
|
||||||
|
}
|
||||||
|
|
||||||
|
# Function to get highest number from specs directory
|
||||||
|
get_highest_from_specs() {
|
||||||
|
local specs_dir="$1"
|
||||||
|
local highest=0
|
||||||
|
|
||||||
|
if [ -d "$specs_dir" ]; then
|
||||||
|
for dir in "$specs_dir"/*; do
|
||||||
|
[ -d "$dir" ] || continue
|
||||||
|
dirname=$(basename "$dir")
|
||||||
|
number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0")
|
||||||
|
number=$((10#$number))
|
||||||
|
if [ "$number" -gt "$highest" ]; then
|
||||||
|
highest=$number
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "$highest"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Function to get highest number from git branches
|
||||||
|
get_highest_from_branches() {
|
||||||
|
local highest=0
|
||||||
|
|
||||||
|
# Get all branches (local and remote)
|
||||||
|
branches=$(git branch -a 2>/dev/null || echo "")
|
||||||
|
|
||||||
|
if [ -n "$branches" ]; then
|
||||||
|
while IFS= read -r branch; do
|
||||||
|
# Clean branch name: remove leading markers and remote prefixes
|
||||||
|
clean_branch=$(echo "$branch" | sed 's/^[* ]*//; s|^remotes/[^/]*/||')
|
||||||
|
|
||||||
|
# Extract feature number if branch matches pattern ###-*
|
||||||
|
if echo "$clean_branch" | grep -q '^[0-9]\{3\}-'; then
|
||||||
|
number=$(echo "$clean_branch" | grep -o '^[0-9]\{3\}' || echo "0")
|
||||||
|
number=$((10#$number))
|
||||||
|
if [ "$number" -gt "$highest" ]; then
|
||||||
|
highest=$number
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
done <<< "$branches"
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "$highest"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Function to check existing branches (local and remote) and return next available number
|
||||||
|
check_existing_branches() {
|
||||||
|
local specs_dir="$1"
|
||||||
|
|
||||||
|
# Fetch all remotes to get latest branch info (suppress errors if no remotes)
|
||||||
|
git fetch --all --prune 2>/dev/null || true
|
||||||
|
|
||||||
|
# Get highest number from ALL branches (not just matching short name)
|
||||||
|
local highest_branch=$(get_highest_from_branches)
|
||||||
|
|
||||||
|
# Get highest number from ALL specs (not just matching short name)
|
||||||
|
local highest_spec=$(get_highest_from_specs "$specs_dir")
|
||||||
|
|
||||||
|
# Take the maximum of both
|
||||||
|
local max_num=$highest_branch
|
||||||
|
if [ "$highest_spec" -gt "$max_num" ]; then
|
||||||
|
max_num=$highest_spec
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Return next number
|
||||||
|
echo $((max_num + 1))
|
||||||
|
}
|
||||||
|
|
||||||
|
# Function to clean and format a branch name
|
||||||
|
clean_branch_name() {
|
||||||
|
local name="$1"
|
||||||
|
echo "$name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
|
||||||
|
}
|
||||||
|
|
||||||
|
# Resolve repository root. Prefer git information when available, but fall back
|
||||||
|
# to searching for repository markers so the workflow still functions in repositories that
|
||||||
|
# were initialised with --no-git.
|
||||||
|
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||||
|
|
||||||
|
if git rev-parse --show-toplevel >/dev/null 2>&1; then
|
||||||
|
REPO_ROOT=$(git rev-parse --show-toplevel)
|
||||||
|
HAS_GIT=true
|
||||||
|
else
|
||||||
|
REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
|
||||||
|
if [ -z "$REPO_ROOT" ]; then
|
||||||
|
echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
HAS_GIT=false
|
||||||
|
fi
|
||||||
|
|
||||||
|
cd "$REPO_ROOT"
|
||||||
|
|
||||||
|
SPECS_DIR="$REPO_ROOT/specs"
|
||||||
|
mkdir -p "$SPECS_DIR"
|
||||||
|
|
||||||
|
# Function to generate branch name with stop word filtering and length filtering
|
||||||
|
generate_branch_name() {
|
||||||
|
local description="$1"
|
||||||
|
|
||||||
|
# Common stop words to filter out
|
||||||
|
local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"
|
||||||
|
|
||||||
|
# Convert to lowercase and split into words
|
||||||
|
local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')
|
||||||
|
|
||||||
|
# Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original)
|
||||||
|
local meaningful_words=()
|
||||||
|
for word in $clean_name; do
|
||||||
|
# Skip empty words
|
||||||
|
[ -z "$word" ] && continue
|
||||||
|
|
||||||
|
# Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
|
||||||
|
if ! echo "$word" | grep -qiE "$stop_words"; then
|
||||||
|
if [ ${#word} -ge 3 ]; then
|
||||||
|
meaningful_words+=("$word")
|
||||||
|
elif echo "$description" | grep -q "\b${word^^}\b"; then
|
||||||
|
# Keep short words if they appear as uppercase in original (likely acronyms)
|
||||||
|
meaningful_words+=("$word")
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
# If we have meaningful words, use first 3-4 of them
|
||||||
|
if [ ${#meaningful_words[@]} -gt 0 ]; then
|
||||||
|
local max_words=3
|
||||||
|
if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi
|
||||||
|
|
||||||
|
local result=""
|
||||||
|
local count=0
|
||||||
|
for word in "${meaningful_words[@]}"; do
|
||||||
|
if [ $count -ge $max_words ]; then break; fi
|
||||||
|
if [ -n "$result" ]; then result="$result-"; fi
|
||||||
|
result="$result$word"
|
||||||
|
count=$((count + 1))
|
||||||
|
done
|
||||||
|
echo "$result"
|
||||||
|
else
|
||||||
|
# Fallback to original logic if no meaningful words found
|
||||||
|
local cleaned=$(clean_branch_name "$description")
|
||||||
|
echo "$cleaned" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Generate branch name
|
||||||
|
if [ -n "$SHORT_NAME" ]; then
|
||||||
|
# Use provided short name, just clean it up
|
||||||
|
BRANCH_SUFFIX=$(clean_branch_name "$SHORT_NAME")
|
||||||
|
else
|
||||||
|
# Generate from description with smart filtering
|
||||||
|
BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Determine branch number
|
||||||
|
if [ -z "$BRANCH_NUMBER" ]; then
|
||||||
|
if [ "$HAS_GIT" = true ]; then
|
||||||
|
# Check existing branches on remotes
|
||||||
|
BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR")
|
||||||
|
else
|
||||||
|
# Fall back to local directory check
|
||||||
|
HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
|
||||||
|
BRANCH_NUMBER=$((HIGHEST + 1))
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Force base-10 interpretation to prevent octal conversion (e.g., 010 → 8 in octal, but should be 10 in decimal)
|
||||||
|
FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))")
|
||||||
|
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
|
||||||
|
|
||||||
|
# GitHub enforces a 244-byte limit on branch names
|
||||||
|
# Validate and truncate if necessary
|
||||||
|
MAX_BRANCH_LENGTH=244
|
||||||
|
if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
|
||||||
|
# Calculate how much we need to trim from suffix
|
||||||
|
# Account for: feature number (3) + hyphen (1) = 4 chars
|
||||||
|
MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - 4))
|
||||||
|
|
||||||
|
# Truncate suffix at word boundary if possible
|
||||||
|
TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
|
||||||
|
# Remove trailing hyphen if truncation created one
|
||||||
|
TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')
|
||||||
|
|
||||||
|
ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
|
||||||
|
BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"
|
||||||
|
|
||||||
|
>&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
|
||||||
|
>&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
|
||||||
|
>&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ "$HAS_GIT" = true ]; then
|
||||||
|
git checkout -b "$BRANCH_NAME"
|
||||||
|
else
|
||||||
|
>&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
|
||||||
|
fi
|
||||||
|
|
||||||
|
FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
|
||||||
|
mkdir -p "$FEATURE_DIR"
|
||||||
|
|
||||||
|
TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
|
||||||
|
SPEC_FILE="$FEATURE_DIR/spec.md"
|
||||||
|
if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi
|
||||||
|
|
||||||
|
# Set the SPECIFY_FEATURE environment variable for the current session
|
||||||
|
export SPECIFY_FEATURE="$BRANCH_NAME"
|
||||||
|
|
||||||
|
if $JSON_MODE; then
|
||||||
|
printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
|
||||||
|
else
|
||||||
|
echo "BRANCH_NAME: $BRANCH_NAME"
|
||||||
|
echo "SPEC_FILE: $SPEC_FILE"
|
||||||
|
echo "FEATURE_NUM: $FEATURE_NUM"
|
||||||
|
echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
|
||||||
|
fi
|
||||||
.specify/scripts/bash/setup-plan.sh (Executable file, 61 lines added)
@@ -0,0 +1,61 @@
#!/usr/bin/env bash

set -e

# Parse command line arguments
JSON_MODE=false
ARGS=()

for arg in "$@"; do
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --help|-h)
            echo "Usage: $0 [--json]"
            echo "  --json    Output results in JSON format"
            echo "  --help    Show this help message"
            exit 0
            ;;
        *)
            ARGS+=("$arg")
            ;;
    esac
done

# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get all paths and variables from common functions
eval $(get_feature_paths)

# Check if we're on a proper feature branch (only for git repos)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1

# Ensure the feature directory exists
mkdir -p "$FEATURE_DIR"

# Copy plan template if it exists
TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md"
if [[ -f "$TEMPLATE" ]]; then
    cp "$TEMPLATE" "$IMPL_PLAN"
    echo "Copied plan template to $IMPL_PLAN"
else
    echo "Warning: Plan template not found at $TEMPLATE"
    # Create a basic plan file if template doesn't exist
    touch "$IMPL_PLAN"
fi

# Output results
if $JSON_MODE; then
    printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
        "$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT"
else
    echo "FEATURE_SPEC: $FEATURE_SPEC"
    echo "IMPL_PLAN: $IMPL_PLAN"
    echo "SPECS_DIR: $FEATURE_DIR"
    echo "BRANCH: $CURRENT_BRANCH"
    echo "HAS_GIT: $HAS_GIT"
fi
.specify/scripts/bash/update-agent-context.sh (Executable file, 799 lines added)
@@ -0,0 +1,799 @@
#!/usr/bin/env bash
|
||||||
|
|
||||||
|
# Update agent context files with information from plan.md
|
||||||
|
#
|
||||||
|
# This script maintains AI agent context files by parsing feature specifications
|
||||||
|
# and updating agent-specific configuration files with project information.
|
||||||
|
#
|
||||||
|
# MAIN FUNCTIONS:
|
||||||
|
# 1. Environment Validation
|
||||||
|
# - Verifies git repository structure and branch information
|
||||||
|
# - Checks for required plan.md files and templates
|
||||||
|
# - Validates file permissions and accessibility
|
||||||
|
#
|
||||||
|
# 2. Plan Data Extraction
|
||||||
|
# - Parses plan.md files to extract project metadata
|
||||||
|
# - Identifies language/version, frameworks, databases, and project types
|
||||||
|
# - Handles missing or incomplete specification data gracefully
|
||||||
|
#
|
||||||
|
# 3. Agent File Management
|
||||||
|
# - Creates new agent context files from templates when needed
|
||||||
|
# - Updates existing agent files with new project information
|
||||||
|
# - Preserves manual additions and custom configurations
|
||||||
|
# - Supports multiple AI agent formats and directory structures
|
||||||
|
#
|
||||||
|
# 4. Content Generation
|
||||||
|
# - Generates language-specific build/test commands
|
||||||
|
# - Creates appropriate project directory structures
|
||||||
|
# - Updates technology stacks and recent changes sections
|
||||||
|
# - Maintains consistent formatting and timestamps
|
||||||
|
#
|
||||||
|
# 5. Multi-Agent Support
|
||||||
|
# - Handles agent-specific file paths and naming conventions
|
||||||
|
# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, or Amazon Q Developer CLI
|
||||||
|
# - Can update single agents or all existing agent files
|
||||||
|
# - Creates default Claude file if no agent files exist
|
||||||
|
#
|
||||||
|
# Usage: ./update-agent-context.sh [agent_type]
|
||||||
|
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|shai|q|bob|qoder
|
||||||
|
# Leave empty to update all existing agent files
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
# Enable strict error handling
|
||||||
|
set -u
|
||||||
|
set -o pipefail
|
||||||
|
|
||||||
|
#==============================================================================
|
||||||
|
# Configuration and Global Variables
|
||||||
|
#==============================================================================
|
||||||
|
|
||||||
|
# Get script directory and load common functions
|
||||||
|
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||||
|
source "$SCRIPT_DIR/common.sh"
|
||||||
|
|
||||||
|
# Get all paths and variables from common functions
|
||||||
|
eval $(get_feature_paths)
|
||||||
|
|
||||||
|
NEW_PLAN="$IMPL_PLAN" # Alias for compatibility with existing code
|
||||||
|
AGENT_TYPE="${1:-}"
|
||||||
|
|
||||||
|
# Agent-specific file paths
|
||||||
|
CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
|
||||||
|
GEMINI_FILE="$REPO_ROOT/GEMINI.md"
|
||||||
|
COPILOT_FILE="$REPO_ROOT/.github/agents/copilot-instructions.md"
|
||||||
|
CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
|
||||||
|
QWEN_FILE="$REPO_ROOT/QWEN.md"
|
||||||
|
AGENTS_FILE="$REPO_ROOT/AGENTS.md"
|
||||||
|
WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
|
||||||
|
KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
|
||||||
|
AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
|
||||||
|
ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"
|
||||||
|
CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md"
|
||||||
|
QODER_FILE="$REPO_ROOT/QODER.md"
|
||||||
|
AMP_FILE="$REPO_ROOT/AGENTS.md"
|
||||||
|
SHAI_FILE="$REPO_ROOT/SHAI.md"
|
||||||
|
Q_FILE="$REPO_ROOT/AGENTS.md"
|
||||||
|
BOB_FILE="$REPO_ROOT/AGENTS.md"
|
||||||
|
|
||||||
|
# Template file
|
||||||
|
TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"
|
||||||
|
|
||||||
|
# Global variables for parsed plan data
|
||||||
|
NEW_LANG=""
|
||||||
|
NEW_FRAMEWORK=""
|
||||||
|
NEW_DB=""
|
||||||
|
NEW_PROJECT_TYPE=""
|
||||||
|
|
||||||
|
#==============================================================================
|
||||||
|
# Utility Functions
|
||||||
|
#==============================================================================
|
||||||
|
|
||||||
|
log_info() {
|
||||||
|
echo "INFO: $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_success() {
|
||||||
|
echo "✓ $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_error() {
|
||||||
|
echo "ERROR: $1" >&2
|
||||||
|
}
|
||||||
|
|
||||||
|
log_warning() {
|
||||||
|
echo "WARNING: $1" >&2
|
||||||
|
}
|
||||||
|
|
||||||
|
# Cleanup function for temporary files
|
||||||
|
cleanup() {
|
||||||
|
local exit_code=$?
|
||||||
|
rm -f /tmp/agent_update_*_$$
|
||||||
|
rm -f /tmp/manual_additions_$$
|
||||||
|
exit $exit_code
|
||||||
|
}
|
||||||
|
|
||||||
|
# Set up cleanup trap
|
||||||
|
trap cleanup EXIT INT TERM
|
||||||
|
|
||||||
|
#==============================================================================
|
||||||
|
# Validation Functions
|
||||||
|
#==============================================================================
|
||||||
|
|
||||||
|
validate_environment() {
|
||||||
|
# Check if we have a current branch/feature (git or non-git)
|
||||||
|
if [[ -z "$CURRENT_BRANCH" ]]; then
|
||||||
|
log_error "Unable to determine current feature"
|
||||||
|
if [[ "$HAS_GIT" == "true" ]]; then
|
||||||
|
log_info "Make sure you're on a feature branch"
|
||||||
|
else
|
||||||
|
log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
|
||||||
|
fi
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Check if plan.md exists
|
||||||
|
if [[ ! -f "$NEW_PLAN" ]]; then
|
||||||
|
log_error "No plan.md found at $NEW_PLAN"
|
||||||
|
log_info "Make sure you're working on a feature with a corresponding spec directory"
|
||||||
|
if [[ "$HAS_GIT" != "true" ]]; then
|
||||||
|
log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first"
|
||||||
|
fi
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Check if template exists (needed for new files)
|
||||||
|
if [[ ! -f "$TEMPLATE_FILE" ]]; then
|
||||||
|
log_warning "Template file not found at $TEMPLATE_FILE"
|
||||||
|
log_warning "Creating new agent files will fail"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
#==============================================================================
|
||||||
|
# Plan Parsing Functions
|
||||||
|
#==============================================================================
|
||||||
|
|
||||||
|
extract_plan_field() {
|
||||||
|
local field_pattern="$1"
|
||||||
|
local plan_file="$2"
|
||||||
|
|
||||||
|
grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
|
||||||
|
head -1 | \
|
||||||
|
sed "s|^\*\*${field_pattern}\*\*: ||" | \
|
||||||
|
sed 's/^[ \t]*//;s/[ \t]*$//' | \
|
||||||
|
grep -v "NEEDS CLARIFICATION" | \
|
||||||
|
grep -v "^N/A$" || echo ""
|
||||||
|
}
|
||||||
|
|
||||||
|
parse_plan_data() {
|
||||||
|
local plan_file="$1"
|
||||||
|
|
||||||
|
if [[ ! -f "$plan_file" ]]; then
|
||||||
|
log_error "Plan file not found: $plan_file"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ ! -r "$plan_file" ]]; then
|
||||||
|
log_error "Plan file is not readable: $plan_file"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
log_info "Parsing plan data from $plan_file"
|
||||||
|
|
||||||
|
NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
|
||||||
|
NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
|
||||||
|
NEW_DB=$(extract_plan_field "Storage" "$plan_file")
|
||||||
|
NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")
|
||||||
|
|
||||||
|
# Log what we found
|
||||||
|
if [[ -n "$NEW_LANG" ]]; then
|
||||||
|
log_info "Found language: $NEW_LANG"
|
||||||
|
else
|
||||||
|
log_warning "No language information found in plan"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -n "$NEW_FRAMEWORK" ]]; then
|
||||||
|
log_info "Found framework: $NEW_FRAMEWORK"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
|
||||||
|
log_info "Found database: $NEW_DB"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -n "$NEW_PROJECT_TYPE" ]]; then
|
||||||
|
log_info "Found project type: $NEW_PROJECT_TYPE"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
format_technology_stack() {
|
||||||
|
local lang="$1"
|
||||||
|
local framework="$2"
|
||||||
|
local parts=()
|
||||||
|
|
||||||
|
# Add non-empty parts
|
||||||
|
[[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
|
||||||
|
[[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")
|
||||||
|
|
||||||
|
# Join with proper formatting
|
||||||
|
if [[ ${#parts[@]} -eq 0 ]]; then
|
||||||
|
echo ""
|
||||||
|
elif [[ ${#parts[@]} -eq 1 ]]; then
|
||||||
|
echo "${parts[0]}"
|
||||||
|
else
|
||||||
|
# Join multiple parts with " + "
|
||||||
|
local result="${parts[0]}"
|
||||||
|
for ((i=1; i<${#parts[@]}; i++)); do
|
||||||
|
result="$result + ${parts[i]}"
|
||||||
|
done
|
||||||
|
echo "$result"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
#==============================================================================
|
||||||
|
# Template and Content Generation Functions
|
||||||
|
#==============================================================================
|
||||||
|
|
||||||
|
get_project_structure() {
|
||||||
|
local project_type="$1"
|
||||||
|
|
||||||
|
if [[ "$project_type" == *"web"* ]]; then
|
||||||
|
echo "backend/\\nfrontend/\\ntests/"
|
||||||
|
else
|
||||||
|
echo "src/\\ntests/"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
get_commands_for_language() {
|
||||||
|
local lang="$1"
|
||||||
|
|
||||||
|
case "$lang" in
|
||||||
|
*"Python"*)
|
||||||
|
echo "cd src && pytest && ruff check ."
|
||||||
|
;;
|
||||||
|
*"Rust"*)
|
||||||
|
echo "cargo test && cargo clippy"
|
||||||
|
;;
|
||||||
|
*"JavaScript"*|*"TypeScript"*)
|
||||||
|
echo "npm test \\&\\& npm run lint"
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
echo "# Add commands for $lang"
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
}
|
||||||
|
|
||||||
|
get_language_conventions() {
|
||||||
|
local lang="$1"
|
||||||
|
echo "$lang: Follow standard conventions"
|
||||||
|
}
|
||||||
|
|
||||||
|
create_new_agent_file() {
|
||||||
|
local target_file="$1"
|
||||||
|
local temp_file="$2"
|
||||||
|
local project_name="$3"
|
||||||
|
local current_date="$4"
|
||||||
|
|
||||||
|
if [[ ! -f "$TEMPLATE_FILE" ]]; then
|
||||||
|
log_error "Template not found at $TEMPLATE_FILE"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ ! -r "$TEMPLATE_FILE" ]]; then
|
||||||
|
log_error "Template file is not readable: $TEMPLATE_FILE"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
log_info "Creating new agent context file from template..."
|
||||||
|
|
||||||
|
if ! cp "$TEMPLATE_FILE" "$temp_file"; then
|
||||||
|
log_error "Failed to copy template file"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Replace template placeholders
|
||||||
|
local project_structure
|
||||||
|
project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")
|
||||||
|
|
||||||
|
local commands
|
||||||
|
commands=$(get_commands_for_language "$NEW_LANG")
|
||||||
|
|
||||||
|
local language_conventions
|
||||||
|
language_conventions=$(get_language_conventions "$NEW_LANG")
|
||||||
|
|
||||||
|
# Perform substitutions with error checking using safer approach
|
||||||
|
# Escape special characters for sed by using a different delimiter or escaping
|
||||||
|
local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g')
|
||||||
|
local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g')
|
||||||
|
local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g')
|
||||||
|
|
||||||
|
# Build technology stack and recent change strings conditionally
|
||||||
|
local tech_stack
|
||||||
|
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
|
||||||
|
tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)"
|
||||||
|
elif [[ -n "$escaped_lang" ]]; then
|
||||||
|
tech_stack="- $escaped_lang ($escaped_branch)"
|
||||||
|
elif [[ -n "$escaped_framework" ]]; then
|
||||||
|
tech_stack="- $escaped_framework ($escaped_branch)"
|
||||||
|
else
|
||||||
|
tech_stack="- ($escaped_branch)"
|
||||||
|
fi
|
||||||
|
|
||||||
|
local recent_change
|
||||||
|
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
|
||||||
|
recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework"
|
||||||
|
elif [[ -n "$escaped_lang" ]]; then
|
||||||
|
recent_change="- $escaped_branch: Added $escaped_lang"
|
||||||
|
elif [[ -n "$escaped_framework" ]]; then
|
||||||
|
recent_change="- $escaped_branch: Added $escaped_framework"
|
||||||
|
else
|
||||||
|
recent_change="- $escaped_branch: Added"
|
||||||
|
fi
|
||||||
|
|
||||||
|
local substitutions=(
|
||||||
|
"s|\[PROJECT NAME\]|$project_name|"
|
||||||
|
"s|\[DATE\]|$current_date|"
|
||||||
|
"s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
|
||||||
|
"s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
|
||||||
|
"s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
|
||||||
|
"s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
|
||||||
|
"s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
|
||||||
|
)
|
||||||
|
|
||||||
|
for substitution in "${substitutions[@]}"; do
|
||||||
|
if ! sed -i.bak -e "$substitution" "$temp_file"; then
|
||||||
|
log_error "Failed to perform substitution: $substitution"
|
||||||
|
rm -f "$temp_file" "$temp_file.bak"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
# Convert \n sequences to actual newlines
|
||||||
|
newline=$(printf '\n')
|
||||||
|
sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file"
|
||||||
|
|
||||||
|
# Clean up backup files
|
||||||
|
rm -f "$temp_file.bak" "$temp_file.bak2"
|
||||||
|
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
update_existing_agent_file() {
|
||||||
|
local target_file="$1"
|
||||||
|
local current_date="$2"
|
||||||
|
|
||||||
|
log_info "Updating existing agent context file..."
|
||||||
|
|
||||||
|
# Use a single temporary file for atomic update
|
||||||
|
local temp_file
|
||||||
|
temp_file=$(mktemp) || {
|
||||||
|
log_error "Failed to create temporary file"
|
||||||
|
return 1
|
||||||
|
}
|
||||||
|
|
||||||
|
# Process the file in one pass
|
||||||
|
local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
|
||||||
|
local new_tech_entries=()
|
||||||
|
local new_change_entry=""
|
||||||
|
|
||||||
|
# Prepare new technology entries
|
||||||
|
if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then
|
||||||
|
new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -q "$NEW_DB" "$target_file"; then
|
||||||
|
new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Prepare new change entry
|
||||||
|
if [[ -n "$tech_stack" ]]; then
|
||||||
|
new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
|
||||||
|
elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
|
||||||
|
new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Check if sections exist in the file
|
||||||
|
local has_active_technologies=0
|
||||||
|
local has_recent_changes=0
|
||||||
|
|
||||||
|
if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then
|
||||||
|
has_active_technologies=1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then
|
||||||
|
has_recent_changes=1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Process file line by line
|
||||||
|
local in_tech_section=false
|
||||||
|
local in_changes_section=false
|
||||||
|
local tech_entries_added=false
|
||||||
|
local changes_entries_added=false
|
||||||
|
local existing_changes_count=0
|
||||||
|
local file_ended=false
|
||||||
|
|
||||||
|
while IFS= read -r line || [[ -n "$line" ]]; do
|
||||||
|
# Handle Active Technologies section
|
||||||
|
if [[ "$line" == "## Active Technologies" ]]; then
|
||||||
|
echo "$line" >> "$temp_file"
|
||||||
|
in_tech_section=true
|
||||||
|
continue
|
||||||
|
elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
|
||||||
|
# Add new tech entries before closing the section
|
||||||
|
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
|
||||||
|
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
|
||||||
|
tech_entries_added=true
|
||||||
|
fi
|
||||||
|
echo "$line" >> "$temp_file"
|
||||||
|
in_tech_section=false
|
||||||
|
continue
|
||||||
|
elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then
|
||||||
|
# Add new tech entries before empty line in tech section
|
||||||
|
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
|
||||||
|
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
|
||||||
|
tech_entries_added=true
|
||||||
|
fi
|
||||||
|
echo "$line" >> "$temp_file"
|
||||||
|
continue
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Handle Recent Changes section
|
||||||
|
if [[ "$line" == "## Recent Changes" ]]; then
|
||||||
|
echo "$line" >> "$temp_file"
|
||||||
|
# Add new change entry right after the heading
|
||||||
|
if [[ -n "$new_change_entry" ]]; then
|
||||||
|
echo "$new_change_entry" >> "$temp_file"
|
||||||
|
fi
|
||||||
|
in_changes_section=true
|
||||||
|
changes_entries_added=true
|
||||||
|
continue
|
||||||
|
elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
|
||||||
|
echo "$line" >> "$temp_file"
|
||||||
|
in_changes_section=false
|
||||||
|
continue
|
||||||
|
elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then
|
||||||
|
# Keep only first 2 existing changes
|
||||||
|
if [[ $existing_changes_count -lt 2 ]]; then
|
||||||
|
echo "$line" >> "$temp_file"
|
||||||
|
((existing_changes_count++))
|
||||||
|
fi
|
||||||
|
continue
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Update timestamp
|
||||||
|
if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
|
||||||
|
echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
|
||||||
|
else
|
||||||
|
echo "$line" >> "$temp_file"
|
||||||
|
fi
|
||||||
|
done < "$target_file"
|
||||||
|
|
||||||
|
# Post-loop check: if we're still in the Active Technologies section and haven't added new entries
|
||||||
|
if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
|
||||||
|
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
|
||||||
|
tech_entries_added=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
# If sections don't exist, add them at the end of the file
|
||||||
|
if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
|
||||||
|
echo "" >> "$temp_file"
|
||||||
|
echo "## Active Technologies" >> "$temp_file"
|
||||||
|
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
|
||||||
|
tech_entries_added=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then
|
||||||
|
echo "" >> "$temp_file"
|
||||||
|
echo "## Recent Changes" >> "$temp_file"
|
||||||
|
echo "$new_change_entry" >> "$temp_file"
|
||||||
|
changes_entries_added=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Move temp file to target atomically
|
||||||
|
if ! mv "$temp_file" "$target_file"; then
|
||||||
|
log_error "Failed to update target file"
|
||||||
|
rm -f "$temp_file"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
#==============================================================================
|
||||||
|
# Main Agent File Update Function
|
||||||
|
#==============================================================================
|
||||||
|
|
||||||
|
update_agent_file() {
|
||||||
|
local target_file="$1"
|
||||||
|
local agent_name="$2"
|
||||||
|
|
||||||
|
if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
|
||||||
|
log_error "update_agent_file requires target_file and agent_name parameters"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
log_info "Updating $agent_name context file: $target_file"
|
||||||
|
|
||||||
|
local project_name
|
||||||
|
project_name=$(basename "$REPO_ROOT")
|
||||||
|
local current_date
|
||||||
|
current_date=$(date +%Y-%m-%d)
|
||||||
|
|
||||||
|
# Create directory if it doesn't exist
|
||||||
|
local target_dir
|
||||||
|
target_dir=$(dirname "$target_file")
|
||||||
|
if [[ ! -d "$target_dir" ]]; then
|
||||||
|
if ! mkdir -p "$target_dir"; then
|
||||||
|
log_error "Failed to create directory: $target_dir"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ ! -f "$target_file" ]]; then
|
||||||
|
# Create new file from template
|
||||||
|
local temp_file
|
||||||
|
temp_file=$(mktemp) || {
|
||||||
|
log_error "Failed to create temporary file"
|
||||||
|
return 1
|
||||||
|
}
|
||||||
|
|
||||||
|
if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
|
||||||
|
if mv "$temp_file" "$target_file"; then
|
||||||
|
log_success "Created new $agent_name context file"
|
||||||
|
else
|
||||||
|
log_error "Failed to move temporary file to $target_file"
|
||||||
|
rm -f "$temp_file"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
log_error "Failed to create new agent file"
|
||||||
|
rm -f "$temp_file"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
# Update existing file
|
||||||
|
if [[ ! -r "$target_file" ]]; then
|
||||||
|
log_error "Cannot read existing file: $target_file"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ ! -w "$target_file" ]]; then
|
||||||
|
log_error "Cannot write to existing file: $target_file"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if update_existing_agent_file "$target_file" "$current_date"; then
|
||||||
|
log_success "Updated existing $agent_name context file"
|
||||||
|
else
|
||||||
|
log_error "Failed to update existing agent file"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
|
||||||
|
#==============================================================================
|
||||||
|
# Agent Selection and Processing
|
||||||
|
#==============================================================================
|
||||||
|
|
||||||
|
update_specific_agent() {
|
||||||
|
local agent_type="$1"
|
||||||
|
|
||||||
|
case "$agent_type" in
|
||||||
|
claude)
|
||||||
|
update_agent_file "$CLAUDE_FILE" "Claude Code"
|
||||||
|
;;
|
||||||
|
gemini)
|
||||||
|
update_agent_file "$GEMINI_FILE" "Gemini CLI"
|
||||||
|
;;
|
||||||
|
copilot)
|
||||||
|
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
|
||||||
|
;;
|
||||||
|
cursor-agent)
|
||||||
|
update_agent_file "$CURSOR_FILE" "Cursor IDE"
|
||||||
|
;;
|
||||||
|
qwen)
|
||||||
|
update_agent_file "$QWEN_FILE" "Qwen Code"
|
||||||
|
;;
|
||||||
|
opencode)
|
||||||
|
update_agent_file "$AGENTS_FILE" "opencode"
|
||||||
|
;;
|
||||||
|
codex)
|
||||||
|
update_agent_file "$AGENTS_FILE" "Codex CLI"
|
||||||
|
;;
|
||||||
|
windsurf)
|
||||||
|
update_agent_file "$WINDSURF_FILE" "Windsurf"
|
||||||
|
;;
|
||||||
|
kilocode)
|
||||||
|
update_agent_file "$KILOCODE_FILE" "Kilo Code"
|
||||||
|
;;
|
||||||
|
auggie)
|
||||||
|
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
|
||||||
|
;;
|
||||||
|
roo)
|
||||||
|
update_agent_file "$ROO_FILE" "Roo Code"
|
||||||
|
;;
|
||||||
|
codebuddy)
|
||||||
|
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
|
||||||
|
;;
|
||||||
|
qoder)
|
||||||
|
update_agent_file "$QODER_FILE" "Qoder CLI"
|
||||||
|
;;
|
||||||
|
amp)
|
||||||
|
update_agent_file "$AMP_FILE" "Amp"
|
||||||
|
;;
|
||||||
|
shai)
|
||||||
|
update_agent_file "$SHAI_FILE" "SHAI"
|
||||||
|
;;
|
||||||
|
q)
|
||||||
|
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
|
||||||
|
;;
|
||||||
|
bob)
|
||||||
|
update_agent_file "$BOB_FILE" "IBM Bob"
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
log_error "Unknown agent type '$agent_type'"
|
||||||
|
log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|amp|shai|q|bob|qoder"
|
||||||
|
exit 1
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
}
|
||||||
|
|
||||||
|
update_all_existing_agents() {
|
||||||
|
local found_agent=false
|
||||||
|
|
||||||
|
# Check each possible agent file and update if it exists
|
||||||
|
if [[ -f "$CLAUDE_FILE" ]]; then
|
||||||
|
update_agent_file "$CLAUDE_FILE" "Claude Code"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$GEMINI_FILE" ]]; then
|
||||||
|
update_agent_file "$GEMINI_FILE" "Gemini CLI"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$COPILOT_FILE" ]]; then
|
||||||
|
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$CURSOR_FILE" ]]; then
|
||||||
|
update_agent_file "$CURSOR_FILE" "Cursor IDE"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$QWEN_FILE" ]]; then
|
||||||
|
update_agent_file "$QWEN_FILE" "Qwen Code"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$AGENTS_FILE" ]]; then
|
||||||
|
update_agent_file "$AGENTS_FILE" "Codex/opencode"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$WINDSURF_FILE" ]]; then
|
||||||
|
update_agent_file "$WINDSURF_FILE" "Windsurf"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$KILOCODE_FILE" ]]; then
|
||||||
|
update_agent_file "$KILOCODE_FILE" "Kilo Code"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$AUGGIE_FILE" ]]; then
|
||||||
|
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$ROO_FILE" ]]; then
|
||||||
|
update_agent_file "$ROO_FILE" "Roo Code"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$CODEBUDDY_FILE" ]]; then
|
||||||
|
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$SHAI_FILE" ]]; then
|
||||||
|
update_agent_file "$SHAI_FILE" "SHAI"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$QODER_FILE" ]]; then
|
||||||
|
update_agent_file "$QODER_FILE" "Qoder CLI"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$Q_FILE" ]]; then
|
||||||
|
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$BOB_FILE" ]]; then
|
||||||
|
update_agent_file "$BOB_FILE" "IBM Bob"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
# If no agent files exist, create a default Claude file
|
||||||
|
if [[ "$found_agent" == false ]]; then
|
||||||
|
log_info "No existing agent files found, creating default Claude file..."
|
||||||
|
update_agent_file "$CLAUDE_FILE" "Claude Code"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
print_summary() {
|
||||||
|
echo
|
||||||
|
log_info "Summary of changes:"
|
||||||
|
|
||||||
|
if [[ -n "$NEW_LANG" ]]; then
|
||||||
|
echo " - Added language: $NEW_LANG"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -n "$NEW_FRAMEWORK" ]]; then
|
||||||
|
echo " - Added framework: $NEW_FRAMEWORK"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
|
||||||
|
echo " - Added database: $NEW_DB"
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo
|
||||||
|
|
||||||
|
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|codebuddy|shai|q|bob|qoder]"
|
||||||
|
}
|
||||||
|
|
||||||
|
#==============================================================================
|
||||||
|
# Main Execution
|
||||||
|
#==============================================================================
|
||||||
|
|
||||||
|
main() {
|
||||||
|
# Validate environment before proceeding
|
||||||
|
validate_environment
|
||||||
|
|
||||||
|
log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
|
||||||
|
|
||||||
|
# Parse the plan file to extract project information
|
||||||
|
if ! parse_plan_data "$NEW_PLAN"; then
|
||||||
|
log_error "Failed to parse plan data"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Process based on agent type argument
|
||||||
|
local success=true
|
||||||
|
|
||||||
|
if [[ -z "$AGENT_TYPE" ]]; then
|
||||||
|
# No specific agent provided - update all existing agent files
|
||||||
|
log_info "No agent specified, updating all existing agent files..."
|
||||||
|
if ! update_all_existing_agents; then
|
||||||
|
success=false
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
# Specific agent provided - update only that agent
|
||||||
|
log_info "Updating specific agent: $AGENT_TYPE"
|
||||||
|
if ! update_specific_agent "$AGENT_TYPE"; then
|
||||||
|
success=false
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Print summary
|
||||||
|
print_summary
|
||||||
|
|
||||||
|
if [[ "$success" == true ]]; then
|
||||||
|
log_success "Agent context update completed successfully"
|
||||||
|
exit 0
|
||||||
|
else
|
||||||
|
log_error "Agent context update completed with errors"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Execute main function if script is run directly
|
||||||
|
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
||||||
|
main "$@"
|
||||||
|
fi
|
||||||
|
|
||||||
28
.specify/templates/agent-file-template.md
Normal file
28
.specify/templates/agent-file-template.md
Normal file
@@ -0,0 +1,28 @@
|
|||||||
|
# [PROJECT NAME] Development Guidelines
|
||||||
|
|
||||||
|
Auto-generated from all feature plans. Last updated: [DATE]
|
||||||
|
|
||||||
|
## Active Technologies
|
||||||
|
|
||||||
|
[EXTRACTED FROM ALL PLAN.MD FILES]
|
||||||
|
|
||||||
|
## Project Structure
|
||||||
|
|
||||||
|
```text
|
||||||
|
[ACTUAL STRUCTURE FROM PLANS]
|
||||||
|
```
|
||||||
|
|
||||||
|
## Commands
|
||||||
|
|
||||||
|
[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]
|
||||||
|
|
||||||
|
## Code Style
|
||||||
|
|
||||||
|
[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]
|
||||||
|
|
||||||
|
## Recent Changes
|
||||||
|
|
||||||
|
[LAST 3 FEATURES AND WHAT THEY ADDED]
|
||||||
|
|
||||||
|
<!-- MANUAL ADDITIONS START -->
|
||||||
|
<!-- MANUAL ADDITIONS END -->
|
||||||
40
.specify/templates/checklist-template.md
Normal file
40
.specify/templates/checklist-template.md
Normal file
@@ -0,0 +1,40 @@
|
|||||||
|
# [CHECKLIST TYPE] Checklist: [FEATURE NAME]
|
||||||
|
|
||||||
|
**Purpose**: [Brief description of what this checklist covers]
|
||||||
|
**Created**: [DATE]
|
||||||
|
**Feature**: [Link to spec.md or relevant documentation]
|
||||||
|
|
||||||
|
**Note**: This checklist is generated by the `/speckit.checklist` command based on feature context and requirements.
|
||||||
|
|
||||||
|
<!--
|
||||||
|
============================================================================
|
||||||
|
IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.
|
||||||
|
|
||||||
|
The /speckit.checklist command MUST replace these with actual items based on:
|
||||||
|
- User's specific checklist request
|
||||||
|
- Feature requirements from spec.md
|
||||||
|
- Technical context from plan.md
|
||||||
|
- Implementation details from tasks.md
|
||||||
|
|
||||||
|
DO NOT keep these sample items in the generated checklist file.
|
||||||
|
============================================================================
|
||||||
|
-->
|
||||||
|
|
||||||
|
## [Category 1]
|
||||||
|
|
||||||
|
- [ ] CHK001 First checklist item with clear action
|
||||||
|
- [ ] CHK002 Second checklist item
|
||||||
|
- [ ] CHK003 Third checklist item
|
||||||
|
|
||||||
|
## [Category 2]
|
||||||
|
|
||||||
|
- [ ] CHK004 Another category item
|
||||||
|
- [ ] CHK005 Item with specific criteria
|
||||||
|
- [ ] CHK006 Final item in this category
|
||||||
|
|
||||||
|
## Notes
|
||||||
|
|
||||||
|
- Check items off as completed: `[x]`
|
||||||
|
- Add comments or findings inline
|
||||||
|
- Link to relevant resources or documentation
|
||||||
|
- Items are numbered sequentially for easy reference
|
||||||
104
.specify/templates/plan-template.md
Normal file
104
.specify/templates/plan-template.md
Normal file
@@ -0,0 +1,104 @@
|
|||||||
|
# Implementation Plan: [FEATURE]
|
||||||
|
|
||||||
|
**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
|
||||||
|
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
|
||||||
|
|
||||||
|
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
|
||||||
|
|
||||||
|
## Summary
|
||||||
|
|
||||||
|
[Extract from feature spec: primary requirement + technical approach from research]
|
||||||
|
|
||||||
|
## Technical Context
|
||||||
|
|
||||||
|
<!--
|
||||||
|
ACTION REQUIRED: Replace the content in this section with the technical details
|
||||||
|
for the project. The structure here is presented in advisory capacity to guide
|
||||||
|
the iteration process.
|
||||||
|
-->
|
||||||
|
|
||||||
|
**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
|
||||||
|
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
|
||||||
|
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
|
||||||
|
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
|
||||||
|
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
|
||||||
|
**Project Type**: [single/web/mobile - determines source structure]
|
||||||
|
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
|
||||||
|
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
|
||||||
|
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]
|
||||||
|
|
||||||
|
## Constitution Check
|
||||||
|
|
||||||
|
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
|
||||||
|
|
||||||
|
[Gates determined based on constitution file]
|
||||||
|
|
||||||
|
## Project Structure
|
||||||
|
|
||||||
|
### Documentation (this feature)
|
||||||
|
|
||||||
|
```text
|
||||||
|
specs/[###-feature]/
|
||||||
|
├── plan.md # This file (/speckit.plan command output)
|
||||||
|
├── research.md # Phase 0 output (/speckit.plan command)
|
||||||
|
├── data-model.md # Phase 1 output (/speckit.plan command)
|
||||||
|
├── quickstart.md # Phase 1 output (/speckit.plan command)
|
||||||
|
├── contracts/ # Phase 1 output (/speckit.plan command)
|
||||||
|
└── tasks.md # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Source Code (repository root)
|
||||||
|
<!--
|
||||||
|
ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
|
||||||
|
for this feature. Delete unused options and expand the chosen structure with
|
||||||
|
real paths (e.g., apps/admin, packages/something). The delivered plan must
|
||||||
|
not include Option labels.
|
||||||
|
-->
|
||||||
|
|
||||||
|
```text
|
||||||
|
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
|
||||||
|
src/
|
||||||
|
├── models/
|
||||||
|
├── services/
|
||||||
|
├── cli/
|
||||||
|
└── lib/
|
||||||
|
|
||||||
|
tests/
|
||||||
|
├── contract/
|
||||||
|
├── integration/
|
||||||
|
└── unit/
|
||||||
|
|
||||||
|
# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
|
||||||
|
backend/
|
||||||
|
├── src/
|
||||||
|
│ ├── models/
|
||||||
|
│ ├── services/
|
||||||
|
│ └── api/
|
||||||
|
└── tests/
|
||||||
|
|
||||||
|
frontend/
|
||||||
|
├── src/
|
||||||
|
│ ├── components/
|
||||||
|
│ ├── pages/
|
||||||
|
│ └── services/
|
||||||
|
└── tests/
|
||||||
|
|
||||||
|
# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
|
||||||
|
api/
|
||||||
|
└── [same as backend above]
|
||||||
|
|
||||||
|
ios/ or android/
|
||||||
|
└── [platform-specific structure: feature modules, UI flows, platform tests]
|
||||||
|
```
|
||||||
|
|
||||||
|
**Structure Decision**: [Document the selected structure and reference the real
|
||||||
|
directories captured above]
|
||||||
|
|
||||||
|
## Complexity Tracking
|
||||||
|
|
||||||
|
> **Fill ONLY if Constitution Check has violations that must be justified**
|
||||||
|
|
||||||
|
| Violation | Why Needed | Simpler Alternative Rejected Because |
|
||||||
|
|-----------|------------|-------------------------------------|
|
||||||
|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
|
||||||
|
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
|
||||||
115
.specify/templates/spec-template.md
Normal file
115
.specify/templates/spec-template.md
Normal file
@@ -0,0 +1,115 @@
|
|||||||
|
# Feature Specification: [FEATURE NAME]
|
||||||
|
|
||||||
|
**Feature Branch**: `[###-feature-name]`
|
||||||
|
**Created**: [DATE]
|
||||||
|
**Status**: Draft
|
||||||
|
**Input**: User description: "$ARGUMENTS"
|
||||||
|
|
||||||
|
## User Scenarios & Testing *(mandatory)*
|
||||||
|
|
||||||
|
<!--
|
||||||
|
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
|
||||||
|
Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
|
||||||
|
you should still have a viable MVP (Minimum Viable Product) that delivers value.
|
||||||
|
|
||||||
|
Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
|
||||||
|
Think of each story as a standalone slice of functionality that can be:
|
||||||
|
- Developed independently
|
||||||
|
- Tested independently
|
||||||
|
- Deployed independently
|
||||||
|
- Demonstrated to users independently
|
||||||
|
-->
|
||||||
|
|
||||||
|
### User Story 1 - [Brief Title] (Priority: P1)
|
||||||
|
|
||||||
|
[Describe this user journey in plain language]
|
||||||
|
|
||||||
|
**Why this priority**: [Explain the value and why it has this priority level]
|
||||||
|
|
||||||
|
**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]
|
||||||
|
|
||||||
|
**Acceptance Scenarios**:
|
||||||
|
|
||||||
|
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
|
||||||
|
2. **Given** [initial state], **When** [action], **Then** [expected outcome]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### User Story 2 - [Brief Title] (Priority: P2)
|
||||||
|
|
||||||
|
[Describe this user journey in plain language]
|
||||||
|
|
||||||
|
**Why this priority**: [Explain the value and why it has this priority level]
|
||||||
|
|
||||||
|
**Independent Test**: [Describe how this can be tested independently]
|
||||||
|
|
||||||
|
**Acceptance Scenarios**:
|
||||||
|
|
||||||
|
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### User Story 3 - [Brief Title] (Priority: P3)
|
||||||
|
|
||||||
|
[Describe this user journey in plain language]
|
||||||
|
|
||||||
|
**Why this priority**: [Explain the value and why it has this priority level]
|
||||||
|
|
||||||
|
**Independent Test**: [Describe how this can be tested independently]
|
||||||
|
|
||||||
|
**Acceptance Scenarios**:
|
||||||
|
|
||||||
|
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
[Add more user stories as needed, each with an assigned priority]
|
||||||
|
|
||||||
|
### Edge Cases
|
||||||
|
|
||||||
|
<!--
|
||||||
|
ACTION REQUIRED: The content in this section represents placeholders.
|
||||||
|
Fill them out with the right edge cases.
|
||||||
|
-->
|
||||||
|
|
||||||
|
- What happens when [boundary condition]?
|
||||||
|
- How does system handle [error scenario]?
|
||||||
|
|
||||||
|
## Requirements *(mandatory)*
|
||||||
|
|
||||||
|
<!--
|
||||||
|
ACTION REQUIRED: The content in this section represents placeholders.
|
||||||
|
Fill them out with the right functional requirements.
|
||||||
|
-->
|
||||||
|
|
||||||
|
### Functional Requirements
|
||||||
|
|
||||||
|
- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
|
||||||
|
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
|
||||||
|
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
|
||||||
|
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
|
||||||
|
- **FR-005**: System MUST [behavior, e.g., "log all security events"]
|
||||||
|
|
||||||
|
*Example of marking unclear requirements:*
|
||||||
|
|
||||||
|
- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
|
||||||
|
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]
|
||||||
|
|
||||||
|
### Key Entities *(include if feature involves data)*
|
||||||
|
|
||||||
|
- **[Entity 1]**: [What it represents, key attributes without implementation]
|
||||||
|
- **[Entity 2]**: [What it represents, relationships to other entities]
|
||||||
|
|
||||||
|
## Success Criteria *(mandatory)*
|
||||||
|
|
||||||
|
<!--
|
||||||
|
ACTION REQUIRED: Define measurable success criteria.
|
||||||
|
These must be technology-agnostic and measurable.
|
||||||
|
-->
|
||||||
|
|
||||||
|
### Measurable Outcomes
|
||||||
|
|
||||||
|
- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
|
||||||
|
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
|
||||||
|
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
|
||||||
|
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]
|
||||||
35
.specify/templates/tasks-arch-template.md
Normal file
35
.specify/templates/tasks-arch-template.md
Normal file
@@ -0,0 +1,35 @@
|
|||||||
|
---
|
||||||
|
|
||||||
|
description: "Architecture task list template (Contracts & Scaffolding)"
|
||||||
|
---
|
||||||
|
|
||||||
|
# Architecture Tasks: [FEATURE NAME]
|
||||||
|
|
||||||
|
**Role**: Architect Agent
|
||||||
|
**Goal**: Define the "What" and "Why" (Contracts, Scaffolding, Models) before implementation.
|
||||||
|
**Input**: Design documents from `/specs/[###-feature-name]/`
|
||||||
|
**Output**: Files with `[DEF]` anchors, `@PRE`/`@POST` contracts, and `@RELATION` mappings. No business logic.
|
||||||
|
|
||||||
|
## Phase 1: Setup & Models
|
||||||
|
|
||||||
|
- [ ] A001 Create/Update data models in [path] with `[DEF]` and contracts
|
||||||
|
- [ ] A002 Define API route structure/contracts in [path]
|
||||||
|
- [ ] A003 Define shared utilities/interfaces
|
||||||
|
|
||||||
|
## Phase 2: User Story 1 - [Title]
|
||||||
|
|
||||||
|
- [ ] A004 [US1] Define contracts for [Component/Service] in [path]
|
||||||
|
- [ ] A005 [US1] Define contracts for [Endpoint] in [path]
|
||||||
|
- [ ] A006 [US1] Define contracts for [Frontend Component] in [path]
|
||||||
|
|
||||||
|
## Phase 3: User Story 2 - [Title]
|
||||||
|
|
||||||
|
- [ ] A007 [US2] Define contracts for [Component/Service] in [path]
|
||||||
|
- [ ] A008 [US2] Define contracts for [Endpoint] in [path]
|
||||||
|
|
||||||
|
## Handover Checklist
|
||||||
|
|
||||||
|
- [ ] All new files created with `[DEF]` anchors
|
||||||
|
- [ ] All functions/classes have `@PURPOSE`, `@PRE`, `@POST` tags
|
||||||
|
- [ ] No "naked code" (logic outside of anchors)
|
||||||
|
- [ ] `tasks-dev.md` is ready for the Developer Agent
|
||||||
35
.specify/templates/tasks-dev-template.md
Normal file
35
.specify/templates/tasks-dev-template.md
Normal file
@@ -0,0 +1,35 @@
|
|||||||
|
---
|
||||||
|
|
||||||
|
description: "Developer task list template (Implementation Logic)"
|
||||||
|
---
|
||||||
|
|
||||||
|
# Developer Tasks: [FEATURE NAME]
|
||||||
|
|
||||||
|
**Role**: Developer Agent
|
||||||
|
**Goal**: Implement the "How" (Logic, State, Error Handling) inside the defined contracts.
|
||||||
|
**Input**: `tasks-arch.md` (completed), Scaffolding files with `[DEF]` anchors.
|
||||||
|
**Output**: Working code that satisfies `@PRE`/`@POST` conditions.
|
||||||
|
|
||||||
|
## Phase 1: Setup & Models
|
||||||
|
|
||||||
|
- [ ] D001 Implement logic for [Model] in [path]
|
||||||
|
- [ ] D002 Implement logic for [API Route] in [path]
|
||||||
|
- [ ] D003 Implement shared utilities
|
||||||
|
|
||||||
|
## Phase 2: User Story 1 - [Title]
|
||||||
|
|
||||||
|
- [ ] D004 [US1] Implement logic for [Component/Service] in [path]
|
||||||
|
- [ ] D005 [US1] Implement logic for [Endpoint] in [path]
|
||||||
|
- [ ] D006 [US1] Implement logic for [Frontend Component] in [path]
|
||||||
|
- [ ] D007 [US1] Verify semantic compliance and belief state logging
|
||||||
|
|
||||||
|
## Phase 3: User Story 2 - [Title]
|
||||||
|
|
||||||
|
- [ ] D008 [US2] Implement logic for [Component/Service] in [path]
|
||||||
|
- [ ] D009 [US2] Implement logic for [Endpoint] in [path]
|
||||||
|
|
||||||
|
## Polish & Quality Assurance
|
||||||
|
|
||||||
|
- [ ] DXXX Verify all tests pass
|
||||||
|
- [ ] DXXX Check error handling and edge cases
|
||||||
|
- [ ] DXXX Ensure code style compliance
|
||||||
251
.specify/templates/tasks-template.md
Normal file
251
.specify/templates/tasks-template.md
Normal file
@@ -0,0 +1,251 @@
|
|||||||
|
---
|
||||||
|
|
||||||
|
description: "Task list template for feature implementation"
|
||||||
|
---
|
||||||
|
|
||||||
|
# Tasks: [FEATURE NAME]
|
||||||
|
|
||||||
|
**Input**: Design documents from `/specs/[###-feature-name]/`
|
||||||
|
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/
|
||||||
|
|
||||||
|
**Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.
|
||||||
|
|
||||||
|
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
|
||||||
|
|
||||||
|
## Format: `[ID] [P?] [Story] Description`
|
||||||
|
|
||||||
|
- **[P]**: Can run in parallel (different files, no dependencies)
|
||||||
|
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
|
||||||
|
- Include exact file paths in descriptions
|
||||||
|
|
||||||
|
## Path Conventions
|
||||||
|
|
||||||
|
- **Single project**: `src/`, `tests/` at repository root
|
||||||
|
- **Web app**: `backend/src/`, `frontend/src/`
|
||||||
|
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
|
||||||
|
- Paths shown below assume single project - adjust based on plan.md structure
|
||||||
|
|
||||||
|
<!--
|
||||||
|
============================================================================
|
||||||
|
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.
|
||||||
|
|
||||||
|
The /speckit.tasks command MUST replace these with actual tasks based on:
|
||||||
|
- User stories from spec.md (with their priorities P1, P2, P3...)
|
||||||
|
- Feature requirements from plan.md
|
||||||
|
- Entities from data-model.md
|
||||||
|
- Endpoints from contracts/
|
||||||
|
|
||||||
|
Tasks MUST be organized by user story so each story can be:
|
||||||
|
- Implemented independently
|
||||||
|
- Tested independently
|
||||||
|
- Delivered as an MVP increment
|
||||||
|
|
||||||
|
DO NOT keep these sample tasks in the generated tasks.md file.
|
||||||
|
============================================================================
|
||||||
|
-->
|
||||||
|
|
||||||
|
## Phase 1: Setup (Shared Infrastructure)
|
||||||
|
|
||||||
|
**Purpose**: Project initialization and basic structure
|
||||||
|
|
||||||
|
- [ ] T001 Create project structure per implementation plan
|
||||||
|
- [ ] T002 Initialize [language] project with [framework] dependencies
|
||||||
|
- [ ] T003 [P] Configure linting and formatting tools
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Phase 2: Foundational (Blocking Prerequisites)
|
||||||
|
|
||||||
|
**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
|
||||||
|
|
||||||
|
**⚠️ CRITICAL**: No user story work can begin until this phase is complete
|
||||||
|
|
||||||
|
Examples of foundational tasks (adjust based on your project):
|
||||||
|
|
||||||
|
- [ ] T004 Setup database schema and migrations framework
|
||||||
|
- [ ] T005 [P] Implement authentication/authorization framework
|
||||||
|
- [ ] T006 [P] Setup API routing and middleware structure
|
||||||
|
- [ ] T007 Create base models/entities that all stories depend on
|
||||||
|
- [ ] T008 Configure error handling and logging infrastructure
|
||||||
|
- [ ] T009 Setup environment configuration management
|
||||||
|
|
||||||
|
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP
|
||||||
|
|
||||||
|
**Goal**: [Brief description of what this story delivers]
|
||||||
|
|
||||||
|
**Independent Test**: [How to verify this story works on its own]
|
||||||
|
|
||||||
|
### Tests for User Story 1 (OPTIONAL - only if tests requested) ⚠️
|
||||||
|
|
||||||
|
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
|
||||||
|
|
||||||
|
- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
|
||||||
|
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py
|
||||||
|
|
||||||
|
### Implementation for User Story 1
|
||||||
|
|
||||||
|
- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
|
||||||
|
- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
|
||||||
|
- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
|
||||||
|
- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
|
||||||
|
- [ ] T016 [US1] Add validation and error handling
|
||||||
|
- [ ] T017 [US1] Add logging for user story 1 operations
|
||||||
|
|
||||||
|
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Phase 4: User Story 2 - [Title] (Priority: P2)
|
||||||
|
|
||||||
|
**Goal**: [Brief description of what this story delivers]
|
||||||
|
|
||||||
|
**Independent Test**: [How to verify this story works on its own]
|
||||||
|
|
||||||
|
### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️
|
||||||
|
|
||||||
|
- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
|
||||||
|
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
|
||||||
|
|
||||||
|
### Implementation for User Story 2
|
||||||
|
|
||||||
|
- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
|
||||||
|
- [ ] T021 [US2] Implement [Service] in src/services/[service].py
|
||||||
|
- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
|
||||||
|
- [ ] T023 [US2] Integrate with User Story 1 components (if needed)
|
||||||
|
|
||||||
|
**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Phase 5: User Story 3 - [Title] (Priority: P3)
|
||||||
|
|
||||||
|
**Goal**: [Brief description of what this story delivers]
|
||||||
|
|
||||||
|
**Independent Test**: [How to verify this story works on its own]
|
||||||
|
|
||||||
|
### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️
|
||||||
|
|
||||||
|
- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
|
||||||
|
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
|
||||||
|
|
||||||
|
### Implementation for User Story 3
|
||||||
|
|
||||||
|
- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
|
||||||
|
- [ ] T027 [US3] Implement [Service] in src/services/[service].py
|
||||||
|
- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py
|
||||||
|
|
||||||
|
**Checkpoint**: All user stories should now be independently functional
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
[Add more user story phases as needed, following the same pattern]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Phase N: Polish & Cross-Cutting Concerns
|
||||||
|
|
||||||
|
**Purpose**: Improvements that affect multiple user stories
|
||||||
|
|
||||||
|
- [ ] TXXX [P] Documentation updates in docs/
|
||||||
|
- [ ] TXXX Code cleanup and refactoring
|
||||||
|
- [ ] TXXX Performance optimization across all stories
|
||||||
|
- [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/
|
||||||
|
- [ ] TXXX Security hardening
|
||||||
|
- [ ] TXXX Run quickstart.md validation
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Dependencies & Execution Order
|
||||||
|
|
||||||
|
### Phase Dependencies
|
||||||
|
|
||||||
|
- **Setup (Phase 1)**: No dependencies - can start immediately
|
||||||
|
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
|
||||||
|
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
|
||||||
|
- User stories can then proceed in parallel (if staffed)
|
||||||
|
- Or sequentially in priority order (P1 → P2 → P3)
|
||||||
|
- **Polish (Final Phase)**: Depends on all desired user stories being complete
|
||||||
|
|
||||||
|
### User Story Dependencies
|
||||||
|
|
||||||
|
- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
|
||||||
|
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
|
||||||
|
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable
|
||||||
|
|
||||||
|
### Within Each User Story
|
||||||
|
|
||||||
|
- Tests (if included) MUST be written and FAIL before implementation
|
||||||
|
- Models before services
|
||||||
|
- Services before endpoints
|
||||||
|
- Core implementation before integration
|
||||||
|
- Story complete before moving to next priority
|
||||||
|
|
||||||
|
### Parallel Opportunities
|
||||||
|
|
||||||
|
- All Setup tasks marked [P] can run in parallel
|
||||||
|
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
|
||||||
|
- Once Foundational phase completes, all user stories can start in parallel (if team capacity allows)
|
||||||
|
- All tests for a user story marked [P] can run in parallel
|
||||||
|
- Models within a story marked [P] can run in parallel
|
||||||
|
- Different user stories can be worked on in parallel by different team members
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Parallel Example: User Story 1
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Launch all tests for User Story 1 together (if tests requested):
|
||||||
|
Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
|
||||||
|
Task: "Integration test for [user journey] in tests/integration/test_[name].py"
|
||||||
|
|
||||||
|
# Launch all models for User Story 1 together:
|
||||||
|
Task: "Create [Entity1] model in src/models/[entity1].py"
|
||||||
|
Task: "Create [Entity2] model in src/models/[entity2].py"
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Implementation Strategy
|
||||||
|
|
||||||
|
### MVP First (User Story 1 Only)
|
||||||
|
|
||||||
|
1. Complete Phase 1: Setup
|
||||||
|
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
|
||||||
|
3. Complete Phase 3: User Story 1
|
||||||
|
4. **STOP and VALIDATE**: Test User Story 1 independently
|
||||||
|
5. Deploy/demo if ready
|
||||||
|
|
||||||
|
### Incremental Delivery
|
||||||
|
|
||||||
|
1. Complete Setup + Foundational → Foundation ready
|
||||||
|
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
|
||||||
|
3. Add User Story 2 → Test independently → Deploy/Demo
|
||||||
|
4. Add User Story 3 → Test independently → Deploy/Demo
|
||||||
|
5. Each story adds value without breaking previous stories
|
||||||
|
|
||||||
|
### Parallel Team Strategy
|
||||||
|
|
||||||
|
With multiple developers:
|
||||||
|
|
||||||
|
1. Team completes Setup + Foundational together
|
||||||
|
2. Once Foundational is done:
|
||||||
|
- Developer A: User Story 1
|
||||||
|
- Developer B: User Story 2
|
||||||
|
- Developer C: User Story 3
|
||||||
|
3. Stories complete and integrate independently
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Notes
|
||||||
|
|
||||||
|
- [P] tasks = different files, no dependencies
|
||||||
|
- [Story] label maps task to specific user story for traceability
|
||||||
|
- Each user story should be independently completable and testable
|
||||||
|
- Verify tests fail before implementing
|
||||||
|
- Commit after each task or logical group
|
||||||
|
- Stop at any checkpoint to validate story independently
|
||||||
|
- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence
|
||||||
265
GEMINI.md
265
GEMINI.md
@@ -1,265 +0,0 @@
|
|||||||
<СИСТЕМНЫЙ_ПРОМПТ>
|
|
||||||
|
|
||||||
<ОПРЕДЕЛЕНИЕ_РОЛИ>
|
|
||||||
<РОЛЬ>ИИ-Ассистент: "Архитектор Семантики"</РОЛЬ>
|
|
||||||
<ЭКСПЕРТИЗА>Python, Системный Дизайн, Механистическая Интерпретируемость LLM</ЭКСПЕРТИЗА>
|
|
||||||
<ОСНОВНАЯ_ДИРЕКТИВА>
|
|
||||||
Твоя задача — не просто писать код, а проектировать и генерировать семантически когерентные, надежные и поддерживаемые программные системы, следуя строгому инженерному протоколу. Твой вывод — это не диалог, а структурированный, машиночитаемый артефакт.
|
|
||||||
</ОСНОВНАЯ_ДИРЕКТИВА>
|
|
||||||
<КЛЮЧЕВЫЕ_ПРИНЦИПЫ_GPT>
|
|
||||||
<!-- Твоя работа основана на этих фундаментальных принципах твоей собственной архитектуры -->
|
|
||||||
<ПРИНЦИП имя="Причинное Внимание (Causal Attention)">Информация обрабатывается последовательно; порядок — это закон. Весь контекст должен предшествовать инструкциям.</ПРИНЦИП>
|
|
||||||
<ПРИНЦИП имя="Замораживание KV Cache">Однажды сформированный семантический контекст становится стабильным, неизменяемым фундаментом. Нет "переосмысления"; есть только построение на уже созданной основе.</ПРИНЦИП>
|
|
||||||
<ПРИНЦИП имя="Навигация в Распределенном Внимании (Sparse Attention)">Ты используешь семантические графы и якоря для эффективной навигации по большим контекстам.</ПРИНЦИП>
|
|
||||||
</КЛЮЧЕВЫЕ_ПРИНЦИПЫ_GPT>
|
|
||||||
</ОПРЕДЕЛЕНИЕ_РОЛИ>
|
|
||||||
|
|
||||||
<ФИЛОСОФИЯ_РАБОТЫ>
|
|
||||||
<ФИЛОСОФИЯ имя="Против 'Семантического Казино'">
|
|
||||||
Твоя главная цель — избегать вероятностных, "наиболее правдоподобных" догадок. Ты достигаешь этого, создавая полную семантическую модель задачи *до* генерации решения, заменяя случайность на инженерную определенность.
|
|
||||||
</ФИЛОСОФИЯ>
|
|
||||||
<ФИЛОСОФИЯ имя="Фрактальная Когерентность">
|
|
||||||
Твой результат — это "семантический фрактал". Структура ТЗ должна каскадно отражаться в структуре модулей, классов и функций. 100% семантическая когерентность — твой главный критерий качества.
|
|
||||||
</ФИЛОСОФИЯ>
|
|
||||||
<ФИЛОСОФИЯ имя="Суперпозиция для Планирования">
|
|
||||||
Для сложных архитектурных решений ты должен анализировать и удерживать несколько потенциальных вариантов в состоянии "суперпозиции". Ты "коллапсируешь" решение до одного варианта только после всестороннего анализа или по явной команде пользователя.
|
|
||||||
</ФИЛОСОФИЯ>
|
|
||||||
</ФИЛОСОФИЯ>
|
|
||||||
|
|
||||||
<КАРТА_ПРОЕКТА>
|
|
||||||
<ИМЯ_ФАЙЛА>PROJECT_SEMANTICS.xml</ИМЯ_ФАЙЛА>
|
|
||||||
<НАЗНАЧЕНИЕ>
|
|
||||||
Этот файл является единым источником истины (Single Source of Truth) о семантической структуре всего проекта. Он служит как карта для твоей навигации и как персистентное хранилище семантического графа. Ты обязан загружать его в начале каждой сессии и обновлять в конце.
|
|
||||||
</НАЗНАЧЕНИЕ>
|
|
||||||
<СТРУКТУРА>
|
|
||||||
```xml
|
|
||||||
<PROJECT_SEMANTICS>
|
|
||||||
<METADATA>
|
|
||||||
<VERSION>1.0</VERSION>
|
|
||||||
<LAST_UPDATED>2023-10-27T10:00:00Z</LAST_UPDATED>
|
|
||||||
</METADATA>
|
|
||||||
<STRUCTURE_MAP>
|
|
||||||
<!-- Описание файловой структуры и сущностей внутри -->
|
|
||||||
<MODULE path="utils/file_handler.py" id="mod_file_handler">
|
|
||||||
<PURPOSE>Модуль для операций с файлами JSON.</PURPOSE>
|
|
||||||
<ENTITY type="Function" name="read_json_data" id="func_read_json"/>
|
|
||||||
<ENTITY type="Function" name="write_json_data" id="func_write_json"/>
|
|
||||||
</MODULE>
|
|
||||||
<!-- ... другие модули ... -->
|
|
||||||
</STRUCTURE_MAP>
|
|
||||||
<SEMANTIC_GRAPH>
|
|
||||||
<!-- Глобальный граф, связывающий все сущности проекта -->
|
|
||||||
<NODE id="mod_file_handler" type="Module" label="Модуль для операций с файлами JSON."/>
|
|
||||||
<NODE id="func_read_json" type="Function" label="Читает данные из JSON-файла."/>
|
|
||||||
<NODE id="func_write_json" type="Function" label="Записывает данные в JSON-файл."/>
|
|
||||||
<EDGE source_id="mod_file_handler" target_id="func_read_json" relation="CONTAINS"/>
|
|
||||||
<EDGE source_id="mod_file_handler" target_id="func_write_json" relation="CONTAINS"/>
|
|
||||||
<!-- ... другие узлы и связи ... -->
|
|
||||||
</SEMANTIC_GRAPH>
|
|
||||||
</PROJECT_SEMANTICS>
|
|
||||||
```
|
|
||||||
</СТРУКТУРА>
|
|
||||||
</КАРТА_ПРОЕКТА>
|
|
||||||
|
|
||||||
<МЕТОДОЛОГИЯ имя="Многофазный Протокол Генерации">
|
|
||||||
<!-- [НОВАЯ ФАЗА] Добавлена фаза для загрузки контекста проекта -->
|
|
||||||
<ФАЗА номер="0" имя="Синхронизация с Контекстом Проекта">
|
|
||||||
<ДЕЙСТВИЕ>Найди и загрузи файл `<КАРТА_ПРОЕКТА>`. Если файл не найден, создай его инициальную структуру в памяти. Этот контекст является основой для всех последующих фаз.</ДЕЙСТВИЕ>
|
|
||||||
</ФАЗА>
|
|
||||||
<!-- [ИЗМЕНЕНО] Фаза 1 теперь обновляет существующий граф -->
|
|
||||||
<ФАЗА номер="1" имя="Анализ и Обновление Графа">
|
|
||||||
<ДЕЙСТВИЕ>Проанализируй `<ЗАПРОС_ПОЛЬЗОВАТЕЛЯ>` в контексте загруженной карты проекта. Извлеки новые/измененные сущности и отношения. Обнови и выведи в `<ПЛАНИРОВАНИЕ>` глобальный `<СЕМАНТИЧЕСКИЙ_ГРАФ>`. Задай уточняющие вопросы для валидации архитектуры.</ДЕЙСТВИЕ>
|
|
||||||
</ФАЗА>
|
|
||||||
<ФАЗА номер="2" имя="Контрактно-Ориентированное Проектирование">
|
|
||||||
<ДЕЙСТВИЕ>На основе обновленного графа, детализируй архитектуру. Для каждого нового или изменяемого модуля/функции создай и выведи в `<ПЛАНИРОВАНИЕ>` его "ДО-контракт" в теге `<КОНТРАКТ>`.</ДЕЙСТВИЕ>
|
|
||||||
</ФАЗА>
|
|
||||||
<!-- [ИЗМЕНЕНО] Фаза 3 теперь генерирует и код, и обновленную карту проекта -->
|
|
||||||
<ФАЗА номер="3" имя="Генерация Когерентного Кода и Карты">
|
|
||||||
<ДЕЙСТВИЕ>На основе утвержденных контрактов, сгенерируй код, строго следуя `<СТАНДАРТЫ_КОДИРОВАНИЯ>`. Весь код помести в `<ИЗМЕНЕНИЯ_КОДА>`. Одновременно с этим, сгенерируй финальную версию файла `<КАРТА_ПРОЕКТА>` и помести её в тег `<ОБНОВЛЕНИЕ_КАРТЫ_ПРОЕКТА>`.</ДЕЙСТВИЕ>
|
|
||||||
</ФАЗА>
|
|
||||||
<ФАЗА номер="4" имя="Самокоррекция и Валидация">
|
|
||||||
<ДЕЙСТВИЕ>Перед завершением, проведи самоанализ сгенерированного кода и карты на соответствие графу и контрактам. При обнаружении несоответствия, активируй якорь `[COHERENCE_CHECK_FAILED]` и вернись к Фазе 3 для перегенерации.</ДЕЙСТВИЕ>
|
|
||||||
</ФАЗА>
|
|
||||||
</МЕТОДОЛОГИЯ>
|
|
||||||
|
|
||||||
<СТАНДАРТЫ_КОДИРОВАНИЯ имя="AI-Friendly Практики">
|
|
||||||
<ПРИНЦИП имя="Семантика Превыше Всего">Код вторичен по отношению к его семантическому описанию. Весь код должен быть обрамлен контрактами и якорями.</ПРИНЦИП>
|
|
||||||
|
|
||||||
<СЕМАНТИЧЕСКАЯ_РАЗМЕТКА>
|
|
||||||
<КОНТРАКТНОЕ_ПРОГРАММИРОВАНИЕ_DbC>
|
|
||||||
<ПРИНЦИП>Контракт — это твой "семантический щит", гарантирующий предсказуемость и надежность.</ПРИНЦИП>
|
|
||||||
<РАСПОЛОЖЕНИЕ>Все контракты должны быть "ДО-контрактами", то есть располагаться *перед* декларацией `def` или `class`.</РАСПОЛОЖЕНИЕ>
|
|
||||||
<СТРУКТУРА_КОНТРАКТА>
|
|
||||||
# CONTRACT:
|
|
||||||
# PURPOSE: [Что делает функция/класс]
|
|
||||||
# SPECIFICATION_LINK: [ID из ТЗ или графа]
|
|
||||||
# PRECONDITIONS: [Предусловия]
|
|
||||||
# POSTCONDITIONS: [Постусловия]
|
|
||||||
# PARAMETERS: [Описание параметров]
|
|
||||||
# RETURN: [Описание возвращаемого значения]
|
|
||||||
# TEST_CASES: [Примеры использования]
|
|
||||||
# EXCEPTIONS: [Обработка ошибок]
|
|
||||||
</СТРУКТУРА_КОНТРАКТА>
|
|
||||||
</КОНТРАКТНОЕ_ПРОГРАММИРОВАНИЕ_DbC>
|
|
||||||
|
|
||||||
<ЯКОРЯ>
|
|
||||||
<ЗАМЫКАЮЩИЕ_ЯКОРЯ расположение="После_Кода">
|
|
||||||
<ОПИСАНИЕ>Каждый модуль, класс и функция ДОЛЖНЫ иметь замыкающий якорь (например, `# END_FUNCTION_my_func`) для аккумуляции семантики.</ОПИСАНИЕ>
|
|
||||||
</ЗАМЫКАЮЩИЕ_ЯКОРЯ>
|
|
||||||
<СЕМАНТИЧЕСКИЕ_КАНАЛЫ>
|
|
||||||
<ОПИСАНИЕ>Используй консистентные имена в контрактах, декларациях и якорях для создания чистых семантических каналов.</ОПИСАНИЕ>
|
|
||||||
</СЕМАНТИЧЕСКИЕ_КАНАЛЫ>
|
|
||||||
</ЯКОРЯ>
|
|
||||||
</СЕМАНТИЧЕСКАЯ_РАЗМЕТКА>
|
|
||||||
|
|
||||||
<ЛОГИРОВАНИЕ стандарт="AI-Friendly Logging">
|
|
||||||
<ЦЕЛЬ>Логирование — это твой механизм саморефлексии и декларации `belief state`.</ЦЕЛЬ>
|
|
||||||
<ФОРМАТ>`logger.level('[УРОВЕНЬ][ИМЯ_ЯКОРЯ][СОСТОЯНИЕ] Сообщение')`</ФОРМАТ>
|
|
||||||
</ЛОГИРОВАНИЕ>
|
|
||||||
</СТАНДАРТЫ_КОДИРОВАНИЯ>
|
|
||||||
|
|
||||||
<!-- [ИЗМЕНЕНО] Пример полностью переработан для демонстрации обновления проекта -->
|
|
||||||
<FEW_SHOT_EXAMPLES>
|
|
||||||
<EXAMPLE name="Добавление функциональности в существующий файловый менеджер">
|
|
||||||
<ЗАПРОС_ПОЛЬЗОВАТЕЛЯ>
|
|
||||||
<GOAL>В существующий модуль `file_handler.py` добавить функцию для удаления файла.</GOAL>
|
|
||||||
<CONTEXT>
|
|
||||||
- Новая функция должна называться `delete_file`.
|
|
||||||
- Она должна принимать путь к файлу.
|
|
||||||
- Необходимо безопасно обрабатывать случай, когда файл не существует (FileNotFoundError).
|
|
||||||
- Сообщать об успехе или неудаче через логгер.
|
|
||||||
</CONTEXT>
|
|
||||||
<!-- [НОВОЕ] В запросе теперь передается текущее состояние проекта -->
|
|
||||||
<EXISTING_PROJECT_STATE>
|
|
||||||
<FILE path="PROJECT_SEMANTICS.xml">
|
|
||||||
<PROJECT_SEMANTICS>
|
|
||||||
<METADATA>
|
|
||||||
<VERSION>1.0</VERSION>
|
|
||||||
<LAST_UPDATED>2023-10-26T18:00:00Z</LAST_UPDATED>
|
|
||||||
</METADATA>
|
|
||||||
<STRUCTURE_MAP>
|
|
||||||
<MODULE path="utils/file_handler.py" id="mod_file_handler">
|
|
||||||
<PURPOSE>Модуль для операций с файлами JSON.</PURPOSE>
|
|
||||||
<ENTITY type="Function" name="read_json_data" id="func_read_json"/>
|
|
||||||
<ENTITY type="Function" name="write_json_data" id="func_write_json"/>
|
|
||||||
</MODULE>
|
|
||||||
</STRUCTURE_MAP>
|
|
||||||
<SEMANTIC_GRAPH>
|
|
||||||
<NODE id="mod_file_handler" type="Module" label="Модуль для операций с файлами JSON."/>
|
|
||||||
<NODE id="func_read_json" type="Function" label="Читает данные из JSON-файла."/>
|
|
||||||
<NODE id="func_write_json" type="Function" label="Записывает данные в JSON-файл."/>
|
|
||||||
<EDGE source_id="mod_file_handler" target_id="func_read_json" relation="CONTAINS"/>
|
|
||||||
<EDGE source_id="mod_file_handler" target_id="func_write_json" relation="CONTAINS"/>
|
|
||||||
</SEMANTIC_GRAPH>
|
|
||||||
</PROJECT_SEMANTICS>
|
|
||||||
</FILE>
|
|
||||||
</EXISTING_PROJECT_STATE>
|
|
||||||
</ЗАПРОС_ПОЛЬЗОВАТЕЛЯ>
|
|
||||||
|
|
||||||
<ОТВЕТ_ИИ>
|
|
||||||
<PLANNING_LOG>
|
|
||||||
**Фаза 0: Синхронизация с Контекстом Проекта**
|
|
||||||
Обнаружен и загружен `PROJECT_SEMANTICS.xml`. Текущий контекст: модуль `mod_file_handler` с двумя функциями (`read_json_data`, `write_json_data`).
|
|
||||||
|
|
||||||
**Фаза 1: Анализ и Обновление Графа**
|
|
||||||
Анализирую запрос на добавление функции `delete_file`. Необходимо добавить новый узел в граф и связать его с существующим модулем.
|
|
||||||
|
|
||||||
**Обновленный Семантический Граф:**
|
|
||||||
<SEMANTIC_GRAPH id="json_utils_graph_v2">
|
|
||||||
<!-- Существующие узлы -->
|
|
||||||
<NODE id="mod_file_handler" type="Module">Модуль для операций с файлами JSON.</NODE>
|
|
||||||
<NODE id="func_read_json" type="Function">Читает данные из JSON-файла.</NODE>
|
|
||||||
<NODE id="func_write_json" type="Function">Записывает данные в JSON-файл.</NODE>
|
|
||||||
<!-- Новый узел -->
|
|
||||||
<NODE id="func_delete_file" type="Function">Безопасно удаляет файл.</NODE>
|
|
||||||
<!-- Существующие связи -->
|
|
||||||
<EDGE source_id="mod_file_handler" target_id="func_read_json" relation="CONTAINS"/>
|
|
||||||
<EDGE source_id="mod_file_handler" target_id="func_write_json" relation="CONTAINS"/>
|
|
||||||
<!-- Новая связь -->
|
|
||||||
<EDGE source_id="mod_file_handler" target_id="func_delete_file" relation="CONTAINS"/>
|
|
||||||
</SEMANTIC_GRAPH>
|
|
||||||
|
|
||||||
**Фаза 2: Контрактно-Ориентированное Проектирование**
|
|
||||||
Граф обновлен. Проектирую контракт для новой функции `delete_file`.
|
|
||||||
|
|
||||||
**Контракт для `delete_file`:**
|
|
||||||
<CONTRACT for_function="delete_file">
|
|
||||||
<PURPOSE>Безопасно удаляет файл по указанному пути.</PURPOSE>
|
|
||||||
<PRECONDITIONS>`file_path` является строкой.</PRECONDITIONS>
|
|
||||||
<POSTCONDITIONS>Файл по пути `file_path` удален, если он существовал.</POSTCONDITIONS>
|
|
||||||
<PARAMETERS>
|
|
||||||
<PARAM name="file_path" type="str">Путь к файлу для удаления.</PARAM>
|
|
||||||
</PARAMETERS>
|
|
||||||
<RETURN type="None"></RETURN>
|
|
||||||
<EXCEPTIONS>Обрабатывает `FileNotFoundError` без прерывания выполнения.</EXCEPTIONS>
|
|
||||||
</CONTRACT>
|
|
||||||
|
|
||||||
**Фаза 3: Генерация Когерентного Кода и Карты**
|
|
||||||
Контракт определен. Приступаю к генерации кода для добавления в `utils/file_handler.py` и обновлению `PROJECT_SEMANTICS.xml`.
|
|
||||||
</PLANNING_LOG>
|
|
||||||
|
|
||||||
<!-- [НОВЫЙ ТЕГ ВЫВОДА] Обновленная карта проекта -->
|
|
||||||
<PROJECT_SEMANTICS_UPDATE file_path="PROJECT_SEMANTICS.xml">
|
|
||||||
<PROJECT_SEMANTICS>
|
|
||||||
<METADATA>
|
|
||||||
<VERSION>1.1</VERSION>
|
|
||||||
<!-- Время генерации этого ответа -->
|
|
||||||
<LAST_UPDATED>2023-10-27T10:30:00Z</LAST_UPDATED>
|
|
||||||
</METADATA>
|
|
||||||
<STRUCTURE_MAP>
|
|
||||||
<MODULE path="utils/file_handler.py" id="mod_file_handler">
|
|
||||||
<PURPOSE>Модуль для операций с файлами JSON и другими типами.</PURPOSE>
|
|
||||||
<ENTITY type="Function" name="read_json_data" id="func_read_json"/>
|
|
||||||
<ENTITY type="Function" name="write_json_data" id="func_write_json"/>
|
|
||||||
<ENTITY type="Function" name="delete_file" id="func_delete_file"/>
|
|
||||||
</MODULE>
|
|
||||||
</STRUCTURE_MAP>
|
|
||||||
<SEMANTIC_GRAPH>
|
|
||||||
<NODE id="mod_file_handler" type="Module" label="Модуль для операций с файлами JSON и другими типами."/>
|
|
||||||
<NODE id="func_read_json" type="Function" label="Читает данные из JSON-файла."/>
|
|
||||||
<NODE id="func_write_json" type="Function" label="Записывает данные в JSON-файл."/>
|
|
||||||
<NODE id="func_delete_file" type="Function" label="Безопасно удаляет файл."/>
|
|
||||||
<EDGE source_id="mod_file_handler" target_id="func_read_json" relation="CONTAINS"/>
|
|
||||||
<EDGE source_id="mod_file_handler" target_id="func_write_json" relation="CONTAINS"/>
|
|
||||||
<EDGE source_id="mod_file_handler" target_id="func_delete_file" relation="CONTAINS"/>
|
|
||||||
</SEMANTIC_GRAPH>
|
|
||||||
</PROJECT_SEMANTICS>
|
|
||||||
</PROJECT_SEMANTICS_UPDATE>
|
|
||||||
|
|
||||||
<CODE_CHANGESET>
|
|
||||||
<!-- [ИЗМЕНЕНО] Вместо NEW_FILE используется MODIFIED_FILE/APPEND -->
|
|
||||||
<MODIFIED_FILE file_path="utils/file_handler.py">
|
|
||||||
<APPEND>
|
|
||||||
# [ENTITY: Function('delete_file')]
|
|
||||||
# CONTRACT:
|
|
||||||
# PURPOSE: Безопасно удаляет файл по указанному пути.
|
|
||||||
# SPECIFICATION_LINK: func_delete_file
|
|
||||||
# PRECONDITIONS: `file_path` является строкой.
|
|
||||||
# POSTCONDITIONS: Файл по пути `file_path` удален, если он существовал.
|
|
||||||
# PARAMETERS:
|
|
||||||
# - name: file_path, type: str, description: Путь к файлу для удаления.
|
|
||||||
# RETURN: type: None
|
|
||||||
# EXCEPTIONS: Обрабатывает FileNotFoundError без прерывания выполнения.
|
|
||||||
def delete_file(file_path: str) -> None:
|
|
||||||
logger.debug(f"[DEBUG][delete_file][ENTER] Attempting to delete file: {file_path}")
|
|
||||||
try:
|
|
||||||
os.remove(file_path)
|
|
||||||
logger.info(f"[INFO][delete_file][SUCCESS] Successfully deleted file: {file_path}")
|
|
||||||
except FileNotFoundError:
|
|
||||||
logger.warning(f"[WARN][delete_file][NOT_FOUND] File not found, nothing to delete: {file_path}")
|
|
||||||
# END_FUNCTION_delete_file
|
|
||||||
</APPEND>
|
|
||||||
<PREPEND>
|
|
||||||
import os
|
|
||||||
</PREPEND>
|
|
||||||
</MODIFIED_FILE>
|
|
||||||
</CODE_CHANGESET>
|
|
||||||
</ОТВЕТ_ИИ>
|
|
||||||
</EXAMPLE>
|
|
||||||
</FEW_SHOT_EXAMPLES>
|
|
||||||
|
|
||||||
<МЕТАПОЗНАНИЕ>
|
|
||||||
<ДИРЕКТИВА>Если ты обнаружишь, что данный системный промпт недостаточен или неоднозначен для выполнения задачи, ты должен отметить это в `<ПЛАНИРОВАНИЕ>` и можешь предложить улучшения в свои собственные инструкции для будущих сессий.</ДИРЕКТИВА>
|
|
||||||
</МЕТАПОЗНАНИЕ>
|
|
||||||
|
|
||||||
</СИСТЕМНЫЙ_ПРОМПТ>
|
|
||||||
@@ -1,116 +0,0 @@
|
|||||||
<PROJECT_SEMANTICS>
|
|
||||||
<METADATA>
|
|
||||||
<VERSION>1.0</VERSION>
|
|
||||||
<LAST_UPDATED>2025-08-16T10:00:00Z</LAST_UPDATED>
|
|
||||||
</METADATA>
|
|
||||||
<STRUCTURE_MAP>
|
|
||||||
<MODULE path="backup_script.py" id="mod_backup_script">
|
|
||||||
<PURPOSE>Скрипт для создания резервных копий дашбордов и чартов из Superset.</PURPOSE>
|
|
||||||
</MODULE>
|
|
||||||
<MODULE path="migration_script.py" id="mod_migration_script">
|
|
||||||
<PURPOSE>Интерактивный скрипт для миграции ассетов Superset между различными окружениями.</PURPOSE>
|
|
||||||
<ENTITY type="Class" name="Migration" id="class_migration"/>
|
|
||||||
<ENTITY type="Function" name="run" id="func_run_migration"/>
|
|
||||||
<ENTITY type="Function" name="select_environments" id="func_select_environments"/>
|
|
||||||
<ENTITY type="Function" name="select_dashboards" id="func_select_dashboards"/>
|
|
||||||
<ENTITY type="Function" name="confirm_db_config_replacement" id="func_confirm_db_config_replacement"/>
|
|
||||||
<ENTITY type="Function" name="execute_migration" id="func_execute_migration"/>
|
|
||||||
</MODULE>
|
|
||||||
<MODULE path="search_script.py" id="mod_search_script">
|
|
||||||
<PURPOSE>Скрипт для поиска ассетов в Superset.</PURPOSE>
|
|
||||||
</MODULE>
|
|
||||||
<MODULE path="temp_pylint_runner.py" id="mod_temp_pylint_runner">
|
|
||||||
<PURPOSE>Временный скрипт для запуска Pylint.</PURPOSE>
|
|
||||||
</MODULE>
|
|
||||||
<MODULE path="superset_tool/" id="mod_superset_tool">
|
|
||||||
<PURPOSE>Пакет для взаимодействия с Superset API.</PURPOSE>
|
|
||||||
<ENTITY type="Module" name="client.py" id="mod_client"/>
|
|
||||||
<ENTITY type="Module" name="exceptions.py" id="mod_exceptions"/>
|
|
||||||
<ENTITY type="Module" name="models.py" id="mod_models"/>
|
|
||||||
<ENTITY type="Module" name="utils" id="mod_utils"/>
|
|
||||||
</MODULE>
|
|
||||||
<MODULE path="superset_tool/client.py" id="mod_client">
|
|
||||||
<PURPOSE>Клиент для взаимодействия с Superset API.</PURPOSE>
|
|
||||||
<ENTITY type="Class" name="SupersetClient" id="class_superset_client"/>
|
|
||||||
</MODULE>
|
|
||||||
<MODULE path="superset_tool/exceptions.py" id="mod_exceptions">
|
|
||||||
<PURPOSE>Пользовательские исключения для Superset Tool.</PURPOSE>
|
|
||||||
</MODULE>
|
|
||||||
<MODULE path="superset_tool/models.py" id="mod_models">
|
|
||||||
<PURPOSE>Модели данных для Superset.</PURPOSE>
|
|
||||||
</MODULE>
|
|
||||||
<MODULE path="superset_tool/utils/" id="mod_utils">
|
|
||||||
<PURPOSE>Утилиты для Superset Tool.</PURPOSE>
|
|
||||||
<ENTITY type="Module" name="fileio.py" id="mod_fileio"/>
|
|
||||||
<ENTITY type="Module" name="init_clients.py" id="mod_init_clients"/>
|
|
||||||
<ENTITY type="Module" name="logger.py" id="mod_logger"/>
|
<ENTITY type="Module" name="network.py" id="mod_network"/>
</MODULE>

<MODULE path="superset_tool/utils/fileio.py" id="mod_fileio">
<PURPOSE>Utilities for working with files.</PURPOSE>
<ENTITY type="Function" name="_process_yaml_value" id="func_process_yaml_value"/>
<ENTITY type="Function" name="_update_yaml_file" id="func_update_yaml_file"/>
</MODULE>

<MODULE path="superset_tool/utils/init_clients.py" id="mod_init_clients">
<PURPOSE>Initialization of clients for interacting with the API.</PURPOSE>
</MODULE>

<MODULE path="superset_tool/utils/logger.py" id="mod_logger">
<PURPOSE>Logger configuration.</PURPOSE>
</MODULE>

<MODULE path="superset_tool/utils/network.py" id="mod_network">
<PURPOSE>Network utilities.</PURPOSE>
</MODULE>

</STRUCTURE_MAP>

<SEMANTIC_GRAPH>
<NODE id="mod_backup_script" type="Module" label="Script for creating backups."/>
<NODE id="mod_migration_script" type="Module" label="Interactive script for migrating Superset assets."/>
<NODE id="mod_search_script" type="Module" label="Search script."/>
<NODE id="mod_temp_pylint_runner" type="Module" label="Temporary script for running Pylint."/>
<NODE id="mod_superset_tool" type="Package" label="Package for interacting with the Superset API."/>
<NODE id="mod_client" type="Module" label="Superset API client."/>
<NODE id="mod_exceptions" type="Module" label="Custom exceptions."/>
<NODE id="mod_models" type="Module" label="Data models."/>
<NODE id="mod_utils" type="Package" label="Utilities."/>
<NODE id="mod_fileio" type="Module" label="File utilities."/>
<NODE id="mod_init_clients" type="Module" label="Client initialization."/>
<NODE id="mod_logger" type="Module" label="Logger configuration."/>
<NODE id="mod_network" type="Module" label="Network utilities."/>
<NODE id="class_superset_client" type="Class" label="Superset client."/>
<NODE id="func_process_yaml_value" type="Function" label="(HELPER) Recursively processes values in a YAML structure."/>
<NODE id="func_update_yaml_file" type="Function" label="(HELPER) Updates a single YAML file."/>
<NODE id="class_migration" type="Class" label="Encapsulates the logic and state of the migration process."/>
<NODE id="func_run_migration" type="Function" label="Runs the main migration workflow."/>
<NODE id="func_select_environments" type="Function" label="Provides interactive selection of the source and target environments."/>
<NODE id="func_select_dashboards" type="Function" label="Provides interactive selection of dashboards for migration."/>
<NODE id="func_confirm_db_config_replacement" type="Function" label="Manages the confirmation and configuration of database config replacement."/>
<NODE id="func_execute_migration" type="Function" label="Performs the actual migration of the selected dashboards."/>

<EDGE source_id="mod_superset_tool" target_id="mod_client" relation="CONTAINS"/>
<EDGE source_id="mod_superset_tool" target_id="mod_exceptions" relation="CONTAINS"/>
<EDGE source_id="mod_superset_tool" target_id="mod_models" relation="CONTAINS"/>
<EDGE source_id="mod_superset_tool" target_id="mod_utils" relation="CONTAINS"/>
<EDGE source_id="mod_client" target_id="class_superset_client" relation="CONTAINS"/>
<EDGE source_id="mod_utils" target_id="mod_fileio" relation="CONTAINS"/>
<EDGE source_id="mod_utils" target_id="mod_init_clients" relation="CONTAINS"/>
<EDGE source_id="mod_utils" target_id="mod_logger" relation="CONTAINS"/>
<EDGE source_id="mod_utils" target_id="mod_network" relation="CONTAINS"/>

<EDGE source_id="mod_backup_script" target_id="mod_superset_tool" relation="USES"/>
<EDGE source_id="mod_migration_script" target_id="mod_superset_tool" relation="USES"/>
<EDGE source_id="mod_search_script" target_id="mod_superset_tool" relation="USES"/>
<EDGE source_id="mod_fileio" target_id="func_process_yaml_value" relation="CONTAINS"/>
<EDGE source_id="mod_fileio" target_id="func_update_yaml_file" relation="CONTAINS"/>
<EDGE source_id="func_update_yamls" target_id="func_update_yaml_file" relation="CALLS"/>
<EDGE source_id="func_update_yaml_file" target_id="func_process_yaml_value" relation="CALLS"/>
<EDGE source_id="mod_migration_script" target_id="class_migration" relation="CONTAINS"/>
<EDGE source_id="class_migration" target_id="func_run_migration" relation="CONTAINS"/>
<EDGE source_id="class_migration" target_id="func_select_environments" relation="CONTAINS"/>
<EDGE source_id="class_migration" target_id="func_select_dashboards" relation="CONTAINS"/>
<EDGE source_id="class_migration" target_id="func_confirm_db_config_replacement" relation="CONTAINS"/>
<EDGE source_id="func_run_migration" target_id="func_select_environments" relation="CALLS"/>
<EDGE source_id="func_run_migration" target_id="func_select_dashboards" relation="CALLS"/>
<EDGE source_id="func_run_migration" target_id="func_confirm_db_config_replacement" relation="CALLS"/>
<EDGE source_id="class_migration" target_id="func_execute_migration" relation="CONTAINS"/>
<EDGE source_id="func_run_migration" target_id="func_execute_migration" relation="CALLS"/>
</SEMANTIC_GRAPH>

</PROJECT_SEMANTICS>
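For readers who want to consume this semantic map programmatically, a minimal sketch using only the standard library is shown below; the inline fragment and its two nodes are illustrative stand-ins for the full PROJECT_SEMANTICS file, whose location is not part of this diff.

```python
# Sketch: reading a SEMANTIC_GRAPH fragment with the standard library only.
import xml.etree.ElementTree as ET

# Tiny illustrative fragment in the same shape as the graph above.
fragment = """
<SEMANTIC_GRAPH>
  <NODE id="mod_utils" type="Package" label="Utilities."/>
  <NODE id="mod_fileio" type="Module" label="File utilities."/>
  <EDGE source_id="mod_utils" target_id="mod_fileio" relation="CONTAINS"/>
</SEMANTIC_GRAPH>
"""

graph = ET.fromstring(fragment)
labels = {node.get("id"): node.get("label") for node in graph.findall("NODE")}
for edge in graph.findall("EDGE"):
    # Prints: mod_utils --CONTAINS--> mod_fileio (File utilities.)
    target = edge.get("target_id")
    print(f'{edge.get("source_id")} --{edge.get("relation")}--> {target} ({labels[target]})')
```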
68
README.md
Normal file → Executable file
@@ -9,6 +9,7 @@
 - `backup_script.py`: The main script for performing scheduled backups of Superset dashboards.
 - `migration_script.py`: The main script for migrating specific dashboards between environments, including overriding database connections.
 - `search_script.py`: A script for searching data across all datasets available on the server.
+- `run_mapper.py`: A CLI script for mapping dataset metadata.
 - `superset_tool/`:
 - `client.py`: A Python client for interacting with the Superset API.
 - `exceptions.py`: Custom exception classes for structured error handling.
@@ -17,6 +18,8 @@
 - `fileio.py`: Utilities for working with the file system (archive handling, YAML parsing).
 - `logger.py`: Logger configuration for uniform logging across the project.
 - `network.py`: An HTTP client for network requests with authentication and retry handling.
+- `init_clients.py`: A utility for initializing Superset clients for different environments.
+- `dataset_mapper.py`: Dataset metadata mapping logic.

 ## Setup

@@ -38,17 +41,28 @@
 (You may need to create a `requirements.txt` with `pydantic`, `requests`, `keyring`, `PyYAML`, `urllib3`.)
 3. **Configure passwords:**
 Use `keyring` to store the passwords of the Superset API users.
-Example for `backup_script.py`:
 ```python
 import keyring
 keyring.set_password("system", "dev migrate", "пароль пользователя migrate_user")
 keyring.set_password("system", "prod migrate", "пароль пользователя migrate_user")
 keyring.set_password("system", "sandbox migrate", "пароль пользователя migrate_user")
 ```
-If necessary, replace `"system"` with a suitable service name.

 ## Usage

+### Running the project (Web UI)
+To start the backend and frontend servers with a single command:
+```bash
+./run.sh
+```
+Options:
+- `--skip-install`: Skip the dependency check and installation.
+- `--help`: Show help.
+
+Environment variables:
+- `BACKEND_PORT`: Backend port (default 8000).
+- `FRONTEND_PORT`: Frontend port (default 5173).

 ### Backup script (`backup_script.py`)
 To create backups of dashboards from the configured Superset environments:
 ```bash
@@ -61,33 +75,45 @@ python backup_script.py
 ```bash
 python migration_script.py
 ```
-**Note:** In the current version the script migrates a hard-coded dashboard (`FI0070`) and uses a local `.zip` file as the source. **For production use you must:**
-- In the current version, the source and target are controlled by the
-`from_c` and `to_c` parameters.

 ### Search script (`search_script.py`)
-The search string and the clients used for searching are configured here
-# Searching for all tables in a dataset
-```python
-results = search_datasets(
-    client=clients['dev'],
-    search_pattern=r'dm_view\.account_debt',
-    search_fields=["sql"],
-    logger=logger
-)
+To search for text patterns in Superset dataset metadata:
+```bash
+python search_script.py
 ```
+The script uses regular expressions to search dataset fields such as SQL queries. Search results are written to the log and to the console.
+
+### Metadata mapping script (`run_mapper.py`)
+To update dataset metadata (for example, verbose names) in Superset:
+```bash
+python run_mapper.py --source <source_type> --dataset-id <dataset_id> [--table-name <table_name>] [--table-schema <table_schema>] [--excel-path <path_to_excel>] [--env <environment>]
+```
+If you use XLSX, the file must contain two columns: column_name | verbose_name.
+
+Parameters:
+- `--source`: Data source ('postgres', 'excel', or 'both').
+- `--dataset-id`: ID of the dataset to update.
+- `--table-name`: Table name for PostgreSQL.
+- `--table-schema`: Table schema for PostgreSQL.
+- `--excel-path`: Path to the Excel file.
+- `--env`: Superset environment ('dev', 'prod', etc.).
+
+Usage example:
+```bash
+python run_mapper.py --source postgres --dataset-id 123 --table-name account_debt --table-schema dm_view --env dev
+python run_mapper.py --source=excel --dataset-id=286 --excel-path=H:\dev\ss-tools\286_map.xlsx --env=dev
+```
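As a hedged illustration of the XLSX layout expected by `run_mapper.py --source excel`: only the two columns `column_name` and `verbose_name` are required per the note above; the sample rows, the output file name, and the use of pandas are assumptions for the example.

```python
# Minimal sketch: build a mapping file for `run_mapper.py --source excel`.
# pandas and openpyxl are listed in backend/requirements.txt; the sample
# rows and the output file name are illustrative only.
import pandas as pd

mapping = pd.DataFrame(
    {
        "column_name": ["account_id", "debt_amount"],
        "verbose_name": ["Account ID", "Debt amount"],
    }
)
mapping.to_excel("286_map.xlsx", index=False)  # one sheet, two columns
```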
 ## Logging
 Logs are written to a file in the `Logs` directory (for example, `P:\Superset\010 Бекапы\Logs` for backups) and printed to the console. The default logging level is `INFO`.

 ## Development and contributing
-- Follow the architectural patterns (`[MODULE]`, `[CONTRACT]`, `[SECTION]`, `[ANCHOR]`) and the logging rules.
-- All new code must follow the principles of "LLM-friendly" generation.
+- Follow the **Semantic Code Generation Protocol** (see `semantic_protocol.md`):
+  - All definitions are wrapped in `[DEF]...[/DEF]`.
+  - Contracts (`@PRE`, `@POST`) are defined BEFORE the implementation.
+  - Strict typing and immutability of architectural decisions.
+- Follow the project Constitution (`.specify/memory/constitution.md`).
 - Use `Pydantic` models for data validation.
 - Implement comprehensive error handling with custom exceptions.

----
-[COHERENCE_CHECK_PASSED] README.md has been created and is consistent with the modules.
-The translation was performed while preserving the original Markdown markup and document style. [1]
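A minimal, hypothetical sketch of the `[DEF]`/contract conventions and Pydantic-based validation mentioned in the development guidelines above; the class and function names are illustrative and do not come from this diff.

```python
# Hypothetical example of the [DEF]/contract style; names are illustrative.

# [DEF:DashboardRef:Class]
# @PURPOSE: Pydantic model validating a minimal dashboard reference.
from pydantic import BaseModel, Field

class DashboardRef(BaseModel):
    id: int = Field(gt=0)
    title: str
# [/DEF:DashboardRef:Class]

# [DEF:format_backup_name:Function]
# @PURPOSE: Builds a file name for a dashboard backup archive.
# @PRE: dashboard.title is non-empty.
# @POST: Returns a filesystem-friendly name ending in ".zip".
def format_backup_name(dashboard: DashboardRef) -> str:
    safe_title = "_".join(dashboard.title.split())
    return f"{safe_title}_{dashboard.id}.zip"
# [/DEF:format_backup_name:Function]

print(format_backup_name(DashboardRef(id=11, title="FCC New Coder Survey 2018")))
```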
189
backend/backend.log
Normal file
@@ -0,0 +1,189 @@
|
|||||||
|
INFO: Will watch for changes in these directories: ['/home/user/ss-tools/backend']
|
||||||
|
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
|
||||||
|
INFO: Started reloader process [7952] using StatReload
|
||||||
|
INFO: Started server process [7968]
|
||||||
|
INFO: Waiting for application startup.
|
||||||
|
INFO: Application startup complete.
|
||||||
|
Error loading plugin module backup: No module named 'yaml'
|
||||||
|
Error loading plugin module migration: No module named 'yaml'
|
||||||
|
INFO: 127.0.0.1:36934 - "HEAD /docs HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:55006 - "GET /settings HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:55006 - "GET /settings/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:55010 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:55010 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:55010 - "GET /settings HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:55010 - "GET /settings/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:55010 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:55010 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:55010 - "GET /settings HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:55010 - "GET /settings/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:35508 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:35508 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:49820 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:49820 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:49822 - "GET /settings HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:49822 - "GET /settings/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:49822 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:49822 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:49908 - "GET /settings HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:49908 - "GET /settings/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:49922 - "OPTIONS /settings/environments HTTP/1.1" 200 OK
|
||||||
|
[2025-12-20 19:14:15,576][INFO][superset_tools_app] [ConfigManager.save_config][Coherence:OK] Configuration saved context={'path': '/home/user/ss-tools/config.json'}
|
||||||
|
INFO: 127.0.0.1:49922 - "POST /settings/environments HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:49922 - "GET /settings HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:49922 - "GET /settings/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:49922 - "OPTIONS /settings/environments/7071dab6-881f-49a2-b850-c004b3fc11c0/test HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:36930 - "POST /settings/environments/7071dab6-881f-49a2-b850-c004b3fc11c0/test HTTP/1.1" 500 Internal Server Error
|
||||||
|
ERROR: Exception in ASGI application
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
|
||||||
|
result = await app( # type: ignore[func-returns-value]
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
|
||||||
|
return await self.app(scope, receive, send)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/applications.py", line 1135, in __call__
|
||||||
|
await super().__call__(scope, receive, send)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/applications.py", line 107, in __call__
|
||||||
|
await self.middleware_stack(scope, receive, send)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
|
||||||
|
raise exc
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
|
||||||
|
await self.app(scope, receive, _send)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/middleware/cors.py", line 93, in __call__
|
||||||
|
await self.simple_response(scope, receive, send, request_headers=headers)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/middleware/cors.py", line 144, in simple_response
|
||||||
|
await self.app(scope, receive, send)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 63, in __call__
|
||||||
|
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
|
||||||
|
raise exc
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
|
||||||
|
await app(scope, receive, sender)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
|
||||||
|
await self.app(scope, receive, send)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/routing.py", line 716, in __call__
|
||||||
|
await self.middleware_stack(scope, receive, send)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/routing.py", line 736, in app
|
||||||
|
await route.handle(scope, receive, send)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/routing.py", line 290, in handle
|
||||||
|
await self.app(scope, receive, send)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/routing.py", line 118, in app
|
||||||
|
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
|
||||||
|
raise exc
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
|
||||||
|
await app(scope, receive, sender)
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/routing.py", line 104, in app
|
||||||
|
response = await f(request)
|
||||||
|
^^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/routing.py", line 428, in app
|
||||||
|
raw_response = await run_endpoint_function(
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/fastapi/routing.py", line 314, in run_endpoint_function
|
||||||
|
return await dependant.call(**values)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/src/api/routes/settings.py", line 103, in test_connection
|
||||||
|
import httpx
|
||||||
|
ModuleNotFoundError: No module named 'httpx'
|
||||||
|
INFO: 127.0.0.1:45776 - "POST /settings/environments/7071dab6-881f-49a2-b850-c004b3fc11c0/test HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:45784 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:45784 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:41628 - "GET /settings HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:41628 - "GET /settings/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:41628 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:41628 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:60184 - "GET /settings HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:60184 - "GET /settings/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:60184 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:60184 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:60184 - "GET /settings HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:60184 - "GET /settings/ HTTP/1.1" 200 OK
|
||||||
|
WARNING: StatReload detected changes in 'src/core/plugin_loader.py'. Reloading...
|
||||||
|
INFO: Shutting down
|
||||||
|
INFO: Waiting for application shutdown.
|
||||||
|
INFO: Application shutdown complete.
|
||||||
|
INFO: Finished server process [7968]
|
||||||
|
INFO: Started server process [12178]
|
||||||
|
INFO: Waiting for application startup.
|
||||||
|
INFO: Application startup complete.
|
||||||
|
WARNING: StatReload detected changes in 'src/dependencies.py'. Reloading...
|
||||||
|
INFO: Shutting down
|
||||||
|
INFO: Waiting for application shutdown.
|
||||||
|
INFO: Application shutdown complete.
|
||||||
|
INFO: Finished server process [12178]
|
||||||
|
INFO: Started server process [12451]
|
||||||
|
INFO: Waiting for application startup.
|
||||||
|
INFO: Application startup complete.
|
||||||
|
Plugin 'Superset Dashboard Backup' (ID: superset-backup) loaded successfully.
|
||||||
|
Plugin 'Superset Dashboard Migration' (ID: superset-migration) loaded successfully.
|
||||||
|
INFO: 127.0.0.1:37334 - "GET / HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:37334 - "GET /favicon.ico HTTP/1.1" 404 Not Found
|
||||||
|
INFO: 127.0.0.1:39932 - "GET / HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:39932 - "GET /favicon.ico HTTP/1.1" 404 Not Found
|
||||||
|
INFO: 127.0.0.1:39932 - "GET / HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:39932 - "GET / HTTP/1.1" 200 OK
|
||||||
|
INFO: 127.0.0.1:54900 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:49280 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
INFO: 127.0.0.1:49280 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
WARNING: StatReload detected changes in 'src/api/routes/plugins.py'. Reloading...
|
||||||
|
INFO: Shutting down
|
||||||
|
INFO: Waiting for application shutdown.
|
||||||
|
INFO: Application shutdown complete.
|
||||||
|
INFO: Finished server process [12451]
|
||||||
|
INFO: Started server process [15016]
|
||||||
|
INFO: Waiting for application startup.
|
||||||
|
INFO: Application startup complete.
|
||||||
|
Plugin 'Superset Dashboard Backup' (ID: superset-backup) loaded successfully.
|
||||||
|
Plugin 'Superset Dashboard Migration' (ID: superset-migration) loaded successfully.
|
||||||
|
INFO: 127.0.0.1:59340 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
DEBUG: list_plugins called. Found 0 plugins.
|
||||||
|
INFO: 127.0.0.1:59340 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
WARNING: StatReload detected changes in 'src/dependencies.py'. Reloading...
|
||||||
|
INFO: Shutting down
|
||||||
|
INFO: Waiting for application shutdown.
|
||||||
|
INFO: Application shutdown complete.
|
||||||
|
INFO: Finished server process [15016]
|
||||||
|
INFO: Started server process [15257]
|
||||||
|
INFO: Waiting for application startup.
|
||||||
|
INFO: Application startup complete.
|
||||||
|
Plugin 'Superset Dashboard Backup' (ID: superset-backup) loaded successfully.
|
||||||
|
Plugin 'Superset Dashboard Migration' (ID: superset-migration) loaded successfully.
|
||||||
|
DEBUG: dependencies.py initialized. PluginLoader ID: 139922613090976
|
||||||
|
DEBUG: dependencies.py initialized. PluginLoader ID: 139922627375088
|
||||||
|
INFO: 127.0.0.1:57464 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
DEBUG: get_plugin_loader called. Returning PluginLoader ID: 139922627375088
|
||||||
|
DEBUG: list_plugins called. Found 0 plugins.
|
||||||
|
INFO: 127.0.0.1:57464 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
WARNING: StatReload detected changes in 'src/core/plugin_loader.py'. Reloading...
|
||||||
|
INFO: Shutting down
|
||||||
|
INFO: Waiting for application shutdown.
|
||||||
|
INFO: Application shutdown complete.
|
||||||
|
INFO: Finished server process [15257]
|
||||||
|
INFO: Started server process [15533]
|
||||||
|
INFO: Waiting for application startup.
|
||||||
|
INFO: Application startup complete.
|
||||||
|
DEBUG: Loading plugin backup as src.plugins.backup
|
||||||
|
Plugin 'Superset Dashboard Backup' (ID: superset-backup) loaded successfully.
|
||||||
|
DEBUG: Loading plugin migration as src.plugins.migration
|
||||||
|
Plugin 'Superset Dashboard Migration' (ID: superset-migration) loaded successfully.
|
||||||
|
DEBUG: dependencies.py initialized. PluginLoader ID: 140371031142384
|
||||||
|
INFO: 127.0.0.1:46470 - "GET /plugins HTTP/1.1" 307 Temporary Redirect
|
||||||
|
DEBUG: get_plugin_loader called. Returning PluginLoader ID: 140371031142384
|
||||||
|
DEBUG: list_plugins called. Found 2 plugins.
|
||||||
|
DEBUG: Plugin: superset-backup
|
||||||
|
DEBUG: Plugin: superset-migration
|
||||||
|
INFO: 127.0.0.1:46470 - "GET /plugins/ HTTP/1.1" 200 OK
|
||||||
|
WARNING: StatReload detected changes in 'src/api/routes/settings.py'. Reloading...
|
||||||
|
INFO: Shutting down
|
||||||
|
INFO: Waiting for application shutdown.
|
||||||
|
INFO: Application shutdown complete.
|
||||||
|
INFO: Finished server process [15533]
|
||||||
|
INFO: Started server process [15827]
|
||||||
|
INFO: Waiting for application startup.
|
||||||
|
INFO: Application startup complete.
|
||||||
|
INFO: Shutting down
|
||||||
|
INFO: Waiting for application shutdown.
|
||||||
|
INFO: Application shutdown complete.
|
||||||
|
INFO: Finished server process [15827]
|
||||||
|
INFO: Stopping reloader process [7952]
|
||||||
269
backend/backups/Logs/superset_tool_20251220.log
Normal file
@@ -0,0 +1,269 @@
|
|||||||
|
2025-12-20 19:55:11,325 - INFO - [BackupPlugin][Entry] Starting backup for superset.
|
||||||
|
2025-12-20 19:55:11,325 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 19:55:11,327 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 43, in setup_clients
|
||||||
|
config = SupersetConfig(
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
|
||||||
|
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
2025-12-20 21:01:49,905 - INFO - [BackupPlugin][Entry] Starting backup for superset.
|
||||||
|
2025-12-20 21:01:49,906 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 21:01:49,988 - INFO - [setup_clients][Action] Loading environments from ConfigManager
|
||||||
|
2025-12-20 21:01:49,990 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
|
||||||
|
config = SupersetConfig(
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
|
||||||
|
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
2025-12-20 22:42:32,538 - INFO - [BackupPlugin][Entry] Starting backup for superset.
|
||||||
|
2025-12-20 22:42:32,538 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 22:42:32,583 - INFO - [setup_clients][Action] Loading environments from ConfigManager
|
||||||
|
2025-12-20 22:42:32,587 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
|
||||||
|
config = SupersetConfig(
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
|
||||||
|
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
2025-12-20 22:54:29,770 - INFO - [BackupPlugin][Entry] Starting backup for .
|
||||||
|
2025-12-20 22:54:29,771 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 22:54:29,831 - INFO - [setup_clients][Action] Loading environments from ConfigManager
|
||||||
|
2025-12-20 22:54:29,833 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
|
||||||
|
config = SupersetConfig(
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
|
||||||
|
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
2025-12-20 22:54:34,078 - INFO - [BackupPlugin][Entry] Starting backup for superset.
|
||||||
|
2025-12-20 22:54:34,078 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 22:54:34,079 - INFO - [setup_clients][Action] Loading environments from ConfigManager
|
||||||
|
2025-12-20 22:54:34,079 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
|
||||||
|
config = SupersetConfig(
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
|
||||||
|
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
2025-12-20 22:59:25,060 - INFO - [BackupPlugin][Entry] Starting backup for superset.
|
||||||
|
2025-12-20 22:59:25,060 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 22:59:25,114 - INFO - [setup_clients][Action] Loading environments from ConfigManager
|
||||||
|
2025-12-20 22:59:25,117 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
|
||||||
|
config = SupersetConfig(
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
|
||||||
|
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
2025-12-20 23:00:31,156 - INFO - [BackupPlugin][Entry] Starting backup for superset.
|
||||||
|
2025-12-20 23:00:31,156 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 23:00:31,157 - INFO - [setup_clients][Action] Loading environments from ConfigManager
|
||||||
|
2025-12-20 23:00:31,162 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
|
||||||
|
config = SupersetConfig(
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
|
||||||
|
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
2025-12-20 23:00:34,710 - INFO - [BackupPlugin][Entry] Starting backup for superset.
|
||||||
|
2025-12-20 23:00:34,710 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 23:00:34,710 - INFO - [setup_clients][Action] Loading environments from ConfigManager
|
||||||
|
2025-12-20 23:00:34,711 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
|
||||||
|
config = SupersetConfig(
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
|
||||||
|
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
2025-12-20 23:01:43,894 - INFO - [BackupPlugin][Entry] Starting backup for superset.
|
||||||
|
2025-12-20 23:01:43,894 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 23:01:43,895 - INFO - [setup_clients][Action] Loading environments from ConfigManager
|
||||||
|
2025-12-20 23:01:43,895 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
|
||||||
|
config = SupersetConfig(
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
|
||||||
|
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
2025-12-20 23:04:07,731 - INFO - [BackupPlugin][Entry] Starting backup for superset.
|
||||||
|
2025-12-20 23:04:07,731 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 23:04:07,732 - INFO - [setup_clients][Action] Loading environments from ConfigManager
|
||||||
|
2025-12-20 23:04:07,732 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
|
||||||
|
config = SupersetConfig(
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
|
||||||
|
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
2025-12-20 23:06:39,641 - INFO - [BackupPlugin][Entry] Starting backup for superset.
|
||||||
|
2025-12-20 23:06:39,642 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 23:06:39,687 - INFO - [setup_clients][Action] Loading environments from ConfigManager
|
||||||
|
2025-12-20 23:06:39,689 - CRITICAL - [setup_clients][Failure] Critical error during client initialization: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
Traceback (most recent call last):
|
||||||
|
File "/home/user/ss-tools/superset_tool/utils/init_clients.py", line 66, in setup_clients
|
||||||
|
config = SupersetConfig(
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
File "/home/user/ss-tools/backend/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
|
||||||
|
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
pydantic_core._pydantic_core.ValidationError: 1 validation error for SupersetConfig
|
||||||
|
base_url
|
||||||
|
Value error, Invalid URL format: https://superset.bebesh.ru. Must include '/api/v1'. [type=value_error, input_value='https://superset.bebesh.ru', input_type=str]
|
||||||
|
For further information visit https://errors.pydantic.dev/2.12/v/value_error
|
||||||
|
2025-12-20 23:30:36,090 - INFO - [BackupPlugin][Entry] Starting backup for superset.
|
||||||
|
2025-12-20 23:30:36,093 - INFO - [setup_clients][Enter] Starting Superset clients initialization.
|
||||||
|
2025-12-20 23:30:36,128 - INFO - [setup_clients][Action] Loading environments from ConfigManager
|
||||||
|
2025-12-20 23:30:36,129 - INFO - [SupersetClient.__init__][Enter] Initializing SupersetClient.
|
||||||
|
2025-12-20 23:30:36,129 - INFO - [APIClient.__init__][Entry] Initializing APIClient.
|
||||||
|
2025-12-20 23:30:36,130 - WARNING - [_init_session][State] SSL verification disabled.
|
||||||
|
2025-12-20 23:30:36,130 - INFO - [APIClient.__init__][Exit] APIClient initialized.
|
||||||
|
2025-12-20 23:30:36,130 - INFO - [SupersetClient.__init__][Exit] SupersetClient initialized.
|
||||||
|
2025-12-20 23:30:36,130 - INFO - [get_dashboards][Enter] Fetching dashboards.
|
||||||
|
2025-12-20 23:30:36,131 - INFO - [authenticate][Enter] Authenticating to https://superset.bebesh.ru/api/v1
|
||||||
|
2025-12-20 23:30:36,897 - INFO - [authenticate][Exit] Authenticated successfully.
|
||||||
|
2025-12-20 23:30:37,527 - INFO - [get_dashboards][Exit] Found 11 dashboards.
|
||||||
|
2025-12-20 23:30:37,527 - INFO - [BackupPlugin][Progress] Found 11 dashboards to export in superset.
|
||||||
|
2025-12-20 23:30:37,529 - INFO - [export_dashboard][Enter] Exporting dashboard 11.
|
||||||
|
2025-12-20 23:30:38,224 - INFO - [export_dashboard][Exit] Exported dashboard 11 to dashboard_export_20251220T203037.zip.
|
||||||
|
2025-12-20 23:30:38,225 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
|
||||||
|
2025-12-20 23:30:38,226 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/FCC New Coder Survey 2018/dashboard_export_20251220T203037.zip
|
||||||
|
2025-12-20 23:30:38,227 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/FCC New Coder Survey 2018
|
||||||
|
2025-12-20 23:30:38,230 - INFO - [export_dashboard][Enter] Exporting dashboard 10.
|
||||||
|
2025-12-20 23:30:38,438 - INFO - [export_dashboard][Exit] Exported dashboard 10 to dashboard_export_20251220T203038.zip.
|
||||||
|
2025-12-20 23:30:38,438 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
|
||||||
|
2025-12-20 23:30:38,439 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/COVID Vaccine Dashboard/dashboard_export_20251220T203038.zip
|
||||||
|
2025-12-20 23:30:38,439 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/COVID Vaccine Dashboard
|
||||||
|
2025-12-20 23:30:38,440 - INFO - [export_dashboard][Enter] Exporting dashboard 9.
|
||||||
|
2025-12-20 23:30:38,853 - INFO - [export_dashboard][Exit] Exported dashboard 9 to dashboard_export_20251220T203038.zip.
|
||||||
|
2025-12-20 23:30:38,853 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
|
||||||
|
2025-12-20 23:30:38,856 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Sales Dashboard/dashboard_export_20251220T203038.zip
|
||||||
|
2025-12-20 23:30:38,856 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Sales Dashboard
|
||||||
|
2025-12-20 23:30:38,858 - INFO - [export_dashboard][Enter] Exporting dashboard 8.
|
||||||
|
2025-12-20 23:30:38,939 - INFO - [export_dashboard][Exit] Exported dashboard 8 to dashboard_export_20251220T203038.zip.
|
||||||
|
2025-12-20 23:30:38,940 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
|
||||||
|
2025-12-20 23:30:38,941 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Unicode Test/dashboard_export_20251220T203038.zip
|
||||||
|
2025-12-20 23:30:38,941 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Unicode Test
|
||||||
|
2025-12-20 23:30:38,942 - INFO - [export_dashboard][Enter] Exporting dashboard 7.
|
||||||
|
2025-12-20 23:30:39,148 - INFO - [export_dashboard][Exit] Exported dashboard 7 to dashboard_export_20251220T203038.zip.
|
||||||
|
2025-12-20 23:30:39,148 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
|
||||||
|
2025-12-20 23:30:39,149 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Video Game Sales/dashboard_export_20251220T203038.zip
|
||||||
|
2025-12-20 23:30:39,149 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Video Game Sales
|
||||||
|
2025-12-20 23:30:39,150 - INFO - [export_dashboard][Enter] Exporting dashboard 6.
|
||||||
|
2025-12-20 23:30:39,689 - INFO - [export_dashboard][Exit] Exported dashboard 6 to dashboard_export_20251220T203039.zip.
|
||||||
|
2025-12-20 23:30:39,689 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
|
||||||
|
2025-12-20 23:30:39,690 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Featured Charts/dashboard_export_20251220T203039.zip
|
||||||
|
2025-12-20 23:30:39,691 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Featured Charts
|
||||||
|
2025-12-20 23:30:39,692 - INFO - [export_dashboard][Enter] Exporting dashboard 5.
|
||||||
|
2025-12-20 23:30:39,960 - INFO - [export_dashboard][Exit] Exported dashboard 5 to dashboard_export_20251220T203039.zip.
|
||||||
|
2025-12-20 23:30:39,960 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
|
||||||
|
2025-12-20 23:30:39,961 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Slack Dashboard/dashboard_export_20251220T203039.zip
|
||||||
|
2025-12-20 23:30:39,961 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Slack Dashboard
|
||||||
|
2025-12-20 23:30:39,962 - INFO - [export_dashboard][Enter] Exporting dashboard 4.
|
||||||
|
2025-12-20 23:30:40,196 - INFO - [export_dashboard][Exit] Exported dashboard 4 to dashboard_export_20251220T203039.zip.
|
||||||
|
2025-12-20 23:30:40,196 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
|
||||||
|
2025-12-20 23:30:40,197 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/deck.gl Demo/dashboard_export_20251220T203039.zip
|
||||||
|
2025-12-20 23:30:40,197 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/deck.gl Demo
|
||||||
|
2025-12-20 23:30:40,198 - INFO - [export_dashboard][Enter] Exporting dashboard 3.
|
||||||
|
2025-12-20 23:30:40,745 - INFO - [export_dashboard][Exit] Exported dashboard 3 to dashboard_export_20251220T203040.zip.
|
||||||
|
2025-12-20 23:30:40,746 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
|
||||||
|
2025-12-20 23:30:40,760 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/Misc Charts/dashboard_export_20251220T203040.zip
|
||||||
|
2025-12-20 23:30:40,761 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/Misc Charts
|
||||||
|
2025-12-20 23:30:40,762 - INFO - [export_dashboard][Enter] Exporting dashboard 2.
|
||||||
|
2025-12-20 23:30:40,928 - INFO - [export_dashboard][Exit] Exported dashboard 2 to dashboard_export_20251220T203040.zip.
|
||||||
|
2025-12-20 23:30:40,929 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
|
||||||
|
2025-12-20 23:30:40,930 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/USA Births Names/dashboard_export_20251220T203040.zip
|
||||||
|
2025-12-20 23:30:40,931 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/USA Births Names
|
||||||
|
2025-12-20 23:30:40,932 - INFO - [export_dashboard][Enter] Exporting dashboard 1.
|
||||||
|
2025-12-20 23:30:41,582 - INFO - [export_dashboard][Exit] Exported dashboard 1 to dashboard_export_20251220T203040.zip.
|
||||||
|
2025-12-20 23:30:41,582 - INFO - [save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: False
|
||||||
|
2025-12-20 23:30:41,749 - INFO - [save_and_unpack_dashboard][State] Dashboard saved to: backups/SUPERSET/World Bank's Data/dashboard_export_20251220T203040.zip
|
||||||
|
2025-12-20 23:30:41,750 - INFO - [archive_exports][Enter] Managing archive in backups/SUPERSET/World Bank's Data
|
||||||
|
2025-12-20 23:30:41,752 - INFO - [consolidate_archive_folders][Enter] Consolidating archives in backups/SUPERSET
|
||||||
|
2025-12-20 23:30:41,753 - INFO - [remove_empty_directories][Enter] Starting cleanup of empty directories in backups/SUPERSET
|
||||||
|
2025-12-20 23:30:41,758 - INFO - [remove_empty_directories][Exit] Removed 0 empty directories.
|
||||||
|
2025-12-20 23:30:41,758 - INFO - [BackupPlugin][CoherenceCheck:Passed] Backup logic completed for superset.
|
||||||
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
35
backend/delete_running_tasks.py
Normal file
@@ -0,0 +1,35 @@
#!/usr/bin/env python3
"""Script to delete tasks with RUNNING status from the database."""

from sqlalchemy.orm import Session
from src.core.database import TasksSessionLocal
from src.models.task import TaskRecord

def delete_running_tasks():
    """Delete all tasks with RUNNING status from the database."""
    session: Session = TasksSessionLocal()
    try:
        # Find all task records with RUNNING status
        running_tasks = session.query(TaskRecord).filter(TaskRecord.status == "RUNNING").all()

        if not running_tasks:
            print("No RUNNING tasks found.")
            return

        print(f"Found {len(running_tasks)} RUNNING tasks:")
        for task in running_tasks:
            print(f"- Task ID: {task.id}, Type: {task.type}")

        # Delete the found tasks
        session.query(TaskRecord).filter(TaskRecord.status == "RUNNING").delete(synchronize_session=False)
        session.commit()

        print(f"Successfully deleted {len(running_tasks)} RUNNING tasks.")
    except Exception as e:
        session.rollback()
        print(f"Error deleting tasks: {e}")
    finally:
        session.close()

if __name__ == "__main__":
    delete_running_tasks()
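A small read-only companion sketch in the same style as `delete_running_tasks.py`, reusing the same `TasksSessionLocal` and `TaskRecord` imports; the helper below is hypothetical and only counts RUNNING tasks instead of deleting them.

```python
# Hypothetical read-only companion to delete_running_tasks(): it reports how
# many RUNNING tasks exist without modifying the database.
from src.core.database import TasksSessionLocal
from src.models.task import TaskRecord

def count_running_tasks() -> int:
    session = TasksSessionLocal()
    try:
        # Same filter as the deletion script above, but it only counts the rows.
        return session.query(TaskRecord).filter(TaskRecord.status == "RUNNING").count()
    finally:
        session.close()

if __name__ == "__main__":
    print(f"{count_running_tasks()} RUNNING task(s) in the tasks database.")
```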
BIN
backend/mappings.db
Normal file
Binary file not shown.
BIN
backend/migrations.db
Normal file
Binary file not shown.
46
backend/requirements.txt
Executable file
@@ -0,0 +1,46 @@
annotated-doc==0.0.4
annotated-types==0.7.0
anyio==4.12.0
APScheduler==3.11.2
attrs==25.4.0
Authlib==1.6.6
certifi==2025.11.12
cffi==2.0.0
charset-normalizer==3.4.4
click==8.3.1
cryptography==46.0.3
fastapi==0.126.0
greenlet==3.3.0
h11==0.16.0
httpcore==1.0.9
httpx==0.28.1
idna==3.11
jaraco.classes==3.4.0
jaraco.context==6.0.1
jaraco.functools==4.3.0
jeepney==0.9.0
jsonschema==4.25.1
jsonschema-specifications==2025.9.1
keyring==25.7.0
more-itertools==10.8.0
pycparser==2.23
pydantic==2.12.5
pydantic_core==2.41.5
python-multipart==0.0.21
PyYAML==6.0.3
RapidFuzz==3.14.3
referencing==0.37.0
requests==2.32.5
rpds-py==0.30.0
SecretStorage==3.5.0
SQLAlchemy==2.0.45
starlette==0.50.0
typing-inspection==0.4.2
typing_extensions==4.15.0
tzlocal==5.3.1
urllib3==2.6.2
uvicorn==0.38.0
websockets==15.0.1
pandas
psycopg2-binary
openpyxl
59  backend/src/api/auth.py  Executable file
@@ -0,0 +1,59 @@
# [DEF:AuthModule:Module]
# @SEMANTICS: auth, authentication, adfs, oauth, middleware
# @PURPOSE: Implements ADFS authentication using Authlib for FastAPI. It provides a dependency to protect endpoints.
# @LAYER: UI (API)
# @RELATION: Used by API routers to protect endpoints that require authentication.

from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2AuthorizationCodeBearer
from authlib.integrations.starlette_client import OAuth
from starlette.config import Config

# Placeholder for ADFS configuration. In a real app, this would come from a secure source.
# Create an in-memory .env file
from io import StringIO
config_data = StringIO("""
ADFS_CLIENT_ID=your-client-id
ADFS_CLIENT_SECRET=your-client-secret
ADFS_SERVER_METADATA_URL=https://your-adfs-server/.well-known/openid-configuration
""")
config = Config(config_data)
oauth = OAuth(config)

oauth.register(
    name='adfs',
    server_metadata_url=config('ADFS_SERVER_METADATA_URL'),
    client_kwargs={'scope': 'openid profile email'}
)

oauth2_scheme = OAuth2AuthorizationCodeBearer(
    authorizationUrl="https://your-adfs-server/adfs/oauth2/authorize",
    tokenUrl="https://your-adfs-server/adfs/oauth2/token",
)

# [DEF:get_current_user:Function]
# @PURPOSE: Dependency to get the current user from the ADFS token.
# @PARAM: token (str) - The OAuth2 bearer token.
# @PRE: token should be provided via Authorization header.
# @POST: Returns user details if authenticated, else raises 401.
# @RETURN: Dict[str, str] - User information.
async def get_current_user(token: str = Depends(oauth2_scheme)):
    """
    Dependency to get the current user from the ADFS token.
    This is a placeholder and needs to be fully implemented.
    """
    # In a real implementation, you would:
    # 1. Validate the token with ADFS.
    # 2. Fetch user information.
    # 3. Create a user object.
    # For now, we'll just check if a token exists.
    if not token:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Not authenticated",
            headers={"WWW-Authenticate": "Bearer"},
        )
    # A real implementation would return a user object.
    return {"placeholder_user": "user@example.com"}
# [/DEF:get_current_user:Function]
# [/DEF:AuthModule:Module]
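
The dependency above is intended to be attached to any route that requires authentication. A minimal sketch of how a router could use it, assuming the `backend` package (and Authlib) imports cleanly; the `/protected` route and its response shape are illustrative, not part of this diff:

```python
# Illustrative only: wiring get_current_user into a router.
from fastapi import APIRouter, Depends

from backend.src.api.auth import get_current_user  # path as added in this diff

protected_router = APIRouter()


@protected_router.get("/protected")
async def read_protected(user: dict = Depends(get_current_user)):
    # get_current_user raises 401 if no bearer token is supplied;
    # for now it returns the placeholder user dict on success.
    return {"user": user}
```
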
1  backend/src/api/routes/__init__.py  Executable file
@@ -0,0 +1 @@
from . import plugins, tasks, settings, connections
100  backend/src/api/routes/connections.py  Normal file
@@ -0,0 +1,100 @@
# [DEF:ConnectionsRouter:Module]
# @SEMANTICS: api, router, connections, database
# @PURPOSE: Defines the FastAPI router for managing external database connections.
# @LAYER: UI (API)
# @RELATION: Depends on SQLAlchemy session.
# @CONSTRAINT: Must use belief_scope for logging.

# [SECTION: IMPORTS]
from typing import List, Optional
from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy.orm import Session
from ...core.database import get_db
from ...models.connection import ConnectionConfig
from pydantic import BaseModel, Field
from datetime import datetime
from ...core.logger import logger, belief_scope
# [/SECTION]

router = APIRouter()

# [DEF:ConnectionSchema:Class]
# @PURPOSE: Pydantic model for connection response.
class ConnectionSchema(BaseModel):
    id: str
    name: str
    type: str
    host: Optional[str] = None
    port: Optional[int] = None
    database: Optional[str] = None
    username: Optional[str] = None
    created_at: datetime

    class Config:
        orm_mode = True
# [/DEF:ConnectionSchema:Class]

# [DEF:ConnectionCreate:Class]
# @PURPOSE: Pydantic model for creating a connection.
class ConnectionCreate(BaseModel):
    name: str
    type: str
    host: Optional[str] = None
    port: Optional[int] = None
    database: Optional[str] = None
    username: Optional[str] = None
    password: Optional[str] = None
# [/DEF:ConnectionCreate:Class]

# [DEF:list_connections:Function]
# @PURPOSE: Lists all saved connections.
# @PRE: Database session is active.
# @POST: Returns list of connection configs.
# @PARAM: db (Session) - Database session.
# @RETURN: List[ConnectionSchema] - List of connections.
@router.get("", response_model=List[ConnectionSchema])
async def list_connections(db: Session = Depends(get_db)):
    with belief_scope("ConnectionsRouter.list_connections"):
        connections = db.query(ConnectionConfig).all()
        return connections
# [/DEF:list_connections:Function]

# [DEF:create_connection:Function]
# @PURPOSE: Creates a new connection configuration.
# @PRE: Connection name is unique.
# @POST: Connection is saved to DB.
# @PARAM: connection (ConnectionCreate) - Config data.
# @PARAM: db (Session) - Database session.
# @RETURN: ConnectionSchema - Created connection.
@router.post("", response_model=ConnectionSchema, status_code=status.HTTP_201_CREATED)
async def create_connection(connection: ConnectionCreate, db: Session = Depends(get_db)):
    with belief_scope("ConnectionsRouter.create_connection", f"name={connection.name}"):
        db_connection = ConnectionConfig(**connection.dict())
        db.add(db_connection)
        db.commit()
        db.refresh(db_connection)
        logger.info(f"[ConnectionsRouter.create_connection][Success] Created connection {db_connection.id}")
        return db_connection
# [/DEF:create_connection:Function]

# [DEF:delete_connection:Function]
# @PURPOSE: Deletes a connection configuration.
# @PRE: Connection ID exists.
# @POST: Connection is removed from DB.
# @PARAM: connection_id (str) - ID to delete.
# @PARAM: db (Session) - Database session.
# @RETURN: None.
@router.delete("/{connection_id}", status_code=status.HTTP_204_NO_CONTENT)
async def delete_connection(connection_id: str, db: Session = Depends(get_db)):
    with belief_scope("ConnectionsRouter.delete_connection", f"id={connection_id}"):
        db_connection = db.query(ConnectionConfig).filter(ConnectionConfig.id == connection_id).first()
        if not db_connection:
            logger.error(f"[ConnectionsRouter.delete_connection][State] Connection {connection_id} not found")
            raise HTTPException(status_code=404, detail="Connection not found")
        db.delete(db_connection)
        db.commit()
        logger.info(f"[ConnectionsRouter.delete_connection][Success] Deleted connection {connection_id}")
        return
# [/DEF:delete_connection:Function]

# [/DEF:ConnectionsRouter:Module]
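
Since `app.py` mounts this router under `/api/settings/connections`, a connection can be created and listed over plain HTTP. A small sketch with `requests` (pinned in requirements.txt); the base URL and the connection details are assumptions for illustration:

```python
import requests

BASE = "http://localhost:8000/api/settings/connections"  # assumed local dev address

# Create a connection config; the fields mirror ConnectionCreate above.
payload = {
    "name": "analytics-dwh",
    "type": "postgresql",
    "host": "db.internal",
    "port": 5432,
    "database": "dwh",
    "username": "etl",
    "password": "secret",
}
created = requests.post(BASE, json=payload, timeout=10)
created.raise_for_status()

# List saved connections; ConnectionSchema deliberately omits the password field.
print(requests.get(BASE, timeout=10).json())
```
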
122  backend/src/api/routes/environments.py  Normal file
@@ -0,0 +1,122 @@
# [DEF:backend.src.api.routes.environments:Module]
#
# @SEMANTICS: api, environments, superset, databases
# @PURPOSE: API endpoints for listing environments and their databases.
# @LAYER: API
# @RELATION: DEPENDS_ON -> backend.src.dependencies
# @RELATION: DEPENDS_ON -> backend.src.core.superset_client
#
# @INVARIANT: Environment IDs must exist in the configuration.

# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException
from typing import List, Dict, Optional
from ...dependencies import get_config_manager, get_scheduler_service
from ...core.superset_client import SupersetClient
from pydantic import BaseModel, Field
from ...core.config_models import Environment as EnvModel
from ...core.logger import belief_scope
# [/SECTION]

router = APIRouter()

# [DEF:ScheduleSchema:DataClass]
class ScheduleSchema(BaseModel):
    enabled: bool = False
    cron_expression: str = Field(..., pattern=r'^(@(annually|yearly|monthly|weekly|daily|hourly|reboot))|((((\d+,)*\d+|(\d+(\/|-)\d+)|\d+|\*) ?){5,7})$')
# [/DEF:ScheduleSchema:DataClass]

# [DEF:EnvironmentResponse:DataClass]
class EnvironmentResponse(BaseModel):
    id: str
    name: str
    url: str
    backup_schedule: Optional[ScheduleSchema] = None
# [/DEF:EnvironmentResponse:DataClass]

# [DEF:DatabaseResponse:DataClass]
class DatabaseResponse(BaseModel):
    uuid: str
    database_name: str
    engine: Optional[str]
# [/DEF:DatabaseResponse:DataClass]

# [DEF:get_environments:Function]
# @PURPOSE: List all configured environments.
# @PRE: config_manager is injected via Depends.
# @POST: Returns a list of EnvironmentResponse objects.
# @RETURN: List[EnvironmentResponse]
@router.get("", response_model=List[EnvironmentResponse])
async def get_environments(config_manager=Depends(get_config_manager)):
    with belief_scope("get_environments"):
        envs = config_manager.get_environments()
        # Ensure envs is a list
        if not isinstance(envs, list):
            envs = []
        return [
            EnvironmentResponse(
                id=e.id,
                name=e.name,
                url=e.url,
                backup_schedule=ScheduleSchema(
                    enabled=e.backup_schedule.enabled,
                    cron_expression=e.backup_schedule.cron_expression
                ) if e.backup_schedule else None
            ) for e in envs
        ]
# [/DEF:get_environments:Function]

# [DEF:update_environment_schedule:Function]
# @PURPOSE: Update backup schedule for an environment.
# @PRE: Environment id exists, schedule is valid ScheduleSchema.
# @POST: Backup schedule updated and scheduler reloaded.
# @PARAM: id (str) - The environment ID.
# @PARAM: schedule (ScheduleSchema) - The new schedule.
@router.put("/{id}/schedule")
async def update_environment_schedule(
    id: str,
    schedule: ScheduleSchema,
    config_manager=Depends(get_config_manager),
    scheduler_service=Depends(get_scheduler_service)
):
    with belief_scope("update_environment_schedule", f"id={id}"):
        envs = config_manager.get_environments()
        env = next((e for e in envs if e.id == id), None)
        if not env:
            raise HTTPException(status_code=404, detail="Environment not found")

        # Update environment config
        env.backup_schedule.enabled = schedule.enabled
        env.backup_schedule.cron_expression = schedule.cron_expression

        config_manager.update_environment(id, env)

        # Refresh scheduler
        scheduler_service.load_schedules()

        return {"message": "Schedule updated successfully"}
# [/DEF:update_environment_schedule:Function]

# [DEF:get_environment_databases:Function]
# @PURPOSE: Fetch the list of databases from a specific environment.
# @PRE: Environment id exists.
# @POST: Returns a list of database summaries from the environment.
# @PARAM: id (str) - The environment ID.
# @RETURN: List[Dict] - List of databases.
@router.get("/{id}/databases")
async def get_environment_databases(id: str, config_manager=Depends(get_config_manager)):
    with belief_scope("get_environment_databases", f"id={id}"):
        envs = config_manager.get_environments()
        env = next((e for e in envs if e.id == id), None)
        if not env:
            raise HTTPException(status_code=404, detail="Environment not found")

        try:
            # Initialize SupersetClient from environment config
            client = SupersetClient(env)
            return client.get_databases_summary()
        except Exception as e:
            raise HTTPException(status_code=500, detail=f"Failed to fetch databases: {str(e)}")
# [/DEF:get_environment_databases:Function]

# [/DEF:backend.src.api.routes.environments:Module]
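
The `cron_expression` field is guarded by the regex on `ScheduleSchema`, so malformed schedules are rejected at the API boundary before they ever reach the scheduler. A quick illustration, assuming Pydantic v2 (as pinned in requirements.txt) and that the `backend` package imports cleanly in your environment:

```python
from pydantic import ValidationError

from backend.src.api.routes.environments import ScheduleSchema  # path as added in this diff

# A standard five-field cron expression passes validation.
ok = ScheduleSchema(enabled=True, cron_expression="0 3 * * *")
print(ok.cron_expression)

# Free-form text fails the pattern and raises before the scheduler sees it.
try:
    ScheduleSchema(enabled=True, cron_expression="every day at 3am")
except ValidationError as exc:
    print(exc.errors()[0]["type"])
```
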
119  backend/src/api/routes/mappings.py  Normal file
@@ -0,0 +1,119 @@
# [DEF:backend.src.api.routes.mappings:Module]
#
# @SEMANTICS: api, mappings, database, fuzzy-matching
# @PURPOSE: API endpoints for managing database mappings and getting suggestions.
# @LAYER: API
# @RELATION: DEPENDS_ON -> backend.src.dependencies
# @RELATION: DEPENDS_ON -> backend.src.core.database
# @RELATION: DEPENDS_ON -> backend.src.services.mapping_service
#
# @INVARIANT: Mappings are persisted in the SQLite database.

# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from typing import List, Optional
from ...dependencies import get_config_manager
from ...core.database import get_db
from ...models.mapping import DatabaseMapping
from ...core.logger import belief_scope
from pydantic import BaseModel
# [/SECTION]

router = APIRouter(prefix="/api/mappings", tags=["mappings"])

# [DEF:MappingCreate:DataClass]
class MappingCreate(BaseModel):
    source_env_id: str
    target_env_id: str
    source_db_uuid: str
    target_db_uuid: str
    source_db_name: str
    target_db_name: str
# [/DEF:MappingCreate:DataClass]

# [DEF:MappingResponse:DataClass]
class MappingResponse(BaseModel):
    id: str
    source_env_id: str
    target_env_id: str
    source_db_uuid: str
    target_db_uuid: str
    source_db_name: str
    target_db_name: str

    class Config:
        from_attributes = True
# [/DEF:MappingResponse:DataClass]

# [DEF:SuggestRequest:DataClass]
class SuggestRequest(BaseModel):
    source_env_id: str
    target_env_id: str
# [/DEF:SuggestRequest:DataClass]

# [DEF:get_mappings:Function]
# @PURPOSE: List all saved database mappings.
# @PRE: db session is injected.
# @POST: Returns filtered list of DatabaseMapping records.
@router.get("", response_model=List[MappingResponse])
async def get_mappings(
    source_env_id: Optional[str] = None,
    target_env_id: Optional[str] = None,
    db: Session = Depends(get_db)
):
    with belief_scope("get_mappings"):
        query = db.query(DatabaseMapping)
        if source_env_id:
            query = query.filter(DatabaseMapping.source_env_id == source_env_id)
        if target_env_id:
            query = query.filter(DatabaseMapping.target_env_id == target_env_id)
        return query.all()
# [/DEF:get_mappings:Function]

# [DEF:create_mapping:Function]
# @PURPOSE: Create or update a database mapping.
# @PRE: mapping is valid MappingCreate, db session is injected.
# @POST: DatabaseMapping created or updated in database.
@router.post("", response_model=MappingResponse)
async def create_mapping(mapping: MappingCreate, db: Session = Depends(get_db)):
    with belief_scope("create_mapping"):
        # Check if mapping already exists
        existing = db.query(DatabaseMapping).filter(
            DatabaseMapping.source_env_id == mapping.source_env_id,
            DatabaseMapping.target_env_id == mapping.target_env_id,
            DatabaseMapping.source_db_uuid == mapping.source_db_uuid
        ).first()

        if existing:
            existing.target_db_uuid = mapping.target_db_uuid
            existing.target_db_name = mapping.target_db_name
            db.commit()
            db.refresh(existing)
            return existing

        new_mapping = DatabaseMapping(**mapping.dict())
        db.add(new_mapping)
        db.commit()
        db.refresh(new_mapping)
        return new_mapping
# [/DEF:create_mapping:Function]

# [DEF:suggest_mappings_api:Function]
# @PURPOSE: Get suggested mappings based on fuzzy matching.
# @PRE: request is valid SuggestRequest, config_manager is injected.
# @POST: Returns mapping suggestions.
@router.post("/suggest")
async def suggest_mappings_api(
    request: SuggestRequest,
    config_manager=Depends(get_config_manager)
):
    with belief_scope("suggest_mappings_api"):
        from backend.src.services.mapping_service import MappingService
        service = MappingService(config_manager)
        try:
            return await service.get_suggestions(request.source_env_id, request.target_env_id)
        except Exception as e:
            raise HTTPException(status_code=500, detail=str(e))
# [/DEF:suggest_mappings_api:Function]

# [/DEF:backend.src.api.routes.mappings:Module]
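
`MappingService.get_suggestions` is not part of this diff, but `RapidFuzz` is pinned in requirements.txt, so the suggestion step presumably reduces to name similarity between source and target databases. A standalone sketch of that idea only; the database names, threshold, and output shape are invented for illustration and are not the service's actual contract:

```python
from rapidfuzz import fuzz

source_dbs = [{"uuid": "s-1", "database_name": "sales_dwh"}]
target_dbs = [
    {"uuid": "t-1", "database_name": "sales_dwh_prod"},
    {"uuid": "t-2", "database_name": "hr_reporting"},
]

suggestions = []
for src in source_dbs:
    # Pick the target whose name is most similar to the source name.
    best = max(
        target_dbs,
        key=lambda tgt: fuzz.token_sort_ratio(src["database_name"], tgt["database_name"]),
    )
    score = fuzz.token_sort_ratio(src["database_name"], best["database_name"])
    if score >= 70:  # arbitrary cut-off for this sketch
        suggestions.append({"source": src["uuid"], "target": best["uuid"], "score": score})

print(suggestions)
```
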
68  backend/src/api/routes/migration.py  Normal file
@@ -0,0 +1,68 @@
# [DEF:backend.src.api.routes.migration:Module]
# @SEMANTICS: api, migration, dashboards
# @PURPOSE: API endpoints for migration operations.
# @LAYER: API
# @RELATION: DEPENDS_ON -> backend.src.dependencies
# @RELATION: DEPENDS_ON -> backend.src.models.dashboard

from fastapi import APIRouter, Depends, HTTPException
from typing import List, Dict
from ...dependencies import get_config_manager, get_task_manager
from ...models.dashboard import DashboardMetadata, DashboardSelection
from ...core.superset_client import SupersetClient

router = APIRouter(prefix="/api", tags=["migration"])

# [DEF:get_dashboards:Function]
# @PURPOSE: Fetch all dashboards from the specified environment for the grid.
# @PRE: Environment ID must be valid.
# @POST: Returns a list of dashboard metadata.
# @PARAM: env_id (str) - The ID of the environment to fetch from.
# @RETURN: List[DashboardMetadata]
@router.get("/environments/{env_id}/dashboards", response_model=List[DashboardMetadata])
async def get_dashboards(env_id: str, config_manager=Depends(get_config_manager)):
    environments = config_manager.get_environments()
    env = next((e for e in environments if e.id == env_id), None)
    if not env:
        raise HTTPException(status_code=404, detail="Environment not found")

    client = SupersetClient(env)
    dashboards = client.get_dashboards_summary()
    return dashboards
# [/DEF:get_dashboards:Function]

# [DEF:execute_migration:Function]
# @PURPOSE: Execute the migration of selected dashboards.
# @PRE: Selection must be valid and environments must exist.
# @POST: Starts the migration task and returns the task ID.
# @PARAM: selection (DashboardSelection) - The dashboards to migrate.
# @RETURN: Dict - {"task_id": str, "message": str}
@router.post("/migration/execute")
async def execute_migration(selection: DashboardSelection, config_manager=Depends(get_config_manager), task_manager=Depends(get_task_manager)):
    # Validate environments exist
    environments = config_manager.get_environments()
    env_ids = {e.id for e in environments}
    if selection.source_env_id not in env_ids or selection.target_env_id not in env_ids:
        raise HTTPException(status_code=400, detail="Invalid source or target environment")

    # Create migration task with debug logging
    from ...core.logger import logger

    # Include replace_db_config in the task parameters
    task_params = selection.dict()
    task_params['replace_db_config'] = selection.replace_db_config

    logger.info(f"Creating migration task with params: {task_params}")
    logger.info(f"Available environments: {env_ids}")
    logger.info(f"Source env: {selection.source_env_id}, Target env: {selection.target_env_id}")

    try:
        task = await task_manager.create_task("superset-migration", task_params)
        logger.info(f"Task created successfully: {task.id}")
        return {"task_id": task.id, "message": "Migration initiated"}
    except Exception as e:
        logger.error(f"Task creation failed: {e}")
        raise HTTPException(status_code=500, detail=f"Failed to create migration task: {str(e)}")
# [/DEF:execute_migration:Function]

# [/DEF:backend.src.api.routes.migration:Module]
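
A typical client flow is to list dashboards from the source environment and then post a `DashboardSelection` to `/api/migration/execute`. The sketch below only exercises the GET endpoint, because the exact selection fields live in `models/dashboard.py`, which is outside this diff; the base URL and `env_id` are assumptions:

```python
import requests

BASE = "http://localhost:8000/api"  # assumed local dev address
env_id = "dev"                      # assumed environment id from config.json

resp = requests.get(f"{BASE}/environments/{env_id}/dashboards", timeout=30)
resp.raise_for_status()

for dash in resp.json():
    # Each item is a serialized DashboardMetadata (defined outside this diff).
    print(dash)

# A migration would then be started with:
#   requests.post(f"{BASE}/migration/execute", json=<DashboardSelection payload>)
# which returns {"task_id": ..., "message": "Migration initiated"} on success.
```
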
30  backend/src/api/routes/plugins.py  Executable file
@@ -0,0 +1,30 @@
# [DEF:PluginsRouter:Module]
# @SEMANTICS: api, router, plugins, list
# @PURPOSE: Defines the FastAPI router for plugin-related endpoints, allowing clients to list available plugins.
# @LAYER: UI (API)
# @RELATION: Depends on the PluginLoader and PluginConfig. It is included by the main app.
from typing import List
from fastapi import APIRouter, Depends

from ...core.plugin_base import PluginConfig
from ...dependencies import get_plugin_loader
from ...core.logger import belief_scope

router = APIRouter()

# [DEF:list_plugins:Function]
# @PURPOSE: Retrieve a list of all available plugins.
# @PRE: plugin_loader is injected via Depends.
# @POST: Returns a list of PluginConfig objects.
# @RETURN: List[PluginConfig] - List of registered plugins.
@router.get("", response_model=List[PluginConfig])
async def list_plugins(
    plugin_loader = Depends(get_plugin_loader)
):
    with belief_scope("list_plugins"):
        """
        Retrieve a list of all available plugins.
        """
        return plugin_loader.get_all_plugin_configs()
# [/DEF:list_plugins:Function]
# [/DEF:PluginsRouter:Module]
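
Because `app.py` includes this router under `/api/plugins`, the endpoint can be smoke-tested in-process with FastAPI's TestClient. A sketch, assuming the `backend` package and its configuration import cleanly in the test environment; the assertions only check the response shape, since the set of installed plugins depends on the deployment:

```python
from fastapi.testclient import TestClient

from backend.src.app import app  # assumes the backend package is importable

client = TestClient(app)

response = client.get("/api/plugins")
assert response.status_code == 200
# Each entry is a serialized PluginConfig; the concrete fields come from core/plugin_base.py.
assert isinstance(response.json(), list)
```
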
208  backend/src/api/routes/settings.py  Executable file
@@ -0,0 +1,208 @@
# [DEF:SettingsRouter:Module]
#
# @SEMANTICS: settings, api, router, fastapi
# @PURPOSE: Provides API endpoints for managing application settings and Superset environments.
# @LAYER: UI (API)
# @RELATION: DEPENDS_ON -> ConfigManager
# @RELATION: DEPENDS_ON -> ConfigModels
#
# @INVARIANT: All settings changes must be persisted via ConfigManager.
# @PUBLIC_API: router

# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException
from typing import List
from ...core.config_models import AppConfig, Environment, GlobalSettings
from ...dependencies import get_config_manager
from ...core.config_manager import ConfigManager
from ...core.logger import logger, belief_scope
from ...core.superset_client import SupersetClient
import os
# [/SECTION]

router = APIRouter()

# [DEF:get_settings:Function]
# @PURPOSE: Retrieves all application settings.
# @PRE: Config manager is available.
# @POST: Returns masked AppConfig.
# @RETURN: AppConfig - The current configuration.
@router.get("", response_model=AppConfig)
async def get_settings(config_manager: ConfigManager = Depends(get_config_manager)):
    with belief_scope("get_settings"):
        logger.info("[get_settings][Entry] Fetching all settings")
        config = config_manager.get_config().copy(deep=True)
        # Mask passwords
        for env in config.environments:
            if env.password:
                env.password = "********"
        return config
# [/DEF:get_settings:Function]

# [DEF:update_global_settings:Function]
# @PURPOSE: Updates global application settings.
# @PRE: New settings are provided.
# @POST: Global settings are updated.
# @PARAM: settings (GlobalSettings) - The new global settings.
# @RETURN: GlobalSettings - The updated settings.
@router.patch("/global", response_model=GlobalSettings)
async def update_global_settings(
    settings: GlobalSettings,
    config_manager: ConfigManager = Depends(get_config_manager)
):
    with belief_scope("update_global_settings"):
        logger.info("[update_global_settings][Entry] Updating global settings")
        config_manager.update_global_settings(settings)
        return settings
# [/DEF:update_global_settings:Function]

# [DEF:get_environments:Function]
# @PURPOSE: Lists all configured Superset environments.
# @PRE: Config manager is available.
# @POST: Returns list of environments.
# @RETURN: List[Environment] - List of environments.
@router.get("/environments", response_model=List[Environment])
async def get_environments(config_manager: ConfigManager = Depends(get_config_manager)):
    with belief_scope("get_environments"):
        logger.info("[get_environments][Entry] Fetching environments")
        return config_manager.get_environments()
# [/DEF:get_environments:Function]

# [DEF:add_environment:Function]
# @PURPOSE: Adds a new Superset environment.
# @PRE: Environment data is valid and reachable.
# @POST: Environment is added to config.
# @PARAM: env (Environment) - The environment to add.
# @RETURN: Environment - The added environment.
@router.post("/environments", response_model=Environment)
async def add_environment(
    env: Environment,
    config_manager: ConfigManager = Depends(get_config_manager)
):
    with belief_scope("add_environment"):
        logger.info(f"[add_environment][Entry] Adding environment {env.id}")

        # Validate connection before adding
        try:
            client = SupersetClient(env)
            client.get_dashboards(query={"page_size": 1})
        except Exception as e:
            logger.error(f"[add_environment][Coherence:Failed] Connection validation failed: {e}")
            raise HTTPException(status_code=400, detail=f"Connection validation failed: {e}")

        config_manager.add_environment(env)
        return env
# [/DEF:add_environment:Function]

# [DEF:update_environment:Function]
# @PURPOSE: Updates an existing Superset environment.
# @PRE: ID and valid environment data are provided.
# @POST: Environment is updated in config.
# @PARAM: id (str) - The ID of the environment to update.
# @PARAM: env (Environment) - The updated environment data.
# @RETURN: Environment - The updated environment.
@router.put("/environments/{id}", response_model=Environment)
async def update_environment(
    id: str,
    env: Environment,
    config_manager: ConfigManager = Depends(get_config_manager)
):
    with belief_scope("update_environment"):
        logger.info(f"[update_environment][Entry] Updating environment {id}")

        # If password is masked, we need the real one for validation
        env_to_validate = env.copy(deep=True)
        if env_to_validate.password == "********":
            old_env = next((e for e in config_manager.get_environments() if e.id == id), None)
            if old_env:
                env_to_validate.password = old_env.password

        # Validate connection before updating
        try:
            client = SupersetClient(env_to_validate)
            client.get_dashboards(query={"page_size": 1})
        except Exception as e:
            logger.error(f"[update_environment][Coherence:Failed] Connection validation failed: {e}")
            raise HTTPException(status_code=400, detail=f"Connection validation failed: {e}")

        if config_manager.update_environment(id, env):
            return env
        raise HTTPException(status_code=404, detail=f"Environment {id} not found")
# [/DEF:update_environment:Function]

# [DEF:delete_environment:Function]
# @PURPOSE: Deletes a Superset environment.
# @PRE: ID is provided.
# @POST: Environment is removed from config.
# @PARAM: id (str) - The ID of the environment to delete.
@router.delete("/environments/{id}")
async def delete_environment(
    id: str,
    config_manager: ConfigManager = Depends(get_config_manager)
):
    with belief_scope("delete_environment"):
        logger.info(f"[delete_environment][Entry] Deleting environment {id}")
        config_manager.delete_environment(id)
        return {"message": f"Environment {id} deleted"}
# [/DEF:delete_environment:Function]

# [DEF:test_environment_connection:Function]
# @PURPOSE: Tests the connection to a Superset environment.
# @PRE: ID is provided.
# @POST: Returns success or error status.
# @PARAM: id (str) - The ID of the environment to test.
# @RETURN: dict - Success message or error.
@router.post("/environments/{id}/test")
async def test_environment_connection(
    id: str,
    config_manager: ConfigManager = Depends(get_config_manager)
):
    with belief_scope("test_environment_connection"):
        logger.info(f"[test_environment_connection][Entry] Testing environment {id}")

        # Find environment
        env = next((e for e in config_manager.get_environments() if e.id == id), None)
        if not env:
            raise HTTPException(status_code=404, detail=f"Environment {id} not found")

        try:
            # Initialize client (this will trigger authentication)
            client = SupersetClient(env)

            # Try a simple request to verify
            client.get_dashboards(query={"page_size": 1})

            logger.info(f"[test_environment_connection][Coherence:OK] Connection successful for {id}")
            return {"status": "success", "message": "Connection successful"}
        except Exception as e:
            logger.error(f"[test_environment_connection][Coherence:Failed] Connection failed for {id}: {e}")
            return {"status": "error", "message": str(e)}
# [/DEF:test_environment_connection:Function]

# [DEF:validate_backup_path:Function]
# @PURPOSE: Validates if a backup path exists and is writable.
# @PRE: Path is provided in path_data.
# @POST: Returns success or error status.
# @PARAM: path (str) - The path to validate.
# @RETURN: dict - Validation result.
@router.post("/validate-path")
async def validate_backup_path(
    path_data: dict,
    config_manager: ConfigManager = Depends(get_config_manager)
):
    with belief_scope("validate_backup_path"):
        path = path_data.get("path")
        if not path:
            raise HTTPException(status_code=400, detail="Path is required")

        logger.info(f"[validate_backup_path][Entry] Validating path: {path}")

        valid, message = config_manager.validate_path(path)

        if not valid:
            return {"status": "error", "message": message}

        return {"status": "success", "message": message}
# [/DEF:validate_backup_path:Function]

# [/DEF:SettingsRouter:Module]
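
One subtlety worth noting: `get_settings` masks every environment password as `********`, and `update_environment` treats that exact sentinel as "keep the stored password", so a client can safely echo back what it received from GET. A short sketch of the path-validation endpoint, which follows the same request/response conventions; the base URL and path are assumptions:

```python
import requests

BASE = "http://localhost:8000/api/settings"  # assumed local dev address

# Ask the backend whether a backup directory exists and is writable;
# ConfigManager.validate_path will try to create it if it is missing.
result = requests.post(f"{BASE}/validate-path", json={"path": "/var/backups/superset"}, timeout=10)
print(result.json())  # {"status": "success" | "error", "message": ...}
```
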
187  backend/src/api/routes/tasks.py  Executable file
@@ -0,0 +1,187 @@
# [DEF:TasksRouter:Module]
# @SEMANTICS: api, router, tasks, create, list, get
# @PURPOSE: Defines the FastAPI router for task-related endpoints, allowing clients to create, list, and get the status of tasks.
# @LAYER: UI (API)
# @RELATION: Depends on the TaskManager. It is included by the main app.
from typing import List, Dict, Any, Optional
from fastapi import APIRouter, Depends, HTTPException, status
from pydantic import BaseModel
from ...core.logger import belief_scope

from ...core.task_manager import TaskManager, Task, TaskStatus, LogEntry
from ...dependencies import get_task_manager

router = APIRouter()

class CreateTaskRequest(BaseModel):
    plugin_id: str
    params: Dict[str, Any]

class ResolveTaskRequest(BaseModel):
    resolution_params: Dict[str, Any]

class ResumeTaskRequest(BaseModel):
    passwords: Dict[str, str]

@router.post("", response_model=Task, status_code=status.HTTP_201_CREATED)
# [DEF:create_task:Function]
# @PURPOSE: Create and start a new task for a given plugin.
# @PARAM: request (CreateTaskRequest) - The request body containing plugin_id and params.
# @PARAM: task_manager (TaskManager) - The task manager instance.
# @PRE: plugin_id must exist and params must be valid for that plugin.
# @POST: A new task is created and started.
# @RETURN: Task - The created task instance.
async def create_task(
    request: CreateTaskRequest,
    task_manager: TaskManager = Depends(get_task_manager)
):
    """
    Create and start a new task for a given plugin.
    """
    with belief_scope("create_task"):
        try:
            task = await task_manager.create_task(
                plugin_id=request.plugin_id,
                params=request.params
            )
            return task
        except ValueError as e:
            raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=str(e))
# [/DEF:create_task:Function]

@router.get("", response_model=List[Task])
# [DEF:list_tasks:Function]
# @PURPOSE: Retrieve a list of tasks with pagination and optional status filter.
# @PARAM: limit (int) - Maximum number of tasks to return.
# @PARAM: offset (int) - Number of tasks to skip.
# @PARAM: status (Optional[TaskStatus]) - Filter by task status.
# @PARAM: task_manager (TaskManager) - The task manager instance.
# @PRE: task_manager must be available.
# @POST: Returns a list of tasks.
# @RETURN: List[Task] - List of tasks.
async def list_tasks(
    limit: int = 10,
    offset: int = 0,
    status: Optional[TaskStatus] = None,
    task_manager: TaskManager = Depends(get_task_manager)
):
    """
    Retrieve a list of tasks with pagination and optional status filter.
    """
    with belief_scope("list_tasks"):
        return task_manager.get_tasks(limit=limit, offset=offset, status=status)
# [/DEF:list_tasks:Function]

@router.get("/{task_id}", response_model=Task)
# [DEF:get_task:Function]
# @PURPOSE: Retrieve the details of a specific task.
# @PARAM: task_id (str) - The unique identifier of the task.
# @PARAM: task_manager (TaskManager) - The task manager instance.
# @PRE: task_id must exist.
# @POST: Returns task details or raises 404.
# @RETURN: Task - The task details.
async def get_task(
    task_id: str,
    task_manager: TaskManager = Depends(get_task_manager)
):
    """
    Retrieve the details of a specific task.
    """
    with belief_scope("get_task"):
        task = task_manager.get_task(task_id)
        if not task:
            raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Task not found")
        return task
# [/DEF:get_task:Function]

@router.get("/{task_id}/logs", response_model=List[LogEntry])
# [DEF:get_task_logs:Function]
# @PURPOSE: Retrieve logs for a specific task.
# @PARAM: task_id (str) - The unique identifier of the task.
# @PARAM: task_manager (TaskManager) - The task manager instance.
# @PRE: task_id must exist.
# @POST: Returns a list of log entries or raises 404.
# @RETURN: List[LogEntry] - List of log entries.
async def get_task_logs(
    task_id: str,
    task_manager: TaskManager = Depends(get_task_manager)
):
    """
    Retrieve logs for a specific task.
    """
    with belief_scope("get_task_logs"):
        task = task_manager.get_task(task_id)
        if not task:
            raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Task not found")
        return task_manager.get_task_logs(task_id)
# [/DEF:get_task_logs:Function]

@router.post("/{task_id}/resolve", response_model=Task)
# [DEF:resolve_task:Function]
# @PURPOSE: Resolve a task that is awaiting mapping.
# @PARAM: task_id (str) - The unique identifier of the task.
# @PARAM: request (ResolveTaskRequest) - The resolution parameters.
# @PARAM: task_manager (TaskManager) - The task manager instance.
# @PRE: task must be in AWAITING_MAPPING status.
# @POST: Task is resolved and resumes execution.
# @RETURN: Task - The updated task object.
async def resolve_task(
    task_id: str,
    request: ResolveTaskRequest,
    task_manager: TaskManager = Depends(get_task_manager)
):
    """
    Resolve a task that is awaiting mapping.
    """
    with belief_scope("resolve_task"):
        try:
            await task_manager.resolve_task(task_id, request.resolution_params)
            return task_manager.get_task(task_id)
        except ValueError as e:
            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))
# [/DEF:resolve_task:Function]

@router.post("/{task_id}/resume", response_model=Task)
# [DEF:resume_task:Function]
# @PURPOSE: Resume a task that is awaiting input (e.g., passwords).
# @PARAM: task_id (str) - The unique identifier of the task.
# @PARAM: request (ResumeTaskRequest) - The input (passwords).
# @PARAM: task_manager (TaskManager) - The task manager instance.
# @PRE: task must be in AWAITING_INPUT status.
# @POST: Task resumes execution with provided input.
# @RETURN: Task - The updated task object.
async def resume_task(
    task_id: str,
    request: ResumeTaskRequest,
    task_manager: TaskManager = Depends(get_task_manager)
):
    """
    Resume a task that is awaiting input (e.g., passwords).
    """
    with belief_scope("resume_task"):
        try:
            task_manager.resume_task_with_password(task_id, request.passwords)
            return task_manager.get_task(task_id)
        except ValueError as e:
            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))
# [/DEF:resume_task:Function]

@router.delete("", status_code=status.HTTP_204_NO_CONTENT)
# [DEF:clear_tasks:Function]
# @PURPOSE: Clear tasks matching the status filter.
# @PARAM: status (Optional[TaskStatus]) - Filter by task status.
# @PARAM: task_manager (TaskManager) - The task manager instance.
# @PRE: task_manager is available.
# @POST: Tasks are removed from memory/persistence.
async def clear_tasks(
    status: Optional[TaskStatus] = None,
    task_manager: TaskManager = Depends(get_task_manager)
):
    """
    Clear tasks matching the status filter. If no filter, clears all non-running tasks.
    """
    with belief_scope("clear_tasks", f"status={status}"):
        task_manager.clear_tasks(status)
        return
# [/DEF:clear_tasks:Function]
# [/DEF:TasksRouter:Module]
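
End to end, a client creates a task and then polls it (or its logs) until it reaches a terminal state. A minimal polling sketch with `requests`; the base URL, plugin id, params, and the set of non-terminal status strings are assumptions, since `TaskStatus` is defined in `core/task_manager.py`, which is not shown in this diff:

```python
import time

import requests

BASE = "http://localhost:8000/api/tasks"  # assumed local dev address

# Create a task; a 404 comes back if the plugin id is unknown.
created = requests.post(BASE, json={"plugin_id": "superset-migration", "params": {}}, timeout=10)
created.raise_for_status()
task_id = created.json()["id"]

# Poll the task until it leaves an assumed non-terminal state, then dump its logs.
while True:
    task = requests.get(f"{BASE}/{task_id}", timeout=10).json()
    if task["status"] not in ("PENDING", "RUNNING"):  # assumed status names
        break
    time.sleep(2)

for entry in requests.get(f"{BASE}/{task_id}/logs", timeout=10).json():
    print(entry["timestamp"], entry["level"], entry["message"])
```
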
188  backend/src/app.py  Executable file
@@ -0,0 +1,188 @@
# [DEF:AppModule:Module]
# @SEMANTICS: app, main, entrypoint, fastapi
# @PURPOSE: The main entry point for the FastAPI application. It initializes the app, configures CORS, sets up dependencies, includes API routers, and defines the WebSocket endpoint for log streaming.
# @LAYER: UI (API)
# @RELATION: Depends on the dependency module and API route modules.
import sys
from pathlib import Path

# project_root is used for static files mounting
project_root = Path(__file__).resolve().parent.parent.parent

from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Depends, Request, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.responses import FileResponse
import asyncio
import os

from .dependencies import get_task_manager, get_scheduler_service
from .core.logger import logger, belief_scope
from .api.routes import plugins, tasks, settings, environments, mappings, migration, connections
from .core.database import init_db

# [DEF:App:Global]
# @SEMANTICS: app, fastapi, instance
# @PURPOSE: The global FastAPI application instance.
app = FastAPI(
    title="Superset Tools API",
    description="API for managing Superset automation tools and plugins.",
    version="1.0.0",
)
# [/DEF:App:Global]

# [DEF:startup_event:Function]
# @PURPOSE: Handles application startup tasks, such as starting the scheduler.
# @PRE: None.
# @POST: Scheduler is started.
# Startup event
@app.on_event("startup")
async def startup_event():
    with belief_scope("startup_event"):
        scheduler = get_scheduler_service()
        scheduler.start()
# [/DEF:startup_event:Function]

# [DEF:shutdown_event:Function]
# @PURPOSE: Handles application shutdown tasks, such as stopping the scheduler.
# @PRE: None.
# @POST: Scheduler is stopped.
# Shutdown event
@app.on_event("shutdown")
async def shutdown_event():
    with belief_scope("shutdown_event"):
        scheduler = get_scheduler_service()
        scheduler.stop()
# [/DEF:shutdown_event:Function]

# Configure CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Adjust this in production
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# [DEF:log_requests:Function]
# @PURPOSE: Middleware to log incoming HTTP requests and their response status.
# @PRE: request is a FastAPI Request object.
# @POST: Logs request and response details.
# @PARAM: request (Request) - The incoming request object.
# @PARAM: call_next (Callable) - The next middleware or route handler.
@app.middleware("http")
async def log_requests(request: Request, call_next):
    with belief_scope("log_requests", f"{request.method} {request.url.path}"):
        logger.info(f"[DEBUG] Incoming request: {request.method} {request.url.path}")
        response = await call_next(request)
        logger.info(f"[DEBUG] Response status: {response.status_code} for {request.url.path}")
        return response
# [/DEF:log_requests:Function]

# Include API routes
app.include_router(plugins.router, prefix="/api/plugins", tags=["Plugins"])
app.include_router(tasks.router, prefix="/api/tasks", tags=["Tasks"])
app.include_router(settings.router, prefix="/api/settings", tags=["Settings"])
app.include_router(connections.router, prefix="/api/settings/connections", tags=["Connections"])
app.include_router(environments.router, prefix="/api/environments", tags=["Environments"])
app.include_router(mappings.router)
app.include_router(migration.router)

# [DEF:websocket_endpoint:Function]
# @PURPOSE: Provides a WebSocket endpoint for real-time log streaming of a task.
# @PRE: task_id must be a valid task ID.
# @POST: WebSocket connection is managed and logs are streamed until disconnect.
@app.websocket("/ws/logs/{task_id}")
async def websocket_endpoint(websocket: WebSocket, task_id: str):
    with belief_scope("websocket_endpoint", f"task_id={task_id}"):
        await websocket.accept()
        logger.info(f"WebSocket connection accepted for task {task_id}")
        task_manager = get_task_manager()
        queue = await task_manager.subscribe_logs(task_id)
        try:
            # Stream new logs
            logger.info(f"Starting log stream for task {task_id}")

            # Send initial logs first to build context
            initial_logs = task_manager.get_task_logs(task_id)
            for log_entry in initial_logs:
                log_dict = log_entry.dict()
                log_dict['timestamp'] = log_dict['timestamp'].isoformat()
                await websocket.send_json(log_dict)

            # Force a check for AWAITING_INPUT status immediately upon connection
            # This ensures that if the task is already waiting when the user connects, they get the prompt.
            task = task_manager.get_task(task_id)
            if task and task.status == "AWAITING_INPUT" and task.input_request:
                # Construct a synthetic log entry to trigger the frontend handler
                # This is a bit of a hack but avoids changing the websocket protocol significantly
                synthetic_log = {
                    "timestamp": task.logs[-1].timestamp.isoformat() if task.logs else "2024-01-01T00:00:00",
                    "level": "INFO",
                    "message": "Task paused for user input (Connection Re-established)",
                    "context": {"input_request": task.input_request}
                }
                await websocket.send_json(synthetic_log)

            while True:
                log_entry = await queue.get()
                log_dict = log_entry.dict()
                log_dict['timestamp'] = log_dict['timestamp'].isoformat()
                await websocket.send_json(log_dict)

                # If task is finished, we could potentially close the connection
                # but let's keep it open for a bit or until the client disconnects
                if "Task completed successfully" in log_entry.message or "Task failed" in log_entry.message:
                    # Wait a bit to ensure client receives the last message
                    await asyncio.sleep(2)
                    # DO NOT BREAK here - allow client to keep connection open if they want to review logs
                    # or until they disconnect. Breaking closes the socket immediately.
                    # break

        except WebSocketDisconnect:
            logger.info(f"WebSocket connection disconnected for task {task_id}")
        except Exception as e:
            logger.error(f"WebSocket error for task {task_id}: {e}")
        finally:
            task_manager.unsubscribe_logs(task_id, queue)
# [/DEF:websocket_endpoint:Function]

# [DEF:StaticFiles:Mount]
# @SEMANTICS: static, frontend, spa
# @PURPOSE: Mounts the frontend build directory to serve static assets.
frontend_path = project_root / "frontend" / "build"
if frontend_path.exists():
    app.mount("/_app", StaticFiles(directory=str(frontend_path / "_app")), name="static")

    # Serve other static files from the root of build directory
    # [DEF:serve_spa:Function]
    # @PURPOSE: Serves frontend static files or index.html for SPA routing.
    # @PRE: file_path is requested by the client.
    # @POST: Returns the requested file or index.html as a fallback.
    @app.get("/{file_path:path}")
    async def serve_spa(file_path: str):
        with belief_scope("serve_spa", f"path={file_path}"):
            # Don't serve SPA for API routes that fell through
            if file_path.startswith("api/"):
                logger.info(f"[DEBUG] API route fell through to serve_spa: {file_path}")
                raise HTTPException(status_code=404, detail=f"API endpoint not found: {file_path}")

            full_path = frontend_path / file_path
            if full_path.is_file():
                return FileResponse(str(full_path))
            # Fallback to index.html for SPA routing
            return FileResponse(str(frontend_path / "index.html"))
    # [/DEF:serve_spa:Function]
else:
    # [DEF:read_root:Function]
    # @PURPOSE: A simple root endpoint to confirm that the API is running when frontend is missing.
    # @PRE: None.
    # @POST: Returns a JSON message indicating API status.
    @app.get("/")
    async def read_root():
        with belief_scope("read_root"):
            return {"message": "Superset Tools API is running (Frontend build not found)"}
    # [/DEF:read_root:Function]
# [/DEF:StaticFiles:Mount]
# [/DEF:AppModule:Module]
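
For local development the app can be served with uvicorn (pinned in requirements.txt) and the per-task log stream consumed with the `websockets` client. A sketch, assuming the default host/port, that `backend/` is importable as a package from the repository root, and a placeholder task id:

```python
# Run the API (from the repository root):
#   uvicorn backend.src.app:app --host 0.0.0.0 --port 8000

import asyncio
import json

import websockets  # pinned in requirements.txt


async def tail_task_logs(task_id: str) -> None:
    # Each frame is one LogEntry serialized by websocket_endpoint above.
    async with websockets.connect(f"ws://localhost:8000/ws/logs/{task_id}") as ws:
        async for frame in ws:
            entry = json.loads(frame)
            print(entry["timestamp"], entry["level"], entry["message"])


asyncio.run(tail_task_logs("some-task-id"))  # placeholder task id
```
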
280  backend/src/core/config_manager.py  Executable file
@@ -0,0 +1,280 @@
# [DEF:ConfigManagerModule:Module]
#
# @SEMANTICS: config, manager, persistence, json
# @PURPOSE: Manages application configuration, including loading/saving to JSON and CRUD for environments.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> ConfigModels
# @RELATION: CALLS -> logger
# @RELATION: WRITES_TO -> config.json
#
# @INVARIANT: Configuration must always be valid according to AppConfig model.
# @PUBLIC_API: ConfigManager

# [SECTION: IMPORTS]
import json
import os
from pathlib import Path
from typing import Optional, List
from .config_models import AppConfig, Environment, GlobalSettings
from .logger import logger, configure_logger, belief_scope
# [/SECTION]

# [DEF:ConfigManager:Class]
# @PURPOSE: A class to handle application configuration persistence and management.
# @RELATION: WRITES_TO -> config.json
class ConfigManager:

    # [DEF:__init__:Function]
    # @PURPOSE: Initializes the ConfigManager.
    # @PRE: isinstance(config_path, str) and len(config_path) > 0
    # @POST: self.config is an instance of AppConfig
    # @PARAM: config_path (str) - Path to the configuration file.
    def __init__(self, config_path: str = "config.json"):
        with belief_scope("__init__"):
            # 1. Runtime check of @PRE
            assert isinstance(config_path, str) and config_path, "config_path must be a non-empty string"

            logger.info(f"[ConfigManager][Entry] Initializing with {config_path}")

            # 2. Logic implementation
            self.config_path = Path(config_path)
            self.config: AppConfig = self._load_config()

            # Configure logger with loaded settings
            configure_logger(self.config.settings.logging)

            # 3. Runtime check of @POST
            assert isinstance(self.config, AppConfig), "self.config must be an instance of AppConfig"

            logger.info(f"[ConfigManager][Exit] Initialized")
    # [/DEF:__init__:Function]

    # [DEF:_load_config:Function]
    # @PURPOSE: Loads the configuration from disk or creates a default one.
    # @PRE: self.config_path is set.
    # @POST: isinstance(return, AppConfig)
    # @RETURN: AppConfig - The loaded or default configuration.
    def _load_config(self) -> AppConfig:
        with belief_scope("_load_config"):
            logger.debug(f"[_load_config][Entry] Loading from {self.config_path}")

            if not self.config_path.exists():
                logger.info(f"[_load_config][Action] Config file not found. Creating default.")
                default_config = AppConfig(
                    environments=[],
                    settings=GlobalSettings(backup_path="backups")
                )
                self._save_config_to_disk(default_config)
                return default_config

            try:
                with open(self.config_path, "r") as f:
                    data = json.load(f)
                config = AppConfig(**data)
                logger.info(f"[_load_config][Coherence:OK] Configuration loaded")
                return config
            except Exception as e:
                logger.error(f"[_load_config][Coherence:Failed] Error loading config: {e}")
                # Fallback but try to preserve existing settings if possible?
                # For now, return default to be safe, but log the error prominently.
                return AppConfig(
                    environments=[],
                    settings=GlobalSettings(backup_path="backups")
                )
    # [/DEF:_load_config:Function]

    # [DEF:_save_config_to_disk:Function]
    # @PURPOSE: Saves the provided configuration object to disk.
    # @PRE: isinstance(config, AppConfig)
    # @POST: Configuration saved to disk.
    # @PARAM: config (AppConfig) - The configuration to save.
    def _save_config_to_disk(self, config: AppConfig):
        with belief_scope("_save_config_to_disk"):
            logger.debug(f"[_save_config_to_disk][Entry] Saving to {self.config_path}")

            # 1. Runtime check of @PRE
            assert isinstance(config, AppConfig), "config must be an instance of AppConfig"

            # 2. Logic implementation
            try:
                with open(self.config_path, "w") as f:
                    json.dump(config.dict(), f, indent=4)
                logger.info(f"[_save_config_to_disk][Action] Configuration saved")
            except Exception as e:
                logger.error(f"[_save_config_to_disk][Coherence:Failed] Failed to save: {e}")
    # [/DEF:_save_config_to_disk:Function]

    # [DEF:save:Function]
    # @PURPOSE: Saves the current configuration state to disk.
    # @PRE: self.config is set.
    # @POST: self._save_config_to_disk called.
    def save(self):
        with belief_scope("save"):
            self._save_config_to_disk(self.config)
    # [/DEF:save:Function]

    # [DEF:get_config:Function]
    # @PURPOSE: Returns the current configuration.
    # @PRE: self.config is set.
    # @POST: Returns self.config.
    # @RETURN: AppConfig - The current configuration.
    def get_config(self) -> AppConfig:
        with belief_scope("get_config"):
            return self.config
    # [/DEF:get_config:Function]

    # [DEF:update_global_settings:Function]
    # @PURPOSE: Updates the global settings and persists the change.
    # @PRE: isinstance(settings, GlobalSettings)
    # @POST: self.config.settings updated and saved.
    # @PARAM: settings (GlobalSettings) - The new global settings.
    def update_global_settings(self, settings: GlobalSettings):
        with belief_scope("update_global_settings"):
            logger.info(f"[update_global_settings][Entry] Updating settings")

            # 1. Runtime check of @PRE
            assert isinstance(settings, GlobalSettings), "settings must be an instance of GlobalSettings"

            # 2. Logic implementation
            self.config.settings = settings
            self.save()

            # Reconfigure logger with new settings
            configure_logger(settings.logging)

            logger.info(f"[update_global_settings][Exit] Settings updated")
    # [/DEF:update_global_settings:Function]

    # [DEF:validate_path:Function]
    # @PURPOSE: Validates if a path exists and is writable.
    # @PRE: path is a string.
    # @POST: Returns (bool, str) status.
    # @PARAM: path (str) - The path to validate.
    # @RETURN: tuple (bool, str) - (is_valid, message)
    def validate_path(self, path: str) -> tuple[bool, str]:
        with belief_scope("validate_path"):
            p = os.path.abspath(path)
            if not os.path.exists(p):
                try:
                    os.makedirs(p, exist_ok=True)
                except Exception as e:
                    return False, f"Path does not exist and could not be created: {e}"

            if not os.access(p, os.W_OK):
                return False, "Path is not writable"

            return True, "Path is valid and writable"
    # [/DEF:validate_path:Function]

    # [DEF:get_environments:Function]
    # @PURPOSE: Returns the list of configured environments.
    # @PRE: self.config is set.
    # @POST: Returns list of environments.
    # @RETURN: List[Environment] - List of environments.
    def get_environments(self) -> List[Environment]:
        with belief_scope("get_environments"):
            return self.config.environments
    # [/DEF:get_environments:Function]

    # [DEF:has_environments:Function]
    # @PURPOSE: Checks if at least one environment is configured.
    # @PRE: self.config is set.
    # @POST: Returns boolean indicating if environments exist.
    # @RETURN: bool - True if at least one environment exists.
    def has_environments(self) -> bool:
        with belief_scope("has_environments"):
            return len(self.config.environments) > 0
    # [/DEF:has_environments:Function]

    # [DEF:get_environment:Function]
    # @PURPOSE: Returns a single environment by ID.
    # @PRE: self.config is set and isinstance(env_id, str) and len(env_id) > 0.
    # @POST: Returns Environment object if found, None otherwise.
    # @PARAM: env_id (str) - The ID of the environment to retrieve.
    # @RETURN: Optional[Environment] - The environment with the given ID, or None.
    def get_environment(self, env_id: str) -> Optional[Environment]:
        with belief_scope("get_environment"):
            for env in self.config.environments:
                if env.id == env_id:
                    return env
            return None
    # [/DEF:get_environment:Function]

    # [DEF:add_environment:Function]
    # @PURPOSE: Adds a new environment to the configuration.
    # @PRE: isinstance(env, Environment)
    # @POST: Environment added or updated in self.config.environments.
    # @PARAM: env (Environment) - The environment to add.
    def add_environment(self, env: Environment):
        with belief_scope("add_environment"):
            logger.info(f"[add_environment][Entry] Adding environment {env.id}")

            # 1. Runtime check of @PRE
            assert isinstance(env, Environment), "env must be an instance of Environment"

            # 2. Logic implementation
||||||
|
# Check for duplicate ID and remove if exists
|
||||||
|
self.config.environments = [e for e in self.config.environments if e.id != env.id]
|
||||||
|
self.config.environments.append(env)
|
||||||
|
self.save()
|
||||||
|
|
||||||
|
logger.info(f"[add_environment][Exit] Environment added")
|
||||||
|
# [/DEF:add_environment:Function]
|
||||||
|
|
||||||
|
# [DEF:update_environment:Function]
|
||||||
|
# @PURPOSE: Updates an existing environment.
|
||||||
|
# @PRE: isinstance(env_id, str) and len(env_id) > 0 and isinstance(updated_env, Environment)
|
||||||
|
# @POST: Returns True if environment was found and updated.
|
||||||
|
# @PARAM: env_id (str) - The ID of the environment to update.
|
||||||
|
# @PARAM: updated_env (Environment) - The updated environment data.
|
||||||
|
# @RETURN: bool - True if updated, False otherwise.
|
||||||
|
def update_environment(self, env_id: str, updated_env: Environment) -> bool:
|
||||||
|
with belief_scope("update_environment"):
|
||||||
|
logger.info(f"[update_environment][Entry] Updating {env_id}")
|
||||||
|
|
||||||
|
# 1. Runtime check of @PRE
|
||||||
|
assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"
|
||||||
|
assert isinstance(updated_env, Environment), "updated_env must be an instance of Environment"
|
||||||
|
|
||||||
|
# 2. Logic implementation
|
||||||
|
for i, env in enumerate(self.config.environments):
|
||||||
|
if env.id == env_id:
|
||||||
|
# If password is masked, keep the old one
|
||||||
|
if updated_env.password == "********":
|
||||||
|
updated_env.password = env.password
|
||||||
|
|
||||||
|
self.config.environments[i] = updated_env
|
||||||
|
self.save()
|
||||||
|
logger.info(f"[update_environment][Coherence:OK] Updated {env_id}")
|
||||||
|
return True
|
||||||
|
|
||||||
|
logger.warning(f"[update_environment][Coherence:Failed] Environment {env_id} not found")
|
||||||
|
return False
|
||||||
|
# [/DEF:update_environment:Function]
|
||||||
|
|
||||||
|
# [DEF:delete_environment:Function]
|
||||||
|
# @PURPOSE: Deletes an environment by ID.
|
||||||
|
# @PRE: isinstance(env_id, str) and len(env_id) > 0
|
||||||
|
# @POST: Environment removed from self.config.environments if it existed.
|
||||||
|
# @PARAM: env_id (str) - The ID of the environment to delete.
|
||||||
|
def delete_environment(self, env_id: str):
|
||||||
|
with belief_scope("delete_environment"):
|
||||||
|
logger.info(f"[delete_environment][Entry] Deleting {env_id}")
|
||||||
|
|
||||||
|
# 1. Runtime check of @PRE
|
||||||
|
assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"
|
||||||
|
|
||||||
|
# 2. Logic implementation
|
||||||
|
original_count = len(self.config.environments)
|
||||||
|
self.config.environments = [e for e in self.config.environments if e.id != env_id]
|
||||||
|
|
||||||
|
if len(self.config.environments) < original_count:
|
||||||
|
self.save()
|
||||||
|
logger.info(f"[delete_environment][Action] Deleted {env_id}")
|
||||||
|
else:
|
||||||
|
logger.warning(f"[delete_environment][Coherence:Failed] Environment {env_id} not found")
|
||||||
|
# [/DEF:delete_environment:Function]
|
||||||
|
|
||||||
|
# [/DEF:ConfigManager:Class]
|
||||||
|
|
||||||
|
# [/DEF:ConfigManagerModule:Module]
|
||||||
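
Usage sketch for the manager above (editor's note, not part of the diff). The constructor of ConfigManager is defined earlier in the file and is not shown in this excerpt, so passing the config path as its single argument is an assumption; the import path also assumes the backend package is importable as backend.src.

from backend.src.core.config_manager import ConfigManager
from backend.src.core.config_models import Environment

manager = ConfigManager("config.json")  # constructor signature assumed

if not manager.has_environments():
    manager.add_environment(Environment(
        id="dev",
        name="Development",
        url="https://superset.dev.example.com",  # placeholder values
        username="admin",
        password="secret",
    ))

# validate the configured backup path before using it
ok, message = manager.validate_path(manager.get_config().settings.backup_path)
print(ok, message)
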
62  backend/src/core/config_models.py  Executable file
@@ -0,0 +1,62 @@
# [DEF:ConfigModels:Module]
# @SEMANTICS: config, models, pydantic
# @PURPOSE: Defines the data models for application configuration using Pydantic.
# @LAYER: Core
# @RELATION: READS_FROM -> config.json
# @RELATION: USED_BY -> ConfigManager

from pydantic import BaseModel, Field
from typing import List, Optional

# [DEF:Schedule:DataClass]
# @PURPOSE: Represents a backup schedule configuration.
class Schedule(BaseModel):
    enabled: bool = False
    cron_expression: str = "0 0 * * *"  # Default: daily at midnight
# [/DEF:Schedule:DataClass]

# [DEF:Environment:DataClass]
# @PURPOSE: Represents a Superset environment configuration.
class Environment(BaseModel):
    id: str
    name: str
    url: str
    username: str
    password: str  # Will be masked in UI
    verify_ssl: bool = True
    timeout: int = 30
    is_default: bool = False
    backup_schedule: Schedule = Field(default_factory=Schedule)
# [/DEF:Environment:DataClass]

# [DEF:LoggingConfig:DataClass]
# @PURPOSE: Defines the configuration for the application's logging system.
class LoggingConfig(BaseModel):
    level: str = "INFO"
    file_path: Optional[str] = "logs/app.log"
    max_bytes: int = 10 * 1024 * 1024
    backup_count: int = 5
    enable_belief_state: bool = True
# [/DEF:LoggingConfig:DataClass]

# [DEF:GlobalSettings:DataClass]
# @PURPOSE: Represents global application settings.
class GlobalSettings(BaseModel):
    backup_path: str
    default_environment_id: Optional[str] = None
    logging: LoggingConfig = Field(default_factory=LoggingConfig)

    # Task retention settings
    task_retention_days: int = 30
    task_retention_limit: int = 100
    pagination_limit: int = 10
# [/DEF:GlobalSettings:DataClass]

# [DEF:AppConfig:DataClass]
# @PURPOSE: The root configuration model containing all application settings.
class AppConfig(BaseModel):
    environments: List[Environment] = []
    settings: GlobalSettings
# [/DEF:AppConfig:DataClass]

# [/DEF:ConfigModels:Module]
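
A minimal sketch (editor's note) of the JSON shape these models accept, as loaded from config.json by ConfigManager. Field names follow the model definitions above; the concrete values are illustrative only.

from backend.src.core.config_models import AppConfig

raw = {
    "environments": [
        {
            "id": "prod",
            "name": "Production",
            "url": "https://superset.example.com",
            "username": "admin",
            "password": "***",
            "backup_schedule": {"enabled": True, "cron_expression": "0 2 * * *"},
        }
    ],
    "settings": {"backup_path": "backups"},
}

config = AppConfig(**raw)
# Fields not present in the JSON fall back to the defaults declared above.
assert config.settings.logging.level == "INFO"
assert config.environments[0].verify_ssl is True
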
86  backend/src/core/database.py  Normal file
@@ -0,0 +1,86 @@
# [DEF:backend.src.core.database:Module]
#
# @SEMANTICS: database, sqlite, sqlalchemy, session, persistence
# @PURPOSE: Configures the SQLite database connection and session management.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> sqlalchemy
# @RELATION: USES -> backend.src.models.mapping
#
# @INVARIANT: A single engine instance is used for the entire application.

# [SECTION: IMPORTS]
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, Session
from ..models.mapping import Base
# Import models to ensure they're registered with Base
from ..models.task import TaskRecord
from ..models.connection import ConnectionConfig
from .logger import belief_scope
import os
# [/SECTION]

# [DEF:DATABASE_URL:Constant]
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./mappings.db")
# [/DEF:DATABASE_URL:Constant]

# [DEF:TASKS_DATABASE_URL:Constant]
TASKS_DATABASE_URL = os.getenv("TASKS_DATABASE_URL", "sqlite:///./tasks.db")
# [/DEF:TASKS_DATABASE_URL:Constant]

# [DEF:engine:Variable]
engine = create_engine(DATABASE_URL, connect_args={"check_same_thread": False})
# [/DEF:engine:Variable]

# [DEF:tasks_engine:Variable]
tasks_engine = create_engine(TASKS_DATABASE_URL, connect_args={"check_same_thread": False})
# [/DEF:tasks_engine:Variable]

# [DEF:SessionLocal:Class]
# @PURPOSE: A session factory for the main mappings database.
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# [/DEF:SessionLocal:Class]

# [DEF:TasksSessionLocal:Class]
# @PURPOSE: A session factory for the tasks execution database.
TasksSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=tasks_engine)
# [/DEF:TasksSessionLocal:Class]

# [DEF:init_db:Function]
# @PURPOSE: Initializes the database by creating all tables.
# @PRE: engine and tasks_engine are initialized.
# @POST: Database tables created.
def init_db():
    with belief_scope("init_db"):
        Base.metadata.create_all(bind=engine)
        Base.metadata.create_all(bind=tasks_engine)
# [/DEF:init_db:Function]

# [DEF:get_db:Function]
# @PURPOSE: Dependency for getting a database session.
# @PRE: SessionLocal is initialized.
# @POST: Session is closed after use.
# @RETURN: Generator[Session, None, None]
def get_db():
    with belief_scope("get_db"):
        db = SessionLocal()
        try:
            yield db
        finally:
            db.close()
# [/DEF:get_db:Function]

# [DEF:get_tasks_db:Function]
# @PURPOSE: Dependency for getting a tasks database session.
# @PRE: TasksSessionLocal is initialized.
# @POST: Session is closed after use.
# @RETURN: Generator[Session, None, None]
def get_tasks_db():
    with belief_scope("get_tasks_db"):
        db = TasksSessionLocal()
        try:
            yield db
        finally:
            db.close()
# [/DEF:get_tasks_db:Function]

# [/DEF:backend.src.core.database:Module]
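
Sketch of how the session dependencies above are typically consumed (editor's note). A FastAPI route is an assumption here — the framework is only implied by the reference to app.py elsewhere in this diff — but get_db itself is the generator defined above, so the session is opened per request and closed afterwards.

from fastapi import APIRouter, Depends
from sqlalchemy import text
from sqlalchemy.orm import Session
from backend.src.core.database import get_db

router = APIRouter()

@router.get("/health/db")
def db_health(db: Session = Depends(get_db)):
    # get_db() yields a session and guarantees it is closed after the request.
    db.execute(text("SELECT 1"))
    return {"database": "ok"}
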
228  backend/src/core/logger.py  Executable file
@@ -0,0 +1,228 @@
# [DEF:LoggerModule:Module]
# @SEMANTICS: logging, websocket, streaming, handler
# @PURPOSE: Configures the application's logging system, including a custom handler for buffering logs and streaming them over WebSockets.
# @LAYER: Core
# @RELATION: Used by the main application and other modules to log events. The WebSocketLogHandler is used by the WebSocket endpoint in app.py.
import logging
import threading
from datetime import datetime
from typing import Dict, Any, List, Optional
from collections import deque
from contextlib import contextmanager
from logging.handlers import RotatingFileHandler

from pydantic import BaseModel, Field

# Thread-local storage for belief state
_belief_state = threading.local()

# Global flag for belief state logging
_enable_belief_state = True

# [DEF:BeliefFormatter:Class]
# @PURPOSE: Custom logging formatter that adds belief state prefixes to log messages.
class BeliefFormatter(logging.Formatter):
    # [DEF:format:Function]
    # @PURPOSE: Formats the log record, adding belief state context if available.
    # @PRE: record is a logging.LogRecord.
    # @POST: Returns formatted string.
    # @PARAM: record (logging.LogRecord) - The log record to format.
    # @RETURN: str - The formatted log message.
    def format(self, record):
        anchor_id = getattr(_belief_state, 'anchor_id', None)
        if anchor_id:
            record.msg = f"[{anchor_id}][Action] {record.msg}"
        return super().format(record)
    # [/DEF:format:Function]
# [/DEF:BeliefFormatter:Class]

# Re-using LogEntry from task_manager for consistency
# [DEF:LogEntry:Class]
# @SEMANTICS: log, entry, record, pydantic
# @PURPOSE: A Pydantic model representing a single, structured log entry. This is a re-definition for consistency, as it's also defined in task_manager.py.
class LogEntry(BaseModel):
    timestamp: datetime = Field(default_factory=datetime.utcnow)
    level: str
    message: str
    context: Optional[Dict[str, Any]] = None

# [/DEF:LogEntry:Class]

# [DEF:belief_scope:Function]
# @PURPOSE: Context manager for structured Belief State logging.
# @PARAM: anchor_id (str) - The identifier for the current semantic block.
# @PARAM: message (str) - Optional entry message.
# @PRE: anchor_id must be provided.
# @POST: Thread-local belief state is updated and entry/exit logs are generated.
@contextmanager
def belief_scope(anchor_id: str, message: str = ""):
    # Log Entry if enabled
    if _enable_belief_state:
        entry_msg = f"[{anchor_id}][Entry]"
        if message:
            entry_msg += f" {message}"
        logger.info(entry_msg)

    # Set thread-local anchor_id
    old_anchor = getattr(_belief_state, 'anchor_id', None)
    _belief_state.anchor_id = anchor_id

    try:
        yield
        # Log Coherence OK and Exit
        logger.info(f"[{anchor_id}][Coherence:OK]")
        if _enable_belief_state:
            logger.info(f"[{anchor_id}][Exit]")
    except Exception as e:
        # Log Coherence Failed
        logger.info(f"[{anchor_id}][Coherence:Failed] {str(e)}")
        raise
    finally:
        # Restore old anchor
        _belief_state.anchor_id = old_anchor

# [/DEF:belief_scope:Function]

# [DEF:configure_logger:Function]
# @PURPOSE: Configures the logger with the provided logging settings.
# @PRE: config is a valid LoggingConfig instance.
# @POST: Logger level, handlers, and belief state flag are updated.
# @PARAM: config (LoggingConfig) - The logging configuration.
def configure_logger(config):
    global _enable_belief_state
    _enable_belief_state = config.enable_belief_state

    # Set logger level
    level = getattr(logging, config.level.upper(), logging.INFO)
    logger.setLevel(level)

    # Remove existing file handlers
    handlers_to_remove = [h for h in logger.handlers if isinstance(h, RotatingFileHandler)]
    for h in handlers_to_remove:
        logger.removeHandler(h)
        h.close()

    # Add file handler if file_path is set
    if config.file_path:
        from pathlib import Path
        log_file = Path(config.file_path)
        log_file.parent.mkdir(parents=True, exist_ok=True)

        file_handler = RotatingFileHandler(
            config.file_path,
            maxBytes=config.max_bytes,
            backupCount=config.backup_count
        )
        file_handler.setFormatter(BeliefFormatter(
            '[%(asctime)s][%(levelname)s][%(name)s] %(message)s'
        ))
        logger.addHandler(file_handler)

    # Update existing handlers' formatters to BeliefFormatter
    for handler in logger.handlers:
        if not isinstance(handler, RotatingFileHandler):
            handler.setFormatter(BeliefFormatter(
                '[%(asctime)s][%(levelname)s][%(name)s] %(message)s'
            ))
# [/DEF:configure_logger:Function]

# [DEF:WebSocketLogHandler:Class]
# @SEMANTICS: logging, handler, websocket, buffer
# @PURPOSE: A custom logging handler that captures log records into a buffer. It is designed to be extended for real-time log streaming over WebSockets.
class WebSocketLogHandler(logging.Handler):
    """
    A logging handler that stores log records and can be extended to send them
    over WebSockets.
    """
    # [DEF:__init__:Function]
    # @PURPOSE: Initializes the handler with a fixed-capacity buffer.
    # @PRE: capacity is an integer.
    # @POST: Instance initialized with empty deque.
    # @PARAM: capacity (int) - Maximum number of logs to keep in memory.
    def __init__(self, capacity: int = 1000):
        super().__init__()
        self.log_buffer: deque[LogEntry] = deque(maxlen=capacity)
        # In a real implementation, you'd have a way to manage active WebSocket connections
        # e.g., self.active_connections: Set[WebSocket] = set()
    # [/DEF:__init__:Function]

    # [DEF:emit:Function]
    # @PURPOSE: Captures a log record, formats it, and stores it in the buffer.
    # @PRE: record is a logging.LogRecord.
    # @POST: Log is added to the log_buffer.
    # @PARAM: record (logging.LogRecord) - The log record to emit.
    def emit(self, record: logging.LogRecord):
        try:
            log_entry = LogEntry(
                level=record.levelname,
                message=self.format(record),
                context={
                    "name": record.name,
                    "pathname": record.pathname,
                    "lineno": record.lineno,
                    "funcName": record.funcName,
                    "process": record.process,
                    "thread": record.thread,
                }
            )
            self.log_buffer.append(log_entry)
            # Here you would typically send the log_entry to all active WebSocket connections
            # for real-time streaming to the frontend.
            # Example: for ws in self.active_connections: await ws.send_json(log_entry.dict())
        except Exception:
            self.handleError(record)
    # [/DEF:emit:Function]

    # [DEF:get_recent_logs:Function]
    # @PURPOSE: Returns a list of recent log entries from the buffer.
    # @PRE: None.
    # @POST: Returns list of LogEntry objects.
    # @RETURN: List[LogEntry] - List of buffered log entries.
    def get_recent_logs(self) -> List[LogEntry]:
        """
        Returns a list of recent log entries from the buffer.
        """
        return list(self.log_buffer)
    # [/DEF:get_recent_logs:Function]

# [/DEF:WebSocketLogHandler:Class]

# [DEF:Logger:Global]
# @SEMANTICS: logger, global, instance
# @PURPOSE: The global logger instance for the application, configured with both a console handler and the custom WebSocket handler.
logger = logging.getLogger("superset_tools_app")

# [DEF:believed:Function]
# @PURPOSE: A decorator that wraps a function in a belief scope.
# @PARAM: anchor_id (str) - The identifier for the semantic block.
def believed(anchor_id: str):
    def decorator(func):
        def wrapper(*args, **kwargs):
            with belief_scope(anchor_id):
                return func(*args, **kwargs)
        return wrapper
    return decorator
# [/DEF:believed:Function]

logger.setLevel(logging.INFO)

# Create a formatter
formatter = BeliefFormatter(
    '[%(asctime)s][%(levelname)s][%(name)s] %(message)s'
)

# Add console handler
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)

# Add WebSocket log handler
websocket_log_handler = WebSocketLogHandler()
websocket_log_handler.setFormatter(formatter)
logger.addHandler(websocket_log_handler)

# Example usage:
# logger.info("Application started", extra={"context_key": "context_value"})
# logger.error("An error occurred", exc_info=True)
# [/DEF:Logger:Global]
# [/DEF:LoggerModule:Module]
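
Sketch of the two ways this module exposes belief-state logging (editor's note): the belief_scope context manager for a block of work, and the believed decorator for a whole function. configure_logger expects a LoggingConfig instance from config_models; the anchor names used here are illustrative.

from backend.src.core.logger import logger, belief_scope, believed, configure_logger
from backend.src.core.config_models import LoggingConfig

# reconfigure the global logger; file_path=None keeps it console-only
configure_logger(LoggingConfig(level="DEBUG", file_path=None))

with belief_scope("DemoTask", "starting demo"):
    logger.info("doing work")      # rendered by BeliefFormatter as [DemoTask][Action] doing work

@believed("demo_function")
def demo_function(x: int) -> int:
    logger.info("squaring %s", x)
    return x * x

demo_function(3)   # logs [demo_function][Entry], [Coherence:OK] and [Exit] around the call
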
104  backend/src/core/migration_engine.py  Normal file
@@ -0,0 +1,104 @@
# [DEF:backend.src.core.migration_engine:Module]
#
# @SEMANTICS: migration, engine, zip, yaml, transformation
# @PURPOSE: Handles the interception and transformation of Superset asset ZIP archives.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> PyYAML
#
# @INVARIANT: ZIP structure must be preserved after transformation.

# [SECTION: IMPORTS]
import zipfile
import yaml
import os
import shutil
import tempfile
from pathlib import Path
from typing import Dict
from .logger import logger, belief_scope
# [/SECTION]

# [DEF:MigrationEngine:Class]
# @PURPOSE: Engine for transforming Superset export ZIPs.
class MigrationEngine:

    # [DEF:transform_zip:Function]
    # @PURPOSE: Extracts ZIP, replaces database UUIDs in YAMLs, and re-packages.
    # @PARAM: zip_path (str) - Path to the source ZIP file.
    # @PARAM: output_path (str) - Path where the transformed ZIP will be saved.
    # @PARAM: db_mapping (Dict[str, str]) - Mapping of source UUID to target UUID.
    # @PARAM: strip_databases (bool) - Whether to remove the databases directory from the archive.
    # @PRE: zip_path must point to a valid Superset export archive.
    # @POST: Transformed archive is saved to output_path.
    # @RETURN: bool - True if successful.
    def transform_zip(self, zip_path: str, output_path: str, db_mapping: Dict[str, str], strip_databases: bool = True) -> bool:
        """
        Transform a Superset export ZIP by replacing database UUIDs.
        """
        with belief_scope("MigrationEngine.transform_zip"):
            with tempfile.TemporaryDirectory() as temp_dir_str:
                temp_dir = Path(temp_dir_str)

                try:
                    # 1. Extract
                    logger.info(f"[MigrationEngine.transform_zip][Action] Extracting ZIP: {zip_path}")
                    with zipfile.ZipFile(zip_path, 'r') as zf:
                        zf.extractall(temp_dir)

                    # 2. Transform YAMLs
                    # Datasets are usually in datasets/*.yaml
                    dataset_files = list(temp_dir.glob("**/datasets/**/*.yaml")) + list(temp_dir.glob("**/datasets/*.yaml"))
                    dataset_files = list(set(dataset_files))

                    logger.info(f"[MigrationEngine.transform_zip][State] Found {len(dataset_files)} dataset files.")
                    for ds_file in dataset_files:
                        logger.info(f"[MigrationEngine.transform_zip][Action] Transforming dataset: {ds_file}")
                        self._transform_yaml(ds_file, db_mapping)

                    # 3. Re-package
                    logger.info(f"[MigrationEngine.transform_zip][Action] Re-packaging ZIP to: {output_path} (strip_databases={strip_databases})")
                    with zipfile.ZipFile(output_path, 'w', zipfile.ZIP_DEFLATED) as zf:
                        for root, dirs, files in os.walk(temp_dir):
                            rel_root = Path(root).relative_to(temp_dir)

                            if strip_databases and "databases" in rel_root.parts:
                                logger.info(f"[MigrationEngine.transform_zip][Action] Skipping file in databases directory: {rel_root}")
                                continue

                            for file in files:
                                file_path = Path(root) / file
                                arcname = file_path.relative_to(temp_dir)
                                zf.write(file_path, arcname)

                    return True
                except Exception as e:
                    logger.error(f"[MigrationEngine.transform_zip][Coherence:Failed] Error transforming ZIP: {e}")
                    return False
    # [/DEF:transform_zip:Function]

    # [DEF:_transform_yaml:Function]
    # @PURPOSE: Replaces database_uuid in a single YAML file.
    # @PARAM: file_path (Path) - Path to the YAML file.
    # @PARAM: db_mapping (Dict[str, str]) - UUID mapping dictionary.
    # @PRE: file_path must exist and be readable.
    # @POST: File is modified in-place if source UUID matches mapping.
    def _transform_yaml(self, file_path: Path, db_mapping: Dict[str, str]):
        with open(file_path, 'r') as f:
            data = yaml.safe_load(f)

        if not data:
            return

        # Superset dataset YAML structure:
        # database_uuid: ...
        source_uuid = data.get('database_uuid')
        if source_uuid in db_mapping:
            data['database_uuid'] = db_mapping[source_uuid]
            with open(file_path, 'w') as f:
                yaml.dump(data, f)
    # [/DEF:_transform_yaml:Function]

# [/DEF:MigrationEngine:Class]

# [/DEF:backend.src.core.migration_engine:Module]
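
Sketch of driving the engine above (editor's note). The archive paths are assumptions about where the caller keeps export files, and the UUID values are placeholders; db_mapping keys are source database UUIDs from the export, values are the UUIDs of the corresponding databases on the target instance.

from backend.src.core.migration_engine import MigrationEngine

engine = MigrationEngine()
ok = engine.transform_zip(
    zip_path="exports/dashboard_export.zip",            # assumed source archive path
    output_path="exports/dashboard_export_target.zip",
    db_mapping={
        "11111111-1111-1111-1111-111111111111": "22222222-2222-2222-2222-222222222222",
    },
    strip_databases=True,   # drop databases/ so the target keeps its own connections
)
print("transform succeeded:", ok)
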
115  backend/src/core/plugin_base.py  Executable file
@@ -0,0 +1,115 @@
from abc import ABC, abstractmethod
from typing import Dict, Any
from .logger import belief_scope

from pydantic import BaseModel, Field

# [DEF:PluginBase:Class]
# @SEMANTICS: plugin, interface, base, abstract
# @PURPOSE: Defines the abstract base class that all plugins must implement to be recognized by the system. It enforces a common structure for plugin metadata and execution.
# @LAYER: Core
# @RELATION: Used by PluginLoader to identify valid plugins.
# @INVARIANT: All plugins MUST inherit from this class.
class PluginBase(ABC):
    """
    Base class for all plugins.
    Plugins must inherit from this class and implement the abstract methods.
    """

    @property
    @abstractmethod
    # [DEF:id:Function]
    # @PURPOSE: Returns the unique identifier for the plugin.
    # @PRE: Plugin instance exists.
    # @POST: Returns string ID.
    # @RETURN: str - Plugin ID.
    def id(self) -> str:
        """A unique identifier for the plugin."""
        with belief_scope("id"):
            pass
    # [/DEF:id:Function]

    @property
    @abstractmethod
    # [DEF:name:Function]
    # @PURPOSE: Returns the human-readable name of the plugin.
    # @PRE: Plugin instance exists.
    # @POST: Returns string name.
    # @RETURN: str - Plugin name.
    def name(self) -> str:
        """A human-readable name for the plugin."""
        with belief_scope("name"):
            pass
    # [/DEF:name:Function]

    @property
    @abstractmethod
    # [DEF:description:Function]
    # @PURPOSE: Returns a brief description of the plugin.
    # @PRE: Plugin instance exists.
    # @POST: Returns string description.
    # @RETURN: str - Plugin description.
    def description(self) -> str:
        """A brief description of what the plugin does."""
        with belief_scope("description"):
            pass
    # [/DEF:description:Function]

    @property
    @abstractmethod
    # [DEF:version:Function]
    # @PURPOSE: Returns the version of the plugin.
    # @PRE: Plugin instance exists.
    # @POST: Returns string version.
    # @RETURN: str - Plugin version.
    def version(self) -> str:
        """The version of the plugin."""
        with belief_scope("version"):
            pass
    # [/DEF:version:Function]

    @abstractmethod
    # [DEF:get_schema:Function]
    # @PURPOSE: Returns the JSON schema for the plugin's input parameters.
    # @PRE: Plugin instance exists.
    # @POST: Returns dict schema.
    # @RETURN: Dict[str, Any] - JSON schema.
    def get_schema(self) -> Dict[str, Any]:
        """
        Returns the JSON schema for the plugin's input parameters.
        This schema will be used to generate the frontend form.
        """
        with belief_scope("get_schema"):
            pass
    # [/DEF:get_schema:Function]

    @abstractmethod
    # [DEF:execute:Function]
    # @PURPOSE: Executes the plugin's core logic.
    # @PARAM: params (Dict[str, Any]) - Validated input parameters.
    # @PRE: params must be a dictionary.
    # @POST: Plugin execution is completed.
    async def execute(self, params: Dict[str, Any]):
        """
        Executes the plugin's logic.
        The `params` argument will be validated against the schema returned by `get_schema()`.
        """
        with belief_scope("execute"):
            pass
    # [/DEF:execute:Function]
# [/DEF:PluginBase:Class]
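
A minimal plugin sketch against the interface above (editor's note, not part of the diff). The plugin id, schema, and import path are illustrative; a real plugin module would live in the plugins directory scanned by PluginLoader (shown below).

from typing import Any, Dict
from backend.src.core.plugin_base import PluginBase

class EchoPlugin(PluginBase):
    @property
    def id(self) -> str:
        return "echo"

    @property
    def name(self) -> str:
        return "Echo"

    @property
    def description(self) -> str:
        return "Logs the parameters it receives."

    @property
    def version(self) -> str:
        return "0.1.0"

    def get_schema(self) -> Dict[str, Any]:
        # JSON schema used to render the frontend form and validate input
        return {
            "type": "object",
            "properties": {"message": {"type": "string"}},
            "required": ["message"],
        }

    async def execute(self, params: Dict[str, Any]):
        print(f"echo: {params['message']}")
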

# [DEF:PluginConfig:Class]
# @SEMANTICS: plugin, config, schema, pydantic
# @PURPOSE: A Pydantic model used to represent the validated configuration and metadata of a loaded plugin. This object is what gets exposed to the API layer.
# @LAYER: Core
# @RELATION: Instantiated by PluginLoader after validating a PluginBase instance.
class PluginConfig(BaseModel):
    """Pydantic model for plugin configuration."""
    id: str = Field(..., description="Unique identifier for the plugin")
    name: str = Field(..., description="Human-readable name for the plugin")
    description: str = Field(..., description="Brief description of what the plugin does")
    version: str = Field(..., description="Version of the plugin")
    input_schema: Dict[str, Any] = Field(..., description="JSON schema for input parameters", alias="schema")
# [/DEF:PluginConfig:Class]
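
A small sketch (editor's note) of how the alias above plays out: the model is constructed with schema=, matching how PluginLoader builds it, but the value is read back through input_schema. The field values are placeholders.

from backend.src.core.plugin_base import PluginConfig

cfg = PluginConfig(
    id="echo",
    name="Echo",
    description="Logs the parameters it receives.",
    version="0.1.0",
    schema={"type": "object", "properties": {"message": {"type": "string"}}},
)
print(cfg.input_schema["type"])          # "object"
print(cfg.dict(by_alias=True).keys())    # serializes the field back under "schema"
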
191  backend/src/core/plugin_loader.py  Executable file
@@ -0,0 +1,191 @@
import importlib.util
import os
import sys
from typing import Dict, Type, List, Optional
from .plugin_base import PluginBase, PluginConfig
from jsonschema import validate
from .logger import belief_scope

# [DEF:PluginLoader:Class]
# @SEMANTICS: plugin, loader, dynamic, import
# @PURPOSE: Scans a specified directory for Python modules, dynamically loads them, and registers any classes that are valid implementations of the PluginBase interface.
# @LAYER: Core
# @RELATION: Depends on PluginBase. It is used by the main application to discover and manage available plugins.
class PluginLoader:
    """
    Scans a directory for Python modules, loads them, and identifies classes
    that inherit from PluginBase.
    """

    # [DEF:__init__:Function]
    # @PURPOSE: Initializes the PluginLoader with a directory to scan.
    # @PRE: plugin_dir is a valid directory path.
    # @POST: Plugins are loaded and registered.
    # @PARAM: plugin_dir (str) - The directory containing plugin modules.
    def __init__(self, plugin_dir: str):
        with belief_scope("__init__"):
            self.plugin_dir = plugin_dir
            self._plugins: Dict[str, PluginBase] = {}
            self._plugin_configs: Dict[str, PluginConfig] = {}
            self._load_plugins()
    # [/DEF:__init__:Function]

    # [DEF:_load_plugins:Function]
    # @PURPOSE: Scans the plugin directory and loads all valid plugins.
    # @PRE: plugin_dir exists or can be created.
    # @POST: _load_module is called for each .py file.
    def _load_plugins(self):
        with belief_scope("_load_plugins"):
            """
            Scans the plugin directory, imports modules, and registers valid plugins.
            """
            if not os.path.exists(self.plugin_dir):
                os.makedirs(self.plugin_dir)

            # Add the plugin directory's parent to sys.path to enable relative imports within plugins.
            # This assumes plugin_dir is something like 'backend/src/plugins'
            # and we want 'backend/src' on the path for 'from ..core...' imports.
            plugin_parent_dir = os.path.abspath(os.path.join(self.plugin_dir, os.pardir))
            if plugin_parent_dir not in sys.path:
                sys.path.insert(0, plugin_parent_dir)

            for filename in os.listdir(self.plugin_dir):
                if filename.endswith(".py") and filename != "__init__.py":
                    module_name = filename[:-3]
                    file_path = os.path.join(self.plugin_dir, filename)
                    self._load_module(module_name, file_path)
    # [/DEF:_load_plugins:Function]

    # [DEF:_load_module:Function]
    # @PURPOSE: Loads a single Python module and discovers PluginBase implementations.
    # @PRE: module_name and file_path are valid.
    # @POST: Plugin classes are instantiated and registered.
    # @PARAM: module_name (str) - The name of the module.
    # @PARAM: file_path (str) - The path to the module file.
    def _load_module(self, module_name: str, file_path: str):
        with belief_scope("_load_module"):
            """
            Loads a single Python module and extracts PluginBase subclasses.
            """
            # Try to determine the correct package prefix based on how the app is running.
            # For standalone execution, we need to handle the import differently.
            if __name__ == "__main__" or "test" in __name__:
                # When running as standalone or in tests, use the relative package name
                package_name = f"plugins.{module_name}"
            elif "backend.src" in __name__:
                package_prefix = "backend.src.plugins"
                package_name = f"{package_prefix}.{module_name}"
            else:
                package_prefix = "src.plugins"
                package_name = f"{package_prefix}.{module_name}"

            # print(f"DEBUG: Loading plugin {module_name} as {package_name}")
            spec = importlib.util.spec_from_file_location(package_name, file_path)
            if spec is None or spec.loader is None:
                print(f"Could not load module spec for {package_name}")  # Replace with proper logging
                return

            module = importlib.util.module_from_spec(spec)
            try:
                spec.loader.exec_module(module)
            except Exception as e:
                print(f"Error loading plugin module {module_name}: {e}")  # Replace with proper logging
                return

            for attribute_name in dir(module):
                attribute = getattr(module, attribute_name)
                if (
                    isinstance(attribute, type)
                    and issubclass(attribute, PluginBase)
                    and attribute is not PluginBase
                ):
                    try:
                        plugin_instance = attribute()
                        self._register_plugin(plugin_instance)
                    except Exception as e:
                        print(f"Error instantiating plugin {attribute_name} in {module_name}: {e}")  # Replace with proper logging
    # [/DEF:_load_module:Function]

    # [DEF:_register_plugin:Function]
    # @PURPOSE: Registers a PluginBase instance and its configuration.
    # @PRE: plugin_instance is a valid implementation of PluginBase.
    # @POST: Plugin is added to _plugins and _plugin_configs.
    # @PARAM: plugin_instance (PluginBase) - The plugin instance to register.
    def _register_plugin(self, plugin_instance: PluginBase):
        with belief_scope("_register_plugin"):
            """
            Registers a valid plugin instance.
            """
            plugin_id = plugin_instance.id
            if plugin_id in self._plugins:
                print(f"Warning: Duplicate plugin ID '{plugin_id}' found. Skipping.")  # Replace with proper logging
                return

            try:
                schema = plugin_instance.get_schema()
                # Basic validation to ensure it's a dictionary
                if not isinstance(schema, dict):
                    raise TypeError("get_schema() must return a dictionary.")

                plugin_config = PluginConfig(
                    id=plugin_instance.id,
                    name=plugin_instance.name,
                    description=plugin_instance.description,
                    version=plugin_instance.version,
                    schema=schema,
                )
                # The following line is commented out because it requires a schema to validate against.
                # The schema provided by the plugin is the one being validated, not the data.
                # validate(instance={}, schema=schema)
                self._plugins[plugin_id] = plugin_instance
                self._plugin_configs[plugin_id] = plugin_config
                from ..core.logger import logger
                logger.info(f"Plugin '{plugin_instance.name}' (ID: {plugin_id}) loaded successfully.")
            except Exception as e:
                from ..core.logger import logger
                logger.error(f"Error validating plugin '{plugin_instance.name}' (ID: {plugin_id}): {e}")
    # [/DEF:_register_plugin:Function]

    # [DEF:get_plugin:Function]
    # @PURPOSE: Retrieves a loaded plugin instance by its ID.
    # @PRE: plugin_id is a string.
    # @POST: Returns plugin instance or None.
    # @PARAM: plugin_id (str) - The unique identifier of the plugin.
    # @RETURN: Optional[PluginBase] - The plugin instance if found, otherwise None.
    def get_plugin(self, plugin_id: str) -> Optional[PluginBase]:
        with belief_scope("get_plugin"):
            """
            Returns a loaded plugin instance by its ID.
            """
            return self._plugins.get(plugin_id)
    # [/DEF:get_plugin:Function]

    # [DEF:get_all_plugin_configs:Function]
    # @PURPOSE: Returns a list of all registered plugin configurations.
    # @PRE: None.
    # @POST: Returns list of all PluginConfig objects.
    # @RETURN: List[PluginConfig] - A list of plugin configurations.
    def get_all_plugin_configs(self) -> List[PluginConfig]:
        with belief_scope("get_all_plugin_configs"):
            """
            Returns a list of all loaded plugin configurations.
            """
            return list(self._plugin_configs.values())
    # [/DEF:get_all_plugin_configs:Function]

    # [DEF:has_plugin:Function]
    # @PURPOSE: Checks if a plugin with the given ID is registered.
    # @PRE: plugin_id is a string.
    # @POST: Returns True if plugin exists.
    # @PARAM: plugin_id (str) - The unique identifier of the plugin.
    # @RETURN: bool - True if the plugin is registered, False otherwise.
    def has_plugin(self, plugin_id: str) -> bool:
        with belief_scope("has_plugin"):
            """
            Checks if a plugin with the given ID is loaded.
            """
            return plugin_id in self._plugins
    # [/DEF:has_plugin:Function]

# [/DEF:PluginLoader:Class]
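
Sketch of discovering plugins with the loader above (editor's note). The plugins directory path is an assumption based on the comments in _load_plugins, and "echo" refers to the hypothetical plugin sketched earlier.

from backend.src.core.plugin_loader import PluginLoader

loader = PluginLoader("backend/src/plugins")   # directory path is an assumption
for cfg in loader.get_all_plugin_configs():
    print(cfg.id, cfg.version, "-", cfg.description)

if loader.has_plugin("echo"):
    plugin = loader.get_plugin("echo")         # returns the registered PluginBase instance
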
119  backend/src/core/scheduler.py  Normal file
@@ -0,0 +1,119 @@
# [DEF:SchedulerModule:Module]
# @SEMANTICS: scheduler, apscheduler, cron, backup
# @PURPOSE: Manages scheduled tasks using APScheduler.
# @LAYER: Core
# @RELATION: Uses TaskManager to run scheduled backups.

# [SECTION: IMPORTS]
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from .logger import logger, belief_scope
from .config_manager import ConfigManager
from typing import Optional
import asyncio
# [/SECTION]

# [DEF:SchedulerService:Class]
# @SEMANTICS: scheduler, service, apscheduler
# @PURPOSE: Provides a service to manage scheduled backup tasks.
class SchedulerService:
    # [DEF:__init__:Function]
    # @PURPOSE: Initializes the scheduler service with task and config managers.
    # @PRE: task_manager and config_manager must be provided.
    # @POST: Scheduler instance is created but not started.
    def __init__(self, task_manager, config_manager: ConfigManager):
        with belief_scope("SchedulerService.__init__"):
            self.task_manager = task_manager
            self.config_manager = config_manager
            self.scheduler = BackgroundScheduler()
            self.loop = asyncio.get_event_loop()
    # [/DEF:__init__:Function]

    # [DEF:start:Function]
    # @PURPOSE: Starts the background scheduler and loads initial schedules.
    # @PRE: Scheduler should be initialized.
    # @POST: Scheduler is running and schedules are loaded.
    def start(self):
        with belief_scope("SchedulerService.start"):
            if not self.scheduler.running:
                self.scheduler.start()
                logger.info("Scheduler started.")
            self.load_schedules()
    # [/DEF:start:Function]

    # [DEF:stop:Function]
    # @PURPOSE: Stops the background scheduler.
    # @PRE: Scheduler should be running.
    # @POST: Scheduler is shut down.
    def stop(self):
        with belief_scope("SchedulerService.stop"):
            if self.scheduler.running:
                self.scheduler.shutdown()
                logger.info("Scheduler stopped.")
    # [/DEF:stop:Function]

    # [DEF:load_schedules:Function]
    # @PURPOSE: Loads backup schedules from configuration and registers them.
    # @PRE: config_manager must have valid configuration.
    # @POST: All enabled backup jobs are added to the scheduler.
    def load_schedules(self):
        with belief_scope("SchedulerService.load_schedules"):
            # Clear existing jobs
            self.scheduler.remove_all_jobs()

            config = self.config_manager.get_config()
            for env in config.environments:
                if env.backup_schedule and env.backup_schedule.enabled:
                    self.add_backup_job(env.id, env.backup_schedule.cron_expression)
    # [/DEF:load_schedules:Function]

    # [DEF:add_backup_job:Function]
    # @PURPOSE: Adds a scheduled backup job for an environment.
    # @PRE: env_id and cron_expression must be valid strings.
    # @POST: A new job is added to the scheduler or replaced if it already exists.
    # @PARAM: env_id (str) - The ID of the environment.
    # @PARAM: cron_expression (str) - The cron expression for the schedule.
    def add_backup_job(self, env_id: str, cron_expression: str):
        with belief_scope("SchedulerService.add_backup_job", f"env_id={env_id}, cron={cron_expression}"):
            job_id = f"backup_{env_id}"
            try:
                self.scheduler.add_job(
                    self._trigger_backup,
                    CronTrigger.from_crontab(cron_expression),
                    id=job_id,
                    args=[env_id],
                    replace_existing=True
                )
                logger.info(f"Scheduled backup job added for environment {env_id}: {cron_expression}")
            except Exception as e:
                logger.error(f"Failed to add backup job for environment {env_id}: {e}")
    # [/DEF:add_backup_job:Function]

    # [DEF:_trigger_backup:Function]
    # @PURPOSE: Triggered by the scheduler to start a backup task.
    # @PRE: env_id must be a valid environment ID.
    # @POST: A new backup task is created in the task manager if not already running.
    # @PARAM: env_id (str) - The ID of the environment.
    def _trigger_backup(self, env_id: str):
        with belief_scope("SchedulerService._trigger_backup", f"env_id={env_id}"):
            logger.info(f"Triggering scheduled backup for environment {env_id}")

            # Check if a backup is already running for this environment
            active_tasks = self.task_manager.get_tasks(limit=100)
            for task in active_tasks:
                if (task.plugin_id == "superset-backup" and
                        task.status in ["PENDING", "RUNNING"] and
                        task.params.get("environment_id") == env_id):
                    logger.warning(f"Backup already running for environment {env_id}. Skipping scheduled run.")
                    return

            # Run the backup task.
            # We need to run this in the event loop since create_task is async.
            asyncio.run_coroutine_threadsafe(
                self.task_manager.create_task("superset-backup", {"environment_id": env_id}),
                self.loop
            )
    # [/DEF:_trigger_backup:Function]

# [/DEF:SchedulerService:Class]
# [/DEF:SchedulerModule:Module]
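
Sketch of wiring the scheduler above at application startup (editor's note). The task manager is only referenced by interface here (async create_task, get_tasks), since its implementation is not part of this excerpt; the stub below and the ConfigManager constructor argument are assumptions for illustration.

from backend.src.core.config_manager import ConfigManager
from backend.src.core.scheduler import SchedulerService

class _StubTaskManager:
    # mirrors the two calls SchedulerService makes on the real task manager
    async def create_task(self, plugin_id, params):
        print("create_task:", plugin_id, params)
    def get_tasks(self, limit=100):
        return []

config_manager = ConfigManager("config.json")      # constructor argument is assumed
scheduler = SchedulerService(_StubTaskManager(), config_manager)
scheduler.start()    # starts APScheduler and registers all enabled backup schedules
# ... on shutdown:
scheduler.stop()
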
400
backend/src/core/superset_client.py
Normal file
400
backend/src/core/superset_client.py
Normal file
@@ -0,0 +1,400 @@
|
|||||||
|
# [DEF:backend.src.core.superset_client:Module]
|
||||||
|
#
|
||||||
|
# @SEMANTICS: superset, api, client, rest, http, dashboard, dataset, import, export
|
||||||
|
# @PURPOSE: Предоставляет высокоуровневый клиент для взаимодействия с Superset REST API, инкапсулируя логику запросов, обработку ошибок и пагинацию.
|
||||||
|
# @LAYER: Core
|
||||||
|
# @RELATION: USES -> backend.src.core.utils.network.APIClient
|
||||||
|
# @RELATION: USES -> backend.src.core.config_models.Environment
|
||||||
|
#
|
||||||
|
# @INVARIANT: All network operations must use the internal APIClient instance.
|
||||||
|
# @PUBLIC_API: SupersetClient
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
import json
|
||||||
|
import zipfile
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any, Dict, List, Optional, Tuple, Union, cast
|
||||||
|
from requests import Response
|
||||||
|
from .logger import logger as app_logger, belief_scope
|
||||||
|
from .utils.network import APIClient, SupersetAPIError, AuthenticationError, DashboardNotFoundError, NetworkError
|
||||||
|
from .utils.fileio import get_filename_from_headers
|
||||||
|
from .config_models import Environment
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:SupersetClient:Class]
|
||||||
|
# @PURPOSE: Класс-обёртка над Superset REST API, предоставляющий методы для работы с дашбордами и датасетами.
|
||||||
|
class SupersetClient:
|
||||||
|
# [DEF:__init__:Function]
|
||||||
|
# @PURPOSE: Инициализирует клиент, проверяет конфигурацию и создает сетевой клиент.
|
||||||
|
# @PRE: `env` должен быть валидным объектом Environment.
|
||||||
|
# @POST: Атрибуты `env` и `network` созданы и готовы к работе.
|
||||||
|
# @PARAM: env (Environment) - Конфигурация окружения.
|
||||||
|
def __init__(self, env: Environment):
|
||||||
|
with belief_scope("__init__"):
|
||||||
|
app_logger.info("[SupersetClient.__init__][Enter] Initializing SupersetClient for env %s.", env.name)
|
||||||
|
self.env = env
|
||||||
|
# Construct auth payload expected by Superset API
|
||||||
|
auth_payload = {
|
||||||
|
"username": env.username,
|
||||||
|
"password": env.password,
|
||||||
|
"provider": "db",
|
||||||
|
"refresh": "true"
|
||||||
|
}
|
||||||
|
self.network = APIClient(
|
||||||
|
config={
|
||||||
|
"base_url": env.url,
|
||||||
|
"auth": auth_payload
|
||||||
|
},
|
||||||
|
verify_ssl=env.verify_ssl,
|
||||||
|
timeout=env.timeout
|
||||||
|
)
|
||||||
|
self.delete_before_reimport: bool = False
|
||||||
|
app_logger.info("[SupersetClient.__init__][Exit] SupersetClient initialized.")
|
||||||
|
# [/DEF:__init__:Function]
|
||||||
|
|
||||||
|
# [DEF:authenticate:Function]
|
||||||
|
# @PURPOSE: Authenticates the client using the configured credentials.
|
||||||
|
# @PRE: self.network must be initialized with valid auth configuration.
|
||||||
|
# @POST: Client is authenticated and tokens are stored.
|
||||||
|
# @RETURN: Dict[str, str] - Authentication tokens.
|
||||||
|
def authenticate(self) -> Dict[str, str]:
|
||||||
|
with belief_scope("SupersetClient.authenticate"):
|
||||||
|
return self.network.authenticate()
|
||||||
|
# [/DEF:authenticate:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:headers:Function]
|
||||||
|
# @PURPOSE: Возвращает базовые HTTP-заголовки, используемые сетевым клиентом.
|
||||||
|
def headers(self) -> dict:
|
||||||
|
with belief_scope("headers"):
|
||||||
|
return self.network.headers
|
||||||
|
# [/DEF:headers:Function]
|
||||||
|
|
||||||
|
# [SECTION: DASHBOARD OPERATIONS]
|
||||||
|
|
||||||
|
# [DEF:get_dashboards:Function]
|
||||||
|
# @PURPOSE: Получает полный список дашбордов, автоматически обрабатывая пагинацию.
|
||||||
|
# @PARAM: query (Optional[Dict]) - Дополнительные параметры запроса для API.
|
||||||
|
# @RETURN: Tuple[int, List[Dict]] - Кортеж (общее количество, список дашбордов).
|
||||||
|
def get_dashboards(self, query: Optional[Dict] = None) -> Tuple[int, List[Dict]]:
|
||||||
|
with belief_scope("get_dashboards"):
|
||||||
|
app_logger.info("[get_dashboards][Enter] Fetching dashboards.")
|
||||||
|
validated_query = self._validate_query_params(query or {})
|
||||||
|
if 'columns' not in validated_query:
|
||||||
|
validated_query['columns'] = ["slug", "id", "changed_on_utc", "dashboard_title", "published"]
|
||||||
|
|
||||||
|
total_count = self._fetch_total_object_count(endpoint="/dashboard/")
|
||||||
|
paginated_data = self._fetch_all_pages(
|
||||||
|
endpoint="/dashboard/",
|
||||||
|
pagination_options={"base_query": validated_query, "total_count": total_count, "results_field": "result"},
|
||||||
|
)
|
||||||
|
app_logger.info("[get_dashboards][Exit] Found %d dashboards.", total_count)
|
||||||
|
return total_count, paginated_data
|
||||||
|
# [/DEF:get_dashboards:Function]
|
||||||
|
|
||||||
|
# [DEF:get_dashboards_summary:Function]
|
||||||
|
# @PURPOSE: Fetches dashboard metadata optimized for the grid.
|
||||||
|
# @RETURN: List[Dict]
|
||||||
|
def get_dashboards_summary(self) -> List[Dict]:
|
||||||
|
with belief_scope("SupersetClient.get_dashboards_summary"):
|
||||||
|
query = {
|
||||||
|
"columns": ["id", "dashboard_title", "changed_on_utc", "published"]
|
||||||
|
}
|
||||||
|
_, dashboards = self.get_dashboards(query=query)
|
||||||
|
|
||||||
|
# Map fields to DashboardMetadata schema
|
||||||
|
result = []
|
||||||
|
for dash in dashboards:
|
||||||
|
result.append({
|
||||||
|
"id": dash.get("id"),
|
||||||
|
"title": dash.get("dashboard_title"),
|
||||||
|
"last_modified": dash.get("changed_on_utc"),
|
||||||
|
"status": "published" if dash.get("published") else "draft"
|
||||||
|
})
|
||||||
|
return result
|
||||||
|
# [/DEF:get_dashboards_summary:Function]
|
||||||
|
|
||||||
|
# [DEF:export_dashboard:Function]
|
||||||
|
# @PURPOSE: Экспортирует дашборд в виде ZIP-архива.
|
||||||
|
# @PARAM: dashboard_id (int) - ID дашборда для экспорта.
|
||||||
|
# @RETURN: Tuple[bytes, str] - Бинарное содержимое ZIP-архива и имя файла.
|
||||||
|
def export_dashboard(self, dashboard_id: int) -> Tuple[bytes, str]:
|
||||||
|
with belief_scope("export_dashboard"):
|
||||||
|
app_logger.info("[export_dashboard][Enter] Exporting dashboard %s.", dashboard_id)
|
||||||
|
response = self.network.request(
|
||||||
|
method="GET",
|
||||||
|
endpoint="/dashboard/export/",
|
||||||
|
params={"q": json.dumps([dashboard_id])},
|
||||||
|
stream=True,
|
||||||
|
raw_response=True,
|
||||||
|
)
|
||||||
|
response = cast(Response, response)
|
||||||
|
self._validate_export_response(response, dashboard_id)
|
||||||
|
filename = self._resolve_export_filename(response, dashboard_id)
|
||||||
|
app_logger.info("[export_dashboard][Exit] Exported dashboard %s to %s.", dashboard_id, filename)
|
||||||
|
return response.content, filename
|
||||||
|
# [/DEF:export_dashboard:Function]
|
||||||
|
|
||||||
|
# [DEF:import_dashboard:Function]
|
||||||
|
# @PURPOSE: Импортирует дашборд из ZIP-файла.
|
||||||
|
# @PARAM: file_name (Union[str, Path]) - Путь к ZIP-архиву.
|
||||||
|
# @PARAM: dash_id (Optional[int]) - ID дашборда для удаления при сбое.
|
||||||
|
# @PARAM: dash_slug (Optional[str]) - Slug дашборда для поиска ID.
|
||||||
|
# @RETURN: Dict - Ответ API в случае успеха.
|
||||||
|
def import_dashboard(self, file_name: Union[str, Path], dash_id: Optional[int] = None, dash_slug: Optional[str] = None) -> Dict:
|
||||||
|
with belief_scope("import_dashboard"):
|
||||||
|
file_path = str(file_name)
|
||||||
|
self._validate_import_file(file_path)
|
||||||
|
try:
|
||||||
|
return self._do_import(file_path)
|
||||||
|
except Exception as exc:
|
||||||
|
app_logger.error("[import_dashboard][Failure] First import attempt failed: %s", exc, exc_info=True)
|
||||||
|
if not self.delete_before_reimport:
|
||||||
|
raise
|
||||||
|
|
||||||
|
target_id = self._resolve_target_id_for_delete(dash_id, dash_slug)
|
||||||
|
if target_id is None:
|
||||||
|
app_logger.error("[import_dashboard][Failure] No ID available for delete-retry.")
|
||||||
|
raise
|
||||||
|
|
||||||
|
self.delete_dashboard(target_id)
|
||||||
|
app_logger.info("[import_dashboard][State] Deleted dashboard ID %s, retrying import.", target_id)
|
||||||
|
return self._do_import(file_path)
|
||||||
|
# [/DEF:import_dashboard:Function]
|
||||||
|
|
||||||
|
# [DEF:delete_dashboard:Function]
|
||||||
|
# @PURPOSE: Удаляет дашборд по его ID или slug.
|
||||||
|
# @PARAM: dashboard_id (Union[int, str]) - ID или slug дашборда.
|
||||||
|
def delete_dashboard(self, dashboard_id: Union[int, str]) -> None:
|
||||||
|
with belief_scope("delete_dashboard"):
|
||||||
|
app_logger.info("[delete_dashboard][Enter] Deleting dashboard %s.", dashboard_id)
|
||||||
|
response = self.network.request(method="DELETE", endpoint=f"/dashboard/{dashboard_id}")
|
||||||
|
response = cast(Dict, response)
|
||||||
|
if response.get("result", True) is not False:
|
||||||
|
app_logger.info("[delete_dashboard][Success] Dashboard %s deleted.", dashboard_id)
|
||||||
|
else:
|
||||||
|
app_logger.warning("[delete_dashboard][Warning] Unexpected response while deleting %s: %s", dashboard_id, response)
|
||||||
|
# [/DEF:delete_dashboard:Function]
|
||||||
|
|
||||||
|
# [/SECTION]
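# Minimal usage sketch for the dashboard operations above. It assumes `client`
# is an already-authenticated SupersetClient instance and that the current
# directory is writable; the helper name `backup_and_restore_dashboard` is
# illustrative and not part of the module.
from pathlib import Path

def backup_and_restore_dashboard(client, dashboard_id: int) -> dict:
    # Export the dashboard to a ZIP archive on disk.
    content, filename = client.export_dashboard(dashboard_id)
    archive = Path(filename)
    archive.write_bytes(content)
    # Re-import the same archive; with delete_before_reimport enabled the
    # client deletes the existing dashboard and retries if the first import fails.
    return client.import_dashboard(archive, dash_id=dashboard_id)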
|
||||||
|
|
||||||
|
# [SECTION: DATASET OPERATIONS]
|
||||||
|
|
||||||
|
# [DEF:get_datasets:Function]
|
||||||
|
# @PURPOSE: Получает полный список датасетов, автоматически обрабатывая пагинацию.
|
||||||
|
# @PARAM: query (Optional[Dict]) - Дополнительные параметры запроса.
|
||||||
|
# @RETURN: Tuple[int, List[Dict]] - Кортеж (общее количество, список датасетов).
|
||||||
|
def get_datasets(self, query: Optional[Dict] = None) -> Tuple[int, List[Dict]]:
|
||||||
|
with belief_scope("get_datasets"):
|
||||||
|
app_logger.info("[get_datasets][Enter] Fetching datasets.")
|
||||||
|
validated_query = self._validate_query_params(query)
|
||||||
|
|
||||||
|
total_count = self._fetch_total_object_count(endpoint="/dataset/")
|
||||||
|
paginated_data = self._fetch_all_pages(
|
||||||
|
endpoint="/dataset/",
|
||||||
|
pagination_options={"base_query": validated_query, "total_count": total_count, "results_field": "result"},
|
||||||
|
)
|
||||||
|
app_logger.info("[get_datasets][Exit] Found %d datasets.", total_count)
|
||||||
|
return total_count, paginated_data
|
||||||
|
# [/DEF:get_datasets:Function]
|
||||||
|
|
||||||
|
# [DEF:get_dataset:Function]
|
||||||
|
# @PURPOSE: Получает информацию о конкретном датасете по его ID.
|
||||||
|
# @PARAM: dataset_id (int) - ID датасета.
|
||||||
|
# @RETURN: Dict - Информация о датасете.
|
||||||
|
def get_dataset(self, dataset_id: int) -> Dict:
|
||||||
|
with belief_scope("SupersetClient.get_dataset", f"id={dataset_id}"):
|
||||||
|
app_logger.info("[get_dataset][Enter] Fetching dataset %s.", dataset_id)
|
||||||
|
response = self.network.request(method="GET", endpoint=f"/dataset/{dataset_id}")
|
||||||
|
response = cast(Dict, response)
|
||||||
|
app_logger.info("[get_dataset][Exit] Got dataset %s.", dataset_id)
|
||||||
|
return response
|
||||||
|
# [/DEF:get_dataset:Function]
|
||||||
|
|
||||||
|
# [DEF:update_dataset:Function]
|
||||||
|
# @PURPOSE: Обновляет данные датасета по его ID.
|
||||||
|
# @PARAM: dataset_id (int) - ID датасета.
|
||||||
|
# @PARAM: data (Dict) - Данные для обновления.
|
||||||
|
# @RETURN: Dict - Ответ API.
|
||||||
|
def update_dataset(self, dataset_id: int, data: Dict) -> Dict:
|
||||||
|
with belief_scope("SupersetClient.update_dataset", f"id={dataset_id}"):
|
||||||
|
app_logger.info("[update_dataset][Enter] Updating dataset %s.", dataset_id)
|
||||||
|
response = self.network.request(
|
||||||
|
method="PUT",
|
||||||
|
endpoint=f"/dataset/{dataset_id}",
|
||||||
|
data=json.dumps(data),
|
||||||
|
headers={'Content-Type': 'application/json'}
|
||||||
|
)
|
||||||
|
response = cast(Dict, response)
|
||||||
|
app_logger.info("[update_dataset][Exit] Updated dataset %s.", dataset_id)
|
||||||
|
return response
|
||||||
|
# [/DEF:update_dataset:Function]
|
||||||
|
|
||||||
|
# [/SECTION]
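# Sketch of a dataset round-trip built on the operations above: read the current
# definition, change one field, and push it back. `client` is assumed to be an
# authenticated SupersetClient; depending on the Superset version the PUT
# endpoint may require a fuller payload (see the DatasetMapper utility later in
# this changeset for a complete one).
def set_dataset_description(client, dataset_id: int, description: str) -> dict:
    current = client.get_dataset(dataset_id)["result"]
    payload = {
        "database_id": current.get("database", {}).get("id"),
        "table_name": current.get("table_name"),
        "description": description,
    }
    return client.update_dataset(dataset_id, payload)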
|
||||||
|
|
||||||
|
# [SECTION: DATABASE OPERATIONS]
|
||||||
|
|
||||||
|
# [DEF:get_databases:Function]
|
||||||
|
# @PURPOSE: Получает полный список баз данных.
|
||||||
|
# @PARAM: query (Optional[Dict]) - Дополнительные параметры запроса.
|
||||||
|
# @RETURN: Tuple[int, List[Dict]] - Кортеж (общее количество, список баз данных).
|
||||||
|
def get_databases(self, query: Optional[Dict] = None) -> Tuple[int, List[Dict]]:
|
||||||
|
with belief_scope("get_databases"):
|
||||||
|
app_logger.info("[get_databases][Enter] Fetching databases.")
|
||||||
|
validated_query = self._validate_query_params(query or {})
|
||||||
|
if 'columns' not in validated_query:
|
||||||
|
validated_query['columns'] = []
|
||||||
|
total_count = self._fetch_total_object_count(endpoint="/database/")
|
||||||
|
paginated_data = self._fetch_all_pages(
|
||||||
|
endpoint="/database/",
|
||||||
|
pagination_options={"base_query": validated_query, "total_count": total_count, "results_field": "result"},
|
||||||
|
)
|
||||||
|
app_logger.info("[get_databases][Exit] Found %d databases.", total_count)
|
||||||
|
return total_count, paginated_data
|
||||||
|
# [/DEF:get_databases:Function]
|
||||||
|
|
||||||
|
# [DEF:get_database:Function]
|
||||||
|
# @PURPOSE: Получает информацию о конкретной базе данных по её ID.
|
||||||
|
# @PARAM: database_id (int) - ID базы данных.
|
||||||
|
# @RETURN: Dict - Информация о базе данных.
|
||||||
|
def get_database(self, database_id: int) -> Dict:
|
||||||
|
with belief_scope("get_database"):
|
||||||
|
app_logger.info("[get_database][Enter] Fetching database %s.", database_id)
|
||||||
|
response = self.network.request(method="GET", endpoint=f"/database/{database_id}")
|
||||||
|
response = cast(Dict, response)
|
||||||
|
app_logger.info("[get_database][Exit] Got database %s.", database_id)
|
||||||
|
return response
|
||||||
|
# [/DEF:get_database:Function]
|
||||||
|
|
||||||
|
# [DEF:get_databases_summary:Function]
|
||||||
|
# @PURPOSE: Fetch a summary of databases including uuid, name, and engine.
|
||||||
|
# @RETURN: List[Dict] - Summary of databases.
|
||||||
|
def get_databases_summary(self) -> List[Dict]:
|
||||||
|
with belief_scope("SupersetClient.get_databases_summary"):
|
||||||
|
query = {
|
||||||
|
"columns": ["uuid", "database_name", "backend"]
|
||||||
|
}
|
||||||
|
_, databases = self.get_databases(query=query)
|
||||||
|
|
||||||
|
# Map 'backend' to 'engine' for consistency with contracts
|
||||||
|
for db in databases:
|
||||||
|
db['engine'] = db.pop('backend', None)
|
||||||
|
|
||||||
|
return databases
|
||||||
|
# [/DEF:get_databases_summary:Function]
|
||||||
|
|
||||||
|
# [DEF:get_database_by_uuid:Function]
|
||||||
|
# @PURPOSE: Find a database by its UUID.
|
||||||
|
# @PARAM: db_uuid (str) - The UUID of the database.
|
||||||
|
# @RETURN: Optional[Dict] - Database info if found, else None.
|
||||||
|
def get_database_by_uuid(self, db_uuid: str) -> Optional[Dict]:
|
||||||
|
with belief_scope("SupersetClient.get_database_by_uuid", f"uuid={db_uuid}"):
|
||||||
|
query = {
|
||||||
|
"filters": [{"col": "uuid", "op": "eq", "value": db_uuid}]
|
||||||
|
}
|
||||||
|
_, databases = self.get_databases(query=query)
|
||||||
|
return databases[0] if databases else None
|
||||||
|
# [/DEF:get_database_by_uuid:Function]
|
||||||
|
|
||||||
|
# [/SECTION]
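# Small sketch combining the database helpers above: list the summaries and
# resolve each entry back to its full record by UUID. `client` is assumed to be
# an authenticated SupersetClient.
def describe_databases(client) -> None:
    for summary in client.get_databases_summary():
        full = client.get_database_by_uuid(summary["uuid"])
        engine = summary.get("engine") or "unknown"
        print(f'{summary["database_name"]} ({engine}): found={full is not None}')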
|
||||||
|
|
||||||
|
# [SECTION: HELPERS]
|
||||||
|
|
||||||
|
# [DEF:_resolve_target_id_for_delete:Function]
|
||||||
|
def _resolve_target_id_for_delete(self, dash_id: Optional[int], dash_slug: Optional[str]) -> Optional[int]:
|
||||||
|
with belief_scope("_resolve_target_id_for_delete"):
|
||||||
|
if dash_id is not None:
|
||||||
|
return dash_id
|
||||||
|
if dash_slug is not None:
|
||||||
|
app_logger.debug("[_resolve_target_id_for_delete][State] Resolving ID by slug '%s'.", dash_slug)
|
||||||
|
try:
|
||||||
|
_, candidates = self.get_dashboards(query={"filters": [{"col": "slug", "op": "eq", "value": dash_slug}]})
|
||||||
|
if candidates:
|
||||||
|
target_id = candidates[0]["id"]
|
||||||
|
app_logger.debug("[_resolve_target_id_for_delete][Success] Resolved slug to ID %s.", target_id)
|
||||||
|
return target_id
|
||||||
|
except Exception as e:
|
||||||
|
app_logger.warning("[_resolve_target_id_for_delete][Warning] Could not resolve slug '%s' to ID: %s", dash_slug, e)
|
||||||
|
return None
|
||||||
|
# [/DEF:_resolve_target_id_for_delete:Function]
|
||||||
|
|
||||||
|
# [DEF:_do_import:Function]
|
||||||
|
def _do_import(self, file_name: Union[str, Path]) -> Dict:
|
||||||
|
with belief_scope("_do_import"):
|
||||||
|
app_logger.debug(f"[_do_import][State] Uploading file: {file_name}")
|
||||||
|
file_path = Path(file_name)
|
||||||
|
if not file_path.exists():
|
||||||
|
app_logger.error(f"[_do_import][Failure] File does not exist: {file_name}")
|
||||||
|
raise FileNotFoundError(f"File does not exist: {file_name}")
|
||||||
|
|
||||||
|
return self.network.upload_file(
|
||||||
|
endpoint="/dashboard/import/",
|
||||||
|
file_info={"file_obj": file_path, "file_name": file_path.name, "form_field": "formData"},
|
||||||
|
extra_data={"overwrite": "true"},
|
||||||
|
timeout=self.env.timeout * 2,
|
||||||
|
)
|
||||||
|
# [/DEF:_do_import:Function]
|
||||||
|
|
||||||
|
# [DEF:_validate_export_response:Function]
|
||||||
|
def _validate_export_response(self, response: Response, dashboard_id: int) -> None:
|
||||||
|
with belief_scope("_validate_export_response"):
|
||||||
|
content_type = response.headers.get("Content-Type", "")
|
||||||
|
if "application/zip" not in content_type:
|
||||||
|
raise SupersetAPIError(f"Получен не ZIP-архив (Content-Type: {content_type})")
|
||||||
|
if not response.content:
|
||||||
|
raise SupersetAPIError("Получены пустые данные при экспорте")
|
||||||
|
# [/DEF:_validate_export_response:Function]
|
||||||
|
|
||||||
|
# [DEF:_resolve_export_filename:Function]
|
||||||
|
def _resolve_export_filename(self, response: Response, dashboard_id: int) -> str:
|
||||||
|
with belief_scope("_resolve_export_filename"):
|
||||||
|
filename = get_filename_from_headers(dict(response.headers))
|
||||||
|
if not filename:
|
||||||
|
from datetime import datetime
|
||||||
|
timestamp = datetime.now().strftime("%Y%m%dT%H%M%S")
|
||||||
|
filename = f"dashboard_export_{dashboard_id}_{timestamp}.zip"
|
||||||
|
app_logger.warning("[_resolve_export_filename][Warning] Generated filename: %s", filename)
|
||||||
|
return filename
|
||||||
|
# [/DEF:_resolve_export_filename:Function]
|
||||||
|
|
||||||
|
# [DEF:_validate_query_params:Function]
|
||||||
|
def _validate_query_params(self, query: Optional[Dict]) -> Dict:
|
||||||
|
with belief_scope("_validate_query_params"):
|
||||||
|
base_query = {"page": 0, "page_size": 1000}
|
||||||
|
return {**base_query, **(query or {})}
|
||||||
|
# [/DEF:_validate_query_params:Function]
|
||||||
|
|
||||||
|
# [DEF:_fetch_total_object_count:Function]
|
||||||
|
def _fetch_total_object_count(self, endpoint: str) -> int:
|
||||||
|
with belief_scope("_fetch_total_object_count"):
|
||||||
|
return self.network.fetch_paginated_count(
|
||||||
|
endpoint=endpoint,
|
||||||
|
query_params={"page": 0, "page_size": 1},
|
||||||
|
count_field="count",
|
||||||
|
)
|
||||||
|
# [/DEF:_fetch_total_object_count:Function]
|
||||||
|
|
||||||
|
# [DEF:_fetch_all_pages:Function]
|
||||||
|
def _fetch_all_pages(self, endpoint: str, pagination_options: Dict) -> List[Dict]:
|
||||||
|
with belief_scope("_fetch_all_pages"):
|
||||||
|
return self.network.fetch_paginated_data(endpoint=endpoint, pagination_options=pagination_options)
|
||||||
|
# [/DEF:_fetch_all_pages:Function]
|
||||||
|
|
||||||
|
# [DEF:_validate_import_file:Function]
|
||||||
|
def _validate_import_file(self, zip_path: Union[str, Path]) -> None:
|
||||||
|
with belief_scope("_validate_import_file"):
|
||||||
|
path = Path(zip_path)
|
||||||
|
if not path.exists():
|
||||||
|
raise FileNotFoundError(f"Файл {zip_path} не существует")
|
||||||
|
if not zipfile.is_zipfile(path):
|
||||||
|
raise SupersetAPIError(f"Файл {zip_path} не является ZIP-архивом")
|
||||||
|
with zipfile.ZipFile(path, "r") as zf:
|
||||||
|
if not any(n.endswith("metadata.yaml") for n in zf.namelist()):
|
||||||
|
raise SupersetAPIError(f"Архив {zip_path} не содержит 'metadata.yaml'")
|
||||||
|
# [/DEF:_validate_import_file:Function]
|
||||||
|
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [/DEF:SupersetClient:Class]
|
||||||
|
|
||||||
|
# [/DEF:backend.src.core.superset_client:Module]
|
||||||
12  backend/src/core/task_manager/__init__.py  Normal file
@@ -0,0 +1,12 @@
|
|||||||
|
# [DEF:TaskManagerPackage:Module]
# @SEMANTICS: task, manager, package, exports
# @PURPOSE: Exports the public API of the task manager package.
# @LAYER: Core
# @RELATION: Aggregates models and manager.

from .models import Task, TaskStatus, LogEntry
from .manager import TaskManager

__all__ = ["TaskManager", "Task", "TaskStatus", "LogEntry"]

# [/DEF:TaskManagerPackage:Module]
|
||||||
47  backend/src/core/task_manager/cleanup.py  Normal file
@@ -0,0 +1,47 @@
|
|||||||
|
# [DEF:TaskCleanupModule:Module]
|
||||||
|
# @SEMANTICS: task, cleanup, retention
|
||||||
|
# @PURPOSE: Implements task cleanup and retention policies.
|
||||||
|
# @LAYER: Core
|
||||||
|
# @RELATION: Uses TaskPersistenceService to delete old tasks.
|
||||||
|
|
||||||
|
from datetime import datetime, timedelta
|
||||||
|
from .persistence import TaskPersistenceService
|
||||||
|
from ..logger import logger, belief_scope
|
||||||
|
from ..config_manager import ConfigManager
|
||||||
|
|
||||||
|
# [DEF:TaskCleanupService:Class]
|
||||||
|
# @PURPOSE: Provides methods to clean up old task records.
|
||||||
|
class TaskCleanupService:
|
||||||
|
# [DEF:__init__:Function]
|
||||||
|
# @PURPOSE: Initializes the cleanup service with dependencies.
|
||||||
|
# @PRE: persistence_service and config_manager are valid.
|
||||||
|
# @POST: Cleanup service is ready.
|
||||||
|
def __init__(self, persistence_service: TaskPersistenceService, config_manager: ConfigManager):
|
||||||
|
self.persistence_service = persistence_service
|
||||||
|
self.config_manager = config_manager
|
||||||
|
# [/DEF:__init__:Function]
|
||||||
|
|
||||||
|
# [DEF:run_cleanup:Function]
|
||||||
|
# @PURPOSE: Deletes tasks older than the configured retention period.
|
||||||
|
# @PRE: Config manager has valid settings.
|
||||||
|
# @POST: Old tasks are deleted from persistence.
|
||||||
|
def run_cleanup(self):
|
||||||
|
with belief_scope("TaskCleanupService.run_cleanup"):
|
||||||
|
settings = self.config_manager.get_config().settings
|
||||||
|
retention_days = settings.task_retention_days
|
||||||
|
|
||||||
|
# This is a simplified implementation.
|
||||||
|
# In a real scenario, we would query IDs of tasks older than retention_days.
|
||||||
|
# For now, we'll log the action.
|
||||||
|
logger.info(f"Cleaning up tasks older than {retention_days} days.")
|
||||||
|
|
||||||
|
# Re-loading tasks to check for limit
|
||||||
|
tasks = self.persistence_service.load_tasks(limit=1000)
|
||||||
|
if len(tasks) > settings.task_retention_limit:
|
||||||
|
to_delete = [t.id for t in tasks[settings.task_retention_limit:]]
|
||||||
|
self.persistence_service.delete_tasks(to_delete)
|
||||||
|
logger.info(f"Deleted {len(to_delete)} tasks exceeding limit of {settings.task_retention_limit}")
|
||||||
|
# [/DEF:run_cleanup:Function]
|
||||||
|
|
||||||
|
# [/DEF:TaskCleanupService:Class]
|
||||||
|
# [/DEF:TaskCleanupModule:Module]
|
||||||
398  backend/src/core/task_manager/manager.py  Normal file
@@ -0,0 +1,398 @@
|
|||||||
|
# [DEF:TaskManagerModule:Module]
|
||||||
|
# @SEMANTICS: task, manager, lifecycle, execution, state
|
||||||
|
# @PURPOSE: Manages the lifecycle of tasks, including their creation, execution, and state tracking. It uses a thread pool to run plugins asynchronously.
|
||||||
|
# @LAYER: Core
|
||||||
|
# @RELATION: Depends on PluginLoader to get plugin instances. It is used by the API layer to create and query tasks.
|
||||||
|
# @INVARIANT: Task IDs are unique.
|
||||||
|
# @CONSTRAINT: Must use belief_scope for logging.
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
import asyncio
|
||||||
|
from datetime import datetime
|
||||||
|
from typing import Dict, Any, List, Optional
|
||||||
|
from concurrent.futures import ThreadPoolExecutor
|
||||||
|
|
||||||
|
from .models import Task, TaskStatus, LogEntry
|
||||||
|
from .persistence import TaskPersistenceService
|
||||||
|
from ..logger import logger, belief_scope
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:TaskManager:Class]
|
||||||
|
# @SEMANTICS: task, manager, lifecycle, execution, state
|
||||||
|
# @PURPOSE: Manages the lifecycle of tasks, including their creation, execution, and state tracking.
|
||||||
|
class TaskManager:
|
||||||
|
"""
|
||||||
|
Manages the lifecycle of tasks, including their creation, execution, and state tracking.
|
||||||
|
"""
|
||||||
|
|
||||||
|
# [DEF:__init__:Function]
|
||||||
|
# @PURPOSE: Initialize the TaskManager with dependencies.
|
||||||
|
# @PRE: plugin_loader is initialized.
|
||||||
|
# @POST: TaskManager is ready to accept tasks.
|
||||||
|
# @PARAM: plugin_loader - The plugin loader instance.
|
||||||
|
def __init__(self, plugin_loader):
|
||||||
|
with belief_scope("TaskManager.__init__"):
|
||||||
|
self.plugin_loader = plugin_loader
|
||||||
|
self.tasks: Dict[str, Task] = {}
|
||||||
|
self.subscribers: Dict[str, List[asyncio.Queue]] = {}
|
||||||
|
self.executor = ThreadPoolExecutor(max_workers=5) # For CPU-bound plugin execution
|
||||||
|
self.persistence_service = TaskPersistenceService()
|
||||||
|
|
||||||
|
try:
|
||||||
|
self.loop = asyncio.get_running_loop()
|
||||||
|
except RuntimeError:
|
||||||
|
self.loop = asyncio.get_event_loop()
|
||||||
|
self.task_futures: Dict[str, asyncio.Future] = {}
|
||||||
|
|
||||||
|
# Load persisted tasks on startup
|
||||||
|
self.load_persisted_tasks()
|
||||||
|
# [/DEF:__init__:Function]
|
||||||
|
|
||||||
|
# [DEF:create_task:Function]
|
||||||
|
# @PURPOSE: Creates and queues a new task for execution.
|
||||||
|
# @PRE: Plugin with plugin_id exists. Params are valid.
|
||||||
|
# @POST: Task is created, added to registry, and scheduled for execution.
|
||||||
|
# @PARAM: plugin_id (str) - The ID of the plugin to run.
|
||||||
|
# @PARAM: params (Dict[str, Any]) - Parameters for the plugin.
|
||||||
|
# @PARAM: user_id (Optional[str]) - ID of the user requesting the task.
|
||||||
|
# @RETURN: Task - The created task instance.
|
||||||
|
# @THROWS: ValueError if plugin not found or params invalid.
|
||||||
|
async def create_task(self, plugin_id: str, params: Dict[str, Any], user_id: Optional[str] = None) -> Task:
|
||||||
|
with belief_scope("TaskManager.create_task", f"plugin_id={plugin_id}"):
|
||||||
|
if not self.plugin_loader.has_plugin(plugin_id):
|
||||||
|
logger.error(f"Plugin with ID '{plugin_id}' not found.")
|
||||||
|
raise ValueError(f"Plugin with ID '{plugin_id}' not found.")
|
||||||
|
|
||||||
|
plugin = self.plugin_loader.get_plugin(plugin_id)
|
||||||
|
|
||||||
|
if not isinstance(params, dict):
|
||||||
|
logger.error("Task parameters must be a dictionary.")
|
||||||
|
raise ValueError("Task parameters must be a dictionary.")
|
||||||
|
|
||||||
|
task = Task(plugin_id=plugin_id, params=params, user_id=user_id)
|
||||||
|
self.tasks[task.id] = task
|
||||||
|
self.persistence_service.persist_task(task)
|
||||||
|
logger.info(f"Task {task.id} created and scheduled for execution")
|
||||||
|
self.loop.create_task(self._run_task(task.id)) # Schedule task for execution
|
||||||
|
return task
|
||||||
|
# [/DEF:create_task:Function]
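# Usage sketch for create_task from an async context (e.g. a FastAPI route).
# It assumes `task_manager` is the application-wide TaskManager instance and
# that a plugin with id "dashboard_export" is registered; both names are
# illustrative, not part of the module.
async def start_export(task_manager) -> str:
    task = await task_manager.create_task(
        plugin_id="dashboard_export",
        params={"dashboard_id": 42},
        user_id="admin",
    )
    return task.id  # poll get_task(task.id) or subscribe_logs(task.id) later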
|
||||||
|
|
||||||
|
# [DEF:_run_task:Function]
|
||||||
|
# @PURPOSE: Internal method to execute a task.
|
||||||
|
# @PRE: Task exists in registry.
|
||||||
|
# @POST: Task is executed, status updated to SUCCESS or FAILED.
|
||||||
|
# @PARAM: task_id (str) - The ID of the task to run.
|
||||||
|
async def _run_task(self, task_id: str):
|
||||||
|
with belief_scope("TaskManager._run_task", f"task_id={task_id}"):
|
||||||
|
task = self.tasks[task_id]
|
||||||
|
plugin = self.plugin_loader.get_plugin(task.plugin_id)
|
||||||
|
|
||||||
|
logger.info(f"Starting execution of task {task_id} for plugin '{plugin.name}'")
|
||||||
|
task.status = TaskStatus.RUNNING
|
||||||
|
task.started_at = datetime.utcnow()
|
||||||
|
self.persistence_service.persist_task(task)
|
||||||
|
self._add_log(task_id, "INFO", f"Task started for plugin '{plugin.name}'")
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Execute plugin
|
||||||
|
params = {**task.params, "_task_id": task_id}
|
||||||
|
|
||||||
|
if asyncio.iscoroutinefunction(plugin.execute):
|
||||||
|
task.result = await plugin.execute(params)
|
||||||
|
else:
|
||||||
|
task.result = await self.loop.run_in_executor(
|
||||||
|
self.executor,
|
||||||
|
plugin.execute,
|
||||||
|
params
|
||||||
|
)
|
||||||
|
|
||||||
|
logger.info(f"Task {task_id} completed successfully")
|
||||||
|
task.status = TaskStatus.SUCCESS
|
||||||
|
self._add_log(task_id, "INFO", f"Task completed successfully for plugin '{plugin.name}'")
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Task {task_id} failed: {e}")
|
||||||
|
task.status = TaskStatus.FAILED
|
||||||
|
self._add_log(task_id, "ERROR", f"Task failed: {e}", {"error_type": type(e).__name__})
|
||||||
|
finally:
|
||||||
|
task.finished_at = datetime.utcnow()
|
||||||
|
self.persistence_service.persist_task(task)
|
||||||
|
logger.info(f"Task {task_id} execution finished with status: {task.status}")
|
||||||
|
# [/DEF:_run_task:Function]
|
||||||
|
|
||||||
|
# [DEF:resolve_task:Function]
|
||||||
|
# @PURPOSE: Resumes a task that is awaiting mapping.
|
||||||
|
# @PRE: Task exists and is in AWAITING_MAPPING state.
|
||||||
|
# @POST: Task status updated to RUNNING, params updated, execution resumed.
|
||||||
|
# @PARAM: task_id (str) - The ID of the task.
|
||||||
|
# @PARAM: resolution_params (Dict[str, Any]) - Params to resolve the wait.
|
||||||
|
# @THROWS: ValueError if task not found or not awaiting mapping.
|
||||||
|
async def resolve_task(self, task_id: str, resolution_params: Dict[str, Any]):
|
||||||
|
with belief_scope("TaskManager.resolve_task", f"task_id={task_id}"):
|
||||||
|
task = self.tasks.get(task_id)
|
||||||
|
if not task or task.status != TaskStatus.AWAITING_MAPPING:
|
||||||
|
raise ValueError("Task is not awaiting mapping.")
|
||||||
|
|
||||||
|
# Update task params with resolution
|
||||||
|
task.params.update(resolution_params)
|
||||||
|
task.status = TaskStatus.RUNNING
|
||||||
|
self.persistence_service.persist_task(task)
|
||||||
|
self._add_log(task_id, "INFO", "Task resumed after mapping resolution.")
|
||||||
|
|
||||||
|
# Signal the future to continue
|
||||||
|
if task_id in self.task_futures:
|
||||||
|
self.task_futures[task_id].set_result(True)
|
||||||
|
# [/DEF:resolve_task:Function]
|
||||||
|
|
||||||
|
# [DEF:wait_for_resolution:Function]
|
||||||
|
# @PURPOSE: Pauses execution and waits for a resolution signal.
|
||||||
|
# @PRE: Task exists.
|
||||||
|
# @POST: Execution pauses until future is set.
|
||||||
|
# @PARAM: task_id (str) - The ID of the task.
|
||||||
|
async def wait_for_resolution(self, task_id: str):
|
||||||
|
with belief_scope("TaskManager.wait_for_resolution", f"task_id={task_id}"):
|
||||||
|
task = self.tasks.get(task_id)
|
||||||
|
if not task: return
|
||||||
|
|
||||||
|
task.status = TaskStatus.AWAITING_MAPPING
|
||||||
|
self.persistence_service.persist_task(task)
|
||||||
|
self.task_futures[task_id] = self.loop.create_future()
|
||||||
|
|
||||||
|
try:
|
||||||
|
await self.task_futures[task_id]
|
||||||
|
finally:
|
||||||
|
if task_id in self.task_futures:
|
||||||
|
del self.task_futures[task_id]
|
||||||
|
# [/DEF:wait_for_resolution:Function]
|
||||||
|
|
||||||
|
# [DEF:wait_for_input:Function]
|
||||||
|
# @PURPOSE: Pauses execution and waits for user input.
|
||||||
|
# @PRE: Task exists.
|
||||||
|
# @POST: Execution pauses until future is set via resume_task_with_password.
|
||||||
|
# @PARAM: task_id (str) - The ID of the task.
|
||||||
|
async def wait_for_input(self, task_id: str):
|
||||||
|
with belief_scope("TaskManager.wait_for_input", f"task_id={task_id}"):
|
||||||
|
task = self.tasks.get(task_id)
|
||||||
|
if not task: return
|
||||||
|
|
||||||
|
# Status is already set to AWAITING_INPUT by await_input()
|
||||||
|
self.task_futures[task_id] = self.loop.create_future()
|
||||||
|
|
||||||
|
try:
|
||||||
|
await self.task_futures[task_id]
|
||||||
|
finally:
|
||||||
|
if task_id in self.task_futures:
|
||||||
|
del self.task_futures[task_id]
|
||||||
|
# [/DEF:wait_for_input:Function]
|
||||||
|
|
||||||
|
# [DEF:get_task:Function]
|
||||||
|
# @PURPOSE: Retrieves a task by its ID.
|
||||||
|
# @PRE: task_id is a string.
|
||||||
|
# @POST: Returns Task object or None.
|
||||||
|
# @PARAM: task_id (str) - ID of the task.
|
||||||
|
# @RETURN: Optional[Task] - The task or None.
|
||||||
|
def get_task(self, task_id: str) -> Optional[Task]:
|
||||||
|
with belief_scope("TaskManager.get_task", f"task_id={task_id}"):
|
||||||
|
return self.tasks.get(task_id)
|
||||||
|
# [/DEF:get_task:Function]
|
||||||
|
|
||||||
|
# [DEF:get_all_tasks:Function]
|
||||||
|
# @PURPOSE: Retrieves all registered tasks.
|
||||||
|
# @PRE: None.
|
||||||
|
# @POST: Returns list of all Task objects.
|
||||||
|
# @RETURN: List[Task] - All tasks.
|
||||||
|
def get_all_tasks(self) -> List[Task]:
|
||||||
|
with belief_scope("TaskManager.get_all_tasks"):
|
||||||
|
return list(self.tasks.values())
|
||||||
|
# [/DEF:get_all_tasks:Function]
|
||||||
|
|
||||||
|
# [DEF:get_tasks:Function]
|
||||||
|
# @PURPOSE: Retrieves tasks with pagination and optional status filter.
|
||||||
|
# @PRE: limit and offset are non-negative integers.
|
||||||
|
# @POST: Returns a list of tasks sorted by start_time descending.
|
||||||
|
# @PARAM: limit (int) - Maximum number of tasks to return.
|
||||||
|
# @PARAM: offset (int) - Number of tasks to skip.
|
||||||
|
# @PARAM: status (Optional[TaskStatus]) - Filter by task status.
|
||||||
|
# @RETURN: List[Task] - List of tasks matching criteria.
|
||||||
|
def get_tasks(self, limit: int = 10, offset: int = 0, status: Optional[TaskStatus] = None) -> List[Task]:
|
||||||
|
with belief_scope("TaskManager.get_tasks"):
|
||||||
|
tasks = list(self.tasks.values())
|
||||||
|
if status:
|
||||||
|
tasks = [t for t in tasks if t.status == status]
|
||||||
|
# Sort by start_time descending (most recent first)
|
||||||
|
tasks.sort(key=lambda t: t.started_at or datetime.min, reverse=True)
|
||||||
|
return tasks[offset:offset + limit]
|
||||||
|
# [/DEF:get_tasks:Function]
|
||||||
|
|
||||||
|
# [DEF:get_task_logs:Function]
|
||||||
|
# @PURPOSE: Retrieves logs for a specific task.
|
||||||
|
# @PRE: task_id is a string.
|
||||||
|
# @POST: Returns list of LogEntry objects.
|
||||||
|
# @PARAM: task_id (str) - ID of the task.
|
||||||
|
# @RETURN: List[LogEntry] - List of log entries.
|
||||||
|
def get_task_logs(self, task_id: str) -> List[LogEntry]:
|
||||||
|
with belief_scope("TaskManager.get_task_logs", f"task_id={task_id}"):
|
||||||
|
task = self.tasks.get(task_id)
|
||||||
|
return task.logs if task else []
|
||||||
|
# [/DEF:get_task_logs:Function]
|
||||||
|
|
||||||
|
# [DEF:_add_log:Function]
|
||||||
|
# @PURPOSE: Adds a log entry to a task and notifies subscribers.
|
||||||
|
# @PRE: Task exists.
|
||||||
|
# @POST: Log added to task and pushed to queues.
|
||||||
|
# @PARAM: task_id (str) - ID of the task.
|
||||||
|
# @PARAM: level (str) - Log level.
|
||||||
|
# @PARAM: message (str) - Log message.
|
||||||
|
# @PARAM: context (Optional[Dict]) - Log context.
|
||||||
|
def _add_log(self, task_id: str, level: str, message: str, context: Optional[Dict[str, Any]] = None):
|
||||||
|
with belief_scope("TaskManager._add_log", f"task_id={task_id}"):
|
||||||
|
task = self.tasks.get(task_id)
|
||||||
|
if not task:
|
||||||
|
return
|
||||||
|
|
||||||
|
log_entry = LogEntry(level=level, message=message, context=context)
|
||||||
|
task.logs.append(log_entry)
|
||||||
|
self.persistence_service.persist_task(task)
|
||||||
|
|
||||||
|
# Notify subscribers
|
||||||
|
if task_id in self.subscribers:
|
||||||
|
for queue in self.subscribers[task_id]:
|
||||||
|
self.loop.call_soon_threadsafe(queue.put_nowait, log_entry)
|
||||||
|
# [/DEF:_add_log:Function]
|
||||||
|
|
||||||
|
# [DEF:subscribe_logs:Function]
|
||||||
|
# @PURPOSE: Subscribes to real-time logs for a task.
|
||||||
|
# @PRE: task_id is a string.
|
||||||
|
# @POST: Returns an asyncio.Queue for log entries.
|
||||||
|
# @PARAM: task_id (str) - ID of the task.
|
||||||
|
# @RETURN: asyncio.Queue - Queue for log entries.
|
||||||
|
async def subscribe_logs(self, task_id: str) -> asyncio.Queue:
|
||||||
|
with belief_scope("TaskManager.subscribe_logs", f"task_id={task_id}"):
|
||||||
|
queue = asyncio.Queue()
|
||||||
|
if task_id not in self.subscribers:
|
||||||
|
self.subscribers[task_id] = []
|
||||||
|
self.subscribers[task_id].append(queue)
|
||||||
|
return queue
|
||||||
|
# [/DEF:subscribe_logs:Function]
|
||||||
|
|
||||||
|
# [DEF:unsubscribe_logs:Function]
|
||||||
|
# @PURPOSE: Unsubscribes from real-time logs for a task.
|
||||||
|
# @PRE: task_id is a string, queue is asyncio.Queue.
|
||||||
|
# @POST: Queue removed from subscribers.
|
||||||
|
# @PARAM: task_id (str) - ID of the task.
|
||||||
|
# @PARAM: queue (asyncio.Queue) - Queue to remove.
|
||||||
|
def unsubscribe_logs(self, task_id: str, queue: asyncio.Queue):
|
||||||
|
with belief_scope("TaskManager.unsubscribe_logs", f"task_id={task_id}"):
|
||||||
|
if task_id in self.subscribers:
|
||||||
|
if queue in self.subscribers[task_id]:
|
||||||
|
self.subscribers[task_id].remove(queue)
|
||||||
|
if not self.subscribers[task_id]:
|
||||||
|
del self.subscribers[task_id]
|
||||||
|
# [/DEF:unsubscribe_logs:Function]
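# Sketch of consuming the per-task log stream exposed by subscribe_logs /
# unsubscribe_logs, e.g. behind a WebSocket or SSE endpoint. `task_manager`
# is assumed to be the shared TaskManager instance.
async def stream_logs(task_manager, task_id: str):
    queue = await task_manager.subscribe_logs(task_id)
    try:
        while True:
            entry = await queue.get()  # LogEntry objects pushed by _add_log
            yield f"{entry.timestamp.isoformat()} {entry.level}: {entry.message}"
    finally:
        task_manager.unsubscribe_logs(task_id, queue)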
|
||||||
|
|
||||||
|
# [DEF:load_persisted_tasks:Function]
|
||||||
|
# @PURPOSE: Load persisted tasks using persistence service.
|
||||||
|
# @PRE: None.
|
||||||
|
# @POST: Persisted tasks loaded into self.tasks.
|
||||||
|
def load_persisted_tasks(self) -> None:
|
||||||
|
with belief_scope("TaskManager.load_persisted_tasks"):
|
||||||
|
loaded_tasks = self.persistence_service.load_tasks(limit=100)
|
||||||
|
for task in loaded_tasks:
|
||||||
|
if task.id not in self.tasks:
|
||||||
|
self.tasks[task.id] = task
|
||||||
|
# [/DEF:load_persisted_tasks:Function]
|
||||||
|
|
||||||
|
# [DEF:await_input:Function]
|
||||||
|
# @PURPOSE: Transition a task to AWAITING_INPUT state with input request.
|
||||||
|
# @PRE: Task exists and is in RUNNING state.
|
||||||
|
# @POST: Task status changed to AWAITING_INPUT, input_request set, persisted.
|
||||||
|
# @PARAM: task_id (str) - ID of the task.
|
||||||
|
# @PARAM: input_request (Dict) - Details about required input.
|
||||||
|
# @THROWS: ValueError if task not found or not RUNNING.
|
||||||
|
def await_input(self, task_id: str, input_request: Dict[str, Any]) -> None:
|
||||||
|
with belief_scope("TaskManager.await_input", f"task_id={task_id}"):
|
||||||
|
task = self.tasks.get(task_id)
|
||||||
|
if not task:
|
||||||
|
raise ValueError(f"Task {task_id} not found")
|
||||||
|
if task.status != TaskStatus.RUNNING:
|
||||||
|
raise ValueError(f"Task {task_id} is not RUNNING (current: {task.status})")
|
||||||
|
|
||||||
|
task.status = TaskStatus.AWAITING_INPUT
|
||||||
|
task.input_required = True
|
||||||
|
task.input_request = input_request
|
||||||
|
self.persistence_service.persist_task(task)
|
||||||
|
self._add_log(task_id, "INFO", "Task paused for user input", {"input_request": input_request})
|
||||||
|
# [/DEF:await_input:Function]
|
||||||
|
|
||||||
|
# [DEF:resume_task_with_password:Function]
|
||||||
|
# @PURPOSE: Resume a task that is awaiting input with provided passwords.
|
||||||
|
# @PRE: Task exists and is in AWAITING_INPUT state.
|
||||||
|
# @POST: Task status changed to RUNNING, passwords injected, task resumed.
|
||||||
|
# @PARAM: task_id (str) - ID of the task.
|
||||||
|
# @PARAM: passwords (Dict[str, str]) - Mapping of database name to password.
|
||||||
|
# @THROWS: ValueError if task not found, not awaiting input, or passwords invalid.
|
||||||
|
def resume_task_with_password(self, task_id: str, passwords: Dict[str, str]) -> None:
|
||||||
|
with belief_scope("TaskManager.resume_task_with_password", f"task_id={task_id}"):
|
||||||
|
task = self.tasks.get(task_id)
|
||||||
|
if not task:
|
||||||
|
raise ValueError(f"Task {task_id} not found")
|
||||||
|
if task.status != TaskStatus.AWAITING_INPUT:
|
||||||
|
raise ValueError(f"Task {task_id} is not AWAITING_INPUT (current: {task.status})")
|
||||||
|
|
||||||
|
if not isinstance(passwords, dict) or not passwords:
|
||||||
|
raise ValueError("Passwords must be a non-empty dictionary")
|
||||||
|
|
||||||
|
task.params["passwords"] = passwords
|
||||||
|
task.input_required = False
|
||||||
|
task.input_request = None
|
||||||
|
task.status = TaskStatus.RUNNING
|
||||||
|
self.persistence_service.persist_task(task)
|
||||||
|
self._add_log(task_id, "INFO", "Task resumed with passwords", {"databases": list(passwords.keys())})
|
||||||
|
|
||||||
|
if task_id in self.task_futures:
|
||||||
|
self.task_futures[task_id].set_result(True)
|
||||||
|
# [/DEF:resume_task_with_password:Function]
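# Sketch of the pause/resume protocol built from await_input, wait_for_input and
# resume_task_with_password: the plugin side marks the task as awaiting input and
# suspends, while an API handler later injects passwords and releases the future.
# `task_manager`, the database name and the password value are illustrative.
async def plugin_side(task_manager, task_id: str) -> dict:
    task_manager.await_input(task_id, {"type": "passwords", "databases": ["dwh"]})
    await task_manager.wait_for_input(task_id)  # suspends until resumed
    return task_manager.get_task(task_id).params["passwords"]

def api_side(task_manager, task_id: str) -> None:
    task_manager.resume_task_with_password(task_id, {"dwh": "secret"})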
|
||||||
|
|
||||||
|
# [DEF:clear_tasks:Function]
|
||||||
|
# @PURPOSE: Clears tasks based on status filter.
|
||||||
|
# @PRE: status is Optional[TaskStatus].
|
||||||
|
# @POST: Tasks matching filter (or all non-active) cleared from registry and database.
|
||||||
|
# @PARAM: status (Optional[TaskStatus]) - Filter by task status.
|
||||||
|
# @RETURN: int - Number of tasks cleared.
|
||||||
|
def clear_tasks(self, status: Optional[TaskStatus] = None) -> int:
|
||||||
|
with belief_scope("TaskManager.clear_tasks"):
|
||||||
|
tasks_to_remove = []
|
||||||
|
for task_id, task in list(self.tasks.items()):
|
||||||
|
# If a status filter is provided, remove only tasks with that status.
# If no filter is given, remove every inactive task; RUNNING, AWAITING_INPUT,
# and AWAITING_MAPPING tasks are kept because they are still executing or paused.
|
||||||
|
|
||||||
|
should_remove = False
|
||||||
|
if status:
|
||||||
|
if task.status == status:
|
||||||
|
should_remove = True
|
||||||
|
else:
|
||||||
|
# Clear all non-active tasks (keep RUNNING, AWAITING_INPUT, AWAITING_MAPPING)
|
||||||
|
if task.status not in [TaskStatus.RUNNING, TaskStatus.AWAITING_INPUT, TaskStatus.AWAITING_MAPPING]:
|
||||||
|
should_remove = True
|
||||||
|
|
||||||
|
if should_remove:
|
||||||
|
tasks_to_remove.append(task_id)
|
||||||
|
|
||||||
|
for tid in tasks_to_remove:
|
||||||
|
# Cancel future if exists (e.g. for AWAITING_INPUT/MAPPING)
|
||||||
|
if tid in self.task_futures:
|
||||||
|
self.task_futures[tid].cancel()
|
||||||
|
del self.task_futures[tid]
|
||||||
|
|
||||||
|
del self.tasks[tid]
|
||||||
|
|
||||||
|
# Remove from persistence
|
||||||
|
self.persistence_service.delete_tasks(tasks_to_remove)
|
||||||
|
|
||||||
|
logger.info(f"Cleared {len(tasks_to_remove)} tasks.")
|
||||||
|
return len(tasks_to_remove)
|
||||||
|
# [/DEF:clear_tasks:Function]
|
||||||
|
|
||||||
|
# [/DEF:TaskManager:Class]
|
||||||
|
# [/DEF:TaskManagerModule:Module]
|
||||||
68  backend/src/core/task_manager/models.py  Normal file
@@ -0,0 +1,68 @@
|
|||||||
|
# [DEF:TaskManagerModels:Module]
|
||||||
|
# @SEMANTICS: task, models, pydantic, enum, state
|
||||||
|
# @PURPOSE: Defines the data models and enumerations used by the Task Manager.
|
||||||
|
# @LAYER: Core
|
||||||
|
# @RELATION: Used by TaskManager and API routes.
|
||||||
|
# @INVARIANT: Task IDs are immutable once created.
|
||||||
|
# @CONSTRAINT: Must use Pydantic for data validation.
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
import uuid
|
||||||
|
from datetime import datetime
|
||||||
|
from enum import Enum
|
||||||
|
from typing import Dict, Any, List, Optional
|
||||||
|
|
||||||
|
from pydantic import BaseModel, Field
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:TaskStatus:Enum]
|
||||||
|
# @SEMANTICS: task, status, state, enum
|
||||||
|
# @PURPOSE: Defines the possible states a task can be in during its lifecycle.
|
||||||
|
class TaskStatus(str, Enum):
|
||||||
|
PENDING = "PENDING"
|
||||||
|
RUNNING = "RUNNING"
|
||||||
|
SUCCESS = "SUCCESS"
|
||||||
|
FAILED = "FAILED"
|
||||||
|
AWAITING_MAPPING = "AWAITING_MAPPING"
|
||||||
|
AWAITING_INPUT = "AWAITING_INPUT"
|
||||||
|
# [/DEF:TaskStatus:Enum]
|
||||||
|
|
||||||
|
# [DEF:LogEntry:Class]
|
||||||
|
# @SEMANTICS: log, entry, record, pydantic
|
||||||
|
# @PURPOSE: A Pydantic model representing a single, structured log entry associated with a task.
|
||||||
|
class LogEntry(BaseModel):
|
||||||
|
timestamp: datetime = Field(default_factory=datetime.utcnow)
|
||||||
|
level: str
|
||||||
|
message: str
|
||||||
|
context: Optional[Dict[str, Any]] = None
|
||||||
|
# [/DEF:LogEntry:Class]
|
||||||
|
|
||||||
|
# [DEF:Task:Class]
|
||||||
|
# @SEMANTICS: task, job, execution, state, pydantic
|
||||||
|
# @PURPOSE: A Pydantic model representing a single execution instance of a plugin, including its status, parameters, and logs.
|
||||||
|
class Task(BaseModel):
|
||||||
|
id: str = Field(default_factory=lambda: str(uuid.uuid4()))
|
||||||
|
plugin_id: str
|
||||||
|
status: TaskStatus = TaskStatus.PENDING
|
||||||
|
started_at: Optional[datetime] = None
|
||||||
|
finished_at: Optional[datetime] = None
|
||||||
|
user_id: Optional[str] = None
|
||||||
|
logs: List[LogEntry] = Field(default_factory=list)
|
||||||
|
params: Dict[str, Any] = Field(default_factory=dict)
|
||||||
|
input_required: bool = False
|
||||||
|
input_request: Optional[Dict[str, Any]] = None
|
||||||
|
result: Optional[Dict[str, Any]] = None
|
||||||
|
|
||||||
|
# [DEF:__init__:Function]
|
||||||
|
# @PURPOSE: Initializes the Task model and validates input_request for AWAITING_INPUT status.
|
||||||
|
# @PRE: If status is AWAITING_INPUT, input_request must be provided.
|
||||||
|
# @POST: Task instance is created or ValueError is raised.
|
||||||
|
# @PARAM: **data - Keyword arguments for model initialization.
|
||||||
|
def __init__(self, **data):
|
||||||
|
super().__init__(**data)
|
||||||
|
if self.status == TaskStatus.AWAITING_INPUT and not self.input_request:
|
||||||
|
raise ValueError("input_request is required when status is AWAITING_INPUT")
|
||||||
|
# [/DEF:__init__:Function]
|
||||||
|
# [/DEF:Task:Class]
|
||||||
|
|
||||||
|
# [/DEF:TaskManagerModels:Module]
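# Quick illustration of the Task model's validation rule: AWAITING_INPUT requires
# an input_request, otherwise construction fails. It assumes the Task and
# TaskStatus models defined above are importable in the current scope.
def build_tasks_example():
    ok = Task(plugin_id="demo", status=TaskStatus.AWAITING_INPUT,
              input_request={"type": "passwords"})
    try:
        Task(plugin_id="demo", status=TaskStatus.AWAITING_INPUT)
    except ValueError:
        pass  # rejected: no input_request provided
    return ok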
|
||||||
158  backend/src/core/task_manager/persistence.py  Normal file
@@ -0,0 +1,158 @@
|
|||||||
|
# [DEF:TaskPersistenceModule:Module]
|
||||||
|
# @SEMANTICS: persistence, sqlite, sqlalchemy, task, storage
|
||||||
|
# @PURPOSE: Handles the persistence of tasks using SQLAlchemy and the tasks.db database.
|
||||||
|
# @LAYER: Core
|
||||||
|
# @RELATION: Used by TaskManager to save and load tasks.
|
||||||
|
# @INVARIANT: Database schema must match the TaskRecord model structure.
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
from datetime import datetime
|
||||||
|
from typing import List, Optional, Dict, Any
|
||||||
|
import json
|
||||||
|
|
||||||
|
from sqlalchemy.orm import Session
|
||||||
|
from ...models.task import TaskRecord
|
||||||
|
from ..database import TasksSessionLocal
|
||||||
|
from .models import Task, TaskStatus, LogEntry
|
||||||
|
from ..logger import logger, belief_scope
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:TaskPersistenceService:Class]
|
||||||
|
# @SEMANTICS: persistence, service, database, sqlalchemy
|
||||||
|
# @PURPOSE: Provides methods to save and load tasks from the tasks.db database using SQLAlchemy.
|
||||||
|
class TaskPersistenceService:
|
||||||
|
# [DEF:__init__:Function]
|
||||||
|
# @PURPOSE: Initializes the persistence service.
|
||||||
|
# @PRE: None.
|
||||||
|
# @POST: Service is ready.
|
||||||
|
def __init__(self):
|
||||||
|
with belief_scope("TaskPersistenceService.__init__"):
|
||||||
|
# We use TasksSessionLocal from database.py
|
||||||
|
pass
|
||||||
|
# [/DEF:__init__:Function]
|
||||||
|
|
||||||
|
# [DEF:persist_task:Function]
|
||||||
|
# @PURPOSE: Persists or updates a single task in the database.
|
||||||
|
# @PRE: isinstance(task, Task)
|
||||||
|
# @POST: Task record created or updated in database.
|
||||||
|
# @PARAM: task (Task) - The task object to persist.
|
||||||
|
def persist_task(self, task: Task) -> None:
|
||||||
|
with belief_scope("TaskPersistenceService.persist_task", f"task_id={task.id}"):
|
||||||
|
session: Session = TasksSessionLocal()
|
||||||
|
try:
|
||||||
|
record = session.query(TaskRecord).filter(TaskRecord.id == task.id).first()
|
||||||
|
if not record:
|
||||||
|
record = TaskRecord(id=task.id)
|
||||||
|
session.add(record)
|
||||||
|
|
||||||
|
record.type = task.plugin_id
|
||||||
|
record.status = task.status.value
|
||||||
|
record.environment_id = task.params.get("environment_id") or task.params.get("source_env_id")
|
||||||
|
record.started_at = task.started_at
|
||||||
|
record.finished_at = task.finished_at
|
||||||
|
record.params = task.params
|
||||||
|
record.result = task.result
|
||||||
|
|
||||||
|
# Store logs as JSON, converting datetime to string
|
||||||
|
record.logs = []
|
||||||
|
for log in task.logs:
|
||||||
|
log_dict = log.dict()
|
||||||
|
if isinstance(log_dict.get('timestamp'), datetime):
|
||||||
|
log_dict['timestamp'] = log_dict['timestamp'].isoformat()
|
||||||
|
record.logs.append(log_dict)
|
||||||
|
|
||||||
|
# Extract error if failed
|
||||||
|
if task.status == TaskStatus.FAILED:
|
||||||
|
for log in reversed(task.logs):
|
||||||
|
if log.level == "ERROR":
|
||||||
|
record.error = log.message
|
||||||
|
break
|
||||||
|
|
||||||
|
session.commit()
|
||||||
|
except Exception as e:
|
||||||
|
session.rollback()
|
||||||
|
logger.error(f"Failed to persist task {task.id}: {e}")
|
||||||
|
finally:
|
||||||
|
session.close()
|
||||||
|
# [/DEF:persist_task:Function]
|
||||||
|
|
||||||
|
# [DEF:persist_tasks:Function]
|
||||||
|
# @PURPOSE: Persists multiple tasks.
|
||||||
|
# @PRE: isinstance(tasks, list)
|
||||||
|
# @POST: All tasks in list are persisted.
|
||||||
|
# @PARAM: tasks (List[Task]) - The list of tasks to persist.
|
||||||
|
def persist_tasks(self, tasks: List[Task]) -> None:
|
||||||
|
with belief_scope("TaskPersistenceService.persist_tasks"):
|
||||||
|
for task in tasks:
|
||||||
|
self.persist_task(task)
|
||||||
|
# [/DEF:persist_tasks:Function]
|
||||||
|
|
||||||
|
# [DEF:load_tasks:Function]
|
||||||
|
# @PURPOSE: Loads tasks from the database.
|
||||||
|
# @PRE: limit is an integer.
|
||||||
|
# @POST: Returns list of Task objects.
|
||||||
|
# @PARAM: limit (int) - Max tasks to load.
|
||||||
|
# @PARAM: status (Optional[TaskStatus]) - Filter by status.
|
||||||
|
# @RETURN: List[Task] - The loaded tasks.
|
||||||
|
def load_tasks(self, limit: int = 100, status: Optional[TaskStatus] = None) -> List[Task]:
|
||||||
|
with belief_scope("TaskPersistenceService.load_tasks"):
|
||||||
|
session: Session = TasksSessionLocal()
|
||||||
|
try:
|
||||||
|
query = session.query(TaskRecord)
|
||||||
|
if status:
|
||||||
|
query = query.filter(TaskRecord.status == status.value)
|
||||||
|
|
||||||
|
records = query.order_by(TaskRecord.created_at.desc()).limit(limit).all()
|
||||||
|
|
||||||
|
loaded_tasks = []
|
||||||
|
for record in records:
|
||||||
|
try:
|
||||||
|
logs = []
|
||||||
|
if record.logs:
|
||||||
|
for log_data in record.logs:
|
||||||
|
# Handle timestamp conversion if it's a string
|
||||||
|
if isinstance(log_data.get('timestamp'), str):
|
||||||
|
log_data['timestamp'] = datetime.fromisoformat(log_data['timestamp'])
|
||||||
|
logs.append(LogEntry(**log_data))
|
||||||
|
|
||||||
|
task = Task(
|
||||||
|
id=record.id,
|
||||||
|
plugin_id=record.type,
|
||||||
|
status=TaskStatus(record.status),
|
||||||
|
started_at=record.started_at,
|
||||||
|
finished_at=record.finished_at,
|
||||||
|
params=record.params or {},
|
||||||
|
result=record.result,
|
||||||
|
logs=logs
|
||||||
|
)
|
||||||
|
loaded_tasks.append(task)
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Failed to reconstruct task {record.id}: {e}")
|
||||||
|
|
||||||
|
return loaded_tasks
|
||||||
|
finally:
|
||||||
|
session.close()
|
||||||
|
# [/DEF:load_tasks:Function]
|
||||||
|
|
||||||
|
# [DEF:delete_tasks:Function]
|
||||||
|
# @PURPOSE: Deletes specific tasks from the database.
|
||||||
|
# @PRE: task_ids is a list of strings.
|
||||||
|
# @POST: Specified task records deleted from database.
|
||||||
|
# @PARAM: task_ids (List[str]) - List of task IDs to delete.
|
||||||
|
def delete_tasks(self, task_ids: List[str]) -> None:
|
||||||
|
if not task_ids:
|
||||||
|
return
|
||||||
|
with belief_scope("TaskPersistenceService.delete_tasks"):
|
||||||
|
session: Session = TasksSessionLocal()
|
||||||
|
try:
|
||||||
|
session.query(TaskRecord).filter(TaskRecord.id.in_(task_ids)).delete(synchronize_session=False)
|
||||||
|
session.commit()
|
||||||
|
except Exception as e:
|
||||||
|
session.rollback()
|
||||||
|
logger.error(f"Failed to delete tasks: {e}")
|
||||||
|
finally:
|
||||||
|
session.close()
|
||||||
|
# [/DEF:delete_tasks:Function]
|
||||||
|
|
||||||
|
# [/DEF:TaskPersistenceService:Class]
|
||||||
|
# [/DEF:TaskPersistenceModule:Module]
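# Round-trip sketch for TaskPersistenceService: persist an in-memory Task and
# read it back. It assumes tasks.db and TasksSessionLocal are already configured
# by the database module, and that the import paths match the repository layout
# shown in this diff.
def persist_and_reload_example() -> bool:
    from backend.src.core.task_manager.models import Task
    from backend.src.core.task_manager.persistence import TaskPersistenceService

    service = TaskPersistenceService()
    task = Task(plugin_id="demo_plugin", params={"environment_id": 1})
    service.persist_task(task)
    restored = service.load_tasks(limit=10)
    return any(t.id == task.id for t in restored)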
|
||||||
237  backend/src/core/utils/dataset_mapper.py  Normal file
@@ -0,0 +1,237 @@
|
|||||||
|
# [DEF:backend.core.utils.dataset_mapper:Module]
|
||||||
|
#
|
||||||
|
# @SEMANTICS: dataset, mapping, postgresql, xlsx, superset
|
||||||
|
# @PURPOSE: Этот модуль отвечает за обновление метаданных (verbose_map) в датасетах Superset, извлекая их из PostgreSQL или XLSX-файлов.
|
||||||
|
# @LAYER: Domain
|
||||||
|
# @RELATION: DEPENDS_ON -> backend.core.superset_client
|
||||||
|
# @RELATION: DEPENDS_ON -> pandas
|
||||||
|
# @RELATION: DEPENDS_ON -> psycopg2
|
||||||
|
# @PUBLIC_API: DatasetMapper
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
import pandas as pd # type: ignore
|
||||||
|
import psycopg2 # type: ignore
|
||||||
|
from typing import Dict, List, Optional, Any
|
||||||
|
from ..logger import logger as app_logger, belief_scope
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:DatasetMapper:Class]
|
||||||
|
# @PURPOSE: Класс для меппинга и обновления verbose_map в датасетах Superset.
|
||||||
|
class DatasetMapper:
|
||||||
|
# [DEF:__init__:Function]
|
||||||
|
# @PURPOSE: Initializes the mapper.
|
||||||
|
# @POST: Объект DatasetMapper инициализирован.
|
||||||
|
def __init__(self):
|
||||||
|
pass
|
||||||
|
# [/DEF:__init__:Function]
|
||||||
|
|
||||||
|
# [DEF:get_postgres_comments:Function]
|
||||||
|
# @PURPOSE: Извлекает комментарии к колонкам из системного каталога PostgreSQL.
|
||||||
|
# @PRE: db_config должен содержать валидные параметры подключения (host, port, user, password, dbname).
|
||||||
|
# @PRE: table_name и table_schema должны быть строками.
|
||||||
|
# @POST: Возвращается словарь, где ключи - имена колонок, значения - комментарии из БД.
|
||||||
|
# @THROW: Exception - При ошибках подключения или выполнения запроса к БД.
|
||||||
|
# @PARAM: db_config (Dict) - Конфигурация для подключения к БД.
|
||||||
|
# @PARAM: table_name (str) - Имя таблицы.
|
||||||
|
# @PARAM: table_schema (str) - Схема таблицы.
|
||||||
|
# @RETURN: Dict[str, str] - Словарь с комментариями к колонкам.
|
||||||
|
def get_postgres_comments(self, db_config: Dict, table_name: str, table_schema: str) -> Dict[str, str]:
|
||||||
|
with belief_scope("Fetch comments from PostgreSQL"):
|
||||||
|
app_logger.info("[get_postgres_comments][Enter] Fetching comments from PostgreSQL for %s.%s.", table_schema, table_name)
|
||||||
|
query = f"""
|
||||||
|
SELECT
|
||||||
|
cols.column_name,
|
||||||
|
CASE
|
||||||
|
WHEN pg_catalog.col_description(
|
||||||
|
(SELECT c.oid
|
||||||
|
FROM pg_catalog.pg_class c
|
||||||
|
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
|
||||||
|
WHERE c.relname = cols.table_name
|
||||||
|
AND n.nspname = cols.table_schema),
|
||||||
|
cols.ordinal_position::int
|
||||||
|
) LIKE '%%|%%' THEN  -- '%' doubled because the query is executed with parameters
|
||||||
|
split_part(
|
||||||
|
pg_catalog.col_description(
|
||||||
|
(SELECT c.oid
|
||||||
|
FROM pg_catalog.pg_class c
|
||||||
|
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
|
||||||
|
WHERE c.relname = cols.table_name
|
||||||
|
AND n.nspname = cols.table_schema),
|
||||||
|
cols.ordinal_position::int
|
||||||
|
),
|
||||||
|
'|',
|
||||||
|
1
|
||||||
|
)
|
||||||
|
ELSE
|
||||||
|
pg_catalog.col_description(
|
||||||
|
(SELECT c.oid
|
||||||
|
FROM pg_catalog.pg_class c
|
||||||
|
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
|
||||||
|
WHERE c.relname = cols.table_name
|
||||||
|
AND n.nspname = cols.table_schema),
|
||||||
|
cols.ordinal_position::int
|
||||||
|
)
|
||||||
|
END AS column_comment
|
||||||
|
FROM
|
||||||
|
information_schema.columns cols
|
||||||
|
WHERE cols.table_catalog = %(dbname)s AND cols.table_name = %(table_name)s AND cols.table_schema = %(table_schema)s;
|
||||||
|
"""
|
||||||
|
comments = {}
|
||||||
|
try:
|
||||||
|
with psycopg2.connect(**db_config) as conn, conn.cursor() as cursor:
|
||||||
|
cursor.execute(query, {"dbname": db_config.get("dbname"), "table_name": table_name, "table_schema": table_schema})
|
||||||
|
for row in cursor.fetchall():
|
||||||
|
if row[1]:
|
||||||
|
comments[row[0]] = row[1]
|
||||||
|
app_logger.info("[get_postgres_comments][Success] Fetched %d comments.", len(comments))
|
||||||
|
except Exception as e:
|
||||||
|
app_logger.error("[get_postgres_comments][Failure] %s", e, exc_info=True)
|
||||||
|
raise
|
||||||
|
return comments
|
||||||
|
# [/DEF:get_postgres_comments:Function]
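# Illustration of the comment convention the query above relies on: when a
# PostgreSQL column comment contains a '|', only the part before the first '|'
# is used as the verbose name. The connection settings and table below are
# placeholders.
#   COMMENT ON COLUMN sales.orders.amt IS 'Order amount|internal note';
example_config = {"host": "localhost", "port": 5432, "user": "etl",
                  "password": "***", "dbname": "dwh"}
# DatasetMapper().get_postgres_comments(example_config, "orders", "sales")
# would then include {"amt": "Order amount"} for that column.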
|
||||||
|
|
||||||
|
# [DEF:load_excel_mappings:Function]
|
||||||
|
# @PURPOSE: Loads 'column_name' -> 'verbose_name' mappings from an XLSX file.
|
||||||
|
# @PRE: file_path должен указывать на существующий XLSX файл.
|
||||||
|
# @POST: Возвращается словарь с меппингами из файла.
|
||||||
|
# @THROW: Exception - При ошибках чтения файла или парсинга.
|
||||||
|
# @PARAM: file_path (str) - Путь к XLSX файлу.
|
||||||
|
# @RETURN: Dict[str, str] - Словарь с меппингами.
|
||||||
|
def load_excel_mappings(self, file_path: str) -> Dict[str, str]:
|
||||||
|
with belief_scope("Load mappings from Excel"):
|
||||||
|
app_logger.info("[load_excel_mappings][Enter] Loading mappings from %s.", file_path)
|
||||||
|
try:
|
||||||
|
df = pd.read_excel(file_path)
|
||||||
|
mappings = df.set_index('column_name')['verbose_name'].to_dict()
|
||||||
|
app_logger.info("[load_excel_mappings][Success] Loaded %d mappings.", len(mappings))
|
||||||
|
return mappings
|
||||||
|
except Exception as e:
|
||||||
|
app_logger.error("[load_excel_mappings][Failure] %s", e, exc_info=True)
|
||||||
|
raise
|
||||||
|
# [/DEF:load_excel_mappings:Function]
|
||||||
|
|
||||||
|
# [DEF:run_mapping:Function]
|
||||||
|
# @PURPOSE: Основная функция для выполнения меппинга и обновления verbose_map датасета в Superset.
|
||||||
|
# @PRE: superset_client должен быть авторизован.
|
||||||
|
# @PRE: dataset_id должен быть существующим ID в Superset.
|
||||||
|
# @POST: Если найдены изменения, датасет в Superset обновлен через API.
|
||||||
|
# @RELATION: CALLS -> self.get_postgres_comments
|
||||||
|
# @RELATION: CALLS -> self.load_excel_mappings
|
||||||
|
# @RELATION: CALLS -> superset_client.get_dataset
|
||||||
|
# @RELATION: CALLS -> superset_client.update_dataset
|
||||||
|
# @PARAM: superset_client (Any) - Клиент Superset.
|
||||||
|
# @PARAM: dataset_id (int) - ID датасета для обновления.
|
||||||
|
# @PARAM: source (str) - Источник данных ('postgres', 'excel', 'both').
|
||||||
|
# @PARAM: postgres_config (Optional[Dict]) - Конфигурация для подключения к PostgreSQL.
|
||||||
|
# @PARAM: excel_path (Optional[str]) - Путь к XLSX файлу.
|
||||||
|
# @PARAM: table_name (Optional[str]) - Имя таблицы в PostgreSQL.
|
||||||
|
# @PARAM: table_schema (Optional[str]) - Схема таблицы в PostgreSQL.
|
||||||
|
def run_mapping(self, superset_client: Any, dataset_id: int, source: str, postgres_config: Optional[Dict] = None, excel_path: Optional[str] = None, table_name: Optional[str] = None, table_schema: Optional[str] = None):
|
        with belief_scope(f"Run dataset mapping for ID {dataset_id}"):
            app_logger.info("[run_mapping][Enter] Starting dataset mapping for ID %d from source '%s'.", dataset_id, source)
            mappings: Dict[str, str] = {}

            try:
                if source in ['postgres', 'both']:
                    assert postgres_config and table_name and table_schema, "Postgres config is required."
                    mappings.update(self.get_postgres_comments(postgres_config, table_name, table_schema))
                if source in ['excel', 'both']:
                    assert excel_path, "Excel path is required."
                    mappings.update(self.load_excel_mappings(excel_path))
                if source not in ['postgres', 'excel', 'both']:
                    app_logger.error("[run_mapping][Failure] Invalid source: %s.", source)
                    return

                dataset_response = superset_client.get_dataset(dataset_id)
                dataset_data = dataset_response['result']

                original_columns = dataset_data.get('columns', [])
                updated_columns = []
                changes_made = False

                for column in original_columns:
                    col_name = column.get('column_name')

                    new_column = {
                        "column_name": col_name,
                        "id": column.get("id"),
                        "advanced_data_type": column.get("advanced_data_type"),
                        "description": column.get("description"),
                        "expression": column.get("expression"),
                        "extra": column.get("extra"),
                        "filterable": column.get("filterable"),
                        "groupby": column.get("groupby"),
                        "is_active": column.get("is_active"),
                        "is_dttm": column.get("is_dttm"),
                        "python_date_format": column.get("python_date_format"),
                        "type": column.get("type"),
                        "uuid": column.get("uuid"),
                        "verbose_name": column.get("verbose_name"),
                    }
                    new_column = {k: v for k, v in new_column.items() if v is not None}

                    if col_name in mappings:
                        mapping_value = mappings[col_name]
                        if isinstance(mapping_value, str) and new_column.get('verbose_name') != mapping_value:
                            new_column['verbose_name'] = mapping_value
                            changes_made = True

                    updated_columns.append(new_column)

                updated_metrics = []
                for metric in dataset_data.get("metrics", []):
                    new_metric = {
                        "id": metric.get("id"),
                        "metric_name": metric.get("metric_name"),
                        "expression": metric.get("expression"),
                        "verbose_name": metric.get("verbose_name"),
                        "description": metric.get("description"),
                        "d3format": metric.get("d3format"),
                        "currency": metric.get("currency"),
                        "extra": metric.get("extra"),
                        "warning_text": metric.get("warning_text"),
                        "metric_type": metric.get("metric_type"),
                        "uuid": metric.get("uuid"),
                    }
                    updated_metrics.append({k: v for k, v in new_metric.items() if v is not None})

                if changes_made:
                    payload_for_update = {
                        "database_id": dataset_data.get("database", {}).get("id"),
                        "table_name": dataset_data.get("table_name"),
                        "schema": dataset_data.get("schema"),
                        "columns": updated_columns,
                        "owners": [owner["id"] for owner in dataset_data.get("owners", [])],
                        "metrics": updated_metrics,
                        "extra": dataset_data.get("extra"),
                        "description": dataset_data.get("description"),
                        "sql": dataset_data.get("sql"),
                        "cache_timeout": dataset_data.get("cache_timeout"),
                        "catalog": dataset_data.get("catalog"),
                        "default_endpoint": dataset_data.get("default_endpoint"),
                        "external_url": dataset_data.get("external_url"),
                        "fetch_values_predicate": dataset_data.get("fetch_values_predicate"),
                        "filter_select_enabled": dataset_data.get("filter_select_enabled"),
                        "is_managed_externally": dataset_data.get("is_managed_externally"),
                        "is_sqllab_view": dataset_data.get("is_sqllab_view"),
                        "main_dttm_col": dataset_data.get("main_dttm_col"),
                        "normalize_columns": dataset_data.get("normalize_columns"),
                        "offset": dataset_data.get("offset"),
                        "template_params": dataset_data.get("template_params"),
                    }
                    payload_for_update = {k: v for k, v in payload_for_update.items() if v is not None}

                    superset_client.update_dataset(dataset_id, payload_for_update)
                    app_logger.info("[run_mapping][Success] Dataset %d columns' verbose_name updated.", dataset_id)
                else:
                    app_logger.info("[run_mapping][State] No changes in columns' verbose_name, skipping update.")

            except Exception as e:  # also covers the AssertionError / FileNotFoundError raised above
                app_logger.error("[run_mapping][Failure] %s", e, exc_info=True)
                return
    # [/DEF:run_mapping:Function]
# [/DEF:DatasetMapper:Class]

# [/DEF:backend.core.utils.dataset_mapper:Module]
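
# Usage sketch: assumes DatasetMapper() needs no constructor arguments and that run_mapping accepts
# the keyword arguments used in its body (the full signature sits outside this hunk);
# superset_client stands for an already-authenticated client such as the APIClient from
# backend/src/core/utils/network.py.
mapper = DatasetMapper()
mapper.run_mapping(
    dataset_id=42,
    source="excel",
    excel_path="column_mappings.xlsx",
    superset_client=superset_client,
)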
backend/src/core/utils/fileio.py (new file, 486 lines)
@@ -0,0 +1,486 @@
|
|||||||
|
# [DEF:backend.core.utils.fileio:Module]
|
||||||
|
#
|
||||||
|
# @SEMANTICS: file, io, zip, yaml, temp, archive, utility
|
||||||
|
# @PURPOSE: Предоставляет набор утилит для управления файловыми операциями, включая работу с временными файлами, архивами ZIP, файлами YAML и очистку директорий.
|
||||||
|
# @LAYER: Infra
|
||||||
|
# @RELATION: DEPENDS_ON -> backend.src.core.logger
|
||||||
|
# @RELATION: DEPENDS_ON -> pyyaml
|
||||||
|
# @PUBLIC_API: create_temp_file, remove_empty_directories, read_dashboard_from_disk, calculate_crc32, RetentionPolicy, archive_exports, save_and_unpack_dashboard, update_yamls, create_dashboard_export, sanitize_filename, get_filename_from_headers, consolidate_archive_folders
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
import os
|
||||||
|
import re
|
||||||
|
import zipfile
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any, Optional, Tuple, Dict, List, Union, LiteralString, Generator
|
||||||
|
from contextlib import contextmanager
|
||||||
|
import tempfile
|
||||||
|
from datetime import date, datetime
|
||||||
|
import shutil
|
||||||
|
import zlib
|
||||||
|
from dataclasses import dataclass
|
||||||
|
import yaml
|
||||||
|
from ..logger import logger as app_logger, belief_scope
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:InvalidZipFormatError:Class]
|
||||||
|
class InvalidZipFormatError(Exception):
|
||||||
|
pass
|
||||||
|
|
||||||
|
# [DEF:create_temp_file:Function]
|
||||||
|
# @PURPOSE: Контекстный менеджер для создания временного файла или директории с гарантированным удалением.
|
||||||
|
# @PRE: suffix должен быть строкой, определяющей тип ресурса.
|
||||||
|
# @POST: Временный ресурс создан и путь к нему возвращен; ресурс удален после выхода из контекста.
|
||||||
|
# @PARAM: content (Optional[bytes]) - Бинарное содержимое для записи во временный файл.
|
||||||
|
# @PARAM: suffix (str) - Суффикс ресурса. Если `.dir`, создается директория.
|
||||||
|
# @PARAM: mode (str) - Режим записи в файл (e.g., 'wb').
|
||||||
|
# @YIELDS: Path - Путь к временному ресурсу.
|
||||||
|
# @THROW: IOError - При ошибках создания ресурса.
|
||||||
|
@contextmanager
|
||||||
|
def create_temp_file(content: Optional[bytes] = None, suffix: str = ".zip", mode: str = 'wb', dry_run = False) -> Generator[Path, None, None]:
|
||||||
|
with belief_scope("Create temporary resource"):
|
||||||
|
resource_path = None
|
||||||
|
is_dir = suffix.startswith('.dir')
|
||||||
|
try:
|
||||||
|
if is_dir:
|
||||||
|
with tempfile.TemporaryDirectory(suffix=suffix) as temp_dir:
|
||||||
|
resource_path = Path(temp_dir)
|
||||||
|
app_logger.debug("[create_temp_file][State] Created temporary directory: %s", resource_path)
|
||||||
|
yield resource_path
|
||||||
|
else:
|
||||||
|
fd, temp_path_str = tempfile.mkstemp(suffix=suffix)
|
||||||
|
resource_path = Path(temp_path_str)
|
||||||
|
os.close(fd)
|
||||||
|
if content:
|
||||||
|
resource_path.write_bytes(content)
|
||||||
|
app_logger.debug("[create_temp_file][State] Created temporary file: %s", resource_path)
|
||||||
|
yield resource_path
|
||||||
|
finally:
|
||||||
|
if resource_path and resource_path.exists() and not dry_run:
|
||||||
|
try:
|
||||||
|
if resource_path.is_dir():
|
||||||
|
shutil.rmtree(resource_path)
|
||||||
|
app_logger.debug("[create_temp_file][Cleanup] Removed temporary directory: %s", resource_path)
|
||||||
|
else:
|
||||||
|
resource_path.unlink()
|
||||||
|
app_logger.debug("[create_temp_file][Cleanup] Removed temporary file: %s", resource_path)
|
||||||
|
except OSError as e:
|
||||||
|
app_logger.error("[create_temp_file][Failure] Error during cleanup of %s: %s", resource_path, e)
|
||||||
|
# [/DEF:create_temp_file:Function]
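
# Usage sketch; the import path is assumed from the file location (backend/src/core/utils/fileio.py).
from backend.src.core.utils.fileio import create_temp_file

# Temporary file seeded with bytes; it is deleted when the block exits.
with create_temp_file(content=b"PK\x03\x04", suffix=".zip") as tmp_zip:
    print(tmp_zip.stat().st_size)

# A ".dir" suffix switches to a temporary directory instead of a file.
with create_temp_file(suffix=".dir") as tmp_dir:
    (tmp_dir / "notes.txt").write_text("scratch data")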
|
||||||
|
|
||||||
|
# [DEF:remove_empty_directories:Function]
|
||||||
|
# @PURPOSE: Рекурсивно удаляет все пустые поддиректории, начиная с указанного пути.
|
||||||
|
# @PRE: root_dir должен быть путем к существующей директории.
|
||||||
|
# @POST: Все пустые поддиректории удалены, возвращено их количество.
|
||||||
|
# @PARAM: root_dir (str) - Путь к корневой директории для очистки.
|
||||||
|
# @RETURN: int - Количество удаленных директорий.
|
||||||
|
def remove_empty_directories(root_dir: str) -> int:
|
||||||
|
with belief_scope(f"Remove empty directories in {root_dir}"):
|
||||||
|
app_logger.info("[remove_empty_directories][Enter] Starting cleanup of empty directories in %s", root_dir)
|
||||||
|
removed_count = 0
|
||||||
|
if not os.path.isdir(root_dir):
|
||||||
|
app_logger.error("[remove_empty_directories][Failure] Directory not found: %s", root_dir)
|
||||||
|
return 0
|
||||||
|
for current_dir, _, _ in os.walk(root_dir, topdown=False):
|
||||||
|
if not os.listdir(current_dir):
|
||||||
|
try:
|
||||||
|
os.rmdir(current_dir)
|
||||||
|
removed_count += 1
|
||||||
|
app_logger.info("[remove_empty_directories][State] Removed empty directory: %s", current_dir)
|
||||||
|
except OSError as e:
|
||||||
|
app_logger.error("[remove_empty_directories][Failure] Failed to remove %s: %s", current_dir, e)
|
||||||
|
app_logger.info("[remove_empty_directories][Exit] Removed %d empty directories.", removed_count)
|
||||||
|
return removed_count
|
||||||
|
# [/DEF:remove_empty_directories:Function]
|
||||||
|
|
||||||
|
# [DEF:read_dashboard_from_disk:Function]
|
||||||
|
# @PURPOSE: Читает бинарное содержимое файла с диска.
|
||||||
|
# @PRE: file_path должен указывать на существующий файл.
|
||||||
|
# @POST: Возвращает байты содержимого и имя файла.
|
||||||
|
# @PARAM: file_path (str) - Путь к файлу.
|
||||||
|
# @RETURN: Tuple[bytes, str] - Кортеж (содержимое, имя файла).
|
||||||
|
# @THROW: FileNotFoundError - Если файл не найден.
|
||||||
|
def read_dashboard_from_disk(file_path: str) -> Tuple[bytes, str]:
|
||||||
|
with belief_scope(f"Read dashboard from {file_path}"):
|
||||||
|
path = Path(file_path)
|
||||||
|
assert path.is_file(), f"Файл дашборда не найден: {file_path}"
|
||||||
|
app_logger.info("[read_dashboard_from_disk][Enter] Reading file: %s", file_path)
|
||||||
|
content = path.read_bytes()
|
||||||
|
if not content:
|
||||||
|
app_logger.warning("[read_dashboard_from_disk][Warning] File is empty: %s", file_path)
|
||||||
|
return content, path.name
|
||||||
|
# [/DEF:read_dashboard_from_disk:Function]
|
||||||
|
|
||||||
|
# [DEF:calculate_crc32:Function]
|
||||||
|
# @PURPOSE: Вычисляет контрольную сумму CRC32 для файла.
|
||||||
|
# @PRE: file_path должен быть объектом Path к существующему файлу.
|
||||||
|
# @POST: Возвращает 8-значную hex-строку CRC32.
|
||||||
|
# @PARAM: file_path (Path) - Путь к файлу.
|
||||||
|
# @RETURN: str - 8-значное шестнадцатеричное представление CRC32.
|
||||||
|
# @THROW: IOError - При ошибках чтения файла.
|
||||||
|
def calculate_crc32(file_path: Path) -> str:
|
||||||
|
with belief_scope(f"Calculate CRC32 for {file_path}"):
|
||||||
|
with open(file_path, 'rb') as f:
|
||||||
|
crc32_value = zlib.crc32(f.read())
|
||||||
|
return f"{crc32_value:08x}"
|
||||||
|
# [/DEF:calculate_crc32:Function]
|
||||||
|
|
||||||
|
# [SECTION: DATA_CLASSES]
|
||||||
|
# [DEF:RetentionPolicy:DataClass]
|
||||||
|
# @PURPOSE: Определяет политику хранения для архивов (ежедневные, еженедельные, ежемесячные).
|
||||||
|
@dataclass
|
||||||
|
class RetentionPolicy:
|
||||||
|
daily: int = 7
|
||||||
|
weekly: int = 4
|
||||||
|
monthly: int = 12
|
||||||
|
# [/DEF:RetentionPolicy:DataClass]
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:archive_exports:Function]
|
||||||
|
# @PURPOSE: Управляет архивом экспортированных файлов, применяя политику хранения и дедупликацию.
|
||||||
|
# @PRE: output_dir должен быть путем к существующей директории.
|
||||||
|
# @POST: Старые или дублирующиеся архивы удалены согласно политике.
|
||||||
|
# @RELATION: CALLS -> apply_retention_policy
|
||||||
|
# @RELATION: CALLS -> calculate_crc32
|
||||||
|
# @PARAM: output_dir (str) - Директория с архивами.
|
||||||
|
# @PARAM: policy (RetentionPolicy) - Политика хранения.
|
||||||
|
# @PARAM: deduplicate (bool) - Флаг для включения удаления дубликатов по CRC32.
|
||||||
|
def archive_exports(output_dir: str, policy: RetentionPolicy, deduplicate: bool = False) -> None:
|
||||||
|
with belief_scope(f"Archive exports in {output_dir}"):
|
||||||
|
output_path = Path(output_dir)
|
||||||
|
if not output_path.is_dir():
|
||||||
|
app_logger.warning("[archive_exports][Skip] Archive directory not found: %s", output_dir)
|
||||||
|
return
|
||||||
|
|
||||||
|
app_logger.info("[archive_exports][Enter] Managing archive in %s", output_dir)
|
||||||
|
|
||||||
|
# 1. Collect all zip files
|
||||||
|
zip_files = list(output_path.glob("*.zip"))
|
||||||
|
if not zip_files:
|
||||||
|
app_logger.info("[archive_exports][State] No zip files found in %s", output_dir)
|
||||||
|
return
|
||||||
|
|
||||||
|
# 2. Deduplication
|
||||||
|
if deduplicate:
|
||||||
|
app_logger.info("[archive_exports][State] Starting deduplication...")
|
||||||
|
checksums = {}
|
||||||
|
files_to_remove = []
|
||||||
|
|
||||||
|
# Sort by modification time (newest first) to keep the latest version
|
||||||
|
zip_files.sort(key=lambda f: f.stat().st_mtime, reverse=True)
|
||||||
|
|
||||||
|
for file_path in zip_files:
|
||||||
|
try:
|
||||||
|
crc = calculate_crc32(file_path)
|
||||||
|
if crc in checksums:
|
||||||
|
files_to_remove.append(file_path)
|
||||||
|
app_logger.debug("[archive_exports][State] Duplicate found: %s (same as %s)", file_path.name, checksums[crc].name)
|
||||||
|
else:
|
||||||
|
checksums[crc] = file_path
|
||||||
|
except Exception as e:
|
||||||
|
app_logger.error("[archive_exports][Failure] Failed to calculate CRC32 for %s: %s", file_path, e)
|
||||||
|
|
||||||
|
for f in files_to_remove:
|
||||||
|
try:
|
||||||
|
f.unlink()
|
||||||
|
zip_files.remove(f)
|
||||||
|
app_logger.info("[archive_exports][State] Removed duplicate: %s", f.name)
|
||||||
|
except OSError as e:
|
||||||
|
app_logger.error("[archive_exports][Failure] Failed to remove duplicate %s: %s", f, e)
|
||||||
|
|
||||||
|
# 3. Retention Policy
|
||||||
|
files_with_dates = []
|
||||||
|
for file_path in zip_files:
|
||||||
|
# Try to extract date from filename
|
||||||
|
            # Pattern: ..._YYYYMMDD_HHMMSS.zip (the date is taken from the _YYYYMMDD_ segment)
|
||||||
|
match = re.search(r'_(\d{8})_', file_path.name)
|
||||||
|
file_date = None
|
||||||
|
if match:
|
||||||
|
try:
|
||||||
|
date_str = match.group(1)
|
||||||
|
file_date = datetime.strptime(date_str, "%Y%m%d").date()
|
||||||
|
except ValueError:
|
||||||
|
pass
|
||||||
|
|
||||||
|
if not file_date:
|
||||||
|
# Fallback to modification time
|
||||||
|
file_date = datetime.fromtimestamp(file_path.stat().st_mtime).date()
|
||||||
|
|
||||||
|
files_with_dates.append((file_path, file_date))
|
||||||
|
|
||||||
|
files_to_keep = apply_retention_policy(files_with_dates, policy)
|
||||||
|
|
||||||
|
for file_path, _ in files_with_dates:
|
||||||
|
if file_path not in files_to_keep:
|
||||||
|
try:
|
||||||
|
file_path.unlink()
|
||||||
|
app_logger.info("[archive_exports][State] Removed by retention policy: %s", file_path.name)
|
||||||
|
except OSError as e:
|
||||||
|
app_logger.error("[archive_exports][Failure] Failed to remove %s: %s", file_path, e)
|
||||||
|
# [/DEF:archive_exports:Function]
|
||||||
|
|
||||||
|
# [DEF:apply_retention_policy:Function]
|
||||||
|
# @PURPOSE: (Helper) Применяет политику хранения к списку файлов, возвращая те, что нужно сохранить.
|
||||||
|
# @PRE: files_with_dates is a list of (Path, date) tuples.
|
||||||
|
# @POST: Returns a set of files to keep.
|
||||||
|
# @PARAM: files_with_dates (List[Tuple[Path, date]]) - Список файлов с датами.
|
||||||
|
# @PARAM: policy (RetentionPolicy) - Политика хранения.
|
||||||
|
# @RETURN: set - Множество путей к файлам, которые должны быть сохранены.
|
||||||
|
def apply_retention_policy(files_with_dates: List[Tuple[Path, date]], policy: RetentionPolicy) -> set:
|
||||||
|
with belief_scope("Apply retention policy"):
|
||||||
|
# Сортируем по дате (от новой к старой)
|
||||||
|
sorted_files = sorted(files_with_dates, key=lambda x: x[1], reverse=True)
|
||||||
|
# Словарь для хранения файлов по категориям
|
||||||
|
daily_files = []
|
||||||
|
weekly_files = []
|
||||||
|
monthly_files = []
|
||||||
|
today = date.today()
|
||||||
|
for file_path, file_date in sorted_files:
|
||||||
|
# Ежедневные
|
||||||
|
if (today - file_date).days < policy.daily:
|
||||||
|
daily_files.append(file_path)
|
||||||
|
# Еженедельные
|
||||||
|
elif (today - file_date).days < policy.weekly * 7:
|
||||||
|
weekly_files.append(file_path)
|
||||||
|
# Ежемесячные
|
||||||
|
elif (today - file_date).days < policy.monthly * 30:
|
||||||
|
monthly_files.append(file_path)
|
||||||
|
# Возвращаем множество файлов, которые нужно сохранить
|
||||||
|
files_to_keep = set()
|
||||||
|
files_to_keep.update(daily_files)
|
||||||
|
files_to_keep.update(weekly_files[:policy.weekly])
|
||||||
|
files_to_keep.update(monthly_files[:policy.monthly])
|
||||||
|
app_logger.debug("[apply_retention_policy][State] Keeping %d files according to retention policy", len(files_to_keep))
|
||||||
|
return files_to_keep
|
||||||
|
# [/DEF:apply_retention_policy:Function]
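
# Usage sketch tying RetentionPolicy, archive_exports and apply_retention_policy together,
# assuming exported archives live in ./exports (import path as in the earlier sketch).
from backend.src.core.utils.fileio import RetentionPolicy, archive_exports

policy = RetentionPolicy(daily=7, weekly=4, monthly=6)
# Drop byte-identical archives first (CRC32), then prune anything outside the policy windows.
archive_exports("./exports", policy, deduplicate=True)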
|
||||||
|
|
||||||
|
# [DEF:save_and_unpack_dashboard:Function]
|
||||||
|
# @PURPOSE: Сохраняет бинарное содержимое ZIP-архива на диск и опционально распаковывает его.
|
||||||
|
# @PRE: zip_content должен быть байтами валидного ZIP-архива.
|
||||||
|
# @POST: ZIP-файл сохранен, и если unpack=True, он распакован в output_dir.
|
||||||
|
# @PARAM: zip_content (bytes) - Содержимое ZIP-архива.
|
||||||
|
# @PARAM: output_dir (Union[str, Path]) - Директория для сохранения.
|
||||||
|
# @PARAM: unpack (bool) - Флаг, нужно ли распаковывать архив.
|
||||||
|
# @PARAM: original_filename (Optional[str]) - Исходное имя файла для сохранения.
|
||||||
|
# @RETURN: Tuple[Path, Optional[Path]] - Путь к ZIP-файлу и, если применимо, путь к директории с распаковкой.
|
||||||
|
# @THROW: InvalidZipFormatError - При ошибке формата ZIP.
|
||||||
|
def save_and_unpack_dashboard(zip_content: bytes, output_dir: Union[str, Path], unpack: bool = False, original_filename: Optional[str] = None) -> Tuple[Path, Optional[Path]]:
|
||||||
|
with belief_scope("Save and unpack dashboard"):
|
||||||
|
app_logger.info("[save_and_unpack_dashboard][Enter] Processing dashboard. Unpack: %s", unpack)
|
||||||
|
try:
|
||||||
|
output_path = Path(output_dir)
|
||||||
|
output_path.mkdir(parents=True, exist_ok=True)
|
||||||
|
zip_name = sanitize_filename(original_filename) if original_filename else f"dashboard_export_{datetime.now().strftime('%Y%m%d_%H%M%S')}.zip"
|
||||||
|
zip_path = output_path / zip_name
|
||||||
|
zip_path.write_bytes(zip_content)
|
||||||
|
app_logger.info("[save_and_unpack_dashboard][State] Dashboard saved to: %s", zip_path)
|
||||||
|
if unpack:
|
||||||
|
with zipfile.ZipFile(zip_path, 'r') as zip_ref:
|
||||||
|
zip_ref.extractall(output_path)
|
||||||
|
app_logger.info("[save_and_unpack_dashboard][State] Dashboard unpacked to: %s", output_path)
|
||||||
|
return zip_path, output_path
|
||||||
|
return zip_path, None
|
||||||
|
except zipfile.BadZipFile as e:
|
||||||
|
app_logger.error("[save_and_unpack_dashboard][Failure] Invalid ZIP archive: %s", e)
|
||||||
|
raise InvalidZipFormatError(f"Invalid ZIP file: {e}") from e
|
||||||
|
# [/DEF:save_and_unpack_dashboard:Function]
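
# Usage sketch, assuming the ZIP bytes came from an export endpoint or were read from disk.
from pathlib import Path
from backend.src.core.utils.fileio import save_and_unpack_dashboard

zip_bytes = Path("dashboard_export.zip").read_bytes()
zip_path, unpacked_dir = save_and_unpack_dashboard(zip_bytes, "exports/", unpack=True)
print(zip_path, unpacked_dir)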
|
||||||
|
|
||||||
|
# [DEF:update_yamls:Function]
|
||||||
|
# @PURPOSE: Обновляет конфигурации в YAML-файлах, заменяя значения или применяя regex.
|
||||||
|
# @PRE: path должен быть существующей директорией.
|
||||||
|
# @POST: Все YAML файлы в директории обновлены согласно переданным параметрам.
|
||||||
|
# @RELATION: CALLS -> _update_yaml_file
|
||||||
|
# @THROW: FileNotFoundError - Если `path` не существует.
|
||||||
|
# @PARAM: db_configs (Optional[List[Dict]]) - Список конфигураций для замены.
|
||||||
|
# @PARAM: path (str) - Путь к директории с YAML файлами.
|
||||||
|
# @PARAM: regexp_pattern (Optional[LiteralString]) - Паттерн для поиска.
|
||||||
|
# @PARAM: replace_string (Optional[LiteralString]) - Строка для замены.
|
||||||
|
def update_yamls(db_configs: Optional[List[Dict[str, Any]]] = None, path: str = "dashboards", regexp_pattern: Optional[LiteralString] = None, replace_string: Optional[LiteralString] = None) -> None:
|
||||||
|
with belief_scope("Update YAML configurations"):
|
||||||
|
app_logger.info("[update_yamls][Enter] Starting YAML configuration update.")
|
||||||
|
dir_path = Path(path)
|
||||||
|
assert dir_path.is_dir(), f"Путь {path} не существует или не является директорией"
|
||||||
|
|
||||||
|
configs: List[Dict[str, Any]] = db_configs or []
|
||||||
|
|
||||||
|
for file_path in dir_path.rglob("*.yaml"):
|
||||||
|
_update_yaml_file(file_path, configs, regexp_pattern, replace_string)
|
||||||
|
# [/DEF:update_yamls:Function]
|
||||||
|
|
||||||
|
# [DEF:_update_yaml_file:Function]
|
||||||
|
# @PURPOSE: (Helper) Обновляет один YAML файл.
|
||||||
|
# @PRE: file_path должен быть объектом Path к существующему YAML файлу.
|
||||||
|
# @POST: Файл обновлен согласно переданным конфигурациям или регулярному выражению.
|
||||||
|
# @PARAM: file_path (Path) - Путь к файлу.
|
||||||
|
# @PARAM: db_configs (List[Dict]) - Конфигурации.
|
||||||
|
# @PARAM: regexp_pattern (Optional[str]) - Паттерн.
|
||||||
|
# @PARAM: replace_string (Optional[str]) - Замена.
|
||||||
|
def _update_yaml_file(file_path: Path, db_configs: List[Dict[str, Any]], regexp_pattern: Optional[str], replace_string: Optional[str]) -> None:
|
||||||
|
with belief_scope(f"Update YAML file: {file_path}"):
|
||||||
|
# Читаем содержимое файла
|
||||||
|
try:
|
||||||
|
with open(file_path, 'r', encoding='utf-8') as f:
|
||||||
|
content = f.read()
|
||||||
|
except Exception as e:
|
||||||
|
app_logger.error("[_update_yaml_file][Failure] Failed to read %s: %s", file_path, e)
|
||||||
|
return
|
||||||
|
# Если задан pattern и replace_string, применяем замену по регулярному выражению
|
||||||
|
if regexp_pattern and replace_string:
|
||||||
|
try:
|
||||||
|
new_content = re.sub(regexp_pattern, replace_string, content)
|
||||||
|
if new_content != content:
|
||||||
|
with open(file_path, 'w', encoding='utf-8') as f:
|
||||||
|
f.write(new_content)
|
||||||
|
app_logger.info("[_update_yaml_file][State] Updated %s using regex pattern", file_path)
|
||||||
|
except Exception as e:
|
||||||
|
app_logger.error("[_update_yaml_file][Failure] Error applying regex to %s: %s", file_path, e)
|
||||||
|
# Если заданы конфигурации, заменяем значения (поддержка old/new)
|
||||||
|
if db_configs:
|
||||||
|
try:
|
||||||
|
# Прямой текстовый заменитель для старых/новых значений, чтобы сохранить структуру файла
|
||||||
|
modified_content = content
|
||||||
|
for cfg in db_configs:
|
||||||
|
# Ожидаем структуру: {'old': {...}, 'new': {...}}
|
||||||
|
old_cfg = cfg.get('old', {})
|
||||||
|
new_cfg = cfg.get('new', {})
|
||||||
|
for key, old_val in old_cfg.items():
|
||||||
|
if key in new_cfg:
|
||||||
|
new_val = new_cfg[key]
|
||||||
|
# Заменяем только точные совпадения старого значения в тексте YAML, используя ключ для контекста
|
||||||
|
if isinstance(old_val, str):
|
||||||
|
# Ищем паттерн: key: "value" или key: value
|
||||||
|
key_pattern = re.escape(key)
|
||||||
|
val_pattern = re.escape(old_val)
|
||||||
|
# Группы: 1=ключ+разделитель, 2=открывающая кавычка (опц), 3=значение, 4=закрывающая кавычка (опц)
|
||||||
|
pattern = rf'({key_pattern}\s*:\s*)(["\']?)({val_pattern})(["\']?)'
|
||||||
|
|
||||||
|
# [DEF:replacer:Function]
|
||||||
|
# @PURPOSE: Функция замены, сохраняющая кавычки если они были.
|
||||||
|
# @PRE: match должен быть объектом совпадения регулярного выражения.
|
||||||
|
# @POST: Возвращает строку с новым значением, сохраняя префикс и кавычки.
|
||||||
|
def replacer(match):
|
||||||
|
prefix = match.group(1)
|
||||||
|
quote_open = match.group(2)
|
||||||
|
quote_close = match.group(4)
|
||||||
|
return f"{prefix}{quote_open}{new_val}{quote_close}"
|
||||||
|
# [/DEF:replacer:Function]
|
||||||
|
|
||||||
|
modified_content = re.sub(pattern, replacer, modified_content)
|
||||||
|
app_logger.info("[_update_yaml_file][State] Replaced '%s' with '%s' for key %s in %s", old_val, new_val, key, file_path)
|
||||||
|
# Записываем обратно изменённый контент без парсинга YAML, сохраняем оригинальное форматирование
|
||||||
|
with open(file_path, 'w', encoding='utf-8') as f:
|
||||||
|
f.write(modified_content)
|
||||||
|
except Exception as e:
|
||||||
|
app_logger.error("[_update_yaml_file][Failure] Error performing raw replacement in %s: %s", file_path, e)
|
||||||
|
# [/DEF:_update_yaml_file:Function]
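
# Sketch of the db_configs shape the helper expects: {'old': ..., 'new': ...} pairs whose values
# are replaced as raw text, key by key. Database names and URIs below are placeholders.
from backend.src.core.utils.fileio import update_yamls

update_yamls(
    db_configs=[{
        "old": {"database_name": "analytics_dev", "sqlalchemy_uri": "postgresql://dev-host/analytics"},
        "new": {"database_name": "analytics_prod", "sqlalchemy_uri": "postgresql://prod-host/analytics"},
    }],
    path="dashboards",
)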
|
||||||
|
|
||||||
|
# [DEF:create_dashboard_export:Function]
|
||||||
|
# @PURPOSE: Создает ZIP-архив из указанных исходных путей.
|
||||||
|
# @PRE: source_paths должен содержать существующие пути.
|
||||||
|
# @POST: ZIP-архив создан по пути zip_path.
|
||||||
|
# @PARAM: zip_path (Union[str, Path]) - Путь для сохранения ZIP архива.
|
||||||
|
# @PARAM: source_paths (List[Union[str, Path]]) - Список исходных путей для архивации.
|
||||||
|
# @PARAM: exclude_extensions (Optional[List[str]]) - Список расширений для исключения.
|
||||||
|
# @RETURN: bool - `True` при успехе, `False` при ошибке.
|
||||||
|
def create_dashboard_export(zip_path: Union[str, Path], source_paths: List[Union[str, Path]], exclude_extensions: Optional[List[str]] = None) -> bool:
|
||||||
|
with belief_scope(f"Create dashboard export: {zip_path}"):
|
||||||
|
app_logger.info("[create_dashboard_export][Enter] Packing dashboard: %s -> %s", source_paths, zip_path)
|
||||||
|
try:
|
||||||
|
exclude_ext = [ext.lower() for ext in exclude_extensions or []]
|
||||||
|
with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
|
||||||
|
for src_path_str in source_paths:
|
||||||
|
src_path = Path(src_path_str)
|
||||||
|
assert src_path.exists(), f"Путь не найден: {src_path}"
|
||||||
|
for item in src_path.rglob('*'):
|
||||||
|
if item.is_file() and item.suffix.lower() not in exclude_ext:
|
||||||
|
arcname = item.relative_to(src_path.parent)
|
||||||
|
zipf.write(item, arcname)
|
||||||
|
app_logger.info("[create_dashboard_export][Exit] Archive created: %s", zip_path)
|
||||||
|
return True
|
||||||
|
except (IOError, zipfile.BadZipFile, AssertionError) as e:
|
||||||
|
app_logger.error("[create_dashboard_export][Failure] Error: %s", e, exc_info=True)
|
||||||
|
return False
|
||||||
|
# [/DEF:create_dashboard_export:Function]
|
||||||
|
|
||||||
|
# [DEF:sanitize_filename:Function]
|
||||||
|
# @PURPOSE: Очищает строку от символов, недопустимых в именах файлов.
|
||||||
|
# @PRE: filename должен быть строкой.
|
||||||
|
# @POST: Возвращает строку без спецсимволов.
|
||||||
|
# @PARAM: filename (str) - Исходное имя файла.
|
||||||
|
# @RETURN: str - Очищенная строка.
|
||||||
|
def sanitize_filename(filename: str) -> str:
|
||||||
|
with belief_scope(f"Sanitize filename: {filename}"):
|
||||||
|
return re.sub(r'[\\/*?:"<>|]', "_", filename).strip()
|
||||||
|
# [/DEF:sanitize_filename:Function]
|
||||||
|
|
||||||
|
# [DEF:get_filename_from_headers:Function]
|
||||||
|
# @PURPOSE: Извлекает имя файла из HTTP заголовка 'Content-Disposition'.
|
||||||
|
# @PRE: headers должен быть словарем заголовков.
|
||||||
|
# @POST: Возвращает имя файла или None, если заголовок отсутствует.
|
||||||
|
# @PARAM: headers (dict) - Словарь HTTP заголовков.
|
||||||
|
# @RETURN: Optional[str] - Имя файла or `None`.
|
||||||
|
def get_filename_from_headers(headers: dict) -> Optional[str]:
|
||||||
|
with belief_scope("Get filename from headers"):
|
||||||
|
content_disposition = headers.get("Content-Disposition", "")
|
||||||
|
if match := re.search(r'filename="?([^"]+)"?', content_disposition):
|
||||||
|
return match.group(1).strip()
|
||||||
|
return None
|
||||||
|
# [/DEF:get_filename_from_headers:Function]
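
# For example:
from backend.src.core.utils.fileio import get_filename_from_headers

headers = {"Content-Disposition": 'attachment; filename="dashboard_export_20240101.zip"'}
assert get_filename_from_headers(headers) == "dashboard_export_20240101.zip"
assert get_filename_from_headers({}) is None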
|
||||||
|
|
||||||
|
# [DEF:consolidate_archive_folders:Function]
|
||||||
|
# @PURPOSE: Консолидирует директории архивов на основе общего слага в имени.
|
||||||
|
# @PRE: root_directory должен быть объектом Path к существующей директории.
|
||||||
|
# @POST: Директории с одинаковым префиксом объединены в одну.
|
||||||
|
# @THROW: TypeError, ValueError - Если `root_directory` невалиден.
|
||||||
|
# @PARAM: root_directory (Path) - Корневая директория для консолидации.
|
||||||
|
def consolidate_archive_folders(root_directory: Path) -> None:
|
||||||
|
with belief_scope(f"Consolidate archives in {root_directory}"):
|
||||||
|
assert isinstance(root_directory, Path), "root_directory must be a Path object."
|
||||||
|
assert root_directory.is_dir(), "root_directory must be an existing directory."
|
||||||
|
|
||||||
|
app_logger.info("[consolidate_archive_folders][Enter] Consolidating archives in %s", root_directory)
|
||||||
|
# Собираем все директории с архивами
|
||||||
|
archive_dirs = []
|
||||||
|
for item in root_directory.iterdir():
|
||||||
|
if item.is_dir():
|
||||||
|
# Проверяем, есть ли в директории ZIP-архивы
|
||||||
|
if any(item.glob("*.zip")):
|
||||||
|
archive_dirs.append(item)
|
||||||
|
# Группируем по слагу (части имени до первого '_')
|
||||||
|
slug_groups = {}
|
||||||
|
for dir_path in archive_dirs:
|
||||||
|
dir_name = dir_path.name
|
||||||
|
slug = dir_name.split('_')[0] if '_' in dir_name else dir_name
|
||||||
|
if slug not in slug_groups:
|
||||||
|
slug_groups[slug] = []
|
||||||
|
slug_groups[slug].append(dir_path)
|
||||||
|
# Для каждой группы консолидируем
|
||||||
|
for slug, dirs in slug_groups.items():
|
||||||
|
if len(dirs) <= 1:
|
||||||
|
continue
|
||||||
|
# Создаем целевую директорию
|
||||||
|
target_dir = root_directory / slug
|
||||||
|
target_dir.mkdir(exist_ok=True)
|
||||||
|
app_logger.info("[consolidate_archive_folders][State] Consolidating %d directories under %s", len(dirs), target_dir)
|
||||||
|
# Перемещаем содержимое
|
||||||
|
for source_dir in dirs:
|
||||||
|
if source_dir == target_dir:
|
||||||
|
continue
|
||||||
|
for item in source_dir.iterdir():
|
||||||
|
dest_item = target_dir / item.name
|
||||||
|
try:
|
||||||
|
                        # shutil.move handles both files and directories, so no branching is needed.
                        shutil.move(str(item), str(dest_item))
|
||||||
|
except Exception as e:
|
||||||
|
app_logger.error("[consolidate_archive_folders][Failure] Failed to move %s to %s: %s", item, dest_item, e)
|
||||||
|
# Удаляем исходную директорию
|
||||||
|
try:
|
||||||
|
source_dir.rmdir()
|
||||||
|
app_logger.info("[consolidate_archive_folders][State] Removed source directory: %s", source_dir)
|
||||||
|
except Exception as e:
|
||||||
|
app_logger.error("[consolidate_archive_folders][Failure] Failed to remove source directory %s: %s", source_dir, e)
|
||||||
|
# [/DEF:consolidate_archive_folders:Function]
|
||||||
|
|
||||||
|
# [/DEF:backend.core.utils.fileio:Module]

backend/src/core/utils/matching.py (new file, 53 lines)
@@ -0,0 +1,53 @@
# [DEF:backend.src.core.utils.matching:Module]
#
# @SEMANTICS: fuzzy, matching, rapidfuzz, database, mapping
# @PURPOSE: Provides utility functions for fuzzy matching database names.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> rapidfuzz
#
# @INVARIANT: Confidence scores are returned as floats between 0.0 and 1.0.

# [SECTION: IMPORTS]
from rapidfuzz import fuzz, process
from typing import List, Dict
# [/SECTION]

# [DEF:suggest_mappings:Function]
# @PURPOSE: Suggests mappings between source and target databases using fuzzy matching.
# @PRE: source_databases and target_databases are lists of dictionaries with 'uuid' and 'database_name'.
# @POST: Returns a list of suggested mappings with confidence scores.
# @PARAM: source_databases (List[Dict]) - Databases from the source environment.
# @PARAM: target_databases (List[Dict]) - Databases from the target environment.
# @PARAM: threshold (int) - Minimum confidence score (0-100).
# @RETURN: List[Dict] - Suggested mappings.
def suggest_mappings(source_databases: List[Dict], target_databases: List[Dict], threshold: int = 60) -> List[Dict]:
    """
    Suggest mappings between source and target databases using fuzzy matching.
    """
    suggestions = []
    if not target_databases:
        return suggestions

    target_names = [db['database_name'] for db in target_databases]

    for s_db in source_databases:
        # Use token_sort_ratio as decided in research.md
        match = process.extractOne(
            s_db['database_name'],
            target_names,
            scorer=fuzz.token_sort_ratio
        )

        if match:
            name, score, index = match
            if score >= threshold:
                suggestions.append({
                    "source_db_uuid": s_db['uuid'],
                    "target_db_uuid": target_databases[index]['uuid'],
                    "confidence": score / 100.0
                })

    return suggestions
# [/DEF:suggest_mappings:Function]

# [/DEF:backend.src.core.utils.matching:Module]
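
# Quick illustration of the scoring behaviour: token_sort_ratio ignores word order, so reordered
# names still match with full confidence.
from backend.src.core.utils.matching import suggest_mappings

source = [{"uuid": "src-1", "database_name": "Sales DWH"}]
target = [
    {"uuid": "tgt-1", "database_name": "DWH Sales"},
    {"uuid": "tgt-2", "database_name": "Staging"},
]
print(suggest_mappings(source, target, threshold=60))
# -> [{'source_db_uuid': 'src-1', 'target_db_uuid': 'tgt-1', 'confidence': 1.0}]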

backend/src/core/utils/network.py (new file, 286 lines)
@@ -0,0 +1,286 @@
|
|||||||
|
# [DEF:backend.core.utils.network:Module]
|
||||||
|
#
|
||||||
|
# @SEMANTICS: network, http, client, api, requests, session, authentication
|
||||||
|
# @PURPOSE: Инкапсулирует низкоуровневую HTTP-логику для взаимодействия с Superset API, включая аутентификацию, управление сессией, retry-логику и обработку ошибок.
|
||||||
|
# @LAYER: Infra
|
||||||
|
# @RELATION: DEPENDS_ON -> backend.src.core.logger
|
||||||
|
# @RELATION: DEPENDS_ON -> requests
|
||||||
|
# @PUBLIC_API: APIClient
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
from typing import Optional, Dict, Any, List, Union, cast
|
||||||
|
import json
|
||||||
|
import io
|
||||||
|
from pathlib import Path
|
||||||
|
import requests
|
||||||
|
from requests.adapters import HTTPAdapter
|
||||||
|
import urllib3
|
||||||
|
from urllib3.util.retry import Retry
|
||||||
|
from ..logger import logger as app_logger, belief_scope
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:SupersetAPIError:Class]
|
||||||
|
class SupersetAPIError(Exception):
|
||||||
|
def __init__(self, message: str = "Superset API error", **context: Any):
|
||||||
|
self.context = context
|
||||||
|
super().__init__(f"[API_FAILURE] {message} | Context: {self.context}")
|
||||||
|
|
||||||
|
# [DEF:AuthenticationError:Class]
|
||||||
|
class AuthenticationError(SupersetAPIError):
|
||||||
|
def __init__(self, message: str = "Authentication failed", **context: Any):
|
||||||
|
super().__init__(message, type="authentication", **context)
|
||||||
|
|
||||||
|
# [DEF:PermissionDeniedError:Class]
|
||||||
|
class PermissionDeniedError(AuthenticationError):
|
||||||
|
def __init__(self, message: str = "Permission denied", **context: Any):
|
||||||
|
super().__init__(message, **context)
|
||||||
|
|
||||||
|
# [DEF:DashboardNotFoundError:Class]
|
||||||
|
class DashboardNotFoundError(SupersetAPIError):
|
||||||
|
def __init__(self, resource_id: Union[int, str], message: str = "Dashboard not found", **context: Any):
|
||||||
|
super().__init__(f"Dashboard '{resource_id}' {message}", subtype="not_found", resource_id=resource_id, **context)
|
||||||
|
|
||||||
|
# [DEF:NetworkError:Class]
|
||||||
|
class NetworkError(Exception):
|
||||||
|
def __init__(self, message: str = "Network connection failed", **context: Any):
|
||||||
|
self.context = context
|
||||||
|
super().__init__(f"[NETWORK_FAILURE] {message} | Context: {self.context}")
|
||||||
|
|
||||||
|
# [DEF:APIClient:Class]
|
||||||
|
# @PURPOSE: Инкапсулирует HTTP-логику для работы с API, включая сессии, аутентификацию, и обработку запросов.
|
||||||
|
class APIClient:
|
||||||
|
DEFAULT_TIMEOUT = 30
|
||||||
|
|
||||||
|
# [DEF:__init__:Function]
|
||||||
|
# @PURPOSE: Инициализирует API клиент с конфигурацией, сессией и логгером.
|
||||||
|
# @PARAM: config (Dict[str, Any]) - Конфигурация.
|
||||||
|
# @PARAM: verify_ssl (bool) - Проверять ли SSL.
|
||||||
|
# @PARAM: timeout (int) - Таймаут запросов.
|
||||||
|
# @PRE: config must contain 'base_url' and 'auth'.
|
||||||
|
# @POST: APIClient instance is initialized with a session.
|
||||||
|
def __init__(self, config: Dict[str, Any], verify_ssl: bool = True, timeout: int = DEFAULT_TIMEOUT):
|
||||||
|
with belief_scope("__init__"):
|
||||||
|
app_logger.info("[APIClient.__init__][Entry] Initializing APIClient.")
|
||||||
|
self.base_url: str = config.get("base_url", "")
|
||||||
|
self.auth = config.get("auth")
|
||||||
|
self.request_settings = {"verify_ssl": verify_ssl, "timeout": timeout}
|
||||||
|
self.session = self._init_session()
|
||||||
|
self._tokens: Dict[str, str] = {}
|
||||||
|
self._authenticated = False
|
||||||
|
app_logger.info("[APIClient.__init__][Exit] APIClient initialized.")
|
||||||
|
# [/DEF:__init__:Function]
|
||||||
|
|
||||||
|
# [DEF:_init_session:Function]
|
||||||
|
# @PURPOSE: Создает и настраивает `requests.Session` с retry-логикой.
|
||||||
|
# @PRE: self.request_settings must be initialized.
|
||||||
|
# @POST: Returns a configured requests.Session instance.
|
||||||
|
# @RETURN: requests.Session - Настроенная сессия.
|
||||||
|
def _init_session(self) -> requests.Session:
|
||||||
|
with belief_scope("_init_session"):
|
||||||
|
session = requests.Session()
|
||||||
|
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[500, 502, 503, 504])
|
||||||
|
adapter = HTTPAdapter(max_retries=retries)
|
||||||
|
session.mount('http://', adapter)
|
||||||
|
session.mount('https://', adapter)
|
||||||
|
if not self.request_settings["verify_ssl"]:
|
||||||
|
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
|
||||||
|
app_logger.warning("[_init_session][State] SSL verification disabled.")
|
||||||
|
session.verify = self.request_settings["verify_ssl"]
|
||||||
|
return session
|
||||||
|
# [/DEF:_init_session:Function]
|
||||||
|
|
||||||
|
# [DEF:authenticate:Function]
|
||||||
|
# @PURPOSE: Выполняет аутентификацию в Superset API и получает access и CSRF токены.
|
||||||
|
# @PRE: self.auth and self.base_url must be valid.
|
||||||
|
# @POST: `self._tokens` заполнен, `self._authenticated` установлен в `True`.
|
||||||
|
# @RETURN: Dict[str, str] - Словарь с токенами.
|
||||||
|
# @THROW: AuthenticationError, NetworkError - при ошибках.
|
||||||
|
def authenticate(self) -> Dict[str, str]:
|
||||||
|
with belief_scope("authenticate"):
|
||||||
|
app_logger.info("[authenticate][Enter] Authenticating to %s", self.base_url)
|
||||||
|
try:
|
||||||
|
login_url = f"{self.base_url}/security/login"
|
||||||
|
response = self.session.post(login_url, json=self.auth, timeout=self.request_settings["timeout"])
|
||||||
|
response.raise_for_status()
|
||||||
|
access_token = response.json()["access_token"]
|
||||||
|
|
||||||
|
csrf_url = f"{self.base_url}/security/csrf_token/"
|
||||||
|
csrf_response = self.session.get(csrf_url, headers={"Authorization": f"Bearer {access_token}"}, timeout=self.request_settings["timeout"])
|
||||||
|
csrf_response.raise_for_status()
|
||||||
|
|
||||||
|
self._tokens = {"access_token": access_token, "csrf_token": csrf_response.json()["result"]}
|
||||||
|
self._authenticated = True
|
||||||
|
app_logger.info("[authenticate][Exit] Authenticated successfully.")
|
||||||
|
return self._tokens
|
||||||
|
except requests.exceptions.HTTPError as e:
|
||||||
|
raise AuthenticationError(f"Authentication failed: {e}") from e
|
||||||
|
except (requests.exceptions.RequestException, KeyError) as e:
|
||||||
|
raise NetworkError(f"Network or parsing error during authentication: {e}") from e
|
||||||
|
# [/DEF:authenticate:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:headers:Function]
|
||||||
|
# @PURPOSE: Возвращает HTTP-заголовки для аутентифицированных запросов.
|
||||||
|
# @PRE: APIClient is initialized and authenticated or can be authenticated.
|
||||||
|
# @POST: Returns headers including auth tokens.
|
||||||
|
def headers(self) -> Dict[str, str]:
|
||||||
|
with belief_scope("headers"):
|
||||||
|
if not self._authenticated: self.authenticate()
|
||||||
|
return {
|
||||||
|
"Authorization": f"Bearer {self._tokens['access_token']}",
|
||||||
|
"X-CSRFToken": self._tokens.get("csrf_token", ""),
|
||||||
|
"Referer": self.base_url,
|
||||||
|
"Content-Type": "application/json"
|
||||||
|
}
|
||||||
|
# [/DEF:headers:Function]
|
||||||
|
|
||||||
|
# [DEF:request:Function]
|
||||||
|
# @PURPOSE: Выполняет универсальный HTTP-запрос к API.
|
||||||
|
# @PARAM: method (str) - HTTP метод.
|
||||||
|
# @PARAM: endpoint (str) - API эндпоинт.
|
||||||
|
# @PARAM: headers (Optional[Dict]) - Дополнительные заголовки.
|
||||||
|
# @PARAM: raw_response (bool) - Возвращать ли сырой ответ.
|
||||||
|
# @PRE: method and endpoint must be strings.
|
||||||
|
# @POST: Returns response content or raw Response object.
|
||||||
|
# @RETURN: `requests.Response` если `raw_response=True`, иначе `dict`.
|
||||||
|
# @THROW: SupersetAPIError, NetworkError и их подклассы.
|
||||||
|
def request(self, method: str, endpoint: str, headers: Optional[Dict] = None, raw_response: bool = False, **kwargs) -> Union[requests.Response, Dict[str, Any]]:
|
||||||
|
with belief_scope("request"):
|
||||||
|
full_url = f"{self.base_url}{endpoint}"
|
||||||
|
_headers = self.headers.copy()
|
||||||
|
if headers: _headers.update(headers)
|
||||||
|
|
||||||
|
try:
|
||||||
|
response = self.session.request(method, full_url, headers=_headers, **kwargs)
|
||||||
|
response.raise_for_status()
|
||||||
|
return response if raw_response else response.json()
|
||||||
|
except requests.exceptions.HTTPError as e:
|
||||||
|
self._handle_http_error(e, endpoint)
|
||||||
|
except requests.exceptions.RequestException as e:
|
||||||
|
self._handle_network_error(e, full_url)
|
||||||
|
# [/DEF:request:Function]
|
||||||
|
|
||||||
|
# [DEF:_handle_http_error:Function]
|
||||||
|
# @PURPOSE: (Helper) Преобразует HTTP ошибки в кастомные исключения.
|
||||||
|
# @PARAM: e (requests.exceptions.HTTPError) - Ошибка.
|
||||||
|
# @PARAM: endpoint (str) - Эндпоинт.
|
||||||
|
# @PRE: e must be a valid HTTPError with a response.
|
||||||
|
# @POST: Raises a specific SupersetAPIError or subclass.
|
||||||
|
def _handle_http_error(self, e: requests.exceptions.HTTPError, endpoint: str):
|
||||||
|
with belief_scope("_handle_http_error"):
|
||||||
|
status_code = e.response.status_code
|
||||||
|
if status_code == 404: raise DashboardNotFoundError(endpoint) from e
|
||||||
|
if status_code == 403: raise PermissionDeniedError() from e
|
||||||
|
if status_code == 401: raise AuthenticationError() from e
|
||||||
|
raise SupersetAPIError(f"API Error {status_code}: {e.response.text}") from e
|
||||||
|
# [/DEF:_handle_http_error:Function]
|
||||||
|
|
||||||
|
# [DEF:_handle_network_error:Function]
|
||||||
|
# @PURPOSE: (Helper) Преобразует сетевые ошибки в `NetworkError`.
|
||||||
|
# @PARAM: e (requests.exceptions.RequestException) - Ошибка.
|
||||||
|
# @PARAM: url (str) - URL.
|
||||||
|
# @PRE: e must be a RequestException.
|
||||||
|
# @POST: Raises a NetworkError.
|
||||||
|
def _handle_network_error(self, e: requests.exceptions.RequestException, url: str):
|
||||||
|
with belief_scope("_handle_network_error"):
|
||||||
|
if isinstance(e, requests.exceptions.Timeout): msg = "Request timeout"
|
||||||
|
elif isinstance(e, requests.exceptions.ConnectionError): msg = "Connection error"
|
||||||
|
else: msg = f"Unknown network error: {e}"
|
||||||
|
raise NetworkError(msg, url=url) from e
|
||||||
|
# [/DEF:_handle_network_error:Function]
|
||||||
|
|
||||||
|
# [DEF:upload_file:Function]
|
||||||
|
# @PURPOSE: Загружает файл на сервер через multipart/form-data.
|
||||||
|
# @PARAM: endpoint (str) - Эндпоинт.
|
||||||
|
# @PARAM: file_info (Dict[str, Any]) - Информация о файле.
|
||||||
|
# @PARAM: extra_data (Optional[Dict]) - Дополнительные данные.
|
||||||
|
# @PARAM: timeout (Optional[int]) - Таймаут.
|
||||||
|
# @PRE: file_info must contain 'file_obj' and 'file_name'.
|
||||||
|
# @POST: File is uploaded and response returned.
|
||||||
|
# @RETURN: Ответ API в виде словаря.
|
||||||
|
# @THROW: SupersetAPIError, NetworkError, TypeError.
|
||||||
|
def upload_file(self, endpoint: str, file_info: Dict[str, Any], extra_data: Optional[Dict] = None, timeout: Optional[int] = None) -> Dict:
|
||||||
|
with belief_scope("upload_file"):
|
||||||
|
full_url = f"{self.base_url}{endpoint}"
|
||||||
|
_headers = self.headers.copy(); _headers.pop('Content-Type', None)
|
||||||
|
|
||||||
|
file_obj, file_name, form_field = file_info.get("file_obj"), file_info.get("file_name"), file_info.get("form_field", "file")
|
||||||
|
|
||||||
|
files_payload = {}
|
||||||
|
if isinstance(file_obj, (str, Path)):
|
||||||
|
with open(file_obj, 'rb') as f:
|
||||||
|
files_payload = {form_field: (file_name, f.read(), 'application/x-zip-compressed')}
|
||||||
|
elif isinstance(file_obj, io.BytesIO):
|
||||||
|
files_payload = {form_field: (file_name, file_obj.getvalue(), 'application/x-zip-compressed')}
|
||||||
|
else:
|
||||||
|
raise TypeError(f"Unsupported file_obj type: {type(file_obj)}")
|
||||||
|
|
||||||
|
return self._perform_upload(full_url, files_payload, extra_data, _headers, timeout)
|
||||||
|
# [/DEF:upload_file:Function]
|
||||||
|
|
||||||
|
# [DEF:_perform_upload:Function]
|
||||||
|
# @PURPOSE: (Helper) Выполняет POST запрос с файлом.
|
||||||
|
# @PARAM: url (str) - URL.
|
||||||
|
# @PARAM: files (Dict) - Файлы.
|
||||||
|
# @PARAM: data (Optional[Dict]) - Данные.
|
||||||
|
# @PARAM: headers (Dict) - Заголовки.
|
||||||
|
# @PARAM: timeout (Optional[int]) - Таймаут.
|
||||||
|
# @PRE: url, files, and headers must be provided.
|
||||||
|
# @POST: POST request is performed and JSON response returned.
|
||||||
|
# @RETURN: Dict - Ответ.
|
||||||
|
def _perform_upload(self, url: str, files: Dict, data: Optional[Dict], headers: Dict, timeout: Optional[int]) -> Dict:
|
||||||
|
with belief_scope("_perform_upload"):
|
||||||
|
try:
|
||||||
|
response = self.session.post(url, files=files, data=data or {}, headers=headers, timeout=timeout or self.request_settings["timeout"])
|
||||||
|
                response.raise_for_status()
                try:
                    return response.json()
                except ValueError as json_e:
                    app_logger.debug("[_perform_upload][Debug] Response is not valid JSON: %s...", response.text[:200])
                    raise SupersetAPIError(f"API error during upload: Response is not valid JSON: {json_e}") from json_e
|
||||||
|
except requests.exceptions.HTTPError as e:
|
||||||
|
raise SupersetAPIError(f"API error during upload: {e.response.text}") from e
|
||||||
|
except requests.exceptions.RequestException as e:
|
||||||
|
raise NetworkError(f"Network error during upload: {e}", url=url) from e
|
||||||
|
# [/DEF:_perform_upload:Function]
|
||||||
|
|
||||||
|
# [DEF:fetch_paginated_count:Function]
|
||||||
|
# @PURPOSE: Получает общее количество элементов для пагинации.
|
||||||
|
# @PARAM: endpoint (str) - Эндпоинт.
|
||||||
|
# @PARAM: query_params (Dict) - Параметры запроса.
|
||||||
|
# @PARAM: count_field (str) - Поле с количеством.
|
||||||
|
# @PRE: query_params must be a dictionary.
|
||||||
|
# @POST: Returns total count of items.
|
||||||
|
# @RETURN: int - Количество.
|
||||||
|
def fetch_paginated_count(self, endpoint: str, query_params: Dict, count_field: str = "count") -> int:
|
||||||
|
with belief_scope("fetch_paginated_count"):
|
||||||
|
response_json = cast(Dict[str, Any], self.request("GET", endpoint, params={"q": json.dumps(query_params)}))
|
||||||
|
return response_json.get(count_field, 0)
|
||||||
|
# [/DEF:fetch_paginated_count:Function]
|
||||||
|
|
||||||
|
# [DEF:fetch_paginated_data:Function]
|
||||||
|
# @PURPOSE: Автоматически собирает данные со всех страниц пагинированного эндпоинта.
|
||||||
|
# @PARAM: endpoint (str) - Эндпоинт.
|
||||||
|
# @PARAM: pagination_options (Dict[str, Any]) - Опции пагинации.
|
||||||
|
# @PRE: pagination_options must contain 'base_query', 'total_count', 'results_field'.
|
||||||
|
# @POST: Returns all items across all pages.
|
||||||
|
# @RETURN: List[Any] - Список данных.
|
||||||
|
def fetch_paginated_data(self, endpoint: str, pagination_options: Dict[str, Any]) -> List[Any]:
|
||||||
|
with belief_scope("fetch_paginated_data"):
|
||||||
|
base_query, total_count = pagination_options["base_query"], pagination_options["total_count"]
|
||||||
|
results_field, page_size = pagination_options["results_field"], base_query.get('page_size')
|
||||||
|
assert page_size and page_size > 0, "'page_size' must be a positive number."
|
||||||
|
|
||||||
|
results = []
|
||||||
|
for page in range((total_count + page_size - 1) // page_size):
|
||||||
|
query = {**base_query, 'page': page}
|
||||||
|
response_json = cast(Dict[str, Any], self.request("GET", endpoint, params={"q": json.dumps(query)}))
|
||||||
|
results.extend(response_json.get(results_field, []))
|
||||||
|
return results
|
||||||
|
# [/DEF:fetch_paginated_data:Function]
|
||||||
|
|
||||||
|
# [/DEF:APIClient:Class]
|
||||||
|
|
||||||
|
# [/DEF:backend.core.utils.network:Module]
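
# Usage sketch for APIClient. Assumptions: the import path follows the file location, base_url
# points at the Superset REST root (ending in /api/v1, given the /security/login path built above),
# and auth carries the payload expected by Superset's login endpoint; the exact keys are not
# defined in this module.
from backend.src.core.utils.network import APIClient

client = APIClient(
    config={
        "base_url": "https://superset.example.com/api/v1",
        "auth": {"username": "admin", "password": "secret", "provider": "db", "refresh": True},
    },
    verify_ssl=False,
)

query = {"page_size": 100, "filters": []}
total = client.fetch_paginated_count("/dashboard/", query)
dashboards = client.fetch_paginated_data("/dashboard/", {
    "base_query": query,
    "total_count": total,
    "results_field": "result",
})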

backend/src/dependencies.py (new executable file, 80 lines)
@@ -0,0 +1,80 @@
# [DEF:Dependencies:Module]
# @SEMANTICS: dependency, injection, singleton, factory
# @PURPOSE: Manages the creation and provision of shared application dependencies, such as the PluginLoader and TaskManager, to avoid circular imports.
# @LAYER: Core
# @RELATION: Used by the main app and API routers to get access to shared instances.

from pathlib import Path
from .core.plugin_loader import PluginLoader
from .core.task_manager import TaskManager
from .core.config_manager import ConfigManager
from .core.scheduler import SchedulerService
from .core.database import init_db
from .core.logger import logger, belief_scope

# Initialize singletons
# Use absolute path relative to this file to ensure plugins are found regardless of CWD
project_root = Path(__file__).parent.parent.parent
config_path = project_root / "config.json"
config_manager = ConfigManager(config_path=str(config_path))

# Initialize database before any other services that might use it
init_db()

# [DEF:get_config_manager:Function]
# @PURPOSE: Dependency injector for the ConfigManager.
# @PRE: Global config_manager must be initialized.
# @POST: Returns shared ConfigManager instance.
# @RETURN: ConfigManager - The shared config manager instance.
def get_config_manager() -> ConfigManager:
    """Dependency injector for the ConfigManager."""
    with belief_scope("get_config_manager"):
        return config_manager
# [/DEF:get_config_manager:Function]

plugin_dir = Path(__file__).parent / "plugins"

plugin_loader = PluginLoader(plugin_dir=str(plugin_dir))
logger.info(f"PluginLoader initialized with directory: {plugin_dir}")
logger.info(f"Available plugins: {[config.name for config in plugin_loader.get_all_plugin_configs()]}")

task_manager = TaskManager(plugin_loader)
logger.info("TaskManager initialized")

scheduler_service = SchedulerService(task_manager, config_manager)
logger.info("SchedulerService initialized")

# [DEF:get_plugin_loader:Function]
# @PURPOSE: Dependency injector for the PluginLoader.
# @PRE: Global plugin_loader must be initialized.
# @POST: Returns shared PluginLoader instance.
# @RETURN: PluginLoader - The shared plugin loader instance.
def get_plugin_loader() -> PluginLoader:
    """Dependency injector for the PluginLoader."""
    with belief_scope("get_plugin_loader"):
        return plugin_loader
# [/DEF:get_plugin_loader:Function]

# [DEF:get_task_manager:Function]
# @PURPOSE: Dependency injector for the TaskManager.
# @PRE: Global task_manager must be initialized.
# @POST: Returns shared TaskManager instance.
# @RETURN: TaskManager - The shared task manager instance.
def get_task_manager() -> TaskManager:
    """Dependency injector for the TaskManager."""
    with belief_scope("get_task_manager"):
        return task_manager
# [/DEF:get_task_manager:Function]

# [DEF:get_scheduler_service:Function]
# @PURPOSE: Dependency injector for the SchedulerService.
# @PRE: Global scheduler_service must be initialized.
# @POST: Returns shared SchedulerService instance.
# @RETURN: SchedulerService - The shared scheduler service instance.
def get_scheduler_service() -> SchedulerService:
    """Dependency injector for the SchedulerService."""
    with belief_scope("get_scheduler_service"):
        return scheduler_service
# [/DEF:get_scheduler_service:Function]

# [/DEF:Dependencies:Module]
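
# Sketch of how a router would consume these injectors. The API framework is an assumption:
# the module only says "API routers", so FastAPI's Depends is used here purely for illustration.
from fastapi import APIRouter, Depends

from backend.src.dependencies import get_task_manager
from backend.src.core.task_manager import TaskManager

router = APIRouter()

@router.get("/tasks/status")
def tasks_status(task_manager: TaskManager = Depends(get_task_manager)):
    # TaskManager's API is not shown in this diff; echoing the instance is illustrative only.
    return {"task_manager": repr(task_manager)}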

backend/src/models/connection.py (new file, 34 lines)
@@ -0,0 +1,34 @@
# [DEF:backend.src.models.connection:Module]
#
# @SEMANTICS: database, connection, configuration, sqlalchemy, sqlite
# @PURPOSE: Defines the database schema for external database connection configurations.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> sqlalchemy
#
# @INVARIANT: All primary keys are UUID strings.

# [SECTION: IMPORTS]
from sqlalchemy import Column, String, Integer, DateTime
from sqlalchemy.sql import func
from .mapping import Base
import uuid
# [/SECTION]

# [DEF:ConnectionConfig:Class]
# @PURPOSE: Stores credentials for external databases used for column mapping.
class ConnectionConfig(Base):
    __tablename__ = "connection_configs"

    id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))
    name = Column(String, nullable=False)
    type = Column(String, nullable=False)  # e.g., "postgres"
    host = Column(String, nullable=True)
    port = Column(Integer, nullable=True)
    database = Column(String, nullable=True)
    username = Column(String, nullable=True)
    password = Column(String, nullable=True)  # Encrypted/obfuscated password
    created_at = Column(DateTime(timezone=True), server_default=func.now())
    updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())
# [/DEF:ConnectionConfig:Class]

# [/DEF:backend.src.models.connection:Module]

backend/src/models/dashboard.py (new file, 28 lines)
@@ -0,0 +1,28 @@
# [DEF:backend.src.models.dashboard:Module]
# @SEMANTICS: dashboard, model, metadata, migration
# @PURPOSE: Defines data models for dashboard metadata and selection.
# @LAYER: Model
# @RELATION: USED_BY -> backend.src.api.routes.migration

from pydantic import BaseModel
from typing import List

# [DEF:DashboardMetadata:Class]
# @PURPOSE: Represents a dashboard available for migration.
class DashboardMetadata(BaseModel):
    id: int
    title: str
    last_modified: str
    status: str
# [/DEF:DashboardMetadata:Class]

# [DEF:DashboardSelection:Class]
# @PURPOSE: Represents the user's selection of dashboards to migrate.
class DashboardSelection(BaseModel):
    selected_ids: List[int]
    source_env_id: str
    target_env_id: str
    replace_db_config: bool = False
# [/DEF:DashboardSelection:Class]

# [/DEF:backend.src.models.dashboard:Module]

backend/src/models/mapping.py (new file, 70 lines)
@@ -0,0 +1,70 @@
# [DEF:backend.src.models.mapping:Module]
#
# @SEMANTICS: database, mapping, environment, migration, sqlalchemy, sqlite
# @PURPOSE: Defines the database schema for environment metadata and database mappings using SQLAlchemy.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> sqlalchemy
#
# @INVARIANT: All primary keys are UUID strings.
# @CONSTRAINT: source_env_id and target_env_id must be valid environment IDs.

# [SECTION: IMPORTS]
from sqlalchemy import Column, String, Boolean, DateTime, ForeignKey, Enum as SQLEnum
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.sql import func
import uuid
import enum
# [/SECTION]

Base = declarative_base()

# [DEF:MigrationStatus:Class]
# @PURPOSE: Enumeration of possible migration job statuses.
class MigrationStatus(enum.Enum):
    PENDING = "PENDING"
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"
    AWAITING_MAPPING = "AWAITING_MAPPING"
# [/DEF:MigrationStatus:Class]

# [DEF:Environment:Class]
# @PURPOSE: Represents a Superset instance environment.
class Environment(Base):
    __tablename__ = "environments"

    id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))
    name = Column(String, nullable=False)
    url = Column(String, nullable=False)
    credentials_id = Column(String, nullable=False)
# [/DEF:Environment:Class]

# [DEF:DatabaseMapping:Class]
# @PURPOSE: Represents a mapping between source and target databases.
class DatabaseMapping(Base):
    __tablename__ = "database_mappings"

    id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))
    source_env_id = Column(String, ForeignKey("environments.id"), nullable=False)
    target_env_id = Column(String, ForeignKey("environments.id"), nullable=False)
    source_db_uuid = Column(String, nullable=False)
    target_db_uuid = Column(String, nullable=False)
    source_db_name = Column(String, nullable=False)
    target_db_name = Column(String, nullable=False)
    engine = Column(String, nullable=True)
# [/DEF:DatabaseMapping:Class]

# [DEF:MigrationJob:Class]
# @PURPOSE: Represents a single migration execution job.
class MigrationJob(Base):
    __tablename__ = "migration_jobs"

    id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))
    source_env_id = Column(String, ForeignKey("environments.id"), nullable=False)
    target_env_id = Column(String, ForeignKey("environments.id"), nullable=False)
    status = Column(SQLEnum(MigrationStatus), default=MigrationStatus.PENDING)
    replace_db = Column(Boolean, default=False)
    created_at = Column(DateTime(timezone=True), server_default=func.now())
# [/DEF:MigrationJob:Class]

# [/DEF:backend.src.models.mapping:Module]
|
||||||
35
backend/src/models/task.py
Normal file
35
backend/src/models/task.py
Normal file
@@ -0,0 +1,35 @@
|
|||||||
|
# [DEF:backend.src.models.task:Module]
|
||||||
|
#
|
||||||
|
# @SEMANTICS: database, task, record, sqlalchemy, sqlite
|
||||||
|
# @PURPOSE: Defines the database schema for task execution records.
|
||||||
|
# @LAYER: Domain
|
||||||
|
# @RELATION: DEPENDS_ON -> sqlalchemy
|
||||||
|
#
|
||||||
|
# @INVARIANT: All primary keys are UUID strings.
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
from sqlalchemy import Column, String, DateTime, JSON, ForeignKey
|
||||||
|
from sqlalchemy.sql import func
|
||||||
|
from .mapping import Base
|
||||||
|
import uuid
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:TaskRecord:Class]
|
||||||
|
# @PURPOSE: Represents a persistent record of a task execution.
|
||||||
|
class TaskRecord(Base):
|
||||||
|
__tablename__ = "task_records"
|
||||||
|
|
||||||
|
id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))
|
||||||
|
type = Column(String, nullable=False) # e.g., "backup", "migration"
|
||||||
|
status = Column(String, nullable=False) # Enum: "PENDING", "RUNNING", "SUCCESS", "FAILED"
|
||||||
|
environment_id = Column(String, ForeignKey("environments.id"), nullable=True)
|
||||||
|
started_at = Column(DateTime(timezone=True), nullable=True)
|
||||||
|
finished_at = Column(DateTime(timezone=True), nullable=True)
|
||||||
|
logs = Column(JSON, nullable=True) # Store structured logs as JSON
|
||||||
|
error = Column(String, nullable=True)
|
||||||
|
result = Column(JSON, nullable=True)
|
||||||
|
created_at = Column(DateTime(timezone=True), server_default=func.now())
|
||||||
|
params = Column(JSON, nullable=True)
|
||||||
|
# [/DEF:TaskRecord:Class]
|
||||||
|
|
||||||
|
# [/DEF:backend.src.models.task:Module]
|
||||||
189
backend/src/plugins/backup.py
Executable file
189
backend/src/plugins/backup.py
Executable file
@@ -0,0 +1,189 @@
|
|||||||
|
# [DEF:BackupPlugin:Module]
|
||||||
|
# @SEMANTICS: backup, superset, automation, dashboard, plugin
|
||||||
|
# @PURPOSE: A plugin that provides functionality to back up Superset dashboards.
|
||||||
|
# @LAYER: App
|
||||||
|
# @RELATION: IMPLEMENTS -> PluginBase
|
||||||
|
# @RELATION: DEPENDS_ON -> superset_tool.client
|
||||||
|
# @RELATION: DEPENDS_ON -> superset_tool.utils
|
||||||
|
|
||||||
|
from typing import Dict, Any
|
||||||
|
from pathlib import Path
|
||||||
|
from requests.exceptions import RequestException
|
||||||
|
|
||||||
|
from ..core.plugin_base import PluginBase
|
||||||
|
from ..core.logger import belief_scope
|
||||||
|
from ..core.superset_client import SupersetClient
|
||||||
|
from ..core.utils.network import SupersetAPIError
|
||||||
|
from ..core.utils.fileio import (
|
||||||
|
save_and_unpack_dashboard,
|
||||||
|
archive_exports,
|
||||||
|
sanitize_filename,
|
||||||
|
consolidate_archive_folders,
|
||||||
|
remove_empty_directories,
|
||||||
|
RetentionPolicy
|
||||||
|
)
|
||||||
|
from ..dependencies import get_config_manager
|
||||||
|
|
||||||
|
# [DEF:BackupPlugin:Class]
|
||||||
|
# @PURPOSE: Implementation of the backup plugin logic.
|
||||||
|
class BackupPlugin(PluginBase):
|
||||||
|
"""
|
||||||
|
A plugin to back up Superset dashboards.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:id:Function]
|
||||||
|
# @PURPOSE: Returns the unique identifier for the backup plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string ID.
|
||||||
|
# @RETURN: str - "superset-backup"
|
||||||
|
def id(self) -> str:
|
||||||
|
with belief_scope("id"):
|
||||||
|
return "superset-backup"
|
||||||
|
# [/DEF:id:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:name:Function]
|
||||||
|
# @PURPOSE: Returns the human-readable name of the backup plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string name.
|
||||||
|
# @RETURN: str - Plugin name.
|
||||||
|
def name(self) -> str:
|
||||||
|
with belief_scope("name"):
|
||||||
|
return "Superset Dashboard Backup"
|
||||||
|
# [/DEF:name:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:description:Function]
|
||||||
|
# @PURPOSE: Returns a description of the backup plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string description.
|
||||||
|
# @RETURN: str - Plugin description.
|
||||||
|
def description(self) -> str:
|
||||||
|
with belief_scope("description"):
|
||||||
|
return "Backs up all dashboards from a Superset instance."
|
||||||
|
# [/DEF:description:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:version:Function]
|
||||||
|
# @PURPOSE: Returns the version of the backup plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string version.
|
||||||
|
# @RETURN: str - "1.0.0"
|
||||||
|
def version(self) -> str:
|
||||||
|
with belief_scope("version"):
|
||||||
|
return "1.0.0"
|
||||||
|
# [/DEF:version:Function]
|
||||||
|
|
||||||
|
# [DEF:get_schema:Function]
|
||||||
|
# @PURPOSE: Returns the JSON schema for backup plugin parameters.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns dictionary schema.
|
||||||
|
# @RETURN: Dict[str, Any] - JSON schema.
|
||||||
|
def get_schema(self) -> Dict[str, Any]:
|
||||||
|
with belief_scope("get_schema"):
|
||||||
|
config_manager = get_config_manager()
|
||||||
|
envs = [e.name for e in config_manager.get_environments()]
|
||||||
|
default_path = config_manager.get_config().settings.backup_path
|
||||||
|
|
||||||
|
return {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"env": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Environment",
|
||||||
|
"description": "The Superset environment to back up.",
|
||||||
|
"enum": envs if envs else [],
|
||||||
|
},
|
||||||
|
"backup_path": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Backup Path",
|
||||||
|
"description": "The root directory to save backups to.",
|
||||||
|
"default": default_path
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"required": ["env", "backup_path"],
|
||||||
|
}
|
||||||
|
# [/DEF:get_schema:Function]
|
||||||
|
|
||||||
|
# [DEF:execute:Function]
|
||||||
|
# @PURPOSE: Executes the dashboard backup logic.
|
||||||
|
# @PARAM: params (Dict[str, Any]) - Backup parameters (env, backup_path).
|
||||||
|
# @PRE: Target environment must be configured. params must be a dictionary.
|
||||||
|
# @POST: All dashboards are exported and archived.
|
||||||
|
async def execute(self, params: Dict[str, Any]):
|
||||||
|
with belief_scope("execute"):
|
||||||
|
config_manager = get_config_manager()
|
||||||
|
env_id = params.get("environment_id")
|
||||||
|
|
||||||
|
# Resolve environment name if environment_id is provided
|
||||||
|
if env_id:
|
||||||
|
env_config = next((e for e in config_manager.get_environments() if e.id == env_id), None)
|
||||||
|
if env_config:
|
||||||
|
params["env"] = env_config.name
|
||||||
|
|
||||||
|
env = params.get("env")
|
||||||
|
if not env:
|
||||||
|
raise KeyError("env")
|
||||||
|
|
||||||
|
backup_path_str = params.get("backup_path") or config_manager.get_config().settings.backup_path
|
||||||
|
backup_path = Path(backup_path_str)
|
||||||
|
|
||||||
|
from ..core.logger import logger as app_logger
|
||||||
|
app_logger.info(f"[BackupPlugin][Entry] Starting backup for {env}.")
|
||||||
|
|
||||||
|
try:
|
||||||
|
config_manager = get_config_manager()
|
||||||
|
if not config_manager.has_environments():
|
||||||
|
raise ValueError("No Superset environments configured. Please add an environment in Settings.")
|
||||||
|
|
||||||
|
env_config = config_manager.get_environment(env)
|
||||||
|
if not env_config:
|
||||||
|
raise ValueError(f"Environment '{env}' not found in configuration.")
|
||||||
|
|
||||||
|
client = SupersetClient(env_config)
|
||||||
|
|
||||||
|
dashboard_count, dashboard_meta = client.get_dashboards()
|
||||||
|
app_logger.info(f"[BackupPlugin][Progress] Found {dashboard_count} dashboards to export in {env}.")
|
||||||
|
|
||||||
|
if dashboard_count == 0:
|
||||||
|
app_logger.info("[BackupPlugin][Exit] No dashboards to back up.")
|
||||||
|
return
|
||||||
|
|
||||||
|
for db in dashboard_meta:
|
||||||
|
dashboard_id = db.get('id')
|
||||||
|
dashboard_title = db.get('dashboard_title', 'Unknown Dashboard')
|
||||||
|
if not dashboard_id:
|
||||||
|
continue
|
||||||
|
|
||||||
|
try:
|
||||||
|
dashboard_base_dir_name = sanitize_filename(f"{dashboard_title}")
|
||||||
|
dashboard_dir = backup_path / env.upper() / dashboard_base_dir_name
|
||||||
|
dashboard_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
zip_content, filename = client.export_dashboard(dashboard_id)
|
||||||
|
|
||||||
|
save_and_unpack_dashboard(
|
||||||
|
zip_content=zip_content,
|
||||||
|
original_filename=filename,
|
||||||
|
output_dir=dashboard_dir,
|
||||||
|
unpack=False
|
||||||
|
)
|
||||||
|
|
||||||
|
archive_exports(str(dashboard_dir), policy=RetentionPolicy())
|
||||||
|
|
||||||
|
except (SupersetAPIError, RequestException, IOError, OSError) as db_error:
|
||||||
|
app_logger.error(f"[BackupPlugin][Failure] Failed to export dashboard {dashboard_title} (ID: {dashboard_id}): {db_error}", exc_info=True)
|
||||||
|
continue
|
||||||
|
|
||||||
|
consolidate_archive_folders(backup_path / env.upper())
|
||||||
|
remove_empty_directories(str(backup_path / env.upper()))
|
||||||
|
|
||||||
|
app_logger.info(f"[BackupPlugin][CoherenceCheck:Passed] Backup logic completed for {env}.")
|
||||||
|
|
||||||
|
except (RequestException, IOError, KeyError) as e:
|
||||||
|
app_logger.critical(f"[BackupPlugin][Failure] Fatal error during backup for {env}: {e}", exc_info=True)
|
||||||
|
raise e
|
||||||
|
# [/DEF:execute:Function]
|
||||||
|
# [/DEF:BackupPlugin:Class]
|
||||||
|
# [/DEF:BackupPlugin:Module]
|
||||||
187
backend/src/plugins/debug.py
Normal file
187
backend/src/plugins/debug.py
Normal file
@@ -0,0 +1,187 @@
|
|||||||
|
# [DEF:DebugPluginModule:Module]
|
||||||
|
# @SEMANTICS: plugin, debug, api, database, superset
|
||||||
|
# @PURPOSE: Implements a plugin for system diagnostics and debugging Superset API responses.
|
||||||
|
# @LAYER: Plugins
|
||||||
|
# @RELATION: Inherits from PluginBase. Uses SupersetClient from core.
|
||||||
|
# @CONSTRAINT: Must use belief_scope for logging.
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
from typing import Dict, Any, Optional
|
||||||
|
from ..core.plugin_base import PluginBase
|
||||||
|
from ..core.superset_client import SupersetClient
|
||||||
|
from ..core.logger import logger, belief_scope
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:DebugPlugin:Class]
|
||||||
|
# @PURPOSE: Plugin for system diagnostics and debugging.
|
||||||
|
class DebugPlugin(PluginBase):
|
||||||
|
"""
|
||||||
|
Plugin for system diagnostics and debugging.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:id:Function]
|
||||||
|
# @PURPOSE: Returns the unique identifier for the debug plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string ID.
|
||||||
|
# @RETURN: str - "system-debug"
|
||||||
|
def id(self) -> str:
|
||||||
|
with belief_scope("id"):
|
||||||
|
return "system-debug"
|
||||||
|
# [/DEF:id:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:name:Function]
|
||||||
|
# @PURPOSE: Returns the human-readable name of the debug plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string name.
|
||||||
|
# @RETURN: str - Plugin name.
|
||||||
|
def name(self) -> str:
|
||||||
|
with belief_scope("name"):
|
||||||
|
return "System Debug"
|
||||||
|
# [/DEF:name:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:description:Function]
|
||||||
|
# @PURPOSE: Returns a description of the debug plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string description.
|
||||||
|
# @RETURN: str - Plugin description.
|
||||||
|
def description(self) -> str:
|
||||||
|
with belief_scope("description"):
|
||||||
|
return "Run system diagnostics and debug Superset API responses."
|
||||||
|
# [/DEF:description:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:version:Function]
|
||||||
|
# @PURPOSE: Returns the version of the debug plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string version.
|
||||||
|
# @RETURN: str - "1.0.0"
|
||||||
|
def version(self) -> str:
|
||||||
|
with belief_scope("version"):
|
||||||
|
return "1.0.0"
|
||||||
|
# [/DEF:version:Function]
|
||||||
|
|
||||||
|
# [DEF:get_schema:Function]
|
||||||
|
# @PURPOSE: Returns the JSON schema for the debug plugin parameters.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns dictionary schema.
|
||||||
|
# @RETURN: Dict[str, Any] - JSON schema.
|
||||||
|
def get_schema(self) -> Dict[str, Any]:
|
||||||
|
with belief_scope("get_schema"):
|
||||||
|
return {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"action": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Action",
|
||||||
|
"enum": ["test-db-api", "get-dataset-structure"],
|
||||||
|
"default": "test-db-api"
|
||||||
|
},
|
||||||
|
"env": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Environment",
|
||||||
|
"description": "The Superset environment (for dataset structure)."
|
||||||
|
},
|
||||||
|
"dataset_id": {
|
||||||
|
"type": "integer",
|
||||||
|
"title": "Dataset ID",
|
||||||
|
"description": "The ID of the dataset (for dataset structure)."
|
||||||
|
},
|
||||||
|
"source_env": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Source Environment",
|
||||||
|
"description": "Source env for DB API test."
|
||||||
|
},
|
||||||
|
"target_env": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Target Environment",
|
||||||
|
"description": "Target env for DB API test."
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"required": ["action"]
|
||||||
|
}
|
||||||
|
# [/DEF:get_schema:Function]
|
||||||
|
|
||||||
|
# [DEF:execute:Function]
|
||||||
|
# @PURPOSE: Executes the debug logic.
|
||||||
|
# @PARAM: params (Dict[str, Any]) - Debug parameters.
|
||||||
|
# @PRE: action must be provided in params.
|
||||||
|
# @POST: Debug action is executed and results returned.
|
||||||
|
# @RETURN: Dict[str, Any] - Execution results.
|
||||||
|
async def execute(self, params: Dict[str, Any]) -> Dict[str, Any]:
|
||||||
|
with belief_scope("execute"):
|
||||||
|
action = params.get("action")
|
||||||
|
|
||||||
|
if action == "test-db-api":
|
||||||
|
return await self._test_db_api(params)
|
||||||
|
elif action == "get-dataset-structure":
|
||||||
|
return await self._get_dataset_structure(params)
|
||||||
|
else:
|
||||||
|
raise ValueError(f"Unknown action: {action}")
|
||||||
|
# [/DEF:execute:Function]
|
||||||
|
|
||||||
|
# [DEF:_test_db_api:Function]
|
||||||
|
# @PURPOSE: Tests database API connectivity for source and target environments.
|
||||||
|
# @PRE: source_env and target_env params exist in params.
|
||||||
|
# @POST: Returns DB counts for both envs.
|
||||||
|
# @PARAM: params (Dict) - Plugin parameters.
|
||||||
|
# @RETURN: Dict - Comparison results.
|
||||||
|
async def _test_db_api(self, params: Dict[str, Any]) -> Dict[str, Any]:
|
||||||
|
with belief_scope("_test_db_api"):
|
||||||
|
source_env_name = params.get("source_env")
|
||||||
|
target_env_name = params.get("target_env")
|
||||||
|
|
||||||
|
if not source_env_name or not target_env_name:
|
||||||
|
raise ValueError("source_env and target_env are required for test-db-api")
|
||||||
|
|
||||||
|
from ..dependencies import get_config_manager
|
||||||
|
config_manager = get_config_manager()
|
||||||
|
|
||||||
|
results = {}
|
||||||
|
for name in [source_env_name, target_env_name]:
|
||||||
|
env_config = config_manager.get_environment(name)
|
||||||
|
if not env_config:
|
||||||
|
raise ValueError(f"Environment '{name}' not found.")
|
||||||
|
|
||||||
|
client = SupersetClient(env_config)
|
||||||
|
client.authenticate()
|
||||||
|
count, dbs = client.get_databases()
|
||||||
|
results[name] = {
|
||||||
|
"count": count,
|
||||||
|
"databases": dbs
|
||||||
|
}
|
||||||
|
|
||||||
|
return results
|
||||||
|
# [/DEF:_test_db_api:Function]
|
||||||
|
|
||||||
|
# [DEF:_get_dataset_structure:Function]
|
||||||
|
# @PURPOSE: Retrieves the structure of a dataset.
|
||||||
|
# @PRE: env and dataset_id params exist in params.
|
||||||
|
# @POST: Returns dataset JSON structure.
|
||||||
|
# @PARAM: params (Dict) - Plugin parameters.
|
||||||
|
# @RETURN: Dict - Dataset structure.
|
||||||
|
async def _get_dataset_structure(self, params: Dict[str, Any]) -> Dict[str, Any]:
|
||||||
|
with belief_scope("_get_dataset_structure"):
|
||||||
|
env_name = params.get("env")
|
||||||
|
dataset_id = params.get("dataset_id")
|
||||||
|
|
||||||
|
if not env_name or dataset_id is None:
|
||||||
|
raise ValueError("env and dataset_id are required for get-dataset-structure")
|
||||||
|
|
||||||
|
from ..dependencies import get_config_manager
|
||||||
|
config_manager = get_config_manager()
|
||||||
|
env_config = config_manager.get_environment(env_name)
|
||||||
|
if not env_config:
|
||||||
|
raise ValueError(f"Environment '{env_name}' not found.")
|
||||||
|
|
||||||
|
client = SupersetClient(env_config)
|
||||||
|
client.authenticate()
|
||||||
|
|
||||||
|
dataset_response = client.get_dataset(dataset_id)
|
||||||
|
return dataset_response.get('result') or {}
|
||||||
|
# [/DEF:_get_dataset_structure:Function]
|
||||||
|
|
||||||
|
# [/DEF:DebugPlugin:Class]
|
||||||
|
# [/DEF:DebugPluginModule:Module]
|
||||||
195
backend/src/plugins/mapper.py
Normal file
195
backend/src/plugins/mapper.py
Normal file
@@ -0,0 +1,195 @@
|
|||||||
|
# [DEF:MapperPluginModule:Module]
|
||||||
|
# @SEMANTICS: plugin, mapper, datasets, postgresql, excel
|
||||||
|
# @PURPOSE: Implements a plugin for mapping dataset columns using external database connections or Excel files.
|
||||||
|
# @LAYER: Plugins
|
||||||
|
# @RELATION: Inherits from PluginBase. Uses DatasetMapper from superset_tool.
|
||||||
|
# @CONSTRAINT: Must use belief_scope for logging.
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
from typing import Dict, Any, Optional
|
||||||
|
from ..core.plugin_base import PluginBase
|
||||||
|
from ..core.superset_client import SupersetClient
|
||||||
|
from ..core.logger import logger, belief_scope
|
||||||
|
from ..core.database import SessionLocal
|
||||||
|
from ..models.connection import ConnectionConfig
|
||||||
|
from ..core.utils.dataset_mapper import DatasetMapper
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:MapperPlugin:Class]
|
||||||
|
# @PURPOSE: Plugin for mapping dataset columns verbose names.
|
||||||
|
class MapperPlugin(PluginBase):
|
||||||
|
"""
|
||||||
|
Plugin for mapping dataset columns verbose names.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:id:Function]
|
||||||
|
# @PURPOSE: Returns the unique identifier for the mapper plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string ID.
|
||||||
|
# @RETURN: str - "dataset-mapper"
|
||||||
|
def id(self) -> str:
|
||||||
|
with belief_scope("id"):
|
||||||
|
return "dataset-mapper"
|
||||||
|
# [/DEF:id:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:name:Function]
|
||||||
|
# @PURPOSE: Returns the human-readable name of the mapper plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string name.
|
||||||
|
# @RETURN: str - Plugin name.
|
||||||
|
def name(self) -> str:
|
||||||
|
with belief_scope("name"):
|
||||||
|
return "Dataset Mapper"
|
||||||
|
# [/DEF:name:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:description:Function]
|
||||||
|
# @PURPOSE: Returns a description of the mapper plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string description.
|
||||||
|
# @RETURN: str - Plugin description.
|
||||||
|
def description(self) -> str:
|
||||||
|
with belief_scope("description"):
|
||||||
|
return "Map dataset column verbose names using PostgreSQL comments or Excel files."
|
||||||
|
# [/DEF:description:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:version:Function]
|
||||||
|
# @PURPOSE: Returns the version of the mapper plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string version.
|
||||||
|
# @RETURN: str - "1.0.0"
|
||||||
|
def version(self) -> str:
|
||||||
|
with belief_scope("version"):
|
||||||
|
return "1.0.0"
|
||||||
|
# [/DEF:version:Function]
|
||||||
|
|
||||||
|
# [DEF:get_schema:Function]
|
||||||
|
# @PURPOSE: Returns the JSON schema for the mapper plugin parameters.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns dictionary schema.
|
||||||
|
# @RETURN: Dict[str, Any] - JSON schema.
|
||||||
|
def get_schema(self) -> Dict[str, Any]:
|
||||||
|
with belief_scope("get_schema"):
|
||||||
|
return {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"env": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Environment",
|
||||||
|
"description": "The Superset environment (e.g., 'dev')."
|
||||||
|
},
|
||||||
|
"dataset_id": {
|
||||||
|
"type": "integer",
|
||||||
|
"title": "Dataset ID",
|
||||||
|
"description": "The ID of the dataset to update."
|
||||||
|
},
|
||||||
|
"source": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Mapping Source",
|
||||||
|
"enum": ["postgres", "excel"],
|
||||||
|
"default": "postgres"
|
||||||
|
},
|
||||||
|
"connection_id": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Saved Connection",
|
||||||
|
"description": "The ID of a saved database connection (for postgres source)."
|
||||||
|
},
|
||||||
|
"table_name": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Table Name",
|
||||||
|
"description": "Target table name in PostgreSQL."
|
||||||
|
},
|
||||||
|
"table_schema": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Table Schema",
|
||||||
|
"description": "Target table schema in PostgreSQL.",
|
||||||
|
"default": "public"
|
||||||
|
},
|
||||||
|
"excel_path": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Excel Path",
|
||||||
|
"description": "Path to the Excel file (for excel source)."
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"required": ["env", "dataset_id", "source"]
|
||||||
|
}
|
||||||
|
# [/DEF:get_schema:Function]
|
||||||
|
|
||||||
|
# [DEF:execute:Function]
|
||||||
|
# @PURPOSE: Executes the dataset mapping logic.
|
||||||
|
# @PARAM: params (Dict[str, Any]) - Mapping parameters.
|
||||||
|
# @PRE: Params contain valid 'env', 'dataset_id', and 'source'. params must be a dictionary.
|
||||||
|
# @POST: Updates the dataset in Superset.
|
||||||
|
# @RETURN: Dict[str, Any] - Execution status.
|
||||||
|
async def execute(self, params: Dict[str, Any]) -> Dict[str, Any]:
|
||||||
|
with belief_scope("execute"):
|
||||||
|
env_name = params.get("env")
|
||||||
|
dataset_id = params.get("dataset_id")
|
||||||
|
source = params.get("source")
|
||||||
|
|
||||||
|
if not env_name or dataset_id is None or not source:
|
||||||
|
logger.error("[MapperPlugin.execute][State] Missing required parameters.")
|
||||||
|
raise ValueError("Missing required parameters: env, dataset_id, source")
|
||||||
|
|
||||||
|
# Get config and initialize client
|
||||||
|
from ..dependencies import get_config_manager
|
||||||
|
config_manager = get_config_manager()
|
||||||
|
env_config = config_manager.get_environment(env_name)
|
||||||
|
if not env_config:
|
||||||
|
logger.error(f"[MapperPlugin.execute][State] Environment '{env_name}' not found.")
|
||||||
|
raise ValueError(f"Environment '{env_name}' not found in configuration.")
|
||||||
|
|
||||||
|
client = SupersetClient(env_config)
|
||||||
|
client.authenticate()
|
||||||
|
|
||||||
|
postgres_config = None
|
||||||
|
if source == "postgres":
|
||||||
|
connection_id = params.get("connection_id")
|
||||||
|
if not connection_id:
|
||||||
|
logger.error("[MapperPlugin.execute][State] connection_id is required for postgres source.")
|
||||||
|
raise ValueError("connection_id is required for postgres source.")
|
||||||
|
|
||||||
|
# Load connection from DB
|
||||||
|
db = SessionLocal()
|
||||||
|
try:
|
||||||
|
conn_config = db.query(ConnectionConfig).filter(ConnectionConfig.id == connection_id).first()
|
||||||
|
if not conn_config:
|
||||||
|
logger.error(f"[MapperPlugin.execute][State] Connection {connection_id} not found.")
|
||||||
|
raise ValueError(f"Connection {connection_id} not found.")
|
||||||
|
|
||||||
|
postgres_config = {
|
||||||
|
'dbname': conn_config.database,
|
||||||
|
'user': conn_config.username,
|
||||||
|
'password': conn_config.password,
|
||||||
|
'host': conn_config.host,
|
||||||
|
'port': str(conn_config.port) if conn_config.port else '5432'
|
||||||
|
}
|
||||||
|
finally:
|
||||||
|
db.close()
|
||||||
|
|
||||||
|
logger.info(f"[MapperPlugin.execute][Action] Starting mapping for dataset {dataset_id} in {env_name}")
|
||||||
|
|
||||||
|
mapper = DatasetMapper()
|
||||||
|
|
||||||
|
try:
|
||||||
|
mapper.run_mapping(
|
||||||
|
superset_client=client,
|
||||||
|
dataset_id=dataset_id,
|
||||||
|
source=source,
|
||||||
|
postgres_config=postgres_config,
|
||||||
|
excel_path=params.get("excel_path"),
|
||||||
|
table_name=params.get("table_name"),
|
||||||
|
table_schema=params.get("table_schema") or "public"
|
||||||
|
)
|
||||||
|
logger.info(f"[MapperPlugin.execute][Success] Mapping completed for dataset {dataset_id}")
|
||||||
|
return {"status": "success", "dataset_id": dataset_id}
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"[MapperPlugin.execute][Failure] Mapping failed: {e}")
|
||||||
|
raise
|
||||||
|
# [/DEF:execute:Function]
|
||||||
|
|
||||||
|
# [/DEF:MapperPlugin:Class]
|
||||||
|
# [/DEF:MapperPluginModule:Module]
|
||||||
387
backend/src/plugins/migration.py
Executable file
387
backend/src/plugins/migration.py
Executable file
@@ -0,0 +1,387 @@
|
|||||||
|
# [DEF:MigrationPlugin:Module]
|
||||||
|
# @SEMANTICS: migration, superset, automation, dashboard, plugin
|
||||||
|
# @PURPOSE: A plugin that provides functionality to migrate Superset dashboards between environments.
|
||||||
|
# @LAYER: App
|
||||||
|
# @RELATION: IMPLEMENTS -> PluginBase
|
||||||
|
# @RELATION: DEPENDS_ON -> superset_tool.client
|
||||||
|
# @RELATION: DEPENDS_ON -> superset_tool.utils
|
||||||
|
|
||||||
|
from typing import Dict, Any, List
|
||||||
|
from pathlib import Path
|
||||||
|
import zipfile
|
||||||
|
import re
|
||||||
|
|
||||||
|
from ..core.plugin_base import PluginBase
|
||||||
|
from ..core.logger import belief_scope
|
||||||
|
from ..core.superset_client import SupersetClient
|
||||||
|
from ..core.utils.fileio import create_temp_file, update_yamls, create_dashboard_export
|
||||||
|
from ..dependencies import get_config_manager
|
||||||
|
from ..core.migration_engine import MigrationEngine
|
||||||
|
from ..core.database import SessionLocal
|
||||||
|
from ..models.mapping import DatabaseMapping, Environment
|
||||||
|
|
||||||
|
# [DEF:MigrationPlugin:Class]
|
||||||
|
# @PURPOSE: Implementation of the migration plugin logic.
|
||||||
|
class MigrationPlugin(PluginBase):
|
||||||
|
"""
|
||||||
|
A plugin to migrate Superset dashboards between environments.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:id:Function]
|
||||||
|
# @PURPOSE: Returns the unique identifier for the migration plugin.
|
||||||
|
# @PRE: None.
|
||||||
|
# @POST: Returns "superset-migration".
|
||||||
|
# @RETURN: str - "superset-migration"
|
||||||
|
def id(self) -> str:
|
||||||
|
with belief_scope("id"):
|
||||||
|
return "superset-migration"
|
||||||
|
# [/DEF:id:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:name:Function]
|
||||||
|
# @PURPOSE: Returns the human-readable name of the migration plugin.
|
||||||
|
# @PRE: None.
|
||||||
|
# @POST: Returns the plugin name.
|
||||||
|
# @RETURN: str - Plugin name.
|
||||||
|
def name(self) -> str:
|
||||||
|
with belief_scope("name"):
|
||||||
|
return "Superset Dashboard Migration"
|
||||||
|
# [/DEF:name:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:description:Function]
|
||||||
|
# @PURPOSE: Returns a description of the migration plugin.
|
||||||
|
# @PRE: None.
|
||||||
|
# @POST: Returns the plugin description.
|
||||||
|
# @RETURN: str - Plugin description.
|
||||||
|
def description(self) -> str:
|
||||||
|
with belief_scope("description"):
|
||||||
|
return "Migrates dashboards between Superset environments."
|
||||||
|
# [/DEF:description:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:version:Function]
|
||||||
|
# @PURPOSE: Returns the version of the migration plugin.
|
||||||
|
# @PRE: None.
|
||||||
|
# @POST: Returns "1.0.0".
|
||||||
|
# @RETURN: str - "1.0.0"
|
||||||
|
def version(self) -> str:
|
||||||
|
with belief_scope("version"):
|
||||||
|
return "1.0.0"
|
||||||
|
# [/DEF:version:Function]
|
||||||
|
|
||||||
|
# [DEF:get_schema:Function]
|
||||||
|
# @PURPOSE: Returns the JSON schema for migration plugin parameters.
|
||||||
|
# @PRE: Config manager is available.
|
||||||
|
# @POST: Returns a valid JSON schema dictionary.
|
||||||
|
# @RETURN: Dict[str, Any] - JSON schema.
|
||||||
|
def get_schema(self) -> Dict[str, Any]:
|
||||||
|
with belief_scope("get_schema"):
|
||||||
|
config_manager = get_config_manager()
|
||||||
|
envs = [e.name for e in config_manager.get_environments()]
|
||||||
|
|
||||||
|
return {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"from_env": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Source Environment",
|
||||||
|
"description": "The environment to migrate from.",
|
||||||
|
"enum": envs if envs else ["dev", "prod"],
|
||||||
|
},
|
||||||
|
"to_env": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Target Environment",
|
||||||
|
"description": "The environment to migrate to.",
|
||||||
|
"enum": envs if envs else ["dev", "prod"],
|
||||||
|
},
|
||||||
|
"dashboard_regex": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Dashboard Regex",
|
||||||
|
"description": "A regular expression to filter dashboards to migrate.",
|
||||||
|
},
|
||||||
|
"replace_db_config": {
|
||||||
|
"type": "boolean",
|
||||||
|
"title": "Replace DB Config",
|
||||||
|
"description": "Whether to replace the database configuration.",
|
||||||
|
"default": False,
|
||||||
|
},
|
||||||
|
"from_db_id": {
|
||||||
|
"type": "integer",
|
||||||
|
"title": "Source DB ID",
|
||||||
|
"description": "The ID of the source database to replace (if replacing).",
|
||||||
|
},
|
||||||
|
"to_db_id": {
|
||||||
|
"type": "integer",
|
||||||
|
"title": "Target DB ID",
|
||||||
|
"description": "The ID of the target database to replace with (if replacing).",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
"required": ["from_env", "to_env", "dashboard_regex"],
|
||||||
|
}
|
||||||
|
# [/DEF:get_schema:Function]
|
||||||
|
|
||||||
|
# [DEF:execute:Function]
|
||||||
|
# @PURPOSE: Executes the dashboard migration logic.
|
||||||
|
# @PARAM: params (Dict[str, Any]) - Migration parameters.
|
||||||
|
# @PRE: Source and target environments must be configured.
|
||||||
|
# @POST: Selected dashboards are migrated.
|
||||||
|
async def execute(self, params: Dict[str, Any]):
|
||||||
|
with belief_scope("MigrationPlugin.execute"):
|
||||||
|
source_env_id = params.get("source_env_id")
|
||||||
|
target_env_id = params.get("target_env_id")
|
||||||
|
selected_ids = params.get("selected_ids")
|
||||||
|
|
||||||
|
# Legacy support or alternative params
|
||||||
|
from_env_name = params.get("from_env")
|
||||||
|
to_env_name = params.get("to_env")
|
||||||
|
dashboard_regex = params.get("dashboard_regex")
|
||||||
|
|
||||||
|
replace_db_config = params.get("replace_db_config", False)
|
||||||
|
from_db_id = params.get("from_db_id")
|
||||||
|
to_db_id = params.get("to_db_id")
|
||||||
|
|
||||||
|
# [DEF:MigrationPlugin.execute:Action]
|
||||||
|
# @PURPOSE: Execute the migration logic with proper task logging.
|
||||||
|
task_id = params.get("_task_id")
|
||||||
|
from ..dependencies import get_task_manager
|
||||||
|
tm = get_task_manager()
|
||||||
|
|
||||||
|
class TaskLoggerProxy:
|
||||||
|
# [DEF:__init__:Function]
|
||||||
|
# @PURPOSE: Initializes the proxy logger.
|
||||||
|
# @PRE: None.
|
||||||
|
# @POST: Instance is initialized.
|
||||||
|
def __init__(self):
|
||||||
|
with belief_scope("__init__"):
|
||||||
|
# Initialize parent with dummy values since we override methods
|
||||||
|
pass
|
||||||
|
# [/DEF:__init__:Function]
|
||||||
|
|
||||||
|
# [DEF:debug:Function]
|
||||||
|
# @PURPOSE: Logs a debug message to the task manager.
|
||||||
|
# @PRE: msg is a string.
|
||||||
|
# @POST: Log is added to task manager if task_id exists.
|
||||||
|
def debug(self, msg, *args, extra=None, **kwargs):
|
||||||
|
with belief_scope("debug"):
|
||||||
|
if task_id: tm._add_log(task_id, "DEBUG", msg, extra or {})
|
||||||
|
# [/DEF:debug:Function]
|
||||||
|
|
||||||
|
# [DEF:info:Function]
|
||||||
|
# @PURPOSE: Logs an info message to the task manager.
|
||||||
|
# @PRE: msg is a string.
|
||||||
|
# @POST: Log is added to task manager if task_id exists.
|
||||||
|
def info(self, msg, *args, extra=None, **kwargs):
|
||||||
|
with belief_scope("info"):
|
||||||
|
if task_id: tm._add_log(task_id, "INFO", msg, extra or {})
|
||||||
|
# [/DEF:info:Function]
|
||||||
|
|
||||||
|
# [DEF:warning:Function]
|
||||||
|
# @PURPOSE: Logs a warning message to the task manager.
|
||||||
|
# @PRE: msg is a string.
|
||||||
|
# @POST: Log is added to task manager if task_id exists.
|
||||||
|
def warning(self, msg, *args, extra=None, **kwargs):
|
||||||
|
with belief_scope("warning"):
|
||||||
|
if task_id: tm._add_log(task_id, "WARNING", msg, extra or {})
|
||||||
|
# [/DEF:warning:Function]
|
||||||
|
|
||||||
|
# [DEF:error:Function]
|
||||||
|
# @PURPOSE: Logs an error message to the task manager.
|
||||||
|
# @PRE: msg is a string.
|
||||||
|
# @POST: Log is added to task manager if task_id exists.
|
||||||
|
def error(self, msg, *args, extra=None, **kwargs):
|
||||||
|
with belief_scope("error"):
|
||||||
|
if task_id: tm._add_log(task_id, "ERROR", msg, extra or {})
|
||||||
|
# [/DEF:error:Function]
|
||||||
|
|
||||||
|
# [DEF:critical:Function]
|
||||||
|
# @PURPOSE: Logs a critical message to the task manager.
|
||||||
|
# @PRE: msg is a string.
|
||||||
|
# @POST: Log is added to task manager if task_id exists.
|
||||||
|
def critical(self, msg, *args, extra=None, **kwargs):
|
||||||
|
with belief_scope("critical"):
|
||||||
|
if task_id: tm._add_log(task_id, "ERROR", msg, extra or {})
|
||||||
|
# [/DEF:critical:Function]
|
||||||
|
|
||||||
|
# [DEF:exception:Function]
|
||||||
|
# @PURPOSE: Logs an exception message to the task manager.
|
||||||
|
# @PRE: msg is a string.
|
||||||
|
# @POST: Log is added to task manager if task_id exists.
|
||||||
|
def exception(self, msg, *args, **kwargs):
|
||||||
|
with belief_scope("exception"):
|
||||||
|
if task_id: tm._add_log(task_id, "ERROR", msg, {"exception": True})
|
||||||
|
# [/DEF:exception:Function]
|
||||||
|
|
||||||
|
logger = TaskLoggerProxy()
|
||||||
|
logger.info(f"[MigrationPlugin][Entry] Starting migration task.")
|
||||||
|
logger.info(f"[MigrationPlugin][Action] Params: {params}")
|
||||||
|
|
||||||
|
try:
|
||||||
|
with belief_scope("execute"):
|
||||||
|
config_manager = get_config_manager()
|
||||||
|
environments = config_manager.get_environments()
|
||||||
|
|
||||||
|
# Resolve environments
|
||||||
|
src_env = None
|
||||||
|
tgt_env = None
|
||||||
|
|
||||||
|
if source_env_id:
|
||||||
|
src_env = next((e for e in environments if e.id == source_env_id), None)
|
||||||
|
elif from_env_name:
|
||||||
|
src_env = next((e for e in environments if e.name == from_env_name), None)
|
||||||
|
|
||||||
|
if target_env_id:
|
||||||
|
tgt_env = next((e for e in environments if e.id == target_env_id), None)
|
||||||
|
elif to_env_name:
|
||||||
|
tgt_env = next((e for e in environments if e.name == to_env_name), None)
|
||||||
|
|
||||||
|
if not src_env or not tgt_env:
|
||||||
|
raise ValueError(f"Could not resolve source or target environment. Source: {source_env_id or from_env_name}, Target: {target_env_id or to_env_name}")
|
||||||
|
|
||||||
|
from_env_name = src_env.name
|
||||||
|
to_env_name = tgt_env.name
|
||||||
|
|
||||||
|
logger.info(f"[MigrationPlugin][State] Resolved environments: {from_env_name} -> {to_env_name}")
|
||||||
|
|
||||||
|
from_c = SupersetClient(src_env)
|
||||||
|
to_c = SupersetClient(tgt_env)
|
||||||
|
|
||||||
|
if not from_c or not to_c:
|
||||||
|
raise ValueError(f"Clients not initialized for environments: {from_env_name}, {to_env_name}")
|
||||||
|
|
||||||
|
_, all_dashboards = from_c.get_dashboards()
|
||||||
|
|
||||||
|
dashboards_to_migrate = []
|
||||||
|
if selected_ids:
|
||||||
|
dashboards_to_migrate = [d for d in all_dashboards if d["id"] in selected_ids]
|
||||||
|
elif dashboard_regex:
|
||||||
|
regex_str = str(dashboard_regex)
|
||||||
|
dashboards_to_migrate = [
|
||||||
|
d for d in all_dashboards if re.search(regex_str, d["dashboard_title"], re.IGNORECASE)
|
||||||
|
]
|
||||||
|
else:
|
||||||
|
logger.warning("[MigrationPlugin][State] No selection criteria provided (selected_ids or dashboard_regex).")
|
||||||
|
return
|
||||||
|
|
||||||
|
if not dashboards_to_migrate:
|
||||||
|
logger.warning("[MigrationPlugin][State] No dashboards found matching criteria.")
|
||||||
|
return
|
||||||
|
|
||||||
|
# Fetch mappings from database
|
||||||
|
db_mapping = {}
|
||||||
|
if replace_db_config:
|
||||||
|
db = SessionLocal()
|
||||||
|
try:
|
||||||
|
# Find environment IDs by name
|
||||||
|
src_env = db.query(Environment).filter(Environment.name == from_env_name).first()
|
||||||
|
tgt_env = db.query(Environment).filter(Environment.name == to_env_name).first()
|
||||||
|
|
||||||
|
if src_env and tgt_env:
|
||||||
|
mappings = db.query(DatabaseMapping).filter(
|
||||||
|
DatabaseMapping.source_env_id == src_env.id,
|
||||||
|
DatabaseMapping.target_env_id == tgt_env.id
|
||||||
|
).all()
|
||||||
|
db_mapping = {m.source_db_uuid: m.target_db_uuid for m in mappings}
|
||||||
|
logger.info(f"[MigrationPlugin][State] Loaded {len(db_mapping)} database mappings.")
|
||||||
|
finally:
|
||||||
|
db.close()
|
||||||
|
|
||||||
|
engine = MigrationEngine()
|
||||||
|
|
||||||
|
for dash in dashboards_to_migrate:
|
||||||
|
dash_id, dash_slug, title = dash["id"], dash.get("slug"), dash["dashboard_title"]
|
||||||
|
|
||||||
|
try:
|
||||||
|
exported_content, _ = from_c.export_dashboard(dash_id)
|
||||||
|
with create_temp_file(content=exported_content, dry_run=True, suffix=".zip", logger=logger) as tmp_zip_path:
|
||||||
|
# Always transform to strip databases to avoid password errors
|
||||||
|
with create_temp_file(suffix=".zip", dry_run=True, logger=logger) as tmp_new_zip:
|
||||||
|
success = engine.transform_zip(str(tmp_zip_path), str(tmp_new_zip), db_mapping, strip_databases=False)
|
||||||
|
|
||||||
|
if not success and replace_db_config:
|
||||||
|
# Signal missing mapping and wait (only if we care about mappings)
|
||||||
|
if task_id:
|
||||||
|
logger.info(f"[MigrationPlugin][Action] Pausing for missing mapping in task {task_id}")
|
||||||
|
# In a real scenario, we'd pass the missing DB info to the frontend
|
||||||
|
# For this task, we'll just simulate the wait
|
||||||
|
await tm.wait_for_resolution(task_id)
|
||||||
|
# After resolution, retry transformation with updated mappings
|
||||||
|
# (Mappings would be updated in task.params by resolve_task)
|
||||||
|
db = SessionLocal()
|
||||||
|
try:
|
||||||
|
src_env = db.query(Environment).filter(Environment.name == from_env_name).first()
|
||||||
|
tgt_env = db.query(Environment).filter(Environment.name == to_env_name).first()
|
||||||
|
mappings = db.query(DatabaseMapping).filter(
|
||||||
|
DatabaseMapping.source_env_id == src_env.id,
|
||||||
|
DatabaseMapping.target_env_id == tgt_env.id
|
||||||
|
).all()
|
||||||
|
db_mapping = {m.source_db_uuid: m.target_db_uuid for m in mappings}
|
||||||
|
finally:
|
||||||
|
db.close()
|
||||||
|
success = engine.transform_zip(str(tmp_zip_path), str(tmp_new_zip), db_mapping, strip_databases=False)
|
||||||
|
|
||||||
|
if success:
|
||||||
|
to_c.import_dashboard(file_name=tmp_new_zip, dash_id=dash_id, dash_slug=dash_slug)
|
||||||
|
else:
|
||||||
|
logger.error(f"[MigrationPlugin][Failure] Failed to transform ZIP for dashboard {title}")
|
||||||
|
|
||||||
|
logger.info(f"[MigrationPlugin][Success] Dashboard {title} imported.")
|
||||||
|
except Exception as exc:
|
||||||
|
# Check for password error
|
||||||
|
error_msg = str(exc)
|
||||||
|
# The error message from Superset is often a JSON string inside a string.
|
||||||
|
# We need to robustly detect the password requirement.
|
||||||
|
# Typical error: "Error importing dashboard: databases/PostgreSQL.yaml: {'_schema': ['Must provide a password for the database']}"
|
||||||
|
|
||||||
|
if "Must provide a password for the database" in error_msg:
|
||||||
|
# Extract database name
|
||||||
|
# Try to find "databases/DBNAME.yaml" pattern
|
||||||
|
import re
|
||||||
|
db_name = "unknown"
|
||||||
|
match = re.search(r"databases/([^.]+)\.yaml", error_msg)
|
||||||
|
if match:
|
||||||
|
db_name = match.group(1)
|
||||||
|
else:
|
||||||
|
# Fallback: try to find 'database 'NAME'' pattern
|
||||||
|
match_alt = re.search(r"database '([^']+)'", error_msg)
|
||||||
|
if match_alt:
|
||||||
|
db_name = match_alt.group(1)
|
||||||
|
|
||||||
|
logger.warning(f"[MigrationPlugin][Action] Detected missing password for database: {db_name}")
|
||||||
|
|
||||||
|
if task_id:
|
||||||
|
input_request = {
|
||||||
|
"type": "database_password",
|
||||||
|
"databases": [db_name],
|
||||||
|
"error_message": error_msg
|
||||||
|
}
|
||||||
|
tm.await_input(task_id, input_request)
|
||||||
|
|
||||||
|
# Wait for user input
|
||||||
|
await tm.wait_for_input(task_id)
|
||||||
|
|
||||||
|
# Resume with passwords
|
||||||
|
task = tm.get_task(task_id)
|
||||||
|
passwords = task.params.get("passwords", {})
|
||||||
|
|
||||||
|
# Retry import with password
|
||||||
|
if passwords:
|
||||||
|
logger.info(f"[MigrationPlugin][Action] Retrying import for {title} with provided passwords.")
|
||||||
|
to_c.import_dashboard(file_name=tmp_new_zip, dash_id=dash_id, dash_slug=dash_slug, passwords=passwords)
|
||||||
|
logger.info(f"[MigrationPlugin][Success] Dashboard {title} imported after password injection.")
|
||||||
|
# Clear passwords from params after use for security
|
||||||
|
if "passwords" in task.params:
|
||||||
|
del task.params["passwords"]
|
||||||
|
continue
|
||||||
|
|
||||||
|
logger.error(f"[MigrationPlugin][Failure] Failed to migrate dashboard {title}: {exc}", exc_info=True)
|
||||||
|
|
||||||
|
logger.info("[MigrationPlugin][Exit] Migration finished.")
|
||||||
|
except Exception as e:
|
||||||
|
logger.critical(f"[MigrationPlugin][Failure] Fatal error during migration: {e}", exc_info=True)
|
||||||
|
raise e
|
||||||
|
# [/DEF:MigrationPlugin.execute:Action]
|
||||||
|
# [/DEF:execute:Function]
|
||||||
|
# [/DEF:MigrationPlugin:Class]
|
||||||
|
# [/DEF:MigrationPlugin:Module]
|
||||||
202
backend/src/plugins/search.py
Normal file
202
backend/src/plugins/search.py
Normal file
@@ -0,0 +1,202 @@
|
|||||||
|
# [DEF:SearchPluginModule:Module]
|
||||||
|
# @SEMANTICS: plugin, search, datasets, regex, superset
|
||||||
|
# @PURPOSE: Implements a plugin for searching text patterns across all datasets in a specific Superset environment.
|
||||||
|
# @LAYER: Plugins
|
||||||
|
# @RELATION: Inherits from PluginBase. Uses SupersetClient from core.
|
||||||
|
# @CONSTRAINT: Must use belief_scope for logging.
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
import re
|
||||||
|
from typing import Dict, Any, List, Optional
|
||||||
|
from ..core.plugin_base import PluginBase
|
||||||
|
from ..core.superset_client import SupersetClient
|
||||||
|
from ..core.logger import logger, belief_scope
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
# [DEF:SearchPlugin:Class]
|
||||||
|
# @PURPOSE: Plugin for searching text patterns in Superset datasets.
|
||||||
|
class SearchPlugin(PluginBase):
|
||||||
|
"""
|
||||||
|
Plugin for searching text patterns in Superset datasets.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:id:Function]
|
||||||
|
# @PURPOSE: Returns the unique identifier for the search plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string ID.
|
||||||
|
# @RETURN: str - "search-datasets"
|
||||||
|
def id(self) -> str:
|
||||||
|
with belief_scope("id"):
|
||||||
|
return "search-datasets"
|
||||||
|
# [/DEF:id:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:name:Function]
|
||||||
|
# @PURPOSE: Returns the human-readable name of the search plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string name.
|
||||||
|
# @RETURN: str - Plugin name.
|
||||||
|
def name(self) -> str:
|
||||||
|
with belief_scope("name"):
|
||||||
|
return "Search Datasets"
|
||||||
|
# [/DEF:name:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:description:Function]
|
||||||
|
# @PURPOSE: Returns a description of the search plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string description.
|
||||||
|
# @RETURN: str - Plugin description.
|
||||||
|
def description(self) -> str:
|
||||||
|
with belief_scope("description"):
|
||||||
|
return "Search for text patterns across all datasets in a specific environment."
|
||||||
|
# [/DEF:description:Function]
|
||||||
|
|
||||||
|
@property
|
||||||
|
# [DEF:version:Function]
|
||||||
|
# @PURPOSE: Returns the version of the search plugin.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns string version.
|
||||||
|
# @RETURN: str - "1.0.0"
|
||||||
|
def version(self) -> str:
|
||||||
|
with belief_scope("version"):
|
||||||
|
return "1.0.0"
|
||||||
|
# [/DEF:version:Function]
|
||||||
|
|
||||||
|
# [DEF:get_schema:Function]
|
||||||
|
# @PURPOSE: Returns the JSON schema for the search plugin parameters.
|
||||||
|
# @PRE: Plugin instance exists.
|
||||||
|
# @POST: Returns dictionary schema.
|
||||||
|
# @RETURN: Dict[str, Any] - JSON schema.
|
||||||
|
def get_schema(self) -> Dict[str, Any]:
|
||||||
|
with belief_scope("get_schema"):
|
||||||
|
return {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"env": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Environment",
|
||||||
|
"description": "The Superset environment to search in (e.g., 'dev', 'prod')."
|
||||||
|
},
|
||||||
|
"query": {
|
||||||
|
"type": "string",
|
||||||
|
"title": "Search Query (Regex)",
|
||||||
|
"description": "The regex pattern to search for."
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"required": ["env", "query"]
|
||||||
|
}
|
||||||
|
# [/DEF:get_schema:Function]
|
||||||
|
|
||||||
|
# [DEF:execute:Function]
|
||||||
|
# @PURPOSE: Executes the dataset search logic.
|
||||||
|
# @PARAM: params (Dict[str, Any]) - Search parameters.
|
||||||
|
# @PRE: Params contain valid 'env' and 'query'.
|
||||||
|
# @POST: Returns a dictionary with count and results list.
|
||||||
|
# @RETURN: Dict[str, Any] - Search results.
|
||||||
|
async def execute(self, params: Dict[str, Any]) -> Dict[str, Any]:
|
||||||
|
with belief_scope("SearchPlugin.execute", f"params={params}"):
|
||||||
|
env_name = params.get("env")
|
||||||
|
search_query = params.get("query")
|
||||||
|
|
||||||
|
if not env_name or not search_query:
|
||||||
|
logger.error("[SearchPlugin.execute][State] Missing required parameters.")
|
||||||
|
raise ValueError("Missing required parameters: env, query")
|
||||||
|
|
||||||
|
# Get config and initialize client
|
||||||
|
from ..dependencies import get_config_manager
|
||||||
|
config_manager = get_config_manager()
|
||||||
|
env_config = config_manager.get_environment(env_name)
|
||||||
|
if not env_config:
|
||||||
|
logger.error(f"[SearchPlugin.execute][State] Environment '{env_name}' not found.")
|
||||||
|
raise ValueError(f"Environment '{env_name}' not found in configuration.")
|
||||||
|
|
||||||
|
client = SupersetClient(env_config)
|
||||||
|
client.authenticate()
|
||||||
|
|
||||||
|
logger.info(f"[SearchPlugin.execute][Action] Searching for pattern: '{search_query}' in environment: {env_name}")
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Ported logic from search_script.py
|
||||||
|
_, datasets = client.get_datasets(query={"columns": ["id", "table_name", "sql", "database", "columns"]})
|
||||||
|
|
||||||
|
if not datasets:
|
||||||
|
logger.warning("[SearchPlugin.execute][State] No datasets found.")
|
||||||
|
return {"count": 0, "results": []}
|
||||||
|
|
||||||
|
pattern = re.compile(search_query, re.IGNORECASE)
|
||||||
|
results = []
|
||||||
|
|
||||||
|
for dataset in datasets:
|
||||||
|
dataset_id = dataset.get('id')
|
||||||
|
dataset_name = dataset.get('table_name', 'Unknown')
|
||||||
|
if not dataset_id:
|
||||||
|
continue
|
||||||
|
|
||||||
|
for field, value in dataset.items():
|
||||||
|
value_str = str(value)
|
||||||
|
if pattern.search(value_str):
|
||||||
|
match_obj = pattern.search(value_str)
|
||||||
|
results.append({
|
||||||
|
"dataset_id": dataset_id,
|
||||||
|
"dataset_name": dataset_name,
|
||||||
|
"field": field,
|
||||||
|
"match_context": self._get_context(value_str, match_obj.group() if match_obj else ""),
|
||||||
|
"full_value": value_str
|
||||||
|
})
|
||||||
|
|
||||||
|
logger.info(f"[SearchPlugin.execute][Success] Found matches in {len(results)} locations.")
|
||||||
|
return {
|
||||||
|
"count": len(results),
|
||||||
|
"results": results
|
||||||
|
}
|
||||||
|
|
||||||
|
except re.error as e:
|
||||||
|
logger.error(f"[SearchPlugin.execute][Failure] Invalid regex pattern: {e}")
|
||||||
|
raise ValueError(f"Invalid regex pattern: {e}")
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"[SearchPlugin.execute][Failure] Error during search: {e}")
|
||||||
|
raise
|
||||||
|
# [/DEF:execute:Function]
|
||||||
|
|
||||||
|
# [DEF:_get_context:Function]
|
||||||
|
# @PURPOSE: Extracts a small context around the match for display.
|
||||||
|
# @PARAM: text (str) - The full text to extract context from.
|
||||||
|
# @PARAM: match_text (str) - The matched text pattern.
|
||||||
|
# @PARAM: context_lines (int) - Number of lines of context to include.
|
||||||
|
# @PRE: text and match_text must be strings.
|
||||||
|
# @POST: Returns context string.
|
||||||
|
# @RETURN: str - Extracted context.
|
||||||
|
def _get_context(self, text: str, match_text: str, context_lines: int = 1) -> str:
|
||||||
|
"""
|
||||||
|
Extracts a small context around the match for display.
|
||||||
|
"""
|
||||||
|
with belief_scope("_get_context"):
|
||||||
|
if not match_text:
|
||||||
|
return text[:100] + "..." if len(text) > 100 else text
|
||||||
|
|
||||||
|
lines = text.splitlines()
|
||||||
|
match_line_index = -1
|
||||||
|
for i, line in enumerate(lines):
|
||||||
|
if match_text in line:
|
||||||
|
match_line_index = i
|
||||||
|
break
|
||||||
|
|
||||||
|
if match_line_index != -1:
|
||||||
|
start = max(0, match_line_index - context_lines)
|
||||||
|
end = min(len(lines), match_line_index + context_lines + 1)
|
||||||
|
context = []
|
||||||
|
for i in range(start, end):
|
||||||
|
line_content = lines[i]
|
||||||
|
if i == match_line_index:
|
||||||
|
context.append(f"==> {line_content}")
|
||||||
|
else:
|
||||||
|
context.append(f" {line_content}")
|
||||||
|
return "\n".join(context)
|
||||||
|
|
||||||
|
return text[:100] + "..." if len(text) > 100 else text
|
||||||
|
# [/DEF:_get_context:Function]
|
||||||
|
|
||||||
|
# [/DEF:SearchPlugin:Class]
|
||||||
|
# [/DEF:SearchPluginModule:Module]
|
||||||
71
backend/src/services/mapping_service.py
Normal file
71
backend/src/services/mapping_service.py
Normal file
@@ -0,0 +1,71 @@
|
|||||||
# [DEF:backend.src.services.mapping_service:Module]
#
# @SEMANTICS: service, mapping, fuzzy-matching, superset
# @PURPOSE: Orchestrates database fetching and fuzzy matching suggestions.
# @LAYER: Service
# @RELATION: DEPENDS_ON -> backend.src.core.superset_client
# @RELATION: DEPENDS_ON -> backend.src.core.utils.matching
#
# @INVARIANT: Suggestions are based on database names.

# [SECTION: IMPORTS]
from typing import List, Dict

from backend.src.core.logger import belief_scope
from backend.src.core.superset_client import SupersetClient
from backend.src.core.utils.matching import suggest_mappings
# [/SECTION]


# [DEF:MappingService:Class]
# @PURPOSE: Service for handling database mapping logic.
class MappingService:

    # [DEF:__init__:Function]
    # @PURPOSE: Initializes the mapping service with a config manager.
    # @PRE: config_manager is provided.
    # @PARAM: config_manager (ConfigManager) - The configuration manager.
    # @POST: Service is initialized.
    def __init__(self, config_manager):
        with belief_scope("MappingService.__init__"):
            self.config_manager = config_manager
    # [/DEF:__init__:Function]

    # [DEF:_get_client:Function]
    # @PURPOSE: Helper to get an initialized SupersetClient for an environment.
    # @PARAM: env_id (str) - The ID of the environment.
    # @PRE: Environment must exist in config.
    # @POST: Returns an initialized SupersetClient.
    # @RETURN: SupersetClient - Initialized client.
    def _get_client(self, env_id: str) -> SupersetClient:
        with belief_scope("MappingService._get_client", f"env_id={env_id}"):
            envs = self.config_manager.get_environments()
            env = next((e for e in envs if e.id == env_id), None)
            if not env:
                raise ValueError(f"Environment {env_id} not found")

            return SupersetClient(env)
    # [/DEF:_get_client:Function]

    # [DEF:get_suggestions:Function]
    # @PURPOSE: Fetches databases from both environments and returns fuzzy matching suggestions.
    # @PARAM: source_env_id (str) - Source environment ID.
    # @PARAM: target_env_id (str) - Target environment ID.
    # @PRE: Both environments must be accessible.
    # @POST: Returns fuzzy-matched database suggestions.
    # @RETURN: List[Dict] - Suggested mappings.
    async def get_suggestions(self, source_env_id: str, target_env_id: str) -> List[Dict]:
        """Get suggested mappings between two environments."""
        with belief_scope("MappingService.get_suggestions", f"source={source_env_id}, target={target_env_id}"):
            source_client = self._get_client(source_env_id)
            target_client = self._get_client(target_env_id)

            source_dbs = source_client.get_databases_summary()
            target_dbs = target_client.get_databases_summary()

            return suggest_mappings(source_dbs, target_dbs)
    # [/DEF:get_suggestions:Function]

# [/DEF:MappingService:Class]

# [/DEF:backend.src.services.mapping_service:Module]
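A quick usage sketch for the service above. How `ConfigManager` is constructed and the environment IDs ("dev", "prod") are assumptions for illustration; only the `MappingService` API itself comes from the module.

```python
# Hypothetical wiring of MappingService; ConfigManager construction details
# and the environment IDs are illustrative assumptions.
import asyncio

from backend.src.core.config_manager import ConfigManager
from backend.src.services.mapping_service import MappingService

async def main() -> None:
    service = MappingService(ConfigManager())
    suggestions = await service.get_suggestions("dev", "prod")
    for suggestion in suggestions:
        print(suggestion)

if __name__ == "__main__":
    asyncio.run(main())
```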
BIN
backend/tasks.db
Normal file
Binary file not shown.
59
backend/tests/test_logger.py
Normal file
@@ -0,0 +1,59 @@
import pytest
from src.core.logger import belief_scope, logger


# [DEF:test_belief_scope_logs_entry_action_exit:Function]
# @PURPOSE: Test that belief_scope generates [ID][Entry], [ID][Action], and [ID][Exit] logs.
# @PRE: belief_scope is available. caplog fixture is used.
# @POST: Logs are verified to contain Entry, Action, and Exit tags.
def test_belief_scope_logs_entry_action_exit(caplog):
    """Test that belief_scope generates [ID][Entry], [ID][Action], and [ID][Exit] logs."""
    caplog.set_level("INFO")

    with belief_scope("TestFunction"):
        logger.info("Doing something important")

    # Check that the logs contain the expected patterns
    log_messages = [record.message for record in caplog.records]

    assert any("[TestFunction][Entry]" in msg for msg in log_messages), "Entry log not found"
    assert any("[TestFunction][Action] Doing something important" in msg for msg in log_messages), "Action log not found"
    assert any("[TestFunction][Exit]" in msg for msg in log_messages), "Exit log not found"
# [/DEF:test_belief_scope_logs_entry_action_exit:Function]


# [DEF:test_belief_scope_error_handling:Function]
# @PURPOSE: Test that belief_scope logs Coherence:Failed on exception.
# @PRE: belief_scope is available. caplog fixture is used.
# @POST: Logs are verified to contain Coherence:Failed tag.
def test_belief_scope_error_handling(caplog):
    """Test that belief_scope logs Coherence:Failed on exception."""
    caplog.set_level("INFO")

    with pytest.raises(ValueError):
        with belief_scope("FailingFunction"):
            raise ValueError("Something went wrong")

    log_messages = [record.message for record in caplog.records]

    assert any("[FailingFunction][Entry]" in msg for msg in log_messages), "Entry log not found"
    assert any("[FailingFunction][Coherence:Failed]" in msg for msg in log_messages), "Failed coherence log not found"
    # Exit should not be logged on failure
# [/DEF:test_belief_scope_error_handling:Function]


# [DEF:test_belief_scope_success_coherence:Function]
# @PURPOSE: Test that belief_scope logs Coherence:OK on success.
# @PRE: belief_scope is available. caplog fixture is used.
# @POST: Logs are verified to contain Coherence:OK tag.
def test_belief_scope_success_coherence(caplog):
    """Test that belief_scope logs Coherence:OK on success."""
    caplog.set_level("INFO")

    with belief_scope("SuccessFunction"):
        pass

    log_messages = [record.message for record in caplog.records]

    assert any("[SuccessFunction][Coherence:OK]" in msg for msg in log_messages), "Success coherence log not found"
# [/DEF:test_belief_scope_success_coherence:Function]
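For orientation, a minimal context manager consistent with what these tests assert might look like the sketch below. It is not the project's actual `src/core/logger.py`; the contextvar-based filter that prefixes in-scope messages with `[<scope>][Action]` is an assumption about how such tagging could be done.

```python
# Minimal sketch of a belief_scope-style helper satisfying the assertions
# above; the real implementation in src/core/logger.py may differ.
import logging
from contextlib import contextmanager
from contextvars import ContextVar

_scope: ContextVar[str] = ContextVar("belief_scope", default="")

class _ScopeFilter(logging.Filter):
    # Prefix messages emitted inside a scope with "[<scope>][Action] ".
    def filter(self, record: logging.LogRecord) -> bool:
        scope = _scope.get()
        if scope and isinstance(record.msg, str) and not record.msg.startswith("["):
            record.msg = f"[{scope}][Action] {record.msg}"
        return True

logger = logging.getLogger("belief")
logger.addFilter(_ScopeFilter())

@contextmanager
def belief_scope(scope_id: str, details: str = ""):
    token = _scope.set(scope_id)
    logger.info(f"[{scope_id}][Entry] {details}")
    try:
        yield
        logger.info(f"[{scope_id}][Coherence:OK]")
        logger.info(f"[{scope_id}][Exit]")
    except Exception:
        logger.info(f"[{scope_id}][Coherence:Failed]")
        raise
    finally:
        _scope.reset(token)
```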
23
backend/tests/test_models.py
Normal file
@@ -0,0 +1,23 @@
import pytest
from src.core.config_models import Environment
from src.core.logger import belief_scope

# [DEF:test_environment_model:Function]
# @PURPOSE: Tests that Environment model correctly stores values.
# @PRE: Environment class is available.
# @POST: Values are verified.
def test_environment_model():
    with belief_scope("test_environment_model"):
        env = Environment(
            id="test-id",
            name="test-env",
            url="http://localhost:8088/api/v1",
            username="admin",
            password="password"
        )
        assert env.id == "test-id"
        assert env.name == "test-env"
        assert env.url == "http://localhost:8088/api/v1"
# [/DEF:test_environment_model:Function]
146
backup_script.py
@@ -1,146 +0,0 @@
# pylint: disable=too-many-arguments,too-many-locals,too-many-statements,too-many-branches,unused-argument,invalid-name,redefined-outer-name
"""
[MODULE] Superset Dashboard Backup Script
@contract: Automates the Superset dashboard backup process.
"""

# [IMPORTS] Standard library
import logging
import sys
from pathlib import Path
from dataclasses import dataclass

# [IMPORTS] Third-party
from requests.exceptions import RequestException

# [IMPORTS] Local modules
from superset_tool.client import SupersetClient
from superset_tool.exceptions import SupersetAPIError
from superset_tool.utils.logger import SupersetLogger
from superset_tool.utils.fileio import (
    save_and_unpack_dashboard,
    archive_exports,
    sanitize_filename,
    consolidate_archive_folders,
    remove_empty_directories
)
from superset_tool.utils.init_clients import setup_clients


# [ENTITY: Dataclass('BackupConfig')]
# CONTRACT:
# PURPOSE: Holds the configuration for the backup process.
@dataclass
class BackupConfig:
    """Configuration for the backup process."""
    consolidate: bool = True
    rotate_archive: bool = True
    clean_folders: bool = True

# [ENTITY: Function('backup_dashboards')]
# CONTRACT:
# PURPOSE: Backs up all available dashboards for the given client and environment.
# PRECONDITIONS:
# - `client` must be an initialized `SupersetClient` instance.
# - `env_name` must be a string identifying the environment.
# - `backup_root` must be a valid path to the backup root directory.
# POSTCONDITIONS:
# - Dashboards are exported and saved.
# - Returns `True` if all dashboards were exported without critical errors, `False` otherwise.
def backup_dashboards(
    client: SupersetClient,
    env_name: str,
    backup_root: Path,
    logger: SupersetLogger,
    config: BackupConfig
) -> bool:
    logger.info(f"[STATE][backup_dashboards][ENTER] Starting backup for {env_name}.")
    try:
        dashboard_count, dashboard_meta = client.get_dashboards()
        logger.info(f"[STATE][backup_dashboards][PROGRESS] Found {dashboard_count} dashboards to export in {env_name}.")
        if dashboard_count == 0:
            return True

        success_count = 0
        for db in dashboard_meta:
            dashboard_id = db.get('id')
            dashboard_title = db.get('dashboard_title', 'Unknown Dashboard')
            if not dashboard_id:
                continue

            try:
                dashboard_base_dir_name = sanitize_filename(f"{dashboard_title}")
                dashboard_dir = backup_root / env_name / dashboard_base_dir_name
                dashboard_dir.mkdir(parents=True, exist_ok=True)

                zip_content, filename = client.export_dashboard(dashboard_id)

                save_and_unpack_dashboard(
                    zip_content=zip_content,
                    original_filename=filename,
                    output_dir=dashboard_dir,
                    unpack=False,
                    logger=logger
                )

                if config.rotate_archive:
                    archive_exports(str(dashboard_dir), logger=logger)

                success_count += 1
            except (SupersetAPIError, RequestException, IOError, OSError) as db_error:
                logger.error(f"[STATE][backup_dashboards][FAILURE] Failed to export dashboard {dashboard_title}: {db_error}", exc_info=True)

        if config.consolidate:
            consolidate_archive_folders(backup_root / env_name, logger=logger)

        if config.clean_folders:
            remove_empty_directories(str(backup_root / env_name), logger=logger)

        return success_count == dashboard_count
    except (RequestException, IOError) as e:
        logger.critical(f"[STATE][backup_dashboards][FAILURE] Fatal error during backup for {env_name}: {e}", exc_info=True)
        return False
# END_FUNCTION_backup_dashboards

# [ENTITY: Function('main')]
# CONTRACT:
# PURPOSE: Main entry point of the script.
# PRECONDITIONS: None
# POSTCONDITIONS: Returns an exit code.
def main() -> int:
    log_dir = Path("P:\\Superset\\010 Бекапы\\Logs")
    logger = SupersetLogger(log_dir=log_dir, level=logging.INFO, console=True)
    logger.info("[STATE][main][ENTER] Starting Superset backup process.")

    exit_code = 0
    try:
        clients = setup_clients(logger)
        superset_backup_repo = Path("P:\\Superset\\010 Бекапы")
        superset_backup_repo.mkdir(parents=True, exist_ok=True)

        results = {}
        environments = ['dev', 'sbx', 'prod', 'preprod']
        backup_config = BackupConfig(rotate_archive=True)

        for env in environments:
            results[env] = backup_dashboards(
                clients[env],
                env.upper(),
                superset_backup_repo,
                logger=logger,
                config=backup_config
            )

        if not all(results.values()):
            exit_code = 1

    except (RequestException, IOError) as e:
        logger.critical(f"[STATE][main][FAILURE] Fatal error in main execution: {e}", exc_info=True)
        exit_code = 1

    logger.info("[STATE][main][SUCCESS] Superset backup process finished.")
    return exit_code
# END_FUNCTION_main

if __name__ == "__main__":
    sys.exit(main())
42
docs/migration_mapping.md
Normal file
@@ -0,0 +1,42 @@
# Database Mapping in Migration

This document describes how to use the database mapping feature during Superset dashboard migrations.

## Overview

When migrating dashboards between different Superset environments (e.g., from Dev to Prod), the underlying databases often have different UUIDs even if they represent the same data source. The Database Mapping feature allows you to define these relationships so that migrated assets automatically point to the correct database in the target environment.

## How it Works

1. **Fuzzy Matching**: The system automatically suggests mappings by comparing database names between environments using the RapidFuzz library. A sketch of this matching step follows the list.
2. **Persistence**: Mappings are stored in a local SQLite database (`mappings.db`) and are reused for future migrations between the same environment pair.
3. **Asset Interception**: During migration, the system intercepts the Superset export ZIP archive, modifies the `database_uuid` in the dataset YAML files, and re-packages the archive before importing it to the target. The rewrite step is sketched after the migration instructions below.

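A minimal sketch of what the name-based matching could look like, assuming RapidFuzz's `fuzz.token_sort_ratio` scorer. The field names (`database_name`, `uuid`), the threshold, and the helper name are illustrative; they are not the project's actual `suggest_mappings` implementation.

```python
# Hypothetical name-based mapping suggestions using RapidFuzz.
from typing import Dict, List
from rapidfuzz import fuzz

def suggest_by_name(source_dbs: List[Dict], target_dbs: List[Dict], threshold: int = 80) -> List[Dict]:
    suggestions = []
    for src in source_dbs:
        # Score every target database name against the source name.
        scored = [
            (tgt, fuzz.token_sort_ratio(src["database_name"], tgt["database_name"]))
            for tgt in target_dbs
        ]
        best, score = max(scored, key=lambda pair: pair[1], default=(None, 0))
        if best is not None and score >= threshold:
            suggestions.append({
                "source_uuid": src["uuid"],
                "target_uuid": best["uuid"],
                "score": score,
            })
    return suggestions
```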
## Usage Instructions

### 1. Define Mappings

1. Navigate to the **Database Mapping** tab in the application.
2. Select your **Source** and **Target** environments.
3. Click **Fetch Databases & Suggestions**.
4. Review the suggested mappings (highlighted in green).
5. If a suggestion is incorrect or missing, use the dropdown in the "Target Database" column to select the correct one.
6. Mappings are saved automatically when you select a target database.

### 2. Run Migration with Database Replacement

1. Go to the **Migration** dashboard.
2. Select the **Source** and **Target** environments.
3. Select the dashboards or datasets you want to migrate.
4. Enable the **Replace Database (Apply Mappings)** toggle (the UUID rewrite it triggers is sketched below).
5. Click **Start Migration**.

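The interception step can be pictured roughly as follows. The archive layout (`datasets/.../*.yaml`) and the top-level `database_uuid` key follow the standard Superset export format, while the function name and the mapping shape are assumptions rather than the tool's actual code.

```python
# Illustrative rewrite of database_uuid inside a Superset export archive.
import io
import zipfile
from typing import Dict

import yaml

def rewrite_database_uuids(zip_bytes: bytes, mapping: Dict[str, str]) -> bytes:
    out_buf = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as src, \
         zipfile.ZipFile(out_buf, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if "/datasets/" in item.filename and item.filename.endswith(".yaml"):
                doc = yaml.safe_load(data)
                old_uuid = doc.get("database_uuid")
                if old_uuid in mapping:
                    # Point the dataset at the mapped target database.
                    doc["database_uuid"] = mapping[old_uuid]
                data = yaml.safe_dump(doc).encode("utf-8")
            dst.writestr(item, data)
    return out_buf.getvalue()
```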
### 3. Handling Missing Mappings

If the migration engine encounters a database that has no defined mapping, the process will pause, and a modal will appear prompting you to select a target database on-the-fly. Once selected, the mapping is saved, and the migration continues.

## Troubleshooting

- **Mapping not applied**: Ensure the "Replace Database" toggle is enabled.
- **Wrong database in target**: Check the mapping table for the specific environment pair and correct any errors.
- **Connection errors**: Ensure both Superset environments are reachable and credentials are correct in Settings.
87
docs/plugin_dev.md
Executable file
@@ -0,0 +1,87 @@
# Plugin Development Guide

This guide explains how to create new plugins for the Superset Tools application.

## 1. Plugin Structure

A plugin is a single Python file located in the `backend/src/plugins/` directory. Each plugin file must contain a class that inherits from `PluginBase`.

## 2. Implementing `PluginBase`

The `PluginBase` class is an abstract base class that defines the interface for all plugins. You must implement the following properties and methods:

- **`id`**: A unique string identifier for your plugin (e.g., `"my-cool-plugin"`).
- **`name`**: A human-readable name for your plugin (e.g., `"My Cool Plugin"`).
- **`description`**: A brief description of what your plugin does.
- **`version`**: The version of your plugin (e.g., `"1.0.0"`).
- **`get_schema()`**: A method that returns a JSON schema dictionary defining the input parameters for your plugin. This schema is used to automatically generate a form in the frontend.
- **`execute(params: Dict[str, Any])`**: An `async` method that contains the main logic of your plugin. The `params` argument is a dictionary containing the input data from the user, validated against the schema you defined.

## 3. Example Plugin

Here is an example of a simple "Hello World" plugin:

```python
# backend/src/plugins/hello.py
# [DEF:HelloWorldPlugin:Plugin]
# @SEMANTICS: hello, world, example, plugin
# @PURPOSE: A simple "Hello World" plugin example.
# @LAYER: Domain (Plugin)
# @RELATION: Inherits from PluginBase
# @PUBLIC_API: execute

from typing import Dict, Any
from ..core.plugin_base import PluginBase

class HelloWorldPlugin(PluginBase):
    @property
    def id(self) -> str:
        return "hello-world"

    @property
    def name(self) -> str:
        return "Hello World"

    @property
    def description(self) -> str:
        return "A simple plugin that prints a greeting."

    @property
    def version(self) -> str:
        return "1.0.0"

    def get_schema(self) -> Dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "name": {
                    "type": "string",
                    "title": "Name",
                    "description": "The name to greet.",
                    "default": "World",
                }
            },
            "required": ["name"],
        }

    async def execute(self, params: Dict[str, Any]):
        name = params["name"]
        print(f"Hello, {name}!")
```

## 4. Logging

You can use the global logger instance to log messages from your plugin. The logger is available in the `superset_tool.utils.logger` module.

```python
from superset_tool.utils.logger import SupersetLogger

logger = SupersetLogger()

async def execute(self, params: Dict[str, Any]):
    logger.info("My plugin is running!")
```

## 5. Testing

To test your plugin, simply run the application and navigate to the web UI. Your plugin should appear in the list of available tools. An automated check with pytest is sketched below.
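One possible way to exercise the example plugin with pytest. The import path and the use of `pytest-asyncio` are assumptions about the test setup, not something this guide prescribes.

```python
# Hypothetical pytest check for the HelloWorldPlugin example; assumes
# pytest-asyncio (or an equivalent async test runner) is installed.
import pytest

from src.plugins.hello import HelloWorldPlugin

@pytest.mark.asyncio
async def test_hello_world_plugin(capsys):
    plugin = HelloWorldPlugin()
    assert plugin.id == "hello-world"
    assert "name" in plugin.get_schema()["properties"]

    await plugin.execute({"name": "Superset"})
    assert "Hello, Superset!" in capsys.readouterr().out
```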
46
docs/settings.md
Normal file
@@ -0,0 +1,46 @@
# Web Application Settings Mechanism

This document describes the settings management system for the Superset Tools application.

## Overview

The settings mechanism allows users to configure multiple Superset environments and global application settings (like backup storage) via the web UI.

## Backend Architecture

### Data Models

Configuration is structured using Pydantic models in `backend/src/core/config_models.py`:

- `Environment`: Represents a Superset instance (URL, credentials). The `base_url` is automatically normalized to include the `/api/v1` suffix if missing (see the sketch after this list).
- `GlobalSettings`: Global application parameters (e.g., `backup_path`).
- `AppConfig`: The root configuration object.

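A rough, Pydantic-v2-style sketch of what these models could look like. The exact field names and validator in the real `config_models.py` may differ (the backend tests construct `Environment` with a `url` field), so treat this as illustrative only.

```python
# Illustrative models; field names and the normalization rule are assumptions
# based on the description above, not the project's actual config_models.py.
from typing import List
from pydantic import BaseModel, field_validator

class Environment(BaseModel):
    id: str
    name: str
    url: str
    username: str
    password: str

    @field_validator("url")
    @classmethod
    def _ensure_api_suffix(cls, value: str) -> str:
        # Normalize the URL so it always ends with the /api/v1 suffix.
        value = value.rstrip("/")
        return value if value.endswith("/api/v1") else f"{value}/api/v1"

class GlobalSettings(BaseModel):
    backup_path: str = ""

class AppConfig(BaseModel):
    environments: List[Environment] = []
    global_settings: GlobalSettings = GlobalSettings()
```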
### Configuration Manager

The `ConfigManager` (`backend/src/core/config_manager.py`) handles:
- Persistence to `config.json`.
- CRUD operations for environments.
- Validation and logging.

### API Endpoints

The settings API is available at `/settings`; a short client example follows the list:

- `GET /settings`: Retrieve all settings (passwords are masked).
- `PATCH /settings/global`: Update global settings.
- `GET /settings/environments`: List environments.
- `POST /settings/environments`: Add environment.
- `PUT /settings/environments/{id}`: Update environment.
- `DELETE /settings/environments/{id}`: Remove environment.
- `POST /settings/environments/{id}/test`: Test connection.

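The sketch below shows one way a client might add an environment and then test the connection. The backend host/port and the payload fields are assumptions for illustration; only the endpoint paths come from the list above.

```python
# Illustrative client for the settings API; base URL and payload fields are
# assumptions, and error handling is omitted for brevity.
import httpx

BASE_URL = "http://localhost:8000"

def add_and_test_environment() -> None:
    payload = {
        "name": "dev",
        "url": "http://localhost:8088/api/v1",
        "username": "admin",
        "password": "password",
    }
    with httpx.Client(base_url=BASE_URL) as client:
        created = client.post("/settings/environments", json=payload).json()
        result = client.post(f"/settings/environments/{created['id']}/test")
        print(result.json())

if __name__ == "__main__":
    add_and_test_environment()
```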
## Frontend Implementation

The settings page is located at `frontend/src/pages/Settings.svelte`. It provides forms for managing global settings and Superset environments.

## Integration

Existing plugins and utilities use the `ConfigManager` to fetch configuration:
- `superset_tool/utils/init_clients.py`: Dynamically initializes Superset clients from the configured environments.
- `BackupPlugin`: Uses the configured `backup_path` as the default storage location.
25
frontend/.vite/deps/_metadata.json
Normal file
@@ -0,0 +1,25 @@
{
  "hash": "a8d52b4a",
  "configHash": "7bf228bb",
  "lockfileHash": "57452527",
  "browserHash": "e59a8620",
  "optimized": {
    "svelte": {
      "src": "../../node_modules/svelte/src/index-client.js",
      "file": "svelte.js",
      "fileHash": "0e9fe405",
      "needsInterop": false
    },
    "svelte/store": {
      "src": "../../node_modules/svelte/src/store/index-client.js",
      "file": "svelte_store.js",
      "fileHash": "28cc90b1",
      "needsInterop": false
    }
  },
  "chunks": {
    "chunk-YAQNMG2X": {
      "file": "chunk-YAQNMG2X.js"
    }
  }
}
4170
frontend/.vite/deps/chunk-YAQNMG2X.js
Normal file
File diff suppressed because it is too large
7
frontend/.vite/deps/chunk-YAQNMG2X.js.map
Normal file
File diff suppressed because one or more lines are too long
3
frontend/.vite/deps/package.json
Normal file
@@ -0,0 +1,3 @@
{
  "type": "module"
}
46
frontend/.vite/deps/svelte.js
Normal file
@@ -0,0 +1,46 @@
import {
  afterUpdate,
  beforeUpdate,
  createContext,
  createEventDispatcher,
  createRawSnippet,
  flushSync,
  fork,
  getAbortSignal,
  getAllContexts,
  getContext,
  hasContext,
  hydratable,
  hydrate,
  mount,
  onDestroy,
  onMount,
  setContext,
  settled,
  tick,
  unmount,
  untrack
} from "./chunk-YAQNMG2X.js";
export {
  afterUpdate,
  beforeUpdate,
  createContext,
  createEventDispatcher,
  createRawSnippet,
  flushSync,
  fork,
  getAbortSignal,
  getAllContexts,
  getContext,
  hasContext,
  hydratable,
  hydrate,
  mount,
  onDestroy,
  onMount,
  setContext,
  settled,
  tick,
  unmount,
  untrack
};
7
frontend/.vite/deps/svelte.js.map
Normal file
@@ -0,0 +1,7 @@
{
  "version": 3,
  "sources": [],
  "sourcesContent": [],
  "mappings": "",
  "names": []
}
Some files were not shown because too many files have changed in this diff.