feat: implement plugin architecture and application settings with Svelte UI

- Added plugin base and loader for backend extensibility
- Implemented application settings management with config persistence
- Created Svelte-based frontend with Dashboard and Settings pages
- Added API routes for plugins, tasks, and settings
- Updated documentation and specifications
- Improved project structure and developer tools
2025-12-20 20:48:18 +03:00
parent ce703322c2
commit 2d8cae563f
98 changed files with 7894 additions and 5021 deletions

.gitignore vendored Normal file → Executable file

@@ -16,3 +16,4 @@ dist/
 node_modules/
 build/
 .env*
+config.json

.kilocode/mcp.json Normal file → Executable file

.kilocode/rules/specify-rules.md Normal file → Executable file

@@ -24,7 +24,7 @@ Identify inconsistencies, duplications, ambiguities, and underspecified items ac
 ### 1. Initialize Analysis Context
-Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
+Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
 - SPEC = FEATURE_DIR/spec.md
 - PLAN = FEATURE_DIR/plan.md

@@ -33,7 +33,7 @@ You **MUST** consider the user input before proceeding (if not empty).
 ## Execution Steps
-1. **Setup**: Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
+1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
 - All file paths must be absolute.
 - For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

@@ -22,7 +22,7 @@ Note: This clarification workflow is expected to run (and be completed) BEFORE i
 Execution steps:
-1. Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -PathsOnly` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
+1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
 - `FEATURE_DIR`
 - `FEATURE_SPEC`
 - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)

@@ -12,7 +12,7 @@ You **MUST** consider the user input before proceeding (if not empty).
 ## Outline
-1. Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
 2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
 - Scan all checklist files in the checklists/ directory

@@ -20,7 +20,7 @@ You **MUST** consider the user input before proceeding (if not empty).
 ## Outline
-1. **Setup**: Run `.specify/scripts/powershell/setup-plan.ps1 -Json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
 2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
@@ -75,7 +75,7 @@ You **MUST** consider the user input before proceeding (if not empty).
 - Output OpenAPI/GraphQL schema to `/contracts/`
 3. **Agent context update**:
-- Run `.specify/scripts/powershell/update-agent-context.ps1 -AgentType kilocode`
+- Run `.specify/scripts/bash/update-agent-context.sh kilocode`
 - These scripts detect which AI agent is in use
 - Update the appropriate agent-specific context file
 - Add only new technology from current plan

@@ -54,10 +54,10 @@ Given that feature description, do this:
 - Find the highest number N
 - Use N+1 for the new branch number
-d. Run the script `.specify/scripts/powershell/create-new-feature.ps1 -Json "$ARGUMENTS"` with the calculated number and short-name:
+d. Run the script `.specify/scripts/bash/create-new-feature.sh --json "$ARGUMENTS"` with the calculated number and short-name:
 - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
-- Bash example: `.specify/scripts/powershell/create-new-feature.ps1 -Json "$ARGUMENTS" --json --number 5 --short-name "user-auth" "Add user authentication"`
+- Bash example: `.specify/scripts/bash/create-new-feature.sh --json "$ARGUMENTS" --json --number 5 --short-name "user-auth" "Add user authentication"`
-- PowerShell example: `.specify/scripts/powershell/create-new-feature.ps1 -Json "$ARGUMENTS" -Json -Number 5 -ShortName "user-auth" "Add user authentication"`
+- PowerShell example: `.specify/scripts/bash/create-new-feature.sh --json "$ARGUMENTS" -Json -Number 5 -ShortName "user-auth" "Add user authentication"`
 **IMPORTANT**:
 - Check all three sources (remote branches, local branches, specs directories) to find the highest number

@@ -21,7 +21,7 @@ You **MUST** consider the user input before proceeding (if not empty).
 ## Outline
-1. **Setup**: Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
 2. **Load design documents**: Read from FEATURE_DIR:
 - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)

@@ -13,7 +13,7 @@ You **MUST** consider the user input before proceeding (if not empty).
 ## Outline
-1. Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
 1. From the executed script, extract the path to **tasks**.
 1. Get the Git remote by running:

.pylintrc Normal file → Executable file

.specify/memory/constitution.md

@@ -1,68 +1,50 @@
-<!--
-SYNC IMPACT REPORT
-Version: 1.1.0 (Svelte Support)
-Changes:
-- Added Svelte Component semantic markup standards.
-- Updated File Structure Standards to include `.svelte` files.
-- Refined File Structure Standards to distinguish between Python Modules and Svelte Components.
-Templates Status:
-- .specify/templates/plan-template.md: ⚠ Pending (Needs update to include Component headers in checks).
-- .specify/templates/spec-template.md: ✅ Aligned.
-- .specify/templates/tasks-template.md: ⚠ Pending (Needs update to include Component definition tasks).
--->
-# Semantic Code Generation Constitution
+# [PROJECT_NAME] Constitution
+<!-- Example: Spec Constitution, TaskFlow Constitution, etc. -->
 ## Core Principles
-### I. Causal Validity (Contracts First)
-Semantic definitions (Contracts) must ALWAYS precede implementation code. Logic is downstream of definition. We define the structure and constraints (`[DEF]`, `@PRE`, `@POST`) before writing the executable logic. This ensures that the "what" and "why" govern the "how".
+### [PRINCIPLE_1_NAME]
+<!-- Example: I. Library-First -->
+[PRINCIPLE_1_DESCRIPTION]
+<!-- Example: Every feature starts as a standalone library; Libraries must be self-contained, independently testable, documented; Clear purpose required - no organizational-only libraries -->
-### II. Immutability of Architecture
-Once defined, architectural decisions in the Module Header (`@LAYER`, `@INVARIANT`, `@CONSTRAINT`) are treated as immutable constraints for that module. Changes to these require an explicit refactoring step, not ad-hoc modification during implementation.
+### [PRINCIPLE_2_NAME]
+<!-- Example: II. CLI Interface -->
+[PRINCIPLE_2_DESCRIPTION]
+<!-- Example: Every library exposes functionality via CLI; Text in/out protocol: stdin/args → stdout, errors → stderr; Support JSON + human-readable formats -->
-### III. Semantic Format Compliance
-All output must strictly follow the `[DEF]` / `[/DEF]` anchor syntax with specific Metadata Tags (`@KEY`) and Graph Relations (`@RELATION`). This structure is non-negotiable as it ensures the codebase remains machine-readable, fractal-structured, and optimized for Sparse Attention navigation by AI agents.
+### [PRINCIPLE_3_NAME]
+<!-- Example: III. Test-First (NON-NEGOTIABLE) -->
+[PRINCIPLE_3_DESCRIPTION]
+<!-- Example: TDD mandatory: Tests written → User approved → Tests fail → Then implement; Red-Green-Refactor cycle strictly enforced -->
-### IV. Design by Contract (DbC)
-Contracts are the Source of Truth. Functions and Classes must define their purpose, specifications, and constraints (`@PRE`, `@POST`, `@THROW`) in the metadata block before implementation. Implementation must strictly satisfy these contracts.
+### [PRINCIPLE_4_NAME]
+<!-- Example: IV. Integration Testing -->
+[PRINCIPLE_4_DESCRIPTION]
+<!-- Example: Focus areas requiring integration tests: New library contract tests, Contract changes, Inter-service communication, Shared schemas -->
-### V. Belief State Logging
-Logs must define the agent's internal state for debugging and coherence checks. We use a strict format: `logger.level(f"[{ANCHOR_ID}][{STATE}] {MESSAGE} context={...}")` to track transitions between `Entry`, `Validation`, `Action`, and `Coherence` states.
+### [PRINCIPLE_5_NAME]
+<!-- Example: V. Observability, VI. Versioning & Breaking Changes, VII. Simplicity -->
+[PRINCIPLE_5_DESCRIPTION]
+<!-- Example: Text I/O ensures debuggability; Structured logging required; Or: MAJOR.MINOR.BUILD format; Or: Start simple, YAGNI principles -->
-## File Structure Standards
-### Python Modules
-Every `.py` file must start with a Module definition header (`[DEF:module_name:Module]`) containing:
-- `@SEMANTICS`: Keywords for vector search.
-- `@PURPOSE`: Primary responsibility of the module.
-- `@LAYER`: Architecture layer (Domain/Infra/UI).
-- `@RELATION`: Dependencies.
-- `@INVARIANT` & `@CONSTRAINT`: Immutable rules.
-- `@PUBLIC_API`: Exported symbols.
-### Svelte Components
-Every `.svelte` file must start with a Component definition header (`[DEF:ComponentName:Component]`) wrapped in an HTML comment `<!-- ... -->` containing:
-- `@SEMANTICS`: Keywords for vector search.
-- `@PURPOSE`: Primary responsibility of the component.
-- `@LAYER`: Architecture layer (UI/State/Layout).
-- `@RELATION`: Child components, Stores used, API calls.
-- `@PROPS`: Input properties.
-- `@EVENTS`: Emitted events.
-- `@INVARIANT`: Immutable UI/State rules.
-## Generation Workflow
-The development process follows a strict sequence:
-1. **Analyze Request**: Identify target module and graph position.
-2. **Define Structure**: Generate `[DEF]` anchors and Contracts FIRST.
-3. **Implement Logic**: Write code satisfying Contracts.
-4. **Validate**: If logic conflicts with Contract -> Stop -> Report Error.
+## [SECTION_2_NAME]
+<!-- Example: Additional Constraints, Security Requirements, Performance Standards, etc. -->
+[SECTION_2_CONTENT]
+<!-- Example: Technology stack requirements, compliance standards, deployment policies, etc. -->
+## [SECTION_3_NAME]
+<!-- Example: Development Workflow, Review Process, Quality Gates, etc. -->
+[SECTION_3_CONTENT]
+<!-- Example: Code review requirements, testing gates, deployment approval process, etc. -->
 ## Governance
-This Constitution establishes the "Semantic Code Generation Protocol" as the supreme law of this repository.
-- **Automated Enforcement**: All code generation tools and agents must parse and validate adherence to the `[DEF]` syntax and Contract requirements.
-- **Amendments**: Changes to the syntax or core principles require a formal amendment to this Constitution and a corresponding update to the constitution
-- **Review**: Code reviews must verify that implementation matches the preceding contracts and that no "naked code" exists outside of semantic anchors.
-- **Compliance**: Failure to adhere to the `[DEF]` / `[/DEF]` structure constitutes a build failure.
+<!-- Example: Constitution supersedes all other practices; Amendments require documentation, approval, migration plan -->
+[GOVERNANCE_RULES]
+<!-- Example: All PRs/reviews must verify compliance; Complexity must be justified; Use [GUIDANCE_FILE] for runtime development guidance -->
-**Version**: 1.1.0 | **Ratified**: 2025-12-19 | **Last Amended**: 2025-12-19
+**Version**: [CONSTITUTION_VERSION] | **Ratified**: [RATIFICATION_DATE] | **Last Amended**: [LAST_AMENDED_DATE]
+<!-- Example: Version: 2.1.1 | Ratified: 2025-06-13 | Last Amended: 2025-07-16 -->

.specify/scripts/bash/check-prerequisites.sh Executable file

@@ -0,0 +1,166 @@
#!/usr/bin/env bash
# Consolidated prerequisite checking script
#
# This script provides unified prerequisite checking for Spec-Driven Development workflow.
# It replaces the functionality previously spread across multiple scripts.
#
# Usage: ./check-prerequisites.sh [OPTIONS]
#
# OPTIONS:
# --json Output in JSON format
# --require-tasks Require tasks.md to exist (for implementation phase)
# --include-tasks Include tasks.md in AVAILABLE_DOCS list
# --paths-only Only output path variables (no validation)
# --help, -h Show help message
#
# OUTPUTS:
# JSON mode: {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
# Text mode: FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
# Paths only: REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.
set -e
# Parse command line arguments
JSON_MODE=false
REQUIRE_TASKS=false
INCLUDE_TASKS=false
PATHS_ONLY=false
for arg in "$@"; do
case "$arg" in
--json)
JSON_MODE=true
;;
--require-tasks)
REQUIRE_TASKS=true
;;
--include-tasks)
INCLUDE_TASKS=true
;;
--paths-only)
PATHS_ONLY=true
;;
--help|-h)
cat << 'EOF'
Usage: check-prerequisites.sh [OPTIONS]
Consolidated prerequisite checking for Spec-Driven Development workflow.
OPTIONS:
--json Output in JSON format
--require-tasks Require tasks.md to exist (for implementation phase)
--include-tasks Include tasks.md in AVAILABLE_DOCS list
--paths-only Only output path variables (no prerequisite validation)
--help, -h Show this help message
EXAMPLES:
# Check task prerequisites (plan.md required)
./check-prerequisites.sh --json
# Check implementation prerequisites (plan.md + tasks.md required)
./check-prerequisites.sh --json --require-tasks --include-tasks
# Get feature paths only (no validation)
./check-prerequisites.sh --paths-only
EOF
exit 0
;;
*)
echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
exit 1
;;
esac
done
# Source common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get feature paths and validate branch
eval $(get_feature_paths)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# If paths-only mode, output paths and exit (support JSON + paths-only combined)
if $PATHS_ONLY; then
if $JSON_MODE; then
# Minimal JSON paths payload (no validation performed)
printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
"$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
else
echo "REPO_ROOT: $REPO_ROOT"
echo "BRANCH: $CURRENT_BRANCH"
echo "FEATURE_DIR: $FEATURE_DIR"
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "TASKS: $TASKS"
fi
exit 0
fi
# Validate required directories and files
if [[ ! -d "$FEATURE_DIR" ]]; then
echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
echo "Run /speckit.specify first to create the feature structure." >&2
exit 1
fi
if [[ ! -f "$IMPL_PLAN" ]]; then
echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.plan first to create the implementation plan." >&2
exit 1
fi
# Check for tasks.md if required
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.tasks first to create the task list." >&2
exit 1
fi
# Build list of available documents
docs=()
# Always check these optional docs
[[ -f "$RESEARCH" ]] && docs+=("research.md")
[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")
# Check contracts directory (only if it exists and has files)
if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
docs+=("contracts/")
fi
[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")
# Include tasks.md if requested and it exists
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
docs+=("tasks.md")
fi
# Output results
if $JSON_MODE; then
# Build JSON array of documents
if [[ ${#docs[@]} -eq 0 ]]; then
json_docs="[]"
else
json_docs=$(printf '"%s",' "${docs[@]}")
json_docs="[${json_docs%,}]"
fi
printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
else
# Text output
echo "FEATURE_DIR:$FEATURE_DIR"
echo "AVAILABLE_DOCS:"
# Show status of each potential document
check_file "$RESEARCH" "research.md"
check_file "$DATA_MODEL" "data-model.md"
check_dir "$CONTRACTS_DIR" "contracts/"
check_file "$QUICKSTART" "quickstart.md"
if $INCLUDE_TASKS; then
check_file "$TASKS" "tasks.md"
fi
fi
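
A quick usage sketch (repo path, branch name, and document list are hypothetical; the JSON shapes follow the printf statements above):

# Paths-only mode combined with --json, as the command templates invoke it
./check-prerequisites.sh --json --paths-only
{"REPO_ROOT":"/home/dev/app","BRANCH":"002-plugin-settings","FEATURE_DIR":"/home/dev/app/specs/002-plugin-settings","FEATURE_SPEC":"/home/dev/app/specs/002-plugin-settings/spec.md","IMPL_PLAN":"/home/dev/app/specs/002-plugin-settings/plan.md","TASKS":"/home/dev/app/specs/002-plugin-settings/tasks.md"}
# Implementation-phase check; errors out unless plan.md and tasks.md exist
./check-prerequisites.sh --json --require-tasks --include-tasks
{"FEATURE_DIR":"/home/dev/app/specs/002-plugin-settings","AVAILABLE_DOCS":["research.md","tasks.md"]}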

.specify/scripts/bash/common.sh Executable file

@@ -0,0 +1,156 @@
#!/usr/bin/env bash
# Common functions and variables for all scripts
# Get repository root, with fallback for non-git repositories
get_repo_root() {
if git rev-parse --show-toplevel >/dev/null 2>&1; then
git rev-parse --show-toplevel
else
# Fall back to script location for non-git repos
local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
(cd "$script_dir/../../.." && pwd)
fi
}
# Get current branch, with fallback for non-git repositories
get_current_branch() {
# First check if SPECIFY_FEATURE environment variable is set
if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
echo "$SPECIFY_FEATURE"
return
fi
# Then check git if available
if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then
git rev-parse --abbrev-ref HEAD
return
fi
# For non-git repos, try to find the latest feature directory
local repo_root=$(get_repo_root)
local specs_dir="$repo_root/specs"
if [[ -d "$specs_dir" ]]; then
local latest_feature=""
local highest=0
for dir in "$specs_dir"/*; do
if [[ -d "$dir" ]]; then
local dirname=$(basename "$dir")
if [[ "$dirname" =~ ^([0-9]{3})- ]]; then
local number=${BASH_REMATCH[1]}
number=$((10#$number))
if [[ "$number" -gt "$highest" ]]; then
highest=$number
latest_feature=$dirname
fi
fi
fi
done
if [[ -n "$latest_feature" ]]; then
echo "$latest_feature"
return
fi
fi
echo "main" # Final fallback
}
# Check if we have git available
has_git() {
git rev-parse --show-toplevel >/dev/null 2>&1
}
check_feature_branch() {
local branch="$1"
local has_git_repo="$2"
# For non-git repos, we can't enforce branch naming but still provide output
if [[ "$has_git_repo" != "true" ]]; then
echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
return 0
fi
if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then
echo "ERROR: Not on a feature branch. Current branch: $branch" >&2
echo "Feature branches should be named like: 001-feature-name" >&2
return 1
fi
return 0
}
get_feature_dir() { echo "$1/specs/$2"; }
# Find feature directory by numeric prefix instead of exact branch match
# This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature)
find_feature_dir_by_prefix() {
local repo_root="$1"
local branch_name="$2"
local specs_dir="$repo_root/specs"
# Extract numeric prefix from branch (e.g., "004" from "004-whatever")
if [[ ! "$branch_name" =~ ^([0-9]{3})- ]]; then
# If branch doesn't have numeric prefix, fall back to exact match
echo "$specs_dir/$branch_name"
return
fi
local prefix="${BASH_REMATCH[1]}"
# Search for directories in specs/ that start with this prefix
local matches=()
if [[ -d "$specs_dir" ]]; then
for dir in "$specs_dir"/"$prefix"-*; do
if [[ -d "$dir" ]]; then
matches+=("$(basename "$dir")")
fi
done
fi
# Handle results
if [[ ${#matches[@]} -eq 0 ]]; then
# No match found - return the branch name path (will fail later with clear error)
echo "$specs_dir/$branch_name"
elif [[ ${#matches[@]} -eq 1 ]]; then
# Exactly one match - perfect!
echo "$specs_dir/${matches[0]}"
else
# Multiple matches - this shouldn't happen with proper naming convention
echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2
echo "Please ensure only one spec directory exists per numeric prefix." >&2
echo "$specs_dir/$branch_name" # Return something to avoid breaking the script
fi
}
get_feature_paths() {
local repo_root=$(get_repo_root)
local current_branch=$(get_current_branch)
local has_git_repo="false"
if has_git; then
has_git_repo="true"
fi
# Use prefix-based lookup to support multiple branches per spec
local feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch")
cat <<EOF
REPO_ROOT='$repo_root'
CURRENT_BRANCH='$current_branch'
HAS_GIT='$has_git_repo'
FEATURE_DIR='$feature_dir'
FEATURE_SPEC='$feature_dir/spec.md'
IMPL_PLAN='$feature_dir/plan.md'
TASKS='$feature_dir/tasks.md'
RESEARCH='$feature_dir/research.md'
DATA_MODEL='$feature_dir/data-model.md'
QUICKSTART='$feature_dir/quickstart.md'
CONTRACTS_DIR='$feature_dir/contracts'
EOF
}
check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
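
A minimal sketch of how the sibling scripts consume this file (branch and directory names are hypothetical):

#!/usr/bin/env bash
# Hypothetical consumer living next to common.sh
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
eval $(get_feature_paths)  # defines REPO_ROOT, CURRENT_BRANCH, FEATURE_DIR, IMPL_PLAN, TASKS, ...
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# Prefix lookup: on branch 004-fix-bug this still resolves to specs/004-add-feature/
# when that is the only directory whose name starts with "004-".
echo "Working in: $FEATURE_DIR"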

.specify/scripts/bash/create-new-feature.sh Executable file

@@ -0,0 +1,297 @@
#!/usr/bin/env bash
set -e
JSON_MODE=false
SHORT_NAME=""
BRANCH_NUMBER=""
ARGS=()
i=1
while [ $i -le $# ]; do
arg="${!i}"
case "$arg" in
--json)
JSON_MODE=true
;;
--short-name)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
# Check if the next argument is another option (starts with --)
if [[ "$next_arg" == --* ]]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
SHORT_NAME="$next_arg"
;;
--number)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
if [[ "$next_arg" == --* ]]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
BRANCH_NUMBER="$next_arg"
;;
--help|-h)
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>"
echo ""
echo "Options:"
echo " --json Output in JSON format"
echo " --short-name <name> Provide a custom short name (2-4 words) for the branch"
echo " --number N Specify branch number manually (overrides auto-detection)"
echo " --help, -h Show this help message"
echo ""
echo "Examples:"
echo " $0 'Add user authentication system' --short-name 'user-auth'"
echo " $0 'Implement OAuth2 integration for API' --number 5"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
i=$((i + 1))
done
FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>" >&2
exit 1
fi
# Function to find the repository root by searching for existing project markers
find_repo_root() {
local dir="$1"
while [ "$dir" != "/" ]; do
if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
echo "$dir"
return 0
fi
dir="$(dirname "$dir")"
done
return 1
}
# Function to get highest number from specs directory
get_highest_from_specs() {
local specs_dir="$1"
local highest=0
if [ -d "$specs_dir" ]; then
for dir in "$specs_dir"/*; do
[ -d "$dir" ] || continue
dirname=$(basename "$dir")
number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0")
number=$((10#$number))
if [ "$number" -gt "$highest" ]; then
highest=$number
fi
done
fi
echo "$highest"
}
# Function to get highest number from git branches
get_highest_from_branches() {
local highest=0
# Get all branches (local and remote)
branches=$(git branch -a 2>/dev/null || echo "")
if [ -n "$branches" ]; then
while IFS= read -r branch; do
# Clean branch name: remove leading markers and remote prefixes
clean_branch=$(echo "$branch" | sed 's/^[* ]*//; s|^remotes/[^/]*/||')
# Extract feature number if branch matches pattern ###-*
if echo "$clean_branch" | grep -q '^[0-9]\{3\}-'; then
number=$(echo "$clean_branch" | grep -o '^[0-9]\{3\}' || echo "0")
number=$((10#$number))
if [ "$number" -gt "$highest" ]; then
highest=$number
fi
fi
done <<< "$branches"
fi
echo "$highest"
}
# Function to check existing branches (local and remote) and return next available number
check_existing_branches() {
local specs_dir="$1"
# Fetch all remotes to get latest branch info (suppress errors if no remotes)
git fetch --all --prune 2>/dev/null || true
# Get highest number from ALL branches (not just matching short name)
local highest_branch=$(get_highest_from_branches)
# Get highest number from ALL specs (not just matching short name)
local highest_spec=$(get_highest_from_specs "$specs_dir")
# Take the maximum of both
local max_num=$highest_branch
if [ "$highest_spec" -gt "$max_num" ]; then
max_num=$highest_spec
fi
# Return next number
echo $((max_num + 1))
}
# Function to clean and format a branch name
clean_branch_name() {
local name="$1"
echo "$name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
}
# Resolve repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories that
# were initialised with --no-git.
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if git rev-parse --show-toplevel >/dev/null 2>&1; then
REPO_ROOT=$(git rev-parse --show-toplevel)
HAS_GIT=true
else
REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
if [ -z "$REPO_ROOT" ]; then
echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
exit 1
fi
HAS_GIT=false
fi
cd "$REPO_ROOT"
SPECS_DIR="$REPO_ROOT/specs"
mkdir -p "$SPECS_DIR"
# Function to generate branch name with stop word filtering and length filtering
generate_branch_name() {
local description="$1"
# Common stop words to filter out
local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"
# Convert to lowercase and split into words
local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')
# Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original)
local meaningful_words=()
for word in $clean_name; do
# Skip empty words
[ -z "$word" ] && continue
# Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
if ! echo "$word" | grep -qiE "$stop_words"; then
if [ ${#word} -ge 3 ]; then
meaningful_words+=("$word")
elif echo "$description" | grep -q "\b${word^^}\b"; then
# Keep short words if they appear as uppercase in original (likely acronyms)
meaningful_words+=("$word")
fi
fi
done
# If we have meaningful words, use first 3-4 of them
if [ ${#meaningful_words[@]} -gt 0 ]; then
local max_words=3
if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi
local result=""
local count=0
for word in "${meaningful_words[@]}"; do
if [ $count -ge $max_words ]; then break; fi
if [ -n "$result" ]; then result="$result-"; fi
result="$result$word"
count=$((count + 1))
done
echo "$result"
else
# Fallback to original logic if no meaningful words found
local cleaned=$(clean_branch_name "$description")
echo "$cleaned" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
fi
}
# Generate branch name
if [ -n "$SHORT_NAME" ]; then
# Use provided short name, just clean it up
BRANCH_SUFFIX=$(clean_branch_name "$SHORT_NAME")
else
# Generate from description with smart filtering
BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
fi
# Determine branch number
if [ -z "$BRANCH_NUMBER" ]; then
if [ "$HAS_GIT" = true ]; then
# Check existing branches on remotes
BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR")
else
# Fall back to local directory check
HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
BRANCH_NUMBER=$((HIGHEST + 1))
fi
fi
# Force base-10 interpretation to prevent octal conversion (e.g., 010 → 8 in octal, but should be 10 in decimal)
FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))")
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
# GitHub enforces a 244-byte limit on branch names
# Validate and truncate if necessary
MAX_BRANCH_LENGTH=244
if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
# Calculate how much we need to trim from suffix
# Account for: feature number (3) + hyphen (1) = 4 chars
MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - 4))
# Truncate suffix at word boundary if possible
TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
# Remove trailing hyphen if truncation created one
TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')
ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"
>&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
>&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
>&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
fi
if [ "$HAS_GIT" = true ]; then
git checkout -b "$BRANCH_NAME"
else
>&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
fi
FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
mkdir -p "$FEATURE_DIR"
TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
SPEC_FILE="$FEATURE_DIR/spec.md"
if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi
# Set the SPECIFY_FEATURE environment variable for the current session
export SPECIFY_FEATURE="$BRANCH_NAME"
if $JSON_MODE; then
printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
else
echo "BRANCH_NAME: $BRANCH_NAME"
echo "SPEC_FILE: $SPEC_FILE"
echo "FEATURE_NUM: $FEATURE_NUM"
echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
fi
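
An illustrative run (number and names are hypothetical; the output keys match the printf above). With --short-name the suffix is only cleaned up; without it, the stop-word filter would turn "Add user authentication system" into user-authentication-system:

./create-new-feature.sh --json --number 5 --short-name "User Auth" "Add user authentication system"
# In a git repo this creates and checks out branch 005-user-auth, copies
# spec-template.md to specs/005-user-auth/spec.md, and prints:
{"BRANCH_NAME":"005-user-auth","SPEC_FILE":"/home/dev/app/specs/005-user-auth/spec.md","FEATURE_NUM":"005"}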

.specify/scripts/bash/setup-plan.sh Executable file

@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -e
# Parse command line arguments
JSON_MODE=false
ARGS=()
for arg in "$@"; do
case "$arg" in
--json)
JSON_MODE=true
;;
--help|-h)
echo "Usage: $0 [--json]"
echo " --json Output results in JSON format"
echo " --help Show this help message"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
done
# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get all paths and variables from common functions
eval $(get_feature_paths)
# Check if we're on a proper feature branch (only for git repos)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# Ensure the feature directory exists
mkdir -p "$FEATURE_DIR"
# Copy plan template if it exists
TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md"
if [[ -f "$TEMPLATE" ]]; then
cp "$TEMPLATE" "$IMPL_PLAN"
echo "Copied plan template to $IMPL_PLAN"
else
echo "Warning: Plan template not found at $TEMPLATE"
# Create a basic plan file if template doesn't exist
touch "$IMPL_PLAN"
fi
# Output results
if $JSON_MODE; then
printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
"$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT"
else
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "SPECS_DIR: $FEATURE_DIR"
echo "BRANCH: $CURRENT_BRANCH"
echo "HAS_GIT: $HAS_GIT"
fi
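
A sketch of a typical invocation (paths hypothetical; note that the SPECS_DIR key is populated from FEATURE_DIR in the printf above):

./setup-plan.sh --json
# Copies plan-template.md into the feature directory, then prints:
{"FEATURE_SPEC":"/home/dev/app/specs/002-plugin-settings/spec.md","IMPL_PLAN":"/home/dev/app/specs/002-plugin-settings/plan.md","SPECS_DIR":"/home/dev/app/specs/002-plugin-settings","BRANCH":"002-plugin-settings","HAS_GIT":"true"}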

.specify/scripts/bash/update-agent-context.sh Executable file

@@ -0,0 +1,799 @@
#!/usr/bin/env bash
# Update agent context files with information from plan.md
#
# This script maintains AI agent context files by parsing feature specifications
# and updating agent-specific configuration files with project information.
#
# MAIN FUNCTIONS:
# 1. Environment Validation
# - Verifies git repository structure and branch information
# - Checks for required plan.md files and templates
# - Validates file permissions and accessibility
#
# 2. Plan Data Extraction
# - Parses plan.md files to extract project metadata
# - Identifies language/version, frameworks, databases, and project types
# - Handles missing or incomplete specification data gracefully
#
# 3. Agent File Management
# - Creates new agent context files from templates when needed
# - Updates existing agent files with new project information
# - Preserves manual additions and custom configurations
# - Supports multiple AI agent formats and directory structures
#
# 4. Content Generation
# - Generates language-specific build/test commands
# - Creates appropriate project directory structures
# - Updates technology stacks and recent changes sections
# - Maintains consistent formatting and timestamps
#
# 5. Multi-Agent Support
# - Handles agent-specific file paths and naming conventions
# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, or Amazon Q Developer CLI
# - Can update single agents or all existing agent files
# - Creates default Claude file if no agent files exist
#
# Usage: ./update-agent-context.sh [agent_type]
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|bob|qoder
# Leave empty to update all existing agent files
set -e
# Enable strict error handling
set -u
set -o pipefail
#==============================================================================
# Configuration and Global Variables
#==============================================================================
# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get all paths and variables from common functions
eval $(get_feature_paths)
NEW_PLAN="$IMPL_PLAN" # Alias for compatibility with existing code
AGENT_TYPE="${1:-}"
# Agent-specific file paths
CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
GEMINI_FILE="$REPO_ROOT/GEMINI.md"
COPILOT_FILE="$REPO_ROOT/.github/agents/copilot-instructions.md"
CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
QWEN_FILE="$REPO_ROOT/QWEN.md"
AGENTS_FILE="$REPO_ROOT/AGENTS.md"
WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"
CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md"
QODER_FILE="$REPO_ROOT/QODER.md"
AMP_FILE="$REPO_ROOT/AGENTS.md"
SHAI_FILE="$REPO_ROOT/SHAI.md"
Q_FILE="$REPO_ROOT/AGENTS.md"
BOB_FILE="$REPO_ROOT/AGENTS.md"
# Template file
TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"
# Global variables for parsed plan data
NEW_LANG=""
NEW_FRAMEWORK=""
NEW_DB=""
NEW_PROJECT_TYPE=""
#==============================================================================
# Utility Functions
#==============================================================================
log_info() {
echo "INFO: $1"
}
log_success() {
echo "$1"
}
log_error() {
echo "ERROR: $1" >&2
}
log_warning() {
echo "WARNING: $1" >&2
}
# Cleanup function for temporary files
cleanup() {
local exit_code=$?
rm -f /tmp/agent_update_*_$$
rm -f /tmp/manual_additions_$$
exit $exit_code
}
# Set up cleanup trap
trap cleanup EXIT INT TERM
#==============================================================================
# Validation Functions
#==============================================================================
validate_environment() {
# Check if we have a current branch/feature (git or non-git)
if [[ -z "$CURRENT_BRANCH" ]]; then
log_error "Unable to determine current feature"
if [[ "$HAS_GIT" == "true" ]]; then
log_info "Make sure you're on a feature branch"
else
log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
fi
exit 1
fi
# Check if plan.md exists
if [[ ! -f "$NEW_PLAN" ]]; then
log_error "No plan.md found at $NEW_PLAN"
log_info "Make sure you're working on a feature with a corresponding spec directory"
if [[ "$HAS_GIT" != "true" ]]; then
log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first"
fi
exit 1
fi
# Check if template exists (needed for new files)
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_warning "Template file not found at $TEMPLATE_FILE"
log_warning "Creating new agent files will fail"
fi
}
#==============================================================================
# Plan Parsing Functions
#==============================================================================
extract_plan_field() {
local field_pattern="$1"
local plan_file="$2"
grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
head -1 | \
sed "s|^\*\*${field_pattern}\*\*: ||" | \
sed 's/^[ \t]*//;s/[ \t]*$//' | \
grep -v "NEEDS CLARIFICATION" | \
grep -v "^N/A$" || echo ""
}
parse_plan_data() {
local plan_file="$1"
if [[ ! -f "$plan_file" ]]; then
log_error "Plan file not found: $plan_file"
return 1
fi
if [[ ! -r "$plan_file" ]]; then
log_error "Plan file is not readable: $plan_file"
return 1
fi
log_info "Parsing plan data from $plan_file"
NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
NEW_DB=$(extract_plan_field "Storage" "$plan_file")
NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")
# Log what we found
if [[ -n "$NEW_LANG" ]]; then
log_info "Found language: $NEW_LANG"
else
log_warning "No language information found in plan"
fi
if [[ -n "$NEW_FRAMEWORK" ]]; then
log_info "Found framework: $NEW_FRAMEWORK"
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
log_info "Found database: $NEW_DB"
fi
if [[ -n "$NEW_PROJECT_TYPE" ]]; then
log_info "Found project type: $NEW_PROJECT_TYPE"
fi
}
format_technology_stack() {
local lang="$1"
local framework="$2"
local parts=()
# Add non-empty parts
[[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
[[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")
# Join with proper formatting
if [[ ${#parts[@]} -eq 0 ]]; then
echo ""
elif [[ ${#parts[@]} -eq 1 ]]; then
echo "${parts[0]}"
else
# Join multiple parts with " + "
local result="${parts[0]}"
for ((i=1; i<${#parts[@]}; i++)); do
result="$result + ${parts[i]}"
done
echo "$result"
fi
}
#==============================================================================
# Template and Content Generation Functions
#==============================================================================
get_project_structure() {
local project_type="$1"
if [[ "$project_type" == *"web"* ]]; then
echo "backend/\\nfrontend/\\ntests/"
else
echo "src/\\ntests/"
fi
}
get_commands_for_language() {
local lang="$1"
case "$lang" in
*"Python"*)
echo "cd src && pytest && ruff check ."
;;
*"Rust"*)
echo "cargo test && cargo clippy"
;;
*"JavaScript"*|*"TypeScript"*)
echo "npm test \\&\\& npm run lint"
;;
*)
echo "# Add commands for $lang"
;;
esac
}
get_language_conventions() {
local lang="$1"
echo "$lang: Follow standard conventions"
}
create_new_agent_file() {
local target_file="$1"
local temp_file="$2"
local project_name="$3"
local current_date="$4"
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_error "Template not found at $TEMPLATE_FILE"
return 1
fi
if [[ ! -r "$TEMPLATE_FILE" ]]; then
log_error "Template file is not readable: $TEMPLATE_FILE"
return 1
fi
log_info "Creating new agent context file from template..."
if ! cp "$TEMPLATE_FILE" "$temp_file"; then
log_error "Failed to copy template file"
return 1
fi
# Replace template placeholders
local project_structure
project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")
local commands
commands=$(get_commands_for_language "$NEW_LANG")
local language_conventions
language_conventions=$(get_language_conventions "$NEW_LANG")
# Perform substitutions with error checking using safer approach
# Escape special characters for sed by using a different delimiter or escaping
local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g')
local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g')
local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g')
# Build technology stack and recent change strings conditionally
local tech_stack
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)"
elif [[ -n "$escaped_lang" ]]; then
tech_stack="- $escaped_lang ($escaped_branch)"
elif [[ -n "$escaped_framework" ]]; then
tech_stack="- $escaped_framework ($escaped_branch)"
else
tech_stack="- ($escaped_branch)"
fi
local recent_change
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework"
elif [[ -n "$escaped_lang" ]]; then
recent_change="- $escaped_branch: Added $escaped_lang"
elif [[ -n "$escaped_framework" ]]; then
recent_change="- $escaped_branch: Added $escaped_framework"
else
recent_change="- $escaped_branch: Added"
fi
local substitutions=(
"s|\[PROJECT NAME\]|$project_name|"
"s|\[DATE\]|$current_date|"
"s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
"s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
"s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
"s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
"s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
)
for substitution in "${substitutions[@]}"; do
if ! sed -i.bak -e "$substitution" "$temp_file"; then
log_error "Failed to perform substitution: $substitution"
rm -f "$temp_file" "$temp_file.bak"
return 1
fi
done
# Convert \n sequences to actual newlines
newline=$(printf '\n')
sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file"
# Clean up backup files
rm -f "$temp_file.bak" "$temp_file.bak2"
return 0
}
update_existing_agent_file() {
local target_file="$1"
local current_date="$2"
log_info "Updating existing agent context file..."
# Use a single temporary file for atomic update
local temp_file
temp_file=$(mktemp) || {
log_error "Failed to create temporary file"
return 1
}
# Process the file in one pass
local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
local new_tech_entries=()
local new_change_entry=""
# Prepare new technology entries
if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then
new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -q "$NEW_DB" "$target_file"; then
new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
fi
# Prepare new change entry
if [[ -n "$tech_stack" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
fi
# Check if sections exist in the file
local has_active_technologies=0
local has_recent_changes=0
if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then
has_active_technologies=1
fi
if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then
has_recent_changes=1
fi
# Process file line by line
local in_tech_section=false
local in_changes_section=false
local tech_entries_added=false
local changes_entries_added=false
local existing_changes_count=0
local file_ended=false
while IFS= read -r line || [[ -n "$line" ]]; do
# Handle Active Technologies section
if [[ "$line" == "## Active Technologies" ]]; then
echo "$line" >> "$temp_file"
in_tech_section=true
continue
elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
# Add new tech entries before closing the section
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
echo "$line" >> "$temp_file"
in_tech_section=false
continue
elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then
# Add new tech entries before empty line in tech section
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
echo "$line" >> "$temp_file"
continue
fi
# Handle Recent Changes section
if [[ "$line" == "## Recent Changes" ]]; then
echo "$line" >> "$temp_file"
# Add new change entry right after the heading
if [[ -n "$new_change_entry" ]]; then
echo "$new_change_entry" >> "$temp_file"
fi
in_changes_section=true
changes_entries_added=true
continue
elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
echo "$line" >> "$temp_file"
in_changes_section=false
continue
elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then
# Keep only first 2 existing changes
if [[ $existing_changes_count -lt 2 ]]; then
echo "$line" >> "$temp_file"
((existing_changes_count++))
fi
continue
fi
# Update timestamp
if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
else
echo "$line" >> "$temp_file"
fi
done < "$target_file"
# Post-loop check: if we're still in the Active Technologies section and haven't added new entries
if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
# If sections don't exist, add them at the end of the file
if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
echo "" >> "$temp_file"
echo "## Active Technologies" >> "$temp_file"
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then
echo "" >> "$temp_file"
echo "## Recent Changes" >> "$temp_file"
echo "$new_change_entry" >> "$temp_file"
changes_entries_added=true
fi
# Move temp file to target atomically
if ! mv "$temp_file" "$target_file"; then
log_error "Failed to update target file"
rm -f "$temp_file"
return 1
fi
return 0
}
#==============================================================================
# Main Agent File Update Function
#==============================================================================
update_agent_file() {
local target_file="$1"
local agent_name="$2"
if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
log_error "update_agent_file requires target_file and agent_name parameters"
return 1
fi
log_info "Updating $agent_name context file: $target_file"
local project_name
project_name=$(basename "$REPO_ROOT")
local current_date
current_date=$(date +%Y-%m-%d)
# Create directory if it doesn't exist
local target_dir
target_dir=$(dirname "$target_file")
if [[ ! -d "$target_dir" ]]; then
if ! mkdir -p "$target_dir"; then
log_error "Failed to create directory: $target_dir"
return 1
fi
fi
if [[ ! -f "$target_file" ]]; then
# Create new file from template
local temp_file
temp_file=$(mktemp) || {
log_error "Failed to create temporary file"
return 1
}
if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
if mv "$temp_file" "$target_file"; then
log_success "Created new $agent_name context file"
else
log_error "Failed to move temporary file to $target_file"
rm -f "$temp_file"
return 1
fi
else
log_error "Failed to create new agent file"
rm -f "$temp_file"
return 1
fi
else
# Update existing file
if [[ ! -r "$target_file" ]]; then
log_error "Cannot read existing file: $target_file"
return 1
fi
if [[ ! -w "$target_file" ]]; then
log_error "Cannot write to existing file: $target_file"
return 1
fi
if update_existing_agent_file "$target_file" "$current_date"; then
log_success "Updated existing $agent_name context file"
else
log_error "Failed to update existing agent file"
return 1
fi
fi
return 0
}
#==============================================================================
# Agent Selection and Processing
#==============================================================================
update_specific_agent() {
local agent_type="$1"
case "$agent_type" in
claude)
update_agent_file "$CLAUDE_FILE" "Claude Code"
;;
gemini)
update_agent_file "$GEMINI_FILE" "Gemini CLI"
;;
copilot)
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
;;
cursor-agent)
update_agent_file "$CURSOR_FILE" "Cursor IDE"
;;
qwen)
update_agent_file "$QWEN_FILE" "Qwen Code"
;;
opencode)
update_agent_file "$AGENTS_FILE" "opencode"
;;
codex)
update_agent_file "$AGENTS_FILE" "Codex CLI"
;;
windsurf)
update_agent_file "$WINDSURF_FILE" "Windsurf"
;;
kilocode)
update_agent_file "$KILOCODE_FILE" "Kilo Code"
;;
auggie)
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
;;
roo)
update_agent_file "$ROO_FILE" "Roo Code"
;;
codebuddy)
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
;;
qoder)
update_agent_file "$QODER_FILE" "Qoder CLI"
;;
amp)
update_agent_file "$AMP_FILE" "Amp"
;;
shai)
update_agent_file "$SHAI_FILE" "SHAI"
;;
q)
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
;;
bob)
update_agent_file "$BOB_FILE" "IBM Bob"
;;
*)
log_error "Unknown agent type '$agent_type'"
log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|amp|shai|q|bob|qoder"
exit 1
;;
esac
}
update_all_existing_agents() {
local found_agent=false
# Check each possible agent file and update if it exists
if [[ -f "$CLAUDE_FILE" ]]; then
update_agent_file "$CLAUDE_FILE" "Claude Code"
found_agent=true
fi
if [[ -f "$GEMINI_FILE" ]]; then
update_agent_file "$GEMINI_FILE" "Gemini CLI"
found_agent=true
fi
if [[ -f "$COPILOT_FILE" ]]; then
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
found_agent=true
fi
if [[ -f "$CURSOR_FILE" ]]; then
update_agent_file "$CURSOR_FILE" "Cursor IDE"
found_agent=true
fi
if [[ -f "$QWEN_FILE" ]]; then
update_agent_file "$QWEN_FILE" "Qwen Code"
found_agent=true
fi
if [[ -f "$AGENTS_FILE" ]]; then
update_agent_file "$AGENTS_FILE" "Codex/opencode"
found_agent=true
fi
if [[ -f "$WINDSURF_FILE" ]]; then
update_agent_file "$WINDSURF_FILE" "Windsurf"
found_agent=true
fi
if [[ -f "$KILOCODE_FILE" ]]; then
update_agent_file "$KILOCODE_FILE" "Kilo Code"
found_agent=true
fi
if [[ -f "$AUGGIE_FILE" ]]; then
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
found_agent=true
fi
if [[ -f "$ROO_FILE" ]]; then
update_agent_file "$ROO_FILE" "Roo Code"
found_agent=true
fi
if [[ -f "$CODEBUDDY_FILE" ]]; then
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
found_agent=true
fi
if [[ -f "$SHAI_FILE" ]]; then
update_agent_file "$SHAI_FILE" "SHAI"
found_agent=true
fi
if [[ -f "$QODER_FILE" ]]; then
update_agent_file "$QODER_FILE" "Qoder CLI"
found_agent=true
fi
if [[ -f "$Q_FILE" ]]; then
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
found_agent=true
fi
if [[ -f "$BOB_FILE" ]]; then
update_agent_file "$BOB_FILE" "IBM Bob"
found_agent=true
fi
# If no agent files exist, create a default Claude file
if [[ "$found_agent" == false ]]; then
log_info "No existing agent files found, creating default Claude file..."
update_agent_file "$CLAUDE_FILE" "Claude Code"
fi
}
print_summary() {
echo
log_info "Summary of changes:"
if [[ -n "$NEW_LANG" ]]; then
echo " - Added language: $NEW_LANG"
fi
if [[ -n "$NEW_FRAMEWORK" ]]; then
echo " - Added framework: $NEW_FRAMEWORK"
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
echo " - Added database: $NEW_DB"
fi
echo
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|codebuddy|shai|q|bob|qoder]"
}
#==============================================================================
# Main Execution
#==============================================================================
main() {
# Validate environment before proceeding
validate_environment
log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
# Parse the plan file to extract project information
if ! parse_plan_data "$NEW_PLAN"; then
log_error "Failed to parse plan data"
exit 1
fi
# Process based on agent type argument
local success=true
if [[ -z "$AGENT_TYPE" ]]; then
# No specific agent provided - update all existing agent files
log_info "No agent specified, updating all existing agent files..."
if ! update_all_existing_agents; then
success=false
fi
else
# Specific agent provided - update only that agent
log_info "Updating specific agent: $AGENT_TYPE"
if ! update_specific_agent "$AGENT_TYPE"; then
success=false
fi
fi
# Print summary
print_summary
if [[ "$success" == true ]]; then
log_success "Agent context update completed successfully"
exit 0
else
log_error "Agent context update completed with errors"
exit 1
fi
}
# Execute main function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi
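
An abridged, illustrative session for the agent used in this repository (feature name, plan values, and paths are hypothetical; extract_plan_field would pick the language up from a plan.md line such as `**Language/Version**: Python 3.11`):

./update-agent-context.sh kilocode
# INFO: === Updating agent context files for feature 002-plugin-settings ===
# INFO: Parsing plan data from /home/dev/app/specs/002-plugin-settings/plan.md
# INFO: Found language: Python 3.11
# INFO: Updating specific agent: kilocode
# INFO: Updating Kilo Code context file: /home/dev/app/.kilocode/rules/specify-rules.md
# Updated existing Kilo Code context file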

.specify/templates/plan-template.md

@@ -31,10 +31,7 @@
 *GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
-- [ ] **Causal Validity**: Do all planned modules/components have defined Contracts (inputs/outputs/props/events) before implementation logic?
-- [ ] **Immutability**: Are architectural layers and constraints defined in Module/Component Headers?
-- [ ] **Format Compliance**: Does the plan ensure all code will be wrapped in `[DEF]` anchors?
-- [ ] **Belief State**: Is logging planned to follow the `Entry` -> `Validation` -> `Action` -> `Coherence` state transition model?
+[Gates determined based on constitution file]
 ## Project Structure

.specify/templates/spec-template.md

@@ -95,12 +95,6 @@
 - **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
 - **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]
-### System Invariants (Constitution Check)
-*Define immutable constraints that will become `@INVARIANT` or `@CONSTRAINT` tags in Module Headers.*
-- **INV-001**: [e.g., "No direct database access from UI layer"]
-- **INV-002**: [e.g., "All financial calculations must use Decimal type"]
 ### Key Entities *(include if feature involves data)*
 - **[Entity 1]**: [What it represents, key attributes without implementation]

.specify/templates/tasks-template.md

@@ -88,14 +88,12 @@ Examples of foundational tasks (adjust based on your project):
 ### Implementation for User Story 1
-- [ ] T012 [P] [US1] Define [Entity1] Module Header & Contracts in src/models/[entity1].py
-- [ ] T013 [P] [US1] Implement [Entity1] logic satisfying contracts
-- [ ] T014 [P] [US1] Define [Service] Module Header & Contracts in src/services/[service].py
-- [ ] T015 [US1] Implement [Service] logic satisfying contracts (depends on T012)
-- [ ] T016 [US1] Define [endpoint] Contracts & Logic in src/[location]/[file].py
-- [ ] T017 [US1] Define [Component] Header (Props/Events) in frontend/src/components/[Component].svelte
-- [ ] T018 [US1] Implement [Component] logic satisfying contracts
-- [ ] T019 [US1] Verify `[DEF]` syntax and Belief State logging compliance
+- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
+- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
+- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
+- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
+- [ ] T016 [US1] Add validation and error handling
+- [ ] T017 [US1] Add logging for user story 1 operations
 **Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
@@ -109,16 +107,15 @@ Examples of foundational tasks (adjust based on your project):
 ### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️
-- [ ] T020 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
-- [ ] T021 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
+- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
+- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
 ### Implementation for User Story 2
-- [ ] T022 [P] [US2] Define [Entity] Module Header & Contracts in src/models/[entity].py
-- [ ] T023 [P] [US2] Implement [Entity] logic satisfying contracts
-- [ ] T024 [US2] Define [Service] Module Header & Contracts in src/services/[service].py
-- [ ] T025 [US2] Implement [Service] logic satisfying contracts
-- [ ] T026 [US2] Define [Component] Header & Logic in frontend/src/components/[Component].svelte
+- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
+- [ ] T021 [US2] Implement [Service] in src/services/[service].py
+- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
+- [ ] T023 [US2] Integrate with User Story 1 components (if needed)
 **Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
@@ -132,15 +129,14 @@ Examples of foundational tasks (adjust based on your project):
 ### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️
-- [ ] T027 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
-- [ ] T028 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
+- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
+- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
 ### Implementation for User Story 3
-- [ ] T029 [P] [US3] Create [Entity] Module Header & Contracts in src/models/[entity].py
-- [ ] T030 [US3] Define [Service] Module Header & Contracts in src/services/[service].py
-- [ ] T031 [US3] Implement logic for [Entity] and [Service] satisfying contracts
-- [ ] T032 [US3] Define [Component] Header & Logic in frontend/src/components/[Component].svelte
+- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
+- [ ] T027 [US3] Implement [Service] in src/services/[service].py
+- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py
 **Checkpoint**: All user stories should now be independently functional
@@ -183,10 +179,9 @@ Examples of foundational tasks (adjust based on your project):
 ### Within Each User Story
 - Tests (if included) MUST be written and FAIL before implementation
-- Module/Component Headers & Contracts BEFORE Implementation (Causal Validity)
 - Models before services
 - Services before endpoints
-- Components before Pages
+- Core implementation before integration
 - Story complete before moving to next priority
 ### Parallel Opportunities
@@ -207,9 +202,9 @@ Examples of foundational tasks (adjust based on your project):
 Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
 Task: "Integration test for [user journey] in tests/integration/test_[name].py"
-# Launch all contract definitions for User Story 1 together:
-Task: "Define [Entity1] Module Header & Contracts in src/models/[entity1].py"
-Task: "Define [Entity2] Module Header & Contracts in src/models/[entity2].py"
+# Launch all models for User Story 1 together:
+Task: "Create [Entity1] model in src/models/[entity1].py"
+Task: "Create [Entity2] model in src/models/[entity2].py"
 ```
 ---

README.md Normal file → Executable file

backend/requirements.txt Normal file → Executable file

@@ -7,3 +7,5 @@ starlette
 jsonschema
 requests
 keyring
+httpx
+PyYAML

backend/src/api/auth.py Normal file → Executable file

backend/src/api/routes/__init__.py New file

@@ -0,0 +1 @@
from . import plugins, tasks, settings

backend/src/api/routes/plugins.py Normal file → Executable file

backend/src/api/routes/settings.py New file

@@ -0,0 +1,185 @@
# [DEF:SettingsRouter:Module]
#
# @SEMANTICS: settings, api, router, fastapi
# @PURPOSE: Provides API endpoints for managing application settings and Superset environments.
# @LAYER: UI (API)
# @RELATION: DEPENDS_ON -> ConfigManager
# @RELATION: DEPENDS_ON -> ConfigModels
#
# @INVARIANT: All settings changes must be persisted via ConfigManager.
# @PUBLIC_API: router
# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException
from typing import List
from ...core.config_models import AppConfig, Environment, GlobalSettings
from ...dependencies import get_config_manager
from ...core.config_manager import ConfigManager
from ...core.logger import logger
from superset_tool.client import SupersetClient
from superset_tool.models import SupersetConfig
import os
# [/SECTION]
router = APIRouter()
# [DEF:get_settings:Function]
# @PURPOSE: Retrieves all application settings.
# @RETURN: AppConfig - The current configuration.
@router.get("/", response_model=AppConfig)
async def get_settings(config_manager: ConfigManager = Depends(get_config_manager)):
logger.info("[get_settings][Entry] Fetching all settings")
config = config_manager.get_config().copy(deep=True)
# Mask passwords
for env in config.environments:
if env.password:
env.password = "********"
return config
# [/DEF:get_settings]
# [DEF:update_global_settings:Function]
# @PURPOSE: Updates global application settings.
# @PARAM: settings (GlobalSettings) - The new global settings.
# @RETURN: GlobalSettings - The updated settings.
@router.patch("/global", response_model=GlobalSettings)
async def update_global_settings(
settings: GlobalSettings,
config_manager: ConfigManager = Depends(get_config_manager)
):
logger.info("[update_global_settings][Entry] Updating global settings")
config_manager.update_global_settings(settings)
return settings
# [/DEF:update_global_settings]
# [DEF:get_environments:Function]
# @PURPOSE: Lists all configured Superset environments.
# @RETURN: List[Environment] - List of environments.
@router.get("/environments", response_model=List[Environment])
async def get_environments(config_manager: ConfigManager = Depends(get_config_manager)):
logger.info("[get_environments][Entry] Fetching environments")
return config_manager.get_environments()
# [/DEF:get_environments]
# [DEF:add_environment:Function]
# @PURPOSE: Adds a new Superset environment.
# @PARAM: env (Environment) - The environment to add.
# @RETURN: Environment - The added environment.
@router.post("/environments", response_model=Environment)
async def add_environment(
env: Environment,
config_manager: ConfigManager = Depends(get_config_manager)
):
logger.info(f"[add_environment][Entry] Adding environment {env.id}")
config_manager.add_environment(env)
return env
# [/DEF:add_environment]
# [DEF:update_environment:Function]
# @PURPOSE: Updates an existing Superset environment.
# @PARAM: id (str) - The ID of the environment to update.
# @PARAM: env (Environment) - The updated environment data.
# @RETURN: Environment - The updated environment.
@router.put("/environments/{id}", response_model=Environment)
async def update_environment(
id: str,
env: Environment,
config_manager: ConfigManager = Depends(get_config_manager)
):
logger.info(f"[update_environment][Entry] Updating environment {id}")
if config_manager.update_environment(id, env):
return env
raise HTTPException(status_code=404, detail=f"Environment {id} not found")
# [/DEF:update_environment]
# [DEF:delete_environment:Function]
# @PURPOSE: Deletes a Superset environment.
# @PARAM: id (str) - The ID of the environment to delete.
@router.delete("/environments/{id}")
async def delete_environment(
id: str,
config_manager: ConfigManager = Depends(get_config_manager)
):
logger.info(f"[delete_environment][Entry] Deleting environment {id}")
config_manager.delete_environment(id)
return {"message": f"Environment {id} deleted"}
# [/DEF:delete_environment]
# [DEF:test_environment_connection:Function]
# @PURPOSE: Tests the connection to a Superset environment.
# @PARAM: id (str) - The ID of the environment to test.
# @RETURN: dict - Success message or error.
@router.post("/environments/{id}/test")
async def test_environment_connection(
id: str,
config_manager: ConfigManager = Depends(get_config_manager)
):
logger.info(f"[test_environment_connection][Entry] Testing environment {id}")
# Find environment
env = next((e for e in config_manager.get_environments() if e.id == id), None)
if not env:
raise HTTPException(status_code=404, detail=f"Environment {id} not found")
try:
# Create SupersetConfig
# Note: SupersetConfig expects 'auth' dict with specific keys
superset_config = SupersetConfig(
env=env.name,
base_url=env.url,
auth={
"provider": "db", # Defaulting to db for now
"username": env.username,
"password": env.password,
"refresh": "true"
}
)
# Initialize client (this will trigger authentication)
client = SupersetClient(config=superset_config)
# Try a simple request to verify
client.get_dashboards(query={"page_size": 1})
logger.info(f"[test_environment_connection][Coherence:OK] Connection successful for {id}")
return {"status": "success", "message": "Connection successful"}
except Exception as e:
logger.error(f"[test_environment_connection][Coherence:Failed] Connection failed for {id}: {e}")
return {"status": "error", "message": str(e)}
# [/DEF:test_environment_connection]
# [DEF:validate_backup_path:Function]
# @PURPOSE: Validates if a backup path exists and is writable.
# @PARAM: path_data (dict) - Request body containing the "path" key to validate.
# @RETURN: dict - Validation result.
@router.post("/validate-path")
async def validate_backup_path(path_data: dict):
path = path_data.get("path")
if not path:
raise HTTPException(status_code=400, detail="Path is required")
logger.info(f"[validate_backup_path][Entry] Validating path: {path}")
p = os.path.abspath(path)
exists = os.path.exists(p)
writable = os.access(p, os.W_OK) if exists else os.access(os.path.dirname(p), os.W_OK)
if not exists:
# Try to create it
try:
os.makedirs(p, exist_ok=True)
exists = True
writable = os.access(p, os.W_OK)
logger.info(f"[validate_backup_path][Action] Created directory: {p}")
except Exception as e:
logger.error(f"[validate_backup_path][Coherence:Failed] Failed to create directory: {e}")
return {"status": "error", "message": f"Path does not exist and could not be created: {e}"}
if not writable:
logger.warning(f"[validate_backup_path][Coherence:Failed] Path not writable: {p}")
return {"status": "error", "message": "Path is not writable"}
logger.info(f"[validate_backup_path][Coherence:OK] Path valid: {p}")
return {"status": "success", "message": "Path is valid and writable"}
# [/DEF:validate_backup_path]
# [/DEF:SettingsRouter]
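Not part of the commit: a minimal sketch of how this router could be exercised with FastAPI's `TestClient`, assuming the backend packages import as `backend.src.*` from the repo root. It illustrates the password-masking invariant of `get_settings`.

```python
# Hedged sketch; the import path and repo-root working directory are assumptions.
from fastapi.testclient import TestClient
from backend.src.app import app  # app.py includes the /settings router

client = TestClient(app)

env = {
    "id": "dev", "name": "dev", "url": "http://localhost:8088",
    "username": "admin", "password": "secret", "is_default": True,
}
assert client.post("/settings/environments", json=env).status_code == 200

# GET /settings/ must never leak stored passwords.
config = client.get("/settings/").json()
assert config["environments"][0]["password"] == "********"
```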

backend/src/api/routes/tasks.py Normal file → Executable file

backend/src/app.py Normal file → Executable file

@@ -17,7 +17,7 @@ import asyncio
 from .dependencies import get_task_manager
 from .core.logger import logger
-from .api.routes import plugins, tasks
+from .api.routes import plugins, tasks, settings
 # [DEF:App:Global]
 # @SEMANTICS: app, fastapi, instance
@@ -41,6 +41,7 @@ app.add_middleware(
 # Include API routes
 app.include_router(plugins.router, prefix="/plugins", tags=["Plugins"])
 app.include_router(tasks.router, prefix="/tasks", tags=["Tasks"])
+app.include_router(settings.router, prefix="/settings", tags=["Settings"])
 # [DEF:WebSocketEndpoint:Endpoint]
 # @SEMANTICS: websocket, logs, streaming, real-time

backend/src/core/config_manager.py New file

@@ -0,0 +1,205 @@
# [DEF:ConfigManagerModule:Module]
#
# @SEMANTICS: config, manager, persistence, json
# @PURPOSE: Manages application configuration, including loading/saving to JSON and CRUD for environments.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> ConfigModels
# @RELATION: CALLS -> logger
# @RELATION: WRITES_TO -> config.json
#
# @INVARIANT: Configuration must always be valid according to AppConfig model.
# @PUBLIC_API: ConfigManager
# [SECTION: IMPORTS]
import json
import os
from pathlib import Path
from typing import Optional, List
from .config_models import AppConfig, Environment, GlobalSettings
from .logger import logger
# [/SECTION]
# [DEF:ConfigManager:Class]
# @PURPOSE: A class to handle application configuration persistence and management.
# @RELATION: WRITES_TO -> config.json
class ConfigManager:
# [DEF:__init__:Function]
# @PURPOSE: Initializes the ConfigManager.
# @PRE: isinstance(config_path, str) and len(config_path) > 0
# @POST: self.config is an instance of AppConfig
# @PARAM: config_path (str) - Path to the configuration file.
def __init__(self, config_path: str = "config.json"):
# 1. Runtime check of @PRE
assert isinstance(config_path, str) and config_path, "config_path must be a non-empty string"
logger.info(f"[ConfigManager][Entry] Initializing with {config_path}")
# 2. Logic implementation
self.config_path = Path(config_path)
self.config: AppConfig = self._load_config()
# 3. Runtime check of @POST
assert isinstance(self.config, AppConfig), "self.config must be an instance of AppConfig"
logger.info(f"[ConfigManager][Exit] Initialized")
# [/DEF:__init__]
# [DEF:_load_config:Function]
# @PURPOSE: Loads the configuration from disk or creates a default one.
# @POST: isinstance(return, AppConfig)
# @RETURN: AppConfig - The loaded or default configuration.
def _load_config(self) -> AppConfig:
logger.debug(f"[_load_config][Entry] Loading from {self.config_path}")
if not self.config_path.exists():
logger.info(f"[_load_config][Action] Config file not found. Creating default.")
default_config = AppConfig(
environments=[],
settings=GlobalSettings(backup_path="backups")
)
self._save_config_to_disk(default_config)
return default_config
try:
with open(self.config_path, "r") as f:
data = json.load(f)
config = AppConfig(**data)
logger.info(f"[_load_config][Coherence:OK] Configuration loaded")
return config
except Exception as e:
logger.error(f"[_load_config][Coherence:Failed] Error loading config: {e}")
return AppConfig(
environments=[],
settings=GlobalSettings(backup_path="backups")
)
# [/DEF:_load_config]
# [DEF:_save_config_to_disk:Function]
# @PURPOSE: Saves the provided configuration object to disk.
# @PRE: isinstance(config, AppConfig)
# @PARAM: config (AppConfig) - The configuration to save.
def _save_config_to_disk(self, config: AppConfig):
logger.debug(f"[_save_config_to_disk][Entry] Saving to {self.config_path}")
# 1. Runtime check of @PRE
assert isinstance(config, AppConfig), "config must be an instance of AppConfig"
# 2. Logic implementation
try:
with open(self.config_path, "w") as f:
json.dump(config.dict(), f, indent=4)
logger.info(f"[_save_config_to_disk][Action] Configuration saved")
except Exception as e:
logger.error(f"[_save_config_to_disk][Coherence:Failed] Failed to save: {e}")
# [/DEF:_save_config_to_disk]
# [DEF:save:Function]
# @PURPOSE: Saves the current configuration state to disk.
def save(self):
self._save_config_to_disk(self.config)
# [/DEF:save]
# [DEF:get_config:Function]
# @PURPOSE: Returns the current configuration.
# @RETURN: AppConfig - The current configuration.
def get_config(self) -> AppConfig:
return self.config
# [/DEF:get_config]
# [DEF:update_global_settings:Function]
# @PURPOSE: Updates the global settings and persists the change.
# @PRE: isinstance(settings, GlobalSettings)
# @PARAM: settings (GlobalSettings) - The new global settings.
def update_global_settings(self, settings: GlobalSettings):
logger.info(f"[update_global_settings][Entry] Updating settings")
# 1. Runtime check of @PRE
assert isinstance(settings, GlobalSettings), "settings must be an instance of GlobalSettings"
# 2. Logic implementation
self.config.settings = settings
self.save()
logger.info(f"[update_global_settings][Exit] Settings updated")
# [/DEF:update_global_settings]
# [DEF:get_environments:Function]
# @PURPOSE: Returns the list of configured environments.
# @RETURN: List[Environment] - List of environments.
def get_environments(self) -> List[Environment]:
return self.config.environments
# [/DEF:get_environments]
# [DEF:add_environment:Function]
# @PURPOSE: Adds a new environment to the configuration.
# @PRE: isinstance(env, Environment)
# @PARAM: env (Environment) - The environment to add.
def add_environment(self, env: Environment):
logger.info(f"[add_environment][Entry] Adding environment {env.id}")
# 1. Runtime check of @PRE
assert isinstance(env, Environment), "env must be an instance of Environment"
# 2. Logic implementation
# Check for duplicate ID and remove if exists
self.config.environments = [e for e in self.config.environments if e.id != env.id]
self.config.environments.append(env)
self.save()
logger.info(f"[add_environment][Exit] Environment added")
# [/DEF:add_environment]
# [DEF:update_environment:Function]
# @PURPOSE: Updates an existing environment.
# @PRE: isinstance(env_id, str) and len(env_id) > 0 and isinstance(updated_env, Environment)
# @PARAM: env_id (str) - The ID of the environment to update.
# @PARAM: updated_env (Environment) - The updated environment data.
# @RETURN: bool - True if updated, False otherwise.
def update_environment(self, env_id: str, updated_env: Environment) -> bool:
logger.info(f"[update_environment][Entry] Updating {env_id}")
# 1. Runtime check of @PRE
assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"
assert isinstance(updated_env, Environment), "updated_env must be an instance of Environment"
# 2. Logic implementation
for i, env in enumerate(self.config.environments):
if env.id == env_id:
# If password is masked, keep the old one
if updated_env.password == "********":
updated_env.password = env.password
self.config.environments[i] = updated_env
self.save()
logger.info(f"[update_environment][Coherence:OK] Updated {env_id}")
return True
logger.warning(f"[update_environment][Coherence:Failed] Environment {env_id} not found")
return False
# [/DEF:update_environment]
# [DEF:delete_environment:Function]
# @PURPOSE: Deletes an environment by ID.
# @PRE: isinstance(env_id, str) and len(env_id) > 0
# @PARAM: env_id (str) - The ID of the environment to delete.
def delete_environment(self, env_id: str):
logger.info(f"[delete_environment][Entry] Deleting {env_id}")
# 1. Runtime check of @PRE
assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"
# 2. Logic implementation
original_count = len(self.config.environments)
self.config.environments = [e for e in self.config.environments if e.id != env_id]
if len(self.config.environments) < original_count:
self.save()
logger.info(f"[delete_environment][Action] Deleted {env_id}")
else:
logger.warning(f"[delete_environment][Coherence:Failed] Environment {env_id} not found")
# [/DEF:delete_environment]
# [/DEF:ConfigManager]
# [/DEF:ConfigManagerModule]
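A short usage sketch (not in the commit) of the persistence round-trip, assuming it runs from the repo root:

```python
from backend.src.core.config_manager import ConfigManager
from backend.src.core.config_models import Environment, GlobalSettings

cm = ConfigManager(config_path="config.json")  # creates a default file if missing
cm.add_environment(Environment(
    id="dev", name="dev", url="http://localhost:8088",
    username="admin", password="secret", is_default=True,
))
cm.update_global_settings(GlobalSettings(backup_path="backups",
                                         default_environment_id="dev"))

# A fresh instance re-reads the persisted state from config.json.
assert ConfigManager("config.json").get_environments()[0].id == "dev"
```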

backend/src/core/config_models.py New file

@@ -0,0 +1,36 @@
# [DEF:ConfigModels:Module]
# @SEMANTICS: config, models, pydantic
# @PURPOSE: Defines the data models for application configuration using Pydantic.
# @LAYER: Core
# @RELATION: READS_FROM -> config.json
# @RELATION: USED_BY -> ConfigManager
from pydantic import BaseModel, Field
from typing import List, Optional
# [DEF:Environment:DataClass]
# @PURPOSE: Represents a Superset environment configuration.
class Environment(BaseModel):
id: str
name: str
url: str
username: str
password: str # Will be masked in UI
is_default: bool = False
# [/DEF:Environment]
# [DEF:GlobalSettings:DataClass]
# @PURPOSE: Represents global application settings.
class GlobalSettings(BaseModel):
backup_path: str
default_environment_id: Optional[str] = None
# [/DEF:GlobalSettings]
# [DEF:AppConfig:DataClass]
# @PURPOSE: The root configuration model containing all application settings.
class AppConfig(BaseModel):
environments: List[Environment] = []
settings: GlobalSettings
# [/DEF:AppConfig]
# [/DEF:ConfigModels]
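For reference, these models serialize to a config.json of roughly this shape (the values below are illustrative, not from the commit):

```python
from backend.src.core.config_models import AppConfig, Environment, GlobalSettings

config = AppConfig(
    environments=[Environment(id="dev", name="dev", url="http://localhost:8088",
                              username="admin", password="secret")],
    settings=GlobalSettings(backup_path="backups", default_environment_id="dev"),
)
print(config.json(indent=4))
# {
#     "environments": [
#         {"id": "dev", "name": "dev", "url": "http://localhost:8088",
#          "username": "admin", "password": "secret", "is_default": false}
#     ],
#     "settings": {"backup_path": "backups", "default_environment_id": "dev"}
# }
```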

backend/src/core/logger.py Normal file → Executable file

backend/src/core/plugin_base.py Normal file → Executable file

backend/src/core/plugin_loader.py Normal file → Executable file

@@ -46,7 +46,14 @@ class PluginLoader:
         """
         Loads a single Python module and extracts PluginBase subclasses.
         """
-        package_name = f"src.plugins.{module_name}"
+        # Try to determine the correct package prefix based on how the app is running
+        if "backend.src" in __name__:
+            package_prefix = "backend.src.plugins"
+        else:
+            package_prefix = "src.plugins"
+        package_name = f"{package_prefix}.{module_name}"
+        # print(f"DEBUG: Loading plugin {module_name} as {package_name}")
         spec = importlib.util.spec_from_file_location(package_name, file_path)
         if spec is None or spec.loader is None:
             print(f"Could not load module spec for {package_name}")  # Replace with proper logging

backend/src/core/task_manager.py Normal file → Executable file

backend/src/dependencies.py Normal file → Executable file

@@ -7,9 +7,18 @@
 from pathlib import Path
 from .core.plugin_loader import PluginLoader
 from .core.task_manager import TaskManager
+from .core.config_manager import ConfigManager
 # Initialize singletons
 # Use absolute path relative to this file to ensure plugins are found regardless of CWD
+project_root = Path(__file__).parent.parent.parent
+config_path = project_root / "config.json"
+config_manager = ConfigManager(config_path=str(config_path))
+def get_config_manager() -> ConfigManager:
+    """Dependency injector for the ConfigManager."""
+    return config_manager
 plugin_dir = Path(__file__).parent / "plugins"
 plugin_loader = PluginLoader(plugin_dir=str(plugin_dir))
 task_manager = TaskManager(plugin_loader)

backend/src/plugins/backup.py Normal file → Executable file

@@ -23,6 +23,7 @@ from superset_tool.utils.fileio import (
     RetentionPolicy
 )
 from superset_tool.utils.init_clients import setup_clients
+from ..dependencies import get_config_manager
 class BackupPlugin(PluginBase):
     """
@@ -46,20 +47,24 @@ class BackupPlugin(PluginBase):
         return "1.0.0"
     def get_schema(self) -> Dict[str, Any]:
+        config_manager = get_config_manager()
+        envs = [e.name for e in config_manager.get_environments()]
+        default_path = config_manager.get_config().settings.backup_path
         return {
             "type": "object",
             "properties": {
                 "env": {
                     "type": "string",
                     "title": "Environment",
-                    "description": "The Superset environment to back up (e.g., 'dev', 'prod').",
-                    "enum": ["dev", "sbx", "prod", "preprod"],
+                    "description": "The Superset environment to back up.",
+                    "enum": envs if envs else ["dev", "prod"],
                 },
                 "backup_path": {
                     "type": "string",
                     "title": "Backup Path",
                     "description": "The root directory to save backups to.",
-                    "default": "P:\\Superset\\010 Бекапы"
+                    "default": default_path
                 }
             },
             "required": ["env", "backup_path"],
@@ -73,8 +78,12 @@ class BackupPlugin(PluginBase):
         logger.info(f"[BackupPlugin][Entry] Starting backup for {env}.")
         try:
-            clients = setup_clients(logger)
-            client = clients[env]
+            config_manager = get_config_manager()
+            clients = setup_clients(logger, custom_envs=config_manager.get_environments())
+            client = clients.get(env)
+            if not client:
+                raise ValueError(f"Environment '{env}' not found in configuration.")
             dashboard_count, dashboard_meta = client.get_dashboards()
             logger.info(f"[BackupPlugin][Progress] Found {dashboard_count} dashboards to export in {env}.")

backend/src/plugins/migration.py Normal file → Executable file

@@ -15,6 +15,7 @@ from ..core.plugin_base import PluginBase
 from superset_tool.client import SupersetClient
 from superset_tool.utils.init_clients import setup_clients
 from superset_tool.utils.fileio import create_temp_file, update_yamls, create_dashboard_export
+from ..dependencies import get_config_manager
 from superset_tool.utils.logger import SupersetLogger
 class MigrationPlugin(PluginBase):
@@ -39,6 +40,9 @@ class MigrationPlugin(PluginBase):
         return "1.0.0"
     def get_schema(self) -> Dict[str, Any]:
+        config_manager = get_config_manager()
+        envs = [e.name for e in config_manager.get_environments()]
         return {
             "type": "object",
             "properties": {
@@ -46,13 +50,13 @@ class MigrationPlugin(PluginBase):
                 "type": "string",
                 "title": "Source Environment",
                 "description": "The environment to migrate from.",
-                "enum": ["dev", "sbx", "prod", "preprod"],
+                "enum": envs if envs else ["dev", "prod"],
             },
             "to_env": {
                 "type": "string",
                 "title": "Target Environment",
                 "description": "The environment to migrate to.",
-                "enum": ["dev", "sbx", "prod", "preprod"],
+                "enum": envs if envs else ["dev", "prod"],
             },
             "dashboard_regex": {
                 "type": "string",
@@ -91,9 +95,13 @@ class MigrationPlugin(PluginBase):
         logger.info(f"[MigrationPlugin][Entry] Starting migration from {from_env} to {to_env}.")
         try:
-            all_clients = setup_clients(logger)
-            from_c = all_clients[from_env]
-            to_c = all_clients[to_env]
+            config_manager = get_config_manager()
+            all_clients = setup_clients(logger, custom_envs=config_manager.get_environments())
+            from_c = all_clients.get(from_env)
+            to_c = all_clients.get(to_env)
+            if not from_c or not to_c:
+                raise ValueError(f"One or both environments ('{from_env}', '{to_env}') not found in configuration.")
             _, all_dashboards = from_c.get_dashboards()

backup_script.py Normal file → Executable file

debug_db_api.py Normal file → Executable file

docs/plugin_dev.md Normal file → Executable file

docs/settings.md Normal file

@@ -0,0 +1,46 @@
# Web Application Settings Mechanism
This document describes the settings management system for the Superset Tools application.
## Overview
The settings mechanism allows users to configure multiple Superset environments and global application settings (like backup storage) via the web UI.
## Backend Architecture
### Data Models
Configuration is structured using Pydantic models in `backend/src/core/config_models.py`:
- `Environment`: Represents a Superset instance (URL, credentials).
- `GlobalSettings`: Global application parameters (e.g., `backup_path`).
- `AppConfig`: The root configuration object.
### Configuration Manager
The `ConfigManager` (`backend/src/core/config_manager.py`) handles:
- Persistence to `config.json`.
- CRUD operations for environments.
- Validation and logging.
### API Endpoints
The settings API is available at `/settings`:
- `GET /settings`: Retrieve all settings (passwords are masked).
- `PATCH /settings/global`: Update global settings.
- `GET /settings/environments`: List environments.
- `POST /settings/environments`: Add environment.
- `PUT /settings/environments/{id}`: Update environment.
- `DELETE /settings/environments/{id}`: Remove environment.
- `POST /settings/environments/{id}/test`: Test connection.
- `POST /settings/validate-path`: Validate that a backup path exists and is writable.
## Frontend Implementation
The settings page is located at `frontend/src/pages/Settings.svelte`. It provides forms for managing global settings and Superset environments.
## Integration
Existing plugins and utilities use the `ConfigManager` to fetch configuration:
- `superset_tool/utils/init_clients.py`: Dynamically initializes Superset clients from the configured environments.
- `BackupPlugin`: Uses the configured `backup_path` as the default storage location.
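## Example Usage

A minimal sketch (not part of the commit) of driving the settings API with `httpx`, which this commit adds to `backend/requirements.txt`; the host, port, and environment values are assumptions:

```python
import httpx

base = "http://localhost:8000"
env = {
    "id": "prod", "name": "prod", "url": "https://superset.example.com",
    "username": "svc_user", "password": "secret", "is_default": False,
}

httpx.post(f"{base}/settings/environments", json=env).raise_for_status()
print(httpx.post(f"{base}/settings/environments/prod/test").json())
# {"status": "success", "message": "Connection successful"}  -- or an error payload
```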

frontend/.vscode/extensions.json vendored Normal file → Executable file

frontend/README.md Normal file → Executable file

frontend/index.html Normal file → Executable file

frontend/jsconfig.json Normal file → Executable file

frontend/package-lock.json generated Normal file → Executable file

@@ -883,6 +883,7 @@
 "integrity": "sha512-YZs/OSKOQAQCnJvM/P+F1URotNnYNeU3P2s4oIpzm1uFaqUEqRxUB0g5ejMjEb5Gjb9/PiBI5Ktrq4rUUF8UVQ==",
 "dev": true,
 "license": "MIT",
+"peer": true,
 "dependencies": {
 "@sveltejs/vite-plugin-svelte-inspector": "^5.0.0",
 "debug": "^4.4.1",
@@ -929,6 +930,7 @@
 "integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
 "dev": true,
 "license": "MIT",
+"peer": true,
 "bin": {
 "acorn": "bin/acorn"
 },
@@ -1077,6 +1079,7 @@
 }
 ],
 "license": "MIT",
+"peer": true,
 "dependencies": {
 "baseline-browser-mapping": "^2.9.0",
 "caniuse-lite": "^1.0.30001759",
@@ -1514,6 +1517,7 @@
 "integrity": "sha512-/imKNG4EbWNrVjoNC/1H5/9GFy+tqjGBHCaSsN+P2RnPqjsLmv6UD3Ej+Kj8nBWaRAwyk7kK5ZUc+OEatnTR3A==",
 "dev": true,
 "license": "MIT",
+"peer": true,
 "bin": {
 "jiti": "bin/jiti.js"
 }
@@ -1721,6 +1725,7 @@
 }
 ],
 "license": "MIT",
+"peer": true,
 "dependencies": {
 "nanoid": "^3.3.11",
 "picocolors": "^1.1.1",
@@ -2058,6 +2063,7 @@
 "integrity": "sha512-ZhLtvroYxUxr+HQJfMZEDRsGsmU46x12RvAv/zi9584f5KOX7bUrEbhPJ7cKFmUvZTJXi/CFZUYwDC6M1FigPw==",
 "dev": true,
 "license": "MIT",
+"peer": true,
 "dependencies": {
 "@jridgewell/remapping": "^2.3.4",
 "@jridgewell/sourcemap-codec": "^1.5.0",
@@ -2181,6 +2187,7 @@
 "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
 "dev": true,
 "license": "MIT",
+"peer": true,
 "engines": {
 "node": ">=12"
 },
@@ -2252,6 +2259,7 @@
 "integrity": "sha512-dZwN5L1VlUBewiP6H9s2+B3e3Jg96D0vzN+Ry73sOefebhYr9f94wwkMNN/9ouoU8pV1BqA1d1zGk8928cx0rg==",
 "dev": true,
 "license": "MIT",
+"peer": true,
 "dependencies": {
 "esbuild": "^0.27.0",
 "fdir": "^6.5.0",
@@ -2345,6 +2353,7 @@
 "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
 "dev": true,
 "license": "MIT",
+"peer": true,
 "engines": {
 "node": ">=12"
 },

frontend/package.json Normal file → Executable file

frontend/postcss.config.js Normal file → Executable file

frontend/public/vite.svg Normal file → Executable file


frontend/src/App.svelte Normal file → Executable file

@@ -1,28 +1,91 @@
+<!--
+[DEF:App:Component]
+@SEMANTICS: main, entrypoint, layout, navigation
+@PURPOSE: The root component of the frontend application. Manages navigation and layout.
+@LAYER: UI
+@RELATION: DEPENDS_ON -> frontend/src/pages/Dashboard.svelte
+@RELATION: DEPENDS_ON -> frontend/src/pages/Settings.svelte
+@RELATION: DEPENDS_ON -> frontend/src/lib/stores.js
+@PROPS: None
+@EVENTS: None
+@INVARIANT: Navigation state must be persisted in the currentPage store.
+-->
 <script>
   import Dashboard from './pages/Dashboard.svelte';
-  import { selectedPlugin, selectedTask } from './lib/stores.js';
+  import Settings from './pages/Settings.svelte';
+  import { selectedPlugin, selectedTask, currentPage } from './lib/stores.js';
   import TaskRunner from './components/TaskRunner.svelte';
   import DynamicForm from './components/DynamicForm.svelte';
   import { api } from './lib/api.js';
   import Toast from './components/Toast.svelte';
+  // [DEF:handleFormSubmit:Function]
+  // @PURPOSE: Handles form submission for task creation.
+  // @PARAM: event (CustomEvent) - The submit event from DynamicForm.
   async function handleFormSubmit(event) {
+    console.log("[App.handleFormSubmit][Action] Handling form submission for task creation.");
     const params = event.detail;
+    try {
       const task = await api.createTask($selectedPlugin.id, params);
       selectedTask.set(task);
       selectedPlugin.set(null);
+      console.log(`[App.handleFormSubmit][Coherence:OK] Task created context={{'id': '${task.id}'}}`);
+    } catch (error) {
+      console.error(`[App.handleFormSubmit][Coherence:Failed] Task creation failed context={{'error': '${error}'}}`);
+    }
   }
+  // [/DEF:handleFormSubmit]
+  // [DEF:navigate:Function]
+  // @PURPOSE: Changes the current page and resets state.
+  // @PARAM: page (string) - Target page name.
+  function navigate(page) {
+    console.log(`[App.navigate][Action] Navigating to ${page}.`);
+    // Reset selection first
+    if (page !== $currentPage) {
+      selectedPlugin.set(null);
+      selectedTask.set(null);
+    }
+    // Then set page
+    currentPage.set(page);
+  }
+  // [/DEF:navigate]
 </script>
 <Toast />
 <main class="bg-gray-50 min-h-screen">
-  <header class="bg-white shadow-md p-4">
-    <h1 class="text-3xl font-bold text-gray-800">Superset Tools</h1>
+  <header class="bg-white shadow-md p-4 flex justify-between items-center">
+    <button
+      type="button"
+      class="text-3xl font-bold text-gray-800 focus:outline-none"
+      on:click={() => navigate('dashboard')}
+    >
+      Superset Tools
+    </button>
+    <nav class="space-x-4">
+      <button
+        type="button"
+        on:click={() => navigate('dashboard')}
+        class="text-gray-600 hover:text-blue-600 font-medium {$currentPage === 'dashboard' ? 'text-blue-600 border-b-2 border-blue-600' : ''}"
+      >
+        Dashboard
+      </button>
+      <button
+        type="button"
+        on:click={() => navigate('settings')}
+        class="text-gray-600 hover:text-blue-600 font-medium {$currentPage === 'settings' ? 'text-blue-600 border-b-2 border-blue-600' : ''}"
+      >
+        Settings
+      </button>
+    </nav>
   </header>
   <div class="p-4">
-    {#if $selectedTask}
+    {#if $currentPage === 'settings'}
+      <Settings />
+    {:else if $selectedTask}
       <TaskRunner />
       <button on:click={() => selectedTask.set(null)} class="mt-4 bg-blue-500 text-white p-2 rounded">
         Back to Task List
@@ -38,3 +101,4 @@
     {/if}
   </div>
 </main>
+<!-- [/DEF:App] -->

frontend/src/app.css Normal file → Executable file

frontend/src/assets/svelte.svg Normal file → Executable file


frontend/src/components/DynamicForm.svelte Normal file → Executable file

@@ -1,3 +1,15 @@
+<!--
+[DEF:DynamicForm:Component]
+@SEMANTICS: form, schema, dynamic, json-schema
+@PURPOSE: Generates a form dynamically based on a JSON schema.
+@LAYER: UI
+@RELATION: DEPENDS_ON -> svelte:createEventDispatcher
+@PROPS:
+  - schema: Object - JSON schema for the form.
+@EVENTS:
+  - submit: Object - Dispatched when the form is submitted, containing the form data.
+-->
 <script>
   import { createEventDispatcher } from 'svelte';
@@ -6,16 +18,26 @@
   const dispatch = createEventDispatcher();
+  // [DEF:handleSubmit:Function]
+  // @PURPOSE: Dispatches the submit event with the form data.
   function handleSubmit() {
+    console.log("[DynamicForm][Action] Submitting form data.", { formData });
     dispatch('submit', formData);
   }
+  // [/DEF:handleSubmit]
-  // Initialize form data with default values from the schema
+  // [DEF:initializeForm:Function]
+  // @PURPOSE: Initialize form data with default values from the schema.
+  function initializeForm() {
     if (schema && schema.properties) {
       for (const key in schema.properties) {
         formData[key] = schema.properties[key].default || '';
       }
     }
+  }
+  // [/DEF:initializeForm]
+  initializeForm();
 </script>
 <form on:submit|preventDefault={handleSubmit} class="space-y-4">
@@ -54,3 +76,4 @@
   </button>
   {/if}
 </form>
+<!-- [/DEF:DynamicForm] -->

frontend/src/components/TaskRunner.svelte Normal file → Executable file

@@ -1,17 +1,30 @@
+<!--
+[DEF:TaskRunner:Component]
+@SEMANTICS: task, runner, logs, websocket
+@PURPOSE: Connects to a WebSocket to display real-time logs for a running task.
+@LAYER: UI
+@RELATION: DEPENDS_ON -> frontend/src/lib/stores.js
+@PROPS: None
+@EVENTS: None
+-->
 <script>
   import { onMount, onDestroy } from 'svelte';
   import { selectedTask, taskLogs } from '../lib/stores.js';
   let ws;
+  // [DEF:onMount:Function]
+  // @PURPOSE: Initialize WebSocket connection for task logs.
   onMount(() => {
     if ($selectedTask) {
+      console.log(`[TaskRunner][Entry] Connecting to logs for task: ${$selectedTask.id}`);
       taskLogs.set([]); // Clear previous logs
       const wsUrl = `ws://localhost:8000/ws/logs/${$selectedTask.id}`;
       ws = new WebSocket(wsUrl);
       ws.onopen = () => {
-        console.log('WebSocket connection established');
+        console.log('[TaskRunner][Coherence:OK] WebSocket connection established');
       };
       ws.onmessage = (event) => {
@@ -20,20 +33,25 @@
       };
       ws.onerror = (error) => {
-        console.error('WebSocket error:', error);
+        console.error('[TaskRunner][Coherence:Failed] WebSocket error:', error);
       };
       ws.onclose = () => {
-        console.log('WebSocket connection closed');
+        console.log('[TaskRunner][Exit] WebSocket connection closed');
       };
     }
   });
+  // [/DEF:onMount]
+  // [DEF:onDestroy:Function]
+  // @PURPOSE: Close WebSocket connection when the component is destroyed.
   onDestroy(() => {
     if (ws) {
+      console.log("[TaskRunner][Action] Closing WebSocket connection.");
       ws.close();
     }
   });
+  // [/DEF:onDestroy]
 </script>
 <div class="p-4 border rounded-lg bg-white shadow-md">
@@ -52,3 +70,4 @@
   <p>No task selected.</p>
   {/if}
 </div>
+<!-- [/DEF:TaskRunner] -->

frontend/src/components/Toast.svelte Normal file → Executable file

@@ -1,3 +1,13 @@
+<!--
+[DEF:Toast:Component]
+@SEMANTICS: toast, notification, feedback, ui
+@PURPOSE: Displays transient notifications (toasts) in the bottom-right corner.
+@LAYER: UI
+@RELATION: DEPENDS_ON -> frontend/src/lib/toasts.js
+@PROPS: None
+@EVENTS: None
+-->
 <script>
   import { toasts } from '../lib/toasts.js';
 </script>
@@ -13,3 +23,4 @@
   </div>
   {/each}
 </div>
+<!-- [/DEF:Toast] -->

frontend/src/lib/Counter.svelte Normal file → Executable file

frontend/src/lib/api.js Normal file → Executable file

@@ -1,34 +1,40 @@
+// [DEF:api_module:Module]
+// @SEMANTICS: api, client, fetch, rest
+// @PURPOSE: Handles all communication with the backend API.
+// @LAYER: Infra-API
 import { addToast } from './toasts.js';
 const API_BASE_URL = 'http://localhost:8000';
-/**
- * Fetches data from the API.
- * @param {string} endpoint The API endpoint to fetch data from.
- * @returns {Promise<any>} The JSON response from the API.
- */
+// [DEF:fetchApi:Function]
+// @PURPOSE: Generic GET request wrapper.
+// @PARAM: endpoint (string) - API endpoint.
+// @RETURN: Promise<any> - JSON response.
 async function fetchApi(endpoint) {
   try {
+    console.log(`[api.fetchApi][Action] Fetching from context={{'endpoint': '${endpoint}'}}`);
     const response = await fetch(`${API_BASE_URL}${endpoint}`);
     if (!response.ok) {
       throw new Error(`API request failed with status ${response.status}`);
     }
     return await response.json();
   } catch (error) {
-    console.error(`Error fetching from ${endpoint}:`, error);
+    console.error(`[api.fetchApi][Coherence:Failed] Error fetching from ${endpoint}:`, error);
     addToast(error.message, 'error');
     throw error;
   }
 }
+// [/DEF:fetchApi]
-/**
- * Posts data to the API.
- * @param {string} endpoint The API endpoint to post data to.
- * @param {object} body The data to post.
- * @returns {Promise<any>} The JSON response from the API.
- */
+// [DEF:postApi:Function]
+// @PURPOSE: Generic POST request wrapper.
+// @PARAM: endpoint (string) - API endpoint.
+// @PARAM: body (object) - Request payload.
+// @RETURN: Promise<any> - JSON response.
 async function postApi(endpoint, body) {
   try {
+    console.log(`[api.postApi][Action] Posting to context={{'endpoint': '${endpoint}'}}`);
     const response = await fetch(`${API_BASE_URL}${endpoint}`, {
       method: 'POST',
       headers: {
@@ -41,15 +47,57 @@ async function postApi(endpoint, body) {
     }
     return await response.json();
   } catch (error) {
-    console.error(`Error posting to ${endpoint}:`, error);
+    console.error(`[api.postApi][Coherence:Failed] Error posting to ${endpoint}:`, error);
     addToast(error.message, 'error');
     throw error;
   }
 }
+// [/DEF:postApi]
+// [DEF:api:Data]
+// @PURPOSE: API client object with specific methods.
 export const api = {
-  getPlugins: () => fetchApi('/plugins'),
-  getTasks: () => fetchApi('/tasks'),
+  getPlugins: () => fetchApi('/plugins/'),
+  getTasks: () => fetchApi('/tasks/'),
   getTask: (taskId) => fetchApi(`/tasks/${taskId}`),
   createTask: (pluginId, params) => postApi('/tasks', { plugin_id: pluginId, params }),
+  // Settings
+  getSettings: () => fetchApi('/settings'),
+  updateGlobalSettings: (settings) => {
+    return fetch(`${API_BASE_URL}/settings/global`, {
+      method: 'PATCH',
+      headers: { 'Content-Type': 'application/json' },
+      body: JSON.stringify(settings)
+    }).then(res => res.json());
+  },
+  getEnvironments: () => fetchApi('/settings/environments'),
+  addEnvironment: (env) => postApi('/settings/environments', env),
+  updateEnvironment: (id, env) => {
+    return fetch(`${API_BASE_URL}/settings/environments/${id}`, {
+      method: 'PUT',
+      headers: { 'Content-Type': 'application/json' },
+      body: JSON.stringify(env)
+    }).then(res => res.json());
+  },
+  deleteEnvironment: (id) => {
+    return fetch(`${API_BASE_URL}/settings/environments/${id}`, {
+      method: 'DELETE'
+    }).then(res => res.json());
+  },
+  testEnvironmentConnection: (id) => postApi(`/settings/environments/${id}/test`, {}),
 };
+// [/DEF:api_module]
+// Export individual functions for easier use in components
+export const getPlugins = api.getPlugins;
+export const getTasks = api.getTasks;
+export const getTask = api.getTask;
+export const createTask = api.createTask;
+export const getSettings = api.getSettings;
+export const updateGlobalSettings = api.updateGlobalSettings;
+export const getEnvironments = api.getEnvironments;
+export const addEnvironment = api.addEnvironment;
+export const updateEnvironment = api.updateEnvironment;
+export const deleteEnvironment = api.deleteEnvironment;
+export const testEnvironmentConnection = api.testEnvironmentConnection;

frontend/src/lib/stores.js Normal file → Executable file

@@ -1,40 +1,60 @@
+// [DEF:stores_module:Module]
+// @SEMANTICS: state, stores, svelte, plugins, tasks
+// @PURPOSE: Global state management using Svelte stores.
+// @LAYER: UI-State
 import { writable } from 'svelte/store';
 import { api } from './api.js';
-// Store for the list of available plugins
+// [DEF:plugins:Data]
+// @PURPOSE: Store for the list of available plugins.
 export const plugins = writable([]);
-// Store for the list of tasks
+// [DEF:tasks:Data]
+// @PURPOSE: Store for the list of tasks.
 export const tasks = writable([]);
-// Store for the currently selected plugin
+// [DEF:selectedPlugin:Data]
+// @PURPOSE: Store for the currently selected plugin.
 export const selectedPlugin = writable(null);
-// Store for the currently selected task
+// [DEF:selectedTask:Data]
+// @PURPOSE: Store for the currently selected task.
 export const selectedTask = writable(null);
-// Store for the logs of the currently selected task
+// [DEF:currentPage:Data]
+// @PURPOSE: Store for the current page.
+export const currentPage = writable('dashboard');
+// [DEF:taskLogs:Data]
+// @PURPOSE: Store for the logs of the currently selected task.
 export const taskLogs = writable([]);
-// Function to fetch plugins from the API
+// [DEF:fetchPlugins:Function]
+// @PURPOSE: Fetches plugins from the API and updates the plugins store.
 export async function fetchPlugins() {
   try {
+    console.log("[stores.fetchPlugins][Action] Fetching plugins.");
     const data = await api.getPlugins();
-    console.log('Fetched plugins:', data); // Add console log
+    console.log("[stores.fetchPlugins][Coherence:OK] Plugins fetched context={{'count': " + data.length + "}}");
     plugins.set(data);
   } catch (error) {
-    console.error('Error fetching plugins:', error);
-    // Handle error appropriately in the UI
+    console.error(`[stores.fetchPlugins][Coherence:Failed] Error fetching plugins context={{'error': '${error}'}}`);
   }
 }
+// [/DEF:fetchPlugins]
-// Function to fetch tasks from the API
+// [DEF:fetchTasks:Function]
+// @PURPOSE: Fetches tasks from the API and updates the tasks store.
 export async function fetchTasks() {
   try {
+    console.log("[stores.fetchTasks][Action] Fetching tasks.");
     const data = await api.getTasks();
+    console.log("[stores.fetchTasks][Coherence:OK] Tasks fetched context={{'count': " + data.length + "}}");
     tasks.set(data);
   } catch (error) {
-    console.error('Error fetching tasks:', error);
-    // Handle error appropriately in the UI
+    console.error(`[stores.fetchTasks][Coherence:Failed] Error fetching tasks context={{'error': '${error}'}}`);
   }
 }
+// [/DEF:fetchTasks]
+// [/DEF:stores_module]

frontend/src/lib/toasts.js Normal file → Executable file

@@ -1,13 +1,33 @@
+// [DEF:toasts_module:Module]
+// @SEMANTICS: notification, toast, feedback, state
+// @PURPOSE: Manages toast notifications using a Svelte writable store.
+// @LAYER: UI-State
 import { writable } from 'svelte/store';
+// [DEF:toasts:Data]
+// @PURPOSE: Writable store containing the list of active toasts.
 export const toasts = writable([]);
+// [DEF:addToast:Function]
+// @PURPOSE: Adds a new toast message.
+// @PARAM: message (string) - The message text.
+// @PARAM: type (string) - The type of toast (info, success, error).
+// @PARAM: duration (number) - Duration in ms before the toast is removed.
 export function addToast(message, type = 'info', duration = 3000) {
   const id = Math.random().toString(36).substr(2, 9);
+  console.log(`[toasts.addToast][Action] Adding toast context={{'id': '${id}', 'type': '${type}', 'message': '${message}'}}`);
   toasts.update(all => [...all, { id, message, type }]);
   setTimeout(() => removeToast(id), duration);
 }
+// [/DEF:addToast]
+// [DEF:removeToast:Function]
+// @PURPOSE: Removes a toast message by ID.
+// @PARAM: id (string) - The ID of the toast to remove.
 function removeToast(id) {
+  console.log(`[toasts.removeToast][Action] Removing toast context={{'id': '${id}'}}`);
   toasts.update(all => all.filter(t => t.id !== id));
 }
+// [/DEF:removeToast]
+// [/DEF:toasts_module]

frontend/src/main.js Normal file → Executable file

@@ -1,9 +1,17 @@
+// [DEF:main:Module]
+// @SEMANTICS: entrypoint, svelte, init
+// @PURPOSE: Entry point for the Svelte application.
+// @LAYER: UI-Entry
 import './app.css'
 import App from './App.svelte'
+// [DEF:app_instance:Data]
+// @PURPOSE: Initialized Svelte app instance.
 const app = new App({
   target: document.getElementById('app'),
   props: {}
 })
 export default app
+// [/DEF:main]

frontend/src/pages/Dashboard.svelte Normal file → Executable file

@@ -1,14 +1,33 @@
+<!--
+[DEF:Dashboard:Component]
+@SEMANTICS: dashboard, plugins, tools, list
+@PURPOSE: Displays the list of available plugins and allows selecting one.
+@LAYER: UI
+@RELATION: DEPENDS_ON -> frontend/src/lib/stores.js
+@PROPS: None
+@EVENTS: None
+-->
 <script>
   import { onMount } from 'svelte';
   import { plugins, fetchPlugins, selectedPlugin } from '../lib/stores.js';
+  // [DEF:onMount:Function]
+  // @PURPOSE: Fetch plugins when the component mounts.
   onMount(async () => {
+    console.log("[Dashboard][Entry] Component mounted, fetching plugins.");
     await fetchPlugins();
   });
+  // [/DEF:onMount]
+  // [DEF:selectPlugin:Function]
+  // @PURPOSE: Selects a plugin to display its form.
+  // @PARAM: plugin (Object) - The plugin object to select.
   function selectPlugin(plugin) {
+    console.log(`[Dashboard][Action] Selecting plugin: ${plugin.id}`);
     selectedPlugin.set(plugin);
   }
+  // [/DEF:selectPlugin]
 </script>
 <div class="container mx-auto p-4">
@@ -26,3 +45,4 @@
   {/each}
 </div>
 </div>
+<!-- [/DEF:Dashboard] -->

frontend/src/pages/Settings.svelte New file

@@ -0,0 +1,207 @@
<!--
[DEF:Settings:Component]
@SEMANTICS: settings, ui, configuration
@PURPOSE: The main settings page for the application, allowing management of environments and global settings.
@LAYER: UI
@RELATION: CALLS -> api.js
@RELATION: USES -> stores.js
@PROPS:
None
@EVENTS:
None
@INVARIANT: Settings changes must be saved to the backend.
-->
<script>
import { onMount } from 'svelte';
import { getSettings, updateGlobalSettings, getEnvironments, addEnvironment, updateEnvironment, deleteEnvironment, testEnvironmentConnection } from '../lib/api';
import { addToast } from '../lib/toasts';
let settings = {
environments: [],
settings: {
backup_path: '',
default_environment_id: null
}
};
let newEnv = {
id: '',
name: '',
url: '',
username: '',
password: '',
is_default: false
};
let editingEnvId = null;
async function loadSettings() {
try {
const data = await getSettings();
settings = data;
} catch (error) {
addToast('Failed to load settings', 'error');
}
}
async function handleSaveGlobal() {
try {
await updateGlobalSettings(settings.settings);
addToast('Global settings saved', 'success');
} catch (error) {
addToast('Failed to save global settings', 'error');
}
}
async function handleAddOrUpdateEnv() {
try {
if (editingEnvId) {
await updateEnvironment(editingEnvId, newEnv);
addToast('Environment updated', 'success');
} else {
await addEnvironment(newEnv);
addToast('Environment added', 'success');
}
resetEnvForm();
await loadSettings();
} catch (error) {
addToast('Failed to save environment', 'error');
}
}
async function handleDeleteEnv(id) {
if (confirm('Are you sure you want to delete this environment?')) {
try {
await deleteEnvironment(id);
addToast('Environment deleted', 'success');
await loadSettings();
} catch (error) {
addToast('Failed to delete environment', 'error');
}
}
}
async function handleTestEnv(id) {
try {
const result = await testEnvironmentConnection(id);
if (result.status === 'success') {
addToast('Connection successful', 'success');
} else {
addToast(`Connection failed: ${result.message}`, 'error');
}
} catch (error) {
addToast('Failed to test connection', 'error');
}
}
function editEnv(env) {
newEnv = { ...env };
editingEnvId = env.id;
}
function resetEnvForm() {
newEnv = {
id: '',
name: '',
url: '',
username: '',
password: '',
is_default: false
};
editingEnvId = null;
}
onMount(loadSettings);
</script>
<div class="container mx-auto p-4">
<h1 class="text-2xl font-bold mb-6">Settings</h1>
<section class="mb-8 bg-white p-6 rounded shadow">
<h2 class="text-xl font-semibold mb-4">Global Settings</h2>
<div class="grid grid-cols-1 gap-4">
<div>
<label for="backup_path" class="block text-sm font-medium text-gray-700">Backup Storage Path</label>
<input type="text" id="backup_path" bind:value={settings.settings.backup_path} class="mt-1 block w-full border border-gray-300 rounded-md shadow-sm p-2" />
</div>
<button on:click={handleSaveGlobal} class="bg-blue-500 text-white px-4 py-2 rounded hover:bg-blue-600 w-max">
Save Global Settings
</button>
</div>
</section>
<section class="mb-8 bg-white p-6 rounded shadow">
<h2 class="text-xl font-semibold mb-4">Superset Environments</h2>
<div class="mb-6 overflow-x-auto">
<table class="min-w-full divide-y divide-gray-200">
<thead class="bg-gray-50">
<tr>
<th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Name</th>
<th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">URL</th>
<th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Username</th>
<th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Default</th>
<th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Actions</th>
</tr>
</thead>
<tbody class="bg-white divide-y divide-gray-200">
{#each settings.environments as env}
<tr>
<td class="px-6 py-4 whitespace-nowrap">{env.name}</td>
<td class="px-6 py-4 whitespace-nowrap">{env.url}</td>
<td class="px-6 py-4 whitespace-nowrap">{env.username}</td>
<td class="px-6 py-4 whitespace-nowrap">{env.is_default ? 'Yes' : 'No'}</td>
<td class="px-6 py-4 whitespace-nowrap">
<button on:click={() => handleTestEnv(env.id)} class="text-green-600 hover:text-green-900 mr-4">Test</button>
<button on:click={() => editEnv(env)} class="text-indigo-600 hover:text-indigo-900 mr-4">Edit</button>
<button on:click={() => handleDeleteEnv(env.id)} class="text-red-600 hover:text-red-900">Delete</button>
</td>
</tr>
{/each}
</tbody>
</table>
</div>
<div class="bg-gray-50 p-4 rounded">
<h3 class="text-lg font-medium mb-4">{editingEnvId ? 'Edit' : 'Add'} Environment</h3>
<div class="grid grid-cols-1 md:grid-cols-2 gap-4">
<div>
<label for="env_id" class="block text-sm font-medium text-gray-700">ID</label>
<input type="text" id="env_id" bind:value={newEnv.id} disabled={!!editingEnvId} class="mt-1 block w-full border border-gray-300 rounded-md shadow-sm p-2" />
</div>
<div>
<label for="env_name" class="block text-sm font-medium text-gray-700">Name</label>
<input type="text" id="env_name" bind:value={newEnv.name} class="mt-1 block w-full border border-gray-300 rounded-md shadow-sm p-2" />
</div>
<div>
<label for="env_url" class="block text-sm font-medium text-gray-700">URL</label>
<input type="text" id="env_url" bind:value={newEnv.url} class="mt-1 block w-full border border-gray-300 rounded-md shadow-sm p-2" />
</div>
<div>
<label for="env_user" class="block text-sm font-medium text-gray-700">Username</label>
<input type="text" id="env_user" bind:value={newEnv.username} class="mt-1 block w-full border border-gray-300 rounded-md shadow-sm p-2" />
</div>
<div>
<label for="env_pass" class="block text-sm font-medium text-gray-700">Password</label>
<input type="password" id="env_pass" bind:value={newEnv.password} class="mt-1 block w-full border border-gray-300 rounded-md shadow-sm p-2" />
</div>
<div class="flex items-center">
<input type="checkbox" id="env_default" bind:checked={newEnv.is_default} class="h-4 w-4 text-blue-600 border-gray-300 rounded" />
<label for="env_default" class="ml-2 block text-sm text-gray-900">Default Environment</label>
</div>
</div>
<div class="mt-4 flex gap-2">
<button on:click={handleAddOrUpdateEnv} class="bg-green-500 text-white px-4 py-2 rounded hover:bg-green-600">
{editingEnvId ? 'Update' : 'Add'} Environment
</button>
{#if editingEnvId}
<button on:click={resetEnvForm} class="bg-gray-500 text-white px-4 py-2 rounded hover:bg-gray-600">
Cancel
</button>
{/if}
</div>
</div>
</section>
</div>
<!-- [/DEF:Settings] -->

0 frontend/svelte.config.js Normal file → Executable file
0 frontend/tailwind.config.js Normal file → Executable file
0 frontend/vite.config.js Normal file → Executable file
0 get_dataset_structure.py Normal file → Executable file
0 migration_script.py Normal file → Executable file

21 reproduce_issue.py Normal file

@@ -0,0 +1,21 @@
import sys
import os
from pathlib import Path

# Add root to sys.path
sys.path.append(os.getcwd())

try:
    from backend.src.core.plugin_loader import PluginLoader
except ImportError as e:
    print(f"Failed to import PluginLoader: {e}")
    sys.exit(1)

plugin_dir = Path("backend/src/plugins").absolute()
print(f"Plugin dir: {plugin_dir}")

loader = PluginLoader(str(plugin_dir))
configs = loader.get_all_plugin_configs()

print(f"Loaded plugins: {len(configs)}")
for config in configs:
    print(f" - {config.id}")

0 requirements.txt Normal file → Executable file
0 run_mapper.py Normal file → Executable file
0 search_script.py Normal file → Executable file

172 semantic_protocol.md Normal file → Executable file

@@ -1,49 +1,61 @@
Here is the revised **System Standard**, adapted for a Polyglot environment (Python Backend + Svelte Frontend) and removing the requirement for explicit assertion generation. This protocol standardizes the "Semantic Bridge" between the two languages using unified Anchor logic while respecting the native documentation standards (Comments for Python, JSDoc for JavaScript/Svelte).

***

# SYSTEM STANDARD: POLYGLOT CODE GENERATION PROTOCOL (GRACE-Poly)

**OBJECTIVE:** Generate Python and Svelte/TypeScript code that strictly adheres to Semantic Coherence standards. Output must be machine-readable, fractal-structured, and optimized for Sparse Attention navigation.

## I. CORE REQUIREMENTS

1. **Causal Validity:** Semantic definitions (Contracts) must ALWAYS precede implementation code.
2. **Immutability:** Architectural decisions defined in the Module/Component Header are treated as immutable constraints.
3. **Format Compliance:** Output must strictly follow the `[DEF]` / `[/DEF]` anchor syntax for structure.
4. **Logic over Assertion:** Contracts define the *logic flow*. Do not generate explicit `assert` statements unless requested. The code logic itself must inherently satisfy the Pre/Post conditions (e.g., via control flow, guards, or types).

---

## II. SYNTAX SPECIFICATION

Code structure is defined by **Anchors** (square brackets). Metadata is defined by **Tags** (native comment style).

### 1. Entity Anchors (The "Container")

Used to define the boundaries of Modules, Classes, Components, and Functions.

* **Python:**
  * Start: `# [DEF:identifier:Type]`
  * End: `# [/DEF:identifier]`
* **Svelte (Top-level):**
  * Start: `<!-- [DEF:ComponentName:Component] -->`
  * End: `<!-- [/DEF:ComponentName] -->`
* **Svelte (Script/JS/TS):**
  * Start: `// [DEF:funcName:Function]`
  * End: `// [/DEF:funcName]`

**Types:** `Module`, `Component`, `Class`, `Function`, `Store`, `Action`.

### 2. Graph Relations (The "Map")

Defines high-level dependencies.

* **Python Syntax:** `# @RELATION: TYPE -> TARGET_ID`
* **Svelte/JS Syntax:** `// @RELATION: TYPE -> TARGET_ID`
* **Types:** `DEPENDS_ON`, `CALLS`, `INHERITS_FROM`, `IMPLEMENTS`, `BINDS_TO`, `DISPATCHES`.

---

## III. FILE STRUCTURE STANDARD

### 1. Python Module Header (`.py`)

Every `.py` file starts with a Module definition.

```python
# [DEF:module_name:Module]
#
# @SEMANTICS: [keywords for vector search]
# @PURPOSE: [Primary responsibility of the module]
# @LAYER: [Domain/Infra/API]
# @RELATION: [Dependencies]
#
# @INVARIANT: [Global immutable rule]
# @CONSTRAINT: [Hard restriction, e.g., "No ORM calls here"]
# @PUBLIC_API: [Exported symbols]

# [SECTION: IMPORTS]
...
@@ -54,71 +66,109 @@ Every `.py` file starts with a Module definition.
# [/DEF:module_name]
```

### 2. Svelte Component Header (`.svelte`)

Every `.svelte` file starts with a Component definition inside an HTML comment.

```html
<!-- [DEF:ComponentName:Component] -->
<!--
@SEMANTICS: [keywords]
@PURPOSE: [Primary UI responsibility]
@LAYER: [Feature/Atom/Layout]
@RELATION: [Child components, Stores]
@INVARIANT: [UI rules, e.g., "Always responsive"]
-->

<script lang="ts">
// [SECTION: IMPORTS]
// ...
// [/SECTION]

// ... LOGIC IMPLEMENTATION ...
</script>

<!-- [SECTION: TEMPLATE] -->
...
<!-- [/SECTION] -->

<style>
/* ... */
</style>

<!-- [/DEF:ComponentName] -->
```

---

## IV. CONTRACTS (Design by Contract)

Contracts define *what* the code does before *how* it does it.

### 1. Python Contract Style

Uses comment blocks inside the anchor.

**Required Template:**

```python
# [DEF:calculate_total:Function]
# @PURPOSE: Calculates cart total including tax.
# @PRE: items list is not empty.
# @POST: returns non-negative Decimal.
# @PARAM: items (List[Item]) - Cart items.
# @RETURN: Decimal - Final total.
def calculate_total(items: List[Item]) -> Decimal:
    # Logic implementation that respects @PRE
    if not items:
        return Decimal(0)

    # ... calculation ...

    # Logic ensuring @POST
    return total
# [/DEF:calculate_total]
```

### 2. Svelte/JS Contract Style (JSDoc)

Uses JSDoc blocks inside the anchor. Standard JSDoc tags are used where possible; custom GRACE tags are added for strictness.

```javascript
// [DEF:updateUserProfile:Function]
/**
 * @purpose Updates user data in the store and backend.
 * @pre User must be authenticated (session token exists).
 * @post UserStore is updated with new data.
 * @param {Object} profileData - The new profile fields.
 * @returns {Promise<void>}
 * @throws {AuthError} If session is invalid.
 */
// @RELATION: CALLS -> api.user.update
async function updateUserProfile(profileData) {
    // Logic implementation
    if (!session.token) throw new AuthError();
    // ...
}
// [/DEF:updateUserProfile]
```

---

## V. LOGGING STANDARD (BELIEF STATE)

Logs delineate the agent's internal state.

* **Python:** `logger.info(f"[{ANCHOR_ID}][{STATE}] Msg")`
* **Svelte/JS:** `console.log(\`[${ANCHOR_ID}][${STATE}] Msg\`)`

**Required States:**

1. `Entry` (Start of block)
2. `Action` (Key business logic)
3. `Coherence:OK` (Logic successfully completed)
4. `Coherence:Failed` (Error handling)
5. `Exit` (End of block)
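
A minimal Python sketch showing how the anchors, contract tags, and belief-state log format above combine; the function and its logic are invented for illustration and are not part of the repository:

```python
# [DEF:normalize_backup_path:Function]
# @PURPOSE: Expands and validates a user-supplied backup path.
# @PRE: raw_path is a non-empty string.
# @POST: Returns an absolute pathlib.Path.
# @PARAM: raw_path (str) - User-supplied path.
# @RETURN: Path - Absolute, expanded path.
import logging
from pathlib import Path

logger = logging.getLogger(__name__)

def normalize_backup_path(raw_path: str) -> Path:
    logger.info("[normalize_backup_path][Entry] raw_path=%s", raw_path)
    if not raw_path:
        # Guard enforces @PRE via control flow, per "Logic over Assertion".
        logger.error("[normalize_backup_path][Coherence:Failed] empty path")
        raise ValueError("raw_path must be non-empty")
    result = Path(raw_path).expanduser().resolve()  # satisfies @POST
    logger.info("[normalize_backup_path][Coherence:OK] resolved=%s", result)
    logger.info("[normalize_backup_path][Exit]")
    return result
# [/DEF:normalize_backup_path]
```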
---

## VI. GENERATION WORKFLOW

1. **Context Analysis:** Identify language (Python vs Svelte) and Architecture Layer.
2. **Scaffolding:** Generate the `[DEF]` Anchors and Header/Contract **before** writing any logic.
3. **Implementation:** Write the code. Ensure the code logic handles the `@PRE` conditions (e.g., via `if/return` or guards) and satisfies `@POST` conditions naturally. **Do not write explicit `assert` statements unless debugging mode is requested.**
4. **Closure:** Ensure every `[DEF]` is closed with `[/DEF]` to accumulate semantic context.

0 specs/001-plugin-arch-svelte-ui/contracts/api.yaml Normal file → Executable file
0 specs/001-plugin-arch-svelte-ui/data-model.md Normal file → Executable file
0 specs/001-plugin-arch-svelte-ui/plan.md Normal file → Executable file
0 specs/001-plugin-arch-svelte-ui/quickstart.md Normal file → Executable file
0 specs/001-plugin-arch-svelte-ui/research.md Normal file → Executable file
0 specs/001-plugin-arch-svelte-ui/spec.md Normal file → Executable file
0 specs/001-plugin-arch-svelte-ui/tasks.md Normal file → Executable file


@@ -0,0 +1,34 @@
# Specification Quality Checklist: Add web application settings mechanism
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: 2025-12-20
**Feature**: [specs/002-app-settings/spec.md](specs/002-app-settings/spec.md)
## Content Quality
- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed
## Requirement Completeness
- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified
## Feature Readiness
- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification
## Notes
- Initial specification covers all requested points with reasonable defaults for authentication and storage validation.

102 specs/002-app-settings/plan.md Executable file

@@ -0,0 +1,102 @@
# Technical Plan: Web Application Settings Mechanism
This plan outlines the implementation of a settings management system for the Superset Tools application, allowing users to configure multiple Superset environments and global application settings (like backup storage) via the web UI.
## 1. Backend Architecture
### 1.1 Data Models (Pydantic)
We will define models in `backend/src/core/config_models.py`:
```python
from pydantic import BaseModel, Field
from typing import List, Optional
class Environment(BaseModel):
id: str
name: str
url: str
username: str
password: str # Will be masked in UI
is_default: bool = False
class GlobalSettings(BaseModel):
backup_path: str
default_environment_id: Optional[str] = None
class AppConfig(BaseModel):
environments: List[Environment] = []
settings: GlobalSettings
```
### 1.2 Configuration Manager
A new class `ConfigManager` in `backend/src/core/config_manager.py` will handle the following (sketched after the list):
- Loading/saving `AppConfig` to `config.json`.
- CRUD operations for environments.
- Updating global settings.
- Validating backup paths and Superset URLs.
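
A minimal sketch of this class, assuming Pydantic v2 (`model_validate_json` / `model_dump_json`) and the models from §1.1; any method names beyond the responsibilities listed above are illustrative, not final:

```python
from pathlib import Path
from typing import List

from backend.src.core.config_models import AppConfig, Environment, GlobalSettings


class ConfigManager:
    def __init__(self, config_path: str = "config.json"):
        self._path = Path(config_path)
        self._config = self._load()

    def _load(self) -> AppConfig:
        # Start from an empty config when the file does not exist yet.
        if self._path.exists():
            return AppConfig.model_validate_json(self._path.read_text())
        return AppConfig(settings=GlobalSettings(backup_path=""))

    def _save(self) -> None:
        self._path.write_text(self._config.model_dump_json(indent=2))

    def get_environments(self) -> List[Environment]:
        return self._config.environments

    def add_environment(self, env: Environment) -> None:
        # Reject duplicate IDs so the UI cannot create ambiguous targets.
        if any(e.id == env.id for e in self._config.environments):
            raise ValueError(f"Environment '{env.id}' already exists")
        self._config.environments.append(env)
        self._save()
```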
### 1.3 API Endpoints
New router `backend/src/api/routes/settings.py`, with the endpoints below (a sketch follows the list):
- `GET /settings`: Retrieve all settings (masking passwords).
- `PATCH /settings/global`: Update global settings (backup path, etc.).
- `GET /settings/environments`: List all environments.
- `POST /settings/environments`: Add a new environment.
- `PUT /settings/environments/{id}`: Update an environment.
- `DELETE /settings/environments/{id}`: Remove an environment.
- `POST /settings/environments/{id}/test`: Test connection to a specific environment.
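
A hedged sketch of the router wiring, assuming the `ConfigManager` above and a `get_config_manager` dependency provider (see §1.4); the `get_config` accessor and response shapes are illustrative assumptions:

```python
from fastapi import APIRouter, Depends, HTTPException

from backend.src.core.config_manager import ConfigManager
from backend.src.core.config_models import Environment
from backend.src.dependencies import get_config_manager

router = APIRouter(prefix="/settings", tags=["settings"])


@router.get("")
def get_settings(cm: ConfigManager = Depends(get_config_manager)):
    config = cm.get_config()  # hypothetical accessor returning AppConfig
    # Mask credentials before they leave the backend (INV-001).
    for env in config.environments:
        env.password = "********"
    return config


@router.post("/environments", status_code=201)
def add_environment(env: Environment, cm: ConfigManager = Depends(get_config_manager)):
    try:
        cm.add_environment(env)
    except ValueError as exc:
        raise HTTPException(status_code=409, detail=str(exc))
    return {"status": "ok"}
```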
### 1.4 Integration
- Update `backend/src/dependencies.py` to provide a singleton `ConfigManager` (see the sketch after this list).
- Refactor `superset_tool/utils/init_clients.py` to fetch environment details from `ConfigManager` instead of hardcoded values.
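
One possible shape for the singleton provider; using `lru_cache` here is just one way to memoize a shared instance, not a committed design:

```python
from functools import lru_cache

from backend.src.core.config_manager import ConfigManager


@lru_cache(maxsize=1)
def get_config_manager() -> ConfigManager:
    # A single shared instance; FastAPI's Depends() calls this provider.
    return ConfigManager("config.json")
```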
## 2. Frontend Implementation
### 2.1 Settings Page
- Create `frontend/src/pages/Settings.svelte`.
- Add a "Settings" link to the main navigation (likely in `App.svelte`).
### 2.2 Components
- **EnvironmentList**: Displays a table/list of configured environments with Edit/Delete buttons.
- **EnvironmentForm**: A modal or inline form for adding/editing environments.
- **GlobalSettingsForm**: Form for editing the backup storage path.
### 2.3 API Integration
- Add functions to `frontend/src/lib/api.js` for interacting with the new settings endpoints.
## 3. Workflow Diagram
```mermaid
graph TD
UI[Web UI - Settings Page] --> API[FastAPI Settings Router]
API --> CM[Config Manager]
CM --> JSON[(config.json)]
CM -->|Test Connection| SS[Superset Instance]
Plugins[Plugins - Backup/Migration] -->|Get Env/Path| CM
```
## 4. Implementation Steps
1. **Backend Core**:
- Create `config_models.py` and `config_manager.py`.
- Implement file-based persistence.
2. **Backend API**:
- Implement `settings.py` router.
- Register router in `app.py`.
3. **Frontend UI**:
- Create `Settings.svelte` and necessary components.
- Implement API calls and state management.
4. **Refactoring**:
- Update `init_clients.py` to use the new configuration system.
- Ensure existing plugins (Backup, Migration) use the configured settings.
5. **Validation**:
- Add path existence/write checks for backup storage.
- Add URL/Connection checks for Superset environments.

77 specs/002-app-settings/spec.md Executable file

@@ -0,0 +1,77 @@
# Feature Specification: Add web application settings mechanism
**Feature Branch**: `002-app-settings`
**Created**: 2025-12-20
**Status**: Draft
**Input**: User description: "Let's introduce a full-fledged settings mechanism for the web application. What is definitely needed: 1. An interface for adding environments (different Superset servers) 2. An interface for configuring the file storage for backups"
## User Scenarios & Testing *(mandatory)*
### User Story 1 - Manage Superset Environments (Priority: P1)
As an administrator, I want to add, edit, and remove Superset environment configurations (URL, credentials, name) so that the application can interact with multiple Superset instances.
**Why this priority**: This is the core functionality required for the tool to be useful across different stages (dev/prod) or different Superset clusters.
**Independent Test**: Can be fully tested by adding a new environment, verifying it appears in the list, and then deleting it.
**Acceptance Scenarios**:
1. **Given** the settings page is open, **When** I enter valid Superset connection details and save, **Then** the new environment is added to the list of available targets.
2. **Given** an existing environment, **When** I update its URL and save, **Then** the system uses the new URL for subsequent operations.
3. **Given** an existing environment, **When** I delete it, **Then** it is no longer available for selection in other parts of the application.
---
### User Story 2 - Configure Backup Storage (Priority: P1)
As an administrator, I want to configure the file path or storage location for backups so that I can control where system backups are stored.
**Why this priority**: Essential for the backup plugin to function correctly and for users to manage disk space/storage locations.
**Independent Test**: Can be tested by setting a backup path and verifying that the system validates the path's existence or accessibility.
**Acceptance Scenarios**:
1. **Given** the storage settings section, **When** I provide a valid local or network path, **Then** the system saves this as the default backup location.
2. **Given** an invalid or inaccessible path, **When** I try to save, **Then** the system displays an error message and does not update the setting.
---
### Edge Cases
- **Duplicate Environments**: What happens when a user tries to add an environment with a name that already exists? (System should prevent duplicates).
- **Invalid Credentials**: How does the system handle saving environments with incorrect credentials? (System should ideally validate connection on save).
- **Path Permissions**: How does the system handle a backup path that is valid but the application lacks write permissions for? (System should check write permissions).
## Requirements *(mandatory)*
### Functional Requirements
- **FR-001**: System MUST provide a dedicated settings interface in the web UI.
- **FR-002**: System MUST allow users to create multiple named "Environments" for Superset.
- **FR-003**: Each Environment MUST include: Name, Base URL, and Authentication details (e.g., Username/Password or API Key).
- **FR-004**: System MUST allow setting a global "Backup Storage Path".
- **FR-005**: System MUST persist these settings across application restarts.
- **FR-006**: System MUST validate the Superset URL format before saving.
- **FR-007**: System MUST verify that the Backup Storage Path is writable by the application.
- **FR-008**: System MUST allow selecting a "Default" environment for operations.
### System Invariants (Constitution Check)
- **INV-001**: Sensitive credentials (passwords/keys) MUST NOT be displayed in plain text after being saved.
- **INV-002**: At least one environment MUST be configured for the application to perform Superset-related tasks.
### Key Entities *(include if feature involves data)*
- **Environment**: Represents a Superset instance. Attributes: Unique ID, Name, URL, Credentials, IsDefault flag.
- **AppConfiguration**: Singleton entity representing global settings. Attributes: BackupPath, DefaultEnvironmentID.
## Success Criteria *(mandatory)*
### Measurable Outcomes
- **SC-001**: Users can add a new Superset environment in under 30 seconds.
- **SC-002**: 100% of saved environments are immediately available for use in backup/migration tasks.
- **SC-003**: System prevents saving invalid backup paths 100% of the time.
- **SC-004**: Configuration changes take effect without requiring a manual restart of the backend services.

141 specs/002-app-settings/tasks.md New file

@@ -0,0 +1,141 @@
---
description: "Task list for implementing the web application settings mechanism"
---
# Tasks: Web Application Settings Mechanism
**Input**: Design documents from `specs/002-app-settings/`
**Prerequisites**: plan.md (required), spec.md (required for user stories)
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
## Format: `[ID] [P?] [Story] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions
## Phase 1: Setup (Shared Infrastructure)
**Purpose**: Project initialization and basic structure
- [x] T001 Create project structure for settings management in `backend/src/core/` and `backend/src/api/routes/`
- [x] T002 [P] Initialize `frontend/src/pages/Settings.svelte` placeholder
---
## Phase 2: Foundational (Blocking Prerequisites)
**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
**⚠️ CRITICAL**: No user story work can begin until this phase is complete
- [x] T003 Implement configuration models in `backend/src/core/config_models.py`
- [x] T004 Implement `ConfigManager` for JSON persistence in `backend/src/core/config_manager.py`
- [x] T005 [P] Update `backend/src/dependencies.py` to provide `ConfigManager` singleton
- [x] T006 [P] Setup API routing for settings in `backend/src/api/routes/settings.py` and register in `backend/src/app.py`
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel
---
## Phase 3: User Story 1 - Manage Superset Environments (Priority: P1) 🎯 MVP
**Goal**: Add, edit, and remove Superset environment configurations (URL, credentials, name) so that the application can interact with multiple Superset instances.
**Independent Test**: Add a new environment, verify it appears in the list, and then delete it.
### Implementation for User Story 1
- [x] T007 [P] [US1] Implement environment CRUD logic in `backend/src/core/config_manager.py`
- [x] T008 [US1] Implement environment API endpoints in `backend/src/api/routes/settings.py`
- [x] T009 [P] [US1] Add environment API methods to `frontend/src/lib/api.js`
- [x] T010 [US1] Implement environment list and form UI in `frontend/src/pages/Settings.svelte`
- [x] T011 [US1] Implement connection test logic in `backend/src/api/routes/settings.py`
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
---
## Phase 4: User Story 2 - Configure Backup Storage (Priority: P1)
**Goal**: Configure the file path or storage location for backups so that I can control where system backups are stored.
**Independent Test**: Set a backup path and verify that the system validates the path's existence or accessibility.
### Implementation for User Story 2
- [x] T012 [P] [US2] Implement global settings update logic in `backend/src/core/config_manager.py`
- [x] T013 [US2] Implement global settings API endpoints in `backend/src/api/routes/settings.py`
- [x] T014 [P] [US2] Add global settings API methods to `frontend/src/lib/api.js`
- [x] T015 [US2] Implement backup storage configuration UI in `frontend/src/pages/Settings.svelte`
- [x] T016 [US2] Add path validation and write permission checks in `backend/src/api/routes/settings.py`
**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
---
## Phase 5: Polish & Cross-Cutting Concerns
**Purpose**: Improvements that affect multiple user stories
- [x] T017 Refactor `superset_tool/utils/init_clients.py` to use `ConfigManager` for environment details
- [x] T018 Update existing plugins (Backup, Migration) to fetch settings from `ConfigManager`
- [x] T019 [P] Add password masking in `backend/src/api/routes/settings.py` and UI
- [x] T020 [P] Add "Settings" link to navigation in `frontend/src/App.svelte`
- [x] T021 [P] Documentation updates for settings mechanism in `docs/`
---
## Dependencies & Execution Order
### Phase Dependencies
- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
- User stories can then proceed in parallel (if staffed)
- Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete
### User Story Dependencies
- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P1)**: Can start after Foundational (Phase 2) - Independent of US1
### Parallel Opportunities
- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once Foundational phase completes, all user stories can start in parallel
- Models and API methods within a story marked [P] can run in parallel
---
## Parallel Example: User Story 1
```bash
# Launch backend and frontend tasks for User Story 1 together:
Task: "Implement environment CRUD logic in backend/src/core/config_manager.py"
Task: "Add environment API methods to frontend/src/lib/api.js"
```
---
## Implementation Strategy
### MVP First (User Story 1 Only)
1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready
### Incremental Delivery
1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Each story adds value without breaking previous stories

0 superset_tool/__init__.py Normal file → Executable file
0 superset_tool/client.py Normal file → Executable file
0 superset_tool/exceptions.py Normal file → Executable file
0 superset_tool/models.py Normal file → Executable file
0 superset_tool/requirements.txt Normal file → Executable file
0 superset_tool/utils/__init__.py Normal file → Executable file
0 superset_tool/utils/dataset_mapper.py Normal file → Executable file
0 superset_tool/utils/fileio.py Normal file → Executable file
66 superset_tool/utils/init_clients.py Normal file → Executable file

@@ -10,27 +10,70 @@
# [SECTION: IMPORTS]
import keyring
import os
from typing import Dict, List, Optional, Any
from superset_tool.models import SupersetConfig
from superset_tool.client import SupersetClient
from superset_tool.utils.logger import SupersetLogger
# [/SECTION]

# [DEF:setup_clients:Function]
# @PURPOSE: Initializes and returns a dictionary of `SupersetClient` instances.
# @PRE: `logger` must be a valid `SupersetLogger` instance.
# @POST: Returns a dictionary of initialized clients.
# @THROW: Exception - On any other initialization error.
# @RELATION: CREATES_INSTANCE_OF -> SupersetConfig
# @RELATION: CREATES_INSTANCE_OF -> SupersetClient
# @PARAM: logger (SupersetLogger) - Logger instance used to record the process.
# @PARAM: custom_envs (List[Dict[str, Any]]) - List of user-defined environment settings.
# @RETURN: Dict[str, SupersetClient] - Dictionary keyed by environment name, with `SupersetClient` values.
def setup_clients(logger: SupersetLogger, custom_envs: Optional[List[Any]] = None) -> Dict[str, SupersetClient]:
    logger.info("[setup_clients][Enter] Starting Superset clients initialization.")
    clients = {}
    try:
        # Try to load from ConfigManager if available
        try:
            from backend.src.dependencies import get_config_manager
            config_manager = get_config_manager()
            envs = config_manager.get_environments()
            if envs:
                logger.info("[setup_clients][Action] Loading environments from ConfigManager")
                for env in envs:
                    logger.debug("[setup_clients][State] Creating config for environment: %s", env.name)
                    config = SupersetConfig(
                        env=env.name,
                        base_url=env.url,
                        auth={"provider": "db", "username": env.username, "password": env.password, "refresh": "true"},
                        verify_ssl=False,
                        timeout=30,
                        logger=logger
                    )
                    clients[env.name] = SupersetClient(config, logger)
                return clients
        except Exception as e:  # ImportError is already covered by Exception
            logger.debug(f"[setup_clients][State] ConfigManager not available or failed: {e}")

        if custom_envs:
            for env in custom_envs:
                # Handle both dict and object (like Pydantic model)
                env_name = str(getattr(env, 'name', env.get('name') if isinstance(env, dict) else "unknown"))
                base_url = str(getattr(env, 'url', env.get('url') if isinstance(env, dict) else ""))
                username = str(getattr(env, 'username', env.get('username') if isinstance(env, dict) else ""))
                password = str(getattr(env, 'password', env.get('password') if isinstance(env, dict) else ""))
                logger.debug("[setup_clients][State] Creating config for custom environment: %s", env_name)
                config = SupersetConfig(
                    env=env_name,
                    base_url=base_url,
                    auth={"provider": "db", "username": username, "password": password, "refresh": "true"},
                    verify_ssl=False,
                    timeout=30,
                    logger=logger
                )
                clients[env_name] = SupersetClient(config, logger)
        else:
            # Fallback to hardcoded environments with keyring
            environments = {
                "dev": "https://devta.bi.dwh.rusal.com/api/v1",
                "prod": "https://prodta.bi.dwh.rusal.com/api/v1",
@@ -39,23 +82,22 @@ def setup_clients(logger: SupersetLogger) -> Dict[str, SupersetClient]:
"uatta": "https://uatta.bi.dwh.rusal.com/api/v1", "uatta": "https://uatta.bi.dwh.rusal.com/api/v1",
"dev5":"https://dev.bi.dwh.rusal.com/api/v1" "dev5":"https://dev.bi.dwh.rusal.com/api/v1"
} }
try:
for env_name, base_url in environments.items(): for env_name, base_url in environments.items():
logger.debug("[setup_clients][State] Creating config for environment: %s", env_name.upper()) logger.debug("[setup_clients][State] Creating config for environment: %s", env_name.upper())
password = keyring.get_password("system", f"{env_name} migrate") password = keyring.get_password("system", f"{env_name} migrate")
if not password: if not password:
raise ValueError(f"Пароль для '{env_name} migrate' не найден в keyring.") logger.warning(f"Пароль для '{env_name} migrate' не найден в keyring. Пропускаем.")
continue
config = SupersetConfig( config = SupersetConfig(
env=env_name, env=env_name,
base_url=base_url, base_url=base_url,
auth={"provider": "db", "username": "migrate_user", "password": password, "refresh": True}, auth={"provider": "db", "username": "migrate_user", "password": password, "refresh": "true"},
verify_ssl=False verify_ssl=False,
timeout=30,
logger=logger
) )
clients[env_name] = SupersetClient(config, logger) clients[env_name] = SupersetClient(config, logger)
logger.debug("[setup_clients][State] Client for %s created successfully.", env_name.upper())
logger.info("[setup_clients][Exit] All clients (%s) initialized successfully.", ', '.join(clients.keys())) logger.info("[setup_clients][Exit] All clients (%s) initialized successfully.", ', '.join(clients.keys()))
return clients return clients

0 superset_tool/utils/logger.py Normal file → Executable file
0 superset_tool/utils/network.py Normal file → Executable file
0 superset_tool/utils/whiptail_fallback.py Normal file → Executable file
0 test_update_yamls.py Normal file → Executable file