Initial Commit

2026-02-12 16:50:29 -05:00
commit a1da041f14
74 changed files with 6140 additions and 0 deletions

.dockerignore (new file)

@@ -0,0 +1,14 @@
.venv/
__pycache__/
*.pyc
.env
.git/
.opencode/
openspec/
data/
*.db
*.db-journal
*.db-wal
.ruff_cache/
.pytest_cache/
node_modules/

.env (new file)

@@ -0,0 +1,6 @@
PERPLEXITY_API_KEY=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
IMAGE_QUALITY=85
OPENROUTER_API_KEY=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
UMAMI_SCRIPT_URL=https://wa.santhoshj.com/script.js
UMAMI_WEBSITE_ID=b4315ab2-3075-44b7-b91a-d08497771c14
RETENTION_DAYS=30

.env.example (new file)

@@ -0,0 +1,6 @@
PERPLEXITY_API_KEY=pplx-xxxxxxxxxxxxxxxxxxxx
IMAGE_QUALITY=85
OPENROUTER_API_KEY=
UMAMI_SCRIPT_URL=
UMAMI_WEBSITE_ID=
RETENTION_DAYS=30


@@ -0,0 +1,149 @@
---
description: Implement tasks from an OpenSpec change (Experimental)
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx-apply add-auth`). If omitted, check whether it can be inferred from the conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx-apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx-continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
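The checkbox flip can be sketched as a small `sed` edit. This is a minimal sketch assuming GNU `sed -i` and a task description free of regex metacharacters; `mark_task_done` is a hypothetical helper, not an OpenSpec command:
```shell
# Hypothetical helper: flip "- [ ] <task>" to "- [x] <task>" in a tasks file.
# Assumes GNU sed (-i with no suffix) and a literal task description.
mark_task_done() {
  local file="$1" task="$2"
  sed -i "s/^- \[ \] ${task}\$/- [x] ${task}/" "$file"
}
```
In practice, update the tasks file immediately after finishing each task rather than batching the edits.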
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! You can archive this change with `/opsx-archive`.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@@ -0,0 +1,154 @@
---
description: Archive a completed change in the experimental workflow
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx-archive` (e.g., `/opsx-archive add-auth`). If omitted, check whether it can be inferred from the conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
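Counting the checkboxes can be sketched with `grep -c`. A sketch; `task_progress` is a hypothetical helper, and the fallbacks cover `grep -c` exiting non-zero when a pattern never matches:
```shell
# Hypothetical helper: report checkbox progress for a tasks file.
task_progress() {
  local file="$1" done_n open_n
  done_n=$(grep -c '^- \[x\]' "$file") || done_n=0
  open_n=$(grep -c '^- \[ \]' "$file") || open_n=0
  echo "$done_n complete, $open_n incomplete"
}
```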
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If the user chooses sync, execute the `/opsx-sync` logic first. Unless the user cancels, proceed to archive regardless of the sync choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
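The two commands above can be combined into a guarded move. A sketch: the change name is an example, and `date +%F` produces the `YYYY-MM-DD` prefix:
```shell
# Hypothetical helper: move a change into the date-stamped archive location.
# Fails without moving anything if the target already exists.
archive_change() {
  local name="$1"
  local target="openspec/changes/archive/$(date +%F)-${name}"
  if [ -e "$target" ]; then
    echo "Archive target already exists: $target" >&2
    return 1
  fi
  mkdir -p openspec/changes/archive
  mv "openspec/changes/${name}" "$target"
}
```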
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use the artifact graph (`openspec status --json`) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use /opsx-sync approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,239 @@
---
description: Archive multiple completed changes at once
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
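Extracting the requirement names can be sketched as follows (assumes the `### Requirement: <name>` heading convention; `requirement_names` is a hypothetical helper):
```shell
# Hypothetical helper: list requirement names from one or more delta spec files.
requirement_names() {
  grep -h '^### Requirement: ' "$@" | sed 's/^### Requirement: //'
}
```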
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
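The map above can be sketched as a shell pass over the selected changes (a sketch assuming the `openspec/changes/<name>/specs/<capability>/` layout; `spec_conflicts` is a hypothetical helper):
```shell
# Hypothetical helper: print "capability: change change..." for every
# capability touched by 2+ of the selected changes.
spec_conflicts() {
  local change cap
  for change in "$@"; do
    for cap in openspec/changes/"$change"/specs/*/; do
      [ -d "$cap" ] && printf '%s %s\n' "$(basename "$cap")" "$change"
    done
  done | awk '{members[$1] = members[$1] " " $2; n[$1]++}
              END {for (c in n) if (n[c] > 1) print c ":" members[c]}'
}
```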
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx-new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others


@@ -0,0 +1,111 @@
---
description: Continue working on a change - create the next artifact (Experimental)
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name after `/opsx-continue` (e.g., `/opsx-continue add-auth`). If omitted, check whether it can be inferred from the conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change with `/opsx-apply` or archive it with `/opsx-archive`."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Run `/opsx-continue` to create the next artifact"
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
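The existence check can be as simple as the following sketch (the path to test comes from `outputPath` in the instructions JSON; `artifact_written` is a hypothetical helper):
```shell
# Hypothetical helper: true only if the artifact file exists and is non-empty.
artifact_written() {
  [ -s "$1" ]
}
```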


@@ -0,0 +1,171 @@
---
description: Enter explore mode - think through ideas, investigate problems, clarify requirements
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx-new` or `/opsx-ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx-explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx-new` or `/opsx-ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? `/opsx-new` or `/opsx-ff`"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own


@@ -0,0 +1,91 @@
---
description: Create a change and generate all artifacts needed for implementation in one go
---
Fast-forward through artifact creation - generate everything needed to start implementation.
**Input**: The argument after `/opsx-ff` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
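The name derivation can be sketched as a rough heuristic: lowercase the text and collapse non-alphanumeric runs to hyphens (unlike the example above, this sketch does not abbreviate words):
```shell
# Rough kebab-case derivation: lowercase, collapse non-alphanumerics to "-",
# trim leading/trailing hyphens.
kebab() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//'
}
```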
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
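The `applyRequires` completion check can be sketched with `jq` (an assumption: `jq` is available, and the field names follow the status JSON described above; `all_apply_ready` is a hypothetical helper):
```shell
# Hypothetical helper: reads status JSON on stdin; exits 0 only when every
# applyRequires artifact has status "done".
all_apply_ready() {
  jq -e '(.applyRequires - (.artifacts | map(select(.status == "done") | .id))) | length == 0' >/dev/null
}
```
Usage would look like `openspec status --change "<name>" --json | all_apply_ready`.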
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx-apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use the `template` as a starting point, filling in based on context
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next


@@ -0,0 +1,66 @@
---
description: Start a new change using the experimental artifact workflow (OPSX)
---
Start a new change using the experimental artifact-driven approach.
**Input**: The argument after `/opsx-new` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema. Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Run `/opsx-continue` or just describe what this change is about and I'll draft it."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest using `/opsx-continue` instead
- Pass --schema if using a non-default workflow


@@ -0,0 +1,522 @@
---
description: Guided onboarding - walk through a complete OpenSpec workflow cycle with narration
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx-onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
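The marker scan itself is a single grep. A throwaway fixture stands in for the real source tree below; in an actual run you would point grep at the project's `src/`:

```shell
# Tiny fixture in place of the project's source tree (illustrative only).
mkdir -p demo-src
printf '// TODO: validate input\nconst x = 1;\n' > demo-src/util.ts

# -r recurse, -n line numbers, -E extended regex for the marker list.
grep -rnE 'TODO|FIXME|HACK|XXX' demo-src
# -> demo-src/util.ts:1:// TODO: validate input
```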
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx-explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx-explore` | Think through problems before/during work |
| `/opsx-new` | Start a new change, step through artifacts |
| `/opsx-ff` | Fast-forward: create all artifacts at once |
| `/opsx-continue` | Continue working on an existing change |
| `/opsx-apply` | Implement tasks from a change |
| `/opsx-verify` | Verify implementation matches artifacts |
| `/opsx-archive` | Archive a completed change |
---
## What's Next?
Try `/opsx-new` or `/opsx-ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx-continue <name>` - Resume artifact creation
- `/opsx-apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx-explore` | Think through problems (no code changes) |
| `/opsx-new <name>` | Start a new change, step by step |
| `/opsx-ff <name>` | Fast-forward: all artifacts at once |
| `/opsx-continue <name>` | Continue an existing change |
| `/opsx-apply <name>` | Implement tasks |
| `/opsx-verify <name>` | Verify implementation |
| `/opsx-archive <name>` | Archive when done |
Try `/opsx-new` to start your first change, or `/opsx-ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice


@@ -0,0 +1,131 @@
---
description: Sync delta specs from a change to main specs
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name after `/opsx-sync` (e.g., `/opsx-sync add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
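   Locating the delta files and the operation sections they declare can be sketched as below; the fixture paths are placeholders for `openspec/changes/<name>/specs/`:

   ```shell
   # Illustrative fixture standing in for a real change directory.
   mkdir -p demo-changes/add-auth/specs/auth
   cat > demo-changes/add-auth/specs/auth/spec.md <<'EOF'
   ## ADDED Requirements
   ### Requirement: Login
   ## MODIFIED Requirements
   ### Requirement: Session Timeout
   EOF

   # List each delta spec and the operation sections it contains.
   for f in demo-changes/*/specs/*/spec.md; do
     echo "$f:"
     grep -E '^## (ADDED|MODIFIED|REMOVED|RENAMED) Requirements' "$f"
   done
   # -> demo-changes/add-auth/specs/auth/spec.md:
   # -> ## ADDED Requirements
   # -> ## MODIFIED Requirements
   ```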
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running twice should give same result


@@ -0,0 +1,161 @@
---
description: Verify implementation matches change artifacts before archiving
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name after `/opsx-verify` (e.g., `/opsx-verify add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
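   Extracting the requirement names that drive the codebase search can be sketched as follows; the fixture content is illustrative:

   ```shell
   # Illustrative delta spec in place of openspec/changes/<name>/specs/.
   mkdir -p demo-specs/auth
   cat > demo-specs/auth/spec.md <<'EOF'
   ## ADDED Requirements
   ### Requirement: Password Reset
   ### Requirement: Rate Limiting
   EOF

   # One requirement name per line, ready to search for in the codebase.
   grep -h '^### Requirement:' demo-specs/*/spec.md \
     | sed 's/^### Requirement: //'
   # -> Password Reset
   # -> Rate Limiting
   ```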
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension    | Status            |
|--------------|-------------------|
| Completeness | X/Y tasks, N reqs |
| Correctness  | M/N reqs covered  |
| Coherence    | Followed / Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"


@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx-apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
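   The checkbox flip in the loop above can be sketched with `sed`; the task line and file name here are a hypothetical fixture, not the change's real tasks file:

   ```shell
   # Throwaway tasks file standing in for the change's tasks.md.
   printf -- '- [ ] 1.1 Add validation\n- [ ] 1.2 Write tests\n' > demo-tasks.md

   # Flip only the task just finished (GNU sed; use `sed -i ''` on macOS).
   sed -i 's/^- \[ \] 1\.1 /- [x] 1.1 /' demo-tasks.md
   cat demo-tasks.md
   # -> - [x] 1.1 Add validation
   # -> - [ ] 1.2 Write tests
   ```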
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
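   The count can be sketched as two greps over the tasks file; the fixture content below is illustrative:

   ```shell
   # Throwaway file standing in for openspec/changes/<name>/tasks.md.
   printf -- '- [x] 1.1 Done task\n- [ ] 1.2 Open task\n- [ ] 1.3 Open task\n' > demo-tasks.md

   # grep -c exits nonzero on zero matches, so guard it for `set -e` shells.
   done_count=$(grep -c '^- \[x\]' demo-tasks.md || true)
   open_count=$(grep -c '^- \[ \]' demo-tasks.md || true)
   echo "$done_count complete, $open_count incomplete"
   # -> 1 complete, 2 incomplete
   ```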
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute /opsx-sync logic (use the openspec-sync-specs skill). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
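   The date-prefixed naming and the target-exists guard can be sketched together; the paths below are a throwaway fixture, not a real change directory:

   ```shell
   # Throwaway fixture in place of openspec/changes/.
   mkdir -p demo-arch/changes/add-auth demo-arch/changes/archive

   target="demo-arch/changes/archive/$(date +%Y-%m-%d)-add-auth"
   if [ -e "$target" ]; then
     # Target exists: fail rather than clobber an earlier archive.
     echo "error: $target already exists" >&2
   else
     mv demo-arch/changes/add-auth "$target"
     echo "archived to $target"
   fi
   ```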
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,246 @@
---
name: openspec-bulk-archive-change
description: Archive multiple completed changes at once. Use when archiving several parallel changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
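The task counting and requirement extraction above can be sketched with standard tools. Sample data stands in for a real change directory:

```shell
# Sketch: count tasks and list requirement names from a change (sample data)
dir=$(mktemp -d)                       # stand-in for openspec/changes/<name>
mkdir -p "$dir/specs/auth"
printf -- '- [x] 1. Scaffold module\n- [x] 2. Add endpoint\n- [ ] 3. Write tests\n' > "$dir/tasks.md"
printf -- '### Requirement: OAuth Provider Integration\n' > "$dir/specs/auth/spec.md"
# grep -c counts matching lines; brackets are escaped for basic regex
done_count=$(grep -c '^- \[x\]' "$dir/tasks.md")
open_count=$(grep -c '^- \[ \]' "$dir/tasks.md")
# sed prints only the requirement names, with the heading prefix stripped
reqs=$(sed -n 's/^### Requirement: //p' "$dir/specs/auth/spec.md")
echo "$done_count done, $open_count open; requirements: $reqs"
```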
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
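The conflict map reduces to finding duplicate capability directories across the selected changes. A sketch against a sample layout, where `uniq -d` surfaces any capability touched by two or more changes:

```shell
# Sketch: find capabilities touched by 2+ selected changes (sample layout)
root=$(mktemp -d)                      # stand-in for openspec/changes/
mkdir -p "$root/add-oauth/specs/auth" "$root/add-jwt/specs/auth" "$root/add-rest/specs/api"
# One capability name per delta spec dir; duplicates after sorting are conflicts
conflicts=$(for spec in "$root"/*/specs/*/; do basename "$spec"; done | sort | uniq -d)
echo "conflicting capabilities: $conflicts"
```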
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth           | Done      | 4/4   | 1 delta | auth (!)  | Ready* |
| add-jwt             | Done      | 3/3   | 1 delta | auth (!)  | Ready* |
| add-verify-skill    | 1 left    | 2/5   | None    | None      | Warn   |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make it clear that they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 4 changes:
- schema-management -> archive/2026-01-19-schema-management/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
- add-jwt -> archive/2026-01-19-add-jwt/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 5 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx-new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others


@@ -0,0 +1,118 @@
---
name: openspec-continue-change
description: Continue working on an OpenSpec change by creating the next artifact. Use when the user wants to progress their change, create the next artifact, or continue their workflow.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Want to continue? Just ask me to continue or tell me what to do next."
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output


@@ -0,0 +1,290 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx-new` or `/opsx-ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx-new` or `/opsx-ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx-explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? /opsx-new or /opsx-ff"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change: /opsx-new <name>
- Fast-forward to tasks: /opsx-ff <name>
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own


@@ -0,0 +1,101 @@
---
name: openspec-ff-change
description: Fast-forward through OpenSpec artifact creation. Use when the user wants to quickly create all artifacts needed for implementation without stepping through each one individually.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Fast-forward through artifact creation - generate everything needed to start implementation in one go.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
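The kebab-case normalization can be sketched with `tr` and `sed`; shortening the result to a compact name (e.g., `add-user-auth`) remains a judgment call for the agent. The description is illustrative:

```shell
# Sketch: normalize a free-form description to kebab-case
desc="Add User Authentication!"
name=$(printf '%s' "$desc" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-*//; s/-*$//')   # squeeze non-alphanumerics to dashes, trim edges
echo "$name"
```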
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx-apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, suggest continuing that change instead
- Verify each artifact file exists after writing before proceeding to next
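The existence check in the last guardrail is a one-liner; a sketch with an illustrative output path (a temp file stands in for the artifact's `outputPath`):

```shell
# Sketch: confirm an artifact landed on disk before reporting progress
root=$(mktemp -d)
out="$root/proposal.md"                # stand-in for the artifact's outputPath
printf '## Why\nUsers need SSO.\n' > "$out"
if [ -s "$out" ]; then                 # -s: file exists and is non-empty
  verified=yes
else
  verified=no
fi
echo "artifact verified: $verified"
```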


@@ -0,0 +1,74 @@
---
name: openspec-new-change
description: Start a new OpenSpec change using the experimental artifact workflow. Use when the user wants to create a new feature, fix, or modification with a structured step-by-step approach.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Start a new change using the experimental artifact-driven approach.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema (e.g., `proposal` for spec-driven).
Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Just describe what this change is about and I'll draft it, or ask me to continue."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest continuing that change instead
- Pass --schema if using a non-default workflow


@@ -0,0 +1,529 @@
---
name: openspec-onboard
description: Guided onboarding for OpenSpec - walk through a complete workflow cycle with narration and real codebase work.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx-onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
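The scan items above map to a few grep patterns. A sketch against a sample file; the patterns are a starting point, not an exhaustive set:

```shell
# Sketch: scan for starter-task signals (sample file; patterns are illustrative)
root=$(mktemp -d)
printf '// TODO: validate input\nconsole.log("debug");\nconst x: any = 1;\n' > "$root/app.ts"
todos=$(grep -Ec 'TODO|FIXME|HACK' "$root/app.ts")
debugs=$(grep -Ec 'console\.(log|debug)|debugger' "$root/app.ts")
anys=$(grep -Ec ': any|as any' "$root/app.ts")
echo "$todos TODO-style, $debugs debug, $anys any-typed"
```

In a real repo you would run these recursively (`grep -rEc`) over source directories and skip vendored code.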
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx-explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
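For reference, the kebab-case derivation can be sketched as a small helper. This is illustrative only: the openspec CLI accepts whatever name you pass, and `derive_change_name` is an invented function, not part of the tool.

```python
import re

def derive_change_name(task_description: str) -> str:
    """Lower-case the description and join alphanumeric words with hyphens."""
    words = re.findall(r"[a-z0-9]+", task_description.lower())
    return "-".join(words)

print(derive_change_name("Add retry logic to image fetcher"))
# add-retry-logic-to-image-fetcher
```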
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
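To make the "read them as test cases" point concrete, a scenario transcribes almost mechanically into a test. This is a hypothetical sketch: `toggle_archive` is an invented example function, not part of any real capability.

```python
# Hypothetical function under test (invented for illustration).
def toggle_archive(item: dict) -> dict:
    """Flip the archived flag, leaving other fields untouched."""
    return {**item, "archived": not item["archived"]}

def test_scenario_user_archives_item() -> None:
    # WHEN the user archives an active item
    result = toggle_archive({"id": 1, "archived": False})
    # THEN the item is marked archived
    assert result["archived"] is True
    # AND the item id is preserved
    assert result["id"] == 1

test_scenario_user_archives_item()  # a WHEN/THEN/AND scenario, executed
```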
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx-explore` | Think through problems before/during work |
| `/opsx-new` | Start a new change, step through artifacts |
| `/opsx-ff` | Fast-forward: create all artifacts at once |
| `/opsx-continue` | Continue working on an existing change |
| `/opsx-apply` | Implement tasks from a change |
| `/opsx-verify` | Verify implementation matches artifacts |
| `/opsx-archive` | Archive a completed change |
---
## What's Next?
Try `/opsx-new` or `/opsx-ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx-continue <name>` - Resume artifact creation
- `/opsx-apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx-explore` | Think through problems (no code changes) |
| `/opsx-new <name>` | Start a new change, step by step |
| `/opsx-ff <name>` | Fast-forward: all artifacts at once |
| `/opsx-continue <name>` | Continue an existing change |
| `/opsx-apply <name>` | Implement tasks |
| `/opsx-verify <name>` | Verify implementation |
| `/opsx-archive <name>` | Archive when done |
Try `/opsx-new` to start your first change, or `/opsx-ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice



@@ -0,0 +1,138 @@
---
name: openspec-sync-specs
description: Sync delta specs from a change to main specs. Use when the user wants to update main specs with changes from a delta spec, without archiving the change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent: running it twice should give the same result
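The section and requirement headings in the format reference are regular enough to parse mechanically. A minimal sketch, assuming exactly the heading shapes shown above (real delta specs may interleave additional prose, which this parser simply skips):

```python
import re

DELTA = """\
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
## REMOVED Requirements
### Requirement: Deprecated Feature
"""

def requirements_by_section(delta_text: str) -> dict[str, list[str]]:
    """Group '### Requirement:' names under their '## <VERB> Requirements' section."""
    sections: dict[str, list[str]] = {}
    current = None
    for line in delta_text.splitlines():
        section = re.match(r"##\s+(ADDED|MODIFIED|REMOVED|RENAMED) Requirements", line)
        if section:
            current = section.group(1)
            sections.setdefault(current, [])
            continue
        req = re.match(r"###\s+Requirement:\s*(.+)", line)
        if req and current:
            sections[current].append(req.group(1).strip())
    return sections

print(requirements_by_section(DELTA))
# {'ADDED': ['New Feature'], 'REMOVED': ['Deprecated Feature']}
```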


@@ -0,0 +1,168 @@
---
name: openspec-verify-change
description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"
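The task-completion check in step 5 can be sketched as a small parser. This is illustrative; it assumes the standard `- [ ]` / `- [x]` markdown checkbox syntax and nothing more:

```python
import re

def task_progress(tasks_md: str) -> tuple[int, int]:
    """Return (complete, total) counts for markdown checkboxes."""
    boxes = re.findall(r"^\s*- \[( |x|X)\]", tasks_md, flags=re.MULTILINE)
    complete = sum(1 for box in boxes if box.lower() == "x")
    return complete, len(boxes)

sample = """\
## 1. Implement
- [x] 1.1 Add endpoint
- [ ] 1.2 Add tests
"""
print(task_progress(sample))  # (1, 2)
```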

Dockerfile Normal file

@@ -0,0 +1,22 @@
FROM python:3.11-slim AS base
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends \
libjpeg62-turbo-dev \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY backend/ ./backend/
COPY frontend/ ./frontend/
RUN mkdir -p /app/data /app/backend/static/images
EXPOSE 8000
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
CMD ["uvicorn", "backend.main:app", "--host", "0.0.0.0", "--port", "8000"]

README.md Normal file

@@ -0,0 +1,135 @@
# ClawFort — AI News Aggregation One-Pager
A stunning single-page website that automatically aggregates and displays AI news hourly using the Perplexity API.
## Quick Start
### Docker (Recommended)
```bash
cp .env.example .env
# Edit .env and set PERPLEXITY_API_KEY
docker compose up --build
```
Open http://localhost:8000
### Local Development
```bash
pip install -r requirements.txt
cp .env.example .env
# Edit .env and set PERPLEXITY_API_KEY
python -m uvicorn backend.main:app --reload --port 8000
```
## Force Fetch Command
Use the force-fetch command to run one immediate news ingestion cycle outside the hourly scheduler.
```bash
python -m backend.cli force-fetch
```
Common use cases:
- **Bootstrap**: Populate initial content right after first deployment.
- **Recovery**: Re-run ingestion after a failed provider/API cycle.
Command behavior:
- Reuses existing retry, fallback, dedup, image optimization, and persistence logic.
- Prints success output with stored item count, for example: `force-fetch succeeded: stored=3 elapsed=5.1s`
- Prints actionable error output on fatal failures and exits non-zero.
Exit codes:
- `0`: Command completed successfully (including runs that store zero new rows)
- `1`: Fatal command failure (for example missing API keys or unrecoverable runtime error)
## Multilingual Support
ClawFort supports English (`en`), Tamil (`ta`), and Malayalam (`ml`) content delivery.
- New articles are stored in English and translated to Tamil and Malayalam during ingestion.
- Translations are linked to the same base article and served by the existing news endpoints.
- If a requested translation is unavailable, the API falls back to English.
Language-aware API usage:
```bash
# Latest hero item in Tamil
curl "http://localhost:8000/api/news/latest?language=ta"
# Feed page in Malayalam
curl "http://localhost:8000/api/news?limit=10&language=ml"
```
Unsupported language codes default to English.
Frontend language selector behavior:
- Landing page includes a language selector (`English`, `Tamil`, `Malayalam`).
- Selected language is persisted in `localStorage` and mirrored in a client cookie.
- Returning users see content in their previously selected language.
## Environment Variables
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `PERPLEXITY_API_KEY` | Yes | — | Perplexity API key for news fetching |
| `IMAGE_QUALITY` | No | `85` | JPEG compression quality (1-100) |
| `OPENROUTER_API_KEY` | No | — | Fallback LLM provider API key |
| `RETENTION_DAYS` | No | `30` | Days to keep news before archiving |
| `UMAMI_SCRIPT_URL` | No | — | Umami analytics script URL |
| `UMAMI_WEBSITE_ID` | No | — | Umami website tracking ID |
## Architecture
- **Backend**: Python (FastAPI) + SQLAlchemy + APScheduler
- **Frontend**: Alpine.js + Tailwind CSS (CDN, no build step)
- **Database**: SQLite with 30-day retention
- **Container**: Single Docker container
## API Endpoints
| Method | Path | Description |
|--------|------|-------------|
| `GET` | `/` | Serve frontend |
| `GET` | `/api/news/latest` | Latest news item for hero block |
| `GET` | `/api/news` | Paginated news feed |
| `GET` | `/api/health` | Health check with news count |
| `GET` | `/config` | Frontend config (analytics) |
### GET /api/news
Query parameters:
- `cursor` (int, optional): Last item ID for pagination
- `limit` (int, default 10): Items per page (max 50)
- `exclude_hero` (int, optional): Hero item ID to exclude
Response:
```json
{
"items": [{ "id": 1, "headline": "...", "summary": "...", "source_url": "...", "image_url": "...", "image_credit": "...", "published_at": "...", "created_at": "..." }],
"next_cursor": 5,
"has_more": true
}
```
## Deployment
```bash
docker build -t clawfort .
docker run -d \
-e PERPLEXITY_API_KEY=pplx-xxx \
-v clawfort-data:/app/data \
-v clawfort-images:/app/backend/static/images \
-p 8000:8000 \
clawfort
```
Data persists across restarts via Docker volumes.
## Scheduled Jobs
- **Hourly**: Fetch latest AI news from Perplexity API
- **Nightly (3 AM)**: Archive news older than 30 days, delete archived items older than 60 days
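The two retention windows can be sketched as a simple cutoff computation. This is illustrative; the real jobs run inside the backend scheduler:

```python
import datetime

RETENTION_DAYS = 30      # archive news older than this
DELETE_AFTER_DAYS = 60   # drop archived news older than this

def cleanup_cutoffs(now: datetime.datetime) -> tuple[datetime.datetime, datetime.datetime]:
    """Return (archive_before, delete_before) timestamps for a given run time."""
    archive_before = now - datetime.timedelta(days=RETENTION_DAYS)
    delete_before = now - datetime.timedelta(days=DELETE_AFTER_DAYS)
    return archive_before, delete_before

archive_before, delete_before = cleanup_cutoffs(datetime.datetime(2026, 3, 1, 3, 0))
print(archive_before.date(), delete_before.date())  # 2026-01-30 2025-12-31
```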

backend/__init__.py Normal file

Binary file not shown.

backend/cli.py Normal file

@@ -0,0 +1,71 @@
import argparse
import asyncio
import logging
import os
import sys
import time
from backend import config
from backend.database import init_db
from backend.news_service import process_and_store_news
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
)
logger = logging.getLogger(__name__)
def build_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(prog="clawfort", description="ClawFort operations CLI")
subparsers = parser.add_subparsers(dest="command", required=True)
force_fetch_parser = subparsers.add_parser(
"force-fetch",
help="Run one immediate news fetch cycle",
description="Trigger one immediate news fetch run outside scheduler cadence.",
)
force_fetch_parser.set_defaults(handler=handle_force_fetch)
return parser
def validate_runtime() -> None:
if not config.PERPLEXITY_API_KEY and not config.OPENROUTER_API_KEY:
raise RuntimeError(
"No provider API key configured. Set PERPLEXITY_API_KEY or OPENROUTER_API_KEY in the environment."
)
def handle_force_fetch(_: argparse.Namespace) -> int:
start = time.monotonic()
try:
validate_runtime()
os.makedirs("data", exist_ok=True)
init_db()
stored_count = asyncio.run(process_and_store_news())
elapsed = time.monotonic() - start
print(f"force-fetch succeeded: stored={stored_count} elapsed={elapsed:.1f}s")
return 0
except Exception as exc:
logger.exception("force-fetch failed")
print(f"force-fetch failed: {exc}", file=sys.stderr)
print(
"Check API keys, network connectivity, and provider status, then retry the command.",
file=sys.stderr,
)
return 1
def main(argv: list[str] | None = None) -> int:
parser = build_parser()
args = parser.parse_args(argv)
handler = args.handler
return handler(args)
if __name__ == "__main__":
raise SystemExit(main())

backend/config.py Normal file

@@ -0,0 +1,23 @@
import os
from dotenv import load_dotenv
load_dotenv()
PERPLEXITY_API_KEY = os.getenv("PERPLEXITY_API_KEY", "")
OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY", "")
IMAGE_QUALITY = int(os.getenv("IMAGE_QUALITY", "85"))
RETENTION_DAYS = int(os.getenv("RETENTION_DAYS", "30"))
UMAMI_SCRIPT_URL = os.getenv("UMAMI_SCRIPT_URL", "")
UMAMI_WEBSITE_ID = os.getenv("UMAMI_WEBSITE_ID", "")
PERPLEXITY_API_URL = "https://api.perplexity.ai/chat/completions"
PERPLEXITY_MODEL = "sonar"
OPENROUTER_API_URL = "https://openrouter.ai/api/v1/chat/completions"
OPENROUTER_MODEL = "google/gemini-2.0-flash-001"
SUPPORTED_LANGUAGES = ["en", "ta", "ml"]
STATIC_IMAGES_DIR = os.path.join(os.path.dirname(__file__), "static", "images")
os.makedirs(STATIC_IMAGES_DIR, exist_ok=True)

backend/database.py Normal file

@@ -0,0 +1,27 @@
from collections.abc import Generator
from sqlalchemy import create_engine
from sqlalchemy.orm import DeclarativeBase, Session, sessionmaker
DATABASE_URL = "sqlite:///./data/clawfort.db"
engine = create_engine(DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(bind=engine, autocommit=False, autoflush=False)
class Base(DeclarativeBase):
pass
def get_db() -> Generator[Session, None, None]:
db = SessionLocal()
try:
yield db
finally:
db.close()
def init_db() -> None:
from backend.models import NewsItem, NewsTranslation # noqa: F401
Base.metadata.create_all(bind=engine)

backend/init_db.py Normal file

@@ -0,0 +1,16 @@
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from backend.database import init_db
def main() -> None:
os.makedirs("data", exist_ok=True)
init_db()
print("Database initialized successfully.")
if __name__ == "__main__":
main()

backend/main.py Normal file

@@ -0,0 +1,173 @@
import logging
import os
from apscheduler.schedulers.background import BackgroundScheduler
from fastapi import Depends, FastAPI, Query
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.responses import FileResponse
from sqlalchemy.orm import Session
from backend import config
from backend.database import get_db, init_db
from backend.models import NewsItem
from backend.news_service import scheduled_news_fetch
from backend.repository import (
archive_old_news,
delete_archived_news,
get_latest_news,
get_news_paginated,
get_translation,
normalize_language,
resolve_news_content,
)
from backend.schemas import HealthResponse, NewsItemResponse, PaginatedNewsResponse
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
)
logger = logging.getLogger(__name__)
app = FastAPI(title="ClawFort News API", version="0.1.0")
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
static_dir = os.path.join(os.path.dirname(__file__), "static")
app.mount("/static", StaticFiles(directory=static_dir), name="static")
scheduler = BackgroundScheduler()
def nightly_cleanup() -> None:
from backend.database import SessionLocal
db = SessionLocal()
try:
archived = archive_old_news(db, config.RETENTION_DAYS)
deleted = delete_archived_news(db, days_after_archive=60)
logger.info("Nightly cleanup: archived=%d, deleted=%d", archived, deleted)
finally:
db.close()
@app.on_event("startup")
async def startup_event() -> None:
if not config.PERPLEXITY_API_KEY:
logger.error("PERPLEXITY_API_KEY is not set — news fetching will fail")
os.makedirs("data", exist_ok=True)
init_db()
logger.info("Database initialized")
scheduler.add_job(scheduled_news_fetch, "interval", hours=1, id="news_fetch")
scheduler.add_job(nightly_cleanup, "cron", hour=3, minute=0, id="nightly_cleanup")
scheduler.start()
logger.info("Scheduler started: hourly news fetch + nightly cleanup")
@app.on_event("shutdown")
async def shutdown_event() -> None:
scheduler.shutdown(wait=False)
logger.info("Scheduler shut down")
@app.get("/api/news", response_model=PaginatedNewsResponse)
def api_get_news(
cursor: int | None = Query(None, description="Cursor for pagination (last item ID)"),
limit: int = Query(10, ge=1, le=50),
exclude_hero: int | None = Query(None, description="Hero item ID to exclude from feed"),
language: str = Query("en", description="Language code: en, ta, ml"),
db: Session = Depends(get_db),
) -> PaginatedNewsResponse:
lang = normalize_language(language)
items = get_news_paginated(db, cursor=cursor, limit=limit + 1, exclude_id=exclude_hero)
has_more = len(items) > limit
if has_more:
items = items[:limit]
next_cursor = items[-1].id if items and has_more else None
response_items: list[NewsItemResponse] = []
for item in items:
translation = None
if lang != "en":
translation = get_translation(db, item.id, lang)
headline, summary = resolve_news_content(item, translation)
response_items.append(
NewsItemResponse(
id=item.id,
headline=headline,
summary=summary,
source_url=item.source_url,
image_url=item.image_url,
image_credit=item.image_credit,
published_at=item.published_at,
created_at=item.created_at,
language=lang if translation is not None else "en",
)
)
return PaginatedNewsResponse(
items=response_items,
next_cursor=next_cursor,
has_more=has_more,
)
@app.get("/api/news/latest", response_model=NewsItemResponse | None)
def api_get_latest_news(
language: str = Query("en", description="Language code: en, ta, ml"),
db: Session = Depends(get_db),
) -> NewsItemResponse | None:
lang = normalize_language(language)
item = get_latest_news(db)
if not item:
return None
translation = None
if lang != "en":
translation = get_translation(db, item.id, lang)
headline, summary = resolve_news_content(item, translation)
return NewsItemResponse(
id=item.id,
headline=headline,
summary=summary,
source_url=item.source_url,
image_url=item.image_url,
image_credit=item.image_credit,
published_at=item.published_at,
created_at=item.created_at,
language=lang if translation is not None else "en",
)
@app.get("/api/health", response_model=HealthResponse)
def api_health(db: Session = Depends(get_db)) -> HealthResponse:
count = db.query(NewsItem).filter(NewsItem.archived.is_(False)).count()
return HealthResponse(status="ok", version="0.1.0", news_count=count)
frontend_dir = os.path.join(os.path.dirname(os.path.dirname(__file__)), "frontend")
@app.get("/")
async def serve_frontend() -> FileResponse:
return FileResponse(os.path.join(frontend_dir, "index.html"))
@app.get("/config")
async def serve_config() -> dict:
return {
"umami_script_url": config.UMAMI_SCRIPT_URL,
"umami_website_id": config.UMAMI_WEBSITE_ID,
"supported_languages": config.SUPPORTED_LANGUAGES,
"default_language": "en",
}

backend/models.py Normal file

@@ -0,0 +1,45 @@
import datetime
from sqlalchemy import Boolean, DateTime, ForeignKey, Integer, String, Text, UniqueConstraint
from sqlalchemy.orm import Mapped, mapped_column, relationship
from backend.database import Base
class NewsItem(Base):
__tablename__ = "news_items"
id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
headline: Mapped[str] = mapped_column(String(500), nullable=False, index=True)
summary: Mapped[str] = mapped_column(Text, nullable=False)
source_url: Mapped[str | None] = mapped_column(String(2000), nullable=True)
image_url: Mapped[str | None] = mapped_column(String(2000), nullable=True)
image_credit: Mapped[str | None] = mapped_column(String(500), nullable=True)
published_at: Mapped[datetime.datetime] = mapped_column(
DateTime, nullable=False, default=datetime.datetime.utcnow
)
created_at: Mapped[datetime.datetime] = mapped_column(
DateTime, nullable=False, default=datetime.datetime.utcnow
)
archived: Mapped[bool] = mapped_column(Boolean, nullable=False, default=False)
translations: Mapped[list["NewsTranslation"]] = relationship(
back_populates="news_item", cascade="all, delete-orphan"
)
class NewsTranslation(Base):
__tablename__ = "news_translations"
__table_args__ = (UniqueConstraint("news_item_id", "language", name="uq_news_item_language"),)
id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
news_item_id: Mapped[int] = mapped_column(
ForeignKey("news_items.id"), nullable=False, index=True
)
language: Mapped[str] = mapped_column(String(5), nullable=False, index=True)
headline: Mapped[str] = mapped_column(String(500), nullable=False)
summary: Mapped[str] = mapped_column(Text, nullable=False)
created_at: Mapped[datetime.datetime] = mapped_column(
DateTime, nullable=False, default=datetime.datetime.utcnow
)
news_item: Mapped[NewsItem] = relationship(back_populates="translations")

backend/news_service.py (new file, 299 lines)
@@ -0,0 +1,299 @@
import asyncio
import hashlib
import json
import logging
import os
import time
from io import BytesIO
import httpx
from PIL import Image
from backend import config
from backend.database import SessionLocal
from backend.repository import (
create_news,
create_translation,
headline_exists_within_24h,
translation_exists,
)
logger = logging.getLogger(__name__)
PLACEHOLDER_IMAGE_PATH = "/static/images/placeholder.png"
async def call_perplexity_api(query: str) -> dict | None:
headers = {
"Authorization": f"Bearer {config.PERPLEXITY_API_KEY}",
"Content-Type": "application/json",
}
payload = {
"model": config.PERPLEXITY_MODEL,
"messages": [
{
"role": "system",
"content": (
"You are a news aggregator. Return a JSON array of news items. "
"Each item must have: headline, summary (2-3 sentences), source_url, "
"image_url (a relevant image URL if available), image_credit. "
"Return between 3 and 5 items. Respond ONLY with valid JSON array, no markdown."
),
},
{"role": "user", "content": query},
],
"temperature": 0.3,
}
async with httpx.AsyncClient(timeout=30.0) as client:
    response = await client.post(config.PERPLEXITY_API_URL, headers=headers, json=payload)
    # Check the status before parsing: error responses may not carry a JSON body,
    # and we only need to parse the body once.
    response.raise_for_status()
    data = response.json()
    cost_info = {
        "model": config.PERPLEXITY_MODEL,
        "status": response.status_code,
        "usage": data.get("usage", {}),
    }
    logger.info("Perplexity API cost: %s", json.dumps(cost_info))
    return data
async def call_openrouter_api(query: str) -> dict | None:
if not config.OPENROUTER_API_KEY:
return None
headers = {
"Authorization": f"Bearer {config.OPENROUTER_API_KEY}",
"Content-Type": "application/json",
}
payload = {
"model": config.OPENROUTER_MODEL,
"messages": [
{
"role": "system",
"content": (
"You are a news aggregator. Return a JSON array of news items. "
"Each item must have: headline, summary (2-3 sentences), source_url, "
"image_url (a relevant image URL if available), image_credit. "
"Return between 3 and 5 items. Respond ONLY with valid JSON array, no markdown."
),
},
{"role": "user", "content": query},
],
"temperature": 0.3,
}
async with httpx.AsyncClient(timeout=30.0) as client:
    response = await client.post(config.OPENROUTER_API_URL, headers=headers, json=payload)
    # Check the status before parsing: error responses may not carry a JSON body,
    # and we only need to parse the body once.
    response.raise_for_status()
    data = response.json()
    cost_info = {
        "model": config.OPENROUTER_MODEL,
        "status": response.status_code,
        "usage": data.get("usage", {}),
    }
    logger.info("OpenRouter API cost: %s", json.dumps(cost_info))
    return data
async def call_perplexity_translation_api(
headline: str, summary: str, language: str
) -> dict | None:
headers = {
"Authorization": f"Bearer {config.PERPLEXITY_API_KEY}",
"Content-Type": "application/json",
}
payload = {
"model": config.PERPLEXITY_MODEL,
"messages": [
{
"role": "system",
"content": (
"Translate the given headline and summary to the target language. "
"Return only valid JSON object with keys: headline, summary. "
"No markdown, no extra text."
),
},
{
"role": "user",
"content": json.dumps(
{
"target_language": language,
"headline": headline,
"summary": summary,
}
),
},
],
"temperature": 0.1,
}
async with httpx.AsyncClient(timeout=30.0) as client:
response = await client.post(config.PERPLEXITY_API_URL, headers=headers, json=payload)
response.raise_for_status()
return response.json()
def parse_translation_response(response: dict) -> dict | None:
choices = response.get("choices") or [{}]
content = choices[0].get("message", {}).get("content", "")
content = content.strip()
if content.startswith("```"):
content = content.split("\n", 1)[-1].rsplit("```", 1)[0]
try:
parsed = json.loads(content)
if isinstance(parsed, dict):
headline = str(parsed.get("headline", "")).strip()
summary = str(parsed.get("summary", "")).strip()
if headline and summary:
return {"headline": headline, "summary": summary}
except json.JSONDecodeError:
logger.error("Failed to parse translation response: %s", content[:200])
return None
async def generate_translations(headline: str, summary: str) -> dict[str, dict]:
translations: dict[str, dict] = {}
language_names = {"ta": "Tamil", "ml": "Malayalam"}
if not config.PERPLEXITY_API_KEY:
return translations
for language_code, language_name in language_names.items():
try:
response = await call_perplexity_translation_api(headline, summary, language_name)
if response:
parsed = parse_translation_response(response)
if parsed:
translations[language_code] = parsed
except Exception:
logger.exception("Translation generation failed for %s", language_code)
return translations
def parse_news_response(response: dict) -> list[dict]:
choices = response.get("choices") or [{}]
content = choices[0].get("message", {}).get("content", "")
content = content.strip()
if content.startswith("```"):
content = content.split("\n", 1)[-1].rsplit("```", 1)[0]
try:
items = json.loads(content)
if isinstance(items, list):
return items
except json.JSONDecodeError:
logger.error("Failed to parse news response: %s", content[:200])
return []
async def download_and_optimize_image(image_url: str) -> str | None:
if not image_url:
return None
try:
async with httpx.AsyncClient(timeout=15.0, follow_redirects=True) as client:
response = await client.get(image_url)
response.raise_for_status()
img = Image.open(BytesIO(response.content))
if img.width > 1200:
ratio = 1200 / img.width
new_height = int(img.height * ratio)
img = img.resize((1200, new_height), Image.Resampling.LANCZOS)
if img.mode in ("RGBA", "P"):
img = img.convert("RGB")
filename = hashlib.md5(image_url.encode()).hexdigest() + ".jpg"
filepath = os.path.join(config.STATIC_IMAGES_DIR, filename)
img.save(filepath, "JPEG", quality=config.IMAGE_QUALITY, optimize=True)
return f"/static/images/{filename}"
except Exception:
logger.exception("Failed to download/optimize image: %s", image_url)
return None
async def fetch_news_with_retry(max_attempts: int = 3) -> list[dict]:
query = "What are the latest AI news from the last hour? Include source URLs and image URLs."
for attempt in range(max_attempts):
try:
response = await call_perplexity_api(query)
if response:
return parse_news_response(response)
except Exception:
wait = 2**attempt
logger.warning("Perplexity API attempt %d failed, retrying in %ds", attempt + 1, wait)
await asyncio.sleep(wait)
logger.warning("Perplexity API exhausted, trying OpenRouter fallback")
try:
response = await call_openrouter_api(query)
if response:
return parse_news_response(response)
except Exception:
logger.exception("OpenRouter fallback also failed")
return []
async def process_and_store_news() -> int:
items = await fetch_news_with_retry()
if not items:
logger.warning("No news items fetched this cycle")
return 0
db = SessionLocal()
stored = 0
try:
for item in items:
headline = item.get("headline", "").strip()
summary = item.get("summary", "").strip()
if not headline or not summary:
continue
if headline_exists_within_24h(db, headline):
logger.debug("Duplicate headline skipped: %s", headline[:80])
continue
local_image = await download_and_optimize_image(item.get("image_url", ""))
image_url = local_image or PLACEHOLDER_IMAGE_PATH
created_news_item = create_news(
db=db,
headline=headline,
summary=summary,
source_url=item.get("source_url"),
image_url=image_url,
image_credit=item.get("image_credit"),
)
translations = await generate_translations(headline, summary)
for language_code, payload in translations.items():
if translation_exists(db, created_news_item.id, language_code):
continue
create_translation(
db=db,
news_item_id=created_news_item.id,
language=language_code,
headline=payload["headline"],
summary=payload["summary"],
)
stored += 1
logger.info("Stored %d new news items", stored)
finally:
db.close()
return stored
def scheduled_news_fetch() -> None:
start = time.monotonic()
logger.info("Starting scheduled news fetch")
count = asyncio.run(process_and_store_news())
elapsed = time.monotonic() - start
logger.info("Scheduled news fetch complete: %d items in %.1fs", count, elapsed)

backend/repository.py (new file, 171 lines)
@@ -0,0 +1,171 @@
import datetime
from sqlalchemy import and_, desc
from sqlalchemy.orm import Session
from backend.models import NewsItem, NewsTranslation
SUPPORTED_LANGUAGES = {"en", "ta", "ml"}
def create_news(
db: Session,
headline: str,
summary: str,
source_url: str | None = None,
image_url: str | None = None,
image_credit: str | None = None,
published_at: datetime.datetime | None = None,
) -> NewsItem:
item = NewsItem(
headline=headline,
summary=summary,
source_url=source_url,
image_url=image_url,
image_credit=image_credit,
published_at=published_at or datetime.datetime.utcnow(),
)
db.add(item)
db.commit()
db.refresh(item)
return item
def get_recent_news(db: Session, limit: int = 10) -> list[NewsItem]:
return (
db.query(NewsItem)
.filter(NewsItem.archived.is_(False))
.order_by(desc(NewsItem.published_at))
.limit(limit)
.all()
)
def get_latest_news(db: Session) -> NewsItem | None:
return (
db.query(NewsItem)
.filter(NewsItem.archived.is_(False))
.order_by(desc(NewsItem.published_at))
.first()
)
def create_translation(
db: Session,
news_item_id: int,
language: str,
headline: str,
summary: str,
) -> NewsTranslation:
translation = NewsTranslation(
news_item_id=news_item_id,
language=language,
headline=headline,
summary=summary,
)
db.add(translation)
db.commit()
db.refresh(translation)
return translation
def get_translation(db: Session, news_item_id: int, language: str) -> NewsTranslation | None:
return (
db.query(NewsTranslation)
.filter(
and_(
NewsTranslation.news_item_id == news_item_id,
NewsTranslation.language == language,
)
)
.first()
)
def translation_exists(db: Session, news_item_id: int, language: str) -> bool:
return get_translation(db, news_item_id, language) is not None
def get_translations_by_article(db: Session, news_item_id: int) -> list[NewsTranslation]:
return (
db.query(NewsTranslation)
.filter(NewsTranslation.news_item_id == news_item_id)
.order_by(NewsTranslation.language.asc())
.all()
)
def resolve_news_content(item: NewsItem, translation: NewsTranslation | None) -> tuple[str, str]:
if translation is None:
return item.headline, item.summary
return translation.headline, translation.summary
def normalize_language(language: str | None) -> str:
if not language:
return "en"
lower = language.lower()
if lower not in SUPPORTED_LANGUAGES:
return "en"
return lower
def get_news_paginated(
db: Session, cursor: int | None = None, limit: int = 10, exclude_id: int | None = None
) -> list[NewsItem]:
query = db.query(NewsItem).filter(NewsItem.archived.is_(False))
if exclude_id is not None:
query = query.filter(NewsItem.id != exclude_id)
if cursor is not None:
query = query.filter(NewsItem.id < cursor)
return query.order_by(desc(NewsItem.id)).limit(limit).all()
def headline_exists_within_24h(db: Session, headline: str) -> bool:
cutoff = datetime.datetime.utcnow() - datetime.timedelta(hours=24)
return (
db.query(NewsItem)
.filter(
and_(
NewsItem.headline == headline,
NewsItem.created_at >= cutoff,
)
)
.first()
is not None
)
def archive_old_news(db: Session, retention_days: int = 30) -> int:
cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=retention_days)
count = (
db.query(NewsItem)
.filter(
and_(
NewsItem.created_at < cutoff,
NewsItem.archived.is_(False),
)
)
.update({"archived": True})
)
db.commit()
return count
def delete_archived_news(db: Session, days_after_archive: int = 60) -> int:
cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=days_after_archive)
count = (
db.query(NewsItem)
.filter(
and_(
NewsItem.archived.is_(True),
NewsItem.created_at < cutoff,
)
)
.delete()
)
db.commit()
return count

backend/schemas.py (new file, 40 lines)
@@ -0,0 +1,40 @@
import datetime
from pydantic import BaseModel
class NewsItemResponse(BaseModel):
id: int
headline: str
summary: str
source_url: str | None = None
image_url: str | None = None
image_credit: str | None = None
published_at: datetime.datetime
created_at: datetime.datetime
language: str
model_config = {"from_attributes": True}
class PaginatedNewsResponse(BaseModel):
items: list[NewsItemResponse]
next_cursor: int | None = None
has_more: bool = False
class NewsTranslationResponse(BaseModel):
id: int
news_item_id: int
language: str
headline: str
summary: str
created_at: datetime.datetime
model_config = {"from_attributes": True}
class HealthResponse(BaseModel):
status: str
version: str
news_count: int

(binary image, 182 KiB; not shown)
(binary image, 3.5 KiB; not shown)
data/clawfort.db (new binary file; not shown)

docker-compose.yml (new file, 22 lines)
@@ -0,0 +1,22 @@
services:
clawfort:
build: .
ports:
- "8000:8000"
volumes:
- clawfort-data:/app/data
- clawfort-images:/app/backend/static/images
env_file:
- .env
environment:
- PERPLEXITY_API_KEY=${PERPLEXITY_API_KEY}
- IMAGE_QUALITY=${IMAGE_QUALITY:-85}
- OPENROUTER_API_KEY=${OPENROUTER_API_KEY:-}
- UMAMI_SCRIPT_URL=${UMAMI_SCRIPT_URL:-}
- UMAMI_WEBSITE_ID=${UMAMI_WEBSITE_ID:-}
- RETENTION_DAYS=${RETENTION_DAYS:-30}
restart: unless-stopped
volumes:
clawfort-data:
clawfort-images:

frontend/index.html (new file, 364 lines)
@@ -0,0 +1,364 @@
<!DOCTYPE html>
<html lang="en" class="scroll-smooth">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>ClawFort — AI News</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700;800;900&display=swap" rel="stylesheet">
<script src="https://cdn.tailwindcss.com"></script>
<script>
tailwind.config = {
theme: {
extend: {
colors: {
cf: {
50: '#f0f4ff', 100: '#dbe4ff', 200: '#bac8ff',
400: '#748ffc', 500: '#5c7cfa', 600: '#4c6ef5',
700: '#4263eb', 900: '#1e2a5e', 950: '#0f172a'
}
},
fontFamily: { sans: ['Inter', 'system-ui', 'sans-serif'] }
}
}
}
</script>
<!-- The Intersect plugin must load before Alpine core; the scroll-depth markers below use x-intersect. -->
<script defer src="https://cdn.jsdelivr.net/npm/@alpinejs/intersect@3/dist/cdn.min.js"></script>
<script defer src="https://cdn.jsdelivr.net/npm/alpinejs@3/dist/cdn.min.js"></script>
<style>
[x-cloak] { display: none !important; }
@keyframes fadeUp { from { opacity: 0; transform: translateY(20px); } to { opacity: 1; transform: translateY(0); } }
.fade-up { animation: fadeUp 0.6s ease-out forwards; }
@keyframes shimmer { 0% { background-position: -200% 0; } 100% { background-position: 200% 0; } }
.skeleton { background: linear-gradient(90deg, #1e293b 25%, #334155 50%, #1e293b 75%); background-size: 200% 100%; animation: shimmer 1.5s infinite; }
.line-clamp-2 { display: -webkit-box; -webkit-line-clamp: 2; -webkit-box-orient: vertical; overflow: hidden; }
.line-clamp-3 { display: -webkit-box; -webkit-line-clamp: 3; -webkit-box-orient: vertical; overflow: hidden; }
</style>
</head>
<body class="bg-cf-950 text-gray-100 font-sans min-h-screen">
<header class="sticky top-0 z-50 bg-cf-950/80 backdrop-blur-lg border-b border-white/5">
<div class="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8 h-16 flex items-center justify-between">
<a href="/" class="flex items-center gap-2.5 group">
<svg class="w-8 h-8 text-cf-500 group-hover:text-cf-400 transition-colors" viewBox="0 0 32 32" fill="none">
<path d="M16 2L4 8v8c0 7.7 5.1 14.9 12 16 6.9-1.1 12-8.3 12-16V8L16 2z" fill="currentColor" fill-opacity="0.15" stroke="currentColor" stroke-width="1.5"/>
<path d="M12 14l2 2 6-6" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>
<span class="text-xl font-bold tracking-tight">Claw<span class="text-cf-500">Fort</span></span>
</a>
<div class="flex items-center gap-3">
<label for="language-select" class="text-xs text-gray-400 hidden sm:block">Language</label>
<select id="language-select"
class="bg-[#1e293b] border border-white/10 text-gray-200 text-xs rounded-md px-2 py-1 focus:outline-none focus:ring-2 focus:ring-cf-500"
onchange="setPreferredLanguage(this.value)">
<option value="en">English</option>
<option value="ta">Tamil</option>
<option value="ml">Malayalam</option>
</select>
<span class="text-xs text-gray-500 hidden lg:block">AI News — Updated Hourly</span>
</div>
</div>
</header>
<main>
<section x-data="heroBlock()" x-init="init()" class="relative">
<template x-if="loading">
<div class="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8 py-12">
<div class="skeleton rounded-2xl h-[400px] w-full"></div>
<div class="mt-6 space-y-3">
<div class="skeleton h-10 w-3/4 rounded-lg"></div>
<div class="skeleton h-5 w-full rounded-lg"></div>
<div class="skeleton h-5 w-2/3 rounded-lg"></div>
</div>
</div>
</template>
<template x-if="!loading && item">
<div class="fade-up">
<div class="relative max-w-7xl mx-auto px-4 sm:px-6 lg:px-8 py-8 sm:py-12">
<div class="relative rounded-2xl overflow-hidden group">
<img :src="item.image_url" :alt="item.headline"
class="w-full h-[300px] sm:h-[400px] lg:h-[480px] object-cover transition-transform duration-700 group-hover:scale-105"
@error="$el.src='/static/images/placeholder.png'">
<div class="absolute inset-0 bg-gradient-to-t from-cf-950 via-cf-950/40 to-transparent"></div>
<div class="absolute bottom-0 left-0 right-0 p-6 sm:p-10">
<div class="flex items-center gap-3 mb-3">
<span class="px-2.5 py-1 bg-cf-500/20 text-cf-400 text-xs font-semibold rounded-full border border-cf-500/30">LATEST</span>
<span class="text-gray-400 text-sm" x-text="timeAgo(item.published_at)"></span>
</div>
<h1 class="text-2xl sm:text-3xl lg:text-4xl font-extrabold leading-tight mb-3 max-w-4xl" x-text="item.headline"></h1>
<p class="text-gray-300 text-base sm:text-lg max-w-3xl line-clamp-3 mb-4" x-text="item.summary"></p>
<div class="flex flex-wrap items-center gap-4 text-sm text-gray-400">
<a :href="item.source_url" target="_blank" rel="noopener"
class="hover:text-cf-400 transition-colors"
@click="trackEvent('hero-source-click')"
x-show="item.source_url">
Via: <span x-text="extractDomain(item.source_url)" class="underline underline-offset-2"></span>
</a>
<span x-show="item.image_credit" x-text="'Image: ' + item.image_credit"></span>
</div>
</div>
</div>
</div>
</div>
</template>
<template x-if="!loading && !item">
<div class="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8 py-20 text-center">
<div class="text-6xl mb-4">🤖</div>
<h2 class="text-2xl font-bold mb-2">No News Yet</h2>
<p class="text-gray-400">Check back soon — news is fetched hourly.</p>
</div>
</template>
</section>
<section x-data="newsFeed()" x-init="init()" class="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8 pb-20">
<h2 class="text-xl font-bold mb-6 flex items-center gap-2">
<span class="w-1 h-6 bg-cf-500 rounded-full"></span> Recent News
</h2>
<template x-if="initialLoading">
<div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-6">
<template x-for="i in 6" :key="i">
<div class="rounded-xl overflow-hidden">
<div class="skeleton h-48 w-full"></div>
<div class="p-4 bg-[#1e293b] space-y-3">
<div class="skeleton h-5 w-full rounded"></div>
<div class="skeleton h-4 w-3/4 rounded"></div>
<div class="skeleton h-3 w-1/2 rounded"></div>
</div>
</div>
</template>
</div>
</template>
<div x-show="!initialLoading" class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-6">
<template x-for="item in items" :key="item.id">
<article class="bg-[#1e293b] rounded-xl overflow-hidden border border-white/5 hover:border-cf-500/30 transition-all duration-300 hover:-translate-y-1 hover:shadow-xl hover:shadow-cf-500/5 group cursor-pointer"
@click="if (item.source_url) { window.open(item.source_url, '_blank'); trackEvent('card-click'); }">
<div class="relative h-48 overflow-hidden">
<img :src="item.image_url" :alt="item.headline" loading="lazy"
class="w-full h-full object-cover transition-transform duration-500 group-hover:scale-110"
@error="$el.src='/static/images/placeholder.png'">
<div class="absolute inset-0 bg-gradient-to-t from-[#1e293b] to-transparent opacity-60"></div>
</div>
<div class="p-5">
<h3 class="font-bold text-base mb-2 line-clamp-2 group-hover:text-cf-400 transition-colors" x-text="item.headline"></h3>
<p class="text-gray-400 text-sm line-clamp-2 mb-3" x-text="item.summary"></p>
<div class="flex items-center justify-between text-xs text-gray-500">
<a :href="item.source_url" target="_blank" rel="noopener"
class="hover:text-cf-400 transition-colors truncate max-w-[60%]"
@click.stop="trackEvent('source-link-click')"
x-show="item.source_url"
x-text="extractDomain(item.source_url)"></a>
<span x-text="timeAgo(item.published_at)"></span>
</div>
</div>
</article>
</template>
</div>
<div x-show="loadingMore" class="flex justify-center py-8">
<div class="w-8 h-8 border-2 border-cf-500/30 border-t-cf-500 rounded-full animate-spin"></div>
</div>
<div x-show="!hasMore && items.length > 0 && !initialLoading" class="text-center py-12 text-gray-500">
<div class="text-3xl mb-2"></div>
<p class="font-medium">You're all caught up!</p>
</div>
<div x-ref="sentinel" class="h-4"></div>
</section>
<div class="fixed left-0 right-0 pointer-events-none" style="top:25%" x-data x-intersect:enter="trackDepth(25)"></div>
<div class="fixed left-0 right-0 pointer-events-none" style="top:50%" x-data x-intersect:enter="trackDepth(50)"></div>
<div class="fixed left-0 right-0 pointer-events-none" style="top:75%" x-data x-intersect:enter="trackDepth(75)"></div>
</main>
<footer class="border-t border-white/5 py-8 text-center text-sm text-gray-500">
<div class="max-w-7xl mx-auto px-4 space-y-2">
<p>Powered by <a href="https://www.perplexity.ai" target="_blank" rel="noopener" class="text-cf-400 hover:text-cf-300 transition-colors">Perplexity</a></p>
<p>&copy; <span x-data x-text="new Date().getFullYear()"></span> ClawFort. All rights reserved.</p>
</div>
</footer>
<script>
function extractDomain(url) {
if (!url) return '';
try { return new URL(url).hostname.replace('www.', ''); } catch { return url; }
}
function timeAgo(dateStr) {
if (!dateStr) return '';
const seconds = Math.floor((Date.now() - new Date(dateStr).getTime()) / 1000);
const intervals = [
[31536000, 'year'], [2592000, 'month'], [86400, 'day'],
[3600, 'hour'], [60, 'minute'], [1, 'second']
];
for (const [secs, label] of intervals) {
const count = Math.floor(seconds / secs);
if (count >= 1) return `${count} ${label}${count > 1 ? 's' : ''} ago`;
}
return 'just now';
}
function trackEvent(name, data) {
if (window.umami) window.umami.track(name, data);
}
function trackDepth(pct) {
trackEvent('scroll-depth', { depth: pct });
}
function readCookie(name) {
const match = document.cookie.match(new RegExp(`(^| )${name}=([^;]+)`));
return match ? decodeURIComponent(match[2]) : null;
}
function normalizeLanguage(language) {
const supported = ['en', 'ta', 'ml'];
if (supported.includes(language)) return language;
return 'en';
}
function getPreferredLanguage() {
const stored = localStorage.getItem('clawfort_language');
if (stored) return normalizeLanguage(stored);
const cookieValue = readCookie('clawfort_language');
if (cookieValue) return normalizeLanguage(cookieValue);
return 'en';
}
function setPreferredLanguage(language) {
const normalized = normalizeLanguage(language);
window._selectedLanguage = normalized;
localStorage.setItem('clawfort_language', normalized);
document.cookie = `clawfort_language=${encodeURIComponent(normalized)}; path=/; max-age=31536000; SameSite=Lax`;
const select = document.getElementById('language-select');
if (select && select.value !== normalized) {
select.value = normalized;
}
trackEvent('language-change', { language: normalized });
window.dispatchEvent(new CustomEvent('language-changed', { detail: { language: normalized } }));
}
window._selectedLanguage = getPreferredLanguage();
function heroBlock() {
return {
item: null,
loading: true,
async init() {
const select = document.getElementById('language-select');
if (select) select.value = window._selectedLanguage;
await this.loadHero();
window.addEventListener('language-changed', () => {
this.loadHero();
});
},
async loadHero() {
this.loading = true;
try {
const params = new URLSearchParams({ language: window._selectedLanguage || 'en' });
const resp = await fetch(`/api/news/latest?${params}`);
if (resp.ok) {
const data = await resp.json();
if (data) this.item = data;
}
} catch (e) { console.error('Hero fetch failed:', e); }
this.loading = false;
window._heroId = this.item?.id ?? null; // null (not undefined) so waitForHero() resolves even when no hero item exists
}
};
}
function newsFeed() {
return {
items: [],
nextCursor: null,
hasMore: true,
initialLoading: true,
loadingMore: false,
observer: null,
async init() {
await this.waitForHero();
await this.loadMore();
this.initialLoading = false;
this.setupObserver();
window.addEventListener('language-changed', async () => {
this.items = [];
this.nextCursor = null;
this.hasMore = true;
this.initialLoading = true;
await this.waitForHero();
await this.loadMore();
this.initialLoading = false;
});
},
waitForHero() {
return new Promise(resolve => {
const check = () => {
if (window._heroId !== undefined) return resolve();
setTimeout(check, 50);
};
check();
});
},
async loadMore() {
if (this.loadingMore || !this.hasMore) return;
this.loadingMore = true;
try {
const params = new URLSearchParams({
limit: '10',
language: window._selectedLanguage || 'en',
});
if (this.nextCursor) params.set('cursor', this.nextCursor);
if (window._heroId) params.set('exclude_hero', window._heroId);
const resp = await fetch(`/api/news?${params}`);
if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
const data = await resp.json();
this.items = [...this.items, ...data.items];
this.nextCursor = data.next_cursor;
this.hasMore = data.has_more;
} catch (e) {
console.error('Feed fetch failed:', e);
this.hasMore = false;
}
this.loadingMore = false;
},
setupObserver() {
this.observer = new IntersectionObserver(entries => {
if (entries[0].isIntersecting && this.hasMore && !this.loadingMore) {
this.loadMore();
}
}, { rootMargin: '200px' });
this.observer.observe(this.$refs.sentinel);
}
};
}
(async function initAnalytics() {
try {
const resp = await fetch('/config');
if (!resp.ok) return;
const cfg = await resp.json();
if (cfg.umami_script_url && cfg.umami_website_id) {
const s = document.createElement('script');
s.defer = true;
s.src = cfg.umami_script_url;
s.setAttribute('data-website-id', cfg.umami_website_id);
document.head.appendChild(s);
}
} catch {}
})();
</script>
</body>
</html>

@@ -0,0 +1,28 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
content: ["./frontend/**/*.{html,js}"],
theme: {
extend: {
colors: {
clawfort: {
50: "#f0f4ff",
100: "#dbe4ff",
200: "#bac8ff",
300: "#91a7ff",
400: "#748ffc",
500: "#5c7cfa",
600: "#4c6ef5",
700: "#4263eb",
800: "#3b5bdb",
900: "#364fc7",
950: "#1e2a5e",
},
},
fontFamily: {
sans: ['"Inter"', "system-ui", "-apple-system", "sans-serif"],
mono: ['"JetBrains Mono"', "monospace"],
},
},
},
plugins: [],
};

@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-12

@@ -0,0 +1,111 @@
## Context
ClawFort needs a stunning one-page placeholder website that automatically aggregates and displays AI news hourly. The site must be containerized, use Perplexity API for news generation, and feature infinite scroll with 30-day retention.
**Current State:** Greenfield project - no existing codebase.
**Constraints:**
- Must use Perplexity API (API key via environment variable)
- Containerized deployment (Docker)
- Lean JavaScript framework for frontend
- 30-day news retention with archiving
- Hourly automated updates
## Goals / Non-Goals
**Goals:**
- Stunning one-page website with ClawFort branding
- Hourly AI news aggregation via Perplexity API
- Dynamic hero block with featured news and image
- Infinite scroll news feed (10 initial items)
- 30-day retention with automatic archiving
- Source attribution for news and images
- Fully containerized deployment
- Responsive design
**Non-Goals:**
- User authentication/accounts
- Manual news curation interface
- Real-time updates (polling only)
- Multi-language support beyond English, Tamil, and Malayalam
- CMS integration
- SEO optimization beyond basic meta tags
## Decisions
### Architecture: Monolithic Container
**Decision:** Single container with frontend + backend + SQLite
**Rationale:** Simplicity for a placeholder site, easy deployment, no external database dependencies
**Alternative:** Microservices with separate DB container - rejected as overkill for this scope
### Frontend Framework: Alpine.js + Tailwind CSS
**Decision:** Alpine.js for lean reactivity, Tailwind for styling
**Rationale:** Minimal bundle size (~15kb), no build step complexity, perfect for one-page sites
**Alternative:** React/Vue - rejected as too heavy for simple infinite scroll and hero display
### Backend: Python (FastAPI) + APScheduler
**Decision:** FastAPI for REST API, APScheduler for cron-like jobs
**Rationale:** Fast to develop, excellent async support, built-in OpenAPI docs, simple scheduling
**Alternative:** Node.js/Express - rejected; Python better for data processing and Perplexity integration
### Database: SQLite with SQLAlchemy
**Decision:** SQLite for zero-config persistence
**Rationale:** No separate DB container needed, sufficient for 30-day news retention (~1000-2000 records)
**Alternative:** PostgreSQL - rejected as adds deployment complexity
### News Aggregation Strategy
**Decision:** Hourly cron job queries Perplexity for "latest AI news" with image generation
**Rationale:** Simple, reliable, cost-effective
**Implementation:**
- Perplexity API call: "What are the latest AI news from the last hour?"
- Store: headline, summary, source URL, image URL, timestamp
- Attribution: Display source name and image credit
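The basic quality filter implied by the store list above can be sketched in isolation. This is an illustrative helper (the name `filter_items` is hypothetical); the shipped logic lives in `process_and_store_news`, which skips items lacking a non-empty headline or summary.

```python
import json

# Keep only items that carry a non-empty headline and summary,
# normalizing whitespace on the way in (field names per the store list).
def filter_items(raw):
    kept = []
    for item in raw:
        headline = str(item.get("headline", "")).strip()
        summary = str(item.get("summary", "")).strip()
        if headline and summary:
            kept.append({**item, "headline": headline, "summary": summary})
    return kept

sample = json.loads('[{"headline": " A ", "summary": "B"}, {"headline": ""}]')
kept = filter_items(sample)  # second item has no summary, so it is dropped
```

Deduplication (the 24-hour headline check) then runs on the surviving items before anything is written.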
### Image Strategy
**Decision:** Use Perplexity to suggest relevant images or generate via DALL-E if available, with local image optimization
**Rationale:** Consistent with AI theme, attribution preserved via image credits, plus configurable compression
**Implementation:**
- Download and optimize images locally using Pillow
- Configurable quality setting via `IMAGE_QUALITY` env var (1-100, default 85)
- Store optimized images in `/app/static/images/`
- Serve optimized versions, fallback to original URL if optimization fails
**Alternative:** Unsplash API - rejected to keep dependencies minimal
### Infinite Scroll Implementation
**Decision:** Cursor-based pagination with Intersection Observer API
**Rationale:** Efficient for large datasets, simple Alpine.js integration
**Page Size:** 10 items per request
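The cursor contract can be sketched without the database (an illustrative in-memory helper, not the server code; the real query lives in `get_news_paginated`): items come back newest-first, and the client echoes the smallest `id` it has seen as the next cursor.

```python
# Cursor-based pagination over integer ids, newest first.
def paginate(items, cursor=None, limit=10):
    # items: dicts with a unique integer "id"
    candidates = sorted(items, key=lambda item: item["id"], reverse=True)
    if cursor is not None:
        # Strictly "older than the cursor", matching NewsItem.id < cursor.
        candidates = [item for item in candidates if item["id"] < cursor]
    page = candidates[:limit]
    next_cursor = page[-1]["id"] if page else None
    has_more = len(candidates) > limit
    return page, next_cursor, has_more

news = [{"id": i} for i in range(1, 26)]          # ids 1..25
page1, cursor1, more1 = paginate(news, limit=10)   # ids 25..16
page2, cursor2, more2 = paginate(news, cursor=cursor1, limit=10)  # ids 15..6
```

Because the cursor is a strict `id` bound rather than an offset, rows inserted between requests cannot shift or duplicate already-fetched pages.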
### Archive Strategy
**Decision:** Soft delete (archived flag) + nightly cleanup job
**Rationale:** Easy to implement, data recoverable if needed
**Cleanup:** A nightly job flags items older than 30 days as archived; archived rows are hard-deleted after a further grace period (`archive_old_news` / `delete_archived_news` in `backend/repository.py`)
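The two-phase cleanup (flag, then delete) reduces to two SQL statements. A minimal sketch against a throwaway table follows; the table shape here is hypothetical, and the production logic lives in `backend/repository.py` via SQLAlchemy.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE news_items ("
    "id INTEGER PRIMARY KEY, created_at TEXT, archived INTEGER DEFAULT 0)"
)
conn.execute(
    "INSERT INTO news_items (created_at) VALUES ('2020-01-01'), ('2099-01-01')"
)
# Phase 1: soft-delete anything past the 30-day retention window.
conn.execute(
    "UPDATE news_items SET archived = 1 "
    "WHERE created_at < date('now', '-30 days') AND archived = 0"
)
# Phase 2: hard-delete rows archived long enough ago.
conn.execute(
    "DELETE FROM news_items "
    "WHERE archived = 1 AND created_at < date('now', '-90 days')"
)
remaining = conn.execute("SELECT COUNT(*) FROM news_items").fetchone()[0]
```

Keeping the flag phase separate means a bad retention setting can be reverted before any data is actually lost.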
## Risks / Trade-offs
**[Risk] Perplexity API rate limits or downtime** → Mitigation: Implement exponential backoff, cache last successful fetch, display cached content with "last updated" timestamp, fallback to OpenRouter API if configured
**[Risk] Container storage grows unbounded** → Mitigation: SQLite WAL mode, volume mounts for persistence, 30-day hard limit on retention
**[Risk] News quality varies** → Mitigation: Basic filtering (require title + summary), manual blacklist capability in config
**[Risk] Cold start performance** → Mitigation: SQLite connection pooling, frontend CDN-ready static assets
**[Trade-off] SQLite vs PostgreSQL** → SQLite limits concurrent writes but acceptable for read-heavy news site
**[Trade-off] Single container vs microservices** → Easier deployment but less scalable; acceptable for placeholder site
## Migration Plan
1. **Development:** Local Docker Compose setup
2. **Environment:** Configure `PERPLEXITY_API_KEY` in `.env`
3. **Build:** `docker build -t clawfort-site .`
4. **Run:** `docker run -e PERPLEXITY_API_KEY=xxx -p 8000:8000 clawfort-site`
5. **Data:** SQLite volume mount for persistence across restarts
## Open Questions (Resolved)
1. **Admin panel?** → Deferred to future
2. **Image optimization?** → Yes, local optimization with Pillow, configurable quality via `IMAGE_QUALITY` env var
3. **Analytics?** → Umami integration with `UMAMI_SCRIPT_URL` and `UMAMI_WEBSITE_ID` env vars, track page views, scroll events, and CTA clicks
4. **API cost monitoring?** → Log Perplexity usage, fallback to OpenRouter API if `OPENROUTER_API_KEY` configured


@@ -0,0 +1,54 @@
## Why
ClawFort needs a stunning one-page placeholder website that automatically generates and displays AI news hourly, creating a dynamic, always-fresh brand presence without manual content curation. The site will serve as a living showcase of AI capabilities while building brand recognition.
## What Changes
- **New Capabilities:**
- Automated AI news aggregation via Perplexity API (hourly updates)
- Dynamic hero section with featured news and images
- Infinite scroll news feed with 1-month retention
- Archive system for older news items
- Containerized deployment (Docker)
- Responsive single-page design with lean JavaScript framework
- **Frontend:**
- One-page website with hero block
- Infinite scroll news feed (latest 10 on load)
- News attribution to sources
- Image credit display
- Responsive design
- **Backend:**
- News aggregation service (hourly cron job)
- Perplexity API integration
- News storage with 30-day retention
- Archive management
- REST API for frontend
- **Infrastructure:**
- Docker containerization
- Environment-based configuration
- Perplexity API key management
## Capabilities
### New Capabilities
- `news-aggregator`: Automated AI news collection via Perplexity API with hourly scheduling
- `news-storage`: Database storage with 30-day retention and archive management
- `hero-display`: Dynamic hero block with featured news and image attribution
- `infinite-scroll`: Frontend infinite scroll with lazy loading (10 initial, paginated)
- `containerized-deployment`: Docker-based deployment with environment configuration
- `responsive-frontend`: Single-page application with lean JavaScript framework
### Modified Capabilities
- None (new project)
## Impact
- **Code:** New full-stack application (frontend + backend)
- **APIs:** Perplexity API integration required
- **Dependencies:** Docker, Node.js/Python runtime, database (SQLite/PostgreSQL)
- **Infrastructure:** Container orchestration support
- **Environment:** `PERPLEXITY_API_KEY` required
- **Data:** 30-day rolling news archive with automatic cleanup


@@ -0,0 +1,45 @@
## ADDED Requirements
### Requirement: Containerized deployment
The system SHALL run entirely within Docker containers with all dependencies included.
#### Scenario: Single container build
- **WHEN** building the Docker image
- **THEN** the Dockerfile SHALL include Python runtime, Node.js (for Tailwind if needed), and all application code
- **AND** expose port 8000 for web traffic
#### Scenario: Environment configuration
- **WHEN** running the container
- **THEN** the system SHALL read PERPLEXITY_API_KEY from environment variables
- **AND** fail to start if the key is missing or invalid
- **AND** support optional configuration for retention days (default: 30)
- **AND** support optional IMAGE_QUALITY for image compression (default: 85)
- **AND** support optional OPENROUTER_API_KEY for fallback LLM provider
- **AND** support optional UMAMI_SCRIPT_URL and UMAMI_WEBSITE_ID for analytics
#### Scenario: Data persistence
- **WHEN** the container restarts
- **THEN** the SQLite database SHALL persist via Docker volume mount
- **AND** news data SHALL remain intact across restarts
### Requirement: Responsive single-page design
The system SHALL provide a stunning, responsive one-page website with ClawFort branding.
#### Scenario: Brand consistency
- **WHEN** viewing the website
- **THEN** the design SHALL feature ClawFort branding (logo, colors, typography)
- **AND** maintain visual consistency across all sections
#### Scenario: Responsive layout
- **WHEN** viewing on mobile, tablet, or desktop
- **THEN** the layout SHALL adapt appropriately
- **AND** the hero block SHALL resize proportionally
- **AND** the news feed SHALL use appropriate column layouts
#### Scenario: Performance
- **WHEN** loading the page
- **THEN** initial page load SHALL complete within 2 seconds
- **AND** images SHALL lazy load outside viewport
- **AND** JavaScript bundle SHALL be under 100KB gzipped


@@ -0,0 +1,55 @@
## ADDED Requirements
### Requirement: Hero block display
The system SHALL display the most recent news item as a featured hero block with full attribution.
#### Scenario: Hero rendering
- **WHEN** the page loads
- **THEN** the hero block SHALL display the latest news headline, summary, and featured image
- **AND** show source attribution (e.g., "Via: TechCrunch")
- **AND** show image credit (e.g., "Image: DALL-E")
#### Scenario: Hero update
- **WHEN** new news is fetched hourly
- **THEN** the hero block SHALL automatically update to show the newest item
- **AND** the previous hero item SHALL move to the news feed
### Requirement: Infinite scroll news feed
The system SHALL display news items in reverse chronological order with infinite scroll pagination.
#### Scenario: Initial load
- **WHEN** the page first loads
- **THEN** the system SHALL display the 10 most recent non-archived news items
- **AND** exclude the hero item from the feed
#### Scenario: Infinite scroll
- **WHEN** the user scrolls to the bottom of the feed
- **THEN** the system SHALL fetch the next 10 news items via API
- **AND** append them to the feed without page reload
- **AND** show a loading indicator during fetch
#### Scenario: End of feed
- **WHEN** all non-archived news items have been loaded
- **THEN** the system SHALL display "No more news" message
- **AND** disable further scroll triggers
### Requirement: News attribution display
The system SHALL clearly attribute all news content and images to their sources.
#### Scenario: Source attribution
- **WHEN** displaying any news item
- **THEN** the system SHALL show the original source name and link
- **AND** display image credit if available
#### Scenario: Perplexity attribution
- **WHEN** displaying aggregated content
- **THEN** the system SHALL include "Powered by Perplexity" in the footer
#### Scenario: Analytics tracking
- **WHEN** Umami analytics is configured via `UMAMI_SCRIPT_URL` and `UMAMI_WEBSITE_ID`
- **THEN** the system SHALL inject Umami tracking script into page head
- **AND** track page view events on initial load
- **AND** track scroll depth events (25%, 50%, 75%, 100%)
- **AND** track CTA click events (news item clicks, source link clicks)
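The conditional script injection can be sketched server-side; the function name is hypothetical, but the `defer`/`data-website-id` attributes follow Umami's standard tracker snippet:

```python
import os
from html import escape

def umami_snippet() -> str:
    # Build the Umami <script> tag for the page head, or return ""
    # when analytics is not configured (both env vars are required).
    url = os.environ.get("UMAMI_SCRIPT_URL")
    site_id = os.environ.get("UMAMI_WEBSITE_ID")
    if not url or not site_id:
        return ""
    return (
        f'<script defer src="{escape(url, quote=True)}" '
        f'data-website-id="{escape(site_id, quote=True)}"></script>'
    )
```

Scroll-depth and CTA events would then be sent from the frontend via Umami's custom-event API.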


@@ -0,0 +1,56 @@
## ADDED Requirements
### Requirement: News aggregation via Perplexity API
The system SHALL fetch AI news hourly from Perplexity API and store it with full attribution.
#### Scenario: Hourly news fetch
- **WHEN** the scheduled job runs every hour
- **THEN** the system SHALL call the Perplexity API with the query "latest AI news"
- **AND** store the response with headline, summary, source URL, and timestamp
#### Scenario: API error handling
- **WHEN** Perplexity API returns an error or timeout
- **THEN** the system SHALL log the error with cost tracking
- **AND** retries with exponential backoff up to 3 times
- **AND** falls back to OpenRouter API if `OPENROUTER_API_KEY` is configured
- **AND** continues using cached content if all retries and fallback fail
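The retry behavior above can be sketched as a generic wrapper (names and the injectable `sleep` are illustrative; the real pipeline would wrap the Perplexity call and then the OpenRouter fallback):

```python
import time

def fetch_with_backoff(fetch, attempts: int = 3, base_delay: float = 1.0,
                       sleep=time.sleep):
    # Retry `fetch` with exponential backoff (1s, 2s, 4s, ...);
    # re-raise the last error so the caller can try the fallback provider.
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

Injecting `sleep` keeps the wrapper testable without real delays.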
### Requirement: Featured image generation
The system SHALL generate or fetch a relevant featured image for each news item.
#### Scenario: Image acquisition
- **WHEN** a new news item is fetched
- **THEN** the system SHALL request a relevant image URL from Perplexity
- **AND** download and optimize the image locally using Pillow
- **AND** apply quality compression based on `IMAGE_QUALITY` env var (1-100, default 85)
- **AND** store the optimized image path and original image credit/source information
#### Scenario: Image optimization configuration
- **WHEN** the system processes an image
- **THEN** it SHALL read `IMAGE_QUALITY` from environment (default: 85)
- **AND** apply JPEG compression at specified quality level
- **AND** resize images exceeding 1200px width while maintaining aspect ratio
- **AND** store optimized images in `/app/static/images/` directory
#### Scenario: Image fallback
- **WHEN** image generation fails or returns no result
- **THEN** the system SHALL use a default ClawFort branded placeholder image
### Requirement: News data persistence with retention
The system SHALL store news items for exactly 30 days with automatic archiving.
#### Scenario: News storage
- **WHEN** a news item is fetched from Perplexity
- **THEN** the system SHALL store it in SQLite with fields: id, headline, summary, source_url, image_url, image_credit, published_at, created_at
- **AND** set archived=false by default
#### Scenario: Automatic archiving
- **WHEN** a nightly cleanup job runs
- **THEN** the system SHALL mark all news items older than 30 days as archived=true
- **AND** delete archived items older than 60 days permanently
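The archive-then-delete policy can be sketched directly in SQLite (table and column names follow the storage requirement; the nightly scheduling wiring is assumed):

```python
import sqlite3

def nightly_cleanup(conn: sqlite3.Connection, retention_days: int = 30) -> None:
    # Mark items past retention as archived, then hard-delete archived
    # items past twice the retention window (30 -> 60 days by default).
    conn.execute(
        "UPDATE news_items SET archived = 1 "
        "WHERE archived = 0 AND created_at < datetime('now', ?)",
        (f"-{retention_days} days",),
    )
    conn.execute(
        "DELETE FROM news_items "
        "WHERE archived = 1 AND created_at < datetime('now', ?)",
        (f"-{retention_days * 2} days",),
    )
    conn.commit()
```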
#### Scenario: Duplicate prevention
- **WHEN** fetching news that matches an existing headline (within 24 hours)
- **THEN** the system SHALL skip insertion to prevent duplicates
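The 24-hour headline check can be sketched as a pre-insert guard (function name and exact-match rule are illustrative; fuzzier matching could be substituted later):

```python
import sqlite3

def is_duplicate(conn: sqlite3.Connection, headline: str) -> bool:
    # True if the same headline was already stored within the last
    # 24 hours; the ingestion loop skips insertion in that case.
    row = conn.execute(
        "SELECT 1 FROM news_items "
        "WHERE headline = ? AND created_at >= datetime('now', '-24 hours') "
        "LIMIT 1",
        (headline,),
    ).fetchone()
    return row is not None
```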


@@ -0,0 +1,82 @@
## 1. Project Setup
- [x] 1.1 Create project directory structure (backend/, frontend/, docker/)
- [x] 1.2 Initialize Python project with pyproject.toml (FastAPI, SQLAlchemy, APScheduler, httpx)
- [x] 1.3 Create requirements.txt for Docker build
- [x] 1.4 Set up Tailwind CSS configuration
- [x] 1.5 Create .env.example with all environment variables (PERPLEXITY_API_KEY, IMAGE_QUALITY, OPENROUTER_API_KEY, UMAMI_SCRIPT_URL, UMAMI_WEBSITE_ID)
## 2. Database Layer
- [x] 2.1 Create SQLAlchemy models (NewsItem with fields: id, headline, summary, source_url, image_url, image_credit, published_at, created_at, archived)
- [x] 2.2 Create database initialization and migration scripts
- [x] 2.3 Implement database connection management with SQLite
- [x] 2.4 Create repository functions (create_news, get_recent_news, get_news_paginated, archive_old_news, delete_archived_news)
## 3. News Aggregation Service
- [x] 3.1 Implement Perplexity API client with httpx and cost logging
- [x] 3.2 Create news fetch function with query "latest AI news"
- [x] 3.3 Implement exponential backoff retry logic (3 attempts)
- [x] 3.4 Add duplicate detection (headline match within 24h)
- [x] 3.5 Create hourly scheduled job with APScheduler
- [x] 3.6 Implement image URL fetching from Perplexity
- [x] 3.7 Add image download and optimization with Pillow (configurable quality)
- [x] 3.8 Implement OpenRouter API fallback for news fetching
- [x] 3.9 Add default placeholder image fallback
## 4. Backend API
- [x] 4.1 Create FastAPI application structure
- [x] 4.2 Implement GET /api/news endpoint with pagination (cursor-based)
- [x] 4.3 Implement GET /api/news/latest endpoint for hero block
- [x] 4.4 Add CORS middleware for frontend access
- [x] 4.5 Create Pydantic schemas for API responses
- [x] 4.6 Implement health check endpoint
- [x] 4.7 Add API error handling and logging
## 5. Frontend Implementation
- [x] 5.1 Create HTML structure with ClawFort branding
- [x] 5.2 Implement hero block with Alpine.js (latest news display)
- [x] 5.3 Create news feed component with Alpine.js
- [x] 5.4 Implement infinite scroll with Intersection Observer API
- [x] 5.5 Add loading indicators and "No more news" message
- [x] 5.6 Implement source attribution display
- [x] 5.7 Add image lazy loading
- [x] 5.8 Style with Tailwind CSS (responsive design)
- [x] 5.9 Add "Powered by Perplexity" footer attribution
- [x] 5.10 Implement Umami analytics integration (conditional on env vars)
- [x] 5.11 Add analytics events: page view, scroll depth (25/50/75/100%), CTA clicks
## 6. Archive Management
- [x] 6.1 Implement nightly cleanup job (archive >30 days)
- [x] 6.2 Create permanent deletion job (>60 days archived)
- [x] 6.3 Add retention configuration (default 30 days)
## 7. Docker Containerization
- [x] 7.1 Create Dockerfile with multi-stage build (Python + static assets)
- [x] 7.2 Create docker-compose.yml for local development
- [x] 7.3 Add volume mount for SQLite persistence
- [x] 7.4 Configure environment variable handling
- [x] 7.5 Optimize image size (slim Python base)
- [x] 7.6 Add .dockerignore file
## 8. Testing & Validation
- [x] 8.1 Test Perplexity API integration manually
- [x] 8.2 Verify hourly news fetching works
- [x] 8.3 Test infinite scroll pagination
- [x] 8.4 Verify responsive design on mobile/desktop
- [x] 8.5 Test container build and run
- [x] 8.6 Verify data persistence across container restarts
- [x] 8.7 Test archive cleanup functionality
## 9. Documentation
- [x] 9.1 Create README.md with setup instructions
- [x] 9.2 Document environment variables
- [x] 9.3 Add deployment instructions
- [x] 9.4 Document API endpoints


@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-12


@@ -0,0 +1,93 @@
## Context
ClawFort currently performs automated hourly news ingestion through APScheduler (`scheduled_news_fetch()` in `backend/news_service.py`) and the same pipeline handles retries, deduplication, image optimization, and persistence. There is no operator-facing command to run this pipeline on demand.
The change adds an explicit manual trigger path for operations use cases:
- first-time bootstrap (populate content immediately after setup)
- recovery after failed external API calls
- ad-hoc operational refresh without waiting for scheduler cadence
Constraints:
- Reuse existing fetch pipeline to avoid logic drift
- Keep behavior idempotent with existing duplicate detection
- Preserve scheduler behavior; manual runs must not mutate scheduler configuration
## Goals / Non-Goals
**Goals:**
- Provide a Python command to force an immediate news fetch.
- Reuse existing retry, dedup, and storage logic.
- Return clear terminal output and process exit status for automation.
- Keep command safe to run repeatedly.
**Non-Goals:**
- Replacing APScheduler-based hourly fetch.
- Introducing new API endpoints for manual triggering.
- Changing data schema or retention policy.
- Building a full operator dashboard.
## Decisions
### Decision: Add a dedicated CLI entrypoint module
**Decision:** Add a small CLI entrypoint under backend (for example `backend/cli.py`) with a subcommand that invokes the fetch pipeline.
**Rationale:**
- Keeps operational workflow explicit and scriptable.
- Avoids coupling manual trigger behavior to HTTP routes.
- Works in local dev and containerized runtime.
**Alternatives considered:**
- Add an admin HTTP endpoint: rejected due to unnecessary security exposure.
- Trigger APScheduler internals directly: rejected to avoid scheduler-state side effects.
### Decision: Invoke the existing news pipeline directly
**Decision:** The command should call `process_and_store_news()` (or the existing sync wrapper) instead of implementing parallel fetch logic.
**Rationale:**
- Guarantees parity with scheduled runs.
- Reuses retry/backoff, fallback provider behavior, image handling, and dedup checks.
- Minimizes maintenance overhead.
**Alternatives considered:**
- New command-specific fetch implementation: rejected due to drift risk.
### Decision: Standardize command exit semantics
**Decision:** Exit code `0` for successful command execution (including zero new items), non-zero for operational failures (for example unhandled exceptions or fatal setup errors).
**Rationale:**
- Enables CI/cron/operator scripts to react deterministically.
- Matches common CLI conventions.
**Alternatives considered:**
- Exit non-zero when zero new items were inserted: rejected because dedup can make zero-item runs valid.
### Decision: Keep manual and scheduled paths independent
**Decision:** Manual command does not reconfigure or trigger scheduler jobs; it performs a one-off run only.
**Rationale:**
- Avoids race-prone manipulation of scheduler internals.
- Reduces complexity and risk in production runtime.
**Alternatives considered:**
- Temporarily altering scheduler trigger times: rejected as brittle and harder to reason about.
## Risks / Trade-offs
- **[Risk] Overlapping manual and scheduled runs may happen at boundary times** -> Mitigation: document operational guidance and keep dedup checks as safety net.
- **[Risk] External API failures still occur during forced runs** -> Mitigation: existing retry/backoff plus fallback provider path and explicit error output.
- **[Trade-off] Command success does not guarantee new rows** -> Mitigation: command output reports inserted count so operators can distinguish no-op vs failure.
## Migration Plan
1. Add CLI module and force-fetch subcommand wired to existing pipeline.
2. Add command result reporting and exit code behavior.
3. Document usage in README for bootstrap and recovery flows.
4. Validate command in local runtime and container runtime.
Rollback:
- Remove CLI entrypoint and related docs; scheduler-based hourly behavior remains unchanged.
## Open Questions
- Should force-fetch support an optional `--max-attempts` override, or stay fixed to pipeline defaults for v1?
- Should concurrent-run prevention use a process lock in this phase, or remain a documented operational constraint?


@@ -0,0 +1,35 @@
## Why
ClawFort currently fetches news on a fixed hourly schedule, which is too slow for first-time setup or for recovery after a failed API cycle. Operators need a reliable way to force an immediate news pull so they can bootstrap content quickly and recover without waiting for the next scheduled run.
## What Changes
- **New Capabilities:**
- Add a manual Python command to trigger an immediate news fetch on demand.
- Add command output that clearly reports success/failure, number of fetched/stored items, and error details.
- Add safe invocation behavior so manual runs reuse existing fetch/retry/dedup logic.
- **Backend:**
- Add a CLI entrypoint/script for force-fetch execution.
- Wire the command to existing news aggregation pipeline used by scheduled jobs.
- Return non-zero exit codes on command failure for operational automation.
- **Operations:**
- Document how and when to run the force-fetch command (initial setup and recovery scenarios).
## Capabilities
### New Capabilities
- `force-fetch-command`: Provide a Python command that triggers immediate news aggregation outside the hourly scheduler.
- `fetch-run-reporting`: Provide operator-facing command output and exit semantics for successful runs and failures.
- `manual-fetch-recovery`: Support manual recovery workflow after failed or partial API fetch cycles.
### Modified Capabilities
- None.
## Impact
- **Code:** New CLI command module/entrypoint plus minimal integration with existing `news_service` execution path.
- **APIs:** No external API contract changes.
- **Dependencies:** No required new runtime dependencies expected.
- **Infrastructure:** No deployment topology change; command runs in the same container/runtime.
- **Environment:** Reuses existing env vars (`PERPLEXITY_API_KEY`, `OPENROUTER_API_KEY`, `IMAGE_QUALITY`).
- **Data:** No schema changes; command writes through existing dedup + persistence flow.


@@ -0,0 +1,27 @@
## ADDED Requirements
### Requirement: Command reports run outcome to operator
The system SHALL present operator-facing output that describes whether the forced run succeeded or failed.
#### Scenario: Successful run reporting
- **WHEN** a forced fetch command completes without fatal errors
- **THEN** the command output includes a success indication
- **AND** includes the number of items stored in that run
#### Scenario: Failed run reporting
- **WHEN** a forced fetch command encounters a fatal execution error
- **THEN** the command output includes a failure indication
- **AND** includes actionable error details for operator diagnosis
### Requirement: Command exposes automation-friendly exit semantics
The system SHALL return deterministic process exit codes for command success and failure.
#### Scenario: Exit code on success
- **WHEN** the force-fetch command execution completes successfully
- **THEN** the process exits with code 0
- **AND** automation tooling can treat the run as successful
#### Scenario: Exit code on fatal failure
- **WHEN** the force-fetch command execution fails fatally
- **THEN** the process exits with a non-zero code
- **AND** automation tooling can detect the failure state
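The exit-semantics requirements can be sketched as a CLI entrypoint; the module layout and the injectable `fetch` parameter (standing in for `process_and_store_news`) are illustrative:

```python
import argparse
import sys

def main(argv=None, fetch=None) -> int:
    # CLI sketch for `python -m backend.cli force-fetch`.
    # `fetch` stands in for the real pipeline and returns the number of
    # items stored. Exit 0 on success -- including zero new items after
    # dedup -- and 1 on fatal errors.
    parser = argparse.ArgumentParser(prog="backend.cli")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("force-fetch", help="run one news fetch cycle now")
    args = parser.parse_args(argv)
    if args.command == "force-fetch":
        try:
            stored = fetch()
            print(f"force-fetch succeeded: {stored} item(s) stored")
            return 0
        except Exception as exc:
            print(f"force-fetch failed: {exc}", file=sys.stderr)
            return 1
    return 1
```

The return value would be passed to `sys.exit()` at the module entrypoint so automation can branch on the process exit code.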


@@ -0,0 +1,27 @@
## ADDED Requirements
### Requirement: Operator can trigger immediate news fetch via Python command
The system SHALL provide a Python command that triggers one immediate news aggregation run outside of the hourly scheduler.
#### Scenario: Successful forced fetch invocation
- **WHEN** an operator runs the documented force-fetch command with valid runtime configuration
- **THEN** the system executes one full fetch cycle using the existing aggregation pipeline
- **AND** the command terminates after the run completes
#### Scenario: Command does not reconfigure scheduler
- **WHEN** an operator runs the force-fetch command while the service scheduler exists
- **THEN** the command performs a one-off run only
- **AND** scheduler job definitions and cadence remain unchanged
### Requirement: Forced fetch reuses existing aggregation behavior
The system SHALL use the same retry, fallback, deduplication, image processing, and persistence logic as scheduled fetch runs.
#### Scenario: Retry and fallback parity
- **WHEN** the primary news provider request fails during a forced run
- **THEN** the system applies the configured retry behavior
- **AND** uses the configured fallback provider path if available
#### Scenario: Deduplication parity
- **WHEN** fetched headlines match existing duplicate rules
- **THEN** duplicate items are skipped according to existing deduplication policy
- **AND** only eligible items are persisted


@@ -0,0 +1,22 @@
## ADDED Requirements
### Requirement: Manual command supports bootstrap and recovery workflows
The system SHALL allow operators to run the forced fetch command during first-time setup and after failed scheduled cycles.
#### Scenario: Bootstrap content population
- **WHEN** the system is newly deployed and contains no current news items
- **THEN** an operator can run the force-fetch command immediately
- **AND** the command attempts to populate the dataset without waiting for the next hourly schedule
#### Scenario: Recovery after failed scheduled fetch
- **WHEN** a prior scheduled fetch cycle failed or produced incomplete results
- **THEN** an operator can run the force-fetch command on demand
- **AND** the system performs a fresh one-off fetch attempt
### Requirement: Repeated manual runs remain operationally safe
The system SHALL support repeated operator-triggered runs without corrupting data integrity.
#### Scenario: Repeated invocation in same day
- **WHEN** an operator runs the force-fetch command multiple times within the same day
- **THEN** existing deduplication behavior prevents duplicate persistence for matching items
- **AND** each command run completes with explicit run status output


@@ -0,0 +1,28 @@
## 1. CLI Command Foundation
- [x] 1.1 Create `backend/cli.py` with command parsing for force-fetch execution
- [x] 1.2 Add a force-fetch command entrypoint that can be invoked via Python module execution
- [x] 1.3 Ensure command initializes required runtime context (env + database readiness)
## 2. Force-Fetch Execution Path
- [x] 2.1 Wire command to existing news aggregation execution path (`process_and_store_news` or sync wrapper)
- [x] 2.2 Ensure command runs as a one-off operation without changing scheduler job configuration
- [x] 2.3 Preserve existing deduplication, retry, fallback, and image processing behavior during manual runs
## 3. Operator Reporting and Exit Semantics
- [x] 3.1 Add success output that includes stored item count for the forced run
- [x] 3.2 Add failure output with actionable error details when fatal execution errors occur
- [x] 3.3 Return exit code `0` on success and non-zero on fatal failures
## 4. Recovery Workflow and Validation
- [x] 4.1 Validate bootstrap workflow: force-fetch on a fresh deployment with no current items
- [x] 4.2 Validate recovery workflow: force-fetch after simulated failed scheduled cycle
- [x] 4.3 Validate repeated same-day manual runs do not create duplicate records under dedup policy
## 5. Documentation
- [x] 5.1 Update `README.md` with force-fetch command usage for first-time setup
- [x] 5.2 Document recovery-run usage and expected command output/exit behavior


@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-12


@@ -0,0 +1,94 @@
## Context
ClawFort currently stores and serves article content in a single language flow. The news creation path fetches English content via Perplexity and persists one record per article, while frontend hero/feed rendering consumes that single-language payload.
This change introduces multilingual support for Tamil and Malayalam with language-aware rendering and persistent user preference.
Constraints:
- Keep existing English behavior as default and fallback.
- Reuse current Perplexity integration for translation generation.
- Keep API and frontend changes minimal and backward-compatible where possible.
- Persist user language preference client-side so returning users keep their choice.
## Goals / Non-Goals
**Goals:**
- Generate Tamil and Malayalam translations at article creation time.
- Persist translation variants linked to the base article.
- Serve language-specific content in hero/feed API responses.
- Add landing-page language selector and persist preference across sessions.
**Non-Goals:**
- Supporting arbitrary language expansion in this phase.
- Introducing user accounts/server-side profile preferences.
- Building editorial translation workflows or manual override UI.
- Replacing Perplexity as translation provider.
## Decisions
### Decision: Model translations as child records linked to a base article
**Decision:** Keep one source article and store translation rows keyed by article ID + language code.
**Rationale:**
- Avoids duplicating non-language metadata (source URL, image attribution, timestamps).
- Supports language lookup with deterministic fallback to English.
- Eases future language additions without schema redesign.
**Alternatives considered:**
- Inline columns on article table (`headline_ta`, `headline_ml`): rejected as rigid and harder to extend.
- Fully duplicated article rows per language: rejected due to dedup and feed-order complexity.
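The child-record model can be sketched as a schema; these table and column names are illustrative, not the actual migration, but the `UNIQUE (article_id, lang)` constraint captures the decision:

```python
import sqlite3

# Translations live in a child table keyed by article id + language code,
# so non-language metadata stays on the base article row.
SCHEMA = """
CREATE TABLE articles (
    id INTEGER PRIMARY KEY,
    headline TEXT NOT NULL,
    summary TEXT NOT NULL
);
CREATE TABLE article_translations (
    article_id INTEGER NOT NULL REFERENCES articles(id),
    lang TEXT NOT NULL CHECK (lang IN ('ta', 'ml')),
    headline TEXT NOT NULL,
    summary TEXT NOT NULL,
    UNIQUE (article_id, lang)
);
"""
```

Adding a future language only widens the `CHECK` list rather than altering the article table.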
### Decision: Translate immediately after article creation in ingestion pipeline
**Decision:** For each newly accepted article, request Tamil and Malayalam translations and persist before ingestion cycle completes.
**Rationale:**
- Keeps article and translations synchronized.
- Avoids delayed jobs and partial language availability in normal flow.
- Fits existing per-article processing loop.
**Alternatives considered:**
- Asynchronous background translation queue: rejected for higher complexity in this phase.
### Decision: Add optional language input to read APIs with English fallback
**Decision:** Add language selection input (query param) on existing read endpoints; if translation missing, return English source text.
**Rationale:**
- Preserves endpoint footprint and frontend integration simplicity.
- Guarantees response completeness even when translation fails.
- Supports progressive rollout without breaking existing consumers.
**Alternatives considered:**
- New language-specific endpoints: rejected as unnecessary API surface growth.
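The fallback rule can be sketched as a response-assembly helper (function name and dict shape are illustrative of the read-endpoint behavior, not the actual implementation):

```python
SUPPORTED_LANGS = {"en", "ta", "ml"}

def localized_fields(article: dict, lang: str) -> dict:
    # Return headline/summary in the requested language, falling back
    # to English when the language is unsupported or the translation
    # row is missing.
    if lang in SUPPORTED_LANGS and lang != "en":
        translation = article.get("translations", {}).get(lang)
        if translation:
            return {"headline": translation["headline"],
                    "summary": translation["summary"]}
    return {"headline": article["headline"], "summary": article["summary"]}
```

Because the fallback is applied per field set, a partially translated feed still renders complete items.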
### Decision: Persist frontend language preference in localStorage with cookie fallback
**Decision:** Primary persistence in `localStorage`; optional cookie fallback for constrained browsers.
**Rationale:**
- Simple client-only persistence without backend session dependencies.
- Matches one-page app architecture and current no-auth model.
**Alternatives considered:**
- Cookie-only preference: rejected as less ergonomic for JS state hydration.
## Risks / Trade-offs
- **[Risk] Translation generation increases API cost/latency per ingestion cycle** -> Mitigation: bounded retries, fallback to English when translation unavailable.
- **[Risk] Partial translation failures create mixed-language feed** -> Mitigation: deterministic fallback to English for missing translation rows.
- **[Trade-off] Translation-at-ingest adds synchronous processing time** -> Mitigation: keep language set fixed to two targets in this phase.
- **[Risk] Language preference desynchronization between tabs/devices** -> Mitigation: accept per-browser persistence scope in current architecture.
## Migration Plan
1. Add translation persistence model and migration path.
2. Extend ingestion pipeline to request/store Tamil and Malayalam translations.
3. Add language-aware API response behavior with fallback.
4. Implement frontend language selector + preference persistence.
5. Validate language switching, fallback, and returning-user preference behavior.
Rollback:
- Disable language selection in frontend and return English-only payload while retaining translation data safely.
## Open Questions
- Should translation failures be retried independently per language within the same cycle, or skipped after one failed language call?
- Should unsupported language requests return 400 or silently fallback to English in v1?


@@ -0,0 +1,37 @@
## Why
ClawFort currently publishes content in a single language, which limits accessibility for regional audiences. Adding multilingual delivery now improves usability for Tamil and Malayalam readers while keeping the current English workflow intact.
## What Changes
- **New Capabilities:**
- Persist the fetched articles locally in database.
- Generate Tamil and Malayalam translations for each newly created article using Perplexity.
- Store translated variants as language-specific content items linked to the same base article.
- Add a language selector on the landing page to switch article rendering language.
- Persist user language preference in browser storage (local storage or cookie) and restore it for returning users.
- **Frontend:**
- Add visible language switcher UI on the one-page experience.
- Render hero and feed content in selected language when translation exists.
- **Backend:**
- Extend content generation flow to request and save multilingual outputs.
- Serve language-specific content for existing API reads.
## Capabilities
### New Capabilities
- `article-translations-ml-tm`: Create and store Tamil and Malayalam translated content variants for each article at creation time.
- `language-aware-content-delivery`: Return and render language-specific article fields based on selected language.
- `language-preference-persistence`: Persist and restore user-selected language across sessions for returning users.
### Modified Capabilities
- None.
## Impact
- **Code:** Backend aggregation/storage flow, API response handling, and frontend rendering/state management will be updated.
- **APIs:** Existing read endpoints will need language-aware response behavior or language selection input handling.
- **Dependencies:** Reuses Perplexity integration; no mandatory new external provider expected.
- **Infrastructure:** No deployment topology changes.
- **Environment:** Uses existing Perplexity configuration; may introduce optional translation toggles/settings later.
- **Data:** Adds translation data model/fields linked to each source article.

View File

@@ -0,0 +1,27 @@
## ADDED Requirements
### Requirement: System generates Tamil and Malayalam translations at article creation time
The system SHALL generate Tamil (`ta`) and Malayalam (`ml`) translations for each newly created article during ingestion.
#### Scenario: Translation generation for new article
- **WHEN** a new source article is accepted for storage
- **THEN** the system requests Tamil and Malayalam translations for the headline and summary
- **AND** translation generation occurs in the same ingestion flow for that article
#### Scenario: Translation failure fallback
- **WHEN** translation generation fails for one or both target languages
- **THEN** the system stores the base article in English
- **AND** marks missing translations as unavailable without failing the whole ingestion cycle
### Requirement: System stores translation variants linked to the same article
The system SHALL persist language-specific translated content as translation items associated with the base article.
#### Scenario: Persist linked translations
- **WHEN** Tamil and Malayalam translations are generated successfully
- **THEN** the system stores them as language-specific content variants linked to the base article identifier
- **AND** translation records remain queryable by language code
#### Scenario: No duplicate translation variants per language
- **WHEN** translation storage is attempted for an article-language pair that already exists
- **THEN** the system avoids creating duplicate translation items for the same language
- **AND** preserves one authoritative translation variant per article per language in this phase
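The one-variant-per-article-per-language rule above can be sketched with a small in-memory repository. This is an illustrative sketch, not the project's actual persistence layer — in the real SQLAlchemy model the same guarantee would come from a unique constraint on `(article_id, language)`; the class and field names here are assumptions.

```python
from dataclasses import dataclass

SUPPORTED_LANGUAGES = {"en", "ta", "ml"}


@dataclass(frozen=True)
class TranslationVariant:
    article_id: int
    language: str
    headline: str
    summary: str


class TranslationRepository:
    """Keeps exactly one authoritative variant per (article, language) pair."""

    def __init__(self):
        # Hypothetical in-memory store; a DB unique constraint plays this
        # role in the actual implementation.
        self._variants = {}  # (article_id, language) -> TranslationVariant

    def save(self, variant: TranslationVariant) -> bool:
        if variant.language not in SUPPORTED_LANGUAGES:
            raise ValueError(f"unsupported language: {variant.language}")
        key = (variant.article_id, variant.language)
        if key in self._variants:
            return False  # duplicate attempt: keep the existing variant
        self._variants[key] = variant
        return True

    def get(self, article_id: int, language: str):
        return self._variants.get((article_id, language))
```

Returning `False` on a duplicate (rather than raising) matches the spec's intent that a repeated storage attempt is a no-op, not an error.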

View File

@@ -0,0 +1,27 @@
## ADDED Requirements
### Requirement: API supports language-aware content retrieval
The system SHALL support language-aware content delivery for hero and feed reads using selected language input.
#### Scenario: Language-specific latest article response
- **WHEN** a client requests latest article data with a supported language selection
- **THEN** the system returns headline and summary in the selected language when available
- **AND** includes the corresponding base article metadata and media attribution
#### Scenario: Language-specific paginated feed response
- **WHEN** a client requests paginated feed data with a supported language selection
- **THEN** the system returns each feed item's headline and summary in the selected language when available
- **AND** preserves existing pagination behavior and ordering semantics
### Requirement: Language fallback to English is deterministic
The system SHALL return English source content when the requested translation is unavailable.
#### Scenario: Missing translation fallback
- **WHEN** a client requests Tamil or Malayalam content for an article lacking that translation
- **THEN** the system returns the English headline and summary for that article
- **AND** response shape remains consistent with language-aware responses
#### Scenario: Unsupported language handling
- **WHEN** a client requests a language outside supported values (`en`, `ta`, `ml`)
- **THEN** the system applies the defined default language behavior for this phase
- **AND** avoids breaking existing consumers of news endpoints
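The deterministic fallback described above can be expressed as one pure resolution function. This is a minimal sketch assuming a hypothetical article shape (English fields at the top level, translations keyed by language code); it is not the project's actual API handler.

```python
SUPPORTED_LANGUAGES = {"en", "ta", "ml"}
DEFAULT_LANGUAGE = "en"


def resolve_content(article: dict, requested: str) -> dict:
    """Return headline/summary in the requested language, falling back to English.

    Assumed article shape:
      {"headline": "...", "summary": "...",          # English source fields
       "translations": {"ta": {...}, "ml": {...}}}   # may be partial or empty
    """
    # Unsupported codes collapse to the default language for this phase.
    lang = requested if requested in SUPPORTED_LANGUAGES else DEFAULT_LANGUAGE
    variant = article.get("translations", {}).get(lang)
    if lang == "en" or variant is None:
        # Missing translation: deterministic fallback to the English source,
        # keeping the response shape identical to a translated response.
        return {"language": "en",
                "headline": article["headline"],
                "summary": article["summary"]}
    return {"language": lang,
            "headline": variant["headline"],
            "summary": variant["summary"]}
```

Because the fallback path returns the same keys as the translated path, existing consumers see a stable response shape regardless of translation availability.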

View File

@@ -0,0 +1,27 @@
## ADDED Requirements
### Requirement: Landing page provides language selector
The system SHALL display a language selector on the landing page that allows switching between English, Tamil, and Malayalam content views.
#### Scenario: User selects language from landing page
- **WHEN** a user chooses Tamil or Malayalam from the language selector
- **THEN** hero and feed content re-render in the selected language
- **AND** subsequent API requests use the selected language context
#### Scenario: User switches back to English
- **WHEN** a user selects English in the language selector
- **THEN** content renders in English
- **AND** language state updates immediately in the frontend view
### Requirement: User language preference is persisted and restored
The system SHALL persist selected language preference in client-side storage and restore it for returning users.
#### Scenario: Persist language selection
- **WHEN** a user selects a supported language on the landing page
- **THEN** the selected language code is stored in local storage or a client cookie
- **AND** the persisted value is used as preferred language for future visits on the same browser
#### Scenario: Restore preference on return visit
- **WHEN** a returning user opens the landing page
- **THEN** the system reads persisted language preference from client storage
- **AND** initializes the UI and content requests with that language by default
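The restore-on-return behavior implies a precedence order: an explicit selection wins over the persisted preference, which wins over the default. A hedged sketch of that resolution (function name and signature are assumptions, not the project's API):

```python
SUPPORTED_LANGUAGES = {"en", "ta", "ml"}


def resolve_preferred_language(explicit=None, persisted=None, default="en"):
    """Pick the effective language: explicit selection first, then the
    persisted preference (localStorage/cookie value sent by the client),
    then the default. Unsupported or stale values are skipped rather than
    rejected, so a returning user with an outdated preference still gets
    content instead of an error."""
    for candidate in (explicit, persisted, default):
        if candidate in SUPPORTED_LANGUAGES:
            return candidate
    return "en"
```

The same precedence would apply on the frontend when initializing the selector state before the first content fetch.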

View File

@@ -0,0 +1,40 @@
## 1. Translation Data Model and Persistence
- [x] 1.1 Add translation persistence model linked to base article with language code (`en`, `ta`, `ml`)
- [x] 1.2 Update database initialization/migration path to create translation storage structures
- [x] 1.3 Add repository operations to create/read translation variants by article and language
- [x] 1.4 Enforce no duplicate translation variant for the same article-language pair
## 2. Ingestion Pipeline Translation Generation
- [x] 2.1 Extend ingestion flow to trigger Tamil and Malayalam translation generation for each new article
- [x] 2.2 Reuse Perplexity integration for translation calls with language-specific prompts
- [x] 2.3 Persist generated translations as linked variants during the same ingestion cycle
- [x] 2.4 Implement graceful fallback when translation generation fails (store English base, continue cycle)
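The graceful fallback in task 2.4 — store the English base and continue the cycle even when a translation call fails — can be sketched with an injected translator. This is an assumption-laden sketch: `translate` stands in for the real Perplexity call, and the return shape is hypothetical.

```python
def generate_translations(article, translate, languages=("ta", "ml")):
    """Call translate(text, lang) per target language; a failure in one
    language marks it unavailable instead of aborting the ingestion cycle."""
    variants, unavailable = {}, []
    for lang in languages:
        try:
            variants[lang] = {
                "headline": translate(article["headline"], lang),
                "summary": translate(article["summary"], lang),
            }
        except Exception:
            # The English base article is stored regardless; this language
            # is simply recorded as unavailable for this article.
            unavailable.append(lang)
    return variants, unavailable
```

Whether a failed language should be retried within the same cycle is left open (see the Open Questions section of the proposal); this sketch takes the skip-after-one-failure interpretation.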
## 3. Language-Aware API Delivery
- [x] 3.1 Add language selection input handling to latest-news endpoint
- [x] 3.2 Add language selection input handling to paginated feed endpoint
- [x] 3.3 Return translated headline/summary when available and fallback to English when missing
- [x] 3.4 Define and implement behavior for unsupported language requests in this phase
## 4. Frontend Language Selector and Rendering
- [x] 4.1 Add landing-page language selector UI with English, Tamil, and Malayalam options
- [x] 4.2 Update hero data fetch/render flow to request and display selected language content
- [x] 4.3 Update feed pagination fetch/render flow to request and display selected language content
- [x] 4.4 Keep existing attribution/media rendering behavior intact across language switches
## 5. Preference Persistence and Returning User Behavior
- [x] 5.1 Persist user-selected language in localStorage with cookie fallback
- [x] 5.2 Restore persisted language on page load before initial content fetch
- [x] 5.3 Initialize selector state and API language requests from restored preference
## 6. Validation and Documentation
- [x] 6.1 Validate translation creation and retrieval for Tamil and Malayalam on new articles
- [x] 6.2 Validate fallback behavior for missing translation variants and unsupported language input
- [x] 6.3 Validate returning-user language persistence across browser sessions
- [x] 6.4 Update README with multilingual behavior, language selector usage, and persistence details

20
openspec/config.yaml Normal file
View File

@@ -0,0 +1,20 @@
schema: spec-driven
# Project context (optional)
# This is shown to AI when creating artifacts.
# Add your tech stack, conventions, style guides, domain knowledge, etc.
# Example:
# context: |
# Tech stack: TypeScript, React, Node.js
# We use conventional commits
# Domain: e-commerce platform
# Per-artifact rules (optional)
# Add custom rules for specific artifacts.
# Example:
# rules:
# proposal:
# - Keep proposals under 500 words
# - Always include a "Non-goals" section
# tasks:
# - Break tasks into chunks of max 2 hours

28
pyproject.toml Normal file
View File

@@ -0,0 +1,28 @@
[project]
name = "clawfort"
version = "0.1.0"
description = "ClawFort AI News Aggregation One-Pager"
requires-python = ">=3.11"
dependencies = [
"fastapi>=0.110.0",
"uvicorn[standard]>=0.27.0",
"sqlalchemy>=2.0.0",
"apscheduler>=3.10.0",
"httpx>=0.27.0",
"pillow>=10.0.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
]
[project.optional-dependencies]
dev = [
"ruff>=0.3.0",
"pytest>=8.0.0",
]
[tool.ruff]
target-version = "py311"
line-length = 100
[tool.ruff.lint]
select = ["E", "F", "I", "N", "W"]

8
requirements.txt Normal file
View File

@@ -0,0 +1,8 @@
fastapi>=0.110.0
uvicorn[standard]>=0.27.0
sqlalchemy>=2.0.0
apscheduler>=3.10.0
httpx>=0.27.0
pillow>=10.0.0
pydantic>=2.0.0
python-dotenv>=1.0.0