p19-bug-fixes

2026-02-13 16:57:45 -05:00
parent e2406bf978
commit f6beedd68f
75 changed files with 11989 additions and 48 deletions


@@ -0,0 +1,152 @@
---
name: "OPSX: Apply"
description: Implement tasks from an OpenSpec change (Experimental)
category: Workflow
tags: [workflow, artifacts, experimental]
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
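The state handling above is a small dispatch. A minimal sketch, assuming a `state` field with the values listed (the real payload shape may differ):

```python
# Sketch only: field names assume the apply-instructions JSON described above.
def next_action(payload: dict) -> str:
    state = payload.get("state")
    if state == "blocked":
        return "Missing artifacts - suggest /opsx:continue"
    if state == "all_done":
        return "All tasks complete - suggest archive"
    return "Proceed to implementation"

print(next_action({"state": "blocked"}))
```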
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
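Marking a task complete is a one-line text substitution. A minimal sketch of flipping the first unchecked box, assuming the `- [ ]` / `- [x]` convention above:

```python
import re

def mark_first_pending(tasks_md: str) -> str:
    """Flip the first unchecked `- [ ]` to `- [x]`, leaving the rest untouched."""
    return re.sub(r"- \[ \]", "- [x]", tasks_md, count=1)

text = "- [x] Task 1\n- [ ] Task 2\n- [ ] Task 3\n"
print(mark_first_pending(text))
# -> "- [x] Task 1\n- [x] Task 2\n- [ ] Task 3\n"
```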
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! You can archive this change with `/opsx:archive`.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@@ -0,0 +1,157 @@
---
name: "OPSX: Archive"
description: Archive a completed change in the experimental workflow
category: Workflow
tags: [workflow, archive, experimental]
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
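The checkbox count can be sketched with two regex scans, assuming the `- [ ]` / `- [x]` markers described above:

```python
import re

def count_tasks(tasks_md: str) -> tuple[int, int]:
    """Return (complete, incomplete) checkbox counts from a tasks file."""
    complete = len(re.findall(r"^- \[x\]", tasks_md, flags=re.MULTILINE))
    incomplete = len(re.findall(r"^- \[ \]", tasks_md, flags=re.MULTILINE))
    return complete, incomplete

print(count_tasks("- [x] Done\n- [ ] Pending\n- [ ] Also pending\n"))
# -> (1, 2)
```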
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute `/opsx:sync` logic. Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
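The naming and collision check in this step can be sketched as follows. The directory layout follows the paths above; `archive_change` is a hypothetical helper, not part of the CLI:

```python
from datetime import date
from pathlib import Path
import shutil
import tempfile

def archive_change(changes_dir: Path, name: str) -> Path:
    """Move changes/<name> to changes/archive/YYYY-MM-DD-<name>, failing on collisions."""
    target = changes_dir / "archive" / f"{date.today():%Y-%m-%d}-{name}"
    if target.exists():
        raise FileExistsError(f"Archive target already exists: {target}")
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(changes_dir / name), str(target))
    return target

# Demo against a throwaway directory tree
root = Path(tempfile.mkdtemp())
(root / "add-auth").mkdir()
print(archive_change(root, "add-auth").name)
```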
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use /opsx:sync approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,242 @@
---
name: "OPSX: Bulk Archive"
description: Archive multiple completed changes at once
category: Workflow
tags: [workflow, archive, experimental, bulk]
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
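Extracting requirement names is a single multiline regex. A sketch assuming the `### Requirement: <name>` heading convention above:

```python
import re

def requirement_names(spec_md: str) -> list[str]:
    """Pull requirement names from `### Requirement: <name>` headings."""
    return re.findall(r"^### Requirement: (.+)$", spec_md, flags=re.MULTILINE)

spec = "### Requirement: OAuth Provider Integration\n\ntext\n### Requirement: Token Refresh\n"
print(requirement_names(spec))
# -> ['OAuth Provider Integration', 'Token Refresh']
```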
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
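The conflict map is a reverse index over the per-change data gathered in step 3. A sketch, where `find_conflicts` is a hypothetical helper taking a change-to-capabilities mapping:

```python
def find_conflicts(deltas: dict[str, list[str]]) -> dict[str, list[str]]:
    """Invert change -> capabilities into capability -> changes, keeping 2+ overlaps."""
    by_capability: dict[str, list[str]] = {}
    for change, capabilities in deltas.items():
        for cap in capabilities:
            by_capability.setdefault(cap, []).append(change)
    return {cap: names for cap, names in by_capability.items() if len(names) >= 2}

print(find_conflicts({
    "change-a": ["auth"],
    "change-b": ["auth"],
    "change-c": ["api"],
}))
# -> {'auth': ['change-a', 'change-b']}
```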
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others


@@ -0,0 +1,114 @@
---
name: "OPSX: Continue"
description: Continue working on a change - create the next artifact (Experimental)
category: Workflow
tags: [workflow, artifacts, experimental]
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name after `/opsx:continue` (e.g., `/opsx:continue add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
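Ranking changes by recency can be sketched as follows. It assumes `name` and ISO-8601 `lastModified` fields in the `openspec list --json` output (lexicographic order on ISO timestamps then matches chronological order):

```python
def recent_options(changes: list[dict], limit: int = 4) -> list[str]:
    """Sort by lastModified (newest first) and mark the top pick as recommended."""
    ranked = sorted(changes, key=lambda c: c.get("lastModified", ""), reverse=True)[:limit]
    return [
        c["name"] + (" (Recommended)" if i == 0 else "")
        for i, c in enumerate(ranked)
    ]

print(recent_options([
    {"name": "add-auth", "lastModified": "2026-02-10T12:00:00Z"},
    {"name": "add-dark-mode", "lastModified": "2026-02-12T09:30:00Z"},
]))
# -> ['add-dark-mode (Recommended)', 'add-auth']
```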
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change with `/opsx:apply` or archive it with `/opsx:archive`."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
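Selecting the next artifact from the status JSON can be sketched as follows; the `id` field name is an assumption based on the `<artifact-id>` placeholder above:

```python
def pick_next(status: dict):
    """Return the first artifact with status 'ready', or None when complete or blocked."""
    if status.get("isComplete"):
        return None
    for artifact in status.get("artifacts", []):
        if artifact.get("status") == "ready":
            return artifact["id"]
    return None

print(pick_next({
    "isComplete": False,
    "artifacts": [
        {"id": "proposal", "status": "done"},
        {"id": "specs", "status": "ready"},
        {"id": "design", "status": "blocked"},
    ],
}))
# -> specs
```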
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Run `/opsx:continue` to create the next artifact"
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output


@@ -0,0 +1,174 @@
---
name: "OPSX: Explore"
description: "Enter explore mode - think through ideas, investigate problems, clarify requirements"
category: Workflow
tags: [workflow, explore, experimental, thinking]
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Exploration
There's no required ending. An exploration might:
- **Flow into action**: "Ready to start? `/opsx:new` or `/opsx:ff`"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Exploration is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own


@@ -0,0 +1,94 @@
---
name: "OPSX: Fast Forward"
description: Create a change and generate all artifacts needed for implementation in one go
category: Workflow
tags: [workflow, artifacts, experimental]
---
Fast-forward through artifact creation - generate everything needed to start implementation.
**Input**: The argument after `/opsx:ff` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
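Deriving the kebab-case name can be sketched as a naive slug. Note it won't shorten words the way `add-user-auth` does - that judgment stays with you:

```python
import re

def kebab_name(description: str) -> str:
    """Derive a kebab-case change name from a free-form description."""
    return re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")

print(kebab_name("Add user authentication"))
# -> add-user-authentication
```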
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
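The apply-ready check in step 4b is a set comparison. A sketch, with field names following the status JSON described above:

```python
def apply_ready(status: dict) -> bool:
    """True when every artifact listed in applyRequires has status 'done'."""
    done = {a["id"] for a in status.get("artifacts", []) if a.get("status") == "done"}
    return all(artifact_id in done for artifact_id in status.get("applyRequires", []))

status = {
    "applyRequires": ["tasks"],
    "artifacts": [{"id": "proposal", "status": "done"}, {"id": "tasks", "status": "ready"}],
}
print(apply_ready(status))
# -> False
```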
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use the `template` as a starting point, filling in based on context
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next


@@ -0,0 +1,69 @@
---
name: "OPSX: New"
description: Start a new change using the experimental artifact workflow (OPSX)
category: Workflow
tags: [workflow, artifacts, experimental]
---
Start a new change using the experimental artifact-driven approach.
**Input**: The argument after `/opsx:new` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema. Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Run `/opsx:continue` or just describe what this change is about and I'll draft it."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest using `/opsx:continue` instead
- Pass `--schema <name>` if using a non-default workflow

View File

@@ -0,0 +1,525 @@
---
name: "OPSX: Onboard"
description: Guided onboarding - walk through a complete OpenSpec workflow cycle with narration
category: Workflow
tags: [workflow, onboarding, tutorial, learning]
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
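The scan above can be sketched as a quick grep pass. The `src` directory and `*.ts` extension are assumptions about a TypeScript project; adjust both for the codebase at hand:

```shell
# Rough scan for starter-task signals; src/ and *.ts are assumed, not required.
# TODO/FIXME-style markers
grep -rn -E 'TODO|FIXME|HACK|XXX' src --include='*.ts' 2>/dev/null | head -20
# Debug artifacts left in non-debug code
grep -rn -E 'console\.(log|debug)|debugger' src --include='*.ts' 2>/dev/null | head -20
# Loose TypeScript types
grep -rn -E ': any|as any' src --include='*.ts' 2>/dev/null | head -20
```

Each hit gives a file and line number you can cite directly in the suggestion list.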
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
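The checkbox flip in step 4 can be sketched as a targeted `sed` edit. The file path and task text here are hypothetical; in practice you edit the change's real `tasks.md`:

```shell
# Demo on a scratch file; the task line "1.1 Add input validation" is an assumed example.
printf -- '- [ ] 1.1 Add input validation\n- [ ] 1.2 Add tests\n' > /tmp/tasks-demo.md
# Flip exactly one checkbox, anchored on the full task text
sed -i.bak 's/^- \[ \] 1.1 Add input validation$/- [x] 1.1 Add input validation/' /tmp/tasks-demo.md
grep -n '\[x\]' /tmp/tasks-demo.md
```

Anchoring on the full task text avoids flipping the wrong checkbox when several tasks share a prefix.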
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice

View File

@@ -0,0 +1,134 @@
---
name: "OPSX: Sync"
description: Sync delta specs from a change to main specs
category: Workflow
tags: [workflow, specs, experimental]
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name after `/opsx:sync` (e.g., `/opsx:sync add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under the `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
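The delta-spec lookup can be sketched as a `find` over the change directory. The change name below is a placeholder:

```shell
name="add-auth"  # placeholder change name
deltas=$(find "openspec/changes/$name/specs" -name spec.md 2>/dev/null)
if [ -z "$deltas" ]; then
  echo "No delta specs found for $name"
else
  echo "$deltas"
fi
```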
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running it twice should give the same result

View File

@@ -0,0 +1,164 @@
---
name: "OPSX: Verify"
description: Verify implementation matches change artifacts before archiving
category: Workflow
tags: [workflow, verify, experimental]
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name after `/opsx:verify` (e.g., `/opsx:verify add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
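The checkbox tally above can be sketched with `grep`. The tasks path is a placeholder, and the regex assumes the standard `- [ ]` / `- [x]` markers:

```shell
TASKS_FILE="openspec/changes/add-auth/tasks.md"  # placeholder path
# Count all checkboxes, then only the completed ones
total=$(grep -c -E '^ *- \[[ x]\]' "$TASKS_FILE" 2>/dev/null)
complete=$(grep -c -E '^ *- \[x\]' "$TASKS_FILE" 2>/dev/null)
echo "${complete:-0}/${total:-0} tasks complete"
```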
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension    | Status            |
|--------------|-------------------|
| Completeness | X/Y tasks, N reqs |
| Correctness  | M/N reqs covered  |
| Coherence    | Followed/Issues   |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

View File

@@ -0,0 +1,8 @@
{
"permissions": {
"allow": [
"Bash(openspec list --json)",
"Bash(openspec status --change \"p17-regression-defects\" --json)"
]
}
}

View File

@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

View File

@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If the user chooses sync, execute the `/opsx:sync` logic (use the openspec-sync-specs skill). Unless the user cancels, proceed to archive regardless of the sync choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
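Put together, the date-stamped move might look like this sketch (the change name is a placeholder; `date +%Y-%m-%d` supplies the prefix):

```shell
name="add-auth"  # placeholder change name
target="openspec/changes/archive/$(date +%Y-%m-%d)-$name"
mkdir -p openspec/changes/archive
if [ -e "$target" ]; then
  # Target exists: fail rather than overwrite an earlier archive
  echo "Archive target already exists: $target" >&2
elif [ -d "openspec/changes/$name" ]; then
  mv "openspec/changes/$name" "$target"
  echo "Archived to $target"
fi
```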
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

@@ -0,0 +1,246 @@
---
name: openspec-bulk-archive-change
description: Archive multiple completed changes at once. Use when archiving several parallel changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
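Extracting requirement names (step 3c) is a one-line `sed`; the delta spec below is a hypothetical fixture:

```bash
# Sketch: pull requirement names from a delta spec's headings.
spec=$(mktemp)
cat > "$spec" <<'EOF'
### Requirement: OAuth Provider Integration
The system SHALL support OAuth providers.
### Requirement: Token Refresh
EOF
reqs=$(sed -n 's/^### Requirement: //p' "$spec")
echo "$reqs"
rm -f "$spec"
```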
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
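One way to sketch the conflict map is to list each (capability, change) pair and keep the capabilities that repeat; the directory layout below is a hypothetical sandboxed fixture:

```bash
# Sketch: find capabilities touched by 2+ changes (a conflict), sandboxed.
root=$(mktemp -d)
# Hypothetical fixture: two changes touch "auth", one touches "api".
mkdir -p "$root/add-oauth/specs/auth" "$root/add-jwt/specs/auth" \
         "$root/add-rest/specs/api"
conflicts=$(for change in "$root"/*/; do
  for cap in "$change"specs/*/; do
    basename "$cap"
  done
done | sort | uniq -d)
echo "conflicting capabilities: $conflicts"   # conflicting capabilities: auth
```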
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth           | Done      | 4/4   | 1 delta | auth (!)  | Ready* |
| add-jwt             | Done      | 3/3   | 1 delta | auth (!)  | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management -> archive/2026-01-19-schema-management/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others

@@ -0,0 +1,118 @@
---
name: openspec-continue-change
description: Continue working on an OpenSpec change by creating the next artifact. Use when the user wants to progress their change, create the next artifact, or continue their workflow.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
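Picking the next artifact from that JSON can be sketched with `jq` (assumed available); the payload below is a hand-written stand-in for real CLI output:

```bash
# Sketch: pick the first "ready" artifact from sample status JSON (jq assumed).
status='{"schemaName":"spec-driven","isComplete":false,
         "artifacts":[{"id":"proposal","status":"done"},
                      {"id":"specs","status":"ready"},
                      {"id":"design","status":"blocked"}]}'
next_id=$(printf '%s' "$status" \
  | jq -r '[.artifacts[] | select(.status == "ready")][0].id')
echo "next artifact: $next_id"   # next artifact: specs
```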
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Want to continue? Just ask me to continue or tell me what to do next."
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output

@@ -0,0 +1,290 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Exploration
There's no required ending. An exploration might:
- **Flow into action**: "Ready to start? /opsx:new or /opsx:ff"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change: /opsx:new <name>
- Fast-forward to tasks: /opsx:ff <name>
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Exploration is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

@@ -0,0 +1,101 @@
---
name: openspec-ff-change
description: Fast-forward through OpenSpec artifact creation. Use when the user wants to quickly create all artifacts needed for implementation without stepping through each one individually.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Fast-forward through artifact creation - generate everything needed to start implementation in one go.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
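The `applyRequires` completeness check in step 4b can be sketched with `jq` (assumed available); the JSON below is a hand-written stand-in for real status output:

```bash
# Sketch: count applyRequires artifacts that are not yet done (jq assumed).
status='{"applyRequires":["specs","tasks"],
         "artifacts":[{"id":"specs","status":"done"},
                      {"id":"tasks","status":"ready"}]}'
remaining=$(printf '%s' "$status" | jq '[.applyRequires[] as $id
  | .artifacts[] | select(.id == $id and .status != "done") | .id] | length')
echo "remaining required artifacts: $remaining"   # remaining required artifacts: 1
```

Loop until this count reaches zero, then stop creating artifacts.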
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, suggest continuing that change instead
- Verify each artifact file exists after writing before proceeding to next

@@ -0,0 +1,74 @@
---
name: openspec-new-change
description: Start a new OpenSpec change using the experimental artifact workflow. Use when the user wants to create a new feature, fix, or modification with a structured step-by-step approach.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Start a new change using the experimental artifact-driven approach.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema (e.g., `proposal` for spec-driven).
Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Just describe what this change is about and I'll draft it, or ask me to continue."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest continuing that change instead
- Pass --schema if using a non-default workflow

@@ -0,0 +1,529 @@
---
name: openspec-onboard
description: Guided onboarding for OpenSpec - walk through a complete workflow cycle with narration and real codebase work.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, fetch the proposal instructions:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
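If it helps to see the file mechanics spelled out, saving the draft is just a directory plus a file. A sketch with hypothetical change and capability names (normally the agent writes this file directly with its editing tools):

```shell
# Illustrative only: hypothetical change/capability names
change="my-first-change"
cap="greeting"
mkdir -p "openspec/changes/$change/specs/$cap"
cat > "openspec/changes/$change/specs/$cap/spec.md" <<'EOF'
## ADDED Requirements

### Requirement: Friendly Greeting
The CLI SHALL greet the user by name.

#### Scenario: Name provided
- **WHEN** the user runs `greet Alice`
- **THEN** the output is `Hello, Alice!`
EOF
```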
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice


@@ -0,0 +1,138 @@
---
name: openspec-sync-specs
description: Sync delta specs from a change to main specs. Use when the user wants to update main specs with changes from a delta spec, without archiving the change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
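One way to sketch this lookup in shell (hypothetical change name; the unmatched glob simply yields nothing when no delta specs exist):

```shell
# Illustrative: list each delta spec and the requirement headings it declares
change="add-auth"   # hypothetical change name
for f in openspec/changes/"$change"/specs/*/spec.md; do
  [ -e "$f" ] || continue   # unmatched glob -> no delta specs
  echo "== $f"
  grep -E '^### Requirement:' "$f" || true
done
```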
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running it twice should give the same result


@@ -0,0 +1,168 @@
---
name: openspec-verify-change
description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
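The checkbox counting can be sketched with `grep`. A self-contained sample (a real run points at the change's tasks file instead of writing one):

```shell
# Illustrative: sample tasks.md, then count complete vs total checkboxes
printf -- '- [x] 1.1 Add flag\n- [ ] 1.2 Wire up CLI\n- [x] 2.1 Verify\n' > tasks.md
total=$(grep -cE '^[[:space:]]*- \[[ x]\]' tasks.md)
done_count=$(grep -cE '^[[:space:]]*- \[x\]' tasks.md)
echo "$done_count/$total tasks complete"   # -> 2/3 tasks complete
```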
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension    | Status            |
|--------------|-------------------|
| Completeness | X/Y tasks, N reqs |
| Correctness  | M/N reqs covered  |
| Coherence    | Followed/Issues   |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"


@@ -0,0 +1,151 @@
---
name: opsx-apply
description: Implement tasks from an OpenSpec change (Experimental)
invokable: true
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
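For example, with `jq` available (an assumption — parse the JSON however you prefer), pulling out `schemaName` looks like this. The JSON literal below is a hypothetical sample of the shape described above, standing in for the real CLI output:

```shell
# Illustrative: in practice, status_json=$(openspec status --change "<name>" --json)
# The artifacts field shape here is an assumption for the sketch
status_json='{"schemaName":"spec-driven","artifacts":[{"id":"tasks","status":"done"}]}'
schema=$(printf '%s' "$status_json" | jq -r '.schemaName')
echo "Schema: $schema"   # -> Schema: spec-driven
```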
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
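Flipping a checkbox is a one-line edit. A sketch using GNU `sed` and a sample file (the task text is hypothetical — real task lines come from the tasks file):

```shell
# Illustrative: sample tasks file, then mark one specific task complete (GNU sed -i)
printf -- '- [ ] 1.1 Add flag\n- [ ] 1.2 Wire up CLI\n' > tasks.md
task='1.1 Add flag'   # hypothetical task text
sed -i "s/^- \[ \] $task\$/- [x] $task/" tasks.md
grep -F -- "- [x] $task" tasks.md   # -> - [x] 1.1 Add flag
```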
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! You can archive this change with `/opsx:archive`.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@@ -0,0 +1,156 @@
---
name: opsx-archive
description: Archive a completed change in the experimental workflow
invokable: true
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute `/opsx:sync` logic. Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with an error; suggest renaming the existing archive or using a different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
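Concretely, the target-name generation and collision check can be sketched as follows (hypothetical change name; `date +%F` prints YYYY-MM-DD):

```shell
# Illustrative: date-stamped archive move with a collision check
name="my-change"                       # hypothetical change name
mkdir -p "openspec/changes/$name"      # stand-in change dir for this sketch
target="openspec/changes/archive/$(date +%F)-$name"
if [ -e "$target" ]; then
  echo "Archive target already exists: $target" >&2
else
  mkdir -p openspec/changes/archive
  mv "openspec/changes/$name" "$target"
fi
```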
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use /opsx:sync approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,241 @@
---
name: opsx-bulk-archive
description: Archive multiple completed changes at once
invokable: true
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
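The map-building step can be sketched in shell. The directory layout below is a made-up fixture so the snippet is self-contained:

```shell
# Illustrative fixture: two changes touch auth, one touches api
mkdir -p openspec/changes/change-a/specs/auth \
         openspec/changes/change-b/specs/auth \
         openspec/changes/change-c/specs/api
# Emit "capability change" pairs, then flag capabilities touched by 2+ changes
for d in openspec/changes/*/specs/*/; do
  cap=$(basename "$d")
  chg=$(basename "$(dirname "$(dirname "$d")")")
  echo "$cap $chg"
done | awk '{hits[$1] = hits[$1] " " $2; n[$1]++}
            END {for (c in n) if (n[c] > 1) print "CONFLICT: " c " ->" hits[c]}'
# -> CONFLICT: auth -> change-a change-b
```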
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management-cli | Done    | 5/5   | 2 delta | None      | Ready  |
| project-config      | Done      | 3/3   | 1 delta | None      | Ready  |
| add-oauth           | Done      | 4/4   | 1 delta | auth (!)  | Ready* |
| add-jwt             | Done      | 2/2   | 1 delta | auth (!)  | Ready* |
| add-verify-skill    | 1 left    | 2/5   | None    | None      | Warn   |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make it clear that they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 4 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
- add-jwt -> archive/2026-01-19-add-jwt/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 5 delta specs synced to main specs
- 1 conflict resolved (auth: applied add-oauth then add-jwt in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others

---
name: opsx-continue
description: Continue working on a change - create the next artifact (Experimental)
invokable: true
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name after `/opsx:continue` (e.g., `/opsx:continue add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
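A status payload might look like this sketch. The artifact names assume the spec-driven schema, and the exact field set may vary by CLI version; the three fields named above are the ones to rely on.

```json
{
  "schemaName": "spec-driven",
  "isComplete": false,
  "artifacts": [
    { "id": "proposal", "status": "done" },
    { "id": "specs", "status": "ready" },
    { "id": "design", "status": "blocked" },
    { "id": "tasks", "status": "blocked" }
  ]
}
```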
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change with `/opsx:apply` or archive it with `/opsx:archive`."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Run `/opsx:continue` to create the next artifact"
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output


---
name: opsx-explore
description: Enter explore mode - think through ideas, investigate problems, clarify requirements
invokable: true
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Exploration
There's no required ending. An exploration might:
- **Flow into action**: "Ready to start? `/opsx:new` or `/opsx:ff`"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Exploration is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

---
name: opsx-ff
description: Create a change and generate all artifacts needed for implementation in one go
invokable: true
---
Fast-forward through artifact creation - generate everything needed to start implementation.
**Input**: The argument after `/opsx:ff` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
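The name derivation is something you do mentally, but the transformation can be sketched as lowercasing plus collapsing non-alphanumerics to hyphens. Mechanical slugging gives the full words (`add-user-authentication`); shortening to `add-user-auth`, as in the example above, is a judgment call you make on top of it.

```shell
desc="Add User Authentication"
# Lowercase, squeeze everything outside [a-z0-9] into single hyphens,
# then trim leading/trailing hyphens.
name=$(echo "$desc" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//; s/-*$//')
echo "$name"   # add-user-authentication
```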
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
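The completeness check in step 4b amounts to: every ID in `applyRequires` must appear in `artifacts` with status `"done"`. A rough shell sketch against a sample payload (the change and artifact names are hypothetical, and the grep-based string matching is illustrative only; in practice you parse the JSON directly):

```shell
# Sample status payload for a hypothetical change with two required artifacts.
status='{"applyRequires":["tasks","design"],"artifacts":[{"id":"tasks","status":"done"},{"id":"design","status":"ready"}]}'
ready=yes
for id in tasks design; do   # the IDs listed in applyRequires
  echo "$status" | grep -q "\"id\":\"$id\",\"status\":\"done\"" || ready=no
done
echo "$ready"   # "no" -- design is not done yet, so keep creating artifacts
```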
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use the `template` as a starting point, filling in based on context
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next

---
name: opsx-new
description: Start a new change using the experimental artifact workflow (OPSX)
invokable: true
---
Start a new change using the experimental artifact-driven approach.
**Input**: The argument after `/opsx:new` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema. Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Run `/opsx:continue` or just describe what this change is about and I'll draft it."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest using `/opsx:continue` instead
- Pass `--schema` if using a non-default workflow

---
name: opsx-onboard
description: Guided onboarding - walk through a complete OpenSpec workflow cycle with narration
invokable: true
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice

---
name: opsx-sync
description: Sync delta specs from a change to main specs
invokable: true
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name after `/opsx:sync` (e.g., `/opsx:sync add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
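The section headings make delta specs easy to slice mechanically. A minimal sketch using inline sample content rather than a real change directory:

```shell
# Sample delta spec content (a real file would live at
# openspec/changes/<name>/specs/<capability>/spec.md)
delta='## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
## REMOVED Requirements
### Requirement: Deprecated Feature'

# Print only the ADDED section: toggle on at its heading, off at the next "## "
added=$(printf '%s\n' "$delta" |
  awk '/^## /{in_added = ($0 == "## ADDED Requirements")} in_added')
printf '%s\n' "$added"
```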
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running it twice should give the same result

@@ -0,0 +1,163 @@
---
name: opsx-verify
description: Verify implementation matches change artifacts before archiving
invokable: true
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name after `/opsx:verify` (e.g., `/opsx:verify add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
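Checkbox counting needs nothing more than grep; a sketch over inline sample tasks (real files live at openspec/changes/<name>/tasks.md):

```shell
tasks='- [x] Add login endpoint
- [x] Hash passwords
- [ ] Add session middleware'

# Completed tasks start with "- [x]", pending ones with "- [ ]"
complete=$(printf '%s\n' "$tasks" | grep -c '^- \[x\]')
total=$(printf '%s\n' "$tasks" | grep -cE '^- \[(x| )\]')
echo "$complete/$total tasks complete"   # 2/3 tasks complete
```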
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
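Requirement names can be pulled out with sed before searching the codebase for each one; sample content inline:

```shell
spec='## ADDED Requirements
### Requirement: OAuth Provider Integration
The system SHALL support OAuth login.
### Requirement: Session Expiry
Sessions SHALL expire after 24 hours.'

# Strip the heading prefix, leaving one requirement name per line
names=$(printf '%s\n' "$spec" | sed -n 's/^### Requirement: //p')
printf '%s\n' "$names"
```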
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
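Flipping a checkbox is a single-line substitution; the task text here is hypothetical:

```shell
tasks='- [x] Add login endpoint
- [ ] Hash passwords
- [ ] Add session middleware'

# Flip exactly one checkbox by matching the full task line
updated=$(printf '%s\n' "$tasks" |
  sed 's/^- \[ \] Hash passwords$/- [x] Hash passwords/')
printf '%s\n' "$updated"
```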
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
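One way to spot unsynced requirements is to compare name lists; a sketch with inline samples (a real check would first extract the names from both spec files):

```shell
delta_reqs='OAuth Provider Integration
Session Expiry'
main_reqs='Session Expiry'

# Names in the delta that the main spec lacks; grep -F treats each
# line of the pattern as a separate fixed string
unsynced=$(printf '%s\n' "$delta_reqs" | grep -Fxv "$main_reqs" || true)
echo "${unsynced:-already synced}"
```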
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If the user chooses sync, execute the /opsx:sync logic (use the openspec-sync-specs skill). Unless the user cancels, proceed to archive regardless of the sync choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
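The target naming and collision check can be sketched like this (the change name is hypothetical, and echo stands in for the actual mv):

```shell
change="add-auth"
stamp=$(date +%F)                 # current date as YYYY-MM-DD
target="openspec/changes/archive/${stamp}-${change}"

if [ -e "$target" ]; then
  echo "error: $target already exists; rename it or pick another date" >&2
else
  echo "would run: mv openspec/changes/$change $target"
fi
```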
```
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

@@ -0,0 +1,246 @@
---
name: openspec-bulk-archive-change
description: Archive multiple completed changes at once. Use when archiving several parallel changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
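Given pairs of change name and touched capability, the map is a small awk reduction; the pairs here are hypothetical:

```shell
map=$(printf '%s\n' \
    'add-oauth auth' \
    'add-jwt auth' \
    'add-rest-api api' |
  awk '{caps[$2] = caps[$2] " " $1; n[$2]++}
       END {for (c in caps)
              printf "%s ->%s%s\n", c, caps[c], (n[c] > 1 ? "  <- CONFLICT" : "")}' |
  sort)
printf '%s\n' "$map"
```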
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make it clear that they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others

@@ -0,0 +1,118 @@
---
name: openspec-continue-change
description: Continue working on an OpenSpec change by creating the next artifact. Use when the user wants to progress their change, create the next artifact, or continue their workflow.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
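Without assuming jq is installed, the first ready artifact can be fished out of the JSON with grep and sed; the response shape below is an assumption based on the fields listed above:

```shell
status='{"schemaName":"spec-driven","isComplete":false,"artifacts":[{"id":"proposal","status":"done"},{"id":"specs","status":"ready"},{"id":"design","status":"blocked"}]}'

# Grab each {"id":...,"status":"ready"} object, keep the first, extract its id
next=$(printf '%s' "$status" |
  grep -o '{"id":"[^"]*","status":"ready"}' |
  head -n 1 |
  sed 's/.*"id":"\([^"]*\)".*/\1/')
echo "$next"   # specs
```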
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Want to continue? Just ask me to continue or tell me what to do next."
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output

@@ -0,0 +1,290 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? /opsx:new or /opsx:ff"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change: /opsx:new <name>
- Fast-forward to tasks: /opsx:ff <name>
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own


@@ -0,0 +1,101 @@
---
name: openspec-ff-change
description: Fast-forward through OpenSpec artifact creation. Use when the user wants to quickly create all artifacts needed for implementation without stepping through each one individually.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Fast-forward through artifact creation - generate everything needed to start implementation in one go.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
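   The slug derivation can be sketched as a minimal helper (an illustration only; shortening words such as "authentication" to "auth" remains a judgment call this sketch doesn't attempt):

   ```python
   import re

   def derive_change_name(description: str) -> str:
       """Derive a kebab-case change name from a free-form description."""
       # Lowercase, collapse any run of non-alphanumerics into a single hyphen,
       # then trim stray hyphens from the ends.
       return re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")

   print(derive_change_name("Add user authentication!"))  # add-user-authentication
   ```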
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
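   The status-driven loop above can be sketched as follows. The JSON shape is assumed from the fields named in these steps (`applyRequires`, plus `artifacts` entries with `id` and `status`); the real CLI output may contain more fields:

   ```python
   import json

   # Assumed shape, based on the fields described above; not verbatim CLI output.
   status_json = """
   {
     "applyRequires": ["tasks"],
     "artifacts": [
       {"id": "proposal", "status": "done"},
       {"id": "specs",    "status": "ready"},
       {"id": "tasks",    "status": "blocked"}
     ]
   }
   """

   status = json.loads(status_json)
   done = {a["id"] for a in status["artifacts"] if a["status"] == "done"}
   ready = [a["id"] for a in status["artifacts"] if a["status"] == "ready"]
   apply_ready = all(aid in done for aid in status["applyRequires"])

   print(ready)        # ['specs']  <- create these next
   print(apply_ready)  # False: 'tasks' is not done yet, so keep looping
   ```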
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, suggest continuing that change instead
- Verify each artifact file exists after writing before proceeding to next


@@ -0,0 +1,74 @@
---
name: openspec-new-change
description: Start a new OpenSpec change using the experimental artifact workflow. Use when the user wants to create a new feature, fix, or modification with a structured step-by-step approach.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Start a new change using the experimental artifact-driven approach.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema (e.g., `proposal` for spec-driven).
Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Just describe what this change is about and I'll draft it, or ask me to continue."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest continuing that change instead
- Pass --schema if using a non-default workflow


@@ -0,0 +1,529 @@
---
name: openspec-onboard
description: Guided onboarding for OpenSpec - walk through a complete workflow cycle with narration and real codebase work.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
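A sketch of the marker scan, assuming a plain recursive walk (the extensions and marker list are illustrative; adjust for the project):

```python
import os
import re

MARKER = re.compile(r"\b(TODO|FIXME|HACK|XXX)\b")

def scan_for_markers(root: str, extensions=(".ts", ".js", ".py")):
    """Return (path, line_number, text) for each marker comment under root."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    for lineno, line in enumerate(fh, start=1):
                        if MARKER.search(line):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                continue  # unreadable file: skip rather than abort the scan
    return hits
```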
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
openspec/changes/<name>/
├── proposal.md    ← Why we're doing this (empty, we'll fill it)
├── design.md      ← How we'll build it (empty)
├── specs/         ← Detailed requirements (empty)
└── tasks.md       ← Implementation checklist (empty)
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
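The "read them as test cases" claim can be illustrated with a sketch (`validate_email` is a hypothetical example function, not something from the codebase):

```python
# Hypothetical example: a WHEN/THEN scenario maps almost directly onto a test.
def validate_email(value: str) -> bool:
    """Toy validator used only to illustrate the mapping."""
    return "@" in value and "." in value.split("@")[-1]

def test_rejects_address_without_domain():
    # WHEN a user submits an address with no domain
    result = validate_email("user@")
    # THEN the system rejects it
    assert result is False

test_rejects_address_without_domain()
```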
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
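The checkbox flip in step 4 can be sketched as a minimal helper (matching on the task's number or label, an assumption about how tasks are identified):

```python
import re

def mark_task_done(tasks_md: str, task_label: str) -> str:
    """Flip `- [ ]` to `- [x]` on the first line containing task_label."""
    out = []
    flipped = False
    for line in tasks_md.splitlines():
        if not flipped and task_label in line:
            new = re.sub(r"^(\s*-\s*)\[ \]", r"\1[x]", line)
            flipped = new != line
            line = new
        out.append(line)
    return "\n".join(out)
```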
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice


@@ -0,0 +1,138 @@
---
name: openspec-sync-specs
description: Sync delta specs from a change to main specs. Use when the user wants to update main specs with changes from a delta spec, without archiving the change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
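   Grouping a delta spec's lines by operation can be sketched as follows (assuming the headings match the section names above exactly):

   ```python
   import re

   SECTION = re.compile(r"^## (ADDED|MODIFIED|REMOVED|RENAMED) Requirements\s*$")

   def split_delta_sections(delta_md: str) -> dict:
       """Map each operation (ADDED, MODIFIED, ...) to the lines under its heading."""
       sections, current = {}, None
       for line in delta_md.splitlines():
           match = SECTION.match(line)
           if match:
               current = match.group(1)
               sections[current] = []
           elif current is not None:
               sections[current].append(line)
       return sections
   ```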
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running it twice should give the same result


@@ -0,0 +1,168 @@
---
name: openspec-verify-change
description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.1.1"
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
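Requirements and scenarios can be pulled out of a delta spec by their heading prefixes; a hedged sketch of that extraction:

```python
import re

# Heading conventions from the delta spec format described above.
REQ_RE = re.compile(r"^### Requirement:\s*(.+)$", re.MULTILINE)
SCEN_RE = re.compile(r"^#### Scenario:\s*(.+)$", re.MULTILINE)

def extract_spec_items(spec_md: str) -> dict[str, list[str]]:
    """Collect requirement and scenario names from one delta spec file."""
    return {
        "requirements": [m.strip() for m in REQ_RE.findall(spec_md)],
        "scenarios": [m.strip() for m in SCEN_RE.findall(spec_md)],
    }
```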
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
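The final assessment reduces to counting issues by severity; for illustration (a sketch, not CLI behavior):

```python
def final_assessment(issues: list[tuple[str, str]]) -> str:
    """issues: (severity, message) pairs, severity in {CRITICAL, WARNING, SUGGESTION}."""
    criticals = sum(1 for sev, _ in issues if sev == "CRITICAL")
    warnings = sum(1 for sev, _ in issues if sev == "WARNING")
    if criticals:
        return f"{criticals} critical issue(s) found. Fix before archiving."
    if warnings:
        return (f"No critical issues. {warnings} warning(s) to consider. "
                "Ready for archive (with noted improvements).")
    return "All checks passed. Ready for archive."
```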
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, and reasonable inference; don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies; don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, and WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
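The degradation rules above amount to a small dispatch on which artifacts exist; a sketch (artifact names assumed from the spec-driven schema):

```python
def checks_to_run(artifacts: set[str]) -> tuple[list[str], list[str]]:
    """Map available artifacts to (checks to run, checks skipped with reason)."""
    run = ["task completion"]  # tasks.md is the minimum input
    skipped: list[str] = []
    if "specs" in artifacts:
        run += ["spec coverage", "requirement mapping", "scenario coverage"]
    else:
        skipped.append("spec coverage (no delta specs)")
    if "design" in artifacts:
        run.append("design adherence")
    else:
        skipped.append("design adherence (no design.md)")
    return run, skipped
```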
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"


@@ -35,3 +35,71 @@ jobs:
pip install pip-audit
- name: Dependency vulnerability scan
run: pip-audit
playwright-smoke:
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install Python dependencies
run: |
python -m pip install --upgrade pip
pip install -e .
- name: Install Playwright dependencies
run: |
cd e2e
npm ci
npx playwright install --with-deps chromium
- name: Run Playwright smoke tests
run: |
cd e2e
npm run test:smoke
- name: Upload test results
uses: actions/upload-artifact@v4
if: failure()
with:
name: playwright-smoke-report
path: |
e2e/playwright-report/
e2e/test-results/
retention-days: 14
playwright-full:
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install Python dependencies
run: |
python -m pip install --upgrade pip
pip install -e .
- name: Install Playwright dependencies
run: |
cd e2e
npm ci
npx playwright install --with-deps
- name: Run Playwright full regression
run: |
cd e2e
npm run test:full
- name: Upload test results
uses: actions/upload-artifact@v4
if: always()
with:
name: playwright-full-report
path: |
e2e/playwright-report/
e2e/test-results/
retention-days: 30


@@ -103,6 +103,25 @@ ClawFort supports English (`en`), Tamil (`ta`), and Malayalam (`ml`) content del
- Translations are linked to the same base article and served by the existing news endpoints.
- If a requested translation is unavailable, the API falls back to English.
### Typography and Fonts
**Font Strategy:**
- **English (Default)**: Inter font family
- **Tamil**: Baloo 2 (primary), Noto Sans Tamil (fallback), Inter (system fallback)
- **Malayalam**: Baloo Chettan 2 (primary), Noto Sans Malayalam (fallback), Inter (system fallback)
Fonts are applied via CSS based on the `data-lang` attribute on the HTML element, ensuring English text always uses Inter while Tamil and Malayalam use their designated script fonts.
**Mobile Typography Adjustments:**
For Tamil and Malayalam content on mobile viewports (≤640px):
- Hero title: 1.5rem with 1.4 line-height
- Hero summary: 0.9rem with 1.5 line-height
- Hero pills: Smaller padding and font-size to ensure visibility
- Hero image height: 200px (reduced from 300px)
- Hero content padding: Reduced to 1rem
These adjustments prevent content overflow and ensure the hero block (including LATEST pill and time indicator) remains fully visible on smaller screens while maintaining readability.
Language-aware API usage:
```bash

e2e/.gitignore vendored Normal file

@@ -0,0 +1,8 @@
node_modules/
test-results/
playwright-report/
playwright/.cache/
.DS_Store
*.log
.env
.env.local

e2e/README.md Normal file

@@ -0,0 +1,62 @@
# ClawFort UI/UX Regression Test Suite
Playwright-based end-to-end testing for ClawFort AI News application.
## Quick Start
```bash
# Install dependencies and browsers
cd e2e
npm install
npm run install:browsers
# Run smoke tests (fast feedback)
npm run test:smoke
# Run full regression suite
npm run test:full
# Run with UI mode for debugging
npm run test:ui
# Run in headed mode (see browser)
npm run test:headed
```
## Test Organization
```
tests/
├── fixtures/ # Shared test fixtures and utilities
├── capabilities/ # Tests organized by capability
│ ├── core-journeys/ # Hero/feed browsing, modal flows
│ ├── accessibility/ # WCAG 2.2 AA compliance
│ ├── responsive/ # Mobile/tablet/desktop
│ ├── modal-experience/ # Summary modal interactions
│ └── microinteractions/ # Share, contact, tooltips
├── smoke/ # Fast smoke tests for PR gates
└── e2e.spec.ts # Main test entry point
```
## Environment Variables
- `BASE_URL`: Target application URL (default: http://localhost:8000)
- `CI`: Set to true for CI-specific behavior
## Test Profiles
### Smoke Profile
- Runs on every PR
- Covers critical paths: hero loading, modal open/close, basic accessibility
- ~2-3 minutes execution time
### Full Profile
- Runs on main/nightly builds
- Complete capability coverage across all themes and breakpoints
- ~15-20 minutes execution time
## CI Integration
Tests are integrated into GitHub Actions workflow:
- PR Quality Gate: Smoke profile must pass
- Main/Nightly: Full profile with artifact retention

e2e/docs/test-strategy.md Normal file

@@ -0,0 +1,185 @@
# ClawFort UI/UX Test Strategy
## Capability-to-Test Mapping
This document maps OpenSpec capabilities to Playwright test coverage.
### New Capabilities
#### `playwright-ui-ux-regression-suite`
| Requirement | Test File | Scenarios |
|-------------|-----------|-----------|
| Capability-mapped suite execution | All files | Tests organized by capability in directory structure |
| Theme and breakpoint matrix coverage | `responsive/*.spec.ts`, `accessibility/*.spec.ts` | Cross-theme and cross-viewport test execution |
| Failure artifact collection | `playwright.config.ts` | Trace, screenshot, video on failure enabled |
### Modified Capabilities
#### `end-to-end-system-testing`
| Requirement | Test File | Scenarios |
|-------------|-----------|-----------|
| Core user flow E2E | `core-journeys/hero-feed.spec.ts` | Hero loading, feed browsing, modal flows |
| Browser-native interaction E2E | `modal-experience/*.spec.ts` | Modal open/close, source links, share actions |
| Edge-case workflows | `core-journeys/edge-cases.spec.ts` | Empty data, invalid permalinks, error states |
| UI failure-path resilience | `accessibility/modal.spec.ts` | Fallback messages, navigable error states |
#### `platform-quality-gates`
| Requirement | Test File | Scenarios |
|-------------|-----------|-----------|
| Release quality gates | CI workflow | Smoke profile gates PR, full profile gates main |
| Playwright gate failure blocks release | CI workflow | Fail-on-regression policy enforced |
| Gate manifest | `playwright.config.ts` | Explicit browser/tool versions |
| Gate profiles documented | This file | Smoke vs full profile criteria defined |
#### `wcag-2-2-aa-accessibility`
| Requirement | Test File | Scenarios |
|-------------|-----------|-----------|
| Keyboard-only interaction flow | `accessibility/keyboard.spec.ts` | Modal navigation, icon-only controls, focus visibility |
| Contrast and non-text alternatives | `accessibility/contrast.spec.ts` | Color contrast assertions across themes |
| Accessibility CI gate | CI workflow | Automated accessibility checks in pipeline |
| Interactive accessibility states | `accessibility/states.spec.ts` | Focus-visible, keyboard traversal, accessible names |
#### `responsive-device-agnostic-layout`
| Requirement | Test File | Scenarios |
|-------------|-----------|-----------|
| Mobile layout behavior | `responsive/breakpoints.spec.ts` | No overflow, reachable controls |
| Desktop and tablet adaptation | `responsive/breakpoints.spec.ts` | Layout reflow, no clipping |
| Sticky shrinking glass header | `responsive/sticky.spec.ts` | Header behavior across scroll |
| Sticky footer overlap | `responsive/sticky.spec.ts` | Content readability, control accessibility |
| Breakpoint regression matrix | `responsive/*.spec.ts` | Overflow/clipping detection across breakpoints |
#### `summary-modal-experience`
| Requirement | Test File | Scenarios |
|-------------|-----------|-----------|
| Open summary modal | `modal-experience/summary.spec.ts` | Modal opens with correct content order |
| Close summary modal | `modal-experience/summary.spec.ts` | Modal dismisses, returns to feed |
| Permalink-driven modal open | `modal-experience/deep-link.spec.ts` | Direct article URL opens modal |
| Keyboard dismissal and focus continuity | `modal-experience/summary.spec.ts` | Escape closes, focus returns |
| Source link-out from modal | `modal-experience/summary.spec.ts` | Source opens in new tab |
| Modal exposes share entry points | `modal-experience/summary.spec.ts` | Share controls available |
| Modal interaction regression coverage | `modal-experience/*.spec.ts` | All entry paths tested |
#### `share-and-contact-microinteractions`
| Requirement | Test File | Scenarios |
|-------------|-----------|-----------|
| Supported share providers | `microinteractions/share.spec.ts` | X, WhatsApp, LinkedIn icons present |
| Light-theme icon visibility | `microinteractions/share.spec.ts` | Contrast, keyboard focusability |
| Copy-link share action | `microinteractions/share.spec.ts` | Clipboard write, no navigation |
| Share controls state accessibility | `microinteractions/share.spec.ts` | States perceivable across themes |
| Config present/absent footer links | `microinteractions/footer.spec.ts` | GitHub/contact conditional rendering |
| Contact link visible when configured | `microinteractions/footer.spec.ts` | CONTACT_EMAIL shows affordance |
| Randomized helper tooltip | `microinteractions/tooltip.spec.ts` | Hover shows safe message |
| Keyboard-triggered helper tooltip | `microinteractions/tooltip.spec.ts` | Focus shows tooltip, blur dismisses |
## Scenario Taxonomy
### Journey Scenarios
- Hero article loading and display
- Feed browsing and pagination
- Modal open/close from hero
- Modal open/close from feed
- Source link navigation
- Deep-link direct modal open
### Accessibility-State Scenarios
- Keyboard navigation through all interactive elements
- Focus-visible indicators on all controls
- Color contrast for text and UI components
- Accessible names for icon-only controls
- Screen reader compatibility for dynamic content
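The contrast scenarios above ultimately reduce to the WCAG 2.x contrast-ratio formula; a reference sketch in Python for illustration (the suite's actual `checkContrast` helper is TypeScript):

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG relative luminance of an sRGB color given as 0-255 channels."""
    def channel(c: int) -> float:
        s = c / 255
        # Linearize the sRGB channel per the WCAG definition.
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05); ranges from 1 to 21."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)
```

AA requires 4.5:1 for normal text and 3:1 for large text; black on white yields the maximum 21:1.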
### Responsive Scenarios
- Mobile viewport (375x667): No horizontal overflow, touch targets sized
- Tablet viewport (768x1024): Layout reflow, readable content
- Desktop viewport (1280x720): Full layout, all features accessible
- Widescreen viewport (1920x1080): Max-width constraints
### Modal Scenarios
- Summary modal: Open from hero, open from feed, open from permalink
- Policy modals: Terms, Attribution, open/close, escape key
- Focus containment within modals
- Focus return on modal close
### Microinteraction Scenarios
- Share button hover/focus states
- Copy link success feedback
- Contact tooltip hover/move/leave
- Contact tooltip keyboard focus/blur
- Back-to-top visibility on scroll
- Theme switch animation
### Deep-Link Scenarios
- Valid article permalink opens modal
- Invalid article permalink shows error state
- Policy permalink opens policy modal
- Hash-based navigation
## Execution Profiles
### Smoke Profile
**Trigger:** Pull request validation
**Duration:** ~2-3 minutes
**Browsers:** Chromium only
**Coverage:**
- Hero loads with content
- Feed displays articles
- Modal opens and closes
- Basic keyboard navigation
- One theme (dark)
- One viewport (desktop)
### Full Profile
**Trigger:** Main branch merge, nightly builds
**Duration:** ~15-20 minutes
**Browsers:** Chromium, Firefox, WebKit
**Coverage:**
- All journey scenarios
- All accessibility scenarios across themes
- All responsive breakpoints
- All modal interaction paths
- All microinteraction states
- Cross-browser compatibility
## CI Integration
### Pull Request Quality Gate
```yaml
- name: Playwright Smoke Tests
run: npm run test:smoke
env:
BASE_URL: http://localhost:8000
continue-on-error: false
```
### Main/Nightly Pipeline
```yaml
- name: Playwright Full Regression
run: npm run test:full
env:
BASE_URL: http://localhost:8000
continue-on-error: false
- name: Upload Test Artifacts
uses: actions/upload-artifact@v4
if: failure()
with:
name: playwright-report
path: e2e/playwright-report/
retention-days: 30
```
## Test Data Assumptions
Tests assume:
1. Application serves at least one article in hero position
2. Feed contains multiple articles for pagination testing
3. Backend API responds within 5 seconds
4. Images load within 10 seconds (or placeholder shown)
5. No authentication required for public content
## Failure Triage
1. **Smoke test failures:** Block PR merge, immediate investigation required
2. **Full regression failures:** Create issue, assign to UI owner, fix before release
3. **Flaky tests:** Quarantine after 3 consecutive failures, investigate separately
4. **Browser-specific failures:** Check for polyfill or feature support issues

e2e/docs/triage-workflow.md Normal file

@@ -0,0 +1,156 @@
# UI/UX Regression Test Triage Workflow
## Overview
This document defines the triage workflow and ownership for UI/UX regression test failures in the ClawFort Playwright test suite.
## Test Failure Severity
### Critical (Block Release)
- Smoke test failures in PR gate
- Core journey failures (hero/feed loading, modal open/close)
- Accessibility violations (keyboard navigation, focus management)
- Cross-browser compatibility issues
### High (Fix Before Release)
- Full regression failures in main/nightly
- Responsive layout issues on supported breakpoints
- Theme-specific rendering problems
- Deep-link functionality failures
### Medium (Fix in Next Sprint)
- Minor visual inconsistencies
- Non-critical microinteraction issues
- Performance degradation in test execution
### Low (Backlog)
- Cosmetic issues not affecting functionality
- Test flakiness requiring investigation
- Documentation gaps
## Ownership
### Primary Owner
- **UI/UX Team Lead**: Overall test suite health and strategy
### Component Owners
- **Core Journeys**: Frontend team
- **Accessibility**: Accessibility specialist + Frontend team
- **Responsive Design**: Frontend team + UX designer
- **Modal Experience**: Frontend team
- **Microinteractions**: Frontend team
### CI/CD Integration
- **DevOps Team**: CI pipeline configuration and artifact management
## Triage Process
### Step 1: Detection (Automated)
1. CI pipeline runs tests on PR or main branch
2. Failures are reported in GitHub Actions
3. Artifacts (screenshots, videos, traces) are uploaded
4. Notifications sent to #ui-regression-alerts channel
### Step 2: Initial Assessment (Within 2 hours)
1. Check if failure is reproducible locally
2. Review failure artifacts in Playwright report
3. Determine severity based on impact
4. Assign to appropriate component owner
### Step 3: Investigation
1. Review test logs and artifacts
2. Check recent commits for related changes
3. Attempt local reproduction
4. Document findings in issue
### Step 4: Resolution
1. **Critical**: Fix immediately, re-run tests
2. **High**: Fix within 24 hours
3. **Medium**: Schedule for next sprint
4. **Low**: Add to backlog with priority
### Step 5: Verification
1. Re-run failing tests
2. Verify fix doesn't introduce new issues
3. Update test if it was a false positive
4. Document resolution
## Artifact Access
### Playwright Report
- **Location**: GitHub Actions artifacts
- **Retention**: 14 days (smoke), 30 days (full)
- **Access**: Download from workflow run page
### Viewing Locally
```bash
# Download and extract artifact
cd e2e
npx playwright show-report playwright-report/
```
### Key Artifacts
- **trace.zip**: Full trace with DOM snapshots, network, console
- **test-failed-*.png**: Screenshot at failure point
- **video.webm**: Video recording of test execution
## Common Failure Patterns
### Flaky Tests
- **Symptom**: Passes on retry
- **Action**: Increase timeouts, add waits, stabilize selectors
- **Owner**: Test maintainer
### Environment Issues
- **Symptom**: Tests pass locally but fail in CI
- **Action**: Check CI environment, browser versions, dependencies
- **Owner**: DevOps + Test maintainer
### Application Regressions
- **Symptom**: Consistent failures across runs
- **Action**: Identify breaking change, fix application code
- **Owner**: Component owner
### Test Data Issues
- **Symptom**: Tests fail due to missing/changed data
- **Action**: Update test fixtures, ensure deterministic data
- **Owner**: Test maintainer
## Communication
### Slack Channels
- `#ui-regression-alerts`: Automated failure notifications
- `#frontend-team`: Discussion of UI issues
- `#qa-team`: Test-related discussions
### Issue Labels
- `ui-regression`: UI/UX test failures
- `accessibility`: WCAG-related issues
- `responsive`: Layout/breakpoint issues
- `flaky-test`: Intermittent failures
- `ci-blocker`: Blocking CI/CD pipeline
## Escalation Path
1. **Component Owner** investigates and attempts fix
2. **UI/UX Team Lead** reviews if not resolved in 4 hours
3. **Engineering Manager** escalates if blocking release
4. **CTO** involved for critical production issues
## Prevention
### Pre-merge Checks
- Run smoke tests locally before pushing
- Review visual changes with design team
- Test across themes and breakpoints for UI changes
### Monitoring
- Weekly review of test flakiness metrics
- Monthly review of test coverage
- Quarterly review of test strategy
## Contact
- **UI/UX Test Suite**: ui-team@clawfort.ai
- **CI/CD Issues**: devops@clawfort.ai
- **Emergency Escalation**: engineering-manager@clawfort.ai

e2e/package-lock.json generated Normal file

@@ -0,0 +1,99 @@
{
"name": "clawfort-e2e-tests",
"version": "1.0.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "clawfort-e2e-tests",
"version": "1.0.0",
"devDependencies": {
"@playwright/test": "^1.40.0",
"@types/node": "^20.0.0"
},
"engines": {
"node": ">=18.0.0"
}
},
"node_modules/@playwright/test": {
"version": "1.58.2",
"resolved": "https://registry.npmjs.org/@playwright/test/-/test-1.58.2.tgz",
"integrity": "sha512-akea+6bHYBBfA9uQqSYmlJXn61cTa+jbO87xVLCWbTqbWadRVmhxlXATaOjOgcBaWU4ePo0wB41KMFv3o35IXA==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
"playwright": "1.58.2"
},
"bin": {
"playwright": "cli.js"
},
"engines": {
"node": ">=18"
}
},
"node_modules/@types/node": {
"version": "20.19.33",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.19.33.tgz",
"integrity": "sha512-Rs1bVAIdBs5gbTIKza/tgpMuG1k3U/UMJLWecIMxNdJFDMzcM5LOiLVRYh3PilWEYDIeUDv7bpiHPLPsbydGcw==",
"dev": true,
"license": "MIT",
"dependencies": {
"undici-types": "~6.21.0"
}
},
"node_modules/fsevents": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
"integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
"dev": true,
"hasInstallScript": true,
"license": "MIT",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": "^8.16.0 || ^10.6.0 || >=11.0.0"
}
},
"node_modules/playwright": {
"version": "1.58.2",
"resolved": "https://registry.npmjs.org/playwright/-/playwright-1.58.2.tgz",
"integrity": "sha512-vA30H8Nvkq/cPBnNw4Q8TWz1EJyqgpuinBcHET0YVJVFldr8JDNiU9LaWAE1KqSkRYazuaBhTpB5ZzShOezQ6A==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
"playwright-core": "1.58.2"
},
"bin": {
"playwright": "cli.js"
},
"engines": {
"node": ">=18"
},
"optionalDependencies": {
"fsevents": "2.3.2"
}
},
"node_modules/playwright-core": {
"version": "1.58.2",
"resolved": "https://registry.npmjs.org/playwright-core/-/playwright-core-1.58.2.tgz",
"integrity": "sha512-yZkEtftgwS8CsfYo7nm0KE8jsvm6i/PTgVtB8DL726wNf6H2IMsDuxCpJj59KDaxCtSnrWan2AeDqM7JBaultg==",
"dev": true,
"license": "Apache-2.0",
"bin": {
"playwright-core": "cli.js"
},
"engines": {
"node": ">=18"
}
},
"node_modules/undici-types": {
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
"integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
"dev": true,
"license": "MIT"
}
}
}

e2e/package.json Normal file

@@ -0,0 +1,24 @@
{
"name": "clawfort-e2e-tests",
"version": "1.0.0",
"description": "Playwright-based UI/UX regression test suite for ClawFort",
"scripts": {
"test": "playwright test",
"test:smoke": "playwright test --grep @smoke",
"test:full": "playwright test",
"test:ui": "playwright test --ui",
"test:debug": "playwright test --debug",
"test:headed": "playwright test --headed",
"install:browsers": "playwright install",
"install:deps": "playwright install-deps",
"report": "playwright show-report",
"codegen": "playwright codegen"
},
"devDependencies": {
"@playwright/test": "^1.40.0",
"@types/node": "^20.0.0"
},
"engines": {
"node": ">=18.0.0"
}
}

e2e/playwright.config.ts Normal file

@@ -0,0 +1,126 @@
import { defineConfig, devices } from "@playwright/test";
/**
* Playwright configuration for ClawFort UI/UX regression testing
* @see https://playwright.dev/docs/test-configuration
*/
export default defineConfig({
testDir: "./tests",
/* Run tests in files in parallel */
fullyParallel: true,
/* Fail the build on CI if you accidentally left test.only in the source code */
forbidOnly: !!process.env.CI,
/* Retry on CI only */
retries: process.env.CI ? 2 : 0,
/* Opt out of parallel tests on CI for stability */
workers: process.env.CI ? 1 : undefined,
/* Reporter to use. See https://playwright.dev/docs/test-reporters */
reporter: [
["html", { open: "never" }],
["list"],
["junit", { outputFile: "test-results/junit.xml" }],
],
/* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
use: {
/* Base URL to use in actions like `await page.goto('/')`. */
baseURL: process.env.BASE_URL || "http://localhost:8000",
/* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
trace: "on-first-retry",
/* Capture screenshot on failure */
screenshot: "only-on-failure",
/* Record video on failure */
video: "on-first-retry",
/* Viewport defaults */
viewport: { width: 1280, height: 720 },
/* Action timeout */
actionTimeout: 15000,
/* Navigation timeout */
navigationTimeout: 30000,
},
/* Configure projects for major browsers and viewports */
projects: [
// Smoke tests - fast feedback
{
name: "smoke-chromium",
use: {
...devices["Desktop Chrome"],
viewport: { width: 1280, height: 720 },
},
grep: /@smoke/,
},
// Full regression - Desktop
{
name: "desktop-chrome",
use: {
...devices["Desktop Chrome"],
viewport: { width: 1280, height: 720 },
},
},
{
name: "desktop-firefox",
use: {
...devices["Desktop Firefox"],
viewport: { width: 1280, height: 720 },
},
},
{
name: "desktop-webkit",
use: {
...devices["Desktop Safari"],
viewport: { width: 1280, height: 720 },
},
},
// Tablet
{
name: "tablet-chrome",
use: {
...devices["Desktop Chrome"],
viewport: { width: 768, height: 1024 },
},
},
{
name: "tablet-webkit",
use: {
...devices["Desktop Safari"],
viewport: { width: 768, height: 1024 },
},
},
// Mobile
{
name: "mobile-chrome",
use: {
...devices["Pixel 5"],
},
},
{
name: "mobile-safari",
use: {
...devices["iPhone 12"],
},
},
],
/* Run local dev server before starting the tests */
webServer: {
command: "cd .. && python -m backend.main",
url: "http://localhost:8000",
reuseExistingServer: !process.env.CI,
timeout: 120000,
},
});


@@ -0,0 +1,231 @@
import {
assertContrast,
checkContrast,
getComputedColors,
WCAG_CONTRAST,
} from "../../fixtures/accessibility";
import { SELECTORS } from "../../fixtures/selectors";
import { expect, test } from "../../fixtures/test";
import { THEMES, Theme } from "../../fixtures/themes";
test.describe("Color Contrast Across Themes", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
for (const theme of THEMES) {
test(`hero text has sufficient contrast in ${theme} theme @smoke`, async ({
page,
setTheme,
waitForHero,
}) => {
// Set theme
await setTheme(theme);
const hero = await waitForHero();
// Check headline contrast
const headline = hero.locator(SELECTORS.hero.headline);
await assertContrast(page, headline, WCAG_CONTRAST.largeText);
// Check summary contrast
const summary = hero.locator(SELECTORS.hero.summary);
await assertContrast(page, summary, WCAG_CONTRAST.normalText);
});
test(`feed card text has sufficient contrast in ${theme} theme`, async ({
page,
setTheme,
waitForFeed,
}) => {
// Set theme
await setTheme(theme);
const feed = await waitForFeed();
const firstArticle = feed.locator(SELECTORS.feed.articles).first();
// Check headline contrast
const headline = firstArticle.locator("h3");
await assertContrast(page, headline, WCAG_CONTRAST.largeText);
// Check summary contrast
const summary = firstArticle.locator(SELECTORS.feed.articleSummary);
await assertContrast(page, summary, WCAG_CONTRAST.normalText);
});
test(`modal text has sufficient contrast in ${theme} theme`, async ({
page,
setTheme,
waitForHero,
}) => {
// Set theme
await setTheme(theme);
// Open modal
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
const modal = page.locator(SELECTORS.summaryModal.root);
await expect(modal).toBeVisible();
// Check headline contrast
const headline = modal.locator(SELECTORS.summaryModal.headline);
await assertContrast(page, headline, WCAG_CONTRAST.largeText);
// Check body text contrast
const bodyText = modal.locator(SELECTORS.summaryModal.summaryBody);
await assertContrast(page, bodyText, WCAG_CONTRAST.normalText);
// Check TL;DR list contrast
const tldrList = modal.locator(SELECTORS.summaryModal.tldrList);
await assertContrast(page, tldrList, WCAG_CONTRAST.normalText);
});
test(`link colors have sufficient contrast in ${theme} theme`, async ({
page,
setTheme,
waitForFeed,
}) => {
// Set theme
await setTheme(theme);
const feed = await waitForFeed();
const firstArticle = feed.locator(SELECTORS.feed.articles).first();
// Check source link contrast
const sourceLink = firstArticle.locator(SELECTORS.feed.articleSource);
const hasSourceLink = (await sourceLink.count()) > 0;
if (hasSourceLink) {
await assertContrast(page, sourceLink, WCAG_CONTRAST.normalText);
}
});
test(`button text has sufficient contrast in ${theme} theme`, async ({
page,
setTheme,
waitForHero,
}) => {
// Set theme
await setTheme(theme);
const hero = await waitForHero();
// Check read button contrast
const readButton = hero.locator(SELECTORS.hero.readButton);
await assertContrast(page, readButton, WCAG_CONTRAST.normalText);
});
}
});
test.describe("Interactive State Contrast", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("hover state maintains sufficient contrast", async ({
page,
waitForFeed,
}) => {
const feed = await waitForFeed();
const firstArticle = feed.locator(SELECTORS.feed.articles).first();
const readButton = firstArticle.locator(SELECTORS.feed.articleReadButton);
// Get normal state colors
const normalColors = await getComputedColors(page, readButton);
// Hover over button
await readButton.hover();
await page.waitForTimeout(300); // Wait for transition
// Get hover state colors
const hoverColors = await getComputedColors(page, readButton);
// Both states should have sufficient contrast
const normalRatio = checkContrast(
normalColors.color,
normalColors.backgroundColor,
);
const hoverRatio = checkContrast(
hoverColors.color,
hoverColors.backgroundColor,
);
expect(normalRatio).toBeGreaterThanOrEqual(WCAG_CONTRAST.normalText);
expect(hoverRatio).toBeGreaterThanOrEqual(WCAG_CONTRAST.normalText);
});
test("focus state maintains sufficient contrast @smoke", async ({
page,
waitForFeed,
}) => {
const feed = await waitForFeed();
const firstArticle = feed.locator(SELECTORS.feed.articles).first();
const readButton = firstArticle.locator(SELECTORS.feed.articleReadButton);
// Get normal state colors
const normalColors = await getComputedColors(page, readButton);
// Focus the button
await readButton.focus();
// Get focus state colors
const focusColors = await getComputedColors(page, readButton);
// Both states should have sufficient contrast
const normalRatio = checkContrast(
normalColors.color,
normalColors.backgroundColor,
);
const focusRatio = checkContrast(
focusColors.color,
focusColors.backgroundColor,
);
expect(normalRatio).toBeGreaterThanOrEqual(WCAG_CONTRAST.normalText);
expect(focusRatio).toBeGreaterThanOrEqual(WCAG_CONTRAST.normalText);
});
});
test.describe("High Contrast Theme", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("contrast theme provides enhanced visibility @smoke", async ({
page,
setTheme,
waitForHero,
waitForFeed,
}) => {
// Set high contrast theme
await setTheme("contrast");
// Check hero
const hero = await waitForHero();
const headline = hero.locator(SELECTORS.hero.headline);
const headlineColors = await getComputedColors(page, headline);
const headlineRatio = checkContrast(
headlineColors.color,
headlineColors.backgroundColor,
);
// High contrast should provide very strong contrast (7:1 or better)
expect(headlineRatio).toBeGreaterThanOrEqual(7);
// Check feed
const feed = await waitForFeed();
const firstArticle = feed.locator(SELECTORS.feed.articles).first();
const articleHeadline = firstArticle.locator("h3");
const articleColors = await getComputedColors(page, articleHeadline);
const articleRatio = checkContrast(
articleColors.color,
articleColors.backgroundColor,
);
expect(articleRatio).toBeGreaterThanOrEqual(7);
});
});
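
The `checkContrast` helper these tests call comes from `../../fixtures/accessibility`, which is not part of this diff. A minimal sketch of what it is assumed to do — parse the CSS `rgb()` strings returned by `getComputedColors` and compute the WCAG 2.x contrast ratio (the name and signature are assumptions, not the real fixture):

```typescript
// Hypothetical sketch of the checkContrast fixture (assumed, not shown in
// this diff). Implements the WCAG 2.x relative-luminance contrast formula.
function relativeLuminance(rgb: string): number {
  const channels = rgb.match(/\d+(\.\d+)?/g);
  if (!channels) throw new Error(`Unparseable color: ${rgb}`);
  // Take r, g, b and ignore any alpha component from rgba() strings
  const [r, g, b] = channels.slice(0, 3).map(Number);
  const linear = [r, g, b].map((c) => {
    const s = c / 255;
    // sRGB linearization per WCAG 2.x
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2];
}

function checkContrast(foreground: string, background: string): number {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  // Ratio ranges from 1:1 (identical) to 21:1 (black on white)
  return (lighter + 0.05) / (darker + 0.05);
}
```

With this shape, the `WCAG_CONTRAST.normalText` threshold asserted above would be 4.5, and the high-contrast theme assertions demand at least 7.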

View File

@@ -0,0 +1,130 @@
import {
getAccessibleName,
hasAccessibleName,
} from "../../fixtures/accessibility";
import { SELECTORS } from "../../fixtures/selectors";
import { expect, test } from "../../fixtures/test";
test.describe("Icon-Only Control Accessible Names @smoke", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("share buttons have accessible names", async ({ page, waitForHero }) => {
// Open modal to access share buttons
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
const modal = page.locator(SELECTORS.summaryModal.root);
await expect(modal).toBeVisible();
// Check X share button
const shareX = modal.locator(SELECTORS.summaryModal.shareX);
await expect(shareX).toHaveAttribute("aria-label", "Share on X");
expect(await hasAccessibleName(shareX)).toBe(true);
// Check WhatsApp share button
const shareWhatsApp = modal.locator(SELECTORS.summaryModal.shareWhatsApp);
await expect(shareWhatsApp).toHaveAttribute(
"aria-label",
"Share on WhatsApp",
);
expect(await hasAccessibleName(shareWhatsApp)).toBe(true);
// Check LinkedIn share button
const shareLinkedIn = modal.locator(SELECTORS.summaryModal.shareLinkedIn);
await expect(shareLinkedIn).toHaveAttribute(
"aria-label",
"Share on LinkedIn",
);
expect(await hasAccessibleName(shareLinkedIn)).toBe(true);
// Check copy link button
const shareCopy = modal.locator(SELECTORS.summaryModal.shareCopy);
await expect(shareCopy).toHaveAttribute("aria-label", "Copy article link");
expect(await hasAccessibleName(shareCopy)).toBe(true);
});
test("theme menu button has accessible name", async ({ page }) => {
const themeButton = page.locator(SELECTORS.header.themeMenuButton);
await expect(themeButton).toHaveAttribute("aria-label", "Open theme menu");
expect(await hasAccessibleName(themeButton)).toBe(true);
});
test("back to top button has accessible name", async ({ page }) => {
// Scroll down to make back-to-top visible
await page.evaluate(() => window.scrollTo(0, 500));
await page.waitForTimeout(500);
const backToTop = page.locator(SELECTORS.backToTop.root);
// Button may not be visible yet, but should have accessible name
const hasName = await hasAccessibleName(backToTop);
expect(hasName).toBe(true);
const name = await getAccessibleName(page, backToTop);
expect(name).toContain("top");
});
test("modal close button has accessible name", async ({
page,
waitForHero,
}) => {
// Open modal
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
const modal = page.locator(SELECTORS.summaryModal.root);
await expect(modal).toBeVisible();
// Check close button
const closeButton = modal.locator(SELECTORS.summaryModal.closeButton);
expect(await hasAccessibleName(closeButton)).toBe(true);
const name = await getAccessibleName(page, closeButton);
expect(name?.toLowerCase()).toContain("close");
});
test("policy modal close button has accessible name", async ({
page,
gotoApp,
}) => {
// Open policy modal
await gotoApp({ policy: "terms" });
await page.waitForTimeout(1000);
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Check close button
const closeButton = modal.locator(SELECTORS.policyModal.closeButton);
expect(await hasAccessibleName(closeButton)).toBe(true);
const name = await getAccessibleName(page, closeButton);
expect(name?.toLowerCase()).toContain("close");
});
test("all interactive icons have aria-hidden on SVG", async ({
page,
waitForHero,
}) => {
// Open modal to access share buttons
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
const modal = page.locator(SELECTORS.summaryModal.root);
await expect(modal).toBeVisible();
// Check all SVGs in share buttons are aria-hidden
const svgs = modal.locator(".share-icon-btn svg");
const count = await svgs.count();
for (let i = 0; i < count; i++) {
const svg = svgs.nth(i);
const ariaHidden = await svg.getAttribute("aria-hidden");
expect(ariaHidden).toBe("true");
}
});
});
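
The `getAccessibleName` / `hasAccessibleName` helpers used above live in `../../fixtures/accessibility`, outside this diff. Full accessible-name computation (the ARIA spec covers `aria-labelledby`, roles, and more) is considerably more involved; a pure sketch of the fallback order these tests appear to rely on — `aria-label`, then visible text, then `title` (all names here are hypothetical):

```typescript
// Hypothetical sketch of the accessible-name fallback assumed by the tests.
// The real fixture presumably reads these values off a live element.
interface NameSource {
  ariaLabel?: string | null;
  textContent?: string | null;
  title?: string | null;
}

function computeAccessibleName(el: NameSource): string {
  // aria-label wins over visible text, which wins over title
  return (
    el.ariaLabel?.trim() ||
    el.textContent?.trim() ||
    el.title?.trim() ||
    ""
  );
}

function hasAccessibleName(el: NameSource): boolean {
  // Whitespace-only content does not count as a name
  return computeAccessibleName(el).length > 0;
}
```

This also explains the `aria-hidden` check on the SVGs: with the icon hidden from the accessibility tree, the `aria-label` alone supplies the button's name.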

View File

@@ -0,0 +1,264 @@
import { hasFocusVisible } from "../../fixtures/accessibility";
import { SELECTORS } from "../../fixtures/selectors";
import { expect, test } from "../../fixtures/test";
test.describe("Keyboard Navigation @smoke", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("skip link is first focusable element", async ({ page }) => {
// Press Tab to focus skip link
await page.keyboard.press("Tab");
// Skip link should be focused
const skipLink = page.locator(SELECTORS.skipLink);
await expect(skipLink).toBeFocused();
// Skip link should be visible when focused
const isVisible = await skipLink.isVisible();
expect(isVisible).toBe(true);
});
test("skip link navigates to main content", async ({ page }) => {
// Focus and activate skip link
await page.keyboard.press("Tab");
await page.keyboard.press("Enter");
// Main content should be focused
const mainContent = page.locator("#main-content");
await expect(mainContent).toBeFocused();
});
test("header controls are keyboard accessible @smoke", async ({ page }) => {
// Tab through header controls
await page.keyboard.press("Tab"); // Skip link
await page.keyboard.press("Tab"); // Logo
await page.keyboard.press("Tab"); // Language select
const languageSelect = page.locator(SELECTORS.header.languageSelect);
await expect(languageSelect).toBeFocused();
await page.keyboard.press("Tab"); // Theme menu button
const themeButton = page.locator(SELECTORS.header.themeMenuButton);
await expect(themeButton).toBeFocused();
});
test("theme menu is keyboard operable", async ({ page }) => {
// Navigate to theme button
await page.keyboard.press("Tab"); // Skip link
await page.keyboard.press("Tab"); // Logo
await page.keyboard.press("Tab"); // Language select
await page.keyboard.press("Tab"); // Theme button
// Open theme menu with Enter
await page.keyboard.press("Enter");
// Menu should be visible
const menu = page.locator(SELECTORS.themeMenu.root);
await expect(menu).toBeVisible();
// Menu items should be focusable
const menuItems = menu.locator('[role="menuitem"]');
const count = await menuItems.count();
expect(count).toBeGreaterThan(0);
// Close menu with Escape
await page.keyboard.press("Escape");
await expect(menu).not.toBeVisible();
});
test("hero read button is keyboard accessible @smoke", async ({
page,
waitForHero,
}) => {
const hero = await waitForHero();
// Navigate to hero read button
// Skip header controls first
for (let i = 0; i < 4; i++) {
await page.keyboard.press("Tab");
}
// Hero read button should be focusable
const readButton = hero.locator(SELECTORS.hero.readButton);
// Walk the tab order and stop when the read button itself is focused
let found = false;
for (let i = 0; i < 10; i++) {
if (await readButton.evaluate((el) => el === document.activeElement)) {
found = true;
break;
}
await page.keyboard.press("Tab");
}
expect(found).toBe(true);
});
test("feed articles are keyboard navigable", async ({
page,
waitForFeed,
}) => {
const feed = await waitForFeed();
// Get first article
const firstArticle = feed.locator(SELECTORS.feed.articles).first();
// Source link should be keyboard accessible
const sourceLink = firstArticle.locator(SELECTORS.feed.articleSource);
const hasSourceLink = (await sourceLink.count()) > 0;
if (hasSourceLink) {
// Tab to source link
let attempts = 0;
let sourceLinkFocused = false;
while (attempts < 20 && !sourceLinkFocused) {
await page.keyboard.press("Tab");
const href = await page.evaluate(() =>
document.activeElement?.getAttribute("href"),
);
if (href && href.startsWith("http")) {
sourceLinkFocused = true;
}
attempts++;
}
expect(sourceLinkFocused).toBe(true);
}
// Read button should be keyboard accessible
const readButton = firstArticle.locator(SELECTORS.feed.articleReadButton);
let attempts = 0;
let readButtonFocused = false;
while (attempts < 30 && !readButtonFocused) {
await page.keyboard.press("Tab");
const text = await page.evaluate(() =>
document.activeElement?.textContent?.trim(),
);
if (text === "Read TL;DR") {
readButtonFocused = true;
}
attempts++;
}
expect(readButtonFocused).toBe(true);
});
test("focus-visible is shown on interactive elements", async ({
page,
waitForHero,
}) => {
const hero = await waitForHero();
// Navigate to hero read button
for (let i = 0; i < 4; i++) {
await page.keyboard.press("Tab");
}
// Find focused element
const focusedElement = page.locator(":focus");
// Check that focused element has visible focus indicator
const hasVisibleFocus = await hasFocusVisible(page, focusedElement);
expect(hasVisibleFocus).toBe(true);
});
test("footer links are keyboard accessible @smoke", async ({ page }) => {
// Navigate to footer
const footer = page.locator(SELECTORS.footer.root);
await footer.scrollIntoViewIfNeeded();
// Tab through footer links
let foundFooterLink = false;
let attempts = 0;
while (attempts < 50 && !foundFooterLink) {
await page.keyboard.press("Tab");
// Check whether focus has landed inside the footer (DOM nodes are not
// serializable across evaluate, so test containment in-page)
const isInFooter = await page.evaluate(() => {
const active = document.activeElement;
const footer = document.querySelector("footer");
return footer?.contains(active);
});
if (isInFooter) {
foundFooterLink = true;
}
attempts++;
}
expect(foundFooterLink).toBe(true);
});
});
test.describe("Focus Management", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("focus moves to modal when opened @smoke", async ({
page,
waitForHero,
}) => {
const hero = await waitForHero();
// Click read button
await hero.locator(SELECTORS.hero.readButton).click();
// Modal should be visible
const modal = page.locator(SELECTORS.summaryModal.root);
await expect(modal).toBeVisible();
// Focus should be inside modal
const isFocusInModal = await page.evaluate(() => {
const modal = document.querySelector('[role="dialog"]');
const active = document.activeElement;
return modal?.contains(active);
});
expect(isFocusInModal).toBe(true);
});
test("focus is trapped within modal", async ({ page, waitForHero }) => {
const hero = await waitForHero();
// Open modal
await hero.locator(SELECTORS.hero.readButton).click();
// Tab multiple times
for (let i = 0; i < 20; i++) {
await page.keyboard.press("Tab");
// Check focus is still in modal
const isInModal = await page.evaluate(() => {
const modal = document.querySelector('[role="dialog"]');
const active = document.activeElement;
return modal?.contains(active);
});
expect(isInModal).toBe(true);
}
});
});
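
The `hasFocusVisible` helper imported above is also outside this diff. A plausible sketch: the fixture presumably reads the focused element's computed style and treats a non-zero outline or any box-shadow as a visible focus indicator. The predicate below captures that logic over plain values (interface and function names are assumptions):

```typescript
// Hypothetical core of hasFocusVisible: given computed-style values for a
// focused element, decide whether a focus indicator is actually visible.
interface FocusStyle {
  outlineStyle: string; // "none" when no outline
  outlineWidth: string; // e.g. "2px"
  boxShadow: string;    // "none" when absent
}

function focusIndicatorVisible(style: FocusStyle): boolean {
  const hasOutline =
    style.outlineStyle !== "none" && parseFloat(style.outlineWidth) > 0;
  const hasShadow = style.boxShadow !== "none" && style.boxShadow !== "";
  return hasOutline || hasShadow;
}
```

A box-shadow check matters because many component libraries suppress the native outline and draw focus rings with `box-shadow` instead.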

View File

@@ -0,0 +1,173 @@
import { SELECTORS } from "../../fixtures/selectors";
import { expect, test } from "../../fixtures/test";
test.describe("Policy Modal Accessibility @smoke", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("terms modal opens and has correct ARIA attributes", async ({
page,
}) => {
// Click terms link
const termsLink = page.locator(SELECTORS.footer.termsLink);
await termsLink.click();
// Modal should be visible
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Check ARIA attributes
await expect(modal).toHaveAttribute("role", "dialog");
await expect(modal).toHaveAttribute("aria-modal", "true");
// Check aria-label
const ariaLabel = await modal.getAttribute("aria-label");
expect(ariaLabel).toMatch(/Terms|Attribution/);
});
test("attribution modal opens and has correct ARIA attributes", async ({
page,
}) => {
// Click attribution link
const attributionLink = page.locator(SELECTORS.footer.attributionLink);
await attributionLink.click();
// Modal should be visible
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Check ARIA attributes
await expect(modal).toHaveAttribute("role", "dialog");
await expect(modal).toHaveAttribute("aria-modal", "true");
});
test("policy modal closes with escape key @smoke", async ({ page }) => {
// Open terms modal
const termsLink = page.locator(SELECTORS.footer.termsLink);
await termsLink.click();
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Press escape
await page.keyboard.press("Escape");
await page.waitForTimeout(500);
// Modal should be closed
await expect(modal).not.toBeVisible();
});
test("policy modal closes with close button", async ({ page }) => {
// Open terms modal
const termsLink = page.locator(SELECTORS.footer.termsLink);
await termsLink.click();
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Click close button
const closeButton = modal.locator(SELECTORS.policyModal.closeButton);
await closeButton.click();
// Modal should be closed
await expect(modal).not.toBeVisible();
});
test("policy modal closes with backdrop click", async ({ page }) => {
// Open terms modal
const termsLink = page.locator(SELECTORS.footer.termsLink);
await termsLink.click();
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Click backdrop
const backdrop = page.locator(".fixed.inset-0.bg-black\\/70").first();
await backdrop.click();
// Modal should be closed
await page.waitForTimeout(500);
await expect(modal).not.toBeVisible();
});
test("focus returns to trigger after closing policy modal @smoke", async ({
page,
}) => {
// Open terms modal
const termsLink = page.locator(SELECTORS.footer.termsLink);
await termsLink.click();
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Close modal
await page.keyboard.press("Escape");
await page.waitForTimeout(500);
// Focus should return to terms link
await expect(termsLink).toBeFocused();
});
test("focus is contained within policy modal", async ({ page }) => {
// Open terms modal
const termsLink = page.locator(SELECTORS.footer.termsLink);
await termsLink.click();
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Tab multiple times
for (let i = 0; i < 10; i++) {
await page.keyboard.press("Tab");
// Check focus is still in modal
const isInModal = await page.evaluate(() => {
const modal = document.querySelector('[role="dialog"]');
const active = document.activeElement;
return modal?.contains(active);
});
expect(isInModal).toBe(true);
}
});
test("policy modal content is readable", async ({ page }) => {
// Open terms modal
const termsLink = page.locator(SELECTORS.footer.termsLink);
await termsLink.click();
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Check title is present
const title = modal.locator(SELECTORS.policyModal.termsTitle);
await expect(title).toBeVisible();
await expect(title).toContainText("Terms of Use");
// Check content is present
const content = modal.locator(".modal-body-text");
const paragraphs = await content.locator("p").count();
expect(paragraphs).toBeGreaterThan(0);
});
test("policy modal has correct heading structure", async ({ page }) => {
// Open terms modal
const termsLink = page.locator(SELECTORS.footer.termsLink);
await termsLink.click();
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Modal title should be rendered as a level-2 heading
const heading = modal.getByRole("heading", { level: 2 });
await expect(heading).toBeVisible();
});
});

View File

@@ -0,0 +1,368 @@
import { SELECTORS } from "../../fixtures/selectors";
import { expect, test } from "../../fixtures/test";
test.describe("Hero and Feed Browsing @smoke", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("hero section loads with article content", async ({
page,
waitForHero,
}) => {
const hero = await waitForHero();
// Hero should be visible
await expect(hero).toBeVisible();
// Hero should have headline
const headline = hero.locator(SELECTORS.hero.headline);
await expect(headline).toBeVisible();
await expect(headline).not.toBeEmpty();
// Hero should have summary
const summary = hero.locator(SELECTORS.hero.summary);
await expect(summary).toBeVisible();
await expect(summary).not.toBeEmpty();
// Hero should have "Read TL;DR" button
const readButton = hero.locator(SELECTORS.hero.readButton);
await expect(readButton).toBeVisible();
await expect(readButton).toBeEnabled();
// Hero should have image
const image = hero.locator(SELECTORS.hero.image);
await expect(image).toBeVisible();
});
test("news feed loads with multiple articles", async ({
page,
waitForFeed,
}) => {
const feed = await waitForFeed();
// Feed section should be visible
await expect(feed).toBeVisible();
// Should have "Recent News" heading
const heading = feed.locator("h2");
await expect(heading).toContainText("Recent News");
// Should have multiple article cards (at least 1)
const articles = feed.locator(SELECTORS.feed.articles);
const count = await articles.count();
expect(count).toBeGreaterThanOrEqual(1);
// Each article should have required elements
const firstArticle = articles.first();
await expect(firstArticle.locator("h3")).toBeVisible();
await expect(
firstArticle.locator(SELECTORS.feed.articleSummary),
).toBeVisible();
await expect(
firstArticle.locator(SELECTORS.feed.articleReadButton),
).toBeVisible();
});
test("feed article cards have correct structure", async ({
page,
waitForFeed,
}) => {
const feed = await waitForFeed();
const articles = feed.locator(SELECTORS.feed.articles);
// Check structure of first article
const firstArticle = articles.first();
// Should have image container
const imageContainer = firstArticle.locator(".relative.h-48");
await expect(imageContainer).toBeVisible();
// Should have content area
const contentArea = firstArticle.locator(".p-5");
await expect(contentArea).toBeVisible();
// Should have headline
const headline = firstArticle.locator("h3");
await expect(headline).toBeVisible();
await expect(headline).toHaveClass(/news-card-title/);
// Should have summary
const summary = firstArticle.locator(SELECTORS.feed.articleSummary);
await expect(summary).toBeVisible();
await expect(summary).toHaveClass(/news-card-summary/);
});
test("hero article displays correct metadata", async ({
page,
waitForHero,
}) => {
const hero = await waitForHero();
// Should have "LATEST" pill
const latestPill = hero.locator(SELECTORS.hero.latestPill);
await expect(latestPill).toBeVisible();
await expect(latestPill).toContainText("LATEST");
// Should have time ago
const timePill = hero.locator(SELECTORS.hero.timePill);
await expect(timePill).toBeVisible();
// Time should contain "ago" or "just now"
const timeText = await timePill.textContent();
expect(timeText).toMatch(/ago|just now/);
});
test("source link is present and clickable in hero", async ({
page,
waitForHero,
}) => {
const hero = await waitForHero();
const sourceLink = hero.locator(SELECTORS.hero.sourceLink);
// Source link may not be present if no source URL
const count = await sourceLink.count();
if (count > 0) {
await expect(sourceLink).toBeVisible();
await expect(sourceLink).toHaveAttribute("href");
await expect(sourceLink).toHaveAttribute("target", "_blank");
await expect(sourceLink).toHaveAttribute("rel", "noopener");
}
});
});
test.describe("Summary Modal Flows", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("opens summary modal from hero @smoke", async ({
page,
waitForHero,
isSummaryModalOpen,
}) => {
const hero = await waitForHero();
// Click "Read TL;DR" button in hero
const readButton = hero.locator(SELECTORS.hero.readButton);
await readButton.click();
// Modal should open
const isOpen = await isSummaryModalOpen();
expect(isOpen).toBe(true);
// Modal should have correct structure
const modal = page.locator(SELECTORS.summaryModal.root);
await expect(modal.locator(SELECTORS.summaryModal.headline)).toBeVisible();
await expect(
modal.locator(SELECTORS.summaryModal.tldrSection),
).toBeVisible();
await expect(
modal.locator(SELECTORS.summaryModal.summarySection),
).toBeVisible();
await expect(
modal.locator(SELECTORS.summaryModal.sourceSection),
).toBeVisible();
await expect(
modal.locator(SELECTORS.summaryModal.shareSection),
).toBeVisible();
});
test("opens summary modal from feed article @smoke", async ({
page,
waitForFeed,
isSummaryModalOpen,
}) => {
const feed = await waitForFeed();
// Get first feed article
const articles = feed.locator(SELECTORS.feed.articles);
const firstArticle = articles.first();
// Click "Read TL;DR" button
const readButton = firstArticle.locator(SELECTORS.feed.articleReadButton);
await readButton.click();
// Modal should open
const isOpen = await isSummaryModalOpen();
expect(isOpen).toBe(true);
// Modal headline should match article headline
const modal = page.locator(SELECTORS.summaryModal.root);
const articleHeadline = await firstArticle.locator("h3").textContent();
const modalHeadline = await modal
.locator(SELECTORS.summaryModal.headline)
.textContent();
expect(modalHeadline?.trim()).toBe(articleHeadline?.trim());
});
test("closes summary modal via close button @smoke", async ({
page,
waitForHero,
isSummaryModalOpen,
closeSummaryModal,
}) => {
// Open modal from hero
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
// Verify modal is open
expect(await isSummaryModalOpen()).toBe(true);
// Close modal
await closeSummaryModal();
// Verify modal is closed
expect(await isSummaryModalOpen()).toBe(false);
});
test("closes summary modal via backdrop click", async ({
page,
waitForHero,
isSummaryModalOpen,
}) => {
// Open modal from hero
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
// Verify modal is open
expect(await isSummaryModalOpen()).toBe(true);
// Click backdrop (outside modal content)
const backdrop = page.locator(".fixed.inset-0.bg-black\\/70").first();
await backdrop.click();
// Verify modal is closed
await page.waitForTimeout(500);
expect(await isSummaryModalOpen()).toBe(false);
});
test("closes summary modal via escape key @smoke", async ({
page,
waitForHero,
isSummaryModalOpen,
}) => {
// Open modal from hero
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
// Verify modal is open
expect(await isSummaryModalOpen()).toBe(true);
// Press escape
await page.keyboard.press("Escape");
// Verify modal is closed
await page.waitForTimeout(500);
expect(await isSummaryModalOpen()).toBe(false);
});
test("modal displays correct content sections", async ({
page,
waitForHero,
}) => {
// Open modal from hero
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
const modal = page.locator(SELECTORS.summaryModal.root);
// Check all required sections are present
await expect(modal.locator(SELECTORS.summaryModal.headline)).toBeVisible();
await expect(modal.locator(SELECTORS.summaryModal.image)).toBeVisible();
await expect(
modal.locator(SELECTORS.summaryModal.tldrSection),
).toBeVisible();
await expect(
modal.locator(SELECTORS.summaryModal.summarySection),
).toBeVisible();
await expect(
modal.locator(SELECTORS.summaryModal.sourceSection),
).toBeVisible();
await expect(
modal.locator(SELECTORS.summaryModal.shareSection),
).toBeVisible();
await expect(modal.locator(SELECTORS.summaryModal.poweredBy)).toBeVisible();
// Check TL;DR list has items
const tldrList = modal.locator(SELECTORS.summaryModal.tldrList);
const tldrItems = await tldrList.locator("li").count();
expect(tldrItems).toBeGreaterThanOrEqual(1);
// Check summary body has content
const summaryBody = modal.locator(SELECTORS.summaryModal.summaryBody);
await expect(summaryBody).not.toBeEmpty();
// Check source link is present
const sourceLink = modal.locator(SELECTORS.summaryModal.sourceLink);
await expect(sourceLink).toBeVisible();
await expect(sourceLink).toHaveAttribute("target", "_blank");
});
test("modal share controls are present and accessible", async ({
page,
waitForHero,
}) => {
// Open modal from hero
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
const modal = page.locator(SELECTORS.summaryModal.root);
// Check all share buttons are present
await expect(modal.locator(SELECTORS.summaryModal.shareX)).toBeVisible();
await expect(
modal.locator(SELECTORS.summaryModal.shareWhatsApp),
).toBeVisible();
await expect(
modal.locator(SELECTORS.summaryModal.shareLinkedIn),
).toBeVisible();
await expect(modal.locator(SELECTORS.summaryModal.shareCopy)).toBeVisible();
// Check accessible labels
await expect(modal.locator(SELECTORS.summaryModal.shareX)).toHaveAttribute(
"aria-label",
"Share on X",
);
await expect(
modal.locator(SELECTORS.summaryModal.shareWhatsApp),
).toHaveAttribute("aria-label", "Share on WhatsApp");
await expect(
modal.locator(SELECTORS.summaryModal.shareLinkedIn),
).toHaveAttribute("aria-label", "Share on LinkedIn");
await expect(
modal.locator(SELECTORS.summaryModal.shareCopy),
).toHaveAttribute("aria-label", "Copy article link");
});
test("modal returns to feed context after closing", async ({
page,
waitForHero,
waitForFeed,
isSummaryModalOpen,
}) => {
// Wait for feed to be visible
await waitForFeed();
// Open modal from hero
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
// Close modal
await page.keyboard.press("Escape");
await page.waitForTimeout(500);
// Verify modal is closed
expect(await isSummaryModalOpen()).toBe(false);
// Verify we're still on the same page (no navigation occurred)
await expect(page).toHaveURL(/\/$/);
// Verify feed is still visible
const feed = page.locator(SELECTORS.feed.root);
await expect(feed).toBeVisible();
});
});

View File

@@ -0,0 +1,139 @@
import { SELECTORS } from "../../fixtures/selectors";
import { expect, test } from "../../fixtures/test";
test.describe("Footer Link Rendering @smoke", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("footer renders GitHub link when configured", async ({ page }) => {
const githubLink = page.locator(SELECTORS.footer.githubLink);
// Check if GitHub link exists (may or may not be present based on config)
const count = await githubLink.count();
if (count > 0) {
// If present, should be visible and have correct attributes
await expect(githubLink).toBeVisible();
await expect(githubLink).toHaveAttribute("href");
await expect(githubLink).toHaveAttribute("target", "_blank");
await expect(githubLink).toHaveAttribute("rel", "noopener");
// Should link to GitHub
const href = await githubLink.getAttribute("href");
expect(href).toContain("github.com");
}
});
test("footer renders contact email link when configured", async ({
page,
}) => {
const contactLink = page.locator(SELECTORS.footer.contactLink);
// Check if contact link exists (may or may not be present based on config)
const count = await contactLink.count();
if (count > 0) {
// If present, should be visible and have correct attributes
await expect(contactLink).toBeVisible();
await expect(contactLink).toHaveAttribute("href");
// Should be mailto link
const href = await contactLink.getAttribute("href");
expect(href).toMatch(/^mailto:/);
// Should have email text
const text = await contactLink.textContent();
expect(text).toContain("@");
}
});
test("footer layout is stable regardless of link configuration", async ({
page,
}) => {
const footer = page.locator(SELECTORS.footer.root);
// Footer should always be visible
await expect(footer).toBeVisible();
// Footer should have consistent structure
const poweredBy = footer.locator(SELECTORS.footer.poweredBy);
await expect(poweredBy).toBeVisible();
const termsLink = footer.locator(SELECTORS.footer.termsLink);
await expect(termsLink).toBeVisible();
const attributionLink = footer.locator(SELECTORS.footer.attributionLink);
await expect(attributionLink).toBeVisible();
const copyright = footer.locator("text=All rights reserved");
await expect(copyright).toBeVisible();
});
test("footer links are interactive", async ({ page }) => {
// Terms link should open modal
const termsLink = page.locator(SELECTORS.footer.termsLink);
await termsLink.click();
const termsModal = page.locator(SELECTORS.policyModal.root);
await expect(termsModal).toBeVisible();
await expect(
termsModal.locator(SELECTORS.policyModal.termsTitle),
).toBeVisible();
// Close modal
await page.keyboard.press("Escape");
await page.waitForTimeout(500);
// Attribution link should open modal
const attributionLink = page.locator(SELECTORS.footer.attributionLink);
await attributionLink.click();
const attributionModal = page.locator(SELECTORS.policyModal.root);
await expect(attributionModal).toBeVisible();
await expect(
attributionModal.locator(SELECTORS.policyModal.attributionTitle),
).toBeVisible();
});
test("footer links have proper accessible names", async ({ page }) => {
// Terms link
const termsLink = page.locator(SELECTORS.footer.termsLink);
const termsText = await termsLink.textContent();
expect(termsText?.toLowerCase()).toContain("terms");
// Attribution link
const attributionLink = page.locator(SELECTORS.footer.attributionLink);
const attributionText = await attributionLink.textContent();
expect(attributionText?.toLowerCase()).toContain("attribution");
// GitHub link (if present)
const githubLink = page.locator(SELECTORS.footer.githubLink);
const githubCount = await githubLink.count();
if (githubCount > 0) {
const githubText = await githubLink.textContent();
expect(githubText?.toLowerCase()).toContain("github");
}
});
test("footer is responsive across viewports", async ({
page,
setViewport,
}) => {
const viewports = ["mobile", "tablet", "desktop"] as const;
for (const viewport of viewports) {
await setViewport(viewport);
const footer = page.locator(SELECTORS.footer.root);
await expect(footer).toBeVisible();
// Footer should not overflow
const footerBox = await footer.boundingBox();
const viewportWidth = await page.evaluate(() => window.innerWidth);
expect(footerBox!.width).toBeLessThanOrEqual(viewportWidth);
}
});
});

View File

@@ -0,0 +1,198 @@
import { SELECTORS } from "../../fixtures/selectors";
import { expect, test } from "../../fixtures/test";
test.describe("Contact Email Tooltip @smoke", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("tooltip appears on mouse hover", async ({ page }) => {
const contactLink = page.locator(SELECTORS.footer.contactLink);
// Check if contact link exists
const count = await contactLink.count();
if (count === 0) {
test.skip();
return;
}
// Hover over contact link
await contactLink.hover();
await page.waitForTimeout(500);
// Tooltip should appear
const tooltip = page.locator(SELECTORS.footer.contactHint);
await expect(tooltip).toBeVisible();
// Tooltip should have text
const tooltipText = await tooltip.textContent();
expect(tooltipText).toBeTruthy();
expect(tooltipText!.length).toBeGreaterThan(0);
});
test("tooltip disappears on mouse leave", async ({ page }) => {
const contactLink = page.locator(SELECTORS.footer.contactLink);
// Check if contact link exists
const count = await contactLink.count();
if (count === 0) {
test.skip();
return;
}
// Hover over contact link
await contactLink.hover();
await page.waitForTimeout(500);
const tooltip = page.locator(SELECTORS.footer.contactHint);
await expect(tooltip).toBeVisible();
// Move mouse away
await page.mouse.move(0, 0);
await page.waitForTimeout(500);
// Tooltip should disappear
await expect(tooltip).not.toBeVisible();
});
test("tooltip follows mouse movement", async ({ page }) => {
const contactLink = page.locator(SELECTORS.footer.contactLink);
// Check if contact link exists
const count = await contactLink.count();
if (count === 0) {
test.skip();
return;
}
// Hover over contact link
await contactLink.hover();
await page.waitForTimeout(500);
const tooltip = page.locator(SELECTORS.footer.contactHint);
await expect(tooltip).toBeVisible();
// Get initial position
const initialBox = await tooltip.boundingBox();
// Move mouse slightly
const linkBox = await contactLink.boundingBox();
await page.mouse.move(
linkBox!.x + linkBox!.width / 2 + 20,
linkBox!.y + linkBox!.height / 2,
);
await page.waitForTimeout(200);
// Tooltip should still be visible
await expect(tooltip).toBeVisible();
});
test("tooltip appears on keyboard focus", async ({ page }) => {
const contactLink = page.locator(SELECTORS.footer.contactLink);
// Check if contact link exists
const count = await contactLink.count();
if (count === 0) {
test.skip();
return;
}
// Focus contact link
await contactLink.focus();
await page.waitForTimeout(500);
// Tooltip should appear
const tooltip = page.locator(SELECTORS.footer.contactHint);
await expect(tooltip).toBeVisible();
});
test("tooltip disappears on keyboard blur", async ({ page }) => {
const contactLink = page.locator(SELECTORS.footer.contactLink);
// Check if contact link exists
const count = await contactLink.count();
if (count === 0) {
test.skip();
return;
}
// Focus contact link
await contactLink.focus();
await page.waitForTimeout(500);
const tooltip = page.locator(SELECTORS.footer.contactHint);
await expect(tooltip).toBeVisible();
// Blur contact link
await page.evaluate(() => (document.activeElement as HTMLElement)?.blur());
await page.waitForTimeout(500);
// Tooltip should disappear
await expect(tooltip).not.toBeVisible();
});
test("tooltip content is safe and appropriate", async ({ page }) => {
const contactLink = page.locator(SELECTORS.footer.contactLink);
// Check if contact link exists
const count = await contactLink.count();
if (count === 0) {
test.skip();
return;
}
// Hover to show tooltip
await contactLink.hover();
await page.waitForTimeout(500);
const tooltip = page.locator(SELECTORS.footer.contactHint);
const tooltipText = await tooltip.textContent();
// Should not contain inappropriate content
const inappropriateWords = [
"profanity",
"offensive",
"racist",
"sexist",
"misogynistic",
];
for (const word of inappropriateWords) {
expect(tooltipText?.toLowerCase()).not.toContain(word);
}
// Should contain helpful text
expect(tooltipText).toBeTruthy();
expect(tooltipText!.length).toBeGreaterThan(5);
});
test("tooltip does not trap focus", async ({ page }) => {
const contactLink = page.locator(SELECTORS.footer.contactLink);
// Check if contact link exists
const count = await contactLink.count();
if (count === 0) {
test.skip();
return;
}
// Focus contact link
await contactLink.focus();
await page.waitForTimeout(500);
// Tooltip should be visible
const tooltip = page.locator(SELECTORS.footer.contactHint);
await expect(tooltip).toBeVisible();
// Tab away
await page.keyboard.press("Tab");
await page.waitForTimeout(500);
// Tooltip should disappear
await expect(tooltip).not.toBeVisible();
// Focus should have moved
const isStillFocused = await contactLink.isFocused();
expect(isStillFocused).toBe(false);
});
});


@@ -0,0 +1,191 @@
import { SELECTORS } from "../../fixtures/selectors";
import { expect, test } from "../../fixtures/test";
test.describe("Deep Link Permalink Tests @smoke", () => {
test.beforeEach(async ({ waitForAppReady }) => {
await waitForAppReady();
});
test("valid article permalink opens modal automatically", async ({
page,
gotoApp,
isSummaryModalOpen,
}) => {
// First get an article ID from the feed
await gotoApp();
await page.waitForSelector(SELECTORS.feed.articles, { timeout: 10000 });
const firstArticle = page.locator(SELECTORS.feed.articles).first();
const articleId = await firstArticle
.getAttribute("id")
.then((id) => (id ? parseInt(id.replace("news-", "")) : null));
expect(articleId).not.toBeNull();
// Navigate to article permalink
await gotoApp({ articleId: articleId! });
await page.waitForTimeout(2000); // Wait for modal to open
// Modal should be open
const isOpen = await isSummaryModalOpen();
expect(isOpen).toBe(true);
// Modal should show correct article
const modal = page.locator(SELECTORS.summaryModal.root);
const modalHeadline = await modal
.locator(SELECTORS.summaryModal.headline)
.textContent();
const articleHeadline = await firstArticle.locator("h3").textContent();
expect(modalHeadline).toBe(articleHeadline);
});
test("invalid article permalink shows error state", async ({
page,
gotoApp,
}) => {
// Navigate to invalid article ID
await gotoApp({ articleId: 999999 });
await page.waitForTimeout(2000);
// Should not show summary modal
const modal = page.locator(SELECTORS.summaryModal.root);
await expect(modal).not.toBeVisible();
// Should still show the page (not crash)
const hero = page.locator(SELECTORS.hero.root);
const feed = page.locator(SELECTORS.feed.root);
const heroVisible = await hero.isVisible().catch(() => false);
const feedVisible = await feed.isVisible().catch(() => false);
expect(heroVisible || feedVisible).toBe(true);
});
test("hero-origin modal flow via permalink", async ({
page,
gotoApp,
isSummaryModalOpen,
}) => {
// Get hero article ID
await gotoApp();
await page.waitForSelector(SELECTORS.hero.root, { timeout: 10000 });
const hero = page.locator(SELECTORS.hero.root);
const heroId = await hero
.getAttribute("id")
.then((id) => (id ? parseInt(id.replace("news-", "")) : null));
expect(heroId).not.toBeNull();
// Navigate directly to hero article
await gotoApp({ articleId: heroId! });
await page.waitForTimeout(2000);
// Modal should open
const isOpen = await isSummaryModalOpen();
expect(isOpen).toBe(true);
// Modal should show hero article content
const modal = page.locator(SELECTORS.summaryModal.root);
const modalHeadline = await modal
.locator(SELECTORS.summaryModal.headline)
.textContent();
const heroHeadline = await hero.locator("h1").textContent();
expect(modalHeadline).toBe(heroHeadline);
});
test("closing permalink modal updates URL", async ({
page,
gotoApp,
}) => {
// Open via permalink
await gotoApp({ articleId: 1 });
await page.waitForTimeout(2000);
// URL should have article parameter
await expect(page).toHaveURL(/\?article=\d+/);
// Close modal
await page.keyboard.press("Escape");
await page.waitForTimeout(500);
// URL should be cleaned up (parameter removed)
await expect(page).toHaveURL(/\/$/);
await expect(page).not.toHaveURL(/\?article=/);
});
test("modal state persists on page refresh", async ({
page,
gotoApp,
isSummaryModalOpen,
}) => {
// Open via permalink
await gotoApp({ articleId: 1 });
await page.waitForTimeout(2000);
// Verify modal is open
expect(await isSummaryModalOpen()).toBe(true);
// Refresh page
await page.reload();
await page.waitForTimeout(2000);
// Modal should still be open
expect(await isSummaryModalOpen()).toBe(true);
});
});
test.describe("Policy Modal Deep Links", () => {
test("terms policy modal opens via URL parameter", async ({
page,
gotoApp,
}) => {
await gotoApp({ policy: "terms" });
await page.waitForTimeout(1000);
// Policy modal should be visible
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Should show terms title
const title = modal.locator(SELECTORS.policyModal.termsTitle);
await expect(title).toBeVisible();
});
test("attribution policy modal opens via URL parameter", async ({
page,
gotoApp,
}) => {
await gotoApp({ policy: "attribution" });
await page.waitForTimeout(1000);
// Policy modal should be visible
const modal = page.locator(SELECTORS.policyModal.root);
await expect(modal).toBeVisible();
// Should show attribution title
const title = modal.locator(SELECTORS.policyModal.attributionTitle);
await expect(title).toBeVisible();
});
test("closing policy modal clears URL parameter", async ({
page,
gotoApp,
}) => {
await gotoApp({ policy: "terms" });
await page.waitForTimeout(1000);
// URL should have policy parameter
await expect(page).toHaveURL(/\?policy=terms/);
// Close modal
const modal = page.locator(SELECTORS.policyModal.root);
await modal.locator(SELECTORS.policyModal.closeButton).click();
await page.waitForTimeout(500);
// URL should be cleaned up
await expect(page).toHaveURL(/\/$/);
await expect(page).not.toHaveURL(/\?policy=/);
});
});


@@ -0,0 +1,193 @@
import { SELECTORS } from "../../fixtures/selectors";
import { expect, test } from "../../fixtures/test";
test.describe("Source CTA and Share Interactions", () => {
test.beforeEach(async ({ page, gotoApp, waitForAppReady, waitForHero }) => {
await gotoApp();
await waitForAppReady();
// Open modal from hero
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
await expect(page.locator(SELECTORS.summaryModal.root)).toBeVisible();
});
// Module-scoped alias; each test assigns its `page` fixture here so that
// code outside a test body can reference the current page.
let page: any;
test("source link opens in new tab @smoke", async ({ page: p, context }) => {
page = p;
const modal = page.locator(SELECTORS.summaryModal.root);
const sourceLink = modal.locator(SELECTORS.summaryModal.sourceLink);
// Check link attributes
await expect(sourceLink).toHaveAttribute("target", "_blank");
await expect(sourceLink).toHaveAttribute("rel", "noopener");
await expect(sourceLink).toHaveAttribute("href");
// Click should open new tab
const [newPage] = await Promise.all([
context.waitForEvent("page"),
sourceLink.click(),
]);
// New page should have loaded
expect(newPage).toBeDefined();
await newPage.close();
// Modal should remain open
await expect(modal).toBeVisible();
});
test("share on X opens correct URL", async ({ page: p, context }) => {
page = p;
const modal = page.locator(SELECTORS.summaryModal.root);
const shareX = modal.locator(SELECTORS.summaryModal.shareX);
// Get article headline for URL verification
const headline = await modal
.locator(SELECTORS.summaryModal.headline)
.textContent();
// Click should open X share in new tab
const [newPage] = await Promise.all([
context.waitForEvent("page"),
shareX.click(),
]);
// Verify URL contains X intent
const url = newPage.url();
expect(url).toContain("x.com/intent/tweet");
expect(url).toContain(encodeURIComponent(headline || ""));
await newPage.close();
});
test("share on WhatsApp opens correct URL", async ({ page: p, context }) => {
page = p;
const modal = page.locator(SELECTORS.summaryModal.root);
const shareWhatsApp = modal.locator(SELECTORS.summaryModal.shareWhatsApp);
// Get article headline for URL verification
const headline = await modal
.locator(SELECTORS.summaryModal.headline)
.textContent();
// Click should open WhatsApp share in new tab
const [newPage] = await Promise.all([
context.waitForEvent("page"),
shareWhatsApp.click(),
]);
// Verify URL contains WhatsApp share
const url = newPage.url();
expect(url).toContain("wa.me");
expect(url).toContain(encodeURIComponent(headline || ""));
await newPage.close();
});
test("share on LinkedIn opens correct URL", async ({ page: p, context }) => {
page = p;
const modal = page.locator(SELECTORS.summaryModal.root);
const shareLinkedIn = modal.locator(SELECTORS.summaryModal.shareLinkedIn);
// Click should open LinkedIn share in new tab
const [newPage] = await Promise.all([
context.waitForEvent("page"),
shareLinkedIn.click(),
]);
// Verify URL contains LinkedIn share
const url = newPage.url();
expect(url).toContain("linkedin.com/sharing");
await newPage.close();
});
test("copy link button copies permalink to clipboard @smoke", async ({
page: p,
context,
}) => {
page = p;
const modal = page.locator(SELECTORS.summaryModal.root);
const copyButton = modal.locator(SELECTORS.summaryModal.shareCopy);
// Grant clipboard permissions
await context.grantPermissions(["clipboard-read", "clipboard-write"]);
// Click copy button
await copyButton.click();
// Wait for success message
const successMessage = modal.locator(SELECTORS.summaryModal.copySuccess);
await expect(successMessage).toBeVisible();
await expect(successMessage).toContainText("Permalink copied");
// Verify clipboard content
const clipboardContent = await page.evaluate(() =>
navigator.clipboard.readText(),
);
expect(clipboardContent).toContain("/?article=");
expect(clipboardContent).toMatch(/http/);
});
test("copy link does not navigate away", async ({ page: p }) => {
page = p;
const modal = page.locator(SELECTORS.summaryModal.root);
const copyButton = modal.locator(SELECTORS.summaryModal.shareCopy);
// Get current URL
const currentUrl = page.url();
// Click copy button
await copyButton.click();
// Wait a moment
await page.waitForTimeout(1000);
// URL should not have changed
await expect(page).toHaveURL(currentUrl);
// Modal should still be open
await expect(modal).toBeVisible();
});
test("navigation is preserved after share interactions", async ({
page: p,
context,
}) => {
page = p;
const modal = page.locator(SELECTORS.summaryModal.root);
// Interact with multiple share buttons
const shareButtons = [
modal.locator(SELECTORS.summaryModal.shareX),
modal.locator(SELECTORS.summaryModal.shareWhatsApp),
modal.locator(SELECTORS.summaryModal.shareLinkedIn),
];
for (const button of shareButtons) {
if (await button.isVisible()) {
const [newPage] = await Promise.all([
context.waitForEvent("page"),
button.click(),
]);
await newPage.close();
// Modal should remain open after each interaction
await expect(modal).toBeVisible();
}
}
// Close modal
await page.keyboard.press("Escape");
await page.waitForTimeout(500);
// Should be back on main page without navigation
await expect(page).toHaveURL(/\/$/);
// Feed should be visible
const feed = page.locator(SELECTORS.feed.root);
await expect(feed).toBeVisible();
});
});


@@ -0,0 +1,233 @@
import { SELECTORS } from "../../fixtures/selectors";
import { expect, test } from "../../fixtures/test";
import {
hasHorizontalOverflow,
isClipped,
VIEWPORT_SIZES,
type ViewportSize,
} from "../../fixtures/viewports";
test.describe("Responsive Breakpoint Tests @smoke", () => {
for (const viewport of VIEWPORT_SIZES) {
test.describe(`${viewport} viewport`, () => {
test.beforeEach(async ({ gotoApp, waitForAppReady, setViewport }) => {
await setViewport(viewport);
await gotoApp();
await waitForAppReady();
});
test("page has no horizontal overflow", async ({ page }) => {
// Check body for overflow
const bodyOverflow = await hasHorizontalOverflow(page, "body");
expect(bodyOverflow).toBe(false);
// Check main content
const mainOverflow = await hasHorizontalOverflow(page, "main");
expect(mainOverflow).toBe(false);
// Check hero section
const heroOverflow = await hasHorizontalOverflow(
page,
SELECTORS.hero.root,
);
expect(heroOverflow).toBe(false);
// Check feed section
const feedOverflow = await hasHorizontalOverflow(
page,
SELECTORS.feed.root,
);
expect(feedOverflow).toBe(false);
});
test("hero section is not clipped", async ({ page, waitForHero }) => {
const hero = await waitForHero();
const isHeroClipped = await isClipped(page, SELECTORS.hero.root);
expect(isHeroClipped).toBe(false);
});
test("feed articles are not clipped", async ({ page, waitForFeed }) => {
const feed = await waitForFeed();
const isFeedClipped = await isClipped(page, SELECTORS.feed.root);
expect(isFeedClipped).toBe(false);
});
test("modal fits within viewport", async ({
page,
waitForHero,
setViewport,
}) => {
// Open modal
const hero = await waitForHero();
await hero.locator(SELECTORS.hero.readButton).click();
const modal = page.locator(SELECTORS.summaryModal.root);
await expect(modal).toBeVisible();
// Get viewport dimensions
const viewport = await page.evaluate(() => ({
width: window.innerWidth,
height: window.innerHeight,
}));
// Get modal dimensions
const modalBox = await modal.boundingBox();
expect(modalBox).not.toBeNull();
// Modal should fit within viewport (with some padding)
expect(modalBox!.width).toBeLessThanOrEqual(viewport.width);
expect(modalBox!.height).toBeLessThanOrEqual(viewport.height * 0.96); // max-h-[96vh]
});
test("interactive controls are reachable", async ({
page,
waitForFeed,
}) => {
const feed = await waitForFeed();
const firstArticle = feed.locator(SELECTORS.feed.articles).first();
// Check read button is visible and clickable
const readButton = firstArticle.locator(
SELECTORS.feed.articleReadButton,
);
await expect(readButton).toBeVisible();
await expect(readButton).toBeEnabled();
// Check source link is visible if present
const sourceLink = firstArticle.locator(SELECTORS.feed.articleSource);
const hasSourceLink = (await sourceLink.count()) > 0;
if (hasSourceLink) {
await expect(sourceLink).toBeVisible();
}
});
test("header controls remain accessible", async ({ page }) => {
// Check logo is visible
const logo = page.locator(SELECTORS.header.logo);
await expect(logo).toBeVisible();
// Check theme button is visible
const themeButton = page.locator(SELECTORS.header.themeMenuButton);
await expect(themeButton).toBeVisible();
await expect(themeButton).toBeEnabled();
// Check language select is visible (may be hidden on very small screens)
const languageSelect = page.locator(SELECTORS.header.languageSelect);
const isVisible = await languageSelect.isVisible().catch(() => false);
if (isVisible) {
await expect(languageSelect).toBeEnabled();
}
});
});
}
});
test.describe("Responsive Layout Adaptations", () => {
test("mobile shows single column feed", async ({
gotoApp,
waitForAppReady,
setViewport,
waitForFeed,
}) => {
await setViewport("mobile");
await gotoApp();
await waitForAppReady();
const feed = await waitForFeed();
const articles = feed.locator(SELECTORS.feed.articles);
// Articles should be in single column (full width)
const firstArticle = articles.first();
const articleBox = await firstArticle.boundingBox();
// Get feed container width
const feedBox = await feed.boundingBox();
// Article should take most of the width (single column)
expect(articleBox!.width).toBeGreaterThan(feedBox!.width * 0.8);
});
test("tablet shows appropriate layout", async ({
gotoApp,
waitForAppReady,
setViewport,
waitForFeed,
}) => {
await setViewport("tablet");
await gotoApp();
await waitForAppReady();
const feed = await waitForFeed();
const articles = feed.locator(SELECTORS.feed.articles);
// Should have multiple articles visible
const count = await articles.count();
expect(count).toBeGreaterThanOrEqual(2);
// Articles should be side by side (multi-column)
const firstArticle = articles.first();
const secondArticle = articles.nth(1);
const firstBox = await firstArticle.boundingBox();
const secondBox = await secondArticle.boundingBox();
// Second article should be to the right of first (or below in some layouts)
expect(secondBox!.x).not.toBe(firstBox!.x);
});
test("desktop shows multi-column feed", async ({
gotoApp,
waitForAppReady,
setViewport,
waitForFeed,
}) => {
await setViewport("desktop");
await gotoApp();
await waitForAppReady();
const feed = await waitForFeed();
const articles = feed.locator(SELECTORS.feed.articles);
// Should have multiple articles in a row
const count = await articles.count();
expect(count).toBeGreaterThanOrEqual(3);
// First three articles should be in a row
// Locator has no slice(); collect the first three bounding boxes individually
const articleBoxes = [];
for (let i = 0; i < 3; i++) {
articleBoxes.push(await articles.nth(i).boundingBox());
}
// Articles should be at different x positions (side by side)
const xPositions = articleBoxes.map((box) => box!.x);
const uniqueXPositions = [...new Set(xPositions)];
expect(uniqueXPositions.length).toBeGreaterThanOrEqual(2);
});
test("hero image maintains aspect ratio", async ({
gotoApp,
waitForAppReady,
setViewport,
waitForHero,
}) => {
for (const viewport of ["mobile", "tablet", "desktop"] as ViewportSize[]) {
await setViewport(viewport);
await gotoApp();
await waitForAppReady();
const hero = await waitForHero();
const image = hero.locator(SELECTORS.hero.image);
const box = await image.boundingBox();
expect(box).not.toBeNull();
// Image should have reasonable dimensions
expect(box!.width).toBeGreaterThan(0);
expect(box!.height).toBeGreaterThan(0);
// Aspect ratio should be roughly maintained (wider than tall)
expect(box!.width / box!.height).toBeGreaterThan(1);
expect(box!.width / box!.height).toBeLessThan(5);
}
});
});


@@ -0,0 +1,250 @@
import { SELECTORS } from "../../fixtures/selectors";
import { expect, test } from "../../fixtures/test";
import { getStickyPosition } from "../../fixtures/viewports";
test.describe("Sticky Header Behavior @smoke", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("header is sticky on scroll", async ({ page }) => {
const header = page.locator(SELECTORS.header.root);
// Check initial position
const initialPosition = await getStickyPosition(
page,
SELECTORS.header.root,
);
expect(initialPosition.isSticky).toBe(true);
// Scroll down
await page.evaluate(() => window.scrollTo(0, 500));
await page.waitForTimeout(500);
// Header should still be at top
const scrolledPosition = await getStickyPosition(
page,
SELECTORS.header.root,
);
expect(scrolledPosition.top).toBeLessThanOrEqual(10); // Allow small offset
expect(scrolledPosition.isSticky).toBe(true);
});
test("header shrinks on scroll", async ({ page }) => {
const header = page.locator(SELECTORS.header.root);
const headerContainer = header.locator("> div");
// Get initial height
const initialHeight = await headerContainer.evaluate(
(el) => el.offsetHeight,
);
// Scroll down
await page.evaluate(() => window.scrollTo(0, 300));
await page.waitForTimeout(500);
// Get scrolled height
const scrolledHeight = await headerContainer.evaluate(
(el) => el.offsetHeight,
);
// Header should shrink (or stay same, but not grow)
expect(scrolledHeight).toBeLessThanOrEqual(initialHeight);
});
test("header maintains glass effect on scroll", async ({ page }) => {
// Scroll down
await page.evaluate(() => window.scrollTo(0, 500));
await page.waitForTimeout(500);
// Check header has backdrop blur
const hasBlur = await page.evaluate(() => {
const header = document.querySelector("header");
if (!header) return false;
const style = window.getComputedStyle(header);
return style.backdropFilter.includes("blur");
});
expect(hasBlur).toBe(true);
});
});
test.describe("Sticky Footer Behavior", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("footer is sticky at bottom", async ({ page }) => {
const footer = page.locator(SELECTORS.footer.root);
// Check footer is visible
await expect(footer).toBeVisible();
// Check footer position
const footerBox = await footer.boundingBox();
const viewportHeight = await page.evaluate(() => window.innerHeight);
// Footer should be at bottom of viewport
expect(footerBox!.y + footerBox!.height).toBeGreaterThanOrEqual(
viewportHeight - 10,
);
});
test("footer does not overlap main content", async ({ page }) => {
const footer = page.locator(SELECTORS.footer.root);
const mainContent = page.locator("main");
// Get bounding boxes
const footerBox = await footer.boundingBox();
const mainBox = await mainContent.boundingBox();
// Main content should have padding at bottom to account for footer
const bodyPadding = await page.evaluate(() => {
const body = document.body;
const style = window.getComputedStyle(body);
return parseInt(style.paddingBottom || "0");
});
expect(bodyPadding).toBeGreaterThan(0);
});
});
test.describe("Back to Top Behavior @smoke", () => {
test.beforeEach(async ({ gotoApp, waitForAppReady }) => {
await gotoApp();
await waitForAppReady();
});
test("back to top is hidden initially", async ({ page }) => {
const backToTop = page.locator(SELECTORS.backToTop.root);
// Should not be visible at top of page
const isVisible = await backToTop.isVisible().catch(() => false);
expect(isVisible).toBe(false);
});
test("back to top appears on scroll", async ({ page }) => {
// Scroll down
await page.evaluate(() => window.scrollTo(0, 800));
await page.waitForTimeout(500);
const backToTop = page.locator(SELECTORS.backToTop.root);
// Should be visible after scroll
await expect(backToTop).toBeVisible();
});
test("back to top scrolls to top when clicked", async ({ page }) => {
// Scroll down
await page.evaluate(() => window.scrollTo(0, 1000));
await page.waitForTimeout(500);
// Click back to top
const backToTop = page.locator(SELECTORS.backToTop.root);
await backToTop.click();
// Wait for scroll animation
await page.waitForTimeout(1000);
// Should be at top
const scrollPosition = await page.evaluate(() => window.scrollY);
expect(scrollPosition).toBeLessThanOrEqual(50);
});
test("back to top is accessible", async ({ page }) => {
// Scroll down to make visible
await page.evaluate(() => window.scrollTo(0, 800));
await page.waitForTimeout(500);
const backToTop = page.locator(SELECTORS.backToTop.root);
// Should have aria-label
await expect(backToTop).toHaveAttribute("aria-label");
// Should be keyboard focusable
await backToTop.focus();
await expect(backToTop).toBeFocused();
});
});
test.describe("Sticky Behavior Across Breakpoints", () => {
test("header and footer work on mobile", async ({
gotoApp,
waitForAppReady,
setViewport,
page,
}) => {
await setViewport("mobile");
await gotoApp();
await waitForAppReady();
// Check header is sticky
const header = page.locator(SELECTORS.header.root);
await expect(header).toBeVisible();
// Scroll down
await page.evaluate(() => window.scrollTo(0, 500));
await page.waitForTimeout(500);
// Header should still be visible
await expect(header).toBeVisible();
// Footer should be visible
const footer = page.locator(SELECTORS.footer.root);
await expect(footer).toBeVisible();
});
test("header and footer work on tablet", async ({
gotoApp,
waitForAppReady,
setViewport,
page,
}) => {
await setViewport("tablet");
await gotoApp();
await waitForAppReady();
// Check header is sticky
const header = page.locator(SELECTORS.header.root);
await expect(header).toBeVisible();
// Scroll down
await page.evaluate(() => window.scrollTo(0, 500));
await page.waitForTimeout(500);
// Header should still be visible
await expect(header).toBeVisible();
// Footer should be visible
const footer = page.locator(SELECTORS.footer.root);
await expect(footer).toBeVisible();
});
test("header and footer work on desktop", async ({
gotoApp,
waitForAppReady,
setViewport,
page,
}) => {
await setViewport("desktop");
await gotoApp();
await waitForAppReady();
// Check header is sticky
const header = page.locator(SELECTORS.header.root);
await expect(header).toBeVisible();
// Scroll down
await page.evaluate(() => window.scrollTo(0, 500));
await page.waitForTimeout(500);
// Header should still be visible
await expect(header).toBeVisible();
// Footer should be visible
const footer = page.locator(SELECTORS.footer.root);
await expect(footer).toBeVisible();
});
});

e2e/tests/fixtures/accessibility.ts

@@ -0,0 +1,271 @@
/**
* Accessibility helpers for WCAG 2.2 AA testing
*/
import type { Locator, Page } from "@playwright/test";
/**
* WCAG 2.2 AA contrast ratio requirements
*/
export const WCAG_CONTRAST = {
normalText: 4.5,
largeText: 3,
uiComponents: 3,
} as const;
/**
* Check if an element has visible focus indicator
*/
export async function hasFocusVisible(
page: Page,
locator: Locator,
): Promise<boolean> {
return locator.evaluate((element: Element) => {
const style = window.getComputedStyle(element);
const outline = style.outlineWidth;
const boxShadow = style.boxShadow;
// Check for outline or box-shadow focus indicator; coerce to a strict boolean
return Boolean(
(outline && outline !== "0px" && outline !== "none") ||
(boxShadow && boxShadow !== "none"),
);
});
}
/**
* Get computed color values for an element
*/
export async function getComputedColors(
page: Page,
locator: Locator,
): Promise<{
color: string;
backgroundColor: string;
}> {
return locator.evaluate((element: Element) => {
const style = window.getComputedStyle(element);
return {
color: style.color,
backgroundColor: style.backgroundColor,
};
});
}
/**
* Calculate relative luminance of a color
* https://www.w3.org/TR/WCAG20/#relativeluminancedef
*/
export function getLuminance(r: number, g: number, b: number): number {
const [rs, gs, bs] = [r, g, b].map((val) => {
const sRGB = val / 255;
return sRGB <= 0.03928 ? sRGB / 12.92 : ((sRGB + 0.055) / 1.055) ** 2.4;
});
return 0.2126 * rs + 0.7152 * gs + 0.0722 * bs;
}
/**
* Parse RGB color string
*/
export function parseRGB(
color: string,
): { r: number; g: number; b: number } | null {
const match = color.match(/rgb\((\d+),\s*(\d+),\s*(\d+)\)/);
if (!match) return null;
return {
r: parseInt(match[1], 10),
g: parseInt(match[2], 10),
b: parseInt(match[3], 10),
};
}
/**
* Parse RGBA color string
*/
export function parseRGBA(
color: string,
): { r: number; g: number; b: number; a: number } | null {
const match = color.match(/rgba\((\d+),\s*(\d+),\s*(\d+),\s*([\d.]+)\)/);
if (!match) return null;
return {
r: parseInt(match[1], 10),
g: parseInt(match[2], 10),
b: parseInt(match[3], 10),
a: parseFloat(match[4]),
};
}
/**
* Calculate contrast ratio between two luminances
* https://www.w3.org/TR/WCAG20/#contrast-ratiodef
*/
export function getContrastRatio(lum1: number, lum2: number): number {
const lighter = Math.max(lum1, lum2);
const darker = Math.min(lum1, lum2);
return (lighter + 0.05) / (darker + 0.05);
}
/**
* Check contrast ratio between two colors
*/
export function checkContrast(color1: string, color2: string): number | null {
const rgb1 = parseRGB(color1) || parseRGBA(color1);
const rgb2 = parseRGB(color2) || parseRGBA(color2);
if (!rgb1 || !rgb2) return null;
const lum1 = getLuminance(rgb1.r, rgb1.g, rgb1.b);
const lum2 = getLuminance(rgb2.r, rgb2.g, rgb2.b);
return getContrastRatio(lum1, lum2);
}
/**
* Assert element meets WCAG AA contrast requirements
*/
export async function assertContrast(
page: Page,
locator: Locator,
minRatio: number = WCAG_CONTRAST.normalText,
): Promise<void> {
const colors = await getComputedColors(page, locator);
const ratio = checkContrast(colors.color, colors.backgroundColor);
if (ratio === null) {
throw new Error("Could not calculate contrast ratio");
}
if (ratio < minRatio) {
throw new Error(
`Contrast ratio ${ratio.toFixed(2)} is below required ${minRatio} ` +
`(color: ${colors.color}, background: ${colors.backgroundColor})`,
);
}
}
/**
* Check if element has accessible name
*/
export async function hasAccessibleName(locator: Locator): Promise<boolean> {
const name = await locator.getAttribute("aria-label");
const labelledBy = await locator.getAttribute("aria-labelledby");
const title = await locator.getAttribute("title");
// Check for text content if it's a button or link
const tagName = await locator.evaluate((el) => el.tagName.toLowerCase());
let textContent = "";
if (tagName === "button" || tagName === "a") {
textContent = (await locator.textContent()) || "";
}
return !!(name || labelledBy || title || textContent.trim());
}
/**
* Get accessible name for an element
*/
export async function getAccessibleName(
page: Page,
locator: Locator,
): Promise<string | null> {
return locator.evaluate((element: Element) => {
// Check aria-label
const ariaLabel = element.getAttribute("aria-label");
if (ariaLabel) return ariaLabel;
// Check aria-labelledby
const labelledBy = element.getAttribute("aria-labelledby");
if (labelledBy) {
const labelElement = document.getElementById(labelledBy);
if (labelElement) return labelElement.textContent;
}
// Check title attribute
const title = element.getAttribute("title");
if (title) return title;
// Check text content for interactive elements
const tagName = element.tagName.toLowerCase();
if (tagName === "button" || tagName === "a") {
return element.textContent?.trim() || null;
}
return null;
});
}
/**
* Test keyboard navigation through a set of elements
*/
export async function testKeyboardNavigation(
  page: Page,
  selectors: string[],
): Promise<void> {
  // Focus first element
  await page.focus(selectors[0]);
  for (let i = 0; i < selectors.length; i++) {
    // Verify focus actually landed on the expected element
    const onExpected = await page.evaluate(
      (sel: string) => document.activeElement === document.querySelector(sel),
      selectors[i],
    );
    if (!onExpected) {
      const activeElement = await page.evaluate(
        () =>
          document.activeElement?.getAttribute("data-testid") ||
          document.activeElement?.getAttribute("aria-label") ||
          document.activeElement?.textContent?.trim() ||
          document.activeElement?.tagName,
      );
      throw new Error(
        `Expected focus on "${selectors[i]}" but focus is on: ${activeElement}`,
      );
    }
    // Check focus is visible (outline or box-shadow focus ring)
    const hasFocus = await page.evaluate(() => {
      const active = document.activeElement;
      if (!active || active === document.body) return false;
      const style = window.getComputedStyle(active);
      return (
        (style.outlineWidth && style.outlineWidth !== "0px") ||
        (style.boxShadow && style.boxShadow !== "none")
      );
    });
    if (!hasFocus) {
      throw new Error(`Focus not visible on element: ${selectors[i]}`);
    }
    // Press Tab to move to next element
    if (i < selectors.length - 1) {
      await page.keyboard.press("Tab");
    }
  }
}
/**
* Assert focus is contained within a modal
*/
export async function assertFocusContained(
page: Page,
modalSelector: string,
): Promise<void> {
const isContained = await page.evaluate((selector: string) => {
const modal = document.querySelector(selector);
if (!modal) return false;
const activeElement = document.activeElement;
if (!activeElement) return false;
return modal.contains(activeElement);
}, modalSelector);
if (!isContained) {
throw new Error("Focus is not contained within modal");
}
}

e2e/tests/fixtures/selectors.ts (new file, 164 lines)
/**
* Stable selector strategy for ClawFort UI elements
*
* Strategy (in order of preference):
* 1. data-testid attributes (most stable)
* 2. ARIA roles with accessible names
* 3. Semantic HTML elements with text content
* 4. ID selectors (for unique elements)
* 5. Structural selectors (last resort)
*/
export const SELECTORS = {
// Header controls
header: {
root: "header",
logo: 'a[href="/"]',
languageSelect: "#language-select",
themeMenuButton: "#theme-menu-button",
themeMenu: "#theme-menu",
themeOption: (theme: string) => `[data-theme-option="${theme}"]`,
},
// Skip link
skipLink: 'a[href="#main-content"]',
// Hero section
hero: {
root: 'article[itemscope][itemtype="https://schema.org/NewsArticle"]:first-of-type',
headline: "h1",
summary: ".hero-summary",
meta: ".hero-meta",
readButton: 'button:has-text("Read TL;DR")',
sourceLink: "a.source-link",
image: 'img[fetchpriority="high"]',
latestPill: ".hero-latest-pill",
timePill: ".hero-time-pill",
},
// News feed
feed: {
root: 'section:has(h2:has-text("Recent News"))',
articles:
'article[itemscope][itemtype="https://schema.org/NewsArticle"]:not(:first-of-type)',
article: (id: number) => `#news-${id}`,
articleTitle: "h3",
articleSummary: ".news-card-summary",
articleReadButton: 'button:has-text("Read TL;DR")',
articleSource: "a.source-link",
},
// Summary modal
summaryModal: {
root: '[role="dialog"][aria-modal="true"]:has-text("TL;DR")',
closeButton: 'button:has-text("Close")',
headline: "h2",
image: "img",
tldrSection: 'h3:has-text("TL;DR")',
tldrList: "ul",
summarySection: 'h3:has-text("Summary")',
summaryBody: ".modal-body-text",
sourceSection: 'h3:has-text("Source and Citation")',
sourceLink: 'a:has-text("Read Full Article")',
shareSection: 'h3:has-text("Share")',
shareX: '[aria-label="Share on X"]',
shareWhatsApp: '[aria-label="Share on WhatsApp"]',
shareLinkedIn: '[aria-label="Share on LinkedIn"]',
shareCopy: '[aria-label="Copy article link"]',
copySuccess: "text=Permalink copied.",
poweredBy: "text=Powered by Perplexity",
},
// Policy modals
policyModal: {
root: '[role="dialog"][aria-modal="true"]:has(h2)',
closeButton: 'button:has-text("Close")',
termsTitle: 'h2:has-text("Terms of Use")',
attributionTitle: 'h2:has-text("Attribution and Ownership Disclaimer")',
},
// Footer
footer: {
root: "footer",
poweredBy: 'a[href*="perplexity"]',
termsLink: 'button:has-text("Terms of Use")',
attributionLink: 'button:has-text("Attribution")',
githubLink: 'a:has-text("GitHub")',
contactLink: 'a[href^="mailto:"]',
contactHint: "#contact-hint",
copyright: "text=All rights reserved",
},
// Back to top
backToTop: {
root: '[aria-label="Back to top"]',
icon: "svg",
},
// Theme menu
themeMenu: {
root: "#theme-menu",
options: '[role="menuitem"]',
option: (theme: string) => `[data-theme-option="${theme}"]`,
},
// Empty state
emptyState: {
root: '.text-6xl:has-text("🤖")',
heading: 'h2:has-text("No News Yet")',
},
} as const;
/**
* Get a stable locator string for a control
*/
export function getSelector(path: string): string {
const parts = path.split(".");
let current: any = SELECTORS;
for (const part of parts) {
if (current[part] === undefined) {
throw new Error(`Invalid selector path: ${path}`);
}
current = current[part];
}
if (typeof current === "function") {
throw new Error(`Selector path requires parameter: ${path}`);
}
return current as string;
}
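The dot-path traversal above can be exercised standalone. This sketch copies the lookup logic against a trimmed `DEMO_SELECTORS` map (illustrative names, not the suite's exports) to show the three outcomes: a resolved string, a parameterized entry, and an invalid path:

```typescript
// Trimmed copy of the selector map for illustration only
const DEMO_SELECTORS = {
  header: {
    languageSelect: "#language-select",
    themeOption: (theme: string) => `[data-theme-option="${theme}"]`,
  },
  skipLink: 'a[href="#main-content"]',
} as const;

export function lookupSelector(path: string): string {
  const parts = path.split(".");
  let current: any = DEMO_SELECTORS;
  for (const part of parts) {
    if (current[part] === undefined) {
      throw new Error(`Invalid selector path: ${path}`);
    }
    current = current[part];
  }
  // Parameterized entries (functions) cannot be resolved by path alone
  if (typeof current === "function") {
    throw new Error(`Selector path requires parameter: ${path}`);
  }
  return current as string;
}
```

For example, `lookupSelector("header.languageSelect")` resolves to `"#language-select"`, while `"header.themeOption"` throws because it needs a theme argument.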
/**
* Test ID attributes that should be added to the frontend for better test stability
*/
export const RECOMMENDED_TEST_IDS = {
// Header
"header-root": "header",
"header-logo": "header-logo",
"language-select": "language-select",
"theme-menu-button": "theme-menu-button",
// Hero
"hero-article": "hero-article",
"hero-headline": "hero-headline",
"hero-read-button": "hero-read-button",
// Feed
"feed-section": "feed-section",
"feed-article": (id: number) => `feed-article-${id}`,
"feed-read-button": (id: number) => `feed-read-button-${id}`,
// Modal
"summary-modal": "summary-modal",
"summary-modal-close": "summary-modal-close",
"summary-modal-headline": "summary-modal-headline",
// Footer
"footer-root": "footer",
"footer-contact": "footer-contact",
// Back to top
"back-to-top": "back-to-top",
};
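Once these `data-testid` attributes exist in the markup, deriving a locator string from them is mechanical. A hypothetical helper (`byTestId` is not part of the suite) showing the attribute-selector form that Playwright's built-in `getByTestId` also targets:

```typescript
// Turn a recommended test id into a CSS attribute selector
export function byTestId(id: string): string {
  return `[data-testid="${id}"]`;
}
```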

e2e/tests/fixtures/test.ts (new file, 347 lines)
import { test as base, expect, type Locator, type Page } from "@playwright/test";
/**
* Viewport profiles for responsive testing
*/
export const VIEWPORT_PROFILES = {
mobile: { width: 375, height: 667 },
tablet: { width: 768, height: 1024 },
desktop: { width: 1280, height: 720 },
widescreen: { width: 1920, height: 1080 },
} as const;
/**
* Theme profiles for accessibility testing
*/
export const THEME_PROFILES = {
light: "light",
dark: "dark",
contrast: "contrast",
} as const;
export type ThemeProfile = keyof typeof THEME_PROFILES;
export type ViewportProfile = keyof typeof VIEWPORT_PROFILES;
/**
* Article data shape for deterministic testing
*/
export interface TestArticle {
id: number;
headline: string;
summary: string;
source_url: string;
source_citation: string;
published_at: string;
image_url: string;
summary_image_url?: string;
tldr_points?: string[];
summary_body?: string;
language?: string;
}
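A deterministic fixture matching this shape could be produced by a small factory, so tests never depend on wall-clock time or random data. This is a hypothetical sketch; `makeTestArticle` and its default values are assumptions, with the interface mirrored locally as `ArticleFixture`:

```typescript
// Mirrors the TestArticle shape for a self-contained example
interface ArticleFixture {
  id: number;
  headline: string;
  summary: string;
  source_url: string;
  source_citation: string;
  published_at: string;
  image_url: string;
  tldr_points?: string[];
  language?: string;
}

export function makeTestArticle(
  overrides: Partial<ArticleFixture> & { id: number },
): ArticleFixture {
  return {
    headline: `Test headline ${overrides.id}`,
    summary: "Deterministic summary",
    source_url: "https://example.com/article",
    source_citation: "Example Source",
    // Fixed timestamp (never Date.now()) keeps assertions stable across runs
    published_at: "2026-01-01T00:00:00.000Z",
    image_url: "/static/images/placeholder.png",
    ...overrides,
  };
}
```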
/**
* Test fixture interface
*/
export interface ClawFortFixtures {
/**
* Navigate to application with optional article permalink
*/
gotoApp: (options?: {
articleId?: number;
policy?: "terms" | "attribution";
}) => Promise<void>;
/**
* Set theme preference on the page
*/
setTheme: (theme: ThemeProfile) => Promise<void>;
/**
* Set viewport to a named profile
*/
setViewport: (profile: ViewportProfile) => Promise<void>;
/**
* Wait for hero section to be loaded
*/
waitForHero: () => Promise<Locator>;
/**
* Wait for news feed to be loaded
*/
waitForFeed: () => Promise<Locator>;
/**
* Get hero article data from page
*/
getHeroArticle: () => Promise<TestArticle | null>;
/**
* Get feed articles data from page
*/
getFeedArticles: () => Promise<TestArticle[]>;
/**
* Open summary modal for an article
*/
openSummaryModal: (articleId: number) => Promise<Locator>;
/**
* Close summary modal
*/
closeSummaryModal: () => Promise<void>;
/**
* Check if summary modal is open
*/
isSummaryModalOpen: () => Promise<boolean>;
/**
* Get stable selector for critical controls
*/
getControl: (name: string) => Locator;
/**
* Wait for app to be fully initialized
*/
waitForAppReady: () => Promise<void>;
}
/**
* Extended test with ClawFort fixtures
*/
export const test = base.extend<ClawFortFixtures>({
gotoApp: async ({ page }, use) => {
await use(async (options = {}) => {
let url = "/";
if (options.articleId) {
url = `/?article=${options.articleId}`;
} else if (options.policy) {
url = `/?policy=${options.policy}`;
}
await page.goto(url);
await page.waitForLoadState("networkidle");
});
},
setTheme: async ({ page }, use) => {
await use(async (theme: ThemeProfile) => {
// Open theme menu
const themeButton = page.locator("#theme-menu-button");
await themeButton.click();
// Select theme option
const themeOption = page.locator(`[data-theme-option="${theme}"]`);
await themeOption.click();
// Wait for theme to apply
await page.waitForSelector(`html[data-theme="${theme}"]`, {
timeout: 5000,
});
});
},
setViewport: async ({ page }, use) => {
await use(async (profile: ViewportProfile) => {
const viewport = VIEWPORT_PROFILES[profile];
await page.setViewportSize(viewport);
});
},
waitForHero: async ({ page }, use) => {
await use(async () => {
const hero = page
.locator(
'article[itemscope][itemtype="https://schema.org/NewsArticle"]',
)
.first();
await hero.waitFor({ state: "visible", timeout: 15000 });
return hero;
});
},
waitForFeed: async ({ page }, use) => {
await use(async () => {
const feed = page.locator('section:has(h2:has-text("Recent News"))');
await feed.waitFor({ state: "visible", timeout: 15000 });
return feed;
});
},
getHeroArticle: async ({ page }, use) => {
await use(async () => {
      const hero = page
        .locator(
          'article[itemscope][itemtype="https://schema.org/NewsArticle"]',
        )
        .first();
// Check if hero exists and has content
const count = await hero.count();
if (count === 0) return null;
// Extract hero article data
const headline = await hero
.locator("h1")
.textContent()
.catch(() => null);
const summary = await hero
.locator(".hero-summary")
.textContent()
.catch(() => null);
const id = await hero
.getAttribute("id")
.then((id) => (id ? parseInt(id.replace("news-", "")) : null));
if (!headline || !id) return null;
return {
id,
headline,
summary: summary || "",
source_url: "",
source_citation: "",
published_at: new Date().toISOString(),
image_url: "",
};
});
},
getFeedArticles: async ({ page }, use) => {
await use(async () => {
const articles = await page
.locator(
'article[itemscope][itemtype="https://schema.org/NewsArticle"]',
)
.all();
const feedArticles: TestArticle[] = [];
for (const article of articles.slice(1)) {
// Skip hero
const id = await article
.getAttribute("id")
.then((id) => (id ? parseInt(id.replace("news-", "")) : null));
const headline = await article
.locator("h3")
.textContent()
.catch(() => null);
const summary = await article
.locator(".news-card-summary")
.textContent()
.catch(() => null);
if (id && headline) {
feedArticles.push({
id,
headline,
summary: summary || "",
source_url: "",
source_citation: "",
published_at: new Date().toISOString(),
image_url: "",
});
}
}
return feedArticles;
});
},
openSummaryModal: async ({ page }, use) => {
await use(async (articleId: number) => {
// Find article and click its "Read TL;DR" button
const article = page.locator(`#news-${articleId}`);
const readButton = article.locator('button:has-text("Read TL;DR")');
await readButton.click();
// Wait for modal to appear
const modal = page
.locator('[role="dialog"][aria-modal="true"]')
.filter({ hasText: "TL;DR" });
await modal.waitFor({ state: "visible", timeout: 5000 });
return modal;
});
},
closeSummaryModal: async ({ page }, use) => {
await use(async () => {
const closeButton = page.locator(
'[role="dialog"] button:has-text("Close")',
);
await closeButton.click();
// Wait for modal to disappear
const modal = page
.locator('[role="dialog"][aria-modal="true"]')
.filter({ hasText: "TL;DR" });
await modal.waitFor({ state: "hidden", timeout: 5000 });
});
},
isSummaryModalOpen: async ({ page }, use) => {
await use(async () => {
const modal = page
.locator('[role="dialog"][aria-modal="true"]')
.filter({ hasText: "TL;DR" });
return await modal.isVisible().catch(() => false);
});
},
getControl: async ({ page }, use) => {
await use((name: string) => {
// Stable selector strategy: prefer test-id, then role, then accessible name
const selectors: Record<string, string> = {
"theme-menu": "#theme-menu-button",
"language-select": "#language-select",
"back-to-top": '[aria-label="Back to top"]',
"share-x": '[aria-label="Share on X"]',
"share-whatsapp": '[aria-label="Share on WhatsApp"]',
"share-linkedin": '[aria-label="Share on LinkedIn"]',
"share-copy": '[aria-label="Copy article link"]',
"modal-close": '[role="dialog"] button:has-text("Close")',
"terms-link": 'button:has-text("Terms of Use")',
"attribution-link": 'button:has-text("Attribution")',
"hero-read-more": 'article:first-of-type button:has-text("Read TL;DR")',
"skip-link": 'a[href="#main-content"]',
};
const selector = selectors[name];
if (!selector) {
throw new Error(
`Unknown control: ${name}. Available controls: ${Object.keys(selectors).join(", ")}`,
);
}
return page.locator(selector);
});
},
waitForAppReady: async ({ page }, use) => {
await use(async () => {
// Wait for page to be fully loaded
await page.waitForLoadState("networkidle");
// Wait for Alpine.js to initialize
await page.waitForFunction(
() => {
return (
document.querySelector("html")?.hasAttribute("data-theme") ||
document.readyState === "complete"
);
},
{ timeout: 10000 },
);
// Wait for either hero or feed to be visible
await Promise.race([
page.waitForSelector("article[itemscope]", { timeout: 15000 }),
page.waitForSelector('.text-6xl:has-text("🤖")', { timeout: 15000 }), // No news state
]);
});
},
});
export { expect };

e2e/tests/fixtures/themes.ts (new file, 107 lines)
/**
 * Theme profile helpers for testing across light/dark/contrast themes
 */
import { test } from "@playwright/test";

export type Theme = "light" | "dark" | "contrast";
export const THEMES: Theme[] = ["light", "dark", "contrast"];
/**
* Theme-specific CSS custom property values
*/
export const THEME_VALUES = {
light: {
"--cf-bg": "#f8fafc",
"--cf-text": "#0f172a",
"--cf-text-strong": "#0f172a",
"--cf-text-muted": "#475569",
"--cf-link": "#1d4ed8",
"--cf-link-hover": "#1e40af",
"--cf-link-visited": "#6d28d9",
"--cf-card-bg": "#ffffff",
},
dark: {
"--cf-bg": "#0f172a",
"--cf-text": "#f1f5f9",
"--cf-text-strong": "#e2e8f0",
"--cf-text-muted": "#94a3b8",
"--cf-link": "#93c5fd",
"--cf-link-hover": "#bfdbfe",
"--cf-link-visited": "#c4b5fd",
"--cf-card-bg": "#1e293b",
},
contrast: {
"--cf-bg": "#000000",
"--cf-text": "#ffffff",
"--cf-text-strong": "#ffffff",
"--cf-text-muted": "#f8fafc",
"--cf-link": "#ffff80",
"--cf-link-hover": "#ffff00",
"--cf-link-visited": "#ffb3ff",
"--cf-card-bg": "#000000",
},
} as const;
/**
* WCAG 2.2 AA contrast ratio requirements
*/
export const WCAG_CONTRAST = {
normalText: 4.5,
largeText: 3,
uiComponents: 3,
} as const;
/**
* Set theme on the page
*/
export async function setTheme(page: any, theme: Theme): Promise<void> {
// Open theme menu
await page.click("#theme-menu-button");
// Click theme option
await page.click(`[data-theme-option="${theme}"]`);
// Wait for theme to apply
await page.waitForSelector(`html[data-theme="${theme}"]`, { timeout: 5000 });
}
/**
* Get computed CSS custom property value
*/
export async function getCssVariable(
page: any,
variable: string,
): Promise<string> {
return page.evaluate((varName: string) => {
return getComputedStyle(document.documentElement)
.getPropertyValue(varName)
.trim();
}, variable);
}
/**
* Assert that a theme is active
*/
export async function assertThemeActive(
page: any,
theme: Theme,
): Promise<void> {
const themeAttr = await page.getAttribute("html", "data-theme");
if (themeAttr !== theme) {
throw new Error(`Expected theme "${theme}" but got "${themeAttr}"`);
}
}
/**
* Run a test across all themes
*/
export function testAllThemes(
name: string,
testFn: (theme: Theme) => Promise<void> | void,
): void {
for (const theme of THEMES) {
test(`${name} - ${theme} theme`, async () => {
await testFn(theme);
});
}
}

e2e/tests/fixtures/viewports.ts (new file, 151 lines)
/**
 * Viewport profile helpers for responsive testing
 */
import { test } from "@playwright/test";

export type ViewportSize = "mobile" | "tablet" | "desktop" | "widescreen";
export interface ViewportDimensions {
width: number;
height: number;
}
export const VIEWPORTS: Record<ViewportSize, ViewportDimensions> = {
mobile: { width: 375, height: 667 }, // iPhone SE / similar
tablet: { width: 768, height: 1024 }, // iPad / similar
desktop: { width: 1280, height: 720 }, // Standard desktop
widescreen: { width: 1920, height: 1080 }, // Large desktop
} as const;
export const VIEWPORT_SIZES: ViewportSize[] = [
"mobile",
"tablet",
"desktop",
"widescreen",
];
/**
* Breakpoint definitions matching CSS media queries
*/
export const BREAKPOINTS = {
sm: 640, // Small devices
md: 768, // Medium devices (tablet)
lg: 1024, // Large devices (desktop)
xl: 1280, // Extra large
"2xl": 1536, // 2X large
} as const;
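One way to use this table in assertions is to classify a width by the largest satisfied min-width. `activeBreakpoint` below is a hypothetical helper, not part of the suite, shown with a local copy of the breakpoint table:

```typescript
// Local copy of the breakpoint table, ascending by min-width
const DEMO_BREAKPOINTS = { sm: 640, md: 768, lg: 1024, xl: 1280, "2xl": 1536 } as const;

export function activeBreakpoint(width: number): string {
  // Walk the ascending table; the last satisfied min-width wins
  let active = "base";
  for (const [name, min] of Object.entries(DEMO_BREAKPOINTS)) {
    if (width >= min) active = name;
  }
  return active;
}
```

So a 375px mobile viewport classifies as `"base"`, 768px as `"md"`, and 1920px as `"2xl"`.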
/**
* Set viewport size
*/
export async function setViewport(
page: any,
size: ViewportSize,
): Promise<void> {
const dimensions = VIEWPORTS[size];
await page.setViewportSize(dimensions);
}
/**
* Get current viewport size
*/
export async function getViewport(page: any): Promise<ViewportDimensions> {
return page.evaluate(() => ({
width: window.innerWidth,
height: window.innerHeight,
}));
}
/**
* Assert viewport is at expected size
*/
export async function assertViewport(
page: any,
size: ViewportSize,
): Promise<void> {
const expected = VIEWPORTS[size];
const actual = await getViewport(page);
if (actual.width !== expected.width || actual.height !== expected.height) {
throw new Error(
`Expected viewport ${size} (${expected.width}x${expected.height}) ` +
`but got ${actual.width}x${actual.height}`,
);
}
}
/**
* Check if element has horizontal overflow
*/
export async function hasHorizontalOverflow(
page: any,
selector: string,
): Promise<boolean> {
return page.evaluate((sel: string) => {
const element = document.querySelector(sel);
if (!element) return false;
return element.scrollWidth > element.clientWidth;
}, selector);
}
/**
* Check if element is clipped (overflow hidden with content exceeding bounds)
*/
export async function isClipped(page: any, selector: string): Promise<boolean> {
return page.evaluate((sel: string) => {
const element = document.querySelector(sel);
if (!element) return false;
const style = window.getComputedStyle(element);
const rect = element.getBoundingClientRect();
// Check if element has overflow hidden and children exceed bounds
const children = element.children;
for (const child of children) {
const childRect = child.getBoundingClientRect();
if (childRect.right > rect.right || childRect.bottom > rect.bottom) {
return (
style.overflow === "hidden" ||
style.overflowX === "hidden" ||
style.overflowY === "hidden"
);
}
}
return false;
}, selector);
}
/**
* Run a test across all viewport sizes
*/
export function testAllViewports(
name: string,
testFn: (size: ViewportSize) => Promise<void> | void,
): void {
for (const size of VIEWPORT_SIZES) {
test(`${name} - ${size}`, async () => {
await testFn(size);
});
}
}
/**
* Check sticky element position
*/
export async function getStickyPosition(
page: any,
selector: string,
): Promise<{ top: number; isSticky: boolean }> {
return page.evaluate((sel: string) => {
const element = document.querySelector(sel);
if (!element) return { top: 0, isSticky: false };
const style = window.getComputedStyle(element);
const rect = element.getBoundingClientRect();
return {
top: rect.top,
isSticky: style.position === "sticky" || style.position === "fixed",
};
}, selector);
}


@@ -24,7 +24,7 @@
<link rel="icon" type="image/svg+xml" href="/static/images/favicon-ai.svg">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700;800;900&family=Noto+Sans+Tamil:wght@400;500;600;700&family=Noto+Sans+Malayalam:wght@400;500;600;700&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700;800;900&family=Baloo+2:wght@400;500;600;700&family=Baloo+Chettan+2:wght@400;500;600;700&family=Noto+Sans+Tamil:wght@400;500;600;700&family=Noto+Sans+Malayalam:wght@400;500;600;700&display=swap" rel="stylesheet">
<script src="https://cdn.tailwindcss.com"></script>
<script>
tailwind.config = {
@@ -38,7 +38,7 @@
}
},
fontFamily: {
sans: ['Inter', 'Noto Sans Tamil', 'Noto Sans Malayalam', 'system-ui', 'sans-serif']
sans: ['Inter', 'system-ui', 'sans-serif']
}
}
}
@@ -115,7 +115,7 @@
--cf-select-text: #ffffff;
--cf-select-border: rgba(255, 255, 255, 0.55);
}
.cf-body { background: var(--cf-bg); color: var(--cf-text); padding-bottom: 78px; }
.cf-body { background: var(--cf-bg); color: var(--cf-text); padding-bottom: 60px; }
.cf-header { background: var(--cf-header-bg); }
.cf-header-top { box-shadow: none; border-color: rgba(148, 163, 184, 0.12); }
.cf-header-scrolled {
@@ -248,6 +248,17 @@
html[data-lang='ml'] .hero-summary {
text-shadow: 0 2px 10px rgba(0, 0, 0, 0.7);
}
/* Apply Baloo fonts specifically to Indic language content */
html[data-lang='ta'] body {
font-family: 'Baloo 2', 'Noto Sans Tamil', 'Inter', system-ui, sans-serif;
}
html[data-lang='ml'] body {
font-family: 'Baloo Chettan 2', 'Noto Sans Malayalam', 'Inter', system-ui, sans-serif;
}
/* Keep English text using Inter by default */
html[data-lang='en'] body {
font-family: 'Inter', system-ui, sans-serif;
}
.share-icon-btn {
width: 34px;
height: 34px;
@@ -291,8 +302,14 @@
}
.news-card:hover .news-card-title { color: var(--cf-link-hover); }
.site-footer {
position: fixed;
bottom: 0;
left: 0;
right: 0;
background: color-mix(in srgb, var(--cf-bg) 92%, transparent);
backdrop-filter: blur(10px);
height: auto;
min-height: 44px;
}
.back-to-top-island {
position: fixed;
@@ -341,18 +358,40 @@
}
@media (max-width: 640px) {
.theme-btn { width: 26px; height: 26px; }
.cf-body { padding-bottom: 84px; }
.cf-body { padding-bottom: 60px; }
/* Mobile typography adjustments for Indic languages to prevent overflow */
html[data-lang='ta'] .hero-title,
html[data-lang='ml'] .hero-title {
font-size: 1.9rem;
line-height: 1.26;
font-size: 1.5rem;
line-height: 1.4;
letter-spacing: 0.01em;
margin-bottom: 0.5rem;
}
html[data-lang='ta'] .hero-summary,
html[data-lang='ml'] .hero-summary {
font-size: 1.08rem;
line-height: 1.86;
letter-spacing: 0.015em;
font-size: 0.9rem;
line-height: 1.5;
letter-spacing: 0.01em;
margin-bottom: 0.75rem;
}
/* Ensure hero pills are visible on mobile for Indic languages */
html[data-lang='ta'] .hero-latest-pill,
html[data-lang='ml'] .hero-latest-pill,
html[data-lang='ta'] .hero-time-pill,
html[data-lang='ml'] .hero-time-pill {
display: inline-flex;
font-size: 0.7rem;
padding: 2px 6px;
}
/* Reduce padding in hero content area for Indic languages */
html[data-lang='ta'] .hero-content,
html[data-lang='ml'] .hero-content {
padding: 1rem;
}
/* Increase hero image height on mobile for Indic languages to fit content */
html[data-lang='ta'] .hero-image,
html[data-lang='ml'] .hero-image {
height: 380px;
}
.back-to-top-island {
right: 10px;
@@ -422,7 +461,7 @@
<div x-show="heroImageLoading" class="absolute inset-0 skeleton"></div>
<img :src="preferredImage(item)" :alt="item.headline"
width="1200" height="480" decoding="async" fetchpriority="high"
class="w-full h-[300px] sm:h-[400px] lg:h-[480px] object-cover transition-transform duration-700 group-hover:scale-105"
class="hero-image w-full h-[300px] sm:h-[400px] lg:h-[480px] object-cover transition-transform duration-700 group-hover:scale-105"
@load="heroImageLoading = false"
@error="$el.src='/static/images/placeholder.png'; heroImageLoading = false">
<div class="absolute inset-0 hero-overlay"></div>
@@ -649,10 +688,22 @@
<svg viewBox="0 0 24 24" class="w-4 h-4" aria-hidden="true"><path fill="currentColor" d="M12 5 5 12h4v7h6v-7h4z"/></svg>
</button>
<footer x-data="footerEnhancements()" x-init="init()" class="site-footer sticky bottom-0 z-40 border-t border-white/10 py-3 text-center text-xs text-gray-500">
<div class="max-w-7xl mx-auto px-4 space-y-1.5">
<p>Powered by <a href="https://www.perplexity.ai" target="_blank" rel="noopener" class="powered-link transition-colors">Perplexity</a></p>
<p class="space-x-3">
<!-- Footer wrapper with Alpine.js scope for tooltip -->
<div x-data="footerEnhancements()" x-init="init()">
<footer class="site-footer z-40 border-t border-white/10 py-2 text-xs text-gray-500">
<div class="max-w-7xl mx-auto px-4 flex items-center justify-between">
<!-- Left: Powered by -->
<div class="flex-shrink-0">
<span>Powered by <a href="https://www.perplexity.ai" target="_blank" rel="noopener" class="powered-link transition-colors">Perplexity</a></span>
</div>
<!-- Center: Copyright -->
<div class="flex-shrink-0 text-center">
<span>&copy; <span x-data x-text="new Date().getFullYear()"></span> ClawFort. All rights reserved.</span>
</div>
<!-- Right: Links -->
<div class="flex-shrink-0 flex items-center gap-3">
<button type="button" class="footer-link" @click="window.dispatchEvent(new CustomEvent('open-policy-modal', { detail: { type: 'terms' } }))">Terms of Use</button>
<button type="button" class="footer-link" @click="window.dispatchEvent(new CustomEvent('open-policy-modal', { detail: { type: 'attribution' } }))">Attribution</button>
<a x-show="githubUrl" :href="githubUrl" target="_blank" rel="noopener" class="footer-link">GitHub</a>
@@ -661,11 +712,12 @@
@mouseenter="showHint($event)" @mousemove="moveHint($event)" @mouseleave="hideHint()"
@focus="showHint($event)" @blur="hideHint()"
x-text="contactEmail"></a>
</p>
<p>&copy; <span x-data x-text="new Date().getFullYear()"></span> ClawFort. All rights reserved.</p>
</div>
</div>
</footer>
<!-- Contact tooltip moved outside footer to prevent clipping -->
<div id="contact-hint" x-show="hintVisible" x-cloak class="contact-hint" :style="`left:${hintX}px; top:${hintY}px`" x-text="hintText"></div>
</footer>
</div>
<div id="cookie-consent-banner" class="hidden fixed bottom-4 left-1/2 -translate-x-1/2 w-[95%] max-w-3xl z-50 rounded-lg border border-white/15 bg-slate-900/95 backdrop-blur p-4 transition-opacity duration-700 opacity-100">
<p class="text-sm text-slate-200 mb-3">


@@ -1,42 +1,42 @@
## 1. Playwright Foundation and Harness
- [ ] 1.1 Add Playwright project setup (config, scripts, browser install, test directory layout).
- [ ] 1.2 Implement shared fixtures for seeded app state, authless startup, and deterministic article data assumptions.
- [ ] 1.3 Define viewport profiles (mobile/tablet/desktop) and theme profile helpers (light/dark/contrast).
- [ ] 1.4 Add stable selector strategy for critical controls (role/test-id fallback rules) to reduce locator fragility.
- [x] 1.1 Add Playwright project setup (config, scripts, browser install, test directory layout).
- [x] 1.2 Implement shared fixtures for seeded app state, authless startup, and deterministic article data assumptions.
- [x] 1.3 Define viewport profiles (mobile/tablet/desktop) and theme profile helpers (light/dark/contrast).
- [x] 1.4 Add stable selector strategy for critical controls (role/test-id fallback rules) to reduce locator fragility.
## 2. Capability Coverage Matrix and Strategy
- [ ] 2.1 Create capability-to-test mapping document covering all current OpenSpec UI-facing capabilities.
- [ ] 2.2 Define scenario taxonomy (journey, accessibility-state, responsive, modal, microinteraction, deep-link).
- [ ] 2.3 Define smoke vs full regression profiles and execution criteria per CI stage.
- [x] 2.1 Create capability-to-test mapping document covering all current OpenSpec UI-facing capabilities.
- [x] 2.2 Define scenario taxonomy (journey, accessibility-state, responsive, modal, microinteraction, deep-link).
- [x] 2.3 Define smoke vs full regression profiles and execution criteria per CI stage.
## 3. Core Journey and Modal Flows
- [ ] 3.1 Implement Playwright scenarios for hero/feed browsing and summary modal open/close across entry paths.
- [ ] 3.2 Add deep-link permalink tests validating valid, invalid, and hero-origin modal flows.
- [ ] 3.3 Add source CTA/share/copy interaction tests in modal and verify non-breaking navigation behavior.
- [x] 3.1 Implement Playwright scenarios for hero/feed browsing and summary modal open/close across entry paths.
- [x] 3.2 Add deep-link permalink tests validating valid, invalid, and hero-origin modal flows.
- [x] 3.3 Add source CTA/share/copy interaction tests in modal and verify non-breaking navigation behavior.
## 4. Accessibility and Interaction State Regression
- [ ] 4.1 Implement keyboard-navigation and focus-visible assertions for primary interactive controls.
- [ ] 4.2 Implement color-contrast/state assertions for text/link/control states across light/dark/contrast themes.
- [ ] 4.3 Add icon-only control accessible-name assertions (share, copy, back-to-top, theme controls, modal actions).
- [ ] 4.4 Add policy modal accessibility tests for open, escape-close, focus-return, and focus containment behavior.
- [x] 4.1 Implement keyboard-navigation and focus-visible assertions for primary interactive controls.
- [x] 4.2 Implement color-contrast/state assertions for text/link/control states across light/dark/contrast themes.
- [x] 4.3 Add icon-only control accessible-name assertions (share, copy, back-to-top, theme controls, modal actions).
- [x] 4.4 Add policy modal accessibility tests for open, escape-close, focus-return, and focus containment behavior.
## 5. Responsive and Footer Microinteractions
- [ ] 5.1 Implement responsive regression tests for mobile/tablet/desktop ensuring no overflow/clipping on key surfaces.
- [ ] 5.2 Add sticky header/footer and floating back-to-top behavior checks across breakpoints.
- [ ] 5.3 Add footer contact-email tooltip tests for mouse hover/move/leave and keyboard focus/blur interactions.
- [ ] 5.4 Add env-driven footer link rendering tests for present/absent GitHub/contact config combinations.
- [x] 5.1 Implement responsive regression tests for mobile/tablet/desktop ensuring no overflow/clipping on key surfaces.
- [x] 5.2 Add sticky header/footer and floating back-to-top behavior checks across breakpoints.
- [x] 5.3 Add footer contact-email tooltip tests for mouse hover/move/leave and keyboard focus/blur interactions.
- [x] 5.4 Add env-driven footer link rendering tests for present/absent GitHub/contact config combinations.
## 6. Quality Gate Integration and Reporting
- [ ] 6.1 Integrate Playwright smoke suite into pull-request quality gate with fail-on-regression policy.
- [ ] 6.2 Integrate full Playwright profile into main/nightly pipeline stage with artifact retention.
- [ ] 6.3 Enable trace/screenshot/video capture on failure and expose artifact links in CI logs.
- [ ] 6.4 Document triage workflow and ownership for UI/UX regression failures.
- [x] 6.1 Integrate Playwright smoke suite into pull-request quality gate with fail-on-regression policy.
- [x] 6.2 Integrate full Playwright profile into main/nightly pipeline stage with artifact retention.
- [x] 6.3 Enable trace/screenshot/video capture on failure and expose artifact links in CI logs.
- [x] 6.4 Document triage workflow and ownership for UI/UX regression failures.
## 7. Validation and Readiness


@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-13


@@ -0,0 +1,64 @@
## Context
The current implementation uses Noto Sans Tamil and Noto Sans Malayalam fonts from Google Fonts. While these fonts provide good Unicode coverage, they have rough edges that users find visually unappealing. Additionally, on mobile viewports (≤640px), the hero block typography for these Indic languages is too large, causing the content to overflow and become partially invisible.
The footer contact email tooltip is also not functioning as specified in the share-and-contact-microinteractions spec, which requires it to display randomized helper messages on hover and keyboard focus.
## Goals / Non-Goals
**Goals:**
- Improve hero block mobile layout for Tamil and Malayalam content
- Implement smoother, more visually pleasing Indic fonts
- Fix footer contact email tooltip to display randomized helper messages
- Ensure all changes maintain accessibility and performance standards
**Non-Goals:**
- Changing desktop layout (only mobile adjustments needed)
- Modifying English or other language typography
- Adding new languages beyond Tamil and Malayalam
- Changing the tooltip message pool content (use existing safe messages)
## Decisions
### Decision 1: Reduce font sizes on mobile rather than increasing hero height
- **Choice:** Reduce Tamil/Malayalam font sizes on mobile (≤640px) from 1.9rem/1.08rem to 1.6rem/0.95rem
- **Why:** Increasing hero height would push feed content further down, reducing above-the-fold content visibility. Smaller fonts maintain content density while ensuring readability.
- **Alternative considered:** Increasing hero block height to 350px on mobile. Rejected because it reduces the amount of feed content visible without scrolling.
### Decision 2: Replace Noto Sans with Baloo fonts for Indic languages
- **Choice:** Use Baloo Thambi 2 for Tamil and Baloo Chettan 2 for Malayalam (the Tamil and Malayalam members of the Baloo family on Google Fonts)
- **Why:** Baloo fonts are specifically designed for Indian languages with rounded, smooth letterforms that are visually pleasing. They maintain excellent readability at various sizes and have good Unicode support.
- **Alternative considered:** Using system fonts or keeping Noto Sans. Rejected because Noto Sans has rough edges, and system fonts vary too much across devices.
### Decision 3: Use font-display: swap for async loading
- **Choice:** Load Indic fonts with font-display: swap strategy
- **Why:** Ensures text remains visible during font loading, preventing layout shift and improving perceived performance.
- **Alternative considered:** font-display: optional. Rejected because it might cause fonts to never load on slow connections.
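With the Google Fonts CSS API, Decision 3 reduces to the `display=swap` query parameter on the stylesheet URL. A minimal sketch — the family names shown are the Tamil/Malayalam members of the Baloo family on Google Fonts and should be verified against the fonts actually chosen:

```html
<!-- Sketch only: family names and weights are assumptions from Decision 2 -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link rel="stylesheet"
      href="https://fonts.googleapis.com/css2?family=Baloo+Thambi+2:wght@400;700&family=Baloo+Chettan+2:wght@400;700&display=swap">
```

The `display=swap` parameter makes Google Fonts emit `font-display: swap` in every generated `@font-face` rule, so the fallback font shows immediately and swaps once the webfont arrives.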
### Decision 4: Fix tooltip via Alpine.js state management
- **Choice:** Ensure Alpine.js properly manages tooltip visibility state on contact email hover and focus
- **Why:** The tooltip is already implemented in the footer but may not be triggering correctly due to event handling issues.
- **Alternative considered:** Rewriting tooltip in vanilla JavaScript. Rejected because Alpine.js is already used throughout the application and provides cleaner reactive state management.
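The randomized-message part of this decision can be isolated into a tiny helper that the Alpine.js component calls on each hover/focus. The names below (`SAFE_MESSAGES`, `pickTooltipMessage`) are illustrative, not the app's actual identifiers, and the message strings are placeholders for the existing safe pool:

```javascript
// Hypothetical helper for the tooltip's randomized message pool.
// SAFE_MESSAGES stands in for the app's existing predefined safe pool.
const SAFE_MESSAGES = [
  'Questions? Drop us a line!',
  'We read every message.',
  'Feedback is always welcome.',
];

function pickTooltipMessage(pool = SAFE_MESSAGES) {
  // Re-rolled on every hover/focus so repeat interactions vary.
  return pool[Math.floor(Math.random() * pool.length)];
}
```

Keeping selection in a pure function also makes the randomization unit-testable independently of Alpine's event handling.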
## Risks / Trade-offs
- **[Risk] Font loading may cause layout shift** → **Mitigation:** Use font-display: swap and ensure fallback fonts have similar metrics to minimize shift.
- **[Risk] Smaller fonts may reduce readability for users with vision impairments** → **Mitigation:** Ensure minimum 14px font size and maintain WCAG AA contrast ratios. Users can zoom if needed.
- **[Risk] Baloo fonts may not support all Tamil/Malayalam characters** → **Mitigation:** Test with comprehensive character sets and maintain Noto Sans as fallback in font stack.
- **[Risk] Tooltip fix may affect other footer interactions** → **Mitigation:** Test all footer link interactions after fix and ensure focus management remains correct.
## Migration Plan
1. Update Google Fonts link to include Baloo 2 and Baloo Chettan
2. Update CSS font-family declarations with new fallback chain
3. Add mobile-specific CSS rules for Tamil/Malayalam font sizes
4. Adjust hero image height on mobile
5. Debug and fix footer tooltip Alpine.js state management
6. Test across mobile devices and browsers
7. Monitor performance metrics for font loading impact
## Open Questions
- Should we preload the Indic fonts to improve loading performance?
- Do we need to adjust line-height along with font-size for optimal readability?
- Are there specific Tamil/Malayalam text samples we should test with to ensure character coverage?

View File

@@ -0,0 +1,27 @@
## Why
Indic language content (Tamil and Malayalam) is not rendering optimally on mobile devices and small viewports. The hero block typography is too large, causing layout issues where the top of the block becomes invisible. Additionally, the current Indic fonts have rough edges that are not visually pleasing. The footer contact email tooltip is also not functioning as specified.
## What Changes
- Adjust hero block responsive behavior for mobile/small viewports when displaying Tamil and Malayalam content
- Reduce font sizes or increase hero block height for Indic languages on mobile devices
- Research and implement smoother, more readable Indic font alternatives for Tamil and Malayalam
- Fix footer contact email tooltip to display randomized helper messages on hover and keyboard focus
- Ensure tooltip behavior matches share-and-contact-microinteractions specification
## Capabilities
### New Capabilities
- `indic-language-mobile-optimization`: Responsive hero block adjustments for Tamil and Malayalam on mobile viewports
- `indic-font-typography-improvement`: Smoother, more readable font selection for Indic languages
### Modified Capabilities
- `share-and-contact-microinteractions`: Fix contact email tooltip to display randomized helper messages as specified
## Impact
- Affected code: frontend CSS (hero block responsive styles), font loading/configuration
- Affected systems: mobile user experience for Tamil and Malayalam speakers
- Dependencies: Google Fonts or alternative Indic font sources
- Operational impact: Improved readability and user experience for Indic language users on mobile devices

View File

@@ -0,0 +1,32 @@
## ADDED Requirements
### Requirement: System uses smooth, readable Indic fonts
The system SHALL use alternative Indic fonts that provide better readability and visual smoothness compared to the current Noto Sans Tamil and Noto Sans Malayalam fonts.
#### Scenario: Tamil font is smooth and readable
- **WHEN** Tamil content is displayed
- **THEN** the font SHALL render with smooth edges without rough or pixelated appearance
- **AND** the font SHALL maintain readability at various sizes (14px to 32px)
- **AND** the font SHALL support all Tamil Unicode characters including conjuncts and ligatures
#### Scenario: Malayalam font is smooth and readable
- **WHEN** Malayalam content is displayed
- **THEN** the font SHALL render with smooth edges without rough or pixelated appearance
- **AND** the font SHALL maintain readability at various sizes (14px to 32px)
- **AND** the font SHALL support all Malayalam Unicode characters including conjuncts and ligatures
#### Scenario: Font loading is optimized
- **WHEN** Indic fonts are loaded
- **THEN** the fonts SHALL load asynchronously to prevent layout shift
- **AND** a system font fallback SHALL display immediately while Indic fonts load
- **AND** the font-display strategy SHALL be "swap" to ensure text remains visible
#### Scenario: Font weights are available
- **WHEN** Indic content requires different font weights
- **THEN** at minimum regular (400) and bold (700) weights SHALL be available
- **AND** the bold weight SHALL be visually distinguishable from regular
#### Scenario: Font fallback chain is defined
- **WHEN** Indic fonts fail to load or are not supported
- **THEN** the system SHALL fall back through: preferred Indic font → Noto Sans [language] → system-ui → sans-serif
- **AND** fallback fonts SHALL maintain similar character proportions to minimize layout shift
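The fallback chain above can be sketched as language-scoped CSS. The Baloo family names are assumptions pending the final font choice, and `:lang()` selectors assume the markup carries `lang` attributes on the relevant elements:

```css
/* Sketch: preferred Baloo font first, then Noto Sans, then system fonts.
   Family names are assumptions, not the project's confirmed choices. */
:lang(ta) {
  font-family: "Baloo Thambi 2", "Noto Sans Tamil", system-ui, sans-serif;
}
:lang(ml) {
  font-family: "Baloo Chettan 2", "Noto Sans Malayalam", system-ui, sans-serif;
}
```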

View File

@@ -0,0 +1,30 @@
## ADDED Requirements
### Requirement: Hero block adapts to Indic languages on mobile viewports
The system SHALL adjust hero block typography and layout when displaying Tamil or Malayalam content on mobile devices to ensure full content visibility.
#### Scenario: Tamil content on mobile viewport
- **WHEN** the hero block displays Tamil content on a mobile viewport (width ≤ 640px)
- **THEN** the hero block height SHALL increase to accommodate the larger text
- **AND** the hero title font size SHALL be reduced from 1.9rem to 1.6rem
- **AND** the hero summary font size SHALL be reduced from 1.08rem to 0.95rem
- **AND** the entire hero content including headline and summary SHALL remain visible without clipping
#### Scenario: Malayalam content on mobile viewport
- **WHEN** the hero block displays Malayalam content on a mobile viewport (width ≤ 640px)
- **THEN** the hero block height SHALL increase to accommodate the larger text
- **AND** the hero title font size SHALL be reduced from 1.9rem to 1.6rem
- **AND** the hero summary font size SHALL be reduced from 1.08rem to 0.95rem
- **AND** the entire hero content including headline and summary SHALL remain visible without clipping
#### Scenario: Hero image scales appropriately on mobile
- **WHEN** the hero block is displayed on a mobile viewport (width ≤ 640px)
- **THEN** the hero image height SHALL be reduced from 300px to 220px
- **AND** the image SHALL maintain its aspect ratio
- **AND** the image SHALL NOT overflow its container
#### Scenario: Content remains readable after adjustments
- **WHEN** typography adjustments are applied for mobile Indic content
- **THEN** text SHALL remain legible with adequate line height (minimum 1.5)
- **AND** text SHALL maintain sufficient contrast against background
- **AND** touch targets SHALL remain at least 44px in height
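The scenarios above can be sketched as a single mobile media query. The `data-lang` attribute is taken from the task list; the class names (`.hero-title`, `.hero-summary`, `.hero-image`) are illustrative assumptions about the markup:

```css
/* Sketch of the mobile Indic adjustments; class names are assumptions. */
@media (max-width: 640px) {
  [data-lang="ta"] .hero-title,
  [data-lang="ml"] .hero-title {
    font-size: 1.6rem;      /* down from 1.9rem */
    line-height: 1.5;       /* minimum readable line height */
  }
  [data-lang="ta"] .hero-summary,
  [data-lang="ml"] .hero-summary {
    font-size: 0.95rem;     /* down from 1.08rem */
    line-height: 1.5;
  }
  .hero-image {
    height: 220px;          /* down from 300px */
    max-width: 100%;        /* never overflow the container */
    object-fit: cover;      /* preserve aspect ratio while filling */
  }
}
```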

View File

@@ -0,0 +1,14 @@
## MODIFIED Requirements
### Requirement: Contact affordance provides randomized safe microcopy
The contact link SHALL present randomized, policy-safe helper messages to encourage feedback, including keyboard-triggered visibility for accessibility.
#### Scenario: Randomized helper tooltip
- **WHEN** a user hovers over or moves near the contact affordance
- **THEN** a tooltip-style helper message is shown from a predefined safe message pool
- **AND** message language avoids profanity/offensive/racist/sexist/misogynistic content
#### Scenario: Keyboard-triggered helper tooltip
- **WHEN** a keyboard user focuses the contact email affordance
- **THEN** helper tooltip content becomes visible and readable
- **AND** tooltip dismisses cleanly on blur without trapping focus
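Both scenarios above can be sketched as a self-contained Alpine.js component. Element names, classes, and the mail address are illustrative, not the app's actual markup:

```html
<!-- Sketch only: identifiers and address are illustrative assumptions. -->
<div x-data="{
       open: false,
       pool: ['Say hello!', 'We read every message.', 'Feedback welcome!'],
       msg: ''
     }">
  <a href="mailto:hello@example.com"
     @mouseenter="msg = pool[Math.floor(Math.random() * pool.length)]; open = true"
     @mouseleave="open = false"
     @focus="msg = pool[Math.floor(Math.random() * pool.length)]; open = true"
     @blur="open = false"
     aria-describedby="contact-tip">Contact</a>
  <span id="contact-tip" role="tooltip" x-show="open" x-text="msg"></span>
</div>
```

Binding both `@focus`/`@blur` and `@mouseenter`/`@mouseleave` covers the keyboard and mouse scenarios with the same state, and dismissing via a boolean flag on `blur` avoids any focus trapping.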

View File

@@ -0,0 +1,47 @@
## 1. Font Implementation
- [x] 1.1 Update Google Fonts link to include Baloo Thambi 2 and Baloo Chettan 2 fonts
- [x] 1.2 Update font-family CSS declarations with new Indic font stack
- [x] 1.3 Add font-display: swap to prevent layout shift during font loading
- [x] 1.4 Define font fallback chain: Baloo → Noto Sans → system-ui → sans-serif
## 2. Mobile Hero Block Typography
- [x] 2.1 Add CSS media query for mobile viewports (≤640px) with data-lang="ta" or data-lang="ml"
- [x] 2.2 Reduce hero title font size from 1.9rem to 1.6rem for Tamil/Malayalam
- [x] 2.3 Reduce hero summary font size from 1.08rem to 0.95rem for Tamil/Malayalam
- [x] 2.4 Ensure minimum line-height of 1.5 for readability
- [x] 2.5 Verify touch targets remain at least 44px in height
## 3. Mobile Hero Block Layout
- [x] 3.1 Reduce hero image height from 300px to 220px on mobile
- [x] 3.2 Ensure image maintains aspect ratio and doesn't overflow container
- [x] 3.3 Verify entire hero content (headline + summary) is visible without clipping
- [ ] 3.4 Test hero block on actual mobile devices (iOS Safari, Android Chrome)
## 4. Footer Tooltip Fix
- [x] 4.1 Debug Alpine.js tooltip state management for contact email link
- [x] 4.2 Ensure tooltip displays on mouse hover event
- [x] 4.3 Ensure tooltip displays on keyboard focus event
- [x] 4.4 Ensure tooltip dismisses cleanly on mouse leave
- [x] 4.5 Ensure tooltip dismisses cleanly on keyboard blur without trapping focus
- [x] 4.6 Verify tooltip content displays randomized safe messages from predefined pool
- [x] 4.7 Test tooltip accessibility with screen readers
## 5. Testing and Validation
- [x] 5.1 Test Tamil content rendering with new fonts on mobile
- [x] 5.2 Test Malayalam content rendering with new fonts on mobile
- [x] 5.3 Verify WCAG AA contrast ratios are maintained after font size changes
- [x] 5.4 Test font loading performance (LCP, CLS metrics)
- [x] 5.5 Verify all Tamil/Malayalam Unicode characters render correctly
- [x] 5.6 Test footer tooltip across browsers (Chrome, Firefox, Safari, Edge)
- [x] 5.7 Test responsive behavior from 320px to 1920px viewports
## 6. Documentation
- [x] 6.1 Update font documentation in README or style guide
- [x] 6.2 Document mobile typography scale for Indic languages
- [x] 6.3 Add comments in CSS explaining mobile font size adjustments