strategy change to TDD
.slim/cartography.json (new file, 32 lines)
@@ -0,0 +1,32 @@
{
  "metadata": {
    "version": "1.0.0",
    "last_run": "2026-02-17T15:00:03.799320Z",
    "root": "C:\\dev\\kimi-headroom",
    "include_patterns": [
      "**/*.ts",
      "**/*.json",
      "**/*.yaml",
      "**/*.yml",
      "**/*.js"
    ],
    "exclude_patterns": [
      "**/*.test.ts",
      "**/dist/**",
      "**/node_modules/**",
      "**/*.min.js",
      "**/*.map"
    ],
    "exceptions": []
  },
  "file_hashes": {
    "openspec\\changes\\headroom-foundation\\.openspec.yaml": "cafa573d0d11cc3531e4cbdb09aea2a6",
    "openspec\\config.yaml": "63c26766698dd49488b53c274bfec0f9"
  },
  "folder_hashes": {
    "openspec/changes": "",
    "openspec/changes/headroom-foundation": "",
    ".": "072f78e51abbd298a4f984fc2d143028",
    "openspec": ""
  }
}
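The include/exclude patterns above are ordinary globs; a minimal matcher sketch showing how a scanner might apply them (simplified glob semantics, assumed — not the actual tool's implementation):

```typescript
// Convert a glob like "**/*.ts" into a RegExp (simplified semantics).
function globToRegExp(glob: string): RegExp {
  let p = glob.replace(/[.+^${}()|[\]\\]/g, "\\$&"); // escape regex chars
  p = p.replace(/\*\*\//g, "\u0001"); // placeholder: "**/" = any dir prefix
  p = p.replace(/\*\*/g, "\u0002");   // placeholder: "**" = anything
  p = p.replace(/\*/g, "[^/]*");      // "*" = within one path segment
  p = p.replace(/\u0001/g, "(?:.*/)?").replace(/\u0002/g, ".*");
  return new RegExp(`^${p}$`);
}

// A file is scanned if it matches an include pattern and no exclude pattern.
function isIncluded(path: string, include: string[], exclude: string[]): boolean {
  const hit = (g: string) => globToRegExp(g).test(path);
  return include.some(hit) && !exclude.some(hit);
}
```

With the patterns above, `src/api/auth.ts` is included while `src/api/auth.test.ts` and anything under `node_modules/` are excluded.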
codemap.md (new file, 19 lines)
@@ -0,0 +1,19 @@
# kimi-headroom/

<!-- Explorer: Fill in this section with architectural understanding -->

## Responsibility

<!-- What is this folder's job in the system? -->

## Design

<!-- Key patterns, abstractions, architectural decisions -->

## Flow

<!-- How does data/control flow through this module? -->

## Integration

<!-- How does it connect to other parts of the system? -->
devnotes.md (+466 lines)
@@ -894,3 +894,469 @@ Just say the word when you've reviewed everything, and we can:
- Adjust anything that needs tweaking
- Start implementation with /opsx-apply headroom-foundation

Take your time — good planning now saves endless rework later. Sleep well! 🌙

## LATE IDEA

Perfect! Now I can see the current state. Let me visualize the transformation you're proposing:
┌─────────────────────────────────────────────────────────────────────────────┐
│ CURRENT WORKFLOW vs SDD + TDD HYBRID │
└─────────────────────────────────────────────────────────────────────────────┘

CURRENT (Sequential):
═══════════════════════

┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ Specs │────▶│ Design │────▶│ Code │────▶│ Tests │
│ (exists) │ │ (exists) │ │ (tasks) │ │ (tasks) │
└──────────┘ └──────────┘ └──────────┘ └──────────┘
│ │ │ │
│ │ │ │
▼ ▼ ▼ ▼
15 spec files Decisions 328 tasks "Write tests
(auth, capacity, documented (DB, API, for X" at end
allocation, etc.) Frontend) of each section

PROBLEM: Tests are an afterthought. Code is written, THEN we figure out
how to test it. Coverage becomes retrofit work.

PROPOSED SDD + TDD HYBRID:
═══════════════════════════

┌─────────────────────────────────────────────────────────────────┐
│ SPEC-DRIVEN DEVELOPMENT │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Specs │────▶│ Design │────▶│ Tasks │ │
│ │ (GIVEN/ │ │ (HOW to │ │ (WHAT to │ │
│ │ WHEN/ │ │ test) │ │ build) │ │
│ │ THEN) │ │ │ │ │ │
│ └──────────┘ └────┬─────┘ └────┬─────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────────┐ │
│ │ TEST-DRIVEN CYCLE │ │
│ │ (per capability) │ │
│ │ │ │
│ │ ┌──────────┐ │ │
│ │ │ RED │◄────────┤ │
│ │ │ (Write │ │ │
│ │ │ tests │ │ │
│ │ │ first) │ │ │
│ │ └────┬─────┘ │ │
│ │ │ │ │
│ │ ▼ │ │
│ │ ┌──────────┐ │ │
│ │ │ GREEN │ │ │
│ │ │ (Code to │ │ │
│ │ │ pass) │ │ │
│ │ └────┬─────┘ │ │
│ │ │ │ │
│ │ ▼ │ │
│ │ ┌──────────┐ │ │
│ │ │ REFACTOR│─────────┘ │
│ │ │ (Clean │ │
│ │ │ up) │ │
│ │ └──────────┘ │
│ │ │
│ └───────────────────────────────────────────┘
│
│ KEY PRINCIPLE: Every spec scenario becomes a test case
│ BEFORE any implementation code exists.
│
└─────────────────────────────────────────────────────────────────┘

Now let me check what the current tasks look like to understand the gap:
→ Read openspec\changes\headroom-foundation\tasks.md [offset=65, limit=30]

I can see the current pattern: Code first, tests last (tasks 7.7-7.8, 8.8, etc.).

Here's what I'm thinking about the SDD + TDD hybrid approach:

The Core Shift

┌─────────────────────────────────────────────────────────────────────────────┐
│ SDD + TDD: THE NEW SEQUENCE │
└─────────────────────────────────────────────────────────────────────────────┘

For EACH capability (auth, capacity planning, etc.):

┌─────────────────────────────────────────────────────────────────────────────┐
│ STEP 1: SPEC (Already exists in your specs/) │
│ ───────────────────────────────────────────────────────────────────────── │
│ Your specs are EXCELLENT for TDD - they're GIVEN/WHEN/THEN format! │
│ │
│ Example from specs/authentication/spec.md: │
│ ┌─────────────────────────────────────────────────────────────────────────┐ │
│ │ Scenario: Successful login │ │
│ │ WHEN a user submits valid email "john@example.com" and password │ │
│ │ THEN the system validates the credentials │ │
│ │ AND generates a JWT access token (60 minute TTL) │ │
│ │ AND generates a refresh token (7 day TTL) │ │
│ └─────────────────────────────────────────────────────────────────────────┘ │
│ │
│ This IS a test case. We just need to write it as code FIRST. │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ STEP 2: TEST (New - write BEFORE code) │
│ ───────────────────────────────────────────────────────────────────────── │
│ Each spec scenario becomes a test: │
│ │
│ Backend (Pest/PHPUnit): │
│ ┌─────────────────────────────────────────────────────────────────────────┐ │
│ │ it('authenticates user with valid credentials and issues JWT tokens') │ │
│ │ ->given('user exists with email john@example.com') │ │
│ │ ->when('POST /api/auth/login', ['email' => 'john@example.com', ...])│ │
│ │ ->then('response status is 200') │ │
│ │ ->and('response contains access_token') │ │
│ │ ->and('response contains refresh_token') │ │
│ │ ->and('access_token expires in 60 minutes') │ │
│ │ ->and('refresh_token expires in 7 days'); │ │
│ └─────────────────────────────────────────────────────────────────────────┘ │
│ │
│ Frontend (Playwright/Vitest): │
│ ┌─────────────────────────────────────────────────────────────────────────┐ │
│ │ test('successful login issues JWT tokens', async ({ page }) => { │ │
│ │ await page.goto('/login'); │ │
│ │ await page.fill('[name="email"]', 'john@example.com'); │ │
│ │ await page.fill('[name="password"]', 'password'); │ │
│ │ await page.click('button[type="submit"]'); │ │
│ │ │ │
│ │ await expect(page).toHaveURL('/dashboard'); │ │
│ │ await expect(localStorage.getItem('access_token')).toBeTruthy(); │ │
│ │ await expect(localStorage.getItem('refresh_token')).toBeTruthy(); │ │
│ │ }); │ │
│ └─────────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ STEP 3: CODE (Minimal to make tests pass) │
│ ───────────────────────────────────────────────────────────────────────── │
│ Write just enough code to make the tests green. │
│ No premature optimization. No "I'll need this later." │
│ │
│ The tests guide the API design, not the other way around. │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ STEP 4: REFACTOR (Clean up while tests stay green) │
│ ───────────────────────────────────────────────────────────────────────── │
│ Now that it works, make it clean: │
│ - Extract methods │
│ - Improve naming │
│ - Add edge case handling │
│ - Optimize queries │
│ │
│ Tests ensure nothing breaks during refactoring. │
└─────────────────────────────────────────────────────────────────────────────┘
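The TTLs in the login scenario above translate directly into assertable numbers, which is what makes the spec testable before any implementation exists. A tiny sketch (values taken from the scenario; the helper name is hypothetical):

```typescript
// TTLs from the spec scenario: access token 60 minutes, refresh token 7 days.
const ACCESS_TTL_SECONDS = 60 * 60;
const REFRESH_TTL_SECONDS = 7 * 24 * 60 * 60;

// Expiry timestamp (ms) for a token issued at issuedAtMs.
function expiresAt(issuedAtMs: number, ttlSeconds: number): number {
  return issuedAtMs + ttlSeconds * 1000;
}
```

A test can then assert the decoded token's `exp` claim equals `expiresAt(issuedAt, ACCESS_TTL_SECONDS)` instead of eyeballing timestamps.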
The Practical Impact on Your 328 Tasks

Your current tasks are organized by layer (DB → Backend → Frontend). With SDD+TDD, we'd organize by capability with tests leading:

CURRENT TASK STRUCTURE:
═══════════════════════

1. Project Setup & Infrastructure
2. Backend Foundation (Laravel)
3. Frontend Foundation (SvelteKit)
4. Database Schema & Migrations
5. Database Seeders
6. Laravel Models & Relationships
7. Authentication (Backend)
8. Authentication (Frontend)
9. Team Member Management (Backend)
10. Team Member Management (Frontend)
... (16 sections total)

Each section ends with "Write tests for X"

PROPOSED SDD+TDD STRUCTURE:
════════════════════════════

1. Project Setup & Infrastructure (unchanged - foundation)
2. Database Schema (unchanged - structure needed first)

3. AUTHENTICATION CAPABILITY
   ├─ 3.1 Write E2E tests from spec scenarios (Playwright)
   ├─ 3.2 Write API tests from spec scenarios (Pest)
   ├─ 3.3 Implement AuthController (minimal - make tests pass)
   ├─ 3.4 Implement JWT middleware (minimal - make tests pass)
   ├─ 3.5 Implement login page (minimal - make E2E pass)
   ├─ 3.6 Refactor auth flow (clean up)
   └─ 3.7 Update API documentation (Scribe)

4. TEAM MEMBER MANAGEMENT CAPABILITY
   ├─ 4.1 Write E2E tests from spec scenarios
   ├─ 4.2 Write API tests from spec scenarios
   ├─ 4.3 Implement TeamMemberController
   ├─ 4.4 Implement team member pages
   ├─ 4.5 Refactor
   └─ 4.6 Update docs

... (one section per capability)

Key Questions to Consider

1. Test Granularity: Do we write tests for EVERY spec scenario (you have ~100 scenarios across 15 specs)? Or prioritize critical paths? ==> Every spec.

2. Test Organization: Should tests live with the capability (e.g., tests/Feature/Authentication/) or mirror the current structure? ==> Mirror the current structure.

3. E2E vs Unit: Your specs naturally map to E2E tests. Do we also need unit tests for internal methods, or is E2E + API integration sufficient? ==> We need unit tests for internal methods, so E2E + API + Unit.

4. Frontend Testing: With SvelteKit, do we test components in isolation (Vitest) or rely on E2E (Playwright) for most coverage? ==> We test components in isolation (Vitest) and E2E (Playwright).

5. The "Red" Phase: How do we handle the fact that tests will fail initially? Do we commit failing tests, or keep them local until they pass? ==> We commit failing tests, but mark them as pending until they pass.

6. Regression Testing: Your decision-log mentions "regression tests" - does this mean maintaining a separate regression suite, or is every test potentially a regression test? ==> Every test is potentially a regression test.

What resonates with you? Any of these questions spark concerns or clarifications?
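On question 5 (committing failing tests as pending), the mechanical part is easy to sketch. A hypothetical generator that turns a spec scenario into a skipped Playwright stub — `test.fixme()` is Playwright's real pending marker, but the generator itself is illustrative:

```typescript
interface SpecScenario {
  capability: string; // e.g. "authentication"
  name: string;       // e.g. "Successful login"
}

// Emit a pending Playwright test stub for one scenario. test.fixme()
// keeps the test in the suite but marks it as not-yet-implemented.
function pendingStub(s: SpecScenario): string {
  return [
    `// from specs/${s.capability}/spec.md`,
    `test.fixme('${s.name}', async ({ page }) => {`,
    `  // pending implementation`,
    `});`,
  ].join("\n");
}
```

Running this over every scenario in a spec produces the file committed in the Red phase; enabling a test later is just deleting the `.fixme` marker.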

## LATE IDEA INTERVIEW 2

Perfect! I have a crystal-clear picture now. Let me synthesize this into a concrete development strategy:

SDD + TDD Strategy - Finalized

┌─────────────────────────────────────────────────────────────────────────────┐
│ DEVELOPMENT WORKFLOW (SDD + TDD) │
└─────────────────────────────────────────────────────────────────────────────┘

FOR EACH CAPABILITY (15 specs = 15 capability cycles):

┌─────────────────────────────────────────────────────────────────────────────┐
│ PHASE 1: SPEC → TEST (Red Phase) │
│ ───────────────────────────────────────────────────────────────────────── │
│ │
│ Input: specs/<capability>/spec.md (GIVEN/WHEN/THEN scenarios) │
│ │
│ Output: Failing tests (marked pending) │
│ │
│ ┌─────────────────────────────────────────────────────────────────────────┐ │
│ │ 1. Read spec scenarios │ │
│ │ └── Example: "WHEN user submits valid email, THEN JWT issued" │ │
│ │ │ │
│ │ 2. Write E2E test (Playwright) │ │
│ │ └── tests/e2e/<capability>.spec.ts │ │
│ │ └── Mark as test.skip() or test.fixme() │ │
│ │ │ │
│ │ 3. Write API test (Pest) │ │
│ │ └── backend/tests/Feature/<Capability>Test.php │ │
│ │ └── Mark as $this->markTestSkipped('pending implementation') │ │
│ │ │ │
│ │ 4. Write Unit tests (Pest/Vitest) │ │
│ │ └── backend/tests/Unit/<Class>Test.php │ │
│ │ └── frontend/src/lib/<module>.test.ts │ │
│ │ └── Mark as skip/pending │ │
│ │ │ │
│ │ 5. Write Component tests (Vitest + Testing Library) │ │
│ │ └── frontend/src/components/<Component>.test.ts │ │
│ │ └── Mark as skip/pending │ │
│ │ │ │
│ │ 6. Commit: "test(<capability>): Add pending tests for all scenarios" │ │
│ └─────────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ PHASE 2: IMPLEMENT (Green Phase) │
│ ───────────────────────────────────────────────────────────────────────── │
│ │
│ Goal: Make pending tests pass, one by one │
│ │
│ ┌─────────────────────────────────────────────────────────────────────────┐ │
│ │ 1. Pick one pending test │ │
│ │ │ │
│ │ 2. Remove skip marker │ │
│ │ │ │
│ │ 3. Write MINIMAL code to make it pass │ │
│ │ └── Don't worry about elegance │ │
│ │ └── Hardcode if needed temporarily │ │
│ │ └── Just get to green │ │
│ │ │ │
│ │ 4. Run test suite │ │
│ │ └── npm run test (frontend) │ │
│ │ └── php artisan test (backend) │ │
│ │ └── npx playwright test (e2e) │ │
│ │ │ │
│ │ 5. Commit when green: │ │
│ │ └── "feat(<capability>): Implement <scenario>" │ │
│ │ └── "test(<capability>): Enable <scenario> test" │ │
│ │ │ │
│ │ 6. Repeat for all scenarios in capability │ │
│ └─────────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ PHASE 3: REFACTOR (Clean Phase) │
│ ───────────────────────────────────────────────────────────────────────── │
│ │
│ Goal: Clean code while keeping all tests green │
│ │
│ ┌─────────────────────────────────────────────────────────────────────────┐ │
│ │ 1. Review implementation for: │ │
│ │ └── Code smells │ │
│ │ └── Duplication │ │
│ │ └── Performance issues │ │
│ │ └── Naming clarity │ │
│ │ │ │
│ │ 2. Refactor with confidence (tests guard against regression) │ │
│ │ │ │
│ │ 3. Run full test suite after each change │ │
│ │ │ │
│ │ 4. Commit: "refactor(<capability>): <what was improved>" │ │
│ └─────────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ PHASE 4: DOCUMENT │
│ ───────────────────────────────────────────────────────────────────────── │
│ │
│ ┌─────────────────────────────────────────────────────────────────────────┐ │
│ │ 1. Update API documentation (Scribe) │ │
│ │ └── php artisan scribe:generate │ │
│ │ │ │
│ │ 2. Verify all tests still pass │ │
│ │ │ │
│ │ 3. Commit: "docs(<capability>): Update API documentation" │ │
│ └─────────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
Test Organization (Mirrors Current Structure)

backend/
├── tests/
│   ├── Feature/                       # API integration tests
│   │   ├── AuthenticationTest.php     # From specs/authentication/spec.md
│   │   ├── TeamMemberTest.php         # From specs/team-member-management/spec.md
│   │   └── ...                        # One test class per spec
│   │
│   └── Unit/                          # Unit tests for internal methods
│       ├── Services/
│       │   └── JwtServiceTest.php
│       └── ...

frontend/
├── src/
│   ├── lib/
│   │   ├── api/
│   │   │   └── auth.test.ts           # Unit tests for API clients
│   │   └── stores/
│   │       └── auth.test.ts           # Unit tests for stores
│   │
│   ├── components/
│   │   ├── LoginForm.test.ts          # Component tests
│   │   └── ...
│   │
│   └── routes/
│       └── login/
│           └── page.test.ts           # Route/page tests
│
└── tests/e2e/                         # Playwright E2E tests
    ├── auth.spec.ts                   # From specs/authentication/spec.md
    ├── team-members.spec.ts           # From specs/team-member-management/spec.md
    └── ...                            # One spec file per capability
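The layout above is mechanical enough to derive from a capability name. A sketch (simplified — it does not reproduce shortened class names like `TeamMemberTest`, and the function names are hypothetical):

```typescript
// kebab-case capability name -> PascalCase class name
function pascalCase(kebab: string): string {
  return kebab
    .split("-")
    .map(w => w.charAt(0).toUpperCase() + w.slice(1))
    .join("");
}

// Where each kind of test for a capability lives, mirroring the tree above.
function testPathsFor(capability: string) {
  return {
    e2e: `frontend/tests/e2e/${capability}.spec.ts`,
    api: `backend/tests/Feature/${pascalCase(capability)}Test.php`,
    unit: "backend/tests/Unit/", // organized per class, not per capability
  };
}
```

For example, `testPathsFor("authentication")` yields the `auth` E2E file's long-form name and `backend/tests/Feature/AuthenticationTest.php`.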
Test Coverage Strategy

┌─────────────────────────────────────────────────────────────────────────────┐
│ TEST PYRAMID FOR THIS PROJECT │
└─────────────────────────────────────────────────────────────────────────────┘

▲
/│\ E2E Tests (Playwright)
/ │ \ ───────────────────────
/ │ \ • Critical user journeys
/ │ \ • ~15 spec files = ~15 e2e suites
/ │ \ • Tests from specs/<capability>/spec.md
/ │ \ • Run on CI before deploy
/──────┼──────\
/ │ \ API Tests (Pest Feature)
/ │ \ ─────────────────────────
/ │ \ • Every spec scenario
/ │ \ • ~100 scenarios across 15 specs
/ │ \ • Validates endpoints, auth, validation
/────────────┼────────────\
/ │ \ Unit Tests (Pest/Vitest)
/ │ \ ─────────────────────────
/ │ \ • Internal methods, services
/ │ \ • Business logic, calculations
/ │ \ • Edge cases, error handling
/ │ \
/───────────────────┴───────────────────\

Coverage Target: >70% (from decision-log)

Every test is a regression test - run full suite on every PR
Commit Message Convention

test(<capability>): Add pending tests for <scenario>
  ↳ Initial failing tests, marked skip/pending

feat(<capability>): Implement <specific feature>
  ↳ Enable test + minimal implementation

refactor(<capability>): <description of improvement>
  ↳ Clean up while tests green

docs(<capability>): <documentation change>
  ↳ Scribe generation, README updates
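The convention above can be enforced with a trivial helper. A sketch (the function name is hypothetical; the 50-char cap on the description follows the project's git convention stated earlier in these notes):

```typescript
type CommitType = "test" | "feat" | "refactor" | "docs";

// Build a conventional header like "feat(auth): Implement user login".
function commitHeader(type: CommitType, capability: string, summary: string): string {
  if (summary.length > 50) {
    throw new Error(`description exceeds 50 chars (${summary.length})`);
  }
  return `${type}(${capability}): ${summary}`;
}
```

Something like this could run in a commit-msg hook so malformed headers never land in history.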
The Complete Capability Cycle

┌─────────────────────────────────────────────────────────────────────────────┐
│ CAPABILITY: Authentication (Example) │
└─────────────────────────────────────────────────────────────────────────────┘

SPEC: specs/authentication/spec.md has 10 scenarios

CYCLE 1: Write all pending tests
├─ tests/e2e/auth.spec.ts (10 scenarios)
├─ backend/tests/Feature/AuthenticationTest.php (10 scenarios)
├─ backend/tests/Unit/JwtServiceTest.php (unit tests for internal methods)
├─ frontend/src/components/LoginForm.test.ts (component tests)
└─ Commit: "test(auth): Add pending tests for all authentication scenarios"

CYCLE 2-11: Implement one scenario at a time
├─ Remove skip from "Successful login" test
├─ Implement AuthController::login()
├─ Implement JWT generation
├─ Tests pass
├─ Commit: "feat(auth): Implement user login with JWT tokens"
│
├─ Remove skip from "Invalid credentials" test
├─ Add validation logic
├─ Tests pass
├─ Commit: "feat(auth): Add credential validation"
│
└─ ... repeat for all 10 scenarios

CYCLE 12: Refactor
├─ Extract JwtService from AuthController
├─ Improve error messages
├─ Optimize token generation
├─ All tests still pass
└─ Commit: "refactor(auth): Extract JwtService, improve error handling"

CYCLE 13: Document
├─ php artisan scribe:generate
├─ Verify all tests pass
└─ Commit: "docs(auth): Update API documentation"
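The cycle discipline above (implement pending scenarios one at a time; refactor only once everything is green) can be stated as a tiny scheduler. Purely illustrative; names are hypothetical:

```typescript
type Status = "pending" | "green";
interface Scenario { name: string; status: Status; }

// Next step in a capability cycle: enable and implement the first pending
// scenario; once every scenario is green, move on to the refactor phase.
function nextStep(scenarios: Scenario[]): string {
  const pending = scenarios.find(s => s.status === "pending");
  return pending ? `implement: ${pending.name}` : "refactor";
}
```

Applied to the example above, the scheduler would keep returning scenarios through cycles 2-11 and only answer "refactor" at cycle 12.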
Questions Before I Update the Config

1. Test Runner Integration: Should I add npm scripts to frontend/package.json and composer scripts to backend/composer.json for running the different test suites? ==> Yes and Yes

2. CI/CD: Do you want the config to specify running tests in CI? (e.g., GitHub Actions workflow) ==> Not at the moment.

3. Pending Test Markers: For PHP/Pest, should pending tests use ->markTestSkipped() or ->todo()? For Playwright, test.skip() or test.fixme()? ==> Todo and Fixme.

4. Test Naming: Should test names mirror the spec scenario names exactly, or use a shorter convention? ==> Use ideal convention

5. Task Structure: Should I update the existing tasks.md to reflect this SDD+TDD workflow, or is this a "meta" change that lives in the config? ==> Yes.

Ready to update openspec/config.yaml with this strategy once you confirm!

## LATE IDEA INTERVIEW 3

I understand you're in plan mode. Let me construct a comprehensive plan for implementing the SDD + TDD strategy.

Implementation Plan

Phase 1: Update OpenSpec Config

File: openspec/config.yaml

Add new sections:
- development_strategy: SDD + TDD workflow definition
- testing: Test organization, naming conventions, pending markers
- scripts: Test runner commands for both frontend and backend

Phase 2: Add Test Runner Scripts

Frontend (frontend/package.json):

{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest",
    "test:ui": "vitest --ui",
    "test:e2e": "playwright test",
    "test:e2e:ui": "playwright test --ui",
    "test:all": "npm run test && npm run test:e2e"
  }
}

Backend (backend/composer.json):

{
  "scripts": {
    "test": "pest",
    "test:unit": "pest --filter=Unit",
    "test:feature": "pest --filter=Feature",
    "test:coverage": "pest --coverage --min=70"
  }
}

Phase 3: Update Tasks Structure

File: openspec/changes/headroom-foundation/tasks.md

Reorganize from layer-based to capability-based with SDD+TDD phases:

## Capability 1: Authentication

### Phase 1: Write Pending Tests (Red)
- [ ] 1.1 Write E2E tests for all auth scenarios (Playwright, test.fixme)
- [ ] 1.2 Write API tests for all auth scenarios (Pest, ->todo)
- [ ] 1.3 Write unit tests for JwtService (Pest, ->todo)
- [ ] 1.4 Write component tests for LoginForm (Vitest, test.skip)

### Phase 2: Implement (Green)
- [ ] 1.5 Enable and implement: Successful login scenario
- [ ] 1.6 Enable and implement: Invalid credentials scenario
- [ ] 1.7 Enable and implement: Token refresh scenario
- [ ] 1.8 Enable and implement: Logout scenario
... (all 10 scenarios)

### Phase 3: Refactor
- [ ] 1.x Refactor auth implementation

### Phase 4: Document
- [ ] 1.y Generate API docs with Scribe

Phase 4: Create Test Templates

Create starter templates for:
- backend/tests/Feature/TemplateTest.php (Pest)
- frontend/tests/e2e/template.spec.ts (Playwright)
- frontend/src/components/Template.test.ts (Vitest)

Questions

1. Test naming convention: You said "ideal convention" - should I propose a specific format like it('authenticates user with valid credentials', ...) or test('successful login issues JWT tokens', ...)?

2. Task reorganization scope: Should I reorganize ALL 328 tasks now, or start with just the first 1-2 capabilities as a pilot?

3. Template location: Should templates live in openspec/templates/ or within each project's test directory?

Ready to proceed with implementation once you approve this plan!
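For question 1, whichever format is chosen can be checked mechanically. A hypothetical lint for the "descriptive, lowercase, no should-prefix" style used in the examples earlier in these notes — purely illustrative, not a settled decision:

```typescript
// Heuristic: descriptive third-person test names start lowercase,
// avoid a "should" prefix, and stay reasonably short.
function isIdiomaticTestName(name: string): boolean {
  return /^[a-z]/.test(name) && !/^should\b/.test(name) && name.length <= 80;
}
```

Under this heuristic, "authenticates user with valid credentials and issues JWT tokens" passes while "Should Login Work" does not.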
openspec/changes/codemap.md (new file, 19 lines)
@@ -0,0 +1,19 @@
# openspec/changes/

<!-- Explorer: Fill in this section with architectural understanding -->

## Responsibility

<!-- What is this folder's job in the system? -->

## Design

<!-- Key patterns, abstractions, architectural decisions -->

## Flow

<!-- How does data/control flow through this module? -->

## Integration

<!-- How does it connect to other parts of the system? -->
openspec/changes/headroom-foundation/codemap.md (new file, 19 lines)
@@ -0,0 +1,19 @@
# openspec/changes/headroom-foundation/

<!-- Explorer: Fill in this section with architectural understanding -->

## Responsibility

<!-- What is this folder's job in the system? -->

## Design

<!-- Key patterns, abstractions, architectural decisions -->

## Flow

<!-- How does data/control flow through this module? -->

## Integration

<!-- How does it connect to other parts of the system? -->
(File diff suppressed because it is too large)
openspec/codemap.md (new file, 19 lines)
@@ -0,0 +1,19 @@
# openspec/

<!-- Explorer: Fill in this section with architectural understanding -->

## Responsibility

<!-- What is this folder's job in the system? -->

## Design

<!-- Key patterns, abstractions, architectural decisions -->

## Flow

<!-- How does data/control flow through this module? -->

## Integration

<!-- How does it connect to other parts of the system? -->
openspec/config.yaml
@@ -1,20 +1,238 @@
schema: spec-driven

# Project context (optional)
# This is shown to AI when creating artifacts.
# Add your tech stack, conventions, style guides, domain knowledge, etc.
# Example:
# context: |
#   Tech stack: TypeScript, React, Node.js
#   We use conventional commits
#   Domain: e-commerce platform

# Project context - shown to AI when creating artifacts
# This provides essential context about the Headroom project
techstack: |
  ## Backend (Laravel API)
  - **Framework:** Laravel 12 (latest) with PHP 8.4
  - **Database:** PostgreSQL (latest, Alpine container)
  - **Caching:** Redis (latest, Alpine container) - query + response caching
  - **Authentication:** JWT (tymon/jwt-auth package)
  - **API Design:** REST with Laravel API Resources
  - **API Documentation:** Laravel Scribe (auto-generates SwaggerUI)
  - **Testing:** PHPUnit (unit) + Pest (feature)
  - **Code Style:** PSR-12, Laravel conventions
  - **Container:** Docker (port 3000)

  ## Frontend (SvelteKit)
  - **Framework:** SvelteKit (latest) with Svelte 5
  - **Styling:** Tailwind CSS + DaisyUI
  - **Charts:** Recharts
  - **Tables:** TanStack Table (Svelte adapter)
  - **Forms:** Superforms + Zod + SvelteKit Form Actions
  - **State Management:** Svelte stores (minimal - UI state only)
  - **HTTP Client:** Native fetch (no Axios)
  - **Testing:** Vitest (unit) + Playwright (E2E)
  - **Container:** Docker (port 5173)

  ## Infrastructure
  - **Local Dev:** Docker Compose with code-mounted volumes (hot reload)
  - **Reverse Proxy:** Nginx Proxy Manager (existing)
  - **Database Volume:** Mounted to ./data/postgres
  - **Cache Volume:** Mounted to ./data/redis
  - **Secrets:** .env files (all environments)
# Per-artifact rules (optional)
# Add custom rules for specific artifacts.
# Example:
# rules:
#   proposal:
#     - Keep proposals under 500 words
#     - Always include a "Non-goals" section
#   tasks:
#     - Break tasks into chunks of max 2 hours
conventions: |
  ## Code Style Standards

  ### Backend (Laravel)
  - Follow PSR-12 coding standards
  - Use Laravel Pint for linting
  - Use PHPStan (level 5+) for static analysis
  - Use Laravel conventions for naming (camelCase methods, snake_case DB columns)
  - Use API Resources for consistent JSON responses
  - Use Form Requests for validation
  - Use Policies for authorization
  - Use UUIDs for primary keys (prevents ID enumeration)

  ### Frontend (SvelteKit)
  - Use Prettier for formatting
  - Use ESLint with Svelte plugin
  - Use TypeScript strict mode
  - Use SvelteKit file-based routing conventions
  - Use $lib alias for imports from src/lib
  - Use Zod schemas for validation (shared between frontend and API contracts)

  ## Git Conventions
  - **Branch naming:** `feature/opsx-<change-name>` for OpenSpec changes
  - **Commit format:**
    ```
    [Type] Brief description (50 chars max)

    Detailed explanation (optional, 72 char wrap)

    Refs: openspec/changes/<change-name>
    ```
  - **Types:** feat, fix, refactor, test, docs, chore
  - **Granular commits:** One fix = one commit

  ## API Conventions
  - RESTful endpoints with standard HTTP verbs
  - Response format: `{ "data": {}, "meta": {}, "links": {} }`
  - Error format: `{ "message": "...", "errors": {} }`
  - Cache keys pattern: `allocations:month:{YYYY-MM}`, `reports:forecast:{from}:{to}:{hash}`
  - TTL: 1 hour (allocations), 15 min (reports), 24 hours (master data)
development_strategy:
  approach: "Spec-Driven Development (SDD) + Test-Driven Development (TDD) Hybrid"
  description: |
    Every capability follows a 4-phase cycle:

    ## Phase 1: SPEC → TEST (Red Phase)
    - Read scenarios from specs/<capability>/spec.md
    - Write E2E tests (Playwright) - mark as test.fixme()
    - Write API tests (Pest) - mark as ->todo()
    - Write unit tests (Pest/Vitest) - mark as ->todo() or test.skip()
    - Write component tests (Vitest) - mark as test.skip()
    - Commit: "test(<capability>): Add pending tests for all scenarios"

    ## Phase 2: IMPLEMENT (Green Phase)
    - Remove skip/todo marker from one test
    - Write MINIMAL code to make it pass
    - Run test suite (npm run test, php artisan test, npx playwright test)
    - Commit when green: "feat(<capability>): Implement <scenario>"
    - Repeat for all scenarios

    ## Phase 3: REFACTOR (Clean Phase)
    - Review for code smells, duplication, performance
    - Refactor with confidence (tests guard against regression)
    - Run full test suite
    - Commit: "refactor(<capability>): <improvement description>"

    ## Phase 4: DOCUMENT
    - Generate API docs: php artisan scribe:generate
    - Verify all tests pass
    - Commit: "docs(<capability>): Update API documentation"

  test_granularity: "Every spec scenario gets a test"
  test_organization: "Mirror current structure"
  test_types:
    - "E2E: Playwright tests for critical user journeys"
    - "API: Pest Feature tests for all endpoints"
    - "Unit: Pest/Vitest for internal methods and business logic"
    - "Component: Vitest + Testing Library for Svelte components"

  pending_markers:
    php: "->todo()"
    playwright: "test.fixme()"
    vitest: "test.skip()"

  test_naming:
    format: "Descriptive - mirror spec scenario intent"
    examples:
      - "authenticates user with valid credentials and issues JWT tokens"
      - "returns 401 when credentials are invalid"
      - "rotates refresh token on each refresh request"

  coverage_target: ">70%"
  regression_strategy: "Every test is a regression test - run full suite on every PR"

scripts:
  backend:
    test: "pest"
    "test:unit": "pest --filter=Unit"
    "test:feature": "pest --filter=Feature"
    "test:coverage": "pest --coverage --min=70"
    "test:todo": "pest --filter=todo"
    lint: "pint"
    "lint:fix": "pint --fix"
    analyse: "phpstan analyse --level=5"
    docs: "scribe:generate"

  frontend:
    test: "vitest run"
    "test:watch": "vitest"
    "test:ui": "vitest --ui"
    "test:e2e": "playwright test"
    "test:e2e:ui": "playwright test --ui"
    "test:all": "npm run test && npm run test:e2e"
    lint: "eslint ."
    "lint:fix": "eslint . --fix"
    format: "prettier --check ."
    "format:fix": "prettier --write ."
|
||||
|
||||
rules:
|
||||
# Project-level standing instructions (from decision-log.md)
|
||||
all_changes:
|
||||
- Every change must follow SDD+TDD: specs → pending tests → implementation → refactor
|
||||
- Every spec scenario must have corresponding tests (E2E, API, Unit, Component)
|
||||
- Pending tests must be committed before implementation (red phase)
|
||||
- Changes must end with code review for style, standards, and security
|
||||
- Verification (/opsx-verify) must check for uncovered code (code not tested)
|
||||
- Commits must be granular (one scenario = one commit)
|
||||
- Code coverage must be >70% (enforced in /opsx-verify)
|
||||
- All tests must pass before merge
|
||||
- Zero linting errors (Laravel Pint, ESLint, Prettier)
|
||||
- API documentation must be up-to-date (Scribe generation)
|
||||
|
||||
proposal:
|
||||
- Include clear Goals and Non-Goals sections
|
||||
- Reference the 4 personas (Superuser, Manager, Developer, Top Brass)
|
||||
- Align with monthly capacity planning workflow
|
||||
- Include data validation rules for any new entities
|
||||
|
||||
specs:
|
||||
- Document all validation rules explicitly
|
||||
- Include RBAC permissions for each operation
|
||||
- Define error scenarios and expected responses
|
||||
- Reference existing data model (team_members, projects, allocations, actuals)
|
||||
- Use YYYY-MM format for month references
|
||||
- Each scenario must be testable (clear GIVEN/WHEN/THEN)
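Since the YYYY-MM month format recurs throughout the specs and data model, a shared validator/formatter pair keeps frontend and API contracts consistent. A sketch, with illustrative function names that are not existing project code:

```typescript
// Matches a four-digit year and a zero-padded month 01-12, e.g. "2024-01".
const MONTH_PATTERN = /^\d{4}-(0[1-9]|1[0-2])$/;

function isValidMonth(value: string): boolean {
  return MONTH_PATTERN.test(value);
}

// Formats a Date as YYYY-MM (JavaScript months are 0-based).
function formatMonth(date: Date): string {
  const month = String(date.getMonth() + 1).padStart(2, "0");
  return `${date.getFullYear()}-${month}`;
}
```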
  design:
    - Include database schema changes (migrations needed)
    - Define API endpoints with request/response examples
    - Specify caching strategy (keys, TTL, invalidation rules)
    - Include UI/UX considerations for SvelteKit + DaisyUI
    - Document any new dependencies
    - Document test approach for each capability

  tasks:
    - Organize by capability (not by layer)
    - Each capability has 4 phases: Test (Red) → Implement (Green) → Refactor → Document
    - Break implementation into individual scenarios
    - Include explicit test tasks (write pending, enable one by one)
    - Include API documentation updates as tasks
    - Order capabilities by dependency and business priority

# Domain knowledge
context: |
  ## Project Overview
  Headroom is a resource planning and capacity management tool for engineering managers.
  It replaces error-prone spreadsheets with structured capacity planning, resource allocation,
  and utilization tracking.

  ## Core Workflow (Monthly Cycle)
  1. **Capacity Planning** - Define team availability (holidays, PTO, working days)
  2. **Project Setup** - Track projects through lifecycle with approved estimates
  3. **Resource Allocation** - Allocate hours per person per project per month
  4. **Actuals Tracking** - Log actual hours worked and compare to planned

  ## Personas & Permissions
  - **Superuser:** Full access (setup, config, all projects, all teams)
  - **Manager:** Create/edit own projects, allocate own team, view all projects read-only
  - **Developer:** View own allocations, log own hours, view assigned projects
  - **Top Brass:** View all reports read-only (forecasts, utilization, costs)

  ## Key Business Rules
  - Availability: 0 (unavailable), 0.5 (half day), 1.0 (full day)
  - Project allocation indicators: GREEN (100% ±5%), YELLOW (<95%), RED (>105%)
  - Monthly aggregate for actuals (not daily)
  - Untracked resource for external team time (not billed)
  - Validation: Cannot allocate to "Done" or "Cancelled" projects
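The indicator thresholds above can be sketched as a pure function over the allocation percentage. The function name and the exact inclusion of the 95%/105% boundaries in GREEN are assumptions (reading "100% ±5%" as the closed band [95, 105]), not project code:

```typescript
type Indicator = "GREEN" | "YELLOW" | "RED";

// GREEN within 100% ±5%, YELLOW below that band, RED above it.
function allocationIndicator(percentAllocated: number): Indicator {
  if (percentAllocated < 95) return "YELLOW";
  if (percentAllocated > 105) return "RED";
  return "GREEN";
}
```

Keeping this as a shared pure function means the dashboard, reports, and unit tests all agree on the boundaries.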
  ## Data Model Summary
  - team_members: id (UUID), name, role_id, hourly_rate, active
  - projects: id (UUID), code, title, status_id, type_id, approved_estimate, forecasted_effort (JSON)
  - allocations: id (UUID), project_id, team_member_id, month (YYYY-MM), allocated_hours
  - actuals: id (UUID), project_id, team_member_id, month (YYYY-MM), hours_logged
  - roles, project_statuses, project_types: Master data tables
  - holidays, ptos: Calendar data
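For reference, the core entities above map to roughly these TypeScript shapes. Field names follow the summary; the interface names and sample values are illustrative assumptions:

```typescript
interface TeamMember {
  id: string;            // UUID
  name: string;
  role_id: string;
  hourly_rate: number;
  active: boolean;
}

interface Allocation {
  id: string;            // UUID
  project_id: string;
  team_member_id: string;
  month: string;         // YYYY-MM
  allocated_hours: number;
}

// A sample record, e.g. for seeding component or unit tests.
const sampleAllocation: Allocation = {
  id: "00000000-0000-0000-0000-000000000001",
  project_id: "00000000-0000-0000-0000-000000000002",
  team_member_id: "00000000-0000-0000-0000-000000000003",
  month: "2024-01",
  allocated_hours: 120,
};
```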

  ## Deferred to Phase 2
  - Real-time notifications (WebSocket)
  - PDF/CSV exports
  - Background jobs (Laravel Queue)
  - Audit logging
  - Multi-tenancy
  - Time-tracking tool integration
100
openspec/templates/PestFeatureTest.php
Normal file
@@ -0,0 +1,100 @@
<?php

use Illuminate\Foundation\Testing\RefreshDatabase;
use App\Models\User;

uses()->group('{{capability}}');

/*
 * Test Template for Pest Feature Tests
 *
 * Copy this file and replace:
 * - {{capability}} with the capability name (e.g., 'authentication')
 * - {{scenario_description}} with the spec scenario description
 *
 * Mark pending tests with ->todo() during Red Phase
 * Remove ->todo() and implement during Green Phase
 */

describe('{{capability}} API', function () {

    // Use RefreshDatabase trait for clean state
    uses(RefreshDatabase::class);

    beforeEach(function () {
        // Setup code runs before each test
        // e.g., create test user, authenticate, etc.
    });

    /*
     * Scenario: {{scenario_description}}
     *
     * Spec Reference: specs/{{capability}}/spec.md
     * Scenario: {{scenario_number}}
     */
    it('{{scenario_description}}', function () {
        // Arrange (GIVEN)
        // Set up the initial state

        // Act (WHEN)
        // Perform the action
        $response = $this->postJson('/api/{{endpoint}}', [
            // Request data
        ]);

        // Assert (THEN)
        // Verify the expected outcome
        $response->assertStatus(200)
            ->assertJson([
                // Expected response structure
            ]);
    })->todo(); // Remove ->todo() when implementing

    /*
     * Example: Error scenario
     */
    it('returns error when {{error_condition}}', function () {
        // Arrange

        // Act
        $response = $this->postJson('/api/{{endpoint}}', [
            'invalid' => 'data'
        ]);

        // Assert
        $response->assertStatus(422)
            ->assertJsonValidationErrors(['field_name']);
    })->todo();

    /*
     * Example: Authorization scenario
     */
    it('returns 403 when user lacks permission', function () {
        // Arrange: Create user without permission
        $user = User::factory()->create(['role' => 'developer']);

        // Act
        $response = $this->actingAs($user)
            ->postJson('/api/{{admin_endpoint}}');

        // Assert
        $response->assertStatus(403);
    })->todo();

});

/*
 * Helper functions specific to this capability
 */
function create{{model}}(array $attributes = []): {{model}}
{
    return {{model}}::factory()->create($attributes);
}

function authenticateUser(string $role = 'manager'): User
{
    $user = User::factory()->create(['role' => $role]);

    // Return authenticated user or token
    return $user;
}
92
openspec/templates/PestUnitTest.php
Normal file
@@ -0,0 +1,92 @@
<?php

use App\Services\{{service}};

uses()->group('{{capability}}', 'unit');

/*
 * Test Template for Pest Unit Tests
 *
 * Copy this file and replace:
 * - {{capability}} with the capability name
 * - {{service}} with the class being tested
 *
 * Mark pending tests with ->todo() during Red Phase
 * Remove ->todo() and implement during Green Phase
 */

describe('{{service}}', function () {

    beforeEach(function () {
        $this->service = new {{service}}();
    });

    /*
     * Test: {{method_description}}
     *
     * Tests the core logic of {{method_name}}
     */
    it('{{method_description}}', function () {
        // Arrange
        $input = [
            // Test input data
        ];

        $expected = [
            // Expected output
        ];

        // Act
        $result = $this->service->{{method_name}}($input);

        // Assert
        expect($result)->toBe($expected);
    })->todo();

    /*
     * Test: Edge case handling
     */
    it('handles {{edge_case}} gracefully', function () {
        // Arrange
        $edgeCaseInput = null;

        // Act & Assert
        expect(fn () => $this->service->{{method_name}}($edgeCaseInput))
            ->toThrow(\InvalidArgumentException::class);
    })->todo();

    /*
     * Test: Calculation accuracy
     */
    it('calculates {{calculation}} correctly', function () {
        // Arrange
        $a = 10;
        $b = 20;

        // Act
        $result = $this->service->calculate($a, $b);

        // Assert
        expect($result)->toBe(30);
    })->todo();

});

/*
 * Data providers for parameterized tests
 */
dataset('valid_inputs', function () {
    return [
        'case 1' => [['input' => 'value1'], 'expected1'],
        'case 2' => [['input' => 'value2'], 'expected2'],
        'case 3' => [['input' => 'value3'], 'expected3'],
    ];
});

/*
 * Example parameterized test
 *
 * Note: this test lives outside the describe block, so it cannot rely on
 * the beforeEach above - instantiate the service directly.
 */
it('processes valid inputs correctly', function ($input, $expected) {
    $result = (new {{service}}())->process($input);
    expect($result)->toBe($expected);
})->with('valid_inputs')->todo();
137
openspec/templates/PlaywrightE2ETest.spec.ts
Normal file
@@ -0,0 +1,137 @@
import { test, expect, type Page } from '@playwright/test';

/*
 * Test Template for Playwright E2E Tests
 *
 * Copy this file and replace:
 * - {{capability}} with the capability name (e.g., 'authentication')
 * - {{scenario_description}} with the spec scenario description
 *
 * Mark pending tests with test.fixme() during Red Phase
 * Remove test.fixme() and implement during Green Phase
 */

test.describe('{{capability}} Flow', () => {

  test.beforeEach(async ({ page }) => {
    // Setup: Navigate to base URL, login, etc.
    await page.goto('/');
  });

  /*
   * Scenario: {{scenario_description}}
   *
   * Spec Reference: specs/{{capability}}/spec.md
   * Scenario: {{scenario_number}}
   */
  test('{{scenario_description}}', async ({ page }) => {
    // Arrange (GIVEN)
    // Set up initial state
    await page.goto('/{{route}}');

    // Act (WHEN)
    // Perform user actions
    await page.fill('[name="email"]', 'test@example.com');
    await page.fill('[name="password"]', 'password');
    await page.click('button[type="submit"]');

    // Assert (THEN)
    // Verify expected outcome
    await expect(page).toHaveURL('/dashboard');
    await expect(page.locator('[data-testid="success-message"]')).toBeVisible();
    await expect(page.locator('[data-testid="user-name"]')).toContainText('Test User');
  });

  /*
   * Example: Error scenario
   */
  test('displays error when {{error_condition}}', async ({ page }) => {
    // Arrange
    await page.goto('/{{route}}');

    // Act
    await page.fill('[name="email"]', 'invalid@example.com');
    await page.fill('[name="password"]', 'wrongpassword');
    await page.click('button[type="submit"]');

    // Assert
    await expect(page.locator('[data-testid="error-message"]')).toBeVisible();
    await expect(page.locator('[data-testid="error-message"]')).toContainText('Invalid credentials');
    await expect(page).toHaveURL('/{{route}}');
  });

  /*
   * Example: Form validation
   */
  test('validates required fields', async ({ page }) => {
    // Arrange
    await page.goto('/{{route}}');

    // Act
    await page.click('button[type="submit"]');

    // Assert
    await expect(page.locator('[data-testid="error-email"]')).toContainText('Email is required');
    await expect(page.locator('[data-testid="error-password"]')).toContainText('Password is required');
  });

  /*
   * Example: CRUD operation
   */
  test('completes full {{resource}} CRUD flow', async ({ page }) => {
    // Create
    await page.goto('/{{resource}}/new');
    await page.fill('[name="name"]', 'Test {{resource}}');
    await page.click('button[type="submit"]');
    await expect(page.locator('[data-testid="success-message"]')).toBeVisible();

    // Read
    await page.goto('/{{resource}}');
    await expect(page.locator('text=Test {{resource}}')).toBeVisible();

    // Update
    await page.click('[data-testid="edit-{{resource}}"]');
    await page.fill('[name="name"]', 'Updated {{resource}}');
    await page.click('button[type="submit"]');
    await expect(page.locator('[data-testid="success-message"]')).toBeVisible();

    // Delete
    await page.click('[data-testid="delete-{{resource}}"]');
    await page.click('[data-testid="confirm-delete"]');
    await expect(page.locator('text=Updated {{resource}}')).not.toBeVisible();
  });

  /*
   * Example: RBAC test
   */
  test('restricts access based on role', async ({ page }) => {
    // Login as developer (limited permissions)
    await loginAs(page, 'developer');

    // Try to access admin page
    await page.goto('/admin/{{resource}}');

    // Should be redirected or show access denied
    await expect(page.locator('[data-testid="access-denied"]')).toBeVisible();
  });

});

/*
 * Helper functions
 */
async function loginAs(page: Page, role: string) {
  await page.goto('/login');
  await page.fill('[name="email"]', `${role}@example.com`);
  await page.fill('[name="password"]', 'password');
  await page.click('button[type="submit"]');
  await expect(page).toHaveURL('/dashboard');
}

async function create{{model}}(page: Page, data: Record<string, string>) {
  await page.goto('/{{resource}}/new');
  for (const [key, value] of Object.entries(data)) {
    await page.fill(`[name="${key}"]`, value);
  }
  await page.click('button[type="submit"]');
}
174
openspec/templates/VitestComponentTest.test.ts
Normal file
@@ -0,0 +1,174 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { render, screen, fireEvent, waitFor } from '@testing-library/svelte';
import {{component}} from './{{component}}.svelte';

/*
 * Test Template for Vitest Component Tests
 *
 * Copy this file and replace:
 * - {{component}} with the Svelte component name
 * - {{capability}} with the capability name
 *
 * Requires the @testing-library/jest-dom matchers (toBeInTheDocument, etc.)
 * to be registered in the Vitest setup file.
 *
 * Mark pending tests with it.skip() during Red Phase
 * Remove .skip and implement during Green Phase
 */

describe('{{component}}', () => {

  beforeEach(() => {
    // Reset mocks and state
    vi.clearAllMocks();
  });

  /*
   * Test: Component renders correctly
   */
  it.skip('renders with default props', () => {
    const { container } = render({{component}});

    expect(container.querySelector('[data-testid="{{component}}"]')).toBeInTheDocument();
  });

  /*
   * Test: Component displays data correctly
   */
  it.skip('displays {{data_type}} data', () => {
    const mockData = {
      id: '123',
      name: 'Test Name',
      value: 100
    };

    render({{component}}, { props: { data: mockData } });

    expect(screen.getByText('Test Name')).toBeInTheDocument();
    expect(screen.getByText('100')).toBeInTheDocument();
  });

  /*
   * Test: User interaction
   */
  it.skip('handles {{action}} click', async () => {
    const mockHandler = vi.fn();

    render({{component}}, {
      props: {
        on{{action}}: mockHandler
      }
    });

    const button = screen.getByTestId('{{action}}-button');
    await fireEvent.click(button);

    expect(mockHandler).toHaveBeenCalledTimes(1);
  });

  /*
   * Test: Form submission
   */
  it.skip('submits form with correct data', async () => {
    const mockSubmit = vi.fn();

    render({{component}}, {
      props: {
        onSubmit: mockSubmit
      }
    });

    // Fill form
    await fireEvent.input(screen.getByLabelText('Name'), {
      target: { value: 'Test Name' }
    });

    await fireEvent.input(screen.getByLabelText('Email'), {
      target: { value: 'test@example.com' }
    });

    // Submit
    await fireEvent.click(screen.getByText('Submit'));

    await waitFor(() => {
      expect(mockSubmit).toHaveBeenCalledWith({
        name: 'Test Name',
        email: 'test@example.com'
      });
    });
  });

  /*
   * Test: Validation errors
   */
  it.skip('displays validation errors', async () => {
    render({{component}});

    // Submit empty form
    await fireEvent.click(screen.getByText('Submit'));

    await waitFor(() => {
      expect(screen.getByText('Name is required')).toBeInTheDocument();
      expect(screen.getByText('Email is required')).toBeInTheDocument();
    });
  });

  /*
   * Test: Loading state
   */
  it.skip('shows loading state', () => {
    render({{component}}, {
      props: {
        loading: true
      }
    });

    expect(screen.getByTestId('loading-spinner')).toBeInTheDocument();
  });

  /*
   * Test: Empty state
   */
  it.skip('displays empty state when no data', () => {
    render({{component}}, {
      props: {
        data: []
      }
    });

    expect(screen.getByText('No data available')).toBeInTheDocument();
  });

  /*
   * Test: Error state
   */
  it.skip('displays error message', () => {
    render({{component}}, {
      props: {
        error: 'Failed to load data'
      }
    });

    expect(screen.getByText('Failed to load data')).toBeInTheDocument();
  });

  /*
   * Test: Accessibility
   */
  it.skip('has accessible attributes', () => {
    render({{component}});

    const element = screen.getByRole('button');
    expect(element).toHaveAttribute('aria-label');
  });

});

/*
 * Helper functions
 */
function createMock{{model}}(overrides = {}) {
  return {
    id: '123',
    name: 'Test {{model}}',
    createdAt: new Date().toISOString(),
    ...overrides
  };
}
205
openspec/templates/VitestUnitTest.test.ts
Normal file
@@ -0,0 +1,205 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';

/*
 * Test Template for Vitest Unit Tests (Utilities, Stores, API Clients)
 *
 * Copy this file and replace:
 * - {{module}} with the module name
 * - {{capability}} with the capability name
 *
 * Mark pending tests with it.skip() during Red Phase
 * Remove .skip and implement during Green Phase
 */

describe('{{module}}', () => {

  beforeEach(() => {
    // Reset state before each test
    vi.clearAllMocks();
  });

  /*
   * Test: Core functionality
   */
  it.skip('{{functionality_description}}', () => {
    // Arrange
    const input = {
      // Test input
    };

    const expected = {
      // Expected output
    };

    // Act
    const result = {{module}}.{{method}}(input);

    // Assert
    expect(result).toEqual(expected);
  });

  /*
   * Test: Edge cases
   */
  it.skip('handles {{edge_case}}', () => {
    // Arrange
    const edgeCaseInput = null;

    // Act & Assert
    expect(() => {{module}}.{{method}}(edgeCaseInput))
      .toThrow('Invalid input');
  });

  /*
   * Test: Async operations
   */
  it.skip('{{async_description}}', async () => {
    // Arrange
    const mockResponse = { data: 'test' };
    global.fetch = vi.fn().mockResolvedValue({
      ok: true,
      json: () => Promise.resolve(mockResponse)
    });

    // Act
    const result = await {{module}}.{{async_method}}();

    // Assert
    expect(result).toEqual(mockResponse);
    expect(fetch).toHaveBeenCalledWith('/api/{{endpoint}}');
  });

  /*
   * Test: Error handling
   */
  it.skip('handles API errors', async () => {
    // Arrange
    global.fetch = vi.fn().mockResolvedValue({
      ok: false,
      status: 500,
      json: () => Promise.resolve({ message: 'Server error' })
    });

    // Act & Assert
    await expect({{module}}.{{async_method}}())
      .rejects.toThrow('Server error');
  });

  /*
   * Test: Store behavior (if testing Svelte stores)
   */
  it.skip('updates store value', () => {
    // Arrange
    const store = {{module}}.{{store_name}};

    // Act
    store.set({ value: 'new value' });

    // Assert
    let value;
    store.subscribe(v => { value = v; })();
    expect(value).toEqual({ value: 'new value' });
  });

  /*
   * Test: Store derived values
   */
  it.skip('computes derived value correctly', () => {
    // Arrange
    const store = {{module}}.{{store_name}};
    store.set({ items: [1, 2, 3] });

    // Act
    let derivedValue;
    {{module}}.{{derived_store}}.subscribe(v => { derivedValue = v; })();

    // Assert
    expect(derivedValue).toBe(6); // sum of items
  });

  /*
   * Test: Validation logic
   */
  it.skip('validates {{field}} correctly', () => {
    // Valid cases
    expect({{module}}.validate{{field}}('valid')).toBe(true);
    expect({{module}}.validate{{field}}('another-valid')).toBe(true);

    // Invalid cases
    expect({{module}}.validate{{field}}('')).toBe(false);
    expect({{module}}.validate{{field}}(null)).toBe(false);
    expect({{module}}.validate{{field}}('invalid!')).toBe(false);
  });

  /*
   * Test: Calculation accuracy
   */
  it.skip('calculates {{calculation}} correctly', () => {
    expect({{module}}.calculate(10, 20)).toBe(30);
    expect({{module}}.calculate(0, 0)).toBe(0);
    expect({{module}}.calculate(-5, 5)).toBe(0);
  });

  /*
   * Test: Formatting functions
   */
  it.skip('formats {{data_type}} correctly', () => {
    expect({{module}}.format{{data_type}}(1000)).toBe('$1,000.00');
    expect({{module}}.format{{data_type}}(0)).toBe('$0.00');
    expect({{module}}.format{{data_type}}(-500)).toBe('-$500.00');
  });

  /*
   * Test: Date/time utilities
   */
  it.skip('parses date correctly', () => {
    const result = {{module}}.parseDate('2024-01-15');
    expect(result).toBeInstanceOf(Date);
    expect(result.getFullYear()).toBe(2024);
    expect(result.getMonth()).toBe(0); // January is 0
    expect(result.getDate()).toBe(15);
  });

  it.skip('formats date as YYYY-MM', () => {
    const date = new Date(2024, 0, 15); // January 15, 2024
    expect({{module}}.formatMonth(date)).toBe('2024-01');
  });

  /*
   * Test: Local storage operations
   *
   * Assumes localStorage.setItem/getItem have been replaced with vi.fn()
   * mocks in the test setup (e.g., via vi.stubGlobal).
   */
  it.skip('saves to localStorage', () => {
    const data = { key: 'value' };
    {{module}}.saveToStorage('test-key', data);

    expect(localStorage.setItem).toHaveBeenCalledWith(
      'test-key',
      JSON.stringify(data)
    );
  });

  it.skip('loads from localStorage', () => {
    const data = { key: 'value' };
    localStorage.getItem.mockReturnValue(JSON.stringify(data));

    const result = {{module}}.loadFromStorage('test-key');

    expect(result).toEqual(data);
  });

});

/*
 * Data providers for parameterized tests
 */
const testCases = [
  { input: 'case1', expected: 'result1' },
  { input: 'case2', expected: 'result2' },
  { input: 'case3', expected: 'result3' },
];

it.skip('processes multiple cases correctly', () => {
  testCases.forEach(({ input, expected }) => {
    expect({{module}}.process(input)).toBe(expected);
  });
});