strategy change to TDD

Just say the word when you've reviewed everything, and we can:

- Adjust anything that needs tweaking
- Start implementation with /opsx-apply headroom-foundation

Take your time — good planning now saves endless rework later. Sleep well! 🌙

## LATE IDEA

Perfect! Now I can see the current state. Let me visualize the transformation you're proposing:

CURRENT WORKFLOW vs SDD + TDD HYBRID
════════════════════════════════════

CURRENT (Sequential):
═════════════════════

┌──────────┐     ┌──────────┐     ┌──────────┐     ┌──────────┐
│  Specs   │────▶│  Design  │────▶│   Code   │────▶│  Tests   │
│ (exists) │     │ (exists) │     │ (tasks)  │     │ (tasks)  │
└──────────┘     └──────────┘     └──────────┘     └──────────┘
      │                │                │                │
      ▼                ▼                ▼                ▼
15 spec files     Decisions         328 tasks        "Write tests
(auth, capacity,  documented        (DB, API,       for X" at end
allocation, etc.)                   Frontend)       of each section

PROBLEM: Tests are an afterthought. Code is written, THEN we figure out
how to test it. Coverage becomes retrofit work.

PROPOSED SDD + TDD HYBRID:
══════════════════════════

SPEC-DRIVEN DEVELOPMENT

┌──────────┐     ┌──────────┐     ┌──────────┐
│  Specs   │────▶│  Design  │────▶│  Tasks   │
│ (GIVEN/  │     │ (HOW to  │     │ (WHAT to │
│  WHEN/   │     │  test)   │     │  build)  │
│  THEN)   │     │          │     │          │
└──────────┘     └────┬─────┘     └────┬─────┘
                      │                │
                      ▼                ▼
           ┌─────────────────────────────────┐
           │       TEST-DRIVEN CYCLE         │
           │       (per capability)          │
           │                                 │
           │        ┌──────────┐             │
           │        │   RED    │◄────────┐   │
           │        │  (Write  │         │   │
           │        │  tests   │         │   │
           │        │  first)  │         │   │
           │        └────┬─────┘         │   │
           │             │               │   │
           │             ▼               │   │
           │        ┌──────────┐         │   │
           │        │  GREEN   │         │   │
           │        │ (Code to │         │   │
           │        │  pass)   │         │   │
           │        └────┬─────┘         │   │
           │             │               │   │
           │             ▼               │   │
           │        ┌──────────┐         │   │
           │        │ REFACTOR │─────────┘   │
           │        │  (Clean  │             │
           │        │   up)    │             │
           │        └──────────┘             │
           └─────────────────────────────────┘

KEY PRINCIPLE: Every spec scenario becomes a test case
               BEFORE any implementation code exists.

Now let me check what the current tasks look like to understand the gap:

→ Read openspec\changes\headroom-foundation\tasks.md [offset=65, limit=30]

I can see the current pattern: Code first, tests last (tasks 7.7-7.8, 8.8, etc.).

Here's what I'm thinking about the SDD + TDD hybrid approach:

### The Core Shift

SDD + TDD: THE NEW SEQUENCE
═══════════════════════════

For EACH capability (auth, capacity planning, etc.):

STEP 1: SPEC (Already exists in your specs/)
────────────────────────────────────────────
Your specs are EXCELLENT for TDD - they're GIVEN/WHEN/THEN format!

Example from specs/authentication/spec.md:

    Scenario: Successful login
    WHEN a user submits valid email "john@example.com" and password
    THEN the system validates the credentials
    AND generates a JWT access token (60 minute TTL)
    AND generates a refresh token (7 day TTL)

This IS a test case. We just need to write it as code FIRST.

▼

STEP 2: TEST (New - write BEFORE code)
──────────────────────────────────────
Each spec scenario becomes a test:

Backend (Pest/PHPUnit):

    it('authenticates user with valid credentials and issues JWT tokens', function () {
        User::factory()->create(['email' => 'john@example.com']); // factory default password is 'password'

        $this->postJson('/api/auth/login', [
            'email' => 'john@example.com',
            'password' => 'password',
        ])
            ->assertOk()
            ->assertJsonStructure(['access_token', 'refresh_token']);
        // TTL assertions (60 minute access / 7 day refresh) would decode the token claims.
    });

Frontend (Playwright/Vitest):

    test('successful login issues JWT tokens', async ({ page }) => {
      await page.goto('/login');
      await page.fill('[name="email"]', 'john@example.com');
      await page.fill('[name="password"]', 'password');
      await page.click('button[type="submit"]');

      await expect(page).toHaveURL('/dashboard');
      // localStorage lives in the browser context, so read it via page.evaluate().
      const store = await page.evaluate(() => ({ ...localStorage }));
      expect(store['access_token']).toBeTruthy();
      expect(store['refresh_token']).toBeTruthy();
    });

▼

STEP 3: CODE (Minimal to make tests pass)
─────────────────────────────────────────
Write just enough code to make the tests green.
No premature optimization. No "I'll need this later."

The tests guide the API design, not the other way around.

▼

STEP 4: REFACTOR (Clean up while tests stay green)
──────────────────────────────────────────────────
Now that it works, make it clean:
- Extract methods
- Improve naming
- Add edge case handling
- Optimize queries

Tests ensure nothing breaks during refactoring.

### The Practical Impact on Your 328 Tasks

Your current tasks are organized by layer (DB → Backend → Frontend). With SDD+TDD, we'd organize by capability with tests leading:

CURRENT TASK STRUCTURE:
═══════════════════════
1. Project Setup & Infrastructure
2. Backend Foundation (Laravel)
3. Frontend Foundation (SvelteKit)
4. Database Schema & Migrations
5. Database Seeders
6. Laravel Models & Relationships
7. Authentication (Backend)
8. Authentication (Frontend)
9. Team Member Management (Backend)
10. Team Member Management (Frontend)
... (16 sections total)

Each section ends with "Write tests for X"

PROPOSED SDD+TDD STRUCTURE:
═══════════════════════════
1. Project Setup & Infrastructure (unchanged - foundation)
2. Database Schema (unchanged - structure needed first)
3. AUTHENTICATION CAPABILITY
   ├─ 3.1 Write E2E tests from spec scenarios (Playwright)
   ├─ 3.2 Write API tests from spec scenarios (Pest)
   ├─ 3.3 Implement AuthController (minimal - make tests pass)
   ├─ 3.4 Implement JWT middleware (minimal - make tests pass)
   ├─ 3.5 Implement login page (minimal - make E2E pass)
   ├─ 3.6 Refactor auth flow (clean up)
   └─ 3.7 Update API documentation (Scribe)
4. TEAM MEMBER MANAGEMENT CAPABILITY
   ├─ 4.1 Write E2E tests from spec scenarios
   ├─ 4.2 Write API tests from spec scenarios
   ├─ 4.3 Implement TeamMemberController
   ├─ 4.4 Implement team member pages
   ├─ 4.5 Refactor
   └─ 4.6 Update docs
... (one section per capability)

### Key Questions to Consider

1. Test Granularity: Do we write tests for EVERY spec scenario (you have ~100 scenarios across 15 specs)? Or prioritize critical paths? ==> Every spec scenario.

2. Test Organization: Should tests live with the capability (e.g., tests/Feature/Authentication/) or mirror the current structure? ==> Mirror the current structure.

3. E2E vs Unit: Your specs naturally map to E2E tests. Do we also need unit tests for internal methods, or is E2E + API integration sufficient? ==> We need unit tests for internal methods, so E2E + API + Unit.

4. Frontend Testing: With SvelteKit, do we test components in isolation (Vitest) or rely on E2E (Playwright) for most coverage? ==> We test components in isolation (Vitest) and E2E (Playwright).

5. The "Red" Phase: How do we handle the fact that tests will fail initially? Do we commit failing tests, or keep them local until they pass? ==> We commit failing tests, but mark them as pending until they pass.

6. Regression Testing: Your decision-log mentions "regression tests" - does this mean maintaining a separate regression suite, or is every test potentially a regression test? ==> Every test is potentially a regression test.

What resonates with you? Any of these questions spark concerns or clarifications?

## LATE IDEA INTERVIEW 2

Perfect! I have a crystal-clear picture now. Let me synthesize this into a concrete development strategy:

### SDD + TDD Strategy - Finalized

DEVELOPMENT WORKFLOW (SDD + TDD)
════════════════════════════════

FOR EACH CAPABILITY (15 specs = 15 capability cycles):

PHASE 1: SPEC → TEST (Red Phase)
────────────────────────────────
Input: specs/<capability>/spec.md (GIVEN/WHEN/THEN scenarios)
Output: Failing tests (marked pending)

1. Read spec scenarios
   └── Example: "WHEN user submits valid email, THEN JWT issued"

2. Write E2E test (Playwright)
   └── tests/e2e/<capability>.spec.ts
   └── Mark as test.skip() or test.fixme()

3. Write API test (Pest)
   └── backend/tests/Feature/<Capability>Test.php
   └── Mark as $this->markTestSkipped('pending implementation')

4. Write Unit tests (Pest/Vitest)
   └── backend/tests/Unit/<Class>Test.php
   └── frontend/src/lib/<module>.test.ts
   └── Mark as skip/pending

5. Write Component tests (Vitest + Testing Library)
   └── frontend/src/components/<Component>.test.ts
   └── Mark as skip/pending

6. Commit: "test(<capability>): Add pending tests for all scenarios"
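Step 1 (read spec scenarios) could even be mechanized. A small tooling sketch, assuming specs mark each case with a `Scenario:` heading as in the login example above — the parsing convention is an assumption, not part of any existing OpenSpec tooling:

```typescript
// Hypothetical helper: enumerate the "Scenario:" headings in a spec.md so
// each one can be turned into exactly one pending test. Illustrative only.
export function listScenarios(specMarkdown: string): string[] {
  return specMarkdown
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.startsWith("Scenario:"))
    .map((line) => line.slice("Scenario:".length).trim());
}

// Example: two scenario headings become two pending test names.
const spec = [
  "Scenario: Successful login",
  "WHEN a user submits valid email and password",
  "Scenario: Invalid credentials",
].join("\n");
console.log(listScenarios(spec)); // → [ 'Successful login', 'Invalid credentials' ]
```

A helper like this would make it easy to verify that every scenario in a spec has a matching pending test before the red-phase commit.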

▼

PHASE 2: IMPLEMENT (Green Phase)
────────────────────────────────
Goal: Make pending tests pass, one by one

1. Pick one pending test

2. Remove skip marker

3. Write MINIMAL code to make it pass
   └── Don't worry about elegance
   └── Hardcode if needed temporarily
   └── Just get to green

4. Run test suite
   └── npm run test (frontend)
   └── php artisan test (backend)
   └── npx playwright test (e2e)

5. Commit when green:
   └── "feat(<capability>): Implement <scenario>"
   └── "test(<capability>): Enable <scenario> test"

6. Repeat for all scenarios in capability
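"Minimal code to green" can be made concrete with the token-TTL rule from the authentication spec (60-minute access token, 7-day refresh token). A sketch only — the function name and shape are hypothetical, not from the codebase:

```typescript
// Green-phase sketch: the simplest code that satisfies the spec's TTL rule
// (access token: 60 minutes, refresh token: 7 days). Deliberately minimal;
// structure and configuration come later, in the refactor phase.
type TokenKind = "access" | "refresh";

export function tokenTtlSeconds(kind: TokenKind): number {
  // Hardcoded straight from the spec - just enough to turn the test green.
  return kind === "access" ? 60 * 60 : 7 * 24 * 60 * 60;
}

console.log(tokenTtlSeconds("access"));  // → 3600
console.log(tokenTtlSeconds("refresh")); // → 604800
```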

▼

PHASE 3: REFACTOR (Clean Phase)
───────────────────────────────
Goal: Clean code while keeping all tests green

1. Review implementation for:
   └── Code smells
   └── Duplication
   └── Performance issues
   └── Naming clarity

2. Refactor with confidence (tests guard against regression)

3. Run full test suite after each change

4. Commit: "refactor(<capability>): <what was improved>"

▼

PHASE 4: DOCUMENT
─────────────────
1. Update API documentation (Scribe)
   └── php artisan scribe:generate

2. Verify all tests still pass

3. Commit: "docs(<capability>): Update API documentation"

### Test Organization (Mirrors Current Structure)

backend/
├── tests/
│   ├── Feature/                       # API integration tests
│   │   ├── AuthenticationTest.php     # From specs/authentication/spec.md
│   │   ├── TeamMemberTest.php         # From specs/team-member-management/spec.md
│   │   └── ...                        # One test class per spec
│   │
│   └── Unit/                          # Unit tests for internal methods
│       ├── Services/
│       │   └── JwtServiceTest.php
│       └── ...

frontend/
├── src/
│   ├── lib/
│   │   ├── api/
│   │   │   └── auth.test.ts           # Unit tests for API clients
│   │   └── stores/
│   │       └── auth.test.ts           # Unit tests for stores
│   │
│   ├── components/
│   │   ├── LoginForm.test.ts          # Component tests
│   │   └── ...
│   │
│   └── routes/
│       └── login/
│           └── page.test.ts           # Route/page tests
│
└── tests/e2e/                         # Playwright E2E tests
    ├── auth.spec.ts                   # From specs/authentication/spec.md
    ├── team-members.spec.ts           # From specs/team-member-management/spec.md
    └── ...                            # One spec file per capability

### Test Coverage Strategy

TEST PYRAMID FOR THIS PROJECT
═════════════════════════════

                    ▲
                   /│\          E2E Tests (Playwright)
                  / │ \         ───────────────────────
                 /  │  \        • Critical user journeys
                /   │   \       • ~15 spec files = ~15 e2e suites
               /    │    \      • Tests from specs/<capability>/spec.md
              /     │     \     • Run on CI before deploy
             /──────┼──────\
            /       │       \   API Tests (Pest Feature)
           /        │        \  ─────────────────────────
          /         │         \ • Every spec scenario
         /          │          \ • ~100 scenarios across 15 specs
        /           │           \ • Validates endpoints, auth, validation
       /────────────┼────────────\
      /             │             \  Unit Tests (Pest/Vitest)
     /              │              \ ─────────────────────────
    /               │               \ • Internal methods, services
   /                │                \ • Business logic, calculations
  /                 │                 \ • Edge cases, error handling
 /                  │                  \
/───────────────────┴───────────────────\

Coverage Target: >70% (from decision-log)
Every test is a regression test - run full suite on every PR

### Commit Message Convention

test(<capability>): Add pending tests for <scenario>
  ↳ Initial failing tests, marked skip/pending

feat(<capability>): Implement <specific feature>
  ↳ Enable test + minimal implementation

refactor(<capability>): <description of improvement>
  ↳ Clean up while tests green

docs(<capability>): <documentation change>
  ↳ Scribe generation, README updates
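A convention like this is easy to sanity-check mechanically. A hypothetical sketch (the regex and function are illustrative, not an existing project tool, and could back a commit-msg git hook):

```typescript
// Hypothetical validator for the commit convention above: type must be one
// of test|feat|refactor|docs, followed by a lowercase capability in
// parentheses, a colon, and a non-empty subject. Illustrative only.
const COMMIT_RE = /^(test|feat|refactor|docs)\(([a-z][a-z-]*)\): .+$/;

export function isValidCommit(message: string): boolean {
  return COMMIT_RE.test(message);
}

console.log(isValidCommit("feat(auth): Implement user login with JWT tokens")); // → true
console.log(isValidCommit("fixed some stuff"));                                 // → false
```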

### The Complete Capability Cycle

CAPABILITY: Authentication (Example)
════════════════════════════════════

SPEC: specs/authentication/spec.md has 10 scenarios

CYCLE 1: Write all pending tests
├─ tests/e2e/auth.spec.ts (10 scenarios)
├─ backend/tests/Feature/AuthenticationTest.php (10 scenarios)
├─ backend/tests/Unit/JwtServiceTest.php (unit tests for internal methods)
├─ frontend/src/components/LoginForm.test.ts (component tests)
└─ Commit: "test(auth): Add pending tests for all authentication scenarios"

CYCLE 2-11: Implement one scenario at a time
├─ Remove skip from "Successful login" test
├─ Implement AuthController::login()
├─ Implement JWT generation
├─ Tests pass
├─ Commit: "feat(auth): Implement user login with JWT tokens"
│
├─ Remove skip from "Invalid credentials" test
├─ Add validation logic
├─ Tests pass
├─ Commit: "feat(auth): Add credential validation"
│
└─ ... repeat for all 10 scenarios

CYCLE 12: Refactor
├─ Extract JwtService from AuthController
├─ Improve error messages
├─ Optimize token generation
├─ All tests still pass
└─ Commit: "refactor(auth): Extract JwtService, improve error handling"

CYCLE 13: Document
├─ php artisan scribe:generate
├─ Verify all tests pass
└─ Commit: "docs(auth): Update API documentation"

### Questions Before I Update the Config

1. Test Runner Integration: Should I add npm scripts to frontend/package.json and composer scripts to backend/composer.json for running the different test suites? ==> Yes and Yes.

2. CI/CD: Do you want the config to specify running tests in CI? (e.g., GitHub Actions workflow) ==> Not at the moment.

3. Pending Test Markers: For PHP/Pest, should pending tests use ->markTestSkipped() or ->todo()? For Playwright, test.skip() or test.fixme()? ==> Todo and Fixme.

4. Test Naming: Should test names mirror the spec scenario names exactly, or use a shorter convention? ==> Use the ideal convention.

5. Task Structure: Should I update the existing tasks.md to reflect this SDD+TDD workflow, or is this a "meta" change that lives in the config? ==> Update the existing tasks.md.

Ready to update openspec/config.yaml with this strategy once you confirm!

## LATE IDEA INTERVIEW 3

I understand you're in plan mode. Let me construct a comprehensive plan for implementing the SDD + TDD strategy.

### Implementation Plan

Phase 1: Update OpenSpec Config

File: openspec/config.yaml

Add new sections:

- development_strategy: SDD + TDD workflow definition
- testing: Test organization, naming conventions, pending markers
- scripts: Test runner commands for both frontend and backend
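Roughly, those three sections might look like this — a sketch only; every key name here is an assumption for illustration, not the actual OpenSpec config schema:

```yaml
# Illustrative sketch only - key names are assumptions, not OpenSpec schema.
development_strategy:
  approach: sdd-tdd
  cycle: [spec-to-test, implement, refactor, document]
testing:
  organization: mirror-current-structure
  pending_markers:
    pest: todo
    playwright: fixme
    vitest: skip
  coverage_min: 70
scripts:
  frontend: npm run test && npm run test:e2e
  backend: composer test
```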

Phase 2: Add Test Runner Scripts

Frontend (frontend/package.json):

{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest",
    "test:ui": "vitest --ui",
    "test:e2e": "playwright test",
    "test:e2e:ui": "playwright test --ui",
    "test:all": "npm run test && npm run test:e2e"
  }
}

Backend (backend/composer.json):

{
  "scripts": {
    "test": "pest",
    "test:unit": "pest --testsuite=Unit",
    "test:feature": "pest --testsuite=Feature",
    "test:coverage": "pest --coverage --min=70"
  }
}

Phase 3: Update Tasks Structure

File: openspec/changes/headroom-foundation/tasks.md

Reorganize from layer-based to capability-based with SDD+TDD phases:

    ## Capability 1: Authentication

    ### Phase 1: Write Pending Tests (Red)
    - [ ] 1.1 Write E2E tests for all auth scenarios (Playwright, test.fixme)
    - [ ] 1.2 Write API tests for all auth scenarios (Pest, ->todo)
    - [ ] 1.3 Write unit tests for JwtService (Pest, ->todo)
    - [ ] 1.4 Write component tests for LoginForm (Vitest, test.skip)

    ### Phase 2: Implement (Green)
    - [ ] 1.5 Enable and implement: Successful login scenario
    - [ ] 1.6 Enable and implement: Invalid credentials scenario
    - [ ] 1.7 Enable and implement: Token refresh scenario
    - [ ] 1.8 Enable and implement: Logout scenario
    ... (all 10 scenarios)

    ### Phase 3: Refactor
    - [ ] 1.x Refactor auth implementation

    ### Phase 4: Document
    - [ ] 1.y Generate API docs with Scribe

Phase 4: Create Test Templates

Create starter templates for:

- backend/tests/Feature/TemplateTest.php (Pest)
- frontend/tests/e2e/template.spec.ts (Playwright)
- frontend/src/components/Template.test.ts (Vitest)

### Questions

1. Test naming convention: You said "ideal convention" - should I propose a specific format like it('authenticates user with valid credentials', ...) or test('successful login issues JWT tokens', ...)?

2. Task reorganization scope: Should I reorganize ALL 328 tasks now, or start with just the first 1-2 capabilities as a pilot?

3. Template location: Should templates live in openspec/templates/ or within each project's test directory?
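For question 1, one possible middle ground is to derive test titles mechanically from the spec scenario headings, so names mirror the spec without manual rewording. A sketch with a hypothetical helper — the convention (strip the `Scenario:` prefix, lowercase the first word) is an assumption, not a decided standard:

```typescript
// Hypothetical sketch: turn a spec scenario heading into a test title so
// test names stay traceable back to specs/<capability>/spec.md.
export function scenarioToTestTitle(heading: string): string {
  const title = heading.replace(/^Scenario:\s*/i, "").trim();
  return title.charAt(0).toLowerCase() + title.slice(1);
}

console.log(scenarioToTestTitle("Scenario: Successful login")); // → "successful login"
```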

Ready to proceed with implementation once you approve this plan!