
English Style Converter — Design Spec

Date: 2025-04-12
Status: Approved

Overview

A web app that takes a normal English sentence and converts it into various English styles and tones using an LLM. Single-page, minimal, light-themed, with an intensity slider for fine control and prompt transparency.

Architecture

Approach: Simple and Direct — single SvelteKit project with API routes handling LLM calls. No separate backend, no database, no auth.

Tech Stack

  • Framework: SvelteKit (latest)
  • UI: Svelte 5 with runes ($state, $derived)
  • Language: TypeScript
  • Testing: Vitest (unit + integration)
  • LLM: OpenAI-compatible API (Ollama default, any compatible provider)
  • Styling: Delegated to UI specialist tools (uncodixfy, stitch MCP). Direction: minimal, clean, light colors only, no dark mode.

Project Structure

    english-styler/
      src/
        lib/
          styles.ts      - Style definitions (categories, subtypes, prompt text)
          llm.ts         - OpenAI-compatible client abstraction
          types.ts       - Shared TypeScript types
        routes/
          +page.svelte   - Main converter UI
          +page.ts       - Page load (optional)
          api/
            convert/
              +server.ts - POST endpoint
        app.html         - SvelteKit HTML shell
      static/
      .env               - LLM config
      svelte.config.js
      vite.config.ts
      tsconfig.json
      package.json

Style System

Types

Style:
  id: string (e.g. "sarcastic", "british-polite", "got-kingslanding")
  label: string (e.g. "Sarcastic", "Polite (British)")
  categoryId: string (e.g. "general", "british", "american", "got")
  promptModifier: string (e.g. "Rewrite in a sarcastic, snarky tone with biting wit")

StyleCategory:
  id: string
  label: string
  emoji: string

ConversionRequest:
  text: string
  styleId: string
  intensity: number (1-5)

ConversionResponse:
  original: string
  converted: string
  styleId: string
  intensity: number
  systemPrompt: string (full system prompt for transparency display)
  userMessage: string (full user message for transparency display)
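In TypeScript, lib/types.ts might express these shapes as follows (the sampleStyle constant at the end is an illustrative example value, not part of the spec):

```typescript
// Sketch of lib/types.ts based on the shapes above.
export interface StyleCategory {
  id: string;
  label: string;
  emoji: string;
}

export interface Style {
  id: string;             // e.g. "got-kingslanding"
  label: string;          // e.g. "King's Landing"
  categoryId: string;     // must match a StyleCategory.id
  promptModifier: string; // injected into the system prompt
}

export interface ConversionRequest {
  text: string;
  styleId: string;
  intensity: number; // integer 1-5
}

export interface ConversionResponse {
  original: string;
  converted: string;
  styleId: string;
  intensity: number;
  systemPrompt: string; // for the transparency panel
  userMessage: string;  // for the transparency panel
}

// Illustrative example value (not part of the spec):
export const sampleStyle: Style = {
  id: "sarcastic",
  label: "Sarcastic",
  categoryId: "general",
  promptModifier: "Rewrite in a sarcastic, snarky tone with biting wit",
};
```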

Style Categories and Sub-styles

| Category | Emoji | Sub-styles |
| --- | --- | --- |
| General | 🎭 | Sarcastic, Formal, Casual, Academic, Poetic, Passive-Aggressive |
| British Slang | 🇬🇧 | Polite, Formal, Witty, Gentlemanly, Upper Class, Royal, Victorian, Downton Abbey |
| American Slang | 🇺🇸 | New Yorker, Black American Slang, Southern, Redneck |
| Fun | 🏴‍☠️ | Pirate, Shakespearean, Gen Z Slang |
| Game of Thrones | 🐉 | King's Landing, Wildlings, Winterfell |
| Dystopian | 📰 | Newspeak (Orwellian) |

Intensity Levels

| Level | Label | Prompt effect |
| --- | --- | --- |
| 1 | Subtle | lightly hint at a [style] tone |
| 2 | Moderate | rewrite with a [style] tone |
| 3 | Strong | rewrite strongly in a [style] style |
| 4 | Heavy | rewrite completely in [style] - fully commit to the voice |
| 5 | Maximum | go absolutely all-out [style] - no restraint |

Intensity mapping stored in lib/styles.ts, not hardcoded in LLM call.
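As a sketch, that mapping could be stored as data in lib/styles.ts (the type and function names here are illustrative, not part of the spec):

```typescript
// Illustrative sketch: intensity levels as data, so the LLM call
// never hardcodes prompt wording.
interface IntensityLevel {
  label: string;
  instruction: string; // "[style]" is replaced with the style label
}

const INTENSITY_LEVELS: Record<number, IntensityLevel> = {
  1: { label: "Subtle", instruction: "lightly hint at a [style] tone" },
  2: { label: "Moderate", instruction: "rewrite with a [style] tone" },
  3: { label: "Strong", instruction: "rewrite strongly in a [style] style" },
  4: { label: "Heavy", instruction: "rewrite completely in [style] - fully commit to the voice" },
  5: { label: "Maximum", instruction: "go absolutely all-out [style] - no restraint" },
};

export function intensityInstruction(level: number, styleLabel: string): string {
  const entry = INTENSITY_LEVELS[level];
  if (!entry) throw new Error(`intensity must be 1-5, got ${level}`);
  return entry.instruction.replace("[style]", styleLabel);
}
```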

LLM Abstraction

Configuration (.env only)

    OPENAI_BASE_URL=http://localhost:11434/v1
    OPENAI_API_KEY=ollama
    OPENAI_MODEL=llama3

No UI-based provider config. Server-side only.

Client (lib/llm.ts)

Single OpenAI-compatible client using fetch against /v1/chat/completions. Works with Ollama (default) and any OpenAI-compatible API.

Returns converted text plus full prompt for transparency display.
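A minimal sketch of such a client, with the configuration passed in explicitly (the LlmConfig type and chatComplete name are illustrative assumptions, not the spec's API):

```typescript
// Illustrative sketch of the OpenAI-compatible client in lib/llm.ts.
export interface LlmConfig {
  baseUrl: string; // OPENAI_BASE_URL
  apiKey: string;  // OPENAI_API_KEY
  model: string;   // OPENAI_MODEL
}

export async function chatComplete(
  cfg: LlmConfig,
  systemPrompt: string,
  userMessage: string,
): Promise<string> {
  // Standard /v1/chat/completions shape: works with Ollama and any
  // OpenAI-compatible provider.
  const res = await fetch(`${cfg.baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${cfg.apiKey}`,
    },
    body: JSON.stringify({
      model: cfg.model,
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userMessage },
      ],
    }),
  });
  if (!res.ok) throw new Error(`LLM request failed with status ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content as string;
}
```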

System Prompt Template

    You are an expert English style converter. Rewrite the user's text {intensityInstruction}. {stylePromptModifier} Preserve the core meaning but fully transform the voice and tone. Output ONLY the converted text - no explanations, no labels, no quotes.
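Assembling the template might look like this sketch (only the template wording comes from the spec; the function name is illustrative):

```typescript
// Sketch of prompt assembly in lib/llm.ts.
export function buildSystemPrompt(
  intensityInstruction: string,
  stylePromptModifier: string,
): string {
  return (
    "You are an expert English style converter. " +
    `Rewrite the user's text ${intensityInstruction}. ` +
    `${stylePromptModifier} ` +
    "Preserve the core meaning but fully transform the voice and tone. " +
    "Output ONLY the converted text - no explanations, no labels, no quotes."
  );
}
```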

API Endpoint

POST /api/convert

  • Request body: { text, styleId, intensity }
  • Response: { original, converted, styleId, intensity, systemPrompt, userMessage }
  • Validation:
    • text non-empty (after trimming)
    • styleId exists in styles
    • intensity is integer 1-5
  • Errors:
    • 400 for bad input (with human-readable message)
    • 502 for LLM call failure (with friendly message)
  • No retry logic for MVP1
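The validation rules might be factored into a pure helper so they are easy to unit-test (all names here are illustrative; the real +server.ts would wire this into SvelteKit's request handler):

```typescript
// Illustrative sketch of the validation step for POST /api/convert.
// Returns a human-readable error message, or null if the body is valid.
interface ConvertBody {
  text?: unknown;
  styleId?: unknown;
  intensity?: unknown;
}

export function validateConvertBody(
  body: ConvertBody,
  styleExists: (id: string) => boolean,
): string | null {
  if (typeof body.text !== "string" || body.text.trim() === "") {
    return "Please enter some text to convert.";
  }
  if (typeof body.styleId !== "string" || !styleExists(body.styleId)) {
    return "Unknown style. Please pick one from the list.";
  }
  if (
    typeof body.intensity !== "number" ||
    !Number.isInteger(body.intensity) ||
    body.intensity < 1 ||
    body.intensity > 5
  ) {
    return "Intensity must be a whole number from 1 to 5.";
  }
  return null; // valid: proceed to the LLM call
}
```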

Frontend UI

Layout

Single page, centered, minimal. Light palette only.

  • Title: "English Style Converter" with subtitle
  • Input textarea for text entry
  • Two-step style selector: Category dropdown then Sub-style dropdown
  • Intensity slider (1-5, default 3) with labels "Subtle" to "Maximum"
  • Convert button with loading state
  • Output display area with Copy button
  • Collapsible "Show prompt" section below output revealing system and user prompts

Loading Modal

When the user clicks Convert and the LLM request is in flight, a modal dialog appears with an animated loading word. This replaces a boring spinner with delightful, personality-driven feedback.

Behavior:

  • Modal overlays the page content, preventing interaction during conversion
  • Displays one playful verb at a time, cycling every ~2 seconds
  • Each letter of the word animates in sequentially (per-letter animation, inspired by Tobias Ahlin's Moving Letters)
  • Each new word gets a randomized color from a curated light palette
  • Each new word gets a randomized animation style from a set of entrance animations (slide-up, bounce-in, drop-in, fade-scale, spin-in, spring-from-left, etc.)
  • When the LLM response arrives, the modal dismisses immediately

Loading word list: Bamboozling · Razzmatazzing · Transmogrifying · Alakazamming · Prestidigitating · Metamorphosizing · Enchanting · Voodooing · Witchcrafting · Sorcerizing · Spellcasting · Hocus-pocusing · Incantating · Conjurating · Charmweaving

Animation styles (randomized per word):

  • Slide up from below
  • Bounce in from above
  • Drop in with gravity
  • Scale up from center
  • Fade + rotate in
  • Spring in from left

Color palette (randomized per word): a set of vibrant accent colors that still read well on a light background (coral, teal, violet, amber, emerald, rose, sky blue)

This is a signature UX moment — the loading modal should feel magical and fun.
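The cycling behavior described above could be driven by a small helper, called every ~2 seconds while the request is in flight (the function name is illustrative; word, animation, and color lists come from this section):

```typescript
// Illustrative sketch of the loading-word cycler for the modal.
const LOADING_WORDS = [
  "Bamboozling", "Razzmatazzing", "Transmogrifying", "Alakazamming",
  "Prestidigitating", "Metamorphosizing", "Enchanting", "Voodooing",
  "Witchcrafting", "Sorcerizing", "Spellcasting", "Hocus-pocusing",
  "Incantating", "Conjurating", "Charmweaving",
];
const ANIMATIONS = [
  "slide-up", "bounce-in", "drop-in", "fade-scale", "spin-in", "spring-from-left",
];
const COLORS = ["coral", "teal", "violet", "amber", "emerald", "rose", "skyblue"];

export interface LoadingWord {
  word: string;
  animation: string;
  color: string;
}

// Words cycle in order; animation and color are randomized per word.
export function nextLoadingWord(index: number): LoadingWord {
  return {
    word: LOADING_WORDS[index % LOADING_WORDS.length],
    animation: ANIMATIONS[Math.floor(Math.random() * ANIMATIONS.length)],
    color: COLORS[Math.floor(Math.random() * COLORS.length)],
  };
}
```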

Interaction Flow

  1. User types or pastes text
  2. Selects category, sub-style dropdown populates
  3. Adjusts intensity slider (default: 3)
  4. Clicks Convert — loading modal appears with animated words
  5. LLM responds — modal dismisses, result appears with Copy button
  6. "Show prompt" reveals the full system + user prompt sent to the LLM (collapsible, hidden by default)
  7. Errors display in the output area, never as browser alerts
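Steps 4, 5, and 7 imply a client-side submit helper roughly like this sketch (the function name and the error-body shape are assumptions, not part of the spec):

```typescript
// Illustrative sketch of the client-side call behind the Convert button.
export async function convert(
  text: string,
  styleId: string,
  intensity: number,
): Promise<unknown> {
  const res = await fetch("/api/convert", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text, styleId, intensity }),
  });
  if (!res.ok) {
    // Assumed error shape: { message: string } on 400/502 responses.
    const body = await res.json().catch(() => null);
    throw new Error(body?.message ?? "Conversion failed. Please try again.");
  }
  return res.json(); // ConversionResponse for the output area
}
```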

Svelte 5 Runes

  • $state() for input text, selected style, intensity, output, loading, error, prompt visibility
  • $derived() for flags such as whether a category is selected or Convert is disabled

UI Details

  • Light palette - white/off-white background, subtle borders, accent color on Convert button
  • No dark mode
  • Responsive - stacked full-width on mobile, centered max-width on desktop
  • Copy button - navigator.clipboard with confirmation
  • No streaming - full response then display (v2 candidate)

Testing Strategy

Automated Tests (Vitest)

| Layer | Tests |
| --- | --- |
| lib/styles.ts | Style lookup, category filtering, all styles have valid promptModifiers |
| lib/llm.ts | Prompt construction correctness (mock HTTP, verify prompt) |
| /api/convert | Input validation (empty text, bad style, out-of-range intensity), error responses |

Not Testing (MVP1)

  • E2E, visual regression, snapshots, LLM response content

Manual Testing Checklist (Pre-launch)

  1. Submitting empty text returns 400
  2. Submitting an invalid style returns 400
  3. Submitting intensity 0 or 6 returns a validation error
  4. LLM unreachable returns 502 with friendly message
  5. Each category's styles load in the dropdowns
  6. Intensity slider updates label
  7. Copy button works
  8. Prompt section expands/collapses with correct content
  9. Works on mobile viewport
  10. Loading modal shows animated words with randomized colors and animation styles

Scope - MVP1

In scope:

  • Single style at a time
  • Intensity slider (1-5)
  • Collapsible prompt display
  • Loading modal with animated words (randomized color + animation style)
  • .env-only LLM config
  • OpenAI-compatible client (Ollama default)
  • Light-only, minimal UI
  • Unit + integration tests for backend logic

Explicitly out of scope (v2 candidates):

  • Multi-style / compare mode
  • Conversion history
  • UI-based provider settings
  • Streaming responses
  • Dark mode
  • E2E tests
  • Database or auth