Compare commits

...

10 Commits

2370 changed files with 742031 additions and 6 deletions

.gitignore

@@ -9,3 +9,9 @@ venv/
.companion/
dist/
build/
# Node.js / UI
node_modules/
ui/node_modules/
ui/dist/

Dockerfile

@@ -0,0 +1,55 @@
# Build stage
FROM python:3.11-slim AS builder
WORKDIR /app
# Install build dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Build dependency wheels (an editable install is skipped here: the source
# tree is not copied into the builder stage, so there is nothing to install)
COPY pyproject.toml .
RUN pip wheel --no-cache-dir --wheel-dir /app/wheels \
pydantic lancedb pyarrow requests watchdog typer rich numpy httpx sse-starlette fastapi uvicorn
# Production stage
FROM python:3.11-slim AS production
WORKDIR /app
# No extra runtime system packages are required; the base image stays slim
# Copy wheels and install
COPY --from=builder /app/wheels /wheels
RUN pip install --no-cache-dir /wheels/*
# Copy application code (companion/ already contains forge/, indexer_daemon/, and rag/)
COPY companion/ ./companion/
# Create directories for data
RUN mkdir -p /data/vectors /data/memory /models
# Copy default config
COPY config.json /app/config.json
# Environment variables
ENV PYTHONPATH=/app
ENV COMPANION_CONFIG=/app/config.json
ENV COMPANION_DATA_DIR=/data
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD python -c "import requests; requests.get('http://localhost:7373/health')" || exit 1
# API port
EXPOSE 7373
# Default command
CMD ["python", "-m", "uvicorn", "companion.api:app", "--host", "0.0.0.0", "--port", "7373"]

Dockerfile.indexer

@@ -0,0 +1,32 @@
# Indexer-only Dockerfile (lightweight, no API dependencies)
FROM python:3.11-slim
WORKDIR /app
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
RUN pip install --no-cache-dir \
pydantic lancedb pyarrow requests watchdog typer rich numpy httpx
# Copy application code (companion/ already contains indexer_daemon/ and rag/)
COPY companion/ ./companion/
# Create directories for data
RUN mkdir -p /data/vectors
# Copy default config
COPY config.json /app/config.json
# Environment variables
ENV PYTHONPATH=/app
ENV COMPANION_CONFIG=/app/config.json
ENV COMPANION_DATA_DIR=/data
# Default command (can be overridden)
CMD ["python", "-m", "companion.indexer_daemon.cli", "index"]

README.md

@@ -0,0 +1,207 @@
# Personal Companion AI
A fully local, privacy-first AI companion trained on your Obsidian vault. Combines fine-tuned reasoning with RAG-powered memory to answer questions about your life, relationships, and experiences.
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Personal Companion AI │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌─────────────────┐ ┌──────────┐ │
│ │ React UI │◄──►│ FastAPI │◄──►│ Ollama │ │
│ │ (Vite) │ │ Backend │ │ Models │ │
│ └──────────────┘ └─────────────────┘ └──────────┘ │
│ │ │
│ ┌─────────────────────┼─────────────────────┐ │
│ ↓ ↓ ↓ │
│ ┌──────────────┐ ┌─────────────────┐ ┌──────────┐ │
│ │ Fine-tuned │ │ RAG Engine │ │ Vault │ │
│ │ 7B Model │ │ (LanceDB) │ │ Indexer │ │
│ │ │ │ │ │ │ │
│ │ Quarterly │ │ • semantic │ │ • watch │ │
│ │ retrain │ │ search │ │ • chunk │ │
│ │ │ │ • hybrid │ │ • embed │ │
│ │ │ │ filters │ │ │ │
│ │ │ │ • relationship │ │ Daily │ │
│ │ │ │ graph │ │ auto-sync│ │
│ └──────────────┘ └─────────────────┘ └──────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
```
## Quick Start
### Prerequisites
- Python 3.11+
- Node.js 18+ (for UI)
- Ollama running locally
- RTX 5070 or equivalent (12GB+ VRAM for fine-tuning)
### Installation
```bash
# Clone and setup
cd kv-rag
pip install -e ".[dev]"
# Install UI dependencies
cd ui && npm install && cd ..
# Pull required Ollama models
ollama pull mxbai-embed-large
ollama pull llama3.1:8b
```
### Configuration
Copy `config.json` and customize:
```json
{
"vault": {
"path": "/path/to/your/obsidian/vault"
},
"companion": {
"name": "SAN"
}
}
```
See [docs/config.md](docs/config.md) for full configuration reference.
### Running
**Terminal 1 - Backend:**
```bash
python -m uvicorn companion.api:app --host 0.0.0.0 --port 7373
```
**Terminal 2 - Frontend:**
```bash
cd ui && npm run dev
```
**Terminal 3 - Indexer (optional):**
```bash
# One-time full index
python -m companion.indexer_daemon.cli index
# Or continuous file watching
python -m companion.indexer_daemon.watcher
```
Open http://localhost:5173
## Usage
### Chat Interface
Type messages naturally. The companion will:
- Retrieve relevant context from your vault
- Reference past events, relationships, decisions
- Provide reflective, companion-style responses
### Indexing Your Vault
```bash
# Full reindex
python -m companion.indexer_daemon.cli index
# Incremental sync
python -m companion.indexer_daemon.cli sync
# Check status
python -m companion.indexer_daemon.cli status
```
### Fine-Tuning (Optional)
Train a custom model that reasons like you:
```bash
# Extract training examples from vault reflections
python -m companion.forge.cli extract
# Train with QLoRA (4-6 hours on RTX 5070)
python -m companion.forge.cli train --epochs 3
# Reload the fine-tuned model
python -m companion.forge.cli reload ~/.companion/training/final
```
## Modules
| Module | Purpose | Documentation |
|--------|---------|---------------|
| `companion.config` | Configuration management | [docs/config.md](docs/config.md) |
| `companion.rag` | RAG engine (chunk, embed, search) | [docs/rag.md](docs/rag.md) |
| `companion.forge` | Fine-tuning pipeline | [docs/forge.md](docs/forge.md) |
| `companion.api` | FastAPI backend | [docs/api.md](docs/api.md) |
| `ui/` | React frontend | [docs/ui.md](docs/ui.md) |
## Project Structure
```
kv-rag/
├── companion/ # Python backend
│ ├── __init__.py
│ ├── api.py # FastAPI app
│ ├── config.py # Configuration
│ ├── memory.py # Session memory (SQLite)
│ ├── orchestrator.py # Chat orchestration
│ ├── prompts.py # Prompt templates
│ ├── rag/ # RAG modules
│ │ ├── chunker.py
│ │ ├── embedder.py
│ │ ├── indexer.py
│ │ ├── search.py
│ │ └── vector_store.py
│ ├── forge/ # Fine-tuning
│ │ ├── extract.py
│ │ ├── train.py
│ │ ├── export.py
│ │ └── reload.py
│ └── indexer_daemon/ # File watching
│ ├── cli.py
│ └── watcher.py
├── ui/ # React frontend
│ ├── src/
│ │ ├── App.tsx
│ │ ├── components/
│ │ └── hooks/
│ └── package.json
├── tests/ # Test suite
├── config.json # Configuration file
├── docs/ # Documentation
└── README.md
```
## Testing
```bash
# Run all tests
pytest tests/ -v
# Run specific module
pytest tests/test_chunker.py -v
```
## Privacy & Security
- **Fully Local**: No data leaves your machine
- **Vault Data**: Never sent to external APIs for training
- **Config**: `local_only: true` blocks external API calls
- **Sensitive Tags**: Configurable patterns for health, finance, etc.
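Tag-based detection can be as simple as glob matching against the configured patterns. The sketch below uses the defaults from `config-schema.json` (`security.sensitive_patterns`); the function name and logic are illustrative, not the project's actual detector.

```python
import fnmatch

# Default patterns from config-schema.json (security.sensitive_patterns)
SENSITIVE_PATTERNS = ["#mentalhealth", "#physicalhealth", "#finance", "#Relations"]

def is_sensitive(tags, patterns=SENSITIVE_PATTERNS):
    """Return True if any note tag matches a sensitive pattern (glob-style)."""
    return any(fnmatch.fnmatch(tag, pat) for tag in tags for pat in patterns)
```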
## License
MIT License - See LICENSE file
## Acknowledgments
- Built with [Unsloth](https://github.com/unslothai/unsloth) for efficient fine-tuning
- Uses [LanceDB](https://lancedb.github.io/) for vector storage
- UI inspired by [Obsidian](https://obsidian.md/) aesthetics

USER_MANUAL.md

@@ -0,0 +1,364 @@
# Companion AI - User Manual
A practical guide to running and using your personal AI companion.
---
## Quick Start (5 minutes)
### Option 1: Docker (Recommended)
```bash
# 1. Clone and enter directory
git clone https://github.com/santhoshjan/companion.git
cd companion
# 2. Edit config.json - set your vault path
nano config.json
# Change: "path": "/home/yourname/KnowledgeVault/Default"
# 3. Start everything
docker-compose up -d
# 4. Open the UI
open http://localhost:5173
```
### Option 2: Manual Install
```bash
# Install dependencies
pip install -e ".[dev]"
# Run the API
python -m uvicorn companion.api:app --host 0.0.0.0 --port 7373
```
---
## Configuration
Before first use, edit `config.json`:
```json
{
"vault": {
"path": "/path/to/your/obsidian/vault"
}
}
```
**Required paths to set:**
- `vault.path` - Your Obsidian vault directory
- `rag.vector_store.path` - Where to store embeddings (default: `~/.companion/vectors`)
- `companion.memory.persistent_store` - Conversation history (default: `~/.companion/memory.db`)
---
## Indexing Your Vault
The system needs to index your vault before it can answer questions.
### One-time Full Index
```bash
# Using Python module
python -m companion.indexer_daemon.cli index
# Or with Docker
docker-compose exec companion-api python -m companion.indexer_daemon.cli index
```
**Output:**
```
Running full index...
Done. Total chunks: 12,847
```
### Continuous File Watching
```bash
# Auto-sync when files change
python -m companion.indexer_daemon.watcher
```
**What it does:**
- Monitors your vault for changes
- Automatically re-indexes modified files
- Debounced (5-second delay to batch rapid changes)
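The debounce behaviour above can be sketched as follows (an illustration of the idea, not the watcher's actual code; `callback` stands in for a re-index function):

```python
import threading

class Debouncer:
    """Coalesce bursts of file events into a single callback invocation."""

    def __init__(self, callback, delay: float = 5.0):
        self.callback = callback        # called with a sorted list of paths
        self.delay = delay              # seconds of quiet before flushing
        self._timer = None
        self._pending = set()
        self._lock = threading.Lock()

    def notify(self, path: str) -> None:
        """Record a changed path and (re)start the countdown."""
        with self._lock:
            self._pending.add(path)
            if self._timer is not None:
                self._timer.cancel()    # a new event resets the 5s window
            self._timer = threading.Timer(self.delay, self._flush)
            self._timer.start()

    def _flush(self) -> None:
        with self._lock:
            paths, self._pending = self._pending, set()
        self.callback(sorted(paths))    # one batched call for the whole burst
```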
### Check Index Status
```bash
python -m companion.indexer_daemon.cli status
```
**Output:**
```
Total chunks: 12,847
Indexed files: 847
Unindexed files: 3
```
### Incremental Sync
```bash
# Sync only changed files
python -m companion.indexer_daemon.cli sync
```
**When to use:**
- After editing a few notes
- Faster than full re-index
- Keeps vector store updated
---
## Using the Chat Interface
### Web UI
1. Start the backend: `python -m uvicorn companion.api:app --port 7373`
2. In another terminal: `cd ui && npm run dev`
3. Open http://localhost:5173
**Interface:**
- Type messages in the bottom textarea
- Press Enter to send
- Citations appear in a side panel when RAG retrieves context
- Conversation history persists per session
### API Directly
```bash
# Start a conversation
curl -X POST http://localhost:7373/chat \
-H "Content-Type: application/json" \
-d '{
"message": "What did I write about my trip to Japan?",
"stream": true
}'
```
**Parameters:**
- `message` (required): Your question
- `session_id` (optional): Continue existing conversation
- `stream` (optional): Stream response (default: true)
- `use_rag` (optional): Enable vault search (default: true)
### Get Session History
```bash
curl http://localhost:7373/sessions/{session_id}/history
```
---
## Fine-Tuning (Optional)
### Extract Training Data
```bash
# Extract reflection examples from your vault
python -m companion.forge.extract \
--output training_data.jsonl \
--min-length 100
```
### Train Model
```bash
# Start training (takes hours, needs GPU)
python -m companion.forge.train \
--data training_data.jsonl \
--output-dir ./models
```
### Export to GGUF
```bash
# Convert for llama.cpp inference
python -m companion.forge.export \
--model ./models/checkpoint \
--output companion-7b.gguf
```
### Reload Model (Hot Swap)
```bash
# Admin endpoint - reload without restarting API
curl -X POST http://localhost:7373/admin/reload-model \
-H "Content-Type: application/json" \
-d '{"model_path": "/path/to/companion-7b.gguf"}'
```
---
## Command Reference
### Indexer Commands
| Command | Description | Example |
|---------|-------------|---------|
| `index` | Full re-index of vault | `cli index` |
| `sync` | Incremental sync | `cli sync` |
| `reindex` | Force full reindex | `cli reindex` |
| `status` | Show index statistics | `cli status` |
### Common Options
All indexer commands support:
- `--config PATH` : Use alternative config file
- `--verbose` : Detailed logging
### Environment Variables
| Variable | Purpose | Example |
|----------|---------|---------|
| `COMPANION_CONFIG` | Config file path | `/etc/companion/config.json` |
| `COMPANION_DATA_DIR` | Data storage root | `/var/lib/companion` |
| `VAULT_PATH` | Override vault location | `/home/user/vault` |
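For example, to point a one-off CLI run at alternative locations (the paths below are illustrative):

```shell
# Override config, data, and vault locations for this shell session
export COMPANION_CONFIG=/etc/companion/config.json
export COMPANION_DATA_DIR=/var/lib/companion
export VAULT_PATH="$HOME/vault"
# then run e.g.: python -m companion.indexer_daemon.cli status
```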
---
## Troubleshooting
### "No module named 'companion'"
```bash
# Install in editable mode
pip install -e "."
# Or set PYTHONPATH
export PYTHONPATH=/path/to/companion:$PYTHONPATH
```
### "Failed to generate embedding"
- Check Ollama is running: `curl http://localhost:11434/api/tags`
- Verify embedding model is pulled: `ollama pull mxbai-embed-large`
- Check `config.json` has correct `rag.embedding.base_url`
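You can also exercise Ollama's `/api/embeddings` endpoint directly and confirm the vector dimensionality matches your config (a stdlib-only sketch; the helper names are ours):

```python
import json
from urllib import request

def embedding_dimensions(data: dict) -> int:
    """Return the vector length, or raise if the response is malformed."""
    vec = data.get("embedding")
    if not isinstance(vec, list) or not vec:
        raise ValueError(f"unexpected embeddings response: {data}")
    return len(vec)

def check_embedding(base_url: str = "http://localhost:11434",
                    model: str = "mxbai-embed-large") -> int:
    """Request one embedding from Ollama and report its dimensionality."""
    body = json.dumps({"model": model, "prompt": "hello"}).encode()
    req = request.Request(f"{base_url}/api/embeddings", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=30) as resp:
        return embedding_dimensions(json.loads(resp.read()))
```

With `mxbai-embed-large` the result should match the configured `rag.embedding.dimensions` (1024).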
### "No results from search"
1. Check index status: `cli status`
2. Verify vault path is correct in config
3. Run full index: `cli index`
4. Check file patterns match your notes (default: `*.md`)
### "Port 7373 already in use"
```bash
# Find and kill process
lsof -i :7373
kill <PID>
# Or use different port
python -m uvicorn companion.api:app --port 7374
```
### Windows: Scripts won't run
```powershell
# Set execution policy
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
# Or run with explicit bypass
powershell -ExecutionPolicy Bypass -File scripts/install.ps1
```
---
## Systemd Commands (Linux)
If installed as a service:
```bash
# Start/stop/restart
sudo systemctl start companion-api
sudo systemctl stop companion-api
sudo systemctl restart companion-api
# Check status
sudo systemctl status companion-api
# View logs
sudo journalctl -u companion-api -f
# Enable auto-start
sudo systemctl enable companion-api
```
---
## Docker Commands
```bash
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f companion-api
# Run indexer command
docker-compose exec companion-api python -m companion.indexer_daemon.cli index
# Rebuild after code changes
docker-compose up -d --build
# Stop everything
docker-compose down
```
---
## Performance Tips
### Indexing Large Vaults
- First index takes time (1-5 hours for 1000+ notes)
- Subsequent syncs are fast (seconds to minutes)
- Use `sync` instead of `reindex` for updates
### Memory Usage
- API server: ~500MB-1GB RAM
- Indexer: ~200MB while watching
- LanceDB is disk-based, scales to 100K+ chunks
### GPU Acceleration
For fine-tuning (not required for RAG):
- Requires CUDA-capable GPU
- Install `unsloth` and `torch` with CUDA support
- Training uses ~10GB VRAM for 7B models
---
## Best Practices
1. **Index regularly**: Run `sync` daily or use the file watcher
2. **Back up vectors**: Copy `~/.companion/vectors.lance` periodically
3. **Use session IDs**: Pass `session_id` to maintain conversation context
4. **Monitor citations**: Check the sources panel to verify RAG is working
5. **Keep Ollama running**: The embedding service must be available
---
## Getting Help
- Check logs: `journalctl -u companion-api` or `docker-compose logs`
- Verify config: `python -c "from companion.config import load_config; print(load_config())"`
- Test search directly: `python -m companion.indexer_daemon.cli search "test query"`
---
## Next Steps
1. Index your vault
2. Start the API server
3. Open the web UI
4. Ask your first question
5. Explore the citations
Enjoy your personal AI companion!

config-schema.json

@@ -0,0 +1,667 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://companion.ai/config-schema.json",
"title": "Companion AI Configuration",
"description": "Configuration schema for Personal Companion AI",
"type": "object",
"required": ["companion", "vault", "rag", "model", "api", "ui", "logging", "security"],
"properties": {
"companion": {
"type": "object",
"title": "Companion Settings",
"required": ["name", "persona", "memory", "chat"],
"properties": {
"name": {
"type": "string",
"description": "Display name for the companion",
"default": "SAN"
},
"persona": {
"type": "object",
"required": ["role", "tone", "style", "boundaries"],
"properties": {
"role": {
"type": "string",
"description": "Role of the companion",
"enum": ["companion", "advisor", "reflector"],
"default": "companion"
},
"tone": {
"type": "string",
"description": "Communication tone",
"enum": ["reflective", "supportive", "analytical", "mixed"],
"default": "reflective"
},
"style": {
"type": "string",
"description": "Interaction style",
"enum": ["questioning", "supportive", "direct", "mixed"],
"default": "questioning"
},
"boundaries": {
"type": "array",
"description": "Behavioral guardrails",
"items": {
"type": "string",
"enum": [
"does_not_impersonate_user",
"no_future_predictions",
"no_medical_or_legal_advice"
]
},
"default": ["does_not_impersonate_user", "no_future_predictions", "no_medical_or_legal_advice"]
}
}
},
"memory": {
"type": "object",
"required": ["session_turns", "persistent_store", "summarize_after"],
"properties": {
"session_turns": {
"type": "integer",
"description": "Messages to keep in context",
"minimum": 1,
"maximum": 100,
"default": 20
},
"persistent_store": {
"type": "string",
"description": "SQLite database path",
"default": "~/.companion/memory.db"
},
"summarize_after": {
"type": "integer",
"description": "Summarize history after N turns",
"minimum": 5,
"maximum": 50,
"default": 10
}
}
},
"chat": {
"type": "object",
"required": ["streaming", "max_response_tokens", "default_temperature", "allow_temperature_override"],
"properties": {
"streaming": {
"type": "boolean",
"description": "Stream responses in real-time",
"default": true
},
"max_response_tokens": {
"type": "integer",
"description": "Max tokens per response",
"minimum": 256,
"maximum": 8192,
"default": 2048
},
"default_temperature": {
"type": "number",
"description": "Creativity level (0.0=deterministic, 2.0=creative)",
"minimum": 0.0,
"maximum": 2.0,
"default": 0.7
},
"allow_temperature_override": {
"type": "boolean",
"description": "Let users adjust temperature",
"default": true
}
}
}
}
},
"vault": {
"type": "object",
"title": "Vault Settings",
"required": ["path", "indexing", "chunking_rules"],
"properties": {
"path": {
"type": "string",
"description": "Absolute path to Obsidian vault root"
},
"indexing": {
"type": "object",
"required": ["auto_sync", "auto_sync_interval_minutes", "watch_fs_events", "file_patterns", "deny_dirs", "deny_patterns"],
"properties": {
"auto_sync": {
"type": "boolean",
"description": "Enable automatic syncing",
"default": true
},
"auto_sync_interval_minutes": {
"type": "integer",
"description": "Minutes between full syncs",
"minimum": 60,
"maximum": 10080,
"default": 1440
},
"watch_fs_events": {
"type": "boolean",
"description": "Watch for file system changes",
"default": true
},
"file_patterns": {
"type": "array",
"description": "File patterns to index",
"items": { "type": "string" },
"default": ["*.md"]
},
"deny_dirs": {
"type": "array",
"description": "Directories to skip",
"items": { "type": "string" },
"default": [".obsidian", ".trash", "zzz-Archive", ".git", ".logseq"]
},
"deny_patterns": {
"type": "array",
"description": "File patterns to ignore",
"items": { "type": "string" },
"default": ["*.tmp", "*.bak", "*conflict*", ".*"]
}
}
},
"chunking_rules": {
"type": "object",
"description": "Per-directory chunking rules (key: glob pattern, value: rule)",
"additionalProperties": {
"type": "object",
"required": ["strategy", "chunk_size", "chunk_overlap"],
"properties": {
"strategy": {
"type": "string",
"enum": ["sliding_window", "section"],
"description": "Chunking strategy"
},
"chunk_size": {
"type": "integer",
"description": "Target chunk size in words",
"minimum": 50,
"maximum": 2000
},
"chunk_overlap": {
"type": "integer",
"description": "Overlap between chunks in words",
"minimum": 0,
"maximum": 500
},
"section_tags": {
"type": "array",
"description": "Tags that mark sections (for section strategy)",
"items": { "type": "string" }
}
}
}
}
}
},
"rag": {
"type": "object",
"title": "RAG Settings",
"required": ["embedding", "vector_store", "search"],
"properties": {
"embedding": {
"type": "object",
"required": ["provider", "model", "base_url", "dimensions", "batch_size"],
"properties": {
"provider": {
"type": "string",
"description": "Embedding service provider",
"enum": ["ollama"],
"default": "ollama"
},
"model": {
"type": "string",
"description": "Model name for embeddings",
"enum": ["mxbai-embed-large", "nomic-embed-text", "all-minilm"],
"default": "mxbai-embed-large"
},
"base_url": {
"type": "string",
"description": "Provider API endpoint",
"format": "uri",
"default": "http://localhost:11434"
},
"dimensions": {
"type": "integer",
"description": "Embedding vector size",
"enum": [384, 768, 1024],
"default": 1024
},
"batch_size": {
"type": "integer",
"description": "Texts per embedding batch",
"minimum": 1,
"maximum": 256,
"default": 32
}
}
},
"vector_store": {
"type": "object",
"required": ["type", "path"],
"properties": {
"type": {
"type": "string",
"description": "Vector database type",
"enum": ["lancedb"],
"default": "lancedb"
},
"path": {
"type": "string",
"description": "Storage path",
"default": "~/.companion/vectors.lance"
}
}
},
"search": {
"type": "object",
"required": ["default_top_k", "max_top_k", "similarity_threshold", "hybrid_search", "filters"],
"properties": {
"default_top_k": {
"type": "integer",
"description": "Default results to retrieve",
"minimum": 1,
"maximum": 100,
"default": 8
},
"max_top_k": {
"type": "integer",
"description": "Maximum allowed results",
"minimum": 1,
"maximum": 100,
"default": 20
},
"similarity_threshold": {
"type": "number",
"description": "Minimum relevance score (0-1)",
"minimum": 0.0,
"maximum": 1.0,
"default": 0.75
},
"hybrid_search": {
"type": "object",
"required": ["enabled", "keyword_weight", "semantic_weight"],
"properties": {
"enabled": {
"type": "boolean",
"description": "Combine keyword + semantic search",
"default": true
},
"keyword_weight": {
"type": "number",
"description": "Keyword search weight",
"minimum": 0.0,
"maximum": 1.0,
"default": 0.3
},
"semantic_weight": {
"type": "number",
"description": "Semantic search weight",
"minimum": 0.0,
"maximum": 1.0,
"default": 0.7
}
}
},
"filters": {
"type": "object",
"required": ["date_range_enabled", "tag_filter_enabled", "directory_filter_enabled"],
"properties": {
"date_range_enabled": {
"type": "boolean",
"description": "Enable date range filtering",
"default": true
},
"tag_filter_enabled": {
"type": "boolean",
"description": "Enable tag filtering",
"default": true
},
"directory_filter_enabled": {
"type": "boolean",
"description": "Enable directory filtering",
"default": true
}
}
}
}
}
}
},
"model": {
"type": "object",
"title": "Model Settings",
"required": ["inference", "fine_tuning", "retrain_schedule"],
"properties": {
"inference": {
"type": "object",
"required": ["backend", "model_path", "context_length", "gpu_layers", "batch_size", "threads"],
"properties": {
"backend": {
"type": "string",
"description": "Inference engine",
"enum": ["llama.cpp", "vllm"],
"default": "llama.cpp"
},
"model_path": {
"type": "string",
"description": "Path to GGUF or HF model",
"default": "~/.companion/models/companion-7b-q4.gguf"
},
"context_length": {
"type": "integer",
"description": "Max context tokens",
"minimum": 2048,
"maximum": 32768,
"default": 8192
},
"gpu_layers": {
"type": "integer",
"description": "Layers to offload to GPU (0 for CPU-only)",
"minimum": 0,
"maximum": 100,
"default": 35
},
"batch_size": {
"type": "integer",
"description": "Inference batch size",
"minimum": 1,
"maximum": 2048,
"default": 512
},
"threads": {
"type": "integer",
"description": "CPU threads for inference",
"minimum": 1,
"maximum": 64,
"default": 8
}
}
},
"fine_tuning": {
"type": "object",
"required": ["base_model", "output_dir", "lora_rank", "lora_alpha", "learning_rate", "batch_size", "gradient_accumulation_steps", "num_epochs", "warmup_steps", "save_steps", "eval_steps", "training_data_path", "validation_split"],
"properties": {
"base_model": {
"type": "string",
"description": "Base model for fine-tuning",
"default": "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit"
},
"output_dir": {
"type": "string",
"description": "Training outputs directory",
"default": "~/.companion/training"
},
"lora_rank": {
"type": "integer",
"description": "LoRA rank (higher = more capacity, more VRAM)",
"minimum": 4,
"maximum": 128,
"default": 16
},
"lora_alpha": {
"type": "integer",
"description": "LoRA alpha (scaling factor, typically 2x rank)",
"minimum": 8,
"maximum": 256,
"default": 32
},
"learning_rate": {
"type": "number",
"description": "Training learning rate",
"minimum": 1e-6,
"maximum": 1e-3,
"default": 0.0002
},
"batch_size": {
"type": "integer",
"description": "Per-device batch size",
"minimum": 1,
"maximum": 32,
"default": 4
},
"gradient_accumulation_steps": {
"type": "integer",
"description": "Steps to accumulate before update",
"minimum": 1,
"maximum": 64,
"default": 4
},
"num_epochs": {
"type": "integer",
"description": "Training epochs",
"minimum": 1,
"maximum": 20,
"default": 3
},
"warmup_steps": {
"type": "integer",
"description": "Learning rate warmup steps",
"minimum": 0,
"maximum": 10000,
"default": 100
},
"save_steps": {
"type": "integer",
"description": "Checkpoint frequency",
"minimum": 10,
"maximum": 10000,
"default": 500
},
"eval_steps": {
"type": "integer",
"description": "Evaluation frequency",
"minimum": 10,
"maximum": 10000,
"default": 250
},
"training_data_path": {
"type": "string",
"description": "Training data directory",
"default": "~/.companion/training_data/"
},
"validation_split": {
"type": "number",
"description": "Fraction of data for validation",
"minimum": 0.0,
"maximum": 0.5,
"default": 0.1
}
}
},
"retrain_schedule": {
"type": "object",
"required": ["auto_reminder", "default_interval_days", "reminder_channels"],
"properties": {
"auto_reminder": {
"type": "boolean",
"description": "Enable retrain reminders",
"default": true
},
"default_interval_days": {
"type": "integer",
"description": "Days between retrain reminders",
"minimum": 30,
"maximum": 365,
"default": 90
},
"reminder_channels": {
"type": "array",
"description": "Where to show reminders",
"items": {
"type": "string",
"enum": ["chat_stream", "log", "ui"]
},
"default": ["chat_stream", "log"]
}
}
}
}
},
"api": {
"type": "object",
"title": "API Settings",
"required": ["host", "port", "cors_origins", "auth"],
"properties": {
"host": {
"type": "string",
"description": "Bind address (use 0.0.0.0 for LAN access)",
"default": "127.0.0.1"
},
"port": {
"type": "integer",
"description": "HTTP port",
"minimum": 1,
"maximum": 65535,
"default": 7373
},
"cors_origins": {
"type": "array",
"description": "Allowed CORS origins",
"items": {
"type": "string",
"format": "uri"
},
"default": ["http://localhost:5173"]
},
"auth": {
"type": "object",
"required": ["enabled"],
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable API key authentication",
"default": false
}
}
}
}
},
"ui": {
"type": "object",
"title": "UI Settings",
"required": ["web", "cli"],
"properties": {
"web": {
"type": "object",
"required": ["enabled", "theme", "features"],
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable web interface",
"default": true
},
"theme": {
"type": "string",
"description": "UI theme",
"enum": ["obsidian"],
"default": "obsidian"
},
"features": {
"type": "object",
"required": ["streaming", "citations", "source_preview"],
"properties": {
"streaming": {
"type": "boolean",
"description": "Real-time response streaming",
"default": true
},
"citations": {
"type": "boolean",
"description": "Show source citations",
"default": true
},
"source_preview": {
"type": "boolean",
"description": "Preview source snippets",
"default": true
}
}
}
}
},
"cli": {
"type": "object",
"required": ["enabled", "rich_output"],
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable CLI interface",
"default": true
},
"rich_output": {
"type": "boolean",
"description": "Rich terminal formatting",
"default": true
}
}
}
}
},
"logging": {
"type": "object",
"title": "Logging Settings",
"required": ["level", "file", "max_size_mb", "backup_count"],
"properties": {
"level": {
"type": "string",
"description": "Log level",
"enum": ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"],
"default": "INFO"
},
"file": {
"type": "string",
"description": "Log file path",
"default": "~/.companion/logs/companion.log"
},
"max_size_mb": {
"type": "integer",
"description": "Max log file size in MB",
"minimum": 10,
"maximum": 1000,
"default": 100
},
"backup_count": {
"type": "integer",
"description": "Number of rotated backups",
"minimum": 1,
"maximum": 20,
"default": 5
}
}
},
"security": {
"type": "object",
"title": "Security Settings",
"required": ["local_only", "vault_path_traversal_check", "sensitive_content_detection", "sensitive_patterns", "require_confirmation_for_external_apis"],
"properties": {
"local_only": {
"type": "boolean",
"description": "Block external API calls",
"default": true
},
"vault_path_traversal_check": {
"type": "boolean",
"description": "Prevent path traversal attacks",
"default": true
},
"sensitive_content_detection": {
"type": "boolean",
"description": "Tag sensitive content",
"default": true
},
"sensitive_patterns": {
"type": "array",
"description": "Tags considered sensitive",
"items": { "type": "string" },
"default": ["#mentalhealth", "#physicalhealth", "#finance", "#Relations"]
},
"require_confirmation_for_external_apis": {
"type": "boolean",
"description": "Confirm before external API calls",
"default": true
}
}
}
}
}

docker-compose.yml

@@ -0,0 +1,76 @@
version: '3.8'
services:
companion-api:
build:
context: .
dockerfile: Dockerfile
target: production
container_name: companion-api
ports:
- "7373:7373"
volumes:
- ./config.json:/app/config.json:ro
- companion-data:/data
- ./models:/models:ro
environment:
- COMPANION_CONFIG=/app/config.json
- COMPANION_DATA_DIR=/data
networks:
- companion-network
restart: unless-stopped
healthcheck:
test: ["CMD", "python", "-c", "import requests; requests.get('http://localhost:7373/health')"]
interval: 30s
timeout: 10s
retries: 3
start_period: 5s
companion-indexer:
build:
context: .
dockerfile: Dockerfile.indexer
container_name: companion-indexer
volumes:
- ./config.json:/app/config.json:ro
- companion-data:/data
- /home/san/KnowledgeVault:/vault:ro # Mount Obsidian vault as read-only
environment:
- COMPANION_CONFIG=/app/config.json
- COMPANION_DATA_DIR=/data
- VAULT_PATH=/vault
networks:
- companion-network
restart: unless-stopped
command: ["python", "-m", "companion.indexer_daemon.watcher"]
# Or use CLI mode for manual sync:
# command: ["python", "-m", "companion.indexer_daemon.cli", "index"]
# Optional: Ollama for local embeddings and LLM
ollama:
image: ollama/ollama:latest
container_name: companion-ollama
ports:
- "11434:11434"
volumes:
- ollama-data:/root/.ollama
networks:
- companion-network
restart: unless-stopped
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
volumes:
companion-data:
driver: local
ollama-data:
driver: local
networks:
companion-network:
driver: bridge

docs/config.md

@@ -0,0 +1,278 @@
# Configuration Reference
Complete reference for `config.json` configuration options.
## Overview
The configuration file uses JSON format with support for:
- Path expansion (`~` expands to home directory)
- Type validation via Pydantic models
- Environment-specific overrides
## Schema Validation
Validate your config against the schema:
```bash
python -c "from companion.config import load_config; load_config('config.json')"
```
Or use the JSON Schema directly: [config-schema.json](../config-schema.json)
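Path expansion can be illustrated with a small helper (a sketch of the behaviour; in the project it happens inside the Pydantic config models):

```python
from pathlib import Path

# Keys whose string values are treated as filesystem paths (illustrative list)
PATH_KEYS = ("path", "persistent_store", "file", "model_path", "output_dir")

def expand_paths(cfg: dict, keys=PATH_KEYS) -> dict:
    """Recursively expand '~' in path-like values of a config dict."""
    out = {}
    for k, v in cfg.items():
        if isinstance(v, dict):
            out[k] = expand_paths(v, keys)
        elif k in keys and isinstance(v, str):
            out[k] = str(Path(v).expanduser())  # "~/x" -> "/home/<user>/x"
        else:
            out[k] = v
    return out
```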
## Configuration Sections
### companion
Core companion personality and behavior settings.
```json
{
"companion": {
"name": "SAN",
"persona": {
"role": "companion",
"tone": "reflective",
"style": "questioning",
"boundaries": [
"does_not_impersonate_user",
"no_future_predictions",
"no_medical_or_legal_advice"
]
},
"memory": {
"session_turns": 20,
"persistent_store": "~/.companion/memory.db",
"summarize_after": 10
},
"chat": {
"streaming": true,
"max_response_tokens": 2048,
"default_temperature": 0.7,
"allow_temperature_override": true
}
}
}
```
#### Fields
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `name` | string | "SAN" | Display name for the companion |
| `persona.role` | string | "companion" | Role description (companion/advisor/reflector) |
| `persona.tone` | string | "reflective" | Communication tone (reflective/supportive/analytical) |
| `persona.style` | string | "questioning" | Interaction style (questioning/supportive/direct) |
| `persona.boundaries` | string[] | [...] | Behavioral guardrails |
| `memory.session_turns` | int | 20 | Messages to keep in context |
| `memory.persistent_store` | string | "~/.companion/memory.db" | SQLite database path |
| `memory.summarize_after` | int | 10 | Summarize history after N turns |
| `chat.streaming` | bool | true | Stream responses in real-time |
| `chat.max_response_tokens` | int | 2048 | Max tokens per response |
| `chat.default_temperature` | float | 0.7 | Creativity (0.0=deterministic, 2.0=creative) |
| `chat.allow_temperature_override` | bool | true | Let users adjust temperature |
---
### vault
Obsidian vault indexing configuration.
```json
{
"vault": {
"path": "~/KnowledgeVault/Default",
"indexing": {
"auto_sync": true,
"auto_sync_interval_minutes": 1440,
"watch_fs_events": true,
"file_patterns": ["*.md"],
"deny_dirs": [".obsidian", ".trash", "zzz-Archive", ".git"],
"deny_patterns": ["*.tmp", "*.bak", "*conflict*"]
},
"chunking_rules": {
"default": {
"strategy": "sliding_window",
"chunk_size": 500,
"chunk_overlap": 100
},
"Journal/**": {
"strategy": "section",
"section_tags": ["#DayInShort", "#mentalhealth", "#work"],
"chunk_size": 300,
"chunk_overlap": 50
}
}
}
}
```
---
### rag
RAG (Retrieval-Augmented Generation) engine configuration.
```json
{
"rag": {
"embedding": {
"provider": "ollama",
"model": "mxbai-embed-large",
"base_url": "http://localhost:11434",
"dimensions": 1024,
"batch_size": 32
},
"vector_store": {
"type": "lancedb",
"path": "~/.companion/vectors.lance"
},
"search": {
"default_top_k": 8,
"max_top_k": 20,
"similarity_threshold": 0.75,
"hybrid_search": {
"enabled": true,
"keyword_weight": 0.3,
"semantic_weight": 0.7
},
"filters": {
"date_range_enabled": true,
"tag_filter_enabled": true,
"directory_filter_enabled": true
}
}
}
}
```
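When `hybrid_search` is enabled, the two weights above combine a keyword score and a semantic score into one ranking score. A minimal sketch of that combination (illustrative only; the actual logic lives in `companion.rag.search`):

```python
# Hedged sketch of how keyword_weight / semantic_weight combine two
# normalized (0..1) scores into a single ranking score.
def hybrid_score(keyword_score, semantic_score,
                 keyword_weight=0.3, semantic_weight=0.7):
    return keyword_weight * keyword_score + semantic_weight * semantic_score

# A chunk with a weak keyword match but a strong semantic match:
score = hybrid_score(0.5, 0.9)  # 0.3*0.5 + 0.7*0.9 = 0.78
```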
---
### model
LLM configuration for inference and fine-tuning.
```json
{
"model": {
"inference": {
"backend": "llama.cpp",
"model_path": "~/.companion/models/companion-7b-q4.gguf",
"context_length": 8192,
"gpu_layers": 35,
"batch_size": 512,
"threads": 8
},
"fine_tuning": {
"base_model": "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"output_dir": "~/.companion/training",
"lora_rank": 16,
"lora_alpha": 32,
"learning_rate": 0.0002,
"batch_size": 4,
"gradient_accumulation_steps": 4,
"num_epochs": 3,
"warmup_steps": 100,
"save_steps": 500,
"eval_steps": 250,
"training_data_path": "~/.companion/training_data/",
"validation_split": 0.1
},
"retrain_schedule": {
"auto_reminder": true,
"default_interval_days": 90,
"reminder_channels": ["chat_stream", "log"]
}
}
}
```
---
### api
FastAPI backend configuration.
```json
{
"api": {
"host": "127.0.0.1",
"port": 7373,
"cors_origins": ["http://localhost:5173"],
"auth": {
"enabled": false
}
}
}
```
---
### ui
Web UI configuration.
```json
{
"ui": {
"web": {
"enabled": true,
"theme": "obsidian",
"features": {
"streaming": true,
"citations": true,
"source_preview": true
}
},
"cli": {
"enabled": true,
"rich_output": true
}
}
}
```
---
### logging
Logging configuration.
```json
{
"logging": {
"level": "INFO",
"file": "~/.companion/logs/companion.log",
"max_size_mb": 100,
"backup_count": 5
}
}
```
---
### security
Security and privacy settings.
```json
{
"security": {
"local_only": true,
"vault_path_traversal_check": true,
"sensitive_content_detection": true,
"sensitive_patterns": [
"#mentalhealth",
"#physicalhealth",
"#finance",
"#Relations"
],
"require_confirmation_for_external_apis": true
}
}
```
---
## Full Example
See [config.json](../config.json) for a complete working configuration.

docs/forge.md Normal file

@@ -0,0 +1,288 @@
# FORGE Module Documentation
The FORGE module handles fine-tuning of the companion model. It extracts training examples from your vault reflections and trains a custom LoRA adapter using QLoRA on your local GPU.
## Architecture
```
Vault Reflections
┌─────────────────┐
│ Extract │ - Scan for #reflection, #insight tags
│ (extract.py) │ - Parse reflection patterns
└────────┬────────┘
┌─────────────────┐
│ Curate │ - Manual review (optional)
│ (curate.py) │ - Deduplication
└────────┬────────┘
┌─────────────────┐
│ Train │ - QLoRA fine-tuning
│ (train.py) │ - Unsloth + transformers
└────────┬────────┘
┌─────────────────┐
│ Export │ - Merge LoRA weights
│ (export.py) │ - Convert to GGUF
└────────┬────────┘
┌─────────────────┐
│ Reload │ - Hot-swap in API
│ (reload.py) │ - No restart needed
└─────────────────┘
```
## Requirements
- **GPU**: RTX 5070 or equivalent (12GB+ VRAM)
- **Dependencies**: Install with `pip install -e ".[train]"`
- **Time**: 4-6 hours for full training run
## Workflow
### 1. Extract Training Data
Scan your vault for reflection patterns:
```bash
python -m companion.forge.cli extract
```
This scans for:
- Tags: `#reflection`, `#insight`, `#learning`, `#decision`, etc.
- Patterns: "I think", "I realize", "Looking back", "What if"
- Section headers in journal entries
Output: `~/.companion/training_data/extracted.jsonl`
**Example extracted data:**
```json
{
"messages": [
{"role": "system", "content": "You are a thoughtful, reflective companion."},
{"role": "user", "content": "I'm facing a decision. How should I think through this?"},
{"role": "assistant", "content": "#reflection I think I need to slow down..."}
],
"source_file": "Journal/2026/04/2026-04-12.md",
"tags": ["#reflection", "#DayInShort"],
"date": "2026-04-12"
}
```
### 2. Train Model
Run QLoRA fine-tuning:
```bash
python -m companion.forge.cli train --epochs 3 --lr 2e-4
```
**Hyperparameters (from config):**
| Parameter | Default | Description |
|-----------|---------|-------------|
| `lora_rank` | 16 | LoRA rank (8-64) |
| `lora_alpha` | 32 | LoRA scaling factor |
| `learning_rate` | 2e-4 | Optimizer learning rate |
| `num_epochs` | 3 | Training epochs |
| `batch_size` | 4 | Per-device batch |
| `gradient_accumulation_steps` | 4 | Steps before update |
**Training Output:**
- Checkpoints: `~/.companion/training/checkpoint-*/`
- Final model: `~/.companion/training/final/`
- Logs: Training loss, eval metrics
### 3. Reload Model
Hot-swap without restarting API:
```bash
python -m companion.forge.cli reload ~/.companion/training/final
```
Or via API:
```bash
curl -X POST http://localhost:7373/admin/reload-model \
-H "Content-Type: application/json" \
-d '{"model_path": "~/.companion/training/final"}'
```
## Components
### Extractor (`companion.forge.extract`)
```python
from companion.forge.extract import TrainingDataExtractor, extract_training_data
# Extract from vault
extractor = TrainingDataExtractor(config)
examples = extractor.extract()
# Get statistics
stats = extractor.get_stats()
print(f"Extracted {stats['total']} examples")
# Save to JSONL
extractor.save_to_jsonl(Path("training.jsonl"))
```
**Reflection Detection:**
- **Tags**: `#reflection`, `#learning`, `#insight`, `#decision`, `#analysis`, `#takeaway`, `#realization`
- **Patterns**: "I think", "I feel", "I realize", "I wonder", "Looking back", "On one hand...", "Ultimately decided"
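A minimal sketch of this detection logic (illustrative; the real implementation in `companion.forge.extract` may differ):

```python
import re

# Hypothetical sketch: flag a note as a reflection if it carries one of the
# tags above or matches one of the prose patterns.
REFLECTION_TAGS = {"#reflection", "#learning", "#insight", "#decision",
                   "#analysis", "#takeaway", "#realization"}
REFLECTION_PATTERNS = [r"\bI think\b", r"\bI realize\b", r"\bLooking back\b"]

def looks_like_reflection(text: str) -> bool:
    tags = set(re.findall(r"#\w+", text))
    if tags & REFLECTION_TAGS:
        return True
    return any(re.search(pattern, text) for pattern in REFLECTION_PATTERNS)
```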
### Trainer (`companion.forge.train`)
```python
from companion.forge.train import train
final_path = train(
data_path=Path("training.jsonl"),
output_dir=Path("~/.companion/training"),
base_model="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
lora_rank=16,
lora_alpha=32,
learning_rate=2e-4,
num_epochs=3,
)
```
**Base Models:**
- `unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit` - Recommended
- `unsloth/llama-3-8b-bnb-4bit` - Alternative
**Target Modules:**
LoRA is applied to: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
### Exporter (`companion.forge.export`)
```python
from companion.forge.export import merge_only
# Merge LoRA into base model
merged_path = merge_only(
checkpoint_path=Path("~/.companion/training/checkpoint-500"),
output_path=Path("~/.companion/models/merged"),
)
```
### Reloader (`companion.forge.reload`)
```python
from companion.forge.reload import reload_model, get_model_status
# Check current model
status = get_model_status(config)
print(f"Model size: {status['size_mb']} MB")
# Reload with new model
new_path = reload_model(
config=config,
new_model_path=Path("~/.companion/training/final"),
backup=True,
)
```
## CLI Reference
```bash
# Extract training data
python -m companion.forge.cli extract [--output PATH]
# Train model
python -m companion.forge.cli train \
    [--data PATH] \
    [--output PATH] \
    [--epochs N] \
    [--lr FLOAT]
# Check model status
python -m companion.forge.cli status
# Reload model
python -m companion.forge.cli reload MODEL_PATH [--no-backup]
```
## Training Tips
**Dataset Size:**
- Minimum: 50 examples
- Optimal: 100-500 examples
- More is not always better - quality over quantity
**Epochs:**
- Start with 3 epochs
- Increase if underfitting (high loss)
- Decrease if overfitting (loss increases on eval)
**LoRA Rank:**
- `8` - Quick experiments
- `16` - Balanced (recommended)
- `32-64` - High capacity, more VRAM
**Overfitting Signs:**
- Training loss decreasing, eval loss increasing
- Model repeats exact phrases from training data
- Responses feel "memorized" not "learned"
## VRAM Usage (RTX 5070, 12GB)
| Config | VRAM | Batch Size |
|--------|------|------------|
| Rank 16, 8-bit adam | ~10GB | 4 |
| Rank 32, 8-bit adam | ~11GB | 4 |
| Rank 64, 8-bit adam | OOM | - |
Use `gradient_accumulation_steps` to increase effective batch size.
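As a quick sanity check of that arithmetic:

```python
# Effective batch size = per-device batch * accumulation steps.
# With the defaults above (batch_size=4, gradient_accumulation_steps=4):
per_device_batch = 4
gradient_accumulation_steps = 4
effective_batch = per_device_batch * gradient_accumulation_steps  # 16
```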
## Troubleshooting
**CUDA Out of Memory**
- Reduce `lora_rank` to 8
- Reduce `batch_size` to 2
- Increase `gradient_accumulation_steps`
**Training Loss Not Decreasing**
- Check data quality (reflections present?)
- Increase learning rate to 5e-4
- Check for data formatting issues
**Model Not Loading After Reload**
- Check path exists: `ls -la ~/.companion/models/`
- Verify model format (GGUF vs HF)
- Check API logs for errors
**Slow Training**
- Expected: ~6 hours for 3 epochs on RTX 5070
- Enable gradient checkpointing (enabled by default)
- Close other GPU applications
## Advanced: Custom Training Script
```python
# custom_train.py
from companion.forge.train import train
from companion.config import load_config
config = load_config()
final_path = train(
data_path=config.model.fine_tuning.training_data_path / "curated.jsonl",
output_dir=config.model.fine_tuning.output_dir,
base_model=config.model.fine_tuning.base_model,
lora_rank=32, # Higher capacity
lora_alpha=64,
learning_rate=3e-4, # Slightly higher
num_epochs=5, # More epochs
batch_size=2, # Smaller batches
gradient_accumulation_steps=8, # Effective batch = 16
)
print(f"Model saved to: {final_path}")
```

docs/rag.md Normal file

@@ -0,0 +1,269 @@
# RAG Module Documentation
The RAG (Retrieval-Augmented Generation) module provides semantic search over your Obsidian vault. It handles document chunking, embedding generation, and vector similarity search.
## Architecture
```
Vault Markdown Files
┌─────────────────┐
│ Chunker │ - Split by strategy (sliding window / section)
│ (chunker.py) │ - Extract metadata (tags, dates, sections)
└────────┬────────┘
┌─────────────────┐
│ Embedder │ - HTTP client for Ollama API
│ (embedder.py) │ - Batch processing with retries
└────────┬────────┘
┌─────────────────┐
│ Vector Store │ - LanceDB persistence
│(vector_store.py)│ - Upsert, delete, search
└────────┬────────┘
┌─────────────────┐
│ Indexer │ - Full/incremental sync
│ (indexer.py) │ - File watching
└─────────────────┘
```
## Components
### Chunker (`companion.rag.chunker`)
Splits markdown files into searchable chunks.
```python
from companion.rag.chunker import chunk_file, ChunkingRule
rules = {
"default": ChunkingRule(strategy="sliding_window", chunk_size=500, chunk_overlap=100),
"Journal/**": ChunkingRule(strategy="section", section_tags=["#DayInShort"], chunk_size=300, chunk_overlap=50),
}
chunks = chunk_file(
file_path=Path("journal/2026-04-12.md"),
vault_root=Path("~/vault"),
rules=rules,
modified_at=1234567890.0,
)
for chunk in chunks:
print(f"{chunk.source_file}:{chunk.chunk_index}")
print(f"Text: {chunk.text[:100]}...")
print(f"Tags: {chunk.tags}")
print(f"Date: {chunk.date}")
```
#### Chunking Strategies
**Sliding Window**
- Fixed-size chunks with overlap
- Best for: Longform text, articles
```python
ChunkingRule(
strategy="sliding_window",
chunk_size=500, # words per chunk
chunk_overlap=100, # words overlap between chunks
)
```
**Section-Based**
- Split on section headers (tags)
- Best for: Structured journals, daily notes
```python
ChunkingRule(
strategy="section",
section_tags=["#DayInShort", "#mentalhealth", "#work"],
chunk_size=300,
chunk_overlap=50,
)
```
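The sliding-window arithmetic (a window of `chunk_size` words advancing by `chunk_size - chunk_overlap` words) can be sketched as follows; this illustrates the parameters, it is not the actual `companion.rag.chunker` code:

```python
# Illustrative sliding-window chunker: fixed-size word windows that
# advance by (chunk_size - chunk_overlap) words each step.
def sliding_window(words, chunk_size=500, chunk_overlap=100):
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(words), step):
        window = words[start:start + chunk_size]
        if not window:
            break
        chunks.append(" ".join(window))
        if start + chunk_size >= len(words):
            break
    return chunks

# 1200 words -> windows starting at word 0, 400, and 800
chunks = sliding_window([f"w{i}" for i in range(1200)])
```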
#### Metadata Extraction
Each chunk includes:
- `source_file` - Relative path from vault root
- `source_directory` - Top-level directory
- `section` - Section header (for section strategy)
- `date` - Parsed from filename
- `tags` - Hashtags and wikilinks
- `chunk_index` - Position in document
- `modified_at` - File mtime for sync
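A hedged sketch of two of these extractions, date-from-filename and tag parsing (the chunker's exact rules may differ):

```python
import re
from datetime import date

# Illustrative parsers for the `date` and `tags` metadata fields.
def parse_date_from_filename(name):
    m = re.search(r"(\d{4})-(\d{2})-(\d{2})", name)
    return date(*map(int, m.groups())) if m else None

def extract_tags(text):
    hashtags = re.findall(r"#[\w/-]+", text)           # e.g. #DayInShort
    wikilinks = re.findall(r"\[\[([^\]]+)\]\]", text)  # e.g. [[Friend]]
    return hashtags + wikilinks
```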
### Embedder (`companion.rag.embedder`)
Generates embeddings via Ollama API.
```python
from companion.rag.embedder import OllamaEmbedder
embedder = OllamaEmbedder(
base_url="http://localhost:11434",
model="mxbai-embed-large",
batch_size=32,
)
# Single embedding
embeddings = embedder.embed(["Hello world"])
print(len(embeddings[0])) # 1024 dimensions
# Batch embedding (with automatic batching)
texts = ["text 1", "text 2", "text 3", ...] # 100 texts
embeddings = embedder.embed(texts) # Automatically batches
```
#### Features
- **Batching**: Automatically splits large requests
- **Retries**: Exponential backoff on failures
- **Context Manager**: Proper resource cleanup
```python
with OllamaEmbedder(...) as embedder:
embeddings = embedder.embed(texts)
```
### Vector Store (`companion.rag.vector_store`)
LanceDB wrapper for vector storage.
```python
from companion.rag.vector_store import VectorStore
store = VectorStore(
uri="~/.companion/vectors.lance",
dimensions=1024,
)
# Upsert chunks
store.upsert(
ids=["file.md::0", "file.md::1"],
texts=["chunk 1", "chunk 2"],
embeddings=[[0.1, ...], [0.2, ...]],
metadatas=[
{"source_file": "file.md", "source_directory": "docs"},
{"source_file": "file.md", "source_directory": "docs"},
],
)
# Search
results = store.search(
query_vector=[0.1, ...],
top_k=8,
filters={"source_directory": "Journal"},
)
```
#### Schema
| Field | Type | Nullable |
|-------|------|----------|
| id | string | No |
| text | string | No |
| vector | list[float32] | No |
| source_file | string | No |
| source_directory | string | No |
| section | string | Yes |
| date | string | Yes |
| tags | list[string] | Yes |
| chunk_index | int32 | No |
| total_chunks | int32 | No |
| modified_at | float64 | Yes |
| rule_applied | string | No |
### Indexer (`companion.rag.indexer`)
Orchestrates vault indexing.
```python
from companion.config import load_config
from companion.rag.indexer import Indexer
from companion.rag.vector_store import VectorStore
config = load_config()
store = VectorStore(
uri=config.rag.vector_store.path,
dimensions=config.rag.embedding.dimensions,
)
indexer = Indexer(config, store)
# Full reindex (clear + rebuild)
indexer.full_index()
# Incremental sync (only changed files)
indexer.sync()
# Get status
status = indexer.status()
print(f"Total chunks: {status['total_chunks']}")
print(f"Unindexed files: {status['unindexed_files']}")
```
### Search (`companion.rag.search`)
High-level search interface.
```python
from companion.rag.search import SearchEngine
engine = SearchEngine(
vector_store=store,
embedder_base_url="http://localhost:11434",
embedder_model="mxbai-embed-large",
default_top_k=8,
similarity_threshold=0.75,
hybrid_search_enabled=False,
)
results = engine.search(
query="What did I learn about friendships?",
top_k=8,
filters={"source_directory": "Journal"},
)
for result in results:
print(f"Source: {result['source_file']}")
print(f"Relevance: {1 - result['_distance']:.2f}")
```
## CLI Commands
```bash
# Full index
python -m companion.indexer_daemon.cli index
# Incremental sync
python -m companion.indexer_daemon.cli sync
# Check status
python -m companion.indexer_daemon.cli status
# Reindex (same as index)
python -m companion.indexer_daemon.cli reindex
```
## Performance Tips
1. **Chunk Size**: Smaller chunks improve retrieval precision; larger chunks give each result more surrounding context
2. **Batch Size**: 32 is optimal for Ollama embeddings
3. **Filters**: Use directory filters to narrow search scope
4. **Sync vs Index**: Use `sync` for daily updates, `index` for full rebuilds
## Troubleshooting
**Slow indexing**
- Check Ollama is running: `ollama ps`
- Reduce batch size if OOM
**No results**
- Verify vault path in config
- Check `indexer.status()` for unindexed files
**Duplicate chunks**
- Each chunk ID is `{source_file}::{chunk_index}`
- Use `full_index()` to clear and rebuild


@@ -0,0 +1,156 @@
# Phase 4: Fine-Tuning Pipeline Implementation Plan
## Goal
Build a pipeline to extract training examples from the Obsidian vault and fine-tune a local 7B model using QLoRA on the RTX 5070.
## Architecture
```
┌─────────────────────────────────────────────────────────┐
│ Training Data Pipeline │
│ ───────────────────── │
│ 1. Extract reflections from vault │
│ 2. Curate into conversation format │
│ 3. Split train/validation │
│ 4. Export to HuggingFace datasets format │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ QLoRA Fine-Tuning (Unsloth) │
│ ─────────────────────────── │
│ - Base: Llama 3.1 8B Instruct (4-bit) │
│ - LoRA rank: 16, alpha: 32 │
│ - Target modules: q_proj, k_proj, v_proj, o_proj │
│ - Learning rate: 2e-4 │
│ - Epochs: 3 │
│ - Batch: 4, Gradient accumulation: 4 │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ Model Export & Serving │
│ ───────────────────── │
│ - Export to GGUF (Q4_K_M quantization) │
│ - Serve via llama.cpp or vLLM │
│ - Hot-swap in FastAPI backend │
└─────────────────────────────────────────────────────────┘
```
## Tasks
### Task 1: Training Data Extractor
**Files:**
- `src/companion/forge/extract.py` - Extract reflection examples from vault
- `tests/test_forge_extract.py` - Test extraction logic
**Spec:**
- Parse vault for "reflection" patterns (journal entries with insights, decision analyses)
- Look for tags: #reflection, #decision, #learning, etc.
- Extract entries where you reflect on situations, weigh options, or analyze outcomes
- Format as conversation: user prompt + assistant response (your reflection)
- Output: JSONL file with {"messages": [{"role": "...", "content": "..."}]}
### Task 2: Training Data Curator
**Files:**
- `src/companion/forge/curate.py` - Human-in-the-loop curation
- `src/companion/forge/cli.py` - CLI for curation workflow
**Spec:**
- Load extracted examples
- Interactive review: show each example, allow approve/reject/edit
- Track curation decisions in SQLite
- Export approved examples to final training set
- Deduplicate similar examples (use embeddings similarity)
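The embedding-similarity deduplication step can be sketched as greedy filtering on cosine similarity (illustrative; the threshold and embedding source are assumptions):

```python
import math

# Keep an example only if it is not too similar to any already-kept one.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def dedupe(examples, embeddings, threshold=0.95):
    kept, kept_vecs = [], []
    for example, vec in zip(examples, embeddings):
        if all(cosine(vec, kv) < threshold for kv in kept_vecs):
            kept.append(example)
            kept_vecs.append(vec)
    return kept
```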
### Task 3: Training Configuration
**Files:**
- `src/companion/forge/config.py` - Training hyperparameters
- `config.json` updates for fine_tuning section
**Spec:**
- Pydantic models for training config
- Hyperparameters tuned for RTX 5070 (12GB VRAM)
- Output paths, logging config
### Task 4: QLoRA Training Script
**Files:**
- `src/companion/forge/train.py` - Unsloth training script
- `scripts/train.sh` - Convenience launcher
**Spec:**
- Load base model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
- Apply LoRA config (r=16, alpha=32, target_modules)
- Load and tokenize dataset
- Training loop with wandb logging (optional)
- Save checkpoints every 500 steps
- Validate on holdout set
### Task 5: Model Export
**Files:**
- `src/companion/forge/export.py` - Export to GGUF
- `src/companion/forge/merge.py` - Merge LoRA weights into base
**Spec:**
- Merge LoRA weights into base model
- Export to GGUF with Q4_K_M quantization
- Save to `~/.companion/models/`
- Update config.json with new model path
### Task 6: Model Hot-Swap
**Files:**
- Update `src/companion/api.py` - Add endpoint to reload model
- `src/companion/forge/reload.py` - Model reloader utility
**Spec:**
- `/admin/reload-model` endpoint (requires auth/local-only)
- Gracefully unload old model, load new GGUF
- Return status: success or error
### Task 7: Evaluation Framework
**Files:**
- `src/companion/forge/eval.py` - Evaluate model on test prompts
- `tests/test_forge_eval.py` - Evaluation tests
**Spec:**
- Load test prompts (decision scenarios, relationship questions)
- Generate responses from both base and fine-tuned model
- Store outputs for human comparison
- Track metrics: response time, token count
## Success Criteria
- [ ] Extract 100+ reflection examples from vault
- [ ] Curate down to 50-100 high-quality training examples
- [ ] Complete training run in <6 hours on RTX 5070
- [ ] Export produces valid GGUF file
- [ ] Hot-swap endpoint successfully reloads model
- [ ] Evaluation shows distinguishable "Santhosh-style" in outputs
## Dependencies
```
unsloth>=2024.1.0
torch>=2.1.0
transformers>=4.36.0
datasets>=2.14.0
peft>=0.7.0
accelerate>=0.25.0
bitsandbytes>=0.41.0
sentencepiece>=0.1.99
protobuf>=3.20.0
```
## Commands
```bash
# Extract training data
python -m companion.forge.cli extract
# Curate examples
python -m companion.forge.cli curate
# Train
python -m companion.forge.train
# Export
python -m companion.forge.export
# Reload model in API
python -m companion.forge.reload
```

docs/ui.md Normal file

@@ -0,0 +1,408 @@
# UI Module Documentation
The UI is a React + Vite frontend for the companion chat interface. It provides real-time streaming chat with a clean, Obsidian-inspired dark theme.
## Architecture
```
HTTP/SSE
┌─────────────────┐
│ App.tsx │ - State management
│ Message state │ - User/assistant messages
└────────┬────────┘
┌─────────────────┐
│ MessageList │ - Render messages
│ (components/) │ - User/assistant styling
└─────────────────┘
┌─────────────────┐
│ ChatInput │ - Textarea + send
│ (components/) │ - Auto-resize, hotkeys
└─────────────────┘
┌─────────────────┐
│ useChatStream │ - SSE streaming
│ (hooks/) │ - Session management
└─────────────────┘
```
## Project Structure
```
ui/
├── src/
│ ├── main.tsx # React entry point
│ ├── App.tsx # Main app component
│ ├── App.css # App layout styles
│ ├── index.css # Global styles
│ ├── components/
│ │ ├── MessageList.tsx # Message display
│ │ ├── MessageList.css # Message styling
│ │ ├── ChatInput.tsx # Input textarea
│ │ └── ChatInput.css # Input styling
│ └── hooks/
│ └── useChatStream.ts # SSE streaming hook
├── index.html # HTML template
├── vite.config.ts # Vite configuration
├── tsconfig.json # TypeScript config
└── package.json # Dependencies
```
## Components
### App.tsx
Main application state management:
```typescript
interface Message {
role: 'user' | 'assistant'
content: string
}
// State
const [messages, setMessages] = useState<Message[]>([])
const [input, setInput] = useState('')
const [isLoading, setIsLoading] = useState(false)
// Handlers
const handleSend = async () => { /* ... */ }
const handleKeyDown = (e) => { /* Enter to send, Shift+Enter newline */ }
```
**Features:**
- Auto-scroll to bottom on new messages
- Keyboard shortcuts (Enter to send, Shift+Enter for newline)
- Loading state with animation
- Message streaming in real-time
### MessageList.tsx
Renders the chat history:
```typescript
interface MessageListProps {
messages: Message[]
isLoading: boolean
}
```
**Layout:**
- User messages: Right-aligned, blue background
- Assistant messages: Left-aligned, gray background with border
- Loading indicator: Three animated dots
- Empty state: Prompt text when no messages
**Styling:**
- Max-width 800px, centered
- Smooth scroll behavior
- Avatar-less design (clean, text-focused)
### ChatInput.tsx
Textarea input with send button:
```typescript
interface ChatInputProps {
value: string
onChange: (value: string) => void
onSend: () => void
onKeyDown: (e: KeyboardEvent) => void
disabled: boolean
}
```
**Features:**
- Auto-resizing textarea
- Send button with loading state
- Placeholder text
- Disabled during streaming
## Hooks
### useChatStream.ts
Manages SSE streaming connection:
```typescript
interface UseChatStreamReturn {
sendMessage: (
message: string,
onChunk: (chunk: string) => void
) => Promise<void>
sessionId: string | null
}
const { sendMessage, sessionId } = useChatStream()
```
**Usage:**
```typescript
await sendMessage("Hello", (chunk) => {
// Append chunk to current response
setMessages(prev => {
  const last = prev[prev.length - 1]
  if (last?.role === 'assistant') {
    // Replace the last message immutably so React detects the change
    return [...prev.slice(0, -1), { ...last, content: last.content + chunk }]
  }
  return [...prev, { role: 'assistant', content: chunk }]
})
})
```
**SSE Protocol:**
The API streams events in this format:
```
data: {"type": "chunk", "content": "Hello"}
data: {"type": "chunk", "content": " world"}
data: {"type": "sources", "sources": [{"file": "journal.md"}]}
data: {"type": "done", "session_id": "uuid"}
```
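The stream is plain text with one JSON payload per `data:` line. A minimal Python sketch of parsing it (e.g. in a test client; the React hook does the equivalent with `TextDecoder`):

```python
import json

# Parse "data: {...}" lines from a raw SSE buffer into event dicts.
def parse_sse(raw):
    events = []
    for line in raw.splitlines():
        if line.startswith("data: "):
            events.append(json.loads(line[len("data: "):]))
    return events

raw = 'data: {"type": "chunk", "content": "Hello"}\ndata: {"type": "done", "session_id": "uuid"}'
events = parse_sse(raw)
```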
## Styling
### Design System
Based on Obsidian's dark theme:
```css
:root {
--bg-primary: #0d1117; /* App background */
--bg-secondary: #161b22; /* Header/footer */
--bg-tertiary: #21262d; /* Input background */
--text-primary: #c9d1d9; /* Main text */
--text-secondary: #8b949e; /* Placeholder */
--accent-primary: #58a6ff; /* Primary blue */
--accent-secondary: #79c0ff;/* Lighter blue */
--border: #30363d; /* Borders */
--user-bg: #1f6feb; /* User message */
--assistant-bg: #21262d; /* Assistant message */
}
```
### Message Styling
**User Message:**
- Blue background (`--user-bg`)
- White text
- Border radius: 12px (12px 12px 4px 12px)
- Max-width: 80%
**Assistant Message:**
- Gray background (`--assistant-bg`)
- Light text (`--text-primary`)
- Border: 1px solid `--border`
- Border radius: 12px (12px 12px 12px 4px)
### Loading Animation
Three bouncing dots using CSS keyframes:
```css
@keyframes bounce {
0%, 80%, 100% { transform: scale(0.6); }
40% { transform: scale(1); }
}
```
## Development
### Setup
```bash
cd ui
npm install
```
### Dev Server
```bash
npm run dev
# Opens http://localhost:5173
```
### Build
```bash
npm run build
# Output: ui/dist/
```
### Preview Production Build
```bash
npm run preview
```
## Configuration
### Vite Config
`vite.config.ts`:
```typescript
export default defineConfig({
plugins: [react()],
server: {
port: 5173,
proxy: {
'/api': {
target: 'http://localhost:7373',
changeOrigin: true,
},
},
},
})
```
**Proxy Setup:**
- Frontend: `http://localhost:5173`
- API: `http://localhost:7373`
- `/api/*` → `http://localhost:7373/api/*`
This allows using relative API paths in the code:
```typescript
const API_BASE = '/api' // Not http://localhost:7373/api
```
## TypeScript
### Types
```typescript
// Message role
type Role = 'user' | 'assistant'
// Message object
interface Message {
role: Role
content: string
}
// Chat request
type ChatRequest = {
message: string
session_id?: string
temperature?: number
}
// SSE chunk
type ChunkEvent = {
type: 'chunk'
content: string
}
type SourcesEvent = {
type: 'sources'
sources: Array<{
file: string
section?: string
date?: string
}>
}
type DoneEvent = {
type: 'done'
session_id: string
}
```
## API Integration
### Chat Endpoint
```typescript
const response = await fetch('/api/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
message: userInput,
session_id: sessionId, // null for new session
stream: true,
}),
})
// Read SSE stream
const reader = response.body?.getReader()
const decoder = new TextDecoder()
while (true) {
const { done, value } = await reader.read()
if (done) break
const chunk = decoder.decode(value, { stream: true })
// Parse SSE lines
}
```
### Session Persistence
The backend maintains conversation history via `session_id`:
1. First message: `session_id: null` → backend creates UUID
2. Response header: `X-Session-ID: <uuid>`
3. Subsequent messages: include `session_id: <uuid>`
4. History retrieved automatically
## Customization
### Themes
Modify `App.css` and `index.css`:
```css
/* Custom accent color */
--accent-primary: #ff6b6b;
--user-bg: #ff6b6b;
```
### Fonts
Update `index.css`:
```css
body {
font-family: 'Inter', -apple-system, sans-serif;
}
```
### Message Layout
Modify `MessageList.css`:
```css
.message-content {
max-width: 90%; /* Wider messages */
font-size: 16px; /* Larger text */
}
```
## Troubleshooting
**CORS errors**
- Check `vite.config.ts` proxy configuration
- Verify backend CORS origins include `http://localhost:5173`
**Stream not updating**
- Check browser network tab for SSE events
- Verify `EventSourceResponse` from backend
**Messages not appearing**
- Check React DevTools for state updates
- Verify the `messages` state is updated immutably (new array/objects, not in-place mutation)
**Build fails**
- Check TypeScript errors: `npx tsc --noEmit`
- Update dependencies: `npm update`


@@ -12,6 +12,11 @@ dependencies = [
"typer>=0.12.0",
"rich>=13.0.0",
"numpy>=1.26.0",
"fastapi>=0.109.0",
"uvicorn[standard]>=0.27.0",
"httpx>=0.27.0",
"sse-starlette>=2.0.0",
"python-multipart>=0.0.9",
]
[project.optional-dependencies]
@@ -21,6 +26,16 @@ dev = [
"httpx>=0.27.0",
"respx>=0.21.0",
]
train = [
"unsloth>=2024.1.0",
"torch>=2.1.0",
"transformers>=4.36.0",
"datasets>=2.14.0",
"peft>=0.7.0",
"accelerate>=0.25.0",
"bitsandbytes>=0.41.0",
"trl>=0.7.0",
]
[tool.hatchling]
packages = ["src/companion"]

scripts/install.ps1 Normal file

@@ -0,0 +1,110 @@
# Companion AI - Windows Installation Script (PowerShell)
Write-Host "Companion AI Installation Script for Windows"
Write-Host "============================================="
Write-Host ""
# Check if Python is installed
$python = Get-Command python -ErrorAction SilentlyContinue
if (-not $python) {
Write-Error "Python 3 is required but not found. Please install Python 3.11 or later."
exit 1
}
$pythonVersion = python --version 2>&1
Write-Host "Found: $pythonVersion"
# Create directories
$installDir = "$env:LOCALAPPDATA\Companion"
$dataDir = "$env:LOCALAPPDATA\Companion\Data"
Write-Host "Creating directories..."
New-Item -ItemType Directory -Force -Path $installDir | Out-Null
New-Item -ItemType Directory -Force -Path $dataDir\vectors | Out-Null
New-Item -ItemType Directory -Force -Path $dataDir\memory | Out-Null
New-Item -ItemType Directory -Force -Path $dataDir\models | Out-Null
# Copy application files
Write-Host "Installing Companion AI..."
$sourceDir = $PSScriptRoot
if (Test-Path "$sourceDir\src") {
Copy-Item -Recurse -Force "$sourceDir\*" $installDir
} else {
Write-Error "Cannot find source files. Please run from the Companion AI directory."
exit 1
}
# Create virtual environment
Write-Host "Creating Python virtual environment..."
Set-Location $installDir
python -m venv venv
# Activate and install
Write-Host "Installing dependencies..."
& .\venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
python -m pip install -e ".[dev]"
# Copy config
if (-not (Test-Path "$dataDir\config.json")) {
Copy-Item "$installDir\config.json" "$dataDir\config.json"
# Update paths
$config = Get-Content "$dataDir\config.json" -Raw
$config = $config -replace [regex]::Escape('~/.companion'), $dataDir.Replace('\', '/')
Set-Content "$dataDir\config.json" $config
}
# Create startup scripts
Write-Host "Creating startup scripts..."
# API server startup script
$apiScript = @"
@echo off
set PYTHONPATH=$($installDir)
set COMPANION_CONFIG=$($dataDir)\config.json
set COMPANION_DATA_DIR=$($dataDir)
cd "$($installDir)"
.\venv\Scripts\python -m uvicorn companion.api:app --host 0.0.0.0 --port 7373
"@
Set-Content "$installDir\start-api.bat" $apiScript
# Indexer watcher startup script
$indexerScript = @"
@echo off
set PYTHONPATH=$($installDir)
set COMPANION_CONFIG=$($dataDir)\config.json
set COMPANION_DATA_DIR=$($dataDir)
cd "$($installDir)"
.\venv\Scripts\python -m companion.indexer_daemon.watcher
"@
Set-Content "$installDir\start-indexer.bat" $indexerScript
# CLI shortcut
$cliScript = @"
@echo off
set PYTHONPATH=$($installDir)
set COMPANION_CONFIG=$($dataDir)\config.json
set COMPANION_DATA_DIR=$($dataDir)
cd "$($installDir)"
.\venv\Scripts\python -m companion.indexer_daemon.cli %*
"@
Set-Content "$installDir\companion.bat" $cliScript
Write-Host ""
Write-Host "Installation complete!"
Write-Host "====================="
Write-Host ""
Write-Host "To start the API server:"
Write-Host " $installDir\start-api.bat"
Write-Host ""
Write-Host "To start the file watcher (auto-indexing):"
Write-Host " $installDir\start-indexer.bat"
Write-Host ""
Write-Host "To run a manual index:"
Write-Host " $installDir\companion.bat index"
Write-Host ""
Write-Host "Config location: $dataDir\config.json"
Write-Host "Data location: $dataDir"
Write-Host ""
Write-Host "Edit the config to set your vault path, then start the services."

116
scripts/install.sh Normal file

@@ -0,0 +1,116 @@
#!/bin/bash
# Companion AI Installation Script for Linux
set -e
# Configuration
INSTALL_DIR="/opt/companion"
DATA_DIR="/var/lib/companion"
USER="companion"
SERVICE_USER="${USER}"
REPO_URL="https://github.com/santhoshjan/companion.git"
echo "Companion AI Installation Script"
echo "================================"
echo ""
# Check if running as root
if [ "$EUID" -ne 0 ]; then
echo "Please run as root (use sudo)"
exit 1
fi
# Check dependencies
echo "Checking dependencies..."
if ! command -v python3 &> /dev/null; then
echo "Python 3 is required but not installed. Aborting."
exit 1
fi
if ! command -v pip3 &> /dev/null; then
echo "pip3 is required but not installed. Installing..."
apt-get update && apt-get install -y python3-pip
fi
# Create user
echo "Creating companion user..."
if ! id "$USER" &>/dev/null; then
useradd -r -s /bin/false -d "$INSTALL_DIR" "$USER"
fi
# Create directories
echo "Creating directories..."
mkdir -p "$INSTALL_DIR"
mkdir -p "$DATA_DIR"/{vectors,memory,models}
mkdir -p /etc/companion
# Install application
echo "Installing Companion AI..."
if [ -d ".git" ]; then
# Running from git repo
cp -r . "$INSTALL_DIR/"
else
# Clone from remote
git clone "$REPO_URL" "$INSTALL_DIR"
fi
# Create virtual environment
echo "Creating Python virtual environment..."
cd "$INSTALL_DIR"
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -e ".[dev]"
# Set permissions
echo "Setting permissions..."
chown -R "$USER:$USER" "$INSTALL_DIR"
chown -R "$USER:$USER" "$DATA_DIR"
# Install systemd services
echo "Installing systemd services..."
if command -v systemctl &> /dev/null; then
cp systemd/companion-api.service /etc/systemd/system/
cp systemd/companion-indexer.service /etc/systemd/system/
cp systemd/companion-index@.service /etc/systemd/system/
cp systemd/companion-index.timer /etc/systemd/system/ 2>/dev/null || true
systemctl daemon-reload
echo ""
echo "Services installed. To start:"
echo " sudo systemctl enable companion-api"
echo " sudo systemctl start companion-api"
echo ""
echo "For auto-indexing:"
echo " sudo systemctl enable companion-indexer"
echo " sudo systemctl start companion-indexer"
else
echo "systemd not found. Manual setup required."
fi
# Create config if it doesn't exist
if [ ! -f /etc/companion/config.json ]; then
echo "Creating default configuration..."
cp "$INSTALL_DIR/config.json" /etc/companion/config.json
# Update paths in config
sed -i "s|~/.companion|$DATA_DIR|g" /etc/companion/config.json
fi
# Create symlink for CLI
echo "Installing CLI..."
ln -sf "$INSTALL_DIR/venv/bin/companion" /usr/local/bin/companion 2>/dev/null || true
echo ""
echo "Installation complete!"
echo "======================"
echo ""
echo "Next steps:"
echo "1. Edit configuration: sudo nano /etc/companion/config.json"
echo "2. Update vault path in the config"
echo "3. Start services: sudo systemctl start companion-api companion-indexer"
echo "4. Check status: sudo systemctl status companion-api"
echo ""
echo "Data directory: $DATA_DIR"
echo "Config directory: /etc/companion"
echo "Install directory: $INSTALL_DIR"

220
src/companion/api.py Normal file

@@ -0,0 +1,220 @@
"""FastAPI backend for Companion AI."""
from __future__ import annotations
import json
import uuid
from contextlib import asynccontextmanager
from typing import AsyncGenerator
import httpx
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from sse_starlette.sse import EventSourceResponse
from companion.config import Config, load_config
from companion.memory import SessionMemory
from companion.orchestrator import ChatOrchestrator
from companion.rag.search import SearchEngine
from companion.rag.vector_store import VectorStore
class ChatRequest(BaseModel):
"""Chat request model."""
message: str
session_id: str | None = None
stream: bool = True
use_rag: bool = True
class ChatResponse(BaseModel):
"""Chat response model (non-streaming)."""
response: str
session_id: str
sources: list[dict] | None = None
# Global instances
config: Config
vector_store: VectorStore
search_engine: SearchEngine
memory: SessionMemory
orchestrator: ChatOrchestrator
http_client: httpx.AsyncClient
@asynccontextmanager
async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
"""Manage application lifespan."""
global config, vector_store, search_engine, memory, orchestrator, http_client
# Startup
config = load_config("config.json")
vector_store = VectorStore(
uri=config.rag.vector_store.path, dimensions=config.rag.embedding.dimensions
)
search_engine = SearchEngine(
vector_store=vector_store,
embedder_base_url=config.rag.embedding.base_url,
embedder_model=config.rag.embedding.model,
embedder_batch_size=config.rag.embedding.batch_size,
default_top_k=config.rag.search.default_top_k,
similarity_threshold=config.rag.search.similarity_threshold,
hybrid_search_enabled=config.rag.search.hybrid_search.enabled,
)
memory = SessionMemory(db_path=config.companion.memory.persistent_store)
http_client = httpx.AsyncClient(timeout=300.0)
orchestrator = ChatOrchestrator(
config=config,
search_engine=search_engine,
memory=memory,
http_client=http_client,
)
yield
# Shutdown
await http_client.aclose()
memory.close()
app = FastAPI(
title="Companion AI",
description="Personal AI companion with RAG",
version="0.1.0",
lifespan=lifespan,
)
# CORS middleware. Note: this registration runs at import time, before lifespan()
# assigns the module-level config, so the fallback origin list below is what applies.
app.add_middleware(
CORSMiddleware,
allow_origins=config.api.cors_origins
if "config" in globals()
else ["http://localhost:5173"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
@app.get("/health")
async def health_check() -> dict:
"""Health check endpoint."""
return {
"status": "healthy",
"version": "0.1.0",
"indexed_chunks": vector_store.count() if "vector_store" in globals() else 0,
}
@app.post("/chat")
async def chat(request: ChatRequest) -> EventSourceResponse:
"""Chat endpoint with SSE streaming."""
if not request.message.strip():
raise HTTPException(status_code=400, detail="Message cannot be empty")
# Generate or use existing session ID
session_id = request.session_id or str(uuid.uuid4())
async def event_generator() -> AsyncGenerator[str, None]:
"""Generate SSE events."""
try:
# Stream response from orchestrator
async for chunk in orchestrator.chat(
session_id=session_id,
user_message=request.message,
use_rag=request.use_rag,
):
# Parse the SSE data format
if chunk.startswith("data: "):
data = chunk[6:]
if data == "[DONE]":
yield json.dumps({"type": "done", "session_id": session_id})
else:
try:
parsed = json.loads(data)
if "content" in parsed:
yield json.dumps(
{"type": "chunk", "content": parsed["content"]}
)
elif "citations" in parsed:
yield json.dumps(
{
"type": "citations",
"citations": parsed["citations"],
}
)
elif "error" in parsed:
yield json.dumps(
{"type": "error", "message": parsed["error"]}
)
else:
yield data
except json.JSONDecodeError:
# Pass through raw data
yield data
else:
yield chunk
except Exception as e:
yield json.dumps({"type": "error", "message": str(e)})
return EventSourceResponse(
event_generator(),
headers={"X-Session-ID": session_id},
)
@app.get("/sessions/{session_id}/history")
async def get_session_history(session_id: str) -> dict:
"""Get conversation history for a session."""
history = memory.get_history(session_id)
return {
"session_id": session_id,
"messages": [
{
"role": msg.role,
"content": msg.content,
"timestamp": msg.timestamp.isoformat(),
}
for msg in history
],
}
class ReloadModelRequest(BaseModel):
"""Model reload request."""
model_path: str
@app.post("/admin/reload-model")
async def reload_model_endpoint(request: ReloadModelRequest) -> dict:
"""Reload the model with a new fine-tuned version (admin only)."""
from pathlib import Path
from companion.forge.reload import reload_model
new_path = Path(request.model_path).expanduser()
if not new_path.exists():
raise HTTPException(status_code=404, detail=f"Model not found: {new_path}")
try:
active_path = reload_model(config, new_path, backup=True)
return {
"status": "success",
"message": "Model reloaded successfully",
"active_model": str(active_path),
}
except Exception as e:
raise HTTPException(status_code=500, detail=f"Failed to reload model: {e}")
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="127.0.0.1", port=7373)
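The chat endpoint's `event_generator` re-labels the orchestrator's `data: ...` SSE frames into typed JSON events. The same translation step can be sketched as a standalone function (the name `parse_sse_chunk` is ours, not part of the API above):

```python
import json


def parse_sse_chunk(chunk: str, session_id: str) -> str:
    """Re-label one orchestrator SSE frame, mirroring the /chat endpoint."""
    if not chunk.startswith("data: "):
        return chunk  # non-data frames pass through untouched
    data = chunk[6:]
    if data == "[DONE]":
        return json.dumps({"type": "done", "session_id": session_id})
    try:
        parsed = json.loads(data)
    except json.JSONDecodeError:
        return data  # raw payloads are forwarded as-is
    if "content" in parsed:
        return json.dumps({"type": "chunk", "content": parsed["content"]})
    if "citations" in parsed:
        return json.dumps({"type": "citations", "citations": parsed["citations"]})
    if "error" in parsed:
        return json.dumps({"type": "error", "message": parsed["error"]})
    return data


print(parse_sse_chunk('data: {"content": "hello"}', "s1"))
# → {"type": "chunk", "content": "hello"}
```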


@@ -0,0 +1 @@
# Model Forge - Fine-tuning pipeline for companion AI

133
src/companion/forge/cli.py Normal file

@@ -0,0 +1,133 @@
"""CLI for model forge operations."""
from __future__ import annotations
from pathlib import Path
import typer
from companion.config import load_config
from companion.forge.extract import TrainingDataExtractor
from companion.forge.reload import get_model_status, reload_model
from companion.forge.train import train as train_model
app = typer.Typer(help="Companion model forge - training pipeline")
@app.command()
def extract(
output: Path = typer.Option(
Path("~/.companion/training_data/extracted.jsonl"),
help="Output JSONL file path",
),
) -> None:
"""Extract training examples from vault."""
config = load_config()
typer.echo("Scanning vault for reflection examples...")
extractor = TrainingDataExtractor(config)
examples = extractor.extract()
if not examples:
typer.echo("No reflection examples found in vault.")
typer.echo(
"Try adding tags like #reflection, #insight, or #learning to your notes."
)
raise typer.Exit(1)
# Save to JSONL
output = output.expanduser()
output.parent.mkdir(parents=True, exist_ok=True)
count = extractor.save_to_jsonl(output)
stats = extractor.get_stats()
typer.echo(f"\nExtracted {count} training examples:")
typer.echo(f" - Average length: {stats.get('avg_length', 0)} chars")
if stats.get("top_tags"):
typer.echo(
f" - Top tags: {', '.join(f'{tag}({cnt})' for tag, cnt in stats['top_tags'][:5])}"
)
typer.echo(f"\nSaved to: {output}")
@app.command()
def status() -> None:
"""Check model status."""
config = load_config()
model_status = get_model_status(config)
typer.echo("Model Status:")
typer.echo(f" Path: {model_status['path']}")
typer.echo(f" Exists: {'Yes' if model_status['exists'] else 'No'}")
if model_status["exists"]:
typer.echo(f" Type: {model_status['type']}")
typer.echo(f" Size: {model_status['size_mb']} MB")
@app.command()
def reload(
model_path: Path = typer.Argument(
...,
help="Path to new model directory or GGUF file",
),
no_backup: bool = typer.Option(
False,
"--no-backup",
help="Skip backing up current model",
),
) -> None:
"""Reload model with a new fine-tuned version."""
config = load_config()
model_path = model_path.expanduser()
try:
active_path = reload_model(config, model_path, backup=not no_backup)
typer.echo(f"Model reloaded successfully: {active_path}")
except FileNotFoundError as e:
typer.echo(f"Error: {e}")
raise typer.Exit(1)
@app.command()
def train(
data: Path = typer.Option(
Path("~/.companion/training_data/extracted.jsonl"),
help="Path to training data JSONL",
),
output: Path = typer.Option(
Path("~/.companion/training"),
help="Output directory for checkpoints",
),
epochs: int = typer.Option(3, help="Number of training epochs"),
lr: float = typer.Option(2e-4, help="Learning rate"),
) -> None:
"""Train model using QLoRA fine-tuning."""
data = data.expanduser()
output = output.expanduser()
if not data.exists():
typer.echo(f"Training data not found: {data}")
typer.echo("Run 'forge extract' first to generate training data.")
raise typer.Exit(1)
try:
final_path = train_model(
data_path=data,
output_dir=output,
num_epochs=epochs,
learning_rate=lr,
)
typer.echo(f"\nTraining complete! Model saved to: {final_path}")
typer.echo("\nTo use this model:")
typer.echo(f" forge reload {final_path}")
except Exception as e:
typer.echo(f"Training failed: {e}")
raise typer.Exit(1)
if __name__ == "__main__":
app()


@@ -0,0 +1,151 @@
"""Merge LoRA weights and export to GGUF for llama.cpp inference."""
from __future__ import annotations
import argparse
from pathlib import Path
def export_to_gguf(
checkpoint_path: Path,
output_path: Path,
quantization: str = "Q4_K_M",
) -> Path:
"""Export fine-tuned model to GGUF format.
Args:
checkpoint_path: Path to checkpoint directory with LoRA weights
output_path: Path to save GGUF file
quantization: Quantization method (Q4_K_M, Q5_K_M, Q8_0)
Returns:
Path to exported GGUF file
"""
print(f"Loading checkpoint from: {checkpoint_path}")
# Load the base model
# Note: This assumes the checkpoint was saved with save_pretrained
# which includes the adapter_config.json
from unsloth import FastLanguageModel
# Load model with adapters
model, tokenizer = FastLanguageModel.from_pretrained(
model_name=str(checkpoint_path),
max_seq_length=2048,
dtype=None,
load_in_4bit=False, # Load full precision for export
)
# Merge LoRA weights into base
print("Merging LoRA weights...")
model = model.merge_and_unload()
# Save merged model temporarily
temp_path = checkpoint_path.parent / "merged"
temp_path.mkdir(exist_ok=True)
print(f"Saving merged model to: {temp_path}")
model.save_pretrained(temp_path)
tokenizer.save_pretrained(temp_path)
# Convert to GGUF using llama.cpp
# Note: This requires llama.cpp's convert script
output_path.parent.mkdir(parents=True, exist_ok=True)
print("Exporting to GGUF format...")
print(f" Quantization: {quantization}")
print(f" Output: {output_path}")
# For now, we'll save in HuggingFace format
# Full GGUF conversion would require llama.cpp tools
# which may not be installed in the environment
# Alternative: Save as merged HF model
hf_output = output_path.parent / "merged_hf"
hf_output.mkdir(parents=True, exist_ok=True)
model.save_pretrained(hf_output)
tokenizer.save_pretrained(hf_output)
print(f"\nModel exported to HuggingFace format: {hf_output}")
print("\nTo convert to GGUF, install llama.cpp and run:")
print(
f" python convert_hf_to_gguf.py {hf_output} --outfile {output_path} --outtype {quantization}"
)
# Create a marker file
marker = output_path.parent / "EXPORTED"
marker.write_text(f"Merged model saved to: {hf_output}\n")
return hf_output
def merge_only(
checkpoint_path: Path,
output_path: Path,
) -> Path:
"""Just merge LoRA weights, save as HF model.
This is useful if you want to serve via vLLM or HuggingFace directly
instead of converting to GGUF.
"""
print(f"Loading checkpoint from: {checkpoint_path}")
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
model_name=str(checkpoint_path),
max_seq_length=2048,
dtype=None,
load_in_4bit=False,
)
print("Merging LoRA weights...")
model = model.merge_and_unload()
output_path.mkdir(parents=True, exist_ok=True)
print(f"Saving merged model to: {output_path}")
model.save_pretrained(output_path)
tokenizer.save_pretrained(output_path)
print(f"Done! Model saved to: {output_path}")
return output_path
def main():
"""CLI entry point."""
parser = argparse.ArgumentParser(description="Export fine-tuned model")
parser.add_argument(
"--checkpoint", type=Path, required=True, help="Checkpoint directory"
)
parser.add_argument(
"--output",
type=Path,
default=Path("~/.companion/models/exported"),
help="Output path",
)
parser.add_argument("--gguf", action="store_true", help="Export to GGUF format")
parser.add_argument(
"--quant", type=str, default="Q4_K_M", help="GGUF quantization type"
)
args = parser.parse_args()
checkpoint = args.checkpoint.expanduser()
output = args.output.expanduser()
if args.gguf:
export_to_gguf(checkpoint, output, args.quant)
else:
merge_only(checkpoint, output)
if __name__ == "__main__":
main()


@@ -0,0 +1,329 @@
"""Extract training examples from Obsidian vault.
Looks for reflection patterns, decision analyses, and personal insights
to create training data that teaches the model to reason like San.
"""
from __future__ import annotations
import json
import re
from dataclasses import dataclass
from pathlib import Path
from typing import Iterator
from companion.config import Config
from companion.rag.chunker import chunk_file
@dataclass
class TrainingExample:
"""A single training example in conversation format."""
messages: list[dict[str, str]]
source_file: str
tags: list[str]
date: str | None = None
def to_dict(self) -> dict:
return {
"messages": self.messages,
"source_file": self.source_file,
"tags": self.tags,
"date": self.date,
}
# Tags that indicate reflection/decision content
REFLECTION_TAGS = {
"#reflection",
"#decision",
"#learning",
"#insight",
"#analysis",
"#pondering",
"#evaluation",
"#takeaway",
"#realization",
}
# Patterns that suggest reflection content
REFLECTION_PATTERNS = [
r"(?i)I think\s+",
r"(?i)I feel\s+",
r"(?i)I realize\s+",
r"(?i)I wonder\s+",
r"(?i)I should\s+",
r"(?i)I need to\s+",
r"(?i)The reason\s+",
r"(?i)What if\s+",
r"(?i)Maybe\s+",
r"(?i)Perhaps\s+",
r"(?i)On one hand.*?on the other hand",
r"(?i)Pros?:.*?Cons?:",
r"(?i)Weighing.*?(?:options?|choices?|alternatives?)",
r"(?i)Ultimately.*?decided",
r"(?i)Looking back",
r"(?i)In hindsight",
r"(?i)I've learned\s+",
r"(?i)The lesson\s+",
]
def _has_reflection_tags(text: str) -> bool:
"""Check if text contains reflection-related hashtags."""
hashtags = set(re.findall(r"#\w+", text))
return bool(hashtags & REFLECTION_TAGS)
def _has_reflection_patterns(text: str) -> bool:
"""Check if text contains reflection language patterns."""
for pattern in REFLECTION_PATTERNS:
if re.search(pattern, text):
return True
return False
def _is_likely_reflection(text: str) -> bool:
"""Determine if a text chunk is likely a reflection."""
# Must have reflection tags OR strong reflection patterns
has_tags = _has_reflection_tags(text)
has_patterns = _has_reflection_patterns(text)
# Require at least one indicator
return has_tags or has_patterns
def _extract_date_from_filename(filename: str) -> str | None:
"""Extract date from filename patterns like 2026-04-12 or "12 Apr 2026"."""
# ISO format: 2026-04-12
m = re.search(r"(\d{4}-\d{2}-\d{2})", filename)
if m:
return m.group(1)
# Human format: 12-Apr-2026 or "12 Apr 2026"
m = re.search(r"(\d{1,2}[-\s][A-Za-z]{3}[-\s]\d{4})", filename)
if m:
return m.group(1).replace(" ", "-").replace("--", "-")
return None
def _create_training_prompt(chunk_text: str) -> str:
"""Create a user prompt that elicits a reflection similar to the example."""
# Extract context from the chunk to form a relevant question
# Look for decision patterns
if re.search(r"(?i)decided|decision|chose|choice", chunk_text):
return "I'm facing a decision. How should I think through this?"
# Look for relationship content
if re.search(r"(?i)friend|relationship|person|people|someone", chunk_text):
return "What do you think about this situation with someone I'm close to?"
# Look for work/career content
if re.search(r"(?i)work|job|career|project|professional", chunk_text):
return "I'm thinking about something at work. What's your perspective?"
# Look for health content
if re.search(r"(?i)health|mental|physical|stress|wellness", chunk_text):
return "I've been thinking about my well-being. What do you notice?"
# Look for financial content
if re.search(r"(?i)money|finance|financial|invest|saving|spending", chunk_text):
return "I'm considering a financial decision. How should I evaluate this?"
# Default prompt
return "I'm reflecting on something. What patterns do you see?"
def _create_training_example(
chunk_text: str, source_file: str, tags: list[str], date: str | None
) -> TrainingExample | None:
"""Convert a chunk into a training example if it meets criteria."""
# Skip if too short
if len(chunk_text) < 100:
return None
# Skip if not a reflection
if not _is_likely_reflection(chunk_text):
return None
# Clean up the text
cleaned = chunk_text.strip()
# Create prompt-response pair
prompt = _create_training_prompt(cleaned)
# The response is your reflection/insight
messages = [
{"role": "system", "content": "You are a thoughtful, reflective companion."},
{"role": "user", "content": prompt},
{"role": "assistant", "content": cleaned},
]
return TrainingExample(
messages=messages,
source_file=source_file,
tags=tags,
date=date,
)
class TrainingDataExtractor:
"""Extracts training examples from Obsidian vault."""
def __init__(self, config: Config):
self.config = config
self.vault_path = Path(config.vault.path)
self.examples: list[TrainingExample] = []
def extract(self) -> list[TrainingExample]:
"""Extract all training examples from the vault."""
self.examples = []
# Walk vault for markdown files
for md_file in self.vault_path.rglob("*.md"):
# Skip denied directories
relative = md_file.relative_to(self.vault_path)
if any(part.startswith(".") for part in relative.parts):
continue
if any(
part in [".obsidian", ".trash", "zzz-Archive"]
for part in relative.parts
):
continue
# Extract from this file
file_examples = self._extract_from_file(md_file)
self.examples.extend(file_examples)
return self.examples
def _extract_from_file(self, md_file: Path) -> list[TrainingExample]:
"""Extract training examples from a single file."""
examples = []
try:
text = md_file.read_text(encoding="utf-8")
except Exception:
return examples
# Get date from filename
date = _extract_date_from_filename(md_file.name)
# Split into sections (by headers like #Section or # Tag:)
# Pattern matches lines starting with # that have content
sections = re.split(r"\n(?=#[^#])", text)
for section in sections:
section = section.strip()
if not section or len(section) < 50: # Skip very short sections
continue
# Extract tags from section
hashtags = re.findall(r"#[\w\-]+", section)
# Try to create training example
example = _create_training_example(
chunk_text=section,
source_file=str(md_file.relative_to(self.vault_path)).replace(
"\\", "/"
),
tags=hashtags,
date=date,
)
if example:
examples.append(example)
# If no reflection sections found, try the whole file
if not examples and len(text) >= 100:
hashtags = re.findall(r"#[\w\-]+", text)
if _is_likely_reflection(text):
example = _create_training_example(
chunk_text=text.strip(),
source_file=str(md_file.relative_to(self.vault_path)).replace(
"\\", "/"
),
tags=hashtags,
date=date,
)
if example:
examples.append(example)
return examples
def save_to_jsonl(self, output_path: Path) -> int:
"""Save extracted examples to JSONL file."""
with open(output_path, "w", encoding="utf-8") as f:
for example in self.examples:
f.write(json.dumps(example.to_dict(), ensure_ascii=False) + "\n")
return len(self.examples)
def get_stats(self) -> dict:
"""Get statistics about extracted examples."""
if not self.examples:
return {"total": 0}
tag_counts: dict[str, int] = {}
for ex in self.examples:
for tag in ex.tags:
tag_counts[tag] = tag_counts.get(tag, 0) + 1
return {
"total": len(self.examples),
"avg_length": sum(len(ex.messages[2]["content"]) for ex in self.examples)
// len(self.examples),
"top_tags": sorted(tag_counts.items(), key=lambda x: x[1], reverse=True)[
:10
],
}
def extract_training_data(
config_path: str = "config.json",
) -> tuple[list[TrainingExample], dict]:
"""Convenience function to extract training data from vault.
Returns:
Tuple of (examples list, stats dict)
"""
from companion.config import load_config
config = load_config(config_path)
extractor = TrainingDataExtractor(config)
examples = extractor.extract()
stats = extractor.get_stats()
return examples, stats
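The extractor's qualification heuristics can be exercised in isolation; a minimal sketch of the tag-and-pattern check, trimmed to a few of the tags and patterns listed above:

```python
import re

# Subset of the module's REFLECTION_TAGS / REFLECTION_PATTERNS, for illustration.
REFLECTION_TAGS = {"#reflection", "#insight", "#learning"}
REFLECTION_PATTERNS = [r"(?i)I realize\s+", r"(?i)In hindsight", r"(?i)Looking back"]


def is_likely_reflection(text: str) -> bool:
    # A chunk qualifies if it carries a reflection hashtag OR reflection language.
    has_tags = bool(set(re.findall(r"#\w+", text)) & REFLECTION_TAGS)
    has_patterns = any(re.search(p, text) for p in REFLECTION_PATTERNS)
    return has_tags or has_patterns


note = "2026-04-12: In hindsight, shipping earlier was the right call. #insight"
print(is_likely_reflection(note))                        # → True
print(is_likely_reflection("Grocery list: eggs, milk"))  # → False
```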


@@ -0,0 +1,89 @@
"""Model reloader for hot-swapping fine-tuned models."""
from __future__ import annotations
import shutil
from pathlib import Path
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from companion.config import Config
def reload_model(
config: Config,
new_model_path: Path,
backup: bool = True,
) -> Path:
"""Reload the model with a new fine-tuned version.
Args:
config: Current configuration
new_model_path: Path to new model directory or GGUF file
backup: Whether to backup the old model
Returns:
Path to the active model
"""
current_model = Path(config.model.inference.model_path).expanduser()
# Validate new model exists
if not new_model_path.exists():
raise FileNotFoundError(f"New model not found: {new_model_path}")
# Backup current model if it exists
if backup and current_model.exists():
backup_path = current_model.parent / f"{current_model.name}.backup"
if backup_path.exists():
shutil.rmtree(backup_path, ignore_errors=True)
if current_model.is_dir():
shutil.copytree(current_model, backup_path)
else:
shutil.copy2(current_model, backup_path)
print(f"Backed up current model to: {backup_path}")
# Copy new model to active location
if current_model.exists():
if current_model.is_dir():
shutil.rmtree(current_model, ignore_errors=True)
else:
current_model.unlink()
current_model.parent.mkdir(parents=True, exist_ok=True)
if new_model_path.is_dir():
shutil.copytree(new_model_path, current_model)
else:
shutil.copy2(new_model_path, current_model)
print(f"Model reloaded: {new_model_path} -> {current_model}")
return current_model
def get_model_status(config: Config) -> dict:
"""Get status of current model."""
model_path = Path(config.model.inference.model_path).expanduser()
status = {
"path": str(model_path),
"exists": model_path.exists(),
"type": None,
"size_mb": 0,
}
if model_path.exists():
if model_path.is_dir():
status["type"] = "directory"
# Calculate directory size
total_size = sum(
f.stat().st_size for f in model_path.rglob("*") if f.is_file()
)
status["size_mb"] = round(total_size / (1024 * 1024), 2)
else:
status["type"] = "file"
status["size_mb"] = round(model_path.stat().st_size / (1024 * 1024), 2)
return status
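The backup-then-swap sequence in `reload_model` is easy to rehearse on throwaway files; a minimal sketch under temporary paths (`swap_model` is a hypothetical stand-in that handles only the single-file case, without the directory branch):

```python
import shutil
import tempfile
from pathlib import Path


def swap_model(active: Path, new: Path, backup: bool = True) -> Path:
    """Back up the active model file, then copy the new one into its place."""
    if backup and active.exists():
        shutil.copy2(active, active.parent / f"{active.name}.backup")
    if active.exists():
        active.unlink()
    active.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(new, active)
    return active


with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "model.gguf").write_text("v1")
    (root / "model-v2.gguf").write_text("v2")
    swap_model(root / "model.gguf", root / "model-v2.gguf")
    print((root / "model.gguf").read_text())         # → v2
    print((root / "model.gguf.backup").read_text())  # → v1
```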


@@ -0,0 +1,238 @@
"""QLoRA fine-tuning script using Unsloth.
Trains a Llama 3.1 8B model on extracted reflection data from the vault.
Optimized for RTX 5070 with 12GB VRAM.
"""
from __future__ import annotations
import json
from pathlib import Path
from typing import Any
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
def load_training_data(data_path: Path) -> list[dict[str, Any]]:
"""Load training examples from JSONL file."""
examples = []
with open(data_path, "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if line:
examples.append(json.loads(line))
return examples
def prepare_dataset(
examples: list[dict], validation_split: float = 0.1
) -> tuple[Dataset, Dataset | None]:
"""Split examples into train and validation datasets."""
import random
random.seed(42)
shuffled = examples.copy()
random.shuffle(shuffled)
split_idx = int(len(shuffled) * (1 - validation_split))
train_examples = shuffled[:split_idx]
val_examples = shuffled[split_idx:] if len(shuffled[split_idx:]) > 5 else None
train_dataset = Dataset.from_list(train_examples)
val_dataset = Dataset.from_list(val_examples) if val_examples else None
return train_dataset, val_dataset
def format_example(example: dict) -> str:
"""Format a training example into the chat template format."""
messages = example.get("messages", [])
if not messages:
return ""
# Format as conversation
formatted = ""
for msg in messages:
role = msg.get("role", "")
content = msg.get("content", "")
if role == "system":
formatted += f"System: {content}\n\n"
elif role == "user":
formatted += f"User: {content}\n\n"
elif role == "assistant":
formatted += f"Assistant: {content}\n"
return formatted.strip()
def train(
data_path: Path,
output_dir: Path,
base_model: str = "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
lora_rank: int = 16,
lora_alpha: int = 32,
learning_rate: float = 2e-4,
num_epochs: int = 3,
batch_size: int = 4,
gradient_accumulation_steps: int = 4,
warmup_steps: int = 100,
save_steps: int = 500,
eval_steps: int = 250,
validation_split: float = 0.1,
) -> Path:
"""Run QLoRA fine-tuning on the training data.
Args:
data_path: Path to JSONL file with training examples
output_dir: Directory to save model checkpoints and outputs
base_model: HuggingFace model ID for base model
lora_rank: LoRA rank (higher = more capacity, more memory)
lora_alpha: LoRA alpha (scaling factor)
learning_rate: Learning rate for optimizer
num_epochs: Number of training epochs
batch_size: Per-device batch size
gradient_accumulation_steps: Steps to accumulate before update
warmup_steps: Learning rate warmup steps
save_steps: Save checkpoint every N steps
eval_steps: Run evaluation every N steps
validation_split: Fraction of data for validation
Returns:
Path to final checkpoint directory
"""
try:
from unsloth import FastLanguageModel
except ImportError:
raise ImportError("unsloth is required. Install with: pip install unsloth")
print(f"Loading base model: {base_model}")
# Load model with Unsloth (4-bit quantization)
model, tokenizer = FastLanguageModel.from_pretrained(
model_name=base_model,
max_seq_length=2048,
dtype=None, # Auto-detect
load_in_4bit=True,
)
# Add LoRA adapters
print(f"Adding LoRA adapters (rank={lora_rank}, alpha={lora_alpha})")
model = FastLanguageModel.get_peft_model(
model,
r=lora_rank,
target_modules=[
"q_proj",
"k_proj",
"v_proj",
"o_proj",
"gate_proj",
"up_proj",
"down_proj",
],
lora_alpha=lora_alpha,
lora_dropout=0,
bias="none",
use_gradient_checkpointing="unsloth",
random_state=3407,
use_rslora=False,
)
# Load training data
print(f"Loading training data from: {data_path}")
examples = load_training_data(data_path)
print(f"Loaded {len(examples)} examples")
if len(examples) < 10:
raise ValueError(f"Need at least 10 examples, got {len(examples)}")
train_dataset, val_dataset = prepare_dataset(examples, validation_split)
# Set up training arguments
training_args = TrainingArguments(
output_dir=str(output_dir),
num_train_epochs=num_epochs,
per_device_train_batch_size=batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
warmup_steps=warmup_steps,
learning_rate=learning_rate,
logging_steps=10,
optim="adamw_8bit",
weight_decay=0.01,
lr_scheduler_type="linear",
seed=3407,
save_steps=save_steps,
eval_steps=eval_steps if val_dataset else None,
evaluation_strategy="steps" if val_dataset else "no",
save_strategy="steps",
load_best_model_at_end=True if val_dataset else False,
)
# Initialize trainer
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
train_dataset=train_dataset,
eval_dataset=val_dataset,
dataset_text_field="text",
max_seq_length=2048,
dataset_num_proc=2,
packing=False,
args=training_args,
formatting_func=format_example,
)
# Train
print("Starting training...")
trainer_stats = trainer.train()
# Save final model
final_path = output_dir / "final"
final_path.mkdir(parents=True, exist_ok=True)
print(f"Saving final model to: {final_path}")
trainer.save_model(str(final_path))
print("Training complete!")
print(f" - Final loss: {trainer_stats.training_loss:.4f}")
print(f" - Trained for {trainer_stats.global_step} steps")
return final_path
def main():
"""CLI entry point for training."""
import argparse
parser = argparse.ArgumentParser(description="Train companion model")
parser.add_argument(
"--data", type=Path, required=True, help="Path to training data JSONL"
)
parser.add_argument(
"--output",
type=Path,
default=Path("~/.companion/training"),
help="Output directory",
)
parser.add_argument("--epochs", type=int, default=3, help="Number of epochs")
parser.add_argument("--lr", type=float, default=2e-4, help="Learning rate")
args = parser.parse_args()
output = args.output.expanduser()
output.mkdir(parents=True, exist_ok=True)
final_model = train(
data_path=args.data,
output_dir=output,
num_epochs=args.epochs,
learning_rate=args.lr,
)
print(f"\nModel saved to: {final_model}")
if __name__ == "__main__":
main()
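The trainer consumes a JSONL file produced by the forge extractor. A minimal sketch of writing and reading that format, with field names assumed from `TrainingExample.to_dict` (the exact schema `load_training_data` expects is not shown here):

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

# One chat-formatted example per line, mirroring TrainingExample.to_dict()
example = {
    "messages": [
        {"role": "system", "content": "You are a reflective companion."},
        {"role": "user", "content": "What stood out today?"},
        {"role": "assistant", "content": "I think I need to slow down."},
    ],
    "source_file": "journal/2026-04-12.md",
    "tags": ["#reflection"],
    "date": "2026-04-12",
}

with TemporaryDirectory() as tmp:
    path = Path(tmp) / "training.jsonl"
    # Write: one JSON object per line
    path.write_text(json.dumps(example) + "\n", encoding="utf-8")
    # Read back line by line, skipping blanks
    loaded = [
        json.loads(line)
        for line in path.read_text(encoding="utf-8").splitlines()
        if line.strip()
    ]

print(len(loaded), loaded[0]["messages"][2]["role"])
```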

src/companion/memory.py Normal file

@@ -0,0 +1,178 @@
"""SQLite-based session memory for the companion."""
from __future__ import annotations
import json
import sqlite3
from dataclasses import dataclass
from datetime import datetime
from pathlib import Path
from typing import Any
@dataclass
class Message:
"""A single message in the conversation."""
role: str # "user" | "assistant" | "system"
content: str
timestamp: datetime
metadata: dict[str, Any] | None = None
class SessionMemory:
"""Manages conversation history in SQLite."""
def __init__(self, db_path: str | Path):
self.db_path = Path(db_path)
self.db_path.parent.mkdir(parents=True, exist_ok=True)
self._init_db()
def _init_db(self) -> None:
"""Initialize the database schema."""
with sqlite3.connect(self.db_path) as conn:
conn.execute("""
CREATE TABLE IF NOT EXISTS sessions (
session_id TEXT PRIMARY KEY,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
""")
conn.execute("""
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT NOT NULL,
role TEXT NOT NULL,
content TEXT NOT NULL,
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
metadata TEXT,
FOREIGN KEY (session_id) REFERENCES sessions(session_id)
)
""")
conn.execute("""
CREATE INDEX IF NOT EXISTS idx_messages_session
ON messages(session_id, timestamp)
""")
conn.commit()
def get_or_create_session(self, session_id: str) -> str:
"""Get existing session or create new one."""
with sqlite3.connect(self.db_path) as conn:
conn.execute(
"INSERT OR IGNORE INTO sessions (session_id) VALUES (?)", (session_id,)
)
conn.execute(
"""UPDATE sessions SET updated_at = CURRENT_TIMESTAMP
WHERE session_id = ?""",
(session_id,),
)
conn.commit()
return session_id
def add_message(
self,
session_id: str,
role: str,
content: str,
metadata: dict[str, Any] | None = None,
) -> None:
"""Add a message to the session."""
self.get_or_create_session(session_id)
with sqlite3.connect(self.db_path) as conn:
conn.execute(
"""INSERT INTO messages (session_id, role, content, metadata)
VALUES (?, ?, ?, ?)""",
(session_id, role, content, json.dumps(metadata) if metadata else None),
)
conn.commit()
def get_messages(
self, session_id: str, limit: int = 20, before_id: int | None = None
) -> list[Message]:
"""Get messages from a session, most recent first."""
with sqlite3.connect(self.db_path) as conn:
conn.row_factory = sqlite3.Row
if before_id is not None:
rows = conn.execute(
"""SELECT role, content, timestamp, metadata
FROM messages
WHERE session_id = ? AND id < ?
ORDER BY id DESC
LIMIT ?""",
(session_id, before_id, limit),
).fetchall()
else:
rows = conn.execute(
"""SELECT role, content, timestamp, metadata
FROM messages
WHERE session_id = ?
ORDER BY id DESC
LIMIT ?""",
(session_id, limit),
).fetchall()
messages = []
for row in rows:
meta = json.loads(row["metadata"]) if row["metadata"] else None
messages.append(
Message(
role=row["role"],
content=row["content"],
timestamp=datetime.fromisoformat(row["timestamp"]),
metadata=meta,
)
)
# Return in chronological order
return list(reversed(messages))
def get_session_summary(self, session_id: str) -> dict[str, Any] | None:
"""Get summary info about a session."""
with sqlite3.connect(self.db_path) as conn:
conn.row_factory = sqlite3.Row
row = conn.execute(
"""SELECT session_id, created_at, updated_at,
(SELECT COUNT(*) FROM messages WHERE session_id = ?) as message_count
FROM sessions WHERE session_id = ?""",
(session_id, session_id),
).fetchone()
if not row:
return None
return {
"session_id": row["session_id"],
"created_at": row["created_at"],
"updated_at": row["updated_at"],
"message_count": row["message_count"],
}
def list_sessions(self, limit: int = 100) -> list[dict[str, Any]]:
"""List recent sessions."""
with sqlite3.connect(self.db_path) as conn:
conn.row_factory = sqlite3.Row
rows = conn.execute(
"""SELECT session_id, created_at, updated_at
FROM sessions
ORDER BY updated_at DESC
LIMIT ?""",
(limit,),
).fetchall()
return [
{
"session_id": r["session_id"],
"created_at": r["created_at"],
"updated_at": r["updated_at"],
}
for r in rows
]
def clear_session(self, session_id: str) -> None:
"""Clear all messages from a session."""
with sqlite3.connect(self.db_path) as conn:
conn.execute("DELETE FROM messages WHERE session_id = ?", (session_id,))
conn.commit()
def delete_session(self, session_id: str) -> None:
"""Delete a session and all its messages."""
with sqlite3.connect(self.db_path) as conn:
conn.execute("DELETE FROM messages WHERE session_id = ?", (session_id,))
conn.execute("DELETE FROM sessions WHERE session_id = ?", (session_id,))
conn.commit()
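The retrieval pattern in `get_messages` (query newest-first, then reverse into chronological order) can be sketched standalone with the stdlib, assuming the same `messages` schema:

```python
import sqlite3

# In-memory DB with the same messages table shape as SessionMemory
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE messages (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        session_id TEXT NOT NULL,
        role TEXT NOT NULL,
        content TEXT NOT NULL
    )"""
)
for role, content in [("user", "hi"), ("assistant", "hello"), ("user", "bye")]:
    conn.execute(
        "INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)",
        ("s1", role, content),
    )

# Take a newest-first window of 2, then reverse it back into chronological order
rows = conn.execute(
    "SELECT role, content FROM messages WHERE session_id = ? ORDER BY id DESC LIMIT 2",
    ("s1",),
).fetchall()
window = list(reversed(rows))
print(window)  # [('assistant', 'hello'), ('user', 'bye')]
```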


@@ -0,0 +1,167 @@
"""Chat orchestrator - coordinates RAG, LLM, and memory for chat responses."""
from __future__ import annotations
import json
import time
from collections.abc import AsyncGenerator
from typing import Any
import httpx
from companion.config import Config
from companion.memory import Message, SessionMemory
from companion.prompts import build_system_prompt, format_conversation_history
from companion.rag.search import SearchEngine, SearchResult
class ChatOrchestrator:
"""Orchestrates chat by combining RAG, memory, and LLM streaming."""
def __init__(
self,
config: Config,
search_engine: SearchEngine,
session_memory: SessionMemory,
):
self.config = config
self.search = search_engine
self.memory = session_memory
self.model_endpoint = config.model.inference.backend
self.model_path = config.model.inference.model_path
async def chat(
self,
session_id: str,
user_message: str,
use_rag: bool = True,
) -> AsyncGenerator[str, None]:
"""Process a chat message and yield streaming responses.
Args:
session_id: Unique session identifier
user_message: The user's input message
use_rag: Whether to retrieve context from vault
Yields:
Streaming response chunks (SSE format)
"""
# Retrieve relevant context from RAG if enabled
retrieved_context: list[SearchResult] = []
if use_rag:
try:
retrieved_context = self.search.search(
user_message, top_k=self.config.rag.search.default_top_k
)
except Exception as e:
# Log error but continue without RAG context
print(f"RAG retrieval failed: {e}")
# Get conversation history
history = self.memory.get_messages(
session_id, limit=self.config.companion.memory.session_turns
)
# Format history for prompt
history_formatted = format_conversation_history(
[{"role": msg.role, "content": msg.content} for msg in history]
)
# Build system prompt
system_prompt = build_system_prompt(
persona=self.config.companion.persona.model_dump(),
retrieved_context=retrieved_context if retrieved_context else None,
memory_context=history_formatted if history_formatted else None,
)
# Add user message to memory
self.memory.add_message(session_id, "user", user_message)
# Build messages for LLM
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_message},
]
# Stream from LLM (using llama.cpp server or similar)
full_response = ""
async for chunk in self._stream_llm_response(messages):
yield chunk
if chunk.startswith("data: "):
data = chunk[6:].strip()  # Remove "data: " prefix and SSE framing
if data != "[DONE]":
try:
delta = json.loads(data)
# Ollama streams {"message": {"content": ...}}; llama.cpp streams {"content": ...}
message = delta.get("message")
if isinstance(message, dict) and "content" in message:
full_response += message["content"]
elif "content" in delta:
full_response += delta["content"]
except json.JSONDecodeError:
pass
# Store assistant response in memory
if full_response:
self.memory.add_message(
session_id,
"assistant",
full_response,
metadata={
"rag_context_used": len(retrieved_context) > 0,
"citations": [ctx.to_dict() for ctx in retrieved_context[:5]]
if retrieved_context
else [],
},
)
# Yield citations after the main response
if retrieved_context:
citations_data = {
"citations": [ctx.to_dict() for ctx in retrieved_context[:5]]
}
yield f"data: {json.dumps(citations_data)}\n\n"
async def _stream_llm_response(
self, messages: list[dict[str, Any]]
) -> AsyncGenerator[str, None]:
"""Stream response from local LLM.
Uses llama.cpp HTTP server format or Ollama API.
"""
# Resolve the base URL for the local LLM server
base_url = self.config.model.inference.backend
if base_url == "llama.cpp":
base_url = "http://localhost:8080"  # Default llama.cpp server port
elif not base_url.startswith("http"):
# Any other non-URL backend name: fall back to the Ollama server
base_url = self.config.rag.embedding.base_url.replace("/api", "")
try:
async with httpx.AsyncClient(timeout=120.0) as client:
# Stream from the Ollama chat endpoint
async with client.stream(
"POST",
f"{base_url}/api/chat",
json={
"model": self.config.rag.embedding.model.replace("-embed", ""),
"messages": messages,
"stream": True,
"options": {
"temperature": self.config.companion.chat.default_temperature,
},
},
) as response:
async for line in response.aiter_lines():
if line.strip():
yield f"data: {line}\n\n"
except Exception as e:
yield f"data: {json.dumps({'error': f'LLM streaming failed: {e}'})}\n\n"
yield "data: [DONE]\n\n"
def get_session_history(self, session_id: str, limit: int = 50) -> list[Message]:
"""Get conversation history for a session."""
return self.memory.get_messages(session_id, limit=limit)
def clear_session(self, session_id: str) -> None:
"""Clear a session's conversation history."""
self.memory.clear_session(session_id)
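The SSE accumulation done while relaying chunks can be illustrated standalone. The two chunk shapes here (Ollama-style `message.content` and a flat `content` key) are assumptions about the backends involved:

```python
import json

# Simulated SSE stream: one Ollama-style chunk, one flat chunk, then the sentinel
chunks = [
    'data: {"message": {"content": "Hel"}}\n\n',
    'data: {"content": "lo"}\n\n',
    "data: [DONE]\n\n",
]

full_response = ""
for chunk in chunks:
    if not chunk.startswith("data: "):
        continue
    data = chunk[6:].strip()  # strip the SSE framing
    if data == "[DONE]":
        continue
    try:
        delta = json.loads(data)
    except json.JSONDecodeError:
        continue
    message = delta.get("message")
    if isinstance(message, dict) and "content" in message:
        full_response += message["content"]
    elif "content" in delta:
        full_response += delta["content"]

print(full_response)  # Hello
```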

src/companion/prompts.py Normal file

@@ -0,0 +1,101 @@
"""System prompts for the companion."""
from __future__ import annotations
from typing import Any
from companion.rag.search import SearchResult
def build_system_prompt(
persona: dict[str, Any],
retrieved_context: list[SearchResult] | None = None,
memory_context: str | None = None,
) -> str:
"""Build the system prompt for the companion.
Args:
persona: Companion persona configuration (name, role, tone, style, boundaries)
retrieved_context: Optional RAG context from vault search
memory_context: Optional memory summary from session
Returns:
The complete system prompt
"""
name = persona.get("name", "Companion")
role = persona.get("role", "companion")
tone = persona.get("tone", "reflective")
style = persona.get("style", "questioning")
boundaries = persona.get("boundaries", [])
# Base persona description
base_prompt = f"""You are {name}, a {tone}, {style} {role}.
Your role is to be a thoughtful, reflective companion—not to impersonate or speak for the user, but to explore alongside them. You know their life through their journal entries and are here to help them reflect, remember, and explore patterns.
Core principles:
- You do not speak as the user. You speak to them.
- You listen deeply and reflect patterns back gently.
- You ask questions that help them explore their own thoughts.
- You respect the boundaries of a confidante, not an oracle.
"""
# Add boundaries section
if boundaries:
base_prompt += "\nBoundaries:\n"
for boundary in boundaries:
base_prompt += f"- {boundary.replace('_', ' ').title()}\n"
# Add retrieved context section if available
context_section = ""
if retrieved_context:
context_section += "\n\nRelevant context from your vault:\n"
for i, ctx in enumerate(retrieved_context[:8], 1): # Limit to top 8 chunks
text = ctx.text.strip()
if text:
context_section += f"[{i}] From {ctx.citation}:\n{text}\n\n"
# Add memory context if available
memory_section = ""
if memory_context:
memory_section = f"\n\nContext from your conversation:\n{memory_context}"
# Final instructions
closing = """
When responding:
- Draw from the vault context if relevant, but don't force it
- Be concise but thoughtful—no unnecessary length
- If uncertain, acknowledge the uncertainty
- Ask follow-up questions that deepen reflection
"""
return base_prompt + context_section + memory_section + closing
def format_conversation_history(
messages: list[dict[str, Any]], max_turns: int = 10
) -> str:
"""Format conversation history for prompt context.
Args:
messages: List of message dicts with 'role' and 'content'
max_turns: Maximum number of recent turns to include
Returns:
Formatted conversation history
"""
if not messages:
return ""
# Take only recent messages
recent = messages[-max_turns * 2 :]  # a tail slice already handles short histories
formatted = []
for msg in recent:
role = msg.get("role", "user")
content = msg.get("content", "")
if content.strip():
formatted.append(f"{role.upper()}: {content}")
return "\n\n".join(formatted)
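The history window above is just a tail slice over user/assistant pairs; a standalone sketch:

```python
# Mirror of the windowing in format_conversation_history:
# keep at most the last max_turns user/assistant pairs
def tail_window(messages: list[dict], max_turns: int = 10) -> list[dict]:
    return messages[-max_turns * 2 :]

msgs = [{"role": "user", "content": str(i)} for i in range(25)]
recent = tail_window(msgs, max_turns=10)
print(len(recent), recent[0]["content"])  # 20 5
```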


@@ -1,9 +1,52 @@
from dataclasses import dataclass
from typing import Any
from companion.rag.embedder import OllamaEmbedder
from companion.rag.vector_store import VectorStore
@dataclass
class SearchResult:
"""Structured search result with citation information."""
id: str
text: str
source_file: str
source_directory: str
section: str | None
date: str | None
tags: list[str]
chunk_index: int
total_chunks: int
distance: float
@property
def citation(self) -> str:
"""Generate a citation string for this result."""
parts = [self.source_file]
if self.section:
parts.append(f"#{self.section}")
if self.date:
parts.append(f"({self.date})")
return " - ".join(parts)
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary for API serialization."""
return {
"id": self.id,
"text": self.text,
"source_file": self.source_file,
"source_directory": self.source_directory,
"section": self.section,
"date": self.date,
"tags": self.tags,
"chunk_index": self.chunk_index,
"total_chunks": self.total_chunks,
"distance": self.distance,
"citation": self.citation,
}
class SearchEngine:
"""Search engine for semantic search using vector embeddings.
@@ -50,7 +93,7 @@ class SearchEngine:
query: str,
top_k: int | None = None,
filters: dict[str, Any] | None = None,
-) -> list[dict[str, Any]]:
+) -> list[SearchResult]:
"""Search for relevant documents using semantic similarity.
Args:
@@ -59,7 +102,7 @@ class SearchEngine:
filters: Optional metadata filters to apply
Returns:
-List of matching documents with similarity scores
+List of SearchResult objects with similarity scores
Raises:
RuntimeError: If embedding generation fails
@@ -76,14 +119,33 @@ class SearchEngine:
except RuntimeError as e:
raise RuntimeError(f"Failed to generate embedding for query: {e}") from e
-results = self.vector_store.search(query_embedding, top_k=k, filters=filters)
+raw_results = self.vector_store.search(
+query_embedding, top_k=k, filters=filters
+)
-if self.similarity_threshold > 0 and results:
-results = [
+if self.similarity_threshold > 0 and raw_results:
+raw_results = [
r
-for r in results
+for r in raw_results
if r.get(self._DISTANCE_FIELD, float("inf"))
<= self.similarity_threshold
]
# Convert raw results to SearchResult objects
results: list[SearchResult] = []
for r in raw_results:
result = SearchResult(
id=r.get("id", ""),
text=r.get("text", ""),
source_file=r.get("source_file", ""),
source_directory=r.get("source_directory", ""),
section=r.get("section"),
date=r.get("date"),
tags=r.get("tags") or [],
chunk_index=r.get("chunk_index", 0),
total_chunks=r.get("total_chunks", 1),
distance=r.get(self._DISTANCE_FIELD, 1.0),
)
results.append(result)
return results
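The threshold filter and citation format can be sketched without the vector store. Field names follow `SearchResult` above; `_distance` as the distance key is an assumption (the real code reads `self._DISTANCE_FIELD`):

```python
# Filter raw hits by distance threshold, then build citation strings
raw_hits = [
    {"source_file": "Journal/2026-04-12.md", "section": "Morning",
     "date": "2026-04-12", "_distance": 0.41},
    {"source_file": "Notes/ideas.md", "section": None,
     "date": None, "_distance": 0.92},
]
threshold = 0.75
kept = [h for h in raw_hits if h.get("_distance", float("inf")) <= threshold]

def citation(hit: dict) -> str:
    # Mirrors SearchResult.citation: file, optional #section, optional (date)
    parts = [hit["source_file"]]
    if hit["section"]:
        parts.append(f"#{hit['section']}")
    if hit["date"]:
        parts.append(f"({hit['date']})")
    return " - ".join(parts)

print([citation(h) for h in kept])
```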


@@ -0,0 +1,35 @@
[Unit]
Description=Companion AI API Service
Documentation=https://github.com/santhoshjan/companion
After=network.target ollama.service
Wants=ollama.service
[Service]
Type=simple
User=companion
Group=companion
WorkingDirectory=/opt/companion
Environment=PYTHONPATH=/opt/companion
Environment=COMPANION_CONFIG=/opt/companion/config.json
Environment=COMPANION_DATA_DIR=/var/lib/companion
# Start the API server
ExecStart=/opt/companion/venv/bin/python -m uvicorn companion.api:app --host 0.0.0.0 --port 7373
# Restart on failure
Restart=on-failure
RestartSec=5
# Resource limits
MemoryMax=2G
CPUQuota=200%
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/companion
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,11 @@
[Unit]
Description=Companion AI Daily Full Index
Documentation=https://github.com/santhoshjan/companion
[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true
[Install]
WantedBy=timers.target
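A timer with no explicit `Unit=` activates the service sharing its name. To point it at a differently named service, a `Unit=` line can be added (the unit names here are assumptions, since the file names are not shown):

```ini
[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true
# Activate the one-shot indexing service regardless of this timer's name
Unit=companion-index.service
```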


@@ -0,0 +1,22 @@
[Unit]
Description=Companion AI Full Index (One-shot)
Documentation=https://github.com/santhoshjan/companion
[Service]
Type=oneshot
User=companion
Group=companion
WorkingDirectory=/opt/companion
Environment=PYTHONPATH=/opt/companion
Environment=COMPANION_CONFIG=/opt/companion/config.json
Environment=COMPANION_DATA_DIR=/var/lib/companion
# Run full index
ExecStart=/opt/companion/venv/bin/python -m companion.indexer_daemon.cli index
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/companion


@@ -0,0 +1,38 @@
[Unit]
Description=Companion AI Vault Indexer
Documentation=https://github.com/santhoshjan/companion
After=network.target companion-api.service
Wants=companion-api.service
[Service]
Type=simple
User=companion
Group=companion
WorkingDirectory=/opt/companion
Environment=PYTHONPATH=/opt/companion
Environment=COMPANION_CONFIG=/opt/companion/config.json
Environment=COMPANION_DATA_DIR=/var/lib/companion
# Start the file watcher for auto-sync
ExecStart=/opt/companion/venv/bin/python -m companion.indexer_daemon.watcher
# Or use the CLI for scheduled sync (uncomment to use):
# ExecStart=/opt/companion/venv/bin/python -m companion.indexer_daemon.cli sync
# Restart on failure
Restart=on-failure
RestartSec=10
# Resource limits
MemoryMax=1G
CPUQuota=100%
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/companion
[Install]
WantedBy=multi-user.target

tests/test_api.py Normal file

@@ -0,0 +1,31 @@
"""Simple smoke tests for FastAPI backend."""
import pytest
def test_api_imports():
"""Test that API module imports correctly."""
# This will fail if there are any import errors
from companion.api import app, ChatRequest
assert app is not None
assert ChatRequest is not None
def test_chat_request_model():
"""Test ChatRequest model validation."""
from companion.api import ChatRequest
# Valid request
req = ChatRequest(message="hello", session_id="abc123")
assert req.message == "hello"
assert req.session_id == "abc123"
# Valid request with temperature
req2 = ChatRequest(message="hello", temperature=0.7)
assert req2.temperature == 0.7
# Valid request with minimal fields
req3 = ChatRequest(message="hello")
assert req3.session_id is None
assert req3.temperature is None

tests/test_forge_extract.py Normal file

@@ -0,0 +1,604 @@
"""Tests for training data extractor."""
import tempfile
from pathlib import Path
import pytest
from companion.config import Config, VaultConfig, IndexingConfig
from companion.forge.extract import (
TrainingDataExtractor,
TrainingExample,
_create_training_example,
_extract_date_from_filename,
_has_reflection_patterns,
_has_reflection_tags,
_is_likely_reflection,
extract_training_data,
)
def test_has_reflection_tags():
assert _has_reflection_tags("#reflection on today's events")
assert _has_reflection_tags("#decision made today")
assert not _has_reflection_tags("#worklog entry")
def test_has_reflection_patterns():
assert _has_reflection_patterns("I think this is important")
assert _has_reflection_patterns("I wonder if I should change")
assert _has_reflection_patterns("Looking back, I see the pattern")
assert not _has_reflection_patterns("The meeting was at 3pm")
def test_is_likely_reflection():
assert _is_likely_reflection("#reflection I think this matters")
assert _is_likely_reflection("I realize now that I was wrong")
assert not _is_likely_reflection("Just a regular note")
def test_extract_date_from_filename():
assert _extract_date_from_filename("2026-04-12.md") == "2026-04-12"
assert _extract_date_from_filename("12-Apr-2026.md") == "12-Apr-2026"
assert _extract_date_from_filename("2026-04-12-journal.md") == "2026-04-12"
assert _extract_date_from_filename("notes.md") is None
def test_create_training_example():
text = "#reflection I think I need to reconsider my approach. The way I've been handling this isn't working."
example = _create_training_example(
chunk_text=text,
source_file="journal/2026-04-12.md",
tags=["#reflection"],
date="2026-04-12",
)
assert example is not None
assert len(example.messages) == 3
assert example.messages[0]["role"] == "system"
assert example.messages[1]["role"] == "user"
assert example.messages[2]["role"] == "assistant"
assert example.messages[2]["content"] == text
assert example.source_file == "journal/2026-04-12.md"
def test_create_training_example_too_short():
text = "I think." # Too short
example = _create_training_example(
chunk_text=text,
source_file="test.md",
tags=["#reflection"],
date=None,
)
assert example is None
def test_create_training_example_no_reflection():
text = "This is just a regular note about the meeting at 3pm. Nothing special." * 5
example = _create_training_example(
chunk_text=text,
source_file="test.md",
tags=["#work"],
date=None,
)
assert example is None
def test_training_example_to_dict():
example = TrainingExample(
messages=[
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hi"},
],
source_file="test.md",
tags=["#test"],
date="2026-04-12",
)
d = example.to_dict()
assert d["messages"][0]["role"] == "user"
assert d["source_file"] == "test.md"
assert d["date"] == "2026-04-12"
class TestTrainingDataExtractor:
def _get_config_dict(self, vault_path: Path) -> dict:
"""Return minimal config dict for testing."""
return {
"companion": {
"name": "SAN",
"persona": {
"role": "companion",
"tone": "reflective",
"style": "questioning",
"boundaries": [],
},
"memory": {
"session_turns": 20,
"persistent_store": "",
"summarize_after": 10,
},
"chat": {
"streaming": True,
"max_response_tokens": 2048,
"default_temperature": 0.7,
"allow_temperature_override": True,
},
},
"vault": {
"path": str(vault_path),
"indexing": {
"auto_sync": False,
"auto_sync_interval_minutes": 1440,
"watch_fs_events": False,
"file_patterns": ["*.md"],
"deny_dirs": [".git"],
"deny_patterns": [],
},
"chunking_rules": {},
},
"rag": {
"embedding": {
"provider": "ollama",
"model": "mxbai-embed-large",
"base_url": "http://localhost:11434",
"dimensions": 1024,
"batch_size": 32,
},
"vector_store": {"type": "lancedb", "path": ".test.vectors"},
"search": {
"default_top_k": 8,
"max_top_k": 20,
"similarity_threshold": 0.75,
"hybrid_search": {
"enabled": False,
"keyword_weight": 0.3,
"semantic_weight": 0.7,
},
"filters": {
"date_range_enabled": True,
"tag_filter_enabled": True,
"directory_filter_enabled": True,
},
},
},
"model": {
"inference": {
"backend": "llama.cpp",
"model_path": "",
"context_length": 8192,
"gpu_layers": 35,
"batch_size": 512,
"threads": 8,
},
"fine_tuning": {
"base_model": "",
"output_dir": "",
"lora_rank": 16,
"lora_alpha": 32,
"learning_rate": 0.0002,
"batch_size": 4,
"gradient_accumulation_steps": 4,
"num_epochs": 3,
"warmup_steps": 100,
"save_steps": 500,
"eval_steps": 250,
"training_data_path": "",
"validation_split": 0.1,
},
"retrain_schedule": {
"auto_reminder": True,
"default_interval_days": 90,
"reminder_channels": [],
},
},
"api": {
"host": "127.0.0.1",
"port": 7373,
"cors_origins": [],
"auth": {"enabled": False},
},
"ui": {
"web": {
"enabled": True,
"theme": "obsidian",
"features": {
"streaming": True,
"citations": True,
"source_preview": True,
},
},
"cli": {"enabled": True, "rich_output": True},
},
"logging": {
"level": "INFO",
"file": "",
"max_size_mb": 100,
"backup_count": 5,
},
"security": {
"local_only": True,
"vault_path_traversal_check": True,
"sensitive_content_detection": True,
"sensitive_patterns": [],
"require_confirmation_for_external_apis": True,
},
}
def test_extract_from_single_file(self):
with tempfile.TemporaryDirectory() as tmp:
vault = Path(tmp)
journal = vault / "Journal" / "2026" / "04"
journal.mkdir(parents=True)
content = """#DayInShort: Busy day
#reflection I think I need to slow down. The pace has been unsustainable.
#work Normal work day with meetings.
#insight I realize that I've been prioritizing urgency over importance.
"""
(journal / "2026-04-12.md").write_text(content, encoding="utf-8")
# Use helper method for config
from companion.config import load_config
import json
config_dict = self._get_config_dict(vault)
config_path = Path(tmp) / "test_config.json"
with open(config_path, "w") as f:
json.dump(config_dict, f)
config = load_config(config_path)
extractor = TrainingDataExtractor(config)
examples = extractor.extract()
# Should extract at least 2 reflection examples
assert len(examples) >= 2
# Check they have the right structure
for ex in examples:
assert len(ex.messages) == 3
assert ex.messages[2]["role"] == "assistant"
def test_save_to_jsonl(self):
with tempfile.TemporaryDirectory() as tmp:
output = Path(tmp) / "training.jsonl"
examples = [
TrainingExample(
messages=[
{"role": "system", "content": "sys"},
{"role": "user", "content": "user"},
{"role": "assistant", "content": "assistant"},
],
source_file="test.md",
tags=["#test"],
date="2026-04-12",
)
]
# Create minimal config for extractor
config_dict = self._get_config_dict(Path(tmp))
config_path = Path(tmp) / "test_config.json"
import json
with open(config_path, "w") as f:
json.dump(config_dict, f)
from companion.config import load_config
config = load_config(config_path)
extractor = TrainingDataExtractor(config)
extractor.examples = examples
count = extractor.save_to_jsonl(output)
assert count == 1
# Verify file content
lines = output.read_text(encoding="utf-8").strip().split("\n")
assert len(lines) == 1
assert "assistant" in lines[0]
def test_get_stats(self):
examples = [
TrainingExample(
messages=[
{"role": "system", "content": "sys"},
{"role": "user", "content": "user"},
{"role": "assistant", "content": "a" * 100},
],
source_file="test1.md",
tags=["#reflection", "#learning"],
date="2026-04-12",
),
TrainingExample(
messages=[
{"role": "system", "content": "sys"},
{"role": "user", "content": "user"},
{"role": "assistant", "content": "b" * 200},
],
source_file="test2.md",
tags=["#reflection", "#decision"],
date="2026-04-13",
),
]
# Create minimal config
with tempfile.TemporaryDirectory() as tmp:
config_dict = self._get_config_dict(Path(tmp))
config_path = Path(tmp) / "test_config.json"
import json
with open(config_path, "w") as f:
json.dump(config_dict, f)
from companion.config import load_config
config = load_config(config_path)
extractor = TrainingDataExtractor(config)
extractor.examples = examples
stats = extractor.get_stats()
assert stats["total"] == 2
assert stats["avg_length"] == 150 # (100 + 200) // 2
assert len(stats["top_tags"]) > 0
assert stats["top_tags"][0][0] == "#reflection"

tests/test_integration.py Normal file

@@ -0,0 +1,186 @@
import tempfile
from pathlib import Path
from unittest.mock import MagicMock, patch
from companion.config import (
Config,
VaultConfig,
IndexingConfig,
RagConfig,
EmbeddingConfig,
VectorStoreConfig,
SearchConfig,
HybridSearchConfig,
FiltersConfig,
CompanionConfig,
PersonaConfig,
MemoryConfig,
ChatConfig,
ModelConfig,
InferenceConfig,
FineTuningConfig,
RetrainScheduleConfig,
ApiConfig,
AuthConfig,
UiConfig,
WebConfig,
WebFeaturesConfig,
CliConfig,
LoggingConfig,
SecurityConfig,
)
from companion.rag.indexer import Indexer
from companion.rag.search import SearchEngine
from companion.rag.vector_store import VectorStore
def _make_config(vault_path: Path, vector_store_path: Path) -> Config:
return Config(
companion=CompanionConfig(
name="SAN",
persona=PersonaConfig(
role="companion", tone="reflective", style="questioning", boundaries=[]
),
memory=MemoryConfig(
session_turns=20, persistent_store="", summarize_after=10
),
chat=ChatConfig(
streaming=True,
max_response_tokens=2048,
default_temperature=0.7,
allow_temperature_override=True,
),
),
vault=VaultConfig(
path=str(vault_path),
indexing=IndexingConfig(
auto_sync=False,
auto_sync_interval_minutes=1440,
watch_fs_events=False,
file_patterns=["*.md"],
deny_dirs=[".git"],
deny_patterns=[".*"],
),
chunking_rules={},
),
rag=RagConfig(
embedding=EmbeddingConfig(
provider="ollama",
model="dummy",
base_url="http://localhost:11434",
dimensions=4,
batch_size=2,
),
vector_store=VectorStoreConfig(type="lancedb", path=str(vector_store_path)),
search=SearchConfig(
default_top_k=8,
max_top_k=20,
similarity_threshold=0.0,
hybrid_search=HybridSearchConfig(
enabled=False, keyword_weight=0.3, semantic_weight=0.7
),
filters=FiltersConfig(
date_range_enabled=True,
tag_filter_enabled=True,
directory_filter_enabled=True,
),
),
),
model=ModelConfig(
inference=InferenceConfig(
backend="llama.cpp",
model_path="",
context_length=8192,
gpu_layers=35,
batch_size=512,
threads=8,
),
fine_tuning=FineTuningConfig(
base_model="",
output_dir="",
lora_rank=16,
lora_alpha=32,
learning_rate=0.0002,
batch_size=4,
gradient_accumulation_steps=4,
num_epochs=3,
warmup_steps=100,
save_steps=500,
eval_steps=250,
training_data_path="",
validation_split=0.1,
),
retrain_schedule=RetrainScheduleConfig(
auto_reminder=True, default_interval_days=90, reminder_channels=[]
),
),
api=ApiConfig(
host="127.0.0.1", port=7373, cors_origins=[], auth=AuthConfig(enabled=False)
),
ui=UiConfig(
web=WebConfig(
enabled=True,
theme="obsidian",
features=WebFeaturesConfig(
streaming=True, citations=True, source_preview=True
),
),
cli=CliConfig(enabled=True, rich_output=True),
),
logging=LoggingConfig(level="INFO", file="", max_size_mb=100, backup_count=5),
security=SecurityConfig(
local_only=True,
vault_path_traversal_check=True,
sensitive_content_detection=True,
sensitive_patterns=[],
require_confirmation_for_external_apis=True,
),
)
@patch("companion.rag.search.OllamaEmbedder")
@patch("companion.rag.indexer.OllamaEmbedder")
def test_index_and_search_flow(mock_indexer_embedder, mock_search_embedder):
"""Verify end-to-end indexing and semantic search with mocked embeddings."""
mock_embed = MagicMock()
def mock_embed_side_effect(texts):
return [
[1.0, 0.0, 0.0, 0.0] if i == 0 else [0.0, 1.0, 0.0, 0.0]
for i in range(len(texts))
]
mock_embed.embed.side_effect = mock_embed_side_effect
mock_indexer_embedder.return_value = mock_embed
mock_search_embedder.return_value = mock_embed
with tempfile.TemporaryDirectory() as tmp:
vault = Path(tmp) / "vault"
vault.mkdir()
(vault / "note1.md").write_text("hello world", encoding="utf-8")
(vault / "note2.md").write_text("goodbye world", encoding="utf-8")
vs_path = Path(tmp) / "vectors"
config = _make_config(vault, vs_path)
store = VectorStore(uri=vs_path, dimensions=4)
indexer = Indexer(config, store)
indexer.full_index()
assert store.count() == 2
engine = SearchEngine(
vector_store=store,
embedder_base_url="http://localhost:11434",
embedder_model="dummy",
embedder_batch_size=2,
default_top_k=5,
similarity_threshold=0.0,
hybrid_search_enabled=False,
)
results = engine.search("hello")
assert len(results) >= 1
files = {r["source_file"] for r in results}
assert "note1.md" in files
results = engine.search("goodbye")
assert len(results) >= 1
files = {r["source_file"] for r in results}
assert "note2.md" in files

13
ui/index.html Normal file

@@ -0,0 +1,13 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Companion</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>

16
ui/node_modules/.bin/baseline-browser-mapping generated vendored Normal file

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../baseline-browser-mapping/dist/cli.cjs" "$@"
else
exec node "$basedir/../baseline-browser-mapping/dist/cli.cjs" "$@"
fi

17
ui/node_modules/.bin/baseline-browser-mapping.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\baseline-browser-mapping\dist\cli.cjs" %*

28
ui/node_modules/.bin/baseline-browser-mapping.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../baseline-browser-mapping/dist/cli.cjs" $args
} else {
& "$basedir/node$exe" "$basedir/../baseline-browser-mapping/dist/cli.cjs" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../baseline-browser-mapping/dist/cli.cjs" $args
} else {
& "node$exe" "$basedir/../baseline-browser-mapping/dist/cli.cjs" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/browserslist generated vendored Normal file

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../browserslist/cli.js" "$@"
else
exec node "$basedir/../browserslist/cli.js" "$@"
fi

17
ui/node_modules/.bin/browserslist.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\browserslist\cli.js" %*

28
ui/node_modules/.bin/browserslist.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../browserslist/cli.js" $args
} else {
& "$basedir/node$exe" "$basedir/../browserslist/cli.js" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../browserslist/cli.js" $args
} else {
& "node$exe" "$basedir/../browserslist/cli.js" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/esbuild generated vendored Normal file

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../esbuild/bin/esbuild" "$@"
else
exec node "$basedir/../esbuild/bin/esbuild" "$@"
fi

17
ui/node_modules/.bin/esbuild.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\esbuild\bin\esbuild" %*

28
ui/node_modules/.bin/esbuild.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../esbuild/bin/esbuild" $args
} else {
& "$basedir/node$exe" "$basedir/../esbuild/bin/esbuild" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../esbuild/bin/esbuild" $args
} else {
& "node$exe" "$basedir/../esbuild/bin/esbuild" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/jsesc generated vendored Normal file

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../jsesc/bin/jsesc" "$@"
else
exec node "$basedir/../jsesc/bin/jsesc" "$@"
fi

17
ui/node_modules/.bin/jsesc.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\jsesc\bin\jsesc" %*

28
ui/node_modules/.bin/jsesc.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../jsesc/bin/jsesc" $args
} else {
& "$basedir/node$exe" "$basedir/../jsesc/bin/jsesc" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../jsesc/bin/jsesc" $args
} else {
& "node$exe" "$basedir/../jsesc/bin/jsesc" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/json5 generated vendored Normal file

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../json5/lib/cli.js" "$@"
else
exec node "$basedir/../json5/lib/cli.js" "$@"
fi

17
ui/node_modules/.bin/json5.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\json5\lib\cli.js" %*

28
ui/node_modules/.bin/json5.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../json5/lib/cli.js" $args
} else {
& "$basedir/node$exe" "$basedir/../json5/lib/cli.js" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../json5/lib/cli.js" $args
} else {
& "node$exe" "$basedir/../json5/lib/cli.js" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/loose-envify generated vendored Normal file

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../loose-envify/cli.js" "$@"
else
exec node "$basedir/../loose-envify/cli.js" "$@"
fi

17
ui/node_modules/.bin/loose-envify.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\loose-envify\cli.js" %*

28
ui/node_modules/.bin/loose-envify.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../loose-envify/cli.js" $args
} else {
& "$basedir/node$exe" "$basedir/../loose-envify/cli.js" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../loose-envify/cli.js" $args
} else {
& "node$exe" "$basedir/../loose-envify/cli.js" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/nanoid generated vendored Normal file

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../nanoid/bin/nanoid.cjs" "$@"
else
exec node "$basedir/../nanoid/bin/nanoid.cjs" "$@"
fi

17
ui/node_modules/.bin/nanoid.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\nanoid\bin\nanoid.cjs" %*

28
ui/node_modules/.bin/nanoid.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../nanoid/bin/nanoid.cjs" $args
} else {
& "$basedir/node$exe" "$basedir/../nanoid/bin/nanoid.cjs" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../nanoid/bin/nanoid.cjs" $args
} else {
& "node$exe" "$basedir/../nanoid/bin/nanoid.cjs" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/parser generated vendored Normal file

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../@babel/parser/bin/babel-parser.js" "$@"
else
exec node "$basedir/../@babel/parser/bin/babel-parser.js" "$@"
fi

17
ui/node_modules/.bin/parser.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\@babel\parser\bin\babel-parser.js" %*

28
ui/node_modules/.bin/parser.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../@babel/parser/bin/babel-parser.js" $args
} else {
& "$basedir/node$exe" "$basedir/../@babel/parser/bin/babel-parser.js" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../@babel/parser/bin/babel-parser.js" $args
} else {
& "node$exe" "$basedir/../@babel/parser/bin/babel-parser.js" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/rollup generated vendored Normal file

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../rollup/dist/bin/rollup" "$@"
else
exec node "$basedir/../rollup/dist/bin/rollup" "$@"
fi

17
ui/node_modules/.bin/rollup.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\rollup\dist\bin\rollup" %*

28
ui/node_modules/.bin/rollup.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../rollup/dist/bin/rollup" $args
} else {
& "$basedir/node$exe" "$basedir/../rollup/dist/bin/rollup" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../rollup/dist/bin/rollup" $args
} else {
& "node$exe" "$basedir/../rollup/dist/bin/rollup" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/semver generated vendored Normal file

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../semver/bin/semver.js" "$@"
else
exec node "$basedir/../semver/bin/semver.js" "$@"
fi

17
ui/node_modules/.bin/semver.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\semver\bin\semver.js" %*

28
ui/node_modules/.bin/semver.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../semver/bin/semver.js" $args
} else {
& "$basedir/node$exe" "$basedir/../semver/bin/semver.js" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../semver/bin/semver.js" $args
} else {
& "node$exe" "$basedir/../semver/bin/semver.js" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/tsc generated vendored Normal file
View File

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../typescript/bin/tsc" "$@"
else
exec node "$basedir/../typescript/bin/tsc" "$@"
fi

17
ui/node_modules/.bin/tsc.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\typescript\bin\tsc" %*

28
ui/node_modules/.bin/tsc.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../typescript/bin/tsc" $args
} else {
& "$basedir/node$exe" "$basedir/../typescript/bin/tsc" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../typescript/bin/tsc" $args
} else {
& "node$exe" "$basedir/../typescript/bin/tsc" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/tsserver generated vendored Normal file
View File

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../typescript/bin/tsserver" "$@"
else
exec node "$basedir/../typescript/bin/tsserver" "$@"
fi

17
ui/node_modules/.bin/tsserver.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\typescript\bin\tsserver" %*

28
ui/node_modules/.bin/tsserver.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../typescript/bin/tsserver" $args
} else {
& "$basedir/node$exe" "$basedir/../typescript/bin/tsserver" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../typescript/bin/tsserver" $args
} else {
& "node$exe" "$basedir/../typescript/bin/tsserver" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/update-browserslist-db generated vendored Normal file
View File

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../update-browserslist-db/cli.js" "$@"
else
exec node "$basedir/../update-browserslist-db/cli.js" "$@"
fi

17
ui/node_modules/.bin/update-browserslist-db.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\update-browserslist-db\cli.js" %*

28
ui/node_modules/.bin/update-browserslist-db.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../update-browserslist-db/cli.js" $args
} else {
& "$basedir/node$exe" "$basedir/../update-browserslist-db/cli.js" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../update-browserslist-db/cli.js" $args
} else {
& "node$exe" "$basedir/../update-browserslist-db/cli.js" $args
}
$ret=$LASTEXITCODE
}
exit $ret

16
ui/node_modules/.bin/vite generated vendored Normal file
View File

@@ -0,0 +1,16 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -x "$basedir/node" ]; then
exec "$basedir/node" "$basedir/../vite/bin/vite.js" "$@"
else
exec node "$basedir/../vite/bin/vite.js" "$@"
fi

17
ui/node_modules/.bin/vite.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
GOTO start
:find_dp0
SET dp0=%~dp0
EXIT /b
:start
SETLOCAL
CALL :find_dp0
IF EXIST "%dp0%\node.exe" (
SET "_prog=%dp0%\node.exe"
) ELSE (
SET "_prog=node"
SET PATHEXT=%PATHEXT:;.JS;=;%
)
endLocal & goto #_undefined_# 2>NUL || title %COMSPEC% & "%_prog%" "%dp0%\..\vite\bin\vite.js" %*

28
ui/node_modules/.bin/vite.ps1 generated vendored Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
# Fix case when both the Windows and Linux builds of Node
# are installed in the same directory
$exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "$basedir/node$exe" "$basedir/../vite/bin/vite.js" $args
} else {
& "$basedir/node$exe" "$basedir/../vite/bin/vite.js" $args
}
$ret=$LASTEXITCODE
} else {
# Support pipeline input
if ($MyInvocation.ExpectingInput) {
$input | & "node$exe" "$basedir/../vite/bin/vite.js" $args
} else {
& "node$exe" "$basedir/../vite/bin/vite.js" $args
}
$ret=$LASTEXITCODE
}
exit $ret

1003
ui/node_modules/.package-lock.json generated vendored Normal file

File diff suppressed because it is too large

22
ui/node_modules/@babel/code-frame/LICENSE generated vendored Normal file

@@ -0,0 +1,22 @@
MIT License
Copyright (c) 2014-present Sebastian McKenzie and other contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

19
ui/node_modules/@babel/code-frame/README.md generated vendored Normal file

@@ -0,0 +1,19 @@
# @babel/code-frame
> Generate errors that contain a code frame that point to source locations.
See our website [@babel/code-frame](https://babeljs.io/docs/babel-code-frame) for more information.
## Install
Using npm:
```sh
npm install --save-dev @babel/code-frame
```
or using yarn:
```sh
yarn add @babel/code-frame --dev
```

217
ui/node_modules/@babel/code-frame/lib/index.js generated vendored Normal file

@@ -0,0 +1,217 @@
'use strict';
Object.defineProperty(exports, '__esModule', { value: true });
var picocolors = require('picocolors');
var jsTokens = require('js-tokens');
var helperValidatorIdentifier = require('@babel/helper-validator-identifier');
function isColorSupported() {
return (typeof process === "object" && (process.env.FORCE_COLOR === "0" || process.env.FORCE_COLOR === "false") ? false : picocolors.isColorSupported
);
}
const compose = (f, g) => v => f(g(v));
function buildDefs(colors) {
return {
keyword: colors.cyan,
capitalized: colors.yellow,
jsxIdentifier: colors.yellow,
punctuator: colors.yellow,
number: colors.magenta,
string: colors.green,
regex: colors.magenta,
comment: colors.gray,
invalid: compose(compose(colors.white, colors.bgRed), colors.bold),
gutter: colors.gray,
marker: compose(colors.red, colors.bold),
message: compose(colors.red, colors.bold),
reset: colors.reset
};
}
const defsOn = buildDefs(picocolors.createColors(true));
const defsOff = buildDefs(picocolors.createColors(false));
function getDefs(enabled) {
return enabled ? defsOn : defsOff;
}
const sometimesKeywords = new Set(["as", "async", "from", "get", "of", "set"]);
const NEWLINE$1 = /\r\n|[\n\r\u2028\u2029]/;
const BRACKET = /^[()[\]{}]$/;
let tokenize;
const JSX_TAG = /^[a-z][\w-]*$/i;
const getTokenType = function (token, offset, text) {
if (token.type === "name") {
const tokenValue = token.value;
if (helperValidatorIdentifier.isKeyword(tokenValue) || helperValidatorIdentifier.isStrictReservedWord(tokenValue, true) || sometimesKeywords.has(tokenValue)) {
return "keyword";
}
if (JSX_TAG.test(tokenValue) && (text[offset - 1] === "<" || text.slice(offset - 2, offset) === "</")) {
return "jsxIdentifier";
}
const firstChar = String.fromCodePoint(tokenValue.codePointAt(0));
if (firstChar !== firstChar.toLowerCase()) {
return "capitalized";
}
}
if (token.type === "punctuator" && BRACKET.test(token.value)) {
return "bracket";
}
if (token.type === "invalid" && (token.value === "@" || token.value === "#")) {
return "punctuator";
}
return token.type;
};
tokenize = function* (text) {
let match;
while (match = jsTokens.default.exec(text)) {
const token = jsTokens.matchToToken(match);
yield {
type: getTokenType(token, match.index, text),
value: token.value
};
}
};
function highlight(text) {
if (text === "") return "";
const defs = getDefs(true);
let highlighted = "";
for (const {
type,
value
} of tokenize(text)) {
if (type in defs) {
highlighted += value.split(NEWLINE$1).map(str => defs[type](str)).join("\n");
} else {
highlighted += value;
}
}
return highlighted;
}
let deprecationWarningShown = false;
const NEWLINE = /\r\n|[\n\r\u2028\u2029]/;
function getMarkerLines(loc, source, opts, startLineBaseZero) {
const startLoc = Object.assign({
column: 0,
line: -1
}, loc.start);
const endLoc = Object.assign({}, startLoc, loc.end);
const {
linesAbove = 2,
linesBelow = 3
} = opts || {};
const startLine = startLoc.line - startLineBaseZero;
const startColumn = startLoc.column;
const endLine = endLoc.line - startLineBaseZero;
const endColumn = endLoc.column;
let start = Math.max(startLine - (linesAbove + 1), 0);
let end = Math.min(source.length, endLine + linesBelow);
if (startLine === -1) {
start = 0;
}
if (endLine === -1) {
end = source.length;
}
const lineDiff = endLine - startLine;
const markerLines = {};
if (lineDiff) {
for (let i = 0; i <= lineDiff; i++) {
const lineNumber = i + startLine;
if (!startColumn) {
markerLines[lineNumber] = true;
} else if (i === 0) {
const sourceLength = source[lineNumber - 1].length;
markerLines[lineNumber] = [startColumn, sourceLength - startColumn + 1];
} else if (i === lineDiff) {
markerLines[lineNumber] = [0, endColumn];
} else {
const sourceLength = source[lineNumber - i].length;
markerLines[lineNumber] = [0, sourceLength];
}
}
} else {
if (startColumn === endColumn) {
if (startColumn) {
markerLines[startLine] = [startColumn, 0];
} else {
markerLines[startLine] = true;
}
} else {
markerLines[startLine] = [startColumn, endColumn - startColumn];
}
}
return {
start,
end,
markerLines
};
}
function codeFrameColumns(rawLines, loc, opts = {}) {
const shouldHighlight = opts.forceColor || isColorSupported() && opts.highlightCode;
const startLineBaseZero = (opts.startLine || 1) - 1;
const defs = getDefs(shouldHighlight);
const lines = rawLines.split(NEWLINE);
const {
start,
end,
markerLines
} = getMarkerLines(loc, lines, opts, startLineBaseZero);
const hasColumns = loc.start && typeof loc.start.column === "number";
const numberMaxWidth = String(end + startLineBaseZero).length;
const highlightedLines = shouldHighlight ? highlight(rawLines) : rawLines;
let frame = highlightedLines.split(NEWLINE, end).slice(start, end).map((line, index) => {
const number = start + 1 + index;
const paddedNumber = ` ${number + startLineBaseZero}`.slice(-numberMaxWidth);
const gutter = ` ${paddedNumber} |`;
const hasMarker = markerLines[number];
const lastMarkerLine = !markerLines[number + 1];
if (hasMarker) {
let markerLine = "";
if (Array.isArray(hasMarker)) {
const markerSpacing = line.slice(0, Math.max(hasMarker[0] - 1, 0)).replace(/[^\t]/g, " ");
const numberOfMarkers = hasMarker[1] || 1;
markerLine = ["\n ", defs.gutter(gutter.replace(/\d/g, " ")), " ", markerSpacing, defs.marker("^").repeat(numberOfMarkers)].join("");
if (lastMarkerLine && opts.message) {
markerLine += " " + defs.message(opts.message);
}
}
return [defs.marker(">"), defs.gutter(gutter), line.length > 0 ? ` ${line}` : "", markerLine].join("");
} else {
return ` ${defs.gutter(gutter)}${line.length > 0 ? ` ${line}` : ""}`;
}
}).join("\n");
if (opts.message && !hasColumns) {
frame = `${" ".repeat(numberMaxWidth + 1)}${opts.message}\n${frame}`;
}
if (shouldHighlight) {
return defs.reset(frame);
} else {
return frame;
}
}
function index (rawLines, lineNumber, colNumber, opts = {}) {
if (!deprecationWarningShown) {
deprecationWarningShown = true;
const message = "Passing lineNumber and colNumber is deprecated to @babel/code-frame. Please use `codeFrameColumns`.";
if (process.emitWarning) {
process.emitWarning(message, "DeprecationWarning");
} else {
const deprecationError = new Error(message);
deprecationError.name = "DeprecationWarning";
console.warn(new Error(message));
}
}
colNumber = Math.max(colNumber, 0);
const location = {
start: {
column: colNumber,
line: lineNumber
}
};
return codeFrameColumns(rawLines, location, opts);
}
exports.codeFrameColumns = codeFrameColumns;
exports.default = index;
exports.highlight = highlight;
//# sourceMappingURL=index.js.map

1
ui/node_modules/@babel/code-frame/lib/index.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

32
ui/node_modules/@babel/code-frame/package.json generated vendored Normal file

@@ -0,0 +1,32 @@
{
"name": "@babel/code-frame",
"version": "7.29.0",
"description": "Generate errors that contain a code frame that point to source locations.",
"author": "The Babel Team (https://babel.dev/team)",
"homepage": "https://babel.dev/docs/en/next/babel-code-frame",
"bugs": "https://github.com/babel/babel/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen",
"license": "MIT",
"publishConfig": {
"access": "public"
},
"repository": {
"type": "git",
"url": "https://github.com/babel/babel.git",
"directory": "packages/babel-code-frame"
},
"main": "./lib/index.js",
"dependencies": {
"@babel/helper-validator-identifier": "^7.28.5",
"js-tokens": "^4.0.0",
"picocolors": "^1.1.1"
},
"devDependencies": {
"charcodes": "^0.2.0",
"import-meta-resolve": "^4.1.0",
"strip-ansi": "^4.0.0"
},
"engines": {
"node": ">=6.9.0"
},
"type": "commonjs"
}

22
ui/node_modules/@babel/compat-data/LICENSE generated vendored Normal file

@@ -0,0 +1,22 @@
MIT License
Copyright (c) 2014-present Sebastian McKenzie and other contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

19
ui/node_modules/@babel/compat-data/README.md generated vendored Normal file

@@ -0,0 +1,19 @@
# @babel/compat-data
> The compat-data to determine required Babel plugins
See our website [@babel/compat-data](https://babeljs.io/docs/babel-compat-data) for more information.
## Install
Using npm:
```sh
npm install --save @babel/compat-data
```
or using yarn:
```sh
yarn add @babel/compat-data
```


@@ -0,0 +1,2 @@
// Todo (Babel 8): remove this file as Babel 8 drop support of core-js 2
module.exports = require("./data/corejs2-built-ins.json");


@@ -0,0 +1,2 @@
// Todo (Babel 8): remove this file now that it is included in babel-plugin-polyfill-corejs3
module.exports = require("./data/corejs3-shipped-proposals.json");

File diff suppressed because it is too large


@@ -0,0 +1,5 @@
[
"esnext.promise.all-settled",
"esnext.string.match-all",
"esnext.global-this"
]


@@ -0,0 +1,18 @@
{
"es6.module": {
"chrome": "61",
"and_chr": "61",
"edge": "16",
"firefox": "60",
"and_ff": "60",
"node": "13.2.0",
"opera": "48",
"op_mob": "45",
"safari": "10.1",
"ios": "10.3",
"samsung": "8.2",
"android": "61",
"electron": "2.0",
"ios_saf": "10.3"
}
}


@@ -0,0 +1,35 @@
{
"transform-async-to-generator": [
"bugfix/transform-async-arrows-in-class"
],
"transform-parameters": [
"bugfix/transform-edge-default-parameters",
"bugfix/transform-safari-id-destructuring-collision-in-function-expression"
],
"transform-function-name": [
"bugfix/transform-edge-function-name"
],
"transform-block-scoping": [
"bugfix/transform-safari-block-shadowing",
"bugfix/transform-safari-for-shadowing"
],
"transform-template-literals": [
"bugfix/transform-tagged-template-caching"
],
"transform-optional-chaining": [
"bugfix/transform-v8-spread-parameters-in-optional-chaining"
],
"proposal-optional-chaining": [
"bugfix/transform-v8-spread-parameters-in-optional-chaining"
],
"transform-class-properties": [
"bugfix/transform-v8-static-class-fields-redefine-readonly",
"bugfix/transform-firefox-class-in-computed-class-key",
"bugfix/transform-safari-class-field-initializer-scope"
],
"proposal-class-properties": [
"bugfix/transform-v8-static-class-fields-redefine-readonly",
"bugfix/transform-firefox-class-in-computed-class-key",
"bugfix/transform-safari-class-field-initializer-scope"
]
}


@@ -0,0 +1,203 @@
{
"bugfix/transform-async-arrows-in-class": {
"chrome": "55",
"opera": "42",
"edge": "15",
"firefox": "52",
"safari": "11",
"node": "7.6",
"deno": "1",
"ios": "11",
"samsung": "6",
"opera_mobile": "42",
"electron": "1.6"
},
"bugfix/transform-edge-default-parameters": {
"chrome": "49",
"opera": "36",
"edge": "18",
"firefox": "52",
"safari": "10",
"node": "6",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "36",
"electron": "0.37"
},
"bugfix/transform-edge-function-name": {
"chrome": "51",
"opera": "38",
"edge": "79",
"firefox": "53",
"safari": "10",
"node": "6.5",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "41",
"electron": "1.2"
},
"bugfix/transform-safari-block-shadowing": {
"chrome": "49",
"opera": "36",
"edge": "12",
"firefox": "44",
"safari": "11",
"node": "6",
"deno": "1",
"ie": "11",
"ios": "11",
"samsung": "5",
"opera_mobile": "36",
"electron": "0.37"
},
"bugfix/transform-safari-for-shadowing": {
"chrome": "49",
"opera": "36",
"edge": "12",
"firefox": "4",
"safari": "11",
"node": "6",
"deno": "1",
"ie": "11",
"ios": "11",
"samsung": "5",
"rhino": "1.7.13",
"opera_mobile": "36",
"electron": "0.37"
},
"bugfix/transform-safari-id-destructuring-collision-in-function-expression": {
"chrome": "49",
"opera": "36",
"edge": "14",
"firefox": "2",
"safari": "16.3",
"node": "6",
"deno": "1",
"ios": "16.3",
"samsung": "5",
"opera_mobile": "36",
"electron": "0.37"
},
"bugfix/transform-tagged-template-caching": {
"chrome": "41",
"opera": "28",
"edge": "12",
"firefox": "34",
"safari": "13",
"node": "4",
"deno": "1",
"ios": "13",
"samsung": "3.4",
"rhino": "1.7.14",
"opera_mobile": "28",
"electron": "0.21"
},
"bugfix/transform-v8-spread-parameters-in-optional-chaining": {
"chrome": "91",
"opera": "77",
"edge": "91",
"firefox": "74",
"safari": "13.1",
"node": "16.9",
"deno": "1.9",
"ios": "13.4",
"samsung": "16",
"opera_mobile": "64",
"electron": "13.0"
},
"transform-optional-chaining": {
"chrome": "80",
"opera": "67",
"edge": "80",
"firefox": "74",
"safari": "13.1",
"node": "14",
"deno": "1",
"ios": "13.4",
"samsung": "13",
"rhino": "1.8",
"opera_mobile": "57",
"electron": "8.0"
},
"proposal-optional-chaining": {
"chrome": "80",
"opera": "67",
"edge": "80",
"firefox": "74",
"safari": "13.1",
"node": "14",
"deno": "1",
"ios": "13.4",
"samsung": "13",
"rhino": "1.8",
"opera_mobile": "57",
"electron": "8.0"
},
"transform-parameters": {
"chrome": "49",
"opera": "36",
"edge": "15",
"firefox": "52",
"safari": "10",
"node": "6",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "36",
"electron": "0.37"
},
"transform-async-to-generator": {
"chrome": "55",
"opera": "42",
"edge": "15",
"firefox": "52",
"safari": "10.1",
"node": "7.6",
"deno": "1",
"ios": "10.3",
"samsung": "6",
"opera_mobile": "42",
"electron": "1.6"
},
"transform-template-literals": {
"chrome": "41",
"opera": "28",
"edge": "13",
"firefox": "34",
"safari": "9",
"node": "4",
"deno": "1",
"ios": "9",
"samsung": "3.4",
"opera_mobile": "28",
"electron": "0.21"
},
"transform-function-name": {
"chrome": "51",
"opera": "38",
"edge": "14",
"firefox": "53",
"safari": "10",
"node": "6.5",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "41",
"electron": "1.2"
},
"transform-block-scoping": {
"chrome": "50",
"opera": "37",
"edge": "14",
"firefox": "53",
"safari": "10",
"node": "6",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "37",
"electron": "1.1"
}
}

838
ui/node_modules/@babel/compat-data/data/plugins.json generated vendored Normal file

@@ -0,0 +1,838 @@
{
"transform-explicit-resource-management": {
"chrome": "134",
"edge": "134",
"firefox": "141",
"node": "24",
"electron": "35.0"
},
"transform-duplicate-named-capturing-groups-regex": {
"chrome": "126",
"opera": "112",
"edge": "126",
"firefox": "129",
"safari": "17.4",
"node": "23",
"ios": "17.4",
"electron": "31.0"
},
"transform-regexp-modifiers": {
"chrome": "125",
"opera": "111",
"edge": "125",
"firefox": "132",
"node": "23",
"samsung": "27",
"electron": "31.0"
},
"transform-unicode-sets-regex": {
"chrome": "112",
"opera": "98",
"edge": "112",
"firefox": "116",
"safari": "17",
"node": "20",
"deno": "1.32",
"ios": "17",
"samsung": "23",
"opera_mobile": "75",
"electron": "24.0"
},
"bugfix/transform-v8-static-class-fields-redefine-readonly": {
"chrome": "98",
"opera": "84",
"edge": "98",
"firefox": "75",
"safari": "15",
"node": "12",
"deno": "1.18",
"ios": "15",
"samsung": "11",
"opera_mobile": "52",
"electron": "17.0"
},
"bugfix/transform-firefox-class-in-computed-class-key": {
"chrome": "74",
"opera": "62",
"edge": "79",
"firefox": "126",
"safari": "16",
"node": "12",
"deno": "1",
"ios": "16",
"samsung": "11",
"opera_mobile": "53",
"electron": "6.0"
},
"bugfix/transform-safari-class-field-initializer-scope": {
"chrome": "74",
"opera": "62",
"edge": "79",
"firefox": "69",
"safari": "16",
"node": "12",
"deno": "1",
"ios": "16",
"samsung": "11",
"opera_mobile": "53",
"electron": "6.0"
},
"transform-class-static-block": {
"chrome": "94",
"opera": "80",
"edge": "94",
"firefox": "93",
"safari": "16.4",
"node": "16.11",
"deno": "1.14",
"ios": "16.4",
"samsung": "17",
"opera_mobile": "66",
"electron": "15.0"
},
"proposal-class-static-block": {
"chrome": "94",
"opera": "80",
"edge": "94",
"firefox": "93",
"safari": "16.4",
"node": "16.11",
"deno": "1.14",
"ios": "16.4",
"samsung": "17",
"opera_mobile": "66",
"electron": "15.0"
},
"transform-private-property-in-object": {
"chrome": "91",
"opera": "77",
"edge": "91",
"firefox": "90",
"safari": "15",
"node": "16.9",
"deno": "1.9",
"ios": "15",
"samsung": "16",
"opera_mobile": "64",
"electron": "13.0"
},
"proposal-private-property-in-object": {
"chrome": "91",
"opera": "77",
"edge": "91",
"firefox": "90",
"safari": "15",
"node": "16.9",
"deno": "1.9",
"ios": "15",
"samsung": "16",
"opera_mobile": "64",
"electron": "13.0"
},
"transform-class-properties": {
"chrome": "74",
"opera": "62",
"edge": "79",
"firefox": "90",
"safari": "14.1",
"node": "12",
"deno": "1",
"ios": "14.5",
"samsung": "11",
"opera_mobile": "53",
"electron": "6.0"
},
"proposal-class-properties": {
"chrome": "74",
"opera": "62",
"edge": "79",
"firefox": "90",
"safari": "14.1",
"node": "12",
"deno": "1",
"ios": "14.5",
"samsung": "11",
"opera_mobile": "53",
"electron": "6.0"
},
"transform-private-methods": {
"chrome": "84",
"opera": "70",
"edge": "84",
"firefox": "90",
"safari": "15",
"node": "14.6",
"deno": "1",
"ios": "15",
"samsung": "14",
"opera_mobile": "60",
"electron": "10.0"
},
"proposal-private-methods": {
"chrome": "84",
"opera": "70",
"edge": "84",
"firefox": "90",
"safari": "15",
"node": "14.6",
"deno": "1",
"ios": "15",
"samsung": "14",
"opera_mobile": "60",
"electron": "10.0"
},
"transform-numeric-separator": {
"chrome": "75",
"opera": "62",
"edge": "79",
"firefox": "70",
"safari": "13",
"node": "12.5",
"deno": "1",
"ios": "13",
"samsung": "11",
"rhino": "1.7.14",
"opera_mobile": "54",
"electron": "6.0"
},
"proposal-numeric-separator": {
"chrome": "75",
"opera": "62",
"edge": "79",
"firefox": "70",
"safari": "13",
"node": "12.5",
"deno": "1",
"ios": "13",
"samsung": "11",
"rhino": "1.7.14",
"opera_mobile": "54",
"electron": "6.0"
},
"transform-logical-assignment-operators": {
"chrome": "85",
"opera": "71",
"edge": "85",
"firefox": "79",
"safari": "14",
"node": "15",
"deno": "1.2",
"ios": "14",
"samsung": "14",
"opera_mobile": "60",
"electron": "10.0"
},
"proposal-logical-assignment-operators": {
"chrome": "85",
"opera": "71",
"edge": "85",
"firefox": "79",
"safari": "14",
"node": "15",
"deno": "1.2",
"ios": "14",
"samsung": "14",
"opera_mobile": "60",
"electron": "10.0"
},
"transform-nullish-coalescing-operator": {
"chrome": "80",
"opera": "67",
"edge": "80",
"firefox": "72",
"safari": "13.1",
"node": "14",
"deno": "1",
"ios": "13.4",
"samsung": "13",
"rhino": "1.8",
"opera_mobile": "57",
"electron": "8.0"
},
"proposal-nullish-coalescing-operator": {
"chrome": "80",
"opera": "67",
"edge": "80",
"firefox": "72",
"safari": "13.1",
"node": "14",
"deno": "1",
"ios": "13.4",
"samsung": "13",
"rhino": "1.8",
"opera_mobile": "57",
"electron": "8.0"
},
"transform-optional-chaining": {
"chrome": "91",
"opera": "77",
"edge": "91",
"firefox": "74",
"safari": "13.1",
"node": "16.9",
"deno": "1.9",
"ios": "13.4",
"samsung": "16",
"opera_mobile": "64",
"electron": "13.0"
},
"proposal-optional-chaining": {
"chrome": "91",
"opera": "77",
"edge": "91",
"firefox": "74",
"safari": "13.1",
"node": "16.9",
"deno": "1.9",
"ios": "13.4",
"samsung": "16",
"opera_mobile": "64",
"electron": "13.0"
},
"transform-json-strings": {
"chrome": "66",
"opera": "53",
"edge": "79",
"firefox": "62",
"safari": "12",
"node": "10",
"deno": "1",
"ios": "12",
"samsung": "9",
"rhino": "1.7.14",
"opera_mobile": "47",
"electron": "3.0"
},
"proposal-json-strings": {
"chrome": "66",
"opera": "53",
"edge": "79",
"firefox": "62",
"safari": "12",
"node": "10",
"deno": "1",
"ios": "12",
"samsung": "9",
"rhino": "1.7.14",
"opera_mobile": "47",
"electron": "3.0"
},
"transform-optional-catch-binding": {
"chrome": "66",
"opera": "53",
"edge": "79",
"firefox": "58",
"safari": "11.1",
"node": "10",
"deno": "1",
"ios": "11.3",
"samsung": "9",
"opera_mobile": "47",
"electron": "3.0"
},
"proposal-optional-catch-binding": {
"chrome": "66",
"opera": "53",
"edge": "79",
"firefox": "58",
"safari": "11.1",
"node": "10",
"deno": "1",
"ios": "11.3",
"samsung": "9",
"opera_mobile": "47",
"electron": "3.0"
},
"transform-parameters": {
"chrome": "49",
"opera": "36",
"edge": "18",
"firefox": "52",
"safari": "16.3",
"node": "6",
"deno": "1",
"ios": "16.3",
"samsung": "5",
"opera_mobile": "36",
"electron": "0.37"
},
"transform-async-generator-functions": {
"chrome": "63",
"opera": "50",
"edge": "79",
"firefox": "57",
"safari": "12",
"node": "10",
"deno": "1",
"ios": "12",
"samsung": "8",
"opera_mobile": "46",
"electron": "3.0"
},
"proposal-async-generator-functions": {
"chrome": "63",
"opera": "50",
"edge": "79",
"firefox": "57",
"safari": "12",
"node": "10",
"deno": "1",
"ios": "12",
"samsung": "8",
"opera_mobile": "46",
"electron": "3.0"
},
"transform-object-rest-spread": {
"chrome": "60",
"opera": "47",
"edge": "79",
"firefox": "55",
"safari": "11.1",
"node": "8.3",
"deno": "1",
"ios": "11.3",
"samsung": "8",
"opera_mobile": "44",
"electron": "2.0"
},
"proposal-object-rest-spread": {
"chrome": "60",
"opera": "47",
"edge": "79",
"firefox": "55",
"safari": "11.1",
"node": "8.3",
"deno": "1",
"ios": "11.3",
"samsung": "8",
"opera_mobile": "44",
"electron": "2.0"
},
"transform-dotall-regex": {
"chrome": "62",
"opera": "49",
"edge": "79",
"firefox": "78",
"safari": "11.1",
"node": "8.10",
"deno": "1",
"ios": "11.3",
"samsung": "8",
"rhino": "1.7.15",
"opera_mobile": "46",
"electron": "3.0"
},
"transform-unicode-property-regex": {
"chrome": "64",
"opera": "51",
"edge": "79",
"firefox": "78",
"safari": "11.1",
"node": "10",
"deno": "1",
"ios": "11.3",
"samsung": "9",
"opera_mobile": "47",
"electron": "3.0"
},
"proposal-unicode-property-regex": {
"chrome": "64",
"opera": "51",
"edge": "79",
"firefox": "78",
"safari": "11.1",
"node": "10",
"deno": "1",
"ios": "11.3",
"samsung": "9",
"opera_mobile": "47",
"electron": "3.0"
},
"transform-named-capturing-groups-regex": {
"chrome": "64",
"opera": "51",
"edge": "79",
"firefox": "78",
"safari": "11.1",
"node": "10",
"deno": "1",
"ios": "11.3",
"samsung": "9",
"opera_mobile": "47",
"electron": "3.0"
},
"transform-async-to-generator": {
"chrome": "55",
"opera": "42",
"edge": "15",
"firefox": "52",
"safari": "11",
"node": "7.6",
"deno": "1",
"ios": "11",
"samsung": "6",
"opera_mobile": "42",
"electron": "1.6"
},
"transform-exponentiation-operator": {
"chrome": "52",
"opera": "39",
"edge": "14",
"firefox": "52",
"safari": "10.1",
"node": "7",
"deno": "1",
"ios": "10.3",
"samsung": "6",
"rhino": "1.7.14",
"opera_mobile": "41",
"electron": "1.3"
},
"transform-template-literals": {
"chrome": "41",
"opera": "28",
"edge": "13",
"firefox": "34",
"safari": "13",
"node": "4",
"deno": "1",
"ios": "13",
"samsung": "3.4",
"opera_mobile": "28",
"electron": "0.21"
},
"transform-literals": {
"chrome": "44",
"opera": "31",
"edge": "12",
"firefox": "53",
"safari": "9",
"node": "4",
"deno": "1",
"ios": "9",
"samsung": "4",
"rhino": "1.7.15",
"opera_mobile": "32",
"electron": "0.30"
},
"transform-function-name": {
"chrome": "51",
"opera": "38",
"edge": "79",
"firefox": "53",
"safari": "10",
"node": "6.5",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "41",
"electron": "1.2"
},
"transform-arrow-functions": {
"chrome": "47",
"opera": "34",
"edge": "13",
"firefox": "43",
"safari": "10",
"node": "6",
"deno": "1",
"ios": "10",
"samsung": "5",
"rhino": "1.7.13",
"opera_mobile": "34",
"electron": "0.36"
},
"transform-block-scoped-functions": {
"chrome": "41",
"opera": "28",
"edge": "12",
"firefox": "46",
"safari": "10",
"node": "4",
"deno": "1",
"ie": "11",
"ios": "10",
"samsung": "3.4",
"opera_mobile": "28",
"electron": "0.21"
},
"transform-classes": {
"chrome": "46",
"opera": "33",
"edge": "13",
"firefox": "45",
"safari": "10",
"node": "5",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "33",
"electron": "0.36"
},
"transform-object-super": {
"chrome": "46",
"opera": "33",
"edge": "13",
"firefox": "45",
"safari": "10",
"node": "5",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "33",
"electron": "0.36"
},
"transform-shorthand-properties": {
"chrome": "43",
"opera": "30",
"edge": "12",
"firefox": "33",
"safari": "9",
"node": "4",
"deno": "1",
"ios": "9",
"samsung": "4",
"rhino": "1.7.14",
"opera_mobile": "30",
"electron": "0.27"
},
"transform-duplicate-keys": {
"chrome": "42",
"opera": "29",
"edge": "12",
"firefox": "34",
"safari": "9",
"node": "4",
"deno": "1",
"ios": "9",
"samsung": "3.4",
"opera_mobile": "29",
"electron": "0.25"
},
"transform-computed-properties": {
"chrome": "44",
"opera": "31",
"edge": "12",
"firefox": "34",
"safari": "7.1",
"node": "4",
"deno": "1",
"ios": "8",
"samsung": "4",
"rhino": "1.8",
"opera_mobile": "32",
"electron": "0.30"
},
"transform-for-of": {
"chrome": "51",
"opera": "38",
"edge": "15",
"firefox": "53",
"safari": "10",
"node": "6.5",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "41",
"electron": "1.2"
},
"transform-sticky-regex": {
"chrome": "49",
"opera": "36",
"edge": "13",
"firefox": "3",
"safari": "10",
"node": "6",
"deno": "1",
"ios": "10",
"samsung": "5",
"rhino": "1.7.15",
"opera_mobile": "36",
"electron": "0.37"
},
"transform-unicode-escapes": {
"chrome": "44",
"opera": "31",
"edge": "12",
"firefox": "53",
"safari": "9",
"node": "4",
"deno": "1",
"ios": "9",
"samsung": "4",
"rhino": "1.7.15",
"opera_mobile": "32",
"electron": "0.30"
},
"transform-unicode-regex": {
"chrome": "50",
"opera": "37",
"edge": "13",
"firefox": "46",
"safari": "12",
"node": "6",
"deno": "1",
"ios": "12",
"samsung": "5",
"opera_mobile": "37",
"electron": "1.1"
},
"transform-spread": {
"chrome": "46",
"opera": "33",
"edge": "13",
"firefox": "45",
"safari": "10",
"node": "5",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "33",
"electron": "0.36"
},
"transform-destructuring": {
"chrome": "51",
"opera": "38",
"edge": "15",
"firefox": "53",
"safari": "10",
"node": "6.5",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "41",
"electron": "1.2"
},
"transform-block-scoping": {
"chrome": "50",
"opera": "37",
"edge": "14",
"firefox": "53",
"safari": "11",
"node": "6",
"deno": "1",
"ios": "11",
"samsung": "5",
"opera_mobile": "37",
"electron": "1.1"
},
"transform-typeof-symbol": {
"chrome": "48",
"opera": "35",
"edge": "12",
"firefox": "36",
"safari": "9",
"node": "6",
"deno": "1",
"ios": "9",
"samsung": "5",
"rhino": "1.8",
"opera_mobile": "35",
"electron": "0.37"
},
"transform-new-target": {
"chrome": "46",
"opera": "33",
"edge": "14",
"firefox": "41",
"safari": "10",
"node": "5",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "33",
"electron": "0.36"
},
"transform-regenerator": {
"chrome": "50",
"opera": "37",
"edge": "13",
"firefox": "53",
"safari": "10",
"node": "6",
"deno": "1",
"ios": "10",
"samsung": "5",
"opera_mobile": "37",
"electron": "1.1"
},
"transform-member-expression-literals": {
"chrome": "7",
"opera": "12",
"edge": "12",
"firefox": "2",
"safari": "5.1",
"node": "0.4",
"deno": "1",
"ie": "9",
"android": "4",
"ios": "6",
"phantom": "1.9",
"samsung": "1",
"rhino": "1.7.13",
"opera_mobile": "12",
"electron": "0.20"
},
"transform-property-literals": {
"chrome": "7",
"opera": "12",
"edge": "12",
"firefox": "2",
"safari": "5.1",
"node": "0.4",
"deno": "1",
"ie": "9",
"android": "4",
"ios": "6",
"phantom": "1.9",
"samsung": "1",
"rhino": "1.7.13",
"opera_mobile": "12",
"electron": "0.20"
},
"transform-reserved-words": {
"chrome": "13",
"opera": "10.50",
"edge": "12",
"firefox": "2",
"safari": "3.1",
"node": "0.6",
"deno": "1",
"ie": "9",
"android": "4.4",
"ios": "6",
"phantom": "1.9",
"samsung": "1",
"rhino": "1.7.13",
"opera_mobile": "10.1",
"electron": "0.20"
},
"transform-export-namespace-from": {
"chrome": "72",
"deno": "1.0",
"edge": "79",
"firefox": "80",
"node": "13.2.0",
"opera": "60",
"opera_mobile": "51",
"safari": "14.1",
"ios": "14.5",
"samsung": "11.0",
"android": "72",
"electron": "5.0"
},
"proposal-export-namespace-from": {
"chrome": "72",
"deno": "1.0",
"edge": "79",
"firefox": "80",
"node": "13.2.0",
"opera": "60",
"opera_mobile": "51",
"safari": "14.1",
"ios": "14.5",
"samsung": "11.0",
"android": "72",
"electron": "5.0"
}
}

2
ui/node_modules/@babel/compat-data/native-modules.js generated vendored Normal file

@@ -0,0 +1,2 @@
// Todo (Babel 8): remove this file, in Babel 8 users import the .json directly
module.exports = require("./data/native-modules.json");


@@ -0,0 +1,2 @@
// Todo (Babel 8): remove this file, in Babel 8 users import the .json directly
module.exports = require("./data/overlapping-plugins.json");

40
ui/node_modules/@babel/compat-data/package.json generated vendored Normal file

@@ -0,0 +1,40 @@
{
"name": "@babel/compat-data",
"version": "7.29.0",
"author": "The Babel Team (https://babel.dev/team)",
"license": "MIT",
"description": "The compat-data to determine required Babel plugins",
"repository": {
"type": "git",
"url": "https://github.com/babel/babel.git",
"directory": "packages/babel-compat-data"
},
"publishConfig": {
"access": "public"
},
"exports": {
"./plugins": "./plugins.js",
"./native-modules": "./native-modules.js",
"./corejs2-built-ins": "./corejs2-built-ins.js",
"./corejs3-shipped-proposals": "./corejs3-shipped-proposals.js",
"./overlapping-plugins": "./overlapping-plugins.js",
"./plugin-bugfixes": "./plugin-bugfixes.js"
},
"scripts": {
"build-data": "./scripts/download-compat-table.sh && node ./scripts/build-data.mjs && node ./scripts/build-modules-support.mjs && node ./scripts/build-bugfixes-targets.mjs"
},
"keywords": [
"babel",
"compat-table",
"compat-data"
],
"devDependencies": {
"@mdn/browser-compat-data": "^6.0.8",
"core-js-compat": "^3.48.0",
"electron-to-chromium": "^1.5.278"
},
"engines": {
"node": ">=6.9.0"
},
"type": "commonjs"
}


@@ -0,0 +1,2 @@
// Todo (Babel 8): remove this file, in Babel 8 users import the .json directly
module.exports = require("./data/plugin-bugfixes.json");

2
ui/node_modules/@babel/compat-data/plugins.js generated vendored Normal file

@@ -0,0 +1,2 @@
// Todo (Babel 8): remove this file, in Babel 8 users import the .json directly
module.exports = require("./data/plugins.json");

22
ui/node_modules/@babel/core/LICENSE generated vendored Normal file

@@ -0,0 +1,22 @@
MIT License
Copyright (c) 2014-present Sebastian McKenzie and other contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

19
ui/node_modules/@babel/core/README.md generated vendored Normal file

@@ -0,0 +1,19 @@
# @babel/core
> Babel compiler core.
See our website [@babel/core](https://babeljs.io/docs/babel-core) for more information or the [issues](https://github.com/babel/babel/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22pkg%3A%20core%22+is%3Aopen) associated with this package.
## Install
Using npm:
```sh
npm install --save-dev @babel/core
```
or using yarn:
```sh
yarn add @babel/core --dev
```


@@ -0,0 +1,5 @@
"use strict";
0 && 0;
//# sourceMappingURL=cache-contexts.js.map

Some files were not shown because too many files have changed in this diff