bugsink mcp and claude subagents - documentation and test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m11s

This commit is contained in:
2026-01-22 11:22:56 -08:00
parent fac98f4c54
commit eae0dbaa8e
15 changed files with 747 additions and 540 deletions


@@ -20,17 +20,17 @@ The application exposes health check endpoints at `/api/health/*` implementing A
### Endpoint Reference

| Endpoint                | Purpose                | Use Case                                |
| ----------------------- | ---------------------- | --------------------------------------- |
| `/api/health/ping`      | Simple connectivity    | Quick "is it running?" check            |
| `/api/health/live`      | Liveness probe         | Container orchestration restart trigger |
| `/api/health/ready`     | Readiness probe        | Load balancer traffic routing           |
| `/api/health/startup`   | Startup probe          | Initial container readiness             |
| `/api/health/db-schema` | Schema verification    | Deployment validation                   |
| `/api/health/db-pool`   | Connection pool status | Performance diagnostics                 |
| `/api/health/redis`     | Redis connectivity     | Cache/queue health                      |
| `/api/health/storage`   | File storage access    | Upload capability                       |
| `/api/health/time`      | Server time sync       | Time-sensitive operations               |

### Liveness Probe (`/api/health/live`)
@@ -95,18 +95,18 @@ curl -s https://flyer-crawler.projectium.com/api/health/ready | jq .
**Status Values**:

| Status      | Meaning                                          | Action                    |
| ----------- | ------------------------------------------------ | ------------------------- |
| `healthy`   | All critical services operational                | None required             |
| `degraded`  | Non-critical issues (e.g., high connection wait) | Monitor closely           |
| `unhealthy` | Critical service unavailable (returns 503)       | Remove from load balancer |

### Database Health Thresholds

| Metric              | Healthy             | Degraded | Unhealthy        |
| ------------------- | ------------------- | -------- | ---------------- |
| Query response      | `SELECT 1` succeeds | N/A      | Connection fails |
| Waiting connections | 0-3                 | 4+       | N/A              |
### Verifying Services from CLI
@@ -150,6 +150,7 @@ The admin dashboard at `/admin` includes a **System Check** component that runs
4. Click "Re-run Checks" to verify all services

Checks include:

- Backend Server Connection
- PM2 Process Status
- Database Connection Pool
@@ -167,33 +168,36 @@ Bugsink is our self-hosted, Sentry-compatible error tracking system (ADR-015).
### Access Points

| Environment       | URL                              | Purpose                    |
| ----------------- | -------------------------------- | -------------------------- |
| **Production**    | `https://bugsink.projectium.com` | Production and test errors |
| **Dev Container** | `https://localhost:8443`         | Local development errors   |

### Credentials

**Production Bugsink**:

- Credentials stored in password manager
- Admin account created during initial deployment

**Dev Container Bugsink**:

- Email: `admin@localhost`
- Password: `admin`

### Projects

| Project ID | Name                              | Environment | Error Source                    |
| ---------- | --------------------------------- | ----------- | ------------------------------- |
| 1          | flyer-crawler-backend             | Production  | Backend Node.js errors          |
| 2          | flyer-crawler-frontend            | Production  | Frontend JavaScript errors      |
| 3          | flyer-crawler-backend-test        | Test        | Test environment backend        |
| 4          | flyer-crawler-frontend-test       | Test        | Test environment frontend      |
| 5          | flyer-crawler-infrastructure      | Production  | PostgreSQL, Redis, NGINX errors |
| 6          | flyer-crawler-test-infrastructure | Test        | Test infra errors               |

**Dev Container Projects** (localhost:8000):

- Project 1: Backend (Dev)
- Project 2: Frontend (Dev)
@@ -205,6 +209,7 @@ Bugsink is our self-hosted, Sentry-compatible error tracking system (ADR-015).
4. Click on an issue to view details

**Issue Details Include**:

- Exception type and message
- Full stack trace
- Request context (URL, method, headers)
@@ -238,6 +243,7 @@ mcp__bugsink__list_events --issue_id <uuid>
**MCP Server Configuration**:

Production (in `~/.claude/settings.json`):

```json
{
  "bugsink": {
@@ -252,6 +258,7 @@ Production (in `~/.claude/settings.json`):
```
Dev Container (in `.mcp.json`):

```json
{
  "localerrors": {
@@ -270,11 +277,13 @@ Dev Container (in `.mcp.json`):
Bugsink 2.0.11 does not have a UI for API tokens. Create one via the Django management command.

**Production**:

```bash
ssh root@projectium.com "cd /opt/bugsink && bugsink-manage create_auth_token"
```

**Dev Container**:

```bash
MSYS_NO_PATHCONV=1 podman exec -e DATABASE_URL=postgresql://bugsink:bugsink_dev_password@postgres:5432/bugsink -e SECRET_KEY=dev-bugsink-secret-key-minimum-50-characters-for-security flyer-crawler-dev sh -c 'cd /opt/bugsink/conf && DJANGO_SETTINGS_MODULE=bugsink_conf PYTHONPATH=/opt/bugsink/conf:/opt/bugsink/lib/python3.10/site-packages /opt/bugsink/bin/python -m django create_auth_token'
```
@@ -284,6 +293,7 @@ The command outputs a 40-character hex token.
### Interpreting Errors

**Error Anatomy**:

```
TypeError: Cannot read properties of undefined (reading 'map')
├── Exception Type: TypeError
@@ -299,13 +309,13 @@ TypeError: Cannot read properties of undefined (reading 'map')
**Common Error Patterns**:

| Pattern                             | Likely Cause                                      | Investigation                                      |
| ----------------------------------- | ------------------------------------------------- | -------------------------------------------------- |
| `TypeError: ... undefined`          | Missing null check, API returned unexpected shape | Check API response, add defensive coding           |
| `DatabaseError: Connection timeout` | Pool exhaustion, slow queries                     | Check `/api/health/db-pool`, review slow query log |
| `RedisConnectionError`              | Redis unavailable                                 | Check Redis service, network connectivity          |
| `ValidationError: ...`              | Invalid input, schema mismatch                    | Review request payload, update validation          |
| `NotFoundError: ...`                | Missing resource                                  | Verify resource exists, check ID format            |
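For the first pattern, the usual fix is defensive access around a response whose shape may differ from expectations. A minimal sketch (the response shape and function name here are hypothetical, not from the codebase):

```typescript
// Hypothetical API response shape: `items` may be missing entirely.
interface ApiResponse {
  items?: { name: string }[];
}

// Optional chaining plus a fallback avoids
// "TypeError: Cannot read properties of undefined (reading 'map')".
function itemNames(response: ApiResponse): string[] {
  return response.items?.map((item) => item.name) ?? [];
}
```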
### Error Triage Workflow
@@ -325,12 +335,14 @@ TypeError: Cannot read properties of undefined (reading 'map')
The test environment automatically syncs Bugsink issues to Gitea (see `docs/BUGSINK-SYNC.md`).

**Sync Workflow**:

1. Runs every 15 minutes on test server
2. Fetches unresolved issues from all Bugsink projects
3. Creates Gitea issues with appropriate labels
4. Marks synced issues as resolved in Bugsink
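Steps 2-4 can be sketched as a single sync pass. This is an illustrative outline only, with hypothetical function signatures; the real script lives on the test server (see `docs/BUGSINK-SYNC.md`):

```typescript
interface BugsinkIssue {
  id: string;
  title: string;
  project: string;
}

// One sync pass: fetch unresolved issues, mirror each to Gitea,
// then mark it resolved in Bugsink. Returns the number synced.
async function syncPass(
  listUnresolved: () => Promise<BugsinkIssue[]>,
  createGiteaIssue: (issue: BugsinkIssue) => Promise<void>,
  resolveInBugsink: (id: string) => Promise<void>,
): Promise<number> {
  const issues = await listUnresolved(); // step 2
  for (const issue of issues) {
    await createGiteaIssue(issue); // step 3
    await resolveInBugsink(issue.id); // step 4
  }
  return issues.length;
}
```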
**Manual Sync**:

```bash
# Trigger sync via API (test environment only)
curl -X POST https://flyer-crawler-test.projectium.com/api/admin/bugsink/sync \
@@ -358,25 +370,26 @@ Log Sources Logstash Outputs
### Configuration Files

| Path                                                | Purpose                     |
| --------------------------------------------------- | --------------------------- |
| `/etc/logstash/conf.d/bugsink.conf`                 | Main pipeline configuration |
| `/etc/postgresql/14/main/conf.d/observability.conf` | PostgreSQL logging settings |
| `/var/log/logstash/`                                | Logstash file outputs       |
| `/var/lib/logstash/sincedb_*`                       | File position tracking      |

### Log Sources

| Source      | Path                                               | Contents                            |
| ----------- | -------------------------------------------------- | ----------------------------------- |
| PostgreSQL  | `/var/log/postgresql/*.log`                        | Function logs, slow queries, errors |
| PM2 Workers | `/home/gitea-runner/.pm2/logs/flyer-crawler-*.log` | Worker stdout/stderr                |
| Redis       | `/var/log/redis/redis-server.log`                  | Connection errors, memory warnings  |
| NGINX       | `/var/log/nginx/access.log`, `error.log`           | HTTP requests, upstream errors      |

### Pipeline Status

**Check Logstash Service**:

```bash
ssh root@projectium.com
@@ -398,6 +411,7 @@ curl -s http://localhost:9600/_node/stats/pipelines?pretty | jq '{
```

**Check Filter Performance**:

```bash
# Grok pattern success/failure rates
curl -s http://localhost:9600/_node/stats/pipelines?pretty | \
@@ -422,22 +436,24 @@ tail -f /var/log/postgresql/postgresql-$(date +%Y-%m-%d).log
### Troubleshooting Logstash

| Issue                 | Diagnostic                  | Solution                        |
| --------------------- | --------------------------- | ------------------------------- |
| No events processed   | `systemctl status logstash` | Start/restart service           |
| Config syntax error   | Test config command         | Fix config file                 |
| Grok failures         | Check stats endpoint        | Update grok patterns            |
| Wrong Bugsink project | Check environment tags      | Verify tag routing              |
| Permission denied     | `groups logstash`           | Add to `postgres`, `adm` groups |
| PM2 logs not captured | Check file paths            | Verify log file existence       |
| High disk usage       | Check log rotation          | Configure logrotate             |

**Test Configuration**:

```bash
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/bugsink.conf
```

**Restart After Config Change**:

```bash
systemctl restart logstash
journalctl -u logstash -f # Watch for startup errors
@@ -453,19 +469,19 @@ PM2 manages the Node.js application processes in production.
**Production Processes** (`ecosystem.config.cjs`):

| Process Name                     | Script      | Purpose              | Instances          |
| -------------------------------- | ----------- | -------------------- | ------------------ |
| `flyer-crawler-api`              | `server.ts` | Express API server   | Cluster (max CPUs) |
| `flyer-crawler-worker`           | `worker.ts` | BullMQ job processor | 1                  |
| `flyer-crawler-analytics-worker` | `worker.ts` | Analytics jobs       | 1                  |

**Test Processes** (`ecosystem-test.config.cjs`):

| Process Name                          | Script      | Port | Instances     |
| ------------------------------------- | ----------- | ---- | ------------- |
| `flyer-crawler-api-test`              | `server.ts` | 3002 | 1 (fork mode) |
| `flyer-crawler-worker-test`           | `worker.ts` | N/A  | 1             |
| `flyer-crawler-analytics-worker-test` | `worker.ts` | N/A  | 1             |
### Basic Commands
@@ -503,6 +519,7 @@ pm2 stop flyer-crawler-api
### Health Indicators

**Healthy Process**:

```
┌─────────────────────┬────┬─────────┬─────────┬───────┬────────┬─────────┬──────────┐
│ Name                │ id │ mode    │ status  │ cpu   │ mem    │ uptime  │ restarts │
@@ -514,6 +531,7 @@ pm2 stop flyer-crawler-api
```

**Warning Signs**:

- `status: errored` - Process crashed
- High `restarts` count - Instability
- High `mem` (>500MB for API, >1GB for workers) - Memory leak
@@ -521,22 +539,23 @@ pm2 stop flyer-crawler-api
### Log File Locations

| Process                | stdout                                                      | stderr          |
| ---------------------- | ----------------------------------------------------------- | --------------- |
| `flyer-crawler-api`    | `/home/gitea-runner/.pm2/logs/flyer-crawler-api-out.log`    | `...-error.log` |
| `flyer-crawler-worker` | `/home/gitea-runner/.pm2/logs/flyer-crawler-worker-out.log` | `...-error.log` |

### Memory Management

PM2 is configured to restart processes when they exceed memory limits:

| Process          | Memory Limit | Action       |
| ---------------- | ------------ | ------------ |
| API              | 500MB        | Auto-restart |
| Worker           | 1GB          | Auto-restart |
| Analytics Worker | 1GB          | Auto-restart |

**Check Memory Usage**:

```bash
pm2 show flyer-crawler-api | grep memory
pm2 show flyer-crawler-worker | grep memory
@@ -555,6 +574,7 @@ PM2 uses exponential backoff for restarts:
```

**Force Restart After Repeated Failures**:

```bash
pm2 delete flyer-crawler-api
pm2 start ecosystem.config.cjs --only flyer-crawler-api
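The exponential backoff mentioned above can be sketched numerically. The base delay and cap here are illustrative assumptions, not PM2's exact internals (PM2's `exp_backoff_restart_delay` behaves similarly but is not reproduced precisely):

```typescript
// Illustrative exponential backoff: delay doubles per consecutive restart,
// capped at a maximum so a crash-looping process doesn't wait forever.
function restartDelay(restartCount: number, baseMs = 100, maxMs = 15000): number {
  return Math.min(baseMs * 2 ** restartCount, maxMs);
}
```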
@@ -568,13 +588,14 @@ pm2 start ecosystem.config.cjs --only flyer-crawler-api
The application uses a PostgreSQL connection pool with these defaults:

| Setting                   | Value | Purpose                          |
| ------------------------- | ----- | -------------------------------- |
| `max`                     | 20    | Maximum concurrent connections   |
| `idleTimeoutMillis`       | 30000 | Close idle connections after 30s |
| `connectionTimeoutMillis` | 2000  | Fail if connection takes >2s     |

**Check Pool Status via API**:

```bash
curl -s https://flyer-crawler.projectium.com/api/health/db-pool | jq .
@@ -592,10 +613,10 @@ curl -s https://flyer-crawler.projectium.com/api/health/db-pool | jq .
**Pool Health Thresholds**:

| Metric              | Healthy | Warning | Critical   |
| ------------------- | ------- | ------- | ---------- |
| Waiting Connections | 0-2     | 3-4     | 5+         |
| Total Connections   | 1-15    | 16-19   | 20 (maxed) |
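The thresholds above can be read as a simple classifier. A hypothetical helper (not part of the codebase) that mirrors the table, taking the worse of the two metrics:

```typescript
// Mirrors the pool-health table: waiting 0-2 / 3-4 / 5+ and
// total 1-15 / 16-19 / 20 map to healthy / warning / critical.
function poolHealth(waiting: number, total: number): 'healthy' | 'warning' | 'critical' {
  if (waiting >= 5 || total >= 20) return 'critical';
  if (waiting >= 3 || total >= 16) return 'warning';
  return 'healthy';
}
```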
### Slow Query Logging
@@ -607,6 +628,7 @@ log_min_duration_statement = 1000 # Log queries over 1 second
```
**View Slow Queries**:

```bash
ssh root@projectium.com
grep "duration:" /var/log/postgresql/postgresql-$(date +%Y-%m-%d).log | tail -20
@@ -692,10 +714,10 @@ mem_fragmentation_ratio:1.05
**Memory Thresholds**:

| Metric              | Healthy     | Warning | Critical |
| ------------------- | ----------- | ------- | -------- |
| Used Memory         | <70% of max | 70-85%  | >85%     |
| Fragmentation Ratio | 1.0-1.5     | 1.5-2.0 | >2.0     |
### Cache Statistics
@@ -707,6 +729,7 @@ redis-cli info stats | grep -E "keyspace_hits|keyspace_misses|evicted_keys"
```
**Cache Hit Rate Targets**:

- Excellent: >95%
- Good: 85-95%
- Needs attention: <85%
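The hit rate is derived from the two counters that `redis-cli info stats` reports. A minimal sketch (helper name is illustrative):

```typescript
// Hit rate (%) from Redis keyspace counters:
// hits / (hits + misses) * 100, with 0 when no lookups have occurred.
function hitRate(keyspaceHits: number, keyspaceMisses: number): number {
  const total = keyspaceHits + keyspaceMisses;
  return total === 0 ? 0 : (keyspaceHits / total) * 100;
}
```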
@@ -730,20 +753,22 @@ redis-cli llen "bull:flyer-processing:failed"
**Queue Depth Thresholds**:

| Queue               | Normal | Warning | Critical |
| ------------------- | ------ | ------- | -------- |
| flyer-processing    | 0-10   | 11-50   | >50      |
| email-sending       | 0-100  | 101-500 | >500     |
| analytics-reporting | 0-5    | 6-20    | >20      |

### Bull Board UI

Access the job queue dashboard:

- **Production**: `https://flyer-crawler.projectium.com/api/admin/jobs` (requires admin auth)
- **Test**: `https://flyer-crawler-test.projectium.com/api/admin/jobs`
- **Dev**: `http://localhost:3001/api/admin/jobs`

Features:

- View all queues and job counts
- Inspect job data and errors
- Retry failed jobs
@@ -751,11 +776,11 @@ Features:
### Redis Database Allocation

| Database | Purpose                  |
| -------- | ------------------------ |
| 0        | BullMQ production queues |
| 1        | BullMQ test queues       |
| 15       | Bugsink sync state       |
---
@@ -763,14 +788,14 @@ Features:
### Critical Monitoring Targets

| Service    | Check               | Interval | Alert Threshold        |
| ---------- | ------------------- | -------- | ---------------------- |
| API Server | `/api/health/ready` | 1 min    | 2 consecutive failures |
| Database   | Pool waiting count  | 1 min    | >5 waiting             |
| Redis      | Memory usage        | 5 min    | >85% of maxmemory      |
| Disk Space | `/var/log`          | 15 min   | <10GB free             |
| Worker     | Queue depth         | 5 min    | >50 jobs waiting       |
| Error Rate | Bugsink issue count | 15 min   | >10 new issues/hour    |
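The "2 consecutive failures" rule for the readiness probe can be sketched as a small stateful tracker; the shape below is illustrative, not the actual monitoring code:

```typescript
// Returns a probe callback: feed it each check result, and it reports
// `true` (raise an alert) only after `threshold` failures in a row.
// A single success resets the streak.
function makeFailureTracker(threshold = 2): (ok: boolean) => boolean {
  let consecutive = 0;
  return (ok: boolean): boolean => {
    consecutive = ok ? 0 : consecutive + 1;
    return consecutive >= threshold;
  };
}
```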
### Alert Channels
@@ -835,15 +860,15 @@ EOF
### Runbook Quick Reference

| Symptom         | First Action     | If That Fails         |
| --------------- | ---------------- | --------------------- |
| 503 errors      | Restart PM2      | Check database, Redis |
| Slow responses  | Check DB pool    | Review slow query log |
| High error rate | Check Bugsink    | Review recent deploys |
| Queue backlog   | Restart worker   | Scale workers         |
| Out of memory   | Restart process  | Increase PM2 limit    |
| Disk full       | Clean old logs   | Expand volume         |
| Redis OOM       | Flush cache keys | Increase maxmemory    |

### Post-Incident Review


@@ -44,12 +44,12 @@ The ai-usage subagent understands:
## Key Files

| File                                            | Purpose                              |
| ----------------------------------------------- | ------------------------------------ |
| `src/services/aiService.server.ts`              | Gemini API integration               |
| `src/services/flyerProcessingService.server.ts` | Flyer extraction pipeline            |
| `src/schemas/flyer.schemas.ts`                  | Zod schemas for AI output validation |
| `src/types/ai.types.ts`                         | TypeScript types for AI responses    |

## Example Requests
@@ -95,6 +95,7 @@ const processedImages = await imageProcessor.prepareForAI(uploadedFile);
### 2. Prompt Construction

The extraction prompt includes:

- System instructions for the AI model
- Expected output schema (JSON)
- Examples of correct extraction
@@ -103,11 +104,7 @@ The extraction prompt includes:
### 3. API Call

```typescript
const response = await aiService.extractFlyerData(processedImages, storeContext, extractionOptions);
```
### 4. Response Validation
@@ -131,6 +128,7 @@ const normalizedItems = normalizeExtractedItems(validatedItems);
**Symptoms**: Same item priced differently on different extractions.

**Solution**: Improve prompt with explicit price format examples:

```
"Price formats to recognize:
- $X.XX (regular price)
@@ -145,6 +143,7 @@ const normalizedItems = normalizeExtractedItems(validatedItems);
**Symptoms**: Flyers with many items on one page have missing extractions.

**Solution**:

1. Split page into quadrants for separate extraction
2. Increase token limit for response
3. Use structured grid-based prompting
@@ -154,18 +153,23 @@ const normalizedItems = normalizeExtractedItems(validatedItems);
**Symptoms**: `429 Too Many Requests` errors during bulk uploads.

**Solution**: Implement request queuing:

```typescript
// Add to job queue instead of direct call
await flyerQueue.add(
  'extract',
  {
    flyerId,
    images,
  },
  {
    attempts: 3,
    backoff: {
      type: 'exponential',
      delay: 2000,
    },
  },
);
```
### Issue: Hallucinated Items
@@ -173,6 +177,7 @@ await flyerQueue.add('extract', {
**Symptoms**: Items extracted that don't exist in the flyer.

**Solution**:

1. Add confidence scoring to extraction
2. Request bounding box coordinates for verification
3. Add post-extraction validation against image
@@ -227,31 +232,34 @@ For each item:
### Metrics to Track

| Metric                  | Description                             | Target          |
| ----------------------- | --------------------------------------- | --------------- |
| Extraction success rate | % of flyers processed without error     | >95%            |
| Items per flyer         | Average items extracted                 | Varies by store |
| Price accuracy          | Match rate vs manual verification       | >98%            |
| Response time           | Time from upload to extraction complete | <30s            |
### Logging

```typescript
log.info(
  {
    flyerId,
    itemCount: extractedItems.length,
    processingTime: duration,
    modelVersion: response.model,
    tokenUsage: response.usage,
  },
  'Flyer extraction completed',
);
```
## Environment Configuration

| Variable                         | Purpose                     |
| -------------------------------- | --------------------------- |
| `VITE_GOOGLE_GENAI_API_KEY`      | Gemini API key (production) |
| `VITE_GOOGLE_GENAI_API_KEY_TEST` | Gemini API key (test)       |

**Note**: Use separate API keys for production and test to avoid rate limit conflicts and enable separate billing tracking.
@@ -260,6 +268,7 @@ log.info({
### Unit Tests

Mock the Gemini API response:

```typescript
vi.mock('@google/generative-ai', () => ({
  GoogleGenerativeAI: vi.fn().mockImplementation(() => ({
### Integration Tests

Use recorded responses for deterministic testing:

```typescript
// Save real API responses to fixtures
const fixtureResponse = await fs.readFile('fixtures/gemini-response.json');
```

---
### Key Patterns

| Pattern            | ADR     | Implementation                                             |
| ------------------ | ------- | ---------------------------------------------------------- |
| Error Handling     | ADR-001 | `handleDbError()`, throw `NotFoundError`                   |
| Repository Methods | ADR-034 | `get*` throws, `find*` returns null, `list*` returns array |
| API Responses      | ADR-028 | `sendSuccess()`, `sendPaginated()`, `sendError()`          |
| Transactions       | ADR-002 | `withTransaction(async (client) => {...})`                 |
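The ADR-002 transaction shape can be sketched as follows. This is a minimal stand-in to show the commit/rollback contract — the `Client` type and the inline stand-in client are hypothetical; the real `withTransaction` checks out a pg `PoolClient` from the project's pool.

```typescript
// Minimal sketch of the ADR-002 withTransaction pattern (hypothetical types).
type Client = { query: (sql: string) => Promise<void> };

async function withTransaction<T>(fn: (client: Client) => Promise<T>): Promise<T> {
  const client: Client = { query: async () => {} }; // stand-in for a pooled client
  await client.query('BEGIN');
  try {
    const result = await fn(client); // every statement shares this client
    await client.query('COMMIT');
    return result;
  } catch (error) {
    await client.query('ROLLBACK'); // any thrown error rolls the unit back
    throw error;
  }
}
```

Callers route all statements through the provided `client`, so the whole unit of work commits or rolls back together.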
### File Naming Conventions

| Pattern       | Location           | Purpose                          |
| ------------- | ------------------ | -------------------------------- |
| `*.db.ts`     | `src/services/db/` | Database repositories            |
| `*.server.ts` | `src/services/`    | Server-only code (external APIs) |
| `*.routes.ts` | `src/routes/`      | Express route handlers           |
| `*.test.ts`   | Colocated          | Unit tests                       |
## How to Request Code Changes

### Good Request Examples

**Specific and contextual:**

```
"Use the coder subagent to add a new endpoint GET /api/stores/:id/locations
that returns all locations for a store, following the existing patterns
in stores.routes.ts"
```
**With acceptance criteria:**

```
"Use the coder subagent to implement the shopping list sharing feature:
- Add a share_token column to shopping_lists table
```
**Bug fix with reproduction steps:**

```
"Use the coder subagent to fix the issue where flyer items are not
sorted by price on the deals page. The expected behavior is lowest
price first, but currently they appear in insertion order."
```
### Less Effective Request Examples

**Too vague:**

```
"Make the code better"
```

**Missing context:**

```
"Add a feature to search things"
```

**Multiple unrelated tasks:**

```
"Fix the login bug, add a new table, and update the homepage"
```
```typescript
// src/routes/stores.routes.ts
router.get('/:id/locations', validateRequest(getStoreLocationsSchema), async (req, res, next) => {
  try {
    const { id } = req.params;
    const locations = await storeService.getLocationsForStore(parseInt(id, 10), req.log);
    sendSuccess(res, { locations });
  } catch (error) {
    next(error);
  }
});
```
### Adding a New Database Operation
```typescript
// src/services/db/store.db.ts
export async function listLocationsByStoreId(
  storeId: number,
  client?: PoolClient,
): Promise<StoreLocation[]> {
  const queryable = client || getPool();
  try {
    const result = await queryable.query<StoreLocation>(
      `SELECT * FROM store_locations WHERE store_id = $1 ORDER BY created_at`,
      [storeId],
    );
    return result.rows;
  } catch (error) {
    // ...
      {
        entityName: 'StoreLocation',
        defaultMessage: 'Failed to list store locations.',
      },
    );
  }
}
```
When the coder subagent modifies database-related code, it will remind you:

> **Schema files must stay synchronized:**
>
> - `sql/master_schema_rollup.sql` - Test DB setup
> - `sql/initial_schema.sql` - Fresh install schema
> - `sql/migrations/*.sql` - Production changes
### Issue: Missing Error Handling

**Solution**: Explicitly request comprehensive error handling:

```
"Include proper error handling using handleDbError and the project's
error classes for all database operations"
```
### Issue: Tests Not Included

**Solution**: Either:

1. Ask the coder to include tests: "Include unit tests for all new code"
2. Use the testwriter subagent separately for comprehensive test coverage

---
## Understanding the Difference

| Aspect          | db-dev                              | db-admin                              |
| --------------- | ----------------------------------- | ------------------------------------- |
| **Focus**       | Application database code           | Infrastructure and operations         |
| **Tasks**       | Queries, migrations, repositories   | Performance tuning, backups, security |
| **Output**      | SQL migrations, repository methods  | Configuration, monitoring scripts     |
| **When to Use** | Adding features, optimizing queries | Production issues, capacity planning  |
## The db-dev Subagent
> **Schema files MUST stay synchronized:**
>
> | File                           | Purpose                           |
> | ------------------------------ | --------------------------------- |
> | `sql/master_schema_rollup.sql` | Test DB setup, complete reference |
> | `sql/initial_schema.sql`       | Fresh install schema              |
> | `sql/migrations/*.sql`         | Production incremental changes    |

When db-dev creates a migration, it will also update the schema files.
### Example Requests

**Adding a new table:**

```
"Use db-dev to design a table for storing user recipe reviews.
Include fields for rating (1-5), review text, and relationships
to users and recipes. Create the migration and update schema files."
```
**Optimizing a slow query:**

```
"Use db-dev to optimize the query that lists flyers with their
item counts. It's currently doing N+1 queries and takes too long
with many flyers."
```
**Adding spatial search:**

```
"Use db-dev to add the ability to search stores within a radius
of a given location using PostGIS. Include the migration for
adding the geography column."
```
The db-dev subagent follows these naming conventions:

| Prefix    | Returns             | Behavior on Not Found                |
| --------- | ------------------- | ------------------------------------ |
| `get*`    | Single entity       | Throws `NotFoundError`               |
| `find*`   | Entity or `null`    | Returns `null`                       |
| `list*`   | Array               | Returns `[]`                         |
| `create*` | Created entity      | Throws on constraint violation       |
| `update*` | Updated entity      | Throws `NotFoundError` if not exists |
| `delete*` | `void` or `boolean` | Throws `NotFoundError` if not exists |
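The contract behind these prefixes can be illustrated with a minimal in-memory sketch. The data and helpers here are hypothetical stand-ins — the project's real repositories go through PostgreSQL and `handleDbError` — but the return/throw behavior matches the table above.

```typescript
// Illustrates the get*/find*/list* contract with an in-memory stand-in.
class NotFoundError extends Error {}

const reviews = new Map<string, { reviewId: string; rating: number }>([
  ['r1', { reviewId: 'r1', rating: 5 }],
]);

// find*: returns the entity or null; never throws for a missing row
function findReviewById(reviewId: string) {
  return reviews.get(reviewId) ?? null;
}

// get*: returns the entity or throws NotFoundError
function getReviewById(reviewId: string) {
  const review = findReviewById(reviewId);
  if (review === null) {
    throw new NotFoundError(`Review with ID ${reviewId} not found.`);
  }
  return review;
}

// list*: always returns an array, possibly empty
function listReviews() {
  return [...reviews.values()];
}
```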
### Example Migration
```typescript
// src/services/db/recipeReview.db.ts
import { handleDbError, NotFoundError } from './errors.db';

export async function getReviewById(reviewId: string, client?: PoolClient): Promise<RecipeReview> {
  const queryable = client || getPool();
  try {
    const result = await queryable.query<RecipeReview>(
      `SELECT * FROM recipe_reviews WHERE review_id = $1`,
      [reviewId],
    );
    if (result.rows.length === 0) {
      throw new NotFoundError(`Review with ID ${reviewId} not found.`);
    }
    // ...
      {
        entityName: 'RecipeReview',
        defaultMessage: 'Failed to fetch review.',
      },
    );
  }
}

export async function listReviewsByRecipeId(
  recipeId: string,
  options: { limit?: number; offset?: number } = {},
  client?: PoolClient,
): Promise<RecipeReview[]> {
  const queryable = client || getPool();
  const { limit = 50, offset = 0 } = options;
  // ...
      WHERE recipe_id = $1
      ORDER BY created_at DESC
      LIMIT $2 OFFSET $3`,
      [recipeId, limit, offset],
    );
    return result.rows;
  } catch (error) {
    // ...
      {
        entityName: 'RecipeReview',
        defaultMessage: 'Failed to list reviews.',
      },
    );
  }
}
```
### Example Requests

**Performance tuning:**

```
"Use db-admin to analyze why the database is running slow.
Check connection pool settings, identify slow queries, and
recommend PostgreSQL configuration changes."
```
**Backup configuration:**

```
"Use db-admin to set up daily automated backups for the
production database with 30-day retention."
```
**User management:**

```
"Use db-admin to create a read-only database user for
reporting purposes that can only SELECT from specific tables."
```
### Database Users

| User                 | Database             | Purpose                |
| -------------------- | -------------------- | ---------------------- |
| `flyer_crawler_prod` | `flyer-crawler-prod` | Production             |
| `flyer_crawler_test` | `flyer-crawler-test` | Testing                |
| `postgres`           | All                  | Superuser (admin only) |
### Creating Database Users
The db-dev subagent is particularly skilled at identifying N+1 query problems:

**Problematic Pattern:**

```typescript
// BAD: N+1 queries
const flyers = await listFlyers();
for (const flyer of flyers) {
  // ...
}
```
**Optimized Pattern:**

```typescript
// GOOD: Single query with JOIN or separate batch query
const flyersWithItems = await listFlyersWithItems(); // 1 query

// Or with batching:
const flyers = await listFlyers();
const flyerIds = flyers.map((f) => f.flyer_id);
const allItems = await listItemsByFlyerIds(flyerIds); // 1 query
// Group items by flyer_id in application code
```
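The final "group items by flyer_id in application code" step can be sketched with a `Map`. The `FlyerItem` type here is a hypothetical stand-in; only the `flyer_id` field is taken from the snippet above.

```typescript
// Group a flat list of items by flyer_id in application code.
type FlyerItem = { flyer_id: number; name: string };

function groupItemsByFlyerId(items: FlyerItem[]): Map<number, FlyerItem[]> {
  const grouped = new Map<number, FlyerItem[]>();
  for (const item of items) {
    const bucket = grouped.get(item.flyer_id);
    if (bucket) {
      bucket.push(item); // existing flyer: append
    } else {
      grouped.set(item.flyer_id, [item]); // first item for this flyer
    }
  }
  return grouped;
}
```

A single batch query plus this O(n) grouping replaces the N per-flyer queries.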
```
mcp__devdb__query("SELECT * FROM flyers LIMIT 5")
```

This is useful for:

- Verifying data during debugging
- Checking schema state
- Testing queries before implementing

---
**Container Services:**

| Service    | Image                   | Purpose                | Port       |
| ---------- | ----------------------- | ---------------------- | ---------- |
| `app`      | Custom (Dockerfile.dev) | Node.js application    | 3000, 3001 |
| `postgres` | postgis/postgis:15-3.4  | Database with PostGIS  | 5432       |
| `redis`    | redis:alpine            | Caching and job queues | 6379       |
### Example Requests

**Container debugging:**

```
"Use devops to debug why the dev container fails to start.
The postgres service shows as unhealthy and the app can't connect."
```
**CI/CD pipeline update:**

```
"Use devops to add a step to the deploy-to-test.yml workflow
that runs database migrations before restarting the app."
```
**PM2 configuration:**

```
"Use devops to update the PM2 ecosystem config to use cluster
mode with 4 instances instead of max for the API server."
```
When running commands from Git Bash on Windows, paths may be incorrectly converted:

| Solution                   | Example                                                  |
| -------------------------- | -------------------------------------------------------- |
| `sh -c` with single quotes | `podman exec container sh -c '/usr/local/bin/script.sh'` |
| Double slashes             | `podman exec container //usr//local//bin//script.sh`     |
| MSYS_NO_PATHCONV=1         | `MSYS_NO_PATHCONV=1 podman exec ...`                     |
### PM2 Production Configuration
```javascript
module.exports = {
  apps: [
    {
      name: 'flyer-crawler-api',
      script: './node_modules/.bin/tsx',
      args: 'server.ts',
      instances: 'max', // Use all CPU cores
      exec_mode: 'cluster', // Enable cluster mode
      max_memory_restart: '500M',
      kill_timeout: 5000, // Graceful shutdown
      env_production: {
        NODE_ENV: 'production',
        // ...
      },
    },
    {
      name: 'flyer-crawler-worker',
      script: './node_modules/.bin/tsx',
      args: 'src/services/worker.ts',
      instances: 1, // Single instance for workers
      max_memory_restart: '1G',
      kill_timeout: 10000, // Workers need more time
    },
  ],
};
```
### CI/CD Workflow Files

| File                                  | Purpose                     |
| ------------------------------------- | --------------------------- |
| `.gitea/workflows/deploy-to-prod.yml` | Production deployment       |
| `.gitea/workflows/deploy-to-test.yml` | Test environment deployment |

**Deployment Flow:**
### Example Requests

**Memory optimization:**

```
"Use infra-architect to analyze memory usage of the worker
processes. They're frequently hitting the 1GB limit and restarting."
```
**Capacity planning:**

```
"Use infra-architect to estimate resource requirements for
handling 10x current traffic. Include database, Redis, and
application server recommendations."
```
**Cost optimization:**

```
"Use infra-architect to identify opportunities to reduce
infrastructure costs without impacting performance."
```
### Resource Limits Reference

| Process          | Memory Limit | Notes                 |
| ---------------- | ------------ | --------------------- |
| API Server       | 500MB        | Per cluster instance  |
| Worker           | 1GB          | Single instance       |
| Analytics Worker | 1GB          | Single instance       |
| PostgreSQL       | System RAM   | Tune `shared_buffers` |
| Redis            | 256MB        | `maxmemory` setting   |
## The bg-worker Subagent
### Example Requests

**Debugging stuck jobs:**

```
"Use bg-worker to debug why jobs are stuck in the flyer processing
queue. Check for failed jobs, worker status, and Redis connectivity."
```
**Adding retry logic:**

```
"Use bg-worker to add exponential backoff retry logic to the
AI extraction job. It should retry up to 3 times with increasing
delays for rate limit errors."
```
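The delay schedule such a request implies can be sketched as a pure function. This is a hedged sketch of the arithmetic only — the actual retry wiring would live in the queue configuration, and the base delay is an assumed parameter.

```typescript
// Exponential backoff: the delay doubles with each retry attempt.
function backoffDelayMs(attempt: number, baseMs = 1000): number {
  return baseMs * 2 ** (attempt - 1);
}
// attempt 1 → 1000 ms, attempt 2 → 2000 ms, attempt 3 → 4000 ms
```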
**Queue monitoring:**

```
"Use bg-worker to add health check endpoints for monitoring
queue depth and worker status."
```

```typescript
export const flyerWorker = new Worker(
  // ...
    limiter: {
      // assumed enclosing key for these rate-limit options
      max: 10,
      duration: 1000,
    },
  },
);
```
### Systemd Services (Production)

| Service    | Command                                                 |
| ---------- | ------------------------------------------------------- |
| PostgreSQL | `sudo systemctl {start\|stop\|status} postgresql`       |
| Redis      | `sudo systemctl {start\|stop\|status} redis-server`     |
| NGINX      | `sudo systemctl {start\|stop\|status} nginx`            |
| Bugsink    | `sudo systemctl {start\|stop\|status} gunicorn-bugsink` |
| Logstash   | `sudo systemctl {start\|stop\|status} logstash`         |
### Health Checks

---
### Example Requests

**API Documentation:**

```
"Use documenter to create API documentation for the shopping
list endpoints. Include request/response schemas, authentication
requirements, and example curl commands."
```
**Feature Guide:**

```
"Use documenter to write a user guide for the price watchlist
feature. Explain how to add items, set price alerts, and view
price history."
```
**Troubleshooting Guide:**

```
"Use documenter to create a troubleshooting guide for common
flyer upload issues, including file format errors, size limits,
and processing failures."
```
#### API Documentation Format

````markdown
### [METHOD] /api/endpoint

**Description**: Brief purpose of the endpoint
**Authentication**: Required (Bearer token)

**Request**:

- Headers: `Content-Type: application/json`, `Authorization: Bearer {token}`
- Body:

```json
{
  ...
  "optional_field": "number (optional) - Description"
}
```

**Response**:

- Success (200):

```json
{
  ...
}
```

**Example**:

```bash
curl -X POST https://api.example.com/api/endpoint \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"field": "value"}'
```
````
## The describer-for-ai Subagent
## Related ADRs

- [ADR-XXX](./XXXX-title.md) - Related decision
### Example Requests

**Creating an ADR:**

```
"Use describer-for-ai to create an ADR for adding websocket
support for real-time price alerts. Include the technical
approach, alternatives considered, and implementation details."
```
**CLAUDE.md Update:**

```
"Use describer-for-ai to update CLAUDE.md with the new
authentication flow and any new patterns developers should
be aware of."
```
**Technical Overview:**

```
"Use describer-for-ai to create a technical overview of the
caching layer for future AI context, including how Redis is
```
### Example Requests

**Feature Breakdown:**

```
"Use planner to break down the 'store comparison' feature
into implementable tasks. Include frontend, backend, and
database work. Identify dependencies between tasks."
```
**Roadmap Planning:**

```
"Use planner to create a roadmap for the Q2 features:
recipe integration, mobile app preparation, and store
notifications. Identify what can be parallelized."
```
**Scope Assessment:**

```
"Use planner to assess the scope of adding multi-language
support. What systems would need to change? What's the
minimum viable implementation?"
```
```markdown
# Feature: [Feature Name]

## Overview

Brief description of the feature and its value.

## Tasks

### Phase 1: Foundation

1. **[Task Name]** (S/M/L)
   - Description
   - Files: `path/to/file.ts`
   - Acceptance: What "done" looks like

### Phase 2: Core Implementation

...

### Phase 3: Polish & Testing

...

## Dependencies

- External: Third-party services, APIs
- Internal: Other features that must be complete first

## Risks

- Risk 1: Mitigation strategy
- Risk 2: Mitigation strategy

## Estimates

- Phase 1: X days
- Phase 2: Y days
- Phase 3: Z days
```
### Example Requests

**User Story Writing:**

```
"Use product-owner to write user stories for the meal planning
feature. Consider different user personas: budget shoppers,
health-conscious users, and busy families."
```
**Acceptance Criteria:** **Acceptance Criteria:**
``` ```
"Use product-owner to define acceptance criteria for the price "Use product-owner to define acceptance criteria for the price
alert feature. What conditions must be met for this feature alert feature. What conditions must be met for this feature
@@ -316,6 +339,7 @@ to be considered complete?"
```

**Prioritization:**

```
"Use product-owner to prioritize these feature requests based
on user value and development effort:
@@ -341,11 +365,13 @@ on user value and development effort:
**Then** [expected outcome]

### Additional Notes

- Edge cases to consider
- Related features
- Out of scope items

### Technical Notes

- API endpoints needed
- Database changes
- Third-party integrations
@@ -378,6 +404,7 @@ Documentation should be updated alongside code changes. The `describer-for-ai` s
### 2. Use Consistent Terminology

Refer to entities and concepts consistently:

- "Flyer" not "Ad" or "Circular"
- "Store" not "Retailer" or "Shop"
- "Deal" not "Offer" or "Sale"
@@ -385,6 +412,7 @@ Refer to entities and concepts consistently:
### 3. Include Examples

All documentation should include concrete examples:

- API docs: Include curl commands and JSON payloads
- User guides: Include screenshots or step-by-step instructions
- Technical docs: Include code snippets
@@ -392,6 +420,7 @@ All documentation should include concrete examples:
### 4. Cross-Reference Related Documentation

Use relative links to connect related documentation:

```markdown
See [Testing Guide](../TESTING.md) for test execution details.
```
@@ -399,6 +428,7 @@ See [Testing Guide](../TESTING.md) for test execution details.
### 5. Date and Version Documentation

Include dates on documentation that may become stale:

```markdown
**Last Updated**: 2026-01-21
**Applies to**: v0.12.x
```
@@ -46,25 +46,26 @@ Located in `src/styles/` and documented in `docs/DESIGN_TOKENS.md`:
```css
/* Core colors */
--color-primary: #ff6b35;
--color-secondary: #004e89;
--color-accent: #f7c548;
--color-background: #fffdf7;
--color-text: #1a1a1a;

/* Borders */
--border-width: 3px;
--border-color: #1a1a1a;

/* Shadows (offset style) */
--shadow-sm: 2px 2px 0 0 #1a1a1a;
--shadow-md: 4px 4px 0 0 #1a1a1a;
--shadow-lg: 6px 6px 0 0 #1a1a1a;
```

### Component Patterns

**Brutal Card:**

```tsx
<div className="border-3 border-black bg-white p-4 shadow-[4px_4px_0_0_#1A1A1A] hover:shadow-[6px_6px_0_0_#1A1A1A] hover:translate-x-[-2px] hover:translate-y-[-2px] transition-all">
  {children}
@@ -72,6 +73,7 @@ Located in `src/styles/` and documented in `docs/DESIGN_TOKENS.md`:
```

**Brutal Button:**

```tsx
<button className="border-3 border-black bg-primary px-4 py-2 font-bold shadow-[4px_4px_0_0_#1A1A1A] hover:shadow-[2px_2px_0_0_#1A1A1A] hover:translate-x-[2px] hover:translate-y-[2px] active:shadow-none active:translate-x-[4px] active:translate-y-[4px] transition-all">
  Click Me
@@ -118,7 +120,11 @@ on smaller screens."
```tsx
// Fetching data with caching
const {
  data: deals,
  isLoading,
  error,
} = useQuery({
  queryKey: ['deals', storeId],
  queryFn: () => dealsApi.getByStore(storeId),
  staleTime: 5 * 60 * 1000, // 5 minutes
@@ -131,7 +137,7 @@ const mutation = useMutation({
    await queryClient.cancelQueries(['deals']);
    const previous = queryClient.getQueryData(['deals']);
    queryClient.setQueryData(['deals'], (old) =>
      old.map((d) => (d.id === dealId ? { ...d, isFavorite: true } : d)),
    );
    return { previous };
  },
@@ -166,6 +172,7 @@ Use the **uiux-designer** subagent when you need to:
### Example Requests

**Design new feature:**

```
"Use uiux-designer to design the user flow for adding items
to a shopping list from the deals page. Consider both desktop
@@ -173,6 +180,7 @@ and mobile experiences."
```

**Improve existing UX:**

```
"Use uiux-designer to improve the flyer upload experience.
Users are confused about which file types are supported and
@@ -180,6 +188,7 @@ don't understand the processing status."
```

**Component design:**

```
"Use uiux-designer to design a price comparison component
that shows the same item across multiple stores."
@@ -235,19 +244,19 @@ export function PriceTag({
  className,
}: PriceTagProps) {
  const isOnSale = originalPrice && originalPrice > currentPrice;
  const discount = isOnSale ? Math.round((1 - currentPrice / originalPrice) * 100) : 0;

  return (
    <div className={cn('flex items-baseline gap-2', className)}>
      <span className="text-2xl font-bold text-primary">
        {currency}
        {currentPrice.toFixed(2)}
      </span>
      {isOnSale && (
        <>
          <span className="text-sm text-gray-500 line-through">
            {currency}
            {originalPrice.toFixed(2)}
          </span>
          <span className="border-2 border-black bg-accent px-1 text-xs font-bold">
            -{discount}%
@@ -276,9 +285,7 @@ describe('PriceTag', () => {
  });

  it('shows discount when original price is higher', () => {
    renderWithProviders(<PriceTag currentPrice={7.99} originalPrice={9.99} />);

    expect(screen.getByText('$7.99')).toBeInTheDocument();
    expect(screen.getByText('$9.99')).toBeInTheDocument();
    expect(screen.getByText('-20%')).toBeInTheDocument();
@@ -301,9 +308,7 @@ describe('useDeals', () => {
    const { result } = renderHook(() => useDeals('store-123'), {
      wrapper: ({ children }) => (
        <QueryClientProvider client={queryClient}>{children}</QueryClientProvider>
      ),
    });
@@ -389,10 +394,7 @@ function FlyerPage() {
```tsx
// Memoize expensive computations
const sortedDeals = useMemo(() => deals.slice().sort((a, b) => a.price - b.price), [deals]);

// Memoize callbacks passed to children
const handleSelect = useCallback((id: string) => {
```
@@ -19,58 +19,58 @@ The following subagents are available for use in this project:
### Core Development

| Subagent | Purpose | When to Use |
| --------- | --------------------------------------------------------------- | ---------------------------------------------------------------- |
| **plan** | Design implementation plans, identify files, analyze trade-offs | Starting new features, major refactoring, architecture decisions |
| **coder** | Write and modify production Node.js/TypeScript code | Implementing features, fixing bugs, writing new modules |

### Testing and Quality

| Subagent | Purpose | When to Use |
| ----------------- | ----------------------------------------------------------------- | ---------------------------------------------------- |
| **tester** | Adversarial testing: edge cases, race conditions, vulnerabilities | Finding bugs, security testing, stress testing |
| **testwriter** | Create comprehensive tests for features and fixes | Writing unit tests, integration tests, test coverage |
| **code-reviewer** | Review code quality, security, best practices | Code review, PR reviews, architecture review |

### Database and Infrastructure

| Subagent | Purpose | When to Use |
| ------------------- | --------------------------------------------------------- | -------------------------------------------------------------- |
| **db-dev** | Schemas, queries, migrations, optimization, N+1 problems | Database development, query optimization, schema changes |
| **db-admin** | PostgreSQL/Redis admin, security, backups, infrastructure | Database administration, performance tuning, backup strategies |
| **devops** | Containers, services, CI/CD pipelines, deployments | Deployment issues, container configuration, CI/CD pipelines |
| **infra-architect** | Resource optimization: RAM, CPU, disk, storage | Capacity planning, performance optimization, cost reduction |

### Specialized Technical

| Subagent | Purpose | When to Use |
| --------------------------- | ---------------------------------------------------------- | --------------------------------------------------------- |
| **bg-worker** | Background jobs: PM2 workers, BullMQ queues, async tasks | Queue management, worker debugging, job scheduling |
| **ai-usage** | LLM APIs (Gemini, Claude), prompt engineering, AI features | AI integration, prompt optimization, Gemini API issues |
| **security-engineer** | Security audits, vulnerability scanning, OWASP, pentesting | Security reviews, vulnerability assessments, compliance |
| **log-debug** | Production errors, observability, Bugsink/Sentry analysis | Debugging production issues, log analysis, error tracking |
| **integrations-specialist** | Third-party services, webhooks, external APIs | External API integration, webhook implementation |

### Frontend and Design

| Subagent | Purpose | When to Use |
| ----------------------- | ------------------------------------------------------------ | ------------------------------------------------------------- |
| **frontend-specialist** | UI components, Neo-Brutalism, Core Web Vitals, accessibility | Frontend development, performance optimization, accessibility |
| **uiux-designer** | UI/UX decisions, component design, Neo-Brutalism compliance | Design decisions, user experience improvements |

### Documentation and Planning

| Subagent | Purpose | When to Use |
| -------------------- | ----------------------------------------------------------- | ---------------------------------------------------------- |
| **documenter** | User docs, API specs, feature documentation | Writing documentation, API specs, user guides |
| **describer-for-ai** | Technical docs for AI: ADRs, system overviews, context docs | Writing ADRs, technical specifications, context documents |
| **planner** | Break down features, roadmaps, scope management | Project planning, feature breakdown, roadmap development |
| **product-owner** | Feature requirements, user stories, validation, backlog | Requirements gathering, user story writing, prioritization |

### Support

| Subagent | Purpose | When to Use |
| -------------------------------- | ---------------------------------------- | ---------------------------------------------------- |
| **tools-integration-specialist** | Bugsink, Gitea, OAuth, operational tools | Tool configuration, OAuth setup, operational tooling |

## How to Launch a Subagent
@@ -94,6 +94,7 @@ Claude will automatically invoke the appropriate subagent with the relevant cont
### Which Subagent Should I Use?

**For Writing Code:**

- New features or modules: `coder`
- Complex architectural changes: `plan` first, then `coder`
- Database-related code: `db-dev`
@@ -101,22 +102,26 @@ Claude will automatically invoke the appropriate subagent with the relevant cont
- Background job code: `bg-worker`

**For Testing:**

- Writing new tests: `testwriter`
- Finding edge cases and bugs: `tester`
- Reviewing test coverage: `code-reviewer`

**For Infrastructure:**

- Container issues: `devops`
- CI/CD pipelines: `devops`
- Database administration: `db-admin`
- Performance optimization: `infra-architect`

**For Debugging:**

- Production errors: `log-debug`
- Database issues: `db-admin` or `db-dev`
- AI/Gemini issues: `ai-usage`

**For Documentation:**

- API documentation: `documenter`
- Architecture decisions: `describer-for-ai`
- Planning and requirements: `planner` or `product-owner`
@@ -126,6 +131,7 @@ Claude will automatically invoke the appropriate subagent with the relevant cont
### 1. Start with Planning

For complex features, always start with the `plan` subagent to:

- Identify affected files
- Understand architectural implications
- Break down the work into manageable tasks
@@ -133,6 +139,7 @@ For complex features, always start with the `plan` subagent to:
### 2. Use Specialized Subagents for Specialized Work

Avoid using `coder` for database migrations. Use `db-dev` instead - it understands:

- The project's migration patterns
- Schema synchronization requirements
- PostgreSQL-specific optimizations
@@ -140,6 +147,7 @@ Avoid using `coder` for database migrations. Use `db-dev` instead - it understan
### 3. Let Subagents Follow Project Conventions

All subagents are pre-configured with knowledge of project conventions:

- ADR patterns (see [docs/adr/index.md](../adr/index.md))
- Repository pattern standards (ADR-034)
- Service layer architecture (ADR-035)
@@ -187,6 +195,7 @@ Subagents can pass information back to the main conversation and to each other t
### Subagent Not Available

If a subagent fails to launch, it may be due to:

- Incorrect subagent name (check the list above)
- Network or API issues
- Context length limitations
@@ -194,6 +203,7 @@ If a subagent fails to launch, it may be due to:
### Subagent Gives Incorrect Advice

All subagents follow the CLAUDE.md instructions. If advice seems wrong:

1. Verify the project context is correct
2. Check if the advice conflicts with an ADR
3. Provide additional context to the subagent
@@ -201,6 +211,7 @@ All subagents follow the CLAUDE.md instructions. If advice seems wrong:
### Subagent Takes Too Long

For complex tasks, subagents may take time. Consider:

- Breaking the task into smaller pieces
- Using the `plan` subagent first to scope the work
- Running simpler queries first to verify understanding
@@ -34,6 +34,7 @@ The security-engineer subagent understands:
### Example Requests

**Security Audit:**

```
"Use security-engineer to audit the user registration and
login flow for security vulnerabilities. Check for common
@@ -42,6 +43,7 @@ management problems."
```

**API Security Review:**

```
"Use security-engineer to review the flyer upload endpoint
for security issues. Consider file type validation, size
@@ -49,6 +51,7 @@ limits, malicious file handling, and authorization."
```

**Vulnerability Assessment:**

```
"Use security-engineer to assess our exposure to the OWASP
Top 10 vulnerabilities. Identify any gaps in our current
@@ -60,6 +63,7 @@ security measures."
The security-engineer subagent uses this checklist:

#### Authentication & Authorization

- [ ] Password hashing with bcrypt (cost factor >= 10)
- [ ] JWT tokens with appropriate expiration
- [ ] Refresh token rotation
@@ -67,23 +71,27 @@ The security-engineer subagent uses this checklist:
- [ ] Role-based access control (RBAC)
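The RBAC item above can be sketched as a pure helper. This is a minimal illustration only; the `Role` union and `authorize` function are hypothetical names, not the project's actual auth code:

```typescript
// Hypothetical role model for illustration; the real project roles may differ.
type Role = 'admin' | 'editor' | 'viewer';

// Returns true when the user holds at least one of the required roles.
function authorize(userRoles: Role[], required: Role[]): boolean {
  return required.some((role) => userRoles.includes(role));
}

// An editor may edit content but not perform admin actions.
const canEdit = authorize(['editor'], ['admin', 'editor']); // true
const canAdmin = authorize(['editor'], ['admin']); // false
```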
#### Input Validation

- [ ] All user input validated with Zod schemas
- [ ] SQL queries use parameterized statements
- [ ] File uploads validated for type and size
- [ ] Path traversal prevention
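The path traversal item can be sketched with Node's `path` module. `UPLOAD_ROOT` and `safeResolve` are illustrative names, not the project's actual upload code:

```typescript
import * as path from 'path';

// Illustrative uploads root; the real location is environment-specific.
const UPLOAD_ROOT = '/var/uploads';

// Resolve the user-supplied path against the root and reject anything that
// escapes it (e.g. "../../etc/passwd").
function safeResolve(userPath: string): string | null {
  const resolved = path.resolve(UPLOAD_ROOT, userPath);
  // The trailing separator prevents "/var/uploads-evil" from passing the check.
  return resolved.startsWith(UPLOAD_ROOT + path.sep) ? resolved : null;
}

safeResolve('flyers/jan.pdf'); // '/var/uploads/flyers/jan.pdf'
safeResolve('../../etc/passwd'); // null
```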
#### Data Protection

- [ ] Sensitive data encrypted at rest
- [ ] HTTPS enforced in production
- [ ] No secrets in source code
- [ ] Proper error messages (no stack traces to users)

#### Rate Limiting

- [ ] Login attempts limited
- [ ] API endpoints rate limited
- [ ] File upload rate limited

#### Headers & CORS

- [ ] Security headers set (Helmet.js)
- [ ] CORS configured appropriately
- [ ] Content-Security-Policy defined
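The header and CORS items above might look like the following with Helmet and the `cors` middleware. This is a hedged configuration sketch; the CSP directives and allowed origin are illustrative, not the project's actual settings:

```typescript
import express from 'express';
import helmet from 'helmet';
import cors from 'cors';

const app = express();

// Security headers, including a restrictive Content-Security-Policy.
app.use(
  helmet({
    contentSecurityPolicy: {
      directives: {
        defaultSrc: ["'self'"],
        imgSrc: ["'self'", 'data:'],
      },
    },
  }),
);

// Allow only the known frontend origin (illustrative URL).
app.use(
  cors({
    origin: 'https://flyer-crawler.projectium.com',
    credentials: true,
  }),
);
```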
@@ -91,6 +99,7 @@ The security-engineer subagent uses this checklist:
### Security Patterns in This Project

**Rate Limiting (ADR-032):**

```typescript
// src/config/rateLimiters.ts
export const loginLimiter = rateLimit({
@@ -101,6 +110,7 @@ export const loginLimiter = rateLimit({
```

**Input Validation (ADR-003):**

```typescript
// src/middleware/validation.middleware.ts
router.post(
@@ -111,6 +121,7 @@ router.post(
```

**Authentication (ADR-048):**

```typescript
// JWT with refresh tokens
const accessToken = jwt.sign(payload, secret, { expiresIn: '15m' });
@@ -158,6 +169,7 @@ mcp__localerrors__list_issues({ project_id: 1 })
### Example Requests

**Production Error Investigation:**

```
"Use log-debug to investigate the spike in 500 errors on the
flyer processing endpoint yesterday. Check Bugsink for error
@@ -165,6 +177,7 @@ patterns and identify the root cause."
```

**Performance Analysis:**

```
"Use log-debug to analyze the slow response times on the deals
page. Check logs for database query timing and identify any
@@ -172,6 +185,7 @@ bottlenecks."
```

**Error Pattern Analysis:**

```
"Use log-debug to identify patterns in the authentication
failures over the past week. Are they coming from specific
@@ -181,6 +195,7 @@ IPs or affecting specific users?"
### Log Analysis Patterns

**Structured Log Format (Pino):**

```json
{
  "level": 50,
@@ -200,6 +215,7 @@ IPs or affecting specific users?"
```

**Request Tracing:**

```typescript
// Each request gets a unique ID for tracing
app.use((req, res, next) => {
@@ -210,6 +226,7 @@ app.use((req, res, next) => {
```

**Error Correlation:**

- Same `requestId` across all logs for a request
- Same `userId` for user-related errors
- Same `flyerId` for flyer processing errors
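The correlation idea above can be sketched as a context object that is merged into every log entry. This is a simplified illustration; in the real app the fields would flow through Pino rather than the toy `makeLogger` helper shown here:

```typescript
// One context object per request; every entry gets the same correlated IDs.
type LogContext = { requestId: string; userId?: string; flyerId?: string };

function makeLogger(ctx: LogContext) {
  return {
    error(msg: string, fields: Record<string, unknown> = {}) {
      // Pino would serialize this; here we return the merged entry to show
      // the correlated shape (level 50 = error, matching the format above).
      return { level: 50, msg, ...ctx, ...fields };
    },
  };
}

const log = makeLogger({ requestId: 'req-123', userId: 'user-9' });
const entry = log.error('flyer processing failed', { flyerId: 'flyer-42' });
// Every entry for this request carries requestId 'req-123'.
```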
@@ -218,25 +235,26 @@ app.use((req, res, next) => {
**Production Bugsink Projects:**

| Project | ID | Purpose |
| ---------------------------- | --- | --------------- |
| flyer-crawler-backend | 1 | Backend errors |
| flyer-crawler-frontend | 2 | Frontend errors |
| flyer-crawler-backend-test | 3 | Test backend |
| flyer-crawler-frontend-test | 4 | Test frontend |
| flyer-crawler-infrastructure | 5 | Infra errors |

**Accessing Bugsink:**

- Production: https://bugsink.projectium.com
- Dev Container: http://localhost:8000

### Log File Locations

| Environment | Log Path |
| ------------- | --------------------------------------------------------- |
| Production | `/var/www/flyer-crawler.projectium.com/logs/app.log` |
| Test | `/var/www/flyer-crawler-test.projectium.com/logs/app.log` |
| Dev Container | `/app/logs/app.log` |

## The code-reviewer Subagent
@@ -264,6 +282,7 @@ The code-reviewer subagent understands:
### Example Requests

**Code Review:**

```
"Use code-reviewer to review the changes in the shopping list
feature branch. Check for adherence to project patterns,
@@ -271,6 +290,7 @@ potential bugs, and security issues."
```

**Architecture Review:**

```
"Use code-reviewer to review the proposed changes to the
caching layer. Does it follow our patterns? Are there
@@ -278,6 +298,7 @@ potential issues with cache invalidation?"
```

**Security-Focused Review:**

```
"Use code-reviewer to review the new file upload handling
code with a focus on security. Check for path traversal,
@@ -289,6 +310,7 @@ file type validation, and size limits."
The code-reviewer subagent checks:

#### Code Quality

- [ ] Follows TypeScript strict mode
- [ ] No `any` types without justification
- [ ] Proper error handling
@@ -296,24 +318,28 @@ The code-reviewer subagent checks:
- [ ] Appropriate comments

#### Architecture

- [ ] Follows layer separation (Routes -> Services -> Repositories)
- [ ] Uses correct file naming conventions
- [ ] Repository methods follow naming patterns
- [ ] Transactions used for multi-operation changes

#### Testing

- [ ] New code has corresponding tests
- [ ] Tests follow project patterns
- [ ] Edge cases covered
- [ ] Mocks used appropriately

#### Security

- [ ] Input validation present
- [ ] Authorization checks in place
- [ ] No secrets in code
- [ ] Error messages don't leak information

#### Performance

- [ ] No obvious N+1 queries
- [ ] Appropriate use of caching
- [ ] Large data sets paginated
@@ -325,30 +351,37 @@ The code-reviewer subagent checks:
## Code Review: [Feature/PR Name] ## Code Review: [Feature/PR Name]
### Summary ### Summary
Brief overview of the changes reviewed. Brief overview of the changes reviewed.
### Issues Found ### Issues Found
#### Critical #### Critical
- **[File:Line]** Description of critical issue - **[File:Line]** Description of critical issue
- Impact: What could go wrong - Impact: What could go wrong
- Suggestion: How to fix - Suggestion: How to fix
#### High Priority #### High Priority
- **[File:Line]** Description - **[File:Line]** Description
#### Medium Priority #### Medium Priority
- **[File:Line]** Description - **[File:Line]** Description
#### Low Priority / Suggestions #### Low Priority / Suggestions
- **[File:Line]** Description - **[File:Line]** Description
### Positive Observations ### Positive Observations
- Good patterns followed - Good patterns followed
- Well-tested areas - Well-tested areas
- Clean implementations - Clean implementations
### Recommendations ### Recommendations
1. Priority items to address before merge 1. Priority items to address before merge
2. Items for follow-up tickets 2. Items for follow-up tickets
``` ```
@@ -7,11 +7,11 @@ This guide covers two related but distinct subagents for testing in the Flyer Cr

## Understanding the Difference

| Aspect          | tester                          | testwriter                      |
| --------------- | ------------------------------- | ------------------------------- |
| **Purpose**     | Find bugs and weaknesses        | Create test coverage            |
| **Approach**    | Adversarial, exploratory        | Systematic, comprehensive       |
| **Output**      | Bug reports, security findings  | Test files, test utilities      |
| **When to Use** | Before release, security review | During development, refactoring |

## The tester Subagent

@@ -41,6 +41,7 @@ The tester subagent understands:

### Example Requests

**Finding edge cases:**

```
"Use the tester subagent to find edge cases in the flyer upload
endpoint. Consider file types, sizes, concurrent uploads, and
@@ -48,6 +49,7 @@ invalid data scenarios."
```

**Security testing:**

```
"Use the tester subagent to review the authentication flow for
security vulnerabilities, including JWT handling, session management,
@@ -55,6 +57,7 @@ and OAuth integration."
```

**Race condition analysis:**

```
"Use the tester subagent to identify potential race conditions in
the shopping list sharing feature where multiple users might modify
@@ -107,13 +110,13 @@ The testwriter subagent understands:

### Testing Framework Stack

| Tool                      | Version | Purpose           |
| ------------------------- | ------- | ----------------- |
| Vitest                    | 4.0.15  | Test runner       |
| @testing-library/react    | 16.3.0  | Component testing |
| @testing-library/jest-dom | 6.9.1   | DOM assertions    |
| supertest                 | 7.1.4   | API testing       |
| msw                       | 2.12.3  | Network mocking   |

### Test File Organization

@@ -135,6 +138,7 @@ src/

### Example Requests

**Unit tests for a new feature:**

```
"Use the testwriter subagent to create comprehensive unit tests
for the new StoreSearchService in src/services/storeSearchService.ts.
@@ -142,6 +146,7 @@ Include edge cases for empty results, partial matches, and pagination."
```

**Integration tests for API:**

```
"Use the testwriter subagent to add integration tests for the
POST /api/flyers endpoint, covering successful uploads, validation
@@ -149,6 +154,7 @@ errors, authentication requirements, and file size limits."
```

**Regression test for bug fix:**

```
"Use the testwriter subagent to create a regression test that
verifies the fix for issue #123 where duplicate flyer items were
@@ -213,9 +219,7 @@ describe('Stores API', () => {
  describe('GET /api/stores', () => {
    it('returns list of stores', async () => {
      const response = await request.get('/api/stores').set('Authorization', `Bearer ${authToken}`);

      expect(response.status).toBe(200);
      expect(response.body.data.stores).toBeInstanceOf(Array);
@@ -284,12 +288,12 @@ npm test -- --run src/services/storeService.test.ts

### Test Commands Reference

| Command                    | Description                           |
| -------------------------- | ------------------------------------- |
| `npm test`                 | All unit tests                        |
| `npm run test:unit`        | Unit tests only                       |
| `npm run test:integration` | Integration tests (requires DB/Redis) |
| `npm run test:coverage`    | Tests with coverage report            |

## Known Testing Issues

@@ -308,6 +312,7 @@ Vitest's `globalSetup` runs in a separate Node.js context. Mocks and spies do NO

The cleanup worker may process jobs before tests can verify them.

**Solution**:

```typescript
const { cleanupQueue } = await import('../../services/queues.server');
await cleanupQueue.drain();
@@ -321,6 +326,7 @@ await cleanupQueue.resume();

Direct database inserts bypass cache invalidation.

**Solution**:

```typescript
await cacheService.invalidateFlyers();
```

@@ -330,6 +336,7 @@ await cacheService.invalidateFlyers();

File upload tests need unique filenames to avoid collisions.

**Solution**:

```typescript
const filename = `test-${Date.now()}-${Math.round(Math.random() * 1e9)}.jpg`;
```
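The one-liner above can be wrapped in a small helper so the uniqueness property is itself testable (a sketch; the `makeTestFilename` name is ours, not part of the project):

```typescript
// Hypothetical helper wrapping the project's unique-filename pattern.
// Combines a timestamp with a random suffix so parallel tests never collide.
function makeTestFilename(ext: string = 'jpg'): string {
  const suffix = Math.round(Math.random() * 1e9);
  return `test-${Date.now()}-${suffix}.${ext}`;
}

// Two names generated back-to-back should differ.
const a = makeTestFilename();
const b = makeTestFilename('png');
```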
@@ -20,14 +20,14 @@ This document covers the complete setup and usage of Bugsink for error tracking

Bugsink is a lightweight, self-hosted error tracking platform that is fully compatible with the Sentry SDK ecosystem. We use Bugsink instead of Sentry SaaS or self-hosted Sentry for several reasons:

| Aspect            | Bugsink                    | Self-Hosted Sentry               |
| ----------------- | -------------------------- | -------------------------------- |
| Resource Usage    | Single process, ~256MB RAM | 16GB+ RAM, Kafka, ClickHouse     |
| Deployment        | Simple pip/binary install  | Docker Compose with 20+ services |
| SDK Compatibility | Full Sentry SDK support    | Full Sentry SDK support          |
| Database          | PostgreSQL or SQLite       | PostgreSQL + ClickHouse          |
| Cost              | Free, self-hosted          | Free, self-hosted                |
| Maintenance       | Minimal                    | Significant                      |

**Key Benefits:**

@@ -44,16 +44,16 @@ Bugsink is a lightweight, self-hosted error tracking platform that is fully comp

### Dev Container (Local Development)

| Item             | Value                                                             |
| ---------------- | ----------------------------------------------------------------- |
| Web UI           | `https://localhost:8443` (nginx proxy)                            |
| Internal URL     | `http://localhost:8000` (direct)                                  |
| Credentials      | `admin@localhost` / `admin`                                       |
| Backend Project  | Project ID 1 - `flyer-crawler-dev-backend`                        |
| Frontend Project | Project ID 2 - `flyer-crawler-dev-frontend`                       |
| Backend DSN      | `http://<key>@localhost:8000/1`                                   |
| Frontend DSN     | `http://<key>@localhost:8000/2`                                   |
| Database         | `postgresql://bugsink:bugsink_dev_password@postgres:5432/bugsink` |

**Configuration Files:**

@@ -66,13 +66,13 @@ Bugsink is a lightweight, self-hosted error tracking platform that is fully comp

### Production

| Item             | Value                                   |
| ---------------- | --------------------------------------- |
| Web UI           | `https://bugsink.projectium.com`        |
| Credentials      | Managed separately (not shared in docs) |
| Backend Project  | `flyer-crawler-backend`                 |
| Frontend Project | `flyer-crawler-frontend`                |
| Infra Project    | `flyer-crawler-infrastructure`          |

**Bugsink Projects:**

@@ -113,10 +113,10 @@ ssh root@projectium.com "cd /opt/bugsink && bugsink-manage create_auth_token"

### Token Storage

| Environment | Storage Location              | Notes                  |
| ----------- | ----------------------------- | ---------------------- |
| Dev         | `.mcp.json` (project-level)   | Not committed to git   |
| Production  | Gitea secrets + settings.json | `BUGSINK_TOKEN` secret |

---

@@ -181,31 +181,33 @@ Location: Project root `.mcp.json`

The bugsink-mcp package requires exactly TWO environment variables:

| Variable        | Description             | Required |
| --------------- | ----------------------- | -------- |
| `BUGSINK_URL`   | Bugsink instance URL    | Yes      |
| `BUGSINK_TOKEN` | API token (40-char hex) | Yes      |

**Common Mistakes:**

- Using `BUGSINK_API_TOKEN` (wrong - use `BUGSINK_TOKEN`)
- Including `BUGSINK_ORG_SLUG` (not used by the package)
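Since only these two variables are read, a startup check can fail fast on the common mistakes above. A minimal sketch (the `checkBugsinkEnv` helper is illustrative, not part of bugsink-mcp):

```typescript
// Illustrative startup validation for the two required variables.
// Flags the common mistakes: missing values and the wrongly named token.
function checkBugsinkEnv(env: Record<string, string | undefined>): string[] {
  const errors: string[] = [];
  if (!env.BUGSINK_URL) errors.push('BUGSINK_URL is required');
  if (!env.BUGSINK_TOKEN) {
    errors.push(
      env.BUGSINK_API_TOKEN
        ? 'Found BUGSINK_API_TOKEN - the package reads BUGSINK_TOKEN'
        : 'BUGSINK_TOKEN is required',
    );
  } else if (!/^[0-9a-f]{40}$/i.test(env.BUGSINK_TOKEN)) {
    errors.push('BUGSINK_TOKEN should be a 40-character hex string');
  }
  return errors;
}
```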
### Available MCP Tools

| Tool              | Purpose                              |
| ----------------- | ------------------------------------ |
| `test_connection` | Verify MCP server can reach Bugsink  |
| `list_projects`   | List all projects in the instance    |
| `get_project`     | Get project details including DSN    |
| `list_issues`     | List issues for a project            |
| `get_issue`       | Get detailed issue information       |
| `list_events`     | List individual error occurrences    |
| `get_event`       | Get full event details with context  |
| `get_stacktrace`  | Get pre-rendered Markdown stacktrace |
| `list_releases`   | List releases for a project          |
| `create_release`  | Create a new release                 |

**Tool Prefixes:**

- Production: `mcp__bugsink__*`
- Dev Container: `mcp__localerrors__*`

@@ -215,11 +217,11 @@ After configuration, restart Claude Code and test:

```typescript
// Production
mcp__bugsink__test_connection();
// Expected: "Connection successful: Connected successfully. Found N project(s)."

// Dev Container
mcp__localerrors__test_connection();
// Expected: "Connection successful: Connected successfully. Found N project(s)."
```

@@ -255,14 +257,14 @@ export function initSentry(): void {

**Key Functions:**

| Function                | Purpose                                      |
| ----------------------- | -------------------------------------------- |
| `initSentry()`          | Initialize SDK at application startup        |
| `captureException()`    | Manually capture caught errors               |
| `captureMessage()`      | Log non-exception events                     |
| `setUser()`             | Set user context after authentication        |
| `addBreadcrumb()`       | Add navigation/action breadcrumbs            |
| `getSentryMiddleware()` | Get Express middleware for automatic capture |

**Integration in server.ts:**

@@ -310,9 +312,11 @@ export function initSentry(): void {
    ],
    beforeSend(event) {
      // Filter browser extension errors
      if (
        event.exception?.values?.[0]?.stacktrace?.frames?.some((frame) =>
          frame.filename?.includes('extension://'),
        )
      ) {
        return null;
      }
      return event;
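The extension filter above can be factored into a standalone predicate so it is unit-testable outside the SDK (a sketch; the event shape is narrowed to the fields the filter touches, and `isExtensionError` is our name):

```typescript
// Minimal slice of the Sentry event shape that the filter inspects.
interface FilterableEvent {
  exception?: {
    values?: Array<{
      stacktrace?: { frames?: Array<{ filename?: string }> };
    }>;
  };
}

// Returns true when the first exception's stack contains a browser
// extension frame, i.e. beforeSend should drop the event (return null).
function isExtensionError(event: FilterableEvent): boolean {
  return (
    event.exception?.values?.[0]?.stacktrace?.frames?.some((frame) =>
      frame.filename?.includes('extension://'),
    ) ?? false
  );
}
```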
@@ -338,21 +342,21 @@ const config = {

**Backend (src/config/env.ts):**

| Variable             | Description                    | Default    |
| -------------------- | ------------------------------ | ---------- |
| `SENTRY_DSN`         | Sentry-compatible DSN          | (optional) |
| `SENTRY_ENABLED`     | Enable/disable error reporting | `true`     |
| `SENTRY_ENVIRONMENT` | Environment tag                | NODE_ENV   |
| `SENTRY_DEBUG`       | Enable SDK debug logging       | `false`    |
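Read together with their defaults, these variables can be resolved along these lines (a sketch; the real src/config/env.ts may structure this differently):

```typescript
interface SentryConfig {
  dsn?: string;
  enabled: boolean;
  environment: string;
  debug: boolean;
}

// Resolve the backend Sentry settings, applying the documented defaults:
// SENTRY_ENABLED defaults to true, SENTRY_ENVIRONMENT falls back to NODE_ENV,
// SENTRY_DEBUG defaults to false, and SENTRY_DSN stays optional.
function resolveSentryConfig(env: Record<string, string | undefined>): SentryConfig {
  return {
    dsn: env.SENTRY_DSN,
    enabled: env.SENTRY_ENABLED !== 'false',
    environment: env.SENTRY_ENVIRONMENT ?? env.NODE_ENV ?? 'development',
    debug: env.SENTRY_DEBUG === 'true',
  };
}
```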
**Frontend (Vite):**

| Variable                  | Description                     |
| ------------------------- | ------------------------------- |
| `VITE_SENTRY_DSN`         | Frontend DSN (separate project) |
| `VITE_SENTRY_ENVIRONMENT` | Environment tag                 |
| `VITE_SENTRY_DEBUG`       | Enable SDK debug logging        |
| `VITE_SENTRY_ENABLED`     | Enable/disable error reporting  |

---

@@ -364,11 +368,11 @@ Logstash aggregates logs from multiple sources and forwards error patterns to Bu

### Log Sources

| Source     | Log Path               | Error Detection           |
| ---------- | ---------------------- | ------------------------- |
| Pino (app) | `/app/logs/*.log`      | level >= 50 (error/fatal) |
| Redis      | `/var/log/redis/*.log` | WARNING/ERROR log levels  |
| PostgreSQL | (future)               | ERROR/FATAL log levels    |
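For the Pino source, `level >= 50` refers to Pino's numeric levels, where error is 50 and fatal is 60. The forwarding decision on a raw log line can be sketched as follows (our helper for illustration, not the actual Logstash filter):

```typescript
// Decide whether a single Pino JSON log line should be forwarded to
// Bugsink: parse it and check the numeric level (50 = error, 60 = fatal).
function shouldForwardPinoLine(line: string): boolean {
  try {
    const record = JSON.parse(line) as { level?: number };
    return typeof record.level === 'number' && record.level >= 50;
  } catch {
    return false; // non-JSON lines are ignored
  }
}
```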
### Pipeline Configuration

@@ -433,11 +437,13 @@ output {

### Accessing the Web UI

**Dev Container:**

1. Open `https://localhost:8443` in your browser
2. Accept the self-signed certificate warning
3. Login with `admin@localhost` / `admin`

**Production:**

1. Open `https://bugsink.projectium.com`
2. Login with your credentials

@@ -445,13 +451,14 @@ output {

Bugsink organizes errors into projects:

| Concept | Description                                    |
| ------- | ---------------------------------------------- |
| Team    | Group of projects (e.g., "Flyer Crawler")      |
| Project | Single application/service                     |
| DSN     | Data Source Name - unique key for each project |
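A DSN packs the project key and project ID into an ordinary URL, so it can be decomposed with the standard `URL` class (a sketch using the dev-container DSN shape shown earlier; `parseDsn` is our name):

```typescript
// Split a Sentry-style DSN such as http://<key>@localhost:8000/1 into
// its parts: the public key (userinfo), the host, and the project ID (path).
function parseDsn(dsn: string): { key: string; host: string; projectId: number } {
  const url = new URL(dsn);
  return {
    key: url.username,
    host: url.host,
    projectId: Number(url.pathname.slice(1)),
  };
}
```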
To view projects:

1. Click the project dropdown in the top navigation
2. Or use MCP: `mcp__bugsink__list_projects()`

@@ -460,11 +467,13 @@ To view projects:

**Issues** represent grouped error occurrences. Multiple identical errors are deduplicated into a single issue.
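The deduplication can be pictured as grouping events by a fingerprint, for example error type plus message (a simplified sketch of the idea; Bugsink's actual grouping rules are richer):

```typescript
interface ErrorEvent {
  type: string;
  message: string;
  timestamp: number;
}

interface Issue {
  fingerprint: string;
  count: number;
  firstSeen: number;
  lastSeen: number;
}

// Fold individual events into issues keyed by type + message, tracking
// the occurrence count and first/last seen timestamps shown in the UI.
function groupIntoIssues(events: ErrorEvent[]): Map<string, Issue> {
  const issues = new Map<string, Issue>();
  for (const e of events) {
    const fingerprint = `${e.type}:${e.message}`;
    const existing = issues.get(fingerprint);
    if (existing) {
      existing.count += 1;
      existing.firstSeen = Math.min(existing.firstSeen, e.timestamp);
      existing.lastSeen = Math.max(existing.lastSeen, e.timestamp);
    } else {
      issues.set(fingerprint, {
        fingerprint,
        count: 1,
        firstSeen: e.timestamp,
        lastSeen: e.timestamp,
      });
    }
  }
  return issues;
}
```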
**Issue List View:**

- Navigate to a project
- Issues are sorted by last occurrence
- Each issue shows: title, count, first/last seen

**Issue Detail View:**

- Click an issue to see full details
- View aggregated statistics
- See list of individual events

@@ -475,6 +484,7 @@ To view projects:

**Events** are individual error occurrences.

**Event Information:**

- Full stacktrace
- Request context (URL, method, headers)
- User context (if set)

@@ -485,13 +495,13 @@ To view projects:

```typescript
// List events for an issue
mcp__bugsink__list_events({ issue_id: 'uuid-here' });

// Get full event details
mcp__bugsink__get_event({ event_id: 'uuid-here' });

// Get readable stacktrace
mcp__bugsink__get_stacktrace({ event_id: 'uuid-here' });
```

### Stacktraces and Context

@@ -499,17 +509,20 @@ mcp__bugsink__get_stacktrace({ event_id: "uuid-here" })

Stacktraces show the call stack at the time of error:

**Via Web UI:**

- Open an event
- Expand the "Exception" section
- Click frames to see source code context

**Via MCP:**

- `get_stacktrace` returns pre-rendered Markdown
- Includes file paths, line numbers, function names

### Filtering and Searching

**Web UI Filters:**

- By status: unresolved, resolved, muted
- By date range
- By release version

@@ -521,16 +534,16 @@ Stacktraces show the call stack at the time of error:
// Filter by status
mcp__bugsink__list_issues({
  project_id: 1,
  status: 'unresolved',
  limit: 25,
});

// Sort options
mcp__bugsink__list_issues({
  project_id: 1,
  sort: 'last_seen', // or "digest_order"
  order: 'desc', // or "asc"
});
```

### Release Tracking

@@ -542,14 +555,14 @@ Releases help identify which version introduced or fixed issues.

```typescript
mcp__bugsink__create_release({
  project_id: 1,
  version: '1.2.3',
});
```

**Viewing Releases:**

```typescript
mcp__bugsink__list_releases({ project_id: 1 });
```

---

@@ -559,24 +572,27 @@ mcp__bugsink__list_releases({ project_id: 1 })

### Investigating Production Errors

1. **Check for new errors** (via MCP):

   ```typescript
   mcp__bugsink__list_issues({
     project_id: 1,
     status: 'unresolved',
     sort: 'last_seen',
     limit: 10,
   });
   ```

2. **Get issue details**:

   ```typescript
   mcp__bugsink__get_issue({ issue_id: 'uuid' });
   ```

3. **View stacktrace**:

   ```typescript
   mcp__bugsink__list_events({ issue_id: 'uuid', limit: 1 });
   mcp__bugsink__get_stacktrace({ event_id: 'event-uuid' });
   ```

4. **Examine the code**: Use the file path and line numbers from the stacktrace to locate the issue in the codebase.

@@ -588,9 +604,11 @@ mcp__bugsink__list_releases({ project_id: 1 })
   - Check occurrence counts and frequency

2. **Examine request context**:

   ```typescript
   mcp__bugsink__get_event({ event_id: 'uuid' });
   ```

   Look for: URL, HTTP method, request body, user info

3. **Review breadcrumbs**: Understand the sequence of actions leading to the error.

@@ -608,6 +626,7 @@ mcp__bugsink__list_releases({ project_id: 1 })

1. **Access local Bugsink**: `https://localhost:8443`
2. **Trigger a test error**:

   ```bash
   curl -X POST http://localhost:3001/api/test/error
   ```

@@ -616,7 +635,7 @@ mcp__bugsink__list_releases({ project_id: 1 })

4. **Query via MCP**:

   ```typescript
   mcp__localerrors__list_issues({ project_id: 1 });
   ```

---

@@ -626,6 +645,7 @@ mcp__bugsink__list_releases({ project_id: 1 })

### MCP Server Not Available

**Symptoms:**

- `mcp__localerrors__*` tools return "No such tool available"
- `mcp__bugsink__*` works but `mcp__localerrors__*` fails

@@ -636,12 +656,14 @@ mcp__bugsink__list_releases({ project_id: 1 })

2. **Verify token variable name**: Use `BUGSINK_TOKEN`, not `BUGSINK_API_TOKEN`
3. **Test manually**:

   ```bash
   cd d:\gitea\bugsink-mcp
   set BUGSINK_URL=http://localhost:8000
   set BUGSINK_TOKEN=<your-token>
   node dist/index.js
   ```

   Expected: `Bugsink MCP server started`

4. **Full restart**: Close VS Code completely, restart

@@ -655,11 +677,13 @@ See [BUGSINK-MCP-TROUBLESHOOTING.md](../BUGSINK-MCP-TROUBLESHOOTING.md) for deta

**Solutions:**

1. **Check container status**:

   ```bash
   podman exec flyer-crawler-dev systemctl status bugsink
   ```

2. **Start the service**:

   ```bash
   podman exec flyer-crawler-dev systemctl start bugsink
   ```
@@ -32,12 +32,12 @@ Model Context Protocol (MCP) is a standardized protocol that allows AI assistant
**Why We Use MCP:** **Why We Use MCP:**
| Benefit | Description | | Benefit | Description |
|---------|-------------| | ------------------ | ------------------------------------------------------------------------ |
| Direct Integration | Claude can directly query databases, inspect containers, and access APIs | | Direct Integration | Claude can directly query databases, inspect containers, and access APIs |
| Context Awareness | Tools provide real-time information without manual copy-paste | | Context Awareness | Tools provide real-time information without manual copy-paste |
| Automation | Complex workflows can be executed through tool chains | | Automation | Complex workflows can be executed through tool chains |
| Consistency | Standardized interface across different services | | Consistency | Standardized interface across different services |
--- ---
@@ -45,17 +45,17 @@ Model Context Protocol (MCP) is a standardized protocol that allows AI assistant
The Flyer Crawler project uses the following MCP servers: The Flyer Crawler project uses the following MCP servers:
| Server | Tool Prefix | Purpose | Config Location | | Server | Tool Prefix | Purpose | Config Location |
|--------|-------------|---------|-----------------| | ------------------ | -------------------------- | -------------------------------------------------- | --------------- |
| `gitea-projectium` | `mcp__gitea-projectium__*` | Gitea API at gitea.projectium.com | Global | | `gitea-projectium` | `mcp__gitea-projectium__*` | Gitea API at gitea.projectium.com | Global |
| `gitea-torbonium` | `mcp__gitea-torbonium__*` | Gitea API at gitea.torbonium.com | Global | | `gitea-torbonium` | `mcp__gitea-torbonium__*` | Gitea API at gitea.torbonium.com | Global |
| `podman` | `mcp__podman__*` | Container management | Global | | `podman` | `mcp__podman__*` | Container management | Global |
| `filesystem` | `mcp__filesystem__*` | File system access | Global | | `filesystem` | `mcp__filesystem__*` | File system access | Global |
| `memory` | `mcp__memory__*` | Knowledge graph persistence | Global | | `memory` | `mcp__memory__*` | Knowledge graph persistence | Global |
| `redis` | `mcp__redis__*` | Redis cache inspection | Global | | `redis` | `mcp__redis__*` | Redis cache inspection | Global |
| `bugsink` | `mcp__bugsink__*` | Production error tracking (bugsink.projectium.com) | Global | | `bugsink` | `mcp__bugsink__*` | Production error tracking (bugsink.projectium.com) | Global |
| `localerrors` | `mcp__localerrors__*` | Dev container error tracking (localhost:8000) | Project | | `localerrors` | `mcp__localerrors__*` | Dev container error tracking (localhost:8000) | Project |
| `devdb` | `mcp__devdb__*` | Development PostgreSQL database | Project | | `devdb` | `mcp__devdb__*` | Development PostgreSQL database | Project |
--- ---
@@ -66,11 +66,13 @@ Claude Code uses **two separate configuration systems** for MCP servers:
### Global Configuration

**Location (Windows):**

```text
C:\Users\<username>\.claude\settings.json
```

**Used For:**

- Production services (HTTPS endpoints)
- Servers shared across all projects
- Container management (Podman)
@@ -79,24 +81,26 @@ C:\Users\<username>\.claude\settings.json
### Project-Level Configuration

**Location:**

```text
<project-root>/.mcp.json
```

**Used For:**

- Localhost services (HTTP endpoints)
- Development databases
- Project-specific tools

### When to Use Each

| Scenario | Configuration |
| --------------------------------- | ---------------------- |
| Production APIs (HTTPS) | Global `settings.json` |
| Shared tools (memory, filesystem) | Global `settings.json` |
| Localhost services (HTTP) | Project `.mcp.json` |
| Development databases | Project `.mcp.json` |
| Per-project customization | Project `.mcp.json` |

**Important:** Localhost MCP servers work more reliably in project-level `.mcp.json` than in global `settings.json`. See [Troubleshooting](#localhost-servers-not-loading) for details.
@@ -123,12 +127,12 @@ C:\Users\<username>\.claude\settings.json
**Configuration Options:**

| Field | Required | Description |
| ---------- | -------- | ----------------------------------------- |
| `command` | Yes | Path to executable or command |
| `args` | No | Array of command-line arguments |
| `env` | No | Environment variables for the server |
| `disabled` | No | Set to `true` to disable without removing |
### Example Global Configuration
@@ -249,6 +253,7 @@ d:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com\.mcp.json
**Purpose:** Persists knowledge across sessions - project context, credentials, known issues.

**Configuration:**

```json
"memory": {
  "command": "D:\\nodejs\\npx.cmd",
@@ -257,6 +262,7 @@ d:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com\.mcp.json
```

**Key Tools:**

- `mcp__memory__read_graph` - Read entire knowledge graph
- `mcp__memory__search_nodes` - Search for specific entities
- `mcp__memory__create_entities` - Add new knowledge
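A hypothetical call sequence, mirroring the usage examples shown for other servers in this document (argument shapes follow the `@modelcontextprotocol/server-memory` schema, but the observation text is illustrative only):

```
// Illustrative MCP tool invocations - not runnable TypeScript
mcp__memory__search_nodes({ query: 'bugsink' });
mcp__memory__create_entities({
  entities: [
    { name: 'Bugsink Dev Instance', entityType: 'service', observations: ['Runs at http://127.0.0.1:8000'] },
  ],
});
mcp__memory__read_graph();
```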
@@ -268,6 +274,7 @@ d:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com\.mcp.json
**Purpose:** Provides file system access to specified directories.

**Configuration:**

```json
"filesystem": {
  "command": "d:\\nodejs\\node.exe",
@@ -287,6 +294,7 @@ d:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com\.mcp.json
**Purpose:** Container management - list, start, stop, inspect containers.

**Configuration:**

```json
"podman": {
  "command": "D:\\nodejs\\npx.cmd",
@@ -298,6 +306,7 @@ d:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com\.mcp.json
```

**Key Tools:**

- `mcp__podman__container_list` - List running containers
- `mcp__podman__container_logs` - View container logs
- `mcp__podman__container_inspect` - Detailed container info
@@ -310,6 +319,7 @@ d:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com\.mcp.json
**Purpose:** Inspect Redis cache, set/get values, list keys.

**Configuration:**

```json
"redis": {
  "command": "D:\\nodejs\\npx.cmd",
@@ -318,6 +328,7 @@ d:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com\.mcp.json
```

**Key Tools:**

- `mcp__redis__get` - Get value by key
- `mcp__redis__set` - Set key-value pair
- `mcp__redis__list` - List keys matching pattern
@@ -329,10 +340,10 @@ d:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com\.mcp.json
Bugsink is a self-hosted error tracking service. We run two instances:

| Instance | URL | MCP Server | Purpose |
| ----------- | -------------------------------- | ------------- | ---------------------------- |
| Production | `https://bugsink.projectium.com` | `bugsink` | Production error tracking |
| Development | `http://localhost:8000` | `localerrors` | Dev container error tracking |

### Installation
@@ -353,6 +364,7 @@ npm run build
### Configuration

**Production (Global `settings.json`):**

```json
"bugsink": {
  "command": "d:\\nodejs\\node.exe",
@@ -365,6 +377,7 @@ npm run build
```

**Development (Project `.mcp.json`):**

```json
"localerrors": {
  "command": "d:\\nodejs\\node.exe",
@@ -378,12 +391,13 @@ npm run build
**Required Environment Variables:**

| Variable | Description |
| --------------- | -------------------------------------------- |
| `BUGSINK_URL` | Full URL to Bugsink instance (with protocol) |
| `BUGSINK_TOKEN` | 40-character hex API token |

**Important:**

- Variable is `BUGSINK_TOKEN`, NOT `BUGSINK_API_TOKEN`
- Do NOT use `npx` - the package is not on npm
- Use `http://127.0.0.1:8000` not `http://localhost:8000` for localhost
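Since the token must be exactly 40 lowercase hex characters, a quick format check can catch copy/paste mistakes before editing the config. A minimal sketch (the token value is a made-up placeholder, not a real credential):

```typescript
// Sketch: validate the shape of a BUGSINK_TOKEN value.
// Placeholder token for illustration - substitute your real token.
const token = '0123456789abcdef0123456789abcdef01234567';

// 40 lowercase hex characters, per the table above
const isValidBugsinkToken = /^[0-9a-f]{40}$/.test(token);

console.log(isValidBugsinkToken ? 'token format OK' : 'token format BAD');
// → token format OK
```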
@@ -424,11 +438,11 @@ Both commands output a 40-character lowercase hex token (e.g., `a609c2886daa4e1e
```typescript
// Production
mcp__bugsink__test_connection();
// Expected: "Connection successful: Connected successfully. Found N project(s)."

// Development
mcp__localerrors__test_connection();
// Expected: "Connection successful: Connected successfully. Found N project(s)."
```
@@ -462,6 +476,7 @@ postgresql://[user]:[password]@[host]:[port]/[database]
```

**Examples:**

```text
# Development (local container)
postgresql://postgres:postgres@127.0.0.1:5432/flyer_crawler_dev
@@ -472,41 +487,47 @@ postgresql://flyer_crawler_test:password@127.0.0.1:5432/flyer_crawler_test
### Database Information

| Property | Value |
| --------------------- | ------------------------ |
| Container | `flyer-crawler-postgres` |
| Image | `postgis/postgis:15-3.4` |
| Host (from Windows) | `127.0.0.1` |
| Host (from container) | `postgres` |
| Port | `5432` |
| Database | `flyer_crawler_dev` |
| User | `postgres` |
| Password | `postgres` |
### Usage Examples
```typescript
// List all tables
mcp__devdb__query({ sql: "SELECT tablename FROM pg_tables WHERE schemaname = 'public'" });

// Count records
mcp__devdb__query({ sql: 'SELECT COUNT(*) FROM flyers' });

// Check table structure
mcp__devdb__query({
  sql: "SELECT column_name, data_type FROM information_schema.columns WHERE table_name = 'flyers'",
});

// Find recent records
mcp__devdb__query({
  sql: 'SELECT id, name, created_at FROM flyers ORDER BY created_at DESC LIMIT 10',
});
```
### Prerequisites

1. **PostgreSQL container must be running:**

   ```bash
   podman ps | grep flyer-crawler-postgres
   ```

2. **Port 5432 must be mapped:**

   ```bash
   podman port flyer-crawler-postgres
   # Expected: 5432/tcp -> 0.0.0.0:5432
   ```
@@ -561,22 +582,24 @@ mcp__devdb__query({ sql: "SELECT id, name, created_at FROM flyers ORDER BY creat
```typescript
// List repositories
mcp__gitea-projectium__list_my_repos({ page: 1, pageSize: 20 });

// Get issue
mcp__gitea-projectium__get_issue_by_index({
  owner: 'username',
  repo: 'repository-name',
  index: 42,
});

// Create issue
mcp__gitea-projectium__create_issue({
  owner: 'username',
  repo: 'repository-name',
  title: 'Bug: Something is broken',
  body: '## Description\n\nSteps to reproduce...',
});
```
---
@@ -634,12 +657,14 @@ mcp__gitea-projectium__create_issue({
### Localhost Servers Not Loading

**Symptoms:**

- `mcp__localerrors__*` or `mcp__devdb__*` tools not available
- No error messages in logs
- Server silently skipped during startup

**Root Cause:**

Claude Code's global `settings.json` has issues loading localhost stdio MCP servers on Windows. The exact cause may be related to:

- Multiple servers using the same underlying package
- Localhost URL filtering
- Windows-specific MCP loader bugs
@@ -648,12 +673,14 @@ Claude Code's global `settings.json` has issues loading localhost stdio MCP serv
Use **project-level `.mcp.json`** for all localhost MCP servers. This bypasses the global config loader entirely.

**Working Pattern:**

- Global `settings.json`: Production HTTPS servers
- Project `.mcp.json`: Localhost HTTP servers

### Server Name Collision

**Symptoms:**

- Second server with similar name never starts
- No error logged - server silently filtered out
@@ -662,6 +689,7 @@ Claude Code may skip MCP servers when names share prefixes (e.g., `bugsink` and
**Solution:**

Use completely distinct names:

- `bugsink` for production
- `localerrors` for development (NOT `bugsink-dev` or `devbugsink`)
@@ -670,11 +698,13 @@ Use completely distinct names:
**Error:** `Connection timed out after 30000ms`

**Causes:**

- Server takes too long to start
- npx download is slow
- Server crashes during initialization

**Solutions:**

1. Move important servers earlier in config
2. Use pre-installed packages instead of npx:

   ```json
@@ -687,13 +717,14 @@ Use completely distinct names:
**Common Mistakes:**

| Wrong | Correct |
| ----------------------- | ----------------------- |
| `BUGSINK_API_TOKEN` | `BUGSINK_TOKEN` |
| `http://localhost:8000` | `http://127.0.0.1:8000` |

**Verification:**

Test server manually with environment variables:

```bash
cd d:\gitea\bugsink-mcp
set BUGSINK_URL=http://127.0.0.1:8000
@@ -702,6 +733,7 @@ node dist/index.js
```

Expected output:

```
Bugsink MCP server started
Connected to: http://127.0.0.1:8000
@@ -712,16 +744,19 @@ Connected to: http://127.0.0.1:8000
**Solutions:**

1. Check container is running:

   ```bash
   podman ps | grep flyer-crawler-postgres
   ```

2. Verify port mapping:

   ```bash
   podman port flyer-crawler-postgres
   ```

3. Test connection:

   ```bash
   podman exec flyer-crawler-postgres psql -U postgres -d flyer_crawler_dev -c "SELECT 1"
   ```
@@ -734,11 +769,13 @@ Connected to: http://127.0.0.1:8000
### Verifying Configuration

**List loaded servers:**

```bash
claude mcp list
```

**Check debug logs (Windows):**

```text
C:\Users\<username>\.claude\debug\*.txt
```
@@ -758,6 +795,7 @@ Look for MCP server startup messages. Missing servers indicate configuration pro
### 2. Order Servers by Importance

Place essential servers first in configuration:

1. `memory` - Knowledge persistence
2. `filesystem` - File access
3. `podman` - Container management
@@ -780,6 +818,7 @@ For faster startup, avoid npx and use direct node execution:
### 4. Disable Instead of Delete

Use `"disabled": true` to troubleshoot without losing configuration:

```json
"problem-server": {
  "command": "...",
@@ -790,6 +829,7 @@ Use `"disabled": true` to troubleshoot without losing configuration:
### 5. Test Manually First

Before adding to config, verify server works:

```bash
cd /path/to/mcp-server
set ENV_VAR=value
@@ -812,31 +852,31 @@ MCP configuration changes require a full VS Code restart (not just window reload
### Available MCP Packages

| Server | Package/Source | npm? |
| ------------------- | -------------------------------------------------- | ---------------------- |
| memory | `@modelcontextprotocol/server-memory` | Yes |
| filesystem | `@modelcontextprotocol/server-filesystem` | Yes |
| redis | `@modelcontextprotocol/server-redis` | Yes |
| postgres | `@modelcontextprotocol/server-postgres` | Yes |
| sequential-thinking | `@modelcontextprotocol/server-sequential-thinking` | Yes |
| podman | `podman-mcp-server` | Yes |
| gitea | `gitea-mcp` (binary) | No |
| bugsink | `j-shelfwood/bugsink-mcp` | No (build from source) |
| sentry | `@sentry/mcp-server` | Yes |
| playwright | `@anthropics/mcp-server-playwright` | Yes |

### Common Tool Prefixes

| Server | Tool Prefix | Example |
| -------------- | ------------------------- | -------------------------------------- |
| Memory | `mcp__memory__` | `mcp__memory__read_graph` |
| Filesystem | `mcp__filesystem__` | `mcp__filesystem__read_file` |
| Podman | `mcp__podman__` | `mcp__podman__container_list` |
| Redis | `mcp__redis__` | `mcp__redis__get` |
| Bugsink (prod) | `mcp__bugsink__` | `mcp__bugsink__list_issues` |
| Bugsink (dev) | `mcp__localerrors__` | `mcp__localerrors__list_issues` |
| PostgreSQL | `mcp__devdb__` | `mcp__devdb__query` |
| Gitea | `mcp__gitea-projectium__` | `mcp__gitea-projectium__list_my_repos` |
---
@@ -849,4 +889,4 @@ MCP configuration changes require a full VS Code restart (not just window reload
---

_Last updated: January 2026_
@@ -328,7 +328,9 @@ podman exec flyer-crawler-postgres psql -U postgres -d flyer_crawler_dev -c "SEL
Claude Code version 2.1.15 introduced a regression that causes crashes on CPUs that do not support AVX (Advanced Vector Extensions) instructions. The error manifests as:

```
Illegal instruction (core dumped)
```

or similar AVX-related illegal instruction errors when the extension tries to start.
@@ -346,8 +348,10 @@ The VS Code right-click menu option "Disable Auto Update" for extensions may be
Press `Ctrl+Shift+P` and type "Preferences: Open User Settings (JSON)" or manually edit:

```
C:\Users\<username>\AppData\Roaming\Code\User\settings.json
```
**Step 2: Add the Extension to Ignore List**
@@ -357,7 +361,7 @@ Add the following setting to your `settings.json`:
```json
{
  "extensions.ignoreAutoUpdate": ["anthropic.claude-code"]
}
```

If you already have other settings, add it within the existing JSON object:
@@ -394,6 +398,7 @@ To verify the setting is working:
Once Anthropic releases a fixed version:

1. **Remove the ignore setting** from `settings.json`:

   ```json
   // Remove or comment out:
   // "extensions.ignoreAutoUpdate": ["anthropic.claude-code"]
   ```
@@ -430,7 +435,7 @@ To verify which version is installed:
3. The version number appears below the extension name
4. Or click on the extension to see full details including version history

```
## Security Notes
@@ -463,7 +468,8 @@ The configuration files (`mcp.json` and `settings.json`) contain sensitive acces
1. Check Podman machine status:

   ```powershell
   podman machine list
   ```

2. Ensure Podman Desktop is running
3. Verify Docker socket is accessible:

   ```powershell
@@ -17,8 +17,7 @@ import { logger } from '../services/logger.server';
// Test: https://flyer-crawler-test.projectium.com
// Production: https://flyer-crawler.projectium.com
const BASE_URL =
  process.env.FLYER_BASE_URL || process.env.NODE_ENV === 'production'
    ? 'https://flyer-crawler.projectium.com'
    : process.env.NODE_ENV === 'test'
      ? 'https://flyer-crawler-test.projectium.com'
@@ -1,7 +1,7 @@
// src/hooks/mutations/useAuthMutations.ts
import { useMutation } from '@tanstack/react-query';
import * as apiClient from '../../services/apiClient';
import { notifyError } from '../../services/notificationService';
import type { UserProfile } from '../../types';
import { logger } from '../../services/logger.client';
@@ -48,7 +48,10 @@ export const useLoginMutation = () => {
logger.debug('[useLoginMutation] result.data.token', { token: result.data?.token });
// The API returns {success, data: {userprofile, token}}, so extract the data
const extracted = result.data;
logger.info('[useLoginMutation] Returning extracted data', {
  hasUserprofile: !!extracted?.userprofile,
  hasToken: !!extracted?.token,
});
return extracted;
},
onError: (error: Error) => {