Compare commits

10 Commits

Author         SHA1        Message                                                Date
Gitea Actions  e5cdb54308  ci: Bump version to 0.12.13 [skip ci]                  2026-01-24 02:48:50 +05:00
               a3f212ff81  Primary Issue: TZ Environment Variable Breaking Tests  2026-01-23 13:40:48 -08:00
                           (Deploy to Test Environment / deploy-to-test (push): successful in 18m47s)
Gitea Actions  de263f74b0  ci: Bump version to 0.12.12 [skip ci]                  2026-01-24 00:30:16 +05:00
               a71e41302b  no TZ in tests - who knew?                             2026-01-23 11:28:45 -08:00
                           (Deploy to Test Environment / deploy-to-test (push): successful in 18m35s)
Gitea Actions  3575803252  ci: Bump version to 0.12.11 [skip ci]                  2026-01-23 12:40:09 +05:00
               d03900cefe  set PST as common time zone for log matching ease      2026-01-22 23:38:45 -08:00
                           (Deploy to Test Environment / deploy-to-test (push): successful in 19m4s)
Gitea Actions  6d49639845  ci: Bump version to 0.12.10 [skip ci]                  2026-01-23 10:59:29 +05:00
               d4543cf4b9  Bugsink Fixes                                          2026-01-22 21:55:18 -08:00
                           (Deploy to Test Environment / deploy-to-test (push): successful in 19m12s)
Gitea Actions  4f08238698  ci: Bump version to 0.12.9 [skip ci]                   2026-01-23 10:49:32 +05:00
               38b35f87aa  Bugsink Fixes                                          2026-01-22 21:48:32 -08:00
                           (Deploy to Test Environment / deploy-to-test (push): cancelled)
21 changed files with 875 additions and 130 deletions

View File

@@ -117,7 +117,9 @@
"Bash(git -C \"C:\\\\Users\\\\games3\\\\.claude\\\\plugins\\\\marketplaces\\\\claude-plugins-official\" fetch --dry-run -v)",
"mcp__localerrors__get_project",
"mcp__localerrors__get_issue",
"mcp__localerrors__get_event"
"mcp__localerrors__get_event",
"mcp__localerrors__list_teams",
"WebSearch"
]
},
"enabledMcpjsonServers": [

View File

@@ -94,11 +94,18 @@ WORKER_LOCK_DURATION=120000
# Error Tracking (ADR-015)
# ===================
# Sentry-compatible error tracking via Bugsink (self-hosted)
# DSNs are created in Bugsink UI at http://localhost:8000 (dev) or your production URL
# Backend DSN - for Express/Node.js errors
SENTRY_DSN=
# Frontend DSN - for React/browser errors (uses VITE_ prefix)
VITE_SENTRY_DSN=
# DSNs are created in Bugsink UI at https://localhost:8443 (dev) or your production URL
#
# Dev container projects:
# - Project 1: Backend API (Dev) - receives Pino, PostgreSQL errors
# - Project 2: Frontend (Dev) - receives browser errors via Sentry SDK
# - Project 4: Infrastructure (Dev) - receives Redis, NGINX, Vite errors
#
# Backend DSN - for Express/Node.js errors (internal container URL)
SENTRY_DSN=http://<key>@localhost:8000/1
# Frontend DSN - for React/browser errors (uses nginx proxy for browser access)
# Note: Browsers cannot reach localhost:8000 directly, so we use the nginx proxy at /bugsink-api/
VITE_SENTRY_DSN=https://<key>@localhost/bugsink-api/2
# Environment name for error grouping (defaults to NODE_ENV)
SENTRY_ENVIRONMENT=development
VITE_SENTRY_ENVIRONMENT=development

View File

@@ -123,23 +123,30 @@ The dev container now matches production by using PM2 for process management.
### Log Aggregation (ADR-050)
All logs flow to Bugsink via Logstash:
All logs flow to Bugsink via Logstash with 3-project routing:
| Source | Log Location | Status |
| ----------------- | --------------------------------- | ------ |
| Backend (Pino) | `/var/log/pm2/api-*.log` | Active |
| Worker (Pino) | `/var/log/pm2/worker-*.log` | Active |
| Vite | `/var/log/pm2/vite-*.log` | Active |
| PostgreSQL | `/var/log/postgresql/*.log` | Active |
| Redis | `/var/log/redis/redis-server.log` | Active |
| NGINX | `/var/log/nginx/*.log` | Active |
| Frontend (Sentry) | Browser -> Bugsink SDK | Active |
| Source | Log Location | Bugsink Project |
| ----------------- | --------------------------------- | ------------------ |
| Backend (Pino) | `/var/log/pm2/api-*.log` | Backend API (1) |
| Worker (Pino) | `/var/log/pm2/worker-*.log` | Backend API (1) |
| PostgreSQL | `/var/log/postgresql/*.log` | Backend API (1) |
| Vite | `/var/log/pm2/vite-*.log` | Infrastructure (4) |
| Redis | `/var/log/redis/redis-server.log` | Infrastructure (4) |
| NGINX | `/var/log/nginx/*.log` | Infrastructure (4) |
| Frontend (Sentry) | Browser -> nginx proxy | Frontend (2) |
**Bugsink Projects (Dev Container)**:
- Project 1: Backend API (Dev) - Application errors
- Project 2: Frontend (Dev) - Browser errors via nginx proxy
- Project 4: Infrastructure (Dev) - Redis, NGINX, Vite errors
**Key Files**:
- `ecosystem.dev.config.cjs` - PM2 development configuration
- `scripts/dev-entrypoint.sh` - Container startup script
- `docker/logstash/bugsink.conf` - Logstash pipeline configuration
- `docker/nginx/dev.conf` - NGINX config with Bugsink API proxy
**Full Dev Container Guide**: See [docs/development/DEV-CONTAINER.md](docs/development/DEV-CONTAINER.md)
@@ -215,6 +222,7 @@ Common issues with solutions:
4. **Filename collisions** - Multer predictable names → Use `${Date.now()}-${Math.round(Math.random() * 1e9)}`
5. **Response format mismatches** - API format changes → Log response bodies, update assertions
6. **External service failures** - PM2/Redis unavailable → try/catch with graceful degradation
7. **TZ environment variable breaks async hooks** - `TZ=America/Los_Angeles` causes `RangeError: Invalid triggerAsyncId value: NaN` → Tests now explicitly set `TZ=` (empty) in package.json scripts
**Full Details**: See test issues section at end of this document or [docs/development/TESTING.md](docs/development/TESTING.md)
@@ -370,3 +378,28 @@ API formats change: `data.jobId` vs `data.job.id`, nested vs flat, string vs num
PM2/Redis health checks fail when unavailable.
**Solution**: try/catch with graceful degradation or mock
### 7. TZ Environment Variable Breaking Async Hooks
**Problem**: When `TZ=America/Los_Angeles` (or other timezone values) is set in the environment, Node.js async_hooks module can produce `RangeError: Invalid triggerAsyncId value: NaN`. This breaks React Testing Library's `render()` function which uses async hooks internally.
**Root Cause**: Setting `TZ` to certain timezone values interferes with Node.js's internal async tracking mechanism, causing invalid async IDs to be generated.
**Symptoms**:
```text
RangeError: Invalid triggerAsyncId value: NaN
process.env.NODE_ENV.queueSeveralMicrotasks node_modules/react/cjs/react.development.js:751:15
process.env.NODE_ENV.exports.act node_modules/react/cjs/react.development.js:886:11
node_modules/@testing-library/react/dist/act-compat.js:46:25
renderRoot node_modules/@testing-library/react/dist/pure.js:189:26
```
**Solution**: Explicitly unset `TZ` in all test scripts by adding `TZ=` (empty value) to cross-env:
```json
"test:unit": "cross-env NODE_ENV=test TZ= tsx ..."
"test:integration": "cross-env NODE_ENV=test TZ= tsx ..."
```
**Context**: This issue was introduced in commit `d03900c` which added `TZ: 'America/Los_Angeles'` to PM2 ecosystem configs for consistent log timestamps in production/dev environments. Tests must explicitly override this to prevent the async hooks error.
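To verify the fix end to end, one can compare a direct vitest run with `TZ` set against the patched npm script (a sketch; the command pieces are taken from the `package.json` scripts in this change):

```bash
# Reproduce the failure (sketch): invoke vitest directly with TZ set,
# bypassing the TZ= override that the npm scripts now apply
npx cross-env NODE_ENV=test TZ=America/Los_Angeles tsx ./node_modules/vitest/vitest.mjs run --project unit -c vite.config.ts

# Confirm the fix: the npm script forces TZ= (empty) and should pass
npm run test:unit
```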

View File

@@ -174,6 +174,21 @@ BUGSINK = {\n\
}\n\
\n\
ALLOWED_HOSTS = deduce_allowed_hosts(BUGSINK["BASE_URL"])\n\
# Also allow 127.0.0.1 access (both localhost and 127.0.0.1 should work)\n\
if "127.0.0.1" not in ALLOWED_HOSTS:\n\
ALLOWED_HOSTS.append("127.0.0.1")\n\
if "localhost" not in ALLOWED_HOSTS:\n\
ALLOWED_HOSTS.append("localhost")\n\
\n\
# CSRF Trusted Origins (Django 4.0+ requires full origin for HTTPS POST requests)\n\
# This fixes "CSRF verification failed" errors when accessing Bugsink via HTTPS\n\
# Both localhost and 127.0.0.1 must be trusted to support different access patterns\n\
CSRF_TRUSTED_ORIGINS = [\n\
"https://localhost:8443",\n\
"https://127.0.0.1:8443",\n\
"http://localhost:8000",\n\
"http://127.0.0.1:8000",\n\
]\n\
\n\
# Console email backend for dev\n\
EMAIL_BACKEND = "bugsink.email_backends.QuietConsoleEmailBackend"\n\

View File

@@ -57,6 +57,8 @@ services:
- '8000:8000' # Bugsink error tracking HTTP (ADR-015)
- '8443:8443' # Bugsink error tracking HTTPS (ADR-015)
environment:
# Timezone: PST (America/Los_Angeles) for consistent log timestamps
- TZ=America/Los_Angeles
# Core settings
- NODE_ENV=development
# Database - use service name for Docker networking
@@ -122,6 +124,10 @@ services:
ports:
- '5432:5432'
environment:
# Timezone: PST (America/Los_Angeles) for consistent log timestamps
TZ: America/Los_Angeles
# PostgreSQL timezone setting (used by log_timezone and timezone parameters)
PGTZ: America/Los_Angeles
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: flyer_crawler_dev
@@ -142,6 +148,8 @@ services:
postgres
-c config_file=/var/lib/postgresql/data/postgresql.conf
-c hba_file=/var/lib/postgresql/data/pg_hba.conf
-c timezone=America/Los_Angeles
-c log_timezone=America/Los_Angeles
-c log_min_messages=notice
-c client_min_messages=notice
-c logging_collector=on
@@ -175,6 +183,9 @@ services:
user: root
ports:
- '6379:6379'
environment:
# Timezone: PST (America/Los_Angeles) for consistent log timestamps
TZ: America/Los_Angeles
volumes:
- redis_data:/data
# Create log volume for Logstash access (ADR-050)

View File

@@ -12,9 +12,18 @@
# - NGINX logs (/var/log/nginx/*.log) - Access and error logs
# - Redis logs (/var/log/redis/*.log) - Via shared volume (ADR-050)
#
# Bugsink Projects:
# - Project 1: Backend API (Dev) - Pino errors, PostgreSQL errors
# Bugsink Projects (3-project architecture):
# - Project 1: Backend API (Dev) - Pino/PM2 app errors, PostgreSQL errors
# DSN Key: cea01396c56246adb5878fa5ee6b1d22
# - Project 2: Frontend (Dev) - Configured via Sentry SDK in browser
# DSN Key: d92663cb73cf4145b677b84029e4b762
# - Project 4: Infrastructure (Dev) - Redis, NGINX, PM2 operational logs
# DSN Key: 14e8791da3d347fa98073261b596cab9
#
# Routing Logic:
# - Backend logs (type: pm2_api, pm2_worker, pino, postgres) -> Project 1
# - Infrastructure logs (type: redis, nginx_error, nginx_5xx) -> Project 4
# - Vite errors (type: pm2_vite with errors) -> Project 4 (build tooling)
#
# Related Documentation:
# - docs/adr/0050-postgresql-function-observability.md
@@ -112,7 +121,8 @@ input {
# ============================================================================
# Captures PostgreSQL log output including fn_log() structured JSON messages.
# PostgreSQL is configured to write logs to /var/log/postgresql/ (shared volume).
# Log format: "2026-01-22 00:00:00 UTC [5724] postgres@flyer_crawler_dev LOG: message"
# Log format: "2026-01-22 14:30:00 PST [5724] postgres@flyer_crawler_dev LOG: message"
# Note: Timestamps are in PST (America/Los_Angeles) timezone as configured in compose.dev.yml
file {
path => "/var/log/postgresql/*.log"
type => "postgres"
@@ -217,10 +227,11 @@ filter {
# PostgreSQL Log Processing (ADR-050)
# ============================================================================
# PostgreSQL log format in dev container:
# "2026-01-22 00:00:00 UTC [5724] postgres@flyer_crawler_dev LOG: message"
# "2026-01-22 07:06:03 UTC [19851] postgres@flyer_crawler_dev ERROR: column "id" does not exist"
# "2026-01-22 14:30:00 PST [5724] postgres@flyer_crawler_dev LOG: message"
# "2026-01-22 15:06:03 PST [19851] postgres@flyer_crawler_dev ERROR: column "id" does not exist"
# Note: Timestamps are in PST (America/Los_Angeles) timezone
if [type] == "postgres" {
# Parse PostgreSQL log prefix with UTC timezone
# Parse PostgreSQL log prefix with timezone (PST in dev, may vary in prod)
grok {
match => { "message" => "%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME} %{WORD:pg_timezone} \[%{POSINT:pg_pid}\] %{DATA:pg_user}@%{DATA:pg_database} %{WORD:pg_level}: ?%{GREEDYDATA:pg_message}" }
tag_on_failure => ["_postgres_grok_failure"]
@@ -344,26 +355,56 @@ filter {
}
# ============================================================================
# Generate Sentry Event ID for all errors
# Generate Sentry Event ID and Ensure Required Fields for all errors
# ============================================================================
# CRITICAL: sentry_level MUST be set for all errors before output.
# Bugsink's PostgreSQL schema limits level to varchar(7), so valid values are:
# fatal, error, warning, info, debug (all <= 7 chars)
# If sentry_level is not set, the literal "%{sentry_level}" (16 chars) is sent,
# causing PostgreSQL insertion failures.
# ============================================================================
if "error" in [tags] {
# Use Ruby for robust field handling - handles all edge cases
ruby {
code => '
require "securerandom"
event.set("sentry_event_id", SecureRandom.hex(16))
'
}
# Ensure error_message has a fallback value
if ![error_message] {
mutate { add_field => { "error_message" => "%{message}" } }
# Generate unique event ID for Sentry
event.set("sentry_event_id", SecureRandom.hex(16))
# =====================================================================
# CRITICAL: Validate and set sentry_level
# =====================================================================
# Valid Sentry levels (max 7 chars for Bugsink PostgreSQL schema):
# fatal, error, warning, info, debug
# Default to "error" if missing, empty, or invalid.
# =====================================================================
valid_levels = ["fatal", "error", "warning", "info", "debug"]
current_level = event.get("sentry_level")
if current_level.nil? || current_level.to_s.strip.empty? || !valid_levels.include?(current_level.to_s.downcase)
event.set("sentry_level", "error")
else
# Normalize to lowercase
event.set("sentry_level", current_level.to_s.downcase)
end
# =====================================================================
# Ensure error_message has a fallback value
# =====================================================================
error_msg = event.get("error_message")
if error_msg.nil? || error_msg.to_s.strip.empty?
fallback_msg = event.get("message") || event.get("msg") || "Unknown error"
event.set("error_message", fallback_msg.to_s)
end
'
}
}
}
output {
# ============================================================================
# Forward Errors to Bugsink (Backend API Project)
# Forward Errors to Bugsink (Project Routing)
# ============================================================================
# Bugsink uses Sentry-compatible API. Events must include:
# - event_id: 32 hex characters (UUID without dashes)
@@ -373,9 +414,50 @@ output {
# - platform: "node" for backend, "javascript" for frontend
#
# Authentication via X-Sentry-Auth header with project's public key.
# Dev container DSN: http://cea01396c56246adb5878fa5ee6b1d22@localhost:8000/1
#
# Project Routing:
# - Project 1 (Backend): Pino app logs, PostgreSQL errors
# - Project 4 (Infrastructure): Redis, NGINX, Vite build errors
# ============================================================================
if "error" in [tags] {
# ============================================================================
# Infrastructure Errors -> Project 4
# ============================================================================
# Redis warnings/errors, NGINX errors, and Vite build errors go to
# the Infrastructure project for separation from application code errors.
if "error" in [tags] and ([type] == "redis" or [type] == "nginx_error" or [type] == "nginx_access" or [type] == "pm2_vite") {
http {
url => "http://localhost:8000/api/4/store/"
http_method => "post"
format => "json"
headers => {
"X-Sentry-Auth" => "Sentry sentry_key=14e8791da3d347fa98073261b596cab9, sentry_version=7"
"Content-Type" => "application/json"
}
mapping => {
"event_id" => "%{sentry_event_id}"
"timestamp" => "%{@timestamp}"
"level" => "%{sentry_level}"
"platform" => "other"
"logger" => "%{type}"
"message" => "%{error_message}"
"extra" => {
"hostname" => "%{[host][name]}"
"source_type" => "%{type}"
"tags" => "%{tags}"
"original_message" => "%{message}"
"project" => "infrastructure"
}
}
}
}
# ============================================================================
# Backend Application Errors -> Project 1
# ============================================================================
# Pino application logs (API, Worker), PostgreSQL function errors, and
# native PostgreSQL errors go to the Backend API project.
else if "error" in [tags] and ([type] in ["pm2_api", "pm2_worker", "pino", "postgres"]) {
http {
url => "http://localhost:8000/api/1/store/"
http_method => "post"
@@ -384,7 +466,6 @@ output {
"X-Sentry-Auth" => "Sentry sentry_key=cea01396c56246adb5878fa5ee6b1d22, sentry_version=7"
"Content-Type" => "application/json"
}
# Transform event to Sentry format using regular fields (not @metadata)
mapping => {
"event_id" => "%{sentry_event_id}"
"timestamp" => "%{@timestamp}"
@@ -397,6 +478,38 @@ output {
"source_type" => "%{type}"
"tags" => "%{tags}"
"original_message" => "%{message}"
"project" => "backend"
}
}
}
}
# ============================================================================
# Fallback: Any other errors -> Project 1
# ============================================================================
# Catch-all for any errors that don't match specific routing rules.
else if "error" in [tags] {
http {
url => "http://localhost:8000/api/1/store/"
http_method => "post"
format => "json"
headers => {
"X-Sentry-Auth" => "Sentry sentry_key=cea01396c56246adb5878fa5ee6b1d22, sentry_version=7"
"Content-Type" => "application/json"
}
mapping => {
"event_id" => "%{sentry_event_id}"
"timestamp" => "%{@timestamp}"
"level" => "%{sentry_level}"
"platform" => "node"
"logger" => "%{type}"
"message" => "%{error_message}"
"extra" => {
"hostname" => "%{[host][name]}"
"source_type" => "%{type}"
"tags" => "%{tags}"
"original_message" => "%{message}"
"project" => "backend-fallback"
}
}
}

View File

@@ -60,6 +60,37 @@ server {
proxy_set_header X-Forwarded-Proto $scheme;
}
# ============================================================================
# Bugsink Sentry API Proxy (for frontend error reporting)
# ============================================================================
# The frontend Sentry SDK cannot reach localhost:8000 directly from the browser
# because port 8000 is only accessible within the container network.
# This proxy allows the browser to send errors to https://localhost/bugsink-api/
# which NGINX forwards to the Bugsink container on port 8000.
#
# Frontend DSN format: https://localhost/bugsink-api/<project_id>
# Example: https://localhost/bugsink-api/2 for Frontend (Dev) project
#
# The Sentry SDK sends POST requests to /bugsink-api/<project>/store/
# This proxy strips /bugsink-api and forwards to http://localhost:8000/api/
# ============================================================================
location /bugsink-api/ {
proxy_pass http://localhost:8000/api/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Allow large error payloads with stack traces
client_max_body_size 10M;
# Timeouts for error reporting (should be fast)
proxy_connect_timeout 10s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
}
# Proxy WebSocket connections for real-time notifications
location /ws {
proxy_pass http://localhost:3001;

View File

@@ -2,6 +2,10 @@
# This file is mounted into the PostgreSQL container to enable structured logging
# from database functions via fn_log()
# Timezone: PST (America/Los_Angeles) for consistent log timestamps
timezone = 'America/Los_Angeles'
log_timezone = 'America/Los_Angeles'
# Enable logging to files for Logstash pickup
logging_collector = on
log_destination = 'stderr'

View File

@@ -28,9 +28,28 @@ The `.env.local` file uses `localhost` while `compose.dev.yml` uses `127.0.0.1`.
## HTTPS Setup
- Self-signed certificates auto-generated with mkcert on container startup
- CSRF Protection: Django configured with `SECURE_PROXY_SSL_HEADER` to trust `X-Forwarded-Proto` from nginx
- CSRF Protection: Django configured with `CSRF_TRUSTED_ORIGINS` for both `localhost` and `127.0.0.1` (see below)
- HTTPS proxy: nginx on port 8443 proxies to Bugsink on port 8000
- HTTPS is for UI access only; the Sentry SDK sends events over HTTP directly
### CSRF Configuration
Django 4.0+ requires `CSRF_TRUSTED_ORIGINS` for HTTPS POST requests. The Bugsink configuration (`Dockerfile.dev`) includes:
```python
CSRF_TRUSTED_ORIGINS = [
"https://localhost:8443",
"https://127.0.0.1:8443",
"http://localhost:8000",
"http://127.0.0.1:8000",
]
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
```
**Both hostnames are required** because browsers treat `localhost` and `127.0.0.1` as different origins.
If you get "CSRF verification failed" errors, see [BUGSINK-SETUP.md](tools/BUGSINK-SETUP.md#csrf-verification-failed) for troubleshooting.
## Isolation Benefits
- Dev errors stay local, don't pollute production/test dashboards

View File

@@ -175,29 +175,30 @@ npm run dev:pm2:logs
### Log Flow Architecture (ADR-050)
All application logs flow through Logstash to Bugsink:
All application logs flow through Logstash to Bugsink using a 3-project architecture:
```
+------------------+   +------------------+   +------------------+
|  PM2 Logs        |   |  PostgreSQL      |   |  Redis Logs      |
|  /var/log/pm2/   |   |  /var/log/       |   |  /var/log/redis/ |
+--------+---------+   |  postgresql/     |   +--------+---------+
         |             +--------+---------+            |
         |                      |                      |
         v                      v                      v
+------------------------------------------------------------------+
|                             LOGSTASH                             |
|                 /etc/logstash/conf.d/bugsink.conf                |
+------------------------------------------------------------------+
         |                      |                      |
         v                      v                      v
+------------------+   +------------------+   +------------------+
| Errors ->        |   | Operational ->   |   | NGINX Logs ->    |
| Bugsink API      |   | /var/log/        |   | /var/log/        |
| (Project 1)      |   | logstash/*.log   |   | logstash/*.log   |
+------------------+   +------------------+   +------------------+
```

```text
+------------------+   +------------------+   +------------------+
|  PM2 Logs        |   |  PostgreSQL      |   |  Redis/NGINX     |
|  /var/log/pm2/   |   |  /var/log/       |   |  /var/log/redis/ |
|  (API + Worker)  |   |  postgresql/     |   |  /var/log/nginx/ |
+--------+---------+   +--------+---------+   +--------+---------+
         |                      |                      |
         v                      v                      v
+------------------------------------------------------------------+
|                             LOGSTASH                             |
|                 /etc/logstash/conf.d/bugsink.conf                |
|                       (Routes by log type)                       |
+------------------------------------------------------------------+
         |                      |                      |
         v                      v                      v
+------------------+   +------------------+   +------------------+
| Backend API      |   | Frontend (Dev)   |   | Infrastructure   |
| (Project 1)      |   | (Project 2)      |   | (Project 4)      |
| - Pino errors    |   | - Browser SDK    |   | - Redis warnings |
| - PostgreSQL     |   |   (not Logstash) |   | - NGINX errors   |
+------------------+   +------------------+   | - Vite errors    |
                                              +------------------+
```
### Log Sources
@@ -231,8 +232,11 @@ podman exec flyer-crawler-dev curl -s localhost:9600/_node/stats/pipelines?prett
- **URL**: `https://localhost:8443`
- **Login**: `admin@localhost` / `admin`
- **Projects**:
- Project 1: Backend API (errors from Pino, PostgreSQL, Redis)
- Project 2: Frontend (errors from Sentry SDK in browser)
- Project 1: Backend API (Dev) - Pino app errors, PostgreSQL errors
- Project 2: Frontend (Dev) - Browser errors via Sentry SDK
- Project 4: Infrastructure (Dev) - Redis warnings, NGINX errors, Vite build errors
**Note**: Frontend DSN uses nginx proxy (`/bugsink-api/`) because browsers cannot reach `localhost:8000` directly. See [BUGSINK-SETUP.md](../tools/BUGSINK-SETUP.md#frontend-nginx-proxy) for details.
---
@@ -268,14 +272,41 @@ podman-compose -f compose.dev.yml down
Key environment variables are set in `compose.dev.yml`:
| Variable | Value | Purpose |
| ----------------- | ----------------------------- | -------------------- |
| `NODE_ENV` | `development` | Environment mode |
| `DB_HOST` | `postgres` | PostgreSQL hostname |
| `REDIS_URL` | `redis://redis:6379` | Redis connection URL |
| `FRONTEND_URL` | `https://localhost` | CORS origin |
| `SENTRY_DSN` | `http://...@127.0.0.1:8000/1` | Backend Bugsink DSN |
| `VITE_SENTRY_DSN` | `http://...@127.0.0.1:8000/2` | Frontend Bugsink DSN |
| Variable | Value | Purpose |
| ----------------- | ----------------------------- | --------------------------- |
| `TZ` | `America/Los_Angeles` | Timezone (PST) for all logs |
| `NODE_ENV` | `development` | Environment mode |
| `DB_HOST` | `postgres` | PostgreSQL hostname |
| `REDIS_URL` | `redis://redis:6379` | Redis connection URL |
| `FRONTEND_URL` | `https://localhost` | CORS origin |
| `SENTRY_DSN` | `http://...@127.0.0.1:8000/1` | Backend Bugsink DSN |
| `VITE_SENTRY_DSN` | `http://...@127.0.0.1:8000/2` | Frontend Bugsink DSN |
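To check what the running container actually received, a quick inspection helps (a sketch; container name as used elsewhere in this guide):

```bash
# Show timezone and Sentry-related variables as seen inside the app container
podman exec flyer-crawler-dev env | grep -E '^(TZ|NODE_ENV|SENTRY|VITE_SENTRY)'
```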
### Timezone Configuration
All dev container services are configured to use PST (America/Los_Angeles) timezone for consistent log timestamps:
| Service | Configuration | Notes |
| ---------- | ------------------------------------------------ | ------------------------------ |
| App | `TZ=America/Los_Angeles` in compose.dev.yml | Also set via dev-entrypoint.sh |
| PostgreSQL | `timezone` and `log_timezone` in postgres config | Logs timestamps in PST |
| Redis | `TZ=America/Los_Angeles` in compose.dev.yml | Alpine uses TZ env var |
| PM2 | `TZ` in ecosystem.dev.config.cjs | Pino timestamps use local time |
**Verifying Timezone**:
```bash
# Check container timezone
podman exec flyer-crawler-dev date
# Check PostgreSQL timezone
podman exec flyer-crawler-postgres psql -U postgres -c "SHOW timezone;"
# Check Redis log timestamps
MSYS_NO_PATHCONV=1 podman exec flyer-crawler-redis cat /var/log/redis/redis-server.log | head -5
```
**Note**: If you need UTC timestamps for production compatibility, set `TZ=UTC` in compose.dev.yml and restart the containers.
---

View File

@@ -101,6 +101,7 @@ MSYS_NO_PATHCONV=1 podman exec flyer-crawler-dev ls -la /var/log/redis/
| NGINX logs missing | Output directory | `ls -lh /var/log/logstash/nginx-access-*.log` |
| Redis logs missing | Shared volume | Dev: Check `redis_logs` volume mounted; Prod: Check `/var/log/redis/redis-server.log` exists |
| High disk usage | Log rotation | Verify `/etc/logrotate.d/logstash` configured |
| varchar(7) error | Level validation | Add Ruby filter to validate/normalize `sentry_level` before output |
## Related Documentation

View File

@@ -11,6 +11,7 @@ This runbook provides step-by-step diagnostics and solutions for common Logstash
| Wrong Bugsink project | Environment detection failed | Verify `pg_database` field extraction |
| 403 authentication error | Missing/wrong DSN key | Check `X-Sentry-Auth` header |
| 500 error from Bugsink | Invalid event format | Verify `event_id` and required fields |
| varchar(7) constraint | Unresolved `%{sentry_level}` | Add Ruby filter for level validation |
---
@@ -385,7 +386,88 @@ systemctl status logstash
---
### Issue 7: Log File Rotation Issues
### Issue 7: Level Field Constraint Violation (varchar(7))
**Symptoms:**
- Bugsink returns HTTP 500 errors
- PostgreSQL errors: `value too long for type character varying(7)`
- Events fail to insert when the literal `%{sentry_level}` string (16 characters) is sent as the level
**Root Cause:**
When Logstash cannot determine the log level (no error patterns matched), the `sentry_level` field remains as the unresolved placeholder `%{sentry_level}`. Bugsink's PostgreSQL schema has a `varchar(7)` constraint on the level field.
Valid Sentry levels (all <= 7 characters): `fatal`, `error`, `warning`, `info`, `debug`
**Diagnosis:**
```bash
# Check for HTTP 500 responses in Logstash logs
podman exec flyer-crawler-dev cat /var/log/logstash/logstash.log | grep "500"
# Check Bugsink for constraint violation errors
# Via MCP:
mcp__localerrors__list_issues({ project_id: 1, status: 'unresolved' })
```
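The failure can also be reproduced directly by posting a synthetic event whose level is the unresolved placeholder (a sketch; uses the Backend API dev key documented in `bugsink.conf`):

```bash
# The 16-char literal "%{sentry_level}" should trigger the varchar(7) error
curl -s -X POST "http://localhost:8000/api/1/store/" \
  -H "X-Sentry-Auth: Sentry sentry_key=cea01396c56246adb5878fa5ee6b1d22, sentry_version=7" \
  -H "Content-Type: application/json" \
  -d "{\"event_id\":\"$(openssl rand -hex 16)\",\"timestamp\":\"2026-01-23T00:00:00Z\",\"platform\":\"other\",\"level\":\"%{sentry_level}\",\"message\":\"level validation test\"}"
```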
**Solution:**
Add a Ruby filter block in `docker/logstash/bugsink.conf` to validate and normalize the `sentry_level` field before sending to Bugsink:
```ruby
# Add this AFTER all mutate filters that set sentry_level
# and BEFORE the output section
ruby {
code => '
level = event.get("sentry_level")
# Check if level is invalid (nil, empty, contains placeholder, or too long)
if level.nil? || level.to_s.empty? || level.to_s.include?("%{") || level.to_s.length > 7
# Default to "error" for error-tagged events, "info" otherwise
if event.get("tags")&.include?("error")
event.set("sentry_level", "error")
else
event.set("sentry_level", "info")
end
else
# Normalize to lowercase and validate
normalized = level.to_s.downcase
valid_levels = ["fatal", "error", "warning", "info", "debug"]
unless valid_levels.include?(normalized)
normalized = "error"
end
event.set("sentry_level", normalized)
end
'
}
```
**Key validations performed:**
1. Checks for nil or empty values
2. Detects unresolved placeholders (`%{...}`)
3. Enforces 7-character maximum length
4. Normalizes to lowercase
5. Validates against allowed Sentry levels
6. Defaults to "error" for error-tagged events, "info" otherwise
**Verification:**
```bash
# Restart Logstash
podman exec flyer-crawler-dev systemctl restart logstash
# Generate a test log that triggers the filter
podman exec flyer-crawler-dev pm2 restart flyer-crawler-api-dev
# Check no new HTTP 500 errors
podman exec flyer-crawler-dev cat /var/log/logstash/logstash.log | tail -50 | grep -E "(500|error)"
```
---
### Issue 8: Log File Rotation Issues
**Symptoms:**

View File

@@ -49,18 +49,24 @@ Bugsink is a lightweight, self-hosted error tracking platform that is fully comp
| Web UI | `https://localhost:8443` (nginx proxy) |
| Internal URL | `http://localhost:8000` (direct) |
| Credentials | `admin@localhost` / `admin` |
| Backend Project | Project ID 1 - `flyer-crawler-dev-backend` |
| Frontend Project | Project ID 2 - `flyer-crawler-dev-frontend` |
| Backend Project | Project ID 1 - `Backend API (Dev)` |
| Frontend Project | Project ID 2 - `Frontend (Dev)` |
| Infra Project | Project ID 4 - `Infrastructure (Dev)` |
| Backend DSN | `http://<key>@localhost:8000/1` |
| Frontend DSN | `http://<key>@localhost:8000/2` |
| Frontend DSN | `https://<key>@localhost/bugsink-api/2` (via nginx proxy) |
| Infra DSN | `http://<key>@localhost:8000/4` (Logstash only) |
| Database | `postgresql://bugsink:bugsink_dev_password@postgres:5432/bugsink` |
**Important:** The Frontend DSN uses an nginx proxy (`/bugsink-api/`) because the browser cannot reach `localhost:8000` directly (container-internal port). See [Frontend Nginx Proxy](#frontend-nginx-proxy) for details.
**Configuration Files:**
| File | Purpose |
| ----------------- | ----------------------------------------------------------------- |
| `compose.dev.yml` | Initial DSNs using `127.0.0.1:8000` (container startup) |
| `.env.local` | **OVERRIDES** compose.dev.yml with `localhost:8000` (app runtime) |
| File | Purpose |
| ------------------------------ | ------------------------------------------------------- |
| `compose.dev.yml` | Initial DSNs using `127.0.0.1:8000` (container startup) |
| `.env.local` | **OVERRIDES** compose.dev.yml (app runtime) |
| `docker/nginx/dev.conf` | Nginx proxy for Bugsink API (frontend error reporting) |
| `docker/logstash/bugsink.conf` | Log routing to Backend/Infrastructure projects |
**Note:** `.env.local` takes precedence over `compose.dev.yml` environment variables.
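To see which values win at runtime (a sketch; assumes the repo is mounted at `/app`, as in the Logstash input paths):

```bash
# Compare the DSNs defined in .env.local against the container environment
podman exec flyer-crawler-dev sh -c 'grep -E "^(VITE_)?SENTRY_DSN" /app/.env.local; env | grep SENTRY_DSN'
```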
@@ -360,75 +366,127 @@ const config = {
---
## Frontend Nginx Proxy
The frontend Sentry SDK runs in the browser, which cannot directly reach `localhost:8000` (the Bugsink container-internal port). To solve this, we use an nginx proxy.
### How It Works
```text
Browser --HTTPS--> https://localhost/bugsink-api/2/store/
|
v (nginx proxy)
http://localhost:8000/api/2/store/
|
v
Bugsink (internal)
```
### Nginx Configuration
Location: `docker/nginx/dev.conf`
```nginx
# Proxy Bugsink Sentry API for frontend error reporting
location /bugsink-api/ {
proxy_pass http://localhost:8000/api/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Allow large error payloads with stack traces
client_max_body_size 10M;
# Timeouts for error reporting
proxy_connect_timeout 10s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
}
```
### Frontend DSN Format
```bash
# .env.local
# Uses nginx proxy path instead of direct port
VITE_SENTRY_DSN=https://<key>@localhost/bugsink-api/2
```
### Testing Frontend Error Reporting
1. Open browser console at `https://localhost`
2. Trigger a test error:
```javascript
throw new Error('Test frontend error from browser');
```
3. Check Bugsink Frontend (Dev) project for the error
4. Verify browser console shows Sentry SDK activity (if VITE_SENTRY_DEBUG=true)
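The proxy path can also be exercised without a browser (a sketch; substitute the Frontend project's real key for `<frontend-key>`):

```bash
# -k accepts the self-signed dev certificate; a 2xx response with an event id
# means nginx forwarded the event to Bugsink project 2
curl -k -s -X POST "https://localhost/bugsink-api/2/store/" \
  -H "X-Sentry-Auth: Sentry sentry_key=<frontend-key>, sentry_version=7" \
  -H "Content-Type: application/json" \
  -d "{\"event_id\":\"$(openssl rand -hex 16)\",\"timestamp\":\"2026-01-23T00:00:00Z\",\"platform\":\"javascript\",\"level\":\"error\",\"message\":\"proxy smoke test\"}"
```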
---
## Logstash Integration
Logstash aggregates logs from multiple sources and forwards error patterns to Bugsink.
**Note:** See [ADR-015](../adr/0015-application-performance-monitoring-and-error-tracking.md) for the full architecture.
### 3-Project Architecture
Logstash routes errors to different Bugsink projects based on log source:
| Project | ID | Receives |
| -------------------- | --- | --------------------------------------------- |
| Backend API (Dev) | 1 | Pino app errors, PostgreSQL errors |
| Frontend (Dev) | 2 | Browser errors (via Sentry SDK, not Logstash) |
| Infrastructure (Dev) | 4 | Redis warnings, NGINX errors, Vite errors |
### Log Sources
| Source | Log Path | Error Detection |
| ---------- | ---------------------- | ------------------------- |
| Pino (app) | `/app/logs/*.log` | level >= 50 (error/fatal) |
| Redis | `/var/log/redis/*.log` | WARNING/ERROR log levels |
| PostgreSQL | (future) | ERROR/FATAL log levels |
| Source | Log Path | Project Destination | Error Detection |
| ---------- | --------------------------- | ------------------- | ------------------------- |
| PM2 API | `/var/log/pm2/api-*.log` | Backend (1) | level >= 50 (error/fatal) |
| PM2 Worker | `/var/log/pm2/worker-*.log` | Backend (1) | level >= 50 (error/fatal) |
| PM2 Vite | `/var/log/pm2/vite-*.log` | Infrastructure (4) | error keyword patterns |
| PostgreSQL | `/var/log/postgresql/*.log` | Backend (1) | ERROR/FATAL log levels |
| Redis | `/var/log/redis/*.log` | Infrastructure (4) | WARNING level (`#`) |
| NGINX | `/var/log/nginx/error.log` | Infrastructure (4) | error/crit/alert/emerg |
### Pipeline Configuration
**Location:** `/etc/logstash/conf.d/bugsink.conf`
**Location:** `/etc/logstash/conf.d/bugsink.conf` (or `docker/logstash/bugsink.conf` in project)

```conf
# === INPUTS ===
input {
  file {
    path => "/app/logs/*.log"
    codec => json
    type => "pino"
    tags => ["app"]
  }

  file {
    path => "/var/log/redis/*.log"
    type => "redis"
    tags => ["redis"]
  }
}

# === FILTERS ===
filter {
  if [type] == "pino" and [level] >= 50 {
    mutate { add_tag => ["error"] }
  }
  if [type] == "redis" {
    grok {
      match => { "message" => "%{POSINT:pid}:%{WORD:role} %{MONTHDAY} %{MONTH} %{TIME} %{WORD:loglevel} %{GREEDYDATA:redis_message}" }
    }
    if [loglevel] in ["WARNING", "ERROR"] {
      mutate { add_tag => ["error"] }
    }
  }
}

# === OUTPUT ===
output {
  if "error" in [tags] {
    http {
      url => "http://localhost:8000/api/store/"
      http_method => "post"
      format => "json"
    }
  }
}
```

The configuration:

1. **Inputs**: Reads from PM2 logs, PostgreSQL logs, Redis logs, NGINX logs
2. **Filters**: Detects errors and assigns tags based on log type
3. **Outputs**: Routes to appropriate Bugsink project based on log source

**Key Routing Logic:**

```ruby
# Infrastructure logs -> Project 4
if "error" in [tags] and ([type] == "redis" or [type] == "nginx_error" or [type] == "pm2_vite") {
  http { url => "http://localhost:8000/api/4/store/" ... }
}
# Backend logs -> Project 1
else if "error" in [tags] and ([type] in ["pm2_api", "pm2_worker", "pino", "postgres"]) {
  http { url => "http://localhost:8000/api/1/store/" ... }
}
```
### Benefits
1. **Secondary Capture Path**: Catches errors before SDK initialization
2. **Log-Based Errors**: Captures errors that don't throw exceptions
3. **Infrastructure Monitoring**: Redis connection issues, slow commands
4. **Historical Analysis**: Process existing log files
1. **Separation of Concerns**: Application errors separate from infrastructure issues
2. **Secondary Capture Path**: Catches errors before SDK initialization
3. **Log-Based Errors**: Captures errors that don't throw exceptions
4. **Infrastructure Monitoring**: Redis, NGINX, build tooling issues
5. **Historical Analysis**: Process existing log files
---
@@ -743,6 +801,228 @@ podman exec flyer-crawler-dev psql -U postgres -h postgres -c "\l" | grep bugsin
ssh root@projectium.com "cd /opt/bugsink && bugsink-manage check"
```
### PostgreSQL Sequence Out of Sync (Duplicate Key Errors)
**Symptoms:**
- Bugsink throws `duplicate key value violates unique constraint "projects_project_pkey"`
- Error detail shows: `Key (id)=(1) already exists`
- New projects or other entities fail to create
**Root Cause:**
PostgreSQL sequences can become out of sync with actual data after:
- Manual data insertion or database seeding
- Restoring from backup
- Copying data between environments
The sequence generates IDs that already exist in the table.
**Diagnosis:**
```bash
# Dev Container - Check sequence vs max ID
podman exec flyer-crawler-dev psql -U bugsink -h postgres -d bugsink -c "
SELECT
(SELECT MAX(id) FROM projects_project) as max_id,
(SELECT last_value FROM projects_project_id_seq) as seq_last_value,
CASE
WHEN (SELECT MAX(id) FROM projects_project) <= (SELECT last_value FROM projects_project_id_seq)
THEN 'OK'
ELSE 'OUT OF SYNC - Needs reset'
END as status;
"
# Production
ssh root@projectium.com "cd /opt/bugsink && bugsink-manage dbshell" <<< "
SELECT MAX(id) as max_id, (SELECT last_value FROM projects_project_id_seq) as seq_value FROM projects_project;
"
```
**Solution:**
Reset the sequence to the maximum existing ID:
```bash
# Dev Container
podman exec flyer-crawler-dev psql -U bugsink -h postgres -d bugsink -c "
SELECT setval('projects_project_id_seq', COALESCE((SELECT MAX(id) FROM projects_project), 1), true);
"
# Production
ssh root@projectium.com "cd /opt/bugsink && bugsink-manage dbshell" <<< "
SELECT setval('projects_project_id_seq', COALESCE((SELECT MAX(id) FROM projects_project), 1), true);
"
```
**Verification:**
After running the fix, verify:
```bash
# Next ID should be max_id + 1
podman exec flyer-crawler-dev psql -U bugsink -h postgres -d bugsink -c "
SELECT nextval('projects_project_id_seq') - 1 as current_seq_value;
"
```
**Prevention:**
When manually inserting data or restoring backups, always reset sequences:
```sql
-- Generic pattern for any table/sequence
SELECT setval('SEQUENCE_NAME', COALESCE((SELECT MAX(id) FROM TABLE_NAME), 1), true);
-- Common Bugsink sequences that may need reset:
SELECT setval('projects_project_id_seq', COALESCE((SELECT MAX(id) FROM projects_project), 1), true);
SELECT setval('teams_team_id_seq', COALESCE((SELECT MAX(id) FROM teams_team), 1), true);
SELECT setval('releases_release_id_seq', COALESCE((SELECT MAX(id) FROM releases_release), 1), true);
```
### Logstash Level Field Constraint Violation
**Symptoms:**
- Bugsink errors: `value too long for type character varying(7)`
- Errors in Backend API project from Logstash
- Log shows `%{sentry_level}` literal string being sent
**Root Cause:**
Logstash sends the literal placeholder `%{sentry_level}` (16 characters) to Bugsink when:
- No error pattern is detected in the log message
- The `sentry_level` field is not properly initialized
- Bugsink's `level` column has a `varchar(7)` constraint
Valid Sentry levels are: `fatal`, `error`, `warning`, `info`, `debug` (all <= 7 characters).
**Diagnosis:**
```bash
# Check for recent level constraint errors in Bugsink
# Via MCP:
mcp__localerrors__list_issues({ project_id: 1, status: 'unresolved' })
# Or check Logstash logs for HTTP 500 responses
podman exec flyer-crawler-dev cat /var/log/logstash/logstash.log | grep "500"
```
**Solution:**
The fix requires updating the Logstash configuration (`docker/logstash/bugsink.conf`) to:
1. Validate `sentry_level` is not nil, empty, or contains placeholder text
2. Set a default value of "error" for any error-tagged event without a valid level
3. Normalize levels to lowercase
**Key filter block (Ruby):**
```ruby
ruby {
code => '
level = event.get("sentry_level")
# Check if level is invalid (nil, empty, contains placeholder, or invalid value)
if level.nil? || level.to_s.empty? || level.to_s.include?("%{") || level.to_s.length > 7
# Default to "error" for error-tagged events, "info" otherwise
if event.get("tags")&.include?("error")
event.set("sentry_level", "error")
else
event.set("sentry_level", "info")
end
else
# Normalize to lowercase and validate
normalized = level.to_s.downcase
valid_levels = ["fatal", "error", "warning", "info", "debug"]
unless valid_levels.include?(normalized)
normalized = "error"
end
event.set("sentry_level", normalized)
end
'
}
```
**Verification:**
After applying the fix:
1. Restart Logstash: `podman exec flyer-crawler-dev systemctl restart logstash`
2. Generate a test error and verify it appears in Bugsink without level errors
3. Check no new "value too long" errors appear in the project
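A scripted version of those checks (a sketch; paths and process names as used earlier in this document):

```bash
# 1. Restart Logstash so the updated filter takes effect
podman exec flyer-crawler-dev systemctl restart logstash

# 2. Trigger fresh log output from the API process
podman exec flyer-crawler-dev pm2 restart flyer-crawler-api-dev

# 3. Look for new level-related failures; no output means the fix held
podman exec flyer-crawler-dev tail -n 50 /var/log/logstash/logstash.log | grep -E "(500|value too long)"
```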
### CSRF Verification Failed
**Symptoms:** "CSRF verification failed. Request aborted." error when performing actions in Bugsink UI (resolving issues, changing settings, etc.)
**Root Cause:**
Django 4.0+ requires `CSRF_TRUSTED_ORIGINS` to be explicitly configured for HTTPS POST requests. The error occurs because:
1. Bugsink is accessed via `https://localhost:8443` (nginx HTTPS proxy)
2. Django's CSRF protection validates the `Origin` header against `CSRF_TRUSTED_ORIGINS`
3. Without explicit configuration, Django rejects POST requests from HTTPS origins
**Why localhost vs 127.0.0.1 Matters:**
- `localhost` and `127.0.0.1` are treated as DIFFERENT origins by browsers
- If you access Bugsink via `https://localhost:8443`, Django must trust `https://localhost:8443`
- If you access via `https://127.0.0.1:8443`, Django must trust `https://127.0.0.1:8443`
- The fix includes BOTH to allow either access pattern
**Configuration (Already Applied):**
The Bugsink Django configuration in `Dockerfile.dev` includes:
```python
# CSRF Trusted Origins (Django 4.0+ requires full origin for HTTPS POST requests)
CSRF_TRUSTED_ORIGINS = [
"https://localhost:8443",
"https://127.0.0.1:8443",
"http://localhost:8000",
"http://127.0.0.1:8000",
]
# HTTPS proxy support (nginx reverse proxy on port 8443)
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
```
**Verification:**
```bash
# Verify CSRF_TRUSTED_ORIGINS is configured
podman exec flyer-crawler-dev sh -c 'cat /opt/bugsink/conf/bugsink_conf.py | grep -A 6 CSRF_TRUSTED'
# Expected output:
# CSRF_TRUSTED_ORIGINS = [
# "https://localhost:8443",
# "https://127.0.0.1:8443",
# "http://localhost:8000",
# "http://127.0.0.1:8000",
# ]
```
**If Issue Persists After Fix:**
1. **Rebuild the container image** (configuration is baked into the image):
```bash
podman-compose -f compose.dev.yml down
podman build -f Dockerfile.dev -t localhost/flyer-crawler-dev:latest .
podman-compose -f compose.dev.yml up -d
```
2. **Clear browser cookies** for localhost:8443
3. **Check nginx X-Forwarded-Proto header** - the nginx config must set this header for Django to recognize HTTPS:
```bash
podman exec flyer-crawler-dev cat /etc/nginx/sites-available/bugsink | grep X-Forwarded-Proto
# Should show: proxy_set_header X-Forwarded-Proto $scheme;
```
---
## Related Documentation

View File

@@ -44,6 +44,8 @@ if (missingVars.length > 0) {
// --- Shared Environment Variables ---
// These come from compose.dev.yml environment section
const sharedEnv = {
// Timezone: PST (America/Los_Angeles) for consistent log timestamps
TZ: process.env.TZ || 'America/Los_Angeles',
NODE_ENV: 'development',
DB_HOST: process.env.DB_HOST || 'postgres',
DB_PORT: process.env.DB_PORT || '5432',
@@ -160,6 +162,8 @@ module.exports = {
min_uptime: '5s',
// Environment
env: {
// Timezone: PST (America/Los_Angeles) for consistent log timestamps
TZ: process.env.TZ || 'America/Los_Angeles',
NODE_ENV: 'development',
// Vite-specific env vars (VITE_ prefix)
VITE_SENTRY_DSN: process.env.VITE_SENTRY_DSN,

package-lock.json (generated)
View File

@@ -1,12 +1,12 @@
{
"name": "flyer-crawler",
"version": "0.12.8",
"version": "0.12.13",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "flyer-crawler",
"version": "0.12.8",
"version": "0.12.13",
"dependencies": {
"@bull-board/api": "^6.14.2",
"@bull-board/express": "^6.14.2",

View File

@@ -1,7 +1,7 @@
{
"name": "flyer-crawler",
"private": true,
"version": "0.12.8",
"version": "0.12.13",
"type": "module",
"scripts": {
"dev": "concurrently \"npm:start:dev\" \"vite\"",
@@ -14,12 +14,12 @@
"start": "npm run start:prod",
"build": "vite build",
"preview": "vite preview",
"test": "node scripts/check-linux.js && cross-env NODE_ENV=test tsx ./node_modules/vitest/vitest.mjs run",
"test-wsl": "cross-env NODE_ENV=test vitest run",
"test": "node scripts/check-linux.js && cross-env NODE_ENV=test TZ= tsx ./node_modules/vitest/vitest.mjs run",
"test-wsl": "cross-env NODE_ENV=test TZ= vitest run",
"test:coverage": "npm run clean && npm run test:unit -- --coverage && npm run test:integration -- --coverage",
"test:unit": "node scripts/check-linux.js && cross-env NODE_ENV=test tsx --max-old-space-size=8192 ./node_modules/vitest/vitest.mjs run --project unit -c vite.config.ts",
"test:integration": "node scripts/check-linux.js && cross-env NODE_ENV=test tsx --max-old-space-size=8192 ./node_modules/vitest/vitest.mjs run --project integration -c vitest.config.integration.ts",
"test:e2e": "node scripts/check-linux.js && cross-env NODE_ENV=test tsx --max-old-space-size=8192 ./node_modules/vitest/vitest.mjs run --config vitest.config.e2e.ts",
"test:unit": "node scripts/check-linux.js && cross-env NODE_ENV=test TZ= tsx --max-old-space-size=8192 ./node_modules/vitest/vitest.mjs run --project unit -c vite.config.ts",
"test:integration": "node scripts/check-linux.js && cross-env NODE_ENV=test TZ= tsx --max-old-space-size=8192 ./node_modules/vitest/vitest.mjs run --project integration -c vitest.config.integration.ts",
"test:e2e": "node scripts/check-linux.js && cross-env NODE_ENV=test TZ= tsx --max-old-space-size=8192 ./node_modules/vitest/vitest.mjs run --config vitest.config.e2e.ts",
"format": "prettier --write .",
"lint": "eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0",
"type-check": "tsc --noEmit",

View File

@@ -23,6 +23,26 @@ set -e
echo "Starting Flyer Crawler Dev Container..."
# ============================================================================
# Timezone Configuration
# ============================================================================
# Ensure TZ is set for consistent log timestamps across all services.
# TZ should be set via compose.dev.yml environment (default: America/Los_Angeles)
# ============================================================================
if [ -n "$TZ" ]; then
echo "Timezone configured: $TZ"
# Link timezone data if available (for date command and other tools)
if [ -f "/usr/share/zoneinfo/$TZ" ]; then
ln -sf "/usr/share/zoneinfo/$TZ" /etc/localtime
echo "$TZ" > /etc/timezone
echo "System timezone set to: $(date +%Z) ($(date))"
else
echo "Warning: Timezone data not found for $TZ, using TZ environment variable only"
fi
else
echo "Warning: TZ environment variable not set, using container default timezone"
fi
# Configure Bugsink HTTPS (ADR-015)
echo "Configuring Bugsink HTTPS..."
mkdir -p /etc/bugsink/ssl

View File

@@ -27,9 +27,13 @@ const defaultProps = {
};
const setupSuccessMocks = () => {
// The API returns {success, data: {userprofile, token}}, and the mutation extracts .data
const mockAuthResponse = {
userprofile: createMockUserProfile({ user: { user_id: '123', email: 'test@example.com' } }),
token: 'mock-token',
success: true,
data: {
userprofile: createMockUserProfile({ user: { user_id: '123', email: 'test@example.com' } }),
token: 'mock-token',
},
};
(mockedApiClient.loginUser as Mock).mockResolvedValue(
new Response(JSON.stringify(mockAuthResponse)),

View File

@@ -82,7 +82,11 @@ const defaultAuthenticatedProps = {
};
const setupSuccessMocks = () => {
const mockAuthResponse = { userprofile: authenticatedProfile, token: 'mock-token' };
// The API returns {success, data: {userprofile, token}}, and the mutation extracts .data
const mockAuthResponse = {
success: true,
data: { userprofile: authenticatedProfile, token: 'mock-token' },
};
(mockedApiClient.loginUser as Mock).mockResolvedValue(
new Response(JSON.stringify(mockAuthResponse)),
);

View File

@@ -132,7 +132,8 @@ describe('API Client', () => {
.mockResolvedValueOnce({
ok: true,
status: 200,
json: () => Promise.resolve({ token: 'new-refreshed-token' }),
// The API returns {success, data: {token}} wrapper format
json: () => Promise.resolve({ success: true, data: { token: 'new-refreshed-token' } }),
} as Response)
.mockResolvedValueOnce({
ok: true,
@@ -218,7 +219,7 @@ describe('API Client', () => {
localStorage.setItem('authToken', 'expired-token');
// Mock the global fetch to return a sequence of responses:
// 1. 401 Unauthorized (initial API call)
// 2. 200 OK (token refresh call)
// 2. 200 OK (token refresh call) - uses API wrapper format {success, data: {token}}
// 3. 200 OK (retry of the initial API call)
vi.mocked(global.fetch)
.mockResolvedValueOnce({
@@ -229,7 +230,8 @@ describe('API Client', () => {
.mockResolvedValueOnce({
ok: true,
status: 200,
json: () => Promise.resolve({ token: 'new-refreshed-token' }),
// The API returns {success, data: {token}} wrapper format
json: () => Promise.resolve({ success: true, data: { token: 'new-refreshed-token' } }),
} as Response)
.mockResolvedValueOnce({
ok: true,

View File

@@ -62,12 +62,33 @@ vi.mock('./logger.server', () => ({
vi.mock('bullmq', () => ({
Worker: mocks.MockWorker,
Queue: vi.fn(function () {
return { add: vi.fn() };
return { add: vi.fn(), close: vi.fn().mockResolvedValue(undefined) };
}),
// Add UnrecoverableError to the mock so it can be used in tests
UnrecoverableError: class UnrecoverableError extends Error {},
}));
// Mock redis.server to prevent real Redis connection attempts
vi.mock('./redis.server', () => ({
connection: {
on: vi.fn(),
quit: vi.fn().mockResolvedValue(undefined),
},
}));
// Mock queues.server to provide mock queue instances
vi.mock('./queues.server', () => ({
flyerQueue: { add: vi.fn(), close: vi.fn().mockResolvedValue(undefined) },
emailQueue: { add: vi.fn(), close: vi.fn().mockResolvedValue(undefined) },
analyticsQueue: { add: vi.fn(), close: vi.fn().mockResolvedValue(undefined) },
cleanupQueue: { add: vi.fn(), close: vi.fn().mockResolvedValue(undefined) },
weeklyAnalyticsQueue: { add: vi.fn(), close: vi.fn().mockResolvedValue(undefined) },
tokenCleanupQueue: { add: vi.fn(), close: vi.fn().mockResolvedValue(undefined) },
receiptQueue: { add: vi.fn(), close: vi.fn().mockResolvedValue(undefined) },
expiryAlertQueue: { add: vi.fn(), close: vi.fn().mockResolvedValue(undefined) },
barcodeQueue: { add: vi.fn(), close: vi.fn().mockResolvedValue(undefined) },
}));
// Mock flyerProcessingService.server as flyerWorker and cleanupWorker depend on it
vi.mock('./flyerProcessingService.server', () => {
// Mock the constructor to return an object with the mocked methods
@@ -88,6 +109,67 @@ vi.mock('./flyerDataTransformer', () => ({
},
}));
// Mock aiService.server to prevent initialization issues
vi.mock('./aiService.server', () => ({
aiService: {
extractAndValidateData: vi.fn(),
},
}));
// Mock db/index.db to prevent database connections
vi.mock('./db/index.db', () => ({
personalizationRepo: {},
}));
// Mock flyerAiProcessor.server
vi.mock('./flyerAiProcessor.server', () => ({
FlyerAiProcessor: vi.fn().mockImplementation(function () {
return { processFlyer: vi.fn() };
}),
}));
// Mock flyerPersistenceService.server
vi.mock('./flyerPersistenceService.server', () => ({
FlyerPersistenceService: vi.fn().mockImplementation(function () {
return { persistFlyerData: vi.fn() };
}),
}));
// Mock db/connection.db to prevent database connections
vi.mock('./db/connection.db', () => ({
withTransaction: vi.fn(),
}));
// Mock receiptService.server
vi.mock('./receiptService.server', () => ({
processReceiptJob: vi.fn().mockResolvedValue(undefined),
}));
// Mock expiryService.server
vi.mock('./expiryService.server', () => ({
processExpiryAlertJob: vi.fn().mockResolvedValue(undefined),
}));
// Mock barcodeService.server
vi.mock('./barcodeService.server', () => ({
processBarcodeDetectionJob: vi.fn().mockResolvedValue(undefined),
}));
// Mock flyerFileHandler.server
vi.mock('./flyerFileHandler.server', () => ({
FlyerFileHandler: vi.fn().mockImplementation(function () {
return { handleFile: vi.fn() };
}),
}));
// Mock workerOptions config
vi.mock('../config/workerOptions', () => ({
defaultWorkerOptions: {
lockDuration: 30000,
stalledInterval: 30000,
},
}));
// Helper to create a mock BullMQ Job object
const createMockJob = <T>(data: T): Job<T> => {
return {