# Bugsink Error Tracking Setup and Usage Guide
This document covers the complete setup and usage of Bugsink for error tracking in the Flyer Crawler application.
## Table of Contents

- What is Bugsink
- Environments
- Token Creation
- MCP Integration
- Application Integration
- Logstash Integration
- Using Bugsink
- Common Workflows
- Troubleshooting
## What is Bugsink

Bugsink is a lightweight, self-hosted error tracking platform that is fully compatible with the Sentry SDK ecosystem. We use Bugsink instead of Sentry SaaS or self-hosted Sentry for several reasons:
| Aspect | Bugsink | Self-Hosted Sentry |
|---|---|---|
| Resource Usage | Single process, ~256MB RAM | 16GB+ RAM, Kafka, ClickHouse |
| Deployment | Simple pip/binary install | Docker Compose with 20+ services |
| SDK Compatibility | Full Sentry SDK support | Full Sentry SDK support |
| Database | PostgreSQL or SQLite | PostgreSQL + ClickHouse |
| Cost | Free, self-hosted | Free, self-hosted |
| Maintenance | Minimal | Significant |
**Key Benefits:**

- **Sentry SDK Compatibility**: Uses the same `@sentry/node` and `@sentry/react` SDKs as Sentry
- **Self-Hosted**: All error data stays on our infrastructure
- **Lightweight**: Runs alongside the application without significant overhead
- **MCP Integration**: AI tools (Claude Code) can query errors via the bugsink-mcp server
**Architecture Decision**: See ADR-015: Application Performance Monitoring and Error Tracking for the full rationale.
## Environments

### Dev Container (Local Development)
| Item | Value |
|---|---|
| Web UI | https://localhost:8443 (nginx proxy) |
| Internal URL | http://localhost:8000 (direct) |
| Credentials | admin@localhost / admin |
| Backend Project | Project ID 1 - Backend API (Dev) |
| Frontend Project | Project ID 2 - Frontend (Dev) |
| Infra Project | Project ID 4 - Infrastructure (Dev) |
| Backend DSN | http://<key>@localhost:8000/1 |
| Frontend DSN | https://<key>@localhost/bugsink-api/2 (via nginx proxy) |
| Infra DSN | http://<key>@localhost:8000/4 (Logstash only) |
| Database | postgresql://bugsink:bugsink_dev_password@postgres:5432/bugsink |
**Important**: The Frontend DSN uses an nginx proxy (`/bugsink-api/`) because the browser cannot reach `localhost:8000` directly (a container-internal port). See Frontend Nginx Proxy for details.
**Configuration Files:**

| File | Purpose |
|---|---|
| `compose.dev.yml` | Initial DSNs using `127.0.0.1:8000` (container startup) |
| `.env.local` | **Overrides** `compose.dev.yml` (app runtime) |
| `docker/nginx/dev.conf` | Nginx proxy for the Bugsink API (frontend error reporting) |
| `docker/logstash/bugsink.conf` | Log routing to the Backend/Infrastructure projects |
**Note**: `.env.local` takes precedence over `compose.dev.yml` environment variables.
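For orientation, a minimal `.env.local` sketch wiring the dev DSNs from the table above to the variables documented later in this guide. The `<key>` values are per-project and intentionally elided; the `development` environment tag is an assumption:

```bash
# .env.local (sketch - overrides compose.dev.yml at app runtime)
SENTRY_DSN=http://<key>@localhost:8000/1                # Backend API (Dev)
VITE_SENTRY_DSN=https://<key>@localhost/bugsink-api/2   # Frontend (Dev), via the nginx proxy
SENTRY_ENVIRONMENT=development                          # assumption: dev environment tag
VITE_SENTRY_ENVIRONMENT=development                     # assumption: dev environment tag
```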
### Production
| Item | Value |
|---|---|
| Web UI | https://bugsink.projectium.com |
| Credentials | Managed separately (not shared in docs) |
| Backend Project | flyer-crawler-backend |
| Frontend Project | flyer-crawler-frontend |
| Infra Project | flyer-crawler-infrastructure |
**Bugsink Projects:**
| Project Slug | Type | Environment |
|---|---|---|
| flyer-crawler-backend | Backend | Production |
| flyer-crawler-backend-test | Backend | Test |
| flyer-crawler-frontend | Frontend | Production |
| flyer-crawler-frontend-test | Frontend | Test |
| flyer-crawler-infrastructure | Infra | Production |
| flyer-crawler-test-infrastructure | Infra | Test |
## Token Creation

Bugsink 2.0.11 does NOT have a "Settings > API Keys" menu in the UI. API tokens must be created via a Django management command.

### Dev Container Token

Run this command from the Windows host (Git Bash or PowerShell):

```bash
MSYS_NO_PATHCONV=1 podman exec -e DATABASE_URL=postgresql://bugsink:bugsink_dev_password@postgres:5432/bugsink -e SECRET_KEY=dev-bugsink-secret-key-minimum-50-characters-for-security flyer-crawler-dev sh -c 'cd /opt/bugsink/conf && DJANGO_SETTINGS_MODULE=bugsink_conf PYTHONPATH=/opt/bugsink/conf:/opt/bugsink/lib/python3.10/site-packages /opt/bugsink/bin/python -m django create_auth_token'
```

**Output**: A 40-character lowercase hex token (e.g., `a609c2886daa4e1e05f1517074d7779a5fb49056`)
### Production Token

The user executes this command on the production server:

```bash
cd /opt/bugsink && bugsink-manage create_auth_token
```

**Output**: Same format - a 40-character hex token.
### Token Storage

| Environment | Storage Location | Notes |
|---|---|---|
| Dev | `.mcp.json` (project-level) | Not committed to git |
| Production | Gitea secrets + `settings.json` | `BUGSINK_TOKEN` secret |
## MCP Integration

The bugsink-mcp server allows Claude Code and other AI tools to query Bugsink for error information.

### Installation

```bash
# Clone the MCP server
cd d:\gitea
git clone https://github.com/j-shelfwood/bugsink-mcp.git
cd bugsink-mcp
npm install
npm run build
```
### Configuration

**IMPORTANT**: Localhost MCP servers must use the project-level `.mcp.json` due to a known Claude Code loader issue. See BUGSINK-MCP-TROUBLESHOOTING.md for details.

#### Production (Global settings.json)

Location: `~/.claude/settings.json` (or `C:\Users\<username>\.claude\settings.json`)

```json
{
  "mcpServers": {
    "bugsink": {
      "command": "node",
      "args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
      "env": {
        "BUGSINK_URL": "https://bugsink.projectium.com",
        "BUGSINK_TOKEN": "<40-char-hex-token>"
      }
    }
  }
}
```

#### Dev Container (Project-level .mcp.json)

Location: project root `.mcp.json`

```json
{
  "mcpServers": {
    "localerrors": {
      "command": "node",
      "args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
      "env": {
        "BUGSINK_URL": "http://127.0.0.1:8000",
        "BUGSINK_TOKEN": "<40-char-hex-token>"
      }
    }
  }
}
```
### Environment Variables

The bugsink-mcp package requires exactly two environment variables:

| Variable | Description | Required |
|---|---|---|
| `BUGSINK_URL` | Bugsink instance URL | Yes |
| `BUGSINK_TOKEN` | API token (40-char hex) | Yes |

**Common Mistakes:**

- Using `BUGSINK_API_TOKEN` (wrong - use `BUGSINK_TOKEN`)
- Including `BUGSINK_ORG_SLUG` (not used by the package)
### Available MCP Tools

| Tool | Purpose |
|---|---|
| `test_connection` | Verify the MCP server can reach Bugsink |
| `list_projects` | List all projects in the instance |
| `get_project` | Get project details, including the DSN |
| `list_issues` | List issues for a project |
| `get_issue` | Get detailed issue information |
| `list_events` | List individual error occurrences |
| `get_event` | Get full event details with context |
| `get_stacktrace` | Get a pre-rendered Markdown stacktrace |
| `list_releases` | List releases for a project |
| `create_release` | Create a new release |

**Tool Prefixes:**

- Production: `mcp__bugsink__*`
- Dev Container: `mcp__localerrors__*`
### Verifying MCP Connection

After configuration, restart Claude Code and test:

```javascript
// Production
mcp__bugsink__test_connection();
// Expected: "Connection successful: Connected successfully. Found N project(s)."

// Dev Container
mcp__localerrors__test_connection();
// Expected: "Connection successful: Connected successfully. Found N project(s)."
```
## Application Integration

### Backend (Express/Node.js)

File: `src/services/sentry.server.ts`

The backend uses the `@sentry/node` SDK v8+ to capture errors:

```typescript
import * as Sentry from '@sentry/node';
import { config, isSentryConfigured, isProduction, isTest } from '../config/env';

export function initSentry(): void {
  if (!isSentryConfigured || isTest) return;

  Sentry.init({
    dsn: config.sentry.dsn,
    environment: config.sentry.environment || config.server.nodeEnv,
    debug: config.sentry.debug,
    tracesSampleRate: 0, // Performance monitoring disabled
    beforeSend(event, hint) {
      // Custom filtering logic
      return event;
    },
  });
}
```
**Key Functions:**

| Function | Purpose |
|---|---|
| `initSentry()` | Initialize the SDK at application startup |
| `captureException()` | Manually capture caught errors |
| `captureMessage()` | Log non-exception events |
| `setUser()` | Set user context after authentication |
| `addBreadcrumb()` | Add navigation/action breadcrumbs |
| `getSentryMiddleware()` | Get Express middleware for automatic capture |
Integration in `server.ts`:

```typescript
// At the very top of server.ts, before other imports
import { initSentry, getSentryMiddleware } from './services/sentry.server';
initSentry();

// After Express app creation
const { requestHandler, errorHandler } = getSentryMiddleware();
app.use(requestHandler);

// ... routes ...

// Before the final error handler
app.use(errorHandler);
```
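As a usage illustration, a hedged sketch of calling the helper functions above from a route handler. The route, `req.user` fields, and `importFlyer` service are hypothetical; only the helper names come from `sentry.server.ts`, and their signatures are assumed to mirror the Sentry SDK:

```typescript
import { captureException, setUser, addBreadcrumb } from './services/sentry.server';

// Hypothetical route - shows manual capture alongside the automatic middleware
app.post('/api/flyers/import', async (req, res) => {
  // Attach user context so events in Bugsink show who hit the error
  setUser({ id: req.user?.id, email: req.user?.email });

  // Breadcrumbs record the steps leading up to a failure
  addBreadcrumb({ category: 'import', message: 'Starting flyer import', level: 'info' });

  try {
    await importFlyer(req.body); // hypothetical service call
    res.status(204).end();
  } catch (err) {
    // Errors handled here never reach errorHandler, so capture them manually
    captureException(err);
    res.status(500).json({ error: 'Import failed' });
  }
});
```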
### Frontend (React)

File: `src/services/sentry.client.ts`

The frontend uses the `@sentry/react` SDK:

```typescript
import * as Sentry from '@sentry/react';
import config from '../config';

export function initSentry(): void {
  if (!config.sentry.dsn || !config.sentry.enabled) return;

  Sentry.init({
    dsn: config.sentry.dsn,
    environment: config.sentry.environment,
    debug: config.sentry.debug,
    tracesSampleRate: 0,
    integrations: [
      Sentry.breadcrumbsIntegration({
        console: true,
        dom: true,
        fetch: true,
        history: true,
        xhr: true,
      }),
    ],
    beforeSend(event) {
      // Filter browser extension errors
      if (
        event.exception?.values?.[0]?.stacktrace?.frames?.some((frame) =>
          frame.filename?.includes('extension://'),
        )
      ) {
        return null;
      }
      return event;
    },
  });
}
```
Client configuration (`src/config.ts`):

```typescript
const config = {
  sentry: {
    dsn: import.meta.env.VITE_SENTRY_DSN,
    environment: import.meta.env.VITE_SENTRY_ENVIRONMENT || import.meta.env.MODE,
    debug: import.meta.env.VITE_SENTRY_DEBUG === 'true',
    enabled: import.meta.env.VITE_SENTRY_ENABLED !== 'false',
  },
};
```
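A sketch of wiring this up at the application entry point. The file name `src/main.tsx` and the `<App />` structure are assumptions; `Sentry.ErrorBoundary` is the standard `@sentry/react` component:

```tsx
// src/main.tsx (hypothetical entry point)
import React from 'react';
import ReactDOM from 'react-dom/client';
import * as Sentry from '@sentry/react';
import { initSentry } from './services/sentry.client';
import App from './App';

// Initialize before rendering so early errors are captured
initSentry();

ReactDOM.createRoot(document.getElementById('root')!).render(
  // ErrorBoundary reports render errors to Bugsink and shows a fallback UI
  <Sentry.ErrorBoundary fallback={<p>Something went wrong.</p>}>
    <App />
  </Sentry.ErrorBoundary>,
);
```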
### Environment Variables

Backend (`src/config/env.ts`):

| Variable | Description | Default |
|---|---|---|
| `SENTRY_DSN` | Sentry-compatible DSN | (optional) |
| `SENTRY_ENABLED` | Enable/disable error reporting | `true` |
| `SENTRY_ENVIRONMENT` | Environment tag | `NODE_ENV` |
| `SENTRY_DEBUG` | Enable SDK debug logging | `false` |
Frontend (Vite):

| Variable | Description |
|---|---|
| `VITE_SENTRY_DSN` | Frontend DSN (separate project) |
| `VITE_SENTRY_ENVIRONMENT` | Environment tag |
| `VITE_SENTRY_DEBUG` | Enable SDK debug logging |
| `VITE_SENTRY_ENABLED` | Enable/disable error reporting |
### Frontend Nginx Proxy

The frontend Sentry SDK runs in the browser, which cannot directly reach `localhost:8000` (the Bugsink container-internal port). To solve this, we use an nginx proxy.

#### How It Works

```text
Browser --HTTPS--> https://localhost/bugsink-api/2/store/
                        |
                        v  (nginx proxy)
          http://localhost:8000/api/2/store/
                        |
                        v
                 Bugsink (internal)
```
#### Nginx Configuration

Location: `docker/nginx/dev.conf`

```nginx
# Proxy Bugsink Sentry API for frontend error reporting
location /bugsink-api/ {
    proxy_pass http://localhost:8000/api/;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Allow large error payloads with stack traces
    client_max_body_size 10M;

    # Timeouts for error reporting
    proxy_connect_timeout 10s;
    proxy_send_timeout 30s;
    proxy_read_timeout 30s;
}
```
#### Frontend DSN Format

```bash
# .env.local
# Uses the nginx proxy path instead of the direct port
VITE_SENTRY_DSN=https://<key>@localhost/bugsink-api/2
```
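To confirm the proxy path is reachable before debugging the SDK itself, a quick check from the host. The assumption here is that any HTTP response, even a 4xx for the missing auth header, proves nginx is forwarding; a connection error means it is not:

```bash
# -k accepts the self-signed dev certificate
curl -k -i -X POST https://localhost/bugsink-api/2/store/
```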
#### Testing Frontend Error Reporting

1. Open the browser console at `https://localhost`
2. Trigger a test error: `throw new Error('Test frontend error from browser');` (a more reliable trigger is sketched below)
3. Check the Bugsink Frontend (Dev) project for the error
4. Verify the browser console shows Sentry SDK activity (if `VITE_SENTRY_DEBUG=true`)
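Exceptions typed directly into the DevTools console are not always routed through `window.onerror`, so a temporary test component is a more reliable trigger. A sketch - the component and button are hypothetical; `captureException` is the standard `@sentry/react` export:

```tsx
import * as Sentry from '@sentry/react';

// Hypothetical component - mount it temporarily on any page
export function SentryTestButton() {
  return (
    <button
      onClick={() => {
        // Captured explicitly, so it works even if global handlers are bypassed
        Sentry.captureException(new Error('Test frontend error from test button'));
      }}
    >
      Send test error
    </button>
  );
}
```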
## Logstash Integration

Logstash aggregates logs from multiple sources and forwards error patterns to Bugsink.

**Note**: See ADR-015 for the full architecture.
### 3-Project Architecture

Logstash routes errors to different Bugsink projects based on log source:
| Project | ID | Receives |
|---|---|---|
| Backend API (Dev) | 1 | Pino app errors, PostgreSQL errors |
| Frontend (Dev) | 2 | Browser errors (via Sentry SDK, not Logstash) |
| Infrastructure (Dev) | 4 | Redis warnings, NGINX errors, Vite errors |
### Log Sources

| Source | Log Path | Project Destination | Error Detection |
|---|---|---|---|
| PM2 API | `/var/log/pm2/api-*.log` | Backend (1) | `level >= 50` (error/fatal) |
| PM2 Worker | `/var/log/pm2/worker-*.log` | Backend (1) | `level >= 50` (error/fatal) |
| PM2 Vite | `/var/log/pm2/vite-*.log` | Infrastructure (4) | error keyword patterns |
| PostgreSQL | `/var/log/postgresql/*.log` | Backend (1) | ERROR/FATAL log levels |
| Redis | `/var/log/redis/*.log` | Infrastructure (4) | WARNING level (`#`) |
| NGINX | `/var/log/nginx/error.log` | Infrastructure (4) | error/crit/alert/emerg |
### Pipeline Configuration

Location: `/etc/logstash/conf.d/bugsink.conf` (or `docker/logstash/bugsink.conf` in the project)

The configuration:

- **Inputs**: Reads from PM2 logs, PostgreSQL logs, Redis logs, and NGINX logs
- **Filters**: Detects errors and assigns tags based on log type
- **Outputs**: Routes to the appropriate Bugsink project based on log source
**Key Routing Logic:**

```
# Infrastructure logs -> Project 4
if "error" in [tags] and ([type] == "redis" or [type] == "nginx_error" or [type] == "pm2_vite") {
  http { url => "http://localhost:8000/api/4/store/" ... }
}
# Backend logs -> Project 1
else if "error" in [tags] and ([type] in ["pm2_api", "pm2_worker", "pino", "postgres"]) {
  http { url => "http://localhost:8000/api/1/store/" ... }
}
```
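For orientation, a fuller sketch of what one of the elided `http { ... }` output blocks might contain. The header value and the exact event-to-payload mapping are assumptions (the real mapping lives in the project's filter blocks); Bugsink's store endpoint follows the Sentry protocol, which expects an `X-Sentry-Auth` header carrying the project key, and `url`, `http_method`, `format`, and `headers` are standard Logstash `http` output options:

```
output {
  if "error" in [tags] and [type] in ["pm2_api", "pm2_worker", "pino", "postgres"] {
    http {
      url         => "http://localhost:8000/api/1/store/"
      http_method => "post"
      format      => "json"
      headers     => {
        "Content-Type"  => "application/json"
        "X-Sentry-Auth" => "Sentry sentry_version=7, sentry_key=<key>"
      }
    }
  }
}
```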
### Benefits

- **Separation of Concerns**: Application errors separate from infrastructure issues
- **Secondary Capture Path**: Catches errors before SDK initialization
- **Log-Based Errors**: Captures errors that don't throw exceptions
- **Infrastructure Monitoring**: Redis, NGINX, build tooling issues
- **Historical Analysis**: Process existing log files
## Using Bugsink

### Accessing the Web UI

**Dev Container:**

1. Open `https://localhost:8443` in your browser
2. Accept the self-signed certificate warning
3. Log in with `admin@localhost` / `admin`

**Production:**

1. Open `https://bugsink.projectium.com`
2. Log in with your credentials
### Projects and Teams
Bugsink organizes errors into projects:
| Concept | Description |
|---|---|
| Team | Group of projects (e.g., "Flyer Crawler") |
| Project | Single application/service |
| DSN | Data Source Name - unique key for each project |
To view projects:

- Click the project dropdown in the top navigation
- Or use MCP: `mcp__bugsink__list_projects()`
### Viewing Issues

Issues represent grouped error occurrences. Multiple identical errors are deduplicated into a single issue.
**Issue List View:**

- Navigate to a project
- Issues are sorted by last occurrence
- Each issue shows: title, count, first/last seen

**Issue Detail View:**

- Click an issue to see full details
- View aggregated statistics
- See the list of individual events
- Access the full stacktrace
### Viewing Events

Events are individual error occurrences.

**Event Information:**

- Full stacktrace
- Request context (URL, method, headers)
- User context (if set)
- Breadcrumbs (actions leading to the error)
- Tags and extra data
**Via MCP:**

```javascript
// List events for an issue
mcp__bugsink__list_events({ issue_id: 'uuid-here' });

// Get full event details
mcp__bugsink__get_event({ event_id: 'uuid-here' });

// Get a readable stacktrace
mcp__bugsink__get_stacktrace({ event_id: 'uuid-here' });
```
### Stacktraces and Context

Stacktraces show the call stack at the time of the error.

**Via Web UI:**

- Open an event
- Expand the "Exception" section
- Click frames to see source code context

**Via MCP:**

- `get_stacktrace` returns a pre-rendered Markdown stacktrace
- Includes file paths, line numbers, and function names
### Filtering and Searching

**Web UI Filters:**

- By status: unresolved, resolved, muted
- By date range
- By release version
- By environment
**MCP Filtering:**

```javascript
// Filter by status
mcp__bugsink__list_issues({
  project_id: 1,
  status: 'unresolved',
  limit: 25,
});

// Sort options
mcp__bugsink__list_issues({
  project_id: 1,
  sort: 'last_seen', // or "digest_order"
  order: 'desc', // or "asc"
});
```
### Release Tracking

Releases help identify which version introduced or fixed issues.

**Creating Releases:**

```javascript
mcp__bugsink__create_release({
  project_id: 1,
  version: '1.2.3',
});
```

**Viewing Releases:**

```javascript
mcp__bugsink__list_releases({ project_id: 1 });
```
## Common Workflows

### Investigating Production Errors

1. **Check for new errors** (via MCP):

   ```javascript
   mcp__bugsink__list_issues({
     project_id: 1,
     status: 'unresolved',
     sort: 'last_seen',
     limit: 10,
   });
   ```

2. **Get issue details:**

   ```javascript
   mcp__bugsink__get_issue({ issue_id: 'uuid' });
   ```

3. **View the stacktrace:**

   ```javascript
   mcp__bugsink__list_events({ issue_id: 'uuid', limit: 1 });
   mcp__bugsink__get_stacktrace({ event_id: 'event-uuid' });
   ```

4. **Examine the code**: Use the file path and line numbers from the stacktrace to locate the issue in the codebase.
### Tracking Down Bugs

1. **Identify error patterns:**
   - Group similar errors by message or location
   - Check occurrence counts and frequency

2. **Examine request context:**

   ```javascript
   mcp__bugsink__get_event({ event_id: 'uuid' });
   ```

   Look for: URL, HTTP method, request body, user info

3. **Review breadcrumbs**: Understand the sequence of actions leading to the error.

4. **Correlate with logs**: Use the request ID from the event to search the application logs (see the sketch below).
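For step 4, a sketch of searching the PM2 log paths from the log-sources table for a request ID taken from the event. The exact field name in your logs may differ:

```bash
# <request-id> is the value from the Bugsink event's request context
podman exec flyer-crawler-dev grep -r "<request-id>" /var/log/pm2/
```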
### Monitoring Error Rates

- **Check issue counts**: Compare event counts over time
- **Watch for regressions**: Resolved issues that reopen
- **Track new issues**: Filter by "first seen" date (see the MCP sketch below)
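A minimal sketch of such a periodic check using only the `list_issues` parameters documented above; comparing successive snapshots of the counts is left to the caller:

```javascript
// Snapshot the most recently active unresolved issues for the backend project
mcp__bugsink__list_issues({
  project_id: 1,
  status: 'unresolved',
  sort: 'last_seen',
  order: 'desc',
  limit: 25,
});
```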
### Dev Container Debugging

1. **Access local Bugsink**: `https://localhost:8443`

2. **Trigger a test error:**

   ```bash
   curl -X POST http://localhost:3001/api/test/error
   ```

3. **View in Bugsink**: Check the dev project for the captured error

4. **Query via MCP:**

   ```javascript
   mcp__localerrors__list_issues({ project_id: 1 });
   ```
## Troubleshooting

### MCP Server Not Available

**Symptoms:**

- `mcp__localerrors__*` tools return "No such tool available"
- `mcp__bugsink__*` works but `mcp__localerrors__*` fails
**Solutions:**

1. **Check the configuration location**: Localhost servers must use the project-level `.mcp.json`, not the global `settings.json`

2. **Verify the token variable name**: Use `BUGSINK_TOKEN`, not `BUGSINK_API_TOKEN`

3. **Test manually:**

   ```bash
   cd d:\gitea\bugsink-mcp
   set BUGSINK_URL=http://localhost:8000
   set BUGSINK_TOKEN=<your-token>
   node dist/index.js
   ```

   Expected: `Bugsink MCP server started`

4. **Full restart**: Close VS Code completely, then restart
See BUGSINK-MCP-TROUBLESHOOTING.md for detailed troubleshooting.
### Connection Refused to localhost:8000

**Cause**: The dev container's Bugsink service is not running.
**Solutions:**

1. **Check the container status:**

   ```bash
   podman exec flyer-crawler-dev systemctl status bugsink
   ```

2. **Start the service:**

   ```bash
   podman exec flyer-crawler-dev systemctl start bugsink
   ```

3. **Check the logs:**

   ```bash
   podman exec flyer-crawler-dev journalctl -u bugsink -n 50
   ```
### Errors Not Appearing in Bugsink

**Backend:**

- **Check the DSN**: Verify the `SENTRY_DSN` environment variable is set
- **Check the enabled flag**: `SENTRY_ENABLED` should be `true`
- **Check the test environment**: Sentry is disabled when `NODE_ENV=test`

**Frontend:**

- **Check the Vite env**: `VITE_SENTRY_DSN` must be set
- **Verify initialization**: Check the browser console for the Sentry init message
- **Check filtering**: `beforeSend` may be filtering the error
### HTTPS Certificate Warnings

**Dev Container**: Self-signed certificates are expected. Accept the warning.

**Production**: Should use valid certificates. If warnings appear, check the certificate expiration.
### Token Invalid or Expired

**Symptoms**: MCP returns authentication errors

**Solutions:**

1. **Regenerate the token**: Use the Django management command (see Token Creation)
2. **Update the configuration**: Put the new token in `.mcp.json` or `settings.json`
3. **Restart Claude Code**: Required after config changes
### Bugsink Database Issues

**Symptoms**: 500 errors in the Bugsink UI, connection refused

**Dev Container:**

```bash
# Check PostgreSQL
podman exec flyer-crawler-dev pg_isready -U bugsink -d bugsink -h postgres

# Check that the database exists
podman exec flyer-crawler-dev psql -U postgres -h postgres -c "\l" | grep bugsink
```

**Production** (user executes on the server):

```bash
cd /opt/bugsink && bugsink-manage check
```
### PostgreSQL Sequence Out of Sync (Duplicate Key Errors)

**Symptoms:**

- Bugsink throws `duplicate key value violates unique constraint "projects_project_pkey"`
- The error detail shows `Key (id)=(1) already exists`
- New projects or other entities fail to create

**Root Cause:**

PostgreSQL sequences can become out of sync with the actual data after:

- Manual data insertion or database seeding
- Restoring from a backup
- Copying data between environments

The sequence generates IDs that already exist in the table.
**Diagnosis:**

```bash
# Dev Container - compare the sequence to the max ID
podman exec flyer-crawler-dev psql -U bugsink -h postgres -d bugsink -c "
SELECT
  (SELECT MAX(id) FROM projects_project) AS max_id,
  (SELECT last_value FROM projects_project_id_seq) AS seq_last_value,
  CASE
    WHEN (SELECT MAX(id) FROM projects_project) <= (SELECT last_value FROM projects_project_id_seq)
    THEN 'OK'
    ELSE 'OUT OF SYNC - Needs reset'
  END AS status;
"

# Production (user executes on the server)
cd /opt/bugsink && bugsink-manage dbshell
# Then run: SELECT MAX(id) AS max_id, (SELECT last_value FROM projects_project_id_seq) AS seq_value FROM projects_project;
```
**Solution:**

Reset the sequence to the maximum existing ID:

```bash
# Dev Container
podman exec flyer-crawler-dev psql -U bugsink -h postgres -d bugsink -c "
SELECT setval('projects_project_id_seq', COALESCE((SELECT MAX(id) FROM projects_project), 1), true);
"

# Production (user executes on the server)
cd /opt/bugsink && bugsink-manage dbshell
# Then run: SELECT setval('projects_project_id_seq', COALESCE((SELECT MAX(id) FROM projects_project), 1), true);
```
**Verification:**

After running the fix, verify:

```bash
# The current sequence value should equal max_id (the next ID will be max_id + 1)
podman exec flyer-crawler-dev psql -U bugsink -h postgres -d bugsink -c "
SELECT nextval('projects_project_id_seq') - 1 AS current_seq_value;
"
```
**Prevention:**

When manually inserting data or restoring backups, always reset the sequences:

```sql
-- Generic pattern for any table/sequence
SELECT setval('SEQUENCE_NAME', COALESCE((SELECT MAX(id) FROM TABLE_NAME), 1), true);

-- Common Bugsink sequences that may need a reset:
SELECT setval('projects_project_id_seq', COALESCE((SELECT MAX(id) FROM projects_project), 1), true);
SELECT setval('teams_team_id_seq', COALESCE((SELECT MAX(id) FROM teams_team), 1), true);
SELECT setval('releases_release_id_seq', COALESCE((SELECT MAX(id) FROM releases_release), 1), true);
```
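If many sequences may be affected, a hedged sketch of resetting every `id` sequence in the public schema in one pass. This is standard PostgreSQL (`pg_get_serial_sequence` returns NULL for columns with no attached sequence), but test it against a backup first:

```sql
-- Reset each table's id sequence to MAX(id); skips tables without a serial id
DO $$
DECLARE
  r RECORD;
BEGIN
  FOR r IN
    SELECT a.attrelid::regclass AS tbl,
           pg_get_serial_sequence(a.attrelid::regclass::text, 'id') AS seq
    FROM pg_attribute a
    JOIN pg_class c ON c.oid = a.attrelid
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'public' AND c.relkind = 'r'
      AND a.attname = 'id' AND NOT a.attisdropped
  LOOP
    IF r.seq IS NOT NULL THEN
      EXECUTE format(
        'SELECT setval(%L, COALESCE((SELECT MAX(id) FROM %s), 1), true)',
        r.seq, r.tbl
      );
    END IF;
  END LOOP;
END $$;
```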
### Logstash Level Field Constraint Violation

**Symptoms:**

- Bugsink errors: `value too long for type character varying(7)`
- Errors in the Backend API project originating from Logstash
- The log shows the literal string `%{sentry_level}` being sent
**Root Cause:**

Logstash sends the literal placeholder `%{sentry_level}` (longer than 7 characters) to Bugsink when no error pattern is detected in the log message and the `sentry_level` field is never initialized. Bugsink's `level` column has a `varchar(7)` constraint, so the unresolved placeholder is rejected.

Valid Sentry levels are: `fatal`, `error`, `warning`, `info`, `debug` (all <= 7 characters).
**Diagnosis:**

```bash
# Check for recent level-constraint errors in Bugsink via MCP:
#   mcp__localerrors__list_issues({ project_id: 1, status: 'unresolved' })

# Or check the Logstash logs for HTTP 500 responses
podman exec flyer-crawler-dev grep "500" /var/log/logstash/logstash.log
```
**Solution:**

The fix updates the Logstash configuration (`docker/logstash/bugsink.conf`) to:

- Validate that `sentry_level` is not nil, empty, or placeholder text
- Set a default of "error" for any error-tagged event without a valid level
- Normalize levels to lowercase

Key filter block (Ruby):
```ruby
ruby {
  code => '
    level = event.get("sentry_level")
    # Check if the level is unusable (nil, empty, contains a placeholder, or too long)
    if level.nil? || level.to_s.empty? || level.to_s.include?("%{") || level.to_s.length > 7
      # Default to "error" for error-tagged events, "info" otherwise
      if event.get("tags")&.include?("error")
        event.set("sentry_level", "error")
      else
        event.set("sentry_level", "info")
      end
    else
      # Normalize to lowercase and validate against the allowed Sentry levels
      normalized = level.to_s.downcase
      valid_levels = ["fatal", "error", "warning", "info", "debug"]
      unless valid_levels.include?(normalized)
        normalized = "error"
      end
      event.set("sentry_level", normalized)
    end
  '
}
```
**Verification:**

After applying the fix:

1. Restart Logstash: `podman exec flyer-crawler-dev systemctl restart logstash`
2. Generate a test error and verify it appears in Bugsink without level errors
3. Check that no new "value too long" errors appear in the project
### CSRF Verification Failed

**Symptoms**: A "CSRF verification failed. Request aborted." error when performing actions in the Bugsink UI (resolving issues, changing settings, etc.)
**Root Cause:**

Django 4.0+ requires `CSRF_TRUSTED_ORIGINS` to be explicitly configured for HTTPS POST requests. The error occurs because:

- Bugsink is accessed via `https://localhost:8443` (the nginx HTTPS proxy)
- Django's CSRF protection validates the `Origin` header against `CSRF_TRUSTED_ORIGINS`
- Without explicit configuration, Django rejects POST requests from HTTPS origins

**Why localhost vs 127.0.0.1 Matters:**

- `localhost` and `127.0.0.1` are treated as DIFFERENT origins by browsers
- If you access Bugsink via `https://localhost:8443`, Django must trust `https://localhost:8443`
- If you access it via `https://127.0.0.1:8443`, Django must trust `https://127.0.0.1:8443`
- The fix includes BOTH to allow either access pattern
**Configuration (Already Applied):**

The Bugsink Django configuration in `Dockerfile.dev` includes:

```python
# CSRF Trusted Origins (Django 4.0+ requires the full origin for HTTPS POST requests)
CSRF_TRUSTED_ORIGINS = [
    "https://localhost:8443",
    "https://127.0.0.1:8443",
    "http://localhost:8000",
    "http://127.0.0.1:8000",
]

# HTTPS proxy support (nginx reverse proxy on port 8443)
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
```
**Verification:**

```bash
# Verify CSRF_TRUSTED_ORIGINS is configured
podman exec flyer-crawler-dev sh -c 'cat /opt/bugsink/conf/bugsink_conf.py | grep -A 6 CSRF_TRUSTED'

# Expected output:
# CSRF_TRUSTED_ORIGINS = [
#     "https://localhost:8443",
#     "https://127.0.0.1:8443",
#     "http://localhost:8000",
#     "http://127.0.0.1:8000",
# ]
```
**If the Issue Persists After the Fix:**

1. **Rebuild the container image** (the configuration is baked into the image):

   ```bash
   podman-compose -f compose.dev.yml down
   podman build -f Dockerfile.dev -t localhost/flyer-crawler-dev:latest .
   podman-compose -f compose.dev.yml up -d
   ```

2. **Clear browser cookies** for `localhost:8443`

3. **Check the nginx X-Forwarded-Proto header** - the nginx config must set this header for Django to recognize HTTPS:

   ```bash
   podman exec flyer-crawler-dev cat /etc/nginx/sites-available/bugsink | grep X-Forwarded-Proto
   # Should show: proxy_set_header X-Forwarded-Proto $scheme;
   ```