Bugsink Fixes
2026-01-22 21:54:38 -08:00
parent 4f08238698
commit d4543cf4b9
3 changed files with 236 additions and 1 deletion

@@ -801,6 +801,158 @@ podman exec flyer-crawler-dev psql -U postgres -h postgres -c "\l" | grep bugsin
ssh root@projectium.com "cd /opt/bugsink && bugsink-manage check"
```
### PostgreSQL Sequence Out of Sync (Duplicate Key Errors)
**Symptoms:**
- Bugsink throws `duplicate key value violates unique constraint "projects_project_pkey"`
- Error detail shows: `Key (id)=(1) already exists`
- New projects or other entities fail to create
**Root Cause:**
PostgreSQL sequences can become out of sync with actual data after:
- Manual data insertion or database seeding
- Restoring from backup
- Copying data between environments
The sequence then hands out IDs that already exist in the table (for example, it returns 1 while a row with `id = 1` is already present), which triggers the duplicate-key error above.
**Diagnosis:**
```bash
# Dev Container - Check sequence vs max ID
podman exec flyer-crawler-dev psql -U bugsink -h postgres -d bugsink -c "
SELECT
  (SELECT MAX(id) FROM projects_project) as max_id,
  (SELECT last_value FROM projects_project_id_seq) as seq_last_value,
  CASE
    WHEN (SELECT MAX(id) FROM projects_project) <= (SELECT last_value FROM projects_project_id_seq)
      THEN 'OK'
    ELSE 'OUT OF SYNC - Needs reset'
  END as status;
"
# Production
ssh root@projectium.com "cd /opt/bugsink && bugsink-manage dbshell" <<< "
SELECT MAX(id) as max_id, (SELECT last_value FROM projects_project_id_seq) as seq_value FROM projects_project;
"
```
**Solution:**
Reset the sequence to the maximum existing ID:
```bash
# Dev Container
podman exec flyer-crawler-dev psql -U bugsink -h postgres -d bugsink -c "
SELECT setval('projects_project_id_seq', COALESCE((SELECT MAX(id) FROM projects_project), 1), true);
"
# Production
ssh root@projectium.com "cd /opt/bugsink && bugsink-manage dbshell" <<< "
SELECT setval('projects_project_id_seq', COALESCE((SELECT MAX(id) FROM projects_project), 1), true);
"
```
**Verification:**
After running the fix, verify:
```bash
# Should return max_id (so the next assigned ID is max_id + 1)
podman exec flyer-crawler-dev psql -U bugsink -h postgres -d bugsink -c "
SELECT nextval('projects_project_id_seq') - 1 as current_seq_value;
"
```
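Note that `nextval()` advances the sequence, so the check above consumes one ID and leaves a harmless gap. A read-only alternative (a minimal sketch reusing the same dev-container connection as above) is to inspect the sequence state directly; immediately after the reset, `last_value` should equal `max_id` with `is_called = true`:
```bash
# Read-only check: inspect the sequence state without advancing it
podman exec flyer-crawler-dev psql -U bugsink -h postgres -d bugsink -c "
SELECT last_value, is_called FROM projects_project_id_seq;
"
```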
**Prevention:**
When manually inserting data or restoring backups, always reset sequences:
```sql
-- Generic pattern for any table/sequence
SELECT setval('SEQUENCE_NAME', COALESCE((SELECT MAX(id) FROM TABLE_NAME), 1), true);
-- Common Bugsink sequences that may need reset:
SELECT setval('projects_project_id_seq', COALESCE((SELECT MAX(id) FROM projects_project), 1), true);
SELECT setval('teams_team_id_seq', COALESCE((SELECT MAX(id) FROM teams_team), 1), true);
SELECT setval('releases_release_id_seq', COALESCE((SELECT MAX(id) FROM releases_release), 1), true);
```
### Logstash Level Field Constraint Violation
**Symptoms:**
- Bugsink errors: `value too long for type character varying(7)`
- Errors in Backend API project from Logstash
- Logs show the literal string `%{sentry_level}` being sent as the level
**Root Cause:**
Logstash passes unresolved `sprintf` references through as literal text, so the placeholder `%{sentry_level}` (15 characters) is sent to Bugsink when:
- No error pattern is detected in the log message
- The `sentry_level` field is not properly initialized
Bugsink's `level` column is `varchar(7)`, so the 15-character placeholder violates the constraint. Valid Sentry levels are `fatal`, `error`, `warning`, `info`, and `debug` (all <= 7 characters).
**Diagnosis:**
```bash
# Check for recent level constraint errors in Bugsink
# Via MCP:
mcp__localerrors__list_issues({ project_id: 1, status: 'unresolved' })
# Or check Logstash logs for HTTP 500 responses
podman exec flyer-crawler-dev cat /var/log/logstash/logstash.log | grep "500"
```
**Solution:**
The fix requires updating the Logstash configuration (`docker/logstash/bugsink.conf`) to:
1. Validate `sentry_level` is not nil, empty, or contains placeholder text
2. Set a default value of "error" for any error-tagged event without a valid level
3. Normalize levels to lowercase
**Key filter block (Ruby):**
```ruby
ruby {
  code => '
    level = event.get("sentry_level")
    # Check if level is invalid (nil, empty, contains placeholder, or invalid value)
    if level.nil? || level.to_s.empty? || level.to_s.include?("%{") || level.to_s.length > 7
      # Default to "error" for error-tagged events, "info" otherwise
      if event.get("tags")&.include?("error")
        event.set("sentry_level", "error")
      else
        event.set("sentry_level", "info")
      end
    else
      # Normalize to lowercase and validate
      normalized = level.to_s.downcase
      valid_levels = ["fatal", "error", "warning", "info", "debug"]
      unless valid_levels.include?(normalized)
        normalized = "error"
      end
      event.set("sentry_level", normalized)
    end
  '
}
```
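After editing `docker/logstash/bugsink.conf`, the pipeline can be syntax-checked before restarting. A sketch, assuming a standard package install inside the dev container and that the file is deployed to `/etc/logstash/conf.d/` (adjust both paths to your setup):
```bash
# Validate the Logstash configuration without starting the pipeline
podman exec flyer-crawler-dev /usr/share/logstash/bin/logstash \
  --path.settings /etc/logstash \
  --config.test_and_exit \
  -f /etc/logstash/conf.d/bugsink.conf
```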
**Verification:**
After applying the fix:
1. Restart Logstash: `podman exec flyer-crawler-dev systemctl restart logstash`
2. Generate a test error and verify it appears in Bugsink without level errors
3. Check that no new "value too long" errors appear in the project (a quick log check is sketched below)
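For step 3, the Logstash log used in the diagnosis step above gives a quick signal that Bugsink has stopped rejecting events (a minimal sketch; same log path as above):
```bash
# Count recent HTTP 500 responses from Bugsink in the Logstash log; expect 0 after the fix
podman exec flyer-crawler-dev tail -n 500 /var/log/logstash/logstash.log | grep -c "500"
```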
### CSRF Verification Failed
**Symptoms:** "CSRF verification failed. Request aborted." error when performing actions in the Bugsink UI (resolving issues, changing settings, etc.)