Compare commits

...

34 Commits

All check statuses below refer to the "Deploy to Test Environment / deploy-to-test (push)" workflow.

| Author | SHA1 | Message | Deploy to Test | Date |
| --- | --- | --- | --- | --- |
| Gitea Actions | c579f141f8 | ci: Bump version to 0.11.11 [skip ci] | | 2026-01-19 09:27:16 +05:00 |
| | 9cb03c1ede | more e2e from the AI | Successful in 16m42s | 2026-01-18 20:26:21 -08:00 |
| Gitea Actions | c14bef4448 | ci: Bump version to 0.11.10 [skip ci] | | 2026-01-19 07:43:17 +05:00 |
| | 7c0e5450db | latest batch of fixes after frontend testing - almost done? | Successful in 16m29s | 2026-01-18 18:42:32 -08:00 |
| Gitea Actions | 8e85493872 | ci: Bump version to 0.11.9 [skip ci] | | 2026-01-19 07:28:39 +05:00 |
| | 327d3d4fbc | latest batch of fixes after frontend testing - almost done? | Failing after 1m7s | 2026-01-18 18:25:31 -08:00 |
| Gitea Actions | bdb2e274cc | ci: Bump version to 0.11.8 [skip ci] | | 2026-01-19 05:28:15 +05:00 |
| | cd46f1d4c2 | integration test fixes | Successful in 16m38s | 2026-01-18 16:23:34 -08:00 |
| Gitea Actions | 6da4b5e9d0 | ci: Bump version to 0.11.7 [skip ci] | | 2026-01-19 03:28:57 +05:00 |
| | 941626004e | test fixes to align with latest tests | Successful in 16m51s | 2026-01-18 14:27:20 -08:00 |
| Gitea Actions | 67cfe39249 | ci: Bump version to 0.11.6 [skip ci] | | 2026-01-19 03:00:22 +05:00 |
| | c24103d9a0 | frontend direct testing result and fixes | Successful in 16m42s | 2026-01-18 13:57:47 -08:00 |
| Gitea Actions | 3e85f839fe | ci: Bump version to 0.11.5 [skip ci] | | 2026-01-18 15:57:52 +05:00 |
| | 63a0dde0f8 | fix unit tests after frontend tests ran | Successful in 16m21s | 2026-01-18 02:56:25 -08:00 |
| Gitea Actions | 94f45d9726 | ci: Bump version to 0.11.4 [skip ci] | | 2026-01-18 14:36:55 +05:00 |
| | 136a9ce3f3 | Add ADR-054 for Bugsink to Gitea issue synchronization and frontend testing summary for 2026-01-18 | Successful in 17m3s | 2026-01-18 01:35:00 -08:00 |
| Gitea Actions | e65151c3df | ci: Bump version to 0.11.3 [skip ci] | | 2026-01-18 10:49:14 +05:00 |
| | 3d91d59b9c | refactor: update API response handling across multiple queries to ensure compliance with ADR-028 | Successful in 16m53s | 2026-01-17 21:45:51 -08:00 |
| Gitea Actions | 822d6d1c3c | ci: Bump version to 0.11.2 [skip ci] | | 2026-01-18 06:50:06 +05:00 |
| | a24e28f52f | update node packages | Successful in 16m32s | 2026-01-17 17:49:09 -08:00 |
| | 8dbfa62768 | add missing plugin | Failing after 11s | 2026-01-17 17:36:25 -08:00 |
| Gitea Actions | da4e0c9136 | ci: Bump version to 0.11.1 [skip ci] | | 2026-01-18 06:25:46 +05:00 |
| | dd3cbeb65d | fix unit tests from using response | Failing after 1m55s | 2026-01-17 17:24:05 -08:00 |
| | e6d383103c | feat: add Sentry source map upload configuration and update environment variables | | 2026-01-17 17:07:50 -08:00 |
| Gitea Actions | a14816c8ee | ci: Bump version to 0.11.0 for production release [skip ci] | | 2026-01-18 05:02:54 +05:00 |
| Gitea Actions | 08b220e29c | ci: Bump version to 0.10.0 for production release [skip ci] | | 2026-01-18 04:50:17 +05:00 |
| Gitea Actions | d41a3f1887 | ci: Bump version to 0.9.115 [skip ci] | | 2026-01-18 04:10:18 +05:00 |
| | 1f6cdc62d7 | still fixin test | Successful in 16m20s | 2026-01-17 15:09:17 -08:00 |
| Gitea Actions | 978c63bacd | ci: Bump version to 0.9.114 [skip ci] | | 2026-01-18 04:00:21 +05:00 |
| | 544eb7ae3c | still fixin test | Failing after 2m1s | 2026-01-17 14:59:01 -08:00 |
| Gitea Actions | f6839f6e14 | ci: Bump version to 0.9.113 [skip ci] | | 2026-01-18 03:35:25 +05:00 |
| | 3fac29436a | still fixin test | Failing after 2m6s | 2026-01-17 14:34:18 -08:00 |
| Gitea Actions | 56f45c9301 | ci: Bump version to 0.9.112 [skip ci] | | 2026-01-18 03:19:53 +05:00 |
| | 83460abce4 | md fixin | Failing after 1m57s | 2026-01-17 14:18:55 -08:00 |

Commit body for 136a9ce3f3:

- Introduced ADR-054 detailing the implementation of an automated sync worker to create Gitea issues from unresolved Bugsink errors.
- Documented architecture, queue configuration, Redis schema, and implementation phases for the sync feature.
- Added frontend testing summary for 2026-01-18, covering multiple sessions of API testing, fixes applied, and Bugsink error tracking status.
- Included detailed API reference and common validation errors encountered during testing.

Commit body for 3d91d59b9c:

- Removed direct return of json.data in favor of structured error handling.
- Implemented checks for success and data array in useActivityLogQuery, useBestSalePricesQuery, useBrandsQuery, useCategoriesQuery, useFlyerItemsForFlyersQuery, useFlyerItemsQuery, useFlyersQuery, useLeaderboardQuery, useMasterItemsQuery, usePriceHistoryQuery, useShoppingListsQuery, useSuggestedCorrectionsQuery, and useWatchedItemsQuery.
- Updated unit tests to reflect changes in expected behavior when API response does not conform to the expected structure.
- Updated package.json to use the latest version of @sentry/vite-plugin.
- Adjusted vite.config.ts for local development SSL configuration.
- Added self-signed SSL certificate and key for local development.
98 changed files with 7245 additions and 2376 deletions

View File

@@ -98,7 +98,8 @@
"Bash(ssh:*)",
"mcp__redis__list",
"Read(//d/gitea/bugsink-mcp/**)",
"Bash(d:/nodejs/npm.cmd install)"
"Bash(d:/nodejs/npm.cmd install)",
"Bash(node node_modules/vitest/vitest.mjs run:*)"
]
}
}

View File

@@ -102,3 +102,13 @@ VITE_SENTRY_ENABLED=true
# Enable debug mode for SDK troubleshooting (default: false)
SENTRY_DEBUG=false
VITE_SENTRY_DEBUG=false
# ===================
# Source Maps Upload (ADR-015)
# ===================
# Auth token for uploading source maps to Bugsink
# Create at: https://bugsink.projectium.com (Settings > API Keys)
# Required for de-minified stack traces in error reports
SENTRY_AUTH_TOKEN=
# URL of your Bugsink instance (for source map uploads)
SENTRY_URL=https://bugsink.projectium.com

View File

@@ -63,8 +63,8 @@ jobs:
- name: Check for Production Database Schema Changes
env:
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
run: |
if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
@@ -87,11 +87,22 @@ jobs:
fi
- name: Build React Application for Production
# Source Maps (ADR-015): If SENTRY_AUTH_TOKEN is set, the @sentry/vite-plugin will:
# 1. Generate hidden source maps during build
# 2. Upload them to Bugsink for error de-minification
# 3. Delete the .map files after upload (so they're not publicly accessible)
run: |
if [ -z "${{ secrets.VITE_GOOGLE_GENAI_API_KEY }}" ]; then
echo "ERROR: The VITE_GOOGLE_GENAI_API_KEY secret is not set."
exit 1
fi
# Source map upload is optional - warn if not configured
if [ -z "${{ secrets.SENTRY_AUTH_TOKEN }}" ]; then
echo "WARNING: SENTRY_AUTH_TOKEN not set. Source maps will NOT be uploaded to Bugsink."
echo " Errors will show minified stack traces. To fix, add SENTRY_AUTH_TOKEN to Gitea secrets."
fi
GITEA_SERVER_URL="https://gitea.projectium.com"
COMMIT_MESSAGE=$(git log -1 --grep="\[skip ci\]" --invert-grep --pretty=%s)
PACKAGE_VERSION=$(node -p "require('./package.json').version")
@@ -101,6 +112,8 @@ jobs:
VITE_SENTRY_DSN="${{ secrets.VITE_SENTRY_DSN }}" \
VITE_SENTRY_ENVIRONMENT="production" \
VITE_SENTRY_ENABLED="true" \
SENTRY_AUTH_TOKEN="${{ secrets.SENTRY_AUTH_TOKEN }}" \
SENTRY_URL="https://bugsink.projectium.com" \
VITE_API_BASE_URL=/api VITE_API_KEY=${{ secrets.VITE_GOOGLE_GENAI_API_KEY }} npm run build
- name: Deploy Application to Production Server
@@ -117,8 +130,8 @@ jobs:
env:
# --- Production Secrets Injection ---
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
# Explicitly use database 0 for production (test uses database 1)
REDIS_URL: 'redis://localhost:6379/0'

View File

@@ -121,10 +121,11 @@ jobs:
env:
# --- Database credentials for the test suite ---
# These are injected from Gitea secrets into the runner's environment.
# CRITICAL: Use TEST-specific credentials that have CREATE privileges on the public schema.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_NAME: 'flyer-crawler-test' # Explicitly set for tests
DB_USER: ${{ secrets.DB_USER_TEST }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_TEST }}
DB_NAME: ${{ secrets.DB_DATABASE_TEST }}
# --- Redis credentials for the test suite ---
# CRITICAL: Use Redis database 1 to isolate tests from production (which uses db 0).
@@ -328,10 +329,11 @@ jobs:
- name: Check for Test Database Schema Changes
env:
# Use test database credentials for this check.
# CRITICAL: Use TEST-specific credentials that have CREATE privileges on the public schema.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }} # This is used by psql
DB_NAME: ${{ secrets.DB_DATABASE_TEST }} # This is used by the application
DB_USER: ${{ secrets.DB_USER_TEST }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_TEST }}
DB_NAME: ${{ secrets.DB_DATABASE_TEST }}
run: |
# Fail-fast check to ensure secrets are configured in Gitea.
if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
@@ -372,6 +374,11 @@ jobs:
# We set the environment variable directly in the command line for this step.
# This maps the Gitea secret to the environment variable the application expects.
# We also generate and inject the application version, commit URL, and commit message.
#
# Source Maps (ADR-015): If SENTRY_AUTH_TOKEN is set, the @sentry/vite-plugin will:
# 1. Generate hidden source maps during build
# 2. Upload them to Bugsink for error de-minification
# 3. Delete the .map files after upload (so they're not publicly accessible)
run: |
# Fail-fast check for the build-time secret.
if [ -z "${{ secrets.VITE_GOOGLE_GENAI_API_KEY }}" ]; then
@@ -379,6 +386,12 @@ jobs:
exit 1
fi
# Source map upload is optional - warn if not configured
if [ -z "${{ secrets.SENTRY_AUTH_TOKEN }}" ]; then
echo "WARNING: SENTRY_AUTH_TOKEN not set. Source maps will NOT be uploaded to Bugsink."
echo " Errors will show minified stack traces. To fix, add SENTRY_AUTH_TOKEN to Gitea secrets."
fi
GITEA_SERVER_URL="https://gitea.projectium.com" # Your Gitea instance URL
# Sanitize commit message to prevent shell injection or build breaks (removes quotes, backticks, backslashes, $)
COMMIT_MESSAGE=$(git log -1 --grep="\[skip ci\]" --invert-grep --pretty=%s | tr -d '"`\\$')
@@ -389,6 +402,8 @@ jobs:
VITE_SENTRY_DSN="${{ secrets.VITE_SENTRY_DSN_TEST }}" \
VITE_SENTRY_ENVIRONMENT="test" \
VITE_SENTRY_ENABLED="true" \
SENTRY_AUTH_TOKEN="${{ secrets.SENTRY_AUTH_TOKEN }}" \
SENTRY_URL="https://bugsink.projectium.com" \
VITE_API_BASE_URL="https://flyer-crawler-test.projectium.com/api" VITE_API_KEY=${{ secrets.VITE_GOOGLE_GENAI_API_KEY_TEST }} npm run build
- name: Deploy Application to Test Server
@@ -427,9 +442,10 @@ jobs:
# Your Node.js application will read these directly from `process.env`.
# Database Credentials
# CRITICAL: Use TEST-specific credentials that have CREATE privileges on the public schema.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_USER: ${{ secrets.DB_USER_TEST }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_TEST }}
DB_NAME: ${{ secrets.DB_DATABASE_TEST }}
# Redis Credentials (use database 1 to isolate from production)

View File

@@ -20,9 +20,9 @@ jobs:
# Use production database credentials for this entire job.
DB_HOST: ${{ secrets.DB_HOST }}
DB_PORT: ${{ secrets.DB_PORT }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_NAME: ${{ secrets.DB_NAME_PROD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
steps:
- name: Validate Secrets

View File

@@ -23,9 +23,9 @@ jobs:
env:
# Use production database credentials for this entire job.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }} # Used by psql
DB_NAME: ${{ secrets.DB_DATABASE_PROD }} # Used by the application
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
steps:
- name: Checkout Code

View File

@@ -23,9 +23,9 @@ jobs:
env:
# Use test database credentials for this entire job.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }} # Used by psql
DB_NAME: ${{ secrets.DB_DATABASE_TEST }} # Used by the application
DB_USER: ${{ secrets.DB_USER_TEST }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_TEST }}
DB_NAME: ${{ secrets.DB_DATABASE_TEST }}
steps:
- name: Checkout Code

View File

@@ -22,8 +22,8 @@ jobs:
env:
# Use production database credentials for this entire job.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
BACKUP_DIR: '/var/www/backups' # Define a dedicated directory for backups

View File

@@ -62,8 +62,8 @@ jobs:
- name: Check for Production Database Schema Changes
env:
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
run: |
if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
@@ -113,8 +113,8 @@ jobs:
env:
# --- Production Secrets Injection ---
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
# Explicitly use database 0 for production (test uses database 1)
REDIS_URL: 'redis://localhost:6379/0'

View File

@@ -15,7 +15,7 @@ Claude Code uses **two separate configuration files** for MCP servers. They must
## File Locations (Windows)
```
```text
C:\Users\<username>\.claude.json # CLI config
C:\Users\<username>\.claude\settings.json # VS Code extension config
```
@@ -182,7 +182,7 @@ npm run build
}
```
- GitHub: https://github.com/j-shelfwood/bugsink-mcp
- GitHub: <https://github.com/j-shelfwood/bugsink-mcp>
- Get token from Bugsink UI: Settings > API Tokens
- **Do NOT use npx** - the package is not on npm
@@ -310,7 +310,7 @@ In `~/.claude/settings.json`, add `"disabled": true`:
## Future: MCP Launchpad
**Project:** https://github.com/kenneth-liao/mcp-launchpad
**Project:** <https://github.com/kenneth-liao/mcp-launchpad>
MCP Launchpad is a CLI tool that wraps multiple MCP servers into a single interface. Worth revisiting when:
@@ -338,7 +338,7 @@ MCP Launchpad is a CLI tool that wraps multiple MCP servers into a single interf
## Future: Graphiti (Advanced Knowledge Graph)
**Project:** https://github.com/getzep/graphiti
**Project:** <https://github.com/getzep/graphiti>
Graphiti provides temporal-aware knowledge graphs - it tracks not just facts, but _when_ they became true/outdated. Much more powerful than simple memory MCP, but requires significant infrastructure.

View File

@@ -293,22 +293,25 @@ To add a new secret (e.g., `SENTRY_DSN`):
**Shared (used by both environments):**
- `DB_HOST`, `DB_USER`, `DB_PASSWORD` - Database credentials
- `DB_HOST` - Database host (shared PostgreSQL server)
- `JWT_SECRET` - Authentication
- `GOOGLE_MAPS_API_KEY` - Google Maps
- `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET` - Google OAuth
- `GH_CLIENT_ID`, `GH_CLIENT_SECRET` - GitHub OAuth
- `SENTRY_AUTH_TOKEN` - Bugsink API token for source map uploads (create at Settings > API Keys in Bugsink)
**Production-specific:**
- `DB_DATABASE_PROD` - Production database name
- `DB_USER_PROD`, `DB_PASSWORD_PROD` - Production database credentials (`flyer_crawler_prod`)
- `DB_DATABASE_PROD` - Production database name (`flyer-crawler`)
- `REDIS_PASSWORD_PROD` - Redis password (uses database 0)
- `VITE_GOOGLE_GENAI_API_KEY` - Gemini API key for production
- `SENTRY_DSN`, `VITE_SENTRY_DSN` - Bugsink error tracking DSNs (production projects)
**Test-specific:**
- `DB_DATABASE_TEST` - Test database name
- `DB_USER_TEST`, `DB_PASSWORD_TEST` - Test database credentials (`flyer_crawler_test`)
- `DB_DATABASE_TEST` - Test database name (`flyer-crawler-test`)
- `REDIS_PASSWORD_TEST` - Redis password (uses database 1 for isolation)
- `VITE_GOOGLE_GENAI_API_KEY_TEST` - Gemini API key for test
- `SENTRY_DSN_TEST`, `VITE_SENTRY_DSN_TEST` - Bugsink error tracking DSNs (test projects)
@@ -322,6 +325,55 @@ The test environment (`flyer-crawler-test.projectium.com`) uses **both** Gitea C
- **Redis database 1**: Isolates test job queues from production (which uses database 0)
- **PM2 process names**: Suffixed with `-test` (e.g., `flyer-crawler-api-test`)
### Database User Setup (Test Environment)
**CRITICAL**: The test database requires specific PostgreSQL permissions to be configured manually. Schema ownership alone is NOT sufficient - explicit privileges must be granted.
**Database Users:**
| User | Database | Purpose |
| -------------------- | -------------------- | ---------- |
| `flyer_crawler_prod` | `flyer-crawler-prod` | Production |
| `flyer_crawler_test` | `flyer-crawler-test` | Testing |
**Required Setup Commands** (run as `postgres` superuser):
```bash
# Connect as postgres superuser
sudo -u postgres psql
# Create the test database and user (if not exists)
CREATE DATABASE "flyer-crawler-test";
CREATE USER flyer_crawler_test WITH PASSWORD 'your-password-here';
# Grant ownership and privileges
ALTER DATABASE "flyer-crawler-test" OWNER TO flyer_crawler_test;
\c "flyer-crawler-test"
ALTER SCHEMA public OWNER TO flyer_crawler_test;
GRANT CREATE, USAGE ON SCHEMA public TO flyer_crawler_test;
# Create required extension (must be done by superuser)
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
```
**Why These Steps Are Necessary:**
1. **Schema ownership alone is insufficient** - PostgreSQL requires explicit `GRANT CREATE, USAGE` privileges even when the user owns the schema
2. **uuid-ossp extension** - Required by the application for UUID generation; must be created by a superuser before the app can use it
3. **Separate users for prod/test** - Prevents accidental cross-environment data access; each environment has its own credentials in Gitea secrets
**Verification:**
```bash
# Check schema privileges (should show 'UC' for flyer_crawler_test)
psql -d "flyer-crawler-test" -c "\dn+ public"
# Expected output:
# Name | Owner | Access privileges
# -------+--------------------+------------------------------------------
# public | flyer_crawler_test | flyer_crawler_test=UC/flyer_crawler_test
```
### Dev Container Environment
The dev container runs its own **local Bugsink instance** - it does NOT connect to the production Bugsink server:

View File

@@ -14,6 +14,17 @@ Flyer Crawler uses PostgreSQL with several extensions for full-text search, geog
---
## Database Users
This project uses **environment-specific database users** to isolate production and test environments:
| User | Database | Purpose |
| -------------------- | -------------------- | ---------- |
| `flyer_crawler_prod` | `flyer-crawler-prod` | Production |
| `flyer_crawler_test` | `flyer-crawler-test` | Testing |
---
## Production Database Setup
### Step 1: Install PostgreSQL
@@ -34,15 +45,19 @@ sudo -u postgres psql
Run the following SQL commands (replace `'a_very_strong_password'` with a secure password):
```sql
-- Create a new role for your application
CREATE ROLE flyer_crawler_user WITH LOGIN PASSWORD 'a_very_strong_password';
-- Create the production role
CREATE ROLE flyer_crawler_prod WITH LOGIN PASSWORD 'a_very_strong_password';
-- Create the production database
CREATE DATABASE "flyer-crawler-prod" WITH OWNER = flyer_crawler_user;
CREATE DATABASE "flyer-crawler-prod" WITH OWNER = flyer_crawler_prod;
-- Connect to the new database
\c "flyer-crawler-prod"
-- Grant schema privileges
ALTER SCHEMA public OWNER TO flyer_crawler_prod;
GRANT CREATE, USAGE ON SCHEMA public TO flyer_crawler_prod;
-- Install required extensions (must be done as superuser)
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
@@ -57,7 +72,7 @@ CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
Navigate to your project directory and run:
```bash
psql -U flyer_crawler_user -d "flyer-crawler-prod" -f sql/master_schema_rollup.sql
psql -U flyer_crawler_prod -d "flyer-crawler-prod" -f sql/master_schema_rollup.sql
```
This creates all tables, functions, triggers, and seeds essential data (categories, master items).
@@ -67,7 +82,7 @@ This creates all tables, functions, triggers, and seeds essential data (categori
Set the required environment variables and run the seed script:
```bash
export DB_USER=flyer_crawler_user
export DB_USER=flyer_crawler_prod
export DB_PASSWORD=your_password
export DB_NAME="flyer-crawler-prod"
export DB_HOST=localhost
@@ -88,20 +103,24 @@ sudo -u postgres psql
```
```sql
-- Create the test role
CREATE ROLE flyer_crawler_test WITH LOGIN PASSWORD 'a_very_strong_password';
-- Create the test database
CREATE DATABASE "flyer-crawler-test" WITH OWNER = flyer_crawler_user;
CREATE DATABASE "flyer-crawler-test" WITH OWNER = flyer_crawler_test;
-- Connect to the test database
\c "flyer-crawler-test"
-- Grant schema privileges (required for test runner to reset schema)
ALTER SCHEMA public OWNER TO flyer_crawler_test;
GRANT CREATE, USAGE ON SCHEMA public TO flyer_crawler_test;
-- Install required extensions
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- Grant schema ownership (required for test runner to reset schema)
ALTER SCHEMA public OWNER TO flyer_crawler_user;
-- Exit
\q
```
@@ -110,12 +129,28 @@ ALTER SCHEMA public OWNER TO flyer_crawler_user;
Ensure these secrets are set in your Gitea repository settings:
| Secret | Description |
| ------------- | ------------------------------------------ |
| `DB_HOST` | Database hostname (e.g., `localhost`) |
| `DB_PORT` | Database port (e.g., `5432`) |
| `DB_USER` | Database user (e.g., `flyer_crawler_user`) |
| `DB_PASSWORD` | Database password |
**Shared:**
| Secret | Description |
| --------- | ------------------------------------- |
| `DB_HOST` | Database hostname (e.g., `localhost`) |
| `DB_PORT` | Database port (e.g., `5432`) |
**Production-specific:**
| Secret | Description |
| ------------------ | ----------------------------------------------- |
| `DB_USER_PROD` | Production database user (`flyer_crawler_prod`) |
| `DB_PASSWORD_PROD` | Production database password |
| `DB_DATABASE_PROD` | Production database name (`flyer-crawler-prod`) |
**Test-specific:**
| Secret | Description |
| ------------------ | ----------------------------------------- |
| `DB_USER_TEST` | Test database user (`flyer_crawler_test`) |
| `DB_PASSWORD_TEST` | Test database password |
| `DB_DATABASE_TEST` | Test database name (`flyer-crawler-test`) |
---
@@ -135,7 +170,7 @@ This approach is faster than creating/destroying databases and doesn't require s
## Connecting to Production Database
```bash
psql -h localhost -U flyer_crawler_user -d "flyer-crawler-prod" -W
psql -h localhost -U flyer_crawler_prod -d "flyer-crawler-prod" -W
```
---
@@ -149,7 +184,7 @@ SELECT PostGIS_Full_Version();
Example output:
```
```text
PostgreSQL 14.19 (Ubuntu 14.19-0ubuntu0.22.04.1)
POSTGIS="3.2.0 c3e3cc0" GEOS="3.10.2-CAPI-1.16.0" PROJ="8.2.1"
```
@@ -171,13 +206,13 @@ POSTGIS="3.2.0 c3e3cc0" GEOS="3.10.2-CAPI-1.16.0" PROJ="8.2.1"
### Create a Backup
```bash
pg_dump -U flyer_crawler_user -d "flyer-crawler-prod" -F c -f backup.dump
pg_dump -U flyer_crawler_prod -d "flyer-crawler-prod" -F c -f backup.dump
```
### Restore from Backup
```bash
pg_restore -U flyer_crawler_user -d "flyer-crawler-prod" -c backup.dump
pg_restore -U flyer_crawler_prod -d "flyer-crawler-prod" -c backup.dump
```
---

View File

@@ -61,14 +61,16 @@ See [INSTALL.md](INSTALL.md) for detailed setup instructions.
This project uses environment variables for configuration (no `.env` files). Key variables:
| Variable | Description |
| ----------------------------------- | -------------------------------- |
| `DB_HOST`, `DB_USER`, `DB_PASSWORD` | PostgreSQL credentials |
| `DB_DATABASE_PROD` | Production database name |
| `JWT_SECRET` | Authentication token signing key |
| `VITE_GOOGLE_GENAI_API_KEY` | Google Gemini API key |
| `GOOGLE_MAPS_API_KEY` | Google Maps Geocoding API key |
| `REDIS_PASSWORD_PROD` | Redis password |
| Variable | Description |
| -------------------------------------------- | -------------------------------- |
| `DB_HOST` | PostgreSQL host |
| `DB_USER_PROD`, `DB_PASSWORD_PROD` | Production database credentials |
| `DB_USER_TEST`, `DB_PASSWORD_TEST` | Test database credentials |
| `DB_DATABASE_PROD`, `DB_DATABASE_TEST` | Database names |
| `JWT_SECRET` | Authentication token signing key |
| `VITE_GOOGLE_GENAI_API_KEY` | Google Gemini API key |
| `GOOGLE_MAPS_API_KEY` | Google Maps Geocoding API key |
| `REDIS_PASSWORD_PROD`, `REDIS_PASSWORD_TEST` | Redis passwords |
See [INSTALL.md](INSTALL.md) for the complete list.

19
certs/localhost.crt Normal file
View File

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDCTCCAfGgAwIBAgIUHhZUK1vmww2wCepWPuVcU6d27hMwDQYJKoZIhvcNAQEL
BQAwFDESMBAGA1UEAwwJbG9jYWxob3N0MB4XDTI2MDExODAyMzM0NFoXDTI3MDEx
ODAyMzM0NFowFDESMBAGA1UEAwwJbG9jYWxob3N0MIIBIjANBgkqhkiG9w0BAQEF
AAOCAQ8AMIIBCgKCAQEAuUJGtSZzd+ZpLi+efjrkxJJNfVxVz2VLhknNM2WKeOYx
JTK/VaTYq5hrczy6fEUnMhDAJCgEPUFlOK3vn1gFJKNMN8m7arkLVk6PYtrx8CTw
w78Q06FLITr6hR0vlJNpN4MsmGxYwUoUpn1j5JdfZF7foxNAZRiwoopf7ZJxltDu
PIuFjmVZqdzR8c6vmqIqdawx/V6sL9fizZr+CDH3oTsTUirn2qM+1ibBtPDiBvfX
omUsr6MVOcTtvnMvAdy9NfV88qwF7MEWBGCjXkoT1bKCLD8hjn8l7GjRmPcmMFE2
GqWEvfJiFkBK0CgSHYEUwzo0UtVNeQr0k0qkDRub6QIDAQABo1MwUTAdBgNVHQ4E
FgQU5VeD67yFLV0QNYbHaJ6u9cM6UbkwHwYDVR0jBBgwFoAU5VeD67yFLV0QNYbH
aJ6u9cM6UbkwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEABueA
8ujAD+yjeP5dTgqQH1G0hlriD5LmlJYnktaLarFU+y+EZlRFwjdORF/vLPwSG+y7
CLty/xlmKKQop70QzQ5jtJcsWzUjww8w1sO3AevfZlIF3HNhJmt51ihfvtJ7DVCv
CNyMeYO0pBqRKwOuhbG3EtJgyV7MF8J25UEtO4t+GzX3jcKKU4pWP+kyLBVfeDU3
MQuigd2LBwBQQFxZdpYpcXVKnAJJlHZIt68ycO1oSBEJO9fIF0CiAlC6ITxjtYtz
oCjd6cCLKMJiC6Zg7t1Q17vGl+FdGyQObSsiYsYO9N3CVaeDdpyGCH0Rfa0+oZzu
a5U9/l1FHlvpX980bw==
-----END CERTIFICATE-----

28
certs/localhost.key Normal file
View File

@@ -0,0 +1,28 @@
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC5Qka1JnN35mku
L55+OuTEkk19XFXPZUuGSc0zZYp45jElMr9VpNirmGtzPLp8RScyEMAkKAQ9QWU4
re+fWAUko0w3ybtquQtWTo9i2vHwJPDDvxDToUshOvqFHS+Uk2k3gyyYbFjBShSm
fWPkl19kXt+jE0BlGLCiil/tknGW0O48i4WOZVmp3NHxzq+aoip1rDH9Xqwv1+LN
mv4IMfehOxNSKufaoz7WJsG08OIG99eiZSyvoxU5xO2+cy8B3L019XzyrAXswRYE
YKNeShPVsoIsPyGOfyXsaNGY9yYwUTYapYS98mIWQErQKBIdgRTDOjRS1U15CvST
SqQNG5vpAgMBAAECggEAAnv0Dw1Mv+rRy4ZyxtObEVPXPRzoxnDDXzHP4E16BTye
Fc/4pSBUIAUn2bPvLz0/X8bMOa4dlDcIv7Eu9Pvns8AY70vMaUReA80fmtHVD2xX
1PCT0X3InnxRAYKstSIUIGs+aHvV5Z+iJ8F82soOStN1MU56h+JLWElL5deCPHq3
tLZT8wM9aOZlNG72kJ71+DlcViahynQj8+VrionOLNjTJ2Jv/ByjM3GMIuSdBrgd
Sl4YAcdn6ontjJGoTgI+e+qkBAPwMZxHarNGQgbS0yNVIJe7Lq4zIKHErU/ZSmpD
GzhdVNzhrjADNIDzS7G+pxtz+aUxGtmRvOyopy8GAQKBgQDEPp2mRM+uZVVT4e1j
pkKO1c3O8j24I5mGKwFqhhNs3qGy051RXZa0+cQNx63GokXQan9DIXzc/Il7Y72E
z9bCFbcSWnlP8dBIpWiJm+UmqLXRyY4N8ecNnzL5x+Tuxm5Ij+ixJwXgdz/TLNeO
MBzu+Qy738/l/cAYxwcF7mR7AQKBgQDxq1F95HzCxBahRU9OGUO4s3naXqc8xKCC
m3vbbI8V0Exse2cuiwtlPPQWzTPabLCJVvCGXNru98sdeOu9FO9yicwZX0knOABK
QfPyDeITsh2u0C63+T9DNn6ixI/T68bTs7DHawEYbpS7bR50BnbHbQrrOAo6FSXF
yC7+Te+o6QKBgQCXEWSmo/4D0Dn5Usg9l7VQ40GFd3EPmUgLwntal0/I1TFAyiom
gpcLReIogXhCmpSHthO1h8fpDfZ/p+4ymRRHYBQH6uHMKugdpEdu9zVVpzYgArp5
/afSEqVZJwoSzWoELdQA23toqiPV2oUtDdiYFdw5nDccY1RHPp8nb7amAQKBgQDj
f4DhYDxKJMmg21xCiuoDb4DgHoaUYA0xpii8cL9pq4KmBK0nVWFO1kh5Robvsa2m
PB+EfNjkaIPepLxWbOTUEAAASoDU2JT9UoTQcl1GaUAkFnpEWfBB14TyuNMkjinH
lLpvn72SQFbm8VvfoU4jgfTrZP/LmajLPR1v6/IWMQKBgBh9qvOTax/GugBAWNj3
ZvF99rHOx0rfotEdaPcRN66OOiSWILR9yfMsTvwt1V0VEj7OqO9juMRFuIyB57gd
Hs/zgbkuggqjr1dW9r22P/UpzpodAEEN2d52RSX8nkMOkH61JXlH2MyRX65kdExA
VkTDq6KwomuhrU3z0+r/MSOn
-----END PRIVATE KEY-----

271
docs/BUGSINK-SYNC.md Normal file
View File

@@ -0,0 +1,271 @@
# Bugsink to Gitea Issue Synchronization
This document describes the automated workflow for syncing Bugsink error tracking issues to Gitea tickets.
## Overview
The sync system automatically creates Gitea issues from unresolved Bugsink errors, ensuring all application errors are tracked and assignable.
**Key Points:**
- Runs **only on test/staging server** (not production)
- Syncs **all 6 Bugsink projects** (including production errors)
- Creates Gitea issues with full error context
- Marks synced issues as resolved in Bugsink
- Uses Redis db 15 for sync state tracking
## Architecture
```
TEST/STAGING SERVER
┌─────────────────────────────────────────────────┐
│ │
│ BullMQ Queue ──▶ Sync Worker ──▶ Redis DB 15 │
│ (bugsink-sync) (15min) (sync state) │
│ │ │
└──────────────────────┼───────────────────────────┘
┌─────────────┴─────────────┐
▼ ▼
┌─────────┐ ┌─────────┐
│ Bugsink │ │ Gitea │
│ (read) │ │ (write) │
└─────────┘ └─────────┘
```
## Bugsink Projects
| Project Slug | Type | Environment | Label Mapping |
| --------------------------------- | -------- | ----------- | ----------------------------------- |
| flyer-crawler-backend | Backend | Production | bug:backend + env:production |
| flyer-crawler-backend-test | Backend | Test | bug:backend + env:test |
| flyer-crawler-frontend | Frontend | Production | bug:frontend + env:production |
| flyer-crawler-frontend-test | Frontend | Test | bug:frontend + env:test |
| flyer-crawler-infrastructure | Infra | Production | bug:infrastructure + env:production |
| flyer-crawler-test-infrastructure | Infra | Test | bug:infrastructure + env:test |
## Gitea Labels
| Label | Color | ID |
| ------------------ | ------------------ | --- |
| bug:frontend | #e11d48 (Red) | 8 |
| bug:backend | #ea580c (Orange) | 9 |
| bug:infrastructure | #7c3aed (Purple) | 10 |
| env:production | #dc2626 (Dark Red) | 11 |
| env:test | #2563eb (Blue) | 12 |
| env:development | #6b7280 (Gray) | 13 |
| source:bugsink | #10b981 (Green) | 14 |
## Environment Variables
Add these to **test environment only** (`deploy-to-test.yml`):
```bash
# Bugsink API
BUGSINK_URL=https://bugsink.projectium.com
BUGSINK_API_TOKEN=<from Bugsink Settings > API Keys>
# Gitea API
GITEA_URL=https://gitea.projectium.com
GITEA_API_TOKEN=<personal access token with repo scope>
GITEA_OWNER=torbo
GITEA_REPO=flyer-crawler.projectium.com
# Sync Control
BUGSINK_SYNC_ENABLED=true # Only set true in test env
BUGSINK_SYNC_INTERVAL=15 # Minutes between sync runs
```
## Gitea Secrets to Add
Add these secrets in Gitea repository settings (Settings > Secrets):
| Secret Name | Value | Environment |
| ---------------------- | ---------------------- | ----------- |
| `BUGSINK_API_TOKEN` | API token from Bugsink | Test only |
| `GITEA_SYNC_TOKEN` | Personal access token | Test only |
| `BUGSINK_SYNC_ENABLED` | `true` | Test only |
## Redis Configuration
| Database | Purpose |
| -------- | ------------------------ |
| 0 | BullMQ production queues |
| 1 | BullMQ test queues |
| 15 | Bugsink sync state |
**Key Pattern:**
```
bugsink:synced:{issue_uuid}
```
**Value (JSON):**
```json
{
"gitea_issue_number": 42,
"synced_at": "2026-01-17T10:30:00Z",
"project": "flyer-crawler-frontend-test",
"title": "[TypeError] t.map is not a function"
}
```
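The key pattern and value above can be wrapped in a few small helpers. This is a minimal sketch; the names `syncKey`, `SyncRecord`, `encodeRecord`, and `decodeRecord` are illustrative, not taken from the codebase:

```typescript
// Sync-state record stored in Redis db 15, one key per Bugsink issue.
interface SyncRecord {
  gitea_issue_number: number;
  synced_at: string; // ISO timestamp
  project: string;
  title: string;
}

// Build the Redis key for a Bugsink issue UUID.
function syncKey(issueUuid: string): string {
  return `bugsink:synced:${issueUuid}`;
}

// Serialize the record for storage.
function encodeRecord(r: SyncRecord): string {
  return JSON.stringify(r);
}

// Parse a stored value; returns null for missing or corrupt entries
// so a bad key degrades to "not yet synced" rather than crashing the worker.
function decodeRecord(raw: string | null): SyncRecord | null {
  if (raw === null) return null;
  try {
    return JSON.parse(raw) as SyncRecord;
  } catch {
    return null;
  }
}
```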
## Sync Workflow
1. **Trigger**: Runs every 15 minutes (or manually via the admin API)
2. **Fetch**: List unresolved issues from all 6 Bugsink projects
3. **Check**: Skip issues already in Redis sync state
4. **Create**: Create Gitea issue with labels and full context
5. **Record**: Store sync mapping in Redis db 15
6. **Resolve**: Mark issue as resolved in Bugsink
## Issue Template
Gitea issues created by the sync follow this format:
```markdown
## Error Details
| Field | Value |
| ------------ | ----------------------- |
| **Type** | TypeError |
| **Message** | t.map is not a function |
| **Platform** | javascript |
| **Level** | error |
## Occurrence Statistics
- **First Seen**: 2026-01-13 18:24:22 UTC
- **Last Seen**: 2026-01-16 05:03:02 UTC
- **Total Occurrences**: 4
## Request Context
- **URL**: GET https://flyer-crawler-test.projectium.com/
## Stacktrace
<details>
<summary>Click to expand</summary>
[Full stacktrace]
</details>
---
**Bugsink Issue**: https://bugsink.projectium.com/issues/{id}
**Project**: flyer-crawler-frontend-test
```
## Admin Endpoints
### Manual Sync Trigger
```bash
POST /api/admin/bugsink/sync
Authorization: Bearer <admin_jwt>
# Response
{
"success": true,
"data": {
"synced": 3,
"skipped": 12,
"failed": 0,
"duration_ms": 2340
}
}
```
### Sync Status
```bash
GET /api/admin/bugsink/sync/status
Authorization: Bearer <admin_jwt>
# Response
{
"success": true,
"data": {
"enabled": true,
"last_run": "2026-01-17T10:30:00Z",
"next_run": "2026-01-17T10:45:00Z",
"total_synced": 47
}
}
```
## Files to Create
| File | Purpose |
| -------------------------------------- | --------------------- |
| `src/services/bugsinkSync.server.ts` | Core sync logic |
| `src/services/bugsinkClient.server.ts` | Bugsink HTTP client |
| `src/services/giteaClient.server.ts` | Gitea HTTP client |
| `src/types/bugsink.ts` | TypeScript interfaces |
| `src/routes/admin/bugsink-sync.ts` | Admin endpoints |
## Files to Modify
| File | Changes |
| ------------------------------------- | ------------------------- |
| `src/services/queues.server.ts` | Add `bugsinkSyncQueue` |
| `src/services/workers.server.ts` | Add sync worker |
| `src/config/env.ts` | Add bugsink config schema |
| `.env.example` | Document new variables |
| `.gitea/workflows/deploy-to-test.yml` | Pass secrets |
## Implementation Phases
### Phase 1: Core Infrastructure
- [ ] Add env vars to `env.ts` schema
- [ ] Create BugsinkClient service
- [ ] Create GiteaClient service
- [ ] Add Redis db 15 connection
### Phase 2: Sync Logic
- [ ] Create BugsinkSyncService
- [ ] Add bugsink-sync queue
- [ ] Add sync worker
- [ ] Create TypeScript types
### Phase 3: Integration
- [ ] Add admin endpoints
- [ ] Update deploy-to-test.yml
- [ ] Add Gitea secrets
- [ ] End-to-end testing
## Troubleshooting
### Sync not running
1. Check `BUGSINK_SYNC_ENABLED` is `true`
2. Verify worker is running: `GET /api/admin/workers/status`
3. Check Bull Board: `/api/admin/jobs`
### Duplicate issues created
1. Check Redis db 15 connectivity
2. Verify sync state keys exist: `redis-cli -n 15 KEYS "bugsink:*"`
### Issues not resolving in Bugsink
1. Verify `BUGSINK_API_TOKEN` has write permissions
2. Check worker logs for API errors
### Missing stacktrace in Gitea issue
1. Source maps may not be uploaded
2. Bugsink API may have returned partial data
3. Check worker logs for fetch errors
## Related Documentation
- [ADR-054: Bugsink-Gitea Sync](./adr/0054-bugsink-gitea-issue-sync.md)
- [ADR-006: Background Job Processing](./adr/0006-background-job-processing-and-task-queues.md)
- [ADR-015: Error Tracking](./adr/0015-application-performance-monitoring-and-error-tracking.md)


@@ -42,9 +42,9 @@ jobs:
env:
DB_HOST: ${{ secrets.DB_HOST }}
DB_PORT: ${{ secrets.DB_PORT }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_NAME: ${{ secrets.DB_NAME_PROD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
steps:
- name: Validate Secrets


@@ -0,0 +1,337 @@
# ADR-054: Bugsink to Gitea Issue Synchronization
**Date**: 2026-01-17
**Status**: Proposed
## Context
The application uses Bugsink (Sentry-compatible self-hosted error tracking) to capture runtime errors across 6 projects:
| Project | Type | Environment |
| --------------------------------- | -------------- | ------------ |
| flyer-crawler-backend | Backend | Production |
| flyer-crawler-backend-test | Backend | Test/Staging |
| flyer-crawler-frontend | Frontend | Production |
| flyer-crawler-frontend-test | Frontend | Test/Staging |
| flyer-crawler-infrastructure | Infrastructure | Production |
| flyer-crawler-test-infrastructure | Infrastructure | Test/Staging |
Currently, errors remain in Bugsink until manually reviewed. There is no automated workflow to:
1. Create trackable tickets for errors
2. Assign errors to developers
3. Track resolution progress
4. Prevent errors from being forgotten
## Decision
Implement an automated background worker that synchronizes unresolved Bugsink issues to Gitea as trackable tickets. The sync worker will:
1. **Run only on the test/staging server** (not production, not dev container)
2. **Poll all 6 Bugsink projects** for unresolved issues
3. **Create Gitea issues** with full error context
4. **Mark synced issues as resolved** in Bugsink (to prevent re-polling)
5. **Track sync state in Redis** to ensure idempotency
### Why Test/Staging Only?
- The sync worker is a background service that needs API tokens for both Bugsink and Gitea
- Running on test/staging provides a single sync point without duplicating infrastructure
- All 6 Bugsink projects (including production) are synced from this one worker
- Production server stays focused on serving users, not running sync jobs
## Architecture
### Component Overview
```
┌─────────────────────────────────────────────────────────────────────┐
│ TEST/STAGING SERVER │
│ │
│ ┌──────────────────┐ ┌──────────────────┐ ┌───────────────┐ │
│ │ BullMQ Queue │───▶│ Sync Worker │───▶│ Redis DB 15 │ │
│ │ bugsink-sync │ │ (15min repeat) │ │ Sync State │ │
│ └──────────────────┘ └────────┬─────────┘ └───────────────┘ │
│ │ │
└───────────────────────────────────┼──────────────────────────────────┘
┌───────────────┴───────────────┐
▼ ▼
┌──────────────┐ ┌──────────────┐
│ Bugsink │ │ Gitea │
│ (6 projects) │ │ (1 repo) │
└──────────────┘ └──────────────┘
```
### Queue Configuration
| Setting | Value | Rationale |
| --------------- | ---------------------- | -------------------------------------------- |
| Queue Name | `bugsink-sync` | Follows existing naming pattern |
| Repeat Interval | 15 minutes | Balances responsiveness with API rate limits |
| Retry Attempts | 3 | Standard retry policy |
| Backoff | Exponential (30s base) | Handles temporary API failures |
| Concurrency | 1 | Serial processing prevents race conditions |
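The table above translates roughly into the following BullMQ options. This is a sketch expressed as plain objects (the real definition would live in `queues.server.ts` and be passed to the queue/worker constructors):

```typescript
// BullMQ repeatable-job options matching the queue configuration table (sketch).
const bugsinkSyncJobOptions = {
  repeat: { every: 15 * 60 * 1000 },                        // 15-minute interval, in ms
  attempts: 3,                                              // standard retry policy
  backoff: { type: 'exponential' as const, delay: 30_000 }, // 30s base backoff
};

// Worker-side concurrency of 1 keeps processing serial,
// which prevents two sync runs from racing on the same Bugsink issue.
const bugsinkSyncWorkerOptions = { concurrency: 1 };
```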
### Redis Database Allocation
| Database | Usage | Owner |
| -------- | ------------------- | --------------- |
| 0 | BullMQ (Production) | Existing queues |
| 1 | BullMQ (Test) | Existing queues |
| 2-14 | Reserved | Future use |
| 15 | Bugsink Sync State | This feature |
### Redis Key Schema
```
bugsink:synced:{bugsink_issue_id}
└─ Value: JSON {
gitea_issue_number: number,
synced_at: ISO timestamp,
project: string,
title: string
}
```
### Gitea Labels
The following labels have been created in `torbo/flyer-crawler.projectium.com`:
| Label | ID | Color | Purpose |
| -------------------- | --- | ------------------ | ---------------------------------- |
| `bug:frontend` | 8 | #e11d48 (Red) | Frontend JavaScript/React errors |
| `bug:backend` | 9 | #ea580c (Orange) | Backend Node.js/API errors |
| `bug:infrastructure` | 10 | #7c3aed (Purple) | Infrastructure errors (Redis, PM2) |
| `env:production` | 11 | #dc2626 (Dark Red) | Production environment |
| `env:test` | 12 | #2563eb (Blue) | Test/staging environment |
| `env:development` | 13 | #6b7280 (Gray) | Development environment |
| `source:bugsink` | 14 | #10b981 (Green) | Auto-synced from Bugsink |
### Label Mapping
| Bugsink Project | Bug Label | Env Label |
| --------------------------------- | ------------------ | -------------- |
| flyer-crawler-backend | bug:backend | env:production |
| flyer-crawler-backend-test | bug:backend | env:test |
| flyer-crawler-frontend | bug:frontend | env:production |
| flyer-crawler-frontend-test | bug:frontend | env:test |
| flyer-crawler-infrastructure | bug:infrastructure | env:production |
| flyer-crawler-test-infrastructure | bug:infrastructure | env:test |
All synced issues also receive the `source:bugsink` label.
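The mapping table can be captured as a small lookup. A minimal sketch (the function name `labelsForProject` is illustrative):

```typescript
// Map each Bugsink project slug to its [bug, env] Gitea labels.
const LABEL_MAP: Record<string, [string, string]> = {
  'flyer-crawler-backend': ['bug:backend', 'env:production'],
  'flyer-crawler-backend-test': ['bug:backend', 'env:test'],
  'flyer-crawler-frontend': ['bug:frontend', 'env:production'],
  'flyer-crawler-frontend-test': ['bug:frontend', 'env:test'],
  'flyer-crawler-infrastructure': ['bug:infrastructure', 'env:production'],
  'flyer-crawler-test-infrastructure': ['bug:infrastructure', 'env:test'],
};

// Every synced issue also carries the source:bugsink label.
// Failing loudly on an unknown slug surfaces config drift early.
function labelsForProject(slug: string): string[] {
  const pair = LABEL_MAP[slug];
  if (!pair) throw new Error(`Unknown Bugsink project: ${slug}`);
  return [...pair, 'source:bugsink'];
}
```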
## Implementation Details
### New Files
| File | Purpose |
| -------------------------------------- | ------------------------------------------- |
| `src/services/bugsinkSync.server.ts` | Core synchronization logic |
| `src/services/bugsinkClient.server.ts` | HTTP client for Bugsink API |
| `src/services/giteaClient.server.ts` | HTTP client for Gitea API |
| `src/types/bugsink.ts` | TypeScript interfaces for Bugsink responses |
| `src/routes/admin/bugsink-sync.ts` | Admin endpoints for manual trigger |
### Modified Files
| File | Changes |
| ------------------------------------- | ------------------------------------- |
| `src/services/queues.server.ts` | Add `bugsinkSyncQueue` definition |
| `src/services/workers.server.ts` | Add sync worker implementation |
| `src/config/env.ts` | Add bugsink sync configuration schema |
| `.env.example` | Document new environment variables |
| `.gitea/workflows/deploy-to-test.yml` | Pass sync-related secrets |
### Environment Variables
```bash
# Bugsink Configuration
BUGSINK_URL=https://bugsink.projectium.com
BUGSINK_API_TOKEN=77deaa5e... # From Bugsink Settings > API Keys
# Gitea Configuration
GITEA_URL=https://gitea.projectium.com
GITEA_API_TOKEN=... # Personal access token with repo scope
GITEA_OWNER=torbo
GITEA_REPO=flyer-crawler.projectium.com
# Sync Control
BUGSINK_SYNC_ENABLED=false # Set true only in test environment
BUGSINK_SYNC_INTERVAL=15 # Minutes between sync runs
```
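A dependency-free sketch of the config parsing that `src/config/env.ts` would add (the real schema may use the project's existing validation library; the interface and function names here are assumptions):

```typescript
interface BugsinkSyncConfig {
  enabled: boolean;
  intervalMinutes: number;
  bugsinkUrl: string;
  giteaUrl: string;
}

function parseBugsinkSyncConfig(
  env: Record<string, string | undefined>,
): BugsinkSyncConfig {
  // Disabled by default so production servers and dev containers never sync.
  const enabled = env.BUGSINK_SYNC_ENABLED === 'true';
  const interval = Number(env.BUGSINK_SYNC_INTERVAL ?? '15');
  if (!Number.isInteger(interval) || interval <= 0) {
    throw new Error('BUGSINK_SYNC_INTERVAL must be a positive integer (minutes)');
  }
  return {
    enabled,
    intervalMinutes: interval,
    bugsinkUrl: env.BUGSINK_URL ?? '',
    giteaUrl: env.GITEA_URL ?? '',
  };
}
```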
### Gitea Issue Template
```markdown
## Error Details
| Field | Value |
| ------------ | --------------- |
| **Type** | {error_type} |
| **Message** | {error_message} |
| **Platform** | {platform} |
| **Level** | {level} |
## Occurrence Statistics
- **First Seen**: {first_seen}
- **Last Seen**: {last_seen}
- **Total Occurrences**: {count}
## Request Context
- **URL**: {request_url}
- **Additional Context**: {context}
## Stacktrace
<details>
<summary>Click to expand</summary>
{stacktrace}
</details>
---
**Bugsink Issue**: {bugsink_url}
**Project**: {project_slug}
**Trace ID**: {trace_id}
```
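Rendering this template is straightforward string assembly. A sketch, with the caveat that the `BugsinkIssue` field names below are illustrative and not the real Bugsink API shape:

```typescript
interface BugsinkIssue {
  type: string;
  message: string;
  platform: string;
  level: string;
  firstSeen: string;
  lastSeen: string;
  count: number;
  url: string;         // link back to the Bugsink issue
  projectSlug: string;
  stacktrace?: string; // may be missing if source maps weren't uploaded
}

// Build the Gitea issue body from a fetched Bugsink issue.
function renderIssueBody(i: BugsinkIssue): string {
  return [
    '## Error Details',
    '| Field | Value |',
    '| --- | --- |',
    `| **Type** | ${i.type} |`,
    `| **Message** | ${i.message} |`,
    `| **Platform** | ${i.platform} |`,
    `| **Level** | ${i.level} |`,
    '## Occurrence Statistics',
    `- **First Seen**: ${i.firstSeen}`,
    `- **Last Seen**: ${i.lastSeen}`,
    `- **Total Occurrences**: ${i.count}`,
    '## Stacktrace',
    '<details>',
    '<summary>Click to expand</summary>',
    // Graceful degradation: still create the issue without a trace.
    i.stacktrace ?? '_No stacktrace available_',
    '</details>',
    '---',
    `**Bugsink Issue**: ${i.url}`,
    `**Project**: ${i.projectSlug}`,
  ].join('\n');
}
```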
### Sync Workflow
```
1. Worker triggered (every 15 min or manual)
2. For each of 6 Bugsink projects:
a. List issues with status='unresolved'
b. For each issue:
i. Check Redis for existing sync record
ii. If already synced → skip
iii. Fetch issue details + stacktrace
iv. Create Gitea issue with labels
v. Store sync record in Redis
vi. Mark issue as 'resolved' in Bugsink
3. Log summary (synced: N, skipped: N, failed: N)
```
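The loop above can be sketched with dependency-injected clients, which also makes the synced/skipped/failed counting unit-testable. Interfaces and names here are illustrative, not the actual service signatures:

```typescript
interface IssueSummary { id: string; title: string; }
interface Bugsink {
  listUnresolved(project: string): Promise<IssueSummary[]>;
  resolve(project: string, id: string): Promise<void>;
}
interface Gitea {
  createIssue(title: string): Promise<number>; // returns the Gitea issue number
}
interface SyncStore {
  has(id: string): Promise<boolean>;
  record(id: string, giteaIssue: number): Promise<void>;
}

// Sync one Bugsink project; returns the summary counts for logging.
async function syncProject(
  project: string, bugsink: Bugsink, gitea: Gitea, store: SyncStore,
): Promise<{ synced: number; skipped: number; failed: number }> {
  let synced = 0, skipped = 0, failed = 0;
  for (const issue of await bugsink.listUnresolved(project)) {
    // Idempotency check: skip anything already recorded in Redis.
    if (await store.has(issue.id)) { skipped++; continue; }
    try {
      const num = await gitea.createIssue(issue.title);
      await store.record(issue.id, num);        // record BEFORE resolving
      await bugsink.resolve(project, issue.id); // stop re-polling this issue
      synced++;
    } catch {
      failed++; // left unresolved in Bugsink; retried on the next run
    }
  }
  return { synced, skipped, failed };
}
```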
### Idempotency Guarantees
1. **Redis check before creation**: Prevents duplicate Gitea issues
2. **Atomic Redis write after Gitea create**: Ensures state consistency
3. **Query only unresolved issues**: Resolved issues won't appear in polls
4. **No TTL on Redis keys**: Permanent sync history
## Consequences
### Positive
1. **Visibility**: All application errors become trackable tickets
2. **Accountability**: Errors can be assigned to developers
3. **History**: Complete audit trail of when errors were discovered and resolved
4. **Integration**: Errors appear alongside feature work in Gitea
5. **Automation**: No manual error triage required
### Negative
1. **API Dependencies**: Requires both Bugsink and Gitea APIs to be available
2. **Token Management**: Additional secrets to manage in CI/CD
3. **Potential Noise**: High-frequency errors could create many tickets (mitigated by Bugsink's issue grouping)
4. **Single Point**: Sync only runs on test server (if test server is down, no sync occurs)
### Risks & Mitigations
| Risk | Mitigation |
| ----------------------- | ------------------------------------------------- |
| Bugsink API rate limits | 15-minute polling interval |
| Gitea API rate limits | Sequential processing with delays |
| Redis connection issues | Reuse existing connection patterns |
| Duplicate issues | Redis tracking + idempotent checks |
| Missing stacktrace | Graceful degradation (create issue without trace) |
## Admin Interface
### Manual Sync Endpoint
```
POST /api/admin/bugsink/sync
Authorization: Bearer {admin_jwt}
Response:
{
"success": true,
"data": {
"synced": 3,
"skipped": 12,
"failed": 0,
"duration_ms": 2340
}
}
```
### Sync Status Endpoint
```
GET /api/admin/bugsink/sync/status
Authorization: Bearer {admin_jwt}
Response:
{
"success": true,
"data": {
"enabled": true,
"last_run": "2026-01-17T10:30:00Z",
"next_run": "2026-01-17T10:45:00Z",
"total_synced": 47,
"projects": [
{ "slug": "flyer-crawler-backend", "synced_count": 12 },
...
]
}
}
```
## Implementation Phases
### Phase 1: Core Infrastructure
- Add environment variables to `env.ts` schema
- Create `BugsinkClient` service (HTTP client)
- Create `GiteaClient` service (HTTP client)
- Add Redis db 15 connection for sync tracking
### Phase 2: Sync Logic
- Create `BugsinkSyncService` with sync logic
- Add `bugsink-sync` queue to `queues.server.ts`
- Add sync worker to `workers.server.ts`
- Create TypeScript types for API responses
### Phase 3: Integration
- Add admin endpoints for manual sync trigger
- Update `deploy-to-test.yml` with new secrets
- Add secrets to Gitea repository settings
- Test end-to-end in staging environment
### Phase 4: Documentation
- Update CLAUDE.md with sync information
- Create operational runbook for sync issues
## Future Enhancements
1. **Bi-directional sync**: Update Bugsink when Gitea issue is closed
2. **Smart deduplication**: Detect similar errors across projects
3. **Priority mapping**: High occurrence count → high priority label
4. **Slack/Discord notifications**: Alert on new critical errors
5. **Metrics dashboard**: Track error trends over time
## References
- [ADR-006: Background Job Processing](./0006-background-job-processing-and-task-queues.md)
- [ADR-015: Application Performance Monitoring](./0015-application-performance-monitoring-and-error-tracking.md)
- [Bugsink API Documentation](https://bugsink.com/docs/api/)
- [Gitea API Documentation](https://docs.gitea.io/en-us/api-usage/)


@@ -0,0 +1,349 @@
# Frontend Test Automation Plan
**Date**: 2026-01-18
**Status**: Awaiting Approval
**Related**: [2026-01-18-frontend-tests.md](../tests/2026-01-18-frontend-tests.md)
## Executive Summary
This plan formalizes the automated testing of 35+ API endpoints manually tested on 2026-01-18. The testing covered 7 major areas: end-to-end user flows, edge cases, queue behavior, authentication, performance, real-time features, and data integrity.
**Recommendation**: Most tests should be added as **integration tests** (Supertest-based), with select critical flows as **E2E tests**. This aligns with ADR-010 and ADR-040's guidance on testing economics.
---
## Analysis of Manual Tests vs Existing Coverage
### Current Test Coverage
| Test Type | Existing Files | Existing Tests |
| ----------- | -------------- | -------------- |
| Integration | 21 files       | 150+ tests     |
| E2E         | 9 files        | 40+ tests      |
### Gap Analysis
| Manual Test Area | Existing Coverage | Gap | Priority |
| -------------------------- | ------------------------- | --------------------------- | -------- |
| Budget API | budget.integration.test | Partial - add validation | Medium |
| Deals API | None | **New file needed** | Low |
| Reactions API | None | **New file needed** | Low |
| Gamification API | gamification.integration | Good coverage | None |
| Recipe API | recipe.integration.test | Add fork error, comment | Medium |
| Receipt API | receipt.integration.test | Good coverage | None |
| UPC API | upc.integration.test | Good coverage | None |
| Price History API | price.integration.test | Good coverage | None |
| Personalization API | public.routes.integration | Good coverage | None |
| Admin Routes | admin.integration.test | Add queue/trigger endpoints | Medium |
| Edge Cases (Area 2) | Scattered | **Consolidate/add** | High |
| Queue/Worker (Area 3) | Partial | Add admin trigger tests | Medium |
| Auth Edge Cases (Area 4) | auth.integration.test | Add token malformation | Medium |
| Performance (Area 5) | None | **Not recommended** | Skip |
| Real-time/Polling (Area 6) | notification.integration | Add job status polling | Low |
| Data Integrity (Area 7) | Scattered | **Consolidate** | High |
---
## Implementation Plan
### Phase 1: New Integration Test Files (Priority: High)
#### 1.1 Create `deals.integration.test.ts`
**Rationale**: Routes were unmounted until this testing session; no tests exist.
```typescript
// Tests to add:
describe('Deals API', () => {
it('GET /api/deals/best-watched-prices requires auth');
it('GET /api/deals/best-watched-prices returns watched items for user');
it('Returns empty array when no watched items');
});
```
**Estimated effort**: 30 minutes
#### 1.2 Create `reactions.integration.test.ts`
**Rationale**: Routes were unmounted until this testing session; no tests exist.
```typescript
// Tests to add:
describe('Reactions API', () => {
it('GET /api/reactions/summary/:targetType/:targetId returns counts');
it('POST /api/reactions/toggle requires auth');
it('POST /api/reactions/toggle toggles reaction on/off');
it('Returns validation error for invalid target_type');
it('Returns validation error for non-string entity_id');
});
```
**Estimated effort**: 45 minutes
#### 1.3 Create `edge-cases.integration.test.ts`
**Rationale**: Consolidate edge case tests discovered during manual testing.
```typescript
// Tests to add:
describe('Edge Cases', () => {
describe('File Upload Validation', () => {
it('Accepts small files');
it('Processes corrupt file with IMAGE_CONVERSION_FAILED');
it('Rejects wrong checksum format');
it('Rejects short checksum');
});
describe('Input Sanitization', () => {
it('Handles XSS payloads in shopping list names (stores as-is)');
it('Handles unicode/emoji in text fields');
it('Rejects null bytes in JSON');
it('Handles very long input strings');
});
describe('Authorization Boundaries', () => {
it('Cross-user access returns 404 (not 403)');
it('SQL injection in query params is safely handled');
});
});
```
**Estimated effort**: 1.5 hours
#### 1.4 Create `data-integrity.integration.test.ts`
**Rationale**: Consolidate FK/cascade/constraint tests.
```typescript
// Tests to add:
describe('Data Integrity', () => {
describe('Cascade Deletes', () => {
it('User deletion cascades to shopping lists, budgets, notifications');
it('Shopping list deletion cascades to items');
it('Admin cannot delete own account');
});
describe('FK Constraints', () => {
it('Rejects invalid FK references via API');
it('Rejects invalid FK references via direct DB');
});
describe('Unique Constraints', () => {
it('Duplicate email returns CONFLICT');
it('Duplicate flyer checksum is handled');
});
describe('CHECK Constraints', () => {
it('Budget period rejects invalid values');
it('Budget amount rejects negative values');
});
});
```
**Estimated effort**: 2 hours
---
### Phase 2: Extend Existing Integration Tests (Priority: Medium)
#### 2.1 Extend `budget.integration.test.ts`
Add validation edge cases discovered during manual testing:
```typescript
// Tests to add:
it('Rejects period="yearly" (only weekly/monthly allowed)');
it('Rejects negative amount_cents');
it('Rejects invalid date format');
it('Returns 404 for update on non-existent budget');
it('Returns 404 for delete on non-existent budget');
```
**Estimated effort**: 30 minutes
#### 2.2 Extend `admin.integration.test.ts`
Add queue and trigger endpoint tests:
```typescript
// Tests to add:
describe('Queue Management', () => {
it('GET /api/admin/queues/status returns all queue counts');
it('POST /api/admin/trigger/analytics-report enqueues job');
it('POST /api/admin/trigger/weekly-analytics enqueues job');
it('POST /api/admin/trigger/daily-deal-check enqueues job');
it('POST /api/admin/jobs/:queue/:id/retry retries failed job');
it('POST /api/admin/system/clear-cache clears Redis cache');
it('Returns validation error for invalid queue name');
it('Returns 404 for retry on non-existent job');
});
```
**Estimated effort**: 1 hour
#### 2.3 Extend `auth.integration.test.ts`
Add token malformation edge cases:
```typescript
// Tests to add:
describe('Token Edge Cases', () => {
it('Empty Bearer token returns Unauthorized');
it('Token without dots returns Unauthorized');
it('Token with 2 parts returns Unauthorized');
it('Token with invalid signature returns Unauthorized');
it('Lowercase "bearer" scheme is accepted');
it('Basic auth scheme returns Unauthorized');
it('Tampered token payload returns Unauthorized');
});
describe('Login Security', () => {
it('Wrong password and non-existent user return same error');
it('Forgot password returns same response for existing/non-existing');
});
```
**Estimated effort**: 45 minutes
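The token edge cases above all reduce to how the `Authorization` header is parsed before signature verification. A sketch of that extraction logic (the function name `extractBearerJwt` is illustrative, not the middleware's real API):

```typescript
// Return the raw JWT only for a well-formed "Bearer <a.b.c>" header.
// Scheme matching is case-insensitive, so "bearer" is accepted;
// "Basic", empty tokens, and non-3-part tokens are all rejected.
function extractBearerJwt(header: string | undefined): string | null {
  if (!header) return null;
  const [scheme, token, ...rest] = header.split(' ');
  if (rest.length > 0 || !token) return null;         // empty or extra parts
  if (scheme.toLowerCase() !== 'bearer') return null; // rejects Basic auth
  return token.split('.').length === 3 ? token : null; // JWT has 3 segments
}
```

Signature verification and payload-tampering checks still happen afterwards; this only covers the malformation cases listed in the tests.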
#### 2.4 Extend `recipe.integration.test.ts`
Add fork error case and comment tests:
```typescript
// Tests to add:
it('Fork fails for seed recipes (null user_id)');
it('POST /api/recipes/:id/comments adds comment');
it('GET /api/recipes/:id/comments returns comments');
```
**Estimated effort**: 30 minutes
#### 2.5 Extend `notification.integration.test.ts`
Add job status polling tests:
```typescript
// Tests to add:
describe('Job Status Polling', () => {
it('GET /api/ai/jobs/:id/status returns completed job');
it('GET /api/ai/jobs/:id/status returns failed job with error');
it('GET /api/ai/jobs/:id/status returns 404 for non-existent');
it('Job status endpoint works without auth (public)');
});
```
**Estimated effort**: 30 minutes
---
### Phase 3: E2E Tests (Priority: Low-Medium)
Per ADR-040, E2E tests should be limited to critical user flows. The existing E2E tests cover the main flows well. However, we should consider:
#### 3.1 Do NOT Add
- Performance tests (handle via monitoring, not E2E)
- Pagination tests (integration level is sufficient)
- Cache behavior tests (integration level is sufficient)
#### 3.2 Consider Adding (Optional)
**Budget flow E2E** - If budget management becomes a critical feature:
```typescript
// budget-journey.e2e.test.ts
describe('Budget Journey', () => {
it('User creates budget → tracks spending → sees analysis');
});
```
**Recommendation**: Defer unless budget becomes a core value proposition.
---
### Phase 4: Documentation Updates
#### 4.1 Update ADR-010
Add the newly discovered API gotchas to the testing documentation:
- `entity_id` must be STRING in reactions
- `customItemName` (camelCase) in shopping list items
- `scan_source` must be `manual_entry`, not `manual`
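The first gotcha can be demonstrated with a small validator sketch. The allowed `target_type` values below are illustrative placeholders, not the real schema:

```typescript
// Illustrative set; the real route defines its own allowed target types.
const VALID_TARGET_TYPES = new Set(['recipe', 'comment']);

// Validate a reactions-toggle payload per the gotchas above.
function validateReactionPayload(
  body: { target_type?: unknown; entity_id?: unknown },
): string[] {
  const errors: string[] = [];
  if (typeof body.target_type !== 'string' || !VALID_TARGET_TYPES.has(body.target_type)) {
    errors.push('target_type is invalid');
  }
  // entity_id must be a STRING, even when it looks numeric.
  if (typeof body.entity_id !== 'string') {
    errors.push('entity_id must be a string');
  }
  return errors;
}
```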
#### 4.2 Update CLAUDE.md
Add API reference section for correct endpoint calls (already captured in test doc).
---
## Tests NOT Recommended
Per ADR-040 (Testing Economics), the following tests from the manual session should NOT be automated:
| Test Area | Reason |
| --------------------------- | ------------------------------------------------- |
| Performance benchmarks | Use APM/monitoring tools instead (see ADR-015) |
| Concurrent request handling | Connection pool behavior is framework-level |
| Cache hit/miss timing | Observable via Redis metrics, not test assertions |
| Response time consistency | Better suited for production monitoring |
| WebSocket/SSE | Not implemented - polling is the architecture |
---
## Implementation Timeline
| Phase | Description | Effort | Priority |
| --------- | ------------------------------ | ------------ | -------- |
| 1.1 | deals.integration.test.ts | 30 min | High |
| 1.2 | reactions.integration.test.ts | 45 min | High |
| 1.3 | edge-cases.integration.test.ts | 1.5 hours | High |
| 1.4       | data-integrity.integration.test.ts | 2 hours  | High     |
| 2.1 | Extend budget tests | 30 min | Medium |
| 2.2 | Extend admin tests | 1 hour | Medium |
| 2.3 | Extend auth tests | 45 min | Medium |
| 2.4 | Extend recipe tests | 30 min | Medium |
| 2.5 | Extend notification tests | 30 min | Medium |
| 4.x | Documentation updates | 30 min | Low |
| **Total** | | **~8 hours** | |
---
## Verification Strategy
For each new test file, verify by running:
```bash
# In dev container
npm run test:integration -- --run src/tests/integration/<file>.test.ts
```
All tests should:
1. Pass consistently (no flaky tests)
2. Run in isolation (no shared state)
3. Clean up test data (use `cleanupDb()`)
4. Follow existing patterns in the codebase
---
## Risks and Mitigations
| Risk | Mitigation |
| ------------------------------------ | --------------------------------------------------- |
| Test flakiness from async operations | Use proper waitFor/polling utilities |
| Database state leakage between tests | Strict cleanup in afterEach/afterAll |
| Queue state affecting test isolation | Drain/pause queues in tests that interact with them |
| Port conflicts | Use dedicated test port (3099) |
---
## Approval Request
Please review and approve this plan. Upon approval, implementation will proceed in priority order (Phase 1 first).
**Questions for clarification**:
1. Should the deals/reactions routes remain mounted, or was that a temporary fix?
2. Is the recipe fork failure for seed recipes expected behavior or a bug to fix?
3. Any preference on splitting Phase 1 into multiple PRs vs one large PR?

File diff suppressed because it is too large.

package-lock.json (generated, 458 lines changed)

@@ -1,12 +1,12 @@
{
"name": "flyer-crawler",
"version": "0.9.111",
"version": "0.11.11",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "flyer-crawler",
"version": "0.9.111",
"version": "0.11.11",
"dependencies": {
"@bull-board/api": "^6.14.2",
"@bull-board/express": "^6.14.2",
@@ -55,6 +55,7 @@
"zxing-wasm": "^2.2.4"
},
"devDependencies": {
"@sentry/vite-plugin": "^4.6.2",
"@tailwindcss/postcss": "4.1.17",
"@tanstack/react-query-devtools": "^5.91.2",
"@testcontainers/postgresql": "^11.8.1",
@@ -4634,6 +4635,16 @@
"node": ">=18"
}
},
"node_modules/@sentry/babel-plugin-component-annotate": {
"version": "4.6.2",
"resolved": "https://registry.npmjs.org/@sentry/babel-plugin-component-annotate/-/babel-plugin-component-annotate-4.6.2.tgz",
"integrity": "sha512-6VTjLJXtIHKwxMmThtZKwi1+hdklLNzlbYH98NhbH22/Vzb/c6BlSD2b5A0NGN9vFB807rD4x4tuP+Su7BxQXQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 14"
}
},
"node_modules/@sentry/browser": {
"version": "10.32.1",
"resolved": "https://registry.npmjs.org/@sentry/browser/-/browser-10.32.1.tgz",
@@ -4650,6 +4661,258 @@
"node": ">=18"
}
},
"node_modules/@sentry/bundler-plugin-core": {
"version": "4.6.2",
"resolved": "https://registry.npmjs.org/@sentry/bundler-plugin-core/-/bundler-plugin-core-4.6.2.tgz",
"integrity": "sha512-JkOc3JkVzi/fbXsFp8R9uxNKmBrPRaU4Yu4y1i3ihWfugqymsIYaN0ixLENZbGk2j4xGHIk20PAJzBJqBMTHew==",
"dev": true,
"license": "MIT",
"dependencies": {
"@babel/core": "^7.18.5",
"@sentry/babel-plugin-component-annotate": "4.6.2",
"@sentry/cli": "^2.57.0",
"dotenv": "^16.3.1",
"find-up": "^5.0.0",
"glob": "^10.5.0",
"magic-string": "0.30.8",
"unplugin": "1.0.1"
},
"engines": {
"node": ">= 14"
}
},
"node_modules/@sentry/bundler-plugin-core/node_modules/glob": {
"version": "10.5.0",
"resolved": "https://registry.npmjs.org/glob/-/glob-10.5.0.tgz",
"integrity": "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg==",
"dev": true,
"license": "ISC",
"dependencies": {
"foreground-child": "^3.1.0",
"jackspeak": "^3.1.2",
"minimatch": "^9.0.4",
"minipass": "^7.1.2",
"package-json-from-dist": "^1.0.0",
"path-scurry": "^1.11.1"
},
"bin": {
"glob": "dist/esm/bin.mjs"
},
"funding": {
"url": "https://github.com/sponsors/isaacs"
}
},
"node_modules/@sentry/bundler-plugin-core/node_modules/lru-cache": {
"version": "10.4.3",
"resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-10.4.3.tgz",
"integrity": "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==",
"dev": true,
"license": "ISC"
},
"node_modules/@sentry/bundler-plugin-core/node_modules/magic-string": {
"version": "0.30.8",
"resolved": "https://registry.npmjs.org/magic-string/-/magic-string-0.30.8.tgz",
"integrity": "sha512-ISQTe55T2ao7XtlAStud6qwYPZjE4GK1S/BeVPus4jrq6JuOnQ00YKQC581RWhR122W7msZV263KzVeLoqidyQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"@jridgewell/sourcemap-codec": "^1.4.15"
},
"engines": {
"node": ">=12"
}
},
"node_modules/@sentry/bundler-plugin-core/node_modules/path-scurry": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/path-scurry/-/path-scurry-1.11.1.tgz",
"integrity": "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA==",
"dev": true,
"license": "BlueOak-1.0.0",
"dependencies": {
"lru-cache": "^10.2.0",
"minipass": "^5.0.0 || ^6.0.2 || ^7.0.0"
},
"engines": {
"node": ">=16 || 14 >=14.18"
},
"funding": {
"url": "https://github.com/sponsors/isaacs"
}
},
"node_modules/@sentry/cli": {
"version": "2.58.4",
"resolved": "https://registry.npmjs.org/@sentry/cli/-/cli-2.58.4.tgz",
"integrity": "sha512-ArDrpuS8JtDYEvwGleVE+FgR+qHaOp77IgdGSacz6SZy6Lv90uX0Nu4UrHCQJz8/xwIcNxSqnN22lq0dH4IqTg==",
"dev": true,
"hasInstallScript": true,
"license": "FSL-1.1-MIT",
"dependencies": {
"https-proxy-agent": "^5.0.0",
"node-fetch": "^2.6.7",
"progress": "^2.0.3",
"proxy-from-env": "^1.1.0",
"which": "^2.0.2"
},
"bin": {
"sentry-cli": "bin/sentry-cli"
},
"engines": {
"node": ">= 10"
},
"optionalDependencies": {
"@sentry/cli-darwin": "2.58.4",
"@sentry/cli-linux-arm": "2.58.4",
"@sentry/cli-linux-arm64": "2.58.4",
"@sentry/cli-linux-i686": "2.58.4",
"@sentry/cli-linux-x64": "2.58.4",
"@sentry/cli-win32-arm64": "2.58.4",
"@sentry/cli-win32-i686": "2.58.4",
"@sentry/cli-win32-x64": "2.58.4"
}
},
"node_modules/@sentry/cli-darwin": {
"version": "2.58.4",
"resolved": "https://registry.npmjs.org/@sentry/cli-darwin/-/cli-darwin-2.58.4.tgz",
"integrity": "sha512-kbTD+P4X8O+nsNwPxCywtj3q22ecyRHWff98rdcmtRrvwz8CKi/T4Jxn/fnn2i4VEchy08OWBuZAqaA5Kh2hRQ==",
"dev": true,
"license": "FSL-1.1-MIT",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": ">=10"
}
},
"node_modules/@sentry/cli-linux-arm": {
"version": "2.58.4",
"resolved": "https://registry.npmjs.org/@sentry/cli-linux-arm/-/cli-linux-arm-2.58.4.tgz",
"integrity": "sha512-rdQ8beTwnN48hv7iV7e7ZKucPec5NJkRdrrycMJMZlzGBPi56LqnclgsHySJ6Kfq506A2MNuQnKGaf/sBC9REA==",
"cpu": [
"arm"
],
"dev": true,
"license": "FSL-1.1-MIT",
"optional": true,
"os": [
"linux",
"freebsd",
"android"
],
"engines": {
"node": ">=10"
}
},
"node_modules/@sentry/cli-linux-arm64": {
"version": "2.58.4",
"resolved": "https://registry.npmjs.org/@sentry/cli-linux-arm64/-/cli-linux-arm64-2.58.4.tgz",
"integrity": "sha512-0g0KwsOozkLtzN8/0+oMZoOuQ0o7W6O+hx+ydVU1bktaMGKEJLMAWxOQNjsh1TcBbNIXVOKM/I8l0ROhaAb8Ig==",
"cpu": [
"arm64"
],
"dev": true,
"license": "FSL-1.1-MIT",
"optional": true,
"os": [
"linux",
"freebsd",
"android"
],
"engines": {
"node": ">=10"
}
},
"node_modules/@sentry/cli-linux-i686": {
"version": "2.58.4",
"resolved": "https://registry.npmjs.org/@sentry/cli-linux-i686/-/cli-linux-i686-2.58.4.tgz",
"integrity": "sha512-NseoIQAFtkziHyjZNPTu1Gm1opeQHt7Wm1LbLrGWVIRvUOzlslO9/8i6wETUZ6TjlQxBVRgd3Q0lRBG2A8rFYA==",
"cpu": [
"x86",
"ia32"
],
"dev": true,
"license": "FSL-1.1-MIT",
"optional": true,
"os": [
"linux",
"freebsd",
"android"
],
"engines": {
"node": ">=10"
}
},
"node_modules/@sentry/cli-linux-x64": {
"version": "2.58.4",
"resolved": "https://registry.npmjs.org/@sentry/cli-linux-x64/-/cli-linux-x64-2.58.4.tgz",
"integrity": "sha512-d3Arz+OO/wJYTqCYlSN3Ktm+W8rynQ/IMtSZLK8nu0ryh5mJOh+9XlXY6oDXw4YlsM8qCRrNquR8iEI1Y/IH+Q==",
"cpu": [
"x64"
],
"dev": true,
"license": "FSL-1.1-MIT",
"optional": true,
"os": [
"linux",
"freebsd",
"android"
],
"engines": {
"node": ">=10"
}
},
"node_modules/@sentry/cli-win32-arm64": {
"version": "2.58.4",
"resolved": "https://registry.npmjs.org/@sentry/cli-win32-arm64/-/cli-win32-arm64-2.58.4.tgz",
"integrity": "sha512-bqYrF43+jXdDBh0f8HIJU3tbvlOFtGyRjHB8AoRuMQv9TEDUfENZyCelhdjA+KwDKYl48R1Yasb4EHNzsoO83w==",
"cpu": [
"arm64"
],
"dev": true,
"license": "FSL-1.1-MIT",
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=10"
}
},
"node_modules/@sentry/cli-win32-i686": {
"version": "2.58.4",
"resolved": "https://registry.npmjs.org/@sentry/cli-win32-i686/-/cli-win32-i686-2.58.4.tgz",
"integrity": "sha512-3triFD6jyvhVcXOmGyttf+deKZcC1tURdhnmDUIBkiDPJKGT/N5xa4qAtHJlAB/h8L9jgYih9bvJnvvFVM7yug==",
"cpu": [
"x86",
"ia32"
],
"dev": true,
"license": "FSL-1.1-MIT",
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=10"
}
},
"node_modules/@sentry/cli-win32-x64": {
"version": "2.58.4",
"resolved": "https://registry.npmjs.org/@sentry/cli-win32-x64/-/cli-win32-x64-2.58.4.tgz",
"integrity": "sha512-cSzN4PjM1RsCZ4pxMjI0VI7yNCkxiJ5jmWncyiwHXGiXrV1eXYdQ3n1LhUYLZ91CafyprR0OhDcE+RVZ26Qb5w==",
"cpu": [
"x64"
],
"dev": true,
"license": "FSL-1.1-MIT",
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=10"
}
},
"node_modules/@sentry/core": {
"version": "10.32.1",
"resolved": "https://registry.npmjs.org/@sentry/core/-/core-10.32.1.tgz",
@@ -4765,6 +5028,20 @@
"react": "^16.14.0 || 17.x || 18.x || 19.x"
}
},
"node_modules/@sentry/vite-plugin": {
"version": "4.6.2",
"resolved": "https://registry.npmjs.org/@sentry/vite-plugin/-/vite-plugin-4.6.2.tgz",
"integrity": "sha512-hK9N50LlTaPlb2P1r87CFupU7MJjvtrp+Js96a2KDdiP8ViWnw4Gsa/OvA0pkj2wAFXFeBQMLS6g/SktTKG54w==",
"dev": true,
"license": "MIT",
"dependencies": {
"@sentry/bundler-plugin-core": "4.6.2",
"unplugin": "1.0.1"
},
"engines": {
"node": ">= 14"
}
},
"node_modules/@smithy/abort-controller": {
"version": "4.2.7",
"resolved": "https://registry.npmjs.org/@smithy/abort-controller/-/abort-controller-4.2.7.tgz",
@@ -7036,6 +7313,33 @@
"url": "https://github.com/chalk/ansi-styles?sponsor=1"
}
},
"node_modules/anymatch": {
"version": "3.1.3",
"resolved": "https://registry.npmjs.org/anymatch/-/anymatch-3.1.3.tgz",
"integrity": "sha512-KMReFUr0B4t+D+OBkjR3KYqvocp2XaSzO55UcB6mgQMd3KbcE+mWTyvVV7D/zsdEbNnV6acZUutkiHQXvTr1Rw==",
"dev": true,
"license": "ISC",
"dependencies": {
"normalize-path": "^3.0.0",
"picomatch": "^2.0.4"
},
"engines": {
"node": ">= 8"
}
},
"node_modules/anymatch/node_modules/picomatch": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz",
"integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=8.6"
},
"funding": {
"url": "https://github.com/sponsors/jonschlinkert"
}
},
"node_modules/append-field": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/append-field/-/append-field-1.0.0.tgz",
@@ -7691,6 +7995,19 @@
"node": "*"
}
},
"node_modules/binary-extensions": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-2.3.0.tgz",
"integrity": "sha512-Ceh+7ox5qe7LJuLHoY0feh3pHuUDHAcRUeyL2VYghZwfpkNIy/+8Ocg0a3UuSoYzavmylwuLWQOf3hl0jjMMIw==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=8"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/bl": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/bl/-/bl-4.1.0.tgz",
@@ -8153,6 +8470,44 @@
"node": ">=8"
}
},
"node_modules/chokidar": {
"version": "3.6.0",
"resolved": "https://registry.npmjs.org/chokidar/-/chokidar-3.6.0.tgz",
"integrity": "sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw==",
"dev": true,
"license": "MIT",
"dependencies": {
"anymatch": "~3.1.2",
"braces": "~3.0.2",
"glob-parent": "~5.1.2",
"is-binary-path": "~2.1.0",
"is-glob": "~4.0.1",
"normalize-path": "~3.0.0",
"readdirp": "~3.6.0"
},
"engines": {
"node": ">= 8.10.0"
},
"funding": {
"url": "https://paulmillr.com/funding/"
},
"optionalDependencies": {
"fsevents": "~2.3.2"
}
},
"node_modules/chokidar/node_modules/glob-parent": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz",
"integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==",
"dev": true,
"license": "ISC",
"dependencies": {
"is-glob": "^4.0.1"
},
"engines": {
"node": ">= 6"
}
},
"node_modules/chownr": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/chownr/-/chownr-2.0.0.tgz",
@@ -9216,6 +9571,19 @@
"license": "MIT",
"peer": true
},
"node_modules/dotenv": {
"version": "16.6.1",
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-16.6.1.tgz",
"integrity": "sha512-uBq4egWHTcTt33a72vpSG0z3HnPuIl6NqYcTrKEg2azoEyl2hpW0zqlxysq2pK9HlDIHyHyakeYaYnSAwd8bow==",
"dev": true,
"license": "BSD-2-Clause",
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://dotenvx.com"
}
},
"node_modules/dunder-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
@@ -11615,6 +11983,19 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/is-binary-path": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz",
"integrity": "sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==",
"dev": true,
"license": "MIT",
"dependencies": {
"binary-extensions": "^2.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/is-boolean-object": {
"version": "1.2.2",
"resolved": "https://registry.npmjs.org/is-boolean-object/-/is-boolean-object-1.2.2.tgz",
@@ -15197,6 +15578,16 @@
],
"license": "MIT"
},
"node_modules/progress": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/progress/-/progress-2.0.3.tgz",
"integrity": "sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=0.4.0"
}
},
"node_modules/prop-types": {
"version": "15.8.1",
"resolved": "https://registry.npmjs.org/prop-types/-/prop-types-15.8.1.tgz",
@@ -15303,6 +15694,13 @@
"node": ">= 0.10"
}
},
"node_modules/proxy-from-env": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-1.1.0.tgz",
"integrity": "sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==",
"dev": true,
"license": "MIT"
},
"node_modules/pump": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/pump/-/pump-3.0.3.tgz",
@@ -15567,6 +15965,32 @@
"node": ">=10"
}
},
"node_modules/readdirp": {
"version": "3.6.0",
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-3.6.0.tgz",
"integrity": "sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA==",
"dev": true,
"license": "MIT",
"dependencies": {
"picomatch": "^2.2.1"
},
"engines": {
"node": ">=8.10.0"
}
},
"node_modules/readdirp/node_modules/picomatch": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz",
"integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=8.6"
},
"funding": {
"url": "https://github.com/sponsors/jonschlinkert"
}
},
"node_modules/real-require": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/real-require/-/real-require-0.2.0.tgz",
@@ -17782,6 +18206,19 @@
"node": ">= 0.8"
}
},
"node_modules/unplugin": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/unplugin/-/unplugin-1.0.1.tgz",
"integrity": "sha512-aqrHaVBWW1JVKBHmGo33T5TxeL0qWzfvjWokObHA9bYmN7eNDkwOxmLjhioHl9878qDFMAaT51XNroRyuz7WxA==",
"dev": true,
"license": "MIT",
"dependencies": {
"acorn": "^8.8.1",
"chokidar": "^3.5.3",
"webpack-sources": "^3.2.3",
"webpack-virtual-modules": "^0.5.0"
}
},
"node_modules/until-async": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/until-async/-/until-async-3.0.2.tgz",
@@ -18110,6 +18547,23 @@
"node": ">=20"
}
},
"node_modules/webpack-sources": {
"version": "3.3.3",
"resolved": "https://registry.npmjs.org/webpack-sources/-/webpack-sources-3.3.3.tgz",
"integrity": "sha512-yd1RBzSGanHkitROoPFd6qsrxt+oFhg/129YzheDGqeustzX0vTZJZsSsQjVQC4yzBQ56K55XU8gaNCtIzOnTg==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=10.13.0"
}
},
"node_modules/webpack-virtual-modules": {
"version": "0.5.0",
"resolved": "https://registry.npmjs.org/webpack-virtual-modules/-/webpack-virtual-modules-0.5.0.tgz",
"integrity": "sha512-kyDivFZ7ZM0BVOUteVbDFhlRt7Ah/CSPwJdi8hBpkK7QLumUqdLtVfm/PX/hkcnrvr0i77fO5+TjZ94Pe+C9iw==",
"dev": true,
"license": "MIT"
},
"node_modules/whatwg-encoding": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/whatwg-encoding/-/whatwg-encoding-3.1.1.tgz",

View File

@@ -1,7 +1,7 @@
{
"name": "flyer-crawler",
"private": true,
"version": "0.9.111",
"version": "0.11.11",
"type": "module",
"scripts": {
"dev": "concurrently \"npm:start:dev\" \"vite\"",
@@ -75,6 +75,7 @@
"zxing-wasm": "^2.2.4"
},
"devDependencies": {
"@sentry/vite-plugin": "^4.6.2",
"@tailwindcss/postcss": "4.1.17",
"@tanstack/react-query-devtools": "^5.91.2",
"@testcontainers/postgresql": "^11.8.1",

View File

@@ -35,6 +35,8 @@ import healthRouter from './src/routes/health.routes';
import upcRouter from './src/routes/upc.routes';
import inventoryRouter from './src/routes/inventory.routes';
import receiptRouter from './src/routes/receipt.routes';
import dealsRouter from './src/routes/deals.routes';
import reactionsRouter from './src/routes/reactions.routes';
import { errorHandler } from './src/middleware/errorHandler';
import { backgroundJobService, startBackgroundJobs } from './src/services/backgroundJobService';
import type { UserProfile } from './src/types';
@@ -278,9 +280,25 @@ app.use('/api/upc', upcRouter);
app.use('/api/inventory', inventoryRouter);
// 13. Receipt scanning routes.
app.use('/api/receipts', receiptRouter);
// 14. Deals and best prices routes.
app.use('/api/deals', dealsRouter);
// 15. Reactions/social features routes.
app.use('/api/reactions', reactionsRouter);
// --- Error Handling and Server Startup ---
// Catch-all 404 handler for unmatched routes.
// Returns JSON instead of HTML for API consistency.
app.use((req: Request, res: Response) => {
res.status(404).json({
success: false,
error: {
code: 'NOT_FOUND',
message: `Cannot ${req.method} ${req.path}`,
},
});
});
// Sentry Error Handler (ADR-015) - captures errors and sends to Bugsink.
// Must come BEFORE the custom error handler but AFTER all routes.
app.use(sentryMiddleware.errorHandler);

View File

@@ -10,11 +10,16 @@
-- Usage:
-- Connect to the database as a superuser (e.g., 'postgres') and run this
-- entire script.
--
-- IMPORTANT: Set the new_owner variable to the appropriate user:
-- - For production: 'flyer_crawler_prod'
-- - For test: 'flyer_crawler_test'
DO $$
DECLARE
-- Define the new owner for all objects.
new_owner TEXT := 'flyer_crawler_user';
-- Change this to 'flyer_crawler_test' when running against the test database.
new_owner TEXT := 'flyer_crawler_prod';
-- Variables for iterating through object names.
tbl_name TEXT;
@@ -81,7 +86,7 @@ END $$;
--
-- -- Construct and execute the ALTER FUNCTION statement using the full signature.
-- -- This command is now unambiguous and will work for all functions, including overloaded ones.
-- EXECUTE format('ALTER FUNCTION %s OWNER TO flyer_crawler_user;', func_signature);
-- EXECUTE format('ALTER FUNCTION %s OWNER TO flyer_crawler_prod;', func_signature);
-- END LOOP;
-- END $$;

View File

@@ -3,15 +3,15 @@ import React from 'react';
import { screen, waitFor } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import Leaderboard from './Leaderboard';
import * as apiClient from '../services/apiClient';
import { LeaderboardUser } from '../types';
import { createMockLeaderboardUser } from '../tests/utils/mockFactories';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
import { useLeaderboardQuery } from '../hooks/queries/useLeaderboardQuery';
// Must explicitly call vi.mock() for apiClient
vi.mock('../services/apiClient');
// Mock the hook directly
vi.mock('../hooks/queries/useLeaderboardQuery');
const mockedApiClient = vi.mocked(apiClient);
const mockedUseLeaderboardQuery = vi.mocked(useLeaderboardQuery);
// Mock lucide-react icons to prevent rendering errors in the test environment
vi.mock('lucide-react', () => ({
@@ -36,29 +36,38 @@ const mockLeaderboardData: LeaderboardUser[] = [
describe('Leaderboard', () => {
beforeEach(() => {
vi.clearAllMocks();
// Default mock: loading state
mockedUseLeaderboardQuery.mockReturnValue({
data: [],
isLoading: true,
error: null,
} as any);
});
it('should display a loading message initially', () => {
// Mock a pending promise that never resolves to keep it in the loading state
mockedApiClient.fetchLeaderboard.mockReturnValue(new Promise(() => {}));
renderWithProviders(<Leaderboard />);
expect(screen.getByText('Loading Leaderboard...')).toBeInTheDocument();
});
it('should display an error message if the API call fails', async () => {
mockedApiClient.fetchLeaderboard.mockResolvedValue(new Response(null, { status: 500 }));
mockedUseLeaderboardQuery.mockReturnValue({
data: [],
isLoading: false,
error: new Error('Request failed with status 500'),
} as any);
renderWithProviders(<Leaderboard />);
await waitFor(() => {
expect(screen.getByRole('alert')).toBeInTheDocument();
// The query hook throws an error with the status code when JSON parsing fails
expect(screen.getByText('Error: Request failed with status 500')).toBeInTheDocument();
});
});
it('should display a generic error for unknown error types', async () => {
// Use an actual Error object since the component displays error.message
mockedApiClient.fetchLeaderboard.mockRejectedValue(new Error('A string error'));
mockedUseLeaderboardQuery.mockReturnValue({
data: [],
isLoading: false,
error: new Error('A string error'),
} as any);
renderWithProviders(<Leaderboard />);
await waitFor(() => {
@@ -68,7 +77,11 @@ describe('Leaderboard', () => {
});
it('should display a message when the leaderboard is empty', async () => {
mockedApiClient.fetchLeaderboard.mockResolvedValue(new Response(JSON.stringify([])));
mockedUseLeaderboardQuery.mockReturnValue({
data: [],
isLoading: false,
error: null,
} as any);
renderWithProviders(<Leaderboard />);
await waitFor(() => {
@@ -79,9 +92,11 @@ describe('Leaderboard', () => {
});
it('should render the leaderboard with user data on successful fetch', async () => {
mockedApiClient.fetchLeaderboard.mockResolvedValue(
new Response(JSON.stringify(mockLeaderboardData)),
);
mockedUseLeaderboardQuery.mockReturnValue({
data: mockLeaderboardData,
isLoading: false,
error: null,
} as any);
renderWithProviders(<Leaderboard />);
await waitFor(() => {
@@ -104,9 +119,11 @@ describe('Leaderboard', () => {
});
it('should render the correct rank icons', async () => {
mockedApiClient.fetchLeaderboard.mockResolvedValue(
new Response(JSON.stringify(mockLeaderboardData)),
);
mockedUseLeaderboardQuery.mockReturnValue({
data: mockLeaderboardData,
isLoading: false,
error: null,
} as any);
renderWithProviders(<Leaderboard />);
await waitFor(() => {
@@ -123,9 +140,11 @@ describe('Leaderboard', () => {
const dataWithMissingNames: LeaderboardUser[] = [
createMockLeaderboardUser({ user_id: 'user-anon', full_name: null, points: 500, rank: '5' }),
];
mockedApiClient.fetchLeaderboard.mockResolvedValue(
new Response(JSON.stringify(dataWithMissingNames)),
);
mockedUseLeaderboardQuery.mockReturnValue({
data: dataWithMissingNames,
isLoading: false,
error: null,
} as any);
renderWithProviders(<Leaderboard />);
await waitFor(() => {

View File

@@ -353,6 +353,50 @@ passport.use(
}),
);
// --- Custom Error Class for Unauthorized Access ---
class UnauthorizedError extends Error {
status: number;
constructor(message: string) {
super(message);
this.name = 'UnauthorizedError';
this.status = 401;
}
}
/**
* A required authentication middleware that returns standardized error responses.
* Unlike the default passport.authenticate(), this middleware ensures that 401 responses
* follow our API response format with { success: false, error: { code, message } }.
*
* Use this instead of `passport.authenticate('jwt', { session: false })` to ensure
* consistent error responses per ADR-028.
*/
export const requireAuth = (req: Request, res: Response, next: NextFunction) => {
passport.authenticate(
'jwt',
{ session: false },
(err: Error | null, user: UserProfile | false, info: { message: string } | Error) => {
if (err) {
// An actual error occurred during authentication
req.log.error({ error: err }, 'Authentication error');
return next(err);
}
if (!user) {
// Authentication failed - return standardized error through error handler
const message =
info instanceof Error ? info.message : info?.message || 'Authentication required.';
req.log.warn({ info: message }, 'JWT authentication failed');
return next(new UnauthorizedError(message));
}
// Authentication succeeded - attach user and proceed
req.user = user;
next();
},
)(req, res, next);
};
// --- Middleware for Admin Role Check ---
export const isAdmin = (req: Request, res: Response, next: NextFunction) => {
// Use the type guard for safer access to req.user

View File

@@ -4,7 +4,7 @@ import { render, screen, waitFor } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach, type Mock } from 'vitest';
import { PriceHistoryChart } from './PriceHistoryChart';
import { useUserData } from '../../hooks/useUserData';
import * as apiClient from '../../services/apiClient';
import { usePriceHistoryQuery } from '../../hooks/queries/usePriceHistoryQuery';
import type { MasterGroceryItem, HistoricalPriceDataPoint } from '../../types';
import {
createMockMasterGroceryItem,
@@ -12,13 +12,14 @@ import {
} from '../../tests/utils/mockFactories';
import { QueryWrapper } from '../../tests/utils/renderWithProviders';
// Mock the apiClient
vi.mock('../../services/apiClient');
// Mock the useUserData hook
vi.mock('../../hooks/useUserData');
const mockedUseUserData = useUserData as Mock;
// Mock the usePriceHistoryQuery hook
vi.mock('../../hooks/queries/usePriceHistoryQuery');
const mockedUsePriceHistoryQuery = usePriceHistoryQuery as Mock;
const renderWithQuery = (ui: React.ReactElement) => render(ui, { wrapper: QueryWrapper });
// Mock the logger
@@ -108,6 +109,13 @@ describe('PriceHistoryChart', () => {
isLoading: false,
error: null,
});
// Default mock for usePriceHistoryQuery (empty/loading false)
mockedUsePriceHistoryQuery.mockReturnValue({
data: [],
isLoading: false,
error: null,
});
});
it('should render a placeholder when there are no watched items', () => {
@@ -126,13 +134,21 @@ describe('PriceHistoryChart', () => {
});
it('should display a loading state while fetching data', () => {
vi.mocked(apiClient.fetchHistoricalPriceData).mockReturnValue(new Promise(() => {}));
mockedUsePriceHistoryQuery.mockReturnValue({
data: [],
isLoading: true,
error: null,
});
renderWithQuery(<PriceHistoryChart />);
expect(screen.getByText('Loading Price History...')).toBeInTheDocument();
});
it('should display an error message if the API call fails', async () => {
vi.mocked(apiClient.fetchHistoricalPriceData).mockRejectedValue(new Error('API is down'));
mockedUsePriceHistoryQuery.mockReturnValue({
data: [],
isLoading: false,
error: new Error('API is down'),
});
renderWithQuery(<PriceHistoryChart />);
await waitFor(() => {
@@ -142,9 +158,11 @@ describe('PriceHistoryChart', () => {
});
it('should display a message if no historical data is returned', async () => {
vi.mocked(apiClient.fetchHistoricalPriceData).mockResolvedValue(
new Response(JSON.stringify([])),
);
mockedUsePriceHistoryQuery.mockReturnValue({
data: [],
isLoading: false,
error: null,
});
renderWithQuery(<PriceHistoryChart />);
await waitFor(() => {
@@ -157,14 +175,16 @@ describe('PriceHistoryChart', () => {
});
it('should render the chart with data on successful fetch', async () => {
vi.mocked(apiClient.fetchHistoricalPriceData).mockResolvedValue(
new Response(JSON.stringify(mockPriceHistory)),
);
mockedUsePriceHistoryQuery.mockReturnValue({
data: mockPriceHistory,
isLoading: false,
error: null,
});
renderWithQuery(<PriceHistoryChart />);
await waitFor(() => {
// Check that the API was called with the correct item IDs
expect(apiClient.fetchHistoricalPriceData).toHaveBeenCalledWith([1, 2]);
// Check that the hook was called with the correct item IDs
expect(mockedUsePriceHistoryQuery).toHaveBeenCalledWith([1, 2], true);
// Check that the chart components are rendered
expect(screen.getByTestId('responsive-container')).toBeInTheDocument();
@@ -188,15 +208,17 @@ describe('PriceHistoryChart', () => {
isLoading: true, // Test the isLoading state from the useUserData hook
error: null,
});
vi.mocked(apiClient.fetchHistoricalPriceData).mockReturnValue(new Promise(() => {}));
// Whether or not price history is loading, user data loading takes precedence in the UI
renderWithQuery(<PriceHistoryChart />);
expect(screen.getByText('Loading Price History...')).toBeInTheDocument();
});
it('should clear the chart when the watchlist becomes empty', async () => {
vi.mocked(apiClient.fetchHistoricalPriceData).mockResolvedValue(
new Response(JSON.stringify(mockPriceHistory)),
);
mockedUsePriceHistoryQuery.mockReturnValue({
data: mockPriceHistory,
isLoading: false,
error: null,
});
const { rerender } = renderWithQuery(<PriceHistoryChart />);
// Initial render with items
@@ -225,7 +247,7 @@ describe('PriceHistoryChart', () => {
});
it('should filter out items with only one data point', async () => {
const dataWithSinglePoint: HistoricalPriceDataPoint[] = [
const dataWithSinglePoint = [
createMockHistoricalPriceDataPoint({
master_item_id: 1,
summary_date: '2024-10-01',
@@ -242,9 +264,11 @@ describe('PriceHistoryChart', () => {
avg_price_in_cents: 350,
}), // Almond Milk only has one point
];
vi.mocked(apiClient.fetchHistoricalPriceData).mockResolvedValue(
new Response(JSON.stringify(dataWithSinglePoint)),
);
mockedUsePriceHistoryQuery.mockReturnValue({
data: dataWithSinglePoint,
isLoading: false,
error: null,
});
renderWithQuery(<PriceHistoryChart />);
await waitFor(() => {
@@ -254,7 +278,7 @@ describe('PriceHistoryChart', () => {
});
it('should process data to only keep the lowest price for a given day', async () => {
const dataWithDuplicateDate: HistoricalPriceDataPoint[] = [
const dataWithDuplicateDate = [
createMockHistoricalPriceDataPoint({
master_item_id: 1,
summary_date: '2024-10-01',
@@ -271,9 +295,11 @@ describe('PriceHistoryChart', () => {
avg_price_in_cents: 99,
}),
];
vi.mocked(apiClient.fetchHistoricalPriceData).mockResolvedValue(
new Response(JSON.stringify(dataWithDuplicateDate)),
);
mockedUsePriceHistoryQuery.mockReturnValue({
data: dataWithDuplicateDate,
isLoading: false,
error: null,
});
renderWithQuery(<PriceHistoryChart />);
await waitFor(() => {
@@ -288,7 +314,7 @@ describe('PriceHistoryChart', () => {
});
it('should filter out data points with a price of zero', async () => {
const dataWithZeroPrice: HistoricalPriceDataPoint[] = [
const dataWithZeroPrice = [
createMockHistoricalPriceDataPoint({
master_item_id: 1,
summary_date: '2024-10-01',
@@ -305,9 +331,11 @@ describe('PriceHistoryChart', () => {
avg_price_in_cents: 105,
}),
];
vi.mocked(apiClient.fetchHistoricalPriceData).mockResolvedValue(
new Response(JSON.stringify(dataWithZeroPrice)),
);
mockedUsePriceHistoryQuery.mockReturnValue({
data: dataWithZeroPrice,
isLoading: false,
error: null,
});
renderWithQuery(<PriceHistoryChart />);
await waitFor(() => {
@@ -330,9 +358,11 @@ describe('PriceHistoryChart', () => {
{ master_item_id: 1, summary_date: '2024-10-01', avg_price_in_cents: null }, // Missing price
{ master_item_id: 999, summary_date: '2024-10-01', avg_price_in_cents: 100 }, // ID not in watchlist
];
vi.mocked(apiClient.fetchHistoricalPriceData).mockResolvedValue(
new Response(JSON.stringify(malformedData)),
);
mockedUsePriceHistoryQuery.mockReturnValue({
data: malformedData,
isLoading: false,
error: null,
});
renderWithQuery(<PriceHistoryChart />);
await waitFor(() => {
@@ -346,7 +376,7 @@ describe('PriceHistoryChart', () => {
});
it('should ignore higher prices for the same day', async () => {
const dataWithHigherPrice: HistoricalPriceDataPoint[] = [
const dataWithHigherPrice = [
createMockHistoricalPriceDataPoint({
master_item_id: 1,
summary_date: '2024-10-01',
@@ -363,9 +393,11 @@ describe('PriceHistoryChart', () => {
avg_price_in_cents: 100,
}),
];
vi.mocked(apiClient.fetchHistoricalPriceData).mockResolvedValue(
new Response(JSON.stringify(dataWithHigherPrice)),
);
mockedUsePriceHistoryQuery.mockReturnValue({
data: dataWithHigherPrice,
isLoading: false,
error: null,
});
renderWithQuery(<PriceHistoryChart />);
await waitFor(() => {
@@ -377,8 +409,11 @@ describe('PriceHistoryChart', () => {
});
it('should handle non-Error objects thrown during fetch', async () => {
// Use an actual Error object since the component displays error.message
vi.mocked(apiClient.fetchHistoricalPriceData).mockRejectedValue(new Error('Fetch failed'));
mockedUsePriceHistoryQuery.mockReturnValue({
data: [],
isLoading: false,
error: new Error('Fetch failed'),
});
renderWithQuery(<PriceHistoryChart />);
await waitFor(() => {

View File

@@ -31,9 +31,10 @@ describe('useActivityLogQuery', () => {
{ id: 1, action: 'user_login', timestamp: '2024-01-01T10:00:00Z' },
{ id: 2, action: 'flyer_uploaded', timestamp: '2024-01-01T11:00:00Z' },
];
// API returns wrapped response: { success: true, data: [...] }
mockedApiClient.fetchActivityLog.mockResolvedValue({
ok: true,
json: () => Promise.resolve(mockActivityLog),
json: () => Promise.resolve({ success: true, data: mockActivityLog }),
} as Response);
const { result } = renderHook(() => useActivityLogQuery(), { wrapper });
@@ -46,9 +47,10 @@ describe('useActivityLogQuery', () => {
it('should fetch activity log with custom limit and offset', async () => {
const mockActivityLog = [{ id: 3, action: 'item_added', timestamp: '2024-01-01T12:00:00Z' }];
// API returns wrapped response: { success: true, data: [...] }
mockedApiClient.fetchActivityLog.mockResolvedValue({
ok: true,
json: () => Promise.resolve(mockActivityLog),
json: () => Promise.resolve({ success: true, data: mockActivityLog }),
} as Response);
const { result } = renderHook(() => useActivityLogQuery(10, 5), { wrapper });
@@ -102,9 +104,10 @@ describe('useActivityLogQuery', () => {
});
it('should return empty array for no activity log entries', async () => {
// API returns wrapped response: { success: true, data: [] }
mockedApiClient.fetchActivityLog.mockResolvedValue({
ok: true,
json: () => Promise.resolve([]),
json: () => Promise.resolve({ success: true, data: [] }),
} as Response);
const { result } = renderHook(() => useActivityLogQuery(), { wrapper });

View File

@@ -33,7 +33,13 @@ export const useActivityLogQuery = (limit: number = 20, offset: number = 0) => {
throw new Error(error.message || 'Failed to fetch activity log');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
// Activity log changes frequently, keep stale time short
staleTime: 1000 * 30, // 30 seconds

View File

@@ -35,9 +35,10 @@ describe('useApplicationStatsQuery', () => {
pendingCorrectionsCount: 10,
recipeCount: 75,
};
// API returns wrapped response: { success: true, data: {...} }
mockedApiClient.getApplicationStats.mockResolvedValue({
ok: true,
json: () => Promise.resolve(mockStats),
json: () => Promise.resolve({ success: true, data: mockStats }),
} as Response);
const { result } = renderHook(() => useApplicationStatsQuery(), { wrapper });

View File

@@ -31,7 +31,9 @@ export const useApplicationStatsQuery = () => {
throw new Error(error.message || 'Failed to fetch application stats');
}
return response.json();
const json = await response.json();
// API returns { success: true, data: {...} }, extract the data object
return json.data ?? json;
},
staleTime: 1000 * 60 * 2, // 2 minutes - stats change moderately, not as frequently as activity log
});

View File

@@ -41,7 +41,9 @@ export const useAuthProfileQuery = (enabled: boolean = true) => {
throw new Error(error.message || 'Failed to fetch user profile');
}
return response.json();
const json = await response.json();
// API returns { success: true, data: {...} }, extract the data object
return json.data ?? json;
},
enabled: enabled && hasToken,
staleTime: 1000 * 60 * 5, // 5 minutes

View File

@@ -31,7 +31,13 @@ export const useBestSalePricesQuery = (enabled: boolean = true) => {
throw new Error(error.message || 'Failed to fetch best sale prices');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
enabled,
// Prices update when flyers change, keep fresh for 2 minutes

View File

@@ -27,7 +27,13 @@ export const useBrandsQuery = (enabled: boolean = true) => {
throw new Error(error.message || 'Failed to fetch brands');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
enabled,
staleTime: 1000 * 60 * 5, // 5 minutes - brands don't change frequently

View File

@@ -32,9 +32,10 @@ describe('useCategoriesQuery', () => {
{ category_id: 2, name: 'Bakery' },
{ category_id: 3, name: 'Produce' },
];
// API returns wrapped response: { success: true, data: [...] }
mockedApiClient.fetchCategories.mockResolvedValue({
ok: true,
json: () => Promise.resolve(mockCategories),
json: () => Promise.resolve({ success: true, data: mockCategories }),
} as Response);
const { result } = renderHook(() => useCategoriesQuery(), { wrapper });
@@ -88,9 +89,10 @@ describe('useCategoriesQuery', () => {
});
it('should return empty array for no categories', async () => {
// API returns wrapped response: { success: true, data: [] }
mockedApiClient.fetchCategories.mockResolvedValue({
ok: true,
json: () => Promise.resolve([]),
json: () => Promise.resolve({ success: true, data: [] }),
} as Response);
const { result } = renderHook(() => useCategoriesQuery(), { wrapper });

View File

@@ -26,7 +26,13 @@ export const useCategoriesQuery = () => {
throw new Error(error.message || 'Failed to fetch categories');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
staleTime: 1000 * 60 * 60, // 1 hour - categories rarely change
});

View File

@@ -40,7 +40,9 @@ export const useFlyerItemCountQuery = (flyerIds: number[], enabled: boolean = tr
throw new Error(error.message || 'Failed to count flyer items');
}
return response.json();
const json = await response.json();
// API returns { success: true, data: {...} }, extract the data object
return json.data ?? json;
},
enabled: enabled && flyerIds.length > 0,
// Count doesn't change frequently
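For object-shaped responses this diff uses `json.data ?? json` rather than the strict array guard, so endpoints that have not yet adopted the envelope keep working. A minimal sketch of that fallback as a standalone function (hypothetical name, not in the diff):

```typescript
// Hypothetical helper (assumption, not in this diff): mirrors the
// `json.data ?? json` pattern used for object-shaped responses such as
// the flyer item count. Falls back to the raw body so hooks still work
// against endpoints that do not yet wrap responses in { success, data }.
function unwrapObjectData<T extends object>(json: unknown): T {
  const envelope = json as { data?: T } | null;
  return (envelope?.data ?? json) as T;
}
```

Note the trade-off: unlike the array guard, this fallback cannot distinguish a legacy bare object from a malformed envelope, so it is only safe where any object shape is acceptable to the caller.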

View File

@@ -37,7 +37,13 @@ export const useFlyerItemsForFlyersQuery = (flyerIds: number[], enabled: boolean
throw new Error(error.message || 'Failed to fetch flyer items');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
enabled: enabled && flyerIds.length > 0,
// Flyer items don't change frequently once created

View File

@@ -31,9 +31,10 @@ describe('useFlyerItemsQuery', () => {
{ item_id: 1, name: 'Milk', price: 3.99, flyer_id: 42 },
{ item_id: 2, name: 'Bread', price: 2.49, flyer_id: 42 },
];
// API returns wrapped response: { success: true, data: [...] }
mockedApiClient.fetchFlyerItems.mockResolvedValue({
ok: true,
json: () => Promise.resolve({ items: mockFlyerItems }),
json: () => Promise.resolve({ success: true, data: mockFlyerItems }),
} as Response);
const { result } = renderHook(() => useFlyerItemsQuery(42), { wrapper });
@@ -103,9 +104,10 @@ describe('useFlyerItemsQuery', () => {
// respects the enabled condition. The guard exists as a defensive measure only.
it('should return empty array when API returns no items', async () => {
// API returns wrapped response: { success: true, data: [] }
mockedApiClient.fetchFlyerItems.mockResolvedValue({
ok: true,
json: () => Promise.resolve({ items: [] }),
json: () => Promise.resolve({ success: true, data: [] }),
} as Response);
const { result } = renderHook(() => useFlyerItemsQuery(42), { wrapper });
@@ -115,16 +117,20 @@ describe('useFlyerItemsQuery', () => {
expect(result.current.data).toEqual([]);
});
it('should handle response without items property', async () => {
it('should return empty array when response lacks success/data structure (ADR-028)', async () => {
// ADR-028: API must return { success: true, data: [...] }
// Non-compliant responses return empty array to prevent .map() errors
const legacyItems = [{ item_id: 1, name: 'Legacy Item' }];
mockedApiClient.fetchFlyerItems.mockResolvedValue({
ok: true,
json: () => Promise.resolve({}),
json: () => Promise.resolve(legacyItems),
} as Response);
const { result } = renderHook(() => useFlyerItemsQuery(42), { wrapper });
await waitFor(() => expect(result.current.isSuccess).toBe(true));
// Returns empty array when response doesn't match ADR-028 format
expect(result.current.data).toEqual([]);
});
});

View File

@@ -35,9 +35,13 @@ export const useFlyerItemsQuery = (flyerId: number | undefined) => {
throw new Error(error.message || 'Failed to fetch flyer items');
}
const data = await response.json();
// API returns { items: FlyerItem[] }
return data.items || [];
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
// Only run the query if we have a valid flyer ID
enabled: !!flyerId,

View File

@@ -31,9 +31,10 @@ describe('useFlyersQuery', () => {
{ flyer_id: 1, store_name: 'Store A', valid_from: '2024-01-01', valid_to: '2024-01-07' },
{ flyer_id: 2, store_name: 'Store B', valid_from: '2024-01-01', valid_to: '2024-01-07' },
];
// API returns wrapped response: { success: true, data: [...] }
mockedApiClient.fetchFlyers.mockResolvedValue({
ok: true,
json: () => Promise.resolve(mockFlyers),
json: () => Promise.resolve({ success: true, data: mockFlyers }),
} as Response);
const { result } = renderHook(() => useFlyersQuery(), { wrapper });
@@ -46,9 +47,10 @@ describe('useFlyersQuery', () => {
it('should fetch flyers with custom limit and offset', async () => {
const mockFlyers = [{ flyer_id: 3, store_name: 'Store C' }];
// API returns wrapped response: { success: true, data: [...] }
mockedApiClient.fetchFlyers.mockResolvedValue({
ok: true,
json: () => Promise.resolve(mockFlyers),
json: () => Promise.resolve({ success: true, data: mockFlyers }),
} as Response);
const { result } = renderHook(() => useFlyersQuery(10, 5), { wrapper });
@@ -102,9 +104,10 @@ describe('useFlyersQuery', () => {
});
it('should return empty array for no flyers', async () => {
// API returns wrapped response: { success: true, data: [] }
mockedApiClient.fetchFlyers.mockResolvedValue({
ok: true,
json: () => Promise.resolve([]),
json: () => Promise.resolve({ success: true, data: [] }),
} as Response);
const { result } = renderHook(() => useFlyersQuery(), { wrapper });

View File

@@ -32,7 +32,13 @@ export const useFlyersQuery = (limit: number = 20, offset: number = 0) => {
throw new Error(error.message || 'Failed to fetch flyers');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
// Keep data fresh for 2 minutes since flyers don't change frequently
staleTime: 1000 * 60 * 2,

View File

@@ -29,7 +29,13 @@ export const useLeaderboardQuery = (limit: number = 10, enabled: boolean = true)
throw new Error(error.message || 'Failed to fetch leaderboard');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
enabled,
staleTime: 1000 * 60 * 2, // 2 minutes - leaderboard can change moderately

View File

@@ -32,9 +32,10 @@ describe('useMasterItemsQuery', () => {
{ master_item_id: 2, name: 'Bread', category: 'Bakery' },
{ master_item_id: 3, name: 'Eggs', category: 'Dairy' },
];
// API returns wrapped response: { success: true, data: [...] }
mockedApiClient.fetchMasterItems.mockResolvedValue({
ok: true,
json: () => Promise.resolve(mockMasterItems),
json: () => Promise.resolve({ success: true, data: mockMasterItems }),
} as Response);
const { result } = renderHook(() => useMasterItemsQuery(), { wrapper });
@@ -88,9 +89,10 @@ describe('useMasterItemsQuery', () => {
});
it('should return empty array for no master items', async () => {
// API returns wrapped response: { success: true, data: [] }
mockedApiClient.fetchMasterItems.mockResolvedValue({
ok: true,
json: () => Promise.resolve([]),
json: () => Promise.resolve({ success: true, data: [] }),
} as Response);
const { result } = renderHook(() => useMasterItemsQuery(), { wrapper });

View File

@@ -31,7 +31,13 @@ export const useMasterItemsQuery = () => {
throw new Error(error.message || 'Failed to fetch master items');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
// Master items change infrequently, keep data fresh for 10 minutes
staleTime: 1000 * 60 * 10,

View File

@@ -34,7 +34,13 @@ export const usePriceHistoryQuery = (masterItemIds: number[], enabled: boolean =
throw new Error(error.message || 'Failed to fetch price history');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
enabled: enabled && masterItemIds.length > 0,
staleTime: 1000 * 60 * 10, // 10 minutes - historical data doesn't change frequently

View File

@@ -31,9 +31,10 @@ describe('useShoppingListsQuery', () => {
{ shopping_list_id: 1, name: 'Weekly Groceries', items: [] },
{ shopping_list_id: 2, name: 'Party Supplies', items: [] },
];
// API returns wrapped response: { success: true, data: [...] }
mockedApiClient.fetchShoppingLists.mockResolvedValue({
ok: true,
json: () => Promise.resolve(mockShoppingLists),
json: () => Promise.resolve({ success: true, data: mockShoppingLists }),
} as Response);
const { result } = renderHook(() => useShoppingListsQuery(true), { wrapper });
@@ -98,9 +99,10 @@ describe('useShoppingListsQuery', () => {
});
it('should return empty array for no shopping lists', async () => {
// API returns wrapped response: { success: true, data: [] }
mockedApiClient.fetchShoppingLists.mockResolvedValue({
ok: true,
json: () => Promise.resolve([]),
json: () => Promise.resolve({ success: true, data: [] }),
} as Response);
const { result } = renderHook(() => useShoppingListsQuery(true), { wrapper });

View File

@@ -31,7 +31,13 @@ export const useShoppingListsQuery = (enabled: boolean) => {
throw new Error(error.message || 'Failed to fetch shopping lists');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
enabled,
// Keep data fresh for 1 minute since users actively manage shopping lists

View File

@@ -31,9 +31,10 @@ describe('useSuggestedCorrectionsQuery', () => {
{ correction_id: 1, item_name: 'Milk', suggested_name: 'Whole Milk', status: 'pending' },
{ correction_id: 2, item_name: 'Bread', suggested_name: 'White Bread', status: 'pending' },
];
// API returns wrapped response: { success: true, data: [...] }
mockedApiClient.getSuggestedCorrections.mockResolvedValue({
ok: true,
json: () => Promise.resolve(mockCorrections),
json: () => Promise.resolve({ success: true, data: mockCorrections }),
} as Response);
const { result } = renderHook(() => useSuggestedCorrectionsQuery(), { wrapper });
@@ -87,9 +88,10 @@ describe('useSuggestedCorrectionsQuery', () => {
});
it('should return empty array for no corrections', async () => {
// API returns wrapped response: { success: true, data: [] }
mockedApiClient.getSuggestedCorrections.mockResolvedValue({
ok: true,
json: () => Promise.resolve([]),
json: () => Promise.resolve({ success: true, data: [] }),
} as Response);
const { result } = renderHook(() => useSuggestedCorrectionsQuery(), { wrapper });

View File

@@ -26,7 +26,13 @@ export const useSuggestedCorrectionsQuery = () => {
throw new Error(error.message || 'Failed to fetch suggested corrections');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
staleTime: 1000 * 60, // 1 minute - corrections change moderately
});

View File

@@ -36,7 +36,9 @@ export const useUserAddressQuery = (
throw new Error(error.message || 'Failed to fetch user address');
}
return response.json();
const json = await response.json();
// API returns { success: true, data: {...} }, extract the data object
return json.data ?? json;
},
enabled: enabled && !!addressId,
staleTime: 1000 * 60 * 5, // 5 minutes - address data doesn't change frequently

View File

@@ -48,8 +48,12 @@ export const useUserProfileDataQuery = (enabled: boolean = true) => {
throw new Error(error.message || 'Failed to fetch user achievements');
}
const profile: UserProfile = await profileRes.json();
const achievements: (UserAchievement & Achievement)[] = await achievementsRes.json();
const profileJson = await profileRes.json();
const achievementsJson = await achievementsRes.json();
// API returns { success: true, data: {...} }, extract the data
const profile: UserProfile = profileJson.data ?? profileJson;
const achievements: (UserAchievement & Achievement)[] =
achievementsJson.data ?? achievementsJson;
return {
profile,

View File

@@ -31,9 +31,10 @@ describe('useWatchedItemsQuery', () => {
{ master_item_id: 1, name: 'Milk', category: 'Dairy' },
{ master_item_id: 2, name: 'Bread', category: 'Bakery' },
];
// API returns wrapped response: { success: true, data: [...] }
mockedApiClient.fetchWatchedItems.mockResolvedValue({
ok: true,
json: () => Promise.resolve(mockWatchedItems),
json: () => Promise.resolve({ success: true, data: mockWatchedItems }),
} as Response);
const { result } = renderHook(() => useWatchedItemsQuery(true), { wrapper });
@@ -98,9 +99,10 @@ describe('useWatchedItemsQuery', () => {
});
it('should return empty array for no watched items', async () => {
// API returns wrapped response: { success: true, data: [] }
mockedApiClient.fetchWatchedItems.mockResolvedValue({
ok: true,
json: () => Promise.resolve([]),
json: () => Promise.resolve({ success: true, data: [] }),
} as Response);
const { result } = renderHook(() => useWatchedItemsQuery(true), { wrapper });

View File

@@ -31,7 +31,13 @@ export const useWatchedItemsQuery = (enabled: boolean) => {
throw new Error(error.message || 'Failed to fetch watched items');
}
return response.json();
const json = await response.json();
// ADR-028: API returns { success: true, data: [...] }
// If success is false or data is not an array, return empty array to prevent .map() errors
if (!json.success || !Array.isArray(json.data)) {
return [];
}
return json.data;
},
enabled,
// Keep data fresh for 1 minute since users actively manage watched items

View File

@@ -1,8 +1,6 @@
// src/hooks/useActiveDeals.test.tsx
import { renderHook, waitFor, act } from '@testing-library/react';
import { renderHook, waitFor } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { useActiveDeals } from './useActiveDeals';
import * as apiClient from '../services/apiClient';
import type { Flyer, MasterGroceryItem, FlyerItem } from '../types';
import {
createMockFlyer,
@@ -12,9 +10,8 @@ import {
} from '../tests/utils/mockFactories';
import { mockUseFlyers, mockUseUserData } from '../tests/setup/mockHooks';
import { QueryWrapper } from '../tests/utils/renderWithProviders';
// Must explicitly call vi.mock() for apiClient
vi.mock('../services/apiClient');
import { useFlyerItemsForFlyersQuery } from './queries/useFlyerItemsForFlyersQuery';
import { useFlyerItemCountQuery } from './queries/useFlyerItemCountQuery';
// Mock the hooks to avoid Missing Context errors
vi.mock('./useFlyers', () => ({
@@ -25,7 +22,12 @@ vi.mock('../hooks/useUserData', () => ({
useUserData: () => mockUseUserData(),
}));
const mockedApiClient = vi.mocked(apiClient);
// Mock the query hooks
vi.mock('./queries/useFlyerItemsForFlyersQuery');
vi.mock('./queries/useFlyerItemCountQuery');
const mockedUseFlyerItemsForFlyersQuery = vi.mocked(useFlyerItemsForFlyersQuery);
const mockedUseFlyerItemCountQuery = vi.mocked(useFlyerItemCountQuery);
// Set a consistent "today" for testing flyer validity to make tests deterministic
const TODAY = new Date('2024-01-15T12:00:00.000Z');
@@ -33,9 +35,6 @@ const TODAY = new Date('2024-01-15T12:00:00.000Z');
describe('useActiveDeals Hook', () => {
// Use fake timers to control the current date in tests
beforeEach(() => {
// FIX: Only fake the 'Date' object.
// This allows `new Date()` to be mocked (via setSystemTime) while keeping
// `setTimeout`/`setInterval` native so `waitFor` doesn't hang.
vi.useFakeTimers({ toFake: ['Date'] });
vi.setSystemTime(TODAY);
vi.clearAllMocks();
@@ -58,6 +57,18 @@ describe('useActiveDeals Hook', () => {
isLoading: false,
error: null,
});
// Default mocks for query hooks
mockedUseFlyerItemsForFlyersQuery.mockReturnValue({
data: [],
isLoading: false,
error: null,
} as any);
mockedUseFlyerItemCountQuery.mockReturnValue({
data: { count: 0 },
isLoading: false,
error: null,
} as any);
});
afterEach(() => {
@@ -124,20 +135,18 @@ describe('useActiveDeals Hook', () => {
];
it('should return loading state initially and then calculated data', async () => {
mockedApiClient.countFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify({ count: 10 })),
);
mockedApiClient.fetchFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify(mockFlyerItems)),
);
mockedUseFlyerItemCountQuery.mockReturnValue({
data: { count: 10 },
isLoading: false,
error: null,
} as any);
mockedUseFlyerItemsForFlyersQuery.mockReturnValue({
data: mockFlyerItems,
isLoading: false,
error: null,
} as any);
const { result } = renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
// The hook runs the effect almost immediately. We shouldn't strictly assert false
// because depending on render timing, it might already be true.
// We mainly care that it eventually resolves.
// Wait for the hook's useEffect to run and complete
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
expect(result.current.totalActiveItems).toBe(10);
@@ -147,25 +156,18 @@ describe('useActiveDeals Hook', () => {
});
it('should correctly filter for valid flyers and make API calls with their IDs', async () => {
mockedApiClient.countFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify({ count: 0 })),
);
mockedApiClient.fetchFlyerItemsForFlyers.mockResolvedValue(new Response(JSON.stringify([])));
const { result } = renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
await waitFor(() => {
// Only the valid flyer (id: 1) should be used in the API calls
expect(mockedApiClient.countFlyerItemsForFlyers).toHaveBeenCalledWith([1]);
expect(mockedApiClient.fetchFlyerItemsForFlyers).toHaveBeenCalledWith([1]);
// The second argument is `enabled` which should be true
expect(mockedUseFlyerItemCountQuery).toHaveBeenCalledWith([1], true);
expect(mockedUseFlyerItemsForFlyersQuery).toHaveBeenCalledWith([1], true);
expect(result.current.isLoading).toBe(false);
});
});
it('should not fetch flyer items if there are no watched items', async () => {
mockedApiClient.countFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify({ count: 10 })),
);
mockUseUserData.mockReturnValue({
watchedItems: [],
shoppingLists: [],
@@ -173,16 +175,16 @@ describe('useActiveDeals Hook', () => {
setShoppingLists: vi.fn(),
isLoading: false,
error: null,
}); // Override for this test
});
const { result } = renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
expect(result.current.totalActiveItems).toBe(10);
expect(result.current.activeDeals).toEqual([]);
// The key assertion: fetchFlyerItemsForFlyers should not be called
expect(mockedApiClient.fetchFlyerItemsForFlyers).not.toHaveBeenCalled();
// The enabled flag (2nd arg) should be false for items query
expect(mockedUseFlyerItemsForFlyersQuery).toHaveBeenCalledWith([1], false);
// Count query should still be enabled if there are valid flyers
expect(mockedUseFlyerItemCountQuery).toHaveBeenCalledWith([1], true);
});
});
@@ -204,16 +206,20 @@ describe('useActiveDeals Hook', () => {
expect(result.current.totalActiveItems).toBe(0);
expect(result.current.activeDeals).toEqual([]);
// No API calls should be made if there are no valid flyers
expect(mockedApiClient.countFlyerItemsForFlyers).not.toHaveBeenCalled();
expect(mockedApiClient.fetchFlyerItemsForFlyers).not.toHaveBeenCalled();
// With no valid flyers, the query hooks are called with an empty ID list
// and enabled=false (useActiveDeals.tsx gates on validFlyerIds.length > 0)
expect(mockedUseFlyerItemCountQuery).toHaveBeenCalledWith([], false);
expect(mockedUseFlyerItemsForFlyersQuery).toHaveBeenCalledWith([], false);
});
});
it('should set an error state if counting items fails', async () => {
const apiError = new Error('Network Failure');
mockedApiClient.countFlyerItemsForFlyers.mockRejectedValue(apiError);
// Also mock fetchFlyerItemsForFlyers to avoid interference from the other query
mockedApiClient.fetchFlyerItemsForFlyers.mockResolvedValue(new Response(JSON.stringify([])));
mockedUseFlyerItemCountQuery.mockReturnValue({
data: undefined,
isLoading: false,
error: apiError,
} as any);
const { result } = renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
@@ -225,17 +231,16 @@ describe('useActiveDeals Hook', () => {
it('should set an error state if fetching items fails', async () => {
const apiError = new Error('Item fetch failed');
// Mock the count to succeed but the item fetch to fail
mockedApiClient.countFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify({ count: 10 })),
);
mockedApiClient.fetchFlyerItemsForFlyers.mockRejectedValue(apiError);
mockedUseFlyerItemsForFlyersQuery.mockReturnValue({
data: undefined,
isLoading: false,
error: apiError,
} as any);
const { result } = renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
// This covers the `|| errorItems?.message` part of the error logic
expect(result.current.error).toBe(
'Could not fetch active deals or totals: Item fetch failed',
);
@@ -243,12 +248,16 @@ describe('useActiveDeals Hook', () => {
});
it('should correctly map flyer items to DealItem format', async () => {
mockedApiClient.countFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify({ count: 10 })),
);
mockedApiClient.fetchFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify(mockFlyerItems)),
);
mockedUseFlyerItemCountQuery.mockReturnValue({
data: { count: 10 },
isLoading: false,
error: null,
} as any);
mockedUseFlyerItemsForFlyersQuery.mockReturnValue({
data: mockFlyerItems,
isLoading: false,
error: null,
} as any);
const { result } = renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
@@ -261,7 +270,7 @@ describe('useActiveDeals Hook', () => {
quantity: 'lb',
storeName: 'Valid Store',
master_item_name: 'Apples',
unit_price: null, // Expect null as the hook ensures undefined is converted to null
unit_price: null,
});
expect(deal).toEqual(expectedDeal);
});
@@ -276,7 +285,7 @@ describe('useActiveDeals Hook', () => {
valid_from: '2024-01-10',
valid_to: '2024-01-20',
});
(flyerWithoutStore as any).store = null; // Explicitly set to null
(flyerWithoutStore as any).store = null;
const itemInFlyerWithoutStore = createMockFlyerItem({
flyer_item_id: 3,
@@ -289,27 +298,21 @@ describe('useActiveDeals Hook', () => {
});
mockUseFlyers.mockReturnValue({ ...mockUseFlyers(), flyers: [flyerWithoutStore] });
mockedApiClient.countFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify({ count: 1 })),
);
mockedApiClient.fetchFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify([itemInFlyerWithoutStore])),
);
mockedUseFlyerItemsForFlyersQuery.mockReturnValue({
data: [itemInFlyerWithoutStore],
isLoading: false,
error: null,
} as any);
const { result } = renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
await waitFor(() => {
expect(result.current.activeDeals).toHaveLength(1);
// This covers the `|| 'Unknown Store'` fallback logic
expect(result.current.activeDeals[0].storeName).toBe('Unknown Store');
});
});
it('should filter out items that do not match watched items or have no master ID', async () => {
mockedApiClient.countFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify({ count: 5 })),
);
const mixedItems: FlyerItem[] = [
// Watched item (Master ID 101 is in mockWatchedItems)
createMockFlyerItem({
@@ -345,9 +348,11 @@ describe('useActiveDeals Hook', () => {
}),
];
mockedApiClient.fetchFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify(mixedItems)),
);
mockedUseFlyerItemsForFlyersQuery.mockReturnValue({
data: mixedItems,
isLoading: false,
error: null,
} as any);
const { result } = renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
@@ -360,40 +365,18 @@ describe('useActiveDeals Hook', () => {
});
it('should return true for isLoading while API calls are pending', async () => {
// Create promises we can control
let resolveCount: (value: Response) => void;
const countPromise = new Promise<Response>((resolve) => {
resolveCount = resolve;
});
let resolveItems: (value: Response) => void;
const itemsPromise = new Promise<Response>((resolve) => {
resolveItems = resolve;
});
mockedApiClient.countFlyerItemsForFlyers.mockReturnValue(countPromise);
mockedApiClient.fetchFlyerItemsForFlyers.mockReturnValue(itemsPromise);
mockedUseFlyerItemsForFlyersQuery.mockReturnValue({
data: undefined,
isLoading: true,
error: null,
} as any);
const { result } = renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
// Wait for the effect to trigger the API call and set loading to true
await waitFor(() => expect(result.current.isLoading).toBe(true));
// Resolve promises
await act(async () => {
resolveCount!(new Response(JSON.stringify({ count: 5 })));
resolveItems!(new Response(JSON.stringify([])));
});
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
});
expect(result.current.isLoading).toBe(true);
});
it('should re-filter active deals when watched items change (client-side filtering)', async () => {
// With TanStack Query, changing watchedItems does NOT trigger a new API call
// because the query key is based on flyerIds, not watchedItems.
// The filtering happens client-side via useMemo. This is more efficient.
const allFlyerItems: FlyerItem[] = [
createMockFlyerItem({
flyer_item_id: 1,
@@ -415,12 +398,11 @@ describe('useActiveDeals Hook', () => {
}),
];
mockedApiClient.countFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify({ count: 2 })),
);
mockedApiClient.fetchFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify(allFlyerItems)),
);
mockedUseFlyerItemsForFlyersQuery.mockReturnValue({
data: allFlyerItems,
isLoading: false,
error: null,
} as any);
const { result, rerender } = renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
@@ -433,9 +415,6 @@ describe('useActiveDeals Hook', () => {
expect(result.current.activeDeals).toHaveLength(1);
expect(result.current.activeDeals[0].item).toBe('Red Apples');
// API should have been called exactly once
expect(mockedApiClient.fetchFlyerItemsForFlyers).toHaveBeenCalledTimes(1);
// Now add Bread to watched items
const newWatchedItems = [
...mockWatchedItems,
@@ -462,9 +441,6 @@ describe('useActiveDeals Hook', () => {
const dealItems = result.current.activeDeals.map((d) => d.item);
expect(dealItems).toContain('Red Apples');
expect(dealItems).toContain('Fresh Bread');
// The API should NOT be called again - data is already cached
expect(mockedApiClient.fetchFlyerItemsForFlyers).toHaveBeenCalledTimes(1);
});
it('should include flyers valid exactly on the start or end date', async () => {
@@ -518,16 +494,10 @@ describe('useActiveDeals Hook', () => {
refetchFlyers: vi.fn(),
});
mockedApiClient.countFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify({ count: 0 })),
);
mockedApiClient.fetchFlyerItemsForFlyers.mockResolvedValue(new Response(JSON.stringify([])));
renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
await waitFor(() => {
// Should call with IDs 10, 11, 12. Should NOT include 13.
expect(mockedApiClient.countFlyerItemsForFlyers).toHaveBeenCalledWith([10, 11, 12]);
expect(mockedUseFlyerItemCountQuery).toHaveBeenCalledWith([10, 11, 12], true);
});
});
@@ -544,12 +514,11 @@ describe('useActiveDeals Hook', () => {
quantity: undefined,
});
mockedApiClient.countFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify({ count: 1 })),
);
mockedApiClient.fetchFlyerItemsForFlyers.mockResolvedValue(
new Response(JSON.stringify([incompleteItem])),
);
mockedUseFlyerItemsForFlyersQuery.mockReturnValue({
data: [incompleteItem],
isLoading: false,
error: null,
} as any);
const { result } = renderHook(() => useActiveDeals(), { wrapper: QueryWrapper });
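The rewritten tests above stub the query hooks with object literals cast `as any`. As a sketch under stated assumptions — the helper and its name are hypothetical, not part of this change — a small factory could build those stubs with defaults, so each `mockReturnValue` only states what differs:

```typescript
// Hypothetical test helper (not in this diff): builds the minimal
// query-result shape these tests read (data / isLoading / error),
// with defaults so each mock only overrides the fields it cares about.
interface QueryResultStub<T> {
  data: T | undefined;
  isLoading: boolean;
  error: Error | null;
}

function queryResult<T>(
  overrides: Partial<QueryResultStub<T>> = {},
): QueryResultStub<T> {
  return { data: undefined, isLoading: false, error: null, ...overrides };
}
```

A test could then write `mockedUseFlyerItemCountQuery.mockReturnValue(queryResult({ data: { count: 10 } }) as any);`, with the cast confined to the call site until the stub covers the full `UseQueryResult` surface.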

View File

@@ -153,7 +153,7 @@ describe('useAuth Hook and AuthProvider', () => {
expect(result.current.userProfile).toBeNull();
expect(mockedTokenStorage.removeToken).toHaveBeenCalled();
expect(logger.warn).toHaveBeenCalledWith(
'[AuthProvider] Token was present but profile is null. Signing out.',
'[AuthProvider] Token was present but validation failed. Signing out.',
);
});

View File

@@ -1,17 +1,15 @@
// src/pages/MyDealsPage.test.tsx
import React from 'react';
import { render, screen, waitFor } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { render, screen } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach, type Mock } from 'vitest';
import MyDealsPage from './MyDealsPage';
import * as apiClient from '../services/apiClient';
import { useBestSalePricesQuery } from '../hooks/queries/useBestSalePricesQuery';
import type { WatchedItemDeal } from '../types';
import { createMockWatchedItemDeal } from '../tests/utils/mockFactories';
import { QueryWrapper } from '../tests/utils/renderWithProviders';
// Must explicitly call vi.mock() for apiClient
vi.mock('../services/apiClient');
const mockedApiClient = vi.mocked(apiClient);
vi.mock('../hooks/queries/useBestSalePricesQuery');
const mockedUseBestSalePricesQuery = useBestSalePricesQuery as Mock;
const renderWithQuery = (ui: React.ReactElement) => render(ui, { wrapper: QueryWrapper });
@@ -26,66 +24,65 @@ vi.mock('lucide-react', () => ({
describe('MyDealsPage', () => {
beforeEach(() => {
vi.clearAllMocks();
// Default mock: loading false, empty data
mockedUseBestSalePricesQuery.mockReturnValue({
data: [],
isLoading: false,
error: null,
});
});
it('should display a loading message initially', () => {
// Mock a pending promise
mockedApiClient.fetchBestSalePrices.mockReturnValue(new Promise(() => {}));
mockedUseBestSalePricesQuery.mockReturnValue({
data: [],
isLoading: true,
error: null,
});
renderWithQuery(<MyDealsPage />);
expect(screen.getByText('Loading your deals...')).toBeInTheDocument();
});
it('should display an error message if the API call fails', async () => {
mockedApiClient.fetchBestSalePrices.mockResolvedValue(
new Response(null, { status: 500, statusText: 'Server Error' }),
);
renderWithQuery(<MyDealsPage />);
await waitFor(() => {
expect(screen.getByText('Error')).toBeInTheDocument();
// The query hook throws an error with status code when JSON parsing fails on non-ok response
expect(screen.getByText('Request failed with status 500')).toBeInTheDocument();
it('should display an error message if the API call fails', () => {
mockedUseBestSalePricesQuery.mockReturnValue({
data: [],
isLoading: false,
error: new Error('Request failed with status 500'),
});
renderWithQuery(<MyDealsPage />);
expect(screen.getByText('Error')).toBeInTheDocument();
expect(screen.getByText('Request failed with status 500')).toBeInTheDocument();
});
it('should handle network errors and log them', async () => {
const networkError = new Error('Network connection failed');
mockedApiClient.fetchBestSalePrices.mockRejectedValue(networkError);
renderWithQuery(<MyDealsPage />);
await waitFor(() => {
expect(screen.getByText('Error')).toBeInTheDocument();
expect(screen.getByText('Network connection failed')).toBeInTheDocument();
it('should handle network errors and log them', () => {
mockedUseBestSalePricesQuery.mockReturnValue({
data: [],
isLoading: false,
error: new Error('Network connection failed'),
});
renderWithQuery(<MyDealsPage />);
expect(screen.getByText('Error')).toBeInTheDocument();
expect(screen.getByText('Network connection failed')).toBeInTheDocument();
});
it('should handle unknown errors and log them', async () => {
// Mock a rejection with an Error object - TanStack Query passes through Error objects
mockedApiClient.fetchBestSalePrices.mockRejectedValue(new Error('Unknown failure'));
renderWithQuery(<MyDealsPage />);
await waitFor(() => {
expect(screen.getByText('Error')).toBeInTheDocument();
expect(screen.getByText('Unknown failure')).toBeInTheDocument();
it('should handle unknown errors and log them', () => {
mockedUseBestSalePricesQuery.mockReturnValue({
data: [],
isLoading: false,
error: new Error('Unknown failure'),
});
renderWithQuery(<MyDealsPage />);
expect(screen.getByText('Error')).toBeInTheDocument();
expect(screen.getByText('Unknown failure')).toBeInTheDocument();
});
it('should display a message when no deals are found', () => {
renderWithQuery(<MyDealsPage />);
expect(
screen.getByText('No deals found for your watched items right now.'),
).toBeInTheDocument();
});
it('should render the list of deals on successful fetch', () => {
const mockDeals: WatchedItemDeal[] = [
createMockWatchedItemDeal({
master_item_id: 1,
@@ -104,20 +101,18 @@ describe('MyDealsPage', () => {
valid_to: '2024-10-22',
}),
];
mockedUseBestSalePricesQuery.mockReturnValue({
data: mockDeals,
isLoading: false,
error: null,
});
renderWithQuery(<MyDealsPage />);
expect(screen.getByText('Organic Bananas')).toBeInTheDocument();
expect(screen.getByText('$0.99')).toBeInTheDocument();
expect(screen.getByText('Almond Milk')).toBeInTheDocument();
expect(screen.getByText('$3.49')).toBeInTheDocument();
expect(screen.getByText('Green Grocer')).toBeInTheDocument();
});
});


@@ -11,20 +11,33 @@ import {
createMockUser,
} from '../tests/utils/mockFactories';
import { QueryWrapper } from '../tests/utils/renderWithProviders';
import { useUserProfileData } from '../hooks/useUserProfileData';
// Must explicitly call vi.mock() for apiClient
vi.mock('../services/apiClient');
vi.mock('../hooks/useUserProfileData');
const renderWithQuery = (ui: React.ReactElement) => render(ui, { wrapper: QueryWrapper });
vi.mock('../services/notificationService', () => ({
notifySuccess: vi.fn(),
notifyError: vi.fn(),
}));
import { notifyError } from '../services/notificationService';
vi.mock('../components/AchievementsList', () => ({
AchievementsList: ({
achievements,
}: {
achievements: (UserAchievement & Achievement)[] | null;
}) => (
<div data-testid="achievements-list-mock">Achievements Count: {achievements?.length || 0}</div>
),
}));
const mockedApiClient = vi.mocked(apiClient);
const mockedUseUserProfileData = vi.mocked(useUserProfileData);
const mockedNotifyError = vi.mocked(notifyError);
// --- Mock Data ---
const mockProfile: UserProfile = createMockUserProfile({
@@ -47,206 +60,109 @@ const mockAchievements: (UserAchievement & Achievement)[] = [
}),
];
const mockSetProfile = vi.fn();
describe('UserProfilePage', () => {
beforeEach(() => {
vi.clearAllMocks();
// Default mock implementation: Success state
mockedUseUserProfileData.mockReturnValue({
profile: mockProfile,
setProfile: mockSetProfile,
achievements: mockAchievements,
isLoading: false,
error: null,
});
});
// ... (Keep existing tests for loading message, error handling, rendering, etc.) ...
it('should display a loading message initially', () => {
mockedUseUserProfileData.mockReturnValue({
profile: null,
setProfile: mockSetProfile,
achievements: [],
isLoading: true,
error: null,
});
renderWithQuery(<UserProfilePage />);
expect(screen.getByText('Loading profile...')).toBeInTheDocument();
});
it('should display an error message if fetching profile fails', () => {
mockedUseUserProfileData.mockReturnValue({
profile: null,
setProfile: mockSetProfile,
achievements: [],
isLoading: false,
error: 'Network Error',
});
renderWithQuery(<UserProfilePage />);
expect(screen.getByText('Error: Network Error')).toBeInTheDocument();
});
it('should render the profile and achievements on successful fetch', () => {
renderWithQuery(<UserProfilePage />);
expect(screen.getByRole('heading', { name: 'Test User' })).toBeInTheDocument();
expect(screen.getByText('test@example.com')).toBeInTheDocument();
expect(screen.getByText('150 Points')).toBeInTheDocument();
expect(screen.getByAltText('User Avatar')).toHaveAttribute('src', mockProfile.avatar_url);
expect(screen.getByTestId('achievements-list-mock')).toHaveTextContent('Achievements Count: 1');
});
it('should render a fallback message if profile is null after loading', () => {
mockedUseUserProfileData.mockReturnValue({
profile: null,
setProfile: mockSetProfile,
achievements: [],
isLoading: false,
error: null,
});
renderWithQuery(<UserProfilePage />);
expect(screen.getByText('Could not load user profile.')).toBeInTheDocument();
});
it('should display a fallback avatar if the user has no avatar_url', () => {
const profileWithoutAvatar = { ...mockProfile, avatar_url: null, full_name: 'No Avatar User' };
mockedUseUserProfileData.mockReturnValue({
profile: profileWithoutAvatar,
setProfile: mockSetProfile,
achievements: [],
isLoading: false,
error: null,
});
renderWithQuery(<UserProfilePage />);
const avatarImage = screen.getByAltText('User Avatar');
const expectedSrc = 'https://api.dicebear.com/8.x/initials/svg?seed=No Avatar User';
expect(avatarImage).toHaveAttribute('src', expectedSrc);
});
it('should use email for avatar seed if full_name is missing', () => {
const profileNoName = { ...mockProfile, full_name: null, avatar_url: null };
mockedUseUserProfileData.mockReturnValue({
profile: profileNoName,
setProfile: mockSetProfile,
achievements: [],
isLoading: false,
error: null,
});
renderWithQuery(<UserProfilePage />);
const avatar = screen.getByAltText('User Avatar');
expect(avatar.getAttribute('src')).toContain(`seed=${profileNoName.user.email}`);
});
it('should trigger file input click when avatar is clicked', () => {
renderWithQuery(<UserProfilePage />);
const fileInput = screen.getByTestId('avatar-file-input');
const clickSpy = vi.spyOn(fileInput, 'click');
const avatarContainer = screen.getByAltText('User Avatar');
fireEvent.click(avatarContainer);
expect(clickSpy).toHaveBeenCalled();
});
describe('Name Editing', () => {
it('should allow editing and saving the user name', async () => {
const updatedProfile = { ...mockProfile, full_name: 'Updated Name' };
mockedApiClient.updateUserProfile.mockResolvedValue(
@@ -254,8 +170,6 @@ describe('UserProfilePage', () => {
);
renderWithQuery(<UserProfilePage />);
fireEvent.click(screen.getByRole('button', { name: /edit/i }));
const nameInput = screen.getByRole('textbox');
fireEvent.change(nameInput, { target: { value: 'Updated Name' } });
@@ -265,17 +179,14 @@ describe('UserProfilePage', () => {
expect(mockedApiClient.updateUserProfile).toHaveBeenCalledWith({
full_name: 'Updated Name',
});
expect(mockSetProfile).toHaveBeenCalled();
});
});
it('should allow canceling the name edit', () => {
renderWithQuery(<UserProfilePage />);
fireEvent.click(screen.getByRole('button', { name: /edit/i }));
fireEvent.click(screen.getByRole('button', { name: /cancel/i }));
expect(screen.queryByRole('textbox')).not.toBeInTheDocument();
expect(screen.getByRole('heading', { name: 'Test User' })).toBeInTheDocument();
});
@@ -285,7 +196,6 @@ describe('UserProfilePage', () => {
new Response(JSON.stringify({ message: 'Validation failed' }), { status: 400 }),
);
renderWithQuery(<UserProfilePage />);
fireEvent.click(screen.getByRole('button', { name: /edit/i }));
const nameInput = screen.getByRole('textbox');
@@ -293,136 +203,33 @@ describe('UserProfilePage', () => {
fireEvent.click(screen.getByRole('button', { name: /save/i }));
await waitFor(() => {
expect(mockedNotifyError).toHaveBeenCalledWith('Validation failed');
});
});
});
describe('Avatar Upload', () => {
it('should upload a new avatar and update the profile', async () => {
const updatedProfile = { ...mockProfile, avatar_url: 'https://example.com/new-avatar.png' };
mockedApiClient.uploadAvatar.mockResolvedValue(new Response(JSON.stringify(updatedProfile)));
renderWithQuery(<UserProfilePage />);
// Mock the hidden file input
const fileInput = screen.getByTestId('avatar-file-input');
const file = new File(['(⌐□_□)'], 'chucknorris.png', { type: 'image/png' });
fireEvent.change(fileInput, { target: { files: [file] } });
// Wait for the upload to complete and the UI to update.
await waitFor(() => {
expect(mockedApiClient.uploadAvatar).toHaveBeenCalledWith(file);
expect(screen.queryByTestId('avatar-upload-spinner')).not.toBeInTheDocument();
expect(mockSetProfile).toHaveBeenCalled();
});
});
it('should not attempt to upload if no file is selected', () => {
renderWithQuery(<UserProfilePage />);
const fileInput = screen.getByTestId('avatar-file-input');
// Simulate user canceling the file dialog
fireEvent.change(fileInput, { target: { files: null } });
// Assert that no API call was made
expect(mockedApiClient.uploadAvatar).not.toHaveBeenCalled();
});
@@ -431,96 +238,13 @@ describe('UserProfilePage', () => {
new Response(JSON.stringify({ message: 'File too large' }), { status: 413 }),
);
renderWithQuery(<UserProfilePage />);
const fileInput = screen.getByTestId('avatar-file-input');
const file = new File(['(⌐□_□)'], 'large.png', { type: 'image/png' });
fireEvent.change(fileInput, { target: { files: [file] } });
await waitFor(() => {
expect(mockedNotifyError).toHaveBeenCalledWith('File too large');
});
});
});


@@ -5,14 +5,18 @@ import { describe, it, expect, vi, beforeEach } from 'vitest';
import toast from 'react-hot-toast';
import { AdminBrandManager } from './AdminBrandManager';
import * as apiClient from '../../../services/apiClient';
import { useBrandsQuery } from '../../../hooks/queries/useBrandsQuery';
import { createMockBrand } from '../../../tests/utils/mockFactories';
import { renderWithProviders } from '../../../tests/utils/renderWithProviders';
// Must explicitly call vi.mock() for apiClient and the hook
vi.mock('../../../services/apiClient');
vi.mock('../../../hooks/queries/useBrandsQuery');
const mockedApiClient = vi.mocked(apiClient);
const mockedUseBrandsQuery = vi.mocked(useBrandsQuery);
const mockedToast = vi.mocked(toast, true);
const mockBrands = [
createMockBrand({ brand_id: 1, name: 'No Frills', store_name: 'No Frills', logo_url: null }),
createMockBrand({
@@ -26,70 +30,66 @@ const mockBrands = [
describe('AdminBrandManager', () => {
beforeEach(() => {
vi.clearAllMocks();
// Default mock: loading false, empty data
mockedUseBrandsQuery.mockReturnValue({
data: [],
isLoading: false,
error: null,
refetch: vi.fn(),
} as any);
});
it('should render a loading state initially', () => {
mockedUseBrandsQuery.mockReturnValue({
data: undefined,
isLoading: true,
error: null,
} as any);
renderWithProviders(<AdminBrandManager />);
expect(screen.getByText('Loading brands...')).toBeInTheDocument();
});
it('should render an error message if fetching brands fails', async () => {
mockedUseBrandsQuery.mockReturnValue({
data: undefined,
isLoading: false,
error: new Error('Network Error'),
} as any);
renderWithProviders(<AdminBrandManager />);
await waitFor(() => {
expect(screen.getByText('Failed to load brands: Network Error')).toBeInTheDocument();
});
});
it('should render the list of brands when data is fetched successfully', async () => {
mockedUseBrandsQuery.mockReturnValue({
data: mockBrands,
isLoading: false,
error: null,
} as any);
renderWithProviders(<AdminBrandManager />);
await waitFor(() => {
expect(screen.getByRole('heading', { name: /brand management/i })).toBeInTheDocument();
expect(screen.getByText('No Frills')).toBeInTheDocument();
expect(screen.getByText('(Sobeys)')).toBeInTheDocument();
expect(screen.getByAltText('Compliments logo')).toBeInTheDocument();
expect(screen.getByText('No Logo')).toBeInTheDocument();
});
});
it('should handle successful logo upload', async () => {
mockedUseBrandsQuery.mockReturnValue({
data: mockBrands,
isLoading: false,
error: null,
} as any);
mockedApiClient.uploadBrandLogo.mockImplementation(
async () =>
new Response(JSON.stringify({ logoUrl: 'https://example.com/new-logo.png' }), {
@@ -98,41 +98,34 @@ describe('AdminBrandManager', () => {
);
mockedToast.loading.mockReturnValue('toast-1');
renderWithProviders(<AdminBrandManager />);
await waitFor(() => expect(screen.getByText('No Frills')).toBeInTheDocument());
const file = new File(['logo'], 'logo.png', { type: 'image/png' });
// Use the new accessible label to find the correct input.
const input = screen.getByLabelText('Upload logo for No Frills');
fireEvent.change(input, { target: { files: [file] } });
await waitFor(() => {
expect(mockedApiClient.uploadBrandLogo).toHaveBeenCalledWith(1, file);
expect(mockedToast.loading).toHaveBeenCalledWith('Uploading logo...');
expect(mockedToast.success).toHaveBeenCalledWith('Logo updated successfully!', {
id: 'toast-1',
});
// Check if the UI updates with the new logo
expect(screen.getByAltText('No Frills logo')).toHaveAttribute(
'src',
'https://example.com/new-logo.png',
);
});
});
it('should handle failed logo upload with a non-Error object', async () => {
// Reject with a string instead of an Error object to test the fallback error handling
mockedUseBrandsQuery.mockReturnValue({
data: mockBrands,
isLoading: false,
error: null,
} as any);
mockedApiClient.uploadBrandLogo.mockRejectedValue('A string error');
mockedToast.loading.mockReturnValue('toast-non-error');
@@ -145,104 +138,88 @@ describe('AdminBrandManager', () => {
fireEvent.change(input, { target: { files: [file] } });
await waitFor(() => {
// This assertion verifies that the `String(e)` part of the catch block is executed.
expect(mockedToast.error).toHaveBeenCalledWith('Upload failed: A string error', {
id: 'toast-non-error',
});
});
});
it('should handle failed logo upload', async () => {
mockedUseBrandsQuery.mockReturnValue({
data: mockBrands,
isLoading: false,
error: null,
} as any);
mockedApiClient.uploadBrandLogo.mockRejectedValue(new Error('Upload failed'));
mockedToast.loading.mockReturnValue('toast-2');
renderWithProviders(<AdminBrandManager />);
await waitFor(() => expect(screen.getByText('No Frills')).toBeInTheDocument());
const file = new File(['logo'], 'logo.png', { type: 'image/png' });
const input = screen.getByLabelText('Upload logo for No Frills');
fireEvent.change(input, { target: { files: [file] } });
await waitFor(() => {
expect(mockedToast.error).toHaveBeenCalledWith('Upload failed: Upload failed', {
id: 'toast-2',
});
});
});
it('should show an error toast for invalid file type', async () => {
mockedUseBrandsQuery.mockReturnValue({
data: mockBrands,
isLoading: false,
error: null,
} as any);
renderWithProviders(<AdminBrandManager />);
console.log('TEST ACTION: Waiting for initial brands to render.');
await waitFor(() => expect(screen.getByText('No Frills')).toBeInTheDocument());
const file = new File(['text'], 'document.txt', { type: 'text/plain' });
const input = screen.getByLabelText('Upload logo for No Frills');
console.log('TEST ACTION: Firing file change event with invalid file type.');
fireEvent.change(input, { target: { files: [file] } });
console.log('TEST ASSERTION: Waiting for validation error toast.');
await waitFor(() => {
expect(mockedToast.error).toHaveBeenCalledWith(
'Invalid file type. Please upload a PNG, JPG, WEBP, or SVG.',
);
expect(mockedApiClient.uploadBrandLogo).not.toHaveBeenCalled();
console.log('TEST SUCCESS: Validation toast shown and upload API not called.');
});
console.log('TEST END: should show an error toast for invalid file type');
});
it('should show an error toast for oversized file', async () => {
console.log('TEST START: should show an error toast for oversized file');
console.log('TEST SETUP: Mocking fetchAllBrands to resolve successfully.');
mockedApiClient.fetchAllBrands.mockImplementation(
async () => new Response(JSON.stringify(mockBrands), { status: 200 }),
);
console.log('TEST ACTION: Rendering AdminBrandManager component.');
mockedUseBrandsQuery.mockReturnValue({
data: mockBrands,
isLoading: false,
error: null,
} as any);
renderWithProviders(<AdminBrandManager />);
console.log('TEST ACTION: Waiting for initial brands to render.');
await waitFor(() => expect(screen.getByText('No Frills')).toBeInTheDocument());
const file = new File(['a'.repeat(3 * 1024 * 1024)], 'large.png', { type: 'image/png' });
const input = screen.getByLabelText('Upload logo for No Frills');
console.log('TEST ACTION: Firing file change event with oversized file.');
fireEvent.change(input, { target: { files: [file] } });
console.log('TEST ASSERTION: Waiting for size validation error toast.');
await waitFor(() => {
expect(mockedToast.error).toHaveBeenCalledWith('File is too large. Maximum size is 2MB.');
expect(mockedApiClient.uploadBrandLogo).not.toHaveBeenCalled();
console.log('TEST SUCCESS: Size validation toast shown and upload API not called.');
});
console.log('TEST END: should show an error toast for oversized file');
});
it('should show an error toast if upload fails with a non-ok response', async () => {
console.log('TEST START: should handle non-ok response from upload API');
mockedApiClient.fetchAllBrands.mockImplementation(
async () => new Response(JSON.stringify(mockBrands), { status: 200 }),
);
// Mock a failed response (e.g., 400 Bad Request)
mockedUseBrandsQuery.mockReturnValue({
data: mockBrands,
isLoading: false,
error: null,
} as any);
mockedApiClient.uploadBrandLogo.mockResolvedValue(
new Response('Invalid image format', { status: 400 }),
);
@@ -260,51 +237,49 @@ describe('AdminBrandManager', () => {
expect(mockedToast.error).toHaveBeenCalledWith('Upload failed: Invalid image format', {
id: 'toast-3',
});
console.log('TEST SUCCESS: Error toast shown for non-ok response.');
});
console.log('TEST END: should handle non-ok response from upload API');
});
it('should show an error toast if no file is selected', async () => {
console.log('TEST START: should show an error toast if no file is selected');
console.log('TEST SETUP: Mocking fetchAllBrands to resolve successfully.');
mockedApiClient.fetchAllBrands.mockImplementation(
async () => new Response(JSON.stringify(mockBrands), { status: 200 }),
);
mockedUseBrandsQuery.mockReturnValue({
data: mockBrands,
isLoading: false,
error: null,
} as any);
renderWithProviders(<AdminBrandManager />);
console.log('TEST ACTION: Waiting for initial brands to render.');
await waitFor(() => expect(screen.getByText('No Frills')).toBeInTheDocument());
const input = screen.getByLabelText('Upload logo for No Frills');
// Simulate canceling the file picker by firing a change event with an empty file list.
console.log('TEST ACTION: Firing file change event with an empty file list.');
fireEvent.change(input, { target: { files: [] } });
console.log('TEST ASSERTION: Waiting for the "no file selected" error toast.');
await waitFor(() => {
expect(mockedToast.error).toHaveBeenCalledWith('Please select a file to upload.');
console.log('TEST SUCCESS: Error toast shown when no file is selected.');
});
console.log('TEST END: should show an error toast if no file is selected');
});
it('should render an empty table if no brands are found', async () => {
mockedApiClient.fetchAllBrands.mockImplementation(
async () => new Response(JSON.stringify([]), { status: 200 }),
);
mockedUseBrandsQuery.mockReturnValue({
data: [],
isLoading: false,
error: null,
} as any);
renderWithProviders(<AdminBrandManager />);
await waitFor(() => {
expect(screen.getByRole('heading', { name: /brand management/i })).toBeInTheDocument();
// Only the header row should be present
expect(screen.getAllByRole('row')).toHaveLength(1);
});
});
it('should use status code in error message if response body is empty on upload failure', async () => {
mockedApiClient.fetchAllBrands.mockImplementation(
async () => new Response(JSON.stringify(mockBrands), { status: 200 }),
);
mockedUseBrandsQuery.mockReturnValue({
data: mockBrands,
isLoading: false,
error: null,
} as any);
mockedApiClient.uploadBrandLogo.mockImplementation(
async () => new Response(null, { status: 500, statusText: 'Internal Server Error' }),
);
@@ -326,9 +301,12 @@ describe('AdminBrandManager', () => {
});
it('should only update the target brand logo and leave others unchanged', async () => {
mockedApiClient.fetchAllBrands.mockImplementation(
async () => new Response(JSON.stringify(mockBrands), { status: 200 }),
);
mockedUseBrandsQuery.mockReturnValue({
data: mockBrands,
isLoading: false,
error: null,
} as any);
mockedApiClient.uploadBrandLogo.mockImplementation(
async () => new Response(JSON.stringify({ logoUrl: 'new-logo.png' }), { status: 200 }),
);
@@ -337,17 +315,12 @@ describe('AdminBrandManager', () => {
renderWithProviders(<AdminBrandManager />);
await waitFor(() => expect(screen.getByText('No Frills')).toBeInTheDocument());
// Brand 1: No Frills (initially null logo)
// Brand 2: Compliments (initially has logo)
const file = new File(['logo'], 'logo.png', { type: 'image/png' });
const input = screen.getByLabelText('Upload logo for No Frills'); // Brand 1
const input = screen.getByLabelText('Upload logo for No Frills');
fireEvent.change(input, { target: { files: [file] } });
await waitFor(() => {
// Brand 1 should have new logo
expect(screen.getByAltText('No Frills logo')).toHaveAttribute('src', 'new-logo.png');
// Brand 2 should still have original logo
expect(screen.getByAltText('Compliments logo')).toHaveAttribute(
'src',
'https://example.com/compliments.png',


@@ -65,6 +65,13 @@ const activityLogSchema = z.object({
}),
});
const usersListSchema = z.object({
query: z.object({
limit: optionalNumeric({ integer: true, positive: true, max: 100 }),
offset: optionalNumeric({ default: 0, integer: true, nonnegative: true }),
}),
});
const jobRetrySchema = z.object({
params: z.object({
queueName: z.enum([
@@ -712,21 +719,35 @@ router.put(
* get:
* tags: [Admin]
* summary: Get all users
* description: Retrieve a list of all users. Requires admin role.
* description: Retrieve a list of all users with optional pagination. Requires admin role.
* security:
* - bearerAuth: []
* parameters:
* - in: query
* name: limit
* schema:
* type: integer
* maximum: 100
* description: Maximum number of users to return. If omitted, returns all users.
* - in: query
* name: offset
* schema:
* type: integer
* default: 0
* description: Number of users to skip
* responses:
* 200:
* description: List of all users
* description: List of users with total count
* 401:
* description: Unauthorized
* 403:
* description: Forbidden - admin role required
*/
router.get('/users', validateRequest(emptySchema), async (req, res, next: NextFunction) => {
router.get('/users', validateRequest(usersListSchema), async (req, res, next: NextFunction) => {
try {
const users = await db.adminRepo.getAllUsers(req.log);
sendSuccess(res, users);
const { limit, offset } = usersListSchema.shape.query.parse(req.query);
const result = await db.adminRepo.getAllUsers(req.log, limit, offset);
sendSuccess(res, result);
} catch (error) {
req.log.error({ error }, 'Error fetching users');
next(error);
@@ -1298,6 +1319,43 @@ router.post(
},
);
/**
* @openapi
* /admin/trigger/token-cleanup:
* post:
* tags: [Admin]
* summary: Trigger token cleanup
* description: Manually trigger the expired token cleanup job. Requires admin role.
* security:
* - bearerAuth: []
* responses:
* 202:
* description: Job enqueued successfully
* 401:
* description: Unauthorized
* 403:
* description: Forbidden - admin role required
*/
router.post(
'/trigger/token-cleanup',
adminTriggerLimiter,
validateRequest(emptySchema),
async (req: Request, res: Response, next: NextFunction) => {
const userProfile = req.user as UserProfile;
req.log.info(
`[Admin] Manual trigger for token cleanup received from user: ${userProfile.user.user_id}`,
);
try {
const jobId = await backgroundJobService.triggerTokenCleanup();
sendSuccess(res, { message: 'Successfully enqueued token cleanup job.', jobId }, 202);
} catch (error) {
req.log.error({ error }, 'Error enqueuing token cleanup job');
next(error);
}
},
);
/**
* @openapi
* /admin/system/clear-cache:


@@ -122,10 +122,10 @@ describe('Admin User Management Routes (/api/admin/users)', () => {
createMockAdminUserView({ user_id: '1', email: 'user1@test.com', role: 'user' }),
createMockAdminUserView({ user_id: '2', email: 'user2@test.com', role: 'admin' }),
];
vi.mocked(adminRepo.getAllUsers).mockResolvedValue(mockUsers);
vi.mocked(adminRepo.getAllUsers).mockResolvedValue({ users: mockUsers, total: 2 });
const response = await supertest(app).get('/api/admin/users');
expect(response.status).toBe(200);
expect(response.body.data).toEqual(mockUsers);
expect(response.body.data).toEqual({ users: mockUsers, total: 2 });
expect(adminRepo.getAllUsers).toHaveBeenCalledTimes(1);
});

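Since `GET /api/admin/users` now returns `{ users, total }` instead of a bare array, a paging client has to derive its requests from `total`. A minimal sketch of that — the response shape is taken from the diff above, while the helper name and types are illustrative only:

```typescript
// Response shape returned by the updated GET /api/admin/users endpoint (from the diff).
interface PagedUsers<T> {
  users: T[];
  total: number;
}

// Illustrative helper: compute the offsets a client would request to walk every page.
function pageOffsets(total: number, limit: number): number[] {
  const offsets: number[] = [];
  for (let offset = 0; offset < total; offset += limit) {
    offsets.push(offset);
  }
  return offsets;
}
```

For example, with `total: 5` and `limit: 2`, the client would issue requests at offsets 0, 2, and 4.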

@@ -158,7 +158,11 @@ const searchWebSchema = z.object({
body: z.object({ query: requiredString('A search query is required.') }),
});
const uploadToDisk = createUploadMiddleware({ storageType: 'flyer' });
const uploadToDisk = createUploadMiddleware({
storageType: 'flyer',
fileSize: 50 * 1024 * 1024, // 50MB limit for flyer uploads
fileFilter: 'image',
});
// Diagnostic middleware: log incoming AI route requests (headers and sizes)
router.use((req: Request, res: Response, next: NextFunction) => {


@@ -36,6 +36,14 @@ vi.mock('../config/passport', () => ({
next();
}),
},
requireAuth: vi.fn((req: Request, res: Response, next: NextFunction) => {
// If req.user is not set by the test setup, simulate unauthenticated access.
if (!req.user) {
return res.status(401).json({ message: 'Unauthorized' });
}
// If req.user is set, proceed as an authenticated user.
next();
}),
}));
// Define a reusable matcher for the logger object.


@@ -1,7 +1,7 @@
// src/routes/deals.routes.ts
import express, { type Request, type Response, type NextFunction } from 'express';
import { z } from 'zod';
import passport from '../config/passport';
import { requireAuth } from '../config/passport';
import { dealsRepo } from '../services/db/deals.db';
import type { UserProfile } from '../types';
import { validateRequest } from '../middleware/validation.middleware';
@@ -19,8 +19,8 @@ const bestWatchedPricesSchema = z.object({
// --- Middleware for all deal routes ---
// Per ADR-002, all routes in this file require an authenticated user.
// We apply the standard passport JWT middleware at the router level.
router.use(passport.authenticate('jwt', { session: false }));
// We apply the requireAuth middleware which returns standardized 401 responses per ADR-028.
router.use(requireAuth);
/**
* @openapi


@@ -38,14 +38,17 @@ describe('Personalization Routes (/api/personalization)', () => {
describe('GET /master-items', () => {
it('should return a list of master items', async () => {
const mockItems = [createMockMasterGroceryItem({ master_grocery_item_id: 1, name: 'Milk' })];
vi.mocked(db.personalizationRepo.getAllMasterItems).mockResolvedValue(mockItems);
vi.mocked(db.personalizationRepo.getAllMasterItems).mockResolvedValue({
items: mockItems,
total: 1,
});
const response = await supertest(app)
.get('/api/personalization/master-items')
.set('x-test-rate-limit-enable', 'true');
expect(response.status).toBe(200);
expect(response.body.data).toEqual(mockItems);
expect(response.body.data).toEqual({ items: mockItems, total: 1 });
});
it('should return 500 if the database call fails', async () => {
@@ -113,7 +116,10 @@ describe('Personalization Routes (/api/personalization)', () => {
describe('Rate Limiting', () => {
it('should apply publicReadLimiter to GET /master-items', async () => {
vi.mocked(db.personalizationRepo.getAllMasterItems).mockResolvedValue([]);
vi.mocked(db.personalizationRepo.getAllMasterItems).mockResolvedValue({
items: [],
total: 0,
});
const response = await supertest(app)
.get('/api/personalization/master-items')
.set('X-Test-Rate-Limit-Enable', 'true');


@@ -5,6 +5,7 @@ import * as db from '../services/db/index.db';
import { validateRequest } from '../middleware/validation.middleware';
import { publicReadLimiter } from '../config/rateLimiters';
import { sendSuccess } from '../utils/apiResponse';
import { optionalNumeric } from '../utils/zodUtils';
const router = Router();
@@ -13,16 +14,37 @@ const router = Router();
// to maintain a consistent validation pattern across the application.
const emptySchema = z.object({});
// Schema for master-items with optional pagination
const masterItemsSchema = z.object({
query: z.object({
limit: optionalNumeric({ integer: true, positive: true, max: 500 }),
offset: optionalNumeric({ default: 0, integer: true, nonnegative: true }),
}),
});
/**
* @openapi
* /personalization/master-items:
* get:
* tags: [Personalization]
* summary: Get master items list
* description: Get the master list of all grocery items. Response is cached for 1 hour.
* description: Get the master list of all grocery items with optional pagination. Response is cached for 1 hour.
* parameters:
* - in: query
* name: limit
* schema:
* type: integer
* maximum: 500
* description: Maximum number of items to return. If omitted, returns all items.
* - in: query
* name: offset
* schema:
* type: integer
* default: 0
* description: Number of items to skip
* responses:
* 200:
* description: List of all master grocery items
* description: List of master grocery items with total count
* content:
* application/json:
* schema:
@@ -31,17 +53,20 @@ const emptySchema = z.object({});
router.get(
'/master-items',
publicReadLimiter,
validateRequest(emptySchema),
validateRequest(masterItemsSchema),
async (req: Request, res: Response, next: NextFunction) => {
try {
// Parse and apply defaults from schema
const { limit, offset } = masterItemsSchema.shape.query.parse(req.query);
// LOGGING: Track how often this heavy DB call is actually made vs served from cache
req.log.info('Fetching master items list from database...');
req.log.info({ limit, offset }, 'Fetching master items list from database...');
// Optimization: This list changes rarely. Instruct clients to cache it for 1 hour (3600s).
res.set('Cache-Control', 'public, max-age=3600');
const masterItems = await db.personalizationRepo.getAllMasterItems(req.log);
sendSuccess(res, masterItems);
const result = await db.personalizationRepo.getAllMasterItems(req.log, limit, offset);
sendSuccess(res, result);
} catch (error) {
req.log.error({ error }, 'Error fetching master items in /api/personalization/master-items:');
next(error);

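`optionalNumeric` is a project helper whose exact signature isn't shown in this diff; from the call sites (`limit: optionalNumeric({ integer: true, positive: true, max: 500 })`, `offset: optionalNumeric({ default: 0, ... })`) it appears to coerce an optional query-string value to a bounded integer, falling back to a default when one is configured. A dependency-free sketch of that behavior — the name and option semantics here are assumptions, not the real helper:

```typescript
// Hypothetical stand-in for the project's optionalNumeric zod helper.
// Coerces an optional query-string value to an integer, enforcing bounds
// and applying an optional default when the value is absent.
function coerceOptionalInt(
  raw: string | undefined,
  opts: { default?: number; min?: number; max?: number } = {},
): number | undefined {
  if (raw === undefined) return opts.default;
  const n = Number(raw);
  if (!Number.isInteger(n)) throw new Error('Expected an integer');
  if (opts.min !== undefined && n < opts.min) throw new Error(`Must be >= ${opts.min}`);
  if (opts.max !== undefined && n > opts.max) throw new Error(`Must be <= ${opts.max}`);
  return n;
}
```

With this shape, `?limit=500` passes a `max: 500` bound, an omitted `offset` resolves to its default, and an out-of-range value is rejected before the handler runs.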

@@ -239,6 +239,50 @@ router.get(
},
);
/**
* @openapi
* /users/notifications/unread-count:
* get:
* tags: [Users]
* summary: Get unread notification count
* description: Get the count of unread notifications for the authenticated user. Optimized for navbar badge UI.
* security:
* - bearerAuth: []
* responses:
* 200:
* description: Unread notification count
* content:
* application/json:
* schema:
* type: object
* properties:
* success:
* type: boolean
* example: true
* data:
* type: object
* properties:
* count:
* type: integer
* example: 5
* 401:
* description: Unauthorized - invalid or missing token
*/
router.get(
'/notifications/unread-count',
validateRequest(emptySchema),
async (req: Request, res: Response, next: NextFunction) => {
try {
const userProfile = req.user as UserProfile;
const count = await db.notificationRepo.getUnreadCount(userProfile.user.user_id, req.log);
sendSuccess(res, { count });
} catch (error) {
req.log.error({ error }, 'Error fetching unread notification count');
next(error);
}
},
);
/**
* @openapi
* /users/notifications/mark-all-read:
@@ -294,7 +338,7 @@ router.post(
* description: Notification not found
*/
const notificationIdSchema = numericIdParam('notificationId');
type MarkNotificationReadRequest = z.infer<typeof notificationIdSchema>;
type NotificationIdRequest = z.infer<typeof notificationIdSchema>;
router.post(
'/notifications/:notificationId/mark-read',
validateRequest(notificationIdSchema),
@@ -302,7 +346,7 @@ router.post(
try {
const userProfile = req.user as UserProfile;
// Apply ADR-003 pattern for type safety
const { params } = req as unknown as MarkNotificationReadRequest;
const { params } = req as unknown as NotificationIdRequest;
await db.notificationRepo.markNotificationAsRead(
params.notificationId,
userProfile.user.user_id,
@@ -316,6 +360,51 @@ router.post(
},
);
/**
* @openapi
* /users/notifications/{notificationId}:
* delete:
* tags: [Users]
* summary: Delete a notification
* description: Delete a specific notification by its ID. Users can only delete their own notifications.
* security:
* - bearerAuth: []
* parameters:
* - in: path
* name: notificationId
* required: true
* schema:
* type: integer
* description: ID of the notification to delete
* responses:
* 204:
* description: Notification deleted successfully
* 401:
* description: Unauthorized - invalid or missing token
* 404:
* description: Notification not found or user does not have permission
*/
router.delete(
'/notifications/:notificationId',
validateRequest(notificationIdSchema),
async (req: Request, res: Response, next: NextFunction) => {
try {
const userProfile = req.user as UserProfile;
// Apply ADR-003 pattern for type safety
const { params } = req as unknown as NotificationIdRequest;
await db.notificationRepo.deleteNotification(
params.notificationId,
userProfile.user.user_id,
req.log,
);
sendNoContent(res);
} catch (error) {
req.log.error({ error }, 'Error deleting notification');
next(error);
}
},
);
/**
* @openapi
* /users/profile:


@@ -160,10 +160,11 @@ export class AIService {
this.logger = logger;
this.logger.info('---------------- [AIService] Constructor Start ----------------');
// Use mock AI in test and staging environments (no real API calls, no GEMINI_API_KEY needed)
// Use mock AI in test, staging, and development environments (no real API calls, no GEMINI_API_KEY needed)
const isTestEnvironment =
process.env.NODE_ENV === 'test' ||
process.env.NODE_ENV === 'staging' ||
process.env.NODE_ENV === 'development' ||
!!process.env.VITEST_POOL_ID;
if (aiClient) {

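Extracted as a standalone predicate, the environment check added above amounts to the following (a sketch; the real code reads `process.env` inline in the `AIService` constructor):

```typescript
// Mirrors the isTestEnvironment check from AIService: mock AI is used in
// test, staging, and development environments, or whenever running under Vitest.
function shouldUseMockAi(
  nodeEnv: string | undefined,
  vitestPoolId: string | undefined,
): boolean {
  return (
    nodeEnv === 'test' ||
    nodeEnv === 'staging' ||
    nodeEnv === 'development' ||
    !!vitestPoolId
  );
}
```

The practical effect of the diff is that local development no longer needs a `GEMINI_API_KEY`; only a production `NODE_ENV` (outside Vitest) reaches the real client.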

@@ -8,7 +8,7 @@ import type { Notification, WatchedItemDeal } from '../types';
// Import types for repositories from their source files
import type { PersonalizationRepository } from './db/personalization.db';
import type { NotificationRepository } from './db/notification.db';
import { analyticsQueue, weeklyAnalyticsQueue } from './queueService.server';
import { analyticsQueue, weeklyAnalyticsQueue, tokenCleanupQueue } from './queueService.server';
type UserDealGroup = {
userProfile: { user_id: string; email: string; full_name: string | null };
@@ -54,6 +54,16 @@ export class BackgroundJobService {
return job.id;
}
public async triggerTokenCleanup(): Promise<string> {
const timestamp = new Date().toISOString();
const jobId = `manual-token-cleanup-${Date.now()}`;
const job = await tokenCleanupQueue.add('cleanup-tokens', { timestamp }, { jobId });
if (!job.id) {
throw new Error('Failed to enqueue token cleanup job: No job ID returned');
}
return job.id;
}
/**
* Prepares the data for an email notification job based on a user's deals.
* @param user The user to whom the email will be sent.
@@ -107,7 +117,10 @@ export class BackgroundJobService {
private async _processDealsForUser({
userProfile,
deals,
}: UserDealGroup): Promise<Omit<Notification, 'notification_id' | 'is_read' | 'created_at' | 'updated_at'> | null> {
}: UserDealGroup): Promise<Omit<
Notification,
'notification_id' | 'is_read' | 'created_at' | 'updated_at'
> | null> {
try {
this.logger.info(
`[BackgroundJob] Found ${deals.length} deals for user ${userProfile.user_id}.`,

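`triggerTokenCleanup` builds the BullMQ job ID from the current epoch time, so repeated manual triggers always produce distinct IDs. A sketch of the format used above (factored into a function here purely for illustration):

```typescript
// Job ID format used by triggerTokenCleanup: manual-token-cleanup-<epoch ms>.
function makeTokenCleanupJobId(now: number = Date.now()): string {
  return `manual-token-cleanup-${now}`;
}
```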

@@ -668,12 +668,17 @@ describe('Admin DB Service', () => {
const mockUsers: AdminUserView[] = [
createMockAdminUserView({ user_id: '1', email: 'test@test.com' }),
];
mockDb.query.mockResolvedValue({ rows: mockUsers });
// Mock count query
mockDb.query.mockResolvedValueOnce({ rows: [{ count: '1' }] });
// Mock users query
mockDb.query.mockResolvedValueOnce({ rows: mockUsers });
const result = await adminRepo.getAllUsers(mockLogger);
expect(mockDb.query).toHaveBeenCalledWith(
expect.stringContaining('FROM public.users u JOIN public.profiles p'),
undefined,
);
expect(result).toEqual(mockUsers);
expect(result).toEqual({ users: mockUsers, total: 1 });
});
it('should throw an error if the database query fails', async () => {


@@ -627,14 +627,33 @@ export class AdminRepository {
}
}
async getAllUsers(logger: Logger): Promise<AdminUserView[]> {
async getAllUsers(
logger: Logger,
limit?: number,
offset?: number,
): Promise<{ users: AdminUserView[]; total: number }> {
try {
const query = `
// Get total count
const countRes = await this.db.query<{ count: string }>('SELECT COUNT(*) FROM public.users');
const total = parseInt(countRes.rows[0].count, 10);
// Build query with optional pagination
let query = `
SELECT u.user_id, u.email, u.created_at, p.role, p.full_name, p.avatar_url
FROM public.users u JOIN public.profiles p ON u.user_id = p.user_id ORDER BY u.created_at DESC;
`;
const res = await this.db.query<AdminUserView>(query);
return res.rows;
FROM public.users u JOIN public.profiles p ON u.user_id = p.user_id ORDER BY u.created_at DESC`;
const params: number[] = [];
if (limit !== undefined) {
query += ` LIMIT $${params.length + 1}`;
params.push(limit);
}
if (offset !== undefined) {
query += ` OFFSET $${params.length + 1}`;
params.push(offset);
}
const res = await this.db.query<AdminUserView>(query, params.length > 0 ? params : undefined);
return { users: res.rows, total };
} catch (error) {
handleDbError(
error,

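Both `getAllUsers` and `getAllMasterItems` append `LIMIT`/`OFFSET` clauses with positional parameters only when the caller supplies them, so the placeholder numbering stays correct whether zero, one, or both values are present. The pattern, isolated into a helper (a sketch; the repositories inline this logic rather than sharing a function):

```typescript
// Appends LIMIT/OFFSET with $1, $2, ... placeholders only for values that are
// defined, matching the pattern used in AdminRepository.getAllUsers above.
function withPagination(
  baseQuery: string,
  limit?: number,
  offset?: number,
): { text: string; values?: number[] } {
  let text = baseQuery;
  const values: number[] = [];
  if (limit !== undefined) {
    text += ` LIMIT $${values.length + 1}`;
    values.push(limit);
  }
  if (offset !== undefined) {
    text += ` OFFSET $${values.length + 1}`;
    values.push(offset);
  }
  // pg treats an undefined values array as "no parameters".
  return { text, values: values.length > 0 ? values : undefined };
}
```

Note that when only `offset` is given, it binds to `$1` — the numbering follows the parameters actually emitted, not fixed positions.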

@@ -32,7 +32,7 @@ export class DealsRepository {
const query = `
WITH UserWatchedItems AS (
-- Select all items the user is watching
SELECT master_item_id FROM watched_items WHERE user_id = $1
SELECT master_item_id FROM user_watched_items WHERE user_id = $1
),
RankedPrices AS (
-- Find all current sale prices for those items and rank them
@@ -70,9 +70,15 @@ export class DealsRepository {
const { rows } = await this.db.query<WatchedItemDeal>(query, [userId]);
return rows;
} catch (error) {
handleDbError(error, logger, 'Database error in findBestPricesForWatchedItems', { userId }, {
defaultMessage: 'Failed to find best prices for watched items.',
});
handleDbError(
error,
logger,
'Database error in findBestPricesForWatchedItems',
{ userId },
{
defaultMessage: 'Failed to find best prices for watched items.',
},
);
}
}
}


@@ -34,10 +34,16 @@ export class NotificationRepository {
);
return res.rows[0];
} catch (error) {
handleDbError(error, logger, 'Database error in createNotification', { userId, content, linkUrl }, {
fkMessage: 'The specified user does not exist.',
defaultMessage: 'Failed to create notification.',
});
handleDbError(
error,
logger,
'Database error in createNotification',
{ userId, content, linkUrl },
{
fkMessage: 'The specified user does not exist.',
defaultMessage: 'Failed to create notification.',
},
);
}
}
@@ -74,10 +80,16 @@ export class NotificationRepository {
await this.db.query(query, [userIds, contents, linkUrls]);
} catch (error) {
handleDbError(error, logger, 'Database error in createBulkNotifications', { notifications }, {
fkMessage: 'One or more of the specified users do not exist.',
defaultMessage: 'Failed to create bulk notifications.',
});
handleDbError(
error,
logger,
'Database error in createBulkNotifications',
{ notifications },
{
fkMessage: 'One or more of the specified users do not exist.',
defaultMessage: 'Failed to create bulk notifications.',
},
);
}
}
@@ -118,6 +130,32 @@ export class NotificationRepository {
}
}
/**
* Gets the count of unread notifications for a specific user.
* This is optimized for the navbar badge UI.
* @param userId The ID of the user.
* @returns A promise that resolves to the count of unread notifications.
*/
async getUnreadCount(userId: string, logger: Logger): Promise<number> {
try {
const res = await this.db.query<{ count: string }>(
`SELECT COUNT(*) FROM public.notifications WHERE user_id = $1 AND is_read = false`,
[userId],
);
return parseInt(res.rows[0].count, 10);
} catch (error) {
handleDbError(
error,
logger,
'Database error in getUnreadCount',
{ userId },
{
defaultMessage: 'Failed to get unread notification count.',
},
);
}
}
/**
* Marks all unread notifications for a user as read.
* @param userId The ID of the user whose notifications should be marked as read.
@@ -130,9 +168,15 @@ export class NotificationRepository {
[userId],
);
} catch (error) {
handleDbError(error, logger, 'Database error in markAllNotificationsAsRead', { userId }, {
defaultMessage: 'Failed to mark notifications as read.',
});
handleDbError(
error,
logger,
'Database error in markAllNotificationsAsRead',
{ userId },
{
defaultMessage: 'Failed to mark notifications as read.',
},
);
}
}
@@ -169,6 +213,35 @@ export class NotificationRepository {
}
}
/**
* Deletes a single notification for a specific user.
* Ensures that a user can only delete their own notifications.
* @param notificationId The ID of the notification to delete.
* @param userId The ID of the user who owns the notification.
* @throws NotFoundError if the notification is not found or does not belong to the user.
*/
async deleteNotification(notificationId: number, userId: string, logger: Logger): Promise<void> {
try {
const res = await this.db.query(
`DELETE FROM public.notifications WHERE notification_id = $1 AND user_id = $2`,
[notificationId, userId],
);
if (res.rowCount === 0) {
throw new NotFoundError('Notification not found or user does not have permission.');
}
} catch (error) {
handleDbError(
error,
logger,
'Database error in deleteNotification',
{ notificationId, userId },
{
defaultMessage: 'Failed to delete notification.',
},
);
}
}
/**
* Deletes notifications that are older than a specified number of days.
* This is intended for a periodic cleanup job.
@@ -183,9 +256,15 @@ export class NotificationRepository {
);
return res.rowCount ?? 0;
} catch (error) {
handleDbError(error, logger, 'Database error in deleteOldNotifications', { daysOld }, {
defaultMessage: 'Failed to delete old notifications.',
});
handleDbError(
error,
logger,
'Database error in deleteOldNotifications',
{ daysOld },
{
defaultMessage: 'Failed to delete old notifications.',
},
);
}
}
}


@@ -5,7 +5,10 @@ import type { Pool, PoolClient } from 'pg';
import { withTransaction } from './connection.db';
import { PersonalizationRepository } from './personalization.db';
import type { MasterGroceryItem, UserAppliance, DietaryRestriction, Appliance } from '../../types';
import { createMockMasterGroceryItem, createMockUserAppliance } from '../../tests/utils/mockFactories';
import {
createMockMasterGroceryItem,
createMockUserAppliance,
} from '../../tests/utils/mockFactories';
// Un-mock the module we are testing to ensure we use the real implementation.
vi.unmock('./personalization.db');
@@ -50,7 +53,10 @@ describe('Personalization DB Service', () => {
const mockItems: MasterGroceryItem[] = [
createMockMasterGroceryItem({ master_grocery_item_id: 1, name: 'Apples' }),
];
mockQuery.mockResolvedValue({ rows: mockItems });
// Mock count query
mockQuery.mockResolvedValueOnce({ rows: [{ count: '1' }] });
// Mock items query
mockQuery.mockResolvedValueOnce({ rows: mockItems });
const result = await personalizationRepo.getAllMasterItems(mockLogger);
@@ -64,14 +70,17 @@ describe('Personalization DB Service', () => {
// The query string in the implementation has a lot of whitespace from the template literal.
// This updated expectation matches the new query exactly.
expect(mockQuery).toHaveBeenCalledWith(expectedQuery);
expect(result).toEqual(mockItems);
expect(mockQuery).toHaveBeenCalledWith(expectedQuery, undefined);
expect(result).toEqual({ items: mockItems, total: 1 });
});
it('should return an empty array if no master items exist', async () => {
mockQuery.mockResolvedValue({ rows: [] });
// Mock count query
mockQuery.mockResolvedValueOnce({ rows: [{ count: '0' }] });
// Mock items query
mockQuery.mockResolvedValueOnce({ rows: [] });
const result = await personalizationRepo.getAllMasterItems(mockLogger);
expect(result).toEqual([]);
expect(result).toEqual({ items: [], total: 0 });
});
it('should throw an error if the database query fails', async () => {


@@ -25,24 +25,58 @@ export class PersonalizationRepository {
}
/**
* Retrieves all master grocery items from the database.
* @returns A promise that resolves to an array of MasterGroceryItem objects.
* Retrieves master grocery items from the database with optional pagination.
* @param logger The logger instance.
* @param limit Optional limit for pagination. If not provided, returns all items.
* @param offset Optional offset for pagination.
* @returns A promise that resolves to an object with items array and total count.
*/
async getAllMasterItems(logger: Logger): Promise<MasterGroceryItem[]> {
async getAllMasterItems(
logger: Logger,
limit?: number,
offset?: number,
): Promise<{ items: MasterGroceryItem[]; total: number }> {
try {
const query = `
// Get total count
const countRes = await this.db.query<{ count: string }>(
'SELECT COUNT(*) FROM public.master_grocery_items',
);
const total = parseInt(countRes.rows[0].count, 10);
// Build query with optional pagination
let query = `
SELECT
mgi.*,
c.name as category_name
FROM public.master_grocery_items mgi
LEFT JOIN public.categories c ON mgi.category_id = c.category_id
ORDER BY mgi.name ASC`;
const res = await this.db.query<MasterGroceryItem>(query);
return res.rows;
const params: number[] = [];
if (limit !== undefined) {
query += ` LIMIT $${params.length + 1}`;
params.push(limit);
}
if (offset !== undefined) {
query += ` OFFSET $${params.length + 1}`;
params.push(offset);
}
const res = await this.db.query<MasterGroceryItem>(
query,
params.length > 0 ? params : undefined,
);
return { items: res.rows, total };
} catch (error) {
handleDbError(error, logger, 'Database error in getAllMasterItems', {}, {
defaultMessage: 'Failed to retrieve master grocery items.',
});
handleDbError(
error,
logger,
'Database error in getAllMasterItems',
{},
{
defaultMessage: 'Failed to retrieve master grocery items.',
},
);
}
}
@@ -63,9 +97,15 @@ export class PersonalizationRepository {
const res = await this.db.query<MasterGroceryItem>(query, [userId]);
return res.rows;
} catch (error) {
handleDbError(error, logger, 'Database error in getWatchedItems', { userId }, {
defaultMessage: 'Failed to retrieve watched items.',
});
handleDbError(
error,
logger,
'Database error in getWatchedItems',
{ userId },
{
defaultMessage: 'Failed to retrieve watched items.',
},
);
}
}
@@ -81,9 +121,15 @@ export class PersonalizationRepository {
[userId, masterItemId],
);
} catch (error) {
handleDbError(error, logger, 'Database error in removeWatchedItem', { userId, masterItemId }, {
defaultMessage: 'Failed to remove item from watchlist.',
});
handleDbError(
error,
logger,
'Database error in removeWatchedItem',
{ userId, masterItemId },
{
defaultMessage: 'Failed to remove item from watchlist.',
},
);
}
}
@@ -103,9 +149,15 @@ export class PersonalizationRepository {
);
return res.rows[0];
} catch (error) {
handleDbError(error, logger, 'Database error in findPantryItemOwner', { pantryItemId }, {
defaultMessage: 'Failed to retrieve pantry item owner from database.',
});
handleDbError(
error,
logger,
'Database error in findPantryItemOwner',
{ pantryItemId },
{
defaultMessage: 'Failed to retrieve pantry item owner from database.',
},
);
}
}
@@ -189,9 +241,15 @@ export class PersonalizationRepository {
>('SELECT * FROM public.get_best_sale_prices_for_all_users()');
return res.rows;
} catch (error) {
handleDbError(error, logger, 'Database error in getBestSalePricesForAllUsers', {}, {
defaultMessage: 'Failed to get best sale prices for all users.',
});
handleDbError(
error,
logger,
'Database error in getBestSalePricesForAllUsers',
{},
{
defaultMessage: 'Failed to get best sale prices for all users.',
},
);
}
}
@@ -204,9 +262,15 @@ export class PersonalizationRepository {
const res = await this.db.query<Appliance>('SELECT * FROM public.appliances ORDER BY name');
return res.rows;
} catch (error) {
handleDbError(error, logger, 'Database error in getAppliances', {}, {
defaultMessage: 'Failed to get appliances.',
});
handleDbError(
error,
logger,
'Database error in getAppliances',
{},
{
defaultMessage: 'Failed to get appliances.',
},
);
}
}
@@ -221,9 +285,15 @@ export class PersonalizationRepository {
);
return res.rows;
} catch (error) {
handleDbError(error, logger, 'Database error in getDietaryRestrictions', {}, {
defaultMessage: 'Failed to get dietary restrictions.',
});
handleDbError(
error,
logger,
'Database error in getDietaryRestrictions',
{},
{
defaultMessage: 'Failed to get dietary restrictions.',
},
);
}
}
@@ -242,9 +312,15 @@ export class PersonalizationRepository {
const res = await this.db.query<DietaryRestriction>(query, [userId]);
return res.rows;
} catch (error) {
handleDbError(error, logger, 'Database error in getUserDietaryRestrictions', { userId }, {
defaultMessage: 'Failed to get user dietary restrictions.',
});
handleDbError(
error,
logger,
'Database error in getUserDietaryRestrictions',
{ userId },
{
defaultMessage: 'Failed to get user dietary restrictions.',
},
);
}
}
@@ -278,7 +354,10 @@ export class PersonalizationRepository {
logger,
'Database error in setUserDietaryRestrictions',
{ userId, restrictionIds },
{ fkMessage: 'One or more of the specified restriction IDs are invalid.', defaultMessage: 'Failed to set user dietary restrictions.' },
{
fkMessage: 'One or more of the specified restriction IDs are invalid.',
defaultMessage: 'Failed to set user dietary restrictions.',
},
);
}
}
@@ -309,10 +388,16 @@ export class PersonalizationRepository {
return newAppliances;
});
} catch (error) {
handleDbError(error, logger, 'Database error in setUserAppliances', { userId, applianceIds }, {
fkMessage: 'Invalid appliance ID',
defaultMessage: 'Failed to set user appliances.',
});
handleDbError(
error,
logger,
'Database error in setUserAppliances',
{ userId, applianceIds },
{
fkMessage: 'Invalid appliance ID',
defaultMessage: 'Failed to set user appliances.',
},
);
}
}
@@ -331,9 +416,15 @@ export class PersonalizationRepository {
const res = await this.db.query<Appliance>(query, [userId]);
return res.rows;
} catch (error) {
handleDbError(error, logger, 'Database error in getUserAppliances', { userId }, {
defaultMessage: 'Failed to get user appliances.',
});
handleDbError(
error,
logger,
'Database error in getUserAppliances',
{ userId },
{
defaultMessage: 'Failed to get user appliances.',
},
);
}
}
@@ -350,9 +441,15 @@ export class PersonalizationRepository {
);
return res.rows;
} catch (error) {
handleDbError(error, logger, 'Database error in findRecipesFromPantry', { userId }, {
defaultMessage: 'Failed to find recipes from pantry.',
});
handleDbError(
error,
logger,
'Database error in findRecipesFromPantry',
{ userId },
{
defaultMessage: 'Failed to find recipes from pantry.',
},
);
}
}
@@ -374,9 +471,15 @@ export class PersonalizationRepository {
);
return res.rows;
} catch (error) {
handleDbError(error, logger, 'Database error in recommendRecipesForUser', { userId, limit }, {
defaultMessage: 'Failed to recommend recipes.',
});
handleDbError(
error,
logger,
'Database error in recommendRecipesForUser',
{ userId, limit },
{
defaultMessage: 'Failed to recommend recipes.',
},
);
}
}
@@ -393,9 +496,15 @@ export class PersonalizationRepository {
);
return res.rows;
} catch (error) {
handleDbError(error, logger, 'Database error in getBestSalePricesForUser', { userId }, {
defaultMessage: 'Failed to get best sale prices.',
});
handleDbError(
error,
logger,
'Database error in getBestSalePricesForUser',
{ userId },
{
defaultMessage: 'Failed to get best sale prices.',
},
);
}
}
@@ -415,9 +524,15 @@ export class PersonalizationRepository {
);
return res.rows;
} catch (error) {
handleDbError(error, logger, 'Database error in suggestPantryItemConversions', { pantryItemId }, {
defaultMessage: 'Failed to suggest pantry item conversions.',
});
handleDbError(
error,
logger,
'Database error in suggestPantryItemConversions',
{ pantryItemId },
{
defaultMessage: 'Failed to suggest pantry item conversions.',
},
);
}
}
@@ -434,9 +549,15 @@ export class PersonalizationRepository {
); // This is a standalone function, no change needed here.
return res.rows;
} catch (error) {
handleDbError(error, logger, 'Database error in getRecipesForUserDiets', { userId }, {
defaultMessage: 'Failed to get recipes compatible with user diet.',
});
handleDbError(
error,
logger,
'Database error in getRecipesForUserDiets',
{ userId },
{
defaultMessage: 'Failed to get recipes compatible with user diet.',
},
);
}
}
}
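The reformatted call sites throughout this file all share one shape: `handleDbError(error, logger, logMessage, context, { fkMessage?, defaultMessage })`. A speculative sketch of what a helper with that signature might look like follows; this is inferred from the call sites only and is not the project's actual implementation:

```typescript
// Speculative sketch of the handleDbError signature implied by the call
// sites above -- NOT the project's actual implementation. '23503' is the
// Postgres foreign-key-violation error code (an assumption here).
interface DbErrorMessages {
  fkMessage?: string; // shown when a foreign-key constraint fails
  defaultMessage: string; // fallback for any other database error
}

function handleDbError(
  error: unknown,
  logger: { error: (ctx: object, msg: string) => void },
  logMessage: string,
  context: Record<string, unknown>,
  messages: DbErrorMessages,
): never {
  logger.error({ ...context, error }, logMessage);
  const code = (error as { code?: string } | null)?.code;
  const message =
    code === '23503' && messages.fkMessage ? messages.fkMessage : messages.defaultMessage;
  throw new Error(message);
}
```

Declaring the return type as `never` explains why the repository methods above can omit a `return` after the catch block: the helper always throws.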

View File

@@ -37,7 +37,7 @@ describe('FlyerAiProcessor', () => {
extractCoreDataFromFlyerImage: vi.fn(),
} as unknown as AIService;
mockPersonalizationRepo = {
getAllMasterItems: vi.fn().mockResolvedValue([]),
getAllMasterItems: vi.fn().mockResolvedValue({ items: [], total: 0 }),
} as unknown as PersonalizationRepository;
service = new FlyerAiProcessor(mockAiService, mockPersonalizationRepo);
@@ -86,9 +86,9 @@ describe('FlyerAiProcessor', () => {
const imagePaths = [{ path: 'page1.jpg', mimetype: 'image/jpeg' }];
// Act & Assert
await expect(
service.extractAndValidateData(imagePaths, jobData, logger),
).rejects.toThrow(dbError);
await expect(service.extractAndValidateData(imagePaths, jobData, logger)).rejects.toThrow(
dbError,
);
// Verify that the process stops before calling the AI service
expect(mockAiService.extractCoreDataFromFlyerImage).not.toHaveBeenCalled();
@@ -103,8 +103,20 @@ describe('FlyerAiProcessor', () => {
valid_to: '2024-01-07',
store_address: '123 Good St',
items: [
{ item: 'Priced Item 1', price_in_cents: 199, price_display: '$1.99', quantity: '1', category_name: 'A' },
{ item: 'Priced Item 2', price_in_cents: 299, price_display: '$2.99', quantity: '1', category_name: 'B' },
{
item: 'Priced Item 1',
price_in_cents: 199,
price_display: '$1.99',
quantity: '1',
category_name: 'A',
},
{
item: 'Priced Item 2',
price_in_cents: 299,
price_display: '$2.99',
quantity: '1',
category_name: 'B',
},
],
};
vi.mocked(mockAiService.extractCoreDataFromFlyerImage).mockResolvedValue(mockAiResponse);
@@ -128,7 +140,9 @@ describe('FlyerAiProcessor', () => {
valid_to: null,
store_address: null,
};
vi.mocked(mockAiService.extractCoreDataFromFlyerImage).mockResolvedValue(invalidResponse as any);
vi.mocked(mockAiService.extractCoreDataFromFlyerImage).mockResolvedValue(
invalidResponse as any,
);
const imagePaths = [{ path: 'page1.jpg', mimetype: 'image/jpeg' }];
await expect(service.extractAndValidateData(imagePaths, jobData, logger)).rejects.toThrow(
@@ -140,7 +154,15 @@ describe('FlyerAiProcessor', () => {
const jobData = createMockJobData({});
const mockAiResponse = {
store_name: null, // Missing store name
items: [{ item: 'Test Item', price_display: '$1.99', price_in_cents: 199, quantity: 'each', category_name: 'Grocery' }],
items: [
{
item: 'Test Item',
price_display: '$1.99',
price_in_cents: 199,
quantity: 'each',
category_name: 'Grocery',
},
],
valid_from: '2024-01-01',
valid_to: '2024-01-07',
store_address: null,
@@ -187,9 +209,27 @@ describe('FlyerAiProcessor', () => {
valid_to: '2024-01-07',
store_address: '123 Test St',
items: [
{ item: 'Priced Item', price_in_cents: 199, price_display: '$1.99', quantity: '1', category_name: 'A' },
{ item: 'Unpriced Item 1', price_in_cents: null, price_display: 'See store', quantity: '1', category_name: 'B' },
{ item: 'Unpriced Item 2', price_in_cents: null, price_display: 'FREE', quantity: '1', category_name: 'C' },
{
item: 'Priced Item',
price_in_cents: 199,
price_display: '$1.99',
quantity: '1',
category_name: 'A',
},
{
item: 'Unpriced Item 1',
price_in_cents: null,
price_display: 'See store',
quantity: '1',
category_name: 'B',
},
{
item: 'Unpriced Item 2',
price_in_cents: null,
price_display: 'FREE',
quantity: '1',
category_name: 'C',
},
], // 1/3 = 33% have price, which is < 50%
};
vi.mocked(mockAiService.extractCoreDataFromFlyerImage).mockResolvedValue(mockAiResponse);
@@ -200,7 +240,9 @@ describe('FlyerAiProcessor', () => {
expect(result.needsReview).toBe(true);
expect(logger.warn).toHaveBeenCalledWith(
expect.objectContaining({ qualityIssues: ['Low price quality (33% of items have a price)'] }),
expect.objectContaining({
qualityIssues: ['Low price quality (33% of items have a price)'],
}),
expect.stringContaining('AI response has quality issues.'),
);
});
@@ -216,10 +258,34 @@ describe('FlyerAiProcessor', () => {
valid_to: '2024-01-07',
store_address: '123 Test St',
items: [
{ item: 'Priced Item 1', price_in_cents: 199, price_display: '$1.99', quantity: '1', category_name: 'A' },
{ item: 'Priced Item 2', price_in_cents: 299, price_display: '$2.99', quantity: '1', category_name: 'B' },
{ item: 'Priced Item 3', price_in_cents: 399, price_display: '$3.99', quantity: '1', category_name: 'C' },
{ item: 'Unpriced Item 1', price_in_cents: null, price_display: 'See store', quantity: '1', category_name: 'D' },
{
item: 'Priced Item 1',
price_in_cents: 199,
price_display: '$1.99',
quantity: '1',
category_name: 'A',
},
{
item: 'Priced Item 2',
price_in_cents: 299,
price_display: '$2.99',
quantity: '1',
category_name: 'B',
},
{
item: 'Priced Item 3',
price_in_cents: 399,
price_display: '$3.99',
quantity: '1',
category_name: 'C',
},
{
item: 'Unpriced Item 1',
price_in_cents: null,
price_display: 'See store',
quantity: '1',
category_name: 'D',
},
], // 3/4 = 75% have price. This is > 50% (default) but < 80% (custom).
};
vi.mocked(mockAiService.extractCoreDataFromFlyerImage).mockResolvedValue(mockAiResponse);
@@ -233,7 +299,9 @@ describe('FlyerAiProcessor', () => {
// Because 75% < 80%, it should be flagged for review.
expect(result.needsReview).toBe(true);
expect(logger.warn).toHaveBeenCalledWith(
expect.objectContaining({ qualityIssues: ['Low price quality (75% of items have a price)'] }),
expect.objectContaining({
qualityIssues: ['Low price quality (75% of items have a price)'],
}),
expect.stringContaining('AI response has quality issues.'),
);
});
@@ -243,9 +311,17 @@ describe('FlyerAiProcessor', () => {
const mockAiResponse = {
store_name: 'Test Store',
valid_from: null, // Missing date
valid_to: null, // Missing date
valid_to: null, // Missing date
store_address: '123 Test St',
items: [{ item: 'Test Item', price_in_cents: 199, price_display: '$1.99', quantity: '1', category_name: 'A' }],
items: [
{
item: 'Test Item',
price_in_cents: 199,
price_display: '$1.99',
quantity: '1',
category_name: 'A',
},
],
};
vi.mocked(mockAiService.extractCoreDataFromFlyerImage).mockResolvedValue(mockAiResponse);
const { logger } = await import('./logger.server');
@@ -264,7 +340,7 @@ describe('FlyerAiProcessor', () => {
const jobData = createMockJobData({});
const mockAiResponse = {
store_name: null, // Issue 1
items: [], // Issue 2
items: [], // Issue 2
valid_from: null, // Issue 3
valid_to: null,
store_address: null,
@@ -277,7 +353,14 @@ describe('FlyerAiProcessor', () => {
expect(result.needsReview).toBe(true);
expect(logger.warn).toHaveBeenCalledWith(
{ rawData: mockAiResponse, qualityIssues: ['Missing store name', 'No items were extracted', 'Missing both valid_from and valid_to dates'] },
{
rawData: mockAiResponse,
qualityIssues: [
'Missing store name',
'No items were extracted',
'Missing both valid_from and valid_to dates',
],
},
'AI response has quality issues. Flagging for review. Issues: Missing store name, No items were extracted, Missing both valid_from and valid_to dates',
);
});
@@ -291,7 +374,15 @@ describe('FlyerAiProcessor', () => {
valid_from: '2024-01-01',
valid_to: '2024-01-07',
store_address: '123 Test St',
items: [{ item: 'Test Item', price_in_cents: 199, price_display: '$1.99', quantity: '1', category_name: 'A' }],
items: [
{
item: 'Test Item',
price_in_cents: 199,
price_display: '$1.99',
quantity: '1',
category_name: 'A',
},
],
};
vi.mocked(mockAiService.extractCoreDataFromFlyerImage).mockResolvedValue(mockAiResponse);
@@ -300,7 +391,11 @@ describe('FlyerAiProcessor', () => {
// Assert
expect(mockAiService.extractCoreDataFromFlyerImage).toHaveBeenCalledWith(
imagePaths, [], undefined, '456 Fallback Ave', logger
imagePaths,
[],
undefined,
'456 Fallback Ave',
logger,
);
});
@@ -323,8 +418,22 @@ describe('FlyerAiProcessor', () => {
valid_to: '2025-01-07',
store_address: '123 Batch St',
items: [
{ item: 'Item A', price_display: '$1', price_in_cents: 100, quantity: '1', category_name: 'Cat A', master_item_id: 1 },
{ item: 'Item B', price_display: '$2', price_in_cents: 200, quantity: '1', category_name: 'Cat B', master_item_id: 2 },
{
item: 'Item A',
price_display: '$1',
price_in_cents: 100,
quantity: '1',
category_name: 'Cat A',
master_item_id: 1,
},
{
item: 'Item B',
price_display: '$2',
price_in_cents: 200,
quantity: '1',
category_name: 'Cat B',
master_item_id: 2,
},
],
};
@@ -334,7 +443,14 @@ describe('FlyerAiProcessor', () => {
valid_to: null,
store_address: null,
items: [
{ item: 'Item C', price_display: '$3', price_in_cents: 300, quantity: '1', category_name: 'Cat C', master_item_id: 3 },
{
item: 'Item C',
price_display: '$3',
price_in_cents: 300,
quantity: '1',
category_name: 'Cat C',
master_item_id: 3,
},
],
};
@@ -351,8 +467,22 @@ describe('FlyerAiProcessor', () => {
expect(mockAiService.extractCoreDataFromFlyerImage).toHaveBeenCalledTimes(2);
// 2. Check the arguments for each call
expect(mockAiService.extractCoreDataFromFlyerImage).toHaveBeenNthCalledWith(1, imagePaths.slice(0, 4), [], undefined, undefined, logger);
expect(mockAiService.extractCoreDataFromFlyerImage).toHaveBeenNthCalledWith(2, imagePaths.slice(4, 5), [], undefined, undefined, logger);
expect(mockAiService.extractCoreDataFromFlyerImage).toHaveBeenNthCalledWith(
1,
imagePaths.slice(0, 4),
[],
undefined,
undefined,
logger,
);
expect(mockAiService.extractCoreDataFromFlyerImage).toHaveBeenNthCalledWith(
2,
imagePaths.slice(4, 5),
[],
undefined,
undefined,
logger,
);
// 3. Check the merged data
expect(result.data.store_name).toBe('Batch 1 Store'); // Metadata from the first batch
@@ -362,11 +492,13 @@ describe('FlyerAiProcessor', () => {
// 4. Check that items from both batches are merged
expect(result.data.items).toHaveLength(3);
expect(result.data.items).toEqual(expect.arrayContaining([
expect.objectContaining({ item: 'Item A' }),
expect.objectContaining({ item: 'Item B' }),
expect.objectContaining({ item: 'Item C' }),
]));
expect(result.data.items).toEqual(
expect.arrayContaining([
expect.objectContaining({ item: 'Item A' }),
expect.objectContaining({ item: 'Item B' }),
expect.objectContaining({ item: 'Item C' }),
]),
);
// 5. Check that the job is not flagged for review
expect(result.needsReview).toBe(false);
@@ -376,7 +508,11 @@ describe('FlyerAiProcessor', () => {
// Arrange
const jobData = createMockJobData({});
const imagePaths = [
{ path: 'page1.jpg', mimetype: 'image/jpeg' }, { path: 'page2.jpg', mimetype: 'image/jpeg' }, { path: 'page3.jpg', mimetype: 'image/jpeg' }, { path: 'page4.jpg', mimetype: 'image/jpeg' }, { path: 'page5.jpg', mimetype: 'image/jpeg' },
{ path: 'page1.jpg', mimetype: 'image/jpeg' },
{ path: 'page2.jpg', mimetype: 'image/jpeg' },
{ path: 'page3.jpg', mimetype: 'image/jpeg' },
{ path: 'page4.jpg', mimetype: 'image/jpeg' },
{ path: 'page5.jpg', mimetype: 'image/jpeg' },
];
const mockAiResponseBatch1 = {
@@ -385,7 +521,14 @@ describe('FlyerAiProcessor', () => {
valid_to: '2025-01-07',
store_address: '123 Good St',
items: [
{ item: 'Item A', price_display: '$1', price_in_cents: 100, quantity: '1', category_name: 'Cat A', master_item_id: 1 },
{
item: 'Item A',
price_display: '$1',
price_in_cents: 100,
quantity: '1',
category_name: 'Cat A',
master_item_id: 1,
},
],
};
@@ -416,11 +559,45 @@ describe('FlyerAiProcessor', () => {
// Arrange
const jobData = createMockJobData({});
const imagePaths = [
{ path: 'page1.jpg', mimetype: 'image/jpeg' }, { path: 'page2.jpg', mimetype: 'image/jpeg' }, { path: 'page3.jpg', mimetype: 'image/jpeg' }, { path: 'page4.jpg', mimetype: 'image/jpeg' }, { path: 'page5.jpg', mimetype: 'image/jpeg' },
{ path: 'page1.jpg', mimetype: 'image/jpeg' },
{ path: 'page2.jpg', mimetype: 'image/jpeg' },
{ path: 'page3.jpg', mimetype: 'image/jpeg' },
{ path: 'page4.jpg', mimetype: 'image/jpeg' },
{ path: 'page5.jpg', mimetype: 'image/jpeg' },
];
const mockAiResponseBatch1 = { store_name: null, valid_from: '2025-01-01', valid_to: '2025-01-07', store_address: null, items: [{ item: 'Item A', price_display: '$1', price_in_cents: 100, quantity: '1', category_name: 'Cat A', master_item_id: 1 }] };
const mockAiResponseBatch2 = { store_name: 'Batch 2 Store', valid_from: '2025-01-02', valid_to: null, store_address: '456 Subsequent St', items: [{ item: 'Item C', price_display: '$3', price_in_cents: 300, quantity: '1', category_name: 'Cat C', master_item_id: 3 }] };
const mockAiResponseBatch1 = {
store_name: null,
valid_from: '2025-01-01',
valid_to: '2025-01-07',
store_address: null,
items: [
{
item: 'Item A',
price_display: '$1',
price_in_cents: 100,
quantity: '1',
category_name: 'Cat A',
master_item_id: 1,
},
],
};
const mockAiResponseBatch2 = {
store_name: 'Batch 2 Store',
valid_from: '2025-01-02',
valid_to: null,
store_address: '456 Subsequent St',
items: [
{
item: 'Item C',
price_display: '$3',
price_in_cents: 300,
quantity: '1',
category_name: 'Cat C',
master_item_id: 3,
},
],
};
vi.mocked(mockAiService.extractCoreDataFromFlyerImage)
.mockResolvedValueOnce(mockAiResponseBatch1)
@@ -453,7 +630,14 @@ describe('FlyerAiProcessor', () => {
valid_to: '2025-02-07',
store_address: '789 Single St',
items: [
{ item: 'Item X', price_display: '$10', price_in_cents: 1000, quantity: '1', category_name: 'Cat X', master_item_id: 10 },
{
item: 'Item X',
price_display: '$10',
price_in_cents: 1000,
quantity: '1',
category_name: 'Cat X',
master_item_id: 10,
},
],
};
@@ -468,9 +652,15 @@ describe('FlyerAiProcessor', () => {
expect(mockAiService.extractCoreDataFromFlyerImage).toHaveBeenCalledTimes(1);
// 2. Check the arguments for the single call.
expect(mockAiService.extractCoreDataFromFlyerImage).toHaveBeenCalledWith(imagePaths, [], undefined, undefined, logger);
expect(mockAiService.extractCoreDataFromFlyerImage).toHaveBeenCalledWith(
imagePaths,
[],
undefined,
undefined,
logger,
);
// 3. Check that the final data matches the single batch's data.
expect(result.data).toEqual(mockAiResponse);
});
});
});

View File

@@ -139,7 +139,7 @@ export class FlyerAiProcessor {
logger.info(`Starting AI data extraction for ${imagePaths.length} pages.`);
const { submitterIp, userProfileAddress } = jobData;
const masterItems = await this.personalizationRepo.getAllMasterItems(logger);
const { items: masterItems } = await this.personalizationRepo.getAllMasterItems(logger);
logger.debug(`Retrieved ${masterItems.length} master items for AI matching.`);
// BATCHING LOGIC: Process images in chunks to avoid hitting AI payload/token limits.
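The batching the comment above describes (and that the tests earlier in this diff exercise: five pages split into `slice(0, 4)` and `slice(4, 5)`, i.e. a batch size of 4) can be sketched as a generic chunking helper. `chunk` is illustrative; the service's actual implementation may differ:

```typescript
// Illustrative chunking helper for the batching logic described above;
// with a batch size of 4, five image paths split into batches of 4 and 1,
// matching the test expectations (slice(0, 4) and slice(4, 5)).
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```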

View File

@@ -182,7 +182,10 @@ describe('FlyerProcessingService', () => {
);
vi.mocked(mockedDb.adminRepo.logActivity).mockResolvedValue();
// FIX: Provide a default mock for getAllMasterItems to prevent a TypeError on `.length`.
vi.mocked(mockedDb.personalizationRepo.getAllMasterItems).mockResolvedValue([]);
vi.mocked(mockedDb.personalizationRepo.getAllMasterItems).mockResolvedValue({
items: [],
total: 0,
});
});
beforeEach(() => {
vi.mocked(generateFlyerIcon).mockResolvedValue('icon-flyer.webp');

View File

@@ -75,9 +75,11 @@ describe('E2E Admin Dashboard Flow', () => {
expect(usersResponse.status).toBe(200);
const usersResponseBody = await usersResponse.json();
expect(Array.isArray(usersResponseBody.data)).toBe(true);
expect(usersResponseBody.data).toHaveProperty('users');
expect(usersResponseBody.data).toHaveProperty('total');
expect(Array.isArray(usersResponseBody.data.users)).toBe(true);
// The list should contain the admin user we just created
const self = usersResponseBody.data.find((u: any) => u.user_id === adminUserId);
const self = usersResponseBody.data.users.find((u: any) => u.user_id === adminUserId);
expect(self).toBeDefined();
// 6. Check Queue Status (Protected Admin Route)

View File

@@ -0,0 +1,350 @@
// src/tests/e2e/budget-journey.e2e.test.ts
/**
* End-to-End test for the Budget Management user journey.
* Tests the complete flow from user registration to creating budgets, tracking spending, and managing finances.
*/
import { describe, it, expect, afterAll } from 'vitest';
import * as apiClient from '../../services/apiClient';
import { cleanupDb } from '../utils/cleanup';
import { poll } from '../utils/poll';
import { getPool } from '../../services/db/connection.db';
/**
* @vitest-environment node
*/
const API_BASE_URL = process.env.VITE_API_BASE_URL || 'http://localhost:3000/api';
// Helper to make authenticated API calls
const authedFetch = async (
path: string,
options: RequestInit & { token?: string } = {},
): Promise<Response> => {
const { token, ...fetchOptions } = options;
const headers: Record<string, string> = {
'Content-Type': 'application/json',
...(fetchOptions.headers as Record<string, string>),
};
if (token) {
headers['Authorization'] = `Bearer ${token}`;
}
return fetch(`${API_BASE_URL}${path}`, {
...fetchOptions,
headers,
});
};
describe('E2E Budget Management Journey', () => {
const uniqueId = Date.now();
const userEmail = `budget-e2e-${uniqueId}@example.com`;
const userPassword = 'StrongBudgetPassword123!';
let authToken: string;
let userId: string | null = null;
const createdBudgetIds: number[] = [];
const createdReceiptIds: number[] = [];
const createdStoreIds: number[] = [];
afterAll(async () => {
const pool = getPool();
// Clean up receipt items and receipts (for spending tracking)
if (createdReceiptIds.length > 0) {
await pool.query('DELETE FROM public.receipt_items WHERE receipt_id = ANY($1::bigint[])', [
createdReceiptIds,
]);
await pool.query('DELETE FROM public.receipts WHERE receipt_id = ANY($1::bigint[])', [
createdReceiptIds,
]);
}
// Clean up budgets
if (createdBudgetIds.length > 0) {
await pool.query('DELETE FROM public.budgets WHERE budget_id = ANY($1::bigint[])', [
createdBudgetIds,
]);
}
// Clean up stores
if (createdStoreIds.length > 0) {
await pool.query('DELETE FROM public.stores WHERE store_id = ANY($1::int[])', [
createdStoreIds,
]);
}
// Clean up user
await cleanupDb({
userIds: [userId],
});
});
it('should complete budget journey: Register -> Create Budget -> Track Spending -> Update -> Delete', async () => {
// Step 1: Register a new user
const registerResponse = await apiClient.registerUser(
userEmail,
userPassword,
'Budget E2E User',
);
expect(registerResponse.status).toBe(201);
// Step 2: Login to get auth token
const { response: loginResponse, responseBody: loginResponseBody } = await poll(
async () => {
const response = await apiClient.loginUser(userEmail, userPassword, false);
const responseBody = response.ok ? await response.clone().json() : {};
return { response, responseBody };
},
(result) => result.response.ok,
{ timeout: 10000, interval: 1000, description: 'user login after registration' },
);
expect(loginResponse.status).toBe(200);
authToken = loginResponseBody.data.token;
userId = loginResponseBody.data.userprofile.user.user_id;
expect(authToken).toBeDefined();
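The `poll` utility used in the login step retries an async producer until a predicate passes or a timeout elapses, which absorbs the delay between registration and the account becoming loginable. A minimal sketch under that assumption (the project's actual `poll` in `../utils/poll` may differ):

```typescript
// Minimal retry-until-predicate sketch of the poll utility used above;
// the project's actual implementation may differ.
async function poll<T>(
  produce: () => Promise<T>,
  isDone: (result: T) => boolean,
  opts: { timeout: number; interval: number; description?: string },
): Promise<T> {
  const deadline = Date.now() + opts.timeout;
  for (;;) {
    const result = await produce();
    if (isDone(result)) return result;
    if (Date.now() >= deadline) {
      throw new Error(`Timed out waiting for ${opts.description ?? 'condition'}`);
    }
    await new Promise((resolve) => setTimeout(resolve, opts.interval));
  }
}
```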
// Step 3: Create a monthly budget
const today = new Date();
const startOfMonth = new Date(today.getFullYear(), today.getMonth(), 1);
const formatDate = (d: Date) => d.toISOString().split('T')[0];
const createBudgetResponse = await authedFetch('/budgets', {
method: 'POST',
token: authToken,
body: JSON.stringify({
name: 'Monthly Groceries',
amount_cents: 50000, // $500.00
period: 'monthly',
start_date: formatDate(startOfMonth),
}),
});
expect(createBudgetResponse.status).toBe(201);
const createBudgetData = await createBudgetResponse.json();
expect(createBudgetData.data.name).toBe('Monthly Groceries');
expect(createBudgetData.data.amount_cents).toBe(50000);
expect(createBudgetData.data.period).toBe('monthly');
const budgetId = createBudgetData.data.budget_id;
createdBudgetIds.push(budgetId);
// Step 4: Create a weekly budget
const weeklyBudgetResponse = await authedFetch('/budgets', {
method: 'POST',
token: authToken,
body: JSON.stringify({
name: 'Weekly Dining Out',
amount_cents: 10000, // $100.00
period: 'weekly',
start_date: formatDate(today),
}),
});
expect(weeklyBudgetResponse.status).toBe(201);
const weeklyBudgetData = await weeklyBudgetResponse.json();
expect(weeklyBudgetData.data.period).toBe('weekly');
createdBudgetIds.push(weeklyBudgetData.data.budget_id);
// Step 5: View all budgets
const listBudgetsResponse = await authedFetch('/budgets', {
method: 'GET',
token: authToken,
});
expect(listBudgetsResponse.status).toBe(200);
const listBudgetsData = await listBudgetsResponse.json();
expect(listBudgetsData.data.length).toBe(2);
// Find our budgets
const monthlyBudget = listBudgetsData.data.find(
(b: { name: string }) => b.name === 'Monthly Groceries',
);
expect(monthlyBudget).toBeDefined();
expect(monthlyBudget.amount_cents).toBe(50000);
// Step 6: Update a budget
const updateBudgetResponse = await authedFetch(`/budgets/${budgetId}`, {
method: 'PUT',
token: authToken,
body: JSON.stringify({
amount_cents: 55000, // Increase to $550.00
name: 'Monthly Groceries (Updated)',
}),
});
expect(updateBudgetResponse.status).toBe(200);
const updateBudgetData = await updateBudgetResponse.json();
expect(updateBudgetData.data.amount_cents).toBe(55000);
expect(updateBudgetData.data.name).toBe('Monthly Groceries (Updated)');
// Step 7: Create test spending data (receipts) to track against budget
const pool = getPool();
// Create a test store
const storeResult = await pool.query(
`INSERT INTO public.stores (name, address, city, province, postal_code)
VALUES ('E2E Budget Test Store', '789 Budget St', 'Toronto', 'ON', 'M5V 3A3')
RETURNING store_id`,
);
const storeId = storeResult.rows[0].store_id;
createdStoreIds.push(storeId);
// Create receipts with spending
const receipt1Result = await pool.query(
`INSERT INTO public.receipts (user_id, receipt_image_url, status, store_id, total_amount_cents, transaction_date)
VALUES ($1, '/uploads/receipts/e2e-budget-1.jpg', 'completed', $2, 12500, $3)
RETURNING receipt_id`,
[userId, storeId, formatDate(today)],
);
createdReceiptIds.push(receipt1Result.rows[0].receipt_id);
const receipt2Result = await pool.query(
`INSERT INTO public.receipts (user_id, receipt_image_url, status, store_id, total_amount_cents, transaction_date)
VALUES ($1, '/uploads/receipts/e2e-budget-2.jpg', 'completed', $2, 8750, $3)
RETURNING receipt_id`,
[userId, storeId, formatDate(today)],
);
createdReceiptIds.push(receipt2Result.rows[0].receipt_id);
// Step 8: Check spending analysis
const endOfMonth = new Date(today.getFullYear(), today.getMonth() + 1, 0);
const spendingResponse = await authedFetch(
`/budgets/spending-analysis?startDate=${formatDate(startOfMonth)}&endDate=${formatDate(endOfMonth)}`,
{
method: 'GET',
token: authToken,
},
);
expect(spendingResponse.status).toBe(200);
const spendingData = await spendingResponse.json();
expect(spendingData.success).toBe(true);
expect(Array.isArray(spendingData.data)).toBe(true);
// Verify we have spending data
    // Note: the reported spending may be $0 or nonzero depending on how the backend
    // aggregates receipts; this test mainly verifies that the endpoint responds successfully.
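The `new Date(year, month + 1, 0)` trick used to compute `endOfMonth` above rolls back to the last day of the current month (day 0 of the next month). A standalone sketch, done in UTC because the local-time variant combined with `toISOString()` can shift the date by a day in timezones ahead of UTC; `monthRangeUtc` is an illustrative name:

```typescript
// Month-boundary helpers using the Date rollover trick: day 0 of month
// m + 1 is the last day of month m. Computed in UTC so toISOString()
// (which reports UTC) cannot shift the calendar date.
function monthRangeUtc(d: Date): { start: string; end: string } {
  const fmt = (x: Date) => x.toISOString().split('T')[0];
  const start = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), 1));
  const end = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth() + 1, 0));
  return { start: fmt(start), end: fmt(end) };
}
```

The rollover also handles leap years: February 2024 ends on the 29th without any special-casing.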
// Step 9: Test budget validation - try to create invalid budget
const invalidBudgetResponse = await authedFetch('/budgets', {
method: 'POST',
token: authToken,
body: JSON.stringify({
name: 'Invalid Budget',
amount_cents: -100, // Negative amount should be rejected
period: 'monthly',
start_date: formatDate(today),
}),
});
expect(invalidBudgetResponse.status).toBe(400);
// Step 10: Test budget validation - missing required fields
const missingFieldsResponse = await authedFetch('/budgets', {
method: 'POST',
token: authToken,
body: JSON.stringify({
name: 'Incomplete Budget',
// Missing amount_cents, period, start_date
}),
});
expect(missingFieldsResponse.status).toBe(400);
// Step 11: Test update validation - empty update
const emptyUpdateResponse = await authedFetch(`/budgets/${budgetId}`, {
method: 'PUT',
token: authToken,
body: JSON.stringify({}), // No fields to update
});
expect(emptyUpdateResponse.status).toBe(400);
// Step 12: Verify another user cannot access our budgets
const otherUserEmail = `other-budget-e2e-${uniqueId}@example.com`;
await apiClient.registerUser(otherUserEmail, userPassword, 'Other Budget User');
const { responseBody: otherLoginData } = await poll(
async () => {
const response = await apiClient.loginUser(otherUserEmail, userPassword, false);
const responseBody = response.ok ? await response.clone().json() : {};
return { response, responseBody };
},
(result) => result.response.ok,
{ timeout: 10000, interval: 1000, description: 'other user login' },
);
const otherToken = otherLoginData.data.token;
const otherUserId = otherLoginData.data.userprofile.user.user_id;
// Other user should not see our budgets
const otherBudgetsResponse = await authedFetch('/budgets', {
method: 'GET',
token: otherToken,
});
expect(otherBudgetsResponse.status).toBe(200);
const otherBudgetsData = await otherBudgetsResponse.json();
expect(otherBudgetsData.data.length).toBe(0);
// Other user should not be able to update our budget
const otherUpdateResponse = await authedFetch(`/budgets/${budgetId}`, {
method: 'PUT',
token: otherToken,
body: JSON.stringify({
amount_cents: 99999,
}),
});
expect(otherUpdateResponse.status).toBe(404); // Should not find the budget
// Other user should not be able to delete our budget
const otherDeleteAttemptResponse = await authedFetch(`/budgets/${budgetId}`, {
method: 'DELETE',
token: otherToken,
});
expect(otherDeleteAttemptResponse.status).toBe(404);
// Clean up other user
await cleanupDb({ userIds: [otherUserId] });
// Step 13: Delete the weekly budget
const deleteBudgetResponse = await authedFetch(`/budgets/${weeklyBudgetData.data.budget_id}`, {
method: 'DELETE',
token: authToken,
});
expect(deleteBudgetResponse.status).toBe(204);
// Remove from cleanup list
const deleteIndex = createdBudgetIds.indexOf(weeklyBudgetData.data.budget_id);
if (deleteIndex > -1) {
createdBudgetIds.splice(deleteIndex, 1);
}
// Step 14: Verify deletion
const verifyDeleteResponse = await authedFetch('/budgets', {
method: 'GET',
token: authToken,
});
expect(verifyDeleteResponse.status).toBe(200);
const verifyDeleteData = await verifyDeleteResponse.json();
expect(verifyDeleteData.data.length).toBe(1); // Only monthly budget remains
const deletedBudget = verifyDeleteData.data.find(
(b: { budget_id: number }) => b.budget_id === weeklyBudgetData.data.budget_id,
);
expect(deletedBudget).toBeUndefined();
// Step 15: Delete account
const deleteAccountResponse = await apiClient.deleteUserAccount(userPassword, {
tokenOverride: authToken,
});
expect(deleteAccountResponse.status).toBe(200);
userId = null;
});
});
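The login steps in these journeys retry through a `poll` utility imported from `../utils/poll`, whose implementation isn't part of this diff. A minimal sketch consistent with how the tests call it (a result-returning async poller with `timeout`, `interval`, and `description` options) might look like:

```typescript
// Hypothetical sketch of the poll utility these e2e tests rely on.
// Repeatedly invokes `fn` until `predicate` accepts its result, then
// returns that result; throws if `timeout` elapses first.
export async function poll<T>(
  fn: () => Promise<T>,
  predicate: (result: T) => boolean,
  opts: { timeout?: number; interval?: number; description?: string } = {},
): Promise<T> {
  const { timeout = 10000, interval = 1000, description = 'condition' } = opts;
  const deadline = Date.now() + timeout;
  for (;;) {
    const result = await fn();
    if (predicate(result)) return result;
    if (Date.now() >= deadline) {
      throw new Error(`Timed out after ${timeout}ms waiting for ${description}`);
    }
    // Wait one interval before the next attempt.
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
}
```

Returning the last `fn()` result (rather than a boolean) is what lets the tests destructure `{ response, responseBody }` directly from the awaited call.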


@@ -0,0 +1,352 @@
// src/tests/e2e/deals-journey.e2e.test.ts
/**
* End-to-End test for the Deals/Price Tracking user journey.
* Tests the complete flow from user registration to watching items and viewing best prices.
*/
import { describe, it, expect, afterAll } from 'vitest';
import * as apiClient from '../../services/apiClient';
import { cleanupDb } from '../utils/cleanup';
import { poll } from '../utils/poll';
import { getPool } from '../../services/db/connection.db';
/**
* @vitest-environment node
*/
const API_BASE_URL = process.env.VITE_API_BASE_URL || 'http://localhost:3000/api';
// Helper to make authenticated API calls
const authedFetch = async (
path: string,
options: RequestInit & { token?: string } = {},
): Promise<Response> => {
const { token, ...fetchOptions } = options;
const headers: Record<string, string> = {
'Content-Type': 'application/json',
...(fetchOptions.headers as Record<string, string>),
};
if (token) {
headers['Authorization'] = `Bearer ${token}`;
}
return fetch(`${API_BASE_URL}${path}`, {
...fetchOptions,
headers,
});
};
describe('E2E Deals and Price Tracking Journey', () => {
const uniqueId = Date.now();
const userEmail = `deals-e2e-${uniqueId}@example.com`;
const userPassword = 'StrongDealsPassword123!';
let authToken: string;
let userId: string | null = null;
const createdMasterItemIds: number[] = [];
const createdFlyerIds: number[] = [];
const createdStoreIds: number[] = [];
afterAll(async () => {
const pool = getPool();
// Clean up watched items
if (userId) {
await pool.query('DELETE FROM public.watched_items WHERE user_id = $1', [userId]);
}
// Clean up flyer items
if (createdFlyerIds.length > 0) {
await pool.query('DELETE FROM public.flyer_items WHERE flyer_id = ANY($1::bigint[])', [
createdFlyerIds,
]);
}
// Clean up flyers
if (createdFlyerIds.length > 0) {
await pool.query('DELETE FROM public.flyers WHERE flyer_id = ANY($1::bigint[])', [
createdFlyerIds,
]);
}
// Clean up master grocery items
if (createdMasterItemIds.length > 0) {
await pool.query(
'DELETE FROM public.master_grocery_items WHERE master_grocery_item_id = ANY($1::int[])',
[createdMasterItemIds],
);
}
// Clean up stores
if (createdStoreIds.length > 0) {
await pool.query('DELETE FROM public.stores WHERE store_id = ANY($1::int[])', [
createdStoreIds,
]);
}
// Clean up user (skip if the account was already deleted during the test)
if (userId) {
await cleanupDb({ userIds: [userId] });
}
});
it('should complete deals journey: Register -> Watch Items -> View Prices -> Check Deals', async () => {
// Step 1: Register a new user
const registerResponse = await apiClient.registerUser(
userEmail,
userPassword,
'Deals E2E User',
);
expect(registerResponse.status).toBe(201);
// Step 2: Login to get auth token
const { response: loginResponse, responseBody: loginResponseBody } = await poll(
async () => {
const response = await apiClient.loginUser(userEmail, userPassword, false);
const responseBody = response.ok ? await response.clone().json() : {};
return { response, responseBody };
},
(result) => result.response.ok,
{ timeout: 10000, interval: 1000, description: 'user login after registration' },
);
expect(loginResponse.status).toBe(200);
authToken = loginResponseBody.data.token;
userId = loginResponseBody.data.userprofile.user.user_id;
expect(authToken).toBeDefined();
// Step 3: Create test stores and master items with pricing data
const pool = getPool();
// Create stores
const store1Result = await pool.query(
`INSERT INTO public.stores (name, address, city, province, postal_code)
VALUES ('E2E Test Store 1', '123 Main St', 'Toronto', 'ON', 'M5V 3A1')
RETURNING store_id`,
);
const store1Id = store1Result.rows[0].store_id;
createdStoreIds.push(store1Id);
const store2Result = await pool.query(
`INSERT INTO public.stores (name, address, city, province, postal_code)
VALUES ('E2E Test Store 2', '456 Oak Ave', 'Toronto', 'ON', 'M5V 3A2')
RETURNING store_id`,
);
const store2Id = store2Result.rows[0].store_id;
createdStoreIds.push(store2Id);
// Create master grocery items
const items = [
'E2E Milk 2%',
'E2E Bread White',
'E2E Coffee Beans',
'E2E Bananas',
'E2E Chicken Breast',
];
for (const itemName of items) {
const result = await pool.query(
`INSERT INTO public.master_grocery_items (name)
VALUES ($1)
RETURNING master_grocery_item_id`,
[itemName],
);
createdMasterItemIds.push(result.rows[0].master_grocery_item_id);
}
// Create flyers for both stores
const today = new Date();
const validFrom = today.toISOString().split('T')[0];
const validTo = new Date(today.getTime() + 7 * 24 * 60 * 60 * 1000).toISOString().split('T')[0];
const flyer1Result = await pool.query(
`INSERT INTO public.flyers (store_id, flyer_image_url, valid_from, valid_to, processing_status)
VALUES ($1, '/uploads/flyers/e2e-flyer-1.jpg', $2, $3, 'completed')
RETURNING flyer_id`,
[store1Id, validFrom, validTo],
);
const flyer1Id = flyer1Result.rows[0].flyer_id;
createdFlyerIds.push(flyer1Id);
const flyer2Result = await pool.query(
`INSERT INTO public.flyers (store_id, flyer_image_url, valid_from, valid_to, processing_status)
VALUES ($1, '/uploads/flyers/e2e-flyer-2.jpg', $2, $3, 'completed')
RETURNING flyer_id`,
[store2Id, validFrom, validTo],
);
const flyer2Id = flyer2Result.rows[0].flyer_id;
createdFlyerIds.push(flyer2Id);
// Add items to flyers with prices (Store 1 - higher prices)
await pool.query(
`INSERT INTO public.flyer_items (flyer_id, master_item_id, sale_price_cents, page_number)
VALUES
($1, $2, 599, 1), -- Milk at $5.99
($1, $3, 349, 1), -- Bread at $3.49
($1, $4, 1299, 2), -- Coffee at $12.99
($1, $5, 299, 2), -- Bananas at $2.99
($1, $6, 899, 3) -- Chicken at $8.99
`,
[flyer1Id, ...createdMasterItemIds],
);
// Add items to flyers with prices (Store 2 - better prices)
await pool.query(
`INSERT INTO public.flyer_items (flyer_id, master_item_id, sale_price_cents, page_number)
VALUES
($1, $2, 499, 1), -- Milk at $4.99 (BEST PRICE)
($1, $3, 299, 1), -- Bread at $2.99 (BEST PRICE)
($1, $4, 1099, 2), -- Coffee at $10.99 (BEST PRICE)
($1, $5, 249, 2), -- Bananas at $2.49 (BEST PRICE)
($1, $6, 799, 3) -- Chicken at $7.99 (BEST PRICE)
`,
[flyer2Id, ...createdMasterItemIds],
);
// Step 4: Add items to watch list
const watchItem1Response = await authedFetch('/users/watched-items', {
method: 'POST',
token: authToken,
body: JSON.stringify({
itemName: 'E2E Milk 2%',
category: 'Dairy',
}),
});
expect(watchItem1Response.status).toBe(201);
const watchItem1Data = await watchItem1Response.json();
expect(watchItem1Data.data.item_name).toBe('E2E Milk 2%');
// Add more items to watch list
const itemsToWatch = [
{ itemName: 'E2E Bread White', category: 'Bakery' },
{ itemName: 'E2E Coffee Beans', category: 'Beverages' },
];
for (const item of itemsToWatch) {
const response = await authedFetch('/users/watched-items', {
method: 'POST',
token: authToken,
body: JSON.stringify(item),
});
expect(response.status).toBe(201);
}
// Step 5: View all watched items
const watchedListResponse = await authedFetch('/users/watched-items', {
method: 'GET',
token: authToken,
});
expect(watchedListResponse.status).toBe(200);
const watchedListData = await watchedListResponse.json();
expect(watchedListData.data.length).toBeGreaterThanOrEqual(3);
// Find our watched items
const watchedMilk = watchedListData.data.find(
(item: { item_name: string }) => item.item_name === 'E2E Milk 2%',
);
expect(watchedMilk).toBeDefined();
expect(watchedMilk.category).toBe('Dairy');
// Step 6: Get best prices for watched items
const bestPricesResponse = await authedFetch('/users/deals/best-watched-prices', {
method: 'GET',
token: authToken,
});
expect(bestPricesResponse.status).toBe(200);
const bestPricesData = await bestPricesResponse.json();
expect(bestPricesData.success).toBe(true);
// Verify we got deals for our watched items
expect(Array.isArray(bestPricesData.data)).toBe(true);
// Find the milk deal and verify it's the best price (Store 2 at $4.99)
if (bestPricesData.data.length > 0) {
const milkDeal = bestPricesData.data.find(
(deal: { item_name: string }) => deal.item_name === 'E2E Milk 2%',
);
if (milkDeal) {
expect(milkDeal.best_price_cents).toBe(499); // Best price from Store 2
expect(milkDeal.store_id).toBe(store2Id);
}
}
// Step 7: Search for specific items in flyers
// Note: This would require implementing a flyer search endpoint
// For now, we'll test the watched items functionality
// Step 8: Remove an item from watch list
const milkMasterItemId = createdMasterItemIds[0];
const removeResponse = await authedFetch(`/users/watched-items/${milkMasterItemId}`, {
method: 'DELETE',
token: authToken,
});
expect(removeResponse.status).toBe(204);
// Step 9: Verify item was removed
const updatedWatchedListResponse = await authedFetch('/users/watched-items', {
method: 'GET',
token: authToken,
});
expect(updatedWatchedListResponse.status).toBe(200);
const updatedWatchedListData = await updatedWatchedListResponse.json();
const milkStillWatched = updatedWatchedListData.data.find(
(item: { item_name: string }) => item.item_name === 'E2E Milk 2%',
);
expect(milkStillWatched).toBeUndefined();
// Step 10: Verify another user cannot see our watched items
const otherUserEmail = `other-deals-e2e-${uniqueId}@example.com`;
await apiClient.registerUser(otherUserEmail, userPassword, 'Other Deals User');
const { responseBody: otherLoginData } = await poll(
async () => {
const response = await apiClient.loginUser(otherUserEmail, userPassword, false);
const responseBody = response.ok ? await response.clone().json() : {};
return { response, responseBody };
},
(result) => result.response.ok,
{ timeout: 10000, interval: 1000, description: 'other user login' },
);
const otherToken = otherLoginData.data.token;
const otherUserId = otherLoginData.data.userprofile.user.user_id;
// Other user's watched items should be empty
const otherWatchedResponse = await authedFetch('/users/watched-items', {
method: 'GET',
token: otherToken,
});
expect(otherWatchedResponse.status).toBe(200);
const otherWatchedData = await otherWatchedResponse.json();
expect(otherWatchedData.data.length).toBe(0);
// Other user's deals should be empty
const otherDealsResponse = await authedFetch('/users/deals/best-watched-prices', {
method: 'GET',
token: otherToken,
});
expect(otherDealsResponse.status).toBe(200);
const otherDealsData = await otherDealsResponse.json();
expect(otherDealsData.data.length).toBe(0);
// Clean up other user
await cleanupDb({ userIds: [otherUserId] });
// Step 11: Delete account
const deleteAccountResponse = await apiClient.deleteUserAccount(userPassword, {
tokenOverride: authToken,
});
expect(deleteAccountResponse.status).toBe(200);
userId = null;
});
});
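Step 6 above asserts that `/users/deals/best-watched-prices` surfaces the lowest advertised price per watched item (Store 2's $4.99 milk beating Store 1's $5.99). The server-side query isn't shown in this diff; a hypothetical in-memory reduction illustrating the selection logic the assertions imply:

```typescript
// Hypothetical sketch of the best-price selection the endpoint is expected
// to perform: for each watched item, keep the flyer row with the lowest
// sale price. Row shape is assumed from the fields the test asserts on.
interface PriceRow {
  item_name: string;
  store_id: number;
  sale_price_cents: number;
}

export function bestPrices(rows: PriceRow[]): Map<string, PriceRow> {
  const best = new Map<string, PriceRow>();
  for (const row of rows) {
    const current = best.get(row.item_name);
    // First sighting of an item, or a strictly cheaper offer, wins.
    if (!current || row.sale_price_cents < current.sale_price_cents) {
      best.set(row.item_name, row);
    }
  }
  return best;
}
```

With the seed data above, the milk entry resolves to 499 cents at Store 2, matching the `best_price_cents` and `store_id` expectations in the test.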


@@ -275,10 +275,16 @@ describe('Admin API Routes Integration Tests', () => {
describe('DELETE /api/admin/users/:id', () => {
it("should allow an admin to delete another user's account", async () => {
// Create a dedicated user for this deletion test to avoid affecting other tests
const { user: userToDelete } = await createAndLoginUser({
email: `delete-target-${Date.now()}@test.com`,
fullName: 'User To Delete',
request,
});
// Act: Call the delete endpoint as an admin.
const response = await request
.delete(`/api/admin/users/${userToDelete.user.user_id}`)
.set('Authorization', `Bearer ${adminToken}`);
// Assert: Check for a successful deletion status.
@@ -318,4 +324,187 @@ describe('Admin API Routes Integration Tests', () => {
expect(response.status).toBe(404);
});
});
describe('Queue Management Routes', () => {
describe('GET /api/admin/queues/status', () => {
it('should return queue status for all queues', async () => {
const response = await request
.get('/api/admin/queues/status')
.set('Authorization', `Bearer ${adminToken}`);
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
expect(response.body.data).toBeInstanceOf(Array);
// Should have data for each queue
if (response.body.data.length > 0) {
const firstQueue = response.body.data[0];
expect(firstQueue).toHaveProperty('name');
expect(firstQueue).toHaveProperty('counts');
}
});
it('should forbid regular users from viewing queue status', async () => {
const response = await request
.get('/api/admin/queues/status')
.set('Authorization', `Bearer ${regularUserToken}`);
expect(response.status).toBe(403);
expect(response.body.error.message).toBe('Forbidden: Administrator access required.');
});
});
describe('POST /api/admin/trigger/analytics-report', () => {
it('should enqueue an analytics report job', async () => {
const response = await request
.post('/api/admin/trigger/analytics-report')
.set('Authorization', `Bearer ${adminToken}`);
expect(response.status).toBe(202); // 202 Accepted for async job enqueue
expect(response.body.success).toBe(true);
expect(response.body.data.message).toContain('enqueued');
});
it('should forbid regular users from triggering analytics report', async () => {
const response = await request
.post('/api/admin/trigger/analytics-report')
.set('Authorization', `Bearer ${regularUserToken}`);
expect(response.status).toBe(403);
});
});
describe('POST /api/admin/trigger/weekly-analytics', () => {
it('should enqueue a weekly analytics job', async () => {
const response = await request
.post('/api/admin/trigger/weekly-analytics')
.set('Authorization', `Bearer ${adminToken}`);
expect(response.status).toBe(202); // 202 Accepted for async job enqueue
expect(response.body.success).toBe(true);
expect(response.body.data.message).toContain('enqueued');
});
it('should forbid regular users from triggering weekly analytics', async () => {
const response = await request
.post('/api/admin/trigger/weekly-analytics')
.set('Authorization', `Bearer ${regularUserToken}`);
expect(response.status).toBe(403);
});
});
describe('POST /api/admin/trigger/daily-deal-check', () => {
it('should enqueue a daily deal check job', async () => {
const response = await request
.post('/api/admin/trigger/daily-deal-check')
.set('Authorization', `Bearer ${adminToken}`);
expect(response.status).toBe(202); // 202 Accepted for async job trigger
expect(response.body.success).toBe(true);
expect(response.body.data.message).toContain('triggered');
});
it('should forbid regular users from triggering daily deal check', async () => {
const response = await request
.post('/api/admin/trigger/daily-deal-check')
.set('Authorization', `Bearer ${regularUserToken}`);
expect(response.status).toBe(403);
});
});
describe('POST /api/admin/system/clear-cache', () => {
it('should clear the application cache', async () => {
const response = await request
.post('/api/admin/system/clear-cache')
.set('Authorization', `Bearer ${adminToken}`);
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
expect(response.body.data.message).toContain('cleared');
});
it('should forbid regular users from clearing cache', async () => {
const response = await request
.post('/api/admin/system/clear-cache')
.set('Authorization', `Bearer ${regularUserToken}`);
expect(response.status).toBe(403);
});
});
describe('POST /api/admin/jobs/:queue/:id/retry', () => {
it('should return validation error for invalid queue name', async () => {
const response = await request
.post('/api/admin/jobs/invalid-queue-name/1/retry')
.set('Authorization', `Bearer ${adminToken}`);
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('VALIDATION_ERROR');
});
it('should return 404 for non-existent job', async () => {
const response = await request
.post('/api/admin/jobs/flyer-processing/999999999/retry')
.set('Authorization', `Bearer ${adminToken}`);
expect(response.status).toBe(404);
expect(response.body.success).toBe(false);
});
it('should forbid regular users from retrying jobs', async () => {
const response = await request
.post('/api/admin/jobs/flyer-processing/1/retry')
.set('Authorization', `Bearer ${regularUserToken}`);
expect(response.status).toBe(403);
});
});
});
describe('GET /api/admin/users', () => {
it('should return all users for admin', async () => {
const response = await request
.get('/api/admin/users')
.set('Authorization', `Bearer ${adminToken}`);
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
// The endpoint returns { users: [...], total: N }
expect(response.body.data).toHaveProperty('users');
expect(response.body.data).toHaveProperty('total');
expect(response.body.data.users).toBeInstanceOf(Array);
expect(typeof response.body.data.total).toBe('number');
});
it('should forbid regular users from listing all users', async () => {
const response = await request
.get('/api/admin/users')
.set('Authorization', `Bearer ${regularUserToken}`);
expect(response.status).toBe(403);
});
});
describe('GET /api/admin/review/flyers', () => {
it('should return pending review flyers for admin', async () => {
const response = await request
.get('/api/admin/review/flyers')
.set('Authorization', `Bearer ${adminToken}`);
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
expect(response.body.data).toBeInstanceOf(Array);
});
it('should forbid regular users from viewing pending flyers', async () => {
const response = await request
.get('/api/admin/review/flyers')
.set('Authorization', `Bearer ${regularUserToken}`);
expect(response.status).toBe(403);
});
});
});
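Every admin route above pairs a success case with a 403 check for regular users, and one test pins the exact message `'Forbidden: Administrator access required.'`. A hypothetical sketch of the shared guard those assertions imply (the real middleware and its request types are not part of this diff):

```typescript
// Hypothetical sketch of the admin-only guard the 403 tests exercise.
// The request shape is an assumption; only the status codes and the
// error message are taken from the assertions above.
interface AuthedRequest {
  user?: { role: string };
}

export function requireAdmin(req: AuthedRequest): { status: number; message?: string } {
  if (req.user?.role === 'admin') {
    return { status: 200 }; // request may proceed to the route handler
  }
  // Regular and anonymous users alike get the pinned 403 message.
  return { status: 403, message: 'Forbidden: Administrator access required.' };
}
```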


@@ -206,4 +206,170 @@ describe('Authentication API Integration', () => {
);
}, 15000); // Increase timeout to handle multiple sequential requests
});
describe('Token Edge Cases', () => {
it('should reject empty Bearer token', async () => {
const response = await request.get('/api/users/profile').set('Authorization', 'Bearer ');
expect(response.status).toBe(401);
});
it('should reject token without dots (invalid JWT structure)', async () => {
const response = await request
.get('/api/users/profile')
.set('Authorization', 'Bearer notavalidtoken');
expect(response.status).toBe(401);
});
it('should reject token with only 2 parts (missing signature)', async () => {
const response = await request
.get('/api/users/profile')
.set('Authorization', 'Bearer header.payload');
expect(response.status).toBe(401);
});
it('should reject token with invalid signature', async () => {
// Valid structure but tampered signature
const response = await request
.get('/api/users/profile')
.set('Authorization', 'Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0In0.invalidsig');
expect(response.status).toBe(401);
});
it('should accept lowercase "bearer" scheme (case-insensitive)', async () => {
// First get a valid token
const loginResponse = await request
.post('/api/auth/login')
.send({ email: testUserEmail, password: TEST_PASSWORD, rememberMe: false });
const token = loginResponse.body.data.token;
// Use lowercase "bearer"
const response = await request
.get('/api/users/profile')
.set('Authorization', `bearer ${token}`);
expect(response.status).toBe(200);
});
it('should reject Basic auth scheme', async () => {
const response = await request
.get('/api/users/profile')
.set('Authorization', 'Basic dXNlcm5hbWU6cGFzc3dvcmQ=');
expect(response.status).toBe(401);
});
it('should reject missing Authorization header', async () => {
const response = await request.get('/api/users/profile');
expect(response.status).toBe(401);
});
});
describe('Login Security', () => {
it('should return same error for wrong password and non-existent user', async () => {
// Wrong password for existing user
const wrongPassResponse = await request
.post('/api/auth/login')
.send({ email: testUserEmail, password: 'wrong-password', rememberMe: false });
// Non-existent user
const nonExistentResponse = await request
.post('/api/auth/login')
.send({ email: 'nonexistent@example.com', password: 'any-password', rememberMe: false });
// Both should return 401 with the same message
expect(wrongPassResponse.status).toBe(401);
expect(nonExistentResponse.status).toBe(401);
expect(wrongPassResponse.body.error.message).toBe(nonExistentResponse.body.error.message);
expect(wrongPassResponse.body.error.message).toBe('Incorrect email or password.');
});
it('should return same response for forgot-password on existing and non-existing email', async () => {
// Request for existing user
const existingResponse = await request
.post('/api/auth/forgot-password')
.send({ email: testUserEmail });
// Request for non-existing user
const nonExistingResponse = await request
.post('/api/auth/forgot-password')
.send({ email: 'nonexistent-user@example.com' });
// Both should return 200 with similar success message (prevents email enumeration)
expect(existingResponse.status).toBe(200);
expect(nonExistingResponse.status).toBe(200);
expect(existingResponse.body.success).toBe(true);
expect(nonExistingResponse.body.success).toBe(true);
});
it('should return validation error for missing login fields', async () => {
const response = await request.post('/api/auth/login').send({ email: testUserEmail }); // Missing password
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('VALIDATION_ERROR');
});
});
describe('Password Reset', () => {
it('should reject reset with invalid token', async () => {
const response = await request.post('/api/auth/reset-password').send({
token: 'invalid-reset-token',
newPassword: TEST_PASSWORD,
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
});
describe('Registration Validation', () => {
it('should reject duplicate email registration', async () => {
const response = await request.post('/api/auth/register').send({
email: testUserEmail, // Already exists
password: TEST_PASSWORD,
full_name: 'Duplicate User',
});
expect(response.status).toBe(409); // CONFLICT
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('CONFLICT');
});
it('should reject invalid email format', async () => {
const response = await request.post('/api/auth/register').send({
email: 'not-an-email',
password: TEST_PASSWORD,
full_name: 'Invalid Email User',
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('VALIDATION_ERROR');
});
it('should reject weak password', async () => {
const response = await request.post('/api/auth/register').send({
email: `weak-pass-${Date.now()}@example.com`,
password: '123456', // Too weak
full_name: 'Weak Password User',
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
});
describe('Refresh Token Edge Cases', () => {
it('should return error when refresh token cookie is missing', async () => {
const response = await request.post('/api/auth/refresh-token');
expect(response.status).toBe(401);
expect(response.body.error.message).toBe('Refresh token not found.');
});
});
});
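The token edge cases above imply specific parsing rules: a case-insensitive `Bearer` scheme, rejection of tokens that aren't three dot-separated parts, and rejection of other schemes such as `Basic`. A hypothetical extraction helper consistent with those assertions (signature verification would still run on whatever this returns, which is how the tampered-signature case fails):

```typescript
// Hypothetical sketch of Authorization header parsing matching the edge
// cases tested above. Returns the raw token string, or null when the
// header should be rejected with 401 before any verification.
export function extractBearerToken(header: string | undefined): string | null {
  if (!header) return null; // missing Authorization header
  const [scheme, token] = header.split(' ');
  if (!scheme || scheme.toLowerCase() !== 'bearer') return null; // rejects Basic etc.
  if (!token) return null; // "Bearer " with an empty token
  if (token.split('.').length !== 3) return null; // not header.payload.signature
  return token;
}
```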


@@ -143,6 +143,67 @@ describe('Budget API Routes Integration Tests', () => {
expect(response.status).toBe(401);
});
it('should reject period="yearly" (only weekly/monthly allowed)', async () => {
const response = await request
.post('/api/budgets')
.set('Authorization', `Bearer ${authToken}`)
.send({
name: 'Yearly Budget',
amount_cents: 100000,
period: 'yearly',
start_date: '2025-01-01',
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('VALIDATION_ERROR');
});
it('should reject negative amount_cents', async () => {
const response = await request
.post('/api/budgets')
.set('Authorization', `Bearer ${authToken}`)
.send({
name: 'Negative Budget',
amount_cents: -500,
period: 'weekly',
start_date: '2025-01-01',
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
it('should reject invalid date format', async () => {
const response = await request
.post('/api/budgets')
.set('Authorization', `Bearer ${authToken}`)
.send({
name: 'Invalid Date Budget',
amount_cents: 10000,
period: 'weekly',
start_date: '01-01-2025', // Wrong format, should be YYYY-MM-DD
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
it('should require name field', async () => {
const response = await request
.post('/api/budgets')
.set('Authorization', `Bearer ${authToken}`)
.send({
amount_cents: 10000,
period: 'weekly',
start_date: '2025-01-01',
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('VALIDATION_ERROR');
});
});
describe('PUT /api/budgets/:id', () => {


@@ -0,0 +1,388 @@
// src/tests/integration/data-integrity.integration.test.ts
import { describe, it, expect, beforeAll, afterAll, vi } from 'vitest';
import supertest from 'supertest';
import { createAndLoginUser, TEST_PASSWORD } from '../utils/testHelpers';
import type { UserProfile } from '../../types';
import { getPool } from '../../services/db/connection.db';
/**
* @vitest-environment node
*
* Integration tests for data integrity: FK constraints, cascades, unique constraints, and CHECK constraints.
* These tests verify that database-level constraints are properly enforced.
*/
describe('Data Integrity Integration Tests', () => {
let request: ReturnType<typeof supertest>;
let adminToken: string;
let adminUser: UserProfile;
beforeAll(async () => {
vi.stubEnv('FRONTEND_URL', 'https://example.com');
const app = (await import('../../../server')).default;
request = supertest(app);
// Create an admin user for admin-level tests
const { user, token } = await createAndLoginUser({
email: `data-integrity-admin-${Date.now()}@example.com`,
fullName: 'Data Integrity Admin',
role: 'admin',
request,
});
adminUser = user;
adminToken = token;
});
afterAll(async () => {
vi.unstubAllEnvs();
// Clean up admin user
await getPool().query('DELETE FROM public.users WHERE user_id = $1', [adminUser.user.user_id]);
});
describe('Cascade Deletes', () => {
it('should cascade delete shopping lists when user is deleted', async () => {
// Create a test user with shopping lists
const { token } = await createAndLoginUser({
email: `cascade-test-${Date.now()}@example.com`,
fullName: 'Cascade Test User',
request,
});
// Create some shopping lists
const listResponse = await request
.post('/api/users/shopping-lists')
.set('Authorization', `Bearer ${token}`)
.send({ name: 'Cascade Test List' });
expect(listResponse.status).toBe(201);
const listId = listResponse.body.data.shopping_list_id;
// Verify list exists
const checkListBefore = await getPool().query(
'SELECT * FROM public.shopping_lists WHERE shopping_list_id = $1',
[listId],
);
expect(checkListBefore.rows.length).toBe(1);
// Delete the user account
const deleteResponse = await request
.delete('/api/users/account')
.set('Authorization', `Bearer ${token}`)
.send({ password: TEST_PASSWORD });
expect(deleteResponse.status).toBe(200);
// Verify list was cascade deleted
const checkListAfter = await getPool().query(
'SELECT * FROM public.shopping_lists WHERE shopping_list_id = $1',
[listId],
);
expect(checkListAfter.rows.length).toBe(0);
});
it('should cascade delete budgets when user is deleted', async () => {
// Create a test user with budgets
const { token } = await createAndLoginUser({
email: `budget-cascade-${Date.now()}@example.com`,
fullName: 'Budget Cascade User',
request,
});
// Create a budget
const budgetResponse = await request
.post('/api/budgets')
.set('Authorization', `Bearer ${token}`)
.send({
name: 'Cascade Test Budget',
amount_cents: 10000,
period: 'weekly',
start_date: '2025-01-01',
});
expect(budgetResponse.status).toBe(201);
const budgetId = budgetResponse.body.data.budget_id;
// Verify budget exists
const checkBefore = await getPool().query(
'SELECT * FROM public.budgets WHERE budget_id = $1',
[budgetId],
);
expect(checkBefore.rows.length).toBe(1);
// Delete the user account
const deleteResponse = await request
.delete('/api/users/account')
.set('Authorization', `Bearer ${token}`)
.send({ password: TEST_PASSWORD });
expect(deleteResponse.status).toBe(200);
// Verify budget was cascade deleted
const checkAfter = await getPool().query(
'SELECT * FROM public.budgets WHERE budget_id = $1',
[budgetId],
);
expect(checkAfter.rows.length).toBe(0);
});
it('should cascade delete shopping list items when list is deleted', async () => {
// Create a test user
const { user, token } = await createAndLoginUser({
email: `item-cascade-${Date.now()}@example.com`,
fullName: 'Item Cascade User',
request,
});
// Create a shopping list
const listResponse = await request
.post('/api/users/shopping-lists')
.set('Authorization', `Bearer ${token}`)
.send({ name: 'Item Cascade List' });
expect(listResponse.status).toBe(201);
const listId = listResponse.body.data.shopping_list_id;
// Add an item to the list
const itemResponse = await request
.post(`/api/users/shopping-lists/${listId}/items`)
.set('Authorization', `Bearer ${token}`)
.send({ customItemName: 'Test Item', quantity: 1 });
expect(itemResponse.status).toBe(201);
const itemId = itemResponse.body.data.shopping_list_item_id;
// Verify item exists
const checkItemBefore = await getPool().query(
'SELECT * FROM public.shopping_list_items WHERE shopping_list_item_id = $1',
[itemId],
);
expect(checkItemBefore.rows.length).toBe(1);
// Delete the shopping list
const deleteResponse = await request
.delete(`/api/users/shopping-lists/${listId}`)
.set('Authorization', `Bearer ${token}`);
expect(deleteResponse.status).toBe(204);
// Verify item was cascade deleted
const checkItemAfter = await getPool().query(
'SELECT * FROM public.shopping_list_items WHERE shopping_list_item_id = $1',
[itemId],
);
expect(checkItemAfter.rows.length).toBe(0);
// Clean up user
await getPool().query('DELETE FROM public.users WHERE user_id = $1', [user.user.user_id]);
});
});
describe('Admin Self-Deletion Prevention', () => {
it('should prevent admin from deleting their own account via admin route', async () => {
const response = await request
.delete(`/api/admin/users/${adminUser.user.user_id}`)
.set('Authorization', `Bearer ${adminToken}`);
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('VALIDATION_ERROR');
expect(response.body.error.message).toContain('cannot delete');
});
});
describe('FK Constraint Enforcement', () => {
it('should return error when adding item with invalid shopping list ID', async () => {
// Create a test user
const { user, token } = await createAndLoginUser({
email: `fk-test-${Date.now()}@example.com`,
fullName: 'FK Test User',
request,
});
// Try to add item to non-existent list
const response = await request
.post('/api/users/shopping-lists/999999/items')
.set('Authorization', `Bearer ${token}`)
.send({ customItemName: 'Invalid List Item', quantity: 1 });
expect(response.status).toBe(404);
// Clean up
await getPool().query('DELETE FROM public.users WHERE user_id = $1', [user.user.user_id]);
});
it('should enforce FK constraints at database level', async () => {
// Try to insert directly into DB with invalid FK
try {
await getPool().query(
`INSERT INTO public.shopping_list_items (shopping_list_id, custom_item_name, quantity)
VALUES (999999999, 'Direct Insert Test', 1)`,
);
// If we get here, the constraint didn't fire
expect.fail('Expected FK constraint violation');
} catch (error) {
// Expected - FK constraint should prevent this
expect(error).toBeDefined();
expect((error as Error).message).toContain('violates foreign key constraint');
}
});
});
describe('Unique Constraints', () => {
it('should return CONFLICT for duplicate email registration', async () => {
const email = `unique-test-${Date.now()}@example.com`;
// Register first user
const firstResponse = await request
.post('/api/auth/register')
.send({ email, password: TEST_PASSWORD, full_name: 'First User' });
expect(firstResponse.status).toBe(201);
// Try to register second user with same email
const secondResponse = await request
.post('/api/auth/register')
.send({ email, password: TEST_PASSWORD, full_name: 'Second User' });
expect(secondResponse.status).toBe(409); // CONFLICT
expect(secondResponse.body.success).toBe(false);
expect(secondResponse.body.error.code).toBe('CONFLICT');
// Clean up first user
const userId = firstResponse.body.data.userprofile.user.user_id;
await getPool().query('DELETE FROM public.users WHERE user_id = $1', [userId]);
});
});
describe('CHECK Constraints', () => {
it('should reject budget with invalid period via API', async () => {
const { user, token } = await createAndLoginUser({
email: `check-test-${Date.now()}@example.com`,
fullName: 'Check Constraint User',
request,
});
const response = await request
.post('/api/budgets')
.set('Authorization', `Bearer ${token}`)
.send({
name: 'Invalid Period Budget',
amount_cents: 10000,
period: 'yearly', // Invalid - only weekly/monthly allowed
start_date: '2025-01-01',
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
// Clean up
await getPool().query('DELETE FROM public.users WHERE user_id = $1', [user.user.user_id]);
});
it('should reject budget with negative amount via API', async () => {
const { user, token } = await createAndLoginUser({
email: `amount-check-${Date.now()}@example.com`,
fullName: 'Amount Check User',
request,
});
const response = await request
.post('/api/budgets')
.set('Authorization', `Bearer ${token}`)
.send({
name: 'Negative Amount Budget',
amount_cents: -100, // Invalid - must be positive
period: 'weekly',
start_date: '2025-01-01',
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
// Clean up
await getPool().query('DELETE FROM public.users WHERE user_id = $1', [user.user.user_id]);
});
it('should enforce CHECK constraints at database level', async () => {
// Try to insert directly with invalid period
const { user, token: _ } = await createAndLoginUser({
email: `db-check-${Date.now()}@example.com`,
fullName: 'DB Check User',
request,
});
try {
await getPool().query(
`INSERT INTO public.budgets (user_id, name, amount_cents, period, start_date)
VALUES ($1, 'Direct Insert', 10000, 'yearly', '2025-01-01')`,
[user.user.user_id],
);
// If we get here, the constraint didn't fire
expect.fail('Expected CHECK constraint violation');
} catch (error) {
// Expected - CHECK constraint should prevent this
expect(error).toBeDefined();
expect((error as Error).message).toContain('violates check constraint');
}
// Clean up
await getPool().query('DELETE FROM public.users WHERE user_id = $1', [user.user.user_id]);
});
});
describe('NOT NULL Constraints', () => {
it('should require budget name via API', async () => {
const { user, token } = await createAndLoginUser({
email: `notnull-test-${Date.now()}@example.com`,
fullName: 'NotNull Test User',
request,
});
const response = await request
.post('/api/budgets')
.set('Authorization', `Bearer ${token}`)
.send({
// name is missing - required field
amount_cents: 10000,
period: 'weekly',
start_date: '2025-01-01',
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('VALIDATION_ERROR');
// Clean up
await getPool().query('DELETE FROM public.users WHERE user_id = $1', [user.user.user_id]);
});
});
describe('Transaction Rollback', () => {
it('should rollback partial inserts on constraint violation', async () => {
const pool = getPool();
const client = await pool.connect();
try {
await client.query('BEGIN');
// First insert should work
const { user } = await createAndLoginUser({
email: `transaction-test-${Date.now()}@example.com`,
fullName: 'Transaction Test User',
request,
});
await client.query(
`INSERT INTO public.shopping_lists (user_id, name) VALUES ($1, 'Transaction Test List') RETURNING shopping_list_id`,
[user.user.user_id],
);
// This should fail due to FK constraint
await client.query(
`INSERT INTO public.shopping_list_items (shopping_list_id, custom_item_name, quantity)
VALUES (999999999, 'Should Fail', 1)`,
);
await client.query('COMMIT');
expect.fail('Expected transaction to fail');
} catch {
await client.query('ROLLBACK');
// Expected - transaction should have rolled back
} finally {
client.release();
}
});
});
});
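The DB-level constraint tests above match on error message substrings, which are brittle across Postgres versions and locales. node-postgres also surfaces the SQLSTATE on `error.code`, which is stable. A minimal sketch of a message-independent assertion helper (the helper name and map are illustrative, not part of this test suite):

```typescript
// Map the SQLSTATE codes Postgres uses for integrity violations to the
// constraint class they represent. Codes are from the Postgres error-code
// appendix; the helper name is ours, not from the codebase.
const PG_CONSTRAINT_CODES: Record<string, string> = {
  '23502': 'not_null_violation',
  '23503': 'foreign_key_violation',
  '23505': 'unique_violation',
  '23514': 'check_violation',
};

export function pgConstraintClass(err: unknown): string | undefined {
  const code = (err as { code?: string })?.code;
  return code ? PG_CONSTRAINT_CODES[code] : undefined;
}

// In a test, instead of matching on message text:
//   expect(pgConstraintClass(error)).toBe('foreign_key_violation');
```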


@@ -0,0 +1,76 @@
// src/tests/integration/deals.integration.test.ts
import { describe, it, expect, beforeAll, afterAll, vi } from 'vitest';
import supertest from 'supertest';
import { createAndLoginUser } from '../utils/testHelpers';
import { cleanupDb } from '../utils/cleanup';
/**
* @vitest-environment node
*
* Integration tests for the Deals API routes.
* These routes were previously unmounted and are now available at /api/deals.
*/
describe('Deals API Routes Integration Tests', () => {
let request: ReturnType<typeof supertest>;
let authToken: string;
const createdUserIds: string[] = [];
beforeAll(async () => {
vi.stubEnv('FRONTEND_URL', 'https://example.com');
const app = (await import('../../../server')).default;
request = supertest(app);
// Create a user for the tests
const { user, token } = await createAndLoginUser({
email: `deals-user-${Date.now()}@example.com`,
fullName: 'Deals Test User',
request,
});
authToken = token;
createdUserIds.push(user.user.user_id);
});
afterAll(async () => {
vi.unstubAllEnvs();
await cleanupDb({
userIds: createdUserIds,
});
});
describe('GET /api/deals/best-watched-prices', () => {
it('should require authentication', async () => {
const response = await request.get('/api/deals/best-watched-prices');
// Passport returns 401 Unauthorized for unauthenticated requests
expect(response.status).toBe(401);
});
it('should return empty array for authenticated user with no watched items', async () => {
// The test user has no watched items by default, so should get empty array
const response = await request
.get('/api/deals/best-watched-prices')
.set('Authorization', `Bearer ${authToken}`);
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
expect(response.body.data).toBeInstanceOf(Array);
});
it('should reject invalid JWT token', async () => {
const response = await request
.get('/api/deals/best-watched-prices')
.set('Authorization', 'Bearer invalid.token.here');
expect(response.status).toBe(401);
});
it('should reject missing Bearer prefix', async () => {
const response = await request
.get('/api/deals/best-watched-prices')
.set('Authorization', authToken);
expect(response.status).toBe(401);
});
});
});
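The "missing Bearer prefix" case above pins down the header parsing the auth layer is expected to perform. A minimal sketch of that parsing, assuming a strict `Bearer <token>` scheme (the real app delegates this to Passport; the function name is illustrative):

```typescript
// Extract the token from an Authorization header, or null if the header is
// absent or not in the `Bearer <token>` form. A raw token without the
// "Bearer " prefix is rejected, which is why the test above expects 401
// for `.set('Authorization', authToken)`.
export function extractBearerToken(header: string | undefined): string | null {
  if (!header) return null;
  const [scheme, token] = header.split(' ');
  if (scheme !== 'Bearer' || !token) return null;
  return token;
}
```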


@@ -0,0 +1,360 @@
// src/tests/integration/edge-cases.integration.test.ts
import { describe, it, expect, beforeAll, afterAll, vi } from 'vitest';
import supertest from 'supertest';
import { createAndLoginUser } from '../utils/testHelpers';
import { cleanupDb } from '../utils/cleanup';
import * as fs from 'fs';
import * as path from 'path';
import * as crypto from 'crypto';
/**
* @vitest-environment node
*
* Integration tests for edge cases discovered during manual frontend testing.
* These tests cover file upload validation, input sanitization, and authorization boundaries.
*/
describe('Edge Cases Integration Tests', () => {
let request: ReturnType<typeof supertest>;
let authToken: string;
let otherUserToken: string;
const createdUserIds: string[] = [];
const createdShoppingListIds: number[] = [];
beforeAll(async () => {
vi.stubEnv('FRONTEND_URL', 'https://example.com');
const app = (await import('../../../server')).default;
request = supertest(app);
// Create primary test user
const { user, token } = await createAndLoginUser({
email: `edge-case-user-${Date.now()}@example.com`,
fullName: 'Edge Case Test User',
request,
});
authToken = token;
createdUserIds.push(user.user.user_id);
// Create secondary test user for cross-user tests
const { user: user2, token: token2 } = await createAndLoginUser({
email: `edge-case-other-${Date.now()}@example.com`,
fullName: 'Other Test User',
request,
});
otherUserToken = token2;
createdUserIds.push(user2.user.user_id);
});
afterAll(async () => {
vi.unstubAllEnvs();
await cleanupDb({
userIds: createdUserIds,
shoppingListIds: createdShoppingListIds,
});
});
describe('File Upload Validation', () => {
describe('Checksum Validation', () => {
it('should reject missing checksum', async () => {
// Create a small valid PNG
const testImagePath = path.join(__dirname, '../assets/flyer-test.png');
if (!fs.existsSync(testImagePath)) {
// Skip if test asset doesn't exist
return;
}
const response = await request
.post('/api/ai/upload-and-process')
.set('Authorization', `Bearer ${authToken}`)
.attach('flyerFile', testImagePath);
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
expect(response.body.error.message).toContain('checksum');
});
it('should reject invalid checksum format (non-hex)', async () => {
const testImagePath = path.join(__dirname, '../assets/flyer-test.png');
if (!fs.existsSync(testImagePath)) {
return;
}
const response = await request
.post('/api/ai/upload-and-process')
.set('Authorization', `Bearer ${authToken}`)
.attach('flyerFile', testImagePath)
.field('checksum', 'not-a-valid-hex-checksum-at-all!!!!');
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
it('should reject short checksum (not 64 characters)', async () => {
const testImagePath = path.join(__dirname, '../assets/flyer-test.png');
if (!fs.existsSync(testImagePath)) {
return;
}
const response = await request
.post('/api/ai/upload-and-process')
.set('Authorization', `Bearer ${authToken}`)
.attach('flyerFile', testImagePath)
.field('checksum', 'abc123'); // Too short
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
});
describe('File Type Validation', () => {
it('should require flyerFile field', async () => {
const checksum = crypto.randomBytes(32).toString('hex');
const response = await request
.post('/api/ai/upload-and-process')
.set('Authorization', `Bearer ${authToken}`)
.field('checksum', checksum);
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
expect(response.body.error.message).toContain('file');
});
});
});
describe('Input Sanitization', () => {
describe('Shopping List Names', () => {
it('should accept unicode characters and emojis', async () => {
const response = await request
.post('/api/users/shopping-lists')
.set('Authorization', `Bearer ${authToken}`)
.send({ name: 'Grocery List 🛒 日本語 émoji' });
expect(response.status).toBe(201);
expect(response.body.success).toBe(true);
expect(response.body.data.name).toBe('Grocery List 🛒 日本語 émoji');
if (response.body.data.shopping_list_id) {
createdShoppingListIds.push(response.body.data.shopping_list_id);
}
});
it('should store XSS payloads as-is (frontend must escape)', async () => {
const xssPayload = '<script>alert("xss")</script>';
const response = await request
.post('/api/users/shopping-lists')
.set('Authorization', `Bearer ${authToken}`)
.send({ name: xssPayload });
expect(response.status).toBe(201);
expect(response.body.success).toBe(true);
// The payload is stored as-is - frontend is responsible for escaping
expect(response.body.data.name).toBe(xssPayload);
if (response.body.data.shopping_list_id) {
createdShoppingListIds.push(response.body.data.shopping_list_id);
}
});
it('should reject null bytes in JSON', async () => {
// Null bytes in JSON should be rejected by the JSON parser
const response = await request
.post('/api/users/shopping-lists')
.set('Authorization', `Bearer ${authToken}`)
.set('Content-Type', 'application/json')
.send('{"name":"test\u0000value"}');
// JSON parser may reject this or sanitize it
expect([400, 201]).toContain(response.status);
});
});
});
describe('Authorization Boundaries', () => {
describe('Cross-User Resource Access', () => {
it("should return 404 (not 403) for accessing another user's shopping list", async () => {
// Create a shopping list as the primary user
const createResponse = await request
.post('/api/users/shopping-lists')
.set('Authorization', `Bearer ${authToken}`)
.send({ name: 'Private List' });
expect(createResponse.status).toBe(201);
const listId = createResponse.body.data.shopping_list_id;
createdShoppingListIds.push(listId);
// Try to access it as the other user
const accessResponse = await request
.get(`/api/users/shopping-lists/${listId}`)
.set('Authorization', `Bearer ${otherUserToken}`);
// Should return 404 to hide resource existence
expect(accessResponse.status).toBe(404);
expect(accessResponse.body.success).toBe(false);
expect(accessResponse.body.error.code).toBe('NOT_FOUND');
});
it("should return 404 when trying to update another user's shopping list", async () => {
// Create a shopping list as the primary user
const createResponse = await request
.post('/api/users/shopping-lists')
.set('Authorization', `Bearer ${authToken}`)
.send({ name: 'Another Private List' });
expect(createResponse.status).toBe(201);
const listId = createResponse.body.data.shopping_list_id;
createdShoppingListIds.push(listId);
// Try to update it as the other user
const updateResponse = await request
.put(`/api/users/shopping-lists/${listId}`)
.set('Authorization', `Bearer ${otherUserToken}`)
.send({ name: 'Hacked List' });
// Should return 404 to hide resource existence
expect(updateResponse.status).toBe(404);
});
it("should return 404 when trying to delete another user's shopping list", async () => {
// Create a shopping list as the primary user
const createResponse = await request
.post('/api/users/shopping-lists')
.set('Authorization', `Bearer ${authToken}`)
.send({ name: 'Delete Test List' });
expect(createResponse.status).toBe(201);
const listId = createResponse.body.data.shopping_list_id;
createdShoppingListIds.push(listId);
// Try to delete it as the other user
const deleteResponse = await request
.delete(`/api/users/shopping-lists/${listId}`)
.set('Authorization', `Bearer ${otherUserToken}`);
// Should return 404 to hide resource existence
expect(deleteResponse.status).toBe(404);
});
});
describe('SQL Injection Prevention', () => {
it('should safely handle SQL injection in query params', async () => {
// Attempt SQL injection in limit param
const response = await request
.get('/api/personalization/master-items')
.query({ limit: '10; DROP TABLE users; --' });
// Should either return normal data or a validation error, not crash
expect([200, 400]).toContain(response.status);
expect(response.body).toBeDefined();
});
it('should safely handle SQL injection in search params', async () => {
// Attempt SQL injection in flyer search
const response = await request.get('/api/flyers').query({
search: "'; DROP TABLE flyers; --",
});
// Should handle safely
expect([200, 400]).toContain(response.status);
});
});
});
describe('API Error Handling', () => {
it('should return 404 for non-existent resources with clear message', async () => {
const response = await request
.get('/api/flyers/99999999')
.set('Authorization', `Bearer ${authToken}`);
expect(response.status).toBe(404);
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('NOT_FOUND');
});
it('should return validation error for malformed JSON body', async () => {
const response = await request
.post('/api/users/shopping-lists')
.set('Authorization', `Bearer ${authToken}`)
.set('Content-Type', 'application/json')
.send('{ invalid json }');
expect(response.status).toBe(400);
});
it('should return validation error for missing required fields', async () => {
const response = await request
.post('/api/budgets')
.set('Authorization', `Bearer ${authToken}`)
.send({}); // Empty body
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('VALIDATION_ERROR');
});
it('should return validation error for invalid data types', async () => {
const response = await request
.post('/api/budgets')
.set('Authorization', `Bearer ${authToken}`)
.send({
name: 'Test Budget',
amount_cents: 'not-a-number', // Should be number
period: 'weekly',
start_date: '2025-01-01',
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
});
describe('Concurrent Operations', () => {
it('should handle concurrent writes without data loss', async () => {
// Create 5 shopping lists concurrently
const promises = Array.from({ length: 5 }, (_, i) =>
request
.post('/api/users/shopping-lists')
.set('Authorization', `Bearer ${authToken}`)
.send({ name: `Concurrent List ${i + 1}` }),
);
const results = await Promise.all(promises);
// All should succeed
results.forEach((response) => {
expect(response.status).toBe(201);
expect(response.body.success).toBe(true);
if (response.body.data.shopping_list_id) {
createdShoppingListIds.push(response.body.data.shopping_list_id);
}
});
// Verify all lists were created
const listResponse = await request
.get('/api/users/shopping-lists')
.set('Authorization', `Bearer ${authToken}`);
expect(listResponse.status).toBe(200);
const lists = listResponse.body.data;
const concurrentLists = lists.filter((l: { name: string }) =>
l.name.startsWith('Concurrent List'),
);
expect(concurrentLists.length).toBe(5);
});
it('should handle concurrent reads without errors', async () => {
// Make 10 concurrent read requests
const promises = Array.from({ length: 10 }, () =>
request.get('/api/personalization/master-items'),
);
const results = await Promise.all(promises);
// All should succeed
results.forEach((response) => {
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
});
});
});
});
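The checksum-validation tests above reject anything that is not 64 hex characters, i.e. a SHA-256 digest in hex form. A sketch of the checksum a client would compute before uploading, under that assumption (field names as in the tests; the function name is illustrative):

```typescript
import * as crypto from 'crypto';

// Compute the checksum the upload endpoint expects: a SHA-256 digest of the
// file bytes, rendered as 64 lowercase hex characters. Anything shorter or
// containing non-hex characters fails the validation exercised above.
export function fileChecksum(buf: Buffer): string {
  return crypto.createHash('sha256').update(buf).digest('hex');
}

// A client would then send it alongside the file:
//   .attach('flyerFile', path).field('checksum', fileChecksum(bytes))
```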


@@ -145,4 +145,87 @@ describe('Notification API Routes Integration Tests', () => {
expect(Number(finalUnreadCountRes.rows[0].count)).toBe(0);
});
});
describe('Job Status Polling', () => {
describe('GET /api/ai/jobs/:id/status', () => {
it('should return 404 for non-existent job', async () => {
const response = await request.get('/api/ai/jobs/nonexistent-job-id/status');
expect(response.status).toBe(404);
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('NOT_FOUND');
});
it('should be accessible without authentication (public endpoint)', async () => {
      // Job status can be polled without authentication, which matters for
      // UX: the frontend polls this endpoint while an upload job is in flight.
const response = await request.get('/api/ai/jobs/test-job-123/status');
// Should return 404 (job not found) rather than 401 (unauthorized)
expect(response.status).toBe(404);
expect(response.body.error.code).toBe('NOT_FOUND');
});
});
});
describe('DELETE /api/users/notifications/:notificationId', () => {
it('should delete a specific notification', async () => {
// First create a notification to delete
const createResult = await getPool().query(
`INSERT INTO public.notifications (user_id, content, is_read, link_url)
VALUES ($1, 'Notification to delete', false, '/test')
RETURNING notification_id`,
[testUser.user.user_id],
);
const notificationId = createResult.rows[0].notification_id;
const response = await request
.delete(`/api/users/notifications/${notificationId}`)
.set('Authorization', `Bearer ${authToken}`);
expect(response.status).toBe(204);
// Verify it was deleted
const verifyResult = await getPool().query(
`SELECT * FROM public.notifications WHERE notification_id = $1`,
[notificationId],
);
expect(verifyResult.rows.length).toBe(0);
});
it('should return 404 for non-existent notification', async () => {
const response = await request
.delete('/api/users/notifications/999999')
.set('Authorization', `Bearer ${authToken}`);
expect(response.status).toBe(404);
});
it("should prevent deleting another user's notification", async () => {
// Create another user
const { user: otherUser, token: otherToken } = await createAndLoginUser({
email: `notification-other-${Date.now()}@example.com`,
fullName: 'Other Notification User',
request,
});
createdUserIds.push(otherUser.user.user_id);
// Create a notification for the original user
const createResult = await getPool().query(
`INSERT INTO public.notifications (user_id, content, is_read, link_url)
VALUES ($1, 'Private notification', false, '/test')
RETURNING notification_id`,
[testUser.user.user_id],
);
const notificationId = createResult.rows[0].notification_id;
// Try to delete it as the other user
const response = await request
.delete(`/api/users/notifications/${notificationId}`)
.set('Authorization', `Bearer ${otherToken}`);
// Should return 404 (not 403) to hide existence
expect(response.status).toBe(404);
});
});
});
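The cross-user deletion test above relies on the ownership-scoped query pattern: the `WHERE` clause includes `user_id`, so another user's delete matches zero rows and surfaces as 404 rather than 403, which would leak that the resource exists. A minimal sketch of the status mapping (illustrative names; the route's actual SQL may differ):

```typescript
// Map the affected-row count of an ownership-scoped DELETE to a status:
// zero rows means "not found for this user", deliberately indistinguishable
// from "does not exist at all".
export function statusForScopedDelete(rowCount: number): 204 | 404 {
  return rowCount > 0 ? 204 : 404;
}

// In a handler (illustrative query, not the app's actual SQL):
//   const result = await pool.query(
//     'DELETE FROM public.notifications WHERE notification_id = $1 AND user_id = $2',
//     [notificationId, req.user.user_id],
//   );
//   res.status(statusForScopedDelete(result.rowCount ?? 0)).end();
```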


@@ -167,8 +167,11 @@ describe('Public API Routes Integration Tests', () => {
it('GET /api/personalization/master-items should return a list of master grocery items', async () => {
const response = await request.get('/api/personalization/master-items');
expect(response.status).toBe(200);
// The endpoint returns { items: [...], total: N } for pagination support
expect(response.body.data).toHaveProperty('items');
expect(response.body.data).toHaveProperty('total');
const masterItems = response.body.data.items;
expect(masterItems).toBeInstanceOf(Array);
expect(masterItems.length).toBeGreaterThan(0); // This relies on seed data for master items.
expect(masterItems[0]).toHaveProperty('master_grocery_item_id');


@@ -0,0 +1,244 @@
// src/tests/integration/reactions.integration.test.ts
import { describe, it, expect, beforeAll, afterAll, vi } from 'vitest';
import supertest from 'supertest';
import { createAndLoginUser } from '../utils/testHelpers';
import { cleanupDb } from '../utils/cleanup';
import { getPool } from '../../services/db/connection.db';
/**
* @vitest-environment node
*
* Integration tests for the Reactions API routes.
* These routes were previously unmounted and are now available at /api/reactions.
*/
describe('Reactions API Routes Integration Tests', () => {
let request: ReturnType<typeof supertest>;
let authToken: string;
let testRecipeId: number;
const createdUserIds: string[] = [];
const createdReactionIds: number[] = [];
beforeAll(async () => {
vi.stubEnv('FRONTEND_URL', 'https://example.com');
const app = (await import('../../../server')).default;
request = supertest(app);
// Create a user for the tests
const { user, token } = await createAndLoginUser({
email: `reactions-user-${Date.now()}@example.com`,
fullName: 'Reactions Test User',
request,
});
authToken = token;
createdUserIds.push(user.user.user_id);
// Get an existing recipe ID from the seed data to use for reactions
const recipeResult = await getPool().query(`SELECT recipe_id FROM public.recipes LIMIT 1`);
if (recipeResult.rows.length > 0) {
testRecipeId = recipeResult.rows[0].recipe_id;
} else {
// Create a minimal recipe if none exist
const newRecipe = await getPool().query(
`INSERT INTO public.recipes (title, description, instructions, prep_time_minutes, cook_time_minutes, servings)
VALUES ('Test Recipe for Reactions', 'A test recipe', 'Test instructions', 10, 20, 4)
RETURNING recipe_id`,
);
testRecipeId = newRecipe.rows[0].recipe_id;
}
});
afterAll(async () => {
vi.unstubAllEnvs();
// Clean up reactions created during tests
if (createdReactionIds.length > 0) {
await getPool().query(
'DELETE FROM public.user_reactions WHERE reaction_id = ANY($1::int[])',
[createdReactionIds],
);
}
await cleanupDb({
userIds: createdUserIds,
});
});
describe('GET /api/reactions', () => {
it('should return reactions (public endpoint)', async () => {
const response = await request.get('/api/reactions');
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
expect(response.body.data).toBeInstanceOf(Array);
});
it('should filter reactions by entityType', async () => {
const response = await request.get('/api/reactions').query({ entityType: 'recipe' });
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
expect(response.body.data).toBeInstanceOf(Array);
});
it('should filter reactions by entityId', async () => {
const response = await request
.get('/api/reactions')
.query({ entityType: 'recipe', entityId: String(testRecipeId) });
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
expect(response.body.data).toBeInstanceOf(Array);
});
});
describe('GET /api/reactions/summary', () => {
it('should return reaction summary for an entity', async () => {
const response = await request
.get('/api/reactions/summary')
.query({ entityType: 'recipe', entityId: String(testRecipeId) });
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
// Summary should have reaction counts
expect(response.body.data).toBeDefined();
});
it('should return 400 when entityType is missing', async () => {
const response = await request
.get('/api/reactions/summary')
.query({ entityId: String(testRecipeId) });
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
it('should return 400 when entityId is missing', async () => {
const response = await request.get('/api/reactions/summary').query({ entityType: 'recipe' });
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
});
describe('POST /api/reactions/toggle', () => {
it('should require authentication', async () => {
const response = await request.post('/api/reactions/toggle').send({
entity_type: 'recipe',
entity_id: String(testRecipeId),
reaction_type: 'like',
});
expect(response.status).toBe(401);
});
it('should add a reaction when none exists', async () => {
const response = await request
.post('/api/reactions/toggle')
.set('Authorization', `Bearer ${authToken}`)
.send({
entity_type: 'recipe',
entity_id: String(testRecipeId),
reaction_type: 'like',
});
expect(response.status).toBe(201);
expect(response.body.success).toBe(true);
expect(response.body.data.message).toBe('Reaction added.');
expect(response.body.data.reaction).toBeDefined();
// Track for cleanup
if (response.body.data.reaction?.reaction_id) {
createdReactionIds.push(response.body.data.reaction.reaction_id);
}
});
it('should remove the reaction when toggled again', async () => {
// First add the reaction
const addResponse = await request
.post('/api/reactions/toggle')
.set('Authorization', `Bearer ${authToken}`)
.send({
entity_type: 'recipe',
entity_id: String(testRecipeId),
reaction_type: 'love', // Use different type to not conflict
});
expect(addResponse.status).toBe(201);
if (addResponse.body.data.reaction?.reaction_id) {
createdReactionIds.push(addResponse.body.data.reaction.reaction_id);
}
// Then toggle it off
const removeResponse = await request
.post('/api/reactions/toggle')
.set('Authorization', `Bearer ${authToken}`)
.send({
entity_type: 'recipe',
entity_id: String(testRecipeId),
reaction_type: 'love',
});
expect(removeResponse.status).toBe(200);
expect(removeResponse.body.success).toBe(true);
expect(removeResponse.body.data.message).toBe('Reaction removed.');
});
it('should return 400 for missing entity_type', async () => {
const response = await request
.post('/api/reactions/toggle')
.set('Authorization', `Bearer ${authToken}`)
.send({
entity_id: String(testRecipeId),
reaction_type: 'like',
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
it('should return 400 for missing entity_id', async () => {
const response = await request
.post('/api/reactions/toggle')
.set('Authorization', `Bearer ${authToken}`)
.send({
entity_type: 'recipe',
reaction_type: 'like',
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
it('should return 400 for missing reaction_type', async () => {
const response = await request
.post('/api/reactions/toggle')
.set('Authorization', `Bearer ${authToken}`)
.send({
entity_type: 'recipe',
entity_id: String(testRecipeId),
});
expect(response.status).toBe(400);
expect(response.body.success).toBe(false);
});
it('should accept entity_id as string (required format)', async () => {
// entity_id must be a string per the Zod schema
const response = await request
.post('/api/reactions/toggle')
.set('Authorization', `Bearer ${authToken}`)
.send({
entity_type: 'recipe',
entity_id: String(testRecipeId),
reaction_type: 'helpful',
});
// Should succeed (201 for add, 200 for remove)
expect([200, 201]).toContain(response.status);
expect(response.body.success).toBe(true);
if (response.body.data.reaction?.reaction_id) {
createdReactionIds.push(response.body.data.reaction.reaction_id);
}
});
});
});
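The toggle tests above pin down the endpoint's semantics: the first call for a given (entity, reaction) pair adds and returns 201, an identical second call removes and returns 200. A pure sketch of that state machine, keyed on an illustrative composite string (not the app's actual storage model):

```typescript
type ToggleResult = { action: 'added' | 'removed'; status: 201 | 200 };

// Toggle a reaction identified by a key such as
// `${entity_type}:${entity_id}:${reaction_type}`. Present -> remove (200),
// absent -> add (201), matching the assertions in the tests above.
export function toggleReaction(existing: Set<string>, key: string): ToggleResult {
  if (existing.has(key)) {
    existing.delete(key);
    return { action: 'removed', status: 200 };
  }
  existing.add(key);
  return { action: 'added', status: 201 };
}
```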


@@ -232,6 +232,88 @@ describe('Recipe API Routes Integration Tests', () => {
createdRecipeIds.push(forkedRecipe.recipe_id);
});
it('should allow forking seed recipes (null user_id)', async () => {
// First, find or create a seed recipe (one with null user_id)
let seedRecipeId: number;
const seedRecipeResult = await getPool().query(
`SELECT recipe_id FROM public.recipes WHERE user_id IS NULL LIMIT 1`,
);
if (seedRecipeResult.rows.length > 0) {
seedRecipeId = seedRecipeResult.rows[0].recipe_id;
} else {
// Create a seed recipe if none exist
const createSeedResult = await getPool().query(
`INSERT INTO public.recipes (name, instructions, user_id, status, description)
VALUES ('Seed Recipe for Fork Test', 'Seed recipe instructions.', NULL, 'public', 'A seed recipe.')
RETURNING recipe_id`,
);
seedRecipeId = createSeedResult.rows[0].recipe_id;
createdRecipeIds.push(seedRecipeId);
}
// Fork the seed recipe - this should succeed
const response = await request
.post(`/api/recipes/${seedRecipeId}/fork`)
.set('Authorization', `Bearer ${authToken}`);
// Forking should work - seed recipes should be forkable
expect(response.status).toBe(201);
const forkedRecipe: Recipe = response.body.data;
expect(forkedRecipe.original_recipe_id).toBe(seedRecipeId);
expect(forkedRecipe.user_id).toBe(testUser.user.user_id);
// Track for cleanup
createdRecipeIds.push(forkedRecipe.recipe_id);
});
describe('GET /api/recipes/:recipeId/comments', () => {
it('should return comments for a recipe', async () => {
// First add a comment
await request
.post(`/api/recipes/${testRecipe.recipe_id}/comments`)
.set('Authorization', `Bearer ${authToken}`)
.send({ content: 'Test comment for GET request' });
// Now fetch comments
const response = await request.get(`/api/recipes/${testRecipe.recipe_id}/comments`);
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
expect(response.body.data).toBeInstanceOf(Array);
expect(response.body.data.length).toBeGreaterThan(0);
// Verify comment structure
const comment = response.body.data[0];
expect(comment).toHaveProperty('recipe_comment_id');
expect(comment).toHaveProperty('content');
expect(comment).toHaveProperty('user_id');
expect(comment).toHaveProperty('recipe_id');
});
it('should return empty array for recipe with no comments', async () => {
// Create a recipe specifically with no comments
const createRes = await request
.post('/api/users/recipes')
.set('Authorization', `Bearer ${authToken}`)
.send({
name: 'Recipe With No Comments',
instructions: 'No comments here.',
description: 'Testing empty comments.',
});
const noCommentsRecipe: Recipe = createRes.body.data;
createdRecipeIds.push(noCommentsRecipe.recipe_id);
// Fetch comments for this recipe
const response = await request.get(`/api/recipes/${noCommentsRecipe.recipe_id}/comments`);
expect(response.status).toBe(200);
expect(response.body.success).toBe(true);
expect(response.body.data).toEqual([]);
});
});
describe('POST /api/recipes/suggest', () => {
it('should return a recipe suggestion based on ingredients', async () => {
const ingredients = ['chicken', 'rice', 'broccoli'];


@@ -19,21 +19,27 @@ import { vi } from 'vitest';
* // ... rest of the test
* });
*/
/**
* Helper to create a mock API response in the standard format.
* API responses are wrapped in { success: true, data: ... } per ADR-028.
*/
const mockApiResponse = <T>(data: T): Response =>
new Response(JSON.stringify({ success: true, data }));
// Global mock for apiClient - provides defaults for tests using renderWithProviders.
// Note: Individual test files must also call vi.mock() with their relative path.
vi.mock('../../services/apiClient', () => ({
// --- Provider Mocks (with default successful responses) ---
// These are essential for any test using renderWithProviders, as AppProviders
// will mount all these data providers.
fetchFlyers: vi.fn(() =>
Promise.resolve(new Response(JSON.stringify({ flyers: [], hasMore: false }))),
),
fetchMasterItems: vi.fn(() => Promise.resolve(new Response(JSON.stringify([])))),
fetchWatchedItems: vi.fn(() => Promise.resolve(new Response(JSON.stringify([])))),
fetchShoppingLists: vi.fn(() => Promise.resolve(new Response(JSON.stringify([])))),
getAuthenticatedUserProfile: vi.fn(() => Promise.resolve(new Response(JSON.stringify(null)))),
fetchCategories: vi.fn(() => Promise.resolve(new Response(JSON.stringify([])))), // For CorrectionsPage
fetchAllBrands: vi.fn(() => Promise.resolve(new Response(JSON.stringify([])))), // For AdminBrandManager
// All responses use the standard API format: { success: true, data: ... }
fetchFlyers: vi.fn(() => Promise.resolve(mockApiResponse([]))),
fetchMasterItems: vi.fn(() => Promise.resolve(mockApiResponse([]))),
fetchWatchedItems: vi.fn(() => Promise.resolve(mockApiResponse([]))),
fetchShoppingLists: vi.fn(() => Promise.resolve(mockApiResponse([]))),
getAuthenticatedUserProfile: vi.fn(() => Promise.resolve(mockApiResponse(null))),
fetchCategories: vi.fn(() => Promise.resolve(mockApiResponse([]))), // For CorrectionsPage
fetchAllBrands: vi.fn(() => Promise.resolve(mockApiResponse([]))), // For AdminBrandManager
// --- General Mocks (return empty vi.fn() by default) ---
// These functions are commonly used and can be implemented in specific tests.
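A sketch of what a test awaiting one of these mocks would see once they return the standard envelope (the category row here is hypothetical; the helper shape matches the diff above):

```typescript
// Same helper shape as in the diff: wrap data in { success: true, data }.
const mockApiResponse = <T>(data: T): Response =>
  new Response(JSON.stringify({ success: true, data }));

async function demo(): Promise<void> {
  // e.g. a per-test override of fetchCategories resolving with one row
  const res = mockApiResponse([{ category_id: 1, name: 'Produce' }]);
  const body = (await res.json()) as { success: boolean; data: Array<{ name: string }> };
  console.log(body.success, body.data[0].name);
}
demo();
```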

View File

@@ -36,19 +36,22 @@ if (typeof global.GeolocationPositionError === 'undefined') {
// Mock window.matchMedia, which is not implemented in JSDOM.
// This is necessary for components that check for the user's preferred color scheme.
Object.defineProperty(window, 'matchMedia', {
writable: true,
value: vi.fn().mockImplementation((query) => ({
matches: false,
media: query,
onchange: null,
addListener: vi.fn(), // deprecated
removeListener: vi.fn(), // deprecated
addEventListener: vi.fn(),
removeEventListener: vi.fn(),
dispatchEvent: vi.fn(),
})),
});
// Guard against node environment where window doesn't exist (integration tests).
if (typeof window !== 'undefined') {
Object.defineProperty(window, 'matchMedia', {
writable: true,
value: vi.fn().mockImplementation((query) => ({
matches: false,
media: query,
onchange: null,
addListener: vi.fn(), // deprecated
removeListener: vi.fn(), // deprecated
addEventListener: vi.fn(),
removeEventListener: vi.fn(),
dispatchEvent: vi.fn(),
})),
});
}
// --- Polyfill for File constructor and prototype ---
// The `File` object in JSDOM is incomplete. It lacks `arrayBuffer` and its constructor
@@ -334,12 +337,34 @@ vi.mock('../../services/aiApiClient', () => ({
vi.mock('@bull-board/express', () => ({
ExpressAdapter: class {
setBasePath() {}
setQueues() {} // Required by createBullBoard
setViewsPath() {} // Required by createBullBoard
setStaticPath() {} // Required by createBullBoard
setEntryRoute() {} // Required by createBullBoard
setErrorHandler() {} // Required by createBullBoard
setApiRoutes() {} // Required by createBullBoard
getRouter() {
return (req: Request, res: Response, next: NextFunction) => next();
}
},
}));
/**
* Mocks the @bull-board/api module.
* createBullBoard normally calls methods on the serverAdapter, but in tests
* we want to skip all of that initialization.
*/
vi.mock('@bull-board/api', () => ({
createBullBoard: vi.fn(() => ({
addQueue: vi.fn(),
removeQueue: vi.fn(),
setQueues: vi.fn(),
})),
BullMQAdapter: class {
constructor() {}
},
}));
/**
* Mocks the Sentry client.
* This prevents errors when tests import modules that depend on sentry.client.ts.

View File

@@ -20,12 +20,20 @@ describe('rateLimit utils', () => {
expect(shouldSkipRateLimit(req)).toBe(false);
});
it('should return false (do not skip) when NODE_ENV is "development"', async () => {
it('should return true (skip) when NODE_ENV is "development"', async () => {
vi.stubEnv('NODE_ENV', 'development');
const { shouldSkipRateLimit } = await import('./rateLimit');
const req = createMockRequest({ headers: {} });
expect(shouldSkipRateLimit(req)).toBe(false);
expect(shouldSkipRateLimit(req)).toBe(true);
});
it('should return true (skip) when NODE_ENV is "staging"', async () => {
vi.stubEnv('NODE_ENV', 'staging');
const { shouldSkipRateLimit } = await import('./rateLimit');
const req = createMockRequest({ headers: {} });
expect(shouldSkipRateLimit(req)).toBe(true);
});
it('should return true (skip) when NODE_ENV is "test" and header is missing', async () => {
@@ -55,5 +63,15 @@ describe('rateLimit utils', () => {
});
expect(shouldSkipRateLimit(req)).toBe(true);
});
it('should return false (do not skip) when NODE_ENV is "development" and header is "true"', async () => {
vi.stubEnv('NODE_ENV', 'development');
const { shouldSkipRateLimit } = await import('./rateLimit');
const req = createMockRequest({
headers: { 'x-test-rate-limit-enable': 'true' },
});
expect(shouldSkipRateLimit(req)).toBe(false);
});
});
});
});

View File

@@ -1,7 +1,10 @@
// src/utils/rateLimit.ts
import { Request } from 'express';
const isTestEnv = process.env.NODE_ENV === 'test';
const isTestEnv =
process.env.NODE_ENV === 'test' ||
process.env.NODE_ENV === 'development' ||
process.env.NODE_ENV === 'staging';
/**
* Helper to determine if rate limiting should be skipped.
@@ -10,4 +13,4 @@ const isTestEnv = process.env.NODE_ENV === 'test';
export const shouldSkipRateLimit = (req: Request) => {
if (!isTestEnv) return false;
return req.headers['x-test-rate-limit-enable'] !== 'true';
};
};
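The resulting behavior can be sketched standalone (with NODE_ENV passed as a parameter rather than read at module load, so all branches are visible in one place):

```typescript
// Standalone sketch of the updated logic: rate limiting is skipped in
// test/development/staging unless a request opts back in via header.
type MockReq = { headers: Record<string, string | undefined> };

const isSkippableEnv = (env: string | undefined): boolean =>
  env === 'test' || env === 'development' || env === 'staging';

const shouldSkipRateLimit = (env: string | undefined, req: MockReq): boolean => {
  if (!isSkippableEnv(env)) return false;
  // The opt-in header re-enables rate limiting for a single request.
  return req.headers['x-test-rate-limit-enable'] !== 'true';
};

console.log(shouldSkipRateLimit('production', { headers: {} })); // false
console.log(shouldSkipRateLimit('development', { headers: {} })); // true
console.log(
  shouldSkipRateLimit('staging', { headers: { 'x-test-rate-limit-enable': 'true' } }),
); // false
```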

View File

@@ -4,15 +4,16 @@ import { z } from 'zod';
/**
* A Zod schema for a required, non-empty string.
* @param message The error message to display if the string is empty or missing.
* @param maxLength Optional maximum length (defaults to 255).
* @returns A Zod string schema.
*/
export const requiredString = (message: string) =>
export const requiredString = (message: string, maxLength = 255) =>
z.preprocess(
// If the value is null or undefined, preprocess it to an empty string.
// This ensures that the subsequent `.min(1)` check will catch missing required fields.
(val) => val ?? '',
// Now, validate that the (potentially preprocessed) value is a string that, after trimming, has at least 1 character.
z.string().trim().min(1, message),
z.string().trim().min(1, message).max(maxLength, `Must be ${maxLength} characters or less.`),
);
/**
@@ -76,7 +77,7 @@ export const optionalNumeric = (
// the .optional() and .default() logic for null inputs. We want null to be
// treated as "not provided", just like undefined.
const schema = z.preprocess((val) => (val === null ? undefined : val), optionalNumberSchema);
if (options.default !== undefined) return schema.default(options.default);
return schema;
@@ -89,7 +90,6 @@ export const optionalNumeric = (
*/
export const optionalDate = (message?: string) => z.string().date(message).optional();
/**
* Creates a Zod schema for an optional boolean query parameter that is coerced from a string.
* Handles 'true', '1' as true and 'false', '0' as false.

File diff suppressed because it is too large

View File

@@ -1,7 +1,22 @@
// vite.config.ts
import path from 'path';
// import fs from 'fs'; // Unused when nginx handles SSL
import { defineConfig } from 'vitest/config';
import react from '@vitejs/plugin-react';
import { sentryVitePlugin } from '@sentry/vite-plugin';
// HTTPS configuration for local development (optional, disabled when using nginx proxy)
// const httpsConfig = (() => {
// const keyPath = '/app/certs/localhost.key';
// const certPath = '/app/certs/localhost.crt';
// if (fs.existsSync(keyPath) && fs.existsSync(certPath)) {
// return {
// key: fs.readFileSync(keyPath),
// cert: fs.readFileSync(certPath),
// };
// }
// return undefined;
// })();
// Ensure NODE_ENV is set to 'test' for all Vitest runs.
process.env.NODE_ENV = 'test';
@@ -10,6 +25,13 @@ process.on('unhandledRejection', (reason, promise) => {
console.error('Unhandled Rejection at:', promise, 'reason:', reason);
});
/**
* Determines if we should enable Sentry source map uploads.
* Only enabled during production builds with the required environment variables.
*/
const shouldUploadSourceMaps =
process.env.VITE_SENTRY_DSN && process.env.SENTRY_AUTH_TOKEN && process.env.NODE_ENV !== 'test';
/**
* This is the main configuration file for Vite and the Vitest 'unit' test project.
* When running `vitest`, it is orchestrated by `vitest.workspace.ts`, which
@@ -18,10 +40,42 @@ process.on('unhandledRejection', (reason, promise) => {
export default defineConfig({
// Vite-specific configuration for the dev server, build, etc.
// This is inherited by all Vitest projects.
plugins: [react()],
build: {
// Generate source maps for production builds (hidden = not referenced in built files)
// The Sentry plugin will upload them and then delete them
sourcemap: shouldUploadSourceMaps ? 'hidden' : false,
},
plugins: [
react(),
// Conditionally add Sentry plugin for production builds with source map upload
...(shouldUploadSourceMaps
? [
sentryVitePlugin({
// URL of the Bugsink instance (Sentry-compatible)
url: process.env.SENTRY_URL,
// Org and project are required by the API but Bugsink ignores them
org: 'flyer-crawler',
project: 'flyer-crawler-frontend',
// Auth token from environment variable
authToken: process.env.SENTRY_AUTH_TOKEN,
sourcemaps: {
// Delete source maps after upload to prevent public exposure
filesToDeleteAfterUpload: ['./dist/**/*.map'],
},
// Disable telemetry to Sentry
telemetry: false,
}),
]
: []),
],
server: {
port: 3000,
port: 5173, // Internal port, nginx proxies on 3000
host: '0.0.0.0',
// https: httpsConfig, // Disabled - nginx handles SSL
},
resolve: {
alias: {