Compare commits

...

317 Commits

Author SHA1 Message Date
Gitea Actions
7e460a11e4 ci: Bump version to 0.12.7 [skip ci] 2026-01-23 00:24:43 +05:00
eae0dbaa8e bugsink mcp and claude subagents - documentation and test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m11s
2026-01-22 11:23:45 -08:00
fac98f4c54 doc updates and test fixin 2026-01-22 11:23:43 -08:00
9f7b821760 bugsink mcp and claude subagents - documentation and test fixes 2026-01-22 11:23:42 -08:00
cd60178450 bugsink mcp and claude subagents 2026-01-22 11:23:40 -08:00
Gitea Actions
1fcb9fd5c7 ci: Bump version to 0.12.6 [skip ci] 2026-01-22 03:41:25 +05:00
8bd4e081ea e2e fixin, frontend + home page work
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m0s
2026-01-21 14:40:19 -08:00
Gitea Actions
6e13570deb ci: Bump version to 0.12.5 [skip ci] 2026-01-22 01:36:01 +05:00
2eba66fb71 make e2e actually e2e - sigh
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m9s
2026-01-21 12:34:46 -08:00
Gitea Actions
10cdd78e22 ci: Bump version to 0.12.4 [skip ci] 2026-01-22 00:47:30 +05:00
521943bec0 make e2e actually e2e - sigh
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 18m55s
2026-01-21 11:43:39 -08:00
Gitea Actions
810c0eb61b ci: Bump version to 0.12.3 [skip ci] 2026-01-21 23:08:48 +05:00
3314063e25 migration from react-joyride to driver.js:
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 18m52s
2026-01-21 10:07:38 -08:00
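For reference, the driver.js API this migration targets looks roughly like the following (a sketch against the public driver.js v1 API; the selectors and step copy are hypothetical, not taken from this repo):

import { driver } from 'driver.js';
import 'driver.js/dist/driver.css';

// Define the tour once; each step targets a CSS selector.
const tour = driver({
  showProgress: true,
  steps: [
    { element: '#flyer-list', popover: { title: 'Flyers', description: 'Browse the latest flyers here.' } },
    { element: '#watched-items', popover: { title: 'Watched items', description: 'Track prices on items you care about.' } },
  ],
});

// Start the tour, e.g. from an onboarding button.
tour.drive();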
Gitea Actions
65c38765c6 ci: Bump version to 0.12.2 [skip ci] 2026-01-21 22:44:29 +05:00
4ddd9bb220 unit test fix
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 15m59s
2026-01-21 09:41:07 -08:00
Gitea Actions
0b80b01ebf ci: Bump version to 0.12.1 [skip ci] 2026-01-21 22:15:55 +05:00
05860b52f6 fix deploy
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 15m40s
2026-01-21 09:13:51 -08:00
4e5d709973 more fixin logging, UI update #1, source maps fix
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 12s
2026-01-21 03:27:44 -08:00
Gitea Actions
eaf229f252 ci: Bump version to 0.12.0 for production release [skip ci] 2026-01-21 02:19:44 +05:00
Gitea Actions
e16ff809e3 ci: Bump version to 0.11.20 [skip ci] 2026-01-21 00:29:59 +05:00
f9fba3334f minor test fix
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m26s
2026-01-20 11:29:06 -08:00
Gitea Actions
2379f3a878 ci: Bump version to 0.11.19 [skip ci] 2026-01-20 23:40:50 +05:00
0232b9de7a Enhance logging and error handling in PostgreSQL functions; update API endpoints in E2E tests; add Logstash troubleshooting documentation
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 18m25s
- Added tiered logging and error handling in various PostgreSQL functions to improve observability and error tracking.
- Updated E2E tests to reflect changes in API endpoints for fetching best watched prices.
- Introduced a comprehensive troubleshooting runbook for Logstash to assist in diagnosing common issues in the PostgreSQL observability pipeline.
2026-01-20 10:39:33 -08:00
Gitea Actions
2e98bc3fc7 ci: Bump version to 0.11.18 [skip ci] 2026-01-20 14:18:32 +05:00
ec2f143218 logging postgres + test fixin
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m18s
2026-01-20 01:16:27 -08:00
Gitea Actions
f3e233bf38 ci: Bump version to 0.11.17 [skip ci] 2026-01-20 10:30:14 +05:00
1696aeb54f minor fixin
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 18m42s
2026-01-19 21:28:44 -08:00
Gitea Actions
e45804776d ci: Bump version to 0.11.16 [skip ci] 2026-01-20 08:14:50 +05:00
5879328b67 fixing categories 3rd normal form
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 18m34s
2026-01-19 19:13:30 -08:00
Gitea Actions
4618d11849 ci: Bump version to 0.11.15 [skip ci] 2026-01-20 02:49:48 +05:00
4022768c03 set up local e2e tests, and some e2e test fixes + docs on more db fixin - ugh
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 18m39s
2026-01-19 13:45:21 -08:00
Gitea Actions
7fc57b4b10 ci: Bump version to 0.11.14 [skip ci] 2026-01-20 01:18:38 +05:00
99f5d52d17 more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 18m34s
2026-01-19 12:13:04 -08:00
Gitea Actions
e22b5ec02d ci: Bump version to 0.11.13 [skip ci] 2026-01-19 23:54:59 +05:00
cf476e7afc ADR-022 - websocket notificaitons - also more test fixes with stores
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 18m47s
2026-01-19 10:53:42 -08:00
Gitea Actions
7b7a8d0f35 ci: Bump version to 0.11.12 [skip ci] 2026-01-19 13:35:47 +05:00
795b3d0b28 massive fixes to stores and addresses
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 18m46s
2026-01-19 00:34:11 -08:00
d2efca8339 massive fixes to stores and addresses 2026-01-19 00:33:09 -08:00
Gitea Actions
c579f141f8 ci: Bump version to 0.11.11 [skip ci] 2026-01-19 09:27:16 +05:00
9cb03c1ede more e2e from the AI
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m42s
2026-01-18 20:26:21 -08:00
Gitea Actions
c14bef4448 ci: Bump version to 0.11.10 [skip ci] 2026-01-19 07:43:17 +05:00
7c0e5450db latest batch of fixes after frontend testing - almost done?
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m29s
2026-01-18 18:42:32 -08:00
Gitea Actions
8e85493872 ci: Bump version to 0.11.9 [skip ci] 2026-01-19 07:28:39 +05:00
327d3d4fbc latest batch of fixes after frontend testing - almost done?
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m7s
2026-01-18 18:25:31 -08:00
Gitea Actions
bdb2e274cc ci: Bump version to 0.11.8 [skip ci] 2026-01-19 05:28:15 +05:00
cd46f1d4c2 integration test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m38s
2026-01-18 16:23:34 -08:00
Gitea Actions
6da4b5e9d0 ci: Bump version to 0.11.7 [skip ci] 2026-01-19 03:28:57 +05:00
941626004e test fixes to align with latest tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m51s
2026-01-18 14:27:20 -08:00
Gitea Actions
67cfe39249 ci: Bump version to 0.11.6 [skip ci] 2026-01-19 03:00:22 +05:00
c24103d9a0 frontend direct testing result and fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m42s
2026-01-18 13:57:47 -08:00
Gitea Actions
3e85f839fe ci: Bump version to 0.11.5 [skip ci] 2026-01-18 15:57:52 +05:00
63a0dde0f8 fix unit tests after frontend tests ran
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m21s
2026-01-18 02:56:25 -08:00
Gitea Actions
94f45d9726 ci: Bump version to 0.11.4 [skip ci] 2026-01-18 14:36:55 +05:00
136a9ce3f3 Add ADR-054 for Bugsink to Gitea issue synchronization and frontend testing summary for 2026-01-18
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m3s
- Introduced ADR-054 detailing the implementation of an automated sync worker to create Gitea issues from unresolved Bugsink errors.
- Documented architecture, queue configuration, Redis schema, and implementation phases for the sync feature.
- Added frontend testing summary for 2026-01-18, covering multiple sessions of API testing, fixes applied, and Bugsink error tracking status.
- Included detailed API reference and common validation errors encountered during testing.
2026-01-18 01:35:00 -08:00
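A minimal sketch of the sync worker ADR-054 describes is below. Gitea's POST /api/v1/repos/{owner}/{repo}/issues endpoint is real; the queue name, job shape, and GITEA_TOKEN variable are assumptions, not the repo's actual configuration.

import { Worker } from 'bullmq';
import IORedis from 'ioredis';

// Hypothetical job payload: one unresolved Bugsink error to mirror into Gitea.
interface BugsinkIssueJob {
  title: string;
  permalink: string;
}

const connection = new IORedis(process.env.REDIS_URL ?? 'redis://localhost:6379', {
  maxRetriesPerRequest: null, // BullMQ workers require this setting
});

const worker = new Worker<BugsinkIssueJob>(
  'bugsink-sync', // queue name is an assumption
  async (job) => {
    const res = await fetch(
      'https://gitea.projectium.com/api/v1/repos/torbo/flyer-crawler.projectium.com/issues',
      {
        method: 'POST',
        headers: {
          Authorization: `token ${process.env.GITEA_TOKEN ?? ''}`, // hypothetical env var
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          title: `[bugsink] ${job.data.title}`,
          body: job.data.permalink,
        }),
      },
    );
    if (!res.ok) throw new Error(`Gitea responded ${res.status}`);
  },
  { connection },
);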
Gitea Actions
e65151c3df ci: Bump version to 0.11.3 [skip ci] 2026-01-18 10:49:14 +05:00
3d91d59b9c refactor: update API response handling across multiple queries to ensure compliance with ADR-028
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m53s
- Removed direct return of json.data in favor of structured error handling.
- Implemented checks for success and data array in useActivityLogQuery, useBestSalePricesQuery, useBrandsQuery, useCategoriesQuery, useFlyerItemsForFlyersQuery, useFlyerItemsQuery, useFlyersQuery, useLeaderboardQuery, useMasterItemsQuery, usePriceHistoryQuery, useShoppingListsQuery, useSuggestedCorrectionsQuery, and useWatchedItemsQuery.
- Updated unit tests to reflect changes in expected behavior when API response does not conform to the expected structure.
- Updated package.json to use the latest version of @sentry/vite-plugin.
- Adjusted vite.config.ts for local development SSL configuration.
- Added self-signed SSL certificate and key for local development.
2026-01-17 21:45:51 -08:00
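The pattern this commit describes, sketched for one of the hooks it names (useBrandsQuery comes from the commit body; the endpoint path is an assumption, and the envelope check follows the { success, data } wrapper described at commit 2f1d73ca12 further down):

import { useQuery } from '@tanstack/react-query';

// ADR-028 style: validate the { success, data } envelope instead of
// returning json.data blindly, and throw when the shape is wrong.
export function useBrandsQuery() {
  return useQuery({
    queryKey: ['brands'],
    queryFn: async () => {
      const res = await fetch('/api/brands'); // endpoint path is an assumption
      const json = await res.json();
      if (!json.success || !Array.isArray(json.data)) {
        throw new Error('API response did not match the expected { success, data } structure');
      }
      return json.data;
    },
  });
}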
Gitea Actions
822d6d1c3c ci: Bump version to 0.11.2 [skip ci] 2026-01-18 06:50:06 +05:00
a24e28f52f update node packages
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m32s
2026-01-17 17:49:09 -08:00
8dbfa62768 add missing plugin
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 11s
2026-01-17 17:36:25 -08:00
Gitea Actions
da4e0c9136 ci: Bump version to 0.11.1 [skip ci] 2026-01-18 06:25:46 +05:00
dd3cbeb65d fix unit tests from using response
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m55s
2026-01-17 17:24:05 -08:00
e6d383103c feat: add Sentry source map upload configuration and update environment variables 2026-01-17 17:07:50 -08:00
Gitea Actions
a14816c8ee ci: Bump version to 0.11.0 for production release [skip ci] 2026-01-18 05:02:54 +05:00
Gitea Actions
08b220e29c ci: Bump version to 0.10.0 for production release [skip ci] 2026-01-18 04:50:17 +05:00
Gitea Actions
d41a3f1887 ci: Bump version to 0.9.115 [skip ci] 2026-01-18 04:10:18 +05:00
1f6cdc62d7 still fixin test
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m20s
2026-01-17 15:09:17 -08:00
Gitea Actions
978c63bacd ci: Bump version to 0.9.114 [skip ci] 2026-01-18 04:00:21 +05:00
544eb7ae3c still fixin test
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 2m1s
2026-01-17 14:59:01 -08:00
Gitea Actions
f6839f6e14 ci: Bump version to 0.9.113 [skip ci] 2026-01-18 03:35:25 +05:00
3fac29436a still fixin test
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 2m6s
2026-01-17 14:34:18 -08:00
Gitea Actions
56f45c9301 ci: Bump version to 0.9.112 [skip ci] 2026-01-18 03:19:53 +05:00
83460abce4 md fixin
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m57s
2026-01-17 14:18:55 -08:00
Gitea Actions
1b084b2ba4 ci: Bump version to 0.9.111 [skip ci] 2026-01-18 02:56:20 +05:00
0ea034bdc8 push
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m54s
2026-01-17 13:55:22 -08:00
Gitea Actions
fc9e27078a ci: Bump version to 0.9.110 [skip ci] 2026-01-18 02:41:36 +05:00
fb8cbe8007 update mcp and created new test user and reset passes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m56s
2026-01-17 13:40:31 -08:00
f49f786c23 fix: Add .env file loading to ecosystem-test.config.cjs
Allows test environment PM2 processes to load environment variables
from /var/www/flyer-crawler-test.projectium.com/.env file, enabling
manual restarts without requiring CI/CD to inject variables.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 13:38:15 -08:00
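The fix plausibly amounts to something like this at the top of ecosystem-test.config.cjs (a sketch assuming the dotenv package; only the .env path comes from the commit message, the rest is illustrative):

// ecosystem-test.config.cjs (sketch)
// Load the test host's .env so PM2 processes restarted manually
// still see their environment variables without CI/CD injection.
require('dotenv').config({
  path: '/var/www/flyer-crawler-test.projectium.com/.env',
});

module.exports = {
  apps: [
    {
      name: 'flyer-crawler-api-test', // process name is a placeholder
      script: 'dist/server.js', // entry point is a placeholder
      env: { NODE_ENV: 'test' },
    },
  ],
};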
Gitea Actions
dd31141d4e ci: Bump version to 0.9.109 [skip ci] 2026-01-13 23:09:47 +05:00
8073094760 testing/staging fixin
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m15s
2026-01-13 10:08:28 -08:00
Gitea Actions
33a1e146ab ci: Bump version to 0.9.108 [skip ci] 2026-01-13 22:34:20 +05:00
4f8216db77 testing/staging fixin
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m55s
2026-01-13 09:33:38 -08:00
Gitea Actions
42d605d19f ci: Bump version to 0.9.107 [skip ci] 2026-01-13 22:06:39 +05:00
749350df7f testing/staging fixin
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m56s
2026-01-13 09:03:42 -08:00
Gitea Actions
ac085100fe ci: Bump version to 0.9.106 [skip ci] 2026-01-13 21:43:43 +05:00
ce4ecd1268 use port 3002 in test
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m13s
2026-01-13 08:42:34 -08:00
Gitea Actions
a57cfc396b ci: Bump version to 0.9.105 [skip ci] 2026-01-13 21:00:45 +05:00
987badbf8d use port 3002 in test
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m41s
2026-01-13 07:59:49 -08:00
Gitea Actions
d38fcd21c1 ci: Bump version to 0.9.104 [skip ci] 2026-01-13 08:11:38 +05:00
6e36cc3b07 logging + e2e test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m34s
2026-01-12 19:10:29 -08:00
Gitea Actions
62a8a8bf4b ci: Bump version to 0.9.103 [skip ci] 2026-01-13 06:39:39 +05:00
96038cfcf4 logging work - almost there
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m51s
2026-01-12 17:38:58 -08:00
Gitea Actions
981214fdd0 ci: Bump version to 0.9.102 [skip ci] 2026-01-13 06:27:55 +05:00
92b0138108 logging work - almost there
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m2s
2026-01-12 17:26:59 -08:00
Gitea Actions
27f0255240 ci: Bump version to 0.9.101 [skip ci] 2026-01-13 05:57:55 +05:00
4e06dde9e1 logging work - almost there
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m30s
2026-01-12 16:57:18 -08:00
Gitea Actions
b9a0e5b82c ci: Bump version to 0.9.100 [skip ci] 2026-01-13 05:35:11 +05:00
bb7fe8dc2c logging work - almost there
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m28s
2026-01-12 16:34:18 -08:00
Gitea Actions
81f1f2250b ci: Bump version to 0.9.99 [skip ci] 2026-01-13 05:08:56 +05:00
c6c90bb615 more new feature fixes + sentry logging
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m53s
2026-01-12 16:08:18 -08:00
Gitea Actions
60489a626b ci: Bump version to 0.9.98 [skip ci] 2026-01-13 05:05:59 +05:00
3c63e1ecbb more new feature fixes + sentry logging
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2026-01-12 16:04:09 -08:00
Gitea Actions
acbcb39cbe ci: Bump version to 0.9.97 [skip ci] 2026-01-13 03:34:42 +05:00
a87a0b6af1 unit test repairs
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m12s
2026-01-12 14:31:41 -08:00
Gitea Actions
abdc3cb6db ci: Bump version to 0.9.96 [skip ci] 2026-01-13 00:52:54 +05:00
7a1bd50119 unit test repairs
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m42s
2026-01-12 11:51:48 -08:00
Gitea Actions
87d75d0571 ci: Bump version to 0.9.95 [skip ci] 2026-01-13 00:04:10 +05:00
faf2900c28 unit test repairs
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m43s
2026-01-12 10:58:00 -08:00
Gitea Actions
5258efc179 ci: Bump version to 0.9.94 [skip ci] 2026-01-12 21:11:57 +05:00
2a5cc5bb51 unit test repairs
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m17s
2026-01-12 08:10:37 -08:00
Gitea Actions
8eaee2844f ci: Bump version to 0.9.93 [skip ci] 2026-01-12 08:57:24 +05:00
440a19c3a7 whoa - so much - new features (UPC,etc) - Sentry for app logging! so much more !
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 14m53s
2026-01-11 19:55:10 -08:00
4ae6d84240 sql fix
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2026-01-11 19:49:13 -08:00
Gitea Actions
5870e5c614 ci: Bump version to 0.9.92 [skip ci] 2026-01-12 08:20:09 +05:00
2e7ebbd9ed whoa - so much - new features (UPC,etc) - Sentry for app logging! so much more !
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 14m47s
2026-01-11 19:18:52 -08:00
Gitea Actions
dc3fa21359 ci: Bump version to 0.9.91 [skip ci] 2026-01-12 08:08:50 +05:00
11aeac5edd whoa - so much - new features (UPC,etc) - Sentry for app logging! so much more !
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m10s
2026-01-11 19:07:02 -08:00
Gitea Actions
f6c0c082bc ci: Bump version to 0.9.90 [skip ci] 2026-01-11 15:05:48 +05:00
4e22213cd1 all the new shiny things
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m54s
2026-01-11 02:04:52 -08:00
Gitea Actions
9815eb3686 ci: Bump version to 0.9.89 [skip ci] 2026-01-11 12:58:20 +05:00
2bf4a7c1e6 google + github oauth
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m39s
2026-01-10 23:57:18 -08:00
Gitea Actions
5eed3f51f4 ci: Bump version to 0.9.88 [skip ci] 2026-01-11 12:01:25 +05:00
d250932c05 all tests fixed? can it be?
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m28s
2026-01-10 22:58:38 -08:00
Gitea Actions
7d1f964574 ci: Bump version to 0.9.87 [skip ci] 2026-01-11 08:30:29 +05:00
3b69e58de3 remove useless windows testing files, fix testing?
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 14m1s
2026-01-10 19:29:54 -08:00
Gitea Actions
5211aadd22 ci: Bump version to 0.9.86 [skip ci] 2026-01-11 08:05:21 +05:00
a997d1d0b0 ranstack query fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m21s
2026-01-10 19:03:40 -08:00
cf5f77c58e Adopt TanStack Query fixes 2026-01-10 19:02:42 -08:00
Gitea Actions
cf0f5bb820 ci: Bump version to 0.9.85 [skip ci] 2026-01-11 06:44:28 +05:00
503e7084da Adopt TanStack Query fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 14m41s
2026-01-10 17:42:45 -08:00
Gitea Actions
d8aa19ac40 ci: Bump version to 0.9.84 [skip ci] 2026-01-10 23:45:42 +05:00
dcd9452b8c Adopt TanStack Query
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m46s
2026-01-10 10:45:10 -08:00
Gitea Actions
6d468544e2 ci: Bump version to 0.9.83 [skip ci] 2026-01-10 23:14:18 +05:00
2913c7aa09 tanstack
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m1s
2026-01-10 03:20:40 -08:00
Gitea Actions
77f9cb6081 ci: Bump version to 0.9.82 [skip ci] 2026-01-10 12:17:24 +05:00
2f1d73ca12 fix(tests): access wrapped API response data correctly
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 3h0m5s
Tests were accessing response.body directly instead of response.body.data,
causing failures since sendSuccess() wraps responses in { success, data }.
2026-01-09 23:16:30 -08:00
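Concretely (a sketch; sendSuccess and the { success, data } wrapper come from the commit message, while the helper body and assertions are illustrative):

import type { Response } from 'express';

// Envelope helper implied by the commit message (sketch).
function sendSuccess<T>(res: Response, data: T): void {
  res.json({ success: true, data });
}

// Tests therefore unwrap the envelope instead of reading the body directly:
//   before: expect(response.body).toHaveLength(3)      // fails
//   after:  expect(response.body.data).toHaveLength(3) // passes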
Gitea Actions
402e2617ca ci: Bump version to 0.9.81 [skip ci] 2026-01-10 11:40:07 +05:00
e14c19c112 linting docs + some fixes go claude and gemini
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m0s
2026-01-09 22:38:57 -08:00
Gitea Actions
ea46f66c7a ci: Bump version to 0.9.80 [skip ci] 2026-01-10 11:00:30 +05:00
a42ee5a461 unit tests - wheeee! Claude is the mvp
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m11s
2026-01-09 21:59:09 -08:00
Gitea Actions
71710c8316 ci: Bump version to 0.9.79 [skip ci] 2026-01-10 09:32:36 +05:00
1480a73ab0 more compliance
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 58s
2026-01-09 20:30:52 -08:00
Gitea Actions
b3efa3c756 ci: Bump version to 0.9.78 [skip ci] 2026-01-10 08:01:56 +05:00
fb8fd57bb6 huge linting fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m3s
2026-01-09 19:01:05 -08:00
Gitea Actions
0a90d9d590 ci: Bump version to 0.9.77 [skip ci] 2026-01-10 07:54:20 +05:00
6ab473f5f0 huge linting fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 58s
2026-01-09 18:50:04 -08:00
Gitea Actions
c46efe1474 ci: Bump version to 0.9.76 [skip ci] 2026-01-10 06:59:56 +05:00
25d6b76f6d ADR-026: Client-Side Logging + linting fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2026-01-09 17:58:21 -08:00
Gitea Actions
9ffcc9d65d ci: Bump version to 0.9.75 [skip ci] 2026-01-10 03:25:25 +05:00
1285702210 adr-028 fixes for tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m38s
2026-01-09 14:24:20 -08:00
Gitea Actions
d38b751b40 ci: Bump version to 0.9.74 [skip ci] 2026-01-10 03:14:12 +05:00
e122d55ced adr-028 fixes for tests
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m1s
2026-01-09 14:12:48 -08:00
Gitea Actions
af9992f773 ci: Bump version to 0.9.73 [skip ci] 2026-01-10 01:54:56 +05:00
3912139273 adr-028 and int tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m24s
2026-01-09 12:47:41 -08:00
b5f7f5e4d1 adr-0028 and int test fixes 2026-01-09 12:35:55 -08:00
Gitea Actions
5173059621 ci: Bump version to 0.9.72 [skip ci] 2026-01-10 00:46:09 +05:00
ebceb0e2e3 just work
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 20m51s
2026-01-09 11:45:03 -08:00
e75054b1ab ADR work, dockerfile work, integration test fixes 2026-01-09 11:45:00 -08:00
Gitea Actions
639313485a ci: Bump version to 0.9.71 [skip ci] 2026-01-09 19:00:01 +05:00
4a04e478c4 integration test fixes - claude for the win? try 4 - i have a good feeling
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 16m58s
2026-01-09 05:56:19 -08:00
Gitea Actions
1814469eb4 ci: Bump version to 0.9.70 [skip ci] 2026-01-09 18:19:13 +05:00
b777430ff7 integration test fixes - claude for the win? try 4 - i have a good feeling
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2026-01-09 05:18:19 -08:00
Gitea Actions
23830c0d4e ci: Bump version to 0.9.69 [skip ci] 2026-01-09 17:24:00 +05:00
ef42fee982 integration test fixes - claude for the win? try 3
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 32m3s
2026-01-09 04:23:23 -08:00
Gitea Actions
65cb54500c ci: Bump version to 0.9.68 [skip ci] 2026-01-09 16:42:51 +05:00
664ad291be integration test fixes - claude for the win? try 3
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 30m3s
2026-01-09 03:41:57 -08:00
Gitea Actions
ff912b9055 ci: Bump version to 0.9.67 [skip ci] 2026-01-09 15:32:50 +05:00
ec32027bd4 integration test fixes - claude for the win?
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 33m43s
2026-01-09 02:32:16 -08:00
Gitea Actions
59f773639b ci: Bump version to 0.9.66 [skip ci] 2026-01-09 15:27:50 +05:00
dd2be5eecf integration test fixes - claude for the win?
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m0s
2026-01-09 02:27:14 -08:00
Gitea Actions
a94bfbd3e9 ci: Bump version to 0.9.65 [skip ci] 2026-01-09 14:43:36 +05:00
338bbc9440 integration test fixes - claude for the win?
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 20m4s
2026-01-09 01:42:51 -08:00
Gitea Actions
60aad04642 ci: Bump version to 0.9.64 [skip ci] 2026-01-09 13:57:52 +05:00
7f2aff9a24 unit test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 23m39s
2026-01-09 00:57:12 -08:00
Gitea Actions
689320e7d2 ci: Bump version to 0.9.63 [skip ci] 2026-01-09 13:19:09 +05:00
e457bbf046 more req work
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 26m51s
2026-01-09 00:18:09 -08:00
68cdbb6066 progress enforcing adr-0005 2026-01-09 00:18:09 -08:00
Gitea Actions
cea6be7145 ci: Bump version to 0.9.62 [skip ci] 2026-01-09 11:31:00 +05:00
74a5ca6331 claude 1 - fixes : -/
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 24m33s
2026-01-08 22:30:21 -08:00
Gitea Actions
62470e7661 ci: Bump version to 0.9.61 [skip ci] 2026-01-09 10:50:57 +05:00
2b517683fd progress enforcing adr-0005
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m2s
2026-01-08 21:50:21 -08:00
Gitea Actions
5d06d1ba09 ci: Bump version to 0.9.60 [skip ci] 2026-01-09 10:41:14 +05:00
46c1e56b14 progress enforcing adr-0005
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 46s
2026-01-08 21:40:20 -08:00
Gitea Actions
78a9b80010 ci: Bump version to 0.9.59 [skip ci] 2026-01-08 20:48:22 +05:00
d356d9dfb6 claude 1
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 43s
2026-01-08 07:47:29 -08:00
Gitea Actions
ab63f83f50 ci: Bump version to 0.9.58 [skip ci] 2026-01-08 05:23:21 +05:00
b546a55eaf fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 32m3s
2026-01-07 16:22:48 -08:00
Gitea Actions
dfa53a93dd ci: Bump version to 0.9.57 [skip ci] 2026-01-08 04:39:12 +05:00
f30464cd0e fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m1s
2026-01-07 15:38:14 -08:00
Gitea Actions
2d2fa3c2c8 ci: Bump version to 0.9.56 [skip ci] 2026-01-08 00:40:29 +05:00
58cb391f4b fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 21m36s
2026-01-07 11:39:35 -08:00
Gitea Actions
0ebe2f0806 ci: Bump version to 0.9.55 [skip ci] 2026-01-07 14:43:38 +05:00
7867abc5bc fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 24m23s
2026-01-07 01:42:43 -08:00
Gitea Actions
cc4c8e2839 ci: Bump version to 0.9.54 [skip ci] 2026-01-07 10:49:08 +05:00
33ee2eeac9 switch to instantiating the pm2 worker in the testing threads
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 26m44s
2026-01-06 21:48:35 -08:00
Gitea Actions
e0b13f26fb ci: Bump version to 0.9.53 [skip ci] 2026-01-07 09:57:37 +05:00
eee7f36756 switch to instantiating the pm2 worker in the testing threads
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 30m53s
2026-01-06 20:56:39 -08:00
Gitea Actions
622c919733 ci: Bump version to 0.9.52 [skip ci] 2026-01-07 08:26:14 +05:00
c7f6b6369a fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 30m27s
2026-01-06 19:25:25 -08:00
Gitea Actions
879d956003 ci: Bump version to 0.9.51 [skip ci] 2026-01-07 07:11:22 +05:00
27eaac7ea8 fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 27m15s
2026-01-06 18:10:47 -08:00
Gitea Actions
93618c57e5 ci: Bump version to 0.9.50 [skip ci] 2026-01-07 06:41:16 +05:00
7f043ef704 fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 29m45s
2026-01-06 17:40:20 -08:00
Gitea Actions
62e35deddc ci: Bump version to 0.9.49 [skip ci] 2026-01-07 02:54:13 +05:00
59f6f43d03 fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 32m36s
2026-01-06 13:53:00 -08:00
Gitea Actions
e675c1a73c ci: Bump version to 0.9.48 [skip ci] 2026-01-07 01:35:26 +05:00
3c19084a0a fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 30m17s
2026-01-06 12:34:18 -08:00
Gitea Actions
e2049c6b9f ci: Bump version to 0.9.47 [skip ci] 2026-01-06 23:34:29 +05:00
a3839c2f0d debugging the flyer integration issue
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 33m13s
2026-01-06 10:33:51 -08:00
Gitea Actions
c1df3d7b1b ci: Bump version to 0.9.46 [skip ci] 2026-01-06 22:39:47 +05:00
94782f030d debugging the flyer integration issue
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 25m42s
2026-01-06 09:38:14 -08:00
Gitea Actions
1c25b79251 ci: Bump version to 0.9.45 [skip ci] 2026-01-06 14:34:44 +05:00
0b0fa8294d debugging the flyer integration issue
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 27m54s
2026-01-06 01:33:48 -08:00
Gitea Actions
f49f3a75fb ci: Bump version to 0.9.44 [skip ci] 2026-01-06 13:41:43 +05:00
8f14044ae6 debugging the flyer integration issue
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 26m27s
2026-01-06 00:41:03 -08:00
Gitea Actions
55e1e425f4 ci: Bump version to 0.9.43 [skip ci] 2026-01-06 12:56:47 +05:00
68b16ad2e8 fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 25m2s
2026-01-05 23:53:54 -08:00
Gitea Actions
6a28934692 ci: Bump version to 0.9.42 [skip ci] 2026-01-06 12:25:08 +05:00
78c4a5fee6 fix the dang integration tests
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2026-01-05 23:20:56 -08:00
Gitea Actions
1ce5f481a8 ci: Bump version to 0.9.41 [skip ci] 2026-01-06 11:39:28 +05:00
Gitea Actions
e0120d38fd ci: Bump version to 0.9.39 [skip ci] 2026-01-06 11:39:27 +05:00
6b2079ef2c fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 32m44s
2026-01-05 22:38:21 -08:00
Gitea Actions
0478e176d5 ci: Bump version to 0.9.38 [skip ci] 2026-01-06 10:23:22 +05:00
47f7f97cd9 fuck database contraints - seems buggy
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 32m10s
2026-01-05 21:16:08 -08:00
Gitea Actions
b0719d1e39 ci: Bump version to 0.9.37 [skip ci] 2026-01-06 10:11:19 +05:00
0039ac3752 fuck database contraints - seems buggy
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 37s
2026-01-05 21:08:16 -08:00
Gitea Actions
3c8316f4f7 ci: Bump version to 0.9.36 [skip ci] 2026-01-06 09:03:20 +05:00
2564df1c64 get rid of localhost in tests - not a qualified URL - we'll see
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 33m19s
2026-01-05 20:02:44 -08:00
Gitea Actions
696c547238 ci: Bump version to 0.9.35 [skip ci] 2026-01-06 08:11:42 +05:00
38165bdb9a get rid of localhost in tests - not a qualified URL - we'll see
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 26m14s
2026-01-05 19:10:46 -08:00
Gitea Actions
6139dca072 ci: Bump version to 0.9.34 [skip ci] 2026-01-06 06:33:46 +05:00
68bfaa50e6 more baseurl work - hopefully that does it for now
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 26m5s
2026-01-05 17:33:00 -08:00
Gitea Actions
9c42621f74 ci: Bump version to 0.9.33 [skip ci] 2026-01-06 04:34:48 +05:00
1b98282202 more rate limiting
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 30m19s
2026-01-05 15:31:01 -08:00
Gitea Actions
b6731b220c ci: Bump version to 0.9.32 [skip ci] 2026-01-06 04:13:42 +05:00
3507d455e8 more rate limiting
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2026-01-05 15:13:10 -08:00
Gitea Actions
92b2adf8e8 ci: Bump version to 0.9.31 [skip ci] 2026-01-06 04:07:21 +05:00
d6c7452256 more rate limiting
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 41s
2026-01-05 15:06:55 -08:00
Gitea Actions
d812b681dd ci: Bump version to 0.9.30 [skip ci] 2026-01-06 03:54:42 +05:00
b4306a6092 more rate limiting
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 50s
2026-01-05 14:53:49 -08:00
Gitea Actions
57fdd159d5 ci: Bump version to 0.9.29 [skip ci] 2026-01-06 01:08:45 +05:00
4a747ca042 even even more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 23m46s
2026-01-05 12:08:18 -08:00
Gitea Actions
e0bf96824c ci: Bump version to 0.9.28 [skip ci] 2026-01-06 00:28:11 +05:00
e86e09703e even even more and more test fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 59s
2026-01-05 11:27:13 -08:00
Gitea Actions
275741c79e ci: Bump version to 0.9.27 [skip ci] 2026-01-05 15:32:08 +05:00
3a40249ddb even more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 22m19s
2026-01-05 02:30:28 -08:00
Gitea Actions
4c70905950 ci: Bump version to 0.9.26 [skip ci] 2026-01-05 14:51:27 +05:00
0b4884ff2a even more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 26m1s
2026-01-05 01:50:54 -08:00
Gitea Actions
e4acab77c8 ci: Bump version to 0.9.25 [skip ci] 2026-01-05 14:26:57 +05:00
4e20b1b430 even more and more test fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 54s
2026-01-05 01:26:12 -08:00
Gitea Actions
15747ac942 ci: Bump version to 0.9.24 [skip ci] 2026-01-05 12:37:56 +05:00
e5fa89ef17 even more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 27m55s
2026-01-04 23:36:56 -08:00
Gitea Actions
2c65da31e9 ci: Bump version to 0.9.23 [skip ci] 2026-01-05 05:12:54 +05:00
eeec6af905 even more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 27m33s
2026-01-04 16:01:55 -08:00
Gitea Actions
e7d03951b9 ci: Bump version to 0.9.22 [skip ci] 2026-01-05 03:35:06 +05:00
af8816e0af more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 29m30s
2026-01-04 14:34:16 -08:00
Gitea Actions
64f6427e1a ci: Bump version to 0.9.21 [skip ci] 2026-01-05 01:31:50 +05:00
c9b7a75429 more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m59s
2026-01-04 12:30:44 -08:00
Gitea Actions
0490f6922e ci: Bump version to 0.9.20 [skip ci] 2026-01-05 00:30:12 +05:00
057c4c9174 more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 21m19s
2026-01-04 11:28:52 -08:00
Gitea Actions
a9e56bc707 ci: Bump version to 0.9.19 [skip ci] 2026-01-04 16:00:35 +05:00
e5d09c73b7 test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 20m31s
2026-01-04 02:59:55 -08:00
Gitea Actions
6e1298b825 ci: Bump version to 0.9.18 [skip ci] 2026-01-04 15:22:37 +05:00
fc8e43437a test fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 56s
2026-01-04 02:21:08 -08:00
Gitea Actions
cb453aa949 ci: Bump version to 0.9.17 [skip ci] 2026-01-04 09:02:18 +05:00
2651bd16ae test fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 52s
2026-01-03 20:01:10 -08:00
Gitea Actions
91e0f0c46f ci: Bump version to 0.9.16 [skip ci] 2026-01-04 05:05:33 +05:00
e6986d512b test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 30m38s
2026-01-03 16:04:04 -08:00
Gitea Actions
8f9c21675c ci: Bump version to 0.9.15 [skip ci] 2026-01-04 03:58:29 +05:00
7fb22cdd20 more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 25m12s
2026-01-03 14:57:40 -08:00
Gitea Actions
780291303d ci: Bump version to 0.9.14 [skip ci] 2026-01-04 02:48:56 +05:00
4f607f7d2f more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 29m49s
2026-01-03 13:47:44 -08:00
Gitea Actions
208227b3ed ci: Bump version to 0.9.13 [skip ci] 2026-01-04 01:35:36 +05:00
bf1c7d4adf more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 23m41s
2026-01-03 12:35:05 -08:00
Gitea Actions
a7a30cf983 ci: Bump version to 0.9.12 [skip ci] 2026-01-04 01:01:26 +05:00
0bc0676b33 more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m39s
2026-01-03 12:00:20 -08:00
Gitea Actions
73484d3eb4 ci: Bump version to 0.9.11 [skip ci] 2026-01-03 23:52:31 +05:00
b3253d5bbc more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m17s
2026-01-03 10:51:44 -08:00
Gitea Actions
54f3769e90 ci: Bump version to 0.9.10 [skip ci] 2026-01-03 13:34:20 +05:00
bad6f74ee6 more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m21s
2026-01-03 00:33:47 -08:00
Gitea Actions
bcf16168b6 ci: Bump version to 0.9.9 [skip ci] 2026-01-03 13:03:37 +05:00
498fbd9e0e more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 21m5s
2026-01-03 00:02:09 -08:00
Gitea Actions
007ff8e538 ci: Bump version to 0.9.8 [skip ci] 2026-01-03 11:34:34 +05:00
1fc70e3915 extend timers duration - prevent jobs from timing out after 30secs, increased to 4mins
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 22m56s
2026-01-02 22:33:51 -08:00
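If these are BullMQ job locks (BullMQ's default lockDuration is 30 seconds, which matches the symptom), the change plausibly looks like this sketch; the queue name and processor are placeholders:

import { Worker } from 'bullmq';
import IORedis from 'ioredis';

const connection = new IORedis(process.env.REDIS_URL ?? 'redis://localhost:6379', {
  maxRetriesPerRequest: null, // BullMQ workers require this setting
});

// A job is considered stalled once its lock expires (default 30s),
// so long-running jobs need a longer lockDuration.
const worker = new Worker(
  'flyer-processing', // queue name is a placeholder
  async () => {
    /* long-running job body */
  },
  { connection, lockDuration: 240_000 }, // 4 minutes
);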
Gitea Actions
d891e47e02 ci: Bump version to 0.9.7 [skip ci] 2026-01-03 10:36:05 +05:00
08c39afde4 more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 25m33s
2026-01-02 21:33:31 -08:00
Gitea Actions
c579543b8a ci: Bump version to 0.9.6 [skip ci] 2026-01-03 09:31:41 +05:00
0d84137786 test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 23m17s
2026-01-02 20:31:08 -08:00
Gitea Actions
20ee30c4b4 ci: Bump version to 0.9.5 [skip ci] 2026-01-03 08:52:26 +05:00
93612137e3 test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 25m23s
2026-01-02 19:51:10 -08:00
Gitea Actions
6e70f08e3c ci: Bump version to 0.9.4 [skip ci] 2026-01-03 07:59:50 +05:00
459f5f7976 sql fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 21m36s
2026-01-02 18:59:16 -08:00
Gitea Actions
a2e6331ddd ci: Bump version to 0.9.3 [skip ci] 2026-01-03 07:28:11 +05:00
13cd30bec9 sql fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 10m51s
2026-01-02 18:27:42 -08:00
Gitea Actions
baeb9488c6 ci: Bump version to 0.9.2 [skip ci] 2026-01-03 07:07:42 +05:00
0cba0f987e remove refresh_token as it really should not be stored
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m57s
2026-01-02 18:07:08 -08:00
Gitea Actions
958a79997d ci: Bump version to 0.9.1 [skip ci] 2026-01-03 07:01:27 +05:00
8fb1c96f93 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 46s
2026-01-02 17:56:18 -08:00
6e6fe80c7f sql fixes 2026-01-02 17:55:22 -08:00
Gitea Actions
d1554050bd ci: Bump version to 0.9.0 for production release [skip ci] 2026-01-03 05:50:23 +05:00
Gitea Actions
b1fae270bb ci: Bump version to 0.8.0 for production release [skip ci] 2026-01-03 05:48:40 +05:00
Gitea Actions
c852483e18 ci: Bump version to 0.7.29 [skip ci] 2026-01-03 02:43:54 +05:00
2e01ad5bc9 more test fixin
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m50s
2026-01-02 13:43:20 -08:00
Gitea Actions
26763c7183 ci: Bump version to 0.7.28 [skip ci] 2026-01-03 02:04:26 +05:00
f0c5c2c45b more test fixin
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m40s
2026-01-02 13:03:25 -08:00
Gitea Actions
034bb60fd5 ci: Bump version to 0.7.27 [skip ci] 2026-01-03 01:31:54 +05:00
d4b389cb79 more test fixin
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m39s
2026-01-02 12:31:19 -08:00
Gitea Actions
a71fb81468 ci: Bump version to 0.7.26 [skip ci] 2026-01-03 00:58:34 +05:00
9bee0a013b unit test auto-provider refactor
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m8s
2026-01-02 11:58:03 -08:00
Gitea Actions
8bcb4311b3 ci: Bump version to 0.7.25 [skip ci] 2026-01-03 00:34:45 +05:00
9fd15f3a50 unit test auto-provider refactor
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m58s
2026-01-02 11:33:11 -08:00
Gitea Actions
e3c876c7be ci: Bump version to 0.7.24 [skip ci] 2026-01-02 23:23:21 +05:00
32dcf3b89e unit test auto-provider refactor
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 21m2s
2026-01-02 10:22:27 -08:00
7066b937f6 unit test auto-provider refactor 2026-01-02 10:17:01 -08:00
Gitea Actions
8553ea8811 ci: Bump version to 0.7.23 [skip ci] 2026-01-02 12:13:43 +05:00
19885a50f7 unit test auto-provider refactor
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m30s
2026-01-01 23:12:32 -08:00
Gitea Actions
ce82034b9d ci: Bump version to 0.7.22 [skip ci] 2026-01-02 07:30:53 +05:00
4528da2934 integration test fixes + added new ai models and recipeSuggestion
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m36s
2026-01-01 18:30:03 -08:00
619 changed files with 112772 additions and 11806 deletions

.claude/hooks.json (new file, +16)

@@ -0,0 +1,16 @@
{
  "$schema": "https://claude.ai/schemas/hooks.json",
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "node -e \"const cmd = process.argv[1] || ''; const isTest = /\\b(npm\\s+(run\\s+)?test|vitest|jest)\\b/i.test(cmd); const isWindows = process.platform === 'win32'; const inContainer = process.env.REMOTE_CONTAINERS === 'true' || process.env.DEVCONTAINER === 'true'; if (isTest && isWindows && !inContainer) { console.error('BLOCKED: Tests must run on Linux. Use Dev Container (Reopen in Container) or WSL.'); process.exit(1); }\" -- \"$CLAUDE_TOOL_INPUT\""
          }
        ]
      }
    ]
  }
}
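Unpacked, the embedded guard above is equivalent to roughly this (same logic, reformatted for readability):

// Expanded form of the one-line PreToolUse guard.
const cmd = process.argv[1] || '';
const isTest = /\b(npm\s+(run\s+)?test|vitest|jest)\b/i.test(cmd);
const isWindows = process.platform === 'win32';
const inContainer =
  process.env.REMOTE_CONTAINERS === 'true' || process.env.DEVCONTAINER === 'true';

if (isTest && isWindows && !inContainer) {
  console.error('BLOCKED: Tests must run on Linux. Use Dev Container (Reopen in Container) or WSL.');
  process.exit(1);
}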

.claude/settings.local.json (new file, +120)

@@ -0,0 +1,120 @@
{
"permissions": {
"allow": [
"Bash(npm test:*)",
"Bash(podman --version:*)",
"Bash(podman ps:*)",
"Bash(podman machine start:*)",
"Bash(podman compose:*)",
"Bash(podman pull:*)",
"Bash(podman images:*)",
"Bash(podman stop:*)",
"Bash(echo:*)",
"Bash(podman rm:*)",
"Bash(podman run:*)",
"Bash(podman start:*)",
"Bash(podman exec:*)",
"Bash(cat:*)",
"Bash(PGPASSWORD=postgres psql:*)",
"Bash(npm search:*)",
"Bash(npx:*)",
"Bash(curl:*)",
"Bash(powershell:*)",
"Bash(cmd.exe:*)",
"Bash(npm run test:integration:*)",
"Bash(grep:*)",
"Bash(done)",
"Bash(podman info:*)",
"Bash(podman machine:*)",
"Bash(podman system connection:*)",
"Bash(podman inspect:*)",
"Bash(python -m json.tool:*)",
"Bash(claude mcp status)",
"Bash(powershell.exe -Command \"claude mcp status\")",
"Bash(powershell.exe -Command \"claude mcp\")",
"Bash(powershell.exe -Command \"claude mcp list\")",
"Bash(powershell.exe -Command \"claude --version\")",
"Bash(powershell.exe -Command \"claude config\")",
"Bash(powershell.exe -Command \"claude mcp get gitea-projectium\")",
"Bash(powershell.exe -Command \"claude mcp add --help\")",
"Bash(powershell.exe -Command \"claude mcp add -t stdio -s user filesystem -- D:\\\\nodejs\\\\npx.cmd -y @modelcontextprotocol/server-filesystem D:\\\\gitea\\\\flyer-crawler.projectium.com\\\\flyer-crawler.projectium.com\")",
"Bash(powershell.exe -Command \"claude mcp add -t stdio -s user fetch -- D:\\\\nodejs\\\\npx.cmd -y @modelcontextprotocol/server-fetch\")",
"Bash(powershell.exe -Command \"echo ''List files in src/hooks using filesystem MCP'' | claude --print\")",
"Bash(powershell.exe -Command \"echo ''List all podman containers'' | claude --print\")",
"Bash(powershell.exe -Command \"echo ''List my repositories on gitea.projectium.com using gitea-projectium MCP'' | claude --print\")",
"Bash(powershell.exe -Command \"echo ''List my repositories on gitea.projectium.com using gitea-projectium MCP'' | claude --print --allowedTools ''mcp__gitea-projectium__*''\")",
"Bash(powershell.exe -Command \"echo ''Fetch the homepage of https://gitea.projectium.com and summarize it'' | claude --print --allowedTools ''mcp__fetch__*''\")",
"Bash(dir \"C:\\\\Users\\\\games3\\\\.claude\")",
"Bash(dir:*)",
"Bash(D:nodejsnpx.cmd -y @modelcontextprotocol/server-fetch --help)",
"Bash(cmd /c \"dir /o-d C:\\\\Users\\\\games3\\\\.claude\\\\debug 2>nul | head -10\")",
"mcp__memory__read_graph",
"mcp__memory__create_entities",
"mcp__memory__search_nodes",
"mcp__memory__delete_entities",
"mcp__sequential-thinking__sequentialthinking",
"mcp__filesystem__list_directory",
"mcp__filesystem__read_multiple_files",
"mcp__filesystem__directory_tree",
"mcp__filesystem__read_text_file",
"Bash(wc:*)",
"Bash(npm install:*)",
"Bash(git grep:*)",
"Bash(findstr:*)",
"Bash(git add:*)",
"mcp__filesystem__write_file",
"mcp__podman__container_list",
"Bash(podman cp:*)",
"mcp__podman__container_inspect",
"mcp__podman__network_list",
"Bash(podman network connect:*)",
"Bash(npm run build:*)",
"Bash(set NODE_ENV=test)",
"Bash(podman-compose:*)",
"Bash(timeout 60 podman machine start:*)",
"Bash(podman build:*)",
"Bash(podman network rm:*)",
"Bash(npm run lint)",
"Bash(npm run typecheck:*)",
"Bash(npm run type-check:*)",
"Bash(npm run test:unit:*)",
"mcp__filesystem__move_file",
"Bash(git checkout:*)",
"Bash(podman image inspect:*)",
"Bash(node -e:*)",
"Bash(xargs -I {} sh -c 'if ! grep -q \"\"vi.mock.*apiClient\"\" \"\"{}\"\"; then echo \"\"{}\"\"; fi')",
"Bash(MSYS_NO_PATHCONV=1 podman exec:*)",
"Bash(docker ps:*)",
"Bash(find:*)",
"Bash(\"/c/Users/games3/.local/bin/uvx.exe\" markitdown-mcp --help)",
"Bash(git stash:*)",
"Bash(ping:*)",
"Bash(tee:*)",
"Bash(timeout 1800 podman exec flyer-crawler-dev npm run test:unit:*)",
"mcp__filesystem__edit_file",
"Bash(timeout 300 tail:*)",
"mcp__filesystem__list_allowed_directories",
"mcp__memory__add_observations",
"Bash(ssh:*)",
"mcp__redis__list",
"Read(//d/gitea/bugsink-mcp/**)",
"Bash(d:/nodejs/npm.cmd install)",
"Bash(node node_modules/vitest/vitest.mjs run:*)",
"Bash(npm run test:e2e:*)",
"Bash(export BUGSINK_URL=http://localhost:8000)",
"Bash(export BUGSINK_TOKEN=a609c2886daa4e1e05f1517074d7779a5fb49056)",
"Bash(timeout 3 d:/nodejs/node.exe:*)",
"Bash(export BUGSINK_URL=https://bugsink.projectium.com)",
"Bash(export BUGSINK_API_TOKEN=77deaa5e2649ab0fbbca51bbd427ec4637d073a0)",
"Bash(export BUGSINK_TOKEN=77deaa5e2649ab0fbbca51bbd427ec4637d073a0)",
"Bash(where:*)",
"mcp__localerrors__test_connection",
"mcp__localerrors__list_projects",
"Bash(\"D:\\\\nodejs\\\\npx.cmd\" -y @modelcontextprotocol/server-postgres --help)",
"Bash(git rm:*)",
"Bash(git -C \"C:\\\\Users\\\\games3\\\\.claude\\\\plugins\\\\marketplaces\\\\claude-plugins-official\" log -1 --format=\"%H %ci %s\")",
"Bash(git -C \"C:\\\\Users\\\\games3\\\\.claude\\\\plugins\\\\marketplaces\\\\claude-plugins-official\" config --get remote.origin.url)",
"Bash(git -C \"C:\\\\Users\\\\games3\\\\.claude\\\\plugins\\\\marketplaces\\\\claude-plugins-official\" fetch --dry-run -v)"
]
}
}

.devcontainer/devcontainer.json (modified; filename inferred from content)

@@ -1,18 +1,97 @@
 {
+  // ============================================================================
+  // VS CODE DEV CONTAINER CONFIGURATION
+  // ============================================================================
+  // This file configures VS Code's Dev Containers extension to provide a
+  // consistent, fully-configured development environment.
+  //
+  // Features:
+  //  - Automatic PostgreSQL + Redis startup with healthchecks
+  //  - Automatic npm install
+  //  - Automatic database schema initialization and seeding
+  //  - Pre-configured VS Code extensions (ESLint, Prettier)
+  //  - Podman support for Windows users
+  //
+  // Usage:
+  //  1. Install the "Dev Containers" extension in VS Code
+  //  2. Open this project folder
+  //  3. Click "Reopen in Container" when prompted (or use Command Palette)
+  //  4. Wait for container build and initialization
+  //  5. Development server starts automatically
+  // ============================================================================
   "name": "Flyer Crawler Dev (Ubuntu 22.04)",
+  // Use Docker Compose for multi-container setup
   "dockerComposeFile": ["../compose.dev.yml"],
   "service": "app",
   "workspaceFolder": "/app",
+  // VS Code customizations
   "customizations": {
     "vscode": {
-      "extensions": ["dbaeumer.vscode-eslint", "esbenp.prettier-vscode"]
+      "extensions": [
+        // Code quality
+        "dbaeumer.vscode-eslint",
+        "esbenp.prettier-vscode",
+        // TypeScript
+        "ms-vscode.vscode-typescript-next",
+        // Database
+        "mtxr.sqltools",
+        "mtxr.sqltools-driver-pg",
+        // Utilities
+        "eamodio.gitlens",
+        "streetsidesoftware.code-spell-checker"
+      ],
+      "settings": {
+        "editor.formatOnSave": true,
+        "editor.defaultFormatter": "esbenp.prettier-vscode",
+        "typescript.preferences.importModuleSpecifier": "relative"
+      }
     }
   },
+  // Run as root (required for npm global installs)
   "remoteUser": "root",
-  // Automatically install dependencies when the container is created.
-  // This runs inside the container, populating the isolated node_modules volume.
-  "postCreateCommand": "npm install",
-  "postAttachCommand": "npm run dev:container",
-  // Try to start podman machine, but exit with success (0) even if it's already running
-  "initializeCommand": "powershell -Command \"podman machine start; exit 0\""
+  // ============================================================================
+  // Lifecycle Commands
+  // ============================================================================
+  // initializeCommand: Runs on the HOST before the container is created.
+  // Starts Podman machine on Windows (no-op if already running or using Docker).
+  "initializeCommand": "powershell -Command \"podman machine start; exit 0\"",
+  // postCreateCommand: Runs ONCE when the container is first created.
+  // This is where we do full initialization: npm install + database setup.
+  "postCreateCommand": "chmod +x scripts/docker-init.sh && ./scripts/docker-init.sh",
+  // postAttachCommand: Runs EVERY TIME VS Code attaches to the container.
+  // Server now starts automatically via dev-entrypoint.sh in compose.dev.yml.
+  // No need to start it again here.
+  // "postAttachCommand": "npm run dev:container",
+  // ============================================================================
+  // Port Forwarding
+  // ============================================================================
+  // Automatically forward these ports from the container to the host
+  "forwardPorts": [443, 3001],
+  // Labels for forwarded ports in VS Code's Ports panel
+  "portsAttributes": {
+    "443": {
+      "label": "Frontend HTTPS (nginx → Vite)",
+      "onAutoForward": "notify"
+    },
+    "3001": {
+      "label": "Backend API",
+      "onAutoForward": "notify"
+    }
+  },
+  // ============================================================================
+  // Features
+  // ============================================================================
+  // Additional dev container features (optional)
+  "features": {}
 }

.env.example (new file, +123)

@@ -0,0 +1,123 @@
# .env.example
# ============================================================================
# ENVIRONMENT VARIABLES TEMPLATE
# ============================================================================
# Copy this file to .env and fill in your values.
# For local development with Docker/Podman, these defaults should work out of the box.
#
# IMPORTANT: Never commit .env files with real credentials to version control!
# ============================================================================
# ===================
# Database Configuration
# ===================
# PostgreSQL connection settings
# For container development, use the service name "postgres"
DB_HOST=postgres
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=postgres
DB_NAME=flyer_crawler_dev
# ===================
# Redis Configuration
# ===================
# Redis URL for caching and job queues
# For container development, use the service name "redis"
REDIS_URL=redis://redis:6379
# Optional: Redis password (leave empty if not required)
REDIS_PASSWORD=
# ===================
# Application Settings
# ===================
NODE_ENV=development
# Frontend URL for CORS and email links
FRONTEND_URL=http://localhost:3000
# Flyer Base URL - used for seed data and flyer image URLs
# Dev container: http://127.0.0.1
# Test: https://flyer-crawler-test.projectium.com
# Production: https://flyer-crawler.projectium.com
FLYER_BASE_URL=http://127.0.0.1
# ===================
# Authentication
# ===================
# REQUIRED: Secret key for signing JWT tokens (generate a random 64+ character string)
JWT_SECRET=your-super-secret-jwt-key-change-this-in-production
# OAuth Providers (Optional - enable social login)
# Google OAuth - https://console.cloud.google.com/apis/credentials
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
# GitHub OAuth - https://github.com/settings/developers
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
# ===================
# AI/ML Services
# ===================
# REQUIRED: Google Gemini API key for flyer OCR processing
GEMINI_API_KEY=your-gemini-api-key
# ===================
# External APIs
# ===================
# Optional: Google Maps API key for geocoding store addresses
GOOGLE_MAPS_API_KEY=
# ===================
# Email Configuration (Optional)
# ===================
# SMTP settings for sending emails (deal notifications, password reset)
SMTP_HOST=
SMTP_PORT=587
SMTP_SECURE=false
SMTP_USER=
SMTP_PASS=
SMTP_FROM_EMAIL=noreply@example.com
# ===================
# Worker Configuration (Optional)
# ===================
# Concurrency settings for background job workers
WORKER_CONCURRENCY=1
EMAIL_WORKER_CONCURRENCY=10
ANALYTICS_WORKER_CONCURRENCY=1
CLEANUP_WORKER_CONCURRENCY=10
# Worker lock duration in milliseconds (default: 2 minutes)
WORKER_LOCK_DURATION=120000
# ===================
# Error Tracking (ADR-015)
# ===================
# Sentry-compatible error tracking via Bugsink (self-hosted)
# DSNs are created in Bugsink UI at http://localhost:8000 (dev) or your production URL
# Backend DSN - for Express/Node.js errors
SENTRY_DSN=
# Frontend DSN - for React/browser errors (uses VITE_ prefix)
VITE_SENTRY_DSN=
# Environment name for error grouping (defaults to NODE_ENV)
SENTRY_ENVIRONMENT=development
VITE_SENTRY_ENVIRONMENT=development
# Enable/disable error tracking (default: true)
SENTRY_ENABLED=true
VITE_SENTRY_ENABLED=true
# Enable debug mode for SDK troubleshooting (default: false)
SENTRY_DEBUG=false
VITE_SENTRY_DEBUG=false
# ===================
# Source Maps Upload (ADR-015)
# ===================
# Set to 'true' to enable source map generation and upload during builds
# Only used in CI/CD pipelines (deploy-to-prod.yml, deploy-to-test.yml)
GENERATE_SOURCE_MAPS=true
# Auth token for uploading source maps to Bugsink
# Create at: https://bugsink.projectium.com (Settings > API Keys)
# Required for de-minified stack traces in error reports
SENTRY_AUTH_TOKEN=
# URL of your Bugsink instance (for source map uploads)
SENTRY_URL=https://bugsink.projectium.com
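For reference, the backend variables above are typically consumed like this (a sketch assuming @sentry/node, which fits since Bugsink is described here as Sentry-compatible and the repo already uses @sentry/vite-plugin; dsn, environment, enabled, and debug are standard Sentry init options):

import * as Sentry from '@sentry/node';

// Bugsink speaks the Sentry protocol, so the stock SDK init applies (sketch).
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.SENTRY_ENVIRONMENT ?? process.env.NODE_ENV,
  enabled: process.env.SENTRY_ENABLED !== 'false',
  debug: process.env.SENTRY_DEBUG === 'true',
});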

.env.test (new file, +6)

@@ -0,0 +1,6 @@
DB_HOST=10.89.0.4
DB_USER=flyer
DB_PASSWORD=flyer
DB_NAME=flyer_crawler_test
REDIS_URL=redis://redis:6379
NODE_ENV=test

.gitea/workflows/deploy-to-prod.yml (modified; filename inferred from content)

@@ -63,8 +63,8 @@ jobs:
- name: Check for Production Database Schema Changes
env:
DB_HOST: ${{ secrets.DB_HOST }}
-DB_USER: ${{ secrets.DB_USER }}
-DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
+DB_USER: ${{ secrets.DB_USER_PROD }}
+DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
run: |
if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
@@ -87,17 +87,34 @@ jobs:
fi
- name: Build React Application for Production
# Source Maps (ADR-015): If SENTRY_AUTH_TOKEN is set, the @sentry/vite-plugin will:
# 1. Generate hidden source maps during build
# 2. Upload them to Bugsink for error de-minification
# 3. Delete the .map files after upload (so they're not publicly accessible)
run: |
if [ -z "${{ secrets.VITE_GOOGLE_GENAI_API_KEY }}" ]; then
echo "ERROR: The VITE_GOOGLE_GENAI_API_KEY secret is not set."
exit 1
fi
# Source map upload is optional - warn if not configured
if [ -z "${{ secrets.SENTRY_AUTH_TOKEN }}" ]; then
echo "WARNING: SENTRY_AUTH_TOKEN not set. Source maps will NOT be uploaded to Bugsink."
echo " Errors will show minified stack traces. To fix, add SENTRY_AUTH_TOKEN to Gitea secrets."
fi
GITEA_SERVER_URL="https://gitea.projectium.com"
COMMIT_MESSAGE=$(git log -1 --grep="\[skip ci\]" --invert-grep --pretty=%s)
PACKAGE_VERSION=$(node -p "require('./package.json').version")
GENERATE_SOURCE_MAPS=true \
VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD):$PACKAGE_VERSION" \
VITE_APP_COMMIT_URL="$GITEA_SERVER_URL/${{ gitea.repository }}/commit/${{ gitea.sha }}" \
VITE_APP_COMMIT_MESSAGE="$COMMIT_MESSAGE" \
VITE_SENTRY_DSN="${{ secrets.VITE_SENTRY_DSN }}" \
VITE_SENTRY_ENVIRONMENT="production" \
VITE_SENTRY_ENABLED="true" \
SENTRY_AUTH_TOKEN="${{ secrets.SENTRY_AUTH_TOKEN }}" \
SENTRY_URL="https://bugsink.projectium.com" \
VITE_API_BASE_URL=/api VITE_API_KEY=${{ secrets.VITE_GOOGLE_GENAI_API_KEY }} npm run build
- name: Deploy Application to Production Server
@@ -114,10 +131,11 @@ jobs:
env:
# --- Production Secrets Injection ---
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
REDIS_URL: 'redis://localhost:6379'
# Explicitly use database 0 for production (test uses database 1)
REDIS_URL: 'redis://localhost:6379/0'
REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_PROD }}
FRONTEND_URL: 'https://flyer-crawler.projectium.com'
JWT_SECRET: ${{ secrets.JWT_SECRET }}
@@ -129,6 +147,15 @@ jobs:
SMTP_USER: ''
SMTP_PASS: ''
SMTP_FROM_EMAIL: 'noreply@flyer-crawler.projectium.com'
# OAuth Providers
GOOGLE_CLIENT_ID: ${{ secrets.GOOGLE_CLIENT_ID }}
GOOGLE_CLIENT_SECRET: ${{ secrets.GOOGLE_CLIENT_SECRET }}
GITHUB_CLIENT_ID: ${{ secrets.GH_CLIENT_ID }}
GITHUB_CLIENT_SECRET: ${{ secrets.GH_CLIENT_SECRET }}
# Sentry/Bugsink Error Tracking (ADR-015)
SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
SENTRY_ENVIRONMENT: 'production'
SENTRY_ENABLED: 'true'
run: |
if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
echo "ERROR: One or more production database secrets (DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE_PROD) are not set."
@@ -158,7 +185,7 @@ jobs:
else
echo "Version mismatch (Running: $RUNNING_VERSION -> Deployed: $NEW_VERSION) or app not running. Reloading PM2..."
fi
pm2 startOrReload ecosystem.config.cjs --env production --update-env && pm2 save
pm2 startOrReload ecosystem.config.cjs --update-env && pm2 save
echo "Production backend server reloaded successfully."
else
echo "Version $NEW_VERSION is already running. Skipping PM2 reload."


@@ -96,6 +96,24 @@ jobs:
# It prevents the accumulation of duplicate processes from previous test runs.
node -e "const exec = require('child_process').execSync; try { const list = JSON.parse(exec('pm2 jlist').toString()); list.forEach(p => { if (p.name && p.name.endsWith('-test')) { console.log('Deleting test process: ' + p.name + ' (' + p.pm2_env.pm_id + ')'); try { exec('pm2 delete ' + p.pm2_env.pm_id); } catch(e) { console.error('Failed to delete ' + p.pm2_env.pm_id, e.message); } } }); console.log('✅ Test process cleanup complete.'); } catch (e) { if (e.stdout.toString().includes('No process found')) { console.log('No PM2 processes running, cleanup not needed.'); } else { console.error('Error cleaning up test processes:', e.message); } }" || true
- name: Flush Redis Test Database Before Tests
# CRITICAL: Clear Redis database 1 (test database) to remove stale BullMQ jobs.
# This prevents old jobs with outdated error messages from polluting test results.
# NOTE: We use database 1 for tests to isolate from production (database 0).
env:
REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_TEST }}
run: |
echo "--- Flushing Redis database 1 (test database) to remove stale jobs ---"
if [ -z "$REDIS_PASSWORD" ]; then
echo "⚠️ REDIS_PASSWORD_TEST not set, attempting flush without password..."
redis-cli -n 1 FLUSHDB || echo "Redis flush failed (no password)"
else
redis-cli -a "$REDIS_PASSWORD" -n 1 FLUSHDB 2>/dev/null && echo "✅ Redis database 1 (test) flushed successfully." || echo "⚠️ Redis flush failed"
fi
# Verify the flush worked by checking key count on database 1
KEY_COUNT=$(redis-cli -a "$REDIS_PASSWORD" -n 1 DBSIZE 2>/dev/null | grep -oE '[0-9]+' || echo "unknown")
echo "Redis database 1 key count after flush: $KEY_COUNT"
- name: Run All Tests and Generate Merged Coverage Report
# This single step runs both unit and integration tests, then merges their
# coverage data into a single report. It combines the environment variables
@@ -103,20 +121,30 @@ jobs:
env:
# --- Database credentials for the test suite ---
# These are injected from Gitea secrets into the runner's environment.
# CRITICAL: Use TEST-specific credentials that have CREATE privileges on the public schema.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_NAME: 'flyer-crawler-test' # Explicitly set for tests
DB_USER: ${{ secrets.DB_USER_TEST }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_TEST }}
DB_NAME: ${{ secrets.DB_DATABASE_TEST }}
# --- Redis credentials for the test suite ---
REDIS_URL: 'redis://localhost:6379'
# CRITICAL: Use Redis database 1 to isolate tests from production (which uses db 0).
# This prevents the production worker from picking up test jobs.
REDIS_URL: 'redis://localhost:6379/1'
REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_TEST }}
# --- Integration test specific variables ---
FRONTEND_URL: 'http://localhost:3000'
FRONTEND_URL: 'https://example.com'
VITE_API_BASE_URL: 'http://localhost:3001/api'
GEMINI_API_KEY: ${{ secrets.VITE_GOOGLE_GENAI_API_KEY }}
# --- Storage path for flyer images ---
# CRITICAL: Use an absolute path in the test runner's working directory for file storage.
# This ensures tests can read processed files to verify their contents (e.g., EXIF stripping).
# Without this, multer and flyerProcessingService default to /var/www/.../flyer-images.
# NOTE: We use ${{ github.workspace }} which resolves to the checkout directory.
STORAGE_PATH: '${{ github.workspace }}/flyer-images'
# --- JWT Secret for Passport authentication in tests ---
JWT_SECRET: ${{ secrets.JWT_SECRET }}
@@ -171,8 +199,8 @@ jobs:
--reporter=verbose --includeTaskLocation --testTimeout=10000 --silent=passed-only || true
echo "--- Running E2E Tests ---"
# Run E2E tests using the dedicated E2E config which inherits from integration config.
# We still pass --coverage to enable it, but directory and timeout are now in the config.
# Run E2E tests using the dedicated E2E config.
# E2E uses port 3098, integration uses 3099 to avoid conflicts.
npx vitest run --config vitest.config.e2e.ts --coverage \
--coverage.exclude='**/*.test.ts' \
--coverage.exclude='**/tests/**' \
@@ -213,7 +241,19 @@ jobs:
# Run c8: read raw files from the temp dir, and output an Istanbul JSON report.
# We only generate the 'json' report here because it's all nyc needs for merging.
echo "Server coverage report about to be generated..."
npx c8 report --exclude='**/*.test.ts' --exclude='**/tests/**' --exclude='**/mocks/**' --reporter=json --temp-directory .coverage/tmp/integration-server --reports-dir .coverage/integration-server
npx c8 report \
--include='src/**' \
--exclude='**/*.test.ts' \
--exclude='**/*.test.tsx' \
--exclude='**/tests/**' \
--exclude='**/mocks/**' \
--exclude='hostexecutor/**' \
--exclude='scripts/**' \
--exclude='*.config.js' \
--exclude='*.config.ts' \
--reporter=json \
--temp-directory .coverage/tmp/integration-server \
--reports-dir .coverage/integration-server
echo "Server coverage report generated. Verifying existence:"
ls -l .coverage/integration-server/coverage-final.json
@@ -253,12 +293,18 @@ jobs:
--reporter=html \
--report-dir .coverage/ \
--temp-dir "$NYC_SOURCE_DIR" \
--include "src/**" \
--exclude "**/*.test.ts" \
--exclude "**/*.test.tsx" \
--exclude "**/tests/**" \
--exclude "**/mocks/**" \
--exclude "**/index.tsx" \
--exclude "**/vite-env.d.ts" \
--exclude "**/vitest.setup.ts"
--exclude "**/vitest.setup.ts" \
--exclude "hostexecutor/**" \
--exclude "scripts/**" \
--exclude "*.config.js" \
--exclude "*.config.ts"
# Re-enable secret masking for subsequent steps.
echo "::secret-masking::"
@@ -283,10 +329,11 @@ jobs:
- name: Check for Test Database Schema Changes
env:
# Use test database credentials for this check.
# CRITICAL: Use TEST-specific credentials that have CREATE privileges on the public schema.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }} # This is used by psql
DB_NAME: ${{ secrets.DB_DATABASE_TEST }} # This is used by the application
DB_USER: ${{ secrets.DB_USER_TEST }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_TEST }}
DB_NAME: ${{ secrets.DB_DATABASE_TEST }}
run: |
# Fail-fast check to ensure secrets are configured in Gitea.
if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
@@ -327,6 +374,11 @@ jobs:
# We set the environment variable directly in the command line for this step.
# This maps the Gitea secret to the environment variable the application expects.
# We also generate and inject the application version, commit URL, and commit message.
#
# Source Maps (ADR-015): If SENTRY_AUTH_TOKEN is set, the @sentry/vite-plugin will:
# 1. Generate hidden source maps during build
# 2. Upload them to Bugsink for error de-minification
# 3. Delete the .map files after upload (so they're not publicly accessible)
run: |
# Fail-fast check for the build-time secret.
if [ -z "${{ secrets.VITE_GOOGLE_GENAI_API_KEY }}" ]; then
@@ -334,12 +386,25 @@ jobs:
exit 1
fi
# Source map upload is optional - warn if not configured
if [ -z "${{ secrets.SENTRY_AUTH_TOKEN }}" ]; then
echo "WARNING: SENTRY_AUTH_TOKEN not set. Source maps will NOT be uploaded to Bugsink."
echo " Errors will show minified stack traces. To fix, add SENTRY_AUTH_TOKEN to Gitea secrets."
fi
GITEA_SERVER_URL="https://gitea.projectium.com" # Your Gitea instance URL
COMMIT_MESSAGE=$(git log -1 --grep="\[skip ci\]" --invert-grep --pretty=%s)
# Sanitize commit message to prevent shell injection or build breaks (removes quotes, backticks, backslashes, $)
COMMIT_MESSAGE=$(git log -1 --grep="\[skip ci\]" --invert-grep --pretty=%s | tr -d '"`\\$')
PACKAGE_VERSION=$(node -p "require('./package.json').version")
GENERATE_SOURCE_MAPS=true \
VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD):$PACKAGE_VERSION" \
VITE_APP_COMMIT_URL="$GITEA_SERVER_URL/${{ gitea.repository }}/commit/${{ gitea.sha }}" \
VITE_APP_COMMIT_MESSAGE="$COMMIT_MESSAGE" \
VITE_SENTRY_DSN="${{ secrets.VITE_SENTRY_DSN_TEST }}" \
VITE_SENTRY_ENVIRONMENT="test" \
VITE_SENTRY_ENABLED="true" \
SENTRY_AUTH_TOKEN="${{ secrets.SENTRY_AUTH_TOKEN }}" \
SENTRY_URL="https://bugsink.projectium.com" \
VITE_API_BASE_URL="https://flyer-crawler-test.projectium.com/api" VITE_API_KEY=${{ secrets.VITE_GOOGLE_GENAI_API_KEY_TEST }} npm run build
- name: Deploy Application to Test Server
@@ -378,17 +443,18 @@ jobs:
# Your Node.js application will read these directly from `process.env`.
# Database Credentials
# CRITICAL: Use TEST-specific credentials that have CREATE privileges on the public schema.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_USER: ${{ secrets.DB_USER_TEST }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_TEST }}
DB_NAME: ${{ secrets.DB_DATABASE_TEST }}
# Redis Credentials
REDIS_URL: 'redis://localhost:6379'
# Redis Credentials (use database 1 to isolate from production)
REDIS_URL: 'redis://localhost:6379/1'
REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_TEST }}
# Application Secrets
FRONTEND_URL: 'https://flyer-crawler-test.projectium.com'
FRONTEND_URL: 'https://example.com'
JWT_SECRET: ${{ secrets.JWT_SECRET }}
GEMINI_API_KEY: ${{ secrets.VITE_GOOGLE_GENAI_API_KEY_TEST }}
GOOGLE_MAPS_API_KEY: ${{ secrets.GOOGLE_MAPS_API_KEY }}
@@ -400,6 +466,10 @@ jobs:
SMTP_USER: '' # Using MailHog, no auth needed
SMTP_PASS: '' # Using MailHog, no auth needed
SMTP_FROM_EMAIL: 'noreply@flyer-crawler-test.projectium.com'
# Sentry/Bugsink Error Tracking (ADR-015)
SENTRY_DSN: ${{ secrets.SENTRY_DSN_TEST }}
SENTRY_ENVIRONMENT: 'test'
SENTRY_ENABLED: 'true'
run: |
# Fail-fast check to ensure secrets are configured in Gitea.
@@ -423,10 +493,11 @@ jobs:
echo "Cleaning up errored or stopped PM2 processes..."
node -e "const exec = require('child_process').execSync; try { const list = JSON.parse(exec('pm2 jlist').toString()); list.forEach(p => { if (p.pm2_env.status === 'errored' || p.pm2_env.status === 'stopped') { console.log('Deleting ' + p.pm2_env.status + ' process: ' + p.name + ' (' + p.pm2_env.pm_id + ')'); try { exec('pm2 delete ' + p.pm2_env.pm_id); } catch(e) { console.error('Failed to delete ' + p.pm2_env.pm_id); } } }); } catch (e) { console.error('Error cleaning up processes:', e); }"
# Use `startOrReload` with the ecosystem file. This is the standard, idempotent way to deploy.
# It will START the process if it's not running, or RELOAD it if it is.
# Use `startOrReload` with the TEST ecosystem file. This starts test-specific processes
# (flyer-crawler-api-test, flyer-crawler-worker-test, flyer-crawler-analytics-worker-test)
# that run separately from production processes.
# We also add `&& pm2 save` to persist the process list across server reboots.
pm2 startOrReload ecosystem.config.cjs --env test --update-env && pm2 save
pm2 startOrReload ecosystem-test.config.cjs --update-env && pm2 save
echo "Test backend server reloaded successfully."
# After a successful deployment, update the schema hash in the database.


@@ -20,9 +20,9 @@ jobs:
# Use production database credentials for this entire job.
DB_HOST: ${{ secrets.DB_HOST }}
DB_PORT: ${{ secrets.DB_PORT }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_NAME: ${{ secrets.DB_NAME_PROD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
steps:
- name: Validate Secrets


@@ -23,9 +23,9 @@ jobs:
env:
# Use production database credentials for this entire job.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }} # Used by psql
DB_NAME: ${{ secrets.DB_DATABASE_PROD }} # Used by the application
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
steps:
- name: Checkout Code


@@ -23,9 +23,9 @@ jobs:
env:
# Use test database credentials for this entire job.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }} # Used by psql
DB_NAME: ${{ secrets.DB_DATABASE_TEST }} # Used by the application
DB_USER: ${{ secrets.DB_USER_TEST }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_TEST }}
DB_NAME: ${{ secrets.DB_DATABASE_TEST }}
steps:
- name: Checkout Code


@@ -22,8 +22,8 @@ jobs:
env:
# Use production database credentials for this entire job.
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
BACKUP_DIR: '/var/www/backups' # Define a dedicated directory for backups


@@ -62,8 +62,8 @@ jobs:
- name: Check for Production Database Schema Changes
env:
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
run: |
if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
@@ -113,10 +113,11 @@ jobs:
env:
# --- Production Secrets Injection ---
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
REDIS_URL: 'redis://localhost:6379'
# Explicitly use database 0 for production (test uses database 1)
REDIS_URL: 'redis://localhost:6379/0'
REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_PROD }}
FRONTEND_URL: 'https://flyer-crawler.projectium.com'
JWT_SECRET: ${{ secrets.JWT_SECRET }}


@@ -0,0 +1,167 @@
# .gitea/workflows/manual-redis-flush-prod.yml
#
# DANGER: This workflow is DESTRUCTIVE and intended for manual execution only.
# It will completely FLUSH the PRODUCTION Redis database (db 0).
# This will clear all BullMQ queues, sessions, caches, and any other Redis data.
#
name: Manual - Flush Production Redis
on:
workflow_dispatch:
inputs:
confirmation:
description: 'DANGER: This will FLUSH production Redis. Type "flush-production-redis" to confirm.'
required: true
default: 'do-not-run'
flush_type:
description: 'What to flush?'
required: true
type: choice
options:
- 'queues-only'
- 'entire-database'
default: 'queues-only'
jobs:
flush-redis:
runs-on: projectium.com # This job runs on your self-hosted Gitea runner.
env:
REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_PROD }}
steps:
- name: Checkout Code
uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '20'
cache: 'npm'
cache-dependency-path: '**/package-lock.json'
- name: Install Dependencies
run: npm ci
- name: Validate Secrets
run: |
if [ -z "$REDIS_PASSWORD" ]; then
echo "ERROR: REDIS_PASSWORD_PROD secret is not set in Gitea repository settings."
exit 1
fi
echo "✅ Redis password secret is present."
- name: Verify Confirmation Phrase
run: |
if [ "${{ gitea.event.inputs.confirmation }}" != "flush-production-redis" ]; then
echo "ERROR: Confirmation phrase did not match. Aborting Redis flush."
exit 1
fi
echo "✅ Confirmation accepted. Proceeding with Redis flush."
- name: Show Current Redis State
run: |
echo "--- Current Redis Database 0 (Production) State ---"
redis-cli -a "$REDIS_PASSWORD" -n 0 INFO keyspace 2>/dev/null || echo "Could not get keyspace info"
echo ""
echo "--- Key Count ---"
KEY_COUNT=$(redis-cli -a "$REDIS_PASSWORD" -n 0 DBSIZE 2>/dev/null | grep -oE '[0-9]+' || echo "unknown")
echo "Production Redis (db 0) key count: $KEY_COUNT"
echo ""
echo "--- BullMQ Queue Keys ---"
redis-cli -a "$REDIS_PASSWORD" -n 0 KEYS "bull:*" 2>/dev/null | head -20 || echo "No BullMQ keys found"
- name: 🚨 FINAL WARNING & PAUSE 🚨
run: |
echo "*********************************************************************"
echo "WARNING: YOU ARE ABOUT TO FLUSH PRODUCTION REDIS DATA."
echo "Flush type: ${{ gitea.event.inputs.flush_type }}"
echo ""
if [ "${{ gitea.event.inputs.flush_type }}" = "entire-database" ]; then
echo "This will DELETE ALL Redis data including sessions, caches, and queues!"
else
echo "This will DELETE ALL BullMQ queue data (pending jobs, failed jobs, etc.)"
fi
echo ""
echo "This action is IRREVERSIBLE. Press Ctrl+C in the runner terminal NOW to cancel."
echo "Sleeping for 10 seconds..."
echo "*********************************************************************"
sleep 10
- name: Flush BullMQ Queues Only
if: ${{ gitea.event.inputs.flush_type == 'queues-only' }}
env:
REDIS_URL: 'redis://localhost:6379/0'
run: |
echo "--- Obliterating BullMQ queues using Node.js ---"
node -e "
const { Queue } = require('bullmq');
const IORedis = require('ioredis');
const connection = new IORedis(process.env.REDIS_URL, {
maxRetriesPerRequest: null,
password: process.env.REDIS_PASSWORD,
});
const queueNames = [
'flyer-processing',
'email-sending',
'analytics-reporting',
'weekly-analytics-reporting',
'file-cleanup',
'token-cleanup'
];
(async () => {
for (const name of queueNames) {
try {
const queue = new Queue(name, { connection });
const counts = await queue.getJobCounts();
console.log('Queue \"' + name + '\" before obliterate:', JSON.stringify(counts));
await queue.obliterate({ force: true });
console.log('✅ Obliterated queue: ' + name);
await queue.close();
} catch (err) {
console.error('⚠️ Failed to obliterate queue ' + name + ':', err.message);
}
}
await connection.quit();
console.log('✅ All BullMQ queues obliterated.');
})();
"
- name: Flush Entire Redis Database
if: ${{ gitea.event.inputs.flush_type == 'entire-database' }}
run: |
echo "--- Flushing entire Redis database 0 (production) ---"
redis-cli -a "$REDIS_PASSWORD" -n 0 FLUSHDB 2>/dev/null && echo "✅ Redis database 0 flushed successfully." || echo "❌ Redis flush failed"
- name: Verify Flush Results
run: |
echo "--- Redis Database 0 (Production) State After Flush ---"
KEY_COUNT=$(redis-cli -a "$REDIS_PASSWORD" -n 0 DBSIZE 2>/dev/null | grep -oE '[0-9]+' || echo "unknown")
echo "Production Redis (db 0) key count after flush: $KEY_COUNT"
echo ""
echo "--- Remaining BullMQ Queue Keys ---"
BULL_KEYS=$(redis-cli -a "$REDIS_PASSWORD" -n 0 KEYS "bull:*" 2>/dev/null | wc -l || echo "0")
echo "BullMQ key count: $BULL_KEYS"
if [ "${{ gitea.event.inputs.flush_type }}" = "queues-only" ] && [ "$BULL_KEYS" -gt 0 ]; then
echo "⚠️ Warning: Some BullMQ keys may still exist. This can happen if new jobs were added during the flush."
fi
- name: Summary
run: |
echo ""
echo "=========================================="
echo "PRODUCTION REDIS FLUSH COMPLETE"
echo "=========================================="
echo "Flush type: ${{ gitea.event.inputs.flush_type }}"
echo "Timestamp: $(date -u '+%Y-%m-%d %H:%M:%S UTC')"
echo ""
echo "NOTE: If you flushed queues, any pending jobs (flyer processing,"
echo "emails, analytics, etc.) have been permanently deleted."
echo ""
echo "The production workers will automatically start processing"
echo "new jobs as they are added to the queues."
echo "=========================================="

.gitignore

@@ -11,6 +11,18 @@ node_modules
dist
dist-ssr
*.local
.env
*.tsbuildinfo
# Test coverage
coverage
.nyc_output
.coverage
# Test artifacts - flyer-images/ is a runtime directory
# Test fixtures are stored in src/tests/assets/ instead
flyer-images/
test-output.txt
# Editor directories and files
.vscode/*
@@ -22,3 +34,7 @@ dist-ssr
*.njsproj
*.sln
*.sw?
Thumbs.db
.claude
nul
tmpclaude*

.husky/pre-commit

@@ -0,0 +1 @@
FORCE_COLOR=0 npx lint-staged --quiet

.lintstagedrc.json

@@ -0,0 +1,4 @@
{
"*.{js,jsx,ts,tsx}": ["eslint --fix --no-color", "prettier --write"],
"*.{json,md,css,html,yml,yaml}": ["prettier --write"]
}

.mcp.json

@@ -0,0 +1,20 @@
{
"mcpServers": {
"localerrors": {
"command": "d:\\nodejs\\node.exe",
"args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
"env": {
"BUGSINK_URL": "http://127.0.0.1:8000",
"BUGSINK_TOKEN": "a609c2886daa4e1e05f1517074d7779a5fb49056"
}
},
"devdb": {
"command": "D:\\nodejs\\npx.cmd",
"args": [
"-y",
"@modelcontextprotocol/server-postgres",
"postgresql://postgres:postgres@127.0.0.1:5432/flyer_crawler_dev"
]
}
}
}

.nycrc.json

@@ -0,0 +1,5 @@
{
"text": {
"maxCols": 200
}
}

.prettierignore

@@ -0,0 +1,41 @@
# Dependencies
node_modules/
# Build output
dist/
build/
.cache/
# Coverage reports
coverage/
.coverage/
# IDE and editor configs
.idea/
.vscode/
*.swp
*.swo
# Logs
*.log
logs/
# Environment files (may contain secrets)
.env*
!.env.example
# Lock files (managed by package managers)
package-lock.json
pnpm-lock.yaml
yarn.lock
# Generated files
*.min.js
*.min.css
# Git directory
.git/
.gitea/
# Test artifacts
__snapshots__/

CLAUDE-MCP.md

@@ -0,0 +1,378 @@
# Claude Code MCP Configuration Guide
This document explains how to configure MCP (Model Context Protocol) servers for Claude Code, covering both the CLI and VS Code extension.
## The Two Config Files
Claude Code uses **two separate configuration files** for MCP servers. They must be kept in sync manually.
| File | Used By | Notes |
| ------------------------- | ----------------------------- | ------------------------------------------- |
| `~/.claude.json` | Claude CLI (`claude` command) | Requires `"type": "stdio"` in each server |
| `~/.claude/settings.json` | VS Code Extension | Simpler format, supports `"disabled": true` |
**Important:** Changes to one file do NOT automatically sync to the other!
## File Locations (Windows)
```text
C:\Users\<username>\.claude.json # CLI config
C:\Users\<username>\.claude\settings.json # VS Code extension config
```
## Config Format Differences
### VS Code Extension Format (`~/.claude/settings.json`)
```json
{
"mcpServers": {
"server-name": {
"command": "path/to/executable",
"args": ["arg1", "arg2"],
"env": {
"ENV_VAR": "value"
},
"disabled": true // Optional - disable without removing
}
}
}
```
### CLI Format (`~/.claude.json`)
The CLI config is a larger file with many settings. The `mcpServers` section is nested within it:
```json
{
"numStartups": 14,
"installMethod": "global",
// ... other settings ...
"mcpServers": {
"server-name": {
"type": "stdio", // REQUIRED for CLI
"command": "path/to/executable",
"args": ["arg1", "arg2"],
"env": {
"ENV_VAR": "value"
}
}
}
// ... more settings ...
}
```
**Key difference:** CLI format requires `"type": "stdio"` in each server definition.
## Common MCP Server Examples
### Memory (Knowledge Graph)
```json
// VS Code format
"memory": {
"command": "D:\\nodejs\\npx.cmd",
"args": ["-y", "@modelcontextprotocol/server-memory"]
}
// CLI format
"memory": {
"type": "stdio",
"command": "D:\\nodejs\\npx.cmd",
"args": ["-y", "@modelcontextprotocol/server-memory"],
"env": {}
}
```
### Filesystem
```json
// VS Code format
"filesystem": {
"command": "d:\\nodejs\\node.exe",
"args": [
"c:\\Users\\<user>\\AppData\\Roaming\\npm\\node_modules\\@modelcontextprotocol\\server-filesystem\\dist\\index.js",
"d:\\path\\to\\project"
]
}
// CLI format
"filesystem": {
"type": "stdio",
"command": "d:\\nodejs\\node.exe",
"args": [
"c:\\Users\\<user>\\AppData\\Roaming\\npm\\node_modules\\@modelcontextprotocol\\server-filesystem\\dist\\index.js",
"d:\\path\\to\\project"
],
"env": {}
}
```
### Podman/Docker
```json
// VS Code format
"podman": {
"command": "D:\\nodejs\\npx.cmd",
"args": ["-y", "podman-mcp-server@latest"],
"env": {
"DOCKER_HOST": "npipe:////./pipe/podman-machine-default"
}
}
```
### Gitea
```json
// VS Code format
"gitea-myserver": {
"command": "d:\\gitea-mcp\\gitea-mcp.exe",
"args": ["run", "-t", "stdio"],
"env": {
"GITEA_HOST": "https://gitea.example.com",
"GITEA_ACCESS_TOKEN": "your-token-here"
}
}
```
### Redis
```json
// VS Code format
"redis": {
"command": "D:\\nodejs\\npx.cmd",
"args": ["-y", "@modelcontextprotocol/server-redis", "redis://localhost:6379"]
}
```
### Bugsink (Error Tracking)
**Important:** Bugsink has a different API than Sentry. Use `bugsink-mcp`, NOT `sentry-selfhosted-mcp`.
**Note:** The `bugsink-mcp` npm package is NOT published. You must clone and build from source:
```bash
# Clone and build bugsink-mcp
git clone https://github.com/j-shelfwood/bugsink-mcp.git d:\gitea\bugsink-mcp
cd d:\gitea\bugsink-mcp
npm install
npm run build
```
```json
// VS Code format (using locally built version)
"bugsink": {
"command": "d:\\nodejs\\node.exe",
"args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
"env": {
"BUGSINK_URL": "https://bugsink.example.com",
"BUGSINK_TOKEN": "your-api-token"
}
}
// CLI format
"bugsink": {
"type": "stdio",
"command": "d:\\nodejs\\node.exe",
"args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
"env": {
"BUGSINK_URL": "https://bugsink.example.com",
"BUGSINK_TOKEN": "your-api-token"
}
}
```
- GitHub: <https://github.com/j-shelfwood/bugsink-mcp>
- Get token from Bugsink UI: Settings > API Tokens
- **Do NOT use npx** - the package is not on npm
### Sentry (Cloud or Self-hosted)
For actual Sentry instances (not Bugsink), use:
```json
"sentry": {
"command": "D:\\nodejs\\npx.cmd",
"args": ["-y", "@sentry/mcp-server"],
"env": {
"SENTRY_AUTH_TOKEN": "your-sentry-token"
}
}
```
## Troubleshooting
### Server Not Loading
1. **Check both config files** - Make sure the server is defined in both `~/.claude.json` AND `~/.claude/settings.json`
2. **Verify server order** - Servers load sequentially. Broken/slow servers can block others. Put important servers first.
3. **Check for timeout** - Each server has 30 seconds to connect. Slow npx downloads can cause timeouts.
4. **Fully restart VS Code** - Window reload is not enough. Close all VS Code windows and reopen.
### Verifying Configuration
**For CLI:**
```bash
claude mcp list
```
**For VS Code:**
1. Open VS Code
2. View → Output
3. Select "Claude" from the dropdown
4. Look for MCP server connection logs
### Common Errors
| Error | Cause | Solution |
| ------------------------------------ | ----------------------------- | --------------------------------------------------------------------------- |
| `Connection timed out after 30000ms` | Server took too long to start | Move server earlier in config, or use pre-installed packages instead of npx |
| `npm error 404 Not Found` | Package doesn't exist | Check package name spelling |
| `The system cannot find the path` | Wrong executable path | Verify the command path exists |
| `Connection closed` | Server crashed on startup | Check server logs, verify environment variables |
### Disabling Problem Servers
In `~/.claude/settings.json`, add `"disabled": true`:
```json
"problem-server": {
"command": "...",
"args": ["..."],
"disabled": true
}
```
**Note:** The CLI config (`~/.claude.json`) does not support the `disabled` flag. You must remove the server entirely from that file.
## Adding a New MCP Server
1. **Install/clone the MCP server** (if not using npx)
2. **Add to VS Code config** (`~/.claude/settings.json`):
```json
"new-server": {
"command": "path/to/command",
"args": ["arg1", "arg2"],
"env": { "VAR": "value" }
}
```
3. **Add to CLI config** (`~/.claude.json`) - find the `mcpServers` section:
```json
"new-server": {
"type": "stdio",
"command": "path/to/command",
"args": ["arg1", "arg2"],
"env": { "VAR": "value" }
}
```
4. **Fully restart VS Code**
5. **Verify with `claude mcp list`**
## Quick Reference: Available MCP Servers
| Server | Package/Repo | Purpose |
| ------------------- | -------------------------------------------------- | --------------------------- |
| memory | `@modelcontextprotocol/server-memory` | Knowledge graph persistence |
| filesystem | `@modelcontextprotocol/server-filesystem` | File system access |
| redis | `@modelcontextprotocol/server-redis` | Redis cache inspection |
| postgres | `@modelcontextprotocol/server-postgres` | PostgreSQL queries |
| sequential-thinking | `@modelcontextprotocol/server-sequential-thinking` | Step-by-step reasoning |
| podman | `podman-mcp-server` | Container management |
| gitea | `gitea-mcp` (binary) | Gitea API access |
| bugsink | `j-shelfwood/bugsink-mcp` (build from source) | Error tracking for Bugsink |
| sentry | `@sentry/mcp-server` | Error tracking for Sentry |
| playwright | `@anthropics/mcp-server-playwright` | Browser automation |
## Best Practices
1. **Keep configs in sync** - When you change one file, update the other
2. **Order servers by importance** - Put essential servers (memory, filesystem) first
3. **Disable instead of delete** - Use `"disabled": true` in settings.json to troubleshoot
4. **Use node.exe directly** - For faster startup, install packages globally and use `node.exe` instead of `npx`
5. **Store sensitive data in memory** - Use the memory MCP to store API tokens and config for future sessions
---
## Future: MCP Launchpad
**Project:** <https://github.com/kenneth-liao/mcp-launchpad>
MCP Launchpad is a CLI tool that wraps multiple MCP servers into a single interface. Worth revisiting when:
- [ ] Windows support is stable (currently experimental)
- [ ] Available as an MCP server itself (currently Bash-based)
**Why it's interesting:**
| Benefit | Description |
| ---------------------- | -------------------------------------------------------------- |
| Single config file | No more syncing `~/.claude.json` and `~/.claude/settings.json` |
| Project-level configs | Drop `mcp.json` in any project for instant MCP setup |
| Context window savings | One MCP server in context instead of 10+, reducing token usage |
| Persistent daemon | Keeps server connections alive for faster repeated calls |
| Tool search | Find tools across all servers with `mcpl search` |
**Current limitations:**
- Experimental Windows support
- Requires Python 3.13+ and uv
- Claude calls tools via Bash instead of native MCP integration
- Different mental model (runtime discovery vs startup loading)
---
## Future: Graphiti (Advanced Knowledge Graph)
**Project:** <https://github.com/getzep/graphiti>
Graphiti provides temporal-aware knowledge graphs - it tracks not just facts, but _when_ they became true/outdated. Much more powerful than simple memory MCP, but requires significant infrastructure.
**Ideal setup:** Run on a Linux server, connect via HTTP from Windows:
```json
// Windows client config (settings.json)
"graphiti": {
"type": "sse",
"url": "http://linux-server:8000/mcp/"
}
```
**Linux server setup:**
```bash
git clone https://github.com/getzep/graphiti.git
cd graphiti/mcp_server
docker compose up -d # Starts FalkorDB + MCP server on port 8000
```
**Requirements:**
- Docker on Linux server
- OpenAI API key (for embeddings)
- Port 8000 open on LAN
**Benefits of remote deployment:**
- Heavy lifting (Neo4j/FalkorDB + embeddings) offloaded to Linux
- Always-on server, Windows connects/disconnects freely
- Multiple machines can share the same knowledge graph
- Avoids Windows Docker/WSL2 complexity
---
_Last updated: January 2026_

CLAUDE.md

@@ -0,0 +1,321 @@
# Claude Code Project Instructions
## CRITICAL RULES (READ FIRST)
### Platform: Linux Only (ADR-014)
**ALL tests MUST run in dev container** - Windows results are unreliable.
| Windows | Container | Status                   |
| ------- | --------- | ------------------------ |
| Pass    | Fail      | **BROKEN** (must fix)    |
| Fail    | Pass      | **PASSING** (acceptable) |
```bash
# Always test in container
podman exec -it flyer-crawler-dev npm test
podman exec -it flyer-crawler-dev npm run type-check
```
### Database Schema Sync
**CRITICAL**: Keep these files synchronized:
- `sql/master_schema_rollup.sql` (test DB, complete reference)
- `sql/initial_schema.sql` (fresh install, identical to rollup)
- `sql/migrations/*.sql` (production ALTER TABLE statements)
Out-of-sync = test failures.
### Communication Style
Ask before assuming. Never assume:
- Steps completed / User knowledge / External services configured / Secrets created
---
## Session Startup Checklist
1. **Memory**: `mcp__memory__read_graph` - Recall project context, credentials, known issues
2. **Git**: `git log --oneline -10` - Recent changes
3. **Containers**: `mcp__podman__container_list` - Running state
---
## Quick Reference
### Essential Commands
| Command | Description |
| ------------------------------------------------------------ | ----------------- |
| `podman exec -it flyer-crawler-dev npm test` | Run all tests |
| `podman exec -it flyer-crawler-dev npm run test:unit` | Unit tests only |
| `podman exec -it flyer-crawler-dev npm run type-check` | TypeScript check |
| `podman exec -it flyer-crawler-dev npm run test:integration` | Integration tests |
### Key Patterns (with file locations)
| Pattern | ADR | Implementation | File |
| ------------------ | ------- | ------------------------------------------------- | ----------------------------------- |
| Error Handling | ADR-001 | `handleDbError()`, throw `NotFoundError` | `src/services/db/errors.db.ts` |
| Repository Methods | ADR-034 | `get*` (throws), `find*` (null), `list*` (array) | `src/services/db/*.db.ts` |
| API Responses | ADR-028 | `sendSuccess()`, `sendPaginated()`, `sendError()` | `src/utils/apiResponse.ts` |
| Transactions | ADR-002 | `withTransaction(async (client) => {...})` | `src/services/db/transaction.db.ts` |
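For example, the transaction pattern from the table wraps repository calls so they commit or roll back together (a sketch — `withTransaction` is the project helper from `src/services/db/transaction.db.ts`, with its exact signature assumed; the two repository functions are hypothetical):

```typescript
import type { PoolClient } from 'pg';
// Project helper named in the table; import path per the table above.
import { withTransaction } from './src/services/db/transaction.db';

// Hypothetical repository functions that accept an optional PoolClient (ADR-034).
declare function updateStoreName(id: number, name: string, client?: PoolClient): Promise<void>;
declare function insertAuditEntry(event: string, storeId: number, client?: PoolClient): Promise<void>;

// Both writes share one client: either both commit or both roll back.
export async function renameStore(storeId: number, name: string): Promise<void> {
  await withTransaction(async (client: PoolClient) => {
    await updateStoreName(storeId, name, client);
    await insertAuditEntry('store.renamed', storeId, client);
  });
}
```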
### Key Files Quick Access
| Purpose | File |
| ------------ | -------------------------------- |
| Express app | `server.ts` |
| Environment | `src/config/env.ts` |
| Routes | `src/routes/*.routes.ts` |
| Repositories | `src/services/db/*.db.ts` |
| Workers | `src/services/workers.server.ts` |
| Queues | `src/services/queues.server.ts` |
---
## Application Overview
**Flyer Crawler** - AI-powered grocery deal extraction and analysis platform.
**Data Flow**: Upload → AI extraction (Gemini) → PostgreSQL → Cache (Redis) → API → React display
**Architecture** (ADR-035):
```text
Routes → Services → Repositories → Database
External APIs (*.server.ts)
```
**Key Entities**: Flyers, FlyerItems, Stores, StoreLocations, Users, Watchlists, ShoppingLists, Recipes, Achievements
**Full Architecture**: See [docs/architecture/OVERVIEW.md](docs/architecture/OVERVIEW.md)
---
## Common Workflows
### Adding a New API Endpoint
1. Add route in `src/routes/{domain}.routes.ts`
2. Use `validateRequest(schema)` middleware for input validation
3. Call service layer (never access DB directly from routes)
4. Return via `sendSuccess()` or `sendPaginated()`
5. Add tests in `*.routes.test.ts`
**Example Pattern**: See [docs/development/CODE-PATTERNS.md](docs/development/CODE-PATTERNS.md)
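A minimal sketch of that flow (helper names come from the patterns table above; the import paths, route, schema shape, and service call are illustrative assumptions, not project code):

```typescript
import { Router } from 'express';
import { z } from 'zod';
// Helper names from the patterns table; these import paths are assumptions.
import { validateRequest } from './src/middleware/validateRequest';
import { sendSuccess } from './src/utils/apiResponse';
import { getStoreById } from './src/services/store.service';

const router = Router();

// Illustrative schema for input validation (step 2).
const getStoreSchema = z.object({ params: z.object({ id: z.coerce.number().int() }) });

router.get('/stores/:id', validateRequest(getStoreSchema), async (req, res, next) => {
  try {
    // Routes call the service layer, never the DB directly (step 3, ADR-035).
    const store = await getStoreById(Number(req.params.id));
    sendSuccess(res, store); // step 4, ADR-028 response envelope
  } catch (err) {
    next(err); // central handler maps NotFoundError etc. (ADR-001)
  }
});

export default router;
```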
### Adding a New Database Operation
1. Add method to `src/services/db/{domain}.db.ts`
2. Follow naming: `get*` (throws), `find*` (returns null), `list*` (array)
3. Use `handleDbError()` for error handling
4. Accept optional `PoolClient` for transaction support
5. Add unit test
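A sketch of steps 2–4 (`handleDbError` and `NotFoundError` are the helpers from `src/services/db/errors.db.ts`, with their exact signatures assumed; the `Store` shape, table, and columns are illustrative):

```typescript
import { Pool, type PoolClient } from 'pg';
// Project helpers named above; exact signatures are assumptions.
import { handleDbError, NotFoundError } from './src/services/db/errors.db';

const pool = new Pool(); // the real repositories share a configured pool

export interface Store { id: number; name: string; } // illustrative shape

// find*: resolves to null when the row is missing.
export async function findStoreById(id: number, client?: PoolClient): Promise<Store | null> {
  const db = client ?? pool; // optional PoolClient enables transaction support (step 4)
  try {
    const { rows } = await db.query('SELECT id, name FROM stores WHERE id = $1', [id]);
    return rows[0] ?? null;
  } catch (err) {
    handleDbError(err); // assumed to rethrow a typed error (step 3, ADR-001)
    throw err; // unreachable if handleDbError throws; keeps the compiler satisfied
  }
}

// get*: throws NotFoundError when the row is missing.
export async function getStoreById(id: number, client?: PoolClient): Promise<Store> {
  const store = await findStoreById(id, client);
  if (!store) throw new NotFoundError(`Store ${id} not found`);
  return store;
}
```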
### Adding a Background Job
1. Define queue in `src/services/queues.server.ts`
2. Add worker in `src/services/workers.server.ts`
3. Call `queue.add()` from service layer
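A sketch of those three steps in one place (the queue name, payload, and Redis URL are illustrative; `Queue`/`Worker` are the real BullMQ exports, and `maxRetriesPerRequest: null` is BullMQ's documented ioredis requirement):

```typescript
import { Queue, Worker } from 'bullmq';
import IORedis from 'ioredis';

const connection = new IORedis(process.env.REDIS_URL ?? 'redis://localhost:6379/0', {
  maxRetriesPerRequest: null, // required by BullMQ workers
});

// 1. Define the queue (really done in src/services/queues.server.ts).
export const reportQueue = new Queue('nightly-report', { connection });

// 2. Add the worker (really done in src/services/workers.server.ts).
export const reportWorker = new Worker(
  'nightly-report',
  async (job) => {
    console.log('Generating report for', job.data.date);
  },
  { connection },
);

// 3. Enqueue from the service layer.
export async function scheduleNightlyReport(date: string): Promise<void> {
  await reportQueue.add('generate', { date });
}
```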
---
## Subagent Delegation Guide
**When to Delegate**: Complex work, specialized expertise, multi-domain tasks
### Decision Matrix
| Task Type | Subagent | Key Docs |
| --------------------- | ----------------------- | ----------------------------------------------------------------- |
| Write production code | coder | [CODER-GUIDE.md](docs/subagents/CODER-GUIDE.md) |
| Database changes | db-dev | [DATABASE-GUIDE.md](docs/subagents/DATABASE-GUIDE.md) |
| Create tests | testwriter | [TESTER-GUIDE.md](docs/subagents/TESTER-GUIDE.md) |
| Fix failing tests | tester | [TESTER-GUIDE.md](docs/subagents/TESTER-GUIDE.md) |
| Container/deployment | devops | [DEVOPS-GUIDE.md](docs/subagents/DEVOPS-GUIDE.md) |
| UI components | frontend-specialist | [FRONTEND-GUIDE.md](docs/subagents/FRONTEND-GUIDE.md) |
| External APIs | integrations-specialist | [INTEGRATIONS-GUIDE.md](docs/subagents/INTEGRATIONS-GUIDE.md) |
| Security review | security-engineer | [SECURITY-DEBUG-GUIDE.md](docs/subagents/SECURITY-DEBUG-GUIDE.md) |
| Production errors | log-debug | [SECURITY-DEBUG-GUIDE.md](docs/subagents/SECURITY-DEBUG-GUIDE.md) |
| AI/Gemini issues | ai-usage | [AI-USAGE-GUIDE.md](docs/subagents/AI-USAGE-GUIDE.md) |
| Planning features | planner | [DOCUMENTATION-GUIDE.md](docs/subagents/DOCUMENTATION-GUIDE.md) |
**All Subagents**: See [docs/subagents/OVERVIEW.md](docs/subagents/OVERVIEW.md)
**Launch Pattern**:
```
Use Task tool with subagent_type: "coder", "db-dev", "tester", etc.
```
---
## Known Issues & Gotchas
### Integration Test Issues (Summary)
Common issues with solutions:
1. **Vitest globalSetup context isolation** - Mocks/spies don't share instances → Mark `.todo()` or use Redis-based flags
2. **Cleanup queue interference** - Worker processes jobs during tests → `cleanupQueue.drain()` and `.pause()`
3. **Cache staleness** - Direct SQL bypasses cache → `cacheService.invalidateFlyers()` after inserts
4. **Filename collisions** - Multer predictable names → Use `${Date.now()}-${Math.round(Math.random() * 1e9)}`
5. **Response format mismatches** - API format changes → Log response bodies, update assertions
6. **External service failures** - PM2/Redis unavailable → try/catch with graceful degradation
**Full Details**: See test issues section at end of this document or [docs/development/TESTING.md](docs/development/TESTING.md)
### Git Bash Path Conversion (Windows)
Git Bash auto-converts Unix paths, breaking container commands.
**Solutions**:
```bash
# Use sh -c with single quotes
podman exec container sh -c '/usr/local/bin/script.sh'
# Use MSYS_NO_PATHCONV=1
MSYS_NO_PATHCONV=1 podman exec container /path/to/script
# Use Windows paths for host files
podman cp "d:/path/file" container:/tmp/file
```
---
## Configuration & Environment
### Environment Variables
**See**: [docs/getting-started/ENVIRONMENT.md](docs/getting-started/ENVIRONMENT.md) for complete reference.
**Quick Overview**:
- **Production**: Gitea CI/CD secrets only (no `.env` file)
- **Test**: Gitea secrets + `.env.test` overrides
- **Dev**: `.env.local` file (overrides `compose.dev.yml`)
**Key Variables**: `DB_HOST`, `DB_USER`, `DB_PASSWORD`, `JWT_SECRET`, `VITE_GOOGLE_GENAI_API_KEY`, `REDIS_URL`
**Adding Variables**: Update `src/config/env.ts`, Gitea Secrets, workflows, `ecosystem.config.cjs`, `.env.example`
### MCP Servers
**See**: [docs/tools/MCP-CONFIGURATION.md](docs/tools/MCP-CONFIGURATION.md) for setup.
**Quick Overview**:
| Server | Purpose | Config |
| -------------------------- | -------------------- | ---------------------- |
| gitea-projectium/torbonium | Gitea API | Global `settings.json` |
| podman | Container management | Global `settings.json` |
| memory | Knowledge graph | Global `settings.json` |
| redis | Cache access | Global `settings.json` |
| bugsink | Prod error tracking | Global `settings.json` |
| localerrors | Dev Bugsink | Project `.mcp.json` |
| devdb | Dev PostgreSQL | Project `.mcp.json` |
**Note**: Localhost servers use project `.mcp.json` due to Windows/loader issues.
### Bugsink Error Tracking
**See**: [docs/tools/BUGSINK-SETUP.md](docs/tools/BUGSINK-SETUP.md) for setup.
**Quick Access**:
- **Dev**: https://localhost:8443 (`admin@localhost`/`admin`)
- **Prod**: https://bugsink.projectium.com
**Token Creation** (required for MCP):
```bash
# Dev container
MSYS_NO_PATHCONV=1 podman exec -e DATABASE_URL=postgresql://bugsink:bugsink_dev_password@postgres:5432/bugsink -e SECRET_KEY=dev-bugsink-secret-key-minimum-50-characters-for-security flyer-crawler-dev sh -c 'cd /opt/bugsink/conf && DJANGO_SETTINGS_MODULE=bugsink_conf PYTHONPATH=/opt/bugsink/conf:/opt/bugsink/lib/python3.10/site-packages /opt/bugsink/bin/python -m django create_auth_token'
# Production (via SSH)
ssh root@projectium.com "cd /opt/bugsink && bugsink-manage create_auth_token"
```
### Logstash
**See**: [docs/operations/LOGSTASH-QUICK-REF.md](docs/operations/LOGSTASH-QUICK-REF.md)
Log aggregation: PostgreSQL + PM2 + Redis + NGINX → Bugsink (ADR-050)
---
## Documentation Quick Links
| Topic | Document |
| ------------------- | ----------------------------------------------------- |
| **Getting Started** | [QUICKSTART.md](docs/getting-started/QUICKSTART.md) |
| **Architecture** | [OVERVIEW.md](docs/architecture/OVERVIEW.md) |
| **Code Patterns** | [CODE-PATTERNS.md](docs/development/CODE-PATTERNS.md) |
| **Testing** | [TESTING.md](docs/development/TESTING.md) |
| **Debugging** | [DEBUGGING.md](docs/development/DEBUGGING.md) |
| **Database** | [DATABASE.md](docs/architecture/DATABASE.md) |
| **Deployment** | [DEPLOYMENT.md](docs/operations/DEPLOYMENT.md) |
| **Monitoring** | [MONITORING.md](docs/operations/MONITORING.md) |
| **ADRs** | [docs/adr/index.md](docs/adr/index.md) |
| **All Docs** | [docs/README.md](docs/README.md) |
---
## Appendix: Integration Test Issues (Full Details)
### 1. Vitest globalSetup Context Isolation
Vitest's `globalSetup` runs in a separate Node.js context. Singletons, spies, and mocks do NOT share instances with test files.
**Affected**: BullMQ worker service mocks (AI/DB failure tests)
**Solutions**: Mark `.todo()`, create test-only API endpoints, use Redis-based mock flags
```typescript
// DOES NOT WORK - different instances
const { flyerProcessingService } = await import('../../services/workers.server');
flyerProcessingService._getAiProcessor()._setExtractAndValidateData(mockFn);
```
### 2. Cleanup Queue Deletes Before Verification
The cleanup worker processes jobs in the globalSetup context, ignoring test spies.
**Solution**: Drain and pause queue:
```typescript
const { cleanupQueue } = await import('../../services/queues.server');
await cleanupQueue.drain();
await cleanupQueue.pause();
// ... test ...
await cleanupQueue.resume();
```
### 3. Cache Stale After Direct SQL
Direct `pool.query()` inserts bypass cache invalidation.
**Solution**: `await cacheService.invalidateFlyers();` after inserts
### 4. Test Filename Collisions
Multer's predictable filenames cause race conditions.
**Solution**: Use unique suffix: `${Date.now()}-${Math.round(Math.random() * 1e9)}`
### 5. Response Format Mismatches
API formats change: `data.jobId` vs `data.job.id`, nested vs flat, string vs number IDs.
**Solution**: Log response bodies, update assertions
### 6. External Service Availability
PM2/Redis health checks fail when unavailable.
**Solution**: try/catch with graceful degradation or mock
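One way to implement that degradation (an illustrative wrapper, not project code):

```typescript
// Illustrative: treat an unreachable external service as "unhealthy" instead of
// letting the thrown connection error fail the whole test run.
async function safeHealthCheck(name: string, check: () => Promise<boolean>): Promise<boolean> {
  try {
    return await check();
  } catch (err) {
    console.warn(`${name} health check unavailable:`, (err as Error).message);
    return false;
  }
}
```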

CLAUDE.md.backup

@@ -0,0 +1,660 @@
# Claude Code Project Instructions
## Session Startup Checklist
**IMPORTANT**: At the start of every session, perform these steps:
1. **Check Memory First** - Use `mcp__memory__read_graph` or `mcp__memory__search_nodes` to recall:
- Project-specific configurations and credentials
- Previous work context and decisions
- Infrastructure details (URLs, ports, access patterns)
- Known issues and their solutions
2. **Review Recent Git History** - Check `git log --oneline -10` to understand recent changes
3. **Check Container Status** - Use `mcp__podman__container_list` to see what's running
---
## Project Instructions
### Things to Remember
Before writing any code:
1. State how you will verify this change works (test, bash command, browser check, etc.)
2. Write the test or verification step first
3. Then implement the code
4. Run verification and iterate until it passes
## Git Bash / MSYS Path Conversion Issue (Windows Host)
**CRITICAL ISSUE**: Git Bash on Windows automatically converts Unix-style paths to Windows paths, which breaks Podman/Docker commands.
### Problem Examples:
```bash
# This FAILS in Git Bash:
podman exec container /usr/local/bin/script.sh
# Git Bash converts to: C:/Program Files/Git/usr/local/bin/script.sh
# This FAILS in Git Bash:
podman exec container bash -c "cat /tmp/file.sql"
# Git Bash converts /tmp to C:/Users/user/AppData/Local/Temp
```
### Solutions:
1. **Use `sh -c` instead of `bash -c`** for single-quoted commands:
```bash
podman exec container sh -c '/usr/local/bin/script.sh'
```
2. **Use double slashes** to escape path conversion:
```bash
podman exec container //usr//local//bin//script.sh
```
3. **Set MSYS_NO_PATHCONV** environment variable:
```bash
MSYS_NO_PATHCONV=1 podman exec container /usr/local/bin/script.sh
```
4. **Use Windows paths with forward slashes** when referencing host files:
```bash
podman cp "d:/path/to/file" container:/tmp/file
```
**ALWAYS use one of these workarounds when running Bash commands on Windows that involve Unix paths inside containers.**
## Communication Style: Ask Before Assuming
**IMPORTANT**: When helping with tasks, **ask clarifying questions before making assumptions**. Do not assume:
- What steps the user has or hasn't completed
- What the user already knows or has configured
- What external services (OAuth providers, APIs, etc.) are already set up
- What secrets or credentials have already been created
Instead, ask the user to confirm the current state before providing instructions or making recommendations. This prevents wasted effort and respects the user's existing work.
## Platform Requirement: Linux Only
**CRITICAL**: This application is designed to run **exclusively on Linux**. See [ADR-014](docs/adr/0014-containerization-and-deployment-strategy.md) for full details.
### Environment Terminology
- **Dev Container** (or just "dev"): The containerized Linux development environment (`flyer-crawler-dev`). This is where all development and testing should occur.
- **Host**: The Windows machine running Podman/Docker and VS Code.
When instructions say "run in dev" or "run in the dev container", they mean executing commands inside the `flyer-crawler-dev` container.
### Test Execution Rules
1. **ALL tests MUST be executed in the dev container** - the Linux container environment
2. **NEVER run tests directly on Windows host** - test results from Windows are unreliable
3. **Always use the dev container for testing** when developing on Windows
4. **TypeScript type-check MUST run in dev container** - `npm run type-check` on Windows does not reliably detect errors
See [docs/TESTING.md](docs/TESTING.md) for comprehensive testing documentation.
### How to Run Tests Correctly
```bash
# If on Windows, first open VS Code and "Reopen in Container"
# Then run tests inside the dev container:
npm test # Run all unit tests
npm run test:unit # Run unit tests only
npm run test:integration # Run integration tests (requires DB/Redis)
```
### Running Tests via Podman (from Windows host)
**Note:** This project has 2900+ unit tests. For AI-assisted development, pipe output to a file for easier processing.
The command to run unit tests in the dev container via podman:
```bash
# Basic (output to terminal)
podman exec -it flyer-crawler-dev npm run test:unit
# Recommended for AI processing: pipe to file
podman exec -it flyer-crawler-dev npm run test:unit 2>&1 | tee test-results.txt
```
The command to run integration tests in the dev container via podman:
```bash
podman exec -it flyer-crawler-dev npm run test:integration
```
For running specific test files:
```bash
podman exec -it flyer-crawler-dev npm test -- --run src/hooks/useAuth.test.tsx
```
### Why Linux Only?
- Path separators: Code uses POSIX-style paths (`/`) which may break on Windows
- Shell scripts in `scripts/` directory are Linux-only
- External dependencies like `pdftocairo` assume Linux installation paths
- Unix-style file permissions are assumed throughout
### Test Result Interpretation
- Tests that **pass on Windows but fail on Linux** = **BROKEN tests** (must be fixed)
- Tests that **fail on Windows but pass on Linux** = **PASSING tests** (acceptable)
## Development Workflow
1. Open project in VS Code
2. Use "Reopen in Container" (Dev Containers extension required) to enter the dev environment
3. Wait for dev container initialization to complete
4. Run `npm test` to verify the dev environment is working
5. Make changes and run tests inside the dev container
## Code Change Verification
After making any code changes, **always run a type-check** to catch TypeScript errors before committing:
```bash
npm run type-check
```
This prevents linting/type errors from being introduced into the codebase.
## Quick Reference
| Command | Description |
| -------------------------- | ---------------------------- |
| `npm test` | Run all unit tests |
| `npm run test:unit` | Run unit tests only |
| `npm run test:integration` | Run integration tests |
| `npm run dev:container` | Start dev server (container) |
| `npm run build` | Build for production |
| `npm run type-check` | Run TypeScript type checking |
## Database Schema Files
**CRITICAL**: The database schema files must be kept in sync with each other. When making schema changes:
| File | Purpose |
| ------------------------------ | ----------------------------------------------------------- |
| `sql/master_schema_rollup.sql` | Complete schema used by test database setup and reference |
| `sql/initial_schema.sql` | Base schema without seed data, used as standalone reference |
| `sql/migrations/*.sql` | Incremental migrations for production database updates |
**Maintenance Rules:**
1. **Keep `master_schema_rollup.sql` and `initial_schema.sql` in sync** - These files should contain the same table definitions
2. **When adding columns via migration**, also add them to both `master_schema_rollup.sql` and `initial_schema.sql`
3. **Migrations are for production deployments** - They use `ALTER TABLE` to add columns incrementally
4. **Schema files are for fresh installs** - They define the complete table structure
5. **Test database uses `master_schema_rollup.sql`** - If schema files are out of sync with migrations, tests will fail
**Example:** When `002_expiry_tracking.sql` adds `purchase_date` to `pantry_items`, that column must also exist in the `CREATE TABLE` statements in both `master_schema_rollup.sql` and `initial_schema.sql`.
## Known Integration Test Issues and Solutions
This section documents common test issues encountered in integration tests, their root causes, and solutions. These patterns recur frequently.
### 1. Vitest globalSetup Runs in Separate Node.js Context
**Problem:** Vitest's `globalSetup` runs in a completely separate Node.js context from test files. This means:
- Singletons created in globalSetup are NOT the same instances as those in test files
- `global`, `globalThis`, and `process` are all isolated between contexts
- `vi.spyOn()` on module exports doesn't work cross-context
- Dependency injection via setter methods fails across contexts
**Affected Tests:** Any test trying to inject mocks into BullMQ worker services (e.g., AI failure tests, DB failure tests)
**Solution Options:**
1. Mark tests as `.todo()` until an API-based mock injection mechanism is implemented
2. Create test-only API endpoints that allow setting mock behaviors via HTTP
3. Use file-based or Redis-based mock flags that services check at runtime
**Example of affected code pattern:**
```typescript
// This DOES NOT work - different module instances
const { flyerProcessingService } = await import('../../services/workers.server');
flyerProcessingService._getAiProcessor()._setExtractAndValidateData(mockFn);
// The worker uses a different flyerProcessingService instance!
```
### 2. BullMQ Cleanup Queue Deleting Files Before Test Verification
**Problem:** The cleanup worker runs in the globalSetup context and processes cleanup jobs even when tests spy on `cleanupQueue.add()`. The spy intercepts calls in the test context, but jobs already queued run in the worker's context.
**Affected Tests:** EXIF/PNG metadata stripping tests that need to verify file contents before deletion
**Solution:** Drain and pause the cleanup queue before the test:
```typescript
const { cleanupQueue } = await import('../../services/queues.server');
await cleanupQueue.drain(); // Remove existing jobs
await cleanupQueue.pause(); // Prevent new jobs from processing
// ... run test ...
await cleanupQueue.resume(); // Restore normal operation
```
### 3. Cache Invalidation After Direct Database Inserts
**Problem:** Tests that insert data directly via SQL (bypassing the service layer) don't trigger cache invalidation. Subsequent API calls return stale cached data.
**Affected Tests:** Any test using `pool.query()` to insert flyers, stores, or other cached entities
**Solution:** Manually invalidate the cache after direct inserts:
```typescript
await pool.query('INSERT INTO flyers ...');
await cacheService.invalidateFlyers(); // Clear stale cache
```
### 4. Unique Filenames Required for Test Isolation
**Problem:** Multer generates predictable filenames in test environments, causing race conditions when multiple tests upload files concurrently or in sequence.
**Affected Tests:** Flyer processing tests, file upload tests
**Solution:** Always use unique filenames with timestamps:
```typescript
// In multer.middleware.ts
const uniqueSuffix = `${Date.now()}-${Math.round(Math.random() * 1e9)}`;
cb(null, `${file.fieldname}-${uniqueSuffix}-${sanitizedOriginalName}`);
```
### 5. Response Format Mismatches
**Problem:** API response formats may change, causing tests to fail when expecting old formats.
**Common Issues:**
- `response.body.data.jobId` vs `response.body.data.job.id`
- Nested objects vs flat response structures
- Type coercion (string vs number for IDs)
**Solution:** Always log response bodies during debugging and update test assertions to match actual API contracts.
### 6. External Service Availability
**Problem:** Tests depending on external services (PM2, Redis health checks) fail when those services aren't available in the test environment.
**Solution:** Use try/catch with graceful degradation or mock the external service checks.
## Secrets and Environment Variables
**CRITICAL**: This project uses **Gitea CI/CD secrets** for all sensitive configuration. There is NO `/etc/flyer-crawler/environment` file or similar local config file on the server.
### Server Directory Structure
| Path | Environment | Notes |
| --------------------------------------------- | ----------- | ------------------------------------------------ |
| `/var/www/flyer-crawler.projectium.com/` | Production | NO `.env` file - secrets injected via CI/CD only |
| `/var/www/flyer-crawler-test.projectium.com/` | Test | Has `.env.test` file for test-specific config |
### How Secrets Work
1. **Gitea Secrets**: All secrets are stored in Gitea repository settings (Settings → Secrets)
2. **CI/CD Injection**: Secrets are injected during deployment via `.gitea/workflows/deploy-to-prod.yml` and `deploy-to-test.yml`
3. **PM2 Environment**: The CI/CD workflow passes secrets to PM2 via environment variables, which are then available to the application
### Key Files for Configuration
| File | Purpose |
| ------------------------------------- | ---------------------------------------------------- |
| `src/config/env.ts` | Centralized config with Zod schema validation |
| `ecosystem.config.cjs` | PM2 process config - reads from `process.env` |
| `.gitea/workflows/deploy-to-prod.yml` | Production deployment with secret injection |
| `.gitea/workflows/deploy-to-test.yml` | Test deployment with secret injection |
| `.env.example` | Template showing all available environment variables |
| `.env.test` | Test environment overrides (only on test server) |
### Adding New Secrets
To add a new secret (e.g., `SENTRY_DSN`):
1. Add the secret to Gitea repository settings
2. Update the relevant workflow file (e.g., `deploy-to-prod.yml`) to inject it:
```yaml
SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
```
3. Update `ecosystem.config.cjs` to read it from `process.env`
4. Update `src/config/env.ts` schema if validation is needed
5. Update `.env.example` to document the new variable
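For step 4, the schema change is usually a one-liner; a sketch (the real shape of `src/config/env.ts` may differ):
```typescript
import { z } from 'zod';

const envSchema = z.object({
  // ...existing variables...
  SENTRY_DSN: z.string().url().optional(), // optional so environments without Bugsink still boot
});

export const env = envSchema.parse(process.env);
```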
### Current Gitea Secrets
**Shared (used by both environments):**
- `DB_HOST` - Database host (shared PostgreSQL server)
- `JWT_SECRET` - Authentication
- `GOOGLE_MAPS_API_KEY` - Google Maps
- `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET` - Google OAuth
- `GH_CLIENT_ID`, `GH_CLIENT_SECRET` - GitHub OAuth
- `SENTRY_AUTH_TOKEN` - Bugsink API token for source map uploads (create at Settings > API Keys in Bugsink)
**Production-specific:**
- `DB_USER_PROD`, `DB_PASSWORD_PROD` - Production database credentials (`flyer_crawler_prod`)
- `DB_DATABASE_PROD` - Production database name (`flyer-crawler`)
- `REDIS_PASSWORD_PROD` - Redis password (uses database 0)
- `VITE_GOOGLE_GENAI_API_KEY` - Gemini API key for production
- `SENTRY_DSN`, `VITE_SENTRY_DSN` - Bugsink error tracking DSNs (production projects)
**Test-specific:**
- `DB_USER_TEST`, `DB_PASSWORD_TEST` - Test database credentials (`flyer_crawler_test`)
- `DB_DATABASE_TEST` - Test database name (`flyer-crawler-test`)
- `REDIS_PASSWORD_TEST` - Redis password (uses database 1 for isolation)
- `VITE_GOOGLE_GENAI_API_KEY_TEST` - Gemini API key for test
- `SENTRY_DSN_TEST`, `VITE_SENTRY_DSN_TEST` - Bugsink error tracking DSNs (test projects)
### Test Environment
The test environment (`flyer-crawler-test.projectium.com`) uses **both** Gitea CI/CD secrets and a local `.env.test` file:
- **Gitea secrets**: Injected during deployment via `.gitea/workflows/deploy-to-test.yml`
- **`.env.test` file**: Located at `/var/www/flyer-crawler-test.projectium.com/.env.test` for local overrides
- **Redis database 1**: Isolates test job queues from production (which uses database 0)
- **PM2 process names**: Suffixed with `-test` (e.g., `flyer-crawler-api-test`)
### Database User Setup (Test Environment)
**CRITICAL**: The test database requires specific PostgreSQL permissions to be configured manually. Schema ownership alone is NOT sufficient - explicit privileges must be granted.
**Database Users:**
| User | Database | Purpose |
| -------------------- | -------------------- | ---------- |
| `flyer_crawler_prod` | `flyer-crawler-prod` | Production |
| `flyer_crawler_test` | `flyer-crawler-test` | Testing |
**Required Setup Commands** (run as `postgres` superuser):
```bash
# Connect as postgres superuser
sudo -u postgres psql
```

```sql
-- Create the test database and user (if not exists)
CREATE DATABASE "flyer-crawler-test";
CREATE USER flyer_crawler_test WITH PASSWORD 'your-password-here';

-- Grant ownership and privileges
ALTER DATABASE "flyer-crawler-test" OWNER TO flyer_crawler_test;
\c "flyer-crawler-test"
ALTER SCHEMA public OWNER TO flyer_crawler_test;
GRANT CREATE, USAGE ON SCHEMA public TO flyer_crawler_test;

-- Create required extension (must be done by superuser)
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
```
**Why These Steps Are Necessary:**
1. **Schema ownership alone is insufficient** - PostgreSQL requires explicit `GRANT CREATE, USAGE` privileges even when the user owns the schema
2. **uuid-ossp extension** - Required by the application for UUID generation; must be created by a superuser before the app can use it
3. **Separate users for prod/test** - Prevents accidental cross-environment data access; each environment has its own credentials in Gitea secrets
**Verification:**
```bash
# Check schema privileges (should show 'UC' for flyer_crawler_test)
psql -d "flyer-crawler-test" -c "\dn+ public"
# Expected output:
# Name | Owner | Access privileges
# -------+--------------------+------------------------------------------
# public | flyer_crawler_test | flyer_crawler_test=UC/flyer_crawler_test
```
### Dev Container Environment
The dev container runs its own **local Bugsink instance** - it does NOT connect to the production Bugsink server:
- **Local Bugsink UI**: Accessible at `https://localhost:8443` (proxied from `http://localhost:8000` by nginx)
- **Admin credentials**: `admin@localhost` / `admin`
- **Bugsink Projects**: Backend (Dev) - Project ID 1, Frontend (Dev) - Project ID 2
- **Configuration Files**:
- `compose.dev.yml` - Sets default DSNs using `127.0.0.1:8000` protocol (for initial container setup)
- `.env.local` - **OVERRIDES** compose.dev.yml with `localhost:8000` protocol (this is what the app actually uses)
- **CRITICAL**: `.env.local` takes precedence over `compose.dev.yml` environment variables
- **DSN Configuration**:
- **Backend DSN** (Node.js/Express): Configured in `.env.local` as `SENTRY_DSN=http://<key>@localhost:8000/1`
- **Frontend DSN** (React/Browser): Configured in `.env.local` as `VITE_SENTRY_DSN=http://<key>@localhost:8000/2`
- **Why localhost instead of 127.0.0.1?** The `.env.local` file was created separately and uses `localhost`, which works fine in practice
- **HTTPS Setup**: Self-signed certificates auto-generated with mkcert on container startup (for UI access only, not for Sentry SDK)
- **CSRF Protection**: Django configured with `SECURE_PROXY_SSL_HEADER` to trust `X-Forwarded-Proto` from nginx
- **Isolated**: Dev errors stay local, don't pollute production/test dashboards
- **No Gitea secrets needed**: Everything is self-contained in the container
- **Accessing Errors**:
- **Via Browser**: Open `https://localhost:8443` and login to view issues
- **Via MCP**: Configure a second Bugsink MCP server pointing to `http://localhost:8000` (see MCP Servers section below)
---
## MCP Servers
The following MCP servers are configured for this project:
| Server | Purpose |
| ------------------- | ---------------------------------------------------------------------------- |
| gitea-projectium | Gitea API for gitea.projectium.com |
| gitea-torbonium | Gitea API for gitea.torbonium.com |
| podman | Container management |
| filesystem | File system access |
| fetch | Web fetching |
| markitdown | Convert documents to markdown |
| sequential-thinking | Step-by-step reasoning |
| memory | Knowledge graph persistence |
| postgres | Direct database queries (localhost:5432) |
| playwright | Browser automation and testing |
| redis | Redis cache inspection (localhost:6379) |
| bugsink | Error tracking - production Bugsink (bugsink.projectium.com) - **PROD/TEST** |
| bugsink-dev | Error tracking - dev container Bugsink (localhost:8000) - **DEV CONTAINER** |
**Note:** MCP servers work in both **Claude CLI** and **Claude Code VS Code extension** (as of January 2026).
**CRITICAL**: There are **TWO separate Bugsink MCP servers**:
- **bugsink**: Connects to production Bugsink at `https://bugsink.projectium.com` for production and test server errors
- **bugsink-dev**: Connects to local dev container Bugsink at `http://localhost:8000` for local development errors
### Bugsink MCP Server Setup (ADR-015)
**IMPORTANT**: You need to configure **TWO separate MCP servers** - one for production/test, one for local dev.
#### Installation (shared for both servers)
```bash
# Clone the bugsink-mcp repository (NOT sentry-selfhosted-mcp)
git clone https://github.com/j-shelfwood/bugsink-mcp.git
cd bugsink-mcp
npm install
npm run build
```
#### Production/Test Bugsink MCP (bugsink)
Add to `.claude/mcp.json`:
```json
{
"bugsink": {
"command": "node",
"args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
"env": {
"BUGSINK_URL": "https://bugsink.projectium.com",
"BUGSINK_API_TOKEN": "<get-from-production-bugsink>",
"BUGSINK_ORG_SLUG": "sentry"
}
}
}
```
**Get the auth token**:
- Navigate to https://bugsink.projectium.com
- Log in with production credentials
- Go to Settings > API Keys
- Create a new API key with read access
#### Dev Container Bugsink MCP (bugsink-dev)
Add to `.claude/mcp.json`:
```json
{
"bugsink-dev": {
"command": "node",
"args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
"env": {
"BUGSINK_URL": "http://localhost:8000",
"BUGSINK_API_TOKEN": "<get-from-local-bugsink>",
"BUGSINK_ORG_SLUG": "sentry"
}
}
}
```
**Get the auth token**:
- Navigate to http://localhost:8000 (or https://localhost:8443)
- Log in with `admin@localhost` / `admin`
- Go to Settings > API Keys
- Create a new API key with read access
#### MCP Tool Usage
When using Bugsink MCP tools, remember:
- `mcp__bugsink__*` tools connect to **production/test** Bugsink
- `mcp__bugsink-dev__*` tools connect to **dev container** Bugsink
- Available capabilities for both:
- List projects and issues
- View detailed error events and stacktraces
- Search by error message or stack trace
- Update issue status (resolve, ignore)
- Create releases
### SSH Server Access
Claude Code can execute commands on the production server via SSH:
```bash
# Basic command execution
ssh root@projectium.com "command here"
# Examples:
ssh root@projectium.com "systemctl status logstash"
ssh root@projectium.com "pm2 list"
ssh root@projectium.com "tail -50 /var/www/flyer-crawler.projectium.com/logs/app.log"
```
**Use cases:**
- Managing Logstash, PM2, NGINX, Redis services
- Viewing server logs
- Deploying configuration changes
- Checking service status
**Important:** SSH access requires the host machine to have SSH keys configured for `root@projectium.com`.
---
## Logstash Configuration (ADR-050)
The production server uses **Logstash** to aggregate logs from multiple sources and forward errors to Bugsink for centralized error tracking.
**Log Sources:**
- **PostgreSQL function logs** - Structured JSON logs from `fn_log()` helper function
- **PM2 worker logs** - Service logs from BullMQ job workers (stdout)
- **Redis logs** - Operational logs (INFO level) and errors
- **NGINX logs** - Access logs (all requests) and error logs
### Configuration Location
**Primary configuration file:**
- `/etc/logstash/conf.d/bugsink.conf` - Complete Logstash pipeline configuration
**Related files:**
- `/etc/postgresql/14/main/conf.d/observability.conf` - PostgreSQL logging configuration
- `/var/log/postgresql/*.log` - PostgreSQL log files
- `/home/gitea-runner/.pm2/logs/*.log` - PM2 worker logs
- `/var/log/redis/redis-server.log` - Redis logs
- `/var/log/nginx/access.log` - NGINX access logs
- `/var/log/nginx/error.log` - NGINX error logs
- `/var/log/logstash/*.log` - Logstash file outputs (operational logs)
- `/var/lib/logstash/sincedb_*` - Logstash position tracking files
### Key Features
1. **Multi-source aggregation**: Collects logs from PostgreSQL, PM2 workers, Redis, and NGINX
2. **Environment-based routing**: Automatically detects production vs test environments and routes errors to the correct Bugsink project
3. **Structured JSON parsing**: Extracts `fn_log()` function output from PostgreSQL logs and Pino JSON from PM2 workers
4. **Sentry-compatible format**: Transforms events to Sentry format with `event_id`, `timestamp`, `level`, `message`, and `extra` context
5. **Error filtering**: Only forwards WARNING and ERROR level messages to Bugsink
6. **Operational log storage**: Stores non-error logs (Redis INFO, NGINX access, PM2 operational) to `/var/log/logstash/` for analysis
7. **Request monitoring**: Categorizes NGINX requests by status code (2xx, 3xx, 4xx, 5xx) and identifies slow requests
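As an illustration of point 4, a forwarded event might look roughly like this (a sketch; the exact `extra` fields Logstash attaches are assumptions):
```typescript
import { randomUUID } from 'node:crypto';

// Hypothetical Sentry-compatible envelope posted to Bugsink's /api/store/ endpoint.
const event = {
  event_id: randomUUID().replace(/-/g, ''), // 32-char hex, per Sentry convention
  timestamp: new Date().toISOString(),
  level: 'warning',
  message: 'award_achievement: user 42 already has achievement 7',
  extra: {
    source: 'postgresql',
    function: 'award_achievement',
    environment: 'production',
  },
};

console.log(JSON.stringify(event));
```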
### Common Maintenance Commands
```bash
# Check Logstash status
systemctl status logstash
# Restart Logstash after configuration changes
systemctl restart logstash
# Test configuration syntax
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/bugsink.conf
# View Logstash logs
journalctl -u logstash -f
# Check Logstash stats (events processed, failures)
curl -XGET 'localhost:9600/_node/stats/pipelines?pretty' | jq '.pipelines.main.plugins.filters'
# Monitor PostgreSQL logs being processed
tail -f /var/log/postgresql/postgresql-$(date +%Y-%m-%d).log
# View operational log outputs
tail -f /var/log/logstash/pm2-workers-$(date +%Y-%m-%d).log
tail -f /var/log/logstash/redis-operational-$(date +%Y-%m-%d).log
tail -f /var/log/logstash/nginx-access-$(date +%Y-%m-%d).log
# Check disk usage of log files
du -sh /var/log/logstash/
```
### Troubleshooting
| Issue | Check | Solution |
| ------------------------------- | ---------------------------- | ---------------------------------------------------------------------------------------------- |
| Errors not appearing in Bugsink | Check Logstash is running | `systemctl status logstash` |
| Configuration syntax errors | Test config file | `/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/bugsink.conf` |
| Grok pattern failures | Check Logstash stats | `curl localhost:9600/_node/stats/pipelines?pretty \| jq '.pipelines.main.plugins.filters'` |
| Wrong Bugsink project | Verify environment detection | Check tags in logs match expected environment (production/test) |
| Permission denied reading logs | Check Logstash permissions | `groups logstash` should include `postgres`, `adm` groups |
| PM2 logs not captured | Check file paths exist | `ls /home/gitea-runner/.pm2/logs/flyer-crawler-worker-*.log` |
| NGINX access logs not showing | Check file output directory | `ls -lh /var/log/logstash/nginx-access-*.log` |
| High disk usage | Check log rotation | Verify `/etc/logrotate.d/logstash` is configured and running daily |
**Full setup guide**: See [docs/BARE-METAL-SETUP.md](docs/BARE-METAL-SETUP.md) section "PostgreSQL Function Observability (ADR-050)"
**Architecture details**: See [docs/adr/0050-postgresql-function-observability.md](docs/adr/0050-postgresql-function-observability.md)
CONTRIBUTING.md (new file)
@@ -0,0 +1,348 @@
# Contributing to Flyer Crawler
Thank you for your interest in contributing to Flyer Crawler! This guide will help you understand our development workflow and coding standards.
## Table of Contents
- [Getting Started](#getting-started)
- [Development Workflow](#development-workflow)
- [Code Standards](#code-standards)
- [Testing Requirements](#testing-requirements)
- [Pull Request Process](#pull-request-process)
- [Architecture Decision Records](#architecture-decision-records)
- [Working with AI Agents](#working-with-ai-agents)
## Getting Started
### Prerequisites
1. **Windows with Podman Desktop**: See [docs/getting-started/INSTALL.md](docs/getting-started/INSTALL.md)
2. **Node.js 20+**: For running the application
3. **Git**: For version control
### Initial Setup
```bash
# Clone the repository
git clone <repository-url>
cd flyer-crawler.projectium.com
# Install dependencies
npm install
# Start development containers
podman start flyer-crawler-postgres flyer-crawler-redis
# Start development server
npm run dev
```
## Development Workflow
### Before Making Changes
1. **Read [CLAUDE.md](CLAUDE.md)** - Project guidelines and patterns
2. **Review relevant ADRs** in [docs/adr/](docs/adr/) - Understand architectural decisions
3. **Check existing issues** - Avoid duplicate work
4. **Create a feature branch** - Use descriptive names
```bash
git checkout -b feature/descriptive-name
# or
git checkout -b fix/issue-description
```
### Making Changes
#### Code Changes
Follow the established patterns from [docs/development/CODE-PATTERNS.md](docs/development/CODE-PATTERNS.md):
1. **Routes** → **Services** → **Repositories** → **Database**
2. Never access the database directly from routes
3. Use Zod schemas for input validation
4. Follow ADR-034 repository naming conventions (see the sketch after this list):
- `get*` - Throws NotFoundError if not found
- `find*` - Returns null if not found
- `list*` - Returns empty array if none found
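A self-contained sketch of those conventions (hypothetical `Flyer` type and in-memory data; the real repositories query PostgreSQL):
```typescript
interface Flyer {
  id: number;
  store_id: number;
}

class NotFoundError extends Error {
  constructor(entity: string, id: number | string) {
    super(`${entity} ${id} not found`);
  }
}

const flyers: Flyer[] = [{ id: 1, store_id: 10 }];

// get* - throws NotFoundError if not found
export function getFlyer(id: number): Flyer {
  const flyer = flyers.find((f) => f.id === id);
  if (!flyer) throw new NotFoundError('Flyer', id);
  return flyer;
}

// find* - returns null if not found
export function findFlyer(id: number): Flyer | null {
  return flyers.find((f) => f.id === id) ?? null;
}

// list* - returns empty array if none found
export function listFlyersByStore(storeId: number): Flyer[] {
  return flyers.filter((f) => f.store_id === storeId);
}
```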
#### Database Changes
When modifying the database schema:
1. Create migration: `sql/migrations/NNNN-description.sql`
2. Update `sql/master_schema_rollup.sql` (complete schema)
3. Update `sql/initial_schema.sql` (identical to rollup)
4. Test with integration tests
**CRITICAL**: Schema files must stay synchronized. See [CLAUDE.md#database-schema-sync](CLAUDE.md#database-schema-sync).
#### NGINX Configuration Changes
When modifying NGINX configurations on the production or test servers:
1. Make changes on the server at `/etc/nginx/sites-available/`
2. Test with `sudo nginx -t` and reload with `sudo systemctl reload nginx`
3. Update the reference copies in the repository root:
- `etc-nginx-sites-available-flyer-crawler.projectium.com` - Production
- `etc-nginx-sites-available-flyer-crawler-test-projectium-com.txt` - Test
4. Commit the updated reference files
These reference files serve as version-controlled documentation of the deployed configurations.
### Testing
**IMPORTANT**: All tests must run in the dev container.
```bash
# Run all tests
podman exec -it flyer-crawler-dev npm test
# Run specific test file
podman exec -it flyer-crawler-dev npm test -- --run src/path/to/file.test.ts
# Type check
podman exec -it flyer-crawler-dev npm run type-check
```
#### Before Committing
1. Write tests for new features
2. Update existing tests if behavior changes
3. Run full test suite
4. Run type check
5. Verify documentation is updated
See [docs/development/TESTING.md](docs/development/TESTING.md) for detailed testing guidelines.
## Code Standards
### TypeScript
- Use strict TypeScript mode
- Define types in `src/types/*.ts` files
- Avoid `any` - use `unknown` if type is truly unknown
- Follow ADR-027 for naming conventions
### Error Handling
```typescript
// Repository layer (ADR-001)
import { handleDbError, NotFoundError } from '../services/db/errors.db';
try {
const result = await client.query(query, values);
if (result.rows.length === 0) {
throw new NotFoundError('Flyer', id);
}
return result.rows[0];
} catch (error) {
throw handleDbError(error);
}
```
### API Responses
```typescript
// Route handlers (ADR-028)
import { sendSuccess, sendError, sendPaginated } from '../utils/apiResponse';
// Success response
return sendSuccess(res, flyer, 'Flyer retrieved successfully');
// Paginated response
return sendPaginated(res, {
items: flyers,
total: count,
page: 1,
pageSize: 20,
});
```
### Transactions
```typescript
// Multi-operation changes (ADR-002)
import { withTransaction } from '../services/db/transaction.db';
await withTransaction(async (client) => {
await flyerDb.createFlyer(flyerData, client);
await flyerItemDb.createItems(items, client);
// Automatically commits on success, rolls back on error
});
```
## Testing Requirements
### Test Coverage
- **Unit tests**: All service functions, utilities, and helpers
- **Integration tests**: API endpoints, database operations
- **E2E tests**: Critical user flows
### Test Patterns
See [docs/subagents/TESTER-GUIDE.md](docs/subagents/TESTER-GUIDE.md) for:
- Test helper functions
- Mocking patterns
- Known testing issues and solutions
### Test Naming
```typescript
// Good test names
describe('FlyerService.createFlyer', () => {
it('should create flyer with valid data', async () => { ... });
it('should throw ValidationError when store_id is missing', async () => { ... });
it('should rollback transaction on item creation failure', async () => { ... });
});
```
## Pull Request Process
### 1. Prepare Your PR
- [ ] All tests pass in dev container
- [ ] Type check passes
- [ ] No console.log or debugging code
- [ ] Documentation updated (if applicable)
- [ ] ADR created (if architectural decision made)
### 2. Create Pull Request
**Title Format:**
- `feat: Add flyer bulk import endpoint`
- `fix: Resolve cache invalidation bug`
- `docs: Update testing guide`
- `refactor: Simplify transaction handling`
**Description Template:**
```markdown
## Summary
Brief description of changes
## Changes Made
- Added X
- Modified Y
- Fixed Z
## Related Issues
Fixes #123
## Testing
- [ ] Unit tests added/updated
- [ ] Integration tests pass
- [ ] Manual testing completed
## Documentation
- [ ] Code comments added where needed
- [ ] ADR created/updated (if applicable)
- [ ] User-facing docs updated
```
### 3. Code Review
- Address all reviewer feedback
- Keep discussions focused and constructive
- Update PR based on feedback
### 4. Merge
- Squash commits if requested
- Ensure CI passes
- Maintainer will merge when approved
## Architecture Decision Records
When making significant architectural decisions:
1. Create ADR in `docs/adr/`
2. Use template from existing ADRs
3. Number sequentially
4. Update `docs/adr/index.md`
**Examples of ADR-worthy decisions:**
- New design patterns
- Technology choices
- API design changes
- Database schema conventions
See [docs/adr/index.md](docs/adr/index.md) for existing decisions.
## Working with AI Agents
This project uses Claude Code with specialized subagents. See:
- [docs/subagents/OVERVIEW.md](docs/subagents/OVERVIEW.md) - Introduction
- [CLAUDE.md](CLAUDE.md) - AI agent instructions
### When to Use Subagents
| Task | Subagent |
| -------------------- | ------------------- |
| Writing code | `coder` |
| Creating tests | `testwriter` |
| Database changes | `db-dev` |
| Container/deployment | `devops` |
| Security review | `security-engineer` |
### Example
```
Use the coder subagent to implement the bulk flyer import endpoint with proper transaction handling and error responses.
```
## Git Conventions
### Commit Messages
Follow conventional commits:
```
feat: Add watchlist price alerts
fix: Resolve duplicate flyer upload bug
docs: Update deployment guide
refactor: Simplify auth middleware
test: Add integration tests for flyer API
```
### Branch Naming
```
feature/watchlist-alerts
fix/duplicate-upload-bug
docs/update-deployment-guide
refactor/auth-middleware
```
## Getting Help
- **Documentation**: Start with [docs/README.md](docs/README.md)
- **Testing Issues**: See [docs/development/TESTING.md](docs/development/TESTING.md)
- **Architecture Questions**: Review [docs/adr/index.md](docs/adr/index.md)
- **Debugging**: Check [docs/development/DEBUGGING.md](docs/development/DEBUGGING.md)
- **AI Agents**: Consult [docs/subagents/OVERVIEW.md](docs/subagents/OVERVIEW.md)
## Code of Conduct
- Be respectful and inclusive
- Welcome newcomers
- Focus on constructive feedback
- Assume good intentions
## License
By contributing, you agree that your contributions will be licensed under the same license as the project.
---
Thank you for contributing to Flyer Crawler! 🎉
Dockerfile.dev
@@ -1,31 +1,410 @@
# Dockerfile.dev
# ============================================================================
# DEVELOPMENT DOCKERFILE
# ============================================================================
# This Dockerfile creates a development environment that matches production
# as closely as possible while providing the tools needed for development.
#
# Base: Ubuntu 22.04 (LTS) - matches production server
# Node: v20.x (LTS) - matches production
# Includes: PostgreSQL client, Redis CLI, build tools, Bugsink, Logstash
# ============================================================================
FROM ubuntu:22.04
# Set environment variables to non-interactive to avoid prompts during installation
ENV DEBIAN_FRONTEND=noninteractive
# ============================================================================
# Install System Dependencies
# ============================================================================
# - curl: for downloading Node.js setup script and health checks
# - git: for version control operations
# - build-essential: for compiling native Node.js modules (node-gyp)
# - python3, python3-pip, python3-venv: for Bugsink
# - postgresql-client: for psql CLI (database initialization)
# - redis-tools: for redis-cli (health checks)
# - gnupg, apt-transport-https: for Elastic APT repository (Logstash)
# - openjdk-17-jre-headless: required by Logstash
# - nginx: for proxying Vite dev server with HTTPS
# - libnss3-tools: required by mkcert for installing CA certificates
# - wget: for downloading mkcert binary
# - tzdata: timezone data required by Bugsink/Django (uses Europe/Amsterdam)
RUN apt-get update && apt-get install -y \
curl \
git \
build-essential \
python3 \
python3-pip \
python3-venv \
postgresql-client \
redis-tools \
gnupg \
apt-transport-https \
openjdk-17-jre-headless \
nginx \
libnss3-tools \
wget \
tzdata \
&& rm -rf /var/lib/apt/lists/*
# ============================================================================
# Install Node.js 20.x (LTS)
# ============================================================================
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
&& apt-get install -y nodejs
# ============================================================================
# Install mkcert and Generate Self-Signed Certificates
# ============================================================================
# mkcert creates locally-trusted development certificates
# This matches production HTTPS setup but with self-signed certs for localhost
RUN wget -O /usr/local/bin/mkcert https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-amd64 \
&& chmod +x /usr/local/bin/mkcert
# Create certificates directory and generate localhost certificates
RUN mkdir -p /app/certs \
&& cd /app/certs \
&& mkcert -install \
&& mkcert localhost 127.0.0.1 ::1 \
&& mv localhost+2.pem localhost.crt \
&& mv localhost+2-key.pem localhost.key
# ============================================================================
# Install Logstash (Elastic APT Repository)
# ============================================================================
# ADR-015: Log aggregation for Pino and Redis logs → Bugsink
RUN curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-8.x.list \
&& apt-get update \
&& apt-get install -y logstash \
&& rm -rf /var/lib/apt/lists/*
# ============================================================================
# Install Bugsink (Python Package)
# ============================================================================
# ADR-015: Self-hosted Sentry-compatible error tracking
# Create a virtual environment for Bugsink to avoid conflicts
RUN python3 -m venv /opt/bugsink \
&& /opt/bugsink/bin/pip install --upgrade pip \
&& /opt/bugsink/bin/pip install bugsink gunicorn psycopg2-binary
# Create Bugsink directories and configuration
RUN mkdir -p /var/log/bugsink /var/lib/bugsink /opt/bugsink/conf
# Create Bugsink configuration file (Django settings module)
# This file is imported by bugsink-manage via DJANGO_SETTINGS_MODULE
# Based on bugsink/conf_templates/docker.py.template but customized for our setup
RUN echo 'import os\n\
from urllib.parse import urlparse\n\
\n\
from bugsink.settings.default import *\n\
from bugsink.settings.default import DATABASES, SILENCED_SYSTEM_CHECKS\n\
from bugsink.conf_utils import deduce_allowed_hosts, deduce_script_name\n\
\n\
IS_DOCKER = True\n\
\n\
# Security settings\n\
SECRET_KEY = os.getenv("SECRET_KEY")\n\
DEBUG = os.getenv("DEBUG", "False").lower() in ("true", "1", "yes")\n\
\n\
# Silence cookie security warnings for dev (no HTTPS)\n\
SILENCED_SYSTEM_CHECKS += ["security.W012", "security.W016"]\n\
\n\
# Database configuration from DATABASE_URL environment variable\n\
if os.getenv("DATABASE_URL"):\n\
DATABASE_URL = os.getenv("DATABASE_URL")\n\
parsed = urlparse(DATABASE_URL)\n\
\n\
if parsed.scheme in ["postgres", "postgresql"]:\n\
DATABASES["default"] = {\n\
"ENGINE": "django.db.backends.postgresql",\n\
"NAME": parsed.path.lstrip("/"),\n\
"USER": parsed.username,\n\
"PASSWORD": parsed.password,\n\
"HOST": parsed.hostname,\n\
"PORT": parsed.port or "5432",\n\
}\n\
\n\
# Snappea (background task runner) settings\n\
SNAPPEA = {\n\
"TASK_ALWAYS_EAGER": False,\n\
"WORKAHOLIC": True,\n\
"NUM_WORKERS": 2,\n\
"PID_FILE": None,\n\
}\n\
DATABASES["snappea"]["NAME"] = "/tmp/snappea.sqlite3"\n\
\n\
# Site settings\n\
_PORT = os.getenv("PORT", "8000")\n\
BUGSINK = {\n\
"BASE_URL": os.getenv("BASE_URL", f"http://localhost:{_PORT}"),\n\
"SITE_TITLE": os.getenv("SITE_TITLE", "Flyer Crawler Error Tracking"),\n\
"SINGLE_USER": os.getenv("SINGLE_USER", "True").lower() in ("true", "1", "yes"),\n\
"SINGLE_TEAM": os.getenv("SINGLE_TEAM", "True").lower() in ("true", "1", "yes"),\n\
"PHONEHOME": False,\n\
}\n\
\n\
ALLOWED_HOSTS = deduce_allowed_hosts(BUGSINK["BASE_URL"])\n\
\n\
# Console email backend for dev\n\
EMAIL_BACKEND = "bugsink.email_backends.QuietConsoleEmailBackend"\n\
\n\
# HTTPS proxy support (nginx reverse proxy on port 8443)\n\
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")\n\
' > /opt/bugsink/conf/bugsink_conf.py
# Create Bugsink startup script
# Uses DATABASE_URL environment variable (standard Docker approach per docs)
RUN echo '#!/bin/bash\n\
set -e\n\
\n\
# Build DATABASE_URL from individual env vars for flexibility\n\
export DATABASE_URL="postgresql://${BUGSINK_DB_USER:-bugsink}:${BUGSINK_DB_PASSWORD:-bugsink_dev_password}@${BUGSINK_DB_HOST:-postgres}:${BUGSINK_DB_PORT:-5432}/${BUGSINK_DB_NAME:-bugsink}"\n\
# SECRET_KEY is required by Bugsink/Django\n\
export SECRET_KEY="${BUGSINK_SECRET_KEY:-dev-bugsink-secret-key-minimum-50-characters-for-security}"\n\
\n\
# Create superuser if not exists (for dev convenience)\n\
if [ -n "$BUGSINK_ADMIN_EMAIL" ] && [ -n "$BUGSINK_ADMIN_PASSWORD" ]; then\n\
export CREATE_SUPERUSER="${BUGSINK_ADMIN_EMAIL}:${BUGSINK_ADMIN_PASSWORD}"\n\
fi\n\
\n\
# Wait for PostgreSQL to be ready\n\
until pg_isready -h ${BUGSINK_DB_HOST:-postgres} -p ${BUGSINK_DB_PORT:-5432} -U ${BUGSINK_DB_USER:-bugsink}; do\n\
echo "Waiting for PostgreSQL..."\n\
sleep 2\n\
done\n\
\n\
echo "PostgreSQL is ready. Starting Bugsink..."\n\
echo "DATABASE_URL: postgresql://${BUGSINK_DB_USER}:***@${BUGSINK_DB_HOST}:${BUGSINK_DB_PORT}/${BUGSINK_DB_NAME}"\n\
\n\
# Change to config directory so bugsink_conf.py can be found\n\
cd /opt/bugsink/conf\n\
\n\
# Run migrations\n\
echo "Running database migrations..."\n\
/opt/bugsink/bin/bugsink-manage migrate --noinput\n\
\n\
# Create superuser if CREATE_SUPERUSER is set (format: email:password)\n\
if [ -n "$CREATE_SUPERUSER" ]; then\n\
IFS=":" read -r ADMIN_EMAIL ADMIN_PASS <<< "$CREATE_SUPERUSER"\n\
/opt/bugsink/bin/bugsink-manage shell -c "\n\
from django.contrib.auth import get_user_model\n\
User = get_user_model()\n\
if not User.objects.filter(email='"'"'$ADMIN_EMAIL'"'"').exists():\n\
User.objects.create_superuser('"'"'$ADMIN_EMAIL'"'"', '"'"'$ADMIN_PASS'"'"')\n\
print('"'"'Superuser created'"'"')\n\
else:\n\
print('"'"'Superuser already exists'"'"')\n\
" || true\n\
fi\n\
\n\
# Start Bugsink with Gunicorn\n\
echo "Starting Gunicorn on port ${BUGSINK_PORT:-8000}..."\n\
exec /opt/bugsink/bin/gunicorn \\\n\
--bind 0.0.0.0:${BUGSINK_PORT:-8000} \\\n\
--workers ${BUGSINK_WORKERS:-2} \\\n\
--access-logfile - \\\n\
--error-logfile - \\\n\
bugsink.wsgi:application\n\
' > /usr/local/bin/start-bugsink.sh \
&& chmod +x /usr/local/bin/start-bugsink.sh
# ============================================================================
# Create Logstash Pipeline Configuration
# ============================================================================
# ADR-015: Pino and Redis logs → Bugsink
RUN mkdir -p /etc/logstash/conf.d /app/logs
RUN echo 'input {\n\
# Pino application logs\n\
file {\n\
path => "/app/logs/*.log"\n\
codec => json\n\
type => "pino"\n\
tags => ["app"]\n\
start_position => "beginning"\n\
sincedb_path => "/var/lib/logstash/sincedb_pino"\n\
}\n\
\n\
# Redis logs\n\
file {\n\
path => "/var/log/redis/*.log"\n\
type => "redis"\n\
tags => ["redis"]\n\
start_position => "beginning"\n\
sincedb_path => "/var/lib/logstash/sincedb_redis"\n\
}\n\
\n\
# PostgreSQL function logs (ADR-050)\n\
file {\n\
path => "/var/log/postgresql/*.log"\n\
type => "postgres"\n\
tags => ["postgres", "database"]\n\
start_position => "beginning"\n\
sincedb_path => "/var/lib/logstash/sincedb_postgres"\n\
}\n\
}\n\
\n\
filter {\n\
# Pino error detection (level 50 = error, 60 = fatal)\n\
if [type] == "pino" and [level] >= 50 {\n\
mutate { add_tag => ["error"] }\n\
}\n\
\n\
# Redis log parsing\n\
if [type] == "redis" {\n\
grok {\n\
match => { "message" => "%%{POSINT:pid}:%%{WORD:role} %%{MONTHDAY} %%{MONTH} %%{TIME} %%{WORD:loglevel} %%{GREEDYDATA:redis_message}" }\n\
}\n\
\n\
# Tag errors (WARNING/ERROR) for Bugsink forwarding\n\
if [loglevel] in ["WARNING", "ERROR"] {\n\
mutate { add_tag => ["error"] }\n\
}\n\
# Tag INFO-level operational events (startup, config, persistence)\n\
else if [loglevel] == "INFO" {\n\
mutate { add_tag => ["redis_operational"] }\n\
}\n\
}\n\
\n\
# PostgreSQL function log parsing (ADR-050)\n\
if [type] == "postgres" {\n\
# Extract timestamp and process ID from PostgreSQL log prefix\n\
# Format: "2026-01-18 10:30:00 PST [12345] user@database "\n\
grok {\n\
match => { "message" => "%%{TIMESTAMP_ISO8601:pg_timestamp} \\\\[%%{POSINT:pg_pid}\\\\] %%{USERNAME:pg_user}@%%{WORD:pg_database} %%{GREEDYDATA:pg_message}" }\n\
}\n\
\n\
# Check if this is a structured JSON log from fn_log()\n\
# fn_log() emits JSON like: {"timestamp":"...","level":"WARNING","source":"postgresql","function":"award_achievement",...}\n\
if [pg_message] =~ /^\\{.*"source":"postgresql".*\\}$/ {\n\
json {\n\
source => "pg_message"\n\
target => "fn_log"\n\
}\n\
\n\
# Mark as error if level is WARNING or ERROR\n\
if [fn_log][level] in ["WARNING", "ERROR"] {\n\
mutate { add_tag => ["error", "db_function"] }\n\
}\n\
}\n\
\n\
# Also catch native PostgreSQL errors\n\
if [pg_message] =~ /^ERROR:/ or [pg_message] =~ /^FATAL:/ {\n\
mutate { add_tag => ["error", "postgres_native"] }\n\
}\n\
}\n\
}\n\
\n\
output {\n\
# Forward errors to Bugsink\n\
if "error" in [tags] {\n\
http {\n\
url => "http://localhost:8000/api/store/"\n\
http_method => "post"\n\
format => "json"\n\
}\n\
}\n\
\n\
# Store Redis operational logs (INFO level) to file\n\
if "redis_operational" in [tags] {\n\
file {\n\
path => "/var/log/logstash/redis-operational-%%{+YYYY-MM-dd}.log"\n\
codec => json_lines\n\
}\n\
}\n\
\n\
# Debug output (comment out in production)\n\
stdout { codec => rubydebug }\n\
}\n\
' > /etc/logstash/conf.d/bugsink.conf
# Create Logstash directories
RUN mkdir -p /var/lib/logstash && chown -R logstash:logstash /var/lib/logstash
RUN mkdir -p /var/log/logstash && chown -R logstash:logstash /var/log/logstash
# ============================================================================
# Configure Nginx
# ============================================================================
# Copy development nginx configuration
COPY docker/nginx/dev.conf /etc/nginx/sites-available/default
# Configure nginx to run in foreground (required for container)
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# ============================================================================
# Set Working Directory
# ============================================================================
WORKDIR /app
# ============================================================================
# Install Node.js Dependencies
# ============================================================================
# Copy package files first for better Docker layer caching
COPY package*.json ./
# Install all dependencies (including devDependencies for development)
RUN npm install
# ============================================================================
# Environment Configuration
# ============================================================================
# Default environment variables for development
ENV NODE_ENV=development
# Increase Node.js memory limit for large builds
ENV NODE_OPTIONS='--max-old-space-size=8192'
# Bugsink defaults (ADR-015)
ENV BUGSINK_DB_HOST=postgres
ENV BUGSINK_DB_PORT=5432
ENV BUGSINK_DB_NAME=bugsink
ENV BUGSINK_DB_USER=bugsink
ENV BUGSINK_DB_PASSWORD=bugsink_dev_password
ENV BUGSINK_PORT=8000
ENV BUGSINK_BASE_URL=http://localhost:8000
ENV BUGSINK_ADMIN_EMAIL=admin@localhost
ENV BUGSINK_ADMIN_PASSWORD=admin
# ============================================================================
# Expose Ports
# ============================================================================
# 80 - HTTP redirect to HTTPS (matches production)
# 443 - Nginx HTTPS frontend proxy (Vite on 5173)
# 3001 - Express backend
# 8000 - Bugsink error tracking
EXPOSE 80 443 3001 8000
# ============================================================================
# Copy Application Code and Scripts
# ============================================================================
# Copy the scripts directory which contains the entrypoint script
COPY scripts/ /app/scripts/
# ============================================================================
# Fix Line Endings for Windows Compatibility
# ============================================================================
# Convert ALL text files from CRLF to LF (Windows to Unix)
# This ensures compatibility when building on Windows hosts
# We process: shell scripts, JS/TS files, JSON, config files, etc.
RUN find /app -type f \( \
-name "*.sh" -o \
-name "*.js" -o \
-name "*.ts" -o \
-name "*.tsx" -o \
-name "*.jsx" -o \
-name "*.json" -o \
-name "*.conf" -o \
-name "*.config" -o \
-name "*.yml" -o \
-name "*.yaml" \
\) -exec sed -i 's/\r$//' {} \; && \
find /etc/nginx -type f -name "*.conf" -exec sed -i 's/\r$//' {} \; && \
chmod +x /app/scripts/*.sh
# ============================================================================
# Default Command
# ============================================================================
# Keep container running so VS Code can attach.
# Actual commands (npm run dev, etc.) are run via devcontainer.json.
CMD ["bash"]
README.md
@@ -1,424 +1,141 @@
# Flyer Crawler - Grocery AI Analyzer
Flyer Crawler is a web application that uses Google Gemini AI to extract, analyze, and manage data from grocery store flyers. Users can upload flyer images or PDFs, and the application automatically identifies items, prices, and sale dates, storing structured data in a PostgreSQL database for historical analysis, price tracking, and personalized deal alerts.
**Our mission**: Help people save money by finding good deals that are only advertised in store flyers. The app makes uploading flyers as easy and accurate as possible, and matches sales to users' needs.
---
## Features
- **AI-Powered Data Extraction**: Upload PNG, JPG, or PDF flyers to automatically extract store names, sale dates, and detailed item lists with prices and quantities
- **Bulk Import**: Process multiple flyers at once with summary reports of successes, skips (duplicates), and errors
- **Personalized Watchlist**: Create a watchlist of specific grocery items you want to track
- **Active Deal Alerts**: See current sales on your watched items from all valid flyers
- **Price History Charts**: Visualize price trends of watched items over time
- **Shopping List Management**: Create multiple shopping lists, add items from flyers or your watchlist, and track purchased items
- **User Authentication**: Secure sign-up, login, profile management, and account deletion
- **Dynamic UI**: Responsive interface with dark mode and metric/imperial unit systems
---
## Tech Stack
| Layer | Technology |
| -------------- | ----------------------------------- |
| Frontend | React, TypeScript, Tailwind CSS |
| AI | Google Gemini API (`@google/genai`) |
| Backend | Node.js, Express |
| Database | PostgreSQL with PostGIS |
| Authentication | Passport.js (Google, GitHub OAuth) |
| Charts | Recharts |
---
## Quick Start
### Development with Podman Containers
```bash
# 1. Start PostgreSQL and Redis containers
podman start flyer-crawler-postgres flyer-crawler-redis
# 2. Install dependencies (first time only)
npm install
# 3. Run in development mode
npm run dev
```
The application will be available at:
- **Frontend**: http://localhost:5173
- **Backend API**: http://localhost:3001
See [docs/getting-started/INSTALL.md](docs/getting-started/INSTALL.md) for detailed setup instructions including:
- Podman Desktop installation
- Container configuration
- Database initialization
- Environment variables
### Testing
**IMPORTANT**: All tests must run inside the dev container for reliable results.
```bash
# Run all tests in container
podman exec -it flyer-crawler-dev npm test
# Run only unit tests
podman exec -it flyer-crawler-dev npm run test:unit
# Run only integration tests
podman exec -it flyer-crawler-dev npm run test:integration
```
See [docs/development/TESTING.md](docs/development/TESTING.md) for testing guidelines.
---
## Documentation
### Core Documentation
| Document | Description |
| --------------------------------------------------------- | --------------------------------------- |
| [📚 Documentation Index](docs/README.md) | Navigate all documentation |
| [⚙️ Installation Guide](docs/getting-started/INSTALL.md) | Local development setup with Podman |
| [🏗️ Architecture Overview](docs/architecture/DATABASE.md) | System design, database, authentication |
| [💻 Development Guide](docs/development/TESTING.md) | Testing, debugging, code patterns |
| [🚀 Deployment Guide](docs/operations/DEPLOYMENT.md) | Production setup, NGINX, PM2 |
| [🤖 AI Agent Guides](docs/subagents/OVERVIEW.md) | Working with Claude Code subagents |
### Quick References
| Document | Description |
| -------------------------------------------------- | -------------------------------- |
| [CLAUDE.md](CLAUDE.md) | AI agent project instructions |
| [CONTRIBUTING.md](CONTRIBUTING.md) | Development workflow, PR process |
| [Architecture Decision Records](docs/adr/index.md) | Design decisions and rationale |
---
POSTGIS="3.2.0 c3e3cc0" [EXTENSION] PGSQL="140" GEOS="3.10.2-CAPI-1.16.0" PROJ="8.2.1" LIBXML="2.9.12" LIBJSON="0.15" LIBPROTOBUF="1.3.3" WAGYU="0.5.0 (Internal)"
(1 row)
## Environment Variables
**Production/Test**: Uses Gitea CI/CD secrets injected during deployment (no local `.env` files)
**Dev Container**: Uses `.env.local` file which **overrides** the default DSNs in `compose.dev.yml`
Key variables:
| Variable | Description |
| -------------------------------------------- | -------------------------------- |
| `DB_HOST` | PostgreSQL host |
| `DB_USER_PROD`, `DB_PASSWORD_PROD` | Production database credentials |
| `DB_USER_TEST`, `DB_PASSWORD_TEST` | Test database credentials |
| `DB_DATABASE_PROD`, `DB_DATABASE_TEST` | Database names |
| `JWT_SECRET` | Authentication token signing key |
| `VITE_GOOGLE_GENAI_API_KEY` | Google Gemini API key |
| `GOOGLE_MAPS_API_KEY` | Google Maps Geocoding API key |
| `REDIS_PASSWORD_PROD`, `REDIS_PASSWORD_TEST` | Redis passwords |
See [INSTALL.md](INSTALL.md) for the complete list.
---
## Scripts
| Command | Description |
| -------------------- | -------------------------------- |
| `npm run dev` | Start development server |
| `npm run build` | Build for production |
| `npm run start:prod` | Start production server with PM2 |
| `npm run test` | Run test suite |
| `npm run seed` | Seed development user accounts |
---
## License
---
## NGINX mime types issue

```bash
sudo nano /etc/nginx/mime.types
```

Change

```
application/javascript js;
```

to

```
application/javascript js mjs;
```

then restart NGINX:

```bash
sudo nginx -t
sudo systemctl reload nginx
```

(Actually, the proper change was to do this in the `/etc/nginx/sites-available/flyer-crawler.projectium.com` file.)

## for OAuth

1. Get Google OAuth Credentials

This is a crucial step that you must do outside the codebase:

- Go to the Google Cloud Console.
- Create a new project (or select an existing one).
- In the navigation menu, go to APIs & Services > Credentials.
- Click Create Credentials > OAuth client ID.
- Select Web application as the application type.
- Under Authorized redirect URIs, click ADD URI and enter the URL where Google will redirect users back to your server. For local development, this will be: http://localhost:3001/api/auth/google/callback.
- Click Create. You will be given a Client ID and a Client Secret.

2. Get GitHub OAuth Credentials

You'll need to obtain a Client ID and Client Secret from GitHub:

- Go to your GitHub profile settings.
- Navigate to Developer settings > OAuth Apps.
- Click New OAuth App.
- Fill in the required fields:
  - Application name: A descriptive name for your app (e.g., "Flyer Crawler").
  - Homepage URL: The base URL of your application (e.g., http://localhost:5173 for local development).
  - Authorization callback URL: This is where GitHub will redirect users after they authorize your app. For local development, this will be: http://localhost:3001/api/auth/github/callback.
- Click Register application. You will be given a Client ID and a Client Secret.

## connect to postgres on projectium.com

```bash
psql -h localhost -U flyer_crawler_user -d "flyer-crawler-prod" -W
```

## postgis

```
flyer-crawler-prod=> SELECT version();
PostgreSQL 14.19 (Ubuntu 14.19-0ubuntu0.22.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0, 64-bit
(1 row)

flyer-crawler-prod=> SELECT PostGIS_Full_Version();
POSTGIS="3.2.0 c3e3cc0" [EXTENSION] PGSQL="140" GEOS="3.10.2-CAPI-1.16.0" PROJ="8.2.1" LIBXML="2.9.12" LIBJSON="0.15" LIBPROTOBUF="1.3.3" WAGYU="0.5.0 (Internal)"
(1 row)
```

## production postgres setup

Part 1: Production Database Setup

This database will be the live, persistent storage for your application.

Step 1: Install PostgreSQL (if not already installed)

First, ensure PostgreSQL is installed on your server.

```bash
sudo apt update
sudo apt install postgresql postgresql-contrib
```

Step 2: Create the Production Database and User

It's best practice to create a dedicated, non-superuser role for your application to connect with. Switch to the postgres system user to get superuser access to the database.

```bash
sudo -u postgres psql
```

Inside the psql shell, run the following SQL commands. Remember to replace 'a_very_strong_password' with a secure password that you will manage with a secrets tool or in your .env file.

```sql
-- Create a new role (user) for your application
CREATE ROLE flyer_crawler_user WITH LOGIN PASSWORD 'a_very_strong_password';

-- Create the production database and assign ownership to the new user
CREATE DATABASE "flyer-crawler-prod" WITH OWNER = flyer_crawler_user;

-- Connect to the new database to install extensions within it.
\c "flyer-crawler-prod"

-- Install the required extensions as a superuser. This only needs to be done once.
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Exit the psql shell
\q
```
Step 3: Apply the Master Schema

Now, you'll populate your new database with all the tables, functions, and initial data. Your master_schema_rollup.sql file is perfect for this. Navigate to your project's root directory on the server, then run the following command to execute the master schema script against your new production database. You will be prompted for the password you created in the previous step.

```bash
psql -U flyer_crawler_user -d "flyer-crawler-prod" -f sql/master_schema_rollup.sql
```

This single command creates all tables, extensions (pg_trgm, postgis), functions, and triggers, and seeds essential data like categories and master items.

Step 4: Seed the Admin Account (If Needed)

Your application has a separate script to create the initial admin user. To run it, you must first set the required environment variables in your shell session.

```bash
# Set variables for the current session
export DB_USER=flyer_crawler_user DB_PASSWORD=your_password DB_NAME="flyer-crawler-prod" ...
# Run the seeding script
npx tsx src/db/seed_admin_account.ts
```

Your production database is now ready!
Part 2: Test Database Setup (for CI/CD)

Your Gitea workflow (deploy.yml) already automates the creation and teardown of the test database during the pipeline run. The steps below are for understanding what the workflow does and for manual setup if you ever need to run tests outside the CI pipeline. The process your CI pipeline follows is:

1. Setup (sql/test_setup.sql): As the postgres superuser, it runs sql/test_setup.sql. This creates a temporary role named test_runner and a separate database named "flyer-crawler-test" owned by test_runner.
2. Schema Application (src/tests/setup/global-setup.ts): The test runner (vitest) executes the global-setup.ts file. This script connects to the "flyer-crawler-test" database using the temporary credentials. It then runs the same sql/master_schema_rollup.sql file, ensuring your test database has the exact same structure as production.
3. Test Execution: Your tests run against this clean, isolated "flyer-crawler-test" database.
4. Teardown (sql/test_teardown.sql): After tests complete (whether they pass or fail), the if: always() step in your workflow ensures that sql/test_teardown.sql is executed. This script terminates any lingering connections to the test database, drops the "flyer-crawler-test" database completely, and drops the test_runner role.
Part 3: Test Database Setup (for CI/CD and Local Testing)

Your Gitea workflow and local test runner rely on a permanent test database. This database needs to be created once on your server. The test runner will automatically reset the schema inside it before every test run.

Step 1: Create the Test Database

On your server, switch to the postgres system user to get superuser access.

```bash
sudo -u postgres psql
```

Inside the psql shell, create a new database. We will assign ownership to the same flyer_crawler_user that your application uses. This user needs to be the owner to have permission to drop and recreate the schema during testing.

```sql
-- Create the test database and assign ownership to your existing application user
CREATE DATABASE "flyer-crawler-test" WITH OWNER = flyer_crawler_user;

-- Connect to the newly created test database
\c "flyer-crawler-test"

-- Install the required extensions as a superuser. This only needs to be done once.
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Grant ownership of the public schema within this database to your application user.
-- This is CRITICAL for allowing the test runner to drop and recreate the schema.
ALTER SCHEMA public OWNER TO flyer_crawler_user;

-- Exit the psql shell
\q
```

Step 2: Configure Gitea Secrets for Testing

Your CI pipeline needs to know how to connect to this test database. Ensure the following secrets are set in your Gitea repository settings:

- DB_HOST: The hostname of your database server (e.g., localhost).
- DB_PORT: The port for your database (e.g., 5432).
- DB_USER: The user for the database (e.g., flyer_crawler_user).
- DB_PASSWORD: The password for the database user.

The workflow file (.gitea/workflows/deploy.yml) is configured to use these secrets and will automatically connect to the "flyer-crawler-test" database when it runs the npm test command.

How the Test Workflow Works

The CI pipeline no longer uses sudo or creates/destroys the database on each run. Instead, the process is now:

1. Setup: The vitest global setup script (src/tests/setup/global-setup.ts) connects to the permanent "flyer-crawler-test" database.
2. Schema Reset: It executes sql/drop_tables.sql (which runs DROP SCHEMA public CASCADE) to completely wipe all tables, functions, and triggers.
3. Schema Application: It then immediately executes sql/master_schema_rollup.sql to build a fresh, clean schema and seed initial data.
4. Test Execution: Your tests run against this clean, isolated schema.

This approach is faster, more reliable, and removes the need for sudo access within the CI pipeline.
gitea-runner@projectium:~$ pm2 install pm2-logrotate
[PM2][Module] Installing NPM pm2-logrotate module
[PM2][Module] Calling [NPM] to install pm2-logrotate ...
added 161 packages in 5s
21 packages are looking for funding
run `npm fund` for details
npm notice
npm notice New patch version of npm available! 11.6.3 -> 11.6.4
npm notice Changelog: https://github.com/npm/cli/releases/tag/v11.6.4
npm notice To update run: npm install -g npm@11.6.4
npm notice
[PM2][Module] Module downloaded
[PM2][WARN] Applications pm2-logrotate not running, starting...
[PM2] App [pm2-logrotate] launched (1 instances)
Module: pm2-logrotate
$ pm2 set pm2-logrotate:max_size 10M
$ pm2 set pm2-logrotate:retain 30
$ pm2 set pm2-logrotate:compress false
$ pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
$ pm2 set pm2-logrotate:workerInterval 30
$ pm2 set pm2-logrotate:rotateInterval 0 0 * * *
$ pm2 set pm2-logrotate:rotateModule true
Modules configuration. Copy/Paste line to edit values.
[PM2][Module] Module successfully installed and launched
[PM2][Module] Checkout module options: `$ pm2 conf`
┌────┬───────────────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├────┼───────────────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 2 │ flyer-crawler-analytics-worker │ default │ 0.0.0 │ fork │ 3846981 │ 7m │ 5 │ online │ 0% │ 55.8mb │ git… │ disabled │
│ 11 │ flyer-crawler-api │ default │ 0.0.0 │ fork │ 3846987 │ 7m │ 0 │ online │ 0% │ 59.0mb │ git… │ disabled │
│ 12 │ flyer-crawler-worker │ default │ 0.0.0 │ fork │ 3846988 │ 7m │ 0 │ online │ 0% │ 54.2mb │ git… │ disabled │
└────┴───────────────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Module
┌────┬──────────────────────────────┬───────────────┬──────────┬──────────┬──────┬──────────┬──────────┬──────────┐
│ id │ module │ version │ pid │ status │ ↺ │ cpu │ mem │ user │
├────┼──────────────────────────────┼───────────────┼──────────┼──────────┼──────┼──────────┼──────────┼──────────┤
│ 13 │ pm2-logrotate │ 3.0.0 │ 3848878 │ online │ 0 │ 0% │ 20.1mb │ git… │
└────┴──────────────────────────────┴───────────────┴──────────┴──────────┴──────┴──────────┴──────────┴──────────┘
gitea-runner@projectium:~$ pm2 set pm2-logrotate:max_size 10M
[PM2] Module pm2-logrotate restarted
[PM2] Setting changed
Module: pm2-logrotate
$ pm2 set pm2-logrotate:max_size 10M
$ pm2 set pm2-logrotate:retain 30
$ pm2 set pm2-logrotate:compress false
$ pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
$ pm2 set pm2-logrotate:workerInterval 30
$ pm2 set pm2-logrotate:rotateInterval 0 0 * * *
$ pm2 set pm2-logrotate:rotateModule true
gitea-runner@projectium:~$ pm2 set pm2-logrotate:retain 14
[PM2] Module pm2-logrotate restarted
[PM2] Setting changed
Module: pm2-logrotate
$ pm2 set pm2-logrotate:max_size 10M
$ pm2 set pm2-logrotate:retain 14
$ pm2 set pm2-logrotate:compress false
$ pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
$ pm2 set pm2-logrotate:workerInterval 30
$ pm2 set pm2-logrotate:rotateInterval 0 0 * * *
$ pm2 set pm2-logrotate:rotateModule true
gitea-runner@projectium:~$
## dev server setup:
Here are the steps to set up the development environment on Windows using Podman with an Ubuntu container:
1. Install Prerequisites on Windows
Install WSL 2: Podman on Windows relies on the Windows Subsystem for Linux. Install it by running wsl --install in an administrator PowerShell.
Install Podman Desktop: Download and install Podman Desktop for Windows.
2. Set Up Podman
Initialize Podman: Launch Podman Desktop. It will automatically set up its WSL 2 machine.
Start Podman: Ensure the Podman machine is running from the Podman Desktop interface.
3. Set Up the Ubuntu Container
- Pull Ubuntu Image: Open a PowerShell or command prompt and pull the latest Ubuntu image:
podman pull ubuntu:latest
- Create a Podman Volume: Create a volume to persist node_modules and avoid installing them every time the container starts.
podman volume create node_modules_cache
- Run the Ubuntu Container: Start a new container with the project directory mounted and the necessary ports forwarded.
- Open a terminal in your project's root directory on Windows.
- Run the following command, replacing D:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com with the full path to your project:
podman run -it -p 3001:3001 -p 5173:5173 --name flyer-dev -v "D:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com:/app" -v "node_modules_cache:/app/node_modules" ubuntu:latest
-p 3001:3001: Forwards the backend server port.
-p 5173:5173: Forwards the Vite frontend server port.
--name flyer-dev: Names the container for easy reference.
-v "...:/app": Mounts your project directory into the container at /app.
-v "node_modules_cache:/app/node_modules": Mounts the named volume for node_modules.
4. Configure the Ubuntu Environment
You are now inside the Ubuntu container's shell.
- Update Package Lists:
apt-get update
- Install Dependencies: Install curl, git, and nodejs (which includes npm).
apt-get install -y curl git
curl -sL https://deb.nodesource.com/setup_20.x | bash -
apt-get install -y nodejs
- Navigate to Project Directory:
cd /app
- Install Project Dependencies:
npm install
5. Run the Development Server
- Start the Application:
npm run dev
6. Accessing the Application
- Frontend: Open your browser and go to http://localhost:5173.
- Backend: The frontend will make API calls to http://localhost:3001.
Managing the Environment
- Stopping the Container: Press Ctrl+C in the container terminal, then type exit.
- Restarting the Container:
podman start -a -i flyer-dev
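If you need a second shell in the running container (for example, to run tests while `npm run dev` holds the first terminal):

```bash
# Open an additional shell inside the running dev container.
podman exec -it flyer-dev bash
```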
## for me:
cd /mnt/d/gitea/flyer-crawler.projectium.com/flyer-crawler.projectium.com
podman run -it -p 3001:3001 -p 5173:5173 --name flyer-dev -v "$(pwd):/app" -v "node_modules_cache:/app/node_modules" ubuntu:latest
## rate limiting
The crawler throttles its calls to the AI service to respect that service's rate limits, making it more stable and robust. You can adjust the GEMINI_RPM environment variable in your production environment as needed without changing the code.
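For example (the process name matches the pm2 listing above; `--update-env` is a standard pm2 flag):

```bash
# Lower the Gemini request budget without a code change, then restart
# the API process so it picks up the new environment.
export GEMINI_RPM=30
pm2 restart flyer-crawler-api --update-env
```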
[Add license information here]

certs/localhost.crt Normal file

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDCTCCAfGgAwIBAgIUHhZUK1vmww2wCepWPuVcU6d27hMwDQYJKoZIhvcNAQEL
BQAwFDESMBAGA1UEAwwJbG9jYWxob3N0MB4XDTI2MDExODAyMzM0NFoXDTI3MDEx
ODAyMzM0NFowFDESMBAGA1UEAwwJbG9jYWxob3N0MIIBIjANBgkqhkiG9w0BAQEF
AAOCAQ8AMIIBCgKCAQEAuUJGtSZzd+ZpLi+efjrkxJJNfVxVz2VLhknNM2WKeOYx
JTK/VaTYq5hrczy6fEUnMhDAJCgEPUFlOK3vn1gFJKNMN8m7arkLVk6PYtrx8CTw
w78Q06FLITr6hR0vlJNpN4MsmGxYwUoUpn1j5JdfZF7foxNAZRiwoopf7ZJxltDu
PIuFjmVZqdzR8c6vmqIqdawx/V6sL9fizZr+CDH3oTsTUirn2qM+1ibBtPDiBvfX
omUsr6MVOcTtvnMvAdy9NfV88qwF7MEWBGCjXkoT1bKCLD8hjn8l7GjRmPcmMFE2
GqWEvfJiFkBK0CgSHYEUwzo0UtVNeQr0k0qkDRub6QIDAQABo1MwUTAdBgNVHQ4E
FgQU5VeD67yFLV0QNYbHaJ6u9cM6UbkwHwYDVR0jBBgwFoAU5VeD67yFLV0QNYbH
aJ6u9cM6UbkwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEABueA
8ujAD+yjeP5dTgqQH1G0hlriD5LmlJYnktaLarFU+y+EZlRFwjdORF/vLPwSG+y7
CLty/xlmKKQop70QzQ5jtJcsWzUjww8w1sO3AevfZlIF3HNhJmt51ihfvtJ7DVCv
CNyMeYO0pBqRKwOuhbG3EtJgyV7MF8J25UEtO4t+GzX3jcKKU4pWP+kyLBVfeDU3
MQuigd2LBwBQQFxZdpYpcXVKnAJJlHZIt68ycO1oSBEJO9fIF0CiAlC6ITxjtYtz
oCjd6cCLKMJiC6Zg7t1Q17vGl+FdGyQObSsiYsYO9N3CVaeDdpyGCH0Rfa0+oZzu
a5U9/l1FHlvpX980bw==
-----END CERTIFICATE-----

certs/localhost.key Normal file

@@ -0,0 +1,28 @@
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC5Qka1JnN35mku
L55+OuTEkk19XFXPZUuGSc0zZYp45jElMr9VpNirmGtzPLp8RScyEMAkKAQ9QWU4
re+fWAUko0w3ybtquQtWTo9i2vHwJPDDvxDToUshOvqFHS+Uk2k3gyyYbFjBShSm
fWPkl19kXt+jE0BlGLCiil/tknGW0O48i4WOZVmp3NHxzq+aoip1rDH9Xqwv1+LN
mv4IMfehOxNSKufaoz7WJsG08OIG99eiZSyvoxU5xO2+cy8B3L019XzyrAXswRYE
YKNeShPVsoIsPyGOfyXsaNGY9yYwUTYapYS98mIWQErQKBIdgRTDOjRS1U15CvST
SqQNG5vpAgMBAAECggEAAnv0Dw1Mv+rRy4ZyxtObEVPXPRzoxnDDXzHP4E16BTye
Fc/4pSBUIAUn2bPvLz0/X8bMOa4dlDcIv7Eu9Pvns8AY70vMaUReA80fmtHVD2xX
1PCT0X3InnxRAYKstSIUIGs+aHvV5Z+iJ8F82soOStN1MU56h+JLWElL5deCPHq3
tLZT8wM9aOZlNG72kJ71+DlcViahynQj8+VrionOLNjTJ2Jv/ByjM3GMIuSdBrgd
Sl4YAcdn6ontjJGoTgI+e+qkBAPwMZxHarNGQgbS0yNVIJe7Lq4zIKHErU/ZSmpD
GzhdVNzhrjADNIDzS7G+pxtz+aUxGtmRvOyopy8GAQKBgQDEPp2mRM+uZVVT4e1j
pkKO1c3O8j24I5mGKwFqhhNs3qGy051RXZa0+cQNx63GokXQan9DIXzc/Il7Y72E
z9bCFbcSWnlP8dBIpWiJm+UmqLXRyY4N8ecNnzL5x+Tuxm5Ij+ixJwXgdz/TLNeO
MBzu+Qy738/l/cAYxwcF7mR7AQKBgQDxq1F95HzCxBahRU9OGUO4s3naXqc8xKCC
m3vbbI8V0Exse2cuiwtlPPQWzTPabLCJVvCGXNru98sdeOu9FO9yicwZX0knOABK
QfPyDeITsh2u0C63+T9DNn6ixI/T68bTs7DHawEYbpS7bR50BnbHbQrrOAo6FSXF
yC7+Te+o6QKBgQCXEWSmo/4D0Dn5Usg9l7VQ40GFd3EPmUgLwntal0/I1TFAyiom
gpcLReIogXhCmpSHthO1h8fpDfZ/p+4ymRRHYBQH6uHMKugdpEdu9zVVpzYgArp5
/afSEqVZJwoSzWoELdQA23toqiPV2oUtDdiYFdw5nDccY1RHPp8nb7amAQKBgQDj
f4DhYDxKJMmg21xCiuoDb4DgHoaUYA0xpii8cL9pq4KmBK0nVWFO1kh5Robvsa2m
PB+EfNjkaIPepLxWbOTUEAAASoDU2JT9UoTQcl1GaUAkFnpEWfBB14TyuNMkjinH
lLpvn72SQFbm8VvfoU4jgfTrZP/LmajLPR1v6/IWMQKBgBh9qvOTax/GugBAWNj3
ZvF99rHOx0rfotEdaPcRN66OOiSWILR9yfMsTvwt1V0VEj7OqO9juMRFuIyB57gd
Hs/zgbkuggqjr1dW9r22P/UpzpodAEEN2d52RSX8nkMOkH61JXlH2MyRX65kdExA
VkTDq6KwomuhrU3z0+r/MSOn
-----END PRIVATE KEY-----


@@ -1,8 +1,40 @@
# compose.dev.yml
# ============================================================================
# DEVELOPMENT DOCKER COMPOSE CONFIGURATION
# ============================================================================
# This file defines the local development environment using Docker/Podman.
#
# Services:
# - app: Node.js application (API + Frontend + Bugsink + Logstash)
# - postgres: PostgreSQL 15 with PostGIS extension
# - redis: Redis for caching and job queues
#
# Usage:
# Start all services: podman-compose -f compose.dev.yml up -d
# Stop all services: podman-compose -f compose.dev.yml down
# View logs: podman-compose -f compose.dev.yml logs -f
# Reset everything: podman-compose -f compose.dev.yml down -v
#
# VS Code Dev Containers:
# This file is referenced by .devcontainer/devcontainer.json for seamless
# VS Code integration. Open the project in VS Code and use "Reopen in Container".
#
# Bugsink (ADR-015):
# Access error tracking UI at http://localhost:8000
# Default login: admin@localhost / admin
# ============================================================================
version: '3.8'
services:
# ===================
# Application Service
# ===================
app:
container_name: flyer-crawler-dev
# Use pre-built image if available, otherwise build from Dockerfile.dev
# To build: podman build -f Dockerfile.dev -t flyer-crawler-dev:latest .
image: localhost/flyer-crawler-dev:latest
build:
context: .
dockerfile: Dockerfile.dev
@@ -12,25 +44,76 @@ services:
# Create a volume for node_modules to avoid conflicts with Windows host
# and improve performance.
- node_modules_data:/app/node_modules
# Mount PostgreSQL logs for Logstash access (ADR-050)
- postgres_logs:/var/log/postgresql:ro
ports:
- '3000:3000' # Frontend (Vite default)
- '80:80' # HTTP redirect to HTTPS (matches production)
- '443:443' # Frontend HTTPS (nginx proxies Vite 5173 → 443)
- '3001:3001' # Backend API
- '8000:8000' # Bugsink error tracking HTTP (ADR-015)
- '8443:8443' # Bugsink error tracking HTTPS (ADR-015)
environment:
# Core settings
- NODE_ENV=development
# Database - use service name for Docker networking
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=postgres
- DB_PASSWORD=postgres
- DB_NAME=flyer_crawler_dev
# Redis - use service name for Docker networking
- REDIS_URL=redis://redis:6379
# Add other secrets here or use a .env file
- REDIS_HOST=redis
- REDIS_PORT=6379
# Frontend URL for CORS
- FRONTEND_URL=http://localhost:3000
# Default JWT secret for development (override in production!)
- JWT_SECRET=dev-jwt-secret-change-in-production
# Worker settings
- WORKER_LOCK_DURATION=120000
# Bugsink error tracking (ADR-015)
- BUGSINK_DB_HOST=postgres
- BUGSINK_DB_PORT=5432
- BUGSINK_DB_NAME=bugsink
- BUGSINK_DB_USER=bugsink
- BUGSINK_DB_PASSWORD=bugsink_dev_password
- BUGSINK_PORT=8000
- BUGSINK_BASE_URL=https://localhost:8443
- BUGSINK_ADMIN_EMAIL=admin@localhost
- BUGSINK_ADMIN_PASSWORD=admin
- BUGSINK_SECRET_KEY=dev-bugsink-secret-key-minimum-50-characters-for-security
# Sentry SDK configuration (points to local Bugsink HTTP)
# Note: Using HTTP with 127.0.0.1 instead of localhost because Sentry SDK
# doesn't accept 'localhost' as a valid hostname in DSN validation
# The browser accesses Bugsink at http://localhost:8000 (nginx proxies to HTTPS for the app)
- SENTRY_DSN=http://cea01396-c562-46ad-b587-8fa5ee6b1d22@127.0.0.1:8000/1
- VITE_SENTRY_DSN=http://d92663cb-73cf-4145-b677-b84029e4b762@127.0.0.1:8000/2
- SENTRY_ENVIRONMENT=development
- VITE_SENTRY_ENVIRONMENT=development
- SENTRY_ENABLED=true
- VITE_SENTRY_ENABLED=true
- SENTRY_DEBUG=true
- VITE_SENTRY_DEBUG=true
depends_on:
- postgres
- redis
# Keep container running so VS Code can attach
command: tail -f /dev/null
postgres:
condition: service_healthy
redis:
condition: service_healthy
# Start dev server automatically (works with or without VS Code)
command: /app/scripts/dev-entrypoint.sh
# Healthcheck for the app (once it's running)
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3001/api/health/live']
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
# ===================
# PostgreSQL Database
# ===================
postgres:
image: docker.io/library/postgis/postgis:15-3.4
image: docker.io/postgis/postgis:15-3.4
container_name: flyer-crawler-postgres
ports:
- '5432:5432'
@@ -38,15 +121,80 @@ services:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: flyer_crawler_dev
# Optimize for development
POSTGRES_INITDB_ARGS: '--encoding=UTF8 --locale=C'
volumes:
- postgres_data:/var/lib/postgresql/data
# Mount init scripts to run on first database creation
# Scripts run in alphabetical order: 00-extensions, 01-bugsink
- ./sql/00-init-extensions.sql:/docker-entrypoint-initdb.d/00-init-extensions.sql:ro
- ./sql/01-init-bugsink.sh:/docker-entrypoint-initdb.d/01-init-bugsink.sh:ro
# Mount custom PostgreSQL configuration (ADR-050)
- ./docker/postgres/postgresql.conf.override:/etc/postgresql/postgresql.conf.d/custom.conf:ro
# Create log volume for Logstash access (ADR-050)
- postgres_logs:/var/log/postgresql
# Override postgres command to include custom config (ADR-050)
command: >
postgres
-c config_file=/var/lib/postgresql/data/postgresql.conf
-c hba_file=/var/lib/postgresql/data/pg_hba.conf
-c log_min_messages=notice
-c client_min_messages=notice
-c logging_collector=on
-c log_destination=stderr
-c log_directory=/var/log/postgresql
-c log_filename=postgresql-%Y-%m-%d.log
-c log_rotation_age=1d
-c log_rotation_size=100MB
-c log_truncate_on_rotation=on
-c log_line_prefix='%t [%p] %u@%d '
-c log_min_duration_statement=1000
-c log_statement=none
-c log_connections=on
-c log_disconnections=on
# Healthcheck ensures postgres is ready before app starts
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U postgres -d flyer_crawler_dev']
interval: 5s
timeout: 5s
retries: 10
start_period: 10s
# ===================
# Redis Cache/Queue
# ===================
redis:
image: docker.io/library/redis:alpine
container_name: flyer-crawler-redis
ports:
- '6379:6379'
volumes:
- redis_data:/data
# Healthcheck ensures redis is ready before app starts
healthcheck:
test: ['CMD', 'redis-cli', 'ping']
interval: 5s
timeout: 5s
retries: 10
start_period: 5s
# Enable persistence for development data
command: redis-server --appendonly yes
# ===================
# Named Volumes
# ===================
volumes:
postgres_data:
name: flyer-crawler-postgres-data
postgres_logs:
name: flyer-crawler-postgres-logs
redis_data:
name: flyer-crawler-redis-data
node_modules_data:
name: flyer-crawler-node-modules
# ===================
# Network Configuration
# ===================
# All services are on the default bridge network.
# Use service names (postgres, redis) as hostnames.

docker/nginx/dev.conf Normal file

@@ -0,0 +1,86 @@
# docker/nginx/dev.conf
# ============================================================================
# Development Nginx Configuration (HTTPS)
# ============================================================================
# This configuration matches production by using HTTPS on port 443 with
# self-signed certificates generated by mkcert. Port 80 redirects to HTTPS.
#
# This allows the dev container to work the same way as production:
# - Frontend accessible on https://localhost (port 443)
# - Backend API on http://localhost:3001
# - Port 80 redirects to HTTPS
# ============================================================================
# HTTPS Server (main)
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name localhost;
# SSL Configuration (self-signed certificates from mkcert)
ssl_certificate /app/certs/localhost.crt;
ssl_certificate_key /app/certs/localhost.key;
# Allow large file uploads (matches production)
client_max_body_size 100M;
# Proxy API requests to Express server on port 3001
location /api/ {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Proxy WebSocket connections for real-time notifications
location /ws {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Serve flyer images from static storage
location /flyer-images/ {
alias /app/public/flyer-images/;
expires 7d;
add_header Cache-Control "public, immutable";
}
# Proxy all other requests to Vite dev server on port 5173
location / {
proxy_pass http://localhost:5173;
proxy_http_version 1.1;
# WebSocket support for Hot Module Replacement (HMR)
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# Forward real client IP
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Security headers (matches production)
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
}
# HTTP to HTTPS Redirect (matches production)
server {
listen 80;
listen [::]:80;
server_name localhost;
return 301 https://$host$request_uri;
}


@@ -0,0 +1,29 @@
# PostgreSQL Logging Configuration for Database Function Observability (ADR-050)
# This file is mounted into the PostgreSQL container to enable structured logging
# from database functions via fn_log()
# Enable logging to files for Logstash pickup
logging_collector = on
log_destination = 'stderr'
log_directory = '/var/log/postgresql'
log_filename = 'postgresql-%Y-%m-%d.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_truncate_on_rotation = on
# Log level - capture NOTICE and above (includes fn_log WARNING/ERROR)
log_min_messages = notice
client_min_messages = notice
# Include useful context in log prefix
log_line_prefix = '%t [%p] %u@%d '
# Capture slow queries from functions (1 second threshold)
log_min_duration_statement = 1000
# Log statement types (off for production, 'all' for debugging)
log_statement = 'none'
# Connection logging (useful for dev, can be disabled in production)
log_connections = on
log_disconnections = on


@@ -0,0 +1,267 @@
# Bugsink MCP Troubleshooting Guide
This document tracks known issues and solutions for the Bugsink MCP server integration with Claude Code.
## Issue History
### 2026-01-22: Server Name Prefix Collision (LATEST)
**Problem:**
- `bugsink-dev` MCP server never starts (tools not available)
- Production `bugsink` MCP works fine
- Manual test works: `BUGSINK_URL=http://localhost:8000 BUGSINK_TOKEN=<token> node d:/gitea/bugsink-mcp/dist/index.js`
- Configuration correct, environment variables correct, but server silently skipped
**Root Cause:**
Claude Code silently skips MCP servers when server names share prefixes (e.g., `bugsink` and `bugsink-dev`).
Debug logs showed `bugsink-dev` was NEVER started - no "Starting connection" message ever appeared.
**Discovery Method:**
- Analyzed Claude Code debug logs at `C:\Users\games3\.claude\debug\*.txt`
- Found that MCP startup messages only showed: `memory`, `bugsink`, `redis`, `gitea-projectium`, etc.
- `bugsink-dev` was completely absent from startup sequence
- No error was logged - server was silently filtered out
**Solution Attempts:**
**Attempt 1:** Renamed `bugsink-dev` to `devbugsink`
- New MCP tool prefix: `mcp__devbugsink__*`
- Changed URL from `http://localhost:8000` to `http://127.0.0.1:8000`
- **Result:** Still failed after full VS Code restart - server never loaded
**Attempt 2:** Renamed `devbugsink` to `localerrors` (completely different name)
- New MCP tool prefix: `mcp__localerrors__*`
- Uses completely unrelated name with no shared prefix
- Based on infra-architect research showing name collision issues
- **Result:** Still failed after full VS Code restart - server never loaded
**Attempt 3:** Created project-level `.mcp.json` file ✅ **SUCCESS**
- Location: `d:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com\.mcp.json`
- Contains `localerrors` server configuration
- Project-level config bypassed the global config loader issue
- **Result:** ✅ Working! Server loads successfully, 2 projects found
- Tool prefix: `mcp__localerrors__*`
**Alternative Solutions Researched:**
- Sentry MCP: Not compatible (different API endpoints `/api/canonical/0/` vs `/api/0/`)
- MCP-Proxy: Could work but requires a separate process
- Project-level `.mcp.json`: Alternative if global config continues to fail
**Status:** ✅ RESOLVED - Project-level `.mcp.json` works successfully
**Root Cause Analysis:**
Claude Code's global settings.json has an issue loading localhost stdio MCP servers, even with completely distinct names. The exact cause is unknown, but may be related to:
- Multiple servers using the same package (bugsink-mcp)
- Localhost URL filtering in global config
- Internal MCP loader bug specific to Windows/localhost combinations
**Working Solution:**
Use a **project-level `.mcp.json`** file instead of the global `settings.json` for localhost MCP servers. This bypasses the global config loader issue entirely.
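A sketch of that project-level file (the `mcpServers` wrapper follows Claude Code's project-config format; token and paths as in the Correct Configuration section below):

```bash
# Create .mcp.json at the project root; Claude Code picks it up on restart.
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "localerrors": {
      "command": "node",
      "args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
      "env": {
        "BUGSINK_URL": "http://127.0.0.1:8000",
        "BUGSINK_TOKEN": "<40-char-hex-token>"
      }
    }
  }
}
EOF
```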
**Key Insights:**
1. Global config fails for localhost servers even with distinct names (`localerrors`)
2. Project-level `.mcp.json` successfully loads the same configuration
3. Production HTTPS servers work fine in global config
4. Both configs can coexist: global for production, project-level for dev
---
### 2026-01-21: Environment Variable Typo (RESOLVED)
**Problem:**
- `bugsink-dev` MCP server fails to start (tools not available)
- Production `bugsink` MCP works fine
- User experienced repeated troubleshooting loops without resolution
**Root Cause:**
Environment variable name mismatch:
- **Package expects:** `BUGSINK_TOKEN`
- **Configuration had:** `BUGSINK_API_TOKEN`
**Discovery Method:**
- infra-architect subagent examined `d:\gitea\bugsink-mcp\README.md` line 47
- Found correct variable name in package documentation
- Production MCP continued working because it was started before config change
**Solution Applied:**
1. Updated `C:\Users\games3\.claude\settings.json`:
- Changed `BUGSINK_API_TOKEN` to `BUGSINK_TOKEN` for both `bugsink` and `bugsink-dev`
- Removed unused `BUGSINK_ORG_SLUG` environment variable
2. Updated documentation:
- `CLAUDE.md` MCP Servers section (lines 307-327)
- `docs/BUGSINK-SYNC.md` environment variables (line 66, 108)
3. Required action:
- Restart Claude Code to reload MCP servers
**Status:** Fixed - but superseded by server name collision issue above
---
## Correct Configuration
### Required Environment Variables
The bugsink-mcp package requires exactly **TWO** environment variables:
```json
{
"bugsink": {
"command": "node",
"args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
"env": {
"BUGSINK_URL": "https://bugsink.projectium.com",
"BUGSINK_TOKEN": "<40-char-hex-token>"
}
},
"localerrors": {
"command": "node",
"args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
"env": {
"BUGSINK_URL": "http://127.0.0.1:8000",
"BUGSINK_TOKEN": "<40-char-hex-token>"
}
}
}
```
**Important:**
- Variable is `BUGSINK_TOKEN`, NOT `BUGSINK_API_TOKEN`
- `BUGSINK_ORG_SLUG` is NOT used by the package
- Works with both HTTPS (production) and HTTP (localhost)
- Use a completely distinct name like `localerrors` (not `bugsink-dev` or `devbugsink`) to avoid any name collision
---
## Common Issues
### MCP Server Tools Not Available
**Symptoms:**
- `mcp__localerrors__*` tools return "No such tool available"
- Production `bugsink` MCP may work while `localerrors` fails
**Possible Causes:**
1. **Wrong environment variable name** (most common)
- Check: Variable must be `BUGSINK_TOKEN`, not `BUGSINK_API_TOKEN`
2. **Invalid API token**
- Check: Token must be 40-character lowercase hex
- Verify: Token created via Django management command
3. **Bugsink instance not accessible**
- Test: `curl -s -o /dev/null -w "%{http_code}" http://localhost:8000`
- Expected: `302` (redirect) or `200`
4. **MCP server crashed on startup**
- Check: Claude Code logs (if available)
- Test manually: `BUGSINK_URL=http://localhost:8000 BUGSINK_TOKEN=<token> node d:/gitea/bugsink-mcp/dist/index.js`
**Solution:**
1. Verify correct variable names in settings.json
2. Restart Claude Code
3. Test connection: Use `mcp__bugsink__test_connection` tool
---
## Testing MCP Server
### Manual Test
Run the MCP server directly to see error output:
```bash
# For localhost Bugsink
cd d:\gitea\bugsink-mcp
set BUGSINK_URL=http://localhost:8000
set BUGSINK_TOKEN=a609c2886daa4e1e05f1517074d7779a5fb49056
node dist/index.js
```
Expected output:
```
Bugsink MCP server started
Connected to: http://localhost:8000
```
### Test in Claude Code
After restart, verify both MCP servers work:
```typescript
// Production Bugsink
mcp__bugsink__test_connection();
// Expected: "Connection successful: Connected successfully. Found N project(s)."
// Dev Container Bugsink
mcp__localerrors__test_connection();
// Expected: "Connection successful: Connected successfully. Found N project(s)."
```
---
## Creating Bugsink API Tokens
Bugsink 2.0.11 does NOT have a "Settings > API Keys" menu in the UI. Tokens must be created via Django management command.
### For Dev Container (localhost:8000)
```bash
MSYS_NO_PATHCONV=1 podman exec -e DATABASE_URL=postgresql://bugsink:bugsink_dev_password@postgres:5432/bugsink -e SECRET_KEY=dev-bugsink-secret-key-minimum-50-characters-for-security flyer-crawler-dev sh -c 'cd /opt/bugsink/conf && DJANGO_SETTINGS_MODULE=bugsink_conf PYTHONPATH=/opt/bugsink/conf:/opt/bugsink/lib/python3.10/site-packages /opt/bugsink/bin/python -m django create_auth_token'
```
### For Production (bugsink.projectium.com via SSH)
```bash
ssh root@projectium.com "cd /opt/bugsink && bugsink-manage create_auth_token"
```
Both commands output a 40-character hex token.
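A quick sanity check on the output before pasting it into any config:

```bash
# Verify the token is 40 lowercase hex characters.
echo "$TOKEN" | grep -Eq '^[0-9a-f]{40}$' && echo "token format OK"
```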
---
## Package Information
- **Repository:** https://github.com/j-shelfwood/bugsink-mcp.git
- **Local Installation:** `d:\gitea\bugsink-mcp`
- **Build Command:** `npm install && npm run build`
- **Main File:** `dist/index.js`
---
## Related Documentation
- [CLAUDE.md MCP Servers Section](../CLAUDE.md#mcp-servers)
- [DEV-CONTAINER-BUGSINK.md](./DEV-CONTAINER-BUGSINK.md)
- [BUGSINK-SYNC.md](./BUGSINK-SYNC.md)
---
## Failed Solutions (Do Not Retry)
These approaches were tried and did NOT work:
1. ❌ Regenerating API token multiple times
2. ❌ Restarting Claude Code without config changes
3. ❌ Checking Bugsink instance accessibility (was already working)
4. ❌ Adding `BUGSINK_ORG_SLUG` environment variable (not used by package)
**Lesson:** Always verify actual package requirements in source code/README before troubleshooting.

docs/BUGSINK-SYNC.md Normal file

@@ -0,0 +1,294 @@
# Bugsink to Gitea Issue Synchronization
This document describes the automated workflow for syncing Bugsink error tracking issues to Gitea tickets.
## Overview
The sync system automatically creates Gitea issues from unresolved Bugsink errors, ensuring all application errors are tracked and assignable.
**Key Points:**
- Runs **only on test/staging server** (not production)
- Syncs **all 6 Bugsink projects** (including production errors)
- Creates Gitea issues with full error context
- Marks synced issues as resolved in Bugsink
- Uses Redis db 15 for sync state tracking
## Architecture
```
                TEST/STAGING SERVER
┌─────────────────────────────────────────────────┐
│                                                 │
│  BullMQ Queue ──▶ Sync Worker ──▶ Redis DB 15   │
│  (bugsink-sync)     (15min)       (sync state)  │
│                        │                        │
└────────────────────────┼────────────────────────┘
                         │
           ┌─────────────┴─────────────┐
           ▼                           ▼
      ┌─────────┐                 ┌─────────┐
      │ Bugsink │                 │  Gitea  │
      │ (read)  │                 │ (write) │
      └─────────┘                 └─────────┘
```
## Bugsink Projects
| Project Slug | Type | Environment | Label Mapping |
| --------------------------------- | -------- | ----------- | ----------------------------------- |
| flyer-crawler-backend | Backend | Production | bug:backend + env:production |
| flyer-crawler-backend-test | Backend | Test | bug:backend + env:test |
| flyer-crawler-frontend | Frontend | Production | bug:frontend + env:production |
| flyer-crawler-frontend-test | Frontend | Test | bug:frontend + env:test |
| flyer-crawler-infrastructure | Infra | Production | bug:infrastructure + env:production |
| flyer-crawler-test-infrastructure | Infra | Test | bug:infrastructure + env:test |
## Gitea Labels
| Label | Color | ID |
| ------------------ | ------------------ | --- |
| bug:frontend | #e11d48 (Red) | 8 |
| bug:backend | #ea580c (Orange) | 9 |
| bug:infrastructure | #7c3aed (Purple) | 10 |
| env:production | #dc2626 (Dark Red) | 11 |
| env:test | #2563eb (Blue) | 12 |
| env:development | #6b7280 (Gray) | 13 |
| source:bugsink | #10b981 (Green) | 14 |
## Environment Variables
Add these to **test environment only** (`deploy-to-test.yml`):
```bash
# Bugsink API
BUGSINK_URL=https://bugsink.projectium.com
BUGSINK_TOKEN=<create via Django management command - see below>
# Gitea API
GITEA_URL=https://gitea.projectium.com
GITEA_API_TOKEN=<personal access token with repo scope>
GITEA_OWNER=torbo
GITEA_REPO=flyer-crawler.projectium.com
# Sync Control
BUGSINK_SYNC_ENABLED=true # Only set true in test env
BUGSINK_SYNC_INTERVAL=15 # Minutes between sync runs
```
## Creating Bugsink API Token
Bugsink 2.0.11 does not have a "Settings > API Keys" UI. Create API tokens via Django management command:
**On Production Server:**
```bash
sudo su - bugsink
source venv/bin/activate
cd ~
bugsink-manage shell -c "
from django.contrib.auth import get_user_model
from rest_framework.authtoken.models import Token
User = get_user_model()
user = User.objects.get(email='admin@yourdomain.com') # Use your admin email
token, created = Token.objects.get_or_create(user=user)
print(f'Token: {token.key}')
"
exit
```
This will output a 40-character lowercase hex token.
## Gitea Secrets to Add
Add these secrets in Gitea repository settings (Settings > Secrets):
| Secret Name | Value | Environment |
| ---------------------- | ------------------------ | ----------- |
| `BUGSINK_TOKEN` | Token from command above | Test only |
| `GITEA_SYNC_TOKEN` | Personal access token | Test only |
| `BUGSINK_SYNC_ENABLED` | `true` | Test only |
## Redis Configuration
| Database | Purpose |
| -------- | ------------------------ |
| 0 | BullMQ production queues |
| 1 | BullMQ test queues |
| 15 | Bugsink sync state |
**Key Pattern:**
```
bugsink:synced:{issue_uuid}
```
**Value (JSON):**
```json
{
"gitea_issue_number": 42,
"synced_at": "2026-01-17T10:30:00Z",
"project": "flyer-crawler-frontend-test",
"title": "[TypeError] t.map is not a function"
}
```
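You can inspect this state directly with `redis-cli` (the UUID below is a placeholder):

```bash
# List all sync mappings, then pretty-print one of them.
redis-cli -n 15 KEYS 'bugsink:synced:*'
redis-cli -n 15 GET 'bugsink:synced:<issue_uuid>' | jq .
```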
## Sync Workflow
1. **Trigger**: Every 15 minutes (or manual via admin API)
2. **Fetch**: List unresolved issues from all 6 Bugsink projects
3. **Check**: Skip issues already in Redis sync state
4. **Create**: Create Gitea issue with labels and full context
5. **Record**: Store sync mapping in Redis db 15
6. **Resolve**: Mark issue as resolved in Bugsink
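Step 4 maps onto Gitea's standard issue API; a hedged sketch of the call the sync service makes (label IDs from the table above):

```bash
# Create a Gitea issue tagged bug:frontend + env:test + source:bugsink.
curl -sS -X POST \
  -H "Authorization: token $GITEA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"title": "[TypeError] t.map is not a function", "body": "...", "labels": [8, 12, 14]}' \
  "$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/issues"
```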
## Issue Template
Created Gitea issues follow this format:
```markdown
## Error Details
| Field | Value |
| ------------ | ----------------------- |
| **Type** | TypeError |
| **Message** | t.map is not a function |
| **Platform** | javascript |
| **Level** | error |
## Occurrence Statistics
- **First Seen**: 2026-01-13 18:24:22 UTC
- **Last Seen**: 2026-01-16 05:03:02 UTC
- **Total Occurrences**: 4
## Request Context
- **URL**: GET https://flyer-crawler-test.projectium.com/
## Stacktrace
<details>
<summary>Click to expand</summary>
[Full stacktrace]
</details>
---
**Bugsink Issue**: https://bugsink.projectium.com/issues/{id}
**Project**: flyer-crawler-frontend-test
```
## Admin Endpoints
### Manual Sync Trigger
```bash
POST /api/admin/bugsink/sync
Authorization: Bearer <admin_jwt>
# Response
{
"success": true,
"data": {
"synced": 3,
"skipped": 12,
"failed": 0,
"duration_ms": 2340
}
}
```
### Sync Status
```bash
GET /api/admin/bugsink/sync/status
Authorization: Bearer <admin_jwt>
# Response
{
"success": true,
"data": {
"enabled": true,
"last_run": "2026-01-17T10:30:00Z",
"next_run": "2026-01-17T10:45:00Z",
"total_synced": 47
}
}
```
## Files to Create
| File | Purpose |
| -------------------------------------- | --------------------- |
| `src/services/bugsinkSync.server.ts` | Core sync logic |
| `src/services/bugsinkClient.server.ts` | Bugsink HTTP client |
| `src/services/giteaClient.server.ts` | Gitea HTTP client |
| `src/types/bugsink.ts` | TypeScript interfaces |
| `src/routes/admin/bugsink-sync.ts` | Admin endpoints |
## Files to Modify
| File | Changes |
| ------------------------------------- | ------------------------- |
| `src/services/queues.server.ts` | Add `bugsinkSyncQueue` |
| `src/services/workers.server.ts` | Add sync worker |
| `src/config/env.ts` | Add bugsink config schema |
| `.env.example` | Document new variables |
| `.gitea/workflows/deploy-to-test.yml` | Pass secrets |
## Implementation Phases
### Phase 1: Core Infrastructure
- [ ] Add env vars to `env.ts` schema
- [ ] Create BugsinkClient service
- [ ] Create GiteaClient service
- [ ] Add Redis db 15 connection
### Phase 2: Sync Logic
- [ ] Create BugsinkSyncService
- [ ] Add bugsink-sync queue
- [ ] Add sync worker
- [ ] Create TypeScript types
### Phase 3: Integration
- [ ] Add admin endpoints
- [ ] Update deploy-to-test.yml
- [ ] Add Gitea secrets
- [ ] End-to-end testing
## Troubleshooting
### Sync not running
1. Check `BUGSINK_SYNC_ENABLED` is `true`
2. Verify worker is running: `GET /api/admin/workers/status`
3. Check Bull Board: `/api/admin/jobs`
### Duplicate issues created
1. Check Redis db 15 connectivity
2. Verify sync state keys exist: `redis-cli -n 15 KEYS "bugsink:*"`
### Issues not resolving in Bugsink
1. Verify `BUGSINK_TOKEN` has write permissions
2. Check worker logs for API errors
### Missing stacktrace in Gitea issue
1. Source maps may not be uploaded
2. Bugsink API may have returned partial data
3. Check worker logs for fetch errors
## Related Documentation
- [ADR-054: Bugsink-Gitea Sync](./adr/0054-bugsink-gitea-issue-sync.md)
- [ADR-006: Background Job Processing](./adr/0006-background-job-processing-and-task-queues.md)
- [ADR-015: Error Tracking](./adr/0015-application-performance-monitoring-and-error-tracking.md)


@@ -0,0 +1,81 @@
# Dev Container Bugsink Setup
Local Bugsink instance for development - NOT connected to production.
## Quick Reference
| Item | Value |
| ------------ | ----------------------------------------------------------- |
| UI | `https://localhost:8443` (nginx proxy from 8000) |
| Credentials | `admin@localhost` / `admin` |
| Projects | Backend (Dev) = Project ID 1, Frontend (Dev) = Project ID 2 |
| Backend DSN | `SENTRY_DSN=http://<key>@localhost:8000/1` |
| Frontend DSN | `VITE_SENTRY_DSN=http://<key>@localhost:8000/2` |
## Configuration Files
| File | Purpose |
| ----------------- | ----------------------------------------------------------------- |
| `compose.dev.yml` | Initial DSNs using `127.0.0.1:8000` (container startup) |
| `.env.local` | **OVERRIDES** compose.dev.yml with `localhost:8000` (app runtime) |
**CRITICAL**: `.env.local` takes precedence over `compose.dev.yml` environment variables.
## Why localhost vs 127.0.0.1?
The `.env.local` file uses `localhost` while `compose.dev.yml` uses `127.0.0.1`. Both work in practice - `localhost` was chosen when `.env.local` was created separately.
## HTTPS Setup
- Self-signed certificates auto-generated with mkcert on container startup
- CSRF Protection: Django configured with `SECURE_PROXY_SSL_HEADER` to trust `X-Forwarded-Proto` from nginx
- HTTPS is for UI access only - Sentry SDK uses HTTP directly
## Isolation Benefits
- Dev errors stay local, don't pollute production/test dashboards
- No Gitea secrets needed - everything self-contained
- Independent testing of error tracking without affecting metrics
## Accessing Errors
### Via Browser
1. Open `https://localhost:8443`
2. Login with credentials above
3. Navigate to Issues to view captured errors
### Via MCP (bugsink-dev)
Configure in `.claude/mcp.json`:
```json
{
"bugsink-dev": {
"command": "node",
"args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
"env": {
"BUGSINK_URL": "http://localhost:8000",
"BUGSINK_API_TOKEN": "<token-from-local-bugsink>",
"BUGSINK_ORG_SLUG": "sentry"
}
}
}
```
**Get auth token**:
API tokens must be created via Django management command (Bugsink 2.0.11 does not have a "Settings > API Keys" UI):
```bash
podman exec flyer-crawler-dev sh -c 'cd /opt/bugsink/conf && \
DATABASE_URL=postgresql://bugsink:bugsink_dev_password@postgres:5432/bugsink \
SECRET_KEY=dev-bugsink-secret-key-minimum-50-characters-for-security \
DJANGO_SETTINGS_MODULE=bugsink_conf \
PYTHONPATH=/opt/bugsink/conf:/opt/bugsink/lib/python3.10/site-packages \
/opt/bugsink/bin/python -m django create_auth_token'
```
This will output a 40-character lowercase hex token. Copy it to your MCP configuration.
**MCP Tools**: Use `mcp__bugsink-dev__*` tools (not `mcp__bugsink__*` which connects to production).


@@ -0,0 +1,181 @@
# Flyer URL Configuration
## Overview
Flyer image and icon URLs are environment-specific to ensure they point to the correct server for each deployment. Images are served as static files by NGINX from the `/flyer-images/` path with 7-day browser caching enabled.
## Environment-Specific URLs
| Environment | Base URL | Example |
| ------------- | ------------------------------------------- | -------------------------------------------------------------------------- |
| Dev Container | `https://127.0.0.1` | `https://127.0.0.1/flyer-images/safeway-flyer.jpg` |
| Test | `https://flyer-crawler-test.projectium.com` | `https://flyer-crawler-test.projectium.com/flyer-images/safeway-flyer.jpg` |
| Production | `https://flyer-crawler.projectium.com` | `https://flyer-crawler.projectium.com/flyer-images/safeway-flyer.jpg` |
## NGINX Static File Serving
All environments serve flyer images as static files with browser caching:
```nginx
# Serve flyer images from static storage (7-day cache)
location /flyer-images/ {
alias /path/to/flyer-images/;
expires 7d;
add_header Cache-Control "public, immutable";
}
```
### Directory Paths by Environment
| Environment | NGINX Alias Path |
| ------------- | ---------------------------------------------------------- |
| Dev Container | `/app/public/flyer-images/` |
| Test | `/var/www/flyer-crawler-test.projectium.com/flyer-images/` |
| Production | `/var/www/flyer-crawler.projectium.com/flyer-images/` |
## Configuration
### Environment Variable
Set `FLYER_BASE_URL` in your environment configuration:
```bash
# Dev container (.env)
FLYER_BASE_URL=https://127.0.0.1
# Test environment
FLYER_BASE_URL=https://flyer-crawler-test.projectium.com
# Production
FLYER_BASE_URL=https://flyer-crawler.projectium.com
```
### Seed Script
The seed script ([src/db/seed.ts](../src/db/seed.ts)) automatically uses the correct base URL based on:
1. `FLYER_BASE_URL` environment variable (if set)
2. `NODE_ENV` value:
- `production` → `https://flyer-crawler.projectium.com`
- `test` → `https://flyer-crawler-test.projectium.com`
- Default → `https://127.0.0.1`
The seed script also copies test images from `src/tests/assets/` to `public/flyer-images/`:
- `test-flyer-image.jpg` - Sample flyer image
- `test-flyer-icon.png` - Sample 64x64 icon
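A hedged invocation example (the `tsx` runner is an assumption; use whatever script your package.json wires up for seeding):

```bash
# Seed the dev-container database with URLs pointing at the local nginx.
FLYER_BASE_URL=https://127.0.0.1 npx tsx src/db/seed.ts
```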
## Updating Existing Data
If you need to update existing flyer URLs in the database, use the provided SQL script:
### Dev Container
```bash
# Connect to dev database
podman exec -it flyer-crawler-postgres psql -U postgres -d flyer_crawler_dev
# Run the update (dev container uses HTTPS with self-signed certs)
UPDATE flyers
SET
image_url = REPLACE(image_url, 'example.com', '127.0.0.1'),
icon_url = REPLACE(icon_url, 'example.com', '127.0.0.1')
WHERE
image_url LIKE '%example.com%'
OR icon_url LIKE '%example.com%';
# Verify
SELECT flyer_id, image_url, icon_url FROM flyers;
```
### Test Environment
```bash
# Via SSH
ssh root@projectium.com "psql -U flyer_crawler_test -d flyer-crawler-test -c \"
UPDATE flyers
SET
image_url = REPLACE(image_url, 'example.com', 'flyer-crawler-test.projectium.com'),
icon_url = REPLACE(icon_url, 'example.com', 'flyer-crawler-test.projectium.com')
WHERE
image_url LIKE '%example.com%'
OR icon_url LIKE '%example.com%';
\""
```
### Production
```bash
# Via SSH
ssh root@projectium.com "psql -U flyer_crawler_prod -d flyer-crawler-prod -c \"
UPDATE flyers
SET
image_url = REPLACE(image_url, 'example.com', 'flyer-crawler.projectium.com'),
icon_url = REPLACE(icon_url, 'example.com', 'flyer-crawler.projectium.com')
WHERE
image_url LIKE '%example.com%'
OR icon_url LIKE '%example.com%';
\""
```
## Test Data Updates
### Test Helper Function
A helper function `getFlyerBaseUrl()` is available in [src/tests/utils/testHelpers.ts](../src/tests/utils/testHelpers.ts) that automatically detects the correct base URL for tests:
```typescript
export const getFlyerBaseUrl = (): string => {
if (process.env.FLYER_BASE_URL) {
return process.env.FLYER_BASE_URL;
}
// Check if we're in dev container (DB_HOST=postgres is typical indicator)
if (process.env.DB_HOST === 'postgres' || process.env.DB_HOST === '127.0.0.1') {
return 'https://127.0.0.1';
}
if (process.env.NODE_ENV === 'production') {
return 'https://flyer-crawler.projectium.com';
}
if (process.env.NODE_ENV === 'test') {
return 'https://flyer-crawler-test.projectium.com';
}
// Default for unit tests
return 'https://example.com';
};
```
### Updated Test Files
The following test files now use `getFlyerBaseUrl()` for environment-aware URL generation:
- [src/db/seed.ts](../src/db/seed.ts) - Main seed script (uses `FLYER_BASE_URL`)
- [src/tests/utils/testHelpers.ts](../src/tests/utils/testHelpers.ts) - `getFlyerBaseUrl()` helper function
- [src/hooks/useDataExtraction.test.ts](../src/hooks/useDataExtraction.test.ts) - Mock flyer factory
- [src/schemas/flyer.schemas.test.ts](../src/schemas/flyer.schemas.test.ts) - Schema validation tests
- [src/services/flyerProcessingService.server.test.ts](../src/services/flyerProcessingService.server.test.ts) - Processing service tests
- [src/tests/integration/flyer-processing.integration.test.ts](../src/tests/integration/flyer-processing.integration.test.ts) - Integration tests
This approach ensures tests work correctly in all environments (dev container, CI/CD, local development, test, production).
## Files Changed
| File | Change |
| --------------------------- | ------------------------------------------------------------------------------------------------- |
| `src/db/seed.ts` | Added `FLYER_BASE_URL` environment variable support, copies test images to `public/flyer-images/` |
| `docker/nginx/dev.conf` | Added `/flyer-images/` location block for static file serving |
| `.env.example` | Added `FLYER_BASE_URL` variable |
| `sql/update_flyer_urls.sql` | SQL script for updating existing data |
| Test files | Updated mock data to use `https://127.0.0.1` |
## Summary
- Seed script now uses environment-specific HTTPS URLs
- Seed script copies test images from `src/tests/assets/` to `public/flyer-images/`
- NGINX serves `/flyer-images/` as static files with 7-day cache
- Test files updated with `https://127.0.0.1`
- SQL script provided for updating existing data
- Documentation updated for each environment


@@ -0,0 +1,695 @@
# Production Deployment Checklist: Extended Logstash Configuration
**Important**: This checklist follows an **inspect-first, then-modify** approach. Each step first checks the current state before making changes.
---
## Phase 1: Pre-Deployment Inspection
### Step 1.1: Verify Logstash Status
```bash
ssh root@projectium.com
systemctl status logstash
curl -s http://localhost:9600/_node/stats/pipelines?pretty | jq '.pipelines.main.events'
```
**Record current state:**
- Status: [active/inactive]
- Events processed: [number]
- Memory usage: [amount]
**Expected**: Logstash should be active and processing PostgreSQL logs from ADR-050.
---
### Step 1.2: Inspect Existing Configuration Files
```bash
# List all configuration files
ls -alF /etc/logstash/conf.d/
# Check existing backups (if any)
ls -lh /etc/logstash/conf.d/*.backup-* 2>/dev/null || echo "No backups found"
# View current configuration
cat /etc/logstash/conf.d/bugsink.conf
```
**Record current state:**
- Configuration files present: [list]
- Existing backups: [list or "none"]
- Current config size: [bytes]
**Questions to answer:**
- ✅ Is there an existing `bugsink.conf`?
- ✅ Are there any existing backups?
- ✅ What inputs/filters/outputs are currently configured?
---
### Step 1.3: Inspect Log Output Directory
```bash
# Check if directory exists
ls -ld /var/log/logstash 2>/dev/null || echo "Directory does not exist"
# If exists, check contents
ls -alF /var/log/logstash/
# Check ownership and permissions
ls -ld /var/log/logstash
```
**Record current state:**
- Directory exists: [yes/no]
- Current ownership: [user:group]
- Current permissions: [drwx------]
- Existing files: [list]
**Questions to answer:**
- ✅ Does `/var/log/logstash/` already exist?
- ✅ What files are currently in it?
- ✅ Are these Logstash's own logs or our operational logs?
---
### Step 1.4: Check Logrotate Configuration
```bash
# Check if logrotate config exists
cat /etc/logrotate.d/logstash 2>/dev/null || echo "No logrotate config found"
# List all logrotate configs
ls -lh /etc/logrotate.d/ | grep logstash
```
**Record current state:**
- Logrotate config exists: [yes/no]
- Current rotation policy: [daily/weekly/none]
---
### Step 1.5: Check Logstash User Groups
```bash
# Check current group membership
groups logstash
# Verify which groups have access to required logs
ls -l /home/gitea-runner/.pm2/logs/*.log | head -3
ls -l /var/log/redis/redis-server.log
ls -l /var/log/nginx/access.log
ls -l /var/log/nginx/error.log
```
**Record current state:**
- Logstash groups: [list]
- PM2 log file group: [group]
- Redis log file group: [group]
- NGINX log file group: [group]
**Questions to answer:**
- ✅ Is logstash already in the `adm` group?
- ✅ Is logstash already in the `postgres` group?
- ✅ Can logstash currently read PM2 logs?
---
### Step 1.6: Test Log File Access (Current State)
```bash
# Test PM2 worker logs
sudo -u logstash cat /home/gitea-runner/.pm2/logs/flyer-crawler-worker-*.log 2>&1 | head -5
# Test PM2 analytics worker logs
sudo -u logstash cat /home/gitea-runner/.pm2/logs/flyer-crawler-analytics-worker-*.log 2>&1 | head -5
# Test Redis logs
sudo -u logstash cat /var/log/redis/redis-server.log 2>&1 | head -5
# Test NGINX access logs
sudo -u logstash cat /var/log/nginx/access.log 2>&1 | head -5
# Test NGINX error logs
sudo -u logstash cat /var/log/nginx/error.log 2>&1 | head -5
```
**Record current state:**
- PM2 worker logs accessible: [yes/no/error]
- PM2 analytics logs accessible: [yes/no/error]
- Redis logs accessible: [yes/no/error]
- NGINX access logs accessible: [yes/no/error]
- NGINX error logs accessible: [yes/no/error]
**If any fail**: Note the specific error message (permission denied, file not found, etc.)
---
### Step 1.7: Check PM2 Log File Locations
```bash
# List all PM2 log files
ls -lh /home/gitea-runner/.pm2/logs/
# Check for production and test worker logs
ls -lh /home/gitea-runner/.pm2/logs/ | grep -E "(flyer-crawler-worker|flyer-crawler-analytics-worker)"
```
**Record current state:**
- Production worker logs present: [yes/no]
- Test worker logs present: [yes/no]
- Analytics worker logs present: [yes/no]
- File naming pattern: [describe pattern]
**Questions to answer:**
- ✅ Do the log file paths match what's in the new Logstash config?
- ✅ Are there separate logs for production vs test environments?
---
### Step 1.8: Check Disk Space
```bash
# Check available disk space
df -h /var/log/
# Check current size of Logstash logs
du -sh /var/log/logstash/
# Check size of PM2 logs
du -sh /home/gitea-runner/.pm2/logs/
```
**Record current state:**
- Available space on `/var/log`: [amount]
- Current Logstash log size: [amount]
- Current PM2 log size: [amount]
**Risk assessment:**
- ✅ Is there sufficient space for 30 days of rotated logs?
- ✅ Estimate: ~100MB/day for new operational logs = ~3GB for 30 days
---
### Step 1.9: Review Bugsink Projects
```bash
# Check if Bugsink projects 5 and 6 exist
# (This requires accessing Bugsink UI or API)
echo "Manual check: Navigate to https://bugsink.projectium.com"
echo "Verify project IDs 5 and 6 exist and their names/DSNs"
```
**Record current state:**
- Project 5 exists: [yes/no]
- Project 5 name: [name]
- Project 6 exists: [yes/no]
- Project 6 name: [name]
**Questions to answer:**
- ✅ Do the project IDs in the new config match actual Bugsink projects?
- ✅ Are DSNs correct?
---
## Phase 2: Make Deployment Decisions
Based on Phase 1 inspection, answer these questions:
1. **Backup needed?**
- Current config exists: [yes/no]
- Decision: [create backup / no backup needed]
2. **Directory creation needed?**
- `/var/log/logstash/` exists with correct permissions: [yes/no]
- Decision: [create directory / fix permissions / no action needed]
3. **Logrotate config needed?**
- Config exists: [yes/no]
- Decision: [create config / update config / no action needed]
4. **Group membership needed?**
- Logstash already in `adm` group: [yes/no]
- Decision: [add to group / already member]
5. **Log file access issues?**
- Any files inaccessible: [list files]
- Decision: [fix permissions / fix group membership / no action needed]
---
## Phase 3: Execute Deployment
### Step 3.1: Create Configuration Backup
**Only if**: Configuration file exists and no recent backup.
```bash
# Create timestamped backup
sudo cp /etc/logstash/conf.d/bugsink.conf \
/etc/logstash/conf.d/bugsink.conf.backup-$(date +%Y%m%d-%H%M%S)
# Verify backup
ls -lh /etc/logstash/conf.d/*.backup-*
```
**Confirmation**: ✅ Backup file created with timestamp.
---
### Step 3.2: Handle Log Output Directory
**If directory doesn't exist:**
```bash
sudo mkdir -p /var/log/logstash-operational
sudo chown logstash:logstash /var/log/logstash-operational
sudo chmod 755 /var/log/logstash-operational
```
**If directory exists but has wrong permissions:**
```bash
sudo chown logstash:logstash /var/log/logstash
sudo chmod 755 /var/log/logstash
```
**Note**: The existing `/var/log/logstash/` contains Logstash's own operational logs (logstash-plain.log, etc.). You have two options:
**Option A**: Use a separate directory for our operational logs (recommended):
- Directory: `/var/log/logstash-operational/`
- Update config to use this path instead
**Option B**: Share the directory (requires careful logrotate config):
- Keep using `/var/log/logstash/`
- Ensure logrotate doesn't rotate our custom logs the same way as Logstash's own logs
**Decision**: [Choose Option A or B]
**Verification:**
```bash
ls -ld /var/log/logstash-operational # or /var/log/logstash
```
**Confirmation**: ✅ Directory exists with `drwxr-xr-x logstash logstash`.
---
### Step 3.3: Configure Logrotate
**Only if**: Logrotate config doesn't exist or needs updating.
**For Option A (separate directory):**
```bash
sudo tee /etc/logrotate.d/logstash-operational <<'EOF'
/var/log/logstash-operational/*.log {
daily
rotate 30
compress
delaycompress
missingok
notifempty
create 0644 logstash logstash
sharedscripts
postrotate
# No reload needed - Logstash handles rotation automatically
endscript
}
EOF
```
**For Option B (shared directory):**
```bash
sudo tee /etc/logrotate.d/logstash-operational <<'EOF'
/var/log/logstash/pm2-workers-*.log
/var/log/logstash/redis-operational-*.log
/var/log/logstash/nginx-access-*.log {
daily
rotate 30
compress
delaycompress
missingok
notifempty
create 0644 logstash logstash
sharedscripts
postrotate
# No reload needed - Logstash handles rotation automatically
endscript
}
EOF
```
**Verify configuration:**
```bash
sudo logrotate -d /etc/logrotate.d/logstash-operational
cat /etc/logrotate.d/logstash-operational
```
**Confirmation**: ✅ Logrotate config created, syntax check passes.
---
### Step 3.4: Grant Logstash Permissions
**Only if**: Logstash not already in `adm` group.
```bash
# Add logstash to adm group (for NGINX and system logs)
sudo usermod -a -G adm logstash
# Verify group membership
groups logstash
```
**Expected output**: `logstash : logstash adm postgres`
**Confirmation**: ✅ Logstash user is in required groups.
---
### Step 3.5: Verify Log File Access (Post-Permission Changes)
**Only if**: Previous access tests failed.
```bash
# Re-test log file access
sudo -u logstash cat /home/gitea-runner/.pm2/logs/flyer-crawler-worker-*.log | head -5
sudo -u logstash cat /home/gitea-runner/.pm2/logs/flyer-crawler-analytics-worker-*.log | head -5
sudo -u logstash cat /var/log/redis/redis-server.log | head -5
sudo -u logstash cat /var/log/nginx/access.log | head -5
sudo -u logstash cat /var/log/nginx/error.log | head -5
```
**Confirmation**: ✅ All log files now readable without errors.
---
### Step 3.6: Update Logstash Configuration
**Important**: Before pasting, adjust the file output paths based on your directory decision.
```bash
# Open configuration file
sudo nano /etc/logstash/conf.d/bugsink.conf
```
**Paste the complete configuration from `docs/BARE-METAL-SETUP.md`.**
**If using Option A (separate directory)**, update these lines in the config:
```ruby
# Change this:
path => "/var/log/logstash/pm2-workers-%{+YYYY-MM-dd}.log"
# To this:
path => "/var/log/logstash-operational/pm2-workers-%{+YYYY-MM-dd}.log"
# (Repeat for redis-operational and nginx-access file outputs)
```
**Save and exit**: Ctrl+X, Y, Enter
---
### Step 3.7: Test Configuration Syntax
```bash
# Test for syntax errors
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/bugsink.conf
```
**Expected output**: `Configuration OK`
**If errors:**
1. Review error message for line number
2. Check for missing braces, quotes, commas
3. Verify file paths match your directory decision
4. Compare against documentation
**Confirmation**: ✅ Configuration syntax is valid.
---
### Step 3.8: Restart Logstash Service
```bash
# Restart Logstash
sudo systemctl restart logstash
# Check service started successfully
sudo systemctl status logstash
# Wait for initialization
sleep 30
# Check for startup errors
sudo journalctl -u logstash -n 100 --no-pager | grep -i error
```
**Expected**:
- Status: `active (running)`
- No critical errors (warnings about missing files are OK initially)
**Confirmation**: ✅ Logstash restarted successfully.
---
## Phase 4: Post-Deployment Verification
### Step 4.1: Verify Pipeline Processing
```bash
# Check pipeline stats - events should be increasing
curl -s http://localhost:9600/_node/stats/pipelines?pretty | jq '.pipelines.main.events'
# Check input plugins
curl -s http://localhost:9600/_node/stats/pipelines?pretty | jq '.pipelines.main.plugins.inputs'
# Check for grok failures
curl -s http://localhost:9600/_node/stats/pipelines?pretty | jq '.pipelines.main.plugins.filters[] | select(.name == "grok") | {name, events_in: .events.in, events_out: .events.out, failures}'
```
**Expected**:
- `events.in` and `events.out` are increasing
- Input plugins show files being read
- Grok failures < 1% of events
**Confirmation**: ✅ Pipeline processing events from multiple sources.
---
### Step 4.2: Verify File Outputs Created
```bash
# Wait a few minutes for log generation
sleep 120
# Check files were created
ls -lh /var/log/logstash-operational/ # or /var/log/logstash/
# View sample logs
tail -20 /var/log/logstash-operational/pm2-workers-$(date +%Y-%m-%d).log
tail -20 /var/log/logstash-operational/redis-operational-$(date +%Y-%m-%d).log
tail -20 /var/log/logstash-operational/nginx-access-$(date +%Y-%m-%d).log
```
**Expected**:
- Files exist with today's date
- Files contain JSON-formatted log entries
- Timestamps are recent
**Confirmation**: ✅ Operational logs being written successfully.
---
### Step 4.3: Test Error Forwarding to Bugsink
```bash
# Check HTTP output stats (Bugsink forwarding)
curl -s http://localhost:9600/_node/stats/pipelines?pretty | jq '.pipelines.main.plugins.outputs[] | select(.name == "http") | {name, events_in: .events.in, events_out: .events.out}'
```
**Manual check**:
1. Navigate to: https://bugsink.projectium.com
2. Check Project 5 (production infrastructure) for recent events
3. Check Project 6 (test infrastructure) for recent events
**Confirmation**: ✅ Errors forwarded to correct Bugsink projects.
---
### Step 4.4: Monitor Logstash Performance
```bash
# Check memory usage
ps aux | grep logstash | grep -v grep
# Check disk usage
du -sh /var/log/logstash-operational/
# Monitor in real-time (Ctrl+C to exit)
sudo journalctl -u logstash -f
```
**Expected**:
- Memory usage < 1.5GB (with 1GB heap)
- Disk usage reasonable (< 100MB for first day)
- No repeated errors
**Confirmation**: ✅ Performance is stable.
---
### Step 4.5: Verify Environment Detection
```bash
# Check recent logs for environment tags
sudo journalctl -u logstash -n 500 | grep -E "(production|test)" | tail -20
# Check file outputs for correct tagging
grep -o '"environment":"[^"]*"' /var/log/logstash-operational/pm2-workers-$(date +%Y-%m-%d).log | sort | uniq -c
```
**Expected**:
- Production worker logs tagged as "production"
- Test worker logs tagged as "test"
**Confirmation**: ✅ Environment detection working correctly.
---
### Step 4.6: Document Deployment
```bash
# Record deployment
echo "Extended Logstash Configuration deployed on $(date)" | sudo tee -a /var/log/deployments.log
# Record configuration version
sudo ls -lh /etc/logstash/conf.d/bugsink.conf
```
**Confirmation**: ✅ Deployment documented.
---
## Phase 5: 24-Hour Monitoring Plan
Monitor these metrics over the next 24 hours:
**Every 4 hours:**
1. **Service health**: `systemctl status logstash`
2. **Disk usage**: `du -sh /var/log/logstash-operational/`
3. **Memory usage**: `ps aux | grep logstash | grep -v grep`
**Every 12 hours:**
1. **Error rates**: Check Bugsink projects 5 and 6
2. **Log file growth**: `ls -lh /var/log/logstash-operational/`
3. **Pipeline stats**: `curl -s http://localhost:9600/_node/stats/pipelines?pretty | jq '.pipelines.main.events'`
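These checks can also be scripted. Below is a minimal Node sketch, assuming Node 18+ (for the global `fetch`) and the Logstash monitoring API on `localhost:9600` used throughout this guide; it automates only the pipeline-stats check:
```typescript
// check-logstash.ts - a sketch, not a drop-in monitor; run with e.g. `npx tsx check-logstash.ts`.
// Assumes the Logstash monitoring API at localhost:9600, as used in the steps above.
async function main(): Promise<void> {
  const res = await fetch('http://localhost:9600/_node/stats/pipelines');
  if (!res.ok) throw new Error(`Stats API returned HTTP ${res.status}`);
  const stats = (await res.json()) as {
    pipelines: { main: { events: { in: number; out: number } } };
  };
  const { in: eventsIn, out: eventsOut } = stats.pipelines.main.events;
  console.log(`events.in=${eventsIn} events.out=${eventsOut}`);
  // Non-zero exit code so cron/alerting can pick up a stalled pipeline.
  if (eventsIn <= 0) process.exitCode = 1;
}

main().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});
```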
---
## Rollback Procedure
**If issues occur:**
```bash
# Stop Logstash
sudo systemctl stop logstash
# Find latest backup
ls -lt /etc/logstash/conf.d/*.backup-* | head -1
# Restore backup (replace TIMESTAMP with actual timestamp)
sudo cp /etc/logstash/conf.d/bugsink.conf.backup-TIMESTAMP \
/etc/logstash/conf.d/bugsink.conf
# Test restored config
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/bugsink.conf
# Restart Logstash
sudo systemctl start logstash
# Verify status
systemctl status logstash
```
---
## Quick Health Check
Run this anytime to verify deployment health:
```bash
# One-line health check
systemctl is-active logstash && \
echo "Service: OK" && \
ls /var/log/logstash-operational/*.log &>/dev/null && \
echo "Logs: OK" && \
curl -s http://localhost:9600/_node/stats/pipelines?pretty | jq -e '.pipelines.main.events.in > 0' &>/dev/null && \
echo "Processing: OK"
```
Expected output:
```
active
Service: OK
Logs: OK
Processing: OK
```
---
## Summary Checklist
After completing all steps:
- ✅ Phase 1: Inspection complete, state recorded
- ✅ Phase 2: Deployment decisions made
- ✅ Phase 3: Configuration deployed
- ✅ Backup created
- ✅ Directory configured
- ✅ Logrotate configured
- ✅ Permissions granted
- ✅ Config updated and tested
- ✅ Service restarted
- ✅ Phase 4: Verification complete
- ✅ Pipeline processing
- ✅ File outputs working
- ✅ Errors forwarded to Bugsink
- ✅ Performance stable
- ✅ Environment detection working
- ✅ Phase 5: Monitoring plan established
**Deployment Status**: [READY / IN PROGRESS / COMPLETE / ROLLED BACK]

docs/MANUAL_TESTING_PLAN.md Normal file

@@ -0,0 +1,864 @@
# Manual Testing Plan - UI/UX Improvements
**Date**: 2026-01-20
**Testing Focus**: Onboarding Tour, Mobile Navigation, Dark Mode, Admin Routes
**Tester**: [Your Name]
**Environment**: Dev Container (`http://localhost:5173`)
---
## Pre-Testing Setup
### 1. Start Dev Server
```bash
podman exec -it flyer-crawler-dev npm run dev:container
```
**Expected**: Server starts at `http://localhost:5173`
### 2. Open Browser
- Primary browser: Chrome/Edge (DevTools required)
- Secondary: Firefox, Safari (for cross-browser testing)
- Enable DevTools: F12 or Ctrl+Shift+I
### 3. Prepare Test Environment
- Clear browser cache
- Clear all cookies for localhost
- Open DevTools → Application → Local Storage
- Note any existing keys
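The onboarding key can also be cleared from the DevTools console, which is faster when repeating tour tests (key name taken from Test 1.1 below):
```typescript
// Run in the DevTools console to force the tour to show on next load.
localStorage.removeItem('flyer_crawler_onboarding_completed');
```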
---
## Test Suite 1: Onboarding Tour
### Test 1.1: First-Time User Experience ⭐ CRITICAL
**Objective**: Verify tour starts automatically for new users
**Steps**:
1. Open DevTools → Application → Local Storage → `http://localhost:5173`
2. Delete key: `flyer_crawler_onboarding_completed` (if exists)
3. Refresh page (F5)
4. Observe page load
**Expected Results**:
- ✅ Tour modal appears automatically within 2 seconds
- ✅ First tooltip points to "Flyer Uploader" section
- ✅ Tooltip shows "Step 1 of 6"
- ✅ Tooltip contains text: "Upload grocery flyers here..."
- ✅ "Skip" button visible in top-right
- ✅ "Next" button visible at bottom
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 1.2: Tour Navigation
**Objective**: Verify all 6 tour steps are accessible and display correctly
**Steps**:
1. Ensure tour is active (from Test 1.1)
2. Click "Next" button
3. Repeat for all 6 steps, noting each tooltip
**Expected Results**:
| Step | Target Element | Tooltip Text Snippet | Pass/Fail |
| ---- | -------------------- | -------------------------------------- | --------- |
| 1 | Flyer Uploader | "Upload grocery flyers here..." | [ ] |
| 2 | Extracted Data Table | "View AI-extracted items..." | [ ] |
| 3 | Watch Button | "Click + Watch to track items..." | [ ] |
| 4 | Watched Items List | "Your watchlist appears here..." | [ ] |
| 5 | Price Chart | "See active deals on watched items..." | [ ] |
| 6 | Shopping List | "Create shopping lists..." | [ ] |
**Additional Checks**:
- ✅ Progress indicator updates (1/6 → 2/6 → ... → 6/6)
- ✅ Each tooltip highlights correct element
- ✅ "Previous" button works (after step 2)
- ✅ No JavaScript errors in console
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 1.3: Tour Completion
**Objective**: Verify tour completion saves to localStorage
**Steps**:
1. Complete all 6 steps (click "Next" 5 times)
2. On step 6, click "Done" or "Finish"
3. Open DevTools → Application → Local Storage
4. Check for key: `flyer_crawler_onboarding_completed`
**Expected Results**:
- ✅ Tour closes after final step
- ✅ localStorage key `flyer_crawler_onboarding_completed` = `"true"`
- ✅ No tour modal visible
- ✅ Application fully functional
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 1.4: Tour Skip
**Objective**: Verify "Skip" button works and saves preference
**Steps**:
1. Delete localStorage key (reset)
2. Refresh page to start tour
3. Click "Skip" button on step 1
4. Check localStorage
**Expected Results**:
- ✅ Tour closes immediately
- ✅ localStorage key saved: `flyer_crawler_onboarding_completed` = `"true"`
- ✅ Application remains functional
- ✅ No errors in console
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 1.5: Tour Does Not Repeat
**Objective**: Verify tour doesn't show for returning users
**Steps**:
1. Ensure localStorage key exists from previous test
2. Refresh page multiple times
3. Navigate to different routes (/deals, /lists)
4. Return to home page
**Expected Results**:
- ✅ Tour modal never appears
- ✅ No tour-related elements visible
- ✅ Application loads normally
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
## Test Suite 2: Mobile Navigation
### Test 2.1: Responsive Breakpoints - Mobile (375px)
**Objective**: Verify mobile layout at iPhone SE width
**Setup**:
1. Open DevTools → Toggle Device Toolbar (Ctrl+Shift+M)
2. Select "iPhone SE" or set custom width to 375px
3. Refresh page
**Expected Results**:
| Element | Expected Behavior | Pass/Fail |
| ------------------------- | ----------------------------- | --------- |
| Bottom Tab Bar | ✅ Visible at bottom | [ ] |
| Left Sidebar (Flyer List) | ✅ Hidden | [ ] |
| Right Sidebar (Widgets) | ✅ Hidden | [ ] |
| Main Content | ✅ Full width, single column | [ ] |
| Bottom Padding | ✅ 64px padding below content | [ ] |
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 2.2: Responsive Breakpoints - Tablet (768px)
**Objective**: Verify mobile layout at iPad width
**Setup**:
1. Set device width to 768px (iPad)
2. Refresh page
**Expected Results**:
- ✅ Bottom tab bar still visible
- ✅ Sidebars still hidden
- ✅ Content uses full width
- ✅ Tab bar does NOT overlap content
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 2.3: Responsive Breakpoints - Desktop (1024px+)
**Objective**: Verify desktop layout unchanged
**Setup**:
1. Set device width to 1440px (desktop)
2. Refresh page
**Expected Results**:
- ✅ Bottom tab bar HIDDEN
- ✅ Left sidebar (flyer list) VISIBLE
- ✅ Right sidebar (widgets) VISIBLE
- ✅ 3-column grid layout intact
- ✅ No layout changes from before
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 2.4: Tab Navigation - Home
**Objective**: Verify Home tab navigation
**Setup**: Set width to 375px (mobile)
**Steps**:
1. Tap "Home" tab in bottom bar
2. Observe page content
**Expected Results**:
- ✅ Tab icon highlighted in teal (#14b8a6)
- ✅ Tab label highlighted
- ✅ URL changes to `/`
- ✅ HomePage component renders
- ✅ Shows flyer view and upload section
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 2.5: Tab Navigation - Deals
**Objective**: Verify Deals tab navigation
**Steps**:
1. Tap "Deals" tab (TagIcon)
2. Observe page content
**Expected Results**:
- ✅ Tab icon highlighted in teal
- ✅ URL changes to `/deals`
- ✅ DealsPage component renders
- ✅ Shows WatchedItemsList component
- ✅ Shows PriceChart component
- ✅ Shows PriceHistoryChart component
- ✅ Previous tab (Home) is unhighlighted
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 2.6: Tab Navigation - Lists
**Objective**: Verify Lists tab navigation
**Steps**:
1. Tap "Lists" tab (ListBulletIcon)
2. Observe page content
**Expected Results**:
- ✅ Tab icon highlighted in teal
- ✅ URL changes to `/lists`
- ✅ ShoppingListsPage component renders
- ✅ Shows ShoppingList component
- ✅ Can create/view shopping lists
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 2.7: Tab Navigation - Profile
**Objective**: Verify Profile tab navigation
**Steps**:
1. Tap "Profile" tab (UserIcon)
2. Observe page content
**Expected Results**:
- ✅ Tab icon highlighted in teal
- ✅ URL changes to `/profile`
- ✅ UserProfilePage component renders
- ✅ Shows user profile information
- ✅ Shows achievements (if logged in)
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 2.8: Touch Target Size (Accessibility)
**Objective**: Verify touch targets meet 44x44px minimum (WCAG 2.5.5)
**Steps**:
1. Stay in mobile view (375px)
2. Open DevTools → Elements
3. Inspect each tab in bottom bar
4. Check computed dimensions
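Alternatively, a console snippet can measure every tab at once; the selector below is an assumption about the tab bar's markup and may need adjusting:
```typescript
// Paste into the DevTools console. 'nav button, nav a' is a guess at the tab bar markup.
document.querySelectorAll<HTMLElement>('nav button, nav a').forEach((el) => {
  const { width, height } = el.getBoundingClientRect();
  const ok = width >= 44 && height >= 44;
  console.log(
    el.textContent?.trim() || el.tagName,
    `${Math.round(width)}x${Math.round(height)}`,
    ok ? 'OK' : 'TOO SMALL',
  );
});
```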
**Expected Results**:
- ✅ Each tab button: min-height: 44px
- ✅ Each tab button: min-width: 44px
- ✅ Icon is centered
- ✅ Label is readable below icon
- ✅ Adequate spacing between tabs
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 2.9: Tab Bar Visibility on Admin Routes
**Objective**: Verify tab bar hidden on admin pages
**Steps**:
1. Navigate to `/admin` (may need to log in as admin)
2. Check bottom of page
3. Navigate to `/admin/stats`
4. Navigate to `/admin/corrections`
**Expected Results**:
- ✅ Tab bar NOT visible on `/admin`
- ✅ Tab bar NOT visible on any `/admin/*` routes
- ✅ Admin pages function normally
- ✅ Footer visible as normal
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
## Test Suite 3: Dark Mode
### Test 3.1: Dark Mode Toggle
**Objective**: Verify dark mode toggle works for new components
**Steps**:
1. Ensure you're in light mode (check header toggle)
2. Click dark mode toggle in header
3. Observe all new components
**Expected Results - DealsPage**:
- ✅ Background changes to dark gray (#1f2937 or similar)
- ✅ Text changes to light colors
- ✅ WatchedItemsList: dark background, light text
- ✅ PriceChart: dark theme colors
- ✅ No white boxes remaining
**Expected Results - ShoppingListsPage**:
- ✅ Background changes to dark
- ✅ ShoppingList cards: dark background
- ✅ Input fields: dark background with light text
- ✅ Buttons maintain brand colors
**Expected Results - FlyersPage**:
- ✅ Background dark
- ✅ Flyer cards: dark theme
- ✅ FlyerUploader: dark background
**Expected Results - MobileTabBar**:
- ✅ Tab bar background: dark (#111827 or similar)
- ✅ Border top: dark border color
- ✅ Inactive tab icons: gray
- ✅ Active tab icon: teal (#14b8a6)
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 3.2: Dark Mode Persistence
**Objective**: Verify dark mode preference persists across navigation
**Steps**:
1. Enable dark mode
2. Navigate between tabs: Home → Deals → Lists → Profile
3. Refresh page
4. Check mode
**Expected Results**:
- ✅ Dark mode stays enabled across all routes
- ✅ Dark mode persists after page refresh
- ✅ All pages render in dark mode consistently
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 3.3: Button Component in Dark Mode
**Objective**: Verify Button component variants in dark mode
**Setup**: Enable dark mode
**Check each variant**:
| Variant | Expected Dark Mode Colors | Pass/Fail |
| --------- | ------------------------------ | --------- |
| Primary | bg-brand-secondary, text-white | [ ] |
| Secondary | bg-gray-700, text-gray-200 | [ ] |
| Danger | bg-red-900/50, text-red-300 | [ ] |
| Ghost | hover: bg-gray-700/50 | [ ] |
**Locations to check**:
- FlyerUploader: "Upload Another Flyer" (primary)
- ShoppingList: "New List" (secondary)
- ShoppingList: "Delete List" (danger)
- FlyerUploader: "Stop Watching" (ghost)
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 3.4: Onboarding Tour in Dark Mode
**Objective**: Verify tour tooltips work in dark mode
**Steps**:
1. Enable dark mode
2. Delete localStorage key to reset tour
3. Refresh to start tour
4. Navigate through all 6 steps
**Expected Results**:
- ✅ Tooltip background visible (not too dark)
- ✅ Tooltip text readable (good contrast)
- ✅ Progress indicator visible
- ✅ Buttons clearly visible
- ✅ Highlighted elements stand out
- ✅ No visual glitches
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
## Test Suite 4: Admin Routes
### Test 4.1: Admin Access (Requires Admin User)
**Objective**: Verify admin routes still function correctly
**Prerequisites**: Need admin account credentials
**Steps**:
1. Log in as admin user
2. Click admin shield icon in header
3. Should navigate to `/admin`
**Expected Results**:
- ✅ Admin dashboard loads
- ✅ 4 links visible: Corrections, Stats, Flyer Review, Stores
- ✅ SystemCheck component shows health checks
- ✅ Layout looks correct (no mobile tab bar)
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 4.2: Admin Subpages
**Objective**: Verify all admin subpages load
**Steps**:
1. From admin dashboard, click each link:
- Corrections → `/admin/corrections`
- Stats → `/admin/stats`
- Flyer Review → `/admin/flyer-review`
- Stores → `/admin/stores`
**Expected Results**:
- ✅ Each page loads without errors
- ✅ No mobile tab bar visible
- ✅ Desktop layout maintained
- ✅ All admin functionality works
- ✅ Can navigate back to `/admin`
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 4.3: Admin in Mobile View
**Objective**: Verify admin pages work in mobile view
**Steps**:
1. Set device width to 375px
2. Navigate to `/admin`
3. Check layout
**Expected Results**:
- ✅ Admin page renders correctly
- ✅ No mobile tab bar visible
- ✅ Content is readable (may scroll)
- ✅ All buttons/links clickable
- ✅ No layout breaking
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
## Test Suite 5: Integration Tests
### Test 5.1: Cross-Feature Navigation
**Objective**: Verify navigation between new and old features
**Scenario**: User journey through app
**Steps**:
1. Start on Home page (mobile view)
2. Upload a flyer (if possible)
3. Click "Deals" tab → should see deals page
4. Add item to watchlist (from deals page)
5. Click "Lists" tab → create shopping list
6. Add item to shopping list
7. Click "Profile" tab → view profile
8. Click "Home" tab → return to home
**Expected Results**:
- ✅ All navigation works smoothly
- ✅ No data loss between pages
- ✅ Active tab always correct
- ✅ Back button works (browser history)
- ✅ No JavaScript errors
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 5.2: Button Component Integration
**Objective**: Verify Button component works in all contexts
**Steps**:
1. Navigate to page with buttons (FlyerUploader, ShoppingList)
2. Click each button variant
3. Test loading states
4. Test disabled states
**Expected Results**:
- ✅ All buttons clickable
- ✅ Loading spinner appears when appropriate
- ✅ Disabled buttons prevent clicks
- ✅ Icons render correctly
- ✅ Hover states work
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 5.3: Brand Colors Visual Check
**Objective**: Verify brand colors display correctly throughout app
**Check these elements**:
- ✅ Active tab in tab bar: teal (#14b8a6)
- ✅ Primary buttons: teal background
- ✅ Links on hover: teal color
- ✅ Focus rings: teal color
- ✅ Watched item indicators: green (not brand color)
- ✅ All teal shades consistent
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
## Test Suite 6: Error Scenarios
### Test 6.1: Missing Data
**Objective**: Verify pages handle empty states gracefully
**Steps**:
1. Navigate to /deals (without watched items)
2. Navigate to /lists (without shopping lists)
3. Navigate to /flyers (without uploaded flyers)
**Expected Results**:
- ✅ Empty state messages shown
- ✅ No JavaScript errors
- ✅ Clear calls to action displayed
- ✅ Page structure intact
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 6.2: Network Errors (Simulated)
**Objective**: Verify app handles network failures
**Steps**:
1. Open DevTools → Network tab
2. Set throttling to "Offline"
3. Try to navigate between tabs
4. Try to load data
**Expected Results**:
- ✅ Error messages displayed
- ✅ App doesn't crash
- ✅ Can retry actions
- ✅ Navigation still works (cached)
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
## Test Suite 7: Performance
### Test 7.1: Page Load Speed
**Objective**: Verify new features don't slow down app
**Steps**:
1. Open DevTools → Network tab
2. Disable cache
3. Refresh page
4. Note "Load" time in Network tab
**Expected Results**:
- ✅ Initial load: < 3 seconds
- ✅ Route changes: < 500ms
- ✅ No long-running scripts
- ✅ No memory leaks (use Performance Monitor)
**Pass/Fail**: [ ]
**Measurements**:
- Initial load: **\_\_\_** ms
- Home → Deals: **\_\_\_** ms
- Deals → Lists: **\_\_\_** ms
---
### Test 7.2: Bundle Size
**Objective**: Verify bundle size increase is acceptable
**Steps**:
1. Run: `npm run build`
2. Check `dist/` folder size
3. Compare to previous build (if available)
**Expected Results**:
- ✅ Bundle size increase: < 50KB
- ✅ No duplicate libraries loaded
- ✅ Tree-shaking working
**Pass/Fail**: [ ]
**Measurements**: **********************\_\_\_**********************
---
## Cross-Browser Testing
### Test 8.1: Chrome/Edge
**Browser Version**: ******\_\_\_******
**Tests to Run**:
- [ ] All of Test Suite 1 (Onboarding)
- [ ] All of Test Suite 2 (Mobile Nav)
- [ ] Test 3.1-3.4 (Dark Mode)
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 8.2: Firefox
**Browser Version**: ******\_\_\_******
**Tests to Run**:
- [ ] Test 1.1, 1.2 (Onboarding basics)
- [ ] Test 2.4-2.7 (Tab navigation)
- [ ] Test 3.1 (Dark mode)
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
### Test 8.3: Safari (macOS/iOS)
**Browser Version**: ******\_\_\_******
**Tests to Run**:
- [ ] Test 1.1 (Tour starts)
- [ ] Test 2.1 (Mobile layout)
- [ ] Test 3.1 (Dark mode)
**Pass/Fail**: [ ]
**Notes**: **********************\_\_\_**********************
---
## Test Summary
### Overall Results
| Test Suite | Pass | Fail | Skipped | Total |
| -------------------- | ---- | ---- | ------- | ------ |
| 1. Onboarding Tour | | | | 5 |
| 2. Mobile Navigation | | | | 9 |
| 3. Dark Mode | | | | 4 |
| 4. Admin Routes | | | | 3 |
| 5. Integration | | | | 3 |
| 6. Error Scenarios | | | | 2 |
| 7. Performance | | | | 2 |
| 8. Cross-Browser | | | | 3 |
| **TOTAL** | | | | **31** |
### Critical Issues Found
1. ***
2. ***
3. ***
### Minor Issues Found
1. ***
2. ***
3. ***
### Recommendations
1. ***
2. ***
3. ***
---
## Sign-Off
**Tester Name**: **********************\_\_\_**********************
**Date Completed**: **********************\_\_\_**********************
**Overall Status**: [ ] PASS [ ] PASS WITH ISSUES [ ] FAIL
**Ready for Production**: [ ] YES [ ] NO [ ] WITH FIXES
**Additional Comments**:
---
---
---

docs/POSTGRES-MCP-SETUP.md Normal file

@@ -0,0 +1,318 @@
# PostgreSQL MCP Server Setup
This document describes the configuration and troubleshooting for the PostgreSQL MCP server integration with Claude Code.
## Status
**WORKING** - Successfully configured and tested on 2026-01-22
- **Server Name**: `devdb`
- **Database**: `flyer_crawler_dev` (68 tables)
- **Connection**: Verified working
- **Tool Prefix**: `mcp__devdb__*`
- **Configuration**: Project-level `.mcp.json`
## Overview
The PostgreSQL MCP server (`@modelcontextprotocol/server-postgres`) provides database query capabilities directly from Claude Code, enabling:
- Running SQL queries against the development database
- Exploring database schema and tables
- Testing queries before implementation
- Debugging data issues
## Configuration
### Project-Level Configuration (Recommended)
The PostgreSQL MCP server is configured in the project-level `.mcp.json` file:
```json
{
"mcpServers": {
"devdb": {
"command": "D:\\nodejs\\npx.cmd",
"args": [
"-y",
"@modelcontextprotocol/server-postgres",
"postgresql://postgres:postgres@127.0.0.1:5432/flyer_crawler_dev"
]
}
}
}
```
**Key Configuration Details:**
| Parameter | Value | Notes |
| ----------- | --------------------------------------- | --------------------------------------- |
| Server Name | `devdb` | Distinct name to avoid collision issues |
| Package | `@modelcontextprotocol/server-postgres` | Official MCP PostgreSQL server |
| Host | `127.0.0.1` | Use IP address, not `localhost` |
| Port | `5432` | Default PostgreSQL port |
| Database | `flyer_crawler_dev` | Development database name |
| User | `postgres` | Default superuser for dev |
| Password | `postgres` | Default password for dev |
### Why Project-Level Configuration?
Based on troubleshooting experience with other MCP servers (documented in `BUGSINK-MCP-TROUBLESHOOTING.md`), **localhost MCP servers work more reliably in project-level `.mcp.json`** than in global `settings.json`.
Issues observed with global configuration:
- MCP servers silently not loading
- No error messages in logs
- Tools not appearing in available tool list
Project-level configuration bypasses these issues entirely.
### Connection String Format
```
postgresql://[user]:[password]@[host]:[port]/[database]
```
Examples:
```
# Development (local container)
postgresql://postgres:postgres@127.0.0.1:5432/flyer_crawler_dev
# Test database (if needed)
postgresql://flyer_crawler_test:password@127.0.0.1:5432/flyer_crawler_test
```
## Available Tools
Once configured, the following tools become available (prefix `mcp__devdb__`):
| Tool | Description |
| ------- | -------------------------------------- |
| `query` | Execute SQL queries and return results |
## Usage Examples
### Basic Query
```typescript
// List all tables
mcp__devdb__query("SELECT tablename FROM pg_tables WHERE schemaname = 'public'");
// Count records in a table
mcp__devdb__query('SELECT COUNT(*) FROM flyers');
// Check table structure
mcp__devdb__query(
"SELECT column_name, data_type FROM information_schema.columns WHERE table_name = 'flyers'",
);
```
### Debugging Data Issues
```typescript
// Find recent flyers
mcp__devdb__query('SELECT id, name, created_at FROM flyers ORDER BY created_at DESC LIMIT 10');
// Check job queue status
mcp__devdb__query('SELECT state, COUNT(*) FROM bullmq_jobs GROUP BY state');
// Verify user data
mcp__devdb__query("SELECT id, email, created_at FROM users WHERE email LIKE '%test%'");
```
## Prerequisites
### 1. PostgreSQL Container Running
The PostgreSQL container must be running and healthy:
```bash
# Check container status
podman ps | grep flyer-crawler-postgres
# Expected output shows "healthy" status
# flyer-crawler-postgres ... Up N hours (healthy) ...
```
### 2. Port Accessible from Host
PostgreSQL port 5432 must be mapped to the host:
```bash
# Verify port mapping
podman port flyer-crawler-postgres
# Expected: 5432/tcp -> 0.0.0.0:5432
```
### 3. Database Exists
Verify the database exists:
```bash
podman exec flyer-crawler-postgres psql -U postgres -c "\l" | grep flyer_crawler_dev
```
## Troubleshooting
### Tools Not Available
**Symptoms:**
- `mcp__devdb__*` tools not in available tool list
- No error messages displayed
**Solutions:**
1. **Restart Claude Code** - MCP config changes require restart
2. **Check container status** - Ensure PostgreSQL container is running
3. **Verify port mapping** - Confirm port 5432 is accessible
4. **Test connection manually**:
```bash
podman exec flyer-crawler-postgres psql -U postgres -d flyer_crawler_dev -c "SELECT 1"
```
### Connection Refused
**Symptoms:**
- Connection error when using tools
- "Connection refused" in error message
**Solutions:**
1. **Check container health**:
```bash
podman ps | grep flyer-crawler-postgres
```
2. **Restart the container**:
```bash
podman restart flyer-crawler-postgres
```
3. **Check for port conflicts**:
```bash
netstat -an | findstr 5432
```
### Authentication Failed
**Symptoms:**
- "password authentication failed" error
**Solutions:**
1. **Verify credentials** in container environment:
```bash
podman exec flyer-crawler-dev env | grep DB_
```
2. **Check PostgreSQL users**:
```bash
podman exec flyer-crawler-postgres psql -U postgres -c "\du"
```
3. **Update connection string** in `.mcp.json` if credentials differ
### Database Does Not Exist
**Symptoms:**
- "database does not exist" error
**Solutions:**
1. **List available databases**:
```bash
podman exec flyer-crawler-postgres psql -U postgres -c "\l"
```
2. **Create database if missing**:
```bash
podman exec flyer-crawler-postgres createdb -U postgres flyer_crawler_dev
```
## Security Considerations
### Development Only
The default credentials (`postgres:postgres`) are for **development only**. Never use these in production.
### Connection String in Config
The connection string includes the password in plain text. This is acceptable for:
- Local development
- Container environments
For production MCP access (if ever needed):
- Use environment variables
- Consider connection pooling
- Implement proper access controls
### Query Permissions
The MCP server executes queries as the configured user (`postgres` in dev). Be aware that:
- `postgres` is a superuser with full access
- For restricted access, create a dedicated MCP user with limited permissions:
```sql
-- Example: Create read-only MCP user
CREATE USER mcp_reader WITH PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE flyer_crawler_dev TO mcp_reader;
GRANT USAGE ON SCHEMA public TO mcp_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_reader;
```
## Database Information
### Development Environment
| Property | Value |
| --------------------- | ------------------------- |
| Container | `flyer-crawler-postgres` |
| Image | `postgis/postgis:15-3.4` |
| Host (from Windows) | `127.0.0.1` / `localhost` |
| Host (from container) | `postgres` |
| Port | `5432` |
| Database | `flyer_crawler_dev` |
| User | `postgres` |
| Password | `postgres` |
### Schema Reference
The database uses PostGIS for geographic data. Key tables include:
- `users` - User accounts
- `stores` - Store definitions
- `store_locations` - Store geographic locations
- `flyers` - Uploaded flyer metadata
- `flyer_items` - Extracted deal items
- `watchlists` - User watchlists
- `shopping_lists` - User shopping lists
- `recipes` - Recipe definitions
For complete schema, see `sql/master_schema_rollup.sql`.
## Related Documentation
- [CLAUDE.md - MCP Servers Section](../CLAUDE.md#mcp-servers)
- [BUGSINK-MCP-TROUBLESHOOTING.md](./BUGSINK-MCP-TROUBLESHOOTING.md) - Similar MCP setup patterns
- [sql/master_schema_rollup.sql](../sql/master_schema_rollup.sql) - Database schema
## Changelog
### 2026-01-21
- Initial configuration added to project-level `.mcp.json`
- Server named `devdb` to avoid naming collisions
- Using `127.0.0.1` instead of `localhost` based on Bugsink MCP experience
- Documentation created


@@ -0,0 +1,275 @@
# Quick Test Checklist - UI/UX Improvements
**Date**: 2026-01-20
**Estimated Time**: 30-45 minutes
---
## 🚀 Quick Start
### 1. Start Dev Server
```bash
podman exec -it flyer-crawler-dev npm run dev:container
```
Open browser: `http://localhost:5173`
### 2. Open DevTools
Press F12 or Ctrl+Shift+I
---
## ✅ Critical Tests (15 minutes)
### Test A: Onboarding Tour Works
**Time**: 5 minutes
1. DevTools → Application → Local Storage
2. Delete key: `flyer_crawler_onboarding_completed`
3. Refresh page (F5)
4. **PASS if**: Tour modal appears with 6 steps
5. Click through all steps or skip
6. **PASS if**: Tour closes and localStorage key is saved
**Result**: [ ] PASS [ ] FAIL
---
### Test B: Mobile Tab Bar Works
**Time**: 5 minutes
1. DevTools → Toggle Device Toolbar (Ctrl+Shift+M)
2. Select "iPhone SE" (375px width)
3. Refresh page
4. **PASS if**: Bottom tab bar visible with 4 tabs
5. Click each tab: Home, Deals, Lists, Profile
6. **PASS if**: Each tab navigates correctly and highlights
**Result**: [ ] PASS [ ] FAIL
---
### Test C: Desktop Layout Unchanged
**Time**: 3 minutes
1. Set browser width to 1440px (exit device mode)
2. Refresh page
3. **PASS if**:
- No bottom tab bar visible
- Left sidebar (flyer list) visible
- Right sidebar (widgets) visible
- 3-column layout intact
**Result**: [ ] PASS [ ] FAIL
---
### Test D: Dark Mode Works
**Time**: 2 minutes
1. Click dark mode toggle in header
2. Navigate: Home → Deals → Lists → Profile
3. **PASS if**: All pages have dark backgrounds, light text
4. Toggle back to light mode
5. **PASS if**: All pages return to light theme
**Result**: [ ] PASS [ ] FAIL
---
## 🔍 Detailed Tests (30 minutes)
### Test 1: Tour Features
**Time**: 5 minutes
- [ ] Tour step 1 points to Flyer Uploader
- [ ] Tour step 2 points to Extracted Data Table
- [ ] Tour step 3 points to Watch button
- [ ] Tour step 4 points to Watched Items List
- [ ] Tour step 5 points to Price Chart
- [ ] Tour step 6 points to Shopping List
- [ ] Skip button works (saves to localStorage)
- [ ] Tour doesn't repeat after completion
**Result**: [ ] PASS [ ] FAIL
---
### Test 2: Mobile Navigation
**Time**: 10 minutes
**At 375px (mobile)**:
- [ ] Tab bar visible at bottom
- [ ] Sidebars hidden
- [ ] Home tab navigates to `/`
- [ ] Deals tab navigates to `/deals`
- [ ] Lists tab navigates to `/lists`
- [ ] Profile tab navigates to `/profile`
- [ ] Active tab highlighted in teal
- [ ] Tabs are 44x44px (check DevTools)
**At 768px (tablet)**:
- [ ] Tab bar still visible
- [ ] Sidebars still hidden
**At 1024px+ (desktop)**:
- [ ] Tab bar hidden
- [ ] Sidebars visible
- [ ] Layout unchanged
**Result**: [ ] PASS [ ] FAIL
---
### Test 3: New Pages Work
**Time**: 5 minutes
**DealsPage (`/deals`)**:
- [ ] Shows WatchedItemsList component
- [ ] Shows PriceChart component
- [ ] Shows PriceHistoryChart component
- [ ] Can add watched items
**ShoppingListsPage (`/lists`)**:
- [ ] Shows ShoppingList component
- [ ] Can create new list
- [ ] Can add items to list
- [ ] Can delete list
**FlyersPage (`/flyers`)**:
- [ ] Shows FlyerList component
- [ ] Shows FlyerUploader component
- [ ] Can upload flyer
**Result**: [ ] PASS [ ] FAIL
---
### Test 4: Button Component
**Time**: 5 minutes
**Find buttons and test**:
- [ ] FlyerUploader: "Upload Another Flyer" (primary variant, teal)
- [ ] ShoppingList: "New List" (secondary variant, gray)
- [ ] ShoppingList: "Delete List" (danger variant, red)
- [ ] FlyerUploader: "Stop Watching" (ghost variant, transparent)
- [ ] Loading states show spinner
- [ ] Hover states work
- [ ] Dark mode variants look correct
**Result**: [ ] PASS [ ] FAIL
---
### Test 5: Admin Routes
**Time**: 5 minutes
**If you have admin access**:
- [ ] Navigate to `/admin`
- [ ] Tab bar NOT visible on admin pages
- [ ] Admin dashboard loads correctly
- [ ] Subpages work: /admin/stats, /admin/corrections
- [ ] Can navigate back to main app
- [ ] Admin pages work in mobile view (no tab bar)
**If not admin, skip this test**
**Result**: [ ] PASS [ ] FAIL [ ] SKIPPED
---
## 🐛 Error Checks (5 minutes)
### Console Errors
1. Open DevTools → Console tab
2. Navigate through entire app
3. **PASS if**: No red error messages
4. Warnings are OK (React 19 peer dependency warnings expected)
**Result**: [ ] PASS [ ] FAIL
**Errors found**: ******************\_\_\_******************
---
### Visual Glitches
Check for:
- [ ] No white boxes in dark mode
- [ ] No overlapping elements
- [ ] Text is readable (good contrast)
- [ ] Images load correctly
- [ ] No layout jumping/flickering
**Result**: [ ] PASS [ ] FAIL
**Issues found**: ******************\_\_\_******************
---
## 📊 Quick Summary
| Test | Result | Priority |
| -------------------- | ------ | ----------- |
| A. Onboarding Tour | [ ] | 🔴 Critical |
| B. Mobile Tab Bar | [ ] | 🔴 Critical |
| C. Desktop Layout | [ ] | 🔴 Critical |
| D. Dark Mode | [ ] | 🟡 High |
| 1. Tour Features | [ ] | 🟡 High |
| 2. Mobile Navigation | [ ] | 🔴 Critical |
| 3. New Pages | [ ] | 🟡 High |
| 4. Button Component | [ ] | 🟢 Medium |
| 5. Admin Routes | [ ] | 🟢 Medium |
| Console Errors | [ ] | 🔴 Critical |
| Visual Glitches | [ ] | 🟡 High |
---
## ✅ Pass Criteria
**Minimum to pass (Critical tests only)**:
- All 4 quick tests (A-D) must pass
- Mobile Navigation (Test 2) must pass
- No critical console errors
**Full pass (All tests)**:
- All tests pass or have minor issues only
- No blocking bugs
- No data loss or crashes
---
## 🚦 Final Decision
**Overall Status**: [ ] READY FOR PROD [ ] NEEDS FIXES [ ] BLOCKED
**Issues blocking production**:
1. ***
2. ***
3. ***
**Sign-off**: ********\_\_\_******** **Date**: ****\_\_\_****

docs/README.md Normal file

@@ -0,0 +1,130 @@
# Flyer Crawler Documentation
Welcome to the Flyer Crawler documentation. This guide will help you navigate the various documentation resources available.
## Quick Links
- [Main README](../README.md) - Project overview and quick start
- [CLAUDE.md](../CLAUDE.md) - AI agent instructions and project guidelines
- [CONTRIBUTING.md](../CONTRIBUTING.md) - Development workflow and contribution guide
## Documentation Structure
### 🚀 Getting Started
New to the project? Start here:
- [Installation Guide](getting-started/INSTALL.md) - Complete setup instructions
- [Environment Configuration](getting-started/ENVIRONMENT.md) - Environment variables and secrets
### 🏗️ Architecture
Understand how the system works:
- [System Overview](architecture/OVERVIEW.md) - High-level architecture
- [Database Schema](architecture/DATABASE.md) - Database design and entities
- [Authentication](architecture/AUTHENTICATION.md) - OAuth and JWT authentication
- [WebSocket Usage](architecture/WEBSOCKET_USAGE.md) - Real-time communication patterns
### 💻 Development
Day-to-day development guides:
- [Testing Guide](development/TESTING.md) - Unit, integration, and E2E testing
- [Code Patterns](development/CODE-PATTERNS.md) - Common code patterns and ADR examples
- [Design Tokens](development/DESIGN_TOKENS.md) - UI design system and Neo-Brutalism
- [Debugging Guide](development/DEBUGGING.md) - Common debugging patterns
### 🔧 Operations
Production operations and deployment:
- [Deployment Guide](operations/DEPLOYMENT.md) - Deployment procedures
- [Bare Metal Setup](operations/BARE-METAL-SETUP.md) - Server provisioning
- [Logstash Quick Reference](operations/LOGSTASH-QUICK-REF.md) - Log aggregation
- [Logstash Troubleshooting](operations/LOGSTASH-TROUBLESHOOTING.md) - Debugging logs
- [Monitoring](operations/MONITORING.md) - Bugsink, health checks, observability
**NGINX Reference Configs** (in repository root):
- `etc-nginx-sites-available-flyer-crawler.projectium.com` - Production server config
- `etc-nginx-sites-available-flyer-crawler-test-projectium-com.txt` - Test server config
### 🛠️ Tools
External tool configuration:
- [MCP Configuration](tools/MCP-CONFIGURATION.md) - Model Context Protocol servers
- [Bugsink Setup](tools/BUGSINK-SETUP.md) - Error tracking configuration
- [VS Code Setup](tools/VSCODE-SETUP.md) - Editor configuration
### 🤖 AI Agents
Working with Claude Code subagents:
- [Subagent Overview](subagents/OVERVIEW.md) - Introduction to specialized agents
- [Coder Guide](subagents/CODER-GUIDE.md) - Code development patterns
- [Tester Guide](subagents/TESTER-GUIDE.md) - Testing strategies
- [Database Guide](subagents/DATABASE-GUIDE.md) - Database workflows
- [DevOps Guide](subagents/DEVOPS-GUIDE.md) - Deployment and infrastructure
- [AI Usage Guide](subagents/AI-USAGE-GUIDE.md) - Gemini integration
- [Frontend Guide](subagents/FRONTEND-GUIDE.md) - UI/UX development
- [Documentation Guide](subagents/DOCUMENTATION-GUIDE.md) - Writing docs
- [Security & Debug Guide](subagents/SECURITY-DEBUG-GUIDE.md) - Security and debugging
**AI-Optimized References** (token-efficient quick refs):
- [Coder Reference](SUBAGENT-CODER-REFERENCE.md)
- [Tester Reference](SUBAGENT-TESTER-REFERENCE.md)
- [DB Reference](SUBAGENT-DB-REFERENCE.md)
- [DevOps Reference](SUBAGENT-DEVOPS-REFERENCE.md)
- [Integrations Reference](SUBAGENT-INTEGRATIONS-REFERENCE.md)
### 📐 Architecture Decision Records (ADRs)
Design decisions and rationale:
- [ADR Index](adr/index.md) - Complete list of all ADRs
- 54+ ADRs covering patterns, conventions, and technical decisions
### 📦 Archive
Historical and completed documentation:
- [Session Notes](archive/sessions/) - Development session logs
- [Planning Documents](archive/plans/) - Feature plans and implementation status
- [Research Notes](archive/research/) - Investigation and research documents
## Documentation Conventions
- **File Names**: Use SCREAMING_SNAKE_CASE for human-readable docs (e.g., `INSTALL.md`)
- **Links**: Use relative paths from the document's location
- **Code Blocks**: Always specify language for syntax highlighting
- **Tables**: Use markdown tables for structured data
- **Cross-References**: Link to ADRs and other docs for detailed explanations
## Contributing to Documentation
See [CONTRIBUTING.md](../CONTRIBUTING.md) for guidelines on:
- Writing clear, concise documentation
- Updating docs when code changes
- Creating new ADRs for significant decisions
- Documenting new features and APIs
## Need Help?
- Check the [Testing Guide](development/TESTING.md) for test-related issues
- See [Debugging Guide](development/DEBUGGING.md) for troubleshooting
- Review [ADRs](adr/index.md) for architectural context
- Consult [Subagent Guides](subagents/OVERVIEW.md) for AI agent assistance
## Documentation Maintenance
This documentation is actively maintained. If you find:
- Broken links or outdated information
- Missing documentation for features
- Unclear or confusing sections
Please open an issue or submit a pull request with improvements.


@@ -0,0 +1,311 @@
# Database Schema Relationship Analysis
## Executive Summary
This document analyzes the database schema to identify missing table relationships and JOINs that aren't properly implemented in the codebase. This analysis was triggered by discovering that `WatchedItemDeal` was using a `store_name` string instead of a proper `store` object with nested locations.
## Key Findings
### ✅ CORRECTLY IMPLEMENTED
#### 1. Store → Store Locations → Addresses (3-table normalization)
**Schema:**
```sql
stores (store_id) → store_locations (store_location_id) → addresses (address_id)
```
**Implementation:**
- [src/services/db/storeLocation.db.ts](src/services/db/storeLocation.db.ts) properly JOINs all three tables
- [src/types.ts](src/types.ts) defines `StoreWithLocations` interface with nested address objects
- Recent fixes corrected `WatchedItemDeal` to use `store` object instead of `store_name` string
**Queries:**
```typescript
// From storeLocation.db.ts
FROM public.stores s
LEFT JOIN public.store_locations sl ON s.store_id = sl.store_id
LEFT JOIN public.addresses a ON sl.address_id = a.address_id
```
#### 2. Shopping Trips → Shopping Trip Items
**Schema:**
```sql
shopping_trips (shopping_trip_id) → shopping_trip_items (shopping_trip_item_id) → master_grocery_items
```
**Implementation:**
- [src/services/db/shopping.db.ts:513-518](src/services/db/shopping.db.ts#L513-L518) properly JOINs shopping_trips → shopping_trip_items → master_grocery_items
- Uses `json_agg` to nest items array within trip object
- [src/types.ts:639-647](src/types.ts#L639-L647) `ShoppingTrip` interface includes nested `items: ShoppingTripItem[]`
**Queries:**
```typescript
FROM public.shopping_trips st
LEFT JOIN public.shopping_trip_items sti ON st.shopping_trip_id = sti.shopping_trip_id
LEFT JOIN public.master_grocery_items mgi ON sti.master_item_id = mgi.master_grocery_item_id
```
#### 3. Receipts → Receipt Items
**Schema:**
```sql
receipts (receipt_id) → receipt_items (receipt_item_id)
```
**Implementation:**
- [src/types.ts:649-662](src/types.ts#L649-L662) `Receipt` interface includes optional `items?: ReceiptItem[]`
- Receipt items are fetched separately via repository methods
- Proper foreign key relationship maintained
---
### ❌ MISSING / INCORRECT IMPLEMENTATIONS
#### 1. **CRITICAL: Flyers → Flyer Locations → Store Locations (Many-to-Many)**
**Schema:**
```sql
CREATE TABLE IF NOT EXISTS public.flyer_locations (
flyer_id BIGINT NOT NULL REFERENCES public.flyers(flyer_id) ON DELETE CASCADE,
store_location_id BIGINT NOT NULL REFERENCES public.store_locations(store_location_id) ON DELETE CASCADE,
PRIMARY KEY (flyer_id, store_location_id),
...
);
COMMENT: 'A linking table associating a single flyer with multiple store locations where its deals are valid.'
```
**Problem:**
- The schema defines a **many-to-many relationship** - a flyer can be valid at multiple store locations
- Current implementation in [src/services/db/flyer.db.ts](src/services/db/flyer.db.ts) **IGNORES** the `flyer_locations` table entirely
- Queries JOIN `flyers` directly to `stores` via `store_id` foreign key
- This means flyers can only be associated with ONE store, not multiple locations
**Current (Incorrect) Queries:**
```typescript
// From flyer.db.ts:315-362
FROM public.flyers f
JOIN public.stores s ON f.store_id = s.store_id // ❌ Wrong - ignores flyer_locations
```
**Expected (Correct) Queries:**
```typescript
// Should be:
FROM public.flyers f
JOIN public.flyer_locations fl ON f.flyer_id = fl.flyer_id
JOIN public.store_locations sl ON fl.store_location_id = sl.store_location_id
JOIN public.stores s ON sl.store_id = s.store_id
JOIN public.addresses a ON sl.address_id = a.address_id
```
**TypeScript Type Issues:**
- [src/types.ts](src/types.ts) `Flyer` interface has `store` object, but it should have `locations: StoreLocation[]` array
- Current structure assumes one store per flyer, not multiple locations
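For illustration only, a hedged sketch of the corrected shape — field names are assumptions, not the final design, and the import path depends on where the interface would live:
```typescript
import type { Store, StoreLocation } from './types'; // path assumed

// Illustrative only: a flyer valid at multiple store locations,
// populated from the flyer_locations join table.
export interface FlyerWithLocations {
  flyer_id: number;
  store: Store; // legacy single-store field, kept during migration
  locations: StoreLocation[]; // all locations where the flyer's deals are valid
}
```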
**Files Affected:**
- [src/services/db/flyer.db.ts](src/services/db/flyer.db.ts) - All flyer queries
- [src/types.ts](src/types.ts) - `Flyer` interface definition
- Any component displaying flyer locations
---
#### 2. **User Submitted Prices → Store Locations (MIGRATED)**
**Status**: ✅ **FIXED** - Migration created
**Schema:**
```sql
CREATE TABLE IF NOT EXISTS public.user_submitted_prices (
...
store_id BIGINT NOT NULL REFERENCES public.stores(store_id) ON DELETE CASCADE,
...
);
```
**Solution Implemented:**
- Created migration [sql/migrations/005_add_store_location_to_user_submitted_prices.sql](sql/migrations/005_add_store_location_to_user_submitted_prices.sql)
- Added `store_location_id` column to table (NOT NULL after migration)
- Migrated existing data: linked each price to first location of its store
- Updated TypeScript interface [src/types.ts:270-282](src/types.ts#L270-L282) to include both fields
- Kept `store_id` for backward compatibility during transition
**Benefits:**
- Prices are now specific to individual store locations
- "Walmart Toronto" and "Walmart Vancouver" prices are tracked separately
- Improves geographic specificity for price comparisons
- Enables proximity-based price recommendations
**Next Steps:**
- Application code needs to be updated to use `store_location_id` when creating new prices (see the sketch after this list)
- Once all code is migrated, can drop the legacy `store_id` column
- User-submitted prices feature is not yet implemented in the UI
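As a sketch of that update — column names follow the migration as described above but are not verified against it, and the import path is assumed:
```typescript
import { getPool } from '../../src/services/db/connection.db'; // path assumed

// Sketch only: passes the location explicitly when recording a price.
export async function createUserSubmittedPrice(input: {
  masterItemId: number;
  storeLocationId: number; // location-specific, per migration 005
  price: number;
  userId: string;
}) {
  const result = await getPool().query(
    `INSERT INTO public.user_submitted_prices
       (master_item_id, store_location_id, price, user_id)
     VALUES ($1, $2, $3, $4)
     RETURNING *`,
    [input.masterItemId, input.storeLocationId, input.price, input.userId],
  );
  return result.rows[0];
}
```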
---
#### 3. **Receipts → Store Locations (MIGRATED)**
**Status**: ✅ **FIXED** - Migration created
**Schema:**
```sql
CREATE TABLE IF NOT EXISTS public.receipts (
...
store_id BIGINT REFERENCES public.stores(store_id) ON DELETE CASCADE,
store_location_id BIGINT REFERENCES public.store_locations(store_location_id) ON DELETE SET NULL,
...
);
```
**Solution Implemented:**
- Created migration [sql/migrations/006_add_store_location_to_receipts.sql](sql/migrations/006_add_store_location_to_receipts.sql)
- Added `store_location_id` column to table (nullable - receipts may not have matched store)
- Migrated existing data: linked each receipt to first location of its store
- Updated TypeScript interface [src/types.ts:661-675](src/types.ts#L661-L675) to include both fields
- Kept `store_id` for backward compatibility during transition
**Benefits:**
- Receipts can now be tied to specific store locations
- "Loblaws Queen St" and "Loblaws Bloor St" are tracked separately
- Enables location-specific shopping pattern analysis
- Improves receipt matching accuracy with address data
**Next Steps:**
- Receipt scanning code needs to determine specific store_location_id from OCR text
- May require address parsing/matching logic in receipt processing
- Once all code is migrated, can drop the legacy `store_id` column
- OCR confidence and pattern matching should prefer location-specific data
---
#### 4. Item Price History → Store Locations (Already Correct!)
**Schema:**
```sql
CREATE TABLE IF NOT EXISTS public.item_price_history (
...
store_location_id BIGINT REFERENCES public.store_locations(store_location_id) ON DELETE CASCADE,
...
);
```
**Status:**
- ✅ **CORRECTLY IMPLEMENTED** - This table already uses `store_location_id`
- Properly tracks price history per location
- Good example of how other tables should be structured
---
## Summary Table
| Table | Foreign Key | Should Use | Status | Priority |
| --------------------- | --------------------------- | ------------------------------------- | --------------- | -------- |
| **flyer_locations** | flyer_id, store_location_id | Many-to-many link | ✅ **FIXED** | ✅ Done |
| flyers | store_id | ~~store_id~~ Now uses flyer_locations | ✅ **FIXED** | ✅ Done |
| user_submitted_prices | store_id | store_location_id | ✅ **MIGRATED** | ✅ Done |
| receipts | store_id | store_location_id | ✅ **MIGRATED** | ✅ Done |
| item_price_history | store_location_id | ✅ Already correct | ✅ Correct | ✅ Good |
| shopping_trips | (no store ref) | N/A | ✅ Correct | ✅ Good |
| store_locations | store_id, address_id | ✅ Already correct | ✅ Correct | ✅ Good |
---
## Impact Assessment
### Critical (Must Fix)
1. **Flyer Locations Many-to-Many**
- **Impact:** Flyers can't be associated with multiple store locations
- **User Impact:** Users can't see which specific store locations have deals
- **Business Logic:** Breaks core assumption that one flyer can be valid at multiple stores
- **Fix Complexity:** High - requires schema migration, type changes, query rewrites
### Medium (Should Consider)
2. **User Submitted Prices & Receipts**
- **Impact:** Loss of location-specific data
- **User Impact:** Can't distinguish between different locations of same store chain
- **Business Logic:** Reduces accuracy of proximity-based recommendations
- **Fix Complexity:** Medium - requires migration and query updates
---
## Recommended Actions
### Phase 1: Fix Flyer Locations (Critical)
1. Create migration to properly use `flyer_locations` table
2. Update `Flyer` TypeScript interface to support multiple locations
3. Rewrite all flyer queries in [src/services/db/flyer.db.ts](src/services/db/flyer.db.ts)
4. Update flyer creation/update endpoints to manage `flyer_locations` entries
5. Update frontend components to display multiple locations per flyer
6. Update tests to use new structure
### Phase 2: Consider Store Location Specificity (Optional)
1. Evaluate if location-specific receipts and prices provide value
2. If yes, create migrations to change `store_id` → `store_location_id`
3. Update repository queries
4. Update TypeScript interfaces
5. Update tests
---
## Related Documents
- [ADR-013: Store Address Normalization](../docs/adr/0013-store-address-normalization.md)
- [STORE_ADDRESS_IMPLEMENTATION_PLAN.md](../STORE_ADDRESS_IMPLEMENTATION_PLAN.md)
- [TESTING.md](../docs/TESTING.md)
---
## Analysis Methodology
This analysis was conducted by:
1. Extracting all foreign key relationships from [sql/master_schema_rollup.sql](sql/master_schema_rollup.sql)
2. Comparing schema relationships against TypeScript interfaces in [src/types.ts](src/types.ts)
3. Auditing database queries in [src/services/db/](src/services/db/) for proper JOIN usage
4. Identifying gaps where schema relationships exist but aren't used in queries
Commands used:
```bash
# Extract all foreign keys
podman exec -it flyer-crawler-dev bash -c "grep -n 'REFERENCES' sql/master_schema_rollup.sql"
# Check specific table structures
podman exec -it flyer-crawler-dev bash -c "grep -A 15 'CREATE TABLE.*table_name' sql/master_schema_rollup.sql"
# Verify query patterns
podman exec -it flyer-crawler-dev bash -c "grep -n 'JOIN.*table_name' src/services/db/*.ts"
```
---
**Last Updated:** 2026-01-19
**Analyzed By:** Claude Code (via user request after discovering store_name → store bug)


@@ -0,0 +1,265 @@
# Coder Subagent Reference
## Quick Navigation
| Category | Key Files |
| ------------ | ------------------------------------------------------------------ |
| Routes | `src/routes/*.routes.ts` |
| Services | `src/services/*.server.ts` (backend), `src/services/*.ts` (shared) |
| Repositories | `src/services/db/*.db.ts` |
| Types | `src/types.ts`, `src/types/*.ts` |
| Schemas | `src/schemas/*.schemas.ts` |
| Config | `src/config/env.ts` |
| Utils | `src/utils/*.ts` |
---
## Architecture Patterns (ADR Summary)
### Layer Flow
```
Route → validateRequest(schema) → Service → Repository → Database
                                     ↓
                               External APIs
```
### Repository Naming Convention (ADR-034)
| Prefix | Behavior | Return |
| --------- | --------------------------------- | -------------- |
| `get*` | Throws `NotFoundError` if missing | Entity |
| `find*` | Returns `null` if missing | Entity \| null |
| `list*` | Returns empty array if none | Entity[] |
| `create*` | Creates new record | Entity |
| `update*` | Updates existing | Entity |
| `delete*` | Removes record | void |
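The `find*` row differs from `get*` only in its miss behavior. A minimal sketch, complementing the `get*` example in the Error Handling section just below (import paths assumed, `Entity` a placeholder type):
```typescript
import { getPool } from '../services/db/connection.db'; // path assumed
import { handleDbError } from '../services/db/errors.db';
import { logger } from '../services/logger.server';

type Entity = { id: string }; // placeholder for the real domain type

// find* variant: resolves to null instead of throwing when the row is missing.
async function findById(id: string): Promise<Entity | null> {
  try {
    const result = await getPool().query('SELECT * FROM table WHERE id = $1', [id]);
    return result.rows[0] ?? null;
  } catch (error) {
    handleDbError(error, logger, 'Failed to find entity', { id });
  }
}
```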
### Error Handling (ADR-001)
```typescript
import { handleDbError, NotFoundError } from '../services/db/errors.db';
import { getPool } from '../services/db/connection.db'; // path assumed
import { logger } from '../services/logger.server';
// Repository pattern: get* throws NotFoundError when the row is missing
async function getById(id: string): Promise<Entity> {
  try {
    const result = await getPool().query('SELECT * FROM table WHERE id = $1', [id]);
if (result.rows.length === 0) throw new NotFoundError('Entity not found');
return result.rows[0];
} catch (error) {
handleDbError(error, logger, 'Failed to get entity', { id });
}
}
```
### API Response Helpers (ADR-028)
```typescript
import {
sendSuccess,
sendPaginated,
sendError,
sendNoContent,
sendMessage,
ErrorCode,
} from '../utils/apiResponse';
// Success with data
sendSuccess(res, data); // 200
sendSuccess(res, data, 201); // 201 Created
// Paginated
sendPaginated(res, items, { page, limit, total });
// Error
sendError(res, ErrorCode.NOT_FOUND, 'User not found', 404);
sendError(res, ErrorCode.VALIDATION_ERROR, 'Invalid input', 400, validationErrors);
// No content / Message
sendNoContent(res); // 204
sendMessage(res, 'Password updated');
```
### Transaction Pattern (ADR-002)
```typescript
import { withTransaction } from '../services/db/connection.db';
const result = await withTransaction(async (client) => {
await client.query('INSERT INTO a ...');
await client.query('INSERT INTO b ...');
return { success: true };
});
```
---
## Adding New Features
### New API Endpoint Checklist
1. **Schema** (`src/schemas/{domain}.schemas.ts`)
```typescript
import { z } from 'zod';
export const createEntitySchema = z.object({
body: z.object({ name: z.string().min(1) }),
});
```
2. **Route** (`src/routes/{domain}.routes.ts`)
```typescript
import { validateRequest } from '../middleware/validation.middleware';
import { createEntitySchema } from '../schemas/{domain}.schemas';
router.post('/', validateRequest(createEntitySchema), async (req, res, next) => {
try {
const result = await entityService.create(req.body);
sendSuccess(res, result, 201);
} catch (error) {
next(error);
}
});
```
3. **Service** (`src/services/{domain}Service.server.ts`)
```typescript
export async function create(data: CreateInput): Promise<Entity> {
// Business logic here
return repository.create(data);
}
```
4. **Repository** (`src/services/db/{domain}.db.ts`)
```typescript
export async function create(data: CreateInput, client?: PoolClient): Promise<Entity> {
const pool = client || getPool();
try {
const result = await pool.query('INSERT INTO ...', [data.name]);
return result.rows[0];
} catch (error) {
handleDbError(error, logger, 'Failed to create entity', { data });
}
}
```
### New Background Job Checklist
1. **Queue** (`src/services/queues.server.ts`)
```typescript
export const myQueue = new Queue('my-queue', { connection: redisConnection });
```
2. **Worker** (`src/services/workers.server.ts`)
```typescript
new Worker(
'my-queue',
async (job) => {
// Process job
},
{ connection: redisConnection },
);
```
3. **Trigger** (in service)
```typescript
await myQueue.add('job-name', { data });
```
---
## Key Files Reference
### Database Repositories
| Repository | Purpose | Path |
| -------------------- | ----------------------------- | ------------------------------------ |
| `flyer.db.ts` | Flyer CRUD, processing status | `src/services/db/flyer.db.ts` |
| `store.db.ts` | Store management | `src/services/db/store.db.ts` |
| `user.db.ts` | User accounts | `src/services/db/user.db.ts` |
| `shopping.db.ts` | Shopping lists, watchlists | `src/services/db/shopping.db.ts` |
| `gamification.db.ts` | Achievements, points | `src/services/db/gamification.db.ts` |
| `category.db.ts` | Item categories | `src/services/db/category.db.ts` |
| `price.db.ts` | Price history, comparisons | `src/services/db/price.db.ts` |
| `recipe.db.ts` | Recipe management | `src/services/db/recipe.db.ts` |
### Services
| Service | Purpose | Path |
| ---------------------------------- | -------------------------------- | ----------------------------------------------- |
| `flyerProcessingService.server.ts` | Orchestrates flyer AI extraction | `src/services/flyerProcessingService.server.ts` |
| `flyerAiProcessor.server.ts` | Gemini AI integration | `src/services/flyerAiProcessor.server.ts` |
| `cacheService.server.ts` | Redis caching | `src/services/cacheService.server.ts` |
| `queues.server.ts` | BullMQ queue definitions | `src/services/queues.server.ts` |
| `workers.server.ts` | BullMQ workers | `src/services/workers.server.ts` |
| `emailService.server.ts` | Nodemailer integration | `src/services/emailService.server.ts` |
| `geocodingService.server.ts` | Address geocoding | `src/services/geocodingService.server.ts` |
### Routes
| Route | Base Path | Auth Required |
| ------------------ | ------------- | ------------- |
| `flyer.routes.ts` | `/api/flyers` | Mixed |
| `store.routes.ts` | `/api/stores` | Mixed |
| `user.routes.ts` | `/api/users` | Yes |
| `auth.routes.ts` | `/api/auth` | No |
| `admin.routes.ts` | `/api/admin` | Admin only |
| `deals.routes.ts` | `/api/deals` | No |
| `health.routes.ts` | `/api/health` | No |
---
## Error Types
| Error Class | HTTP Status | Use Case |
| --------------------------- | ----------- | ------------------------- |
| `NotFoundError` | 404 | Resource not found |
| `ForbiddenError` | 403 | Access denied |
| `ValidationError` | 400 | Input validation failed |
| `UniqueConstraintError` | 409 | Duplicate record |
| `ForeignKeyConstraintError` | 400 | Referenced record missing |
| `NotNullConstraintError` | 400 | Required field null |
Import: `import { NotFoundError, ... } from '../services/db/errors.db'`
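A typical throw site, as a sketch (the repository name and method are illustrative; the centralized error handler turns the throw into the mapped status):
```typescript
import { NotFoundError } from '../services/db/errors.db';

// Illustrative lookup - absence of the record is exceptional here
const flyer = await flyerRepo.findFlyerById(id, req.log);
if (!flyer) {
  throw new NotFoundError('Flyer not found'); // errorHandler maps this to 404
}
```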
---
## Middleware
| Middleware | Purpose | Usage |
| ------------------------- | -------------------- | ------------------------------------------------------------ |
| `validateRequest(schema)` | Zod validation | `router.post('/', validateRequest(schema), handler)` |
| `requireAuth` | JWT authentication | `router.get('/', requireAuth, handler)` |
| `requireAdmin` | Admin role check | `router.delete('/', requireAuth, requireAdmin, handler)` |
| `fileUpload` | Multer file handling | `router.post('/upload', fileUpload.single('file'), handler)` |
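These compose left to right, so authentication runs before role checks and payload validation. A sketch stacking them on one route (import paths are assumptions; adjust to the actual middleware files):
```typescript
import { requireAuth, requireAdmin } from '../middleware/auth.middleware';
import { validateRequest } from '../middleware/validation.middleware';
import { updateEntitySchema } from '../schemas/entity.schemas';

// Order matters: authenticate, then authorize, then validate the payload
router.put('/:id', requireAuth, requireAdmin, validateRequest(updateEntitySchema), handler);
```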
---
## Type Definitions
| File | Contains |
| --------------------------- | ----------------------------------------------- |
| `src/types.ts` | Main types: User, Flyer, FlyerItem, Store, etc. |
| `src/types/api.ts` | API response envelopes, pagination |
| `src/types/auth.ts` | Auth-related types |
| `src/types/gamification.ts` | Achievement types |
---
## Naming Conventions (ADR-027)
| Context | Convention | Example |
| ----------------- | ---------------- | ----------------------------------- |
| AI output types | `Ai*` prefix | `AiFlyerItem`, `AiExtractionResult` |
| Database types | `Db*` prefix | `DbFlyer`, `DbUser` |
| API types | No prefix | `Flyer`, `User` |
| Schema validation | `*Schema` suffix | `createFlyerSchema` |
| Routes | `*.routes.ts` | `flyer.routes.ts` |
| Repositories | `*.db.ts` | `flyer.db.ts` |
| Server services | `*.server.ts` | `aiService.server.ts` |
| Client services | `*.client.ts` | `logger.client.ts` |
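In practice the prefixes trace one record through the pipeline. A sketch with illustrative fields only:
```typescript
// AI output -> DB row -> API shape (field names are illustrative)
interface AiFlyerItem { name: string; price: string } // raw model output, untrusted
interface DbFlyerItem { flyer_item_id: number; name: string; price: number } // table row
interface FlyerItem { id: number; name: string; price: number } // API response

function toApiFlyerItem(row: DbFlyerItem): FlyerItem {
  return { id: row.flyer_item_id, name: row.name, price: row.price };
}
```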

View File

@@ -0,0 +1,377 @@
# Database Subagent Reference
## Quick Navigation
| Resource | Path |
| ------------------ | ---------------------------------------- |
| Master Schema | `sql/master_schema_rollup.sql` |
| Initial Schema | `sql/initial_schema.sql` |
| Migrations | `sql/migrations/*.sql` |
| Triggers/Functions | `sql/Initial_triggers_and_functions.sql` |
| Initial Data | `sql/initial_data.sql` |
| Drop Script | `sql/drop_tables.sql` |
| Repositories | `src/services/db/*.db.ts` |
| Connection | `src/services/db/connection.db.ts` |
| Errors | `src/services/db/errors.db.ts` |
---
## Database Credentials
### Environments
| Environment | User | Database | Host |
| ------------- | -------------------- | -------------------- | --------------------------- |
| Production | `flyer_crawler_prod` | `flyer-crawler-prod` | `DB_HOST` secret |
| Test | `flyer_crawler_test` | `flyer-crawler-test` | `DB_HOST` secret |
| Dev Container | `postgres` | `flyer_crawler_dev` | `postgres` (container name) |
### Connection (Dev Container)
```bash
# Inside container
psql -U postgres -d flyer_crawler_dev
# From Windows via Podman
podman exec -it flyer-crawler-dev psql -U postgres -d flyer_crawler_dev
```
### Connection (Production/Test via SSH)
```bash
# SSH to server, then:
PGPASSWORD=$DB_PASSWORD psql -h $DB_HOST -U $DB_USER -d $DB_NAME
```
---
## Schema Tables (Core)
| Table | Purpose | Key Columns |
| --------------------- | -------------------- | --------------------------------------------------------------- |
| `users` | Authentication | `user_id` (UUID PK), `email`, `password_hash` |
| `profiles` | User data | `user_id` (FK), `full_name`, `role`, `points` |
| `addresses` | Normalized addresses | `address_id`, `address_line_1`, `city`, `latitude`, `longitude` |
| `stores` | Store chains | `store_id`, `name`, `logo_url` |
| `store_locations` | Physical locations | `store_location_id`, `store_id` (FK), `address_id` (FK) |
| `flyers` | Uploaded flyers | `flyer_id`, `store_id` (FK), `image_url`, `status` |
| `flyer_items` | Extracted deals | `flyer_item_id`, `flyer_id` (FK), `name`, `price` |
| `categories` | Item categories | `category_id`, `name` |
| `master_items` | Canonical items | `master_item_id`, `name`, `category_id` (FK) |
| `shopping_lists` | User lists | `shopping_list_id`, `user_id` (FK), `name` |
| `shopping_list_items` | List items | `shopping_list_item_id`, `shopping_list_id` (FK) |
| `watchlist` | Price alerts | `watchlist_id`, `user_id` (FK), `search_term` |
| `activity_log` | Audit trail | `activity_log_id`, `user_id`, `action`, `details` |
---
## Schema Sync Rule (CRITICAL)
**Both files MUST stay synchronized:**
- `sql/master_schema_rollup.sql` - Used by test DB setup
- `sql/initial_schema.sql` - Used for fresh installs
**When adding columns:**
1. Add migration in `sql/migrations/NNN_description.sql`
2. Add column to `master_schema_rollup.sql`
3. Add column to `initial_schema.sql`
4. The test DB is built from `master_schema_rollup.sql`, so files that drift out of sync cause test failures
---
## Migration Pattern
### Creating a Migration
```sql
-- sql/migrations/NNN_descriptive_name.sql
-- Add column with default
ALTER TABLE public.flyers
ADD COLUMN IF NOT EXISTS new_column TEXT DEFAULT 'value';
-- Add index
CREATE INDEX IF NOT EXISTS idx_flyers_new_column
ON public.flyers(new_column);
-- Update schema_info
UPDATE public.schema_info
SET schema_hash = 'new_hash', updated_at = now()
WHERE environment = 'production';
```
### Running Migrations
```bash
# Via psql
PGPASSWORD=$DB_PASSWORD psql -h $DB_HOST -U $DB_USER -d $DB_NAME -f sql/migrations/NNN_description.sql
# In CI/CD - migrations are checked via schema hash
```
---
## Repository Pattern (ADR-034)
### Method Naming Convention
| Prefix | Behavior | Return Type |
| --------- | --------------------------------- | ---------------- |
| `get*` | Throws `NotFoundError` if missing | `Entity` |
| `find*` | Returns `null` if missing | `Entity \| null` |
| `list*` | Returns empty array if none | `Entity[]` |
| `create*` | Creates new record | `Entity` |
| `update*` | Updates existing record | `Entity` |
| `delete*` | Removes record | `void` |
| `count*` | Returns count | `number` |
### Repository Template
```typescript
// src/services/db/entity.db.ts
import { getPool } from './connection.db';
import { handleDbError, NotFoundError } from './errors.db';
import type { PoolClient } from 'pg';
import type { Logger } from 'pino';
export async function getEntityById(
id: string,
logger: Logger,
client?: PoolClient,
): Promise<Entity> {
const pool = client || getPool();
try {
const result = await pool.query('SELECT * FROM public.entities WHERE entity_id = $1', [id]);
if (result.rows.length === 0) {
throw new NotFoundError('Entity not found');
}
return result.rows[0];
} catch (error) {
handleDbError(error, logger, 'Failed to get entity', { id });
}
}
export async function findEntityByName(
name: string,
logger: Logger,
client?: PoolClient,
): Promise<Entity | null> {
const pool = client || getPool();
try {
const result = await pool.query('SELECT * FROM public.entities WHERE name = $1', [name]);
return result.rows[0] || null;
} catch (error) {
handleDbError(error, logger, 'Failed to find entity', { name });
}
}
export async function listEntities(logger: Logger, client?: PoolClient): Promise<Entity[]> {
const pool = client || getPool();
try {
const result = await pool.query('SELECT * FROM public.entities ORDER BY name');
return result.rows;
} catch (error) {
handleDbError(error, logger, 'Failed to list entities', {});
}
}
```
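Callers lean on the naming contract: `get*` when absence is an error, `find*` when it is expected. A caller-side sketch using the template above:
```typescript
// get*: absence is exceptional - let the error handler map NotFoundError to 404
const entity = await getEntityById(req.params.id, req.log);

// find*: absence is a normal outcome - branch on null
const existing = await findEntityByName(name, req.log);
if (!existing) {
  // safe to create a new record
}
```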
---
## Transaction Pattern (ADR-002)
```typescript
import { withTransaction } from './connection.db';
const result = await withTransaction(async (client) => {
// All queries use same client
const user = await userRepo.createUser(data, logger, client);
const profile = await profileRepo.createProfile(user.user_id, profileData, logger, client);
await activityRepo.logActivity('user_created', user.user_id, logger, client);
return { user, profile };
});
// Commits on success, rolls back on any error
```
---
## Error Handling (ADR-001)
### Error Types
| Error | PostgreSQL Code | HTTP Status | Use Case |
| -------------------------------- | --------------- | ----------- | ------------------------- |
| `UniqueConstraintError` | `23505` | 409 | Duplicate record |
| `ForeignKeyConstraintError` | `23503` | 400 | Referenced record missing |
| `NotNullConstraintError` | `23502` | 400 | Required field null |
| `CheckConstraintError` | `23514` | 400 | Check constraint violated |
| `InvalidTextRepresentationError` | `22P02` | 400 | Invalid type format |
| `NumericValueOutOfRangeError` | `22003` | 400 | Number out of range |
| `NotFoundError` | - | 404 | Record not found |
| `ForbiddenError` | - | 403 | Access denied |
### Using handleDbError
```typescript
import { handleDbError, NotFoundError } from './errors.db';
try {
const result = await pool.query('INSERT INTO ...', [data]);
if (result.rows.length === 0) throw new NotFoundError('Entity not found');
return result.rows[0];
} catch (error) {
handleDbError(
error,
logger,
'Failed to create entity',
{ data },
{
uniqueMessage: 'Entity with this name already exists',
fkMessage: 'Referenced category does not exist',
defaultMessage: 'Failed to create entity',
},
);
}
```
---
## Connection Pool
```typescript
import { getPool, getPoolStatus } from './connection.db';
// Get pool (singleton)
const pool = getPool();
// Check pool status
const status = getPoolStatus();
// { totalCount: 20, idleCount: 15, waitingCount: 0 }
```
### Pool Configuration
| Setting | Value | Purpose |
| ------------------------- | ----- | ------------------- |
| `max` | 20 | Max clients in pool |
| `idleTimeoutMillis` | 30000 | Idle client timeout |
| `connectionTimeoutMillis` | 2000 | Connection timeout |
---
## Common Queries
### Paginated List
```typescript
const result = await pool.query(
`SELECT * FROM public.flyers
ORDER BY created_at DESC
LIMIT $1 OFFSET $2`,
[limit, (page - 1) * limit],
);
const countResult = await pool.query('SELECT COUNT(*) FROM public.flyers');
const total = parseInt(countResult.rows[0].count, 10);
```
### Spatial Query (Find Nearby)
```typescript
const result = await pool.query(
`SELECT sl.*, a.*,
ST_Distance(a.location, ST_SetSRID(ST_MakePoint($1, $2), 4326)::geography) as distance
FROM public.store_locations sl
JOIN public.addresses a ON sl.address_id = a.address_id
WHERE ST_DWithin(a.location, ST_SetSRID(ST_MakePoint($1, $2), 4326)::geography, $3)
ORDER BY distance`,
[longitude, latitude, radiusMeters],
);
```
### Upsert Pattern
```typescript
const result = await pool.query(
`INSERT INTO public.stores (name, logo_url)
VALUES ($1, $2)
ON CONFLICT (name) DO UPDATE SET
logo_url = EXCLUDED.logo_url,
updated_at = now()
RETURNING *`,
[name, logoUrl],
);
```
---
## Database Reset Commands
### Dev Container
```bash
# Reset dev database (runs seed script)
podman exec -it flyer-crawler-dev npm run db:reset:dev
# Reset test database
podman exec -it flyer-crawler-dev npm run db:reset:test
```
### Manual SQL
```bash
# Drop all tables
podman exec -it flyer-crawler-dev psql -U postgres -d flyer_crawler_dev -f /app/sql/drop_tables.sql
# Recreate schema
podman exec -it flyer-crawler-dev psql -U postgres -d flyer_crawler_dev -f /app/sql/master_schema_rollup.sql
# Load initial data
podman exec -it flyer-crawler-dev psql -U postgres -d flyer_crawler_dev -f /app/sql/initial_data.sql
```
---
## Database Users Setup
```sql
-- Create database and user (as postgres superuser)
CREATE DATABASE "flyer-crawler-test";
CREATE USER flyer_crawler_test WITH PASSWORD 'password';
ALTER DATABASE "flyer-crawler-test" OWNER TO flyer_crawler_test;
-- Grant permissions
\c "flyer-crawler-test"
ALTER SCHEMA public OWNER TO flyer_crawler_test;
GRANT CREATE, USAGE ON SCHEMA public TO flyer_crawler_test;
-- Required extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "postgis";
-- Verify permissions
\dn+ public
-- Should show 'UC' for the user
```
---
## Repository Files
| Repository | Domain | Path |
| --------------------- | -------------------------- | ------------------------------------- |
| `user.db.ts` | Users, profiles | `src/services/db/user.db.ts` |
| `flyer.db.ts` | Flyers, processing | `src/services/db/flyer.db.ts` |
| `store.db.ts` | Stores | `src/services/db/store.db.ts` |
| `storeLocation.db.ts` | Store locations | `src/services/db/storeLocation.db.ts` |
| `address.db.ts` | Addresses | `src/services/db/address.db.ts` |
| `category.db.ts` | Categories | `src/services/db/category.db.ts` |
| `shopping.db.ts` | Shopping lists, watchlists | `src/services/db/shopping.db.ts` |
| `price.db.ts` | Price history | `src/services/db/price.db.ts` |
| `gamification.db.ts` | Achievements, points | `src/services/db/gamification.db.ts` |
| `notification.db.ts` | Notifications | `src/services/db/notification.db.ts` |
| `recipe.db.ts` | Recipes | `src/services/db/recipe.db.ts` |
| `receipt.db.ts` | Receipts | `src/services/db/receipt.db.ts` |
| `admin.db.ts` | Admin operations | `src/services/db/admin.db.ts` |

View File

@@ -0,0 +1,357 @@
# DevOps Subagent Reference
## Critical Rule: Git Bash Path Conversion
Git Bash on Windows auto-converts Unix paths, breaking container commands.
| Solution | Example |
| ---------------------------- | -------------------------------------------------------- |
| `sh -c` with single quotes | `podman exec container sh -c '/usr/local/bin/script.sh'` |
| Double slashes | `podman exec container //usr//local//bin//script.sh` |
| MSYS_NO_PATHCONV=1 | `MSYS_NO_PATHCONV=1 podman exec ...` |
| Windows paths for host files | `podman cp "d:/path/file" container:/tmp/file` |
---
## Container Commands (Podman)
### Dev Container Operations
```bash
# List running containers
podman ps
# Container logs
podman logs flyer-crawler-dev
podman logs -f flyer-crawler-dev # Follow
# Execute in container
podman exec -it flyer-crawler-dev bash
podman exec -it flyer-crawler-dev npm run test:unit
# Restart container
podman restart flyer-crawler-dev
# Container resource usage
podman stats flyer-crawler-dev
```
### Test Execution (from Windows)
```bash
# Unit tests - pipe output for AI processing
podman exec -it flyer-crawler-dev npm run test:unit 2>&1 | tee test-results.txt
# Integration tests
podman exec -it flyer-crawler-dev npm run test:integration
# Type check (CRITICAL before commit)
podman exec -it flyer-crawler-dev npm run type-check
# Specific test file
podman exec -it flyer-crawler-dev npm test -- --run src/hooks/useAuth.test.tsx
```
### Database Operations (from Windows)
```bash
# Reset dev database
podman exec -it flyer-crawler-dev npm run db:reset:dev
# Access PostgreSQL
podman exec -it flyer-crawler-dev psql -U postgres -d flyer_crawler_dev
# Run SQL file (use MSYS_NO_PATHCONV to avoid path conversion)
MSYS_NO_PATHCONV=1 podman exec -it flyer-crawler-dev psql -U postgres -d flyer_crawler_dev -f /app/sql/master_schema_rollup.sql
```
---
## PM2 Commands
### Production Server (via SSH)
```bash
# SSH to server
ssh root@projectium.com
# List all apps
pm2 list
# App status
pm2 show flyer-crawler-api
# Logs
pm2 logs flyer-crawler-api
pm2 logs --lines 100
# Restart apps
pm2 restart flyer-crawler-api
pm2 reload flyer-crawler-api # Zero-downtime
# Stop/Start
pm2 stop flyer-crawler-api
pm2 start flyer-crawler-api
# Delete and reload from config
pm2 delete all
pm2 start ecosystem.config.cjs
```
### PM2 Config: `ecosystem.config.cjs`
| App | Purpose | Memory | Mode |
| -------------------------------- | ---------------- | ------ | ------- |
| `flyer-crawler-api` | Express server | 500M | Cluster |
| `flyer-crawler-worker` | BullMQ worker | 1G | Fork |
| `flyer-crawler-analytics-worker` | Analytics worker | 1G | Fork |
Test variants: `*-test` suffix
---
## CI/CD Workflows
### Location: `.gitea/workflows/`
| Workflow | Trigger | Purpose |
| ----------------------------- | ------------ | ----------------------- |
| `deploy-to-test.yml` | Push to main | Auto-deploy to test env |
| `deploy-to-prod.yml` | Manual | Deploy to production |
| `manual-db-backup.yml` | Manual | Database backup |
| `manual-db-reset-test.yml` | Manual | Reset test database |
| `manual-db-reset-prod.yml` | Manual | Reset prod database |
| `manual-db-restore.yml` | Manual | Restore database |
| `manual-deploy-major.yml` | Manual | Major version release |
| `manual-redis-flush-prod.yml` | Manual | Flush Redis cache |
### Deploy to Test Pipeline Steps
1. Checkout code
2. Setup Node.js 20
3. `npm ci`
4. Bump patch version (creates git tag)
5. `npm run type-check`
6. `npm run test:unit`
7. Check schema hash against deployed DB
8. `npm run build`
9. Copy files to `/var/www/flyer-crawler-test.projectium.com/`
10. `pm2 reload ecosystem-test.config.cjs`
### Deploy to Production Pipeline Steps
1. Verify confirmation phrase ("deploy-to-prod")
2. Checkout `main` branch
3. `npm ci`
4. Bump minor version (creates git tag)
5. Check schema hash against prod DB
6. `npm run build` (with Sentry source maps)
7. Copy files to `/var/www/flyer-crawler.projectium.com/`
8. `pm2 reload ecosystem.config.cjs`
---
## Deployment Paths
| Environment | Path | Domain |
| ------------- | --------------------------------------------- | ----------------------------------- |
| Production | `/var/www/flyer-crawler.projectium.com/` | `flyer-crawler.projectium.com` |
| Test | `/var/www/flyer-crawler-test.projectium.com/` | `flyer-crawler-test.projectium.com` |
| Dev Container | `/app/` | `localhost:3000` |
---
## Environment Configuration
### Files
| File | Purpose |
| --------------------------- | ----------------------- |
| `ecosystem.config.cjs` | PM2 production config |
| `ecosystem-test.config.cjs` | PM2 test config |
| `src/config/env.ts` | Zod schema for env vars |
| `.env.example` | Template for env vars |
### Required Secrets (Gitea CI/CD)
| Category | Secrets |
| ---------- | -------------------------------------------------------------------------------------------- |
| Database | `DB_HOST`, `DB_USER_PROD`, `DB_PASSWORD_PROD`, `DB_DATABASE_PROD` |
| Test DB | `DB_USER_TEST`, `DB_PASSWORD_TEST`, `DB_DATABASE_TEST` |
| Redis | `REDIS_PASSWORD_PROD`, `REDIS_PASSWORD_TEST` |
| Auth | `JWT_SECRET`, `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`, `GH_CLIENT_ID`, `GH_CLIENT_SECRET` |
| AI | `VITE_GOOGLE_GENAI_API_KEY`, `VITE_GOOGLE_GENAI_API_KEY_TEST` |
| Monitoring | `SENTRY_DSN`, `SENTRY_DSN_TEST`, `SENTRY_AUTH_TOKEN` |
| Maps | `GOOGLE_MAPS_API_KEY` |
### Adding New Secret
1. Add to Gitea Settings > Secrets
2. Update workflow YAML: `SENTRY_DSN: ${{ secrets.SENTRY_DSN }}`
3. Update `ecosystem.config.cjs`
4. Update `src/config/env.ts` Zod schema (see the sketch below)
5. Update `.env.example`
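For step 4, a minimal sketch of the Zod change, assuming `src/config/env.ts` follows the usual `z.object(...).parse(process.env)` shape (the real schema may differ):
```typescript
// src/config/env.ts (hypothetical excerpt)
import { z } from 'zod';

const envSchema = z.object({
  // ...existing variables...
  SENTRY_DSN: z.string().url(), // the new secret from step 1
});

export const env = envSchema.parse(process.env); // fails fast if the secret is missing
```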
---
## Redis Commands
### Dev Container
```bash
# Access Redis CLI
podman exec -it flyer-crawler-dev redis-cli
# Common commands
KEYS *
FLUSHALL
INFO
```
### Production
```bash
# Via SSH
ssh root@projectium.com
redis-cli -a $REDIS_PASSWORD
# Flush cache from inside redis-cli (use with caution)
FLUSHALL
# Or trigger the manual-redis-flush-prod.yml workflow instead
```
---
## Health Checks
### Endpoints
| Endpoint | Purpose |
| -------------------------- | -------------------------------------- |
| `GET /api/health` | Basic health check |
| `GET /api/health/detailed` | Full system status (DB, Redis, queues) |
### Manual Health Check
```bash
# From Windows
curl http://localhost:3000/api/health
# Or via Podman
podman exec -it flyer-crawler-dev curl http://localhost:3000/api/health
```
---
## Log Locations
### Production Server
```bash
# PM2 logs
~/.pm2/logs/
# NGINX logs
/var/log/nginx/access.log
/var/log/nginx/error.log
# Application logs (via PM2)
pm2 logs flyer-crawler-api --lines 200
```
### Dev Container
```bash
# View container logs
podman logs flyer-crawler-dev
# Follow logs
podman logs -f flyer-crawler-dev
```
---
## Backup/Restore
### Database Backup (Manual Workflow)
Trigger `manual-db-backup.yml` from Gitea Actions UI.
### Manual Backup
```bash
# SSH to server
ssh root@projectium.com
# Backup
PGPASSWORD=$DB_PASSWORD pg_dump -h $DB_HOST -U $DB_USER $DB_NAME > backup_$(date +%Y%m%d).sql
# Restore
PGPASSWORD=$DB_PASSWORD psql -h $DB_HOST -U $DB_USER $DB_NAME < backup.sql
```
---
## Bugsink (Error Tracking)
### Dev Container Token Generation
```bash
MSYS_NO_PATHCONV=1 podman exec -e DATABASE_URL=postgresql://bugsink:bugsink_dev_password@postgres:5432/bugsink -e SECRET_KEY=dev-bugsink-secret-key-minimum-50-characters-for-security flyer-crawler-dev sh -c 'cd /opt/bugsink/conf && DJANGO_SETTINGS_MODULE=bugsink_conf PYTHONPATH=/opt/bugsink/conf:/opt/bugsink/lib/python3.10/site-packages /opt/bugsink/bin/python -m django create_auth_token'
```
### Production Token Generation
```bash
ssh root@projectium.com "cd /opt/bugsink && bugsink-manage create_auth_token"
```
---
## Common Troubleshooting
### Container Won't Start
```bash
# Check logs
podman logs flyer-crawler-dev
# Inspect container
podman inspect flyer-crawler-dev
# Remove and recreate
podman rm -f flyer-crawler-dev
# Then recreate with docker-compose or podman run
```
### Database Connection Issues
```bash
# Test connection inside container
podman exec -it flyer-crawler-dev psql -U postgres -d flyer_crawler_dev -c "SELECT 1"
# Check if PostgreSQL is running
podman exec -it flyer-crawler-dev pg_isready
```
### PM2 App Keeps Restarting
```bash
# Check logs
pm2 logs flyer-crawler-api --err --lines 100
# Check for memory issues
pm2 monit
# View app details
pm2 show flyer-crawler-api
```
### Redis Connection Issues
```bash
# Test Redis inside container
podman exec -it flyer-crawler-dev redis-cli ping
# Check Redis logs
podman logs flyer-crawler-redis
```

View File

@@ -0,0 +1,410 @@
# Integrations Subagent Reference
## MCP Servers Overview
| Server | Purpose | URL | Tools Prefix |
| ------------------ | ------------------------- | ----------------------------------------------------------------- | -------------------------- |
| `bugsink` | Production error tracking | `https://bugsink.projectium.com` | `mcp__bugsink__*` |
| `localerrors` | Dev container errors | `http://127.0.0.1:8000` | `mcp__localerrors__*` |
| `devdb` | Dev PostgreSQL | `postgresql://postgres:postgres@127.0.0.1:5432/flyer_crawler_dev` | `mcp__devdb__*` |
| `gitea-projectium` | Gitea API | `gitea.projectium.com` | `mcp__gitea-projectium__*` |
| `gitea-torbonium` | Gitea API | `gitea.torbonium.com` | `mcp__gitea-torbonium__*` |
| `podman` | Container management | - | `mcp__podman__*` |
| `filesystem` | File system access | - | `mcp__filesystem__*` |
| `memory` | Knowledge graph | - | `mcp__memory__*` |
| `redis` | Cache management | `localhost:6379` | `mcp__redis__*` |
---
## MCP Server Configuration
### Global Config: `~/.claude/settings.json`
Used for production/remote servers (HTTPS works fine).
```json
{
"mcpServers": {
"bugsink": {
"command": "node",
"args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
"env": {
"BUGSINK_URL": "https://bugsink.projectium.com",
"BUGSINK_TOKEN": "<40-char-hex-token>"
}
}
}
}
```
### Project Config: `.mcp.json`
**CRITICAL:** Use project-level `.mcp.json` for localhost servers. Global config has issues loading localhost stdio MCP servers.
```json
{
"mcpServers": {
"localerrors": {
"command": "d:\\nodejs\\node.exe",
"args": ["d:\\gitea\\bugsink-mcp\\dist\\index.js"],
"env": {
"BUGSINK_URL": "http://127.0.0.1:8000",
"BUGSINK_TOKEN": "<40-char-hex-token>"
}
},
"devdb": {
"command": "D:\\nodejs\\npx.cmd",
"args": [
"-y",
"@modelcontextprotocol/server-postgres",
"postgresql://postgres:postgres@127.0.0.1:5432/flyer_crawler_dev"
]
}
}
}
```
---
## Bugsink Integration
### API Token Generation
**Bugsink 2.0.11 has NO UI for API tokens.** Use Django management command.
#### Dev Container
```bash
MSYS_NO_PATHCONV=1 podman exec -e DATABASE_URL=postgresql://bugsink:bugsink_dev_password@postgres:5432/bugsink -e SECRET_KEY=dev-bugsink-secret-key-minimum-50-characters-for-security flyer-crawler-dev sh -c 'cd /opt/bugsink/conf && DJANGO_SETTINGS_MODULE=bugsink_conf PYTHONPATH=/opt/bugsink/conf:/opt/bugsink/lib/python3.10/site-packages /opt/bugsink/bin/python -m django create_auth_token'
```
#### Production
```bash
ssh root@projectium.com "cd /opt/bugsink && bugsink-manage create_auth_token"
```
### Bugsink MCP Tools
| Tool | Purpose |
| ----------------- | ---------------------- |
| `test_connection` | Verify connection |
| `list_projects` | List all projects |
| `list_issues` | List issues by project |
| `get_issue` | Issue details |
| `list_events` | Events for an issue |
| `get_event` | Event details |
| `get_stacktrace` | Formatted stacktrace |
### Usage Example
```typescript
// Test connection
mcp__bugsink__test_connection();
// List production issues
mcp__bugsink__list_issues({ project_id: 1, status: 'unresolved', limit: 10 });
// Get stacktrace
mcp__bugsink__get_stacktrace({ event_id: 'uuid-here' });
```
---
## PostgreSQL MCP Integration
### Setup
```bash
# Uses @modelcontextprotocol/server-postgres package
# Connection string in .mcp.json
```
### Tools
| Tool | Purpose |
| ------- | --------------------- |
| `query` | Execute read-only SQL |
### Usage Example
```typescript
mcp__devdb__query({ sql: 'SELECT * FROM public.users LIMIT 5' });
mcp__devdb__query({ sql: 'SELECT COUNT(*) FROM public.flyers' });
```
---
## Gitea MCP Integration
### Common Tools
| Tool | Purpose |
| ------------------------- | ----------------- |
| `get_my_user_info` | Current user info |
| `list_my_repos` | List repositories |
| `get_issue_by_index` | Issue details |
| `list_repo_issues` | Repository issues |
| `create_issue` | Create new issue |
| `create_pull_request` | Create PR |
| `list_repo_pull_requests` | List PRs |
| `get_file_content` | Read file |
| `list_repo_commits` | Commit history |
### Usage Example
```typescript
// List issues
mcp__gitea-projectium__list_repo_issues({
  owner: 'james',
  repo: 'flyer-crawler',
  state: 'open',
});
// Create issue
mcp__gitea-projectium__create_issue({
  owner: 'james',
  repo: 'flyer-crawler',
  title: 'Bug: Description',
  body: 'Details here',
});
// Get file content
mcp__gitea-projectium__get_file_content({
  owner: 'james',
  repo: 'flyer-crawler',
  ref: 'main',
  filePath: 'CLAUDE.md',
});
```
---
## Redis MCP Integration
### Tools
| Tool | Purpose |
| -------- | -------------------- |
| `get` | Get key value |
| `set` | Set key value |
| `delete` | Delete key(s) |
| `list` | List keys by pattern |
### Usage Example
```typescript
// List cache keys
mcp__redis__list({ pattern: 'flyer:*' });
// Get cached value
mcp__redis__get({ key: 'flyer:123' });
// Set with expiration
mcp__redis__set({ key: 'test:key', value: 'data', expireSeconds: 3600 });
// Delete key
mcp__redis__delete({ key: 'test:key' });
```
---
## Podman MCP Integration
### Tools
| Tool | Purpose |
| ------------------- | ----------------- |
| `container_list` | List containers |
| `container_logs` | View logs |
| `container_inspect` | Container details |
| `container_stop` | Stop container |
| `container_remove` | Remove container |
| `container_run` | Run container |
| `image_list` | List images |
| `image_pull` | Pull image |
### Usage Example
```typescript
// List running containers
mcp__podman__container_list();
// View container logs
mcp__podman__container_logs({ name: 'flyer-crawler-dev' });
// Inspect container
mcp__podman__container_inspect({ name: 'flyer-crawler-dev' });
```
---
## Memory MCP (Knowledge Graph)
### Tools
| Tool | Purpose |
| ------------------ | -------------------- |
| `read_graph` | Read entire graph |
| `search_nodes` | Search by query |
| `open_nodes` | Get specific nodes |
| `create_entities` | Create entities |
| `create_relations` | Create relationships |
| `add_observations` | Add observations |
| `delete_entities` | Delete entities |
### Usage Example
```typescript
// Search for context
mcp__memory__search_nodes({ query: 'flyer-crawler' });
// Read full graph
mcp__memory__read_graph();
// Create entity
mcp__memory__create_entities({
entities: [
{
name: 'FlyCrawler',
entityType: 'Project',
observations: ['Uses PostgreSQL', 'Express backend'],
},
],
});
```
---
## Filesystem MCP
### Tools
| Tool | Purpose |
| ---------------- | --------------------- |
| `read_text_file` | Read file contents |
| `write_file` | Write file |
| `edit_file` | Edit file |
| `list_directory` | List directory |
| `directory_tree` | Tree view |
| `search_files` | Find files by pattern |
### Usage Example
```typescript
// Read file
mcp__filesystem__read_text_file({ path: 'd:\\gitea\\project\\README.md' });
// List directory
mcp__filesystem__list_directory({ path: 'd:\\gitea\\project\\src' });
// Search for files
mcp__filesystem__search_files({
path: 'd:\\gitea\\project',
pattern: '**/*.test.ts',
});
```
---
## Troubleshooting MCP Servers
### Server Not Loading
1. **Check server name** - Avoid shared prefixes (e.g., `bugsink` and `bugsink-dev`)
2. **Use project-level `.mcp.json`** for localhost servers
3. **Restart Claude Code** after config changes
### Test Connection Manually
```bash
# Bugsink
export BUGSINK_URL=http://localhost:8000
export BUGSINK_TOKEN=<token>
node d:\gitea\bugsink-mcp\dist\index.js
# PostgreSQL
npx -y @modelcontextprotocol/server-postgres "postgresql://postgres:postgres@127.0.0.1:5432/flyer_crawler_dev"
```
### Check Claude Debug Logs
```
C:\Users\<username>\.claude\debug\*.txt
```
Look for "Starting connection" messages - missing server = never started.
---
## External API Integrations
### Gemini AI (Flyer Extraction)
| Config | Location |
| ------- | ---------------------------------------------- |
| API Key | `VITE_GOOGLE_GENAI_API_KEY` / `GEMINI_API_KEY` |
| Service | `src/services/flyerAiProcessor.server.ts` |
| Client | `@google/genai` package |
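For orientation, a minimal `@google/genai` call; the model name and prompt here are placeholders, and the project's real extraction logic lives in `flyerAiProcessor.server.ts`:
```typescript
import { GoogleGenAI } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Placeholder model and prompt - the actual flyer prompt is project-specific
const response = await ai.models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'Extract the deal items from this flyer...',
});
console.log(response.text);
```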
### Google OAuth
| Config | Location |
| ------------- | ------------------------ |
| Client ID | `GOOGLE_CLIENT_ID` |
| Client Secret | `GOOGLE_CLIENT_SECRET` |
| Service | `src/config/passport.ts` |
### GitHub OAuth
| Config | Location |
| ------------- | ------------------------------------------- |
| Client ID | `GH_CLIENT_ID` / `GITHUB_CLIENT_ID` |
| Client Secret | `GH_CLIENT_SECRET` / `GITHUB_CLIENT_SECRET` |
| Service | `src/config/passport.ts` |
### Google Maps (Geocoding)
| Config | Location |
| ------- | ----------------------------------------------- |
| API Key | `GOOGLE_MAPS_API_KEY` |
| Service | `src/services/googleGeocodingService.server.ts` |
### Nominatim (Fallback Geocoding)
| Config | Location |
| ------- | -------------------------------------------------- |
| URL | `https://nominatim.openstreetmap.org` |
| Service | `src/services/nominatimGeocodingService.server.ts` |
### Sentry (Error Tracking)
| Config | Location |
| -------------- | ------------------------------------------------- |
| DSN | `SENTRY_DSN` (server), `VITE_SENTRY_DSN` (client) |
| Auth Token | `SENTRY_AUTH_TOKEN` (source map upload) |
| Server Service | `src/services/sentry.server.ts` |
| Client Service | `src/services/sentry.client.ts` |
### SMTP (Email)
| Config | Location |
| ----------- | ------------------------------------- |
| Host | `SMTP_HOST` |
| Port | `SMTP_PORT` |
| Credentials | `SMTP_USER`, `SMTP_PASS` |
| Service | `src/services/emailService.server.ts` |
---
## Related Documentation
| Document | Purpose |
| -------------------------------- | ----------------------- |
| `BUGSINK-MCP-TROUBLESHOOTING.md` | MCP server issues |
| `POSTGRES-MCP-SETUP.md` | PostgreSQL MCP setup |
| `DEV-CONTAINER-BUGSINK.md` | Local Bugsink setup |
| `BUGSINK-SYNC.md` | Bugsink synchronization |

View File

@@ -0,0 +1,358 @@
# Tester Subagent Reference
## Critical Rule: Linux Only (ADR-014)
**ALL tests MUST run in the dev container.** Windows test results are unreliable.
| Result | Interpretation |
| ------------------------- | -------------------- |
| Pass Windows / Fail Linux | BROKEN - must fix |
| Fail Windows / Pass Linux | PASSING - acceptable |
---
## Test Commands
### From Windows Host (via Podman)
```bash
# Unit tests (~2900 tests) - pipe to file for AI processing
podman exec -it flyer-crawler-dev npm run test:unit 2>&1 | tee test-results.txt
# Integration tests (requires DB/Redis)
podman exec -it flyer-crawler-dev npm run test:integration
# E2E tests (requires all services)
podman exec -it flyer-crawler-dev npm run test:e2e
# Specific test file
podman exec -it flyer-crawler-dev npm test -- --run src/hooks/useAuth.test.tsx
# Type checking (CRITICAL before commit)
podman exec -it flyer-crawler-dev npm run type-check
# Coverage report
podman exec -it flyer-crawler-dev npm run test:coverage
```
### Inside Dev Container
```bash
npm test # All tests
npm run test:unit # Unit tests only
npm run test:integration # Integration tests
npm run test:e2e # E2E tests
npm run type-check # TypeScript check
```
---
## Test File Locations
| Test Type | Location | Config |
| ----------- | --------------------------------------------- | ------------------------------ |
| Unit | `src/**/*.test.ts`, `src/**/*.test.tsx` | `vite.config.ts` |
| Integration | `src/tests/integration/*.integration.test.ts` | `vitest.config.integration.ts` |
| E2E | `src/tests/e2e/*.e2e.test.ts` | `vitest.config.e2e.ts` |
| Setup | `src/tests/setup/*.ts` | - |
| Helpers | `src/tests/utils/*.ts` | - |
---
## Test Helpers
### Location: `src/tests/utils/`
| Helper | Purpose | Import |
| ----------------------- | --------------------------------------------------------------- | ----------------------------------- |
| `testHelpers.ts` | `createAndLoginUser()`, `getTestBaseUrl()`, `getFlyerBaseUrl()` | `../tests/utils/testHelpers` |
| `cleanup.ts` | `cleanupDb({ userIds, flyerIds })` | `../tests/utils/cleanup` |
| `mockFactories.ts` | `createMockStore()`, `createMockAddress()`, `createMockFlyer()` | `../tests/utils/mockFactories` |
| `storeHelpers.ts` | `createStoreWithLocation()`, `cleanupStoreLocations()` | `../tests/utils/storeHelpers` |
| `poll.ts` | `poll(fn, predicate, options)` - wait for async conditions | `../tests/utils/poll` |
| `mockLogger.ts` | Mock pino logger for tests | `../tests/utils/mockLogger` |
| `createTestApp.ts` | Create Express app instance for route tests | `../tests/utils/createTestApp` |
| `createMockRequest.ts` | Create mock Express request objects | `../tests/utils/createMockRequest` |
| `cleanupFiles.ts` | Clean up test file uploads | `../tests/utils/cleanupFiles` |
| `websocketTestUtils.ts` | WebSocket testing utilities | `../tests/utils/websocketTestUtils` |
### Usage Examples
```typescript
// Create authenticated user for tests
import { createAndLoginUser, TEST_PASSWORD } from '../tests/utils/testHelpers';
const { user, token } = await createAndLoginUser({
email: `test-${Date.now()}@example.com`,
request: request(app), // For integration tests
role: 'admin', // Optional: make admin
});
// Cleanup after tests
import { cleanupDb } from '../tests/utils/cleanup';
afterEach(async () => {
await cleanupDb({ userIds: [user.user.user_id] });
});
// Wait for async operation
import { poll } from '../tests/utils/poll';
await poll(
() => db.userRepo.findUserByEmail(email, logger),
(user) => !!user,
{ timeout: 5000, interval: 500, description: 'user to be findable' },
);
// Create mock data
import { createMockStore, createMockFlyer } from '../tests/utils/mockFactories';
const mockStore = createMockStore({ name: 'Test Store' });
const mockFlyer = createMockFlyer({ store_id: mockStore.store_id });
```
---
## Test Setup Files
| File | Purpose |
| --------------------------------------------- | ---------------------------------------- |
| `src/tests/setup/tests-setup-unit.ts` | Unit test setup (mocks, DOM environment) |
| `src/tests/setup/tests-setup-integration.ts` | Integration test setup (DB connections) |
| `src/tests/setup/global-setup.ts` | Global setup for unit tests |
| `src/tests/setup/integration-global-setup.ts` | Global setup for integration tests |
| `src/tests/setup/e2e-global-setup.ts` | Global setup for E2E tests |
| `src/tests/setup/mockHooks.ts` | React hook mocking utilities |
| `src/tests/setup/mockUI.ts` | UI component mocking |
| `src/tests/setup/globalApiMock.ts` | API mocking setup |
---
## Known Integration Test Issues
### 1. Vitest globalSetup Context Isolation
**Problem**: globalSetup runs in separate Node.js context. Singletons/mocks don't share instances.
**Affected**: BullMQ worker mocks
**Solution**: Use `.todo()`, test-only API endpoints, or Redis-based flags.
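For the `.todo()` route, a minimal Vitest sketch:
```typescript
import { describe, it } from 'vitest';

describe('worker-dependent behavior', () => {
  // The worker mock lives in a separate globalSetup context, so the
  // singleton cannot be observed from here - park the test as a todo.
  it.todo('marks the flyer as processed once the BullMQ worker runs');
});
```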
### 2. Cleanup Queue Race Condition
**Problem**: Cleanup worker processes jobs before test verification.
**Solution**:
```typescript
const { cleanupQueue } = await import('../../services/queues.server');
await cleanupQueue.drain();
await cleanupQueue.pause();
// ... run test ...
await cleanupQueue.resume();
```
### 3. Cache Stale After Direct SQL
**Problem**: Direct `pool.query()` bypasses cache invalidation.
**Solution**:
```typescript
await pool.query('INSERT INTO flyers ...');
await cacheService.invalidateFlyers(); // Add this
```
### 4. File Upload Filename Collisions
**Problem**: Multer predictable filenames cause race conditions.
**Solution**:
```typescript
const filename = `test-${Date.now()}-${Math.round(Math.random() * 1e9)}.jpg`;
```
### 5. Response Format Mismatches
**Problem**: API response structure changes (`data.jobId` vs `data.job.id`).
**Solution**: Log response bodies, update assertions to match actual format.
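A quick way to do that inside a failing test (the endpoint and envelope shape are illustrative):
```typescript
const res = await request(app).post('/api/ai/upload-and-process').send(payload);
// Inspect the actual envelope before fixing the assertion
console.log(JSON.stringify(res.body, null, 2));
expect(res.body.data.job.id).toBeDefined(); // was asserting res.body.data.jobId
```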
---
## Test Patterns
### Unit Test Pattern
```typescript
import { describe, it, expect, vi, beforeEach } from 'vitest';
describe('MyService', () => {
beforeEach(() => {
vi.clearAllMocks();
});
it('should do something', () => {
// Arrange
const input = { data: 'test' };
// Act
const result = myService.process(input);
// Assert
expect(result).toBe('expected');
});
});
```
### Integration Test Pattern
```typescript
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import request from 'supertest';
import { createAndLoginUser } from '../utils/testHelpers';
import { cleanupDb } from '../utils/cleanup';
describe('API Integration', () => {
let user, token;
beforeEach(async () => {
const result = await createAndLoginUser({ request: request(app) });
user = result.user;
token = result.token;
});
afterEach(async () => {
await cleanupDb({ userIds: [user.user.user_id] });
});
it('GET /api/resource returns data', async () => {
const res = await request(app).get('/api/resource').set('Authorization', `Bearer ${token}`);
expect(res.status).toBe(200);
expect(res.body.success).toBe(true);
});
});
```
### Route Test Pattern
```typescript
import { describe, it, expect, vi, beforeEach } from 'vitest';
import request from 'supertest';
import { createTestApp } from '../tests/utils/createTestApp';
import * as flyerRepo from '../services/db/flyer.db';
vi.mock('../services/db/flyer.db', () => ({
  getFlyerById: vi.fn(),
}));
describe('Flyer Routes', () => {
let app;
beforeEach(() => {
vi.clearAllMocks();
app = createTestApp();
});
it('GET /api/flyers/:id returns flyer', async () => {
const mockFlyer = { id: '123', name: 'Test' };
vi.mocked(flyerRepo.getFlyerById).mockResolvedValue(mockFlyer);
const res = await request(app).get('/api/flyers/123');
expect(res.status).toBe(200);
expect(res.body.data).toEqual(mockFlyer);
});
});
```
---
## Mocking Patterns
### Mock Modules
```typescript
// At top of test file
vi.mock('../services/db/flyer.db', () => ({
getFlyerById: vi.fn(),
listFlyers: vi.fn(),
}));
// In test
import * as flyerDb from '../services/db/flyer.db';
vi.mocked(flyerDb.getFlyerById).mockResolvedValue(mockFlyer);
```
### Mock React Query
```typescript
vi.mock('@tanstack/react-query', async () => {
const actual = await vi.importActual('@tanstack/react-query');
return {
...actual,
useQuery: vi.fn().mockReturnValue({
data: mockData,
isLoading: false,
error: null,
}),
};
});
```
### Mock Pino Logger
```typescript
import { createMockLogger } from '../tests/utils/mockLogger';
const mockLogger = createMockLogger();
```
---
## Testing React Components
```typescript
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { BrowserRouter } from 'react-router-dom';
const createWrapper = () => {
const queryClient = new QueryClient({
defaultOptions: { queries: { retry: false } },
});
return ({ children }) => (
<QueryClientProvider client={queryClient}>
<BrowserRouter>{children}</BrowserRouter>
</QueryClientProvider>
);
};
it('renders component', () => {
render(<MyComponent />, { wrapper: createWrapper() });
expect(screen.getByText('Expected Text')).toBeInTheDocument();
});
```
---
## Test Coverage
```bash
# Generate coverage report
podman exec -it flyer-crawler-dev npm run test:coverage
# View HTML report
# Coverage reports generated in coverage/ directory
```
---
## Debugging Tests
```bash
# Verbose output
npm test -- --reporter=verbose
# Run single test with debugging
DEBUG=* npm test -- --run src/path/to/test.test.ts
# Vitest UI (interactive)
npm run test:ui
```

View File

@@ -4,6 +4,8 @@
**Status**: Accepted
**Implemented**: 2026-01-07
## Context
Our application has experienced a recurring pattern of bugs and brittle tests related to error handling, specifically for "resource not found" scenarios. The root causes identified are:
@@ -41,3 +43,86 @@ We will adopt a strict, consistent error-handling contract for the service and r
**Initial Refactoring**: Requires a one-time effort to audit and refactor all existing repository methods to conform to this new standard.
**Convention Adherence**: Developers must be aware of and adhere to this convention. This ADR serves as the primary documentation for this pattern.
## Implementation Details
### Custom Error Types
All custom errors are defined in `src/services/db/errors.db.ts`:
| Error Class | HTTP Status | PostgreSQL Code | Use Case |
| -------------------------------- | ----------- | --------------- | ------------------------------- |
| `NotFoundError` | 404 | - | Resource not found |
| `UniqueConstraintError` | 409 | 23505 | Duplicate key violation |
| `ForeignKeyConstraintError` | 400 | 23503 | Referenced record doesn't exist |
| `NotNullConstraintError` | 400 | 23502 | Required field is null |
| `CheckConstraintError` | 400 | 23514 | Check constraint violated |
| `InvalidTextRepresentationError` | 400 | 22P02 | Invalid data type format |
| `NumericValueOutOfRangeError` | 400 | 22003 | Numeric overflow |
| `ValidationError` | 400 | - | Request validation failed |
| `ForbiddenError` | 403 | - | Access denied |
### Error Handler Middleware
The centralized error handler in `src/middleware/errorHandler.ts` (sketched below):
1. Catches all errors from route handlers
2. Maps custom error types to HTTP status codes
3. Logs errors with appropriate severity (warn for 4xx, error for 5xx)
4. Returns consistent JSON error responses
5. Includes error ID for server errors (for support correlation)
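A minimal sketch of that flow, assuming the error classes above (the real middleware also attaches the error ID and fuller logging):
```typescript
import type { Request, Response, NextFunction } from 'express';
import { NotFoundError, ForbiddenError, ValidationError } from '../services/db/errors.db';

export function errorHandler(err: Error, req: Request, res: Response, _next: NextFunction) {
  // 4xx: expected client errors, logged at warn
  if (err instanceof NotFoundError) return res.status(404).json({ message: err.message });
  if (err instanceof ForbiddenError) return res.status(403).json({ message: err.message });
  if (err instanceof ValidationError) return res.status(400).json({ message: 'The request data is invalid.' });
  // 5xx: unexpected server errors, logged at error
  req.log.error({ err }, 'Unhandled error');
  return res.status(500).json({ message: 'Internal server error' });
}
```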
### Usage Pattern
```typescript
// In repository (throws NotFoundError)
async function getUserById(id: number): Promise<User> {
const result = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
if (result.rows.length === 0) {
throw new NotFoundError(`User with ID ${id} not found.`);
}
return result.rows[0];
}
// In route handler (simple try/catch)
router.get('/:id', async (req, res, next) => {
try {
const user = await getUserById(req.params.id);
res.json(user);
} catch (error) {
next(error); // errorHandler maps NotFoundError to 404
}
});
```
### Centralized Error Handler Helper
The `handleDbError` function in `src/services/db/errors.db.ts` provides centralized PostgreSQL error handling:
```typescript
import { handleDbError } from './errors.db';
try {
await pool.query('INSERT INTO users (email) VALUES ($1)', [email]);
} catch (error) {
handleDbError(
error,
logger,
'Failed to create user',
{ email },
{
uniqueMessage: 'A user with this email already exists.',
defaultMessage: 'Failed to create user.',
},
);
}
```
## Key Files
- `src/services/db/errors.db.ts` - Custom error classes and `handleDbError` utility
- `src/middleware/errorHandler.ts` - Centralized Express error handling middleware
## Related ADRs
- [ADR-034](./0034-repository-pattern-standards.md) - Repository Pattern Standards (extends this ADR)

View File

@@ -2,7 +2,9 @@
**Date**: 2025-12-12
**Status**: Accepted
**Implemented**: 2026-01-07
## Context
@@ -58,3 +60,109 @@ async function registerUserAndCreateDefaultList(userData) {
**Learning Curve**: Developers will need to learn and adopt the `withTransaction` pattern for all transactional database work.
**Refactoring Effort**: Existing methods that manually manage transactions (`createUser`, `createBudget`, etc.) will need to be refactored to use the new pattern.
## Implementation Details
### The `withTransaction` Helper
Located in `src/services/db/connection.db.ts`:
```typescript
export async function withTransaction<T>(callback: (client: PoolClient) => Promise<T>): Promise<T> {
const client = await getPool().connect();
try {
await client.query('BEGIN');
const result = await callback(client);
await client.query('COMMIT');
return result;
} catch (error) {
await client.query('ROLLBACK');
logger.error({ err: error }, 'Transaction failed, rolling back.');
throw error;
} finally {
client.release();
}
}
```
### Repository Pattern for Transaction Support
Repository methods accept an optional `PoolClient` parameter:
```typescript
// Function-based approach
export async function createUser(userData: CreateUserInput, client?: PoolClient): Promise<User> {
const queryable = client || getPool();
const result = await queryable.query<User>(
'INSERT INTO users (email, password_hash) VALUES ($1, $2) RETURNING *',
[userData.email, userData.passwordHash],
);
return result.rows[0];
}
```
### Transactional Service Example
```typescript
// src/services/authService.ts
import { withTransaction } from './db/connection.db';
import { createUser, createProfile } from './db';
export async function registerUserWithProfile(
email: string,
password: string,
profileData: ProfileInput,
): Promise<UserWithProfile> {
return withTransaction(async (client) => {
// All operations use the same transactional client
const user = await createUser({ email, password }, client);
const profile = await createProfile(
{
userId: user.user_id,
...profileData,
},
client,
);
return { user, profile };
});
}
```
### Services Using `withTransaction`
| Service | Function | Operations |
| ------------------------- | ----------------------- | ----------------------------------- |
| `authService` | `registerAndLoginUser` | Create user + profile + preferences |
| `userService` | `updateUserWithProfile` | Update user + profile atomically |
| `flyerPersistenceService` | `saveFlyer` | Create flyer + items + metadata |
| `shoppingService` | `createListWithItems` | Create list + initial items |
| `gamificationService` | `awardAchievement` | Create achievement + update points |
### Connection Pool Configuration
```typescript
const poolConfig: PoolConfig = {
max: 20, // Max clients in pool
idleTimeoutMillis: 30000, // Close idle clients after 30s
connectionTimeoutMillis: 2000, // Fail connect after 2s
};
```
### Pool Status Monitoring
```typescript
import { getPoolStatus } from './db/connection.db';
const status = getPoolStatus();
// { totalCount: 20, idleCount: 15, waitingCount: 0 }
```
## Key Files
- `src/services/db/connection.db.ts` - `getPool()`, `withTransaction()`, `getPoolStatus()`
## Related ADRs
- [ADR-001](./0001-standardized-error-handling.md) - Error handling within transactions
- [ADR-034](./0034-repository-pattern-standards.md) - Repository patterns for transaction participation

View File

@@ -2,7 +2,9 @@
**Date**: 2025-12-12
**Status**: Accepted
**Implemented**: 2026-01-07
## Context
@@ -77,3 +79,140 @@ router.get('/:id', validateRequest(getFlyerSchema), async (req, res, next) => {
**New Dependency**: Introduces `zod` as a new project dependency.
**Learning Curve**: Developers need to learn the `zod` schema definition syntax.
**Refactoring Effort**: Requires a one-time effort to create schemas and refactor all existing routes to use the `validateRequest` middleware.
## Implementation Details
### The `validateRequest` Middleware
Located in `src/middleware/validation.middleware.ts`:
```typescript
export const validateRequest =
(schema: ZodObject<z.ZodRawShape>) => async (req: Request, res: Response, next: NextFunction) => {
try {
const { params, query, body } = await schema.parseAsync({
params: req.params,
query: req.query,
body: req.body,
});
// Merge parsed data back into request
Object.keys(req.params).forEach((key) => delete req.params[key]);
Object.assign(req.params, params);
Object.keys(req.query).forEach((key) => delete req.query[key]);
Object.assign(req.query, query);
req.body = body;
return next();
} catch (error) {
if (error instanceof ZodError) {
const validationIssues = error.issues.map((issue) => ({
...issue,
path: issue.path.map((p) => String(p)),
}));
return next(new ValidationError(validationIssues));
}
return next(error);
}
};
```
### Common Zod Patterns
```typescript
import { z } from 'zod';
import { requiredString } from '../utils/zodUtils';
// String that coerces to positive integer (for ID params)
const idParam = z.string().pipe(z.coerce.number().int().positive());
// Pagination query params with defaults
const paginationQuery = z.object({
limit: z.coerce.number().int().positive().max(100).default(20),
offset: z.coerce.number().int().nonnegative().default(0),
});
// Email with sanitization
const emailSchema = z.string().trim().toLowerCase().email('A valid email is required.');
// Password with strength validation
const passwordSchema = z
.string()
.trim()
.min(8, 'Password must be at least 8 characters long.')
.superRefine((password, ctx) => {
const strength = validatePasswordStrength(password);
if (!strength.isValid) ctx.addIssue({ code: 'custom', message: strength.feedback });
});
// Optional string that converts empty string to undefined
const optionalString = z.preprocess(
(val) => (val === '' ? undefined : val),
z.string().trim().optional(),
);
```
### Routes Using `validateRequest`
All API routes use the validation middleware:
| Router | Schemas Defined | Validated Endpoints |
| ------------------------ | --------------- | -------------------------------------------------------------------------------- |
| `auth.routes.ts` | 5 | `/register`, `/login`, `/forgot-password`, `/reset-password`, `/change-password` |
| `user.routes.ts` | 4 | `/profile`, `/address`, `/preferences`, `/notifications` |
| `flyer.routes.ts` | 6 | `GET /:id`, `GET /`, `GET /:id/items`, `DELETE /:id` |
| `budget.routes.ts` | 5 | `/`, `/:id`, `/batch`, `/categories` |
| `recipe.routes.ts` | 4 | `GET /`, `GET /:id`, `POST /`, `PATCH /:id` |
| `admin.routes.ts` | 8 | Various admin endpoints |
| `ai.routes.ts` | 3 | `/upload-and-process`, `/analyze`, `/jobs/:jobId/status` |
| `gamification.routes.ts` | 3 | `/achievements`, `/leaderboard`, `/points` |
### Validation Error Response Format
When validation fails, the `errorHandler` returns:
```json
{
"message": "The request data is invalid.",
"errors": [
{
"path": ["body", "email"],
"message": "A valid email is required."
},
{
"path": ["body", "password"],
"message": "Password must be at least 8 characters long."
}
]
}
```
HTTP Status: `400 Bad Request`
### Zod Utility Functions
Located in `src/utils/zodUtils.ts`:
```typescript
// String that rejects empty strings
export const requiredString = (message?: string) =>
z.string().min(1, message || 'This field is required.');
// Number from string with validation
export const numericString = z.string().pipe(z.coerce.number());
// Boolean from string ('true'/'false')
export const booleanString = z.enum(['true', 'false']).transform((v) => v === 'true');
```
## Key Files
- `src/middleware/validation.middleware.ts` - The `validateRequest` middleware
- `src/services/db/errors.db.ts` - `ValidationError` class definition
- `src/middleware/errorHandler.ts` - Error formatting for validation errors
- `src/utils/zodUtils.ts` - Reusable Zod schema utilities
## Related ADRs
- [ADR-001](./0001-standardized-error-handling.md) - Error handling for validation errors
- [ADR-032](./0032-rate-limiting-strategy.md) - Rate limiting applied alongside validation

View File

@@ -2,7 +2,9 @@
**Date**: 2025-12-12
**Status**: Accepted
**Implemented**: 2026-01-07
## Context
@@ -84,3 +86,219 @@ router.get('/:id', async (req, res, next) => {
**Refactoring Effort**: Requires adding the `requestLogger` middleware and refactoring all routes and services to use `req.log` instead of the global `logger`.
**Slight Performance Overhead**: Creating a child logger for every request adds a minor performance cost, though this is negligible for most modern logging libraries.
## Implementation Details
### Logger Configuration
Located in `src/services/logger.server.ts`:
```typescript
import pino from 'pino';
const isProduction = process.env.NODE_ENV === 'production';
const isTest = process.env.NODE_ENV === 'test';
export const logger = pino({
level: isProduction ? 'info' : 'debug',
transport:
isProduction || isTest
? undefined
: {
target: 'pino-pretty',
options: {
colorize: true,
translateTime: 'SYS:standard',
ignore: 'pid,hostname',
},
},
redact: {
paths: [
'req.headers.authorization',
'req.headers.cookie',
'*.body.password',
'*.body.newPassword',
'*.body.currentPassword',
'*.body.confirmPassword',
'*.body.refreshToken',
'*.body.token',
],
censor: '[REDACTED]',
},
});
```
### Request Logger Middleware
Located in `server.ts`:
```typescript
const requestLogger = (req: Request, res: Response, next: NextFunction) => {
const requestId = randomUUID();
const user = req.user as UserProfile | undefined;
const start = process.hrtime();
// Create request-scoped logger
req.log = logger.child({
request_id: requestId,
user_id: user?.user.user_id,
ip_address: req.ip,
});
req.log.debug({ method: req.method, originalUrl: req.originalUrl }, 'INCOMING');
res.on('finish', () => {
const duration = getDurationInMilliseconds(start);
const { statusCode, statusMessage } = res;
const logDetails: Record<string, unknown> = {
user_id: (req.user as UserProfile | undefined)?.user.user_id,
method: req.method,
originalUrl: req.originalUrl,
statusCode,
statusMessage,
duration: duration.toFixed(2),
};
// Include request details for failed requests (for debugging)
if (statusCode >= 400) {
logDetails.req = { headers: req.headers, body: req.body };
}
if (statusCode >= 500) req.log.error(logDetails, 'Request completed with server error');
else if (statusCode >= 400) req.log.warn(logDetails, 'Request completed with client error');
else req.log.info(logDetails, 'Request completed successfully');
});
next();
};
app.use(requestLogger);
```
### TypeScript Support
The `req.log` property is typed via declaration merging in `src/types/express.d.ts`:
```typescript
import { Logger } from 'pino';
declare global {
namespace Express {
export interface Request {
log: Logger;
}
}
}
```
### Automatic Sensitive Data Redaction
The Pino logger automatically redacts sensitive fields:
```json
// Before redaction
{
"body": {
"email": "user@example.com",
"password": "secret123",
"newPassword": "newsecret456"
}
}
// After redaction (in logs)
{
"body": {
"email": "user@example.com",
"password": "[REDACTED]",
"newPassword": "[REDACTED]"
}
}
```
### Log Levels by Scenario
| Level | HTTP Status | Scenario |
| ----- | ----------- | -------------------------------------------------- |
| DEBUG | Any | Request incoming, internal state, development info |
| INFO | 2xx | Successful requests, business events |
| WARN | 4xx | Client errors, validation failures, not found |
| ERROR | 5xx | Server errors, unhandled exceptions |
### Service Layer Logging
Services accept the request-scoped logger as an optional parameter:
```typescript
export async function registerUser(email: string, password: string, reqLog?: Logger) {
const log = reqLog || logger; // Fall back to global logger
log.info({ email }, 'Registering new user');
// ... implementation
log.debug({ userId: user.user_id }, 'User created successfully');
return user;
}
// In route handler
router.post('/register', async (req, res, next) => {
  try {
    const user = await authService.registerUser(req.body.email, req.body.password, req.log);
    res.status(201).json(user);
  } catch (error) {
    next(error);
  }
});
```
### Log Output Format
**Development** (pino-pretty):
```text
[2026-01-09 12:34:56.789] INFO (request_id=abc123): Request completed successfully
method: "GET"
originalUrl: "/api/flyers"
statusCode: 200
duration: "45.23"
```
**Production** (JSON):
```json
{
  "level": 30,
  "time": 1704812096789,
  "request_id": "abc123",
  "user_id": "user_456",
  "ip_address": "192.168.1.1",
  "method": "GET",
  "originalUrl": "/api/flyers",
  "statusCode": 200,
  "duration": "45.23",
  "msg": "Request completed successfully"
}
```
### Routes Using `req.log`
All route files have been migrated to use the request-scoped logger:
- `src/routes/auth.routes.ts`
- `src/routes/user.routes.ts`
- `src/routes/flyer.routes.ts`
- `src/routes/ai.routes.ts`
- `src/routes/admin.routes.ts`
- `src/routes/budget.routes.ts`
- `src/routes/recipe.routes.ts`
- `src/routes/gamification.routes.ts`
- `src/routes/personalization.routes.ts`
- `src/routes/stats.routes.ts`
- `src/routes/health.routes.ts`
- `src/routes/system.routes.ts`
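Each of these files follows the same pattern; a minimal sketch (the route path, service, and handler here are illustrative, not copied from the codebase):
```typescript
// Hypothetical handler showing the request-scoped logger in a route file.
router.get('/flyers/:id', async (req, res, next) => {
  try {
    req.log.debug({ flyerId: req.params.id }, 'Fetching flyer');
    const flyer = await flyerService.getFlyerById(req.params.id, req.log);
    res.json(flyer);
  } catch (err) {
    req.log.error({ err }, 'Failed to fetch flyer');
    next(err);
  }
});
```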
## Key Files
- `src/services/logger.server.ts` - Pino logger configuration
- `src/services/logger.client.ts` - Client-side logger (for frontend)
- `src/types/express.d.ts` - TypeScript declaration for `req.log`
- `server.ts` - Request logger middleware
## Related ADRs
- [ADR-001](./0001-standardized-error-handling.md) - Error handler uses `req.log` for error logging
- [ADR-026](./0026-standardized-client-side-structured-logging.md) - Client-side logging strategy

View File

@@ -1,8 +1,9 @@
# ADR-005: Frontend State Management and Server Cache Strategy
**Date**: 2025-12-12
**Implementation Date**: 2026-01-08
**Status**: Proposed
**Status**: Accepted and Fully Implemented (Phases 1-8 complete, 100% coverage)
## Context
@@ -16,3 +17,228 @@ We will adopt a dedicated library for managing server state, such as **TanStack
**Positive**: Leads to a more performant, predictable, and simpler frontend codebase. Standardizes how the client-side communicates with the server and handles loading/error states. Improves user experience through intelligent caching.
**Negative**: Introduces a new frontend dependency. Requires a learning curve for developers unfamiliar with the library. Requires refactoring of existing data-fetching logic.
## Implementation Status
### Phase 1: Infrastructure & Core Queries (✅ Complete - 2026-01-08)
**Files Created:**
- [src/config/queryClient.ts](../../src/config/queryClient.ts) - Global QueryClient configuration
- [src/hooks/queries/useFlyersQuery.ts](../../src/hooks/queries/useFlyersQuery.ts) - Flyers data query
- [src/hooks/queries/useWatchedItemsQuery.ts](../../src/hooks/queries/useWatchedItemsQuery.ts) - Watched items query
- [src/hooks/queries/useShoppingListsQuery.ts](../../src/hooks/queries/useShoppingListsQuery.ts) - Shopping lists query
**Files Modified:**
- [src/providers/AppProviders.tsx](../../src/providers/AppProviders.tsx) - Added QueryClientProvider wrapper
- [src/providers/FlyersProvider.tsx](../../src/providers/FlyersProvider.tsx) - Refactored to use TanStack Query
- [src/providers/UserDataProvider.tsx](../../src/providers/UserDataProvider.tsx) - Refactored to use TanStack Query
- [src/services/apiClient.ts](../../src/services/apiClient.ts) - Added pagination params to fetchFlyers
**Benefits Achieved:**
- ✅ Removed ~150 lines of custom state management code
- ✅ Automatic caching of server data
- ✅ Background refetching for stale data
- ✅ React Query Devtools available in development
- ✅ Automatic data invalidation on user logout
- ✅ Better error handling and loading states
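The query hooks created in this phase share a common shape; a minimal sketch of the pattern (the `fetchFlyers` signature and stale time here are assumptions, not the exact hook):
```typescript
import { useQuery } from '@tanstack/react-query';
import { fetchFlyers } from '../../services/apiClient';

// Sketch of the shared query-hook pattern used by useFlyersQuery and friends.
export function useFlyersQuery(limit = 20, offset = 0) {
  return useQuery({
    // Pagination params are part of the key, so each page is cached separately.
    queryKey: ['flyers', limit, offset],
    queryFn: () => fetchFlyers(limit, offset),
    staleTime: 2 * 60 * 1000, // flyers are listed with a 2-minute stale time (ADR-011)
  });
}
```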
### Phase 2: Remaining Queries (✅ Complete - 2026-01-08)
**Files Created:**
- [src/hooks/queries/useMasterItemsQuery.ts](../../src/hooks/queries/useMasterItemsQuery.ts) - Master grocery items query
- [src/hooks/queries/useFlyerItemsQuery.ts](../../src/hooks/queries/useFlyerItemsQuery.ts) - Flyer items query
**Files Modified:**
- [src/providers/MasterItemsProvider.tsx](../../src/providers/MasterItemsProvider.tsx) - Refactored to use TanStack Query
- [src/hooks/useFlyerItems.ts](../../src/hooks/useFlyerItems.ts) - Refactored to use TanStack Query
**Benefits Achieved:**
- ✅ Removed additional ~50 lines of custom state management code
- ✅ Per-flyer item caching (items cached separately for each flyer)
- ✅ Longer cache times for infrequently changing data (master items)
- ✅ Automatic query disabling when dependencies are not met
### Phase 3: Mutations (✅ Complete - 2026-01-08)
**Files Created:**
- [src/hooks/mutations/useAddWatchedItemMutation.ts](../../src/hooks/mutations/useAddWatchedItemMutation.ts) - Add watched item mutation
- [src/hooks/mutations/useRemoveWatchedItemMutation.ts](../../src/hooks/mutations/useRemoveWatchedItemMutation.ts) - Remove watched item mutation
- [src/hooks/mutations/useCreateShoppingListMutation.ts](../../src/hooks/mutations/useCreateShoppingListMutation.ts) - Create shopping list mutation
- [src/hooks/mutations/useDeleteShoppingListMutation.ts](../../src/hooks/mutations/useDeleteShoppingListMutation.ts) - Delete shopping list mutation
- [src/hooks/mutations/useAddShoppingListItemMutation.ts](../../src/hooks/mutations/useAddShoppingListItemMutation.ts) - Add shopping list item mutation
- [src/hooks/mutations/useUpdateShoppingListItemMutation.ts](../../src/hooks/mutations/useUpdateShoppingListItemMutation.ts) - Update shopping list item mutation
- [src/hooks/mutations/useRemoveShoppingListItemMutation.ts](../../src/hooks/mutations/useRemoveShoppingListItemMutation.ts) - Remove shopping list item mutation
- [src/hooks/mutations/index.ts](../../src/hooks/mutations/index.ts) - Barrel export for all mutation hooks
**Benefits Achieved:**
- ✅ Standardized mutation pattern across all data modifications
- ✅ Automatic cache invalidation after successful mutations
- ✅ Built-in success/error notifications
- ✅ Consistent error handling
- ✅ Full TypeScript type safety
- ✅ Comprehensive documentation with usage examples
**See**: [plans/adr-0005-phase-3-summary.md](../../plans/adr-0005-phase-3-summary.md) for detailed documentation
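A minimal sketch of the mutation pattern these hooks standardize (the `addWatchedItem` API call is an assumed export; notifications are omitted):
```typescript
import { useMutation, useQueryClient } from '@tanstack/react-query';
import { addWatchedItem } from '../../services/apiClient'; // assumed export

// Sketch: run the mutation, then invalidate the affected query key.
export function useAddWatchedItemMutation() {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: (masterItemId: number) => addWatchedItem(masterItemId),
    onSuccess: () => {
      // Automatic cache invalidation: watched items are refetched after any change.
      queryClient.invalidateQueries({ queryKey: ['watchedItems'] });
    },
  });
}
```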
### Phase 4: Hook Refactoring (✅ Complete)
**Goal:** Refactor user-facing hooks to use TanStack Query mutation hooks.
**Files Modified:**
- [src/hooks/useWatchedItems.tsx](../../src/hooks/useWatchedItems.tsx) - Refactored to use mutation hooks
- [src/hooks/useShoppingLists.tsx](../../src/hooks/useShoppingLists.tsx) - Refactored to use mutation hooks
- [src/contexts/UserDataContext.ts](../../src/contexts/UserDataContext.ts) - Clean read-only interface (no setters)
- [src/providers/UserDataProvider.tsx](../../src/providers/UserDataProvider.tsx) - Uses query hooks, no setter stubs
**Benefits Achieved:**
- ✅ Both hooks now use TanStack Query mutations
- ✅ Automatic cache invalidation after mutations
- ✅ Consistent error handling via mutation hooks
- ✅ Clean context interface (read-only server state)
- ✅ Backward compatible API for hook consumers
### Phase 5: Admin Features (✅ Complete)
**Goal:** Create query hooks for admin features.
**Files Created:**
- [src/hooks/queries/useActivityLogQuery.ts](../../src/hooks/queries/useActivityLogQuery.ts) - Activity log with pagination
- [src/hooks/queries/useApplicationStatsQuery.ts](../../src/hooks/queries/useApplicationStatsQuery.ts) - Application statistics
- [src/hooks/queries/useSuggestedCorrectionsQuery.ts](../../src/hooks/queries/useSuggestedCorrectionsQuery.ts) - Corrections data
- [src/hooks/queries/useCategoriesQuery.ts](../../src/hooks/queries/useCategoriesQuery.ts) - Categories (public endpoint)
**Components Migrated:**
- [src/pages/admin/ActivityLog.tsx](../../src/pages/admin/ActivityLog.tsx) - Uses useActivityLogQuery
- [src/pages/admin/AdminStatsPage.tsx](../../src/pages/admin/AdminStatsPage.tsx) - Uses useApplicationStatsQuery
- [src/pages/admin/CorrectionsPage.tsx](../../src/pages/admin/CorrectionsPage.tsx) - Uses useSuggestedCorrectionsQuery, useMasterItemsQuery, useCategoriesQuery
**Benefits Achieved:**
- ✅ Automatic caching of admin data
- ✅ Parallel fetching (CorrectionsPage fetches 3 queries simultaneously)
- ✅ Consistent stale times (30s to 2 min based on data volatility)
- ✅ Shared cache across components (useMasterItemsQuery reused)
### Phase 6: Analytics Features (✅ Complete - 2026-01-10)
**Goal:** Migrate analytics and deals features.
**Files Created:**
- [src/hooks/queries/useBestSalePricesQuery.ts](../../src/hooks/queries/useBestSalePricesQuery.ts) - Best sale prices for watched items
- [src/hooks/queries/useFlyerItemsForFlyersQuery.ts](../../src/hooks/queries/useFlyerItemsForFlyersQuery.ts) - Batch fetch items for multiple flyers
- [src/hooks/queries/useFlyerItemCountQuery.ts](../../src/hooks/queries/useFlyerItemCountQuery.ts) - Count items across flyers
**Files Modified:**
- [src/pages/MyDealsPage.tsx](../../src/pages/MyDealsPage.tsx) - Now uses useBestSalePricesQuery
- [src/hooks/useActiveDeals.tsx](../../src/hooks/useActiveDeals.tsx) - Refactored to use TanStack Query hooks
**Benefits Achieved:**
- ✅ Removed useApi dependency from analytics features
- ✅ Automatic caching of deal data (2-5 minute stale times)
- ✅ Consistent error handling via TanStack Query
- ✅ Batch fetching for flyer items (single query for multiple flyers)
### Phase 7: Cleanup (✅ Complete - 2026-01-10)
**Goal:** Remove legacy hooks once migration is complete.
**Files Created:**
- [src/hooks/queries/useUserAddressQuery.ts](../../src/hooks/queries/useUserAddressQuery.ts) - User address fetching
- [src/hooks/queries/useAuthProfileQuery.ts](../../src/hooks/queries/useAuthProfileQuery.ts) - Auth profile fetching
- [src/hooks/mutations/useGeocodeMutation.ts](../../src/hooks/mutations/useGeocodeMutation.ts) - Address geocoding
**Files Modified:**
- [src/hooks/useProfileAddress.ts](../../src/hooks/useProfileAddress.ts) - Refactored to use TanStack Query
- [src/providers/AuthProvider.tsx](../../src/providers/AuthProvider.tsx) - Refactored to use TanStack Query
**Files Removed:**
- ~~src/hooks/useApi.ts~~ - Legacy hook removed
- ~~src/hooks/useApi.test.ts~~ - Test file removed
- ~~src/hooks/useApiOnMount.ts~~ - Legacy hook removed
- ~~src/hooks/useApiOnMount.test.ts~~ - Test file removed
**Benefits Achieved:**
- ✅ Removed all legacy `useApi` and `useApiOnMount` hooks
- ✅ Complete TanStack Query coverage for all data fetching
- ✅ Consistent error handling across the entire application
- ✅ Unified caching strategy for all server state
### Phase 8: Additional Component Migration (✅ Complete - 2026-01-10)
**Goal:** Migrate remaining components with manual data fetching to TanStack Query.
**Files Created:**
- [src/hooks/queries/useUserProfileDataQuery.ts](../../src/hooks/queries/useUserProfileDataQuery.ts) - Combined user profile + achievements query
- [src/hooks/queries/useLeaderboardQuery.ts](../../src/hooks/queries/useLeaderboardQuery.ts) - Public leaderboard data
- [src/hooks/queries/usePriceHistoryQuery.ts](../../src/hooks/queries/usePriceHistoryQuery.ts) - Historical price data for watched items
**Files Modified:**
- [src/hooks/useUserProfileData.ts](../../src/hooks/useUserProfileData.ts) - Refactored to use useUserProfileDataQuery
- [src/components/Leaderboard.tsx](../../src/components/Leaderboard.tsx) - Refactored to use useLeaderboardQuery
- [src/features/charts/PriceHistoryChart.tsx](../../src/features/charts/PriceHistoryChart.tsx) - Refactored to use usePriceHistoryQuery
**Benefits Achieved:**
- ✅ Parallel fetching for profile + achievements data
- ✅ Public leaderboard cached with 2-minute stale time
- ✅ Price history cached with 10-minute stale time (data changes infrequently)
- ✅ Backward-compatible setProfile function via queryClient.setQueryData
- ✅ Stable query keys with sorted IDs for price history
## Migration Status
Current Coverage: **100% complete**
| Category | Total | Migrated | Status |
| ----------------------------- | ----- | -------- | ------- |
| Query Hooks (User) | 7 | 7 | ✅ 100% |
| Query Hooks (Admin) | 4 | 4 | ✅ 100% |
| Query Hooks (Analytics) | 3 | 3 | ✅ 100% |
| Query Hooks (Phase 8) | 3 | 3 | ✅ 100% |
| Mutation Hooks | 8 | 8 | ✅ 100% |
| User Hooks | 2 | 2 | ✅ 100% |
| Analytics Features | 2 | 2 | ✅ 100% |
| Component Migration (Phase 8) | 3 | 3 | ✅ 100% |
| Legacy Hook Cleanup | 4 | 4 | ✅ 100% |
**Completed:**
- ✅ Core query hooks (flyers, flyerItems, masterItems, watchedItems, shoppingLists)
- ✅ Admin query hooks (activityLog, applicationStats, suggestedCorrections, categories)
- ✅ Analytics query hooks (bestSalePrices, flyerItemsForFlyers, flyerItemCount)
- ✅ Auth/Profile query hooks (authProfile, userAddress)
- ✅ Phase 8 query hooks (userProfileData, leaderboard, priceHistory)
- ✅ All mutation hooks (watched items, shopping lists, geocode)
- ✅ Provider refactoring (AppProviders, FlyersProvider, MasterItemsProvider, UserDataProvider, AuthProvider)
- ✅ User hooks refactoring (useWatchedItems, useShoppingLists, useProfileAddress, useUserProfileData)
- ✅ Admin component migration (ActivityLog, AdminStatsPage, CorrectionsPage)
- ✅ Analytics features (MyDealsPage, useActiveDeals)
- ✅ Component migration (Leaderboard, PriceHistoryChart)
- ✅ Legacy hooks removed (useApi, useApiOnMount)
See [plans/adr-0005-master-migration-status.md](../../plans/adr-0005-master-migration-status.md) for complete tracking of all components.
## Implementation Guide
See [plans/adr-0005-implementation-plan.md](../../plans/adr-0005-implementation-plan.md) for detailed implementation steps.

View File

@@ -2,7 +2,7 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
## Context
@@ -16,3 +16,82 @@ We will implement a dedicated background job processing system using a task queu
**Positive**: Decouples the API from heavy processing, allows for retries on failure, and enables scaling the processing workers independently. Increases application reliability and resilience.
**Negative**: Introduces a new dependency (Redis) into the infrastructure. Requires refactoring of the flyer processing logic to work within a job queue structure.
## Implementation Details
### Queue Infrastructure
The implementation uses **BullMQ v5.65.1** with **ioredis v5.8.2** for Redis connectivity. Six distinct queues handle different job types:
| Queue Name | Purpose | Retry Attempts | Backoff Strategy |
| ---------------------------- | --------------------------- | -------------- | ---------------------- |
| `flyer-processing` | OCR/AI processing of flyers | 3 | Exponential (5s base) |
| `email-sending` | Email delivery | 5 | Exponential (10s base) |
| `analytics-reporting` | Daily report generation | 2 | Exponential (60s base) |
| `weekly-analytics-reporting` | Weekly report generation | 2 | Exponential (1h base) |
| `file-cleanup` | Temporary file cleanup | 3 | Exponential (30s base) |
| `token-cleanup` | Expired token removal | 2 | Exponential (1h base) |
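As a sketch of how one of these queues is declared in BullMQ (connection handling simplified; see `src/services/queues.server.ts` for the real definitions):
```typescript
import { Queue } from 'bullmq';
import IORedis from 'ioredis';

const connection = new IORedis(process.env.REDIS_URL ?? 'redis://localhost:6379', {
  maxRetriesPerRequest: null, // required by BullMQ for blocking connections
});

// flyer-processing: 3 attempts with exponential backoff from a 5-second base.
export const flyerQueue = new Queue('flyer-processing', {
  connection,
  defaultJobOptions: {
    attempts: 3,
    backoff: { type: 'exponential', delay: 5000 },
  },
});
```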
### Key Files
- `src/services/queues.server.ts` - Queue definitions and configuration
- `src/services/workers.server.ts` - Worker implementations with configurable concurrency
- `src/services/redis.server.ts` - Redis connection management
- `src/services/queueService.server.ts` - Queue lifecycle and graceful shutdown
- `src/services/flyerProcessingService.server.ts` - 5-stage flyer processing pipeline
- `src/types/job-data.ts` - TypeScript interfaces for all job data types
### API Design
Endpoints for long-running tasks return **202 Accepted** immediately with a job ID:
```text
POST /api/ai/upload-and-process → 202 { jobId: "..." }
GET /api/ai/jobs/:jobId/status → { state: "...", progress: ... }
```
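The enqueue side of this contract can be sketched as follows (the job name and payload shape are assumptions; the actual handler lives in the AI routes):
```typescript
// Hypothetical handler: enqueue the job and return 202 with its id immediately.
router.post('/upload-and-process', async (req, res, next) => {
  try {
    const job = await flyerQueue.add('process-flyer', {
      filePath: req.body.filePath, // assumed payload shape
      userId: (req.user as UserProfile | undefined)?.user.user_id,
    });
    res.status(202).json({ jobId: job.id });
  } catch (err) {
    next(err);
  }
});
```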
### Worker Configuration
Workers are configured via environment variables:
- `WORKER_CONCURRENCY` - Flyer processing parallelism (default: 1)
- `EMAIL_WORKER_CONCURRENCY` - Email worker parallelism (default: 10)
- `ANALYTICS_WORKER_CONCURRENCY` - Analytics worker parallelism (default: 1)
- `CLEANUP_WORKER_CONCURRENCY` - Cleanup worker parallelism (default: 10)
### Monitoring
- **Bull Board UI** available at `/api/admin/jobs` for admin users
- Worker status endpoint: `GET /api/admin/workers/status`
- Queue status endpoint: `GET /api/admin/queues/status`
### Graceful Shutdown
Both API and worker processes implement graceful shutdown with a 30-second timeout, ensuring in-flight jobs complete before process termination.
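A minimal sketch of such a shutdown handler, assuming BullMQ workers collected at startup (the real lifecycle logic lives in `src/services/queueService.server.ts`):
```typescript
import { Worker } from 'bullmq';

const activeWorkers: Worker[] = []; // populated as workers are created at startup

// Close workers on SIGTERM, but never hang past the 30-second cap.
async function shutdown(): Promise<void> {
  const closeAll = Promise.all(activeWorkers.map((w) => w.close())); // waits for in-flight jobs
  const timeout = new Promise<void>((resolve) => setTimeout(resolve, 30_000));
  await Promise.race([closeAll, timeout]);
  process.exit(0);
}

process.on('SIGTERM', () => void shutdown());
```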
## Compliance Notes
### Deprecated Synchronous Endpoints
The following endpoints process flyers synchronously and are **deprecated**:
- `POST /api/ai/upload-legacy` - For integration testing only
- `POST /api/ai/flyers/process` - Legacy workflow, should migrate to queue-based approach
New integrations MUST use `POST /api/ai/upload-and-process` for queue-based processing.
### Email Handling
- **Bulk emails** (deal notifications): Enqueued via `emailQueue`
- **Transactional emails** (password reset): Sent synchronously for immediate user feedback
## Future Enhancements
Potential improvements for consideration:
1. **Dead Letter Queue (DLQ)**: Move permanently failed jobs to a dedicated queue for analysis
2. **Job Priority Levels**: Allow priority-based processing for different job types
3. **Real-time Progress**: WebSocket/SSE for live job progress updates to clients
4. **Per-Queue Rate Limiting**: Throttle job processing based on external API limits
5. **Job Dependencies**: Support for jobs that depend on completion of other jobs
6. **Prometheus Metrics**: Export queue metrics for observability dashboards

View File

@@ -2,7 +2,9 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
@@ -16,3 +18,216 @@ We will introduce a centralized, schema-validated configuration service. We will
**Positive**: Improves application reliability and developer experience by catching configuration errors at startup rather than at runtime. Provides a single source of truth for all required configuration.
**Negative**: Adds a small amount of boilerplate for defining the configuration schema. Requires a one-time effort to refactor all `process.env` access points to use the new configuration service.
## Implementation Status
### What's Implemented
- ✅ **Centralized Configuration Schema** - Zod-based validation in `src/config/env.ts`
- ✅ **Type-Safe Access** - Full TypeScript types for all configuration
- ✅ **Fail-Fast Startup** - Clear error messages for missing/invalid config
- ✅ **Environment Helpers** - `isProduction`, `isTest`, `isDevelopment` exports
- ✅ **Service Configuration Helpers** - `isSmtpConfigured`, `isAiConfigured`, etc.
### Migration Status
- ⏳ Gradual migration of `process.env` access to `config.*` in progress
- Legacy `process.env` access still works during transition
## Implementation Details
### Configuration Schema
The configuration is organized into logical groups:
```typescript
import { config, isProduction, isTest } from './config/env';
// Database
config.database.host; // DB_HOST
config.database.port; // DB_PORT (default: 5432)
config.database.user; // DB_USER
config.database.password; // DB_PASSWORD
config.database.name; // DB_NAME
// Redis
config.redis.url; // REDIS_URL
config.redis.password; // REDIS_PASSWORD (optional)
// Authentication
config.auth.jwtSecret; // JWT_SECRET (min 32 chars)
config.auth.jwtSecretPrevious; // JWT_SECRET_PREVIOUS (for rotation)
// SMTP (all optional - email degrades gracefully)
config.smtp.host; // SMTP_HOST
config.smtp.port; // SMTP_PORT (default: 587)
config.smtp.user; // SMTP_USER
config.smtp.pass; // SMTP_PASS
config.smtp.secure; // SMTP_SECURE (default: false)
config.smtp.fromEmail; // SMTP_FROM_EMAIL
// AI Services
config.ai.geminiApiKey; // GEMINI_API_KEY
config.ai.geminiRpm; // GEMINI_RPM (default: 5)
config.ai.priceQualityThreshold; // AI_PRICE_QUALITY_THRESHOLD (default: 0.5)
// Google Services
config.google.mapsApiKey; // GOOGLE_MAPS_API_KEY (optional)
config.google.clientId; // GOOGLE_CLIENT_ID (optional)
config.google.clientSecret; // GOOGLE_CLIENT_SECRET (optional)
// Worker Configuration
config.worker.concurrency; // WORKER_CONCURRENCY (default: 1)
config.worker.lockDuration; // WORKER_LOCK_DURATION (default: 30000)
config.worker.emailConcurrency; // EMAIL_WORKER_CONCURRENCY (default: 10)
config.worker.analyticsConcurrency; // ANALYTICS_WORKER_CONCURRENCY (default: 1)
config.worker.cleanupConcurrency; // CLEANUP_WORKER_CONCURRENCY (default: 10)
config.worker.weeklyAnalyticsConcurrency; // WEEKLY_ANALYTICS_WORKER_CONCURRENCY (default: 1)
// Server
config.server.nodeEnv; // NODE_ENV (development/production/test)
config.server.port; // PORT (default: 3001)
config.server.frontendUrl; // FRONTEND_URL
config.server.baseUrl; // BASE_URL
config.server.storagePath; // STORAGE_PATH (default: /var/www/.../flyer-images)
```
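A minimal sketch of how such a schema can be declared and validated with Zod (an illustrative subset, not the full schema in `src/config/env.ts`):
```typescript
import { z } from 'zod';

// Illustrative subset of the environment schema.
const envSchema = z.object({
  DB_HOST: z.string().min(1, 'DB_HOST is required'),
  DB_PORT: z.coerce.number().default(5432),
  REDIS_URL: z.string().min(1, 'REDIS_URL is required'),
  JWT_SECRET: z.string().min(32, 'JWT_SECRET must be at least 32 characters for security'),
  NODE_ENV: z.enum(['development', 'production', 'test']).default('development'),
});

const parsed = envSchema.safeParse(process.env);
if (!parsed.success) {
  // Fail fast: report every invalid variable, then exit before the app starts.
  console.error(
    parsed.error.issues.map((i) => `- ${i.path.join('.')}: ${i.message}`).join('\n'),
  );
  process.exit(1);
}

// Group the flat env vars into the logical sections shown above.
export const config = {
  database: { host: parsed.data.DB_HOST, port: parsed.data.DB_PORT },
  redis: { url: parsed.data.REDIS_URL },
  auth: { jwtSecret: parsed.data.JWT_SECRET },
  server: { nodeEnv: parsed.data.NODE_ENV },
};
```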
### Convenience Helpers
```typescript
import { isProduction, isTest, isDevelopment, isSmtpConfigured } from './config/env';

// Environment checks
if (isProduction) {
  // Production-only logic
}

// Service availability checks
if (isSmtpConfigured) {
  await sendEmail(...);
} else {
  logger.warn('Email not configured, skipping notification');
}
```
### Fail-Fast Error Messages
When configuration is invalid, the application exits with a clear error:
```text
╔════════════════════════════════════════════════════════════════╗
║           CONFIGURATION ERROR - APPLICATION STARTUP            ║
╚════════════════════════════════════════════════════════════════╝
The following environment variables are missing or invalid:
- database.host: DB_HOST is required
- auth.jwtSecret: JWT_SECRET must be at least 32 characters for security
Please check your .env file or environment configuration.
See ADR-007 for the complete list of required environment variables.
```
### Usage Example
```typescript
// Before (direct process.env access)
const pool = new Pool({
  host: process.env.DB_HOST,
  port: parseInt(process.env.DB_PORT || '5432', 10),
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});

// After (type-safe config access)
import { config } from './config/env';

const pool = new Pool({
  host: config.database.host,
  port: config.database.port,
  user: config.database.user,
  password: config.database.password,
  database: config.database.name,
});
```
## Required Environment Variables
### Critical (Application will not start without these)
| Variable | Description |
| ------------- | ----------------------------------------------------- |
| `DB_HOST` | PostgreSQL database host |
| `DB_USER` | PostgreSQL database user |
| `DB_PASSWORD` | PostgreSQL database password |
| `DB_NAME` | PostgreSQL database name |
| `REDIS_URL` | Redis connection URL (e.g., `redis://localhost:6379`) |
| `JWT_SECRET` | JWT signing secret (minimum 32 characters) |
### Optional with Defaults
| Variable | Default | Description |
| ---------------------------- | ------------------------- | ------------------------------- |
| `DB_PORT` | 5432 | PostgreSQL port |
| `PORT` | 3001 | Server HTTP port |
| `NODE_ENV` | development | Environment mode |
| `STORAGE_PATH` | /var/www/.../flyer-images | File upload directory |
| `SMTP_PORT` | 587 | SMTP server port |
| `SMTP_SECURE` | false | Use TLS for SMTP |
| `GEMINI_RPM` | 5 | Gemini API requests per minute |
| `AI_PRICE_QUALITY_THRESHOLD` | 0.5 | AI extraction quality threshold |
| `WORKER_CONCURRENCY` | 1 | Flyer processing concurrency |
| `WORKER_LOCK_DURATION` | 30000 | Worker lock duration (ms) |
### Optional (Feature-specific)
| Variable | Description |
| --------------------- | ------------------------------------------- |
| `GEMINI_API_KEY` | Google Gemini API key (enables AI features) |
| `GOOGLE_MAPS_API_KEY` | Google Maps API key (enables geocoding) |
| `SMTP_HOST` | SMTP server (enables email notifications) |
| `SMTP_USER` | SMTP authentication username |
| `SMTP_PASS` | SMTP authentication password |
| `SMTP_FROM_EMAIL` | Sender email address |
| `FRONTEND_URL` | Frontend URL for email links |
| `JWT_SECRET_PREVIOUS` | Previous JWT secret for rotation (ADR-029) |
## Key Files
- `src/config/env.ts` - Configuration schema and validation
- `.env.example` - Template for required environment variables
## Migration Guide
To migrate existing `process.env` usage:
1. Import the config:
```typescript
import { config, isProduction } from '../config/env';
```
2. Replace direct access:
```typescript
// Before
process.env.DB_HOST;
process.env.NODE_ENV === 'production';
parseInt(process.env.PORT || '3001', 10);
// After
config.database.host;
isProduction;
config.server.port;
```
3. Use service helpers for optional features:
```typescript
import { isSmtpConfigured, isAiConfigured } from '../config/env';
if (isSmtpConfigured) {
  // Email is available
}
```

View File

@@ -2,7 +2,7 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
## Context
@@ -20,3 +20,107 @@ We will implement a multi-layered caching strategy using an in-memory data store
**Positive**: Directly addresses application performance and scalability. Reduces database load and improves API response times for common requests.
**Negative**: Introduces Redis as a dependency if not already used. Adds complexity to the data-fetching logic and requires careful management of cache invalidation to prevent stale data.
## Implementation Details
### Cache Service
A centralized cache service (`src/services/cacheService.server.ts`) provides reusable caching functionality:
- **`getOrSet<T>(key, fetcher, options)`**: Cache-aside pattern implementation
- **`get<T>(key)`**: Retrieve cached value
- **`set<T>(key, value, ttl)`**: Store value with TTL
- **`del(key)`**: Delete specific key
- **`invalidatePattern(pattern)`**: Delete keys matching a pattern
All cache operations are fail-safe - cache failures do not break the application.
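A minimal sketch of the cache-aside `getOrSet` pattern, assuming an ioredis client and JSON serialization (the real service may differ in details):
```typescript
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

// Cache-aside: return the cached value if present; otherwise fetch, store, return.
export async function getOrSet<T>(
  key: string,
  fetcher: () => Promise<T>,
  options: { ttl: number }, // TTL in seconds
): Promise<T> {
  try {
    const cached = await redis.get(key);
    if (cached !== null) return JSON.parse(cached) as T;
  } catch {
    // Fail-safe: a cache read failure falls through to the fetcher.
  }
  const value = await fetcher();
  try {
    await redis.set(key, JSON.stringify(value), 'EX', options.ttl);
  } catch {
    // Fail-safe: a cache write failure does not break the request.
  }
  return value;
}
```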
### TTL Configuration
Different data types use different TTL values based on volatility:
| Data Type         | TTL        | Rationale                              |
| ----------------- | ---------- | -------------------------------------- |
| Brands/Stores     | 1 hour     | Rarely changes, safe to cache longer   |
| Flyer lists       | 5 minutes  | Changes when new flyers are added      |
| Individual flyers | 10 minutes | Stable once created                    |
| Flyer items       | 10 minutes | Stable once created                    |
| Statistics        | 5 minutes  | Can be slightly stale                  |
| Frequent sales    | 15 minutes | Aggregated data, updated periodically  |
| Categories        | 1 hour     | Rarely changes                         |
### Cache Key Strategy
Cache keys follow a consistent prefix pattern for pattern-based invalidation:
- `cache:brands` - All brands list
- `cache:flyers:{limit}:{offset}` - Paginated flyer lists
- `cache:flyer:{id}` - Individual flyer data
- `cache:flyer-items:{flyerId}` - Items for a specific flyer
- `cache:stats:*` - Statistics data
- `geocode:{address}` - Geocoding results (30-day TTL)
### Cached Endpoints
The following repository methods implement server-side caching:
| Method | Cache Key Pattern | TTL |
| ------ | ----------------- | --- |
| `FlyerRepository.getAllBrands()` | `cache:brands` | 1 hour |
| `FlyerRepository.getFlyers()` | `cache:flyers:{limit}:{offset}` | 5 minutes |
| `FlyerRepository.getFlyerItems()` | `cache:flyer-items:{flyerId}` | 10 minutes |
### Cache Invalidation
**Event-based invalidation** is triggered on write operations:
- **Flyer creation** (`FlyerPersistenceService.saveFlyer`): Invalidates all `cache:flyers*` keys
- **Flyer deletion** (`FlyerRepository.deleteFlyer`): Invalidates specific flyer and flyer items cache, plus flyer lists
**Manual invalidation** via admin endpoints:
- `POST /api/admin/system/clear-cache` - Clears all application cache (flyers, brands, stats)
- `POST /api/admin/system/clear-geocode-cache` - Clears geocoding cache
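Pattern-based invalidation (used for the `cache:flyers*` keys above) can be sketched with ioredis's `scanStream`, a non-blocking alternative to `KEYS`; this reuses the `redis` client from the `getOrSet` sketch and is an assumption about the implementation:
```typescript
// Delete every key matching a glob pattern, e.g. 'cache:flyers*'.
export async function invalidatePattern(pattern: string): Promise<void> {
  const stream = redis.scanStream({ match: pattern, count: 100 });
  for await (const keys of stream as AsyncIterable<string[]>) {
    if (keys.length > 0) {
      await redis.del(...keys);
    }
  }
}
```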
### Client-Side Caching
TanStack React Query provides client-side caching with configurable stale times:
| Query Type | Stale Time |
| ----------------- | ----------- |
| Categories | 1 hour |
| Master Items | 10 minutes |
| Flyer Items | 5 minutes |
| Flyers | 2 minutes |
| Shopping Lists | 1 minute |
| Activity Log | 30 seconds |
### Multi-Layer Cache Architecture
```text
Client Request
      ↓
[TanStack React Query]    ← Client-side cache (staleTime-based)
      ↓
[Express API]
      ↓
[CacheService.getOrSet()] ← Server-side Redis cache (TTL-based)
      ↓
[PostgreSQL Database]
```
## Key Files
- `src/services/cacheService.server.ts` - Centralized cache service
- `src/services/db/flyer.db.ts` - Repository with caching for brands, flyers, flyer items
- `src/services/flyerPersistenceService.server.ts` - Cache invalidation on flyer creation
- `src/routes/admin.routes.ts` - Admin cache management endpoints
- `src/config/queryClient.ts` - Client-side query cache configuration
## Future Enhancements
1. **Recipe caching**: Add caching to expensive recipe queries (by-sale-percentage, etc.)
2. **Cache warming**: Pre-populate cache on startup for frequently accessed static data
3. **Cache metrics**: Add hit/miss rate monitoring for observability
4. **Conditional caching**: Skip cache for authenticated user-specific data
5. **Cache compression**: Compress large cached payloads to reduce Redis memory usage

View File

@@ -2,7 +2,7 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
## Context
@@ -14,9 +14,359 @@ We will formalize the testing pyramid for the project, defining the role of each
1. **Unit Tests (Vitest)**: For isolated functions, components, and repository methods with mocked dependencies. High coverage is expected.
2. **Integration Tests (Supertest)**: For API routes, testing the interaction between controllers, services, and mocked database layers. Focus on contract and middleware correctness.
3. **End-to-End (E2E) Tests (Playwright/Cypress)**: For critical user flows (e.g., login, flyer upload, checkout), running against a real browser and a test database to ensure the entire system works together.
3. **End-to-End (E2E) Tests (Vitest + Supertest)**: For critical user flows (e.g., login, flyer upload, checkout), running against a real test server and database to ensure the entire system works together.
## Consequences
**Positive**: Ensures a consistent and comprehensive approach to quality assurance. Gives developers confidence when refactoring or adding new features. Clearly defines "done" for a new feature.
**Negative**: May require investment in setting up and maintaining the E2E testing environment. Can slightly increase the time required to develop a feature if all test layers are required.
## Implementation Details
### Testing Framework Stack
| Tool | Version | Purpose |
| ------------------------- | --------------- | --------------------------------------- |
| Vitest | 4.0.15 | Test runner for all test types |
| @testing-library/react | 16.3.0 | React component testing |
| @testing-library/jest-dom | 6.9.1 | DOM assertion matchers |
| supertest | 7.1.4 | HTTP assertion library for API testing |
| msw | 2.12.3 | Mock Service Worker for network mocking |
| testcontainers | 11.8.1 | Database containerization (optional) |
| c8 + nyc | 10.1.3 / 17.1.0 | Coverage reporting |
### Test File Organization
```text
src/
├── components/
│ └── *.test.tsx # Component unit tests (colocated)
├── hooks/
│ └── *.test.ts # Hook unit tests (colocated)
├── services/
│ └── *.test.ts # Service unit tests (colocated)
├── routes/
│ └── *.test.ts # Route handler unit tests (colocated)
├── utils/
│ └── *.test.ts # Utility function tests (colocated)
└── tests/
├── setup/ # Test configuration and setup files
├── utils/ # Test utilities, factories, helpers
├── assets/ # Test fixtures (images, files)
├── integration/ # Integration test files (*.test.ts)
└── e2e/ # End-to-end test files (*.e2e.test.ts)
```
**Naming Convention**: `{filename}.test.ts` or `{filename}.test.tsx` for unit/integration, `{filename}.e2e.test.ts` for E2E.
### Configuration Files
| Config | Environment | Purpose |
| ------------------------------ | ----------- | ------------------------------------ |
| `vite.config.ts` | jsdom | Unit tests (React components, hooks) |
| `vitest.config.integration.ts` | node | Integration tests (API routes) |
| `vitest.config.e2e.ts` | node | E2E tests (full user flows) |
| `vitest.workspace.ts` | - | Orchestrates all test projects |
### Test Pyramid
```text
        ┌─────────────┐
        │     E2E     │  5 test files
        │    Tests    │  Critical user flows
        ├─────────────┤
        │ Integration │  17 test files
        │    Tests    │  API contracts + middleware
    ┌───┴─────────────┴───┐
    │     Unit Tests      │  185 test files
    │ Components, Hooks,  │  Isolated functions
    │  Services, Utils    │  Mocked dependencies
    └─────────────────────┘
```
### Unit Tests
**Purpose**: Test isolated functions, components, and modules with mocked dependencies.
**Environment**: jsdom (browser-like)
**Key Patterns**:
```typescript
// Component testing with providers
import { renderWithProviders, screen } from '@/tests/utils/renderWithProviders';

describe('MyComponent', () => {
  it('renders correctly', () => {
    renderWithProviders(<MyComponent />);
    expect(screen.getByText('Hello')).toBeInTheDocument();
  });
});
```
```typescript
// Hook testing
import { renderHook, waitFor } from '@testing-library/react';
import { useMyHook } from './useMyHook';

describe('useMyHook', () => {
  it('returns expected value', async () => {
    const { result } = renderHook(() => useMyHook());
    await waitFor(() => expect(result.current.data).toBeDefined());
  });
});
```
**Global Mocks** (automatically applied via `tests-setup-unit.ts`):
- Database connections (`pg.Pool`)
- AI services (`@google/genai`)
- Authentication (`jsonwebtoken`, `bcrypt`)
- Logging (`logger.server`, `logger.client`)
- Notifications (`notificationService`)
### Integration Tests
**Purpose**: Test API routes with real service interactions and database.
**Environment**: node
**Setup**: Real Express server on port 3001, real PostgreSQL database
```typescript
// API route testing pattern
import supertest from 'supertest';
import { createAndLoginUser } from '@/tests/utils/testHelpers';

describe('Auth API', () => {
  let request: ReturnType<typeof supertest>;
  let authToken: string;

  beforeAll(async () => {
    const app = (await import('../../../server')).default;
    request = supertest(app);
    const { token } = await createAndLoginUser(request);
    authToken = token;
  });

  it('GET /api/auth/me returns user profile', async () => {
    const response = await request.get('/api/auth/me').set('Authorization', `Bearer ${authToken}`);
    expect(response.status).toBe(200);
    expect(response.body.user.email).toBeDefined();
  });
});
```
**Database Cleanup**:
```typescript
import { cleanupDb } from '@/tests/utils/cleanup';

afterAll(async () => {
  await cleanupDb({ users: [testUserId] });
});
```
### E2E Tests
**Purpose**: Test complete user journeys through the application.
**Timeout**: 120 seconds (for long-running flows)
**Current E2E Tests**:
- `auth.e2e.test.ts` - Registration, login, password reset
- `flyer-upload.e2e.test.ts` - Complete flyer upload pipeline
- `user-journey.e2e.test.ts` - Full user workflow
- `admin-authorization.e2e.test.ts` - Admin-specific flows
- `admin-dashboard.e2e.test.ts` - Admin dashboard functionality
### Mock Factories
The project uses comprehensive mock factories (`src/tests/utils/mockFactories.ts`, 1553 lines) for creating test data:
```typescript
import {
  createMockUser,
  createMockFlyer,
  createMockFlyerItem,
  createMockRecipe,
  resetMockIds,
} from '@/tests/utils/mockFactories';

beforeEach(() => {
  resetMockIds(); // Ensure deterministic IDs
});

it('creates flyer with items', () => {
  const flyer = createMockFlyer({ store_name: 'TestMart' });
  const items = [createMockFlyerItem({ flyer_id: flyer.flyer_id })];
  // ...
});
```
**Factory Coverage**: 90+ factory functions for all domain entities including users, flyers, recipes, shopping lists, budgets, achievements, etc.
### Test Utilities
| Utility | Purpose |
| ----------------------- | ------------------------------------------ |
| `renderWithProviders()` | Wrap components with AppProviders + Router |
| `createAndLoginUser()` | Create user and return auth token |
| `cleanupDb()` | Database cleanup respecting FK constraints |
| `createTestApp()` | Create Express app for route testing |
| `poll()` | Polling utility for async operations |
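As an example of the last helper, `poll()` typically retries an async check until it yields a value or a timeout elapses; a sketch (the real helper's signature may differ):
```typescript
// Retry an async check until it returns a defined value, or fail on timeout.
export async function poll<T>(
  check: () => Promise<T | undefined>,
  { intervalMs = 500, timeoutMs = 30_000 } = {},
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const result = await check();
    if (result !== undefined) return result;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`poll() timed out after ${timeoutMs}ms`);
}
```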
### Coverage Configuration
**Coverage Provider**: v8 (built-in Vitest)
**Report Directories**:
- `.coverage/unit/` - Unit test coverage
- `.coverage/integration/` - Integration test coverage
- `.coverage/e2e/` - E2E test coverage
**Excluded from Coverage**:
- `src/index.tsx`, `src/main.tsx` (entry points)
- `src/tests/**` (test files themselves)
- `src/**/*.d.ts` (type declarations)
- `src/components/icons/**` (icon components)
- `src/db/seed*.ts` (database seeding scripts)
### npm Scripts
```bash
# Run all tests
npm run test
# Run by level
npm run test:unit # Unit tests only (jsdom)
npm run test:integration # Integration tests only (node)
# With coverage
npm run test:coverage # Unit + Integration with reports
# Clean coverage directories
npm run clean
```
### Test Timeouts
| Test Type | Timeout | Rationale |
| ----------- | ----------- | -------------------------------------- |
| Unit | 5 seconds | Fast, isolated tests |
| Integration | 60 seconds | AI service calls, DB operations |
| E2E | 120 seconds | Full user flow with multiple API calls |
## Best Practices
### When to Write Each Test Type
1. **Unit Tests** (required):
- Pure functions and utilities
- React components (rendering, user interactions)
- Custom hooks
- Service methods with mocked dependencies
- Repository methods
2. **Integration Tests** (required for API changes):
- New API endpoints
- Authentication/authorization flows
- Middleware behavior
- Database query correctness
3. **E2E Tests** (for critical paths):
- User registration and login
- Core business flows (flyer upload, shopping lists)
- Admin operations
### Test Isolation Guidelines
1. **Reset mock IDs**: Call `resetMockIds()` in `beforeEach()`
2. **Unique test data**: Use timestamps or UUIDs for emails/usernames
3. **Clean up after tests**: Use `cleanupDb()` in `afterAll()`
4. **Don't share state**: Each test should be independent
### Mocking Guidelines
1. **Unit tests**: Mock external dependencies (DB, APIs, services)
2. **Integration tests**: Mock only external APIs (AI services)
3. **E2E tests**: Minimal mocking, use real services where possible
### Testing Code Smells
**When testing requires any of the following patterns, treat it as a code smell indicating the production code needs refactoring:**
1. **Capturing callbacks through mocks**: If you need to capture a callback passed to a mock and manually invoke it to test behavior, the code under test likely has poor separation of concerns.
2. **Complex module resets**: If tests require `vi.resetModules()`, `vi.doMock()`, or careful ordering of mock setup to work correctly, the module likely has problematic initialization or hidden global state.
3. **Indirect verification**: If you can only verify behavior by checking that internal mocks were called with specific arguments (rather than asserting on direct outputs), the code likely lacks proper return values or has side effects that should be explicit.
4. **Excessive mock setup**: If setting up mocks requires more lines than the actual test assertions, consider whether the code under test has too many dependencies or responsibilities.
**The Fix**: Rather than writing complex test scaffolding, refactor the production code to be more testable:
- Extract pure functions that can be tested with simple input/output assertions
- Use dependency injection to make dependencies explicit and easily replaceable
- Return values from functions instead of relying on side effects
- Split modules with complex initialization into smaller, focused units
- Make async flows explicit and controllable rather than callback-based
**Example anti-pattern**:
```typescript
// BAD: Capturing a callback to test behavior
let capturedCallback: ((data: string) => void) | undefined;
mockService.onEvent.mockImplementation((cb) => {
  capturedCallback = cb;
});

await initializeModule();
capturedCallback?.('test-data'); // Manually triggering to test
expect(mockOtherService.process).toHaveBeenCalledWith('test-data');
```
**Example preferred pattern**:
```typescript
// GOOD: Direct input/output testing
const result = await processEvent('test-data');
expect(result).toEqual({ processed: true, data: 'test-data' });
```
### Known Code Smell Violations (Technical Debt)
The following files contain acknowledged code smell violations that are deferred for future refactoring:
| File | Violations | Rationale for Deferral |
| ------------------------------------------------------ | ------------------------------------------------------ | ----------------------------------------------------------------------------------------- |
| `src/services/queueService.workers.test.ts` | Callback capture, `vi.resetModules()`, excessive setup | BullMQ workers instantiate at module load; business logic is tested via service classes |
| `src/services/workers.server.test.ts` | `vi.resetModules()` | Same as above - worker wiring tests |
| `src/services/queues.server.test.ts` | `vi.resetModules()` | Queue instantiation at module load |
| `src/App.test.tsx` | Callback capture, excessive setup | Component integration test; refactoring would require significant UI architecture changes |
| `src/features/voice-assistant/VoiceAssistant.test.tsx` | Multiple callback captures | WebSocket/audio APIs are inherently callback-based |
| `src/services/aiService.server.test.ts` | Multiple `vi.resetModules()` | AI service initialization complexity |
**Policy**: New code should follow the code smell guidelines. These existing violations are tracked here and will be addressed when the underlying modules are refactored or replaced.
## Key Files
- `vite.config.ts` - Unit test configuration
- `vitest.config.integration.ts` - Integration test configuration
- `vitest.config.e2e.ts` - E2E test configuration
- `vitest.workspace.ts` - Workspace orchestration
- `src/tests/setup/tests-setup-unit.ts` - Global mocks (488 lines)
- `src/tests/setup/integration-global-setup.ts` - Server + DB setup
- `src/tests/utils/mockFactories.ts` - Mock factories (1553 lines)
- `src/tests/utils/testHelpers.ts` - Test utilities
## Future Enhancements
1. **Browser E2E Tests**: Consider adding Playwright for actual browser testing
2. **Visual Regression**: Screenshot comparison for UI components
3. **Performance Testing**: Add benchmarks for critical paths
4. **Mutation Testing**: Verify test quality with mutation testing tools
5. **Coverage Thresholds**: Define minimum coverage requirements per module

View File

@@ -2,7 +2,7 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Partially Implemented
## Context
@@ -16,3 +16,255 @@ We will establish a formal Design System and Component Library. This will involv
- **Positive**: Ensures a consistent and high-quality user interface. Accelerates frontend development by providing reusable, well-documented components. Improves maintainability and reduces technical debt.
- **Negative**: Requires an initial investment in setting up Storybook and migrating existing components. Adds a new dependency and a new workflow for frontend development.
## Implementation Status
### What's Implemented
The codebase has a solid foundation for a design system:
- ✅ **Tailwind CSS v4.1.17** as the styling solution
- ✅ **Dark mode** fully implemented with system preference detection
- ✅ **55 custom icon components** for consistent iconography
- ✅ **Component organization** with shared vs. feature-specific separation
- ✅ **Accessibility patterns** with ARIA attributes and focus management
### What's Not Yet Implemented
- ❌ **Storybook** is not yet installed or configured
- ❌ **Formal design token documentation** (colors, typography, spacing)
- ❌ **Visual regression testing** for component changes
## Implementation Details
### Component Library Structure
```text
src/
├── components/ # 30+ shared UI components
│ ├── icons/ # 55 SVG icon components
│ ├── Header.tsx
│ ├── Footer.tsx
│ ├── LoadingSpinner.tsx
│ ├── ErrorDisplay.tsx
│ ├── ConfirmationModal.tsx
│ ├── DarkModeToggle.tsx
│ ├── StatCard.tsx
│ ├── PasswordInput.tsx
│ └── ...
├── features/ # Feature-specific components
│ ├── charts/ # PriceChart, PriceHistoryChart
│ ├── flyer/ # FlyerDisplay, FlyerList, FlyerUploader
│ ├── shopping/ # ShoppingListComponent, WatchedItemsList
│ └── voice-assistant/ # VoiceAssistant
├── layouts/ # Page layouts
│ └── MainLayout.tsx
├── pages/ # Page components
│ └── admin/components/ # Admin-specific components
└── providers/ # Context providers
```
### Styling Approach
**Tailwind CSS** with utility-first classes:
```typescript
// Component example with consistent styling patterns
<button
  className="px-4 py-2 bg-brand-primary text-white rounded-lg
             hover:bg-brand-dark transition-colors duration-200
             focus:outline-none focus:ring-2 focus:ring-brand-primary
             focus:ring-offset-2 dark:focus:ring-offset-gray-800"
>
  Click me
</button>
```
**Common Utility Patterns**:
| Pattern | Classes |
| ------- | ------- |
| Card container | `bg-white dark:bg-gray-800 rounded-lg shadow-md p-6` |
| Primary button | `bg-brand-primary hover:bg-brand-dark text-white rounded-lg px-4 py-2` |
| Secondary button | `bg-gray-100 dark:bg-gray-700 text-gray-700 dark:text-gray-200` |
| Input field | `border border-gray-300 dark:border-gray-600 rounded-md px-3 py-2` |
| Focus ring | `focus:outline-none focus:ring-2 focus:ring-brand-primary` |
### Color System
**Brand Colors** (Tailwind theme extensions):
- `brand-primary` - Primary brand color (blue/teal)
- `brand-light` - Lighter variant
- `brand-dark` - Darker variant for hover states
- `brand-secondary` - Secondary accent color
**Semantic Colors**:
- Gray scale: `gray-50` through `gray-950`
- Error: `red-500`, `red-600`
- Success: `green-500`, `green-600`
- Warning: `yellow-500`, `orange-500`
- Info: `blue-500`, `blue-600`
### Dark Mode Implementation
Dark mode is fully implemented using Tailwind's `dark:` variant:
```typescript
// Initialization in useAppInitialization hook
const initializeDarkMode = () => {
  // Priority: user profile > localStorage > system preference
  const stored = localStorage.getItem('darkMode');
  const systemPreference = window.matchMedia('(prefers-color-scheme: dark)').matches;
  const isDarkMode = stored ? stored === 'true' : systemPreference;

  document.documentElement.classList.toggle('dark', isDarkMode);
  return isDarkMode;
};
```
**Usage in components**:
```typescript
<div className="bg-white dark:bg-gray-800 text-gray-900 dark:text-white">
  Content adapts to theme
</div>
```
### Icon System
**55 custom SVG icon components** in `src/components/icons/`:
```typescript
// Icon component pattern
interface IconProps extends React.SVGProps<SVGSVGElement> {
  title?: string;
}

export const CheckCircleIcon: React.FC<IconProps> = ({ title, ...props }) => (
  <svg {...props} fill="currentColor" viewBox="0 0 24 24">
    {title && <title>{title}</title>}
    <path d="..." />
  </svg>
);
```
**Usage**:
```typescript
<CheckCircleIcon className="w-5 h-5 text-green-500" title="Success" />
```
**External icons**: Lucide React (`lucide-react` v0.555.0) used for additional icons.
### Accessibility Patterns
**ARIA Attributes**:
```typescript
// Modal pattern
<div role="dialog" aria-modal="true" aria-labelledby="modal-title">
  <h2 id="modal-title">Modal Title</h2>
</div>

// Button with label
<button aria-label="Close modal">
  <XMarkIcon aria-hidden="true" />
</button>

// Loading state
<div role="status" aria-live="polite">
  <LoadingSpinner />
</div>
```
**Focus Management**:
- Consistent focus rings: `focus:ring-2 focus:ring-brand-primary focus:ring-offset-2`
- Dark mode offset: `dark:focus:ring-offset-gray-800`
- No outline: `focus:outline-none` (using ring instead)
### State Management
**Context Providers** (see ADR-005):
| Provider | Purpose |
| -------- | ------- |
| `AuthProvider` | Authentication state |
| `ModalProvider` | Modal open/close state |
| `FlyersProvider` | Flyer data |
| `MasterItemsProvider` | Grocery items |
| `UserDataProvider` | User-specific data |
**Provider Hierarchy** in `AppProviders.tsx`:
```typescript
<QueryClientProvider>
  <ModalProvider>
    <AuthProvider>
      <FlyersProvider>
        <MasterItemsProvider>
          <UserDataProvider>
            {children}
          </UserDataProvider>
        </MasterItemsProvider>
      </FlyersProvider>
    </AuthProvider>
  </ModalProvider>
</QueryClientProvider>
```
## Key Files
- `tailwind.config.js` - Tailwind CSS configuration
- `src/index.css` - Tailwind CSS entry point
- `src/components/` - Shared UI components
- `src/components/icons/` - Icon component library (55 icons)
- `src/providers/AppProviders.tsx` - Context provider composition
- `src/hooks/useAppInitialization.ts` - Dark mode initialization
## Component Guidelines
### When to Create Shared Components
Create a shared component in `src/components/` when:
1. Used in 3+ places across the application
2. Represents a reusable UI pattern (buttons, cards, modals)
3. Has consistent styling/behavior requirements
### Naming Conventions
- **Components**: PascalCase (`LoadingSpinner.tsx`)
- **Icons**: PascalCase with `Icon` suffix (`CheckCircleIcon.tsx`)
- **Hooks**: camelCase with `use` prefix (`useModal.ts`)
- **Contexts**: PascalCase with `Context` suffix (`AuthContext.tsx`)
### Styling Guidelines
1. Use Tailwind utility classes exclusively
2. Include dark mode variants for all colors: `bg-white dark:bg-gray-800`
3. Add focus states for interactive elements
4. Use semantic color names from the design system
## Future Enhancements (Storybook Setup)
To complete ADR-012 implementation:
1. **Install Storybook**:
```bash
npx storybook@latest init
```
2. **Create stories for core components**:
- Button variants
- Form inputs (PasswordInput, etc.)
- Modal components
- Loading states
- Icon showcase
3. **Add visual regression testing** with Chromatic or Percy
4. **Document design tokens** formally in Storybook
5. **Create component composition guidelines**

View File

@@ -2,17 +2,351 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Implemented
**Implemented**: 2026-01-09
## Context
The project is currently run using `pm2`, and the `README.md` contains manual setup instructions. While functional, this lacks the portability, scalability, and consistency of modern deployment practices.
The project is currently run using `pm2`, and the `README.md` contains manual setup instructions. While functional, this lacks the portability, scalability, and consistency of modern deployment practices. Local development environments also suffered from inconsistency issues.
## Platform Requirement: Linux Only
**CRITICAL**: This application is designed and intended to run **exclusively on Linux**, either:
- **In a container** (Docker/Podman) - the recommended and primary development environment
- **On bare-metal Linux** - for production deployments
### Windows Compatibility
**Windows is NOT a supported platform.** Any apparent Windows compatibility is:
- Coincidental and not guaranteed
- Subject to break at any time without notice
- Not a priority to fix or maintain
Specific issues that arise on Windows include:
- **Path separators**: The codebase uses POSIX-style paths (`/`) which work natively on Linux but may cause issues with `path.join()` on Windows producing backslash paths
- **Shell scripts**: Bash scripts in `scripts/` directory are Linux-only
- **External dependencies**: Tools like `pdftocairo` assume Linux installation paths
- **File permissions**: Unix-style permissions are assumed throughout
### Test Execution Requirement
**ALL tests MUST be executed on Linux.** This includes:
- Unit tests
- Integration tests
- End-to-end tests
- Any CI/CD pipeline tests
Tests that pass on Windows but fail on Linux are considered **broken tests**. Tests that fail on Windows but pass on Linux are considered **passing tests**.
**For Windows developers**: Always use the Dev Container (VS Code "Reopen in Container") to run tests. Never rely on test results from the Windows host machine.
## Decision
We will standardize the deployment process by containerizing the application using **Docker**. This will involve defining a `Dockerfile` for building a production-ready image and a `docker-compose.yml` file for orchestrating the application, database, and other services (like Redis) in a development environment.
We will standardize the deployment process using a hybrid approach:
1. **PM2 for Production**: Use PM2 cluster mode for process management, load balancing, and zero-downtime reloads.
2. **Docker/Podman for Development**: Provide a complete containerized development environment with automatic initialization.
3. **VS Code Dev Containers**: Enable one-click development environment setup.
4. **Gitea Actions for CI/CD**: Automated deployment pipelines handle builds and deployments.
## Consequences
- **Positive**: Ensures consistency between development and production environments. Simplifies the setup for new developers. Improves portability and scalability of the application.
- **Negative**: Requires learning Docker and containerization concepts. Adds `Dockerfile` and `docker-compose.yml` to the project's configuration.
- **Positive**: Ensures consistency between development and production environments. Simplifies the setup for new developers to a single "Reopen in Container" action. Improves portability and scalability of the application.
- **Negative**: Requires Docker/Podman installation. Container builds take time on first setup.
## Implementation Details
### Quick Start (Development)
```bash
# Prerequisites:
# - Docker Desktop or Podman installed
# - VS Code with "Dev Containers" extension
# Option 1: VS Code Dev Containers (Recommended)
# 1. Open project in VS Code
# 2. Click "Reopen in Container" when prompted
# 3. Wait for initialization to complete
# 4. Development server starts automatically
# Option 2: Manual Docker Compose
podman-compose -f compose.dev.yml up -d
podman exec -it flyer-crawler-dev bash
./scripts/docker-init.sh
npm run dev:container
```
### Container Services Architecture
```text
┌─────────────────────────────────────────────────────────────┐
│                   Development Environment                   │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌─────────────┐     ┌─────────────┐     ┌─────────────┐    │
│  │     app     │     │  postgres   │     │    redis    │    │
│  │  (Node.js)  │────▶│  (PostGIS)  │     │   (Cache)   │    │
│  │             │────▶│             │     │             │    │
│  └─────────────┘     └─────────────┘     └─────────────┘    │
│   :3000/:3001           :5432               :6379           │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
### compose.dev.yml Services
| Service | Image | Purpose | Healthcheck |
| ---------- | ----------------------- | ---------------------- | ---------------- |
| `app` | Custom (Dockerfile.dev) | Node.js application | HTTP /api/health |
| `postgres` | postgis/postgis:15-3.4 | Database with PostGIS | pg_isready |
| `redis` | redis:alpine | Caching and job queues | redis-cli ping |
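The `app` healthcheck polls an HTTP endpoint; a minimal sketch of such a handler (the imports are assumptions — the actual checks live in `src/routes/health.routes.ts`):
```typescript
import { Router } from 'express';
import { pool } from '../services/db'; // assumed pg Pool export
import { redis } from '../services/redis.server'; // assumed ioredis client export

const router = Router();

// Report 200 only when both backing services respond.
router.get('/health', async (_req, res) => {
  try {
    await pool.query('SELECT 1'); // PostgreSQL reachable
    await redis.ping(); // Redis reachable
    res.json({ status: 'ok' });
  } catch {
    res.status(503).json({ status: 'unavailable' });
  }
});

export default router;
```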
### Automatic Initialization
The container initialization script (`scripts/docker-init.sh`) performs:
1. **npm install** - Installs dependencies into isolated volume
2. **Wait for PostgreSQL** - Polls until database is ready
3. **Wait for Redis** - Polls until Redis is responding
4. **Schema Check** - Detects if database needs initialization
5. **Database Setup** - Runs `npm run db:reset:dev` if needed (schema + seed data)
### Development Dockerfile
Located in `Dockerfile.dev`:
```dockerfile
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive

# Install Node.js 20.x LTS + database clients
RUN apt-get update && apt-get install -y \
    curl git build-essential python3 \
    postgresql-client redis-tools \
    && rm -rf /var/lib/apt/lists/*
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
    && apt-get install -y nodejs

WORKDIR /app
ENV NODE_ENV=development
ENV NODE_OPTIONS='--max-old-space-size=8192'

CMD ["bash"]
```
### Environment Configuration
Copy `.env.example` to `.env` for local overrides (optional for containers):
```bash
# Container defaults (set in compose.dev.yml)
DB_HOST=postgres # Use Docker service name, not IP
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=postgres
DB_NAME=flyer_crawler_dev
REDIS_URL=redis://redis:6379
```
### VS Code Dev Container Configuration
Located in `.devcontainer/devcontainer.json`:
| Lifecycle Hook | Timing | Action |
| ------------------- | ----------------- | ------------------------------ |
| `initializeCommand` | Before container | Start Podman machine (Windows) |
| `postCreateCommand` | Container created | Run `docker-init.sh` |
| `postAttachCommand` | VS Code attached | Start dev server |
### Default Test Accounts
After initialization, these accounts are available:
| Role | Email | Password |
| ----- | ------------------- | --------- |
| Admin | `admin@example.com` | adminpass |
| User | `user@example.com` | userpass |
---
## Production Deployment (PM2)
### PM2 Ecosystem Configuration
Located in `ecosystem.config.cjs`:
```javascript
module.exports = {
  apps: [
    {
      // API Server - Cluster mode for load balancing
      name: 'flyer-crawler-api',
      script: './node_modules/.bin/tsx',
      args: 'server.ts',
      max_memory_restart: '500M',
      instances: 'max', // Use all CPU cores
      exec_mode: 'cluster', // Enable cluster mode
      kill_timeout: 5000, // Graceful shutdown timeout

      // Restart configuration
      max_restarts: 40,
      exp_backoff_restart_delay: 100,
      min_uptime: '10s',

      env_production: {
        NODE_ENV: 'production',
        cwd: '/var/www/flyer-crawler.projectium.com',
      },
      env_test: {
        NODE_ENV: 'test',
        cwd: '/var/www/flyer-crawler-test.projectium.com',
      },
    },
    {
      // Background Worker - Single instance
      name: 'flyer-crawler-worker',
      script: './node_modules/.bin/tsx',
      args: 'src/services/worker.ts',
      max_memory_restart: '1G',
      kill_timeout: 10000, // Workers need more time for jobs
      // ... similar config
    },
  ],
};
```
### Deployment Directory Structure
```text
/var/www/
├── flyer-crawler.projectium.com/ # Production
│ ├── server.ts
│ ├── ecosystem.config.cjs
│ ├── package.json
│ ├── flyer-images/
│ │ ├── icons/
│ │ └── archive/
│ └── ...
└── flyer-crawler-test.projectium.com/ # Test environment
└── ... (same structure)
```
### Environment-Specific Configuration
| Environment | Port | Redis DB | PM2 Process Suffix |
| ----------- | ---- | -------- | ------------------ |
| Production | 3000 | 0 | (none) |
| Test | 3001 | 1 | `-test` |
| Development | 3000 | 0 | `-dev` |
### PM2 Commands Reference
```bash
# Start/reload with environment
pm2 startOrReload ecosystem.config.cjs --env production --update-env
# Save process list for startup
pm2 save
# View logs
pm2 logs flyer-crawler-api --lines 50
# Monitor processes
pm2 monit
# List all processes
pm2 list
# Describe process details
pm2 describe flyer-crawler-api
```
### Resource Limits
| Process | Memory Limit | Restart Delay | Kill Timeout |
| ---------------- | ------------ | ------------------------ | ------------ |
| API Server | 500MB | Exponential (100ms base) | 5s |
| Worker | 1GB | Exponential (100ms base) | 10s |
| Analytics Worker | 1GB | Exponential (100ms base) | 10s |
---
## Troubleshooting
### Container Issues
```bash
# Reset everything and start fresh
podman-compose -f compose.dev.yml down -v
podman-compose -f compose.dev.yml up -d --build
# View container logs
podman-compose -f compose.dev.yml logs -f app
# Connect to database manually
podman exec -it flyer-crawler-postgres psql -U postgres -d flyer_crawler_dev
# Rebuild just the app container
podman-compose -f compose.dev.yml build app
```
### Common Issues
| Issue | Solution |
| ------------------------ | --------------------------------------------------------------- |
| "Database not ready" | Wait for postgres healthcheck, or run `docker-init.sh` manually |
| "node_modules not found" | Run `npm install` inside container |
| "Permission denied" | Ensure scripts have execute permission: `chmod +x scripts/*.sh` |
| "Network unreachable" | Use service names (postgres, redis) not IPs |
## Key Files
- `compose.dev.yml` - Docker Compose configuration
- `Dockerfile.dev` - Development container definition
- `.devcontainer/devcontainer.json` - VS Code Dev Container config
- `scripts/docker-init.sh` - Container initialization script
- `.env.example` - Environment variable template
- `ecosystem.config.cjs` - PM2 production configuration
- `.gitea/workflows/deploy-to-prod.yml` - Production deployment pipeline
- `.gitea/workflows/deploy-to-test.yml` - Test deployment pipeline
## Container Test Readiness Requirement
**CRITICAL**: The development container MUST be fully test-ready on startup. This means:
1. **Zero Manual Steps**: After running `podman-compose -f compose.dev.yml up -d` and entering the container, tests MUST run immediately with `npm test` without any additional setup steps.
2. **Complete Environment**: All environment variables, database connections, Redis connections, and seed data MUST be automatically initialized during container startup.
3. **Enforcement Checklist**:
- [ ] `npm test` runs successfully immediately after container start
- [ ] Database is seeded with test data (admin account, sample data)
- [ ] Redis is connected and healthy
- [ ] All environment variables are set via `compose.dev.yml` or `.env` files
- [ ] No "database not ready" or "connection refused" errors on first test run
4. **Current Gaps (To Fix)**:
- Integration tests require database seeding (`npm run db:reset:test`)
- Environment variables from `.env.test` may not be loaded automatically
- Some npm scripts use `NODE_ENV=` syntax which fails on Windows (use `cross-env`)
5. **Resolution Steps**:
- The `docker-init.sh` script should seed the test database after seeding the dev database (see the sketch after this list)
- Add automatic `.env.test` loading or move all test env vars to `compose.dev.yml`
- Update all npm scripts to use `cross-env` for cross-platform compatibility
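For instance, the tail end of `docker-init.sh` could become:

```bash
# Sketch: seed both databases during container init
npm run db:reset:dev  # dev schema + seed data
npm run db:reset:test # test schema + seed data, so `npm test` works immediately
```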
**Rationale**: Developers and CI systems should never need to run manual setup commands to execute tests. If the container is running, tests should work. Any deviation from this principle indicates an incomplete container setup.
## Related ADRs
- [ADR-017](./0017-ci-cd-and-branching-strategy.md) - CI/CD Strategy
- [ADR-038](./0038-graceful-shutdown-pattern.md) - Graceful Shutdown Pattern
- [ADR-010](./0010-testing-strategy-and-standards.md) - Testing Strategy and Standards

View File

@@ -2,17 +2,323 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
**Updated**: 2026-01-11
## Context
While `ADR-004` established structured logging, the application lacks a high-level, aggregated view of its health, performance, and errors. It's difficult to spot trends, identify slow API endpoints, or be proactively notified of new types of errors.
While `ADR-004` established structured logging with Pino, the application lacks a high-level, aggregated view of its health, performance, and errors. It's difficult to spot trends, identify slow API endpoints, or be proactively notified of new types of errors.
Key requirements:
1. **Self-hosted**: No external SaaS dependencies for error tracking
2. **Sentry SDK compatible**: Leverage mature, well-documented SDKs
3. **Lightweight**: Minimal resource overhead in the dev container
4. **Production-ready**: Same architecture works on bare-metal production servers
5. **AI-accessible**: MCP server integration for Claude Code and other AI tools
## Decision
We will integrate a dedicated Application Performance Monitoring (APM) and error tracking service like **Sentry**, **Datadog**, or **New Relic**. This will define how the service is integrated to automatically capture and report unhandled exceptions, performance data (e.g., transaction traces, database query times), and release health.
We will implement a self-hosted error tracking stack using **Bugsink** as the Sentry-compatible backend, with the following components:
### 1. Error Tracking Backend: Bugsink
**Bugsink** is a lightweight, self-hosted Sentry alternative that:
- Runs as a single process (no Kafka, Redis, ClickHouse required)
- Is fully compatible with Sentry SDKs
- Supports ARM64 and AMD64 architectures
- Can use SQLite (dev) or PostgreSQL (production)
**Deployment**:
- **Dev container**: Installed as a systemd service inside the container
- **Production**: Runs as a systemd service on bare-metal, listening on localhost only
- **Database**: Uses PostgreSQL with a dedicated `bugsink` user and `bugsink` database (same PostgreSQL instance as the main application)
### 2. Backend Integration: @sentry/node
The Express backend will integrate `@sentry/node` SDK to:
- Capture unhandled exceptions before PM2/process manager restarts
- Report errors with full stack traces and context
- Integrate with Pino logger for breadcrumbs
- Track transaction performance (optional)
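A minimal backend initialization sketch (env var names from the Configuration section below; the sample rate is illustrative):

```typescript
// server.ts (sketch): initialize Sentry before creating the Express app
import * as Sentry from '@sentry/node';

if (process.env.BUGSINK_ENABLED !== 'false' && process.env.BUGSINK_DSN) {
  Sentry.init({
    dsn: process.env.BUGSINK_DSN, // points at Bugsink, not sentry.io
    environment: process.env.NODE_ENV,
    tracesSampleRate: 0.1, // illustrative; tune per environment
  });
}

// After routes are registered (SDK v8-style API):
// Sentry.setupExpressErrorHandler(app);
```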
### 3. Frontend Integration: @sentry/react
The React frontend will integrate `@sentry/react` SDK to:
- Wrap the app in a Sentry Error Boundary
- Capture unhandled JavaScript errors
- Report errors with component stack traces
- Track user session context
- **Frontend Error Correlation**: The global API client (Axios/Fetch wrapper) MUST intercept 4xx/5xx responses. It MUST extract the `x-request-id` header (if present) and attach it to the Sentry scope as a tag `api_request_id` before re-throwing the error. This allows developers to copy the ID from Sentry and search for it in backend logs.
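A sketch of the boundary plus the correlation interceptor described above (the Axios client and env var name are assumptions):

```typescript
// src/index.tsx (sketch)
import * as Sentry from '@sentry/react';
import axios from 'axios';

Sentry.init({ dsn: import.meta.env.VITE_BUGSINK_DSN, environment: import.meta.env.MODE });

// Attach the backend request ID to the Sentry scope before re-throwing (v8-style API)
axios.interceptors.response.use(
  (response) => response,
  (error) => {
    const requestId = error.response?.headers?.['x-request-id'];
    if (requestId) {
      Sentry.getCurrentScope().setTag('api_request_id', requestId);
    }
    return Promise.reject(error); // callers still see the failure
  },
);

// The app itself is wrapped in:
// <Sentry.ErrorBoundary fallback={<ErrorPage />}><App /></Sentry.ErrorBoundary>
```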
### 4. Log Aggregation: Logstash
**Logstash** parses application and infrastructure logs, forwarding error patterns to Bugsink:
- **Installation**: Installed inside the dev container (and on bare-metal prod servers)
- **Inputs**:
- Pino JSON logs from the Node.js application
- Redis logs (connection errors, memory warnings, slow commands)
- PostgreSQL function logs (future - see Implementation Steps)
- **Filter**: Identifies error-level logs (5xx responses, unhandled exceptions, Redis errors)
- **Output**: Sends to Bugsink via Sentry-compatible HTTP API
This provides a secondary error capture path for:
- Errors that occur before Sentry SDK initialization
- Log-based errors that don't throw exceptions
- Redis connection/performance issues
- Database function errors and slow queries
- Historical error analysis from log files
### 5. MCP Server Integration: bugsink-mcp
For AI tool integration (Claude Code, Cursor, etc.), we use the open-source [bugsink-mcp](https://github.com/j-shelfwood/bugsink-mcp) server:
- **No code changes required**: Configurable via environment variables
- **Capabilities**: List projects, get issues, view events, get stacktraces, manage releases
- **Configuration**:
- `BUGSINK_URL`: Points to Bugsink instance (`http://localhost:8000` for dev, `https://bugsink.projectium.com` for prod)
- `BUGSINK_API_TOKEN`: API token from Bugsink (created via Django management command)
- `BUGSINK_ORG_SLUG`: Organization identifier (usually "sentry")
**Note:** Despite the name `sentry-selfhosted-mcp` mentioned in earlier drafts of this ADR, the actual MCP server used is `bugsink-mcp` which is specifically designed for Bugsink's API structure.
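For Claude Code, registration in `.mcp.json` looks roughly like this (the exact command and args depend on how bugsink-mcp is installed; check its README):

```json
{
  "mcpServers": {
    "bugsink": {
      "command": "npx",
      "args": ["-y", "bugsink-mcp"],
      "env": {
        "BUGSINK_URL": "http://localhost:8000",
        "BUGSINK_API_TOKEN": "<token from Django management command>",
        "BUGSINK_ORG_SLUG": "sentry"
      }
    }
  }
}
```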
## Architecture
```text
┌─────────────────────────────────────────────────────────────────────────┐
│ Dev Container / Production Server │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────────┐ ┌──────────────────┐ │
│ │ Frontend │ │ Backend │ │
│ │ (React) │ │ (Express) │ │
│ │ @sentry/react │ │ @sentry/node │ │
│ └────────┬─────────┘ └────────┬─────────┘ │
│ │ │ │
│ │ Sentry SDK Protocol │ │
│ └───────────┬───────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────┐ │
│ │ Bugsink │ │
│ │ (localhost:8000) │◄──────────────────┐ │
│ │ │ │ │
│ │ PostgreSQL backend │ │ │
│ └──────────────────────┘ │ │
│ │ │
│ ┌──────────────────────┐ │ │
│ │ Logstash │───────────────────┘ │
│ │ (Log Aggregator) │ Sentry Output │
│ │ │ │
│ │ Inputs: │ │
│ │ - Pino app logs │ │
│ │ - Redis logs │ │
│  │  - PostgreSQL (future)│  │
│ └──────────────────────┘ │
│ ▲ ▲ ▲ │
│ │ │ │ │
│ ┌───────────┘ │ └───────────┐ │
│ │ │ │ │
│ ┌────┴─────┐ ┌─────┴────┐ ┌──────┴─────┐ │
│ │ Pino │ │ Redis │ │ PostgreSQL │ │
│ │ Logs │ │ Logs │ │ Logs (TBD) │ │
│ └──────────┘ └──────────┘ └────────────┘ │
│ │
│ ┌──────────────────────┐ │
│ │ PostgreSQL │ │
│ │ ┌────────────────┐ │ │
│ │ │ flyer_crawler │ │ (main app database) │
│ │ ├────────────────┤ │ │
│ │ │ bugsink │ │ (error tracking database) │
│ │ └────────────────┘ │ │
│ └──────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
External (Developer Machine):
┌──────────────────────────────────────┐
│ Claude Code / Cursor / VS Code │
│ ┌────────────────────────────────┐ │
│ │ bugsink-mcp │ │
│ │ (MCP Server) │ │
│ │ │ │
│  │  BUGSINK_URL=http://localhost:8000 │  │
│ │ BUGSINK_API_TOKEN=... │ │
│ │ BUGSINK_ORG_SLUG=... │ │
│ └────────────────────────────────┘ │
└──────────────────────────────────────┘
```
## Configuration
### Environment Variables
| Variable | Description | Default (Dev) |
| ------------------ | ------------------------------ | -------------------------- |
| `BUGSINK_DSN` | Sentry-compatible DSN for SDKs | Set after project creation |
| `BUGSINK_ENABLED` | Enable/disable error reporting | `true` |
| `BUGSINK_BASE_URL` | Bugsink web UI URL (internal) | `http://localhost:8000` |
### PostgreSQL Setup
```sql
-- Create dedicated Bugsink database and user
CREATE USER bugsink WITH PASSWORD 'bugsink_dev_password';
CREATE DATABASE bugsink OWNER bugsink;
GRANT ALL PRIVILEGES ON DATABASE bugsink TO bugsink;
```
### Bugsink Configuration
```bash
# Environment variables for Bugsink service
SECRET_KEY=<random-50-char-string>
DATABASE_URL=postgresql://bugsink:bugsink_dev_password@localhost:5432/bugsink
BASE_URL=http://localhost:8000
PORT=8000
```
### Logstash Pipeline
```conf
# /etc/logstash/conf.d/bugsink.conf
# === INPUTS ===
input {
# Pino application logs
file {
path => "/app/logs/*.log"
codec => json
type => "pino"
tags => ["app"]
}
# Redis logs
file {
path => "/var/log/redis/*.log"
type => "redis"
tags => ["redis"]
}
# PostgreSQL logs (for function logging - future)
# file {
# path => "/var/log/postgresql/*.log"
# type => "postgres"
# tags => ["postgres"]
# }
}
# === FILTERS ===
filter {
# Pino error detection (level 50 = error, 60 = fatal)
if [type] == "pino" and [level] >= 50 {
mutate { add_tag => ["error"] }
}
# Redis error detection
if [type] == "redis" {
grok {
match => { "message" => "%{POSINT:pid}:%{WORD:role} %{MONTHDAY} %{MONTH} %{TIME} %{WORD:loglevel} %{GREEDYDATA:redis_message}" }
}
if [loglevel] in ["WARNING", "ERROR"] {
mutate { add_tag => ["error"] }
}
}
# PostgreSQL function error detection (future)
# if [type] == "postgres" {
# # Parse PostgreSQL log format and detect ERROR/FATAL levels
# }
}
# === OUTPUT ===
output {
if "error" in [tags] {
http {
url => "http://localhost:8000/api/store/"
http_method => "post"
format => "json"
# Sentry envelope format
}
}
}
```
## Implementation Steps
1. **Update Dockerfile.dev**:
- Install Bugsink (pip package or binary)
- Install Logstash (Elastic APT repository)
- Add systemd service files for both
2. **PostgreSQL initialization**:
- Add Bugsink user/database creation to `sql/00-init-extensions.sql`
3. **Backend SDK integration**:
- Install `@sentry/node`
- Initialize in `server.ts` before Express app
- Configure error handler middleware integration
4. **Frontend SDK integration**:
- Install `@sentry/react`
- Wrap `App` component with `Sentry.ErrorBoundary`
- Configure in `src/index.tsx`
5. **Environment configuration**:
- Add Bugsink variables to `src/config/env.ts`
- Update `.env.example` and `compose.dev.yml`
6. **Logstash configuration**:
- Create pipeline config for Pino → Bugsink
- Configure Pino to write to a log file in addition to stdout (see the sketch after this list)
- Configure Redis log monitoring (connection errors, slow commands)
7. **MCP server documentation**:
- Document `bugsink-mcp` setup in CLAUDE.md
8. **PostgreSQL function logging** (future):
- Configure PostgreSQL to log function execution errors
- Add Logstash input for PostgreSQL logs
- Define filter rules for function-level error detection
- _Note: Ask for implementation details when this step is reached_
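For step 6, Pino can fan out to stdout and a file using its transport targets, roughly as follows (the file path matches the Logstash input above):

```typescript
// src/logger.ts (sketch): write to stdout and to a file Logstash can tail
import pino from 'pino';

export const logger = pino({
  transport: {
    targets: [
      { target: 'pino/file', options: { destination: 1 } }, // fd 1 = stdout
      { target: 'pino/file', options: { destination: '/app/logs/app.log', mkdir: true } },
    ],
  },
});
```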
## Consequences
**Positive**: Provides critical observability into the application's real-world behavior. Enables proactive identification and resolution of performance bottlenecks and errors. Improves overall application reliability and user experience.
**Negative**: Introduces a new third-party dependency and potential subscription costs. Requires initial setup and configuration of the APM/error tracking agent.
### Positive
- **Full observability**: Aggregated view of errors, trends, and performance
- **Self-hosted**: No external SaaS dependencies or subscription costs
- **SDK compatibility**: Leverages mature Sentry SDKs with excellent documentation
- **AI integration**: MCP server enables Claude Code to query and analyze errors
- **Unified architecture**: Same setup works in dev container and production
- **Lightweight**: Bugsink runs in a single process, unlike full Sentry (16GB+ RAM)
### Negative
- **Additional services**: Bugsink and Logstash add complexity to the container
- **PostgreSQL overhead**: Additional database for error tracking
- **Initial setup**: Requires configuration of multiple components
- **Logstash learning curve**: Pipeline configuration requires Logstash knowledge
## Alternatives Considered
1. **Full Sentry self-hosted**: Rejected due to complexity (Kafka, Redis, ClickHouse, 16GB+ RAM minimum)
2. **GlitchTip**: Considered, but Bugsink is lighter weight and easier to deploy
3. **Sentry SaaS**: Rejected due to self-hosted requirement
4. **Custom error aggregation**: Rejected in favor of proven Sentry SDK ecosystem
## References
- [Bugsink Documentation](https://www.bugsink.com/docs/)
- [Bugsink Docker Install](https://www.bugsink.com/docs/docker-install/)
- [@sentry/node Documentation](https://docs.sentry.io/platforms/javascript/guides/node/)
- [@sentry/react Documentation](https://docs.sentry.io/platforms/javascript/guides/react/)
- [bugsink-mcp](https://github.com/j-shelfwood/bugsink-mcp)
- [Logstash Reference](https://www.elastic.co/guide/en/logstash/current/index.html)

View File

@@ -2,7 +2,7 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
## Context
@@ -20,3 +20,197 @@ We will implement a multi-layered security approach for the API:
- **Positive**: Significantly improves the application's security posture against common web vulnerabilities like XSS, clickjacking, and brute-force attacks.
- **Negative**: Requires careful configuration of CORS and rate limits to avoid blocking legitimate traffic. Content-Security-Policy can be complex to configure correctly.
## Implementation Status
### What's Implemented
- **Helmet** - Security headers middleware with CSP, HSTS, and more
- **Rate Limiting** - Comprehensive implementation with 17+ specific limiters
- **Input Validation** - Zod-based request validation on all routes
- **File Upload Security** - MIME type validation, size limits, filename sanitization
- **Error Handling** - Production-safe error responses (no sensitive data leakage)
- **Request Timeout** - 5-minute timeout protection
- **Secure Cookies** - httpOnly and secure flags for authentication cookies
### Not Required
- **CORS** - Not needed (API and frontend are same-origin)
## Implementation Details
### Helmet Security Headers
Using **helmet v8.x** configured in `server.ts` as the first middleware after app initialization.
**Security Headers Applied**:
| Header | Configuration | Purpose |
| ------ | ------------- | ------- |
| Content-Security-Policy | Custom directives | Prevents XSS, code injection |
| Strict-Transport-Security | 1 year, includeSubDomains, preload | Forces HTTPS connections |
| X-Content-Type-Options | nosniff | Prevents MIME type sniffing |
| X-Frame-Options | DENY | Prevents clickjacking |
| X-XSS-Protection | 0 (disabled) | Deprecated, CSP preferred |
| Referrer-Policy | strict-origin-when-cross-origin | Controls referrer information |
| Cross-Origin-Resource-Policy | cross-origin | Allows external resource loading |
**Content Security Policy Directives**:
```typescript
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
scriptSrc: ["'self'", "'unsafe-inline'"], // React inline scripts
styleSrc: ["'self'", "'unsafe-inline'"], // Tailwind inline styles
imgSrc: ["'self'", 'data:', 'blob:', 'https:'], // External images
fontSrc: ["'self'", 'https:', 'data:'],
connectSrc: ["'self'", 'https:', 'wss:'], // API + WebSocket
frameSrc: ["'none'"], // No iframes
objectSrc: ["'none'"], // No plugins
upgradeInsecureRequests: [], // Production only
},
}
```
**HSTS Configuration**:
- Max-age: 1 year (31536000 seconds)
- Includes subdomains
- Preload-ready for browser HSTS lists
### Rate Limiting
Using **express-rate-limit v8.2.1** with a centralized configuration in `src/config/rateLimiters.ts`.
**Standard Configuration**:
```typescript
const standardConfig = {
standardHeaders: true, // Sends RateLimit-* headers
legacyHeaders: false,
skip: shouldSkipRateLimit, // Disabled in test environment
};
```
**Rate Limiters by Category**:
| Category | Limiter | Window | Max Requests |
| -------- | ------- | ------ | ------------ |
| **Authentication** | loginLimiter | 15 min | 5 |
| | registerLimiter | 1 hour | 5 |
| | forgotPasswordLimiter | 15 min | 5 |
| | resetPasswordLimiter | 15 min | 10 |
| | refreshTokenLimiter | 15 min | 20 |
| | logoutLimiter | 15 min | 10 |
| **Public/User Read** | publicReadLimiter | 15 min | 100 |
| | userReadLimiter | 15 min | 100 |
| | userUpdateLimiter | 15 min | 100 |
| **Sensitive Operations** | userSensitiveUpdateLimiter | 1 hour | 5 |
| | adminTriggerLimiter | 15 min | 30 |
| **AI/Costly** | aiGenerationLimiter | 15 min | 20 |
| | geocodeLimiter | 1 hour | 100 |
| | priceHistoryLimiter | 15 min | 50 |
| **Uploads** | adminUploadLimiter | 15 min | 20 |
| | aiUploadLimiter | 15 min | 10 |
| | batchLimiter | 15 min | 50 |
| **Tracking** | trackingLimiter | 15 min | 200 |
| | reactionToggleLimiter | 15 min | 150 |
**Test Environment Handling**:
Rate limiting is automatically disabled in test environment via `shouldSkipRateLimit` utility (`src/utils/rateLimit.ts`). Tests can opt-in to rate limiting by setting the `x-test-rate-limit-enable: true` header.
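A representative limiter definition under these conventions (window and limit taken from the table above):

```typescript
// src/config/rateLimiters.ts (sketch)
import rateLimit from 'express-rate-limit';
import { shouldSkipRateLimit } from '../utils/rateLimit';

const standardConfig = {
  standardHeaders: true, // RateLimit-* headers
  legacyHeaders: false,
  skip: shouldSkipRateLimit, // disabled in tests unless opted in via header
};

export const loginLimiter = rateLimit({
  ...standardConfig,
  windowMs: 15 * 60 * 1000, // 15-minute window
  limit: 5, // 5 attempts per IP per window ("max" in older versions)
});

// Applied per-route, e.g.: router.post('/login', loginLimiter, loginHandler);
```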
### Input Validation
**Zod Schema Validation** (`src/middleware/validation.middleware.ts`):
- Type-safe parsing and coercion for params, query, and body
- Applied to all API routes via `validateRequest()` middleware
- Returns structured validation errors with field-level details
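Illustrative usage (the schema shape and route are examples, not actual project code):

```typescript
// Example route using validateRequest (sketch)
import { Router } from 'express';
import { z } from 'zod';
import { validateRequest } from '../middleware/validation.middleware';

const router = Router();

const getFlyerSchema = z.object({
  params: z.object({ id: z.coerce.number().int().positive() }),
  query: z.object({ limit: z.coerce.number().int().max(100).optional() }),
});

router.get('/flyers/:id', validateRequest(getFlyerSchema), (req, res) => {
  // By this point req.params.id has been parsed and coerced to a number
  res.json({ success: true, data: { id: Number(req.params.id) } });
});
```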
**Filename Sanitization** (`src/utils/stringUtils.ts`):
```typescript
// Removes dangerous characters from uploaded filenames
sanitizeFilename(filename: string): string
```
### File Upload Security
**Multer Configuration** (`src/middleware/multer.middleware.ts`):
- MIME type validation via `imageFileFilter` (only image/* allowed)
- File size limits (2MB for logos, configurable per upload type)
- Unique filenames using timestamps + random suffixes
- User-scoped storage paths
### Error Handling
**Production-Safe Responses** (`src/middleware/errorHandler.ts`):
- Production mode: Returns generic error message with tracking ID
- Development mode: Returns detailed error information
- Sensitive error details are logged but never exposed to clients
### Request Security
**Timeout Protection** (`server.ts`):
- 5-minute request timeout via `connect-timeout` middleware
- Prevents resource exhaustion from long-running requests
**Secure Cookies**:
```typescript
// Cookie configuration for auth tokens
{
httpOnly: true,
secure: process.env.NODE_ENV === 'production',
sameSite: 'strict',
maxAge: 7 * 24 * 60 * 60 * 1000 // 7 days for refresh token
}
```
### Request Logging
Per-request structured logging (ADR-004):
- Request ID tracking
- User ID and IP address logging
- Failed request details (4xx+) logged with headers and body
- Unhandled errors assigned unique error IDs
## Key Files
- `server.ts` - Helmet middleware configuration (security headers)
- `src/config/rateLimiters.ts` - Rate limiter definitions (17+ limiters)
- `src/utils/rateLimit.ts` - Rate limit skip logic for testing
- `src/middleware/validation.middleware.ts` - Zod-based request validation
- `src/middleware/errorHandler.ts` - Production-safe error handling
- `src/middleware/multer.middleware.ts` - Secure file upload configuration
- `src/utils/stringUtils.ts` - Filename sanitization
## Future Enhancements
1. **Configure CORS** (if needed for cross-origin access):
```bash
npm install cors @types/cors
```
Add to `server.ts`:
```typescript
import cors from 'cors';
app.use(cors({
origin: process.env.ALLOWED_ORIGINS?.split(',') || 'http://localhost:3000',
credentials: true,
}));
```
2. **Redis-backed rate limiting**: For distributed deployments, use `rate-limit-redis` store
3. **CSP Nonce**: Generate per-request nonces for stricter script-src policy
4. **Report-Only CSP**: Add `Content-Security-Policy-Report-Only` header for testing policy changes

View File

@@ -2,7 +2,9 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
@@ -10,9 +12,186 @@ The project has Gitea workflows but lacks a documented standard for how code mov
## Decision
We will formalize the end-to-end CI/CD process. This ADR will define the project's **branching strategy** (e.g., GitFlow or Trunk-Based Development), establish mandatory checks in the pipeline (e.g., linting, unit tests, vulnerability scanning), and specify the process for building and publishing Docker images (`ADR-014`) to a registry.
We will formalize the end-to-end CI/CD process using:
1. **Trunk-Based Development**: All work is merged to `main` branch.
2. **Automated Test Deployment**: Every push to `main` triggers deployment to test environment.
3. **Manual Production Deployment**: Production deployments require explicit confirmation.
4. **Semantic Versioning**: Automated version bumping on deployments.
## Consequences
- **Positive**: Automates quality control and creates a safe, repeatable path to production. Increases development velocity and reduces deployment-related errors.
- **Negative**: Initial setup effort for the CI/CD pipeline. May slightly increase the time to merge code due to mandatory checks.
## Implementation Details
### Branching Strategy
**Trunk-Based Development**:
```text
main ─────●─────●─────●─────●─────●─────▶
│ │ │ │ │
│ │ │ │ └── Deploy to Prod (manual)
│ │ │ └── v0.9.70 (patch bump)
│ │ └── Deploy to Test (auto)
│ └── v0.9.69 (patch bump)
└── Feature complete
```
- All development happens on `main` branch
- Feature branches are short-lived (< 1 day)
- Every merge to `main` triggers test deployment
- Production deploys are manual with confirmation
### Pipeline Stages
**Deploy to Test** (Automatic on push to `main`):
```yaml
jobs:
deploy-to-test:
steps:
- Checkout code
- Setup Node.js 20
- Install dependencies (npm ci)
- Bump patch version (npm version patch)
- TypeScript type-check
- Prettier check
- ESLint check
- Run unit tests with coverage
- Run integration tests with coverage
- Run E2E tests with coverage
- Merge coverage reports
- Check database schema hash
- Build React application
- Deploy to test server (rsync)
- Install production dependencies
- Reload PM2 processes
- Update schema hash in database
```
**Deploy to Production** (Manual trigger):
```yaml
on:
workflow_dispatch:
inputs:
confirmation:
description: 'Type "deploy-to-prod" to confirm'
required: true
jobs:
deploy-production:
steps:
- Verify confirmation phrase
- Checkout main branch
- Install dependencies
- Bump minor version (npm version minor)
- Check production schema hash
- Build React application
- Deploy to production server
- Reload PM2 processes
- Update schema hash
```
### Version Bumping Strategy
| Trigger | Version Change | Example |
| -------------------------- | -------------- | --------------- |
| Push to main (test deploy) | Patch bump | 0.9.69 → 0.9.70 |
| Production deploy | Minor bump | 0.9.70 → 0.10.0 |
| Major release | Manual | 0.10.0 → 1.0.0 |
**Commit Message Format**:
```text
ci: Bump version to 0.9.70 [skip ci]
```
The `[skip ci]` tag prevents version bump commits from triggering another workflow.
### Database Schema Management
Schema changes are tracked via SHA-256 hash:
```sql
CREATE TABLE public.schema_info (
environment VARCHAR(50) PRIMARY KEY,
schema_hash VARCHAR(64) NOT NULL,
deployed_at TIMESTAMP DEFAULT NOW()
);
```
**Deployment Checks**:
1. Calculate hash of `sql/master_schema_rollup.sql`
2. Compare with hash in target database
3. If mismatch: **FAIL** deployment (manual migration required)
4. If match: Continue deployment
5. After deploy: Update hash in database
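The gate itself can be sketched in shell (variable wiring is illustrative):

```bash
# Sketch of the schema-hash gate (actual workflow steps may differ)
LOCAL_HASH=$(sha256sum sql/master_schema_rollup.sql | cut -d' ' -f1)
DB_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" \
  -tAc "SELECT schema_hash FROM public.schema_info WHERE environment = 'test'")

if [ "$LOCAL_HASH" != "$DB_HASH" ]; then
  echo "Schema hash mismatch -- apply migrations manually before deploying."
  exit 1
fi

# ... deploy ...
# Afterwards, record the new hash:
# UPDATE public.schema_info SET schema_hash = '<hash>', deployed_at = NOW() WHERE environment = 'test';
```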
### Quality Gates
| Check | Required | Blocking |
| --------------------- | -------- | ---------------------- |
| TypeScript type-check | ✅ | No (continue-on-error) |
| Prettier formatting | ✅ | No |
| ESLint | ✅ | No |
| Unit tests | ✅ | No |
| Integration tests | ✅ | No |
| E2E tests | ✅ | No |
| Schema hash check | ✅ | **Yes** |
| Build | ✅ | **Yes** |
### Environment Variables
Secrets are injected from Gitea repository settings:
| Secret | Test | Production |
| -------------------------------------------------------------- | ------------------ | ------------- |
| `DB_DATABASE_TEST` / `DB_DATABASE_PROD` | flyer-crawler-test | flyer-crawler |
| `REDIS_PASSWORD_TEST` / `REDIS_PASSWORD_PROD` | \*\*\* | \*\*\* |
| `VITE_GOOGLE_GENAI_API_KEY_TEST` / `VITE_GOOGLE_GENAI_API_KEY` | \*\*\* | \*\*\* |
### Coverage Reporting
Coverage reports are generated and published:
```text
https://flyer-crawler-test.projectium.com/coverage/
```
Coverage merging combines:
- Unit test coverage (Vitest)
- Integration test coverage (Vitest)
- E2E test coverage (Vitest)
- Server V8 coverage (c8)
### Gitea Workflows
| Workflow | Trigger | Purpose |
| ----------------------------- | ------------ | ------------------------- |
| `deploy-to-test.yml` | Push to main | Automated test deployment |
| `deploy-to-prod.yml` | Manual | Production deployment |
| `manual-db-backup.yml` | Manual | Create database backup |
| `manual-db-restore.yml` | Manual | Restore from backup |
| `manual-db-reset-test.yml` | Manual | Reset test database |
| `manual-db-reset-prod.yml` | Manual | Reset production database |
| `manual-deploy-major.yml` | Manual | Major version release |
| `manual-redis-flush-prod.yml` | Manual | Flush Redis cache |
## Key Files
- `.gitea/workflows/deploy-to-test.yml` - Test deployment pipeline
- `.gitea/workflows/deploy-to-prod.yml` - Production deployment pipeline
- `.gitea/workflows/manual-db-backup.yml` - Database backup workflow
- `ecosystem.config.cjs` - PM2 configuration
## Related ADRs
- [ADR-014](./0014-containerization-and-deployment-strategy.md) - Containerization Strategy
- [ADR-010](./0010-testing-strategy-and-standards.md) - Testing Strategy
- [ADR-019](./0019-data-backup-and-recovery-strategy.md) - Backup Strategy

View File

@@ -2,17 +2,265 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
**Implemented**: 2026-01-11
## Context
As the API grows, it becomes increasingly difficult for frontend developers and other consumers to understand its endpoints, request formats, and response structures. There is no single source of truth for API documentation.
Key requirements:
1. **Developer Experience**: Developers need interactive documentation to explore and test API endpoints.
2. **Code-Documentation Sync**: Documentation should stay in sync with the actual code to prevent drift.
3. **Low Maintenance Overhead**: The documentation approach should be "fast and lite" - minimal additional work for developers.
4. **Security**: Documentation should not expose sensitive information in production environments.
## Decision
We will adopt **OpenAPI (Swagger)** for API documentation. We will use tools (e.g., JSDoc annotations with `swagger-jsdoc`) to generate an `openapi.json` specification directly from the route handler source code. This specification will be served via a UI like Swagger UI for interactive exploration.
We will adopt **OpenAPI 3.0 (Swagger)** for API documentation using the following approach:
1. **JSDoc Annotations**: Use `swagger-jsdoc` to generate OpenAPI specs from JSDoc comments in route files.
2. **Swagger UI**: Use `swagger-ui-express` to serve interactive documentation at `/docs/api-docs`.
3. **Environment Restriction**: Only expose the Swagger UI in development and test environments, not production.
4. **Incremental Adoption**: Start with key public routes and progressively add annotations to all endpoints.
### Tooling Selection
| Tool | Purpose |
| -------------------- | ---------------------------------------------- |
| `swagger-jsdoc` | Generates OpenAPI 3.0 spec from JSDoc comments |
| `swagger-ui-express` | Serves interactive Swagger UI |
**Why JSDoc over separate schema files?**
- Documentation lives with the code, reducing drift
- No separate files to maintain
- Developers see documentation when editing routes
- Lower learning curve for the team
## Implementation Details
### OpenAPI Configuration
Located in `src/config/swagger.ts`:
```typescript
import swaggerJsdoc from 'swagger-jsdoc';
const options: swaggerJsdoc.Options = {
definition: {
openapi: '3.0.0',
info: {
title: 'Flyer Crawler API',
version: '1.0.0',
description: 'API for the Flyer Crawler application',
contact: {
name: 'API Support',
},
},
servers: [
{
url: '/api',
description: 'API server',
},
],
components: {
securitySchemes: {
bearerAuth: {
type: 'http',
scheme: 'bearer',
bearerFormat: 'JWT',
},
},
},
},
apis: ['./src/routes/*.ts'],
};
export const swaggerSpec = swaggerJsdoc(options);
```
### JSDoc Annotation Pattern
Each route handler should include OpenAPI annotations using the `@openapi` tag:
```typescript
/**
* @openapi
* /health/ping:
* get:
* summary: Simple ping endpoint
* description: Returns a pong response to verify server is responsive
* tags:
* - Health
* responses:
* 200:
* description: Server is responsive
* content:
* application/json:
* schema:
* type: object
* properties:
* success:
* type: boolean
* example: true
* data:
* type: object
* properties:
* message:
* type: string
* example: pong
*/
router.get('/ping', validateRequest(emptySchema), (_req: Request, res: Response) => {
return sendSuccess(res, { message: 'pong' });
});
```
### Route Documentation Priority
Document routes in this order of priority:
1. **Health Routes** - `/api/health/*` (public, critical for operations)
2. **Auth Routes** - `/api/auth/*` (public, essential for integration)
3. **Gamification Routes** - `/api/achievements/*` (simple, good example)
4. **Flyer Routes** - `/api/flyers/*` (core functionality)
5. **User Routes** - `/api/users/*` (common CRUD patterns)
6. **Remaining Routes** - Budget, Recipe, Admin, etc.
### Swagger UI Setup
In `server.ts`, add the Swagger UI middleware (development/test only):
```typescript
import swaggerUi from 'swagger-ui-express';
import { swaggerSpec } from './src/config/swagger';
// Only serve Swagger UI in non-production environments
if (process.env.NODE_ENV !== 'production') {
app.use('/docs/api-docs', swaggerUi.serve, swaggerUi.setup(swaggerSpec));
// Optionally expose raw JSON spec for tooling
app.get('/docs/api-docs.json', (_req, res) => {
res.setHeader('Content-Type', 'application/json');
res.send(swaggerSpec);
});
}
```
### Response Schema Standardization
All API responses follow the standardized format from [ADR-028](./0028-api-response-standardization.md):
```typescript
// Success response
{
"success": true,
"data": { ... }
}
// Error response
{
"success": false,
"error": {
"code": "ERROR_CODE",
"message": "Human-readable message"
}
}
```
Define reusable schema components for these patterns:
```typescript
/**
* @openapi
* components:
* schemas:
* SuccessResponse:
* type: object
* properties:
* success:
* type: boolean
* example: true
* data:
* type: object
* ErrorResponse:
* type: object
* properties:
* success:
* type: boolean
* example: false
* error:
* type: object
* properties:
* code:
* type: string
* message:
* type: string
*/
```
### Security Considerations
1. **Production Disabled**: Swagger UI is not available in production to prevent information disclosure.
2. **No Sensitive Data**: Never include actual secrets, tokens, or PII in example values.
3. **Authentication Documented**: Clearly document which endpoints require authentication.
## API Route Tags
Organize endpoints using consistent tags:
| Tag | Description | Routes |
| ------------ | ---------------------------------- | --------------------- |
| Health | Server health and readiness checks | `/api/health/*` |
| Auth | Authentication and authorization | `/api/auth/*` |
| Users | User profile management | `/api/users/*` |
| Flyers | Flyer uploads and retrieval | `/api/flyers/*` |
| Achievements | Gamification and leaderboards | `/api/achievements/*` |
| Budgets | Budget tracking | `/api/budgets/*` |
| Recipes | Recipe management | `/api/recipes/*` |
| Admin | Administrative operations | `/api/admin/*` |
| System | System status and monitoring | `/api/system/*` |
## Testing
Verify API documentation is correct by:
1. **Manual Review**: Navigate to `/docs/api-docs` and test each endpoint.
2. **Spec Validation**: Use OpenAPI validators to check the generated spec.
3. **Integration Tests**: Existing integration tests serve as implicit documentation verification.
## Consequences
- **Positive**: Creates a single source of truth for API documentation that stays in sync with the code. Enables auto-generation of client SDKs and simplifies testing.
- **Negative**: Requires developers to maintain JSDoc annotations on all routes. Adds a build step and new dependencies to the project.
### Positive
- **Single Source of Truth**: Documentation lives with the code and stays in sync.
- **Interactive Exploration**: Developers can try endpoints directly from the UI.
- **SDK Generation**: OpenAPI spec enables automatic client SDK generation.
- **Onboarding**: New developers can quickly understand the API surface.
- **Low Overhead**: JSDoc annotations are minimal additions to existing code.
### Negative
- **Maintenance Required**: Developers must update annotations when routes change.
- **Build Dependency**: Adds `swagger-jsdoc` and `swagger-ui-express` packages.
- **Initial Investment**: Existing routes need annotations added incrementally.
### Mitigation
- Include documentation checks in code review process.
- Start with high-priority routes and expand coverage over time.
- Use TypeScript types to reduce documentation duplication where possible.
## Key Files
- `src/config/swagger.ts` - OpenAPI configuration
- `src/routes/*.ts` - Route files with JSDoc annotations
- `server.ts` - Swagger UI middleware setup
## Related ADRs
- [ADR-003](./0003-standardized-input-validation-using-middleware.md) - Input Validation (Zod schemas)
- [ADR-028](./0028-api-response-standardization.md) - Response Standardization
- [ADR-016](./0016-api-security-hardening.md) - Security Hardening

View File

@@ -2,7 +2,9 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
@@ -16,3 +18,210 @@ We will implement a formal data backup and recovery strategy. This will involve
- **Positive**: Protects against catastrophic data loss, ensuring business continuity. Provides a clear, tested plan for disaster recovery.
- **Negative**: Requires setup and maintenance of backup scripts and secure storage. Incurs storage costs for backup files.
## Implementation Details
### Backup Workflow
Located in `.gitea/workflows/manual-db-backup.yml`:
```yaml
name: Manual - Backup Production Database
on:
workflow_dispatch:
inputs:
confirmation:
description: 'Type "backup-production-db" to confirm'
required: true
jobs:
backup-database:
runs-on: projectium.com
env:
DB_HOST: ${{ secrets.DB_HOST }}
DB_PORT: ${{ secrets.DB_PORT }}
DB_USER: ${{ secrets.DB_USER_PROD }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD_PROD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
steps:
- name: Validate Secrets
run: |
if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ]; then
echo "ERROR: Database secrets not configured."
exit 1
fi
- name: Create Database Backup
run: |
TIMESTAMP=$(date +'%Y%m%d-%H%M%S')
BACKUP_FILENAME="flyer-crawler-prod-backup-${TIMESTAMP}.sql.gz"
# Create compressed backup
PGPASSWORD="$DB_PASSWORD" pg_dump \
-h "$DB_HOST" -p "$DB_PORT" \
-U "$DB_USER" -d "$DB_NAME" \
--clean --if-exists | gzip > "$BACKUP_FILENAME"
echo "backup_filename=$BACKUP_FILENAME" >> $GITEA_ENV
- name: Upload Backup as Artifact
uses: actions/upload-artifact@v3
with:
name: database-backup
path: ${{ env.backup_filename }}
```
### Restore Workflow
Located in `.gitea/workflows/manual-db-restore.yml`:
```yaml
name: Manual - Restore Database from Backup
on:
workflow_dispatch:
inputs:
confirmation:
description: 'Type "restore-from-backup" to confirm'
required: true
backup_file:
description: 'Path to backup file on server'
required: true
jobs:
restore-database:
steps:
- name: Verify Confirmation
run: |
if [ "${{ inputs.confirmation }}" != "restore-from-backup" ]; then
exit 1
fi
- name: Restore Database
run: |
# Decompress and restore
gunzip -c "${{ inputs.backup_file }}" | \
PGPASSWORD="$DB_PASSWORD" psql \
-h "$DB_HOST" -p "$DB_PORT" \
-U "$DB_USER" -d "$DB_NAME"
```
### Backup Command Reference
**Manual Backup**:
```bash
# Create compressed backup
PGPASSWORD="password" pg_dump \
-h localhost -p 5432 \
-U dbuser -d flyer-crawler \
--clean --if-exists | gzip > backup-$(date +%Y%m%d).sql.gz
# List backup contents (without restoring)
gunzip -c backup-20260109.sql.gz | head -100
```
**Manual Restore**:
```bash
# Restore from compressed backup
gunzip -c backup-20260109.sql.gz | \
PGPASSWORD="password" psql \
-h localhost -p 5432 \
-U dbuser -d flyer-crawler
```
### pg_dump Options
| Option | Purpose |
| ----------------- | ------------------------------ |
| `--clean` | Drop objects before recreating |
| `--if-exists` | Use IF EXISTS when dropping |
| `--no-owner` | Skip ownership commands |
| `--no-privileges` | Skip access privilege commands |
| `-F c` | Custom format (for pg_restore) |
| `-F p` | Plain text SQL (default) |
### Recovery Objectives
| Metric | Target | Current |
| ---------------------------------- | -------- | -------------- |
| **RPO** (Recovery Point Objective) | 24 hours | Manual trigger |
| **RTO** (Recovery Time Objective) | 1 hour | ~15 minutes |
### Backup Retention Policy
| Type | Retention | Storage |
| --------------- | --------- | ---------------- |
| Daily backups | 7 days | Gitea artifacts |
| Weekly backups | 4 weeks | Gitea artifacts |
| Monthly backups | 12 months | Off-site storage |
### Backup Verification
Periodically test backup integrity:
```bash
# Verify backup can be read
gunzip -t backup-20260109.sql.gz
# Test restore to a temporary database
createdb flyer-crawler-restore-test
gunzip -c backup-20260109.sql.gz | psql -d flyer-crawler-restore-test
# Verify data integrity...
dropdb flyer-crawler-restore-test
```
### Disaster Recovery Checklist
1. **Identify the Issue**
- Data corruption?
- Accidental deletion?
- Full database loss?
2. **Select Backup**
- Find most recent valid backup
- Download from Gitea artifacts or off-site storage
3. **Stop Application**
```bash
pm2 stop all
```
4. **Restore Database**
```bash
gunzip -c backup.sql.gz | psql -d flyer-crawler
```
5. **Verify Data**
- Check table row counts
- Verify recent data exists
- Test critical queries
6. **Restart Application**
```bash
pm2 start all
```
7. **Post-Mortem**
- Document incident
- Update procedures if needed
## Key Files
- `.gitea/workflows/manual-db-backup.yml` - Backup workflow
- `.gitea/workflows/manual-db-restore.yml` - Restore workflow
- `.gitea/workflows/manual-db-reset-test.yml` - Reset test database
- `.gitea/workflows/manual-db-reset-prod.yml` - Reset production database
- `sql/master_schema_rollup.sql` - Current schema definition
## Related ADRs
- [ADR-013](./0013-database-schema-migration-strategy.md) - Schema Migration Strategy
- [ADR-017](./0017-ci-cd-and-branching-strategy.md) - CI/CD Strategy

View File

@@ -2,7 +2,9 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
@@ -20,3 +22,195 @@ We will implement dedicated health check endpoints in the Express application.
- **Positive**: Enables robust, automated application lifecycle management in a containerized environment. Prevents traffic from being sent to unhealthy or uninitialized application instances.
- **Negative**: Adds a small amount of code for the health check endpoints. Requires configuration in the container orchestration layer.
## Implementation Status
### What's Implemented
- **Liveness Probe** (`/api/health/live`) - Simple process health check
- **Readiness Probe** (`/api/health/ready`) - Comprehensive dependency health check
- **Startup Probe** (`/api/health/startup`) - Initial startup verification
- **Individual Service Checks** - Database, Redis, Storage endpoints
- **Detailed Health Response** - Service latency, status, and details
## Implementation Details
### Probe Endpoints
| Endpoint | Purpose | Checks | HTTP Status |
| --------------------- | --------------- | ------------------ | ----------------------------- |
| `/api/health/live` | Liveness probe | Process running | 200 = alive |
| `/api/health/ready` | Readiness probe | DB, Redis, Storage | 200 = ready, 503 = not ready |
| `/api/health/startup` | Startup probe | Database only | 200 = started, 503 = starting |
### Liveness Probe
The liveness probe is intentionally simple with no external dependencies:
```typescript
// GET /api/health/live
{
"status": "ok",
"timestamp": "2026-01-09T12:00:00.000Z"
}
```
**Usage**: If this endpoint fails to respond, the container should be restarted.
### Readiness Probe
The readiness probe checks all critical dependencies:
```typescript
// GET /api/health/ready
{
"status": "healthy", // healthy | degraded | unhealthy
"timestamp": "2026-01-09T12:00:00.000Z",
"uptime": 3600.5,
"services": {
"database": {
"status": "healthy",
"latency": 5,
"details": {
"totalConnections": 10,
"idleConnections": 8,
"waitingConnections": 0
}
},
"redis": {
"status": "healthy",
"latency": 2
},
"storage": {
"status": "healthy",
"latency": 1,
"details": {
"path": "/var/www/.../flyer-images"
}
}
}
}
```
**Status Logic**:
- `healthy` - All critical services (database, Redis) are healthy
- `degraded` - Some non-critical issues (high connection wait, storage issues)
- `unhealthy` - Critical service unavailable (returns 503)
### Startup Probe
The startup probe is used during container initialization:
```typescript
// GET /api/health/startup
// Success (200):
{
"status": "started",
"timestamp": "2026-01-09T12:00:00.000Z",
"database": { "status": "healthy", "latency": 5 }
}
// Still starting (503):
{
"status": "starting",
"message": "Waiting for database connection",
"database": { "status": "unhealthy", "message": "..." }
}
```
### Individual Service Endpoints
For detailed diagnostics:
| Endpoint | Purpose |
| ----------------------- | ------------------------------- |
| `/api/health/ping` | Simple server responsiveness |
| `/api/health/db-schema` | Verify database tables exist |
| `/api/health/db-pool` | Database connection pool status |
| `/api/health/redis` | Redis connectivity |
| `/api/health/storage` | File storage accessibility |
| `/api/health/time` | Server time synchronization |
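These can be exercised directly during debugging, e.g.:

```bash
# Spot-check individual services (port 3001 = test environment)
curl -s http://localhost:3001/api/health/ping
curl -s http://localhost:3001/api/health/ready | jq '.services'
```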
## Kubernetes Configuration Example
```yaml
apiVersion: v1
kind: Pod
spec:
containers:
- name: flyer-crawler
livenessProbe:
httpGet:
path: /api/health/live
port: 3001
initialDelaySeconds: 10
periodSeconds: 15
failureThreshold: 3
readinessProbe:
httpGet:
path: /api/health/ready
port: 3001
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
startupProbe:
httpGet:
path: /api/health/startup
port: 3001
initialDelaySeconds: 0
periodSeconds: 5
failureThreshold: 30 # Allow up to 150 seconds for startup
```
## Docker Compose Configuration Example
```yaml
services:
api:
image: flyer-crawler:latest
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3001/api/health/ready']
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
```
## PM2 Configuration Example
For non-containerized deployments using PM2:
```javascript
// ecosystem.config.js
module.exports = {
apps: [
{
name: 'flyer-crawler',
script: 'dist/server.js',
// PM2 will check this endpoint
// and restart if it fails
health_check: {
url: 'http://localhost:3001/api/health/ready',
interval: 30000,
timeout: 10000,
},
},
],
};
```
## Key Files
- `src/routes/health.routes.ts` - Health check endpoint implementations
- `server.ts` - Health routes mounted at `/api/health`
## Service Health Thresholds
| Service | Healthy | Degraded | Unhealthy |
| -------- | ---------------------- | ----------------------- | ------------------- |
| Database | Responds to `SELECT 1` | > 3 waiting connections | Connection fails |
| Redis | `PING` returns `PONG` | N/A | Connection fails |
| Storage | Write access to path | N/A | Path not accessible |

View File

@@ -2,7 +2,9 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
@@ -10,10 +12,203 @@ The project contains both frontend (React) and backend (Node.js) code. While lin
## Decision
We will mandate the use of **Prettier** for automated code formatting and a unified **ESLint** configuration for code quality rules across both frontend and backend. This will be enforced automatically using a pre-commit hook managed by a tool like **Husky**.
We will mandate the use of **Prettier** for automated code formatting and a unified **ESLint** configuration for code quality rules across both frontend and backend. This will be enforced automatically using a pre-commit hook managed by **Husky** and **lint-staged**.
## Consequences
**Positive**: Improves developer experience and team velocity by automating code consistency. Reduces time spent on stylistic code review comments. Enhances code readability and maintainability.
**Negative**: Requires an initial setup and configuration of Prettier, ESLint, and Husky. May require a one-time reformatting of the entire codebase.
## Implementation Status
### What's Implemented
- **Prettier Configuration** - `.prettierrc` with consistent settings
- **Prettier Ignore** - `.prettierignore` to exclude generated files
- **ESLint Configuration** - `eslint.config.js` with TypeScript and React support
- **ESLint + Prettier Integration** - `eslint-config-prettier` to avoid conflicts
- **Husky Pre-commit Hooks** - Automatic enforcement on commit
- **lint-staged** - Run linters only on staged files for performance
## Implementation Details
### Prettier Configuration
The project uses a consistent Prettier configuration in `.prettierrc`:
```json
{
"semi": true,
"trailingComma": "all",
"singleQuote": true,
"printWidth": 100,
"tabWidth": 2,
"useTabs": false,
"endOfLine": "auto"
}
```
### ESLint Configuration
ESLint is configured with:
- TypeScript support via `typescript-eslint`
- React hooks rules via `eslint-plugin-react-hooks`
- React Refresh support for HMR
- Prettier compatibility via `eslint-config-prettier`
- **Relaxed rules for test files** (see below)
```javascript
// eslint.config.js (ESLint v9 flat config)
import globals from 'globals';
import tseslint from 'typescript-eslint';
import pluginReact from 'eslint-plugin-react';
import pluginReactHooks from 'eslint-plugin-react-hooks';
import pluginReactRefresh from 'eslint-plugin-react-refresh';
import eslintConfigPrettier from 'eslint-config-prettier';
export default tseslint.config(
// ... configurations
eslintConfigPrettier, // Must be last to override formatting rules
);
```
### Relaxed Linting Rules for Test Files
**Decision Date**: 2026-01-09
**Status**: Active (revisit when product nears final release)
The following ESLint rules are relaxed for test files (`*.test.ts`, `*.test.tsx`, `*.spec.ts`, `*.spec.tsx`):
| Rule | Setting | Rationale |
| ------------------------------------ | ------- | ---------------------------------------------------------------------------------------------------------- |
| `@typescript-eslint/no-explicit-any` | `off` | Mocking complexity often requires `any`; strict typing in tests adds friction without proportional benefit |
**Rationale**:
1. **Tests are not production code** - The primary goal of tests is verifying behavior, not type safety of the test code itself
2. **Mocking complexity** - Mocking libraries often require type gymnastics; `any` simplifies creating partial mocks and test doubles
3. **Testing edge cases** - Sometimes tests intentionally pass invalid types to verify error handling
4. **Development velocity** - Strict typing in tests slows down test writing without proportional benefit during active development
**Future Consideration**: This decision should be revisited when the product is nearing its final stages. At that point, stricter linting in tests may be warranted to ensure long-term maintainability.
```javascript
// eslint.config.js - Test file overrides
{
files: ['**/*.test.ts', '**/*.test.tsx', '**/*.spec.ts', '**/*.spec.tsx'],
rules: {
'@typescript-eslint/no-explicit-any': 'off',
},
}
```
### Pre-commit Hook
The pre-commit hook runs lint-staged automatically:
```bash
# .husky/pre-commit
npx lint-staged
```
### lint-staged Configuration
lint-staged runs appropriate tools based on file type:
```json
{
"*.{js,jsx,ts,tsx}": ["eslint --fix", "prettier --write"],
"*.{json,md,css,html,yml,yaml}": ["prettier --write"]
}
```
### NPM Scripts
| Script | Description |
| ------------------ | ---------------------------------------------- |
| `npm run format` | Format all files with Prettier |
| `npm run lint` | Run ESLint on all TypeScript/JavaScript files |
| `npm run validate` | Run Prettier check + TypeScript check + ESLint |
## Key Files
| File | Purpose |
| -------------------- | -------------------------------- |
| `.prettierrc` | Prettier configuration |
| `.prettierignore` | Files to exclude from formatting |
| `eslint.config.js` | ESLint flat configuration (v9) |
| `.husky/pre-commit` | Pre-commit hook script |
| `.lintstagedrc.json` | lint-staged configuration |
## Developer Workflow
### Automatic Formatting on Commit
When you commit changes:
1. Husky intercepts the commit
2. lint-staged identifies staged files
3. ESLint fixes auto-fixable issues
4. Prettier formats the code
5. Changes are automatically staged
6. Commit proceeds if no errors
### Manual Formatting
```bash
# Format entire codebase
npm run format
# Check formatting without changes
npx prettier --check .
# Run ESLint
npm run lint
# Run all validation checks
npm run validate
```
### IDE Integration
For the best experience, configure your IDE:
**VS Code** - Install extensions:
- Prettier - Code formatter
- ESLint
Add to `.vscode/settings.json`:
```json
{
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.fixAll.eslint": "explicit"
}
}
```
## Troubleshooting
### "eslint --fix failed"
ESLint may fail on unfixable errors. Review the output and manually fix the issues.
### "prettier --write failed"
Check for syntax errors in the file that prevent parsing.
### Bypassing Hooks (Emergency)
In rare cases, you may need to bypass hooks:
```bash
git commit --no-verify -m "emergency fix"
```
Use sparingly - the CI pipeline will still catch formatting issues.

View File

@@ -2,17 +2,374 @@
**Date**: 2025-12-12
**Status**: Proposed
**Status**: Accepted
**Implemented**: 2026-01-19
## Context
A core feature is providing "Active Deal Alerts" to users. The current HTTP-based architecture is not suitable for pushing real-time updates to clients efficiently. Relying on traditional polling would be inefficient and slow.
Users need to be notified immediately when:
1. **New deals are found** on their watched items
2. **System announcements** need to be broadcast
3. **Background jobs complete** that affect their data
Traditional approaches:
- **HTTP Polling**: Inefficient, creates unnecessary load, delays up to polling interval
- **Server-Sent Events (SSE)**: One-way only, no client-to-server messaging
- **WebSockets**: Bi-directional, real-time, efficient
## Decision
We will implement a real-time communication system using **WebSockets** (e.g., with the `ws` library or Socket.IO). This will involve an architecture for a notification service that listens for backend events (like a new deal from a background job) and pushes live updates to connected clients.
We will implement a real-time communication system using **WebSockets** with the `ws` library. This will involve:
1. **WebSocket Server**: Manages connections, authentication, and message routing
2. **React Hook**: Provides easy integration for React components
3. **Event Bus Integration**: Bridges WebSocket messages to in-app events
4. **Background Job Integration**: Emits WebSocket notifications when deals are found
### Design Principles
- **JWT Authentication**: WebSocket connections authenticated via JWT tokens
- **Type-Safe Messages**: Strongly-typed message formats prevent errors
- **Auto-Reconnect**: Client automatically reconnects with exponential backoff
- **Graceful Degradation**: Email + DB notifications remain for offline users
- **Heartbeat Ping/Pong**: Detect and cleanup dead connections
- **Singleton Service**: Single WebSocket service instance shared across app
## Implementation Details
### WebSocket Message Types
Located in `src/types/websocket.ts`:
```typescript
export interface WebSocketMessage<T = unknown> {
type: WebSocketMessageType;
data: T;
timestamp: string;
}
export type WebSocketMessageType =
| 'deal-notification'
| 'system-message'
| 'ping'
| 'pong'
| 'error'
| 'connection-established';
// Deal notification payload
export interface DealNotificationData {
notification_id?: string;
deals: DealInfo[];
user_id: string;
message: string;
}
// Type-safe message creators
export const createWebSocketMessage = {
dealNotification: (data: DealNotificationData) => ({ ... }),
systemMessage: (data: SystemMessageData) => ({ ... }),
error: (data: ErrorMessageData) => ({ ... }),
// ...
};
```
### WebSocket Server Service
Located in `src/services/websocketService.server.ts`:
```typescript
export class WebSocketService {
  private wss: WebSocketServer | null = null;
  private clients: Map<string, Set<AuthenticatedWebSocket>> = new Map();
  private pingInterval: NodeJS.Timeout | null = null;

  initialize(server: HTTPServer): void {
    this.wss = new WebSocketServer({
      server,
      path: '/ws',
    });
    this.wss.on('connection', (ws, request) => {
      this.handleConnection(ws, request);
    });
    this.startHeartbeat(); // Ping every 30s
  }

  // Authentication via JWT from query string or cookie
  private extractToken(request: IncomingMessage): string | null {
    // Extract from ?token=xxx or Cookie: accessToken=xxx
  }

  // Broadcast to a specific user
  broadcastDealNotification(userId: string, data: DealNotificationData): void {
    const message = createWebSocketMessage.dealNotification(data);
    this.broadcastToUser(userId, message);
  }

  // Broadcast to all users
  broadcastToAll(data: SystemMessageData): void {
    // Send to all connected clients
  }

  shutdown(): void {
    // Gracefully close all connections
  }
}

export const websocketService = new WebSocketService(globalLogger);
```
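The `startHeartbeat()` body is elided above. As a minimal sketch of how such a loop is typically built on the `ws` library (the `isAlive` flag, the helper names, and the 30-second default are illustrative assumptions, not the confirmed implementation):

```typescript
import { WebSocketServer, WebSocket } from 'ws';

// Hypothetical shape: each socket carries an isAlive flag toggled by pong replies.
interface HeartbeatWebSocket extends WebSocket {
  isAlive?: boolean;
}

// Every interval, terminate sockets that never answered the previous ping.
function startHeartbeat(wss: WebSocketServer, intervalMs = 30_000): NodeJS.Timeout {
  return setInterval(() => {
    for (const client of wss.clients as Set<HeartbeatWebSocket>) {
      if (client.isAlive === false) {
        client.terminate(); // No pong since the last ping: assume the connection is dead.
        continue;
      }
      client.isAlive = false; // Reset; the pong handler below flips it back.
      client.ping();
    }
  }, intervalMs);
}

// Called once per new connection so pongs mark the socket alive again.
function registerPongHandler(ws: HeartbeatWebSocket): void {
  ws.isAlive = true;
  ws.on('pong', () => {
    ws.isAlive = true;
  });
}
```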
### Server Integration
Located in `server.ts`:
```typescript
import { websocketService } from './src/services/websocketService.server';

if (process.env.NODE_ENV !== 'test') {
  const server = app.listen(PORT, () => {
    logger.info(`Authentication server started on port ${PORT}`);
  });

  // Initialize WebSocket server (ADR-022)
  websocketService.initialize(server);
  logger.info('WebSocket server initialized for real-time notifications');

  // Graceful shutdown
  const handleShutdown = (signal: string) => {
    websocketService.shutdown();
    gracefulShutdown(signal);
  };
  process.on('SIGINT', () => handleShutdown('SIGINT'));
  process.on('SIGTERM', () => handleShutdown('SIGTERM'));
}
```
### React Client Hook
Located in `src/hooks/useWebSocket.ts`:
```typescript
export function useWebSocket(options: UseWebSocketOptions = {}) {
  const [state, setState] = useState<WebSocketState>({
    isConnected: false,
    isConnecting: false,
    error: null,
  });

  const connect = useCallback(() => {
    const url = getWebSocketUrl(); // wss://host/ws?token=xxx
    const ws = new WebSocket(url);

    ws.onmessage = (event) => {
      const message = JSON.parse(event.data) as WebSocketMessage;
      // Emit to event bus for cross-component communication
      switch (message.type) {
        case 'deal-notification':
          eventBus.dispatch('notification:deal', message.data);
          break;
        case 'system-message':
          eventBus.dispatch('notification:system', message.data);
          break;
        // ...
      }
    };

    ws.onclose = () => {
      // Auto-reconnect with exponential backoff
      if (reconnectAttempts < maxReconnectAttempts) {
        setTimeout(connect, reconnectDelay * Math.pow(2, reconnectAttempts));
        reconnectAttempts++;
      }
    };
  }, []);

  useEffect(() => {
    if (autoConnect) connect();
    return () => disconnect();
  }, [autoConnect, connect, disconnect]);

  return { ...state, connect, disconnect, send };
}
```
### Background Job Integration
Located in `src/services/backgroundJobService.ts`:
```typescript
private async _processDealsForUser({ userProfile, deals }: UserDealGroup) {
  // ... existing email notification logic ...

  // Send real-time WebSocket notification (ADR-022)
  const { websocketService } = await import('./websocketService.server');
  websocketService.broadcastDealNotification(userProfile.user_id, {
    user_id: userProfile.user_id,
    deals: deals.map((deal) => ({
      item_name: deal.item_name,
      best_price_in_cents: deal.best_price_in_cents,
      store_name: deal.store.name,
      store_id: deal.store.store_id,
    })),
    message: `You have ${deals.length} new deal(s) on your watched items!`,
  });
}
```
### Usage in React Components
```typescript
import { useWebSocket } from '../hooks/useWebSocket';
import { useEventBus } from '../hooks/useEventBus';
import { useCallback } from 'react';
function NotificationComponent() {
  // Connect to the WebSocket
  const { isConnected, error } = useWebSocket({ autoConnect: true });

  // Listen for deal notifications via the event bus
  const handleDealNotification = useCallback((data: DealNotificationData) => {
    toast.success(`${data.deals.length} new deals found!`);
  }, []);
  useEventBus('notification:deal', handleDealNotification);

  return (
    <div>
      {isConnected ? '🟢 Live' : '🔴 Offline'}
    </div>
  );
}
```
## Architecture Diagram
```
┌─────────────────────────────────────────────────────────────┐
│ WebSocket Architecture │
└─────────────────────────────────────────────────────────────┘
Server Side:
┌──────────────────┐ ┌──────────────────┐ ┌─────────────────┐
│ Background Job │─────▶│ WebSocket │─────▶│ Connected │
│ (Deal Checker) │ │ Service │ │ Clients │
└──────────────────┘ └──────────────────┘ └─────────────────┘
│ ▲
│ │
▼ │
┌──────────────────┐ │
│ Email Queue │ │
│ (BullMQ) │ │
└──────────────────┘ │
│ │
▼ │
┌──────────────────┐ ┌──────────────────┐
│ DB Notification │ │ Express Server │
│ Storage │ │ + WS Upgrade │
└──────────────────┘ └──────────────────┘
Client Side:
┌──────────────────┐ ┌──────────────────┐ ┌─────────────────┐
│ useWebSocket │◀────▶│ WebSocket │◀────▶│ Event Bus │
│ Hook │ │ Connection │ │ Integration │
└──────────────────┘ └──────────────────┘ └─────────────────┘
┌──────────────────┐
│ UI Components │
│ (Notifications) │
└──────────────────┘
```
## Security Considerations
1. **Authentication**: JWT tokens required for WebSocket connections
2. **User Isolation**: Messages routed only to authenticated user's connections
3. **Stale-Connection Cleanup**: Heartbeat ping/pong detects and removes dead connections before they accumulate
4. **Graceful Shutdown**: Notifies clients before server shutdown
5. **Error Handling**: Failed WebSocket sends don't crash the server
## Consequences
### Positive
- **Real-time Updates**: Users see deals immediately when found
- **Better UX**: No page refresh needed, instant notifications
- **Efficient**: Single persistent connection vs polling every N seconds
- **Scalable**: Connection pooling per user, heartbeat cleanup
- **Type-Safe**: TypeScript types prevent message format errors
- **Resilient**: Auto-reconnect with exponential backoff
- **Observable**: Connection stats available via `getConnectionStats()`
- **Testable**: Comprehensive unit tests for message types and service
### Negative
- **Complexity**: WebSocket server adds new infrastructure component
- **Memory**: Each connection consumes server memory
- **Scaling**: Single-server implementation (multi-server requires Redis pub/sub)
- **Browser Support**: Requires WebSocket-capable browsers (all modern browsers)
- **Network**: Persistent connections require stable network
### Mitigation
- **Graceful Degradation**: Email + DB notifications remain for offline users
- **Connection Limits**: Can add max connections per user if needed
- **Monitoring**: Connection stats exposed for observability
- **Future Scaling**: Can add Redis pub/sub for multi-instance deployments
- **Heartbeat**: 30s ping/pong detects and cleans up dead connections
## Testing Strategy
### Unit Tests
Located in `src/services/websocketService.server.test.ts`:
```typescript
describe('WebSocketService', () => {
it('should initialize without errors', () => { ... });
it('should handle broadcasting with no active connections', () => { ... });
it('should shutdown gracefully', () => { ... });
});
```
Located in `src/types/websocket.test.ts`:
```typescript
describe('WebSocket Message Creators', () => {
it('should create valid deal notification messages', () => { ... });
it('should generate valid ISO timestamps', () => { ... });
});
```
### Integration Tests
Future work: Add integration tests that:
- Connect WebSocket clients to test server
- Verify authentication and message routing
- Test reconnection logic
- Validate message delivery
## Key Files
- `src/types/websocket.ts` - WebSocket message types and creators
- `src/services/websocketService.server.ts` - WebSocket server service
- `src/hooks/useWebSocket.ts` - React hook for WebSocket connections
- `src/services/backgroundJobService.ts` - Integration point for deal notifications
- `server.ts` - Express + WebSocket server initialization
- `src/services/websocketService.server.test.ts` - Unit tests
- `src/types/websocket.test.ts` - Message type tests
## Related ADRs
- [ADR-036](./0036-event-bus-and-pub-sub-pattern.md) - Event Bus Pattern (used by client hook)
- [ADR-042](./0042-email-and-notification-architecture.md) - Email Notifications (fallback mechanism)
- [ADR-006](./0006-background-job-processing-and-task-queues.md) - Background Jobs (triggers WebSocket notifications)

View File

@@ -0,0 +1,352 @@
# ADR-023: Database Normalization and Referential Integrity
**Date:** 2026-01-19
**Status:** Accepted
**Context:** API design violates database normalization principles
## Problem Statement
The application's API layer currently accepts string-based references (category names) instead of numerical IDs when creating relationships between entities. This violates database normalization principles and creates a brittle, error-prone API contract.
**Example of Current Problem:**
```typescript
// API accepts string:
POST /api/users/watched-items
{ "itemName": "Milk", "category": "Dairy & Eggs" } // ❌ String reference
// But database uses normalized foreign keys:
CREATE TABLE master_grocery_items (
  category_id BIGINT REFERENCES categories(category_id) -- Proper FK
)
```
This mismatch forces the service layer to perform string lookups on every request:
```typescript
// Service must do string matching:
const categoryRes = await client.query(
  'SELECT category_id FROM categories WHERE name = $1',
  [categoryName], // ❌ Error-prone string matching
);
```
## Database Normal Forms (In Order of Importance)
### 1. First Normal Form (1NF) ✅ Currently Satisfied
**Rule:** Each column contains atomic values; no repeating groups.
**Status:** ✅ **Compliant**
- All columns contain single values
- No arrays or delimited strings in columns
- Each row is uniquely identifiable
**Example:**
```sql
-- ✅ Good: Atomic values
CREATE TABLE master_grocery_items (
  master_grocery_item_id BIGINT PRIMARY KEY,
  name TEXT,
  category_id BIGINT
);

-- ❌ Bad: Non-atomic values (violates 1NF)
CREATE TABLE items (
  id BIGINT,
  categories TEXT -- "Dairy,Frozen,Snacks" (comma-delimited)
);
```
### 2. Second Normal Form (2NF) ✅ Currently Satisfied
**Rule:** No partial dependencies; all non-key columns depend on the entire primary key.
**Status:** ✅ **Compliant**
- All tables use single-column primary keys (no composite keys)
- All non-key columns depend on the entire primary key
**Example:**
```sql
-- ✅ Good: All columns depend on the full primary key
CREATE TABLE flyer_items (
  flyer_item_id BIGINT PRIMARY KEY,
  flyer_id BIGINT,       -- Depends on flyer_item_id
  master_item_id BIGINT, -- Depends on flyer_item_id
  price_in_cents INT     -- Depends on flyer_item_id
);

-- ❌ Bad: Partial dependency (violates 2NF)
CREATE TABLE flyer_items (
  flyer_id BIGINT,
  item_id BIGINT,
  store_name TEXT, -- Depends only on flyer_id, not (flyer_id, item_id)
  PRIMARY KEY (flyer_id, item_id)
);
```
### 3. Third Normal Form (3NF) ⚠️ VIOLATED IN API LAYER
**Rule:** No transitive dependencies; non-key columns depend only on the primary key, not on other non-key columns.
**Status:** ⚠️ **Database is compliant, but API layer violates this principle**
**Database Schema (Correct):**
```sql
-- ✅ Categories are normalized
CREATE TABLE categories (
  category_id BIGINT PRIMARY KEY,
  name TEXT NOT NULL UNIQUE
);

CREATE TABLE master_grocery_items (
  master_grocery_item_id BIGINT PRIMARY KEY,
  name TEXT,
  category_id BIGINT REFERENCES categories(category_id) -- Direct reference
);
```
**API Layer (Violates 3NF Principle):**
```typescript
// ❌ API accepts category name instead of ID
POST /api/users/watched-items
{
  "itemName": "Milk",
  "category": "Dairy & Eggs" // String! Should be category_id
}

// Service layer must denormalize by doing a lookup:
SELECT category_id FROM categories WHERE name = $1
```
This creates a **transitive dependency** in the application layer:
- `watched_item` → `category_name` → `category_id`
- Instead of the direct reference: `watched_item` → `category_id`
### 4. Boyce-Codd Normal Form (BCNF) ✅ Currently Satisfied
**Rule:** Every determinant is a candidate key (stricter version of 3NF).
**Status:** ✅ **Compliant**
- All foreign key references use primary keys
- No non-trivial functional dependencies where determinant is not a superkey
### 5. Fourth Normal Form (4NF) ✅ Currently Satisfied
**Rule:** No multi-valued dependencies; a record should not contain independent multi-valued facts.
**Status:** ✅ **Compliant**
- Junction tables properly separate many-to-many relationships
- Examples: `user_watched_items`, `shopping_list_items`, `recipe_ingredients`
### 6. Fifth Normal Form (5NF) ✅ Currently Satisfied
**Rule:** No join dependencies; tables cannot be decomposed further without loss of information.
**Status:** ✅ **Compliant** (as far as schema design goes)
## Impact of API Violation
### 1. Brittleness
```typescript
// Test fails because of exact string matching:
addWatchedItem('Milk', 'Dairy');        // ❌ Fails: not an exact match
addWatchedItem('Milk', 'Dairy & Eggs'); // ✅ Works: exact match
addWatchedItem('Milk', 'dairy & eggs'); // ❌ Fails: case-sensitive
```
### 2. No Discovery Mechanism
- No API endpoint to list available categories
- Frontend cannot dynamically populate dropdowns
- Clients must hardcode category names
### 3. Performance Penalty
```sql
-- Current: String lookup on every request
SELECT category_id FROM categories WHERE name = $1; -- Full table scan or index scan
-- Should be: Direct ID reference (no lookup needed)
INSERT INTO master_grocery_items (name, category_id) VALUES ($1, $2);
```
### 4. Impossible Localization
- Cannot translate category names without breaking the API
- Category names are hardcoded in English
### 5. Maintenance Burden
- Renaming a category breaks all API clients
- Must coordinate name changes across frontend, tests, and documentation
## Decision
**We adopt the following principles for all API design:**
### 1. Use Numerical IDs for All Foreign Key References
**Rule:** APIs MUST accept numerical IDs when creating relationships between entities.
```typescript
// ✅ CORRECT: Use IDs
POST /api/users/watched-items
{
  "itemName": "Milk",
  "category_id": 3 // Numerical ID
}

// ❌ INCORRECT: Use strings
POST /api/users/watched-items
{
  "itemName": "Milk",
  "category": "Dairy & Eggs" // String name
}
```
### 2. Provide Discovery Endpoints
**Rule:** For any entity referenced by ID, provide a GET endpoint to list available options.
```typescript
// Required: Category discovery endpoint
GET /api/categories

Response: [
  { category_id: 1, name: 'Fruits & Vegetables' },
  { category_id: 2, name: 'Meat & Seafood' },
  { category_id: 3, name: 'Dairy & Eggs' },
];
```
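A minimal Express handler for this discovery endpoint could look like the sketch below; the route file, the use of `getPool()` from `connection.db.ts`, and the plain `res.json` (rather than the ADR-028 envelope) are illustrative assumptions:

```typescript
import { Router, Request, Response, NextFunction } from 'express';
import { getPool } from '../services/db/connection.db';

const router = Router();

// GET /api/categories - lets clients discover valid category_id values.
router.get('/categories', async (_req: Request, res: Response, next: NextFunction) => {
  try {
    const result = await getPool().query('SELECT category_id, name FROM categories ORDER BY name');
    res.json(result.rows); // Could also be wrapped via sendSuccess (ADR-028).
  } catch (error) {
    next(error); // Delegate to the centralized error handler.
  }
});

export default router;
```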
### 3. Support Lookup by Name (Optional)
**Rule:** If convenient, provide query parameters for name-based lookup, but use IDs internally.
```typescript
// Optional: Convenience endpoint
GET /api/categories?name=Dairy%20%26%20Eggs
Response: { "category_id": 3, "name": "Dairy & Eggs" }
```
### 4. Return Full Objects in Responses
**Rule:** API responses SHOULD include denormalized data for convenience, but inputs MUST use IDs.
```typescript
// ✅ Response includes category details
GET /api/users/watched-items

Response: [
  {
    master_grocery_item_id: 42,
    name: 'Milk',
    category_id: 3,
    category: {
      // ✅ Include the full object in the response
      category_id: 3,
      name: 'Dairy & Eggs',
    },
  },
];
```
## Affected Areas
### Immediate Violations (Must Fix)
1. **User Watched Items** ([src/routes/user.routes.ts:76](../../src/routes/user.routes.ts))
- Currently: `category: string`
- Should be: `category_id: number`
2. **Service Layer** ([src/services/db/personalization.db.ts:175](../../src/services/db/personalization.db.ts))
- Currently: `categoryName: string`
- Should be: `categoryId: number`
3. **API Client** ([src/services/apiClient.ts:436](../../src/services/apiClient.ts))
- Currently: `category: string`
- Should be: `category_id: number`
4. **Frontend Hooks** ([src/hooks/mutations/useAddWatchedItemMutation.ts:9](../../src/hooks/mutations/useAddWatchedItemMutation.ts))
- Currently: `category?: string`
- Should be: `category_id: number`
### Potential Violations (Review Required)
1. **UPC/Barcode System** ([src/types/upc.ts:85](../../src/types/upc.ts))
- Uses `category: string | null`
- May be appropriate if category is free-form user input
2. **AI Extraction** ([src/types/ai.ts:21](../../src/types/ai.ts))
- Uses `category_name: z.string()`
- AI extracts category names, needs mapping to IDs
3. **Flyer Data Transformer** ([src/services/flyerDataTransformer.ts:40](../../src/services/flyerDataTransformer.ts))
- Uses `category_name: string`
- May need category matching/creation logic
## Migration Strategy
See [research-category-id-migration.md](../research-category-id-migration.md) for detailed migration plan.
**High-level approach:**
1. **Phase 1: Add category discovery endpoint** (non-breaking)
- `GET /api/categories`
- No API changes yet
2. **Phase 2: Support both formats** (non-breaking; see the sketch after this list)
- Accept both `category` (string) and `category_id` (number)
- Deprecate string format with warning logs
3. **Phase 3: Remove string support** (breaking change, major version bump)
- Only accept `category_id`
- Update all clients and tests
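As referenced in Phase 2, a sketch of dual-format handling during the transition (the helper name, request body shape, and error types are illustrative):

```typescript
import { PoolClient } from 'pg';

interface AddWatchedItemBody {
  itemName: string;
  category_id?: number; // Preferred, normalized format
  category?: string; // Deprecated string format, removed in Phase 3
}

// Resolve whichever format the client sent to a numerical category_id.
async function resolveCategoryId(body: AddWatchedItemBody, client: PoolClient): Promise<number> {
  if (body.category_id !== undefined) {
    return body.category_id;
  }
  if (body.category) {
    // Deprecation warning so remaining string-based clients can be found in logs.
    console.warn(`Deprecated: category name "${body.category}" used instead of category_id`);
    const res = await client.query<{ category_id: number }>(
      'SELECT category_id FROM categories WHERE name = $1',
      [body.category],
    );
    if (res.rows.length === 0) {
      throw new Error(`Unknown category: ${body.category}`);
    }
    return res.rows[0].category_id;
  }
  throw new Error('Either category_id or category must be provided');
}
```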
## Consequences
### Positive
- ✅ API matches database schema design
- ✅ More robust (no typo-based failures)
- ✅ Better performance (no string lookups)
- ✅ Enables localization
- ✅ Discoverable via REST API
- ✅ Follows REST best practices
### Negative
- ⚠️ Breaking change for existing API consumers
- ⚠️ Requires client updates
- ⚠️ More complex migration path
### Neutral
- Frontend must fetch categories before displaying form
- Slightly more initial API calls (one-time category fetch)
## References
- [Database Normalization (Wikipedia)](https://en.wikipedia.org/wiki/Database_normalization)
- [REST API Design Best Practices](https://stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design/)
- [PostgreSQL Foreign Keys](https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-FK)
## Related Decisions
- [ADR-001: Database Schema Design](./0001-database-schema-design.md) (if it exists)
- [ADR-014: Containerization and Deployment Strategy](./0014-containerization-and-deployment-strategy.md)
## Approval
- **Proposed by:** Claude Code (via user observation)
- **Date:** 2026-01-19
- **Status:** Accepted (pending implementation)

View File

@@ -2,7 +2,7 @@
**Date**: 2025-12-14
**Status**: Adopted
## Context

View File

@@ -0,0 +1,41 @@
# ADR-027: Standardized Naming Convention for AI and Database Types
**Date**: 2026-01-05
**Status**: Accepted
## Context
The application codebase primarily follows the standard TypeScript convention of `camelCase` for variable and property names. However, the PostgreSQL database uses `snake_case` for column names. Additionally, the AI prompts are designed to extract data that maps directly to these database columns.
Attempting to enforce `camelCase` strictly across the entire stack created friction and ambiguity, particularly in the background processing pipeline where data moves from the AI model directly to the database. Developers were unsure whether to transform keys immediately upon receipt (adding overhead) or keep them as-is.
## Decision
We will adopt a hybrid naming convention strategy to explicitly distinguish between internal application state and external/persisted data formats.
1. **Database and AI Types (`snake_case`)**:
Interfaces, Type definitions, and Zod schemas that represent raw database rows or direct AI responses **MUST** use `snake_case`.
- *Examples*: `AiFlyerDataSchema`, `ExtractedFlyerItemSchema`, `FlyerInsert`.
- *Reasoning*: This avoids unnecessary mapping layers when inserting data into the database or parsing AI output. It serves as a visual cue that the data is "raw", "external", or destined for persistence.
2. **Internal Application Logic (`camelCase`)**:
Variables, function arguments, and processed data structures used within the application logic (Service layer, UI components, utility functions) **MUST** use `camelCase`.
- *Reasoning*: This adheres to standard JavaScript/TypeScript practices and maintains consistency with the rest of the ecosystem (React, etc.).
3. **Boundary Handling**:
- For background jobs that primarily move data from AI to DB, preserving `snake_case` is preferred to minimize transformation logic.
- For API responses sent to the frontend, data should generally be transformed to `camelCase` unless it is a direct dump of a database entity for a specific administrative view.
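To make the boundary concrete, a small sketch of the two conventions side by side (the type and field names are illustrative, not taken from the codebase):

```typescript
// Raw database/AI shape: snake_case, used as-is for inserts and AI parsing.
interface FlyerItemRow {
  flyer_item_id: number;
  item_name: string;
  price_in_cents: number;
}

// Internal application shape: camelCase, used by services and UI components.
interface FlyerItem {
  flyerItemId: number;
  itemName: string;
  priceInCents: number;
}

// The transformation happens once, at the API boundary, not inside the pipeline.
function toFlyerItem(row: FlyerItemRow): FlyerItem {
  return {
    flyerItemId: row.flyer_item_id,
    itemName: row.item_name,
    priceInCents: row.price_in_cents,
  };
}
```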
## Consequences
### Positive
- **Visual Distinction**: It is immediately obvious whether a variable holds raw data (`price_in_cents`) or processed application state (`priceInCents`).
- **Efficiency**: Reduces boilerplate code for mapping keys (e.g., `price_in_cents: data.priceInCents`) when performing bulk inserts or updates.
- **Simplicity**: AI prompts can request JSON keys that match the database schema 1:1, reducing the risk of mapping errors.
### Negative
- **Context Switching**: Developers must be mindful of the casing context.
- **Linter Configuration**: May require specific overrides or `// eslint-disable-next-line` comments if the linter is configured to strictly enforce `camelCase` everywhere.

View File

@@ -0,0 +1,177 @@
# ADR-028: API Response Standardization and Envelope Pattern
**Date**: 2026-01-09
**Status**: Implemented
## Context
The API currently has inconsistent response formats across different endpoints:
1. Some endpoints return raw data arrays (`[{...}, {...}]`)
2. Some return wrapped objects (`{ data: [...] }`)
3. Pagination is handled inconsistently (some use `page`/`limit`, others use `offset`/`count`)
4. Error responses vary in structure between middleware and route handlers
5. No standard for including metadata (pagination info, request timing, etc.)
This inconsistency creates friction for:
- Frontend developers who must handle multiple response formats
- API documentation and client SDK generation
- Implementing consistent error handling across the application
- Future API versioning transitions
## Decision
We will adopt a standardized response envelope pattern for all API responses.
### Success Response Format
```typescript
interface ApiSuccessResponse<T> {
  success: true;
  data: T;
  meta?: {
    // Pagination (when applicable)
    pagination?: {
      page: number;
      limit: number;
      total: number;
      totalPages: number;
      hasNextPage: boolean;
      hasPrevPage: boolean;
    };
    // Timing
    requestId?: string;
    timestamp?: string;
    duration?: number;
  };
}
```
### Error Response Format
```typescript
interface ApiErrorResponse {
  success: false;
  error: {
    code: string; // Machine-readable error code (e.g., 'VALIDATION_ERROR')
    message: string; // Human-readable message
    details?: unknown; // Additional context (validation errors, etc.)
  };
  meta?: {
    requestId?: string;
    timestamp?: string;
  };
}
```
### Implementation Approach
1. **Response Helper Functions**: Create utility functions in `src/utils/apiResponse.ts` (a sketch of `sendSuccess` follows this list):
- `sendSuccess(res, data, meta?)`
- `sendPaginated(res, data, pagination)`
- `sendError(res, code, message, details?, statusCode?)`
2. **Error Handler Integration**: Update `errorHandler.ts` to use the standard error format
3. **Gradual Migration**: Apply to new endpoints immediately, migrate existing endpoints incrementally
4. **TypeScript Types**: Export response types for frontend consumption
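As a sketch of what the `sendSuccess` helper could look like (the actual signatures in `src/utils/apiResponse.ts` may differ):

```typescript
import { Response } from 'express';

interface SuccessMeta {
  requestId?: string;
  timestamp?: string;
  duration?: number;
}

// Wraps payloads in the standard success envelope defined above.
export function sendSuccess<T>(
  res: Response,
  data: T,
  meta?: SuccessMeta,
  statusCode = 200,
): Response {
  return res.status(statusCode).json({
    success: true,
    data,
    meta: {
      timestamp: new Date().toISOString(), // Filled in by default
      ...meta,
    },
  });
}
```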
## Consequences
### Positive
- **Consistency**: All responses follow a predictable structure
- **Type Safety**: Frontend can rely on consistent types
- **Debugging**: Request IDs and timestamps aid in issue investigation
- **Pagination**: Standardized pagination metadata reduces frontend complexity
- **API Evolution**: Envelope pattern makes it easier to add fields without breaking changes
### Negative
- **Verbosity**: Responses are slightly larger due to envelope overhead
- **Migration Effort**: Existing endpoints need updating
- **Learning Curve**: Developers must learn and use the helper functions
## Implementation Status
### What's Implemented
- ✅ Created `src/utils/apiResponse.ts` with helper functions (`sendSuccess`, `sendPaginated`, `sendError`, `sendNoContent`, `sendMessage`, `calculatePagination`)
- ✅ Created `src/types/api.ts` with response type definitions (`ApiSuccessResponse`, `ApiErrorResponse`, `PaginationMeta`, `ErrorCode`)
- ✅ Updated `src/middleware/errorHandler.ts` to use standard error format
- ✅ Migrated all route files to use standardized responses:
- `health.routes.ts`
- `flyer.routes.ts`
- `deals.routes.ts`
- `budget.routes.ts`
- `personalization.routes.ts`
- `price.routes.ts`
- `reactions.routes.ts`
- `stats.routes.ts`
- `system.routes.ts`
- `gamification.routes.ts`
- `recipe.routes.ts`
- `auth.routes.ts`
- `user.routes.ts`
- `admin.routes.ts`
- `ai.routes.ts`
### Error Codes
The following error codes are defined in `src/types/api.ts`:
| Code | HTTP Status | Description |
| ------------------------ | ----------- | ----------------------------------- |
| `VALIDATION_ERROR` | 400 | Request validation failed |
| `BAD_REQUEST` | 400 | Malformed request |
| `UNAUTHORIZED` | 401 | Authentication required |
| `FORBIDDEN` | 403 | Insufficient permissions |
| `NOT_FOUND` | 404 | Resource not found |
| `CONFLICT` | 409 | Resource conflict (e.g., duplicate) |
| `RATE_LIMITED` | 429 | Too many requests |
| `PAYLOAD_TOO_LARGE` | 413 | Request body too large |
| `INTERNAL_ERROR` | 500 | Server error |
| `NOT_IMPLEMENTED` | 501 | Feature not yet implemented |
| `SERVICE_UNAVAILABLE` | 503 | Service temporarily unavailable |
| `EXTERNAL_SERVICE_ERROR` | 502 | External service failure |
## Example Usage
```typescript
// In a route handler
router.get('/flyers', async (req, res, next) => {
  try {
    // Query parameters arrive as strings; coerce before use.
    const page = Number(req.query.page ?? 1);
    const limit = Number(req.query.limit ?? 20);
    const { flyers, total } = await flyerService.getFlyers({ page, limit });
    return sendPaginated(res, flyers, { page, limit, total });
  } catch (error) {
    next(error);
  }
});

// Response:
// {
//   "success": true,
//   "data": [...],
//   "meta": {
//     "pagination": {
//       "page": 1,
//       "limit": 20,
//       "total": 150,
//       "totalPages": 8,
//       "hasNextPage": true,
//       "hasPrevPage": false
//     },
//     "requestId": "abc-123",
//     "timestamp": "2026-01-09T12:00:00.000Z"
//   }
// }
```

View File

@@ -0,0 +1,147 @@
# ADR-029: Secret Rotation and Key Management Strategy
**Date**: 2026-01-09
**Status**: Proposed
## Context
While ADR-007 covers configuration validation at startup, it does not address the lifecycle management of secrets:
1. **JWT Secrets**: If the JWT_SECRET is rotated, all existing user sessions are immediately invalidated
2. **Database Credentials**: No documented procedure for rotating database passwords without downtime
3. **API Keys**: External service API keys (AI services, geocoding) have no rotation strategy
4. **Emergency Revocation**: No process for immediately invalidating compromised credentials
Current risks:
- Long-lived secrets that never change become high-value targets
- No ability to rotate secrets without application restart
- No audit trail of when secrets were last rotated
- Compromised keys could remain active indefinitely
## Decision
We will implement a comprehensive secret rotation and key management strategy.
### 1. JWT Secret Rotation with Dual-Key Support
Support multiple JWT secrets simultaneously to enable zero-downtime rotation:
```typescript
// Environment variables (shell syntax):
//   JWT_SECRET=current_secret
//   JWT_SECRET_PREVIOUS=old_secret  (optional, set only during the transition period)

// Token verification tries the current secret first, then falls back to the previous one
const verifyToken = (token: string) => {
  try {
    return jwt.verify(token, process.env.JWT_SECRET!); // Presence validated at startup (ADR-007)
  } catch {
    if (process.env.JWT_SECRET_PREVIOUS) {
      return jwt.verify(token, process.env.JWT_SECRET_PREVIOUS);
    }
    throw new AuthenticationError('Invalid token');
  }
};
```
### 2. Database Credential Rotation
Document and implement a procedure for PostgreSQL credential rotation:
1. Create new database user with identical permissions
2. Update application configuration to use new credentials
3. Restart application instances (rolling restart)
4. Remove old database user after all instances updated
5. Log rotation event for audit purposes
### 3. API Key Management
For external service API keys (Google AI, geocoding services):
1. **Naming Convention**: `{SERVICE}_API_KEY` and `{SERVICE}_API_KEY_PREVIOUS`
2. **Fallback Logic**: Try the primary key, fall back to the previous key on 401/403 (sketched after this list)
3. **Health Checks**: Validate API keys on startup
4. **Usage Logging**: Track which key is being used for each request
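As referenced in the list above, a sketch of the fallback logic (the error-shape check and helper names are illustrative assumptions):

```typescript
// Treat HTTP 401/403 as "this key was rejected"; other failures are rethrown as-is.
function isAuthError(error: unknown): boolean {
  const status = (error as { status?: number }).status;
  return status === 401 || status === 403;
}

// Runs a call with the primary key, retrying once with the previous key on auth errors.
async function callWithKeyFallback<T>(
  call: (apiKey: string) => Promise<T>,
  service: string, // e.g. 'GEMINI' -> GEMINI_API_KEY / GEMINI_API_KEY_PREVIOUS
): Promise<T> {
  const primary = process.env[`${service}_API_KEY`];
  const previous = process.env[`${service}_API_KEY_PREVIOUS`];
  if (!primary) throw new Error(`${service}_API_KEY is not configured`);
  try {
    return await call(primary);
  } catch (error) {
    if (previous && isAuthError(error)) {
      console.warn(`${service}: primary key rejected, retrying with previous key`);
      return call(previous);
    }
    throw error;
  }
}
```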
### 4. Emergency Revocation Procedures
Document emergency procedures for:
- **JWT Compromise**: Set new JWT_SECRET, clear all refresh tokens from database
- **Database Compromise**: Rotate credentials immediately, audit access logs
- **API Key Compromise**: Regenerate at provider, update environment, restart
### 5. Secret Audit Trail
Track secret lifecycle events:
- When secrets were last rotated
- Who initiated the rotation
- Which instances are using which secrets
## Implementation Approach
### Phase 1: Dual JWT Secret Support
- Modify token verification to support fallback secret
- Add JWT_SECRET_PREVIOUS to configuration schema
- Update documentation
### Phase 2: Rotation Scripts
- Create `scripts/rotate-jwt-secret.sh`
- Create `scripts/rotate-db-credentials.sh`
- Add rotation instructions to operations runbook
### Phase 3: API Key Fallback
- Wrap external API clients with fallback logic
- Add key validation to health checks
- Implement key usage logging
## Consequences
### Positive
- **Zero-Downtime Rotation**: Secrets can be rotated without invalidating all sessions
- **Reduced Risk**: Regular rotation limits exposure window for compromised credentials
- **Audit Trail**: Clear record of when secrets were changed
- **Emergency Response**: Documented procedures for security incidents
### Negative
- **Complexity**: Dual-key logic adds code complexity
- **Operations Overhead**: Regular rotation requires operational discipline
- **Testing**: Rotation procedures need to be tested periodically
## Implementation Status
### What's Implemented
- ❌ Not yet implemented
### What Needs To Be Done
1. Implement dual JWT secret verification
2. Create rotation scripts
3. Document emergency procedures
4. Add secret validation to health checks
5. Create rotation schedule recommendations
## Key Files (To Be Created)
- `src/utils/secretManager.ts` - Secret rotation utilities
- `scripts/rotate-jwt-secret.sh` - JWT rotation script
- `scripts/rotate-db-credentials.sh` - Database credential rotation
- `docs/operations/secret-rotation.md` - Operations runbook
## Rotation Schedule Recommendations
| Secret Type | Rotation Frequency | Grace Period |
| ------------------ | -------------------------- | ----------------- |
| JWT_SECRET | 90 days | 7 days (dual-key) |
| Database Passwords | 180 days | Rolling restart |
| AI API Keys | On suspicion of compromise | Immediate |
| Refresh Tokens | 7-day max age | N/A (per-token) |

View File

@@ -0,0 +1,150 @@
# ADR-030: Graceful Degradation and Circuit Breaker Pattern
**Date**: 2026-01-09
**Status**: Proposed
## Context
The application depends on several external services:
1. **AI Services** (Google Gemini) - For flyer item extraction
2. **Redis** - For caching, rate limiting, and job queues
3. **PostgreSQL** - Primary data store
4. **Geocoding APIs** - For location services
Currently, when these services fail:
- AI failures may cause the entire upload to fail
- Redis unavailability could crash the application or bypass rate limiting
- No circuit breakers prevent repeated calls to failing services
- No fallback behaviors are defined
This creates fragility where a single service outage can cascade into application-wide failures.
## Decision
We will implement a graceful degradation strategy with circuit breakers for external service dependencies.
### 1. Circuit Breaker Pattern
Implement circuit breakers for external service calls using a library like `opossum`:
```typescript
import CircuitBreaker from 'opossum';

const aiCircuitBreaker = new CircuitBreaker(callAiService, {
  timeout: 30000, // 30 second timeout
  errorThresholdPercentage: 50, // Open circuit at 50% failures
  resetTimeout: 30000, // Try again after 30 seconds
  volumeThreshold: 5, // Minimum calls before calculating error %
});

aiCircuitBreaker.on('open', () => {
  logger.warn('AI service circuit breaker opened');
});

aiCircuitBreaker.on('halfOpen', () => {
  logger.info('AI service circuit breaker half-open, testing...');
});
```
### 2. Fallback Behaviors by Service
| Service | Fallback Behavior |
| ---------------------- | ---------------------------------------- |
| **Redis (Cache)** | Skip cache, query database directly |
| **Redis (Rate Limit)** | Log warning, allow request (fail-open) |
| **Redis (Queues)** | Queue to memory, process synchronously |
| **AI Service** | Return partial results, queue for retry |
| **Geocoding** | Return null location, allow manual entry |
| **PostgreSQL** | No fallback - critical dependency |
### 3. Health Status Aggregation
Extend health checks (ADR-020) to report service-level health:
```typescript
// GET /api/health/ready response
{
  "status": "degraded", // healthy | degraded | unhealthy
  "services": {
    "database": { "status": "healthy", "latency": 5 },
    "redis": { "status": "healthy", "latency": 2 },
    "ai": { "status": "degraded", "circuitState": "half-open" },
    "geocoding": { "status": "healthy", "latency": 150 }
  }
}
```
### 4. Retry Strategies
Define retry policies for transient failures:
```typescript
const retryConfig = {
  ai: { maxRetries: 3, backoff: 'exponential', initialDelay: 1000 },
  geocoding: { maxRetries: 2, backoff: 'linear', initialDelay: 500 },
  database: { maxRetries: 3, backoff: 'exponential', initialDelay: 100 },
};
```
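A generic helper consuming this configuration might look like the following sketch (the `withRetry` wrapper itself is an assumption; only the policy shape comes from the config above):

```typescript
type BackoffKind = 'exponential' | 'linear';

interface RetryPolicy {
  maxRetries: number;
  backoff: BackoffKind;
  initialDelay: number; // milliseconds
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Retries transient failures with the configured backoff before giving up.
async function withRetry<T>(fn: () => Promise<T>, policy: RetryPolicy): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= policy.maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === policy.maxRetries) break;
      const delay =
        policy.backoff === 'exponential'
          ? policy.initialDelay * 2 ** attempt // 1x, 2x, 4x, ...
          : policy.initialDelay * (attempt + 1); // 1x, 2x, 3x, ...
      await sleep(delay);
    }
  }
  throw lastError;
}
```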
## Implementation Approach
### Phase 1: Redis Fallbacks
- Wrap cache operations with try-catch (already partially done in cacheService)
- Add fail-open for rate limiting when Redis is down (sketched below)
- Log degraded state
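A sketch of the fail-open counter (the `incr`/`expire` fixed-window scheme is illustrative; the real limiter may sit behind a store abstraction):

```typescript
// Minimal ioredis-like surface so the sketch stays self-contained.
interface RedisCounter {
  incr(key: string): Promise<number>;
  expire(key: string, seconds: number): Promise<number>;
}

// Returns true if the request is allowed. If Redis is down, log and allow
// the request (fail-open) instead of rejecting all traffic.
async function checkRateLimit(
  redis: RedisCounter,
  key: string,
  max: number,
  windowSeconds: number,
): Promise<boolean> {
  try {
    const count = await redis.incr(key);
    if (count === 1) {
      await redis.expire(key, windowSeconds); // First hit starts the window.
    }
    return count <= max;
  } catch (error) {
    console.warn('Rate limiter degraded: Redis unavailable, failing open', error);
    return true; // Availability over strict enforcement.
  }
}
```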
### Phase 2: AI Circuit Breaker
- Wrap AI service calls with circuit breaker
- Implement queue-for-retry on circuit open
- Add manual fallback UI for failed extractions
### Phase 3: Health Aggregation
- Update health endpoints with service status
- Add Prometheus-compatible metrics
- Create dashboard for service health
## Consequences
### Positive
- **Resilience**: Application continues functioning during partial outages
- **User Experience**: Degraded but functional is better than complete failure
- **Observability**: Clear visibility into service health
- **Protection**: Circuit breakers prevent cascading failures
### Negative
- **Complexity**: Additional code for fallback logic
- **Testing**: Requires testing failure scenarios
- **Consistency**: Some operations may have different results during degradation
## Implementation Status
### What's Implemented
- ✅ Cache operations fail gracefully (cacheService.server.ts)
- ❌ Circuit breakers for AI services
- ❌ Rate limit fail-open behavior
- ❌ Health aggregation endpoint
- ❌ Retry strategies with backoff
### What Needs To Be Done
1. Install and configure `opossum` circuit breaker library
2. Wrap AI service calls with circuit breaker
3. Add fail-open to rate limiting
4. Extend health endpoints with service status
5. Document degraded mode behaviors
## Key Files
- `src/utils/circuitBreaker.ts` - Circuit breaker configurations (to create)
- `src/services/cacheService.server.ts` - Already has graceful fallbacks
- `src/routes/health.routes.ts` - Health check endpoints (to extend)
- `src/services/aiService.server.ts` - AI service wrapper (to wrap)

View File

@@ -0,0 +1,199 @@
# ADR-031: Data Retention and Privacy Compliance (GDPR/CCPA)
**Date**: 2026-01-09
**Status**: Proposed
## Context
The application stores various types of user data:
1. **User Accounts**: Email, password hash, profile information
2. **Shopping Lists**: Personal shopping preferences and history
3. **Watch Lists**: Tracked items and price alerts
4. **Activity Logs**: User actions for analytics and debugging
5. **Tracking Data**: Page views, interactions, feature usage
Current gaps in privacy compliance:
- **No Data Retention Policies**: Activity logs accumulate indefinitely
- **No User Data Export**: Users cannot export their data (GDPR Article 20)
- **No User Data Deletion**: No self-service account deletion (GDPR Article 17)
- **No Cookie Consent**: Cookie usage not disclosed or consented
- **No Privacy Policy Enforcement**: Privacy commitments not enforced in code
These gaps create legal exposure for users in EU (GDPR) and California (CCPA).
## Decision
We will implement comprehensive data retention and privacy compliance features.
### 1. Data Retention Policies
| Data Type | Retention Period | Deletion Method |
| ------------------------- | ------------------------ | ------------------------ |
| **Activity Logs** | 90 days | Automated cleanup job |
| **Tracking Events** | 30 days | Automated cleanup job |
| **Deleted User Data** | 30 days (soft delete) | Hard delete after period |
| **Expired Sessions** | 7 days after expiry | Token cleanup job |
| **Failed Login Attempts** | 24 hours | Automated cleanup |
| **Flyer Data** | Indefinite (public data) | N/A |
| **User Shopping Lists** | Until account deletion | With account |
| **User Watch Lists** | Until account deletion | With account |
### 2. User Data Export (Right to Portability)
Implement `GET /api/users/me/export` endpoint:
```typescript
interface UserDataExport {
  exportDate: string;
  user: {
    email: string;
    created_at: string;
    profile: ProfileData;
  };
  shoppingLists: ShoppingList[];
  watchedItems: WatchedItem[];
  priceAlerts: PriceAlert[];
  achievements: Achievement[];
  // Excluded: password hash, internal IDs, admin flags
}
```
Export formats: JSON (primary), CSV (optional)
### 3. User Data Deletion (Right to Erasure)
Implement `DELETE /api/users/me` endpoint:
1. **Soft Delete**: Mark account as deleted, anonymize PII
2. **Grace Period**: 30 days to restore account
3. **Hard Delete**: Permanently remove all user data after grace period
4. **Audit Log**: Record deletion request (anonymized)
Deletion cascade (a soft-delete sketch follows this list):
- User account → Anonymize email/name
- Shopping lists → Delete
- Watch lists → Delete
- Achievements → Delete
- Activity logs → Anonymize user_id
- Sessions/tokens → Delete immediately
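A sketch of the soft-delete step (table and column names are illustrative; the real schema may differ):

```typescript
import { PoolClient } from 'pg';

// Anonymizes PII and stamps deleted_at; the hard-delete cleanup job removes
// the row for good once the 30-day grace period has elapsed.
async function softDeleteUser(client: PoolClient, userId: string): Promise<void> {
  await client.query(
    `UPDATE users
     SET email = 'deleted-' || user_id::text || '@example.invalid',
         full_name = NULL,
         deleted_at = NOW()
     WHERE user_id = $1`,
    [userId],
  );
  // Sessions and refresh tokens are revoked immediately, not grace-period deferred.
  await client.query('DELETE FROM refresh_tokens WHERE user_id = $1', [userId]);
}
```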
### 4. Cookie Consent
Implement cookie consent banner:
```typescript
// Cookie categories
enum CookieCategory {
  ESSENTIAL = 'essential', // Always allowed (auth, CSRF)
  FUNCTIONAL = 'functional', // Dark mode, preferences
  ANALYTICS = 'analytics', // Usage tracking
}

// Store consent in localStorage and server-side
interface CookieConsent {
  essential: true; // Cannot be disabled
  functional: boolean;
  analytics: boolean;
  consentDate: string;
  consentVersion: string;
}
```
### 5. Privacy Policy Enforcement
Enforce privacy commitments in code:
- Email addresses never logged in plaintext
- Passwords never logged (already in pino redact config)
- IP addresses anonymized after 7 days
- Third-party data sharing requires explicit consent
## Implementation Approach
### Phase 1: Data Retention Jobs
- Create retention cleanup job in background job service
- Add activity_log retention (90 days)
- Add tracking_events retention (30 days)
### Phase 2: User Data Export
- Create export endpoint
- Implement data aggregation query
- Add rate limiting (1 export per 24h)
### Phase 3: Account Deletion
- Implement soft delete with anonymization
- Create hard delete cleanup job
- Add account recovery endpoint
### Phase 4: Cookie Consent
- Create consent banner component
- Store consent preferences
- Gate analytics based on consent
## Consequences
### Positive
- **Legal Compliance**: Meets GDPR and CCPA requirements
- **User Trust**: Demonstrates commitment to privacy
- **Data Hygiene**: Automatic cleanup prevents data bloat
- **Reduced Liability**: Less data = less risk
### Negative
- **Implementation Effort**: Significant feature development
- **Operational Complexity**: Deletion jobs need monitoring
- **Feature Limitations**: Some features may be limited without consent
## Implementation Status
### What's Implemented
- ✅ Token cleanup job exists (tokenCleanupQueue)
- ❌ Activity log retention
- ❌ User data export endpoint
- ❌ Account deletion endpoint
- ❌ Cookie consent banner
- ❌ Data anonymization functions
### What Needs To Be Done
1. Add activity_log cleanup to background jobs
2. Create `/api/users/me/export` endpoint
3. Create `/api/users/me` DELETE endpoint with soft delete
4. Implement cookie consent UI component
5. Document data retention in privacy policy
6. Add anonymization utility functions
## Key Files (To Be Created/Modified)
- `src/services/backgroundJobService.ts` - Add retention jobs
- `src/routes/user.routes.ts` - Add export/delete endpoints
- `src/services/privacyService.server.ts` - Data export/deletion logic
- `src/components/CookieConsent.tsx` - Consent banner
- `src/utils/anonymize.ts` - Data anonymization utilities
## Compliance Checklist
### GDPR Requirements
- [ ] Article 15: Right of Access (data export)
- [ ] Article 17: Right to Erasure (account deletion)
- [ ] Article 20: Right to Data Portability (JSON export)
- [ ] Article 7: Conditions for Consent (cookie consent)
- [ ] Article 13: Information to be Provided (privacy policy)
### CCPA Requirements
- [ ] Right to Know (data export)
- [ ] Right to Delete (account deletion)
- [ ] Right to Opt-Out (cookie consent for analytics)
- [ ] Non-Discrimination (no feature penalty for privacy choices)

View File

@@ -0,0 +1,147 @@
# ADR-032: Rate Limiting Strategy
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
Public-facing APIs are vulnerable to abuse through excessive requests, whether from malicious actors attempting denial-of-service attacks, automated scrapers, or accidental loops in client code. Without proper rate limiting, the application could:
1. **Experience degraded performance**: Excessive requests can overwhelm database connections and server resources
2. **Incur unexpected costs**: AI service calls (Gemini API) and external APIs (Google Maps) are billed per request
3. **Allow credential stuffing**: Login endpoints without limits enable brute-force attacks
4. **Suffer from data scraping**: Public endpoints could be scraped at high volume
## Decision
We will implement a tiered rate limiting strategy using `express-rate-limit` middleware, with different limits based on endpoint sensitivity and resource cost.
### Tier System
| Tier | Window | Max Requests | Use Case |
| --------------------------- | ------ | ------------ | -------------------------------- |
| **Authentication (Strict)** | 15 min | 5 | Login, registration |
| **Sensitive Operations** | 1 hour | 5 | Password changes, email updates |
| **AI/Costly Operations** | 15 min | 10-20 | Gemini API calls, geocoding |
| **File Uploads** | 15 min | 10-20 | Flyer uploads, avatar uploads |
| **Batch Operations** | 15 min | 50 | Bulk updates |
| **User Read** | 15 min | 100 | Standard authenticated endpoints |
| **Public Read** | 15 min | 100 | Public data endpoints |
| **Tracking/High-Volume** | 15 min | 150-200 | Analytics, reactions |
### Rate Limiter Configuration
All rate limiters share a standard configuration:
```typescript
const standardConfig = {
  standardHeaders: true, // Return rate limit info in RateLimit-* headers
  legacyHeaders: false, // Disable deprecated X-RateLimit-* headers
  skip: shouldSkipRateLimit, // Allow bypassing in the test environment
};
```
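For example, the strict login limiter could be defined as follows (the limits match the tier table; the exact definition in `src/config/rateLimiters.ts` may differ):

```typescript
import rateLimit from 'express-rate-limit';

export const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  max: 5, // 5 attempts per window per IP
  message: {
    message: 'Too many login attempts from this IP, please try again after 15 minutes.',
  },
  ...standardConfig, // standardHeaders, legacyHeaders, skip (see above)
});
```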
### Test Environment Bypass
Rate limiting is bypassed during integration and E2E tests to avoid test flakiness:
```typescript
export const shouldSkipRateLimit = (req: Request): boolean => {
  return process.env.NODE_ENV === 'test';
};
```
## Implementation Details
### Available Rate Limiters
| Limiter | Window | Max | Endpoint Examples |
| ---------------------------- | ------ | --- | --------------------------------- |
| `loginLimiter` | 15 min | 5 | POST /api/auth/login |
| `registerLimiter` | 1 hour | 5 | POST /api/auth/register |
| `forgotPasswordLimiter` | 15 min | 5 | POST /api/auth/forgot-password |
| `resetPasswordLimiter` | 15 min | 10 | POST /api/auth/reset-password |
| `refreshTokenLimiter` | 15 min | 20 | POST /api/auth/refresh |
| `logoutLimiter` | 15 min | 10 | POST /api/auth/logout |
| `publicReadLimiter` | 15 min | 100 | GET /api/flyers, GET /api/recipes |
| `userReadLimiter` | 15 min | 100 | GET /api/users/profile |
| `userUpdateLimiter` | 15 min | 100 | PUT /api/users/profile |
| `userSensitiveUpdateLimiter` | 1 hour | 5 | PUT /api/auth/change-password |
| `adminTriggerLimiter` | 15 min | 30 | POST /api/admin/jobs/\* |
| `aiGenerationLimiter` | 15 min | 20 | POST /api/ai/analyze |
| `aiUploadLimiter` | 15 min | 10 | POST /api/ai/upload-and-process |
| `geocodeLimiter` | 1 hour | 100 | GET /api/users/geocode |
| `priceHistoryLimiter` | 15 min | 50 | GET /api/price-history/\* |
| `reactionToggleLimiter` | 15 min | 150 | POST /api/reactions/toggle |
| `trackingLimiter` | 15 min | 200 | POST /api/personalization/track |
| `batchLimiter` | 15 min | 50 | PATCH /api/budgets/batch |
### Usage Pattern
```typescript
import { loginLimiter, userReadLimiter } from '../config/rateLimiters';
// Apply to individual routes
router.post('/login', loginLimiter, validateRequest(loginSchema), async (req, res, next) => {
  // handler
});

// Or apply to an entire router for consistent limits
router.use(userReadLimiter);
router.get('/me', async (req, res, next) => {
  /* handler */
});
```
### Response Headers
When rate limiting is active, responses include standard headers:
```
RateLimit-Limit: 100
RateLimit-Remaining: 95
RateLimit-Reset: 900
```
### Rate Limit Exceeded Response
When a client exceeds their limit:
```json
{
  "message": "Too many login attempts from this IP, please try again after 15 minutes."
}
```
HTTP Status: `429 Too Many Requests`
## Key Files
- `src/config/rateLimiters.ts` - Rate limiter definitions
- `src/utils/rateLimit.ts` - Helper functions (test bypass)
## Consequences
### Positive
- **Security**: Protects against brute-force and credential stuffing attacks
- **Cost Control**: Prevents runaway costs from AI/external API abuse
- **Fair Usage**: Ensures all users get reasonable service access
- **DDoS Mitigation**: Provides basic protection against request flooding
### Negative
- **Legitimate User Impact**: Aggressive users may hit limits during normal use
- **IP-Based Limitations**: Shared IPs (offices, VPNs) may cause false positives
- **No Distributed State**: Rate limits are per-instance, not cluster-wide (would need Redis store for that)
## Future Enhancements
1. **Redis Store**: Implement distributed rate limiting with Redis for multi-instance deployments
2. **User-Based Limits**: Track limits per authenticated user rather than just IP
3. **Dynamic Limits**: Adjust limits based on user tier (free vs premium)
4. **Monitoring Dashboard**: Track rate limit hits in admin dashboard
5. **Allowlisting**: Allow specific IPs (monitoring services) to bypass limits

View File

@@ -0,0 +1,196 @@
# ADR-033: File Upload and Storage Strategy
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
The application handles file uploads for flyer images and user avatars. Without a consistent strategy, file uploads can introduce security vulnerabilities (path traversal, malicious file types), performance issues (unbounded file sizes), and maintenance challenges (inconsistent storage locations).
Key concerns:
1. **Security**: Preventing malicious file uploads, path traversal attacks, and unsafe filenames
2. **Storage Organization**: Consistent directory structure for uploaded files
3. **Size Limits**: Preventing resource exhaustion from oversized uploads
4. **File Type Validation**: Ensuring only expected file types are accepted
5. **Cleanup**: Managing temporary and orphaned files
## Decision
We will implement a centralized file upload strategy using `multer` middleware with custom storage configurations, file type validation, and size limits.
### Storage Types
| Type | Directory | Purpose | Size Limit |
| -------- | ------------------------------ | ------------------------------ | ---------- |
| `flyer` | `$STORAGE_PATH` (configurable) | Flyer images for AI processing | 100MB |
| `avatar` | `public/uploads/avatars/` | User profile pictures | 5MB |
### Filename Strategy
All uploaded files are renamed to prevent:
- Path traversal attacks
- Filename collisions
- Problematic characters in filenames
**Pattern**: `{fieldname}-{timestamp}-{random}-{sanitized-original}`
Example: `flyer-1704825600000-829461742-grocery-flyer.jpg`
### File Type Validation
Only image files (`image/*` MIME type) are accepted. Non-image uploads are rejected with a structured `ValidationError`.
## Implementation Details
### Multer Configuration Factory
```typescript
import { createUploadMiddleware } from '../middleware/multer.middleware';

// For flyer uploads (100MB limit)
const flyerUpload = createUploadMiddleware({
  storageType: 'flyer',
  fileSize: 100 * 1024 * 1024, // 100MB
  fileFilter: 'image',
});

// For avatar uploads (5MB limit)
const avatarUpload = createUploadMiddleware({
  storageType: 'avatar',
  fileSize: 5 * 1024 * 1024, // 5MB
  fileFilter: 'image',
});
```
### Storage Configuration
```typescript
// Configurable via environment variable
export const flyerStoragePath =
  process.env.STORAGE_PATH || '/var/www/flyer-crawler.projectium.com/flyer-images';

// Relative to the project root
export const avatarStoragePath = path.join(process.cwd(), 'public', 'uploads', 'avatars');
```
### Filename Sanitization
The `sanitizeFilename` utility removes dangerous characters:
```typescript
// Removes: path separators, null bytes, special characters
// Keeps: alphanumeric, dots, hyphens, underscores
const sanitized = sanitizeFilename(file.originalname);
```
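A minimal sketch of such a sanitizer (the actual implementation in `src/utils/stringUtils.ts` may differ):

```typescript
// Keeps alphanumerics, dots, hyphens, and underscores; strips null bytes and
// path separators so the result is always safe to use as a bare filename.
export function sanitizeFilename(original: string): string {
  return original
    .replace(/\0/g, '') // Null bytes
    .replace(/[\/\\]/g, '-') // Path separators
    .replace(/[^a-zA-Z0-9._-]/g, '-') // Any other unsafe character
    .replace(/-{2,}/g, '-') // Collapse runs of dashes
    .slice(0, 100); // Keep length bounded
}
```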
### Required File Validation Middleware
Ensures a file was uploaded before processing:
```typescript
import { requireFileUpload } from '../middleware/fileUpload.middleware';
router.post(
  '/upload',
  flyerUpload.single('flyerImage'),
  requireFileUpload('flyerImage'), // 400 error if missing
  handleMulterError,
  async (req, res) => {
    // req.file is guaranteed to exist
  },
);
```
### Error Handling
```typescript
import { handleMulterError } from '../middleware/multer.middleware';
// Catches multer-specific errors (file too large, etc.)
router.use(handleMulterError);
```
### Directory Initialization
Storage directories are created automatically at application startup:
```typescript
(async () => {
  await fs.mkdir(flyerStoragePath, { recursive: true });
  await fs.mkdir(avatarStoragePath, { recursive: true });
})();
```
### Test Environment Handling
In test environments, files use predictable names for easy cleanup:
```typescript
if (process.env.NODE_ENV === 'test') {
  return cb(null, `test-avatar${path.extname(file.originalname) || '.png'}`);
}
```
## Usage Example
```typescript
import { createUploadMiddleware, handleMulterError } from '../middleware/multer.middleware';
import { requireFileUpload } from '../middleware/fileUpload.middleware';
import { validateRequest } from '../middleware/validation.middleware';
import { aiUploadLimiter } from '../config/rateLimiters';
const flyerUpload = createUploadMiddleware({
  storageType: 'flyer',
  fileSize: 100 * 1024 * 1024,
  fileFilter: 'image',
});

router.post(
  '/upload-and-process',
  aiUploadLimiter,
  validateRequest(uploadSchema),
  flyerUpload.single('flyerImage'),
  requireFileUpload('flyerImage'),
  handleMulterError,
  async (req, res, next) => {
    const filePath = req.file!.path;
    // Process the uploaded file...
  },
);
```
## Key Files
- `src/middleware/multer.middleware.ts` - Multer configuration and storage handlers
- `src/middleware/fileUpload.middleware.ts` - File requirement validation
- `src/utils/stringUtils.ts` - Filename sanitization utilities
- `src/utils/fileUtils.ts` - File system utilities (deletion, etc.)
## Consequences
### Positive
- **Security**: Prevents path traversal and malicious uploads through sanitization and validation
- **Consistency**: All uploads follow the same patterns and storage organization
- **Predictability**: Test environments use predictable filenames for cleanup
- **Extensibility**: Factory pattern allows easy addition of new upload types
### Negative
- **Disk Storage**: Files stored on disk require backup and cleanup strategies
- **Single Server**: Current implementation doesn't support cloud storage (S3, etc.)
- **No Virus Scanning**: Files aren't scanned for malware before processing
## Future Enhancements
1. **Cloud Storage**: Support for S3/GCS as storage backend
2. **Virus Scanning**: Integrate ClamAV or cloud-based scanning
3. **Image Optimization**: Automatic resizing/compression before storage
4. **CDN Integration**: Serve uploaded files through CDN
5. **Cleanup Job**: Scheduled job to remove orphaned/temporary files
6. **Presigned URLs**: Direct upload to cloud storage to reduce server load

View File

@@ -0,0 +1,345 @@
# ADR-034: Repository Pattern Standards
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
The application uses a repository pattern to abstract database access from business logic. However, without clear standards, repository implementations can diverge in:
1. **Method naming**: Inconsistent verbs (get vs find vs fetch)
2. **Return types**: Some methods return `undefined`, others throw errors
3. **Error handling**: Varied approaches to database error handling
4. **Transaction participation**: Unclear how methods participate in transactions
5. **Logging patterns**: Inconsistent logging context and messages
This ADR establishes standards for all repository implementations, complementing ADR-001 (Error Handling) and ADR-002 (Transaction Management).
## Decision
All repository implementations MUST follow these standards:
### Method Naming Conventions
| Prefix | Returns | Behavior on Not Found |
| --------- | ---------------------- | ------------------------------------ |
| `get*` | Single entity | Throws `NotFoundError` |
| `find*` | Entity or `null` | Returns `null` |
| `list*` | Array (possibly empty) | Returns `[]` |
| `create*` | Created entity | Throws on constraint violation |
| `update*` | Updated entity | Throws `NotFoundError` if not exists |
| `delete*` | `void` or `boolean` | Throws `NotFoundError` if not exists |
| `exists*` | `boolean` | Returns true/false |
| `count*` | `number` | Returns count |
### Error Handling Pattern
All repository methods MUST use the centralized `handleDbError` function:
```typescript
import { handleDbError, NotFoundError } from './errors.db';
async getById(id: number): Promise<Entity> {
  try {
    const result = await this.pool.query('SELECT * FROM entities WHERE id = $1', [id]);
    if (result.rows.length === 0) {
      throw new NotFoundError(`Entity with ID ${id} not found.`);
    }
    return result.rows[0];
  } catch (error) {
    handleDbError(error, this.logger, 'Database error in getById', { id }, {
      entityName: 'Entity',
      defaultMessage: 'Failed to fetch entity.',
    });
  }
}
```
### Transaction Participation
Repository methods that need to participate in transactions MUST accept an optional `PoolClient`:
```typescript
class UserRepository {
  private pool?: Pool;
  private client?: PoolClient;

  constructor(poolOrClient?: Pool | PoolClient) {
    // A PoolClient exposes release(); a Pool does not, so this reliably
    // distinguishes a transaction client from the shared pool.
    if (poolOrClient && 'release' in poolOrClient) {
      this.client = poolOrClient as PoolClient; // For transactions
    } else {
      this.pool = (poolOrClient as Pool) || getPool();
    }
  }

  private get queryable() {
    return this.client || this.pool;
  }
}
```
Or using the function-based pattern:
```typescript
async function createUser(userData: CreateUserInput, client?: PoolClient): Promise<User> {
  const queryable = client || getPool();
  // ...
}
```
## Implementation Details
### Repository File Structure
```
src/services/db/
├── connection.db.ts # Pool management, withTransaction
├── errors.db.ts # Custom error types, handleDbError
├── index.db.ts # Barrel exports
├── user.db.ts # User repository
├── user.db.test.ts # User repository tests
├── flyer.db.ts # Flyer repository
├── flyer.db.test.ts # Flyer repository tests
└── ... # Other domain repositories
```
### Standard Repository Template
```typescript
// src/services/db/example.db.ts
import { Pool, PoolClient } from 'pg';
import { getPool } from './connection.db';
import { handleDbError, NotFoundError } from './errors.db';
import { logger } from '../logger.server';
import type { Example, CreateExampleInput, UpdateExampleInput } from '../../types';

const log = logger.child({ module: 'example.db' });

/**
 * Gets an example by ID.
 * @throws {NotFoundError} If the example doesn't exist.
 */
export async function getExampleById(id: number, client?: PoolClient): Promise<Example> {
  const queryable = client || getPool();
  try {
    const result = await queryable.query<Example>('SELECT * FROM examples WHERE id = $1', [id]);
    if (result.rows.length === 0) {
      throw new NotFoundError(`Example with ID ${id} not found.`);
    }
    return result.rows[0];
  } catch (error) {
    handleDbError(
      error,
      log,
      'Database error in getExampleById',
      { id },
      {
        entityName: 'Example',
        defaultMessage: 'Failed to fetch example.',
      },
    );
  }
}

/**
 * Finds an example by slug, returns null if not found.
 */
export async function findExampleBySlug(
  slug: string,
  client?: PoolClient,
): Promise<Example | null> {
  const queryable = client || getPool();
  try {
    const result = await queryable.query<Example>('SELECT * FROM examples WHERE slug = $1', [slug]);
    return result.rows[0] || null;
  } catch (error) {
    handleDbError(
      error,
      log,
      'Database error in findExampleBySlug',
      { slug },
      {
        entityName: 'Example',
        defaultMessage: 'Failed to find example.',
      },
    );
  }
}

/**
 * Lists all examples with optional pagination.
 */
export async function listExamples(
  options: { limit?: number; offset?: number } = {},
  client?: PoolClient,
): Promise<Example[]> {
  const queryable = client || getPool();
  const { limit = 100, offset = 0 } = options;
  try {
    const result = await queryable.query<Example>(
      'SELECT * FROM examples ORDER BY created_at DESC LIMIT $1 OFFSET $2',
      [limit, offset],
    );
    return result.rows;
  } catch (error) {
    handleDbError(
      error,
      log,
      'Database error in listExamples',
      { limit, offset },
      {
        entityName: 'Example',
        defaultMessage: 'Failed to list examples.',
      },
    );
  }
}

/**
 * Creates a new example.
 * @throws {UniqueConstraintError} If the slug already exists.
 */
export async function createExample(
  input: CreateExampleInput,
  client?: PoolClient,
): Promise<Example> {
  const queryable = client || getPool();
  try {
    const result = await queryable.query<Example>(
      `INSERT INTO examples (name, slug, description)
       VALUES ($1, $2, $3)
       RETURNING *`,
      [input.name, input.slug, input.description],
    );
    return result.rows[0];
  } catch (error) {
    handleDbError(
      error,
      log,
      'Database error in createExample',
      { input },
      {
        entityName: 'Example',
        uniqueMessage: 'An example with this slug already exists.',
        defaultMessage: 'Failed to create example.',
      },
    );
  }
}

/**
 * Updates an existing example.
 * @throws {NotFoundError} If the example doesn't exist.
 */
export async function updateExample(
  id: number,
  input: UpdateExampleInput,
  client?: PoolClient,
): Promise<Example> {
  const queryable = client || getPool();
  try {
    const result = await queryable.query<Example>(
      `UPDATE examples
       SET name = COALESCE($2, name), description = COALESCE($3, description)
       WHERE id = $1
       RETURNING *`,
      [id, input.name, input.description],
    );
    if (result.rows.length === 0) {
      throw new NotFoundError(`Example with ID ${id} not found.`);
    }
    return result.rows[0];
  } catch (error) {
    handleDbError(
      error,
      log,
      'Database error in updateExample',
      { id, input },
      {
        entityName: 'Example',
        defaultMessage: 'Failed to update example.',
      },
    );
  }
}

/**
 * Deletes an example.
 * @throws {NotFoundError} If the example doesn't exist.
 */
export async function deleteExample(id: number, client?: PoolClient): Promise<void> {
  const queryable = client || getPool();
  try {
    const result = await queryable.query('DELETE FROM examples WHERE id = $1', [id]);
    if (result.rowCount === 0) {
      throw new NotFoundError(`Example with ID ${id} not found.`);
    }
  } catch (error) {
    handleDbError(
      error,
      log,
      'Database error in deleteExample',
      { id },
      {
        entityName: 'Example',
        defaultMessage: 'Failed to delete example.',
      },
    );
  }
}
```
### Using with Transactions
```typescript
import { withTransaction } from './connection.db';
import { createExample, updateExample } from './example.db';
import { createRelated } from './related.db';
async function createExampleWithRelated(data: ComplexInput): Promise<Example> {
return withTransaction(async (client) => {
const example = await createExample(data.example, client);
await createRelated({ exampleId: example.id, ...data.related }, client);
return example;
});
}
```
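The `withTransaction` helper lives in `connection.db.ts`; its expected shape, sketched here from the call sites above (the real implementation may add logging or retries):
```typescript
// Sketch of withTransaction as defined in src/services/db/connection.db.ts.
import { PoolClient } from 'pg';

export async function withTransaction<T>(
  callback: (client: PoolClient) => Promise<T>,
): Promise<T> {
  const client = await getPool().connect(); // getPool() from the same module
  try {
    await client.query('BEGIN');
    const result = await callback(client); // repositories receive this client
    await client.query('COMMIT');
    return result;
  } catch (error) {
    await client.query('ROLLBACK');
    throw error;
  } finally {
    client.release(); // always return the connection to the pool
  }
}
```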
## Key Files
- `src/services/db/connection.db.ts` - `getPool()`, `withTransaction()`
- `src/services/db/errors.db.ts` - `handleDbError()`, custom error classes
- `src/services/db/index.db.ts` - Barrel exports for all repositories
- `src/services/db/*.db.ts` - Individual domain repositories
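The catch blocks in the template compile without a trailing `return` because `handleDbError` is declared to return `never` - it logs, translates, and always rethrows. A minimal sketch consistent with its call sites (the real `errors.db.ts` may differ):
```typescript
import { DatabaseError } from 'pg';
import type { Logger } from 'pino';

export class NotFoundError extends Error {}
export class UniqueConstraintError extends Error {}

export function handleDbError(
  error: unknown,
  log: Logger,
  message: string,
  context: Record<string, unknown>,
  opts: { entityName: string; uniqueMessage?: string; defaultMessage: string },
): never {
  // Domain errors thrown inside the try block pass through unchanged.
  if (error instanceof NotFoundError) throw error;
  log.error({ err: error, ...context }, message);
  // 23505 is PostgreSQL's unique_violation error code.
  if (error instanceof DatabaseError && error.code === '23505') {
    throw new UniqueConstraintError(
      opts.uniqueMessage ?? `${opts.entityName} already exists.`,
    );
  }
  throw new Error(opts.defaultMessage);
}
```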
## Consequences
### Positive
- **Consistency**: All repositories follow the same patterns
- **Predictability**: Method names clearly indicate behavior
- **Testability**: Consistent interfaces make mocking straightforward
- **Error Handling**: Centralized error handling prevents inconsistent responses
- **Transaction Safety**: Clear pattern for transaction participation
### Negative
- **Learning Curve**: Developers must learn and follow conventions
- **Boilerplate**: Each method requires similar error handling structure
- **Refactoring**: Existing repositories may need updates to conform
## Compliance Checklist
For new repository methods:
- [ ] Method name follows prefix convention (get/find/list/create/update/delete)
- [ ] Throws `NotFoundError` for `get*` methods when entity not found
- [ ] Returns `null` for `find*` methods when entity not found
- [ ] Uses `handleDbError` for database error handling
- [ ] Accepts optional `PoolClient` parameter for transaction support
- [ ] Includes JSDoc with `@throws` documentation
- [ ] Has corresponding unit tests


@@ -0,0 +1,328 @@
# ADR-035: Service Layer Architecture
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
The application has evolved to include multiple service types:
1. **Repository services** (`*.db.ts`): Direct database access
2. **Business services** (`*Service.ts`): Business logic orchestration
3. **External services** (`*Service.server.ts`): Integration with external APIs
4. **Infrastructure services** (`logger`, `redis`, `queues`): Cross-cutting concerns
Without clear boundaries, business logic can leak into routes, repositories can contain business rules, and services can become tightly coupled.
## Decision
We will establish a clear layered architecture with defined responsibilities for each layer:
### Layer Responsibilities
```
┌─────────────────────────────────────────────────────────────────┐
│ Routes Layer │
│ - Request/response handling │
│ - Input validation (via middleware) │
│ - Authentication/authorization │
│ - Rate limiting │
│ - Response formatting │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Services Layer │
│ - Business logic orchestration │
│ - Transaction coordination │
│ - External API integration │
│ - Cross-repository operations │
│ - Event publishing │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Repository Layer │
│ - Direct database access │
│ - Query construction │
│ - Entity mapping │
│ - Error translation │
└─────────────────────────────────────────────────────────────────┘
```
### Service Types and Naming
| Type | Pattern | Suffix | Example |
| ------------------- | ------------------------------- | ------------- | --------------------- |
| Business Service | Orchestrates business logic | `*Service.ts` | `authService.ts` |
| Server-Only Service | External APIs, server-side only | `*.server.ts` | `aiService.server.ts` |
| Database Repository | Direct DB access | `*.db.ts` | `user.db.ts` |
| Infrastructure | Cross-cutting concerns | Descriptive | `logger.server.ts` |
### Service Dependencies
```
Routes → Business Services ─┬─→ Repositories
                            ├─→ External Services
                            └─→ Infrastructure (logger, redis, queues)
```
**Rules**:
- Routes MUST NOT directly access repositories (except simple CRUD)
- Repositories MUST NOT call other repositories (use services)
- Services MAY call other services
- Infrastructure services MAY be called from any layer
## Implementation Details
### Business Service Pattern
```typescript
// src/services/authService.ts
import { withTransaction } from './db/connection.db';
import * as userRepo from './db/user.db';
import * as profileRepo from './db/personalization.db';
import { emailService } from './emailService.server';
import { logger } from './logger.server';
import type { Logger } from 'pino';
import type { UserProfile } from '../types';
const log = logger.child({ service: 'auth' });
interface LoginResult {
user: UserProfile;
accessToken: string;
refreshToken: string;
}
export const authService = {
/**
* Registers a new user and sends welcome email.
* Orchestrates multiple repositories in a transaction.
*/
async registerAndLoginUser(
email: string,
password: string,
fullName?: string,
avatarUrl?: string,
reqLog?: Logger,
): Promise<LoginResult> {
const log = reqLog || logger;
return withTransaction(async (client) => {
// 1. Create user (repository)
const user = await userRepo.createUser({ email, password }, client);
// 2. Create profile (repository)
await profileRepo.createProfile(
{
userId: user.user_id,
fullName,
avatarUrl,
},
client,
);
// 3. Generate tokens (business logic)
const { accessToken, refreshToken } = this.generateTokens(user);
// 4. Send welcome email (external service, non-blocking)
emailService.sendWelcomeEmail(email, fullName).catch((err) => {
log.warn({ err, email }, 'Failed to send welcome email');
});
log.info({ userId: user.user_id }, 'User registered successfully');
return {
user: await this.buildUserProfile(user.user_id, client),
accessToken,
refreshToken,
};
});
},
// ... other methods
};
```
### Server-Only Service Pattern
```typescript
// src/services/aiService.server.ts
// This file MUST only be imported by server-side code
import { GoogleGenAI } from '@google/genai';
import { config } from '../config/env';
import { logger } from './logger.server';
const log = logger.child({ service: 'ai' });
class AiService {
  private client: GoogleGenAI;
  constructor() {
    this.client = new GoogleGenAI({ apiKey: config.ai.geminiApiKey });
  }
async analyzeImage(imagePath: string): Promise<AnalysisResult> {
log.info({ imagePath }, 'Starting image analysis');
// ... implementation
}
}
export const aiService = new AiService();
```
### Route Handler Pattern
```typescript
// src/routes/auth.routes.ts
import { Router } from 'express';
import { validateRequest } from '../middleware/validation.middleware';
import { registerLimiter } from '../config/rateLimiters';
import { registerSchema } from '../schemas/auth.schema'; // illustrative import path
import { authService } from '../services/authService';
const router = Router();
// Route is thin - delegates to service
router.post(
'/register',
registerLimiter,
validateRequest(registerSchema),
async (req, res, next) => {
try {
const { email, password, full_name } = req.body;
// Delegate to service
const result = await authService.registerAndLoginUser(
email,
password,
full_name,
undefined,
req.log, // Pass request-scoped logger
);
// Format response
res.status(201).json({
message: 'Registration successful',
user: result.user,
accessToken: result.accessToken,
});
} catch (error) {
next(error); // Let error handler deal with it
}
},
);
```
### Service File Organization
```
src/services/
├── db/ # Repository layer
│ ├── connection.db.ts # Pool, transactions
│ ├── errors.db.ts # DB error types
│ ├── user.db.ts # User repository
│ ├── flyer.db.ts # Flyer repository
│ └── index.db.ts # Barrel exports
├── authService.ts # Authentication business logic
├── userService.ts # User management business logic
├── gamificationService.ts # Gamification business logic
├── aiService.server.ts # AI API integration (server-only)
├── emailService.server.ts # Email sending (server-only)
├── geocodingService.server.ts # Geocoding API (server-only)
├── cacheService.server.ts # Redis caching (server-only)
├── queueService.server.ts # BullMQ queues (server-only)
├── logger.server.ts # Pino logger (server-only)
└── logger.client.ts # Client-side logger
```
### Dependency Injection for Testing
Services should support dependency injection for easier testing:
```typescript
interface AuthServiceDeps {
  userRepo: typeof defaultUserRepo;
  emailService: typeof defaultEmailService;
}

export function createAuthService(deps?: Partial<AuthServiceDeps>) {
  const userRepo = deps?.userRepo ?? defaultUserRepo;
  const emailService = deps?.emailService ?? defaultEmailService;
  return {
    async registerAndLoginUser(/* same signature as above */) {
      /* ...implementation uses the injected userRepo and emailService... */
    },
  };
}

// Production: use the singleton wired with defaults
export const authService = createAuthService();

// Testing: inject mocks, e.g. createAuthService({ emailService: mockEmailService })
```
## Key Files
### Infrastructure Services
- `src/services/logger.server.ts` - Server-side structured logging
- `src/services/logger.client.ts` - Client-side logging
- `src/services/redis.server.ts` - Redis connection management
- `src/services/queueService.server.ts` - BullMQ queue management
- `src/services/cacheService.server.ts` - Caching abstraction
### Business Services
- `src/services/authService.ts` - Authentication flows
- `src/services/userService.ts` - User management
- `src/services/gamificationService.ts` - Achievements, leaderboards
- `src/services/flyerProcessingService.server.ts` - Flyer pipeline
### External Integration Services
- `src/services/aiService.server.ts` - Gemini AI integration
- `src/services/emailService.server.ts` - Email sending
- `src/services/geocodingService.server.ts` - Address geocoding
## Consequences
### Positive
- **Separation of Concerns**: Clear boundaries between layers
- **Testability**: Services can be tested in isolation with mocked dependencies
- **Reusability**: Business logic in services can be used by multiple routes
- **Maintainability**: Changes to one layer don't ripple through others
- **Transaction Safety**: Services coordinate transactions across repositories
### Negative
- **Indirection**: More layers mean more code to navigate
- **Potential Over-Engineering**: Simple CRUD operations don't need full service layer
- **Coordination Overhead**: Team must agree on layer boundaries
## Guidelines
### When to Create a Service
Create a business service when:
- Logic spans multiple repositories
- External APIs need to be called
- Complex business rules exist
- The same logic is needed by multiple routes
- Transaction coordination is required
### When Direct Repository Access is OK
Routes can directly use repositories for:
- Simple single-entity CRUD operations
- Read-only queries with no business logic
- Operations that don't need transaction coordination
### Service Method Guidelines
- Accept a request-scoped logger as an optional parameter
- Return domain objects, not HTTP-specific responses
- Throw domain errors, let routes handle HTTP status codes
- Use `withTransaction` for multi-repository operations
- Log business events (user registered, order placed, etc.)


@@ -0,0 +1,212 @@
# ADR-036: Event Bus and Pub/Sub Pattern
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
Modern web applications often need to handle cross-component communication without creating tight coupling between modules. In our application, several scenarios require broadcasting events across the system:
1. **Session Expiry**: When a user's session expires, multiple components need to respond (auth state, UI notifications, API client).
2. **Real-time Updates**: When data changes on the server, multiple UI components may need to update.
3. **Cross-Component Communication**: Independent components need to communicate without direct references to each other.
Traditional approaches like prop drilling or global state management can lead to tightly coupled code that is difficult to maintain and test.
## Decision
We will implement a lightweight, in-memory event bus pattern using a publish/subscribe (pub/sub) architecture. This provides:
1. **Decoupled Communication**: Publishers and subscribers don't need to know about each other.
2. **Event-Driven Architecture**: Components react to events rather than polling for changes.
3. **Testability**: Events can be easily mocked and verified in tests.
### Design Principles
- **Singleton Pattern**: A single event bus instance is shared across the application.
- **Type-Safe Events**: Event names are string constants to prevent typos.
- **Memory Management**: Subscribers must unsubscribe when components unmount to prevent memory leaks.
## Implementation Details
### EventBus Class
Located in `src/services/eventBus.ts`:
```typescript
type EventCallback = (data?: any) => void;
export class EventBus {
private listeners: { [key: string]: EventCallback[] } = {};
on(event: string, callback: EventCallback): void {
if (!this.listeners[event]) {
this.listeners[event] = [];
}
this.listeners[event].push(callback);
}
off(event: string, callback: EventCallback): void {
if (!this.listeners[event]) return;
this.listeners[event] = this.listeners[event].filter((l) => l !== callback);
}
dispatch(event: string, data?: any): void {
if (!this.listeners[event]) return;
this.listeners[event].forEach((callback) => callback(data));
}
}
// Singleton instance
export const eventBus = new EventBus();
```
### Event Constants
Define event names as constants to prevent typos:
```typescript
// src/constants/events.ts
export const EVENTS = {
SESSION_EXPIRED: 'session:expired',
SESSION_REFRESHED: 'session:refreshed',
USER_LOGGED_OUT: 'user:loggedOut',
DATA_UPDATED: 'data:updated',
NOTIFICATION_RECEIVED: 'notification:received',
} as const;
```
### React Hook for Event Subscription
```typescript
// src/hooks/useEventBus.ts
import { useEffect } from 'react';
import { eventBus } from '../services/eventBus';
export function useEventBus(event: string, callback: (data?: any) => void) {
useEffect(() => {
eventBus.on(event, callback);
// Cleanup on unmount
return () => {
eventBus.off(event, callback);
};
}, [event, callback]);
}
```
### Usage Examples
**Publishing Events**:
```typescript
import { eventBus } from '../services/eventBus';
import { EVENTS } from '../constants/events';
// In API client when session expires
function handleSessionExpiry() {
eventBus.dispatch(EVENTS.SESSION_EXPIRED, { reason: 'token_expired' });
}
```
**Subscribing in Components**:
```typescript
import { useCallback } from 'react';
import { useEventBus } from '../hooks/useEventBus';
import { EVENTS } from '../constants/events';
function AuthenticatedComponent() {
  const handleSessionExpired = useCallback((data?: { reason: string }) => {
    console.log('Session expired:', data?.reason);
// Redirect to login, show notification, etc.
}, []);
useEventBus(EVENTS.SESSION_EXPIRED, handleSessionExpired);
return <div>Protected Content</div>;
}
```
**Subscribing in Non-React Code**:
```typescript
import { eventBus } from '../services/eventBus';
import { EVENTS } from '../constants/events';
// In API client
const handleLogout = () => {
clearAuthToken();
};
eventBus.on(EVENTS.USER_LOGGED_OUT, handleLogout);
```
### Testing
The EventBus is fully tested in `src/services/eventBus.test.ts`:
```typescript
import { EventBus } from './eventBus';
describe('EventBus', () => {
let bus: EventBus;
beforeEach(() => {
bus = new EventBus();
});
it('should call registered listeners when event is dispatched', () => {
const callback = vi.fn();
bus.on('test', callback);
bus.dispatch('test', { value: 42 });
expect(callback).toHaveBeenCalledWith({ value: 42 });
});
it('should unsubscribe listeners correctly', () => {
const callback = vi.fn();
bus.on('test', callback);
bus.off('test', callback);
bus.dispatch('test');
expect(callback).not.toHaveBeenCalled();
});
it('should handle multiple listeners for the same event', () => {
const callback1 = vi.fn();
const callback2 = vi.fn();
bus.on('test', callback1);
bus.on('test', callback2);
bus.dispatch('test');
expect(callback1).toHaveBeenCalled();
expect(callback2).toHaveBeenCalled();
});
});
```
## Consequences
### Positive
- **Loose Coupling**: Components don't need direct references to communicate.
- **Flexibility**: New subscribers can be added without modifying publishers.
- **Testability**: Easy to mock events and verify interactions.
- **Simplicity**: Minimal code footprint compared to full state management solutions.
### Negative
- **Debugging Complexity**: Event-driven flows can be harder to trace than direct function calls.
- **Memory Leaks**: Forgetting to unsubscribe can cause memory leaks (mitigated by the React hook).
- **No Type Safety for Payloads**: Event data is typed as `any` (could be improved with generics).
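The payload-typing gap noted above can be closed with an event-map generic; a sketch of what that could look like (not currently implemented):
```typescript
// Hypothetical type-safe variant: each event name maps to its payload type.
interface AppEventMap {
  'session:expired': { reason: string };
  'notification:received': { id: number; title: string };
}

class TypedEventBus<M extends Record<string, unknown>> {
  private listeners: { [K in keyof M]?: Array<(data: M[K]) => void> } = {};

  on<K extends keyof M>(event: K, callback: (data: M[K]) => void): void {
    (this.listeners[event] ??= []).push(callback);
  }

  dispatch<K extends keyof M>(event: K, data: M[K]): void {
    this.listeners[event]?.forEach((callback) => callback(data));
  }
}

const typedBus = new TypedEventBus<AppEventMap>();
typedBus.dispatch('session:expired', { reason: 'token_expired' }); // payload checked at compile time
```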
## Key Files
- `src/services/eventBus.ts` - EventBus implementation
- `src/services/eventBus.test.ts` - EventBus tests
## Related ADRs
- [ADR-005](./0005-frontend-state-management-and-server-cache-strategy.md) - State Management Strategy
- [ADR-022](./0022-real-time-notification-system.md) - Real-time Notification System


@@ -0,0 +1,265 @@
# ADR-037: Scheduled Jobs and Cron Pattern
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
Many business operations need to run on a recurring schedule without user intervention:
1. **Daily Deal Checks**: Scan watched items for price drops and notify users.
2. **Analytics Generation**: Compile daily and weekly statistics reports.
3. **Token Cleanup**: Remove expired password reset tokens from the database.
4. **Data Maintenance**: Archive old data, clean up temporary files.
These scheduled operations require:
- Reliable execution at specific times
- Protection against overlapping runs
- Graceful error handling that doesn't crash the server
- Integration with the existing job queue system (BullMQ)
## Decision
We will use `node-cron` for scheduling jobs and integrate with BullMQ for job execution. This provides:
1. **Cron Expressions**: Standard, well-understood scheduling syntax.
2. **Job Queue Integration**: Scheduled jobs enqueue work to BullMQ for reliable processing.
3. **Idempotency**: Jobs use predictable IDs to prevent duplicate runs.
4. **Overlap Protection**: In-memory locks prevent concurrent execution of the same job.
### Architecture
```text
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│    node-cron    │─────▶│  BullMQ Queue   │─────▶│     Worker      │
│   (Scheduler)   │      │   (Job Store)   │      │   (Processor)   │
└─────────────────┘      └─────────────────┘      └─────────────────┘
                                  │
                                  ▼
                         ┌─────────────────┐
                         │      Redis      │
                         │  (Persistence)  │
                         └─────────────────┘
```
## Implementation Details
### BackgroundJobService
Located in `src/services/backgroundJobService.ts`:
```typescript
import cron from 'node-cron';
import type { Logger } from 'pino';
import type { Queue } from 'bullmq';
export class BackgroundJobService {
constructor(
private personalizationRepo: PersonalizationRepository,
private notificationRepo: NotificationRepository,
private emailQueue: Queue<EmailJobData>,
private logger: Logger,
) {}
async runDailyDealCheck(): Promise<void> {
this.logger.info('[BackgroundJob] Starting daily deal check...');
// 1. Fetch all deals for all users in one efficient query
const allDeals = await this.personalizationRepo.getBestSalePricesForAllUsers(this.logger);
// 2. Group deals by user
const dealsByUser = this.groupDealsByUser(allDeals);
// 3. Process each user's deals in parallel
const results = await Promise.allSettled(
Array.from(dealsByUser.values()).map((userGroup) => this._processDealsForUser(userGroup)),
);
// 4. Bulk insert notifications
await this.bulkCreateNotifications(results);
this.logger.info('[BackgroundJob] Daily deal check completed.');
}
  async triggerAnalyticsReport(): Promise<string> {
    const reportDate = getCurrentDateISOString();
    const jobId = `manual-report-${reportDate}-${Date.now()}`;
    // analyticsQueue is a module-level queue from queueService.server.ts
    const job = await analyticsQueue.add('generate-daily-report', { reportDate }, { jobId });
    return job.id ?? jobId; // BullMQ types job.id as string | undefined
  }
}
```
### Cron Job Initialization
```typescript
// In-memory lock to prevent job overlap
let isDailyDealCheckRunning = false;
export function startBackgroundJobs(
backgroundJobService: BackgroundJobService,
analyticsQueue: Queue,
weeklyAnalyticsQueue: Queue,
tokenCleanupQueue: Queue,
logger: Logger,
): void {
// Daily deal check at 2:00 AM
cron.schedule('0 2 * * *', () => {
(async () => {
if (isDailyDealCheckRunning) {
logger.warn('[BackgroundJob] Daily deal check already running. Skipping.');
return;
}
isDailyDealCheckRunning = true;
try {
await backgroundJobService.runDailyDealCheck();
} catch (error) {
logger.error({ err: error }, '[BackgroundJob] Daily deal check failed.');
} finally {
isDailyDealCheckRunning = false;
}
})().catch((error) => {
logger.error({ err: error }, '[BackgroundJob] Unhandled rejection in cron wrapper.');
isDailyDealCheckRunning = false;
});
});
// Daily analytics at 3:00 AM
cron.schedule('0 3 * * *', () => {
(async () => {
const reportDate = getCurrentDateISOString();
await analyticsQueue.add(
'generate-daily-report',
{ reportDate },
{ jobId: `daily-report-${reportDate}` }, // Prevents duplicates
);
})().catch((error) => {
logger.error({ err: error }, '[BackgroundJob] Analytics job enqueue failed.');
});
});
// Weekly analytics at 4:00 AM on Sundays
cron.schedule('0 4 * * 0', () => {
(async () => {
const { year, week } = getSimpleWeekAndYear();
await weeklyAnalyticsQueue.add(
'generate-weekly-report',
{ reportYear: year, reportWeek: week },
{ jobId: `weekly-report-${year}-${week}` },
);
})().catch((error) => {
logger.error({ err: error }, '[BackgroundJob] Weekly analytics enqueue failed.');
});
});
// Token cleanup at 5:00 AM
cron.schedule('0 5 * * *', () => {
(async () => {
const timestamp = new Date().toISOString();
await tokenCleanupQueue.add(
'cleanup-tokens',
{ timestamp },
{ jobId: `token-cleanup-${timestamp.split('T')[0]}` },
);
})().catch((error) => {
logger.error({ err: error }, '[BackgroundJob] Token cleanup enqueue failed.');
});
});
logger.info('[BackgroundJob] All cron jobs scheduled successfully.');
}
```
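The `getCurrentDateISOString` and `getSimpleWeekAndYear` helpers used above are small project utilities; plausible implementations are sketched below (the actual code may differ, e.g. by using ISO-8601 week numbering):
```typescript
// Date-only string, e.g. "2026-01-09" - stable per calendar day, so the
// derived jobId deduplicates repeat runs within the same day.
function getCurrentDateISOString(): string {
  return new Date().toISOString().split('T')[0];
}

// Simple week-of-year (1-53), good enough for unique weekly job IDs.
function getSimpleWeekAndYear(): { year: number; week: number } {
  const now = new Date();
  const startOfYear = new Date(now.getFullYear(), 0, 1);
  const dayOfYear = Math.floor((now.getTime() - startOfYear.getTime()) / 86_400_000) + 1;
  return { year: now.getFullYear(), week: Math.ceil(dayOfYear / 7) };
}
```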
### Job Schedule Reference
| Job | Schedule | Queue | Purpose |
| ---------------- | ---------------------------- | ---------------------- | --------------------------------- |
| Daily Deal Check | `0 2 * * *` (2:00 AM) | Direct execution | Find price drops on watched items |
| Daily Analytics | `0 3 * * *` (3:00 AM) | `analyticsQueue` | Generate daily statistics |
| Weekly Analytics | `0 4 * * 0` (4:00 AM Sunday) | `weeklyAnalyticsQueue` | Generate weekly reports |
| Token Cleanup | `0 5 * * *` (5:00 AM) | `tokenCleanupQueue` | Remove expired tokens |
### Cron Expression Reference
```text
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 7, Sun = 0 or 7)
│ │ │ │ │
* * * * *
Examples:
0 2 * * * = 2:00 AM every day
0 4 * * 0 = 4:00 AM every Sunday
*/15 * * * * = Every 15 minutes
0 0 1 * * = Midnight on the 1st of each month
```
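`node-cron` can also check an expression before scheduling it, which is useful when schedules come from configuration:
```typescript
import cron from 'node-cron';

// validate() returns a boolean rather than throwing on bad input
cron.validate('0 2 * * *'); // true  - 2:00 AM daily
cron.validate('99 * * * *'); // false - minute field out of range
```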
### Error Handling Pattern
The async IIFE wrapper with `.catch()` ensures that:
1. Errors in the job don't crash the cron scheduler
2. Unhandled promise rejections are logged
3. The lock is always released in the `finally` block
```typescript
cron.schedule('0 2 * * *', () => {
(async () => {
// Job logic here
})().catch((error) => {
// Handle unhandled rejections from the async wrapper
logger.error({ err: error }, 'Unhandled rejection');
});
});
```
### Manual Trigger API
Admin endpoints allow manual triggering of scheduled jobs:
```typescript
// src/routes/admin.routes.ts
router.post('/jobs/daily-deals', isAdmin, async (req, res, next) => {
  try {
    await backgroundJobService.runDailyDealCheck();
    res.json({ message: 'Daily deal check triggered' });
  } catch (error) {
    next(error); // Express 4 does not catch async errors on its own
  }
});
router.post('/jobs/analytics', isAdmin, async (req, res, next) => {
  try {
    const jobId = await backgroundJobService.triggerAnalyticsReport();
    res.json({ message: 'Analytics report queued', jobId });
  } catch (error) {
    next(error);
  }
});
```
## Consequences
### Positive
- **Reliability**: Jobs run at predictable times without manual intervention.
- **Idempotency**: Duplicate job prevention via job IDs.
- **Observability**: All job activity is logged with structured logging.
- **Flexibility**: Jobs can be triggered manually for testing or urgent runs.
- **Separation**: Scheduling is decoupled from job execution (cron vs BullMQ).
### Negative
- **Single Server**: Cron runs on a single server instance. For multi-server deployments, consider distributed scheduling.
- **Time Zone Dependency**: Cron times are server-local; consider UTC for distributed systems.
- **In-Memory Locks**: Overlap protection is per-process, not cluster-wide.
## Key Files
- `src/services/backgroundJobService.ts` - BackgroundJobService class and `startBackgroundJobs`
- `src/services/queueService.server.ts` - BullMQ queue definitions
- `src/services/workers.server.ts` - BullMQ worker processors
## Related ADRs
- [ADR-006](./0006-background-job-processing-and-task-queues.md) - Background Job Processing
- [ADR-004](./0004-standardized-application-wide-structured-logging.md) - Structured Logging


@@ -0,0 +1,290 @@
# ADR-038: Graceful Shutdown Pattern
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
When deploying or restarting the application, abrupt termination can cause:
1. **Lost Jobs**: BullMQ jobs in progress may be marked as failed or stalled.
2. **Connection Leaks**: Database and Redis connections may not be properly closed.
3. **Incomplete Requests**: HTTP requests in flight may receive no response.
4. **Data Corruption**: Transactions may be left in an inconsistent state.
Kubernetes and PM2 send termination signals (SIGTERM, SIGINT) to processes before forcefully killing them. The application must handle these signals to shut down gracefully.
## Decision
We will implement a coordinated graceful shutdown pattern that:
1. **Stops Accepting New Work**: Closes HTTP server, pauses job queues.
2. **Completes In-Flight Work**: Waits for active requests and jobs to finish.
3. **Releases Resources**: Closes database pools, Redis connections, and queues.
4. **Logs Shutdown Progress**: Provides visibility into the shutdown process.
### Signal Handling
| Signal | Source | Behavior |
| ------- | ------------------ | --------------------------------------- |
| SIGTERM | Kubernetes, PM2 | Graceful shutdown with resource cleanup |
| SIGINT | Ctrl+C in terminal | Same as SIGTERM |
| SIGKILL | Force kill | Cannot be caught; immediate termination |
## Implementation Details
### Queue and Worker Shutdown
Located in `src/services/queueService.server.ts`:
```typescript
import { logger } from './logger.server';
export const gracefulShutdown = async (signal: string): Promise<void> => {
logger.info(`[Shutdown] Received ${signal}. Closing all queues and workers...`);
const resources = [
{ name: 'flyerQueue', close: () => flyerQueue.close() },
{ name: 'emailQueue', close: () => emailQueue.close() },
{ name: 'analyticsQueue', close: () => analyticsQueue.close() },
{ name: 'weeklyAnalyticsQueue', close: () => weeklyAnalyticsQueue.close() },
{ name: 'cleanupQueue', close: () => cleanupQueue.close() },
{ name: 'tokenCleanupQueue', close: () => tokenCleanupQueue.close() },
{ name: 'redisConnection', close: () => connection.quit() },
];
const results = await Promise.allSettled(
resources.map(async (resource) => {
try {
await resource.close();
logger.info(`[Shutdown] ${resource.name} closed successfully.`);
} catch (error) {
logger.error({ err: error }, `[Shutdown] Error closing ${resource.name}`);
throw error;
}
}),
);
const failures = results.filter((r) => r.status === 'rejected');
if (failures.length > 0) {
logger.error(`[Shutdown] ${failures.length} resources failed to close.`);
}
logger.info('[Shutdown] All resources closed. Process can now exit.');
};
// Register signal handlers
process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));
```
### HTTP Server Shutdown
Located in `server.ts`:
```typescript
import { gracefulShutdown as shutdownQueues } from './src/services/queueService.server';
import { closePool } from './src/services/db/connection.db';
const server = app.listen(PORT, () => {
logger.info(`Server listening on port ${PORT}`);
});
const gracefulShutdown = async (signal: string): Promise<void> => {
logger.info(`[Shutdown] Received ${signal}. Starting graceful shutdown...`);
// 1. Stop accepting new connections
server.close((err) => {
if (err) {
logger.error({ err }, '[Shutdown] Error closing HTTP server');
} else {
logger.info('[Shutdown] HTTP server closed.');
}
});
// 2. Wait for in-flight requests (with timeout)
await new Promise((resolve) => setTimeout(resolve, 5000));
// 3. Close queues and workers
await shutdownQueues(signal);
// 4. Close database pool
await closePool();
logger.info('[Shutdown] Database pool closed.');
// 5. Exit process
process.exit(0);
};
process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));
```
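A refinement worth considering (not part of the handler above): add a hard deadline so a hung resource cannot block exit indefinitely. PM2's `kill_timeout` eventually sends SIGKILL, but an explicit timer keeps the behavior deterministic and logged. A sketch:
```typescript
// Sketch: wrap gracefulShutdown with a forced-exit deadline.
const SHUTDOWN_DEADLINE_MS = 10_000; // keep below PM2's kill_timeout

async function shutdownWithDeadline(signal: string): Promise<void> {
  const deadline = setTimeout(() => {
    logger.error('[Shutdown] Deadline exceeded. Forcing exit.');
    process.exit(1);
  }, SHUTDOWN_DEADLINE_MS);
  deadline.unref(); // don't keep the event loop alive just for this timer
  await gracefulShutdown(signal);
}
```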
### Database Pool Shutdown
Located in `src/services/db/connection.db.ts`:
```typescript
let pool: Pool | null = null;
export function getPool(): Pool {
  if (!pool) {
    // Connection params (host, user, password, database) come from standard PG* env vars
    pool = new Pool({
max: 20,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 2000,
});
}
return pool;
}
export async function closePool(): Promise<void> {
if (pool) {
await pool.end();
pool = null;
logger.info('[Database] Connection pool closed.');
}
}
export function getPoolStatus(): { totalCount: number; idleCount: number; waitingCount: number } {
const p = getPool();
return {
totalCount: p.totalCount,
idleCount: p.idleCount,
waitingCount: p.waitingCount,
};
}
```
### PM2 Ecosystem Configuration
Located in `ecosystem.config.cjs`:
```javascript
module.exports = {
apps: [
{
name: 'flyer-crawler-api',
script: 'server.ts',
interpreter: 'tsx',
// Graceful shutdown settings
kill_timeout: 10000, // 10 seconds to cleanup before SIGKILL
wait_ready: true, // Wait for 'ready' signal before considering app started
listen_timeout: 10000, // Timeout for ready signal
// Cluster mode for zero-downtime reloads
instances: 1,
exec_mode: 'fork',
// Environment variables
env_production: {
NODE_ENV: 'production',
PORT: 3000,
},
env_test: {
NODE_ENV: 'test',
PORT: 3001,
},
},
],
};
```
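Because `wait_ready: true` is set, the app must tell PM2 when it is actually ready; with PM2 this is done by sending a `ready` message once the server is listening (extending the `app.listen` call shown earlier):
```typescript
const server = app.listen(PORT, () => {
  logger.info(`Server listening on port ${PORT}`);
  // Pairs with wait_ready: true - PM2 holds the reload until this arrives
  if (process.send) process.send('ready');
});
```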
### Worker Graceful Shutdown
BullMQ workers can be configured to wait for active jobs:
```typescript
import { Worker } from 'bullmq';
const worker = new Worker('flyerQueue', processor, {
  connection,
  // Graceful shutdown: worker.close() waits for active jobs by default.
  // Note: in BullMQ these are top-level worker options, not nested settings.
  lockDuration: 30000,   // Time before an active job is considered stalled
  stalledInterval: 5000, // Check for stalled jobs every 5s
});
// Lifecycle events give visibility into worker shutdown
worker.on('closing', () => {
logger.info('[Worker] flyerQueue worker is closing...');
});
worker.on('closed', () => {
logger.info('[Worker] flyerQueue worker closed.');
});
```
### Shutdown Sequence Diagram
```text
SIGTERM Received
┌──────────────────────┐
│ Stop HTTP Server │ ← No new connections accepted
│ (server.close()) │
└──────────────────────┘
┌──────────────────────┐
│ Wait for In-Flight │ ← 5-second grace period
│ Requests │
└──────────────────────┘
┌──────────────────────┐
│ Close BullMQ Queues │ ← Stop processing new jobs
│ and Workers │
└──────────────────────┘
┌──────────────────────┐
│ Close Redis │ ← Disconnect from Redis
│ Connection │
└──────────────────────┘
┌──────────────────────┐
│ Close Database Pool │ ← Release all DB connections
│ (pool.end()) │
└──────────────────────┘
┌──────────────────────┐
│ process.exit(0) │ ← Clean exit
└──────────────────────┘
```
## Consequences
### Positive
- **Zero Lost Work**: In-flight requests and jobs complete before shutdown.
- **Clean Resource Cleanup**: All connections are properly closed.
- **Zero-Downtime Deploys**: PM2 can reload without dropping requests.
- **Observability**: Shutdown progress is logged for debugging.
### Negative
- **Shutdown Delay**: A full shutdown takes 5-15 seconds.
- **Complexity**: Multiple shutdown handlers must be coordinated.
- **Edge Cases**: Very long-running jobs may be killed if they exceed the grace period.
## Key Files
- `server.ts` - HTTP server shutdown and signal handling
- `src/services/queueService.server.ts` - Queue shutdown (`gracefulShutdown`)
- `src/services/db/connection.db.ts` - Database pool shutdown (`closePool`)
- `ecosystem.config.cjs` - PM2 configuration with `kill_timeout`
## Related ADRs
- [ADR-006](./0006-background-job-processing-and-task-queues.md) - Background Job Processing
- [ADR-020](./0020-health-checks-and-liveness-readiness-probes.md) - Health Checks
- [ADR-014](./0014-containerization-and-deployment-strategy.md) - Containerization


@@ -0,0 +1,278 @@
# ADR-039: Dependency Injection Pattern
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
As the application grows, tightly coupled components become difficult to test and maintain. Common issues include:
1. **Hard-to-Test Code**: Components that instantiate their own dependencies cannot be easily unit tested with mocks.
2. **Rigid Architecture**: Changing one implementation requires modifying all consumers.
3. **Hidden Dependencies**: It's unclear what a component needs to function.
4. **Circular Dependencies**: Tight coupling can lead to circular import issues.
Dependency Injection (DI) addresses these issues by inverting the control of dependency creation.
## Decision
We will adopt a constructor-based dependency injection pattern for all services and repositories. This approach:
1. **Explicit Dependencies**: All dependencies are declared in the constructor.
2. **Default Values**: Production dependencies have sensible defaults.
3. **Testability**: Test code can inject mocks without modifying source code.
4. **Loose Coupling**: Components depend on interfaces, not implementations.
### Design Principles
- **Constructor Injection**: Dependencies are passed through constructors, not looked up globally.
- **Default Production Dependencies**: Use default parameter values for production instances.
- **Interface Segregation**: Depend on the minimal interface needed (e.g., `Pick<Pool, 'query'>`).
- **Composition Root**: Wire dependencies at the application entry point.
## Implementation Details
### Repository Pattern with DI
Located in `src/services/db/flyer.db.ts`:
```typescript
import { Pool, PoolClient } from 'pg';
import type { Logger } from 'pino';
import { getPool } from './connection.db';
import { NotFoundError } from './errors.db';
import type { Flyer, FlyerDbInsert } from '../../types';
export class FlyerRepository {
// Accept any object with a 'query' method - Pool or PoolClient
private db: Pick<Pool | PoolClient, 'query'>;
constructor(db: Pick<Pool | PoolClient, 'query'> = getPool()) {
this.db = db;
}
async getFlyerById(flyerId: number, logger: Logger): Promise<Flyer> {
const result = await this.db.query<Flyer>('SELECT * FROM flyers WHERE flyer_id = $1', [
flyerId,
]);
if (result.rows.length === 0) {
throw new NotFoundError(`Flyer with ID ${flyerId} not found.`);
}
return result.rows[0];
}
async insertFlyer(flyer: FlyerDbInsert, logger: Logger): Promise<Flyer> {
// Implementation
}
}
```
**Usage in Production**:
```typescript
// Uses default pool
const flyerRepo = new FlyerRepository();
```
**Usage in Tests**:
```typescript
const mockDb = {
query: vi.fn().mockResolvedValue({ rows: [mockFlyer] }),
};
const flyerRepo = new FlyerRepository(mockDb);
```
**Usage in Transactions**:
```typescript
import { withTransaction } from './connection.db';
await withTransaction(async (client) => {
// Pass transactional client to repository
const flyerRepo = new FlyerRepository(client);
const flyer = await flyerRepo.insertFlyer(flyerData, logger);
// ... more operations in the same transaction
});
```
### Service Layer with DI
Located in `src/services/backgroundJobService.ts`:
```typescript
export class BackgroundJobService {
constructor(
private personalizationRepo: PersonalizationRepository,
private notificationRepo: NotificationRepository,
private emailQueue: Queue<EmailJobData>,
private logger: Logger,
) {}
async runDailyDealCheck(): Promise<void> {
this.logger.info('[BackgroundJob] Starting daily deal check...');
const deals = await this.personalizationRepo.getBestSalePricesForAllUsers(this.logger);
// ... process deals
}
}
// Composition root - wire production dependencies
import { personalizationRepo, notificationRepo } from './db/index.db';
import { logger } from './logger.server';
import { emailQueue } from './queueService.server';
export const backgroundJobService = new BackgroundJobService(
personalizationRepo,
notificationRepo,
emailQueue,
logger,
);
```
**Testing with Mocks**:
```typescript
describe('BackgroundJobService', () => {
it('should process deals for all users', async () => {
const mockPersonalizationRepo = {
getBestSalePricesForAllUsers: vi.fn().mockResolvedValue([mockDeal]),
};
const mockNotificationRepo = {
createBulkNotifications: vi.fn().mockResolvedValue([]),
};
const mockEmailQueue = {
add: vi.fn().mockResolvedValue({ id: 'job-1' }),
};
const mockLogger = {
info: vi.fn(),
error: vi.fn(),
};
const service = new BackgroundJobService(
mockPersonalizationRepo as any,
mockNotificationRepo as any,
mockEmailQueue as any,
mockLogger as any,
);
await service.runDailyDealCheck();
expect(mockPersonalizationRepo.getBestSalePricesForAllUsers).toHaveBeenCalled();
expect(mockEmailQueue.add).toHaveBeenCalled();
});
});
```
### Processing Service with DI
Located in `src/services/flyer/flyerProcessingService.ts`:
```typescript
export class FlyerProcessingService {
constructor(
private fileHandler: FlyerFileHandler,
private aiProcessor: FlyerAiProcessor,
private fsAdapter: FileSystemAdapter,
private cleanupQueue: Queue<CleanupJobData>,
private dataTransformer: FlyerDataTransformer,
private persistenceService: FlyerPersistenceService,
) {}
async processFlyer(filePath: string, logger: Logger): Promise<ProcessedFlyer> {
// Use injected dependencies
const fileInfo = await this.fileHandler.extractMetadata(filePath);
const aiResult = await this.aiProcessor.analyze(filePath, logger);
const transformed = this.dataTransformer.transform(aiResult);
const saved = await this.persistenceService.save(transformed, logger);
// Queue cleanup
await this.cleanupQueue.add('cleanup', { filePath });
return saved;
}
}
// Composition root
const flyerProcessingService = new FlyerProcessingService(
new FlyerFileHandler(fsAdapter, execAsync),
new FlyerAiProcessor(aiService, db.personalizationRepo),
fsAdapter,
cleanupQueue,
new FlyerDataTransformer(),
new FlyerPersistenceService(),
);
```
### Interface Segregation
Use the minimum interface required:
```typescript
// Bad - depends on full Pool
constructor(pool: Pool) {}
// Good - depends only on what's needed
constructor(db: Pick<Pool | PoolClient, 'query'>) {}
```
This allows injecting either a `Pool`, `PoolClient` (for transactions), or a mock object with just a `query` method.
### Composition Root Pattern
Wire all dependencies at application startup:
```typescript
// src/services/db/index.db.ts - Composition root for repositories
import { getPool } from './connection.db';
export const userRepo = new UserRepository(getPool());
export const flyerRepo = new FlyerRepository(getPool());
export const adminRepo = new AdminRepository(getPool());
export const personalizationRepo = new PersonalizationRepository(getPool());
export const notificationRepo = new NotificationRepository(getPool());
export const db = {
userRepo,
flyerRepo,
adminRepo,
personalizationRepo,
notificationRepo,
};
```
## Consequences
### Positive
- **Testability**: Unit tests can inject mocks without modifying production code.
- **Flexibility**: Swap implementations (e.g., different database adapters) easily.
- **Explicit Dependencies**: Clear contract of what a component needs.
- **Transaction Support**: Repositories can participate in transactions by accepting a client.
### Negative
- **More Boilerplate**: Constructors become longer with many dependencies.
- **Composition Complexity**: Must wire dependencies somewhere (composition root).
- **No Runtime Type Checking**: TypeScript types are erased at runtime.
### Mitigation
For complex services with many dependencies, consider:
1. **Factory Functions**: Encapsulate construction logic.
2. **Dependency Groups**: Pass related dependencies as a single object.
3. **DI Containers**: For very large applications, consider a DI library like `tsyringe` or `inversify`.
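As an illustration of option 2, a dependency-group object keeps the constructor short while remaining explicit (hypothetical sketch reusing the types from the examples above):
```typescript
// Groups the six constructor parameters from FlyerProcessingService into one object.
interface FlyerProcessingDeps {
  fileHandler: FlyerFileHandler;
  aiProcessor: FlyerAiProcessor;
  cleanupQueue: Queue<CleanupJobData>;
  // ...remaining collaborators
}

class FlyerProcessingService {
  // One injected object instead of six positional parameters
  constructor(private deps: FlyerProcessingDeps) {}
}
```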
## Key Files
- `src/services/db/*.db.ts` - Repository classes with constructor DI
- `src/services/db/index.db.ts` - Composition root for repositories
- `src/services/backgroundJobService.ts` - Service class with constructor DI
- `src/services/flyer/flyerProcessingService.ts` - Complex service with multiple dependencies
## Related ADRs
- [ADR-002](./0002-standardized-transaction-management.md) - Transaction Management
- [ADR-034](./0034-repository-pattern-standards.md) - Repository Pattern Standards
- [ADR-035](./0035-service-layer-architecture.md) - Service Layer Architecture


@@ -0,0 +1,214 @@
# ADR-040: Testing Economics and Priorities
**Date**: 2026-01-09
**Status**: Accepted
## Context
ADR-010 established the testing strategy and standards. However, it does not address the economic trade-offs of testing: when the cost of writing and maintaining tests exceeds their value. This document provides practical guidance on where to invest testing effort for maximum return.
## Decision
We adopt a **value-based testing approach** that prioritizes tests based on:
1. Risk of the code path (what breaks if this fails?)
2. Stability of the code (how often does this change?)
3. Complexity of the logic (can a human easily verify correctness?)
4. Cost of the test (setup complexity, execution time, maintenance burden)
## Testing Investment Matrix
| Test Type | Investment Level | When to Write | When to Skip |
| --------------- | ------------------- | ------------------------------- | --------------------------------- |
| **E2E** | Minimal (5 tests) | Critical user flows only | Everything else |
| **Integration** | Moderate (17 tests) | API contracts, auth, DB queries | Internal service wiring |
| **Unit** | High (185+ tests) | Business logic, utilities | Defensive fallbacks, trivial code |
## High-Value Tests (Always Write)
### E2E Tests (Budget: 5-10 tests total)
Write E2E tests for flows where failure means:
- Users cannot sign up or log in
- Users cannot complete the core value proposition (upload flyer → see deals)
- Money or data is at risk
**Current E2E coverage is appropriate:**
- `auth.e2e.test.ts` - Registration, login, password reset
- `flyer-upload.e2e.test.ts` - Complete upload pipeline
- `user-journey.e2e.test.ts` - Full user workflow
- `admin-authorization.e2e.test.ts` - Admin access control
- `admin-dashboard.e2e.test.ts` - Admin operations
**Do NOT add E2E tests for:**
- UI variations or styling
- Edge cases (handle in unit tests)
- Features that can be tested faster at a lower level
### Integration Tests (Budget: 15-25 tests)
Write integration tests for:
- Every public API endpoint (contract testing)
- Authentication and authorization flows
- Database queries that involve joins or complex logic
- Middleware behavior (rate limiting, validation)
**Current integration coverage is appropriate:**
- Auth, admin, user routes
- Flyer processing pipeline
- Shopping lists, budgets, recipes
- Gamification and notifications
**Do NOT add integration tests for:**
- Internal service-to-service calls (mock at boundaries)
- Simple CRUD operations (test the repository pattern once)
- UI components (use unit tests)
### Unit Tests (Budget: Proportional to complexity)
Write unit tests for:
- **Pure functions and utilities** - High value, easy to test
- **Business logic in services** - Medium-high value
- **React components** - Rendering, user interactions, state changes
- **Custom hooks** - Data transformation, side effects
- **Validators and parsers** - Edge cases matter here
## Low-Value Tests (Skip or Defer)
### Tests That Cost More Than They're Worth
1. **Defensive fallback code protected by types**
```typescript
// This fallback can never execute if types are correct
const name = store.name || 'Unknown'; // store.name is required
```
- If you need `as any` to test it, the type system already prevents it
- Either remove the fallback or accept the coverage gap
2. **Switch/case default branches for exhaustive enums**
```typescript
switch (status) {
case 'pending':
return 'yellow';
case 'complete':
return 'green';
default:
return ''; // TypeScript prevents this
}
```
- The default exists for safety, not for execution
   - Don't test impossible states (see the `assertNever` sketch after this list)
3. **Trivial component variations**
- Testing every tab in a tab panel when they share logic
- Testing loading states that just show a spinner
- Testing disabled button states (test the logic that disables, not the disabled state)
4. **Tests requiring excessive mock setup**
- If test setup is longer than test assertions, reconsider
- Per ADR-010: "Excessive mock setup" is a code smell
5. **Framework behavior verification**
- React rendering, React Query caching, Router navigation
- Trust the framework; test your code
### Coverage Gaps to Accept
The following coverage gaps are acceptable and should NOT be closed with tests:
| Pattern | Reason | Alternative |
| ------------------------------------------ | ------------------------- | ----------------------------- |
| `value \|\| 'default'` for required fields | Type system prevents | Remove fallback or accept gap |
| `catch (error) { ... }` for typed APIs | Error types are known | Test the expected error types |
| `default:` in exhaustive switches | TypeScript exhaustiveness | Accept gap |
| Logging statements | Observability, not logic | No test needed |
| Feature flags / environment checks | Tested by deployment | Config tests if complex |
## Time Budget Guidelines
For a typical feature (new API endpoint + UI):
| Activity | Time Budget | Notes |
| --------------------------------------- | ----------- | ------------------------------------- |
| Unit tests (component + hook + utility) | 30-45 min | Write alongside code |
| Integration test (API contract) | 15-20 min | One test per endpoint |
| E2E test | 0 min | Only for critical paths |
| Total testing overhead | ~1 hour | Should not exceed implementation time |
**Rule of thumb**: If testing takes longer than implementation, you're either:
1. Testing too much
2. Writing tests that are too complex
3. Testing code that should be refactored
## Coverage Targets
We explicitly reject arbitrary coverage percentage targets. Instead:
| Metric | Target | Rationale |
| ---------------------- | --------------- | -------------------------------------- |
| Statement coverage | No target | High coverage ≠ quality tests |
| Branch coverage | No target | Many branches are defensive/impossible |
| E2E test count | 5-10 | Critical paths only |
| Integration test count | 15-25 | API contracts |
| Unit test files | 1:1 with source | Colocated, proportional |
## When to Add Tests to Existing Code
Add tests when:
1. **Fixing a bug** - Add a test that would have caught it
2. **Refactoring** - Add tests before changing behavior
3. **Code review feedback** - Reviewer identifies risk
4. **Production incident** - Prevent recurrence
Do NOT add tests:
1. To increase coverage percentages
2. For code that hasn't changed in 6+ months
3. For code scheduled for deletion/replacement
## Consequences
**Positive:**
- Testing effort focuses on high-risk, high-value code
- Developers spend less time on low-value tests
- Test suite runs faster (fewer unnecessary tests)
- Maintenance burden decreases
**Negative:**
- Some defensive code paths remain untested
- Coverage percentages may not satisfy external audits
- Requires judgment calls that may be inconsistent
## Key Files
- `docs/adr/0010-testing-strategy-and-standards.md` - Testing mechanics
- `vitest.config.ts` - Coverage configuration
- `src/tests/` - Test utilities and setup
## Review Checklist
Before adding a new test, ask:
1. [ ] What user-visible behavior does this test protect?
2. [ ] Can this be tested at a lower level (unit vs integration)?
3. [ ] Does this test require `as any` or mock gymnastics?
4. [ ] Will this test break when implementation changes (brittle)?
5. [ ] Is the test setup simpler than the code being tested?
If any answer suggests low value, skip the test or simplify.


@@ -0,0 +1,291 @@
# ADR-041: AI/Gemini Integration Architecture
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
The application relies heavily on Google Gemini AI for core functionality:
1. **Flyer Processing**: Extracting store names, dates, addresses, and individual sale items from uploaded flyer images.
2. **Receipt Analysis**: Parsing purchased items and prices from receipt images.
3. **Recipe Suggestions**: Generating recipe ideas based on available ingredients.
4. **Text Extraction**: OCR-style extraction from cropped image regions.
These AI operations have unique challenges:
- **Rate Limits**: Google AI API enforces requests-per-minute (RPM) limits.
- **Quota Buckets**: Different model families (stable, preview, experimental) have separate quotas.
- **Model Availability**: Models may be unavailable due to regional restrictions, updates, or high load.
- **Cost Variability**: Different models have different pricing (Flash-Lite vs Pro).
- **Output Limits**: Some models have 8k token limits, others 65k.
- **Testability**: Tests must not make real API calls.
## Decision
We will implement a centralized `AIService` class with:
1. **Dependency Injection**: AI client and filesystem are injectable for testability.
2. **Model Fallback Chain**: Automatic failover through prioritized model lists.
3. **Rate Limiting**: Per-instance rate limiter using `p-ratelimit`.
4. **Tiered Model Selection**: Different model lists for different task types.
5. **Environment-Aware Mocking**: Automatic mock client in test environments.
### Design Principles
- **Single Responsibility**: `AIService` handles all AI interactions.
- **Fail-Safe Fallbacks**: If a model fails, try the next one in the chain.
- **Cost Optimization**: Use cheaper "lite" models for simple text tasks.
- **Structured Logging**: Log all AI interactions with timing and model info.
## Implementation Details
### AIService Class Structure
Located in `src/services/aiService.server.ts`:
```typescript
import type { Content, Tool, GenerateContentResponse } from '@google/genai';
import type { Logger } from 'pino';

interface IAiClient {
generateContent(request: {
contents: Content[];
tools?: Tool[];
useLiteModels?: boolean;
}): Promise<GenerateContentResponse>;
}
interface IFileSystem {
readFile(path: string): Promise<Buffer>;
}
export class AIService {
private aiClient: IAiClient;
private fs: IFileSystem;
private rateLimiter: <T>(fn: () => Promise<T>) => Promise<T>;
private logger: Logger;
constructor(logger: Logger, aiClient?: IAiClient, fs?: IFileSystem) {
// If aiClient provided: use it (unit test)
// Else if test environment: use internal mock (integration test)
// Else: create real GoogleGenAI client (production)
}
}
```
### Tiered Model Lists
Models are organized by task complexity and quota bucket:
```typescript
// For image processing (vision + long output)
private readonly models = [
// Tier A: Fast & Stable
'gemini-2.5-flash', // Primary, 65k output
'gemini-2.5-flash-lite', // Cost-saver, 65k output
// Tier B: Heavy Lifters
'gemini-2.5-pro', // Complex layouts, 65k output
// Tier C: Preview Bucket (separate quota)
'gemini-3-flash-preview',
'gemini-3-pro-preview',
// Tier D: Experimental Bucket
'gemini-exp-1206',
// Tier E: Last Resort
'gemma-3-27b-it',
'gemini-2.0-flash-exp', // WARNING: 8k limit
];
// For simple text tasks (recipes, categorization)
private readonly models_lite = [
'gemini-2.5-flash-lite',
'gemini-2.0-flash-lite-001',
'gemini-2.0-flash-001',
'gemma-3-12b-it',
'gemma-3-4b-it',
'gemini-2.0-flash-exp',
];
```
### Fallback with Retry Logic
```typescript
private async _generateWithFallback(
genAI: GoogleGenAI,
request: { contents: Content[]; tools?: Tool[] },
models: string[],
): Promise<GenerateContentResponse> {
let lastError: Error | null = null;
for (const modelName of models) {
try {
return await genAI.models.generateContent({ model: modelName, ...request });
} catch (error: unknown) {
const errorMsg = extractErrorMessage(error);
const isRetriable = [
'quota', '429', '503', 'resource_exhausted',
'overloaded', 'unavailable', 'not found'
].some(term => errorMsg.toLowerCase().includes(term));
if (isRetriable) {
this.logger.warn(`Model '${modelName}' failed, trying next...`);
lastError = new Error(errorMsg);
continue;
}
throw error; // Non-retriable error
}
}
throw lastError || new Error('All AI models failed.');
}
```
### Rate Limiting
```typescript
import { pRateLimit } from 'p-ratelimit';

const requestsPerMinute = parseInt(process.env.GEMINI_RPM || '5', 10);
this.rateLimiter = pRateLimit({
interval: 60 * 1000,
rate: requestsPerMinute,
concurrency: requestsPerMinute,
});
// Usage:
const result = await this.rateLimiter(() =>
this.aiClient.generateContent({ contents: [...] })
);
```
### Test Environment Detection
```typescript
const isTestEnvironment = process.env.NODE_ENV === 'test' || !!process.env.VITEST_POOL_ID;
if (aiClient) {
// Unit test: use provided mock
this.aiClient = aiClient;
} else if (isTestEnvironment) {
// Integration test: use internal mock
this.aiClient = {
generateContent: async () => ({
text: JSON.stringify(this.getMockFlyerData()),
}),
};
} else {
  // Production: wrap the real SDK client in the IAiClient adapter
  const genAI = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
  this.aiClient = {
    generateContent: (request) =>
      this._generateWithFallback(genAI, request, request.useLiteModels ? this.models_lite : this.models),
  };
}
```
### Prompt Engineering
Prompts are constructed with:
1. **Clear Task Definition**: What to extract and in what format.
2. **Structured Output Requirements**: JSON schema with field descriptions.
3. **Examples**: Concrete examples of expected output.
4. **Context Hints**: User location for store address resolution.
```typescript
private _buildFlyerExtractionPrompt(
masterItems: MasterGroceryItem[],
submitterIp?: string,
userProfileAddress?: string,
): string {
// Location hint for address resolution
let locationHint = '';
if (userProfileAddress) {
locationHint = `The user has profile address "${userProfileAddress}"...`;
}
// Simplified master item list (reduce token usage)
const simplifiedMasterList = masterItems.map(item => ({
id: item.master_grocery_item_id,
name: item.name,
}));
return `
# TASK
Analyze the flyer image(s) and extract...
# RULES
1. Extract store_name, valid_from, valid_to, store_address
2. Extract items array with item, price_display, price_in_cents...
# EXAMPLES
- { "item": "Red Grapes", "price_display": "$1.99 /lb", ... }
# MASTER LIST
${JSON.stringify(simplifiedMasterList)}
`;
}
```
### Response Parsing
AI responses may contain markdown, trailing text, or formatting issues:
````typescript
private _parseJsonFromAiResponse<T>(responseText: string | undefined, logger: Logger): T | null {
if (!responseText) return null;
// Try to extract from markdown code block
const markdownMatch = responseText.match(/```(json)?\s*([\s\S]*?)\s*```/);
let jsonString = markdownMatch?.[2]?.trim() || responseText;
// Find JSON boundaries
const startIndex = Math.min(
jsonString.indexOf('{') >= 0 ? jsonString.indexOf('{') : Infinity,
jsonString.indexOf('[') >= 0 ? jsonString.indexOf('[') : Infinity
);
const endIndex = Math.max(jsonString.lastIndexOf('}'), jsonString.lastIndexOf(']'));
if (startIndex === Infinity || endIndex === -1) return null;
try {
return JSON.parse(jsonString.substring(startIndex, endIndex + 1));
} catch {
return null;
}
}
````
## Consequences
### Positive
- **Resilience**: Automatic failover when models are unavailable or rate-limited.
- **Cost Control**: Uses cheaper models for simple tasks.
- **Testability**: Full mock support for unit and integration tests.
- **Observability**: Detailed logging of all AI operations with timing.
- **Maintainability**: Centralized AI logic in one service.
### Negative
- **Model List Maintenance**: Must update model lists when new models release.
- **Complexity**: Fallback logic adds complexity.
- **Delayed Failures**: May take longer to fail if all models are down.
### Mitigation
- Monitor model deprecation announcements from Google.
- Add health checks that validate AI connectivity on startup.
- Consider caching successful model selections per task type.
## Key Files
- `src/services/aiService.server.ts` - Main AIService class
- `src/services/aiService.server.test.ts` - Unit tests with mocked AI client
- `src/services/aiApiClient.ts` - Low-level API client wrapper
- `src/services/aiAnalysisService.ts` - Higher-level analysis orchestration
- `src/types/ai.ts` - Zod schemas for AI response validation
## Related ADRs
- [ADR-027](./0027-standardized-naming-convention-for-ai-and-database-types.md) - Naming Conventions for AI Types
- [ADR-039](./0039-dependency-injection-pattern.md) - Dependency Injection Pattern
- [ADR-001](./0001-standardized-error-handling.md) - Error Handling


@@ -0,0 +1,329 @@
# ADR-042: Email and Notification Architecture
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
The application sends emails for multiple purposes:
1. **Transactional Emails**: Password reset, welcome emails, account verification.
2. **Deal Notifications**: Alerting users when watched items go on sale.
3. **Bulk Communications**: System announcements, marketing (future).
Email delivery has unique challenges:
- **Reliability**: Emails must be delivered even if the main request fails.
- **Rate Limits**: SMTP servers enforce sending limits.
- **Retry Logic**: Failed emails should be retried with backoff.
- **Templating**: Emails need consistent branding and formatting.
- **Testing**: Tests should not send real emails.
## Decision
We will implement a queue-based email system using:
1. **Nodemailer**: For SMTP transport and email composition.
2. **BullMQ**: For job queuing, retry logic, and rate limiting.
3. **Dedicated Worker**: Background process for email delivery.
4. **Structured Logging**: Job-scoped logging for debugging.
### Design Principles
- **Asynchronous Delivery**: Queue emails immediately, deliver asynchronously.
- **Idempotent Jobs**: Jobs can be retried safely.
- **Separation of Concerns**: Email composition separate from delivery.
- **Environment-Aware**: Disable real sending in test environments.
## Implementation Details
### Email Service Structure
Located in `src/services/emailService.server.ts`:
```typescript
import nodemailer from 'nodemailer';
import type { Job } from 'bullmq';
import type { Logger } from 'pino';
// SMTP transporter configured from environment
const transporter = nodemailer.createTransport({
host: process.env.SMTP_HOST,
port: parseInt(process.env.SMTP_PORT || '587', 10),
secure: process.env.SMTP_SECURE === 'true',
auth: {
user: process.env.SMTP_USER,
pass: process.env.SMTP_PASS,
},
});
```
### Email Job Data Structure
```typescript
// src/types/job-data.ts
export interface EmailJobData {
to: string;
subject: string;
text: string;
html: string;
}
```
### Core Send Function
```typescript
export const sendEmail = async (options: EmailJobData, logger: Logger) => {
const mailOptions = {
from: `"Flyer Crawler" <${process.env.SMTP_FROM_EMAIL}>`,
to: options.to,
subject: options.subject,
text: options.text,
html: options.html,
};
const info = await transporter.sendMail(mailOptions);
logger.info(
{ to: options.to, subject: options.subject, messageId: info.messageId },
'Email sent successfully.',
);
};
```
### Job Processor
```typescript
// Assumed: the shared pino logger exported by logger.server (path per repo layout).
import { logger as globalLogger } from './logger.server';

export const processEmailJob = async (job: Job<EmailJobData>) => {
// Create child logger with job context
const jobLogger = globalLogger.child({
jobId: job.id,
jobName: job.name,
recipient: job.data.to,
});
jobLogger.info('Picked up email job.');
try {
await sendEmail(job.data, jobLogger);
} catch (error) {
const wrappedError = error instanceof Error ? error : new Error(String(error));
jobLogger.error({ err: wrappedError, attemptsMade: job.attemptsMade }, 'Email job failed.');
throw wrappedError; // BullMQ will retry
}
};
```
### Specialized Email Functions
#### Password Reset
```typescript
export const sendPasswordResetEmail = async (to: string, token: string, logger: Logger) => {
const resetUrl = `${process.env.FRONTEND_URL}/reset-password?token=${token}`;
const html = `
<div style="font-family: sans-serif; padding: 20px;">
<h2>Password Reset Request</h2>
<p>Click the link below to set a new password. This link expires in 1 hour.</p>
<a href="${resetUrl}" style="background-color: #007bff; color: white; padding: 14px 25px; ...">
Reset Your Password
</a>
<p>If you did not request this, please ignore this email.</p>
</div>
`;
await sendEmail({ to, subject: 'Your Password Reset Request', text: '...', html }, logger);
};
```
#### Welcome Email
```typescript
export const sendWelcomeEmail = async (to: string, name: string | null, logger: Logger) => {
const recipientName = name || 'there';
const html = `
<div style="font-family: sans-serif; padding: 20px;">
<h2>Welcome!</h2>
<p>Hello ${recipientName},</p>
<p>Thank you for joining Flyer Crawler.</p>
<p>Start by uploading your first flyer to see how much you can save!</p>
</div>
`;
await sendEmail({ to, subject: 'Welcome to Flyer Crawler!', text: '...', html }, logger);
};
```
#### Deal Notifications
```typescript
export const sendDealNotificationEmail = async (
to: string,
name: string | null,
deals: WatchedItemDeal[],
logger: Logger,
) => {
const dealsListHtml = deals
.map(
(deal) => `
<li>
<strong>${deal.item_name}</strong> is on sale for
<strong>$${(deal.best_price_in_cents / 100).toFixed(2)}</strong>
at ${deal.store_name}!
</li>
`,
)
.join('');
const html = `
<h1>Hi ${name || 'there'},</h1>
<p>We found great deals on items you're watching:</p>
<ul>${dealsListHtml}</ul>
<p>Check them out on the deals page!</p>
`;
await sendEmail({ to, subject: 'New Deals Found!', text: '...', html }, logger);
};
```
### Queue Configuration
Located in `src/services/queueService.server.ts`:
```typescript
import { Queue, Worker, Job } from 'bullmq';
import { processEmailJob } from './emailService.server';
export const emailQueue = new Queue<EmailJobData>('email', {
connection: redisConnection,
defaultJobOptions: {
attempts: 3,
backoff: {
type: 'exponential',
delay: 1000,
},
removeOnComplete: 100,
removeOnFail: 500,
},
});
// Worker to process email jobs
const emailWorker = new Worker('email', processEmailJob, {
connection: redisConnection,
concurrency: 5,
});
```
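BullMQ can also enforce the SMTP sending limits mentioned in the context section via the worker-level `limiter` option; a sketch reusing the imports above (the quota numbers are assumptions, not the project's configured values):
```typescript
// Sketch: same queue and processor as above, throttled to 30 jobs per
// 60 seconds (an assumed SMTP quota).
const throttledEmailWorker = new Worker('email', processEmailJob, {
  connection: redisConnection,
  concurrency: 5,
  limiter: { max: 30, duration: 60_000 },
});
```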
### Enqueueing Emails
```typescript
// From backgroundJobService.ts
await emailQueue.add('deal-notification', {
to: user.email,
subject: 'New Deals Found!',
text: textContent,
html: htmlContent,
});
```
### Background Job Integration
Located in `src/services/backgroundJobService.ts`:
```typescript
export class BackgroundJobService {
constructor(
private personalizationRepo: PersonalizationRepository,
private notificationRepo: NotificationRepository,
private emailQueue: Queue<EmailJobData>,
private logger: Logger,
) {}
async runDailyDealCheck(): Promise<void> {
this.logger.info('Starting daily deal check...');
const deals = await this.personalizationRepo.getBestSalePricesForAllUsers(this.logger);
for (const userDeals of deals) {
await this.emailQueue.add('deal-notification', {
to: userDeals.email,
subject: 'New Deals Found!',
text: '...',
html: '...',
});
}
}
}
```
## Environment Variables
```bash
# SMTP Configuration
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_SECURE=false
SMTP_USER=user@example.com
SMTP_PASS=secret
SMTP_FROM_EMAIL=noreply@flyer-crawler.com
# Frontend URL for email links
FRONTEND_URL=https://flyer-crawler.com
```
## Consequences
### Positive
- **Reliability**: Failed emails are automatically retried with exponential backoff.
- **Scalability**: Queue can handle burst traffic without overwhelming SMTP.
- **Observability**: Job-scoped logging enables easy debugging.
- **Separation**: Email composition is decoupled from delivery timing.
- **Testability**: Can mock the queue or use Ethereal for testing.
### Negative
- **Complexity**: Adds queue infrastructure dependency (Redis).
- **Delayed Delivery**: Emails are not instant (queued first).
- **Monitoring Required**: Need to monitor queue depth and failure rates.
### Mitigation
- Use Bull Board UI for queue monitoring (already implemented).
- Set up alerts for queue depth and failure rate thresholds.
- Consider Ethereal or MailHog for development/testing (see the sketch below).
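For the last point, a minimal sketch of an Ethereal-backed transporter for development; `createTestAccount()` and `getTestMessageUrl()` are standard nodemailer APIs, but the wiring here is an assumption:
```typescript
import nodemailer from 'nodemailer';

// Development-only transport: Ethereal captures mail instead of delivering it.
export const createDevTransporter = async () => {
  const testAccount = await nodemailer.createTestAccount();
  return nodemailer.createTransport({
    host: 'smtp.ethereal.email',
    port: 587,
    secure: false,
    auth: { user: testAccount.user, pass: testAccount.pass },
  });
};

// After sending, log a preview URL instead of checking a real inbox:
// console.log(nodemailer.getTestMessageUrl(info));
```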
## Testing Strategy
```typescript
// Unit test with mocked queue
const mockEmailQueue = {
add: vi.fn().mockResolvedValue({ id: 'job-1' }),
};
const service = new BackgroundJobService(
mockPersonalizationRepo,
mockNotificationRepo,
mockEmailQueue as any,
mockLogger,
);
await service.runDailyDealCheck();
expect(mockEmailQueue.add).toHaveBeenCalledWith('deal-notification', expect.any(Object));
```
## Key Files
- `src/services/emailService.server.ts` - Email composition and sending
- `src/services/queueService.server.ts` - Queue configuration and workers
- `src/services/backgroundJobService.ts` - Scheduled deal notifications
- `src/types/job-data.ts` - Email job data types
## Related ADRs
- [ADR-006](./0006-background-job-processing-and-task-queues.md) - Background Job Processing
- [ADR-004](./0004-standardized-application-wide-structured-logging.md) - Structured Logging
- [ADR-039](./0039-dependency-injection-pattern.md) - Dependency Injection


@@ -0,0 +1,392 @@
# ADR-043: Express Middleware Pipeline Architecture
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
The Express application uses a layered middleware pipeline to handle cross-cutting concerns:
1. **Security**: Helmet headers, CORS, rate limiting.
2. **Parsing**: JSON body, URL-encoded, cookies.
3. **Authentication**: Session management, JWT verification.
4. **Validation**: Request body/params validation.
5. **File Handling**: Multipart form data, file uploads.
6. **Error Handling**: Centralized error responses.
Middleware ordering is critical: incorrect ordering can introduce security vulnerabilities or break functionality. This ADR documents the canonical middleware order and patterns.
## Decision
We will establish a strict middleware ordering convention:
1. **Security First**: Security headers and protections apply to all requests.
2. **Parsing Before Logic**: Body/cookie parsing before route handlers.
3. **Auth Before Routes**: Authentication middleware before protected routes.
4. **Validation At Route Level**: Per-route validation middleware.
5. **Error Handler Last**: Centralized error handling catches all errors.
### Design Principles
- **Defense in Depth**: Multiple security layers.
- **Fail-Fast**: Reject bad requests early in the pipeline.
- **Explicit Ordering**: Document and enforce middleware order.
- **Route-Level Flexibility**: Specific middleware per route as needed.
## Implementation Details
### Global Middleware Order
Located in `src/server.ts`:
```typescript
import express from 'express';
import helmet from 'helmet';
import cors from 'cors';
import cookieParser from 'cookie-parser';
import { requestTimeoutMiddleware } from './middleware/timeout.middleware';
import { rateLimiter } from './middleware/rateLimit.middleware';
import { errorHandler } from './middleware/errorHandler.middleware';
const app = express();
// ============================================
// LAYER 1: Security Headers & Protections
// ============================================
app.use(
helmet({
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
scriptSrc: ["'self'", "'unsafe-inline'"],
styleSrc: ["'self'", "'unsafe-inline'"],
imgSrc: ["'self'", 'data:', 'blob:'],
},
},
}),
);
app.use(
cors({
origin: process.env.FRONTEND_URL,
credentials: true,
}),
);
// ============================================
// LAYER 2: Request Limits & Timeouts
// ============================================
app.use(requestTimeoutMiddleware(30000)); // 30s default
app.use(rateLimiter); // Rate limiting per IP
// ============================================
// LAYER 3: Body & Cookie Parsing
// ============================================
app.use(express.json({ limit: '10mb' }));
app.use(express.urlencoded({ extended: true, limit: '10mb' }));
app.use(cookieParser());
// ============================================
// LAYER 4: Static Assets (before auth)
// ============================================
app.use('/flyer-images', express.static('flyer-images'));
// ============================================
// LAYER 5: Authentication Setup
// ============================================
app.use(passport.initialize());
app.use(passport.session());
// ============================================
// LAYER 6: Routes (with per-route middleware)
// ============================================
app.use('/api/auth', authRoutes);
app.use('/api/flyers', flyerRoutes);
app.use('/api/admin', adminRoutes);
// ... more routes
// ============================================
// LAYER 7: Error Handling (must be last)
// ============================================
app.use(errorHandler);
```
### Validation Middleware
Located in `src/middleware/validation.middleware.ts`:
```typescript
import { z } from 'zod';
import { Request, Response, NextFunction } from 'express';
import { ValidationError } from '../services/db/errors.db';
export const validate = <T extends z.ZodType>(schema: T) => {
return (req: Request, res: Response, next: NextFunction) => {
const result = schema.safeParse({
body: req.body,
query: req.query,
params: req.params,
});
if (!result.success) {
const errors = result.error.errors.map((err) => ({
path: err.path.join('.'),
message: err.message,
}));
return next(new ValidationError(errors));
}
// Attach validated data to request
req.validated = result.data;
next();
};
};
// Usage in routes:
router.post('/flyers', authenticate, validate(CreateFlyerSchema), flyerController.create);
```
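For `req.validated` to type-check, the project presumably augments Express's `Request` type; a minimal sketch of such a declaration (an assumption, not shown in the excerpt above):
```typescript
// Sketch: global module augmentation so TypeScript accepts req.validated.
declare global {
  namespace Express {
    interface Request {
      validated?: unknown;
    }
  }
}
export {};
```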
### File Upload Middleware
Located in `src/middleware/fileUpload.middleware.ts`:
```typescript
import multer from 'multer';
import path from 'path';
import { v4 as uuidv4 } from 'uuid';
import type { Request } from 'express';
const storage = multer.diskStorage({
destination: (req, file, cb) => {
cb(null, 'flyer-images/');
},
filename: (req, file, cb) => {
const ext = path.extname(file.originalname);
cb(null, `${uuidv4()}${ext}`);
},
});
const fileFilter = (req: Request, file: Express.Multer.File, cb: multer.FileFilterCallback) => {
const allowedTypes = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'];
if (allowedTypes.includes(file.mimetype)) {
cb(null, true);
} else {
cb(new Error('Invalid file type'));
}
};
export const uploadFlyer = multer({
storage,
fileFilter,
limits: {
fileSize: 10 * 1024 * 1024, // 10MB
files: 10, // Max 10 files per request
},
});
// Usage:
router.post('/flyers/upload', uploadFlyer.array('files', 10), flyerController.upload);
```
### Authentication Middleware
Located in `src/middleware/auth.middleware.ts`:
```typescript
import passport from 'passport';
import { Request, Response, NextFunction } from 'express';
// Require authenticated user
export const authenticate = (req: Request, res: Response, next: NextFunction) => {
passport.authenticate('jwt', { session: false }, (err, user) => {
if (err) return next(err);
if (!user) {
return res.status(401).json({ error: 'Unauthorized' });
}
req.user = user;
next();
})(req, res, next);
};
// Require admin role
export const requireAdmin = (req: Request, res: Response, next: NextFunction) => {
if (!req.user?.role || req.user.role !== 'admin') {
return res.status(403).json({ error: 'Forbidden' });
}
next();
};
// Optional auth (attach user if present, continue if not)
export const optionalAuth = (req: Request, res: Response, next: NextFunction) => {
passport.authenticate('jwt', { session: false }, (err, user) => {
if (user) req.user = user;
next();
})(req, res, next);
};
```
### Error Handler Middleware
Located in `src/middleware/errorHandler.middleware.ts`:
```typescript
import { Request, Response, NextFunction } from 'express';
import { v4 as uuidv4 } from 'uuid';
import { logger } from '../services/logger.server';
import { ValidationError, NotFoundError, UniqueConstraintError } from '../services/db/errors.db';
export const errorHandler = (err: Error, req: Request, res: Response, next: NextFunction) => {
const errorId = uuidv4();
// Log error with context
logger.error(
{
errorId,
err,
path: req.path,
method: req.method,
userId: req.user?.user_id,
},
'Request error',
);
// Map error types to HTTP responses
if (err instanceof ValidationError) {
return res.status(400).json({
success: false,
error: { code: 'VALIDATION_ERROR', message: err.message, details: err.errors },
meta: { errorId },
});
}
if (err instanceof NotFoundError) {
return res.status(404).json({
success: false,
error: { code: 'NOT_FOUND', message: err.message },
meta: { errorId },
});
}
if (err instanceof UniqueConstraintError) {
return res.status(409).json({
success: false,
error: { code: 'CONFLICT', message: err.message },
meta: { errorId },
});
}
// Default: Internal Server Error
return res.status(500).json({
success: false,
error: {
code: 'INTERNAL_ERROR',
message: process.env.NODE_ENV === 'production' ? 'An unexpected error occurred' : err.message,
},
meta: { errorId },
});
};
```
### Request Timeout Middleware
```typescript
export const requestTimeoutMiddleware = (timeout: number) => {
return (req: Request, res: Response, next: NextFunction) => {
res.setTimeout(timeout, () => {
if (!res.headersSent) {
res.status(503).json({
success: false,
error: { code: 'TIMEOUT', message: 'Request timed out' },
});
}
});
next();
};
};
```
## Route-Level Middleware Patterns
### Protected Route with Validation
```typescript
router.put(
'/flyers/:flyerId',
authenticate, // 1. Auth check
validate(UpdateFlyerSchema), // 2. Input validation
flyerController.update, // 3. Handler
);
```
### Admin-Only Route
```typescript
router.delete(
'/admin/users/:userId',
authenticate, // 1. Auth check
requireAdmin, // 2. Role check
validate(DeleteUserSchema), // 3. Input validation
adminController.deleteUser, // 4. Handler
);
```
### File Upload Route
```typescript
router.post(
'/flyers/upload',
authenticate, // 1. Auth check
uploadFlyer.array('files', 10), // 2. File handling
validate(UploadFlyerSchema), // 3. Metadata validation
flyerController.upload, // 4. Handler
);
```
### Public Route with Optional Auth
```typescript
router.get(
'/flyers/:flyerId',
optionalAuth, // 1. Attach user if present
flyerController.getById, // 2. Handler (can check req.user)
);
```
## Consequences
### Positive
- **Security**: Defense-in-depth with multiple security layers.
- **Consistency**: Predictable request processing order.
- **Maintainability**: Clear separation of concerns.
- **Debuggability**: Errors caught and logged centrally.
- **Flexibility**: Per-route middleware composition.
### Negative
- **Order Sensitivity**: Middleware order bugs can be subtle.
- **Performance**: Many middleware layers add latency.
- **Complexity**: New developers must understand the pipeline.
### Mitigation
- Document middleware order in comments (as shown above).
- Use integration tests that verify middleware chain behavior.
- Profile middleware performance in production.
## Key Files
- `src/server.ts` - Global middleware registration
- `src/middleware/validation.middleware.ts` - Zod validation
- `src/middleware/fileUpload.middleware.ts` - Multer configuration
- `src/middleware/multer.middleware.ts` - File upload handling
- `src/middleware/errorHandler.middleware.ts` - Centralized error handling
## Related ADRs
- [ADR-001](./0001-standardized-error-handling.md) - Error Handling
- [ADR-003](./0003-standardized-input-validation-using-middleware.md) - Input Validation
- [ADR-016](./0016-api-security-hardening.md) - API Security
- [ADR-032](./0032-rate-limiting-strategy.md) - Rate Limiting
- [ADR-033](./0033-file-upload-and-storage-strategy.md) - File Uploads


@@ -0,0 +1,275 @@
# ADR-044: Frontend Feature Organization Pattern
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
The React frontend has grown to include multiple distinct features:
- Flyer viewing and management
- Shopping list creation
- Budget tracking and charts
- Voice assistant
- User personalization
- Admin dashboard
Without clear organization, code becomes scattered across generic folders (`/components`, `/hooks`, `/utils`), making it hard to:
1. Understand feature boundaries
2. Find related code
3. Refactor or remove features
4. Onboard new developers
## Decision
We will adopt a **feature-based folder structure** where each major feature is self-contained in its own directory under `/features`. Shared code lives in dedicated top-level folders.
### Design Principles
- **Colocation**: Keep related code together (components, hooks, types, utils).
- **Feature Independence**: Features should minimize cross-dependencies.
- **Shared Extraction**: Only extract to shared folders when truly reused.
- **Flat Within Features**: Avoid deep nesting within feature folders.
## Implementation Details
### Directory Structure
```
src/
├── features/ # Feature modules
│ ├── flyer/ # Flyer viewing/management
│ │ ├── components/
│ │ ├── hooks/
│ │ ├── types.ts
│ │ └── index.ts
│ ├── shopping/ # Shopping lists
│ │ ├── components/
│ │ ├── hooks/
│ │ └── index.ts
│ ├── charts/ # Budget/analytics charts
│ │ ├── components/
│ │ └── index.ts
│ ├── voice-assistant/ # Voice commands
│ │ ├── components/
│ │ └── index.ts
│ └── admin/ # Admin dashboard
│ ├── components/
│ └── index.ts
├── components/ # Shared UI components
│ ├── ui/ # Primitive components (Button, Input, etc.)
│ ├── layout/ # Layout components (Header, Footer, etc.)
│ └── common/ # Shared composite components
├── hooks/ # Shared hooks
│ ├── queries/ # TanStack Query hooks
│ ├── mutations/ # TanStack Mutation hooks
│ └── utils/ # Utility hooks (useDebounce, etc.)
├── providers/ # React context providers
│ ├── AppProviders.tsx
│ ├── UserDataProvider.tsx
│ └── FlyersProvider.tsx
├── pages/ # Route page components
├── services/ # API clients, external services
├── types/ # Shared TypeScript types
├── utils/ # Shared utility functions
└── lib/ # Third-party library wrappers
```
### Feature Module Structure
Each feature follows a consistent internal structure:
```
features/flyer/
├── components/
│ ├── FlyerCard.tsx
│ ├── FlyerGrid.tsx
│ ├── FlyerUploader.tsx
│ ├── FlyerItemList.tsx
│ └── index.ts # Re-exports all components
├── hooks/
│ ├── useFlyerDetails.ts
│ ├── useFlyerUpload.ts
│ └── index.ts # Re-exports all hooks
├── types.ts # Feature-specific types
├── utils.ts # Feature-specific utilities
└── index.ts # Public API of the feature
```
### Feature Index File
Each feature has an `index.ts` that defines its public API:
```typescript
// features/flyer/index.ts
export { FlyerCard, FlyerGrid, FlyerUploader } from './components';
export { useFlyerDetails, useFlyerUpload } from './hooks';
export type { FlyerViewProps, FlyerUploadState } from './types';
```
### Import Patterns
```typescript
// Importing from a feature (preferred)
import { FlyerCard, useFlyerDetails } from '@/features/flyer';
// Importing shared components
import { Button, Card } from '@/components/ui';
import { useDebounce } from '@/hooks/utils';
// Avoid: reaching into feature internals
// import { FlyerCard } from '@/features/flyer/components/FlyerCard';
```
### Provider Organization
Located in `src/providers/`:
```typescript
// AppProviders.tsx - Composes all providers
export function AppProviders({ children }: { children: React.ReactNode }) {
return (
<QueryClientProvider client={queryClient}>
<AuthProvider>
<UserDataProvider>
<FlyersProvider>
<ThemeProvider>
{children}
</ThemeProvider>
</FlyersProvider>
</UserDataProvider>
</AuthProvider>
</QueryClientProvider>
);
}
```
### Query/Mutation Hook Organization
Located in `src/hooks/`:
```typescript
// hooks/queries/useFlyersQuery.ts
export function useFlyersQuery(options?: { storeId?: number }) {
return useQuery({
queryKey: ['flyers', options],
queryFn: () => flyerService.getFlyers(options),
staleTime: 5 * 60 * 1000,
});
}
// hooks/mutations/useFlyerUploadMutation.ts
export function useFlyerUploadMutation() {
const queryClient = useQueryClient();
return useMutation({
mutationFn: flyerService.uploadFlyer,
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: ['flyers'] });
},
});
}
```
### Page Components
Pages are thin wrappers that compose feature components:
```typescript
// pages/Flyers.tsx
import { FlyerGrid, FlyerUploader } from '@/features/flyer';
import { PageLayout } from '@/components/layout';
export function FlyersPage() {
return (
<PageLayout title="My Flyers">
<FlyerUploader />
<FlyerGrid />
</PageLayout>
);
}
```
### Cross-Feature Communication
When features need to communicate, use:
1. **Shared State Providers**: For global state (user, theme).
2. **Query Invalidation**: For data synchronization.
3. **Event Bus**: For loose coupling (see ADR-036 and the sketch below).
```typescript
// Feature A triggers update
const uploadMutation = useFlyerUploadMutation();
await uploadMutation.mutateAsync(file);
// Query invalidation automatically updates Feature B's flyer list
```
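A minimal event-bus sketch for the third option (an assumption; the project's actual bus is specified in ADR-036):
```typescript
// Features communicate by event name rather than by importing each other.
type Handler = (payload: unknown) => void;

export const createEventBus = () => {
  const handlers = new Map<string, Set<Handler>>();
  return {
    on(event: string, handler: Handler) {
      if (!handlers.has(event)) handlers.set(event, new Set());
      handlers.get(event)!.add(handler);
    },
    emit(event: string, payload?: unknown) {
      handlers.get(event)?.forEach((h) => h(payload));
    },
  };
};

// Usage: the shopping feature reacts to flyer uploads without a direct import.
const bus = createEventBus();
bus.on('flyer:uploaded', (payload) => console.log('refresh deals for', payload));
bus.emit('flyer:uploaded', { flyerId: 1 });
```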
## Naming Conventions
| Item | Convention | Example |
| -------------- | -------------------- | -------------------- |
| Feature folder | kebab-case | `voice-assistant/` |
| Component file | PascalCase | `FlyerCard.tsx` |
| Hook file | camelCase with `use` | `useFlyerDetails.ts` |
| Type file | lowercase | `types.ts` |
| Utility file | lowercase | `utils.ts` |
| Index file | lowercase | `index.ts` |
## When to Create a New Feature
Create a new feature folder when:
1. The functionality is distinct and self-contained.
2. It has its own set of components, hooks, and potentially types.
3. It could theoretically be extracted into a separate package.
4. It has minimal dependencies on other features.
Do NOT create a feature folder for:
- A single reusable component (use `components/`).
- A single utility function (use `utils/`).
- A single hook (use `hooks/`).
## Consequences
### Positive
- **Discoverability**: Easy to find all code related to a feature.
- **Encapsulation**: Features have clear boundaries and public APIs.
- **Refactoring**: Can modify or remove features with confidence.
- **Scalability**: Supports team growth with feature ownership.
- **Testing**: Can test features in isolation.
### Negative
- **Duplication Risk**: Similar utilities might be duplicated across features.
- **Decision Overhead**: Must decide when to extract to shared folders.
- **Import Verbosity**: Feature imports can be longer.
### Mitigation
- Regular refactoring sessions to extract shared code.
- Lint rules to prevent importing from feature internals (see the sketch below).
- Code review focus on proper feature boundaries.
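A sketch of such a lint rule using ESLint's built-in `no-restricted-imports` (the patterns are assumptions matched to this project's path aliases):
```typescript
// eslint.config.mjs (sketch): block deep imports into feature internals so
// consumers must go through each feature's public index.ts.
export default [
  {
    rules: {
      'no-restricted-imports': [
        'error',
        { patterns: ['@/features/*/components/*', '@/features/*/hooks/*'] },
      ],
    },
  },
];
```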
## Key Directories
- `src/features/flyer/` - Flyer viewing and management
- `src/features/shopping/` - Shopping list functionality
- `src/features/charts/` - Budget and analytics charts
- `src/features/voice-assistant/` - Voice command interface
- `src/features/admin/` - Admin dashboard
- `src/components/ui/` - Shared primitive components
- `src/hooks/queries/` - TanStack Query hooks
- `src/providers/` - React context providers
## Related ADRs
- [ADR-005](./0005-frontend-state-management-and-server-cache-strategy.md) - State Management
- [ADR-012](./0012-frontend-component-library-and-design-system.md) - Component Library
- [ADR-026](./0026-standardized-client-side-structured-logging.md) - Client Logging


@@ -0,0 +1,350 @@
# ADR-045: Test Data Factories and Fixtures
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
The application has a complex domain model with many entity types:
- Users, Profiles, Addresses
- Flyers, FlyerItems, Stores
- ShoppingLists, ShoppingListItems
- Recipes, RecipeIngredients
- Gamification (points, badges, leaderboards)
- And more...
Testing requires realistic mock data that:
1. Satisfies TypeScript types.
2. Has valid relationships between entities.
3. Is customizable for specific test scenarios.
4. Is consistent across test suites.
5. Avoids boilerplate in test files.
## Decision
We will implement a **factory function pattern** for test data generation:
1. **Centralized Mock Factories**: All factories in a single, organized file.
2. **Sensible Defaults**: Each factory produces valid data with minimal input.
3. **Override Support**: Factories accept partial overrides for customization.
4. **Relationship Helpers**: Factories can generate related entities.
5. **Type Safety**: Factories return properly typed objects.
### Design Principles
- **Convention over Configuration**: Factories work with zero arguments.
- **Composability**: Factories can call other factories.
- **Immutability**: Each call returns a new object (no shared references).
- **Predictability**: Deterministic output when seeded (see the sketch below).
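The factories below use a module-level counter for IDs but `uuid` and `Date` for other values; a fully deterministic run would need a seeded generator along these lines (a sketch, not code from the repository):
```typescript
// Minimal seeded PRNG (mulberry32) for reproducible factory output.
export const makeSeededRandom = (seed: number) => {
  let state = seed >>> 0;
  return (): number => {
    state = (state + 0x6d2b79f5) >>> 0;
    let t = state;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
};

// Usage: identical sequences for a given seed, across runs and machines.
const rand = makeSeededRandom(42);
const randomPriceInCents = () => Math.floor(rand() * 900) + 100; // 100-999
```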
## Implementation Details
### Factory File Structure
Located in `src/test/mockFactories.ts`:
```typescript
import { v4 as uuidv4 } from 'uuid';
import type {
User,
UserProfile,
Flyer,
FlyerItem,
ShoppingList,
// ... other types
} from '../types';
// ============================================
// PRIMITIVE HELPERS
// ============================================
let idCounter = 1;
export const nextId = () => idCounter++;
export const resetIdCounter = () => {
idCounter = 1;
};
export const randomEmail = () => `user-${uuidv4().slice(0, 8)}@test.com`;
export const randomDate = (daysAgo = 0) => {
const date = new Date();
date.setDate(date.getDate() - daysAgo);
return date.toISOString();
};
// ============================================
// USER FACTORIES
// ============================================
export const createMockUser = (overrides: Partial<User> = {}): User => ({
user_id: nextId(),
email: randomEmail(),
name: 'Test User',
role: 'user',
created_at: randomDate(30),
updated_at: randomDate(),
...overrides,
});
export const createMockUserProfile = (overrides: Partial<UserProfile> = {}): UserProfile => {
const user = createMockUser(overrides.user);
return {
user,
profile: createMockProfile({ user_id: user.user_id, ...overrides.profile }),
address: overrides.address ?? null,
preferences: overrides.preferences ?? null,
};
};
// ============================================
// FLYER FACTORIES
// ============================================
export const createMockFlyer = (overrides: Partial<Flyer> = {}): Flyer => ({
flyer_id: nextId(),
file_name: 'test-flyer.jpg',
image_url: 'https://example.com/flyer.jpg',
icon_url: 'https://example.com/flyer-icon.jpg',
checksum: uuidv4(),
store_name: 'Test Store',
store_address: '123 Test St',
valid_from: randomDate(7),
valid_to: randomDate(-7), // 7 days in future
item_count: 10,
status: 'approved',
uploaded_by: null,
created_at: randomDate(7),
updated_at: randomDate(),
...overrides,
});
export const createMockFlyerItem = (overrides: Partial<FlyerItem> = {}): FlyerItem => ({
flyer_item_id: nextId(),
flyer_id: overrides.flyer_id ?? nextId(),
item: 'Test Product',
price_display: '$2.99',
price_in_cents: 299,
quantity: 'each',
category_name: 'Groceries',
master_item_id: null,
view_count: 0,
click_count: 0,
created_at: randomDate(7),
updated_at: randomDate(),
...overrides,
});
// ============================================
// FLYER WITH ITEMS (COMPOSITE)
// ============================================
export const createMockFlyerWithItems = (
flyerOverrides: Partial<Flyer> = {},
itemCount = 5,
): { flyer: Flyer; items: FlyerItem[] } => {
const flyer = createMockFlyer(flyerOverrides);
const items = Array.from({ length: itemCount }, (_, i) =>
createMockFlyerItem({
flyer_id: flyer.flyer_id,
item: `Product ${i + 1}`,
price_in_cents: 100 + i * 50,
}),
);
flyer.item_count = items.length;
return { flyer, items };
};
// ============================================
// SHOPPING LIST FACTORIES
// ============================================
export const createMockShoppingList = (overrides: Partial<ShoppingList> = {}): ShoppingList => ({
shopping_list_id: nextId(),
user_id: overrides.user_id ?? nextId(),
name: 'Weekly Groceries',
is_active: true,
created_at: randomDate(14),
updated_at: randomDate(),
...overrides,
});
export const createMockShoppingListItem = (
overrides: Partial<ShoppingListItem> = {},
): ShoppingListItem => ({
shopping_list_item_id: nextId(),
shopping_list_id: overrides.shopping_list_id ?? nextId(),
item_name: 'Milk',
quantity: 1,
is_purchased: false,
created_at: randomDate(7),
updated_at: randomDate(),
...overrides,
});
```
### Usage in Tests
```typescript
import {
createMockUser,
createMockFlyer,
createMockFlyerWithItems,
resetIdCounter,
} from '../test/mockFactories';
describe('FlyerService', () => {
beforeEach(() => {
resetIdCounter(); // Consistent IDs across tests
});
it('should get flyer by ID', async () => {
const mockFlyer = createMockFlyer({ store_name: 'Walmart' });
mockDb.query.mockResolvedValue({ rows: [mockFlyer] });
const result = await flyerService.getFlyerById(mockFlyer.flyer_id);
expect(result.store_name).toBe('Walmart');
});
it('should return flyer with items', async () => {
const { flyer, items } = createMockFlyerWithItems(
{ store_name: 'Costco' },
10, // 10 items
);
mockDb.query.mockResolvedValueOnce({ rows: [flyer] }).mockResolvedValueOnce({ rows: items });
const result = await flyerService.getFlyerWithItems(flyer.flyer_id);
expect(result.flyer.store_name).toBe('Costco');
expect(result.items).toHaveLength(10);
});
});
```
### Bulk Data Generation
For integration tests or seeding:
```typescript
export const createMockDataset = () => {
const users = Array.from({ length: 10 }, () => createMockUser());
const flyers = Array.from({ length: 5 }, () => createMockFlyer());
const flyersWithItems = flyers.map((flyer) => ({
flyer,
items: Array.from({ length: Math.floor(Math.random() * 20) + 5 }, () =>
createMockFlyerItem({ flyer_id: flyer.flyer_id }),
),
}));
return { users, flyers, flyersWithItems };
};
```
### API Response Factories
For testing API handlers:
```typescript
export const createMockApiResponse = <T>(
data: T,
overrides: Partial<ApiResponse<T>> = {},
): ApiResponse<T> => ({
success: true,
data,
meta: {
timestamp: new Date().toISOString(),
requestId: uuidv4(),
...overrides.meta,
},
...overrides,
});
export const createMockPaginatedResponse = <T>(
items: T[],
page = 1,
pageSize = 20,
): PaginatedApiResponse<T> => ({
success: true,
data: items,
meta: {
timestamp: new Date().toISOString(),
requestId: uuidv4(),
},
pagination: {
page,
pageSize,
totalItems: items.length,
totalPages: Math.ceil(items.length / pageSize),
hasMore: false,
},
});
```
### Database Query Mock Helpers
```typescript
export const mockQueryResult = <T>(rows: T[]) => ({
rows,
rowCount: rows.length,
});
export const mockEmptyResult = () => ({
rows: [],
rowCount: 0,
});
export const mockInsertResult = <T>(inserted: T) => ({
rows: [inserted],
rowCount: 1,
});
```
## Test Cleanup Utilities
```typescript
// For integration tests with real database
export const cleanupTestData = async (pool: Pool) => {
await pool.query('DELETE FROM flyer_items WHERE flyer_id > 1000000');
await pool.query('DELETE FROM flyers WHERE flyer_id > 1000000');
await pool.query('DELETE FROM users WHERE user_id > 1000000');
};
// Mark test data with high IDs
export const createTestFlyer = (overrides: Partial<Flyer> = {}) =>
createMockFlyer({ flyer_id: 1000000 + nextId(), ...overrides });
```
## Consequences
### Positive
- **Consistency**: All tests use the same factory patterns.
- **Type Safety**: Factories return correctly typed objects.
- **Reduced Boilerplate**: Tests focus on behavior, not data setup.
- **Maintainability**: Update factory once, all tests benefit.
- **Flexibility**: Easy to create edge case data.
### Negative
- **Single Large File**: Factory file can become large.
- **Learning Curve**: New developers must learn factory patterns.
- **Maintenance**: Factories must be updated when types change.
### Mitigation
- Split factories into multiple files if needed (by domain).
- Add JSDoc comments explaining each factory.
- Use TypeScript to catch type mismatches automatically.
## Key Files
- `src/test/mockFactories.ts` - All mock factory functions
- `src/test/testUtils.ts` - Test helper utilities
- `src/test/setup.ts` - Global test setup with factory reset
## Related ADRs
- [ADR-010](./0010-testing-strategy-and-standards.md) - Testing Strategy
- [ADR-040](./0040-testing-economics-and-priorities.md) - Testing Economics
- [ADR-027](./0027-standardized-naming-convention-for-ai-and-database-types.md) - Type Naming


@@ -0,0 +1,363 @@
# ADR-046: Image Processing Pipeline
**Date**: 2026-01-09
**Status**: Accepted
**Implemented**: 2026-01-09
## Context
The application handles significant image processing for flyer uploads:
1. **Privacy Protection**: Strip EXIF metadata (location, device info).
2. **Optimization**: Resize, compress, and convert images for web delivery.
3. **Icon Generation**: Create thumbnails for listing views.
4. **Format Support**: Handle JPEG, PNG, WebP, and PDF inputs.
5. **Storage Management**: Organize processed images on disk.
These operations must be:
- **Performant**: Large images should not block the request.
- **Secure**: Prevent malicious file uploads.
- **Consistent**: Produce predictable output quality.
- **Testable**: Support unit testing without real files.
## Decision
We will implement a modular image processing pipeline using:
1. **Sharp**: For image resizing, compression, and format conversion.
2. **EXIF Parsing**: For metadata extraction and stripping.
3. **UUID Naming**: For unique, non-guessable file names.
4. **Directory Structure**: Organized storage for originals and derivatives.
### Design Principles
- **Pipeline Pattern**: Chain processing steps in a predictable order.
- **Fail-Fast Validation**: Reject invalid files before processing.
- **Idempotent Operations**: Same input produces same output.
- **Resource Cleanup**: Delete temp files on error.
## Implementation Details
### Image Processor Module
Located in `src/utils/imageProcessor.ts`:
```typescript
import sharp from 'sharp';
import path from 'path';
import { v4 as uuidv4 } from 'uuid';
import fs from 'fs/promises';
import type { Logger } from 'pino';
// ============================================
// CONFIGURATION
// ============================================
const IMAGE_CONFIG = {
maxWidth: 2048,
maxHeight: 2048,
quality: 85,
iconSize: 200,
allowedFormats: ['jpeg', 'png', 'webp', 'avif'],
outputFormat: 'webp' as const,
};
// ============================================
// MAIN PROCESSING FUNCTION
// ============================================
export async function processAndSaveImage(
inputPath: string,
outputDir: string,
originalFileName: string,
logger: Logger,
): Promise<string> {
const outputFileName = `${uuidv4()}.${IMAGE_CONFIG.outputFormat}`;
const outputPath = path.join(outputDir, outputFileName);
logger.info({ inputPath, outputPath }, 'Processing image');
try {
// Sharp strips EXIF/XMP metadata (including GPS) by default, since
// .withMetadata() is never called.
await sharp(inputPath)
.rotate() // Auto-rotate based on EXIF orientation
.resize(IMAGE_CONFIG.maxWidth, IMAGE_CONFIG.maxHeight, {
fit: 'inside',
withoutEnlargement: true,
})
.webp({ quality: IMAGE_CONFIG.quality })
.toFile(outputPath);
logger.info({ outputPath }, 'Image processed successfully');
return outputFileName;
} catch (error) {
logger.error({ error, inputPath }, 'Image processing failed');
throw error;
}
}
```
### Icon Generation
```typescript
export async function generateFlyerIcon(
inputPath: string,
iconsDir: string,
logger: Logger,
): Promise<string> {
// Ensure icons directory exists
await fs.mkdir(iconsDir, { recursive: true });
const iconFileName = `${uuidv4()}-icon.webp`;
const iconPath = path.join(iconsDir, iconFileName);
logger.info({ inputPath, iconPath }, 'Generating icon');
await sharp(inputPath)
.resize(IMAGE_CONFIG.iconSize, IMAGE_CONFIG.iconSize, {
fit: 'cover',
position: 'top', // Flyers usually have store name at top
})
.webp({ quality: 80 })
.toFile(iconPath);
logger.info({ iconPath }, 'Icon generated successfully');
return iconFileName;
}
```
### EXIF Metadata Extraction
For audit/logging purposes before stripping:
```typescript
import ExifParser from 'exif-parser';
export async function extractExifMetadata(
filePath: string,
logger: Logger,
): Promise<ExifMetadata | null> {
try {
const buffer = await fs.readFile(filePath);
const parser = ExifParser.create(buffer);
const result = parser.parse();
const metadata: ExifMetadata = {
make: result.tags?.Make,
model: result.tags?.Model,
dateTime: result.tags?.DateTimeOriginal,
gpsLatitude: result.tags?.GPSLatitude,
gpsLongitude: result.tags?.GPSLongitude,
orientation: result.tags?.Orientation,
};
// Log if GPS data was present (privacy concern)
if (metadata.gpsLatitude || metadata.gpsLongitude) {
logger.info({ filePath }, 'GPS data found in image, will be stripped during processing');
}
return metadata;
} catch (error) {
logger.debug({ error, filePath }, 'No EXIF data found or parsing failed');
return null;
}
}
```
### PDF to Image Conversion
```typescript
import * as pdfjs from 'pdfjs-dist';
import { createCanvas } from 'canvas'; // node-canvas; required for server-side page rendering
export async function convertPdfToImages(
pdfPath: string,
outputDir: string,
logger: Logger,
): Promise<string[]> {
const pdfData = await fs.readFile(pdfPath);
const pdf = await pdfjs.getDocument({ data: pdfData }).promise;
const outputPaths: string[] = [];
for (let i = 1; i <= pdf.numPages; i++) {
const page = await pdf.getPage(i);
const viewport = page.getViewport({ scale: 2.0 }); // 2x for quality
// Create canvas and render
const canvas = createCanvas(viewport.width, viewport.height);
const context = canvas.getContext('2d');
await page.render({
canvasContext: context,
viewport: viewport,
}).promise;
// Save as image
const outputFileName = `${uuidv4()}-page-${i}.png`;
const outputPath = path.join(outputDir, outputFileName);
const buffer = canvas.toBuffer('image/png');
await fs.writeFile(outputPath, buffer);
outputPaths.push(outputPath);
logger.info({ page: i, outputPath }, 'PDF page converted to image');
}
return outputPaths;
}
```
### File Validation
```typescript
import { fileTypeFromBuffer } from 'file-type';
export async function validateImageFile(
filePath: string,
logger: Logger,
): Promise<{ valid: boolean; mimeType: string | null; error?: string }> {
try {
// fs.readFile has no partial-read option, so open the file and read only
// the first 4100 bytes (enough for file-type's magic-number detection).
const handle = await fs.open(filePath, 'r');
const buffer = Buffer.alloc(4100);
await handle.read(buffer, 0, 4100, 0);
await handle.close();
const type = await fileTypeFromBuffer(buffer);
if (!type) {
return { valid: false, mimeType: null, error: 'Unknown file type' };
}
const allowedMimes = ['image/jpeg', 'image/png', 'image/webp', 'image/avif', 'application/pdf'];
if (!allowedMimes.includes(type.mime)) {
return {
valid: false,
mimeType: type.mime,
error: `File type ${type.mime} not allowed`,
};
}
return { valid: true, mimeType: type.mime };
} catch (error) {
logger.error({ error, filePath }, 'File validation failed');
return { valid: false, mimeType: null, error: 'Validation error' };
}
}
```
### Storage Organization
```
flyer-images/
├── originals/ # Uploaded files (if kept)
│ └── {uuid}.{ext}
├── processed/ # Optimized images (or root level)
│ └── {uuid}.webp
├── icons/ # Thumbnails
│ └── {uuid}-icon.webp
└── temp/ # Temporary processing files
└── {uuid}.tmp
```
### Cleanup Utilities
```typescript
export async function cleanupTempFiles(
tempDir: string,
maxAgeMs: number,
logger: Logger,
): Promise<number> {
const files = await fs.readdir(tempDir);
const now = Date.now();
let deletedCount = 0;
for (const file of files) {
const filePath = path.join(tempDir, file);
const stats = await fs.stat(filePath);
const age = now - stats.mtimeMs;
if (age > maxAgeMs) {
await fs.unlink(filePath);
deletedCount++;
}
}
logger.info({ deletedCount, tempDir }, 'Cleaned up temp files');
return deletedCount;
}
```
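A scheduling sketch for the cleanup (assumes node-cron and the shared pino logger; the project's real scheduler may live in its cron-jobs module instead):
```typescript
import cron from 'node-cron';
import { logger } from '../services/logger.server'; // assumed shared pino logger

// Hourly: delete temp files older than one hour (cleanupTempFiles defined above).
cron.schedule('0 * * * *', () => {
  void cleanupTempFiles('flyer-images/temp', 60 * 60 * 1000, logger);
});
```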
### Integration with Flyer Processing
```typescript
// In flyerProcessingService.ts
export async function processUploadedFlyer(
file: Express.Multer.File,
logger: Logger,
): Promise<{ imageUrl: string; iconUrl: string }> {
const flyerImageDir = 'flyer-images';
const iconsDir = path.join(flyerImageDir, 'icons');
// 1. Validate file
const validation = await validateImageFile(file.path, logger);
if (!validation.valid) {
throw new ValidationError([{ path: 'file', message: validation.error! }]);
}
// 2. Extract and log EXIF before stripping
await extractExifMetadata(file.path, logger);
// 3. Process and optimize image
const processedFileName = await processAndSaveImage(
file.path,
flyerImageDir,
file.originalname,
logger,
);
// 4. Generate icon
const processedImagePath = path.join(flyerImageDir, processedFileName);
const iconFileName = await generateFlyerIcon(processedImagePath, iconsDir, logger);
// 5. Construct URLs
const baseUrl = process.env.BACKEND_URL || 'http://localhost:3001';
const imageUrl = `${baseUrl}/flyer-images/${processedFileName}`;
const iconUrl = `${baseUrl}/flyer-images/icons/${iconFileName}`;
// 6. Delete original upload (privacy)
await fs.unlink(file.path);
return { imageUrl, iconUrl };
}
```
## Consequences
### Positive
- **Privacy**: EXIF metadata (including GPS) is stripped automatically.
- **Performance**: WebP output reduces file sizes by 25-35%.
- **Consistency**: All images processed to standard format and dimensions.
- **Security**: File type validation prevents malicious uploads.
- **Organization**: Clear directory structure for storage management.
### Negative
- **CPU Intensive**: Image processing can be slow for large files.
- **Storage**: Keeping originals doubles storage requirements.
- **Dependency**: Sharp requires native binaries.
### Mitigation
- Process images in background jobs (BullMQ queue; see the sketch below).
- Configure whether to keep originals based on requirements.
- Use pre-built Sharp binaries via npm.
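For the first mitigation, a minimal sketch of offloading to BullMQ (queue name, payload shape, and Redis settings are assumptions):
```typescript
import { Queue } from 'bullmq';

// Assumed payload shape; the repo's actual job data may differ.
interface FlyerImageJobData {
  inputPath: string;
  outputDir: string;
}

export const imageQueue = new Queue<FlyerImageJobData>('flyer-image-processing', {
  connection: { host: process.env.REDIS_HOST ?? 'localhost', port: 6379 },
});

// Enqueue at upload time instead of processing inline in the request handler.
export const enqueueImageProcessing = (inputPath: string) =>
  imageQueue.add('process', { inputPath, outputDir: 'flyer-images' });
```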
## Key Files
- `src/utils/imageProcessor.ts` - Core image processing functions
- `src/services/flyer/flyerProcessingService.ts` - Integration with flyer workflow
- `src/middleware/fileUpload.middleware.ts` - Multer configuration
## Related ADRs
- [ADR-033](./0033-file-upload-and-storage-strategy.md) - File Upload Strategy
- [ADR-006](./0006-background-job-processing-and-task-queues.md) - Background Jobs
- [ADR-041](./0041-ai-gemini-integration-architecture.md) - AI Integration (uses processed images)


@@ -0,0 +1,545 @@
# ADR-047: Project File and Folder Organization
**Date**: 2026-01-09
**Status**: Proposed
**Effort**: XL (Major reorganization across entire codebase)
## Context
The project has grown organically with a mix of organizational patterns:
- **By Type**: Components, hooks, middleware, utilities, types all in flat directories
- **By Feature**: Routes, database modules, and partial feature directories
- **Mixed Concerns**: Frontend and backend code intermingled in `src/`
Current pain points:
1. **Flat services directory**: 75+ files with no subdirectory grouping
2. **Monolithic types.ts**: 750+ lines, unclear when to add new types
3. **Flat components directory**: 43+ components at root level
4. **Incomplete feature modules**: Features contain only UI, not domain logic
5. **No clear frontend/backend separation**: Both share `src/` root
As the project scales, these issues compound, making navigation, refactoring, and onboarding increasingly difficult.
## Decision
We will adopt a **domain-driven organization** with clear separation between:
1. **Client code** (React, browser-only)
2. **Server code** (Express, Node-only)
3. **Shared code** (Types, utilities used by both)
Within each layer, organize by **feature/domain** rather than by file type.
### Design Principles
- **Colocation**: Related code lives together (components, hooks, types, tests)
- **Explicit Boundaries**: Clear separation between client, server, and shared
- **Feature Ownership**: Each domain owns its entire vertical slice
- **Discoverability**: New developers can find code by thinking about features, not file types
- **Incremental Migration**: Structure supports gradual transition from current layout
## Target Directory Structure
```
src/
├── client/ # React frontend (browser-only code)
│ ├── app/ # App shell and routing
│ │ ├── App.tsx
│ │ ├── routes.tsx
│ │ └── providers/ # React context providers
│ │ ├── AppProviders.tsx
│ │ ├── AuthProvider.tsx
│ │ ├── FlyersProvider.tsx
│ │ └── index.ts
│ │
│ ├── features/ # Feature modules (UI + hooks + types)
│ │ ├── auth/
│ │ │ ├── components/
│ │ │ │ ├── LoginForm.tsx
│ │ │ │ ├── RegisterForm.tsx
│ │ │ │ └── index.ts
│ │ │ ├── hooks/
│ │ │ │ ├── useAuth.ts
│ │ │ │ ├── useLogin.ts
│ │ │ │ └── index.ts
│ │ │ ├── types.ts
│ │ │ └── index.ts
│ │ │
│ │ ├── flyer/
│ │ │ ├── components/
│ │ │ │ ├── FlyerCard.tsx
│ │ │ │ ├── FlyerGrid.tsx
│ │ │ │ ├── FlyerUploader.tsx
│ │ │ │ ├── BulkImporter.tsx
│ │ │ │ └── index.ts
│ │ │ ├── hooks/
│ │ │ │ ├── useFlyersQuery.ts
│ │ │ │ ├── useFlyerUploadMutation.ts
│ │ │ │ └── index.ts
│ │ │ ├── types.ts
│ │ │ └── index.ts
│ │ │
│ │ ├── shopping/
│ │ │ ├── components/
│ │ │ ├── hooks/
│ │ │ ├── types.ts
│ │ │ └── index.ts
│ │ │
│ │ ├── recipes/
│ │ │ ├── components/
│ │ │ ├── hooks/
│ │ │ └── index.ts
│ │ │
│ │ ├── charts/
│ │ │ ├── components/
│ │ │ └── index.ts
│ │ │
│ │ ├── voice-assistant/
│ │ │ ├── components/
│ │ │ └── index.ts
│ │ │
│ │ ├── user/
│ │ │ ├── components/
│ │ │ ├── hooks/
│ │ │ └── index.ts
│ │ │
│ │ ├── gamification/
│ │ │ ├── components/
│ │ │ ├── hooks/
│ │ │ └── index.ts
│ │ │
│ │ └── admin/
│ │ ├── components/
│ │ ├── hooks/
│ │ ├── pages/ # Admin-specific pages
│ │ └── index.ts
│ │
│ ├── pages/ # Route page components
│ │ ├── HomePage.tsx
│ │ ├── MyDealsPage.tsx
│ │ ├── UserProfilePage.tsx
│ │ └── index.ts
│ │
│ ├── components/ # Shared UI components
│ │ ├── ui/ # Primitive components (design system)
│ │ │ ├── Button.tsx
│ │ │ ├── Card.tsx
│ │ │ ├── Input.tsx
│ │ │ ├── Modal.tsx
│ │ │ ├── Badge.tsx
│ │ │ └── index.ts
│ │ │
│ │ ├── layout/ # Layout components
│ │ │ ├── Header.tsx
│ │ │ ├── Footer.tsx
│ │ │ ├── Sidebar.tsx
│ │ │ ├── PageLayout.tsx
│ │ │ └── index.ts
│ │ │
│ │ ├── feedback/ # User feedback components
│ │ │ ├── LoadingSpinner.tsx
│ │ │ ├── ErrorMessage.tsx
│ │ │ ├── Toast.tsx
│ │ │ ├── ConfirmDialog.tsx
│ │ │ └── index.ts
│ │ │
│ │ ├── forms/ # Form components
│ │ │ ├── FormField.tsx
│ │ │ ├── SearchInput.tsx
│ │ │ ├── DatePicker.tsx
│ │ │ └── index.ts
│ │ │
│ │ ├── icons/ # Icon components
│ │ │ ├── ChevronIcon.tsx
│ │ │ ├── UserIcon.tsx
│ │ │ └── index.ts
│ │ │
│ │ └── index.ts
│ │
│ ├── hooks/ # Shared hooks (not feature-specific)
│ │ ├── useDebounce.ts
│ │ ├── useLocalStorage.ts
│ │ ├── useMediaQuery.ts
│ │ └── index.ts
│ │
│ ├── services/ # Client-side services (API clients)
│ │ ├── apiClient.ts
│ │ ├── logger.client.ts
│ │ └── index.ts
│ │
│ ├── lib/ # Third-party library wrappers
│ │ ├── queryClient.ts
│ │ ├── toast.ts
│ │ └── index.ts
│ │
│ └── styles/ # Global styles
│ ├── globals.css
│ └── tailwind.css
├── server/ # Express backend (Node-only code)
│ ├── app.ts # Express app setup
│ ├── server.ts # Server entry point
│ │
│ ├── domains/ # Domain modules (business logic)
│ │ ├── auth/
│ │ │ ├── auth.service.ts
│ │ │ ├── auth.routes.ts
│ │ │ ├── auth.controller.ts
│ │ │ ├── auth.repository.ts
│ │ │ ├── auth.types.ts
│ │ │ ├── auth.service.test.ts
│ │ │ ├── auth.routes.test.ts
│ │ │ └── index.ts
│ │ │
│ │ ├── flyer/
│ │ │ ├── flyer.service.ts
│ │ │ ├── flyer.routes.ts
│ │ │ ├── flyer.controller.ts
│ │ │ ├── flyer.repository.ts
│ │ │ ├── flyer.types.ts
│ │ │ ├── flyer.processing.ts # Flyer-specific processing logic
│ │ │ ├── flyer.ai.ts # AI integration for flyers
│ │ │ └── index.ts
│ │ │
│ │ ├── user/
│ │ │ ├── user.service.ts
│ │ │ ├── user.routes.ts
│ │ │ ├── user.controller.ts
│ │ │ ├── user.repository.ts
│ │ │ └── index.ts
│ │ │
│ │ ├── shopping/
│ │ │ ├── shopping.service.ts
│ │ │ ├── shopping.routes.ts
│ │ │ ├── shopping.repository.ts
│ │ │ └── index.ts
│ │ │
│ │ ├── recipe/
│ │ │ ├── recipe.service.ts
│ │ │ ├── recipe.routes.ts
│ │ │ ├── recipe.repository.ts
│ │ │ └── index.ts
│ │ │
│ │ ├── gamification/
│ │ │ ├── gamification.service.ts
│ │ │ ├── gamification.routes.ts
│ │ │ ├── gamification.repository.ts
│ │ │ └── index.ts
│ │ │
│ │ ├── notification/
│ │ │ ├── notification.service.ts
│ │ │ ├── email.service.ts
│ │ │ └── index.ts
│ │ │
│ │ ├── ai/
│ │ │ ├── ai.service.ts
│ │ │ ├── ai.client.ts
│ │ │ ├── ai.prompts.ts
│ │ │ └── index.ts
│ │ │
│ │ └── admin/
│ │ ├── admin.routes.ts
│ │ ├── admin.controller.ts
│ │ ├── admin.service.ts
│ │ └── index.ts
│ │
│ ├── middleware/ # Express middleware
│ │ ├── auth.middleware.ts
│ │ ├── validation.middleware.ts
│ │ ├── errorHandler.middleware.ts
│ │ ├── rateLimit.middleware.ts
│ │ ├── fileUpload.middleware.ts
│ │ └── index.ts
│ │
│ ├── infrastructure/ # Cross-cutting infrastructure
│ │ ├── database/
│ │ │ ├── pool.ts
│ │ │ ├── migrations/
│ │ │ └── seeds/
│ │ │
│ │ ├── cache/
│ │ │ ├── redis.ts
│ │ │ └── cacheService.ts
│ │ │
│ │ ├── queue/
│ │ │ ├── queueService.ts
│ │ │ ├── workers/
│ │ │ │ ├── email.worker.ts
│ │ │ │ ├── flyer.worker.ts
│ │ │ │ └── index.ts
│ │ │ └── index.ts
│ │ │
│ │ ├── jobs/
│ │ │ ├── cronJobs.ts
│ │ │ ├── dailyAnalytics.job.ts
│ │ │ └── index.ts
│ │ │
│ │ └── logging/
│ │ ├── logger.ts
│ │ └── index.ts
│ │
│ ├── config/ # Server configuration
│ │ ├── database.config.ts
│ │ ├── redis.config.ts
│ │ ├── auth.config.ts
│ │ └── index.ts
│ │
│ └── utils/ # Server-only utilities
│ ├── imageProcessor.ts
│ ├── geocoding.ts
│ └── index.ts
├── shared/ # Code shared between client and server
│ ├── types/ # Shared TypeScript types
│ │ ├── entities/ # Domain entities
│ │ │ ├── flyer.types.ts
│ │ │ ├── user.types.ts
│ │ │ ├── shopping.types.ts
│ │ │ ├── recipe.types.ts
│ │ │ └── index.ts
│ │ │
│ │ ├── api/ # API contract types
│ │ │ ├── requests.ts
│ │ │ ├── responses.ts
│ │ │ ├── errors.ts
│ │ │ └── index.ts
│ │ │
│ │ └── index.ts
│ │
│ ├── schemas/ # Zod validation schemas
│ │ ├── flyer.schema.ts
│ │ ├── user.schema.ts
│ │ ├── auth.schema.ts
│ │ └── index.ts
│ │
│ ├── constants/ # Shared constants
│ │ ├── categories.ts
│ │ ├── errorCodes.ts
│ │ └── index.ts
│ │
│ └── utils/ # Isomorphic utilities
│ ├── formatting.ts
│ ├── validation.ts
│ └── index.ts
├── tests/ # Test infrastructure
│ ├── setup/
│ │ ├── vitest.setup.ts
│ │ └── testDb.setup.ts
│ │
│ ├── fixtures/
│ │ ├── mockFactories.ts
│ │ ├── sampleFlyers/
│ │ └── index.ts
│ │
│ ├── utils/
│ │ ├── testHelpers.ts
│ │ └── index.ts
│ │
│ ├── integration/ # Integration tests
│ │ ├── api/
│ │ └── database/
│ │
│ └── e2e/ # End-to-end tests
│ └── flows/
├── scripts/ # Build and utility scripts
│ ├── seed.ts
│ ├── migrate.ts
│ └── generateTypes.ts
└── index.tsx # Client entry point
```
## Domain Module Structure
Each server domain follows a consistent structure:
```
domains/flyer/
├── flyer.service.ts # Business logic
├── flyer.routes.ts # Express routes
├── flyer.controller.ts # Route handlers
├── flyer.repository.ts # Database access
├── flyer.types.ts # Domain-specific types
├── flyer.service.test.ts # Service tests
├── flyer.routes.test.ts # Route tests
└── index.ts # Public API
```
### Domain Index Pattern
Each domain exports a clean public API:
```typescript
// server/domains/flyer/index.ts
export { FlyerService } from './flyer.service';
export { flyerRoutes } from './flyer.routes';
export type { FlyerWithItems, FlyerCreateInput } from './flyer.types';
```
## Client Feature Module Structure
Each client feature follows a consistent structure:
```
client/features/flyer/
├── components/
│ ├── FlyerCard.tsx
│ ├── FlyerCard.test.tsx
│ ├── FlyerGrid.tsx
│ └── index.ts
├── hooks/
│ ├── useFlyersQuery.ts
│ ├── useFlyerUploadMutation.ts
│ └── index.ts
├── types.ts # Feature-specific client types
└── index.ts # Public API
```
## Import Path Aliases
Configure TypeScript and bundler for clean imports:
```typescript
// tsconfig.json (compilerOptions excerpt - paths must live under compilerOptions with a baseUrl)
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/client/*": ["src/client/*"],
      "@/server/*": ["src/server/*"],
      "@/shared/*": ["src/shared/*"],
      "@/tests/*": ["src/tests/*"]
    }
  }
}
// Usage examples
import { Button, Card } from '@/client/components/ui';
import { useFlyersQuery } from '@/client/features/flyer';
import { FlyerService } from '@/server/domains/flyer';
import type { Flyer } from '@/shared/types/entities';
```
## Migration Strategy
Given the scope of this reorganization, migrate incrementally:
### Phase 1: Create Directory Structure
1. Create `client/`, `server/`, `shared/` directories
2. Set up path aliases in tsconfig.json
3. Update build configuration (Vite; a sketch of the matching aliases follows below)
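A minimal sketch of the matching Vite aliases, assuming a standard `vite.config.ts` at the repository root:
```typescript
// vite.config.ts (sketch) - mirrors the tsconfig paths above
import path from 'node:path';
import { defineConfig } from 'vite';

export default defineConfig({
  resolve: {
    alias: {
      '@/client': path.resolve(__dirname, 'src/client'),
      '@/server': path.resolve(__dirname, 'src/server'),
      '@/shared': path.resolve(__dirname, 'src/shared'),
      '@/tests': path.resolve(__dirname, 'src/tests'),
    },
  },
});
```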
### Phase 2: Migrate Shared Code
1. Move types to `shared/types/`
2. Move schemas to `shared/schemas/`
3. Move shared utils to `shared/utils/`
4. Update imports across codebase
### Phase 3: Migrate Server Code
1. Create `server/domains/` structure
2. Move one domain at a time (start with `auth` or `user`)
3. Move each service + routes + repository together
4. Update route registration in app.ts (see the sketch after this list)
5. Run tests after each domain migration
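For example, a migrated domain would be registered roughly like this; the mount point is illustrative, and `flyerRoutes` is the export shown in the Domain Index Pattern above:
```typescript
// app.ts (sketch) - registration after migrating the flyer domain
import { flyerRoutes } from '@/server/domains/flyer';

app.use('/api/flyers', flyerRoutes);
```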
### Phase 4: Migrate Client Code
1. Create `client/features/` structure
2. Move components into features
3. Move hooks into features or shared hooks
4. Move pages to `client/pages/`
5. Organize shared components into categories
### Phase 5: Cleanup
1. Remove empty old directories
2. Update all remaining imports
3. Update CI/CD paths if needed
4. Update documentation
## Naming Conventions
| Item | Convention | Example |
| ----------------- | -------------------- | ----------------------- |
| Domain directory | lowercase | `flyer/`, `shopping/` |
| Feature directory | kebab-case | `voice-assistant/` |
| Service file | domain.service.ts | `flyer.service.ts` |
| Route file | domain.routes.ts | `flyer.routes.ts` |
| Repository file | domain.repository.ts | `flyer.repository.ts` |
| Component file | PascalCase.tsx | `FlyerCard.tsx` |
| Hook file | camelCase.ts | `useFlyersQuery.ts` |
| Type file | domain.types.ts | `flyer.types.ts` |
| Test file | \*.test.ts(x) | `flyer.service.test.ts` |
| Index file | index.ts | `index.ts` |
## File Placement Guidelines
**Where does this file go?**
| If the file is... | Place it in... |
| ------------------------------------ | ------------------------------------------------ |
| Used only by React | `client/` |
| Used only by Express/Node | `server/` |
| TypeScript types used by both | `shared/types/` |
| Zod schemas | `shared/schemas/` |
| React component for one feature | `client/features/{feature}/components/` |
| React component used across features | `client/components/` |
| React hook for one feature | `client/features/{feature}/hooks/` |
| React hook used across features | `client/hooks/` |
| Business logic for a domain | `server/domains/{domain}/` |
| Database access for a domain | `server/domains/{domain}/{domain}.repository.ts` |
| Express middleware | `server/middleware/` |
| Background job worker | `server/infrastructure/queue/workers/` |
| Cron job definition | `server/infrastructure/jobs/` |
| Test factory/fixture | `tests/fixtures/` |
## Consequences
### Positive
- **Clear Boundaries**: Frontend, backend, and shared code are explicitly separated
- **Feature Discoverability**: Find all code for a feature in one place
- **Parallel Development**: Teams can work on domains independently
- **Easier Refactoring**: Domain boundaries make changes localized
- **Better Onboarding**: New developers navigate by feature, not file type
- **Scalability**: Structure supports growth without becoming unwieldy
### Negative
- **Large Migration Effort**: Significant one-time cost (XL effort)
- **Import Updates**: All imports need updating
- **Learning Curve**: Team must learn new structure
- **Merge Conflicts**: In-flight PRs will need rebasing
### Mitigation
- Use automated tools (e.g., `ts-morph`) to update imports (see the sketch after this list)
- Migrate one domain/feature at a time
- Create a migration checklist and track progress
- Coordinate with team to minimize in-flight work during migration phases
- Consider using feature flags to ship incrementally
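As one concrete option, a hedged `ts-morph` sketch for rewriting import specifiers in bulk; the old and new path prefixes are placeholders to adapt per domain:
```typescript
// Hypothetical codemod: rewrite old flat-services imports to the domain layout
import { Project } from 'ts-morph';

const project = new Project({ tsConfigFilePath: 'tsconfig.json' });

for (const sourceFile of project.getSourceFiles()) {
  for (const decl of sourceFile.getImportDeclarations()) {
    const spec = decl.getModuleSpecifierValue();
    if (spec.includes('/services/flyerService')) {
      // Placeholder mapping - build a real old-path -> new-path table per domain
      decl.setModuleSpecifier('@/server/domains/flyer');
    }
  }
}

project.saveSync();
```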
## Key Differences from Current Structure
| Aspect | Current | Target |
| ---------------- | -------------------------- | ----------------------------------------- |
| Frontend/Backend | Mixed in `src/` | Separated in `client/` and `server/` |
| Services | Flat directory (75+ files) | Grouped by domain |
| Components | Flat directory (43+ files) | Categorized (ui, layout, feedback, forms) |
| Types | Monolithic `types.ts` | Split by entity in `shared/types/` |
| Features | UI-only | Full vertical slice (UI + hooks + types) |
| Routes | Separate from services | Co-located in domain |
| Tests | Co-located + `tests/` | Co-located + `tests/` for fixtures |
## Related ADRs
- [ADR-034](./0034-repository-pattern-standards.md) - Repository Pattern (affects domain structure)
- [ADR-035](./0035-service-layer-architecture.md) - Service Layer (affects domain structure)
- [ADR-044](./0044-frontend-feature-organization.md) - Frontend Features (this ADR supersedes it)
- [ADR-045](./0045-test-data-factories-and-fixtures.md) - Test Fixtures (affects tests/ directory)

# ADR-048: Authentication Strategy
**Date**: 2026-01-09
**Status**: Partially Implemented
**Implemented**: 2026-01-09 (Local auth only)
## Context
The application requires a secure authentication system that supports both traditional email/password login and social OAuth providers (Google, GitHub). The system must handle user sessions, token refresh, account security (lockout after failed attempts), and integrate seamlessly with the existing Express middleware pipeline.
Currently, **only local authentication is enabled**. OAuth strategies are fully implemented but commented out, pending configuration of OAuth provider credentials.
## Decision
We will implement a stateless JWT-based authentication system with the following components:
1. **Local Authentication**: Email/password login with bcrypt hashing.
2. **OAuth Authentication**: Google and GitHub OAuth 2.0 (currently disabled).
3. **JWT Access Tokens**: Short-lived tokens (15 minutes) for API authentication.
4. **Refresh Tokens**: Long-lived tokens (7 days) stored in HTTP-only cookies.
5. **Account Security**: Lockout after 5 failed login attempts for 15 minutes.
### Design Principles
- **Stateless Sessions**: No server-side session storage; JWT contains all auth state.
- **Defense in Depth**: Multiple security layers (rate limiting, lockout, secure cookies).
- **Graceful OAuth Degradation**: OAuth is optional; system works with local auth only.
- **OAuth User Flexibility**: OAuth users have `password_hash = NULL` in database.
## Current Implementation Status
| Component | Status | Notes |
| ------------------------ | ------- | ----------------------------------------------------------- |
| **Local Authentication** | Enabled | Email/password with bcrypt (salt rounds = 10) |
| **JWT Access Tokens** | Enabled | 15-minute expiry, `Authorization: Bearer` header |
| **Refresh Tokens** | Enabled | 7-day expiry, HTTP-only cookie |
| **Account Lockout** | Enabled | 5 failed attempts, 15-minute lockout |
| **Password Reset** | Enabled | Email-based token flow |
| **Google OAuth** | Disabled | Commented out; requires GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET env vars |
| **GitHub OAuth** | Disabled | Commented out; requires GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET env vars |
| **OAuth Routes** | Disabled | `/api/auth/google`, `/api/auth/github` + callbacks (commented out) |
| **OAuth Frontend UI** | Disabled | Login buttons planned for AuthView.tsx (not yet implemented) |
## Implementation Details
### Authentication Flow
```text
┌─────────────────────────────────────────────────────────────────────┐
│ AUTHENTICATION FLOW │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Login │───>│ Passport │───>│ JWT │───>│ Protected│ │
│ │ Request │ │ Local │ │ Token │ │ Routes │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ │ │ │ │
│ │ ┌──────────┐ │ │ │
│ └────────>│ OAuth │─────────────┘ │ │
│ (disabled) │ Provider │ │ │
│ └──────────┘ │ │
│ │ │
│ ┌──────────┐ ┌──────────┐ │ │
│ │ Refresh │───>│ New │<─────────────────────────┘ │
│ │ Token │ │ JWT │ (when access token expires) │
│ └──────────┘ └──────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────┘
```
### Local Strategy (Enabled)
Located in `src/routes/passport.routes.ts`:
```typescript
passport.use(
new LocalStrategy(
{ usernameField: 'email', passReqToCallback: true },
async (req, email, password, done) => {
// 1. Find user with profile by email
      const userprofile = await db.userRepo.findUserWithProfileByEmail(email, req.log);
      if (!userprofile) return done(null, false); // Unknown email - fail without detail
      // 2. Check account lockout
      if (userprofile.failed_login_attempts >= MAX_FAILED_ATTEMPTS) {
// Check if lockout period has passed
}
// 3. Verify password with bcrypt
      // OAuth-only users have password_hash = NULL and cannot log in locally
      const isMatch = userprofile.password_hash
        ? await bcrypt.compare(password, userprofile.password_hash)
        : false;
// 4. On success, reset failed attempts and return user
// 5. On failure, increment failed attempts
},
),
);
```
**Security Features**:
- Bcrypt password hashing with salt rounds = 10
- Account lockout after 5 failed attempts
- 15-minute lockout duration
- Failed-attempt counts persist across lockout periods and reset only on successful login
- Activity logging for failed login attempts
### JWT Strategy (Enabled)
```typescript
const jwtOptions = {
jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
secretOrKey: JWT_SECRET,
};
passport.use(
new JwtStrategy(jwtOptions, async (jwt_payload, done) => {
const userProfile = await db.userRepo.findUserProfileById(jwt_payload.user_id);
if (userProfile) {
return done(null, userProfile);
}
return done(null, false);
}),
);
```
**Token Configuration**:
- Access token: 15 minutes expiry
- Refresh token: 7 days expiry, 64-byte random hex
- Refresh token stored in HTTP-only cookie with `secure` flag in production
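For illustration, a minimal sketch of issuing both tokens under this configuration; it mirrors the commented `handleOAuthCallback` later in this ADR, and `db.saveRefreshToken` is the same assumed helper used there:
```typescript
import crypto from 'node:crypto';
import jwt from 'jsonwebtoken';

// Short-lived access token carries the auth state (stateless sessions)
const accessToken = jwt.sign({ user_id: user.user_id }, JWT_SECRET, { expiresIn: '15m' });

// Long-lived opaque refresh token, persisted server-side for rotation
const refreshToken = crypto.randomBytes(64).toString('hex');
await db.saveRefreshToken(user.user_id, refreshToken);

res.cookie('refreshToken', refreshToken, {
  httpOnly: true,
  secure: process.env.NODE_ENV === 'production', // `secure` flag in production
  maxAge: 7 * 24 * 60 * 60 * 1000, // 7 days
});
```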
### OAuth Strategies (Disabled)
#### Google OAuth
Located in `src/routes/passport.routes.ts` (lines 167-217, commented):
```typescript
// passport.use(new GoogleStrategy({
// clientID: process.env.GOOGLE_CLIENT_ID!,
// clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
// callbackURL: '/api/auth/google/callback',
// scope: ['profile', 'email']
// },
// async (accessToken, refreshToken, profile, done) => {
// const email = profile.emails?.[0]?.value;
// const user = await db.findUserByEmail(email);
// if (user) {
// return done(null, user);
// }
// // Create new user with null password_hash
// const newUser = await db.createUser(email, null, {
// full_name: profile.displayName,
// avatar_url: profile.photos?.[0]?.value
// });
// return done(null, newUser);
// }
// ));
```
#### GitHub OAuth
Located in `src/routes/passport.routes.ts` (lines 219-269, commented):
```typescript
// passport.use(new GitHubStrategy({
// clientID: process.env.GITHUB_CLIENT_ID!,
// clientSecret: process.env.GITHUB_CLIENT_SECRET!,
// callbackURL: '/api/auth/github/callback',
// scope: ['user:email']
// },
// async (accessToken, refreshToken, profile, done) => {
// const email = profile.emails?.[0]?.value;
// // Similar flow to Google OAuth
// }
// ));
```
#### OAuth Routes (Disabled)
Located in `src/routes/auth.routes.ts` (lines 289-315, commented):
```typescript
// const handleOAuthCallback = (req, res) => {
// const user = req.user;
// const accessToken = jwt.sign(payload, JWT_SECRET, { expiresIn: '15m' });
// const refreshToken = crypto.randomBytes(64).toString('hex');
//
// await db.saveRefreshToken(user.user_id, refreshToken);
// res.cookie('refreshToken', refreshToken, { httpOnly: true, secure: true });
// res.redirect(`${FRONTEND_URL}/auth/callback?token=${accessToken}`);
// };
// router.get('/google', passport.authenticate('google', { session: false }));
// router.get('/google/callback', passport.authenticate('google', { ... }), handleOAuthCallback);
// router.get('/github', passport.authenticate('github', { session: false }));
// router.get('/github/callback', passport.authenticate('github', { ... }), handleOAuthCallback);
```
### Database Schema
**Users Table** (`sql/initial_schema.sql`):
```sql
CREATE TABLE public.users (
user_id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
email TEXT NOT NULL UNIQUE,
password_hash TEXT, -- NULL for OAuth-only users
refresh_token TEXT, -- Current refresh token
failed_login_attempts INTEGER DEFAULT 0,
last_failed_login TIMESTAMPTZ,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
```
**Note**: There is no separate OAuth provider mapping table. OAuth users are identified by `password_hash = NULL`. If a user signs up via OAuth and later wants to add a password, this would require schema changes.
### Authentication Middleware
Located in `src/routes/passport.routes.ts`:
```typescript
// Require admin role
export const isAdmin = (req, res, next) => {
if (req.user?.role === 'admin') {
next();
} else {
next(new ForbiddenError('Administrator access required.'));
}
};
// Optional auth - attach user if present, continue if not
export const optionalAuth = (req, res, next) => {
passport.authenticate('jwt', { session: false }, (err, user) => {
if (user) req.user = user;
next();
})(req, res, next);
};
// Mock auth for testing (only in NODE_ENV=test)
export const mockAuth = (req, res, next) => {
if (process.env.NODE_ENV === 'test') {
req.user = createMockUserProfile({ role: 'admin' });
}
next();
};
```
## Enabling OAuth
### Step 1: Set Environment Variables
Add to `.env`:
```bash
# Google OAuth
GOOGLE_CLIENT_ID=your-google-client-id
GOOGLE_CLIENT_SECRET=your-google-client-secret
# GitHub OAuth
GITHUB_CLIENT_ID=your-github-client-id
GITHUB_CLIENT_SECRET=your-github-client-secret
```
### Step 2: Configure OAuth Providers
**Google Cloud Console**:
1. Create project at <https://console.cloud.google.com/>
2. Enable the Google People API (the legacy Google+ API has been shut down)
3. Create OAuth 2.0 credentials (Web Application)
4. Add authorized redirect URI:
- Development: `http://localhost:3001/api/auth/google/callback`
- Production: `https://your-domain.com/api/auth/google/callback`
**GitHub Developer Settings**:
1. Go to <https://github.com/settings/developers>
2. Create new OAuth App
3. Set Authorization callback URL:
- Development: `http://localhost:3001/api/auth/github/callback`
- Production: `https://your-domain.com/api/auth/github/callback`
### Step 3: Uncomment Backend Code
**In `src/routes/passport.routes.ts`**:
1. Uncomment import statements (lines 5-6):
```typescript
import { Strategy as GoogleStrategy } from 'passport-google-oauth20';
import { Strategy as GitHubStrategy } from 'passport-github2';
```
2. Uncomment Google strategy (lines 167-217)
3. Uncomment GitHub strategy (lines 219-269)
**In `src/routes/auth.routes.ts`**:
1. Uncomment `handleOAuthCallback` function (lines 291-309)
2. Uncomment OAuth routes (lines 311-315)
### Step 4: Add Frontend OAuth Buttons
Create login buttons that redirect to the endpoints below; a minimal sketch follows the list:
- Google: `GET /api/auth/google`
- GitHub: `GET /api/auth/github`
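A sketch of those buttons; plain anchors suffice because the OAuth flow is a full-page redirect rather than an XHR call:
```typescript
// Hypothetical additions to AuthView.tsx
<a href="/api/auth/google">Continue with Google</a>
<a href="/api/auth/github">Continue with GitHub</a>
```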
Handle callback at `/auth/callback?token=<accessToken>`:
1. Extract token from URL
2. Store in client-side token storage
3. Redirect to dashboard
### Step 5: Handle OAuth Callback Page
Create `src/pages/AuthCallback.tsx`. A minimal sketch, assuming react-router-dom and a hypothetical `setToken()` auth helper:
```typescript
import { useEffect } from 'react';
import { useLocation, useNavigate } from 'react-router-dom';
import { setToken } from '../services/tokenStorage'; // hypothetical auth helper

const AuthCallback = () => {
  const location = useLocation();
  const navigate = useNavigate();
  useEffect(() => {
    const token = new URLSearchParams(location.search).get('token');
    if (token) {
      setToken(token);
      navigate('/dashboard');
    } else {
      navigate('/login?error=auth_failed');
    }
  }, [location.search, navigate]);
  return null; // Pure redirect handler - nothing to render
};
```
## Known Limitations
1. **No OAuth Provider ID Mapping**: Users are identified by email only. If a user has accounts with different emails on Google and GitHub, they create separate accounts.
2. **No Account Linking**: Users cannot link multiple OAuth providers to one account.
3. **No Password Addition for OAuth Users**: OAuth-only users cannot add a password to enable local login.
4. **No PKCE Flow**: OAuth implementation uses standard flow, not PKCE (Proof Key for Code Exchange).
5. **No OAuth State Parameter Validation**: The commented code doesn't show explicit state parameter handling for CSRF protection (Passport may handle this internally).
6. **No Refresh Token from OAuth Providers**: Only email/profile data is extracted; OAuth refresh tokens are not stored for API access.
## Dependencies
**Installed** (all available):
- `passport` v0.7.0
- `passport-local` v1.0.0
- `passport-jwt` v4.0.1
- `passport-google-oauth20` v2.0.0
- `passport-github2` v0.1.12
- `bcrypt` v5.x
- `jsonwebtoken` v9.x
**Type Definitions**:
- `@types/passport`
- `@types/passport-local`
- `@types/passport-jwt`
- `@types/passport-google-oauth20`
- `@types/passport-github2`
## Consequences
### Positive
- **Stateless Architecture**: No session storage required; scales horizontally.
- **Secure by Default**: HTTP-only cookies, short token expiry, bcrypt hashing.
- **Account Protection**: Lockout prevents brute-force attacks.
- **Flexible OAuth**: Can be enabled with minimal code changes (set env vars and uncomment the prepared strategies).
- **Graceful Degradation**: System works with local auth only.
### Negative
- **OAuth Disabled by Default**: Requires manual uncommenting to enable.
- **No Account Linking**: Multiple OAuth providers create separate accounts.
- **Frontend Work Required**: OAuth login buttons don't exist yet.
- **Token in URL**: OAuth callback passes token in URL (visible in browser history).
### Mitigation
- Document OAuth enablement steps clearly (see [../architecture/AUTHENTICATION.md](../architecture/AUTHENTICATION.md)).
- Consider adding OAuth provider ID columns for future account linking.
- Use URL fragment (`#token=`) instead of query parameter for callback.
## Key Files
| File | Purpose |
| ------------------------------------------------------ | ------------------------------------------------ |
| `src/routes/passport.routes.ts` | Passport strategies (local, JWT, OAuth) |
| `src/routes/auth.routes.ts` | Auth endpoints (login, register, refresh, OAuth) |
| `src/services/authService.ts` | Auth business logic |
| `src/services/db/user.db.ts` | User database operations |
| `src/config/env.ts` | Environment variable validation |
| [AUTHENTICATION.md](../architecture/AUTHENTICATION.md) | OAuth setup guide |
| `.env.example` | Environment variable template |
## Related ADRs
- [ADR-011](./0011-advanced-authorization-and-access-control-strategy.md) - Authorization and Access Control
- [ADR-016](./0016-api-security-hardening.md) - API Security (rate limiting, headers)
- [ADR-032](./0032-rate-limiting-strategy.md) - Rate Limiting
- [ADR-043](./0043-express-middleware-pipeline.md) - Middleware Pipeline
## Future Enhancements
1. **Enable OAuth**: Uncomment strategies and configure providers.
2. **Add OAuth Provider Mapping Table**: Store `googleId`, `githubId` for account linking.
3. **Implement Account Linking**: Allow users to connect multiple OAuth providers.
4. **Add Password to OAuth Users**: Allow OAuth users to set a password.
5. **Implement PKCE**: Add PKCE flow for enhanced OAuth security.
6. **Token in Fragment**: Use URL fragment for OAuth callback token.
7. **OAuth Token Storage**: Store OAuth refresh tokens for provider API access.
8. **Magic Link Login**: Add passwordless email login option.

# ADR-049: Gamification and Achievement System
**Date**: 2026-01-11
**Status**: Accepted
**Implemented**: 2026-01-11
## Context
The application implements a gamification system to encourage user engagement through achievements and points. Users earn achievements for completing specific actions within the platform, and these achievements contribute to a points-based leaderboard.
Key requirements:
1. **User Engagement**: Reward users for meaningful actions (uploads, recipes, sharing).
2. **Progress Tracking**: Show users their accomplishments and progress.
3. **Social Competition**: Leaderboard to compare users by points.
4. **Idempotent Awards**: Achievements should only be awarded once per user.
5. **Transactional Safety**: Achievement awards must be atomic with the triggering action.
## Decision
We will implement a database-driven gamification system with:
1. **Database Functions**: Core logic in PostgreSQL for atomicity and idempotency.
2. **Database Triggers**: Automatic achievement awards on specific events.
3. **Application-Level Awards**: Explicit calls from service layer when triggers aren't suitable.
4. **Points Aggregation**: Stored in user profile for efficient leaderboard queries.
### Design Principles
- **Single Award**: Each achievement can only be earned once per user (enforced by unique constraint).
- **Atomic Operations**: Achievement awards happen within the same transaction as the triggering action.
- **Silent Failure**: If an achievement doesn't exist, the award function returns silently (no error).
- **Points Sync**: Points are updated on the profile immediately when an achievement is awarded.
## Implementation Details
### Database Schema
```sql
-- Achievements master table
CREATE TABLE public.achievements (
achievement_id BIGSERIAL PRIMARY KEY,
name TEXT UNIQUE NOT NULL,
description TEXT NOT NULL,
icon TEXT NOT NULL,
points_value INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- User achievements (junction table)
CREATE TABLE public.user_achievements (
user_id UUID REFERENCES public.users(user_id) ON DELETE CASCADE,
achievement_id BIGINT REFERENCES public.achievements(achievement_id) ON DELETE CASCADE,
achieved_at TIMESTAMPTZ DEFAULT NOW(),
PRIMARY KEY (user_id, achievement_id)
);
-- Points stored on profile for efficient leaderboard
ALTER TABLE public.profiles ADD COLUMN points INTEGER DEFAULT 0;
```
### Award Achievement Function
Located in `sql/Initial_triggers_and_functions.sql`:
```sql
CREATE OR REPLACE FUNCTION public.award_achievement(p_user_id UUID, p_achievement_name TEXT)
RETURNS void
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
DECLARE
v_achievement_id BIGINT;
v_points_value INTEGER;
BEGIN
-- Find the achievement by name to get its ID and point value.
SELECT achievement_id, points_value INTO v_achievement_id, v_points_value
FROM public.achievements WHERE name = p_achievement_name;
-- If the achievement doesn't exist, do nothing.
IF v_achievement_id IS NULL THEN
RETURN;
END IF;
-- Insert the achievement for the user.
-- ON CONFLICT DO NOTHING ensures idempotency.
INSERT INTO public.user_achievements (user_id, achievement_id)
VALUES (p_user_id, v_achievement_id)
ON CONFLICT (user_id, achievement_id) DO NOTHING;
-- If the insert was successful (user didn't have it), update their points.
IF FOUND THEN
UPDATE public.profiles SET points = points + v_points_value WHERE user_id = p_user_id;
END IF;
END;
$$;
```
### Current Achievements
| Name | Description | Icon | Points |
| -------------------- | ----------------------------------------------------------- | ------------ | ------ |
| Welcome Aboard | Join the community by creating your account. | user-check | 5 |
| First Recipe | Create your very first recipe. | chef-hat | 10 |
| Recipe Sharer | Share a recipe with another user for the first time. | share-2 | 15 |
| List Sharer | Share a shopping list with another user for the first time. | list | 20 |
| First Favorite | Mark a recipe as one of your favorites. | heart | 5 |
| First Fork | Make a personal copy of a public recipe. | git-fork | 10 |
| First Budget Created | Create your first budget to track spending. | piggy-bank | 15 |
| First-Upload | Upload your first flyer. | upload-cloud | 25 |
### Achievement Triggers
#### User Registration (Database Trigger)
Awards "Welcome Aboard" when a new user is created:
```sql
-- In handle_new_user() function
PERFORM public.award_achievement(new.user_id, 'Welcome Aboard');
```
#### Flyer Upload (Database Trigger + Application Code)
Awards "First-Upload" when a flyer is inserted with an `uploaded_by` value:
```sql
-- In log_new_flyer() trigger function
IF NEW.uploaded_by IS NOT NULL THEN
PERFORM public.award_achievement(NEW.uploaded_by, 'First-Upload');
END IF;
```
Additionally, the `FlyerPersistenceService.saveFlyer()` method explicitly awards the achievement within the transaction:
```typescript
// In src/services/flyerPersistenceService.server.ts
if (userId) {
const gamificationRepo = new GamificationRepository(client);
await gamificationRepo.awardAchievement(userId, 'First-Upload', logger);
}
```
### Repository Layer
Located in `src/services/db/gamification.db.ts`:
```typescript
export class GamificationRepository {
private db: Pick<Pool | PoolClient, 'query'>;
constructor(db: Pick<Pool | PoolClient, 'query'> = getPool()) {
this.db = db;
}
async getUserAchievements(
userId: string,
logger: Logger,
): Promise<(UserAchievement & Achievement)[]> {
const query = `
SELECT ua.user_id, ua.achievement_id, ua.achieved_at,
a.name, a.description, a.icon, a.points_value, a.created_at
FROM public.user_achievements ua
JOIN public.achievements a ON ua.achievement_id = a.achievement_id
WHERE ua.user_id = $1
ORDER BY ua.achieved_at DESC;
`;
const res = await this.db.query(query, [userId]);
return res.rows;
}
async awardAchievement(userId: string, achievementName: string, logger: Logger): Promise<void> {
await this.db.query('SELECT public.award_achievement($1, $2)', [userId, achievementName]);
}
async getLeaderboard(limit: number, logger: Logger): Promise<LeaderboardUser[]> {
const query = `
SELECT user_id, full_name, avatar_url, points,
RANK() OVER (ORDER BY points DESC) as rank
FROM public.profiles
ORDER BY points DESC, full_name ASC
LIMIT $1;
`;
const res = await this.db.query(query, [limit]);
return res.rows;
}
}
```
### API Endpoints
| Method | Endpoint | Description |
| ------ | ------------------------------- | ------------------------------- |
| GET | `/api/achievements` | List all available achievements |
| GET | `/api/achievements/me` | Get current user's achievements |
| GET | `/api/achievements/leaderboard` | Get top users by points |
## Testing Considerations
### Critical Testing Requirements
When testing gamification features, be aware of the following:
1. **Database Seed Data**: Achievement definitions must exist in the database before tests run. The `award_achievement()` function silently returns if the achievement name doesn't exist.
2. **Transactional Context**: When awarding achievements from within a transaction:
- The achievement is visible within the transaction immediately
- External queries won't see the achievement until the transaction commits
- Tests should wait for job completion before asserting achievement state
3. **Vitest Global Setup Context**: The integration test global setup runs in a separate Node.js context. Achievement verification must use direct database queries, not mocked services.
4. **Achievement Idempotency**: Calling `award_achievement()` multiple times for the same user/achievement combination is safe and expected. Only the first call actually inserts.
### Example Integration Test Pattern
```typescript
it('should award the "First-Upload" achievement after flyer processing', async () => {
// 1. Create user (awards "Welcome Aboard" via database trigger)
const { user: testUser, token } = await createAndLoginUser({...});
// 2. Upload flyer (triggers async job)
const uploadResponse = await request
.post('/api/flyers/upload')
.set('Authorization', `Bearer ${token}`)
.attach('flyerFile', testImagePath);
expect(uploadResponse.status).toBe(202);
  const { jobId } = uploadResponse.body.data; // Assumed shape of the 202 payload
  // 3. Wait for job to complete
await poll(async () => {
const status = await request.get(`/api/flyers/job/${jobId}/status`);
return status.body.data.status === 'completed';
}, { timeout: 15000 });
// 4. Wait for achievements to be visible (transaction committed)
await vi.waitUntil(async () => {
const achievements = await db.gamificationRepo.getUserAchievements(
testUser.user.user_id,
logger
);
return achievements.length >= 2; // Welcome Aboard + First-Upload
}, { timeout: 15000, interval: 500 });
// 5. Assert specific achievements
const userAchievements = await db.gamificationRepo.getUserAchievements(
testUser.user.user_id,
logger
);
expect(userAchievements.find(a => a.name === 'Welcome Aboard')).toBeDefined();
expect(userAchievements.find(a => a.name === 'First-Upload')).toBeDefined();
});
```
### Common Test Pitfalls
1. **Missing Seed Data**: If tests fail with "achievement not found", ensure the test database has the achievements table populated.
2. **Race Conditions**: Achievement awards in async jobs may not be visible immediately. Always poll or use `vi.waitUntil()`.
3. **Wrong User ID**: Verify the user ID passed to `awardAchievement()` matches the user created in the test.
4. **Transaction Isolation**: When querying within a test, use the same database connection if checking mid-transaction state.
## Consequences
### Positive
- **Engagement**: Users have clear goals and rewards for platform activity.
- **Scalability**: Denormalized points on the profile let the leaderboard run a simple indexed sort instead of aggregating `user_achievements` on every request.
- **Reliability**: Database-level idempotency prevents duplicate awards.
- **Flexibility**: New achievements can be added via SQL without code changes.
### Negative
- **Complexity**: Multiple award paths (triggers + application code) require careful coordination.
- **Testing**: Async nature of some awards complicates integration testing.
- **Coupling**: Achievement names are strings; typos fail silently.
### Mitigation
- Use constants for achievement names in application code (see the sketch below).
- Document all award trigger points clearly.
- Test each achievement path independently.
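A hedged sketch of such a constants module; the names mirror the seed data above, while the file path and export shape are assumptions:
```typescript
// Hypothetical src/constants/achievements.ts - single source of truth so a typo
// fails at compile time instead of silently returning from award_achievement()
export const ACHIEVEMENTS = {
  WELCOME_ABOARD: 'Welcome Aboard',
  FIRST_RECIPE: 'First Recipe',
  RECIPE_SHARER: 'Recipe Sharer',
  LIST_SHARER: 'List Sharer',
  FIRST_FAVORITE: 'First Favorite',
  FIRST_FORK: 'First Fork',
  FIRST_BUDGET_CREATED: 'First Budget Created',
  FIRST_UPLOAD: 'First-Upload', // Note the hyphen - must match the seed data exactly
} as const;

export type AchievementName = (typeof ACHIEVEMENTS)[keyof typeof ACHIEVEMENTS];

// Usage: await gamificationRepo.awardAchievement(userId, ACHIEVEMENTS.FIRST_UPLOAD, logger);
```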
## Key Files
- `sql/initial_data.sql` - Achievement definitions (seed data)
- `sql/Initial_triggers_and_functions.sql` - `award_achievement()` function and triggers
- `src/services/db/gamification.db.ts` - Repository layer
- `src/routes/achievements.routes.ts` - API endpoints
- `src/services/flyerPersistenceService.server.ts` - First-Upload award (application code)
## Related ADRs
- [ADR-002](./0002-standardized-transaction-management.md) - Transaction Management
- [ADR-034](./0034-repository-pattern-standards.md) - Repository Pattern
- [ADR-006](./0006-background-job-processing-and-task-queues.md) - Background Jobs (flyer processing)

# ADR-050: PostgreSQL Function Observability
**Date**: 2026-01-11
**Status**: Proposed
**Related**: [ADR-015](0015-application-performance-monitoring-and-error-tracking.md), [ADR-004](0004-standardized-application-wide-structured-logging.md)
## Context
The application uses 30+ PostgreSQL functions and 11+ triggers for business logic, including:
- Recipe recommendations and search
- Shopping list generation from menu plans
- Price history tracking
- Achievement awards
- Activity logging
- User profile creation
**Current Problem**: These database functions can fail silently in several ways:
1. **`ON CONFLICT DO NOTHING`** - Swallows constraint violations without notification
2. **`IF NOT FOUND THEN RETURN;`** - Silently exits when data is missing
3. **Trigger functions returning `NULL`** - No indication of partial failures
4. **No logging inside functions** - No visibility into function execution
When these silent failures occur:
- The application layer receives no error (function "succeeds" but does nothing)
- No logs are generated for debugging
- Issues are only discovered when users report missing data
- Root cause analysis is extremely difficult
**Example of Silent Failure**:
```sql
-- This function silently does nothing if achievement doesn't exist
CREATE OR REPLACE FUNCTION public.award_achievement(p_user_id UUID, p_achievement_name TEXT)
RETURNS void LANGUAGE plpgsql AS $$
DECLARE
  v_achievement_id BIGINT;
BEGIN
SELECT achievement_id INTO v_achievement_id FROM achievements WHERE name = p_achievement_name;
IF v_achievement_id IS NULL THEN
RETURN; -- Silent failure - no log, no error
END IF;
-- ...
END;
$$;
```
ADR-015 established Logstash + Bugsink for error tracking, with PostgreSQL log integration marked as "future". This ADR defines the implementation.
## Decision
We will implement a standardized PostgreSQL function observability strategy with three tiers of logging severity:
### 1. Function Logging Helper
Create a reusable logging function that outputs structured JSON to PostgreSQL logs:
```sql
-- Function to emit structured log messages from PL/pgSQL
CREATE OR REPLACE FUNCTION public.fn_log(
p_level TEXT, -- 'DEBUG', 'INFO', 'NOTICE', 'WARNING', 'ERROR'
p_function_name TEXT, -- The calling function name
p_message TEXT, -- Human-readable message
p_context JSONB DEFAULT NULL -- Additional context (user_id, params, etc.)
)
RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
log_line TEXT;
BEGIN
-- Build structured JSON log line
log_line := jsonb_build_object(
'timestamp', now(),
'level', p_level,
'source', 'postgresql',
'function', p_function_name,
'message', p_message,
'context', COALESCE(p_context, '{}'::jsonb)
)::text;
-- Use appropriate RAISE level
CASE p_level
WHEN 'DEBUG' THEN RAISE DEBUG '%', log_line;
WHEN 'INFO' THEN RAISE INFO '%', log_line;
WHEN 'NOTICE' THEN RAISE NOTICE '%', log_line;
WHEN 'WARNING' THEN RAISE WARNING '%', log_line;
WHEN 'ERROR' THEN RAISE LOG '%', log_line; -- Use LOG for errors to ensure capture
ELSE RAISE NOTICE '%', log_line;
END CASE;
END;
$$;
```
### 2. Logging Tiers
#### Tier 1: Critical Functions (Always Log)
Functions where silent failure causes data corruption or user-facing issues:
| Function | Log Events |
| ---------------------------------- | --------------------------------------- |
| `handle_new_user()` | User creation, profile creation, errors |
| `award_achievement()` | Achievement not found, already awarded |
| `approve_correction()` | Correction not found, permission denied |
| `complete_shopping_list()` | List not found, permission denied |
| `add_menu_plan_to_shopping_list()` | Permission denied, items added |
| `fork_recipe()` | Original not found, fork created |
**Pattern**:
```sql
CREATE OR REPLACE FUNCTION public.award_achievement(p_user_id UUID, p_achievement_name TEXT)
RETURNS void LANGUAGE plpgsql SECURITY DEFINER AS $$
DECLARE
v_achievement_id BIGINT;
v_points_value INTEGER;
v_context JSONB;
BEGIN
v_context := jsonb_build_object('user_id', p_user_id, 'achievement_name', p_achievement_name);
SELECT achievement_id, points_value INTO v_achievement_id, v_points_value
FROM public.achievements WHERE name = p_achievement_name;
IF v_achievement_id IS NULL THEN
-- Log the issue instead of silent return
PERFORM fn_log('WARNING', 'award_achievement',
'Achievement not found: ' || p_achievement_name, v_context);
RETURN;
END IF;
INSERT INTO public.user_achievements (user_id, achievement_id)
VALUES (p_user_id, v_achievement_id)
ON CONFLICT (user_id, achievement_id) DO NOTHING;
IF FOUND THEN
UPDATE public.profiles SET points = points + v_points_value WHERE user_id = p_user_id;
PERFORM fn_log('INFO', 'award_achievement',
'Achievement awarded: ' || p_achievement_name, v_context);
END IF;
END;
$$;
```
#### Tier 2: Business Logic Functions (Log on Anomalies)
Functions where unexpected conditions should be logged but aren't critical:
| Function | Log Events |
| -------------------------------------- | ---------------------------------- |
| `suggest_master_item_for_flyer_item()` | No match found (below threshold) |
| `recommend_recipes_for_user()` | No recommendations generated |
| `find_recipes_from_pantry()` | Empty pantry, no recipes found |
| `get_best_sale_prices_for_user()` | No watched items, no current sales |
**Pattern**: Log when results are unexpectedly empty or inputs are invalid.
#### Tier 3: Triggers (Log Errors Only)
Triggers should be fast, so only log when something goes wrong; a pattern sketch follows the table:
| Trigger Function | Log Events |
| --------------------------------------------- | ------------------------- |
| `update_price_history_on_flyer_item_insert()` | Failed to update history |
| `update_recipe_rating_aggregates()` | Rating calculation failed |
| `log_new_recipe()` | Profile lookup failed |
| `log_new_flyer()` | Store lookup failed |
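**Pattern** (hedged sketch; the body and the `NEW.flyer_id` context field stand in for the real trigger logic):
```sql
-- Sketch of the Tier 3 pattern for log_new_flyer(): the body is wrapped in an
-- exception handler so a lookup/logging failure never aborts the triggering insert.
CREATE OR REPLACE FUNCTION public.log_new_flyer()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
  BEGIN
    -- ... existing store lookup and activity-log insert ...
    NULL;
  EXCEPTION WHEN OTHERS THEN
    PERFORM fn_log('ERROR', 'log_new_flyer',
      'Store lookup failed: ' || SQLERRM,
      jsonb_build_object('flyer_id', NEW.flyer_id)); -- context field is illustrative
  END;
  RETURN NEW;
END;
$$;
```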
### 3. PostgreSQL Configuration
Enable logging in `postgresql.conf`:
```ini
# Log all function notices and above
log_min_messages = notice
# Include function name in log prefix
log_line_prefix = '%t [%p] %u@%d '
# Log to file for Logstash pickup
logging_collector = on
log_directory = '/var/log/postgresql'
log_filename = 'postgresql-%Y-%m-%d.log'
log_rotation_age = 1d
log_rotation_size = 100MB
# Capture slow queries from functions
log_min_duration_statement = 1000 # Log queries over 1 second
```
### 4. Logstash Integration
Update the Logstash pipeline (extends ADR-015 configuration):
```conf
# PostgreSQL function log input
input {
file {
path => "/var/log/postgresql/*.log"
type => "postgres"
tags => ["postgres"]
start_position => "beginning"
sincedb_path => "/var/lib/logstash/sincedb_postgres"
}
}
filter {
if [type] == "postgres" {
# Extract timestamp and process ID from PostgreSQL log prefix
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:pg_timestamp} \[%{POSINT:pg_pid}\] %{USER:pg_user}@%{WORD:pg_database} %{GREEDYDATA:pg_message}" }
}
# Check if this is a structured JSON log from fn_log()
    # Note: jsonb::text output puts a space after the colon, so tolerate optional whitespace
    if [pg_message] =~ /^\{.*"source":\s*"postgresql".*\}$/ {
json {
source => "pg_message"
target => "fn_log"
}
# Mark as error if level is WARNING or ERROR
if [fn_log][level] in ["WARNING", "ERROR"] {
mutate { add_tag => ["error", "db_function"] }
}
}
# Also catch native PostgreSQL errors
if [pg_message] =~ /^ERROR:/ or [pg_message] =~ /^FATAL:/ {
mutate { add_tag => ["error", "postgres_native"] }
}
}
}
output {
if "error" in [tags] and "postgres" in [tags] {
http {
url => "http://localhost:8000/api/store/"
http_method => "post"
format => "json"
}
}
}
```
### 5. Dual-File Update Requirement
**IMPORTANT**: All SQL function changes must be applied to BOTH files:
1. `sql/Initial_triggers_and_functions.sql` - Used for incremental updates
2. `sql/master_schema_rollup.sql` - Used for fresh database setup
Both files must remain in sync for triggers and functions.
## Implementation Steps
1. **Create `fn_log()` helper function**:
- Add to both `Initial_triggers_and_functions.sql` and `master_schema_rollup.sql`
- Test with `SELECT fn_log('INFO', 'test', 'Test message', '{"key": "value"}'::jsonb);`
2. **Update Tier 1 critical functions** (highest priority):
- `award_achievement()` - Log missing achievements, duplicate awards
- `handle_new_user()` - Log user creation success/failure
- `approve_correction()` - Log not found, permission denied
- `complete_shopping_list()` - Log permission checks
- `add_menu_plan_to_shopping_list()` - Log permission checks, items added
- `fork_recipe()` - Log original not found
3. **Update Tier 2 business logic functions**:
- Add anomaly logging to suggestion/recommendation functions
- Log empty result sets with context
4. **Update Tier 3 trigger functions**:
- Add error-only logging to critical triggers
- Wrap complex trigger logic in exception handlers
5. **Configure PostgreSQL logging**:
- Update `postgresql.conf` in dev container
- Update production PostgreSQL configuration
- Verify logs appear in expected location
6. **Update Logstash pipeline**:
- Add PostgreSQL input to `bugsink.conf`
- Add filter rules for structured JSON extraction
- Test end-to-end: function log → Logstash → Bugsink
7. **Verify in Bugsink**:
- Confirm database function errors appear as issues
- Verify context (user_id, function name, params) is captured
## Consequences
### Positive
- **Visibility**: Silent failures become visible in error tracking
- **Debugging**: Function execution context captured for root cause analysis
- **Proactive detection**: Anomalies logged before users report issues
- **Unified monitoring**: Database errors appear alongside application errors in Bugsink
- **Structured logs**: JSON format enables filtering and aggregation
### Negative
- **Performance overhead**: Logging adds latency to function execution
- **Log volume**: Tier 1/2 functions may generate significant log volume
- **Maintenance**: Two SQL files must be kept in sync
- **PostgreSQL configuration**: Requires access to `postgresql.conf`
### Mitigations
- **Performance**: Only log meaningful events, not every function call
- **Log volume**: Use appropriate log levels; Logstash filters reduce noise
- **Sync**: Add a CI check verifying that both SQL files define the same functions (see the sketch below)
- **Configuration**: Document PostgreSQL settings in deployment runbook
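One possible shape for that CI check, assuming both files use `CREATE OR REPLACE FUNCTION`; this is a sketch, not the final script:
```bash
# Fail the build if the two SQL files define different sets of functions
diff <(grep -oiE 'CREATE OR REPLACE FUNCTION [a-z0-9_.]+' sql/Initial_triggers_and_functions.sql | sort -fu) \
     <(grep -oiE 'CREATE OR REPLACE FUNCTION [a-z0-9_.]+' sql/master_schema_rollup.sql | sort -fu)
```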
## Examples
### Before (Silent Failure)
```sql
-- User thinks achievement was awarded, but it silently failed
SELECT award_achievement('user-uuid', 'Nonexistent Badge');
-- Returns: void (no error, no log)
-- Result: User never gets achievement, nobody knows why
```
### After (Observable Failure)
```sql
SELECT award_achievement('user-uuid', 'Nonexistent Badge');
-- Returns: void
-- PostgreSQL log: {"timestamp":"2026-01-11T10:30:00Z","level":"WARNING","source":"postgresql","function":"award_achievement","message":"Achievement not found: Nonexistent Badge","context":{"user_id":"user-uuid","achievement_name":"Nonexistent Badge"}}
-- Bugsink: New issue created with full context
```
## References
- [ADR-015: Application Performance Monitoring](0015-application-performance-monitoring-and-error-tracking.md)
- [ADR-004: Standardized Structured Logging](0004-standardized-application-wide-structured-logging.md)
- [PostgreSQL RAISE Documentation](https://www.postgresql.org/docs/current/plpgsql-errors-and-messages.html)
- [PostgreSQL Logging Configuration](https://www.postgresql.org/docs/current/runtime-config-logging.html)

# ADR-051: Asynchronous Context Propagation
**Date**: 2026-01-11
**Status**: Accepted (Implemented)
## Context
Debugging asynchronous workflows is difficult because the `request_id` generated at the API layer is lost when a task is handed off to a background queue (BullMQ). Logs from the worker appear disconnected from the user action that triggered them.
## Decision
We will implement a context propagation pattern for all background jobs:
1. **Job Data Payload**: All job data interfaces MUST include a `meta` object containing `requestId`, `userId`, and `origin`.
2. **Worker Logger Initialization**: All BullMQ workers MUST initialize a child logger immediately upon processing a job, using the metadata passed in the payload.
3. **Correlation**: The worker's logger must use the _same_ `request_id` as the initiating API request.
## Implementation
```typescript
// 1. Enqueueing (API Layer)
await queue.add('process-flyer', {
...data,
meta: {
requestId: req.log.bindings().request_id, // Propagate ID
userId: req.user.id,
},
});
// 2. Processing (Worker Layer)
const worker = new Worker('queue', async (job) => {
const { requestId, userId } = job.data.meta || {};
// Create context-aware logger for this specific job execution
const jobLogger = logger.child({
request_id: requestId || uuidv4(), // Use propagated ID or generate new
user_id: userId,
job_id: job.id,
service: 'worker',
});
try {
await processJob(job.data, jobLogger); // Pass logger down
} catch (err) {
jobLogger.error({ err }, 'Job failed');
throw err;
}
});
```
## Consequences
**Positive**: Complete traceability from API request -> Queue -> Worker execution. Drastically reduces time to find "what happened" to a specific user request.

# ADR-052: Granular Debug Logging Strategy
**Date**: 2026-01-11
**Status**: Proposed
## Context
Global log levels (INFO vs DEBUG) are too coarse. Developers need to inspect detailed debug information for specific subsystems (e.g., `ai-service`, `db-pool`) without being flooded by logs from the entire application.
## Decision
We will adopt a namespace-based debug filter pattern, similar to the `debug` npm package, but integrated into our Pino logger.
1. **Logger Namespaces**: Every service/module logger must be initialized with a `module` property (e.g., `logger.child({ module: 'ai-service' })`).
2. **Environment Filter**: We will support a `DEBUG_MODULES` environment variable that overrides the log level for matching modules.
## Implementation
In `src/services/logger.server.ts`:
```typescript
const debugModules = (process.env.DEBUG_MODULES || '').split(',').map((s) => s.trim());
export const createScopedLogger = (moduleName: string) => {
// If DEBUG_MODULES contains "ai-service" or "*", force level to 'debug'
const isDebugEnabled = debugModules.includes('*') || debugModules.includes(moduleName);
  // Pino v7+ takes level overrides in the options argument, not the bindings
  return logger.child(
    { module: moduleName },
    { level: isDebugEnabled ? 'debug' : logger.level },
  );
};
```
## Usage
To debug only AI and Database interactions:
```bash
DEBUG_MODULES=ai-service,db-repo npm run dev
```

# ADR-053: Worker Health Checks and Stalled Job Monitoring
**Date**: 2026-01-11
**Status**: Proposed
## Context
Our application relies heavily on background workers (BullMQ) for flyer processing, analytics, and emails. If a worker process crashes (e.g., Out of Memory) or hangs, jobs may remain in the 'active' state indefinitely ("stalled") until BullMQ's fail-safe triggers.
Currently, we lack:
1. Visibility into queue depths and worker status via HTTP endpoints (for uptime monitors).
2. A mechanism to detect if the worker process itself is alive, beyond just queue statistics.
3. Explicit configuration to ensure stalled jobs are recovered quickly.
## Decision
We will implement a multi-layered health check strategy for background workers:
1. **Queue Metrics Endpoint**: Expose a protected endpoint `GET /health/queues` that returns the counts (waiting, active, failed) for all critical queues.
2. **Stalled Job Configuration**: Explicitly configure BullMQ workers with aggressive stall detection settings to recover quickly from crashes.
3. **Worker Heartbeats**: Workers will periodically update a "heartbeat" key in Redis. The health endpoint will check if this timestamp is recent.
## Implementation
### 1. BullMQ Worker Settings
Workers must be initialized with specific options to handle stalls:
```typescript
const workerOptions = {
// Check for stalled jobs every 30 seconds
stalledInterval: 30000,
// Fail job after 3 stalls (prevents infinite loops causing infinite retries)
maxStalledCount: 3,
// Duration of the lock for the job in milliseconds.
// If the worker doesn't renew this (e.g. crash), the job stalls.
lockDuration: 30000,
};
```
### 2. Health Endpoint Logic
The `/health/queues` endpoint will:
1. Iterate through all defined queues (`flyerQueue`, `emailQueue`, etc.).
2. Fetch job counts (`waiting`, `active`, `failed`, `delayed`).
3. Return a 200 OK if queues are accessible, or 503 if Redis is unreachable.
4. (Future) Return 500 if the `waiting` count exceeds a critical threshold for too long.
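A hedged sketch of that handler, assuming BullMQ `Queue` instances exported from `queues.server.ts`; route wiring and queue names are illustrative:
```typescript
// Hypothetical /health/queues handler; getJobCounts() round-trips to Redis,
// so a thrown error here is treated as "Redis unreachable"
import { Router } from 'express';
import { flyerQueue, emailQueue } from '../services/queues.server';

export const queueHealthRouter = Router();

queueHealthRouter.get('/health/queues', async (_req, res) => {
  try {
    const queues = { flyer: flyerQueue, email: emailQueue };
    const counts: Record<string, unknown> = {};
    for (const [name, queue] of Object.entries(queues)) {
      counts[name] = await queue.getJobCounts('waiting', 'active', 'failed', 'delayed');
    }
    res.status(200).json({ success: true, data: counts });
  } catch {
    res.status(503).json({ success: false, error: 'Redis unreachable' });
  }
});
```
### 3. Worker Heartbeats
A minimal heartbeat sketch, assuming an ioredis client named `redis`; the key name and intervals are illustrative:
```typescript
// Each worker refreshes its heartbeat every 30s with a 90s TTL, so a crashed
// worker's key expires and the health endpoint can flag the worker as dead.
const HEARTBEAT_KEY = 'worker:heartbeat:flyer'; // hypothetical key name
setInterval(() => {
  void redis.set(HEARTBEAT_KEY, Date.now().toString(), 'EX', 90);
}, 30_000);
```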
## Consequences
**Positive**:
- Early detection of stuck processing pipelines.
- Automatic recovery of stalled jobs via BullMQ configuration.
- Metrics available for external monitoring tools (e.g., UptimeRobot, Datadog).
**Negative**:
- Requires configuring external monitoring to poll the new endpoint.

# ADR-054: Bugsink to Gitea Issue Synchronization
**Date**: 2026-01-17
**Status**: Proposed
## Context
The application uses Bugsink (Sentry-compatible self-hosted error tracking) to capture runtime errors across 6 projects:
| Project | Type | Environment |
| --------------------------------- | -------------- | ------------ |
| flyer-crawler-backend | Backend | Production |
| flyer-crawler-backend-test | Backend | Test/Staging |
| flyer-crawler-frontend | Frontend | Production |
| flyer-crawler-frontend-test | Frontend | Test/Staging |
| flyer-crawler-infrastructure | Infrastructure | Production |
| flyer-crawler-test-infrastructure | Infrastructure | Test/Staging |
Currently, errors remain in Bugsink until manually reviewed. There is no automated workflow to:
1. Create trackable tickets for errors
2. Assign errors to developers
3. Track resolution progress
4. Prevent errors from being forgotten
## Decision
Implement an automated background worker that synchronizes unresolved Bugsink issues to Gitea as trackable tickets. The sync worker will:
1. **Run only on the test/staging server** (not production, not dev container)
2. **Poll all 6 Bugsink projects** for unresolved issues
3. **Create Gitea issues** with full error context
4. **Mark synced issues as resolved** in Bugsink (to prevent re-polling)
5. **Track sync state in Redis** to ensure idempotency
### Why Test/Staging Only?
- The sync worker is a background service that needs API tokens for both Bugsink and Gitea
- Running on test/staging provides a single sync point without duplicating infrastructure
- All 6 Bugsink projects (including production) are synced from this one worker
- Production server stays focused on serving users, not running sync jobs
## Architecture
### Component Overview
```
┌─────────────────────────────────────────────────────────────────────┐
│ TEST/STAGING SERVER │
│ │
│ ┌──────────────────┐ ┌──────────────────┐ ┌───────────────┐ │
│ │ BullMQ Queue │───▶│ Sync Worker │───▶│ Redis DB 15 │ │
│ │ bugsink-sync │ │ (15min repeat) │ │ Sync State │ │
│ └──────────────────┘ └────────┬─────────┘ └───────────────┘ │
│ │ │
└───────────────────────────────────┼──────────────────────────────────┘
┌───────────────┴───────────────┐
▼ ▼
┌──────────────┐ ┌──────────────┐
│ Bugsink │ │ Gitea │
│ (6 projects) │ │ (1 repo) │
└──────────────┘ └──────────────┘
```
### Queue Configuration
| Setting | Value | Rationale |
| --------------- | ---------------------- | -------------------------------------------- |
| Queue Name | `bugsink-sync` | Follows existing naming pattern |
| Repeat Interval | 15 minutes | Balances responsiveness with API rate limits |
| Retry Attempts | 3 | Standard retry policy |
| Backoff | Exponential (30s base) | Handles temporary API failures |
| Concurrency | 1 | Serial processing prevents race conditions |
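A sketch of how these settings could map onto a BullMQ repeatable job; `connection` and the job name are assumptions, and concurrency 1 is set on the Worker rather than the Queue:
```typescript
// Hypothetical registration in queues.server.ts matching the table above
import { Queue } from 'bullmq';

export const bugsinkSyncQueue = new Queue('bugsink-sync', { connection });

await bugsinkSyncQueue.add(
  'sync-all-projects',
  {},
  {
    repeat: { every: 15 * 60 * 1000 }, // 15-minute interval
    attempts: 3, // Standard retry policy
    backoff: { type: 'exponential', delay: 30_000 }, // 30s base
  },
);
```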
### Redis Database Allocation
| Database | Usage | Owner |
| -------- | ------------------- | --------------- |
| 0 | BullMQ (Production) | Existing queues |
| 1 | BullMQ (Test) | Existing queues |
| 2-14 | Reserved | Future use |
| 15 | Bugsink Sync State | This feature |
### Redis Key Schema
```
bugsink:synced:{bugsink_issue_id}
└─ Value: JSON {
gitea_issue_number: number,
synced_at: ISO timestamp,
project: string,
title: string
}
```
### Gitea Labels
The following labels have been created in `torbo/flyer-crawler.projectium.com`:
| Label | ID | Color | Purpose |
| -------------------- | --- | ------------------ | ---------------------------------- |
| `bug:frontend` | 8 | #e11d48 (Red) | Frontend JavaScript/React errors |
| `bug:backend` | 9 | #ea580c (Orange) | Backend Node.js/API errors |
| `bug:infrastructure` | 10 | #7c3aed (Purple) | Infrastructure errors (Redis, PM2) |
| `env:production` | 11 | #dc2626 (Dark Red) | Production environment |
| `env:test` | 12 | #2563eb (Blue) | Test/staging environment |
| `env:development` | 13 | #6b7280 (Gray) | Development environment |
| `source:bugsink` | 14 | #10b981 (Green) | Auto-synced from Bugsink |
### Label Mapping
| Bugsink Project | Bug Label | Env Label |
| --------------------------------- | ------------------ | -------------- |
| flyer-crawler-backend | bug:backend | env:production |
| flyer-crawler-backend-test | bug:backend | env:test |
| flyer-crawler-frontend | bug:frontend | env:production |
| flyer-crawler-frontend-test | bug:frontend | env:test |
| flyer-crawler-infrastructure | bug:infrastructure | env:production |
| flyer-crawler-test-infrastructure | bug:infrastructure | env:test |
All synced issues also receive the `source:bugsink` label.
## Implementation Details
### New Files
| File | Purpose |
| -------------------------------------- | ------------------------------------------- |
| `src/services/bugsinkSync.server.ts` | Core synchronization logic |
| `src/services/bugsinkClient.server.ts` | HTTP client for Bugsink API |
| `src/services/giteaClient.server.ts` | HTTP client for Gitea API |
| `src/types/bugsink.ts` | TypeScript interfaces for Bugsink responses |
| `src/routes/admin/bugsink-sync.ts` | Admin endpoints for manual trigger |
### Modified Files
| File | Changes |
| ------------------------------------- | ------------------------------------- |
| `src/services/queues.server.ts` | Add `bugsinkSyncQueue` definition |
| `src/services/workers.server.ts` | Add sync worker implementation |
| `src/config/env.ts` | Add bugsink sync configuration schema |
| `.env.example` | Document new environment variables |
| `.gitea/workflows/deploy-to-test.yml` | Pass sync-related secrets |
### Environment Variables
```bash
# Bugsink Configuration
BUGSINK_URL=https://bugsink.projectium.com
BUGSINK_API_TOKEN=77deaa5e... # Created via Django management command (see BUGSINK-SYNC.md)
# Gitea Configuration
GITEA_URL=https://gitea.projectium.com
GITEA_API_TOKEN=... # Personal access token with repo scope
GITEA_OWNER=torbo
GITEA_REPO=flyer-crawler.projectium.com
# Sync Control
BUGSINK_SYNC_ENABLED=false # Set true only in test environment
BUGSINK_SYNC_INTERVAL=15 # Minutes between sync runs
```
### Gitea Issue Template
```markdown
## Error Details
| Field | Value |
| ------------ | --------------- |
| **Type** | {error_type} |
| **Message** | {error_message} |
| **Platform** | {platform} |
| **Level** | {level} |
## Occurrence Statistics
- **First Seen**: {first_seen}
- **Last Seen**: {last_seen}
- **Total Occurrences**: {count}
## Request Context
- **URL**: {request_url}
- **Additional Context**: {context}
## Stacktrace
<details>
<summary>Click to expand</summary>
{stacktrace}
</details>
---
**Bugsink Issue**: {bugsink_url}
**Project**: {project_slug}
**Trace ID**: {trace_id}
```
### Sync Workflow
```
1. Worker triggered (every 15 min or manual)
2. For each of 6 Bugsink projects:
a. List issues with status='unresolved'
b. For each issue:
i. Check Redis for existing sync record
ii. If already synced → skip
iii. Fetch issue details + stacktrace
iv. Create Gitea issue with labels
v. Store sync record in Redis
vi. Mark issue as 'resolved' in Bugsink
3. Log summary (synced: N, skipped: N, failed: N)
```
### Idempotency Guarantees
1. **Redis check before creation**: Prevents duplicate Gitea issues
2. **Atomic Redis write after Gitea create**: Ensures state consistency
3. **Query only unresolved issues**: Resolved issues won't appear in polls
4. **No TTL on Redis keys**: Permanent sync history
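To make the ordering concrete, a hedged sketch of the per-issue step; `BugsinkIssue`, the client objects, and `labelsForProject()` are assumed names, not the final API:
```typescript
// Hypothetical per-issue sync step. The Redis record is written after the Gitea
// create but before the Bugsink resolve: a crash in between re-polls the issue
// on the next run, and the Redis check then skips the duplicate create.
async function syncIssue(issue: BugsinkIssue): Promise<'synced' | 'skipped'> {
  const key = `bugsink:synced:${issue.id}`;
  if (await redis.exists(key)) return 'skipped'; // Already synced on a prior run

  const giteaIssue = await gitea.createIssue({
    title: `[${issue.project}] ${issue.title}`,
    labels: labelsForProject(issue.project), // bug:* + env:* + source:bugsink
  });

  await redis.set(
    key,
    JSON.stringify({
      gitea_issue_number: giteaIssue.number,
      synced_at: new Date().toISOString(),
      project: issue.project,
      title: issue.title,
    }),
  ); // No TTL: permanent sync history

  await bugsink.resolveIssue(issue.id); // Prevents re-polling
  return 'synced';
}
```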
## Consequences
### Positive
1. **Visibility**: All application errors become trackable tickets
2. **Accountability**: Errors can be assigned to developers
3. **History**: Complete audit trail of when errors were discovered and resolved
4. **Integration**: Errors appear alongside feature work in Gitea
5. **Automation**: No manual error triage required
### Negative
1. **API Dependencies**: Requires both Bugsink and Gitea APIs to be available
2. **Token Management**: Additional secrets to manage in CI/CD
3. **Potential Noise**: High-frequency errors could create many tickets (mitigated by Bugsink's issue grouping)
4. **Single Point**: Sync only runs on test server (if test server is down, no sync occurs)
### Risks & Mitigations
| Risk | Mitigation |
| ----------------------- | ------------------------------------------------- |
| Bugsink API rate limits | 15-minute polling interval |
| Gitea API rate limits | Sequential processing with delays |
| Redis connection issues | Reuse existing connection patterns |
| Duplicate issues | Redis tracking + idempotent checks |
| Missing stacktrace | Graceful degradation (create issue without trace) |
## Admin Interface
### Manual Sync Endpoint
```
POST /api/admin/bugsink/sync
Authorization: Bearer {admin_jwt}
Response:
{
"success": true,
"data": {
"synced": 3,
"skipped": 12,
"failed": 0,
"duration_ms": 2340
}
}
```
### Sync Status Endpoint
```
GET /api/admin/bugsink/sync/status
Authorization: Bearer {admin_jwt}
Response:
{
"success": true,
"data": {
"enabled": true,
"last_run": "2026-01-17T10:30:00Z",
"next_run": "2026-01-17T10:45:00Z",
"total_synced": 47,
"projects": [
{ "slug": "flyer-crawler-backend", "synced_count": 12 },
...
]
}
}
```
## Implementation Phases
### Phase 1: Core Infrastructure
- Add environment variables to `env.ts` schema
- Create `BugsinkClient` service (HTTP client)
- Create `GiteaClient` service (HTTP client)
- Add Redis db 15 connection for sync tracking
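The Phase 1 schema additions would extend the existing Zod validation roughly as follows (a sketch; field names mirror the variables shown earlier, and the actual `env.ts` shape may differ):
```typescript
import { z } from 'zod';

// Sketch of the new fields added to the env schema.
const bugsinkSyncEnv = z.object({
  BUGSINK_API_TOKEN: z.string().min(1),
  GITEA_URL: z.string().url(),
  GITEA_API_TOKEN: z.string().min(1),
  GITEA_OWNER: z.string().min(1),
  GITEA_REPO: z.string().min(1),
  // Disabled by default; set to 'true' only in the test environment.
  BUGSINK_SYNC_ENABLED: z
    .enum(['true', 'false'])
    .default('false')
    .transform((v) => v === 'true'),
  // Minutes between sync runs.
  BUGSINK_SYNC_INTERVAL: z.coerce.number().int().positive().default(15),
});
```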
### Phase 2: Sync Logic
- Create `BugsinkSyncService` with sync logic
- Add `bugsink-sync` queue to `queues.server.ts`
- Add sync worker to `workers.server.ts`
- Create TypeScript types for API responses
### Phase 3: Integration
- Add admin endpoints for manual sync trigger
- Update `deploy-to-test.yml` with new secrets
- Add secrets to Gitea repository settings
- Test end-to-end in staging environment
### Phase 4: Documentation
- Update CLAUDE.md with sync information
- Create operational runbook for sync issues
## Future Enhancements
1. **Bi-directional sync**: Update Bugsink when Gitea issue is closed
2. **Smart deduplication**: Detect similar errors across projects
3. **Priority mapping**: High occurrence count → high priority label
4. **Slack/Discord notifications**: Alert on new critical errors
5. **Metrics dashboard**: Track error trends over time
## References
- [ADR-006: Background Job Processing](./0006-background-job-processing-and-task-queues.md)
- [ADR-015: Application Performance Monitoring](./0015-application-performance-monitoring-and-error-tracking.md)
- [Bugsink API Documentation](https://bugsink.com/docs/api/)
- [Gitea API Documentation](https://docs.gitea.io/en-us/api-usage/)

View File

@@ -0,0 +1,164 @@
# ADR Implementation Tracker
This document tracks the implementation status and estimated effort for all Architectural Decision Records (ADRs).
## Effort Estimation Guide
| Rating | Description | Typical Duration |
| ------ | ------------------------------------------- | ----------------- |
| S | Small - Simple, isolated changes | 1-2 hours |
| M | Medium - Multiple files, some testing | Half day to 1 day |
| L | Large - Significant refactoring, many files | 1-3 days |
| XL | Extra Large - Major architectural change | 1+ weeks |
## Implementation Status Overview
| Status | Count |
| ---------------------------- | ----- |
| Accepted (Fully Implemented) | 30 |
| Partially Implemented | 2 |
| Proposed (Not Started) | 16 |
---
## Detailed Implementation Status
### Category 1: Foundational / Core Infrastructure
| ADR | Title | Status | Effort | Notes |
| ---------------------------------------------------------------- | ----------------------- | -------- | ------ | ------------------------------ |
| [ADR-002](./0002-standardized-transaction-management.md) | Transaction Management | Accepted | - | Fully implemented |
| [ADR-007](./0007-configuration-and-secrets-management.md) | Configuration & Secrets | Accepted | - | Fully implemented |
| [ADR-020](./0020-health-checks-and-liveness-readiness-probes.md) | Health Checks | Accepted | - | Fully implemented |
| [ADR-030](./0030-graceful-degradation-and-circuit-breaker.md) | Circuit Breaker | Proposed | L | New resilience patterns needed |
### Category 2: Data Management
| ADR | Title | Status | Effort | Notes |
| --------------------------------------------------------------- | ------------------------ | -------- | ------ | ------------------------------ |
| [ADR-009](./0009-caching-strategy-for-read-heavy-operations.md) | Caching Strategy | Accepted | - | Fully implemented |
| [ADR-013](./0013-database-schema-migration-strategy.md) | Schema Migrations v1 | Proposed | M | Superseded by ADR-023 |
| [ADR-019](./0019-data-backup-and-recovery-strategy.md) | Backup & Recovery | Accepted | - | Fully implemented |
| [ADR-023](./0023-database-schema-migration-strategy.md) | Schema Migrations v2 | Proposed | L | Requires tooling setup |
| [ADR-031](./0031-data-retention-and-privacy-compliance.md) | Data Retention & Privacy | Proposed | XL | Legal/compliance review needed |
### Category 3: API & Integration
| ADR | Title | Status | Effort | Notes |
| ------------------------------------------------------------------- | ------------------------ | ----------- | ------ | ------------------------------------- |
| [ADR-003](./0003-standardized-input-validation-using-middleware.md) | Input Validation | Accepted | - | Fully implemented |
| [ADR-008](./0008-api-versioning-strategy.md) | API Versioning | Proposed | L | Major URL/routing changes |
| [ADR-018](./0018-api-documentation-strategy.md) | API Documentation | Accepted | - | OpenAPI/Swagger implemented |
| [ADR-022](./0022-real-time-notification-system.md) | Real-time Notifications | Proposed | XL | WebSocket infrastructure |
| [ADR-028](./0028-api-response-standardization.md) | Response Standardization | Implemented | L | Completed (routes, middleware, tests) |
### Category 4: Security & Compliance
| ADR | Title | Status | Effort | Notes |
| ----------------------------------------------------------------------- | --------------------- | -------- | ------ | -------------------------------- |
| [ADR-001](./0001-standardized-error-handling.md) | Error Handling | Accepted | - | Fully implemented |
| [ADR-011](./0011-advanced-authorization-and-access-control-strategy.md) | Authorization & RBAC | Proposed | XL | Policy engine, permission system |
| [ADR-016](./0016-api-security-hardening.md) | Security Hardening | Accepted | - | Fully implemented |
| [ADR-029](./0029-secret-rotation-and-key-management.md) | Secret Rotation | Proposed | L | Infrastructure changes needed |
| [ADR-032](./0032-rate-limiting-strategy.md) | Rate Limiting | Accepted | - | Fully implemented |
| [ADR-033](./0033-file-upload-and-storage-strategy.md) | File Upload & Storage | Accepted | - | Fully implemented |
### Category 5: Observability & Monitoring
| ADR | Title | Status | Effort | Notes |
| -------------------------------------------------------------------------- | --------------------------- | -------- | ------ | --------------------------------- |
| [ADR-004](./0004-standardized-application-wide-structured-logging.md) | Structured Logging | Accepted | - | Fully implemented |
| [ADR-015](./0015-application-performance-monitoring-and-error-tracking.md) | APM & Error Tracking | Proposed | M | Third-party integration |
| [ADR-050](./0050-postgresql-function-observability.md) | PostgreSQL Fn Observability | Proposed | M | Depends on ADR-015 implementation |
### Category 6: Deployment & Operations
| ADR | Title | Status | Effort | Notes |
| -------------------------------------------------------------- | ----------------- | -------- | ------ | -------------------------- |
| [ADR-006](./0006-background-job-processing-and-task-queues.md) | Background Jobs | Accepted | - | Fully implemented |
| [ADR-014](./0014-containerization-and-deployment-strategy.md) | Containerization | Partial | M | Docker done, K8s pending |
| [ADR-017](./0017-ci-cd-and-branching-strategy.md) | CI/CD & Branching | Accepted | - | Fully implemented |
| [ADR-024](./0024-feature-flagging-strategy.md) | Feature Flags | Proposed | M | New service/library needed |
| [ADR-037](./0037-scheduled-jobs-and-cron-pattern.md) | Scheduled Jobs | Accepted | - | Fully implemented |
| [ADR-038](./0038-graceful-shutdown-pattern.md) | Graceful Shutdown | Accepted | - | Fully implemented |
### Category 7: Frontend / User Interface
| ADR | Title | Status | Effort | Notes |
| ------------------------------------------------------------------------ | -------------------- | -------- | ------ | ------------------------------------------- |
| [ADR-005](./0005-frontend-state-management-and-server-cache-strategy.md) | State Management | Accepted | - | Fully implemented |
| [ADR-012](./0012-frontend-component-library-and-design-system.md) | Component Library | Partial | L | Core components done, design tokens pending |
| [ADR-025](./0025-internationalization-and-localization-strategy.md) | i18n & l10n | Proposed | XL | All UI strings need extraction |
| [ADR-026](./0026-standardized-client-side-structured-logging.md) | Client-Side Logging | Accepted | - | Fully implemented |
| [ADR-044](./0044-frontend-feature-organization.md) | Feature Organization | Accepted | - | Fully implemented |
### Category 8: Development Workflow & Quality
| ADR | Title | Status | Effort | Notes |
| ----------------------------------------------------------------------------- | -------------------- | -------- | ------ | -------------------- |
| [ADR-010](./0010-testing-strategy-and-standards.md) | Testing Strategy | Accepted | - | Fully implemented |
| [ADR-021](./0021-code-formatting-and-linting-unification.md) | Formatting & Linting | Accepted | - | Fully implemented |
| [ADR-027](./0027-standardized-naming-convention-for-ai-and-database-types.md) | Naming Conventions | Accepted | - | Fully implemented |
| [ADR-045](./0045-test-data-factories-and-fixtures.md) | Test Data Factories | Accepted | - | Fully implemented |
| [ADR-047](./0047-project-file-and-folder-organization.md) | Project Organization | Proposed | XL | Major reorganization |
### Category 9: Architecture Patterns
| ADR | Title | Status | Effort | Notes |
| -------------------------------------------------------- | --------------------- | -------- | ------ | ----------------- |
| [ADR-034](./0034-repository-pattern-standards.md) | Repository Pattern | Accepted | - | Fully implemented |
| [ADR-035](./0035-service-layer-architecture.md) | Service Layer | Accepted | - | Fully implemented |
| [ADR-036](./0036-event-bus-and-pub-sub-pattern.md) | Event Bus | Accepted | - | Fully implemented |
| [ADR-039](./0039-dependency-injection-pattern.md) | Dependency Injection | Accepted | - | Fully implemented |
| [ADR-041](./0041-ai-gemini-integration-architecture.md) | AI/Gemini Integration | Accepted | - | Fully implemented |
| [ADR-042](./0042-email-and-notification-architecture.md) | Email & Notifications | Accepted | - | Fully implemented |
| [ADR-043](./0043-express-middleware-pipeline.md) | Middleware Pipeline | Accepted | - | Fully implemented |
| [ADR-046](./0046-image-processing-pipeline.md) | Image Processing | Accepted | - | Fully implemented |
| [ADR-049](./0049-gamification-and-achievement-system.md) | Gamification System | Accepted | - | Fully implemented |
---
## Work Still To Be Completed (Priority Order)
These ADRs are proposed but not yet implemented, ordered by suggested implementation priority:
| Priority | ADR | Title | Effort | Rationale |
| -------- | ------- | --------------------------- | ------ | ------------------------------------------------- |
| 1 | ADR-015 | APM & Error Tracking | M | Production visibility, debugging |
| 1b | ADR-050 | PostgreSQL Fn Observability | M | Database function visibility (depends on ADR-015) |
| 2 | ADR-024 | Feature Flags | M | Safer deployments, A/B testing |
| 3 | ADR-023 | Schema Migrations v2 | L | Database evolution support |
| 4 | ADR-029 | Secret Rotation | L | Security improvement |
| 5 | ADR-008 | API Versioning | L | Future API evolution |
| 6 | ADR-030 | Circuit Breaker | L | Resilience improvement |
| 7 | ADR-022 | Real-time Notifications | XL | Major feature enhancement |
| 8 | ADR-011 | Authorization & RBAC | XL | Advanced permission system |
| 9 | ADR-025 | i18n & l10n | XL | Multi-language support |
| 10 | ADR-031 | Data Retention & Privacy | XL | Compliance requirements |
---
## Recent Implementation History
| Date | ADR | Change |
| ---------- | ------- | ---------------------------------------------------------------------- |
| 2026-01-11 | ADR-050 | Created - PostgreSQL function observability with fn_log() and Logstash |
| 2026-01-11 | ADR-018 | Implemented - OpenAPI/Swagger documentation at /docs/api-docs |
| 2026-01-11 | ADR-049 | Created - Gamification system, achievements, and testing requirements |
| 2026-01-09 | ADR-047 | Created - Project file/folder organization with migration plan |
| 2026-01-09 | ADR-041 | Created - AI/Gemini integration with model fallback and rate limiting |
| 2026-01-09 | ADR-042 | Created - Email and notification architecture with BullMQ queuing |
| 2026-01-09 | ADR-043 | Created - Express middleware pipeline ordering and patterns |
| 2026-01-09 | ADR-044 | Created - Frontend feature-based folder organization |
| 2026-01-09 | ADR-045 | Created - Test data factory pattern for mock generation |
| 2026-01-09 | ADR-046 | Created - Image processing pipeline with Sharp and EXIF stripping |
| 2026-01-09 | ADR-026 | Fully implemented - client-side structured logger |
| 2026-01-09 | ADR-028 | Fully implemented - all routes, middleware, and tests updated |
---
## Notes
- **Effort estimates** are rough guidelines and may vary based on current codebase state
- **Dependencies** between ADRs should be considered when planning implementation order
- This document should be updated when ADRs are implemented or status changes

View File

@@ -4,49 +4,75 @@ This directory contains a log of the architectural decisions made for the Flyer
## 1. Foundational / Core Infrastructure
**[ADR-002](./0002-standardized-transaction-management.md)**: Standardized Transaction Management and Unit of Work Pattern (Accepted)
**[ADR-007](./0007-configuration-and-secrets-management.md)**: Configuration and Secrets Management (Accepted)
**[ADR-020](./0020-health-checks-and-liveness-readiness-probes.md)**: Health Checks and Liveness/Readiness Probes (Accepted)
**[ADR-030](./0030-graceful-degradation-and-circuit-breaker.md)**: Graceful Degradation and Circuit Breaker Pattern (Proposed)
## 2. Data Management
**[ADR-009](./0009-caching-strategy-for-read-heavy-operations.md)**: Caching Strategy for Read-Heavy Operations (Accepted)
**[ADR-013](./0013-database-schema-migration-strategy.md)**: Database Schema Migration Strategy (Proposed)
**[ADR-019](./0019-data-backup-and-recovery-strategy.md)**: Data Backup and Recovery Strategy (Accepted)
**[ADR-023](./0023-database-schema-migration-strategy.md)**: Database Schema Migration Strategy (Proposed)
**[ADR-031](./0031-data-retention-and-privacy-compliance.md)**: Data Retention and Privacy Compliance (Proposed)
## 3. API & Integration
**[ADR-003](./0003-standardized-input-validation-using-middleware.md)**: Standardized Input Validation using Middleware (Accepted)
**[ADR-008](./0008-api-versioning-strategy.md)**: API Versioning Strategy (Proposed)
**[ADR-018](./0018-api-documentation-strategy.md)**: API Documentation Strategy (Proposed)
**[ADR-022](./0022-real-time-notification-system.md)**: Real-time Notification System (Proposed)
**[ADR-028](./0028-api-response-standardization.md)**: API Response Standardization and Envelope Pattern (Implemented)
## 4. Security & Compliance
**[ADR-001](./0001-standardized-error-handling.md)**: Standardized Error Handling for Service and Repository Layers (Accepted)
**[ADR-011](./0011-advanced-authorization-and-access-control-strategy.md)**: Advanced Authorization and Access Control Strategy (Proposed)
**[ADR-016](./0016-api-security-hardening.md)**: API Security Hardening (Accepted)
**[ADR-029](./0029-secret-rotation-and-key-management.md)**: Secret Rotation and Key Management Strategy (Proposed)
**[ADR-032](./0032-rate-limiting-strategy.md)**: Rate Limiting Strategy (Accepted)
**[ADR-033](./0033-file-upload-and-storage-strategy.md)**: File Upload and Storage Strategy (Accepted)
**[ADR-048](./0048-authentication-strategy.md)**: Authentication Strategy (Partially Implemented)
## 5. Observability & Monitoring
**[ADR-004](./0004-standardized-application-wide-structured-logging.md)**: Standardized Application-Wide Structured Logging (Accepted)
**[ADR-015](./0015-application-performance-monitoring-and-error-tracking.md)**: Application Performance Monitoring (APM) and Error Tracking (Proposed)
## 6. Deployment & Operations
**[ADR-006](./0006-background-job-processing-and-task-queues.md)**: Background Job Processing and Task Queues (Accepted)
**[ADR-014](./0014-containerization-and-deployment-strategy.md)**: Containerization and Deployment Strategy (Partially Implemented)
**[ADR-017](./0017-ci-cd-and-branching-strategy.md)**: CI/CD and Branching Strategy (Accepted)
**[ADR-024](./0024-feature-flagging-strategy.md)**: Feature Flagging Strategy (Proposed)
**[ADR-037](./0037-scheduled-jobs-and-cron-pattern.md)**: Scheduled Jobs and Cron Pattern (Accepted)
**[ADR-038](./0038-graceful-shutdown-pattern.md)**: Graceful Shutdown Pattern (Accepted)
## 7. Frontend / User Interface
**[ADR-005](./0005-frontend-state-management-and-server-cache-strategy.md)**: Frontend State Management and Server Cache Strategy (Accepted)
**[ADR-012](./0012-frontend-component-library-and-design-system.md)**: Frontend Component Library and Design System (Partially Implemented)
**[ADR-025](./0025-internationalization-and-localization-strategy.md)**: Internationalization (i18n) and Localization (l10n) Strategy (Proposed)
**[ADR-026](./0026-standardized-client-side-structured-logging.md)**: Standardized Client-Side Structured Logging (Proposed)
**[ADR-044](./0044-frontend-feature-organization.md)**: Frontend Feature Organization Pattern (Accepted)
## 8. Development Workflow & Quality
**[ADR-010](./0010-testing-strategy-and-standards.md)**: Testing Strategy and Standards (Accepted)
**[ADR-021](./0021-code-formatting-and-linting-unification.md)**: Code Formatting and Linting Unification (Accepted)
**[ADR-027](./0027-standardized-naming-convention-for-ai-and-database-types.md)**: Standardized Naming Convention for AI and Database Types (Accepted)
**[ADR-040](./0040-testing-economics-and-priorities.md)**: Testing Economics and Priorities (Accepted)
**[ADR-045](./0045-test-data-factories-and-fixtures.md)**: Test Data Factories and Fixtures (Accepted)
**[ADR-047](./0047-project-file-and-folder-organization.md)**: Project File and Folder Organization (Proposed)
## 9. Architecture Patterns
**[ADR-034](./0034-repository-pattern-standards.md)**: Repository Pattern Standards (Accepted)
**[ADR-035](./0035-service-layer-architecture.md)**: Service Layer Architecture (Accepted)
**[ADR-036](./0036-event-bus-and-pub-sub-pattern.md)**: Event Bus and Pub/Sub Pattern (Accepted)
**[ADR-039](./0039-dependency-injection-pattern.md)**: Dependency Injection Pattern (Accepted)
**[ADR-041](./0041-ai-gemini-integration-architecture.md)**: AI/Gemini Integration Architecture (Accepted)
**[ADR-042](./0042-email-and-notification-architecture.md)**: Email and Notification Architecture (Accepted)
**[ADR-043](./0043-express-middleware-pipeline.md)**: Express Middleware Pipeline Architecture (Accepted)
**[ADR-046](./0046-image-processing-pipeline.md)**: Image Processing Pipeline (Accepted)

View File

@@ -0,0 +1,110 @@
# Authentication Setup
Flyer Crawler supports OAuth authentication via Google and GitHub. This guide walks through configuring both providers.
---
## Google OAuth
### Step 1: Create OAuth Credentials
1. Go to the [Google Cloud Console](https://console.cloud.google.com/)
2. Create a new project (or select an existing one)
3. Navigate to **APIs & Services > Credentials**
4. Click **Create Credentials > OAuth client ID**
5. Select **Web application** as the application type
### Step 2: Configure Authorized Redirect URIs
Add the callback URL where Google will redirect users after authentication:
| Environment | Redirect URI |
| ----------- | -------------------------------------------------- |
| Development | `http://localhost:3001/api/auth/google/callback` |
| Production | `https://your-domain.com/api/auth/google/callback` |
### Step 3: Save Credentials
After clicking **Create**, you'll receive:
- **Client ID**
- **Client Secret**
Store these securely as environment variables:
- `GOOGLE_CLIENT_ID`
- `GOOGLE_CLIENT_SECRET`
---
## GitHub OAuth
### Step 1: Create OAuth App
1. Go to your [GitHub Developer Settings](https://github.com/settings/developers)
2. Navigate to **OAuth Apps**
3. Click **New OAuth App**
### Step 2: Fill in Application Details
| Field | Value |
| -------------------------- | ---------------------------------------------------- |
| Application name | Flyer Crawler (or your preferred name) |
| Homepage URL | `http://localhost:5173` (dev) or your production URL |
| Authorization callback URL | `http://localhost:3001/api/auth/github/callback` |
### Step 3: Save GitHub Credentials
After clicking **Register application**, you'll receive:
- **Client ID**
- **Client Secret**
Store these securely as environment variables:
- `GITHUB_CLIENT_ID`
- `GITHUB_CLIENT_SECRET`
---
## Environment Variables Summary
| Variable | Description |
| ---------------------- | ---------------------------------------- |
| `GOOGLE_CLIENT_ID` | Google OAuth client ID |
| `GOOGLE_CLIENT_SECRET` | Google OAuth client secret |
| `GITHUB_CLIENT_ID` | GitHub OAuth client ID |
| `GITHUB_CLIENT_SECRET` | GitHub OAuth client secret |
| `JWT_SECRET` | Secret for signing authentication tokens |
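For local development these typically live together in a `.env` file; all values below are placeholders:
```
GOOGLE_CLIENT_ID=your-client-id.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=your-google-client-secret
GITHUB_CLIENT_ID=your-github-client-id
GITHUB_CLIENT_SECRET=your-github-client-secret
JWT_SECRET=a-long-random-string
```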
---
## Production Considerations
When deploying to production:
1. **Update redirect URIs** in both Google Cloud Console and GitHub OAuth settings to use your production domain
2. **Use HTTPS** for all callback URLs in production
3. **Store secrets securely** using your CI/CD platform's secrets management (e.g., Gitea repository secrets)
---
## Troubleshooting
### "redirect_uri_mismatch" Error
The callback URL in your OAuth provider settings doesn't match what the application is sending. Verify:
- The URL is exactly correct (no trailing slashes, correct port)
- You're using the right environment (dev vs production URLs)
### "invalid_client" Error
The Client ID or Client Secret is incorrect. Double-check your environment variables.
---
## Related Documentation
- [Installation Guide](INSTALL.md) - Local development setup
- [Deployment Guide](DEPLOYMENT.md) - Production deployment

View File

@@ -0,0 +1,223 @@
# Database Setup
Flyer Crawler uses PostgreSQL with several extensions for full-text search, geographic data, and UUID generation.
---
## Required Extensions
| Extension | Purpose |
| ----------- | ------------------------------------------- |
| `postgis` | Geographic/spatial data for store locations |
| `pg_trgm` | Trigram matching for fuzzy text search |
| `uuid-ossp` | UUID generation for primary keys |
---
## Database Users
This project uses **environment-specific database users** to isolate production and test environments:
| User | Database | Purpose |
| -------------------- | -------------------- | ---------- |
| `flyer_crawler_prod` | `flyer-crawler-prod` | Production |
| `flyer_crawler_test` | `flyer-crawler-test` | Testing |
---
## Production Database Setup
### Step 1: Install PostgreSQL
```bash
sudo apt update
sudo apt install postgresql postgresql-contrib
```
### Step 2: Create Database and User
Switch to the postgres system user:
```bash
sudo -u postgres psql
```
Run the following SQL commands (replace `'a_very_strong_password'` with a secure password):
```sql
-- Create the production role
CREATE ROLE flyer_crawler_prod WITH LOGIN PASSWORD 'a_very_strong_password';
-- Create the production database
CREATE DATABASE "flyer-crawler-prod" WITH OWNER = flyer_crawler_prod;
-- Connect to the new database
\c "flyer-crawler-prod"
-- Grant schema privileges
ALTER SCHEMA public OWNER TO flyer_crawler_prod;
GRANT CREATE, USAGE ON SCHEMA public TO flyer_crawler_prod;
-- Install required extensions (must be done as superuser)
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- Exit
\q
```
### Step 3: Apply the Schema
Navigate to your project directory and run:
```bash
psql -U flyer_crawler_prod -d "flyer-crawler-prod" -f sql/master_schema_rollup.sql
```
This creates all tables, functions, and triggers, and seeds essential data (categories and master items).
### Step 4: Seed the Admin Account
Set the required environment variables and run the seed script:
```bash
export DB_USER=flyer_crawler_prod
export DB_PASSWORD=your_password
export DB_NAME="flyer-crawler-prod"
export DB_HOST=localhost
npx tsx src/db/seed_admin_account.ts
```
---
## Test Database Setup
The test database is used by CI/CD pipelines and local test runs.
### Step 1: Create the Test Database
```bash
sudo -u postgres psql
```
```sql
-- Create the test role
CREATE ROLE flyer_crawler_test WITH LOGIN PASSWORD 'a_very_strong_password';
-- Create the test database
CREATE DATABASE "flyer-crawler-test" WITH OWNER = flyer_crawler_test;
-- Connect to the test database
\c "flyer-crawler-test"
-- Grant schema privileges (required for test runner to reset schema)
ALTER SCHEMA public OWNER TO flyer_crawler_test;
GRANT CREATE, USAGE ON SCHEMA public TO flyer_crawler_test;
-- Install required extensions
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- Exit
\q
```
### Step 2: Configure CI/CD Secrets
Ensure these secrets are set in your Gitea repository settings:
**Shared:**
| Secret | Description |
| --------- | ------------------------------------- |
| `DB_HOST` | Database hostname (e.g., `localhost`) |
| `DB_PORT` | Database port (e.g., `5432`) |
**Production-specific:**
| Secret | Description |
| ------------------ | ----------------------------------------------- |
| `DB_USER_PROD` | Production database user (`flyer_crawler_prod`) |
| `DB_PASSWORD_PROD` | Production database password |
| `DB_DATABASE_PROD` | Production database name (`flyer-crawler-prod`) |
**Test-specific:**
| Secret | Description |
| ------------------ | ----------------------------------------- |
| `DB_USER_TEST` | Test database user (`flyer_crawler_test`) |
| `DB_PASSWORD_TEST` | Test database password |
| `DB_DATABASE_TEST` | Test database name (`flyer-crawler-test`) |
---
## How the Test Pipeline Works
The CI pipeline uses a permanent test database that gets reset on each test run:
1. **Setup**: The vitest global setup script connects to `flyer-crawler-test`
2. **Schema Reset**: Executes `sql/drop_tables.sql` (`DROP SCHEMA public CASCADE`)
3. **Schema Application**: Runs `sql/master_schema_rollup.sql` to build a fresh schema
4. **Test Execution**: Tests run against the clean database
This approach is faster than creating/destroying databases and doesn't require sudo access.
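A simplified sketch of that global setup, assuming the `pg` driver and the schema files listed below (the actual setup script may differ):
```typescript
import { readFileSync } from 'node:fs';
import { Pool } from 'pg';

// Vitest global setup: runs once before the whole suite.
export default async function globalSetup(): Promise<void> {
  const pool = new Pool({
    host: process.env.DB_HOST,
    port: Number(process.env.DB_PORT ?? 5432),
    user: process.env.DB_USER, // flyer_crawler_test
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME, // flyer-crawler-test
  });
  try {
    // Step 2: drop the entire public schema.
    await pool.query(readFileSync('sql/drop_tables.sql', 'utf8'));
    // Step 3: rebuild it from the rollup.
    await pool.query(readFileSync('sql/master_schema_rollup.sql', 'utf8'));
  } finally {
    await pool.end();
  }
}
```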
---
## Connecting to Production Database
```bash
psql -h localhost -U flyer_crawler_prod -d "flyer-crawler-prod" -W
```
---
## Checking PostGIS Version
```sql
SELECT version();
SELECT PostGIS_Full_Version();
```
Example output:
```text
PostgreSQL 14.19 (Ubuntu 14.19-0ubuntu0.22.04.1)
POSTGIS="3.2.0 c3e3cc0" GEOS="3.10.2-CAPI-1.16.0" PROJ="8.2.1"
```
---
## Schema Files
| File | Purpose |
| ------------------------------ | --------------------------------------------------------- |
| `sql/master_schema_rollup.sql` | Complete schema with all tables, functions, and seed data |
| `sql/drop_tables.sql` | Drops entire schema (used by test runner) |
| `sql/schema.sql.txt` | Legacy schema file (reference only) |
---
## Backup and Restore
### Create a Backup
```bash
pg_dump -U flyer_crawler_prod -d "flyer-crawler-prod" -F c -f backup.dump
```
### Restore from Backup
```bash
pg_restore -U flyer_crawler_prod -d "flyer-crawler-prod" -c backup.dump
```
---
## Related Documentation
- [Installation Guide](INSTALL.md) - Local development setup
- [Deployment Guide](DEPLOYMENT.md) - Production deployment

View File

@@ -0,0 +1,859 @@
# Flyer Crawler - System Architecture Overview
**Version**: 0.12.5
**Last Updated**: 2026-01-22
**Platform**: Linux (Production and Development)
---
## Table of Contents
1. [Executive Summary](#executive-summary)
2. [System Architecture Diagram](#system-architecture-diagram)
3. [Technology Stack](#technology-stack)
4. [System Components](#system-components)
5. [Data Flow](#data-flow)
6. [Architecture Layers](#architecture-layers)
7. [Key Entities](#key-entities)
8. [Authentication Flow](#authentication-flow)
9. [Background Processing](#background-processing)
10. [Deployment Architecture](#deployment-architecture)
11. [Design Principles and ADRs](#design-principles-and-adrs)
12. [Key Files Reference](#key-files-reference)
---
## Executive Summary
**Flyer Crawler** is a grocery deal extraction and analysis platform that uses AI-powered processing to extract deals from grocery store flyer images and PDFs. The system provides users with features including watchlists, price history tracking, shopping lists, deal alerts, and recipe management.
### Core Capabilities
| Domain | Description |
| ------------------------- | --------------------------------------------------------------------------------------- |
| **Deal Extraction** | AI-powered extraction of deals from grocery store flyer images/PDFs using Google Gemini |
| **Price Tracking** | Historical price data, trend analysis, and price alerts |
| **User Features** | Watchlists, shopping lists, recipes, pantry management, achievements |
| **Real-time Updates** | WebSocket-based notifications for price alerts and processing status |
| **Background Processing** | Asynchronous job queues for flyer processing, emails, and analytics |
---
## System Architecture Diagram
```
+-----------------------------------------------------------------------------------+
| CLIENT LAYER |
+-----------------------------------------------------------------------------------+
| |
| +-------------------+ +-------------------+ +-------------------+ |
| | Web Browser | | Mobile PWA | | API Clients | |
| | (React SPA) | | (React SPA) | | (REST/JSON) | |
| +--------+----------+ +--------+----------+ +--------+----------+ |
| | | | |
+-------------|-------------------------|-------------------------|------------------+
| | |
v v v
+-----------------------------------------------------------------------------------+
| NGINX REVERSE PROXY |
| - SSL/TLS Termination - Rate Limiting - Static Asset Serving |
| - Load Balancing - Compression - WebSocket Proxying |
| - Flyer Images (/flyer-images/) with 7-day cache |
+----------------------------------+------------------------------------------------+
|
v
+-----------------------------------------------------------------------------------+
| APPLICATION LAYER |
+-----------------------------------------------------------------------------------+
| |
| +-----------------------------------------------------------------------------+ |
| | EXPRESS.JS SERVER (Node.js) | |
| | | |
| | +-------------------------+ +-------------------------+ | |
| | | Routes Layer | | Middleware Chain | | |
| | | - API Endpoints | | - Authentication | | |
| | | - Request Validation | | - Rate Limiting | | |
| | | - Response Formatting | | - Logging | | |
| | +------------+------------+ | - Error Handling | | |
| | | +-------------------------+ | |
| | v | |
| | +-------------------------+ +-------------------------+ | |
| | | Services Layer | | External Services | | |
| | | - Business Logic | | - Google Gemini AI | | |
| | | - Transaction Coord. | | - Google Maps API | | |
| | | - Event Publishing | | - OAuth Providers | | |
| | +------------+------------+ | - Email (SMTP) | | |
| | | +-------------------------+ | |
| | v | |
| | +-------------------------+ | |
| | | Repository Layer | | |
| | | - Database Access | | |
| | | - Query Construction | | |
| | | - Entity Mapping | | |
| | +------------+------------+ | |
| | | | |
| +---------------|-------------------------------------------------------------+ |
| | |
+------------------|----------------------------------------------------------------+
|
v
+-----------------------------------------------------------------------------------+
| DATA LAYER |
+-----------------------------------------------------------------------------------+
| |
| +---------------------------+ +---------------------------+ |
| | PostgreSQL 16 | | Redis 7 | |
| | (with PostGIS) | | | |
| | | | - Session Cache | |
| | - Primary Data Store | | - Query Cache | |
| | - Geographic Queries | | - Job Queue Backing | |
| | - Full-Text Search | | - Rate Limit Counters | |
| | - Stored Functions | | - Real-time Pub/Sub | |
| +---------------------------+ +---------------------------+ |
| |
+-----------------------------------------------------------------------------------+
+-----------------------------------------------------------------------------------+
| BACKGROUND PROCESSING LAYER |
+-----------------------------------------------------------------------------------+
| |
| +---------------------------+ +---------------------------+ |
| | PM2 Process | | BullMQ Workers | |
| | Manager | | | |
| | | | - Flyer Processing | |
| | - Process Clustering | | - Receipt Processing | |
| | - Auto-restart | | - Email Sending | |
| | - Log Management | | - Analytics Reports | |
| | - Health Monitoring | | - File Cleanup | |
| +---------------------------+ | - Token Cleanup | |
| | - Expiry Alerts | |
| | - Barcode Detection | |
| +---------------------------+ |
| |
+-----------------------------------------------------------------------------------+
+-----------------------------------------------------------------------------------+
| OBSERVABILITY LAYER |
+-----------------------------------------------------------------------------------+
| |
| +------------------+ +------------------+ +------------------+ |
| | Bugsink/Sentry | | Pino Logger | | Logstash | |
| | (Error Track) | | (Structured) | | (Aggregation) | |
| +------------------+ +------------------+ +------------------+ |
| |
+-----------------------------------------------------------------------------------+
```
---
## Technology Stack
### Core Technologies
| Component | Technology | Version | Purpose |
| ---------------------- | ---------- | -------- | -------------------------------- |
| **Runtime** | Node.js | 22.x LTS | Server-side JavaScript runtime |
| **Language** | TypeScript | 5.9.x | Type-safe JavaScript superset |
| **Web Framework** | Express.js | 5.1.x | HTTP server and routing |
| **Frontend Framework** | React | 19.2.x | UI component library |
| **Build Tool** | Vite | 7.2.x | Frontend bundling and dev server |
### Data Storage
| Component | Technology | Version | Purpose |
| --------------------- | ---------------- | ------- | ---------------------------------------------- |
| **Primary Database** | PostgreSQL | 16.x | Relational data storage |
| **Spatial Extension** | PostGIS | 3.x | Geographic queries and store location features |
| **Cache & Queues** | Redis | 7.x | Caching, session storage, job queue backing |
| **File Storage** | Local Filesystem | - | Uploaded flyers and processed images |
### AI and External Services
| Component | Technology | Purpose |
| --------------- | --------------------------- | --------------------------------------- |
| **AI Provider** | Google Gemini | Flyer data extraction, image analysis |
| **Geocoding** | Google Maps API / Nominatim | Address geocoding and location services |
| **OAuth** | Google, GitHub | Social authentication |
| **Email** | Nodemailer (SMTP) | Transactional emails |
### Background Processing
| Component | Technology | Version | Purpose |
| ------------------- | ---------- | ------- | --------------------------------- |
| **Job Queues** | BullMQ | 5.65.x | Reliable async job processing |
| **Process Manager** | PM2 | Latest | Process management and clustering |
| **Scheduler** | node-cron | 4.2.x | Scheduled tasks |
### Frontend Stack
| Component | Technology | Version | Purpose |
| -------------------- | -------------- | ------- | ---------------------------------------- |
| **State Management** | TanStack Query | 5.90.x | Server state caching and synchronization |
| **Routing** | React Router | 7.9.x | Client-side routing |
| **Styling** | Tailwind CSS | 4.1.x | Utility-first CSS framework |
| **Icons** | Lucide React | 0.555.x | Icon components |
| **Charts** | Recharts | 3.4.x | Data visualization |
### Observability and Quality
| Component | Technology | Purpose |
| --------------------- | ---------------- | ----------------------------- |
| **Error Tracking** | Sentry / Bugsink | Error monitoring and alerting |
| **Logging** | Pino | Structured JSON logging |
| **Log Aggregation** | Logstash | Centralized log collection |
| **Testing** | Vitest | Unit and integration testing |
| **API Documentation** | Swagger/OpenAPI | Interactive API documentation |
---
## System Components
### Frontend (React/Vite)
The frontend is a single-page application (SPA) built with React 19 and Vite.
**Key Characteristics**:
- Server state management via TanStack Query
- Neo-Brutalism design system (ADR-012)
- Responsive design for mobile and desktop
- PWA-capable for offline access
**Directory Structure**:
```
src/
+-- components/ # Reusable UI components
+-- contexts/ # React context providers
+-- features/ # Feature-specific modules (ADR-047)
+-- hooks/ # Custom React hooks
+-- layouts/ # Page layout components
+-- pages/ # Route page components
+-- services/ # API client services
```
### Backend (Express/Node.js)
The backend is a RESTful API server built with Express.js 5.
**Key Characteristics**:
- Layered architecture (Routes -> Services -> Repositories)
- JWT-based authentication with OAuth support
- Request validation via Zod schemas
- Structured logging with Pino
- Standardized error handling (ADR-001)
**API Route Modules**:
| Route          | Purpose                                  |
| -------------- | ---------------------------------------- |
| `/api/auth`    | Authentication (login, register, OAuth)  |
| `/api/users`   | User profile management                  |
| `/api/flyers`  | Flyer CRUD and processing                |
| `/api/recipes` | Recipe management                        |
| `/api/deals`   | Best prices and deal discovery           |
| `/api/stores`  | Store management                         |
| `/api/admin`   | Administrative functions                 |
| `/api/health`  | Health checks and monitoring             |
### Database (PostgreSQL/PostGIS)
PostgreSQL serves as the primary data store with PostGIS extension for geographic queries.
**Key Features**:
- UUID primary keys for user data
- BIGINT IDENTITY for auto-incrementing IDs
- PostGIS geography types for store locations
- Stored functions for complex business logic
- Triggers for automated updates (e.g., `item_count` maintenance)
### Cache (Redis)
Redis provides caching and backing for the job queue system.
**Usage Patterns**:
- Query result caching (flyers, prices, stats)
- Rate limiting counters
- BullMQ job queue storage
- Session token storage
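Query caching follows a get-or-set pattern. A minimal sketch with `ioredis` (the real `cacheService.server.ts` may expose a different interface):
```typescript
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

// Return the cached value if present; otherwise compute, cache with a TTL, and return.
async function getOrSet<T>(
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>,
): Promise<T> {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit) as T;

  const value = await compute();
  await redis.set(key, JSON.stringify(value), 'EX', ttlSeconds);
  return value;
}
```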
### AI (Google Gemini)
Google Gemini powers the AI extraction capabilities.
**Capabilities**:
- Flyer image analysis and data extraction
- Store name and logo detection
- Deal item parsing (name, price, quantity)
- Date range extraction
- Category classification
### Background Workers (BullMQ/PM2)
BullMQ workers handle asynchronous processing tasks.
**Job Queues**:
| Queue | Purpose | Retry Strategy |
| ---------------------------- | -------------------------------- | ------------------------------------- |
| `flyer-processing` | Process uploaded flyers with AI | 3 attempts, exponential backoff (5s) |
| `receipt-processing` | OCR and parse receipts | 3 attempts, exponential backoff (10s) |
| `email-sending` | Send transactional emails | 5 attempts, exponential backoff (10s) |
| `analytics-reporting` | Generate daily analytics | 2 attempts, exponential backoff (60s) |
| `weekly-analytics-reporting` | Generate weekly reports | 2 attempts, exponential backoff (1h) |
| `file-cleanup` | Remove temporary files | 3 attempts, exponential backoff (30s) |
| `token-cleanup` | Expire old refresh tokens | 2 attempts, exponential backoff (1h) |
| `expiry-alerts` | Send pantry expiry notifications | 2 attempts, exponential backoff (5m) |
| `barcode-detection` | Process barcode scans | 2 attempts, exponential backoff (5s) |
---
## Data Flow
### Flyer Processing Pipeline
```
+-------------+ +----------------+ +------------------+ +---------------+
| User | | Express | | BullMQ | | PostgreSQL |
| Upload +---->+ Route +---->+ Queue +---->+ Storage |
+-------------+ +-------+--------+ +--------+---------+ +-------+-------+
| | |
v v v
+-------+--------+ +--------+---------+ +-------+-------+
| Validate | | Worker | | Cache |
| & Store | | Process | | Invalidate |
| Temp File | | | | |
+----------------+ +--------+---------+ +---------------+
|
v
+--------+---------+
| Google |
| Gemini AI |
| Extraction |
+--------+---------+
|
v
+--------+---------+
| Transform |
| & Validate |
| Data |
+--------+---------+
|
v
+--------+---------+
| Persist to |
| Database |
| (Transaction) |
+--------+---------+
|
v
+--------+---------+
| WebSocket |
| Notification |
+------------------+
```
### Detailed Processing Steps
1. **Upload**: User uploads flyer image via `/api/flyers/upload`
2. **Validation**: Server validates file type, size, and generates checksum
3. **Queueing**: Job added to `flyer-processing` queue with file path (see the sketch after this list)
4. **Worker Pickup**: BullMQ worker picks up job for processing
5. **AI Extraction**: Google Gemini analyzes image and extracts:
- Store name
- Valid date range
- Store address (if present)
- Deal items (name, price, quantity, category)
6. **Data Transformation**: Raw AI output transformed to database schema
7. **Persistence**: Transactional insert of flyer + items + store
8. **Cache Invalidation**: Redis cache cleared for affected queries
9. **Notification**: WebSocket message sent to user with results
10. **Cleanup**: Temporary files scheduled for deletion
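The queueing step reduces to a single BullMQ call. A sketch (Redis location, job name, and payload fields are illustrative):
```typescript
import { Queue } from 'bullmq';

const flyerQueue = new Queue('flyer-processing', {
  connection: { host: 'localhost', port: 6379 }, // assumed Redis location
});

// Step 3 above: hand the validated upload off to a background worker.
async function enqueueFlyer(filePath: string, checksum: string, userId: string) {
  await flyerQueue.add('process-flyer', { filePath, checksum, userId });
}
```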
---
## Architecture Layers
The application follows a strict layered architecture as defined in ADR-035.
```
+-----------------------------------------------------------------------+
| ROUTES LAYER |
| Responsibilities: |
| - HTTP request/response handling |
| - Input validation (via middleware) |
| - Authentication/authorization checks |
| - Rate limiting |
| - Response formatting (sendSuccess, sendPaginated, sendError) |
+----------------------------------+------------------------------------+
|
v
+-----------------------------------------------------------------------+
| SERVICES LAYER |
| Responsibilities: |
| - Business logic orchestration |
| - Transaction coordination (withTransaction) |
| - External API integration |
| - Cross-repository operations |
| - Event publishing |
+----------------------------------+------------------------------------+
|
v
+-----------------------------------------------------------------------+
| REPOSITORY LAYER |
| Responsibilities: |
| - Direct database access |
| - Query construction |
| - Entity mapping |
| - Error translation (handleDbError) |
+-----------------------------------------------------------------------+
```
### Layer Communication Rules
1. **Routes MUST NOT** directly access repositories (except simple CRUD)
2. **Repositories MUST NOT** call other repositories (use services)
3. **Services MAY** call other services
4. **Infrastructure services MAY** be called from any layer
### Service Types and Naming Conventions
| Type | Suffix | Example | Location |
| ------------------- | ------------- | --------------------- | ------------------ |
| Business Service | `*Service.ts` | `authService.ts` | `src/services/` |
| Server-Only Service | `*.server.ts` | `aiService.server.ts` | `src/services/` |
| Database Repository | `*.db.ts` | `user.db.ts` | `src/services/db/` |
| Infrastructure | Descriptive | `logger.server.ts` | `src/services/` |
### Repository Method Naming (ADR-034)
| Prefix | Behavior | Return Type |
| ------- | ----------------------------------- | -------------- |
| `get*` | Throws `NotFoundError` if not found | Entity |
| `find*` | Returns `null` if not found | Entity or null |
| `list*` | Returns empty array if none found | Entity[] |
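In code, the three prefixes differ only in how a missing row is handled. A sketch using a stand-in for the ADR-001 error type:
```typescript
import { Pool } from 'pg';

class NotFoundError extends Error {} // stand-in for the ADR-001 error hierarchy

interface User {
  user_id: string;
  email: string;
}

// find*: returns null when absent.
async function findUserByEmail(pool: Pool, email: string): Promise<User | null> {
  const { rows } = await pool.query<User>('SELECT * FROM users WHERE email = $1', [email]);
  return rows[0] ?? null;
}

// get*: throws NotFoundError when absent.
async function getUserByEmail(pool: Pool, email: string): Promise<User> {
  const user = await findUserByEmail(pool, email);
  if (!user) throw new NotFoundError(`User not found: ${email}`);
  return user;
}

// list*: returns an empty array when nothing matches.
async function listUsersByDomain(pool: Pool, domain: string): Promise<User[]> {
  const { rows } = await pool.query<User>('SELECT * FROM users WHERE email LIKE $1', [
    `%@${domain}`,
  ]);
  return rows;
}
```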
---
## Key Entities
### Entity Relationship Overview
```
+------------------+ +------------------+ +------------------+
| users | | profiles | | addresses |
|------------------| |------------------| |------------------|
| user_id (PK) |<-------->| user_id (PK,FK) |--------->| address_id (PK) |
| email | | full_name | | address_line_1 |
| password_hash | | avatar_url | | city |
| refresh_token | | points | | province_state |
+--------+---------+ | role | | latitude |
| +------------------+ | longitude |
| | location (GIS) |
| +--------+---------+
| ^
v |
+--------+---------+ +------------------+ +--------+---------+
| stores |--------->| store_locations |--------->| |
|------------------| |------------------| | |
| store_id (PK) | | store_location_id| | |
| name | | store_id (FK) | | |
| logo_url | | address_id (FK) | | |
+--------+---------+ +------------------+ +------------------+
|
v
+--------+---------+ +------------------+ +------------------+
| flyers |--------->| flyer_items |--------->| master_grocery_ |
|------------------| |------------------| | items |
| flyer_id (PK) | | flyer_item_id | |------------------|
| store_id (FK) | | flyer_id (FK) | | master_grocery_ |
| file_name | | item | | item_id (PK) |
| image_url | | price_display | | name |
| valid_from | | price_in_cents | | category_id (FK) |
| valid_to | | quantity | | is_allergen |
| status | | master_item_id | +------------------+
| item_count | | category_id (FK) |
+------------------+ +------------------+
```
### Core Entities
| Entity | Table | Purpose |
| --------------------- | ---------------------- | --------------------------------------------- |
| **User** | `users` | Authentication credentials and login tracking |
| **Profile** | `profiles` | Public user data, preferences, points |
| **Store** | `stores` | Grocery store chains (Safeway, Kroger, etc.) |
| **StoreLocation** | `store_locations` | Physical store locations with addresses |
| **Address** | `addresses` | Normalized address storage with geocoding |
| **Flyer** | `flyers` | Uploaded flyer metadata and status |
| **FlyerItem** | `flyer_items` | Individual deals extracted from flyers |
| **MasterGroceryItem** | `master_grocery_items` | Canonical grocery item dictionary |
| **Category** | `categories` | Item categorization (Produce, Dairy, etc.) |
### User Feature Entities
| Entity | Table | Purpose |
| -------------------- | --------------------- | ------------------------------------ |
| **UserWatchedItem** | `user_watched_items` | Items user wants to track prices for |
| **UserAlert** | `user_alerts` | Price alert thresholds |
| **ShoppingList** | `shopping_lists` | User shopping lists |
| **ShoppingListItem** | `shopping_list_items` | Items on shopping lists |
| **Recipe** | `recipes` | User recipes with ingredients |
| **RecipeIngredient** | `recipe_ingredients` | Recipe ingredient list |
| **PantryItem** | `pantry_items` | User pantry inventory |
| **Receipt** | `receipts` | Scanned receipt data |
| **ReceiptItem** | `receipt_items` | Items parsed from receipts |
### Gamification Entities
| Entity | Table | Purpose |
| ------------------- | ------------------- | ------------------------------------- |
| **Achievement** | `achievements` | Defined achievements |
| **UserAchievement** | `user_achievements` | Achievements earned by users |
| **ActivityLog** | `activity_log` | User activity for feeds and analytics |
---
## Authentication Flow
### JWT Token Architecture
```
+-------------------+ +-------------------+ +-------------------+
| Login Request | | Server | | Database |
| (email/pass) +---->+ Validates +---->+ Verify User |
+-------------------+ +--------+----------+ +-------------------+
|
v
+--------+----------+
| Generate |
| JWT Tokens |
| - Access (15m) |
| - Refresh (7d) |
+--------+----------+
|
v
+-------------------+ +--------+----------+
| Client Storage |<----+ Return Tokens |
| - Access: Memory| | - Access: Body |
| - Refresh: HTTP | | - Refresh: Cookie|
| Only Cookie | +-------------------+
+-------------------+
```
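The "Generate JWT Tokens" step can be sketched with `jsonwebtoken` (claim names and the shared secret are illustrative; the real implementation may use separate secrets per token type):
```typescript
import jwt from 'jsonwebtoken';

function issueTokens(userId: string) {
  const accessToken = jwt.sign({ sub: userId }, process.env.JWT_SECRET!, {
    expiresIn: '15m', // short-lived, returned in the response body
  });
  const refreshToken = jwt.sign({ sub: userId }, process.env.JWT_SECRET!, {
    expiresIn: '7d', // long-lived, set as an HTTP-only cookie
  });
  return { accessToken, refreshToken };
}
```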
### Authentication Methods
1. **Local Authentication**: Email/password with bcrypt hashing
2. **Google OAuth 2.0**: Social login via Google account
3. **GitHub OAuth 2.0**: Social login via GitHub account
### Security Features (ADR-016, ADR-048)
- **Rate Limiting**: Login attempts rate-limited per IP
- **Account Lockout**: 15-minute lockout after 5 failed attempts
- **Password Requirements**: Strength validation via zxcvbn
- **JWT Rotation**: Access tokens are short-lived, refresh tokens are rotated
- **HTTPS Only**: All production traffic encrypted
### Protected Route Flow
```
+-------------------+ +-------------------+ +-------------------+
| API Request | | requireAuth | | JWT Strategy |
| + Bearer Token +---->+ Middleware +---->+ Validate |
+-------------------+ +--------+----------+ +--------+----------+
| |
| +-------------------+
| |
v v
+--------+-----+----+
| req.user |
| populated |
+--------+----------+
|
v
+--------+----------+
| Route Handler |
| Executes |
+-------------------+
```
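With the JWT strategy registered in Passport, the middleware itself is thin. A sketch (the route path is illustrative):
```typescript
import passport from 'passport';
import type { RequestHandler } from 'express';

// Validates the Bearer token and populates req.user, or responds with 401.
const requireAuth: RequestHandler = passport.authenticate('jwt', { session: false });

// Usage on a protected route:
// router.get('/api/users/me', requireAuth, (req, res) => { /* req.user is set */ });
```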
---
## Background Processing
### Worker Architecture
```
+-------------------+ +-------------------+ +-------------------+
| API Server | | Redis | | Worker Process |
| (Queue Producer)| | (Job Storage) | | (Consumer) |
+--------+----------+ +--------+----------+ +--------+----------+
| ^ |
| Add Job | Poll/Process |
+------------------------>+<------------------------+
|
|
+-------------------------+-------------------------+
| | |
v v v
+--------+----------+ +--------+----------+ +--------+----------+
| Flyer Worker | | Email Worker | | Analytics |
| Concurrency: 1 | | Concurrency: 10 | | Worker |
+-------------------+ +-------------------+ | Concurrency: 1 |
+-------------------+
```
### Job Lifecycle
1. **Queued**: Job added to queue with data payload
2. **Active**: Worker picks up job and begins processing
3. **Completed**: Job finishes successfully
4. **Failed**: Job encounters error, may retry
5. **Delayed**: Job waiting for retry backoff
### Retry Strategy
Jobs use exponential backoff for retries:
```
Attempt 1: Immediate
Attempt 2: Initial delay (e.g., 5 seconds)
Attempt 3: 2x delay (e.g., 10 seconds)
Attempt 4: 4x delay (e.g., 20 seconds)
...
```
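In BullMQ this policy is declared via job options, typically as queue defaults. A sketch matching the `flyer-processing` row in the queue table above:
```typescript
import { Queue } from 'bullmq';

const flyerQueue = new Queue('flyer-processing', {
  connection: { host: 'localhost', port: 6379 }, // assumed Redis location
  defaultJobOptions: {
    attempts: 3, // total tries
    backoff: { type: 'exponential', delay: 5_000 }, // 5s, then 10s, 20s, ...
  },
});
```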
### Scheduled Jobs (ADR-037)
| Schedule | Job | Purpose |
| --------------------- | ---------------- | ------------------------------------------ |
| Daily 2:00 AM | Analytics Report | Generate daily usage statistics |
| Weekly Sunday 3:00 AM | Weekly Analytics | Generate weekly summary reports |
| Every 6 hours | Token Cleanup | Remove expired refresh tokens |
| Every hour | Expiry Alerts | Check and send pantry expiry notifications |
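With node-cron, each schedule above maps to a standard cron expression. A sketch for the daily analytics job (the handler body is illustrative):
```typescript
import cron from 'node-cron';

// Daily at 2:00 AM — matches the Analytics Report row above.
cron.schedule('0 2 * * *', () => {
  // In this codebase the handler would enqueue the analytics-reporting job.
  console.log('Triggering daily analytics report');
});
```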
---
## Deployment Architecture
### Environment Overview
```
+-----------------------------------------------------------------------------------+
| DEVELOPMENT |
+-----------------------------------------------------------------------------------+
| |
| +-----------------------------------+ +-----------------------------------+ |
| | Windows Host Machine | | Linux Dev Container | |
| | - VS Code | | (flyer-crawler-dev) | |
| | - Podman Desktop +---->+ - Node.js 22 | |
| | - Git | | - PostgreSQL 16 | |
| +-----------------------------------+ | - Redis 7 | |
| | - Bugsink (local) | |
| +-----------------------------------+ |
+-----------------------------------------------------------------------------------+
+-----------------------------------------------------------------------------------+
| TEST SERVER |
+-----------------------------------------------------------------------------------+
| |
| +-----------------------------------+ +-----------------------------------+ |
| | NGINX Reverse Proxy | | Application Server | |
| | flyer-crawler-test.projectium.com | - PM2 Process Manager | |
| | - SSL/TLS (Let's Encrypt) +---->+ - Node.js 22 | |
| | - Rate Limiting | | - PostgreSQL 16 | |
| +-----------------------------------+ | - Redis 7 | |
| +-----------------------------------+ |
+-----------------------------------------------------------------------------------+
+-----------------------------------------------------------------------------------+
| PRODUCTION |
+-----------------------------------------------------------------------------------+
| |
| +-----------------------------------+ +-----------------------------------+ |
| | NGINX Reverse Proxy | | Application Server | |
| | flyer-crawler.projectium.com | - PM2 Process Manager | |
| | - SSL/TLS (Let's Encrypt) +---->+ - Node.js 22 (Clustered) | |
| | - Rate Limiting | | - PostgreSQL 16 | |
| | - Gzip Compression | | - Redis 7 | |
| +-----------------------------------+ +-----------------------------------+ |
| |
| +-----------------------------------+ |
| | Monitoring | |
| | - Bugsink (Error Tracking) | |
| | - Logstash (Log Aggregation) | |
| +-----------------------------------+ |
+-----------------------------------------------------------------------------------+
```
### Deployment Pipeline (ADR-017)
```
+------------+ +------------+ +------------+ +------------+
| Push to | | Gitea | | Build & | | Deploy |
| main +---->+ Actions +---->+ Test +---->+ to Prod |
+------------+ +------------+ +------------+ +------------+
|
v
+------+------+
| Type |
| Check |
+------+------+
|
v
+------+------+
| Unit |
| Tests |
+------+------+
|
v
+------+------+
| Build |
| Assets |
+-------------+
```
### Server Paths
| Environment | Web Root | Data Storage | Flyer Images |
| ----------- | --------------------------------------------- | ----------------------------------------------------- | ---------------------------------------------------------- |
| Production | `/var/www/flyer-crawler.projectium.com/` | `/var/www/flyer-crawler.projectium.com/uploads/` | `/var/www/flyer-crawler.projectium.com/flyer-images/` |
| Test | `/var/www/flyer-crawler-test.projectium.com/` | `/var/www/flyer-crawler-test.projectium.com/uploads/` | `/var/www/flyer-crawler-test.projectium.com/flyer-images/` |
| Development | Container-local | Container-local | `/app/public/flyer-images/` |
Flyer images are served by NGINX as static files at `/flyer-images/` with 7-day browser caching.
---
## Design Principles and ADRs
The system architecture is governed by Architecture Decision Records (ADRs). Key decisions include:
### Core Infrastructure
| ADR | Title | Status |
| ------- | ------------------------------------ | -------- |
| ADR-001 | Standardized Error Handling | Accepted |
| ADR-002 | Standardized Transaction Management | Accepted |
| ADR-007 | Configuration and Secrets Management | Accepted |
| ADR-020 | Health Checks and Probes | Accepted |
### API and Integration
| ADR | Title | Status |
| ------- | ----------------------------- | ----------- |
| ADR-003 | Standardized Input Validation | Accepted |
| ADR-022 | Real-time Notification System | Proposed |
| ADR-028 | API Response Standardization | Implemented |
### Security
| ADR | Title | Status |
| ------- | ----------------------- | --------------------- |
| ADR-016 | API Security Hardening | Accepted |
| ADR-032 | Rate Limiting Strategy | Accepted |
| ADR-048 | Authentication Strategy | Partially Implemented |
### Architecture Patterns
| ADR | Title | Status |
| ------- | ---------------------------------- | -------- |
| ADR-034 | Repository Pattern Standards | Accepted |
| ADR-035 | Service Layer Architecture | Accepted |
| ADR-036 | Event Bus and Pub/Sub Pattern | Accepted |
| ADR-041 | AI/Gemini Integration Architecture | Accepted |
### Operations
| ADR | Title | Status |
| ------- | ------------------------------- | --------------------- |
| ADR-006 | Background Job Processing | Accepted |
| ADR-014 | Containerization and Deployment | Partially Implemented |
| ADR-037 | Scheduled Jobs and Cron Pattern | Accepted |
| ADR-038 | Graceful Shutdown Pattern | Accepted |
### Observability
| ADR | Title | Status |
| ------- | --------------------------------- | -------- |
| ADR-004 | Structured Logging | Accepted |
| ADR-015 | APM and Error Tracking | Proposed |
| ADR-050 | PostgreSQL Function Observability | Accepted |
**Full ADR Index**: [docs/adr/index.md](../adr/index.md)
---
## Key Files Reference
### Configuration Files
| File | Purpose |
| ------------------------ | ------------------------------------------------------ |
| `server.ts` | Express application setup and middleware configuration |
| `src/config/env.ts` | Environment variable validation (Zod schema) |
| `src/config/passport.ts` | Authentication strategies (Local, JWT, OAuth) |
| `ecosystem.config.cjs` | PM2 process manager configuration |
| `vite.config.ts` | Vite build and dev server configuration |
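As a rough illustration of the validation pattern in `src/config/env.ts`, a Zod schema might look like the sketch below; the variable names and defaults here are assumptions, not the real file:

```typescript
import { z } from 'zod';

// Hypothetical sketch of the Zod validation pattern; the actual schema
// in src/config/env.ts will differ in variable names and defaults.
const envSchema = z.object({
  NODE_ENV: z.enum(['development', 'test', 'production']).default('development'),
  PORT: z.coerce.number().default(3001),
  DATABASE_URL: z.string().url(),
  REDIS_URL: z.string().url(),
});

// Fails fast at startup if a required variable is missing or malformed.
export const env = envSchema.parse(process.env);
```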
### Route Files
| File | API Prefix |
| ----------------------------- | -------------- |
| `src/routes/auth.routes.ts` | `/api/auth` |
| `src/routes/user.routes.ts` | `/api/users` |
| `src/routes/flyer.routes.ts` | `/api/flyers` |
| `src/routes/recipe.routes.ts` | `/api/recipes` |
| `src/routes/deals.routes.ts` | `/api/deals` |
| `src/routes/store.routes.ts` | `/api/stores` |
| `src/routes/admin.routes.ts` | `/api/admin` |
| `src/routes/health.routes.ts` | `/api/health` |
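These routers are mounted in `server.ts` under the prefixes above. A minimal sketch of the pattern, assuming default exports (check each route file for its actual export style):

```typescript
import express from 'express';
// Illustrative imports; a subset of the route files listed above.
import authRoutes from './src/routes/auth.routes';
import flyerRoutes from './src/routes/flyer.routes';
import healthRoutes from './src/routes/health.routes';

const app = express();
app.use(express.json());

// Each router is mounted under its API prefix from the table above.
app.use('/api/auth', authRoutes);
app.use('/api/flyers', flyerRoutes);
app.use('/api/health', healthRoutes);
```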
### Service Files
| File | Purpose |
| ----------------------------------------------- | --------------------------------------- |
| `src/services/flyerProcessingService.server.ts` | Flyer processing pipeline orchestration |
| `src/services/aiService.server.ts` | Google Gemini AI integration |
| `src/services/cacheService.server.ts` | Redis caching abstraction |
| `src/services/emailService.server.ts` | Email sending |
| `src/services/queues.server.ts` | BullMQ queue definitions |
| `src/services/workers.server.ts` | BullMQ worker definitions |
### Database Files
| File | Purpose |
| ---------------------------------- | -------------------------------------------- |
| `src/services/db/connection.db.ts` | Database pool and transaction management |
| `src/services/db/errors.db.ts` | Database error types |
| `src/services/db/user.db.ts` | User repository |
| `src/services/db/flyer.db.ts` | Flyer repository |
| `sql/master_schema_rollup.sql` | Complete database schema (for test DB setup) |
| `sql/initial_schema.sql` | Fresh installation schema |
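The pool and transaction management in `src/services/db/connection.db.ts` follows ADR-002. A hedged sketch of what such a helper typically looks like with `pg` (the function name and configuration are assumptions):

```typescript
import { Pool, type PoolClient } from 'pg';

// Illustrative only; the real pool configuration lives in connection.db.ts.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Runs `fn` inside a transaction: COMMIT on success, ROLLBACK on error.
export async function withTransaction<T>(fn: (client: PoolClient) => Promise<T>): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await fn(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}
```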
### Type Definitions
| File | Purpose |
| ----------------------- | ---------------------------- |
| `src/types.ts` | Core entity type definitions |
| `src/types/job-data.ts` | BullMQ job payload types |
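For orientation, a BullMQ payload type in `src/types/job-data.ts` might look like the following sketch; the interface and field names are illustrative assumptions:

```typescript
// Hypothetical payload shape; consult src/types/job-data.ts for the
// actual definitions used by the BullMQ workers.
export interface FlyerProcessingJobData {
  flyerId: string;
  imageUrl: string;
  requestedBy: string;
}
```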
---
## Additional Resources
- **API Documentation**: Available at `/docs/api-docs` in development environments
- **Testing Guide**: [docs/tests/](../tests/)
- **Getting Started**: [docs/getting-started/](../getting-started/)
- **Operations Guide**: [docs/operations/](../operations/)
- **Authentication Details**: [docs/architecture/AUTHENTICATION.md](./AUTHENTICATION.md)
- **Database Schema**: [docs/architecture/DATABASE.md](./DATABASE.md)
- **WebSocket Usage**: [docs/architecture/WEBSOCKET_USAGE.md](./WEBSOCKET_USAGE.md)
---
_This document is maintained as part of the Flyer Crawler project documentation. For updates, contact the development team or submit a pull request._

# WebSocket Real-Time Notifications - Usage Guide
This guide shows you how to use the WebSocket real-time notification system in your React components.
## Quick Start
### 1. Enable Global Notifications
Add the `NotificationToastHandler` to your root `App.tsx`:
```tsx
// src/App.tsx
import { Toaster } from 'react-hot-toast';
import { NotificationToastHandler } from './components/NotificationToastHandler';
function App() {
return (
<>
{/* React Hot Toast container */}
<Toaster position="top-right" />
{/* WebSocket notification handler (renders nothing, handles side effects) */}
<NotificationToastHandler
enabled={true}
playSound={false} // Set to true to play notification sounds
/>
{/* Your app routes and components */}
<YourAppContent />
</>
);
}
```
### 2. Add Notification Bell to Header
```tsx
// src/components/Header.tsx
import { NotificationBell } from './NotificationBell';
import { useNavigate } from 'react-router-dom';
function Header() {
const navigate = useNavigate();
return (
<header className="flex items-center justify-between p-4">
<h1>Flyer Crawler</h1>
<div className="flex items-center gap-4">
{/* Notification bell with unread count */}
<NotificationBell onClick={() => navigate('/notifications')} showConnectionStatus={true} />
<UserMenu />
</div>
</header>
);
}
```
### 3. Listen for Notifications in Components
```tsx
// src/pages/DealsPage.tsx
import { useEventBus } from '../hooks/useEventBus';
import { useCallback, useState } from 'react';
import type { DealNotificationData } from '../types/websocket';
function DealsPage() {
const [deals, setDeals] = useState<DealNotificationData['deals']>([]);
// Listen for new deal notifications
const handleDealNotification = useCallback((data: DealNotificationData) => {
console.log('New deals received:', data.deals);
// Update your deals list
setDeals((prev) => [...data.deals, ...prev]);
// Or refetch from API
// refetchDeals();
}, []);
useEventBus('notification:deal', handleDealNotification);
return (
<div>
<h1>Deals</h1>
{/* Render deals */}
</div>
);
}
```
## Available Components
### `NotificationBell`
A notification bell icon with unread count and connection status indicator.
**Props:**
- `onClick?: () => void` - Callback when bell is clicked
- `showConnectionStatus?: boolean` - Show green/red/yellow connection dot (default: `true`)
- `className?: string` - Custom CSS classes
**Example:**
```tsx
<NotificationBell
onClick={() => navigate('/notifications')}
showConnectionStatus={true}
className="mr-4"
/>
```
### `ConnectionStatus`
A simple status indicator showing if WebSocket is connected (no bell icon).
**Example:**
```tsx
<ConnectionStatus />
```
### `NotificationToastHandler`
Global handler that listens for WebSocket events and displays toasts. Render it once at the app root.
**Props:**
- `enabled?: boolean` - Enable/disable toast notifications (default: `true`)
- `playSound?: boolean` - Play sound on notifications (default: `false`)
- `soundUrl?: string` - Custom notification sound URL
**Example:**
```tsx
<NotificationToastHandler enabled={true} playSound={true} soundUrl="/custom-sound.mp3" />
```
## Available Hooks
### `useWebSocket`
Connect to the WebSocket server and manage connection state.
**Options:**
- `autoConnect?: boolean` - Auto-connect on mount (default: `true`)
- `maxReconnectAttempts?: number` - Max reconnect attempts (default: `5`)
- `reconnectDelay?: number` - Base reconnect delay in ms (default: `1000`)
- `onConnect?: () => void` - Callback on connection
- `onDisconnect?: () => void` - Callback on disconnect
- `onError?: (error: Event) => void` - Callback on error
**Returns:**
- `isConnected: boolean` - Connection status
- `isConnecting: boolean` - Connecting state
- `error: string | null` - Error message if any
- `connect: () => void` - Manual connect function
- `disconnect: () => void` - Manual disconnect function
- `send: (message: WebSocketMessage) => void` - Send message to server
**Example:**
```tsx
function ConnectionPanel() {
  const { isConnected, error, connect } = useWebSocket({
    autoConnect: true,
    maxReconnectAttempts: 3,
    onConnect: () => console.log('Connected!'),
    onDisconnect: () => console.log('Disconnected!'),
  });

  return (
    <div>
      <p>Status: {isConnected ? 'Connected' : 'Disconnected'}</p>
      {error && <p>Error: {error}</p>}
      <button onClick={connect}>Reconnect</button>
    </div>
  );
}
```
### `useEventBus`
Subscribe to event bus events (used with WebSocket integration).
**Parameters:**
- `event: string` - Event name to listen for
- `callback: (data?: T) => void` - Callback function
**Available Events:**
- `'notification:deal'` - Deal notifications (`DealNotificationData`)
- `'notification:system'` - System messages (`SystemMessageData`)
- `'notification:error'` - Error messages (`{ message: string; code?: string }`)
**Example:**
```tsx
import { useEventBus } from '../hooks/useEventBus';
import type { DealNotificationData } from '../types/websocket';
function MyComponent() {
useEventBus<DealNotificationData>('notification:deal', (data) => {
console.log('Received deal:', data);
});
return <div>Listening for deals...</div>;
}
```
## Message Types
### Deal Notification
```typescript
interface DealNotificationData {
notification_id?: string;
deals: Array<{
item_name: string;
best_price_in_cents: number;
store_name: string;
store_id: string;
}>;
user_id: string;
message: string;
}
```
### System Message
```typescript
interface SystemMessageData {
message: string;
severity: 'info' | 'warning' | 'error';
}
```
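As a worked example of consuming these shapes, the sketch below formats the cent-based deal price for display and maps system-message severity onto `react-hot-toast` variants (the component name is ours; `react-hot-toast` is already used by `NotificationToastHandler` above):

```tsx
import toast from 'react-hot-toast';
import { useEventBus } from '../hooks/useEventBus';
import type { DealNotificationData, SystemMessageData } from '../types/websocket';

// Formats a cent-based price (e.g. 499) as a display string (e.g. "$4.99").
const formatPrice = (cents: number) => `$${(cents / 100).toFixed(2)}`;

function NotificationFormatter() {
  useEventBus<DealNotificationData>('notification:deal', (data) => {
    if (!data) return;
    for (const deal of data.deals) {
      toast.success(`${deal.item_name}: ${formatPrice(deal.best_price_in_cents)} at ${deal.store_name}`);
    }
  });

  useEventBus<SystemMessageData>('notification:system', (data) => {
    if (!data) return;
    // react-hot-toast has no built-in "warning" variant, so map it manually.
    if (data.severity === 'error') toast.error(data.message);
    else if (data.severity === 'warning') toast(data.message, { icon: '⚠️' });
    else toast(data.message);
  });

  return null;
}
```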
## Advanced Usage
### Custom Notification Handling
If you don't want to use the default `NotificationToastHandler`, you can create your own:
```tsx
import { useWebSocket } from '../hooks/useWebSocket';
import { useEventBus } from '../hooks/useEventBus';
import type { DealNotificationData } from '../types/websocket';
function CustomNotificationHandler() {
  const { isConnected } = useWebSocket({ autoConnect: true });

  useEventBus<DealNotificationData>('notification:deal', (data) => {
    // `dispatch(addDeals(...))` and `showCustomNotification` are placeholders
    // for your own state management and UI layers.
    dispatch(addDeals(data.deals));
    showCustomNotification(data.message);
  });

  return null; // Or return your custom UI
}
```
### Conditional WebSocket Connection
```tsx
import { useWebSocket } from '../hooks/useWebSocket';
import { useAuth } from '../hooks/useAuth';
function ConditionalWebSocket() {
const { user } = useAuth();
// Only connect if user is logged in
useWebSocket({
autoConnect: !!user,
});
return null;
}
```
### Send Messages to Server
```tsx
import { useWebSocket } from '../hooks/useWebSocket';
function PingComponent() {
const { send, isConnected } = useWebSocket();
const sendPing = () => {
send({
type: 'ping',
data: {},
timestamp: new Date().toISOString(),
});
};
return (
<button onClick={sendPing} disabled={!isConnected}>
Send Ping
</button>
);
}
```
## Admin Monitoring
### Get WebSocket Stats
Admin users can check WebSocket connection statistics:
```bash
# Get connection stats
curl -H "Authorization: Bearer <admin-token>" \
http://localhost:3001/api/admin/websocket/stats
```
**Response:**
```json
{
"success": true,
"data": {
"totalUsers": 42,
"totalConnections": 67
}
}
```
### Admin Dashboard Integration
```tsx
import { useEffect, useState } from 'react';

// `token` is passed in as a prop for brevity; in practice it would come
// from your auth context or storage.
function AdminWebSocketStats({ token }: { token: string }) {
  const [stats, setStats] = useState({ totalUsers: 0, totalConnections: 0 });

  useEffect(() => {
    const fetchStats = async () => {
      const response = await fetch('/api/admin/websocket/stats', {
        headers: { Authorization: `Bearer ${token}` },
      });
      if (!response.ok) return; // Ignore transient failures between polls
      const data = await response.json();
      setStats(data.data);
    };
    fetchStats();
    const interval = setInterval(fetchStats, 5000); // Poll every 5s
    return () => clearInterval(interval);
  }, [token]);

  return (
    <div className="p-4 border rounded">
      <h3>WebSocket Stats</h3>
      <p>Connected Users: {stats.totalUsers}</p>
      <p>Total Connections: {stats.totalConnections}</p>
    </div>
  );
}
```
## Troubleshooting
### Connection Issues
1. **Check JWT Token**: WebSocket requires a valid JWT token in cookies or query string
2. **Check Server Logs**: Look for WebSocket connection errors in server logs
3. **Check Browser Console**: WebSocket errors are logged to console
4. **Verify Path**: WebSocket server is at `ws://localhost:3001/ws` (or `wss://` for HTTPS); a manual connection sketch follows this list
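For quick manual debugging, you can open a raw socket from the browser console. A minimal sketch, assuming the server reads the JWT from a `token` query parameter (check the server's WebSocket auth middleware for the actual parameter name):

```typescript
// Paste into the browser console. The `token` query parameter is an
// assumption; if your deployment authenticates via cookies instead,
// connect without the query string.
const jwt = '<paste a valid JWT here>';
const ws = new WebSocket(`ws://localhost:3001/ws?token=${jwt}`);
ws.onopen = () => console.log('WebSocket open');
ws.onclose = (e) => console.log('WebSocket closed:', e.code, e.reason);
ws.onerror = (e) => console.error('WebSocket error:', e);
ws.onmessage = (e) => console.log('Message:', e.data);
```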
### Not Receiving Notifications
1. **Check Connection Status**: Use `<ConnectionStatus />` to verify connection
2. **Verify Event Name**: Ensure you're listening to the correct event (`notification:deal`, etc.)
3. **Check User ID**: Notifications are sent to specific users - verify JWT user_id matches
### High Memory Usage
1. **Connection Leaks**: Ensure components using `useWebSocket` unmount cleanly (see the cleanup sketch after this list)
2. **Event Listeners**: `useEventBus` automatically cleans up, but verify no manual listeners remain
3. **Check Stats**: Use `/api/admin/websocket/stats` to monitor connection count
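A minimal cleanup sketch for point 1, assuming `connect` and `disconnect` behave as documented above:

```tsx
import { useEffect } from 'react';
import { useWebSocket } from '../hooks/useWebSocket';

// Scopes the connection to this component's lifetime: connect on mount,
// disconnect on unmount, so route changes do not leak connections.
function ScopedSocket() {
  const { connect, disconnect } = useWebSocket({ autoConnect: false });

  useEffect(() => {
    connect();
    return () => disconnect();
    // Intentionally mount/unmount only; add deps if your hook memoizes these.
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, []);

  return null;
}
```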
## Testing
### Unit Tests
```typescript
import { renderHook } from '@testing-library/react';
import { useWebSocket } from '../hooks/useWebSocket';
describe('useWebSocket', () => {
it('should connect automatically', () => {
const { result } = renderHook(() => useWebSocket({ autoConnect: true }));
expect(result.current.isConnecting).toBe(true);
});
});
```
### Integration Tests
See [src/tests/integration/websocket.integration.test.ts](../src/tests/integration/websocket.integration.test.ts) for comprehensive integration tests.
## Related Documentation
- [ADR-022: Real-time Notification System](./adr/0022-real-time-notification-system.md)
- [ADR-036: Event Bus and Pub/Sub Pattern](./adr/0036-event-bus-and-pub-sub-pattern.md)
- [ADR-042: Email and Notification Architecture](./adr/0042-email-and-notification-architecture.md)
