Compare commits

...

412 Commits

Author SHA1 Message Date
Gitea Actions
622c919733 ci: Bump version to 0.9.52 [skip ci] 2026-01-07 08:26:14 +05:00
c7f6b6369a fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 30m27s
2026-01-06 19:25:25 -08:00
Gitea Actions
879d956003 ci: Bump version to 0.9.51 [skip ci] 2026-01-07 07:11:22 +05:00
27eaac7ea8 fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 27m15s
2026-01-06 18:10:47 -08:00
Gitea Actions
93618c57e5 ci: Bump version to 0.9.50 [skip ci] 2026-01-07 06:41:16 +05:00
7f043ef704 fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 29m45s
2026-01-06 17:40:20 -08:00
Gitea Actions
62e35deddc ci: Bump version to 0.9.49 [skip ci] 2026-01-07 02:54:13 +05:00
59f6f43d03 fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 32m36s
2026-01-06 13:53:00 -08:00
Gitea Actions
e675c1a73c ci: Bump version to 0.9.48 [skip ci] 2026-01-07 01:35:26 +05:00
3c19084a0a fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 30m17s
2026-01-06 12:34:18 -08:00
Gitea Actions
e2049c6b9f ci: Bump version to 0.9.47 [skip ci] 2026-01-06 23:34:29 +05:00
a3839c2f0d debugging the flyer integration issue
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 33m13s
2026-01-06 10:33:51 -08:00
Gitea Actions
c1df3d7b1b ci: Bump version to 0.9.46 [skip ci] 2026-01-06 22:39:47 +05:00
94782f030d debugging the flyer integration issue
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 25m42s
2026-01-06 09:38:14 -08:00
Gitea Actions
1c25b79251 ci: Bump version to 0.9.45 [skip ci] 2026-01-06 14:34:44 +05:00
0b0fa8294d debugging the flyer integration issue
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 27m54s
2026-01-06 01:33:48 -08:00
Gitea Actions
f49f3a75fb ci: Bump version to 0.9.44 [skip ci] 2026-01-06 13:41:43 +05:00
8f14044ae6 debugging the flyer integration issue
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 26m27s
2026-01-06 00:41:03 -08:00
Gitea Actions
55e1e425f4 ci: Bump version to 0.9.43 [skip ci] 2026-01-06 12:56:47 +05:00
68b16ad2e8 fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 25m2s
2026-01-05 23:53:54 -08:00
Gitea Actions
6a28934692 ci: Bump version to 0.9.42 [skip ci] 2026-01-06 12:25:08 +05:00
78c4a5fee6 fix the dang integration tests
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2026-01-05 23:20:56 -08:00
Gitea Actions
1ce5f481a8 ci: Bump version to 0.9.41 [skip ci] 2026-01-06 11:39:28 +05:00
Gitea Actions
e0120d38fd ci: Bump version to 0.9.39 [skip ci] 2026-01-06 11:39:27 +05:00
6b2079ef2c fix the dang integration tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 32m44s
2026-01-05 22:38:21 -08:00
Gitea Actions
0478e176d5 ci: Bump version to 0.9.38 [skip ci] 2026-01-06 10:23:22 +05:00
47f7f97cd9 fuck database contraints - seems buggy
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 32m10s
2026-01-05 21:16:08 -08:00
Gitea Actions
b0719d1e39 ci: Bump version to 0.9.37 [skip ci] 2026-01-06 10:11:19 +05:00
0039ac3752 fuck database contraints - seems buggy
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 37s
2026-01-05 21:08:16 -08:00
Gitea Actions
3c8316f4f7 ci: Bump version to 0.9.36 [skip ci] 2026-01-06 09:03:20 +05:00
2564df1c64 get rid of localhost in tests - not a qualified URL - we'll see
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 33m19s
2026-01-05 20:02:44 -08:00
Gitea Actions
696c547238 ci: Bump version to 0.9.35 [skip ci] 2026-01-06 08:11:42 +05:00
38165bdb9a get rid of localhost in tests - not a qualified URL - we'll see
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 26m14s
2026-01-05 19:10:46 -08:00
Gitea Actions
6139dca072 ci: Bump version to 0.9.34 [skip ci] 2026-01-06 06:33:46 +05:00
68bfaa50e6 more baseurl work - hopefully that does it for now
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 26m5s
2026-01-05 17:33:00 -08:00
Gitea Actions
9c42621f74 ci: Bump version to 0.9.33 [skip ci] 2026-01-06 04:34:48 +05:00
1b98282202 more rate limiting
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 30m19s
2026-01-05 15:31:01 -08:00
Gitea Actions
b6731b220c ci: Bump version to 0.9.32 [skip ci] 2026-01-06 04:13:42 +05:00
3507d455e8 more rate limiting
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2026-01-05 15:13:10 -08:00
Gitea Actions
92b2adf8e8 ci: Bump version to 0.9.31 [skip ci] 2026-01-06 04:07:21 +05:00
d6c7452256 more rate limiting
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 41s
2026-01-05 15:06:55 -08:00
Gitea Actions
d812b681dd ci: Bump version to 0.9.30 [skip ci] 2026-01-06 03:54:42 +05:00
b4306a6092 more rate limiting
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 50s
2026-01-05 14:53:49 -08:00
Gitea Actions
57fdd159d5 ci: Bump version to 0.9.29 [skip ci] 2026-01-06 01:08:45 +05:00
4a747ca042 even even more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 23m46s
2026-01-05 12:08:18 -08:00
Gitea Actions
e0bf96824c ci: Bump version to 0.9.28 [skip ci] 2026-01-06 00:28:11 +05:00
e86e09703e even even more and more test fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 59s
2026-01-05 11:27:13 -08:00
Gitea Actions
275741c79e ci: Bump version to 0.9.27 [skip ci] 2026-01-05 15:32:08 +05:00
3a40249ddb even more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 22m19s
2026-01-05 02:30:28 -08:00
Gitea Actions
4c70905950 ci: Bump version to 0.9.26 [skip ci] 2026-01-05 14:51:27 +05:00
0b4884ff2a even more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 26m1s
2026-01-05 01:50:54 -08:00
Gitea Actions
e4acab77c8 ci: Bump version to 0.9.25 [skip ci] 2026-01-05 14:26:57 +05:00
4e20b1b430 even more and more test fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 54s
2026-01-05 01:26:12 -08:00
Gitea Actions
15747ac942 ci: Bump version to 0.9.24 [skip ci] 2026-01-05 12:37:56 +05:00
e5fa89ef17 even more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 27m55s
2026-01-04 23:36:56 -08:00
Gitea Actions
2c65da31e9 ci: Bump version to 0.9.23 [skip ci] 2026-01-05 05:12:54 +05:00
eeec6af905 even more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 27m33s
2026-01-04 16:01:55 -08:00
Gitea Actions
e7d03951b9 ci: Bump version to 0.9.22 [skip ci] 2026-01-05 03:35:06 +05:00
af8816e0af more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 29m30s
2026-01-04 14:34:16 -08:00
Gitea Actions
64f6427e1a ci: Bump version to 0.9.21 [skip ci] 2026-01-05 01:31:50 +05:00
c9b7a75429 more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m59s
2026-01-04 12:30:44 -08:00
Gitea Actions
0490f6922e ci: Bump version to 0.9.20 [skip ci] 2026-01-05 00:30:12 +05:00
057c4c9174 more and more test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 21m19s
2026-01-04 11:28:52 -08:00
Gitea Actions
a9e56bc707 ci: Bump version to 0.9.19 [skip ci] 2026-01-04 16:00:35 +05:00
e5d09c73b7 test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 20m31s
2026-01-04 02:59:55 -08:00
Gitea Actions
6e1298b825 ci: Bump version to 0.9.18 [skip ci] 2026-01-04 15:22:37 +05:00
fc8e43437a test fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 56s
2026-01-04 02:21:08 -08:00
Gitea Actions
cb453aa949 ci: Bump version to 0.9.17 [skip ci] 2026-01-04 09:02:18 +05:00
2651bd16ae test fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 52s
2026-01-03 20:01:10 -08:00
Gitea Actions
91e0f0c46f ci: Bump version to 0.9.16 [skip ci] 2026-01-04 05:05:33 +05:00
e6986d512b test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 30m38s
2026-01-03 16:04:04 -08:00
Gitea Actions
8f9c21675c ci: Bump version to 0.9.15 [skip ci] 2026-01-04 03:58:29 +05:00
7fb22cdd20 more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 25m12s
2026-01-03 14:57:40 -08:00
Gitea Actions
780291303d ci: Bump version to 0.9.14 [skip ci] 2026-01-04 02:48:56 +05:00
4f607f7d2f more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 29m49s
2026-01-03 13:47:44 -08:00
Gitea Actions
208227b3ed ci: Bump version to 0.9.13 [skip ci] 2026-01-04 01:35:36 +05:00
bf1c7d4adf more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 23m41s
2026-01-03 12:35:05 -08:00
Gitea Actions
a7a30cf983 ci: Bump version to 0.9.12 [skip ci] 2026-01-04 01:01:26 +05:00
0bc0676b33 more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m39s
2026-01-03 12:00:20 -08:00
Gitea Actions
73484d3eb4 ci: Bump version to 0.9.11 [skip ci] 2026-01-03 23:52:31 +05:00
b3253d5bbc more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m17s
2026-01-03 10:51:44 -08:00
Gitea Actions
54f3769e90 ci: Bump version to 0.9.10 [skip ci] 2026-01-03 13:34:20 +05:00
bad6f74ee6 more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m21s
2026-01-03 00:33:47 -08:00
Gitea Actions
bcf16168b6 ci: Bump version to 0.9.9 [skip ci] 2026-01-03 13:03:37 +05:00
498fbd9e0e more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 21m5s
2026-01-03 00:02:09 -08:00
Gitea Actions
007ff8e538 ci: Bump version to 0.9.8 [skip ci] 2026-01-03 11:34:34 +05:00
1fc70e3915 extend timers duration - prevent jobs from timing out after 30secs, increased to 4mins
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 22m56s
2026-01-02 22:33:51 -08:00
Gitea Actions
d891e47e02 ci: Bump version to 0.9.7 [skip ci] 2026-01-03 10:36:05 +05:00
08c39afde4 more test improvements
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 25m33s
2026-01-02 21:33:31 -08:00
Gitea Actions
c579543b8a ci: Bump version to 0.9.6 [skip ci] 2026-01-03 09:31:41 +05:00
0d84137786 test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 23m17s
2026-01-02 20:31:08 -08:00
Gitea Actions
20ee30c4b4 ci: Bump version to 0.9.5 [skip ci] 2026-01-03 08:52:26 +05:00
93612137e3 test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 25m23s
2026-01-02 19:51:10 -08:00
Gitea Actions
6e70f08e3c ci: Bump version to 0.9.4 [skip ci] 2026-01-03 07:59:50 +05:00
459f5f7976 sql fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 21m36s
2026-01-02 18:59:16 -08:00
Gitea Actions
a2e6331ddd ci: Bump version to 0.9.3 [skip ci] 2026-01-03 07:28:11 +05:00
13cd30bec9 sql fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 10m51s
2026-01-02 18:27:42 -08:00
Gitea Actions
baeb9488c6 ci: Bump version to 0.9.2 [skip ci] 2026-01-03 07:07:42 +05:00
0cba0f987e remove refresh_token as it really should not be stored
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m57s
2026-01-02 18:07:08 -08:00
Gitea Actions
958a79997d ci: Bump version to 0.9.1 [skip ci] 2026-01-03 07:01:27 +05:00
8fb1c96f93 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 46s
2026-01-02 17:56:18 -08:00
6e6fe80c7f sql fixes 2026-01-02 17:55:22 -08:00
Gitea Actions
d1554050bd ci: Bump version to 0.9.0 for production release [skip ci] 2026-01-03 05:50:23 +05:00
Gitea Actions
b1fae270bb ci: Bump version to 0.8.0 for production release [skip ci] 2026-01-03 05:48:40 +05:00
Gitea Actions
c852483e18 ci: Bump version to 0.7.29 [skip ci] 2026-01-03 02:43:54 +05:00
2e01ad5bc9 more test fixin
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m50s
2026-01-02 13:43:20 -08:00
Gitea Actions
26763c7183 ci: Bump version to 0.7.28 [skip ci] 2026-01-03 02:04:26 +05:00
f0c5c2c45b more test fixin
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m40s
2026-01-02 13:03:25 -08:00
Gitea Actions
034bb60fd5 ci: Bump version to 0.7.27 [skip ci] 2026-01-03 01:31:54 +05:00
d4b389cb79 more test fixin
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m39s
2026-01-02 12:31:19 -08:00
Gitea Actions
a71fb81468 ci: Bump version to 0.7.26 [skip ci] 2026-01-03 00:58:34 +05:00
9bee0a013b unit test auto-provider refactor
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m8s
2026-01-02 11:58:03 -08:00
Gitea Actions
8bcb4311b3 ci: Bump version to 0.7.25 [skip ci] 2026-01-03 00:34:45 +05:00
9fd15f3a50 unit test auto-provider refactor
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m58s
2026-01-02 11:33:11 -08:00
Gitea Actions
e3c876c7be ci: Bump version to 0.7.24 [skip ci] 2026-01-02 23:23:21 +05:00
32dcf3b89e unit test auto-provider refactor
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 21m2s
2026-01-02 10:22:27 -08:00
7066b937f6 unit test auto-provider refactor 2026-01-02 10:17:01 -08:00
Gitea Actions
8553ea8811 ci: Bump version to 0.7.23 [skip ci] 2026-01-02 12:13:43 +05:00
19885a50f7 unit test auto-provider refactor
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m30s
2026-01-01 23:12:32 -08:00
Gitea Actions
ce82034b9d ci: Bump version to 0.7.22 [skip ci] 2026-01-02 07:30:53 +05:00
4528da2934 integration test fixes + added new ai models and recipeSuggestion
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m36s
2026-01-01 18:30:03 -08:00
Gitea Actions
146d4c1351 ci: Bump version to 0.7.21 [skip ci] 2026-01-02 03:37:22 +05:00
88625706f4 integration test fixes + added new ai models and recipeSuggestion
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m3s
2026-01-01 14:36:43 -08:00
Gitea Actions
e395faed30 ci: Bump version to 0.7.20 [skip ci] 2026-01-02 01:40:18 +05:00
e8f8399896 integration test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m11s
2026-01-01 12:30:03 -08:00
Gitea Actions
ac0115af2b ci: Bump version to 0.7.19 [skip ci] 2026-01-02 00:55:57 +05:00
f24b15f19b integration test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 14m22s
2026-01-01 11:55:26 -08:00
Gitea Actions
e64426bd84 ci: Bump version to 0.7.18 [skip ci] 2026-01-02 00:35:49 +05:00
0ec4cd68d2 integration test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m25s
2026-01-01 11:35:23 -08:00
Gitea Actions
840516d2a3 ci: Bump version to 0.7.17 [skip ci] 2026-01-02 00:29:45 +05:00
59355c3eef integration test fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 39s
2026-01-01 11:29:10 -08:00
d024935fe9 integration test fixes 2026-01-01 11:18:27 -08:00
Gitea Actions
5a5470634e ci: Bump version to 0.7.16 [skip ci] 2026-01-01 23:07:19 +05:00
392231ad63 more db
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m34s
2026-01-01 10:06:49 -08:00
Gitea Actions
4b1c896621 ci: Bump version to 0.7.15 [skip ci] 2026-01-01 22:33:18 +05:00
720920a51c more db
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 20m35s
2026-01-01 09:31:49 -08:00
Gitea Actions
460adb9506 ci: Bump version to 0.7.14 [skip ci] 2026-01-01 16:08:43 +05:00
7aa1f756a9 more db
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 10m26s
2026-01-01 03:08:02 -08:00
Gitea Actions
c484a8ca9b ci: Bump version to 0.7.13 [skip ci] 2026-01-01 15:58:33 +05:00
28d2c9f4ec more db
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2026-01-01 02:58:02 -08:00
Gitea Actions
ee253e9449 ci: Bump version to 0.7.12 [skip ci] 2026-01-01 15:48:03 +05:00
b6c15e53d0 more db
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 10m24s
2026-01-01 02:47:31 -08:00
Gitea Actions
722162c2c3 ci: Bump version to 0.7.11 [skip ci] 2026-01-01 15:35:25 +05:00
02a76fe996 more db
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 10m20s
2026-01-01 02:35:00 -08:00
Gitea Actions
0ebb03a7ab ci: Bump version to 0.7.10 [skip ci] 2026-01-01 15:30:43 +05:00
748ac9e049 more db
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 51s
2026-01-01 02:30:06 -08:00
Gitea Actions
495edd621c ci: Bump version to 0.7.9 [skip ci] 2026-01-01 14:59:38 +05:00
4ffca19db6 more db
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 10m28s
2026-01-01 01:58:18 -08:00
Gitea Actions
717427c5d7 ci: Bump version to 0.7.8 [skip ci] 2026-01-01 10:08:06 +05:00
cc438a0e36 more db
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 38s
2025-12-31 21:07:40 -08:00
Gitea Actions
a32a0b62fc ci: Bump version to 0.7.7 [skip ci] 2026-01-01 09:44:49 +05:00
342f72b713 more db
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 45s
2025-12-31 20:44:00 -08:00
Gitea Actions
91254d18f3 ci: Bump version to 0.7.6 [skip ci] 2026-01-01 06:02:31 +05:00
40580dbf15 database work !
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 41s
2025-12-31 17:01:35 -08:00
7f1d74c047 flyer upload (anon) issues 2025-12-31 09:40:46 -08:00
Gitea Actions
ecec686347 ci: Bump version to 0.7.5 [skip ci] 2025-12-31 22:27:56 +05:00
86de680080 flyer processing fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m36s
2025-12-31 09:27:06 -08:00
Gitea Actions
0371947065 ci: Bump version to 0.7.4 [skip ci] 2025-12-31 22:03:02 +05:00
296698758c flyer upload (anon) issues
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m20s
2025-12-31 09:02:09 -08:00
Gitea Actions
18c1161587 ci: Bump version to 0.7.3 [skip ci] 2025-12-31 15:09:29 +05:00
0010396780 flyer upload (anon) issues
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 41s
2025-12-31 02:08:37 -08:00
Gitea Actions
d4557e13fb ci: Bump version to 0.7.2 [skip ci] 2025-12-31 13:32:58 +05:00
3e41130c69 again
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 18m59s
2025-12-31 00:31:18 -08:00
Gitea Actions
d9034563d6 ci: Bump version to 0.7.1 [skip ci] 2025-12-31 13:21:54 +05:00
5836a75157 flyer upload (anon) issues
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 42s
2025-12-31 00:21:19 -08:00
Gitea Actions
790008ae0d ci: Bump version to 0.7.0 for production release [skip ci] 2025-12-31 12:43:41 +05:00
Gitea Actions
b5b91eb968 ci: Bump version to 0.6.6 [skip ci] 2025-12-31 12:29:43 +05:00
38eb810e7a logging the frontend loop
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m55s
2025-12-30 23:28:38 -08:00
Gitea Actions
458588a6e7 ci: Bump version to 0.6.5 [skip ci] 2025-12-31 11:34:23 +05:00
0b4113417f flyer upload (anon) issues
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m56s
2025-12-30 22:33:55 -08:00
Gitea Actions
b59d2a9533 ci: Bump version to 0.6.4 [skip ci] 2025-12-31 11:11:53 +05:00
6740b35f8a flyer upload (anon) issues
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m52s
2025-12-30 22:11:21 -08:00
Gitea Actions
92ad82a012 ci: Bump version to 0.6.3 [skip ci] 2025-12-31 10:54:15 +05:00
672e4ca597 flyer upload (anon) issues
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m56s
2025-12-30 21:53:36 -08:00
Gitea Actions
e4d70a9b37 ci: Bump version to 0.6.2 [skip ci] 2025-12-31 10:31:41 +05:00
c30f1c4162 flyer upload (anon) issues
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m55s
2025-12-30 21:30:55 -08:00
Gitea Actions
44062a9f5b ci: Bump version to 0.6.1 [skip ci] 2025-12-31 09:52:26 +05:00
17fac8cf86 flyer upload (anon) issues
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m1s
2025-12-30 20:44:34 -08:00
Gitea Actions
9fa8553486 ci: Bump version to 0.6.0 for production release [skip ci] 2025-12-31 09:04:20 +05:00
Gitea Actions
f5b0b3b543 ci: Bump version to 0.5.5 [skip ci] 2025-12-31 08:29:53 +05:00
e3ed5c7e63 fix tests + flyer upload (anon)
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m0s
2025-12-30 19:28:57 -08:00
Gitea Actions
ae0040e092 ci: Bump version to 0.5.4 [skip ci] 2025-12-31 03:57:03 +05:00
1f3f99d430 fix tests + flyer upload (anon)
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m0s
2025-12-30 14:56:25 -08:00
Gitea Actions
7be72f1758 ci: Bump version to 0.5.3 [skip ci] 2025-12-31 03:42:15 +05:00
0967c7a33d fix tests + flyer upload (anon)
Some checks are pending
Deploy to Test Environment / deploy-to-test (push) Has started running
2025-12-30 14:41:06 -08:00
1f1c0fa6f3 fix tests + flyer upload (anon) 2025-12-30 14:38:11 -08:00
Gitea Actions
728b1a20d3 ci: Bump version to 0.5.2 [skip ci] 2025-12-30 23:37:58 +05:00
f248f7cbd0 fix tests + flyer upload (anon)
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 14m42s
2025-12-30 10:37:29 -08:00
Gitea Actions
0ad9bb16c2 ci: Bump version to 0.5.1 [skip ci] 2025-12-30 23:33:27 +05:00
510787bc5b fix tests + flyer upload (anon)
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 42s
2025-12-30 10:32:58 -08:00
Gitea Actions
9f696e7676 ci: Bump version to 0.5.0 for production release [skip ci] 2025-12-30 22:55:32 +05:00
Gitea Actions
a77105316f ci: Bump version to 0.4.6 [skip ci] 2025-12-30 22:39:46 +05:00
cadacb63f5 fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m54s
2025-12-30 03:19:47 -08:00
Gitea Actions
62592f707e ci: Bump version to 0.4.5 [skip ci] 2025-12-30 15:32:34 +05:00
023e48d99a fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m27s
2025-12-30 02:32:02 -08:00
Gitea Actions
99efca0371 ci: Bump version to 0.4.4 [skip ci] 2025-12-30 15:11:01 +05:00
1448950b81 fix unit tests
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 42s
2025-12-30 02:10:29 -08:00
Gitea Actions
a811fdac63 ci: Bump version to 0.4.3 [skip ci] 2025-12-30 14:42:51 +05:00
1201fe4d3c fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m41s
2025-12-30 01:42:03 -08:00
Gitea Actions
ba9228c9cb ci: Bump version to 0.4.2 [skip ci] 2025-12-30 13:10:33 +05:00
b392b82c25 fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m20s
2025-12-30 00:09:57 -08:00
Gitea Actions
87825d13d6 ci: Bump version to 0.4.1 [skip ci] 2025-12-30 12:24:16 +05:00
21a6a796cf fix some uploading flyer issues + more unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m34s
2025-12-29 23:23:27 -08:00
Gitea Actions
ecd0a73bc8 ci: Bump version to 0.4.0 for production release [skip ci] 2025-12-30 11:22:35 +05:00
Gitea Actions
39d61dc7ad ci: Bump version to 0.3.0 for production release [skip ci] 2025-12-30 11:20:47 +05:00
Gitea Actions
43491359d9 ci: Bump version to 0.2.37 [skip ci] 2025-12-30 10:28:29 +05:00
5ed2cea7e9 /coverage
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m0s
2025-12-29 21:27:28 -08:00
Gitea Actions
cbb16a8d52 ci: Bump version to 0.2.36 [skip ci] 2025-12-30 09:27:29 +05:00
70e94a6ce0 fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m5s
2025-12-29 20:27:00 -08:00
Gitea Actions
b61a00003a ci: Bump version to 0.2.35 [skip ci] 2025-12-30 09:16:46 +05:00
52dba6f890 moar!
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2025-12-29 20:16:02 -08:00
4242678aab fix unit tests 2025-12-29 20:08:01 -08:00
Gitea Actions
b2e086d5ba ci: Bump version to 0.2.34 [skip ci] 2025-12-30 08:44:55 +05:00
07a9787570 fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m5s
2025-12-29 19:44:25 -08:00
Gitea Actions
4bf5dc3d58 ci: Bump version to 0.2.33 [skip ci] 2025-12-30 08:02:02 +05:00
be3d269928 fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m3s
2025-12-29 19:01:21 -08:00
Gitea Actions
80a53fae94 ci: Bump version to 0.2.32 [skip ci] 2025-12-30 07:27:55 +05:00
e15d2b6c2f fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m4s
2025-12-29 18:27:30 -08:00
Gitea Actions
7a52bf499e ci: Bump version to 0.2.31 [skip ci] 2025-12-30 06:58:25 +05:00
2489ec8d2d fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m3s
2025-12-29 17:57:40 -08:00
Gitea Actions
4a4f349805 ci: Bump version to 0.2.30 [skip ci] 2025-12-30 06:19:25 +05:00
517a268307 fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m5s
2025-12-29 17:18:52 -08:00
Gitea Actions
a94b2a97b1 ci: Bump version to 0.2.29 [skip ci] 2025-12-30 05:41:58 +05:00
542cdfbb82 fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m18s
2025-12-29 16:41:32 -08:00
Gitea Actions
262062f468 ci: Bump version to 0.2.28 [skip ci] 2025-12-30 05:38:33 +05:00
0a14193371 fix unit tests
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 40s
2025-12-29 16:37:55 -08:00
Gitea Actions
7f665f5117 ci: Bump version to 0.2.27 [skip ci] 2025-12-30 05:09:16 +05:00
2782a8fb3b fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m3s
2025-12-29 16:08:49 -08:00
Gitea Actions
c182ef6d30 ci: Bump version to 0.2.26 [skip ci] 2025-12-30 04:38:22 +05:00
fdb3b76cbd fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m59s
2025-12-29 15:37:51 -08:00
Gitea Actions
01e7c843cb ci: Bump version to 0.2.25 [skip ci] 2025-12-30 04:15:41 +05:00
a0dbefbfa0 fix unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m4s
2025-12-29 15:14:44 -08:00
Gitea Actions
ab3fc318a0 ci: Bump version to 0.2.24 [skip ci] 2025-12-30 02:44:22 +05:00
e658b35e43 ffs
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m3s
2025-12-29 13:43:41 -08:00
Gitea Actions
67e106162a ci: Bump version to 0.2.23 [skip ci] 2025-12-30 02:35:43 +05:00
b7f3182fd6 clean up routes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 4m24s
2025-12-29 13:34:26 -08:00
Gitea Actions
ac60072d88 ci: Bump version to 0.2.22 [skip ci] 2025-12-29 12:09:21 +05:00
9390f38bf6 maybe a few too many fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 8m45s
2025-12-28 23:08:40 -08:00
Gitea Actions
236d5518c9 ci: Bump version to 0.2.21 [skip ci] 2025-12-29 11:45:13 +05:00
fd52a79a72 fixin
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 42s
2025-12-28 22:38:26 -08:00
Gitea Actions
f72819e343 ci: Bump version to 0.2.20 [skip ci] 2025-12-29 11:26:09 +05:00
1af8be3f15 more fixings
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 38s
2025-12-28 22:20:28 -08:00
Gitea Actions
28d03f4e21 ci: Bump version to 0.2.19 [skip ci] 2025-12-29 10:39:22 +05:00
2e72ee81dd maybe a few too many fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 41s
2025-12-28 21:38:31 -08:00
Gitea Actions
ba67ace190 ci: Bump version to 0.2.18 [skip ci] 2025-12-29 04:33:54 +05:00
Gitea Actions
50782c30e5 ci: Bump version to 0.2.16 [skip ci] 2025-12-29 04:33:54 +05:00
4a2ff8afc5 fix unit tests
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 8m39s
2025-12-28 15:33:22 -08:00
Gitea Actions
7a1c14ce89 ci: Bump version to 0.2.15 [skip ci] 2025-12-29 04:12:16 +05:00
6fafc3d089 test secrets better
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 8m47s
2025-12-28 15:11:46 -08:00
Gitea Actions
4316866bce ci: Bump version to 0.2.14 [skip ci] 2025-12-29 03:54:44 +05:00
356c1a1894 jwtsecret issue
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 24s
2025-12-28 14:50:57 -08:00
Gitea Actions
2a310648ca ci: Bump version to 0.2.13 [skip ci] 2025-12-29 03:42:41 +05:00
8592633c22 unit test fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2025-12-28 14:42:11 -08:00
Gitea Actions
0a9cdb8709 ci: Bump version to 0.2.12 [skip ci] 2025-12-29 02:50:56 +05:00
0d21e098f8 Merge branches 'main' and 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m7s
2025-12-28 13:49:58 -08:00
b6799ed167 test fixing and flyer processor refactor 2025-12-28 13:48:27 -08:00
Gitea Actions
be5bda169e ci: Bump version to 0.2.11 [skip ci] 2025-12-29 00:08:54 +05:00
4ede403356 refactor flyer processing etc to be more atomic
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m54s
2025-12-28 11:07:46 -08:00
5d31605b80 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com 2025-12-27 23:36:06 -08:00
ddd4ad024e pm2 worker fixes 2025-12-27 23:29:46 -08:00
Gitea Actions
4e927f48bd ci: Bump version to 0.2.10 [skip ci] 2025-12-28 11:55:35 +05:00
af5644d17a add backoffs etc
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m40s
2025-12-27 22:54:51 -08:00
Gitea Actions
016c0a883a ci: Bump version to 0.2.9 [skip ci] 2025-12-28 11:28:27 +05:00
c6a5f889b4 unit test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m12s
2025-12-27 22:27:39 -08:00
Gitea Actions
c895ecdb28 ci: Bump version to 0.2.8 [skip ci] 2025-12-28 10:30:44 +05:00
05e3f8a61c minor fix
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m10s
2025-12-27 21:29:37 -08:00
Gitea Actions
f79a2abc65 ci: Bump version to 0.2.7 [skip ci] 2025-12-28 10:17:24 +05:00
a726c270bb Refactor the "God Component" (App.tsx) Your App.tsx has lower branch coverage (77%) and uncovered lines. This usually means it's doing too much: managing routing, auth state checks, theme toggling, and global error handling. Move Logic to "Initialization Hooks": Create a useAppInitialization hook that handles the OAuth token check, version check, and theme sync. Use Layouts for Routing: Move the "What's New" modal and "Anonymous Banner" into the MainLayout or a specialized AppGuard component, leaving App.tsx as a clean list of Routes.
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 56s
2025-12-27 21:13:15 -08:00
Gitea Actions
8a4965c45b ci: Bump version to 0.2.6 [skip ci] 2025-12-28 00:01:33 +05:00
93497bf7c7 unit test fixes
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m2s
2025-12-27 11:00:19 -08:00
Gitea Actions
20584af729 ci: Bump version to 0.2.5 [skip ci] 2025-12-27 22:11:57 +05:00
be9f452656 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m5s
2025-12-27 09:11:00 -08:00
ef4b8e58fe several fixes to various tests 2025-12-27 09:10:51 -08:00
Gitea Actions
a42f7d7007 ci: Bump version to 0.2.4 [skip ci] 2025-12-27 12:38:49 +05:00
768d02b9ed several fixes to various tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 14m14s
2025-12-26 23:37:39 -08:00
Gitea Actions
c4742959e4 ci: Bump version to 0.2.3 [skip ci] 2025-12-27 09:17:15 +05:00
97c54c0c5c Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m48s
2025-12-26 20:16:33 -08:00
7cc50907d1 more testing fixes 2025-12-26 20:16:19 -08:00
Gitea Actions
b4199f7c48 ci: Bump version to 0.2.2 [skip ci] 2025-12-27 08:37:04 +05:00
dda36f7bc5 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m49s
2025-12-26 19:36:34 -08:00
27810bbb36 some pm2 fiex + unit test work for new flyeruploader 2025-12-26 19:35:44 -08:00
Gitea Actions
7a1421d5c2 ci: Bump version to 0.2.1 [skip ci] 2025-12-27 07:51:11 +05:00
1b52478f97 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m56s
2025-12-26 18:50:22 -08:00
fe8b000737 try to make upload better using tan-react library 2025-12-26 18:49:54 -08:00
Gitea Actions
d2babbe3b0 ci: Bump version to 0.2.0 for production release [skip ci] 2025-12-27 06:32:10 +05:00
Gitea Actions
684d81db2a ci: Bump version to 0.1.19 [skip ci] 2025-12-27 06:18:02 +05:00
59ffa65562 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m37s
2025-12-26 17:17:12 -08:00
0c0dd852ac hanldle uncaught exceptions in the tests 2025-12-26 17:16:34 -08:00
Gitea Actions
cde766872e ci: Bump version to 0.1.18 [skip ci] 2025-12-27 00:00:47 +05:00
604b543c12 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m35s
2025-12-26 10:59:48 -08:00
fd67fe2941 more unit test fixes 2025-12-26 10:59:38 -08:00
Gitea Actions
582035b60e ci: Bump version to 0.1.17 [skip ci] 2025-12-26 23:35:02 +05:00
44e7670a89 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m37s
2025-12-26 10:34:31 -08:00
2abfb3ed6e more unit tests 2025-12-26 10:32:25 -08:00
Gitea Actions
219de4a25c ci: Bump version to 0.1.16 [skip ci] 2025-12-26 22:53:31 +05:00
1540d5051f Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m45s
2025-12-26 09:52:47 -08:00
9c978c26fa not sure why those errors got removed we'll see 2025-12-26 09:52:41 -08:00
Gitea Actions
adb109d8e9 ci: Bump version to 0.1.15 [skip ci] 2025-12-26 22:33:15 +05:00
c668c8785f not sure why those errors got removed we'll see
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m39s
2025-12-26 09:32:38 -08:00
Gitea Actions
695bbb61b9 ci: Bump version to 0.1.14 [skip ci] 2025-12-26 22:00:15 +05:00
877c971833 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m48s
2025-12-26 08:59:39 -08:00
ed3af07aab not sure why those errors got removed we'll see 2025-12-26 08:59:31 -08:00
Gitea Actions
dd4b34edfa ci: Bump version to 0.1.13 [skip ci] 2025-12-26 21:44:58 +05:00
91fa2f0516 not sure why those errors got removed we'll see
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m30s
2025-12-26 08:43:49 -08:00
Gitea Actions
aefd57e57b ci: Bump version to 0.1.12 [skip ci] 2025-12-26 08:12:15 +05:00
2ca4eb47ac Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m39s
2025-12-25 19:11:25 -08:00
a4fe30da22 not sure why those errors got removed we'll see 2025-12-25 19:11:00 -08:00
Gitea Actions
abab7fd25e ci: Bump version to 0.1.11 [skip ci] 2025-12-26 07:33:29 +05:00
53dd26d2d9 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m34s
2025-12-25 18:32:56 -08:00
ab3da0336c more route work - fuck you ai 2025-12-25 18:32:14 -08:00
Gitea Actions
ed6d6349a2 ci: Bump version to 0.1.10 [skip ci] 2025-12-26 06:54:19 +05:00
d4db2a709a Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 47s
2025-12-25 17:53:28 -08:00
508583809b minor sql fix in schema check 2025-12-25 17:53:21 -08:00
Gitea Actions
6b1f7e7590 ci: Bump version to 0.1.9 [skip ci] 2025-12-26 03:46:16 +05:00
07bb31f4fb Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m29s
2025-12-25 14:45:29 -08:00
a42fb76da8 minor sql fix in schema check 2025-12-25 14:45:22 -08:00
Gitea Actions
08c320423c ci: Bump version to 0.1.8 [skip ci] 2025-12-26 01:17:16 +05:00
d2498065ed Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2025-12-25 12:16:33 -08:00
56dc96f418 more work on the BullMQ workers 2025-12-25 12:16:22 -08:00
Gitea Actions
4e9aa0efc3 ci: Bump version to 0.1.7 [skip ci] 2025-12-26 00:01:02 +05:00
e5e4b1316c Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 39s
2025-12-25 11:00:21 -08:00
e8d511b4de more work on the BullMQ workers 2025-12-25 10:59:35 -08:00
Gitea Actions
c4bbf5c251 ci: Bump version to 0.1.6 [skip ci] 2025-12-25 07:19:39 +05:00
32a9e6732b Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 3h0m53s
2025-12-24 18:18:35 -08:00
e7c076e2ed test repair 2025-12-24 18:18:28 -08:00
Gitea Actions
dbe8e72efe ci: Bump version to 0.1.5 [skip ci] 2025-12-25 06:13:16 +05:00
38bd193042 not sure why those errors got removed we'll see
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2025-12-24 17:12:20 -08:00
Gitea Actions
57215e2778 ci: Bump version to 0.1.4 [skip ci] 2025-12-25 06:04:17 +05:00
2c1de24e9a undo stupid logging change
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m21s
2025-12-24 16:54:56 -08:00
c8baff7aac Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com 2025-12-24 16:17:45 -08:00
de3f21a7ec not sure why those errors got removed we'll see 2025-12-24 16:16:42 -08:00
Gitea Actions
c6adbf79e7 ci: Bump version to 0.1.3 [skip ci] 2025-12-25 02:26:17 +05:00
7399a27600 add ai agent fallbacks
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 3h14m13s
2025-12-24 13:25:18 -08:00
Gitea Actions
68aadcaa4e ci: Bump version to 0.1.2 [skip ci] 2025-12-25 01:41:06 +05:00
971d2c3fa7 add ai agent fallbacks
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m18s
2025-12-24 12:39:15 -08:00
Gitea Actions
daaacfde5e ci: Bump version to 0.1.1 [skip ci] 2025-12-24 23:53:27 +05:00
7ac8fe1d29 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks are pending
Deploy to Test Environment / deploy-to-test (push) Has started running
2025-12-24 10:52:13 -08:00
a2462dfb6b testing push to prod etc 2025-12-24 10:51:43 -08:00
Gitea Actions
a911224fb4 ci: Bump version to 0.1.0 for production release [skip ci] 2025-12-24 23:24:53 +05:00
Gitea Actions
bf4bcef890 ci: Bump version to 0.0.30 [skip ci] 2025-12-24 22:59:36 +05:00
ac6cd2e0a1 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 16m10s
2025-12-24 09:58:45 -08:00
eea03880c1 exclude some dirs from coverage 2025-12-24 09:58:37 -08:00
Gitea Actions
7fc263691f ci: Bump version to 0.0.29 [skip ci] 2025-12-24 22:41:17 +05:00
c0912d36d5 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m59s
2025-12-24 09:39:56 -08:00
612c2b5943 deploy to test fix 2025-12-24 09:39:49 -08:00
Gitea Actions
8e787ddcf0 ci: Bump version to 0.0.28 [skip ci] 2025-12-24 22:18:18 +05:00
11c52d284c fixing unit tests
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m13s
2025-12-24 09:17:09 -08:00
Gitea Actions
b528bd3651 ci: Bump version to 0.0.27 [skip ci] 2025-12-24 22:06:03 +05:00
4c5ceb1bd6 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks are pending
Deploy to Test Environment / deploy-to-test (push) Has started running
2025-12-24 09:05:12 -08:00
bcc4ad64dc fixing unit tests 2025-12-24 09:04:10 -08:00
Gitea Actions
d520980322 ci: Bump version to 0.0.26 [skip ci] 2025-12-24 21:23:30 +05:00
d79955aaa0 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m50s
2025-12-24 08:22:30 -08:00
e66027dc8e fix e2e and deploy to prod 2025-12-24 08:21:35 -08:00
Gitea Actions
027df989a4 ci: Bump version to 0.0.25 [skip ci] 2025-12-24 12:50:52 +05:00
d4d69caaf7 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 9m47s
2025-12-23 23:50:10 -08:00
03b5af39e1 consolidate some testing functions 2025-12-23 23:50:03 -08:00
Gitea Actions
8a86333f86 ci: Bump version to 0.0.24 [skip ci] 2025-12-24 10:50:48 +05:00
f173f805ea Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 19m55s
2025-12-23 21:49:16 -08:00
d3b0996ad5 create a few initial E2E tests 2025-12-23 21:49:05 -08:00
Gitea Actions
b939262f0c ci: Bump version to 0.0.23 [skip ci] 2025-12-24 10:18:28 +05:00
9437f3d6c6 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 18m50s
2025-12-23 21:16:47 -08:00
f1e028d498 only one error left - huzzah ! 2025-12-23 21:16:40 -08:00
Gitea Actions
5274650aea ci: Bump version to 0.0.22 [skip ci] 2025-12-24 08:10:33 +05:00
de5a9a565b Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 17m19s
2025-12-23 19:08:59 -08:00
10a379c5e3 fix for integration tests 404 ? not sure this is right 2025-12-23 19:08:53 -08:00
a6a484d432 fix for integration tests 404 ? not sure this is right 2025-12-23 18:13:35 -08:00
Gitea Actions
4b0a172c35 ci: Bump version to 0.0.21 [skip ci] 2025-12-24 07:06:33 +05:00
e8c894d5cf Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m46s
2025-12-23 18:05:28 -08:00
6c8fd4b126 prettier ! 2025-12-23 17:57:32 -08:00
Gitea Actions
a1f52544d0 ci: Bump version to 0.0.20 [skip ci] 2025-12-24 06:49:05 +05:00
2334359756 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 14m31s
2025-12-23 17:48:07 -08:00
406954ca06 database fixes due to adding foreign keys and seed script improvement 2025-12-23 17:48:00 -08:00
Gitea Actions
95d441be98 ci: Bump version to 0.0.19 [skip ci] 2025-12-24 06:22:03 +05:00
186ed484b7 last test fixes for upcoming V0.1 + pretty
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 2m40s
2025-12-23 17:20:51 -08:00
Gitea Actions
3669958e9d ci: Bump version to 0.0.18 [skip ci] 2025-12-24 05:32:56 +05:00
5f3daf0539 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 11m23s
2025-12-23 16:32:11 -08:00
ae7afaaf97 integration test fixes 2025-12-23 16:32:05 -08:00
Gitea Actions
3ae7b9e0d4 ci: Bump version to 0.0.17 [skip ci] 2025-12-24 04:51:21 +05:00
921c48fc57 more unit test fixes now the UseProfileAddress OOM has been identified
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 10m14s
2025-12-23 15:50:01 -08:00
Gitea Actions
2571864b91 ci: Bump version to 0.0.16 [skip ci] 2025-12-24 01:52:55 +05:00
065d0c746a Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 12m17s
2025-12-23 12:52:02 -08:00
395f6c21a2 some "push to get dev working" stuff, and possibly found the problem test 2025-12-23 12:51:56 -08:00
Gitea Actions
aec56dfc23 ci: Bump version to 0.0.15 [skip ci] 2025-12-24 01:18:44 +05:00
a12a0e5207 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 1m28s
2025-12-23 12:17:31 -08:00
e337bd67b1 some "push to get dev working" stuff, and possibly found the problem test 2025-12-23 12:17:21 -08:00
Gitea Actions
a8f5b4e51a ci: Bump version to 0.0.14 [skip ci] 2025-12-23 08:45:00 +05:00
d0ce8021d6 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m5s
2025-12-22 19:44:11 -08:00
efbb162880 keep disabling tests until the culprit is found this fucking sucks 2025-12-22 19:44:02 -08:00
Gitea Actions
e353ce8a81 ci: Bump version to 0.0.13 [skip ci] 2025-12-23 08:30:20 +05:00
b5cbf271b8 debugging the fucking OOM
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 13m51s
2025-12-22 19:29:10 -08:00
Gitea Actions
2041b4ac3c ci: Bump version to 0.0.12 [skip ci] 2025-12-23 08:19:32 +05:00
e547363a65 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2025-12-22 19:19:05 -08:00
bddaf765fc disable to stupid system test that fails - fuck it 2025-12-22 19:18:29 -08:00
Gitea Actions
3c0bebb65c ci: Bump version to 0.0.11 [skip ci] 2025-12-23 07:59:09 +05:00
265cc3ffd4 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 14m37s
2025-12-22 18:58:11 -08:00
3d5767b60b roll back changes to src/routes/system.routes.ts hopefully before OOM issues 2025-12-22 18:58:01 -08:00
Gitea Actions
e9cb45efe0 ci: Bump version to 0.0.10 [skip ci] 2025-12-23 07:41:54 +05:00
99a57f3a30 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m47s
2025-12-22 18:40:59 -08:00
e46f5eb7f6 roll back changes to src/routes/system.routes.test.ts hopefully before OOM issues 2025-12-22 18:40:37 -08:00
Gitea Actions
034887069c ci: Bump version to 0.0.9 [skip ci] 2025-12-23 07:23:30 +05:00
84b5e0e15e Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2025-12-22 18:22:23 -08:00
dc0f774699 try to stop system.route test crashes fuck sakes 2025-12-22 18:21:39 -08:00
Gitea Actions
1195b7e87f ci: Bump version to 0.0.8 [skip ci] 2025-12-23 04:45:54 +05:00
e9889f1f1e Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
All checks were successful
Deploy to Test Environment / deploy-to-test (push) Successful in 15m41s
2025-12-22 15:45:11 -08:00
3c7f6429aa try to stop system.route test crashes 2025-12-22 15:43:59 -08:00
Gitea Actions
0db90dfaa6 ci: Bump version to 0.0.7 [skip ci] 2025-12-23 04:33:19 +05:00
b7a1294ae6 fix to versioning
Some checks are pending
Deploy to Test Environment / deploy-to-test (push) Has started running
2025-12-22 15:32:43 -08:00
Gitea Actions
be652f9790 ci: Bump version to 0.0.6 [skip ci] 2025-12-23 04:17:17 +05:00
1a3e6a9ab5 unit test fixes
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Failing after 38s
2025-12-22 15:11:18 -08:00
Gitea Actions
262396ddd0 ci: Bump version to 0.0.5 [skip ci] 2025-12-23 02:29:27 +05:00
c542796048 Merge branch 'main' of https://gitea.projectium.com/torbo/flyer-crawler.projectium.com
Some checks failed
Deploy to Test Environment / deploy-to-test (push) Has been cancelled
2025-12-22 13:22:28 -08:00
5b8f309ad8 oom issue 2025-12-22 13:22:21 -08:00
341 changed files with 29121 additions and 12034 deletions

View File

@@ -0,0 +1,18 @@
{
"name": "Flyer Crawler Dev (Ubuntu 22.04)",
"dockerComposeFile": ["../compose.dev.yml"],
"service": "app",
"workspaceFolder": "/app",
"customizations": {
"vscode": {
"extensions": ["dbaeumer.vscode-eslint", "esbenp.prettier-vscode"]
}
},
"remoteUser": "root",
// Automatically install dependencies when the container is created.
// This runs inside the container, populating the isolated node_modules volume.
"postCreateCommand": "npm install",
"postAttachCommand": "npm run dev:container",
// Try to start podman machine, but exit with success (0) even if it's already running
"initializeCommand": "powershell -Command \"podman machine start; exit 0\""
}

View File

@@ -47,6 +47,19 @@ jobs:
- name: Install Dependencies
run: npm ci
- name: Bump Minor Version and Push
run: |
# Configure git for the commit.
git config --global user.name 'Gitea Actions'
git config --global user.email 'actions@gitea.projectium.com'
# Bump the minor version number. This creates a new commit and a new tag.
# The commit message includes [skip ci] to prevent this push from triggering another workflow run.
npm version minor -m "ci: Bump version to %s for production release [skip ci]"
# Push the new commit and the new tag back to the main branch.
git push --follow-tags
- name: Check for Production Database Schema Changes
env:
DB_HOST: ${{ secrets.DB_HOST }}
@@ -61,9 +74,10 @@ jobs:
echo "--- Checking for production schema changes ---"
CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
echo "Current Git Schema Hash: $CURRENT_HASH"
DEPLOYED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'production';" -t -A || echo "none")
# The psql command will now fail the step if the query errors (e.g., column missing), preventing deployment on a bad schema.
DEPLOYED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'production';" -t -A)
echo "Deployed DB Schema Hash: $DEPLOYED_HASH"
if [ "$DEPLOYED_HASH" = "none" ] || [ -z "$DEPLOYED_HASH" ]; then
if [ -z "$DEPLOYED_HASH" ]; then
echo "WARNING: No schema hash found in the production database. This is expected for a first-time deployment."
elif [ "$CURRENT_HASH" != "$DEPLOYED_HASH" ]; then
echo "ERROR: Database schema mismatch detected! A manual database migration is required."
@@ -79,8 +93,9 @@ jobs:
exit 1
fi
GITEA_SERVER_URL="https://gitea.projectium.com"
COMMIT_MESSAGE=$(git log -1 --pretty=%s)
VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD)" \
COMMIT_MESSAGE=$(git log -1 --grep="\[skip ci\]" --invert-grep --pretty=%s)
PACKAGE_VERSION=$(node -p "require('./package.json').version")
VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD):$PACKAGE_VERSION" \
VITE_APP_COMMIT_URL="$GITEA_SERVER_URL/${{ gitea.repository }}/commit/${{ gitea.sha }}" \
VITE_APP_COMMIT_MESSAGE="$COMMIT_MESSAGE" \
VITE_API_BASE_URL=/api VITE_API_KEY=${{ secrets.VITE_GOOGLE_GENAI_API_KEY }} npm run build
@@ -123,6 +138,10 @@ jobs:
cd /var/www/flyer-crawler.projectium.com
npm install --omit=dev
# --- Cleanup Errored Processes ---
echo "Cleaning up errored or stopped PM2 processes..."
node -e "const exec = require('child_process').execSync; try { const list = JSON.parse(exec('pm2 jlist').toString()); list.forEach(p => { if (p.pm2_env.status === 'errored' || p.pm2_env.status === 'stopped') { console.log('Deleting ' + p.pm2_env.status + ' process: ' + p.name + ' (' + p.pm2_env.pm_id + ')'); try { exec('pm2 delete ' + p.pm2_env.pm_id); } catch(e) { console.error('Failed to delete ' + p.pm2_env.pm_id); } } }); } catch (e) { console.error('Error cleaning up processes:', e); }"
# --- Version Check Logic ---
# Get the version from the newly deployed package.json
NEW_VERSION=$(node -p "require('./package.json').version")
@@ -139,7 +158,7 @@ jobs:
else
echo "Version mismatch (Running: $RUNNING_VERSION -> Deployed: $NEW_VERSION) or app not running. Reloading PM2..."
fi
pm2 startOrReload ecosystem.config.cjs --env production && pm2 save
pm2 startOrReload ecosystem.config.cjs --env production --update-env && pm2 save
echo "Production backend server reloaded successfully."
else
echo "Version $NEW_VERSION is already running. Skipping PM2 reload."
@@ -148,7 +167,12 @@ jobs:
echo "Updating schema hash in production database..."
CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c \
"INSERT INTO public.schema_info (environment, schema_hash, deployed_at) VALUES ('production', '$CURRENT_HASH', NOW())
"CREATE TABLE IF NOT EXISTS public.schema_info (
environment VARCHAR(50) PRIMARY KEY,
schema_hash VARCHAR(64) NOT NULL,
deployed_at TIMESTAMP DEFAULT NOW()
);
INSERT INTO public.schema_info (environment, schema_hash, deployed_at) VALUES ('production', '$CURRENT_HASH', NOW())
ON CONFLICT (environment) DO UPDATE SET schema_hash = EXCLUDED.schema_hash, deployed_at = NOW();"
UPDATED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'production';" -t -A)
@@ -161,7 +185,17 @@ jobs:
- name: Show PM2 Environment for Production
run: |
echo "--- Displaying recent PM2 logs for flyer-crawler-api ---"
sleep 5
pm2 describe flyer-crawler-api || echo "Could not find production pm2 process."
pm2 logs flyer-crawler-api --lines 20 --nostream || echo "Could not find production pm2 process."
pm2 env flyer-crawler-api || echo "Could not find production pm2 process."
sleep 5 # Wait a few seconds for the app to start and log its output.
# Resolve the PM2 ID dynamically to ensure we target the correct process
PM2_ID=$(pm2 jlist | node -e "try { const list = JSON.parse(require('fs').readFileSync(0, 'utf-8')); const app = list.find(p => p.name === 'flyer-crawler-api'); console.log(app ? app.pm2_env.pm_id : ''); } catch(e) { console.log(''); }")
if [ -n "$PM2_ID" ]; then
echo "Found process ID: $PM2_ID"
pm2 describe "$PM2_ID" || echo "Failed to describe process $PM2_ID"
pm2 logs "$PM2_ID" --lines 20 --nostream || echo "Failed to get logs for $PM2_ID"
pm2 env "$PM2_ID" || echo "Failed to get env for $PM2_ID"
else
echo "Could not find process 'flyer-crawler-api' in pm2 list."
pm2 list # Fallback to listing everything to help debug
fi

View File

@@ -51,7 +51,14 @@ jobs:
# Bump the patch version number. This creates a new commit and a new tag.
# The commit message includes [skip ci] to prevent this push from triggering another workflow run.
npm version patch -m "ci: Bump version to %s [skip ci]"
# If the tag already exists (e.g. re-running a failed job), we skip the conflicting version.
if ! npm version patch -m "ci: Bump version to %s [skip ci]"; then
echo "⚠️ Version bump failed (likely tag exists). Attempting to skip to next version..."
# Bump package.json to the conflicting version without git tagging
npm version patch --no-git-tag-version > /dev/null
# Bump again to the next version, forcing it because the directory is now dirty
npm version patch -m "ci: Bump version to %s [skip ci]" --force
fi
# Push the new commit and the new tag back to the main branch.
git push --follow-tags
@@ -83,10 +90,11 @@ jobs:
# integration test suite can launch its own, fresh server instance.
# '|| true' ensures the workflow doesn't fail if the process isn't running.
run: |
pm2 stop flyer-crawler-api-test || true
pm2 stop flyer-crawler-worker-test || true
pm2 delete flyer-crawler-api-test || true
pm2 delete flyer-crawler-worker-test || true
echo "--- Stopping and deleting all test processes ---"
# Use a script to parse pm2's JSON output and delete any process whose name ends with '-test'.
# This is safer than 'pm2 delete all' and more robust than naming each process individually.
# It prevents the accumulation of duplicate processes from previous test runs.
node -e "const exec = require('child_process').execSync; try { const list = JSON.parse(exec('pm2 jlist').toString()); list.forEach(p => { if (p.name && p.name.endsWith('-test')) { console.log('Deleting test process: ' + p.name + ' (' + p.pm2_env.pm_id + ')'); try { exec('pm2 delete ' + p.pm2_env.pm_id); } catch(e) { console.error('Failed to delete ' + p.pm2_env.pm_id, e.message); } } }); console.log('✅ Test process cleanup complete.'); } catch (e) { if (e.stdout.toString().includes('No process found')) { console.log('No PM2 processes running, cleanup not needed.'); } else { console.error('Error cleaning up test processes:', e.message); } }" || true
- name: Run All Tests and Generate Merged Coverage Report
# This single step runs both unit and integration tests, then merges their
@@ -105,16 +113,21 @@ jobs:
REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_TEST }}
# --- Integration test specific variables ---
FRONTEND_URL: 'http://localhost:3000'
FRONTEND_URL: 'https://example.com'
VITE_API_BASE_URL: 'http://localhost:3001/api'
GEMINI_API_KEY: ${{ secrets.VITE_GOOGLE_GENAI_API_KEY }}
# --- JWT Secret for Passport authentication in tests ---
JWT_SECRET: ${{ secrets.JWT_SECRET }}
# --- V8 Coverage for Server Process ---
# This variable tells the Node.js process (our server, started by globalSetup)
# where to output its raw V8 coverage data.
NODE_V8_COVERAGE: '.coverage/tmp/integration-server'
# --- Increase Node.js memory limit to prevent heap out of memory errors ---
# This is crucial for memory-intensive tasks like running tests and coverage.
NODE_OPTIONS: '--max-old-space-size=8192'
NODE_OPTIONS: '--max-old-space-size=8192 --trace-warnings --unhandled-rejections=strict'
run: |
# Fail-fast check to ensure secrets are configured in Gitea for testing.
@@ -129,10 +142,49 @@ jobs:
# Run unit and integration tests as separate steps.
# The `|| true` ensures the workflow continues even if tests fail, allowing coverage to run.
echo "--- Running Unit Tests ---"
npm run test:unit -- --coverage --reporter=verbose --includeTaskLocation --testTimeout=10000 --silent=passed-only || true
# npm run test:unit -- --coverage --reporter=verbose --includeTaskLocation --testTimeout=10000 --silent=passed-only || true
npm run test:unit -- --coverage \
--coverage.exclude='**/*.test.ts' \
--coverage.exclude='**/tests/**' \
--coverage.exclude='**/mocks/**' \
--coverage.exclude='src/components/icons/**' \
--coverage.exclude='src/db/**' \
--coverage.exclude='src/lib/**' \
--coverage.exclude='src/types/**' \
--coverage.exclude='**/index.tsx' \
--coverage.exclude='**/vite-env.d.ts' \
--coverage.exclude='**/vitest.setup.ts' \
--reporter=verbose --includeTaskLocation --testTimeout=10000 --silent=passed-only --no-file-parallelism || true
echo "--- Running Integration Tests ---"
npm run test:integration -- --coverage --reporter=verbose --includeTaskLocation --testTimeout=10000 --silent=passed-only || true
npm run test:integration -- --coverage \
--coverage.exclude='**/*.test.ts' \
--coverage.exclude='**/tests/**' \
--coverage.exclude='**/mocks/**' \
--coverage.exclude='src/components/icons/**' \
--coverage.exclude='src/db/**' \
--coverage.exclude='src/lib/**' \
--coverage.exclude='src/types/**' \
--coverage.exclude='**/index.tsx' \
--coverage.exclude='**/vite-env.d.ts' \
--coverage.exclude='**/vitest.setup.ts' \
--reporter=verbose --includeTaskLocation --testTimeout=10000 --silent=passed-only || true
echo "--- Running E2E Tests ---"
# Run E2E tests using the dedicated E2E config which inherits from integration config.
# We still pass --coverage to enable it, but directory and timeout are now in the config.
npx vitest run --config vitest.config.e2e.ts --coverage \
--coverage.exclude='**/*.test.ts' \
--coverage.exclude='**/tests/**' \
--coverage.exclude='**/mocks/**' \
--coverage.exclude='src/components/icons/**' \
--coverage.exclude='src/db/**' \
--coverage.exclude='src/lib/**' \
--coverage.exclude='src/types/**' \
--coverage.exclude='**/index.tsx' \
--coverage.exclude='**/vite-env.d.ts' \
--coverage.exclude='**/vitest.setup.ts' \
--reporter=verbose --no-file-parallelism || true
# Re-enable secret masking for subsequent steps.
echo "::secret-masking::"
@@ -148,6 +200,7 @@ jobs:
echo "Checking for source coverage files..."
ls -l .coverage/unit/coverage-final.json
ls -l .coverage/integration/coverage-final.json
ls -l .coverage/e2e/coverage-final.json || echo "E2E coverage file not found"
# --- V8 Coverage Processing for Backend Server ---
# The integration tests start the server, which generates raw V8 coverage data.
@@ -160,7 +213,7 @@ jobs:
# Run c8: read raw files from the temp dir, and output an Istanbul JSON report.
# We only generate the 'json' report here because it's all nyc needs for merging.
echo "Server coverage report about to be generated..."
npx c8 report --reporter=json --temp-directory .coverage/tmp/integration-server --reports-dir .coverage/integration-server
npx c8 report --exclude='**/*.test.ts' --exclude='**/tests/**' --exclude='**/mocks/**' --reporter=json --temp-directory .coverage/tmp/integration-server --reports-dir .coverage/integration-server
echo "Server coverage report generated. Verifying existence:"
ls -l .coverage/integration-server/coverage-final.json
@@ -179,6 +232,7 @@ jobs:
# We give them unique names to be safe, though it's not strictly necessary.
cp .coverage/unit/coverage-final.json "$NYC_SOURCE_DIR/unit-coverage.json"
cp .coverage/integration/coverage-final.json "$NYC_SOURCE_DIR/integration-coverage.json"
cp .coverage/e2e/coverage-final.json "$NYC_SOURCE_DIR/e2e-coverage.json" || echo "E2E coverage file not found, skipping."
# This file might not exist if integration tests fail early, so we add `|| true`
cp .coverage/integration-server/coverage-final.json "$NYC_SOURCE_DIR/integration-server-coverage.json" || echo "Server coverage file not found, skipping."
echo "Copied coverage files to source directory. Contents:"
@@ -198,7 +252,13 @@ jobs:
--reporter=text \
--reporter=html \
--report-dir .coverage/ \
--temp-dir "$NYC_SOURCE_DIR"
--temp-dir "$NYC_SOURCE_DIR" \
--exclude "**/*.test.ts" \
--exclude "**/tests/**" \
--exclude "**/mocks/**" \
--exclude "**/index.tsx" \
--exclude "**/vite-env.d.ts" \
--exclude "**/vitest.setup.ts"
# Re-enable secret masking for subsequent steps.
echo "::secret-masking::"
@@ -211,16 +271,6 @@ jobs:
if: always() # This step runs even if the previous test or coverage steps failed.
run: echo "Skipping test artifact cleanup on runner; this is handled on the server."
- name: Deploy Coverage Report to Public URL
if: always()
run: |
TARGET_DIR="/var/www/flyer-crawler-test.projectium.com/coverage"
echo "Deploying HTML coverage report to $TARGET_DIR..."
mkdir -p "$TARGET_DIR"
rm -rf "$TARGET_DIR"/*
cp -r .coverage/* "$TARGET_DIR/"
echo "✅ Coverage report deployed to https://flyer-crawler-test.projectium.com/coverage"
- name: Archive Code Coverage Report
# This action saves the generated HTML coverage report as a downloadable artifact.
uses: actions/upload-artifact@v3
@@ -249,18 +299,19 @@ jobs:
# We normalize line endings to ensure the hash is consistent across different OS environments.
CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
echo "Current Git Schema Hash: $CURRENT_HASH"
# Query the production database to get the hash of the deployed schema.
# The `psql` command requires PGPASSWORD to be set.
# `\t` sets tuples-only mode and `\A` unaligns output to get just the raw value.
# The `|| echo "none"` ensures the command doesn't fail if the table or row doesn't exist yet.
DEPLOYED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'test';" -t -A || echo "none")
# The psql command will now fail the step if the query errors (e.g., column missing), preventing deployment on a bad schema.
DEPLOYED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'test';" -t -A)
echo "Deployed DB Schema Hash: $DEPLOYED_HASH"
# Check if the hash is "none" (command failed) OR if it's an empty string (table exists but is empty).
if [ "$DEPLOYED_HASH" = "none" ] || [ -z "$DEPLOYED_HASH" ]; then
if [ -z "$DEPLOYED_HASH" ]; then
echo "WARNING: No schema hash found in the test database."
echo "This is expected for a first-time deployment. The hash will be set after a successful deployment."
echo "--- Debug: Dumping schema_info table ---"
PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=0 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -P pager=off -c "SELECT * FROM public.schema_info;" || true
echo "----------------------------------------"
# We allow the deployment to continue, but a manual schema update is required.
# You could choose to fail here by adding `exit 1`.
elif [ "$CURRENT_HASH" != "$DEPLOYED_HASH" ]; then
@@ -284,8 +335,10 @@ jobs:
fi
GITEA_SERVER_URL="https://gitea.projectium.com" # Your Gitea instance URL
COMMIT_MESSAGE=$(git log -1 --pretty=%s)
VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD)" \
# Sanitize commit message to prevent shell injection or build breaks (removes quotes, backticks, backslashes, $)
COMMIT_MESSAGE=$(git log -1 --grep="\[skip ci\]" --invert-grep --pretty=%s | tr -d '"`\\$')
PACKAGE_VERSION=$(node -p "require('./package.json').version")
VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD):$PACKAGE_VERSION" \
VITE_APP_COMMIT_URL="$GITEA_SERVER_URL/${{ gitea.repository }}/commit/${{ gitea.sha }}" \
VITE_APP_COMMIT_MESSAGE="$COMMIT_MESSAGE" \
VITE_API_BASE_URL="https://flyer-crawler-test.projectium.com/api" VITE_API_KEY=${{ secrets.VITE_GOOGLE_GENAI_API_KEY_TEST }} npm run build
@@ -308,6 +361,17 @@ jobs:
rsync -avz dist/ "$APP_PATH"
echo "Application deployment complete."
- name: Deploy Coverage Report to Public URL
if: always()
run: |
TARGET_DIR="/var/www/flyer-crawler-test.projectium.com/coverage"
echo "Deploying HTML coverage report to $TARGET_DIR..."
mkdir -p "$TARGET_DIR"
rm -rf "$TARGET_DIR"/*
# The merged nyc report is generated in the .coverage directory. We copy its contents.
cp -r .coverage/* "$TARGET_DIR/"
echo "✅ Coverage report deployed to https://flyer-crawler-test.projectium.com/coverage"
- name: Install Backend Dependencies and Restart Test Server
env:
# --- Test Secrets Injection ---
@@ -325,8 +389,8 @@ jobs:
REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_TEST }}
# Application Secrets
FRONTEND_URL: 'https://flyer-crawler-test.projectium.com'
JWT_SECRET: ${{ secrets.JWT_SECRET_TEST }}
FRONTEND_URL: 'https://example.com'
JWT_SECRET: ${{ secrets.JWT_SECRET }}
GEMINI_API_KEY: ${{ secrets.VITE_GOOGLE_GENAI_API_KEY_TEST }}
GOOGLE_MAPS_API_KEY: ${{ secrets.GOOGLE_MAPS_API_KEY }}
@@ -340,18 +404,30 @@ jobs:
run: |
# Fail-fast check to ensure secrets are configured in Gitea.
if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
echo "ERROR: One or more test database secrets (DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE_TEST) are not set in Gitea repository settings."
MISSING_SECRETS=""
if [ -z "$DB_HOST" ]; then MISSING_SECRETS="${MISSING_SECRETS} DB_HOST"; fi
if [ -z "$DB_USER" ]; then MISSING_SECRETS="${MISSING_SECRETS} DB_USER"; fi
if [ -z "$DB_PASSWORD" ]; then MISSING_SECRETS="${MISSING_SECRETS} DB_PASSWORD"; fi
if [ -z "$DB_NAME" ]; then MISSING_SECRETS="${MISSING_SECRETS} DB_NAME"; fi
if [ -z "$JWT_SECRET" ]; then MISSING_SECRETS="${MISSING_SECRETS} JWT_SECRET"; fi
if [ ! -z "$MISSING_SECRETS" ]; then
echo "ERROR: The following required secrets are missing in Gitea:${MISSING_SECRETS}"
exit 1
fi
echo "Installing production dependencies and restarting test server..."
cd /var/www/flyer-crawler-test.projectium.com
npm install --omit=dev # Install only production dependencies
npm install --omit=dev
# --- Cleanup Errored Processes ---
echo "Cleaning up errored or stopped PM2 processes..."
node -e "const exec = require('child_process').execSync; try { const list = JSON.parse(exec('pm2 jlist').toString()); list.forEach(p => { if (p.pm2_env.status === 'errored' || p.pm2_env.status === 'stopped') { console.log('Deleting ' + p.pm2_env.status + ' process: ' + p.name + ' (' + p.pm2_env.pm_id + ')'); try { exec('pm2 delete ' + p.pm2_env.pm_id); } catch(e) { console.error('Failed to delete ' + p.pm2_env.pm_id); } } }); } catch (e) { console.error('Error cleaning up processes:', e); }"
# Use `startOrReload` with the ecosystem file. This is the standard, idempotent way to deploy.
# It will START the process if it's not running, or RELOAD it if it is.
# We also add `&& pm2 save` to persist the process list across server reboots.
pm2 startOrReload ecosystem.config.cjs --env test && pm2 save
pm2 startOrReload ecosystem.config.cjs --env test --update-env && pm2 save
echo "Test backend server reloaded successfully."
# After a successful deployment, update the schema hash in the database.
@@ -359,7 +435,12 @@ jobs:
echo "Updating schema hash in test database..."
CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c \
"INSERT INTO public.schema_info (environment, schema_hash, deployed_at) VALUES ('test', '$CURRENT_HASH', NOW())
"CREATE TABLE IF NOT EXISTS public.schema_info (
environment VARCHAR(50) PRIMARY KEY,
schema_hash VARCHAR(64) NOT NULL,
deployed_at TIMESTAMP DEFAULT NOW()
);
INSERT INTO public.schema_info (environment, schema_hash, deployed_at) VALUES ('test', '$CURRENT_HASH', NOW())
ON CONFLICT (environment) DO UPDATE SET schema_hash = EXCLUDED.schema_hash, deployed_at = NOW();"
# Verify the hash was updated
@@ -381,7 +462,17 @@ jobs:
run: |
echo "--- Displaying recent PM2 logs for flyer-crawler-api-test ---"
# After a reload, the server restarts. We'll show the last 20 lines of the log to see the startup messages.
sleep 5 # Wait a few seconds for the app to start and log its output.
pm2 describe flyer-crawler-api-test || echo "Could not find test pm2 process."
pm2 logs flyer-crawler-api-test --lines 20 --nostream || echo "Could not find test pm2 process."
pm2 env flyer-crawler-api-test || echo "Could not find test pm2 process."
sleep 5
# Resolve the PM2 ID dynamically to ensure we target the correct process
PM2_ID=$(pm2 jlist | node -e "try { const list = JSON.parse(require('fs').readFileSync(0, 'utf-8')); const app = list.find(p => p.name === 'flyer-crawler-api-test'); console.log(app ? app.pm2_env.pm_id : ''); } catch(e) { console.log(''); }")
if [ -n "$PM2_ID" ]; then
echo "Found process ID: $PM2_ID"
pm2 describe "$PM2_ID" || echo "Failed to describe process $PM2_ID"
pm2 logs "$PM2_ID" --lines 20 --nostream || echo "Failed to get logs for $PM2_ID"
pm2 env "$PM2_ID" || echo "Failed to get env for $PM2_ID"
else
echo "Could not find process 'flyer-crawler-api-test' in pm2 list."
pm2 list # Fallback to listing everything to help debug
fi

View File

@@ -60,4 +60,4 @@ jobs:
uses: actions/upload-artifact@v3
with:
name: database-backup
path: ${{ env.backup_filename }}
path: ${{ env.backup_filename }}

View File

@@ -144,4 +144,4 @@ jobs:
find "$APP_PATH/flyer-images" -type f -name '*-test-flyer-image.*' -delete
find "$APP_PATH/flyer-images/icons" -type f -name '*-test-flyer-image.*' -delete
find "$APP_PATH/flyer-images/archive" -mindepth 1 -maxdepth 1 -type f -delete || echo "Archive directory not found, skipping."
echo "✅ Flyer asset directories cleared."
echo "✅ Flyer asset directories cleared."

View File

@@ -130,4 +130,4 @@ jobs:
find "$APP_PATH/flyer-images" -mindepth 1 -type f -delete
find "$APP_PATH/flyer-images/icons" -mindepth 1 -type f -delete
find "$APP_PATH/flyer-images/archive" -mindepth 1 -type f -delete || echo "Archive directory not found, skipping."
echo "✅ Test flyer asset directories cleared."
echo "✅ Test flyer asset directories cleared."

View File

@@ -25,7 +25,7 @@ jobs:
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
BACKUP_DIR: "/var/www/backups" # Define a dedicated directory for backups
BACKUP_DIR: '/var/www/backups' # Define a dedicated directory for backups
steps:
- name: Validate Secrets and Inputs
@@ -92,4 +92,4 @@ jobs:
echo "Restarting application server..."
cd /var/www/flyer-crawler.projectium.com
pm2 startOrReload ecosystem.config.cjs --env production && pm2 save
echo "✅ Application server restarted."
echo "✅ Application server restarted."

View File

@@ -0,0 +1,185 @@
# .gitea/workflows/manual-deploy-major.yml
#
# This workflow provides a MANUAL trigger to perform a MAJOR version bump
# and deploy the application to the PRODUCTION environment.
name: Manual - Deploy Major Version to Production
on:
workflow_dispatch:
inputs:
confirmation:
description: 'Type "deploy-major-to-prod" to confirm you want to deploy a new major version.'
required: true
default: 'do-not-run'
force_reload:
description: 'Force PM2 reload even if version matches (true/false).'
required: false
type: boolean
default: false
jobs:
deploy-production-major:
runs-on: projectium.com
steps:
- name: Verify Confirmation Phrase
run: |
if [ "${{ gitea.event.inputs.confirmation }}" != "deploy-major-to-prod" ]; then
echo "ERROR: Confirmation phrase did not match. Aborting deployment."
exit 1
fi
echo "✅ Confirmation accepted. Proceeding with major version production deployment."
- name: Checkout Code from 'main' branch
uses: actions/checkout@v3
with:
ref: 'main' # Explicitly check out the main branch for production deployment
fetch-depth: 0
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '20'
cache: 'npm'
cache-dependency-path: '**/package-lock.json'
- name: Install Dependencies
run: npm ci
- name: Bump Major Version and Push
run: |
# Configure git for the commit.
git config --global user.name 'Gitea Actions'
git config --global user.email 'actions@gitea.projectium.com'
# Bump the major version number. This creates a new commit and a new tag.
# The commit message includes [skip ci] to prevent this push from triggering another workflow run.
npm version major -m "ci: Bump version to %s for major release [skip ci]"
# Push the new commit and the new tag back to the main branch.
git push --follow-tags
- name: Check for Production Database Schema Changes
env:
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
run: |
if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
echo "ERROR: One or more production database secrets (DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE_PROD) are not set."
exit 1
fi
echo "--- Checking for production schema changes ---"
CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
echo "Current Git Schema Hash: $CURRENT_HASH"
# The psql command will now fail the step if the query errors (e.g., column missing), preventing deployment on a bad schema.
DEPLOYED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'production';" -t -A)
echo "Deployed DB Schema Hash: $DEPLOYED_HASH"
if [ -z "$DEPLOYED_HASH" ]; then
echo "WARNING: No schema hash found in the production database. This is expected for a first-time deployment."
elif [ "$CURRENT_HASH" != "$DEPLOYED_HASH" ]; then
echo "ERROR: Database schema mismatch detected! A manual database migration is required."
exit 1
else
echo "✅ Schema is up to date. No changes detected."
fi
- name: Build React Application for Production
run: |
if [ -z "${{ secrets.VITE_GOOGLE_GENAI_API_KEY }}" ]; then
echo "ERROR: The VITE_GOOGLE_GENAI_API_KEY secret is not set."
exit 1
fi
GITEA_SERVER_URL="https://gitea.projectium.com"
COMMIT_MESSAGE=$(git log -1 --grep="\[skip ci\]" --invert-grep --pretty=%s)
PACKAGE_VERSION=$(node -p "require('./package.json').version")
VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD):$PACKAGE_VERSION" \
VITE_APP_COMMIT_URL="$GITEA_SERVER_URL/${{ gitea.repository }}/commit/${{ gitea.sha }}" \
VITE_APP_COMMIT_MESSAGE="$COMMIT_MESSAGE" \
VITE_API_BASE_URL=/api VITE_API_KEY=${{ secrets.VITE_GOOGLE_GENAI_API_KEY }} npm run build
- name: Deploy Application to Production Server
run: |
echo "Deploying application files to /var/www/flyer-crawler.projectium.com..."
APP_PATH="/var/www/flyer-crawler.projectium.com"
mkdir -p "$APP_PATH"
mkdir -p "$APP_PATH/flyer-images/icons" "$APP_PATH/flyer-images/archive"
rsync -avz --delete --exclude 'node_modules' --exclude '.git' --exclude 'dist' --exclude 'flyer-images' ./ "$APP_PATH/"
rsync -avz dist/ "$APP_PATH"
echo "Application deployment complete."
- name: Install Backend Dependencies and Restart Production Server
env:
# --- Production Secrets Injection ---
DB_HOST: ${{ secrets.DB_HOST }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
REDIS_URL: 'redis://localhost:6379'
REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_PROD }}
FRONTEND_URL: 'https://flyer-crawler.projectium.com'
JWT_SECRET: ${{ secrets.JWT_SECRET }}
GEMINI_API_KEY: ${{ secrets.VITE_GOOGLE_GENAI_API_KEY }}
GOOGLE_MAPS_API_KEY: ${{ secrets.GOOGLE_MAPS_API_KEY }}
SMTP_HOST: 'localhost'
SMTP_PORT: '1025'
SMTP_SECURE: 'false'
SMTP_USER: ''
SMTP_PASS: ''
SMTP_FROM_EMAIL: 'noreply@flyer-crawler.projectium.com'
run: |
if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
echo "ERROR: One or more production database secrets (DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE_PROD) are not set."
exit 1
fi
echo "Installing production dependencies and restarting server..."
cd /var/www/flyer-crawler.projectium.com
npm install --omit=dev
# --- Cleanup Errored Processes ---
echo "Cleaning up errored or stopped PM2 processes..."
node -e "const exec = require('child_process').execSync; try { const list = JSON.parse(exec('pm2 jlist').toString()); list.forEach(p => { if (p.pm2_env.status === 'errored' || p.pm2_env.status === 'stopped') { console.log('Deleting ' + p.pm2_env.status + ' process: ' + p.name + ' (' + p.pm2_env.pm_id + ')'); try { exec('pm2 delete ' + p.pm2_env.pm_id); } catch(e) { console.error('Failed to delete ' + p.pm2_env.pm_id); } } }); } catch (e) { console.error('Error cleaning up processes:', e); }"
# --- Version Check Logic ---
# Get the version from the newly deployed package.json
NEW_VERSION=$(node -p "require('./package.json').version")
echo "Deployed Package Version: $NEW_VERSION"
# Get the running version from PM2 for the main API process
# We use a small node script to parse the JSON output from pm2 jlist
RUNNING_VERSION=$(pm2 jlist | node -e "try { const list = JSON.parse(require('fs').readFileSync(0, 'utf-8')); const app = list.find(p => p.name === 'flyer-crawler-api'); console.log(app ? app.pm2_env.version : ''); } catch(e) { console.log(''); }")
echo "Running PM2 Version: $RUNNING_VERSION"
if [ "${{ gitea.event.inputs.force_reload }}" == "true" ] || [ "$NEW_VERSION" != "$RUNNING_VERSION" ] || [ -z "$RUNNING_VERSION" ]; then
if [ "${{ gitea.event.inputs.force_reload }}" == "true" ]; then
echo "Force reload triggered by manual input. Reloading PM2..."
else
echo "Version mismatch (Running: $RUNNING_VERSION -> Deployed: $NEW_VERSION) or app not running. Reloading PM2..."
fi
pm2 startOrReload ecosystem.config.cjs --env production --update-env && pm2 save
echo "Production backend server reloaded successfully."
else
echo "Version $NEW_VERSION is already running. Skipping PM2 reload."
fi
echo "Updating schema hash in production database..."
CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c \
"INSERT INTO public.schema_info (environment, schema_hash, deployed_at) VALUES ('production', '$CURRENT_HASH', NOW())
ON CONFLICT (environment) DO UPDATE SET schema_hash = EXCLUDED.schema_hash, deployed_at = NOW();"
UPDATED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'production';" -t -A)
if [ "$CURRENT_HASH" = "$UPDATED_HASH" ]; then
echo "✅ Schema hash successfully updated in the database to: $UPDATED_HASH"
else
echo "ERROR: Failed to update schema hash in the database."
fi
- name: Show PM2 Environment for Production
run: |
echo "--- Displaying recent PM2 logs for flyer-crawler-api ---"
sleep 5
pm2 describe flyer-crawler-api || echo "Could not find production pm2 process."
pm2 logs flyer-crawler-api --lines 20 --nostream || echo "Could not find production pm2 process."
pm2 env flyer-crawler-api || echo "Could not find production pm2 process."

View File

@@ -6,4 +6,4 @@
"tabWidth": 2,
"useTabs": false,
"endOfLine": "auto"
}
}

31
Dockerfile.dev Normal file
View File

@@ -0,0 +1,31 @@
# Use Ubuntu 22.04 (LTS) as the base image to match production
FROM ubuntu:22.04
# Set environment variables to non-interactive to avoid prompts during installation
ENV DEBIAN_FRONTEND=noninteractive
# Update package lists and install essential tools
# - curl: for downloading Node.js setup script
# - git: for version control operations
# - build-essential: for compiling native Node.js modules (node-gyp)
# - python3: required by some Node.js build tools
RUN apt-get update && apt-get install -y \
curl \
git \
build-essential \
python3 \
&& rm -rf /var/lib/apt/lists/*
# Install Node.js 20.x (LTS) from NodeSource
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
&& apt-get install -y nodejs
# Set the working directory inside the container
WORKDIR /app
# Set default environment variables for development
ENV NODE_ENV=development
ENV NODE_OPTIONS='--max-old-space-size=8192'
# Default command keeps the container running so you can attach to it
CMD ["bash"]

130
README.md
View File

@@ -2,7 +2,7 @@
Flyer Crawler is a web application that uses the Google Gemini AI to extract, analyze, and manage data from grocery store flyers. Users can upload flyer images or PDFs, and the application will automatically identify items, prices, and sale dates, storing the structured data in a PostgreSQL database for historical analysis, price tracking, and personalized deal alerts.
We are working on an app to help people save money, by finding good deals that are only advertized in store flyers/ads. So, the primary purpose of the site is to make uploading flyers as easy as possible and as accurate as possible, and to store peoples needs, so sales can be matched to needs.
We are working on an app to help people save money, by finding good deals that are only advertized in store flyers/ads. So, the primary purpose of the site is to make uploading flyers as easy as possible and as accurate as possible, and to store peoples needs, so sales can be matched to needs.
## Features
@@ -45,9 +45,9 @@ This project is configured to run in a CI/CD environment and does not use `.env`
1. **Set up a PostgreSQL database instance.**
2. **Run the Database Schema**:
- Connect to your database using a tool like `psql` or DBeaver.
- Open `sql/schema.sql.txt`, copy its entire contents, and execute it against your database.
- This will create all necessary tables, functions, and relationships.
- Connect to your database using a tool like `psql` or DBeaver.
- Open `sql/schema.sql.txt`, copy its entire contents, and execute it against your database.
- This will create all necessary tables, functions, and relationships.
### Step 2: Install Dependencies and Run the Application
@@ -79,11 +79,11 @@ sudo nano /etc/nginx/mime.types
change
application/javascript js;
application/javascript js;
TO
application/javascript js mjs;
application/javascript js mjs;
RESTART NGINX
@@ -95,7 +95,7 @@ actually the proper change was to do this in the /etc/nginx/sites-available/flye
## for OAuth
1. Get Google OAuth Credentials
This is a crucial step that you must do outside the codebase:
This is a crucial step that you must do outside the codebase:
Go to the Google Cloud Console.
@@ -112,7 +112,7 @@ Under Authorized redirect URIs, click ADD URI and enter the URL where Google wil
Click Create. You will be given a Client ID and a Client Secret.
2. Get GitHub OAuth Credentials
You'll need to obtain a Client ID and Client Secret from GitHub:
You'll need to obtain a Client ID and Client Secret from GitHub:
Go to your GitHub profile settings.
@@ -133,21 +133,23 @@ You will be given a Client ID and a Client Secret.
psql -h localhost -U flyer_crawler_user -d "flyer-crawler-prod" -W
## postgis
flyer-crawler-prod=> SELECT version();
version
------------------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 14.19 (Ubuntu 14.19-0ubuntu0.22.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0, 64-bit
version
---
PostgreSQL 14.19 (Ubuntu 14.19-0ubuntu0.22.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0, 64-bit
(1 row)
flyer-crawler-prod=> SELECT PostGIS_Full_Version();
postgis_full_version
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
POSTGIS="3.2.0 c3e3cc0" [EXTENSION] PGSQL="140" GEOS="3.10.2-CAPI-1.16.0" PROJ="8.2.1" LIBXML="2.9.12" LIBJSON="0.15" LIBPROTOBUF="1.3.3" WAGYU="0.5.0 (Internal)"
(1 row)
postgis_full_version
---
POSTGIS="3.2.0 c3e3cc0" [EXTENSION] PGSQL="140" GEOS="3.10.2-CAPI-1.16.0" PROJ="8.2.1" LIBXML="2.9.12" LIBJSON="0.15" LIBPROTOBUF="1.3.3" WAGYU="0.5.0 (Internal)"
(1 row)
## production postgres setup
@@ -201,9 +203,13 @@ Step 4: Seed the Admin Account (If Needed)
Your application has a separate script to create the initial admin user. To run it, you must first set the required environment variables in your shell session.
bash
# Set variables for the current session
export DB_USER=flyer_crawler_user DB_PASSWORD=your_password DB_NAME="flyer-crawler-prod" ...
# Run the seeding script
npx tsx src/db/seed_admin_account.ts
Your production database is now ready!
@@ -284,8 +290,6 @@ Test Execution: Your tests run against this clean, isolated schema.
This approach is faster, more reliable, and removes the need for sudo access within the CI pipeline.
gitea-runner@projectium:~$ pm2 install pm2-logrotate
[PM2][Module] Installing NPM pm2-logrotate module
[PM2][Module] Calling [NPM] to install pm2-logrotate ...
@@ -293,7 +297,7 @@ gitea-runner@projectium:~$ pm2 install pm2-logrotate
added 161 packages in 5s
21 packages are looking for funding
run `npm fund` for details
run `npm fund` for details
npm notice
npm notice New patch version of npm available! 11.6.3 -> 11.6.4
npm notice Changelog: https://github.com/npm/cli/releases/tag/v11.6.4
@@ -308,23 +312,23 @@ $ pm2 set pm2-logrotate:retain 30
$ pm2 set pm2-logrotate:compress false
$ pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
$ pm2 set pm2-logrotate:workerInterval 30
$ pm2 set pm2-logrotate:rotateInterval 0 0 * * *
$ pm2 set pm2-logrotate:rotateInterval 0 0 \* \* _
$ pm2 set pm2-logrotate:rotateModule true
Modules configuration. Copy/Paste line to edit values.
[PM2][Module] Module successfully installed and launched
[PM2][Module] Checkout module options: `$ pm2 conf`
┌────┬───────────────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├────┼───────────────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 2 │ flyer-crawler-analytics-worker │ default │ 0.0.0 │ fork │ 3846981 │ 7m │ 5 │ online │ 0% │ 55.8mb │ git… │ disabled │
│ 11 │ flyer-crawler-api │ default │ 0.0.0 │ fork │ 3846987 │ 7m │ 0 │ online │ 0% │ 59.0mb │ git… │ disabled │
│ 12 │ flyer-crawler-worker │ default │ 0.0.0 │ fork │ 3846988 │ 7m │ 0 │ online │ 0% │ 54.2mb │ git… │ disabled │
│ 2 │ flyer-crawler-analytics-worker │ default │ 0.0.0 │ fork │ 3846981 │ 7m │ 5 │ online │ 0% │ 55.8mb │ git… │ disabled │
│ 11 │ flyer-crawler-api │ default │ 0.0.0 │ fork │ 3846987 │ 7m │ 0 │ online │ 0% │ 59.0mb │ git… │ disabled │
│ 12 │ flyer-crawler-worker │ default │ 0.0.0 │ fork │ 3846988 │ 7m │ 0 │ online │ 0% │ 54.2mb │ git… │ disabled │
└────┴───────────────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Module
┌────┬──────────────────────────────┬───────────────┬──────────┬──────────┬──────┬──────────┬──────────┬──────────┐
│ id │ module │ version │ pid │ status │ ↺ │ cpu │ mem │ user
│ id │ module │ version │ pid │ status │ ↺ │ cpu │ mem │ user │
├────┼──────────────────────────────┼───────────────┼──────────┼──────────┼──────┼──────────┼──────────┼──────────┤
│ 13 │ pm2-logrotate │ 3.0.0 │ 3848878 │ online │ 0 │ 0% │ 20.1mb │ git… │
│ 13 │ pm2-logrotate │ 3.0.0 │ 3848878 │ online │ 0 │ 0% │ 20.1mb │ git… │
└────┴──────────────────────────────┴───────────────┴──────────┴──────────┴──────┴──────────┴──────────┴──────────┘
gitea-runner@projectium:~$ pm2 set pm2-logrotate:max_size 10M
[PM2] Module pm2-logrotate restarted
@@ -335,7 +339,7 @@ $ pm2 set pm2-logrotate:retain 30
$ pm2 set pm2-logrotate:compress false
$ pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
$ pm2 set pm2-logrotate:workerInterval 30
$ pm2 set pm2-logrotate:rotateInterval 0 0 * * *
$ pm2 set pm2-logrotate:rotateInterval 0 0 _ \* _
$ pm2 set pm2-logrotate:rotateModule true
gitea-runner@projectium:~$ pm2 set pm2-logrotate:retain 14
[PM2] Module pm2-logrotate restarted
@@ -346,33 +350,31 @@ $ pm2 set pm2-logrotate:retain 14
$ pm2 set pm2-logrotate:compress false
$ pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
$ pm2 set pm2-logrotate:workerInterval 30
$ pm2 set pm2-logrotate:rotateInterval 0 0 * * *
$ pm2 set pm2-logrotate:rotateInterval 0 0 _ \* \*
$ pm2 set pm2-logrotate:rotateModule true
gitea-runner@projectium:~$
## dev server setup:
Here are the steps to set up the development environment on Windows using Podman with an Ubuntu container:
1. Install Prerequisites on Windows
Install WSL 2: Podman on Windows relies on the Windows Subsystem for Linux. Install it by running wsl --install in an administrator PowerShell.
Install Podman Desktop: Download and install Podman Desktop for Windows.
Install WSL 2: Podman on Windows relies on the Windows Subsystem for Linux. Install it by running wsl --install in an administrator PowerShell.
Install Podman Desktop: Download and install Podman Desktop for Windows.
2. Set Up Podman
Initialize Podman: Launch Podman Desktop. It will automatically set up its WSL 2 machine.
Start Podman: Ensure the Podman machine is running from the Podman Desktop interface.
Initialize Podman: Launch Podman Desktop. It will automatically set up its WSL 2 machine.
Start Podman: Ensure the Podman machine is running from the Podman Desktop interface.
3. Set Up the Ubuntu Container
- Pull Ubuntu Image: Open a PowerShell or command prompt and pull the latest Ubuntu image:
podman pull ubuntu:latest
- Create a Podman Volume: Create a volume to persist node_modules and avoid installing them every time the container starts.
podman volume create node_modules_cache
- Run the Ubuntu Container: Start a new container with the project directory mounted and the necessary ports forwarded.
- Open a terminal in your project's root directory on Windows.
- Run the following command, replacing D:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com with the full path to your project:
- Pull Ubuntu Image: Open a PowerShell or command prompt and pull the latest Ubuntu image:
podman pull ubuntu:latest
- Create a Podman Volume: Create a volume to persist node_modules and avoid installing them every time the container starts.
podman volume create node_modules_cache
- Run the Ubuntu Container: Start a new container with the project directory mounted and the necessary ports forwarded.
- Open a terminal in your project's root directory on Windows.
- Run the following command, replacing D:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com with the full path to your project:
podman run -it -p 3001:3001 -p 5173:5173 --name flyer-dev -v "D:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com:/app" -v "node_modules_cache:/app/node_modules" ubuntu:latest
@@ -383,46 +385,40 @@ podman run -it -p 3001:3001 -p 5173:5173 --name flyer-dev -v "D:\gitea\flyer-cra
-v "node_modules_cache:/app/node_modules": Mounts the named volume for node_modules.
4. Configure the Ubuntu Environment
You are now inside the Ubuntu container's shell.
You are now inside the Ubuntu container's shell.
- Update Package Lists:
apt-get update
- Install Dependencies: Install curl, git, and nodejs (which includes npm).
apt-get install -y curl git
curl -sL https://deb.nodesource.com/setup_20.x | bash -
apt-get install -y nodejs
- Navigate to Project Directory:
cd /app
- Update Package Lists:
apt-get update
- Install Dependencies: Install curl, git, and nodejs (which includes npm).
apt-get install -y curl git
curl -sL https://deb.nodesource.com/setup_20.x | bash -
apt-get install -y nodejs
- Navigate to Project Directory:
cd /app
- Install Project Dependencies:
npm install
- Install Project Dependencies:
npm install
5. Run the Development Server
- Start the Application:
npm run dev
npm run dev
6. Accessing the Application
- Frontend: Open your browser and go to http://localhost:5173.
- Backend: The frontend will make API calls to http://localhost:3001.
- Frontend: Open your browser and go to http://localhost:5173.
- Backend: The frontend will make API calls to http://localhost:3001.
Managing the Environment
- Stopping the Container: Press Ctrl+C in the container terminal, then type exit.
- Restarting the Container:
podman start -a -i flyer-dev
- Stopping the Container: Press Ctrl+C in the container terminal, then type exit.
- Restarting the Container:
podman start -a -i flyer-dev
## for me:
cd /mnt/d/gitea/flyer-crawler.projectium.com/flyer-crawler.projectium.com
podman run -it -p 3001:3001 -p 5173:5173 --name flyer-dev -v "$(pwd):/app" -v "node_modules_cache:/app/node_modules" ubuntu:latest
rate limiting
respect the AI service's rate limits, making it more stable and robust. You can adjust the GEMINI_RPM environment variable in your production environment as needed without changing the code.
respect the AI service's rate limits, making it more stable and robust. You can adjust the GEMINI_RPM environment variable in your production environment as needed without changing the code.

52
compose.dev.yml Normal file
View File

@@ -0,0 +1,52 @@
version: '3.8'
services:
app:
container_name: flyer-crawler-dev
build:
context: .
dockerfile: Dockerfile.dev
volumes:
# Mount the current directory to /app in the container
- .:/app
# Create a volume for node_modules to avoid conflicts with Windows host
# and improve performance.
- node_modules_data:/app/node_modules
ports:
- '3000:3000' # Frontend (Vite default)
- '3001:3001' # Backend API
environment:
- NODE_ENV=development
- DB_HOST=postgres
- DB_USER=postgres
- DB_PASSWORD=postgres
- DB_NAME=flyer_crawler_dev
- REDIS_URL=redis://redis:6379
# Add other secrets here or use a .env file
depends_on:
- postgres
- redis
# Keep container running so VS Code can attach
command: tail -f /dev/null
postgres:
image: docker.io/library/postgis/postgis:15-3.4
container_name: flyer-crawler-postgres
ports:
- '5432:5432'
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: flyer_crawler_dev
volumes:
- postgres_data:/var/lib/postgresql/data
redis:
image: docker.io/library/redis:alpine
container_name: flyer-crawler-redis
ports:
- '6379:6379'
volumes:
postgres_data:
node_modules_data:

View File

@@ -34,7 +34,7 @@ We will adopt a strict, consistent error-handling contract for the service and r
**Robustness**: Eliminates an entire class of bugs where `undefined` is passed to `res.json()`, preventing incorrect `500` errors.
**Consistency & Predictability**: All data-fetching methods now have a predictable contract. They either return the expected data or throw a specific, typed error.
**Developer Experience**: Route handlers become simpler, cleaner, and easier to write correctly. The cognitive load on developers is reduced as they no longer need to remember to check for `undefined`.
**Improved Testability**: Tests become more reliable and realistic. Mocks can now throw the *exact* error type (`new NotFoundError()`) that the real implementation would, ensuring tests accurately reflect the application's behavior.
**Improved Testability**: Tests become more reliable and realistic. Mocks can now throw the _exact_ error type (`new NotFoundError()`) that the real implementation would, ensuring tests accurately reflect the application's behavior.
**Centralized Control**: Error-to-HTTP-status logic is centralized in the `errorHandler` middleware, making it easy to manage and modify error responses globally.
### Negative

View File

@@ -10,21 +10,19 @@ Following the standardization of error handling in ADR-001, the next most common
This manual approach has several drawbacks:
**Repetitive Boilerplate**: The `try/catch/finally` block for transaction management is duplicated across multiple files.
**Error-Prone**: It is easy to forget to `client.release()` in all code paths, which can lead to connection pool exhaustion and bring down the application.
3. **Poor Composability**: It is difficult to compose multiple repository methods into a single, atomic "Unit of Work". For example, a service function that needs to update a user's points and create a budget in a single transaction cannot easily do so if both underlying repository methods create their own transactions.
**Error-Prone**: It is easy to forget to `client.release()` in all code paths, which can lead to connection pool exhaustion and bring down the application. 3. **Poor Composability**: It is difficult to compose multiple repository methods into a single, atomic "Unit of Work". For example, a service function that needs to update a user's points and create a budget in a single transaction cannot easily do so if both underlying repository methods create their own transactions.
## Decision
We will implement a standardized "Unit of Work" pattern through a high-level `withTransaction` helper function. This function will abstract away the complexity of transaction management.
1. **`withTransaction` Helper**: A new helper function, `withTransaction<T>(callback: (client: PoolClient) => Promise<T>): Promise<T>`, will be created. This function will be responsible for:
* Acquiring a client from the database pool.
* Starting a transaction (`BEGIN`).
* Executing the `callback` function, passing the transactional client to it.
* If the callback succeeds, it will `COMMIT` the transaction.
* If the callback throws an error, it will `ROLLBACK` the transaction and re-throw the error.
* In all cases, it will `RELEASE` the client back to the pool.
- Acquiring a client from the database pool.
- Starting a transaction (`BEGIN`).
- Executing the `callback` function, passing the transactional client to it.
- If the callback succeeds, it will `COMMIT` the transaction.
- If the callback throws an error, it will `ROLLBACK` the transaction and re-throw the error.
- In all cases, it will `RELEASE` the client back to the pool.
2. **Repository Method Signature**: Repository methods that need to be part of a transaction will be updated to optionally accept a `PoolClient` in their constructor or as a method parameter. By default, they will use the global pool. When called from within a `withTransaction` block, they will be passed the transactional client.
3. **Service Layer Orchestration**: Service-layer functions that orchestrate multi-step operations will use `withTransaction` to ensure atomicity. They will instantiate or call repository methods, providing them with the transactional client from the callback.
@@ -40,7 +38,7 @@ async function registerUserAndCreateDefaultList(userData) {
const shoppingRepo = new ShoppingRepository(client);
const newUser = await userRepo.createUser(userData);
await shoppingRepo.createShoppingList(newUser.user_id, "My First List");
await shoppingRepo.createShoppingList(newUser.user_id, 'My First List');
return newUser;
});

View File

@@ -20,8 +20,8 @@ We will adopt a schema-based approach for input validation using the `zod` libra
1. **Adopt `zod` for Schema Definition**: We will use `zod` to define clear, type-safe schemas for the `params`, `query`, and `body` of each API request. `zod` provides powerful and declarative validation rules and automatically infers TypeScript types.
2. **Create a Reusable Validation Middleware**: A generic `validateRequest(schema)` middleware will be created. This middleware will take a `zod` schema, parse the incoming request against it, and handle success and error cases.
* On successful validation, the parsed and typed data will be attached to the `req` object (e.g., `req.body` will be replaced with the parsed body), and `next()` will be called.
* On validation failure, the middleware will call `next()` with a custom `ValidationError` containing a structured list of issues, which `ADR-001`'s `errorHandler` can then format into a user-friendly `400 Bad Request` response.
- On successful validation, the parsed and typed data will be attached to the `req` object (e.g., `req.body` will be replaced with the parsed body), and `next()` will be called.
- On validation failure, the middleware will call `next()` with a custom `ValidationError` containing a structured list of issues, which `ADR-001`'s `errorHandler` can then format into a user-friendly `400 Bad Request` response.
3. **Refactor Routes**: All route handlers will be refactored to use this new middleware, removing all manual validation logic.
@@ -46,18 +46,18 @@ const getFlyerSchema = z.object({
type GetFlyerRequest = z.infer<typeof getFlyerSchema>;
// 3. Apply the middleware and use an inline cast for the request
router.get('/:id', validateRequest(getFlyerSchema), (async (req, res, next) => {
// Cast 'req' to the inferred type.
// This provides full type safety for params, query, and body.
const { params } = req as unknown as GetFlyerRequest;
router.get('/:id', validateRequest(getFlyerSchema), async (req, res, next) => {
// Cast 'req' to the inferred type.
// This provides full type safety for params, query, and body.
const { params } = req as unknown as GetFlyerRequest;
try {
const flyer = await db.flyerRepo.getFlyerById(params.id); // params.id is 'number'
res.json(flyer);
} catch (error) {
next(error);
}
}));
try {
const flyer = await db.flyerRepo.getFlyerById(params.id); // params.id is 'number'
res.json(flyer);
} catch (error) {
next(error);
}
});
```
## Consequences

View File

@@ -20,9 +20,9 @@ We will adopt a standardized, application-wide structured logging policy. All lo
**Request-Scoped Logger with Context**: We will create a middleware that runs at the beginning of the request lifecycle. This middleware will:
* Generate a unique `request_id` for each incoming request.
* Create a request-scoped logger instance (a "child logger") that automatically includes the `request_id`, `user_id` (if authenticated), and `ip_address` in every log message it generates.
* Attach this child logger to the `req` object (e.g., `req.log`).
- Generate a unique `request_id` for each incoming request.
- Create a request-scoped logger instance (a "child logger") that automatically includes the `request_id`, `user_id` (if authenticated), and `ip_address` in every log message it generates.
- Attach this child logger to the `req` object (e.g., `req.log`).
**Mandatory Use of Request-Scoped Logger**: All route handlers and any service functions called by them **MUST** use the request-scoped logger (`req.log`) instead of the global logger instance. This ensures all logs for a given request are automatically correlated.
@@ -32,9 +32,9 @@ We will adopt a standardized, application-wide structured logging policy. All lo
**Standardized Logging Practices**:
**INFO**: Log key business events, such as `User logged in` or `Flyer processed`.
**WARN**: Log recoverable errors or unusual situations that do not break the request, such as `Client Error: 404 on GET /api/non-existent-route` or `Retrying failed database connection`.
**ERROR**: Log only unhandled or server-side errors that cause a request to fail (typically handled by the `errorHandler`). Avoid logging expected client errors (like 4xx) at this level.
**DEBUG**: Log detailed diagnostic information useful during development, such as function entry/exit points or variable states.
**WARN**: Log recoverable errors or unusual situations that do not break the request, such as `Client Error: 404 on GET /api/non-existent-route` or `Retrying failed database connection`.
**ERROR**: Log only unhandled or server-side errors that cause a request to fail (typically handled by the `errorHandler`). Avoid logging expected client errors (like 4xx) at this level.
**DEBUG**: Log detailed diagnostic information useful during development, such as function entry/exit points or variable states.
### Example Usage
@@ -59,15 +59,15 @@ export const requestLogger = (req, res, next) => {
// In a route handler:
router.get('/:id', async (req, res, next) => {
// Use the request-scoped logger
req.log.info({ flyerId: req.params.id }, 'Fetching flyer by ID');
try {
// ... business logic ...
res.json(flyer);
} catch (error) {
// The error itself will be logged with full context by the errorHandler
next(error);
}
// Use the request-scoped logger
req.log.info({ flyerId: req.params.id }, 'Fetching flyer by ID');
try {
// ... business logic ...
res.json(flyer);
} catch (error) {
// The error itself will be logged with full context by the errorHandler
next(error);
}
});
```

View File

@@ -14,5 +14,5 @@ We will formalize a centralized Role-Based Access Control (RBAC) or Attribute-Ba
## Consequences
* **Positive**: Ensures authorization logic is consistent, easy to audit, and decoupled from business logic. Improves security by centralizing access control.
* **Negative**: Requires a significant refactoring effort to integrate the new authorization system across all protected routes and features. Introduces a new dependency if an external library is chosen.
- **Positive**: Ensures authorization logic is consistent, easy to audit, and decoupled from business logic. Improves security by centralizing access control.
- **Negative**: Requires a significant refactoring effort to integrate the new authorization system across all protected routes and features. Introduces a new dependency if an external library is chosen.

View File

@@ -14,5 +14,5 @@ We will establish a formal Design System and Component Library. This will involv
## Consequences
* **Positive**: Ensures a consistent and high-quality user interface. Accelerates frontend development by providing reusable, well-documented components. Improves maintainability and reduces technical debt.
* **Negative**: Requires an initial investment in setting up Storybook and migrating existing components. Adds a new dependency and a new workflow for frontend development.
- **Positive**: Ensures a consistent and high-quality user interface. Accelerates frontend development by providing reusable, well-documented components. Improves maintainability and reduces technical debt.
- **Negative**: Requires an initial investment in setting up Storybook and migrating existing components. Adds a new dependency and a new workflow for frontend development.

View File

@@ -14,5 +14,5 @@ We will adopt a dedicated database migration tool, such as **`node-pg-migrate`**
## Consequences
* **Positive**: Provides a safe, repeatable, and reversible way to evolve the database schema. Improves team collaboration on database changes. Reduces the risk of data loss or downtime during deployments.
* **Negative**: Requires an initial setup and learning curve for the chosen migration tool. All future schema changes must adhere to the migration workflow.
- **Positive**: Provides a safe, repeatable, and reversible way to evolve the database schema. Improves team collaboration on database changes. Reduces the risk of data loss or downtime during deployments.
- **Negative**: Requires an initial setup and learning curve for the chosen migration tool. All future schema changes must adhere to the migration workflow.

View File

@@ -14,5 +14,5 @@ We will standardize the deployment process by containerizing the application usi
## Consequences
* **Positive**: Ensures consistency between development and production environments. Simplifies the setup for new developers. Improves portability and scalability of the application.
* **Negative**: Requires learning Docker and containerization concepts. Adds `Dockerfile` and `docker-compose.yml` to the project's configuration.
- **Positive**: Ensures consistency between development and production environments. Simplifies the setup for new developers. Improves portability and scalability of the application.
- **Negative**: Requires learning Docker and containerization concepts. Adds `Dockerfile` and `docker-compose.yml` to the project's configuration.

View File

@@ -18,5 +18,5 @@ We will implement a multi-layered security approach for the API:
## Consequences
* **Positive**: Significantly improves the application's security posture against common web vulnerabilities like XSS, clickjacking, and brute-force attacks.
* **Negative**: Requires careful configuration of CORS and rate limits to avoid blocking legitimate traffic. Content-Security-Policy can be complex to configure correctly.
- **Positive**: Significantly improves the application's security posture against common web vulnerabilities like XSS, clickjacking, and brute-force attacks.
- **Negative**: Requires careful configuration of CORS and rate limits to avoid blocking legitimate traffic. Content-Security-Policy can be complex to configure correctly.

View File

@@ -14,5 +14,5 @@ We will formalize the end-to-end CI/CD process. This ADR will define the project
## Consequences
* **Positive**: Automates quality control and creates a safe, repeatable path to production. Increases development velocity and reduces deployment-related errors.
* **Negative**: Initial setup effort for the CI/CD pipeline. May slightly increase the time to merge code due to mandatory checks.
- **Positive**: Automates quality control and creates a safe, repeatable path to production. Increases development velocity and reduces deployment-related errors.
- **Negative**: Initial setup effort for the CI/CD pipeline. May slightly increase the time to merge code due to mandatory checks.

View File

@@ -14,5 +14,5 @@ We will adopt **OpenAPI (Swagger)** for API documentation. We will use tools (e.
## Consequences
* **Positive**: Creates a single source of truth for API documentation that stays in sync with the code. Enables auto-generation of client SDKs and simplifies testing.
* **Negative**: Requires developers to maintain JSDoc annotations on all routes. Adds a build step and new dependencies to the project.
- **Positive**: Creates a single source of truth for API documentation that stays in sync with the code. Enables auto-generation of client SDKs and simplifies testing.
- **Negative**: Requires developers to maintain JSDoc annotations on all routes. Adds a build step and new dependencies to the project.

View File

@@ -14,5 +14,5 @@ We will implement a formal data backup and recovery strategy. This will involve
## Consequences
* **Positive**: Protects against catastrophic data loss, ensuring business continuity. Provides a clear, tested plan for disaster recovery.
* **Negative**: Requires setup and maintenance of backup scripts and secure storage. Incurs storage costs for backup files.
- **Positive**: Protects against catastrophic data loss, ensuring business continuity. Provides a clear, tested plan for disaster recovery.
- **Negative**: Requires setup and maintenance of backup scripts and secure storage. Incurs storage costs for backup files.

View File

@@ -12,11 +12,11 @@ When the application is containerized (`ADR-014`), the container orchestrator (e
We will implement dedicated health check endpoints in the Express application.
* A **Liveness Probe** (`/api/health/live`) will return a `200 OK` to indicate the server is running. If it fails, the orchestrator should restart the container.
- A **Liveness Probe** (`/api/health/live`) will return a `200 OK` to indicate the server is running. If it fails, the orchestrator should restart the container.
* A **Readiness Probe** (`/api/health/ready`) will return a `200 OK` only if the application is ready to accept traffic (e.g., database connection is established). If it fails, the orchestrator will temporarily remove the container from the load balancer.
- A **Readiness Probe** (`/api/health/ready`) will return a `200 OK` only if the application is ready to accept traffic (e.g., database connection is established). If it fails, the orchestrator will temporarily remove the container from the load balancer.
## Consequences
* **Positive**: Enables robust, automated application lifecycle management in a containerized environment. Prevents traffic from being sent to unhealthy or uninitialized application instances.
* **Negative**: Adds a small amount of code for the health check endpoints. Requires configuration in the container orchestration layer.
- **Positive**: Enables robust, automated application lifecycle management in a containerized environment. Prevents traffic from being sent to unhealthy or uninitialized application instances.
- **Negative**: Adds a small amount of code for the health check endpoints. Requires configuration in the container orchestration layer.

View File

@@ -24,8 +24,8 @@ We will adopt a standardized, application-wide structured logging policy for all
**2. Pino-like API for Structured Logging**: The client logger mimics the `pino` API, which is the standard on the backend. It supports two primary call signatures:
* `logger.info('A simple message');`
* `logger.info({ key: 'value' }, 'A message with a structured data payload');`
- `logger.info('A simple message');`
- `logger.info({ key: 'value' }, 'A message with a structured data payload');`
The second signature, which includes a data object as the first argument, is **strongly preferred**, especially for logging errors or complex state.
@@ -79,7 +79,7 @@ describe('MyComponent', () => {
// Assert that the logger was called with the expected structure
expect(logger.error).toHaveBeenCalledWith(
expect.objectContaining({ err: expect.any(Error) }), // Check for the error object
'Failed to fetch component data' // Check for the message
'Failed to fetch component data', // Check for the message
);
});
});

View File

@@ -0,0 +1,41 @@
# ADR-027: Standardized Naming Convention for AI and Database Types
**Date**: 2026-01-05
**Status**: Accepted
## Context
The application codebase primarily follows the standard TypeScript convention of `camelCase` for variable and property names. However, the PostgreSQL database uses `snake_case` for column names. Additionally, the AI prompts are designed to extract data that maps directly to these database columns.
Attempting to enforce `camelCase` strictly across the entire stack created friction and ambiguity, particularly in the background processing pipeline where data moves from the AI model directly to the database. Developers were unsure whether to transform keys immediately upon receipt (adding overhead) or keep them as-is.
## Decision
We will adopt a hybrid naming convention strategy to explicitly distinguish between internal application state and external/persisted data formats.
1. **Database and AI Types (`snake_case`)**:
Interfaces, Type definitions, and Zod schemas that represent raw database rows or direct AI responses **MUST** use `snake_case`.
- *Examples*: `AiFlyerDataSchema`, `ExtractedFlyerItemSchema`, `FlyerInsert`.
- *Reasoning*: This avoids unnecessary mapping layers when inserting data into the database or parsing AI output. It serves as a visual cue that the data is "raw", "external", or destined for persistence.
2. **Internal Application Logic (`camelCase`)**:
Variables, function arguments, and processed data structures used within the application logic (Service layer, UI components, utility functions) **MUST** use `camelCase`.
- *Reasoning*: This adheres to standard JavaScript/TypeScript practices and maintains consistency with the rest of the ecosystem (React, etc.).
3. **Boundary Handling**:
- For background jobs that primarily move data from AI to DB, preserving `snake_case` is preferred to minimize transformation logic.
- For API responses sent to the frontend, data should generally be transformed to `camelCase` unless it is a direct dump of a database entity for a specific administrative view.
## Consequences
### Positive
- **Visual Distinction**: It is immediately obvious whether a variable holds raw data (`price_in_cents`) or processed application state (`priceInCents`).
- **Efficiency**: Reduces boilerplate code for mapping keys (e.g., `price_in_cents: data.priceInCents`) when performing bulk inserts or updates.
- **Simplicity**: AI prompts can request JSON keys that match the database schema 1:1, reducing the risk of mapping errors.
### Negative
- **Context Switching**: Developers must be mindful of the casing context.
- **Linter Configuration**: May require specific overrides or `// eslint-disable-next-line` comments if the linter is configured to strictly enforce `camelCase` everywhere.

View File

@@ -3,64 +3,159 @@
// It allows us to define all the settings for our application in one place.
// The .cjs extension is required because the project's package.json has "type": "module".
// --- Environment Variable Validation ---
const requiredSecrets = ['DB_HOST', 'JWT_SECRET', 'GEMINI_API_KEY'];
const missingSecrets = requiredSecrets.filter(key => !process.env[key]);
if (missingSecrets.length > 0) {
console.warn('\n[ecosystem.config.cjs] ⚠️ WARNING: The following environment variables are MISSING in the shell:');
missingSecrets.forEach(key => console.warn(` - ${key}`));
console.warn('[ecosystem.config.cjs] The application may crash if these are required for startup.\n');
process.exit(1); // Fail fast so PM2 doesn't attempt to start a broken app
} else {
console.log('[ecosystem.config.cjs] ✅ Critical environment variables are present.');
}
// --- Shared Environment Variables ---
// Define common variables to reduce duplication and ensure consistency across apps.
const sharedEnv = {
DB_HOST: process.env.DB_HOST,
DB_USER: process.env.DB_USER,
DB_PASSWORD: process.env.DB_PASSWORD,
DB_NAME: process.env.DB_NAME,
REDIS_URL: process.env.REDIS_URL,
REDIS_PASSWORD: process.env.REDIS_PASSWORD,
FRONTEND_URL: process.env.FRONTEND_URL,
JWT_SECRET: process.env.JWT_SECRET,
GEMINI_API_KEY: process.env.GEMINI_API_KEY,
GOOGLE_MAPS_API_KEY: process.env.GOOGLE_MAPS_API_KEY,
SMTP_HOST: process.env.SMTP_HOST,
SMTP_PORT: process.env.SMTP_PORT,
SMTP_SECURE: process.env.SMTP_SECURE,
SMTP_USER: process.env.SMTP_USER,
SMTP_PASS: process.env.SMTP_PASS,
SMTP_FROM_EMAIL: process.env.SMTP_FROM_EMAIL,
};
module.exports = {
apps: [
{
// --- API Server ---
// The name is now dynamically set based on the environment.
// This is a common pattern but requires you to call pm2 with the correct name.
// The deploy script handles this by using 'flyer-crawler-api' for prod and 'flyer-crawler-api-test' for test.
name: 'flyer-crawler-api',
// Note: The process names below are referenced in .gitea/workflows/ for status checks.
script: './node_modules/.bin/tsx',
args: 'server.ts', // tsx will execute this file
args: 'server.ts',
max_memory_restart: '500M',
// Production Optimization: Run in cluster mode to utilize all CPU cores
instances: 'max',
exec_mode: 'cluster',
kill_timeout: 5000, // Allow 5s for graceful shutdown of API requests
log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
// Restart Logic
max_restarts: 40,
exp_backoff_restart_delay: 100,
min_uptime: '10s',
// Production Environment Settings
env_production: {
NODE_ENV: 'production', // Set the Node.js environment to production
NODE_ENV: 'production',
name: 'flyer-crawler-api',
cwd: '/var/www/flyer-crawler.projectium.com',
WORKER_LOCK_DURATION: '120000',
...sharedEnv,
},
// Test Environment Settings
env_test: {
NODE_ENV: 'development', // Use 'development' for test to enable more verbose logging if needed
NODE_ENV: 'test',
name: 'flyer-crawler-api-test',
cwd: '/var/www/flyer-crawler-test.projectium.com',
WORKER_LOCK_DURATION: '120000',
...sharedEnv,
},
// Development Environment Settings
env_development: {
NODE_ENV: 'development',
name: 'flyer-crawler-api-dev',
watch: true,
ignore_watch: ['node_modules', 'logs', '*.log', 'flyer-images', '.git'],
WORKER_LOCK_DURATION: '120000',
...sharedEnv,
},
},
{
// --- General Worker ---
name: 'flyer-crawler-worker',
script: './node_modules/.bin/tsx',
args: 'src/services/queueService.server.ts', // tsx will execute this file
args: 'src/services/worker.ts',
max_memory_restart: '1G',
kill_timeout: 10000, // Workers may need more time to complete a job
log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
// Restart Logic
max_restarts: 40,
exp_backoff_restart_delay: 100,
min_uptime: '10s',
// Production Environment Settings
env_production: {
NODE_ENV: 'production',
name: 'flyer-crawler-worker',
cwd: '/var/www/flyer-crawler.projectium.com',
...sharedEnv,
},
// Test Environment Settings
env_test: {
NODE_ENV: 'development',
NODE_ENV: 'test',
name: 'flyer-crawler-worker-test',
cwd: '/var/www/flyer-crawler-test.projectium.com',
...sharedEnv,
},
// Development Environment Settings
env_development: {
NODE_ENV: 'development',
name: 'flyer-crawler-worker-dev',
watch: true,
ignore_watch: ['node_modules', 'logs', '*.log', 'flyer-images', '.git'],
...sharedEnv,
},
},
{
// --- Analytics Worker ---
name: 'flyer-crawler-analytics-worker',
script: './node_modules/.bin/tsx',
args: 'src/services/queueService.server.ts', // tsx will execute this file
args: 'src/services/worker.ts',
max_memory_restart: '1G',
kill_timeout: 10000,
log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
// Restart Logic
max_restarts: 40,
exp_backoff_restart_delay: 100,
min_uptime: '10s',
// Production Environment Settings
env_production: {
NODE_ENV: 'production',
name: 'flyer-crawler-analytics-worker',
cwd: '/var/www/flyer-crawler.projectium.com',
...sharedEnv,
},
// Test Environment Settings
env_test: {
NODE_ENV: 'development',
NODE_ENV: 'test',
name: 'flyer-crawler-analytics-worker-test',
cwd: '/var/www/flyer-crawler-test.projectium.com',
...sharedEnv,
},
// Development Environment Settings
env_development: {
NODE_ENV: 'development',
name: 'flyer-crawler-analytics-worker-dev',
watch: true,
ignore_watch: ['node_modules', 'logs', '*.log', 'flyer-images', '.git'],
...sharedEnv,
},
},
],
};
};

View File

@@ -1,21 +1,21 @@
import globals from "globals";
import tseslint from "typescript-eslint";
import pluginReact from "eslint-plugin-react";
import pluginReactHooks from "eslint-plugin-react-hooks";
import pluginReactRefresh from "eslint-plugin-react-refresh";
import globals from 'globals';
import tseslint from 'typescript-eslint';
import pluginReact from 'eslint-plugin-react';
import pluginReactHooks from 'eslint-plugin-react-hooks';
import pluginReactRefresh from 'eslint-plugin-react-refresh';
export default tseslint.config(
{
// Global ignores
ignores: ["dist", ".gitea", "node_modules", "*.cjs"],
ignores: ['dist', '.gitea', 'node_modules', '*.cjs'],
},
{
// All files
files: ["**/*.{js,mjs,cjs,ts,jsx,tsx}"],
files: ['**/*.{js,mjs,cjs,ts,jsx,tsx}'],
plugins: {
react: pluginReact,
"react-hooks": pluginReactHooks,
"react-refresh": pluginReactRefresh,
'react-hooks': pluginReactHooks,
'react-refresh': pluginReactRefresh,
},
languageOptions: {
globals: {
@@ -24,12 +24,9 @@ export default tseslint.config(
},
},
rules: {
"react-refresh/only-export-components": [
"warn",
{ allowConstantExport: true },
],
'react-refresh/only-export-components': ['warn', { allowConstantExport: true }],
},
},
// TypeScript files
...tseslint.configs.recommended,
);
);

4
express.d.ts vendored
View File

@@ -1,4 +1,4 @@
// src/types/express.d.ts
// express.d.ts
import { Logger } from 'pino';
/**
@@ -12,4 +12,4 @@ declare global {
log: Logger;
}
}
}
}

View File

@@ -1,20 +1,20 @@
<!DOCTYPE html>
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Grocery Flyer AI Analyzer</title>
<style>
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap');
body {
font-family: 'Inter', sans-serif;
}
</style>
<!-- The stylesheet will be injected here by Vite during the build process -->
</head>
<body>
<div id="root"></div>
<!-- Vite will inject the correct <script> tag here during the build process -->
<script type="module" src="/src/index.tsx"></script>
</body>
</html>
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Grocery Flyer AI Analyzer</title>
<style>
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap');
body {
font-family: 'Inter', sans-serif;
}
</style>
<!-- The stylesheet will be injected here by Vite during the build process -->
</head>
<body>
<div id="root"></div>
<!-- Vite will inject the correct <script> tag here during the build process -->
<script type="module" src="/src/index.tsx"></script>
</body>
</html>

View File

@@ -1,8 +1,5 @@
{
"name": "Flyer Crawler",
"description": "Upload a grocery store flyer image to extract item details, prices, and quantities using AI. Get insights, meal plans, and compare prices to save money on your shopping.",
"requestFramePermissions": [
"geolocation",
"microphone"
]
}
"requestFramePermissions": ["geolocation", "microphone"]
}

127
notes-to-ai4.txt Normal file
View File

@@ -0,0 +1,127 @@
RULES:
1) if you do not have a file that you need, stop, and request it immediately.
2) never remove logging or comments
3) you cannot ever use 'any' or 'unknown' to solve possible typescript issues
4) when creating new files, output there entire path in your explanation, to make it easier to know where to save those new files and directories to
5) add comments when you can, as that will help ensure ideas persist into the app
6) Your knowledge of package version, like nodejs, is always old, like a year or more old - ask me for the best version to use, as your knowledge is incomplete
7) Stop making predictions and/or guessing at solutions. Focus on adding logging and debugging to issues that are not solved right away.
8) Do not make obsequious statements - we're here to do a job, not get patted on the shoulder for insignificant achievements.
9) Provide me with the npm command to execute rather than wanting to edit the package.json file. That is not the correct way to handle a package update.
10) Provide the code solution in DIFF format for brevity.
11) Always add logging and debugging to prove a solution works, and only remove logging and debugging when it is not possible to adapt it to the new conditions.
latest refacter
Refactor `RecipeSuggester.test.tsx` to use `renderWithProviders`.
Create a new test file for `StatCard.tsx` to verify its props and rendering.
while assuming that master_schema_rollup.sql is the "ultimate source of truth", issues can happen and it may not have been properly
updated - look for differences between these files
UPC SCANNING !
EXPIRY DATES !
are there any other unit test files that should be refactored :
Extract Shared Utility Functions Several utility functions are duplicated across multiple route files.
Update Route Files to Use New Utilities route files will be updated to import these new utility functions
Review the `getJobStatus` API client function and add more robust error handling for non-JSON responses.
Review the `src/services/apiClient.test.ts` file. It seems to have a lot of manual data creation for API payloads. Can we refactor it to use factories where appropriate, or create new factories for API payloads if needed?
The `useApi` hook has complex state management. Can you suggest a way to simplify it, perhaps by using a reducer or a state machine?
Are there any opportunities to improve the mock factories themselves, for example, by adding more realistic default data?
Now that the user creation is centralized, can we add more complex test user scenarios to the helper, like a user with a pre-existing shopping list?
How can I add a new "Recipes Created" stat to the AdminStatsPage and its tests?
NO CODE just outline the steps
Gemini
Outlining the Implementation Path
Gemini 2.5 generated this response due to high traffic on Gemini 3
Of course! Here is a step-by-step outline of how to add a new "Recipes Created" statistic to your AdminStatsPage.
Backend Steps
Update Database Query:
In the database repository file responsible for fetching admin statistics (e.g., admin.db.ts), you'll need to add a query that performs a COUNT(*) on your recipes table.
Update Service Layer:
The service function that aggregates the statistics (which is called by your API controller) needs to be modified to execute the new recipe count query.
The result of this query should be added to the statistics object that is returned.
Update API Endpoint:
The controller for the /api/admin/stats endpoint must be updated to include the new recipeCount field in the JSON object it sends to the frontend.
Frontend Steps
Update AppStats Type:
In src/services/apiClient.ts, where the AppStats type is defined, add the new property: recipeCount: number;.
Update Mock Factory:
In src/tests/utils/mockFactories.ts, modify the createMockAppStats function. Add recipeCount to the defaultStats object with a default value (e.g., recipeCount: 500). This ensures any test that uses the factory will not break due to the missing property.
Update AdminStatsPage Component (AdminStatsPage.tsx):
Import a suitable icon for recipes, like BookOpenIcon, at the top of the file.
Inside the main div where the other StatCard components are rendered, add a new one for the recipe count:
jsx
<StatCard title="Total Recipes" value={stats.recipeCount.toLocaleString()} icon={<BookOpenIcon className="w-6 h-6" />} />
Update AdminStatsPage Tests (AdminStatsPage.test.tsx):
In the test 'should display stats cards when data is fetched successfully', update the mockStats object to include a value for recipeCount. Add a new expect assertion to verify that the "Total Recipes" title and its corresponding value are rendered.
In the test 'should format large numbers with commas for readability', add a large number for recipeCount in the mock data and assert that it is rendered with commas.
In the test 'should correctly display zero values for all stats', add recipeCount: 0 to the mock data and increase the expected count of "0" elements found in the document from 5 to 6.

4331
package-lock.json generated

File diff suppressed because it is too large Load Diff

View File

@@ -1,17 +1,19 @@
{
"name": "flyer-crawler",
"private": true,
"version": "0.0.4",
"version": "0.9.52",
"type": "module",
"scripts": {
"dev": "concurrently \"npm:start:dev\" \"vite\"",
"dev:container": "concurrently \"npm:start:dev\" \"vite --host\"",
"start": "npm run start:prod",
"build": "vite build",
"preview": "vite preview",
"test": "NODE_ENV=test tsx ./node_modules/vitest/vitest.mjs run",
"test": "cross-env NODE_ENV=test tsx ./node_modules/vitest/vitest.mjs run",
"test-wsl": "cross-env NODE_ENV=test vitest run",
"test:coverage": "npm run clean && npm run test:unit -- --coverage && npm run test:integration -- --coverage",
"test:unit": "NODE_ENV=test tsx ./node_modules/vitest/vitest.mjs run --project unit -c vite.config.ts",
"test:integration": "NODE_ENV=test tsx ./node_modules/vitest/vitest.mjs run --project integration -c vitest.config.integration.ts",
"test:unit": "NODE_ENV=test tsx --max-old-space-size=8192 ./node_modules/vitest/vitest.mjs run --project unit -c vite.config.ts",
"test:integration": "NODE_ENV=test tsx --max-old-space-size=8192 ./node_modules/vitest/vitest.mjs run --project integration -c vitest.config.integration.ts",
"format": "prettier --write .",
"lint": "eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0",
"type-check": "tsc --noEmit",
@@ -20,6 +22,7 @@
"start:dev": "NODE_ENV=development tsx watch server.ts",
"start:prod": "NODE_ENV=production tsx server.ts",
"start:test": "NODE_ENV=test NODE_V8_COVERAGE=.coverage/tmp/integration-server tsx server.ts",
"db:reset:dev": "NODE_ENV=development tsx src/db/seed.ts",
"db:reset:test": "NODE_ENV=test tsx src/db/seed.ts",
"worker:prod": "NODE_ENV=production tsx src/services/queueService.server.ts"
},
@@ -27,12 +30,14 @@
"@bull-board/api": "^6.14.2",
"@bull-board/express": "^6.14.2",
"@google/genai": "^1.30.0",
"@tanstack/react-query": "^5.90.12",
"@types/connect-timeout": "^1.9.0",
"bcrypt": "^5.1.1",
"bullmq": "^5.65.1",
"connect-timeout": "^1.9.1",
"cookie-parser": "^1.4.7",
"date-fns": "^4.1.0",
"exif-parser": "^0.1.12",
"express": "^5.1.0",
"express-list-endpoints": "^7.1.1",
"express-rate-limit": "^8.2.1",
@@ -50,6 +55,7 @@
"passport-local": "^1.0.0",
"pdfjs-dist": "^5.4.394",
"pg": "^8.16.3",
"piexifjs": "^1.0.6",
"pino": "^10.1.0",
"react": "^19.2.0",
"react-dom": "^19.2.0",
@@ -58,7 +64,7 @@
"recharts": "^3.4.1",
"sharp": "^0.34.5",
"tsx": "^4.20.6",
"zod": "^4.1.13",
"zod": "^4.2.1",
"zxcvbn": "^4.4.2"
},
"devDependencies": {
@@ -81,6 +87,7 @@
"@types/passport-jwt": "^4.0.1",
"@types/passport-local": "^1.0.38",
"@types/pg": "^8.15.6",
"@types/piexifjs": "^1.0.0",
"@types/pino": "^7.0.4",
"@types/react": "^19.2.7",
"@types/react-dom": "^19.2.3",
@@ -95,6 +102,7 @@
"autoprefixer": "^10.4.22",
"c8": "^10.1.3",
"concurrently": "^9.2.1",
"cross-env": "^10.1.0",
"eslint": "9.39.1",
"eslint-config-prettier": "^9.1.0",
"eslint-plugin-react": "7.37.5",

View File

@@ -10,10 +10,13 @@ const tailwindConfigPath = path.resolve(process.cwd(), 'tailwind.config.js');
console.log(`[POSTCSS] Attempting to use Tailwind config at: ${tailwindConfigPath}`);
// Log to prove the imported config object is what we expect
console.log('[POSTCSS] Imported tailwind.config.js object:', JSON.stringify(tailwindConfig, null, 2));
console.log(
'[POSTCSS] Imported tailwind.config.js object:',
JSON.stringify(tailwindConfig, null, 2),
);
export default {
plugins: {
'@tailwindcss/postcss': {}, // The empty object is correct.
},
};
};

File diff suppressed because it is too large Load Diff

View File

@@ -8,7 +8,23 @@
-- It is idempotent, meaning it can be run multiple times without causing errors.
-- 1. Pre-populate the master grocery items dictionary.
-- This block links generic items to their respective categories.
-- This MUST run after populating categories.
-- Renumbered to 2.
-- 2. Pre-populate the categories table from a predefined list.
-- Renumbered to 1. This MUST run before populating master_grocery_items.
DO $$
BEGIN
INSERT INTO public.categories (name) VALUES
('Fruits & Vegetables'), ('Meat & Seafood'), ('Dairy & Eggs'), ('Bakery & Bread'),
('Pantry & Dry Goods'), ('Beverages'), ('Frozen Foods'), ('Snacks'), ('Household & Cleaning'),
('Personal Care & Health'), ('Baby & Child'), ('Pet Supplies'), ('Deli & Prepared Foods'),
('Canned Goods'), ('Condiments & Spices'), ('Breakfast & Cereal'), ('Organic'),
('International Foods'), ('Other/Miscellaneous')
ON CONFLICT (name) DO NOTHING;
END $$;
-- 2. Pre-populate the master grocery items dictionary.
DO $$
DECLARE
fv_cat_id BIGINT; ms_cat_id BIGINT; de_cat_id BIGINT; bb_cat_id BIGINT; pdg_cat_id BIGINT;
@@ -53,18 +69,6 @@ BEGIN
ON CONFLICT (name) DO NOTHING;
END $$;
-- 2. Pre-populate the categories table from a predefined list.
DO $$
BEGIN
INSERT INTO public.categories (name) VALUES
('Fruits & Vegetables'), ('Meat & Seafood'), ('Dairy & Eggs'), ('Bakery & Bread'),
('Pantry & Dry Goods'), ('Beverages'), ('Frozen Foods'), ('Snacks'), ('Household & Cleaning'),
('Personal Care & Health'), ('Baby & Child'), ('Pet Supplies'), ('Deli & Prepared Foods'),
('Canned Goods'), ('Condiments & Spices'), ('Breakfast & Cereal'), ('Organic'),
('International Foods'), ('Other/Miscellaneous')
ON CONFLICT (name) DO NOTHING;
END $$;
-- 3. Pre-populate the brands and products tables.
-- This block adds common brands and links them to specific products.
DO $$
@@ -261,5 +265,6 @@ INSERT INTO public.achievements (name, description, icon, points_value) VALUES
('List Sharer', 'Share a shopping list with another user for the first time.', 'list', 20),
('First Favorite', 'Mark a recipe as one of your favorites.', 'heart', 5),
('First Fork', 'Make a personal copy of a public recipe.', 'git-fork', 10),
('First Budget Created', 'Create your first budget to track spending.', 'piggy-bank', 15)
('First Budget Created', 'Create your first budget to track spending.', 'piggy-bank', 15),
('First-Upload', 'Upload your first flyer.', 'upload-cloud', 25)
ON CONFLICT (name) DO NOTHING;

View File

@@ -8,16 +8,23 @@
CREATE TABLE IF NOT EXISTS public.addresses (
address_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
address_line_1 TEXT NOT NULL UNIQUE,
address_line_2 TEXT,
city TEXT NOT NULL,
province_state TEXT NOT NULL,
postal_code TEXT NOT NULL,
country TEXT NOT NULL,
address_line_2 TEXT,
latitude NUMERIC(9, 6),
longitude NUMERIC(9, 6),
location GEOGRAPHY(Point, 4326),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT addresses_address_line_1_check CHECK (TRIM(address_line_1) <> ''),
CONSTRAINT addresses_city_check CHECK (TRIM(city) <> ''),
CONSTRAINT addresses_province_state_check CHECK (TRIM(province_state) <> ''),
CONSTRAINT addresses_postal_code_check CHECK (TRIM(postal_code) <> ''),
CONSTRAINT addresses_country_check CHECK (TRIM(country) <> ''),
CONSTRAINT addresses_latitude_check CHECK (latitude >= -90 AND latitude <= 90),
CONSTRAINT addresses_longitude_check CHECK (longitude >= -180 AND longitude <= 180)
);
COMMENT ON TABLE public.addresses IS 'A centralized table for storing all physical addresses for users and stores.';
COMMENT ON COLUMN public.addresses.latitude IS 'The geographic latitude.';
@@ -31,12 +38,14 @@ CREATE TABLE IF NOT EXISTS public.users (
email TEXT NOT NULL UNIQUE,
password_hash TEXT,
refresh_token TEXT,
failed_login_attempts INTEGER DEFAULT 0,
failed_login_attempts INTEGER DEFAULT 0 CHECK (failed_login_attempts >= 0),
last_failed_login TIMESTAMPTZ,
last_login_at TIMESTAMPTZ,
last_login_ip TEXT,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT users_email_check CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'),
CONSTRAINT users_password_hash_check CHECK (password_hash IS NULL OR TRIM(password_hash) <> '')
);
COMMENT ON TABLE public.users IS 'Stores user authentication information.';
COMMENT ON COLUMN public.users.refresh_token IS 'Stores the long-lived refresh token for re-authentication.';
@@ -59,10 +68,13 @@ CREATE TABLE IF NOT EXISTS public.activity_log (
icon TEXT,
details JSONB,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT activity_log_action_check CHECK (TRIM(action) <> ''),
CONSTRAINT activity_log_display_text_check CHECK (TRIM(display_text) <> '')
);
COMMENT ON TABLE public.activity_log IS 'Logs key user and system actions for auditing and display in an activity feed.';
CREATE INDEX IF NOT EXISTS idx_activity_log_user_id ON public.activity_log(user_id);
-- This composite index is more efficient for user-specific activity feeds ordered by date.
CREATE INDEX IF NOT EXISTS idx_activity_log_user_id_created_at ON public.activity_log(user_id, created_at DESC);
-- 3. for public user profiles.
-- This table is linked to the users table and stores non-sensitive user data.
@@ -72,16 +84,20 @@ CREATE TABLE IF NOT EXISTS public.profiles (
full_name TEXT,
avatar_url TEXT,
address_id BIGINT REFERENCES public.addresses(address_id) ON DELETE SET NULL,
points INTEGER DEFAULT 0 NOT NULL CHECK (points >= 0),
preferences JSONB,
role TEXT CHECK (role IN ('admin', 'user')),
points INTEGER DEFAULT 0 NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT profiles_full_name_check CHECK (full_name IS NULL OR TRIM(full_name) <> ''),
created_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL,
updated_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL
);
-- CONSTRAINT profiles_avatar_url_check CHECK (avatar_url IS NULL OR avatar_url ~* '^https://?.*'),
COMMENT ON TABLE public.profiles IS 'Stores public-facing user data, linked to the public.users table.';
COMMENT ON COLUMN public.profiles.address_id IS 'A foreign key to the user''s primary address in the `addresses` table.';
-- This index is crucial for the gamification leaderboard feature.
CREATE INDEX IF NOT EXISTS idx_profiles_points_leaderboard ON public.profiles (points DESC, full_name ASC);
COMMENT ON COLUMN public.profiles.points IS 'A simple integer column to store a user''s total accumulated points from achievements.';
-- 4. The 'stores' table for normalized store data.
@@ -91,7 +107,10 @@ CREATE TABLE IF NOT EXISTS public.stores (
logo_url TEXT,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT stores_name_check CHECK (TRIM(name) <> ''),
created_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL
);
-- CONSTRAINT stores_logo_url_check CHECK (logo_url IS NULL OR logo_url ~* '^https://?.*'),
COMMENT ON TABLE public.stores IS 'Stores metadata for grocery store chains (e.g., Safeway, Kroger).';
-- 5. The 'categories' table for normalized category data.
@@ -99,7 +118,8 @@ CREATE TABLE IF NOT EXISTS public.categories (
category_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT categories_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.categories IS 'Stores a predefined list of grocery item categories (e.g., ''Fruits & Vegetables'', ''Dairy & Eggs'').';
@@ -109,16 +129,22 @@ CREATE TABLE IF NOT EXISTS public.flyers (
file_name TEXT NOT NULL,
image_url TEXT NOT NULL,
icon_url TEXT,
checksum TEXT UNIQUE,
store_id BIGINT REFERENCES public.stores(store_id),
checksum TEXT UNIQUE,
store_id BIGINT REFERENCES public.stores(store_id) ON DELETE CASCADE,
valid_from DATE,
valid_to DATE,
store_address TEXT,
item_count INTEGER DEFAULT 0 NOT NULL,
status TEXT DEFAULT 'processed' NOT NULL CHECK (status IN ('processed', 'needs_review', 'archived')),
item_count INTEGER DEFAULT 0 NOT NULL CHECK (item_count >= 0),
uploaded_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT flyers_valid_dates_check CHECK (valid_to >= valid_from),
CONSTRAINT flyers_file_name_check CHECK (TRIM(file_name) <> ''),
CONSTRAINT flyers_checksum_check CHECK (checksum IS NULL OR length(checksum) = 64)
);
-- CONSTRAINT flyers_image_url_check CHECK (image_url ~* '^https://?.*'),
-- CONSTRAINT flyers_icon_url_check CHECK (icon_url IS NULL OR icon_url ~* '^https://?.*'),
COMMENT ON TABLE public.flyers IS 'Stores metadata for each processed flyer, linking it to a store and its validity period.';
CREATE INDEX IF NOT EXISTS idx_flyers_store_id ON public.flyers(store_id);
COMMENT ON COLUMN public.flyers.file_name IS 'The original name of the uploaded flyer file (e.g., "flyer_week_1.pdf").';
@@ -129,21 +155,24 @@ COMMENT ON COLUMN public.flyers.store_id IS 'Foreign key linking this flyer to a
COMMENT ON COLUMN public.flyers.valid_from IS 'The start date of the sale period for this flyer, extracted by the AI.';
COMMENT ON COLUMN public.flyers.valid_to IS 'The end date of the sale period for this flyer, extracted by the AI.';
COMMENT ON COLUMN public.flyers.store_address IS 'The physical store address if it was successfully extracted from the flyer image.';
COMMENT ON COLUMN public.flyers.status IS 'The processing status of the flyer, e.g., if it needs manual review.';
COMMENT ON COLUMN public.flyers.item_count IS 'A cached count of the number of items in this flyer, maintained by a trigger.';
COMMENT ON COLUMN public.flyers.uploaded_by IS 'The user who uploaded the flyer. Can be null for anonymous or system uploads.';
CREATE INDEX IF NOT EXISTS idx_flyers_status ON public.flyers(status);
CREATE INDEX IF NOT EXISTS idx_flyers_created_at ON public.flyers (created_at DESC);
CREATE INDEX IF NOT EXISTS idx_flyers_valid_to_file_name ON public.flyers (valid_to DESC, file_name ASC);
-- 7. The 'master_grocery_items' table. This is the master dictionary.
CREATE TABLE IF NOT EXISTS public.master_grocery_items (
master_grocery_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
category_id BIGINT REFERENCES public.categories(category_id),
category_id BIGINT REFERENCES public.categories(category_id) ON DELETE SET NULL,
is_allergen BOOLEAN DEFAULT false,
allergy_info JSONB,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
created_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL
created_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL,
CONSTRAINT master_grocery_items_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.master_grocery_items IS 'The master dictionary of canonical grocery items. Each item has a unique name and is linked to a category.';
CREATE INDEX IF NOT EXISTS idx_master_grocery_items_category_id ON public.master_grocery_items(category_id);
@@ -161,24 +190,64 @@ CREATE TABLE IF NOT EXISTS public.user_watched_items (
COMMENT ON TABLE public.user_watched_items IS 'A linking table that represents a user''s personal watchlist of grocery items.';
CREATE INDEX IF NOT EXISTS idx_user_watched_items_master_item_id ON public.user_watched_items(master_item_id);
-- 23. Store brand information. (Moved up due to dependency in flyer_items)
CREATE TABLE IF NOT EXISTS public.brands (
brand_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
logo_url TEXT,
store_id BIGINT REFERENCES public.stores(store_id) ON DELETE SET NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT brands_name_check CHECK (TRIM(name) <> '')
);
-- CONSTRAINT brands_logo_url_check CHECK (logo_url IS NULL OR logo_url ~* '^https://?.*')
COMMENT ON TABLE public.brands IS 'Stores brand names like "Coca-Cola", "Maple Leaf", or "Kraft".';
COMMENT ON COLUMN public.brands.store_id IS 'If this is a store-specific brand (e.g., President''s Choice), this links to the parent store.';
-- 24. For specific products, linking a master item with a brand and size. (Moved up due to dependency in flyer_items)
CREATE TABLE IF NOT EXISTS public.products (
product_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
brand_id BIGINT REFERENCES public.brands(brand_id) ON DELETE SET NULL,
name TEXT NOT NULL,
description TEXT,
size TEXT,
upc_code TEXT UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT products_name_check CHECK (TRIM(name) <> ''),
CONSTRAINT products_upc_code_check CHECK (upc_code IS NULL OR upc_code ~ '^[0-9]{8,14}$')
);
COMMENT ON TABLE public.products IS 'Represents a specific, sellable product, combining a generic item with a brand and size.';
COMMENT ON COLUMN public.products.upc_code IS 'Universal Product Code, if available, for exact product matching.';
COMMENT ON COLUMN public.products.brand_id IS 'Can be null for generic/store-brand items.';
COMMENT ON COLUMN public.products.name IS 'Prime Raised without Antibiotics Chicken Breast.';
COMMENT ON COLUMN public.products.size IS 'e.g., "4L", "500g".';
CREATE INDEX IF NOT EXISTS idx_products_master_item_id ON public.products(master_item_id);
CREATE INDEX IF NOT EXISTS idx_products_brand_id ON public.products(brand_id);
-- 9. The 'flyer_items' table. This stores individual items from flyers.
CREATE TABLE IF NOT EXISTS public.flyer_items (
flyer_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
flyer_id BIGINT REFERENCES public.flyers(flyer_id) ON DELETE CASCADE,
item TEXT NOT NULL,
price_display TEXT NOT NULL,
price_in_cents INTEGER,
price_in_cents INTEGER CHECK (price_in_cents IS NULL OR price_in_cents >= 0),
quantity_num NUMERIC,
quantity TEXT NOT NULL,
category_id BIGINT REFERENCES public.categories(category_id),
category_id BIGINT REFERENCES public.categories(category_id) ON DELETE SET NULL,
category_name TEXT,
unit_price JSONB,
view_count INTEGER DEFAULT 0 NOT NULL,
click_count INTEGER DEFAULT 0 NOT NULL,
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id),
product_id BIGINT,
view_count INTEGER DEFAULT 0 NOT NULL CHECK (view_count >= 0),
click_count INTEGER DEFAULT 0 NOT NULL CHECK (click_count >= 0),
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE SET NULL,
product_id BIGINT REFERENCES public.products(product_id) ON DELETE SET NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT flyer_items_item_check CHECK (TRIM(item) <> ''),
CONSTRAINT flyer_items_price_display_check CHECK (TRIM(price_display) <> ''),
CONSTRAINT flyer_items_quantity_check CHECK (TRIM(quantity) <> ''),
CONSTRAINT flyer_items_category_name_check CHECK (category_name IS NULL OR TRIM(category_name) <> '')
);
COMMENT ON TABLE public.flyer_items IS 'Stores individual items extracted from a specific flyer.';
COMMENT ON COLUMN public.flyer_items.flyer_id IS 'Foreign key linking this item to its parent flyer in the `flyers` table.';
@@ -197,6 +266,8 @@ CREATE INDEX IF NOT EXISTS idx_flyer_items_master_item_id ON public.flyer_items(
CREATE INDEX IF NOT EXISTS idx_flyer_items_category_id ON public.flyer_items(category_id);
CREATE INDEX IF NOT EXISTS idx_flyer_items_product_id ON public.flyer_items(product_id);
-- Add a GIN index to the 'item' column for fast fuzzy text searching.
-- This partial index is optimized for queries that find the best price for an item.
CREATE INDEX IF NOT EXISTS idx_flyer_items_master_item_price ON public.flyer_items (master_item_id, price_in_cents ASC) WHERE price_in_cents IS NOT NULL;
-- This requires the pg_trgm extension.
CREATE INDEX IF NOT EXISTS flyer_items_item_trgm_idx ON public.flyer_items USING GIN (item gin_trgm_ops);
@@ -205,7 +276,7 @@ CREATE TABLE IF NOT EXISTS public.user_alerts (
user_alert_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
user_watched_item_id BIGINT NOT NULL REFERENCES public.user_watched_items(user_watched_item_id) ON DELETE CASCADE,
alert_type TEXT NOT NULL CHECK (alert_type IN ('PRICE_BELOW', 'PERCENT_OFF_AVERAGE')),
threshold_value NUMERIC NOT NULL,
threshold_value NUMERIC NOT NULL CHECK (threshold_value > 0),
is_active BOOLEAN DEFAULT true NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
@@ -223,7 +294,8 @@ CREATE TABLE IF NOT EXISTS public.notifications (
link_url TEXT,
is_read BOOLEAN DEFAULT false NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT notifications_content_check CHECK (TRIM(content) <> '')
);
COMMENT ON TABLE public.notifications IS 'A central log of notifications generated for users, such as price alerts.';
COMMENT ON COLUMN public.notifications.content IS 'The notification message displayed to the user.';
@@ -236,8 +308,8 @@ CREATE TABLE IF NOT EXISTS public.store_locations (
store_location_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
store_id BIGINT REFERENCES public.stores(store_id) ON DELETE CASCADE,
address_id BIGINT NOT NULL REFERENCES public.addresses(address_id) ON DELETE CASCADE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
UNIQUE(store_id, address_id),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
COMMENT ON TABLE public.store_locations IS 'Stores physical locations of stores with geographic data for proximity searches.';
@@ -249,13 +321,14 @@ CREATE TABLE IF NOT EXISTS public.item_price_history (
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
summary_date DATE NOT NULL,
store_location_id BIGINT REFERENCES public.store_locations(store_location_id) ON DELETE CASCADE,
min_price_in_cents INTEGER,
max_price_in_cents INTEGER,
avg_price_in_cents INTEGER,
data_points_count INTEGER DEFAULT 0 NOT NULL,
min_price_in_cents INTEGER CHECK (min_price_in_cents IS NULL OR min_price_in_cents >= 0),
max_price_in_cents INTEGER CHECK (max_price_in_cents IS NULL OR max_price_in_cents >= 0),
avg_price_in_cents INTEGER CHECK (avg_price_in_cents IS NULL OR avg_price_in_cents >= 0),
data_points_count INTEGER DEFAULT 0 NOT NULL CHECK (data_points_count >= 0),
UNIQUE(master_item_id, summary_date, store_location_id),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT item_price_history_price_order_check CHECK (min_price_in_cents <= max_price_in_cents)
);
COMMENT ON TABLE public.item_price_history IS 'Serves as a summary table to speed up charting and analytics.';
COMMENT ON COLUMN public.item_price_history.summary_date IS 'The date for which the price data is summarized.';
@@ -272,7 +345,8 @@ CREATE TABLE IF NOT EXISTS public.master_item_aliases (
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
alias TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT master_item_aliases_alias_check CHECK (TRIM(alias) <> '')
);
COMMENT ON TABLE public.master_item_aliases IS 'Stores synonyms or alternative names for master items to improve matching.';
COMMENT ON COLUMN public.master_item_aliases.alias IS 'An alternative name, e.g., "Ground Chuck" for the master item "Ground Beef".';
@@ -284,7 +358,8 @@ CREATE TABLE IF NOT EXISTS public.shopping_lists (
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
name TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT shopping_lists_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.shopping_lists IS 'Stores user-created shopping lists, e.g., "Weekly Groceries".';
CREATE INDEX IF NOT EXISTS idx_shopping_lists_user_id ON public.shopping_lists(user_id);
@@ -293,14 +368,15 @@ CREATE INDEX IF NOT EXISTS idx_shopping_lists_user_id ON public.shopping_lists(u
CREATE TABLE IF NOT EXISTS public.shopping_list_items (
shopping_list_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
shopping_list_id BIGINT NOT NULL REFERENCES public.shopping_lists(shopping_list_id) ON DELETE CASCADE,
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id),
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
custom_item_name TEXT,
quantity NUMERIC DEFAULT 1 NOT NULL,
quantity NUMERIC DEFAULT 1 NOT NULL CHECK (quantity > 0),
is_purchased BOOLEAN DEFAULT false NOT NULL,
notes TEXT,
added_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT must_have_item_identifier CHECK (master_item_id IS NOT NULL OR custom_item_name IS NOT NULL)
CONSTRAINT must_have_item_identifier CHECK (master_item_id IS NOT NULL OR custom_item_name IS NOT NULL),
CONSTRAINT shopping_list_items_custom_item_name_check CHECK (custom_item_name IS NULL OR TRIM(custom_item_name) <> '')
);
COMMENT ON TABLE public.shopping_list_items IS 'Contains individual items for a specific shopping list.';
COMMENT ON COLUMN public.shopping_list_items.custom_item_name IS 'For items not in the master list, e.g., "Grandma''s special spice mix".';
@@ -308,7 +384,6 @@ COMMENT ON COLUMN public.shopping_list_items.is_purchased IS 'Lets users check i
CREATE INDEX IF NOT EXISTS idx_shopping_list_items_shopping_list_id ON public.shopping_list_items(shopping_list_id);
CREATE INDEX IF NOT EXISTS idx_shopping_list_items_master_item_id ON public.shopping_list_items(master_item_id);
-- 17. Manage shared access to shopping lists.
CREATE TABLE IF NOT EXISTS public.shared_shopping_lists (
shared_shopping_list_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
shopping_list_id BIGINT NOT NULL REFERENCES public.shopping_lists(shopping_list_id) ON DELETE CASCADE,
@@ -333,6 +408,7 @@ CREATE TABLE IF NOT EXISTS public.menu_plans (
end_date DATE NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT menu_plans_name_check CHECK (TRIM(name) <> ''),
CONSTRAINT date_range_check CHECK (end_date >= start_date)
);
COMMENT ON TABLE public.menu_plans IS 'Represents a user''s meal plan for a specific period, e.g., "Week of Oct 23".';
@@ -358,14 +434,16 @@ CREATE INDEX IF NOT EXISTS idx_shared_menu_plans_shared_with_user_id ON public.s
CREATE TABLE IF NOT EXISTS public.suggested_corrections (
suggested_correction_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
flyer_item_id BIGINT NOT NULL REFERENCES public.flyer_items(flyer_item_id) ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES public.users(user_id),
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
correction_type TEXT NOT NULL,
suggested_value TEXT NOT NULL,
status TEXT DEFAULT 'pending' NOT NULL,
status TEXT DEFAULT 'pending' NOT NULL CHECK (status IN ('pending', 'approved', 'rejected')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
reviewed_notes TEXT,
reviewed_at TIMESTAMPTZ,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT suggested_corrections_correction_type_check CHECK (TRIM(correction_type) <> ''),
CONSTRAINT suggested_corrections_suggested_value_check CHECK (TRIM(suggested_value) <> '')
);
COMMENT ON TABLE public.suggested_corrections IS 'A queue for user-submitted data corrections, enabling crowdsourced data quality improvements.';
COMMENT ON COLUMN public.suggested_corrections.correction_type IS 'The type of error the user is reporting.';
@@ -378,16 +456,17 @@ CREATE INDEX IF NOT EXISTS idx_suggested_corrections_pending ON public.suggested
-- 21. For prices submitted directly by users from in-store.
CREATE TABLE IF NOT EXISTS public.user_submitted_prices (
user_submitted_price_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
user_id UUID NOT NULL REFERENCES public.users(user_id),
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id),
store_id BIGINT NOT NULL REFERENCES public.stores(store_id),
price_in_cents INTEGER NOT NULL,
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
store_id BIGINT NOT NULL REFERENCES public.stores(store_id) ON DELETE CASCADE,
price_in_cents INTEGER NOT NULL CHECK (price_in_cents > 0),
photo_url TEXT,
upvotes INTEGER DEFAULT 0 NOT NULL,
downvotes INTEGER DEFAULT 0 NOT NULL,
upvotes INTEGER DEFAULT 0 NOT NULL CHECK (upvotes >= 0),
downvotes INTEGER DEFAULT 0 NOT NULL CHECK (downvotes >= 0),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
-- CONSTRAINT user_submitted_prices_photo_url_check CHECK (photo_url IS NULL OR photo_url ~* '^https://?.*')
COMMENT ON TABLE public.user_submitted_prices IS 'Stores item prices submitted by users directly from physical stores.';
COMMENT ON COLUMN public.user_submitted_prices.photo_url IS 'URL to user-submitted photo evidence of the price.';
COMMENT ON COLUMN public.user_submitted_prices.upvotes IS 'Community validation score indicating accuracy.';
@@ -408,38 +487,6 @@ COMMENT ON TABLE public.unmatched_flyer_items IS 'A queue for reviewing flyer it
CREATE INDEX IF NOT EXISTS idx_unmatched_flyer_items_flyer_item_id ON public.unmatched_flyer_items(flyer_item_id);
CREATE INDEX IF NOT EXISTS idx_unmatched_flyer_items_pending ON public.unmatched_flyer_items (created_at) WHERE status = 'pending';
-- 23. Store brand information.
CREATE TABLE IF NOT EXISTS public.brands (
brand_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
logo_url TEXT,
store_id BIGINT REFERENCES public.stores(store_id) ON DELETE SET NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
COMMENT ON TABLE public.brands IS 'Stores brand names like "Coca-Cola", "Maple Leaf", or "Kraft".';
COMMENT ON COLUMN public.brands.store_id IS 'If this is a store-specific brand (e.g., President''s Choice), this links to the parent store.';
-- 24. For specific products, linking a master item with a brand and size.
CREATE TABLE IF NOT EXISTS public.products (
product_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id),
brand_id BIGINT REFERENCES public.brands(brand_id),
name TEXT NOT NULL,
description TEXT,
size TEXT,
upc_code TEXT UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
COMMENT ON TABLE public.products IS 'Represents a specific, sellable product, combining a generic item with a brand and size.';
COMMENT ON COLUMN public.products.upc_code IS 'Universal Product Code, if available, for exact product matching.';
COMMENT ON COLUMN public.products.brand_id IS 'Can be null for generic/store-brand items.';
COMMENT ON COLUMN public.products.name IS 'Prime Raised without Antibiotics Chicken Breast.';
COMMENT ON COLUMN public.products.size IS 'e.g., "4L", "500g".';
CREATE INDEX IF NOT EXISTS idx_products_master_item_id ON public.products(master_item_id);
CREATE INDEX IF NOT EXISTS idx_products_brand_id ON public.products(brand_id);
-- 25. Linking table for when one flyer is valid for multiple locations.
CREATE TABLE IF NOT EXISTS public.flyer_locations (
flyer_id BIGINT NOT NULL REFERENCES public.flyers(flyer_id) ON DELETE CASCADE,
@@ -460,21 +507,23 @@ CREATE TABLE IF NOT EXISTS public.recipes (
name TEXT NOT NULL,
description TEXT,
instructions TEXT,
prep_time_minutes INTEGER,
cook_time_minutes INTEGER,
servings INTEGER,
prep_time_minutes INTEGER CHECK (prep_time_minutes IS NULL OR prep_time_minutes >= 0),
cook_time_minutes INTEGER CHECK (cook_time_minutes IS NULL OR cook_time_minutes >= 0),
servings INTEGER CHECK (servings IS NULL OR servings > 0),
photo_url TEXT,
calories_per_serving INTEGER,
protein_grams NUMERIC,
fat_grams NUMERIC,
carb_grams NUMERIC,
avg_rating NUMERIC(2,1) DEFAULT 0.0 NOT NULL,
status TEXT DEFAULT 'private' NOT NULL CHECK (status IN ('private', 'pending_review', 'public', 'rejected')),
rating_count INTEGER DEFAULT 0 NOT NULL,
fork_count INTEGER DEFAULT 0 NOT NULL,
avg_rating NUMERIC(2,1) DEFAULT 0.0 NOT NULL CHECK (avg_rating >= 0.0 AND avg_rating <= 5.0),
status TEXT DEFAULT 'private' NOT NULL CHECK (status IN ('private', 'pending_review', 'public', 'rejected')),
rating_count INTEGER DEFAULT 0 NOT NULL CHECK (rating_count >= 0),
fork_count INTEGER DEFAULT 0 NOT NULL CHECK (fork_count >= 0),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT recipes_name_check CHECK (TRIM(name) <> '')
);
-- CONSTRAINT recipes_photo_url_check CHECK (photo_url IS NULL OR photo_url ~* '^https://?.*')
COMMENT ON TABLE public.recipes IS 'Stores recipes that can be used to generate shopping lists.';
COMMENT ON COLUMN public.recipes.servings IS 'The number of servings this recipe yields.';
COMMENT ON COLUMN public.recipes.original_recipe_id IS 'If this recipe is a variation of another, this points to the original.';
@@ -484,22 +533,23 @@ COMMENT ON COLUMN public.recipes.calories_per_serving IS 'Optional nutritional i
COMMENT ON COLUMN public.recipes.protein_grams IS 'Optional nutritional information.';
COMMENT ON COLUMN public.recipes.fat_grams IS 'Optional nutritional information.';
COMMENT ON COLUMN public.recipes.carb_grams IS 'Optional nutritional information.';
COMMENT ON COLUMN public.recipes.fork_count IS 'To track how many times a public recipe has been "forked" or copied by other users.';
CREATE INDEX IF NOT EXISTS idx_recipes_user_id ON public.recipes(user_id);
CREATE INDEX IF NOT EXISTS idx_recipes_original_recipe_id ON public.recipes(original_recipe_id);
-- Add a partial unique index to ensure system-wide recipes (user_id IS NULL) have unique names.
-- This allows different users to have recipes with the same name.
-- This index helps speed up sorting for recipe recommendations.
CREATE INDEX IF NOT EXISTS idx_recipes_rating_sort ON public.recipes (avg_rating DESC, rating_count DESC);
CREATE UNIQUE INDEX IF NOT EXISTS idx_recipes_unique_system_recipe_name ON public.recipes(name) WHERE user_id IS NULL;
-- 27. For ingredients required for each recipe.
CREATE TABLE IF NOT EXISTS public.recipe_ingredients (
recipe_ingredient_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
recipe_id BIGINT NOT NULL REFERENCES public.recipes(recipe_id) ON DELETE CASCADE,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id),
quantity NUMERIC NOT NULL,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
quantity NUMERIC NOT NULL CHECK (quantity > 0),
unit TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT recipe_ingredients_unit_check CHECK (TRIM(unit) <> '')
);
COMMENT ON TABLE public.recipe_ingredients IS 'Defines the ingredients and quantities needed for a recipe.';
COMMENT ON COLUMN public.recipe_ingredients.unit IS 'e.g., "cups", "tbsp", "g", "each".';
@@ -525,7 +575,8 @@ CREATE TABLE IF NOT EXISTS public.tags (
tag_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT tags_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.tags IS 'Stores tags for categorizing recipes, e.g., "Vegetarian", "Quick & Easy".';
@@ -539,6 +590,7 @@ CREATE TABLE IF NOT EXISTS public.recipe_tags (
);
COMMENT ON TABLE public.recipe_tags IS 'A linking table to associate multiple tags with a single recipe.';
CREATE INDEX IF NOT EXISTS idx_recipe_tags_recipe_id ON public.recipe_tags(recipe_id);
-- This index is crucial for functions that find recipes based on tags.
CREATE INDEX IF NOT EXISTS idx_recipe_tags_tag_id ON public.recipe_tags(tag_id);
-- 31. Store a predefined list of kitchen appliances.
@@ -546,7 +598,8 @@ CREATE TABLE IF NOT EXISTS public.appliances (
appliance_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT appliances_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.appliances IS 'A predefined list of kitchen appliances (e.g., Air Fryer, Instant Pot).';
@@ -586,7 +639,8 @@ CREATE TABLE IF NOT EXISTS public.recipe_comments (
content TEXT NOT NULL,
status TEXT DEFAULT 'visible' NOT NULL CHECK (status IN ('visible', 'hidden', 'reported')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT recipe_comments_content_check CHECK (TRIM(content) <> '')
);
COMMENT ON TABLE public.recipe_comments IS 'Allows for threaded discussions and comments on recipes.';
COMMENT ON COLUMN public.recipe_comments.parent_comment_id IS 'For threaded comments.';
@@ -601,6 +655,7 @@ CREATE TABLE IF NOT EXISTS public.pantry_locations (
name TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT pantry_locations_name_check CHECK (TRIM(name) <> ''),
UNIQUE(user_id, name)
);
COMMENT ON TABLE public.pantry_locations IS 'User-defined locations for organizing pantry items (e.g., "Fridge", "Freezer", "Spice Rack").';
@@ -614,8 +669,9 @@ CREATE TABLE IF NOT EXISTS public.planned_meals (
plan_date DATE NOT NULL,
meal_type TEXT NOT NULL,
servings_to_cook INTEGER,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT planned_meals_meal_type_check CHECK (TRIM(meal_type) <> '')
);
COMMENT ON TABLE public.planned_meals IS 'Assigns a recipe to a specific day and meal type within a user''s menu plan.';
COMMENT ON COLUMN public.planned_meals.meal_type IS 'The designated meal for the recipe, e.g., ''Breakfast'', ''Lunch'', ''Dinner''.';
@@ -627,7 +683,7 @@ CREATE TABLE IF NOT EXISTS public.pantry_items (
pantry_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
quantity NUMERIC NOT NULL,
quantity NUMERIC NOT NULL CHECK (quantity >= 0),
unit TEXT,
best_before_date DATE,
pantry_location_id BIGINT REFERENCES public.pantry_locations(pantry_location_id) ON DELETE SET NULL,
@@ -636,7 +692,6 @@ CREATE TABLE IF NOT EXISTS public.pantry_items (
UNIQUE(user_id, master_item_id, unit)
);
COMMENT ON TABLE public.pantry_items IS 'Tracks a user''s personal inventory of grocery items to enable smart shopping lists.';
COMMENT ON COLUMN public.pantry_items.quantity IS 'The current amount of the item. Convention: use grams for weight, mL for volume where applicable.';
COMMENT ON COLUMN public.pantry_items.pantry_location_id IS 'Links the item to a user-defined location like "Fridge" or "Freezer".';
COMMENT ON COLUMN public.pantry_items.unit IS 'e.g., ''g'', ''ml'', ''items''. Should align with recipe_ingredients.unit and quantity convention.';
CREATE INDEX IF NOT EXISTS idx_pantry_items_user_id ON public.pantry_items(user_id);
@@ -650,7 +705,8 @@ CREATE TABLE IF NOT EXISTS public.password_reset_tokens (
token_hash TEXT NOT NULL UNIQUE,
expires_at TIMESTAMPTZ NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT password_reset_tokens_token_hash_check CHECK (TRIM(token_hash) <> '')
);
COMMENT ON TABLE public.password_reset_tokens IS 'Stores secure, single-use tokens for password reset requests.';
COMMENT ON COLUMN public.password_reset_tokens.token_hash IS 'A bcrypt hash of the reset token sent to the user.';
@@ -665,10 +721,13 @@ CREATE TABLE IF NOT EXISTS public.unit_conversions (
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
from_unit TEXT NOT NULL,
to_unit TEXT NOT NULL,
factor NUMERIC NOT NULL,
factor NUMERIC NOT NULL CHECK (factor > 0),
UNIQUE(master_item_id, from_unit, to_unit),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT unit_conversions_from_unit_check CHECK (TRIM(from_unit) <> ''),
CONSTRAINT unit_conversions_to_unit_check CHECK (TRIM(to_unit) <> ''),
CONSTRAINT unit_conversions_units_check CHECK (from_unit <> to_unit)
);
COMMENT ON TABLE public.unit_conversions IS 'Stores item-specific unit conversion factors (e.g., grams of flour to cups).';
COMMENT ON COLUMN public.unit_conversions.factor IS 'The multiplication factor to convert from_unit to to_unit.';
@@ -682,7 +741,8 @@ CREATE TABLE IF NOT EXISTS public.user_item_aliases (
alias TEXT NOT NULL,
UNIQUE(user_id, alias),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT user_item_aliases_alias_check CHECK (TRIM(alias) <> '')
);
COMMENT ON TABLE public.user_item_aliases IS 'Allows users to create personal aliases for grocery items (e.g., "Dad''s Cereal").';
CREATE INDEX IF NOT EXISTS idx_user_item_aliases_user_id ON public.user_item_aliases(user_id);
@@ -719,7 +779,8 @@ CREATE TABLE IF NOT EXISTS public.recipe_collections (
name TEXT NOT NULL,
description TEXT,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT recipe_collections_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.recipe_collections IS 'Allows users to create personal collections of recipes (e.g., "Holiday Baking").';
CREATE INDEX IF NOT EXISTS idx_recipe_collections_user_id ON public.recipe_collections(user_id);
@@ -744,8 +805,11 @@ CREATE TABLE IF NOT EXISTS public.shared_recipe_collections (
shared_with_user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
permission_level TEXT NOT NULL CHECK (permission_level IN ('view', 'edit')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
UNIQUE(recipe_collection_id, shared_with_user_id)
);
-- This index is crucial for efficiently finding all collections shared with a specific user.
CREATE INDEX IF NOT EXISTS idx_shared_recipe_collections_shared_with ON public.shared_recipe_collections(shared_with_user_id);
-- 45. Log user search queries for analysis.
CREATE TABLE IF NOT EXISTS public.search_queries (
@@ -755,7 +819,8 @@ CREATE TABLE IF NOT EXISTS public.search_queries (
result_count INTEGER,
was_successful BOOLEAN,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT search_queries_query_text_check CHECK (TRIM(query_text) <> '')
);
COMMENT ON TABLE public.search_queries IS 'Logs user search queries to analyze search effectiveness and identify gaps in data.';
COMMENT ON COLUMN public.search_queries.was_successful IS 'Indicates if the user interacted with a search result.';
@@ -779,12 +844,13 @@ CREATE INDEX IF NOT EXISTS idx_shopping_trips_shopping_list_id ON public.shoppin
CREATE TABLE IF NOT EXISTS public.shopping_trip_items (
shopping_trip_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
shopping_trip_id BIGINT NOT NULL REFERENCES public.shopping_trips(shopping_trip_id) ON DELETE CASCADE,
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id),
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE SET NULL,
custom_item_name TEXT,
quantity NUMERIC NOT NULL,
quantity NUMERIC NOT NULL CHECK (quantity > 0),
price_paid_cents INTEGER,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT shopping_trip_items_custom_item_name_check CHECK (custom_item_name IS NULL OR TRIM(custom_item_name) <> ''),
CONSTRAINT trip_must_have_item_identifier CHECK (master_item_id IS NOT NULL OR custom_item_name IS NOT NULL)
);
COMMENT ON TABLE public.shopping_trip_items IS 'A historical log of items purchased during a shopping trip.';
@@ -798,7 +864,8 @@ CREATE TABLE IF NOT EXISTS public.dietary_restrictions (
name TEXT NOT NULL UNIQUE,
type TEXT NOT NULL CHECK (type IN ('diet', 'allergy')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT dietary_restrictions_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.dietary_restrictions IS 'A predefined list of common diets (e.g., Vegan) and allergies (e.g., Nut Allergy).';
@@ -811,6 +878,7 @@ CREATE TABLE IF NOT EXISTS public.user_dietary_restrictions (
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
COMMENT ON TABLE public.user_dietary_restrictions IS 'Connects users to their selected dietary needs and allergies.';
-- This index is crucial for functions that filter recipes based on user diets/allergies.
CREATE INDEX IF NOT EXISTS idx_user_dietary_restrictions_user_id ON public.user_dietary_restrictions(user_id);
CREATE INDEX IF NOT EXISTS idx_user_dietary_restrictions_restriction_id ON public.user_dietary_restrictions(restriction_id);
@@ -836,6 +904,7 @@ CREATE TABLE IF NOT EXISTS public.user_follows (
CONSTRAINT cant_follow_self CHECK (follower_id <> following_id)
);
COMMENT ON TABLE public.user_follows IS 'Stores user following relationships to build a social graph.';
-- This index is crucial for efficiently generating a user's activity feed.
CREATE INDEX IF NOT EXISTS idx_user_follows_follower_id ON public.user_follows(follower_id);
CREATE INDEX IF NOT EXISTS idx_user_follows_following_id ON public.user_follows(following_id);
@@ -843,16 +912,17 @@ CREATE INDEX IF NOT EXISTS idx_user_follows_following_id ON public.user_follows(
CREATE TABLE IF NOT EXISTS public.receipts (
receipt_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
store_id BIGINT REFERENCES public.stores(store_id),
store_id BIGINT REFERENCES public.stores(store_id) ON DELETE CASCADE,
receipt_image_url TEXT NOT NULL,
transaction_date TIMESTAMPTZ,
total_amount_cents INTEGER,
total_amount_cents INTEGER CHECK (total_amount_cents IS NULL OR total_amount_cents >= 0),
status TEXT DEFAULT 'pending' NOT NULL CHECK (status IN ('pending', 'processing', 'completed', 'failed')),
raw_text TEXT,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
processed_at TIMESTAMPTZ,
processed_at TIMESTAMPTZ,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
-- CONSTRAINT receipts_receipt_image_url_check CHECK (receipt_image_url ~* '^https://?.*')
COMMENT ON TABLE public.receipts IS 'Stores uploaded user receipts for purchase tracking and analysis.';
CREATE INDEX IF NOT EXISTS idx_receipts_user_id ON public.receipts(user_id);
CREATE INDEX IF NOT EXISTS idx_receipts_store_id ON public.receipts(store_id);
@@ -862,13 +932,14 @@ CREATE TABLE IF NOT EXISTS public.receipt_items (
receipt_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
receipt_id BIGINT NOT NULL REFERENCES public.receipts(receipt_id) ON DELETE CASCADE,
raw_item_description TEXT NOT NULL,
quantity NUMERIC DEFAULT 1 NOT NULL,
price_paid_cents INTEGER NOT NULL,
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id),
product_id BIGINT REFERENCES public.products(product_id),
quantity NUMERIC DEFAULT 1 NOT NULL CHECK (quantity > 0),
price_paid_cents INTEGER NOT NULL CHECK (price_paid_cents >= 0),
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE SET NULL,
product_id BIGINT REFERENCES public.products(product_id) ON DELETE SET NULL,
status TEXT DEFAULT 'unmatched' NOT NULL CHECK (status IN ('unmatched', 'matched', 'needs_review', 'ignored')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT receipt_items_raw_item_description_check CHECK (TRIM(raw_item_description) <> '')
);
COMMENT ON TABLE public.receipt_items IS 'Stores individual line items extracted from a user receipt.';
CREATE INDEX IF NOT EXISTS idx_receipt_items_receipt_id ON public.receipt_items(receipt_id);
@@ -881,7 +952,6 @@ CREATE TABLE IF NOT EXISTS public.schema_info (
deployed_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
COMMENT ON TABLE public.schema_info IS 'Stores metadata about the deployed schema, such as a hash of the schema file, to detect changes.';
COMMENT ON COLUMN public.schema_info.environment IS 'The deployment environment (e.g., ''development'', ''test'', ''production'').';
COMMENT ON COLUMN public.schema_info.schema_hash IS 'A SHA-256 hash of the master_schema_rollup.sql file at the time of deployment.';
-- 55. Store user reactions to various entities (e.g., recipes, comments).
@@ -902,14 +972,31 @@ COMMENT ON COLUMN public.user_reactions.reaction_type IS 'The type of reaction (
CREATE INDEX IF NOT EXISTS idx_user_reactions_user_id ON public.user_reactions(user_id);
CREATE INDEX IF NOT EXISTS idx_user_reactions_entity ON public.user_reactions(entity_type, entity_id);
-- 56. Store user-defined budgets for spending analysis.
CREATE TABLE IF NOT EXISTS public.budgets (
budget_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
name TEXT NOT NULL,
amount_cents INTEGER NOT NULL CHECK (amount_cents > 0),
period TEXT NOT NULL CHECK (period IN ('weekly', 'monthly')),
start_date DATE NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT budgets_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.budgets IS 'Allows users to set weekly or monthly grocery budgets for spending tracking.';
CREATE INDEX IF NOT EXISTS idx_budgets_user_id ON public.budgets(user_id);
-- 57. Static table defining available achievements for gamification.
CREATE TABLE IF NOT EXISTS public.achievements (
achievement_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
description TEXT NOT NULL,
icon TEXT,
points_value INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL
points_value INTEGER NOT NULL DEFAULT 0 CHECK (points_value >= 0),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT achievements_name_check CHECK (TRIM(name) <> ''),
CONSTRAINT achievements_description_check CHECK (TRIM(description) <> '')
);
COMMENT ON TABLE public.achievements IS 'A static table defining the available achievements users can earn.';
@@ -925,16 +1012,3 @@ CREATE INDEX IF NOT EXISTS idx_user_achievements_user_id ON public.user_achievem
CREATE INDEX IF NOT EXISTS idx_user_achievements_achievement_id ON public.user_achievements(achievement_id);
-- 56. Store user-defined budgets for spending analysis.
CREATE TABLE IF NOT EXISTS public.budgets (
budget_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
name TEXT NOT NULL,
amount_cents INTEGER NOT NULL,
period TEXT NOT NULL CHECK (period IN ('weekly', 'monthly')),
start_date DATE NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
COMMENT ON TABLE public.budgets IS 'Allows users to set weekly or monthly grocery budgets for spending tracking.';
CREATE INDEX IF NOT EXISTS idx_budgets_user_id ON public.budgets(user_id);

View File

@@ -23,16 +23,23 @@
CREATE TABLE IF NOT EXISTS public.addresses (
address_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
address_line_1 TEXT NOT NULL UNIQUE,
address_line_2 TEXT,
city TEXT NOT NULL,
province_state TEXT NOT NULL,
postal_code TEXT NOT NULL,
country TEXT NOT NULL,
address_line_2 TEXT,
latitude NUMERIC(9, 6),
longitude NUMERIC(9, 6),
location GEOGRAPHY(Point, 4326),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT addresses_address_line_1_check CHECK (TRIM(address_line_1) <> ''),
CONSTRAINT addresses_city_check CHECK (TRIM(city) <> ''),
CONSTRAINT addresses_province_state_check CHECK (TRIM(province_state) <> ''),
CONSTRAINT addresses_postal_code_check CHECK (TRIM(postal_code) <> ''),
CONSTRAINT addresses_country_check CHECK (TRIM(country) <> ''),
CONSTRAINT addresses_latitude_check CHECK (latitude >= -90 AND latitude <= 90),
CONSTRAINT addresses_longitude_check CHECK (longitude >= -180 AND longitude <= 180)
);
COMMENT ON TABLE public.addresses IS 'A centralized table for storing all physical addresses for users and stores.';
COMMENT ON COLUMN public.addresses.latitude IS 'The geographic latitude.';
@@ -45,14 +52,16 @@ CREATE INDEX IF NOT EXISTS addresses_location_idx ON public.addresses USING GIST
CREATE TABLE IF NOT EXISTS public.users (
user_id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
email TEXT NOT NULL UNIQUE,
password_hash TEXT,
password_hash TEXT,
refresh_token TEXT,
failed_login_attempts INTEGER DEFAULT 0,
failed_login_attempts INTEGER DEFAULT 0 CHECK (failed_login_attempts >= 0),
last_failed_login TIMESTAMPTZ,
last_login_at TIMESTAMPTZ,
last_login_ip TEXT,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT users_email_check CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'),
CONSTRAINT users_password_hash_check CHECK (password_hash IS NULL OR TRIM(password_hash) <> '')
);
COMMENT ON TABLE public.users IS 'Stores user authentication information.';
COMMENT ON COLUMN public.users.refresh_token IS 'Stores the long-lived refresh token for re-authentication.';
@@ -74,11 +83,14 @@ CREATE TABLE IF NOT EXISTS public.activity_log (
display_text TEXT NOT NULL,
icon TEXT,
details JSONB,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT activity_log_action_check CHECK (TRIM(action) <> ''),
CONSTRAINT activity_log_display_text_check CHECK (TRIM(display_text) <> '')
);
COMMENT ON TABLE public.activity_log IS 'Logs key user and system actions for auditing and display in an activity feed.';
CREATE INDEX IF NOT EXISTS idx_activity_log_user_id ON public.activity_log(user_id);
-- This composite index is more efficient for user-specific activity feeds ordered by date.
CREATE INDEX IF NOT EXISTS idx_activity_log_user_id_created_at ON public.activity_log(user_id, created_at DESC);
-- 3. for public user profiles.
-- This table is linked to the users table and stores non-sensitive user data.
@@ -88,16 +100,20 @@ CREATE TABLE IF NOT EXISTS public.profiles (
full_name TEXT,
avatar_url TEXT,
address_id BIGINT REFERENCES public.addresses(address_id) ON DELETE SET NULL,
points INTEGER DEFAULT 0 NOT NULL,
points INTEGER DEFAULT 0 NOT NULL CHECK (points >= 0),
preferences JSONB,
role TEXT CHECK (role IN ('admin', 'user')),
role TEXT NOT NULL CHECK (role IN ('admin', 'user')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
created_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL,
CONSTRAINT profiles_full_name_check CHECK (full_name IS NULL OR TRIM(full_name) <> ''),
created_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL,
updated_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL
);
-- CONSTRAINT profiles_avatar_url_check CHECK (avatar_url IS NULL OR avatar_url ~* '^https?://.*'),
COMMENT ON TABLE public.profiles IS 'Stores public-facing user data, linked to the public.users table.';
COMMENT ON COLUMN public.profiles.address_id IS 'A foreign key to the user''s primary address in the `addresses` table.';
-- This index is crucial for the gamification leaderboard feature.
CREATE INDEX IF NOT EXISTS idx_profiles_points_leaderboard ON public.profiles (points DESC, full_name ASC);
COMMENT ON COLUMN public.profiles.points IS 'A simple integer column to store a user''s total accumulated points from achievements.';
-- 4. The 'stores' table for normalized store data.
@@ -107,8 +123,10 @@ CREATE TABLE IF NOT EXISTS public.stores (
logo_url TEXT,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
created_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL
CONSTRAINT stores_name_check CHECK (TRIM(name) <> ''),
created_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL
);
-- CONSTRAINT stores_logo_url_check CHECK (logo_url IS NULL OR logo_url ~* '^https?://.*'),
COMMENT ON TABLE public.stores IS 'Stores metadata for grocery store chains (e.g., Safeway, Kroger).';
-- 5. The 'categories' table for normalized category data.
@@ -116,7 +134,8 @@ CREATE TABLE IF NOT EXISTS public.categories (
category_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT categories_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.categories IS 'Stores a predefined list of grocery item categories (e.g., ''Fruits & Vegetables'', ''Dairy & Eggs'').';
@@ -125,17 +144,23 @@ CREATE TABLE IF NOT EXISTS public.flyers (
flyer_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
file_name TEXT NOT NULL,
image_url TEXT NOT NULL,
icon_url TEXT,
checksum TEXT UNIQUE,
store_id BIGINT REFERENCES public.stores(store_id),
icon_url TEXT NOT NULL,
checksum TEXT UNIQUE,
store_id BIGINT REFERENCES public.stores(store_id) ON DELETE CASCADE,
valid_from DATE,
valid_to DATE,
store_address TEXT,
item_count INTEGER DEFAULT 0 NOT NULL,
status TEXT DEFAULT 'processed' NOT NULL CHECK (status IN ('processed', 'needs_review', 'archived')),
item_count INTEGER DEFAULT 0 NOT NULL CHECK (item_count >= 0),
uploaded_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT flyers_valid_dates_check CHECK (valid_to >= valid_from),
CONSTRAINT flyers_file_name_check CHECK (TRIM(file_name) <> ''),
CONSTRAINT flyers_checksum_check CHECK (checksum IS NULL OR length(checksum) = 64)
);
-- CONSTRAINT flyers_image_url_check CHECK (image_url ~* '^https?://.*'),
-- CONSTRAINT flyers_icon_url_check CHECK (icon_url ~* '^https?://.*'),
COMMENT ON TABLE public.flyers IS 'Stores metadata for each processed flyer, linking it to a store and its validity period.';
CREATE INDEX IF NOT EXISTS idx_flyers_store_id ON public.flyers(store_id);
COMMENT ON COLUMN public.flyers.file_name IS 'The original name of the uploaded flyer file (e.g., "flyer_week_1.pdf").';
@@ -146,21 +171,24 @@ COMMENT ON COLUMN public.flyers.store_id IS 'Foreign key linking this flyer to a
COMMENT ON COLUMN public.flyers.valid_from IS 'The start date of the sale period for this flyer, extracted by the AI.';
COMMENT ON COLUMN public.flyers.valid_to IS 'The end date of the sale period for this flyer, extracted by the AI.';
COMMENT ON COLUMN public.flyers.store_address IS 'The physical store address if it was successfully extracted from the flyer image.';
COMMENT ON COLUMN public.flyers.status IS 'The processing status of the flyer, e.g., if it needs manual review.';
COMMENT ON COLUMN public.flyers.item_count IS 'A cached count of the number of items in this flyer, maintained by a trigger.';
COMMENT ON COLUMN public.flyers.uploaded_by IS 'The user who uploaded the flyer. Can be null for anonymous or system uploads.';
CREATE INDEX IF NOT EXISTS idx_flyers_status ON public.flyers(status);
CREATE INDEX IF NOT EXISTS idx_flyers_created_at ON public.flyers (created_at DESC);
CREATE INDEX IF NOT EXISTS idx_flyers_valid_to_file_name ON public.flyers (valid_to DESC, file_name ASC);
-- 7. The 'master_grocery_items' table. This is the master dictionary.
CREATE TABLE IF NOT EXISTS public.master_grocery_items (
master_grocery_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
category_id BIGINT REFERENCES public.categories(category_id),
category_id BIGINT REFERENCES public.categories(category_id) ON DELETE SET NULL,
is_allergen BOOLEAN DEFAULT false,
allergy_info JSONB,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
created_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL
created_by UUID REFERENCES public.users(user_id) ON DELETE SET NULL,
CONSTRAINT master_grocery_items_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.master_grocery_items IS 'The master dictionary of canonical grocery items. Each item has a unique name and is linked to a category.';
CREATE INDEX IF NOT EXISTS idx_master_grocery_items_category_id ON public.master_grocery_items(category_id);
@@ -178,24 +206,64 @@ CREATE TABLE IF NOT EXISTS public.user_watched_items (
COMMENT ON TABLE public.user_watched_items IS 'A linking table that represents a user''s personal watchlist of grocery items.';
CREATE INDEX IF NOT EXISTS idx_user_watched_items_master_item_id ON public.user_watched_items(master_item_id);
-- 23. Store brand information. (Moved up due to dependency in flyer_items)
CREATE TABLE IF NOT EXISTS public.brands (
brand_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
logo_url TEXT,
store_id BIGINT REFERENCES public.stores(store_id) ON DELETE SET NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT brands_name_check CHECK (TRIM(name) <> '')
);
-- CONSTRAINT brands_logo_url_check CHECK (logo_url IS NULL OR logo_url ~* '^https?://.*')
COMMENT ON TABLE public.brands IS 'Stores brand names like "Coca-Cola", "Maple Leaf", or "Kraft".';
COMMENT ON COLUMN public.brands.store_id IS 'If this is a store-specific brand (e.g., President''s Choice), this links to the parent store.';
-- 24. For specific products, linking a master item with a brand and size. (Moved up due to dependency in flyer_items)
CREATE TABLE IF NOT EXISTS public.products (
product_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
brand_id BIGINT REFERENCES public.brands(brand_id) ON DELETE SET NULL,
name TEXT NOT NULL,
description TEXT,
size TEXT,
upc_code TEXT UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT products_name_check CHECK (TRIM(name) <> ''),
CONSTRAINT products_upc_code_check CHECK (upc_code IS NULL OR upc_code ~ '^[0-9]{8,14}$')
);
COMMENT ON TABLE public.products IS 'Represents a specific, sellable product, combining a generic item with a brand and size.';
COMMENT ON COLUMN public.products.upc_code IS 'Universal Product Code, if available, for exact product matching.';
COMMENT ON COLUMN public.products.brand_id IS 'Can be null for generic/store-brand items.';
COMMENT ON COLUMN public.products.name IS 'Prime Raised without Antibiotics Chicken Breast.';
COMMENT ON COLUMN public.products.size IS 'e.g., "4L", "500g".';
CREATE INDEX IF NOT EXISTS idx_products_master_item_id ON public.products(master_item_id);
CREATE INDEX IF NOT EXISTS idx_products_brand_id ON public.products(brand_id);
-- 9. The 'flyer_items' table. This stores individual items from flyers.
CREATE TABLE IF NOT EXISTS public.flyer_items (
flyer_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
flyer_id BIGINT REFERENCES public.flyers(flyer_id) ON DELETE CASCADE,
item TEXT NOT NULL,
price_display TEXT NOT NULL,
price_in_cents INTEGER,
price_in_cents INTEGER CHECK (price_in_cents IS NULL OR price_in_cents >= 0),
quantity_num NUMERIC,
quantity TEXT NOT NULL,
category_id BIGINT REFERENCES public.categories(category_id),
category_id BIGINT REFERENCES public.categories(category_id) ON DELETE SET NULL,
category_name TEXT,
unit_price JSONB,
view_count INTEGER DEFAULT 0 NOT NULL,
click_count INTEGER DEFAULT 0 NOT NULL,
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id),
product_id BIGINT,
view_count INTEGER DEFAULT 0 NOT NULL CHECK (view_count >= 0),
click_count INTEGER DEFAULT 0 NOT NULL CHECK (click_count >= 0),
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE SET NULL,
product_id BIGINT REFERENCES public.products(product_id) ON DELETE SET NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT flyer_items_item_check CHECK (TRIM(item) <> ''),
CONSTRAINT flyer_items_price_display_check CHECK (TRIM(price_display) <> ''),
CONSTRAINT flyer_items_quantity_check CHECK (TRIM(quantity) <> ''),
CONSTRAINT flyer_items_category_name_check CHECK (category_name IS NULL OR TRIM(category_name) <> '')
);
COMMENT ON TABLE public.flyer_items IS 'Stores individual items extracted from a specific flyer.';
COMMENT ON COLUMN public.flyer_items.flyer_id IS 'Foreign key linking this item to its parent flyer in the `flyers` table.';
@@ -214,6 +282,8 @@ CREATE INDEX IF NOT EXISTS idx_flyer_items_master_item_id ON public.flyer_items(
CREATE INDEX IF NOT EXISTS idx_flyer_items_category_id ON public.flyer_items(category_id);
CREATE INDEX IF NOT EXISTS idx_flyer_items_product_id ON public.flyer_items(product_id);
-- Add a GIN index to the 'item' column for fast fuzzy text searching.
-- This partial index is optimized for queries that find the best price for an item.
CREATE INDEX IF NOT EXISTS idx_flyer_items_master_item_price ON public.flyer_items (master_item_id, price_in_cents ASC) WHERE price_in_cents IS NOT NULL;
-- This requires the pg_trgm extension.
CREATE INDEX IF NOT EXISTS flyer_items_item_trgm_idx ON public.flyer_items USING GIN (item gin_trgm_ops);
@@ -222,7 +292,7 @@ CREATE TABLE IF NOT EXISTS public.user_alerts (
user_alert_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
user_watched_item_id BIGINT NOT NULL REFERENCES public.user_watched_items(user_watched_item_id) ON DELETE CASCADE,
alert_type TEXT NOT NULL CHECK (alert_type IN ('PRICE_BELOW', 'PERCENT_OFF_AVERAGE')),
threshold_value NUMERIC NOT NULL,
threshold_value NUMERIC NOT NULL CHECK (threshold_value > 0),
is_active BOOLEAN DEFAULT true NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
@@ -240,7 +310,8 @@ CREATE TABLE IF NOT EXISTS public.notifications (
link_url TEXT,
is_read BOOLEAN DEFAULT false NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT notifications_content_check CHECK (TRIM(content) <> '')
);
COMMENT ON TABLE public.notifications IS 'A central log of notifications generated for users, such as price alerts.';
COMMENT ON COLUMN public.notifications.content IS 'The notification message displayed to the user.';
@@ -266,13 +337,14 @@ CREATE TABLE IF NOT EXISTS public.item_price_history (
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
summary_date DATE NOT NULL,
store_location_id BIGINT REFERENCES public.store_locations(store_location_id) ON DELETE CASCADE,
min_price_in_cents INTEGER,
max_price_in_cents INTEGER,
avg_price_in_cents INTEGER,
data_points_count INTEGER DEFAULT 0 NOT NULL,
min_price_in_cents INTEGER CHECK (min_price_in_cents IS NULL OR min_price_in_cents >= 0),
max_price_in_cents INTEGER CHECK (max_price_in_cents IS NULL OR max_price_in_cents >= 0),
avg_price_in_cents INTEGER CHECK (avg_price_in_cents IS NULL OR avg_price_in_cents >= 0),
data_points_count INTEGER DEFAULT 0 NOT NULL CHECK (data_points_count >= 0),
UNIQUE(master_item_id, summary_date, store_location_id),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT item_price_history_price_order_check CHECK (min_price_in_cents <= max_price_in_cents)
);
COMMENT ON TABLE public.item_price_history IS 'Serves as a summary table to speed up charting and analytics.';
COMMENT ON COLUMN public.item_price_history.summary_date IS 'The date for which the price data is summarized.';
@@ -289,7 +361,8 @@ CREATE TABLE IF NOT EXISTS public.master_item_aliases (
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
alias TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT master_item_aliases_alias_check CHECK (TRIM(alias) <> '')
);
COMMENT ON TABLE public.master_item_aliases IS 'Stores synonyms or alternative names for master items to improve matching.';
COMMENT ON COLUMN public.master_item_aliases.alias IS 'An alternative name, e.g., "Ground Chuck" for the master item "Ground Beef".';
@@ -301,7 +374,8 @@ CREATE TABLE IF NOT EXISTS public.shopping_lists (
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
name TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT shopping_lists_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.shopping_lists IS 'Stores user-created shopping lists, e.g., "Weekly Groceries".';
CREATE INDEX IF NOT EXISTS idx_shopping_lists_user_id ON public.shopping_lists(user_id);
@@ -310,14 +384,15 @@ CREATE INDEX IF NOT EXISTS idx_shopping_lists_user_id ON public.shopping_lists(u
CREATE TABLE IF NOT EXISTS public.shopping_list_items (
shopping_list_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
shopping_list_id BIGINT NOT NULL REFERENCES public.shopping_lists(shopping_list_id) ON DELETE CASCADE,
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id),
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
custom_item_name TEXT,
quantity NUMERIC DEFAULT 1 NOT NULL,
quantity NUMERIC DEFAULT 1 NOT NULL CHECK (quantity > 0),
is_purchased BOOLEAN DEFAULT false NOT NULL,
notes TEXT,
added_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT must_have_item_identifier CHECK (master_item_id IS NOT NULL OR custom_item_name IS NOT NULL)
CONSTRAINT must_have_item_identifier CHECK (master_item_id IS NOT NULL OR custom_item_name IS NOT NULL),
CONSTRAINT shopping_list_items_custom_item_name_check CHECK (custom_item_name IS NULL OR TRIM(custom_item_name) <> '')
);
COMMENT ON TABLE public.shopping_list_items IS 'Contains individual items for a specific shopping list.';
COMMENT ON COLUMN public.shopping_list_items.custom_item_name IS 'For items not in the master list, e.g., "Grandma''s special spice mix".';
@@ -349,7 +424,8 @@ CREATE TABLE IF NOT EXISTS public.menu_plans (
start_date DATE NOT NULL,
end_date DATE NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT menu_plans_name_check CHECK (TRIM(name) <> ''),
CONSTRAINT date_range_check CHECK (end_date >= start_date)
);
COMMENT ON TABLE public.menu_plans IS 'Represents a user''s meal plan for a specific period, e.g., "Week of Oct 23".';
@@ -375,14 +451,16 @@ CREATE INDEX IF NOT EXISTS idx_shared_menu_plans_shared_with_user_id ON public.s
CREATE TABLE IF NOT EXISTS public.suggested_corrections (
suggested_correction_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
flyer_item_id BIGINT NOT NULL REFERENCES public.flyer_items(flyer_item_id) ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES public.users(user_id),
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
correction_type TEXT NOT NULL,
suggested_value TEXT NOT NULL,
status TEXT DEFAULT 'pending' NOT NULL,
status TEXT DEFAULT 'pending' NOT NULL CHECK (status IN ('pending', 'approved', 'rejected')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
reviewed_notes TEXT,
reviewed_at TIMESTAMPTZ,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT suggested_corrections_correction_type_check CHECK (TRIM(correction_type) <> ''),
CONSTRAINT suggested_corrections_suggested_value_check CHECK (TRIM(suggested_value) <> '')
);
COMMENT ON TABLE public.suggested_corrections IS 'A queue for user-submitted data corrections, enabling crowdsourced data quality improvements.';
COMMENT ON COLUMN public.suggested_corrections.correction_type IS 'The type of error the user is reporting.';
@@ -395,16 +473,17 @@ CREATE INDEX IF NOT EXISTS idx_suggested_corrections_pending ON public.suggested
-- 21. For prices submitted directly by users from in-store.
CREATE TABLE IF NOT EXISTS public.user_submitted_prices (
user_submitted_price_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
user_id UUID NOT NULL REFERENCES public.users(user_id),
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id),
store_id BIGINT NOT NULL REFERENCES public.stores(store_id),
price_in_cents INTEGER NOT NULL,
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
store_id BIGINT NOT NULL REFERENCES public.stores(store_id) ON DELETE CASCADE,
price_in_cents INTEGER NOT NULL CHECK (price_in_cents > 0),
photo_url TEXT,
upvotes INTEGER DEFAULT 0 NOT NULL,
downvotes INTEGER DEFAULT 0 NOT NULL,
upvotes INTEGER DEFAULT 0 NOT NULL CHECK (upvotes >= 0),
downvotes INTEGER DEFAULT 0 NOT NULL CHECK (downvotes >= 0),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
-- CONSTRAINT user_submitted_prices_photo_url_check CHECK (photo_url IS NULL OR photo_url ~* '^https?://.*')
COMMENT ON TABLE public.user_submitted_prices IS 'Stores item prices submitted by users directly from physical stores.';
COMMENT ON COLUMN public.user_submitted_prices.photo_url IS 'URL to user-submitted photo evidence of the price.';
COMMENT ON COLUMN public.user_submitted_prices.upvotes IS 'Community validation score indicating accuracy.';
@@ -414,7 +493,8 @@ CREATE INDEX IF NOT EXISTS idx_user_submitted_prices_master_item_id ON public.us
-- 22. Log flyer items that could not be automatically matched to a master item.
CREATE TABLE IF NOT EXISTS public.unmatched_flyer_items (
unmatched_flyer_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
flyer_item_id BIGINT NOT NULL REFERENCES public.flyer_items(flyer_item_id) ON DELETE CASCADE, status TEXT DEFAULT 'pending' NOT NULL CHECK (status IN ('pending', 'resolved', 'ignored')),
flyer_item_id BIGINT NOT NULL REFERENCES public.flyer_items(flyer_item_id) ON DELETE CASCADE,
status TEXT DEFAULT 'pending' NOT NULL CHECK (status IN ('pending', 'resolved', 'ignored')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
reviewed_at TIMESTAMPTZ,
UNIQUE(flyer_item_id),
@@ -424,38 +504,6 @@ COMMENT ON TABLE public.unmatched_flyer_items IS 'A queue for reviewing flyer it
CREATE INDEX IF NOT EXISTS idx_unmatched_flyer_items_flyer_item_id ON public.unmatched_flyer_items(flyer_item_id);
CREATE INDEX IF NOT EXISTS idx_unmatched_flyer_items_pending ON public.unmatched_flyer_items (created_at) WHERE status = 'pending';
-- 23. Store brand information.
CREATE TABLE IF NOT EXISTS public.brands (
brand_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
logo_url TEXT,
store_id BIGINT REFERENCES public.stores(store_id) ON DELETE SET NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
COMMENT ON TABLE public.brands IS 'Stores brand names like "Coca-Cola", "Maple Leaf", or "Kraft".';
COMMENT ON COLUMN public.brands.store_id IS 'If this is a store-specific brand (e.g., President''s Choice), this links to the parent store.';
-- 24. For specific products, linking a master item with a brand and size.
CREATE TABLE IF NOT EXISTS public.products (
product_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id),
brand_id BIGINT REFERENCES public.brands(brand_id),
name TEXT NOT NULL,
description TEXT,
size TEXT,
upc_code TEXT UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
COMMENT ON TABLE public.products IS 'Represents a specific, sellable product, combining a generic item with a brand and size.';
COMMENT ON COLUMN public.products.upc_code IS 'Universal Product Code, if available, for exact product matching.';
COMMENT ON COLUMN public.products.brand_id IS 'Can be null for generic/store-brand items.';
COMMENT ON COLUMN public.products.name IS 'Prime Raised without Antibiotics Chicken Breast.';
COMMENT ON COLUMN public.products.size IS 'e.g., "4L", "500g".';
CREATE INDEX IF NOT EXISTS idx_products_master_item_id ON public.products(master_item_id);
CREATE INDEX IF NOT EXISTS idx_products_brand_id ON public.products(brand_id);
-- 25. Linking table for when one flyer is valid for multiple locations.
CREATE TABLE IF NOT EXISTS public.flyer_locations (
flyer_id BIGINT NOT NULL REFERENCES public.flyers(flyer_id) ON DELETE CASCADE,
@@ -476,21 +524,23 @@ CREATE TABLE IF NOT EXISTS public.recipes (
name TEXT NOT NULL,
description TEXT,
instructions TEXT,
prep_time_minutes INTEGER,
cook_time_minutes INTEGER,
servings INTEGER,
prep_time_minutes INTEGER CHECK (prep_time_minutes IS NULL OR prep_time_minutes >= 0),
cook_time_minutes INTEGER CHECK (cook_time_minutes IS NULL OR cook_time_minutes >= 0),
servings INTEGER CHECK (servings IS NULL OR servings > 0),
photo_url TEXT,
calories_per_serving INTEGER,
protein_grams NUMERIC,
fat_grams NUMERIC,
carb_grams NUMERIC,
avg_rating NUMERIC(2,1) DEFAULT 0.0 NOT NULL,
avg_rating NUMERIC(2,1) DEFAULT 0.0 NOT NULL CHECK (avg_rating >= 0.0 AND avg_rating <= 5.0),
status TEXT DEFAULT 'private' NOT NULL CHECK (status IN ('private', 'pending_review', 'public', 'rejected')),
rating_count INTEGER DEFAULT 0 NOT NULL,
fork_count INTEGER DEFAULT 0 NOT NULL,
rating_count INTEGER DEFAULT 0 NOT NULL CHECK (rating_count >= 0),
fork_count INTEGER DEFAULT 0 NOT NULL CHECK (fork_count >= 0),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT recipes_name_check CHECK (TRIM(name) <> '')
);
-- CONSTRAINT recipes_photo_url_check CHECK (photo_url IS NULL OR photo_url ~* '^https?://.*')
COMMENT ON TABLE public.recipes IS 'Stores recipes that can be used to generate shopping lists.';
COMMENT ON COLUMN public.recipes.servings IS 'The number of servings this recipe yields.';
COMMENT ON COLUMN public.recipes.original_recipe_id IS 'If this recipe is a variation of another, this points to the original.';
@@ -504,17 +554,20 @@ CREATE INDEX IF NOT EXISTS idx_recipes_user_id ON public.recipes(user_id);
CREATE INDEX IF NOT EXISTS idx_recipes_original_recipe_id ON public.recipes(original_recipe_id);
-- Add a partial unique index to ensure system-wide recipes (user_id IS NULL) have unique names.
-- This allows different users to have recipes with the same name.
-- This index helps speed up sorting for recipe recommendations.
CREATE INDEX IF NOT EXISTS idx_recipes_rating_sort ON public.recipes (avg_rating DESC, rating_count DESC);
CREATE UNIQUE INDEX IF NOT EXISTS idx_recipes_unique_system_recipe_name ON public.recipes(name) WHERE user_id IS NULL;
-- 27. For ingredients required for each recipe.
CREATE TABLE IF NOT EXISTS public.recipe_ingredients (
recipe_ingredient_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
recipe_id BIGINT NOT NULL REFERENCES public.recipes(recipe_id) ON DELETE CASCADE,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id),
quantity NUMERIC NOT NULL,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
quantity NUMERIC NOT NULL CHECK (quantity > 0),
unit TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT recipe_ingredients_unit_check CHECK (TRIM(unit) <> '')
);
COMMENT ON TABLE public.recipe_ingredients IS 'Defines the ingredients and quantities needed for a recipe.';
COMMENT ON COLUMN public.recipe_ingredients.unit IS 'e.g., "cups", "tbsp", "g", "each".';
@@ -541,7 +594,8 @@ CREATE TABLE IF NOT EXISTS public.tags (
tag_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT tags_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.tags IS 'Stores tags for categorizing recipes, e.g., "Vegetarian", "Quick & Easy".';
@@ -563,7 +617,8 @@ CREATE TABLE IF NOT EXISTS public.appliances (
appliance_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT appliances_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.appliances IS 'A predefined list of kitchen appliances (e.g., Air Fryer, Instant Pot).';
@@ -603,7 +658,8 @@ CREATE TABLE IF NOT EXISTS public.recipe_comments (
content TEXT NOT NULL,
status TEXT DEFAULT 'visible' NOT NULL CHECK (status IN ('visible', 'hidden', 'reported')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT recipe_comments_content_check CHECK (TRIM(content) <> '')
);
COMMENT ON TABLE public.recipe_comments IS 'Allows for threaded discussions and comments on recipes.';
COMMENT ON COLUMN public.recipe_comments.parent_comment_id IS 'For threaded comments.';
@@ -617,7 +673,8 @@ CREATE TABLE IF NOT EXISTS public.pantry_locations (
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
name TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT pantry_locations_name_check CHECK (TRIM(name) <> ''),
UNIQUE(user_id, name)
);
COMMENT ON TABLE public.pantry_locations IS 'User-defined locations for organizing pantry items (e.g., "Fridge", "Freezer", "Spice Rack").';
@@ -631,8 +688,9 @@ CREATE TABLE IF NOT EXISTS public.planned_meals (
plan_date DATE NOT NULL,
meal_type TEXT NOT NULL,
servings_to_cook INTEGER,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT planned_meals_meal_type_check CHECK (TRIM(meal_type) <> '')
);
COMMENT ON TABLE public.planned_meals IS 'Assigns a recipe to a specific day and meal type within a user''s menu plan.';
COMMENT ON COLUMN public.planned_meals.meal_type IS 'The designated meal for the recipe, e.g., ''Breakfast'', ''Lunch'', ''Dinner''.';
@@ -644,7 +702,7 @@ CREATE TABLE IF NOT EXISTS public.pantry_items (
pantry_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
quantity NUMERIC NOT NULL,
quantity NUMERIC NOT NULL CHECK (quantity >= 0),
unit TEXT,
best_before_date DATE,
pantry_location_id BIGINT REFERENCES public.pantry_locations(pantry_location_id) ON DELETE SET NULL,
@@ -667,7 +725,8 @@ CREATE TABLE IF NOT EXISTS public.password_reset_tokens (
token_hash TEXT NOT NULL UNIQUE,
expires_at TIMESTAMPTZ NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT password_reset_tokens_token_hash_check CHECK (TRIM(token_hash) <> '')
);
COMMENT ON TABLE public.password_reset_tokens IS 'Stores secure, single-use tokens for password reset requests.';
COMMENT ON COLUMN public.password_reset_tokens.token_hash IS 'A bcrypt hash of the reset token sent to the user.';
@@ -682,10 +741,13 @@ CREATE TABLE IF NOT EXISTS public.unit_conversions (
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
from_unit TEXT NOT NULL,
to_unit TEXT NOT NULL,
factor NUMERIC NOT NULL,
UNIQUE(master_item_id, from_unit, to_unit),
factor NUMERIC NOT NULL CHECK (factor > 0),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
UNIQUE(master_item_id, from_unit, to_unit),
CONSTRAINT unit_conversions_from_unit_check CHECK (TRIM(from_unit) <> ''),
CONSTRAINT unit_conversions_to_unit_check CHECK (TRIM(to_unit) <> ''),
CONSTRAINT unit_conversions_units_check CHECK (from_unit <> to_unit)
);
COMMENT ON TABLE public.unit_conversions IS 'Stores item-specific unit conversion factors (e.g., grams of flour to cups).';
COMMENT ON COLUMN public.unit_conversions.factor IS 'The multiplication factor to convert from_unit to to_unit.';
@@ -697,9 +759,10 @@ CREATE TABLE IF NOT EXISTS public.user_item_aliases (
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
master_item_id BIGINT NOT NULL REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE CASCADE,
alias TEXT NOT NULL,
UNIQUE(user_id, alias),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
UNIQUE(user_id, alias),
CONSTRAINT user_item_aliases_alias_check CHECK (TRIM(alias) <> '')
);
COMMENT ON TABLE public.user_item_aliases IS 'Allows users to create personal aliases for grocery items (e.g., "Dad''s Cereal").';
CREATE INDEX IF NOT EXISTS idx_user_item_aliases_user_id ON public.user_item_aliases(user_id);
@@ -736,7 +799,8 @@ CREATE TABLE IF NOT EXISTS public.recipe_collections (
name TEXT NOT NULL,
description TEXT,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT recipe_collections_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.recipe_collections IS 'Allows users to create personal collections of recipes (e.g., "Holiday Baking").';
CREATE INDEX IF NOT EXISTS idx_recipe_collections_user_id ON public.recipe_collections(user_id);
@@ -761,8 +825,11 @@ CREATE TABLE IF NOT EXISTS public.shared_recipe_collections (
shared_with_user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
permission_level TEXT NOT NULL CHECK (permission_level IN ('view', 'edit')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
UNIQUE(recipe_collection_id, shared_with_user_id)
);
-- This index is crucial for efficiently finding all collections shared with a specific user.
CREATE INDEX IF NOT EXISTS idx_shared_recipe_collections_shared_with ON public.shared_recipe_collections(shared_with_user_id);
-- 45. Log user search queries for analysis.
CREATE TABLE IF NOT EXISTS public.search_queries (
@@ -772,7 +839,8 @@ CREATE TABLE IF NOT EXISTS public.search_queries (
result_count INTEGER,
was_successful BOOLEAN,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT search_queries_query_text_check CHECK (TRIM(query_text) <> '')
);
COMMENT ON TABLE public.search_queries IS 'Logs user search queries to analyze search effectiveness and identify gaps in data.';
COMMENT ON COLUMN public.search_queries.was_successful IS 'Indicates if the user interacted with a search result.';
@@ -796,12 +864,13 @@ CREATE INDEX IF NOT EXISTS idx_shopping_trips_shopping_list_id ON public.shoppin
CREATE TABLE IF NOT EXISTS public.shopping_trip_items (
shopping_trip_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
shopping_trip_id BIGINT NOT NULL REFERENCES public.shopping_trips(shopping_trip_id) ON DELETE CASCADE,
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id),
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE SET NULL,
custom_item_name TEXT,
quantity NUMERIC NOT NULL,
quantity NUMERIC NOT NULL CHECK (quantity > 0),
price_paid_cents INTEGER,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT shopping_trip_items_custom_item_name_check CHECK (custom_item_name IS NULL OR TRIM(custom_item_name) <> ''),
CONSTRAINT trip_must_have_item_identifier CHECK (master_item_id IS NOT NULL OR custom_item_name IS NOT NULL)
);
COMMENT ON TABLE public.shopping_trip_items IS 'A historical log of items purchased during a shopping trip.';
@@ -815,7 +884,8 @@ CREATE TABLE IF NOT EXISTS public.dietary_restrictions (
name TEXT NOT NULL UNIQUE,
type TEXT NOT NULL CHECK (type IN ('diet', 'allergy')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT dietary_restrictions_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.dietary_restrictions IS 'A predefined list of common diets (e.g., Vegan) and allergies (e.g., Nut Allergy).';
@@ -862,16 +932,17 @@ CREATE INDEX IF NOT EXISTS idx_user_follows_following_id ON public.user_follows(
CREATE TABLE IF NOT EXISTS public.receipts (
receipt_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
store_id BIGINT REFERENCES public.stores(store_id),
store_id BIGINT REFERENCES public.stores(store_id) ON DELETE CASCADE,
receipt_image_url TEXT NOT NULL,
transaction_date TIMESTAMPTZ,
total_amount_cents INTEGER,
total_amount_cents INTEGER CHECK (total_amount_cents IS NULL OR total_amount_cents >= 0),
status TEXT DEFAULT 'pending' NOT NULL CHECK (status IN ('pending', 'processing', 'completed', 'failed')),
raw_text TEXT,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
processed_at TIMESTAMPTZ,
processed_at TIMESTAMPTZ,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
);
-- CONSTRAINT receipts_receipt_image_url_check CHECK (receipt_image_url ~* '^https?://.*'),
COMMENT ON TABLE public.receipts IS 'Stores uploaded user receipts for purchase tracking and analysis.';
CREATE INDEX IF NOT EXISTS idx_receipts_user_id ON public.receipts(user_id);
CREATE INDEX IF NOT EXISTS idx_receipts_store_id ON public.receipts(store_id);
@@ -881,13 +952,14 @@ CREATE TABLE IF NOT EXISTS public.receipt_items (
receipt_item_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
receipt_id BIGINT NOT NULL REFERENCES public.receipts(receipt_id) ON DELETE CASCADE,
raw_item_description TEXT NOT NULL,
quantity NUMERIC DEFAULT 1 NOT NULL,
price_paid_cents INTEGER NOT NULL,
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id),
product_id BIGINT REFERENCES public.products(product_id),
quantity NUMERIC DEFAULT 1 NOT NULL CHECK (quantity > 0),
price_paid_cents INTEGER NOT NULL CHECK (price_paid_cents >= 0),
master_item_id BIGINT REFERENCES public.master_grocery_items(master_grocery_item_id) ON DELETE SET NULL,
product_id BIGINT REFERENCES public.products(product_id) ON DELETE SET NULL,
status TEXT DEFAULT 'unmatched' NOT NULL CHECK (status IN ('unmatched', 'matched', 'needs_review', 'ignored')),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT receipt_items_raw_item_description_check CHECK (TRIM(raw_item_description) <> '')
);
COMMENT ON TABLE public.receipt_items IS 'Stores individual line items extracted from a user receipt.';
CREATE INDEX IF NOT EXISTS idx_receipt_items_receipt_id ON public.receipt_items(receipt_id);
@@ -926,11 +998,12 @@ CREATE TABLE IF NOT EXISTS public.budgets (
budget_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
user_id UUID NOT NULL REFERENCES public.users(user_id) ON DELETE CASCADE,
name TEXT NOT NULL,
amount_cents INTEGER NOT NULL,
amount_cents INTEGER NOT NULL CHECK (amount_cents > 0),
period TEXT NOT NULL CHECK (period IN ('weekly', 'monthly')),
start_date DATE NOT NULL,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL
updated_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT budgets_name_check CHECK (TRIM(name) <> '')
);
COMMENT ON TABLE public.budgets IS 'Allows users to set weekly or monthly grocery budgets for spending tracking.';
CREATE INDEX IF NOT EXISTS idx_budgets_user_id ON public.budgets(user_id);
@@ -941,8 +1014,10 @@ CREATE TABLE IF NOT EXISTS public.achievements (
name TEXT NOT NULL UNIQUE,
description TEXT NOT NULL,
icon TEXT,
points_value INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ DEFAULT now() NOT NULL
points_value INTEGER NOT NULL DEFAULT 0 CHECK (points_value >= 0),
created_at TIMESTAMPTZ DEFAULT now() NOT NULL,
CONSTRAINT achievements_name_check CHECK (TRIM(name) <> ''),
CONSTRAINT achievements_description_check CHECK (TRIM(description) <> '')
);
COMMENT ON TABLE public.achievements IS 'A static table defining the available achievements users can earn.';
@@ -1038,6 +1113,7 @@ DECLARE
ground_beef_id BIGINT; pasta_item_id BIGINT; tomatoes_id BIGINT; onions_id BIGINT; garlic_id BIGINT;
bell_peppers_id BIGINT; carrots_id BIGINT; soy_sauce_id BIGINT;
soda_item_id BIGINT; turkey_item_id BIGINT; bread_item_id BIGINT; cheese_item_id BIGINT;
chicken_thighs_id BIGINT; paper_towels_id BIGINT; toilet_paper_id BIGINT;
-- Tag IDs
quick_easy_tag BIGINT; healthy_tag BIGINT; chicken_tag BIGINT;
@@ -1089,6 +1165,9 @@ BEGIN
SELECT mgi.master_grocery_item_id INTO turkey_item_id FROM public.master_grocery_items mgi WHERE mgi.name = 'turkey';
SELECT mgi.master_grocery_item_id INTO bread_item_id FROM public.master_grocery_items mgi WHERE mgi.name = 'bread';
SELECT mgi.master_grocery_item_id INTO cheese_item_id FROM public.master_grocery_items mgi WHERE mgi.name = 'cheese';
SELECT mgi.master_grocery_item_id INTO chicken_thighs_id FROM public.master_grocery_items mgi WHERE mgi.name = 'chicken thighs';
SELECT mgi.master_grocery_item_id INTO paper_towels_id FROM public.master_grocery_items mgi WHERE mgi.name = 'paper towels';
SELECT mgi.master_grocery_item_id INTO toilet_paper_id FROM public.master_grocery_items mgi WHERE mgi.name = 'toilet paper';
-- Insert ingredients for each recipe
INSERT INTO public.recipe_ingredients (recipe_id, master_item_id, quantity, unit) VALUES
@@ -1125,6 +1204,17 @@ BEGIN
(bolognese_recipe_id, family_tag), (bolognese_recipe_id, beef_tag), (bolognese_recipe_id, weeknight_tag),
(stir_fry_recipe_id, quick_easy_tag), (stir_fry_recipe_id, healthy_tag), (stir_fry_recipe_id, vegetarian_tag)
ON CONFLICT (recipe_id, tag_id) DO NOTHING;
INSERT INTO public.master_item_aliases (master_item_id, alias) VALUES
(ground_beef_id, 'ground chuck'), (ground_beef_id, 'lean ground beef'),
(ground_beef_id, 'extra lean ground beef'), (ground_beef_id, 'hamburger meat'),
(chicken_breast_id, 'boneless skinless chicken breast'), (chicken_breast_id, 'chicken cutlets'),
(chicken_thighs_id, 'boneless skinless chicken thighs'), (chicken_thighs_id, 'bone-in chicken thighs'),
(bell_peppers_id, 'red pepper'), (bell_peppers_id, 'green pepper'), (bell_peppers_id, 'yellow pepper'), (bell_peppers_id, 'orange pepper'),
(soda_item_id, 'pop'), (soda_item_id, 'soft drink'), (soda_item_id, 'coke'), (soda_item_id, 'pepsi'),
(paper_towels_id, 'paper towel'),
(toilet_paper_id, 'bathroom tissue'), (toilet_paper_id, 'toilet tissue')
ON CONFLICT (alias) DO NOTHING;
END $$;
-- Pre-populate the unit_conversions table with common cooking conversions.
@@ -1173,7 +1263,8 @@ INSERT INTO public.achievements (name, description, icon, points_value) VALUES
('List Sharer', 'Share a shopping list with another user for the first time.', 'list', 20),
('First Favorite', 'Mark a recipe as one of your favorites.', 'heart', 5),
('First Fork', 'Make a personal copy of a public recipe.', 'git-fork', 10),
('First Budget Created', 'Create your first budget to track spending.', 'piggy-bank', 15)
('First Budget Created', 'Create your first budget to track spending.', 'piggy-bank', 15),
('First-Upload', 'Upload your first flyer.', 'upload-cloud', 25)
ON CONFLICT (name) DO NOTHING;
-- ============================================================================
@@ -2039,6 +2130,61 @@ AS $$
ORDER BY potential_savings_cents DESC;
$$;
-- Function to get a user's spending breakdown by category for a given date range.
DROP FUNCTION IF EXISTS public.get_spending_by_category(UUID, DATE, DATE);
CREATE OR REPLACE FUNCTION public.get_spending_by_category(p_user_id UUID, p_start_date DATE, p_end_date DATE)
RETURNS TABLE (
category_id BIGINT,
category_name TEXT,
total_spent_cents BIGINT
)
LANGUAGE sql
STABLE
SECURITY INVOKER
AS $$
WITH all_purchases AS (
-- CTE 1: Combine purchases from completed shopping trips.
-- We only consider items that have a price paid.
SELECT
sti.master_item_id,
sti.price_paid_cents
FROM public.shopping_trip_items sti
JOIN public.shopping_trips st ON sti.shopping_trip_id = st.shopping_trip_id
WHERE st.user_id = p_user_id
AND st.completed_at::date BETWEEN p_start_date AND p_end_date
AND sti.price_paid_cents IS NOT NULL
UNION ALL
-- CTE 2: Combine purchases from processed receipts.
SELECT
ri.master_item_id,
ri.price_paid_cents
FROM public.receipt_items ri
JOIN public.receipts r ON ri.receipt_id = r.receipt_id
WHERE r.user_id = p_user_id
AND r.transaction_date::date BETWEEN p_start_date AND p_end_date
AND ri.master_item_id IS NOT NULL -- Only include items matched to a master item
)
-- Final Aggregation: Group all combined purchases by category and sum the spending.
SELECT
c.category_id,
c.name AS category_name,
SUM(ap.price_paid_cents)::BIGINT AS total_spent_cents
FROM all_purchases ap
-- Join with master_grocery_items to get the category_id for each purchase.
JOIN public.master_grocery_items mgi ON ap.master_item_id = mgi.master_grocery_item_id
-- Join with categories to get the category name for display.
JOIN public.categories c ON mgi.category_id = c.category_id
GROUP BY
c.category_id, c.name
HAVING
SUM(ap.price_paid_cents) > 0
ORDER BY
total_spent_cents DESC;
$$;
-- Function to approve a suggested correction and apply it.
DROP FUNCTION IF EXISTS public.approve_correction(BIGINT);
@@ -2128,11 +2274,61 @@ DROP FUNCTION IF EXISTS public.fork_recipe(UUID, BIGINT);
CREATE OR REPLACE FUNCTION public.fork_recipe(p_user_id UUID, p_original_recipe_id BIGINT)
RETURNS SETOF public.recipes
LANGUAGE sql
LANGUAGE plpgsql
SECURITY INVOKER
AS $$
-- The entire forking logic is now encapsulated in a single, atomic database function.
SELECT * FROM public.fork_recipe(p_user_id, p_original_recipe_id);
DECLARE
new_recipe_id BIGINT;
BEGIN
-- 1. Create a copy of the recipe, linking it to the new user and the original recipe.
INSERT INTO public.recipes (
user_id,
original_recipe_id,
name,
description,
instructions,
prep_time_minutes,
cook_time_minutes,
servings,
photo_url,
calories_per_serving,
protein_grams,
fat_grams,
carb_grams,
status -- Forked recipes should be private by default
)
SELECT
p_user_id,
p_original_recipe_id,
original.name || ' (Fork)', -- Append '(Fork)' to distinguish it
original.description,
original.instructions,
original.prep_time_minutes,
original.cook_time_minutes,
original.servings,
original.photo_url,
original.calories_per_serving,
original.protein_grams,
original.fat_grams,
original.carb_grams,
'private'
FROM public.recipes AS original
WHERE original.recipe_id = p_original_recipe_id
RETURNING recipe_id INTO new_recipe_id;
-- If the original recipe didn't exist, new_recipe_id will be null.
IF new_recipe_id IS NULL THEN
RETURN;
END IF;
-- 2. Copy all ingredients, tags, and appliances from the original recipe to the new one.
INSERT INTO public.recipe_ingredients (recipe_id, master_item_id, quantity, unit) SELECT new_recipe_id, master_item_id, quantity, unit FROM public.recipe_ingredients WHERE recipe_id = p_original_recipe_id;
INSERT INTO public.recipe_tags (recipe_id, tag_id) SELECT new_recipe_id, tag_id FROM public.recipe_tags WHERE recipe_id = p_original_recipe_id;
INSERT INTO public.recipe_appliances (recipe_id, appliance_id) SELECT new_recipe_id, appliance_id FROM public.recipe_appliances WHERE recipe_id = p_original_recipe_id;
-- 3. Return the newly created recipe record.
RETURN QUERY SELECT * FROM public.recipes WHERE recipe_id = new_recipe_id;
END;
$$;
@@ -2432,8 +2628,15 @@ DROP FUNCTION IF EXISTS public.log_new_flyer();
CREATE OR REPLACE FUNCTION public.log_new_flyer()
RETURNS TRIGGER AS $$
BEGIN
INSERT INTO public.activity_log (action, display_text, icon, details)
-- If the flyer was uploaded by a registered user, award the 'First-Upload' achievement.
-- The award_achievement function handles checking if the user already has it.
IF NEW.uploaded_by IS NOT NULL THEN
PERFORM public.award_achievement(NEW.uploaded_by, 'First-Upload');
END IF;
INSERT INTO public.activity_log (user_id, action, display_text, icon, details)
VALUES (
NEW.uploaded_by, -- Log the user who uploaded it
'flyer_uploaded',
'A new flyer for ' || (SELECT name FROM public.stores WHERE store_id = NEW.store_id) || ' has been uploaded.',
'file-text',
@@ -2491,6 +2694,7 @@ BEGIN
(SELECT full_name FROM public.profiles WHERE user_id = NEW.shared_by_user_id) || ' shared a shopping list.',
'share-2',
jsonb_build_object(
'shopping_list_id', NEW.shopping_list_id,
'list_name', (SELECT name FROM public.shopping_lists WHERE shopping_list_id = NEW.shopping_list_id),
'shared_with_user_id', NEW.shared_with_user_id
)
@@ -2538,6 +2742,66 @@ CREATE TRIGGER on_new_recipe_collection_share
AFTER INSERT ON public.shared_recipe_collections
FOR EACH ROW EXECUTE FUNCTION public.log_new_recipe_collection_share();
-- 10. Trigger function to geocode a store location's address.
-- This function is triggered when an address is inserted or updated, and is
-- designed to be extensible for external geocoding services to populate the
-- latitude, longitude, and location fields.
DROP FUNCTION IF EXISTS public.geocode_address();
CREATE OR REPLACE FUNCTION public.geocode_address()
RETURNS TRIGGER AS $$
DECLARE
full_address TEXT;
BEGIN
-- Only proceed if an address component has actually changed.
IF TG_OP = 'INSERT' OR (TG_OP = 'UPDATE' AND (
NEW.address_line_1 IS DISTINCT FROM OLD.address_line_1 OR
NEW.address_line_2 IS DISTINCT FROM OLD.address_line_2 OR
NEW.city IS DISTINCT FROM OLD.city OR
NEW.province_state IS DISTINCT FROM OLD.province_state OR
NEW.postal_code IS DISTINCT FROM OLD.postal_code OR
NEW.country IS DISTINCT FROM OLD.country
)) THEN
-- Concatenate address parts into a single string for the geocoder.
full_address := CONCAT_WS(', ', NEW.address_line_1, NEW.address_line_2, NEW.city, NEW.province_state, NEW.postal_code, NEW.country);
-- Placeholder for Geocoding API Call.
-- In a real application, you would call a service here and update NEW.latitude, NEW.longitude, and NEW.location.
-- e.g., NEW.latitude := result.lat; NEW.longitude := result.lon;
-- NEW.location := ST_SetSRID(ST_MakePoint(NEW.longitude, NEW.latitude), 4326);
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- This trigger calls the geocoding function when an address changes.
DROP TRIGGER IF EXISTS on_address_change_geocode ON public.addresses;
CREATE TRIGGER on_address_change_geocode
BEFORE INSERT OR UPDATE ON public.addresses
FOR EACH ROW EXECUTE FUNCTION public.geocode_address();
-- 11. Trigger function to increment the fork_count on the original recipe.
DROP FUNCTION IF EXISTS public.increment_recipe_fork_count();
CREATE OR REPLACE FUNCTION public.increment_recipe_fork_count()
RETURNS TRIGGER AS $$
BEGIN
-- Only run if the recipe is a fork (original_recipe_id is not null).
IF NEW.original_recipe_id IS NOT NULL THEN
UPDATE public.recipes SET fork_count = fork_count + 1 WHERE recipe_id = NEW.original_recipe_id;
-- Award 'First Fork' achievement.
PERFORM public.award_achievement(NEW.user_id, 'First Fork');
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS on_recipe_fork ON public.recipes;
CREATE TRIGGER on_recipe_fork
AFTER INSERT ON public.recipes
FOR EACH ROW EXECUTE FUNCTION public.increment_recipe_fork_count();
-- =================================================================
-- Function: get_best_sale_prices_for_all_users()
-- Description: Retrieves the best sale price for every item on every user's watchlist.
@@ -2545,16 +2809,19 @@ CREATE TRIGGER on_new_recipe_collection_share
-- It replaces the need to call get_best_sale_prices_for_user for each user individually.
-- Returns: TABLE(...) - A set of records including user details and deal information.
-- =================================================================
DROP FUNCTION IF EXISTS public.get_best_sale_prices_for_all_users();
CREATE OR REPLACE FUNCTION public.get_best_sale_prices_for_all_users()
RETURNS TABLE(
user_id uuid,
email text,
full_name text,
master_item_id integer,
master_item_id bigint,
item_name text,
best_price_in_cents integer,
store_name text,
flyer_id integer,
flyer_id bigint,
valid_to date
) AS $$
BEGIN
@@ -2562,22 +2829,27 @@ BEGIN
WITH
-- Step 1: Find all flyer items that are currently on sale and have a valid price.
current_sales AS (
SELECT
fi.master_item_id,
fi.price_in_cents,
f.store_name,
s.name as store_name,
f.flyer_id,
f.valid_to
FROM public.flyer_items fi
JOIN public.flyers f ON fi.flyer_id = f.flyer_id
JOIN public.stores s ON f.store_id = s.store_id
WHERE
fi.master_item_id IS NOT NULL
AND fi.price_in_cents IS NOT NULL
AND f.valid_to >= CURRENT_DATE
),
-- Step 2: For each master item, find its absolute best (lowest) price across all current sales.
-- We use a window function to rank the sales for each item by price.
best_prices AS (
SELECT
cs.master_item_id,
cs.price_in_cents AS best_price_in_cents,
@@ -2590,6 +2862,7 @@ BEGIN
)
-- Step 3: Join the best-priced items with the user watchlist and user details.
SELECT
u.user_id,
u.email,
p.full_name,
@@ -2609,6 +2882,7 @@ BEGIN
JOIN public.master_grocery_items mgi ON bp.master_item_id = mgi.master_grocery_item_id
WHERE
-- Only include the items that are at their absolute best price (rank = 1).
bp.price_rank = 1;
END;
$$ LANGUAGE plpgsql;

View File

@@ -20,9 +20,98 @@ import {
mockUseUserData,
mockUseFlyerItems,
} from './tests/setup/mockHooks';
import './tests/setup/mockUI';
import { useAppInitialization } from './hooks/useAppInitialization';
// Mock top-level components rendered by App's routes
vi.mock('./components/Header', () => ({
Header: ({ onOpenProfile, onOpenVoiceAssistant }: any) => (
<div data-testid="header-mock">
<button onClick={onOpenProfile}>Open Profile</button>
<button onClick={onOpenVoiceAssistant}>Open Voice Assistant</button>
</div>
),
}));
vi.mock('./components/Footer', () => ({
Footer: () => <div data-testid="footer-mock">Mock Footer</div>,
}));
vi.mock('./layouts/MainLayout', async () => {
const { Outlet } = await vi.importActual<typeof import('react-router-dom')>('react-router-dom');
return {
MainLayout: () => (
<div data-testid="main-layout-mock">
<Outlet />
</div>
),
};
});
vi.mock('./pages/HomePage', () => ({
HomePage: ({ selectedFlyer, onOpenCorrectionTool }: any) => (
<div data-testid="home-page-mock" data-selected-flyer-id={selectedFlyer?.flyer_id}>
<button onClick={onOpenCorrectionTool}>Open Correction Tool</button>
</div>
),
}));
vi.mock('./pages/admin/AdminPage', () => ({
AdminPage: () => <div data-testid="admin-page-mock">AdminPage</div>,
}));
vi.mock('./pages/admin/CorrectionsPage', () => ({
CorrectionsPage: () => <div data-testid="corrections-page-mock">CorrectionsPage</div>,
}));
vi.mock('./pages/admin/AdminStatsPage', () => ({
AdminStatsPage: () => <div data-testid="admin-stats-page-mock">AdminStatsPage</div>,
}));
vi.mock('./pages/admin/FlyerReviewPage', () => ({
FlyerReviewPage: () => <div data-testid="flyer-review-page-mock">FlyerReviewPage</div>,
}));
vi.mock('./pages/VoiceLabPage', () => ({
VoiceLabPage: () => <div data-testid="voice-lab-page-mock">VoiceLabPage</div>,
}));
vi.mock('./pages/ResetPasswordPage', () => ({
ResetPasswordPage: () => <div data-testid="reset-password-page-mock">ResetPasswordPage</div>,
}));
vi.mock('./pages/admin/components/ProfileManager', () => ({
ProfileManager: ({ isOpen, onClose, onProfileUpdate, onLoginSuccess }: any) =>
isOpen ? (
<div data-testid="profile-manager-mock">
<button onClick={onClose}>Close Profile</button>
<button onClick={() => onProfileUpdate({ full_name: 'Updated' })}>Update Profile</button>
<button onClick={() => onLoginSuccess({}, 'token', false)}>Login</button>
</div>
) : null,
}));
vi.mock('./features/voice-assistant/VoiceAssistant', () => ({
VoiceAssistant: ({ isOpen, onClose }: any) =>
isOpen ? (
<div data-testid="voice-assistant-mock">
<button onClick={onClose}>Close Voice Assistant</button>
</div>
) : null,
}));
vi.mock('./components/FlyerCorrectionTool', () => ({
FlyerCorrectionTool: ({ isOpen, onClose, onDataExtracted }: any) =>
isOpen ? (
<div data-testid="flyer-correction-tool-mock">
<button onClick={onClose}>Close Correction</button>
<button onClick={() => onDataExtracted('store_name', 'New Store')}>Extract Store</button>
<button onClick={() => onDataExtracted('dates', 'New Dates')}>Extract Dates</button>
</div>
) : null,
}));
// Mock pdfjs-dist to prevent the "DOMMatrix is not defined" error in JSDOM.
// This must be done in any test file that imports App.tsx.
vi.mock('pdfjs-dist', () => ({
@@ -36,7 +125,7 @@ vi.mock('pdfjs-dist', () => ({
// Mock the new config module
vi.mock('./config', () => ({
default: {
app: { version: '1.0.0', commitMessage: 'Initial commit', commitUrl: '#' },
app: { version: '20250101-1200:abc1234:1.0.0', commitMessage: 'Initial commit', commitUrl: '#' },
google: { mapsEmbedApiKey: 'mock-key' },
},
}));
@@ -52,77 +141,31 @@ vi.mock('./hooks/useFlyerItems', async () => {
return { useFlyerItems: hooks.mockUseFlyerItems };
});
vi.mock('./hooks/useAppInitialization');
const mockedUseAppInitialization = vi.mocked(useAppInitialization);
vi.mock('./hooks/useAuth', async () => {
const hooks = await import('./tests/setup/mockHooks');
return { useAuth: hooks.mockUseAuth };
});
vi.mock('./components/Footer', async () => {
const { MockFooter } = await import('./tests/utils/componentMocks');
return { Footer: MockFooter };
vi.mock('./components/AppGuard', async () => {
// We need to use the real useModal hook inside our mock AppGuard
const { useModal } = await vi.importActual<typeof import('./hooks/useModal')>('./hooks/useModal');
return {
AppGuard: ({ children }: { children: React.ReactNode }) => {
const { isModalOpen } = useModal();
return (
<div data-testid="app-guard-mock">
{children}
{isModalOpen('whatsNew') && <div data-testid="whats-new-modal-mock" />}
</div>
);
},
};
});
vi.mock('./components/Header', async () => {
const { MockHeader } = await import('./tests/utils/componentMocks');
return { Header: MockHeader };
});
vi.mock('./pages/HomePage', async () => {
const { MockHomePage } = await import('./tests/utils/componentMocks');
return { HomePage: MockHomePage };
});
vi.mock('./pages/admin/AdminPage', async () => {
const { MockAdminPage } = await import('./tests/utils/componentMocks');
return { AdminPage: MockAdminPage };
});
vi.mock('./pages/admin/CorrectionsPage', async () => {
const { MockCorrectionsPage } = await import('./tests/utils/componentMocks');
return { CorrectionsPage: MockCorrectionsPage };
});
vi.mock('./pages/admin/AdminStatsPage', async () => {
const { MockAdminStatsPage } = await import('./tests/utils/componentMocks');
return { AdminStatsPage: MockAdminStatsPage };
});
vi.mock('./pages/VoiceLabPage', async () => {
const { MockVoiceLabPage } = await import('./tests/utils/componentMocks');
return { VoiceLabPage: MockVoiceLabPage };
});
vi.mock('./pages/ResetPasswordPage', async () => {
const { MockResetPasswordPage } = await import('./tests/utils/componentMocks');
return { ResetPasswordPage: MockResetPasswordPage };
});
vi.mock('./pages/admin/components/ProfileManager', async () => {
const { MockProfileManager } = await import('./tests/utils/componentMocks');
return { ProfileManager: MockProfileManager };
});
vi.mock('./features/voice-assistant/VoiceAssistant', async () => {
const { MockVoiceAssistant } = await import('./tests/utils/componentMocks');
return { VoiceAssistant: MockVoiceAssistant };
});
vi.mock('./components/FlyerCorrectionTool', async () => {
const { MockFlyerCorrectionTool } = await import('./tests/utils/componentMocks');
return { FlyerCorrectionTool: MockFlyerCorrectionTool };
});
vi.mock('./components/WhatsNewModal', async () => {
const { MockWhatsNewModal } = await import('./tests/utils/componentMocks');
return { WhatsNewModal: MockWhatsNewModal };
});
vi.mock('./layouts/MainLayout', async () => {
const { MockMainLayout } = await import('./tests/utils/componentMocks');
return { MainLayout: MockMainLayout };
});
const mockedAiApiClient = vi.mocked(aiApiClient); // Mock aiApiClient
const mockedAiApiClient = vi.mocked(aiApiClient);
const mockedApiClient = vi.mocked(apiClient);
const mockFlyers: Flyer[] = [
@@ -131,33 +174,6 @@ const mockFlyers: Flyer[] = [
];
describe('App Component', () => {
// Mock localStorage
let storage: { [key: string]: string } = {};
const localStorageMock = {
getItem: vi.fn((key: string) => storage[key] || null),
setItem: vi.fn((key: string, value: string) => {
storage[key] = value;
}),
removeItem: vi.fn((key: string) => {
delete storage[key];
}),
clear: vi.fn(() => {
storage = {};
}),
};
// Mock matchMedia
const matchMediaMock = vi.fn().mockImplementation((query) => ({
matches: false, // Default to light mode
media: query,
onchange: null,
addListener: vi.fn(), // deprecated
removeListener: vi.fn(), // deprecated
addEventListener: vi.fn(),
removeEventListener: vi.fn(),
dispatchEvent: vi.fn(),
}));
beforeEach(() => {
console.log('[TEST DEBUG] beforeEach: Clearing mocks and setting up defaults');
vi.clearAllMocks();
@@ -199,17 +215,16 @@ describe('App Component', () => {
mockUseUserData.mockReturnValue({
watchedItems: [],
shoppingLists: [],
isLoadingShoppingLists: false,
setWatchedItems: vi.fn(),
setShoppingLists: vi.fn(),
});
mockUseFlyerItems.mockReturnValue({
flyerItems: [],
isLoading: false,
error: null,
});
// Clear local storage to prevent state from leaking between tests.
localStorage.clear();
Object.defineProperty(window, 'localStorage', { value: localStorageMock, configurable: true });
Object.defineProperty(window, 'matchMedia', { value: matchMediaMock, configurable: true });
mockedUseAppInitialization.mockReturnValue({ isDarkMode: false, unitSystem: 'imperial' });
// Default mocks for API calls
// Use mockImplementation to create a new Response object for each call,
@@ -261,6 +276,7 @@ describe('App Component', () => {
it('should render the main layout and header', async () => {
// Simulate the auth hook finishing its initial check
mockedUseAppInitialization.mockReturnValue({ isDarkMode: false, unitSystem: 'imperial' });
mockUseAuth.mockReturnValue({
userProfile: null,
authStatus: 'SIGNED_OUT',
@@ -272,6 +288,7 @@ describe('App Component', () => {
renderApp();
await waitFor(() => {
expect(screen.getByTestId('app-guard-mock')).toBeInTheDocument();
expect(screen.getByTestId('header-mock')).toBeInTheDocument();
// Check that the main layout and home page are rendered for the root path
expect(screen.getByTestId('main-layout-mock')).toBeInTheDocument();
@@ -364,203 +381,12 @@ describe('App Component', () => {
});
});
describe('Theme and Unit System Synchronization', () => {
it('should set dark mode based on user profile preferences', async () => {
console.log(
'[TEST DEBUG] Test Start: should set dark mode based on user profile preferences',
);
const profileWithDarkMode: UserProfile = createMockUserProfile({
user: createMockUser({ user_id: 'user-1', email: 'dark@mode.com' }),
role: 'user',
points: 0,
preferences: { darkMode: true },
});
mockUseAuth.mockReturnValue({
userProfile: profileWithDarkMode,
authStatus: 'AUTHENTICATED',
isLoading: false,
login: vi.fn(),
logout: vi.fn(),
updateProfile: vi.fn(),
});
console.log('[TEST DEBUG] Rendering App');
renderApp();
// The useEffect that sets the theme is asynchronous. We must wait for the update.
await waitFor(() => {
console.log(
'[TEST DEBUG] Checking for dark class. Current classes:',
document.documentElement.className,
);
expect(document.documentElement).toHaveClass('dark');
});
});
it('should set light mode based on user profile preferences', async () => {
const profileWithLightMode: UserProfile = createMockUserProfile({
user: createMockUser({ user_id: 'user-1', email: 'light@mode.com' }),
role: 'user',
points: 0,
preferences: { darkMode: false },
});
mockUseAuth.mockReturnValue({
userProfile: profileWithLightMode,
authStatus: 'AUTHENTICATED',
isLoading: false,
login: vi.fn(),
logout: vi.fn(),
updateProfile: vi.fn(),
});
renderApp();
await waitFor(() => {
expect(document.documentElement).not.toHaveClass('dark');
});
});
it('should set dark mode based on localStorage if profile has no preference', async () => {
localStorageMock.setItem('darkMode', 'true');
renderApp();
await waitFor(() => {
expect(document.documentElement).toHaveClass('dark');
});
});
it('should set dark mode based on system preference if no other setting exists', async () => {
matchMediaMock.mockImplementationOnce((query) => ({ matches: true, media: query }));
renderApp();
await waitFor(() => {
expect(document.documentElement).toHaveClass('dark');
});
});
it('should set unit system based on user profile preferences', async () => {
const profileWithMetric: UserProfile = createMockUserProfile({
user: createMockUser({ user_id: 'user-1', email: 'metric@user.com' }),
role: 'user',
points: 0,
preferences: { unitSystem: 'metric' },
});
mockUseAuth.mockReturnValue({
userProfile: profileWithMetric,
authStatus: 'AUTHENTICATED',
isLoading: false,
login: vi.fn(),
logout: vi.fn(),
updateProfile: vi.fn(),
});
renderApp();
// The unit system is passed as a prop to Header, which is mocked.
// We can't directly see the result in the DOM easily, so we trust the state is set.
// A more integrated test would be needed to verify the Header receives the prop.
// For now, this test ensures the useEffect logic runs without crashing.
await waitFor(() => {
expect(screen.getByTestId('header-mock')).toBeInTheDocument();
});
});
});
describe('OAuth Token Handling', () => {
it('should call login when a googleAuthToken is in the URL', async () => {
console.log(
'[TEST DEBUG] Test Start: should call login when a googleAuthToken is in the URL',
);
const mockLogin = vi.fn().mockResolvedValue(undefined);
mockUseAuth.mockReturnValue({
userProfile: null,
authStatus: 'SIGNED_OUT',
isLoading: false,
login: mockLogin,
logout: vi.fn(),
updateProfile: vi.fn(),
});
console.log('[TEST DEBUG] Rendering App with googleAuthToken');
renderApp(['/?googleAuthToken=test-google-token']);
await waitFor(() => {
console.log('[TEST DEBUG] Checking mockLogin calls:', mockLogin.mock.calls);
expect(mockLogin).toHaveBeenCalledWith('test-google-token');
});
});
it('should call login when a githubAuthToken is in the URL', async () => {
console.log(
'[TEST DEBUG] Test Start: should call login when a githubAuthToken is in the URL',
);
const mockLogin = vi.fn().mockResolvedValue(undefined);
mockUseAuth.mockReturnValue({
userProfile: null,
authStatus: 'SIGNED_OUT',
isLoading: false,
login: mockLogin,
logout: vi.fn(),
updateProfile: vi.fn(),
});
console.log('[TEST DEBUG] Rendering App with githubAuthToken');
renderApp(['/?githubAuthToken=test-github-token']);
await waitFor(() => {
console.log('[TEST DEBUG] Checking mockLogin calls:', mockLogin.mock.calls);
expect(mockLogin).toHaveBeenCalledWith('test-github-token');
});
});
it('should log an error if login with a GitHub token fails', async () => {
console.log(
'[TEST DEBUG] Test Start: should log an error if login with a GitHub token fails',
);
const mockLogin = vi.fn().mockRejectedValue(new Error('GitHub login failed'));
mockUseAuth.mockReturnValue({
userProfile: null,
authStatus: 'SIGNED_OUT',
isLoading: false,
login: mockLogin,
logout: vi.fn(),
updateProfile: vi.fn(),
});
console.log('[TEST DEBUG] Rendering App with githubAuthToken');
renderApp(['/?githubAuthToken=bad-token']);
await waitFor(() => {
console.log('[TEST DEBUG] Checking mockLogin calls:', mockLogin.mock.calls);
expect(mockLogin).toHaveBeenCalled();
});
});
it('should log an error if login with a token fails', async () => {
console.log('[TEST DEBUG] Test Start: should log an error if login with a token fails');
const mockLogin = vi.fn().mockRejectedValue(new Error('Token login failed'));
mockUseAuth.mockReturnValue({
userProfile: null,
authStatus: 'SIGNED_OUT',
isLoading: false,
login: mockLogin,
logout: vi.fn(),
updateProfile: vi.fn(),
});
console.log('[TEST DEBUG] Rendering App with googleAuthToken');
renderApp(['/?googleAuthToken=bad-token']);
await waitFor(() => {
console.log('[TEST DEBUG] Checking mockLogin calls:', mockLogin.mock.calls);
expect(mockLogin).toHaveBeenCalled();
});
});
});
describe('Flyer Selection from URL', () => {
it('should select a flyer when flyerId is present in the URL', async () => {
renderApp(['/flyers/2']);
// The HomePage mock will be rendered. The important part is that the selection logic
// in App.tsx runs and passes the correct `selectedFlyer` prop down.
// Since HomePage is mocked, we can't see the direct result, but we can
// infer that the logic ran without crashing and the correct route was matched.
await waitFor(() => {
expect(screen.getByTestId('home-page-mock')).toBeInTheDocument();
expect(screen.getByTestId('home-page-mock')).toHaveAttribute('data-selected-flyer-id', '2');
});
});
@@ -583,23 +409,9 @@ describe('App Component', () => {
});
});
describe('Version and "What\'s New" Modal', () => {
it('should show the "What\'s New" modal if the app version is new', async () => {
// Mock the config module for this specific test
vi.mock('./config', () => ({
default: {
app: { version: '1.0.1', commitMessage: 'New feature!', commitUrl: '#' },
google: { mapsEmbedApiKey: 'mock-key' },
},
}));
localStorageMock.setItem('lastSeenVersion', '1.0.0');
renderApp();
await expect(screen.findByTestId('whats-new-modal-mock')).resolves.toBeInTheDocument();
});
});
describe('Modal Interactions', () => {
it('should open and close the ProfileManager modal', async () => {
console.log('[TEST DEBUG] Test Start: should open and close the ProfileManager modal');
renderApp();
expect(screen.queryByTestId('profile-manager-mock')).not.toBeInTheDocument();
@@ -607,11 +419,13 @@ describe('App Component', () => {
fireEvent.click(screen.getByText('Open Profile'));
expect(await screen.findByTestId('profile-manager-mock')).toBeInTheDocument();
console.log('[TEST DEBUG] ProfileManager modal opened. Now closing...');
// Close modal
fireEvent.click(screen.getByText('Close Profile'));
await waitFor(() => {
expect(screen.queryByTestId('profile-manager-mock')).not.toBeInTheDocument();
});
console.log('[TEST DEBUG] ProfileManager modal closed.');
});
it('should open and close the VoiceAssistant modal for authenticated users', async () => {
@@ -636,7 +450,7 @@ describe('App Component', () => {
fireEvent.click(screen.getByText('Open Voice Assistant'));
console.log('[TEST DEBUG] Waiting for voice-assistant-mock');
expect(await screen.findByTestId('voice-assistant-mock')).toBeInTheDocument();
expect(await screen.findByTestId('voice-assistant-mock', {}, { timeout: 3000 })).toBeInTheDocument();
// Close modal
fireEvent.click(screen.getByText('Close Voice Assistant'));
@@ -735,64 +549,6 @@ describe('App Component', () => {
});
});
describe("Version Display and What's New", () => {
beforeEach(() => {
// Also mock the config module to reflect this change
vi.mock('./config', () => ({
default: {
app: {
version: '2.0.0',
commitMessage: 'A new version!',
commitUrl: 'http://example.com/commit/2.0.0',
},
google: { mapsEmbedApiKey: 'mock-key' },
},
}));
});
it('should display the version number and commit link', () => {
renderApp();
const versionLink = screen.getByText(`Version: 2.0.0`);
expect(versionLink).toBeInTheDocument();
expect(versionLink).toHaveAttribute('href', 'http://example.com/commit/2.0.0');
});
it('should open the "What\'s New" modal when the question mark icon is clicked', async () => {
// Pre-set the localStorage to prevent the modal from opening automatically
localStorageMock.setItem('lastSeenVersion', '2.0.0');
renderApp();
expect(screen.queryByTestId('whats-new-modal-mock')).not.toBeInTheDocument();
const openButton = await screen.findByTitle("Show what's new in this version");
fireEvent.click(openButton);
expect(await screen.findByTestId('whats-new-modal-mock')).toBeInTheDocument();
});
});
describe('Dynamic Toaster Styles', () => {
it('should render the correct CSS variables for toast styling in light mode', async () => {
renderApp();
await waitFor(() => {
const styleTag = document.querySelector('style');
expect(styleTag).not.toBeNull();
expect(styleTag!.innerHTML).toContain('--toast-bg: #FFFFFF');
expect(styleTag!.innerHTML).toContain('--toast-color: #1F2937');
});
});
it('should render the correct CSS variables for toast styling in dark mode', async () => {
localStorageMock.setItem('darkMode', 'true');
renderApp();
await waitFor(() => {
const styleTag = document.querySelector('style');
expect(styleTag).not.toBeNull();
expect(styleTag!.innerHTML).toContain('--toast-bg: #4B5563');
});
});
});
describe('Profile and Login Handlers', () => {
it('should call updateProfile when handleProfileUpdate is triggered', async () => {
console.log(
@@ -841,12 +597,19 @@ describe('App Component', () => {
logout: vi.fn(),
updateProfile: vi.fn(),
});
// Mock the login function to simulate a successful login. Signature: (token, profile)
const mockLoginSuccess = vi.fn(async (_token: string, _profile?: UserProfile) => {
// Simulate fetching profile after login
const profileResponse = await mockedApiClient.getAuthenticatedUserProfile();
const userProfileData: UserProfile = await profileResponse.json();
mockUseAuth.mockReturnValue({ ...mockUseAuth(), userProfile: userProfileData, authStatus: 'AUTHENTICATED' });
});
console.log('[TEST DEBUG] Rendering App');
renderApp();
console.log('[TEST DEBUG] Opening Profile');
fireEvent.click(screen.getByText('Open Profile'));
const loginButton = await screen.findByText('Login');
const loginButton = await screen.findByRole('button', { name: 'Login' });
console.log('[TEST DEBUG] Clicking Login');
fireEvent.click(loginButton);
@@ -857,4 +620,33 @@ describe('App Component', () => {
});
});
});
describe("Version Display and What's New", () => {
beforeEach(() => {
vi.mock('./config', () => ({
default: {
app: {
version: '2.0.0',
commitMessage: 'A new version!',
commitUrl: 'https://example.com/commit/2.0.0',
},
},
}));
});
it('should display the version number and commit link', () => {
renderApp();
const versionLink = screen.getByText(`Version: 2.0.0`);
expect(versionLink).toBeInTheDocument();
expect(versionLink).toHaveAttribute('href', 'https://example.com/commit/2.0.0');
});
it('should open the "What\'s New" modal when the question mark icon is clicked', async () => {
renderApp();
const openButton = await screen.findByTitle("Show what's new in this version");
fireEvent.click(openButton);
// The mock AppGuard now renders the modal when it's open
expect(await screen.findByTestId('whats-new-modal-mock')).toBeInTheDocument();
});
});
});

View File

@@ -1,9 +1,9 @@
// src/App.tsx
import React, { useState, useCallback, useEffect } from 'react';
import { Routes, Route, useParams, useLocation, useNavigate } from 'react-router-dom';
import { Toaster } from 'react-hot-toast';
import { Routes, Route, useLocation, matchPath } from 'react-router-dom';
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import * as pdfjsLib from 'pdfjs-dist';
import { Footer } from './components/Footer'; // Assuming this is where your Footer component will live
import { Footer } from './components/Footer';
import { Header } from './components/Header';
import { logger } from './services/logger.client';
import type { Flyer, Profile, UserProfile } from './types';
@@ -13,18 +13,20 @@ import { AdminPage } from './pages/admin/AdminPage';
import { AdminRoute } from './components/AdminRoute';
import { CorrectionsPage } from './pages/admin/CorrectionsPage';
import { AdminStatsPage } from './pages/admin/AdminStatsPage';
import { FlyerReviewPage } from './pages/admin/FlyerReviewPage';
import { ResetPasswordPage } from './pages/ResetPasswordPage';
import { VoiceLabPage } from './pages/VoiceLabPage';
import { WhatsNewModal } from './components/WhatsNewModal';
import { FlyerCorrectionTool } from './components/FlyerCorrectionTool';
import { QuestionMarkCircleIcon } from './components/icons/QuestionMarkCircleIcon';
import { useAuth } from './hooks/useAuth';
import { useFlyers } from './hooks/useFlyers'; // Assuming useFlyers fetches all flyers
import { useFlyerItems } from './hooks/useFlyerItems'; // Import the new hook for flyer items
import { useFlyers } from './hooks/useFlyers';
import { useFlyerItems } from './hooks/useFlyerItems';
import { useModal } from './hooks/useModal';
import { MainLayout } from './layouts/MainLayout';
import config from './config';
import { HomePage } from './pages/HomePage';
import { AppGuard } from './components/AppGuard';
import { useAppInitialization } from './hooks/useAppInitialization';
// pdf.js worker configuration
// This is crucial for allowing pdf.js to process PDFs in a separate thread, preventing the UI from freezing.
@@ -35,14 +37,21 @@ pdfjsLib.GlobalWorkerOptions.workerSrc = new URL(
import.meta.url,
).toString();
// Create a client
const queryClient = new QueryClient();
function App() {
const { userProfile, authStatus, login, logout, updateProfile } = useAuth();
const { flyers } = useFlyers();
const [selectedFlyer, setSelectedFlyer] = useState<Flyer | null>(null);
const { openModal, closeModal, isModalOpen } = useModal();
const location = useLocation();
const navigate = useNavigate();
const params = useParams<{ flyerId?: string }>();
const match = matchPath('/flyers/:flyerId', location.pathname);
const flyerIdFromUrl = match?.params.flyerId;
// This hook now handles initialization effects (OAuth, version check, theme)
// and returns the theme/unit state needed by other components.
const { isDarkMode, unitSystem } = useAppInitialization();
// Debugging: Log renders to identify infinite loops
useEffect(() => {
@@ -50,17 +59,14 @@ function App() {
console.log('[App] Render:', {
flyersCount: flyers.length,
selectedFlyerId: selectedFlyer?.flyer_id,
paramsFlyerId: params?.flyerId, // This was a duplicate, fixed.
flyerIdFromUrl,
authStatus,
profileId: userProfile?.user.user_id,
locationSearch: location.search,
});
}
});
const [isDarkMode, setIsDarkMode] = useState(false);
const { flyerItems } = useFlyerItems(selectedFlyer);
const [unitSystem, setUnitSystem] = useState<'metric' | 'imperial'>('imperial');
// Define modal handlers with useCallback at the top level to avoid Rules of Hooks violations
const handleOpenProfile = useCallback(() => openModal('profile'), [openModal]);
@@ -105,37 +111,6 @@ function App() {
// --- State Synchronization and Error Handling ---
// Effect to set initial theme based on user profile, local storage, or system preference
useEffect(() => {
if (process.env.NODE_ENV === 'test')
console.log('[App] Effect: Theme Update', { profileId: userProfile?.user.user_id });
if (userProfile && userProfile.preferences?.darkMode !== undefined) {
// Preference from DB
const dbDarkMode = userProfile.preferences.darkMode;
setIsDarkMode(dbDarkMode);
document.documentElement.classList.toggle('dark', dbDarkMode);
} else {
// Fallback to local storage or system preference
const savedMode = localStorage.getItem('darkMode');
const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
const initialDarkMode = savedMode !== null ? savedMode === 'true' : prefersDark;
setIsDarkMode(initialDarkMode);
document.documentElement.classList.toggle('dark', initialDarkMode);
}
}, [userProfile?.preferences?.darkMode, userProfile?.user.user_id]);
// Effect to set initial unit system based on user profile or local storage
useEffect(() => {
if (userProfile && userProfile.preferences?.unitSystem) {
setUnitSystem(userProfile.preferences.unitSystem);
} else {
const savedSystem = localStorage.getItem('unitSystem') as 'metric' | 'imperial' | null;
if (savedSystem) {
setUnitSystem(savedSystem);
}
}
}, [userProfile?.preferences?.unitSystem, userProfile?.user.user_id]);
// This is the login handler that will be passed to the ProfileManager component.
const handleLoginSuccess = useCallback(
async (userProfile: UserProfile, token: string, _rememberMe: boolean) => {
@@ -153,36 +128,6 @@ function App() {
[login],
);
// Effect to handle the token from Google OAuth redirect
useEffect(() => {
const urlParams = new URLSearchParams(location.search);
const googleToken = urlParams.get('googleAuthToken');
if (googleToken) {
logger.info('Received Google Auth token from URL. Authenticating...');
// The login flow is now handled by the useAuth hook. We just need to trigger it.
// We pass only the token; the AuthProvider will fetch the user profile.
login(googleToken).catch((err) =>
logger.error('Failed to log in with Google token', { error: err }),
);
// Clean the token from the URL
navigate(location.pathname, { replace: true });
}
const githubToken = urlParams.get('githubAuthToken');
if (githubToken) {
logger.info('Received GitHub Auth token from URL. Authenticating...');
login(githubToken).catch((err) => {
logger.error('Failed to log in with GitHub token', { error: err });
// Optionally, redirect to a page with an error message
// navigate('/login?error=github_auth_failed');
});
// Clean the token from the URL
navigate(location.pathname, { replace: true });
}
}, [login, location.search, navigate, location.pathname]);
const handleFlyerSelect = useCallback(async (flyer: Flyer) => {
setSelectedFlyer(flyer);
}, []);
@@ -196,8 +141,6 @@ function App() {
// New effect to handle routing to a specific flyer ID from the URL
useEffect(() => {
const flyerIdFromUrl = params.flyerId;
if (flyerIdFromUrl && flyers.length > 0) {
const flyerId = parseInt(flyerIdFromUrl, 10);
const flyerToSelect = flyers.find((f) => f.flyer_id === flyerId);
@@ -205,36 +148,15 @@ function App() {
handleFlyerSelect(flyerToSelect);
}
}
}, [flyers, handleFlyerSelect, selectedFlyer, params.flyerId]);
}, [flyers, handleFlyerSelect, selectedFlyer, flyerIdFromUrl]);
// Read the application version injected at build time.
// This will only be available in the production build, not during local development.
const appVersion = config.app.version;
const commitMessage = config.app.commitMessage;
useEffect(() => {
if (appVersion) {
logger.info(`Application version: ${appVersion}`);
const lastSeenVersion = localStorage.getItem('lastSeenVersion');
// If the current version is new, show the "What's New" modal.
if (appVersion !== lastSeenVersion) {
openModal('whatsNew');
localStorage.setItem('lastSeenVersion', appVersion);
}
}
}, [appVersion]);
return (
<div className="bg-gray-100 dark:bg-gray-950 min-h-screen font-sans text-gray-800 dark:text-gray-200">
{/* Toaster component for displaying notifications. It's placed at the top level. */}
<Toaster position="top-center" reverseOrder={false} />
{/* Add CSS variables for toast theming based on dark mode */}
<style>{`
:root {
--toast-bg: ${isDarkMode ? '#4B5563' : '#FFFFFF'};
--toast-color: ${isDarkMode ? '#F9FAFB' : '#1F2937'};
}
`}</style>
// AppGuard now handles the main page wrapper, theme styles, and "What's New" modal
<AppGuard>
<Header
isDarkMode={isDarkMode}
unitSystem={unitSystem}
@@ -261,15 +183,6 @@ function App() {
/>
)}
{appVersion && commitMessage && (
<WhatsNewModal
isOpen={isModalOpen('whatsNew')}
onClose={handleCloseWhatsNew}
version={appVersion}
commitMessage={commitMessage}
/>
)}
{selectedFlyer && (
<FlyerCorrectionTool
isOpen={isModalOpen('correctionTool')}
@@ -316,6 +229,7 @@ function App() {
<Route path="/admin" element={<AdminPage />} />
<Route path="/admin/corrections" element={<CorrectionsPage />} />
<Route path="/admin/stats" element={<AdminStatsPage />} />
<Route path="/admin/flyer-review" element={<FlyerReviewPage />} />
<Route path="/admin/voice-lab" element={<VoiceLabPage />} />
</Route>
<Route path="/reset-password/:token" element={<ResetPasswordPage />} />
@@ -341,8 +255,14 @@ function App() {
)}
<Footer />
</div>
</AppGuard>
);
}
export default App;
const WrappedApp = () => (
<QueryClientProvider client={queryClient}>
<App />
</QueryClientProvider>
);
export default WrappedApp;

View File

@@ -1,9 +1,10 @@
// src/components/AchievementsList.test.tsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import { screen } from '@testing-library/react';
import { describe, it, expect } from 'vitest';
import { AchievementsList } from './AchievementsList';
import { createMockUserAchievement } from '../tests/utils/mockFactories';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
describe('AchievementsList', () => {
it('should render the list of achievements with correct details', () => {
@@ -22,9 +23,10 @@ describe('AchievementsList', () => {
points_value: 15,
}),
createMockUserAchievement({ achievement_id: 3, name: 'Unknown Achievement', icon: 'star' }), // This icon is not in the component's map
createMockUserAchievement({ achievement_id: 4, name: 'No Icon Achievement', icon: '' }), // Triggers the fallback for missing name
];
render(<AchievementsList achievements={mockAchievements} />);
renderWithProviders(<AchievementsList achievements={mockAchievements} />);
expect(screen.getByRole('heading', { name: /achievements/i })).toBeInTheDocument();
@@ -40,11 +42,19 @@ describe('AchievementsList', () => {
// Check achievement with default icon
expect(screen.getByText('Unknown Achievement')).toBeInTheDocument();
expect(screen.getByText('🏆')).toBeInTheDocument(); // Default icon
// We expect at least one trophy (for unknown achievement).
// Since we added another one that produces a trophy (No Icon), we use getAllByText.
expect(screen.getAllByText('🏆').length).toBeGreaterThan(0);
// Check achievement with missing icon (empty string)
expect(screen.getByText('No Icon Achievement')).toBeInTheDocument();
// Verify the specific placeholder class is rendered, ensuring the early return in Icon component is hit
const noIconCard = screen.getByText('No Icon Achievement').closest('.bg-white');
expect(noIconCard?.querySelector('.icon-placeholder')).toBeInTheDocument();
});
it('should render a message when there are no achievements', () => {
render(<AchievementsList achievements={[]} />);
renderWithProviders(<AchievementsList achievements={[]} />);
expect(
screen.getByText('No achievements earned yet. Keep exploring to unlock them!'),
).toBeInTheDocument();

View File

@@ -1,11 +1,12 @@
// src/components/AdminRoute.test.tsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import { describe, it, expect } from 'vitest';
import { MemoryRouter, Routes, Route } from 'react-router-dom';
import { screen } from '@testing-library/react';
import { describe, it, expect, vi } from 'vitest';
import { Routes, Route } from 'react-router-dom';
import { AdminRoute } from './AdminRoute';
import type { Profile } from '../types';
import { createMockProfile } from '../tests/utils/mockFactories';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Unmock the component to test the real implementation
vi.unmock('./AdminRoute');
@@ -14,15 +15,14 @@ const AdminContent = () => <div>Admin Page Content</div>;
const HomePage = () => <div>Home Page</div>;
const renderWithRouter = (profile: Profile | null, initialPath: string) => {
render(
<MemoryRouter initialEntries={[initialPath]}>
<Routes>
<Route path="/" element={<HomePage />} />
<Route path="/admin" element={<AdminRoute profile={profile} />}>
<Route index element={<AdminContent />} />
</Route>
</Routes>
</MemoryRouter>,
renderWithProviders(
<Routes>
<Route path="/" element={<HomePage />} />
<Route path="/admin" element={<AdminRoute profile={profile} />}>
<Route index element={<AdminContent />} />
</Route>
</Routes>,
{ initialEntries: [initialPath] },
);
};

View File

@@ -1,11 +1,12 @@
// src/components/AnonymousUserBanner.test.tsx
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { screen, fireEvent } from '@testing-library/react';
import { describe, it, expect, vi } from 'vitest';
import { AnonymousUserBanner } from './AnonymousUserBanner';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Mock the icon to ensure it is rendered correctly
vi.mock('../../../components/icons/InformationCircleIcon', () => ({
vi.mock('./icons/InformationCircleIcon', () => ({
InformationCircleIcon: (props: React.SVGProps<SVGSVGElement>) => (
<svg data-testid="info-icon" {...props} />
),
@@ -14,7 +15,7 @@ vi.mock('../../../components/icons/InformationCircleIcon', () => ({
describe('AnonymousUserBanner', () => {
it('should render the banner with the correct text content and accessibility role', () => {
const mockOnOpenProfile = vi.fn();
render(<AnonymousUserBanner onOpenProfile={mockOnOpenProfile} />);
renderWithProviders(<AnonymousUserBanner onOpenProfile={mockOnOpenProfile} />);
// Check for accessibility role
expect(screen.getByRole('alert')).toBeInTheDocument();
@@ -30,7 +31,7 @@ describe('AnonymousUserBanner', () => {
it('should call onOpenProfile when the "sign up or log in" button is clicked', () => {
const mockOnOpenProfile = vi.fn();
render(<AnonymousUserBanner onOpenProfile={mockOnOpenProfile} />);
renderWithProviders(<AnonymousUserBanner onOpenProfile={mockOnOpenProfile} />);
const loginButton = screen.getByRole('button', { name: /sign up or log in/i });
fireEvent.click(loginButton);

View File

@@ -1,6 +1,6 @@
// src/pages/admin/components/AnonymousUserBanner.tsx
// src/components/AnonymousUserBanner.tsx
import React from 'react';
import { InformationCircleIcon } from '../../../components/icons/InformationCircleIcon';
import { InformationCircleIcon } from './icons/InformationCircleIcon';
interface AnonymousUserBannerProps {
/**

View File

@@ -0,0 +1,97 @@
// src/components/AppGuard.test.tsx
import React from 'react';
import { screen, waitFor } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { AppGuard } from './AppGuard';
import { useAppInitialization } from '../hooks/useAppInitialization';
import * as apiClient from '../services/apiClient';
import { useModal } from '../hooks/useModal';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Mock dependencies
// The apiClient is mocked globally in `src/tests/setup/globalApiMock.ts`.
vi.mock('../hooks/useAppInitialization');
vi.mock('../hooks/useModal');
vi.mock('./WhatsNewModal', () => ({
WhatsNewModal: ({ isOpen }: { isOpen: boolean }) =>
isOpen ? <div data-testid="whats-new-modal-mock" /> : null,
}));
vi.mock('../config', () => ({
default: {
app: { version: '1.0.0', commitMessage: 'Test commit' },
},
}));
const mockedApiClient = vi.mocked(apiClient);
const mockedUseAppInitialization = vi.mocked(useAppInitialization);
const mockedUseModal = vi.mocked(useModal);
describe('AppGuard', () => {
beforeEach(() => {
vi.clearAllMocks();
// Default mocks
mockedUseAppInitialization.mockReturnValue({
isDarkMode: false,
unitSystem: 'imperial',
});
mockedUseModal.mockReturnValue({
isModalOpen: vi.fn().mockReturnValue(false),
openModal: vi.fn(),
closeModal: vi.fn(),
});
});
it('should render children', () => {
renderWithProviders(
<AppGuard>
<div>Child Content</div>
</AppGuard>,
);
expect(screen.getByText('Child Content')).toBeInTheDocument();
});
it('should render WhatsNewModal when it is open', () => {
mockedUseModal.mockReturnValue({
...mockedUseModal(),
isModalOpen: (modalId) => modalId === 'whatsNew',
});
renderWithProviders(
<AppGuard>
<div>Child</div>
</AppGuard>,
);
expect(screen.getByTestId('whats-new-modal-mock')).toBeInTheDocument();
});
it('should set dark mode styles for toaster', async () => {
mockedUseAppInitialization.mockReturnValue({
isDarkMode: true,
unitSystem: 'imperial',
});
renderWithProviders(
<AppGuard>
<div>Child</div>
</AppGuard>,
);
await waitFor(() => {
const styleTag = document.querySelector('style');
expect(styleTag).not.toBeNull();
expect(styleTag!.innerHTML).toContain('--toast-bg: #4B5563');
expect(styleTag!.innerHTML).toContain('--toast-color: #F9FAFB');
});
});
it('should set light mode styles for toaster', async () => {
renderWithProviders(
<AppGuard>
<div>Child</div>
</AppGuard>,
);
await waitFor(() => {
const styleTag = document.querySelector('style');
expect(styleTag).not.toBeNull();
expect(styleTag!.innerHTML).toContain('--toast-bg: #FFFFFF');
expect(styleTag!.innerHTML).toContain('--toast-color: #1F2937');
});
});
});

View File

@@ -0,0 +1,47 @@
// src/components/AppGuard.tsx
import React, { useCallback } from 'react';
import { Toaster } from 'react-hot-toast';
import { useAppInitialization } from '../hooks/useAppInitialization';
import { useModal } from '../hooks/useModal';
import { WhatsNewModal } from './WhatsNewModal';
import config from '../config';
interface AppGuardProps {
children: React.ReactNode;
}
export const AppGuard: React.FC<AppGuardProps> = ({ children }) => {
// This hook handles OAuth tokens, version checks, and returns theme state.
const { isDarkMode } = useAppInitialization();
const { isModalOpen, closeModal } = useModal();
const handleCloseWhatsNew = useCallback(() => closeModal('whatsNew'), [closeModal]);
const appVersion = config.app.version;
const commitMessage = config.app.commitMessage;
return (
<div className="bg-gray-100 dark:bg-gray-950 min-h-screen font-sans text-gray-800 dark:text-gray-200">
{/* Toaster component for displaying notifications. It's placed at the top level. */}
<Toaster position="top-center" reverseOrder={false} />
{/* Add CSS variables for toast theming based on dark mode */}
<style>{`
:root {
--toast-bg: ${isDarkMode ? '#4B5563' : '#FFFFFF'};
--toast-color: ${isDarkMode ? '#F9FAFB' : '#1F2937'};
}
`}</style>
{appVersion && commitMessage && (
<WhatsNewModal
isOpen={isModalOpen('whatsNew')}
onClose={handleCloseWhatsNew}
version={appVersion}
commitMessage={commitMessage}
/>
)}
{children}
</div>
);
};

View File

@@ -1,8 +1,9 @@
// src/components/ConfirmationModal.test.tsx
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { screen, fireEvent } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { ConfirmationModal } from './ConfirmationModal';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
describe('ConfirmationModal (in components)', () => {
const mockOnClose = vi.fn();
@@ -21,12 +22,12 @@ describe('ConfirmationModal (in components)', () => {
});
it('should not render when isOpen is false', () => {
const { container } = render(<ConfirmationModal {...defaultProps} isOpen={false} />);
const { container } = renderWithProviders(<ConfirmationModal {...defaultProps} isOpen={false} />);
expect(container.firstChild).toBeNull();
});
it('should render correctly when isOpen is true', () => {
render(<ConfirmationModal {...defaultProps} />);
renderWithProviders(<ConfirmationModal {...defaultProps} />);
expect(screen.getByRole('heading', { name: 'Confirm Action' })).toBeInTheDocument();
expect(screen.getByText('Are you sure you want to do this?')).toBeInTheDocument();
expect(screen.getByRole('button', { name: 'Confirm' })).toBeInTheDocument();
@@ -34,38 +35,38 @@ describe('ConfirmationModal (in components)', () => {
});
it('should call onConfirm when the confirm button is clicked', () => {
render(<ConfirmationModal {...defaultProps} />);
renderWithProviders(<ConfirmationModal {...defaultProps} />);
fireEvent.click(screen.getByRole('button', { name: 'Confirm' }));
expect(mockOnConfirm).toHaveBeenCalledTimes(1);
});
it('should call onClose when the cancel button is clicked', () => {
render(<ConfirmationModal {...defaultProps} />);
renderWithProviders(<ConfirmationModal {...defaultProps} />);
fireEvent.click(screen.getByRole('button', { name: 'Cancel' }));
expect(mockOnClose).toHaveBeenCalledTimes(1);
});
it('should call onClose when the close icon is clicked', () => {
render(<ConfirmationModal {...defaultProps} />);
renderWithProviders(<ConfirmationModal {...defaultProps} />);
fireEvent.click(screen.getByLabelText('Close confirmation modal'));
expect(mockOnClose).toHaveBeenCalledTimes(1);
});
it('should call onClose when the overlay is clicked', () => {
render(<ConfirmationModal {...defaultProps} />);
renderWithProviders(<ConfirmationModal {...defaultProps} />);
// The overlay is the parent of the modal content div
fireEvent.click(screen.getByRole('dialog'));
expect(mockOnClose).toHaveBeenCalledTimes(1);
});
it('should not call onClose when clicking inside the modal content', () => {
render(<ConfirmationModal {...defaultProps} />);
renderWithProviders(<ConfirmationModal {...defaultProps} />);
fireEvent.click(screen.getByText('Are you sure you want to do this?'));
expect(mockOnClose).not.toHaveBeenCalled();
});
it('should render custom button text and classes', () => {
render(
renderWithProviders(
<ConfirmationModal
{...defaultProps}
confirmButtonText="Yes, Delete"

View File

@@ -1,8 +1,9 @@
// src/components/DarkModeToggle.test.tsx
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { screen, fireEvent } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { DarkModeToggle } from './DarkModeToggle';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Mock the icon components to isolate the toggle's logic
vi.mock('./icons/SunIcon', () => ({
@@ -20,7 +21,7 @@ describe('DarkModeToggle', () => {
});
it('should render in light mode state', () => {
render(<DarkModeToggle isDarkMode={false} onToggle={mockOnToggle} />);
renderWithProviders(<DarkModeToggle isDarkMode={false} onToggle={mockOnToggle} />);
const checkbox = screen.getByRole('checkbox');
expect(checkbox).not.toBeChecked();
@@ -29,7 +30,7 @@ describe('DarkModeToggle', () => {
});
it('should render in dark mode state', () => {
render(<DarkModeToggle isDarkMode={true} onToggle={mockOnToggle} />);
renderWithProviders(<DarkModeToggle isDarkMode={true} onToggle={mockOnToggle} />);
const checkbox = screen.getByRole('checkbox');
expect(checkbox).toBeChecked();
@@ -38,7 +39,7 @@ describe('DarkModeToggle', () => {
});
it('should call onToggle when the label is clicked', () => {
render(<DarkModeToggle isDarkMode={false} onToggle={mockOnToggle} />);
renderWithProviders(<DarkModeToggle isDarkMode={false} onToggle={mockOnToggle} />);
// Clicking the label triggers the checkbox change
const label = screen.getByTitle('Switch to Dark Mode');

View File

@@ -0,0 +1,67 @@
// src/components/Dashboard.test.tsx
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { screen } from '@testing-library/react';
import { Dashboard } from './Dashboard';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Mock child components to isolate Dashboard logic
// Note: The Dashboard component imports these using '../components/RecipeSuggester'
// which resolves to the same file as './RecipeSuggester' when inside src/components.
vi.mock('./RecipeSuggester', () => ({
RecipeSuggester: () => <div data-testid="recipe-suggester-mock">Recipe Suggester</div>,
}));
vi.mock('./FlyerCountDisplay', () => ({
FlyerCountDisplay: () => <div data-testid="flyer-count-display-mock">Flyer Count Display</div>,
}));
vi.mock('./Leaderboard', () => ({
Leaderboard: () => <div data-testid="leaderboard-mock">Leaderboard</div>,
}));
describe('Dashboard Component', () => {
beforeEach(() => {
vi.clearAllMocks();
});
it('renders the dashboard title', () => {
console.log('TEST: Verifying dashboard title render');
renderWithProviders(<Dashboard />);
expect(screen.getByRole('heading', { name: /dashboard/i, level: 1 })).toBeInTheDocument();
});
it('renders the RecipeSuggester widget', () => {
console.log('TEST: Verifying RecipeSuggester presence');
renderWithProviders(<Dashboard />);
expect(screen.getByTestId('recipe-suggester-mock')).toBeInTheDocument();
});
it('renders the FlyerCountDisplay widget within the "Your Flyers" section', () => {
console.log('TEST: Verifying FlyerCountDisplay presence and section title');
renderWithProviders(<Dashboard />);
// Check for the section heading
expect(screen.getByRole('heading', { name: /your flyers/i, level: 2 })).toBeInTheDocument();
// Check for the component
expect(screen.getByTestId('flyer-count-display-mock')).toBeInTheDocument();
});
it('renders the Leaderboard widget in the sidebar area', () => {
console.log('TEST: Verifying Leaderboard presence');
renderWithProviders(<Dashboard />);
expect(screen.getByTestId('leaderboard-mock')).toBeInTheDocument();
});
it('renders with the correct grid layout classes', () => {
console.log('TEST: Verifying layout classes');
const { container } = renderWithProviders(<Dashboard />);
// The main grid container
const gridContainer = container.querySelector('.grid');
expect(gridContainer).toBeInTheDocument();
expect(gridContainer).toHaveClass('grid-cols-1');
expect(gridContainer).toHaveClass('lg:grid-cols-3');
expect(gridContainer).toHaveClass('gap-6');
});
});

View File

@@ -0,0 +1,33 @@
import React from 'react';
import { RecipeSuggester } from '../components/RecipeSuggester';
import { FlyerCountDisplay } from '../components/FlyerCountDisplay';
import { Leaderboard } from '../components/Leaderboard';
export const Dashboard: React.FC = () => {
return (
<div className="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8 py-8">
<h1 className="text-2xl font-bold text-gray-900 dark:text-white mb-6">Dashboard</h1>
<div className="grid grid-cols-1 lg:grid-cols-3 gap-6">
{/* Main Content Area */}
<div className="lg:col-span-2 space-y-6">
{/* Recipe Suggester Section */}
<RecipeSuggester />
{/* Other Dashboard Widgets */}
<div className="bg-white dark:bg-gray-800 shadow rounded-lg p-6">
<h2 className="text-lg font-medium text-gray-900 dark:text-white mb-4">Your Flyers</h2>
<FlyerCountDisplay />
</div>
</div>
{/* Sidebar Area */}
<div className="space-y-6">
<Leaderboard />
</div>
</div>
</div>
);
};
export default Dashboard;

View File

@@ -1,24 +1,25 @@
// src/components/ErrorDisplay.test.tsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import { screen } from '@testing-library/react';
import { describe, it, expect } from 'vitest';
import { ErrorDisplay } from './ErrorDisplay';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
describe('ErrorDisplay (in components)', () => {
it('should not render when the message is empty', () => {
const { container } = render(<ErrorDisplay message="" />);
const { container } = renderWithProviders(<ErrorDisplay message="" />);
expect(container.firstChild).toBeNull();
});
it('should not render when the message is null', () => {
// The component expects a string, but we test for nullish values as a safeguard.
const { container } = render(<ErrorDisplay message={null as unknown as string} />);
const { container } = renderWithProviders(<ErrorDisplay message={null as unknown as string} />);
expect(container.firstChild).toBeNull();
});
it('should render the error message when provided', () => {
const errorMessage = 'Something went terribly wrong.';
render(<ErrorDisplay message={errorMessage} />);
renderWithProviders(<ErrorDisplay message={errorMessage} />);
const alert = screen.getByRole('alert');
expect(alert).toBeInTheDocument();

View File

@@ -1,31 +1,25 @@
// src/components/FlyerCorrectionTool.test.tsx
import React from 'react';
import { render, screen, fireEvent, waitFor, act } from '@testing-library/react';
import { screen, fireEvent, waitFor, act } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach, type Mocked } from 'vitest';
import { FlyerCorrectionTool } from './FlyerCorrectionTool';
import * as aiApiClient from '../services/aiApiClient';
import { notifyError, notifySuccess } from '../services/notificationService';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Unmock the component to test the real implementation
vi.unmock('./FlyerCorrectionTool');
// Mock dependencies
vi.mock('../services/aiApiClient');
vi.mock('../services/notificationService');
vi.mock('../services/logger', () => ({
logger: {
error: vi.fn(),
},
}));
const mockedAiApiClient = aiApiClient as Mocked<typeof aiApiClient>;
// The aiApiClient, notificationService, and logger are mocked globally.
// We can get a typed reference to the aiApiClient for individual test overrides.
const mockedAiApiClient = vi.mocked(aiApiClient);
const mockedNotifySuccess = notifySuccess as Mocked<typeof notifySuccess>;
const mockedNotifyError = notifyError as Mocked<typeof notifyError>;
const defaultProps = {
isOpen: true,
onClose: vi.fn(),
imageUrl: 'http://example.com/flyer.jpg',
imageUrl: 'https://example.com/flyer.jpg',
onDataExtracted: vi.fn(),
};
@@ -54,12 +48,12 @@ describe('FlyerCorrectionTool', () => {
});
it('should not render when isOpen is false', () => {
const { container } = render(<FlyerCorrectionTool {...defaultProps} isOpen={false} />);
const { container } = renderWithProviders(<FlyerCorrectionTool {...defaultProps} isOpen={false} />);
expect(container.firstChild).toBeNull();
});
it('should render correctly when isOpen is true', () => {
render(<FlyerCorrectionTool {...defaultProps} />);
renderWithProviders(<FlyerCorrectionTool {...defaultProps} />);
expect(screen.getByRole('heading', { name: /flyer correction tool/i })).toBeInTheDocument();
expect(screen.getByAltText('Flyer for correction')).toBeInTheDocument();
expect(screen.getByRole('button', { name: /extract store name/i })).toBeInTheDocument();
@@ -67,7 +61,7 @@ describe('FlyerCorrectionTool', () => {
});
it('should call onClose when the close button is clicked', () => {
render(<FlyerCorrectionTool {...defaultProps} />);
renderWithProviders(<FlyerCorrectionTool {...defaultProps} />);
// Use the specific aria-label defined in the component to find the close button
const closeButton = screen.getByLabelText(/close correction tool/i);
fireEvent.click(closeButton);
@@ -75,13 +69,13 @@ describe('FlyerCorrectionTool', () => {
});
it('should have disabled extraction buttons initially', () => {
render(<FlyerCorrectionTool {...defaultProps} />);
renderWithProviders(<FlyerCorrectionTool {...defaultProps} />);
expect(screen.getByRole('button', { name: /extract store name/i })).toBeDisabled();
expect(screen.getByRole('button', { name: /extract sale dates/i })).toBeDisabled();
});
it('should enable extraction buttons after a selection is made', () => {
render(<FlyerCorrectionTool {...defaultProps} />);
renderWithProviders(<FlyerCorrectionTool {...defaultProps} />);
const canvas = screen.getByRole('dialog').querySelector('canvas')!;
// Simulate drawing a rectangle
@@ -94,7 +88,7 @@ describe('FlyerCorrectionTool', () => {
});
it('should stop drawing when the mouse leaves the canvas', () => {
render(<FlyerCorrectionTool {...defaultProps} />);
renderWithProviders(<FlyerCorrectionTool {...defaultProps} />);
const canvas = screen.getByRole('dialog').querySelector('canvas')!;
fireEvent.mouseDown(canvas, { clientX: 10, clientY: 10 });
@@ -114,7 +108,7 @@ describe('FlyerCorrectionTool', () => {
});
mockedAiApiClient.rescanImageArea.mockReturnValue(rescanPromise);
render(<FlyerCorrectionTool {...defaultProps} />);
renderWithProviders(<FlyerCorrectionTool {...defaultProps} />);
// Wait for the image fetch to complete to ensure 'imageFile' state is populated
console.log('--- [TEST LOG] ---: Awaiting image fetch inside component...');
@@ -192,7 +186,7 @@ describe('FlyerCorrectionTool', () => {
// Mock fetch to reject
global.fetch = vi.fn(() => Promise.reject(new Error('Network error'))) as Mocked<typeof fetch>;
render(<FlyerCorrectionTool {...defaultProps} />);
renderWithProviders(<FlyerCorrectionTool {...defaultProps} />);
await waitFor(() => {
expect(mockedNotifyError).toHaveBeenCalledWith('Could not load the image for correction.');
@@ -211,7 +205,7 @@ describe('FlyerCorrectionTool', () => {
return new Promise(() => {});
}) as Mocked<typeof fetch>;
render(<FlyerCorrectionTool {...defaultProps} />);
renderWithProviders(<FlyerCorrectionTool {...defaultProps} />);
const canvas = screen.getByRole('dialog').querySelector('canvas')!;
@@ -238,7 +232,7 @@ describe('FlyerCorrectionTool', () => {
it('should handle non-standard API errors during rescan', async () => {
console.log('TEST: Starting "should handle non-standard API errors during rescan"');
mockedAiApiClient.rescanImageArea.mockRejectedValue('A plain string error');
render(<FlyerCorrectionTool {...defaultProps} />);
renderWithProviders(<FlyerCorrectionTool {...defaultProps} />);
// Wait for image fetch to ensure imageFile is set before we interact
await waitFor(() => expect(global.fetch).toHaveBeenCalled());
@@ -258,4 +252,54 @@ describe('FlyerCorrectionTool', () => {
expect(mockedNotifyError).toHaveBeenCalledWith('An unknown error occurred.');
});
});
it('should handle API failure response (ok: false) correctly', async () => {
console.log('TEST: Starting "should handle API failure response (ok: false) correctly"');
mockedAiApiClient.rescanImageArea.mockResolvedValue({
ok: false,
json: async () => ({ message: 'Custom API Error' }),
} as Response);
renderWithProviders(<FlyerCorrectionTool {...defaultProps} />);
// Wait for image fetch
await waitFor(() => expect(global.fetch).toHaveBeenCalled());
// Draw selection
const canvas = screen.getByRole('dialog').querySelector('canvas')!;
fireEvent.mouseDown(canvas, { clientX: 10, clientY: 10 });
fireEvent.mouseMove(canvas, { clientX: 50, clientY: 50 });
fireEvent.mouseUp(canvas);
// Click extract
fireEvent.click(screen.getByRole('button', { name: /extract store name/i }));
await waitFor(() => {
expect(mockedNotifyError).toHaveBeenCalledWith('Custom API Error');
});
});
it('should redraw the canvas when the image loads', () => {
console.log('TEST: Starting "should redraw the canvas when the image loads"');
const clearRectSpy = vi.fn();
// Override the getContext mock for this test to capture the spy
window.HTMLCanvasElement.prototype.getContext = vi.fn(() => ({
clearRect: clearRectSpy,
strokeRect: vi.fn(),
setLineDash: vi.fn(),
strokeStyle: '',
lineWidth: 0,
})) as any;
renderWithProviders(<FlyerCorrectionTool {...defaultProps} />);
const image = screen.getByAltText('Flyer for correction');
// The draw function is called on mount via useEffect, so we clear that call.
clearRectSpy.mockClear();
// Simulate image load event which triggers onLoad={draw}
fireEvent.load(image);
expect(clearRectSpy).toHaveBeenCalled();
});
});

View File

@@ -44,7 +44,7 @@ export const FlyerCorrectionTool: React.FC<FlyerCorrectionToolProps> = ({
})
.catch((err) => {
console.error('[DEBUG] FlyerCorrectionTool: Failed to fetch image.', { err });
logger.error('Failed to fetch image for correction tool', { error: err });
logger.error({ error: err }, 'Failed to fetch image for correction tool');
notifyError('Could not load the image for correction.');
});
}
@@ -164,7 +164,7 @@ export const FlyerCorrectionTool: React.FC<FlyerCorrectionToolProps> = ({
const msg = err instanceof Error ? err.message : 'An unknown error occurred.';
console.error('[DEBUG] handleRescan: Caught an error.', { error: err });
notifyError(msg);
logger.error('Error during rescan:', { error: err });
logger.error({ error: err }, 'Error during rescan:');
} finally {
console.debug('[DEBUG] handleRescan: Finished. Setting isProcessing=false.');
setIsProcessing(false);

View File

@@ -1,11 +1,12 @@
// src/components/FlyerCountDisplay.test.tsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import { screen } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { FlyerCountDisplay } from './FlyerCountDisplay';
import { useFlyers } from '../hooks/useFlyers';
import type { Flyer } from '../types';
import { createMockFlyer } from '../tests/utils/mockFactories';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Mock the dependencies
vi.mock('../hooks/useFlyers');
@@ -32,7 +33,7 @@ describe('FlyerCountDisplay', () => {
});
// Act: Render the component.
render(<FlyerCountDisplay />);
renderWithProviders(<FlyerCountDisplay />);
// Assert: Check that the loading spinner is visible.
expect(screen.getByTestId('loading-spinner')).toBeInTheDocument();
@@ -53,7 +54,7 @@ describe('FlyerCountDisplay', () => {
});
// Act
render(<FlyerCountDisplay />);
renderWithProviders(<FlyerCountDisplay />);
// Assert: Check that the error message is displayed.
expect(screen.getByRole('alert')).toHaveTextContent(errorMessage);
@@ -73,7 +74,7 @@ describe('FlyerCountDisplay', () => {
});
// Act
render(<FlyerCountDisplay />);
renderWithProviders(<FlyerCountDisplay />);
// Assert: Check that the correct count is displayed.
const countDisplay = screen.getByTestId('flyer-count');

View File

@@ -1,8 +1,9 @@
// src/components/Footer.test.tsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import { screen } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { Footer } from './Footer';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
describe('Footer', () => {
beforeEach(() => {
@@ -21,7 +22,7 @@ describe('Footer', () => {
vi.setSystemTime(mockDate);
// Act: Render the component
render(<Footer />);
renderWithProviders(<Footer />);
// Assert: Check that the rendered text includes the mocked year
expect(screen.getByText('Copyright 2025-2025')).toBeInTheDocument();
@@ -29,7 +30,7 @@ describe('Footer', () => {
it('should display the correct year when it changes', () => {
vi.setSystemTime(new Date('2030-01-01T00:00:00Z'));
render(<Footer />);
renderWithProviders(<Footer />);
expect(screen.getByText('Copyright 2025-2030')).toBeInTheDocument();
});
});

View File

@@ -1,11 +1,11 @@
// src/components/Header.test.tsx
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { screen, fireEvent } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { MemoryRouter } from 'react-router-dom';
import { Header } from './Header';
import type { UserProfile } from '../types';
import { createMockUserProfile } from '../tests/utils/mockFactories';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Unmock the component to test the real implementation
vi.unmock('./Header');
@@ -34,12 +34,8 @@ const defaultProps = {
};
// Helper to render with router context
const renderWithRouter = (props: Partial<React.ComponentProps<typeof Header>>) => {
return render(
<MemoryRouter>
<Header {...defaultProps} {...props} />
</MemoryRouter>,
);
const renderHeader = (props: Partial<React.ComponentProps<typeof Header>>) => {
return renderWithProviders(<Header {...defaultProps} {...props} />);
};
describe('Header', () => {
@@ -48,30 +44,30 @@ describe('Header', () => {
});
it('should render the application title', () => {
renderWithRouter({});
renderHeader({});
expect(screen.getByRole('heading', { name: /flyer crawler/i })).toBeInTheDocument();
});
it('should display unit system and theme mode', () => {
renderWithRouter({ isDarkMode: true, unitSystem: 'metric' });
renderHeader({ isDarkMode: true, unitSystem: 'metric' });
expect(screen.getByText(/metric/i)).toBeInTheDocument();
expect(screen.getByText(/dark mode/i)).toBeInTheDocument();
});
describe('When user is logged out', () => {
it('should show a Login button', () => {
renderWithRouter({ userProfile: null, authStatus: 'SIGNED_OUT' });
renderHeader({ userProfile: null, authStatus: 'SIGNED_OUT' });
expect(screen.getByRole('button', { name: /login/i })).toBeInTheDocument();
});
it('should call onOpenProfile when Login button is clicked', () => {
renderWithRouter({ userProfile: null, authStatus: 'SIGNED_OUT' });
renderHeader({ userProfile: null, authStatus: 'SIGNED_OUT' });
fireEvent.click(screen.getByRole('button', { name: /login/i }));
expect(mockOnOpenProfile).toHaveBeenCalledTimes(1);
});
it('should not show user-specific buttons', () => {
renderWithRouter({ userProfile: null, authStatus: 'SIGNED_OUT' });
renderHeader({ userProfile: null, authStatus: 'SIGNED_OUT' });
expect(screen.queryByLabelText(/open voice assistant/i)).not.toBeInTheDocument();
expect(screen.queryByLabelText(/open my account settings/i)).not.toBeInTheDocument();
expect(screen.queryByRole('button', { name: /logout/i })).not.toBeInTheDocument();
@@ -80,29 +76,29 @@ describe('Header', () => {
describe('When user is authenticated', () => {
it('should display the user email', () => {
renderWithRouter({ userProfile: mockUserProfile, authStatus: 'AUTHENTICATED' });
renderHeader({ userProfile: mockUserProfile, authStatus: 'AUTHENTICATED' });
expect(screen.getByText(mockUserProfile.user.email)).toBeInTheDocument();
});
it('should display "Guest" for anonymous users', () => {
renderWithRouter({ userProfile: mockUserProfile, authStatus: 'SIGNED_OUT' });
renderHeader({ userProfile: mockUserProfile, authStatus: 'SIGNED_OUT' });
expect(screen.getByText(/guest/i)).toBeInTheDocument();
});
it('should call onOpenVoiceAssistant when microphone icon is clicked', () => {
renderWithRouter({ userProfile: mockUserProfile, authStatus: 'AUTHENTICATED' });
renderHeader({ userProfile: mockUserProfile, authStatus: 'AUTHENTICATED' });
fireEvent.click(screen.getByLabelText(/open voice assistant/i));
expect(mockOnOpenVoiceAssistant).toHaveBeenCalledTimes(1);
});
it('should call onOpenProfile when cog icon is clicked', () => {
renderWithRouter({ userProfile: mockUserProfile, authStatus: 'AUTHENTICATED' });
renderHeader({ userProfile: mockUserProfile, authStatus: 'AUTHENTICATED' });
fireEvent.click(screen.getByLabelText(/open my account settings/i));
expect(mockOnOpenProfile).toHaveBeenCalledTimes(1);
});
it('should call onSignOut when Logout button is clicked', () => {
renderWithRouter({ userProfile: mockUserProfile, authStatus: 'AUTHENTICATED' });
renderHeader({ userProfile: mockUserProfile, authStatus: 'AUTHENTICATED' });
fireEvent.click(screen.getByRole('button', { name: /logout/i }));
expect(mockOnSignOut).toHaveBeenCalledTimes(1);
});
@@ -110,14 +106,14 @@ describe('Header', () => {
describe('Admin user', () => {
it('should show the Admin Area link for admin users', () => {
renderWithRouter({ userProfile: mockAdminProfile, authStatus: 'AUTHENTICATED' });
renderHeader({ userProfile: mockAdminProfile, authStatus: 'AUTHENTICATED' });
const adminLink = screen.getByTitle(/admin area/i);
expect(adminLink).toBeInTheDocument();
expect(adminLink.closest('a')).toHaveAttribute('href', '/admin');
});
it('should not show the Admin Area link for non-admin users', () => {
renderWithRouter({ userProfile: mockUserProfile, authStatus: 'AUTHENTICATED' });
renderHeader({ userProfile: mockUserProfile, authStatus: 'AUTHENTICATED' });
expect(screen.queryByTitle(/admin area/i)).not.toBeInTheDocument();
});
});

View File

@@ -1,21 +1,17 @@
// src/components/Leaderboard.test.tsx
import React from 'react';
import { render, screen, waitFor } from '@testing-library/react';
import { screen, waitFor } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach, type Mocked } from 'vitest';
import Leaderboard from './Leaderboard';
import * as apiClient from '../services/apiClient';
import { LeaderboardUser } from '../types';
import { createMockLeaderboardUser } from '../tests/utils/mockFactories';
import { createMockLogger } from '../tests/utils/mockLogger';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Mock the apiClient
vi.mock('../services/apiClient'); // This was correct
const mockedApiClient = apiClient as Mocked<typeof apiClient>;
// Mock the logger
vi.mock('../services/logger', () => ({
logger: createMockLogger(),
}));
// The apiClient and logger are mocked globally.
// We can get a typed reference to the apiClient for individual test overrides.
const mockedApiClient = vi.mocked(apiClient);
// Mock lucide-react icons to prevent rendering errors in the test environment
vi.mock('lucide-react', () => ({
@@ -29,7 +25,7 @@ const mockLeaderboardData: LeaderboardUser[] = [
createMockLeaderboardUser({
user_id: 'user-2',
full_name: 'Bob',
avatar_url: 'http://example.com/bob.jpg',
avatar_url: 'https://example.com/bob.jpg',
points: 950,
rank: '2',
}),
@@ -45,13 +41,13 @@ describe('Leaderboard', () => {
it('should display a loading message initially', () => {
// Mock a pending promise that never resolves to keep it in the loading state
mockedApiClient.fetchLeaderboard.mockReturnValue(new Promise(() => {}));
render(<Leaderboard />);
renderWithProviders(<Leaderboard />);
expect(screen.getByText('Loading Leaderboard...')).toBeInTheDocument();
});
it('should display an error message if the API call fails', async () => {
mockedApiClient.fetchLeaderboard.mockResolvedValue(new Response(null, { status: 500 }));
render(<Leaderboard />);
renderWithProviders(<Leaderboard />);
await waitFor(() => {
expect(screen.getByRole('alert')).toBeInTheDocument();
@@ -62,7 +58,7 @@ describe('Leaderboard', () => {
it('should display a generic error for unknown error types', async () => {
const unknownError = 'A string error';
mockedApiClient.fetchLeaderboard.mockRejectedValue(unknownError);
render(<Leaderboard />);
renderWithProviders(<Leaderboard />);
await waitFor(() => {
expect(screen.getByRole('alert')).toBeInTheDocument();
@@ -72,7 +68,7 @@ describe('Leaderboard', () => {
it('should display a message when the leaderboard is empty', async () => {
mockedApiClient.fetchLeaderboard.mockResolvedValue(new Response(JSON.stringify([])));
render(<Leaderboard />);
renderWithProviders(<Leaderboard />);
await waitFor(() => {
expect(
@@ -85,7 +81,7 @@ describe('Leaderboard', () => {
mockedApiClient.fetchLeaderboard.mockResolvedValue(
new Response(JSON.stringify(mockLeaderboardData)),
);
render(<Leaderboard />);
renderWithProviders(<Leaderboard />);
await waitFor(() => {
expect(screen.getByRole('heading', { name: 'Top Users' })).toBeInTheDocument();
@@ -99,7 +95,7 @@ describe('Leaderboard', () => {
// Check for correct avatar URLs
const bobAvatar = screen.getByAltText('Bob') as HTMLImageElement;
expect(bobAvatar.src).toBe('http://example.com/bob.jpg');
expect(bobAvatar.src).toBe('https://example.com/bob.jpg');
const aliceAvatar = screen.getByAltText('Alice') as HTMLImageElement;
expect(aliceAvatar.src).toContain('api.dicebear.com'); // Check for fallback avatar
@@ -110,7 +106,7 @@ describe('Leaderboard', () => {
mockedApiClient.fetchLeaderboard.mockResolvedValue(
new Response(JSON.stringify(mockLeaderboardData)),
);
render(<Leaderboard />);
renderWithProviders(<Leaderboard />);
await waitFor(() => {
// Rank 1, 2, and 3 should have a crown icon
@@ -129,7 +125,7 @@ describe('Leaderboard', () => {
mockedApiClient.fetchLeaderboard.mockResolvedValue(
new Response(JSON.stringify(dataWithMissingNames)),
);
render(<Leaderboard />);
renderWithProviders(<Leaderboard />);
await waitFor(() => {
// Check for fallback name

View File

@@ -1,19 +1,19 @@
// src/components/LoadingSpinner.test.tsx
import React from 'react';
import { render } from '@testing-library/react';
import { describe, it, expect } from 'vitest';
import { LoadingSpinner } from './LoadingSpinner';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
describe('LoadingSpinner (in components)', () => {
it('should render the SVG with animation classes', () => {
const { container } = render(<LoadingSpinner />);
const { container } = renderWithProviders(<LoadingSpinner />);
const svgElement = container.querySelector('svg');
expect(svgElement).toBeInTheDocument();
expect(svgElement).toHaveClass('animate-spin');
});
it('should contain the correct SVG paths for the spinner graphic', () => {
const { container } = render(<LoadingSpinner />);
const { container } = renderWithProviders(<LoadingSpinner />);
const circle = container.querySelector('circle');
const path = container.querySelector('path');
expect(circle).toBeInTheDocument();

View File

@@ -1,9 +1,10 @@
// src/components/MapView.test.tsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import { screen } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { MapView } from './MapView';
import config from '../config';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Create a type-safe mocked version of the config for easier manipulation
const mockedConfig = vi.mocked(config);
@@ -40,14 +41,14 @@ describe('MapView', () => {
describe('when API key is not configured', () => {
it('should render a disabled message', () => {
render(<MapView {...defaultProps} />);
renderWithProviders(<MapView {...defaultProps} />);
expect(
screen.getByText('Map view is disabled: API key is not configured.'),
).toBeInTheDocument();
});
it('should not render the iframe', () => {
render(<MapView {...defaultProps} />);
renderWithProviders(<MapView {...defaultProps} />);
// Use queryByTitle because iframes don't have a default "iframe" role
expect(screen.queryByTitle('Map view')).not.toBeInTheDocument();
});
@@ -62,7 +63,7 @@ describe('MapView', () => {
});
it('should render the iframe with the correct src URL', () => {
render(<MapView {...defaultProps} />);
renderWithProviders(<MapView {...defaultProps} />);
// Use getByTitle to access the iframe
const iframe = screen.getByTitle('Map view');

View File

@@ -1,8 +1,9 @@
// src/components/PasswordInput.test.tsx
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { screen, fireEvent } from '@testing-library/react';
import { describe, it, expect, vi } from 'vitest';
import { PasswordInput } from './PasswordInput';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Mock the child PasswordStrengthIndicator component to isolate the test (relative to new location)
vi.mock('./PasswordStrengthIndicator', () => ({
PasswordStrengthIndicator: ({ password }: { password?: string }) => (
@@ -12,13 +13,13 @@ vi.mock('./PasswordStrengthIndicator', () => ({
describe('PasswordInput (in auth feature)', () => {
it('should render as a password input by default', () => {
render(<PasswordInput placeholder="Enter password" />);
renderWithProviders(<PasswordInput placeholder="Enter password" />);
const input = screen.getByPlaceholderText('Enter password');
expect(input).toHaveAttribute('type', 'password');
});
it('should toggle input type between password and text when the eye icon is clicked', () => {
render(<PasswordInput placeholder="Enter password" />);
renderWithProviders(<PasswordInput placeholder="Enter password" />);
const input = screen.getByPlaceholderText('Enter password');
const toggleButton = screen.getByRole('button', { name: /show password/i });
@@ -38,7 +39,7 @@ describe('PasswordInput (in auth feature)', () => {
it('should pass through standard input attributes', () => {
const handleChange = vi.fn();
render(
renderWithProviders(
<PasswordInput
value="test"
onChange={handleChange}
@@ -56,38 +57,38 @@ describe('PasswordInput (in auth feature)', () => {
});
it('should not show strength indicator by default', () => {
render(<PasswordInput value="some-password" onChange={() => {}} />);
renderWithProviders(<PasswordInput value="some-password" onChange={() => {}} />);
expect(screen.queryByTestId('strength-indicator')).not.toBeInTheDocument();
});
it('should show strength indicator when showStrength is true and there is a value', () => {
render(<PasswordInput value="some-password" showStrength onChange={() => {}} />);
renderWithProviders(<PasswordInput value="some-password" showStrength onChange={() => {}} />);
const indicator = screen.getByTestId('strength-indicator');
expect(indicator).toBeInTheDocument();
expect(indicator).toHaveTextContent('Strength for: some-password');
});
it('should not show strength indicator when showStrength is true but value is empty', () => {
render(<PasswordInput value="" showStrength onChange={() => {}} />);
renderWithProviders(<PasswordInput value="" showStrength onChange={() => {}} />);
expect(screen.queryByTestId('strength-indicator')).not.toBeInTheDocument();
});
it('should handle undefined className gracefully', () => {
render(<PasswordInput placeholder="No class" />);
renderWithProviders(<PasswordInput placeholder="No class" />);
const input = screen.getByPlaceholderText('No class');
expect(input.className).not.toContain('undefined');
expect(input.className).toContain('block w-full');
});
it('should not show strength indicator if value is undefined', () => {
render(<PasswordInput showStrength onChange={() => {}} />);
renderWithProviders(<PasswordInput showStrength onChange={() => {}} />);
expect(screen.queryByTestId('strength-indicator')).not.toBeInTheDocument();
});
it('should not show strength indicator if value is not a string', () => {
// Force a non-string value to test the typeof check
const props = { value: 12345, showStrength: true, onChange: () => {} } as any;
render(<PasswordInput {...props} />);
renderWithProviders(<PasswordInput {...props} />);
expect(screen.queryByTestId('strength-indicator')).not.toBeInTheDocument();
});
});

View File

@@ -1,7 +1,7 @@
// src/pages/admin/components/PasswordInput.tsx
// src/components/PasswordInput.tsx
import React, { useState } from 'react';
import { EyeIcon } from '../../../components/icons/EyeIcon';
import { EyeSlashIcon } from '../../../components/icons/EyeSlashIcon';
import { EyeIcon } from './icons/EyeIcon';
import { EyeSlashIcon } from './icons/EyeSlashIcon';
import { PasswordStrengthIndicator } from './PasswordStrengthIndicator';
/**

View File

@@ -1,8 +1,9 @@
// src/pages/admin/components/PasswordStrengthIndicator.test.tsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import { screen } from '@testing-library/react';
import { describe, it, expect, vi, type Mock } from 'vitest';
import { PasswordStrengthIndicator } from './PasswordStrengthIndicator';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
import zxcvbn from 'zxcvbn';
// Mock the zxcvbn library to control its output for testing
@@ -11,7 +12,7 @@ vi.mock('zxcvbn');
describe('PasswordStrengthIndicator', () => {
it('should render 5 gray bars when no password is provided', () => {
(zxcvbn as Mock).mockReturnValue({ score: -1, feedback: { warning: '', suggestions: [] } });
const { container } = render(<PasswordStrengthIndicator password="" />);
const { container } = renderWithProviders(<PasswordStrengthIndicator password="" />);
const bars = container.querySelectorAll('.h-1\\.5');
expect(bars).toHaveLength(5);
bars.forEach((bar) => {
@@ -28,7 +29,7 @@ describe('PasswordStrengthIndicator', () => {
{ score: 4, label: 'Strong', color: 'bg-green-500', bars: 5 },
])('should render correctly for score $score ($label)', ({ score, label, color, bars }) => {
(zxcvbn as Mock).mockReturnValue({ score, feedback: { warning: '', suggestions: [] } });
const { container } = render(<PasswordStrengthIndicator password="some-password" />);
const { container } = renderWithProviders(<PasswordStrengthIndicator password="some-password" />);
// Check the label
expect(screen.getByText(label)).toBeInTheDocument();
@@ -54,7 +55,7 @@ describe('PasswordStrengthIndicator', () => {
suggestions: [],
},
});
render(<PasswordStrengthIndicator password="password" />);
renderWithProviders(<PasswordStrengthIndicator password="password" />);
expect(screen.getByText(/this is a very common password/i)).toBeInTheDocument();
});
@@ -66,7 +67,7 @@ describe('PasswordStrengthIndicator', () => {
suggestions: ['Add another word or two'],
},
});
render(<PasswordStrengthIndicator password="pass" />);
renderWithProviders(<PasswordStrengthIndicator password="pass" />);
expect(screen.getByText(/add another word or two/i)).toBeInTheDocument();
});
@@ -75,14 +76,14 @@ describe('PasswordStrengthIndicator', () => {
score: 1,
feedback: { warning: 'A warning here', suggestions: ['A suggestion here'] },
});
render(<PasswordStrengthIndicator password="password" />);
renderWithProviders(<PasswordStrengthIndicator password="password" />);
expect(screen.getByText(/a warning here/i)).toBeInTheDocument();
expect(screen.queryByText(/a suggestion here/i)).not.toBeInTheDocument();
});
it('should use default empty string if password prop is undefined', () => {
(zxcvbn as Mock).mockReturnValue({ score: 0, feedback: { warning: '', suggestions: [] } });
const { container } = render(<PasswordStrengthIndicator />);
const { container } = renderWithProviders(<PasswordStrengthIndicator />);
const bars = container.querySelectorAll('.h-1\\.5');
expect(bars).toHaveLength(5);
bars.forEach((bar) => {
@@ -94,7 +95,7 @@ describe('PasswordStrengthIndicator', () => {
it('should handle out-of-range scores gracefully (defensive)', () => {
// Mock a score that isn't 0-4 to hit default switch cases
(zxcvbn as Mock).mockReturnValue({ score: 99, feedback: { warning: '', suggestions: [] } });
const { container } = render(<PasswordStrengthIndicator password="test" />);
const { container } = renderWithProviders(<PasswordStrengthIndicator password="test" />);
// Check bars - should hit default case in getBarColor which returns gray
const bars = container.querySelectorAll('.h-1\\.5');

View File

@@ -1,4 +1,5 @@
// src/pages/admin/components/PasswordStrengthIndicator.tsx
// src/components/PasswordStrengthIndicator.tsx
import React from 'react';
import zxcvbn from 'zxcvbn';

View File

@@ -0,0 +1,202 @@
// src/components/RecipeSuggester.test.tsx
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { screen, waitFor } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { RecipeSuggester } from './RecipeSuggester'; // This should be after mocks
import * as apiClient from '../services/apiClient';
import { logger } from '../services/logger.client';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
import '@testing-library/jest-dom';
// The apiClient is mocked globally in `src/tests/setup/globalApiMock.ts`.
// We can get a typed reference to it for individual test overrides.
const mockedApiClient = vi.mocked(apiClient);
describe('RecipeSuggester Component', () => {
beforeEach(() => {
vi.clearAllMocks();
// Reset console logs if needed, or just keep them for debug visibility
});
it('renders correctly with initial state', () => {
console.log('TEST: Verifying initial render state');
renderWithProviders(<RecipeSuggester />);
expect(screen.getByText('Get a Recipe Suggestion')).toBeInTheDocument();
expect(screen.getByLabelText(/Ingredients:/i)).toBeInTheDocument();
expect(screen.getByRole('button', { name: /Suggest a Recipe/i })).toBeInTheDocument();
expect(screen.queryByText('Getting suggestion...')).not.toBeInTheDocument();
});
it('shows validation error if no ingredients are entered', async () => {
console.log('TEST: Verifying validation for empty input');
const user = userEvent.setup();
renderWithProviders(<RecipeSuggester />);
const button = screen.getByRole('button', { name: /Suggest a Recipe/i });
await user.click(button);
expect(await screen.findByText('Please enter at least one ingredient.')).toBeInTheDocument();
expect(mockedApiClient.suggestRecipe).not.toHaveBeenCalled();
console.log('TEST: Validation error displayed correctly');
});
it('calls suggestRecipe and displays suggestion on success', async () => {
console.log('TEST: Verifying successful recipe suggestion flow');
const user = userEvent.setup();
renderWithProviders(<RecipeSuggester />);
const input = screen.getByLabelText(/Ingredients:/i);
await user.type(input, 'chicken, rice');
// Mock successful API response
const mockSuggestion = 'Here is a nice Chicken and Rice recipe...';
// Add a delay to ensure the loading state is visible during the test
mockedApiClient.suggestRecipe.mockImplementation(async () => {
await new Promise((resolve) => setTimeout(resolve, 50));
return { ok: true, json: async () => ({ suggestion: mockSuggestion }) } as Response;
});
const button = screen.getByRole('button', { name: /Suggest a Recipe/i });
await user.click(button);
// Check loading state
expect(screen.getByRole('button')).toBeDisabled();
expect(screen.getByText('Getting suggestion...')).toBeInTheDocument();
await waitFor(() => {
expect(screen.getByText(mockSuggestion)).toBeInTheDocument();
});
expect(mockedApiClient.suggestRecipe).toHaveBeenCalledWith(['chicken', 'rice']);
console.log('TEST: Suggestion displayed and API called with correct args');
});
it('handles API errors (non-200 response) gracefully', async () => {
console.log('TEST: Verifying API error handling (400/500 responses)');
const user = userEvent.setup();
renderWithProviders(<RecipeSuggester />);
const input = screen.getByLabelText(/Ingredients:/i);
await user.type(input, 'rocks');
// Mock API failure response
const errorMessage = 'Invalid ingredients provided.';
mockedApiClient.suggestRecipe.mockResolvedValue({
ok: false,
json: async () => ({ message: errorMessage }),
} as Response);
const button = screen.getByRole('button', { name: /Suggest a Recipe/i });
await user.click(button);
await waitFor(() => {
expect(screen.getByText(errorMessage)).toBeInTheDocument();
});
// Ensure loading state is reset
expect(screen.getByRole('button', { name: /Suggest a Recipe/i })).toBeEnabled();
console.log('TEST: API error message displayed to user');
});
it('handles network exceptions and logs them', async () => {
console.log('TEST: Verifying network exception handling');
const user = userEvent.setup();
renderWithProviders(<RecipeSuggester />);
const input = screen.getByLabelText(/Ingredients:/i);
await user.type(input, 'beef');
// Mock network error
const networkError = new Error('Network Error');
mockedApiClient.suggestRecipe.mockRejectedValue(networkError);
const button = screen.getByRole('button', { name: /Suggest a Recipe/i });
await user.click(button);
await waitFor(() => {
expect(screen.getByText('Network Error')).toBeInTheDocument();
});
expect(logger.error).toHaveBeenCalledWith(
{ error: networkError },
'Failed to fetch recipe suggestion.'
);
console.log('TEST: Network error caught and logged');
});
it('clears previous errors when submitting again', async () => {
console.log('TEST: Verifying error clearing on re-submit');
const user = userEvent.setup();
renderWithProviders(<RecipeSuggester />);
// Trigger validation error first
const button = screen.getByRole('button', { name: /Suggest a Recipe/i });
await user.click(button);
expect(screen.getByText('Please enter at least one ingredient.')).toBeInTheDocument();
// Now type something to clear it (state change doesn't clear it, submit does)
const input = screen.getByLabelText(/Ingredients:/i);
await user.type(input, 'tofu');
// Mock success for the second click
mockedApiClient.suggestRecipe.mockResolvedValue({
ok: true,
json: async () => ({ suggestion: 'Tofu Stir Fry' }),
} as Response);
await user.click(button);
await waitFor(() => {
expect(screen.queryByText('Please enter at least one ingredient.')).not.toBeInTheDocument();
expect(screen.getByText('Tofu Stir Fry')).toBeInTheDocument();
});
console.log('TEST: Previous error cleared successfully');
});
it('uses default error message when API error response has no message', async () => {
console.log('TEST: Verifying default error message for API failure');
const user = userEvent.setup();
renderWithProviders(<RecipeSuggester />);
const input = screen.getByLabelText(/Ingredients:/i);
await user.type(input, 'mystery');
// Mock API failure response without a message property
mockedApiClient.suggestRecipe.mockResolvedValue({
ok: false,
json: async () => ({}), // Empty object
} as Response);
const button = screen.getByRole('button', { name: /Suggest a Recipe/i });
await user.click(button);
await waitFor(() => {
expect(screen.getByText('Failed to get suggestion.')).toBeInTheDocument();
});
});
it('handles non-Error objects thrown during fetch', async () => {
console.log('TEST: Verifying handling of non-Error exceptions');
const user = userEvent.setup();
renderWithProviders(<RecipeSuggester />);
const input = screen.getByLabelText(/Ingredients:/i);
await user.type(input, 'chaos');
// Mock a rejection that is NOT an Error object
mockedApiClient.suggestRecipe.mockRejectedValue('Something weird happened');
const button = screen.getByRole('button', { name: /Suggest a Recipe/i });
await user.click(button);
await waitFor(() => {
expect(screen.getByText('An unknown error occurred.')).toBeInTheDocument();
});
expect(logger.error).toHaveBeenCalledWith(
{ error: 'Something weird happened' },
'Failed to fetch recipe suggestion.'
);
});
});

View File

@@ -0,0 +1,80 @@
// src/components/RecipeSuggester.tsx
import React, { useState, useCallback } from 'react';
import { suggestRecipe } from '../services/apiClient';
import { logger } from '../services/logger.client';
export const RecipeSuggester: React.FC = () => {
const [ingredients, setIngredients] = useState<string>('');
const [suggestion, setSuggestion] = useState<string | null>(null);
const [isLoading, setIsLoading] = useState<boolean>(false);
const [error, setError] = useState<string | null>(null);
const handleSubmit = useCallback(async (event: React.FormEvent<HTMLFormElement>) => {
event.preventDefault();
setIsLoading(true);
setError(null);
setSuggestion(null);
const ingredientList = ingredients.split(',').map(item => item.trim()).filter(Boolean);
if (ingredientList.length === 0) {
setError('Please enter at least one ingredient.');
setIsLoading(false);
return;
}
try {
const response = await suggestRecipe(ingredientList);
const data = await response.json();
if (!response.ok) {
throw new Error(data.message || 'Failed to get suggestion.');
}
setSuggestion(data.suggestion);
} catch (err) {
const errorMessage = err instanceof Error ? err.message : 'An unknown error occurred.';
logger.error({ error: err }, 'Failed to fetch recipe suggestion.');
setError(errorMessage);
} finally {
setIsLoading(false);
}
}, [ingredients]);
return (
<div className="bg-white dark:bg-gray-800 shadow rounded-lg p-6">
<h2 className="text-xl font-semibold text-gray-900 dark:text-white mb-2">Get a Recipe Suggestion</h2>
<p className="text-gray-600 dark:text-gray-400 mb-4">Enter some ingredients you have, separated by commas.</p>
<form onSubmit={handleSubmit}>
<div className="mb-4">
<label htmlFor="ingredients-input" className="block text-sm font-medium text-gray-700 dark:text-gray-300 mb-1">Ingredients:</label>
<input
id="ingredients-input"
type="text"
value={ingredients}
onChange={(e) => setIngredients(e.target.value)}
placeholder="e.g., chicken, rice, broccoli"
disabled={isLoading}
className="block w-full rounded-md border-gray-300 shadow-sm focus:border-blue-500 focus:ring-blue-500 dark:bg-gray-700 dark:border-gray-600 dark:text-white sm:text-sm p-2 border"
/>
</div>
<button type="submit" disabled={isLoading} className="w-full flex justify-center py-2 px-4 border border-transparent rounded-md shadow-sm text-sm font-medium text-white bg-blue-600 hover:bg-blue-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-blue-500 disabled:opacity-50 transition-colors">
{isLoading ? 'Getting suggestion...' : 'Suggest a Recipe'}
</button>
</form>
{error && (
<div className="mt-4 p-4 bg-red-50 dark:bg-red-900/50 text-red-700 dark:text-red-200 rounded-md text-sm">{error}</div>
)}
{suggestion && (
<div className="mt-6 bg-gray-50 dark:bg-gray-700/50 rounded-lg p-4 border border-gray-200 dark:border-gray-600">
<div className="prose dark:prose-invert max-w-none">
<h5 className="text-lg font-medium text-gray-900 dark:text-white mb-2">Recipe Suggestion</h5>
<p className="text-gray-700 dark:text-gray-300 whitespace-pre-wrap">{suggestion}</p>
</div>
</div>
)}
</div>
);
};

View File

@@ -0,0 +1,34 @@
// src/components/StatCard.test.tsx
import React from 'react';
import { screen } from '@testing-library/react';
import { describe, it, expect } from 'vitest';
import { StatCard } from './StatCard';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
import '@testing-library/jest-dom';
describe('StatCard', () => {
it('renders title and value correctly', () => {
renderWithProviders(
<StatCard
title="Total Users"
value="1,234"
icon={<div data-testid="mock-icon">Icon</div>}
/>,
);
expect(screen.getByText('Total Users')).toBeInTheDocument();
expect(screen.getByText('1,234')).toBeInTheDocument();
});
it('renders the icon', () => {
renderWithProviders(
<StatCard
title="Total Users"
value="1,234"
icon={<div data-testid="mock-icon">Icon</div>}
/>,
);
expect(screen.getByTestId('mock-icon')).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,32 @@
// src/components/StatCard.tsx
import React, { ReactNode } from 'react';
interface StatCardProps {
title: string;
value: string;
icon: ReactNode;
}
export const StatCard: React.FC<StatCardProps> = ({ title, value, icon }) => {
return (
<div className="bg-white dark:bg-gray-800 overflow-hidden shadow rounded-lg">
<div className="p-5">
<div className="flex items-center">
<div className="flex-shrink-0">
<div className="flex items-center justify-center h-12 w-12 rounded-md bg-blue-500 text-white">
{icon}
</div>
</div>
<div className="ml-5 w-0 flex-1">
<dl>
<dt className="text-sm font-medium text-gray-500 dark:text-gray-400 truncate">{title}</dt>
<dd>
<div className="text-lg font-medium text-gray-900 dark:text-white">{value}</div>
</dd>
</dl>
</div>
</div>
</div>
</div>
);
};

View File

@@ -1,8 +1,9 @@
// src/components/UnitSystemToggle.test.tsx
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { screen, fireEvent } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { UnitSystemToggle } from './UnitSystemToggle';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
describe('UnitSystemToggle', () => {
const mockOnToggle = vi.fn();
@@ -12,7 +13,7 @@ describe('UnitSystemToggle', () => {
});
it('should render correctly for imperial system', () => {
render(<UnitSystemToggle currentSystem="imperial" onToggle={mockOnToggle} />);
renderWithProviders(<UnitSystemToggle currentSystem="imperial" onToggle={mockOnToggle} />);
const checkbox = screen.getByRole('checkbox');
expect(checkbox).toBeChecked();
@@ -23,7 +24,7 @@ describe('UnitSystemToggle', () => {
});
it('should render correctly for metric system', () => {
render(<UnitSystemToggle currentSystem="metric" onToggle={mockOnToggle} />);
renderWithProviders(<UnitSystemToggle currentSystem="metric" onToggle={mockOnToggle} />);
const checkbox = screen.getByRole('checkbox');
expect(checkbox).not.toBeChecked();
@@ -34,7 +35,7 @@ describe('UnitSystemToggle', () => {
});
it('should call onToggle when the toggle is clicked', () => {
render(<UnitSystemToggle currentSystem="metric" onToggle={mockOnToggle} />);
renderWithProviders(<UnitSystemToggle currentSystem="metric" onToggle={mockOnToggle} />);
fireEvent.click(screen.getByRole('checkbox'));
expect(mockOnToggle).toHaveBeenCalledTimes(1);
});

View File

@@ -1,34 +1,34 @@
// src/components/UserMenuSkeleton.test.tsx
import React from 'react';
import { render } from '@testing-library/react';
import { describe, it, expect } from 'vitest';
import { UserMenuSkeleton } from './UserMenuSkeleton';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
describe('UserMenuSkeleton', () => {
it('should render without crashing', () => {
const { container } = render(<UserMenuSkeleton />);
const { container } = renderWithProviders(<UserMenuSkeleton />);
expect(container.firstChild).toBeInTheDocument();
});
it('should have the main container with pulse animation', () => {
const { container } = render(<UserMenuSkeleton />);
const { container } = renderWithProviders(<UserMenuSkeleton />);
expect(container.firstChild).toHaveClass('animate-pulse');
});
it('should render two child placeholder elements', () => {
const { container } = render(<UserMenuSkeleton />);
const { container } = renderWithProviders(<UserMenuSkeleton />);
expect(container.firstChild?.childNodes.length).toBe(2);
});
it('should render a rectangular placeholder with correct styles', () => {
const { container } = render(<UserMenuSkeleton />);
const { container } = renderWithProviders(<UserMenuSkeleton />);
expect(container.querySelector('.rounded-md')).toHaveClass(
'h-8 w-24 bg-gray-200 dark:bg-gray-700',
);
});
it('should render a circular placeholder with correct styles', () => {
const { container } = render(<UserMenuSkeleton />);
const { container } = renderWithProviders(<UserMenuSkeleton />);
expect(container.querySelector('.rounded-full')).toHaveClass(
'h-10 w-10 bg-gray-200 dark:bg-gray-700',
);

View File

@@ -1,8 +1,9 @@
// src/components/WhatsNewModal.test.tsx
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { screen, fireEvent } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { WhatsNewModal } from './WhatsNewModal';
import { renderWithProviders } from '../tests/utils/renderWithProviders';
// Unmock the component to test the real implementation
vi.unmock('./WhatsNewModal');
@@ -21,13 +22,13 @@ describe('WhatsNewModal', () => {
});
it('should not render when isOpen is false', () => {
const { container } = render(<WhatsNewModal {...defaultProps} isOpen={false} />);
const { container } = renderWithProviders(<WhatsNewModal {...defaultProps} isOpen={false} />);
// The component returns null, so the container should be empty.
expect(container.firstChild).toBeNull();
});
it('should render correctly when isOpen is true', () => {
render(<WhatsNewModal {...defaultProps} />);
renderWithProviders(<WhatsNewModal {...defaultProps} />);
expect(screen.getByRole('heading', { name: /what's new/i })).toBeInTheDocument();
expect(screen.getByText(`Version: ${defaultProps.version}`)).toBeInTheDocument();
@@ -36,13 +37,13 @@ describe('WhatsNewModal', () => {
});
it('should call onClose when the "Got it!" button is clicked', () => {
render(<WhatsNewModal {...defaultProps} />);
renderWithProviders(<WhatsNewModal {...defaultProps} />);
fireEvent.click(screen.getByRole('button', { name: /got it/i }));
expect(mockOnClose).toHaveBeenCalledTimes(1);
});
it('should call onClose when the close icon button is clicked', () => {
render(<WhatsNewModal {...defaultProps} />);
renderWithProviders(<WhatsNewModal {...defaultProps} />);
// The close button is an SVG icon inside a button, best queried by its aria-label.
const closeButton = screen.getByRole('button', { name: /close/i });
fireEvent.click(closeButton);
@@ -50,7 +51,7 @@ describe('WhatsNewModal', () => {
});
it('should call onClose when clicking on the overlay', () => {
render(<WhatsNewModal {...defaultProps} />);
renderWithProviders(<WhatsNewModal {...defaultProps} />);
// The overlay is the root div with the background color.
const overlay = screen.getByRole('dialog').parentElement;
fireEvent.click(overlay!);
@@ -58,7 +59,7 @@ describe('WhatsNewModal', () => {
});
it('should not call onClose when clicking inside the modal content', () => {
render(<WhatsNewModal {...defaultProps} />);
renderWithProviders(<WhatsNewModal {...defaultProps} />);
fireEvent.click(screen.getByText(defaultProps.commitMessage));
expect(mockOnClose).not.toHaveBeenCalled();
});

View File

@@ -0,0 +1,18 @@
import React from 'react';
export const DocumentMagnifyingGlassIcon: React.FC<React.SVGProps<SVGSVGElement>> = (props) => (
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
strokeWidth={1.5}
stroke="currentColor"
{...props}
>
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M19.5 14.25v-2.625a3.375 3.375 0 0 0-3.375-3.375h-1.5A1.125 1.125 0 0 1 13.5 7.125v-1.5a3.375 3.375 0 0 0-3.375-3.375H8.25m5.231 13.481L15 17.25m-4.5 4.5L6.75 21.75m0 0L2.25 17.25m4.5 4.5v-4.5m13.5-3V9A2.25 2.25 0 0 0 16.5 6.75h-9A2.25 2.25 0 0 0 5.25 9v9.75m14.25-10.5a2.25 2.25 0 0 0-2.25-2.25H5.25a2.25 2.25 0 0 0-2.25 2.25v10.5a2.25 2.25 0 0 0 2.25 225h5.25"
/>
</svg>
);

View File

@@ -16,4 +16,4 @@ const config = {
},
};
export default config;
export default config;

147
src/config/rateLimiters.ts Normal file
View File

@@ -0,0 +1,147 @@
// src/config/rateLimiters.ts
import rateLimit from 'express-rate-limit';
import { shouldSkipRateLimit } from '../utils/rateLimit';
const standardConfig = {
standardHeaders: true,
legacyHeaders: false,
skip: shouldSkipRateLimit,
};
// --- AUTHENTICATION ---
export const loginLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 5,
message: 'Too many login attempts from this IP, please try again after 15 minutes.',
});
export const registerLimiter = rateLimit({
...standardConfig,
windowMs: 60 * 60 * 1000, // 1 hour
max: 5,
message: 'Too many accounts created from this IP, please try again after an hour.',
});
export const forgotPasswordLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 5,
message: 'Too many password reset requests from this IP, please try again after 15 minutes.',
});
export const resetPasswordLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 10,
message: 'Too many password reset attempts from this IP, please try again after 15 minutes.',
});
export const refreshTokenLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 20,
message: 'Too many token refresh attempts from this IP, please try again after 15 minutes.',
});
export const logoutLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 10,
message: 'Too many logout attempts from this IP, please try again after 15 minutes.',
});
// --- GENERAL PUBLIC & USER ---
export const publicReadLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100,
message: 'Too many requests from this IP, please try again later.',
});
export const userReadLimiter = publicReadLimiter; // Alias for consistency
export const userUpdateLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100,
message: 'Too many update requests from this IP, please try again after 15 minutes.',
});
export const reactionToggleLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 150,
message: 'Too many reaction requests from this IP, please try again later.',
});
export const trackingLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 200,
message: 'Too many tracking requests from this IP, please try again later.',
});
// --- SENSITIVE / COSTLY ---
export const userSensitiveUpdateLimiter = rateLimit({
...standardConfig,
windowMs: 60 * 60 * 1000, // 1 hour
max: 5,
message: 'Too many sensitive requests from this IP, please try again after an hour.',
});
export const adminTriggerLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 30,
message: 'Too many administrative triggers from this IP, please try again later.',
});
export const aiGenerationLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 20,
message: 'Too many AI generation requests from this IP, please try again after 15 minutes.',
});
export const suggestionLimiter = aiGenerationLimiter; // Alias
export const geocodeLimiter = rateLimit({
...standardConfig,
windowMs: 60 * 60 * 1000, // 1 hour
max: 100,
message: 'Too many geocoding requests from this IP, please try again later.',
});
export const priceHistoryLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 50,
message: 'Too many price history requests from this IP, please try again later.',
});
// --- UPLOADS / BATCH ---
export const adminUploadLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 20,
message: 'Too many file uploads from this IP, please try again after 15 minutes.',
});
export const userUploadLimiter = adminUploadLimiter; // Alias
export const aiUploadLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 10,
message: 'Too many file uploads from this IP, please try again after 15 minutes.',
});
export const batchLimiter = rateLimit({
...standardConfig,
windowMs: 15 * 60 * 1000, // 15 minutes
max: 50,
message: 'Too many batch requests from this IP, please try again later.',
});
export const budgetUpdateLimiter = batchLimiter; // Alias

View File

@@ -6,10 +6,11 @@
* DO NOT run this on a production database.
*/
import { Pool } from 'pg';
import { Pool, PoolClient } from 'pg';
import fs from 'node:fs/promises';
import path from 'node:path';
import bcrypt from 'bcrypt';
import { logger } from '../services/logger.server';
import { CATEGORIES } from '../types';
const pool = new Pool({
user: process.env.DB_USER,
@@ -20,81 +21,55 @@ const pool = new Pool({
});
async function main() {
// Declare client outside the try block so it's accessible in the finally block.
let client;
let client: PoolClient | undefined;
try {
client = await pool.connect();
logger.info('Connected to the database for seeding.');
await client.query('BEGIN');
// 1. Clean the database
logger.info('--- Wiping existing data... ---');
// Using TRUNCATE ... RESTART IDENTITY CASCADE is a powerful way to clean all tables
// and reset auto-incrementing keys, while respecting foreign key relationships.
const tablesRes = await client.query(`
SELECT tablename
FROM pg_tables
WHERE schemaname = 'public'
-- Exclude PostGIS system tables from truncation to avoid permission errors.
AND tablename NOT IN ('spatial_ref_sys', 'geometry_columns')
`);
const tables = tablesRes.rows.map((row) => `"${row.tablename}"`).join(', ');
if (tables) {
await client.query(`TRUNCATE ${tables} RESTART IDENTITY CASCADE`);
logger.info('All tables in public schema have been truncated.');
}
// 1. Clean the database by dropping and recreating the schema
logger.info('--- Wiping and rebuilding schema... ---');
const dropScriptPath = path.resolve(process.cwd(), 'sql/drop_tables.sql');
const dropSql = await fs.readFile(dropScriptPath, 'utf-8');
await client.query(dropSql);
logger.info('All tables dropped successfully.');
// 2. Seed Categories
logger.info('--- Seeding Categories... ---');
const categoryQuery = `INSERT INTO public.categories (name) VALUES ${CATEGORIES.map((_, i) => `($${i + 1})`).join(', ')} RETURNING category_id, name`;
const seededCategories = (
await client.query<{ category_id: number; name: string }>(categoryQuery, CATEGORIES)
).rows;
const categoryMap = new Map(seededCategories.map((c) => [c.name, c.category_id]));
logger.info(`Seeded ${seededCategories.length} categories.`);
const schemaScriptPath = path.resolve(process.cwd(), 'sql/master_schema_rollup.sql');
const schemaSql = await fs.readFile(schemaScriptPath, 'utf-8');
await client.query(schemaSql);
logger.info(
'Schema rebuilt and static data seeded successfully from master_schema_rollup.sql.',
);
// 3. Seed Stores
// 2. Seed Additional Stores (if any beyond what's in the rollup)
logger.info('--- Seeding Stores... ---');
const stores = ['Safeway', 'No Frills', 'Costco', 'Superstore'];
const storeQuery = `INSERT INTO public.stores (name) VALUES ${stores.map((_, i) => `($${i + 1})`).join(', ')} RETURNING store_id, name`;
const seededStores = (
await client.query<{ store_id: number; name: string }>(storeQuery, stores)
const storeQuery = `INSERT INTO public.stores (name) VALUES ${stores.map((_, i) => `($${i + 1})`).join(', ')} ON CONFLICT (name) DO NOTHING RETURNING store_id, name`;
await client.query<{ store_id: number; name: string }>(storeQuery, stores);
const allStores = (
await client.query<{ store_id: number; name: string }>(
'SELECT store_id, name FROM public.stores',
)
).rows;
const storeMap = new Map(seededStores.map((s) => [s.name, s.store_id]));
logger.info(`Seeded ${seededStores.length} stores.`);
// 4. Seed Master Grocery Items
logger.info('--- Seeding Master Grocery Items... ---');
const masterItems = [
{ name: 'Chicken Breast, Boneless Skinless', category: 'Meat & Seafood' },
{ name: 'Ground Beef, Lean', category: 'Meat & Seafood' },
{ name: 'Avocado', category: 'Fruits & Vegetables' },
{ name: 'Bananas', category: 'Fruits & Vegetables' },
{ name: 'Broccoli', category: 'Fruits & Vegetables' },
{ name: 'Cheddar Cheese, Block', category: 'Dairy & Eggs' },
{ name: 'Milk, 2%', category: 'Dairy & Eggs' },
{ name: 'Eggs, Large', category: 'Dairy & Eggs' },
{ name: 'Whole Wheat Bread', category: 'Bakery & Bread' },
{ name: 'Pasta, Spaghetti', category: 'Pantry & Dry Goods' },
{ name: 'Canned Tomatoes, Diced', category: 'Canned Goods' },
{ name: 'Coca-Cola, 12-pack', category: 'Beverages' },
{ name: 'Frozen Pizza', category: 'Frozen Foods' },
{ name: 'Paper Towels', category: 'Household & Cleaning' },
];
const masterItemValues = masterItems
.map((item) => `('${item.name.replace(/'/g, "''")}', ${categoryMap.get(item.category)})`)
.join(', ');
const masterItemQuery = `INSERT INTO public.master_grocery_items (name, category_id) VALUES ${masterItemValues} RETURNING master_grocery_item_id, name`;
const seededMasterItems = (
await client.query<{ master_grocery_item_id: number; name: string }>(masterItemQuery)
).rows;
const masterItemMap = new Map(
seededMasterItems.map((item) => [item.name, item.master_grocery_item_id]),
const storeMap = new Map(
allStores.map((s: { name: string; store_id: number }) => [s.name, s.store_id]),
);
logger.info(`Seeded ${seededMasterItems.length} master grocery items.`);
logger.info(`Seeded/verified ${allStores.length} total stores.`);
// 5. Seed Users & Profiles
// Fetch maps for items seeded by the master rollup script
const masterItemMap = new Map(
(
await client.query<{ master_grocery_item_id: number; name: string }>(
'SELECT master_grocery_item_id, name FROM public.master_grocery_items',
)
).rows.map((item: { name: string; master_grocery_item_id: number }) => [
item.name,
item.master_grocery_item_id,
]),
);
// 3. Seed Users & Profiles
logger.info('--- Seeding Users & Profiles... ---');
const saltRounds = 10;
const adminPassHash = await bcrypt.hash('adminpass', saltRounds);
@@ -126,7 +101,7 @@ async function main() {
const userId = userRes.rows[0].user_id;
logger.info('Seeded regular user (user@example.com / userpass)');
// 6. Seed a Flyer
// 4. Seed a Flyer
logger.info('--- Seeding a Sample Flyer... ---');
const today = new Date();
const validFrom = new Date(today);
@@ -135,8 +110,8 @@ async function main() {
validTo.setDate(today.getDate() + 5);
const flyerQuery = `
INSERT INTO public.flyers (file_name, image_url, checksum, store_id, valid_from, valid_to)
VALUES ('safeway-flyer.jpg', '/sample-assets/safeway-flyer.jpg', 'sample-checksum-123', ${storeMap.get('Safeway')}, $1, $2)
INSERT INTO public.flyers (file_name, image_url, icon_url, checksum, store_id, valid_from, valid_to)
VALUES ('safeway-flyer.jpg', 'https://example.com/flyer-images/safeway-flyer.jpg', 'https://example.com/flyer-images/icons/safeway-flyer.jpg', 'a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0', ${storeMap.get('Safeway')}, $1, $2)
RETURNING flyer_id;
`;
const flyerRes = await client.query<{ flyer_id: number }>(flyerQuery, [
@@ -146,29 +121,29 @@ async function main() {
const flyerId = flyerRes.rows[0].flyer_id;
logger.info(`Seeded flyer for Safeway (ID: ${flyerId}).`);
// 7. Seed Flyer Items
// 5. Seed Flyer Items
logger.info('--- Seeding Flyer Items... ---');
const flyerItems = [
{
name: 'Chicken Breast, Boneless Skinless',
name: 'chicken breast',
price_display: '$3.99 /lb',
price_in_cents: 399,
quantity: 'per lb',
master_item_id: masterItemMap.get('Chicken Breast, Boneless Skinless'),
master_item_id: masterItemMap.get('chicken breast'),
},
{
name: 'Avocado',
name: 'avocados',
price_display: '2 for $5.00',
price_in_cents: 250,
quantity: 'each',
master_item_id: masterItemMap.get('Avocado'),
master_item_id: masterItemMap.get('avocados'),
},
{
name: 'Coca-Cola 12-pack',
name: 'soda',
price_display: '$6.99',
price_in_cents: 699,
quantity: '12x355ml',
master_item_id: masterItemMap.get('Coca-Cola, 12-pack'),
master_item_id: masterItemMap.get('soda'),
},
{
name: 'Unmatched Sample Item',
@@ -194,12 +169,12 @@ async function main() {
}
logger.info(`Seeded ${flyerItems.length} items for the Safeway flyer.`);
// 8. Seed Watched Items for the user
// 6. Seed Watched Items for the user
logger.info('--- Seeding Watched Items... ---');
const watchedItemIds = [
masterItemMap.get('Chicken Breast, Boneless Skinless'),
masterItemMap.get('Avocado'),
masterItemMap.get('Ground Beef, Lean'),
masterItemMap.get('chicken breast'),
masterItemMap.get('avocados'),
masterItemMap.get('ground beef'),
];
for (const itemId of watchedItemIds) {
if (itemId) {
@@ -211,7 +186,7 @@ async function main() {
}
logger.info(`Seeded ${watchedItemIds.length} watched items for Test User.`);
// 9. Seed a Shopping List
// 7. Seed a Shopping List
logger.info('--- Seeding a Shopping List... ---');
const listRes = await client.query<{ shopping_list_id: number }>(
'INSERT INTO public.shopping_lists (user_id, name) VALUES ($1, $2) RETURNING shopping_list_id',
@@ -220,8 +195,8 @@ async function main() {
const listId = listRes.rows[0].shopping_list_id;
const shoppingListItems = [
{ master_item_id: masterItemMap.get('Milk, 2%'), quantity: 1 },
{ master_item_id: masterItemMap.get('Eggs, Large'), quantity: 1 },
{ master_item_id: masterItemMap.get('milk'), quantity: 1 },
{ master_item_id: masterItemMap.get('eggs'), quantity: 1 },
{ custom_item_name: 'Specialty Hot Sauce', quantity: 1 },
];
@@ -235,75 +210,6 @@ async function main() {
`Seeded shopping list "Weekly Groceries" with ${shoppingListItems.length} items for Test User.`,
);
// 10. Seed Brands
logger.info('--- Seeding Brands... ---');
const brands = [
'Coca-Cola',
'Kraft',
'Maple Leaf',
"Dempster's",
'No Name',
"President's Choice",
];
const brandQuery = `INSERT INTO public.brands (name) VALUES ${brands.map((_, i) => `($${i + 1})`).join(', ')} ON CONFLICT (name) DO NOTHING`;
await client.query(brandQuery, brands);
logger.info(`Seeded ${brands.length} brands.`);
// Link store-specific brands
const loblawsId = storeMap.get('Loblaws');
if (loblawsId) {
await client.query('UPDATE public.brands SET store_id = $1 WHERE name = $2 OR name = $3', [
loblawsId,
'No Name',
"President's Choice",
]);
logger.info('Linked store brands to Loblaws.');
}
// 11. Seed Recipes
logger.info('--- Seeding Recipes... ---');
const recipes = [
{
name: 'Simple Chicken and Rice',
description: 'A quick and healthy weeknight meal.',
instructions: '1. Cook rice. 2. Cook chicken. 3. Combine.',
prep: 10,
cook: 20,
servings: 4,
},
{
name: 'Classic Spaghetti Bolognese',
description: 'A rich and hearty meat sauce.',
instructions: '1. Brown beef. 2. Add sauce. 3. Simmer.',
prep: 15,
cook: 45,
servings: 6,
},
{
name: 'Vegetable Stir-fry',
description: 'A fast and flavorful vegetarian meal.',
instructions: '1. Chop veggies. 2. Stir-fry. 3. Add sauce.',
prep: 10,
cook: 10,
servings: 3,
},
];
for (const recipe of recipes) {
await client.query(
`INSERT INTO public.recipes (name, description, instructions, prep_time_minutes, cook_time_minutes, servings, status)
VALUES ($1, $2, $3, $4, $5, $6, 'public') ON CONFLICT (name) WHERE user_id IS NULL DO NOTHING`,
[
recipe.name,
recipe.description,
recipe.instructions,
recipe.prep,
recipe.cook,
recipe.servings,
],
);
}
logger.info(`Seeded ${recipes.length} recipes.`);
// --- SEED SCRIPT DEBUG LOGGING ---
// Corrected the query to be unambiguous by specifying the table alias for each column.
// `id` and `email` come from the `users` table (u), and `role` comes from the `profiles` table (p).

View File

@@ -77,6 +77,18 @@ describe('PriceChart', () => {
expect(screen.getByText(/no deals for your watched items/i)).toBeInTheDocument();
});
it('should render an error message when an error occurs', () => {
mockedUseActiveDeals.mockReturnValue({
...mockedUseActiveDeals(),
activeDeals: [],
isLoading: false,
error: 'Failed to fetch deals.',
});
render(<PriceChart {...defaultProps} />);
expect(screen.getByText('Failed to fetch deals.')).toBeInTheDocument();
});
it('should render the table with deal items when data is provided', () => {
render(<PriceChart {...defaultProps} />);

View File

@@ -38,8 +38,26 @@ vi.mock('recharts', () => ({
),
CartesianGrid: () => <div data-testid="cartesian-grid" />,
XAxis: () => <div data-testid="x-axis" />,
YAxis: () => <div data-testid="y-axis" />,
Tooltip: () => <div data-testid="tooltip" />,
YAxis: ({ tickFormatter, domain }: any) => {
// Execute functions for coverage
if (typeof tickFormatter === 'function') {
tickFormatter(1000);
}
if (Array.isArray(domain)) {
domain.forEach((d) => {
if (typeof d === 'function') d(100);
});
}
return <div data-testid="y-axis" />;
},
Tooltip: ({ formatter }: any) => {
// Execute formatter for coverage
if (typeof formatter === 'function') {
formatter(1000);
formatter(undefined);
}
return <div data-testid="tooltip" />;
},
Legend: () => <div data-testid="legend" />,
// Fix: Use dataKey if name is not explicitly provided, as the component relies on dataKey
Line: ({ name, dataKey }: { name?: string; dataKey?: string }) => (
@@ -301,4 +319,66 @@ describe('PriceHistoryChart', () => {
expect(chartData).toHaveLength(2);
});
});
it('should handle malformed data points and unmatched items gracefully', async () => {
const malformedData: any[] = [
{ master_item_id: null, summary_date: '2024-10-01', avg_price_in_cents: 100 }, // Missing ID
{ master_item_id: 1, summary_date: null, avg_price_in_cents: 100 }, // Missing date
{ master_item_id: 1, summary_date: '2024-10-01', avg_price_in_cents: null }, // Missing price
{ master_item_id: 999, summary_date: '2024-10-01', avg_price_in_cents: 100 }, // ID not in watchlist
];
vi.mocked(apiClient.fetchHistoricalPriceData).mockResolvedValue(
new Response(JSON.stringify(malformedData)),
);
render(<PriceHistoryChart />);
await waitFor(() => {
// Should show "Not enough historical data" because all points are invalid or filtered
expect(
screen.getByText(
'Not enough historical data for your watched items. Process more flyers to build a trend.',
),
).toBeInTheDocument();
});
});
it('should ignore higher prices for the same day', async () => {
const dataWithHigherPrice: HistoricalPriceDataPoint[] = [
createMockHistoricalPriceDataPoint({
master_item_id: 1,
summary_date: '2024-10-01',
avg_price_in_cents: 100,
}),
createMockHistoricalPriceDataPoint({
master_item_id: 1,
summary_date: '2024-10-01',
avg_price_in_cents: 150, // Higher price should be ignored
}),
createMockHistoricalPriceDataPoint({
master_item_id: 1,
summary_date: '2024-10-08',
avg_price_in_cents: 100,
}),
];
vi.mocked(apiClient.fetchHistoricalPriceData).mockResolvedValue(
new Response(JSON.stringify(dataWithHigherPrice)),
);
render(<PriceHistoryChart />);
await waitFor(() => {
const chart = screen.getByTestId('line-chart');
const chartData = JSON.parse(chart.getAttribute('data-chartdata')!);
const dataPoint = chartData.find((d: any) => d.date === 'Oct 1');
expect(dataPoint['Organic Bananas']).toBe(100);
});
});
it('should handle non-Error objects thrown during fetch', async () => {
vi.mocked(apiClient.fetchHistoricalPriceData).mockRejectedValue('String Error');
render(<PriceHistoryChart />);
await waitFor(() => {
expect(screen.getByText('Failed to load price history.')).toBeInTheDocument();
});
});
});

View File

@@ -1,4 +1,4 @@
// src/components/PriceHistoryChart.tsx
// src/features/charts/PriceHistoryChart.tsx
import React, { useState, useEffect, useMemo } from 'react';
import {
LineChart,
@@ -142,7 +142,7 @@ export const PriceHistoryChart: React.FC = () => {
const renderContent = () => {
if (isLoading || isLoadingUserData) {
return (
<div role="status" className="flex justify-center items-center h-full min-h-[200px]">
<div role="status" className="flex justify-center items-center h-full min-h-50]">
<LoadingSpinner /> <span className="ml-2">Loading Price History...</span>
</div>
);
@@ -198,7 +198,12 @@ export const PriceHistoryChart: React.FC = () => {
borderRadius: '0.5rem',
}}
labelStyle={{ color: '#F9FAFB' }}
formatter={(value: number) => `$${(value / 100).toFixed(2)}`}
formatter={(value: number | undefined) => {
if (typeof value === 'number') {
return [`$${(value / 100).toFixed(2)}`];
}
return [null];
}}
/>
<Legend wrapperStyle={{ fontSize: '12px' }} />
{availableItems.map((item, index) => (

View File

@@ -8,9 +8,13 @@ interface TopDealsProps {
export const TopDeals: React.FC<TopDealsProps> = ({ items }) => {
const topDeals = useMemo(() => {
// Use a type guard in the filter to inform TypeScript that price_in_cents is non-null
// in subsequent operations. This allows removing the redundant nullish coalescing in sort.
return [...items]
.filter((item) => item.price_in_cents !== null) // Only include items with a parseable price
.sort((a, b) => (a.price_in_cents ?? Infinity) - (b.price_in_cents ?? Infinity))
.filter(
(item): item is FlyerItem & { price_in_cents: number } => item.price_in_cents !== null,
)
.sort((a, b) => a.price_in_cents - b.price_in_cents)
.slice(0, 10);
}, [items]);

View File

@@ -160,9 +160,9 @@ describe('AnalysisPanel', () => {
results: { WEB_SEARCH: 'Search results text.' },
sources: {
WEB_SEARCH: [
{ title: 'Valid Source', uri: 'http://example.com/source1' },
{ title: 'Valid Source', uri: 'https://example.com/source1' },
{ title: 'Source without URI', uri: null },
{ title: 'Another Valid Source', uri: 'http://example.com/source2' },
{ title: 'Another Valid Source', uri: 'https://example.com/source2' },
],
},
loadingAnalysis: null,
@@ -178,7 +178,7 @@ describe('AnalysisPanel', () => {
expect(screen.getByText('Sources:')).toBeInTheDocument();
const source1 = screen.getByText('Valid Source');
expect(source1).toBeInTheDocument();
expect(source1.closest('a')).toHaveAttribute('href', 'http://example.com/source1');
expect(source1.closest('a')).toHaveAttribute('href', 'https://example.com/source1');
expect(screen.queryByText('Source without URI')).not.toBeInTheDocument();
expect(screen.getByText('Another Valid Source')).toBeInTheDocument();
});
@@ -278,13 +278,13 @@ describe('AnalysisPanel', () => {
loadingAnalysis: null,
error: null,
runAnalysis: mockRunAnalysis,
generatedImageUrl: 'http://example.com/meal.jpg',
generatedImageUrl: 'https://example.com/meal.jpg',
generateImage: mockGenerateImage,
});
rerender(<AnalysisPanel selectedFlyer={mockFlyer} />);
const image = screen.getByAltText('AI generated meal plan');
expect(image).toBeInTheDocument();
expect(image).toHaveAttribute('src', 'http://example.com/meal.jpg');
expect(image).toHaveAttribute('src', 'https://example.com/meal.jpg');
});
it('should not show sources for non-search analysis types', () => {

View File

@@ -406,6 +406,74 @@ describe('ExtractedDataTable', () => {
render(<ExtractedDataTable {...defaultProps} items={singleCategoryItems} />);
expect(screen.queryByLabelText('Filter by category')).not.toBeInTheDocument();
});
it('should allow switching filter back to All Categories', () => {
render(<ExtractedDataTable {...defaultProps} />);
const categoryFilter = screen.getByLabelText('Filter by category');
// Filter to Dairy
fireEvent.change(categoryFilter, { target: { value: 'Dairy' } });
expect(screen.queryByText('Gala Apples')).not.toBeInTheDocument();
expect(screen.getByText('2% Milk')).toBeInTheDocument();
// Filter back to All
fireEvent.change(categoryFilter, { target: { value: 'all' } });
expect(screen.getByText('Gala Apples')).toBeInTheDocument();
expect(screen.getByText('2% Milk')).toBeInTheDocument();
});
it('should sort items alphabetically within watched and unwatched groups', () => {
const items = [
createMockFlyerItem({
flyer_item_id: 1,
item: 'Yam',
master_item_id: 3,
category_name: 'Produce',
}), // Unwatched
createMockFlyerItem({
flyer_item_id: 2,
item: 'Zebra',
master_item_id: 1,
category_name: 'Produce',
}), // Watched
createMockFlyerItem({
flyer_item_id: 3,
item: 'Banana',
master_item_id: 4,
category_name: 'Produce',
}), // Unwatched
createMockFlyerItem({
flyer_item_id: 4,
item: 'Apple',
master_item_id: 2,
category_name: 'Produce',
}), // Watched
];
vi.mocked(useUserData).mockReturnValue({
watchedItems: [
createMockMasterGroceryItem({ master_grocery_item_id: 1, name: 'Zebra' }),
createMockMasterGroceryItem({ master_grocery_item_id: 2, name: 'Apple' }),
],
shoppingLists: [],
setWatchedItems: vi.fn(),
setShoppingLists: vi.fn(),
isLoading: false,
error: null,
});
render(<ExtractedDataTable {...defaultProps} items={items} />);
const rows = screen.getAllByRole('row');
// Extract item names based on the bold/semibold classes used for names
const itemNames = rows.map((row) => {
const nameEl = row.querySelector('.font-bold, .font-semibold');
return nameEl?.textContent;
});
// Expected: Watched items first (Apple, Zebra), then Unwatched (Banana, Yam)
expect(itemNames).toEqual(['Apple', 'Zebra', 'Banana', 'Yam']);
});
});
describe('Data Edge Cases', () => {
@@ -460,5 +528,46 @@ describe('ExtractedDataTable', () => {
// Check for the unit suffix, which might be in a separate element or part of the string
expect(within(chickenItemRow).getAllByText(/\/kg/i).length).toBeGreaterThan(0);
});
it('should handle activeListId pointing to a non-existent list', () => {
vi.mocked(useShoppingLists).mockReturnValue({
activeListId: 999, // Non-existent
shoppingLists: mockShoppingLists,
addItemToList: mockAddItemToList,
setActiveListId: vi.fn(),
createList: vi.fn(),
deleteList: vi.fn(),
updateItemInList: vi.fn(),
removeItemFromList: vi.fn(),
isCreatingList: false,
isDeletingList: false,
isAddingItem: false,
isUpdatingItem: false,
isRemovingItem: false,
error: null,
});
render(<ExtractedDataTable {...defaultProps} />);
// Should behave as if item is not in list (Add button enabled)
const appleItemRow = screen.getByText('Gala Apples').closest('tr')!;
const addToListButton = within(appleItemRow).getByTitle('Add Apples to list');
expect(addToListButton).toBeInTheDocument();
expect(addToListButton).not.toBeDisabled();
});
it('should display numeric quantity in parentheses if available', () => {
const itemWithQtyNum = createMockFlyerItem({
flyer_item_id: 999,
item: 'Bulk Rice',
quantity: 'Bag',
quantity_num: 5,
unit_price: { value: 10, unit: 'kg' },
category_name: 'Pantry',
flyer_id: 1,
});
render(<ExtractedDataTable {...defaultProps} items={[itemWithQtyNum]} />);
expect(screen.getByText('(5)')).toBeInTheDocument();
});
});
});

View File

@@ -8,13 +8,13 @@ import { createMockStore } from '../../tests/utils/mockFactories';
const mockStore = createMockStore({
store_id: 1,
name: 'SuperMart',
logo_url: 'http://example.com/logo.png',
logo_url: 'https://example.com/logo.png',
});
const mockOnOpenCorrectionTool = vi.fn();
const defaultProps = {
imageUrl: 'http://example.com/flyer.jpg',
imageUrl: 'https://example.com/flyer.jpg',
store: mockStore,
validFrom: '2023-10-26',
validTo: '2023-11-01',

View File

@@ -1,8 +1,8 @@
// src/features/flyer/FlyerDisplay.tsx
import React from 'react';
import { ScanIcon } from '../../components/icons/ScanIcon';
import { formatDateRange } from '../../utils/dateUtils';
import type { Store } from '../../types';
import { formatDateRange } from './dateUtils';
import { ScanIcon } from '../../components/icons/ScanIcon';
export interface FlyerDisplayProps {
imageUrl: string | null;

View File

@@ -1,9 +1,9 @@
// src/features/flyer/FlyerList.test.tsx
import React from 'react';
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach, type Mocked } from 'vitest';
import { describe, it, expect, vi, beforeEach, afterEach, type Mocked } from 'vitest';
import { FlyerList } from './FlyerList';
import { formatShortDate } from './dateUtils';
import { formatShortDate } from '../../utils/dateUtils';
import type { Flyer, UserProfile } from '../../types';
import { createMockUserProfile } from '../../tests/utils/mockFactories';
import { createMockFlyer } from '../../tests/utils/mockFactories';
@@ -19,7 +19,7 @@ const mockFlyers: Flyer[] = [
flyer_id: 1,
file_name: 'metro_flyer_oct_1.pdf',
item_count: 50,
image_url: 'http://example.com/flyer1.jpg',
image_url: 'https://example.com/flyer1.jpg',
store: { store_id: 101, name: 'Metro' },
valid_from: '2023-10-05',
valid_to: '2023-10-11',
@@ -29,7 +29,7 @@ const mockFlyers: Flyer[] = [
flyer_id: 2,
file_name: 'walmart_flyer.pdf',
item_count: 75,
image_url: 'http://example.com/flyer2.jpg',
image_url: 'https://example.com/flyer2.jpg',
store: { store_id: 102, name: 'Walmart' },
valid_from: '2023-10-06',
valid_to: '2023-10-06', // Same day
@@ -40,8 +40,8 @@ const mockFlyers: Flyer[] = [
flyer_id: 3,
file_name: 'no-store-flyer.pdf',
item_count: 10,
image_url: 'http://example.com/flyer3.jpg',
icon_url: 'http://example.com/icon3.png',
image_url: 'https://example.com/flyer3.jpg',
icon_url: 'https://example.com/icon3.png',
valid_from: '2023-10-07',
valid_to: '2023-10-08',
store_address: '456 Side St, Ottawa',
@@ -53,7 +53,7 @@ const mockFlyers: Flyer[] = [
flyer_id: 4,
file_name: 'bad-date-flyer.pdf',
item_count: 5,
image_url: 'http://example.com/flyer4.jpg',
image_url: 'https://example.com/flyer4.jpg',
store: { store_id: 103, name: 'Date Store' },
created_at: 'invalid-date',
valid_from: 'invalid-from',
@@ -163,7 +163,7 @@ describe('FlyerList', () => {
const flyerWithIcon = screen.getByText('Unknown Store').closest('li'); // Flyer ID 3
const iconImage = flyerWithIcon?.querySelector('img');
expect(iconImage).toBeInTheDocument();
expect(iconImage).toHaveAttribute('src', 'http://example.com/icon3.png');
expect(iconImage).toHaveAttribute('src', 'https://example.com/icon3.png');
});
it('should render a document icon when icon_url is not present', () => {
@@ -257,6 +257,73 @@ describe('FlyerList', () => {
});
});
describe('Expiration Status Logic', () => {
beforeEach(() => {
vi.useFakeTimers();
});
afterEach(() => {
vi.useRealTimers();
});
it('should show "Expired" for past dates', () => {
// Flyer 1 valid_to is 2023-10-11
vi.setSystemTime(new Date('2023-10-12T12:00:00Z'));
render(
<FlyerList
flyers={[mockFlyers[0]]}
onFlyerSelect={mockOnFlyerSelect}
selectedFlyerId={null}
profile={mockProfile}
/>,
);
expect(screen.getByText('• Expired')).toBeInTheDocument();
expect(screen.getByText('• Expired')).toHaveClass('text-red-500');
});
it('should show "Expires today" when valid_to is today', () => {
vi.setSystemTime(new Date('2023-10-11T12:00:00Z'));
render(
<FlyerList
flyers={[mockFlyers[0]]}
onFlyerSelect={mockOnFlyerSelect}
selectedFlyerId={null}
profile={mockProfile}
/>,
);
expect(screen.getByText('• Expires today')).toBeInTheDocument();
expect(screen.getByText('• Expires today')).toHaveClass('text-orange-500');
});
it('should show "Expires in X days" (orange) for <= 3 days', () => {
vi.setSystemTime(new Date('2023-10-09T12:00:00Z')); // 2 days left
render(
<FlyerList
flyers={[mockFlyers[0]]}
onFlyerSelect={mockOnFlyerSelect}
selectedFlyerId={null}
profile={mockProfile}
/>,
);
expect(screen.getByText('• Expires in 2 days')).toBeInTheDocument();
expect(screen.getByText('• Expires in 2 days')).toHaveClass('text-orange-500');
});
it('should show "Expires in X days" (green) for > 3 days', () => {
vi.setSystemTime(new Date('2023-10-05T12:00:00Z')); // 6 days left
render(
<FlyerList
flyers={[mockFlyers[0]]}
onFlyerSelect={mockOnFlyerSelect}
selectedFlyerId={null}
profile={mockProfile}
/>,
);
expect(screen.getByText('• Expires in 6 days')).toBeInTheDocument();
expect(screen.getByText('• Expires in 6 days')).toHaveClass('text-green-600');
});
});
describe('Admin Functionality', () => {
const adminProfile: UserProfile = createMockUserProfile({
user: { user_id: 'admin-1', email: 'admin@example.com' },

View File

@@ -7,7 +7,7 @@ import { parseISO, format, isValid } from 'date-fns';
import { MapPinIcon, Trash2Icon } from 'lucide-react';
import { logger } from '../../services/logger.client';
import * as apiClient from '../../services/apiClient';
import { calculateDaysBetween, formatDateRange } from './dateUtils';
import { calculateDaysBetween, formatDateRange, getCurrentDateISOString } from '../../utils/dateUtils';
interface FlyerListProps {
flyers: Flyer[];
@@ -54,7 +54,7 @@ export const FlyerList: React.FC<FlyerListProps> = ({
verbose: true,
});
const daysLeft = calculateDaysBetween(format(new Date(), 'yyyy-MM-dd'), flyer.valid_to);
const daysLeft = calculateDaysBetween(getCurrentDateISOString(), flyer.valid_to);
let daysLeftText = '';
let daysLeftColor = '';

View File

@@ -6,14 +6,24 @@ import { FlyerUploader } from './FlyerUploader';
import * as aiApiClientModule from '../../services/aiApiClient';
import * as checksumModule from '../../utils/checksum';
import { useNavigate, MemoryRouter } from 'react-router-dom';
import { QueryClient, QueryClientProvider, onlineManager } from '@tanstack/react-query';
// Mock dependencies
vi.mock('../../services/aiApiClient');
vi.mock('../../services/aiApiClient', async (importOriginal) => {
const actual = await importOriginal<typeof import('../../services/aiApiClient')>();
return {
...actual,
uploadAndProcessFlyer: vi.fn(),
getJobStatus: vi.fn(),
};
});
vi.mock('../../services/logger.client', () => ({
// Keep the original logger.info/error but also spy on it for test assertions if needed
logger: {
info: vi.fn((...args) => console.log('[LOGGER.INFO]', ...args)),
error: vi.fn((...args) => console.error('[LOGGER.ERROR]', ...args)),
warn: vi.fn((...args) => console.warn('[LOGGER.WARN]', ...args)),
debug: vi.fn((...args) => console.debug('[LOGGER.DEBUG]', ...args)),
},
}));
vi.mock('../../utils/checksum', () => ({
@@ -39,10 +49,19 @@ const mockedChecksumModule = checksumModule as unknown as {
const renderComponent = (onProcessingComplete = vi.fn()) => {
console.log('--- [TEST LOG] ---: Rendering component inside MemoryRouter.');
const queryClient = new QueryClient({
defaultOptions: {
queries: {
retry: false,
},
},
});
return render(
<MemoryRouter>
<FlyerUploader onProcessingComplete={onProcessingComplete} />
</MemoryRouter>,
<QueryClientProvider client={queryClient}>
<MemoryRouter>
<FlyerUploader onProcessingComplete={onProcessingComplete} />
</MemoryRouter>
</QueryClientProvider>,
);
};
@@ -50,10 +69,11 @@ describe('FlyerUploader', () => {
const navigateSpy = vi.fn();
beforeEach(() => {
// Disable react-query's online manager to prevent it from interfering with fake timers
onlineManager.setEventListener((setOnline) => {
return () => {};
});
console.log(`\n--- [TEST LOG] ---: Starting test: "${expect.getState().currentTestName}"`);
// Use the 'modern' implementation of fake timers to handle promise microtasks correctly.
vi.useFakeTimers({ toFake: ['setTimeout'], shouldAdvanceTime: true });
console.log('--- [TEST LOG] ---: MODERN fake timers enabled.');
vi.resetAllMocks(); // Resets mock implementations AND call history.
console.log('--- [TEST LOG] ---: Mocks reset.');
mockedChecksumModule.generateFileChecksum.mockResolvedValue('mock-checksum');
@@ -61,7 +81,6 @@ describe('FlyerUploader', () => {
});
afterEach(() => {
vi.useRealTimers();
console.log(`--- [TEST LOG] ---: Finished test: "${expect.getState().currentTestName}"\n`);
});
@@ -73,12 +92,11 @@ describe('FlyerUploader', () => {
it('should handle file upload and start polling', async () => {
console.log('--- [TEST LOG] ---: 1. Setting up mocks for upload and polling.');
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue(
new Response(JSON.stringify({ jobId: 'job-123' }), { status: 200 }),
);
mockedAiApiClient.getJobStatus.mockResolvedValue(
new Response(JSON.stringify({ state: 'active', progress: { message: 'Checking...' } })),
);
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: 'job-123' });
mockedAiApiClient.getJobStatus.mockResolvedValue({
state: 'active',
progress: { message: 'Checking...' },
});
console.log('--- [TEST LOG] ---: 2. Rendering component and preparing file.');
renderComponent();
@@ -105,21 +123,18 @@ describe('FlyerUploader', () => {
expect(mockedAiApiClient.getJobStatus).toHaveBeenCalledTimes(1);
console.log('--- [TEST LOG] ---: 7. Mocks verified. Advancing timers now...');
await act(async () => {
console.log('--- [TEST LOG] ---: 8a. vi.advanceTimersByTime(3000) starting...');
vi.advanceTimersByTime(3000);
console.log('--- [TEST LOG] ---: 8b. vi.advanceTimersByTime(3000) complete.');
});
// With real timers, we now wait for the polling interval to elapse.
console.log(
`--- [TEST LOG] ---: 9. Act block finished. Now checking if getJobStatus was called again.`,
);
try {
// The polling interval is 3s, so we wait for a bit longer.
await waitFor(() => {
const calls = mockedAiApiClient.getJobStatus.mock.calls.length;
console.log(`--- [TEST LOG] ---: 10. waitFor check: getJobStatus calls = ${calls}`);
expect(mockedAiApiClient.getJobStatus).toHaveBeenCalledTimes(2);
});
}, { timeout: 4000 });
console.log('--- [TEST LOG] ---: 11. SUCCESS: Second poll confirmed.');
} catch (error) {
console.error('--- [TEST LOG] ---: 11. ERROR: waitFor for second poll timed out.');
@@ -131,12 +146,11 @@ describe('FlyerUploader', () => {
it('should handle file upload via drag and drop', async () => {
console.log('--- [TEST LOG] ---: 1. Setting up mocks for drag and drop.');
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue(
new Response(JSON.stringify({ jobId: 'job-dnd' }), { status: 200 }),
);
mockedAiApiClient.getJobStatus.mockResolvedValue(
new Response(JSON.stringify({ state: 'active', progress: { message: 'Dropped...' } })),
);
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: 'job-dnd' });
mockedAiApiClient.getJobStatus.mockResolvedValue({
state: 'active',
progress: { message: 'Dropped...' },
});
console.log('--- [TEST LOG] ---: 2. Rendering component and preparing file for drop.');
renderComponent();
@@ -159,16 +173,10 @@ describe('FlyerUploader', () => {
it('should poll for status, complete successfully, and redirect', async () => {
const onProcessingComplete = vi.fn();
console.log('--- [TEST LOG] ---: 1. Setting up mock sequence for polling.');
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue(
new Response(JSON.stringify({ jobId: 'job-123' }), { status: 200 }),
);
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: 'job-123' });
mockedAiApiClient.getJobStatus
.mockResolvedValueOnce(
new Response(JSON.stringify({ state: 'active', progress: { message: 'Analyzing...' } })),
)
.mockResolvedValueOnce(
new Response(JSON.stringify({ state: 'completed', returnValue: { flyerId: 42 } })),
);
.mockResolvedValueOnce({ state: 'active', progress: { message: 'Analyzing...' } })
.mockResolvedValueOnce({ state: 'completed', returnValue: { flyerId: 42 } });
console.log('--- [TEST LOG] ---: 2. Rendering component and uploading file.');
renderComponent(onProcessingComplete);
@@ -189,24 +197,21 @@ describe('FlyerUploader', () => {
expect(mockedAiApiClient.getJobStatus).toHaveBeenCalledTimes(1);
console.log('--- [TEST LOG] ---: 5. First poll confirmed. Now AWAITING timer advancement.');
await act(async () => {
console.log(`--- [TEST LOG] ---: 6. Advancing timers by 4000ms for the second poll...`);
vi.advanceTimersByTime(4000);
});
console.log(`--- [TEST LOG] ---: 7. Timers advanced. Now AWAITING completion message.`);
try {
console.log(
'--- [TEST LOG] ---: 8a. waitFor check: Waiting for completion text and job status count.',
);
// Wait for the second poll to occur and the UI to update.
await waitFor(() => {
console.log(
`--- [TEST LOG] ---: 8b. waitFor interval: calls=${mockedAiApiClient.getJobStatus.mock.calls.length}`,
`--- [TEST LOG] ---: 8b. waitFor interval: calls=${
mockedAiApiClient.getJobStatus.mock.calls.length
}`,
);
expect(
screen.getByText('Processing complete! Redirecting to flyer 42...'),
).toBeInTheDocument();
});
}, { timeout: 4000 });
console.log('--- [TEST LOG] ---: 9. SUCCESS: Completion message found.');
} catch (error) {
console.error('--- [TEST LOG] ---: 9. ERROR: waitFor for completion message timed out.');
@@ -216,12 +221,9 @@ describe('FlyerUploader', () => {
}
expect(mockedAiApiClient.getJobStatus).toHaveBeenCalledTimes(2);
await act(async () => {
console.log(`--- [TEST LOG] ---: 10. Advancing timers by 2000ms for redirect...`);
vi.advanceTimersByTime(2000);
});
// Wait for the redirect timer (1.5s in component) to fire.
await act(() => new Promise((r) => setTimeout(r, 2000)));
console.log(`--- [TEST LOG] ---: 11. Timers advanced. Now asserting navigation.`);
expect(onProcessingComplete).toHaveBeenCalled();
expect(navigateSpy).toHaveBeenCalledWith('/flyers/42');
console.log('--- [TEST LOG] ---: 12. Callback and navigation confirmed.');
@@ -229,12 +231,11 @@ describe('FlyerUploader', () => {
it('should handle a failed job', async () => {
console.log('--- [TEST LOG] ---: 1. Setting up mocks for a failed job.');
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue(
new Response(JSON.stringify({ jobId: 'job-fail' }), { status: 200 }),
);
mockedAiApiClient.getJobStatus.mockResolvedValue(
new Response(JSON.stringify({ state: 'failed', failedReason: 'AI model exploded' })),
);
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: 'job-fail' });
// The getJobStatus function throws a specific error when the job fails,
// which is then caught by react-query and placed in the `error` state.
const jobFailedError = new aiApiClientModule.JobFailedError('AI model exploded', 'UNKNOWN_ERROR');
mockedAiApiClient.getJobStatus.mockRejectedValue(jobFailedError);
console.log('--- [TEST LOG] ---: 2. Rendering and uploading.');
renderComponent();
@@ -247,7 +248,8 @@ describe('FlyerUploader', () => {
try {
console.log('--- [TEST LOG] ---: 4. AWAITING failure message...');
expect(await screen.findByText(/Processing failed: AI model exploded/i)).toBeInTheDocument();
// The UI should now display the error from the `pollError` state, which includes the "Polling failed" prefix.
expect(await screen.findByText(/Polling failed: AI model exploded/i)).toBeInTheDocument();
console.log('--- [TEST LOG] ---: 5. SUCCESS: Failure message found.');
} catch (error) {
console.error('--- [TEST LOG] ---: 5. ERROR: findByText for failure message timed out.');
@@ -260,11 +262,80 @@ describe('FlyerUploader', () => {
console.log('--- [TEST LOG] ---: 6. "Upload Another" button confirmed.');
});
it('should clear the polling timeout when a job fails', async () => {
console.log('--- [TEST LOG] ---: 1. Setting up mocks for failed job timeout clearance.');
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: 'job-fail-timeout' });
// We need at least one 'active' response to establish a timeout loop so we have something to clear
// The second call should be a rejection, as this is how getJobStatus signals a failure.
mockedAiApiClient.getJobStatus
.mockResolvedValueOnce({
state: 'active',
progress: { message: 'Working...' },
} as aiApiClientModule.JobStatus)
.mockRejectedValueOnce(new aiApiClientModule.JobFailedError('Fatal Error', 'UNKNOWN_ERROR'));
renderComponent();
const file = new File(['content'], 'flyer.pdf', { type: 'application/pdf' });
const input = screen.getByLabelText(/click to select a file/i);
fireEvent.change(input, { target: { files: [file] } });
// Wait for the first poll to complete and UI to update to "Working..."
await screen.findByText('Working...');
// Wait for the failure UI
await waitFor(() => expect(screen.getByText(/Polling failed: Fatal Error/i)).toBeInTheDocument(), { timeout: 4000 });
});
it('should stop polling for job status when the component unmounts', async () => {
console.log('--- [TEST LOG] ---: 1. Setting up mocks for unmount polling stop.');
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: 'job-unmount' });
// Mock getJobStatus to always return 'active' to keep polling
mockedAiApiClient.getJobStatus.mockResolvedValue({
state: 'active',
progress: { message: 'Polling...' },
});
const { unmount } = renderComponent();
const file = new File(['content'], 'flyer.pdf', { type: 'application/pdf' });
const input = screen.getByLabelText(/click to select a file/i);
fireEvent.change(input, { target: { files: [file] } });
// Wait for the first poll to complete and UI to update
await screen.findByText('Polling...');
// Wait for exactly one call to be sure polling has started.
await waitFor(() => {
expect(mockedAiApiClient.getJobStatus).toHaveBeenCalledTimes(1);
});
console.log('--- [TEST LOG] ---: 2. First poll confirmed.');
// Record the number of calls before unmounting.
const callsBeforeUnmount = mockedAiApiClient.getJobStatus.mock.calls.length;
// Now unmount the component, which should stop the polling.
console.log('--- [TEST LOG] ---: 3. Unmounting component.');
unmount();
// Wait for a duration longer than the polling interval (3s) to see if more calls are made.
console.log('--- [TEST LOG] ---: 4. Waiting for 4 seconds to check for further polling.');
await act(() => new Promise((resolve) => setTimeout(resolve, 4000)));
// Verify that getJobStatus was not called again after unmounting.
console.log('--- [TEST LOG] ---: 5. Asserting no new polls occurred.');
expect(mockedAiApiClient.getJobStatus).toHaveBeenCalledTimes(callsBeforeUnmount);
});
it('should handle a duplicate flyer error (409)', async () => {
console.log('--- [TEST LOG] ---: 1. Setting up mock for 409 duplicate error.');
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue(
new Response(JSON.stringify({ flyerId: 99, message: 'Duplicate' }), { status: 409 }),
);
// The API client throws a structured error, which useFlyerUploader now parses
// to set both the errorMessage and the duplicateFlyerId.
mockedAiApiClient.uploadAndProcessFlyer.mockRejectedValue({
status: 409,
body: { flyerId: 99, message: 'This flyer has already been processed.' },
});
console.log('--- [TEST LOG] ---: 2. Rendering and uploading.');
renderComponent();
@@ -277,9 +348,10 @@ describe('FlyerUploader', () => {
try {
console.log('--- [TEST LOG] ---: 4. AWAITING duplicate flyer message...');
expect(
await screen.findByText('This flyer has already been processed. You can view it here:'),
).toBeInTheDocument();
// With the fix, the duplicate error message and the link are combined into a single paragraph.
// We now look for this combined message.
const errorMessage = await screen.findByText(/This flyer has already been processed. You can view it here:/i);
expect(errorMessage).toBeInTheDocument();
console.log('--- [TEST LOG] ---: 5. SUCCESS: Duplicate message found.');
} catch (error) {
console.error('--- [TEST LOG] ---: 5. ERROR: findByText for duplicate message timed out.');
@@ -295,12 +367,11 @@ describe('FlyerUploader', () => {
it('should allow the user to stop watching progress', async () => {
console.log('--- [TEST LOG] ---: 1. Setting up mocks for infinite polling.');
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue(
new Response(JSON.stringify({ jobId: 'job-stop' }), { status: 200 }),
);
mockedAiApiClient.getJobStatus.mockResolvedValue(
new Response(JSON.stringify({ state: 'active', progress: { message: 'Analyzing...' } })),
);
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: 'job-stop' });
mockedAiApiClient.getJobStatus.mockResolvedValue({
state: 'active',
progress: { message: 'Analyzing...' },
} as any);
console.log('--- [TEST LOG] ---: 2. Rendering and uploading.');
renderComponent();
@@ -362,9 +433,11 @@ describe('FlyerUploader', () => {
it('should handle a generic network error during upload', async () => {
console.log('--- [TEST LOG] ---: 1. Setting up mock for generic upload error.');
mockedAiApiClient.uploadAndProcessFlyer.mockRejectedValue(
new Error('Network Error During Upload'),
);
// Simulate a structured error from the API client
mockedAiApiClient.uploadAndProcessFlyer.mockRejectedValue({
status: 500,
body: { message: 'Network Error During Upload' },
});
renderComponent();
const file = new File(['content'], 'flyer.pdf', { type: 'application/pdf' });
const input = screen.getByLabelText(/click to select a file/i);
@@ -379,9 +452,7 @@ describe('FlyerUploader', () => {
it('should handle a generic network error during polling', async () => {
console.log('--- [TEST LOG] ---: 1. Setting up mock for polling error.');
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue(
new Response(JSON.stringify({ jobId: 'job-poll-fail' }), { status: 200 }),
);
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: 'job-poll-fail' });
mockedAiApiClient.getJobStatus.mockRejectedValue(new Error('Polling Network Error'));
renderComponent();
@@ -392,17 +463,15 @@ describe('FlyerUploader', () => {
fireEvent.change(input, { target: { files: [file] } });
console.log('--- [TEST LOG] ---: 3. Awaiting error message.');
expect(await screen.findByText(/Polling Network Error/i)).toBeInTheDocument();
expect(await screen.findByText(/Polling failed: Polling Network Error/i)).toBeInTheDocument();
console.log('--- [TEST LOG] ---: 4. Assertions passed.');
});
it('should handle a completed job with a missing flyerId', async () => {
console.log('--- [TEST LOG] ---: 1. Setting up mock for malformed completion payload.');
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue(
new Response(JSON.stringify({ jobId: 'job-no-flyerid' }), { status: 200 }),
);
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: 'job-no-flyerid' });
mockedAiApiClient.getJobStatus.mockResolvedValue(
new Response(JSON.stringify({ state: 'completed', returnValue: {} })), // No flyerId
{ state: 'completed', returnValue: {} }, // No flyerId
);
renderComponent();
@@ -419,6 +488,29 @@ describe('FlyerUploader', () => {
console.log('--- [TEST LOG] ---: 4. Assertions passed.');
});
it('should handle a non-JSON response during polling', async () => {
console.log('--- [TEST LOG] ---: 1. Setting up mock for non-JSON response.');
// The actual function would throw, so we mock the rejection.
// The new getJobStatus would throw an error like "Failed to parse JSON..."
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: 'job-bad-json' });
mockedAiApiClient.getJobStatus.mockRejectedValue(
new Error('Failed to parse JSON response from server. Body: <html>502 Bad Gateway</html>'),
);
renderComponent();
const file = new File(['content'], 'flyer.pdf', { type: 'application/pdf' });
const input = screen.getByLabelText(/click to select a file/i);
console.log('--- [TEST LOG] ---: 2. Firing file change event.');
fireEvent.change(input, { target: { files: [file] } });
console.log('--- [TEST LOG] ---: 3. Awaiting error message.');
expect(
await screen.findByText(/Polling failed: Failed to parse JSON response from server/i),
).toBeInTheDocument();
console.log('--- [TEST LOG] ---: 4. Assertions passed.');
});
it('should do nothing if the file input is cancelled', () => {
renderComponent();
const input = screen.getByLabelText(/click to select a file/i);

View File

@@ -1,213 +1,68 @@
// src/features/flyer/FlyerUploader.tsx
import React, { useState, useEffect, useRef, useCallback } from 'react';
import React, { useEffect, useCallback } from 'react';
import { useNavigate, Link } from 'react-router-dom';
import { uploadAndProcessFlyer, getJobStatus } from '../../services/aiApiClient';
import { generateFileChecksum } from '../../utils/checksum';
import { logger } from '../../services/logger.client';
import { ProcessingStatus } from './ProcessingStatus';
import type { ProcessingStage } from '../../types';
import { useDragAndDrop } from '../../hooks/useDragAndDrop';
type ProcessingState = 'idle' | 'uploading' | 'polling' | 'completed' | 'error';
import { useFlyerUploader } from '../../hooks/useFlyerUploader';
interface FlyerUploaderProps {
onProcessingComplete: () => void;
}
export const FlyerUploader: React.FC<FlyerUploaderProps> = ({ onProcessingComplete }) => {
const [processingState, setProcessingState] = useState<ProcessingState>('idle');
const [statusMessage, setStatusMessage] = useState<string | null>(null);
const [jobId, setJobId] = useState<string | null>(null);
const [errorMessage, setErrorMessage] = useState<string | null>(null);
const [duplicateFlyerId, setDuplicateFlyerId] = useState<number | null>(null);
const navigate = useNavigate();
const pollingTimeoutRef = useRef<number | null>(null);
const [processingStages, setProcessingStages] = useState<ProcessingStage[]>([]);
const [estimatedTime, setEstimatedTime] = useState(0);
const [currentFile, setCurrentFile] = useState<string | null>(null);
// DEBUG: Log component mount and unmount
useEffect(() => {
console.debug('[DEBUG] FlyerUploader: Component did mount.');
return () => {
console.debug('[DEBUG] FlyerUploader: Component will unmount.');
};
}, []);
// DEBUG: Log state changes
useEffect(() => {
console.debug(`[DEBUG] FlyerUploader: processingState changed to -> ${processingState}`);
}, [processingState]);
const {
processingState,
statusMessage,
errorMessage,
duplicateFlyerId,
processingStages,
estimatedTime,
currentFile,
flyerId,
upload,
resetUploaderState,
} = useFlyerUploader();
useEffect(() => {
if (statusMessage) logger.info(`FlyerUploader Status: ${statusMessage}`);
}, [statusMessage]);
useEffect(() => {
console.debug(`[DEBUG] Polling Effect Triggered: state=${processingState}, jobId=${jobId}`);
if (processingState !== 'polling' || !jobId) {
if (pollingTimeoutRef.current) {
console.debug(
`[DEBUG] Polling Effect: Clearing timeout ID ${pollingTimeoutRef.current} because state is not 'polling' or no jobId exists.`,
);
clearTimeout(pollingTimeoutRef.current);
}
return;
if (errorMessage) {
logger.error(`[FlyerUploader] Error encountered: ${errorMessage}`, { duplicateFlyerId });
}
}, [errorMessage, duplicateFlyerId]);
const pollStatus = async () => {
console.debug(`[DEBUG] pollStatus(): Polling for jobId: ${jobId}`);
try {
const statusResponse = await getJobStatus(jobId);
console.debug(`[DEBUG] pollStatus(): API response status: ${statusResponse.status}`);
if (!statusResponse.ok) {
throw new Error(`Failed to get job status (HTTP ${statusResponse.status})`);
}
const job = await statusResponse.json();
console.debug('[DEBUG] pollStatus(): Job status received:', job);
if (job.progress) {
setProcessingStages(job.progress.stages || []);
setEstimatedTime(job.progress.estimatedTimeRemaining || 0);
setStatusMessage(job.progress.message || null);
}
switch (job.state) {
case 'completed':
console.debug('[DEBUG] pollStatus(): Job state is "completed".');
const flyerId = job.returnValue?.flyerId;
if (flyerId) {
setStatusMessage(`Processing complete! Redirecting to flyer ${flyerId}...`);
setProcessingState('completed');
onProcessingComplete();
console.debug('[DEBUG] pollStatus(): Setting 1500ms timeout for redirect.');
setTimeout(() => {
console.debug(`[DEBUG] pollStatus(): Redirecting to /flyers/${flyerId}`);
navigate(`/flyers/${flyerId}`);
}, 1500);
} else {
throw new Error('Job completed but did not return a flyer ID.');
}
break;
case 'failed':
console.debug(
`[DEBUG] pollStatus(): Job state is "failed". Reason: ${job.failedReason}`,
);
setErrorMessage(`Processing failed: ${job.failedReason || 'Unknown error'}`);
setProcessingState('error');
break;
case 'active':
case 'waiting':
default:
console.debug(
`[DEBUG] pollStatus(): Job state is "${job.state}". Setting timeout for next poll (3000ms).`,
);
pollingTimeoutRef.current = window.setTimeout(pollStatus, 3000);
console.debug(`[DEBUG] pollStatus(): Timeout ID ${pollingTimeoutRef.current} set.`);
break;
}
} catch (error) {
logger.error('Error during polling:', { error });
setErrorMessage(
error instanceof Error ? error.message : 'An unexpected error occurred during polling.',
);
setProcessingState('error');
}
};
pollStatus();
return () => {
if (pollingTimeoutRef.current) {
console.debug(
`[DEBUG] Polling Effect Cleanup: Clearing timeout ID ${pollingTimeoutRef.current}`,
);
clearTimeout(pollingTimeoutRef.current);
pollingTimeoutRef.current = null;
} else {
console.debug('[DEBUG] Polling Effect Cleanup: No active timeout to clear.');
}
};
}, [processingState, jobId, onProcessingComplete, navigate]);
const processFile = useCallback(async (file: File) => {
console.debug('[DEBUG] processFile(): Starting file processing for', file.name);
setProcessingState('uploading');
setErrorMessage(null);
setDuplicateFlyerId(null);
setCurrentFile(file.name);
try {
console.debug('[DEBUG] processFile(): Generating file checksum.');
const checksum = await generateFileChecksum(file);
setStatusMessage('Uploading file...');
console.debug(
`[DEBUG] processFile(): Checksum generated: ${checksum}. Calling uploadAndProcessFlyer.`,
);
const startResponse = await uploadAndProcessFlyer(file, checksum);
console.debug(`[DEBUG] processFile(): Upload response status: ${startResponse.status}`);
if (!startResponse.ok) {
const errorData = await startResponse.json();
console.debug('[DEBUG] processFile(): Upload failed. Error data:', errorData);
if (startResponse.status === 409 && errorData.flyerId) {
setErrorMessage(`This flyer has already been processed. You can view it here:`);
setDuplicateFlyerId(errorData.flyerId);
} else {
setErrorMessage(errorData.message || `Upload failed with status ${startResponse.status}`);
}
setProcessingState('error');
return;
}
const { jobId: newJobId } = await startResponse.json();
console.debug(`[DEBUG] processFile(): Upload successful. Received jobId: ${newJobId}`);
setJobId(newJobId);
setProcessingState('polling');
} catch (error) {
logger.error('An unexpected error occurred during file upload:', { error });
setErrorMessage(error instanceof Error ? error.message : 'An unexpected error occurred.');
setProcessingState('error');
// Handle completion and navigation
useEffect(() => {
if (processingState === 'completed' && flyerId) {
onProcessingComplete();
// Small delay to show the "Complete" state before redirecting
const timer = setTimeout(() => {
navigate(`/flyers/${flyerId}`);
}, 1500);
return () => clearTimeout(timer);
}
}, []);
}, [processingState, flyerId, onProcessingComplete, navigate]);
const handleFileChange = (event: React.ChangeEvent<HTMLInputElement>) => {
console.debug('[DEBUG] handleFileChange(): File input changed.');
const file = event.target.files?.[0];
if (file) {
processFile(file);
upload(file);
}
event.target.value = '';
};
const resetUploaderState = useCallback(() => {
console.debug(
`[DEBUG] resetUploaderState(): User triggered reset. Previous jobId was: ${jobId}`,
);
setProcessingState('idle');
setJobId(null);
setErrorMessage(null);
setDuplicateFlyerId(null);
setCurrentFile(null);
setProcessingStages([]);
setEstimatedTime(0);
logger.info('Uploader state has been reset. Previous job ID was:', jobId);
}, [jobId]);
const onFilesDropped = useCallback(
(files: FileList) => {
console.debug('[DEBUG] onFilesDropped(): Files were dropped.');
if (files && files.length > 0) {
processFile(files[0]);
upload(files[0]);
}
},
[processFile],
[upload],
);
const isProcessing = processingState === 'uploading' || processingState === 'polling';
@@ -221,11 +76,6 @@ export const FlyerUploader: React.FC<FlyerUploaderProps> = ({ onProcessingComple
? 'bg-brand-light/50 dark:bg-brand-dark/20'
: 'bg-gray-50/50 dark:bg-gray-800/20';
// If processing, show the detailed status component. Otherwise, show the uploader.
console.debug(
`[DEBUG] FlyerUploader: Rendering. State=${processingState}, Msg=${statusMessage}, Err=${!!errorMessage}`,
);
if (isProcessing || processingState === 'completed' || processingState === 'error') {
return (
<div className="max-w-4xl mx-auto">
@@ -235,22 +85,30 @@ export const FlyerUploader: React.FC<FlyerUploaderProps> = ({ onProcessingComple
currentFile={currentFile}
/>
<div className="mt-4 text-center">
{/* Display the current status message to the user and the test runner */}
{statusMessage && (
{/* Display status message if not completed (completed has its own redirect logic) */}
{statusMessage && processingState !== 'completed' && (
<p className="text-gray-600 dark:text-gray-400 mt-2 italic animate-pulse">
{statusMessage}
</p>
)}
{processingState === 'completed' && (
<p className="text-green-600 dark:text-green-400 mt-2 font-bold">
Processing complete! Redirecting to flyer {flyerId}...
</p>
)}
{errorMessage && (
<div className="text-red-600 dark:text-red-400 font-semibold p-4 bg-red-100 dark:bg-red-900/30 rounded-md">
<p>{errorMessage}</p>
{duplicateFlyerId && (
{duplicateFlyerId ? (
<p>
<Link to={`/flyers/${duplicateFlyerId}`} className="text-blue-500 underline">
{errorMessage} You can view it here:{' '}
<Link to={`/flyers/${duplicateFlyerId}`} className="text-blue-500 underline" data-discover="true">
Flyer #{duplicateFlyerId}
</Link>
</p>
) : (
<p>{errorMessage}</p>
)}
</div>
)}

View File

@@ -1,130 +0,0 @@
// src/features/flyer/dateUtils.test.ts
import { describe, it, expect } from 'vitest';
import { formatShortDate, calculateDaysBetween, formatDateRange } from './dateUtils';
describe('formatShortDate', () => {
it('should format a valid YYYY-MM-DD date string correctly', () => {
expect(formatShortDate('2024-07-26')).toBe('Jul 26');
});
it('should handle single-digit days correctly', () => {
expect(formatShortDate('2025-01-05')).toBe('Jan 5');
});
it('should handle dates at the end of the year', () => {
expect(formatShortDate('2023-12-31')).toBe('Dec 31');
});
it('should return null for a null input', () => {
expect(formatShortDate(null)).toBeNull();
});
it('should return null for an undefined input', () => {
expect(formatShortDate(undefined)).toBeNull();
});
it('should return null for an empty string input', () => {
expect(formatShortDate('')).toBeNull();
});
it('should return null for an invalid date string', () => {
expect(formatShortDate('not-a-real-date')).toBeNull();
});
it('should return null for a malformed date string', () => {
expect(formatShortDate('2024-13-01')).toBeNull(); // Invalid month
});
it('should correctly format a full ISO string with time and timezone', () => {
expect(formatShortDate('2024-12-25T10:00:00Z')).toBe('Dec 25');
});
});
describe('calculateDaysBetween', () => {
it('should calculate the difference in days between two valid date strings', () => {
expect(calculateDaysBetween('2023-01-01', '2023-01-05')).toBe(4);
});
it('should return a negative number if the end date is before the start date', () => {
expect(calculateDaysBetween('2023-01-05', '2023-01-01')).toBe(-4);
});
it('should handle Date objects', () => {
const start = new Date('2023-01-01');
const end = new Date('2023-01-10');
expect(calculateDaysBetween(start, end)).toBe(9);
});
it('should return null if either date is null or undefined', () => {
expect(calculateDaysBetween(null, '2023-01-01')).toBeNull();
expect(calculateDaysBetween('2023-01-01', undefined)).toBeNull();
});
it('should return null if either date is invalid', () => {
expect(calculateDaysBetween('invalid', '2023-01-01')).toBeNull();
expect(calculateDaysBetween('2023-01-01', 'invalid')).toBeNull();
});
});
describe('formatDateRange', () => {
it('should format a range with two different valid dates', () => {
expect(formatDateRange('2023-01-01', '2023-01-05')).toBe('Jan 1 - Jan 5');
});
it('should format a range with the same start and end date as a single date', () => {
expect(formatDateRange('2023-01-01', '2023-01-01')).toBe('Jan 1');
});
it('should return only the start date if end date is missing', () => {
expect(formatDateRange('2023-01-01', null)).toBe('Jan 1');
expect(formatDateRange('2023-01-01', undefined)).toBe('Jan 1');
});
it('should return only the end date if start date is missing', () => {
expect(formatDateRange(null, '2023-01-05')).toBe('Jan 5');
expect(formatDateRange(undefined, '2023-01-05')).toBe('Jan 5');
});
it('should return null if both dates are missing or invalid', () => {
expect(formatDateRange(null, null)).toBeNull();
expect(formatDateRange(undefined, undefined)).toBeNull();
expect(formatDateRange('invalid', 'invalid')).toBeNull();
});
it('should handle one valid and one invalid date by showing only the valid one', () => {
expect(formatDateRange('2023-01-01', 'invalid')).toBe('Jan 1');
expect(formatDateRange('invalid', '2023-01-05')).toBe('Jan 5');
});
describe('verbose mode', () => {
it('should format a range with two different valid dates verbosely', () => {
expect(formatDateRange('2023-01-01', '2023-01-05', { verbose: true })).toBe(
'Deals valid from January 1, 2023 to January 5, 2023',
);
});
it('should format a range with the same start and end date verbosely', () => {
expect(formatDateRange('2023-01-01', '2023-01-01', { verbose: true })).toBe(
'Valid on January 1, 2023',
);
});
it('should format only the start date verbosely', () => {
expect(formatDateRange('2023-01-01', null, { verbose: true })).toBe(
'Deals start January 1, 2023',
);
});
it('should format only the end date verbosely', () => {
expect(formatDateRange(null, '2023-01-05', { verbose: true })).toBe(
'Deals end January 5, 2023',
);
});
it('should handle one valid and one invalid date verbosely', () => {
expect(formatDateRange('2023-01-01', 'invalid', { verbose: true })).toBe(
'Deals start January 1, 2023',
);
});
});
});

View File

@@ -1,65 +0,0 @@
// src/features/flyer/dateUtils.ts
import { parseISO, format, isValid, differenceInDays } from 'date-fns';
export const formatShortDate = (dateString: string | null | undefined): string | null => {
if (!dateString) return null;
// Using `parseISO` from date-fns is more reliable than `new Date()` for YYYY-MM-DD strings.
// It correctly interprets the string as a local date, avoiding timezone-related "off-by-one" errors.
const date = parseISO(dateString);
if (isValid(date)) {
return format(date, 'MMM d');
}
return null;
};
export const calculateDaysBetween = (
startDate: string | Date | null | undefined,
endDate: string | Date | null | undefined,
): number | null => {
if (!startDate || !endDate) return null;
const start = typeof startDate === 'string' ? parseISO(startDate) : startDate;
const end = typeof endDate === 'string' ? parseISO(endDate) : endDate;
if (!isValid(start) || !isValid(end)) return null;
return differenceInDays(end, start);
};
interface DateRangeOptions {
verbose?: boolean;
}
export const formatDateRange = (
startDate: string | null | undefined,
endDate: string | null | undefined,
options?: DateRangeOptions,
): string | null => {
if (!options?.verbose) {
const start = formatShortDate(startDate);
const end = formatShortDate(endDate);
if (start && end) {
return start === end ? start : `${start} - ${end}`;
}
return start || end || null;
}
// Verbose format logic
const dateFormat = 'MMMM d, yyyy';
const formatFn = (dateStr: string | null | undefined) => {
if (!dateStr) return null;
const date = parseISO(dateStr);
return isValid(date) ? format(date, dateFormat) : null;
};
const start = formatFn(startDate);
const end = formatFn(endDate);
if (start && end) {
return start === end ? `Valid on ${start}` : `Deals valid from ${start} to ${end}`;
}
if (start) return `Deals start ${start}`;
if (end) return `Deals end ${end}`;
return null;
};

View File

@@ -236,6 +236,24 @@ describe('ShoppingListComponent (in shopping feature)', () => {
alertSpy.mockRestore();
});
it('should show a generic alert if reading aloud fails with a non-Error object', async () => {
const alertSpy = vi.spyOn(window, 'alert').mockImplementation(() => {});
vi.spyOn(aiApiClient, 'generateSpeechFromText').mockRejectedValue('A string error');
render(<ShoppingListComponent {...defaultProps} />);
const readAloudButton = screen.getByTitle(/read list aloud/i);
fireEvent.click(readAloudButton);
await waitFor(() => {
expect(alertSpy).toHaveBeenCalledWith(
'Could not read list aloud: An unknown error occurred while generating audio.',
);
});
alertSpy.mockRestore();
});
it('should handle interactions with purchased items', () => {
render(<ShoppingListComponent {...defaultProps} />);

View File

@@ -1,5 +1,5 @@
// src/features/shopping/ShoppingList.tsx
import React, { useState, useMemo, useCallback, useEffect } from 'react';
import React, { useState, useMemo, useCallback } from 'react';
import type { ShoppingList, ShoppingListItem, User } from '../../types';
import { UserIcon } from '../../components/icons/UserIcon';
import { ListBulletIcon } from '../../components/icons/ListBulletIcon';
@@ -56,28 +56,6 @@ export const ShoppingListComponent: React.FC<ShoppingListComponentProps> = ({
return { neededItems, purchasedItems };
}, [activeList]);
useEffect(() => {
if (activeList) {
console.log('ShoppingList Debug: Active List:', activeList.name);
console.log(
'ShoppingList Debug: Needed Items:',
neededItems.map((i) => ({
id: i.shopping_list_item_id,
name: i.custom_item_name || i.master_item?.name,
raw: i,
})),
);
console.log(
'ShoppingList Debug: Purchased Items:',
purchasedItems.map((i) => ({
id: i.shopping_list_item_id,
name: i.custom_item_name || i.master_item?.name,
raw: i,
})),
);
}
}, [activeList, neededItems, purchasedItems]);
const handleCreateList = async () => {
const name = prompt('Enter a name for your new shopping list:');
if (name && name.trim()) {

View File

@@ -164,6 +164,15 @@ describe('WatchedItemsList (in shopping feature)', () => {
expect(itemsDesc[1]).toHaveTextContent('Eggs');
expect(itemsDesc[2]).toHaveTextContent('Bread');
expect(itemsDesc[3]).toHaveTextContent('Apples');
// Click again to sort ascending
fireEvent.click(sortButton);
const itemsAscAgain = screen.getAllByRole('listitem');
expect(itemsAscAgain[0]).toHaveTextContent('Apples');
expect(itemsAscAgain[1]).toHaveTextContent('Bread');
expect(itemsAscAgain[2]).toHaveTextContent('Eggs');
expect(itemsAscAgain[3]).toHaveTextContent('Milk');
});
it('should call onAddItemToList when plus icon is clicked', () => {
@@ -222,6 +231,18 @@ describe('WatchedItemsList (in shopping feature)', () => {
fireEvent.change(nameInput, { target: { value: 'Grapes' } });
expect(addButton).toBeDisabled();
});
it('should not submit if form is submitted with invalid data', () => {
render(<WatchedItemsList {...defaultProps} />);
const nameInput = screen.getByPlaceholderText(/add item/i);
const form = nameInput.closest('form')!;
const categorySelect = screen.getByDisplayValue('Select a category');
fireEvent.change(categorySelect, { target: { value: 'Dairy & Eggs' } });
fireEvent.change(nameInput, { target: { value: ' ' } });
fireEvent.submit(form);
expect(mockOnAddItem).not.toHaveBeenCalled();
});
});
describe('Error Handling', () => {

View File

@@ -12,12 +12,7 @@ import {
} from '../tests/utils/mockFactories';
import { mockUseFlyers, mockUseUserData } from '../tests/setup/mockHooks';
// Explicitly mock apiClient to ensure stable spies are used
vi.mock('../services/apiClient', () => ({
countFlyerItemsForFlyers: vi.fn(),
fetchFlyerItemsForFlyers: vi.fn(),
}));
// The apiClient is mocked globally in `src/tests/setup/globalApiMock.ts`.
// Mock the hooks to avoid Missing Context errors
vi.mock('./useFlyers', () => ({
useFlyers: () => mockUseFlyers(),
@@ -30,14 +25,6 @@ vi.mock('../hooks/useUserData', () => ({
// The apiClient is globally mocked in our test setup, so we just need to cast it
const mockedApiClient = vi.mocked(apiClient);
// Mock the logger to prevent console noise
vi.mock('../services/logger.client', () => ({
logger: {
error: vi.fn(),
info: vi.fn(), // Added to prevent crashes on abort logging
},
}));
// Set a consistent "today" for testing flyer validity to make tests deterministic
const TODAY = new Date('2024-01-15T12:00:00.000Z');

View File

@@ -3,6 +3,7 @@ import { useState, useCallback, useRef, useEffect } from 'react';
import { logger } from '../services/logger.client';
import { notifyError } from '../services/notificationService';
/**
* A custom React hook to simplify API calls, including loading and error states.
* It is designed to work with apiClient functions that return a `Promise<Response>`.
@@ -26,8 +27,17 @@ export function useApi<T, TArgs extends unknown[]>(
const [isRefetching, setIsRefetching] = useState<boolean>(false);
const [error, setError] = useState<Error | null>(null);
const hasBeenExecuted = useRef(false);
const lastErrorMessageRef = useRef<string | null>(null);
const abortControllerRef = useRef<AbortController>(new AbortController());
// Use a ref to track the latest apiFunction. This allows us to keep `execute` stable
// even if `apiFunction` is recreated on every render (common with inline arrow functions).
const apiFunctionRef = useRef(apiFunction);
useEffect(() => {
apiFunctionRef.current = apiFunction;
}, [apiFunction]);
// This effect ensures that when the component using the hook unmounts,
// any in-flight request is cancelled.
useEffect(() => {
@@ -52,12 +62,13 @@ export function useApi<T, TArgs extends unknown[]>(
async (...args: TArgs): Promise<T | null> => {
setLoading(true);
setError(null);
lastErrorMessageRef.current = null;
if (hasBeenExecuted.current) {
setIsRefetching(true);
}
try {
const response = await apiFunction(...args, abortControllerRef.current.signal);
const response = await apiFunctionRef.current(...args, abortControllerRef.current.signal);
if (!response.ok) {
// Attempt to parse a JSON error response. This is aligned with ADR-003,
@@ -96,7 +107,17 @@ export function useApi<T, TArgs extends unknown[]>(
}
return result;
} catch (e) {
const err = e instanceof Error ? e : new Error('An unknown error occurred.');
let err: Error;
if (e instanceof Error) {
err = e;
} else if (typeof e === 'object' && e !== null && 'status' in e) {
// Handle structured errors (e.g. { status: 409, body: { ... } })
const structuredError = e as { status: number; body?: { message?: string } };
const message = structuredError.body?.message || `Request failed with status ${structuredError.status}`;
err = new Error(message);
} else {
err = new Error('An unknown error occurred.');
}
// If the error is an AbortError, it's an intentional cancellation, so we don't set an error state.
if (err.name === 'AbortError') {
logger.info('API request was cancelled.', { functionName: apiFunction.name });
@@ -106,7 +127,13 @@ export function useApi<T, TArgs extends unknown[]>(
error: err.message,
functionName: apiFunction.name,
});
setError(err);
// Only set a new error object if the message is different from the last one.
// This prevents creating new object references for the same error (e.g. repeated timeouts)
// and helps break infinite loops in components that depend on the `error` object.
if (err.message !== lastErrorMessageRef.current) {
setError(err);
lastErrorMessageRef.current = err.message;
}
notifyError(err.message); // Optionally notify the user automatically.
return null; // Return null on failure.
} finally {
@@ -114,7 +141,7 @@ export function useApi<T, TArgs extends unknown[]>(
setIsRefetching(false);
}
},
[apiFunction],
[], // execute is now stable because it uses apiFunctionRef
); // abortControllerRef is stable
return { execute, loading, isRefetching, error, data, reset };

View File

@@ -0,0 +1,174 @@
// src/hooks/useAppInitialization.test.tsx
import { renderHook, waitFor } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { MemoryRouter, useNavigate } from 'react-router-dom';
import { useAppInitialization } from './useAppInitialization';
import { useAuth } from './useAuth';
import { useModal } from './useModal';
import { createMockUserProfile } from '../tests/utils/mockFactories';
// Mock dependencies
vi.mock('./useAuth');
vi.mock('./useModal');
vi.mock('react-router-dom', async (importOriginal) => {
const actual = await importOriginal<typeof import('react-router-dom')>();
return {
...actual,
useNavigate: vi.fn(),
};
});
vi.mock('../services/logger.client');
vi.mock('../config', () => ({
default: {
app: { version: '1.0.1' },
},
}));
const mockedUseAuth = vi.mocked(useAuth);
const mockedUseModal = vi.mocked(useModal);
const mockedUseNavigate = vi.mocked(useNavigate);
const mockLogin = vi.fn().mockResolvedValue(undefined);
const mockNavigate = vi.fn();
const mockOpenModal = vi.fn();
// Wrapper with MemoryRouter is needed because the hook uses useLocation and useNavigate
const wrapper = ({
children,
initialEntries = ['/'],
}: {
children: React.ReactNode;
initialEntries?: string[];
}) => <MemoryRouter initialEntries={initialEntries}>{children}</MemoryRouter>;
describe('useAppInitialization Hook', () => {
beforeEach(() => {
vi.clearAllMocks();
mockedUseNavigate.mockReturnValue(mockNavigate);
mockedUseAuth.mockReturnValue({
userProfile: null,
login: mockLogin,
authStatus: 'SIGNED_OUT',
isLoading: false,
logout: vi.fn(),
updateProfile: vi.fn(),
});
mockedUseModal.mockReturnValue({
openModal: mockOpenModal,
closeModal: vi.fn(),
isModalOpen: vi.fn(),
});
// Mock localStorage
Object.defineProperty(window, 'localStorage', {
value: {
getItem: vi.fn().mockReturnValue(null),
setItem: vi.fn(),
removeItem: vi.fn(),
clear: vi.fn(),
},
writable: true,
});
// Mock matchMedia
Object.defineProperty(window, 'matchMedia', {
value: vi.fn().mockImplementation((query) => ({
matches: false, // default to light mode
})),
writable: true,
configurable: true,
});
});
it('should call login when googleAuthToken is in URL', async () => {
renderHook(() => useAppInitialization(), {
wrapper: (props) => wrapper({ ...props, initialEntries: ['/?googleAuthToken=test-token'] }),
});
await waitFor(() => {
expect(mockLogin).toHaveBeenCalledWith('test-token');
});
});
it('should call login when githubAuthToken is in URL', async () => {
renderHook(() => useAppInitialization(), {
wrapper: (props) => wrapper({ ...props, initialEntries: ['/?githubAuthToken=test-token'] }),
});
await waitFor(() => {
expect(mockLogin).toHaveBeenCalledWith('test-token');
});
});
it('should call navigate to clean the URL after processing a token', async () => {
renderHook(() => useAppInitialization(), {
wrapper: (props) => wrapper({ ...props, initialEntries: ['/some/path?googleAuthToken=test-token'] }),
});
await waitFor(() => {
expect(mockLogin).toHaveBeenCalledWith('test-token');
});
expect(mockNavigate).toHaveBeenCalledWith('/some/path', { replace: true });
});
it("should open \"What's New\" modal if version is new", () => {
vi.spyOn(window.localStorage, 'getItem').mockReturnValue('1.0.0');
renderHook(() => useAppInitialization(), { wrapper });
expect(mockOpenModal).toHaveBeenCalledWith('whatsNew');
expect(window.localStorage.setItem).toHaveBeenCalledWith('lastSeenVersion', '1.0.1');
});
it("should not open \"What's New\" modal if version is the same", () => {
vi.spyOn(window.localStorage, 'getItem').mockReturnValue('1.0.1');
renderHook(() => useAppInitialization(), { wrapper });
expect(mockOpenModal).not.toHaveBeenCalled();
});
it('should set dark mode from user profile', async () => {
mockedUseAuth.mockReturnValue({
...mockedUseAuth(),
userProfile: createMockUserProfile({ preferences: { darkMode: true } }),
});
const { result } = renderHook(() => useAppInitialization(), { wrapper });
await waitFor(() => {
expect(result.current.isDarkMode).toBe(true);
expect(document.documentElement.classList.contains('dark')).toBe(true);
});
});
it('should set dark mode from localStorage', async () => {
vi.spyOn(window.localStorage, 'getItem').mockImplementation((key) =>
key === 'darkMode' ? 'true' : null,
);
const { result } = renderHook(() => useAppInitialization(), { wrapper });
await waitFor(() => {
expect(result.current.isDarkMode).toBe(true);
expect(document.documentElement.classList.contains('dark')).toBe(true);
});
});
it('should set dark mode from system preference', async () => {
vi.spyOn(window, 'matchMedia').mockReturnValue({ matches: true } as any);
const { result } = renderHook(() => useAppInitialization(), { wrapper });
await waitFor(() => {
expect(result.current.isDarkMode).toBe(true);
expect(document.documentElement.classList.contains('dark')).toBe(true);
});
});
it('should set unit system from user profile', async () => {
mockedUseAuth.mockReturnValue({
...mockedUseAuth(),
userProfile: createMockUserProfile({ preferences: { unitSystem: 'metric' } }),
});
const { result } = renderHook(() => useAppInitialization(), { wrapper });
await waitFor(() => {
expect(result.current.unitSystem).toBe('metric');
});
});
it('should set unit system from localStorage', async () => {
vi.spyOn(window.localStorage, 'getItem').mockImplementation((key) =>
key === 'unitSystem' ? 'metric' : null,
);
const { result } = renderHook(() => useAppInitialization(), { wrapper });
await waitFor(() => {
expect(result.current.unitSystem).toBe('metric');
});
});
});

View File

@@ -0,0 +1,88 @@
// src/hooks/useAppInitialization.ts
import { useState, useEffect } from 'react';
import { useLocation, useNavigate } from 'react-router-dom';
import { useAuth } from './useAuth';
import { useModal } from './useModal';
import { logger } from '../services/logger.client';
import config from '../config';
export const useAppInitialization = () => {
const { userProfile, login } = useAuth();
const { openModal } = useModal();
const location = useLocation();
const navigate = useNavigate();
const [isDarkMode, setIsDarkMode] = useState(false);
const [unitSystem, setUnitSystem] = useState<'metric' | 'imperial'>('imperial');
// Effect to handle the token from Google/GitHub OAuth redirect
useEffect(() => {
const urlParams = new URLSearchParams(location.search);
const googleToken = urlParams.get('googleAuthToken');
if (googleToken) {
logger.info('Received Google Auth token from URL. Authenticating...');
login(googleToken).catch((err) =>
logger.error('Failed to log in with Google token', { error: err }),
);
navigate(location.pathname, { replace: true });
}
const githubToken = urlParams.get('githubAuthToken');
if (githubToken) {
logger.info('Received GitHub Auth token from URL. Authenticating...');
login(githubToken).catch((err) => {
logger.error('Failed to log in with GitHub token', { error: err });
});
navigate(location.pathname, { replace: true });
}
}, [login, location.search, navigate, location.pathname]);
// Effect to handle "What's New" modal
useEffect(() => {
const appVersion = config.app.version;
if (appVersion) {
logger.info(`Application version: ${appVersion}`);
const lastSeenVersion = localStorage.getItem('lastSeenVersion');
if (appVersion !== lastSeenVersion) {
openModal('whatsNew');
localStorage.setItem('lastSeenVersion', appVersion);
}
}
}, [openModal]);
// Effect to set initial theme based on user profile, local storage, or system preference
useEffect(() => {
let darkModeValue: boolean;
if (userProfile && userProfile.preferences?.darkMode !== undefined) {
// Preference from DB
darkModeValue = userProfile.preferences.darkMode;
} else {
// Fallback to local storage or system preference
const savedMode = localStorage.getItem('darkMode');
const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
darkModeValue = savedMode !== null ? savedMode === 'true' : prefersDark;
}
setIsDarkMode(darkModeValue);
document.documentElement.classList.toggle('dark', darkModeValue);
// Also save to local storage if coming from profile, to persist on logout
if (userProfile && userProfile.preferences?.darkMode !== undefined) {
localStorage.setItem('darkMode', String(userProfile.preferences.darkMode));
}
}, [userProfile]);
// Effect to set initial unit system based on user profile or local storage
useEffect(() => {
if (userProfile && userProfile.preferences?.unitSystem) {
setUnitSystem(userProfile.preferences.unitSystem);
localStorage.setItem('unitSystem', userProfile.preferences.unitSystem);
} else {
const savedSystem = localStorage.getItem('unitSystem') as 'metric' | 'imperial' | null;
if (savedSystem) {
setUnitSystem(savedSystem);
}
}
}, [userProfile?.preferences?.unitSystem, userProfile?.user.user_id]);
return { isDarkMode, unitSystem };
};

View File

@@ -6,24 +6,16 @@ import { useAuth } from './useAuth';
import { AuthProvider } from '../providers/AuthProvider';
import * as apiClient from '../services/apiClient';
import type { UserProfile } from '../types';
import * as tokenStorage from '../services/tokenStorage';
import { createMockUserProfile } from '../tests/utils/mockFactories';
import { logger } from '../services/logger.client';
// Mock the dependencies
vi.mock('../services/apiClient', () => ({
// Mock other functions if needed
getAuthenticatedUserProfile: vi.fn(),
}));
// Mock the logger to see auth provider logs during test execution
vi.mock('../services/logger.client', () => ({
logger: {
info: vi.fn((...args) => console.log('[AUTH-INFO]', ...args)),
warn: vi.fn((...args) => console.warn('[AUTH-WARN]', ...args)),
error: vi.fn((...args) => console.error('[AUTH-ERROR]', ...args)),
},
}));
// The apiClient is mocked globally in `src/tests/setup/globalApiMock.ts`.
vi.mock('../services/tokenStorage');
const mockedApiClient = vi.mocked(apiClient);
const mockedTokenStorage = vi.mocked(tokenStorage);
const mockProfile: UserProfile = createMockUserProfile({
full_name: 'Test User',
@@ -36,26 +28,9 @@ const mockProfile: UserProfile = createMockUserProfile({
const wrapper = ({ children }: { children: ReactNode }) => <AuthProvider>{children}</AuthProvider>;
describe('useAuth Hook and AuthProvider', () => {
// Mock localStorage
let storage: { [key: string]: string } = {};
const localStorageMock = {
getItem: vi.fn((key: string) => storage[key] || null),
setItem: vi.fn((key: string, value: string) => {
storage[key] = value;
}),
removeItem: vi.fn((key: string) => {
delete storage[key];
}),
clear: vi.fn(() => {
storage = {};
}),
};
beforeEach(() => {
// Reset mocks and storage before each test
vi.clearAllMocks();
storage = {};
Object.defineProperty(window, 'localStorage', { value: localStorageMock, configurable: true });
});
afterEach(() => {
@@ -85,7 +60,8 @@ describe('useAuth Hook and AuthProvider', () => {
});
describe('Initial Auth Check (useEffect)', () => {
it('sets state to SIGNED_OUT if no token is found', async () => {
it('sets state to SIGNED_OUT if no token is found in storage', async () => {
mockedTokenStorage.getToken.mockReturnValue(null);
const { result } = renderHook(() => useAuth(), { wrapper });
await waitFor(() => {
@@ -97,7 +73,7 @@ describe('useAuth Hook and AuthProvider', () => {
});
it('sets state to AUTHENTICATED if a valid token is found', async () => {
localStorageMock.setItem('authToken', 'valid-token');
mockedTokenStorage.getToken.mockReturnValue('valid-token');
mockedApiClient.getAuthenticatedUserProfile.mockResolvedValue({
ok: true,
status: 200,
@@ -121,7 +97,7 @@ describe('useAuth Hook and AuthProvider', () => {
});
it('sets state to SIGNED_OUT and removes token if validation fails', async () => {
localStorageMock.setItem('authToken', 'invalid-token');
mockedTokenStorage.getToken.mockReturnValue('invalid-token');
mockedApiClient.getAuthenticatedUserProfile.mockRejectedValue(new Error('Invalid token'));
const { result } = renderHook(() => useAuth(), { wrapper });
@@ -132,13 +108,40 @@ describe('useAuth Hook and AuthProvider', () => {
expect(result.current.authStatus).toBe('SIGNED_OUT');
expect(result.current.userProfile).toBeNull();
expect(localStorageMock.removeItem).toHaveBeenCalledWith('authToken');
expect(mockedTokenStorage.removeToken).toHaveBeenCalled();
});
});
it('sets state to SIGNED_OUT and removes token if profile fetch returns null after token validation', async () => {
mockedTokenStorage.getToken.mockReturnValue('valid-token');
// Mock getAuthenticatedUserProfile to return a 200 OK response with a null body
mockedApiClient.getAuthenticatedUserProfile.mockResolvedValue({
ok: true,
status: 200,
json: () => Promise.resolve(null), // Simulate API returning no profile data
} as unknown as Response);
const { result } = renderHook(() => useAuth(), { wrapper });
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
});
expect(result.current.authStatus).toBe('SIGNED_OUT');
expect(result.current.userProfile).toBeNull();
expect(mockedTokenStorage.removeToken).toHaveBeenCalled();
expect(logger.warn).toHaveBeenCalledWith(
'[AuthProvider-Effect] Token was present but validation returned no profile. Signing out.',
);
});
describe('login function', () => {
// This was the failing test
it('sets token, fetches profile, and updates state on successful login', async () => {
// --- FIX ---
// Explicitly mock that no token exists initially to prevent state leakage from other tests.
mockedTokenStorage.getToken.mockReturnValue(null);
// --- FIX ---
// The mock for `getAuthenticatedUserProfile` must resolve to a `Response`-like object,
// as this is the return type of the actual function. The `useApi` hook then
@@ -172,7 +175,7 @@ describe('useAuth Hook and AuthProvider', () => {
console.log('[TEST-DEBUG] State immediately after login `act` call:', result.current);
// 3. Assertions
expect(localStorageMock.setItem).toHaveBeenCalledWith('authToken', 'new-valid-token');
expect(mockedTokenStorage.setToken).toHaveBeenCalledWith('new-valid-token');
// 4. We must wait for the state update inside the hook to propagate
await waitFor(() => {
@@ -202,16 +205,44 @@ describe('useAuth Hook and AuthProvider', () => {
});
// Should trigger the logout flow
expect(localStorageMock.removeItem).toHaveBeenCalledWith('authToken');
expect(mockedTokenStorage.removeToken).toHaveBeenCalled();
expect(result.current.authStatus).toBe('SIGNED_OUT'); // This was a duplicate, fixed.
expect(result.current.userProfile).toBeNull();
});
it('logs out and throws an error if profile fetch returns null after login (no profileData)', async () => {
// Simulate successful token setting, but subsequent profile fetch returns null
mockedApiClient.getAuthenticatedUserProfile.mockResolvedValue({
ok: true,
status: 200,
json: () => Promise.resolve(null), // Simulate API returning no profile data
} as unknown as Response);
const { result } = renderHook(() => useAuth(), { wrapper });
await waitFor(() => expect(result.current.isLoading).toBe(false));
// Call login without profileData, forcing a profile fetch
await act(async () => {
await expect(result.current.login('new-token-no-profile-data')).rejects.toThrow(
'Login succeeded, but failed to fetch your data: Received null or undefined profile from API.',
);
});
// Should trigger the logout flow
expect(mockedTokenStorage.removeToken).toHaveBeenCalled();
expect(result.current.authStatus).toBe('SIGNED_OUT');
expect(result.current.userProfile).toBeNull();
expect(logger.error).toHaveBeenCalledWith(
expect.any(String), // The error message
expect.objectContaining({ error: 'Received null or undefined profile from API.' }),
);
});
});
describe('logout function', () => {
it('removes token and resets auth state', async () => {
// Start in a logged-in state
localStorageMock.setItem('authToken', 'valid-token');
// Start in a logged-in state by mocking the token storage
mockedTokenStorage.getToken.mockReturnValue('valid-token');
mockedApiClient.getAuthenticatedUserProfile.mockResolvedValue({
ok: true,
status: 200,
@@ -227,16 +258,15 @@ describe('useAuth Hook and AuthProvider', () => {
result.current.logout();
});
expect(localStorageMock.removeItem).toHaveBeenCalledWith('authToken');
expect(mockedTokenStorage.removeToken).toHaveBeenCalled();
expect(result.current.authStatus).toBe('SIGNED_OUT');
expect(result.current.userProfile).toBeNull();
});
});
describe('updateProfile function', () => {
it('merges new data into the existing profile state', async () => {
// Start in a logged-in state
localStorageMock.setItem('authToken', 'valid-token');
it('merges new data into the existing profile state', async () => { // Start in a logged-in state
mockedTokenStorage.getToken.mockReturnValue('valid-token');
mockedApiClient.getAuthenticatedUserProfile.mockResolvedValue({
ok: true,
status: 200,
@@ -264,6 +294,10 @@ describe('useAuth Hook and AuthProvider', () => {
});
it('should not update profile if user is not authenticated', async () => {
// --- FIX ---
// Explicitly mock that no token exists initially to prevent state leakage from other tests.
mockedTokenStorage.getToken.mockReturnValue(null);
const { result } = renderHook(() => useAuth(), { wrapper });
// Wait for initial check to complete

View File

@@ -3,12 +3,11 @@ import { renderHook } from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { useFlyerItems } from './useFlyerItems';
import { useApiOnMount } from './useApiOnMount';
import { createMockFlyer, createMockFlyerItem } from '../tests/utils/mockFactories';
import * as apiClient from '../services/apiClient';
import { createMockFlyer, createMockFlyerItem } from '../tests/utils/mockFactories';
// Mock the underlying useApiOnMount hook to isolate the useFlyerItems hook's logic.
vi.mock('./useApiOnMount');
vi.mock('../services/apiClient');
const mockedUseApiOnMount = vi.mocked(useApiOnMount);
@@ -16,8 +15,8 @@ describe('useFlyerItems Hook', () => {
const mockFlyer = createMockFlyer({
flyer_id: 123,
file_name: 'test-flyer.jpg',
image_url: '/test.jpg',
icon_url: '/icon.jpg',
image_url: 'https://example.com/test.jpg',
icon_url: 'https://example.com/icon.jpg',
checksum: 'abc',
valid_from: '2024-01-01',
valid_to: '2024-01-07',
@@ -61,7 +60,6 @@ describe('useFlyerItems Hook', () => {
expect(result.current.flyerItems).toEqual([]);
expect(result.current.isLoading).toBe(false);
expect(result.current.error).toBeNull();
// Assert: Check that useApiOnMount was called with `enabled: false`.
expect(mockedUseApiOnMount).toHaveBeenCalledWith(
expect.any(Function), // the wrapped fetcher function
@@ -171,11 +169,11 @@ describe('useFlyerItems Hook', () => {
const wrappedFetcher = mockedUseApiOnMount.mock.calls[0][0];
const mockResponse = new Response();
vi.mocked(apiClient.fetchFlyerItems).mockResolvedValue(mockResponse);
const mockedApiClient = vi.mocked(apiClient);
mockedApiClient.fetchFlyerItems.mockResolvedValue(mockResponse);
const response = await wrappedFetcher(123);
expect(apiClient.fetchFlyerItems).toHaveBeenCalledWith(123);
expect(mockedApiClient.fetchFlyerItems).toHaveBeenCalledWith(123);
expect(response).toBe(mockResponse);
});
});

View File

@@ -0,0 +1,136 @@
import { renderHook, act, waitFor } from '@testing-library/react';
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { useFlyerUploader } from './useFlyerUploader';
import * as aiApiClient from '../services/aiApiClient';
import * as checksumUtil from '../utils/checksum';
// Import the actual error class because the module is mocked
const { JobFailedError } = await vi.importActual<typeof import('../services/aiApiClient')>(
'../services/aiApiClient',
);
// Mock dependencies
vi.mock('../services/aiApiClient');
vi.mock('../utils/checksum');
vi.mock('../services/logger.client', () => ({
logger: {
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
},
}));
const mockedAiApiClient = vi.mocked(aiApiClient);
const mockedChecksumUtil = vi.mocked(checksumUtil);
// Helper to wrap the hook with QueryClientProvider, which is required by react-query
const createWrapper = () => {
const queryClient = new QueryClient({
defaultOptions: {
queries: {
retry: false, // Disable retries for tests for predictable behavior
},
},
});
return ({ children }: { children: React.ReactNode }) => (
<QueryClientProvider client={queryClient}>{children}</QueryClientProvider>
);
};
describe('useFlyerUploader Hook with React Query', () => {
beforeEach(() => {
vi.resetAllMocks();
mockedChecksumUtil.generateFileChecksum.mockResolvedValue('mock-checksum');
});
it('should handle a successful upload and polling flow', async () => {
// Arrange
const mockJobId = 'job-123';
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: mockJobId });
mockedAiApiClient.getJobStatus
.mockResolvedValueOnce({
// First poll: active
id: mockJobId,
state: 'active',
progress: { message: 'Processing...' },
returnValue: null,
failedReason: null,
} as aiApiClient.JobStatus)
.mockResolvedValueOnce({
// Second poll: completed
id: mockJobId,
state: 'completed',
progress: { message: 'Complete!' },
returnValue: { flyerId: 777 },
failedReason: null,
} as aiApiClient.JobStatus);
const { result } = renderHook(() => useFlyerUploader(), { wrapper: createWrapper() });
const mockFile = new File([''], 'flyer.pdf');
// Act
await act(async () => {
result.current.upload(mockFile);
});
// Assert initial upload state
await waitFor(() => expect(result.current.processingState).toBe('polling'));
expect(result.current.jobId).toBe(mockJobId);
// Assert polling state
await waitFor(() => expect(result.current.statusMessage).toBe('Processing...'));
// Assert completed state
await waitFor(() => expect(result.current.processingState).toBe('completed'), { timeout: 5000 });
expect(result.current.flyerId).toBe(777);
});
it('should handle an upload failure', async () => {
// Arrange
const uploadError = {
status: 409,
body: { message: 'Duplicate flyer detected.', flyerId: 99 },
};
mockedAiApiClient.uploadAndProcessFlyer.mockRejectedValue(uploadError);
const { result } = renderHook(() => useFlyerUploader(), { wrapper: createWrapper() });
const mockFile = new File([''], 'flyer.pdf');
// Act
await act(async () => {
result.current.upload(mockFile);
});
// Assert error state
await waitFor(() => expect(result.current.processingState).toBe('error'));
expect(result.current.errorMessage).toBe('Duplicate flyer detected.');
expect(result.current.duplicateFlyerId).toBe(99);
expect(result.current.jobId).toBeNull();
});
it('should handle a job failure during polling', async () => {
// Arrange
const mockJobId = 'job-456';
mockedAiApiClient.uploadAndProcessFlyer.mockResolvedValue({ jobId: mockJobId });
// Mock getJobStatus to throw a JobFailedError
mockedAiApiClient.getJobStatus.mockRejectedValue(
new JobFailedError('AI validation failed.', 'AI_VALIDATION_FAILED'),
);
const { result } = renderHook(() => useFlyerUploader(), { wrapper: createWrapper() });
const mockFile = new File([''], 'flyer.pdf');
// Act
await act(async () => {
result.current.upload(mockFile);
});
// Assert error state after polling fails
await waitFor(() => expect(result.current.processingState).toBe('error'));
expect(result.current.errorMessage).toBe('Polling failed: AI validation failed.');
expect(result.current.flyerId).toBeNull();
});
});

Some files were not shown because too many files have changed in this diff Show More