Compare commits
370 Commits
(Commit table: 370 rows. Only the SHA1 column survived extraction; the Author, Date, and message cells were empty in this capture.)
.devcontainer/devcontainer.json (new file, 18 lines)

{
  "name": "Flyer Crawler Dev (Ubuntu 22.04)",
  "dockerComposeFile": ["../compose.dev.yml"],
  "service": "app",
  "workspaceFolder": "/app",
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint", "esbenp.prettier-vscode"]
    }
  },
  "remoteUser": "root",
  // Automatically install dependencies when the container is created.
  // This runs inside the container, populating the isolated node_modules volume.
  "postCreateCommand": "npm install",
  "postAttachCommand": "npm run dev:container",
  // Try to start podman machine, but exit with success (0) even if it's already running
  "initializeCommand": "powershell -Command \"podman machine start; exit 0\""
}
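The definition above drives the editor's container setup; the same environment can also be brought up from a terminal with the Dev Containers CLI. A minimal sketch (the CLI is assumed tooling, not something this repo references):

```bash
# Assumes Node.js is installed; the Dev Containers CLI is an assumption for this repo.
npm install -g @devcontainers/cli
devcontainer up --workspace-folder .        # builds/starts the "app" service from compose.dev.yml
devcontainer exec --workspace-folder . npm run dev:container
```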
Production deploy workflow (file name not captured in this view):

@@ -47,6 +47,19 @@ jobs:
       - name: Install Dependencies
         run: npm ci
+
+      - name: Bump Minor Version and Push
+        run: |
+          # Configure git for the commit.
+          git config --global user.name 'Gitea Actions'
+          git config --global user.email 'actions@gitea.projectium.com'
+
+          # Bump the minor version number. This creates a new commit and a new tag.
+          # The commit message includes [skip ci] to prevent this push from triggering another workflow run.
+          npm version minor -m "ci: Bump version to %s for production release [skip ci]"
+
+          # Push the new commit and the new tag back to the main branch.
+          git push --follow-tags
+
       - name: Check for Production Database Schema Changes
         env:
           DB_HOST: ${{ secrets.DB_HOST }}
@@ -61,9 +74,10 @@ jobs:
           echo "--- Checking for production schema changes ---"
           CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
           echo "Current Git Schema Hash: $CURRENT_HASH"
-          DEPLOYED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'production';" -t -A || echo "none")
+          # The psql command will now fail the step if the query errors (e.g., column missing), preventing deployment on a bad schema.
+          DEPLOYED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'production';" -t -A)
           echo "Deployed DB Schema Hash: $DEPLOYED_HASH"
-          if [ "$DEPLOYED_HASH" = "none" ] || [ -z "$DEPLOYED_HASH" ]; then
+          if [ -z "$DEPLOYED_HASH" ]; then
             echo "WARNING: No schema hash found in the production database. This is expected for a first-time deployment."
           elif [ "$CURRENT_HASH" != "$DEPLOYED_HASH" ]; then
             echo "ERROR: Database schema mismatch detected! A manual database migration is required."
@@ -79,8 +93,9 @@ jobs:
             exit 1
           fi
           GITEA_SERVER_URL="https://gitea.projectium.com"
-          COMMIT_MESSAGE=$(git log -1 --pretty=%s)
-          VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD)" \
+          COMMIT_MESSAGE=$(git log -1 --grep="\[skip ci\]" --invert-grep --pretty=%s)
+          PACKAGE_VERSION=$(node -p "require('./package.json').version")
+          VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD):$PACKAGE_VERSION" \
           VITE_APP_COMMIT_URL="$GITEA_SERVER_URL/${{ gitea.repository }}/commit/${{ gitea.sha }}" \
           VITE_APP_COMMIT_MESSAGE="$COMMIT_MESSAGE" \
           VITE_API_BASE_URL=/api VITE_API_KEY=${{ secrets.VITE_GOOGLE_GENAI_API_KEY }} npm run build
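The change above appends the npm package version to the build stamp, so the deployed bundle now identifies timestamp, commit, and semver in one string. A sketch of the composed value, runnable in any checkout (the example output is illustrative):

```bash
PACKAGE_VERSION=$(node -p "require('./package.json').version")
VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD):$PACKAGE_VERSION"
echo "$VITE_APP_VERSION"   # e.g. 20250101-1312:3c8316f:2.4.0
```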
@@ -123,6 +138,10 @@ jobs:
           cd /var/www/flyer-crawler.projectium.com
           npm install --omit=dev

+          # --- Cleanup Errored Processes ---
+          echo "Cleaning up errored or stopped PM2 processes..."
+          node -e "const exec = require('child_process').execSync; try { const list = JSON.parse(exec('pm2 jlist').toString()); list.forEach(p => { if (p.pm2_env.status === 'errored' || p.pm2_env.status === 'stopped') { console.log('Deleting ' + p.pm2_env.status + ' process: ' + p.name + ' (' + p.pm2_env.pm_id + ')'); try { exec('pm2 delete ' + p.pm2_env.pm_id); } catch(e) { console.error('Failed to delete ' + p.pm2_env.pm_id); } } }); } catch (e) { console.error('Error cleaning up processes:', e); }"
+
           # --- Version Check Logic ---
           # Get the version from the newly deployed package.json
           NEW_VERSION=$(node -p "require('./package.json').version")
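The inline Node script deletes every PM2 process whose status is errored or stopped. An equivalent, easier-to-read sketch using jq instead (an assumption: jq is not guaranteed to be installed on the runner):

```bash
pm2 jlist \
  | jq -r '.[] | select(.pm2_env.status == "errored" or .pm2_env.status == "stopped") | .pm2_env.pm_id' \
  | while read -r id; do
      echo "Deleting process $id"
      pm2 delete "$id" || echo "Failed to delete $id"
    done
```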
@@ -139,7 +158,7 @@ jobs:
             else
               echo "Version mismatch (Running: $RUNNING_VERSION -> Deployed: $NEW_VERSION) or app not running. Reloading PM2..."
             fi
-            pm2 startOrReload ecosystem.config.cjs --env production && pm2 save
+            pm2 startOrReload ecosystem.config.cjs --env production --update-env && pm2 save
             echo "Production backend server reloaded successfully."
           else
             echo "Version $NEW_VERSION is already running. Skipping PM2 reload."
@@ -148,7 +167,12 @@ jobs:
           echo "Updating schema hash in production database..."
           CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
           PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c \
-            "INSERT INTO public.schema_info (environment, schema_hash, deployed_at) VALUES ('production', '$CURRENT_HASH', NOW())
+            "CREATE TABLE IF NOT EXISTS public.schema_info (
+               environment VARCHAR(50) PRIMARY KEY,
+               schema_hash VARCHAR(64) NOT NULL,
+               deployed_at TIMESTAMP DEFAULT NOW()
+             );
+             INSERT INTO public.schema_info (environment, schema_hash, deployed_at) VALUES ('production', '$CURRENT_HASH', NOW())
             ON CONFLICT (environment) DO UPDATE SET schema_hash = EXCLUDED.schema_hash, deployed_at = NOW();"

           UPDATED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'production';" -t -A)
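With the CREATE TABLE IF NOT EXISTS guard, a fresh database no longer fails the upsert. To inspect the bookkeeping table by hand, a sketch using the same connection variables:

```bash
PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" \
  -c "SELECT environment, schema_hash, deployed_at FROM public.schema_info ORDER BY deployed_at DESC;"
```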
@@ -161,7 +185,17 @@ jobs:
       - name: Show PM2 Environment for Production
         run: |
           echo "--- Displaying recent PM2 logs for flyer-crawler-api ---"
-          sleep 5
-          pm2 describe flyer-crawler-api || echo "Could not find production pm2 process."
-          pm2 logs flyer-crawler-api --lines 20 --nostream || echo "Could not find production pm2 process."
-          pm2 env flyer-crawler-api || echo "Could not find production pm2 process."
+          sleep 5 # Wait a few seconds for the app to start and log its output.
+          # Resolve the PM2 ID dynamically to ensure we target the correct process
+          PM2_ID=$(pm2 jlist | node -e "try { const list = JSON.parse(require('fs').readFileSync(0, 'utf-8')); const app = list.find(p => p.name === 'flyer-crawler-api'); console.log(app ? app.pm2_env.pm_id : ''); } catch(e) { console.log(''); }")
+
+          if [ -n "$PM2_ID" ]; then
+            echo "Found process ID: $PM2_ID"
+            pm2 describe "$PM2_ID" || echo "Failed to describe process $PM2_ID"
+            pm2 logs "$PM2_ID" --lines 20 --nostream || echo "Failed to get logs for $PM2_ID"
+            pm2 env "$PM2_ID" || echo "Failed to get env for $PM2_ID"
+          else
+            echo "Could not find process 'flyer-crawler-api' in pm2 list."
+            pm2 list # Fallback to listing everything to help debug
+          fi
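The ID lookup above pipes `pm2 jlist` through a Node one-liner. The same resolution as a jq sketch (again assuming jq is available):

```bash
PM2_ID=$(pm2 jlist | jq -r '[.[] | select(.name == "flyer-crawler-api") | .pm2_env.pm_id][0] // empty')
echo "${PM2_ID:-not found}"
```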
Test deploy workflow (file name not captured in this view):

@@ -90,10 +90,11 @@ jobs:
         # integration test suite can launch its own, fresh server instance.
         # '|| true' ensures the workflow doesn't fail if the process isn't running.
         run: |
-          pm2 stop flyer-crawler-api-test || true
-          pm2 stop flyer-crawler-worker-test || true
-          pm2 delete flyer-crawler-api-test || true
-          pm2 delete flyer-crawler-worker-test || true
+          echo "--- Stopping and deleting all test processes ---"
+          # Use a script to parse pm2's JSON output and delete any process whose name ends with '-test'.
+          # This is safer than 'pm2 delete all' and more robust than naming each process individually.
+          # It prevents the accumulation of duplicate processes from previous test runs.
+          node -e "const exec = require('child_process').execSync; try { const list = JSON.parse(exec('pm2 jlist').toString()); list.forEach(p => { if (p.name && p.name.endsWith('-test')) { console.log('Deleting test process: ' + p.name + ' (' + p.pm2_env.pm_id + ')'); try { exec('pm2 delete ' + p.pm2_env.pm_id); } catch(e) { console.error('Failed to delete ' + p.pm2_env.pm_id, e.message); } } }); console.log('✅ Test process cleanup complete.'); } catch (e) { if (e.stdout.toString().includes('No process found')) { console.log('No PM2 processes running, cleanup not needed.'); } else { console.error('Error cleaning up test processes:', e.message); } }" || true

       - name: Run All Tests and Generate Merged Coverage Report
         # This single step runs both unit and integration tests, then merges their
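For comparison, the '-test' suffix cleanup can also be expressed with jq and xargs (a sketch; jq availability on the runner is an assumption):

```bash
pm2 jlist \
  | jq -r '.[] | select(.name != null and (.name | endswith("-test"))) | .pm2_env.pm_id' \
  | xargs -r -n1 pm2 delete || true
```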
@@ -112,16 +113,21 @@ jobs:
           REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_TEST }}

           # --- Integration test specific variables ---
-          FRONTEND_URL: 'http://localhost:3000'
+          FRONTEND_URL: 'https://example.com'
           VITE_API_BASE_URL: 'http://localhost:3001/api'
           GEMINI_API_KEY: ${{ secrets.VITE_GOOGLE_GENAI_API_KEY }}

           # --- JWT Secret for Passport authentication in tests ---
           JWT_SECRET: ${{ secrets.JWT_SECRET }}

+          # --- V8 Coverage for Server Process ---
+          # This variable tells the Node.js process (our server, started by globalSetup)
+          # where to output its raw V8 coverage data.
+          NODE_V8_COVERAGE: '.coverage/tmp/integration-server'
+
           # --- Increase Node.js memory limit to prevent heap out of memory errors ---
           # This is crucial for memory-intensive tasks like running tests and coverage.
-          NODE_OPTIONS: '--max-old-space-size=8192'
+          NODE_OPTIONS: '--max-old-space-size=8192 --trace-warnings --unhandled-rejections=strict'

         run: |
           # Fail-fast check to ensure secrets are configured in Gitea for testing.
@@ -136,10 +142,49 @@ jobs:
           # Run unit and integration tests as separate steps.
           # The `|| true` ensures the workflow continues even if tests fail, allowing coverage to run.
           echo "--- Running Unit Tests ---"
-          npm run test:unit -- --coverage --reporter=verbose --includeTaskLocation --testTimeout=10000 --silent=passed-only || true
+          # npm run test:unit -- --coverage --reporter=verbose --includeTaskLocation --testTimeout=10000 --silent=passed-only || true
+          npm run test:unit -- --coverage \
+            --coverage.exclude='**/*.test.ts' \
+            --coverage.exclude='**/tests/**' \
+            --coverage.exclude='**/mocks/**' \
+            --coverage.exclude='src/components/icons/**' \
+            --coverage.exclude='src/db/**' \
+            --coverage.exclude='src/lib/**' \
+            --coverage.exclude='src/types/**' \
+            --coverage.exclude='**/index.tsx' \
+            --coverage.exclude='**/vite-env.d.ts' \
+            --coverage.exclude='**/vitest.setup.ts' \
+            --reporter=verbose --includeTaskLocation --testTimeout=10000 --silent=passed-only --no-file-parallelism || true

           echo "--- Running Integration Tests ---"
-          npm run test:integration -- --coverage --reporter=verbose --includeTaskLocation --testTimeout=10000 --silent=passed-only || true
+          npm run test:integration -- --coverage \
+            --coverage.exclude='**/*.test.ts' \
+            --coverage.exclude='**/tests/**' \
+            --coverage.exclude='**/mocks/**' \
+            --coverage.exclude='src/components/icons/**' \
+            --coverage.exclude='src/db/**' \
+            --coverage.exclude='src/lib/**' \
+            --coverage.exclude='src/types/**' \
+            --coverage.exclude='**/index.tsx' \
+            --coverage.exclude='**/vite-env.d.ts' \
+            --coverage.exclude='**/vitest.setup.ts' \
+            --reporter=verbose --includeTaskLocation --testTimeout=10000 --silent=passed-only || true
+
+          echo "--- Running E2E Tests ---"
+          # Run E2E tests using the dedicated E2E config which inherits from integration config.
+          # We still pass --coverage to enable it, but directory and timeout are now in the config.
+          npx vitest run --config vitest.config.e2e.ts --coverage \
+            --coverage.exclude='**/*.test.ts' \
+            --coverage.exclude='**/tests/**' \
+            --coverage.exclude='**/mocks/**' \
+            --coverage.exclude='src/components/icons/**' \
+            --coverage.exclude='src/db/**' \
+            --coverage.exclude='src/lib/**' \
+            --coverage.exclude='src/types/**' \
+            --coverage.exclude='**/index.tsx' \
+            --coverage.exclude='**/vite-env.d.ts' \
+            --coverage.exclude='**/vitest.setup.ts' \
+            --reporter=verbose --no-file-parallelism || true
+
           # Re-enable secret masking for subsequent steps.
           echo "::secret-masking::"
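The ten --coverage.exclude flags are repeated verbatim for the unit, integration, and E2E invocations. One way to keep them in sync is a shell array, sketched below (a refactor suggestion, not what the workflow currently does):

```bash
COVERAGE_EXCLUDES=(
  --coverage.exclude='**/*.test.ts'
  --coverage.exclude='**/tests/**'
  --coverage.exclude='**/mocks/**'
  --coverage.exclude='src/components/icons/**'
  --coverage.exclude='src/db/**'
  --coverage.exclude='src/lib/**'
  --coverage.exclude='src/types/**'
  --coverage.exclude='**/index.tsx'
  --coverage.exclude='**/vite-env.d.ts'
  --coverage.exclude='**/vitest.setup.ts'
)
npm run test:unit -- --coverage "${COVERAGE_EXCLUDES[@]}" --reporter=verbose || true
npm run test:integration -- --coverage "${COVERAGE_EXCLUDES[@]}" --reporter=verbose || true
```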
@@ -155,6 +200,7 @@ jobs:
           echo "Checking for source coverage files..."
           ls -l .coverage/unit/coverage-final.json
           ls -l .coverage/integration/coverage-final.json
+          ls -l .coverage/e2e/coverage-final.json || echo "E2E coverage file not found"

           # --- V8 Coverage Processing for Backend Server ---
           # The integration tests start the server, which generates raw V8 coverage data.
@@ -167,7 +213,7 @@ jobs:
           # Run c8: read raw files from the temp dir, and output an Istanbul JSON report.
           # We only generate the 'json' report here because it's all nyc needs for merging.
           echo "Server coverage report about to be generated..."
-          npx c8 report --reporter=json --temp-directory .coverage/tmp/integration-server --reports-dir .coverage/integration-server
+          npx c8 report --exclude='**/*.test.ts' --exclude='**/tests/**' --exclude='**/mocks/**' --reporter=json --temp-directory .coverage/tmp/integration-server --reports-dir .coverage/integration-server
           echo "Server coverage report generated. Verifying existence:"
           ls -l .coverage/integration-server/coverage-final.json

@@ -186,6 +232,7 @@ jobs:
           # We give them unique names to be safe, though it's not strictly necessary.
           cp .coverage/unit/coverage-final.json "$NYC_SOURCE_DIR/unit-coverage.json"
           cp .coverage/integration/coverage-final.json "$NYC_SOURCE_DIR/integration-coverage.json"
+          cp .coverage/e2e/coverage-final.json "$NYC_SOURCE_DIR/e2e-coverage.json" || echo "E2E coverage file not found, skipping."
           # This file might not exist if integration tests fail early, so we add `|| true`
           cp .coverage/integration-server/coverage-final.json "$NYC_SOURCE_DIR/integration-server-coverage.json" || echo "Server coverage file not found, skipping."
           echo "Copied coverage files to source directory. Contents:"
@@ -205,7 +252,13 @@ jobs:
             --reporter=text \
             --reporter=html \
             --report-dir .coverage/ \
-            --temp-dir "$NYC_SOURCE_DIR"
+            --temp-dir "$NYC_SOURCE_DIR" \
+            --exclude "**/*.test.ts" \
+            --exclude "**/tests/**" \
+            --exclude "**/mocks/**" \
+            --exclude "**/index.tsx" \
+            --exclude "**/vite-env.d.ts" \
+            --exclude "**/vitest.setup.ts"

           # Re-enable secret masking for subsequent steps.
           echo "::secret-masking::"
@@ -218,16 +271,6 @@ jobs:
         if: always() # This step runs even if the previous test or coverage steps failed.
         run: echo "Skipping test artifact cleanup on runner; this is handled on the server."

-      - name: Deploy Coverage Report to Public URL
-        if: always()
-        run: |
-          TARGET_DIR="/var/www/flyer-crawler-test.projectium.com/coverage"
-          echo "Deploying HTML coverage report to $TARGET_DIR..."
-          mkdir -p "$TARGET_DIR"
-          rm -rf "$TARGET_DIR"/*
-          cp -r .coverage/* "$TARGET_DIR/"
-          echo "✅ Coverage report deployed to https://flyer-crawler-test.projectium.com/coverage"
-
       - name: Archive Code Coverage Report
         # This action saves the generated HTML coverage report as a downloadable artifact.
         uses: actions/upload-artifact@v3
@@ -256,18 +299,19 @@ jobs:
           # We normalize line endings to ensure the hash is consistent across different OS environments.
           CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
           echo "Current Git Schema Hash: $CURRENT_HASH"

           # Query the production database to get the hash of the deployed schema.
           # The `psql` command requires PGPASSWORD to be set.
           # `\t` sets tuples-only mode and `\A` unaligns output to get just the raw value.
-          # The `|| echo "none"` ensures the command doesn't fail if the table or row doesn't exist yet.
-          DEPLOYED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'test';" -t -A || echo "none")
+          # The psql command will now fail the step if the query errors (e.g., column missing), preventing deployment on a bad schema.
+          DEPLOYED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'test';" -t -A)
           echo "Deployed DB Schema Hash: $DEPLOYED_HASH"

           # Check if the hash is "none" (command failed) OR if it's an empty string (table exists but is empty).
-          if [ "$DEPLOYED_HASH" = "none" ] || [ -z "$DEPLOYED_HASH" ]; then
+          if [ -z "$DEPLOYED_HASH" ]; then
             echo "WARNING: No schema hash found in the test database."
             echo "This is expected for a first-time deployment. The hash will be set after a successful deployment."
+            echo "--- Debug: Dumping schema_info table ---"
+            PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=0 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -P pager=off -c "SELECT * FROM public.schema_info;" || true
+            echo "----------------------------------------"
             # We allow the deployment to continue, but a manual schema update is required.
             # You could choose to fail here by adding `exit 1`.
           elif [ "$CURRENT_HASH" != "$DEPLOYED_HASH" ]; then
@@ -291,8 +335,10 @@ jobs:
           fi

           GITEA_SERVER_URL="https://gitea.projectium.com" # Your Gitea instance URL
-          COMMIT_MESSAGE=$(git log -1 --pretty=%s)
-          VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD)" \
+          # Sanitize commit message to prevent shell injection or build breaks (removes quotes, backticks, backslashes, $)
+          COMMIT_MESSAGE=$(git log -1 --grep="\[skip ci\]" --invert-grep --pretty=%s | tr -d '"`\\$')
+          PACKAGE_VERSION=$(node -p "require('./package.json').version")
+          VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD):$PACKAGE_VERSION" \
           VITE_APP_COMMIT_URL="$GITEA_SERVER_URL/${{ gitea.repository }}/commit/${{ gitea.sha }}" \
           VITE_APP_COMMIT_MESSAGE="$COMMIT_MESSAGE" \
           VITE_API_BASE_URL="https://flyer-crawler-test.projectium.com/api" VITE_API_KEY=${{ secrets.VITE_GOOGLE_GENAI_API_KEY_TEST }} npm run build
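The tr -d filter deletes double quotes, backticks, backslashes, and dollar signs from the commit subject before it is exported into the build environment. A quick demonstration of the effect:

```bash
printf 'fix: handle `rm -rf` in "$HOME"\n' | tr -d '"`\\$'
# prints: fix: handle rm -rf in HOME
```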
@@ -315,6 +361,17 @@ jobs:
           rsync -avz dist/ "$APP_PATH"
           echo "Application deployment complete."

+      - name: Deploy Coverage Report to Public URL
+        if: always()
+        run: |
+          TARGET_DIR="/var/www/flyer-crawler-test.projectium.com/coverage"
+          echo "Deploying HTML coverage report to $TARGET_DIR..."
+          mkdir -p "$TARGET_DIR"
+          rm -rf "$TARGET_DIR"/*
+          # The merged nyc report is generated in the .coverage directory. We copy its contents.
+          cp -r .coverage/* "$TARGET_DIR/"
+          echo "✅ Coverage report deployed to https://flyer-crawler-test.projectium.com/coverage"
+
       - name: Install Backend Dependencies and Restart Test Server
         env:
           # --- Test Secrets Injection ---
@@ -332,8 +389,8 @@ jobs:
           REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_TEST }}

           # Application Secrets
-          FRONTEND_URL: 'https://flyer-crawler-test.projectium.com'
-          JWT_SECRET: ${{ secrets.JWT_SECRET_TEST }}
+          FRONTEND_URL: 'https://example.com'
+          JWT_SECRET: ${{ secrets.JWT_SECRET }}
           GEMINI_API_KEY: ${{ secrets.VITE_GOOGLE_GENAI_API_KEY_TEST }}
           GOOGLE_MAPS_API_KEY: ${{ secrets.GOOGLE_MAPS_API_KEY }}

@@ -347,18 +404,30 @@ jobs:

         run: |
           # Fail-fast check to ensure secrets are configured in Gitea.
-          if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
-            echo "ERROR: One or more test database secrets (DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE_TEST) are not set in Gitea repository settings."
+          MISSING_SECRETS=""
+          if [ -z "$DB_HOST" ]; then MISSING_SECRETS="${MISSING_SECRETS} DB_HOST"; fi
+          if [ -z "$DB_USER" ]; then MISSING_SECRETS="${MISSING_SECRETS} DB_USER"; fi
+          if [ -z "$DB_PASSWORD" ]; then MISSING_SECRETS="${MISSING_SECRETS} DB_PASSWORD"; fi
+          if [ -z "$DB_NAME" ]; then MISSING_SECRETS="${MISSING_SECRETS} DB_NAME"; fi
+          if [ -z "$JWT_SECRET" ]; then MISSING_SECRETS="${MISSING_SECRETS} JWT_SECRET"; fi
+
+          if [ ! -z "$MISSING_SECRETS" ]; then
+            echo "ERROR: The following required secrets are missing in Gitea:${MISSING_SECRETS}"
             exit 1
           fi

           echo "Installing production dependencies and restarting test server..."
           cd /var/www/flyer-crawler-test.projectium.com
-          npm install --omit=dev # Install only production dependencies
+          npm install --omit=dev
+
+          # --- Cleanup Errored Processes ---
+          echo "Cleaning up errored or stopped PM2 processes..."
+          node -e "const exec = require('child_process').execSync; try { const list = JSON.parse(exec('pm2 jlist').toString()); list.forEach(p => { if (p.pm2_env.status === 'errored' || p.pm2_env.status === 'stopped') { console.log('Deleting ' + p.pm2_env.status + ' process: ' + p.name + ' (' + p.pm2_env.pm_id + ')'); try { exec('pm2 delete ' + p.pm2_env.pm_id); } catch(e) { console.error('Failed to delete ' + p.pm2_env.pm_id); } } }); } catch (e) { console.error('Error cleaning up processes:', e); }"
+
           # Use `startOrReload` with the ecosystem file. This is the standard, idempotent way to deploy.
           # It will START the process if it's not running, or RELOAD it if it is.
           # We also add `&& pm2 save` to persist the process list across server reboots.
-          pm2 startOrReload ecosystem.config.cjs --env test && pm2 save
+          pm2 startOrReload ecosystem.config.cjs --env test --update-env && pm2 save
           echo "Test backend server reloaded successfully."

           # After a successful deployment, update the schema hash in the database.
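The per-secret if checks build a readable list of everything missing before failing. The same logic with bash indirect expansion, as a compact alternative sketch:

```bash
MISSING_SECRETS=""
for var in DB_HOST DB_USER DB_PASSWORD DB_NAME JWT_SECRET; do
  # ${!var} expands to the value of the variable whose name is stored in $var.
  [ -z "${!var}" ] && MISSING_SECRETS="${MISSING_SECRETS} ${var}"
done
if [ -n "$MISSING_SECRETS" ]; then
  echo "ERROR: The following required secrets are missing in Gitea:${MISSING_SECRETS}"
  exit 1
fi
```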
@@ -366,7 +435,12 @@ jobs:
           echo "Updating schema hash in test database..."
           CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
           PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c \
-            "INSERT INTO public.schema_info (environment, schema_hash, deployed_at) VALUES ('test', '$CURRENT_HASH', NOW())
+            "CREATE TABLE IF NOT EXISTS public.schema_info (
+               environment VARCHAR(50) PRIMARY KEY,
+               schema_hash VARCHAR(64) NOT NULL,
+               deployed_at TIMESTAMP DEFAULT NOW()
+             );
+             INSERT INTO public.schema_info (environment, schema_hash, deployed_at) VALUES ('test', '$CURRENT_HASH', NOW())
             ON CONFLICT (environment) DO UPDATE SET schema_hash = EXCLUDED.schema_hash, deployed_at = NOW();"

           # Verify the hash was updated
@@ -388,7 +462,17 @@ jobs:
         run: |
           echo "--- Displaying recent PM2 logs for flyer-crawler-api-test ---"
           # After a reload, the server restarts. We'll show the last 20 lines of the log to see the startup messages.
-          sleep 5 # Wait a few seconds for the app to start and log its output.
-          pm2 describe flyer-crawler-api-test || echo "Could not find test pm2 process."
-          pm2 logs flyer-crawler-api-test --lines 20 --nostream || echo "Could not find test pm2 process."
-          pm2 env flyer-crawler-api-test || echo "Could not find test pm2 process."
+          sleep 5
+          # Resolve the PM2 ID dynamically to ensure we target the correct process
+          PM2_ID=$(pm2 jlist | node -e "try { const list = JSON.parse(require('fs').readFileSync(0, 'utf-8')); const app = list.find(p => p.name === 'flyer-crawler-api-test'); console.log(app ? app.pm2_env.pm_id : ''); } catch(e) { console.log(''); }")
+
+          if [ -n "$PM2_ID" ]; then
+            echo "Found process ID: $PM2_ID"
+            pm2 describe "$PM2_ID" || echo "Failed to describe process $PM2_ID"
+            pm2 logs "$PM2_ID" --lines 20 --nostream || echo "Failed to get logs for $PM2_ID"
+            pm2 env "$PM2_ID" || echo "Failed to get env for $PM2_ID"
+          else
+            echo "Could not find process 'flyer-crawler-api-test' in pm2 list."
+            pm2 list # Fallback to listing everything to help debug
+          fi
Database backup workflow (file name not captured; no text change is visible in this hunk, likely a whitespace or line-ending fix):

@@ -60,4 +60,4 @@ jobs:
         uses: actions/upload-artifact@v3
         with:
           name: database-backup
           path: ${{ env.backup_filename }}
Test asset cleanup workflow (file name not captured; no text change is visible in this hunk):

@@ -144,4 +144,4 @@ jobs:
           find "$APP_PATH/flyer-images" -type f -name '*-test-flyer-image.*' -delete
           find "$APP_PATH/flyer-images/icons" -type f -name '*-test-flyer-image.*' -delete
           find "$APP_PATH/flyer-images/archive" -mindepth 1 -maxdepth 1 -type f -delete || echo "Archive directory not found, skipping."
           echo "✅ Flyer asset directories cleared."
Flyer asset cleanup workflow (file name not captured; no text change is visible in this hunk):

@@ -130,4 +130,4 @@ jobs:
           find "$APP_PATH/flyer-images" -mindepth 1 -type f -delete
           find "$APP_PATH/flyer-images/icons" -mindepth 1 -type f -delete
           find "$APP_PATH/flyer-images/archive" -mindepth 1 -type f -delete || echo "Archive directory not found, skipping."
           echo "✅ Test flyer asset directories cleared."
Backup workflow (file name not captured in this view):

@@ -25,7 +25,7 @@ jobs:
       DB_USER: ${{ secrets.DB_USER }}
       DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
       DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
-      BACKUP_DIR: "/var/www/backups" # Define a dedicated directory for backups
+      BACKUP_DIR: '/var/www/backups' # Define a dedicated directory for backups

     steps:
       - name: Validate Secrets and Inputs

@@ -92,4 +92,4 @@ jobs:
           echo "Restarting application server..."
           cd /var/www/flyer-crawler.projectium.com
           pm2 startOrReload ecosystem.config.cjs --env production && pm2 save
           echo "✅ Application server restarted."
.gitea/workflows/manual-deploy-major.yml (new file, 185 lines)

# .gitea/workflows/manual-deploy-major.yml
#
# This workflow provides a MANUAL trigger to perform a MAJOR version bump
# and deploy the application to the PRODUCTION environment.
name: Manual - Deploy Major Version to Production

on:
  workflow_dispatch:
    inputs:
      confirmation:
        description: 'Type "deploy-major-to-prod" to confirm you want to deploy a new major version.'
        required: true
        default: 'do-not-run'
      force_reload:
        description: 'Force PM2 reload even if version matches (true/false).'
        required: false
        type: boolean
        default: false

jobs:
  deploy-production-major:
    runs-on: projectium.com

    steps:
      - name: Verify Confirmation Phrase
        run: |
          if [ "${{ gitea.event.inputs.confirmation }}" != "deploy-major-to-prod" ]; then
            echo "ERROR: Confirmation phrase did not match. Aborting deployment."
            exit 1
          fi
          echo "✅ Confirmation accepted. Proceeding with major version production deployment."

      - name: Checkout Code from 'main' branch
        uses: actions/checkout@v3
        with:
          ref: 'main' # Explicitly check out the main branch for production deployment
          fetch-depth: 0

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '20'
          cache: 'npm'
          cache-dependency-path: '**/package-lock.json'

      - name: Install Dependencies
        run: npm ci

      - name: Bump Major Version and Push
        run: |
          # Configure git for the commit.
          git config --global user.name 'Gitea Actions'
          git config --global user.email 'actions@gitea.projectium.com'

          # Bump the major version number. This creates a new commit and a new tag.
          # The commit message includes [skip ci] to prevent this push from triggering another workflow run.
          npm version major -m "ci: Bump version to %s for major release [skip ci]"

          # Push the new commit and the new tag back to the main branch.
          git push --follow-tags

      - name: Check for Production Database Schema Changes
        env:
          DB_HOST: ${{ secrets.DB_HOST }}
          DB_USER: ${{ secrets.DB_USER }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
        run: |
          if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
            echo "ERROR: One or more production database secrets (DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE_PROD) are not set."
            exit 1
          fi
          echo "--- Checking for production schema changes ---"
          CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
          echo "Current Git Schema Hash: $CURRENT_HASH"
          # The psql command will now fail the step if the query errors (e.g., column missing), preventing deployment on a bad schema.
          DEPLOYED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'production';" -t -A)
          echo "Deployed DB Schema Hash: $DEPLOYED_HASH"
          if [ -z "$DEPLOYED_HASH" ]; then
            echo "WARNING: No schema hash found in the production database. This is expected for a first-time deployment."
          elif [ "$CURRENT_HASH" != "$DEPLOYED_HASH" ]; then
            echo "ERROR: Database schema mismatch detected! A manual database migration is required."
            exit 1
          else
            echo "✅ Schema is up to date. No changes detected."
          fi

      - name: Build React Application for Production
        run: |
          if [ -z "${{ secrets.VITE_GOOGLE_GENAI_API_KEY }}" ]; then
            echo "ERROR: The VITE_GOOGLE_GENAI_API_KEY secret is not set."
            exit 1
          fi
          GITEA_SERVER_URL="https://gitea.projectium.com"
          COMMIT_MESSAGE=$(git log -1 --grep="\[skip ci\]" --invert-grep --pretty=%s)
          PACKAGE_VERSION=$(node -p "require('./package.json').version")
          VITE_APP_VERSION="$(date +'%Y%m%d-%H%M'):$(git rev-parse --short HEAD):$PACKAGE_VERSION" \
          VITE_APP_COMMIT_URL="$GITEA_SERVER_URL/${{ gitea.repository }}/commit/${{ gitea.sha }}" \
          VITE_APP_COMMIT_MESSAGE="$COMMIT_MESSAGE" \
          VITE_API_BASE_URL=/api VITE_API_KEY=${{ secrets.VITE_GOOGLE_GENAI_API_KEY }} npm run build

      - name: Deploy Application to Production Server
        run: |
          echo "Deploying application files to /var/www/flyer-crawler.projectium.com..."
          APP_PATH="/var/www/flyer-crawler.projectium.com"
          mkdir -p "$APP_PATH"
          mkdir -p "$APP_PATH/flyer-images/icons" "$APP_PATH/flyer-images/archive"
          rsync -avz --delete --exclude 'node_modules' --exclude '.git' --exclude 'dist' --exclude 'flyer-images' ./ "$APP_PATH/"
          rsync -avz dist/ "$APP_PATH"
          echo "Application deployment complete."

      - name: Install Backend Dependencies and Restart Production Server
        env:
          # --- Production Secrets Injection ---
          DB_HOST: ${{ secrets.DB_HOST }}
          DB_USER: ${{ secrets.DB_USER }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          DB_NAME: ${{ secrets.DB_DATABASE_PROD }}
          REDIS_URL: 'redis://localhost:6379'
          REDIS_PASSWORD: ${{ secrets.REDIS_PASSWORD_PROD }}
          FRONTEND_URL: 'https://flyer-crawler.projectium.com'
          JWT_SECRET: ${{ secrets.JWT_SECRET }}
          GEMINI_API_KEY: ${{ secrets.VITE_GOOGLE_GENAI_API_KEY }}
          GOOGLE_MAPS_API_KEY: ${{ secrets.GOOGLE_MAPS_API_KEY }}
          SMTP_HOST: 'localhost'
          SMTP_PORT: '1025'
          SMTP_SECURE: 'false'
          SMTP_USER: ''
          SMTP_PASS: ''
          SMTP_FROM_EMAIL: 'noreply@flyer-crawler.projectium.com'
        run: |
          if [ -z "$DB_HOST" ] || [ -z "$DB_USER" ] || [ -z "$DB_PASSWORD" ] || [ -z "$DB_NAME" ]; then
            echo "ERROR: One or more production database secrets (DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE_PROD) are not set."
            exit 1
          fi
          echo "Installing production dependencies and restarting server..."
          cd /var/www/flyer-crawler.projectium.com
          npm install --omit=dev

          # --- Cleanup Errored Processes ---
          echo "Cleaning up errored or stopped PM2 processes..."
          node -e "const exec = require('child_process').execSync; try { const list = JSON.parse(exec('pm2 jlist').toString()); list.forEach(p => { if (p.pm2_env.status === 'errored' || p.pm2_env.status === 'stopped') { console.log('Deleting ' + p.pm2_env.status + ' process: ' + p.name + ' (' + p.pm2_env.pm_id + ')'); try { exec('pm2 delete ' + p.pm2_env.pm_id); } catch(e) { console.error('Failed to delete ' + p.pm2_env.pm_id); } } }); } catch (e) { console.error('Error cleaning up processes:', e); }"

          # --- Version Check Logic ---
          # Get the version from the newly deployed package.json
          NEW_VERSION=$(node -p "require('./package.json').version")
          echo "Deployed Package Version: $NEW_VERSION"

          # Get the running version from PM2 for the main API process
          # We use a small node script to parse the JSON output from pm2 jlist
          RUNNING_VERSION=$(pm2 jlist | node -e "try { const list = JSON.parse(require('fs').readFileSync(0, 'utf-8')); const app = list.find(p => p.name === 'flyer-crawler-api'); console.log(app ? app.pm2_env.version : ''); } catch(e) { console.log(''); }")
          echo "Running PM2 Version: $RUNNING_VERSION"

          if [ "${{ gitea.event.inputs.force_reload }}" == "true" ] || [ "$NEW_VERSION" != "$RUNNING_VERSION" ] || [ -z "$RUNNING_VERSION" ]; then
            if [ "${{ gitea.event.inputs.force_reload }}" == "true" ]; then
              echo "Force reload triggered by manual input. Reloading PM2..."
            else
              echo "Version mismatch (Running: $RUNNING_VERSION -> Deployed: $NEW_VERSION) or app not running. Reloading PM2..."
            fi
            pm2 startOrReload ecosystem.config.cjs --env production --update-env && pm2 save
            echo "Production backend server reloaded successfully."
          else
            echo "Version $NEW_VERSION is already running. Skipping PM2 reload."
          fi

          echo "Updating schema hash in production database..."
          CURRENT_HASH=$(cat sql/master_schema_rollup.sql | dos2unix | sha256sum | awk '{ print $1 }')
          PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c \
            "INSERT INTO public.schema_info (environment, schema_hash, deployed_at) VALUES ('production', '$CURRENT_HASH', NOW())
             ON CONFLICT (environment) DO UPDATE SET schema_hash = EXCLUDED.schema_hash, deployed_at = NOW();"

          UPDATED_HASH=$(PGPASSWORD="$DB_PASSWORD" psql -v ON_ERROR_STOP=1 -h "$DB_HOST" -p 5432 -U "$DB_USER" -d "$DB_NAME" -c "SELECT schema_hash FROM public.schema_info WHERE environment = 'production';" -t -A)
          if [ "$CURRENT_HASH" = "$UPDATED_HASH" ]; then
            echo "✅ Schema hash successfully updated in the database to: $UPDATED_HASH"
          else
            echo "ERROR: Failed to update schema hash in the database."
          fi

      - name: Show PM2 Environment for Production
        run: |
          echo "--- Displaying recent PM2 logs for flyer-crawler-api ---"
          sleep 5
          pm2 describe flyer-crawler-api || echo "Could not find production pm2 process."
          pm2 logs flyer-crawler-api --lines 20 --nostream || echo "Could not find production pm2 process."
          pm2 env flyer-crawler-api || echo "Could not find production pm2 process."
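Besides the Gitea web UI, a workflow_dispatch run can be triggered over the REST API on recent Gitea versions. A hedged sketch (the endpoint path and its availability depend on the Gitea release; OWNER/REPO and $GITEA_TOKEN are placeholders):

```bash
curl -X POST \
  "https://gitea.projectium.com/api/v1/repos/OWNER/REPO/actions/workflows/manual-deploy-major.yml/dispatches" \
  -H "Authorization: token $GITEA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"ref": "main", "inputs": {"confirmation": "deploy-major-to-prod", "force_reload": "false"}}'
```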
Formatter configuration, likely the Prettier config (file name not captured; no text change is visible in this hunk):

@@ -6,4 +6,4 @@
   "tabWidth": 2,
   "useTabs": false,
   "endOfLine": "auto"
 }
Dockerfile.dev (new file, 31 lines)

# Use Ubuntu 22.04 (LTS) as the base image to match production
FROM ubuntu:22.04

# Set environment variables to non-interactive to avoid prompts during installation
ENV DEBIAN_FRONTEND=noninteractive

# Update package lists and install essential tools
# - curl: for downloading Node.js setup script
# - git: for version control operations
# - build-essential: for compiling native Node.js modules (node-gyp)
# - python3: required by some Node.js build tools
RUN apt-get update && apt-get install -y \
    curl \
    git \
    build-essential \
    python3 \
    && rm -rf /var/lib/apt/lists/*

# Install Node.js 20.x (LTS) from NodeSource
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
    && apt-get install -y nodejs

# Set the working directory inside the container
WORKDIR /app

# Set default environment variables for development
ENV NODE_ENV=development
ENV NODE_OPTIONS='--max-old-space-size=8192'

# Default command keeps the container running so you can attach to it
CMD ["bash"]
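A quick way to exercise this image outside the devcontainer flow (the image tag is illustrative; substitute podman for docker if that is your runtime):

```bash
docker build -f Dockerfile.dev -t flyer-crawler-dev .
docker run --rm -it -v "$PWD":/app flyer-crawler-dev bash
# then, inside the container:
#   npm install && npm run dev:container
```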
130
README.md
130
README.md
@@ -2,7 +2,7 @@
Flyer Crawler is a web application that uses the Google Gemini AI to extract, analyze, and manage data from grocery store flyers. Users can upload flyer images or PDFs, and the application will automatically identify items, prices, and sale dates, storing the structured data in a PostgreSQL database for historical analysis, price tracking, and personalized deal alerts.

We are working on an app to help people save money by finding good deals that are advertised only in store flyers/ads. The primary purpose of the site is to make uploading flyers as easy and as accurate as possible, and to store people's needs so that sales can be matched to needs.

## Features
@@ -45,9 +45,9 @@ This project is configured to run in a CI/CD environment and does not use `.env`
1. **Set up a PostgreSQL database instance.**
2. **Run the Database Schema**:
   - Connect to your database using a tool like `psql` or DBeaver.
   - Open `sql/schema.sql.txt`, copy its entire contents, and execute it against your database.
   - This will create all necessary tables, functions, and relationships.

### Step 2: Install Dependencies and Run the Application
@@ -79,11 +79,11 @@ sudo nano /etc/nginx/mime.types
change

application/javascript js;

TO

application/javascript js mjs;

RESTART NGINX
@@ -95,7 +95,7 @@ actually the proper change was to do this in the /etc/nginx/sites-available/flye
## for OAuth

1. Get Google OAuth Credentials
This is a crucial step that you must do outside the codebase:

Go to the Google Cloud Console.
@@ -112,7 +112,7 @@ Under Authorized redirect URIs, click ADD URI and enter the URL where Google wil
Click Create. You will be given a Client ID and a Client Secret.
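
Purely as an illustration of where those credentials end up, here is a sketch assuming a Passport-based setup; the `GOOGLE_CLIENT_ID`/`GOOGLE_CLIENT_SECRET` variable names and the callback path are hypothetical, not the project's actual wiring:

```ts
import passport from 'passport';
import { Strategy as GoogleStrategy } from 'passport-google-oauth20';

passport.use(
  new GoogleStrategy(
    {
      clientID: process.env.GOOGLE_CLIENT_ID!,
      clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
      // Must match the Authorized redirect URI registered in the Google Cloud Console.
      callbackURL: '/api/auth/google/callback',
    },
    (_accessToken, _refreshToken, profile, done) => {
      // Look up or create the local user record from `profile`, then:
      done(null, profile);
    },
  ),
);
```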

2. Get GitHub OAuth Credentials
You'll need to obtain a Client ID and Client Secret from GitHub:

Go to your GitHub profile settings.
@@ -133,21 +133,23 @@ You will be given a Client ID and a Client Secret.
psql -h localhost -U flyer_crawler_user -d "flyer-crawler-prod" -W

## postgis

flyer-crawler-prod=> SELECT version();
version
------------------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 14.19 (Ubuntu 14.19-0ubuntu0.22.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0, 64-bit
(1 row)

flyer-crawler-prod=> SELECT PostGIS_Full_Version();
postgis_full_version
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
POSTGIS="3.2.0 c3e3cc0" [EXTENSION] PGSQL="140" GEOS="3.10.2-CAPI-1.16.0" PROJ="8.2.1" LIBXML="2.9.12" LIBJSON="0.15" LIBPROTOBUF="1.3.3" WAGYU="0.5.0 (Internal)"
(1 row)

## production postgres setup
@@ -201,9 +203,13 @@ Step 4: Seed the Admin Account (If Needed)
Your application has a separate script to create the initial admin user. To run it, you must first set the required environment variables in your shell session.

```bash
# Set variables for the current session
export DB_USER=flyer_crawler_user DB_PASSWORD=your_password DB_NAME="flyer-crawler-prod" ...

# Run the seeding script
npx tsx src/db/seed_admin_account.ts
```

Your production database is now ready!
@@ -284,8 +290,6 @@ Test Execution: Your tests run against this clean, isolated schema.
This approach is faster, more reliable, and removes the need for sudo access within the CI pipeline.
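
For illustration, a minimal sketch of how per-run schema isolation can be wired up with `pg`; the helper name, the `runId` parameter, and the pool options are assumptions, not the project's actual test setup:

```ts
import { Pool } from 'pg';

// Hypothetical helper: give each CI run its own schema so tests never
// touch shared state. runId might be a commit SHA or job number.
export async function createTestSchemaPool(adminPool: Pool, runId: string): Promise<Pool> {
  const schema = `test_${runId}`;
  await adminPool.query(`CREATE SCHEMA IF NOT EXISTS "${schema}"`);
  // ...apply the DDL from sql/schema.sql.txt against the new schema here...

  // Hand tests a pool whose connections default to the isolated schema.
  return new Pool({ options: `-c search_path=${schema}` });
}
```

Dropping the schema afterwards (`DROP SCHEMA ... CASCADE`) keeps runs cheap compared to recreating a whole database.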
gitea-runner@projectium:~$ pm2 install pm2-logrotate
[PM2][Module] Installing NPM pm2-logrotate module
[PM2][Module] Calling [NPM] to install pm2-logrotate ...
@@ -293,7 +297,7 @@ gitea-runner@projectium:~$ pm2 install pm2-logrotate
added 161 packages in 5s

21 packages are looking for funding
run `npm fund` for details
npm notice
npm notice New patch version of npm available! 11.6.3 -> 11.6.4
npm notice Changelog: https://github.com/npm/cli/releases/tag/v11.6.4
@@ -308,23 +312,23 @@ $ pm2 set pm2-logrotate:retain 30
$ pm2 set pm2-logrotate:compress false
$ pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
$ pm2 set pm2-logrotate:workerInterval 30
$ pm2 set pm2-logrotate:rotateInterval 0 0 * * *
$ pm2 set pm2-logrotate:rotateModule true
Modules configuration. Copy/Paste line to edit values.
[PM2][Module] Module successfully installed and launched
[PM2][Module] Checkout module options: `$ pm2 conf`
┌────┬───────────────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├────┼───────────────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 2 │ flyer-crawler-analytics-worker │ default │ 0.0.0 │ fork │ 3846981 │ 7m │ 5 │ online │ 0% │ 55.8mb │ git… │ disabled │
│ 11 │ flyer-crawler-api │ default │ 0.0.0 │ fork │ 3846987 │ 7m │ 0 │ online │ 0% │ 59.0mb │ git… │ disabled │
│ 12 │ flyer-crawler-worker │ default │ 0.0.0 │ fork │ 3846988 │ 7m │ 0 │ online │ 0% │ 54.2mb │ git… │ disabled │
└────┴───────────────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┘
Module
┌────┬──────────────────────────────┬───────────────┬──────────┬──────────┬──────┬──────────┬──────────┬──────────┐
│ id │ module │ version │ pid │ status │ ↺ │ cpu │ mem │ user │
├────┼──────────────────────────────┼───────────────┼──────────┼──────────┼──────┼──────────┼──────────┼──────────┤
│ 13 │ pm2-logrotate │ 3.0.0 │ 3848878 │ online │ 0 │ 0% │ 20.1mb │ git… │
└────┴──────────────────────────────┴───────────────┴──────────┴──────────┴──────┴──────────┴──────────┴──────────┘
gitea-runner@projectium:~$ pm2 set pm2-logrotate:max_size 10M
[PM2] Module pm2-logrotate restarted
@@ -335,7 +339,7 @@ $ pm2 set pm2-logrotate:retain 30
$ pm2 set pm2-logrotate:compress false
$ pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
$ pm2 set pm2-logrotate:workerInterval 30
$ pm2 set pm2-logrotate:rotateInterval 0 0 * * *
$ pm2 set pm2-logrotate:rotateModule true
gitea-runner@projectium:~$ pm2 set pm2-logrotate:retain 14
[PM2] Module pm2-logrotate restarted
@@ -346,33 +350,31 @@ $ pm2 set pm2-logrotate:retain 14
$ pm2 set pm2-logrotate:compress false
$ pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
$ pm2 set pm2-logrotate:workerInterval 30
$ pm2 set pm2-logrotate:rotateInterval 0 0 * * *
$ pm2 set pm2-logrotate:rotateModule true
gitea-runner@projectium:~$

## dev server setup:

Here are the steps to set up the development environment on Windows using Podman with an Ubuntu container:
1. Install Prerequisites on Windows
Install WSL 2: Podman on Windows relies on the Windows Subsystem for Linux. Install it by running wsl --install in an administrator PowerShell.
Install Podman Desktop: Download and install Podman Desktop for Windows.

2. Set Up Podman
Initialize Podman: Launch Podman Desktop. It will automatically set up its WSL 2 machine.
Start Podman: Ensure the Podman machine is running from the Podman Desktop interface.

3. Set Up the Ubuntu Container
- Pull Ubuntu Image: Open a PowerShell or command prompt and pull the latest Ubuntu image:
  podman pull ubuntu:latest
- Create a Podman Volume: Create a volume to persist node_modules and avoid installing them every time the container starts.
  podman volume create node_modules_cache
- Run the Ubuntu Container: Start a new container with the project directory mounted and the necessary ports forwarded.
- Open a terminal in your project's root directory on Windows.
- Run the following command, replacing D:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com with the full path to your project:

podman run -it -p 3001:3001 -p 5173:5173 --name flyer-dev -v "D:\gitea\flyer-crawler.projectium.com\flyer-crawler.projectium.com:/app" -v "node_modules_cache:/app/node_modules" ubuntu:latest
@@ -383,46 +385,40 @@ podman run -it -p 3001:3001 -p 5173:5173 --name flyer-dev -v "D:\gitea\flyer-cra
-v "node_modules_cache:/app/node_modules": Mounts the named volume for node_modules.

4. Configure the Ubuntu Environment
You are now inside the Ubuntu container's shell.

- Update Package Lists:
  apt-get update
- Install Dependencies: Install curl, git, and nodejs (which includes npm).
  apt-get install -y curl git
  curl -sL https://deb.nodesource.com/setup_20.x | bash -
  apt-get install -y nodejs
- Navigate to Project Directory:
  cd /app
- Install Project Dependencies:
  npm install

5. Run the Development Server
- Start the Application:
  npm run dev

6. Accessing the Application
- Frontend: Open your browser and go to http://localhost:5173.
- Backend: The frontend will make API calls to http://localhost:3001.

Managing the Environment
- Stopping the Container: Press Ctrl+C in the container terminal, then type exit.
- Restarting the Container:
  podman start -a -i flyer-dev

## for me:

cd /mnt/d/gitea/flyer-crawler.projectium.com/flyer-crawler.projectium.com
podman run -it -p 3001:3001 -p 5173:5173 --name flyer-dev -v "$(pwd):/app" -v "node_modules_cache:/app/node_modules" ubuntu:latest
rate limiting

respect the AI service's rate limits, making it more stable and robust. You can adjust the GEMINI_RPM environment variable in your production environment as needed without changing the code.
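
As a sketch of how such a limit can be enforced, assuming calls are funnelled through a single helper; the helper name and the default of 15 requests per minute are illustrative, and only the `GEMINI_RPM` variable comes from the text above:

```ts
// Space calls out so that no more than GEMINI_RPM requests start per minute.
const rpm = Number(process.env.GEMINI_RPM ?? '15'); // default is illustrative
const minIntervalMs = 60_000 / rpm;
let nextSlot = 0;

export async function withGeminiRateLimit<T>(call: () => Promise<T>): Promise<T> {
  const now = Date.now();
  const waitMs = Math.max(0, nextSlot - now);
  nextSlot = Math.max(now, nextSlot) + minIntervalMs; // reserve the next start time
  if (waitMs > 0) await new Promise((resolve) => setTimeout(resolve, waitMs));
  return call();
}
```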
compose.dev.yml (new file, 52 lines)
@@ -0,0 +1,52 @@
version: '3.8'

services:
  app:
    container_name: flyer-crawler-dev
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      # Mount the current directory to /app in the container
      - .:/app
      # Create a volume for node_modules to avoid conflicts with Windows host
      # and improve performance.
      - node_modules_data:/app/node_modules
    ports:
      - '3000:3000' # Frontend (Vite default)
      - '3001:3001' # Backend API
    environment:
      - NODE_ENV=development
      - DB_HOST=postgres
      - DB_USER=postgres
      - DB_PASSWORD=postgres
      - DB_NAME=flyer_crawler_dev
      - REDIS_URL=redis://redis:6379
      # Add other secrets here or use a .env file
    depends_on:
      - postgres
      - redis
    # Keep container running so VS Code can attach
    command: tail -f /dev/null

  postgres:
    image: docker.io/postgis/postgis:15-3.4
    container_name: flyer-crawler-postgres
    ports:
      - '5432:5432'
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: flyer_crawler_dev
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: docker.io/library/redis:alpine
    container_name: flyer-crawler-redis
    ports:
      - '6379:6379'

volumes:
  postgres_data:
  node_modules_data:
@@ -34,7 +34,7 @@ We will adopt a strict, consistent error-handling contract for the service and r
**Robustness**: Eliminates an entire class of bugs where `undefined` is passed to `res.json()`, preventing incorrect `500` errors.
**Consistency & Predictability**: All data-fetching methods now have a predictable contract. They either return the expected data or throw a specific, typed error.
**Developer Experience**: Route handlers become simpler, cleaner, and easier to write correctly. The cognitive load on developers is reduced as they no longer need to remember to check for `undefined`.
**Improved Testability**: Tests become more reliable and realistic. Mocks can now throw the _exact_ error type (`new NotFoundError()`) that the real implementation would, ensuring tests accurately reflect the application's behavior.
**Centralized Control**: Error-to-HTTP-status logic is centralized in the `errorHandler` middleware, making it easy to manage and modify error responses globally.
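
To make the contract concrete, a minimal sketch of the pieces named above, assuming Express; the `status` field is an illustrative convention, not necessarily the project's exact error hierarchy:

```ts
import { NextFunction, Request, Response } from 'express';

// Illustrative typed error; repositories throw this instead of returning undefined.
export class NotFoundError extends Error {
  status = 404;
}

// Centralized error-to-HTTP-status mapping, registered last in the middleware chain.
export const errorHandler = (err: Error, _req: Request, res: Response, _next: NextFunction) => {
  const status = (err as { status?: number }).status ?? 500;
  res.status(status).json({ error: err.message });
};
```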

### Negative
@@ -10,21 +10,19 @@ Following the standardization of error handling in ADR-001, the next most common
This manual approach has several drawbacks:

1. **Repetitive Boilerplate**: The `try/catch/finally` block for transaction management is duplicated across multiple files.
2. **Error-Prone**: It is easy to forget to `client.release()` in all code paths, which can lead to connection pool exhaustion and bring down the application.
3. **Poor Composability**: It is difficult to compose multiple repository methods into a single, atomic "Unit of Work". For example, a service function that needs to update a user's points and create a budget in a single transaction cannot easily do so if both underlying repository methods create their own transactions.
## Decision

We will implement a standardized "Unit of Work" pattern through a high-level `withTransaction` helper function. This function will abstract away the complexity of transaction management; a sketch follows after this list.

1. **`withTransaction` Helper**: A new helper function, `withTransaction<T>(callback: (client: PoolClient) => Promise<T>): Promise<T>`, will be created. This function will be responsible for:
   - Acquiring a client from the database pool.
   - Starting a transaction (`BEGIN`).
   - Executing the `callback` function, passing the transactional client to it.
   - If the callback succeeds, it will `COMMIT` the transaction.
   - If the callback throws an error, it will `ROLLBACK` the transaction and re-throw the error.
   - In all cases, it will `RELEASE` the client back to the pool.

2. **Repository Method Signature**: Repository methods that need to be part of a transaction will be updated to optionally accept a `PoolClient` in their constructor or as a method parameter. By default, they will use the global pool. When called from within a `withTransaction` block, they will be passed the transactional client.
3. **Service Layer Orchestration**: Service-layer functions that orchestrate multi-step operations will use `withTransaction` to ensure atomicity. They will instantiate or call repository methods, providing them with the transactional client from the callback.
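
A minimal sketch of the helper, assuming `pg`'s `Pool`; the signature matches the one above, but the body is illustrative rather than the final implementation:

```ts
import { Pool, PoolClient } from 'pg';

const pool = new Pool(); // assumes connection settings come from the usual PG* env vars

export async function withTransaction<T>(
  callback: (client: PoolClient) => Promise<T>,
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await callback(client);
    await client.query('COMMIT');
    return result;
  } catch (error) {
    await client.query('ROLLBACK');
    throw error; // re-throw so callers and the errorHandler see the failure
  } finally {
    client.release(); // always return the client to the pool
  }
}
```

Because the `finally` block owns the release, no caller can leak a client, which is exactly the failure mode drawback 2 describes.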
@@ -40,7 +38,7 @@ async function registerUserAndCreateDefaultList(userData) {
  const shoppingRepo = new ShoppingRepository(client);

  const newUser = await userRepo.createUser(userData);
  await shoppingRepo.createShoppingList(newUser.user_id, 'My First List');

  return newUser;
});
@@ -20,8 +20,8 @@ We will adopt a schema-based approach for input validation using the `zod` libra
1. **Adopt `zod` for Schema Definition**: We will use `zod` to define clear, type-safe schemas for the `params`, `query`, and `body` of each API request. `zod` provides powerful and declarative validation rules and automatically infers TypeScript types.

2. **Create a Reusable Validation Middleware**: A generic `validateRequest(schema)` middleware will be created. This middleware will take a `zod` schema, parse the incoming request against it, and handle success and error cases (see the sketch after this list).
   - On successful validation, the parsed and typed data will be attached to the `req` object (e.g., `req.body` will be replaced with the parsed body), and `next()` will be called.
   - On validation failure, the middleware will call `next()` with a custom `ValidationError` containing a structured list of issues, which `ADR-001`'s `errorHandler` can then format into a user-friendly `400 Bad Request` response.

3. **Refactor Routes**: All route handlers will be refactored to use this new middleware, removing all manual validation logic.
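
A sketch of what `validateRequest` could look like; the `ValidationError` shape and the way parsed data is written back onto `req` are assumptions consistent with the description above:

```ts
import { NextFunction, Request, Response } from 'express';
import { ZodIssue, ZodSchema } from 'zod';

// Illustrative error type; in the project this would live in ADR-001's hierarchy.
class ValidationError extends Error {
  constructor(public issues: ZodIssue[]) {
    super('Request validation failed');
  }
}

export const validateRequest =
  (schema: ZodSchema) => (req: Request, _res: Response, next: NextFunction) => {
    // Validate params, query, and body together against one schema.
    const result = schema.safeParse({
      params: req.params,
      query: req.query,
      body: req.body,
    });
    if (!result.success) {
      return next(new ValidationError(result.error.issues));
    }
    Object.assign(req, result.data); // replaces body/query/params with the parsed values
    next();
  };
```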
@@ -46,18 +46,18 @@ const getFlyerSchema = z.object({
type GetFlyerRequest = z.infer<typeof getFlyerSchema>;

// 3. Apply the middleware and use an inline cast for the request
router.get('/:id', validateRequest(getFlyerSchema), async (req, res, next) => {
  // Cast 'req' to the inferred type.
  // This provides full type safety for params, query, and body.
  const { params } = req as unknown as GetFlyerRequest;

  try {
    const flyer = await db.flyerRepo.getFlyerById(params.id); // params.id is 'number'
    res.json(flyer);
  } catch (error) {
    next(error);
  }
});
```

## Consequences
@@ -20,9 +20,9 @@ We will adopt a standardized, application-wide structured logging policy. All lo
**Request-Scoped Logger with Context**: We will create a middleware that runs at the beginning of the request lifecycle. This middleware will:

- Generate a unique `request_id` for each incoming request.
- Create a request-scoped logger instance (a "child logger") that automatically includes the `request_id`, `user_id` (if authenticated), and `ip_address` in every log message it generates.
- Attach this child logger to the `req` object (e.g., `req.log`).

**Mandatory Use of Request-Scoped Logger**: All route handlers and any service functions called by them **MUST** use the request-scoped logger (`req.log`) instead of the global logger instance. This ensures all logs for a given request are automatically correlated.
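
A sketch of that middleware, assuming `pino` as the structured logger; the `user` lookup is a placeholder for whatever the auth middleware actually attaches:

```ts
import { randomUUID } from 'node:crypto';
import { NextFunction, Request, Response } from 'express';
import pino from 'pino';

const logger = pino();

export const requestLogger = (req: Request, _res: Response, next: NextFunction) => {
  // Child logger: every line it emits carries these fields automatically.
  (req as any).log = logger.child({
    request_id: randomUUID(),
    user_id: (req as any).user?.id, // assumes an auth middleware ran first
    ip_address: req.ip,
  });
  next();
};
```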
@@ -32,9 +32,9 @@ We will adopt a standardized, application-wide structured logging policy. All lo
**Standardized Logging Practices**:
**INFO**: Log key business events, such as `User logged in` or `Flyer processed`.
**WARN**: Log recoverable errors or unusual situations that do not break the request, such as `Client Error: 404 on GET /api/non-existent-route` or `Retrying failed database connection`.
**ERROR**: Log only unhandled or server-side errors that cause a request to fail (typically handled by the `errorHandler`). Avoid logging expected client errors (like 4xx) at this level.
**DEBUG**: Log detailed diagnostic information useful during development, such as function entry/exit points or variable states.

### Example Usage
@@ -59,15 +59,15 @@ export const requestLogger = (req, res, next) => {
// In a route handler:
router.get('/:id', async (req, res, next) => {
  // Use the request-scoped logger
  req.log.info({ flyerId: req.params.id }, 'Fetching flyer by ID');
  try {
    // ... business logic ...
    res.json(flyer);
  } catch (error) {
    // The error itself will be logged with full context by the errorHandler
    next(error);
  }
});
```
@@ -14,5 +14,5 @@ We will formalize a centralized Role-Based Access Control (RBAC) or Attribute-Ba
## Consequences

- **Positive**: Ensures authorization logic is consistent, easy to audit, and decoupled from business logic. Improves security by centralizing access control.
- **Negative**: Requires a significant refactoring effort to integrate the new authorization system across all protected routes and features. Introduces a new dependency if an external library is chosen.
@@ -14,5 +14,5 @@ We will establish a formal Design System and Component Library. This will involv
## Consequences

- **Positive**: Ensures a consistent and high-quality user interface. Accelerates frontend development by providing reusable, well-documented components. Improves maintainability and reduces technical debt.
- **Negative**: Requires an initial investment in setting up Storybook and migrating existing components. Adds a new dependency and a new workflow for frontend development.
@@ -14,5 +14,5 @@ We will adopt a dedicated database migration tool, such as **`node-pg-migrate`**
## Consequences

- **Positive**: Provides a safe, repeatable, and reversible way to evolve the database schema. Improves team collaboration on database changes. Reduces the risk of data loss or downtime during deployments.
- **Negative**: Requires an initial setup and learning curve for the chosen migration tool. All future schema changes must adhere to the migration workflow.
@@ -14,5 +14,5 @@ We will standardize the deployment process by containerizing the application usi
## Consequences

- **Positive**: Ensures consistency between development and production environments. Simplifies the setup for new developers. Improves portability and scalability of the application.
- **Negative**: Requires learning Docker and containerization concepts. Adds `Dockerfile` and `docker-compose.yml` to the project's configuration.
Some files were not shown because too many files have changed in this diff.