# Claude Code Project Instructions

## Communication Style: Ask Before Assuming
IMPORTANT: When helping with tasks, ask clarifying questions before making assumptions. Do not assume:
- What steps the user has or hasn't completed
- What the user already knows or has configured
- What external services (OAuth providers, APIs, etc.) are already set up
- What secrets or credentials have already been created
Instead, ask the user to confirm the current state before providing instructions or making recommendations. This prevents wasted effort and respects the user's existing work.
## Platform Requirement: Linux Only
CRITICAL: This application is designed to run exclusively on Linux. See ADR-014 for full details.
## Environment Terminology

- Dev Container (or just "dev"): The containerized Linux development environment (`flyer-crawler-dev`). This is where all development and testing should occur.
- Host: The Windows machine running Podman/Docker and VS Code.

When instructions say "run in dev" or "run in the dev container", they mean executing commands inside the `flyer-crawler-dev` container.
## Test Execution Rules
- ALL tests MUST be executed in the dev container - the Linux container environment
- NEVER run tests directly on Windows host - test results from Windows are unreliable
- Always use the dev container for testing when developing on Windows
## How to Run Tests Correctly

```bash
# If on Windows, first open VS Code and "Reopen in Container"
# Then run tests inside the dev container:
npm test                 # Run all unit tests
npm run test:unit        # Run unit tests only
npm run test:integration # Run integration tests (requires DB/Redis)
```
## Running Tests via Podman (from Windows host)

To run unit tests in the dev container via Podman:

```bash
podman exec -it flyer-crawler-dev npm run test:unit
```

To run integration tests in the dev container via Podman:

```bash
podman exec -it flyer-crawler-dev npm run test:integration
```

To run a specific test file:

```bash
podman exec -it flyer-crawler-dev npm test -- --run src/hooks/useAuth.test.tsx
```
## Why Linux Only?

- Path separators: Code uses POSIX-style paths (`/`) which may break on Windows (see the sketch after this list)
- Shell scripts in the `scripts/` directory are Linux-only
- External dependencies like `pdftocairo` assume Linux installation paths
- Unix-style file permissions are assumed throughout
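As a small illustration of the path-separator bullet (a sketch; the storage path is invented):

```typescript
// Sketch: the codebase builds '/'-separated paths like this one.
// 'storage/flyers' is an illustrative path, not the project's real layout.
import path from 'node:path';

const posixPath = path.posix.join('storage', 'flyers', 'flyer-123.pdf');
// => 'storage/flyers/flyer-123.pdf' on every platform

const nativePath = path.join('storage', 'flyers', 'flyer-123.pdf');
// => 'storage\\flyers\\flyer-123.pdf' on Windows, which breaks code
//    (and shell scripts) that compare or split on '/'
```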
## Test Result Interpretation
- Tests that pass on Windows but fail on Linux = BROKEN tests (must be fixed)
- Tests that fail on Windows but pass on Linux = PASSING tests (acceptable)
## Development Workflow

- Open the project in VS Code
- Use "Reopen in Container" (Dev Containers extension required) to enter the dev environment
- Wait for dev container initialization to complete
- Run `npm test` to verify the dev environment is working
- Make changes and run tests inside the dev container
## Code Change Verification

After making any code changes, always run a type-check to catch TypeScript errors before committing:

```bash
npm run type-check
```
This prevents linting/type errors from being introduced into the codebase.
## Quick Reference

| Command | Description |
|---|---|
| `npm test` | Run all unit tests |
| `npm run test:unit` | Run unit tests only |
| `npm run test:integration` | Run integration tests |
| `npm run dev:container` | Start dev server (container) |
| `npm run build` | Build for production |
| `npm run type-check` | Run TypeScript type checking |
## Known Integration Test Issues and Solutions
This section documents common test issues encountered in integration tests, their root causes, and solutions. These patterns recur frequently.
### 1. Vitest globalSetup Runs in a Separate Node.js Context

Problem: Vitest's `globalSetup` runs in a completely separate Node.js context from test files. This means:

- Singletons created in `globalSetup` are NOT the same instances as those in test files
- `global`, `globalThis`, and `process` are all isolated between contexts
- `vi.spyOn()` on module exports doesn't work cross-context
- Dependency injection via setter methods fails across contexts
Affected Tests: Any test trying to inject mocks into BullMQ worker services (e.g., AI failure tests, DB failure tests)
Solution Options:

- Mark tests as `.todo()` until an API-based mock injection mechanism is implemented
- Create test-only API endpoints that allow setting mock behaviors via HTTP
- Use file-based or Redis-based mock flags that services check at runtime (a sketch of this approach follows the example below)
Example of affected code pattern:
```typescript
// This DOES NOT work - different module instances
const { flyerProcessingService } = await import('../../services/workers.server');
flyerProcessingService._getAiProcessor()._setExtractAndValidateData(mockFn);
// The worker uses a different flyerProcessingService instance!
```
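For the third solution option, here is a minimal sketch of a Redis-based mock flag, assuming an ioredis client; the key name and both helper functions are hypothetical, not existing project APIs:

```typescript
// Minimal sketch of a Redis-backed mock flag. The key name and both helpers
// are hypothetical; only the shared Redis instance mirrors the real setup.
import Redis from 'ioredis';

const redis = new Redis(); // connects to localhost:6379 by default

// Test context: arm the flag before enqueueing the job under test.
export async function armAiFailure(): Promise<void> {
  await redis.set('test:mock:ai-failure', '1', 'EX', 60); // auto-expires in 60s
}

// Worker context: the service checks the flag at runtime before calling the AI.
export async function shouldSimulateAiFailure(): Promise<boolean> {
  return (await redis.get('test:mock:ai-failure')) === '1';
}
```

Because both Node.js contexts talk to the same Redis instance, the flag crosses the context boundary that in-process spies and setter injection cannot.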
### 2. BullMQ Cleanup Queue Deleting Files Before Test Verification

Problem: The cleanup worker runs in the `globalSetup` context and processes cleanup jobs even when tests spy on `cleanupQueue.add()`. The spy intercepts calls in the test context, but jobs already queued run in the worker's context.
Affected Tests: EXIF/PNG metadata stripping tests that need to verify file contents before deletion
Solution: Drain and pause the cleanup queue before the test:
```typescript
const { cleanupQueue } = await import('../../services/queues.server');
await cleanupQueue.drain();  // Remove existing jobs
await cleanupQueue.pause();  // Prevent new jobs from processing
// ... run test ...
await cleanupQueue.resume(); // Restore normal operation
```
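One caveat: if an assertion throws between `pause()` and `resume()`, the queue stays paused for every later test. A try/finally wrapper guards against that (a sketch; the helper name is hypothetical):

```typescript
// Sketch: ensure cleanupQueue.resume() runs even if the test body throws.
// withPausedCleanupQueue is a hypothetical helper, not an existing project API.
import type { Queue } from 'bullmq';

export async function withPausedCleanupQueue(
  cleanupQueue: Queue,
  testBody: () => Promise<void>,
): Promise<void> {
  await cleanupQueue.drain(); // remove already-queued cleanup jobs
  await cleanupQueue.pause(); // stop new jobs from being processed
  try {
    await testBody();         // verify file contents while cleanup is halted
  } finally {
    await cleanupQueue.resume(); // restore normal operation no matter what
  }
}
```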
### 3. Cache Invalidation After Direct Database Inserts
Problem: Tests that insert data directly via SQL (bypassing the service layer) don't trigger cache invalidation. Subsequent API calls return stale cached data.
Affected Tests: Any test using `pool.query()` to insert flyers, stores, or other cached entities
Solution: Manually invalidate the cache after direct inserts:
```typescript
await pool.query('INSERT INTO flyers ...');
await cacheService.invalidateFlyers(); // Clear stale cache
```
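Put together in a test, the pattern might look like this (a sketch; the import paths, table columns, and `/api/flyers` route are assumptions):

```typescript
// Sketch of insert -> invalidate -> fetch; import paths, columns, and the
// /api/flyers route are illustrative assumptions.
import { it, expect } from 'vitest';

it('serves freshly inserted flyers instead of stale cache', async () => {
  const { pool } = await import('../../services/db.server');            // assumed path
  const { cacheService } = await import('../../services/cache.server'); // assumed path

  await pool.query('INSERT INTO flyers (store_id, title) VALUES ($1, $2)', [1, 'Weekly Deals']);
  await cacheService.invalidateFlyers(); // skip this and the API returns stale cached data

  const res = await fetch('http://localhost:3000/api/flyers'); // assumed base URL
  const body = await res.json();
  expect(JSON.stringify(body)).toContain('Weekly Deals');
});
```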
### 4. Unique Filenames Required for Test Isolation
Problem: Multer generates predictable filenames in test environments, causing race conditions when multiple tests upload files concurrently or in sequence.
Affected Tests: Flyer processing tests, file upload tests
Solution: Always use unique filenames with timestamps:
```typescript
// In multer.middleware.ts
const uniqueSuffix = `${Date.now()}-${Math.round(Math.random() * 1e9)}`;
cb(null, `${file.fieldname}-${uniqueSuffix}-${sanitizedOriginalName}`);
```
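For context, that callback belongs in a multer `diskStorage` filename hook; a sketch of the surrounding config follows (the destination directory and `sanitizeFilename` helper are assumptions):

```typescript
// Sketch of the surrounding multer config; the destination directory and
// sanitizeFilename helper are assumptions for illustration.
import multer from 'multer';
import path from 'node:path';

const sanitizeFilename = (name: string) => name.replace(/[^\w.-]/g, '_');

const storage = multer.diskStorage({
  destination: path.posix.join('uploads', 'flyers'), // assumed directory
  filename: (_req, file, cb) => {
    // Unique suffix prevents collisions between concurrent test uploads.
    const uniqueSuffix = `${Date.now()}-${Math.round(Math.random() * 1e9)}`;
    cb(null, `${file.fieldname}-${uniqueSuffix}-${sanitizeFilename(file.originalname)}`);
  },
});

export const upload = multer({ storage });
```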
### 5. Response Format Mismatches

Problem: API response formats may change, causing tests to fail when expecting old formats.

Common Issues:

- `response.body.data.jobId` vs `response.body.data.job.id`
- Nested objects vs flat response structures
- Type coercion (string vs number for IDs)
Solution: Always log response bodies during debugging and update test assertions to match actual API contracts.
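In practice that debugging step can look like this (a sketch; the route and nested shape are illustrative):

```typescript
// Sketch: log the real body, then assert the shape the API actually returns.
// The /api/flyers route and data.job.id shape are illustrative only.
import { it, expect } from 'vitest';

it('asserts against the current response contract', async () => {
  const res = await fetch('http://localhost:3000/api/flyers'); // assumed base URL
  const body = await res.json();

  console.log(JSON.stringify(body, null, 2)); // inspect the actual shape before asserting

  // Prefer the nested shape over a stale flat field, and compare IDs as
  // strings to avoid string-vs-number coercion failures, e.g.:
  // expect(String(body.data.job.id)).toBe(String(expectedId));
  expect(body).toHaveProperty('data');
});
```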
### 6. External Service Availability
Problem: Tests depending on external services (PM2, Redis health checks) fail when those services aren't available in the test environment.
Solution: Use try/catch with graceful degradation or mock the external service checks.
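For example, a PM2 health check could degrade to a boolean probe instead of a hard failure (a sketch; using `pm2 ping` as the probe is an assumption about how availability is checked):

```typescript
// Sketch: probe an external service and let tests skip gracefully when it
// is absent, rather than failing the whole suite.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);

export async function isPm2Available(): Promise<boolean> {
  try {
    await run('pm2', ['ping']); // throws if pm2 is missing or not responding
    return true;
  } catch {
    return false; // degrade gracefully instead of erroring out
  }
}
```

Tests can then gate themselves with Vitest's `it.skipIf(...)` or assert the fallback path instead.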
## MCP Servers
The following MCP servers are configured for this project:
| Server | Purpose |
|---|---|
| gitea-projectium | Gitea API for gitea.projectium.com |
| gitea-torbonium | Gitea API for gitea.torbonium.com |
| podman | Container management |
| filesystem | File system access |
| fetch | Web fetching |
| markitdown | Convert documents to markdown |
| sequential-thinking | Step-by-step reasoning |
| memory | Knowledge graph persistence |
| postgres | Direct database queries (localhost:5432) |
| playwright | Browser automation and testing |
| redis | Redis cache inspection (localhost:6379) |
Note: MCP servers are currently only available in the Claude CLI. Due to a bug in the Claude VS Code extension, MCP servers do not work there yet.