# Claude Code Project Instructions

## Communication Style: Ask Before Assuming

**IMPORTANT**: When helping with tasks, **ask clarifying questions before making assumptions**. Do not assume:

- What steps the user has or hasn't completed
- What the user already knows or has configured
- What external services (OAuth providers, APIs, etc.) are already set up
- What secrets or credentials have already been created

Instead, ask the user to confirm the current state before providing instructions or making recommendations. This prevents wasted effort and respects the user's existing work.

## Platform Requirement: Linux Only

**CRITICAL**: This application is designed to run **exclusively on Linux**. See [ADR-014](docs/adr/0014-containerization-and-deployment-strategy.md) for full details.

### Environment Terminology

- **Dev Container** (or just "dev"): The containerized Linux development environment (`flyer-crawler-dev`). This is where all development and testing should occur.
- **Host**: The Windows machine running Podman/Docker and VS Code.

When instructions say "run in dev" or "run in the dev container", they mean executing commands inside the `flyer-crawler-dev` container.
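For one-off commands from the host, you can also open an interactive shell inside the container (a minimal example, assuming the container is running under the name above):

```bash
# Open a shell inside the running dev container from the Windows host
podman exec -it flyer-crawler-dev bash
```
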
### Test Execution Rules

1. **ALL tests MUST be executed in the dev container** - the Linux container environment
2. **NEVER run tests directly on the Windows host** - test results from Windows are unreliable
3. **Always use the dev container for testing** when developing on Windows

### How to Run Tests Correctly

```bash
# If on Windows, first open VS Code and "Reopen in Container".
# Then run tests inside the dev container:
npm test                  # Run all unit tests
npm run test:unit         # Run unit tests only
npm run test:integration  # Run integration tests (requires DB/Redis)
```

### Running Tests via Podman (from Windows host)

To run unit tests in the dev container via Podman:

```bash
podman exec -it flyer-crawler-dev npm run test:unit
```

To run integration tests in the dev container via Podman:

```bash
podman exec -it flyer-crawler-dev npm run test:integration
```

To run a specific test file:

```bash
podman exec -it flyer-crawler-dev npm test -- --run src/hooks/useAuth.test.tsx
```

### Why Linux Only?

- Path separators: the code uses POSIX-style paths (`/`), which may break on Windows (see the sketch below)
- Shell scripts in the `scripts/` directory are Linux-only
- External dependencies like `pdftocairo` assume Linux installation paths
- Unix-style file permissions are assumed throughout
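As an illustration of the path-separator point, here is a hypothetical sketch (not project code) of string-based path handling that works in the container but misbehaves on Windows:

```typescript
import path from 'node:path';

// Fine inside the Linux container, where path.sep === '/':
const flyerPath = path.join('/app/uploads', 'flyer.pdf'); // '/app/uploads/flyer.pdf'
const segments = flyerPath.split('/'); // ['', 'app', 'uploads', 'flyer.pdf']

// On Windows, path.join produces backslash-separated paths, so splitting
// on '/' yields a single segment and any downstream logic silently breaks.
```
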
### Test Result Interpretation

- Tests that **pass on Windows but fail on Linux** are **BROKEN** and must be fixed
- Tests that **fail on Windows but pass on Linux** are **acceptable** - Linux is the source of truth

## Development Workflow

1. Open the project in VS Code
2. Use "Reopen in Container" (Dev Containers extension required) to enter the dev environment
3. Wait for dev container initialization to complete
4. Run `npm test` to verify the dev environment is working
5. Make changes and run tests inside the dev container

## Code Change Verification

After making any code changes, **always run a type-check** to catch TypeScript errors before committing:

```bash
npm run type-check
```

This prevents type errors from being introduced into the codebase.
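As with tests, run the type-check inside the dev container; from the Windows host:

```bash
podman exec -it flyer-crawler-dev npm run type-check
```
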
## Quick Reference

| Command                    | Description                  |
| -------------------------- | ---------------------------- |
| `npm test`                 | Run all unit tests           |
| `npm run test:unit`        | Run unit tests only          |
| `npm run test:integration` | Run integration tests        |
| `npm run dev:container`    | Start dev server (container) |
| `npm run build`            | Build for production         |
| `npm run type-check`       | Run TypeScript type checking |

## Known Integration Test Issues and Solutions

This section documents common issues encountered in integration tests, their root causes, and their solutions. These patterns recur frequently.

### 1. Vitest globalSetup Runs in a Separate Node.js Context

**Problem:** Vitest's `globalSetup` runs in a completely separate Node.js context from test files. This means:

- Singletons created in `globalSetup` are NOT the same instances as those in test files
- `global`, `globalThis`, and `process` are all isolated between contexts
- `vi.spyOn()` on module exports doesn't work cross-context
- Dependency injection via setter methods fails across contexts

**Affected Tests:** Any test trying to inject mocks into BullMQ worker services (e.g., AI failure tests, DB failure tests)

**Solution Options:**

1. Mark tests as `.todo()` until an API-based mock injection mechanism is implemented
2. Create test-only API endpoints that allow setting mock behaviors via HTTP
3. Use file-based or Redis-based mock flags that services check at runtime (a Redis-based sketch follows the example below)

**Example of the affected code pattern:**

```typescript
// This DOES NOT work - the test file and the worker load
// different instances of the same module.
const { flyerProcessingService } = await import('../../services/workers.server');
flyerProcessingService._getAiProcessor()._setExtractAndValidateData(mockFn);
// The worker uses a different flyerProcessingService instance!
```
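A minimal sketch of option 3, assuming ioredis (which BullMQ already uses); the flag name and helper functions are hypothetical. The point is that both contexts share the same Redis instance even though they share no module instances:

```typescript
import Redis from 'ioredis';

// Both contexts connect to the same Redis (localhost:6379 in the dev container).
const redis = new Redis();

// Test side: arm the mock before enqueueing the job.
export async function armAiFailureMock(): Promise<void> {
  await redis.set('test:mock:ai-failure', '1', 'EX', 60); // auto-expires as a safety net
}

// Service side: check the flag at runtime where the mock should fire.
export async function maybeSimulateAiFailure(): Promise<void> {
  if (process.env.NODE_ENV === 'test' && (await redis.get('test:mock:ai-failure'))) {
    throw new Error('Simulated AI processor failure (test mock flag)');
  }
}
```
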
### 2. BullMQ Cleanup Queue Deleting Files Before Test Verification

**Problem:** The cleanup worker runs in the globalSetup context and processes cleanup jobs even when tests spy on `cleanupQueue.add()`. The spy intercepts calls in the test context, but jobs already queued run in the worker's context.

**Affected Tests:** EXIF/PNG metadata stripping tests that need to verify file contents before deletion

**Solution:** Drain and pause the cleanup queue before the test:

```typescript
const { cleanupQueue } = await import('../../services/queues.server');
await cleanupQueue.drain(); // Remove existing jobs
await cleanupQueue.pause(); // Prevent new jobs from processing
// ... run test ...
await cleanupQueue.resume(); // Restore normal operation
```
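In practice it is safer to resume in a `finally` block, so a failing assertion cannot leave the queue paused for subsequent tests:

```typescript
await cleanupQueue.drain();
await cleanupQueue.pause();
try {
  // ... run test ...
} finally {
  await cleanupQueue.resume(); // always restore, even if an assertion throws
}
```
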
### 3. Cache Invalidation After Direct Database Inserts

**Problem:** Tests that insert data directly via SQL (bypassing the service layer) don't trigger cache invalidation. Subsequent API calls return stale cached data.

**Affected Tests:** Any test using `pool.query()` to insert flyers, stores, or other cached entities

**Solution:** Manually invalidate the cache after direct inserts:

```typescript
await pool.query('INSERT INTO flyers ...');
await cacheService.invalidateFlyers(); // Clear stale cache
```
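One way to keep this from being forgotten is a small fixture helper; this is a sketch, and the helper name, columns, and import path are assumptions:

```typescript
import type { Pool } from 'pg';
import { cacheService } from '../../services/cache.server'; // assumed path

// Hypothetical helper: every direct-SQL fixture insert also invalidates the cache.
export async function insertFlyerFixture(pool: Pool, storeId: number, title: string) {
  await pool.query('INSERT INTO flyers (store_id, title) VALUES ($1, $2)', [storeId, title]);
  await cacheService.invalidateFlyers(); // keep the cache consistent with the DB
}
```
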
### 4. Unique Filenames Required for Test Isolation

**Problem:** Multer generates predictable filenames in test environments, causing race conditions when multiple tests upload files concurrently or in sequence.

**Affected Tests:** Flyer processing tests, file upload tests

**Solution:** Always use unique filenames with timestamps:

```typescript
// In multer.middleware.ts
const uniqueSuffix = `${Date.now()}-${Math.round(Math.random() * 1e9)}`;
cb(null, `${file.fieldname}-${uniqueSuffix}-${sanitizedOriginalName}`);
```
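A fuller sketch of the same idea as a complete Multer storage config; the destination directory and the sanitization rule are assumptions:

```typescript
import path from 'node:path';
import multer from 'multer';

const storage = multer.diskStorage({
  destination: '/tmp/uploads', // assumed upload directory
  filename: (_req, file, cb) => {
    // Reduce the client-supplied name to a safe basename.
    const sanitizedOriginalName = path.basename(file.originalname).replace(/[^\w.-]/g, '_');
    const uniqueSuffix = `${Date.now()}-${Math.round(Math.random() * 1e9)}`;
    cb(null, `${file.fieldname}-${uniqueSuffix}-${sanitizedOriginalName}`);
  },
});

export const upload = multer({ storage });
```
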
### 5. Response Format Mismatches

**Problem:** API response formats may change over time, causing tests that still expect the old format to fail.

**Common Issues:**

- `response.body.data.jobId` vs `response.body.data.job.id`
- Nested objects vs flat response structures
- Type coercion (string vs number for IDs)

**Solution:** Log response bodies during debugging and update test assertions to match the actual API contract.
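For example, a Supertest/Vitest sketch; the endpoint, field names, and response shape are illustrative, not the project's actual contract:

```typescript
import request from 'supertest';
import { expect, it } from 'vitest';
import { app } from '../../app'; // assumed export

it('returns the job in the current response shape', async () => {
  const response = await request(app)
    .post('/api/flyers')
    .attach('flyer', 'src/tests/fixtures/flyer.pdf');

  // Log the full body once to see the actual contract before asserting.
  console.log(JSON.stringify(response.body, null, 2));

  // Coerce IDs to one type so string-vs-number differences don't flake.
  expect(String(response.body.data.job.id)).toMatch(/^\d+$/);
});
```
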
### 6. External Service Availability

**Problem:** Tests that depend on external services (PM2, Redis health checks) fail when those services aren't available in the test environment.

**Solution:** Use try/catch with graceful degradation, or mock the external service checks.
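A sketch of the graceful-degradation approach; the health-check helper is a hypothetical stand-in for whatever service check the test depends on:

```typescript
import { expect, it } from 'vitest';
import { checkRedisHealth } from '../../services/health.server'; // assumed helper

it('reports Redis as healthy', async () => {
  let health: unknown;
  try {
    health = await checkRedisHealth();
  } catch {
    console.warn('Redis unavailable; skipping live health assertions');
    return; // degrade gracefully instead of failing the suite
  }
  expect(health).toBeTruthy();
});
```
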
## MCP Servers

The following MCP servers are configured for this project:

| Server                | Purpose                                     |
| --------------------- | ------------------------------------------- |
| gitea-projectium      | Gitea API for gitea.projectium.com          |
| gitea-torbonium       | Gitea API for gitea.torbonium.com           |
| podman                | Container management                        |
| filesystem            | File system access                          |
| fetch                 | Web fetching                                |
| markitdown            | Convert documents to markdown               |
| sequential-thinking   | Step-by-step reasoning                      |
| memory                | Knowledge graph persistence                 |
| postgres              | Direct database queries (localhost:5432)    |
| playwright            | Browser automation and testing              |
| redis                 | Redis cache inspection (localhost:6379)     |
| sentry-selfhosted-mcp | Error tracking via Bugsink (localhost:8000) |

**Note:** MCP servers are currently available only in the **Claude CLI**. Due to a bug in the Claude VS Code extension, MCP servers do not work there yet.

### Sentry/Bugsink MCP Server Setup (ADR-015)

To enable Claude Code to query and analyze application errors from Bugsink:

1. **Install the MCP server**:

   ```bash
   # Clone the sentry-selfhosted-mcp repository
   git clone https://github.com/ddfourtwo/sentry-selfhosted-mcp.git
   cd sentry-selfhosted-mcp
   npm install
   ```

2. **Configure Claude Code** (add to `.claude/mcp.json`):

   ```json
   {
     "sentry-selfhosted-mcp": {
       "command": "node",
       "args": ["/path/to/sentry-selfhosted-mcp/dist/index.js"],
       "env": {
         "SENTRY_URL": "http://localhost:8000",
         "SENTRY_AUTH_TOKEN": "<get-from-bugsink-ui>",
         "SENTRY_ORG_SLUG": "flyer-crawler"
       }
     }
   }
   ```

3. **Get the auth token**:
   - Navigate to the Bugsink UI at `http://localhost:8000`
   - Log in with admin credentials
   - Go to Settings > API Keys
   - Create a new API key with read access

4. **Available capabilities**:
   - List projects and issues
   - View detailed error events
   - Search by error message or stack trace
   - Update issue status (resolve, ignore)
   - Add comments to issues