# Testing Guide

## Overview

This project has comprehensive test coverage, including unit tests, integration tests, and E2E tests. All tests must be run in the **Linux dev container environment** for reliable results.

## Test Execution Environment

**CRITICAL**: All tests and type-checking MUST be executed inside the dev container (Linux environment).

### Why Linux Only?

- Path separators: the code uses POSIX-style paths (`/`), which may break on Windows (see the sketch after this list)
- TypeScript compilation can behave differently on Windows than on Linux (for example, case-insensitive file systems can mask import path casing errors)
- Shell scripts and external dependencies assume a Linux environment
- Test results from Windows are **unreliable and should be ignored**
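As an illustration of the path-separator issue (a minimal sketch, not code taken from this project), building paths with the platform default on Windows produces backslashes that POSIX-only code will not match:

```typescript
import path from 'node:path';

// On Windows, path.join uses "\" as the separator, so string comparisons
// and glob patterns written with "/" stop matching.
const platformPath = path.join('src', 'tests', 'assets', 'test-flyer-image.jpg');
// -> "src\\tests\\assets\\test-flyer-image.jpg" on Windows

// Forcing POSIX semantics keeps the path stable across hosts.
const posixPath = path.posix.join('src', 'tests', 'assets', 'test-flyer-image.jpg');
// -> "src/tests/assets/test-flyer-image.jpg" everywhere

console.log(platformPath === posixPath); // false on Windows, true on Linux
```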
### Running Tests Correctly

#### Option 1: Inside Dev Container (Recommended)

Open VS Code and use "Reopen in Container", then:

```bash
npm test                  # Run all tests
npm run test:unit         # Run unit tests only
npm run test:integration  # Run integration tests
npm run type-check        # Run TypeScript type checking
```

#### Option 2: Via Podman from Windows Host

From the Windows host, execute commands in the container:

```bash
# Run unit tests (2900+ tests - pipe to file for AI processing)
podman exec -it flyer-crawler-dev npm run test:unit 2>&1 | tee test-results.txt

# Run integration tests
podman exec -it flyer-crawler-dev npm run test:integration

# Run type checking
podman exec -it flyer-crawler-dev npm run type-check

# Run specific test file
podman exec -it flyer-crawler-dev npm test -- --run src/hooks/useAuth.test.tsx
```
## Type Checking

TypeScript type checking is performed using `tsc --noEmit`.

### Type Check Command

```bash
npm run type-check
```

### Type Check Validation

The type-check command will:

- Exit with code 0 if no errors are found
- Exit with a non-zero code and print errors if type errors exist
- Check all files in the `src/` directory as defined in `tsconfig.json`

**IMPORTANT**: Type-check on Windows may not show errors reliably. Always verify type-check results by running in the dev container.

### Verifying Type Check Works

To verify type-check is working correctly:

1. Run type-check in the dev container: `podman exec -it flyer-crawler-dev npm run type-check`
2. Check for output - errors will be displayed with file paths and line numbers
3. No output + exit code 0 = no type errors

Example error output:

```
src/pages/MyDealsPage.tsx:68:31 - error TS2339: Property 'store_name' does not exist on type 'WatchedItemDeal'.

68   <span>{deal.store_name}</span>
                  ~~~~~~~~~~
```
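To confirm the exit code explicitly from the host, a quick check like the following should work (assuming the same container name used above):

```bash
podman exec flyer-crawler-dev npm run type-check
echo "type-check exit code: $?"   # 0 = clean, non-zero = type errors
```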
## Pre-Commit Hooks

The project uses Husky and lint-staged for pre-commit validation:

```bash
# .husky/pre-commit
npx lint-staged
```

Lint-staged configuration (`.lintstagedrc.json`):

```json
{
  "*.{js,jsx,ts,tsx}": ["eslint --fix --no-color", "prettier --write"],
  "*.{json,md,css,html,yml,yaml}": ["prettier --write"]
}
```

**Note**: The `--no-color` flag prevents ANSI color codes from breaking file path links in git output.
## Test Suite Structure

### Unit Tests (~2900 tests)

Located throughout the `src/` directory alongside source files, with `.test.ts` or `.test.tsx` extensions.

```bash
npm run test:unit
```

### Integration Tests (5 test files)

Located in `src/tests/integration/`:

- `admin.integration.test.ts`
- `flyer.integration.test.ts`
- `price.integration.test.ts`
- `public.routes.integration.test.ts`
- `receipt.integration.test.ts`

Requires the PostgreSQL and Redis services to be running.

```bash
npm run test:integration
```
### E2E Tests (3 test files)

Located in `src/tests/e2e/`:

- `deals-journey.e2e.test.ts`
- `budget-journey.e2e.test.ts`
- `receipt-journey.e2e.test.ts`

Requires all services (PostgreSQL, Redis, BullMQ workers) running.

```bash
npm run test:e2e
```

## Test Result Interpretation

- Tests that **pass on Windows but fail on Linux** = **BROKEN tests** (must be fixed)
- Tests that **fail on Windows but pass on Linux** = **PASSING tests** (acceptable)
- Always use **Linux (dev container) results** as the source of truth
## Test Helpers

### Store Test Helpers

Located in `src/tests/utils/storeHelpers.ts`:

```typescript
// Create a store with a location in one call
const store = await createStoreWithLocation({
  storeName: 'Test Store',
  address: {
    address_line_1: '123 Main St',
    city: 'Toronto',
    province_state: 'ON',
    postal_code: 'M1M 1M1',
  },
  pool,
  log,
});

// Cleanup stores and their locations
await cleanupStoreLocations([storeId1, storeId2], pool, log);
```

### Mock Factories

Located in `src/tests/utils/mockFactories.ts`:

```typescript
// Create mock data for tests
const mockStore = createMockStore({ name: 'Test Store' });
const mockAddress = createMockAddress({ city: 'Toronto' });
const mockStoreLocation = createMockStoreLocationWithAddress();
const mockStoreWithLocations = createMockStoreWithLocations({
  locations: [{ address: { city: 'Toronto' } }],
});
```
### Test Assets

Test images and other assets are located in `src/tests/assets/`:

| File                   | Purpose                                        |
| ---------------------- | ---------------------------------------------- |
| `test-flyer-image.jpg` | Sample flyer image for upload/processing tests |
| `test-flyer-icon.png`  | Sample flyer icon (64x64) for thumbnail tests  |

These images are copied to `public/flyer-images/` by the seed script (`npm run seed`) and served via NGINX at `/flyer-images/`.
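A minimal sketch of how an upload test might use one of these assets with Supertest; the app import, endpoint, and field name below are assumptions, not the project's actual API:

```typescript
import path from 'node:path';
import request from 'supertest';
import { expect, it } from 'vitest';
import { app } from '../../server'; // assumed export; adjust to the real app entry point

it('accepts a flyer image upload', async () => {
  // Resolve the shared test asset rather than duplicating fixtures per test
  const imagePath = path.join(__dirname, '../assets/test-flyer-image.jpg');

  const response = await request(app)
    .post('/api/v1/flyers') // hypothetical endpoint
    .attach('image', imagePath); // 'image' field name is an assumption

  expect(response.status).toBe(201);
});
```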
## Known Integration Test Issues

See `CLAUDE.md` for documentation of common integration test issues and their solutions, including:

1. Vitest globalSetup context isolation
2. BullMQ cleanup queue timing issues
3. Cache invalidation after direct database inserts (see the sketch after this list)
4. Unique filename requirements for file uploads
5. Response format mismatches
6. External service availability
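For item 3, a hedged sketch of the general pattern: a test that inserts rows directly with `pg` should clear any related cache entry before hitting the API, otherwise the route may serve stale cached data. The table, column, cache key, and client handles shown here are placeholders, not the project's actual names:

```typescript
import type { Pool } from 'pg';
import type { RedisClientType } from 'redis';

// Hypothetical helper: pool, redis client, table, and cache key are placeholders
// wired up by the project's own test setup.
export async function insertFlyerDirectly(
  pool: Pool,
  redis: RedisClientType,
  storeId: number,
): Promise<void> {
  // A direct insert bypasses the API layer, so nothing invalidates the cache for us
  await pool.query('INSERT INTO flyers (store_id) VALUES ($1)', [storeId]);

  // Clear the cached listing (key name is an assumption) so the next
  // API request rebuilds it from the database
  await redis.del('flyers:list');
}
```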
## Continuous Integration

Tests run automatically on:

- Pre-commit (via Husky hooks)
- Pull request creation/update (via Gitea CI/CD)
- Merge to main branch (via Gitea CI/CD)

CI/CD configuration:

- `.gitea/workflows/deploy-to-prod.yml`
- `.gitea/workflows/deploy-to-test.yml`

## Coverage Reports

Test coverage is tracked using Vitest's built-in coverage tools.

```bash
npm run test:coverage
```

Coverage reports are generated in the `coverage/` directory.
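A minimal sketch of what the corresponding Vitest coverage configuration might look like; the project's actual `vite.config.ts` settings may differ:

```typescript
// vite.config.ts (illustrative excerpt, not the project's actual configuration)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',               // Vitest's V8-based coverage provider
      reporter: ['text', 'html'],   // console summary plus browsable HTML report
      reportsDirectory: 'coverage', // matches the coverage/ directory noted above
    },
  },
});
```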
## Debugging Tests

### Enable Verbose Logging

```bash
# Run tests with verbose output
npm test -- --reporter=verbose

# Run specific test with logging
DEBUG=* npm test -- --run src/path/to/test.test.ts
```

### Using Vitest UI

```bash
npm run test:ui
```

Opens a browser-based test runner with filtering and debugging capabilities.
## Best Practices

1. **Always run tests in the dev container** - never trust Windows test results
2. **Run type-check before committing** - catches TypeScript errors early
3. **Use test helpers** - `createStoreWithLocation()`, mock factories, etc.
4. **Clean up test data** - use cleanup helpers in `afterEach`/`afterAll`
5. **Verify cache invalidation** - tests that insert data directly must invalidate the cache
6. **Use unique filenames** - file upload tests need timestamp-based filenames (see the sketch after this list)
7. **Check exit codes** - `npm run type-check` returns 0 on success, non-zero on error
8. **Use `req.originalUrl` in error logs** - never hardcode API paths in error messages
9. **Use versioned API paths** - always use the `/api/v1/` prefix in test requests
10. **Use `vi.hoisted()` for module mocks** - ensure mocks are available during module initialization
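A brief sketch of practices 4 and 6 together: generate a unique, timestamp-based filename and clean up in `afterAll`. The helper name reuses `cleanupStoreLocations` shown earlier; the `../setup` import and the surrounding test shape are assumptions about how a suite might be wired:

```typescript
import { afterAll, describe, expect, it } from 'vitest';
import { cleanupStoreLocations } from '../utils/storeHelpers'; // path assumes a test under src/tests/
import { pool, log } from '../setup'; // assumed test setup exports; adjust to the project's real wiring

const createdStoreIds: number[] = [];

describe('flyer upload', () => {
  afterAll(async () => {
    // Practice 4: clean up everything this suite created
    await cleanupStoreLocations(createdStoreIds, pool, log);
  });

  it('uploads a flyer image under a unique filename', () => {
    // Practice 6: a timestamp-based name avoids collisions across repeated runs
    const filename = `test-flyer-${Date.now()}.jpg`;
    // ... attach src/tests/assets/test-flyer-image.jpg as `filename` here ...
    expect(filename).toMatch(/^test-flyer-\d+\.jpg$/);
  });
});
```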
## Testing Error Log Messages

When testing route error handlers, ensure assertions account for versioned API paths.

### Problem: Hardcoded Paths Break Tests

Error log messages with hardcoded paths cause test failures when API versions change:

```typescript
// Production code (INCORRECT - hardcoded path)
req.log.error({ error }, 'Error in /api/flyers/:id:');

// Test expects versioned path
expect(logSpy).toHaveBeenCalledWith(
  expect.objectContaining({ error: expect.any(Error) }),
  expect.stringContaining('/api/v1/flyers'), // FAILS - actual log has /api/flyers
);
```

### Solution: Dynamic Paths with `req.originalUrl`

Production code should use `req.originalUrl` for dynamic path logging:

```typescript
// Production code (CORRECT - dynamic path)
req.log.error({ error }, `Error in ${req.originalUrl.split('?')[0]}:`);
```

### Writing Robust Test Assertions

```typescript
// Good - matches versioned path
expect(logSpy).toHaveBeenCalledWith(
  expect.objectContaining({ error: expect.any(Error) }),
  expect.stringContaining('/api/v1/flyers'),
);

// Good - flexible match for any version
expect(logSpy).toHaveBeenCalledWith(
  expect.objectContaining({ error: expect.any(Error) }),
  expect.stringMatching(/\/api\/v\d+\/flyers/),
);

// Bad - hardcoded unversioned path
expect(logSpy).toHaveBeenCalledWith(
  expect.objectContaining({ error: expect.any(Error) }),
  'Error in /api/flyers:', // Will fail with versioned routes
);
```

See [Error Logging Path Patterns](ERROR-LOGGING-PATHS.md) for complete documentation.
## API Versioning in Tests (ADR-008, ADR-057)

All API endpoints use the `/api/v1/` prefix. Tests must use versioned paths.

### Configuration

API base URLs are configured centrally in the Vitest config files:

| Config File                    | Environment Variable | Value                          |
| ------------------------------ | -------------------- | ------------------------------ |
| `vite.config.ts`               | `VITE_API_BASE_URL`  | `/api/v1`                      |
| `vitest.config.e2e.ts`         | `VITE_API_BASE_URL`  | `http://localhost:3098/api/v1` |
| `vitest.config.integration.ts` | `VITE_API_BASE_URL`  | `http://localhost:3099/api/v1` |

### Writing API Tests

```typescript
// Good - versioned path
const response = await request.post('/api/v1/auth/login').send({...});

// Bad - unversioned path (will fail)
const response = await request.post('/api/auth/login').send({...});
```
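Rather than hardcoding the versioned prefix in every request, a test could derive it from the `VITE_API_BASE_URL` value configured above. The `apiUrl` helper below is a hypothetical sketch, not the project's actual utility:

```typescript
// Hypothetical helper: builds request paths from the centrally configured base URL
const API_BASE_URL: string = import.meta.env.VITE_API_BASE_URL ?? '/api/v1';

export function apiUrl(endpoint: string): string {
  // Join with exactly one slash regardless of how the pieces are written
  return `${API_BASE_URL.replace(/\/+$/, '')}/${endpoint.replace(/^\/+/, '')}`;
}

// Usage in a test (supertest request object assumed, as in the examples above):
// const response = await request.post(apiUrl('auth/login')).send({ ... });
```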
### Migration Checklist

When the API version changes (e.g., v1 to v2):

1. Update all Vitest config `VITE_API_BASE_URL` values
2. Search and replace API paths in E2E tests: `grep -r "/api/v1/" src/tests/e2e/`
3. Search and replace API paths in integration tests
4. Verify route handler error logs use `req.originalUrl`
5. Run the full test suite in the dev container

See [ADR-057](../adr/0057-test-remediation-post-api-versioning.md) for complete migration guidance.
## vi.hoisted() Pattern for Module Mocks

When mocking modules that are imported at module initialization time (such as queues or database connections), use `vi.hoisted()` so the mock objects already exist when the hoisted `vi.mock` factory runs.

### Problem: Mock Not Available During Import

```typescript
// BAD: Mock might not be ready when module imports it
vi.mock('../services/queues.server', () => ({
  flyerQueue: { getJobCounts: vi.fn() }, // May not exist yet
}));

import healthRouter from './health.routes'; // Imports queues.server
```

### Solution: Use vi.hoisted()

```typescript
// GOOD: Mocks are created during hoisting, before vi.mock runs
const { mockQueuesModule } = vi.hoisted(() => {
  const createMockQueue = () => ({
    getJobCounts: vi.fn().mockResolvedValue({
      waiting: 0,
      active: 0,
      failed: 0,
      delayed: 0,
    }),
  });

  return {
    mockQueuesModule: {
      flyerQueue: createMockQueue(),
      emailQueue: createMockQueue(),
      // ... additional queues
    },
  };
});

// Now the mock object exists when the vi.mock factory runs
vi.mock('../services/queues.server', () => mockQueuesModule);

// Safe to import after mocks are defined
import healthRouter from './health.routes';
```

See [ADR-057](../adr/0057-test-remediation-post-api-versioning.md) for additional patterns.
## Testing Role-Based Component Visibility

When testing components that render differently based on user roles:

### Pattern: Separate Test Cases by Role

```typescript
describe('for authenticated users', () => {
  beforeEach(() => {
    mockedUseAuth.mockReturnValue({
      authStatus: 'AUTHENTICATED',
      userProfile: createMockUserProfile({ role: 'user' }),
    });
  });

  it('renders user-accessible components', () => {
    render(<MyComponent />);
    expect(screen.getByTestId('user-component')).toBeInTheDocument();
    // Admin-only should NOT be present
    expect(screen.queryByTestId('admin-only')).not.toBeInTheDocument();
  });
});

describe('for admin users', () => {
  beforeEach(() => {
    mockedUseAuth.mockReturnValue({
      authStatus: 'AUTHENTICATED',
      userProfile: createMockUserProfile({ role: 'admin' }),
    });
  });

  it('renders admin-only components', () => {
    render(<MyComponent />);
    expect(screen.getByTestId('admin-only')).toBeInTheDocument();
  });
});
```

### Key Points

1. Create separate `describe` blocks for each role
2. Set up role-specific mocks in `beforeEach`
3. Test both presence AND absence of role-gated components
4. Use `screen.queryByTestId()` for elements that should NOT exist
## CSS Class Assertions After UI Refactors

After frontend style changes, update test assertions to match the new CSS classes.

### Handling Tailwind Class Changes

```typescript
// Before refactor
expect(selectedItem).toHaveClass('ring-2', 'ring-brand-primary');

// After refactor - update to new classes
expect(selectedItem).toHaveClass('border-brand-primary', 'bg-teal-50/50');
```

### Flexible Matching

For complex class combinations, consider partial matching:

```typescript
// Check for key classes, ignore utility classes
expect(element).toHaveClass('border-brand-primary');

// Or use regex for patterns
expect(element.className).toMatch(/dark:bg-teal-\d+/);
```

See [ADR-057](../adr/0057-test-remediation-post-api-versioning.md) for lessons learned from the test remediation effort.