# Testing Guide

## Overview
This project has comprehensive test coverage including unit tests, integration tests, and E2E tests. All tests must be run in the Linux dev container environment for reliable results.
## Test Execution Environment

CRITICAL: All tests and type-checking MUST be executed inside the dev container (Linux environment).

### Why Linux Only?

- Path separators: code uses POSIX-style paths (`/`), which may break on Windows
- TypeScript compilation works differently on Windows vs Linux
- Shell scripts and external dependencies assume Linux
- Test results from Windows are unreliable and should be ignored
### Running Tests Correctly

#### Option 1: Inside Dev Container (Recommended)

Open VS Code and use "Reopen in Container", then:

```bash
npm test                   # Run all tests
npm run test:unit          # Run unit tests only
npm run test:integration   # Run integration tests
npm run type-check         # Run TypeScript type checking
```
#### Option 2: Via Podman from Windows Host

From the Windows host, execute commands in the container:

```bash
# Run unit tests (2900+ tests - pipe to file for AI processing)
podman exec -it flyer-crawler-dev npm run test:unit 2>&1 | tee test-results.txt

# Run integration tests
podman exec -it flyer-crawler-dev npm run test:integration

# Run type checking
podman exec -it flyer-crawler-dev npm run type-check

# Run specific test file
podman exec -it flyer-crawler-dev npm test -- --run src/hooks/useAuth.test.tsx
```
## Type Checking

TypeScript type checking is performed using `tsc --noEmit`.

### Type Check Command

```bash
npm run type-check
```
### Type Check Validation
The type-check command will:
- Exit with code 0 if no errors are found
- Exit with non-zero code and print errors if type errors exist
- Check all files in the `src/` directory, as defined in `tsconfig.json`
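For example, you can confirm the exit code directly in the shell:

```bash
# Run type-check and print its exit code (0 = no type errors)
npm run type-check
echo "type-check exit code: $?"

# Or gate a follow-up step on success
npm run type-check && echo "No type errors found"
```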
IMPORTANT: Type-check on Windows may not show errors reliably. Always verify type-check results by running in the dev container.
### Verifying Type Check Works

To verify type-check is working correctly:

- Run type-check in the dev container: `podman exec -it flyer-crawler-dev npm run type-check`
- Check the output: errors are displayed with file paths and line numbers
- No output + exit code 0 = no type errors
Example error output:
```text
src/pages/MyDealsPage.tsx:68:31 - error TS2339: Property 'store_name' does not exist on type 'WatchedItemDeal'.

68     <span>{deal.store_name}</span>
                   ~~~~~~~~~~
```
## Pre-Commit Hooks

The project uses Husky and lint-staged for pre-commit validation:

```bash
# .husky/pre-commit
npx lint-staged
```

Lint-staged configuration (`.lintstagedrc.json`):

```json
{
  "*.{js,jsx,ts,tsx}": ["eslint --fix --no-color", "prettier --write"],
  "*.{json,md,css,html,yml,yaml}": ["prettier --write"]
}
```

Note: The `--no-color` flag prevents ANSI color codes from breaking file path links in git output.
## Test Suite Structure

### Unit Tests (~2900 tests)

Located throughout the `src/` directory alongside source files, with `.test.ts` or `.test.tsx` extensions.

```bash
npm run test:unit
```
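For reference, a minimal colocated unit test follows the usual Vitest layout; the module and function here are hypothetical, shown only to illustrate the pattern:

```typescript
// src/utils/formatPrice.test.ts - hypothetical module, for illustration only
import { describe, expect, it } from 'vitest';
import { formatPrice } from './formatPrice'; // hypothetical function under test

describe('formatPrice', () => {
  it('formats a number as a dollar amount', () => {
    expect(formatPrice(4.5)).toBe('$4.50');
  });
});
```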
### Integration Tests (5 test files)

Located in `src/tests/integration/`:

- `admin.integration.test.ts`
- `flyer.integration.test.ts`
- `price.integration.test.ts`
- `public.routes.integration.test.ts`
- `receipt.integration.test.ts`

Requires PostgreSQL and Redis services running.

```bash
npm run test:integration
```
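If the integration tests fail to connect, you can sanity-check both services first. The commands below assume `pg_isready` and `redis-cli` are installed inside the dev container:

```bash
# Verify PostgreSQL is accepting connections (assumes pg_isready is available)
podman exec -it flyer-crawler-dev pg_isready

# Verify Redis responds (assumes redis-cli is available); expect: PONG
podman exec -it flyer-crawler-dev redis-cli ping
```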
### E2E Tests (3 test files)

Located in `src/tests/e2e/`:

- `deals-journey.e2e.test.ts`
- `budget-journey.e2e.test.ts`
- `receipt-journey.e2e.test.ts`

Requires all services (PostgreSQL, Redis, BullMQ workers) running.

```bash
npm run test:e2e
```
## Test Result Interpretation
- Tests that pass on Windows but fail on Linux = BROKEN tests (must be fixed)
- Tests that fail on Windows but pass on Linux = PASSING tests (acceptable)
- Always use Linux (dev container) results as the source of truth
## Test Helpers

### Store Test Helpers

Located in `src/tests/utils/storeHelpers.ts`:

```typescript
// Create a store with a location in one call
const store = await createStoreWithLocation({
  storeName: 'Test Store',
  address: {
    address_line_1: '123 Main St',
    city: 'Toronto',
    province_state: 'ON',
    postal_code: 'M1M 1M1',
  },
  pool,
  log,
});

// Cleanup stores and their locations
await cleanupStoreLocations([storeId1, storeId2], pool, log);
```
### Mock Factories

Located in `src/tests/utils/mockFactories.ts`:

```typescript
// Create mock data for tests
const mockStore = createMockStore({ name: 'Test Store' });
const mockAddress = createMockAddress({ city: 'Toronto' });
const mockStoreLocation = createMockStoreLocationWithAddress();
const mockStoreWithLocations = createMockStoreWithLocations({
  locations: [{ address: { city: 'Toronto' } }],
});
```
## Test Assets

Test images and other assets are located in `src/tests/assets/`:

| File | Purpose |
|---|---|
| `test-flyer-image.jpg` | Sample flyer image for upload/processing tests |
| `test-flyer-icon.png` | Sample flyer icon (64x64) for thumbnail tests |

These images are copied to `public/flyer-images/` by the seed script (`npm run seed`) and served via NGINX at `/flyer-images/`.
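As a sketch of how these assets might be used in an upload test (the endpoint, form field, expected status, and app import are assumptions; the timestamp-based filename follows the unique-filename guidance below):

```typescript
import path from 'node:path';
import { expect, it } from 'vitest';
import request from 'supertest';
import { app } from '../../server'; // hypothetical app export

it('uploads a flyer image', async () => {
  const asset = path.resolve('src/tests/assets/test-flyer-image.jpg'); // resolved from the repo root
  // Timestamp-based filename keeps uploads unique across test runs
  const uploadName = `test-flyer-${Date.now()}.jpg`;

  const response = await request(app)
    .post('/api/v1/flyers') // hypothetical endpoint
    .attach('file', asset, uploadName); // 'file' field name is an assumption

  expect(response.status).toBe(201); // expected status is an assumption
});
```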
## Known Integration Test Issues
See CLAUDE.md for documentation of common integration test issues and their solutions, including:
- Vitest globalSetup context isolation
- BullMQ cleanup queue timing issues
- Cache invalidation after direct database inserts
- Unique filename requirements for file uploads
- Response format mismatches
- External service availability
## Continuous Integration
Tests run automatically on:
- Pre-commit (via Husky hooks)
- Pull request creation/update (via Gitea CI/CD)
- Merge to main branch (via Gitea CI/CD)
CI/CD configuration:

- `.gitea/workflows/deploy-to-prod.yml`
- `.gitea/workflows/deploy-to-test.yml`
## Coverage Reports

Test coverage is tracked using Vitest's built-in coverage tools.

```bash
npm run test:coverage
```

Coverage reports are generated in the `coverage/` directory.
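If you need to adjust providers or reporters, Vitest exposes coverage options under `test.coverage` in the config; a minimal sketch (these values are illustrative, not the project's actual settings):

```typescript
// vitest.config.ts - illustrative coverage settings only
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',             // Vitest's default coverage provider
      reporter: ['text', 'html'], // console summary plus browsable HTML report
      reportsDirectory: 'coverage',
    },
  },
});
```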
## Debugging Tests

### Enable Verbose Logging

```bash
# Run tests with verbose output
npm test -- --reporter=verbose

# Run specific test with logging
DEBUG=* npm test -- --run src/path/to/test.test.ts
```

### Using Vitest UI

```bash
npm run test:ui
```
Opens a browser-based test runner with filtering and debugging capabilities.
## Best Practices

- Always run tests in the dev container - never trust Windows test results
- Run type-check before committing - catches TypeScript errors early
- Use test helpers - `createStoreWithLocation()`, mock factories, etc.
- Clean up test data - use cleanup helpers in `afterEach`/`afterAll` (see the sketch after this list)
- Verify cache invalidation - tests that insert data directly must invalidate the cache
- Use unique filenames - file upload tests need timestamp-based filenames
- Check exit codes - `npm run type-check` returns 0 on success, non-zero on error
- Use `req.originalUrl` in error logs - never hardcode API paths in error messages
- Use versioned API paths - always use the `/api/v1/` prefix in test requests
- Use `vi.hoisted()` for module mocks - ensures mocks are available during module initialization
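As a sketch of the cleanup pattern using the store helpers above (the `../tests/setup` import for `pool` and `log` is hypothetical, and the `id` shape on the created store is an assumption):

```typescript
import { afterAll, describe, expect, it } from 'vitest';
import { createStoreWithLocation, cleanupStoreLocations } from '../tests/utils/storeHelpers';
import { pool, log } from '../tests/setup'; // hypothetical shared test setup

describe('store queries', () => {
  const createdStoreIds: number[] = []; // id type is an assumption

  it('creates a store with a location', async () => {
    const store = await createStoreWithLocation({
      storeName: 'Cleanup Example Store',
      address: {
        address_line_1: '1 Test Ave',
        city: 'Toronto',
        province_state: 'ON',
        postal_code: 'M1M 1M1',
      },
      pool,
      log,
    });
    createdStoreIds.push(store.id);
    expect(store.id).toBeDefined();
  });

  // Remove everything this suite created, even if assertions fail
  afterAll(async () => {
    await cleanupStoreLocations(createdStoreIds, pool, log);
  });
});
```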
## Testing Error Log Messages

When testing route error handlers, ensure assertions account for versioned API paths.

### Problem: Hardcoded Paths Break Tests

Error log messages with hardcoded paths cause test failures when API versions change:

```typescript
// Production code (INCORRECT - hardcoded path)
req.log.error({ error }, 'Error in /api/flyers/:id:');

// Test expects versioned path
expect(logSpy).toHaveBeenCalledWith(
  expect.objectContaining({ error: expect.any(Error) }),
  expect.stringContaining('/api/v1/flyers'), // FAILS - actual log has /api/flyers
);
```

### Solution: Dynamic Paths with `req.originalUrl`

Production code should use `req.originalUrl` for dynamic path logging:

```typescript
// Production code (CORRECT - dynamic path)
req.log.error({ error }, `Error in ${req.originalUrl.split('?')[0]}:`);
```
### Writing Robust Test Assertions

```typescript
// Good - matches versioned path
expect(logSpy).toHaveBeenCalledWith(
  expect.objectContaining({ error: expect.any(Error) }),
  expect.stringContaining('/api/v1/flyers'),
);

// Good - flexible match for any version
expect(logSpy).toHaveBeenCalledWith(
  expect.objectContaining({ error: expect.any(Error) }),
  expect.stringMatching(/\/api\/v\d+\/flyers/),
);

// Bad - hardcoded unversioned path
expect(logSpy).toHaveBeenCalledWith(
  expect.objectContaining({ error: expect.any(Error) }),
  'Error in /api/flyers:', // Will fail with versioned routes
);
```
See Error Logging Path Patterns for complete documentation.
## API Versioning in Tests (ADR-008, ADR-057)

All API endpoints use the `/api/v1/` prefix. Tests must use versioned paths.

### Configuration

API base URLs are configured centrally in Vitest config files:

| Config File | Environment Variable | Value |
|---|---|---|
| `vite.config.ts` | `VITE_API_BASE_URL` | `/api/v1` |
| `vitest.config.e2e.ts` | `VITE_API_BASE_URL` | `http://localhost:3098/api/v1` |
| `vitest.config.integration.ts` | `VITE_API_BASE_URL` | `http://localhost:3099/api/v1` |
### Writing API Tests

```typescript
// Good - versioned path
const response = await request.post('/api/v1/auth/login').send({...});

// Bad - unversioned path (will fail)
const response = await request.post('/api/auth/login').send({...});
```
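One way to keep tests resilient to version bumps is to derive paths from the configured base URL instead of hardcoding the prefix; a sketch, assuming `VITE_API_BASE_URL` is visible to the test runtime:

```typescript
// Inside an async test - build request paths from the configured base URL
const API_BASE = import.meta.env.VITE_API_BASE_URL ?? '/api/v1'; // fallback is an assumption

const response = await request.post(`${API_BASE}/auth/login`).send({
  email: 'user@example.com', // illustrative credentials
  password: 'example-password',
});
```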
### Migration Checklist

When the API version changes (e.g., v1 to v2):

- Update all Vitest config `VITE_API_BASE_URL` values
- Search and replace API paths in E2E tests: `grep -r "/api/v1/" src/tests/e2e/`
- Search and replace API paths in integration tests
- Verify route handler error logs use `req.originalUrl`
- Run the full test suite in the dev container
See ADR-057 for complete migration guidance.
## `vi.hoisted()` Pattern for Module Mocks

When mocking modules that are imported at module initialization time (like queues or database connections), use `vi.hoisted()` so the mock objects exist before the hoisted `vi.mock()` factories run.

### Problem: Mock Not Available During Import

```typescript
// BAD: Mock might not be ready when module imports it
vi.mock('../services/queues.server', () => ({
  flyerQueue: { getJobCounts: vi.fn() }, // May not exist yet
}));

import healthRouter from './health.routes'; // Imports queues.server
```
### Solution: Use `vi.hoisted()`

```typescript
// GOOD: Mocks are created during hoisting, before vi.mock runs
const { mockQueuesModule } = vi.hoisted(() => {
  const createMockQueue = () => ({
    getJobCounts: vi.fn().mockResolvedValue({
      waiting: 0,
      active: 0,
      failed: 0,
      delayed: 0,
    }),
  });

  return {
    mockQueuesModule: {
      flyerQueue: createMockQueue(),
      emailQueue: createMockQueue(),
      // ... additional queues
    },
  };
});

// Now the mock object exists when vi.mock factory runs
vi.mock('../services/queues.server', () => mockQueuesModule);

// Safe to import after mocks are defined
import healthRouter from './health.routes';
```
See ADR-057 for additional patterns.
## Testing Role-Based Component Visibility

When testing components that render differently based on user roles:

### Pattern: Separate Test Cases by Role
```tsx
describe('for authenticated users', () => {
  beforeEach(() => {
    mockedUseAuth.mockReturnValue({
      authStatus: 'AUTHENTICATED',
      userProfile: createMockUserProfile({ role: 'user' }),
    });
  });

  it('renders user-accessible components', () => {
    render(<MyComponent />);
    expect(screen.getByTestId('user-component')).toBeInTheDocument();
    // Admin-only should NOT be present
    expect(screen.queryByTestId('admin-only')).not.toBeInTheDocument();
  });
});

describe('for admin users', () => {
  beforeEach(() => {
    mockedUseAuth.mockReturnValue({
      authStatus: 'AUTHENTICATED',
      userProfile: createMockUserProfile({ role: 'admin' }),
    });
  });

  it('renders admin-only components', () => {
    render(<MyComponent />);
    expect(screen.getByTestId('admin-only')).toBeInTheDocument();
  });
});
```
### Key Points

- Create separate `describe` blocks for each role
- Set up role-specific mocks in `beforeEach`
- Test both presence AND absence of role-gated components
- Use `screen.queryByTestId()` for elements that should NOT exist
## CSS Class Assertions After UI Refactors

After frontend style changes, update test assertions to match the new CSS classes.

### Handling Tailwind Class Changes

```typescript
// Before refactor
expect(selectedItem).toHaveClass('ring-2', 'ring-brand-primary');

// After refactor - update to new classes
expect(selectedItem).toHaveClass('border-brand-primary', 'bg-teal-50/50');
```
### Flexible Matching

For complex class combinations, consider partial matching:

```typescript
// Check for key classes, ignore utility classes
expect(element).toHaveClass('border-brand-primary');

// Or use regex for patterns
expect(element.className).toMatch(/dark:bg-teal-\d+/);
```
See ADR-057 for lessons learned from the test remediation effort.