# Tester and Testwriter Subagent Guide

This guide covers two related but distinct subagents for testing in the Flyer Crawler project:

- **tester**: Adversarial testing to find edge cases, race conditions, and vulnerabilities
- **testwriter**: Creating comprehensive test suites for features and fixes

## Quick Reference

| Aspect           | tester                                       | testwriter                                 |
| ---------------- | -------------------------------------------- | ------------------------------------------ |
| **Primary Use**  | Find bugs, security issues, edge cases       | Create test suites, improve coverage       |
| **Key Files**    | N/A (analysis-focused)                       | `*.test.ts`, `src/tests/utils/`            |
| **Key ADRs**     | ADR-010 (Testing), ADR-040 (Test Economics)  | ADR-010 (Testing), ADR-045 (Test Fixtures) |
| **Test Command** | `podman exec -it flyer-crawler-dev npm test` | Same                                       |
| **Test Stack**   | Vitest, Supertest, Testing Library           | Same                                       |
| **Delegate To**  | `testwriter` (write tests for findings)      | `coder` (fix failing tests)                |

## Understanding the Difference

| Aspect          | tester                          | testwriter                      |
| --------------- | ------------------------------- | ------------------------------- |
| **Purpose**     | Find bugs and weaknesses        | Create test coverage            |
| **Approach**    | Adversarial, exploratory        | Systematic, comprehensive       |
| **Output**      | Bug reports, security findings  | Test files, test utilities      |
| **When to Use** | Before release, security review | During development, refactoring |

## The tester Subagent

### When to Use

Use the **tester** subagent when you need to:

- Find edge cases that might cause failures
- Identify race conditions in async code
- Test for security vulnerabilities
- Stress-test APIs or database queries
- Validate error-handling paths
- Find memory leaks or performance issues

### What the tester Knows

The tester subagent understands:

- Common vulnerability patterns (SQL injection, XSS, CSRF)
- Race-condition scenarios in Node.js
- Edge cases in data validation
- Authentication and authorization bypasses
- BullMQ queue edge cases
- Database transaction isolation issues
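
Race conditions of the lost-update kind can often be surfaced with a small concurrency probe: fire the same non-atomic read-modify-write operation many times in parallel and check whether updates are lost. The sketch below is purely illustrative; `makeCounterStore` is a hypothetical stand-in for any shared state without locking (e.g. a shopping list row), not a project utility.

```typescript
// Illustrative concurrency probe: run a non-atomic read-modify-write
// concurrently and detect lost updates.

function makeCounterStore() {
  let value = 0;
  return {
    read: async () => value,
    write: async (v: number) => { value = v; },
    get: () => value,
  };
}

async function incrementNonAtomically(store: ReturnType<typeof makeCounterStore>) {
  const current = await store.read();          // read...
  await new Promise((r) => setTimeout(r, 1));  // ...yield, widening the race window...
  await store.write(current + 1);              // ...then write: the classic lost-update shape
}

async function probe(): Promise<number> {
  const store = makeCounterStore();
  // 10 concurrent increments; with a race present, the final value is below 10.
  await Promise.all(Array.from({ length: 10 }, () => incrementNonAtomically(store)));
  return store.get();
}
```

A tester-style finding would report that `probe()` finishes below 10; the matching regression test (written by the testwriter) asserts the fixed, atomic version always reaches 10.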

### Example Requests

**Finding edge cases:**

```
"Use the tester subagent to find edge cases in the flyer upload
endpoint. Consider file types, sizes, concurrent uploads, and
invalid data scenarios."
```

**Security testing:**

```
"Use the tester subagent to review the authentication flow for
security vulnerabilities, including JWT handling, session management,
and OAuth integration."
```

**Race condition analysis:**

```
"Use the tester subagent to identify potential race conditions in
the shopping list sharing feature where multiple users might modify
the same list simultaneously."
```

### Sample Output from tester

The tester subagent typically produces:

1. **Vulnerability Reports**
   - Issue description
   - Reproduction steps
   - Severity assessment
   - Recommended fix

2. **Edge Case Catalog**
   - Input combinations to test
   - Expected vs. actual behavior
   - Priority for fixing

3. **Test Scenarios**
   - Detailed test cases for the testwriter
   - Setup and teardown requirements
   - Assertions to verify

## The testwriter Subagent

### When to Use

Use the **testwriter** subagent when you need to:

- Write unit tests for new features
- Add integration tests for API endpoints
- Create end-to-end test scenarios
- Improve test coverage for existing code
- Write regression tests for bug fixes
- Create test utilities and factories

### What the testwriter Knows

The testwriter subagent understands:

- The project testing stack (Vitest, Testing Library, Supertest)
- Mock factory patterns (`src/tests/utils/mockFactories.ts`)
- Test helper utilities (`src/tests/utils/testHelpers.ts`)
- Database cleanup patterns
- Integration test setup with `globalSetup`
- Known testing issues documented in CLAUDE.md
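
The mock factory pattern referenced above can be sketched as follows. This is an illustration of the pattern only, not the contents of the project's actual `mockFactories.ts`; the `Store` fields and defaults are assumptions.

```typescript
// Pattern sketch only -- the real src/tests/utils/mockFactories.ts
// may use different fields and defaults.

interface Store {
  id: number;
  name: string;
  location_count: number;
}

let nextId = 1;

// Called from beforeEach() so generated IDs are deterministic per test.
function resetMockIds(): void {
  nextId = 1;
}

// Builds a Store with sensible defaults; callers override only the
// fields that matter for the assertion at hand.
function createMockStore(overrides: Partial<Store> = {}): Store {
  const id = nextId++;
  return { id, name: `Store ${id}`, location_count: 0, ...overrides };
}
```

The key property is determinism: after `resetMockIds()`, the same test always sees the same IDs, so snapshots and equality assertions stay stable.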

### Testing Framework Stack

| Tool                      | Version | Purpose           |
| ------------------------- | ------- | ----------------- |
| Vitest                    | 4.0.15  | Test runner       |
| @testing-library/react    | 16.3.0  | Component testing |
| @testing-library/jest-dom | 6.9.1   | DOM assertions    |
| supertest                 | 7.1.4   | API testing       |
| msw                       | 2.12.3  | Network mocking   |

### Test File Organization

```
src/
├── components/
│   └── *.test.tsx        # Component tests (colocated)
├── hooks/
│   └── *.test.ts         # Hook tests (colocated)
├── services/
│   └── *.test.ts         # Service tests (colocated)
├── routes/
│   └── *.test.ts         # Route handler tests (colocated)
└── tests/
    ├── integration/      # Integration tests
    └── e2e/              # End-to-end tests
```

### Example Requests

**Unit tests for a new feature:**

```
"Use the testwriter subagent to create comprehensive unit tests
for the new StoreSearchService in src/services/storeSearchService.ts.
Include edge cases for empty results, partial matches, and pagination."
```

**Integration tests for API:**

```
"Use the testwriter subagent to add integration tests for the
POST /api/flyers endpoint, covering successful uploads, validation
errors, authentication requirements, and file size limits."
```

**Regression test for bug fix:**

```
"Use the testwriter subagent to create a regression test that
verifies the fix for issue #123 where duplicate flyer items were
created when uploading certain PDFs."
```

### Test Patterns the testwriter Uses

#### Unit Test Pattern

```typescript
// src/services/storeSearchService.test.ts
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { createMockStore, resetMockIds } from '@/tests/utils/mockFactories';

describe('StoreSearchService', () => {
  beforeEach(() => {
    resetMockIds(); // Ensure deterministic IDs
    vi.clearAllMocks();
  });

  describe('searchByName', () => {
    it('returns matching stores when query matches', async () => {
      const mockStore = createMockStore({ name: 'Test Mart' });
      // ... test implementation
    });

    it('returns empty array when no matches found', async () => {
      // ... test implementation
    });

    it('handles special characters in search query', async () => {
      // ... test implementation
    });
  });
});
```

#### Integration Test Pattern

```typescript
// src/tests/integration/stores.integration.test.ts
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import supertest from 'supertest';
import { createAndLoginUser, cleanupDb } from '@/tests/utils/testHelpers';

describe('Stores API', () => {
  let request: ReturnType<typeof supertest>;
  let authToken: string;
  let testUserId: string;

  beforeAll(async () => {
    const app = (await import('../../../server')).default;
    request = supertest(app);
    const { token, userId } = await createAndLoginUser(request);
    authToken = token;
    testUserId = userId;
  });

  afterAll(async () => {
    await cleanupDb({ users: [testUserId] });
  });

  describe('GET /api/stores', () => {
    it('returns list of stores', async () => {
      const response = await request.get('/api/stores').set('Authorization', `Bearer ${authToken}`);

      expect(response.status).toBe(200);
      expect(response.body.data.stores).toBeInstanceOf(Array);
    });
  });
});
```

#### Component Test Pattern

```typescript
// src/components/StoreCard.test.tsx
import { describe, it, expect, vi } from 'vitest';
import userEvent from '@testing-library/user-event';
import { renderWithProviders, screen } from '@/tests/utils/renderWithProviders';
import { createMockStore } from '@/tests/utils/mockFactories';
import { StoreCard } from './StoreCard';

describe('StoreCard', () => {
  it('renders store name and location count', () => {
    const store = createMockStore({
      name: 'Test Store',
      location_count: 5,
    });

    renderWithProviders(<StoreCard store={store} />);

    expect(screen.getByText('Test Store')).toBeInTheDocument();
    expect(screen.getByText('5 locations')).toBeInTheDocument();
  });

  it('calls onSelect when clicked', async () => {
    const store = createMockStore();
    const handleSelect = vi.fn();

    renderWithProviders(<StoreCard store={store} onSelect={handleSelect} />);

    await userEvent.click(screen.getByText(store.name));

    expect(handleSelect).toHaveBeenCalledWith(store);
  });
});
```

## Test Execution Environment

### Critical Requirement

> **ALL tests MUST be executed inside the dev container (Linux environment).**

Tests that pass on Windows but fail on Linux are considered **broken tests**.
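
One way to make this rule self-enforcing is a small guard in a shared test setup file. The sketch below is a suggestion, not an existing project file; the function name and the idea of wiring it into Vitest setup are assumptions.

```typescript
// Hypothetical guard for a shared test setup file (not an existing
// project file): fail fast when the suite starts on a non-Linux host.

function assertLinuxPlatform(platform: string = process.platform): true {
  if (platform !== 'linux') {
    throw new Error(
      'Tests must run inside the dev container. ' +
        'Use: podman exec -it flyer-crawler-dev npm test'
    );
  }
  return true;
}
```

Calling `assertLinuxPlatform()` from a setup file turns a silently divergent Windows run into an immediate, explicit failure.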

### Running Tests

```bash
# From the Windows host - run inside the container
podman exec -it flyer-crawler-dev npm run test:unit
podman exec -it flyer-crawler-dev npm run test:integration

# Inside the dev container
npm run test:unit
npm run test:integration

# Run a specific test file
npm test -- --run src/services/storeService.test.ts
```

### Test Commands Reference

| Command                    | Description                           |
| -------------------------- | ------------------------------------- |
| `npm test`                 | All unit tests                        |
| `npm run test:unit`        | Unit tests only                       |
| `npm run test:integration` | Integration tests (requires DB/Redis) |
| `npm run test:coverage`    | Tests with coverage report            |

## Known Testing Issues

The testwriter subagent is aware of these documented issues:

### 1. Vitest globalSetup Context Isolation

Vitest's `globalSetup` runs in a separate Node.js context, so mocks and spies do NOT share instances with test files.

**Impact**: BullMQ worker-service mocks don't work in integration tests.

**Solution**: Use `.todo()` for affected tests, or create test-only API endpoints.

### 2. Cleanup Queue Timing

The cleanup worker may process jobs before tests can verify them.

**Solution**:

```typescript
const { cleanupQueue } = await import('../../services/queues.server');
await cleanupQueue.drain();
await cleanupQueue.pause();
// ... run test ...
await cleanupQueue.resume();
```

### 3. Cache Stale After Direct SQL

Direct database inserts bypass cache invalidation.

**Solution**:

```typescript
await cacheService.invalidateFlyers();
```

### 4. Unique Filenames Required

File-upload tests need unique filenames to avoid collisions.

**Solution**:

```typescript
const filename = `test-${Date.now()}-${Math.round(Math.random() * 1e9)}.jpg`;
```

## Test Coverage Guidelines

### When Writing Tests

1. **Unit Tests** (required for all new code):
   - Pure functions and utilities
   - React components
   - Custom hooks
   - Service methods
   - Repository methods

2. **Integration Tests** (required for API changes):
   - New API endpoints
   - Authentication flows
   - Middleware behavior

3. **E2E Tests** (for critical paths):
   - User registration/login
   - Flyer upload workflow
   - Admin operations

### Test Isolation

1. Reset mock IDs in `beforeEach()`
2. Use unique test data (timestamps, UUIDs)
3. Clean up after tests with `cleanupDb()`
4. Don't share state between tests
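
Rule 2 is worth a concrete sketch: combining a timestamp with a random suffix keeps parallel test files and reruns from colliding on unique database columns. The helper name below is illustrative, not an existing project utility.

```typescript
// Illustrative helper for rule 2 above: unique, human-readable test
// data that will not collide across parallel test files or reruns.

function uniqueTestEmail(prefix = 'test'): string {
  const suffix = `${Date.now()}-${Math.round(Math.random() * 1e9)}`;
  return `${prefix}-${suffix}@example.com`;
}
```

The same shape works for any unique column (usernames, store names, filenames), which is exactly the trick the "Unique Filenames Required" fix above relies on.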

## Combining tester and testwriter

A typical workflow for thorough testing:

1. **Development**: Write code with basic tests using `testwriter`
2. **Edge Cases**: Use `tester` to identify edge cases and vulnerabilities
3. **Coverage**: Use `testwriter` to add tests for the identified edge cases
4. **Review**: Use `code-reviewer` to verify test quality

### Example Combined Workflow

```
1. "Use testwriter to create initial tests for the new discount
   calculation feature"

2. "Use tester to find edge cases in the discount calculation -
   consider rounding errors, negative values, percentage limits,
   and currency precision"

3. "Use testwriter to add tests for the edge cases identified:
   - Rounding to 2 decimal places
   - Negative discount values
   - Discounts over 100%
   - Very small amounts (under $0.01)"
```

## Related Documentation

- [OVERVIEW.md](./OVERVIEW.md) - Subagent system overview
- [CODER-GUIDE.md](./CODER-GUIDE.md) - Working with the coder subagent
- [SECURITY-DEBUG-GUIDE.md](./SECURITY-DEBUG-GUIDE.md) - Security testing and code review
- [../development/TESTING.md](../development/TESTING.md) - Testing guide
- [../adr/0010-testing-strategy-and-standards.md](../adr/0010-testing-strategy-and-standards.md) - Testing ADR
- [../adr/0040-testing-economics-and-priorities.md](../adr/0040-testing-economics-and-priorities.md) - Testing priorities