Context: Early-career developer documenting the test strategy I actually run on my repos (Car-Match, CheeseMath, BasicServerSetup, AWS labs). No production on-call experience yet.
AI assist: ChatGPT helped me reorder notes; every tool listed below is in use today (or clearly labeled “pilot”).
Status: Snapshot, not perfection. Contract testing + accessibility automation still need work.

Reality snapshot

  • Unit/component tests: Jest/Vitest + Testing Library. Run locally (watch mode) and in CI on every PR; a minimal example follows this list.
  • Integration tests: Supertest + Dockerized Postgres/Mongo + LocalStack for AWS services. Run on PRs touching backend code.
  • End-to-end: Playwright smoke tests on Netlify deploy previews, Percy (pilot) for Gatsby visual diffs.
  • Contract tests: Pact/OpenAPI schema checks run manually before major refactors; automation is on the roadmap.
  • Observability: Test runs push results to GitHub Checks + Slack notifications. Failures block merges.
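
To make the snapshot concrete, here is roughly what the unit layer looks like. `addDays` is a hypothetical date-math utility, not code lifted from one of the repos:

```ts
// date-math.test.ts — hypothetical utility, shown to scope the unit layer
import { describe, expect, it } from 'vitest';

// Utility under test: shift an ISO date string by n days (UTC-safe).
function addDays(iso: string, n: number): string {
  const d = new Date(iso);
  d.setUTCDate(d.getUTCDate() + n);
  return d.toISOString().slice(0, 10);
}

describe('addDays', () => {
  it('crosses month boundaries', () => {
    expect(addDays('2024-01-30', 3)).toBe('2024-02-02');
  });

  it('handles leap years', () => {
    expect(addDays('2024-02-28', 1)).toBe('2024-02-29');
  });
});
```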

Pyramid breakdown

| Layer | Scope | Tools | Cadence | Status |
| --- | --- | --- | --- | --- |
| Unit | Pure functions, hooks | Jest, Vitest | Watch + PR | ✅ |
| Component | React UI, accessibility | Testing Library, Storybook test runner | PR + nightly | ✅ |
| Integration | API + DB, AWS mocks | Supertest, LocalStack, Docker Compose | PRs touching backend | ✅ |
| Contract | API request/response contracts | Pact, OpenAPI validators | Manual before breaking changes | 🧪 Pilot |
| End-to-end | User flows | Playwright, Cypress (legacy), Percy (visual) | Main merges + scheduled | ✅ / 🧪 |

Tooling details

  • Jest/Vitest: Cover utility modules (date math, data transforms), React hooks, and components. Hand-rolled mocks are being replaced with MSW where possible (first sketch after this list).
  • Testing Library: Queries by role/label to ensure accessibility. If a component is hard to test, it’s usually poorly structured.
  • Supertest + Docker Compose: Spins up Express + Postgres containers, seeds data, runs API tests, and tears everything down (second sketch below).
  • LocalStack: Emulates S3/DynamoDB/SNS for AWS labs. Lets me test IaC templates without hitting real AWS (saves $$); see the third sketch below.
  • Playwright: Automates login → CRUD → logout. Runs on Netlify deploy previews so I can review failures before shipping (fourth sketch below).
  • Percy (pilot): Visual snapshots for this Gatsby site. Still deciding if the cost is worth it.
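
First sketch: a component test where MSW intercepts the network call instead of mocking `fetch`. `UserGreeting` and `/api/user` are hypothetical; the pattern is the point.

```tsx
// UserGreeting.test.tsx — MSW v2 syntax; component and endpoint are made up
import { render, screen } from '@testing-library/react';
import { http, HttpResponse } from 'msw';
import { setupServer } from 'msw/node';
import { afterAll, afterEach, beforeAll, expect, it } from 'vitest';
import { UserGreeting } from './UserGreeting'; // hypothetical component

// Intercept at the network layer so the component's real fetch code runs.
const server = setupServer(
  http.get('/api/user', () => HttpResponse.json({ name: 'Ada' })),
);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());

it('greets the user returned by the API', async () => {
  render(<UserGreeting />);
  // findByRole throws if the heading never appears, so awaiting it doubles as the assertion.
  expect(await screen.findByRole('heading', { name: /ada/i })).toBeTruthy();
});
```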
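Second sketch: the integration layer, hitting the real Express app against the Dockerized Postgres. The import paths and seed helpers are placeholders for whatever the repo actually exposes.

```ts
// users.integration.test.ts — assumes docker compose has Postgres up and
// DATABASE_URL points at it; paths and helpers below are placeholders.
import request from 'supertest';
import { afterAll, beforeAll, expect, it } from 'vitest';
import { app } from '../src/app';                 // hypothetical Express app export
import { migrate, seed, teardown } from './db';   // hypothetical test helpers

beforeAll(async () => {
  await migrate();                                 // run real migrations, no mocks
  await seed({ users: [{ email: 'seed@example.com' }] });
});

afterAll(() => teardown());

it('creates a user and reads it back through the real DB', async () => {
  const created = await request(app)
    .post('/api/users')
    .send({ email: 'new@example.com' })
    .expect(201);

  const fetched = await request(app)
    .get(`/api/users/${created.body.id}`)
    .expect(200);

  expect(fetched.body.email).toBe('new@example.com');
});
```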
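Third sketch: pointing the AWS SDK v3 client at LocalStack's edge port. The bucket name is arbitrary; `forcePathStyle` is the one setting that's easy to forget.

```ts
// s3.localstack.test.ts — assumes LocalStack is listening on localhost:4566
import {
  CreateBucketCommand,
  GetObjectCommand,
  PutObjectCommand,
  S3Client,
} from '@aws-sdk/client-s3';
import { expect, it } from 'vitest';

const s3 = new S3Client({
  region: 'us-east-1',
  endpoint: 'http://localhost:4566',                             // LocalStack edge port
  forcePathStyle: true,                                          // required for LocalStack S3
  credentials: { accessKeyId: 'test', secretAccessKey: 'test' }, // dummy creds
});

it('round-trips an object through emulated S3', async () => {
  await s3.send(new CreateBucketCommand({ Bucket: 'lab-bucket' }));
  await s3.send(new PutObjectCommand({ Bucket: 'lab-bucket', Key: 'hello.txt', Body: 'hi' }));

  const res = await s3.send(new GetObjectCommand({ Bucket: 'lab-bucket', Key: 'hello.txt' }));
  expect(await res.Body?.transformToString()).toBe('hi');
});
```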
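Fourth sketch: the smoke flow as a Playwright spec. Selectors and routes are hypothetical, and `baseURL` is assumed to point at the Netlify deploy preview.

```ts
// smoke.spec.ts — login → CRUD → logout; selectors and routes are illustrative
import { expect, test } from '@playwright/test';

test('login, create an item, log out', async ({ page }) => {
  await page.goto('/login');
  await page.getByLabel('Email').fill('demo@example.com');
  await page.getByLabel('Password').fill('demo-password');
  await page.getByRole('button', { name: 'Log in' }).click();
  await expect(page).toHaveURL(/dashboard/);

  await page.getByRole('button', { name: 'New item' }).click();
  await page.getByLabel('Title').fill('Smoke test item');
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('Smoke test item')).toBeVisible();

  await page.getByRole('button', { name: 'Log out' }).click();
  await expect(page).toHaveURL(/login/);
});
```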

Reliability practices

  • Each test suite uses isolated data (per-worker DB, temporary DynamoDB tables).
  • Factories generate deterministic fixtures to avoid flaky assertions (sketch after this list).
  • Only end-to-end tests have retries (max 2) to handle occasional network hiccups.
  • Monthly “test hygiene” session: delete redundant tests, update snapshots, document new commands.
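
A minimal version of the factory + per-worker isolation ideas. The seeded PRNG (mulberry32) and helper names are illustrative, not lifted from the repos:

```ts
// test-fixtures.ts — deterministic factories + per-worker DB naming (illustrative)

// mulberry32: tiny seeded PRNG so every run produces identical fixtures.
function mulberry32(seed: number) {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const rand = mulberry32(42); // fixed seed → same fixtures every run, no flaky asserts

export function buildUser(overrides: Partial<{ id: string; email: string }> = {}) {
  const n = Math.floor(rand() * 100_000);
  return { id: `user-${n}`, email: `user${n}@example.com`, ...overrides };
}

// Per-worker DB isolation: suffix the database name with the test worker id
// so parallel workers never share rows.
export function workerDatabaseUrl(baseUrl: string): string {
  const worker = process.env.VITEST_WORKER_ID ?? process.env.JEST_WORKER_ID ?? '0';
  return `${baseUrl}_w${worker}`;
}
```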

Workflow

  1. Pre-commit: Lint + unit tests via Husky.
  2. Pull request: Unit, component, integration suites run in GitHub Actions.
  3. Deploy preview (Netlify/Render): Playwright smoke suite + optional Percy run.
  4. Main merge: Deploy + regression checks. (The nightly contract suite is on the roadmap, not live yet; see Known gaps.)
  5. Weekly: Manual contract tests (until automated) + accessibility spot checks.

Known gaps

  • Contract tests aren’t automated yet. I run them before schema changes, but a nightly job is on the roadmap.
  • Accessibility checks only run manually; I want axe in CI for components/pages (planned check sketched after this list).
  • Mobile end-to-end tests are limited; I still need to add Playwright mobile viewports.
  • Observability for Playwright runs is basic (GitHub logs). Would like better dashboards.
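
What the planned axe-in-CI check could look like, using @axe-core/playwright. It isn't wired up yet, so treat this as a sketch:

```ts
// a11y.spec.ts — planned accessibility gate (not in CI yet)
import AxeBuilder from '@axe-core/playwright';
import { expect, test } from '@playwright/test';

test('home page has no detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('/');
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // limit the scan to WCAG A/AA rules
    .analyze();
  expect(results.violations).toEqual([]);
});
```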

Failures that shaped this pyramid

  • Mocked everything, missed reality: Early tests mocked fetch/API too much, and bugs slipped through. Swapped to MSW and added integration tests with real DBs to catch schema issues.
  • Flaky E2E from shared data: Tests raced on the same DB rows. Fixed by seeding per-test DBs and resetting between runs.
  • Silent contract breaks: Frontend shipped a new field and the backend choked. Added OpenAPI schema validation and manual Pact runs before merges (minimal version sketched after this list).
  • Visual regressions: CSS tweak broke dark mode. Percy (pilot) caught it; I now run targeted visual snapshots for header/nav.
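
The minimal version of that schema check, using Ajv against a live response. My real runs go through Pact and the OpenAPI validators; the app import and route here are placeholders.

```ts
// contract.test.ts — response-shape check with Ajv (simplified stand-in for
// the Pact/OpenAPI tooling; app import and route are placeholders)
import Ajv from 'ajv';
import request from 'supertest';
import { expect, it } from 'vitest';
import { app } from '../src/app'; // hypothetical Express app export

const ajv = new Ajv();

// Mirror of the user schema in the OpenAPI spec: new optional fields are fine,
// but a required field going missing is exactly the "silent break" case above.
const userSchema = {
  type: 'object',
  required: ['id', 'email'],
  properties: {
    id: { type: 'string' },
    email: { type: 'string' },
  },
  additionalProperties: true,
};

it('GET /api/users/:id matches the published contract', async () => {
  const res = await request(app).get('/api/users/123').expect(200);
  const valid = ajv.validate(userSchema, res.body);
  expect(ajv.errors ?? []).toEqual([]); // surface validation errors in the test output
  expect(valid).toBe(true);
});
```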

How I explain this to interviewers

  • Scope clarity: I map each test type to the risk it mitigates (unit = logic, integration = wiring, e2e = user trust).
  • Automation philosophy: Start small (unit/comp), add integration where it hurts, reserve e2e for top flows.
  • Honesty: No production on-call; my testing habits come from student projects and labs. I state the gaps and the roadmap.
  • Tool choices: GitHub Actions + Playwright + MSW because they’re fast, cheap, and easy to share with classmates.

Next experiments

  • Add axe to CI for accessibility checks on key pages.
  • Automate contract tests nightly with PactFlow or a lightweight schema diff step.
  • Run Playwright in mobile viewports + low-bandwidth mode to simulate real users (config sketch below).
  • Publish a “testing starter” template repo with MSW + Playwright + contract example.
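
What the mobile-viewport experiment could look like in playwright.config.ts. Device names come from Playwright's built-in registry; the preview URL env var is a placeholder. This is also where the retries-only-for-E2E rule lives:

```ts
// playwright.config.ts — planned mobile projects + the E2E-only retry rule
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  retries: 2, // E2E only: absorbs the occasional network hiccup, per the reliability notes
  use: { baseURL: process.env.DEPLOY_PREVIEW_URL }, // hypothetical env var set by CI
  projects: [
    { name: 'desktop-chrome', use: { ...devices['Desktop Chrome'] } },
    { name: 'mobile-android', use: { ...devices['Pixel 5'] } },
    { name: 'mobile-ios', use: { ...devices['iPhone 13'] } },
  ],
});
```

Playwright has no built-in bandwidth flag, so the low-bandwidth part would likely need a Chromium CDP session for network throttling; I haven't tried that yet.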

Links

  • Testing templates: https://github.com/BradleyMatera/testing-templates
  • Example suites: Car-Match (tests/), BasicServerSetup (postman/), CheeseMath (__tests__/).
  • Prompt logs + retros: notes/testing-journal.md
